Implementing cloud for the enterprise
Understanding the architecture that makes it work
By Winston Damarillo
Editor’s Note: Cloud computing has the potential to change the configuration of automation systems in a number of areas. This article describes the fundamentals of cloud computing to provide a basic understanding. As you read this article, consider how this might be applied to improve your operations.
Amazon developed Amazon Web Services (AWS), a collection of remote computing services (web services), in 2002 with the objectives of lowering the cost of experimentation, shortening time to market, and sponsoring innovation by eliminating low-level system administration tasks. That launch of cloud-based services, which included storage, computation, and human intelligence, was followed by an elastic cloud computing solution in 2006, making AWS the web service of choice for many companies wanting to jump into cloud computing.
Amazon’s Elastic Compute Cloud (EC2), in which customers pay for compute resources by the hour, and Simple Storage Service (S3), for which customers pay based on storage capacity, were the first widely accessible cloud computing infrastructures and the first to have widespread impact and adoption with small to medium businesses and even Wall Street. “NASDAQ stores many terabytes of NASDAQ, NYSE, and Amex data in Amazon’s storage cloud,” according to Claude Courbois, associate vice president of product development at NASDAQ. “NASDAQ adds 30 gigabytes to 80 gigabytes of data every day to the cloud, about 300,000 flat files each representing 10 minutes worth of trading activity on a stock.”
An industry has since grown around EC2, and competitors for public cloud infrastructure started appearing in 2008 with Rackspace’s Mosso (February 2008), followed by the launch of Terremark’s cloud (June 2008), AT&T’s cloud (August 2008), and IBM’s AMI launch on EC2 and Savvis’ cloud (February 2009). More recently, many enterprises have expressed the need for a software infrastructure that provides EC2-like services on their own infrastructure. Thus, Eucalyptus (Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems) was launched and has generated a lot of interest in the last year.
During the same period, Amazon has grown its suite of web services to include content delivery, database, e-commerce, messaging, monitoring, networking, billing, and support on top of its initial product service offerings.
AWS provided a practical implementation model of cloud computing infrastructure and published the APIs that enable integration. This provided a pathway for private clouds on private infrastructure to support private platforms.
Benefits of cloud implementation
According to the Gartner Group’s IT spending report for 2010, cloud services are seen as a key driver for the overall shift from significant spending on the acquisition of technology to a pay-as-you-go model.
The report further stresses the importance of adopting the right foundational infrastructure for virtualization in 2010. Despite the relative infancy of cloud computing usage in the enterprise, the conditions are ripe for a long-term, service-driven infrastructure.
Private cloud computing, as compared to its public counterpart, treats the IT organization as a vendor to its users. Doing so effectively emulates the public model where services like real-time consumption of computing resources, elastic scaling, use-based pricing, and self-service are provided. This practice is an integral part of increasing utilization levels within existing private data centers. Further, private clouds allow opportunities for companies to assure their own regulatory compliance, data physical location needs, or other control requirements.
A platform for innovation
Cloud computing, once adopted by the enterprise, will serve as an ideal platform for innovation. Due to the game-changing nature of the technology, it brings into question the way the IT organization interacts with its end users. Deploying applications, for example, would traditionally involve multiple units within the IT departments, from infrastructure personnel setting up the hardware to system administrators configuring the operating systems and databases. With a cloud computing platform in place, an application developer can deploy the same application instantly. The application is highly available, scalable, and accessible via an intuitive console.
Cloud computing starts with virtualization as the minimum prerequisite to provide infrastructure as a service. A comprehensive cloud computing platform must provide all the necessary elements to deliver the scalable and fault-tolerant environment necessary for an infrastructure, as well as the productivity and standardization enhancements of a platform. There is now a repeatable pattern of execution and a framework to describe cloud computing as an architecture, which includes infrastructure as a service and platform as a service.
The main “actors” of cloud architecture include:
Resource servers are machines that run the virtualized resources. These are typically high capacity servers capable of hosting 10 virtual machines or more. Resource servers are combined into an aggregate pool of virtual machine resources. In some cases, virtualized resource “services” are combined into the overall resource pools. The most common example of this is a high capacity load balancer appliance. To make the load balancer part of the cloud architecture, a software representation of its capabilities is deployed into the resource pool as an accessible “resource service.”
Cluster controllers, also known as Virtual Infrastructure Managers, act as overall managers of virtual resources. They keep track of the health of virtual machines, deployment locations, the resource capacities, load factors, and a model for handling a failure condition. The data compiled by the cluster controllers are aggregated to provide visibility for resource utilization and systems management.
The configuration manager (CM) configures the virtual machines with all the necessary software packages so they can be consumed inside the cloud. CMs eliminate the tedious work most IT implementers do in a deployment environment. CMs must be fast and configurable to enable rapid deployment of applications.
The control panel consolidates all the elements of the cloud, compiling data from across the cloud infrastructure into a single view.
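The relationships among these actors can be sketched in simplified form. The class and method names below are illustrative only, not part of any real cloud product’s API:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    vm_id: str
    healthy: bool = True

@dataclass
class ResourceServer:
    """A high capacity host contributing slots to the resource pool."""
    host: str
    capacity: int                       # how many VMs this host can run
    vms: list = field(default_factory=list)

    def has_room(self):
        return len(self.vms) < self.capacity

class ClusterController:
    """Virtual Infrastructure Manager: tracks placement and utilization."""
    def __init__(self, servers):
        self.servers = servers

    def deploy(self, vm):
        # Place the VM on the first server with spare capacity.
        for server in self.servers:
            if server.has_room():
                server.vms.append(vm)
                return server.host
        raise RuntimeError("resource pool exhausted")

    def utilization(self):
        used = sum(len(s.vms) for s in self.servers)
        total = sum(s.capacity for s in self.servers)
        return used / total

class ConfigurationManager:
    """Installs the software packages a VM needs before it joins the cloud."""
    def configure(self, vm, packages):
        vm.packages = list(packages)    # stand-in for real package installation
        return vm

class ControlPanel:
    """Consolidates data from the other actors into one view."""
    def __init__(self, controller):
        self.controller = controller

    def summary(self):
        return {"utilization": self.controller.utilization(),
                "servers": len(self.controller.servers)}
```

A deployment would then flow through these actors in order: the configuration manager prepares a VM image, the cluster controller places it on a resource server, and the control panel reports the resulting pool utilization.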
Reliability in cloud architecture takes on a whole new meaning. Virtualization of cloud resources enables implementers to deploy IT architecture in a whole new way, one that assumes components will fail. This is called the Assume Failure Model.
Availability zones: Most cloud implementers cluster clouds into different profiles of reliability. This concept models fault tolerance under different circumstances of failure of the underlying IT infrastructure. Availability zone models allow deployment planners to structure their overall resource pool in a manner that minimizes the possibility of total failure.
Auto distribution of installed components: A good cloud architecture distributes the elements of its execution framework across different resource zones automatically. Cloud deployment environments are typically deployed with redundant resources.
Self healing enables the cloud environment to tolerate failures. When a component fails, a hot backup instance of the application is ready to take over without disruption.
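The zone-placement idea behind these three points can be sketched briefly. This is a minimal illustration, assuming hypothetical zone names; real placement engines weigh capacity and load as well:

```python
import itertools

def place_replicas(app, zones, copies=2):
    """Spread an application's instances across availability zones so
    that no single zone failure takes out every copy (auto distribution)."""
    if copies > len(zones):
        raise ValueError("need at least one distinct zone per copy")
    return [(app, zone) for zone, _ in zip(itertools.cycle(zones), range(copies))]

def surviving_instances(placements, failed_zone):
    """Self-healing check: instances outside the failed zone keep serving,
    so a hot backup in another zone can take over without disruption."""
    return [(app, zone) for app, zone in placements if zone != failed_zone]
```

With two copies spread over zones "a" and "b", losing zone "a" still leaves the copy in zone "b" serving traffic, which is exactly the total-failure scenario availability zones are meant to avoid.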
Cloud environments are “aware” of their overall resource pools. Effective cloud architecture allows for a design that facilitates auto-elasticity. This is the ability of a cloud environment to expand or contract as needed.
Load balancing or scaling is moving processes among servers to support a changing processing load.
Provisioning and de-provisioning of compute resources ensure the cloud environment stays highly utilized.
System configuration is the process of setting up hardware devices and assigning resources to them so they work together, without human intervention at runtime.
Triggers, fired by an increase or reduction in traffic or by other metrics such as latency, can also initiate auto scaling.
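A trigger-driven elasticity rule can be sketched in a few lines. The thresholds and metric names here are purely illustrative assumptions, not values from any particular cloud platform:

```python
def autoscale(current_instances, metrics,
              min_instances=1, max_instances=10,
              high_latency_ms=500, low_latency_ms=100):
    """Decide whether to provision or de-provision one instance based
    on a latency trigger, within fixed capacity bounds."""
    latency = metrics["latency_ms"]
    if latency > high_latency_ms and current_instances < max_instances:
        return current_instances + 1    # expand: demand is outpacing capacity
    if latency < low_latency_ms and current_instances > min_instances:
        return current_instances - 1    # contract: keep utilization high
    return current_instances            # within the band: no change
```

In practice such a rule would run periodically against aggregated monitoring data, expanding the pool when latency breaches the upper threshold and releasing instances when traffic falls away.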
Other important features and benefits for a well designed cloud architecture:
- System administration
High-density system view allows system administrators to quickly see the system status, find problem spots, and reduce the need for repetitive and error-prone tasks.
Self-service interfaces for developers allow application deployment and management without requiring any intervention from system administrators.
- Application management
Automatically configured backups: Application code and data are automatically backed up without human intervention.
Billing and reporting tools monitor usage in real time and can even integrate with third-party modules.
Public, private, hybrid clouds
Role of standards: Clouds are expected to be utility platforms. The platform needs to be as broad as possible. Supporting the AWS standards allows the private implementer to leverage work done by public clouds. For example, this has sped support of alternate virtualization models, operating systems, and integration to management platforms.
The IT implementation model is most flexible when the appropriate standards are implemented. Enterprises can deploy cloud inside their enterprise, trusted private networks, and leverage a public cloud as needed.
Private clouds are cloud environments that reside inside the firewall. For most companies, this is the safest implementation as it leverages existing security infrastructure.
Hybrid clouds combine private clouds with public clouds where appropriate for spike loads or less sensitive data. Implementations that use the Amazon AWS API simplify this implementation model in several ways:
- SOA breaks the architecture into services that can individually inhabit public or private clouds more easily than monolithic architectures can.
- Security can be helped through audit trails, which are easier to maintain with centralized data management in clouds than with distributed implementations on individually dispersed machines. Distributed security techniques can enforce access and other rights among public and private members of hybrid clouds.
- A consolidated management console creates the ability to manage the cloud and its components, wherever they physically reside, public or private, from a single interface that follows the one-to-many relationship that many enterprises want.
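The hybrid placement decision described above reduces to a simple policy: sensitive workloads stay behind the firewall, and less sensitive workloads burst to the public cloud under spike load. A minimal sketch, with an assumed burst threshold:

```python
def choose_cloud(workload, private_capacity_used, burst_threshold=0.8):
    """Hybrid placement sketch: route a workload to the private or
    public cloud.  `burst_threshold` (fraction of private capacity
    in use) is an illustrative assumption."""
    if workload["sensitive"]:
        return "private"                # control and compliance requirements
    if private_capacity_used > burst_threshold:
        return "public"                 # burst spike load to the public cloud
    return "private"                    # default: keep private pool utilized
```

A real implementation would layer the distributed security and audit-trail concerns above onto this routing decision; the sketch only captures the sensitivity and spike-load criteria from the text.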
Early use cases
Highly elastic web deployments allow web-based solutions to scale rapidly with the fluctuating demands of a web infrastructure. This is best seen in very “spiky” applications such as tax preparation websites or social game platforms that may have seasonal or daily time-based fluctuations. By using the cloud infrastructure, these implementations can be deployed at their lowest capacity requirements plus a little headroom and then scale as needed to continually minimize costs. IT operations can then schedule the scale up or scale down of the platform based on known patterns. IT rules can also be injected into the system so that when certain triggers are hit outside of expectations, the platform executes an auto scale up or scale down to compensate. These facilities can dramatically reduce the cost of deploying very high volume websites.
IT development platforms traditionally are very difficult to manage inside the enterprise and tend to be very resource inefficient. The main strength of a cloud environment is the ability to instantiate a development platform on the fly and then re-allocate the resources as needed. Having control over a cloud environment will enable IT architecture groups to enforce standardization over the development environments inside their enterprise or at their outsourcing providers.
Disaster recovery (DR) systems are among the most underutilized, but also the most critical, parts of IT infrastructure. With a cloud, a DR environment can be modeled in software and deployed rapidly, without dedicating hardware that would otherwise sit idle. Enterprises that utilize cloud environments have access to virtually unlimited DR capacity with even more reliability than “fixed” standby units. The flexibility of cloud environments allows DR planners to test the DR environment regularly at very low cost.
The promise for maximizing the benefits of the cloud has led to a mix of proprietary and open solutions covering all components of the cloud ecosystem. There is a need to have a standards-based set of building blocks that allows flexibility and efficiency as follows.
Existing cloud vendors often offer a “complete” solution, but in turn subject customers to proprietary technologies that require a customer to buy all the requisite licenses offered as separate products. VMware’s flagship vSphere Enterprise cloud computing suite can only support and control VMware-powered virtual machines, making it difficult for a customer to also use other rapidly maturing technologies like KVM and Xen.
Eucalyptus, an open source cloud computing environment provider, has based its solution on the same Amazon API in order to ease the migration and adoption concerns of existing AWS clients.
As various cloud computing solutions evolve, so do the underlying technologies that support them. Morphlabs’ mCloud currently provides support using Eucalyptus as well as an alternative, Open Nebula. At the virtualization level, the open source solution Xen is the most widely adopted among users, but KVM and VMware are also supported by mCloud in addition to Xen.
To scale, there is a need for a solution that can manage a large number of systems or virtual machines without significant manual labor. Most companies prefer to use Puppet, which is described as “a model-driven open source framework designed to automate the building and configuration of servers, implementing normal administrative tasks, such as adding users, installing packages, and updating server configurations, on any number of systems, using essentially the same code, even if those systems are running completely different operating systems.” Puppet also allows launching services on EC2 and moving them to one’s own machines without changes to the overall system structure.
Lastly, a high-density system administration interface has recently been rolled out by companies such as Morphlabs on top of built-in monitoring tools like Nagios and Icinga. This interface provides a bird’s-eye view of the entire system, giving access to information such as CPU, memory, and storage utilization by virtual machine. Through this feature, CPU and RAM hotspots are immediately identified and, if necessary, the requisite adjustments are made to the application or the in-use resources.
ABOUT THE AUTHOR
Winston Damarillo (email@example.com) is chief executive officer of Morphlabs.