The power of Infrastructure as a Service technology is one of the fertilizers helping the DevOps movement bloom. Since the creation and configuration of infrastructure can be codified, the imperative to version everything ensures that all deploys are consistent across environments from the (virtual) hardware level up, and the actions taken with each version are recorded. While this configuration history is useful, we gain a much more tangible value with powerful automation tools; a deployment orchestrator like IBM Rational UrbanCode Deploy can turn the collected processes into a one (or zero) click operation. By combining the scripts and tools already in your repertoire with its own powerful object definition and versioning capabilities (and plugins for everything under the sun), a team essentially creates its own platform service, customized for the potentially complex reality of its unique software project.
In UrbanCode Deploy’s management schema, an Application’s definition lists every Component it contains. These components have their own processes for starting, stopping, and managing themselves, so the application’s processes can invoke these component processes to have an environment bootstrap itself. For this example, we are going to work with the AWS EC2 system, the ever-familiar and accessible cloud system from Amazon, by creating a component to represent an EC2 instance with a startup process. A plugin exists for UrbanCode Deploy that gives us the ability to work with EC2 machines, and by setting up our processes right we can manage the component effectively within the application lifecycles.
An EC2 instance is launched by referencing the ID of an Amazon Machine Image (AMI), either one of the many publicly available offerings or an image created by the user. The startup and shutdown processes are going to be similar for any EC2 image, so we have created a Component Template with these processes defined; proper use of environment and component properties gives this the flexibility to be applied to any machine configuration we want.
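That flexibility comes from UrbanCode Deploy’s property substitution: step fields refer to values like `${p:component/ami.id}`, which are resolved at execution time from the component, environment, or resource that the process runs against. Here is a minimal sketch of that resolution behavior in Python; the property names are illustrative, not the plugin’s actual ones:

```python
import re

def resolve(text, properties):
    """Substitute UCD-style ${p:...} references from a property dict.
    Unknown references are left intact, mirroring how an unresolved
    property shows up verbatim in a process log."""
    def lookup(match):
        key = match.group(1)
        return str(properties.get(key, match.group(0)))
    return re.sub(r"\$\{p:([^}]+)\}", lookup, text)

# Hypothetical properties for one child component of the template.
props = {
    "component/ami.id": "ami-0abcdef1234567890",
    "component/keypair": "deploy-key",
}

step_arg = "Launch ${p:component/ami.id} with keypair ${p:component/keypair}"
print(resolve(step_arg, props))
# → Launch ami-0abcdef1234567890 with keypair deploy-key
```

Because the template only ever references property names, each child component (or each environment) can supply its own values without touching the process definition.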
An Amazon EC2 component has a couple of files that every implementation will need: an AWS SDK jar file required by every EC2 step, and a private key file matching one of the keys set up in your AWS account. Our example uses a simple file system to hold these, but you may wish to connect this to one of your existing repositories. The properties section defines Component Template properties that are statically accessible to all children; we have used it to define the AWS jar file’s name.
The Component Property Definitions section creates properties that are defined by the children. You may set default values here if desired, as we did with the access keys.
Lastly, we will create some Environment Properties. When a component is added to an environment, the environment properties defined by a component flow upwards, so that information needed by other components can be accessed. The name of the UCD Agent that will be created by this component and the host name of the virtual machine would be useful information, so we will define them here and populate them during our startup process.
Your organization may have a specific image used as a baseline; we will be using a Red Hat Enterprise Linux image. Our startup process first downloads the private key and jar file artifacts into a working directory on an existing UCD Agent. The AmazonEC2 steps are found under the Cloud folder after the plugin is installed, so we can drag and drop those steps into our process.
The Launch_Instance step is where the heavy lifting is done. EC2 provides Access Keys and Secret Keys, hash values that are used to grant API permissions to tools interacting with it. The AMI ID is a constant identifier for a specific image in the AWS repository. The keypair specifies the name of a keypair that is part of your AWS account, associated with the private key we downloaded in the first step (a versioned part of this component). These are set to component properties (we’ll define those in a minute) so that every component that inherits this template can define its own values for its implementation. This configuration does not define a fluid instance count or type, or specify a security group and availability zone, but those can just as easily be given variables or values based on your needs. We are going to use an AMI that is listed in the default region (us-east-1, on Amazon’s Eastern Seaboard), but if you are using an AMI that exists in a different region, be sure to specify it or the process will fail.
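In SDK terms, the Launch_Instance step amounts to an EC2 RunInstances call assembled from those component properties. The sketch below only builds the request parameters (property names are illustrative); it is not the plugin’s implementation, just a picture of what the step submits:

```python
def build_launch_request(props):
    """Map component properties onto EC2 RunInstances parameters.
    Mirrors the template's fixed single-instance configuration; security
    group and availability zone are omitted, as in the process above."""
    return {
        "ImageId": props["ami.id"],          # the AMI ID component property
        "KeyName": props["keypair"],         # keypair name in the AWS account
        "MinCount": 1,                       # fixed instance count
        "MaxCount": 1,
        "InstanceType": props.get("instance.type", "m1.small"),
    }

# Hypothetical values a child component might supply.
props = {"ami.id": "ami-0abcdef1234567890", "keypair": "deploy-key"}
request = build_launch_request(props)
print(request["ImageId"])
# With real credentials this request would be submitted roughly as:
#   import boto3
#   boto3.client("ec2", region_name="us-east-1").run_instances(**request)
```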
This step registers as complete when the image is finished initializing. The next steps will store the instance ID of the new virtual server, as well as the public DNS. Often little is documented in terms of what properties certain steps create (particularly with plugins), and one might assume that a Post Process Script must be created to jump through hoops and scrape information from the output log. Fortunately, there is a far simpler solution: the Component Request History lists every attempted process execution, and contains both a log of every step and a list of step Properties. This is how we learned that our EC2 initialization step “Launch_Instance” creates a step property called “instances” that contains an array of all instance IDs started, and a “dns” property that holds the public DNS name. Thus, the last two steps in the process store that information in Component Environment Properties with the “Set Component Environment Property” function.
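Those two “Set Component Environment Property” steps are simply copying values out of the step-property map. A sketch of that extraction, assuming the “instances” and “dns” property names we found in the request history (the delimiter and output property names here are assumptions for illustration):

```python
def extract_environment_properties(step_properties):
    """Pull the values the startup process stores as Component
    Environment Properties. Launch_Instance reports 'instances'
    (the started instance IDs) and 'dns' (the public DNS name)."""
    # 'instances' arrives as a delimited string in the step output;
    # the exact delimiter is plugin-specific, a comma is assumed here.
    instance_ids = [i for i in step_properties["instances"].split(",") if i]
    return {
        "instance.id": instance_ids[0],      # single-instance configuration
        "host.name": step_properties["dns"],
    }

# Simulated output of a successful Launch_Instance step.
step_props = {"instances": "i-0123456789abcdef0",
              "dns": "ec2-54-0-0-1.compute-1.amazonaws.com"}
env_props = extract_environment_properties(step_props)
print(env_props["instance.id"], env_props["host.name"])
```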
By referring to the environment and components by their id properties, this process remains loosely coupled. The environment properties will be accessible to everything in an environment that contains these components; we will need these to shut down or work with the machines.
Now that the template is done, we can make a component to represent our specific server. This is the configuration page for our component; note that it has fields for all the definitions in the template. The property definitions we have created already will work in their default states.
We can now add this component to an application and assign it to a resource within an environment. The variables used in this example were for a Linux machine, so we have an agent on a RHEL server to perform the first steps.
Our application process for setting up our cloud environment will look extremely simple right now. We need only invoke the EC2-RHEL-AMI component’s startup process.
And now for the fun part! Pick an environment and give the whole thing a go!
When we start this Application process, it has the component do the steps defined above, resulting in an online image on the Amazon cloud and a few environment variables for future use. The component is listed in the inventory, so we know where these images are while they are alive (they will be removed with a shutdown process).
We can click into the Application Process Execution and get to the component deploy page from there to look at logs and input/output data. A Properties tab shows all the step properties; this is the page to check after a test run to see if any variables you need are being created by a plugin step without explicitly telling you.
If we look at our AWS control panel, we can see that the instance we requested is indeed up and running.
This is only a simple initial process. By referencing the environment properties we set earlier, we can use the server as much as needed, and create a process like this one to take it offline:
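The shutdown counterpart reads the stored environment properties back and terminates the instance. Sketched in the same illustrative terms as the startup sketch (a real process would use the plugin’s terminate step rather than this code):

```python
def build_terminate_request(env_properties):
    """Build an EC2 TerminateInstances request from the environment
    properties the startup process recorded."""
    return {"InstanceIds": [env_properties["instance.id"]]}

# The environment properties populated during startup.
env_props = {"instance.id": "i-0123456789abcdef0",
             "host.name": "ec2-54-0-0-1.compute-1.amazonaws.com"}
request = build_terminate_request(env_props)
print(request)
# With credentials this would be submitted roughly as:
#   import boto3
#   boto3.client("ec2", region_name="us-east-1").terminate_instances(**request)
```

After the terminate step succeeds, the process should also remove the component from the environment’s inventory and clear the environment properties, so the inventory continues to reflect only machines that are actually alive.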
By getting creative with properties and creating more complex workflows, you can utilize the power of the Amazon cloud with versioning, automation, and a paper trail, and have it directly integrated with all of your other deploy processes.