Automate vSphere Virtual Machine and OVA Appliance Deployments Using Ansible

If you have read any of my posts, you will quickly discover that I use Ansible a lot for deploying virtual machines and VMware OVA appliances on vSphere.

Ansible support for VMware is constantly growing, and in recent versions it has become an essential part of my development process for standing up infrastructure that is quick to deploy or tear down. The key benefit of using Ansible for my deployments is that the process is repeatable and consistent.

In this post, I am going to cover some of the core Ansible modules that I use to perform these deployments and provide various use case examples. Once you understand these modules and lay down the groundwork, you’ll be deploying virtual machines or appliances in mere minutes, simply by editing a few configuration files.

If you are not too familiar with what Ansible is, or what it’s used for, then I recommend that you check out the official documentation. You can also get a brief overview of what Ansible is at cloudacademy.com, and there is a wealth of online training and other material available to get you up to speed.

All examples used in this post, including a fully working Ansible solution, can be found in my Ansible-VMware GitHub repository.

Pre-requisites

You will need the following packages installed (via pip) on your Ansible control machine:

  • ansible>=2.8.0
  • pyvmomi>=6.7.1.2018.12

I have also provided a requirements.txt file that you can install using pip:

pip install -r requirements.txt

Deploying Virtual Machines

The most basic task that you are ever likely to perform in any vSphere environment is the deployment of a virtual machine. In this section, I am going to show you how Ansible can make spinning up dozens of virtual machines a breeze.

Ansible provides the core module vmware_guest that can be used to create a new virtual machine or clone from an existing virtual machine template.

In these examples, I am going to demonstrate how you can create new virtual machines or clone an existing virtual machine from a template for both Windows and Linux, perform customization and configure hardware and advanced settings.

Create a New Virtual Machine (no template)

This is an example play that will create a virtual machine with no OS installed (not from a template). When the virtual machine is powered on, it will automatically try to PXE boot from the network, which can be useful in deployment pipelines where VMs are bootstrapped this way.

I have a simple play called ‘vmware_create_virtual_machine.yml‘, which includes the tasks to create a virtual machine in VMware vSphere.

---
- hosts: local
  become: no
  gather_facts: False
  vars:
  tasks:
  - name: Create a New Virtual Machine
    vmware_guest:
      hostname: <vcenter hostname>
      username: <vcenter username>
      password: <vcenter password>
      validate_certs: no
      name: Linux_VM
      datacenter: SG1
      cluster: SG1-CLS-MGMT-01
      folder: /SG1/vm
      guest_id: centos7_64Guest
      disk:
      - size_gb: 20
        type: thin
        datastore: vsanDatastore
      hardware:
        memory_mb: 2048
        num_cpus: 1
        scsi: paravirtual
      networks:
      - name: Management
        device_type: vmxnet3
      state: poweredon
    delegate_to: localhost

Many of the properties should be self-explanatory: we’re creating a virtual machine called Linux_VM with 1 vCPU, 2 GB of memory, a 20 GB thin-provisioned hard disk, and so on.

Because we are creating a new virtual machine, the guest_id needs to be provided, which sets the Guest Operating system profile on the VM. You can get the full list of supported guest_ids from the VMware developer support page.
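
For reference, a few commonly used guest_id values are shown below; double-check them against the VMware list for your vSphere version before relying on them:

guest_id: centos7_64Guest        # CentOS 7 (64-bit), as used above
guest_id: rhel7_64Guest          # Red Hat Enterprise Linux 7 (64-bit)
guest_id: windows9Server64Guest  # Windows Server 2016 (64-bit)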

To run the playbook, I invoke the ansible-playbook command.

ansible-playbook vmware_create_virtual_machine.yml
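
If the play completes successfully, the play recap at the end of the output should look something like the following (the exact columns vary between Ansible versions):

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=1    unreachable=0    failed=0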

You can see that the execution was successful (ok=1) and that a change was made (changed=1), which means the virtual machine was created. If we take a look at vCenter, we can see that the virtual machine now exists with the specified configuration.

Update Virtual Machine

The great thing about Ansible is that if you were to run this play again, it would not try to create another VM. Instead, it will simply exit with a status of OK if it discovers that the specified virtual machine already exists and is powered on.

But what if we make some changes to the configuration that we want to apply to the virtual machine? Ansible will only apply these changes if the ‘state‘ parameter is set to ‘present‘ in the play. Also, if you are changing the hardware configuration, the virtual machine may first need to be powered off (Ansible will display an error if this is required).

So let’s assume that the virtual machine is powered off and we want to enable CPU and memory hot add support. We simply add these configurations under the hardware section:
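
For example, the hardware section of the task can be extended with the hotadd_cpu and hotadd_memory parameters (only the relevant keys are shown; the rest of the task is unchanged):

      hardware:
        memory_mb: 2048
        num_cpus: 1
        scsi: paravirtual
        hotadd_cpu: true
        hotadd_memory: true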

Now if we run the play again, we can see that a reconfigure task is performed on the VM in vCenter Server.

Make sure to check the documentation for all the parameters that can be configured.

Automating vRealize Suite Lifecycle Manager with Ansible – Part 1: Setup and Deploy vIDM and LCM

For many years I have been tasked with building vRealize Automation environments, and one of the biggest pain points has been the deployment and preparation of the IaaS machines. This has usually required special preparation of a Windows template and several scripts to get everything configured so that vRA plays nice. It is usually an error-prone process, especially for larger enterprise deployments. To tackle this problem, VMware released vRealize Suite Lifecycle Manager, which is on version 2.1 as of this writing.

I decided it was time to try this product and see if it lives up to the claims. I was also more interested in the API functionality, and as with all things automation, I typically turn to Ansible. I wasn’t too surprised to discover that, although the deployment is ‘automated’ (depending on your interpretation), there are a number of manual steps still required. These include ensuring that the IaaS machines and database are already deployed and properly configured. The vRLCM Create Environment process also provides validation and pre-checks, along with scripts that can be used to prepare the machines.

With the preparation of these playbooks, I set out to automate the following:

  • Deployment of a single VMware vIDM appliance;
  • Deployment and initial configuration of a single vRealize Suite Lifecycle Manager appliance;
  • Deployment of vRealize Automation IaaS servers (Windows VMs), in multiple deployment scenarios;
  • Creation of a vRealize Automation environment through LCM.

This post will focus on deploying vRSLCM and vIDM with a follow-up post on the vRA deployments.

However, in my attempts to make this a set of one-click processes, I wasn’t quite able to get that far (though I got pretty close). This was mainly due to some limitations with the vRSLCM API (certificates can’t be automated, for example). I will discuss these limitations throughout this post, and if I find workarounds, I’ll provide an update.

I should also point out that this is quite experimental, and although I have done all that I can to make these workflows idempotent, the limitations of the LCM API have made that quite difficult. These playbooks are best used as a one-time-only deployment, at least for LCM itself.

Environment Preparation

In my environment, I have a dedicated virtual machine, running CentOS 7.x, that I develop and run my playbooks on (you may call this the Ansible control machine).

Environment Overview

  • OS: CentOS 7.x
  • Ansible: 2.8.1 (2.8 is the minimum requirement)
  • Python: 3.6 (installed from the EPEL repository)

Prerequisites

The following prerequisites must be in place:

  • DNS A/PTR records created for vRSLCM and vIDM appliances.

Prepare Environment

Ensure that the system is up-to-date by running:

sudo yum -y update

Install yum-utils

sudo yum -y install yum-utils

Install Python 3

You will need to ensure that Python 3.6 is installed on your Ansible host. I am using the EPEL repository, but you may decide to use IUS or SCL to install these packages, so the package names may differ. Refer to the relevant documentation for installing Python 3 using these repositories, if required.

sudo yum -y install python36 python36-pip python36-devel

Install GIT

Git will be used to clone my Ansible vRSLCM playbooks repository.

sudo yum -y install git

Create a Python Environment

It’s always best to create a Python virtual environment so that packages do not conflict with the base system. I have a directory in the root of my home dir called ‘python-env‘ where I maintain several different environments. Once you have a virtual environment set up, you just need to install the required packages from the ‘requirements.txt‘ file (provided in the git repository).

You can follow these steps below to create a virtual environment:

mkdir ~/python-env
cd ~/python-env
python3.6 -m venv ansible_vrlcm
source ansible_vrlcm/bin/activate

You will notice that the shell prompt now displays the virtual environment that you are using:
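
It will look something like this (the username and host shown here are only illustrative):

(ansible_vrlcm) [user@ansible python-env]$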

It’s also a good idea to ensure that the latest versions of pip and setuptools are installed.

pip install --upgrade pip setuptools
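
With the virtual environment active, the packages listed in the repository’s requirements.txt can then be installed (this assumes you have already cloned the repository and are working from its root directory):

pip install -r requirements.txt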


Deploying NSX-T Using Ansible – Part 3: Running The Playbook

In this post, I am going to cover the running of the Ansible NSX-T playbook so that you can get NSX-T deployed in your environment(s). In case you missed them, in my previous posts, I detailed how to set up your Ansible environment and configure the playbook in preparation for deploying NSX-T.

If you arrived here and want to figure it out for yourself, you can download my playbooks here.

Playbook Overview

The main playbook that you will need to run is called ‘nsxt_create_environment.yml‘, which is located in the root of the Ansible-NSXT folder.

---
## Deploys an NSX-T environment
- hosts: nsxt_managers_controllers
  connection: local
  become: yes
  gather_facts: False
  vars:
    nsxt_deployment_vcenter: "{{ mgmt_vcenter_server }}"
    nsxt_deployment_vcenter_username: "{{ mgmt_vcenter_admin_username }}"
    nsxt_deployment_vcenter_password: "{{ mgmt_vcenter_admin_password }}"
    nsxt_deployment_datacenter: "{{ mgmt_vcenter_datacenter }}"
    nsxt_deployment_cluster: "{{ mgmt_vcenter_cluster }}"
    nsxt_deployment_datastore: "{{ nsxt_datastore }}"
    nsxt_deployment_portgroup: "{{ nsxt_portgroup }}"
    nsxt_deployment_size: "{{ nsxt_default_deployment_size }}"
    nsxt_role: "{{ nsxt_default_role }}"

    compute_managers:
    - name: "{{ nsxt_compute_manager_name }}"
      host: "{{ nsxt_compute_manager_host }}"
      transport_clusters: "{{ nsxt_transport_clusters }}"

    ip_pools:
    - display_name: "{{ nsxt_transport_switch_ip_pool_name }}"
      subnets:
      - allocation_ranges:
        - start: "{{ nsxt_transport_switch_ip_pool_start }}"
          end: "{{ nsxt_transport_switch_ip_pool_end }}"
        cidr: "{{ nsxt_transport_switch_ip_pool_cidr }}"

    transport_zones:
    - display_name: "{{ nsxt_transport_zone_name }}"
      description: "{{ nsxt_transport_zone_desc }}"
      transport_type: "OVERLAY"
      transport_switch_name: "{{ nsxt_transport_switch_name }}"

    uplink_profiles:
    - display_name: "{{ nsxt_transport_switch_uplink_profile_name }}"
      teaming:
        active_list:
        - uplink_name: "{{ nsxt_transport_switch_uplink_1 }}"
          uplink_type: PNIC
        - uplink_name: "{{ nsxt_transport_switch_uplink_2 }}"
          uplink_type: PNIC
        policy: "{{ nsxt_transport_switch_uplink_profile_policy }}"
      transport_vlan: "{{ nsxt_transport_switch_uplink_profile_vlan }}"

    transport_node_profiles:
    - display_name: "{{ nsxt_transport_node_profile_name }}"
      description: "{{ nsxt_transport_switch_profile_desc }}"
      host_switches:
      - host_switch_profiles:
        - name: "{{ nsxt_transport_switch_uplink_profile_name }}"
          type: UplinkHostSwitchProfile
        host_switch_name: "{{ nsxt_transport_switch_name }}"
        pnics:
        - device_name: "{{ nsxt_transport_switch_pnic_1 }}"
          uplink_name: "{{ nsxt_transport_switch_uplink_1 }}"
        - device_name: "{{ nsxt_transport_switch_pnic_2 }}"
          uplink_name: "{{ nsxt_transport_switch_uplink_2 }}"
        ip_assignment_spec:
          resource_type: StaticIpPoolSpec
          ip_pool_name: "{{ nsxt_transport_switch_ip_pool_name }}"
      transport_zone_endpoints:
      - transport_zone_name: "{{ nsxt_transport_zone_name }}"
    
  roles:
    - nsxt_deploy_ova
    - nsxt_apply_license
    - nsxt_add_compute_managers
    - nsxt_create_ip_pools
    - nsxt_create_transport_zones
    - nsxt_create_uplink_profiles
    - nsxt_create_transport_profiles
    - nsxt_configure_transport_clusters
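
With the variables configured (covered in Part 2), the environment can then be deployed by running the playbook against the inventory. A typical invocation might look like the one below; the --ask-vault-pass option is only needed if you have encrypted your variable files with Ansible Vault and are not using a vault password file.

ansible-playbook -i inventory/hosts nsxt_create_environment.yml --ask-vault-pass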


Deploying NSX-T Using Ansible – Part 2: Setting Up The Playbook

In my previous post, I covered how to prepare your Ansible environment and install the VMware NSX-T modules. I also provided the details on how to install my Ansible playbooks for deploying NSX-T in your environments.

In this post, I am going to detail how to configure these playbooks to meet your environment and requirements. I have chosen to break out my variables into multiple files. This gives me the flexibility to assign values specific to a group of hosts, to inherit values from a parent group, and to store usernames, passwords and license information more securely in their own Ansible Vault encrypted file.
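
As a sketch of how the sensitive values can be protected, the vars file holding credentials and licenses (the file name below is only an example; use whichever file you keep these values in) can be encrypted and later edited with Ansible Vault:

ansible-vault encrypt group_vars/all/vault.yml
ansible-vault edit group_vars/all/vault.yml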

The deployment examples that I will demonstrate include two sites, each of which includes the following:

  • A management environment at each site. This includes a vCenter Server instance with a single management cluster.
  • A compute resource (CMP) environment at each site. This includes a vCenter Server instance with a single resource cluster.

I will deploy an NSX-T instance at each management cluster. These NSX-T instances will be used to provide SDN capabilities to the compute resource clusters (when I get time I’ll create a diagram!).

An overview of the playbook tree:

├── ansible.cfg
├── nsxt_create_environment.yml
├── nsxt_example_add_compute_manager.yml
├── nsxt_example_apply_license.yml
├── nsxt_example_create_ip_pools.yml
├── nsxt_example_create_transport_profiles.yml
├── nsxt_example_create_transport_zones.yml
├── nsxt_example_create_uplink_profiles.yml
├── nsxt_example_deploy_ova.yml
├── group_vars
│   ├── all
│   ├── nsxt_managers_controllers
│   ├── site_a
│   ├── site_a_cmp_nsxt
│   ├── site_b
│   └── site_b_cmp_nsxt
├── inventory
│   └── hosts
├── roles
│   ├── nsxt_add_compute_managers
│   ├── nsxt_apply_license
│   ├── nsxt_check_manager_status
│   ├── nsxt_configure_transport_clusters
│   ├── nsxt_create_ip_pools
│   ├── nsxt_create_transport_profiles
│   ├── nsxt_create_transport_zones
│   ├── nsxt_create_uplink_profiles
│   └── nsxt_deploy_ova
└── ssh_config
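
For illustration, the inventory/hosts file groups the NSX-T manager nodes by site. A minimal sketch is shown below; the hostnames are illustrative and the group names simply mirror the group_vars directories above:

[site_a_cmp_nsxt]
site-a-nsxt-manager-01

[site_b_cmp_nsxt]
site-b-nsxt-manager-01

[site_a:children]
site_a_cmp_nsxt

[site_b:children]
site_b_cmp_nsxt

[nsxt_managers_controllers:children]
site_a_cmp_nsxt
site_b_cmp_nsxt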


Deploying NSX-T Using Ansible – Part 1: Setting Up The Environment

When I saw the release of NSX-T 2.4, I decided that I would upgrade my compute clusters to utilise this new version. Since I manage the compute NSX managers mostly through the API, I figured this would provide me with some good experience and also allow me to understand how this is deployed.

In my lab I run vRealize Automation with a management cluster (CMP stack), 2 nested vCenter Servers and ESXi Clusters (compute) that mimic two geographically dispersed data centres. Until now I had deployed a dedicated NSX-V instance for each of my three vCenter deployments, that provides the logical switching and routing required for my lab.

I didn’t want to create yet another ‘how to’ guide on how to do this using the GUI, so instead, I am going to attempt to accomplish this using Ansible. VMware have handily made available Ansible modules for NSX-T, which are supported for the 2.4 release and above (note that these are still in preview). I will attempt to make use of these modules, but if I discover broken or missing functionality, then I will revert to using the NSX-T Rest API.

Link to the VMware Github repository for Ansible NSX-T: https://github.com/vmware/ansible-for-nsxt

Link to my Github Ansible NSX-T Deployment Playbooks: https://github.com/nmshadey/Ansible-NSXT

I am going to provide a series of posts that will cover the setup of the Ansible environment, how to install the VMware NSX-T modules, and how to use the playbooks and roles that I have created to deploy NSX-T in your environments.

vRealize Orchestrator: Standardised Logger Action


I have updated this page on 22nd Jan 2019 to introduce enhancements over the original logging action. I have converted the original function calls to an object-based approach, which allows a logger object to be used; this looks cleaner and is initialised once. I have also introduced JSDoc documentation styles into my code.


One bugbear that I have with vRO is the limitation around system (console) logging. There is currently no way to dynamically output the name of an action or sub-workflow (see end of the post). I like to see exactly which action or workflow is executing code. This makes it easier for me to troubleshoot a defect, when I am looking at the output logs.

It is possible to use ‘workflow.name’ or ‘this.name’ inside an action, but this will always be set to the name of the initial workflow that was executed, because the workflow object is implicitly passed to the action that is called. The result is that it will look like all the code is executing from the workflow (which is technically true, but I needed more granularity).

I therefore created a standardised way for workflow and action logging to be handled. This is achieved by using an action that actually handles all the logging for me. The idea is that an action or workflow calls the ‘logger‘ action, providing some parameters that allow for a consistent and useful logging experience.
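
As a rough illustration of the idea (the object and method names below are hypothetical, not the actual implementation from this post), using such a logger from within another action might look something like this:

// Hypothetical sketch only: 'Logger' and its methods are illustrative names.
var logger = new Logger(workflow, "createVirtualMachine"); // initialised once per action
logger.log("Creating new virtual machine...");
logger.warn("Datastore free space is below the recommended threshold.");
logger.error("Failed to create the virtual machine.");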

vRealize Orchestrator: Standardising Modules & Actions


I have updated this page on 22nd Jan 2019 to introduce improvements over the original Action template and module structure. I have introduced the new logger object and JSDoc documentation styles into my code.


Managing your code base in vRealize Orchestrator can be quite challenging and complex. Often, you won’t realise this until you’ve reached a point where it becomes difficult and time consuming to organise or locate existing code that you have written. In this post, I am going to suggest ways to help you organise your code better, using methods that I have adopted during my time with vRO.

I am not suggesting this is the perfect solution, but it should provide a working standard to adapt to your own needs. I would also argue that the extra time spent getting this in place at the outset will lead to time saved later on.

Just for reference, from this point on, I am going to refer to ‘code base‘ and ‘actions‘ interchangeably, because your vRO actions ARE your code base. Almost every single line of code you write in vRO should be in an action (I will discuss this in more detail in a later post which I will link here).

Actions

Here are some general principles that I like to follow when creating actions. As a general rule, actions should:

  • Contain small, manageable chunks of code that perform a specific task. Actions are just functions and just like any function, it should contain code that performs a specific task. If your action is doing many different tasks, then consider breaking these down into multiple, smaller actions.
  • Validate inputs. I appreciate that some may debate this idea, but actions are not ‘private functions’. They are public code where you can never guarantee that the action ‘caller’ is properly validating its inputs. This is the nature of vRO; it is a ‘hub’ that has many different use cases and scenarios for executing the same actions. I have seen dozens of cases where developers and support engineers have wasted time tracking down unexpected errors caused by unvalidated inputs;
  • Be named appropriate to the task they perform. I generally like to use verbs in my action names, like, ‘getVirtualMachineNames’, ‘getVirtualMachineNetworks’ or ‘setCustomProperty’. Actions named this way will make it easier for other developers to identify what they are used for;
  • Have variables declared in a single block. This will just make it easier to see what variables are being used. The data type can also be defined, but is not always necessary or as important;
  • Provide consistent logging throughout. Make it so, the action almost tells the story of what is happening. Don’t go overboard, but generally a before, during and after style to logging works quite well;
  • Nesting actions within an action is generally ‘OK’ but keep it to a minimum if possible. Too many nested actions can create depth that may be more difficult to maintain and troubleshoot later on. Typical use cases are ‘helper’ or ‘utility’ actions (you’ll be completely forgiven with actions used for workflow presentation as these are a pain);
  • Perform singular tasks. Don’t write actions that perform plural tasks. Write the singular version first, then use a looping mechanism that re-uses the singular action (there are also ways this can be achieved with performance in mind in vRO). This way you’ll only have 1 version of the code;
  • Be based on a user-defined template. Yup, I’m not crazy. Have a defined template (aka boilerplate) set out on how an action should generally look and have the team follow this. It will make code reviews far easier (a minimal sketch of such a template follows this list);
  • Be named using camel-cased alphabetical characters (no dots).
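
Below is a minimal sketch of what such an action template could look like; the action name, input and types are hypothetical and are only there to show the shape (JSDoc header, input validation, a single variable block and a single task):

/**
 * getVirtualMachineNames (hypothetical example action).
 * Returns the names of all virtual machines found directly in the given folder.
 *
 * @param {VC:VmFolder} vmFolder - The vCenter folder to search.
 * @returns {Array/string} The virtual machine names.
 */

// Validate inputs.
if (!vmFolder) {
    throw "getVirtualMachineNames: required input 'vmFolder' is missing.";
}

// Declare variables in a single block.
var vmNames = [];

// Perform the single, specific task this action is responsible for.
for each (var entity in vmFolder.childEntity) {
    if (entity instanceof VcVirtualMachine) {
        vmNames.push(entity.name);
    }
}

return vmNames;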


Using the indexOf() method for Arrays and Strings for vRO and vRA

I’ve never been a developer, so getting into JavaScript was quite a challenge at first, and I probably always took the longest route possible to achieve something. As I use it more and more, I am picking up neat little tricks and uses for built-in methods that make my life easier.

In the world of vRA and vRO, I find that most of my time is spent iterating over arrays or parsing custom properties. One method that I have come to find extremely useful is indexOf(), which is available on both Arrays and Strings. The two methods are very similar but have very different use cases. Let’s take a look at each of them in turn.

String indexOf() Method

w3schools.com defines this as:

The indexOf() method returns the position of the first occurrence of a specified value in a string.

This method returns -1 if the value to search for never occurs.

So as an example, if we had the string “simplygeek.co.uk is fun”

string.indexOf("m") would return 2, which is the index within the string at which ‘m’ first appears. Note that if ‘m’ appeared more than once, only the first occurrence would be matched. Indexes within arrays and strings always start at 0.

Another use case, one which I find the most useful, is being able to provide a string for the lookup. Take the following example:

string.indexOf("simplygeek") would return 0, because the index of the first character of the match is returned.

When writing JavaScript that interacts with vRA you are often required to parse through custom properties, which are key value pairs of data. Such properties can contain useful information that relates to a deployment, such as virtual machine configuration. If custom properties follow a standardised naming convention, it can be easy to discover a set of properties. Let’s assume I have created the following custom properties in vRA for a deployment:

Custom.Deployment.Virtualmachine.Config.hotcpu : true
Custom.Deployment.Virtualmachine.Config.hotmem : true
Custom.Deployment.Virtualmachine.Config.sched.swap.vmxSwapEnabled : true

When the payload is sent to my vRO workflow, it could contain over 100 different key:value pairs of data. To find the ones I care about, I can use the indexOf() method to iterate over each pair, as follows:

var arrayOfDeploymentProps = [];
for each (var key in props.keys) {
    if (key.indexOf("Custom.Deployment.Virtualmachine.Config") === 0) {
        // We found a deployment property; store it as its own key/value pair.
        var deploymentProp = new Properties();
        deploymentProp.put(key, props.get(key));
        arrayOfDeploymentProps.push(deploymentProp);
    }
}

The above will result in an array of properties related to virtual machine configs. I can then pass this array to some code that will handle the implementation of these advanced virtual machine settings. This allows for a very dynamic way to manage custom properties in property groups within vRA.

Array indexOf() method

Very similar to the String method, on an array the indexOf() method returns the first index at which a given element can be found, or -1 if it is not present. I find this method useful when I need to return a set of unique values from another array. Let’s assume we have the following array:

myArray = ['one', 'two', 'two', 'three', 'three', 'three']

If I wanted to return only unique items from myArray, I could use the indexOf method as follows:

var myArray = ['one', 'two', 'two', 'three', 'three', 'three'];
var uniqueItems = [];
for each (var item in myArray) {
    if (uniqueItems.indexOf(item) === -1) {
        // We didn't find this item in uniqueItems, so add it.
        uniqueItems.push(item);
    }
    // When we reach the second 'two', indexOf() returns 1 (not -1), so the push above is skipped.
}

The above code will result in an array:

['one', 'two', 'three']

I hope that someone else finds these as useful as I have. If you know of more use cases, then please let me know.