Run Multiple Ansible Versions Side by Side Using Python 3 Virtual Environments

I often start my blog posts by describing how to set up a Python virtual environment and install the required modules for working with Ansible. I have therefore decided to cover the topic in a bit more detail in this post, for better consistency.

When I am developing Ansible playbooks, I often require a different set of Python modules depending on the project. I could install all of the modules that I use on the base system, but it soon becomes contaminated and the dependencies become difficult to manage.

Another requirement is being able to test my Ansible playbooks against different versions, perhaps because environments run different versions, or to test a new release. Fortunately, both of these problems are easy to solve using Python’s venv (virtual environment) module.

You don’t need to run multiple versions of Ansible in containers for this purpose, as I have seen many people do.

The benefits of using a virtual environment include:

  • Each project can have its own isolated environment and modules;
  • The base system is not affected;
  • Root access is not required, as virtual environments can be created in your home directory.

In the following sections, I will provide details on how to set up virtual environments and some examples of these with Ansible.

Install Python 3

The first thing you need to do is install Python 3. You can create virtual environments with Python 2, but as that version reaches end of life in January 2020, you really should make every effort to move away from it. We will need Python 3.6 or later.

CentOS 7

Python 3 can be installed from one of the following repositories, depending on your preference (choose only one). Note that this does not change the default ‘python’ interpreter on the system.

Extra Packages for Enterprise Linux (EPEL)

Install this repository if not already installed:

sudo yum -y install epel-release

Install Python 3:

sudo yum -y install python36 python36-pip

Software Collections (SCL)

Install this repository if not already installed:

sudo yum -y install centos-release-scl

Install Python 3:

sudo yum -y install rh-python36
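Note that SCL does not place its packages on the default path; you first enable the collection in your shell, which makes the ‘python3.6’ binary available (a quick example for the rh-python36 collection):

scl enable rh-python36 bash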

Inline with Upstream Stable (IUS)

Install this repository if not already installed:

sudo yum -y install https://centos7.iuscommunity.org/ius-release.rpm

Install Python 3:

sudo yum -y install python36u python36u-pip

Ubuntu

Check out this post: https://www.tecmint.com/install-python-in-ubuntu


Verify that Python is installed and working:

python3.6 --version

Create Virtual Environments

First, we will need to create a folder that we’ll use to store the virtual environments. I recommend that you do not create these environments inside your project folders. I have a folder called ‘python-env’ in my home directory, which I use for all my virtual environments.

mkdir ~/python-env

We create a virtual environment using the Python ‘venv’ module (note that this is built into Python 3):

python3.6 -m venv environment_name

In this example, I am going to create two environments that will provide different versions of Ansible for me to use.

cd ~/python-env
python3.6 -m venv ansible2.7.0
python3.6 -m venv ansible2.8.0

This will create the directories ‘ansible2.7.0’ and ‘ansible2.8.0’ under ‘~/python-env’, each containing the binaries and base libraries for that environment.

Next, we need to activate an environment by sourcing the activate script in its bin directory. Let’s activate the ‘ansible2.7.0’ environment.

source ansible2.7.0/bin/activate

You will notice that the shell prompt now displays the virtual environment that you are using:
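It will look something like this (the exact prompt format depends on your shell; the user and host names here are just placeholders):

(ansible2.7.0) [user@ansible python-env]$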

Currently, this environment has no modules installed. The first thing we will want to do is upgrade ‘pip’ and ‘setuptools’.

pip install --upgrade pip setuptools

Next, let’s install Ansible 2.7.0. pip installs the latest version by default, but we can override this using == to force a specific version to be installed.

pip install ansible==2.7.0

Once the installation process completes, we can confirm the version of Ansible that has been installed:

ansible --version
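The first line of output should report the pinned version, something like this (illustrative; the full output also lists the config file and module search paths):

ansible 2.7.0

To switch versions, deactivate the current environment and activate the other one (after installing ansible==2.8.0 into it in the same way):

deactivate
source ~/python-env/ansible2.8.0/bin/activate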


Automate vSphere Virtual Machine and OVA Appliance Deployments Using Ansible

If you have read any of my posts, you will quickly discover that I use Ansible a lot for deploying virtual machines and VMware OVA appliances on vSphere.

Ansible support for VMware is constantly growing, and in the latest versions it has become an essential tool in my development process for standing up infrastructure that is quick and easy to deploy or tear down. The key benefit of using Ansible for my deployments is that the process is repeatable and consistent.

In this post, I am going to cover some of the core Ansible modules that I use to perform these deployments and provide various use-case examples. Once you understand these modules and lay down the groundwork, you’ll be deploying virtual machines or appliances in mere minutes, simply by editing a few configuration files.

If you are not too familiar with what Ansible is, or what it’s used for, then I recommend that you check out the official documentation. You can also get a brief overview of what Ansible is at cloudacademy.com, and there is a wealth of online training and other material available to get you up to speed.

All examples used in this post, including a fully working Ansible solution, can be found in my Ansible-VMware Github repository.

Pre-requisites

You will need to have the following packages installed (through pip) on your Ansible control machine:

  • ansible>=2.8.0
  • pyvmomi>=6.7.1.2018.12

I have also provided a requirements.txt file that you can install using pip:

pip install -r requirements.txt
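For reference, a minimal requirements.txt covering just the two packages listed above would look like this:

ansible>=2.8.0
pyvmomi>=6.7.1.2018.12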

Deploying Virtual Machines

The most basic task you are ever likely to perform in any vSphere environment is the deployment of a virtual machine. In this section, I am going to show you how Ansible can make spinning up dozens of virtual machines a breeze.

Ansible provides the core module vmware_guest that can be used to create a new virtual machine or clone from an existing virtual machine template.

In these examples, I am going to demonstrate how to create new virtual machines or clone an existing virtual machine from a template, for both Windows and Linux, and how to perform customization and configure hardware and advanced settings.

Create a New Virtual Machine (no template)

This is an example play that will create a virtual machine with no OS installed (not from a template). When the virtual machine is powered on, it will automatically try to PXE boot from the network, which can be useful in deployment pipelines where VMs are bootstrapped in this way.

I have a simple play called ‘vmware_create_virtual_machine.yml’, which includes the tasks to create a virtual machine in VMware vSphere.

---
- hosts: local
  become: no
  gather_facts: False
  tasks:
  - name: Create a New Virtual Machine
    vmware_guest:
      hostname: <vcenter hostname>
      username: <vcenter username>
      password: <vcenter password>
      validate_certs: no
      name: Linux_VM
      datacenter: SG1
      cluster: SG1-CLS-MGMT-01
      folder: /SG1/vm
      guest_id: centos7_64Guest
      disk:
      - size_gb: 20
        type: thin
        datastore: vsanDatastore
      hardware:
        memory_mb: 2048
        num_cpus: 1
        scsi: paravirtual
      networks:
      - name: Management
        device_type: vmxnet3
      state: poweredon
    delegate_to: localhost

Many of the properties should be self-explanatory: we’re creating a virtual machine called Linux_VM with 1 CPU, 2GB of memory, a 20GB thin-provisioned hard disk, and so on.

Because we are creating a new virtual machine, the guest_id needs to be provided; this sets the Guest Operating System profile on the VM. You can get the full list of supported guest_ids from the VMware developer support page.

To run the playbook, I invoke the ansible-playbook command.

ansible-playbook vmware_create_virtual_machine.yml
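The run should finish with a play recap along these lines (representative output; the host name and counts will reflect your inventory and playbook):

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=1    unreachable=0    failed=0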

You can see that the execution was successful (ok=1) and that a change was made (changed=1), which means the virtual machine was created. If we take a look at vCenter, we can see that the virtual machine now exists with the specified configuration.

Update Virtual Machine

The great thing about Ansible is that if you were to run this play again, it would not try to create another VM. Instead, it will simply exit with a status of OK if it discovers that the specified virtual machine already exists and is powered on.

But what if we make some changes to the configuration that we want to apply to the virtual machine? Ansible will only apply such changes if the ‘state’ parameter has been set to ‘present’ in the play. Also, if you are changing the hardware configuration, the virtual machine may need to be powered off first (Ansible will display an error if this is required).

So let’s assume that the virtual machine is powered off and we want to enable CPU and memory hot add support. We simply add these configurations under the hardware section:
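For example, the hardware section of the earlier play would become the following (a sketch using the vmware_guest ‘hotadd_cpu’ and ‘hotadd_memory’ parameters):

      hardware:
        memory_mb: 2048
        num_cpus: 1
        scsi: paravirtual
        hotadd_cpu: yes
        hotadd_memory: yes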

Now if we run the play again:
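ansible-playbook vmware_create_virtual_machine.yml

This time the recap should again report changed=1, as Ansible reconfigures the existing virtual machine rather than creating a new one (representative behaviour; the exact task output depends on your Ansible version).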

And we can see that a reconfigure task is performed on the VM in vCenter server.

Make sure to check the documentation for all the parameters that can be configured.

Automating vRealize Suite Lifecycle Manager with Ansible – Part 1: Setup and Deploy vIDM and LCM

For many years I have been tasked with building vRealize Automation environments, and one of the biggest pain points has been the deployment and preparation of the IaaS machines. This usually requires special preparation of a Windows template and a number of scripts to get everything configured so that vRA plays nice, and it is an error-prone process, especially for larger enterprise deployments. To tackle this problem, VMware released vRealize Suite Lifecycle Manager, which is at version 2.1 as of this writing.

I decided it was time to try this product and see if it lives up to the claims. I was particularly interested in the API functionality, and as with all things automation, I typically turn to Ansible. I wasn’t too surprised to discover that, although the deployment is ‘automated’ (depending on your interpretation), a number of manual steps are still required. These include ensuring that the IaaS machines and database are already deployed and properly configured. The vRLCM Create Environment process does provide validation and pre-checks, along with scripts that can be used to prepare the machines.

With the preparation of these playbooks, I set out to automate the following:

  • Deployment of a single VMware vIDM appliance;
  • Deployment and initial configuration of a single vRealize Suite Lifecycle Manager appliance;
  • Deployment of vRealize Automation IaaS servers (Windows VMs) in multiple deployment scenarios;
  • Creation of a vRealize Automation environment through LCM.

This post will focus on deploying vRSLCM and vIDM, with a follow-up post on the vRA deployments.

However, in my attempts to make this a set of one-click processes, I wasn’t quite able to get that far (though I got pretty close). This was mainly due to some limitations with the vRSLCM API (certificates cannot be automated, for example). I will discuss these limitations throughout this post, and if I find workarounds, I’ll provide an update.

I should also point out that this is quite experimental, and although I have done all that I can to make these workflows idempotent, the limitations of the LCM API have made this quite difficult. These playbooks are best used as a one-time deployment, at least for LCM itself.

Environment Preparation

In my environment, I have a dedicated virtual machine, running CentOS 7.x, that I develop and run my playbooks on (you may call this the Ansible control machine).

Environment Overview

CentOS 7.x
Ansible 2.8.1 (2.8 is the minimum requirement)
Python 3.6 (installed from the EPEL repository)

Prerequisites

The following prerequisites must be in place:

  • DNS A/PTR records created for vRSLCM and vIDM appliances.

Prepare Environment

Ensure that the system is up-to-date by running:

sudo yum -y update

Install yum-utils

sudo yum -y install yum-utils

Install Python 3

You will need to ensure that Python 3.6 is installed on your Ansible host. I am using the EPEL repository, but you may decide to use IUS or SCL to install these packages, so the package names may differ. Refer to the relevant documentation for installing Python 3 from these repositories, if required.

sudo yum -y install python36 python36-pip python36-devel

Install Git

Git will be used to clone my Ansible vRSLCM playbooks repository.

sudo yum -y install git

Create Python Environment

It is always best to create a Python virtual environment so that packages do not conflict with the base system. I have a directory in the root of my home dir called ‘python-env’ where I maintain a number of different environments. Once you have a virtual environment set up, you just need to install the required packages from the ‘requirements.txt’ file (provided in the git repository referenced later).

Follow the steps below to create a virtual environment:

mkdir ~/python-env
cd ~/python-env
python3.6 -m venv ansible_vrlcm
source ansible_vrlcm/bin/activate

You will notice that the shell prompt now displays the virtual environment that you are using:
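Something like this (the exact prompt format depends on your shell; the user and host names are placeholders):

(ansible_vrlcm) [user@ansible python-env]$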

It’s also a good idea to ensure the latest versions of pip and setuptools are installed.

pip install --upgrade pip setuptools
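With the environment active, install the required packages from the repository’s ‘requirements.txt’ (this assumes you have already cloned the repository and are working from its root):

pip install -r requirements.txt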


Deploying NSX-T Using Ansible – Part 3: Running The Playbook

In this post, I am going to cover running the Ansible NSX-T playbook so that you can get NSX-T deployed in your environment(s). In case you missed them, my previous posts detailed how to set up your Ansible environment and configure the playbook in preparation for deploying NSX-T.

If you arrived here and want to figure it out for yourself, you can download my playbooks here: https://github.com/nmshadey/Ansible-NSXT

Playbook Overview

The main playbook that you will need to run is called ‘nsxt_create_environment.yml’, which is located in the root of the Ansible-NSXT folder.

---
## Deploys an NSX-T environment
- hosts: nsxt_managers_controllers
  connection: local
  become: yes
  gather_facts: False
  vars:
    nsxt_deployment_vcenter: "{{ mgmt_vcenter_server }}"
    nsxt_deployment_vcenter_username: "{{ mgmt_vcenter_admin_username }}"
    nsxt_deployment_vcenter_password: "{{ mgmt_vcenter_admin_password }}"
    nsxt_deployment_datacenter: "{{ mgmt_vcenter_datacenter }}"
    nsxt_deployment_cluster: "{{ mgmt_vcenter_cluster }}"
    nsxt_deployment_datastore: "{{ nsxt_datastore }}"
    nsxt_deployment_portgroup: "{{ nsxt_portgroup }}"
    nsxt_deployment_size: "{{ nsxt_default_deployment_size }}"
    nsxt_role: "{{ nsxt_default_role }}"

    compute_managers:
    - name: "{{ nsxt_compute_manager_name }}"
      host: "{{ nsxt_compute_manager_host }}"
      transport_clusters: "{{ nsxt_transport_clusters }}"

    ip_pools:
    - display_name: "{{ nsxt_transport_switch_ip_pool_name }}"
      subnets:
      - allocation_ranges:
        - start: "{{ nsxt_transport_switch_ip_pool_start }}"
          end: "{{ nsxt_transport_switch_ip_pool_end }}"
        cidr: "{{ nsxt_transport_switch_ip_pool_cidr }}"

    transport_zones:
    - display_name: "{{ nsxt_transport_zone_name }}"
      description: "{{ nsxt_transport_zone_desc }}"
      transport_type: "OVERLAY"
      transport_switch_name: "{{ nsxt_transport_switch_name }}"

    uplink_profiles:
    - display_name: "{{ nsxt_transport_switch_uplink_profile_name }}"
      teaming:
        active_list:
        - uplink_name: "{{ nsxt_transport_switch_uplink_1 }}"
          uplink_type: PNIC
        - uplink_name: "{{ nsxt_transport_switch_uplink_2 }}"
          uplink_type: PNIC
        policy: "{{ nsxt_transport_switch_uplink_profile_policy }}"
      transport_vlan: "{{ nsxt_transport_switch_uplink_profile_vlan }}"

    transport_node_profiles:
    - display_name: "{{ nsxt_transport_node_profile_name }}"
      description: "{{ nsxt_transport_switch_profile_desc }}"
      host_switches:
      - host_switch_profiles:
        - name: "{{ nsxt_transport_switch_uplink_profile_name }}"
          type: UplinkHostSwitchProfile
        host_switch_name: "{{ nsxt_transport_switch_name }}"
        pnics:
        - device_name: "{{ nsxt_transport_switch_pnic_1 }}"
          uplink_name: "{{ nsxt_transport_switch_uplink_1 }}"
        - device_name: "{{ nsxt_transport_switch_pnic_2 }}"
          uplink_name: "{{ nsxt_transport_switch_uplink_2 }}"
        ip_assignment_spec:
          resource_type: StaticIpPoolSpec
          ip_pool_name: "{{ nsxt_transport_switch_ip_pool_name }}"
      transport_zone_endpoints:
      - transport_zone_name: "{{ nsxt_transport_zone_name }}"
    
  roles:
    - nsxt_deploy_ova
    - nsxt_apply_license
    - nsxt_add_compute_managers
    - nsxt_create_ip_pools
    - nsxt_create_transport_zones
    - nsxt_create_uplink_profiles
    - nsxt_create_transport_profiles
    - nsxt_configure_transport_clusters
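To run the playbook once the variables are configured (covered in the previous posts), invoke ansible-playbook. A typical invocation, assuming the ‘inventory/hosts’ file from the repository tree (the supplied ansible.cfg may already set the inventory path, in which case -i can be omitted):

ansible-playbook -i inventory/hosts nsxt_create_environment.yml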


Deploying NSX-T Using Ansible – Part 2: Setting Up The Playbook

In my previous post, I covered how to prepare your Ansible environment and install the VMware NSX-T modules. I also provided details on how to install my Ansible playbooks for deploying NSX-T in your environments.

In this post, I am going to detail how to configure these playbooks to meet your environment and requirements. I have chosen to break my variables out into multiple files. This gives me the flexibility to assign values specific to a group of hosts, to inherit values from a parent group, and to store usernames, passwords, and license information more securely in their own Ansible Vault encrypted file.
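As an illustration, the sensitive values can be kept in a vault-encrypted file inside the relevant group_vars location (a hypothetical path, assuming ‘group_vars/all’ is a directory):

ansible-vault create group_vars/all/vault.yml

Ansible will prompt for a vault password and open an editor; at run time, pass --ask-vault-pass (or --vault-password-file) to ansible-playbook so that these values can be decrypted.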

The deployment examples that I will demonstrate include two sites, each comprising the following:

  • A management environment at each site. This includes a vCenter Server instance with a single management cluster.
  • A compute resource (CMP) environment at each site. This includes a vCenter Server instance with a single resource cluster.

I will deploy an NSX-T instance at each management cluster. These NSX-T instances will be used to provide SDN capabilities to the compute resource clusters (when I get time I’ll create a diagram!).

An overview of the playbook tree:

├── ansible.cfg
├── nsxt_create_environment.yml
├── nsxt_example_add_compute_manager.yml
├── nsxt_example_apply_license.yml
├── nsxt_example_create_ip_pools.yml
├── nsxt_example_create_transport_profiles.yml
├── nsxt_example_create_transport_zones.yml
├── nsxt_example_create_uplink_profiles.yml
├── nsxt_example_deploy_ova.yml
├── group_vars
│   ├── all
│   ├── nsxt_managers_controllers
│   ├── site_a
│   ├── site_a_cmp_nsxt
│   ├── site_b
│   └── site_b_cmp_nsxt
├── inventory
│   └── hosts
├── roles
│   ├── nsxt_add_compute_managers
│   ├── nsxt_apply_license
│   ├── nsxt_check_manager_status
│   ├── nsxt_configure_transport_clusters
│   ├── nsxt_create_ip_pools
│   ├── nsxt_create_transport_profiles
│   ├── nsxt_create_transport_zones
│   ├── nsxt_create_uplink_profiles
│   └── nsxt_deploy_ova
└── ssh_config


Deploying NSX-T Using Ansible – Part 1: Setting Up The Environment

When I saw the release of NSX-T 2.4, I decided that I would upgrade my compute clusters to utilise this new version. Since I mostly manage the compute NSX managers through the API, I figured this would give me some good experience and also help me understand how NSX-T is deployed.

In my lab, I run vRealize Automation with a management cluster (CMP stack), plus two nested vCenter Servers and ESXi clusters (compute) that mimic two geographically dispersed data centres. Until now, I had deployed a dedicated NSX-V instance for each of my three vCenter deployments, which provides the logical switching and routing required for my lab.

I didn’t want to create yet another ‘how to’ guide on doing this through the GUI, so instead I am going to attempt to accomplish it using Ansible. VMware have handily made Ansible modules for NSX-T available, which are supported for the 2.4 release and above (note that these are still in preview). I will attempt to make use of these modules, but if I discover broken or missing functionality, I will revert to using the NSX-T REST API.

Link to the VMware Github repository for Ansible NSX-T: https://github.com/vmware/ansible-for-nsxt

Link to my Github Ansible NSX-T Deployment Playbooks: https://github.com/nmshadey/Ansible-NSXT

I am going to provide a series of posts that will cover the setup of the Ansible environment, how to install the VMware NSX-T modules, and how to use the playbooks and roles that I have created to deploy NSX-T in your environments.