Automating vRealize Suite Lifecycle Manager with Ansible – Part 1: Setup and Deploy vIDM and LCM

For many years I have been tasked with building vRealize Automation environments, and one of the biggest pain points has been the deployment and preparation of the IaaS machines. This has usually required special preparation of a Windows template and a number of scripts to get everything configured so that vRA plays nice. This is usually an error-prone process, especially for larger enterprise deployments. To tackle this problem, VMware released vRealize Suite Lifecycle Manager, which is at version 2.1 as of this writing.

I decided it was time to try this product and see if it lives up to the claims. I was also more interested in the API functionality, and as with all things automation, I typically turn to Ansible. I wasn’t too surprised to discover that although the deployment is ‘automated’, depending on your interpretation, there are actually a number of manual steps still required. These include ensuring that the IaaS machines and database are already deployed and properly configured. The vRSLCM Create Environment process also provides validation and pre-checks, along with scripts that can be used to prepare the machines.

With the preparation of these playbooks, I set out to automate the following:

  • Deployment of a single VMware vIDM appliance;
  • Deployment and initial configuration of a single vRealize Suite Lifecycle Manager appliance;
  • Deployment of vRealize Automation IaaS Servers (Windows VMs), in multiple deployment scenarios;
  • Creation of a vRealize Automation environment through LCM.

This post will focus on deploying vRSLCM and vIDM with a follow-up post on the vRA deployments.

However, in my attempts to make this a set of one-click processes, I wasn’t able to quite get that far (I got pretty close). This was mainly due to some limitations with the vRSLCM API (certificates can’t be automated, for example). I will discuss these limitations throughout this post, and if I find workarounds, then I’ll provide an update.

I should also point out that this is quite experimental, and although I have done all that I can to make these workflows as idempotent as possible, the limitations of the LCM API have made this quite difficult. These playbooks are best used as a one-time-only deployment, at least for LCM itself.

Environment Preparation

In my environment I have a dedicated virtual machine, running CentOS 7.x, that I develop and run my playbooks on (you could call this the Ansible control machine).

Environment Overview

CentOS: 7.x
Ansible: 2.8.1 (2.8 is the minimum requirement)
Python: 3.6 (installed from the EPEL repository)

Prerequisites

The following prerequisites must be in place:

  • DNS A/PTR records created for vRSLCM and vIDM appliances.

Prepare Environment

Ensure that the system is up to date by running:
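    # update all installed packages to the latest versions
    sudo yum update -y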

Install yum-utils
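This package is available from the CentOS base repository:

    sudo yum install -y yum-utils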

Install Python 3

You will need to ensure that Python 3.6 is installed on your Ansible host. I am using the EPEL repository, but you may decide to use IUS or SCL to install these packages, so the package names may differ. Refer to the relevant documentation for installing Python 3 using these repositories, if required.
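With EPEL, the installation looks something like the following (the ‘python36‘ package name is the EPEL one and may not match other repositories):

    # enable the EPEL repository and install Python 3.6
    sudo yum install -y epel-release
    sudo yum install -y python36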

Install Git

Git will be used to clone my Ansible vRSLCM playbooks repository.
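It can be installed from the base repository:

    sudo yum install -y git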

Create Python Environment

It’s always best to create a Python virtual environment so that packages do not conflict with the base system. I have a directory in the root of my home dir called ‘python-env‘, where I maintain a number of different environments. Once you have a virtual environment set up, you just need to install the required packages from the ‘requirements.txt‘ file (provided later in the git repository).

You can follow these steps below to create a virtual environment:
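The environment name used below (‘ansible-vrslcm‘) is just an example; use whatever name suits you.

    # create a directory to hold the virtual environments
    mkdir -p ~/python-env
    # create the virtual environment using the Python 3.6 interpreter
    python3.6 -m venv ~/python-env/ansible-vrslcm
    # activate the virtual environment
    source ~/python-env/ansible-vrslcm/bin/activate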

You will notice that the shell will now display the virtual environment that you are using:
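    (ansible-vrslcm) [user@ansible ~]$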

It’s also a good idea to ensure the latest versions of pip and setuptools are installed:
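    pip install --upgrade pip setuptools

Once the repository has been cloned, the remaining Python packages can be installed from the ‘requirements.txt‘ file:

    pip install -r requirements.txt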

Read More

IaC for vRealize: Deploying vRealize Build Tools To Allow Infrastructure As Code for vRA and vRO

As any vRealize Orchestrator developer will tell you, managing code outside of the appliance is difficult. I recently wrote a post about Using Visual Studio Code for your vRealize Orchestrator Development, where I highlighted some of the challenges with this. The issue is that we’re not given the freedom to use any IDE we want, to easily run unit tests on our code, or to do continuous integration with tools like Jenkins.

I did mention that a couple of solutions were on their way; one of these was the internal tooling that VMware’s CoE team currently uses for their vRO development (you can read the article here: https://blogs.vmware.com/management/2018/11/automating-at-scale-with-vro-and-vra.html). It wasn’t possible to get access to these tools without engaging with CoE and forking out a bit of cash.

That is until now, as VMware has released these tools as a new fling. The fling is currently in preview, but you can check it out here: https://labs.vmware.com/flings/vrealize-build-tools. I think this is quite an exciting time for VMware developers as these tools could finally change the way we develop and manage our code and integrate into the wider developer ecosystem.

This is my first blog on this topic, but if I find these tools useful, then there will be plenty more to follow. Getting the environment set up to use these tools is not straightforward and has several dependencies. These include deploying supporting infrastructure such as JFrog Artifactory, preparing all the required artifacts that are sourced from the vRO server, and getting the workstation set up to create and manage packages.

Deploy and Configure Platform

Before the developer can begin using the vRealize Build Tools, the supporting platform has to be deployed and configured. This consists of setting up an Artifactory server to store Maven artifacts and build dependencies, and preparing the artifact repositories.

Deploy Artifactory Server (skip if you already have this deployed)

This section will detail how to set up the Artifactory server and required dependencies. Note that the details below only deploy a single Artifactory node, with the database instance running on the same machine. For a production environment, it is recommended that Artifactory is deployed in a highly available configuration and connects to an external/dedicated database instance.

Install Java Development Kit (JDK)

JFrog Artifactory requires Java JDK 8 or above to be installed and the JAVA_HOME variable configured. I am using the open-source version of these tools (OpenJDK). Install using the following command:
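    # OpenJDK 8 development package (provides the full JDK)
    sudo yum install -y java-1.8.0-openjdk-devel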

Add the following lines to ‘/etc/profile‘ to set the ‘JAVA_HOME‘ environment variable and add the Java bin directory to the path.
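The exact JDK path depends on the version installed; if you would rather not hard-code it, the path can also be resolved dynamically, along these lines:

    # resolve JAVA_HOME from the installed javac binary and add it to the path
    export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which javac))))
    export PATH=$PATH:$JAVA_HOME/bin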

Then source the file and check that the variables have been correctly configured:
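    source /etc/profile
    echo $JAVA_HOME
    java -version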

Read More

vRealize Orchestrator: HTTP-REST Client

vRealize Orchestrator provides a nice way to manage and interact with web services. I have always liked that you can add HTTP-REST API endpoints configured with basic authentication, which don’t require a token to be requested each time they are used. It’s also quite useful having visibility of these endpoints in the inventory, which makes them easy to discover.

It’s also possible to create operations on these endpoints (i.e. all the GET/POST operations), but I am not a big fan of using these. The problem is that using them takes you that little bit further away from managing everything in code. They are also more difficult to port/migrate to other vRO instances, which is especially true if you have multiple environments for development, test, pre-prod, prod, etc. My preferred method is to use Actions which perform these operations instead.

I have also seen many other developers use Actions for their HTTP-REST operations, which is good. The problem, however, is that I have seen these written in so many different ways. I decided that I needed to create a consistent experience when interacting with these web services, and created the HTTP-REST Client. This client is a class with associated methods that are used to make requests against HTTP-REST web services.

You can download a package containing the HTTP-REST Client Action here.

HTTP REST Client

The client is an action that returns the HttpClient object. This object has a number of methods that can be used to execute requests against a web service. The following methods are supported:

  • GET, POST, PUT, DELETE and PATCH

The Action itself requires no inputs, as these are provided when instantiating the object or supplied to the methods when called.

Below is the code for the HttpClient class.

Read More

Using Visual Studio Code for your vRealize Orchestrator Development

Please note that this post was created before VMware released the vRealize Build Tools fling. I have a new series which covers using these tools and, in effect, supersedes this post. Please check out my new series: IaC for vRealize: Deploying vRealize Build Tools To Allow Infrastructure As Code for vRA and vRO.


When you are developing in a vRealize Orchestrator environment, one of the biggest frustrations is being limited to the vRO IDE. The vRO IDE is very simple, in that it does not provide any of the features that you would expect from an IDE, such as IntelliSense and quality-of-life extensions/add-ons, and only provides basic syntax checking.

There is no integration with source control management systems such as Git (it has an internal system, which isn’t great), moving code can be difficult, and unit testing in the true sense of development doesn’t exist. The vRO IDE uses a Java client and requires a constant connection to the vRO server, so developing on the move or offline isn’t possible.

Anyone that has developed on this platform will have experienced the same issues and can only dream of the day when this is no longer the case.

You can use an IDE called Visual Studio Code, which can help make your development life easier. Admittedly, this alone doesn’t solve all of the discussed problems, but it does allow you to leverage the power of this IDE to assist in code development. There are still restrictions, such as the lack of integration with vRO itself, which requires the code to be manually copied to the vRO server (yes, annoying). The good news, however, is that solutions are starting to become available to provide that integration. I will expand more on this at the end of this post.

If you haven’t heard of Visual Studio Code, it is a lightweight and feature-rich IDE created by Microsoft. The Visual Studio Code website describes it as:

Visual Studio Code combines the simplicity of a source code editor with powerful developer tooling, like IntelliSense code completion and debugging.

First and foremost, it is an editor that gets out of your way. The delightfully frictionless edit-build-debug cycle means less time fiddling with your environment, and more time executing on your ideas.

Visual Studio Code supports macOS, Linux, and Windows – so you can hit the ground running, no matter the platform.

I do almost all of my vRO development using Visual Studio Code, which gives me access to useful extensions and most importantly, keeps me in the mindset of how a developer should work.

In this post, I am going to cover how to set up a Visual Studio Code environment in Windows, install some useful extensions that I like to use, and install Git and other required software components.

Setting Up Your Development Environment

The first thing that you will want to do is download and install Visual Studio Code from here. When you launch it for the very first time, you’ll get an immediately good impression from how quickly it loads and how lightweight it feels. The default dark theme is also quite nice.

Read More

vRA Developer: Part 8 – Working with vCAC Reservation Storage

In my previous post, I covered working with vCAC reservations and provided examples of how you can retrieve property values and also linked entities, such as networks and virtual machines. In this post I am going to cover reserved storage. I had planned to include this in the previous post, but the content was becoming too large, so I decided to split them.

The Actions provided here should be useful if you ever need to work with storage outside of vRA (for example, when adding or resizing a virtual hard disk directly in vCenter), as they can be used to query reservation information first. This can prevent cases where storage exceeds what has been allocated/reserved for the tenant.

Here is how the reserved storage looks for the reservation that I will be working with:

All code that I have provided or talked about in this post can be downloaded as a vRO package for your consumption here.

I have also included a number of workflows in the package, which provide examples of these actions being used.

You can find these under ‘Simplygeek -> Library -> Common Workflows -> vRealize Automation -> Examples -> Reservations‘.

Get Reserved Storage Information

The following sections provide vRO Actions that can be used to get information about storage that has been reserved in a reservation or get details of physical capacity.

Get Storage Reserved Capacity for a Reservation

The following action accepts a storage path and the reservation entity and will return the total amount of storage that has been reserved for the reservation.

Read More

vRA Developer: Part 7 – Working with vCAC Reservation Entities

Reservations are created in vRA to allocate memory, storage and network resources to a business group. Business groups can consume these resources by provisioning virtual machines.

In this post I am going to cover the vCAC Reservation Entities and provide some workflows and actions that can be used to get information about reservations programmatically. This can be useful to gather information about the reservations if you are using custom XaaS processes that would not trigger the standard built-in reservation checks.

All code that I have provided or talked about in this post can be downloaded as a vRO package for your consumption here.

I have also included a number of workflows in the package, which provide examples of these actions being used.

You can find these under ‘Simplygeek -> Library -> Common Workflows -> vRealize Automation -> Examples -> Reservations‘.

Get a List of Available Reservations or Get a Specific Reservation

The following code can be used to get a list of all reservations that are available for the specified tenant or retrieve a specific reservation based on its name or unique id.

This is specifically the reservation that I will be working with and will appear in the log outputs.

Get List of Available Reservations For a Tenant

The following code will accept a tenant Id and a toggle for retrieving only enabled reservations. A list of reservation string names will be returned. This isn’t as straightforward as you would hope (at least from an entities point of view), because there is no property for the tenant id on the reservation. Instead, I had to inspect the business group that the reservation is assigned to, which does have a tenant id property (it’s quite possible this is how the GUI does it too).

Log output:

Read More

Deploying NSX-T Using Ansible – Part 3: Running The Playbook

In this post I am going to cover the running of the Ansible NSX-T playbook, so that you can get NSX-T deployed in your environment(s). In case you missed them, in my previous posts, I detailed how to set up your Ansible environment and configure the playbook in preparation for deploying NSX-T.

If you arrived here and want to figure it out for yourself, you can download my playbooks here: https://github.com/nmshadey/Ansible-NSXT

Playbook Overview

The main playbook that you will need to run is called ‘nsxt_create_environment.yml‘, which is located in the root of the Ansible-NSXT folder.
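The exact invocation depends on how you have laid out your inventory and vault files, but it will look something like this (the inventory path here is a placeholder, and --ask-vault-pass is only needed if the credentials file has been encrypted with Ansible Vault, as described in Part 2):

    ansible-playbook -i inventory nsxt_create_environment.yml --ask-vault-pass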

Read More

Deploying NSX-T Using Ansible – Part 2: Setting Up The Playbook

In my previous post I covered how to prepare your Ansible environment and install the VMware NSX-T modules. I also provided the details on how to install my Ansible playbooks for deploying NSX-T in your environments.

In this post I am going to detail how to configure these playbooks to meet your environment/requirements. I have chosen to break out my variables into multiple files. This gives me the flexibility to assign values specific to a group of hosts, inherit values from a parent group and to store usernames, passwords and license information more securely, in their own Ansible Vault encrypted file.

The deployment examples that I will demonstrate include two sites, each of which includes the following:

  • A management environment, which includes a vCenter Server instance with a single management cluster.
  • A compute resource (CMP) environment, which includes a vCenter Server instance with a single resource cluster.

I will deploy an NSX-T instance at each management cluster. These NSX-T instances will be used to provide SDN capabilities to the compute resource clusters (when I get time I’ll create a diagram!).

An overview of the playbook tree:

Read More