IaC for vRealize: Updating Archetypes and Adding Boilerplate For Your vRO Actions

Welcome to the 5th post in the series. This is a relatively short post in which we will look at archetypes, how to update them, and a standard layout for creating vRO Actions. At this stage, the focus is on ensuring that we have everything required when creating new projects, instead of having to add new files to each project or modify existing ones.


By now you will have come across the term ‘archetype‘ when working with Maven. If you’re not a Java developer and have never worked with Maven before, then you likely have no idea what this means in the context of creating projects.

Here is a dictionary definition for archetype:

So if we look at the first definition, we can think of a Maven archetype as a model for our projects. In essence, think of an archetype as a ‘template’ for Maven projects.

Recall what happens when we create a project for the first time: we execute a command similar to this:

mvn archetype:generate -DinteractiveMode=false -DarchetypeGroupId=com.vmware.pscoe.o11n.archetypes -DarchetypeArtifactId=package-actions-archetype -DarchetypeVersion=1.5.11 -DgroupId=com.simplygeek.library -DartifactId=nsx

We specify ‘archetype:generate‘, which tells Maven that the project should be generated from an existing archetype. We also specify ‘archetypeGroupId‘, ‘archetypeArtifactId‘, and ‘archetypeVersion‘, which tell Maven how to find the archetype.

Once we have executed the above command, Maven will download several artefacts that it requires for the project. Once this task completes, you will have a directory structure that looks something like the following (depending on what type of project you are creating):

The ‘src‘ directory also includes a ‘main‘ and ‘test‘ subdirectory that includes a sample Action and unit test.
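
The exact contents depend on the archetype and project type, but for the actions archetype used above (artifactId ‘nsx‘), the layout is along these lines (a sketch; only the items described in this post are shown):

```
nsx/
├── pom.xml
└── src/
    ├── main/   <- includes a sample Action
    └── test/   <- includes a sample unit test
```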

This directory structure and the files have been predefined inside of an archetype. When a new project is created, Maven downloads this archetype and unpacks the contents to form the project directory. This means that every project that we create will look the same.

But what if we want to change this, or perhaps add additional files or update existing ones? Well, it’s quite easy: we just have to locate the artefact for the archetype and update it.

Update Existing Archetype

If you think back to the very first post, we downloaded the vRealize Build Tools package from the VMware flings website. From within this package, there was an ‘iac-maven-repository.zip‘ file, which we copied to a CI server (I used the Artifactory server in my first post). This file includes all the archetypes that are used for our projects.

Depending on which project type you create, the following archetypes are available:

  • iac-maven-repository/com/vmware/pscoe/o11n/archetypes/package-actions-archetype
  • iac-maven-repository/com/vmware/pscoe/o11n/archetypes/package-mixed-archetype
  • iac-maven-repository/com/vmware/pscoe/o11n/archetypes/package-vrealize-archetype
  • iac-maven-repository/com/vmware/pscoe/o11n/archetypes/package-xml-archetype
  • iac-maven-repository/com/vmware/pscoe/vra/archetypes/package-vra-archetype

Inside each of these directories, there is a ‘<version>/<archetype>-<version>.jar‘ file and an associated pom file. For a JS based project, you will see:

The jar file is just a zip file and can be opened with any zip software. When you open this file (I’m using 7-Zip on Windows) you will see two folders, ‘META-INF‘ and ‘archetype-resources‘.

If you navigate into ‘archetype-resources‘, you will see the same folder structure that you get when creating a new project.

IaC for vRealize: Documenting Code with JSDoc and Syntax Checking and Code Style Management with ESLint

Welcome to the 4th post in the series, where I will take a look at how we can add comments to our code using JSDoc and perform syntax checking and style management with ESLint. I felt this was a good point in the series to cover these topics, as they establish fundamental development practices and make your vRO development life easier for both yourself and your peers.

Once you have implemented these techniques, you will begin on a journey of well documented and standardised code and take a step into the doorway of Test Driven Development (TDD).


Several packages need to be installed, which will be used in this post. I created a previous article on Using Visual Studio Code for your vRealize Orchestrator Development which covers the installation of NodeJS (needed for NPM), ESLint and some other plugins.

Please have a look at this post and ensure you have everything installed. Don’t pay too much attention to the JSDoc examples in that post as they are very specific to documenting Actions (with no function declaration). I’ll be demonstrating a different approach here with Maven.

Also, you will need to install the following packages using NPM (I’m installing globally for this tutorial, but production environments should make use of local dependencies; that topic is a bit too long for this post, and I will cover it in a later one):

npm install -g jsdoc eslint-plugin-jsdoc eslint

I will detail how and where these are used throughout this post.

You can also download my ‘.eslintrc.json‘ configuration here:

Documenting Code with JSDoc

Developers are often expected to add comments to their code, as this can help both you and others when trying to figure out what the code is trying to achieve. However, this practice has evolved, with tools that are much more integrated and can automate the creation of documentation.

Many programming languages offer some form of documentation library: for example, Java has Javadoc, Python has its native docstrings, and in JavaScript, we have JSDoc.

JSDoc makes use of annotations and tags that are added alongside the code and are used to describe constructs such as functions, classes, namespaces, and variables. Some tags allow metadata such as author information, versioning and licensing to be added to the documentation.

JSDoc can also be used to describe variable types, and with some advanced settings in VS Code and a few extra lines of code, it can enforce static type checking. This is one of the major reasons why developers turn to TypeScript for their JavaScript development. However, my attempts to get this to work have been unsuccessful and I leave this open to anyone willing to give it a try. The issue was dealing with vRO object types, such as ‘REST:RESTHost’, where VS Code/TypeScript interprets the colon as an error.

Adding JSDoc Annotations To Your Code

Adding JSDoc annotations to your code is a hard requirement. This is due to the way the vRealize Build Tools have been implemented. The ‘vRealize:push‘ goal has been designed to depend on the JSDoc description and tags (specifically @param and @returns) when creating the Action in vRO. If these properties are missing or specify the wrong data types, then these mistakes will be carried through to vRO.

The best way to begin to explain JSDoc and how to use it would be to provide an example of this being used to describe one of my vRO functions.

Below is a JSDoc annotation used to define my ‘attachSecurityTagToVcVm’ function.

/**
 * Attaches the specified nsx security tag to a vCenter virtual machine.
 * @author Gavin Stephens <gavin.stephens@simplygeek.co.uk>
 * @function attachSecurityTagToVcVm
 * @param {REST:RESTHost} nsxRestHost The NSX Rest Host.
 * @param {VC:VirtualMachine} vcVm The vCenter Virtual Machine.
 * @param {string} tagName The NSX security tag name.
 * @returns {void}
 */
The JSDoc annotations are always placed inside a comment block (/**   */). Each line that is defined within the comment block starts with an asterisk (*). Pay close attention to the indentation of the commented lines, these are single-spaced. The comment block itself should be placed on the immediate line before the item you are documenting (i.e. the function declaration).

The first line should always be the description. It should clearly describe what the function does or is used for. The description is mandatory and is added to the description field for the vRO Action.

@author allows me to add author details such as my name and email address. This tag is optional.

@function allows the function name to be specified. I use this because function declarations for vRO Actions are anonymous. Therefore, when generating documentation, JSDoc doesn’t know what the function is called. Using this tag can help JSDoc. This tag is optional.

@param describes the parameters that the function expects, which includes the data type, name and description. Note that the data type is specified exactly as it would appear in an Action in vRO for example, ‘VC:VirtualMachine‘ and not ‘vcVirtualMachine‘. The ‘vRealize:push‘ goal will apply these data types exactly as-is to the Actions inputs.

The parameters can also be marked as optional by enclosing the parameter name in square brackets, i.e. in my example, if ‘tagName‘ was optional, it would be specified as ‘[tagName]‘. This does not affect how parameters are presented to vRO and is used for documentation purposes only. You will have to rely on adding code that handles mandatory and optional parameters, including setting defaults.
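
Since the square brackets are documentation-only, the defaulting has to live in the function body. Below is a minimal sketch (a hypothetical helper, not one of the Actions above) showing one way to handle an optional parameter:

```javascript
/**
 * Resolves the NSX security tag name, falling back to a default.
 * @param {string} [tagName] The NSX security tag name.
 * @returns {string} The tag name that will be used.
 */
function resolveTagName(tagName) {
    // The square brackets in the JSDoc are documentation only;
    // the default must be applied in code.
    if (!tagName) {
        tagName = "default";
    }
    return tagName;
}
```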

@returns describes any variable that is returned by the function. Normally, it would be fine to remove this tag if the function does not return a value. However, vRO Actions must specify ‘void‘ if no value is returned. It’s also good to be explicit. Note that when the vRealize Build Tools pulls down any of your Actions for the first time, it will set an @return tag, but it is recommended to use @returns.

Check out https://devdocs.io/jsdoc/ to learn more about JSDoc and all the available tags.

One thing to note is that any additional tags you use, such as @author, @function or anything else apart from @param and @returns, will be appended to the description field of the vRO Action.

Automate vSphere Virtual Machine and OVA Appliance Deployments Using Ansible

If you have read any of my posts, you will quickly discover that I use Ansible a lot for deploying virtual machines and VMware OVA appliances on vSphere.

Ansible support for VMware is constantly growing and in the latest versions, it has become an essential tool that I use as part of my development process for standing up required infrastructure that is easy and quick to deploy or tear down. The key part of using Ansible for my deployments is that the process is repeatable and consistent.

In this post, I am going to cover some of the core Ansible modules that I use to perform these deployments and provide various use case examples. Once you understand these modules and lay down the groundwork, you’ll be deploying virtual machines or appliances in mere minutes, with the simple editing of some configuration files.

If you are not too familiar with what Ansible is, or what it’s used for, then I recommend that you check out the official documentation. You can also get a brief overview of what Ansible is at cloudacademy.com, and there is a wealth of online training and other material available to get you up to speed.

All examples used in this post, including a fully working Ansible solution, can be found on my Ansible-VMware Github.


You will need to have the following packages installed (through PIP) on your Ansible control machine:

  • ansible>=2.8.0
  • pyvmomi>=

I have also provided a requirements.txt file that you can install using PIP:

pip install -r requirements.txt

Deploying Virtual Machines

The most basic task that you are ever likely to perform on any vSphere environment is the deployment of a virtual machine. In this section, I am going to show you how Ansible can make the task of spinning up dozens of virtual machines a breeze.

Ansible provides the core module vmware_guest that can be used to create a new virtual machine or clone from an existing virtual machine template.

In these examples, I am going to demonstrate how you can create new virtual machines or clone an existing virtual machine from a template for both Windows and Linux, perform customization and configure hardware and advanced settings.

Create a New Virtual Machine (no template)

This is an example play that will create a virtual machine with no OS installed (not from a template). When the virtual machine is powered on, it will automatically try and PXE boot from the network, which can be useful in deployment pipelines where VMs are bootstrapped in this way.

I have a simple play called ‘vmware_create_virtual_machine.yml‘, which includes the tasks to create a virtual machine in VMware vSphere.

- hosts: local
  become: no
  gather_facts: False
  tasks:
    - name: Create a New Virtual Machine
      vmware_guest:
        hostname: <vcenter hostname>
        username: <vcenter username>
        password: <vcenter password>
        validate_certs: no
        name: Linux_VM
        datacenter: SG1
        cluster: SG1-CLS-MGMT-01
        folder: /SG1/vm
        guest_id: centos7_64Guest
        disk:
          - size_gb: 20
            type: thin
            datastore: vsanDatastore
        hardware:
          memory_mb: 2048
          num_cpus: 1
          scsi: paravirtual
        networks:
          - name: Management
            device_type: vmxnet3
        state: poweredon
      delegate_to: localhost

Many of the properties should be self-explanatory: we’re creating a virtual machine called Linux_VM, with 1 CPU, 2GB of memory, a 20GB thin-provisioned hard disk, etc.

Because we are creating a new virtual machine, the guest_id needs to be provided, which sets the Guest Operating system profile on the VM. You can get the full list of supported guest_ids from the VMware developer support page.

To run the playbook, I invoke the ansible-playbook command.

ansible-playbook vmware_create_virtual_machine.yml

You can see that the execution was successful (ok=1) and that a change was made (changed=1), which means the virtual machine was created. If we take a look at vCenter we can see the virtual machine now exists, with the specified configuration:

Update Virtual Machine

The great thing about Ansible is that if you were to run this play again, it would not try and create another VM. Instead, it will simply exit with a status of OK if it discovers that the specified virtual machine already exists and is powered on.

But what if we made some changes to the configuration that we want to apply to the virtual machine? Well, Ansible will only make these changes to the virtual machine if the ‘state‘ parameter has been set to ‘present‘ in the play. Also, if you are making configuration changes to hardware, then the virtual machine may need to be powered off first (Ansible will display an error if this is required).

So let’s assume that the virtual machine is powered off and we want to enable CPU and memory hot add support. We simply add these configurations under the hardware section:
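
For example (a sketch based on the play above; ‘hotadd_cpu‘ and ‘hotadd_memory‘ are the vmware_guest parameters for hot add support):

```yaml
      hardware:
        memory_mb: 2048
        num_cpus: 1
        scsi: paravirtual
        # Enable CPU and memory hot add support
        hotadd_cpu: yes
        hotadd_memory: yes
```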

Now if we run the play again:

And we can see that a reconfigure task is performed on the VM in vCenter server.

Make sure to check the documentation for all the parameters that can be configured.

IaC for vRealize: Define Dependencies, Manage Versions, Prepare & Release Packages & Deploy Artifacts

Welcome to the third part in the series working with the vRealize Build Tools. At this stage, you should have a fully working CI infrastructure and have all of your vRO code exported using packages and stored in Git repositories. In this post, I will show you how to manage dependencies across your packages and how you can use the Maven plugins to automatically update packages so that they are using the latest dependency versions.

I will also show you how to manage versioning for development (Snapshots) and production (release) code. Once we have our versioning in place, I will detail how to prepare the releases and finally push the artefacts to the Artifactory repositories, which can be picked up by a release pipeline (something I will cover in much more detail in a later post).

I also want to point out that I had to update my previous post to include the git scm connections in your pom.xml files, as this is required for this post, so make sure to go back and check that out.

Understanding Snapshot vs Release Versions

The first thing to understand is the ‘-SNAPSHOT‘ string that is suffixed to package versions. You will have noticed that all of the package versions you created in my previous post, were version ‘1.0.0-SNAPSHOT’. This suffix tells Maven that the code inside this package is in a development stage and not suitable for production release. The Maven release plugins understand this and will prevent the dependency handler from including these as valid dependencies for your packages, as this could result in unstable code (though, they can be defined manually).

If you are following these posts, then currently, all your packages will be at version 1.0.0-SNAPSHOT. Don’t worry, we will change this later, but for now, accept this as-is.

Define Dependencies

In my previous post, I provided an example of 3 packages that I have created. I also created a table that detailed the ‘groupId‘ and ‘artifactId‘ for these packages. I am going to extend this table to list their dependencies on other packages. I have kept this simple, in that no dependencies exist outside of these 3 packages.

A dependency exists when one package is calling code from another package (typically with a System.getModule()). When dependencies are mapped, their artefacts and required versions are released as part of the release process.
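
For example, code like this inside an Action in the ‘nsx‘ package calls into the ‘logger‘ package, which makes ‘logger‘ a dependency of ‘nsx‘ (a sketch; the module path is assumed):

```javascript
// Inside an 'nsx' Action: calling code from the 'logger' package
// makes 'logger' a dependency that must be declared in pom.xml.
var Logger = System.getModule("com.simplygeek.library.util").logger();
```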

groupId                 artifactId   Dependency
com.simplygeek.library  logger       (none)
com.simplygeek.library  rest         logger
com.simplygeek.library  nsx          logger, rest

I have added the ‘Dependency‘ column, which lists the projects/artefacts that this package depends on. All of my packages depend on ‘logger‘, whereas ‘nsx‘ also depends on ‘rest‘.

We now need to add some XML to the project’s ‘pom.xml‘ file, which is located at the root of the project folder. These are ‘<dependencies>‘ tags that are used to define a set of ‘<dependency>‘ tags for the project. You should insert these tags immediately after the ‘<packaging>‘ tags but before the ‘<scm>‘ tags.

We also want to omit ‘-SNAPSHOT‘ from the version, since we’ll be pushing actual releases.

Below is an example of what the dependency XML insert looks like for the ‘rest‘ package:
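
As a sketch (assuming ‘logger‘ has been released at version 1.0.0, and that vRO package artefacts are declared with the ‘package‘ type), it would look something like:

```xml
<dependencies>
    <dependency>
        <groupId>com.simplygeek.library</groupId>
        <artifactId>logger</artifactId>
        <version>1.0.0</version>
        <type>package</type>
    </dependency>
</dependencies>
```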


Once you have completed defining your dependencies, save the ‘pom.xml‘ file and make sure to commit the changes to git.

Repeat this process for all of your projects.

Prepare & Release Packages

Maven provides several lifecycle phases and goals that can be used to prepare and perform the releases. We will also need to push the release artefacts to the Artifactory repositories, as this will be queried for dependencies when releasing a package that has any defined.

You should first be focusing on all the core projects that make up most of your dependencies. Also, start with the projects that have dependencies in the lowest order, i.e. the ‘logger‘ project has no dependencies, so that will be released first, ‘rest‘ depends on ‘logger‘, so that will be released next, etc.

For all the release tasks, we’re going to specify the ‘release‘ plugin followed by the goal. We will use the following goals:

  • Clean
  • Prepare
  • Perform
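
Run from the root of each project, the corresponding commands are (these are the standard Maven Release Plugin goal names):

```shell
# Remove files left over from a previous (possibly failed) release attempt
mvn release:clean

# Verify there are no uncommitted changes or SNAPSHOT dependencies,
# tag the release in git and bump pom.xml to the next SNAPSHOT version
mvn release:prepare

# Check out the tagged release, then build and deploy the release artefacts
mvn release:perform
```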


IaC for vRealize: Manage Existing vRO Code With vRealize Build Tools & Set up Git Repositories

In my previous post on Deploying vRealize Build Tools To Allow Infrastructure As Code for vRA and vRO, I covered how to set up the CI infrastructure and your developer workstation, in preparation for managing your vRO code as projects with Visual Studio Code and Maven. In this post, I will explain how you can work with your existing code base and manage it using the build tools. A major part of this will be creating and managing new projects that will map to our existing code in vRO.

Once projects have been created, I will detail how Git repositories can be used to store and manage your vRO code, and then we can map your project dependencies and allow development teams to work collaboratively, without risk of overwriting the work of others (a major problem when developing using the vRO client). Git is going to bring some very useful processes and methodologies to the table such as branching, tagging and merge conflict resolution.

I also want to point out that I am currently only focusing on Actions, as I believe this is where all your code should exist. I will have follow-up posts that will cover strategies for managing Workflows and other items.

Setting Up GitLab

One thing that I didn’t cover in my previous post was setting up Git. I purposely reserved this topic for this post, as it is more relevant here. To start, you will need to have a GitLab server deployed that can be used to create the repositories for storing your vRO projects. You can use GitHub if you wish; it doesn’t matter too much, but GitLab doesn’t require that your environments have access to the Internet. GitLab also allows you to create groups to organise multiple repositories, which is going to be useful.

If you need to install GitLab, then there is a good guide on VULTR, that details how to install GitLab and enable HTTPS.

Create User

You should be using your own user account when working with Git and not the default root account. So log into the admin area and create a new local account for your personal use. Alternatively, you can configure GitLab to allow users from Active Directory to log in, using the guide provided by GitLab here.

Create a Personal Access Token

Once you have a Git user account set up, you will need to create a Personal Access Token. An Access Token can be used instead of a password when authenticating with Git over HTTPS. This will provide safer storage of user credentials in the Git configuration files.

Click on the profile icon on the top right of the page and select Settings:

On the Settings page select Access Tokens.

On the ‘Add a personal access token‘ page, give the access token a name and the ‘write_repository‘ permissions. Set an expiry date if you wish, or leave blank to never expire.

Once you click ‘Create personal access token‘, the token will be displayed. You will need to copy and save this token somewhere safe as you will not be able to view it again. If you lose this token, you will have to create a new one to replace it.
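
For example (hypothetical hostname, group and repository names), the token is then supplied in place of the password when cloning over HTTPS:

```shell
git clone https://<username>:<access-token>@gitlab.example.com/vro/logger.git
```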

Create a Group

A Group allows multiple projects/repositories to be created under a single namespace. This is useful when a project spans many repositories and you need to keep these together so that they are easy to locate and manage. Our vRO projects will be using multiple repositories, therefore we’ll create a Group for these. A Group also simplifies granting access to projects, as collaborators can be granted access to the group and inherently, the projects it contains.

To create a new group, select ‘Groups‘ from the main menu at the top and then select ‘Your groups‘.

Click the ‘New group‘ button and give your group a name and settings that you require (I simply called my group ‘vRO’).

We will create all vRO projects under this new group.

Install Git Client

You will also need to install the Git client for your workstation. You can download the client for your OS here.

Automating vRealize Suite Lifecycle Manager with Ansible – Part 1: Setup and Deploy vIDM and LCM

For many years I have been tasked with building vRealize Automation environments, and one of the biggest pain points has been the deployment and preparation of the IaaS machines. This has usually required special preparation of a Windows template and several scripts to get everything configured so that vRA plays nice. This is usually an error-prone process, especially for the larger enterprise deployments. To tackle this problem, VMware released vRealize Suite Lifecycle Manager, which is on version 2.1, as of this writing.

I decided it was time to try this product and see if it lives up to the claims. I was also more interested in the API functionality, and as with all things automation, I typically turn to Ansible. I wasn’t too surprised to discover that, although the deployment is ‘automated’, depending on your interpretation, there are a number of manual steps that are still required. These include ensuring that the IaaS machines and database are already deployed and properly configured. The vRLCM Create Environment process also provides validation and pre-checks, along with scripts that can be used to prepare the machines.

With the preparation of these playbooks, I set out to automate the following:

  • Deployment of a single VMware vIDM appliance;
  • Deployment and initial configuration of a single vRealize Suite Lifecycle Manager appliance;
  • Deployment of vRealize Automation IaaS Servers (Windows VMs), in multiple deployment scenarios;
  • Creation of a vRealize Automation environment through LCM.

This post will focus on deploying vRSLCM and vIDM with a follow-up post on the vRA deployments.

However, in my attempts to make this a set of one-click processes, I wasn’t able to quite get that far (got pretty close). This was mainly due to some limitations with the vRSLCM API (can’t automate certificates, for example). I will discuss these limitations throughout this post, and if I find workarounds, then I’ll provide an update.

I should also point out that this is quite experimental and although I have done all that I can to make these workflows as idempotent as I can, unfortunately, with the limitations of the LCM API, this has proven to be quite difficult. These playbooks are best used as a one-time-only deployment, at least for LCM itself.

Environment Preparation

In my environment, I have a dedicated virtual machine that I develop and run my playbooks on (you may call this the Ansible control machine) running on CentOS 7.x.

Environment Overview

OS       CentOS 7.x
Ansible  2.8.1 (2.8 is a minimum requirement)
Python   3.6 (installed from the EPEL repository)


The following pre-requisites are required:

  • DNS A/PTR records created for vRSLCM and vIDM appliances.

Prepare Environment

Ensure that the system is up-to-date by running:

sudo yum -y update

Install yum-utils

sudo yum -y install yum-utils

Install Python 3

You will need to ensure that Python 3.6 is installed on your Ansible host. I am using the EPEL repository, but you may decide to use IUS or SCL to install these packages, so the package names may differ. Refer to the relevant documentation for installing Python 3 using these repositories, if required.

sudo yum -y install python36 python36-pip python36-devel

Install GIT

Git will be used to clone my Ansible vRSLCM playbooks repository.

sudo yum -y install git

Create a Python Environment

It’s always best to create a Python virtual environment so that packages do not conflict with the base system. I have a directory in the root of my home dir called ‘python-env‘ where I maintain several different environments. Once you have a virtual environment set up, you just need to install the required packages from the ‘requirements.txt‘ file (provided in the git repository).

You can follow these steps below to create a virtual environment:

mkdir ~/python-env
cd ~/python-env
python3.6 -m venv ansible_vrlcm
source ansible_vrlcm/bin/activate

You will notice that the shell will now display the virtual environment that you are using:

It’s also a good idea to ensure the latest version of pip and setuptools is installed.

pip install --upgrade pip setuptools


IaC for vRealize: Deploying vRealize Build Tools To Allow Infrastructure As Code for vRA and vRO

As any vRealize Orchestrator developer will tell you, managing code outside of the appliance is difficult. I recently wrote a post about Using Visual Studio Code for your vRealize Orchestrator Development, where I highlighted some of the challenges with this. The issue is that we’re not given the freedom to use any IDE we want, easily run unit tests on our code or do continuous integration with tools like Jenkins.

I did mention that a couple of solutions were going to make their way, one of these was internal tooling that VMware’s CoE team currently uses for their vRO development (you can read the article here: https://blogs.vmware.com/management/2018/11/automating-at-scale-with-vro-and-vra.html). It wasn’t possible to get access to these tools without engaging with CoE and forking out a bit of cash.

That is until now, as VMware has released these tools as a new fling. The fling is currently in preview, but you can check it out here: https://labs.vmware.com/flings/vrealize-build-tools. I think this is quite an exciting time for VMware developers as these tools could finally change the way we develop and manage our code and integrate into the wider developer ecosystem.

This is my first blog on this topic but if I find these tools useful, then there will be plenty more to follow. Getting the environment set up to use these tools is not straightforward and has several dependencies. These include deploying supporting infrastructure such as JFrog Artifactory, preparing all the required artefacts that are sourced from the vRO server and getting the workstation set up to create and manage packages.

Deploy and Configure Platform

Before the developer can begin using the vRealize Build Tools, the supporting platform has to be deployed and configured. This consists of setting up an Artifactory server to store Maven artefacts and build dependencies and preparing the artefact repositories.

Deploy Artifactory Server (skip if you already have this deployed)

This section will detail how to set up the Artifactory server and required dependencies. Note that the details below only deploy a single Artifactory node with the database instance running on the same machine. For a production environment, it is recommended that Artifactory is deployed with high availability and connects to external/dedicated database instances.

Install Java Development Kit (JDK)

JFrog Artifactory requires the Java JDK 8 and above to be installed and the JAVA_HOME variable configured. I am using the Open Source version of these tools. Install using the following command:

sudo yum install -y java-1.8.0-openjdk-devel

Add the following lines to ‘/etc/profile‘ to set the ‘JAVA_HOME‘ environment variable and add the Java bin directory to the path.

export JAVA_HOME=$(dirname $(dirname $(readlink $(readlink $(which javac)))))
export PATH=$PATH:$JAVA_HOME/bin

This is what my ‘/etc/profile‘ looks like:

Then source the file and check that the variables have been correctly configured:

source /etc/profile
env | grep JAVA_HOME
env | grep PATH


vRealize Orchestrator: HTTP-REST Client

vRealize Orchestrator provides a nice way to manage and interact with web services. I have always liked that you can add HTTP-REST API endpoints configured with basic authentication, which don’t require a token to be requested each time they are used. It’s also quite useful having visibility of these endpoints in the inventory, which makes them easy to discover.

It’s also possible to create operations on these endpoints (i.e. all the get/post operations), but I am not a big fan of using these. The problem is that using them takes you that little bit further away from managing everything in code. They are also more difficult to port or migrate to other vRO instances, especially if you have multiple environments for development, test, pre-prod, prod, etc. My preferred method is to use Actions that perform these operations instead.

I have also seen many other developers use Actions for their HTTP-REST operations, which is good. The problem, however, is that I have seen these written in so many different ways. I decided that I needed to create a consistent experience when interacting with these web services, so I created the HTTP-REST Client. This client is a class with associated methods that are used to make requests against HTTP-REST web services.

You can download a package containing the HTTP-REST Client Action here.


The client is an action that returns the HttpClient object. This object has several methods that can be used to execute requests against a web service. The following methods are supported:

get, post, put, delete, patch
The Action itself requires no inputs as these are provided when instantiating the object or supplied to the methods when called.
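As a sketch of the call pattern (the module path, URI, and payload below are illustrative, not taken from the package): in vRO you would fetch the class from the action, instantiate it with a REST:RESTHost from the inventory, and call one of the methods. A stub stands in for the vRO pieces here so the calling shape can be shown outside the platform:

```javascript
// Hypothetical consumer of the HttpClient class. In vRO you would obtain the
// class with something like:
//   var HttpClient = System.getModule("com.simplygeek.library.rest").httpClient();
// and pass in a REST:RESTHost object from the inventory. The stub below only
// mimics the response shape so the calling pattern is visible.
function HttpClientStub(restHost, acceptType) {
    this.get = function (restUri, expectedResponseCodes, headers) {
        // The real client executes the request and validates the status code.
        return { statusCode: 200, contentAsString: JSON.stringify({ items: ["a", "b"] }) };
    };
}

var client = new HttpClientStub(null /* REST:RESTHost in vRO */);
var response = client.get("/api/v1/items", [200]);
var items = JSON.parse(response.contentAsString).items;
console.log(items.length); // 2
```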

Below is the code for the HttpClient class.

/**
 * Creates and returns an instance of the HttpClient object.
 * @author Gavin Stephens <gavin.stephens@simplygeek.co.uk>
 * @version 1.0.0
 * @function httpClient
 * @returns {*} An instance of the HttpClient object.
 */

var logType = "Action";
var logName = "httpClient"; // This must be set to the name of the action
var Logger = System.getModule("com.simplygeek.library.util").logger(logType, logName);
var log = new Logger(logType, logName);
var reqResponse = ""; // REST API request response

/**
 * Defines the HttpClient object.
 * @class
 * @param {REST:RESTHost} restHost - The HTTP REST host.
 * @param {string} [acceptType] - The encoding format to accept.
 * @returns {*} An instance of the HttpClient object.
 */
function HttpClient(restHost, acceptType) {
    this.restHost = restHost;

    if (acceptType) {
        this.acceptType = acceptType;
    } else {
        this.acceptType = "application/json";
    }

    /**
     * Defines the GET method.
     * @method
     * @param {string} restUri - The request uri.
     * @param {number[]} [expectedResponseCodes] - A list of expected response codes.
     * @param {Properties} [headers] - A key/value set of headers to include in the request.
     * @returns {*} The request response object.
     */
    this.get = function (restUri, expectedResponseCodes, headers) {
        reqResponse = this._executeRequest("GET", restUri, null, null,
                                           expectedResponseCodes, headers);

        return reqResponse;
    };

    /**
     * Defines the POST method.
     * @method
     * @param {string} restUri - The request uri.
     * @param {string} [content] - The request content.
     * @param {string} [contentType] - The encoding for content.
     * @param {number[]} [expectedResponseCodes] - A list of expected response codes.
     * @param {Properties} [headers] - A key/value set of headers to include in the request.
     * @returns {*} The request response object.
     */
    this.post = function (restUri, content, contentType,
                          expectedResponseCodes, headers) {
        if (!content) {
            content = "{}";
        }

        reqResponse = this._executeRequest("POST", restUri, content,
                                           contentType, expectedResponseCodes,
                                           headers);

        return reqResponse;
    };

    /**
     * Defines the PUT method.
     * @method
     * @param {string} restUri - The request uri.
     * @param {string} content - The request content.
     * @param {string} [contentType] - The encoding for content.
     * @param {number[]} [expectedResponseCodes] - A list of expected response codes.
     * @param {Properties} [headers] - A key/value set of headers to include in the request.
     * @returns {*} The request response object.
     */
    this.put = function (restUri, content, contentType,
                         expectedResponseCodes, headers) {
        reqResponse = this._executeRequest("PUT", restUri, content,
                                           contentType, expectedResponseCodes,
                                           headers);

        return reqResponse;
    };

    /**
     * Defines the DELETE method.
     * @method
     * @param {string} restUri - The request uri.
     * @param {string} content - The request content.
     * @param {string} [contentType] - The encoding for content.
     * @param {number[]} [expectedResponseCodes] - A list of expected response codes.
     * @param {Properties} [headers] - A key/value set of headers to include in the request.
     * @returns {*} The request response object.
     */
    this.delete = function (restUri, content, contentType,
                            expectedResponseCodes, headers) {
        reqResponse = this._executeRequest("DELETE", restUri, content,
                                           contentType, expectedResponseCodes,
                                           headers);

        return reqResponse;
    };

    /**
     * Defines the PATCH method.
     * @method
     * @param {string} restUri - The request uri.
     * @param {string} content - The request content.
     * @param {string} [contentType] - The encoding for content.
     * @param {number[]} [expectedResponseCodes] - A list of expected response codes.
     * @param {Properties} [headers] - A key/value set of headers to include in the request.
     * @returns {*} The request response object.
     */
    this.patch = function (restUri, content, contentType,
                           expectedResponseCodes, headers) {
        reqResponse = this._executeRequest("PATCH", restUri, content,
                                           contentType, expectedResponseCodes,
                                           headers);

        return reqResponse;
    };

    /**
     * A private method that executes the request.
     * @method
     * @private
     * @param {string} restMethod - The request method.
     * @param {string} restUri - The request uri.
     * @param {string} content - The request content.
     * @param {string} [contentType] - The encoding for content.
     * @param {number[]} [expectedResponseCodes] - A list of expected response codes.
     * @param {Properties} [headers] - A key/value set of headers to include in the request.
     * @returns {*} The request response object.
     */
    this._executeRequest = function (restMethod, restUri, content,
                                     contentType, expectedResponseCodes,
                                     headers) {
        var response;
        var maxAttempts = 5;
        var timeout = 10;
        var success = false;
        var statusCode;

        // Default to status code '200' if no expected codes have been defined.
        if (!expectedResponseCodes ||
            (Array.isArray(expectedResponseCodes) &&
            expectedResponseCodes.length < 1)) {
            expectedResponseCodes = [200];
        }

        this._createRequest(restMethod, restUri, content,
                            contentType, headers);

        log.debug("Executing request...");
        log.debug("URL: " + this.request.fullUrl);
        log.debug("Method: " + restMethod);

        for (var i = 0; i < maxAttempts; i++) {
            try {
                response = this.request.execute();
                success = true;
                break;
            } catch (e) {
                System.sleep(timeout * 1000);
                log.warn("Request failed: " + e + " retrying...");
            }
        }

        if (!success) {
            log.error("Request failed after " + maxAttempts +
                      " attempts. Aborting.");
        }

        statusCode = response.statusCode;
        if (expectedResponseCodes.indexOf(statusCode) > -1) {
            log.debug("Request executed successfully with status: " +
                      statusCode);
        } else {
            log.error("Request failed, incorrect response code received: '" +
                      statusCode + "' expected one of: '" +
                      expectedResponseCodes.join(",") +
                      "'\n" + response.contentAsString);
        }

        return response;
    };

    /**
     * A private method that creates the request.
     * @method
     * @private
     * @param {string} restMethod - The request method.
     * @param {string} restUri - The request uri.
     * @param {string} content - The request content.
     * @param {string} [contentType] - The encoding for content.
     * @param {Properties} [headers] - A key/value set of headers to include in the request.
     */
    this._createRequest = function (restMethod, restUri, content,
                                    contentType, headers) {
        var uri = encodeURI(restUri);

        log.debug("Creating REST request...");
        if (restMethod === "GET") {
            this.request = this.restHost.createRequest(restMethod, uri);
        } else {
            if (!content) {
                this.request = this.restHost.createRequest(restMethod, uri);
            } else {
                if (!contentType) {
                    contentType = this.acceptType;
                }

                this.request = this.restHost.createRequest(restMethod, uri, content);
                this.request.contentType = contentType;
            }
        }

        log.debug("Setting headers...");
        this._setHeaders(headers);
    };

    /**
     * A private method that sets the request headers.
     * @method
     * @private
     * @param {Properties} [headers] - A key/value set of headers to include in the request.
     */
    this._setHeaders = function (headers) {
        log.debug("Adding Header: Accept: " + this.acceptType);
        this.request.setHeader("Accept", this.acceptType);
        if (headers && (headers instanceof Properties)) {
            var headerKeys = headers.keys;

            for (var k = 0; k < headerKeys.length; k++) {
                var headerKey = headerKeys[k];
                var headerValue = headers.get(headerKey);

                log.debug("Adding Header: " + headerKey + ": " + headerValue);
                this.request.setHeader(headerKey, headerValue);
            }
        }
    };
}

return HttpClient;
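The retry loop in _executeRequest is worth calling out on its own: it attempts the request up to five times, backing off between failures, and stops as soon as one attempt succeeds. A standalone sketch of the same pattern, runnable outside vRO (the function names here are illustrative, and System.sleep is reduced to a comment):

```javascript
// Retry a request up to maxAttempts times, returning the first successful
// result and throwing once every attempt has failed.
function executeWithRetry(requestFn, maxAttempts) {
    var lastError;

    for (var i = 0; i < maxAttempts; i++) {
        try {
            return requestFn();
        } catch (e) {
            lastError = e;
            // In vRO this is where System.sleep(timeout * 1000) backs off
            // before the next attempt.
        }
    }
    throw new Error("Request failed after " + maxAttempts + " attempts: " + lastError);
}

// Example: a request that fails twice before succeeding on the third attempt.
var attempts = 0;
var result = executeWithRetry(function () {
    attempts += 1;
    if (attempts < 3) { throw new Error("transient failure"); }
    return { statusCode: 200 };
}, 5);
console.log(attempts); // 3
```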

Read more “vRealize Orchestrator: HTTP-REST Client”