vRA 7 – Getting more information in workflows from the vRO ExecutionContext object

When a vRO workflow or action is called from vRA, additional input parameters (in addition to those specified as the workflow/action inputs) are provided in the Execution Context object. These can be very useful as they contain additional data that can be used inside the workflows. A couple of good examples would be the user that requested the resource or the name of the tenant.

Here is a list of parameters provided inside the Execution Context (those without a description I’m still trying to figure out):

Parameter Name - Description
__asd_correlationId - The request ID
__asd_requestInstanceTimestamp - The request date and time
__asd_requestInstanceTypeId - The lifecycle ID
__asd_requestedBy - The UPN of the user who made the request
__asd_requestedFor - The UPN of the user the request was made on behalf of
__asd_targetResourceProviderTypeId - The provider type ID, e.g. com.vmware.csp.iaas.blueprint.service
__asd_targetResourceTypeId - The resource type ID, e.g. 'machine'
__asd_tenantRef - The friendly name of the tenant (e.g. vsphere.local)

Wait, where is ‘__asd_subtenantRef’? More on that in my post here (soon).

You can also see these in the variables tab for the workflow run:

List all parameters

The parameters and their values in the execution context object can be retrieved using the System scripting class. Its 'getContext()' method returns an 'ExecutionContext' object, which has the following methods:

boolean contains(String name)
Object getParameter(String name)
Array parameterNames()

Insert the following code into a scripting task in your workflow to list all the parameters and their values:
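The original snippet isn't reproduced here, so the following is a minimal sketch of what such a scripting task could look like. The logic is wrapped in a function for illustration; inside a vRO scripting task you would call it as `listContextParameters(System.getContext(), System.log)`.

```javascript
// Sketch: list every parameter in the vRO execution context with its value.
// In a vRO scripting task, call: listContextParameters(System.getContext(), System.log)
function listContextParameters(context, log) {
    // parameterNames() returns an array of all parameter names in the context
    var names = context.parameterNames();
    for (var i = 0; i < names.length; i++) {
        log(names[i] + ": " + context.getParameter(names[i]));
    }
}
```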

Retrieve the value for a specific parameter

To retrieve the value of a specific parameter, the following example can be used:

Retrieves the tenant name and sets it to a variable/attribute.
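The snippet itself isn't reproduced here, but a sketch of the idea looks like this. The logic is again wrapped in a function for illustration; in a vRO scripting task the single line `var tenant = System.getContext().getParameter("__asd_tenantRef");` does the job.

```javascript
// Sketch: safely read one parameter from the execution context.
// In vRO: var tenant = System.getContext().getParameter("__asd_tenantRef");
function getContextParameter(context, name) {
    // contains() guards against requesting a parameter that was never set
    if (context.contains(name)) {
        return context.getParameter(name);
    }
    return null;
}
```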

If anyone can shed some light on what the other parameters do then please comment and I’ll update the page.

vRealize Orchestrator cluster nodes not in sync with embedded vRA Appliance

I have come across this issue a number of times where the secondary vRO cluster node is not in sync with the primary node (one thing to note is that the synchronisation status is relative to the node you are logged on to).

The option to synchronise the nodes is not available because I am using the vRO instance that comes embedded with the vRA appliance. The reason this option is not available is that vRA "should" be managing the state of the cluster nodes, which it does from a vRO client perspective, but not when you make changes via the Control Center (such as changing the Admins group).

To work around this issue and unlock the hidden options, you will need to append "?advanced" to the end of the URL. For example, if you are on the Orchestrator Cluster Management page, add ?advanced to the end of the URL, which should look like this:


When the page refreshes you will notice that a new button has appeared at the bottom.

Clicking on this button will reveal two new options:

  • Push Configuration
  • Push Configuration and restart nodes

Select “Push Configuration and restart nodes”, which will push the configuration and automatically restart the vRO service on the secondary node.

A message will be displayed if this is successful.

It will take approximately 10 minutes for the secondary node to start up completely, so grab a coffee at this point.

Refresh and both nodes should now show as “Synchronized”.

Resolve vRA/vCAC VM to vCenter VM

If you want to perform configuration tasks against a VM during the deployment process or even for Day 2 operations then you will need to resolve the vRA managed VM to a vCenter VM of type VC:VirtualMachine in your vRO workflows. I initially required this when applying tags to a VM during the BuildingMachine LifeCycle POST state, which involved using the VAPI endpoint.

vRO with the vRA plugin installed makes two modules available, each with some pre-defined actions:

  • com.vmware.vcac.asd.mappings
  • com.vmware.vcac.asd


The two actions that we are interested in are ‘mapToVCVM‘ and ‘findVcVmByUuid‘. Let’s take a look at these two actions.


mapToVCVM
Inputs: VMProperties (Properties)
Return type: VC:VirtualMachine

This action has an input ‘VMProperties‘ of type ‘Properties‘, which is passed to it by the calling workflow or action. If you don’t know what this is, it is the vRA payload sent to vRO by the Event Broker Service (EBS). I’ll post another article at a later date on how I am doing this if you’re not sure. The first line of the script extracts the custom property ‘VirtualMachine.Admin.UUID’, which returns the unique UUID of the virtual machine (in previous versions of this action, the MoRef was used which is not unique across multiple vCenter Servers) and stores the value in ‘vmUuid‘.

The second line returns the result of ‘System.getModule(“com.vmware.vcac.asd”).findVcVmByUuid(vmUuid)’ of type VC:VirtualMachine. This code calls another module/action and passes ‘vmUuid‘ as an input (refer to the previous image if you’re unsure what is happening here and it should make sense). The result of this is returned to the original calling workflow/action as a VC:VirtualMachine type, which will be the vCenter managed VM object.
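Put together, the two lines described above amount to something like the following sketch. It is wrapped as a function here so it can be exercised anywhere; in vRO, the action body contains just the two statements, with 'VMProperties' as the action input and the module resolved via System.getModule("com.vmware.vcac.asd").

```javascript
// Sketch of the mapToVCVM action body as described above (not the shipped source).
function mapToVCVM(vmProperties, asdModule) {
    // Line 1: extract the VM's globally unique UUID from the vRA payload.
    var vmUuid = vmProperties.get("VirtualMachine.Admin.UUID");
    // Line 2: resolve the UUID to a VC:VirtualMachine via the
    // com.vmware.vcac.asd module's findVcVmByUuid action.
    return asdModule.findVcVmByUuid(vmUuid);
}
```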


findVcVmByUuid
Inputs: VmUuid (String)
Return type: VC:VirtualMachine

This action has an input ‘VmUuid‘ of type ‘String‘. As you saw from the ‘mapToVCVM‘ action, this string is passed over with the value of the UUID for the virtual machine. The action then discovers the configured vCenter endpoints and performs a search on each for the VM until a result is found and returns it.
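Conceptually, the search described above looks something like this sketch. The real action delegates to the vCenter plug-in's search facilities; 'findByUuid' here is a hypothetical stand-in for that per-endpoint lookup.

```javascript
// Sketch: search each configured vCenter endpoint until the VM is found.
function findVcVmByUuidSketch(vmUuid, vcConnections) {
    for (var i = 0; i < vcConnections.length; i++) {
        // Hypothetical per-endpoint lookup; the real action uses the
        // vCenter plug-in's UUID-based search on each connection.
        var vm = vcConnections[i].findByUuid(vmUuid);
        if (vm) {
            return vm; // first match wins
        }
    }
    return null; // VM not found on any endpoint
}
```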

At this point you have probably worked out that you don't need to use 'mapToVCVM' at all: you can simply extract the UUID yourself and then call the 'findVcVmByUuid' action directly.


What vRealize Automation is trying to solve

Before purchasing or installing the software, one of the first things you need to understand is what vRA is trying to solve. Unlike what VMware has released previously, vRA is very different: it tries to solve the problems of managing multiple cloud platforms and the end to end delivery of IT services. Yes, there are a lot of tools out there that can help in the delivery of these services, but it very quickly starts to turn into a complex mishmash of applications and processes. It's also important to point out that vRA is not exclusive to vSphere but is designed to be platform agnostic. Think of vRA as the management plane in your data centre that plugs into all the services around it.

So when I talk about ‘end to end delivery of IT services’ what exactly am I referring to? Well, consider the example process below of bringing a new virtual server into service:

  • End user requests new virtual server, provides details and submits request to IT support services;
  • A change is raised on the ITSM CMDB platform;
  • Wait for change to be approved before provisioning starts (alternatively the request could be rejected and process cancelled);
  • When approved, begin the provisioning process;
    • Get the next available hostname (i.e. using a prefix and the next available number increment);
    • Request and reserve the next available IP address from the IPAM software and update the record to reflect the new server being provisioned;
    • Provision the virtual server with specific hardware configuration;
    • Add/update DNS records;
    • Add server to the Active Directory domain and place into the correct Organization Unit;
    • Update OS with latest security patches;
    • Add to existing firewall rules;
    • Generate SSL certificates;
    • Install software such as Anti-Virus and applications;
    • Add new Configuration Item with server details;
    • Update server inventory;
    • Add to monitoring system;
    • Mark change as complete;
  • Email End user that requested the virtual server that it is now ready;

You'll agree that this is a lot of steps for provisioning something like a virtual server, and it will involve multiple different teams (and think about doing this on many different cloud platforms). Most of the time, a lot of these steps are performed manually using run books, but they are sometimes overlooked and can cause delays in fulfilling the request or lead to problems later, like outdated information. We have tools like Puppet which help us with deployments, software installations, etc., but most of these are very Linux focused. They also do not address the other steps, or at least not in a very friendly way.

Finally, and more interestingly, think about when the virtual server is taken out of service or decommissioned. This is almost always where things get messy. I have seen really well laid out and robust commissioning procedures but nothing to support the decommissioning. Often, it's a case of 'best effort': items get missed and are usually discovered later, often after they have caused a problem or been identified by routine audits of DNS or firewalls. In some cases this could even pose a security risk. When we think of all of this, what we are actually referring to is Compliance, Governance and Life Cycle Management.

My advice here is that even before you touch vRA, you need to think about how IT services are being provisioned within your organisation. This will involve engaging with the various operations and development teams to weed out every single process that is involved: how services are delivered, maintained, monitored and secured. You will need to fully understand the manual processes and any automation that exists. Secondly, you will need to understand all of the applications being used to support these teams and how you will potentially interface with them during automated deployments.

vRealize Automation is designed to solve all of these problems. It is a fully extensible platform that allows administrators to create policy-driven processes for the provisioning of IT services. It does this whilst providing a catalog of available services that end users can request by providing some details and a couple of clicks. Administrators are armed with a powerful Advanced Service Designer tool that allows items such as virtual servers or software components to be dragged and dropped onto the design canvas and later published to the catalog. These 'designs' are what vRA refers to as blueprints.


A Virtualization Engineer’s Journey into the World of DevOps with vRealize Automation

Before I start this post I want to put it into perspective. Up until now I have been working in infrastructure roles for a number of years, specialising in virtualization (mostly VMware), servers, storage and networking. A lot has changed recently and we can't go a day without hearing or reading about "DevOps". I don't want to get into what DevOps is and isn't, as there is plenty on Google for that, but what is clear is that the role of the system admin / virtual engineer / [insert infrastructure role here] is changing, and fast.

I had actually planned for this to be just two paragraphs long and the post was meant to have a slightly different focus. However, that didn’t quite go as planned, so happy reading!

We are now in the age of 'Cloud Computing' and need applications that are cloud native and can be moved between cloud providers with ease. We're in a state of transition and it may take some time to get there; until then, everything is moving towards a 'hybrid' cloud model. As part of this, as infrastructure engineers, we need to be able to deliver infrastructure and services quickly and efficiently. Doing things manually, following a run book or similar, is no longer desirable, and we need to find a way to automate the end to end delivery of these services. As engineers we need to bridge the gap between operations and development. I'm not suggesting that we need to be developers, but we need to be more closely aligned and have a much better understanding of the development life cycle and delivery model.

The virtualization model allowed us to deliver Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS) and so on. In the cloud model this has been extended to Everything as a Service (XaaS) and even serverless architectures! Now the possibilities are endless, and we need to start delivering hybrid IT services under this new model. Here are some examples (and nowhere near limited to these):

  • Database as a Service;
  • Email as a Service;
  • Security as a Service;
  • Docker as a Service;
  • Operation services (user creation, mailbox creation, 3rd party application authorisation);
  • Enhancing IaaS and PaaS delivery with tighter integration into IPAM software (i.e. SolarWinds IPAM), ITSM CMDB (i.e. ServiceNow) and monitoring systems;

There are also a lot of tools out there today, typically referred to as ‘Continuous Delivery’ applications that can help us on our journey, such as (again not limited to):

  • Puppet
  • Chef
  • Ansible
  • Salt

These applications allow us to treat our infrastructure as code and automate the delivery of IT infrastructure at the touch of a button. Whilst these applications are extremely powerful and useful, they do not by themselves solve all the problems of delivering hybrid IT services.
