Upgrading External vRO Appliance to 7.5 (for vRA 7.5)

Oh what a joy of an upgrade this one was. Not that it was too difficult, just a tad annoying that an in-place upgrade was not possible (or maybe just not supported, I dunno) and the only way to upgrade to vRealize Orchestrator 7.5 was to deploy a new external appliance and migrate the content and settings across. Anyway, I thought it might be useful to others to document the process that I followed to achieve this (and as a future reminder to myself).

Before you begin

This process documents the steps to upgrade a single, standalone appliance, but the steps are almost identical for a cluster. If you are using a cluster, ensure the load balancer is disabled and that the databases are in sync before proceeding. You are responsible for ensuring that all of your appliances, virtual machines, databases, etc. are backed up and/or snapshotted before attempting the upgrade.

Make sure to also snapshot the target vRO server! If anything goes wrong during the migration, then you will need to revert to this snapshot before you can attempt the migration again.

Deploy the vRealize Orchestrator 7.5 Appliance

Nothing special here, just deploy the OVA package for the 7.5 appliance and configure it so that you can access it on the network. Make sure to get all the DNS entries in place so that the vRO appliances can resolve each other.

Enable SSH Access

You will need to ensure that SSH is enabled on both the source and destination vRO appliances. If you need to do this post-install, then log into the VAMI interface of the vRO appliance (https://vro:5480) and go to the ‘Admin‘ tab.

You need to ensure that ‘SSH service enabled:‘ and ‘Administrator SSH login enabled:‘ are both checked and click ‘Save Settings‘.

Stop Source vRO Service and Configure vPostgres

Open an SSH session to the source vRO appliance (if you’re using Windows, PuTTY is a decent SSH client). Once logged in, perform the following:

Shutdown vRO service

Run the command ‘service vro-server stop‘. This will perform a graceful shutdown of the vRO service.

Allow vPostgreSQL to listen on all interfaces

The vPostgres service, by default, is set to listen only on the loopback interface, which means it is not accessible from outside the appliance. It needs to be set to listen on all interfaces to allow the migration to the target vRO server to occur; otherwise the validation will fail with the error message ‘Failed to validate the source vRealize Orchestrator database. org.postgresql.util.PSQLException: Connection to 1.1.1.1:5432 refused.‘

Edit the ‘/var/vmware/vpostgres/current/pgdata/postgresql.conf‘ file.

And add the following to the end of the file:
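
The setting that matters is listen_addresses (only the loopback interface is allowed by default); a line like this is enough:

# listen on all interfaces so the target vRO appliance can reach this database
listen_addresses = '*'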

Then restart the vpostgres service:
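
On the appliance this should just be:

service vpostgres restart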

Allow Target vRO Server to Access Source Server vPostgres Database

An ACL file controls what can access the vPostgres instance and from where. By default, it is set to trust the local server only. This needs to be changed to allow the target vRO server to access the database.

Edit the ‘/var/vmware/vpostgres/current/pgdata/pg_hba.conf‘ file.

Add the following to the end of the file (change the IP address to that of your target vRO server).
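
The entry takes the standard pg_hba.conf form; 192.168.10.50 below is just a placeholder for the target vRO server’s address (and md5 is one choice of auth method, adjust to suit your environment):

# allow the target vRO appliance to connect to the source database
host    all    all    192.168.10.50/32    md5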

Then restart the vpostgres service again, using the same command as before.

Perform Migration to Target vRO Server

Log in to the VAMI page for the target vRO server. Once logged in, go to the ‘Migrate‘ tab. Enter the details for the source and target vRO servers and then click ‘Validate‘.

If everything checks out OK then click ‘Migrate‘.

I had an issue the first time that I tried to do this. For some reason the ‘Reinstall the vRealize Orchestrator plug-ins on local node‘ task failed with the error ‘execve() arg 3 contains a non-string value‘. I reverted my snapshot and tried the migration again and it worked without issue. Not sure why this happened but it seemed random.

Configure Control Center on Target vRO Server

Before proceeding further, it would be wise to make sure that the old (source) vRO server has been switched off, so that you don’t accidentally connect vRA to the wrong appliance. Also, make sure that your load balancers and DNS records have been updated to point to the new vRO server(s).

Open a browser to the vRO landing page (https://vro:8281/vco/) and click on the link for ‘Open Control Center‘ under ‘Configure the Orchestrator Server‘. Once you have logged in, the configuration wizard will automatically run and take you through the steps required to get the vRO server in a ready state.

Host Settings

Set the hostname to the load-balanced DNS address or, if not using a cluster, the hostname of the vRO server (I’m using a CNAME alias for my standalone server), then click ‘Apply‘ and then ‘Next‘.

Configure Authentication Provider

Set the ‘Authentication mode‘ to ‘vRealize Automation‘ and enter the address for the vRA appliance or cluster. Accept the certificate if using self-signed certificates.

Configure the identity service with a user that has tenant administrator rights for the default tenant you are configuring this instance for (I’m using vsphere.local in my lab but I also add additional tenants to this vRO server later) and then click ‘Register‘.

Finally, select an ‘Admin group‘; users who are members of this group will gain administrative privileges on the vRO server. Click ‘Save Changes‘.

The vRO services will automatically restart and apply the changes, so grab a coffee and wait a good 10 minutes for everything to come back up.

Log into vRealize Orchestrator Client

Open a browser to the vRO landing page (https://vro:8281/vco/) and click on the link to ‘Start the Orchestrator Client‘ to use the Java web app, or ‘Download the Orchestrator Client‘ to download a local copy of the Java client.

Once the Orchestrator Client appears, you should be able to log in with an account that is a member of the Admin group that was configured previously.

 

I appreciate that many of you performing this upgrade are likely doing so within complex vRA or clustered environments. There are so many things that could go wrong and I would be willing to offer advice if you get in touch with me. Good luck!

vRealize Automation: IaaS & Understanding the Entity Framework


vRealize Automation 7.x is currently in a sort of ‘split brain‘, where two data models exist for interacting with vRA objects. One covers objects backed by the Cafe appliance / PostgreSQL database, and the other uses the older Entity Framework (the IaaS servers).

This post is going to focus on the Entity Framework, which is still very relevant when working with this version of vRA. Many things still do not exist in the newer data model, such as custom properties and data collection. I still see vRO/vRA developers struggle with this, so I hope to help improve the situation.

The Entity Framework

When I first worked with vRA, I struggled to understand how objects were stored and manipulated in the database. I often came across a common object class called an entity. I later discovered that all objects stored in the vRA database are considered ‘entities‘, because vRA has been developed with Microsoft’s Entity Framework. A brief description, taken from http://www.entityframeworktutorial.net:

The Microsoft ADO.NET Entity Framework is an Object/Relational Mapping (ORM) framework that enables developers to work with relational data as domain-specific objects, eliminating the need for most of the data access plumbing code that developers usually need to write. Using the Entity Framework, developers issue queries using LINQ, then retrieve and manipulate data as strongly typed objects. The Entity Framework’s ORM implementation provides services like change tracking, identity resolution, lazy loading, and query translation so that developers can focus on their application-specific business logic rather than the data access fundamentals.

Entity framework is an Object/Relational Mapping (O/RM) framework. It is an enhancement to ADO.NET that gives developers an automated mechanism for accessing & storing the data in the database.

The ‘domain-specific objects‘ reference is key here; domain objects are defined as:

Domain objects are represented by entities and value objects that exist within a domain layer. These objects contribute to a common model and are exposed as a data service, which is also provided by the entity framework.

The entity framework is a layer of abstraction that sits on top of the underlying relational database (SQL Server). This abstraction allows developers to work within a standard framework. Yes, you could run SQL queries on the underlying database directly, but this gets really ugly and isn’t supported.

LINQ is Microsoft’s .NET Language-Integrated Query.

What is also important to note is that entities follow some form of continuity and identity (i.e. they must all have certain attributes, such as an ID field or callable methods). This standard allows for a consistent interaction with the domain objects.

In the case of vRA, all domain-level objects (entities) are provided under the ‘ManagementModelEntities.svc‘ data service model. Within this data service model, entities are organised into their own ‘tables’, known as ‘Entity Sets’, and entities can also link to (relate to) other entities. Getting an understanding of the entities will make your life as a vRA/vRO developer so much easier.

Browsing the Data Service Model with LINQPad

The data service model can be accessed via the following URL:

https://iaas_web_server/Repository/Data/ManagementModelEntities.svc

Although it is possible to perform GET requests against this URL and browse the entities and entity sets, a much more elegant solution is to use an application called LINQPad.

LINQPad is a tool that can connect to a .NET data source and execute LINQ queries. This tool is extremely useful to view and discover the vRA entities that exist under the ‘ManagementModelEntities.svc‘ data service (or any data service). You will often have a requirement to understand which entities exist and their associated properties. Many entities also relate (link) to each other, so understanding this can be very powerful.

Download LINQPad from the following URL. It doesn’t matter if you use version 4.x or 5.x as both will do the job, so just get the one that supports the version of .NET you have installed.

https://www.linqpad.net/

Once you have LINQPad installed, launch it and add a new connection, selecting ‘WCF Data Services 5.5 (OData 3)‘ as the data context.

The next step is to provide the URI of the IaaS Web Server (the IIS server) and an IaaS admin account (I’ll be using my own account that has IaaS admin access). Check ‘Remember this connection‘ so that the connection details are saved for future use.

You may also want to click on the ‘Advanced‘ button to ‘Accept invalid certificates‘ if self-signed certificates are being used.

After you click ‘OK‘, a connection will be established and the entity sets will be presented (that look like tables).

You’re likely not going to care much about most of these. Many of the entity sets have become redundant as they have been migrated to the Cafe appliance. Here is a list of the most common objects/entities that you will probably be working with, along with their corresponding entity set names (additional sets will also be used when linking entities, but I won’t cover them here).

Object                                              Entity Set Name
------                                              ---------------
Virtual Machines                                    VirtualMachines
Virtual Machine Properties (aka Custom Properties)  VirtualMachineProperties
Reservations                                        HostReservations
Reservation Policies                                HostReservationPolicies
Storage Policies                                    HostStorageReservationPolicies
Compute Resources (clusters/hosts)                  Hosts
Storage                                             HostToStorage
Data Collection                                     DataCollectionStatuses
I want to make a special note on a couple of these:

  • Custom Properties – Each custom property is stored as an individual entity in the VirtualMachineProperties entity set. If you had a virtual machine, with 30 custom properties, then there would be 30 entities created in this set. If you had another virtual machine with 30 custom properties, then 60 custom property entities would now exist. Although you will see a lot of duplicated custom properties, they are unique entities and maintain a mapping to their respective virtual machine.
  • Data Collection – These entities do not have a link to the compute resources. Instead, each data collection entity’s ID is the same as the ID of the compute resource entity that it manages.

So let’s explore these a bit more. We’re going to dig into the Reservation entities and see how these are presented. The first thing to do is locate the ‘HostReservations‘ entity set in the list.

Click on the little + icon and it will display all the properties available for each reservation. If you have ever created or worked with reservations in vRA, then you will be familiar with properties like ‘ReservationMemorySizeMB‘ or ‘ReservationPriority‘.

The blue and green properties are entity links, i.e. incoming or outgoing links from this entity to other entities. For example, the blue link ‘Host‘ is an outgoing link to a single compute resource cluster or host, whereas the green link ‘VirtualMachines’ receives incoming links from one or more virtual machines. The cardinality between the entities is also displayed, i.e. reservations have a many-to-one relationship with Host. I will cover this in much more detail in future posts that discuss working with entities, properties and links at a much deeper level.

Next, let’s take a look at some existing reservations. Right click the ‘HostReservations‘ entity set and select ‘HostReservations.Take (100)‘.

The results pane is going to display the first 100 reservation entities that have been found. Each entity will be displayed per row with a column representing the properties for this entity.

You can see that I only have one reservation, called ‘RES-SG-BG_Delivery-vSphere-01’, which has been configured with 4096MB of memory and 100GB of storage.

The links, however, will all be displayed as either null or (0 items). These can be expanded using the ‘Expand‘ method for the LINQ query (‘Include’ would have been an even better option, but that method is not supported for our use). Expand takes a comma-separated string of link names that should be expanded.

Modify the query as follows to display the Host and Reservation Policy.

HostReservations.Expand("Host,HostReservationPolicy")

This will now return the reservation entities and this time the links for ‘Host‘ and ‘HostReservationPolicy‘ will be populated.

Continue to explore and get familiar with the entities and their properties.

In my next post I will cover how to work with the entities in vRO and will include some actions that I have created that allow me to easily interface with the entity manager.

vRealize Orchestrator: Standardised Logger Action

One bugbear that I have with vRO is the limitation around system (console) logging. There is currently no way to dynamically output the name of an action or sub-workflow (see the end of this post). I like to see exactly which action or workflow is executing code, so that it is easier to find that code when I am troubleshooting from the output logs.

It is possible to use ‘workflow.name’ or ‘this.name’ inside an action, but this will always be set to the name of the initial workflow that was executed, because the workflow object is implicitly passed to the action that is called. The result is that it looks like all the code is executing from the workflow (which is technically true, but I needed more granularity).

I therefore created a standardised way to handle workflow and action logging, using an action that does all the logging for me. The idea is that an action or workflow calls the ‘logger‘ action with a few parameters, allowing for a consistent and useful logging experience.

Here is some example output of the logger action in use:
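
Something along these lines (the action name and messages are just placeholders; vRO adds its own timestamp and severity to each entry):

[Action: getVirtualMachineNames] Retrieving virtual machine names...
[Action: getVirtualMachineNames] Found 3 virtual machines.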

You can clearly see that the outputs are from an ‘Action‘ followed by the name of the action and then the actual log message.

Below is the logger action that I am using (this is the only action I have that does not conform to my template).

The action takes 4 parameters (inputs):

  • logType – Should be set to ‘Action‘ or ‘Workflow‘.
  • logName – Should be set to the ‘Action‘ or ‘Workflow‘ name.
  • logLevel – The log level, one of ‘log‘, ‘warn‘, ‘error‘, ‘debug‘.
  • logMessage – The message to output to the console (can also be a non-string such as an object)
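
A minimal sketch of such an action, built around those inputs (the message format here is just one way of doing it):

// Inputs: logType (string), logName (string), logLevel (string), logMessage (any)
var validLogLevels = ["log", "warn", "error", "debug"];
var level = logLevel;

// default to 'log' if no valid level was provided
if (!level || validLogLevels.indexOf(level) === -1) {
    level = "log";
}

var message = "[" + logType + ": " + logName + "] " + logMessage;

switch (level) {
    case "debug":
        System.debug(message);
        break;
    case "warn":
        System.warn(message);
        break;
    case "error":
        System.error(message);
        break;
    default:
        System.log(message);
}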

To use this action in any of my other actions, I place the following lines of code inside the variable block.
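
Something like the following, assuming the logger action lives in a module such as com.simplygeek.library.util (adjust the module path and action name to your own environment):

var logType = "Action";
var logName = "getVirtualMachineNames"; // set this to the name of the action
var logger = System.getModule("com.simplygeek.library.util").logger; // path to your logger action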

And simply use the following line anywhere/everywhere that I want to perform some form of logging output.
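
For example (the message text is obviously just a placeholder):

logger(logType, logName, "log", "Retrieving virtual machine names...");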

You can see that I am passing 4 parameters to the ‘logger‘ function, which match the inputs for the logger action that is being called. In the above example, “log” has been set, but this can also be any of “debug“, “error” or “warn“. It will default to “log” if for some reason no valid value has been specified.

There is one overhead that you are probably thinking about: yes, you need to manually set the variable ‘logName‘ to match the name of the action itself. This seems a bit tedious at first, but in my experience, action names very rarely change.

Identifying the action name from the underlying Rhino function

I mentioned at the start of this post that there is no way to dynamically set the name of the action. There is, however, a way in which this can be achieved using the following:
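
The trick relies on arguments.callee; a sketch of the idea (the exact prefix that vRO applies to the generated function name is an assumption here, so treat the string handling as illustrative):

// arguments.callee is the Rhino function that vRO generates to wrap this action's script
var wrapperName = arguments.callee.name;

// the action name is embedded after a generated prefix
var actionName = wrapperName.substring(wrapperName.lastIndexOf("_") + 1);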

All vRO actions are just functions implemented in Rhino, with a prefix applied. The above code retrieves the function name and extracts the action name from it.

However, it is highly recommended that you do not use this as it has been restricted in future releases of ECMAScript, could potentially be removed from vRO and is very costly to execute, which is going to slow your workflows down considerably.

I hope this is useful and as always, any suggestions or comments then please let me know.

vRealize Orchestrator: Standardising Modules & Actions

Managing your code base in vRealize Orchestrator can be quite challenging and complex. Often, you won’t realise this until you’ve reached a point where it becomes difficult and time-consuming to organise or locate existing code that you have written. In this post, I am going to suggest ways to help you organise your code better, using methods that I have adopted during my time with vRO.

I am not suggesting this is the perfect solution, but it should provide a working standard that you can adapt to your own needs. I would also argue that the extra time spent getting this in place at the outset will lead to time saved later on.

Just for reference, from this point on, I am going to refer to ‘code base‘ and ‘actions‘ interchangeably, because your vRO actions ARE your code base. Almost every single line of code you write in vRO should be in an action (I will discuss this in more detail in a later post which I will link here).

Modules

Modules are quite a simple topic but it’s important to get them right as they are almost impossible to change later. Modules are used to organise or group a collection of actions together by a common function.

Here are some general principles I like to follow when creating modules. As a general rule, modules should:

  • Conform to a standard naming convention;
  • Allow developers to easily find existing actions or see where to create new ones. Failure to address this point will result in developers re-inventing the wheel by writing new actions that already exist;
  • Always be lower case alphabetical characters using dot notation;
  • Always have a utility module for storing general ‘helper’ actions, e.g. an action that returns a unique array of items.

So generally, if you follow a standard naming convention for your module names then you’ll be set. I have found that the following naming convention works well:

com.[company].library.[component].[interface].[objects].[object]

Where:

[company] = Company name as a single word

[component] = Examples are: NSX, vCAC, Infoblox, ActiveDirectory

[interface] = Examples are: REST, SOAP, PS (this is not needed if using a plugin)

[objects] = Parent object branch of the component, e.g. Edges, Entities, Networks, Groups

[object] = A property or object branch of the parent objects. These can contain actions that target a single object (singular)

Here is an example of what (some of) my Zerto REST API code library looks like:
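
It follows the convention above, so the branches look something like this (illustrative; the object names mirror whatever parts of the Zerto API are being consumed):

com.simplygeek.library.zerto.rest.vpgs
com.simplygeek.library.zerto.rest.vpgs.vpg
com.simplygeek.library.zerto.rest.vms
com.simplygeek.library.zerto.rest.vms.vm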

You could just create a module where the branch stops at [interface] or [objects], but what you’ll find is that it will become a dumping ground for dozens of actions, which will become difficult to maintain. Adding additional branches helps break the modules up.

The strategy for creating these branches usually follows the interface/API structure, which helps align your modules quite nicely. A good module branching strategy can also provide execution performance benefits, by calling the module once for a smaller number of required actions.

Actions

Here are some general principles I like to follow when creating actions. As a general rule, actions should:

  • Contain small, manageable chunks of code that perform a specific task. Actions are just functions and just like any function, it should contain code that performs a specific task. If your action is doing many different tasks, then consider breaking these down into multiple, smaller actions.
  • Validate inputs. I appreciate that some may debate this idea, but actions are not ‘private functions’. They are public code, where you can never guarantee that the action ‘caller’ is properly validating its inputs. This is the nature of vRO: it is a ‘hub’ that has many different use cases and scenarios for executing the same actions. I have seen dozens of cases where developers and support engineers have wasted time tracking down unexpected errors;
  • Be named appropriate to the task they perform. I generally like to use verbs in my action names, like, ‘getVirtualMachineNames’, ‘getVirtualMachineNetworks’ or ‘setCustomProperty’. Actions named this way will make it easier for other developers to identify what they are used for;
  • Have variables declared in a single block. This will just make it easier to see what variables are being used. The data type can also be defined, but is not always necessary or as important;
  • Provide consistent logging throughout. Make it so the action almost tells the story of what is happening. Don’t go overboard, but generally a before, during and after style of logging works quite well;
  • Nesting actions within an action is generally ‘OK’ but keep it to a minimum if possible. Too many nested actions can create depth that may be more difficult to maintain and troubleshoot later on. Typical use cases are ‘helper’ or ‘utility’ actions (you’ll be completely forgiven for actions used for workflow presentation, as these are a pain);
  • Perform singular tasks. Don’t write actions that perform plural tasks. Write the singular version first, then use a looping mechanism that re-uses the singular action (there are also ways this can be achieved with performance in mind in vRO). This way you’ll only have 1 version of the code;
  • Be based on a user-defined template. Yup, I’m not crazy. Have a defined template (aka boilerplate) set out on how an action should generally look and have the team follow this. It will make code reviews far easier;
  • Always be camel cased alphabetical characters (no dots);

If you adopt the above principles, you’ll have actions that will be much easier to understand, maintain and troubleshoot and everyone will be a happy bunny.

Action Example

Here is a working example of an action that has been based on a template.

Action Template

And here is the template:
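
A rough sketch of what such a boilerplate can look like, based on the principles above (the action name, input and logger module path are placeholders):

// Action: getSomethingUseful
// Inputs: inputObject (Any)
// Return type: String

// variable block - declare everything the action uses in one place
var logType = "Action";
var logName = "getSomethingUseful"; // keep in sync with the action name
var logger = System.getModule("com.simplygeek.library.util").logger; // path to your logger action
var result;

logger(logType, logName, "log", "Action started.");

// validate inputs - never assume the caller has done this for you
if (inputObject === null || inputObject === undefined) {
    logger(logType, logName, "error", "The input 'inputObject' was not provided.");
    throw "Invalid input: inputObject";
}

// main logic - a single, specific task
result = "something useful";

logger(logType, logName, "log", "Action completed.");

return result;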

If you’re wondering about the ‘logger‘ action, you can read my post here.

I hope this helps provide some standards around your vRO code base. If you have suggestions, comments, etc. then please let me know, as I’ll be glad to hear them and value every bit of feedback.

Using the indexOf() method for Arrays and Strings for vRO and vRA

I’ve never been a developer, so getting into JavaScript was quite a challenge at first and I probably always took the longest route possible to achieve something. As I use it more and more, I am picking up neat little tricks and uses for built-in methods that make my life easier.

In the world of vRA and vRO, I find that most of my time is spent iterating over arrays or parsing custom properties. One method that I have come to find extremely useful is indexOf(), which is available on both Arrays and Strings. The methods are very similar but have quite different use cases. Let’s take a look at each of them in turn.

String indexOf() Method

w3schools.com defines this as:

The indexOf() method returns the position of the first occurrence of a specified value in a string.

This method returns -1 if the value to search for never occurs.

So as an example, if we had the string “simplygeek.co.uk is fun”

string.indexOf("m") would return 2, which is the index within the string at which ‘m’ first appears. Note that if ‘m’ appeared more than once, only the first occurrence would be returned. Indexes within arrays and strings always start at 0.

Another use case, one which I find the most useful, is being able to provide a string for the lookup. Take the following example:

string.indexOf("simplygeek") would return 0, because the index of the first character of the matching substring is returned.
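
Put together in vRO JavaScript, the behaviour looks like this:

var str = "simplygeek.co.uk is fun";

System.log(str.indexOf("m"));          // 2  - index of the first 'm'
System.log(str.indexOf("simplygeek")); // 0  - index where the substring starts
System.log(str.indexOf("notthere"));   // -1 - no match found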

When writing JavaScript that interacts with vRA you are often required to parse through custom properties, which are key value pairs of data. Such properties can contain useful information that relates to a deployment, such as virtual machine configuration. If custom properties follow a standardised naming convention, it can be easy to discover a set of properties. Let’s assume I have created the following custom properties in vRA for a deployment:

Custom.Deployment.Virtualmachine.Config.hotcpu : true
Custom.Deployment.Virtualmachine.Config.hotmem : true
Custom.Deployment.Virtualmachine.Config.sched.swap.vmxSwapEnabled : true

When the payload is sent to my vRO workflow, it could contain over 100 different key:value pairs of data. To find the ones I care about, I can use the indexOf() method to iterate over each pair as follows:
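
A minimal sketch, assuming the custom properties have been loaded into a vRO Properties object called vmProperties (how you extract that from the payload depends on your subscription):

var configPrefix = "Custom.Deployment.Virtualmachine.Config.";
var vmConfigProperties = [];

// 'keys' holds every property name in the Properties object
for each (var key in vmProperties.keys) {
    // indexOf returns 0 when the key starts with the prefix we care about
    if (key.indexOf(configPrefix) === 0) {
        vmConfigProperties.push(key + " : " + vmProperties.get(key));
    }
}

System.log("Found " + vmConfigProperties.length + " virtual machine config properties.");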

The above will result in an array of properties related to virtual machine configs. I can then pass this array to some code that will handle the implementation of these advanced virtual machine settings. This allows for a very dynamic way to manage custom properties in property groups within vRA.

Array indexOf() method

Very similar to the String method, on an array the indexOf() method returns the first index at which a given element can be found, or -1 if it is not present. I find this method useful when I need to return a set of unique values from another array. Let’s assume we have the following array:

myArray = ['one', 'two', 'two', 'three', 'three', 'three']

If I wanted to return only unique items from myArray, I could use the indexOf method as follows:
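
One straightforward way:

var myArray = ["one", "two", "two", "three", "three", "three"];
var uniqueArray = [];

for (var i = 0; i < myArray.length; i++) {
    // only add the item if it has not been seen before
    if (uniqueArray.indexOf(myArray[i]) === -1) {
        uniqueArray.push(myArray[i]);
    }
}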

The above code will result in an array:

['one', 'two', 'three']

I hope that someone else finds these as useful as I have. If you know of more use cases, then please let me know.