vRealize Automation: IaaS & Understanding the Entity Framework


vRealize Automation 7.x currently has a sort of ‘split brain’, where two data models exist for interacting with vRA objects. One covers objects backed by the Cafe appliance / PostgreSQL database, and the other uses the older Entity Framework model hosted on the IaaS servers.

This post is going to focus on the entity framework, which is still very relevant when working with this version of vRA. There are many things that still do not exist in the newer data model, such as custom properties and data collection. I still see vRO/vRA developers struggle with this, so I hope to help improve the situation.

The Entity Framework

When I first worked with vRA, I struggled to understand how objects were stored and manipulated in the database. I often came across a common object class called an entity. I later discovered that all objects stored in the vRA database are considered ‘entities’. This is because vRA has been developed with Microsoft’s “Entity Framework”. A brief description, taken from http://www.entityframeworktutorial.net:

The Microsoft ADO.NET Entity Framework is an Object/Relational Mapping (ORM) framework that enables developers to work with relational data as domain-specific objects, eliminating the need for most of the data access plumbing code that developers usually need to write. Using the Entity Framework, developers issue queries using LINQ, then retrieve and manipulate data as strongly typed objects. The Entity Framework’s ORM implementation provides services like change tracking, identity resolution, lazy loading, and query translation so that developers can focus on their application-specific business logic rather than the data access fundamentals.

Entity framework is an Object/Relational Mapping (O/RM) framework. It is an enhancement to ADO.NET that gives developers an automated mechanism for accessing & storing the data in the database.

The reference to ‘domain-specific objects’ is key here; domain objects are defined as:

Domain objects are represented by entities and value objects that exist within a domain layer. These objects contribute to a common model and are exposed as a data service, which is also provided by the entity framework.

The entity framework is a layer of abstraction that sits on top of the underlying relational database (SQL Server). This abstraction allows developers to work within a standard framework. Yes, you could run SQL queries on the underlying database directly, but this gets really ugly and isn’t supported.

LINQ is Microsoft’s .NET Language-Integrated Query language.

What is also important to note is that entities maintain identity and continuity, i.e. they must all have certain attributes (such as an ID field) and callable methods. This standard allows for consistent interaction with the domain objects.

In the case of vRA, all domain-level objects (entities) are provided under the ‘ManagementModelEntities.svc‘ data service model. Within this data service model, entities are organised into their own ‘tables’, known as ‘Entity Sets’, and entities can also link to (relate to) other entities. Getting an understanding of the entities will make your life as a vRA/vRO developer so much easier.

Browsing the Data Service Model with LINQPad

The data service model can be accessed via the following URL:

https://iaas_web_server/Repository/Data/ManagementModelEntities.svc

Although it is possible to perform GET requests against this URL and browse the entities and entity sets, a much more elegant solution is to use an application called LINQPad.
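
For completeness, a raw GET might look something like this with curl (a rough sketch, assuming the IaaS web server accepts Windows/NTLM authentication; adjust the credentials and hostname to your environment):

# List the available entity sets
curl -k --ntlm -u 'DOMAIN\iaasadmin:password' "https://iaas_web_server/Repository/Data/ManagementModelEntities.svc/"

# Return the first five entities from a specific entity set
curl -k --ntlm -u 'DOMAIN\iaasadmin:password' "https://iaas_web_server/Repository/Data/ManagementModelEntities.svc/VirtualMachines?\$top=5"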

LINQPad is a tool that can connect to a .NET data source and execute LINQ queries. This tool is extremely useful to view and discover the vRA entities that exist under the ‘ManagementModelEntities.svc‘ data service (or any data service). You will often have a requirement to understand which entities exist and their associated properties. Many entities also relate (link) to each other, so understanding this can be very powerful.

Download LINQPad from the following URL. It doesn’t matter if you use version 4.x or 5.x as both will do the job, so just get the one that supports the version of .NET you have installed.

https://www.linqpad.net/

Once you have LINQPad installed, launch it and add a new connection, selecting ‘WCF Data Services 5.5 (OData 3)‘ as the data context.

The next step is to provide the URI of the IaaS Web Server (the IIS server) and an IaaS admin account (I’ll be using my own account that has IaaS admin access). Check ‘Remember this connection‘ so that the connection details are saved for future use.

You may also want to click on the ‘Advanced‘ button to ‘Accept invalid certificates‘ if self-signed certificates are being used.

After you click ‘OK‘, a connection will be established and the entity sets will be presented (that look like tables).

You’re likely not going to care much about most of these. Many of the entity sets have become redundant as their data has been migrated to the Cafe appliance. Here is a list of the most common objects/entities that you will probably be working with, along with their corresponding entity set names (additional sets are also used when linking entities, but I won’t cover them here).

Object – Entity Set Name
Virtual Machines – VirtualMachines
Virtual Machine Properties (aka Custom Properties) – VirtualMachineProperties
Reservations – HostReservations
Reservation Policies – HostReservationPolicies
Storage Policies – HostStorageReservationPolicies
Compute Resources (clusters/hosts) – Hosts
Storage – HostToStorage
Data Collection – DataCollectionStatuses

I want to make a special note on a couple of these:

  • Custom Properties – Each custom property is stored as an individual entity in the VirtualMachineProperties entity set. If you had a virtual machine with 30 custom properties, there would be 30 entities created in this set. If you had another virtual machine with 30 custom properties, then 60 custom property entities would now exist. Although you will see a lot of duplicated custom property names, they are unique entities and each maintains a mapping to its respective virtual machine.
  • Data Collection – These entities do not have a mapping to the compute resources. Instead, the data collection entity IDs are the same as the IDs of the compute resource entities that they manage.

So let’s explore these a bit more. We’re going to dig into the Reservation entities and see how these are presented. The first thing to do is locate the ‘HostReservations‘ entity set in the list.

Click on the little + icon and it will display all the properties available for each reservation. If you have ever created or worked with reservations in vRA, then you will be familiar with properties such as ‘ReservationMemorySizeMB’ or ‘ReservationPriority’.

The blue and green properties are entity links. These are incoming or outgoing links between this entity and other entities. For example, the blue link ‘Host’ is an outgoing link to a single compute resource cluster or host, whereas the green link ‘VirtualMachines’ has incoming links from one or more virtual machines. The cardinality between the entities is also displayed, i.e. reservations have a many-to-one relationship with Host. I will cover this in much more detail in future posts that discuss working with entities, properties and links at a much deeper level.

Next, let’s take a look at some existing reservations. Right click the ‘HostReservations‘ entity set and select ‘HostReservations.Take (100)‘.

The results pane will display the first 100 reservation entities that have been found. Each entity is displayed as a row, with a column for each of its properties.

You can see that I only have one reservation, called ‘RES-SG-BG_Delivery-vSphere-01’, which has been configured with 4096 MB of memory and 100 GB of storage.

The links, however, will all be displayed as either null or (0 items). These can be expanded using the ‘Expand’ method on the LINQ query (Include would have been an even better option, but that method is not supported for our use). Expand takes a comma-separated string of link names that should be expanded.

Modify the query as follows to display the Host and Reservation Policy.

HostReservations.Expand("Host,HostReservationPolicy")

This will now return the reservation entities and this time the links for ‘Host‘ and ‘HostReservationPolicy‘ will be populated.
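
Expand can also be combined with other LINQ operators that translate into OData query options. For example, a filtered and ordered variation of the same query (the property names are taken from the reservation entity shown earlier; adjust to suit your environment):

HostReservations
    .Expand("Host,HostReservationPolicy")
    .Where(r => r.ReservationMemorySizeMB > 2048)
    .OrderBy(r => r.ReservationPriority)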

Continue to explore and get familiar with the entities and their properties.

In my next post I will cover how to work with the entities in vRO and will include some actions that I have created that allow me to easily interface with the entity manager.

vRealize Orchestrator: Standardised Logger Action

One bugbear that I have with vRO is the limitation around system (console) logging. There is currently no way to dynamically output the name of an action or sub-workflow (see end of the post). I like to see exactly which action or workflow is executing code so that it makes it easier for me to find that code to troubleshoot when I am looking at the output logs.

It is possible to use ‘workflow.name’ or ‘this.name’ inside an action, but this will always be set to the name of the initial workflow that was executed. This is because the workflow object is implicitly passed to the action that is called. The result is that it will look like all the code is executing from the workflow (which is technically true, but I needed more granularity).

I therefore created a standardised way for workflow and action logging to be handled. This is achieved by using an action that handles all the logging for me. The idea is that an action or workflow calls the ‘logger’ action with a few parameters, allowing for a consistent and useful logging experience.

Here is some example output of the logger action in use:

You can clearly see that the outputs are from an ‘Action‘ followed by the name of the action and then the actual log message.

Below is the logger action that I am using (this is the only action I have that does not conform to my template).

The action takes four parameters (inputs):

  • logType – Should be set to ‘Action‘ or ‘Workflow‘.
  • logName – Should be set to the ‘Action‘ or ‘Workflow‘ name.
  • logLevel – The log level, one of ‘log‘, ‘warn‘, ‘error‘, ‘debug‘.
  • logMessage – The message to output to the console (can also be a non-string such as an object)
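
A minimal sketch of the idea looks something like this (treat it as an approximation rather than the exact action):

// Action inputs: logType, logName, logLevel, logMessage
var prefix = logType + ": " + logName + " - ";

switch (logLevel) {
    case "debug":
        System.debug(prefix + logMessage);
        break;
    case "warn":
        System.warn(prefix + logMessage);
        break;
    case "error":
        System.error(prefix + logMessage);
        break;
    default:
        // Fall back to standard logging if no valid level was supplied
        System.log(prefix + logMessage);
}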

To use this action in any of my other actions, I place the following lines of code inside the variable block.
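
For example (the module path shown is hypothetical; substitute the module where your logger action lives):

var logType = "Action";
var logName = "getVirtualMachineNames"; // manually set to this action's name
var logger = System.getModule("com.mycompany.library.utils").logger; // hypothetical module path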

And simply use the following line anywhere/everywhere that I want to perform some form of logging output.
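
For example:

logger(logType, logName, "log", "Retrieving virtual machine names"); // example message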

You can see that I am passing 4 parameters to the ‘logger‘ function, which match the inputs for the logger action that is being called. In the above example, “log” has been set, but this can also be any of “debug“, “error” or “warn“. It will default to “log” if for some reason no valid value has been specified.

There is one overhead that you are probably already thinking about: yes, you need to manually set the variable ‘logName’ to match the name of the action itself. This seems a bit tedious at first, but in my experience action names very rarely change.

Identifying the action name from the underlying Rhino function

I mentioned at the start of this post that there is no way to dynamically set the name of the action. There is, however, a way in which this can be achieved using the following:
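
The sketch below is an approximation: the exact prefix that vRO applies to the generated function name can vary, so the string handling is an assumption.

var wrappedName = arguments.callee.name;           // name of the Rhino function that vRO generates for this action
var actionName = wrappedName.replace(/^.*?_/, ""); // strip the generated prefix (assumed format)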

All vRO actions are just functions implemented in Rhino, with a prefix applied. The above code retrieves the function name and extracts the action name from it.

However, I highly recommend that you do not use this approach: it has been restricted in later editions of ECMAScript, could potentially be removed from vRO, and is very costly to execute, which will slow your workflows down considerably.

I hope this is useful. As always, if you have any suggestions or comments, please let me know.

vRealize Orchestrator: Standardising Modules & Actions

Managing your code base in vRealize Orchestrator can be quite challenging and complex. Often, you won’t realise this until you’ve reached a point where it becomes difficult and time-consuming to organise or locate existing code that you have written. In this post, I am going to suggest ways to help you organise your code better, using methods that I have adopted during my time with vRO.

I am not suggesting that this is the perfect solution, but it should provide a working standard that you can adapt to your own needs. I would also argue that the extra time spent getting this in place at the outset will lead to time saved later on.

Just for reference, from this point on, I am going to refer to ‘code base‘ and ‘actions‘ interchangeably, because your vRO actions ARE your code base. Almost every single line of code you write in vRO should be in an action (I will discuss this in more detail in a later post which I will link here).

Modules

Modules are quite a simple topic but it’s important to get them right as they are almost impossible to change later. Modules are used to organise or group a collection of actions together by a common function.

Here are some general principles I like to follow when creating modules. As a general rule, modules should:

  • Conform to a standard naming convention;
  • Allow developers to easily find existing actions, or to work out where new actions should be created. Failure to address this point will result in developers re-inventing the wheel by writing new actions that already exist;
  • Always be named using lower-case alphabetical characters with dot notation;
  • Always include a utility module for storing general ‘helper’ actions, e.g. an action that returns the unique items in an array.

So generally, if you follow a standard naming convention for your module names then you’ll be set. I have found that the following naming convention works well:

com.[company].library.[component].[interface].[objects].[object]

Where:

[company] = company name as a single word
[component] = examples: NSX, vCAC, Infoblox, ActiveDirectory
[interface] = examples: REST, SOAP, PS (not needed if using a plugin)
[objects] = parent object branch of the component, e.g. Edges, Entities, Networks, Groups
[object] = a property or object branch of the parent objects; these can contain actions that target a single object (singular)

Here is an example of what (some of) my Zerto REST API code library looks like:
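
For illustration, a layout along these lines (the module names are purely illustrative):

com.mycompany.library.zerto.rest.vpgs
com.mycompany.library.zerto.rest.vpgs.vpg
com.mycompany.library.zerto.rest.vms
com.mycompany.library.zerto.rest.vms.vm
com.mycompany.library.zerto.rest.sites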

You could just create a module where the branch stops at [interface] or [objects], but what you’ll find is that it will become a dumping ground for dozens of actions, which will become difficult to maintain. Adding additional branches helps break the modules up.

The strategy for creating these branches usually follows the structure of the interface/API, which helps align your modules quite nicely. A good module branching strategy can also improve execution performance, because each module that is loaded contains a smaller number of required actions.

Actions

Here are some general principles I like to follow when creating actions. As a general rule, actions should:

  • Contain small, manageable chunks of code that perform a specific task. Actions are just functions and, like any function, an action should contain code that performs a specific task. If your action is doing many different tasks, then consider breaking it down into multiple, smaller actions.
  • Validate inputs. I appreciate that some may debate this idea, but actions are not ‘private functions’. They are public code, and you can never guarantee that the action ‘caller’ is properly validating its inputs. This is the nature of vRO: it is a ‘hub’ with many different use cases and scenarios for executing the same actions. I have seen dozens of cases where developers and support engineers have wasted time tracking down unexpected errors caused by unvalidated inputs;
  • Be named appropriately for the task they perform. I generally like to use verbs in my action names, such as ‘getVirtualMachineNames’, ‘getVirtualMachineNetworks’ or ‘setCustomProperty’. Actions named this way make it easier for other developers to identify what they are used for;
  • Have variables declared in a single block. This will just make it easier to see what variables are being used. The data type can also be defined, but is not always necessary or as important;
  • Provide consistent logging throughout. Make it so the action almost tells the story of what is happening. Don’t go overboard, but generally a before, during and after style of logging works quite well;
  • Keep nested action calls to a minimum. Calling actions from within an action is generally ‘OK’, but too much nesting creates depth that may be more difficult to maintain and troubleshoot later on. Typical use cases are ‘helper’ or ‘utility’ actions (you’ll be completely forgiven for actions used for workflow presentation, as these are a pain);
  • Perform singular tasks. Don’t write actions that perform plural tasks. Write the singular version first, then use a looping mechanism that re-uses the singular action (there are also ways this can be achieved with performance in mind in vRO). This way you’ll only have 1 version of the code;
  • Be based on a user-defined template. Yup, I’m not crazy. Have a defined template (aka boilerplate) that sets out how an action should generally look, and have the team follow it. It will make code reviews far easier;
  • Always be named using camel-cased alphabetical characters (no dots).

If you adopt the above principles, you’ll have actions that will be much easier to understand, maintain and troubleshoot and everyone will be a happy bunny.

Action Example

Here is a working example of an action that has been based on a template.
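
For illustration, here is a sketch of an action that follows these principles, based on the ‘unique array items’ helper mentioned earlier (the logger module path is hypothetical):

// Action: getUniqueArrayItems
// Input:  itemsArray (Array/string) - the array to de-duplicate
// Return: Array/string - the unique items

// Variable block
var logType = "Action";
var logName = "getUniqueArrayItems";
var logger = System.getModule("com.mycompany.library.utils").logger; // hypothetical module path
var uniqueItems = [];

// Validate inputs
if (itemsArray === null || itemsArray === undefined || !(itemsArray instanceof Array)) {
    throw "Input 'itemsArray' is null or not an array";
}

logger(logType, logName, "log", "Received " + itemsArray.length + " items");

// Main task: build a unique list of items
for (var i = 0; i < itemsArray.length; i++) {
    if (uniqueItems.indexOf(itemsArray[i]) === -1) {
        uniqueItems.push(itemsArray[i]);
    }
}

logger(logType, logName, "log", "Returning " + uniqueItems.length + " unique items");

return uniqueItems;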

Action Template

And here is the template:
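
A skeleton along the same lines could serve as a starting point (adapt it to your own standards; the module path is hypothetical):

// Action: actionNameHere - short description of the task this action performs
// Inputs: inputName (type) - description
// Return: type - description

// Variable block
var logType = "Action";
var logName = "actionNameHere"; // set to this action's name
var logger = System.getModule("com.mycompany.library.utils").logger; // hypothetical module path
var result;

// Validate inputs
// e.g. if (inputName === null || inputName === undefined) { throw "Input 'inputName' is null"; }

logger(logType, logName, "log", "Action started");

// Main task logic goes here

logger(logType, logName, "log", "Action completed");

return result;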

If you’re wondering about the ‘logger‘ action, you can read my post here.

I hope this helps provide some standards for your vRO code base. If you have suggestions, comments, etc. then please let me know, as I’ll be glad to hear them and value every bit of feedback.

Install phpVirtualBox on CentOS 7.x

I have to say, I was bursting with excitement when I recently discovered this gem. phpVirtualBox is going to make a great addition to my server by allowing me to create and manage VirtualBox VMs directly from my web browser. So, sticking with the theme, here is a small guide on getting phpVirtualBox installed and running on CentOS 7.x.

Install required packages

Install Apache Web Server
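
On CentOS 7 this is typically:

yum install -y httpd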

Start Apache and ensure it starts automatically on boot.
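
For example:

systemctl start httpd
systemctl enable httpd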

Finally, add an exception in the firewall so that you can access Apache over HTTP and HTTPS.
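
Something along these lines with firewalld:

firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload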

Test that Apache is running and accessible by opening a browser and navigating to http://server_ip_address.

Install PHP
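
The exact package list may vary, but phpVirtualBox needs PHP with SOAP support, so something like:

yum install -y php php-common php-soap php-gd php-xml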

Restart Apache.
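
For example:

systemctl restart httpd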

Test that PHP is working by creating a PHP test file that will display the PHP information page (watch out for the quotes if copy/pasting).
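
One way to do this is:

echo '<?php phpinfo(); ?>' > /var/www/html/test.php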

Verify that the PHP information page loads by opening a browser and navigating to http://server_ip_address/test.php.

Create phpVirtualBox User Account

Create the account that the VirtualBox web service will run as and that phpVirtualBox will use.
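
For example, using ‘vbox’ as the account name (choose whatever suits your environment):

useradd vbox
passwd vbox
usermod -aG vboxusers vbox   # add to the vboxusers group, if your VirtualBox packages created one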

Install phpVirtualBox

Download the phpVirtualBox package (I am using a release candidate version, which is the only one available at the time of writing for VirtualBox 5.2.x)
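
For example (the exact release and URL will differ; grab the version that matches your VirtualBox installation from the project’s GitHub releases page):

cd /var/www/html
wget https://github.com/phpvirtualbox/phpvirtualbox/archive/<release>.zip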

Unzip/Extract phpVirtualBox package.
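
Again, substituting the release you downloaded:

unzip <release>.zip
mv phpvirtualbox-<release> phpvirtualbox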

Configure phpVirtualBox

Edit phpVirtualBox config.php file:
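
Assuming the files were extracted to /var/www/html/phpvirtualbox:

cp /var/www/html/phpvirtualbox/config.php-example /var/www/html/phpvirtualbox/config.php
vi /var/www/html/phpvirtualbox/config.php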

Change the username/password to the user that runs the VirtualBox web service.
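
Inside config.php, set these to the account created earlier (the values shown are examples):

var $username = 'vbox';
var $password = 'your_password_here';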

Set the user that the VirtualBox web service will run as.
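
On CentOS this is usually done in /etc/default/virtualbox:

echo 'VBOXWEB_USER=vbox' >> /etc/default/virtualbox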

Restart the VirtualBox web service
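
Assuming the Oracle VirtualBox packages, the service is vboxweb-service:

systemctl restart vboxweb-service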

Add SELinux Policy Exception

If you prefer not to disable SELinux then a policy exception will need to be added that allows phpVirtualBox to connect to the VirtualBox web service. First install the ‘policycoreutils-python’ package, which provides ‘semanage’.
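
For example:

yum install -y policycoreutils-python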

Run the following command to add the exception.
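
The exact exception depends on your setup; two common approaches are to allow Apache to make outbound network connections, or to label the vboxwebsrv port (18083 by default) as an HTTP port:

setsebool -P httpd_can_network_connect on
# or
semanage port -a -t http_port_t -p tcp 18083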

Open a browser and go to http://server_ip_address/phpvirtualbox. I’d recommend using Firefox if you plan on using the console, as it uses Flash, which will be blocked by Chrome. Use the default login username: admin and password: admin.

Accessing the Virtual Machine Console Remotely

You can access the virtual machine console (to perform an assisted installation, troubleshoot the VM, etc.) using the rdesktop session through phpVirtualBox. You must enable ‘Remote Display’ support in the virtual machine settings, under ‘Display’. The port range defaults to 9000-9100, which should be fine unless you plan to run more VMs than that :).
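
If you prefer the command line, the equivalent VBoxManage options look something like this (the VM name is a placeholder, and RDP support requires the VirtualBox Extension Pack):

VBoxManage modifyvm <vm_name> --vrde on --vrdeport 9000-9100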

In addition to enabling this feature, you will need to add an exception in the firewall to allow your client (web browser) to access the console.
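
For example, to open the default port range:

firewall-cmd --permanent --add-port=9000-9100/tcp
firewall-cmd --reload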

You should now be able to access the VM’s console.

One final note. When connecting to the console, make sure you change the ‘Requested desktop size’ to ‘1024×768’ so that you are able to view the console properly.

Importing Existing VMs/OVA’s Into VirtualBox

Before I decided to run a Linux-based, headless installation of VirtualBox, I had been running all my virtual machines in VMware Workstation. When it was time to switch, I exported a number of virtual machines that I had already built to OVF format. These were servers, such as Windows with Active Directory, Git and Ansible, that I didn’t want to go through the hassle of building again from scratch. This post is dedicated to my experience of doing this and some of the post-migration work that needs to be performed, such as removing VMware Tools.

Import Existing Virtual Machine Image

Before you get started

One tip before you start. When you import the VM into VirtualBox, it’s possible that the network adaptor information will change. This was definitely the case for me when I migrated from Workstation, which used the VMXNET3 driver. My interface was configured as ‘ens33’, and when I switched to the virtio driver in VirtualBox the device came up as eth0. I was therefore unable to connect to my headless VM. My solution was to fire up the source VM again, copy ‘ifcfg-ens33’ in /etc/sysconfig/network-scripts/ to ‘ifcfg-eth0’ and change any references from ens33 to eth0. Finally, I commented out the UUID line. When I re-imported, I was able to connect to my VM.
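
In other words, something along these lines on the source VM:

cd /etc/sysconfig/network-scripts
cp ifcfg-ens33 ifcfg-eth0
sed -i -e 's/ens33/eth0/g' -e 's/^UUID=/#UUID=/' ifcfg-eth0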

Import Virtual Machine

To perform the import I am going to use the VBoxManage import command. This command allows the --dry-run flag to be supplied and will provide details of how the virtual machine will look once it has been imported into VirtualBox. This is useful as it allows us to ensure that the configuration is how we want it to be. The dry run will also provide any optional flags that can be used to influence the import. There are also the global options keepallmacs, keepnatmacs, and importtovdi.

I have some virtual machines that have been exported to OVF format:

So let’s take a look at my Ansible (SG1-ANS001) virtual machine that is running on CentOS 7.x.
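
The initial dry run looks something like this (the OVF filename is assumed from the disk image name shown below):

VBoxManage import SG1-ANS001.ovf --dry-run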

The dry run feature represents the virtual machine image as a Virtual system followed by an index. As this image only contains a single virtual machine, only index 0 is present. The VM hardware configuration is listed as units. Therefore, I can go through each unit in turn and ensure that the target machine is configured correctly. From the dry run output there are a number of units that need to be modified. I will build out the command as I go along.

1: Suggested VM name "vm" (change with "--vsys 0 --vmname <name>")

This will need to be changed to the actual name of the VM, which in this case is SG1-ANS001. I can use the options --vsys 0 --vmname <name> to accomplish this.
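
So the command becomes (using the assumed OVF filename):

VBoxManage import SG1-ANS001.ovf --dry-run --vsys 0 --vmname SG1-ANS001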

5: Network adapter: orig custom, config 5, extra type=Bridged

The original VM was using a custom host-only network interface on the previous hypervisor, which is not understood by VirtualBox. The import command is therefore defaulting to a suggestion of a bridged interface, which is not correct. I need to connect the virtual network card to my vboxnet0 host-only interface on VirtualBox. Unfortunately, the import command does not provide an option (that I can see) to make this change, so I will need to modify the VM once it has been imported.

9: Hard disk image: source image=SG1-ANS001-disk1.vmdk, target path=/mnt/vm2/vbox/vm/vm/SG1-ANS001-disk1.vmdk, controller=7;channel=0

The source disk image is a VMDK, the format used by VMware. Although VirtualBox’s native format is VDI, the import has defaulted to keep using VMDK. This would still work, but as my goal is to move to a fully open source solution, converting to VDI makes more sense. The global option importtovdi can be used to achieve this. My command now looks like this (still using the assumed OVF filename):
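
VBoxManage import SG1-ANS001.ovf --dry-run --vsys 0 --vmname SG1-ANS001 --options importtovdi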

VirtualBox will default to re-initialising the MAC address of all network cards on the VM. I am not interested in making post-configuration changes because of this, and my preference is to preserve the existing MAC addresses that were allocated to the original VM; there is no real reason not to do this. The global option keepallmacs can be used to achieve this. My final command now looks like this (again with the assumed filename):
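
VBoxManage import SG1-ANS001.ovf --dry-run --vsys 0 --vmname SG1-ANS001 --options importtovdi,keepallmacs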

If we issue another --dry-run against the new command we can see most of the changes have taken effect.

We can now re-issue the command with the --dry-run flag omitted to perform the import of this virtual machine into VirtualBox. The progress will be displayed, along with a message if the import was successful:

Post-Import Configuration

Let’s check that we can see the VM in the VirtualBox inventory:
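
For example:

VBoxManage list vms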

Configure Networking

If you recall from the previous section, we were not able to change the network card binding during the import. Now that the VM has been imported, we can do this using the VBoxManage modifyvm command. I need to bind to a host-only interface called vboxnet0.
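
Assuming the first network adapter (adjust the adapter number to suit your VM):

VBoxManage modifyvm SG1-ANS001 --nic1 hostonly --hostonlyadapter1 vboxnet0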

I also want to use the paravirtualised network adapter (the virtio driver) for better performance.
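
For example:

VBoxManage modifyvm SG1-ANS001 --nictype1 virtio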

The VBoxManage showvminfo command can be used to view the VM’s configuration and confirm any changes that have been made.
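
For example:

VBoxManage showvminfo SG1-ANS001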

Disable Audio

I noticed from my VBoxManage showvminfo output that an Audio card was enabled.

Audio: enabled (Driver: ALSA, Controller: AC97, Codec: STAC9700)

I do not require this so the card can be disabled:
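
For example:

VBoxManage modifyvm SG1-ANS001 --audio none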

Set VM Description

While we’re at it, it would probably be a good idea to give this VM a small description.
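
For example (the description text is just an illustration):

VBoxManage modifyvm SG1-ANS001 --description "Ansible server running on CentOS 7"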

Virtual Machine Power Operations

Start VM

Since we’re going to be running this VM in headless mode, the alternative binary VBoxHeadless will be used instead of VBoxManage. The VBoxHeadless interface accepts the same start parameters. If you see the copyright information after you have run the command, then the VM has started successfully.
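
For example:

VBoxHeadless --startvm SG1-ANS001 &   # run in the background so the terminal is not tied up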

Use the VBoxManage list runningvms command to verify that the VM process is actually running.
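
For example:

VBoxManage list runningvms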

Stop VM

You can gracefully stop the running VM using the VBoxManage controlvm <vm> acpipowerbutton command.
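
For example:

VBoxManage controlvm SG1-ANS001 acpipowerbutton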

Troubleshooting

If you aren’t able to connect to the VM after it has been started, then it may have failed to boot or there may be a network configuration issue. As the VMs are running in headless mode, this can be difficult to diagnose. There are a couple of decent ways to help diagnose the issue.

Screenshot

This option is simple and takes a screenshot of the console. You specify a filename to save to (in PNG format) and can then download the image via SCP from the VirtualBox host.
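
For example (the paths and host are placeholders):

VBoxManage controlvm SG1-ANS001 screenshotpng /tmp/SG1-ANS001-console.png
scp root@virtualbox_host:/tmp/SG1-ANS001-console.png .   # run from your workstation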

Serial Port (console redirect)

I will follow up with an additional post on how the display output can be redirected to the serial port and some cool ways that we can view and interact with this session.

If there is any other configuration that I could apply to my VMs or VirtualBox installation, then please let me know.