Overview of the Ansible OVA Deployment Role

In this second post, we will take a look at the Ansible OVA deployment role (vmware_deploy_ova), which is the core component used to deploy all of the VMware software appliances (vCenter Server, vRealize Orchestrator, NSX-T, etc.). I wanted to create a consistent yet flexible approach to importing and configuring OVA appliances, while also allowing hardware customisation to be introduced.

An added advantage of this role is that it does not require OVF Tool to be installed; it makes use of the vmware_deploy_ovf Ansible module, which uses pyVmomi to manage the OVA import process.

I have also added several other features to the role, such as the ability to deploy without powering on, configure hardware settings such as CPU and Memory, add disks and configure networks.

The vmware_deploy_ova role is available from:

Ansible Galaxy: vmware_deploy_ova

Github: ansible-role-vmware_deploy_ova

You do not have to manually download this role for the purpose of this series. The role will automatically be downloaded from Ansible Galaxy, as it is defined as a dependency of my appliance-based deployment roles. However, you are free to use and modify this role for any specific requirements that you may have.

This post aims to provide the details of what is used to deploy the appliances and how these can be configured.

Preparing an Appliance-Based Role to use the vmware_deploy_ova Role

This section provides details on how a role can utilise the vmware_deploy_ova role as a dependency and the ova properties that are required.

Role Dependency

Each appliance-based role that I create defines the vmware_deploy_ova role as a dependency in the meta/main.yml file under the dependencies section.

I also add the tag deploy, which allows me to easily skip the OVA deployment process when running/re-running an appliance-based deployment playbook using the --skip-tags "deploy" parameter.
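As a minimal sketch (any variables passed to the dependency will differ per appliance role), the meta/main.yml dependency definition might look like this:

```yaml
# meta/main.yml (illustrative sketch)
dependencies:
  - role: vmware_deploy_ova
    tags:
      - deploy
```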

OVA Properties

All VMware appliances require that a set of OVA properties be configured, which specify details such as the VM name, hostname, IP address, subnet mask, root password, etc. Each appliance is different and has a specific set of OVA properties associated with it.

The vmware_deploy_ova role requires the ova_networks and ova_properties dictionary-type variables to be set. Below is an example snippet of some vCenter Server Appliance OVA properties:
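A minimal sketch of what these variables might look like, using typical vCSA guestinfo property keys; the Jinja2 variable names referenced here (vm_network, vm_ipv4_address, etc.) are illustrative assumptions rather than the role's actual variable names:

```yaml
ova_networks:
  "Network 1": "{{ vm_network }}"

ova_properties:
  guestinfo.cis.appliance.net.addr.family: "ipv4"
  guestinfo.cis.appliance.net.mode: "static"
  guestinfo.cis.appliance.net.addr: "{{ vm_ipv4_address }}"
  guestinfo.cis.appliance.net.prefix: "{{ vm_ipv4_prefix }}"
  guestinfo.cis.appliance.net.gateway: "{{ vm_ipv4_gateway }}"
  guestinfo.cis.appliance.net.dns.servers: "{{ dns_servers | join(',') }}"
  guestinfo.cis.appliance.net.pnid: "{{ vm_fqdn }}"
  guestinfo.cis.appliance.root.passwd: "{{ appliance_root_password }}"
```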

In the above example, the values for these properties are mostly taken from other mandatory variables that need to be set for the appliance-based role.

Additional OVA properties also exist that are not user-configurable. If you try to set these with ova_properties, the OVA import task will fail. To overcome this issue, I have introduced an additional (optional) variable called vapp_properties. This variable is a list of dictionaries that set the UserConfigurable attribute to True, along with the value that needs to be set for the OVA property.
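As a sketch, based purely on the description above (the exact key names expected by the role may differ):

```yaml
vapp_properties:
  - id: guestinfo.cis.deployment.autoconfig   # example of a property that is not user-configurable by default
    value: "True"
    userConfigurable: true
```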

Note that vapp_properties is not required for all appliance deployments (vRealize Orchestrator, for example).

The ova_networks, ova_properties and vapp_properties variables are all defined in the vars/main.yml file for the appliance-based role. This file should NOT be modified; instead, I have included mandatory and optional variables that can be set to manage these properties for the appliance being deployed. These will be discussed in later posts when I cover the appliance deployments.

Configuration for the vmware_deploy_ova Role

I have created several variables to influence the OVA deployments. These include mandatory variables that are needed by the role to deploy the appliance. I have also added optional variables to allow the appliance deployments to be tailored to your requirements. An example might be to deploy a vRO appliance with additional CPU and Memory.

Deployment Variables

When deploying the OVA appliances, the following deployment variables need to be set. These can be placed in the host_vars for the appliance hostname, or in group_vars for an application group if you want to apply them to multiple appliances.

You can also configure these optional deployment variables (only supported when deploying to a vCenter Server):
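The variable names below are illustrative assumptions (check the role's defaults/README for the exact names); a host_vars sketch might look like this:

```yaml
# host_vars/vro01.yml (sketch - names and values are placeholders)
vcenter_hostname: vcenter.lab.local
vcenter_username: administrator@vsphere.local
vcenter_password: "{{ vault_vcenter_password }}"
datacenter: DC01
cluster: Cluster01
datastore: Datastore01

# Optional (vCenter Server deployments only)
vm_folder: /DC01/vm/appliances
resource_pool: Appliances
```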

OVA Source

These variables allow you to set the location of where the OVA file can be found. These variables should ideally be defined in group_vars.

Make sure not to include a trailing '/' at the end of ova_path, as this will be added automatically.

There is also the option to download the OVA file over HTTP by setting ova_source to "http" (the default is "local") and then setting the ova_url variable. The file will be downloaded to the path defined in the ova_path variable.
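A group_vars sketch; only ova_path, ova_source and ova_url are named above, so the ova_file variable and all values shown are hypothetical:

```yaml
ova_path: /mnt/repo/ova                            # no trailing '/'
ova_file: appliance.ova                            # hypothetical variable/file name
ova_source: http                                   # default is "local"
ova_url: http://repo.lab.local/ova/appliance.ova   # hypothetical URL
```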

Virtual Hardware Configuration

The following variables allow the virtual hardware to be customized for a deployment. These variables should be defined in either a host_vars file or group_vars.

Enable CPU/Memory Hot Add support:
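The exact variable names here are assumptions based on the ova_hardware_* naming pattern used elsewhere in the role:

```yaml
ova_hardware_cpu_hotadd: true
ova_hardware_memory_hotadd: true
```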

If these are not set, hot add support will be disabled.

Set the Number of CPUs

Set the ova_hardware_num_cpus variable to specify the number of CPU sockets that should be added to the appliance.
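For example, to deploy the appliance with 4 vCPUs:

```yaml
ova_hardware_num_cpus: 4
```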

If this variable is not set, the number of CPUs will not be changed.

Set Memory Size

Set the ova_hardware_mem_gb variable to specify the amount of memory (in GB) to allocate to the appliance.
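For example, to allocate 16 GB of memory:

```yaml
ova_hardware_mem_gb: 16
```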

If this variable is not set, the amount of memory will not be changed.

Automate the VMware SDDC using a Multi-Stage Environment Using Ansible

Welcome to the first post in my new series, Ansible for VMware SDDC, where I will help you deploy the VMware SDDC stack in a multi-stage environment using Ansible. Automating VMware environments using DevOps tooling is no easy task, as most administrative tasks are performed using PowerShell, and some products lack a consumable API for certain tasks. I have decided to take on this daunting task and provide a complete end-to-end automated solution.

Managing your VMware infrastructure in this way will allow your SDDC configuration to be treated as Infrastructure as Code (IaC) and stored in a source control repository like Git. This further allows for immutable infrastructure, where appliances can be re-deployed and their configuration applied within minutes, should the need arise. It can also provide an alternative approach to restoring services after a major issue, such as corruption or loss of an appliance, by simply redeploying it from code.

I started this piece of work as a side project for deploying my own environments, but have been fortunate enough to extend it to help my clients automate their VMware infrastructure. I therefore decided to collate the work that I have done into a series to help others.

The Multi-Stage environment idea was based on a great article I read called How to Manage Multistage Environments with Ansible by Justin Ellingwood, which I recommend you read.

This post will demonstrate how to create the Ansible working environment from scratch and utilise a number of roles that I have created for various VMware deployments. Follow-up posts in this series will each cover the deployment of a specific VMware technology (e.g. vCenter Server), and to end the series I will demonstrate how you can deploy a full SDDC stack, including nested variants that can be quickly spun up for development and testing purposes.

Prepare the Environment

The roles that I have created have some dependencies that first need to be installed. I recommend that you use Python virtual environments when setting up your Ansible environment. I have created a guide here on how to install Python 3 and how to create and activate a virtual environment, where the dependencies can be installed. Note that this is not a requirement.

I have prepared an ‘ansible_vmware_sddc_requirements.txt‘ file that includes all dependencies that are required to run the VMware SDDC roles, including Ansible and the vSphere Python SDK (pyVmomi). It pins specific versions of the core dependencies that I have tested against, along with the other transitive dependencies used by these modules.

You can find the ‘ansible_vmware_sddc_requirements.txt‘ requirements file on my Github Gist account here.

Or simply install the dependencies using pip:
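For example, assuming the requirements file has been downloaded to the current directory and your virtual environment is active:

```bash
pip install -r ansible_vmware_sddc_requirements.txt
```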

Your environment will now be ready to run Ansible and my VMware SDDC roles.

Create Ansible Environment

This section will detail the creation of a multi-stage Ansible environment that includes an inventory for Development, Testing, Staging and Production. Each of these inventories will include its own hosts inventory file, group_vars and host_vars. The Development inventory will be used as the default if no inventory has been specified when running a playbook on the command line. This helps to ensure that other environments are not accidentally targeted.

The Ansible environment is typically implemented specifically for the project/company/user, so it is difficult to create an all-encompassing solution that suits all use cases. However, I have created a recommended setup for creating these deployments.

This is a base configuration that you will likely need or want to tailor to your own specific requirements. This typically boils down to variable placement within the groups; however, as long as the required variables have been defined, it is up to you where to place them.

This post will only cover setting up the Development Environment.

Start by creating a new empty folder where Ansible should be set up:
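For example (the folder name is an arbitrary choice):

```bash
mkdir ansible-vmware-sddc
cd ansible-vmware-sddc
```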

I have created a script that I use to bootstrap my own environments when testing. You can download the ansible_bootstrap_vmware_sddc.sh script via my Github Gist account using wget.
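Something along these lines, with the placeholder replaced by the raw Gist URL:

```bash
wget -O ansible_bootstrap_vmware_sddc.sh <raw-gist-url>
chmod +x ansible_bootstrap_vmware_sddc.sh
./ansible_bootstrap_vmware_sddc.sh
```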


Run Multiple Ansible Versions Side by Side Using Python 3 Virtual Environments

Many of my blog posts start by describing how to set up a Python virtual environment and install the required modules when working with Ansible. I have therefore decided to create this post to cover the topic in a bit more detail and for better consistency.

When I am developing Ansible playbooks and depending on the project, I often require a different set of Python modules. I could install all of the modules that I use on the base system, but then it starts to get contaminated and it becomes difficult to manage all of the dependencies.

Another requirement is that I want to be able to test my Ansible playbooks against different versions (perhaps because environments are running different versions, or to test a new release). Fortunately, both of these problems are easy to solve using Python’s venv (virtual environments) module.

You don’t need to be using containers to run multiple versions of Ansible for this purpose, as I have seen many people do.

The benefits of using a virtual environment include:

  • Each project can have its own isolated environment and modules;
  • The base system is not affected;
  • Does not require root access as virtual environments can be created in your home directory;

In the following sections, I will provide details on how to set up virtual environments and some examples of these with Ansible.

Install Python 3

The first thing you need to do is install Python 3. You can create virtual environments with Python 2, but as that version is going end of life in January 2020, you really should make every effort to move away from it. We will need to install Python 3.6 or later.

CentOS 7

Python 3 can be installed from one of the following repositories, depending on your preference (but only choose one). Note that this does not change the default ‘python‘ interpreter on the system.

Extra Packages for Enterprise Linux (EPEL)

Install this repository if not already installed:
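For example:

```bash
sudo yum install -y epel-release
```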

Install Python 3.
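A sketch, assuming the EPEL python36 package name:

```bash
sudo yum install -y python36
```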

Software Collections (SCL)

Install this repository if not already installed:
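For example:

```bash
sudo yum install -y centos-release-scl
```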

Install Python 3.
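A sketch, using the rh-python36 software collection:

```bash
sudo yum install -y rh-python36
scl enable rh-python36 bash   # start a shell with Python 3.6 on the PATH
```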

Inline with Upstream Stable (IUS)

Install this repository if not already installed:
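Something like the following, with the placeholder replaced by the release RPM URL from the IUS documentation:

```bash
sudo yum install -y <ius-release-rpm-url>
```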

Install Python 3.
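A sketch, assuming the IUS python36u package names:

```bash
sudo yum install -y python36u python36u-pip
```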

Ubuntu

Check out this post: https://www.tecmint.com/install-python-in-ubuntu


Verify that Python is installed and working:
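For example:

```bash
python3 --version
# or, if the interpreter was installed with a versioned name (e.g. IUS/SCL):
python3.6 --version
```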

Create Virtual Environments

First, we will need to create a folder that we’ll use to store the virtual environments. I recommend that you do not create these environments inside your project folders. I have a folder called ‘python-venv‘ in my home directory, which I use for all my virtual environments.

We create a virtual environment using the Python ‘venv‘ module (note that this is built into Python 3):
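The general form is:

```bash
python3 -m venv <path-to-new-environment>
```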

In this example, I am going to create two environments that will provide different versions of Ansible for me to use.
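For example, using the ‘python-venv‘ folder mentioned above:

```bash
mkdir -p ~/python-venv
python3 -m venv ~/python-venv/ansible2.7.0
python3 -m venv ~/python-venv/ansible2.8.0
```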

This will create the directories ‘ansible2.7.0‘ and ‘ansible2.8.0‘ under ‘~/python-venv‘, which contain the binaries and base libraries for each environment.

Next, we need to activate an environment by sourcing an environment file in the bin directory. Let’s activate the ‘ansible2.7.0‘ environment.
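For example:

```bash
source ~/python-venv/ansible2.7.0/bin/activate
```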

You will notice that the shell will now display the virtual environment that you are using:
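Something similar to this (the user and hostname will obviously differ):

```bash
(ansible2.7.0) [user@ansible ~]$
```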

Currently, this environment has no modules installed. The first thing we will want to do is upgrade ‘pip‘ and ‘setuptools‘.
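For example:

```bash
pip install --upgrade pip setuptools
```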

Next, let’s install Ansible 2.7.0. pip will install the latest version by default, but we can override this using == to force a specific version to be installed.
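For example:

```bash
pip install ansible==2.7.0
```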

Once the installation process completes, we can confirm that the version of Ansible has been installed:
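For example:

```bash
ansible --version
```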


Automate vSphere Virtual Machine and OVA Appliance Deployments Using Ansible

If you have read any of my posts, you will quickly discover that I use Ansible a lot for deploying virtual machines and VMware OVA appliances on vSphere.

Ansible support for VMware is constantly growing and in the latest versions, it has become an essential tool that I use as part of my development process for standing up required infrastructure that is easy and quick to deploy or tear down. The key part of using Ansible for my deployments is that the process is repeatable and consistent.

In this post, I am going to cover some of the core Ansible modules that I use to perform these deployments and provide various use case examples. Once you understand these modules and lay down the groundwork, you’ll be deploying virtual machines or appliances in mere minutes, with the simple editing of some configuration files.

If you are not too familiar with what Ansible is, or what it’s used for, then I recommend that you check out the official documentation. You can also get a brief overview of what Ansible is at cloudacademy.com, and there is a wealth of online training and other material available to get you up to speed.

All examples used in this post, including a fully working Ansible solution, can be found on my Ansible-VMware Github.

Pre-requisites

You will need to have the following packages installed (through PIP) on your Ansible control machine:

  • ansible>=2.8.0
  • pyvmomi>=6.7.1.2018.12

I have also provided a requirements.txt file that you can install using pip:
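For example, from the root of the cloned repository:

```bash
pip install -r requirements.txt
```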

Deploying Virtual Machines

The most basic task that you are ever likely to perform on any vSphere environment is the deployment of a virtual machine. In this section, I am going to show you how Ansible can make the task of spinning up dozens of virtual machines a breeze.

Ansible provides the core module vmware_guest that can be used to create a new virtual machine or clone from an existing virtual machine template.

In these examples, I am going to demonstrate how you can create new virtual machines or clone an existing virtual machine from a template for both Windows and Linux, perform customization and configure hardware and advanced settings.

Create a New Virtual Machine (no template)

This is an example play that will create a virtual machine with no OS installed (not from a template). When the virtual machine is powered on, it will automatically try to PXE boot from the network, which can be useful in deployment pipelines where VMs are bootstrapped in this way.

I have a simple play called ‘vmware_create_virtual_machine.yml‘, which includes the tasks to create a virtual machine in VMware vSphere.
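A minimal sketch of such a play is shown below; the vCenter connection details, datacenter, cluster, datastore and network names are placeholders rather than values from the original post:

```yaml
---
- name: Create a new virtual machine
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create Linux_VM
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: false
        datacenter: DC01
        cluster: Cluster01
        folder: /DC01/vm
        name: Linux_VM
        state: poweredon
        guest_id: centos7_64Guest
        disk:
          - size_gb: 20
            type: thin
            datastore: Datastore01
        hardware:
          num_cpus: 1
          memory_mb: 2048
        networks:
          - name: VM Network
            device_type: vmxnet3
      delegate_to: localhost
```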

Many of the properties should be self-explanatory, but we’re creating a virtual machine called Linux_VM, with 1 CPU, 2 GB of memory, a 20 GB thin-provisioned hard disk, etc.

Because we are creating a new virtual machine, the guest_id needs to be provided, which sets the Guest Operating system profile on the VM. You can get the full list of supported guest_ids from the VMware developer support page.

To run the playbook, I invoke the ansible-playbook command.
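For example:

```bash
ansible-playbook vmware_create_virtual_machine.yml
```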

You can see that the execution was successful (ok=1) and that a change was made (changed=1), which means the virtual machine was created. If we take a look at vCenter we can see the virtual machine now exists, with the specified configuration:

Update Virtual Machine

The great thing about Ansible is that if you were to run this play again, it would not try to create another VM. Instead, it will simply exit with a status of OK if it discovers that the specified virtual machine already exists and is powered on.

But what if we made some changes to the configuration that we want to apply to the virtual machine? Well, Ansible will only make these changes to the virtual machine if the ‘state‘ parameter has been set to ‘present‘ in the play. Also, if you are making configuration changes to hardware, then the virtual machine may also need to be powered off first (Ansible will display an error if this is required).

So let’s assume that the virtual machine is powered off and we want to enable CPU and memory hot add support. We simply add these configurations under the hardware section:
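Continuing the sketch above, the hardware section would gain the two hot-add flags:

```yaml
        hardware:
          num_cpus: 1
          memory_mb: 2048
          hotadd_cpu: true
          hotadd_memory: true
```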

Now if we run the play again:

And we can see that a reconfigure task is performed on the VM in vCenter server.

Make sure to check the documentation for all the parameters that can be configured.

IaC for vRealize: Define Dependencies, Manage Versions, Prepare & Release Packages & Deploy Artifacts

Welcome to the third part in the series working with the vRealize Build Tools. At this stage, you should have a fully working CI infrastructure and have all of your vRO code exported using packages and stored in Git repositories. In this post, I will show you how to manage dependencies across your packages and how you can use the Maven plugins to automatically update packages so that they are using the latest dependency versions.

I will also show you how to manage versioning for development (Snapshots) and production (release) code. Once we have our versioning in place, I will detail how to prepare the releases and finally push the artefacts to the Artifactory repositories, which can be picked up by a release pipeline (something I will cover in much more detail in a later post).

I also want to point out that I had to update my previous post to include the git scm connections in your pom.xml files, as this is required for this post, so make sure to go back and check that out.

Understanding Snapshot vs Release Versions

The first thing to understand is the ‘-SNAPSHOT‘ string that is suffixed to package versions. You will have noticed that all of the package versions you created in my previous post were version ‘1.0.0-SNAPSHOT’. This suffix tells Maven that the code inside this package is in a development stage and not suitable for production release. The Maven release plugins understand this and will prevent the dependency handler from including these as valid dependencies for your packages, as this could result in unstable code (though they can be defined manually).

If you are following these posts, then currently, all your packages will be at version 1.0.0-SNAPSHOT. Don’t worry, we will change this later, but for now, accept this as-is.

Define Dependencies

In my previous post, I provided an example of 3 packages that I have created. I also created a table that detailed the ‘groupId‘ and ‘artifactId‘ for these packages. I am going to extend this table to list their dependencies on other packages. I have kept this simple, in that no dependencies exist outside of these 3 packages.

A dependency exists when one package is calling code from another package (typically with a System.getModule()). When dependencies are mapped, their artefacts and required versions are released as part of the release process.

groupId                  artifactId   Dependency
com.simplygeek.library   logger       (none)
com.simplygeek.library   rest         logger
com.simplygeek.library   nsx          logger, rest

I have added the ‘Dependency‘ column, which lists the projects/artefacts that this package depends on. All of my packages depend on ‘logger‘, whereas ‘nsx‘ also depends on ‘rest‘.

We now need to add some XML to the project’s ‘pom.xml‘ file, which is located at the root of the project folder. These are ‘<dependencies>‘ tags that are used to define a set of ‘<dependency>‘ tags for the project. You should insert these tags immediately after the ‘<packaging>‘ tags but before the ‘<scm>‘ tags.

We also want to omit ‘-SNAPSHOT‘ from the version, since we’ll be pushing actual releases.

Below is an example of what the dependency XML insert looks like for the ‘rest‘ package:
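A sketch of the insert for the ‘rest‘ package; the <type>package</type> element is how vRO package dependencies are typically declared with the build tools, but verify against your own generated pom.xml:

```xml
<dependencies>
    <dependency>
        <groupId>com.simplygeek.library</groupId>
        <artifactId>logger</artifactId>
        <version>1.0.0</version>
        <type>package</type>
    </dependency>
</dependencies>
```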

Once you have completed defining your dependencies, save the ‘pom.xml‘ file and make sure to commit the changes to git.

Repeat this process for all of your projects.

Prepare & Release Packages

Maven provides several lifecycle phases and goals that can be used to prepare and perform the releases. We will also need to push the release artefacts to the Artifactory repositories, as this will be queried for dependencies when releasing a package that has any defined.

You should first be focusing on all the core projects that make up most of your dependencies. Also, start with the projects that have dependencies in the lowest order, i.e. the ‘logger‘ project has no dependencies, so that will be released first, ‘rest‘ depends on ‘logger‘, so that will be released next, etc.

For all the release tasks, we’re going to specify the ‘release‘ plugin followed by the goal. We will use the following goals, with example invocations shown after the list:

  • Clean
  • Prepare
  • Perform
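These map to the standard maven-release-plugin goals, invoked from the project root (additional parameters such as release versions can be supplied interactively or on the command line):

```bash
mvn release:clean
mvn release:prepare
mvn release:perform
```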


IaC for vRealize: Manage Existing vRO Code With vRealize Build Tools & Set up Git Repositories

In my previous post on Deploying vRealize Build Tools To Allow Infrastructure As Code for vRA and vRO, I covered how to set up the CI infrastructure and your developer workstation, in preparation for managing your vRO code as projects with Visual Studio Code and Maven. In this post, I will explain how you can work with your existing code base and manage it using the build tools. A major part of this will be creating and managing new projects that will map to our existing code in vRO.

Once projects have been created, I will detail how Git repositories can be used to store and manage your vRO code, and then we can map your project dependencies and allow development teams to work collaboratively, without risk of overwriting the work of others (a major problem when developing using the vRO client). Git is going to bring some very useful processes and methodologies to the table such as branching, tagging and merge conflict resolution.

I also want to point out that I am currently only focusing on Actions, as I believe this is where all your code should exist. I will have follow-up posts that will cover strategies for managing Workflows and other items.

Setting Up GitLab

One thing that I didn’t cover in my previous post was setting up Git. I purposely reserved this topic for this post, as it was more relevant. To start, you will need to have a GitLab server deployed that can be used to create the repositories for storing your vRO projects. You can use GitHub if you wish; it doesn’t matter too much, but using GitLab doesn’t require that your environments have access to the Internet. GitLab also allows you to create groups to organise multiple repositories, which is going to be useful.

If you need to install GitLab, then there is a good guide on VULTR, that details how to install GitLab and enable HTTPS.

Create User

You should be using your own user account when working with Git and not the default root account, so log into the admin area and create a new local account for your personal use. Alternatively, you can configure GitLab to allow users from Active Directory to log in, using the guide provided by GitLab here.

Create a Personal Access Token

Once you have a Git user account set up, you will need to create a Personal Access Token. An Access Token can be used instead of a password when authenticating with Git over HTTPS. This will provide safer storage of user credentials in the Git configuration files.

Click on the profile icon on the top right of the page and select Settings:

On the Settings page select Access Tokens.

On the ‘Add a personal access token‘ page, give the access token a name and the ‘write_repository‘ permission. Set an expiry date if you wish, or leave it blank to never expire.

Once you click ‘Create personal access token‘, the token will be displayed. You will need to copy and save this token somewhere safe as you will not be able to view it again. If you lose this token, you will have to create a new one to replace it.

Create a Group

A Group allows multiple projects/repositories to be created under a single namespace. This is useful when a project spans many repositories and you need to keep these together so that they are easy to locate and manage. Our vRO projects will be using multiple repositories, therefore we’ll create a Group for these. A Group also simplifies granting access to projects, as collaborators can be granted access to the group and, inherently, the projects it contains.

To create a new group, select ‘Groups‘ from the main menu at the top and then select ‘Your groups‘.

Click the ‘New group‘ button and give your group a name and settings that you require (I simply called my group ‘vRO’).

We will create all vRO projects under this new group.

Install Git Client

You will also need to install the Git client for your workstation. You can download the client for your OS here.

Automating vRealize Suite Lifecycle Manager with Ansible – Part 1: Setup and Deploy vIDM and LCM

For many years I have been tasked with building vRealize Automation environments, and one of the biggest pain points has been the deployment and preparation of the IaaS machines. This has usually required special preparation of a Windows template and a number of scripts to get everything configured so that vRA plays nice. This is usually an error-prone process, especially for the larger enterprise deployments. To tackle this problem, VMware released vRealize Suite Lifecycle Manager, which is on version 2.1 as of this writing.

I decided it was time to try this product and see if it lives up to the claims. I was also more interested in the API functionality, and as with all things automation, I typically turn to Ansible. I wasn’t too surprised to discover that, although the deployment is ‘automated’ (depending on your interpretation), there are actually a number of manual steps still required. These include ensuring that the IaaS machines and database are already deployed and properly configured. The vRLCM Create Environment process also provides validation and pre-checks, along with scripts that can be used to prepare the machines.

With the preparation of these playbooks, I set out to automate the following:

  • Deployment of a single VMware vIDM appliance;
  • Deployment and initial configuration of a single vRealize Suite Lifecycle Manager appliance;
  • Deployment of vRealize Automation IaaS Servers (Windows VMs), in multiple deployment scenarios.
  • Creation of vRealize Automation environment through LCM.

This post will focus on deploying vRSLCM and vIDM with a follow-up post on the vRA deployments.

However, in my attempts to make this a set of one-click processes, I wasn’t quite able to get that far (I got pretty close). This was mainly due to some limitations with the vRSLCM API (certificates cannot be automated, for example). I will discuss these limitations throughout this post, and if I find workarounds, I’ll provide an update.

I should also point out that this is quite experimental, and although I have done all that I can to make these workflows as idempotent as possible, the limitations of the LCM API have made this quite difficult. These playbooks are best used as a one-time-only deployment, at least for LCM itself.

Environment Preparation

In my environment, I have a dedicated virtual machine running CentOS 7.x that I develop and run my playbooks on (you might call this the Ansible control machine).

Environment Overview

OS        CentOS 7.x
Ansible   2.8.1 (2.8 is the minimum requirement)
Python    3.6 (installed from the EPEL repository)

Prerequisites

The following pre-requisites are required:

  • DNS A/PTR records created for vRSLCM and vIDM appliances.

Prepare Environment

Ensure that the system is up-to-date by running:
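For example:

```bash
sudo yum update -y
```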

Install yum-utils
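For example:

```bash
sudo yum install -y yum-utils
```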

Install Python 3

You will need to ensure that Python 3.6 is installed on your Ansible host. I am using the EPEL repository, but you may decide to use IUS or SCL to install these packages, so the package names may differ. Refer to the relevant documentation for installing Python 3 using these repositories, if required.
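A sketch for an EPEL-based install:

```bash
sudo yum install -y epel-release
sudo yum install -y python36
```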

Install GIT

Git will be used to clone my Ansible vRSLCM playbooks repository.
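For example:

```bash
sudo yum install -y git
```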

Create Python Environment

It’s always best to create a Python virtual environment so that packages do not conflict with the base system. I have a directory in the root of my home dir called ‘python-env‘ where I maintain a number of different environments. Once you have a virtual environment set up, you just need to install the required packages from the ‘requirements.txt‘ file (provided later in the git repository).

You can follow these steps below to create a virtual environment:
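A sketch, using an arbitrary environment name under the ‘python-env‘ directory mentioned above:

```bash
mkdir -p ~/python-env
python3 -m venv ~/python-env/ansible-vrslcm
source ~/python-env/ansible-vrslcm/bin/activate
```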

You will notice that the shell will now display the virtual environment that you are using:
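Something similar to this (the user and hostname will differ):

```bash
(ansible-vrslcm) [user@ansible ~]$
```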

It’s also a good idea to ensure the latest versions of pip and setuptools are installed.
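For example:

```bash
pip install --upgrade pip setuptools
```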


IaC for vRealize: Deploying vRealize Build Tools To Allow Infrastructure As Code for vRA and vRO

As any vRealize Orchestrator developer will tell you, managing code outside of the appliance is difficult. I recently wrote a post about Using Visual Studio Code for your vRealize Orchestrator Development, where I highlighted some of the challenges with this. The issue is that we’re not given the freedom to use any IDE we want, easily run unit tests on our code or do continuous integration with tools like Jenkins.

I did mention that a couple of solutions were on their way; one of these was the internal tooling that VMware’s CoE team currently uses for their vRO development (you can read the article here: https://blogs.vmware.com/management/2018/11/automating-at-scale-with-vro-and-vra.html). It wasn’t possible to get access to these tools without engaging with CoE and forking out a bit of cash.

That is until now, as VMware has released these tools as a new fling. The fling is currently in preview, but you can check it out here: https://labs.vmware.com/flings/vrealize-build-tools. I think this is quite an exciting time for VMware developers as these tools could finally change the way we develop and manage our code and integrate into the wider developer ecosystem.

This is my first blog on this topic, but if I find these tools useful, there will be plenty more to follow. Getting the environment set up to use these tools is not straightforward and has several dependencies. These include deploying supporting infrastructure such as JFrog Artifactory, preparing all the required artifacts that are sourced from the vRO server, and getting the workstation set up to create and manage packages.

Deploy and Configure Platform

Before the developer can begin using the vRealize Build Tools, the supporting platform has to be deployed and configured. This consists of setting up an Artifactory server to store Maven artifacts and build dependencies and preparing the artifact repositories.

Deploy Artifactory Server (skip if you already have this deployed)

This section will detail how to set up the Artifactory server and required dependencies. Note that the details below only deploy a single Artifactory node with the database instance running on the same machine. For a production environment, it is recommended to deploy Artifactory with high availability and connect it to external/dedicated database instances.

Install Java Development Kit (JDK)

JFrog Artifactory requires Java JDK 8 or above to be installed and the JAVA_HOME variable configured. I am using the open-source version of these tools (OpenJDK). Install it using the following command:
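A sketch, assuming a CentOS/RHEL system using the OpenJDK packages:

```bash
sudo yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
```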

Add the following lines to ‘/etc/profile‘ to set the ‘JAVA_HOME‘ environment variable and add the Java bin directory to the path.
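Something like the following; the JAVA_HOME path is an assumption and may differ depending on the exact JDK build installed (check under /usr/lib/jvm):

```bash
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export PATH=$PATH:$JAVA_HOME/bin
```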

Then source the file and check that the variables have been correctly configured:
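For example:

```bash
source /etc/profile
echo $JAVA_HOME
java -version
```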
