How to restart vRealize Orchestrator 8.x (vRO)

Different Ways to restart Orchestrator 8.x [CB10101]

If you are new to vRO or coming from vRO 7.x, you may find restarting vRO a little tricky and might want to know how to restart it in an orderly way to avoid service failures, a corrupt configuration, etc. Historically, the 7.x version of vRO had a restart button in its VAMI interface which generally restarted it gracefully, but version 8.x dropped that ability. However, there are new ways that we’ll see today in this post.

  1. via vSphere – restart VM guest OS
  2. via SSH – pod recreation
  3. via SSH – run deploy.sh
  4. via Control Center
  5. Older ways to restart vRO services
    1. via SSH – restart services
    2. via Control Center – Startup Options
    3. via vRA VAMI – for embedded vRO

via vSphere – restart VM guest OS

  • Click Virtual Machines in the VMware Host Client inventory and select the vRO VM.
  • To restart a virtual machine, right-click the virtual machine and select Power > Restart Guest OS.

via SSH – pod recreation

  • One way is to scale the deployments down to ZERO replicas, which basically destroys their pods. You can do so by copying and pasting these commands on your vRO server over an SSH session.
kubectl scale deployment orchestration-ui-app --replicas=0 -n prelude
kubectl scale deployment vco-app --replicas=0 -n prelude
sleep 120
kubectl scale deployment orchestration-ui-app --replicas=1 -n prelude
kubectl scale deployment vco-app --replicas=1 -n prelude
  • Another way is to delete these pods directly using the commands below (substitute the actual pod names as listed by kubectl -n prelude get pods). After deletion, Kubernetes automatically recreates the pods.
kubectl delete pod vco-app -n prelude
kubectl delete pod orchestration-ui-app -n prelude

Now monitor until both pods are fully recreated (3/3 and 1/1) using this command:

kubectl -n prelude get pods

When all services are listed as Running or Completed, vRealize Orchestrator is ready to use. Pod creation generally takes 5-7 minutes.

via SSH – run deploy.sh

  • Log in to the vRO appliance using SSH or VMRC.
  • To stop all services, run /opt/scripts/deploy.sh --onlyClean
  • To shut down the appliance, run /opt/scripts/deploy.sh --shutdown
  • To start all services, run /opt/scripts/deploy.sh
  • Validate that the deployment has finished by reviewing the output of the deploy.sh script.
  • Once the command execution completes, ensure that all of the pods are running correctly with the following command: kubectl get pods --all-namespaces

When all services are listed as Running or Completed, vRealize Orchestrator is ready to use.

via Control Center

  • Go to Control Center.
  • Open System Properties and add a new property.
  • This will automatically restart vRO in about 2 minutes.

Older ways to restart vRO services

There are some older ways of restarting vRO and its services that apply to vRO 6.x and 7.x only. They are no longer valid for version 8.x and are listed here just for the record.

via SSH – restart services

  • Open an SSH session and run these commands to stop the vRO services (use start or restart in place of stop as needed).
service vco-server stop && service vco-configurator stop

via Control Center – Startup Options

  • Open Control Center and go to Startup Options.
  • Click the Restart button.

via vRA VAMI – for embedded vRO

  • Open vRA VAMI Interface and go to vRA -> Orchestrator settings.
  • Select the service type and click the Restart button.

That’s all in this post. Please comment below if you use a method not mentioned here; I’ll be happy to add it. And don’t forget to share this post. #vRORocks

An Introduction to Cloud-Config Scripting for Linux based VMs in vRA Cloud Templates

Disclaimer This article is referenced from digitalocean.com

  1. Introduction
  2. General Information about Cloud-Config
  3. YAML Formatting
  4. User and Group Management
  5. Change Passwords for Existing Users
  6. Write Files to the Disk
  7. Update or Install Packages on the Server
  8. Configure SSH Keys for User Accounts and the SSH Daemon
  9. Set Up Trusted CA Certificates
  10. Configure resolv.conf to Use Specific DNS Servers
  11. Run Arbitrary Commands for More Control
  12. Shutdown or Reboot the Server
  13. Conclusion
  14. References

Introduction

The cloud-init program that is available on recent distributions (only Ubuntu and CentOS at the time of this writing) is able to consume and execute data from the user-data field. This process behaves differently depending on the format of the information it finds. One of the most popular formats for scripts within user-data is the cloud-config file format.

Cloud-config files are special scripts designed to be run by the cloud-init process. These are generally used for initial configuration on the very first boot of a server. In this guide, we will be discussing the format and usage of cloud-config files.

General Information about Cloud-Config

The cloud-config format implements a declarative syntax for many common configuration items, making it easy to accomplish many tasks. It also allows you to specify arbitrary commands for anything that falls outside of the predefined declarative capabilities.

This “best of both worlds” approach lets the file act like a configuration file for common tasks, while maintaining the flexibility of a script for more complex functionality.

YAML Formatting

The file is written using the YAML data serialization format. The YAML format was created to be easy to understand for humans and easy to parse for programs.

YAML files are generally fairly intuitive to understand when reading them, but it is good to know the actual rules that govern them.

Some important rules for YAML files are:

  • Indentation with whitespace indicates the structure and relationship of the items to one another. Items that are more indented are sub-items of the first item with a lower level of indentation above them.
  • List members can be identified by a leading dash.
  • Associative array entries are created by using a colon (:) followed by a space and the value.
  • Blocks of text are indented. To indicate that the block should be read as-is, with the formatting maintained, use the pipe character (|) before the block.

Let’s take these rules and analyze an example cloud-config file, paying attention only to the formatting:

#cloud-config
users:
  - name: demo
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDf0q4PyG0doiBQYV7OlOxbRjle026hJPBWD+eKHWuVXIpAiQlSElEBqQn0pOqNJZ3IBCvSLnrdZTUph4czNC4885AArS9NkyM7lK27Oo8RV888jWc8hsx4CD2uNfkuHL+NI5xPB/QT3Um2Zi7GRkIwIgNPN5uqUtXvjgA+i1CS0Ku4ld8vndXvr504jV9BMQoZrXEST3YlriOb8Wf7hYqphVMpF3b+8df96Pxsj0+iZqayS9wFcL8ITPApHi0yVwS8TjxEtI3FDpCbf7Y/DmTGOv49+AWBkFhS2ZwwGTX65L61PDlTSAzL+rPFmHaQBHnsli8U9N6E4XHDEOjbSMRX user@example.com
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDcthLR0qW6y1eWtlmgUE/DveL4XCaqK6PQlWzi445v6vgh7emU4R5DmAsz+plWooJL40dDLCwBt9kEcO/vYzKY9DdHnX8dveMTJNU/OJAaoB1fV6ePvTOdQ6F3SlF2uq77xYTOqBiWjqF+KMDeB+dQ+eGyhuI/z/aROFP6pdkRyEikO9YkVMPyomHKFob+ZKPI4t7TwUi7x1rZB1GsKgRoFkkYu7gvGak3jEWazsZEeRxCgHgAV7TDm05VAWCrnX/+RzsQ/1DecwSzsP06DGFWZYjxzthhGTvH/W5+KFyMvyA+tZV4i1XM+CIv/Ma/xahwqzQkIaKUwsldPPu00jRN user@desktop
runcmd:
  - touch /test.txt

By looking at this file, we can learn a number of important things.

First, each cloud-config file must begin with #cloud-config alone on the very first line. This signals to the cloud-init program that this should be interpreted as a cloud-config file. If this were a regular script file, the first line would indicate the interpreter that should be used to execute the file.

The file above has two top-level directives, users and runcmd. These both serve as keys. The values of these keys consist of all of the indented lines after the keys.

In the case of the users key, the value is a single list item. We know this because the next level of indentation is a dash (-) which specifies a list item, and because there is only one dash at this indentation level. In the case of the users directive, this incidentally indicates that we are only defining a single user.

The list item itself contains an associative array with more key-value pairs. These are sibling elements because they all exist at the same level of indentation. Each of the user attributes are contained within the single list item we described above.

Some things to note are that the strings you see do not require quoting and that there are no unnecessary brackets to define associations. The interpreter can determine the data type fairly easily and the indentation indicates the relationship of items, both for humans and programs.

By now, you should have a working knowledge of the YAML format and feel comfortable working with information using the rules we discussed above.

We can now begin exploring some of the most common directives for cloud-config.

User and Group Management

To define new users on the system, you can use the users directive that we saw in the example file above.

The general format of user definitions is:

#cloud-config
users:
  - first_user_parameter
    first_user_parameter
    
  - second_user_parameter
    second_user_parameter
    second_user_parameter
    second_user_parameter

Each new user should begin with a dash. Each user defines parameters in key-value pairs. The following keys are available for definition:

  • name: The account username.
  • primary-group: The primary group of the user. By default, this will be a group created that matches the username. Any group specified here must already exist or must be created explicitly (we discuss this later in this section).
  • groups: Any supplementary groups can be listed here, separated by commas.
  • gecos: A field for supplementary info about the user.
  • shell: The shell that should be set for the user. If you do not set this, the very basic sh shell will be used.
  • expiredate: The date that the account should expire, in YYYY-MM-DD format.
  • sudo: The sudo string to use if you would like to define sudo privileges, without the username field.
  • lock-passwd: This is set to “True” by default. Set this to “False” to allow users to log in with a password.
  • passwd: A hashed password for the account.
  • ssh-authorized-keys: A list of complete SSH public keys that should be added to this user’s authorized_keys file in their .ssh directory.
  • inactive: A boolean value that will set the account to inactive.
  • system: If “True”, this account will be a system account with no home directory.
  • homedir: Used to override the default /home/<username>, which is otherwise created and set.
  • ssh-import-id: The SSH ID to import from LaunchPad.
  • selinux-user: This can be used to set the SELinux user that should be used for this account’s login.
  • no-create-home: Set to “True” to avoid creating a /home/<username> directory for the user.
  • no-user-group: Set to “True” to avoid creating a group with the same name as the user.
  • no-log-init: Set to “True” to not initiate the user login databases.

Other than some basic information, like the name key, you only need to define the areas where you are deviating from the default or supplying needed data.

One thing that is important for users to realize is that the passwd field should not be used in production systems unless you have a mechanism of immediately modifying the given value. As with all information submitted as user-data, the hash will remain accessible to any user on the system for the entire life of the server. On modern hardware, these hashes can easily be cracked in a trivial amount of time. Exposing even the hash is a huge security risk that should not be taken on any machines that are not disposable.

For an example user definition, we can use part of the example cloud-config we saw above:

#cloud-config
users:
  - name: demo
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDf0q4PyG0doiBQYV7OlOxbRjle026hJPBWD+eKHWuVXIpAiQlSElEBqQn0pOqNJZ3IBCvSLnrdZTUph4czNC4885AArS9NkyM7lK27Oo8RV888jWc8hsx4CD2uNfkuHL+NI5xPB/QT3Um2Zi7GRkIwIgNPN5uqUtXvjgA+i1CS0Ku4ld8vndXvr504jV9BMQoZrXEST3YlriOb8Wf7hYqphVMpF3b+8df96Pxsj0+iZqayS9wFcL8ITPApHi0yVwS8TjxEtI3FDpCbf7Y/DmTGOv49+AWBkFhS2ZwwGTX65L61PDlTSAzL+rPFmHaQBHnsli8U9N6E4XHDEOjbSMRX user@example.com
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDcthLR0qW6y1eWtlmgUE/DveL4XCaqK6PQlWzi445v6vgh7emU4R5DmAsz+plWooJL40dDLCwBt9kEcO/vYzKY9DdHnX8dveMTJNU/OJAaoB1fV6ePvTOdQ6F3SlF2uq77xYTOqBiWjqF+KMDeB+dQ+eGyhuI/z/aROFP6pdkRyEikO9YkVMPyomHKFob+ZKPI4t7TwUi7x1rZB1GsKgRoFkkYu7gvGak3jEWazsZEeRxCgHgAV7TDm05VAWCrnX/+RzsQ/1DecwSzsP06DGFWZYjxzthhGTvH/W5+KFyMvyA+tZV4i1XM+CIv/Ma/xahwqzQkIaKUwsldPPu00jRN user@desktop

To define groups, you should use the groups directive. This directive is relatively simple in that it just takes a list of groups you would like to create.

An optional extension to this is to create a sub-list for any of the groups you are making. This new list will define the users that should be placed in this group:

#cloud-config
groups:
  - group1
  - group2: [user1, user2]

Change Passwords for Existing Users

For user accounts that already exist (the root account is the most pertinent), a password can be supplied by using the chpasswd directive.

Note: This directive should only be used in debugging situations, because, once again, the value will be available to every user on the system for the duration of the server’s life. This is even more relevant in this section because passwords submitted with this directive must be given in plain text.

The basic syntax looks like this:

#cloud-config
chpasswd:
  list: |
    user1:password1
    user2:password2
    user3:password3
  expire: False

The directive contains two associative array keys. The list key will contain a block that lists the account names and the associated passwords that you would like to assign. The expire key is a boolean that determines whether the password must be changed at first boot or not. This defaults to “True”.

One thing to note is that you can set a password to “RANDOM” or “R”, which will generate a random password and write it to /var/log/cloud-init-output.log. Keep in mind that this file is accessible to any user on the system, so it is not any more secure.
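For instance, a minimal hedged sketch (the user name is illustrative) that assigns a random password to an existing account:

#cloud-config
chpasswd:
  list: |
    user1:RANDOM
  expire: False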

Write Files to the Disk

In order to write files to the disk, you should use the write_files directive.

Each file that should be written is represented by a list item under the directive. These list items will be associative arrays that define the properties of each file.

The only required keys in this array are path, which defines where to write the file, and content, which contains the data you would like the file to contain.

The available keys for configuring a write_files item are:

  • path: The absolute path to the location on the filesystem where the file should be written.
  • content: The content that should be placed in the file. For multi-line input, you should start a block by using a pipe character (|) on the “content” line, followed by an indented block containing the content. Binary files should include “!!binary” and a space prior to the pipe character.
  • owner: The user account and group that should be given ownership of the file. These should be given in the “username:group” format.
  • permissions: The octal permissions set that should be given for this file.
  • encoding: An optional encoding specification for the file. This can be “b64” for Base64 files, “gzip” for Gzip compressed files, or “gz+b64” for a combination. Leaving this out will use the default, conventional file type. A sketch using the optional keys follows the basic example below.

For example, we could write a file to /test.txt with the contents:

Here is a line.
Another line is here.

The portion of the cloud-config that would accomplish this would look like this:

#cloud-config
write_files:
  - path: /test.txt
    content: |
      Here is a line.
      Another line is here.
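If you also need the optional keys, a hedged sketch follows that writes the same two lines, this time supplying owner, permissions, and Base64-encoded content (the Base64 string decodes to the two lines above):

#cloud-config
write_files:
  - path: /test.txt
    owner: root:root
    permissions: '0644'
    encoding: b64
    content: SGVyZSBpcyBhIGxpbmUuCkFub3RoZXIgbGluZSBpcyBoZXJlLgo=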

Update or Install Packages on the Server

To manage packages, there are a few related settings and directives to keep in mind.

To update the apt database on Debian-based distributions, you should set the package_update directive to “true”. This is synonymous with calling apt-get update from the command line.

The default value is actually “true”, so you only need to worry about this directive if you wish to disable it:

#cloud-config
package_update: false

If you wish to upgrade all of the packages on your server after it boots up for the first time, you can set the package_upgrade directive. This is akin to an apt-get upgrade executed manually.

This is set to “false” by default, so make sure you set this to “true” if you want the functionality:

#cloud-config
package_upgrade: true

To install additional packages, you can simply list the package names using the “packages” directive. Each list item should represent a package. Unlike the two commands above, this directive will function with either yum or apt managed distros.

These items can take one of two forms. The first is simply a string with the name of the package. The second form is a list with two items. The first item of this new list is the package name, and the second item is the version number:

#cloud-config
packages:
  - package_1
  - package_2
  - [package_3, version_num]

The “packages” directive will set apt_update to true, overriding any previous setting.

Configure SSH Keys for User Accounts and the SSH Daemon

You can manage SSH keys in the users directive, but you can also specify them in a dedicated ssh_authorized_keys section. These will be added to the first defined user’s authorized_keys file.

This takes the same general format of the key specification within the users directive:

#cloud-config
ssh_authorized_keys:
  - ssh_key_1
  - ssh_key_2

You can also generate the SSH server’s private keys ahead of time and place them on the filesystem. This can be useful if you want to give your clients the information about this server beforehand, allowing it to trust the server as soon as it comes online.

To do this, we can use the ssh_keys directive. This can take the key pairs for RSA, DSA, or ECDSA keys using the rsa_private, rsa_public, dsa_private, dsa_public, ecdsa_private, and ecdsa_public sub-items.

Since formatting and line breaks are important for private keys, make sure to use a block with the pipe character (|) when specifying these. Also, you must include the begin key and end key lines for your keys to be valid.

#cloud-config
ssh_keys:
  rsa_private: |
    -----BEGIN RSA PRIVATE KEY-----
    your_rsa_private_key
    -----END RSA PRIVATE KEY-----

  rsa_public: your_rsa_public_key

Set Up Trusted CA Certificates

If your infrastructure relies on keys signed by an internal certificate authority, you can set up your new machines to trust your CA cert by injecting the certificate information. For this, we use the ca-certs directive.

This directive has two sub-items. The first is remove-defaults, which, when set to true, will remove all of the normal certificate trust information included by default. This is usually not needed and can lead to some issues if you don’t know what you are doing, so use with caution.

The second item is trusted, which is a list of entries, each containing a trusted CA certificate:

#cloud-config
ca-certs:
  remove-defaults: true
  trusted:
    - |
      -----BEGIN CERTIFICATE-----
      your_CA_cert
      -----END CERTIFICATE-----

Configure resolv.conf to Use Specific DNS Servers

If you have configured your own DNS servers that you wish to use, you can manage your server’s resolv.conf file by using the resolv_conf directive. This currently only works for RHEL-based distributions.

Under the resolv_conf directive, you can manage your settings with the nameservers, searchdomains, domain, and options items.

The nameservers directive should take a list of the IP addresses of your name servers. The searchdomains directive takes a list of domains and subdomains to search in when a user specifies a host but not a domain.

The domain sets the domain that should be used for any unresolvable requests, and options contains a set of options that can be defined in the resolv.conf file.

If you are using the resolv_conf directive, you must ensure that the manage-resolv-conf directive is also set to true. Not doing so will cause your settings to be ignored:

#cloud-config
manage-resolv-conf: true
resolv_conf:
  nameservers:
    - 'first_nameserver'
    - 'second_nameserver'
  searchdomains:
    - first.domain.com
    - second.domain.com
  domain: domain.com
  options:
    option1: value1
    option2: value2
    option3: value3

Run Arbitrary Commands for More Control

If none of the managed actions that cloud-config provides works for what you want to do, you can also run arbitrary commands. You can do this with the runcmd directive.

This directive takes a list of items to execute. These items can be specified in two different ways, which will affect how they are handled.

If the list item is a simple string, the entire item will be passed to the sh shell process to run.

The other option is to pass a list, each item of which will be executed in a similar way to how execve processes commands. The first item will be interpreted as the command or script to run, and the following items will be passed as arguments for that command.

Most users can use either of these formats, but the flexibility enables you to choose the best option if you have special requirements. Any output will be written to standard out and to the /var/log/cloud-init-output.log file:

#cloud-config
runcmd:
  - [ sed, -i, -e, 's/here/there/g', some_file]
  - echo "modified some_file"
  - [cat, some_file]

Shutdown or Reboot the Server

In some cases, you’ll want to shutdown or reboot your server after executing the other items. You can do this by setting up the power_state directive.

This directive has four sub-items that can be set. These are delay, timeout, message, and mode.

The delay specifies how long into the future the restart or shutdown should occur. By default, this will be “now”, meaning the procedure will begin immediately. To add a delay, users should specify, in minutes, the amount of time that should pass using the +<num_of_mins> format.

The timeout parameter takes a unit-less value that represents the number of seconds to wait for cloud-init to complete before initiating the delay countdown.

The message field allows you to specify a message that will be sent to all users of the system. The mode specifies the type of power event to initiate. This can be “poweroff” to shut down the server, “reboot” to restart the server, or “halt” to let the system decide which is the best action (usually shutdown):

#cloud-config
power_state:
  timeout: 120
  delay: "+5"
  message: Rebooting in five minutes. Please save your work.
  mode: reboot

Conclusion

The above examples represent some of the more common configuration items available when running a cloud-config file. There are additional capabilities that we did not cover in this guide. These include configuration management setup, configuring additional repositories, and even registering with an outside URL when the server is initialized.

You can find out more about some of these options by checking the /usr/share/doc/cloud-init/examples directory. For a practical guide to help you get familiar with cloud-config files, you can follow our tutorial on how to use cloud-config to complete basic server configuration here.

References

Getting Started: vRealize Orchestrator Script Environments [CB10098]

  1. Introduction
  2. Prerequisite
  3. Procedure
  4. Calling modules & variables
    1. For Node.js
    2. For Python
    3. For PowerShell
  5. Sample Node.js Script

Introduction

Do you use a lot of Polyglot scripts in vRO? Are you tired of creating bundles every time you work on a Python, Node.js or PowerShell script which uses modules and libraries which are not provided out-of-the-box by vRealize Orchestrator? Probably vRO guys at VMware heard your prayers this time.

From vRO 8.8 onwards, you can add modules and libraries directly as dependencies in your vRO actions and scriptable tasks. How cool is that!

As we know, in earlier versions you could only add dependencies by bundling them as a ZIP package, which is not only a tiring additional step but also makes editing and understanding those scripts a real nightmare. But not anymore.

In this post, we will see a detailed procedure on how to set up a script environment in your vRO (>8.8). I am going with Node.js, but an almost identical process can be followed for the other languages as well. We will use an advanced date & time library called MomentJS, available at https://momentjs.com/, but you can use any other module or library of your choice.


Note Similarly to other vRealize Orchestrator objects such as workflows and actions, environments can be exported to other vRealize Orchestrator deployments as part of a package, which means they are also a part of version control.


Prerequisite

  • vRealize Orchestrator 8.8 or greater

Procedure

  • Log in to the vRealize Orchestrator Client.
  • Navigate to Assets > Environments, and click New Environment.
  • Under the General tab, enter a name for your environment.
  • (Optional) Enter a description, version number, tags, and group permissions for the environment.
  • Under the Definition tab, click the Add button under Dependencies.

You can also change the Memory Limit to 128, 512, 1024, etc., depending on the number and size of packages that you will be using. In my personal experience, using PowerShell modules requires more than the default.

  • Provide the package name and version that you want to install. For Node.js, passing latest will get you the most recent package.

Tip The package name is the same as what you would use with package managers like npm while installing that package.


  • Once you click the Create button, you should see “Environment successfully created”.
  • Under the Download Logs tab, check if the libraries are already installed.

Here, I have installed two modules, moment and moment-timezone as you can see from the logs.

  • In the Environment variables, you can provide a variable that you want to use as a part of this environment.
  • Create an action or a workflow with a scriptable item and select the Runtime Environment as the one that you have created. I have selected moment.
  • Play with your script. Don’t forget to call the modules and environment variables.

Calling modules & variables

For Node.js

const myModule = require('moment');
const envVar = process.env.VAR_NAME;

For Python

import os
import myModule
envVar = os.environ.get('VAR_NAME')

For PowerShell

Import-Module myModule
$env:VAR_NAME

Sample Node.js Script

exports.handler = (context, inputs, callback) => {
    //----------------- Don't edit above this line -----------------//
    const moment = require('moment'); // **IMPORTANT**
    const tz = require('moment-timezone'); // **IMPORTANT**
    const indianTimeZone = process.env.TIMEZONE_IN; // read the environment variable in Node.js
    console.log(moment().format('MMMM Do YYYY, h:mm:ss a'));
    console.log(moment().format('dddd'));
    console.log(moment().format("MMM Do YY"));
    console.log(moment().format('YYYY [escaped] YYYY'));
    console.log(moment().format());
    var jul = moment("2022-07-20T12:00:00Z");
    var dec = moment("2022-12-20T12:00:00Z");
    console.log(jul.tz('America/Los_Angeles').format('ha z')); // 5am PDT
    console.log(dec.tz(indianTimeZone).format('ha z')); // time in the zone set by TIMEZONE_IN
    //----------------- Don't edit below this line -----------------//
    callback(undefined, {
        status: "done"
    });
};

That’s it on this post.

Getting Started: Cloud Assembly Code Examples [CB10094]

  1. Introduction
  2. Samples
    1. A blank template
    2. Literals
    3. Environment variables
    4. Resource variables
    5. Resource self variables
    6. Cluster count index
    7. Conditions
    8. Arithmetic operators
    9. String concatenation
    10. Operators [ ] and .
    11. Construction of map
    12. Construction of array
    13. Functions
    14. Referencing input parameters
    15. Optional Inputs
    16. Explicit dependencies
    17. Property bindings
    18. Encrypt access credentials
    19. Passing inputs to ABXs
    20. Fetching value from Property Groups
    21. Adding a secret property
    22. Enum & oneOf (Dropdowns in Input form)
    23. Dynamic Enums (binding with vRO action)
    24. vRO Custom DataTypes
    25. Resource Flags
    26. cloudConfig (cloud-init & Cloudbase-Init)
  3. Examples of vSphere resources
  4. Market Place
  5. Other ways to create Cloud Assembly templates
    1. Cloud template cloning
    2. Uploading and downloading
    3. Integrating Cloud Assembly with a repository
  6. References

Introduction

Starting your journey on vRA 8.x can be a little challenging. One of the more demanding areas is creating blueprints, or should I say, cloud templates in Cloud Assembly. Converting your existing vRA 7.x blueprints to vRA 8.x cloud templates requires some clarity on YAML development. Even though a good deal is already provided out of the box, the YAML acts as code that can single-handedly define all your cloud deployments, and getting the desired state out of it can be tricky. Hence, I would like to share some quick bits on what we can do in the vRA-provided YAML editor to get the job done faster.

Samples

We will start with the basics and then will move to more advanced template options.

A blank template

name: templateName
formatVersion: 1
inputs: {}
resources: {}

Literals

The following literals are supported:

  • Boolean (true or false)
  • Integer
  • Floating point
  • String. Backslash escapes double quote, single quote, and backslash itself: " is escaped as \", ' is escaped as \', and \ is escaped as \\. Quotes only need to be escaped inside a string enclosed with the same type of quote, as shown in the following example: "I am a \"double quoted\" string inside \"double quotes\"."
  • Null

Environment variables

Environment names:

  • orgId
  • projectId
  • projectName
  • deploymentId
  • deploymentName
  • blueprintId
  • blueprintVersion
  • blueprintName
  • requestedBy (user)
  • requestedAt (time)
${env.blueprintId}
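As a hedged sketch (the resource and tag names are just examples), an environment variable can be referenced anywhere an expression is allowed, for instance to tag a machine with the requesting user:

resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
      tags:
        - key: requestedBy
          value: '${env.requestedBy}'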

Resource variables

Resource variables let you bind to resource properties from other resources

format: resource.RESOURCE_NAME.PROPERTY_NAME

${resource.db.id}
${resource.db.networks[0].address}
${resource.app.id} (Return the string for non-clustered resources, where count isn't specified. Return the array for clustered resources.)
${resource.app[0].id} (Return the first entry for clustered resources.)

Resource self variables

Resource self variables are allowed only for resources supporting the allocation phase. Resource self variables are only available (or only have a value set) after the allocation phase is complete.

${self.address} (Return the address assigned during the allocation phase.)

Note that for a resource named resource_x, self.property_name and resource.resource_x.property_name are the same and are both considered self-references.

Cluster count index

${count.index == 0 ? "primary" : "secondary"} (Return the node type for clustered resources.)

Limitations:

Use of count.index for resource allocation is not supported. For example, the following capacity expression fails when it references the position within an array of disks created at input time.

inputs:
  disks:
    type: array
    minItems: 0
    maxItems: 12
    items:
      type: object
      properties:
        size:
          type: integer
          title: Size (GB)
          minSize: 1
          maxSize: 2048
resources:
  Cloud_vSphere_Disk_1:
    type: Cloud.vSphere.Disk
    properties:
      capacityGb: '${input.disks[count.index].size}'
      count: '${length(input.disks)}'

Conditions

  • Equality operators are == and !=.
  • Relational operators are < > <= and >=.
  • Logical operators are && || and !.
  • Conditionals use the pattern: condition-expression ? true-expression : false-expression
${input.count < 5 && input.size == 'small'}
${input.count < 2 ? "small" : "large"}

Arithmetic operators

Operators are +, -, *, / and %.

${(input.count + 5) * 2}

String concatenation

${'ABC' + 'DEF'}

Operators [ ] and .

The expression follows ECMAScript in unifying the treatment of the [ ] and . operators.

So, expr.identifier is equivalent to expr["identifier"]. The identifier is used to construct a literal whose value is the identifier, and then the [ ] operator is used with that value.

${resource.app.networks[0].address}

In addition, when a property includes a space, delimit with square brackets and double quotes instead of using dot notation.

Incorrect:

input.operating system

Correct:

input["operating system"]

Construction of map

${{'key1':'value1', 'key2':input.key2}}

Construction of array

${['key1','key2']}

Functions

${function(arguments...)}

${to_lower(resource.app.name)}

List of supported functions

Function – Description
abs(number) – Absolute number value
avg(array) – Return average of all values from array of numbers
base64_decode(string) – Return decoded base64 value
base64_encode(string) – Return base64 encoded value
ceil(number) – Returns the smallest (closest to negative infinity) value that is greater than or equal to the argument and is equal to a mathematical integer
contains(array, value) – Check if array contains a value
contains(string, value) – Check if string contains a value
digest(value, type) – Return digest of value using supported type (md5, sha1, sha256, sha384, sha512)
ends_with(subject, suffix) – Check if subject string ends with suffix string
filter_by(array, filter) – Return only the array entries that pass the filter operation. filter_by([1,2,3,4], x => x >= 2 && x <= 3) returns [2, 3]. filter_by({'key1':1, 'key2':2}, (k,v) => v != 1) returns [{"key2": 2}]
floor(number) – Returns the largest (closest to positive infinity) value that is less than or equal to the argument and is equal to a mathematical integer
format(format, values…) – Return a formatted string using Java Class Formatter format and values
from_json(string) – Parse json string
join(array, delim) – Join array of strings with a delimiter and return a string
json_path(value, path) – Evaluate path against value using XPath for JSON
keys(map) – Return keys of map
length(array) – Return array length
length(string) – Return string length
map_by(array, operation) – Return each array entry with an operation applied to it. map_by([1,2], x => x * 10) returns [10, 20]. map_by([1,2], x => to_string(x)) returns ["1", "2"]. map_by({'key1':1, 'key2':2}, (k,v) => {k:v*10}) returns [{"key1":10},{"key2":20}]
map_to_object(array, keyname) – Return an array of key:value pairs of the specified key name paired with values from another array. map_to_object(resource.Disk[*].id, "source") returns an array of key:value pairs that has a key field called source paired with disk ID strings. Note that map_by(resource.Disk[*].id, id => {'source':id}) returns the same result
matches(string, regex) – Check if string matches a regex expression
max(array) – Return maximum value from array of numbers
merge(map, map) – Return a merged map
min(array) – Return minimum value from array of numbers
not_null(array) – Return the first entry which is not null
now() – Return current time in ISO-8601 format
range(start, stop) – Return a series of numbers in increments of 1 that begins with the start number and ends just before the stop number
replace(string, target, replacement) – Replace occurrences of the target string within the string with the replacement string
reverse(array) – Reverse entries of array
slice(array, begin, end) – Return slice of array from begin index to end index
split(string, delim) – Split string with a delimiter and return array of strings
starts_with(subject, prefix) – Check if subject string starts with prefix string
substring(string, begin, end) – Return substring of string from begin index until end index
sum(array) – Return sum of all values from array of numbers
to_json(value) – Serialize value as json string
to_lower(str) – Convert string to lower case
to_number(string) – Parse string as number
to_string(value) – Return string representation of the value
to_upper(str) – Convert string to upper case
trim(string) – Remove leading and trailing spaces
url_encode(string) – Encode string using url encoding specification
uuid() – Return randomly generated UUID
values(map) – Return values of map
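As a hedged illustration (the property values are only examples), these functions can be used inside any expression, for instance:

resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
      name: '${to_lower(env.deploymentName)}'
      tags:
        - key: created
          value: '${now()}'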

Referencing input parameters

In the resources section, you can reference an input parameter using ${input.property-name} syntax. If a property name includes a space, delimit with square brackets and double quotes instead of using dot notation: ${input["property name"]}.

inputs:
  sshKey:
    type: string
    maxLength: 500
resources:
  frontend:
    type: Cloud.Machine
    properties:
      remoteAccess:
        authentication: publicPrivateKey
        sshKey: '${input.sshKey}'

Important In cloud template code, you cannot use the word input except to indicate an input parameter.


Optional Inputs

Inputs are usually required and marked with an asterisk. To make an input optional, set an empty default value as shown.

owner:
  type: string
  minLength: 0
  maxLength: 30
  title: Owner Name
  description: Account Owner
  default: ''

List of available input properties

Property – Description
const – Used with oneOf. The real value associated with the friendly title.
default – Prepopulated value for the input. The default must be of the correct type. Do not enter a word as the default for an integer.
description – User help text for the input.
encrypted – Whether to encrypt the input that the user enters, true or false. Passwords are usually encrypted. You can also create encrypted properties that are reusable across multiple cloud templates. See Secret Cloud Assembly properties.
enum – A drop-down menu of allowed values. Use the following example as a format guide: enum: - value 1 - value 2
format – Sets the expected format for the input. For example, (25/04/19) supports date-time. Allows the use of the date picker in Service Broker custom forms.
items – Declares items within an array. Supports number, integer, string, Boolean, or object.
maxItems – Maximum number of selectable items within an array.
maxLength – Maximum number of characters allowed for a string. For example, to limit a field to 25 characters, enter maxLength: 25.
maximum – Largest allowed value for a number or integer.
minItems – Minimum number of selectable items within an array.
minLength – Minimum number of characters allowed for a string.
minimum – Smallest allowed value for a number or integer.
oneOf – Allows the user input form to display a friendly name (title) for a less friendly value (const). If setting a default value, set the const, not the title. Valid for use with types string, integer, and number.
pattern – Allowable characters for string inputs, in regular expression syntax. For example, '[a-z]+' or '[a-z0-9A-Z@#$]+'
properties – Declares the key:value properties block for objects.
readOnly – Used to provide a form label only.
title – Used with oneOf. The friendly name for a const value. The title appears on the user input form at deployment time.
type – Data type of number, integer, string, Boolean, or object. Important: A Boolean type adds a blank checkbox to the request form. Leaving the box untouched does not make the input False. To set the input to False, users must check and then clear the box.
writeOnly – Hides keystrokes behind asterisks in the form. Cannot be used with enum. Appears as a password field in Service Broker custom forms.

Explicit dependencies

Sometimes, a resource needs another to be deployed first. For example, a database server might need to exist first, before an application server can be created and configured to access it.

An explicit dependency sets the build order at deployment time, or for scale in or scale out actions. You can add an explicit dependency using the graphical design canvas or the code editor.

  • Design canvas option—draw a connection starting at the dependent resource and ending at the resource to be deployed first.
  • Code editor option—add a dependsOn property to the dependent resource, and identify the resource to be deployed first. An explicit dependency creates a solid arrow in the canvas (see the sketch below).
Explicit dependency
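A minimal hedged sketch of the code editor option (resource names are illustrative), where the application machine is created only after the database machine exists:

resources:
  DBTier:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
  AppTier:
    type: Cloud.Machine
    dependsOn:
      - DBTier
    properties:
      image: ubuntu
      flavor: small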

Property bindings

Sometimes, a resource property needs a value found in a property of another resource. For example, a backup server might need the operating system image of the database server that is being backed up, so the database server must exist first.

Also called an implicit dependency, a property binding controls build order by waiting until the needed property is available before deploying the dependent resource. You add a property binding using the code editor.

  • Edit the dependent resource, adding a property that identifies the resource and property that must exist first. A property binding creates a dashed arrow in the canvas (see the sketch below).
Implicit dependency or property binding
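A hedged sketch of a property binding (again, resource names are illustrative), where the backup machine reuses the image property of the database machine, so the database machine must be created first:

resources:
  DBTier:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
  BackupTier:
    type: Cloud.Machine
    properties:
      image: '${resource.DBTier.image}'
      flavor: small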

Encrypt access credentials

resources:
  apitier:
    type: Cloud.Machine
    properties:
      cloudConfig: |
        #cloud-config
        runcmd:
          - export apikey=${base64_encode(input.username:input.password)}
          - curl -i -H 'Accept:application/json' -H 'Authorization:Basic :$apikey' http://example.com

Passing inputs to ABXs

resources:
  db-tier:
    type: Cloud.Machine
    properties:
      # Command to execute
      abxRunScript_script: mkdir bp-dir
      # Time delay in seconds before the script is run
      abxRunScript_delay: 120
      # Type of the script: shell (Linux) or powershell (Windows)
      abxRunScript_shellType: linux
      # Could be aws, azure, etc.
      abxRunScript_endpointType: '${self.endpointType}'

Fetching value from Property Groups

appSize: '${propgroup.propGroup_name.const_size}'
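As a hedged sketch (the property group and key names above are placeholders), the same lookup can feed a resource property directly:

resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: '${propgroup.propGroup_name.const_size}'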

Adding a secret property

type: Cloud.Machine
properties:
  name: ourvm
  image: mint20
  flavor: small
  remoteAccess:
    authentication: publicPrivateKey
    sshKey: '${secret.ourPublicKey}'
    username: root

Enum & oneOf (Dropdowns in Input form)

Both enum and oneOf are used to provide a set of predefined values in the input form. The only difference between them is that oneOf allows a friendly title.

inputs:
  osversion:
    type: string
    title: Select OS Version
    default: SLES12
    enum:
      - SLES12
      - SLES15
      - RHEL7
inputs:
  platform:
    type: string
    title: Deploy to
    oneOf:
      # Title is what the user sees, const is the tag for the endpoints.
      - title: AWS
        const: aws
      - title: Azure
        const: azure
      - title: vSphere
        const: vsphere
    default: vsphere

Dynamic Enums (binding with vRO action)

input1:
    type: string
    title: Environment
    default: ''
    $dynamicEnum: '/data/vro-actions/com.org.utils/getEnvironment?environment={{vmenvironment}}'
.
.
.
.
input1:
    type: string
    title: podId
    $dynamicEnum: /data/vro-actions/com.org.helpers/getPod

This calls a vRO action getEnvironment inside the action module com.org.utils with one input, environment. Actions with no inputs can be called without using quotes.

vRO Custom DataTypes

inputs:
  accountName:
    type: string
    title: Account name
    encrypted: true   
  displayName:
    type: string
    title: Display name   
  password:
    type: string
    title: Password
    encrypted: true 
  confirmPassword:
    type: string
    title: Password
    encrypted: true   
  ouContainer: 
    type: object
    title: AD OU container
    $data: 'vro/data/inventory/AD:OrganizationalUnit'
    properties:
        id:
            type: string
        type:
            type: string    

Resource Flags

Cloud Assembly includes several cloud template settings that adjust how a resource is handled at request time. Resource flag settings aren’t part of the resource object properties schema. For a given resource, you add the flag settings outside of the properties section as shown.

resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    preventDelete: true
    properties:
      image: coreos
      flavor: small
      attachedDisks:
        - source: '${resource.Cloud_Volume_1.id}'
  Cloud_Volume_1:
    type: Cloud.Volume
    properties:
      capacityGb: 1

Available resource flags

Resource Flag – Description
allocatePerInstance – When set to true, resource allocation can be customized for each machine in a cluster. The default is false, which allocates resources equally across the cluster, resulting in the same configuration for each machine. In addition, day 2 actions might not be separately possible for individual resources. Per-instance allocation allows count.index to correctly apply the configuration for individual machines. For code examples, see Machine and disk clusters in Cloud Assembly.
createBeforeDelete – Some update actions require that the existing resource be removed and a new one be created. By default, removal is first, which can lead to conditions where the old resource is gone but the new one wasn’t created successfully for some reason. Set this flag to true if you need to make sure that the new resource is successfully created before deleting the previous one.
createTimeout – The Cloud Assembly default timeout for resource allocate, create, and plan requests is 2 hours (2h). In addition, a project administrator can set a custom default timeout for these requests, applicable throughout the project. This flag lets you override any defaults and set the individual timeout for a specific resource operation. See also updateTimeout and deleteTimeout.
deleteTimeout – The Cloud Assembly default timeout for delete requests is 2 hours (2h). In addition, a project administrator can set a different default timeout for delete requests, applicable throughout the project. This flag lets you override any defaults and set the individual timeout for a specific resource delete operation. See also updateTimeout and createTimeout.
dependsOn – This flag identifies an explicit dependency between resources, where one resource must exist before creating the next one. For more information, see Creating bindings and dependencies between resources in Cloud Assembly.
dependsOnPreviousInstances – When set to true, create cluster resources sequentially. The default is false, which simultaneously creates all resources in a cluster. For example, sequential creation is useful for database clusters where primary and secondary nodes must be created, but secondary node creation needs configuration settings that connect the node to an existing, primary node.
forceRecreate – Not all update actions require that the existing resource be removed and a new one be created. If you want an update to remove the old resource and create a new one, independent of whether the update would have done so by default, set this flag to true.
ignoreChanges – Users of a resource might reconfigure it, changing the resource from its deployed state. If you want to perform a deployment update but not overwrite the changed resource with the configuration from the cloud template, set this flag to true.
ignorePropertiesOnUpdate – Users of a resource might customize certain properties, and those properties might be reset to their original cloud template state during an update action. To prevent any properties from being reset by an update action, set this flag to true.
preventDelete – If you need to protect a created resource from accidental deletion during updates, set this flag to true. If a user deletes the deployment, however, the resource is deleted.
recreatePropertiesOnUpdate – Users of a resource might reconfigure properties, changing the resource from its deployed state. During an update, a resource might or might not be recreated. Resources that aren’t recreated might remain with properties in changed states. If you want a resource and its properties to be recreated, independent of whether the update would have done so by default, set this flag to true.
updateTimeout – The Cloud Assembly default timeout for update requests is 2 hours (2h). In addition, a project administrator can set a different default timeout for update requests, applicable throughout the project. This flag lets you override any defaults and set the individual timeout for a specific resource update operation. See also deleteTimeout and createTimeout.

cloudConfig (cloud-init & Cloudbase-Init)

You can add a cloudConfig section to Cloud Assembly template code, in which you add machine initialization commands that run at deployment time. cloudConfig command formats are:

  • Linux—initialization commands follow the open cloud-init standard.
  • Windows—initialization commands use Cloudbase-init.

Linux cloud-init and Windows Cloudbase-init don’t share the same syntax. A cloudConfig section for one operating system won’t work in a machine image of the other operating system.

To ensure correct interpretation of commands, always include the pipe character cloudConfig: | as shown. Learn more about cloud-config here.

cloudConfig: |
        #cloud-config
        repo_update: true
        repo_upgrade: all
        packages:
         - apache2
         - php
         - php-mysql
         - libapache2-mod-php
         - php-mcrypt
         - mysql-client
        runcmd:
         - mkdir -p /var/www/html/mywordpresssite && cd /var/www/html && wget https://wordpress.org/latest.tar.gz && tar -xzf /var/www/html/latest.tar.gz -C /var/www/html/mywordpresssite --strip-components 1
         - i=0; while [ $i -le 5 ]; do mysql --connect-timeout=3 -h ${DBTier.networks[0].address} -u root -pmysqlpassword -e "SHOW STATUS;" && break || sleep 15; i=$((i+1)); done
         - mysql -u root -pmysqlpassword -h ${DBTier.networks[0].address} -e "create database wordpress_blog;"
         - mv /var/www/html/mywordpresssite/wp-config-sample.php /var/www/html/mywordpresssite/wp-config.php
         - sed -i -e s/"define( 'DB_NAME', 'database_name_here' );"/"define( 'DB_NAME', 'wordpress_blog' );"/ /var/www/html/mywordpresssite/wp-config.php && sed -i -e s/"define( 'DB_USER', 'username_here' );"/"define( 'DB_USER', 'root' );"/ /var/www/html/mywordpresssite/wp-config.php && sed -i -e s/"define( 'DB_PASSWORD', 'password_here' );"/"define( 'DB_PASSWORD', 'mysqlpassword' );"/ /var/www/html/mywordpresssite/wp-config.php && sed -i -e s/"define( 'DB_HOST', 'localhost' );"/"define( 'DB_HOST', '${DBTier.networks[0].address}' );"/ /var/www/html/mywordpresssite/wp-config.php
         - service apache2 reload

If a cloud-init script behaves unexpectedly, check the captured console output in /var/log/cloud-init-output.log when troubleshooting. For more about cloud-init, see the cloud-init documentation.


Examples of vSphere resources

vSphere virtual machine with CPU, memory, and operating system

resources:
  demo-machine:
    type: Cloud.vSphere.Machine
    properties:
      name: demo-machine
      cpuCount: 1
      totalMemoryMB: 1024
      image: ubuntu

vSphere machine with a datastore resource

resources:
  demo-vsphere-disk-001:
    type: Cloud.vSphere.Disk
    properties:
        name: DISK_001
        type: 'HDD'
        capacityGb: 10
        dataStore: 'datastore-01'
        provisioningType: thick

vSphere machine with an attached disk

resources:
  demo-vsphere-disk-001:
    type: Cloud.vSphere.Disk
    properties:
      name: DISK_001
      type: HDD
      capacityGb: 10
      dataStore: 'datastore-01'
      provisioningType: thin
  demo-machine:
    type: Cloud.vSphere.Machine
    properties:
      name: demo-machine
      cpuCount: 2
      totalMemoryMB: 2048
      imageRef: >-
        https://packages.vmware.com/photon/4.0/Rev1/ova/photon-ova-4.0-ca7c9e9330.ova
      attachedDisks:
        - source: '${demo-vsphere-disk-001.id}'

vSphere machine with a dynamic number of disks

inputs:
  disks:
    type: array
    title: disks
    items:
      title: disks
      type: integer
    maxItems: 15
resources:
  Cloud_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      image: Centos
      flavor: small
      attachedDisks: '${map_to_object(resource.Cloud_Volume_1[*].id, "source")}'
  Cloud_Volume_1:
    type: Cloud.Volume
    allocatePerInstance: true
    properties:
      capacityGb: '${input.disks[count.index]}'
      count: '${length(input.disks)}'

vSphere machine from a snapshot image. Append a forward slash and the snapshot name. The snapshot image can be a linked clone.

resources:
  demo-machine:
    type: Cloud.vSphere.Machine
    properties:
      imageRef: 'demo-machine/snapshot-01'
      cpuCount: 1
      totalMemoryMB: 1024

vSphere machine in a specific folder in vCenter

resources:
  demo-machine:
    type: Cloud.vSphere.Machine
    properties:
      name: demo-machine
      cpuCount: 2
      totalMemoryMB: 1024
      imageRef: ubuntu
      resourceGroupName: 'myFolder'

vSphere machine with multiple NICs

resources:
  demo-machine:
    type: Cloud.vSphere.Machine
    properties:
      image: ubuntu
      flavor: small
      networks:
        - network: '${network-01.name}'
          deviceIndex: 0
        - network: '${network-02.name}'
          deviceIndex: 1
  network-01:
    type: Cloud.vSphere.Network
    properties:
      name: network-01
  network-02:
    type: Cloud.vSphere.Network
    properties:
      name: network-02

vSphere machine with an attached tag in vCenter

resources:
  demo-machine:
    type: Cloud.vSphere.Machine
    properties:
      flavor: small
      image: ubuntu
      tags:
        - key: env
          value: demo

vSphere machine with a customization spec

resources:
  demo-machine:
      type: Cloud.vSphere.Machine
      properties:
        name: demo-machine
        image: ubuntu
        flavor: small
        customizationSpec: Linux

vSphere machine with remote access

inputs:
  username:
    type: string
    title: Username
    description: Username
    default: testUser
  password:
    type: string
    title: Password
    default: VMware@123
    encrypted: true
    description: Password for the given username
resources:
  demo-machine:
    type: Cloud.vSphere.Machine
    properties:
      flavor: small
      imageRef: >-
        https://cloud-images.ubuntu.com/releases/16.04/release-20170307/ubuntu-16.04-server-cloudimg-amd64.ova
      cloudConfig: |
        ssh_pwauth: yes
        chpasswd:
          list: |
            ${input.username}:${input.password}
          expire: false
        users:
          - default
          - name: ${input.username}
            lock_passwd: false
            sudo: ['ALL=(ALL) NOPASSWD:ALL']
            groups: [wheel, sudo, admin]
            shell: '/bin/bash'
        runcmd:
          - echo "Defaults:${input.username}  !requiretty" >> /etc/sudoers.d/${input.username}

Market Place

The Marketplace provides VMware Solution Exchange cloud templates and images that help you build your template library and access supporting OVA or OVFs.

Other ways to create Cloud Assembly templates

Cloud template cloning

To clone a template, go to Design, select a source, and click Clone. You clone a cloud template to create a copy based on the source, then assign the clone to a new project or use it as starter code for a new application.

Uploading and downloading

You can upload, download, and share cloud template YAML code in any way that makes sense for your site. You can even modify template code using external editors and development environments.


Note A good way to validate shared template code is to inspect it in the Cloud Assembly code editor on the design page.


Integrating Cloud Assembly with a repository

An integrated git source control repository can make cloud templates available to qualified users as the basis for a new deployment. See How do I use Git integration in Cloud Assembly.

References

Valid JavaScript Variable names in vRO

  1. Introduction
  2. Reserved words
  3. Non-reserved words that act like reserved words
  4. Valid identifier names
  5. Examples inside vRO
  6. Can you use them in between scripts?
  7. JavaScript variable name validator

Introduction

Did you know var π = Math.PI; is syntactically valid JavaScript code in vRealize Orchestrator? If the answer is “No”, then this article may be of some interest to you.

Let’s see which Unicode glyphs are allowed as JavaScript variable names, or identifiers as the ECMAScript specification calls them. This is more of a fun activity, but it can give you some insight into selecting the best (or may I say worst, LOL! – we’ll talk about this later) variable names in your vRO scripts.

Reserved words

The ECMAScript 5.1 spec says:

An Identifier is an IdentifierName that is not a ReservedWord.

The spec describes four groups of reserved words: keywords, future reserved words, null literals and boolean literals.

Keywords are tokens that have special meaning in JavaScript:

break, case, catch, continue, debugger, default, delete, do, else, finally, for, function, if, in, instanceof, new, return, switch, this, throw, try, typeof, var, void, while, and with.

Future reserved words are tokens that may become keywords in a future revision of ECMAScript: class, const, enum, export, extends, import, and super. Some future reserved words only apply in strict mode: implements, interface, let, package, private, protected, public, static, and yield.

The null literal is, simply, null.

There are two boolean literals: true and false. None of the above are allowed as variable names.

Non-reserved words that act like reserved words

The NaN, Infinity, and undefined properties of the global object are immutable or read-only properties in ES5.1. So even though var NaN = 42; in the global scope wouldn’t throw an error, it wouldn’t actually do anything. To avoid confusion, I’d suggest avoiding the use of these variable names.

// In the global scope:
var NaN = 42;
console.log(NaN); // NaN

…but elsewhere:
(function() {
    var NaN = 42;
    console.log(NaN); // 42
}());

In strict mode, eval and arguments are disallowed as variable names too. (They kind of act like keywords in that case.) The old ES3 spec defines some reserved words that aren’t reserved words in ES5 anymore: 

int, byte, char, goto, long, final, float, short, double, native, throws, boolean, abstract, volatile, transient, and synchronized. 

It’s probably a good idea to avoid these as well, for optimal backwards compatibility.

Valid identifier names

As mentioned before, the spec differentiates between identifier names and identifiers. Identifiers form a subset of identifier names, since identifiers have the extra restriction that no reserved words are allowed. For example, var is a valid identifier name, but it’s an invalid identifier.
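A small sketch of that distinction (the reserved word is the only thing that changes):

// "var" is an IdentifierName, but because it is also a ReservedWord it is not a valid Identifier:
// var var = 1;   // SyntaxError
// A near-identical name that is not reserved works fine:
var vaR = 1;
System.log(vaR); // 1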

So, what is allowed in an identifier name?

An identifier must start with $, _, or any character in the Unicode categories “Uppercase letter (Lu)”, “Lowercase letter (Ll)”, “Titlecase letter (Lt)”, “Modifier letter (Lm)”, “Other letter (Lo)”, or “Letter number (Nl)”.

Unicode escape sequences are also permitted in an IdentifierName, where they contribute a single character. […] A UnicodeEscapeSequence cannot be used to put a character into an IdentifierName that would otherwise be illegal.

This means that you can use var \u0061 and var a interchangeably. Similarly, since var 1 is invalid, so is var \u0031.
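A minimal sketch of that equivalence in a vRO scriptable task:

var \u0061 = "written with an escape sequence";
System.log(a); // prints the string above: \u0061 and a are the same identifier

// var \u0031 = 1; // invalid, exactly like var 1 = 1;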

Two IdentifierNames that are canonically equivalent according to the Unicode standard are not equal unless they are represented by the exact same sequence of code units.

So, ma\u00F1ana and man\u0303ana are two different variable names, even though they’re equivalent after Unicode normalization.

Examples inside vRO

The following are all examples of valid JavaScript variable names that will work in vRO:

var λ = "lambda";
System.log(λ);

// How convenient!
var π = Math.PI;
System.log("Value of PI: "+π);

// Sometimes, you just have to use the Bad Parts of JavaScript:
var ಠ_ಠ = "Angry";
System.log(ಠ_ಠ);

// Code, Y U NO WORK?!
var ლ_ಠ益ಠ_ლ = 42;
System.log(ლ_ಠ益ಠ_ლ);

// Obfuscate boring variable names for great justice
var \u006C\u006F\u006C\u0077\u0061\u0074 = 'heh';

// Did you know about the [.] syntax?
var ᱹ = 1;
System.log([1, 2, 3][ᱹ] === 2);

// While perfectly valid, this doesn’t work in most browsers:
var foo\u200Cbar = 42;

// This is *not* a bitwise left shift (`<<`):
var 〱〱 = 2;
System.log(〱〱 << 〱〱); // This is though


// Fun with Roman numerals
var Ⅳ = 4;
var Ⅴ = 5;
System.log(Ⅳ + Ⅴ); // 9

// OK, it's gone too far now

var Hͫ̆̒̐ͣ̊̄ͯ͗͏̵̗̻̰̠̬͝ͅE̴̷̬͎̱̘͇͍̾ͦ͊͒͊̓̓̐_̫̠̱̩̭̤͈̑̎̋ͮͩ̒͑̾͋͘Ç̳͕̯̭̱̲̣̠̜͋̍O̴̦̗̯̹̼ͭ̐ͨ̊̈͘͠M̶̝̠̭̭̤̻͓͑̓̊ͣͤ̎͟͠E̢̞̮̹͍̞̳̣ͣͪ͐̈T̡̯̳̭̜̠͕͌̈́̽̿ͤ̿̅̑Ḧ̱̱̺̰̳̹̘̰́̏ͪ̂̽͂̀͠ = 'Zalgo';


System.log("Hͫ̆̒̐ͣ̊̄ͯ͗͏̵̗̻̰̠̬͝ͅE̴̷̬͎̱̘͇͍̾ͦ͊͒͊̓̓̐_̫̠̱̩̭̤͈̑̎̋ͮͩ̒͑̾͋͘Ç̳͕̯̭̱̲̣̠̜͋̍O̴̦̗̯̹̼ͭ̐ͨ̊̈͘͠M̶̝̠̭̭̤̻͓͑̓̊ͣͤ̎͟͠E̢̞̮̹͍̞̳̣ͣͪ͐̈T̡̯̳̭̜̠͕͌̈́̽̿ͤ̿̅̑Ḧ̱̱̺̰̳̹̘̰́̏ͪ̂̽͂̀͠ "+ Hͫ̆̒̐ͣ̊̄ͯ͗͏̵̗̻̰̠̬͝ͅE̴̷̬͎̱̘͇͍̾ͦ͊͒͊̓̓̐_̫̠̱̩̭̤͈̑̎̋ͮͩ̒͑̾͋͘Ç̳͕̯̭̱̲̣̠̜͋̍O̴̦̗̯̹̼ͭ̐ͨ̊̈͘͠M̶̝̠̭̭̤̻͓͑̓̊ͣͤ̎͟͠E̢̞̮̹͍̞̳̣ͣͪ͐̈T̡̯̳̭̜̠͕͌̈́̽̿ͤ̿̅̑Ḧ̱̱̺̰̳̹̘̰́̏ͪ̂̽͂̀͠);

Can you use them in between scripts?

Interestingly, we can use these variable names not only inside scripts but also outside them, i.e. as inputs, outputs, and attributes. See this action’s input:

As we can see, vRO raises an alarm here, saying that Parameter name can contain only letters, numbers, and the symbols "_" and "$". Parameter name cannot start or end with ".".

However, when I run this action, it runs perfectly fine.

I think it is quite amusing. If used dexterously, it could give another dimension to your code. 👽

JavaScript variable name validator

Even if you learned these rules by heart, it would be virtually impossible to memorize every character in the different Unicode categories that are allowed. If you were to summarize all these rules in a single ASCII-only regular expression for JavaScript, it would be 11,236 characters long.

For that reason, you can go to mothereff.in/js-variables, a web tool that checks whether a given string is a valid variable name in JavaScript.

That’s it in this post. If you want to know more about the JavaScript implementation in vRO, check out my other posts.


Change your OAuth 2.0 token strategy using Scripting API

Introduction

Adding a REST host in vRO is quite easy: just run the Add a REST Host workflow, provide some details, and select an authentication type (viz. None, Basic, OAuth, OAuth 2.0, Digest, NTLM, or Kerberos), or create transient REST hosts using the Scripting API. However, if you are using OAuth 2.0 to authenticate your REST hosts in vRO, you should pay a little attention here.

With vRO 8.7, you now have an option to select a strategy for how the OAuth 2.0 bearer access token is sent with your authorized requests: as an oauth_token query parameter or as an Authorization header. The newly introduced and recommended strategy is to use the Authorization header to send the token when making requests to the host.

Flow of OAuth 2.0 authorization. Source: authlib.org

Concern

The main reason to make this change is that the Authorization header is more secure: unlike a query parameter, the token does not end up in the server logs of incoming requests. Also, the query parameter strategy will be deprecated in future releases of vRO. So it is probably time to update your OAuth 2.0 authorized REST hosts. One way is to simply run the Update a REST Host library workflow, but we will go the other way, i.e. updating it using the Scripting API.
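For context, the two strategies only differ in where the bearer token travels; these are illustrative requests, not captured from vRO:

# Query parameter strategy (token ends up in server access logs; to be deprecated)
GET /api/items?oauth_token=<access-token> HTTP/1.1

# Authorization header strategy (recommended)
GET /api/items HTTP/1.1
Authorization: Bearer <access-token>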

Process

Create an action, copy-paste this script, which takes two inputs, host (REST:RESTHost) and token (string), and run it.

/**
 * @function changeOauth2Strategy
 * @version 1.0.0
 * @param {REST:RESTHost} host
 * @param {string} token
 * @returns {REST:RESTHost}
 */
function changeOauth2Strategy(host, token) {
    var oldAuth = host.authentication;
    var oauth20Type = "OAuth 2.0";
    if (oldAuth.type !== oauth20Type) {
        System.log("REST host isn't using " + oauth20Type);
        return host; // nothing to change
    }
    var oldStrategy = oldAuth.rawAuthProperties[1]; // or use oldAuth.getRawAuthProperty(1)
    var newStrategy;
    if (oldStrategy === "Query parameter") {
        newStrategy = "Authorization header";
    } else {
        newStrategy = "Query parameter";
    }
    var newAuth = RESTAuthenticationManager.createAuthentication("OAuth 2.0", [token, newStrategy]);
    host.authentication = newAuth;
    return RESTHostManager.updateHost(host);
}
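To verify the change from the same action, here is a minimal usage sketch (it assumes the action’s two inputs are named host and token, as above):

var updatedHost = changeOauth2Strategy(host, token);
System.log("Host '" + updatedHost.name + "' now sends its token via: " + updatedHost.authentication.rawAuthProperties[1]);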

You can verify what token sending strategy your REST host uses by navigating to Inventory > REST-Host, selecting your host, and checking the Authorization entry.

Middle Way

There is more to this story. The old scripting approach of creating an OAuth 2.0 authentication by passing only the token parameter, without a token sending strategy, still works and, for backwards compatibility, preserves the past behavior of using the query parameter strategy. This means that the code below will work for both types of strategies (in case you have both types of REST hosts in your inventory).

host.authentication = RESTAuthenticationManager.createAuthentication("OAuth 2.0", ["<token>"]);


Don’t Ignore .gitignore in vRO 8.x

  1. Introduction
  2. Advantages
  3. Procedure
  4. Limitations
  5. Flush Git from vRO
  6. Available on gitignore.io

Introduction

I am sure everyone is using the Git feature of vRO 8.x, and you are probably pushing and pulling your vRO content across your DEV, STAGING, UAT, and PROD environments. The question is, “Are you using .gitignore while pushing code from vRO, and why do you need it?”. This question was asked in the vRO community (link here), and since the comment I made answered the query, I thought I should write about it. Let’s see how .gitignore can ease your life during code promotions: it gives you the fluidity to keep environment-specific values safe, keeps the gibberish away from the PROD vRO, and keeps your Assembly vRO’s codebase clean and compact.


Note If you are looking to setup git in vRealize Orchestrator, follow this official link https://blogs.vmware.com/management/2022/05/git-repository-integration-in-vrealize-orchestrator.html.


Advantages

I will not go into the details of how .gitignore works; you can learn more about .gitignore online. Fundamentally, .gitignore allows you to exclude specific files or directories from being pushed to the remote repository. Hence, using .gitignore can drastically improve the performance of your CI/CD pipelines. For vRealize Orchestrator, I have come across mainly the following types of files that I want to exclude:

  • Test actions & workflows: Development generally relies on hit and trial, which leaves a lot of post-development debris.
  • Environment-specific assets like configuration elements: These may include passwords or API keys.
  • Packages: Since you have Git power, you don’t really need packages to move your code; if you are still using them, they are probably not as critical.
  • Default library content: Why would you copy something that is already there in the first place?
  • Multi-node workflows: These dummy workflows (their names start with VCO@) don’t need to be pushed across.

Question? Can you think of any other type of file that should be ignored? Comment down your answer.


You can use literal file names, wildcard symbols, etc. to get them listed. You can also use negating patterns. A negating pattern is a pattern that starts with an exclamation mark (!) and negates (re-includes) any file that is ignored by a previous pattern. The exception to this rule is that you cannot re-include a file if its parent directory is excluded.

Procedure

In vRO, go to a repository that is already set up under Administration -> Git Repositories.

In our case, it will be github.com/imtrinity94/cloudblogger.co.in.

Simply create a .gitignore file in the root of your repo branch and list all the files you want to ignore, like:

# Default Resource Elements
Resources/Library/*

# Library Workflows
Workflows/Library

# Environment Specific variables
Configurations/Vault/API Keys

# Negating pattern to explicitly include tenant specific information
!Configurations/Tenants/*

# Multi-Node Workflows
Workflows/VCO@*

# Packages
**/Packages/

# Test actions
Actions/org/mayank/test

The other way round is to ignore everything and use negating patterns to re-include only very specific things.

# All packages, JavaScript actions,  resources, configurations, policies, Polyglot action bundles (.py, .ps1 & node.js) and Workflows
**/Packages/
**/Actions/
**/Resources/
**/Configurations/
**/ActionBundles/
**/Workflows/
**/Policy Templates/

This will prevent all the packages, resources, and configurations from being committed to the corresponding repo.
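If you then want to re-include a few specific items from such a blanket rule, remember that Git cannot re-include a file whose parent directory is ignored, so the directories have to be un-ignored level by level. A hypothetical sketch (the module path is a placeholder):

# Ignore all actions except one module, un-ignoring each parent level first
Actions/*
!Actions/org/
Actions/org/*
!Actions/org/mycompany/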

This should be enough. If you want to be more precise, the .gitignore tutorial is here: https://www.atlassian.com/git/tutorials/saving-changes/gitignore

Limitations

  • If you add .gitignore after pushing everything even once, .gitignore will work for new files, but Git will still track the already-pushed files (see the workaround sketched after this list).
  • The .gitignore file will become part of vRO.
  • You may lose version control on ignored objects.
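For the first limitation, a common workaround (run in a local clone of the remote repository, not on the vRO appliance; the path is only an example) is to drop the already-pushed files from the Git index so that .gitignore takes over from the next commit:

git rm -r --cached "Workflows/Library"
git commit -m "Stop tracking library workflows"
git push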

Flush Git from vRO

In case you have already set up Git in your vRO and want to make a fresh start, due to unwanted tracking of files or being stuck in an undesirable Git state, you can locally remove the Git config from your vRO by deleting the path below. Use this with caution though.

/data/vco/usr/lib/vco/app-server/data/git/__SYSTEM.git

Available on gitignore.io

As we all know, https://gitignore.io is the largest collection of .gitignore files, so I thought, why shouldn’t I add vRO to their list as well? That’s why I created a pull request at https://github.com/toptal/gitignore/pull/460, and it got approved. Now, if you go there and search for vRealizeOrchestrator, you will get the .gitignore file ready to be consumed in your projects.

Source: https://www.toptal.com/developers/gitignore/api/vrealizeorchestrator

That’s it in this post. Let me know what you think about it in the comments.

vRO Debugger: How to use advanced features

  1. Introduction
  2. Quick Demo video
  3. Step-by-step Guide
    1. Step 1: Reproduce the bug
    2. Step 2: Pause the code with a breakpoint
    3. Step 3: Open Debugger Mode
    4. Step 4: Step through the code and use expressions
    5. Step 5: Apply a fix
  4. Final Note

Introduction

If you are looking for how to use breakpoints and expressions in vRealize Orchestrator, you are probably in the right place. Debugging is crucial for every programming interface, and vRO is not lacking there. The Debug option has existed in vRO for a long time now, but its real power was uncovered with version 8.0 onwards. See how a vRO programmer can quickly use the full potential of the Debugger in the video below, and then we will look at the process in detail.

Quick Demo video

Step-by-step Guide

To understand the whole debugging process, we will take an action with 2 inputs and sum the 2 numbers. The catch is that one input is a number and the other is a string, due to human error.

Step 1: Reproduce the bug

Finding a series of actions that consistently reproduces a bug is always the first step to debugging.

  • Create an action with 2 inputs
  • In the action script, put this code
var sum = number1 + number2;
System.log(number1 + " + " + number2 + " = " + sum);
  • Save and Run it
  • Enter 5 in the number1 text box.
  • Enter 1 in the number2 text box.
  • The log will show 51. The result should be 6. This is the bug we’re going to fix.

Step 2: Pause the code with a breakpoint

A common method for debugging a problem like this is to insert a lot of System.log() or System.debug() statements into the code, in order to inspect values as the script executes.

The System.log() method may get the job done, but breakpoints can get it done faster. A breakpoint lets you pause your code in the middle of its execution, and examine all values at that moment in time. Breakpoints have a few advantages over the System.log() method:

  • With System.log(), you need to manually open the source code, find the relevant code, insert the System.log() statements, and then save it again in order to see the messages in the log console. With breakpoints, you can pause on the relevant code without even knowing how the code is structured.

Let’s see how to use line-of-code breakpoints:

On the left side of the code line numbers, look for a reddish dot and double-click it to make it dark red. You have now set a breakpoint on that line. The next time we debug, the code will pause on the breakpoint you just set.

Step 3: Open Debugger Mode


Important While working on actions, Debugger Mode will only open if there is at least 1 breakpoint in the code. While working on workflows, there are 2 ways to set a breakpoint: either inside the scriptable task or on the item itself. Click the red box on the top left of an item to set a breakpoint.


  • On the top, select Debug
  • You will notice a new tab Debugger in the bottom panel with Watch expressions and Item Variables boxes.
  • The execution will start and will pause at the first breakpoint. The breakpoint where the execution is currently paused can be identified by a red star instead of a red dot.

Step 4: Step through the code and use expressions

Stepping through your code enables you to walk through its execution one line at a time and figure out exactly where it executes in a different order, or gives different results, than you expected.


Tip We can step through the code using Continue, Step Into, Step Over & Step Return buttons.

Continue: An action to take in the debugger that will continue execution until the next breakpoint is reached or the program exits.

Step Into: An action to take in the debugger. If the line does not contain a function it behaves the same as “step over” but if it does the debugger will enter the called function and continue line-by-line debugging there.

Step Over: An action to take in the debugger that will step over a given line. If the line contains a function the function will be executed and the result returned without debugging each line.

Step Return: An action to take in the debugger that returns to the line where the current function was called.


  • Once you click Debug, vRO will suspend the execution at the first breakpoint, which will be on var sum = number1 + number2;
  • Click Continue or Step Into to execute that step and check the value of sum.

The Watch Expressions tab lets you monitor the values of variables over time. As the name implies, Watch Expressions aren’t just limited to variables. You can store any valid JavaScript expression in a Watch Expression. Try it now:

  1. Click the Watch expressions tab.
  2. Click on Click to Add an Expression.
  3. Type parseInt(number1) + parseInt(number2)
  4. Press Enter. It shows parseInt(number1) + parseInt(number2) = 6.

This implies that the type of one or both of the number variables is not a number, as parsing an integer value out of them gives the right result.

Step 5: Apply a fix

You’ve found a fix for the bug. All that’s left is to try it out by editing the code and re-running the demo. You don’t need to leave Debugger mode to apply the fix; you can edit the JavaScript code directly within the workflow editor.
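Based on what the watch expression showed, one possible form of the fix is to parse both inputs before adding them:

var sum = parseInt(number1, 10) + parseInt(number2, 10);
System.log(number1 + " + " + number2 + " = " + sum); // 5 + 1 = 6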

Final Note

That is how you can start using the Debugger in vRO. Over time, you will be able to debug complex code without persistent log statements, as well as build logic on the go while executing the code, using expressions and so on. That’s all in today’s post, see you in other posts. Cheers.

Enhance your VDI 3xperience with ZeeTransformer

Today’s blog post is all about ZeeTransformer, a product by a company called ZeeTim that converts any PC into a thin or zero client using their Linux-based ZeeOS.

It all started when I was looking for a VDI solution for one of my friends who wanted to set up a lab for his new venture. We needed an endpoint solution that would prevent us from having to duplicate effort maintaining Windows desktops in the datacenter as well as on the endpoint. We needed something secure, low maintenance, and performant for the users. Then I came across ZeeTransformer, which was just a bootable key away from our pocket-friendly VDI journey.

I was so impressed with the ease of setup, configurability and PC repurposing capabilities of ZeeTransformer that I thought I should write about it.

ProTip: you can start with the free trial at https://www.zeetim.com/zeetransformer-free-trial/

Overall Experience

Before I go into how I did the installation, I would like to share the features I loved the most about ZeeTransformer:

  • Independent of underlying hardware: Any PC that supports Linux (let me tell you, most of them do 😉) can become a thin client and be part of your VDI solution. This saved us a lot of money 💰 as we repurposed our old systems.
  • Management is a breeze 💨: ZeeConf can centrally manage various aspects like patching, 3rd-party app installations, security, display settings, and a lot more, so managing the solution and applying new settings is just a few clicks away.

Installation

With just a little reading on www.zeetim.com, I was ready to set up the whole process. We were using a set of 10 old PCs for our lab. For managing them, I used the ZeeConf client, which comes along with ZeeTransformer out of the box. Let me show it in a quick and dirty way.

  • Download ZeeTransformer from here on your PC.
  • Extract the downloaded zip and it should contain 2 folders and testing procedure files:
    • ZeeConf
    • ZeeTransformer
  • Connect a USB key to the PC which will be converted to a bootable key.
  • Go to ZeeTransformer folder in the extracted zip.
  • Launch “ZeeTransformerImagerLite.exe”.
  • Select the inserted USB key as a device.
  • Check “Eject after creation” checkbox if it is not checked by default. This will automatically eject the key after creation.
  • Click on Create boot device to transform the USB key.
  • Eject the key after completion.

Once the USB key is created:

  • Connect the USB key to a PC that you want to use as a thin client.
  • Restart the PC and go to the BIOS settings
  • Disable “Secure Boot” option, select the inserted USB key as a boot device, save the settings and restart the PC.
  • Once the PC boots via the USB key, the device is ready to be used.

ZeeTransformer UI

The PC boots up in just a few seconds. The home screen has a minimalistic interface with a few applications pre-installed, like Google Chrome, Mozilla Firefox, Citrix Client, Horizon Client, etc., for easy access to existing infrastructure in bigger organizations.

Now that we have thin client nodes ready to be consumed by end users, the next step is to manage them.

Client Management using ZeeConf

As a part of the complete VDI solution by ZeeTim, ZeeTransformer comes with a central management tool called ZeeConf.

So, after I converted our PCs to ZeeTerms (a ZeeTerm is a PC running ZeeTransformer), I had to install the ZeeConf management tool on my main PC. Let’s see how I did that.

  • On each of the newly created ZeeTerms, open ZeeConf Lite (icon on the desktop) and check the IP address on the home page.
  • On main PC, go to the ZeeConf folder from the folder we extracted earlier.
  • Launch ZeeConf.exe

If the PC and the newly launched ZeeTerm are in the same network, ZeeConf will automatically detect it. If they are on different networks:

  • Click on “Search for new terminals”
  • Enter the IP address of your newly created ZeeTerm and click OK.

Now, once your node is connected, you can manage all sorts of settings like Infrastructure, Display, Packages, Security, Network, Proxy, VPN, etc., as well as configurations for VMware, Nutanix, Citrix, etc.

For further reference, please refer to their official guide here: ZeeConf Client – User Guide.

Licensing using ZeeLicense

If you followed the above process, you will be able to test the ZeeTim VDI solution, because the client will continue to work for 1 hour. However, to use it beyond that, you will need licenses.


ProTip I used the 10 free licenses that ZeeTim provides during registration for our lab setup.


  • Download ZeeLicense Server from the Downloads section.
  • Install it using instructions shown here.
  • Launch the installed ZeeLicense Administrator as an administrator
  • In ZeeLicense Administrator, click License request to request a new license
  • It will open the below window:
  • Fill in the details and click OK.
  • A license request file will be generated in the installation directory of the ZeeLicense Server. (Default: C:\Program Files\ZeeTim\ZeeLicense Server)
  • Send this license request file via email to support@zeetim.com, and ZeeTim will send you back a .zip file containing the requested license.

Extensions

If you are already impressed, let me inform you that there is more to it. Using ZeeTransformer in an enterprise will require other sets of tools as well, like centralized printing solutions, user access management, virtualized infrastructure support, etc. For that, they have a set of enterprise-grade tools like ZeePrint, ZeeOTP, ZeeEdge, etc.

Know more about them at www.zeetim.com

Final Verdict

As the title suggests, the experience was 3x compared to other VDI endpoint options. OK, maybe it’s not 3x, but it is much better than other VDI endpoint solutions out there, at least in our case, as it allowed old PC repurposing and cost us almost nothing with all the great features 👍. Also, the support the ZeeTim team provided was really great. The team reached out to me to see if everything was fine, and we had some great discussions and sessions that helped me understand the process even better.


Tip around System.log() for Big Workflows with Scriptable tasks and actions

  1. Introduction
  2. Concern
  3. Possible Solution
    1. Workflows
    2. Actions

Introduction

It’s quite evident that vRO workflows can grow huge for major provisioning tasks like hardware deployments, vApp deployments, or tenant provisioning, just to name a few. And in a big environment, such workflows can come in large numbers and may be coupled with various other workflows, quite commonly as nested workflows.

Concern

Now, the real pain starts when you have to debug them. For debugging, developers rely mostly on System.log() or System.debug() to know the statuses and variable values up to a particular point, which is really great.

If I talk about myself, I even use a start and a completion log for every scriptable task in a workflow. This always pinpoints the scriptable task that was started but couldn’t complete for some reason and therefore couldn’t print the end log. Let me explain. Say there is a workflow (Create a VM) and there is a scriptable task (Get All VM Names). In this scriptable task, I would add something like

System.log("\"Get all VM Names\" script started");

and at the end of this script

System.log("\"Get all VM Names\" script completed");

Imagine this in a very large workflow; it can really help. But I knew this needed to be improved. Sometimes you change the content of a scriptable task and its definition changes, so you update its name, for example from Get All VM Names to Get All VM Names and IPs. In such cases, you also have to update those start and end log statements. And I hate that!

Possible Solution

I referred to the vRO community (link) and found one interesting way out.

FOR WORKFLOWS

We can use this,

System.log("\""+System.currentWorkflowItem().getDisplayName()+"\"  script started");
System.log("\""+System.currentWorkflowItem().getDisplayName()+"\"  script completed");

This will automatically print the updated name of the scriptable task.

vRO Workflow Screenshot with scriptable tasks | Image by Author

You can also print the item number if you want, using this call:

System.currentWorkflowItem().getName();
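For instance, you could combine the item number and the display name in a single log prefix (a small sketch):

var item = System.currentWorkflowItem();
System.log("[" + item.getName() + "] \"" + item.getDisplayName() + "\" script started");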

FOR ACTIONS (OPTIONAL)

Now, this is a little tricky. We can’t do exactly the same for actions. However, if we consider them as part of a workflow, which means they act like an item and obviously have an item number, then we can print their start and end cycle. We can do so by adding the script below to your actions.

if (System.currentWorkflowItem().getDisplayName())
    System.log("\"" + System.currentWorkflowItem().getDisplayName() + "\" action started");

if (System.currentWorkflowItem().getDisplayName())
    System.log("\"" + System.currentWorkflowItem().getDisplayName() + "\" action ended");
A vRO Action | Image by Author
vRO Workflow Screenshot with scriptable tasks and action | Image by Author

Here, we added a condition just to avoid printing an empty-name log line if you run your actions on their own, outside a workflow.

A vRO Action | Image by Author

Automated Installation of vRO Plugin using Python

If you are looking for a way to quickly install plugins in your vRealize Orchestrator (.dar or .vmoapp files), you can try this python script which uses vRO ControlCenter API at its core.

Prerequisites

The Script

# -*- coding: utf-8 -*-
"""
Automated Installation of vRO Plugin 1.0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is a python script to allow quick installation of plugins inside vRealize Orchestrator.
"""
__author__ = "Mayank Goyal, Harshit Kesharwani"

import base64
import csv
import email
import json
import os
import re
import smtplib
import time

import requests
import urllib3
from pathlib import Path
from string import Template

urllib3.disable_warnings()


def deployPlugin(vroUrl, username, password, filename):
    """Function to deploy plugin. Calls sub-functions like updateInstallPlugin() & finishInstallPlugin()."""
    url = f"https://{vroUrl}/vco-controlcenter/api/plugins/installation"
    print(url)
    files = {'file': open(filename, 'rb')}
    headers = {'Accept': 'application/json'}
    response = requests.post(url, auth=(username, password), headers=headers, files=files, verify=False)
    statusCode = response.status_code
    if statusCode == 201:
        responseData = response.json()
        payloadToUpdate = responseData
        pluginInstallationID = responseData['id']
        pluginName = payloadToUpdate.get("plugins")[0].get("name")
        print(f"pluginInstallationID : {pluginInstallationID}")
        time.sleep(2)
        updateStatus = updateInstallPlugin(vroUrl, username, password, pluginInstallationID, payloadToUpdate)
        if updateStatus:
            time.sleep(2)
            finishStatus = finishInstallPlugin(vroUrl, username, password, pluginInstallationID, pluginName)
            return finishStatus
        else:
            return False
    else:
        print(f"[ERROR]: Failed to execute Deploy request. Error code : {statusCode}")
        return False


def updateInstallPlugin(vroUrl, username, password, pluginInstallationID, payloadToUpdate):
    """Function to update plugin state."""
    url = f"https://{vroUrl}/vco-controlcenter/api/plugins/installation/{pluginInstallationID}"
    # payload = "{\"eulaAccepted\":true,\"forceInstall\":true,\"installPluginStates\":[{\"state\":\"NEW_INSTALLED\"}],\"plugins\":[{\"enabled\":true,\"logLevel\":\"DEFAULT\"}]}"
    t1 = payloadToUpdate.get("plugins")[0]
    t1["enabled"] = True
    t1["logLevel"] = "DEFAULT"
    payloadToUpdate["plugins"] = [t1]
    t2 = {
        "name": str(payloadToUpdate.get("plugins")[0].get("name")),
        "newBuild": str(payloadToUpdate.get("plugins")[0].get("buildNumber")),
        "newVersion": str(payloadToUpdate.get("plugins")[0].get("version")),
        "state": "NEW_INSTALLED"
    }
    payloadToUpdate["installPluginStates"] = [t2]
    payloadToUpdate["eulaAccepted"] = True
    payloadToUpdate["forceInstall"] = True
    payload = json.dumps(payloadToUpdate)
    headers = {'Content-Type': 'application/json'}
    response = requests.request("POST", url, auth=(username, password), headers=headers, data=payload, verify=False)
    statusCode = response.status_code
    if statusCode == 200:
        return True
    else:
        print(f"[ERROR]: Failed to execute Update request. Error code : {statusCode}")
        return False


def finishInstallPlugin(vroUrl, username, password, pluginInstallationID, pluginName):
    """Function to finalize the plugin installation."""
    url = f"https://{vroUrl}/vco-controlcenter/api/plugins/installation/{pluginInstallationID}/finish"
    headers = {'Content-Type': 'application/json'}
    response = requests.request("POST", url, auth=(username, password), headers=headers, verify=False)
    statusCode = response.status_code
    if statusCode == 200:
        responseData = response.json()
        print(f"Enable plugin name {pluginName}")
        time.sleep(2)
        changePluginState(vroUrl, username, password, pluginName)
        return True
    else:
        print(f"[ERROR]: Failed to execute Finish request. Error code : {statusCode}")
        return False


def changePluginState(vroUrl, username, password, pluginName):
    """Function to change set enabled=true which actually completes the installation process."""
    url = f"https://{vroUrl}/vco-controlcenter/api/plugins/state"
    payload = json.dumps([{"enabled": True, "name": pluginName}])
    headers = {'Content-Type': 'application/json'}
    response = requests.request("POST", url, auth=(username, password), headers=headers, data=payload, verify=False)
    statusCode = response.status_code
    if statusCode == 200:
        return True
    else:
        print(f"[ERROR]: Failed to execute changePluginState request. Error code : {statusCode}")
        return False


def checkPluginInstallation(vroUrl, username, password, pluginInstallationID):
    """Function to check if installation is completed or not."""
    url = f"https://{vroUrl}/vco-controlcenter/api/plugins/installation/{pluginInstallationID}"
    headers = {
        'Content-Type': 'application/json',
        'Accept': 'application/json'
    }
    response = requests.request("GET", url, auth=(username, password), headers=headers, verify=False)
    statusCode = response.status_code
    print(f"statusCode : {statusCode}")
    if statusCode == 200:
        return True
    else:
        print(f"[ERROR]: Failed to execute checkPluginInstallation request. Error code : {statusCode}")
        return False


if __name__ == '__main__':
    # User provided data
    vroUrl = input("Enter vRO FQDN: [Eg: fqdn.vro.local] ")
    username = input("Enter username: [Tip: try root] ")
    password = input("Enter password: [Tip: try root password] ")
    filename = input("Enter full path of plugin: [Eg: C:/Users/abcd/Downloads/plugin_file.vmoapp] ")
    deployPlugin(vroUrl, username, password, filename)

Understand the process

For demo purposes, I will show the few steps that you need to install a vRO plugin named JsonPath in an automated fashion.

  • Copy the full path of downloaded plugin with filename itself.
    • C:\Users\mayank\Downloads\cloudblogger tutorial\o11nplugin-jsonpath-1.0.2.vmoapp
  • Run the script and provide values for the required inputs (an example run is sketched after this list)
  • Reload vRO ControlCenter, the desired plugin will be already installed there.
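A run on the workstation might look like this (the script filename and all entered values are placeholders):

C:\Users\mayank> python install_vro_plugin.py
Enter vRO FQDN: [Eg: fqdn.vro.local] vro.lab.local
Enter username: [Tip: try root] root
Enter password: [Tip: try root password] <root-password>
Enter full path of plugin: [Eg: C:/Users/abcd/Downloads/plugin_file.vmoapp] C:/Users/mayank/Downloads/cloudblogger tutorial/o11nplugin-jsonpath-1.0.2.vmoapp
https://vro.lab.local/vco-controlcenter/api/plugins/installation
pluginInstallationID : <installation-id>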


Generate vRO Workflow Runs Report in CSV


TL;DR Download the package from here & edit the inputs and run it to get a CSV Report for last 30 days of Workflow Runs in vRealize Orchestrator.


Many a time, we have environments where we want to monitor the executions of vRealize Orchestrator workflows: how many times a workflow has run, who executed it, what the success rate of a particular workflow was, or maybe just how long it takes to execute. I have seen a few customers who were looking for similar solutions. So, if you are also looking for a way to achieve something similar, this article could be of some help.

As we know, vRO offers a REST API out of the box for all kinds of operations. The same is true for getting the facts and figures of its workflow executions. There are various ways to get that kind of information, but I find using the catalog-service-controller module to be the quickest for my use case. Let’s see it in detail below.

Understand the REST API

The catalog-service-controller module is quite a versatile API module in vRO. You can use it to dig into information for an extensive set of entities. However, the path and method we are mostly interested in today is GET /api/catalog/{namespace}/{type}.

Namespace = System
type = WorkflowToken
/api/catalog/System/WorkflowToken?conditions=state%5Ecompleted%7Cfailed&conditions=endDate%3E"+date30DaysAgo+"&conditions=endDate%3C"+currentDate

where date30DaysAgo gets its value from the getDate30DaysAgo() action and currentDate from the getCurrentDateInISOFormat() action, both available in the vRO package.


Note You can add more states to this query like waiting, canceled, etc. if you are interested in workflow tokens other than completed and failed.


Understand the Code

In the package attached, we have a workflow, Workflow Runs Report Generation, which has the code divided into 3 parts. For ease of operation, the workflow creates a transient REST host using user-provided inputs, but you can also add your vRO as a REST host in the inventory and add this REST URL as a REST operation with the dates as variable inputs. Also, in Part 2/3, only 5 properties are added to the CSV data. You can look up additional properties related to a workflow token if you want a more detail-heavy report.
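Part 1/3 is elided in the snippet below and lives in the attached package; purely as an illustration, here is a minimal sketch of what a transient REST host and the request could look like (the /vco base path, Basic authentication, and all variable names here are assumptions, not the package’s actual code):

var host = RESTHostManager.createHost("vro-transient");                  // throwaway host definition
host.url = "https://" + vroFqdn + "/vco";                                 // assumes the default /vco API base
var transientHost = RESTHostManager.createTransientHostFrom(host);       // not persisted in the inventory
transientHost.authentication = RESTAuthenticationManager.createAuthentication("Basic", ["Shared Session", userName, password]);

var path = "/api/catalog/System/WorkflowToken?conditions=state%5Ecompleted%7Cfailed" +
    "&conditions=endDate%3E" + date30DaysAgo + "&conditions=endDate%3C" + currentDate;
var request = transientHost.createRequest("GET", path, null);
var response = request.execute();                                         // consumed by Part 2/3 below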

//Part 1/3: Invoke Rest Operation
.
.
.

//Part 2/3: Gather data in CSV format
var reportCSV = "Name,Status,Start Date,End Date,Started by\r\n";
if (response.statusCode != 200) { // 200 - HTTP status OK
    System.warn("Request failed with status code " + response.statusCode);
} else {
    var doc = JSON.parse(response.contentAsString);
    System.log("Number of failed or cancelled tokens: " + doc.total);
    for each(var link in doc.link) {
        for each(var attribute in link.attributes) {
            if (attribute.name == "name")
                var wfName = attribute.value;
            if (attribute.name == "state")
                var wfState = attribute.value;
            if (attribute.name == "startDate")
                var wfStartDate = attribute.value;
            if (attribute.name == "endDate")
                var wfEndDate = attribute.value;
            if (attribute.name == "startedBy")
                var wfStartedBy = attribute.value;
        }
        reportCSV += (wfName + "," + wfState + "," + wfStartDate + "," + wfEndDate + "," + wfStartedBy+"\r\n");
    }
}

//Part 3/3: Send CSV data as a mail attachment
.
.
.

In Part 3/3, we simply attach the CSV data as a MIME attachment and send it to the toAddress.

Let’s Run it

  • Import the package available here. If you need help with package import, follow this guide here.
  • Go to workflow Workflow Runs Report Generation and Click Edit
  • Go to the scriptable task and edit these input values.
  • After editing the values, Save it.
  • Click Run.
  • If the workflow executed successfully, you will see in the logs that the mail has been sent to the provided email address.
  • Check your mailbox. If you can’t see the email in your inbox, double-check the email address that you’ve provided or check your SPAM folder.
  • Download the attached CSV file and open it. You will see a sheet that is identical to what you have in your vRO Workflow Runs tab (Performance view OFF). You may also want to sort this sheet on the Start Date column in descending order.

Download vRO Package


That’s it in today’s post. I hope you will like it.
