Create ESXi root account with vRO [CB10104]


TL;DR If you would like to create an ESXi local account using vRO, download this package (in.co.cloudblogger.crudEsxiLocalUser.package) to get started.


  1. Introduction
  2. Classes & Methods
  3. Script for creating a local admin account in ESXi
  4. Demo Video
  5. vRO Package for CRUD operation

Introduction

Many organizations use vRO for host provisioning. Various hardware vendors provide vRO scripting APIs via plugins or REST APIs to manage and provision bare-metal servers. Post-provisioning, there is always a possibility that you would like to access your ESXi host from an account other than root, for several reasons such as security restrictions or limited access. In that case, the best way is to create a fresh account using vRO with the access mode, or let's call it role, that suits your needs. In this post, we will see how to create an ESXi local user account using the vRO Scripting API.

Classes & Methods

As shown below, we use the following classes and methods to retrieve existing accounts; to create, update, and delete accounts; and to change the access mode (role) of those accounts.
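
As a quick reference, these calls come from two host configuration managers (a sketch based on the vSphere API methods used by the script and package actions in this post):

// HostLocalAccountManager: create, update and delete local accounts
host.configManager.accountManager.createUser(accountSpec);
host.configManager.accountManager.updateUser(accountSpec);
host.configManager.accountManager.removeUser(userName);

// HostAccessManager: list accounts and change their access mode (role)
host.configManager.hostAccessManager.retrieveHostAccessControlEntries();
host.configManager.hostAccessManager.changeAccessMode(userName, false, VcHostAccessMode.accessAdmin);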

Script for creating a local admin account in ESXi

Link to gist here.

/**
 *
 * @version 0.0.0
 *
 * @param {VC:HostSystem} host 
 * @param {string} localUserName 
 * @param {SecureString} localUserPassword 
 * @param {string} accessMode 
 * @param {string} localUserDescription 
 *
 * @outputType void
 *
 */
function createEsxiLocalUser(host, localUserName, localUserPassword, accessMode, localUserDescription) {
	if(!host) throw "host parameter not set";
	if(!localUserName || !localUserPassword) throw "Either username or password parameter not set";
	if(!localUserDescription) localUserDescription = "***Account created using vRO***";
	if(localUserDescription.indexOf(localUserPassword) != -1) throw 'Weak Credentials! Avoid putting password string in description';
	
	// Retrieve all system and custom user accounts and fail early if the username is already taken
	var arrExistingLocalUsers = host.configManager.hostAccessManager.retrieveHostAccessControlEntries();
	for (var i = 0; i < arrExistingLocalUsers.length; i++) {
		if (arrExistingLocalUsers[i].principal == localUserName) throw "User '" + localUserName + "' already exists on host " + host.name;
	}
	var accountSpecs = new VcHostAccountSpec(localUserName, localUserPassword, localUserDescription);
	host.configManager.accountManager.createUser(accountSpecs);
	switch(accessMode){
	    case 'Admin': //Full access rights
	        host.configManager.hostAccessManager.changeAccessMode(localUserName,false,VcHostAccessMode.accessAdmin);
	        break;
	    case 'ReadOnly': //See details of objects, but not make changes
	        host.configManager.hostAccessManager.changeAccessMode(localUserName,false,VcHostAccessMode.accessReadOnly);
	        break;
	    case 'NoAccess': //Used for restricting granted access
	        host.configManager.hostAccessManager.changeAccessMode(localUserName,false,VcHostAccessMode.accessNoAccess);
	        break;
	    default: //No access assigned. Note: Role assigned is accessNone
	        host.configManager.hostAccessManager.changeAccessMode(localUserName,false,VcHostAccessMode.accessNone);
	}
	System.warn("  >>> Local user "+localUserName+" created with accessMode "+accessMode+" on host "+host.name);
	
	
}
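
For reference, a minimal sketch of how this function could be invoked from a scriptable task (the account values are purely illustrative):

// Assumes the function above is in scope, or saved as an action and fetched via System.getModule()
createEsxiLocalUser(host, "svc-backup", "S0me-Secure-P@ss!", "ReadOnly", "Backup service account");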

Demo Video

In this demo, we can see how the workflow is used to create a local account testuser1, log in to ESXi with it, and check that it has the required permissions.

vRO Package for CRUD operation

I have created a vRO Workflow to create and manage your ESXi local accounts directly from the input form itself. Please find the vRO package that contains the master workflow and associated actions.

  • Workflow: CRUD Operation on ESXi Local Users 
  • Actions:
    • getEsxiLocalUser
    • deleteEsxiLocalUser
    • updateEsxiLocalUser
    • createEsxiLocalUser
    • getAllEsxiLocalUsers
    • getAllEsxiLocalUsersWithRoles

Link to vRO package: in.co.cloudblogger.crudEsxiLocalUser.package

That’s all in this post. Thanks for reading.

Edit text-based Resource Elements On-The-Go [CB10103]


TL;DR The idea is to update resource elements from within vRO itself, as no such functionality exists in the UI yet. Use the package and workflow to update resource elements directly from vRO quickly and conveniently. Link to vRO package here.


  1. Introduction
  2. Prerequisites
  3. Procedure
  4. Scope of Improvement
  5. References

Introduction

We all use Resource Elements inside vRealize Orchestrator for various reasons. Resource Elements are external independent objects that once imported, can be used in vRO Workflows and scripts.

Resource elements include image files, scripts, XML templates, HTML files, and so on. However, I have one concern regarding updating them: it is such an overhead. Though the official vRO docs clearly mention that you can import, export, restore, update, and delete a resource element, in reality you have to update that object using an external editor, meaning a text editor for text-based files, an image editor for images, and so on.

Apart from images, the most common types of resource elements are text-based files, for example .sh, .ps1, .json, .xml, .html, .yaml, etc. In this post, to ease the process of updating resource elements, I have created a workflow with which you won't have to follow the long, boring method of exporting a resource element, editing it in Notepad++, and importing it back. Just run the workflow and select your resource element, and it will give you a text-editing area where you can update your resource element on-the-go.

Prerequisites

  • Download the package from here and import into your vRO.
  • Make sure you have the resource element you want to update.

Procedure

  • Run the Workflow Edit Resource Element On-The-Go and select the resource element. Here, I’ve selected template.yaml which I already imported in vRO earlier.
  • By default, vRO picks up a MIME type for your file. However, for text-based objects, you can set it to text/{fileExtension}. Here, I will set it to text/yaml so that I can see its content directly in vRO.
  • Go to the next section of the workflow and you can see the current content of your resource element. Edit it the way you like. Here, this file was empty, so I added this YAML code.
  • Click Run and wait for workflow to complete.
  • Go back to the resource element to check if the content is there.
  • Now, you want to change the flavor from large to small. Rerun the workflow, edit the flavor value. Click Run.
  • Your changes should be visible now. (A minimal scripting sketch of what the workflow does under the hood follows below.)
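
A minimal sketch, assuming a ResourceElement input named resourceElement and a string input newContent holding the edited text:

// Read the current content as a MIME attachment, replace it, and write it back
var mime = resourceElement.getContentAsMimeAttachment();
mime.content = newContent; // the edited text from the input form
mime.mimeType = "text/yaml"; // or any other text/{fileExtension} MIME type
resourceElement.setContentFromMimeAttachment(mime);
System.log("Resource element '" + resourceElement.name + "' updated");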

Scope of Improvement

I am looking for a way to give the version as an input, so that we can update the version of the resource element as we update its content. It seems the resourceElement.version attribute is not working, at least in vRO 8.x. Suggestions would be appreciated.

References

https://kb.vmware.com/s/article/81575

How to modify vRO Workflow description using REST

  1. Requirement
  2. Procedure
  3. Steps
    1. Response body from GET call
    2. Body with updated description for PUT call

Requirement

If you want to update a workflow's description whenever you want, while working in vRO or from outside, you can use this quick method based on vRO's REST APIs. If we want, we can easily create a workflow out of it.

Procedure

We will be using two REST APIs from vRO.

GET schema: gets the schema content of the workflow; we will modify this content and use it later.

PUT schema: pushes the updated schema, including the new description, back to the workflow.

Steps

  • In the GET request, provide the workflow ID and click Execute. If the status is 200, you will see the response body with the workflow content, including its description.
  • Now modify this response body so that the new body has the updated description.

Response body from GET call

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <schema-workflow xmlns:ns2="http://www.vmware.com/vco" root-name="item1" object-name="workflow:name=generic" id="bddbcab3-b4b7-4577-b76e-3301374d805f" version="0.0.0" api-version="6.0.0" restartMode="1" resumeFromFailedMode="0" editor-version="2.0">
    <display-name>demo test</display-name>
    <description>This description needs to be updated programmatically, but how?</description>
    <position y="50.0" x="100.0"/>
    <input/>
    <output/>
.
.
.

Body with updated description for PUT call

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <schema-workflow xmlns:ns2="http://www.vmware.com/vco" root-name="item1" object-name="workflow:name=generic" id="bddbcab3-b4b7-4577-b76e-3301374d805f" version="0.0.0" api-version="6.0.0" restartMode="1" resumeFromFailedMode="0" editor-version="2.0">
    <display-name>demo test</display-name>
    <description>The description has been updated using REST API</description>
    <position y="50.0" x="100.0"/>
    <input/>
    <output/>
.
.
.
  • Go to the PUT request and provide the parameters: the workflow ID and the updated body content. Click Execute.
  • You should see the updated description in the workflow.

Now you can easily automate this by creating an action or workflow where you simply pass the IDs of all the workflows to update, along with the new description. Let me know in the comments if you want me to create a workflow for this.

That’s it in this post. You can check the VMTN Community question for which I created this post. Don’t forget to subscribe.

Inside vRO’s JavaScript Engine – Rhino 1.7R4 [CB10102]

  1. What is Rhino Engine?
  2. Released 10 years ago
  3. Compatibility with JavaScript features
  4. ECMAScript 5.1 Specifications
  5. Rhino on GitHub
  6. Rhino Engine Limitations in vRO
  7. Access to additional Java Classes
    1. Procedure
  8. Javadoc for Rhino 1.7R4
    1. Feature Set when released
  9. Additional Links

vRealize Orchestrator, a.k.a. vRO, a drag-and-drop automation tool, is quite an old product, developed and released as early as 2007 by Dunes Technologies, Switzerland. After VMware acquired Dunes, hundreds of releases and updates came year after year. Lots of improvements were made over time in the UI, backend technologies, security, multi-language support, etc. However, one thing that remains the same is its JavaScript engine: vRO uses the Mozilla Rhino engine 1.7R4, which was released in 2012.

In this post, my goal is to provide some insights into this Rhino engine, as information about it has almost vanished from the internet. I am still, however, chasing the JavaScript engine that provides IntelliSense support in vRO 8.x: as you might have noticed, and probably wondered about, CTRL+SPACE shows options only available in recent versions of JavaScript. My guess is that those suggestions come from a Node.js runtime.

What is Rhino Engine?

The Rhino engine compiles JavaScript scripts into Java classes. It is intended to be used in desktop or server-side applications, so there is no built-in support for the web browser objects commonly associated with JavaScript, which makes it very suitable for vRO. Rhino works in both compiled and interpreted mode. The engine got its name from the animal on the cover of the O’Reilly book about JavaScript published many years back.

The Rhino project was started at Netscape in the autumn of 1997. At the time, Netscape was planning to produce a version of Navigator written entirely in Java and so it needed an implementation of JavaScript written in Java. When Netscape stopped work on “Javagator,” as it was called, somehow Rhino escaped the axe (rumor had it that the executives “forgot” it existed). For a time, a couple of major companies (including Sun) licensed Rhino for use in their products and paid Netscape to do so, allowing work on Rhino to continue. Now Rhino is part of Mozilla’s open-source repository.

Released 10 years ago

Released on 2012-06-18, Rhino 1.7R4 is almost prehistoric by today’s standards, and that has always been a point of discussion in the vRO community.

Release Notes of 1.7R4

Compatibility with JavaScript features

While trying to look deep into its ECMAScript compatibility matrix, I found Kangax’s compat-table, which gives an excellent and detailed view of all the features that Rhino 1.7R4 supports. Click the link to know more.

Rhino Compatibility Matrix with JavaScript

ECMAScript 5.1 Specifications

This document gives you very in-depth knowledge of ECMAScript 5.1, the specification that vRO leverages, and will help you understand the language better. Learn more at https://262.ecma-international.org/5.1.

You can download this document and read about all the fine details by yourself.

Rhino on GitHub

Currently, version 1.7R4 is not available on Mozilla’s GitHub page. However, you may find some very old scripts that were written at the time of 1.7R4, as I could validate using the web archive. You can explore their GitHub repo here.

Rhino Engine Limitations in vRO

When writing scripts for workflows, you must consider the following limitations of the Mozilla Rhino implementation in Orchestrator.

  • When a workflow runs, the objects that pass from one workflow element to another are not JavaScript objects. What is passed from one element to the next is the serialization of a Java object that has a JavaScript image. As a consequence, you cannot use the whole JavaScript language, but only the classes that are present in the API Explorer. You cannot pass function objects from one workflow element to another.
  • Orchestrator runs the code in scriptable task elements in a context that is not the Rhino root context. Orchestrator transparently wraps scriptable task elements and actions into JavaScript functions, which it then runs. A scriptable task element that contains System.log(this); does not display the global object this in the same way as a standard Rhino implementation does.
  • You can only call actions that return nonserializable objects from scripting, and not from workflows. To call an action that returns a nonserializable object, you must write a scriptable task element that calls the action by using the System.getModule("module.name").actionName() method (see the example after this list).
  • Workflow validation does not check whether a workflow attribute type is different from an input type of an action or subworkflow. If you change the type of a workflow input parameter, for example from VIM3:VirtualMachine to VC:VirtualMachine, but you do not update any scriptable tasks or actions that use the original input type, the workflow validates but does not run.
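
For instance, a minimal sketch of such a scriptable task (the module path and action name here are hypothetical placeholders for your own):

// Scriptable task: call an action that returns a nonserializable object
// "in.co.cloudblogger.samples" and getSdkConnection are placeholder names
var connection = System.getModule("in.co.cloudblogger.samples").getSdkConnection();
System.log(connection);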

Access to additional Java Classes

By default, vRealize Orchestrator restricts JavaScript access to a limited set of Java classes. If you require JavaScript access to a wider range of Java classes, you must set a vRealize Orchestrator system property.

Allowing the JavaScript engine full access to the Java virtual machine (JVM) presents potential security issues. Malformed or malicious scripts might have access to all the system components to which the user who runs the vRealize Orchestrator server has access. Therefore, by default the vRealize Orchestrator JavaScript engine can access only the classes in the java.util.* package.

If you require JavaScript access to classes outside of the java.util.* package, you can list in a configuration file the Java packages to which to allow JavaScript access. You then set the com.vmware.scripting.rhino-class-shutter-file system property to point to this file.

Procedure

  1. Create a text configuration file to store the list of Java packages to which to allow JavaScript access. For example, to allow JavaScript access to all the classes in the java.net package and to the java.lang.Object class, add the following content to the file:
java.net.*
java.lang.Object
  2. Enter a name for the configuration file.
  3. Save the configuration file in a subdirectory of /data/vco/usr/lib/vco. The configuration file cannot be saved under another directory.
  4. Log in to Control Center as root.
  5. Click System Properties.
  6. Click New.
  7. In the Key text box, enter com.vmware.scripting.rhino-class-shutter-file.
  8. In the Value text box, enter vco/usr/lib/vco/your_configuration_file_subdirectory.
  9. In the Description text box, enter a description for the system property.
  10. Click Add.
  11. Click Save changes from the pop-up menu. A message indicates that the changes were saved successfully.
  12. Wait for the vRealize Orchestrator server to restart.
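
Once the property is in place and the server has restarted, a quick scriptable-task test can confirm the access (a minimal sketch, assuming java.net.* was whitelisted as in the example above):

// Without the system property this call would fail; with it, the class is accessible
var url = new java.net.URL("https://www.vmware.com/index.html");
System.log(url.getHost()); // prints: www.vmware.com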

See an implementation example of accessing external Java classes by BlueCat here, where the code uses new java.lang.Long(0):

.
.
.
var testConfig = BCNProteusAPI.createAPIEntity(new java.lang.Long(0),configName,"","Configuration" );
var args = new Array( new java.lang.Long(0), testConfig );
configId = new java.lang.Long( BCNProteusAPI.call( profileName,"addEntity",args ));
System.log( "New configuration was created, id=" + configId );
var addTFTPGroupArgs = new Array( configId, "tftpGroupName1", "" );
var tftpGroupId = new java.lang.Long( BCNProteusAPI.call(profileName,"addTFTPGroup", addTFTPGroupArgs ) );
System.log( "New TFTP Group was created, id=" + tftpGroupId );
.
.
.

Javadoc for Rhino 1.7R4

Feature Set when released

Source: https://contest-server.cs.uchicago.edu/ref/JavaScript/developer.mozilla.org/en-US/docs/Web/JavaScript/New_in_JavaScript/1-7.html#New_features_in_JavaScript_1.7


Different Ways to restart Orchestrator 8.x [CB10101]

If you are new to vRO or coming from vRO 7.x, you may find restarting vRO a little tricky and might want to know how to restart it in an orderly way to avoid service failures, a corrupt configuration, and so on. Historically, the 7.x version of vRO had a restart button in its VAMI interface which generally restarted it gracefully, but version 8.x dropped that ability. However, there are new ways, which we’ll see in this post.

  1. via vSphere – restart VM guest OS
  2. via SSH – pod recreation
  3. via SSH – run deploy.sh
  4. via Control Center
  5. Older ways to restart vRO services
    1. via SSH – restart services
    2. via Control Center – Startup Options
    3. via vRA VAMI – for embedded vRO

via vSphere – restart VM guest OS

  • Click Virtual Machines in the VMware Host Client inventory and select the vRO VM.
  • To restart a virtual machine, right-click the virtual machine and select Power > Restart Guest OS.

via SSH – pod recreation

  • One way is to scale the pods down to zero, which basically destroys them. You can do so by copy-pasting these commands on your vRO server over an SSH session.
kubectl scale deployment orchestration-ui-app --replicas=0 -n prelude
kubectl scale deployment vco-app --replicas=0 -n prelude
sleep 120
kubectl scale deployment orchestration-ui-app --replicas=1 -n prelude
kubectl scale deployment vco-app --replicas=1 -n prelude
  • The other way is to delete these pods directly using the commands below (the running pods carry generated name suffixes, so list the exact names first with kubectl -n prelude get pods). After deletion, Kubernetes will automatically redeploy the pods.
kubectl -n prelude delete pod <vco-app-pod-name>
kubectl -n prelude delete pod <orchestration-ui-app-pod-name>

Now monitor until both pods are fully recreated (3/3 and 1/1) using this command:

kubectl -n prelude get pods

When all services are listed as Running or Completed, vRealize Orchestrator is ready to use. Generally, pod creation may take 5-7 minutes.

via SSH – run deploy.sh

  • Log in to the vRO appliance using SSH or VMRC.
  • To stop all services, run /opt/scripts/deploy.sh --onlyClean
  • To shut down the appliance, run /opt/scripts/deploy.sh --shutdown
  • To start all services, run /opt/scripts/deploy.sh
  • Validate that the deployment has finished by reviewing the output of the deploy.sh script.
  • Once the command execution completes, ensure that all of the pods are running correctly with the command kubectl get pods --all-namespaces

When all services are listed as Running or Completed, vRealize Orchestrator is ready to use.

via Control Center

  • Go to Control Center.
  • Open System Properties and add a new property.
  • Saving the new property will automatically restart vRO in about 2 minutes.

Older ways to restart vRO services

There are some older ways of restarting vRO and its services, applicable to vRO 6.x and 7.x only. These are not valid anymore for version 8.x and are kept here just for the record.

via SSH – restart services

  • Take an SSH session and run this command to stop the vRO services (swap stop for start to bring them back up):
service vco-server stop && service vco-configurator stop

via Control Center – Startup Options

  • Open Control Center and go to Startup Options.
  • Click the Restart button.

via vRA VAMI – for embedded vRO

  • Open vRA VAMI Interface and go to vRA -> Orchestrator settings.
  • Select the service type and click the Restart button.

That’s all in this post. Please comment down if you use any way other than mentioned here. I’ll be happy to add it here. And don’t forget to share this post. #vRORocks

Differences between VMware Aria Automation Orchestrator Forms and VMware Aria Automation Service Broker Forms

Starting with vRealize Automation 8.2, Service Broker is capable of displaying input forms designed in vRealize Orchestrator with the custom forms display engine. However, there are some differences in the forms display engines.

Orchestrator and Service Broker forms

Amongst the differences, the following features supported in vRealize Orchestrator are not yet supported in Service Broker:

  • The input presentations developed with the vRealize Orchestrator Legacy Client, used in vRealize Orchestrator 7.6 and earlier, are not compatible. vRealize Orchestrator uses a built-in legacy input presentation conversion that is not available from Service Broker yet.
  • The inputs presentation in vRealize Orchestrator has access to all the workflow elements in the workflow. The custom forms have access to the elements exposed to vRealize Automation Service Broker through the VRO-Gateway service, which is a subset of what is available on vRealize Orchestrator.
    • Custom forms can bind workflow inputs to action parameters used to set values in other inputs.
    • Custom forms cannot bind workflows variables to action parameters used to set values in other inputs.

Note: You might have noticed the VRO-Gateway service when using workflows for Workflow Based Extensibility (WBX) in event subscriptions; such workflows get triggered by this service.

Basically, it provides a gateway to vRealize Orchestrator (vRO) for services running on vRealize Automation. By using the gateway, consumers of the API can access a vRO instance and initiate workflows or script actions without having to deal directly with the vRO APIs.


It is possible to work around vRealize Automation not having access to workflow variables with one of the following options (a minimal sketch of the first option follows the list):

  • Using a custom action returning the variable content.
  • Binding to an input parameter set to not visible instead of a variable.
  • Enabling custom forms and using constants.
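
For the first option, the action can be trivial. A minimal sketch (the action name and returned value are hypothetical; bind the input's default value to this action instead of to the workflow variable):

// Action: getDefaultDatacenter, return type: string
// Returns the value that would otherwise live in a workflow variable
var defaultDatacenter = "dc-eu-01";
return defaultDatacenter;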

The widgets available in vRealize Orchestrator and in vRealize Automation vary for certain types. The following table describes what is supported.

Input Data Type | vRA: Possible Form Display Types | vRA: Action return type for Default Value | vRA: Action return type for Value Options | vRO: Possible Form Display Types | vRO: Action return type for Default Value | vRO: Action return type for Value Options
String | Text, TextField, Text Area, Dropdown, Radio Group | String | Array of String, Properties, Array of Properties (value, label) | Text, TextField, Text Area, Dropdown, Radio Group | String | Array of String
Array of String | Array Input (vRA 8.2), Dual List, Multi Select | Array of String | Properties, Array of Properties, Array of String | Datagrid, Multi Value Picker | Array of String | Properties, Array of Properties, Array of String
Integer | Integer | Number | Array of Number | Not supported | Not supported | Not supported
Array of Integer | Array Input (vRA 8.2), Datagrid (vRA 8.1) | Array of Number | Array of Number | Not supported | Not supported | Not supported
Number | Decimal | Number | Array of Number | Decimal | Number | Array of Number
Array of Number | Array Input (vRA 8.2), Datagrid (vRA 8.1) | Array of Number | Array of Number | Datagrid | Array of Number | Array of Number
Boolean | Checkbox | Boolean | Not supported | Checkbox | Boolean | Not supported
Date | Date Time | Date | Array of Date | Date Time | Date | Array of Date
Array of Date | Array Input (vRA 8.2), Datagrid (vRA 8.1) | Array of Date | Array of Date | Datagrid | Array of Date | Array of Date
Composite/Complex | Datagrid, Object Field (vRA 8.3) | Composite, Properties, Array/Composite, Array/Properties | Array of Composite | Datagrid | Composite (columns…), Array/Properties | Array of Composite
Array of Composite | Datagrid, Multi Value Picker | Composite, Properties, Array/Composite, Array/Properties | Array of Composite | Datagrid, Multi Value Picker | Array/Composite (columns…), Array/Properties | Array of Composite
Reference / vRO SDK Object type | Value Picker | SDK Object | Array of SDK Object (vRA 8.2) | Value Picker | SDK Object | Array of SDK Object
Array of Reference | Multi Value Picker (vRA 8.3) | Array of SDK Object | Array of SDK Object (vRA 8.3) | Datagrid | Array of SDK Object | Array of SDK Object
Secure String | Password | String | Not supported | Password | String | Not supported
File | Not supported | Not supported | Not supported | File Upload | Not supported | Not supported

For use cases where the widget specified in vRealize Orchestrator is not available from Service Broker, a compatible widget is used.

Because the data being passed to and from the widget might expect different types, formats, and values when they are unset, the best practice for developing workflows that target Service Broker is to:

  1. Develop the vRealize Orchestrator workflow. This includes both the initial development of the workflow and any changes to its inputs.
  2. Version the workflow manually.
  3. In Cloud Assembly, navigate to Infrastructure > Connections > Integrations and select your vRealize Orchestrator integration.
  4. Start the data collection for the vRealize Orchestrator integration. This step, along with versioning up your workflow, ensures that the VRO-Gateway service used by vRealize Automation has the latest version of the workflow.
  5. Import content into Service Broker. This step generates a new default custom form.
  6. In addition to the input forms designed in vRealize Orchestrator, you can, if needed, develop workflow input forms with the custom forms editor.
  7. If these forms call actions, develop or run these from the vRealize Orchestrator workflow editor.
  8. Test the inputs presentation in Service Broker.
  9. Repeat from step 5 as many times as needed.
  10. Repeat from step 1, in case workflows inputs or forms need to be changed.

Either distribute and maintain the custom forms, or alternatively design vRealize Orchestrator inputs by using the same options or actions as in the custom forms (step 1 above), and then repeat steps 2 to 8 to validate that the process works.

Using this last option means that:

  • Running the workflow from vRealize Orchestrator can lead to the input presentation not working as expected when started in vRealize Orchestrator.
  • For some cases, you must modify the return type of the actions used for default value or value options so these values can be set from the vRealize Orchestrator workflow editor and, when the workflow is saved, revert the action return types.

Designing the form in the workflow has the following advantages:

  • Form is packaged and delivered as part of the workflow included in a package.
  • Form can be tested in vRealize Orchestrator as long as the compatible widgets are applied.
  • The form can optionally be versioned and synchronized to a Git repository with the workflow.

Designing the custom forms separately has the following advantages:

  • Being able to customize the form without changing the workflow.
  • Being able to import and export the form as a file and reusing it for different workflows.

For example, a common use case is to have a string based drop-down menu.

Returning a Properties type can be used in both the vRealize Orchestrator input form presentation and the vRealize Automation custom forms presentation. With the Properties type you can display a list of values in the drop-down menu. After being selected by the user, these values pass an ID to the parameter (to the workflow and the other input fields that bind to this parameter). This is very practical for listing objects when there is no dedicated plug-in for them, as it avoids having to select object names and then find object IDs by name.

Returning an array of Properties has the same goal as returning Properties, but it gives control over the ordering of the elements. This is done by setting the label and value keys for each property in the array. For example, it is possible to sort properties ascending or descending by label or by keys within the action.
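
As an illustration, here is a minimal action sketch with return type Array/Properties that could feed such a drop-down (the option values are made up):

// Action sketch, return type: Array/Properties
// Each Properties object carries the 'value' passed to the parameter and the 'label' shown to the user
var options = [];
var flavors = [["small-id", "Small"], ["medium-id", "Medium"], ["large-id", "Large"]];
for (var i = 0; i < flavors.length; i++) {
    var p = new Properties();
    p.put("value", flavors[i][0]);
    p.put("label", flavors[i][1]);
    options.push(p);
}
return options;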

All the workflows included in the “drop down” folder of the sample package include drop down menus created with actions that have array of Properties set as the return type.


Find Object Types in vRealize Orchestrator [CB10100]

Sometimes we want to know exactly what type of vRO object we are working with. It could be a value returned from an action with return type Any, a method that returns various types of objects, or simply a value to branch on in a switch case. In this quick post, we will see what options vRO provides and where to use them.

  1. typeof
    1. Code Examples
    2. Using new operator
    3. Use of Parentheses
  2. System.getObjectType()
    1. Code Examples
  3. System.getObjectClassName()
    1. Code Examples
  4. instanceof
    1. Syntax
    2. Code Examples

typeof

The typeof operator returns a string indicating the type of the operand’s value where the operand is an object or of primitive type.

Type | Result
Undefined | "undefined"
Null | "object" (reason)
Boolean | "boolean"
Number | "number"
String | "string"
Function | "function"
Array | "object"
Date | "object"
vRO Object types | "object"
vRO Object types with new operator | "function"

Code Examples

var var1 = new VcCustomizationSpec(); 
System.debug(typeof var1); //function
var var2 = new Object();
System.debug(typeof var2); //object
var var3 = "a";
System.debug(typeof var3); //string
var var4 = 2;
System.debug(typeof var4); //number
var var4 = new Array(1, 2, 3);
System.debug(typeof var4); //object
System.debug(typeof []); //object
System.debug(typeof function () {}); //function
System.debug(typeof /regex/); //object
System.debug(typeof new Date()); //object
System.debug(typeof null); //object
System.debug(typeof undefinedVarible); //undefined

Using new operator

In this example, the typeof operator shows a different result when used with the new operator for the class VcCustomizationSpec. That’s because the new operator is used for creating a user-defined object type instance, or an instance of one of the built-in object types that has a constructor function; so it calls the constructor function of that object type, hence typeof prints function. However, something to note here is that when the new operator is used with the primitive wrapper type Number, typeof recognizes the result as an object.

var num1 = 2;
System.debug(typeof num1); //number

var num2 = Number("123");
System.debug(typeof (1 + num2)); //number

var num3 = new Number("123");
System.debug(typeof (num3)); //object

var num4 = new Number("123");
System.debug(typeof (1 + num4)); //number

Use of Parentheses

// Parentheses can be used for determining the data type of expressions.
const someData = 99;
typeof someData + "cloudblogger"; // "number cloudblogger"
typeof (someData + " cloudblogger"); // "string"

System.getObjectType()

The System.getObjectType() method returns the VS-O ‘type’ of the given operand. This method is more advanced than typeof and is able to detect more complex yet intrinsic object types like Date, Array, etc. But it still cannot figure out plugin object types like VC:SdkConnection, etc.

Type | Result
Array | "Array"
Number | "number"
String | "string"
vRO Plugin Object Types (with or without new) | "null"
Date | "Date"
Composite Types | "Properties"
SecureString | "string"
undefined Variable | Reference Error

Code Examples


var var1 = new VcCustomizationSpec(); 
System.debug(System.getObjectType(var1)); //null

var var2 = new Object();
System.debug(System.getObjectType(var2)); //Properties

var var3 = "a";
System.debug(System.getObjectType(var3)); //string

var var4 = 2;
System.debug(System.getObjectType(var4)); //number

var var4 = new Array(1, 2, 3);
System.debug(System.getObjectType(var4)); //Array

System.debug(System.getObjectType([])); //Array

System.debug(System.getObjectType(function () {})); //null

System.debug(System.getObjectType(new Date())); //Date

System.debug(System.getObjectType(undefinedVarible)); //FAIL ReferenceError: "undefinedVarible" is not defined.

System.getObjectClassName()

The System.getObjectClassName() method returns the class name of any vRO scripting object for which typeof(obj) returns “object”. It works best with complex vRO object types and surpasses System.getObjectType() in its capability to identify object types.

Type | Result
Array | "Array"
Number | "Number"
String | "String"
vRO Plugin Object Types (eg: VC:SdkConnection) | Class Name (eg: VcSdkConnection)
Date | "Date"
Composite Types | "Properties"
SecureString | "String"
undefined Variable | Reference Error
null objects | Error: Cannot get class name from null object

Code Examples

System.debug(System.getObjectClassName(input)); //String (assuming a workflow input of type string named 'input')

var var1 = new VcCustomizationSpec(); 
System.debug(System.getObjectClassName(var1)); //VcCustomizationSpec

var var2 = new Object();
System.debug(System.getObjectClassName(var2)); //Object

var var3 = "a";
System.debug(System.getObjectClassName(var3)); //String

var var4 = 2;
System.debug(System.getObjectClassName(var4)); //Double

var var4 = new Array(1, 2, 3);
System.debug(System.getObjectClassName(var4)); //Array

System.debug(System.getObjectClassName([])); //Array

System.debug(System.getObjectClassName(function () {})); //Function

System.debug(System.getObjectClassName(new Date())); //Date

instanceof

The instanceof operator tests whether the prototype property of a constructor appears anywhere in the prototype chain of an object; the return value is a boolean. This means that instanceof checks whether the right-hand side matches the constructor of a class. That’s why it doesn’t work with primitive values like numbers and strings; however, it works with a variety of complex types available in vRO.

Syntax

object instanceof constructor

Code Examples

var var1 = new VcCustomizationSpec(); 
System.debug(var1 instanceof VcCustomizationSpec); //true

var var1 = new VcCustomizationSpec(); 
System.debug(var1 instanceof Object); //true

var var2 = new Object();
System.debug(var2 instanceof Object); //true

var var3 = "a";
System.debug(var3 instanceof String); //false

var var3 = new String("a");
System.debug(var3 instanceof String); //true

var var3 = "a";
System.debug(var3 instanceof String); //false

var var4 = 2;
System.debug(var4 instanceof Number); //false

var var4 = new Array(1, 2, 3);
System.debug(var4 instanceof Array); //true

System.debug([] instanceof Array); //true

System.debug(function () {} instanceof Function); //true

System.debug(new Date() instanceof Date); //true

System.debug({} instanceof Object); //true

That’s all in this post. I hope you now have a better understanding of how to check vRO object types. Let me know in the comments if you have any doubts or questions. Feel free to share this article. Thank you.

Advanced JavaScript Snippets in vRO [CB10099]

  1. Introduction
  2. Snippets
    1. External Modules
    2. First-class Functions
    3. Ways to add properties to Objects
    4. Custom Class
    5. Private variable
    6. Label
    7. with keyword
    8. Function binding
    9. Prototype Chaining
  3. Recommended Reading

Introduction

vRO JS code is generally plain and basic, just enough to get the job done. But I was wondering: how can we fancy it up? So I picked some slightly more modern JS code (ES5.1+) and tried running it on my vRO 8.3. I found some interesting things which I would like to share in this article.

Snippets

Here are some JS concepts that you can use when writing vRO JavaScript code to make it more compelling and beautiful.

External Modules

To utilize modern features, you can use modules like lodash.js for features such as map or filter. Another popular module is moment.js, for complex date and time handling in vRO.

var _ = System.getModule("fr.numaneo.library").lodashLibrary();
var myarr = [1,2,3];
var myarr2 = [4,5,6];
var concatarr = _.concat(myarr, myarr2);
System.log(concatarr); // [1,2,3,4,5,6];

Find more information on how to leverage Lodash.js in vRO here.

First-class Functions

First-class functions are functions that are treated like any other variable. For example, a function can be passed as an argument to other functions, can be returned by another function and can be assigned as a value to a variable.

// we send in the function as an argument to be
// executed from inside the calling function
function performOperation(a, b, cb) {
    var c = a + b;
    cb(c);
}

performOperation(2, 3, function(result) {
    // prints out 5
    System.log("The result of the operation is " + result);
})

Ways to add properties to Objects

There are 4 ways to add a property to an object in vRO.

// supported since ES3
// the dot notation
var instance = {}; // an empty object, declared here so the snippets below run standalone
instance.key = "A key's value";

// the square brackets notation
instance["key"] = "A key's value";

// supported since ES5
// setting a single property using Object.defineProperty
Object.defineProperty(instance, "key", {
    value: "A key's value",
    writable: true,
    enumerable: true,
    configurable: true
});

// setting multiple properties using Object.defineProperties
Object.defineProperties(instance, {
    "firstKey": {
        value: "First key's value",
        writable: true
    },
    "secondKey": {
        value: "Second key's value",
        writable: false
    }
});

Custom Class

You can create your own custom classes in vRO by defining a constructor with the function keyword and extending that function’s prototype.

// we define a constructor for Person objects
function Person(name, age, isDeveloper) {
    this.name = name;
    this.age = age;
    this.isDeveloper = isDeveloper || false;
}

// we extend the function's prototype
Person.prototype.writesCode = function() {
    System.log(this.isDeveloper? "This person does write code" : "This person does not write code");
}

// creates a Person instance with properties name: Bob, age: 38, isDeveloper: true and a method writesCode
var person1 = new Person("Bob", 38, true);
// creates a Person instance with properties name: Alice, age: 32, isDeveloper: false and a method writesCode
var person2 = new Person("Alice", 32);

// prints out: This person does write code
person1.writesCode();
// prints out: this person does not write code
person2.writesCode();

Both instances of the Person constructor can access a shared instance of the writesCode() method.

Private variable

A private variable is only visible to the current class. It is not accessible in the global scope or to any of its subclasses. In Java (and most other programming languages), we can create one by using the private keyword when declaring a variable. In ES5 JavaScript, we can emulate this with a closure:

// we  used an immediately invoked function expression
// to create a private variable, counter
var counterIncrementer = (function() {
    var counter = 0;

    return function() {
        return ++counter;
    };
})();

// prints out 1
System.log(counterIncrementer());
// prints out 2
System.log(counterIncrementer());
// prints out 3
System.log(counterIncrementer());

Label

Labels can be used with break or continue statements. A label prefixes a statement with an identifier that you can then refer to.

var str = '';

loop1:
for (var i = 0; i < 5; i++) {
  if (i === 1) {
    continue loop1;
  }
  str = str + i;
}

System.log(str);
// expected output: "0234"

with keyword

The with statement extends the scope chain for a statement. Check the example for better understanding.

var box = {"dimensions": {"width": 2, "height": 3, "length": 4}};
with(box.dimensions){
  var volume = width * height * length;
}
System.log(volume); //24

// vs

var box = {"dimensions": {"width": 2, "height": 3, "length": 4}};
var boxDimensions = box.dimensions;
var volume2 = boxDimensions.width * boxDimensions.height * boxDimensions.length;
System.log(volume2); //24

Function binding

The bind() method creates a new function that, when called, has its this keyword set to the provided value, with a given sequence of arguments preceding any provided when the new function is called.

const module = {
  x: 42,
  getX: function() {
    return this.x;
  }
};

const unboundGetX = module.getX;
System.log(unboundGetX()); // The function gets invoked at the global scope
// expected output: undefined

const boundGetX = unboundGetX.bind(module);
System.log(boundGetX());
// expected output: 42

Prototype Chaining

const o = {
  a: 1,
  b: 2,
  // __proto__ sets the [[Prototype]]. It's specified here
  // as another object literal.
  __proto__: {
    b: 3,
    c: 4,
  },
};

// o.[[Prototype]] has properties b and c.
// o.[[Prototype]].[[Prototype]] is Object.prototype (we will explain
// what that means later).
// Finally, o.[[Prototype]].[[Prototype]].[[Prototype]] is null.
// This is the end of the prototype chain, as null,
// by definition, has no [[Prototype]].
// Thus, the full prototype chain looks like:
// { a: 1, b: 2 } ---> { b: 3, c: 4 } ---> Object.prototype ---> null

System.log(o.a); // 1
// Is there an 'a' own property on o? Yes, and its value is 1.

System.log(o.b); // 2
// Is there a 'b' own property on o? Yes, and its value is 2.
// The prototype also has a 'b' property, but it's not visited.
// This is called Property Shadowing

System.log(o.c); // 4
// Is there a 'c' own property on o? No, check its prototype.
// Is there a 'c' own property on o.[[Prototype]]? Yes, its value is 4.

System.log(o.d); // undefined
// Is there a 'd' own property on o? No, check its prototype.
// Is there a 'd' own property on o.[[Prototype]]? No, check its prototype.
// o.[[Prototype]].[[Prototype]] is Object.prototype and
// there is no 'd' property by default, check its prototype.
// o.[[Prototype]].[[Prototype]].[[Prototype]] is null, stop searching,
// no property found, return undefined.


Official VMware Guides for vRO and vRA 8.x

  1. Introduction
  2. Guides for vRA
    1. Architecture
    2. Cloud Assembly
    3. Code Stream
    4. Service Broker
    5. Transition Guide
    6. Migration
    7. Cloud Transition (SaaS only)
    8. Integration with ServiceNow
    9. Load Balancing
    10. NSX-T Migration
    11. SaltStack Config
  3. Guides for vRO
    1. Installation
    2. User Interface
    3. Developer’s Guide
    4. Migration
    5. Plug-in Development
    6. Plug-in Guide

Introduction

This blog post simply gives a consolidated view of all the official guides that VMware provides for vRealize Automation and vRealize Orchestrator. These guides can help automation engineers and developers, solution architects, vRealize admins, etc., and can be used as a reference for developing vRO code, vRA templates, and various other tasks. You can download them from the provided links for offline access.

Guides for vRA

Architecture

Cloud Assembly

Code Stream

Service Broker

Transition Guide

Migration

Cloud Transition (SaaS only)

Download: https://docs.vmware.com/en/vRealize-Automation/services/vrealize-automation-cloud-transition-guide.pdf

Integration with ServiceNow

Load Balancing

NSX-T Migration

SaltStack Config

Guides for vRO

Installation

User Interface

Download: https://docs.vmware.com/en/vRealize-Orchestrator/8.8/vrealize-orchestrator-88-using-client-guide.pdf

Developer’s Guide

Migration

Plug-in Development

Plug-in Guide

I will try to update this list over time. Hope this list of guides will help you in understanding things a little better. Feel free to share.


An Introduction to Cloud-Config Scripting for Linux based VMs in vRA Cloud Templates | Cloud-Init

  1. Introduction
    1. Use cloud-init to configure:
    2. Compatible OSes
  2. Install cloud-init in VM images #firststep
  3. Where cloudConfig commands can be added
  4. General Information about Cloud-Config
  5. YAML Formatting
  6. User and Group Management
  7. Change Passwords for Existing Users
  8. Write Files to the Disk
  9. Update or Install Packages on the Server
  10. Configure SSH Keys for User Accounts and the SSH Daemon
  11. Set Up Trusted CA Certificates
  12. Configure resolv.conf to Use Specific DNS Servers
  13. Run Arbitrary Commands for More Control
  14. Shutdown or Reboot the Server
  15. Troubleshooting
  16. Conclusion
  17. References

Introduction

Cloud images are operating system templates and every instance starts out as an identical clone of every other instance. It is the user data that gives every cloud instance its personality and cloud-init is the tool that applies user data to your instances automatically.

Use cloud-init to configure:

  • Setting a default locale
  • Setting the hostname
  • Generating and setting up SSH private keys
  • Setting up ephemeral mount points
  • Installing packages

There is even a full-fledged website https://cloud-init.io/ where you can check various types of resources and information.

Compatible OSes

While cloud-init started life in Ubuntu, it is now available for most major Linux and FreeBSD operating systems. For cloud image providers, cloud-init handles many of the differences between cloud vendors automatically; for example, the official Ubuntu cloud images are identical across all public and private clouds.

cloudConfig commands are special scripts designed to be run by the cloud-init process. These are generally used for initial configuration on the very first boot of a server. In this guide, we will be discussing the format and usage of cloud-config commands.

Install cloud-init in VM images #firststep

Make sure cloud-init is installed and properly configured in the Linux-based images you want to work with. Chances are you may have to install it yourself in some OSes and flavors. For example, cloud-init comes preinstalled in the official Ubuntu live server images since the 18.04 release and in Ubuntu Cloud Images; however, in some Red Hat Linux images, it doesn’t come preinstalled.

Where cloudConfig commands can be added

You can add a cloudConfig section to cloud template code, but you can also add one to a machine image in advance, when configuring infrastructure. Then, all cloud templates that reference the source image get the same initialization.

You might have an image map and a cloud template where both contain initialization commands. At deployment time, the commands merge, and Cloud Assembly runs the consolidated commands. When the same command appears in both places but includes different parameters, only the image map command is run. Faulty cloudConfig commands can result in a resource that isn’t correctly configured or behaves unpredictably.


Important: cloudConfig may cause unpredictable results when used with vSphere Guest Customizations. Some trial and error may be needed to figure out what works best.


General Information about Cloud-Config

The cloud-config format implements a declarative syntax for many common configuration items, making it easy to accomplish many tasks. It also allows you to specify arbitrary commands for anything that falls outside of the predefined declarative capabilities.

This “best of both worlds” approach lets the file act like a configuration file for common tasks, while maintaining the flexibility of a script for more complex functionality.

YAML Formatting

The file is written using the YAML data serialization format. The YAML format was created to be easy to understand for humans and easy to parse for programs.

YAML files are generally fairly intuitive to understand when reading them, but it is good to know the actual rules that govern them.

Some important rules for YAML files are:

  • Indentation with whitespace indicates the structure and relationship of the items to one another. Items that are more indented are sub-items of the first item with a lower level of indentation above them.
  • List members can be identified by a leading dash.
  • Associative array entries are created by using a colon (:) followed by a space and the value.
  • Blocks of text are indented. To indicate that the block should be read as-is, with the formatting maintained, use the pipe character (|) before the block.

Let’s take these rules and analyze an example cloud-config file, paying attention only to the formatting:

#cloud-config
users:
  - name: demo
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDf0q4PyG0doiBQYV7OlOxbRjle026hJPBWD+eKHWuVXIpAiQlSElEBqQn0pOqNJZ3IBCvSLnrdZTUph4czNC4885AArS9NkyM7lK27Oo8RV888jWc8hsx4CD2uNfkuHL+NI5xPB/QT3Um2Zi7GRkIwIgNPN5uqUtXvjgA+i1CS0Ku4ld8vndXvr504jV9BMQoZrXEST3YlriOb8Wf7hYqphVMpF3b+8df96Pxsj0+iZqayS9wFcL8ITPApHi0yVwS8TjxEtI3FDpCbf7Y/DmTGOv49+AWBkFhS2ZwwGTX65L61PDlTSAzL+rPFmHaQBHnsli8U9N6E4XHDEOjbSMRX user@example.com
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDcthLR0qW6y1eWtlmgUE/DveL4XCaqK6PQlWzi445v6vgh7emU4R5DmAsz+plWooJL40dDLCwBt9kEcO/vYzKY9DdHnX8dveMTJNU/OJAaoB1fV6ePvTOdQ6F3SlF2uq77xYTOqBiWjqF+KMDeB+dQ+eGyhuI/z/aROFP6pdkRyEikO9YkVMPyomHKFob+ZKPI4t7TwUi7x1rZB1GsKgRoFkkYu7gvGak3jEWazsZEeRxCgHgAV7TDm05VAWCrnX/+RzsQ/1DecwSzsP06DGFWZYjxzthhGTvH/W5+KFyMvyA+tZV4i1XM+CIv/Ma/xahwqzQkIaKUwsldPPu00jRN user@desktop
runcmd:
  - touch /test.txt

By looking at this file, we can learn a number of important things.

First, each cloud-config file must begin with #cloud-config alone on the very first line. This signals to the cloud-init program that this should be interpreted as a cloud-config file. If this were a regular script file, the first line would indicate the interpreter that should be used to execute the file.

The file above has two top-level directives, users and runcmd. These both serve as keys. The values of these keys consist of all of the indented lines after the keys.

In the case of the users key, the value is a single list item. We know this because the next level of indentation is a dash (-) which specifies a list item, and because there is only one dash at this indentation level. In the case of the users directive, this incidentally indicates that we are only defining a single user.

The list item itself contains an associative array with more key-value pairs. These are sibling elements because they all exist at the same level of indentation. Each of the user attributes are contained within the single list item we described above.

Some things to note are that the strings you see do not require quoting and that there are no unnecessary brackets to define associations. The interpreter can determine the data type fairly easily and the indentation indicates the relationship of items, both for humans and programs.

By now, you should have a working knowledge of the YAML format and feel comfortable working with information using the rules we discussed above.

We can now begin exploring some of the most common directives for cloud-config.

User and Group Management

To define new users on the system, you can use the users directive that we saw in the example file above.

The general format of user definitions is:

#cloud-config
users:
  - first_user_parameter
    first_user_parameter
    
  - second_user_parameter
    second_user_parameter
    second_user_parameter
    second_user_parameter

Each new user should begin with a dash. Each user defines parameters in key-value pairs. The following keys are available for definition:

  • name: The account username.
  • primary-group: The primary group of the user. By default, this will be a group created that matches the username. Any group specified here must already exist or must be created explicitly (we discuss this later in this section).
  • groups: Any supplementary groups can be listed here, separated by commas.
  • gecos: A field for supplementary info about the user.
  • shell: The shell that should be set for the user. If you do not set this, the very basic sh shell will be used.
  • expiredate: The date that the account should expire, in YYYY-MM-DD format.
  • sudo: The sudo string to use if you would like to define sudo privileges, without the username field.
  • lock-passwd: This is set to “True” by default. Set this to “False” to allow users to log in with a password.
  • passwd: A hashed password for the account.
  • ssh-authorized-keys: A list of complete SSH public keys that should be added to this user’s authorized_keys file in their .ssh directory.
  • inactive: A boolean value that will set the account to inactive.
  • system: If “True”, this account will be a system account with no home directory.
  • homedir: Used to override the default /home/<username>, which is otherwise created and set.
  • ssh-import-id: The SSH ID to import from LaunchPad.
  • selinux-user: This can be used to set the SELinux user that should be used for this account’s login.
  • no-create-home: Set to “True” to avoid creating a /home/<username> directory for the user.
  • no-user-group: Set to “True” to avoid creating a group with the same name as the user.
  • no-log-init: Set to “True” to not initiate the user login databases.

Other than some basic information, like the name key, you only need to define the areas where you are deviating from the default or supplying needed data.

One thing that is important for users to realize is that the passwd field should not be used in production systems unless you have a mechanism of immediately modifying the given value. As with all information submitted as user-data, the hash will remain accessible to any user on the system for the entire life of the server. On modern hardware, these hashes can easily be cracked in a trivial amount of time. Exposing even the hash is a huge security risk that should not be taken on any machines that are not disposable.

For an example user definition, we can use part of the example cloud-config we saw above:

#cloud-config
users:
  - name: demo
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDf0q4PyG0doiBQYV7OlOxbRjle026hJPBWD+eKHWuVXIpAiQlSElEBqQn0pOqNJZ3IBCvSLnrdZTUph4czNC4885AArS9NkyM7lK27Oo8RV888jWc8hsx4CD2uNfkuHL+NI5xPB/QT3Um2Zi7GRkIwIgNPN5uqUtXvjgA+i1CS0Ku4ld8vndXvr504jV9BMQoZrXEST3YlriOb8Wf7hYqphVMpF3b+8df96Pxsj0+iZqayS9wFcL8ITPApHi0yVwS8TjxEtI3FDpCbf7Y/DmTGOv49+AWBkFhS2ZwwGTX65L61PDlTSAzL+rPFmHaQBHnsli8U9N6E4XHDEOjbSMRX user@example.com
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDcthLR0qW6y1eWtlmgUE/DveL4XCaqK6PQlWzi445v6vgh7emU4R5DmAsz+plWooJL40dDLCwBt9kEcO/vYzKY9DdHnX8dveMTJNU/OJAaoB1fV6ePvTOdQ6F3SlF2uq77xYTOqBiWjqF+KMDeB+dQ+eGyhuI/z/aROFP6pdkRyEikO9YkVMPyomHKFob+ZKPI4t7TwUi7x1rZB1GsKgRoFkkYu7gvGak3jEWazsZEeRxCgHgAV7TDm05VAWCrnX/+RzsQ/1DecwSzsP06DGFWZYjxzthhGTvH/W5+KFyMvyA+tZV4i1XM+CIv/Ma/xahwqzQkIaKUwsldPPu00jRN user@desktop

To define groups, you should use the groups directive. This directive is relatively simple in that it just takes a list of groups you would like to create.

An optional extension to this is to create a sub-list for any of the groups you are making. This new list will define the users that should be placed in this group:

#cloud-config
groups:
  - group1
  - group2: [user1, user2]

Change Passwords for Existing Users

For user accounts that already exist (the root account being the most pertinent), a password can be supplied by using the chpasswd directive.

Note: This directive should only be used in debugging situations, because, once again, the value will be available to every user on the system for the duration of the server’s life. This is even more relevant in this section because passwords submitted with this directive must be given in plain text.

The basic syntax looks like this:

#cloud-config
chpasswd:
  list: |
    user1:password1
    user2:password2
    user3:password3
  expire: False

The directive contains two associative array keys. The list key will contain a block that lists the account names and the associated passwords that you would like to assign. The expire key is a boolean that determines whether the password must be changed at first boot or not. This defaults to “True”.

One thing to note is that you can set a password to “RANDOM” or “R”, which will generate a random password and write it to /var/log/cloud-init-output.log. Keep in mind that this file is accessible to any user on the system, so it is not any more secure.

Write Files to the Disk

In order to write files to the disk, you should use the write_files directive.

Each file that should be written is represented by a list item under the directive. These list items will be associative arrays that define the properties of each file.

The only required keys in this array are path, which defines where to write the file, and content, which contains the data you would like the file to contain.

The available keys for configuring a write_files item are:

  • path: The absolute path to the location on the filesystem where the file should be written.
  • content: The content that should be placed in the file. For multi-line input, you should start a block by using a pipe character (|) on the “content” line, followed by an indented block containing the content. Binary files should include “!!binary” and a space prior to the pipe character.
  • owner: The user account and group that should be given ownership of the file. These should be given in the “username:group” format.
  • permissions: The octal permissions set that should be given for this file.
  • encoding: An optional encoding specification for the file. This can be “b64” for Base64 files, “gzip” for Gzip compressed files, or “gz+b64” for a combination. Leaving this out will use the default, conventional file type.

For example, we could write a file to /test.txt with the contents:

Here is a line.
Another line is here.

The portion of the cloud-config that would accomplish this would look like this:

#cloud-config
write_files:
  - path: /test.txt
    content: |
      Here is a line.
      Another line is here.
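
The optional keys can be combined with the required ones. Here is a sketch, with hypothetical paths and content, that also sets owner, permissions, and an explicit encoding (the Base64 string in the second item decodes to the text "hello"):

#cloud-config
write_files:
  - path: /opt/myapp/settings.conf
    owner: root:root
    permissions: '0640'
    content: |
      debug = false
  - path: /opt/myapp/hello.txt
    encoding: b64
    content: aGVsbG8K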

Update or Install Packages on the Server

To manage packages, there are a few related settings and directives to keep in mind.

To update the apt database on Debian-based distributions, you should set the package_update directive to “true”. This is synonymous with calling apt-get update from the command line.

The default value is actually “true”, so you only need to worry about this directive if you wish to disable it:

#cloud-config
package_update: false

If you wish to upgrade all of the packages on your server after it boots up for the first time, you can set the package_upgrade directive. This is akin to an apt-get upgrade executed manually.

This is set to “false” by default, so make sure you set this to “true” if you want the functionality:

#cloud-config
package_upgrade: true

To install additional packages, you can simply list the package names using the “packages” directive. Each list item should represent a package. Unlike the two commands above, this directive will function with either yum or apt managed distros.

These items can take one of two forms. The first is simply a string with the name of the package. The second form is a list with two items. The first item of this new list is the package name, and the second item is the version number:

#cloud-config
packages:
  - package_1
  - package_2
  - [package_3, version_num]

The “packages” directive will set apt_update to true, overriding any previous setting.

Configure SSH Keys for User Accounts and the SSH Daemon

You can manage SSH keys in the users directive, but you can also specify them in a dedicated ssh_authorized_keys section. These will be added to the first defined user’s authorized_keys file.

This takes the same general format of the key specification within the users directive:

#cloud-config
ssh_authorized_keys:
  - ssh_key_1
  - ssh_key_2

You can also generate the SSH server’s private keys ahead of time and place them on the filesystem. This can be useful if you want to give your clients the information about this server beforehand, allowing it to trust the server as soon as it comes online.

To do this, we can use the ssh_keys directive. This can take the key pairs for RSA, DSA, or ECDSA keys using the rsa_private, rsa_public, dsa_private, dsa_public, ecdsa_private, and ecdsa_public sub-items.

Since formatting and line breaks are important for private keys, make sure to use a block with a pipe character (|) when specifying these. Also, you must include the begin key and end key lines for your keys to be valid.

#cloud-config
ssh_keys:
  rsa_private: |
    -----BEGIN RSA PRIVATE KEY-----
    your_rsa_private_key
    -----END RSA PRIVATE KEY-----

  rsa_public: your_rsa_public_key

Set Up Trusted CA Certificates

If your infrastructure relies on keys signed by an internal certificate authority, you can set up your new machines to trust your CA cert by injecting the certificate information. For this, we use the ca-certs directive.

This directive has two sub-items. The first is remove-defaults, which, when set to true, will remove all of the normal certificate trust information included by default. This is usually not needed and can lead to some issues if you don’t know what you are doing, so use with caution.

The second item is trusted, which is a list of entries, each containing a trusted CA certificate:

#cloud-config
ca-certs:
  remove-defaults: true
  trusted:
    - |
      -----BEGIN CERTIFICATE-----
      your_CA_cert
      -----END CERTIFICATE-----

Configure resolv.conf to Use Specific DNS Servers

If you have configured your own DNS servers that you wish to use, you can manage your server’s resolv.conf file by using the resolv_conf directive. This currently only works for RHEL-based distributions.

Under the resolv_conf directive, you can manage your settings with the nameservers, searchdomains, domain, and options items.

The nameservers directive should take a list of the IP addresses of your name servers. The searchdomains directive takes a list of domains and subdomains to search in when a user specifies a host but not a domain.

The domain sets the domain that should be used for any unresolvable requests, and options contains a set of options that can be defined in the resolv.conf file.

If you are using the resolv_conf directive, you must ensure that the manage-resolv-conf directive is also set to true. Not doing so will cause your settings to be ignored:

#cloud-config
manage-resolv-conf: true
resolv_conf:
  nameservers:
    - 'first_nameserver'
    - 'second_nameserver'
  searchdomains:
    - first.domain.com
    - second.domain.com
  domain: domain.com
  options:
    option1: value1
    option2: value2
    option3: value3

Run Arbitrary Commands for More Control

If none of the managed actions that cloud-config provides works for what you want to do, you can also run arbitrary commands. You can do this with the runcmd directive.

This directive takes a list of items to execute. These items can be specified in two different ways, which will affect how they are handled.

If the list item is a simple string, the entire item will be passed to the sh shell process to run.

The other option is to pass a list, each item of which will be executed in a similar way to how execve processes commands. The first item will be interpreted as the command or script to run, and the following items will be passed as arguments for that command.

Most users can use either of these formats, but the flexibility enables you to choose the best option if you have special requirements. Any output will be written to standard out and to the /var/log/cloud-init-output.log file:

#cloud-config
runcmd:
  - [ sed, -i, -e, 's/here/there/g', some_file]
  - echo "modified some_file"
  - [cat, some_file]

Shutdown or Reboot the Server

In some cases, you’ll want to shutdown or reboot your server after executing the other items. You can do this by setting up the power_state directive.

This directive has four sub-items that can be set. These are delay, timeout, message, and mode.

The delay specifies how long into the future the restart or shutdown should occur. By default, this will be “now”, meaning the procedure will begin immediately. To add a delay, users should specify, in minutes, the amount of time that should pass using the +<num_of_mins> format.

The timeout parameter takes a unit-less value that represents the number of seconds to wait for cloud-init to complete before initiating the delay countdown.

The message field allows you to specify a message that will be sent to all users of the system. The mode specifies the type of power event to initiate. This can be “poweroff” to shut down the server, “reboot” to restart the server, or “halt” to let the system decide which is the best action (usually shutdown):

#cloud-config
power_state:
  timeout: 120
  delay: "+5"
  message: Rebooting in five minutes. Please save your work.
  mode: reboot

Troubleshooting

If a cloud-init script behaves unexpectedly, check the captured console output in /var/log/cloud-init-output.log on the deployed server itself.

Conclusion

The above examples represent some of the more common configuration items available when running a cloud-config file. There are additional capabilities that we did not cover in this guide. These include configuration management setup, configuring additional repositories, and even registering with an outside URL when the server is initialized.

You can find out more about some of these options by checking the /usr/share/doc/cloud-init/examples directory. For a practical guide to help you get familiar with cloud-config files, you can follow our tutorial on how to use cloud-config to complete basic server configuration.

Getting Started: vRealize Orchestrator Script Environments [CB10098]

  1. Introduction
  2. Prerequisite
  3. Procedure
  4. Calling modules & variables
    1. For Node.js
    2. For Python
    3. For PowerShell
  5. Sample Node.js Script

Introduction

Do you use a lot of polyglot scripts in vRO? Are you tired of creating bundles every time you work on a Python, Node.js, or PowerShell script that uses modules and libraries not provided out of the box by vRealize Orchestrator? It seems the vRO folks at VMware heard your prayers this time.

From vRO 8.8 onwards, you can add modules and libraries directly as dependencies in your vRO actions and scriptable tasks. How cool is that!

As we know, in earlier versions, you could only add dependencies by bundling them in a ZIP package, which is not only a tiring additional step but also makes editing and understanding those scripts a real nightmare. Not anymore.

In this post, we will see a detailed procedure on how to set up a script environment in vRO (8.8 or later). I am going with Node.js, but a similar process can be followed for the other languages as well. We will use an advanced date & time library called MomentJS, available at https://momentjs.com/, but you can use any other module or library of your choice.


Note Similar to other vRealize Orchestrator objects such as workflows and actions, environments can be exported to other vRealize Orchestrator deployments as part of a package, which means they are also part of version control.


Prerequisite

  • vRealize Orchestrator 8.8 or greater

Procedure

  • Log in to the vRealize Orchestrator Client.
  • Navigate to Assets > Environments, and click New Environment.
  • Under the General tab, enter a name for your environment.
  • (Optional) Enter a description, version number, tags, and group permissions for the environment.
  • Under the Definition tab, click the Add button under Dependencies.

You can also change the Memory Limit to 128, 512, 1024, etc., depending on the number and size of the packages you will be using. In my personal experience, PowerShell modules require more than the default.

  • Provide the package name and version that you want to install. For Node.js, passing latest will get you the most recent package.

Tip The package name is the same as what you would use with a package manager like npm when installing that package.


  • Once you click the Create button, you should see the message Environment successfully created.
  • Under the Download Logs tab, check whether the libraries were installed successfully.

Here, I have installed two modules, moment and moment-timezone as you can see from the logs.

  • Under Environment Variables, you can define any variables that you want to use as part of this environment.
  • Create an action or a workflow with a scriptable item, and set its Runtime Environment to the one you created. I have selected moment.
  • Play with your script. Don’t forget to call the modules and environment variables.

Calling modules & variables

For Node.js

const myModule = require('moment'); // load a module provided by the environment
const envVar = process.env.VAR_NAME; // read an environment variable

For Python

import os           # needed for os.environ
import myModule     # a module provided by the environment

envVar = os.environ.get('VAR_NAME')  # read an environment variable

For PowerShell

Import-Module myModule      # load a module provided by the environment
$envVar = $env:VAR_NAME     # read an environment variable

Sample Node.js Script

exports.handler = (context, inputs, callback) => {
    // ------------------- Don't edit above this line ------------------- //
    const moment = require('moment'); // **IMPORTANT**
    const tz = require('moment-timezone'); // **IMPORTANT**
    const indianTimeZone = process.env.TIMEZONE_IN; // read an environment variable in Node.js
    console.log(moment().format('MMMM Do YYYY, h:mm:ss a'));
    console.log(moment().format('dddd'));
    console.log(moment().format("MMM Do YY"));
    console.log(moment().format('YYYY [escaped] YYYY'));
    console.log(moment().format());
    var jul = moment("2022-07-20T12:00:00Z");
    var dec = moment("2022-12-20T12:00:00Z");
    console.log(jul.tz('America/Los_Angeles').format('ha z')); // 5am PDT
    console.log(dec.tz(indianTimeZone).format('ha z')); // e.g. 5pm IST if TIMEZONE_IN is 'Asia/Kolkata'
    // ------------------- Don't edit below this line ------------------- //
    callback(undefined, {
        status: "done"
    });
}

That’s it on this post.