Create ESXi root account with vRO [CB10104]


TL;DR If you would like to create an ESXi local account using vRO, download this package (in.co.cloudblogger.crudEsxiLocalUser.package) to get started.


  1. Introduction
  2. Classes & Methods
  3. Script for creating a local admin account in ESXi
  4. Demo Video
  5. vRO Package for CRUD operation

Introduction

Many organizations use vRO for host provisioning. Various hardware vendors provide vRO scripting APIs via plugins or REST APIs to manage and provision bare-metal servers. While doing so, there is always a possibility that, post-provisioning, you would like to access your ESXi host from an account other than root for several reasons such as security restrictions, limited access, etc. In that case, the best way is to create a fresh account using vRO with the access mode, or let's call it role, that suits your needs. In this post, we will see how to create an ESXi local user account using the vRO Scripting API.

Classes & Methods

As shown below, we have used the following classes and methods for retrieving existing accounts; creating, updating & deleting accounts; and changing the access mode or role of those accounts.
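
As a quick reference, the sketch below summarizes those calls, assuming host is a VC:HostSystem object from the vRO inventory; the literal user name, password, and description are only illustrative, and the update/remove calls follow the vSphere HostLocalAccountManager API.

// host is assumed to be of type VC:HostSystem
var accessMgr  = host.configManager.hostAccessManager;   // VcHostAccessManager
var accountMgr = host.configManager.accountManager;      // VcHostLocalAccountManager

// Read: list existing local accounts and their access entries
var entries = accessMgr.retrieveHostAccessControlEntries();

// Create / Update / Delete: create and update take a VcHostAccountSpec
var spec = new VcHostAccountSpec("testuser1", "VMware1!", "created by vRO");
accountMgr.createUser(spec);
accountMgr.updateUser(spec);
accountMgr.removeUser("testuser1");

// Change access mode (role): accessAdmin, accessReadOnly, accessNoAccess or accessNone
accessMgr.changeAccessMode("testuser1", false, VcHostAccessMode.accessAdmin);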

Script for creating a local admin account in ESXi

Link to gist here.

/**
 *
 * @version 0.0.0
 *
 * @param {VC:HostSystem} host 
 * @param {string} localUserName 
 * @param {SecureString} localUserPassword 
 * @param {string} accessMode 
 * @param {string} localUserDescription 
 *
 * @outputType void
 *
 */
function createEsxiLocalUser(host, localUserName, localUserPassword, accessMode, localUserDescription) {
	if(!host) throw "host parameter not set";
	if(!localUserName || !localUserPassword) throw "Either username or password parameter not set";
	if(!localUserDescription) localUserDescription = "***Account created using vRO***";
	if(localUserDescription.indexOf(localUserPassword) != -1) throw 'Weak Credentials! Avoid putting password string in description';
	
	// Retrieve all system and custom user accounts and make sure the user does not already exist
	var arrExistingLocalusers = host.configManager.hostAccessManager.retrieveHostAccessControlEntries();
	for (var i = 0; i < arrExistingLocalusers.length; i++) {
		if (arrExistingLocalusers[i].principal == localUserName) throw "User " + localUserName + " already exists on host " + host.name;
	}
	var accountSpecs = new VcHostAccountSpec(localUserName,localUserPassword,localUserDescription);
	host.configManager.accountManager.createUser(accountSpecs);
	switch(accessMode){
	    case 'Admin': //Full access rights
	        host.configManager.hostAccessManager.changeAccessMode(localUserName,false,VcHostAccessMode.accessAdmin);
	        break;
	    case 'ReadOnly': //See details of objects, but not make changes
	        host.configManager.hostAccessManager.changeAccessMode(localUserName,false,VcHostAccessMode.accessReadOnly);
	        break;
	    case 'NoAccess': //Used for restricting granted access
	        host.configManager.hostAccessManager.changeAccessMode(localUserName,false,VcHostAccessMode.accessNoAccess);
	        break;
	    default: //No access assigned. Note: Role assigned is accessNone
	        host.configManager.hostAccessManager.changeAccessMode(localUserName,false,VcHostAccessMode.accessNone);
	}
	System.warn("  >>> Local user "+localUserName+" created with accessMode "+accessMode+" on host "+host.name);
	
	
}
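
For reference, a minimal invocation might look like this; host is assumed to be a VC:HostSystem input, and the other values are purely illustrative.

// host: VC:HostSystem resolved from the vRO inventory (assumption); other values are illustrative
createEsxiLocalUser(host, "testuser1", "VMware1!", "Admin", "Monitoring account created by vRO");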

Demo Video

In this demo, we can see how the workflow is used to create a local account testuser1, through which we log in to ESXi and check whether it has the required permissions.

vRO Package for CRUD operation

I have created a vRO Workflow to create and manage your ESXi local accounts directly from the input form itself. Please find the vRO package that contains the master workflow and associated actions.

  • Workflow: CRUD Operation on ESXi Local Users 
  • Actions:
    • getEsxiLocalUser
    • deleteEsxiLocalUser
    • updateEsxiLocalUser
    • createEsxiLocalUser
    • getAllEsxiLocalUsers
    • getAllEsxiLocalUsersWithRoles

Link to vRO package: in.co.cloudblogger.crudEsxiLocalUser.package

That’s all in this post. Thanks for reading.

Edit text-based Resource Elements On-The-Go [CB10103]


TL;DR The idea is to update resource elements from vRO itself, as no such functionality exists in the UI yet. Use the package and workflow to update resource elements directly from vRO quickly and conveniently. Link to the vRO package here.


  1. Introduction
  2. Prerequisites
  3. Procedure
  4. Scope of Improvement
  5. References

Introduction

We all use Resource Elements inside vRealize Orchestrator for various reasons. Resource Elements are external independent objects that once imported, can be used in vRO Workflows and scripts.

Resource elements include image files, scripts, XML templates, HTML files, and so on. However, I have one concern regarding updating them: it is such an overhead. Though the official vRO docs clearly mention that you can import, export, restore, update, and delete a resource element, in reality you have to update that object using an external editor, that is, a text editor for text-based files, an image editor for images, etc.

Apart from images, the most common types of resource elements are text-based files, for example .sh, .ps1, .json, .xml, .html, .yaml, etc. In this post, to ease the process of updating resource elements, I have created a workflow with which you won't have to follow the long, boring method of exporting a resource element, editing it in Notepad++, and importing it back. Just run the workflow, select your resource element, and it will give you a text-editing area where you can update your resource element on-the-go.
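
Under the hood, the update itself can be done entirely in vRO scripting. Here is a minimal sketch, assuming resourceElement is a workflow input of type ResourceElement and newContent holds the edited text:

// Assumptions: resourceElement is a ResourceElement input, newContent is a string input
var mime = new MimeAttachment();
mime.name = resourceElement.name;
mime.mimeType = "text/yaml";   // or a text/{fileExtension} value matching your file type
mime.content = newContent;

// Overwrite the stored content with the edited text
resourceElement.setContentFromMimeAttachment(mime);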

Prerequisites

  • Download the package from here and import it into your vRO.
  • Make sure you have the resource element you want to update.

Procedure

  • Run the Workflow Edit Resource Element On-The-Go and select the resource element. Here, I’ve selected template.yaml which I already imported in vRO earlier.
  • By default, vRO picks up a MIME type for your file. However, for text-based objects, you can set it to text/{fileExtension}. Here, I will set it to text/yaml so that I can see its content directly in vRO.
  • Go to the next section of the workflow and you can see the current content of your resource element. Edit it the way you like. Here, this file was empty, so I added this YAML code.
  • Click Run and wait for the workflow to complete.
  • Go back to the resource element to check if the content is there.
  • Now, you want to change the flavor from large to small. Rerun the workflow, edit the flavor value. Click Run.
  • Your changes should be visible now.

Scope of Improvement

I am looking for a way to give the version as an input, so that we can update the version of the resource element as we update its content. It seems the resourceElement.version attribute is not working, at least in vRO 8.x. Suggestions would be appreciated.

References

https://kb.vmware.com/s/article/81575

How to modify vRO Workflow description using REST

  1. Requirement
  2. Procedure
  3. Steps
    1. Response body from GET call
    2. Body with updated description for PUT call

Requirement

If you want to update a workflow's description whenever you need to, either while working in vRO or from outside, you can use this quick method based on vRO's REST APIs. If we want, we can easily create a workflow out of it.

Procedure

We will be using two REST APIs from vRO.

GET schema: for getting the schema content of the WF which will be modified and used later.

PUT schema: for updating the description of the WF.

Steps

  • Open the GET request, provide the workflow ID, and click Execute. If the status is 200, you will see the response body with the workflow content, including its description.
  • Now modify this response body so that the new body has the updated description.

Response body from GET call

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <schema-workflow xmlns:ns2="http://www.vmware.com/vco" root-name="item1" object-name="workflow:name=generic" id="bddbcab3-b4b7-4577-b76e-3301374d805f" version="0.0.0" api-version="6.0.0" restartMode="1" resumeFromFailedMode="0" editor-version="2.0">
    <display-name>demo test</display-name>
    <description>This description needs to be updated programmatically, but how?</description>
    <position y="50.0" x="100.0"/>
    <input/>
    <output/>
.
.
.

Body with updated description for PUT call

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <schema-workflow xmlns:ns2="http://www.vmware.com/vco" root-name="item1" object-name="workflow:name=generic" id="bddbcab3-b4b7-4577-b76e-3301374d805f" version="0.0.0" api-version="6.0.0" restartMode="1" resumeFromFailedMode="0" editor-version="2.0">
    <display-name>demo test</display-name>
    <description>The description has been updated using REST API</description>
    <position y="50.0" x="100.0"/>
    <input/>
    <output/>
.
.
.
  • Go to the PUT request and provide the parameters: the workflow id and the updated body content. Click Execute.
  • You should see the updated description in the workflow.

Now you can easily automate this by creating an action or workflow where you simply pass the IDs of all the workflows to update along with the new description. Let me know in the comments if you want me to create a workflow for this.
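
If you prefer to script it, a rough sketch using vRO's own REST plug-in could look like the following. This is only an illustration: the endpoint path is an assumption based on the GET/PUT schema calls shown above, and the host URL, credentials, and variable names are placeholders you would adapt to your environment and authentication method.

// All values below (FQDN, credentials, workflow ID, schema XML) are placeholders
var workflowId = "bddbcab3-b4b7-4577-b76e-3301374d805f";
var updatedXml = schemaXmlWithNewDescription;   // the modified body from the GET call

var transientHost = RESTHostManager.createTransientHostFrom(RESTHostManager.createHost("vro"));
transientHost.url = "https://vro-fqdn.example.com";

// Endpoint path assumed from the GET/PUT schema calls used above
var request = transientHost.createRequest("PUT", "/vco/api/workflows/" + workflowId + "/schema", updatedXml);
request.contentType = "application/xml";
request.setHeader("Authorization", "Basic " + base64Credentials);  // placeholder auth

var response = request.execute();
System.log("PUT returned status " + response.statusCode);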

That's it for this post. You can check the question in the VMTN Community for which I created this post. Don't forget to subscribe.

Inside vRO’s JavaScript Engine – Rhino 1.7R4 [CB10102]

  1. What is Rhino Engine?
  2. Released 10 years ago
  3. Compatibility with JavaScript features
  4. ECMAScript 5.1 Specifications
  5. Rhino on GitHub
  6. Rhino Engine Limitations in vRO
  7. Access to additional Java Classes
    1. Procedure
  8. Javadoc for Rhino 1.7R4
    1. Feature Set when released
  9. Additional Links

vRealize Orchestrator, a.k.a. vRO, which is a drag-and-drop automation tool, is quite an old tool, developed and released as early as 2007 by Dunes Technologies, Switzerland. After VMware acquired Dunes, hundreds of releases and updates came year after year. Lots of improvements were made over time in the UI, backend technologies, security, multi-language support, etc. However, one thing that remains the same is its JavaScript engine: vRO uses the Mozilla Rhino engine 1.7R4, which was released in 2012.

In this post, my goal is to provide some insights on this Rhino engine, as information about it has almost disappeared from the internet. However, I am still chasing the JavaScript engine that provides IntelliSense support in vRO 8.x; as you might have noticed, and probably wondered about, CTRL+SPACE shows options only available in recent versions of JavaScript. I guess it comes from a Node.js runtime.

What is Rhino Engine?

The Rhino engine converts JavaScript scripts into Java classes. It is intended to be used in desktop or server-side applications, hence there is no built-in support for the web browser objects that are commonly associated with JavaScript, which makes it very suitable for vRO. Rhino works in both compiled and interpreted mode. The engine got its name from the animal on the cover of the O'Reilly book about JavaScript published many years back.

The Rhino project was started at Netscape in the autumn of 1997. At the time, Netscape was planning to produce a version of Navigator written entirely in Java and so it needed an implementation of JavaScript written in Java. When Netscape stopped work on “Javagator,” as it was called, somehow Rhino escaped the axe (rumor had it that the executives “forgot” it existed). For a time, a couple of major companies (including Sun) licensed Rhino for use in their products and paid Netscape to do so, allowing work on Rhino to continue. Now Rhino is part of Mozilla’s open-source repository.

Released 10 years ago

Released on 2012-06-18, Rhino 1.7R4 is almost prehistoric by today's standards, and that has always been a point of discussion in the vRO community.

Release Notes of 1.7R4

Compatibility with JavaScript features

While trying to look deep into its compatibility matrix with ES, I found Kangax's compat-table, which gives an excellent and detailed view of all the features that Rhino 1.7R4 supports. Click the link to know more.

Rhino Compatibility Matrix with JavaScript

ECMAScript 5.1 Specifications

This document will give you very in-depth knowledge of ECMAScript 5.1, the specification that vRO leverages, and will help you understand the language better. Learn more at https://262.ecma-international.org/5.1.

You can download this document and read about all the fine details by yourself.

Rhino on GitHub

Currently, on Mozilla's GitHub page, version 1.7R4 is not available. However, you may find some very old scripts that were written at the time of 1.7R4, as I could validate using the Web Archive. You can explore their GitHub repo here.

Rhino Engine Limitations in vRO

When writing scripts for workflows, you must consider the following limitations of the Mozilla Rhino implementation in Orchestrator.

  • When a workflow runs, the objects that pass from one workflow element to another are not JavaScript objects. What is passed from one element to the next is the serialization of a Java object that has a JavaScript image. As a consequence, you cannot use the whole JavaScript language, but only the classes that are present in the API Explorer. You cannot pass function objects from one workflow element to another.
  • Orchestrator runs the code in scriptable task elements in a context that is not the Rhino root context. Orchestrator transparently wraps scriptable task elements and actions into JavaScript functions, which it then runs. A scriptable task element that contains System.log(this); does not display the global object this in the same way as a standard Rhino implementation does.
  • You can only call actions that return nonserializable objects from scripting, and not from workflows. To call an action that returns a nonserializable object, you must write a scriptable task element that calls the action by using the System.getModule("ModuleName").actionName() method, as illustrated after this list.
  • Workflow validation does not check whether a workflow attribute type is different from an input type of an action or subworkflow. If you change the type of a workflow input parameter, for example from VIM3:VirtualMachine to VC:VirtualMachine, but you do not update any scriptable tasks or actions that use the original input type, the workflow validates but does not run.
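
For illustration, the call pattern looks like this; the module and action names below are hypothetical.

// Hypothetical module "com.example.vsphere" with an action "getSdkConnectionByName"
// that returns a nonserializable plug-in object
var sdkConnection = System.getModule("com.example.vsphere").getSdkConnectionByName("vcenter01");
System.log("Retrieved connection: " + sdkConnection);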

Access to additional Java Classes

By default, vRealize Orchestrator restricts JavaScript access to a limited set of Java classes. If you require JavaScript access to a wider range of Java classes, you must set a vRealize Orchestrator system property.

Allowing the JavaScript engine full access to the Java virtual machine (JVM) presents potential security issues. Malformed or malicious scripts might have access to all the system components to which the user who runs the vRealize Orchestrator server has access. Therefore, by default the vRealize Orchestrator JavaScript engine can access only the classes in the java.util.* package.

If you require JavaScript access to classes outside of the java.util.* package, you can list in a configuration file the Java packages to which to allow JavaScript access. You then set the com.vmware.scripting.rhino-class-shutter-file system property to point to this file.

Procedure

  1. Create a text configuration file to store the list of Java packages to which to allow JavaScript access. For example, to allow JavaScript access to all the classes in the java.net package and to the java.lang.Object class, add the following content to the file:
     java.net.*
     java.lang.Object
  2. Enter a name for the configuration file.
  3. Save the configuration file in a subdirectory of /data/vco/usr/lib/vco. The configuration file cannot be saved under another directory.
  4. Log in to Control Center as root.
  5. Click System Properties.
  6. Click New.
  7. In the Key text box, enter com.vmware.scripting.rhino-class-shutter-file.
  8. In the Value text box, enter vco/usr/lib/vco/your_configuration_file_subdirectory.
  9. In the Description text box, enter a description for the system property.
  10. Click Add.
  11. Click Save changes from the pop-up menu. A message indicates that you have saved successfully.
  12. Wait for the vRealize Orchestrator server to restart.

See an implementation example of accessing external Java classes by BlueCat here. In this code, new java.lang.Long(0) is used:

.
.
.
var testConfig = BCNProteusAPI.createAPIEntity(new java.lang.Long(0),configName,"","Configuration" );
var args = new Array( new java.lang.Long(0), testConfig );
configId = new java.lang.Long( BCNProteusAPI.call( profileName,"addEntity",args ));
System.log( "New configuration was created, id=" + configId );
var addTFTPGroupArgs = new Array( configId, "tftpGroupName1", "" );
var tftpGroupId = new java.lang.Long( BCNProteusAPI.call(profileName,"addTFTPGroup", addTFTPGroupArgs ) );
System.log( "New TFTP Group was created, id=" + tftpGroupId );
.
.
.

Javadoc for Rhino 1.7R4

Feature Set when released

Source: https://contest-server.cs.uchicago.edu/ref/JavaScript/developer.mozilla.org/en-US/docs/Web/JavaScript/New_in_JavaScript/1-7.html#New_features_in_JavaScript_1.7

Find Object Types in vRealize Orchestrator [CB10100]

Sometimes, we want to know exactly what type of vRO object we are working on. It could be something that is returning from an action of type Any or a method returning various types of objects or simply about switch cases. In this quick post, we will see what are the options that vRO provides and where to use them.

  1. typeof
    1. Code Examples
    2. Using new operator
    3. Use of Parentheses
  2. System.getObjectType()
    1. Code Examples
  3. System.getObjectClassName()
    1. Code Examples
  4. instanceof
    1. Syntax
    2. Code Examples

typeof

The typeof operator returns a string indicating the type of the operand’s value where the operand is an object or of primitive type.

Type                                    Result
Undefined                               "undefined"
Null                                    "object" (reason)
Boolean                                 "boolean"
Number                                  "number"
String                                  "string"
Function                                "function"
Array                                   "object"
Date                                    "object"
vRO Object types                        "object"
vRO Object types with new operator      "function"

Code Examples

var var1 = new VcCustomizationSpec(); 
System.debug(typeof var1); //function
var var2 = new Object();
System.debug(typeof var2); //object
var var3 = "a";
System.debug(typeof var3); //string
var var4 = 2;
System.debug(typeof var4); //number
var var4 = new Array(1, 2, 3);
System.debug(typeof var4); //object
System.debug(typeof []); //object
System.debug(typeof function () {}); //function
System.debug(typeof /regex/); //object
System.debug(typeof new Date()); //object
System.debug(typeof null); //object
System.debug(typeof undefinedVarible); //undefined

Using new operator

In this example, the typeof operator shows different results when used with the new operator for the class VC:CustomizationSpecManager. That's because the new operator is used to create an instance of a user-defined object type or of one of the built-in object types that has a constructor function. So basically it calls the constructor function of that object type, hence typeof prints function. However, something to note here is that when the new operator is used with the primitive wrapper type Number, typeof recognizes the result as an object.

var num1 = 2;
System.debug(typeof num1); //number

var num2 = Number("123");
System.debug(typeof (1 + num2)); //number

var num3 = new Number("123");
System.debug(typeof (num3)); //object

var num4 = new Number("123");
System.debug(typeof (1 + num4)); //number

Use of Parentheses

// Parentheses can be used for determining the data type of expressions.
const someData = 99;
typeof someData + "cloudblogger"; // "numbercloudblogger"
typeof (someData + " cloudblogger"); // "string"

System.getObjectType()

The System.getObjectType() method returns the VS-O ‘type’ for the given operand. This method is more advanced than typeof and is able to detect more complex yet intrinsic object types like Date, Array etc. But, it still cannot figure out the plugin object types like VC:SDKConnection, etc.

Type                                            Result
Array                                           "Array"
Number                                          "number"
String                                          "string"
vRO Plugin Object Types (with or without new)   "null"
Date                                            "Date"
Composite Types                                 "Properties"
SecureString                                    "string"
undefined Variable                              Reference Error

Code Examples


var var1 = new VcCustomizationSpec(); 
System.debug(System.getObjectType(var1)); //null

var var2 = new Object();
System.debug(System.getObjectType(var2)); //Properties

var var3 = "a";
System.debug(System.getObjectType(var3)); //string

var var4 = 2;
System.debug(System.getObjectType(var4)); //number

var var4 = new Array(1, 2, 3);
System.debug(System.getObjectType(var4)); //Array

System.debug(System.getObjectType([])); //Array

System.debug(System.getObjectType(function () {})); //null

System.debug(System.getObjectType(new Date())); //Date

System.debug(System.getObjectType(undefinedVarible)); //FAIL ReferenceError: "undefinedVarible" is not defined.

System.getObjectClassName()

The System.getObjectClassName() method returns the class name of any vRO scripting object for which typeof(obj) returns "object". It works best with complex vRO object types and surpasses System.getObjectType() in its capability to identify object types.

Type                                            Result
Array                                           "Array"
Number                                          "Number"
String                                          "String"
vRO Plugin Object Types (eg: VC:SdkConnection)  Class Name (eg: VcSdkConnection)
Date                                            "Date"
Composite Types                                 "Properties"
SecureString                                    "String"
undefined Variable                              Reference Error
null objects                                    Error: Cannot get class name from null object

Code Examples

System.debug(System.getObjectClassName(input));  //String

var var1 = new VcCustomizationSpec(); 
System.debug(System.getObjectClassName(var1)); //VcCustomizationSpec

var var2 = new Object();
System.debug(System.getObjectClassName(var2)); //Object

var var3 = "a";
System.debug(System.getObjectClassName(var3)); //String

var var4 = 2;
System.debug(System.getObjectClassName(var4)); //Double

var var4 = new Array(1, 2, 3);
System.debug(System.getObjectClassName(var4)); //Array

System.debug(System.getObjectClassName([])); //Array

System.debug(System.getObjectClassName(function () {})); //Function

System.debug(System.getObjectClassName(new Date())); //Date

instanceof

The instanceof operator tests whether the prototype property of a constructor appears anywhere in the prototype chain of an object. The return value is a boolean. In other words, instanceof checks whether the object on the left-hand side was created from (or inherits from) the constructor on the right-hand side. That's why it doesn't work with primitive values like numbers and strings, but it does work with the variety of complex types available in vRO.

Syntax

object instanceof constructor

Code Examples

var var1 = new VcCustomizationSpec(); 
System.debug(var1 instanceof VcCustomizationSpec); //true

var var1 = new VcCustomizationSpec(); 
System.debug(var1 instanceof Object); //true

var var2 = new Object();
System.debug(var2 instanceof Object); //true

var var3 = "a";
System.debug(var3 instanceof String); //false

var var3 = new String("a");
System.debug(var3 instanceof String); //true

var var4 = 2;
System.debug(var4 instanceof Number); //false

var var4 = new Array(1, 2, 3);
System.debug(var4 instanceof Array); //true

System.debug([] instanceof Array); //true

System.debug(function () {} instanceof Function); //true

System.debug(new Date() instanceof Date); //true

System.debug({} instanceof Object); //true

That's all in this post. I hope you now have a better understanding of how to check vRO object types. Let me know in the comments if you have any doubt or question. Feel free to share this article. Thank you.

Advanced JavaScript Snippets in vRO [CB10099]

  1. Introduction
  2. Snippets
    1. External Modules
    2. First-class Functions
    3. Ways to add properties to Objects
    4. Custom Class
    5. Private variable
    6. Label
    7. with keyword
    8. Function binding
    9. Prototype Chaining
  3. Recommended Reading

Introduction

vRO JS code is generally plain and basic, just enough to get the job done. But I was wondering how to fancy it up. So, I picked some slightly more modern JS code (ES5.1+) and tried running it on my vRO 8.3. I found some interesting things which I would like to share in this article.

Snippets

Here are some JS concepts that you can use while writing vRO JavaScript code to make it more compelling and beautiful.

External Modules

To utilize modern features, you can use modules like lodash.js for features such as map or filter. Another popular module is moment.js for complex date and time handling in vRO.

var _ = System.getModule("fr.numaneo.library").lodashLibrary();
var myarr = [1,2,3];
var myarr2 = [4,5,6];
var concatarr = _.concat(myarr, myarr2);
System.log(concatarr); // [1,2,3,4,5,6];

Find more information on how to leverage Lodash.js in vRO here.

First-class Functions

First-class functions are functions that are treated like any other variable. For example, a function can be passed as an argument to other functions, can be returned by another function and can be assigned as a value to a variable.

// we send in the function as an argument to be
// executed from inside the calling function
function performOperation(a, b, cb) {
    var c = a + b;
    cb(c);
}

performOperation(2, 3, function(result) {
    // prints out 5
    System.log("The result of the operation is " + result);
})

Ways to add properties to Objects

There are 4 ways to add a property to an object in vRO.

// supported since ES3
// the dot notation
instance.key = "A key's value";

// the square brackets notation
instance["key"] = "A key's value";

// supported since ES5
// setting a single property using Object.defineProperty
Object.defineProperty(instance, "key", {
    value: "A key's value",
    writable: true,
    enumerable: true,
    configurable: true
});

// setting multiple properties using Object.defineProperties
Object.defineProperties(instance, {
    "firstKey": {
        value: "First key's value",
        writable: true
    },
    "secondKey": {
        value: "Second key's value",
        writable: false
    }
});

Custom Class

You can create your own custom classes in vRO using the function keyword and extend that function’s prototype.

// we define a constructor for Person objects
function Person(name, age, isDeveloper) {
    this.name = name;
    this.age = age;
    this.isDeveloper = isDeveloper || false;
}

// we extend the function's prototype
Person.prototype.writesCode = function() {
    System.log(this.isDeveloper? "This person does write code" : "This person does not write code");
}

// creates a Person instance with properties name: Bob, age: 38, isDeveloper: true and a method writesCode
var person1 = new Person("Bob", 38, true);
// creates a Person instance with properties name: Alice, age: 32, isDeveloper: false and a method writesCode
var person2 = new Person("Alice", 32);

// prints out: This person does write code
person1.writesCode();
// prints out: this person does not write code
person2.writesCode();

Both instances of the Person constructor can access a shared instance of the writesCode() method.

Private variable

A private variable is only visible to the current class. It is not accessible in the global scope or to any of its subclasses. In Java (and most other programming languages), for example, we can do this by using the private keyword when declaring a variable. In JavaScript, we can emulate it with a closure, as shown below.

// we  used an immediately invoked function expression
// to create a private variable, counter
var counterIncrementer = (function() {
    var counter = 0;

    return function() {
        return ++counter;
    };
})();

// prints out 1
System.log(counterIncrementer());
// prints out 2
System.log(counterIncrementer());
// prints out 3
System.log(counterIncrementer());

Label

Labels can be used with break or continue statements. A label prefixes a statement with an identifier which you can refer to.

var str = '';

loop1:
for (var i = 0; i < 5; i++) {
  if (i === 1) {
    continue loop1;
  }
  str = str + i;
}

System.log(str);
// expected output: "0234"

with keyword

The with statement extends the scope chain for a statement. Check the example for better understanding.

var box = {"dimensions": {"width": 2, "height": 3, "length": 4}};
with(box.dimensions){
  var volume = width * height * length;
}
System.log(volume); //24

// vs

var box = {"dimensions": {"width": 2, "height": 3, "length": 4}};
var boxDimensions = box.dimensions;
var volume2 = boxDimensions.width * boxDimensions.height * boxDimensions.length;
System.log(volume2); //24

Function binding

The bind() method creates a new function that, when called, has its this keyword set to the provided value, with a given sequence of arguments preceding any provided when the new function is called.

const module = {
  x: 42,
  getX: function() {
    return this.x;
  }
};

const unboundGetX = module.getX;
System.log(unboundGetX()); // The function gets invoked at the global scope
// expected output: undefined

const boundGetX = unboundGetX.bind(module);
System.log(boundGetX());
// expected output: 42

Prototype Chaining

const o = {
  a: 1,
  b: 2,
  // __proto__ sets the [[Prototype]]. It's specified here
  // as another object literal.
  __proto__: {
    b: 3,
    c: 4,
  },
};

// o.[[Prototype]] has properties b and c.
// o.[[Prototype]].[[Prototype]] is Object.prototype (we will explain
// what that means later).
// Finally, o.[[Prototype]].[[Prototype]].[[Prototype]] is null.
// This is the end of the prototype chain, as null,
// by definition, has no [[Prototype]].
// Thus, the full prototype chain looks like:
// { a: 1, b: 2 } ---> { b: 3, c: 4 } ---> Object.prototype ---> null

System.log(o.a); // 1
// Is there an 'a' own property on o? Yes, and its value is 1.

System.log(o.b); // 2
// Is there a 'b' own property on o? Yes, and its value is 2.
// The prototype also has a 'b' property, but it's not visited.
// This is called Property Shadowing

System.log(o.c); // 4
// Is there a 'c' own property on o? No, check its prototype.
// Is there a 'c' own property on o.[[Prototype]]? Yes, its value is 4.

System.log(o.d); // undefined
// Is there a 'd' own property on o? No, check its prototype.
// Is there a 'd' own property on o.[[Prototype]]? No, check its prototype.
// o.[[Prototype]].[[Prototype]] is Object.prototype and
// there is no 'd' property by default, check its prototype.
// o.[[Prototype]].[[Prototype]].[[Prototype]] is null, stop searching,
// no property found, return undefined.

Official VMware Guides for vRO and vRA 8.x

  1. Introduction
  2. Guides for vRA
    1. Architecture
    2. Cloud Assembly
    3. Code Stream
    4. Service Broker
    5. Transition Guide
    6. Migration
    7. Cloud Transition (SaaS only)
    8. Integration with ServiceNow
    9. Load Balancing
    10. NSX-T Migration
    11. SaltStack Config
  3. Guides for vRO
    1. Installation
    2. User Interface
    3. Developer’s Guide
    4. Migration
    5. Plug-in Development
    6. Plug-in Guide

Introduction

This blogpost is simply about giving a consolidated view of all the official guides that VMware provides for vRealize Automation and vRealize Orchestrator. These guides can help Automation Engineers & Developers, Solution Architects, vRealize Admins, etc., and can be used as a reference for developing vRO code, vRA templates, and various other tasks. You can download them from the provided links for offline access.

Guides for vRA

Architecture

Cloud Assembly

Code Stream

Service Broker

Transition Guide

Migration

Cloud Transition (SaaS only)

Download: https://docs.vmware.com/en/vRealize-Automation/services/vrealize-automation-cloud-transition-guide.pdf

Integration with ServiceNow

Load Balancing

NSX-T Migration

SaltStack Config

Guides for vRO

Installation

User Interface

Download: https://docs.vmware.com/en/vRealize-Orchestrator/8.8/vrealize-orchestrator-88-using-client-guide.pdf

Developer’s Guide

Migration

Plug-in Development

Plug-in Guide

I will try to update this list over time. Hope this list of guides will help you in understanding things a little better. Feel free to share.

An Introduction to Cloud-Config Scripting for Linux based VMs in vRA Cloud Templates | Cloud-Init

  1. Introduction
    1. Use cloud-init to configure:
    2. Compatible OSes
  2. Install cloud-init in VM images #firststep
  3. Where cloudConfig commands can be added
  4. General Information about Cloud-Config
  5. YAML Formatting
  6. User and Group Management
  7. Change Passwords for Existing Users
  8. Write Files to the Disk
  9. Update or Install Packages on the Server
  10. Configure SSH Keys for User Accounts and the SSH Daemon
  11. Set Up Trusted CA Certificates
  12. Configure resolv.conf to Use Specific DNS Servers
  13. Run Arbitrary Commands for More Control
  14. Shutdown or Reboot the Server
  15. Troubleshooting
  16. Conclusion
  17. References

Introduction

Cloud images are operating system templates and every instance starts out as an identical clone of every other instance. It is the user data that gives every cloud instance its personality and cloud-init is the tool that applies user data to your instances automatically.

Use cloud-init to configure:

  • Setting a default locale
  • Setting the hostname
  • Generating and setting up SSH private keys
  • Setting up ephemeral mount points
  • Installing packages

There is even a full-fledged website https://cloud-init.io/ where you can check various types of resources and information.

Compatible OSes

While cloud-init started life in Ubuntu, it is now available for most major Linux and FreeBSD operating systems. For cloud image providers, cloud-init handles many of the differences between cloud vendors automatically; for example, the official Ubuntu cloud images are identical across all public and private clouds.

cloudConfig commands are special scripts designed to be run by the cloud-init process. These are generally used for initial configuration on the very first boot of a server. In this guide, we will be discussing the format and usage of cloud-config commands.

Install cloud-init in VM images #firststep

Make sure cloud-init is installed and properly configured in the Linux-based images you want to work with. You may have to install it in some OSes and flavors. For example, cloud-init comes installed in the official Ubuntu live server images since the release of 18.04, in the Ubuntu Cloud Images, etc. However, in some of the Red Hat Linux images, it doesn't come preinstalled.

Where cloudConfig commands can be added

You can add a cloudConfig section to cloud template code, but you can also add one to a machine image in advance, when configuring infrastructure. Then, all cloud templates that reference the source image get the same initialization.

You might have an image map and a cloud template where both contain initialization commands. At deployment time, the commands merge, and Cloud Assembly runs the consolidated commands. When the same command appears in both places but includes different parameters, only the image map command is run. Faulty cloudConfig commands can result in a resource that isn’t correctly configured or behaves unpredictably.


Important: cloudConfig may cause unpredictable results when used with vSphere Guest Customizations. Some trial and error may be needed to figure out what works best.


General Information about Cloud-Config

The cloud-config format implements a declarative syntax for many common configuration items, making it easy to accomplish many tasks. It also allows you to specify arbitrary commands for anything that falls outside of the predefined declarative capabilities.

This "best of both worlds" approach lets the file act like a configuration file for common tasks, while maintaining the flexibility of a script for more complex functionality.

YAML Formatting

The file is written using the YAML data serialization format. The YAML format was created to be easy to understand for humans and easy to parse for programs.

YAML files are generally fairly intuitive to understand when reading them, but it is good to know the actual rules that govern them.

Some important rules for YAML files are:

  • Indentation with whitespace indicates the structure and relationship of the items to one another. Items that are more indented are sub-items of the first item with a lower level of indentation above them.
  • List members can be identified by a leading dash.
  • Associative array entries are created by using a colon (:) followed by a space and the value.
  • Blocks of text are indented. To indicate that the block should be read as-is, with the formatting maintained, use the pipe character (|) before the block.

Let’s take these rules and analyze an example cloud-config file, paying attention only to the formatting:

#cloud-config
users:
  - name: demo
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDf0q4PyG0doiBQYV7OlOxbRjle026hJPBWD+eKHWuVXIpAiQlSElEBqQn0pOqNJZ3IBCvSLnrdZTUph4czNC4885AArS9NkyM7lK27Oo8RV888jWc8hsx4CD2uNfkuHL+NI5xPB/QT3Um2Zi7GRkIwIgNPN5uqUtXvjgA+i1CS0Ku4ld8vndXvr504jV9BMQoZrXEST3YlriOb8Wf7hYqphVMpF3b+8df96Pxsj0+iZqayS9wFcL8ITPApHi0yVwS8TjxEtI3FDpCbf7Y/DmTGOv49+AWBkFhS2ZwwGTX65L61PDlTSAzL+rPFmHaQBHnsli8U9N6E4XHDEOjbSMRX user@example.com
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDcthLR0qW6y1eWtlmgUE/DveL4XCaqK6PQlWzi445v6vgh7emU4R5DmAsz+plWooJL40dDLCwBt9kEcO/vYzKY9DdHnX8dveMTJNU/OJAaoB1fV6ePvTOdQ6F3SlF2uq77xYTOqBiWjqF+KMDeB+dQ+eGyhuI/z/aROFP6pdkRyEikO9YkVMPyomHKFob+ZKPI4t7TwUi7x1rZB1GsKgRoFkkYu7gvGak3jEWazsZEeRxCgHgAV7TDm05VAWCrnX/+RzsQ/1DecwSzsP06DGFWZYjxzthhGTvH/W5+KFyMvyA+tZV4i1XM+CIv/Ma/xahwqzQkIaKUwsldPPu00jRN user@desktop
runcmd:
  - touch /test.txt

By looking at this file, we can learn a number of important things.

First, each cloud-config file must begin with #cloud-config alone on the very first line. This signals to the cloud-init program that this should be interpreted as a cloud-config file. If this were a regular script file, the first line would indicate the interpreter that should be used to execute the file.

The file above has two top-level directives, users and runcmd. These both serve as keys. The values of these keys consist of all of the indented lines after the keys.

In the case of the users key, the value is a single list item. We know this because the next level of indentation is a dash (-) which specifies a list item, and because there is only one dash at this indentation level. In the case of the users directive, this incidentally indicates that we are only defining a single user.

The list item itself contains an associative array with more key-value pairs. These are sibling elements because they all exist at the same level of indentation. Each of the user attributes are contained within the single list item we described above.

Some things to note are that the strings you see do not require quoting and that there are no unnecessary brackets to define associations. The interpreter can determine the data type fairly easily and the indentation indicates the relationship of items, both for humans and programs.

By now, you should have a working knowledge of the YAML format and feel comfortable working with information using the rules we discussed above.

We can now begin exploring some of the most common directives for cloud-config.

User and Group Management

To define new users on the system, you can use the users directive that we saw in the example file above.

The general format of user definitions is:

#cloud-config
users:
  - first_user_parameter
    first_user_parameter
    
  - second_user_parameter
    second_user_parameter
    second_user_parameter
    second_user_parameter

Each new user should begin with a dash. Each user defines parameters in key-value pairs. The following keys are available for definition:

  • name: The account username.
  • primary-group: The primary group of the user. By default, this will be a group created that matches the username. Any group specified here must already exist or must be created explicitly (we discuss this later in this section).
  • groups: Any supplementary groups can be listed here, separated by commas.
  • gecos: A field for supplementary info about the user.
  • shell: The shell that should be set for the user. If you do not set this, the very basic sh shell will be used.
  • expiredate: The date that the account should expire, in YYYY-MM-DD format.
  • sudo: The sudo string to use if you would like to define sudo privileges, without the username field.
  • lock-passwd: This is set to “True” by default. Set this to “False” to allow users to log in with a password.
  • passwd: A hashed password for the account.
  • ssh-authorized-keys: A list of complete SSH public keys that should be added to this user’s authorized_keys file in their .ssh directory.
  • inactive: A boolean value that will set the account to inactive.
  • system: If “True”, this account will be a system account with no home directory.
  • homedir: Used to override the default /home/<username>, which is otherwise created and set.
  • ssh-import-id: The SSH ID to import from LaunchPad.
  • selinux-user: This can be used to set the SELinux user that should be used for this account’s login.
  • no-create-home: Set to “True” to avoid creating a /home/<username> directory for the user.
  • no-user-group: Set to “True” to avoid creating a group with the same name as the user.
  • no-log-init: Set to “True” to not initiate the user login databases.

Other than some basic information, like the name key, you only need to define the areas where you are deviating from the default or supplying needed data.

One thing that is important for users to realize is that the passwd field should not be used in production systems unless you have a mechanism of immediately modifying the given value. As with all information submitted as user-data, the hash will remain accessible to any user on the system for the entire life of the server. On modern hardware, these hashes can easily be cracked in a trivial amount of time. Exposing even the hash is a huge security risk that should not be taken on any machines that are not disposable.

For an example user definition, we can use part of the example cloud-config we saw above:

#cloud-config
users:
  - name: demo
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDf0q4PyG0doiBQYV7OlOxbRjle026hJPBWD+eKHWuVXIpAiQlSElEBqQn0pOqNJZ3IBCvSLnrdZTUph4czNC4885AArS9NkyM7lK27Oo8RV888jWc8hsx4CD2uNfkuHL+NI5xPB/QT3Um2Zi7GRkIwIgNPN5uqUtXvjgA+i1CS0Ku4ld8vndXvr504jV9BMQoZrXEST3YlriOb8Wf7hYqphVMpF3b+8df96Pxsj0+iZqayS9wFcL8ITPApHi0yVwS8TjxEtI3FDpCbf7Y/DmTGOv49+AWBkFhS2ZwwGTX65L61PDlTSAzL+rPFmHaQBHnsli8U9N6E4XHDEOjbSMRX user@example.com
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDcthLR0qW6y1eWtlmgUE/DveL4XCaqK6PQlWzi445v6vgh7emU4R5DmAsz+plWooJL40dDLCwBt9kEcO/vYzKY9DdHnX8dveMTJNU/OJAaoB1fV6ePvTOdQ6F3SlF2uq77xYTOqBiWjqF+KMDeB+dQ+eGyhuI/z/aROFP6pdkRyEikO9YkVMPyomHKFob+ZKPI4t7TwUi7x1rZB1GsKgRoFkkYu7gvGak3jEWazsZEeRxCgHgAV7TDm05VAWCrnX/+RzsQ/1DecwSzsP06DGFWZYjxzthhGTvH/W5+KFyMvyA+tZV4i1XM+CIv/Ma/xahwqzQkIaKUwsldPPu00jRN user@desktop

To define groups, you should use the groups directive. This directive is relatively simple in that it just takes a list of groups you would like to create.

An optional extension to this is to create a sub-list for any of the groups you are making. This new list will define the users that should be placed in this group:

#cloud-config
groups:
  - group1
  - group2: [user1, user2]

Change Passwords for Existing Users

For user accounts that already exist (the root account is the most pertinent), a password can be supplied by using the chpasswd directive.

Note: This directive should only be used in debugging situations, because, once again, the value will be available to every user on the system for the duration of the server’s life. This is even more relevant in this section because passwords submitted with this directive must be given in plain text.

The basic syntax looks like this:

#cloud-config
chpasswd:
  list: |
    user1:password1
    user2:password2
    user3:password3
  expire: False

The directive contains two associative array keys. The list key will contain a block that lists the account names and the associated passwords that you would like to assign. The expire key is a boolean that determines whether the password must be changed at first boot or not. This defaults to “True”.

One thing to note is that you can set a password to “RANDOM” or “R”, which will generate a random password and write it to /var/log/cloud-init-output.log. Keep in mind that this file is accessible to any user on the system, so it is not any more secure.

Write Files to the Disk

In order to write files to the disk, you should use the write_files directive.

Each file that should be written is represented by a list item under the directive. These list items will be associative arrays that define the properties of each file.

The only required keys in this array are path, which defines where to write the file, and content, which contains the data you would like the file to contain.

The available keys for configuring a write_files item are:

  • path: The absolute path to the location on the filesystem where the file should be written.
  • content: The content that should be placed in the file. For multi-line input, you should start a block by using a pipe character (|) on the “content” line, followed by an indented block containing the content. Binary files should include “!!binary” and a space prior to the pipe character.
  • owner: The user account and group that should be given ownership of the file. These should be given in the “username:group” format.
  • permissions: The octal permissions set that should be given for this file.
  • encoding: An optional encoding specification for the file. This can be “b64” for Base64 files, “gzip” for Gzip compressed files, or “gz+b64” for a combination. Leaving this out will use the default, conventional file type.

For example, we could write a file to /test.txt with the contents:

Here is a line.
Another line is here.

The portion of the cloud-config that would accomplish this would look like this:

#cloud-config
write_files:
  - path: /test.txt
    content: |
      Here is a line.
      Another line is here.

Update or Install Packages on the Server

To manage packages, there are a few related settings and directives to keep in mind.

To update the apt database on Debian-based distributions, you should set the package_update directive to “true”. This is synonymous with calling apt-get update from the command line.

The default value is actually “true”, so you only need to worry about this directive if you wish to disable it:

#cloud-config
package_update: false

If you wish to upgrade all of the packages on your server after it boots up for the first time, you can set the package_upgrade directive. This is akin to a apt-get upgrade executed manually.

This is set to “false” by default, so make sure you set this to “true” if you want the functionality:

#cloud-config
package_upgrade: true

To install additional packages, you can simply list the package names using the “packages” directive. Each list item should represent a package. Unlike the two commands above, this directive will function with either yum or apt managed distros.

These items can take one of two forms. The first is simply a string with the name of the package. The second form is a list with two items. The first item of this new list is the package name, and the second item is the version number:

#cloud-config
packages:
  - package_1
  - package_2
  - [package_3, version_num]

The “packages” directive will set apt_update to true, overriding any previous setting.

Configure SSH Keys for User Accounts and the SSH Daemon

You can manage SSH keys in the users directive, but you can also specify them in a dedicated ssh_authorized_keys section. These will be added to the first defined user’s authorized_keys file.

This takes the same general format of the key specification within the users directive:

#cloud-config
ssh_authorized_keys:
  - ssh_key_1
  - ssh_key_2

You can also generate the SSH server’s private keys ahead of time and place them on the filesystem. This can be useful if you want to give your clients the information about this server beforehand, allowing it to trust the server as soon as it comes online.

To do this, we can use the ssh_keys directive. This can take the key pairs for RSA, DSA, or ECDSA keys using the rsa_private, rsa_public, dsa_private, dsa_public, ecdsa_private, and ecdsa_public sub-items.

Since formatting and line breaks are important for private keys, make sure to use a block with a pipe character (|) when specifying these. Also, you must include the BEGIN and END marker lines for your keys to be valid.

#cloud-config
ssh_keys:
  rsa_private: |
    -----BEGIN RSA PRIVATE KEY-----
    your_rsa_private_key
    -----END RSA PRIVATE KEY-----

  rsa_public: your_rsa_public_key

Set Up Trusted CA Certificates

If your infrastructure relies on keys signed by an internal certificate authority, you can set up your new machines to trust your CA cert by injecting the certificate information. For this, we use the ca-certs directive.

This directive has two sub-items. The first is remove-defaults, which, when set to true, will remove all of the normal certificate trust information included by default. This is usually not needed and can lead to some issues if you don’t know what you are doing, so use with caution.

The second item is trusted, which is a list, each containing a trusted CA certificate:

#cloud-config
ca-certs:
  remove-defaults: true
  trusted:
    - |
      -----BEGIN CERTIFICATE-----
      your_CA_cert
      -----END CERTIFICATE-----

Configure resolv.conf to Use Specific DNS Servers

If you have configured your own DNS servers that you wish to use, you can manage your server’s resolv.conf file by using the resolv_conf directive. This currently only works for RHEL-based distributions.

Under the resolv_conf directive, you can manage your settings with the nameservers, searchdomains, domain, and options items.

The nameservers directive should take a list of the IP addresses of your name servers. The searchdomains directive takes a list of domains and subdomains to search in when a user specifies a host but not a domain.

The domain sets the domain that should be used for any unresolvable requests, and options contains a set of options that can be defined in the resolv.conf file.

If you are using the resolv_conf directive, you must ensure that the manage-resolv-conf directive is also set to true. Not doing so will cause your settings to be ignored:

#cloud-config
manage-resolv-conf: true
resolv_conf:
  nameservers:
    - 'first_nameserver'
    - 'second_nameserver'
  searchdomains:
    - first.domain.com
    - second.domain.com
  domain: domain.com
  options:
    option1: value1
    option2: value2
    option3: value3

Run Arbitrary Commands for More Control

If none of the managed actions that cloud-config provides works for what you want to do, you can also run arbitrary commands. You can do this with the runcmd directive.

This directive takes a list of items to execute. These items can be specified in two different ways, which will affect how they are handled.

If the list item is a simple string, the entire item will be passed to the sh shell process to run.

The other option is to pass a list, each item of which will be executed in a similar way to how execve processes commands. The first item will be interpreted as the command or script to run, and the following items will be passed as arguments for that command.

Most users can use either of these formats, but the flexibility enables you to choose the best option if you have special requirements. Any output will be written to standard out and to the /var/log/cloud-init-output.log file:

#cloud-config
runcmd:
  - [ sed, -i, -e, 's/here/there/g', some_file]
  - echo "modified some_file"
  - [cat, some_file]

Shutdown or Reboot the Server

In some cases, you’ll want to shutdown or reboot your server after executing the other items. You can do this by setting up the power_state directive.

This directive has four sub-items that can be set. These are delay, timeout, message, and mode.

The delay specifies how long into the future the restart or shutdown should occur. By default, this will be “now”, meaning the procedure will begin immediately. To add a delay, users should specify, in minutes, the amount of time that should pass using the +<num_of_mins> format.

The timeout parameter takes a unit-less value that represents the number of seconds to wait for cloud-init to complete before initiating the delay countdown.

The message field allows you to specify a message that will be sent to all users of the system. The mode specifies the type of power event to initiate. This can be “poweroff” to shut down the server, “reboot” to restart the server, or “halt” to let the system decide which is the best action (usually shutdown):

#cloud-config
power_state:
  timeout: 120
  delay: "+5"
  message: Rebooting in five minutes. Please save your work.
  mode: reboot

Troubleshooting

If a cloud-init script behaves unexpectedly, check the captured console output in /var/log/cloud-init-output.log on the deployed machine.

Conclusion

The above examples represent some of the more common configuration items available when running a cloud-config file. There are additional capabilities that we did not cover in this guide. These include configuration management setup, configuring additional repositories, and even registering with an outside URL when the server is initialized.

You can find out more about some of these options by checking the /usr/share/doc/cloud-init/examples directory. For a practical guide to help you get familiar with cloud-config files, you can follow our tutorial on how to use cloud-config to complete basic server configuration here.

References

vRO JavaScript Style Guide [CB10096]

  1. Introduction
  2. JavaScript Language Rules
    1. var
    2. Constants
    3. Semicolons
    4. Nested functions
    5. Function Declarations Within Blocks
    6. Exceptions
    7. Custom exceptions
    8. Standards features
    9. Wrapper objects for primitive types
    10. delete
    11. JSON.parse()
    12. with() {}
    13. this
    14. for-in loop
    15. Multiline string literals
    16. Array and Object literals
    17. Modifying prototypes of built-in objects
  3. JavaScript Style Rules
    1. Naming
    2. Deferred initialization
    3. Code formatting
    4. Parentheses
    5. Strings
    6. Comments
    7. Tips and Tricks
  4. vRO Element Naming Rules
    1. Actions
    2. Workflows
    3. Action Modules
    4. Resource Elements and Configuration Elements
    5. Attributes inside Configuration Elements

Introduction

If you are new to vRealize Orchestrator and are looking for some code style references, this post can help you with some basic guidelines to write the perfect JavaScript code in your vRO Workflows and Actions. These are industry standards preferred by almost every organization. Keep in mind that this guide won't help you with writing your first vRO code; instead, it can be used as a style reference while writing your vRO JavaScript code. In short, it helps you achieve 7 things:

  1. Code that’s easier to read
  2. Provide clear guidelines when writing code
  3. Predictable variable and function names
  4. Improve team’s efficiency to collaborate
  5. Saves time and makes communication between teams more efficient
  6. Reduces redundant work
  7. Enables each team member to share the same clear messaging with potential customers

vRealize Orchestrator uses JavaScript 1.7 as its classic language of choice for all the automation tasks that get executed inside the Mozilla Rhino 1.7R4 engine, along with some limitations listed here. This style guide is a list of dos and don'ts for the JavaScript programs that you write inside vRO. Let's have a look.

JavaScript Language Rules

var

Always declare variables with var, unless it’s a const.

Constants

  • Use NAMES_LIKE_THIS for constant values, declared with the const keyword, as in the example below.
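
A minimal illustration of both rules (the names here are made up):

const MAX_RETRY_COUNT = 3;             // constant: NAMES_LIKE_THIS, declared with const
var retryCount = 0;                    // everything else: declared with var
var hostName = "esxi01.example.com";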

Semicolons

Always use semicolons.

Why?

JavaScript requires statements to end with a semicolon, except when it thinks it can safely infer their existence. In each of these examples, a function declaration or object or array literal is used inside a statement. The closing brackets are not enough to signal the end of the statement. JavaScript never ends a statement if the next token is an infix or bracket operator.

This has really surprised people, so make sure your assignments end with semicolons.

Clarification: Semicolons and functions

Semicolons should be included at the end of function expressions, but not at the end of function declarations. The distinction is best illustrated with an example:

var foo = function() {
  return true;
};  // semicolon here.

function foo() {
  return true;
}  // no semicolon here.

Nested functions

Yes

Nested functions can be very useful, for example in the creation of continuations and for the task of hiding helper functions. Feel free to use them.

Function Declarations Within Blocks

No

Do not do this:

if (x) {
  function foo() {}
}

ECMAScript only allows for Function Declarations in the root statement list of a script or function. Instead use a variable initialized with a Function Expression to define a function within a block:

if (x) {
  var foo = function() {};
}

Exceptions

Yes

You basically can’t avoid exceptions. Go for it.

Custom exceptions

Yes

Sometimes it becomes really hard to troubleshoot inside vRealize Orchestrator, and custom exceptions can certainly help you there. Feel free to use custom exceptions when appropriate, logging them with System.error() for soft errors or raising them with a throw statement for hard errors.
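A short sketch of the soft/hard distinction described above (the helper call and messages are hypothetical):

var names;
try {
    names = getAllVmNames();   // hypothetical helper that may fail
} catch (e) {
    System.error('Soft error, continuing anyway: ' + e);   // logged, execution continues
}

if (!names || names.length === 0) {
    throw 'Hard error: no VM names were returned';   // stops the current workflow item
}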

Standards features

Always preferred over non-standards features

For maximum portability and compatibility, always prefer standards features over non-standards features (e.g., string.charAt(3) over string[3]).

Wrapper objects for primitive types

No

There’s no reason to use wrapper objects for primitive types, plus they’re dangerous:

var x = new Boolean(false);
if (x) {
  alert('hi');  // Shows 'hi'.
}

Don’t do it!

However type casting is fine.

var x = Boolean(0);
if (x) {
  alert('hi');  // This will never be alerted.
}
typeof Boolean(0) == 'boolean';
typeof new Boolean(0) == 'object';

This is very useful for casting things to number, string, and boolean.

delete

Prefer this.foo = null.

var dispose = function() {
  this.property_ = null;
};

Instead of:

var dispose = function() {
  delete this.property_;
};

Changing the number of properties on an object is much slower than reassigning the values. The delete keyword should be avoided except when it is necessary to remove a property from an object’s iterated list of keys, or to change the result of if (key in obj).

JSON.parse()

You can always use JSON and read the result using JSON.parse().

var userInfo = JSON.parse(feed);
var email = userInfo['email'];

With JSON.parse, invalid JSON will cause an exception to be thrown.
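Since invalid JSON throws, it is common to guard the call; a minimal sketch assuming feed is a JSON string input:

var userInfo;
try {
    userInfo = JSON.parse(feed);
} catch (e) {
    throw 'Input is not valid JSON: ' + e;
}
var email = userInfo['email'];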

with() {}

No

Using with clouds the semantics of your program. Because the object of the with can have properties that collide with local variables, it can drastically change the meaning of your program. For example, what does this do?

with (foo) {
  var x = 3;
  return x;
}

Answer: anything. The local variable x could be clobbered by a property of foo and perhaps it even has a setter, in which case assigning 3 could cause lots of other code to execute. Don’t use with.

this

Only in object constructors, methods, and in setting up closures

The semantics of this can be tricky. At times it refers to the global object (in most places), the scope of the caller (in eval), a newly created object (in a constructor), or some other object (if function was call()ed or apply()ed).

Because this is so easy to get wrong, limit its use to those places where it is required (a short sketch follows this list):

  • in constructors
  • in methods of objects (including in the creation of closures)
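A short sketch of the allowed uses (hypothetical constructor and method names):

function Host(name) {
    this.name = name;                    // constructor: this is the newly created object
}

Host.prototype.describe = function() {
    return 'Host: ' + this.name;         // method: this is the object the method is called on
};

var h = new Host('esxi-01');
System.log(h.describe());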

for-in loop

Only for iterating over keys in an object/map/hash

for-in loops are often incorrectly used to loop over the elements in an Array. This is however very error prone because it does not loop from 0 to length - 1 but over all the present keys in the object and its prototype chain. Here are a few cases where it fails:

function printArray(arr) {
  for (var key in arr) {
    print(arr[key]);
  }
}

printArray([0,1,2,3]);  // This works.

var a = new Array(10);
printArray(a);  // This is wrong.

a = [0,1,2,3];
a.buhu = 'wine';
printArray(a);  // This is wrong again.

a = new Array;
a[3] = 3;
printArray(a);  // This is wrong again.

Always use normal for loops when using arrays.

function printArray(arr) {
  var l = arr.length;
  for (var i = 0; i < l; i++) {
    print(arr[i]);
  }
}

Multiline string literals

No

Do not do this:

var myString = 'A rather long string of English text, an error message \
                actually that just keeps going and going -- an error \
                message to make the Energizer bunny blush (right through \
                those Schwarzenegger shades)! Where was I? Oh yes, \
                you\'ve got an error and all the extraneous whitespace is \
                just gravy.  Have a nice day.';

The whitespace at the beginning of each line can’t be safely stripped at compile time; whitespace after the slash will result in tricky errors.

Use string concatenation instead:

var myString = 'A rather long string of English text, an error message ' +
    'actually that just keeps going and going -- an error ' +
    'message to make the Energizer bunny blush (right through ' +
    'those Schwarzenegger shades)! Where was I? Oh yes, ' +
    'you\'ve got an error and all the extraneous whitespace is ' +
    'just gravy.  Have a nice day.';

Array and Object literals

Yes

Use Array and Object literals instead of Array and Object constructors.

Array constructors are error-prone due to their arguments.

// Length is 3.
var a1 = new Array(x1, x2, x3);

// Length is 2.
var a2 = new Array(x1, x2);

// If x1 is a number and it is a natural number the length will be x1.
// If x1 is a number but not a natural number this will throw an exception.
// Otherwise the array will have one element with x1 as its value.
var a3 = new Array(x1);

// Length is 0.
var a4 = new Array();

Because of this, if someone changes the code to pass 1 argument instead of 2 arguments, the array might not have the expected length.

To avoid these kinds of weird cases, always use the more readable array literal.

var a = [x1, x2, x3];
var a2 = [x1, x2];
var a3 = [x1];
var a4 = [];

Object constructors don’t have the same problems, but for readability and consistency object literals should be used.

var o = new Object();

var o2 = new Object();
o2.a = 0;
o2.b = 1;
o2.c = 2;
o2['strange key'] = 3;

Should be written as:

var o = {};

var o2 = {
  a: 0,
  b: 1,
  c: 2,
  'strange key': 3
};

Modifying prototypes of built-in objects

Modifying builtins like Object.prototype and Array.prototype is strictly forbidden in vRO. Doing so will result in an error.

JavaScript Style Rules

Naming

In general, use functionNamesLikeThis, variableNamesLikeThis, EnumNamesLikeThis, and CONSTANT_VALUES_LIKE_THIS.

Method and function parameter

Optional function arguments start with opt_.
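For illustration (hypothetical function):

function restartService(serviceName, opt_timeoutSeconds) {
    var timeout = opt_timeoutSeconds || 60;   // optional argument with a default value
    System.log('Restarting ' + serviceName + ' with timeout ' + timeout + 's');
}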

Getters and Setters

EcmaScript 5 getters and setters for properties are discouraged. However, if they are used, then getters must not change observable state.

/**
 * WRONG -- Do NOT do this.
 */
var foo = { get next() { return this.nextId++; } };

Accessor functions

Getters and setters methods for properties are not required. However, if they are used, then getters must be named getFoo() and setters must be named setFoo(value). (For boolean getters, isFoo() is also acceptable, and often sounds more natural.)
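A minimal sketch (hypothetical object and properties):

function Datastore(name) {
    this.name_ = name;
    this.mounted_ = false;
}

Datastore.prototype.getName = function() { return this.name_; };
Datastore.prototype.setName = function(value) { this.name_ = value; };
Datastore.prototype.isMounted = function() { return this.mounted_; };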

Deferred initialization

OK

It isn’t always possible to initialize variables at the point of declaration, so deferred initialization is fine.
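For example (the helper is hypothetical):

var licenseInfo;   // declared up front, value not yet known

if (licenseKey) {
    licenseInfo = decodeLicense(licenseKey);   // initialized later, once the key is available
}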

Code formatting

Curly Braces

Because of implicit semicolon insertion, always start your curly braces on the same line as whatever they’re opening. For example:

if (something) {
  // ...
} else {
  // ...
}

Array and Object Initializers

Single-line array and object initializers are allowed when they fit on a line:

var arr = [1, 2, 3];  // No space after [ or before ].
var obj = {a: 1, b: 2, c: 3};  // No space after { or before }.

Multiline array initializers and object initializers are indented 2 spaces, with the braces on their own line, just like blocks.
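For example (hypothetical values):

var hostsByDatacenter = {
  'dc-east': ['esxi-01', 'esxi-02'],
  'dc-west': ['esxi-03']
};

var accessModes = [
  'Admin',
  'ReadOnly',
  'NoAccess'
];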

Indenting wrapped lines

Except for array literals, object literals, and anonymous functions, all wrapped lines should be indented either left-aligned to a sibling expression above, or four spaces (not two spaces) deeper than a parent expression (where “sibling” and “parent” refer to parenthesis nesting level).

someWonderfulHtml = '' +
                    getEvenMoreHtml(someReallyInterestingValues, moreValues,
                                    evenMoreParams, 'a duck', true, 72,
                                    slightlyMoreMonkeys(0xfff)) +
                    '';

thisIsAVeryLongVariableName =
    hereIsAnEvenLongerOtherFunctionNameThatWillNotFitOnPrevLine();

thisIsAVeryLongVariableName = siblingOne + siblingTwo + siblingThree +
    siblingFour + siblingFive + siblingSix + siblingSeven +
    moreSiblingExpressions + allAtTheSameIndentationLevel;

thisIsAVeryLongVariableName = operandOne + operandTwo + operandThree +
    operandFour + operandFive * (
        aNestedChildExpression + shouldBeIndentedMore);

someValue = this.foo(
    shortArg,
    'Some really long string arg - this is a pretty common case, actually.',
    shorty2,
    this.bar());

if (searchableCollection(allYourStuff).contains(theStuffYouWant) &&
    !ambientNotification.isActive() && (client.isAmbientSupported() ||
                                        client.alwaysTryAmbientAnyways())) {
  ambientNotification.activate();
}

Blank lines

Use newlines to group logically related pieces of code. For example:

doSomethingTo(x);
doSomethingElseTo(x);
andThen(x);

nowDoSomethingWith(y);

andNowWith(z);

Binary and Ternary Operators

Always put the operator on the preceding line. Otherwise, line breaks and indentation follow the same rules as in other Google style guides. This operator placement was initially agreed upon out of concerns about automatic semicolon insertion. In fact, semicolon insertion cannot happen before a binary operator, but new code should stick to this style for consistency.

var x = a ? b : c;  // All on one line if it will fit.

// Indentation +4 is OK.
var y = a ?
    longButSimpleOperandB : longButSimpleOperandC;

// Indenting to the line position of the first operand is also OK.
var z = a ?
        moreComplicatedB :
        moreComplicatedC;

This includes the dot operator.

var x = foo.bar().
    doSomething().
    doSomethingElse();

Parentheses

Only where required

Use sparingly and in general only where required by the syntax and semantics.

Never use parentheses for unary operators such as delete, typeof, and void, or after keywords such as return and throw, as well as others (case, in, or new).

Strings

For consistency, single-quotes (‘) are preferred to double-quotes (“). This is helpful when creating strings that include HTML:

var msg = 'This is some message';
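For example, single quotes avoid escaping when the string contains double-quoted HTML attributes (hypothetical markup):

var html = '<div class="status">Provisioning completed</div>';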

Comments

Use JSDoc if you want.
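A short sketch in the JSDoc header style used for the vRO actions in this post (the action itself is hypothetical):

/**
 * @version 0.0.0
 * @description Returns the power state of the given virtual machine
 *
 * @param {VC:VirtualMachine} vm
 *
 * @outputType string
 *
 */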

Tips and Tricks

JavaScript tidbits

True and False Boolean Expressions

The following are all false in boolean expressions:

  • null
  • undefined
  • '' the empty string
  • 0 the number

But be careful, because these are all true:

  • '0' the string
  • [] the empty array
  • {} the empty object

This means that instead of this:

while (x != null) {

you can write this shorter code (as long as you don’t expect x to be 0, or the empty string, or false):

while (x) {

And if you want to check a string to see if it is null or empty, you could do this:

if (y != null && y != '') {

But this is shorter and nicer:

if (y) {

Caution: There are many unintuitive things about boolean expressions. Here are some of them:

  • Boolean('0') == true
    '0' != true
  • 0 != null
    0 == []
    0 == false
  • Boolean(null) == false
    null != true
    null != false
  • Boolean(undefined) == false
    undefined != true
    undefined != false
  • Boolean([]) == true
    [] != true
    [] == false
  • Boolean({}) == true
    {} != true
    {} != false

Conditional (Ternary) Operator (?:)

Instead of this:

if (val) {
  return foo();
} else {
  return bar();
}

you can write this:

return val ? foo() : bar();

&& and ||

These binary boolean operators are short-circuited, and evaluate to the last evaluated term.

“||” has been called the ‘default’ operator, because instead of writing this:

function foo(opt_win) {
  var win;
  if (opt_win) {
    win = opt_win;
  } else {
    win = window;
  }
  // ...
}

you can write this:

function foo(opt_win) {
  var win = opt_win || window;
  // ...
}

“&&” is also useful for shortening code. For instance, instead of this:

if (node) {
  if (node.kids) {
    if (node.kids[index]) {
      foo(node.kids[index]);
    }
  }
}

you could do this:

if (node && node.kids && node.kids[index]) {
  foo(node.kids[index]);
}

or this:

var kid = node && node.kids && node.kids[index];
if (kid) {
  foo(kid);
}

vRO Element Naming Rules

Actions

should be named like actionNameLikeThis and must start with a verb like get, create, set, delete, fetch, put, call, and so forth. Then, describe what the action is doing.
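For example, hypothetical action names that follow this rule: getVmNames, createEsxiLocalUser, setHostAccessMode, deleteSnapshot.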

Workflows

should be a short statement which briefly tells what the Workflow is doing.

Action Modules

naming should follow the pattern com.[company].library.[component].[interface].[parent group].[child group] (see the example after the field descriptions)

Where

[company] = company name as a single word

library – Optional

[component] = Examples are: NSX, vCAC, Infoblox, Activedirectory, Zerto.

[interface] – Optional – Examples are: REST, SOAP, PS. Omit this if you are using the vcenter or vcac plugins.

[parent group] – Optional – Parent group i.e. Edges, Entities, Networks, vm.

[child group] – Optional – A group containing collections of child actions that relate to the parent group.
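Putting the fields together, a hypothetical module name following this pattern could be com.acme.library.nsx.rest.edges.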

Resource Elements and Configuration Elements

A Resource Element’s name is simply the filename at the time of upload, so it should be clear enough on its own.

Configuration Elements are used for various purposes, such as password containers (credential store), environment-specific variables, location information, or global variables get/set by various workflows. Hence, the name should define the element’s purpose.

Attributes inside Configuration Elements

attribute names should have a prefix like DC_ or CONF_. When mapped to a variable inside a workflow, keep that variable’s name the same. When using it in scripts, pass its value to a local variable.

In this example, an attribute DC_EAST_LOCATION inside a Configuration Element (named Location Variables) is mapped to a variable DC_EAST_LOCATION in a workflow, and this variable is used in a scriptable task.

//CORRECT WAY
var location = DC_EAST_LOCATION; //pass value to a local variable
System.log("Current location: " + location);

//WRONG WAY
System.log("Current location: " + DC_EAST_LOCATION);

Code Obfuscation in vRO [CB10095]

This will be a very quick post. While reading about code security, I came across the concept of “security through obscurity”, and when I dug in, I realized code obfuscation is a great way to achieve it. But then I wondered how vRO handles obfuscated code, so I tried it out; let me give you a quick demo.

What is Code Obfuscation?

Before going to vRO, let’s briefly understand what exactly code obfuscation is. Obfuscation is the act of creating code that is difficult for humans to understand. It may use needlessly roundabout expressions to compose statements. Programmers may deliberately obfuscate code to conceal its purpose or its logic or implicit values embedded in it, primarily, in order to prevent tampering, or even to create a challenge for someone reading the source code.

Example of a small obfuscated piece of code

Let’s try inside vRO

We have a vRO action which decodes vSphere license details using vRO Scripting APIs.

/**
 * @version 0.0.0
 * @description Decodes the 'Product Name' and 'Product Version' from a vSphere license string
 *
 * @param {VC:SdkConnection} vCenterConnection
 * @param {string} licenseKey
 *
 * @outputType string
 *
 */
// Validate that we have a license key made up of 5 five-character sections delimited by '-'
var licenseParts = licenseKey.split('-');
if (licenseParts.length != 5) {
    throw("license key is not complete");
}
for (var part in licenseParts) {
    if (licenseParts[part].length < 5) {
        throw("license key is invalid. Section: '" + licenseParts[part] + "' of '" + licenseKey + "' is too short");
    }
    if (licenseParts[part].length > 5) {
        throw("license key is invalid. Section: '" + licenseParts[part] + "' of '" + licenseKey + "' is too long");
    }
}
var licenseInfo = vCenterConnection.licenseManager.decodeLicense(licenseKey);
System.log("License Name: " + licenseInfo.name);
var licenseProps = licenseInfo.properties;
for (var lp in licenseProps) {
    if (licenseProps[lp].key === "ProductName") {
        System.log("Product Name: " + licenseProps[lp].value);
    } else if (licenseProps[lp].key === "ProductVersion") {
        System.log("Product Version: " + licenseProps[lp].value);
    }
}
/* Example Output
License Name: vSphere 7 Enterprise Plus
Product Name: VMware ESX Server
Product Version: 7.0
*/

It takes 2 inputs: select a vCenter and provide a licenseKey in this format: XXXXQ-YYTYA-ZZ1G8-0N8K2-05D6J.

vRO Input form

Let’s run it. The output will be similar to

vRO log

Run Obfuscated code

To obfuscate my JavaScript code, I used this tool available at https://obfuscator.io/

Source: https://obfuscator.io/

After conversion, I copied the code.

Converted with https://obfuscator.io/

And it looks like this:

var _0x30dc97=_0x4adc;function _0x48f4(){var _0x282c41=['key','3166SRMjmj','70317gluhjb','\x27\x20is\x20too\x20short','Product\x20Version:\x20','length','decodeLicense','4140NJYHMY','log','94BxAcqD','license\x20key\x20is\x20not\x20complete','ProductVersion','properties','\x27\x20of\x20\x27','licenseManager','split','name','4670613YLePgL','license\x20key\x20is\x20invalid.\x20\x20Section:\x20\x27','6764730tJimsy','6487908mPAsSG','ProductName','value','License\x20Name:\x20','Product\x20Name:\x20','56MtIDBs','1323441OmRImv','4038660gdcllK','\x27\x20is\x20too\x20long'];_0x48f4=function(){return _0x282c41;};return _0x48f4();}(function(_0x38b91a,_0x218a87){var _0x202c0f=_0x4adc,_0x3b562e=_0x38b91a();while(!![]){try{var _0x3f0cce=parseInt(_0x202c0f(0x85))/0x1*(parseInt(_0x202c0f(0x70))/0x2)+-parseInt(_0x202c0f(0x78))/0x3+-parseInt(_0x202c0f(0x7b))/0x4+-parseInt(_0x202c0f(0x7a))/0x5+parseInt(_0x202c0f(0x82))/0x6+-parseInt(_0x202c0f(0x81))/0x7*(-parseInt(_0x202c0f(0x80))/0x8)+parseInt(_0x202c0f(0x86))/0x9*(parseInt(_0x202c0f(0x6e))/0xa);if(_0x3f0cce===_0x218a87)break;else _0x3b562e['push'](_0x3b562e['shift']());}catch(_0x2ca51c){_0x3b562e['push'](_0x3b562e['shift']());}}}(_0x48f4,0xcf10d));var licenseParts=licenseKey[_0x30dc97(0x76)]('-');if(licenseParts['length']!=0x5)throw _0x30dc97(0x71);for(var part in licenseParts){if(licenseParts[part]['length']<0x5)throw _0x30dc97(0x79)+licenseParts[part]+_0x30dc97(0x74)+licenseKey+_0x30dc97(0x87);if(licenseParts[part][_0x30dc97(0x89)]>0x5)throw _0x30dc97(0x79)+licenseParts[part]+_0x30dc97(0x74)+licenseKey+_0x30dc97(0x83);}var licenseInfo=vCenterConnection[_0x30dc97(0x75)][_0x30dc97(0x8a)](licenseKey);System[_0x30dc97(0x6f)](_0x30dc97(0x7e)+licenseInfo[_0x30dc97(0x77)]);function _0x4adc(_0x4d2f0a,_0x40aefd){var _0x48f4ef=_0x48f4();return _0x4adc=function(_0x4adcaf,_0x1f587e){_0x4adcaf=_0x4adcaf-0x6e;var _0x2861fc=_0x48f4ef[_0x4adcaf];return _0x2861fc;},_0x4adc(_0x4d2f0a,_0x40aefd);}var licenseProps=licenseInfo[_0x30dc97(0x73)];for(var lp in licenseProps){if(licenseProps[lp]['key']===_0x30dc97(0x7c))System['log'](_0x30dc97(0x7f)+licenseProps[lp][_0x30dc97(0x7d)]);else licenseProps[lp][_0x30dc97(0x84)]===_0x30dc97(0x72)&&System[_0x30dc97(0x6f)](_0x30dc97(0x88)+licenseProps[lp]['value']);}

This script is available at https://gist.github.com/imtrinity94/e172c126ee13f7c9804acd0bc886f5bf

I ran it in vRO by simply copy-pasting it into an action with the same input variables, and it worked. I know obfuscated code is still just plain JavaScript, but I had never tried it before and wanted to get a feel for how it behaves inside vRO.

vRO Action with obfuscated code

That’s it in this post. Thanks for reading.

Tip around System.log() for Big Workflows with Scriptable tasks and actions

  1. Introduction
  2. Concern
  3. Possible Solution
    1. Workflows
    2. Actions

Introduction

It’s quite evident that vRO workflows can grow huge for major provisioning tasks like hardware deployments, vApp deployments, or tenant provisioning, just to name a few. In a big environment, such workflows can come in large numbers and are often coupled with various other workflows, for example as nested workflows.

Concern

Now, the real pain starts when you have to debug them. For debugging, developers rely mostly on System.log() or System.debug() to know the statuses, variable values, and so on up to a particular point, which is really great.

Personally, I even use a start and a completion log for every scriptable task in a workflow. This pinpoints the scriptable task that started but could not complete for whatever reason and therefore never printed the end log. Let me explain: say there is a workflow (Create a VM) with a scriptable task (Get All VM Names). In this scriptable task, I would add something like

System.log("\"Get all VM Names\" script started");

and at the end of this script

System.log("\"Get all VM Names\" script completed");

Imagine this in a very large workflow; it can really help. But I knew this needed improvement. Sometimes you change the content of a scriptable task, its purpose changes, and so you rename it, for example from Get All VM Names to Get All VM Names and IPs. In such cases, you also have to update those start and end log statements. And I hate that!

Possible Solution

I referred to the vRO community (link) and found one interesting way out.

FOR WORKFLOWS

We can use this,

System.log("\""+System.currentWorkflowItem().getDisplayName()+"\"  script started");
System.log("\""+System.currentWorkflowItem().getDisplayName()+"\"  script completed");

This will automatically print the updated name of the scriptable task.

vRO Workflow Screenshot with scriptable tasks | Image by Author

You can also print the item number if you want, using this call:

System.currentWorkflowItem().getName();
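If you want both the item number and the display name in a single line, a small variation of the statements above:

System.log("[" + System.currentWorkflowItem().getName() + "] \"" + System.currentWorkflowItem().getDisplayName() + "\" script started");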

FOR ACTIONS (OPTIONAL)

Now, this is a little tricky. We can’t do exactly the same for actions. However, when an action runs as part of a workflow, it behaves like a workflow item and therefore has an item number, so we can print its start and end as well. We can do so by adding the script below to your actions.

if (System.currentWorkflowItem().getDisplayName()) {
    System.log("\"" + System.currentWorkflowItem().getDisplayName() + "\" action started");
}
if (System.currentWorkflowItem().getDisplayName()) {
    System.log("\"" + System.currentWorkflowItem().getDisplayName() + "\" action ended");
}
A vRO Action | Image by Author
vRO Workflow Screenshot with scriptable tasks and action | Image by Author

Here, we added a condition just to avoid printing a log line with an empty name ("") when you run the action on its own, outside a workflow.

A vRO Action | Image by Author

List of available Node.js modules in vRO

Here is the list of Node.js modules that come preinstalled in vRO 8.3 when you select Node.js 12 as the runtime.

Module        Version
node          12.18.3
v8            7.8.279.23-node.39
uv            1.38.0
zlib          1.2.11
brotli        1.0.7
ares          1.16.0
modules       72
nghttp2       1.41.0
napi          6
llhttp        2.0.4
http_parser   2.9.3
openssl       1.1.1g
cldr          37.0
icu           67.1
tz            2019c
unicode       13.0
process
Note: http_parser is removed from later versions of vRO Node.js Runtime

Reference Script

In Node.js, the built-in process.versions property of the process module returns an object listing the version strings of Node.js and its dependencies.

exports.handler = (context, inputs, callback) => {
    const process = require('process');
    var no_versions = 0;
    var versions = process.versions;
    console.log("|Module|Version|");
    console.log("|------|-------|");
    for (var key in versions) {
        // Print each module name and its version as a markdown table row
        console.log("|" + key + "|" + versions[key] + "|");
        no_versions++;
    }
    console.log("Total no of values available = " + no_versions);
    callback(undefined, {
        status: "done"
    });
}

Let me know if you have an even better way to get the list of modules from vRO. See you.