Many organizations use vRO for host provisioning. Various hardware vendors provide vRO scripting APIs via plugins or REST APIs to manage and provision bare-metal servers. Post-provisioning, there is always a possibility that you will want to access your ESXi host from an account other than root, for reasons such as security restrictions or limited access. In that case, the best way is to create a fresh account using vRO with the kind of access mode (let's call it a role) that suits your needs. In this post, we will see how to create an ESXi local user account using the vRO Scripting API.
Classes & Methods
As shown below, we have used the following classes and methods to retrieve existing accounts, create, update, and delete accounts, and change the access mode (role) of those accounts.
/**
*
* @version 0.0.0
*
* @param {VC:HostSystem} host
* @param {string} localUserName
* @param {SecureString} localUserPassword
* @param {string} accessMode
* @param {string} localUserDescription
*
* @outputType void
*
*/
function createEsxiLocalUser(host, localUserName, localUserPassword, accessMode, localUserDescription) {
if(!host) throw "host parameter not set";
if(!localUserName || !localUserPassword) throw "Either username or password parameter not set";
if(!localUserDescription) localUserDescription = "***Account created using vRO***";
if(localUserDescription.indexOf(localUserPassword) != -1) throw 'Weak Credentials! Avoid putting password string in description';
// Retrieve all existing system and custom user accounts (can be used to check for duplicates before creating)
var arrExistingLocalusers = host.configManager.hostAccessManager.retrieveHostAccessControlEntries();
var accountSpecs = new VcHostAccountSpec(localUserName,localUserPassword,localUserDescription);
host.configManager.accountManager.createUser(accountSpecs);
switch(accessMode){
case 'Admin': //Full access rights
host.configManager.hostAccessManager.changeAccessMode(localUserName,false,VcHostAccessMode.accessAdmin);
break;
case 'ReadOnly': //See details of objects, but not make changes
host.configManager.hostAccessManager.changeAccessMode(localUserName,false,VcHostAccessMode.accessReadOnly);
break;
case 'NoAccess': //Used for restricting granted access
host.configManager.hostAccessManager.changeAccessMode(localUserName,false,VcHostAccessMode.accessNoAccess);
break;
default: //No access assigned. Note: Role assigned is accessNone
host.configManager.hostAccessManager.changeAccessMode(localUserName,false,VcHostAccessMode.accessNone);
}
System.warn(" >>> Local user "+localUserName+" created with accessMode "+accessMode+" on host "+host.name);
}
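For reference, here is a minimal sketch of calling this action from a scriptable task; the module path com.examples.esxi and the input values are hypothetical.
// host: workflow input of type VC:HostSystem
// "com.examples.esxi" is a hypothetical module path; adjust it to wherever you store the action
System.getModule("com.examples.esxi").createEsxiLocalUser(host, "svc-backup", "S3cur3P@ss!", "ReadOnly", "Backup service account");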
Demo Video
In this demo, we see how the workflow is used to create a local account, testuser1, with which we then logged in to ESXi and checked that it has the required permissions.
vRO Package for CRUD operation
I have created a vRO Workflow to create and manage your ESXi local accounts directly from the input form itself. Please find the vRO package that contains the master workflow and associated actions.
TL;DR: The idea is to update resource elements from vRO itself, as no such functionality exists in the UI yet. Use the package and workflow to update resource elements directly from vRO quickly and conveniently. Link to the vRO package here.
We all use Resource Elements inside vRealize Orchestrator for various reasons. Resource Elements are external, independent objects that, once imported, can be used in vRO workflows and scripts.
Resource elements include image files, scripts, XML templates, HTML files, and so on. However, I have one concern about updating them: it is such an overhead. Although the official vRO docs clearly state that you can import, export, restore, update, and delete a resource element, in reality you have to update the object with an external editor, meaning a text editor for text-based files, an image editor for images, and so on.
Apart from images, the most common resource elements are text-based files, for example .sh, .ps1, .json, .xml, .html, .yaml, etc. In this post, to ease the process of updating resource elements, I have created a workflow with which you won't have to follow the long, boring cycle of exporting a resource element, editing it in Notepad++, and importing it back. Just run the workflow, select your resource element, and it will give you a text-editing area where you can update the resource element on the go.
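Under the hood, the update itself takes only a few lines of vRO scripting. A minimal sketch, assuming a resourceElement input of type ResourceElement and the edited text in a string variable newContent:
// resourceElement: workflow input of type ResourceElement
// newContent: string holding the edited text from the input form
var mime = new MimeAttachment();
mime.name = resourceElement.name;
mime.mimeType = "text/yaml"; // assumption: a text-based resource element
mime.content = newContent;
resourceElement.setContentFromMimeAttachment(mime);
System.log("Updated resource element: " + resourceElement.name);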
Prerequisites
Download the package from here and import it into your vRO.
Make sure you have the resource element you want to update.
Procedure
Run the workflow Edit Resource Element On-The-Go and select the resource element. Here, I've selected template.yaml, which I imported into vRO earlier.
By default, vRO picks a MIME type for your file. However, for text-based objects, you can set it to text/{fileExtension}. Here, I will set it to text/yaml so that I can see its content directly in vRO.
Go to the next section of the workflow, and you can see the current content of your resource element. Edit it as you like. Here, the file was empty, so I added this YAML code.
Click Run and wait for the workflow to complete.
Go back to the resource element to check if the content is there.
Now, say you want to change the flavor from large to small. Rerun the workflow, edit the flavor value, and click Run.
Your changes should be visible now.
Scope of Improvement
I am looking for a way to accept the version as an input, so that we can update the version of the resource element as we update its content. It seems the resourceElement.version attribute is not working, at least in vRO 8.x. Suggestions would be appreciated.
vRealize Orchestrator, a.k.a. vRO, a drag-and-drop automation tool, is quite an old product, developed and released as early as 2007 by Dunes Technologies, Switzerland. After VMware acquired Dunes, hundreds of releases and updates came year after year, and lots of improvements were made over time in the UI, backend technologies, security, multi-language support, etc. However, one thing that remains the same is its JavaScript engine. vRO uses the Mozilla Rhino engine 1.7R4, which was released in 2012.
In this post, my goal is to provide some insights into this Rhino engine, as information about it has almost vanished from the internet. I am still, however, hunting for the JavaScript engine that provides IntelliSense support in vRO 8.x. As you might have noticed, CTRL+SPACE shows options only available in recent versions of JavaScript, and you may be wondering how. My guess is that it targets the Node.js runtime.
What is Rhino Engine?
Rhino Engine converts JavaScript scripts into Java classes. It is intended for desktop or server-side applications, so there is no built-in support for the web browser objects commonly associated with JavaScript, which makes it very suitable for vRO. Rhino works in both compiled and interpreted mode. The Rhino engine got its name from the animal on the cover of the O'Reilly book about JavaScript published many years back.
The Rhino project was started at Netscape in the autumn of 1997. At the time, Netscape was planning to produce a version of Navigator written entirely in Java and so it needed an implementation of JavaScript written in Java. When Netscape stopped work on “Javagator,” as it was called, somehow Rhino escaped the axe (rumor had it that the executives “forgot” it existed). For a time, a couple of major companies (including Sun) licensed Rhino for use in their products and paid Netscape to do so, allowing work on Rhino to continue. Now Rhino is part of Mozilla’s open-source repository.
Released 10 years ago
Released on 2012-06-18, Rhino 1.7R4 is almost prehistoric by today's standards, and that has always been a point of discussion in the vRO community.
Release Notes of 1.7R4
Compatibility with JavaScript features
While trying to dig into its compatibility matrix with ECMAScript, I found Kangax's compat-table, which gives an excellent and detailed view of all the features that Rhino 1.7R4 supports. Click the link to learn more.
Rhino Compatibility Matrix with JavaScript
ECMAScript 5.1 Specifications
This document gives you very in-depth knowledge of ECMAScript 5.1, which vRO leverages, so you can understand the language better. Learn more at https://262.ecma-international.org/5.1.
You can download this document and read about all the fine details by yourself.
Currently, version 1.7R4 is not available on Mozilla's GitHub page. However, you may find some very old scripts that were written around the time of 1.7R4, as I could validate using the web archive. You can explore their GitHub repo here.
When writing scripts for workflows, you must consider the following limitations of the Mozilla Rhino implementation in Orchestrator.
When a workflow runs, the objects that pass from one workflow element to another are not JavaScript objects. What is passed from one element to the next is the serialization of a Java object that has a JavaScript image. As a consequence, you cannot use the whole JavaScript language, but only the classes that are present in the API Explorer. You cannot pass function objects from one workflow element to another.
Orchestrator runs the code in scriptable task elements in a context that is not the Rhino root context. Orchestrator transparently wraps scriptable task elements and actions into JavaScript functions, which it then runs. A scriptable task element that contains System.log(this); does not display the global object this in the same way as a standard Rhino implementation does.
You can only call actions that return nonserializable objects from scripting, and not from workflows. To call an action that returns a nonserializable object, you must write a scriptable task element that calls the action by using the System.getModule("ModuleName").action() method.
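For example, a scriptable task along these lines (the module and action names are hypothetical):
// Call an action that returns a nonserializable object from inside a scriptable task.
// "com.example.vcenter" and "getSdkConnection" are hypothetical names.
var sdkConnection = System.getModule("com.example.vcenter").getSdkConnection();
System.log("Retrieved connection: " + sdkConnection);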
Workflow validation does not check whether a workflow attribute type is different from an input type of an action or subworkflow. If you change the type of a workflow input parameter, for example from VIM3:VirtualMachine to VC:VirtualMachine, but you do not update any scriptable tasks or actions that use the original input type, the workflow validates but does not run.
Access to additional Java Classes
By default, vRealize Orchestrator restricts JavaScript access to a limited set of Java classes. If you require JavaScript access to a wider range of Java classes, you must set a vRealize Orchestrator system property.
Allowing the JavaScript engine full access to the Java virtual machine (JVM) presents potential security issues. Malformed or malicious scripts might have access to all the system components to which the user who runs the vRealize Orchestrator server has access. Therefore, by default the vRealize Orchestrator JavaScript engine can access only the classes in the java.util.* package.
If you require JavaScript access to classes outside of the java.util.* package, you can list in a configuration file the Java packages to which to allow JavaScript access. You then set the com.vmware.scripting.rhino-class-shutter-file system property to point to this file.
Procedure
Create a text configuration file to store the list of Java packages to which to allow JavaScript access. For example, to allow JavaScript access to all the classes in the java.net package and to the java.lang.Object class, add the following content to the file:
java.net.*
java.lang.Object
Enter a name for the configuration file.
Save the configuration file in a subdirectory of /data/vco/usr/lib/vco. The configuration file cannot be saved under another directory.
Log in to Control Center as root.
Click System Properties.
Click New.
In the Key text box, enter com.vmware.scripting.rhino-class-shutter-file.
In the Value text box, enter vco/usr/lib/vco/your_configuration_file_subdirectory.
In the Description text box, enter a description for the system property.
Click Add.
Click Save changes from the pop-up menu. A message indicates that you have saved successfully.
Wait for the vRealize Orchestrator server to restart.
See an implementation example of accessing external Java classes by BlueCat here. The code uses new java.lang.Long(0):
.
.
.
var testConfig = BCNProteusAPI.createAPIEntity(new java.lang.Long(0),configName,"","Configuration" );
var args = new Array( new java.lang.Long(0), testConfig );
configId = new java.lang.Long( BCNProteusAPI.call( profileName,"addEntity",args ));
System.log( "New configuration was created, id=" + configId );
var addTFTPGroupArgs = new Array( configId, "tftpGroupName1", "" );
var tftpGroupId = new java.lang.Long( BCNProteusAPI.call(profileName,"addTFTPGroup", addTFTPGroupArgs ) );
System.log( "New TFTP Group was created, id=" + tftpGroupId );
.
.
.
If you are new to vRO or coming from vRO 7.x, you may find restarting vRO a little tricky and might want to know how to restart it in an orderly way to avoid service failures, corrupt configuration, etc. Historically, the 7.x versions of vRO had a restart button in the VAMI interface which generally restarted vRO gracefully, but version 8.x dropped that ability. However, there are new ways, which we'll see today in this post.
Another way is to delete the pods directly using the commands below (use the full pod names shown by kubectl -n prelude get pods). After these commands, K8s will auto-deploy the pods again.
kubectl -n prelude delete pod vco-app
kubectl -n prelude delete pod orchestration-ui-app
Now monitor until both pods are fully recreated (3/3 and 1/1) using this command:
kubectl -n prelude get pods
When all services are listed as Running or Completed, vRealize Orchestrator is ready to use. Generally, pod creation may take 5-7 minutes.
via SSH – run deploy.sh
Log in to the vRO appliance using SSH or VMRC.
To stop all services, run /opt/scripts/deploy.sh --onlyClean
To shut down the appliance, run /opt/scripts/deploy.sh --shutdown
To start all services, run /opt/scripts/deploy.sh
Validate the deployment has finished by reviewing the output from the deploy.sh script
Once the command execution completes, ensure that all of the pods are running correctly with the following command: kubectl get pods --all-namespaces
When all services are listed as Running or Completed, vRealize Orchestrator is ready to use.
via Control Center
Go to Control Center.
Open System Properties and add a new property.
This will auto-restart vRO in about 2 minutes.
Older ways to restart vRO services
There are some older ways of restarting vRO and its services, for vRO 6.x and 7.x only. These are no longer valid for version 8.x and are here just for the record.
via SSH – restart services
Open an SSH session and run this command to restart the vRO services.
service vco-server restart && service vco-configurator restart
via Control Center – Startup Options
Open Control Center and go to Startup Options.
Click the Restart button.
via vRA VAMI – for embedded vRO
Open the vRA VAMI interface and go to vRA -> Orchestrator settings.
Select the Service type and click the Restart button.
That's all in this post. Please comment below if you use any method other than those mentioned here; I'll be happy to add it. And don't forget to share this post. #vRORocks
Starting with vRealize Automation 8.2, Service Broker can display input forms designed in vRealize Orchestrator with the custom forms display engine. However, there are some differences between the forms display engines.
Orchestrator and Service Broker forms
Among the differences, the following features supported in vRealize Orchestrator are not yet supported in Service Broker:
The input presentations developed with the vRealize Orchestrator Legacy Client, used in vRealize Orchestrator 7.6 and earlier, are not compatible. vRealize Orchestrator uses a built-in legacy input presentation conversion that is not yet available from Service Broker.
The inputs presentation in vRealize Orchestrator has access to all the workflow elements in the workflow. The custom forms have access to the elements exposed to vRealize Automation Service Broker through the VRO-Gateway service, which is a subset of what is available on vRealize Orchestrator.
Custom forms can bind workflow inputs to action parameters used to set values in other inputs.
Custom forms cannot bind workflow variables to action parameters used to set values in other inputs.
Note: You might have noticed the VRO-Gateway service when you use workflows for Workflow Based Extensibility (WBX) in event subscriptions, where those workflows are triggered by this service.
Basically, it provides a gateway to VMware vRealize Orchestrator (vRO) for services running on vRealize Automation. By using the gateway, consumers of the API can access a vRO instance and initiate workflows or script actions without having to deal directly with the vRO APIs.
It is possible to work around vRealize Automation not having access to workflow variables with one of the following options (a sketch of the first option follows this list):
Using a custom action returning the variable content.
Binding to an input parameter set to not visible instead of a variable.
Enabling custom forms and using constants.
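As a sketch of the first option, an action (hypothetically named getFlavorDefault, return type string) can simply return the value that would otherwise live in a workflow variable, and the custom form then binds its default value to this action:
// Action: getFlavorDefault (hypothetical), return type: string
// Returns the value that a workflow variable would otherwise hold,
// so the Service Broker custom form can bind to it.
return "small";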
The widgets available in vRealize Orchestrator and in vRealize Automation vary for certain types. For each input data type, both vRA and vRO define the possible form display types, the action return type for the default value, and the action return type for value options. For the String input data type:
Possible Form Display Types: Text, TextField, Text Area, Dropdown, Radio Group
Action return type for Default Value: String
Action return type for Value Options: Array of String, Properties, Array of Properties (value, label)
For use cases where the widget specified in vRealize Orchestrator is not available from Service Broker, a compatible widget is used.
Because the data being passed to and from the widget might expect different types, formats, and values when unset, the best practice for developing workflows that target Service Broker is to:
Develop the vRealize Orchestrator workflow. This can include both the initial development of the workflow and changes to its inputs.
Version the workflow manually.
In Cloud Assembly, navigate to Infrastructure > Connections > Integrations and select your vRealize Orchestrator integration.
Start the data collection for the vRealize Orchestrator integration. This step, along with versioning up your workflow, ensures that the VRO-Gateway service used by vRealize Automation has the latest version of the workflow.
Import content into Service Broker. This step generates a new default custom form.
In addition to the input forms designed in vRealize Orchestrator, you can, if needed, develop workflow input forms with the custom forms editor.
If these forms call actions, develop or run these from the vRealize Orchestrator workflow editor.
Test the inputs presentation in Service Broker.
Repeat from step 5 as many times as needed.
Repeat from step 1 in case workflow inputs or forms need to be changed.
Either distribute and maintain the custom forms, or alternatively design the vRealize Orchestrator inputs by using the same options or actions as in the custom forms (step 1 above), and then repeat steps 2 to 8 to validate that the process works.
Using this last option means that:
Running the workflow from vRealize Orchestrator can lead to the input presentation not working as expected when started in vRealize Orchestrator.
For some cases, you must modify the return type of the actions used for default value or value options so these values can be set from the vRealize Orchestrator workflow editor and, when the workflow is saved, revert the action return types.
Designing the form in the workflow has the following advantages:
Form is packaged and delivered as part of the workflow included in a package.
Form can be tested in vRealize Orchestrator as long as the compatible widgets are applied.
The form can optionally be versioned and synchronized to a Git repository with the workflow.
Designing the custom forms separately has the following advantages:
Being able to customize the form without changing the workflow.
Being able to import and export the form as a file and reusing it for different workflows.
For example, a common use case is to have a string based drop-down menu.
Returning a Properties type can be used in both the vRealize Orchestrator input form presentation and the vRealize Automation custom forms presentation. With the Properties type, you can display a list of values in the drop-down menu. After being selected by the user, these values pass an ID to the parameter (to the workflow and to the other input fields bound to this parameter). This is very practical for listing objects when there is no dedicated plug-in for them, as it avoids having to select object names and then find object IDs by name.
Returning an array of Properties has the same goal as returning Properties, but gives control over the ordering of the elements. This is done by setting the label and value keys for each property in the array. For example, it is possible to sort properties ascending or descending by label or by key within the action.
All the workflows included in the “drop down” folder of the sample package include drop down menus created with actions that have array of Properties set as the return type.
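A minimal sketch of such an action (return type Array/Properties; the option data here is made up for illustration):
// Action return type: Array/Properties
// Each Properties entry carries a label (shown in the drop-down) and a value (passed to the parameter).
var options = [];
var sizes = { "size-small": "Small (2 vCPU)", "size-large": "Large (8 vCPU)" }; // hypothetical data
for (var id in sizes) {
    var entry = new Properties();
    entry.put("label", sizes[id]);
    entry.put("value", id);
    options.push(entry);
}
return options;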
Sometimes, we want to know exactly what type of vRO object we are working with. It could be something returned from an action with return type Any, a method returning various types of objects, or simply a switch case. In this quick post, we will see what options vRO provides and where to use them.
typeof
var var1 = new VcCustomizationSpec();
System.debug(typeof var1); //function
var var2 = new Object();
System.debug(typeof var2); //object
var var3 = "a";
System.debug(typeof var3); //string
var var4 = 2;
System.debug(typeof var4); //number
var var4 = new Array(1, 2, 3);
System.debug(typeof var4); //object
System.debug(typeof []); //object
System.debug(typeof function () {}); //function
System.debug(typeof /regex/); //object
System.debug(typeof new Date()); //object
System.debug(typeof null); //object
System.debug(typeof undefinedVarible); //undefined
Using new operator
In this example, the typeof operator shows different results when used with the new operator for the class VcCustomizationSpec. That's because the new operator creates an instance of a user-defined object type or of one of the built-in object types that has a constructor function. It basically calls the constructor function of that object type, hence typeof prints function. However, something to note here is that when the new operator is used with the primitive wrapper type Number, typeof recognizes the result as an object.
var num1 = 2;
System.debug(typeof num1); //number
var num2 = Number("123");
System.debug(typeof (1 + num2)); //number
var num3 = new Number("123");
System.debug(typeof (num3)); //object
var num4 = new Number("123");
System.debug(typeof (1 + num4)); //number
Use of Parentheses
// Parentheses can be used for determining the data type of expressions.
const someData = 99;
typeof someData + "cloudblogger"; // "numbercloudblogger"
typeof (someData + " cloudblogger"); // "string"
System.getObjectType()
The System.getObjectType() method returns the VS-O 'type' for the given operand. This method is more advanced than typeof and can detect more complex yet intrinsic object types like Date, Array, etc. But it still cannot figure out plugin object types like VC:SdkConnection, etc.
Type: Result
Array: "Array"
Number: "number"
String: "string"
vRO Plugin Object Types (with or without new): "null"
Date: "Date"
Composite Types: "Properties"
SecureString: "string"
undefined Variable: ReferenceError
Code Examples
var var1 = new VcCustomizationSpec();
System.debug(System.getObjectType(var1)); //null
var var2 = new Object();
System.debug(System.getObjectType(var2)); //Properties
var var3 = "a";
System.debug(System.getObjectType(var3)); //string
var var4 = 2;
System.debug(System.getObjectType(var4)); //number
var var4 = new Array(1, 2, 3);
System.debug(System.getObjectType(var4)); //Array
System.debug(System.getObjectType([])); //Array
System.debug(System.getObjectType(function () {})); //null
System.debug(System.getObjectType(new Date())); //Date
System.debug(System.getObjectType(undefinedVarible)); //FAIL ReferenceError: "undefinedVarible" is not defined.
System.getObjectClassName()
The System.getObjectClassName() method returns the class name of any vRO scripting object for which typeof(obj) returns "object". It works best with complex vRO object types and surpasses System.getObjectType() in its ability to identify object types.
Type: Result
Array: "Array"
Number: "Number"
String: "String"
vRO Plugin Object Types (e.g., VC:SdkConnection): class name (e.g., VcSdkConnection)
Date: "Date"
Composite Types: "Properties"
SecureString: "String"
undefined Variable: ReferenceError
null objects: Error: Cannot get class name from null object
Code Examples
System.debug(System.getObjectClassName(input)); //String
var var1 = new VcCustomizationSpec();
System.debug(System.getObjectClassName(var1)); //VcCustomizationSpec
var var2 = new Object();
System.debug(System.getObjectClassName(var2)); //Object
var var3 = "a";
System.debug(System.getObjectClassName(var3)); //String
var var4 = 2;
System.debug(System.getObjectClassName(var4)); //Double
var var4 = new Array(1, 2, 3);
System.debug(System.getObjectClassName(var4)); //Array
System.debug(System.getObjectClassName([])); //Array
System.debug(System.getObjectClassName(function () {})); //Function
System.debug(System.getObjectClassName(new Date())); //Date
instanceof
The instanceof operator tests whether the prototype property of a constructor appears anywhere in the prototype chain of an object. The return value is a boolean. In other words, instanceof checks whether the object on the left was constructed by (or inherits from) the constructor on the right. That's why it doesn't work with primitive values like numbers and strings; however, it works with a variety of complex types available in vRO.
Syntax
object instanceof constructor
Code Examples
var var1 = new VcCustomizationSpec();
System.debug(var1 instanceof VcCustomizationSpec); //true
var var1 = new VcCustomizationSpec();
System.debug(var1 instanceof Object); //true
var var2 = new Object();
System.debug(var2 instanceof Object); //true
var var3 = "a";
System.debug(var3 instanceof String); //false
var var3 = new String("a");
System.debug(var3 instanceof String); //true
var var4 = 2;
System.debug(var4 instanceof Number); //false
var var4 = new Array(1, 2, 3);
System.debug(var4 instanceof Array); //true
System.debug([] instanceof Array); //true
System.debug(function () {} instanceof Function); //true
System.debug(new Date() instanceof Date); //true
System.debug({} instanceof Object); //true
That's all in this post. I hope you now have a better understanding of how to check vRO object types. Let me know in the comments if you have any doubts or questions. Feel free to share this article. Thank you.
vRO JS code is generally plain and basic, just enough to get the job done. But I was wondering how to make it fancier. So I picked some slightly more modern JS code (ES5.1+) and tried running it on my vRO 8.3. I found some interesting things that I would like to share in this article.
Snippets
Here are some JS concepts that you can use when writing vRO JavaScript code to make it more compelling and beautiful.
External Modules
To utilize modern features, you can use modules like lodash.js for functions such as map, filter, etc. Another popular module is moment.js, for complex date and time handling in vRO.
var _ = System.getModule("fr.numaneo.library").lodashLibrary();
var myarr = [1,2,3];
var myarr2 = [4,5,6];
var concatarr = _.concat(myarr, myarr2);
System.log(concatarr); // [1,2,3,4,5,6];
Find more information on how to leverage Lodash.js in vRO here.
First-class Functions
First-class functions are functions that are treated like any other variable. For example, a function can be passed as an argument to other functions, can be returned by another function and can be assigned as a value to a variable.
// we send in the function as an argument to be
// executed from inside the calling function
function performOperation(a, b, cb) {
var c = a + b;
cb(c);
}
performOperation(2, 3, function(result) {
// prints out 5
System.log("The result of the operation is " + result);
})
Ways to add properties to Objects
There are 4 ways to add a property to an object in vRO.
// supported since ES3
// the dot notation
instance.key = "A key's value";
// the square brackets notation
instance["key"] = "A key's value";
// supported since ES5
// setting a single property using Object.defineProperty
Object.defineProperty(instance, "key", {
value: "A key's value",
writable: true,
enumerable: true,
configurable: true
});
// setting multiple properties using Object.defineProperties
Object.defineProperties(instance, {
"firstKey": {
value: "First key's value",
writable: true
},
"secondKey": {
value: "Second key's value",
writable: false
}
});
Custom Class
You can create your own custom classes in vRO by using the function keyword and extending that function's prototype.
// we define a constructor for Person objects
function Person(name, age, isDeveloper) {
this.name = name;
this.age = age;
this.isDeveloper = isDeveloper || false;
}
// we extend the function's prototype
Person.prototype.writesCode = function() {
System.log(this.isDeveloper? "This person does write code" : "This person does not write code");
}
// creates a Person instance with properties name: Bob, age: 38, isDeveloper: true and a method writesCode
var person1 = new Person("Bob", 38, true);
// creates a Person instance with properties name: Alice, age: 32, isDeveloper: false and a method writesCode
var person2 = new Person("Alice", 32);
// prints out: This person does write code
person1.writesCode();
// prints out: this person does not write code
person2.writesCode();
Both instances of the Person constructor can access a shared instance of the writesCode() method.
Private variable
A private variable is only visible to the current class. It is not accessible in the global scope or to any of its subclasses. We can do this in Java (and most other programming languages) by using the private keyword when declaring a variable. JavaScript has no private keyword, but we can emulate private variables with closures:
// we used an immediately invoked function expression
// to create a private variable, counter
var counterIncrementer = (function() {
var counter = 0;
return function() {
return ++counter;
};
})();
// prints out 1
System.log(counterIncrementer());
// prints out 2
System.log(counterIncrementer());
// prints out 3
System.log(counterIncrementer());
Label
Labels can be used with break or continue statements. A label prefixes a statement with an identifier that you can then refer to.
var str = '';
loop1:
for (var i = 0; i < 5; i++) {
if (i === 1) {
continue loop1;
}
str = str + i;
}
System.log(str);
// expected output: "0234"
with keyword
The with statement extends the scope chain for a statement. Check the example for better understanding.
var box = {"dimensions": {"width": 2, "height": 3, "length": 4}};
with(box.dimensions){
var volume = width * height * length;
}
System.log(volume); //24
// vs
var box = {"dimensions": {"width": 2, "height": 3, "length": 4}};
var boxDimensions = box.dimensions;
var volume2 = boxDimensions.width * boxDimensions.height * boxDimensions.length;
System.log(volume2); //24
Function binding
The bind() method creates a new function that, when called, has its this keyword set to the provided value, with a given sequence of arguments preceding any provided when the new function is called.
const module = {
x: 42,
getX: function() {
return this.x;
}
};
const unboundGetX = module.getX;
System.log(unboundGetX()); // The function gets invoked at the global scope
// expected output: undefined
const boundGetX = unboundGetX.bind(module);
System.log(boundGetX());
// expected output: 42
Prototype Chaining
const o = {
a: 1,
b: 2,
// __proto__ sets the [[Prototype]]. It's specified here
// as another object literal.
__proto__: {
b: 3,
c: 4,
},
};
// o.[[Prototype]] has properties b and c.
// o.[[Prototype]].[[Prototype]] is Object.prototype (we will explain
// what that means later).
// Finally, o.[[Prototype]].[[Prototype]].[[Prototype]] is null.
// This is the end of the prototype chain, as null,
// by definition, has no [[Prototype]].
// Thus, the full prototype chain looks like:
// { a: 1, b: 2 } ---> { b: 3, c: 4 } ---> Object.prototype ---> null
System.log(o.a); // 1
// Is there an 'a' own property on o? Yes, and its value is 1.
System.log(o.b); // 2
// Is there a 'b' own property on o? Yes, and its value is 2.
// The prototype also has a 'b' property, but it's not visited.
// This is called Property Shadowing
System.log(o.c); // 4
// Is there a 'c' own property on o? No, check its prototype.
// Is there a 'c' own property on o.[[Prototype]]? Yes, its value is 4.
System.log(o.d); // undefined
// Is there a 'd' own property on o? No, check its prototype.
// Is there a 'd' own property on o.[[Prototype]]? No, check its prototype.
// o.[[Prototype]].[[Prototype]] is Object.prototype and
// there is no 'd' property by default, check its prototype.
// o.[[Prototype]].[[Prototype]].[[Prototype]] is null, stop searching,
// no property found, return undefined.
This blog post simply gives a consolidated view of all the official guides that VMware provides for vRealize Automation and vRealize Orchestrator. These guides can help automation engineers and developers, solution architects, vRealize admins, etc., and can be used as a reference for developing vRO code, vRA templates, and various other tasks. You can download them from the provided links for offline access.
Cloud images are operating system templates, and every instance starts out as an identical clone of every other instance. It is the user data that gives each cloud instance its personality, and cloud-init is the tool that applies user data to your instances automatically.
Use cloud-init to configure:
Setting a default locale
Setting the hostname
Generating and setting up SSH private keys
Setting up ephemeral mount points
Installing packages
There is even a full-fledged website https://cloud-init.io/ where you can check various types of resources and information.
Compatible OSes
While cloud-init started life in Ubuntu, it is now available for most major Linux and FreeBSD operating systems. For cloud image providers, cloud-init handles many of the differences between cloud vendors automatically; for example, the official Ubuntu cloud images are identical across all public and private clouds.
cloudConfig commands are special scripts designed to be run by the cloud-init process. These are generally used for initial configuration on the very first boot of a server. In this guide, we will be discussing the format and usage of cloud-config commands.
Install cloud-init in VM images #firststep
Make sure cloud-init is installed and properly configured in the Linux-based images you want to work with. You may have to install it yourself in some OSes and flavors. For example, cloud-init comes preinstalled in the official Ubuntu live server images since the release of 18.04 and in the Ubuntu Cloud Images; however, in some Red Hat Linux images, it doesn't come preinstalled.
Where cloudConfig commands can be added
You can add a cloudConfig section to cloud template code, but you can also add one to a machine image in advance, when configuring infrastructure. Then, all cloud templates that reference the source image get the same initialization.
You might have an image map and a cloud template where both contain initialization commands. At deployment time, the commands merge, and Cloud Assembly runs the consolidated commands. When the same command appears in both places but includes different parameters, only the image map command is run. Faulty cloudConfig commands can result in a resource that isn’t correctly configured or behaves unpredictably.
Important: cloudConfig may cause unpredictable results when used with vSphere Guest Customizations. Some trial and error may be needed to figure out what works best.
General Information about Cloud-Config
The cloud-config format implements a declarative syntax for many common configuration items, making it easy to accomplish many tasks. It also allows you to specify arbitrary commands for anything that falls outside of the predefined declarative capabilities.
This "best of both worlds" approach lets the file act like a configuration file for common tasks, while maintaining the flexibility of a script for more complex functionality.
YAML Formatting
The file is written using the YAML data serialization format. The YAML format was created to be easy to understand for humans and easy to parse for programs.
YAML files are generally fairly intuitive to understand when reading them, but it is good to know the actual rules that govern them.
Some important rules for YAML files are:
Indentation with whitespace indicates the structure and relationship of the items to one another. Items that are more indented are sub-items of the first item with a lower level of indentation above them.
List members can be identified by a leading dash.
Associative array entries are created by using a colon (:) followed by a space and the value.
Blocks of text are indented. To indicate that the block should be read as-is, with the formatting maintained, use the pipe character (|) before the block.
Let’s take these rules and analyze an example cloud-config file, paying attention only to the formatting:
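Here is a representative cloud-config file of the kind being discussed; the user name, key, and commands below are placeholders.
#cloud-config
users:
  - name: demo
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2E... demo@example.com
runcmd:
  - touch /test.txt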
By looking at this file, we can learn a number of important things.
First, each cloud-config file must begin with #cloud-config alone on the very first line. This signals to the cloud-init program that this should be interpreted as a cloud-config file. If this were a regular script file, the first line would indicate the interpreter that should be used to execute the file.
The file above has two top-level directives, users and runcmd. These both serve as keys. The values of these keys consist of all of the indented lines after the keys.
In the case of the users key, the value is a single list item. We know this because the next level of indentation is a dash (-) which specifies a list item, and because there is only one dash at this indentation level. In the case of the users directive, this incidentally indicates that we are only defining a single user.
The list item itself contains an associative array with more key-value pairs. These are sibling elements because they all exist at the same level of indentation. Each of the user attributes are contained within the single list item we described above.
Some things to note are that the strings you see do not require quoting and that there are no unnecessary brackets to define associations. The interpreter can determine the data type fairly easily and the indentation indicates the relationship of items, both for humans and programs.
By now, you should have a working knowledge of the YAML format and feel comfortable working with information using the rules we discussed above.
We can now begin exploring some of the most common directives for cloud-config.
User and Group Management
To define new users on the system, you can use the users directive that we saw in the example file above.
Each new user should begin with a dash. Each user defines parameters in key-value pairs. The following keys are available for definition:
name: The account username.
primary-group: The primary group of the user. By default, this will be a group created that matches the username. Any group specified here must already exist or must be created explicitly (we discuss this later in this section).
groups: Any supplementary groups can be listed here, separated by commas.
gecos: A field for supplementary info about the user.
shell: The shell that should be set for the user. If you do not set this, the very basic sh shell will be used.
expiredate: The date that the account should expire, in YYYY-MM-DD format.
sudo: The sudo string to use if you would like to define sudo privileges, without the username field.
lock-passwd: This is set to “True” by default. Set this to “False” to allow users to log in with a password.
passwd: A hashed password for the account.
ssh-authorized-keys: A list of complete SSH public keys that should be added to this user’s authorized_keys file in their .ssh directory.
inactive: A boolean value that will set the account to inactive.
system: If “True”, this account will be a system account with no home directory.
homedir: Used to override the default /home/<username>, which is otherwise created and set.
ssh-import-id: The SSH ID to import from LaunchPad.
selinux-user: This can be used to set the SELinux user that should be used for this account’s login.
no-create-home: Set to “True” to avoid creating a /home/<username> directory for the user.
no-user-group: Set to “True” to avoid creating a group with the same name as the user.
no-log-init: Set to “True” to not initiate the user login databases.
Other than some basic information, like the name key, you only need to define the areas where you are deviating from the default or supplying needed data.
One thing that is important for users to realize is that the passwd field should not be used in production systems unless you have a mechanism of immediately modifying the given value. As with all information submitted as user-data, the hash will remain accessible to any user on the system for the entire life of the server. On modern hardware, these hashes can easily be cracked in a trivial amount of time. Exposing even the hash is a huge security risk that should not be taken on any machines that are not disposable.
For an example user definition, we can use part of the example cloud-config we saw above:
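A sketch, reusing the placeholder user from the example above:
#cloud-config
users:
  - name: demo
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']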
To define groups, you should use the groups directive. This directive is relatively simple: it just takes a list of the groups you would like to create.
An optional extension to this is to create a sub-list for any of the groups you are making. This new list will define the users that should be placed in this group:
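For instance (group and user names are placeholders):
#cloud-config
groups:
  - group1
  - group2: [user1, user2]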
For user accounts that already exist (the root account being the most pertinent), a password can be supplied by using the chpasswd directive.
Note: This directive should only be used in debugging situations, because, once again, the value will be available to every user on the system for the duration of the server’s life. This is even more relevant in this section because passwords submitted with this directive must be given in plain text.
The directive contains two associative array keys. The list key will contain a block that lists the account names and the associated passwords that you would like to assign. The expire key is a boolean that determines whether the password must be changed at first boot or not. This defaults to “True”.
One thing to note is that you can set a password to “RANDOM” or “R”, which will generate a random password and write it to /var/log/cloud-init-output.log. Keep in mind that this file is accessible to any user on the system, so it is not any more secure.
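Putting both keys together, a sketch (account names and passwords are placeholders; remember the plain-text caveat above):
#cloud-config
chpasswd:
  list: |
    root:yourrootpassword
    demo:anotherpassword
  expire: true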
Write Files to the Disk
In order to write files to the disk, you should use the write_files directive.
Each file that should be written is represented by a list item under the directive. These list items will be associative arrays that define the properties of each file.
The only required keys in this array are path, which defines where to write the file, and content, which contains the data you would like the file to contain.
The available keys for configuring a write_files item are:
path: The absolute path to the location on the filesystem where the file should be written.
content: The content that should be placed in the file. For multi-line input, you should start a block by using a pipe character (|) on the “content” line, followed by an indented block containing the content. Binary files should include “!!binary” and a space prior to the pipe character.
owner: The user account and group that should be given ownership of the file. These should be given in the “username:group” format.
permissions: The octal permissions set that should be given for this file.
encoding: An optional encoding specification for the file. This can be “b64” for Base64 files, “gzip” for Gzip compressed files, or “gz+b64” for a combination. Leaving this out will use the default, conventional file type.
For example, we could write a file to /test.txt with the contents:
Here is a line.
Another line is here.
The portion of the cloud-config that would accomplish this would look like this:
#cloud-config
write_files:
- path: /test.txt
content: |
Here is a line.
Another line is here.
Update or Install Packages on the Server
To manage packages, there are a few related settings and directives to keep in mind.
To update the apt database on Debian-based distributions, you should set the package_update directive to “true”. This is synonymous with calling apt-get update from the command line.
The default value is actually “true”, so you only need to worry about this directive if you wish to disable it:
#cloud-config
package_update: false
If you wish to upgrade all of the packages on your server after it boots up for the first time, you can set the package_upgrade directive. This is akin to a apt-get upgrade executed manually.
This is set to “false” by default, so make sure you set this to “true” if you want the functionality:
#cloud-config
package_upgrade: true
To install additional packages, you can simply list the package names using the “packages” directive. Each list item should represent a package. Unlike the two commands above, this directive will function with either yum or apt managed distros.
These items can take one of two forms. The first is simply a string with the name of the package. The second form is a list with two items. The first item of this new list is the package name, and the second item is the version number:
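For example (the package names and pinned version are placeholders):
#cloud-config
packages:
  - nginx
  - [mysql-server, 5.7]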
The “packages” directive will set apt_update to true, overriding any previous setting.
Configure SSH Keys for User Accounts and the SSH Daemon
You can manage SSH keys in the users directive, but you can also specify them in a dedicated ssh_authorized_keys section. These will be added to the first defined user’s authorized_keys file.
This takes the same general format of the key specification within the users directive:
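A sketch with truncated placeholder keys:
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2E... user1@example.com
  - ssh-rsa AAAAB3NzaC1yc2E... user2@example.com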
You can also generate the SSH server’s private keys ahead of time and place them on the filesystem. This can be useful if you want to give your clients the information about this server beforehand, allowing it to trust the server as soon as it comes online.
To do this, we can use the ssh_keys directive. This can take the key pairs for RSA, DSA, or ECDSA keys using the rsa_private, rsa_public, dsa_private, dsa_public, ecdsa_private, and ecdsa_public sub-items.
Since formatting and line breaks are important for private keys, make sure to use a block with a pipe key when specifying these. Also, you must include the begin key and end key lines for your keys to be valid.
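A sketch of the ssh_keys directive with placeholder key bodies, showing the block pipe and the begin and end lines:
#cloud-config
ssh_keys:
  rsa_private: |
    -----BEGIN RSA PRIVATE KEY-----
    your_rsa_private_key_here
    -----END RSA PRIVATE KEY-----
  rsa_public: ssh-rsa AAAAB3NzaC1yc2E... host@example.com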
If your infrastructure relies on keys signed by an internal certificate authority, you can set up your new machines to trust your CA cert by injecting the certificate information. For this, we use the ca-certs directive.
This directive has two sub-items. The first is remove-defaults, which, when set to true, will remove all of the normal certificate trust information included by default. This is usually not needed and can lead to some issues if you don’t know what you are doing, so use with caution.
The second item is trusted, which is a list of entries, each containing a trusted CA certificate:
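For example (the certificate body is a placeholder):
#cloud-config
ca-certs:
  remove-defaults: false
  trusted:
    - |
      -----BEGIN CERTIFICATE-----
      your_CA_certificate_here
      -----END CERTIFICATE-----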
If you have configured your own DNS servers that you wish to use, you can manage your server’s resolv.conf file by using the resolv_conf directive. This currently only works for RHEL-based distributions.
Under the resolv_conf directive, you can manage your settings with the nameservers, searchdomains, domain, and options items.
The nameservers directive should take a list of the IP addresses of your name servers. The searchdomains directive takes a list of domains and subdomains to search in when a user specifies a host but not a domain.
The domain sets the domain that should be used for any unresolvable requests, and options contains a set of options that can be defined in the resolv.conf file.
If you are using the resolv_conf directive, you must ensure that the manage-resolv-conf directive is also set to true. Not doing so will cause your settings to be ignored:
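A sketch under those constraints (addresses and domains are placeholders; note that newer cloud-init releases spell the directive manage_resolv_conf):
#cloud-config
manage-resolv-conf: true
resolv_conf:
  nameservers:
    - 203.0.113.1
    - 203.0.113.2
  searchdomains:
    - example.com
    - internal.example.com
  domain: example.com
  options:
    rotate: true
    timeout: 1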
If none of the managed actions that cloud-config provides works for what you want to do, you can also run arbitrary commands. You can do this with the runcmd directive.
This directive takes a list of items to execute. These items can be specified in two different ways, which will affect how they are handled.
If the list item is a simple string, the entire item will be passed to the sh shell process to run.
The other option is to pass a list, each item of which will be executed in a similar way to how execve processes commands. The first item will be interpreted as the command or script to run, and the following items will be passed as arguments for that command.
Most users can use either of these formats, but the flexibility enables you to choose the best option if you have special requirements. Any output will be written to standard out and to the /var/log/cloud-init-output.log file:
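For example, showing both forms (the commands are placeholders):
#cloud-config
runcmd:
  - touch /tmp/string_form_ran
  - [ sh, -c, "echo list form ran > /tmp/list_form_ran" ]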
In some cases, you’ll want to shutdown or reboot your server after executing the other items. You can do this by setting up the power_state directive.
This directive has four sub-items that can be set. These are delay, timeout, message, and mode.
The delay specifies how long into the future the restart or shutdown should occur. By default, this will be “now”, meaning the procedure will begin immediately. To add a delay, users should specify, in minutes, the amount of time that should pass using the +<num_of_mins> format.
The timeout parameter takes a unit-less value that represents the number of seconds to wait for cloud-init to complete before initiating the delay countdown.
The message field allows you to specify a message that will be sent to all users of the system. The mode specifies the type of power event to initiate. This can be “poweroff” to shut down the server, “reboot” to restart the server, or “halt” to let the system decide which is the best action (usually shutdown):
#cloud-config
power_state:
timeout: 120
delay: "+5"
message: Rebooting in five minutes. Please save your work.
mode: reboot
Troubleshooting
If a cloud-init script behaves unexpectedly, check the captured console output in /var/log/cloud-init-output.log on the machine deployed by vRealize Automation.
Conclusion
The above examples represent some of the more common configuration items available when running a cloud-config file. There are additional capabilities that we did not cover in this guide. These include configuration management setup, configuring additional repositories, and even registering with an outside URL when the server is initialized.
You can find out more about some of these options by checking the /usr/share/doc/cloud-init/examples directory. For a practical guide to help you get familiar with cloud-config files, you can follow our tutorial on how to use cloud-config to complete basic server configuration here.
Disclaimer: It turned out that this course is not freely available to everyone. I would suggest you give it a try and see if you're lucky.
If you are looking for a course on vRealize Automation (vRA) and vRealize Orchestrator (vRO) that is officially developed by VMware, is enterprise-level rather than just basic, and, most importantly, is FREE, then you should go for this course. It has 41 lessons, more than 70,000 views, and is an ELS (Enterprise) course covering vRA architecture, installation, cloud templates, integration with NSX-T, Kubernetes, public clouds, SaltStack, vRO workflows and extensibility, and a lot more. I personally went through this course after completing Udemy's Getting Started with VMware vRealize Automation 8.1; while the Udemy course kick-starts your journey in vRA 8.x, this VMware course takes it to another level. It is recommended for anyone working in VMware automation, coming from vRA 7.x, looking to migrate from 7.x to 8.x, deploying vRA, etc. In this post, I have shared some basic steps on how to get to that course and get yourself started.