Deploying Virtual Appliances

To deploy a virtual appliance (VA), complete the steps specific to your virtualization platform, then proceed to Starting a New Virtual Machine and the sections that follow.

Start with the section for your deployment type.

Local VA Deployment with vSphere

To deploy a VA locally with vSphere:

  1. Download the virtual appliance package.

  2. Unzip the package.

  3. Copy the extracted files to your virtualization platform following the standard process for your platform.

  4. Proceed to Starting a New Virtual Machine.

Local VA Deployment with Hyper-V

To deploy a VA locally with Hyper-V:

  1. Download the virtual appliance package.

  2. Unzip the package.

  3. Copy the sailpoint-va.vhd file to your virtualization platform.

  4. In Hyper-V, select New > Virtual Machine. The New Virtual Machine Wizard launches.

  5. On the Before You Begin screen, select Next > to create a virtual machine with custom configuration.

  6. On the Specify Name and Location screen:

    a. Enter a name for your new virtual machine.

    b. Select the checkbox for Store the virtual machine in a different location.

    c. Enter the desired location for your new virtual machine.

    d. Select Next >.

  7. On the Specify Generation screen, select Generation 1, and then Next >.

  8. On the Assign Memory screen, enter the amount of startup memory for the VA, and select Next >.

  9. On the Configure Networking screen, select External for the V-Switch connection, and then Next >.

  10. On the Connect Virtual Hard Disk screen, select Use an existing virtual hard disk, enter the path to the extracted file in the Location field, and select Finish.

  11. Verify the Summary.

  12. Proceed to Starting a New Virtual Machine.

Cloud VA Deployment with AWS

To support connections to other cloud-based applications, you may want to deploy SailPoint VAs on your AWS infrastructure.

VA deployment with AWS must be completed by an experienced admin of your company’s AWS tenant with knowledge of the following:

  • The VPC, networking, and security group requirements for your AWS tenant.
  • The AWS regions where the VA will reside.

To deploy a VA in the cloud with AWS:

  1. Ensure that your environment meets your AWS tenant's VPC, networking, and security group requirements.

  2. Open a support ticket requesting an Amazon Machine Image (AMI) ID to install a VA in AWS. You will need to provide your AWS account number and region, such as us-east-1.

    SailPoint Support will then share the AMI with your account and provide you with an AMI ID.

  3. In AWS, select the AMI, and select Launch.

  4. In the Instance Type page, select m4.xlarge.

  5. Select Configure Instance Details.

  6. Select the appropriate VPC and Subnet for your environment.

  7. Select Add Storage, and leave the defaults on this page.

  8. Select Add Tags, and complete this page as appropriate for your organization.

  9. Select Configure Security Group, and select the appropriate security group.

  10. Select Review and Launch > Launch.

  11. In the Select an existing key pair or create a new key pair dialog box, select the option appropriate to your company policy.

  12. Proceed to Starting a New Virtual Machine.
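For admins who prefer the AWS CLI over the console, steps 3 through 11 can be approximated with a single `aws ec2 run-instances` call. This is a hedged sketch only: every ID below (AMI, subnet, security group, key pair) is a placeholder assumption you must replace with the values SailPoint Support and your AWS admin provide.

```shell
# Sketch: launch the shared SailPoint VA AMI from the CLI.
# All IDs below are placeholders (assumptions), not real resources.
AMI_ID="ami-0123456789abcdef0"       # AMI ID provided by SailPoint Support
SUBNET_ID="subnet-0123456789abcdef0" # subnet in your chosen VPC
SG_ID="sg-0123456789abcdef0"         # security group appropriate to your environment
KEY_NAME="my-va-keypair"             # key pair per your company policy

# Build the launch command; m4.xlarge matches the console instructions above.
CMD="aws ec2 run-instances --image-id $AMI_ID --instance-type m4.xlarge \
--subnet-id $SUBNET_ID --security-group-ids $SG_ID --key-name $KEY_NAME"

# Review the command before running it in your own shell.
echo "$CMD"
```

Because the command is echoed rather than executed, you can inspect it (and add tags or storage options) before launching the instance.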

Cloud VA Deployment with Azure

To support connections to other cloud-based applications, you may want to deploy SailPoint VAs on your Azure infrastructure.

VA deployment with Azure must be completed by an experienced admin of your company’s Azure tenant with knowledge of the following:

  • The Azure CLI
  • The networking and security group requirements for your Azure tenant
  • The regions where the VA will reside

To deploy a VA in the cloud with Azure:

  1. Ensure that your environment:

    • Has a storage account, blob container, and resource group to hold the VA resources, and meets the Azure VM instance size requirements.
    • Has sufficient bandwidth to upload a 132 GB image to Azure.
  2. Download the virtual appliance package.

  3. Unzip the package.

    Important

    The extracted VHD file will be around 132 GB. Check your disk space before extracting.

  4. Log in to your Azure command line tool. Refer to the Azure CLI Command Reference as necessary.

  5. Upload sailpoint-va.vhd to an Azure storage container with the following az storage blob upload command:

    az storage blob upload --container-name "$container_name" --file "sailpoint-va.vhd" --name "sailpoint-va.vhd" --connection-string "$connection_string"

    Note

    Given the large size of the image, this step might take hours.

  6. Create a managed disk from the blob with the following az disk create command:

    az disk create --resource-group "$resource_group" --name "sailpoint-va" --source "$vhd_blob_url"

    Where:

    $vhd_blob_url is the URL of the sailpoint-va.vhd blob.

    For example: https://${storage_account}.blob.core.windows.net/$container_name/sailpoint-va.vhd

  7. Create the VM from the managed disk with the following az vm create command:

    az vm create --resource-group "$resource_group" --location "$region" --name "$name" --os-type linux --attach-os-disk "sailpoint-va" --nsg "$network_security_group" --size "Standard_B4ms"

    Note

    The network security group you associate with the VM must allow traffic over port 22 in order for you to SSH into the VA.

  8. To test the VM, SSH in using the default login:

    Username: sailpoint

    Password: S@ilp0int

  9. Proceed to Starting a New Virtual Machine to change your password.
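The $vhd_blob_url used by az disk create in step 6 is derived from your storage account and container names. A minimal sketch, with placeholder names (assumptions, substitute your own):

```shell
# Derive the blob URL that `az disk create` (step 6) expects.
# The storage account and container names below are placeholder assumptions.
storage_account="mystorageacct"
container_name="va-images"

vhd_blob_url="https://${storage_account}.blob.core.windows.net/${container_name}/sailpoint-va.vhd"
echo "$vhd_blob_url"
```

Pass the resulting URL as the --source argument in step 6.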

Cloud VA Deployment with GCP

VA deployment with Google Cloud Platform (GCP) must be completed by an experienced admin of your company’s GCP environment with the following:

  • The Google Cloud SDK
  • Admin permissions for your GCP account
  • A bucket, with admin permissions, to hold the VA image
  • Knowledge of the networking and security requirements for your GCP environment
  • A project on GCP

To deploy a VA in the cloud with GCP:

  1. Download the virtual appliance package.

  2. Unzip the package.

  3. Launch a Google Cloud SDK Shell.

  4. Authenticate to GCP.

  5. Upload the unzipped VA-latest folder to the bucket you will use for the VA.

  6. In the Google Cloud SDK Shell, execute a command to import the VA disk as a non-bootable virtual disk into GCP.

    For example, to import a virtual disk named sailpoint-va-disk1.vmdk stored in gs://bucket-import/va-latest bucket, enter:

    gcloud compute images import va-image --no-guest-environment --source-file gs://bucket-import/va-latest/sailpoint-va-disk1.vmdk --data-disk

  7. After the import completes, log in to the Google Cloud Console and go to Compute Engine > VM Instances > Create an Instance.

    a. In the Boot disk section, select Change > Custom Images.

    b. Select the project you created, where the VA disk was uploaded.

    c. Choose the image you imported.

    d. Size the disk to 128 GB.

  8. Create the instance. After the instance is created, it appears on the Instances tab, and you can log in via SSH using the default username and password for the VA.

  9. Log in as the SailPoint user through SSH and register the VA to your tenant.

  10. Proceed to Starting a New Virtual Machine.
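Steps 5 and 6 can be staged from the shell before you run them. A hedged sketch that builds the upload and import commands without executing them; the bucket, disk, and image names match the example in step 6 and are otherwise assumptions:

```shell
# Sketch of steps 5-6: stage the disk in your bucket, then import it.
# Bucket, disk, and image names are placeholder assumptions.
bucket="gs://bucket-import/va-latest"
disk="sailpoint-va-disk1.vmdk"
image="va-image"

# `gsutil cp` uploads the disk; `gcloud compute images import` converts it.
upload_cmd="gsutil cp $disk $bucket/"
import_cmd="gcloud compute images import $image --no-guest-environment \
--source-file $bucket/$disk --data-disk"

# Review both commands before running them in your authenticated shell.
printf '%s\n%s\n' "$upload_cmd" "$import_cmd"
```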

Starting a New Virtual Machine

  1. Start the virtual machine (VM) you previously downloaded and copied to your local virtualization platform or launch your VM instance.

  2. Sign in to the VM:

    Username: sailpoint

    Password: S@ilp0int

  3. Change your password immediately:

    a. At the command prompt, type passwd.

    b. Enter the current password S@ilp0int.

    c. Provide a new password.

    d. Repeat the new password.

    Important

    If you are performing a cloud VA deployment and receive a failed unit message for the esx_dhcp_bump.service after login, run the following command to disable the service: sudo systemctl disable esx_dhcp_bump.service

  4. For local VA deployments, set a static IP address for your virtual appliance.

  5. For cloud VA deployments, proceed to Creating Virtual Appliances.

Setting a Static IP Address for Local VA Deployments

  1. Find the name of your virtual NIC card for your VA:

    a. In the command line, type ip addr

    b. From the list of virtual NICs displayed, find the second one.

    Note

    Virtual NIC names are dynamically assigned upon initial VA creation, so you will need to perform this step for each VA to enter the correct name into your static.network file in the steps that follow.

  2. Create the static.network file:

    a. From the /home/sailpoint/ directory, enter:

    sudoedit /etc/systemd/network/static.network

    b. Enter the following:

    [Match]
    Name=<NICname>

    [Network]
    DNS=<DNS>
    Address=<IPaddress and CIDR>
    Gateway=<Gateway>

    Where:

    • <NICname> is the name of your VA's virtual NIC card, and the values under [Network] are specific to your VA's IP configuration.
    • The CIDR suffix in the Address field is required if you want to set a subnet mask.

    Note

    To set a custom DNS Search Domain, add Domains=<search domain> to the bottom of the [Network] section.

  3. Disable the ESX DHCP bump service: sudo systemctl disable esx_dhcp_bump.service

  4. Reboot the VA: sudo reboot

  5. Proceed to Creating Virtual Appliances.
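The static.network file from step 2 can be generated from shell variables before you open sudoedit, which makes it easier to review the values first. A minimal sketch; the NIC name, addresses, and DNS server below are placeholder assumptions you must replace with the values for your VA:

```shell
# Sketch: assemble the static.network content from your own values.
# All values below are placeholder assumptions.
nic="ens192"            # second NIC reported by `ip addr`
dns="10.0.0.2"
address="10.0.0.50/24"  # CIDR suffix sets the subnet mask
gateway="10.0.0.1"

config="[Match]
Name=$nic

[Network]
DNS=$dns
Address=$address
Gateway=$gateway"

# Review the output, then paste it into sudoedit /etc/systemd/network/static.network
printf '%s\n' "$config"
```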

If using Hyper-V:

Hyper-V images ship with the waagent Azure service enabled by default. This can cause DNS issues and irregular network routing on virtual appliances running on Hyper-V. To prevent these issues, disable the waagent service:

sudo systemctl status waagent
sudo systemctl stop waagent
sudo systemctl disable waagent
sudo reboot

Creating Virtual Appliances

After you get your URL from SailPoint, you can securely connect the virtual machine to your tenant by creating VAs.

This section describes how to create VAs with the standard configuration.

Caution

  • Make sure you have the time and resources to complete all the steps at once. After a new VA has been added to the cluster, you have 30 minutes to download the va-config-<va_id>.yaml file, update the keyPassphrase, copy the updated file to the new VA, and successfully test the connection. If these tasks are not completed within 30 minutes, you will have to start over and create another VA.
  • The VA is not saved until you select Test Connection at the end and receive a success message.

Note

If you are using non-standard VA configurations, be sure to complete the additional configuration steps for HTTP proxy VAs or Network Tunnel VAs before creating VAs.

To create a VA:

  1. Go to Admin > Connections > Virtual Appliances. The list of virtual appliance clusters can be viewed as cards or in a table.

  2. On the Virtual Appliance Clusters page, select Create New.

  3. Enter a unique Cluster Name and Cluster Description for the virtual appliance cluster. You cannot have two clusters with the same name in your organization.

  4. Select the Default cluster type for most deployments.

    If your organization has Data Access Security or AI-Driven Identity Security for IdentityIQ, you may need to select a different VA cluster type.

    VA Cluster Types

    Important

    You cannot change the cluster type for an existing cluster. If the wrong cluster type is selected for a cluster, you must delete the cluster and start over.

  5. Select a Time Zone. The cluster time zone determines the GMT offset when scheduling account aggregations and entitlement aggregations for the connected source.

  6. (Optional) Select Enable Debugging to start 24 hours of debug-level (verbose) logging for all VAs in this cluster. It is not required to enable debugging, but it can be helpful in case you need to troubleshoot anything.

  7. Select Save.

  8. Select Virtual Appliances.

  9. Select Add a VA and enter a description.

  10. Select Download VA Config File to copy va-config-<va_id>.yaml to your workstation. Each .yaml file is unique and cannot be used by more than one VA.

    Downloading this file to your workstation may result in the file having a .txt extension. If this happens, rename the file with a .yaml extension before copying it to your VA in Step 12.

  11. Open va-config-<va_id>.yaml and change the value of keyPassphrase from _ch@ngeMe_ to a unique value for your organization.

    The value of keyPassphrase must be identical for every virtual appliance in the cluster. The keyPassphrase cannot start with a special character, and cannot include !, /, \, or spaces.

    The VA automatically encrypts the keyPassphrase when you copy the .yaml file to it. Encrypted keyPassphrases are denoted by a leading set of colons (::::).

    Important

    If you are using a network tunnel configuration, add the following line to the bottom of the va-config-<va_id>.yaml file: tunnelTraffic: true

    Important

    For IdentityIQ users with AI-Driven Identity Security, deploying the VA requires adding product: iai to the va-config-<va_id>.yaml file. Refer to Deploying the Virtual Appliance with IdentityIQ for instructions.

  12. Copy va-config-<va_id>.yaml from your workstation to the VA using the following scp command:

    scp <local_path>/va-config-<va_id>.yaml sailpoint@<va_ip_address>:/home/sailpoint/config.yaml

    Where:

    • <local_path> is the location of the .yaml file on your workstation.
    • <va_ip_address> is the IP address of the VA.

    Notes

    • Copying the .yaml file to this destination renames it config.yaml, as required.
    • To find the IP address for the VA, run the ifconfig -a command.
  13. Wait several minutes for the VA to bootstrap.

  14. Select Test Connection.

    If the connection is successful, a success message will appear and the VA is saved in the VA cluster. From here, you can exit or create another VA.

    If the connection is not successful, a warning message will appear.

    Ensure that all steps were executed correctly and try testing the connection again. If the VA is unable to successfully connect, consider starting over or referring to the Virtual Appliance Troubleshooting Guide using your SailPoint Compass login.

If the VA connection is successful, you can now connect the VA to a source and enable Transport Layer Security if the source supports it.
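The keyPassphrase rules in step 11 (must not start with a special character; must not contain !, /, \, or spaces) can be checked before you edit the file. A hedged sketch; the passphrase value and the "starts with a letter or digit" interpretation of the leading-character rule are assumptions:

```shell
# Sketch: validate a candidate keyPassphrase against the documented rules.
# The passphrase value below is a placeholder assumption.
passphrase="MyUniqueClusterSecret2024"

valid=yes
case "$passphrase" in
  [!a-zA-Z0-9]*) valid=no ;;            # must not start with a special character
esac
case "$passphrase" in
  *'!'*|*'/'*|*'\'*|*' '*) valid=no ;;  # must not contain !, /, \, or spaces
esac

echo "passphrase valid: $valid"
```

Remember that the same passphrase must be used for every VA in the cluster.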

Deploying VAs for High Availability and Disaster Recovery

The need to factor High Availability (HA) and Disaster Recovery (DR) into your deployment decisions may be obvious, but it might help to also understand the following:

  • Each source is associated with a specific VA cluster.
  • Any actions performed on that source, such as aggregation, test connection, authentication, or provisioning are sent as requests to the VA cluster and form a queue.
  • Each VA that's running continually polls the queue of requests sent to its associated VA cluster.

This section outlines different strategies for handling high availability and disaster recovery scenarios in virtual appliance deployments.

High Availability means ensuring that there are enough VAs running to meet the processing needs of the business, as well as sufficient redundancy to be able to compensate for a single VA becoming temporarily unavailable due to an upgrade process, loss of connectivity, or other activity that could otherwise result in downtime.

Disaster Recovery means making sure that your organization has VAs deployed in more than one location, as part of a failover strategy that ensures business continuity in the face of a disaster (natural or otherwise).

All VAs Running

In this strategy, all VAs are deployed in a single VA cluster, with all VAs running concurrently. Some VAs are in the primary datacenter, and others (called DR VAs) are deployed in a DR datacenter.

As work is assigned to the VA cluster, either primary VAs or DR VAs can pick up and perform requests. A problem could arise if there are latency issues between the source that the VA is communicating with and the VA's deployment location. This is especially true for DR VAs, which may be farther from the sources they are communicating with.

During a failover event, no action is needed. If the primary VAs go down, the DR VAs continue to respond to requests.

Advantages:

  • On DR event, no action needed
  • Full utilization of all VAs
  • VAs stay up-to-date
  • Minimal risk of outages

Disadvantages:

  • Potential latency issues

Switch Clusters

In this strategy, two VA clusters are deployed. One VA cluster is the 'primary VA cluster', with all member VAs in the primary datacenter. The other VA cluster is the 'DR VA cluster', with all member VAs in the backup DR datacenter. All VAs in all clusters are powered-on and receiving updates.

As work is assigned to the primary VA cluster, the primary VAs can pick up and fulfill requests. This mitigates any sort of latency issues between the source that the VA is communicating with and the VA deployment location. Even though the DR VAs are powered on, they are not receiving requests, because they are not associated with the primary VA cluster that the sources are using.

During a disaster event where the primary VAs go down, there would be an outage until the sources are reconfigured to use the DR VA cluster.

Note

On failover, switching clusters requires reentering the source credentials. This can complicate the failover process if these credentials are not readily available to the administrator, or if there are many sources to manage.

Advantages:

  • DR VAs stay up-to-date
  • DR VAs don’t add latency, as they aren’t processing anything until a DR event occurs

Disadvantages:

  • No utilization of DR VA cluster
  • Reconfiguration needed upon DR event, involving reentering of source credentials
  • Difficult if there are a large number of sources to manage

Standby Reactive Deployment

In this strategy, primary VAs are deployed in a single VA cluster. Only the VAs in the primary datacenter are running concurrently. There are existing standby VAs set up and tested in a DR zone, but not yet deployed to a VA cluster. These VAs can be left powered up or down.

As work is assigned to the primary VA cluster, the primary VAs can pick up and fulfill requests. This mitigates any sort of latency issues between the sources that the VA is communicating with and the VA deployment location.

During a disaster event where the primary VAs go down, there would be an outage until the standby DR VAs are deployed to the primary VA cluster. As the new standby DR VAs come online, they start to fulfill requests.

Note

We recommend you keep standby VAs tested and updated by frequently adding the VAs to an unused cluster dedicated for this purpose. After adding the standby VAs to this cluster, and allowing for some time to pass for updates to occur, the VAs can then be removed and powered down again.

Advantages:

  • VAs do not add latency, as they are not processing anything until a DR situation occurs

Disadvantages:

  • Turnaround time can be greater depending on how the DR VAs are deployed
  • Relies on VA readiness

While there is no technical reason prohibiting it, we strongly recommend that you not deploy virtual appliances in the DMZ, or perimeter network. For the most secure and highest performing communication with target sources, we recommend that you deploy VAs near their sources on internal networks.

Why Not Deploy in the DMZ?

We recommend against deploying the VA in the DMZ for the following reasons:

  • Security - The most important consideration against DMZ deployment is security. A DMZ is a less-secure perimeter network by design. SailPoint VAs are hardened against attack, but they are a communication backbone with sources, and could be an attack vector. Each VA also contains the 2048-bit RSA asymmetric private key (generated from the chosen key passphrase), which is used to decrypt credentials when talking to various sources. Placing a VA in a less-secure zone could put your information at risk.

  • Proximity - Virtual appliances connect to various sources, and both read (aggregation) and write (provisioning) activities can occur via API on these connections. Some connector APIs can be latency-sensitive. Deploying the VAs closer to the sources they are communicating with yields better performance.

  • Connectivity - VAs are designed to communicate with internal sources, not perimeter sources. The purpose of a DMZ perimeter network is for externally-facing components to communicate with each other, not with components on the internal network. If a VA is deployed in the DMZ and needs to communicate with internal sources, you might have to open more ports on your internal firewall to facilitate that communication.
