8.3 Deploying ESM for Fusion

This section provides information about using the CDF Management Portal to deploy ESM for Fusion.

8.3.1 Configuring the Cluster

  1. Open a new tab in a supported web browser.

  2. Specify the URL for the CDF Management Portal:

    https://<ESM_for_Fusion-server>:3000

    NOTE: Use port 3000 when you are setting up the CDF for the first time. After the initial setup, use port 5443 to access the CDF Management Portal.

    Use the fully qualified domain name of the host that you specified in the Connection step during the CDF configuration. Usually, this is the master node’s FQDN.
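
    If the portal does not load, you can first confirm that the host name resolves and that the port is reachable. The following is a minimal sketch using standard tools (getent and curl), which may or may not be installed on your workstation:

      # Confirm the FQDN resolves to the expected IP address
      getent hosts <ESM_for_Fusion-server>

      # Confirm the portal answers on port 3000 (-k skips certificate
      # validation, which the initial self-signed certificate requires)
      curl -k -I https://<ESM_for_Fusion-server>:3000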

  3. Log in to the CDF Management Portal with the credentials of the administrative user that you provided during installation.

  4. Select the metadata file version in the Version field, and then click Next.

  5. Read the license agreement, and then select I agree.

  6. Click Next.

  7. On the Capabilities page, select the following options:

    • Fusion

    • ArcSight Command Center

    • (Optional) ArcSight Layered Analytics

      NOTE: To use this capability, you must deploy both ArcSight Interset and ESM for Fusion in the same cluster.

  8. Click Next.

  9. On the Database page, retain the default values, and then click Next.

  10. On the Deployment Size page, select the required cluster, and then click Next.

  11. (Conditional) If you are deploying worker nodes, select Medium Cluster.

  12. On the Connection page, an external host name is automatically populated. This is resolved from the virtual IP (VIP) specified during the CDF installation (--ha-virtual-ip parameter). Confirm that the VIP is correct and then click Next.
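
    To double-check that the populated host name really resolves to the VIP, you can query it from the master node; for example (assuming nslookup is available):

      # The answer should match the value passed to --ha-virtual-ip
      nslookup <external-host-name>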

  13. (Conditional) To set up high availability, complete the following steps:

    IMPORTANT: You must configure high availability at this point in the process. You cannot add master nodes and configure high availability after installation.

    1. Select Make master highly available.

    2. On the Master High Availability page, add at least two additional master nodes.

    3. On the Add Master Node page, specify the following details:

      Host

      Specifies the fully qualified domain name (FQDN) of the node that you are adding.

      Ignore Warnings

      Specifies whether you want the CDF Management Portal to ignore any warnings that occur during the pre-checks on the server.

      If deselected, the add-node process stops when a pre-check raises warnings, and a window displays the warning messages. We recommend starting with Ignore Warnings deselected so that you can review any warnings. You can then decide whether to rectify them or, to proceed anyway, clear the warning dialog, select Ignore Warnings, and click Save again.

      User Name

      Specifies the user credential for logging in to the node.

      Verify Mode

      Specifies whether the verification mode should be Password or Key-based.

      • For Password, you must also specify your password.

      • For Key-based, specify a user name and upload the private key file used to connect to the node (see the verification sketch after these steps).

      Thinpool Device

      Applies only if you configured a Thinpool for the master node

      Specifies the Thinpool Device path that you configured for the master node.

      For example: /dev/mapper/docker-thinpool

      You must have already set up the Docker thin pool for all cluster nodes that need to use thinpools, as described in the CDF Planning Guide.

      flannel IFace

      Applies only if the master node has more than one network adapter

      Specifies the flannel IFace value for Docker inter-host communication.

      The value must be a single IPv4 address or the name of an existing interface.

    4. Click Save.

    5. Repeat these steps for other master nodes.
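
    Before saving each node, it can save a retry to verify the key-based login, the thinpool device, and the available network interfaces from a shell. This is an illustrative sketch with placeholder values, not part of the product tooling:

      # Key-based mode: the private key should allow a non-interactive login
      ssh -i /path/to/private_key <user>@<new-master-FQDN> 'hostname -f'

      # Thinpool: the device path you enter should exist on the node
      ssh <user>@<new-master-FQDN> 'ls -l /dev/mapper/docker-thinpool'

      # flannel IFace: list IPv4 addresses to pick the correct interface
      ssh <user>@<new-master-FQDN> 'ip -4 -brief addr show'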

  14. Click Next.

  15. (Conditional) For multi-node deployment, complete the following steps:

    1. Navigate to the Add Worker Node page.

    2. To add additional worker nodes, click + (Add).

    3. Specify the required configuration information.

    4. Click Save.

    5. Repeat these steps for each of the worker nodes you want to add.

  16. Click Next.

  17. (Conditional) To allow worker workloads to run on the master node, select Allow suite workload to be deployed on the master node, and then click Next.

    NOTE: Before selecting this option, ensure that the master node meets the system requirements specified for the worker node.
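
    A rough way to compare the master node against the worker requirements is to check its CPU, memory, and disk headroom directly; for example:

      # CPU cores, memory in GB, and free space on the installation volume
      nproc
      free -g
      df -h /opt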

  18. To configure each NFS volume, complete the following steps:

    1. Navigate to the File Storage page.

    2. For File System Type, select Self-Hosted NFS.

      Self-hosted NFS refers to the external NFS that you created while preparing the environment for CDF installation.

    3. For File Server, specify the IP address or FQDN of the NFS server.

    4. For Exported Path, specify the following paths for the NFS volumes:

      NFS Volume          File Path

      arcsight-volume     <NFS_VOLUME_FOLDER>/arcsight-vol
      itom-vol-claim      <NFS_VOLUME_FOLDER>/itom-vol
      db-single-vol       <NFS_VOLUME_FOLDER>/db-single-vol
      itom-logging-vol    <NFS_VOLUME_FOLDER>/itom-logging-vol
      db-backup-vol       <NFS_VOLUME_FOLDER>/db-backup-vol

    5. Click Validate.

      Ensure that you have validated all NFS volumes successfully before continuing with the next step.
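
    If validation fails, you can check from the master node that the NFS server actually exports the expected paths. A minimal sketch, assuming the showmount utility (from the nfs-utils package) is installed:

      # Lists the exported paths; each volume folder above should appear
      showmount -e <NFS-server-IP-or-FQDN>

      # Optionally test-mount one volume to confirm read/write access
      mount -t nfs <NFS-server>:<NFS_VOLUME_FOLDER>/arcsight-vol /mnt
      touch /mnt/.nfs_test && rm /mnt/.nfs_test
      umount /mnt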

  19. Click Next.

  20. To start deploying master and worker nodes, click Yes in the Confirmation dialog box.

  21. Continue with Uploading Images to the Local Registry.

8.3.2 Uploading Images to the Local Registry

To deploy ESM for Fusion, CDF needs the following images in the local Docker registry:

  • fusion-x.x.x.x

  • esm-x.x.x.x

  • (Optional) layered-analytics-x.x.x.x

You must upload these images to the local registry as follows:

  1. Launch a terminal session, then log in to the master node as root or a sudo user.

  2. Change to the following directory:

    cd /<cdf_installer_directory>/scripts/

    For example:

    cd /opt/esm-cmd-center-installer-for-fusion-x.x.x.x/installers/cdf-x.x.x-x.x.x.x/scripts/

  3. Upload required images to the local registry. When prompted for a password, use the admin user password for the CDF Management Portal:

    ./uploadimages.sh -d <downloaded_suite_images_folder_path>

    For example:

    ./uploadimages.sh -d /<download_directory>/esm-cmd-center-installer-for-fusion-x.x.x.x/suite_images
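
    The script prompts for the registry password and can take some time to push all images. To confirm beforehand that the downloaded archives are in place, a simple listing is enough (the path below mirrors the example above):

      # The fusion, esm, and (optional) layered-analytics images should be listed
      ls -lh /<download_directory>/esm-cmd-center-installer-for-fusion-x.x.x.x/suite_images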

  4. Continue with Deploying ESM for Fusion.

8.3.3 Deploying ESM for Fusion

After you upload the images to the local registry, the CDF uses these images to deploy the respective software in the cluster.

  1. Log in to the CDF Management Portal.

  2. On the Download Images page, click Next because all the required packages are already downloaded and uncompressed.

  3. After the Check Image Availability page displays All images are available in the registry, click Next.

    If the page displays a missing image error, upload the missing images.

  4. (Optional) To monitor the progress of service deployment, complete the following steps:

    1. Launch a terminal session.

    2. Log in to the master node as root.

    3. Execute the command:

      watch 'kubectl get pods --all-namespaces'
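
      All pods should eventually reach the Running or Completed state. To focus on pods that are still coming up, a filtered variant such as the following can help:

      # Shows only pods that are not yet Running or Completed
      kubectl get pods --all-namespaces | grep -Ev 'Running|Completed'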

  5. After the Deployment of Infrastructure Nodes page displays a green status for each node, click Next.

    The deployment process can take up to 15 minutes to complete.

  6. (Conditional) If any of the nodes show a red icon on the Deployment of Infrastructure Nodes page, click the retry icon.

    IMPORTANT: CDF might display the red icon if the process times out for a node. Because the retry operation executes the script again on that node, ensure that you click retry only once.

  7. After the Deployment of Infrastructure Services page shows that all the services are deployed with a green status, click Next.

    The deployment process can take up to 15 minutes to complete.

  8. Click Next.

  9. Configure the pre-deployment settings in the CDF Management Portal by making the following changes under ANALYTICS:

    • In the Cluster Configuration section, select 0 from the Hercules Search Engine Replicas drop-down list.

      NOTE: By default, the value for Hercules Search Engine Replicas is 1.

    • In the Database Configuration section, disable Database.

    • In the Single Sign-on Configuration section, specify the values for Client ID and Client Secret.

  10. To finish the deployment, click Next.

  11. Copy the Management Portal link displayed on the Configuration Complete page.

    Some of the pods on the Configuration Complete page might remain in a pending status until the product labels are applied on worker nodes.
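
    To see which pods are waiting, you can list everything still in the Pending phase; for example:

      # Pods stuck in Pending typically clear once the product labels are applied
      kubectl get pods --all-namespaces --field-selector=status.phase=Pending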

  12. (Conditional) For a high availability, multi-master deployment, complete the following steps after the deployment finishes to manually restart the keepalive process:

    1. Log in to the master node.

    2. Change to the following directory:

      cd /<K8S_HOME>/bin/

      For example:

      cd /opt/arcsight/kubernetes/bin/

    3. Run the following script:

      ./start_lb.sh
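
    After the restart, you can verify that all master and worker nodes report a Ready status; for example:

      # Every node in the cluster should show STATUS Ready
      kubectl get nodes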

  13. Continue to the post-installation steps.