8.1 Preparing Your Environment for CDF

8.1.1 Configuring the Nodes

For multi-node deployment, consider the following when configuring master and worker nodes:

  • You can deploy master and worker nodes on virtual machines. However, since most of the processing occurs on worker nodes, we recommend that you deploy worker nodes on physical servers.

  • Keep the host system configuration identical across master and worker nodes.

  • When using virtual machines, ensure that:

    • Resources are reserved and not shared

    • UUID and MAC addresses are static because dynamic addresses cause the Kubernetes cluster to fail

  • Install all master and worker nodes in the same subnet.

  • Add more worker nodes rather than installing bigger and faster hardware. Using more worker nodes enables you to perform maintenance on your cluster nodes with minimal impact to uptime. Adding more nodes also makes it easier to predict the cost of new hardware.

For high availability, consider the following when configuring master and worker nodes:

  • Create a virtual IP (VIP) that is shared by all master nodes and ensure that the VIP is in the same subnet as the master nodes. The VIP must not respond to ping before you install ESM for Fusion.

  • Install all master and worker nodes in the same subnet.

8.1.2 Setting System Parameters (Network Bridging)

Ensure that the br_netfilter module is installed on all master and worker nodes before changing system settings.

You can either run the following scripts that set system parameters automatically or you can set the system parameters manually:

  • /opt/<ESM_Command_Center_Installer_For_Fusion>/scripts/prereq_sysctl_conf.sh

  • /opt/<ESM_Command_Center_Installer_For_Fusion>/scripts/prereq_rc_local.sh

Perform the following steps on all the master and worker nodes to set the system parameters manually.

  1. Log in to the master node.

  2. Check whether the br_netfilter module is enabled:

    lsmod | grep br_netfilter

  3. (Conditional) If the command returns no output, the br_netfilter module is not installed. Install it:

    modprobe br_netfilter

    echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf

  4. Open the /etc/sysctl.conf file.

  5. Ensure that the following system parameters are set:

    net.bridge.bridge-nf-call-iptables=1

    net.bridge.bridge-nf-call-ip6tables=1

    net.ipv4.ip_forward = 1

    net.ipv4.tcp_tw_recycle = 0

    kernel.sem=50100 128256000 50100 2560

  6. Save the /etc/sysctl.conf file.

  7. Apply the updates to the node:

    /sbin/sysctl -p

  8. Repeat these steps on each worker node.
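
If you prefer to apply steps 2 through 7 in a single pass, the following sketch (an illustration only, not the bundled prereq_sysctl_conf.sh script) loads the module and appends the parameters before reloading sysctl:

    # Load the br_netfilter module now and on every boot
    modprobe br_netfilter
    echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
    # Append the required kernel parameters (review /etc/sysctl.conf afterward to avoid duplicates)
    {
      echo "net.bridge.bridge-nf-call-iptables=1"
      echo "net.bridge.bridge-nf-call-ip6tables=1"
      echo "net.ipv4.ip_forward=1"
      echo "net.ipv4.tcp_tw_recycle=0"
      echo "kernel.sem=50100 128256000 50100 2560"
    } >> /etc/sysctl.conf
    # Apply the settings to the running node
    /sbin/sysctl -p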

8.1.3 Checking MAC and Cipher Algorithms

To configure MAC and Cipher algorithms manually, ensure that the /etc/ssh/sshd_config file on every master and worker node is configured with at least one value from each of the following lists, which contain all supported algorithms. Add only the algorithms that meet the security policy of your organization.

  • For MAC algorithms: hmac-sha1,hmac-sha2-256,hmac-sha2-512,hmac-sha1-96

  • For Cipher algorithms: 3des-cbc,aes128-cbc,aes192-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,blowfish-cbc

For example, you could add the following lines to the /etc/ssh/sshd_config file on all master and worker nodes:

MACs hmac-sha2-256,hmac-sha2-512

Ciphers aes128-cbc,aes192-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
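
After you edit the file, you can check which MAC and Cipher algorithms sshd actually uses by printing its effective configuration; restart the service first so that the changes take effect:

    systemctl restart sshd.service
    # Print the effective MACs and Ciphers lines from the running configuration
    sshd -T | grep -iE '^(macs|ciphers)'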

8.1.4 Checking Password Authentication Settings

If you plan to use user name and password authentication for adding cluster nodes during the installation, ensure that the PasswordAuthentication parameter in the /etc/ssh/sshd_config file is set to yes. You do not need to check this setting if you add cluster nodes using user name and key authentication.

To ensure that the password authentication is enabled, perform the following steps on every master and worker node.

  1. Log in to the master node.

  2. Open the /etc/ssh/sshd_config file.

  3. Check whether the PasswordAuthentication parameter is set to yes. If not, set the parameter to yes as follows:

    PasswordAuthentication yes

  4. Restart the sshd service:

    systemctl restart sshd.service

  5. Repeat these steps for each worker node.
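
If you prefer a non-interactive change, a sketch along these lines (assuming the parameter is already present in the file, possibly commented out) enables password authentication and restarts sshd:

    # Uncomment or rewrite the PasswordAuthentication line, then restart sshd
    sed -i -E 's/^#?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
    systemctl restart sshd.service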

8.1.5 Installing the Required Operating System Packages

Ensure that the packages listed in the following table are installed on appropriate nodes. These packages are available in the standard yum repository.

Package                          Nodes
device-mapper-libs               Master and worker
java-1.8.0-openjdk               Master
libgcrypt                        Master and worker
libseccomp                       Master and worker
libtool-ltdl                     Master and worker
net-tools                        Master and worker
nfs-utils                        Master and worker
rpcbind                          Master node, worker node, and NFS server
systemd-libs (version >= 219)    Master and worker
unzip                            Master and worker
httpd-tools                      Master and worker
conntrack-tools                  Master and worker
lvm2                             Master and worker
curl                             Master and worker
libtool-libs                     Master and worker
openssl                          Master and worker
socat                            Master and worker
container-selinux                Master and worker

You can either run the /opt/<ESM_Command_Center_Installer_For_Fusion>/scripts/prereq_1_required_packages.sh script that installs the required OS packages automatically or install the required OS packages manually.

To install the packages manually:

  1. Log in to the master or worker nodes.

  2. Verify whether the package exists:

    yum list installed <package name>

  3. (Conditional) If the package is not installed, install the required package:

    yum -y install <package name>
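
Rather than checking packages one at a time, you could loop over the list from the table above; the following sketch installs whatever is missing (note that java-1.8.0-openjdk is required on master nodes only):

    # Install any packages from the table that are not already present
    for pkg in device-mapper-libs java-1.8.0-openjdk libgcrypt libseccomp libtool-ltdl \
               net-tools nfs-utils rpcbind systemd-libs unzip httpd-tools conntrack-tools \
               lvm2 curl libtool-libs openssl socat container-selinux; do
        yum list installed "$pkg" >/dev/null 2>&1 || yum -y install "$pkg"
    done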

8.1.6 Removing Libraries

Remove libraries that prevent Ingress from starting and confirm the removal when prompted:

yum remove rsh rsh-server vsftpd

8.1.7 Configuring Time Synchronization

You must implement Network Time Protocol (NTP) to synchronize time on all nodes in the cluster. To implement this protocol, use chrony. Ensure that chrony is running on all nodes in the cluster. chrony is installed by default on some versions of RHEL.

You can either run the /opt/<ESM_Command_Center_Installer_For_Fusion>/scripts/prereq_synchronize_time.sh script that synchronizes time automatically or configure the time synchronization manually.

To configure the time synchronization manually:

  1. Verify chrony configuration:

    chronyc tracking

  2. (Conditional) If chrony is not installed, install chrony:

    yum install chrony

  3. Start and enable chrony:

    systemctl start chronyd

    systemctl enable chronyd

  4. Synchronize the operating system time with the NTP server:

    chronyc makestep

  5. Restart the chronyd daemon:

    systemctl restart chronyd

  6. Check the server time synchronization:

    timedatectl

  7. Synchronize the hardware time:

    hwclock -w
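
To confirm that chrony is synchronizing against a reachable NTP source, you can also list its sources; the server currently selected for synchronization is marked with an asterisk:

    chronyc sources -v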

8.1.8 Configuring the Firewall

Ensure that the firewalld.service is enabled and running on all nodes.

You can either run the /opt/<ESM_Command_Center_Installer_For_Fusion>/scripts/prereq_firewall.sh script that configures the firewall automatically or configure the firewall manually.

When the firewall is enabled, you must also enable the masquerade settings.

To enable masquerade settings:

  1. Check whether the masquerade setting is already enabled:

    firewall-cmd --query-masquerade

    If the command returns yes, then masquerade is enabled.

    If the command returns no, then masquerade is disabled.

  2. (Conditional) If the masquerade setting is not enabled, enable masquerade:

    firewall-cmd --add-masquerade --permanent

    firewall-cmd --reload
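
The following sketch consolidates these firewall requirements: it ensures that firewalld is enabled and running, and enables masquerade only if it is not already enabled:

    systemctl enable firewalld
    systemctl start firewalld
    # Enable masquerade only if the query reports that it is disabled
    firewall-cmd --query-masquerade || { firewall-cmd --add-masquerade --permanent; firewall-cmd --reload; }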

8.1.9 Configuring Proxy

Ideally, the cluster should have no access to the Internet, and the proxy settings (http_proxy, https_proxy, and no_proxy) should not be set. However, if the cluster requires Internet access and you have already specified a proxy server for http and https connections, you must configure no_proxy correctly.

If you have the http_proxy or https_proxy set, then no_proxy definitions must contain at least the following values:

no_proxy=localhost, 127.0.0.1, <all Master and Worker cluster node IP addresses>,<all cluster node FQDNs>,<HA virtual IP Address>,<FQDN for the HA Virtual IP address>

For example:

  • export http_proxy="http://web-proxy.example.net:8080"

    export https_proxy="http://web-proxy.example.net:8080"

    export no_proxy="localhost,127.0.0.1,node1.swinfra.net,10.94.235.231,node2.swinfra.net,10.94.235.232,node3.swinfra.net,10.94.235.233,node4.swinfra.net,10.94.235.234,node5.swinfra.net,10.94.235.235,node6.swinfra.net,10.94.235.236,ha.swinfra.net,10.94.235.200"

  • export http_proxy="http://web-proxy.eu.example.net:8080"

    export https_proxy="localhost,127.0.0.1,swinfra.net,10.94.235.231,10.94.235.232,10.94.235.233,10.94.235.233,10.94.235.234,10.94.235.235,10.94.235.236,10.94.235.200"

NOTE: Incorrect configuration of proxy settings is a frequent cause of installation problems. To verify that proxy settings are configured properly on all master and worker nodes, run the following command and ensure that the output corresponds to the recommendations:

echo $http_proxy, $https_proxy, $no_proxy

If the firewall is turned off, the installation process will generate a warning. To prevent the warning, set the CDF install parameter --auto-configure-firewall to true.

8.1.10 Configuring DNS

Ensure that the host name resolution through Domain Name System (DNS) is working across all nodes in the cluster, including correct forward and reverse DNS lookups. Host name resolution must not be performed through /etc/hosts file settings.

You can either run the <download_directory>/scripts/prereq_disable_ipv6.sh script that configures DNS automatically or configure DNS manually.

Ensure that all nodes are configured with a Fully Qualified Domain Name (FQDN) and are in the same subnet. Transformation Hub uses the host system FQDN as its Kafka advertised.host.name. If the FQDN resolves successfully in the Network Address Translation (NAT) environment, producers and consumers will function correctly. If there are network-specific issues resolving FQDN through NAT, DNS will need to be updated to resolve these issues.

  • Transformation Hub supports ingestion of event data that contains both IPv4 and IPv6 addresses. However, its infrastructure cannot be installed in an IPv6-only network.

  • localhost must not resolve to an IPv6 address, such as ::1 (which is often the default). The installation process expects only IPv4 resolution to IP address 127.0.0.1. Comment out any ::1 reference in /etc/hosts.

  • The initial master node host name must not resolve to multiple IPv4 addresses; this includes lookups in /etc/hosts.

Testing Forward and Reverse DNS Lookup

Test that the forward and reverse lookup records for all servers were properly configured.

To test the forward lookup, run the following commands on every master and worker node in the cluster and on every producer and consumer host system, including:

  • All master nodes: master1.yourcompany.com, …, mastern.yourcompany.com

  • All worker nodes: worker1.yourcompany.com, …, workern.yourcompany.com

  • Your ArcMC nodes: arcmc1.yourcompany.com, ..., arcmcn.yourcompany.com

Use the nslookup or host commands to verify your DNS configuration.

NOTE: Do not use the ping command.

You must run the nslookup commands against every server specified in your /etc/resolv.conf file. Every server must be able to perform forward and reverse lookups properly and return identical results.

If you have a public DNS server specified in your /etc/resolv.conf file, such as the Google public DNS server 8.8.8.8 or 8.8.4.4, you must remove this server from your DNS configuration.

Run the commands as follows. Expected sample output is shown below each command.

  • hostname

    master1
  • hostname -s

    master1
  • hostname -f

    master1.yourcompany.com
  • hostname -d

    yourcompany.com
  • nslookup master1.yourcompany.com

    Server: 192.168.0.53
    Address: 192.168.0.53#53
    Name: master1.yourcompany.com
    Address: 192.168.0.1
  • nslookup master1

    Server: 192.168.0.53
    Address: 192.168.0.53#53
    Name: master1.yourcompany.com
    Address: 192.168.0.1
  • nslookup 192.168.0.1

    Server: 192.168.0.53
    Address: 192.168.0.53#53
    1.0.168.192.in-addr.arpa name = master1.yourcompany.com.
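
To avoid running the lookups by hand on every host, a sketch such as the following (with a placeholder host list) performs a forward and a reverse lookup for each node so that you can compare the results:

    # Replace the list with your master, worker, and ArcMC host names
    for h in master1.yourcompany.com worker1.yourcompany.com arcmc1.yourcompany.com; do
        ip=$(host "$h" | awk '/has address/ {print $4; exit}')   # forward lookup
        echo "$h resolves to $ip"
        host "$ip"                                               # reverse lookup should return the same FQDN
    done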

Understanding Kubernetes Network Subnet Settings

The Kubernetes network subnet is controlled by the --POD_CIDR and --SERVICE_CIDR parameters to the Container Deployment Foundation (CDF) installation portal.

The --POD_CIDR parameter specifies the network address range for Kubernetes pods. The address range specified in the --POD_CIDR parameter must not overlap with the IP range assigned for Kubernetes services, which is specified in the --SERVICE_CIDR parameter. The expected value is a Classless Inter-Domain Routing (CIDR) format IP address. CIDR notation comprises an IP address, a slash (/) character, and a network prefix (a decimal number). The minimum useful network prefix is /24 and the maximum useful network prefix is /8. The default value is 172.16.0.0/16.

For example:

POD_CIDR=172.16.0.0/16

The CIDR_SUBNETLEN parameter specifies the size of the subnet allocated to each host for Kubernetes pod network addresses. The default value is dependent on the value of the POD_CIDR parameter, as described in the following table.

POD_CIDR prefix    POD_CIDR_SUBNETLEN default    POD_CIDR_SUBNETLEN allowed values
/8 to /21          /24                           /(POD_CIDR prefix + 3) to /27
/22 to /24         /(POD_CIDR prefix + 3)        /(POD_CIDR prefix + 3) to /27
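
To make the relationship concrete: with the default POD_CIDR=172.16.0.0/16 and a POD_CIDR_SUBNETLEN of /24, each node receives a /24 slice of the pod network, which provides 2^(32-24) = 256 pod addresses per node, and the /16 range can accommodate up to 2^(24-16) = 256 such per-node subnets.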

The --SERVICE_CIDR parameter specifies the network address range for Kubernetes services. Smaller prefix values indicate a larger number of available addresses. The minimum useful network prefix is /27 and the maximum useful network prefix is /12. The default value is 172.17.17.0/24.

Change the default POD_CIDR or CIDR_SUBNETLEN values only when your network configuration requires you to do so. You must also ensure that you have sufficient understanding of the flannel network fabric configuration requirements before you make any changes.

8.1.11 Configuring the NFS Server

Container Deployment Foundation (CDF) requires an NFS server to maintain state information about the infrastructure and to store other pertinent data.

For high availability, in a dedicated master deployment with a minimum of three master nodes, NFS must run on a highly available external server. For optimal security, secure all NFS settings so that only the required hosts can connect to the NFS server.

Prerequisites

The prerequisites for configuring the NFS server are listed below:

  • Ensure that the ports 111, 2049, and 20048 are open on the NFS server for communication.

  • Enable and start the rpcbind and nfs-server services by executing the following commands on your NFS server:

    systemctl enable rpcbind

    systemctl start rpcbind

    systemctl enable nfs-server

    systemctl start nfs-server

  • Create and configure the following shared directories:

    • <NFS_VOLUME_DIRECTORY>/itom-vol

      The CDF NFS root folder, which contains the CDF database and files. The disk usage will grow gradually.

    • <NFS_VOLUME_DIRECTORY>/db-single-vol

      The volume for the CDF database. It is used only if you did not choose PostgreSQL High Availability (HA) for the CDF database setting. During this installation you will not choose the PostgreSQL HA option.

    • <NFS_VOLUME_DIRECTORY>/db-backup-vol

      The volume used for backup and restoration of the CDF PostgreSQL database. Its size depends on the implementation's processing requirements and data volumes.

    • <NFS_VOLUME_DIRECTORY>/itom-logging-vol

      The volume that stores the log output files of CDF components. The required size depends on how long the logs are kept.

    • <NFS_VOLUME_DIRECTORY>/arcsight-vol

      The volume that stores the component installation packages.

Creating NFS Shared Directories

  1. Log in to the NFS server as root.

  2. Create the following:

    • Group: arcsight with a GID 1999

    • User: arcsight with a UID 1999

    • NFS root directory: Root directory under which you can create all NFS shared directories.

      Example (<NFS_VOLUME_DIRECTORY>): /opt/NFS_Volume

  3. (Conditional) If you have previously installed any version of CDF, you must remove all NFS directories using the following command for each directory:

    rm -rf <path to NFS directory>

    For example:

    rm -rf /opt/NFS_Volume/itom-vol

  4. Create each NFS shared directory using the command:

    mkdir -p <path to NFS directory>

    For example:

    mkdir -p /opt/NFS_Volume/itom-vol

  5. For each NFS directory, set the permission to 755 using the command:

    chmod -R 755 <path to NFS directory>

    For example:

    chmod -R 755 /opt/NFS_Volume/itom-vol

  6. For each NFS directory, set the ownership to UID 1999 and GID 1999 using the command:

    chown -R 1999:1999 <path to NFS directory>

    For example:

    chown -R 1999:1999 /opt/NFS_Volume/itom-vol

    If you use a UID/GID other than 1999/1999, provide it during the CDF installation in the installation script arguments --system-group-id and --system-user-id.
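
The following sketch performs steps 4 through 6 for all five shared directories in one pass; it assumes the example /opt/NFS_Volume root and the default 1999/1999 UID and GID described above:

    NFS_ROOT=/opt/NFS_Volume
    for vol in itom-vol db-single-vol db-backup-vol itom-logging-vol arcsight-vol; do
        mkdir -p "$NFS_ROOT/$vol"             # create the shared directory
        chmod -R 755 "$NFS_ROOT/$vol"         # set the permissions
        chown -R 1999:1999 "$NFS_ROOT/$vol"   # assign ownership to the arcsight UID/GID
    done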

Exporting the NFS Configuration

For every NFS volume, add an export entry on the external NFS server for the addresses of your cluster nodes. You must export the NFS configuration with the appropriate IP addresses for the NFS mounts to work properly.

  1. In the /etc/exports file on the NFS server, add an entry for each NFS volume to grant every node in the cluster access to the volume shares.

    For example:

    /opt/NFS_Volume/arcsight-vol 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)

    /opt/NFS_Volume/itom-vol 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)

    /opt/NFS_Volume/db-single-vol 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)

    /opt/NFS_Volume/itom-logging-vol 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)

    /opt/NFS_Volume/db-backup-vol 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)

  2. After you modify the /etc/exports file, apply the changes by running the following command:

    exportfs -ra

    If you add more NFS shared directories later, you must restart the NFS service.

Verifying NFS Configuration

  1. Create a mount point under /mnt (for example, /mnt/nfs).

  2. Mount the NFS directory on your local system by using the command:

    • NFS v3: mount -t nfs 192.168.1.25:/opt/NFS_Volume/arcsight-vol /mnt/nfs

    • NFS v4: mount -t nfs4 192.168.1.25:/opt/NFS_Volume/arcsight-vol /mnt/nfs

  3. After creating all the directories, run the following commands on the NFS server:

    exportfs -ra

    systemctl restart rpcbind

    systemctl enable rpcbind

    systemctl restart nfs-server

    systemctl enable nfs-server
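
From any cluster node, you can also confirm that the exports are visible before CDF mounts them (192.168.1.25 is the example NFS server address used above):

    showmount -e 192.168.1.25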

Setting Up NFS By Using the Script

IMPORTANT: The information in this section applies only to non-high-availability and single-node deployments.

You can either run the /opt/<ESM_Command_Center_Installer_For_Fusion>/scripts/preinstall_create_nfs_share.sh script that sets up the NFS automatically or set up NFS manually.

To set up NFS manually:

  1. Copy setupNFS.sh to the NFS server.

    The setupNFS.sh file is located on the master node in the <download_directory>/esm-cmd-center-installer-for-fusion-x.x.x.x/installers/cdf-x.x.x.x/scripts folder.

  2. (Conditional) If you are using the default UID/GID, use the command:

    sh setupNFS.sh <path_to_nfs_directory>/volumes/volume_name

  3. (Conditional) If you are using a non-default UID/GID, use the command:

    sh setupNFS.sh <path_to_nfs_directory>/volumes/volume_name true <uid> <gid>

  4. Restart the NFS service:

    systemctl restart nfs

8.1.12 Disabling Swap Space

You must disable swap space on all master and worker nodes, excluding the node that has the database.

  1. Log in to the node where you want to disable swap space.

  2. Run the following command:

    swapoff -a

  3. In the /etc/fstab file, comment out the lines that contain swap as the disk type and save the file.

    For example:

    #/dev/mapper/centos_shcentos72x64-swap swap
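
If you prefer not to edit the file by hand, a sed sketch such as the following comments out any uncommented /etc/fstab line that contains a swap field; review the file afterward to confirm the result:

    # Comment out active swap entries in /etc/fstab
    sed -i '/^[^#].*[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab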

8.1.13 Creating Docker Thinpools

Optionally, to improve performance of Docker processing, set up a thinpool on each master and worker node. Before setting up a thinpool on each node, create a single disk partition on the node, as explained below.

For the Docker thinpool device (for example, sdb1), the minimum physical volume size is 30 GB.

Creating a New Partition

  1. Log in to the node.

  2. Run the command:

    fdisk <name of the new disk device that was added>

    For example:

    # fdisk /dev/sdb

  3. Enter n to create a new partition.

  4. When prompted, enter the partition number, sector, type (Linux LVM), and size for the first partition. To select the Linux LVM partition type:

    • Enter t to change the default partition type to Linux LVM

    • Type L to list the supported partition types

    • Type 8e to select the Linux LVM type

  5. When prompted, enter the partition number, sector, type (Linux LVM), and size for the second partition.

  6. Type p to view the partition table.

  7. Type w to save the partition table to disk.

  8. Run partprobe so that the kernel detects the new partition table.
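
If you prefer a non-interactive alternative to the fdisk dialog above, a parted sketch along these lines creates a single Linux LVM partition spanning the disk; it assumes that parted is installed and that /dev/sdb is the new device, and it is offered as an illustration rather than part of the documented procedure:

    parted -s /dev/sdb mklabel msdos            # create a new partition table
    parted -s /dev/sdb mkpart primary 0% 100%   # one partition covering the whole disk
    parted -s /dev/sdb set 1 lvm on             # mark the partition for LVM use
    partprobe /dev/sdb                          # make the kernel re-read the partition table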

Setting Up a Thinpool for Docker

  1. Create a physical volume with the following command:

    # pvcreate [physical device name]

    For example:

    # pvcreate /dev/sdb1

  2. Create a volume group with the following command:

    # vgcreate [volume group name] [physical volume name]

    For example:

    # vgcreate docker /dev/sdb1

  3. Create a logical volume (LV) for the thinpool and bootstrap with the following command:

    # lvcreate [logical volume name] [volume group name]

    For example, the data LV is 95% of the 'docker' volume group size. (Leaving free space allows for automatic expansion of either the data or metadata if space runs low, as a temporary measure.)

    # lvcreate --wipesignatures y -n thinpool docker -l 95%VG

    # lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG

  4. Convert the pool to a thinpool with the following command:

    # lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta

    Optionally, you can configure the auto-extension of thinpools using an lvm profile.

    1. Open (or create) the LVM profile file, typically /etc/lvm/profile/docker-thinpool.profile, which is the profile name applied in the lvchange command below.

    2. Specify a value for the parameters thin_pool_autoextend_threshold and thin_pool_autoextend_percent, each of which represents a percentage of the space used.

      For example:

      activation { thin_pool_autoextend_threshold=80 thin_pool_autoextend_percent=20 }

    3. Apply the lvm profile with the following command:

      # lvchange --metadataprofile docker-thinpool docker/thinpool

    4. Verify that the lvm profile is monitored with the following command:

      # lvs -o+seg_monitor

    5. Clear the graph driver directory with the following command, if Docker was previously started:

      # rm -rf /var/lib/docker/*

    6. Monitor the thinpool and volume group free space with the following commands:

      # lvs

      # lvs -a

      # vgs

    7. Check the logs to see the auto-extension of the thinpool when it hits the threshold:

      # journalctl -fu dm-event.service

8.1.14 Enabling Installation Permissions for a sudo User

If you choose to install CDF as a sudo user, the root user must grant non-root (sudo) users installation permission before they can perform the installation. Ensure that the provided user has permission to execute scripts under temporary directory /tmp on all master and worker nodes.

There are two distinct file edits that need to be performed: First on the initial master node only, and then on all remaining master and worker nodes.

Edit the sudoers File on the Initial Master Node

Make the following modifications only on the initial master node.

IMPORTANT: In the following commands, ensure that there is no more than a single space character after each comma that delimits parameters. Otherwise, you may get an error similar to the following when you attempt to save the file:

>>> /etc/sudoers: syntax error near line nn<<<

  1. Log in to the initial master node as the root user.

  2. Open the /etc/sudoers file using visudo.

  3. Add the following Cmnd_Alias line to the command aliases group in the sudoers file:

    Cmnd_Alias CDFINSTALL = <CDF_installation_package_directory>/scripts/precheck.sh, <CDF_installation_package_directory>/install, <K8S_HOME>/uninstall.sh, /usr/bin/kubectl, /usr/bin/docker, /usr/bin/mkdir, /bin/rm, /bin/su, /bin/chmod, /bin/tar, <K8S_HOME>/scripts/uploadimages.sh, /bin/chown

    where

    • CDF_installation_package_directory represents the directory where you unzipped the installation package. For example:

      /tmp/cdf-2019.05.0xxx.

    • K8S_HOME represents the Kubernetes directory, by default /opt/arcsight/kubernetes.

  4. Add the following lines to the wheel users group, replacing cdfuser with your sudo user name:

    %wheel ALL=(ALL) ALL

    cdfuser ALL=NOPASSWD: CDFINSTALL

    For example:

    Defaults: root !requiretty

  5. Locate the secure_path line in the sudoers file and ensure that the following paths are present:

    Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin

    By doing this, the sudo user can execute the showmount, curl, ifconfig, and unzip commands when installing CDF.

  6. Save the file.

Installing Components Using the sudo User

After completing the modifications to the sudoers file as described above, perform the following steps:

  1. Log in to the initial master node as the non-root sudo user to perform the installation.

  2. Download the installation files to a directory where the non-root sudo user has write permissions.

  3. Run CDF using the sudo command.

Edit the sudoers File on the Remaining Master and Worker Nodes

Make the following modifications only on the remaining master and worker nodes.

IMPORTANT: In the following commands, ensure that there is no more than a single space character after each comma that delimits parameters. Otherwise, you may get an error similar to the following when you attempt to save the file:

>>> /etc/sudoers: syntax error near line nn<<<

  1. Log in to each master and worker node.

  2. Open the /etc/sudoers file using visudo.

  3. Add the following Cmnd_Alias line to the command aliases group in the sudoers file.

    Cmnd_Alias CDFINSTALL = /tmp/scripts/pre-check.sh, <ITOM_Suite_Foundation_Node>/install, <K8S_HOME>/uninstall.sh, /usr/bin/kubectl, /usr/bin/docker, /usr/bin/mkdir, /bin/rm, /bin/su, /bin/chmod, /bin/tar, <K8S_HOME>/scripts/uploadimages.sh, /bin/chown

    where

    • ITOM_Suite_Foundation_Node represents the directory where you unzipped the installation package. For example:

      /tmp/ITOM_Suite_Foundation_2019.05.0xxx

    • K8S_HOME represents the Kubernetes directory, by default /opt/arcsight/kubernetes.

  4. Add the following lines to the wheel users group, replacing cdfuser with your sudo user name:

    %wheel ALL=(ALL) ALL

    cdfuser ALL=NOPASSWD: CDFINSTALL

    For example:

    Defaults: root !requiretty

  5. Locate the secure_path line in the sudoers file and ensure that the following paths are present:

    Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin

    By doing this, the sudo user can execute the showmount, curl, ifconfig, and unzip commands when installing CDF.

  6. Save the file.

Repeat the process for each remaining master and worker node.
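
After editing the sudoers file on a node, you can confirm that the command alias is in effect by listing the commands the sudo user is allowed to run (cdfuser is the example user name used above):

    sudo -l -U cdfuser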