The procedures in this section enable you to configure your environment for a successful installation of the Container Deployment Foundation (CDF).
For multi-node deployment, consider the following when configuring master and worker nodes:
You can deploy master and worker nodes on virtual machines. However, because most of the processing occurs on worker nodes, we recommend that you deploy worker nodes on physical servers.
Keep the host system configuration identical across master and worker nodes.
When using virtual machines, ensure that:
Resources are reserved and not shared
UUID and MAC addresses are static because dynamic addresses cause the Kubernetes cluster to fail
Install all master and worker nodes in the same subnet.
Add more worker nodes rather than installing bigger, faster hardware. Using more worker nodes enables you to perform maintenance on your cluster nodes with minimal impact to uptime. Adding more nodes also helps you predict costs for new hardware.
For high availability, consider the following when configuring master and worker nodes:
Create a virtual IP (VIP) that is shared by all master nodes, and ensure that the VIP is in the same subnet as the master nodes. The VIP must not respond when pinged before you install ESM for Fusion.
Install all master and worker nodes in the same subnet.
Ensure that the br_netfilter module is installed on all master and worker nodes before changing system settings.
You can either run the following scripts that set system parameters automatically or you can set the system parameters manually:
/opt/<ESM_Command_Center_Installer_For_Fusion>/scripts/prereq_sysctl_conf.sh
/opt/<ESM_Command_Center_Installer_For_Fusion>/scripts/prereq_rc_local.sh
Perform the following steps on all the master and worker nodes to set the system parameters manually.
Log in to the master node.
Check whether the br_netfilter module is enabled:
lsmod | grep br_netfilter
If there is no return value, the br_netfilter module is not installed. Install it:
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
Open the /etc/sysctl.conf file.
Ensure that the following system parameters are set:
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_recycle = 0
kernel.sem=50100 128256000 50100 2560
Save the /etc/sysctl.conf file.
Apply the updates to the node:
/sbin/sysctl -p
Repeat these steps on each worker node.
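The manual steps above can be sketched as a single idempotent helper that appends only the parameters that are missing. This is an illustrative example, not one of the product's prereq scripts; the function name and the file-path argument are assumptions.

```shell
# Illustrative sketch: ensure the required kernel parameters are present
# in a sysctl.conf-style file, appending only the ones that are missing.
ensure_sysctl_params() {
  # $1: path to a sysctl.conf-style file (normally /etc/sysctl.conf)
  conf="$1"
  for kv in \
    "net.bridge.bridge-nf-call-iptables=1" \
    "net.bridge.bridge-nf-call-ip6tables=1" \
    "net.ipv4.ip_forward=1" \
    "net.ipv4.tcp_tw_recycle=0" \
    "kernel.sem=50100 128256000 50100 2560"
  do
    key=${kv%%=*}
    # Append only if the key is not already set (with "=" or a space after it).
    grep -q "^${key}[ =]" "$conf" 2>/dev/null || echo "$kv" >> "$conf"
  done
}
# Usage (as root, on each node): ensure_sysctl_params /etc/sysctl.conf && /sbin/sysctl -p
```

Running the function twice makes no further changes, so it is safe to rerun on nodes that are partially configured.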
To configure MAC and cipher algorithms manually, ensure that the /etc/ssh/sshd_config file on every master and worker node is configured with at least one of the following values. The lists below include all supported algorithms; add only the algorithms that meet the security policy of your organization.
For MAC algorithms: hmac-sha1,hmac-sha2-256,hmac-sha2-512,hmac-sha1-96
For Cipher algorithms: 3des-cbc,aes128-cbc,aes192-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,blowfish-cbc
For example, you could add the following lines to the /etc/ssh/sshd_config file on all master and worker nodes:
MACs hmac-sha2-256,hmac-sha2-512
Ciphers aes128-cbc,aes192-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
If you plan to use user name and password authentication for adding cluster nodes during the installation, ensure that the PasswordAuthentication parameter in the /etc/ssh/sshd_config file is set to yes. You do not need to check the password authentication setting when you add cluster nodes using user name and key authentication.
To ensure that the password authentication is enabled, perform the following steps on every master and worker node.
Log in to the master node.
Open the /etc/ssh/sshd_config file.
Check whether the PasswordAuthentication parameter is set to yes. If not, set the parameter to yes as follows:
PasswordAuthentication yes
Restart the sshd service:
systemctl restart sshd.service
Repeat these steps for each worker node.
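As a sanity check across nodes, the effective PasswordAuthentication state can be read programmatically. The helper below is an illustrative sketch (its name is an assumption, not a product script); note that OpenSSH treats an absent PasswordAuthentication directive as yes.

```shell
# Illustrative sketch: report whether password authentication is effectively
# enabled in a given sshd_config-style file. sshd uses the first occurrence
# of a directive, and defaults to "yes" when the directive is absent.
password_auth_enabled() {
  # $1: path to an sshd_config-style file
  val=$(awk '/^[[:space:]]*PasswordAuthentication[[:space:]]/ {print $2; exit}' "$1")
  [ -z "$val" ] || [ "$val" = "yes" ]
}
# Usage on a node:
#   password_auth_enabled /etc/ssh/sshd_config || echo "set PasswordAuthentication yes"
```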
Ensure that the packages listed in the following table are installed on appropriate nodes. These packages are available in the standard yum repository.
Package | Nodes
---|---
device-mapper-libs | Master and worker
java-1.8.0-openjdk | Master
libgcrypt | Master and worker
libseccomp | Master and worker
libtool-ltdl | Master and worker
net-tools | Master and worker
nfs-utils | Master and worker
rpcbind | Master node, worker node, and NFS server
systemd-libs (version >= 219) | Master and worker
unzip | Master and worker
httpd-tools | Master and worker
conntrack-tools | Master and worker
lvm2 | Master and worker
curl | Master and worker
libtool-libs | Master and worker
openssl | Master and worker
socat | Master and worker
container-selinux | Master and worker
You can either run the /opt/<ESM_Command_Center_Installer_For_Fusion>/scripts/prereq_1_required_packages.sh script that installs the required OS packages automatically or install the required OS packages manually.
To install the packages manually:
Log in to the master or worker nodes.
Verify whether the package exists:
yum list installed <package name>
(Conditional) If the package is not installed, install the required package:
yum -y install <package name>
Remove libraries that prevent Ingress from starting and confirm the removal when prompted:
yum remove rsh rsh-server vsftpd
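The verification and installation steps above can be combined into one pass over the package table. The sketch below is illustrative, not a product script: the PACKAGES variable and the APPLY guard are assumptions. Run it as root with APPLY=yes to actually install; without the guard variable set, it makes no changes.

```shell
# Illustrative sketch: install any missing required packages in one pass.
# PACKAGES and the APPLY guard are assumptions, not part of the product scripts.
PACKAGES="device-mapper-libs java-1.8.0-openjdk libgcrypt libseccomp \
libtool-ltdl net-tools nfs-utils rpcbind systemd-libs unzip httpd-tools \
conntrack-tools lvm2 curl libtool-libs openssl socat container-selinux"
# Note: per the table above, java-1.8.0-openjdk is required on master nodes only.

if [ "${APPLY:-no}" = "yes" ]; then
  for pkg in $PACKAGES; do
    # Install only the packages that are not already present.
    yum list installed "$pkg" >/dev/null 2>&1 || yum -y install "$pkg"
  done
  # Remove libraries that prevent Ingress from starting (-y skips the prompt).
  yum remove -y rsh rsh-server vsftpd
fi
```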
You must implement a Network Time Protocol (NTP) to synchronize time on all nodes in the cluster. To implement this protocol, use chrony. Ensure that chrony is running on all nodes in the cluster. chrony is installed by default on some versions of RHEL.
You can either run the /opt/<ESM_Command_Center_Installer_For_Fusion>/scripts/prereq_synchronize_time.sh script that synchronizes time automatically or configure the time synchronization manually.
To configure the time synchronization manually:
Verify chrony configuration:
chronyc tracking
(Conditional) If chrony is not installed, install chrony:
yum install chrony
Start and enable chrony:
systemctl start chronyd
systemctl enable chronyd
Synchronize the operating system time with the NTP server:
chronyc makestep
Restart the chronyd daemon:
systemctl restart chronyd
Check the server time synchronization:
timedatectl
Synchronize the hardware time:
hwclock -w
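After synchronizing, it can be useful to confirm that the clock offset is within tolerance. The helper below is an illustrative sketch (the function name and the 0.5-second threshold are assumptions): it parses the "System time" line that `chronyc tracking` prints and fails when the offset exceeds the threshold.

```shell
# Illustrative sketch: fail when the chrony-reported clock offset exceeds
# a threshold. The input is the "System time" line from `chronyc tracking`.
clock_offset_ok() {
  # $1: "System time" line from `chronyc tracking`   $2: threshold in seconds
  off=$(echo "$1" | sed -n 's/.*System time[^0-9]*\([0-9.]*\) seconds.*/\1/p')
  [ -n "$off" ] || return 1   # no parsable offset counts as a failure
  awk -v o="$off" -v t="$2" 'BEGIN { exit (o <= t) ? 0 : 1 }'
}
# Usage on a node:
#   clock_offset_ok "$(chronyc tracking | grep 'System time')" 0.5 || echo "clock drifted"
```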
Ensure that the firewalld.service is enabled and running on all nodes.
You can either run the /opt/<ESM_Command_Center_Installer_For_Fusion>/scripts/prereq_firewall.sh script that configures the firewall automatically or configure the firewall manually.
When the firewall is enabled, you must also enable the masquerade settings.
To enable masquerade settings:
Check whether the masquerade setting is already enabled:
firewall-cmd --query-masquerade
If the command returns yes, then masquerade is enabled.
If the command returns no, then masquerade is disabled.
(Conditional) If the masquerade setting is not enabled, enable masquerade:
firewall-cmd --add-masquerade --permanent
firewall-cmd --reload
Ensure that the cluster has no access to the Internet and that the proxy settings (http_proxy, https_proxy, and no_proxy) are not set. However, if a connection with the Internet is needed and you already specified a proxy server for http and https connections, you must correctly configure no_proxy.
If you have the http_proxy or https_proxy set, then no_proxy definitions must contain at least the following values:
no_proxy=localhost,127.0.0.1,<all master and worker cluster node IP addresses>,<all cluster node FQDNs>,<HA virtual IP address>,<FQDN for the HA virtual IP address>
For example:
export http_proxy="http://web-proxy.example.net:8080"
export https_proxy="http://web-proxy.example.net:8080"
export no_proxy="localhost,127.0.0.1,node1.swinfra.net,10.94.235.231,node2.swinfra.net,10.94.235.232,node3.swinfra.net,10.94.235.233,node4.swinfra.net,10.94.235.234,node5.swinfra.net,10.94.235.235,node6.swinfra.net,10.94.235.236,ha.swinfra.net,10.94.235.200"
export http_proxy="http://web-proxy.eu.example.net:8080"
export https_proxy="http://web-proxy.eu.example.net:8080"
export no_proxy="localhost,127.0.0.1,swinfra.net,10.94.235.231,10.94.235.232,10.94.235.233,10.94.235.234,10.94.235.235,10.94.235.236,10.94.235.200"
NOTE: Incorrect configuration of proxy settings is a frequent installation problem. To verify that proxy settings are configured properly on all master and worker nodes, run the following command and ensure that the output matches the recommendations:
echo $http_proxy, $https_proxy, $no_proxy
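Beyond inspecting the echoed values, a simple membership check can confirm that every cluster host and IP is covered by no_proxy. The helper below is an illustrative sketch (the function name and the sample host list are assumptions).

```shell
# Illustrative sketch: verify that every required host/IP appears as an
# exact comma-separated entry in a no_proxy value.
proxy_covers() {
  # $1: value of no_proxy   $2: space-separated hosts/IPs that must be listed
  for entry in $2; do
    case ",$1," in
      *",$entry,"*) ;;                                    # entry present
      *) echo "missing from no_proxy: $entry"; return 1 ;; # entry absent
    esac
  done
}
# Usage (host list is illustrative):
#   proxy_covers "$no_proxy" "localhost 127.0.0.1 master1.example.com 10.94.235.231"
```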
If the firewall is turned off, the installation process will generate a warning. To prevent the warning, set the CDF install parameter --auto-configure-firewall to true.
Ensure that the host name resolution through Domain Name System (DNS) is working across all nodes in the cluster, including correct forward and reverse DNS lookups. Host name resolution must not be performed through /etc/hosts file settings.
You can either run the <download_directory>/scripts/prereq_disable_ipv6.sh script that configures DNS automatically or configure DNS manually.
Ensure that all nodes are configured with a Fully Qualified Domain Name (FQDN) and are in the same subnet. Transformation Hub uses the host system FQDN as its Kafka advertised.host.name. If the FQDN resolves successfully in the Network Address Translation (NAT) environment, producers and consumers will function correctly. If there are network-specific issues resolving FQDN through NAT, DNS will need to be updated to resolve these issues.
Transformation Hub supports ingestion of event data that contains both IPv4 and IPv6 addresses. However, its infrastructure cannot be installed in an IPv6-only network.
localhost must not resolve to an IPv6 address (such as ::1); this is the default state. The installation process expects only IPv4 resolution, to the IP address 127.0.0.1. Comment out any ::1 reference in the /etc/hosts file.
The initial master node host name must not resolve to multiple IPv4 addresses; this includes lookups in /etc/hosts.
Test that the forward and reverse lookup records for all servers were properly configured.
To test the forward lookup, run the following commands on every master and worker node in the cluster and on every producer and consumer host system, including:
All master nodes: master1.yourcompany.com, …, mastern.yourcompany.com
All worker nodes: worker1.yourcompany.com, …, workern.yourcompany.com
Your ArcMC nodes: arcmc1.yourcompany.com, ..., arcmcn.yourcompany.com
Use the nslookup or host commands to verify your DNS configuration.
NOTE: Do not use the ping command.
You must run the nslookup commands against every server specified in your /etc/resolv.conf file. Every server must be able to perform forward and reverse lookups properly and return identical results.
If you have a public DNS server specified in your /etc/resolv.conf file, such as the Google public DNS server 8.8.8.8 or 8.8.4.4, you must remove this server from your DNS configuration.
Run the commands as follows. Expected sample output is shown below each command.
hostname
master1
hostname -s
master1
hostname -f
master1.yourcompany.com
hostname -d
yourcompany.com
nslookup master1.yourcompany.com
Server: 192.168.0.53 Address: 192.168.0.53#53 Name: master1.yourcompany.com Address: 192.168.0.1
nslookup master1
Server: 192.168.0.53 Address: 192.168.0.53#53 Name: master1.yourcompany.com Address: 192.168.0.1
nslookup 192.168.0.1
Server: 192.168.0.53 Address: 192.168.0.53#53 1.0.168.192.in-addr.arpa name = master1.yourcompany.com.
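The forward/reverse consistency check above can be automated per host. The sketch below is illustrative (the function name is an assumption); it uses getent, which consults the node's actual resolver configuration, instead of parsing nslookup output.

```shell
# Illustrative sketch: verify that a host's forward lookup and the reverse
# lookup of the resulting IP agree.
dns_roundtrip() {
  # $1: fully qualified host name to check
  fqdn="$1"
  ip=$(getent hosts "$fqdn" | awk '{print $1; exit}')
  [ -n "$ip" ] || { echo "no address record for $fqdn"; return 1; }
  name=$(getent hosts "$ip" | awk '{print $2; exit}')
  [ "$name" = "$fqdn" ] || { echo "reverse of $ip is '$name', not $fqdn"; return 1; }
}
# Usage (host names are illustrative):
#   for h in master1.yourcompany.com worker1.yourcompany.com; do dns_roundtrip "$h"; done
```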
The Kubernetes network subnet is controlled by the --POD_CIDR and --SERVICE_CIDR parameters of the Container Deployment Foundation (CDF) installation portal.
The --POD_CIDR parameter specifies the network address range for Kubernetes pods. The address range specified in the --POD_CIDR parameter must not overlap with the IP range assigned for Kubernetes services, which is specified in the --SERVICE_CIDR parameter. The expected value is a Classless Inter-Domain Routing (CIDR) format IP address. CIDR notation comprises an IP address, a slash (/) character, and a network prefix (a decimal number). The minimum useful network prefix is /24 and the maximum useful network prefix is /8. The default value is 172.16.0.0/16.
For example:
POD_CIDR=172.16.0.0/16
The POD_CIDR_SUBNETLEN parameter specifies the size of the subnet allocated to each host for Kubernetes pod network addresses. The default value depends on the value of the POD_CIDR parameter, as described in the following table.

POD_CIDR prefix | POD_CIDR_SUBNETLEN default | POD_CIDR_SUBNETLEN allowed values
---|---|---
/8 to /21 | /24 | /(POD_CIDR prefix + 3) to /27
/22 to /24 | /(POD_CIDR prefix + 3) | /(POD_CIDR prefix + 3) to /27
For the --SERVICE_CIDR parameter, smaller prefix values indicate a larger number of available addresses. The minimum useful network prefix is /27 and the maximum useful network prefix is /12. The default value is 172.17.17.0/24.
Change the default POD_CIDR or CIDR_SUBNETLEN values only when your network configuration requires you to do so. You must also ensure that you have sufficient understanding of the flannel network fabric configuration requirements before you make any changes.
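If you do change the defaults, it is worth confirming that the pod and service ranges do not overlap before running the installer. The helpers below are an illustrative sketch (the function names are assumptions) using plain shell arithmetic on IPv4 CIDRs.

```shell
# Illustrative sketch: detect overlap between two IPv4 CIDR ranges.
ip2int() {
  # Convert a dotted-quad IPv4 address to a 32-bit integer.
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

cidr_overlap() {
  # $1, $2: CIDRs such as 172.16.0.0/16; returns 0 (true) when they overlap.
  a=${1%/*}; ap=${1#*/}; b=${2%/*}; bp=${2#*/}
  p=$(( ap < bp ? ap : bp ))                       # compare at the coarser prefix
  mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$a") & mask )) -eq $(( $(ip2int "$b") & mask )) ]
}
# Usage: cidr_overlap 172.16.0.0/16 172.17.17.0/24 && echo "ranges overlap"
```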
Container Deployment Foundation (CDF) requires an NFS server to maintain state information about the infrastructure and to store other pertinent data.
For high availability, NFS must run on a highly available external server in the case of a dedicated master deployment having a minimum of three master nodes. For optimal security, secure all NFS settings to allow only required hosts to connect to the NFS server.
The prerequisites for configuring the NFS server are listed below:
Ensure that the ports 111, 2049, and 20048 are open on the NFS server for communication.
Enable the rpcbind and nfs-server package by executing the following commands on your NFS server:
systemctl enable rpcbind
systemctl start rpcbind
systemctl enable nfs-server
systemctl start nfs-server
Create and configure the following shared directories:
Directory | Description
---|---
<NFS_VOLUME_DIRECTORY>/itom-vol | The CDF NFS root folder, which contains the CDF database and files. Disk usage grows gradually.
<NFS_VOLUME_DIRECTORY>/db-single-vol | Used for the CDF database. This volume is available only if you did not choose PostgreSQL High Availability (HA) for the CDF database setting. During the installation, you will not choose the Postgres database HA option.
<NFS_VOLUME_DIRECTORY>/db-backup-vol | Used for backup and restoration of the CDF PostgreSQL database. Its size depends on the implementation's processing requirements and data volumes.
<NFS_VOLUME_DIRECTORY>/itom-logging-vol | Stores the log output files of CDF components. The required size depends on how long the logs are kept.
<NFS_VOLUME_DIRECTORY>/arcsight-vol | Stores the component installation packages.
Log in to the NFS server as root.
Create the following:
Group: arcsight with a GID 1999
User: arcsight with a UID 1999
NFS root directory: Root directory under which you can create all NFS shared directories.
For example, the NFS root directory (<NFS_Volume_Directory>) could be /opt/NFS_Volume.
(Conditional) If you have previously installed any version of CDF, you must remove all NFS directories using the following command for each directory:
rm -rf <path to NFS directory>
For example:
rm -rf /opt/NFS_Volume/itom-vol
Create each NFS shared directory using the command:
mkdir -p <path to NFS directory>
For example:
mkdir -p /opt/NFS_Volume/itom-vol
For each NFS directory, set the permission to 755 using the command:
chmod -R 755 <path to NFS directory>
For example:
chmod -R 755 /opt/NFS_Volume/itom-vol
For each NFS directory, set the ownership to UID 1999 and GID 1999 using the command:
chown -R 1999:1999 <path to NFS directory>
For example:
chown -R 1999:1999 /opt/NFS_Volume/itom-vol
If you use a UID/GID other than 1999/1999, provide it during the CDF installation in the installation script arguments --system-group-id and --system-user-id.
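The directory creation, permission, and ownership steps above can be sketched as a small helper that loops over the five volumes. This is an illustrative example (the function name is an assumption); the volume names, mode 755, and the 1999:1999 defaults follow the values in this section.

```shell
# Illustrative sketch: create the five NFS shared directories with the
# required permissions and ownership. Run as root on the NFS server.
create_nfs_volumes() {
  # $1: NFS root directory   $2: owner as uid:gid (1999:1999 unless overridden)
  for vol in itom-vol db-single-vol db-backup-vol itom-logging-vol arcsight-vol; do
    mkdir -p "$1/$vol"
    chmod -R 755 "$1/$vol"
    chown -R "$2" "$1/$vol"
  done
}
# Usage on the NFS server (as root): create_nfs_volumes /opt/NFS_Volume 1999:1999
```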
For every NFS volume, run the following set of commands on the external NFS server, based on the IP address. You must export the NFS configuration with the appropriate IP address for the NFS mount to work properly.
For every node in the cluster, you must update the configuration to grant the node access to the NFS volume shares.
For example:
/opt/NFS_Volume/arcsight-vol 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)
/opt/NFS_Volume/itom-vol 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)
/opt/NFS_Volume/db-single-vol 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)
/opt/NFS_Volume/itom-logging-vol 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)
/opt/NFS_Volume/db-backup-vol 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)
Modify the /etc/exports file and run the following command:
exportfs -ra
If you add more NFS shared directories later, you must restart the NFS service.
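The export entries shown above can be generated for all five volumes in one step. The sketch below is illustrative (the function name is an assumption); the subnet argument must match your cluster, and the options string follows the examples in this section.

```shell
# Illustrative sketch: print /etc/exports entries for the five CDF volumes.
print_exports() {
  # $1: NFS root directory   $2: client subnet in CIDR form
  opts="rw,sync,anonuid=1999,anongid=1999,all_squash"
  for vol in itom-vol db-single-vol db-backup-vol itom-logging-vol arcsight-vol; do
    echo "$1/$vol $2($opts)"
  done
}
# Usage on the NFS server (as root):
#   print_exports /opt/NFS_Volume 192.168.1.0/24 >> /etc/exports && exportfs -ra
```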
Create the NFS mount directory under /mnt (for example, /mnt/nfs).
Mount the NFS directory on your local system by using the command:
NFS v3: mount -t nfs 192.168.1.25:/opt/NFS_Volume/arcsight-vol /mnt/nfs
NFS v4: mount -t nfs4 192.168.1.25:/opt/NFS_Volume/arcsight-vol /mnt/nfs
After creating all the directories, run the following commands on the NFS server:
exportfs -ra
systemctl restart rpcbind
systemctl enable rpcbind
systemctl restart nfs-server
systemctl enable nfs-server
IMPORTANT: The information in this section applies only to non-high-availability and single-node deployments.
You can either run the /opt/<ESM_Command_Center_Installer_For_Fusion>/scripts/preinstall_create_nfs_share.sh script that sets up the NFS automatically or set up NFS manually.
To set up NFS manually:
Copy setupNFS.sh to the NFS server.
The setupNFS.sh file is located on the master node in the <download_directory>/esm-cmd-center-installer-for-fusion-x.x.x.x/installers/cdf-x.x.x.x/scripts folder.
(Conditional) If you are using the default UID/GID, use the command:
sh setupNFS.sh <path_to_nfs_directory>/volumes/volume_name
(Conditional) If you are using a non-default UID/GID, use the command:
sh setupNFS.sh <path_to_nfs_directory>/volumes/volume_name true <uid> <gid>
Restart the NFS service:
systemctl restart nfs
You must disable swap space on all master and worker nodes, excluding the node that has the database.
Log in to the node where you want to disable swap space.
Run the following command:
swapoff -a
In the /etc/fstab file, comment out the lines that contain swap as the disk type and save the file.
For example:
#/dev/mapper/centos_shcentos72x64-swap swap
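Both halves of this step (no active swap, no remaining swap entries in /etc/fstab) can be confirmed with a check like the one below. It is an illustrative sketch (the function name is an assumption); the file arguments make it testable, but on a node you would pass /proc/swaps and /etc/fstab.

```shell
# Illustrative sketch: confirm swap is fully disabled on a node.
swap_disabled() {
  # $1: /proc/swaps-style file   $2: fstab-style file
  # /proc/swaps contains only its header line when no swap is active.
  [ "$(wc -l < "$1")" -le 1 ] || { echo "swap is still active"; return 1; }
  # Fail if any uncommented fstab line still references the swap type.
  ! grep -Eq '^[^#].*[[:space:]]swap[[:space:]]' "$2"
}
# Usage on a node: swap_disabled /proc/swaps /etc/fstab && echo "swap is off"
```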
Optionally, to improve performance of Docker processing, set up a thinpool on each master and worker node. Before setting up a thinpool on each node, create a single disk partition on the node, as explained below.
For the thinpool device for Docker (for example, sdb1) the minimum physical volume size is 30 GB.
Log in to the node.
Run the command:
fdisk <name of the new disk device that was added>
For example:
# fdisk /dev/sdb
Note that you run fdisk on the whole disk (such as /dev/sdb); this procedure creates the partition (such as /dev/sdb1).
Enter n to create a new partition.
When prompted, enter the partition number, sector, type (Linux LVM), and size for the first partition. To select the Linux LVM partition type:
Enter t to change the default partition type to Linux LVM
Type L to list the supported partition types
Type 8e to select the Linux LVM type
When prompted, enter the partition number, sector, type (Linux LVM), and size for the second partition.
Type p to view the partition table.
Type w to save the partition table to disk.
Type partprobe to make the kernel aware of the new partition table.
Create a physical volume with the following command:
# pvcreate [physical device name]
For example:
# pvcreate /dev/sdb1
Create a volume group with the following command:
# vgcreate [volume group name] [physical volume name]
For example:
# vgcreate docker /dev/sdb1
Create a logical volume (LV) for the thinpool and bootstrap with the following command:
# lvcreate [logical volume name] [volume group name]
For example, the data LV is 95% of the 'docker' volume group size. (Leaving free space allows for automatic expansion of either the data or metadata if space runs low, as a temporary measure.)
# lvcreate --wipesignatures y -n thinpool docker -l 95%VG
# lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
Convert the pool to a thinpool with the following command:
# lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta
Optionally, you can configure the auto-extension of thinpools using an lvm profile.
Open the lvm profile (for example, /etc/lvm/profile/docker-thinpool.profile).
Specify a value for the parameters thin_pool_autoextend_threshold and thin_pool_autoextend_percent, each of which represents a percentage of the space used.
For example:
activation { thin_pool_autoextend_threshold=80 thin_pool_autoextend_percent=20 }
Apply the lvm profile with the following command:
# lvchange --metadataprofile docker-thinpool docker/thinpool
Verify that the lvm profile is monitored with the following command:
# lvs -o+seg_monitor
Clear the graph driver directory with the following command, if Docker was previously started:
# rm -rf /var/lib/docker/*
Monitor the thinpool and volume group free space with the following commands:
# lvs
# lvs -a
# vgs
Check the logs to see the auto-extension of the thinpool when it hits the threshold:
# journalctl -fu dm-event.service
If you choose to install CDF as a sudo user, the root user must grant non-root (sudo) users installation permission before they can perform the installation. Ensure that the provided user has permission to execute scripts under temporary directory /tmp on all master and worker nodes.
There are two distinct file edits that need to be performed: First on the initial master node only, and then on all remaining master and worker nodes.
Make the following modifications only on the initial master node.
IMPORTANT: In the following commands, you must ensure that there is no more than a single space character after each comma that delimits parameters. Otherwise, you may get an error similar to this when you attempt to save the file:
>>> /etc/sudoers: syntax error near line nn<<<
Log in to the initial master node as the root user.
Open the /etc/sudoers file using visudo.
Add the following Cmnd_Alias line to the command aliases group in the sudoers file:
Cmnd_Alias CDFINSTALL = <CDF_installation_package_directory>/scripts/precheck.sh, <CDF_installation_package_directory>/install, <K8S_HOME>/uninstall.sh, /usr/bin/kubectl, /usr/bin/docker, /usr/bin/mkdir,/bin/rm, /bin/su, /bin/chmod, /bin/tar, <K8S_HOME>/scripts/uploadimages.sh,/bin/chown
where
CDF_installation_package_directory represents the directory where you unzipped the installation package. For example:
/tmp/cdf-2019.05.0xxx.
K8S_HOME represents the Kubernetes directory, by default /opt/arcsight/kubernetes.
Add the following lines to the wheel users group, replacing cdfuser with your sudo user name:
%wheel ALL=(ALL) ALL
cdfuser ALL=NOPASSWD: CDFINSTALL
For example:
Defaults: root !requiretty
Locate the secure_path line in the sudoers file and ensure that the following paths are present:
Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin
By doing this, the sudo user can execute the showmount, curl, ifconfig, and unzip commands when installing CDF.
Save the file.
After completing the modifications to the sudoers file as described above, perform the following steps:
Log in to the initial master node as the non-root sudo user to perform the installation.
Download the installation files to a directory where the non-root sudo user has write permissions.
Run CDF using the sudo command.
Make the following modifications only on the remaining master and worker nodes.
IMPORTANT: In the following commands, you must ensure that there is at most a single space character after each comma that delimits parameters. Otherwise, you may get an error similar to this when you attempt to save the file:
>>> /etc/sudoers: syntax error near line nn<<<
Log in to each master and worker node.
Open the /etc/sudoers file using visudo.
Add the following Cmnd_Alias line to the command aliases group in the sudoers file.
Cmnd_Alias CDFINSTALL = /tmp/scripts/pre-check.sh, <ITOM_Suite_Foundation_Node>/install, <K8S_HOME>/uninstall.sh, /usr/bin/kubectl, /usr/bin/docker,/usr/bin/mkdir, /bin/rm, /bin/su, /bin/chmod, /bin/tar, <K8S_HOME>/scripts/uploadimages.sh, /bin/chown
where
ITOM_Suite_Foundation_Node represents the directory where you unzipped the installation package. For example:
/tmp/ITOM_Suite_Foundation_2019.05.0xxx
K8S_HOME represents the Kubernetes directory, by default /opt/arcsight/kubernetes.
Add the following lines to the wheel users group, replacing cdfuser with your sudo user name:
%wheel ALL=(ALL) ALL
cdfuser ALL=NOPASSWD: CDFINSTALL
For example:
Defaults: root !requiretty
Locate the secure_path line in the sudoers file and ensure that the following paths are present:
Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin
By doing this, the sudo user can execute the showmount, curl, ifconfig, and unzip commands when installing CDF.
Save the file.
Repeat the process for each remaining master and worker node.