Using Kubernetes
Verastream Host Integrator (VHI) can now run on Kubernetes, a popular platform for building highly scalable systems of enterprise applications.
This topic, part of our Early Adopter Program, describes the setup of VHI in a Kubernetes environment. We want to hear from early adopters about their experiences, issues and future needs.
Familiarize yourself with Kubernetes' excellent documentation, which will help you get started. This guide assumes that you are familiar with the Linux operating environment.
The VHI Kubernetes Example Setup
This guide walks you through configuring a VHI Kubernetes example setup consisting of one VHI management server and two VHI session servers, all running in K3s, a lightweight Kubernetes distribution.
K3s provides most of the elements necessary to work with VHI in Kubernetes. VHI has been developed, run and tested in K3s.
This VHI Kubernetes example setup uses the Kubernetes default namespace to keep things simple. Production systems usually use a dedicated namespace to isolate concerns and make the system more reliable and secure.
There are a few steps involved in setting up the VHI Kubernetes Example Setup. It is best to follow the steps in the order that they are presented here. Once you have K3s installed on Linux, and VHI container images imported into your container registry, you can use our Helm setup chart to install VHI.
Setting up K3s
Install K3s on a Linux environment that supports the K3s requirements. This guide was prepared using Red Hat Enterprise Linux 8; however, other supported distributions of Linux should also work.
After you install K3s, set the KUBECONFIG environment variable:
export KUBECONFIG=~/.kube/config
Then create the config file:
mkdir ~/.kube 2> /dev/null
sudo k3s kubectl config view --raw > "$KUBECONFIG"
chmod 600 "$KUBECONFIG"
Add the KUBECONFIG export to your ~/.profile or ~/.bashrc to make it persist across reboots.
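The persistence step can be scripted. A minimal sketch, assuming a bash login shell (adjust the profile file name for your shell):

```shell
# Append the KUBECONFIG export to ~/.bashrc, but only if it is not
# already present, so repeated runs do not duplicate the line.
PROFILE="$HOME/.bashrc"
LINE='export KUBECONFIG=~/.kube/config'
grep -qxF "$LINE" "$PROFILE" 2>/dev/null || echo "$LINE" >> "$PROFILE"
```

Because of the grep guard, running this more than once still leaves a single export line in the profile.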
Install Helm Package Manager
The Helm package manager is what you use to install, update or remove VHI in Kubernetes. Refer to the Helm install instructions.
Install Docker Command Line Interface
Because K3s does not provide container management, you need to provide this. The Docker command line is the standard interface. For example, on Red Hat Enterprise Linux the following command installs Podman as a Docker-compatible replacement:
sudo dnf install docker
Note
For convenience, set up an alias docker=podman.
Configure a Container Registry
You need a container registry to store the VHI images. You can use an existing registry, or set one up locally.
To run a test registry, do the following:
docker run --rm -d -p 6000:5000 --name registry registry:2
This test registry does not require authentication. In a production environment, your registry should have access control configured; for this, the VHI Helm setup can be used with an imagePullSecret name if necessary. Follow these instructions to configure an imagePullSecret, and add the name to your custom-values.yaml file. See Configure Custom Settings, below.
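As a sketch of what creating such a secret can look like: generating the manifest with --dry-run=client requires kubectl but no running cluster. The secret name, registry address, and credentials below are illustrative placeholders, not values from this guide.

```shell
# Generate (not apply) a docker-registry pull secret manifest.
# Replace the server, username, and password with your registry's
# real values, then apply it with: kubectl apply -f <file>
kubectl create secret docker-registry vhi-registry-cred \
  --docker-server=registry.example.com:5000 \
  --docker-username=vhi-user \
  --docker-password='changeme' \
  --dry-run=client -o yaml > vhi-registry-cred.yaml
```

Whatever name you choose (here, vhi-registry-cred) is the name to record in custom-values.yaml.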
Extract Files
VHI Kubernetes setup is provided in the file vhik8s-7.9.6011-prod-eap.zip. This file contains all of the images and dependencies required for the VHI Kubernetes example setup:
- vhi-7.9.6011.tgz -- Helm chart for installing VHI
- vhi-airgap-images-7.9.6011.tgz -- VHI runtime container images
- vhi_thirdpartynotices.txt -- Third-party license information
- VHI-Early-Adopter-Program.pdf -- EAP features and limitations
Extract the zip file to a convenient location. You do not need to extract the tgz files.
Import VHI Images to Your Container Registry
Issue the following commands to import the VHI images into your local container registry.
docker load -i vhi-airgap-images-7.9.6011.tgz
View the loaded images:
docker image ls
Tag the images:
docker tag <image-id> localhost:6000/vhi-mngt:7.9.6011
docker tag <image-id> localhost:6000/vhi-ssvr:7.9.6011
docker tag <image-id> localhost:6000/busybox:1.36
Note
Replace <image-id> with the actual IMAGE IDs shown by docker image ls.
Verify your tagged images:
docker image ls
Upload images to the registry:
docker push localhost:6000/vhi-mngt:7.9.6011
docker push localhost:6000/vhi-ssvr:7.9.6011
docker push localhost:6000/busybox:1.36
Note
You may need to use docker push --tls-verify=false ... if you encounter TLS errors with this command.
Using TLS Certificates
You should obtain a server certificate for the machine that will be hosting K3s, and install it as a Kubernetes TLS secret. You specify this ingress secret name in the next section.
Note
Trust will need to be established on client machines when using self-signed or personal-CA certificates.
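For development, a self-signed certificate is enough to exercise the TLS path. A minimal sketch using openssl; the CN host.example.com and the secret name vhi-tls are placeholders, so substitute your FQDN and your own secret name:

```shell
# Generate a self-signed certificate and key for testing only.
# Production systems should use a CA-issued certificate instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=host.example.com"

# Install the pair as the Kubernetes TLS secret referenced by the
# Helm chart (requires a working kubeconfig):
# kubectl create secret tls vhi-tls --cert=tls.crt --key=tls.key
```

The secret name you create here is the value to use for the ingress secret in the next section.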
Configure Custom Settings
For the next steps, you need to obtain your fully qualified host name, using the command:
hostname --fqdn
In the same directory where vhi-7.9.6011.tgz is located, create a file named custom-values.yaml that contains the following entries:
# Custom values for VHI Kubernetes Example Setup
# K3s uses persistent volume storage class "local-path"
modelPVCStorageClass: local-path
# FQDN hosting your K3s for VHI administration
adminHostName: <fqdn-hostname>
# FQDN ingress host and TLS certificate secret name
ingress:
host: <fqdn-hostname>
secret: <tls-secret-name>
# VHI Management Server image location/name:tag
mngtImage: localhost:6000/vhi-mngt:7.9.6011
# VHI Session Server image location/name:tag
ssrvrImage: localhost:6000/vhi-ssvr:7.9.6011
# Init Container Image
initContainerImage: localhost:6000/busybox:1.36
# VHI Node Port base (change only if port conflicts occur)
basePort: 30500
Note
More advanced values can be found here.
Check Your Setup Templates
Test your configuration using the helm install --dry-run option. This command highlights syntax and configuration errors in the .yaml files and displays what will be submitted to Kubernetes:
helm install --dry-run --values custom-values.yaml vhi vhi-7.9.6011.tgz
Install VHI using Helm
Now you should be ready to install VHI using the helm install
command:
helm install --values custom-values.yaml vhi vhi-7.9.6011.tgz
Release "vhi" has been installed. Happy Helming!
NAME: vhi
LAST DEPLOYED: Wed Apr 19 20:08:36 2023
NAMESPACE: default
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
Installed vhi-mngt and vhi-ss!
Verify the installation by running kubectl get pods, which should return a list of the running pods that looks something like this:
NAME READY STATUS RESTARTS AGE
vhi-mngt-deployment-69f54cb475-xc4xr 1/1 Running 0 10s
vhi-ss-deployment-85454985d8-958wn 1/1 Running 0 7s
vhi-ss-deployment-85454985d8-lntff 1/1 Running 0 6s
On first install, it may take a minute or two for the pod status to reach Running. You may see Initializing or Pending before you see Running; repeat the command to see the current status. Transient errors and restarts are normal while things are getting set up, but a persistent non-running status indicates a problem. See the section In Case of Difficulty, below.
Using VHI in Kubernetes
If you are accustomed to using VHI in a traditional environment, there are some things you need to be aware of when running in a Kubernetes environment.
- VHI TCP ports are exposed on NodePort, in the range 30000-32767. The default base port for VHI is 30500.
  - When connecting the VHI Administrative Console to the Management Server, provide a port number: <fqdn-hostname>:30500 (base + 0)
  - When using VHI connectors, provide the port number: <fqdn-hostname>:30523 (base + 23)
- VHI web services are published on standard ports. This includes /vhi, /vhi-ws, /vhi-rs and /vhi-xe. You may need to update existing web service clients.
- The default Administrative Console login credentials are username: admin and password: secretpassword. You can change the password using the VHI Administrative Console: in Management Server Explorer, right-click the Management Cluster and select Change Admin Password... The new password will persist until you re-install VHI.
- Deploying models is different in Kubernetes. When you deploy a model, the session servers need to be restarted before they publish the model. To restart all session server pods, use the command kubectl rollout restart deployment vhi-ss-deployment.
- Web Service Explorer is not installed in Kubernetes.
- Design Tool cannot deploy deployment descriptors to a Kubernetes environment. As a workaround, first define your deployment in the Design Tool, then copy the descriptors (typically "deploy_desc.xml") from <model>/deploy/design_tool into the folder <model>/deploy. After that you can use the deploy function in the Design Tool or the package/activate scripts.
- Testing deployed models is disabled via the Design Tool.
- If you re-install VHI (helm uninstall vhi, followed by helm install ... vhi ...), you need to re-deploy your models, because uninstalling reclaims persistent volume storage. If you only need to make changes to your VHI install, use helm upgrade instead; your models will be preserved.
- Configuration changes that you make to individual session server pods using the VHI Administrative Console only persist for the lifetime of the pod. We are evaluating early adopter feedback to determine future needs.
- Host Emulator is not installed in Kubernetes. If you use our demo models, install Host Emulator at an accessible network location outside Kubernetes, and modify the model prior to deployment to access the Host Emulator at that host name or IP address.
- Stateful services only work with a single session server replica. If you plan to use stateful web services (using wsResourceCreate, rsCreateSessionId or suspend/resume session with VHI connectors), add the following value to custom-values.yaml:
  # Number of Session Server replicas
  ssrvrReplicas: 1
  Then run:
  helm upgrade --values custom-values.yaml vhi vhi-7.9.6011.tgz
Checking Your VHI Installation
Now you should be able to verify that VHI has been properly set up by running some quick tests.
- Start VHI Administrative Console and connect to the management server. In the Connect dialog, enter <fqdn-hostname>:30500. Enter username: admin and password: secretpassword when prompted for credentials. In the Host Integrator Session Server Explorer panel, you should see two servers, identified by the service pod IP address.
- Connect to VHI Web Services. In a web browser, enter https://<fqdn-hostname>/vhi-ws. You should see the SOAP Services catalog page. If there are no models deployed, it is normal for the list of services to be empty. If the server certificate is installed correctly, the browser should display a green or gray lock icon to the left of the browser address bar.
Congratulations! You should now be able to deploy models and start using VHI.
In Case of Difficulty
- Setup failures: It is difficult to anticipate and diagnose every problem in a guide such as this. The best approach is to start with a clean setup and stay as close to the defaults in this guide as possible. Once you have a basic system up and running, add your customizations one at a time. If something breaks, resolve that problem or set it aside, and move on to the next thing.
- Image pull backoff: Kubernetes was not able to pull the container image as specified in the custom-values.yaml file. Double-check for typing errors and retrace your import process. Verify the container registry image names using docker image ls. If your container registry requires authentication, make sure you have correctly configured your image pull secret.
- Crash retry backoff: A problem with provisioning the K3s system, or another environmental problem, is preventing the VHI services from initializing, starting or running correctly. Run the command helm uninstall vhi to remove VHI. Double-check your setup and configuration, then retrace your steps from the beginning and reinstall as necessary.
- Hint: Most of the time, helm upgrade is sufficient to change your VHI configuration. In many cases, an upgrade will not even interrupt service.
- Run kubectl commands: The kubectl cheat sheet has many helpful commands for diagnosing problems and general system monitoring.
- Run Kubernetes Dashboard: Kubernetes Dashboard is a web-based visual tool for viewing Kubernetes resources in real time. Follow these instructions to enable Kubernetes Dashboard in K3s.
- Install K9s: K9s is a text UI dashboard that provides real-time status of multiple pods, live logs, easy access to resource descriptions and other useful diagnostic tools. Follow these instructions to install K9s in your development environment.