Deploying to Kubernetes
This page provides guidance on deploying Discovery and Caplin Platform to Kubernetes. While the guidance is targeted at Kubernetes, similar principles apply to any container-orchestration platform.
Requirements
This page assumes you have access to the following servers and tools:
- A Kubernetes cluster
- The kubectl command
- The helm command
Your Caplin Platform components must meet the version requirements documented here: Requirements.
Overview
Discovery’s features make deployment to Kubernetes easier, and the guidance on this page assumes a full migration to Discovery’s licensing, peer discovery, and scalable data services.
Expect to make the following changes to all components configured for a traditional Caplin Platform deployment:
- Removal of licence files (Liberator, Transformer, and the TREP Adapter only)
- Removal of peer-thread-pool-size configuration (if present)
- Removal of add-peer configuration
- Addition of discovery-provide-service configuration (Transformer and adapters only)
- Reconfiguration of add-data-service configuration (Liberator and Transformer only)
- Addition of the following configuration items:
  - discovery-addr
  - discovery-cluster-name
- Addition or amendment of the following configuration items:
  - datasrc-interface
  - datasrc-local-label
- Enabling the Sockmon interface of Liberator and Transformer
- Enabling the JMX interface of Java DataSources
Docker image design
You have two broad options for packaging components as Docker images:

- Create an individual image for each component
- Create a single master image containing all components, and specify which component to run at the time of pod deployment
Which option you choose may depend on how you currently deploy the Caplin Platform. If you configure components manually, then a clean separation of components into separate images may be attractive. If you use the Deployment Framework to manage configuration, then a single master image is more appropriate.
If you opt for the Deployment Framework option, deploy all components locally as if deploying to a single machine, and configure all components to run on localhost. Create a Docker image of this deployment, with a start script that runs the component specified by a command-line argument or an environment variable. The start script should run the component using the Deployment Framework's dfw start-fg command, which runs a single component in the foreground with logging directed to STDOUT, ideal for container deployments:
$ ./dfw start-fg component
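With the single-image model, you select the component to run at deployment time. For example, a pod's container specification might pass the component name to the start script through an environment variable. The following is a minimal sketch only: the image name, start-script path, and COMPONENT variable name are illustrative assumptions, not part of the Deployment Framework.

containers:
  - name: liberator
    image: registry.example.com/caplin-platform:1.0   # hypothetical master image
    command: ["/opt/caplin/start.sh"]                  # hypothetical start script
    env:
      - name: COMPONENT    # read by the start script, which runs './dfw start-fg' for this component
        value: liberator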
Regardless of which model you choose, we recommend that you treat the Docker image as a template, with the value of some DataSource configuration items set at time of deployment. Configure each component to initialise the following configuration items from container environment variables:
- discovery-addr: Discovery's network location
- discovery-cluster-name: Discovery's cluster name
- datasrc-local-label: The DataSource's unique identifier
- datasrc-interface: The DataSource's network location
For example:
discovery-addr          ${ENV:DISCOVERY_ADDR}
discovery-cluster-name  ${ENV:DISCOVERY_CLUSTER_NAME}
datasrc-local-label     ${ENV:DATASRC_LOCAL_LABEL}
datasrc-interface       ${ENV:DATASRC_INTERFACE}
The various methods for setting these container environment variables are covered in the next section.
Kubernetes architecture
We recommend that you use Helm to create templates that you can customise at time of deployment.
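For example, a single chart could template a Deployment for any component, with the component name and image supplied as chart values. This is a sketch only: the value names, labels, and chart layout are assumptions, not a prescribed structure.

# templates/deployment.yaml (value names are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Values.component }}
spec:
  replicas: {{ .Values.replicas | default 1 }}
  selector:
    matchLabels:
      app: {{ .Values.component }}
  template:
    metadata:
      labels:
        app: {{ .Values.component }}
    spec:
      containers:
        - name: {{ .Values.component }}
          image: {{ .Values.image }}
          env:
            - name: COMPONENT     # selects the component the start script runs
              value: {{ .Values.component }}

You could then deploy each component from the same chart, for example: helm install liberator ./caplin-chart --set component=liberator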
Setting container environment variables
Two of the environment variables can be set automatically in the container specification using Kubernetes' Downward API:
env:
  - name: DATASRC_INTERFACE
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: DATASRC_LOCAL_LABEL
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
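The remaining two variables, DISCOVERY_ADDR and DISCOVERY_CLUSTER_NAME, describe the Discovery deployment rather than the individual pod, so they are typically the same for every component in the cluster. One way to set them is from a shared ConfigMap. This is a sketch: the ConfigMap name and values are assumptions.

apiVersion: v1
kind: ConfigMap
metadata:
  name: discovery-config                  # hypothetical name
data:
  DISCOVERY_ADDR: "discovery:8080"        # illustrative address
  DISCOVERY_CLUSTER_NAME: "production"    # illustrative cluster name

Each component's container specification can then import the ConfigMap's keys as environment variables:

envFrom:
  - configMapRef:
      name: discovery-config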
Logging
TODO
Error handling
TODO