Enabling TLS in Containerized Environments

To minimize the configuration required, the NuoDB Docker image is pre-configured with TLS enabled and defines a convention that must be adhered to when you provision TLS keys and certificates and expose them to containerized NuoDB processes.

All containerized NuoDB processes expect TLS keys and certificates to be located in $NUODB_CFGDIR/keys, which expands to /etc/nuodb/keys and includes the following files:

nuoadmin.p12
    The default value of the keystore for the NuoDB Admin Process (the keystore property in nuoadmin.conf). Contains only the admin key and certificate.

nuoadmin-truststore.p12
    The default value of the truststore for the NuoDB Admin Process (the truststore property in nuoadmin.conf). Contains the certificate used to verify admin certificates and the client certificate.

nuocmd.pem
    The default value of the NUOCMD_CLIENT_KEY environment variable, which specifies the PEM file containing the key and certificate used by NuoDB Command (nuocmd) clients.

ca.cert
    The default value of the NUOCMD_VERIFY_SERVER environment variable, which specifies the PEM file containing the certificate used to verify admin certificates.

To enable security, provision your own TLS keys and certificates, as described in Enabling TLS Encryption, using the file names described above. Alternatively, self-signed certificates and keystore files in the expected format can be generated using the setup-keys.sh helper script, which is shipped with the NuoDB container image and located in /usr/local/bin. For example:

kubectl run generate-nuodb-certs \
  --image nuodb/nuodb-ce:4.2 \
  --env="DEFAULT_PASSWORD=changeMe" \
  --command -- 'tail' '-f' '/dev/null'

kubectl exec -ti generate-nuodb-certs -- \
  bash -c "cd /etc/nuodb/keys && setup-keys.sh"

The generated keystore files need to be exposed to containerized NuoDB processes using a mechanism such as Docker volumes, Kubernetes Secrets, or a secret management service such as HashiCorp Vault.

Setting Environment Variables

When enabling TLS in containerized environments, you must set the following environment variables:

NUODB_KEYSTORE_PASSWORD (default: changeIt)
    The password used to encrypt the private key in the NuoDB Admin keystore.

NUODB_TRUSTSTORE_PASSWORD (default: changeIt)
    The password used to verify the integrity of the NuoDB Admin truststore.

NUOCMD_CLIENT_KEY (no default)
    Used to specify the PEM file containing the key and certificate used by NuoDB Command (nuocmd) clients.

NUOCMD_VERIFY_SERVER (no default)
    Used to specify the PEM file containing the certificate used to verify admin certificates.

A containerized NuoDB Admin Process (AP) can be started with a custom set of TLS keys and certificates as follows:

docker run -d --cap-add SYS_PTRACE ... \
    -v "$KEYS_DIR:/etc/nuodb/keys" \
    -e "NUODB_KEYSTORE_PASSWORD=$PASSWD" \
    -e "NUODB_TRUSTSTORE_PASSWORD=$PASSWD" \
    -e "NUODB_BOOTSTRAP_SERVERID=admin-0" \
    -e "NUODB_DOMAIN_ENTRYPOINT=admin-0" \
    -e "NUOCMD_CLIENT_KEY=$KEYS_DIR/nuocmd.pem" \
    -e "NUOCMD_VERIFY_SERVER=$KEYS_DIR/$NUOCMD_VERIFY_SERVER" \
    nuodb:latest nuoadmin

In the above example, $KEYS_DIR is an environment variable that specifies the host directory containing the provisioned key and certificate data for the AP, and $PASSWD is an environment variable holding the password used for both the keystore and the truststore.

$NUOCMD_VERIFY_SERVER is an environment variable set to the server certificate file name; its value depends on which TLS trust model is used, shared admin key or unique admin key. For example, nuoadmin.cert for the Shared Admin Key trust model and ca.cert for the Unique Admin Key trust model.
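
For reference, the host-side variables used in the example above might be set as follows. This is a minimal sketch and the values are illustrative; they must match your own provisioning:

# Using /etc/nuodb/keys as the host directory keeps the paths passed to
# NUOCMD_CLIENT_KEY and NUOCMD_VERIFY_SERVER valid inside the container,
# where the directory is mounted at the same path.
export KEYS_DIR=/etc/nuodb/keys
export PASSWD=changeIt
# Unique Admin Key trust model; use nuoadmin.cert for the Shared Admin Key trust model.
export NUOCMD_VERIFY_SERVER=ca.cert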

Enabling TLS in OpenShift or Kubernetes

To enable TLS in OpenShift or Kubernetes, do the following:

1. Create a Kubernetes Secret template named nuodb-tls-secret.yaml with the following contents:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  nuoadmin.p12: replace
  nuoadmin-truststore.p12: replace
  nuocmd.pem: replace
  ca.cert: replace

2. Inject the key material into the template file as follows:

export KEYSTORE_BASE64=$(cat nuoadmin.p12 | base64 | tr -d '\n')
export TRUSTSTORE_BASE64=$(cat nuoadmin-truststore.p12 | base64 | tr -d '\n')
export NUOCMD_PEM_BASE64=$(cat nuocmd.pem | base64 | tr -d '\n')
export CA_CERT_BASE64=$(cat ca.cert | base64 | tr -d '\n')
sed -i -e '/nuoadmin.p12:.*/ s|:.*|: '"${KEYSTORE_BASE64}"'|' nuodb-tls-secret.yaml
sed -i -e '/nuoadmin-truststore.p12:.*/ s|:.*|: '"${TRUSTSTORE_BASE64}"'|' nuodb-tls-secret.yaml
sed -i -e '/nuocmd.pem:.*/ s|:.*|: '"${NUOCMD_PEM_BASE64}"'|' nuodb-tls-secret.yaml
sed -i -e '/ca.cert:.*/ s|:.*|: '"${CA_CERT_BASE64}"'|' nuodb-tls-secret.yaml

3. Install the Kubernetes Secret before launching any Admin Processes (APs):

kubectl create -f nuodb-tls-secret.yaml

For more information on distributing credentials securely, see the Kubernetes documentation.
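
As an alternative to editing the template with sed, an equivalent Secret manifest can be generated directly with kubectl, which performs the base64 encoding itself. A sketch, assuming the four files are in the current directory and the Secret name mysecret used in the template above:

kubectl create secret generic mysecret \
  --from-file=nuoadmin.p12 \
  --from-file=nuoadmin-truststore.p12 \
  --from-file=nuocmd.pem \
  --from-file=ca.cert \
  --dry-run=client -o yaml > nuodb-tls-secret.yaml

Dropping --dry-run=client -o yaml creates the Secret in the cluster directly instead of writing a manifest file.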

Exposing the Keystore to Containerized Database Processes

If the NuoDB AP cannot act as a CA for the database processes it manages, the database processes are passed the same certificate used by the nuoadmin process, rather than a certificate signed by it.

In containerized deployments of NuoDB, the admin and database processes can only communicate via socket connections and there is no secure method for the AP to pass the database process its private key and certificate. In this scenario, it is necessary to specify the keystore file and keystore password for the entry-point script that forks the database process (nuodocker start sm/te).

This keystore can be mounted into the container using a mechanism such as Docker volumes or Kubernetes Secrets. The keystore is specified using the --keystore argument of nuodocker start sm/te, while the keystore password is specified using the NUODOCKER_KEYSTORE_PASSWORD environment variable, which can be supplied as a Kubernetes Secret (to avoid exposing the password with docker ps or kubectl describe pod).
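
For example, a minimal sketch of storing the keystore password in a Kubernetes Secret (the Secret name is illustrative):

kubectl create secret generic nuodb-keystore-password \
  --from-literal=NUODOCKER_KEYSTORE_PASSWORD="$PASSWD"

The pod specification can then populate the NUODOCKER_KEYSTORE_PASSWORD environment variable from this Secret (for example, via envFrom or env.valueFrom.secretKeyRef) so that the password never appears in the pod definition itself.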

Invoking a Containerized Database Process with a Keystore

The following example demonstrates how to use Docker to start a containerized database process with a keystore.

docker run -d --cap-add SYS_PTRACE ... \
    -e NUODOCKER_KEYSTORE_PASSWORD="$PASSWD" \
    --volume /path/to/keys:/etc/nuodb/keys nuodb:latest \
    nuodocker --api-server admin-0:8888 start te \
    --db-name db --server-id admin-0 --keystore /etc/nuodb/keys/nuoadmin.p12

Automatic downgrade to non-TLS mode

If the keystore files and certificates are not found in the image at runtime when the Admin Process (AP) is started, TLS is gracefully downgraded and the AP runs in non-TLS mode.

If the files referenced by the NUOCMD_CLIENT_KEY and NUOCMD_VERIFY_SERVER environment variables do not exist, NuoDB Command (nuocmd) downgrades to HTTP when communicating with the Admin Process (AP) REST service.

The graceful downgrade to non-TLS mode can be disabled by explicitly setting ssl=true when starting the AP (nuodocker start admin ... -- ssl=true) and by explicitly specifying the HTTPS protocol in the NUOCMD_API_SERVER environment variable.
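
For example, the HTTPS protocol can be pinned for nuocmd as follows (the admin address matches the earlier example and is illustrative):

export NUOCMD_API_SERVER=https://admin-0:8888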

Upgrade from a NuoDB domain that uses default TLS keys

In Kubernetes or container deployments that operate with the default keystore files shipped with NuoDB container images prior to NuoDB 4.2, TLS certificates and keystore files should be generated before performing a rolling upgrade to NuoDB 4.2 or greater.

If this prerequisite is not met, containers started with a NuoDB 4.2 or greater image that do not have TLS keystore files mounted as volumes will downgrade gracefully to non-TLS mode. This prevents them from connecting to an existing TLS-enabled domain and generates error messages similar to those described below.

Admin Process (AP) containers newly started during the admin statefulset rolling upgrade will emit multiple Thrift error messages like the one below, which indicates connectivity problems:

2021-02-12T09:11:53.672+0000 ERROR [admin-nuodb-cluster0-2:nthriftserver-port48005-33-4] TThreadPoolServer Thrift Error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Missing version in readMessageBegin, old client?
	at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:252)
	at org.apache.thrift.TMultiplexedProcessor.process(TMultiplexedProcessor.java:101)
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:313)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at com.nuodb.util.Threading$1.lambda$wrapTarget$0(Threading.java:29)

Storage Manager (SM) database process containers newly started during the SM statefulset rolling upgrade will emit an error message like the one below, which indicates connectivity problems:

2021-02-12T09:20:39.991+0000 Error while checking Admin layer: Admin layer is inoperative - exiting: Unable to connect to http://nuodb.nuodb.svc:8888: ('Connection aborted.', BadStatusLine('\x15\x03\x03\x00\x02\x02P',))

Several strategies can be used to upgrade such environments to the latest NuoDB version, depending on the availability and configuration requirements.

Rolling upgrade

The described steps are applicable with the following considerations in mind:

  - The ultimate goal is to upgrade a NuoDB deployment that uses the default TLS certificates to NuoDB 4.2 or greater with no downtime.

  - The steps apply to a single-cluster deployment.

  - The deployment has enough AP and engine pods to support the Kubernetes rolling upgrade strategy without service downtime.

To achieve the goal, there are several prerequisite steps which need to be performed in addition to the standard NuoDB rolling upgrade procedure for a Kubernetes deployment.

It’s recommended that the upgrade steps are first tested in a staging or pre-production environment.

The high-level instructions can be described as:

  1. Download the default TLS keystore files used in the existing NuoDB domain (see the sketch after this list).

  2. Create the needed Kubernetes secrets from the files downloaded in step 1.

  3. Upgrade the admin and database Helm releases so that they use the Kubernetes secrets defined in step 2.

  4. Generate new TLS keystore files either by using self-signed certificates or requesting them from your security department.

  5. Perform an online TLS CA certificate rotation to the keys generated in step 4.

  6. Perform a rolling upgrade to NuoDB 4.2 or greater.
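
A compact sketch of steps 1 and 2, assuming an existing admin pod named admin-nuodb-cluster0-0 in the nuodb namespace (both names are illustrative):

# Step 1: download the default TLS files from a running admin pod.
kubectl cp nuodb/admin-nuodb-cluster0-0:/etc/nuodb/keys ./keys

# Step 2: create a Secret from the downloaded files (the Secret name is illustrative).
kubectl create secret generic nuodb-keys \
  --from-file=./keys/nuoadmin.p12 \
  --from-file=./keys/nuoadmin-truststore.p12 \
  --from-file=./keys/nuocmd.pem \
  --from-file=./keys/ca.cert \
  -n nuodb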

For more information about TLS certificate rotation, see Rotating Key Pair Certificates.

Cold upgrade

If service downtime during the NuoDB upgrade is acceptable, a simplified set of steps that does not involve certificate rotation can be followed.

It’s recommended that the upgrade steps are first tested in a staging or pre-production environment.

The high-level instructions can be described as:

  1. Generate new TLS keystore files either by using self-signed certificates or requesting them from your security department.

  2. Create the needed Kubernetes secrets from the files generated in step 1.

  3. Scale down all controllers in the NuoDB domain to replicas=0. This includes the admin statefulset, the hotcopy (HC) and non-hotcopy (nonHC) SM statefulsets, and the TE deployments. Wait for all pods to shut down (see the sketch after this list).

  4. Upgrade the admin Helm release first and then the database Helm releases so that they use the Kubernetes secrets defined in step 2 and a NuoDB 4.2 or greater image. This scales the statefulsets and deployments back to their original replica counts.

  5. Ensure that the NuoDB domain is healthy and the NuoDB database is running.
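
A possible sketch of step 3, assuming the NuoDB domain runs in the nuodb namespace and the namespace contains only NuoDB workloads (adjust the names to your own release):

# Scale all statefulsets (admin, HC and nonHC SMs) and deployments (TEs) to zero.
kubectl scale statefulset --all --replicas=0 -n nuodb
kubectl scale deployment --all --replicas=0 -n nuodb

# Watch until all pods have terminated before upgrading the Helm releases.
kubectl get pods -n nuodb --watch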