Preparing Your Cluster

Before you can launch a NuoDB database, you must first prepare your cluster. Cluster preparation involves labeling nodes, configuring storage, and deploying the OpenShift templates described in this section.


So that you can run cluster commands, run the following commands as a cluster-admin user:

sudo oc login -u system:admin
sudo oc adm policy add-cluster-role-to-user cluster-admin admin

Understanding how labeling controls placement of NuoDB processes

NuoDB relies on node labeling to control the placement of each tier of a NuoDB deployment: admin, transaction engine (TE), and storage manager (SM). Placement of the admin processes and TEs is managed using a zone node label. The cluster admin chooses and defines the available zone(s), and which nodes comprise each zone, for a given domain.

For example, it is possible to have a single cluster and to allocate resources to two different NuoDB domains within that cluster. This could be accomplished by labeling three nodes with a zone value of domain1 and three others with a zone value of domain2. Two separate projects could then be created, called domain1 and domain2.

Starting up the admin template with NODE_ZONE=domain1 will start between one and three admin pods on the nodes labeled with the zone value domain1.

No more than three pods could be started because no more than one admin pod for a given domain can run on any given node, and only three nodes have been labeled as being in zone domain1. The admin pods for domain2 would be started in the same way as done for domain1.
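The two-domain layout described above can be sketched with oc label commands. The node names below are hypothetical, and the zone label key is an assumption; adapt both to your cluster:

```shell
# Hypothetical node names: label three nodes for each domain's zone
oc label node node1 node2 node3 zone=domain1
oc label node node4 node5 node6 zone=domain2

# Confirm which nodes carry each zone value
oc get nodes -l zone=domain1
oc get nodes -l zone=domain2
```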

Understanding how labels control the placement of Storage Managers

NuoDB uses a separate label to control the placement of Storage Managers (SMs) for a database. This label is in the form <domain-name>.<database-name>, where <domain-name> is the name of a project in which a NuoDB domain is running.

Labeling Nodes

In NuoDB, labels are used for tasks such as naming secrets and driving selectors.

Note: Internally, secret names are generated using a combination of domain-name and database-name. However, Kubernetes constraints require that the secret name be a valid DNS-1123 subdomain. Secrets must be created before the database, and database names must be lower case. For more information on Kubernetes concepts, see the Kubernetes documentation.
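As a sketch, a candidate name can be checked against the DNS-1123 subdomain character rules with a shell one-liner (this checks characters and dot-separated parts only, not the 253-character length limit; the name domain1.hockey is a hypothetical example):

```shell
# DNS-1123 subdomain: lowercase alphanumerics, '-' and '.', with each
# dot-separated part starting and ending with an alphanumeric character
name="domain1.hockey"
if echo "$name" | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'; then
  echo "valid secret name"
else
  echo "invalid secret name"
fi
```

A name containing uppercase letters or underscores, such as Domain1_Hockey, fails this check, which is why database names must be lower case.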

Labeling Storage Nodes

NuoDB requires nodes to be labeled for database storage. Examples of oc label commands that identify a storage node are as follows:

oc label node <node> <domain-name>.<database-name>=backup
oc label node <node> <domain-name>.<database-name>=nobackup

Note: When creating your database, each node which has storage provisioned for a particular database must be labeled specifically for that database to allow the system to start the database Storage Manager on the correct storage node. For more information, see Creating Your Database.
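For example, assuming a hypothetical domain project named domain1, a database named hockey, and a node named storage-node-1, the backup storage node for that database could be labeled as follows:

```shell
# Mark storage-node-1 as the backup storage node for database
# hockey in domain domain1 (all names here are hypothetical)
oc label node storage-node-1 domain1.hockey=backup
```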

Labeling Compute Nodes

As it is desirable for Transaction Engines (TEs) to run where the client application is running, compute nodes (where TEs run) are constrained only by zone, and not by node type. As a result, the only thing you must do is add the zone label to any node that runs the application AND runs a TE.

An example of an oc label command that identifies a compute node is as follows:

oc label node <node> zone=<zone-name>

Managing Node Labels

This section documents common oc label commands.

To list the nodes in your domain:
oc get nodes
To list the labels on a node:
oc describe node <node>
To label a node:
oc label node <node> <label>=<value>
To re-label existing nodes:
oc label node -l <customer-zone-label>=us-east-1a zone=us-east-1a

This command re-labels existing nodes, which have customer-specific labels for available zones, using NuoDB equivalent labels.
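As a hedged sketch, assuming nodes already carry the well-known Kubernetes availability-zone label failure-domain.beta.kubernetes.io/zone and that the NuoDB zone label key is zone, the equivalent labels could be applied per availability zone:

```shell
# Select nodes by the existing Kubernetes AZ label and apply the
# NuoDB zone label with the same value (label keys are assumptions)
oc label node -l failure-domain.beta.kubernetes.io/zone=us-east-1a zone=us-east-1a
oc label node -l failure-domain.beta.kubernetes.io/zone=us-east-1b zone=us-east-1b
```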

To verify your labels were set correctly:
oc get nodes -L zone,<domain-name>.<database-name>
To remove a label from a node:
oc label node <node> <label>-

For more detail on labeling, see Labeling Conventions.

Expanding a NuoDB Domain Across Multiple Zones

A NuoDB domain can expand across multiple zones. If a cluster spanned three AWS availability zones, then the nodes in each AZ could be labeled with a value that is unique to that AZ. For example, all hosts in us-east-2a could be labeled with the zone value us-east-2a, and so on for each AZ.

The admin pods for us-east-2a could then be started first, using the NuoDB Admin template and setting NODE_ZONE=us-east-2a. The admin pods for the other two AZs could then be started with the same template, setting NODE_ZONE to the label value for that AZ and setting EXISTING_ZONE=us-east-2a. This causes the admin pods in the second and third zones to join the already-running admin pods in the first zone (us-east-2a). The result would be a single NuoDB domain running across multiple AZs (zones).
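The expansion steps above can be sketched with oc new-app, which accepts template parameters via -p. The template name nuodb-admin is a hypothetical placeholder; use the name of your NuoDB Admin template:

```shell
# Start the first zone's admin pods
oc new-app nuodb-admin -p NODE_ZONE=us-east-2a

# Start admin pods in the other zones, joining the first zone's domain
oc new-app nuodb-admin -p NODE_ZONE=us-east-2b -p EXISTING_ZONE=us-east-2a
oc new-app nuodb-admin -p NODE_ZONE=us-east-2c -p EXISTING_ZONE=us-east-2a
```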

This allows greater control over how clients connect to the database. For example, a client running on a node in AWS zone us-east-2b can specify a connection URL indicating that it should connect to a TE in us-east-2b and not in us-east-2a.

Zones and zone labels are used for the placement of TEs and admin pods. They are also used in connection policies by the client to optimize the client connection to the database.

Configuring Storage

In private data centers with storage under your own control, local disk is preferable. Note, however, that if the storage solution is local disks only and a host is lost, all state on that host is also lost. If all hosts are lost, for example due to a power outage, all database state is lost and can only be recovered by restoring from backups.

Changing Security Context

When using SELinux, for each of the nodes labeled for storage, you must change the security context to permit Docker containers to write to the volumes.

Note: This applies to both DAS (Direct Attached Storage) and CNS (Container Native Storage).

To perform volume provisioning on each of the nodes labeled as supporting database storage, run the following commands:

export DIRPATH=/<storage_dir>
sudo mkdir -p $DIRPATH
sudo chcon -t svirt_sandbox_file_t "$DIRPATH"
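To confirm that the context took effect, the directory's SELinux label can be inspected (the exact output format varies by distribution):

```shell
# -Z shows the SELinux context; the type field should read svirt_sandbox_file_t
ls -Zd "$DIRPATH"
```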

Labeling Storage Nodes

For details of how to label storage nodes, see Labeling Storage Nodes.

Considerations for Durable Storage

By default, GlusterFS is used for durable storage for RAFT state in the management tier.

About OpenShift Templates

With the cluster-based deployment model, OpenShift templates are available for the following tasks:

- Provision NuoDB's administration tier.
- Provision the initial database credentials. Required for each database created.
- Provision NuoDB TEs.
- Provision NuoDB SMs using Direct Attached Storage (DAS) and a deployment configuration.
- Provision NuoDB SMs using Container Native Storage (CNS). The default is GlusterFS. (For development and test environments only.)
- Load a sample application using the Hockey sample database. The application creates both a query and an update workload.
- Load a sample application using the Hockey sample database. The application creates a query workload.
- Load a sample application using the Hockey sample database. The application creates an update workload.
- Load the standard Yahoo Cloud Serving Benchmark (YCSB) JDBC client. The YCSB load can be selected with a parameter.
- Create a custom balancing policy. The default policy favors TEs in the following order:
  1. Pod
  2. Node
  3. Zone
  4. Any
- Provision NuoDB Insights visual performance monitoring collection, with a display link.

Note: To obtain OpenShift NuoDB Enterprise Edition database deployment templates, contact your NuoDB Technical Sales or NuoDB Services team.