Preparing Your Cluster

Before you can launch a NuoDB database, you must first prepare your cluster. Preparing a cluster involves the tasks described in the sections below.

Prerequisites

To be able to run cluster commands, log in and grant the cluster-admin role to the admin user by running the following commands:

sudo oc login -u system:admin
sudo oc adm policy add-cluster-role-to-user cluster-admin admin

Understanding how labeling controls placement of NuoDB processes

NuoDB relies on node labeling to control the placement of each tier of a NuoDB deployment: admin, Transaction Engine (TE), and Storage Manager (SM). Placement of the admin and TEs is managed using the node label nuodb.com/zone. The cluster-admin chooses and defines the available zone(s) and which nodes comprise each zone for a given domain.

For example, it's possible to have a single cluster and to allocate resources to two different NuoDB domains within that cluster. This could be accomplished by labeling three nodes with nuodb.com/zone=domain1 and three others with nuodb.com/zone=domain2. Two separate projects could then be created, called domain1 and domain2.
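
For example, assuming six hypothetical nodes named node1 through node6, the labeling could look like this:

oc label node node1 node2 node3 nuodb.com/zone=domain1
oc label node node4 node5 node6 nuodb.com/zone=domain2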

Starting up the admin template with NODE_ZONE=domain1 will start between one and three admin pods on the nodes labeled nuodb.com/zone=domain1.

No more than three pods could be started because no more than one admin pod for a given domain can run on any given node, and only three nodes have been labeled as being in zone domain1. The admin pods for domain2 would be started in the same way as for domain1.
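
As a sketch, assuming the admin template file admin-cns.yaml listed under About OpenShift Templates below and a project named domain1, the admin tier could be started with something like the following; the exact parameters should be verified against the template:

oc project domain1
oc process -f admin-cns.yaml -p NODE_ZONE=domain1 | oc create -f -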

Understanding how labels control the placement of Storage Managers

NuoDB uses a separate label to control the placement of Storage Managers (SMs) for a database. This label is in the form: nuodb.com/<domain-name>.<database-name> where <domain-name> is the name of a project in which a NuoDB domain is running.

Labeling Nodes

In NuoDB, labels are used for tasks such as naming secrets and driving selectors.

Note: Internally, secret names are generated using a combination of domain-name and database-name. However, Kubernetes constraints require that the secret name be a valid DNS-1123 subdomain. Secrets must be created before the database, and database names must be lower case. For more information on Kubernetes concepts, see the Kubernetes documentation.
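
As a minimal sketch, assuming the database-secret.yaml template listed under About OpenShift Templates below and a lowercase database name, the secret could be created before the database as follows; the parameter name shown is illustrative and should be checked against the template:

# DATABASE_NAME is an illustrative parameter name; verify against the template
oc process -f database-secret.yaml -p DATABASE_NAME=mydb | oc create -f -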

Labeling Storage Nodes

NuoDB requires nodes to be labeled for database storage. Examples of oc label commands that identify a storage node are as follows:

oc label node <node> nuodb.com/<domain-name>.<database-name>=backup
oc label node <node> nuodb.com/<domain-name>.<database-name>=nobackup
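
For example, assuming a project (domain) named domain1, a database named mydb, and hypothetical node names:

oc label node node1.example.com nuodb.com/domain1.mydb=backup
oc label node node2.example.com nuodb.com/domain1.mydb=nobackup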

Note: When creating your database, each node which has storage provisioned for a particular database must be labeled specifically for that database to allow the system to start the database Storage Manager on the correct storage node. For more information, see Creating Your Database.

Labeling Compute Nodes

Because it is desirable for Transaction Engines (TEs) to run where the client application is running, compute nodes (where TEs run) are constrained only by zone, not by node type. As a result, the only requirement is to add the zone label to any node that runs both the application and a TE.

An example of an oc label command which identifies a compute node is as follows:

oc label node <node> nuodb.com/zone=east

Managing Node Labels

This section documents common oc label commands.

To list the nodes in your domain:
oc get nodes
To list the labels on a node:
oc describe node <node>
To label a node:
oc label node <node> nuodb.com/label=<value>
To re-label existing nodes:
oc label node -l zone=us-east-1a nuodb.com/zone=us-east-1a

This command applies the NuoDB equivalent label to existing nodes that already carry a customer-specific availability-zone label (in this example, zone=us-east-1a).

To verify your labels were set correctly:
oc get nodes -l nuodb.com/zone -L nuodb.com/zone,nuodb.com/<domain-name>.<database-name>,nuodb.com/node-type
To remove a label from a node:
oc label node <node> <label>-

For more detail on labeling, see Labeling Conventions.

Expanding a NuoDB Domain Across Multiple Zones

A NuoDB domain can expand across multiple zones. If a cluster spanned three AWS Availability Zones (AZs), the nodes in each AZ could be labeled with a value that is unique to that AZ. For example, all hosts in us-east-2a could be labeled nuodb.com/zone=us-east-2a, and so on for each AZ.

The admin pods for us-east-2a could then be started first, using the NuoDB Admin template and setting NODE_ZONE=us-east-2a. The admin pods for the other two AZs could then be started with the same template, setting NODE_ZONE to the label for that AZ and setting EXISTING_ZONE=us-east-2a. This causes the admin pods in the second and third zones to join the admin pods already running in the first zone (us-east-2a). The result would be a single NuoDB domain running across multiple AZs (zones).
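
As a sketch, assuming the NuoDB Admin template file admin-cns.yaml and placeholder node names, the sequence could look like the following; the exact invocation should be verified against the template parameters:

# label the nodes in each availability zone
oc label node <node-in-us-east-2a> nuodb.com/zone=us-east-2a
oc label node <node-in-us-east-2b> nuodb.com/zone=us-east-2b
oc label node <node-in-us-east-2c> nuodb.com/zone=us-east-2c

# start the admin tier in the first zone
oc process -f admin-cns.yaml -p NODE_ZONE=us-east-2a | oc create -f -

# start the admin pods in the remaining zones, joining the first zone
oc process -f admin-cns.yaml -p NODE_ZONE=us-east-2b -p EXISTING_ZONE=us-east-2a | oc create -f -
oc process -f admin-cns.yaml -p NODE_ZONE=us-east-2c -p EXISTING_ZONE=us-east-2a | oc create -f -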

This allows greater control over how clients connect to the database. For example, a client running on a node in AWS zone us-east-2b can specify a connection URL that indicates it should connect to a TE in us-east-2b and not in us-east-2a.

Zones and zone labels are used for the placement of TEs and admin pods. They are also used in connection policies by the client to optimize the client connection to the database.

Configuring Storage

In private data centers with storage under your own control, local disk is preferable. However, when using AWS, NuoDB strongly recommends EBS storage, even though AWS i3 instances offer higher performance with direct-attached storage. If the storage solution is local disk only and a host is lost, all state on that host is also lost. If all hosts are lost, for example due to a power outage, all database state is lost and can only be recovered by restoring from backups.

Changing Security Context

When using SELinux, for each of the nodes labeled for storage, you must change the security context to permit Docker containers to write to the volumes.

Note: This applies to both DAS and OpenEBS (Container Native Storage).

On each of the nodes labeled as supporting database storage, run the following commands to prepare the volume location:

export DIRPATH=/openebs
sudo mkdir -p $DIRPATH
sudo chcon -t svirt_sandbox_file_t "${DIRPATH}"
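
If several nodes are labeled for storage, the same commands can be run on each one. A minimal sketch that iterates over hypothetical host names via SSH (adjust the host list and DIRPATH for your environment):

export DIRPATH=/openebs
for host in node1.example.com node2.example.com node3.example.com; do
    ssh ${host} "sudo mkdir -p ${DIRPATH} && sudo chcon -t svirt_sandbox_file_t ${DIRPATH}"
done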

Labeling Storage Nodes

For details of how to label storage nodes, see Labeling Storage Nodes.

Considerations for Durable Storage

By default, GlusterFS is used for durable storage for RAFT state in the management tier. If you decide to use OpenEBS, prior to running workloads against Container Native Storage (CNS), you must install and configure OpenEBS.

Installing OpenEBS on Your Master Nodes

To install OpenEBS on each of your master nodes, create and execute the following script:

#!/usr/bin/env bash

: ${OCPHOST:=$(hostname)}

# when run as a non-root user, clone the OpenEBS repository and re-run this script as root
if [ "$(id -u)" -ne 0 ]; then
    cd ; git clone https://github.com/openebs/openebs.git
    sudo THEUSER=${USER} $0
    exit $?
fi

cd /home/${THEUSER}

# login
oc login https://${OCPHOST}:8443 -u system:admin

# give user cluster-admin privileges
oc adm policy add-cluster-role-to-user cluster-admin admin

# allow containers access to the disk
oc patch scc restricted \
    --patch '{ "allowHostDirVolumePlugin": true, "runAsUser": { "type": "RunAsAny" }}'

oc project openebs

oc apply -f /home/${THEUSER}/openebs/k8s/openebs-operator.yaml

# prompt for the OpenEBS volume location if DIRPATH is not already set
if [ -z "${DIRPATH}" ]; then
    printf "Enter default location for openebs volumes: "; read DIRPATH
fi

# point the OpenEBS configuration at the chosen volume location and apply it
sed -e "s&/var/openebs&${DIRPATH}&" /home/${THEUSER}/openebs/k8s/openebs-config.yaml | oc apply -f -

oc apply -f /home/${THEUSER}/openebs/k8s/openebs-storageclasses.yaml
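
For example, if the script is saved as install-openebs.sh (a hypothetical file name), it could be run on a master node as follows; when prompted, enter the same volume location prepared in Changing Security Context (for example /openebs):

chmod +x install-openebs.sh
./install-openebs.sh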

About OpenShift Templates

With the cluster-based deployment model, the following OpenShift templates are available:

admin-cns.yaml - Provision NuoDB's administration tier.

database-secret.yaml - Provision the initial database credentials. Required for each database created.

database-engine.yaml - Provision NuoDB TEs.

database-storage-das.yaml - Provision NuoDB SMs using Direct Attached Storage (DAS) and deployment configuration.

database-storage-cns.yaml - Provision NuoDB SMs using Container Native Storage (CNS). Default is GlusterFS. (For development and test environments only.)

hockey-load.yaml - Load a sample application using the Hockey sample database. The application creates both a query and update workload.

hockey-query.yaml - Load a sample application using the Hockey sample database. The application creates a query workload.

hockey-update.yaml - Load a sample application using the Hockey sample database. The application creates an update workload.

ycsb.yaml - Load the standard Yahoo Cloud Serving Benchmark (YCSB) JDBC client. The YCSB load can be selected with a parameter.

job-create-lbpolicy.yaml - Create a custom balancing policy. The default policy favors TEs in the following order:
  1. Pod
  2. Node
  3. Zone
  4. Any

monitor-insights.yaml - Provision NuoDB Insights visual performance monitoring collection with display link.

Note: To obtain OpenShift NuoDB Enterprise Edition database deployment templates, contact your NuoDB Technical Sales or NuoDB Services team.
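
As a general pattern, any of these templates can be inspected and instantiated with standard oc commands; the parameter names below are placeholders and should be taken from the template itself:

# list the parameters a template accepts
oc process -f database-storage-cns.yaml --parameters

# instantiate a template with a parameter value
oc process -f hockey-load.yaml -p <PARAM>=<value> | oc create -f -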