Kubernetes
For information on how to deploy production NuoDB in Kubernetes, visit the NuoDB Helm Charts GitHub repository for deployment instructions. The NuoDB Helm Charts are production ready, fully support day-2 operational tasks such as backup and recovery and rolling upgrade, and can be deployed in multi-cluster and multi-cloud environments.
Quick Start Guide
NuoDB works with Kubernetes locally or in the cloud. To get started with NuoDB in Kubernetes, a running Kubernetes cluster is required in either environment.
- Cloud: NuoDB is supported for production use in a wide range of Kubernetes environments.
- Local: For test, development, and evaluation purposes, a local Kubernetes distribution can also be used.
For more information on running Kubernetes locally on a desktop or laptop, refer to Appendix A - Running Kubernetes Locally.
Prerequisites
- A running Kubernetes cluster.
- kubectl installed and able to access the cluster.
- Helm 3.x installed.
Recommended: sufficient familiarity with Kubernetes to use kubectl get events and kubectl logs to diagnose problems.
Installing NuoDB
Configure Kubernetes
Create a Kubernetes namespace for NuoDB installation and make it the default.
kubectl create namespace nuodb
kubectl config set-context --current --namespace=nuodb
Configure NuoDB Helm charts
Fetch a local copy of the charts:
helm repo add nuodb https://nuodb.github.io/nuodb-helm-charts
To display the available Helm Charts, run:
helm search repo
NAME CHART VERSION APP VERSION DESCRIPTION
nuodb/admin 3.5.0 4.3.2 Administration tier for NuoDB.
nuodb/database 3.5.0 4.3.2 NuoDB distributed SQL database.
nuodb/restore 3.5.0 4.3.2 On-demand restore a NuoDB SQL database.
nuodb/storage-class 3.5.0 4.3.2 Storage classes for NuoDB.
nuodb/transparent-hugepage 3.5.0 4.3.2 Disables transparent_hugepage on Linux ...
Configure Default Storage Class
Persistent storage is required by the NuoDB Admin Process (AP) and Storage Manager (SM) pods. Persistent storage is managed using PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs). If a PVC does not specify a storageClassName, the default StorageClass is used. To view the available StorageClasses, run:
kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
hostpath (default) docker.io/hostpath Delete Immediate false 7d5h
In the above example, hostpath is the default StorageClass, marked by (default).
The StorageClass names may differ and there may be more than one StorageClass, but at least one must be marked as (default).
If one of the StorageClasses is marked as (default), skip to the next step.
However, if a default StorageClass is not present:
- If running locally, consult the applicable Kubernetes documentation to set the default StorageClass.
- If running in the cloud, use the NuoDB storage-class chart to set up the default StorageClass, where <provider> is amazon, google, or azure:
  helm install storage nuodb/storage-class --set cloud.provider=<provider>
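The presence of a default StorageClass can also be checked programmatically. The sketch below runs awk over saved `kubectl get storageclass` output; sample text is used here so the snippet works without a cluster (against a live cluster, pipe the kubectl command straight into awk):

```shell
# Sample of `kubectl get storageclass` output, saved to a file so this
# sketch runs without a live cluster.
cat > sc.txt <<'EOF'
NAME                 PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
hostpath (default)   docker.io/hostpath   Delete          Immediate           false                  7d5h
EOF

# Print the name of the StorageClass marked "(default)", if any.
default_sc=$(awk '/\(default\)/ {print $1}' sc.txt)
echo "default StorageClass: ${default_sc:-none}"
```

If the last line prints "none", set a default StorageClass as described above before continuing.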
Install a Limited Use License
To obtain the license file required to deploy NuoDB with a Limited Use License, contact NuoDB Support.
For information on installing the license, see Redeploy the Admin Processes (APs) using Helm.
Configure the Admin Layer
There are many options for configuring the NuoDB Admin Processes (APs), but this command uses the defaults. Configuration options are discussed further in Appendix B.
helm upgrade --install admin nuodb/admin --set nuocollector.enabled=true
NAME: admin
LAST DEPLOYED: Thu Feb 16 11:34:36 2023
NAMESPACE: nuodb
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTE: Please be patient while the chart is being deployed.
Again, although helm returns immediately, deploying the admin pod will take a minute or so.
- The NuoDB container image will be downloaded from the Docker repository specified by the nuodb.image.repository Helm value (see Appendix B) and used to run a single AP.
- Wait until the admin pod is ready; watch by running kubectl get pods periodically.
- Soon, an admin pod called admin-nuodb-cluster0-0 will come up with 1/1 displayed in the READY column.
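The readiness check can be scripted by parsing the READY column of kubectl get pods output; a pod is ready when the two numbers in that column match. A sketch (a saved sample stands in for live output so the snippet runs without a cluster):

```shell
# Sample `kubectl get pods` output; against a live cluster, replace
# the heredoc with: kubectl get pods > pods.txt
cat > pods.txt <<'EOF'
NAME                     READY   STATUS    RESTARTS   AGE
admin-nuodb-cluster0-0   1/1     Running   0          2m
EOF

# List any pods whose ready-container count does not match the total.
not_ready=$(awk 'NR > 1 { split($2, r, "/"); if (r[1] != r[2]) print $1 }' pods.txt)
if [ -z "$not_ready" ]; then
  echo "all pods ready"
else
  echo "waiting on: $not_ready"
fi
```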
Check the new NuoDB domain by running nuocmd inside the admin pod:
kubectl exec admin-nuodb-cluster0-0 -- nuocmd show domain
server version: 6.0-1-a69794e1fc, server license: Limited
server time: 2023-11-13T14:32:05.080, client token: ...
Servers:
[admin-nuodb-cluster0-0] admin-nuodb-cluster0-0.nuodb.nuodb.svc.cluster.local:48005
[last_ack = 0.49]ACTIVE (LEADER, Leader=admin-nuodb-cluster0-0, log=0/4/4) Connected *
Databases:
Configure Database
The helm command below includes several overrides (using --set) to the default configuration.
- Set the database name.
- Set the DBA "root" username and password.
- Enable external access to the processes inside the cluster.
- Reduce the disk used for storing the database data (what NuoDB calls its archive) to 5G.
- Disable Hot Copy (online) backup, which is enabled by default but not needed for this quick start; enable a Storage Manager (SM) that does not run backups instead.
- Reduce the very large defaults for CPU and memory resources.
- Enable the Insights monitoring collector agent.
Using --set gathers all the changes in one place, which is convenient when scripting start-up commands. See Appendix B - Using a Configuration File for the alternative of copying the configuration values to a YAML file and modifying them there.
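When scripting the install, the --set overrides can be collected in a bash array so the command stays readable. A sketch of the pattern, showing just a few of the overrides (echo keeps it runnable without helm installed; drop the echo to actually install):

```shell
# A few of the quick-start overrides, gathered in one place (bash syntax).
overrides=(
  --set database.name=demo
  --set database.rootUser=dba
  --set database.rootPassword=dba
  --set database.persistence.size=5Gi
)

# echo prints the command instead of running it; remove it to install.
echo helm install demo nuodb/database "${overrides[@]}"
```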
Start the database using the override options. One Transaction Engine (TE) and one Storage Manager (SM) will start:
- Linux/MacOS
For Windows, use the following command but replace all instances of "\" with "^".
helm install demo nuodb/database \
--set database.name=demo \
--set database.rootUser=dba \
--set database.rootPassword=dba \
--set database.te.labels.external-address=localhost \
--set database.persistence.size=5Gi \
--set database.sm.hotCopy.enablePod=false \
--set database.sm.noHotCopy.replicas=1 \
--set database.sm.resources.requests.cpu=500m \
--set database.sm.resources.requests.memory=500M \
--set database.sm.resources.limits.cpu=500m \
--set database.sm.resources.limits.memory=500M \
--set database.te.resources.requests.cpu=500m \
--set database.te.resources.requests.memory=500M \
--set database.te.resources.limits.cpu=500m \
--set database.te.resources.limits.memory=500M \
--set nuocollector.enabled=true \
--set database.te.dbServices.enabled=true \
--set database.legacy.directService.enabled=true
The command generates the following output:
NAME: demo
LAST DEPLOYED: Thu Feb 16 11:37:28 2023
NAMESPACE: nuodb
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTE: Please be patient while the chart is being deployed.
NuoDB can be accessed via port 48004 on the following DNS name from within the cluster:
nuodb.nuodb.svc.cluster.local - Read/Write connection
This takes a few minutes to run.
- Monitor by running kubectl get pods periodically.
- There should be two new pods, sm-demo-nuodb-cluster0-demo-database-0 and te-demo-nuodb-cluster0-demo-database-xxxx (where xxxx is a random suffix chosen by Kubernetes).
- Each will enter the Running state.
NAME                                                    READY   STATUS    RESTARTS   AGE
admin-nuodb-cluster0-0                                  3/3     Running   0          8m10s
sm-demo-nuodb-cluster0-demo-database-0                  3/3     Running   0          5m17s
te-demo-nuodb-cluster0-demo-database-6bbfc5d96c-7bq9l   3/3     Running   0          5m17s
To display the domain and newly created database, run:
kubectl exec admin-nuodb-cluster0-0 -- nuocmd show domain
Defaulted container "admin" out of: admin, nuocollector, nuocollector-config, init-disk (init)
server version: 6.0-1-a69794e1fc, server license: Limited
server time: 2023-11-13T14:32:05.080, client token: ...
Servers:
[admin-nuodb-cluster0-0] admin-nuodb-cluster0-0.nuodb.nuodb.svc.cluster.local:48005
[last_ack = 1.54] ACTIVE (LEADER, Leader=admin-nuodb-cluster0-0, log=10/74/74) Connected *
Databases:
demo [state = RUNNING]
[SM] sm-demo-nuodb-cluster0-demo-database-0/10.42.0.16:48006 [start_id = 12]
[server_id = admin-nuodb-cluster0-1] [pid = 96] [node_id = 1] [last_ack = 3.45] MONITORED:RUNNING
[TE] te-demo-nuodb-cluster0-demo-database-545f6b5d9c-4w46s/10.42.3.7:48006 [start_id = 13]
[server_id = admin-nuodb-cluster0-2] [pid = 43] [node_id = 2] [last_ack = 9.41] MONITORED:RUNNING
To view the NuoDB Helm Charts installed using helm, run:
helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
admin nuodb 2 2023-01-27 ... deployed admin-3.5.0 4.3.2
demo nuodb 1 2023-01-28 ... deployed database-3.5.0 4.3.2
Notice that demo is both the name of the Helm database deployment and the name chosen for the database.
Access the Database
Port-forwarding
Use port-forwarding to map requests from the local machine to processes in the cluster:
- Windows
  Start two new cmd windows.
  - In the first, run kubectl port-forward svc/nuodb-clusterip 48004:48004.
  - In the second, run kubectl port-forward svc/demo-nuodb-cluster0-demo-database-clusterip 48006:48006.
  - Return to the original window.
- Linux/MacOS
  Run the following commands:
  kubectl port-forward svc/nuodb-clusterip 48004:48004 > /dev/null 2>&1 &
  kubectl port-forward svc/demo-nuodb-cluster0-demo-database-clusterip 48006:48006 > /dev/null 2>&1 &
Ports 48004 and 48006 are the default ports for an AP and a TE respectively. The database demo can now be accessed as if it were running locally on the machine.
Port forwarding is used to simplify the database external access configuration for this quick start and works in a wide variety of Kubernetes cluster deployments. To configure database external access in production, see Connect to NuoDB Database Externally.
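Before connecting, it can be useful to confirm that the forwarded AP port is actually listening. A minimal sketch using bash's /dev/tcp feature (this assumes bash; the port is only reachable while the port-forward is running):

```shell
# Try to open a TCP connection to the forwarded AP port (bash /dev/tcp).
if (exec 3<>/dev/tcp/localhost/48004) 2>/dev/null; then
  echo "AP port 48004 is reachable"
else
  echo "AP port 48004 is not reachable - is the port-forward running?"
fi
```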
Connect to the database
- Option 1
  Connect from any favorite SQL IDE tool (such as DBeaver or DbVisualizer):
  - Host: localhost
  - Port: 48004
  - Database name: demo
  - Username: dba
  - Password: dba
- Option 2
  Run NuoDB’s command-line SQL tool nuosql from inside the AP pod:
  kubectl exec -it admin-nuodb-cluster0-0 -- bash
  nuosql demo --user dba --password dba --connection-property PreferInternalAddress=true
Use the SYSTEM.Nodes table to view the domain:
SELECT id, startid, address, port, state, type, release_ver FROM system.nodes;
ID STARTID ADDRESS PORT STATE TYPE RELEASE_VER
--- -------- ----------- ----- ------- ----------- ------------------
1 0 172.17.0.12 48006 Running Storage 4.3.2-1-a69794e1fc
2 1 172.17.0.11 48006 Running Transaction 4.3.2-1-a69794e1fc
The Nodes table shows two NuoDB processes (or nodes) running: a Storage Manager (SM) and a Transaction Engine (TE).
For full documentation of the NuoDB Helm Charts, refer to https://github.com/nuodb/nuodb-helm-charts.
Import the Sample Ice Hockey Database
Open a bash shell session in the admin-nuodb-cluster0-0 pod:
kubectl exec -it admin-nuodb-cluster0-0 -- bash
Use the following commands to import the sample ice hockey database schemas into the empty demo database:
nuosql demo --schema hockey --user dba --password dba --connection-property PreferInternalAddress=true --file /opt/nuodb/samples/quickstart/sql/create-db.sql
nuosql demo --schema hockey --user dba --password dba --connection-property PreferInternalAddress=true --file /opt/nuodb/samples/quickstart/sql/Players.sql
nuosql demo --schema hockey --user dba --password dba --connection-property PreferInternalAddress=true --file /opt/nuodb/samples/quickstart/sql/Scoring.sql
nuosql demo --schema hockey --user dba --password dba --connection-property PreferInternalAddress=true --file /opt/nuodb/samples/quickstart/sql/Teams.sql
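The four import commands differ only in the SQL file name, so they can be generated with a loop. A sketch (echo prints each command so the snippet runs anywhere; drop the echo inside the admin pod, where nuosql and the sample SQL files exist, to execute them):

```shell
# Generate the four nuosql import commands; remove `echo` to run them
# inside the admin pod.
for f in create-db Players Scoring Teams; do
  echo nuosql demo --schema hockey --user dba --password dba \
    --connection-property PreferInternalAddress=true \
    --file "/opt/nuodb/samples/quickstart/sql/${f}.sql"
done
```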
Using NuoSQL
To try out some simple nuosql commands, invoke an interactive nuosql session connecting to the demo database.
nuosql demo --schema hockey --user dba --password dba --connection-property PreferInternalAddress=true
USE hockey;
SHOW tables;
Tables in schema HOCKEY
HOCKEY
PLAYERS
SCORING
TEAMS
VW_PLAYER_STATS is a view
In the above example, USE hockey; switches from the default "USER" schema to the "HOCKEY" schema, and SHOW tables; lists the tables in that schema.
Try out some more nuosql commands on the hockey database, such as:
SELECT * FROM teams WHERE teams.year=2011;
Now try a more advanced query such as:
SELECT p.lastName, p.firstName, s.year, s.teamID, s.gamesPlayed
FROM players p, scoring s
WHERE p.birthCountry='Slovakia'
AND s.playerID = p.playerID ORDER BY p.lastName;
When finished, type quit to exit the interactive nuosql session.
Type exit to exit the bash shell in the admin-nuodb-cluster0-0 container.
Appendix C covers installing NuoDB’s Insights monitoring tool.
When finished, remember to clean up resources - see Appendix D.
Appendix A - Running Kubernetes Locally
Possible options are:
- Docker Desktop has an option in its Settings to run a Kubernetes cluster. Just tick the box, apply, and wait a few minutes. It automatically installs kubectl and sets up kubeconfig to enable access to the cluster.
- Rancher’s k3d allows running their minimal Kubernetes (k3s) on top of Docker (so a Docker installation is still needed). It is lighter weight than the cluster built into Docker Desktop.
- Canonical’s MicroK8s has the advantage that it does not require Docker.
- minikube can run with or without Docker, provided there is virtual machine support available on the platform in use.
All four are available for Windows, MacOS, and Linux.
Helm is available at GitHub.
- There is a binary install for most platforms, or use the package manager for the platform in use.
- Windows users should download the installer zip, unpack it, and copy helm.exe to C:\Windows\System32.
Appendix B - Using a Configuration File
As an alternative to using --set, it is often easier to copy and modify the database Helm chart’s configuration, especially if planning to make further changes later. Moreover, the file can be kept under version control.
The following sets up the same configuration used above.
Get the db-values.yaml configuration file for the chart:
helm inspect values nuodb/database > db-values.yaml
Step 1: Modify the YAML
Make the following changes in db-values.yaml:
- Search for name: demo to see this section. This is where to change the database name and set the root DBA user name and password.
  database:
    ## Provide a name in place of the chart name for `app:` labels
    ##
    #nameOverride: ""
    ## Provide a name to substitute for the full names of resources
    ##
    #fullnameOverride: ""
    # name
    # NuoDB Database name. must consist of lowercase alphanumeric
    # characters '[a-z0-9]+'
    name: demo
    # rootUser
    # Name of Database user
    rootUser: dba
    # rootPassword
    # Database password
    rootPassword: secret
- Scroll down a short way and set the persistence size to 5Gi as shown (the default of 20G is unnecessary):
  ## Import Environment Variables from one or more configMaps
  # Ex: configMapRef: [ myConfigMap, myOtherConfigMap ]
  ##
  envFrom:
    configMapRef: []
  persistence:
    size: 5Gi
    accessModes:
    - ReadWriteOnce
    # storageClass: "-"
- Search for hotCopy: (note the colon) and set enablePod to false to disable hotcopy (online) backups:
  #...
  hotCopy:
    enablePod: false
    enableBackups: true
    replicas: 1
  ...
- Search for noHotCopy: (note the colon) and enable an SM that does not perform backups by setting replicas to 1:
  # ...
  noHotCopy:
    enablePod: true
    replicas: 1
- Search for resources: (note the colon) and set the following values to reduce the memory limits for SM resources. The default values are sized for a reasonably large production database and are too big for a demo.
  ## resources
  # k8s resource min (request) and max (limit)
  # min is also used for the target maximum memory used by the cache (NuoDB --mem option)
  resources:
    limits:
      cpu: 500m
      memory: 500M
    requests:
      cpu: 500m
      memory: 500M
- Search for dbServices and enable them (remove the curly brackets {} and the #, and set enabled to true):
  ## By default, the database clusterip service for direct TE connections is enabled,
  ## but can be optionally disabled here
  dbServices:
    enabled: true
- Just below there is resources: again (these are the TE resources). Make the same changes as before.
- Just below there is a labels section; modify it to add two labels as shown to enable database access from outside the cluster. Be sure to remove the curly brackets after labels:
  ## Affinity, selector, and tolerations
  # These are expanded as YAML, and can include variable and template references
  affinity: {}
  # nodeSelector: {}
  # tolerations: []
  # labels
  # Additional Labels given to the TEs started
  labels:
    external-address: localhost
    external-port: 48006
- Search for legacy.
  - Under directService set enabled: true.
  - Under nuocollector set enabled: true.
  legacy:
    headlessService:
      enabled: false
    directService:
      enabled: true
  nuocollector:
    # Enable NuoDB Collector by setting nuocollector.enabled=true
    enabled: true
- Save the changes.
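Taken together, the edits above amount to fragments like the following in db-values.yaml. This is a condensed sketch only: the real file interleaves these keys with many other settings (including the SM and TE resources sections), and indentation must match the surrounding sections.

```yaml
# Condensed view of the quick-start overrides (matches the --set paths
# used earlier); not a complete values file.
database:
  name: demo
  rootUser: dba
  rootPassword: secret
  persistence:
    size: 5Gi
  sm:
    hotCopy:
      enablePod: false
    noHotCopy:
      enablePod: true
      replicas: 1
  te:
    dbServices:
      enabled: true
    labels:
      external-address: localhost
      external-port: 48006
  legacy:
    directService:
      enabled: true
nuocollector:
  enabled: true
```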
Step 2: Deploying the Chart
- As more than one database may be deployed, choose the name of the Helm deployment to be the same as the name of the database (in the example above it is called demo).
- Deploy the chart by running:
  helm install <db-name> nuodb/database --values db-values.yaml
  The <db-name> is the Helm deployment name.
Appendix C - Running Insights/Grafana and a YCSB Workload (Optional)
Use helm to add NuoDB Insights visual monitoring to the deployment environment.
NuoDB Insights displays real-time and historical performance data graphically to assist with workload and/or root-cause analysis.
helm repo add nuodb-insights https://nuodb.github.io/nuodb-insights
Once the repository has been added:
helm install insights nuodb-insights/insights --namespace nuodb
NAME: insights
LAST DEPLOYED: Thu Feb 16 12:43:12 2023
NAMESPACE: nuodb
STATUS: deployed
REVISION: 1
NOTES:
Periodically run kubectl get pods until Insights is up and running.
NAME READY STATUS RESTARTS AGE
admin-nuodb-cluster0-0 3/3 Running 0 69m
insights-grafana-76d68674c-j29q9 3/3 Running 0 77s
insights-influxdb-0 1/1 Running 0 77s
sm-demo-nuodb-cluster0-demo-database-0 3/3 Running 0 67m
te-demo-nuodb-cluster0-demo-database-6bbfc5d96c-7bq9l 3/3 Running 0 67m
The following command outputs the password for the Grafana instance:
kubectl get secrets insights-grafana -n nuodb -o jsonpath={.data.admin-password} | base64 --decode
In a separate console, make Grafana accessible via the browser:
kubectl port-forward service/insights-grafana 8080:80
Using any web browser, navigate to http://localhost:8080/ and log in with username admin and the password output from the previous command.
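The password command above works by reading the base64-encoded admin-password field from the insights-grafana Secret and decoding it. The decode half can be sketched locally with a stand-in value:

```shell
# Kubernetes stores Secret data base64-encoded; this mimics the decode
# step using a stand-in value in place of the real admin-password field.
encoded=$(printf 'S3cr3tPass' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "decoded password: $decoded"
```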
Now that there is a graphical representation of the database activity, start a YCSB workload so there is something to see. Add the repo:
helm repo add nuodb-incubator https://nuodb.github.io/nuodb-helm-charts/incubator
To get the workload running:
- Linux/MacOS
For Windows, use the following command but replace all instances of "\" with "^".
helm install ycsb nuodb-incubator/demo-ycsb \
--set ycsb.teDirect=true \
--set ycsb.replicas=1 \
--set ycsb.opsPerIteration=50000 \
--set database.name=demo \
--set ycsb.dbSchema=user \
--set ycsb.noOfProcesses=10 \
--set ycsb.noOfRows=50000 \
--set ycsb.workload=b
The command generates the following output:
NAME: ycsb
LAST DEPLOYED: Thu Feb 16 12:51:21 2023
NAMESPACE: nuodb
STATUS: deployed
REVISION: 1
TEST SUITE: None
Leave YCSB running for a few minutes, then return to the Grafana dashboard.
- In the left side panel, click on the 2-by-2 grid of squares, then in the drop-down click Manage.
- There are several dashboards listed under the nuodb folder.
- The two to start with are NuoDB Ops System Overview and NuoDB Overview.
- Click on either to see some activity happening.
Both dashboards will appear in the bottom-left Dashboards panel of the home page.
Appendix D - Clean Up
To clean up and delete all of the resources, delete the entire namespace and everything in it:
kubectl delete namespace nuodb
Alternatively, it is possible to clean up any of the Helm deployments individually. For example, to get rid of YCSB and the database but leave Insights and the domain (admin pods) ready to create a different database later:
- Remove the YCSB benchmark: helm delete ycsb.
- Remove Insights: helm delete insights (see warning below).
- Remove the database: helm delete demo (see warning below).
- Remove the domain (the admin pods): helm delete admin (see warning below).
Uninstalling a Helm deployment does not delete any associated Persistent Volume Claims (PVCs). After uninstalling the insights, database, and/or admin chart, PVCs must be deleted manually. For more information, see Cleaning up PVCs.
Cleaning up PVCs
- Use kubectl get pvc to find the PVCs. The output will look something like this:
  kubectl get pvc -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,VOLUME:.spec.volumeName
  NAME                                                    STATUS    VOLUME
  raftlog-admin-nuodb-cluster0-0                          Unbound   pvc-07ed40c3-3979-47ab-ad60-12146e6de568
  archive-volume-sm-demo-nuodb-cluster0-demo-database-0   Unbound   pvc-4e2214b3-7a46-4408-a3e4-c1b75053849c
  - The admin pod’s PVC is the Raft log volume.
  - The SM’s PVC is the archive volume.
  - The TE has no persistent volume claims.
- Before removing a PVC, check that the volume is marked as Unbound, since the pod that was using it has been deleted.
- Run kubectl delete pvc <NAME> to remove a PVC. For example:
  kubectl delete pvc raftlog-admin-nuodb-cluster0-0
  kubectl delete pvc archive-volume-sm-demo-nuodb-cluster0-demo-database-0
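The Unbound check can also be scripted. This sketch filters saved kubectl get pvc output for PVCs whose STATUS column reads Unbound (sample text stands in for live output; against a live cluster, pipe the kubectl command into awk directly):

```shell
# Sample `kubectl get pvc` output saved to a file for an offline sketch.
cat > pvc.txt <<'EOF'
NAME                                                    STATUS    VOLUME
raftlog-admin-nuodb-cluster0-0                          Unbound   pvc-07ed40c3-3979-47ab-ad60-12146e6de568
archive-volume-sm-demo-nuodb-cluster0-demo-database-0   Unbound   pvc-4e2214b3-7a46-4408-a3e4-c1b75053849c
EOF

# Print only PVCs no longer bound to a pod (candidates for deletion).
awk 'NR > 1 && $2 == "Unbound" { print $1 }' pvc.txt
```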