Using Incremental Hot Copy
An incremental hot copy creates a space-efficient, transactionally consistent copy of an SM, storing only those atoms that have changed since the most recent full or incremental hot copy in the backup set for that SM.
Incremental hot copy requires the use of an existing backup set (as created by a full hot copy).
How it works:
- NuoDB uses a hash of each atom to keep track of when it has changed. If an atom changes, its hash also changes.
- Only atoms whose hash has changed are written into the incremental backup.
- This includes atoms that have changed in SM memory but are so far recorded only in the journal and have not yet been written to the archive.
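The hash-based selection described above can be sketched as follows. This is an illustration only: the atom names, hash function, and manifest format are assumptions for the sketch, not NuoDB internals.

```python
import hashlib

def atom_hash(data: bytes) -> str:
    # Hash the atom's contents; any content change yields a new hash.
    return hashlib.sha256(data).hexdigest()

def incremental_copy(atoms: dict, previous_hashes: dict) -> tuple:
    """Copy only atoms whose hash differs from the previous backup.

    atoms: atom name -> current contents
    previous_hashes: atom name -> hash recorded by the last full/incremental copy
    Returns (copied atoms, updated hash manifest for the next incremental).
    """
    copied = {}
    manifest = {}
    for name, data in atoms.items():
        h = atom_hash(data)
        manifest[name] = h
        if previous_hashes.get(name) != h:
            copied[name] = data  # changed (or new): the whole atom is copied
    return copied, manifest

# A full copy starts from an empty manifest, so every atom is copied.
atoms = {"a1": b"rows", "a2": b"index"}
full, manifest = incremental_copy(atoms, {})

# After one change, only the changed atom appears in the next incremental.
atoms["a1"] = b"rows+1"
inc, manifest = incremental_copy(atoms, manifest)
print(sorted(full), sorted(inc))  # ['a1', 'a2'] ['a1']
```

Note that, as in the real mechanism, a changed atom is copied whole; the hash only decides whether an atom is included, not which bytes of it are.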
About incremental hot copy:
- Like full and journal hot copy, incremental hot copy includes every storage group the hot-copied SM serves.
- Because an incremental hot copy is relative to the most recent full or incremental hot copy in a backup set (rather than always being relative to the full hot copy), incremental hot copies do not become bigger over time.
- To request an incremental hot copy, specify --type incremental when executing the hot copy, and specify an existing backup set directory that contains a completed full hot copy.
- Unlike journal hot copy, incremental hot copy can be executed against a backup set only after the full hot copy used to create that backup set has completed.
- If your full hot copy takes longer to run than the time between incremental hot copies, it is possible to continue running incremental hot copy against the previous backup set until the new backup set is ready.
- Incremental hot copy can be run against any backup set containing a successful full hot copy. Typically, it is run against the most recently created backup set, but it is up to you to manage this and ensure the correct backup set is used.
- An incremental hot copy of an SM is approximately the size of all atoms changed (since the last full or incremental hot copy) in the SM being hot copied at the time the hot copy finishes, plus the size of that SM's journal at the time the hot copy finishes. Note that when an atom is changed, the entire atom is copied into the incremental hot copy.
Incremental hot copies in a backup set must be restored into a new archive using the NuoDB Archive utility (see NuoDB Archive). This archive can then be used to start an SM.
- By default, restoring a backup set containing incremental hot copies always restores to the state as at the end of the latest incremental hot copy.
- The restore creates the new archive in a new directory (if the directory already exists, it must be empty).
- To restore up to a specific incremental hot copy within a backup set, use the --backup-element-id option to nuoarchive. Restore is only possible to a point in time matching the finish time of one of the incremental hot copies.
- Journal hot copy instead enables fine-grained point-in-time restore, allowing restoration to a point in time of your choice.
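As an illustration, restoring a backup set only up to a particular incremental hot copy might look like the following. The destination directory and the element id value are assumptions for this sketch (the doc's example backup set contains elements such as 1.inc); check the backup set contents for the ids that actually exist.

```shell
# Hypothetical paths/id: restore only up to incremental element 1,
# rather than the latest incremental in the backup set.
nuoarchive restore --restore-dir /volumes/archives/test-restored \
    --backup-element-id 1 /volumes/backups/test-2022-04-18
```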
Restore an archive from one or more incremental hot copies
Restore an archive from a backup set using NuoDB Archive (nuoarchive restore) with the --restore-dir option.
NuoDB Archive options relevant to restoring a full hot copy from a backup set:
nuoarchive restore --restore-dir <dest-dir> <backup-set-directory>
- --restore-dir <dest-dir>: Restore into this destination directory.
- <backup-set-directory>: Restore from this backup set directory.
For more information, see Restore Data From Backup Sets.
Examples
Example 1: Incremental Hot Copy into a Backup Set
In this example:
- An empty table foo with a single integer column has already been created.
- We will use archive id 0 to generate our backups.
Steps:
- Before incremental hot copy can be executed, a backup set containing a full hot copy must exist:
nuocmd hotcopy database --db-name test --type full --backup-dirs 0 /volumes/backups/test-2022-04-18
- If we inspect the file system, we can see that the backup set directory is populated with a full hot copy:
$ ls -At1 /volumes/backups/test-2022-04-18
tmp
full
state.xml
- To make the incremental hot copy interesting, we will insert a table row using autocommit into table foo:
SQL> insert into foo values(1);
- Now we can perform a hot copy with type set to incremental. It will copy only those atoms changed since the full hot copy:
nuocmd hotcopy database --db-name test --type incremental --backup-dirs 0 /volumes/backups/test-2022-04-18
- The backup set now contains an incremental hot copy:
$ ls -At1 /volumes/backups/test-2022-04-18
tmp
1.inc
full
state.xml
- We can compare what atoms were copied in the full and incremental hot copies. The full backup copied 140 atoms, but the incremental only copied 6:
$ find /volumes/backups/test-2022-04-18/full/data -name '*.atm' | wc --lines
140
$ find /volumes/backups/test-2022-04-18/1.inc/data -name '*.atm' | wc --lines
6
- Incremental hot copy can be executed again. If no changes have been made in the database, no atom files will be copied:
$ nuocmd hotcopy database --db-name test --type incremental --backup-dirs 0 /volumes/backups/test-2022-04-18
...
$ find /volumes/backups/test-2022-04-18/2.inc/data -name '*.atm' | wc --lines
0
Example 2: Incremental Hot Copy with Storage Groups
Typically no single SM contains all storage groups; the whole point is to reduce the amount of data managed by each SM. To create a transactionally consistent backup of multiple archives where each archive contains a different set of storage groups:
- A coordinated hot copy is required over a subset of SMs that includes all storage groups in the database.
- Some storage groups may be backed up more than once, but every storage group must be backed up at least once.
Coordinated hot copy backs up multiple SMs at the same time using a single hot-copy command. In this case, let's assume we need to back up the archives with ids 0 and 2.
- First, the full backup:
nuocmd hotcopy database --db-name test --type full --backup-dirs 0 /volumes/backups/test-2022-04-18-archive-0 2 /volumes/backups/test-2022-04-18-archive-2
- Then each scheduled incremental backup:
nuocmd hotcopy database --db-name test --type incremental --backup-dirs 0 /volumes/backups/test-2022-04-18-archive-0 2 /volumes/backups/test-2022-04-18-archive-2
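The coverage requirement above (every storage group backed up at least once, possibly with overlap) can be illustrated with a small sketch. The SM-to-storage-group mapping below is a made-up example for illustration, not output of any NuoDB command:

```python
# Hypothetical mapping of archive id -> storage groups served by that SM.
sm_storage_groups = {
    0: {"sg1", "sg2"},
    1: {"sg2", "sg3"},
    2: {"sg3", "sg4"},
}

def covers_all_groups(chosen_sms, sm_to_groups):
    """Return True if the chosen SMs together serve every storage group."""
    all_groups = set().union(*sm_to_groups.values())
    covered = set().union(*(sm_to_groups[sm] for sm in chosen_sms))
    return covered == all_groups

# Archives 0 and 2 cover sg1..sg4 (sg2/sg3 overlap is allowed):
print(covers_all_groups([0, 2], sm_storage_groups))  # True
# Archives 0 and 1 miss sg4, so they are not a valid hot-copy subset:
print(covers_all_groups([0, 1], sm_storage_groups))  # False
```

This is why the example above names archives 0 and 2 in a single coordinated hot-copy command: together they cover every storage group, even though some groups are backed up twice.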