Using Backup Sets
Hot copies may be organized into backup sets. Backup sets provide a convenient mechanism for identifying hot copies and maintaining metadata information locally while allowing data to be compressed and/or moved to an alternative (for example, offsite) storage location. Backup sets are also required in order to take advantage of the new incremental hot copy and journal hot copy for point-in-time restore.
Backup sets are optional if using only full hot copy.
Backup sets are required if using incremental hot copy.
Backup sets are required if using journal hot copy.
A backup set is a directory created when initiating a full hot copy. The full hot copy is stored into a subdirectory in the backup set called full. Zero or more journal hot copies and zero or more incremental hot copies may be stored into the latest backup set created for each SM. Journal and incremental hot copies are stored in numbered subdirectories such as 2.jnl. Subdirectories containing hot copies are called hot copy elements.
Inside each hot copy element are two directories, control and data. The contents of the control directory must not be moved or compressed until the backup set is destroyed. The contents of the data directory may be compressed, and may be moved to an alternative storage location (for example, offsite). The contents of the data directories of hot copy elements in a backup set must be moved back and uncompressed before performing a restore using that backup set.
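As a minimal sketch of compressing a data directory for transfer to alternative storage, the following uses hypothetical paths and creates a sample element layout purely for demonstration; only the data directory is archived, and the control directory is left in place as required:

```shell
# Hypothetical backup set element path used for illustration only.
ELEMENT=/tmp/hotcopy/2017-12-18/1.inc

# Create a sample hot copy element layout for demonstration purposes.
mkdir -p "$ELEMENT/control" "$ELEMENT/data"
echo "sample" > "$ELEMENT/data/example.dat"

# Compress only the data directory, relative to the element root so the
# archive unpacks back into place later. The control directory is untouched.
tar -czf "$ELEMENT/data.tar.gz" -C "$ELEMENT" data

# List the archive contents; once verified, data/ can be deleted locally and
# data.tar.gz moved offsite. To prepare for restore, unpack it back into the
# element with: tar -xzf data.tar.gz -C "$ELEMENT"
tar -tzf "$ELEMENT/data.tar.gz"
```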
Here is an example of the directory structure in a backup set containing one full hot copy, two incremental hot copies, and two journal hot copies.
/tmp/hotcopy/2017-12-18
├── 1.inc
│   ├── control
│   └── data
├── 1.jnl
│   ├── control
│   └── data
├── 2.inc
│   ├── control
│   └── data
├── 2.jnl
│   ├── control
│   └── data
├── full
│   ├── control
│   └── data
├── tmp
└── state.xml
To store a hot copy into a backup set, specify the backupSetDirectory option when executing hotcopy, in place of the destinationArchiveDirectory and (optional) destinationJournalDirectory options. When hot copying into a backup set, the type option is required. The type option may be full, incremental, or journal.
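As a sketch of what this might look like from the NuoDB Manager hotcopy command (the database name, host, and backup set path here are hypothetical, and the exact syntax should be checked against the documentation for the release in use):

```
hotcopy database mydb host localhost type full backupSetDirectory /tmp/hotcopy/2017-12-18
hotcopy database mydb host localhost type incremental backupSetDirectory /tmp/hotcopy/2017-12-18
hotcopy database mydb host localhost type journal backupSetDirectory /tmp/hotcopy/2017-12-18
```

The first command creates the backup set and stores the full hot copy into its full element; the later commands add numbered .inc and .jnl elements to the same backup set.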
Data in a backup set must be restored into an archive before it can be used to start an SM (in order to make the data accessible in a database). This holds true for all hot copy types.
Metadata must not be deleted from backup sets. To delete an old backup, delete the oldest backup set, repeating as necessary to delete as many old backups as desired.
It is highly recommended that each backup set be named by the date or date and time it is created. NuoDB tools do not rely on this naming convention, but it simplifies management of backup sets, provides a way to search for full and incremental hot copies of interest, and provides a first approximation to the backup set containing a timestamp of interest for point-in-time restore.
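The naming convention above can be applied mechanically when starting a new backup set. This sketch derives a date-stamped backup set path; the parent directory is a hypothetical example:

```shell
# Hypothetical parent directory holding all backup sets for one SM.
PARENT=/tmp/hotcopy

# Name the new backup set by its creation date, e.g. /tmp/hotcopy/2017-12-18.
BACKUP_SET="$PARENT/$(date +%Y-%m-%d)"
mkdir -p "$BACKUP_SET"

echo "$BACKUP_SET"
```

Because the names sort chronologically, this convention also makes it easy to find the oldest backup set when deleting old backups.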
The backup sets created by executing full hot copy on an SM must be stored in the same parent directory.
Full hot copy produces a consistent copy of an SM. Incremental hot copy produces a space-efficient copy of an SM. Journal hot copy copies the changes since the last journal hot copy, enabling point-in-time restore. These copies are all local to the host where the SM is running, unless they are made to a network-attached disk.
If backup requirements include moving hot copies to alternative storage, perhaps offsite, then hot copies can be moved after they are created.
When using backup sets, the data directories may be compressed and moved to alternative storage (and deleted from local storage).
It may be desirable to delete old hot copies to save space, or to comply with data retention requirements.
When using backup sets, the minimum unit of deletion is the backup set. Delete backup sets in order from oldest to newest.
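Given the date-based naming convention recommended above, oldest-first deletion can be sketched as follows; the parent directory and backup set names are hypothetical, and sample sets are created here purely for demonstration:

```shell
# Hypothetical parent directory; sample backup sets created for demonstration.
PARENT=/tmp/hotcopy-retention
mkdir -p "$PARENT/2017-12-16" "$PARENT/2017-12-17" "$PARENT/2017-12-18"

# Date-named backup sets sort chronologically, so the first entry is oldest.
oldest=$(ls "$PARENT" | sort | head -n 1)

# Delete the entire backup set: the backup set is the minimum unit of deletion.
rm -rf "${PARENT:?}/$oldest"

ls "$PARENT"
```

Repeat the last two steps to delete as many old backups as desired, always taking the oldest remaining backup set first.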
If a backup set contains a transaction of interest, do not delete the prior backup set.
Start a new backup set periodically to ensure that backup sets can be deleted later to conserve space and comply with data retention requirements.
New backup sets should also be started periodically to limit the impact of corruption in the storage where backups are kept: a corruption in a journal hot copy corrupts all later journal hot copies in that backup set (but not in later backup sets), and a corruption in an incremental hot copy corrupts all later incremental hot copies in that backup set (but not in later backup sets).
Constructing an archive from many incremental hot copies can also become expensive, so starting a new backup set can reduce restore latency.