nuodb-migrator load

The nuodb-migrator load command loads data into a target NuoDB database from the data and/or metadata dump files generated by the nuodb-migrator dump command.
Syntax

nuodb-migrator load
    --target.url=jdbc:target_jdbc_url
    --target.schema=my_schema
    --target.username=userid
    --target.password=passwd
    --input.path=path_for_dump_files [ option ]...
nuodb-migrator load supports the following types of options:
-
Target database options

Specify the details for the target NuoDB database, such as the database connection, login credentials, and other related settings.

--target.url=url
    Required: No. Target database connection URL in the format jdbc:com.nuodb://host:port/database.

--target.username=username
    Required: No. Target database user name.

--target.password=password
    Required: No. Target database password.

--target.properties=properties
    Required: No. Additional connection properties encoded as a URL query string, e.g. "property1=value1&property2=value2...".

--target.schema=schema
    Required: No. Database schema name to use. If not provided, objects are generated in a schema with the same name as the source database schema.
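For example, the target options above might be combined as follows. The host, database name, schema, credentials, and property names here are placeholders, not documented NuoDB settings:

$ nuodb-migrator load \
    --target.url=jdbc:com.nuodb://localhost/db1 \
    --target.username=dba \
    --target.password=secret \
    --target.schema=hockey \
    --target.properties="property1=value1&property2=value2" \
    --input.path=/tmp/dump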
-
Input file specifications

Specify the format and location of the data dump of the source database produced by nuodb-migrator dump.

--input.path=input_path
    Required: Yes. Path on the file system where the catalog dump file, backup.cat, and the data dump file(s) were created. These are created by nuodb-migrator dump.

--input.{csv | xml}.attribute_name=attribute_value
    Required: No. Input format attributes. These are specific to the type of input being read.

    For CSV, the valid attributes (attribute_name=attribute_value) are:
    - --input.csv.encoding=encoding (the input and output encoding). Defaults to the system property file.encoding if omitted; --input.csv.encoding=UTF-8 is equivalent to JAVA_OPTS=-Dfile.encoding=UTF-8.
    - --input.csv.delimiter=char (the symbol used for value separation; must not be a line-break character). Default value is "," (comma).
    - --input.csv.quoting={ true | false } (indicates whether quotation should be used).
    - --input.csv.quote=char (the symbol used as the value encapsulation marker).
    - --input.csv.escape=char (the symbol used to escape special characters in values).
    - --input.csv.line.separator=char (the record separator to use).

    For XML, the valid attributes (attribute_name=attribute_value) are:
    - --input.xml.encoding=encoding (the default is utf-8).
    - --input.xml.version=n (should be 1.0).
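As a sketch, loading a CSV dump that was written with a pipe delimiter and UTF-8 encoding might combine these attributes as follows (the connection details and paths are placeholders):

$ nuodb-migrator load \
    --target.url=jdbc:com.nuodb://localhost/db1 \
    --target.username=dba \
    --target.password=secret \
    --input.path=/tmp/dump \
    --input.csv.encoding=UTF-8 \
    --input.csv.delimiter="|"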
-
Migration modes

Specifies the mode of nuodb-migrator load:

--data={ true | false }
    Required: No. Enables or disables loading of data. Default is true.

--schema={ true | false }
    Required: No. Enables or disables the execution of a DDL SQL script file to generate schema objects prior to loading the data. Default is true.

The combinations of these options behave as follows:

--data=true --schema=true
    Loads both data and metadata into a target NuoDB database using dump files generated by nuodb-migrator dump. This is the default.

--data=false --schema=true
    Generates objects in the target NuoDB database using metadata from dump files generated by nuodb-migrator dump.

--data=true --schema=false
    Loads data into a target NuoDB database using dump files generated by nuodb-migrator dump. The objects must already exist in the target NuoDB database.
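For example, a two-phase migration might first generate the schema objects alone and then load the data into them. The connection details and dump path below are placeholders:

$ nuodb-migrator load \
    --target.url=jdbc:com.nuodb://localhost/db1 \
    --target.username=dba --target.password=secret \
    --input.path=/tmp/dump \
    --data=false --schema=true

$ nuodb-migrator load \
    --target.url=jdbc:com.nuodb://localhost/db1 \
    --target.username=dba --target.password=secret \
    --input.path=/tmp/dump \
    --data=true --schema=false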
-
Commit strategy and insert type specifications

Specify the frequency of commits during nuodb-migrator load.

--commit.strategy={ single | batch | custom }
    Required: No. Commit strategy name: single, batch, or the fully qualified class name of a custom strategy implementing com.nuodb.migrator.jdbc.commit.CommitStrategy. Default is batch.

--commit.commit_strategy_attribute=value
    Required: No. Sets commit strategy attributes, such as commit.batch.size, which specifies the batch size to load and commit. Default is 1000, meaning the load commits after every 1000 rows are loaded.
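For example, to commit after every 5000 rows instead of the default 1000 (connection details and dump path are placeholders):

$ nuodb-migrator load \
    --target.url=jdbc:com.nuodb://localhost/db1 \
    --target.username=dba --target.password=secret \
    --input.path=/tmp/dump \
    --commit.strategy=batch \
    --commit.batch.size=5000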
-
Insert type specifications

Specify the type of SQL statements to generate for loading data. The nuodb-migrator load command can generate INSERT or REPLACE SQL statements.

--replace (-r)
    Required: No. Writes REPLACE statements rather than INSERT statements for all tables being loaded.

--table.table_name.replace
    Required: No. Writes REPLACE statements for the specified table.

--table.table_name.insert
    Required: No. Writes INSERT statements for the specified table.

--time.zone=time_zone
    Required: No. Time zone that enables data columns to be dumped and reloaded between servers in different time zones.
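For example, to write REPLACE statements for a single table while all other tables use the default INSERT statements (the table name accounts and the connection details are illustrative):

$ nuodb-migrator load \
    --target.url=jdbc:com.nuodb://localhost/db1 \
    --target.username=dba --target.password=secret \
    --input.path=/tmp/dump \
    --table.accounts.replace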
-
Schema options

These options are supported if the migration mode includes --schema=true.

Various schema command-line options
    Required: No. Various optional schema command options are also supported when the --schema=true command-line option is passed to nuodb-migrator load. For more information, see Migration modes and nuodb-migrator schema.
-
Executor options

--threads (-t)=threads
    Required: No. Number of worker threads; defaults to the number of available processors. The default command-line option values for the parallel loader are:
    - --threads=number_cpus
    - --parallelizer=table.level

--parallelizer (-p)={ table.level | row.level | custom }
    Required: No. Parallelization strategy name: table.level (default), row.level, or the fully qualified class name of a custom parallelizer implementing com.nuodb.migrator.backup.loader.Parallelizer. Table-level parallelization activates at most one worker thread per table, while row-level parallelization enables forking with more than one thread per table, where the number of worker threads is based on the weight of the loaded row set relative to the size of the loaded tables. Note that row-level forking may (and typically does) reorder the rows in the target table.

--parallelizer.parallelizer_attribute=value
    Required: No. Parallelizer attributes, such as min.rows.per.thread and max.rows.per.thread, which are the minimum possible and maximum allowed number of rows per thread; defaults are 100000 and 0 (unlimited), respectively.

    The --parallelizer.min.rows.per.thread and --parallelizer.max.rows.per.thread switches are valid with --parallelizer=row.level, so when --parallelizer=row.level is provided, the following defaults are used for the parallelizer attributes:
    - --threads=num_cpus
    - --parallelizer=row.level
    - --parallelizer.min.rows.per.thread=100000
    - --parallelizer.max.rows.per.thread=0
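For example, row-level parallelization with an explicit thread count and a raised minimum rows per thread might be requested as follows (the thread count, row threshold, and connection details are illustrative):

$ nuodb-migrator load \
    --target.url=jdbc:com.nuodb://localhost/db1 \
    --target.username=dba --target.password=secret \
    --input.path=/tmp/dump \
    --threads=8 \
    --parallelizer=row.level \
    --parallelizer.min.rows.per.thread=200000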
The output from running nuodb-migrator load includes a data type mapping summary. For each source type, the summary shows the NuoDB type to which it was mapped.
Example

The following nuodb-migrator load command restores a database dump to the NuoDB db1 database using a dump catalog file and a set of data files located in the /tmp directory.
$ nuodb-migrator load \
--target.url=jdbc:com.nuodb://localhost/db1 \
--target.username=dba \
--target.password=goalie \
--input.path=/tmp