NuoDB Migrator Load Command

The load command loads data and/or schema metadata into a target NuoDB database from dump files generated by the NuoDB Migrator dump command.

The types of options that are available to the load command are:

  • Target database options
    Specify the details for the target NuoDB database, such as database connection, login credentials, etc.

  • Input file specifications
    Specify the location, and any format-specific attributes, of the dump files produced by the migrator dump command.

  • Migration mode
    Specify the migrator load command mode:

--data=true --schema=true
    Load both data and metadata into the target NuoDB database from the dump files generated by the NuoDB Migrator dump command. This is the default.

--data=false --schema=true
    Generate schema objects in the target NuoDB database using metadata from the dump files; no data is loaded.

--data=true --schema=false
    Load data only, using the data dump files. The schema objects must already exist in the target NuoDB database.
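As an illustration, the three modes might be invoked as follows (the connection URL, credentials, and dump path are placeholder values):

```shell
# Load both schema and data (the default mode)
nuodb-migrator load --target.url=jdbc:com.nuodb://localhost/db1 \
    --target.username=dba --target.password=passwd \
    --input.path=/tmp/dump --data=true --schema=true

# Generate schema objects only; no data is loaded
nuodb-migrator load --target.url=jdbc:com.nuodb://localhost/db1 \
    --target.username=dba --target.password=passwd \
    --input.path=/tmp/dump --data=false --schema=true

# Load data only; the schema objects must already exist
nuodb-migrator load --target.url=jdbc:com.nuodb://localhost/db1 \
    --target.username=dba --target.password=passwd \
    --input.path=/tmp/dump --data=true --schema=false
```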

  • Commit strategy specifications
    Specify how often commits are performed during load processing.

  • Insert type specifications
    Specify the type of SQL statements generated to load data. The load command can generate INSERT or REPLACE statements.

  • Schema options
    These options are used if the migration mode includes the schema command. The specific commands available are listed on the schema command listing (see NuoDB Migrator Schema Command).

The output from running the load command includes a data types mapping summary. For each source type, the summary shows the NuoDB type to which it was mapped.

Syntax

nuodb-migrator load
        --target.url=jdbc:target_jdbc_url
        --target.schema=my_schema
        --target.username=userid
        --target.password=passwd
        --input.path=path_for_dump_files  [ option ]...

The following is a list of load command line options.

Options related to the target database connection:

--target.url=url (optional)
    Target database connection URL in the format jdbc:com.nuodb://host:port/database.

--target.username=username (optional)
    Target database user name.

--target.password=password (optional)
    Target database password.

--target.properties=properties (optional)
    Additional connection properties encoded as a URL query string, for example "property1=value1&property2=value2".

--target.schema=schema (optional)
    Database schema name to use. If not provided, objects are generated in the same schema as in the source database.

Options related to input specification:

--input.path=input_path (required)
    Path on the file system where the catalog dump file, backup.cat, and the data dump file(s) were created by the migrator dump command.

--input.{csv | xml}.attribute_name=attribute_value (optional)
    Input format attributes, specific to the type of input being read. For csv, the valid attributes (attribute_name=attribute_value) are:

  • --input.csv.encoding=encoding (the input and output encoding) Defaults to system property file.encoding if omitted. --input.csv.encoding=UTF-8 is equivalent to JAVA_OPTS=-Dfile.encoding=UTF-8.

  • --input.csv.delimiter=char (the symbol used for value separation, must not be a line break character) Default value is "," (comma).

  • --input.csv.quoting={ true | false } (indicates whether quotation should be used)

  • --input.csv.quote=char (the symbol used as value encapsulation marker)

  • --input.csv.escape=char (the symbol used to escape special characters in values)

  • --input.csv.line.separator=char (the record separator to use)

For xml, the valid attributes (attribute_name=attribute_value) are:

  • --input.xml.encoding=encoding (the default is UTF-8)

  • --input.xml.version=n (should be 1.0)
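For example, a load that reads a pipe-delimited, UTF-8 encoded CSV dump might be invoked as follows (the connection options and dump path are placeholder values):

```shell
nuodb-migrator load \
    --target.url=jdbc:com.nuodb://localhost/db1 \
    --target.username=dba --target.password=passwd \
    --input.path=/tmp/dump \
    --input.csv.encoding=UTF-8 \
    --input.csv.delimiter="|" \
    --input.csv.quoting=true \
    --input.csv.quote='"'
```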

Migration modes:

--data={ true | false } (optional)
    Enables or disables loading of data. Default is true.

--schema={ true | false } (optional)
    Enables or disables execution of a DDL SQL script to generate schema objects prior to loading the data. Default is true.

Commit strategy:

--commit.strategy={ single | batch | custom } (optional)
    Commit strategy name: single, batch, or the fully qualified class name of a custom strategy implementing com.nuodb.migrator.jdbc.commit.CommitStrategy. Default is batch.

--commit.commit_strategy_attribute=value (optional)
    Sets commit strategy attributes, such as commit.batch.size, which specifies the number of rows to load and commit per batch. The default is 1000, meaning a commit is issued after every 1000 rows are loaded.
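For example, to commit after every 5000 rows instead of the default 1000 (the connection options and dump path are placeholder values; a larger batch size trades commit overhead for longer transactions):

```shell
nuodb-migrator load \
    --target.url=jdbc:com.nuodb://localhost/db1 \
    --target.username=dba --target.password=passwd \
    --input.path=/tmp/dump \
    --commit.strategy=batch \
    --commit.batch.size=5000
```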

Options related to insert type specification:

--replace (-r) (optional)
    Writes REPLACE statements rather than INSERT statements for all tables being loaded.

--table.table_name.replace (optional)
    Writes REPLACE statements for the specified table.

--table.table_name.insert (optional)
    Writes INSERT statements for the specified table.

--time.zone=time_zone (optional)
    Time zone to use, which allows date and time columns to be dumped and reloaded between servers in different time zones.
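As a sketch, the global and per-table switches can be combined; here REPLACE is used for all tables except the hypothetical table t1, which keeps INSERT (this assumes the per-table switch overrides the global --replace flag; connection options and paths are placeholder values):

```shell
nuodb-migrator load \
    --target.url=jdbc:com.nuodb://localhost/db1 \
    --target.username=dba --target.password=passwd \
    --input.path=/tmp/dump \
    --replace \
    --table.t1.insert
```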

Schema migration options (supported when --schema=true is used):

Various schema command line options (optional)
    Additional schema command options are supported when the --schema=true command line option is passed to the load command. See NuoDB Migrator Schema Command for the list of supported options.

Executor options:

--threads (-t)=threads (optional)
    Number of worker threads. Defaults to the number of available processors. The default values for the parallel loader are:

  • --threads=number_cpus

  • --parallelizer=table.level

--parallelizer (-p)={ table.level | row.level | custom } (optional)
    Parallelization strategy name: table.level (the default), row.level, or the fully qualified class name of a custom parallelizer implementing com.nuodb.migrator.backup.loader.Parallelizer. Table-level parallelization activates at most one worker thread per table. Row-level parallelization can fork more than one thread per table, with the number of worker threads based on the weight of the loaded row set relative to the size of the loaded tables. Note that row-level forking may (and typically does) reorder the rows in the target table.

--parallelizer.parallelizer_attribute=value (optional)
    Parallelizer attributes, such as min.rows.per.thread and max.rows.per.thread, which are the minimum possible and maximum allowed number of rows per thread; the defaults are 100000 and 0 (unlimited), respectively.

The --parallelizer.min.rows.per.thread and --parallelizer.max.rows.per.thread switches are valid only with --parallelizer=row.level. When --parallelizer=row.level is provided, the following defaults are used for the parallelizer attributes:

  • --threads=num_cpus

  • --parallelizer=row.level

  • --parallelizer.min.rows.per.thread=100000

  • --parallelizer.max.rows.per.thread=0
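For example, a row-level parallel load on an 8-core machine with a larger minimum chunk size might look like this (the connection options and dump path are placeholder values; the thread count and row threshold are illustrative):

```shell
nuodb-migrator load \
    --target.url=jdbc:com.nuodb://localhost/db1 \
    --target.username=dba --target.password=passwd \
    --input.path=/tmp/dump \
    --threads=8 \
    --parallelizer=row.level \
    --parallelizer.min.rows.per.thread=250000
```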

Usage Example

The following load command restores a database dump to the NuoDB db1 database using a dump catalog file and set of data files that are located in the /tmp directory.

$ nuodb-migrator load                                   \
        --target.url=jdbc:com.nuodb://localhost/db1     \
        --target.username=dba                           \
        --target.password=goalie                        \
        --input.path=/tmp