NuoDB Migrator Dump Command

The dump command exports data from an existing NuoDB or third-party database. Data dump files are created for all tables, or for a selected subset of tables and rows, as determined by command line options. Data dump files are saved in the specified output directory along with a dump metadata description XML file.

Data dump files will only be created for tables that contain data.

The types of options that are available to the dump command are:

  • Source database options
    Specify the details for the source NuoDB or 3rd party database, such as database connection, login credentials, etc.

  • Output file specifications
    Specify the format and location for the data dump of the source database.

  • Migration mode
    Specifies the dump command mode:

Option Description

--data=true --schema=true

The dump command dumps both data and metadata from the source database. This is the default.

--data=false --schema=true

The dump command dumps only the metadata from the source database.

--data=true --schema=false

The dump command dumps only the data from the source database.

  • Data Dump Result Set
Specify objects, queries, filters, etc. used to generate the result set of the source data to be dumped. These options can also exclude data or objects from the dump by being set to false.

  • Schema Options
These options are used if the migration mode includes the schema command. For the available options, see NuoDB Migrator Schema Command.

For dumping large production databases to files, please consult with your NuoDB Professional Services team when considering the best setting for the --source.transaction.isolation=value migrator dump command line option. This value is specific to the source database vendor JDBC driver properties.


nuodb-migrator dump
        --output.path=path_for_dump_files [ option ]...

The following is a list of dump command line options.

Options related to the source database connection






--source.driver=driver


JDBC driver class name.



--source.url=url


Source database connection URL in the standard syntax: jdbc:subprotocol:subname



--source.username=username


Source database username.



--source.password=password


Source database password.


--source.properties=properties


Additional connection properties encoded as a URL query string, e.g. "property1=value1&property2=value2…​" (see Connection Properties).
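For example, when the source database is MySQL, driver-specific properties such as the connection character set can be passed this way (the property names shown are specific to the MySQL Connector/J driver and are for illustration only):

nuodb-migrator dump ... --source.properties="useUnicode=true&characterEncoding=utf-8"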



--source.catalog=catalog


Default database catalog name to use.



--source.schema=schema


Default database schema name to use.


--source.auto.commit={ true | false }


If set to true, each individual statement is treated as a transaction and is automatically committed after it is executed. The default is false.

--source.transaction.isolation=value


Sets the transaction isolation level for the migration execution. A valid value can be either a symbolic name or an integer constant representing transaction isolation levels as defined in the JDBC java.sql.Connection class. Valid values include:

  • none or 0

  • read.uncommitted or 1

  • read.committed or 2

  • repeatable.read or 4

  • serializable or 8

  • vendor_dependent_level

NuoDB does not support all of these levels; it supports only read.committed (2), serializable (8), and two additional levels that are not in the JDBC standard. See Supported Transaction Isolation Levels for more information.
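For example, to run the dump at the read committed isolation level:

nuodb-migrator dump ... --source.transaction.isolation=read.committed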

Options related to Output Specification




--output.type={ csv | xml | bson }


Output type to generate. The default is csv.



--output.path=output_path


Path on the file system where the output should be generated. If the --schema=true command line option is used, the schema DDL SQL script file is also written to this directory, with the file extension .sql.

--output.{csv | xml}.attribute_name=attribute_value


Output format attributes. These are specific to the type of output being generated, as defined by the command line option --output.type; the csv or xml prefix in the attribute must match that value. For csv, the valid attributes (attribute_name=attribute_value) are:

  • --output.csv.encoding=encoding (the input and output encoding)

  • --output.csv.delimiter=char (the symbol used for value separation, must not be a line break character)

  • --output.csv.quoting={ true | false } (indicates whether quotation should be used)

  • --output.csv.quote=char (the symbol used as value encapsulation marker)

  • --output.csv.escape=char (the symbol used to escape special characters in values)

  • --output.csv.line.separator=char (the record separator to use)

For xml, the valid attributes (attribute_name=attribute_value) are:

  • --output.xml.encoding=encoding (the default is utf-8)

  • --output.xml.version=n (should be 1.0)
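For example, a semicolon-delimited CSV dump might be requested as follows (the attribute values shown are illustrative):

nuodb-migrator dump ... --output.type=csv --output.csv.encoding=utf-8 --output.csv.delimiter=";"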

Migration modes




--data={ true | false }


Enables or disables data being dumped. Default is true.

--schema={ true | false }


Enables or disables generation of DDL SQL script file to generate schema objects. This enforces the same behavior as running the migrator tool schema command. Default is true.
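For example, to capture only the schema DDL of the source database without dumping any table data:

nuodb-migrator dump ... --data=false --schema=true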

Options related to building the data dump result set






--table=table_name


Table(s) specified for data to be dumped. The option may be given multiple times, or as a comma separated list of table names.



--table.type=table_type


Comma separated list of types of table objects (TABLE, VIEW, SYSTEM TABLE, GLOBAL TEMPORARY, ALIAS, SYNONYM, etc.) to process. By default, only TABLE types are processed. Views, triggers, functions, and stored procedures are not supported by the migrator tool.

--table.[catalog_name.][schema_name.]table_name.filter=query_filter


Selects table rows to be dumped from the table specified as table_name (optionally catalog and/or schema qualified), using a WHERE clause specified by query_filter. The migrator will dynamically generate a SELECT statement, selecting all table columns from the table, and apply the query_filter as the WHERE clause. If query_filter contains spaces, it must be enclosed in double quotes.



--threads=thread_val


Number of concurrent processing threads, where thread_val is an integer (a value of 1 forces single-threaded processing). The default value is the number of CPU cores.
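For example, to restrict the dump to a single processing thread:

nuodb-migrator dump ... --threads=1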



--query=select_statement


A SELECT statement that filters the table records to be dumped. Only the result set of the query is dumped.



--query.limit=query_limit


Maximum number of rows per chunk. When a query limit is given, the migrator splits each table into chunks using LIMIT limit OFFSET offset syntax in a database specific way, and writes each chunk to a separate file. If a query limit is not given, or is not supported by the migrator for a particular database, queries are not split.


--time.zone=time_zone


Specifies the time zone used when dumping date and time columns, enabling them to be dumped and reloaded between servers in different time zones.

Schema migration commands (supported if command line parameter option --schema=true is used)




Various schema command line options


Various optional schema command options are also supported when the --schema=true command line option is passed to the dump command. See NuoDB Migrator Schema Command for a list of supported options.

Usage Examples

Dump from a MySQL Database

The following dump command creates a dump from a MySQL database in CSV format and saves it to /tmp/:

$ nuodb-migrator dump                               \
        --source.driver=com.mysql.jdbc.Driver       \
        --source.url=jdbc:mysql://localhost/test    \
        --source.username=root                      \
        --source.password=12345                     \
        --output.type=csv                           \
        --output.path=/tmp/

Time Zone Option

The user can specify that all DATE, DATETIME, and TIMESTAMP columns be dumped in a different time zone. The time zone is identified by its ID, either an abbreviation such as PST or a full name such as America/Los_Angeles.

nuodb-migrator dump ... --time.zone=America/Los_Angeles

Dump Specific Tables

The following dump commands are equivalent and tell the migrator to dump only the specified tables.

nuodb-migrator dump ... --table=t1,t2,t3
nuodb-migrator dump ... --table=t1 --table=t2 --table=t3

Dump Subset of Rows

NuoDB Migrator allows the user to dump a subset of rows which match a condition specified with a query_filter option. The query_filter value will be appended as a WHERE clause to a dynamically generated SELECT statement, selecting all columns from the table specified. If the query_filter contains spaces, it must be enclosed in double quotes. The table_name is optionally catalog and/or schema qualified.

nuodb-migrator dump ... --table.catalog1.schema1.t1.filter=id>1000
nuodb-migrator dump ... --table.catalog1.t1.filter=id>1000
nuodb-migrator dump ... --table.schema1.t1.filter=id>1000
nuodb-migrator dump ... --table.t1.filter=id>1000
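A subset of rows can also be selected with an explicit SELECT statement using the --query option; only the result set of the query is dumped. The table and column names below are illustrative:

nuodb-migrator dump ... --query="SELECT id, name FROM t1 WHERE id > 1000"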