nuodb-migrator dump

The nuodb-migrator dump command exports data from an existing NuoDB or third-party database. Data dump files are created for all tables or for a selected subset of tables, as determined by the command line options. The dump files are saved in the specified output directory along with a dump metadata description XML file.

Data dump files will only be created for tables that contain data.

Syntax

nuodb-migrator dump
        --source.driver=source_driver_class_name
        --source.url=jdbc:source_jdbc_url
        --source.schema=my_schema
        --source.username=userid
        --source.password=passwd
        --output.type=csv
        --output.path=path_for_dump_files [ option ]...

nuodb-migrator dump supports the following types of options:

  • Source database options
    Specify the details for the source NuoDB or third-party database, such as database connection, login credentials and other related settings.

    --source.driver=driver
    Required: Yes. JDBC driver class name.

    --source.url=url
    Required: Yes. Source database connection URL in the standard syntax: jdbc:subprotocol:subname

    --source.username=username
    Required: Yes. Source database username.

    --source.password=password
    Required: No. Source database password.

    --source.properties=properties
    Required: No. Additional connection properties encoded as a URL query string, for example "property1=value1&property2=value2…" (see Connection Properties).

    --source.catalog=catalog
    Required: No. Default database catalog name to use.

    --source.schema=schema
    Required: No. Default database schema name to use.

    --source.auto.commit={ true | false }
    Required: No. If set to true, each individual statement is treated as a transaction and is automatically committed after it is executed. The default is false.

    --source.transaction.isolation=value
    Required: No. Sets the transaction isolation level for the migration. The value can be either a symbolic name or an integer constant representing a transaction isolation level as defined in the JDBC java.sql.Connection class. Valid values include:

    • none or 0
    • read.uncommitted or 1
    • read.committed or 2
    • repeatable.read or 4
    • serializable or 8
    • vendor_dependent_level

    NuoDB does not support all of these levels; it supports only read.committed (2), serializable (8), and two additional levels that are not in the JDBC standard. See Supported Transaction Isolation Levels for more information.

  • Output file specifications
    Specify the format and location for the data dump of the source database.

    --output.type={ csv | xml | bson }
    Required: Yes. Output type to generate. The default is csv.

    --output.path=output_path
    Required: No. Path on the file system where the output should be generated. If the --schema=true command line option is used, the schema DDL SQL script file is also written to this directory, with the file extension .sql.

    --output.{csv | xml}.attribute_name=attribute_value
    Required: No. Output format attributes. These are specific to the type of output being generated, so the csv or xml prefix must match the value specified for --output.type. For csv, the valid attributes (attribute_name=attribute_value) are:

    • --output.csv.encoding=encoding (the input and output encoding)
    • --output.csv.delimiter=char (the symbol used for value separation; must not be a line break character)
    • --output.csv.quoting={ true | false } (indicates whether quotation should be used)
    • --output.csv.quote=char (the symbol used as the value encapsulation marker)
    • --output.csv.escape=char (the symbol used to escape special characters in values)
    • --output.csv.line.separator=char (the record separator to use)

    For xml, the valid attributes (attribute_name=attribute_value) are:

    • --output.xml.encoding=encoding (the default is utf-8)
    • --output.xml.version=n (should be 1.0)

  • Migration modes
    Specify the mode of operation for nuodb-migrator dump:

    --data={ true | false }
    Required: No. Enables or disables dumping of data. The default is true.

    --schema={ true | false }
    Required: No. Enables or disables generation of the DDL SQL script file that creates the schema objects. This enforces the same behavior as running nuodb-migrator schema. The default is true.

    The combinations of these options behave as follows (see also the Dump Schema Only or Data Only example below):

    --data=true --schema=true
    Dump both data and metadata from the source database. This is the default behavior.

    --data=false --schema=true
    Dump only the metadata from the source database.

    --data=true --schema=false
    Dump only the data from the source database.

  • Data dump result set
    Specify the objects, queries, and filters used to generate the result set of source data to be dumped. These options can also exclude data or objects from the dump by setting the corresponding options to false.

    --table=table_name[,table_name…]
    Required: No. Table(s) whose data is to be dumped.

    --table.type=table_type[,table_type…]
    Required: No. Comma-separated list of table object types (TABLE, VIEW, SYSTEM TABLE, GLOBAL TEMPORARY, ALIAS, SYNONYM, etc.) to process. By default, only TABLE types are processed. Views, triggers, functions, and stored procedures are not supported by the nuodb-migrator tool.

    --table.[catalog_name.][schema_name.]table_name.filter=query_filter
    Required: No. Selects the rows to be dumped from the table specified as table_name (optionally catalog and/or schema qualified), using a WHERE clause specified by query_filter. nuodb-migrator dynamically generates a SELECT statement that selects all columns from the table and applies query_filter as the WHERE clause. If query_filter contains spaces, it must be enclosed in double quotes.

    --threads=thread_val
    Required: No. Number of concurrent processing threads, where thread_val is an integer (for example, a value of 1 causes single-threaded processing). The default is the number of CPU cores.

    --query=query
    Required: No. A SELECT statement that filters the table records to be dumped. Only the result set of the query is dumped.

    --query.limit=query_limit
    Required: No. Maximum number of rows per chunk. When a query limit is given, each table is split into chunks using database-specific LIMIT limit OFFSET offset syntax, and each chunk is written to a separate file. If a query limit is not given, or is not supported by nuodb-migrator for a particular database, queries are not split.

    --time.zone=time_zone
    Required: No. Time zone in which date columns are dumped, enabling data to be dumped and reloaded between servers in different time zones.

  • Schema Options
    These options are supported if the migration mode includes --schema=true.

    Various schema command line options
    Required: No. Additional schema command line options are supported when the --schema=true command line option is passed to nuodb-migrator dump. For more information, see Migration modes above and nuodb-migrator schema.

For dumping large production databases to files, consult your NuoDB Professional Services team about the best setting for the --source.transaction.isolation=value nuodb-migrator dump option. The supported values are specific to the source database vendor's JDBC driver.

Examples

Dump from a MySQL Database

The following nuodb-migrator dump command creates a dump from a MySQL database in CSV format and saves it to /tmp/dump.cat.

$ nuodb-migrator dump                               \
        --source.driver=com.mysql.jdbc.Driver       \
        --source.url=jdbc:mysql://localhost/test    \
        --source.username=root                      \
        --source.password=12345                     \
        --output.type=csv                           \
        --output.path=/tmp
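
Specify the Transaction Isolation Level

The --source.transaction.isolation option can be added to any dump command. The following sketch uses the read.committed level; the appropriate level depends on the source database and its JDBC driver, so confirm the setting for your environment:

nuodb-migrator dump ... --source.transaction.isolation=read.committed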

Time Zone Option

The user can specify that all DATE, DATETIME, and TIMESTAMP columns be dumped in a different time zone. The time zone is specified by its identifier, either an abbreviation such as PST or a full name such as America/Los_Angeles.

nuodb-migrator dump ... --time.zone=UTC+2
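
Output Format Attributes

The output format attributes can be combined with the required output options. The following sketches are illustrative (the attribute values shown are examples, not requirements): the first dumps CSV with an explicit encoding and quoting enabled, and the second dumps XML with an explicit encoding and version:

nuodb-migrator dump ... --output.type=csv --output.csv.encoding=utf-8 --output.csv.quoting=true
nuodb-migrator dump ... --output.type=xml --output.xml.encoding=utf-8 --output.xml.version=1.0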

Dump Specific Tables

The following nuodb-migrator dump commands are identical and tell the nuodb-migrator tool to dump only the specified tables.

nuodb-migrator dump ... --table=t1,t2,t3
nuodb-migrator dump ... --table=t1 --table=t2 --table=t3
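
Dump Schema Only or Data Only

The migration mode options control whether data, metadata, or both are dumped. As a sketch, the first command below dumps only the schema DDL and the second dumps only the data:

nuodb-migrator dump ... --data=false --schema=true
nuodb-migrator dump ... --data=true --schema=false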

Dump Subset of Rows

nuodb-migrator allows the user to dump a subset of rows that match a condition specified with a query_filter option. The query_filter value is appended as a WHERE clause to a dynamically generated SELECT statement that selects all columns from the specified table. If the query_filter contains spaces, it must be enclosed in double quotes. The table_name may optionally be catalog and/or schema qualified.

nuodb-migrator dump ... --table.catalog1.schema1.t1.filter=id>1000
nuodb-migrator dump ... --table.catalog1.t1.filter=id>1000
nuodb-migrator dump ... --table.schema1.t1.filter=id>1000
nuodb-migrator dump ... --table.t1.filter=id>1000
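
Dump Using a Query, Query Limit, or Thread Count

The result set options can also restrict or partition what is dumped. The following sketches are illustrative (the table name, SELECT statement, and numeric values are examples): the first dumps only the result set of a query, the second splits the dump of a table into chunks of at most 100000 rows, and the third sets the number of processing threads:

nuodb-migrator dump ... --query="SELECT id, name FROM t1 WHERE id > 1000"
nuodb-migrator dump ... --table=t1 --query.limit=100000
nuodb-migrator dump ... --threads=4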