Dumping a Source Database to be Migrated

The NuoDB Migrator dump command exports the source database data in the format specified by the dump command option --output.type (e.g. --output.type=csv). The command also generates an XML file that describes the metadata of the source database. By default, the dump command exports table data and all column and table constraints, including default values, column "On Update" defaults (MySQL only), NOT NULL, primary key, foreign key, and check constraints. The dump command also exports sequence objects owned by the source database schema. The following NuoDB Migrator dump command will migrate most source database schemas. For MySQL, replace --source.schema=my_schema with --source.catalog=my_schema.

nuodb-migrator dump                              \
        --source.driver=source_driver_class_name \
        --source.url=jdbc:source_jdbc_url        \
        --source.schema=my_schema                \
        --source.username=userid                 \
        --source.password=passwd                 \
        --output.type=csv                        \
        --output.path=path_for_dump_files
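To make the placeholders concrete, here is a hypothetical MySQL invocation. The driver class name is the standard MySQL Connector/J class; the host, port, schema name, credentials, and output path are illustrative values only and must be replaced with your own. Note the use of --source.catalog rather than --source.schema, as described above for MySQL.

```shell
# Hypothetical MySQL example: URL, catalog, credentials, and paths are placeholders.
nuodb-migrator dump                                        \
        --source.driver=com.mysql.jdbc.Driver              \
        --source.url=jdbc:mysql://localhost:3306/my_schema \
        --source.catalog=my_schema                         \
        --source.username=userid                           \
        --source.password=passwd                           \
        --output.type=csv                                  \
        --output.path=/tmp/dump
```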

When migrating a large source database, it is recommended that you defer creating indexes until after the source data has been loaded into the target database; building indexes during the load significantly slows the load processing. The following command excludes primary keys, foreign keys, and indexes from the dump. The NuoDB Migrator tool also provides a schema command that generates Data Definition Language (DDL) SQL scripts to create all indexes. See Generating DDL SQL Statement Scripts for Migrated Data for how to migrate only the indexes.

nuodb-migrator dump                              \
        --source.driver=source_driver_class_name \
        --source.url=jdbc:source_jdbc_url        \
        --source.schema=my_schema                \
        --source.username=userid                 \
        --source.password=passwd                 \
        --output.type=csv                        \
        --output.path=path_for_dump_files        \
        --meta.data.primary.key=false            \
        --meta.data.foreign.key=false            \
        --meta.data.index=false
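When keys and indexes are excluded from the dump as above, the schema command can later produce the DDL needed to create them on the target. The sketch below shows the general shape of a schema invocation; the output file name is a placeholder, and the exact options for restricting output to indexes only are covered in Generating DDL SQL Statement Scripts for Migrated Data, so verify them against your Migrator version.

```shell
# Sketch only: generate a DDL script from the source schema.
# path_for_ddl_script.sql is a placeholder output file.
nuodb-migrator schema                            \
        --source.driver=source_driver_class_name \
        --source.url=jdbc:source_jdbc_url        \
        --source.schema=my_schema                \
        --source.username=userid                 \
        --source.password=passwd                 \
        --output.path=path_for_ddl_script.sql
```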

If the source database is a live production database with ongoing updates, it is recommended that you set the transaction isolation level for the source JDBC connection so that the dump command sees a consistent read of the database while it runs. The supported values depend on the JDBC driver used. For example:

nuodb-migrator dump                              \
        --source.driver=source_driver_class_name \
        --source.url=jdbc:source_jdbc_url        \
        --source.schema=my_schema                \
        --source.username=userid                 \
        --source.password=passwd                 \
        --output.type=csv                        \
        --output.path=path_for_dump_files        \
        --source.transaction.isolation=serializable