Create a metadata migration
Metadata migrations transfer existing metadata, as well as any subsequent changes made to the source metadata within the scope of the migration, for as long as Hive Migrator remains running.
If you're using MariaDB or MySQL, add the JDBC driver to the classpath manually.
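For example, a minimal sketch of adding the MySQL Connector/J driver; the destination directory and service name below are placeholders, so substitute the library path and service used by your Hive Migrator installation.

```bash
# Copy the JDBC driver jar into the Hive Migrator agent's library directory
# (placeholder path - use your installation's classpath location).
cp mysql-connector-java-8.0.33.jar /path/to/hivemigrator/agent/lib/

# Restart the agent so the driver is picked up (service name may differ).
systemctl restart hivemigrator
```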
Wildcard support
When creating patterns for your migration, note the following:
- If you're using Hive 1, 2, or 3: only use patterns with the wildcards * and |. For example, --database-pattern test* will match any database whose name begins with "test", such as test01, test02, or test03 (see the sketch after this list).
- If you're using Hive 4: use any wildcards based on Hive's Data Definition Language (DDL).
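As a sketch, here's how those wildcards might be supplied when defining a rule from the CLI. The hive rule add subcommand, the rule name, and the --table-pattern parameter are assumptions for illustration; --database-pattern comes from the example above.

```bash
# Hypothetical rule matching every table in any database whose name starts with "test"
# (Hive 1/2/3-style wildcards: * and |).
hive rule add --name testDatabases --database-pattern test* --table-pattern *
```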
Create a metadata migration with the UI
Ensure you have migrated the data for the databases and tables you want to migrate.
You need both the data and associated metadata before you can successfully run queries on migrated databases.
From your Dashboard, select the product under Products.
Under Metadata Migrations, select Create metadata migration.
Enter a name for this migration.
Select a source and target agent.
Create a Database Pattern and Table Pattern that match the databases and tables you want to migrate.
(Optional) - Use the options under Target Agent Configuration Overrides to override your Databricks target agent configuration for this migration.
Catalog: Enter the name of your Databricks Unity Catalog.
Convert to Delta format: Select to convert your tables to Delta Lake format after migrating to Databricks.
Delete after conversion: Select to delete the underlying table data and metadata from the Filesystem Mount Point location after it has been converted to Delta Lake in Databricks.
Info: Only use this option if you're performing one-time migrations of the underlying table data. The Databricks agent doesn't support continuous (live) updates of table data if you're converting to Delta Lake in Databricks.
Filesystem Mount Point: The filesystem that contains the data you want to migrate must be mounted on your DBFS.
Enter the mounted container's path on the DBFS.
(Optional) - Enter another path for Default Filesystem Override.
If you select Convert to Delta Lake, enter the location on the DBFS to store the tables converted to Delta Lake. To store Delta Lake tables on cloud storage, enter the path to the mount point and the path on the cloud storage.
Example: Location on the DBFS to store tables converted to Delta Lake: dbfs:<location>
Example: Cloud storage location: dbfs:/mnt/adls2/storage_account/
If you don't select Convert to Delta, enter the mount point.
Example: Filesystem mount point: dbfs:<value of Fs mount point>
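Before creating the migration, you can confirm the mount point is visible in DBFS. A minimal check, assuming the Databricks CLI is installed and configured against the target workspace:

```bash
# List the mounted container to verify the Filesystem Mount Point path
# (the /mnt/adls2/storage_account/ path is taken from the example above).
databricks fs ls dbfs:/mnt/adls2/storage_account/
```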
Select Start migration automatically to start the migration as soon as it's created, or leave it clear to start the migration manually after creation.
Select Create to create the metadata migration.
Metrics shown on the metadata migration content summary page don’t show correct results following certain metadata migration failures. See the Known issue for more information.
Create a metadata migration with the CLI
Migrate metadata from your source metastore to a target metastore using the hive migration command.
Define the source and target using the hive agent names detailed in the Connect to metastores section, and apply the hive rule names to the migration.
Follow the command links to learn how to set the parameters and see examples.
Create a new metadata migration:
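A minimal sketch of creating the migration, assuming a source agent named sourceAgent, a target agent named targetAgent, and an existing rule named testDatabases. The add subcommand and every parameter except --auto-start are assumptions based on typical usage; follow the command links above for the exact syntax.

```bash
# Sketch only: create a metadata migration between the agents defined in
# Connect to metastores, applying an existing hive rule by name.
# Parameter names other than --auto-start are assumptions; check the command reference.
hive migration add --name metadataMigration1 \
                   --source sourceAgent \
                   --target targetAgent \
                   --rule-names testDatabases \
                   --auto-start
```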
Apply the --auto-start parameter if you would like the migration to start right away.
If you don't have auto-start enabled, manually start the migration:
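For example, a sketch of starting the migration created above; the start subcommand and --name parameter are assumptions, so confirm them against the command reference.

```bash
# Sketch only: start a previously created metadata migration by name.
hive migration start --name metadataMigration1
```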