
Troubleshooting

This article details issues that you may encounter when installing and using Data Migrator, and the steps to resolve them.

Ensure you have read the Prerequisites; you may experience problems if you miss any of those requirements.

We recommend making use of logs when troubleshooting Data Migrator. See Log Commands for information on how to enable logging at various levels. Logs for each component of Data Migrator are stored under the /var/log/wandisco/ directory, with a subdirectory for each component, such as /var/log/wandisco/ui for the UI.
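For example, to follow the UI component's log output on the Data Migrator host (the wildcard is illustrative, as log file names may vary between versions):

Example
tail -f /var/log/wandisco/ui/*.log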

General

Rule names parameter does not autocomplete in the CLI

When adding the --rule-names parameter to the end of a hive migration add command, auto-completion will not suggest the parameter name. For example:

Example
hive migration add --name test --source sourceAgent --target testGlue --rule-names

To work around this, either:

  • Use the --rule-names parameter earlier in the command. For example: hive migration add --name test --rule-names
  • Use the Tab key twice in the CLI when attempting to autocomplete the parameter, and select --rule-names with the left and right arrow keys.
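For example, a complete command with the parameter placed earlier might look like the following; the rule name myRule and the other values are illustrative:

Example
hive migration add --name test --rule-names myRule --source sourceAgent --target testGlue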

Hive Migrator configuration files missing when reinstalling Data Migrator on Ubuntu/Debian

This issue will occur when you have removed the Hive Migrator package with apt-get remove instead of apt-get purge during the uninstall steps.

The /etc/wandisco/hivemigrator directory will be missing files as a result. This happens because the package management tool (dpkg) stores service configuration information in its internal database and assumes this directory already contains the needed files (even if they were manually removed).

To resolve this:

  1. Clean up the dpkg database entries for the Hive Migrator package:

    rm -f /var/lib/dpkg/info/hivemigrator*
  2. Remove the Hive Migrator package again using dpkg and the --purge option:

    dpkg --purge hivemigrator
  3. Carry out the install steps for the new version of Data Migrator.

  4. If needed, install the Hive Migrator package using dpkg and the --force-confmiss option:

    Example
    dpkg -i --force-confmiss hivemigrator_1.3.1-518_all.deb
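At any stage, you can confirm what dpkg knows about the package with a standard query; the hivemigrator package name matches the commands above:

Example
dpkg -l | grep hivemigrator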

Manual JDBC driver configuration

If you're using MariaDB or MySQL, you need to manually add the JDBC driver to the classpath on your Hive Migrator host, or metadata migrations will stall.

hivemigrator.log
2021-09-09 16:44:49,033 INFO com.wandisco.hivemigrator.agent.utils.JdbcUtil - [default-nioEventLoopGroup-3-4]: Loaded jdbc drivers: [class org.apache.derby.jdbc.EmbeddedDriver, null, class org.postgresql.Driver, null]

If the migration stalls, manually move the driver into place. Note that the driver version may vary.

  • mv mysql-connector-java-8.0.20 /opt/wandisco/hivemigrator/agent/hive/

After moving the driver into place, set the .jar file to be readable by the Hive Migrator system user. For example, if Hive Migrator is running as hive:

Example
chown hive:hadoop /opt/wandisco/hivemigrator/agent/hive/mysql-connector-java-8.0.20
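After setting ownership, restart the Hive Migrator service and check hivemigrator.log to confirm the driver now loads; the exact driver class listed depends on the connector version you installed:

Example
systemctl restart hivemigrator
grep "Loaded jdbc drivers" /var/log/wandisco/hivemigrator/hivemigrator.log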

Missing PostgreSQL driver

If you’re connecting to a Hive metastore agent and using CDP 7.1.x with PostgreSQL, you need to create a symlink from the agent installation path to the PostgreSQL JDBC driver file (postgresql-jdbc.jar).

Without a symlink, the hive agent check command returns the following error:

"title" : "Transaction store failed to initialize",
"message" : " Download the driver from url: https://jdbc.postgresql.org/download.html, and copy it to the HiveMigrator Hive agent installation path. Default installation paths for Hive agent: for HiveMigrator - /opt/wandisco/hivemigrator/agent/hive, for HiveMigrator Remote Server - /opt/wandisco/hivemigrator-remote-server/agent/hive.",
"status" : 500,
"error" : 10115
  1. Find and copy the path to the PostgreSQL JDBC driver file on your system. The driver should be found in <PREFIX>/share/java/postgresql-jdbc.jar. For example, /usr/share/java/postgresql-jdbc.jar.

    note

    You can use the following command to find the driver on your system:

    find / -name postgresql-jdbc.jar
  2. Navigate to the agent installation path:

    HiveMigrator
    cd /opt/wandisco/hivemigrator/agent/hive/

    Or

    HiveMigrator Remote Server
    cd /opt/wandisco/hivemigrator-remote-server/agent/hive
  3. Create a symlink to the PostgreSQL JDBC driver file:

    ln -s /usr/share/java/postgresql-jdbc.jar
  4. If the error persists, ensure the new symlink is owned by the correct user and group (hive:hadoop by default) and restart the relevant service:

    HiveMigrator
    chown -h hive:hadoop /opt/wandisco/hivemigrator/agent/hive/postgresql-jdbc.jar

    Or

    HiveMigrator Remote Server
    chown -h hive:hadoop /opt/wandisco/hivemigrator-remote-server/agent/hive/postgresql-jdbc.jar

    Then

    HiveMigrator
    systemctl restart hivemigrator

    Or

    HiveMigrator Remote Server
    systemctl restart hivemigrator-remote-server
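Once the service has restarted, re-run the agent check to confirm the transaction store error no longer appears (replace the agent name with your own):

Example
hive agent check --name <agent name>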

Data Migrator account

Reset admin user password

If you have lost or otherwise need to change the admin user password without using the associated email address, refer to these instructions.

Service startup issues

Out of memory

When Data Migrator first starts, the start.sh script in /opt/wandisco/livedata-migrator attempts to start the Java process with 16 GB of memory. This amount is defined by two properties in /etc/wandisco/livedata-migrator/vars.env:

JVM_MAX_MEM="16384m"
JVM_MIN_MEM="16384m"

If the Java process cannot allocate this amount of memory, it fails immediately. Data Migrator then attempts to restart in rapid succession until systemd stops it, which takes only a few seconds.

The first issue is that the Data Migrator service status displays as failed:

livedata-migrator.service - Live Data Migrator
Loaded: loaded (/usr/lib/systemd/system/livedata-migrator.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Mon 2021-07-19 19:49:09 COT; 5min ago
Process: 8105 ExecStart=/opt/wandisco/livedata-migrator/livedata-migrator.jar (code=exited, status=1/FAILURE)
Main PID: 8105 (code=exited, status=1/FAILURE)

Jul 19 19:49:09 systemd[1]: livedata-migrator.service failed.
Jul 19 19:49:09 systemd[1]: livedata-migrator.service holdoff time over, scheduling restart.
Jul 19 19:49:09 systemd[1]: Stopped Live Data Migrator.
Jul 19 19:49:09 systemd[1]: start request repeated too quickly for livedata-migrator.service
Jul 19 19:49:09 systemd[1]: Failed to start Live Data Migrator.
Jul 19 19:49:09 systemd[1]: Unit livedata-migrator.service entered failed state.
Jul 19 19:49:09 systemd[1]: livedata-migrator.service failed.
Jul 19 19:51:19 systemd[1]: start request repeated too quickly for livedata-migrator.service
Jul 19 19:51:19 systemd[1]: Failed to start Live Data Migrator.
Jul 19 19:51:19 systemd[1]: livedata-migrator.service failed.

The second issue is that the service is unable to fix itself by auto-restarting:

Job for livedata-migrator.service failed because start of the service was attempted too often. See "systemctl status livedata-migrator.service" and "journalctl -xe" for details. To force a start use "systemctl reset-failed livedata-migrator.service" followed by "systemctl start livedata-migrator.service" again.

The third issue is the presence of hs_err log files. When Data Migrator is unable to allocate memory, it generates a log file in the /opt/wandisco/livedata-migrator directory. These log files have the prefix hs_err_pid followed by the process number:

# ll /opt/wandisco/livedata-migrator
total 320056
drwxr-xr-x 4 hdfs hdfs 38 Mar 21 11:40 db
-rwxr-xr-x 1 hdfs hdfs 2718 Mar 15 18:04 env-checks.sh
-rw-r--r-- 1 hdfs hdfs 26130 Mar 21 11:35 hs_err_pid1120.log
-rw-r--r-- 1 hdfs hdfs 26179 Mar 21 11:35 hs_err_pid1243.log
-rw-r--r-- 1 hdfs hdfs 26226 Mar 21 11:35 hs_err_pid1367.log
-rw-r--r-- 1 hdfs hdfs 26371 Mar 21 11:35 hs_err_pid828.log
-rw-r--r-- 1 hdfs hdfs 26272 Mar 21 11:35 hs_err_pid992.log
-rw-r--r-- 1 hdfs hdfs 514 Mar 15 18:04 livedata-migrator.conf
-rw-r--r-- 1 hdfs hdfs 327560208 Mar 15 18:01 livedata-migrator.jar
-rw-r--r-- 1 hdfs hdfs 321 Mar 21 11:35 livedata-migrator.service
-rwxr-xr-x 1 hdfs hdfs 1574 Mar 15 18:04 start.sh
-rwxr-xr-x 1 hdfs hdfs 13485 Mar 15 18:04 talkback.sh
drwxr-xr-x 4 hdfs hdfs 37 Mar 21 11:35 THIRD_PARTY_LIBRARIES

The first two lines of each of those files confirm that the problem is related to the inability to allocate memory:

There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 17179869184 bytes for committing reserved memory.

To fix these issues:

  1. Update the JVM_MIN_MEM and JVM_MAX_MEM properties in /etc/wandisco/livedata-migrator/vars.env to values your host can allocate (see the example below).
  2. Reset the failed process.
  3. Start the Data Migrator service.
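For example, on a host that can only spare 8 GB for Data Migrator, the steps might look like the following; the 8192m value is illustrative, so choose a size that suits your host:

Example
# Reduce the JVM memory settings in vars.env
sed -i 's/^JVM_MAX_MEM=.*/JVM_MAX_MEM="8192m"/; s/^JVM_MIN_MEM=.*/JVM_MIN_MEM="8192m"/' /etc/wandisco/livedata-migrator/vars.env

# Clear the failed state and start the service
systemctl reset-failed livedata-migrator.service
systemctl start livedata-migrator.service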

Microsoft Azure resources

Insufficient container permissions with an Azure Data Lake Storage (ADLS) Gen2 target filesystem when using OAuth2 authentication

When creating or updating an ADLS Gen2 target filesystem using the OAuth2 authentication protocol, you may have insufficient permission to guarantee a successful migration. This is usually because the Role Based Access Control on the service principal does not guarantee root access. In this case, the migration will fail to start (or resume) and issue a warning.

To force the migration to start (or resume) despite the warning, update the ADLS Gen2 filesystem with the following property and restart Data Migrator afterwards:

Property
fs.ignore-authentication-privileges=true
Example Usage
filesystem update adls2 oauth --file-system-id target --properties fs.ignore-authentication-privileges=true

Amazon Web Services (AWS) resources

Failed to connect to Data Migrator

This error appears when you try to add an S3 bucket in the UI with any of the following problems:

  • You've made a mistake or typo while entering an access or secret key.
  • Your bucket contains a dot (.) in the name.

Check that you've entered your access and secret keys correctly with no extra characters, and follow the recommendations in the bucket naming rules guide when you create an S3 bucket.

Error Code: AccessDenied. Error Message: Access to the resource https://sqs.eu-west-1.amazonaws.com/ is denied.

This problem arises if your account does not have sufficient SQS permissions to access the bucket resource. To fix this, ask your organization administrator to assign the necessary privileges in the SQS policy manager.

For example, configuring an allow rule for sqs:* will allow all organization users configured with SQS to perform the necessary actions with Data Migrator.
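As a rough illustration, an IAM policy statement granting that broad SQS access might look like the following; in practice, scope the Resource to the specific queue ARNs your deployment uses:

Example
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sqs:*",
      "Resource": "*"
    }
  ]
}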

Notifications

Below are some of the most common notifications that you may encounter during the deployment or use of Data Migrator.

LiveMigratorPanicNotification

When Data Migrator encounters an unexpected run-time exception, it will emit a log message with the notification LiveMigratorPanicNotification. The message and resolution vary based on the cause of the exception. For example:

Example
2020-11-12 16:26:37.441 ERROR - [engine-pool-1 ] c.w.l.e.LM2UncaughtExceptionHandler : Uncaught exception in thread Thread[engine-pool-1,5,main], exception:
java.lang.IllegalArgumentException: Wrong FS: hdfs://.livemigrator_55f9bf54-77fc-4bc1-95e9-0a378d938609, expected: hdfs://nmcnu01-vm0.bdfrem.wandisco.com
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:730)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:233)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1576)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1573)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1588)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1683)
    at com.wandisco.livemigrator2.fs.hdfs.HdfsFileSystemWrapper.exists(HdfsFileSystemWrapper.java:154)
    at com.wandisco.livemigrator2.fs.hdfs.HdfsFileSystemWrapper$$FastClassBySpringCGLIB$$c15450b.invoke(<generated>)
    at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
    at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:771)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
    at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)
    at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:88)
    at com.wandisco.livemigrator2.fs.FileSystemExceptionHandlerAspect.handleException(FileSystemExceptionHandlerAspect.java:19)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:644)
    at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:633)
    at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
    at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)
    at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:95)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
    at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)
    at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:691)
    at com.wandisco.livemigrator2.fs.hdfs.HdfsFileSystemWrapper$$EnhancerBySpringCGLIB$$57c6ec3a.exists(<generated>)
    at com.wandisco.livemigrator2.migration.MigratorEngine.createMarkerIfNecesssary(MigratorEngine.java:959)
    at com.wandisco.livemigrator2.migration.MigratorEngine.init(MigratorEngine.java:211)
    at com.wandisco.livemigrator2.migration.MigratorEngine.run(MigratorEngine.java:304)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-11-12 16:26:37.442 INFO - [engine-pool-1 ] c.w.l.n.NotificationManagerImpl : Notification: Notification{level=ERROR, type='LiveMigratorPanicNotification', message='Wrong FS: hdfs://.livemigrator_55f9bf54-77fc-4bc1-95e9-0a378d938609, expected: hdfs://nmcnu01-vm0.bdfrem.wandisco.com', id='urn:uuid:8bf396b3-2b58-473c-9e77-8cab70e88c04', timeStamp=1605198397441, code=40003, resolved=false, updatedTimeStamp=1605198397441, payload={}}

Any issue triggering this notification will cause the application to shut down with a return code of -1, indicating an abnormal termination.

HighPendingRegionNotification

When directories are moved or modified during a migration, they are logged as pending regions. Pending regions are paths that Data Migrator rescans for changes. Exceeding the configured maximum number of pending regions during a migration will cause the migration to abort.

This issue can be resolved by increasing the maximum number of pending regions in the migration.

This notification displays when the number of pending regions exceeds the "high watermark" percentage of maximum pending regions, and is resolved when the number falls below the "low watermark" percentage.

Both watermarks may be configured by adding settings to application.properties. The following setting configures the high watermark percentage of pending regions:

Example
notifications.pending.region.warn.percent=60

And the following setting determines the low watermark percentage:

Example
notifications.pending.region.clear.percent=50

Error message "Can't access source events stream from the Kafka service."

Migrations from IBM Cloud Object Storage use Kafka to retrieve the source cluster's events. These events include the filesystem changes that Data Migrator applies to the target cluster during the migration. If Data Migrator cannot communicate with the Kafka service, the migration stalls until communication with the service resumes.

note

The notification message is sent 10 minutes after contact with the Kafka service is lost.

Recommended steps

  1. Check the availability of the Kafka service.
  2. If the Kafka service is unavailable, restart the Kafka service.
  3. If there is a very large number of queued changes, we recommend resetting the migration. Rescanning the source is faster and more reliable than attempting to continue the stalled migration.

Hive Migrator connection to the source or target timed-out

Hive metadata migrations are set to fail if connections to either the target or source agent are lost for more than 20 minutes. Fix any failures by restarting the affected migrations.

If migrations continue to fail due to this timeout, consider increasing the connectionRetryTimeout parameter:

Changing connectionRetryTimeout parameter

  1. Open /etc/wandisco/hivemigrator/application.properties.
  2. Uncomment the connectionRetryTimeout parameter and increase the default of 20 minutes. Make incremental increases and retest rather than immediately setting a very high value.
  3. Save the change.
  4. Restart the Hive Migrator service. See System service commands - Hive Migrator.

Change metastore rescan rate

Hive Migrator rescans the Hive metastore as soon as the previous scan finishes. If appropriate, you can reduce the scan rate by updating the delayBetweenScanRounds parameter:

Changing delayBetweenScanRounds parameter

  1. Open /etc/wandisco/hivemigrator/application.properties.
  2. Uncomment the delayBetweenScanRounds parameter and increase the default of 1 second. If you introduce a large delay, test that migration performance is not significantly affected.
  3. Save the change.
  4. Restart the Hive Migrator service to enable the new configuration. See System service commands - Hive Migrator.

Data transfer agents

Restore connection to data transfer agent

If an agent's configuration properties or credentials are moved or become corrupt, the connection from Data Migrator to the agent is lost. The agent is displayed as unhealthy.

The following notification is displayed:

Couldn't restore connection to data transfer agent <DTA_NAME>, <HOSTNAME>, <PORT NUMBER> at <DATETIME>.
Remove and add the agent again. View agents.

We recommend you remove the agent that is listed as unhealthy and add it again. This should resolve the problem. If the problem persists, contact Support.

Kerberos

Kerberos configuration

If you're having issues configuring Kerberos for a filesystem, try the following:

Check the keytab is readable by the user operating Data Migrator.

To test this, run the following commands (where ldmuser should be your user):

Example of authenticating 'ldmuser'
su ldmuser
ls -al /etc/security/keytabs/ldmuser.keytab

If the command fails, modify the permissions on the keytab file and its directory so that ldmuser can read it.
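A minimal sketch of such a fix, assuming the keytab should be owned and readable only by ldmuser:

Example
chown ldmuser /etc/security/keytabs/ldmuser.keytab
chmod 400 /etc/security/keytabs/ldmuser.keytab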

Check the Kerberos principal is included within the keytab file.

Inspect the keytab file's contents:

Example of listing ldmuser's keytab file contents
su ldmuser
klist -kt /etc/security/keytabs/ldmuser.keytab

If ldmuser/hostname@REALM.COM is not in the keytab, create a keytab containing ldmuser/hostname@REALM.COM and copy it to the /etc/security/keytabs directory on the edge node running Data Migrator.
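One way to generate such a keytab, assuming you have kadmin access to the KDC (note that ktadd creates new keys for the principal, so any existing keytabs for it stop working):

Example
kadmin -q "ktadd -k /etc/security/keytabs/ldmuser.keytab ldmuser/hostname@REALM.COM"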

Check the Kerberos principal is valid.

For example: a principal of ldmuser/hostname@REALM.COM and a keytab file ldmuser.keytab are valid.

To ensure principal validity, you can destroy all currently active authentication tickets in the cache and try initiating a new one:

Test validity of Kerberos principal
su ldmuser
kdestroy
kinit -kt /etc/security/keytabs/ldmuser.keytab ldmuser/hostname@REALM.COM
klist

If kinit fails and there is no principal in the cache, check the principal to ensure there are no password mismatches or other inconsistencies. In this case, the ldmuser principal and keytab file might need to be recreated.

Ensure the Kerberos principal is linked to a superuser: global access to filesystem operations is required.

To test access, run the following commands to read the file tree, replacing the user details with your own:

Test superuser access by reading the file tree
su ldmuser
kinit -kt /etc/security/keytabs/ldmuser.keytab ldmuser/hostname@REALM.COM
hdfs dfs -ls /

If successful, the operation will return the HDFS file tree. Optionally, try creating a directory as well:

Test superuser access by creating a directory
hdfs dfs -mkdir /ldm_test

This creates an ldm_test directory if successful.

If either command fails, check auth_to_local rules are correctly configured, and that your user (in this case, ldmuser) is in the superuser group.
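To help with that check, the following commands show how the principal maps to a local user and which groups that user belongs to; run them on the edge node running Data Migrator:

Example
hdfs getconf -confKey hadoop.security.auth_to_local
hdfs groups ldmuser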

note

Additionally, if you're configuring Kerberos for a Hive metastore, the principal must be associated with the hive user or another superuser. For example: hive/hostname@REALM.COM

note

If Kerberos is disabled, and Hadoop configuration is on the host, Data Migrator detects the source filesystem automatically on startup.

Hadoop should be installed globally on the filesystem to allow Data Migrator to access Hadoop configuration during automatic detection. Alternatively, if you're running Data Migrator in a single user's environment, make Hadoop available to the service through the PATH environment variable:

Systemctl
sudo systemctl set-environment PATH=$PATH

Message stream modified (41)

If you encounter the error "Message stream modified (41) To try to automatically discover the source, please run 'filesystem auto-discover-source' for the type of filesystem you want to discover" and it is not resolved by performing the suggested action, fix the issue by modifying the user principal in the key distribution center:

Example with principal ldmuser/hostname@REALM.COM
modprinc -maxrenewlife 90day +allow_renewable ldmuser/hostname@REALM.COM

Hive remote agent

info

See the following Known issue regarding remote agents with JDBC credentials: JDBC password overwritten.

Check agent health status with UI

  1. Select your Data Migrator product from the Instances list in the dashboard.
  2. Under Metastore Agents, select your metastore agent from the list.
  3. Under Metastore Connection, view health status as either Healthy or Unhealthy.

Find and correct configuration errors

info

It's not possible to adjust some TLS parameters for remote metastore agents after creation. Find more information in the following Knowledge base article.

  1. Identify an unhealthy agent status and any configuration error messages with the hive agent check CLI command.

    hive agent check --name remoteagent1
    {
    "title" : "Agent check error",
    "message" : "Keytab file: /etc/security/keytabs/hive.service.eytab is not readable or does not exist. Check that the correct keytab is available.",
    "status" : 500,
    "error" : 10113
    }

  2. Optionally, check the Hive Migrator remote server log on the remote agent to confirm the issue.

    tail -100f /var/log/wandisco/hivemigrator-remote-server/hivemigrator-remote-server.log
    ..
    2023-03-15 10:23:34,605 ERROR [main] c.w.h.a.r.AgentProviderImpl: Can not create agent localAgent
    com.wandisco.hivemigrator.exceptions.KeytabNotReadableException: Keytab file: /etc/security/keytabs/hive.service.eytab is not readable or does not exist. Check that the correct keytab is available.
    ..
  3. Change the incorrect configuration of the agent using the hive agent configure hive CLI command.

    Example, correction of an incorrect Kerberos keytab value
    hive agent configure hive --name remoteagent1 --kerberos-keytab /etc/security/keytabs/hive.service.keytab
  4. Edit the remote agent configuration file on the remote agent and apply the correct values.

    vi /etc/wandisco/hivemigrator-remote-server/agent.yaml
    ...
    kerberosKeytab: "/etc/security/keytabs/hive.service.keytab"
    ...
    note

    If you deployed your remote agent with the --autodeploy option, your configuration will be automatically updated on the remote agent with the hive agent configure hive command.

  5. Restart the remote agent service with the service restart command.

  6. Check the agent health status with the hive agent check command in the CLI.

    hive agent check --name remoteagent1
    {
    "name": "remoteagent1",
    "location": "REMOTE",
    "config": {
    "agentType": "HIVE",
    "remoteAgentConfig": {
    ...
    ...
info

You can also delete and re-add the agent to address configuration errors using the hive agent delete and hive agent add hive CLI commands.

HiveConf class not found on Oracle BDS

When deploying either Hive Migrator or Hive Migrator Remote Server within an Oracle Big Data Service (BDS) Hadoop cluster, both components are unable to locate the HiveConf class because Oracle's Hive implementation differs from that of other Hadoop distributions. The Hive Migrator service will fail to start, or your remote server will show as unhealthy.

Data Migrator will log the following error in /var/log/wandisco/hivemigrator/hivemigrator.log or /var/log/wandisco/hivemigrator-remote-server/hivemigrator-remote-server.log.

Example: hivemigrator-remote-server.log
...
2023-07-18 08:30:16,121 ERROR [ecutor-thread-2] i.m.h.s.RouteExecutor : Unexpected error occurred: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/conf/HiveConf
java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/conf/HiveConf
...

To resolve this error, provide an override for the Hive classpath by adding an environment script in /etc/wandisco/hivemigrator/ for Hive Migrator or /etc/wandisco/hivemigrator-remote-server/ for Hive Migrator Remote Server.

Complete the following steps to override the Hive classpath when deployed in an Oracle BDS environment.

  1. Create the environment script in the relevant configuration directory (/etc/wandisco/hivemigrator/ or /etc/wandisco/hivemigrator-remote-server/).

    touch hivemigrator-user-env.sh
  2. Edit the script.

    Edit the script with nano
    nano hivemigrator-user-env.sh

    or

    Edit the script with vim
    vi hivemigrator-user-env.sh
  3. Add the following lines to the script.

    #!/bin/sh
    HIVE_CP="${HIVE_CP}:/usr/odh/current/hive-metastore/lib/*"
  4. Restart the relevant Hive Migrator service.

    Hive Migrator.

    systemctl restart hivemigrator

    Hive Migrator Remote Server.

    systemctl restart hivemigrator-remote-server

Troubleshooting techniques

Use these Data Migrator features to identify problems with migrations or filesystems.

Check path status

You can check the status of a filepath in either the UI or the CLI to determine whether any work is scheduled on the file.