[ https://issues.apache.org/jira/browse/HUDI-1528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17264862#comment-17264862 ]

Trevorzhang commented on HUDI-1528:
-----------------------------------

{panel:title=My title}
[lingqu@xx-dev-cq-ecs-dtpbu-datalake-cdh-work-01 jars]$ sh /data/zyx/hudi-hive-sync/run_sync_tool.sh --base-path hdfs://xx.3.98.183:8020/data_lake/jqxx/data/xx_biz_operation.t_city --database datalake_taxi --table xx_biz_operation_t_city2 --jdbc-url jdbc:hive2://xx-test-cq-ecs-dtpbu-datalake-cdh-edge-01:xx000 --partition-value-extractor org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor --user lingqu --pass lingqu@bigdata --partitioned-by ds
Running Command : java -cp /opt/cloudera/parcels/CDH/lib/hive/lib/hive-metastore-2.1.1-cdh6.3.0.jar::/opt/cloudera/parcels/CDH/lib/hive/lib/hive-service-2.1.1-cdh6.3.0.jar::/opt/cloudera/parcels/CDH/lib/hive/lib/hive-exec-2.1.1-cdh6.3.0.jar::/opt/cloudera/parcels/CDH/lib/hive/lib/hive-jdbc-2.1.1-cdh6.3.0.jar::/opt/cloudera/parcels/CDH/lib/hive/lib/jackson-annotations-2.9.9.jar:/opt/cloudera/parcels/CDH/lib/hive/lib/jackson-core-2.9.9.jar:/opt/cloudera/parcels/CDH/lib/hive/lib/jackson-core-asl-1.9.13.jar:/opt/cloudera/parcels/CDH/lib/hive/lib/jackson-databind-2.9.9.jar:/opt/cloudera/parcels/CDH/lib/hive/lib/jackson-jaxrs-1.9.13.jar:/opt/cloudera/parcels/CDH/lib/hive/lib/jackson-mapper-asl-1.9.13-cloudera.1.jar:/opt/cloudera/parcels/CDH/lib/hive/lib/jackson-xc-1.9.13.jar::/opt/cloudera/parcels/CDH/lib/hadoop/client/*:/data/zyx/hudi-hive-sync/jars/*:/etc/hadoop/conf:/data/zyx/hudi-hive-sync/jars/hudi-hive-sync-bundle-0.6.0.jar org.apache.hudi.hive.HiveSyncTool --base-path hdfs://xx.3.98.183:8020/data_lake/jqxx/data/xx_biz_operation.t_city --database datalake_taxi --table xx_biz_operation_t_city2 --jdbc-url jdbc:hive2://xx-test-cq-ecs-dtpbu-datalake-cdh-edge-01:xx000 --partition-value-extractor org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor --user lingqu --pass lingqu@bigdata --partitioned-by ds
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
21/01/14 20:20:58 INFO fs.FSUtils: Hadoop Configuration: fs.defaultFS: [hdfs://nameservice1], Config:[Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml], FileSystem: [DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_1717621797_1, ugi=lingqu (auth:SIMPLE)]]]
21/01/14 20:20:59 INFO table.HoodieTableMetaClient: Loading HoodieTableMetaClient from hdfs://xx.3.98.183:8020/data_lake/jqxx/data/xx_biz_operation.t_city
21/01/14 20:20:59 INFO fs.FSUtils: Hadoop Configuration: fs.defaultFS: [hdfs://nameservice1], Config:[Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml], FileSystem: [DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_1717621797_1, ugi=lingqu (auth:SIMPLE)]]]
21/01/14 20:20:59 INFO table.HoodieTableConfig: Loading table properties from hdfs://xx.3.98.183:8020/data_lake/jqxx/data/xx_biz_operation.t_city/.hoodie/hoodie.properties
21/01/14 20:20:59 INFO table.HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from hdfs://xx.3.98.183:8020/data_lake/jqxx/data/xx_biz_operation.t_city
21/01/14 20:20:59 INFO table.HoodieTableMetaClient: Loading Active commit timeline for hdfs://xx.3.98.183:8020/data_lake/jqxx/data/xx_biz_operation.t_city
21/01/14 20:20:59 INFO timeline.HoodieActiveTimeline: Loaded instants [[20201xx6164936__commit__COMPLETED]]
21/01/14 20:20:59 INFO hive.HoodieHiveClient: Creating hive connection jdbc:hive2://xx-test-cq-ecs-dtpbu-datalake-cdh-edge-01:xx000
21/01/14 20:20:59 INFO hive.HoodieHiveClient: Successfully established Hive connection to jdbc:hive2://xx-test-cq-ecs-dtpbu-datalake-cdh-edge-01:xx000
21/01/14 20:21:00 INFO hive.HiveSyncTool: Trying to sync hoodie table xx_biz_operation_t_city2 with base path hdfs://xx.3.98.183:8020/data_lake/jqxx/data/xx_biz_operation.t_city of type COPY_ON_WRITE
21/01/14 20:21:00 INFO hive.HoodieHiveClient: Executing SQL create database if not exists datalake_taxi
21/01/14 20:21:00 INFO hive.HiveSyncTool: Hive table xx_biz_operation_t_city2 is not found. Creating it
21/01/14 20:21:00 INFO hive.HoodieHiveClient: Creating table with CREATE EXTERNAL TABLE IF NOT EXISTS `datalake_taxi`.`xx_biz_operation_t_city2`( `_hoodie_commit_time` string, `_hoodie_commit_seqno` string, `_hoodie_record_key` string, `_hoodie_partition_path` string, `_hoodie_file_name` string, `ums_id_` string, `ums_ts_` string, `ums_op_` string, `uuid` string, `org_code` string, `org_name` string, `parent_code` string, `center` string, `flag` int, `create_time` string, `update_time` string) PARTITIONED BY (`ds` string) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' STORED AS INPUTFORMAT 'org.apache.hudi.hadoop.HoodieParquetInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat' LOCATION 'hdfs://xx.3.98.183:8020/data_lake/jqxx/data/xx_biz_operation.t_city'
21/01/14 20:21:00 INFO hive.HoodieHiveClient: Executing SQL CREATE EXTERNAL TABLE IF NOT EXISTS `datalake_taxi`.`xx_biz_operation_t_city2`( `_hoodie_commit_time` string, `_hoodie_commit_seqno` string, `_hoodie_record_key` string, `_hoodie_partition_path` string, `_hoodie_file_name` string, `ums_id_` string, `ums_ts_` string, `ums_op_` string, `uuid` string, `org_code` string, `org_name` string, `parent_code` string, `center` string, `flag` int, `create_time` string, `update_time` string) PARTITIONED BY (`ds` string) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' STORED AS INPUTFORMAT 'org.apache.hudi.hadoop.HoodieParquetInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat' LOCATION 'hdfs://xx.3.98.183:8020/data_lake/jqxx/data/xx_biz_operation.t_city'
21/01/14 20:21:00 INFO hive.HiveSyncTool: Schema sync complete. Syncing partitions for xx_biz_operation_t_city2
21/01/14 20:21:00 INFO hive.HiveSyncTool: Last commit time synced was found to be null
21/01/14 20:21:00 INFO common.AbstractSyncHoodieClient: Last commit time synced is not known, listing all partitions in hdfs://xx.3.98.183:8020/data_lake/jqxx/data/xx_biz_operation.t_city, FS :DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_1717621797_1, ugi=lingqu (auth:SIMPLE)]]
21/01/14 20:21:00 INFO hive.HiveSyncTool: Storage partitions scan complete. Found 1
21/01/14 20:21:00 ERROR hive.HiveSyncTool: Got runtime exception when hive syncing
org.apache.hudi.hive.HoodieHiveSyncException: Failed to sync partitions for table xx_biz_operation_t_city2
	at org.apache.hudi.hive.HiveSyncTool.syncPartitions(HiveSyncTool.java:206)
	at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:142)
	at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:94)
	at org.apache.hudi.hive.HiveSyncTool.main(HiveSyncTool.java:226)
Caused by: NoSuchObjectException(message:datalake_taxi.xx_biz_operation_t_city2 table not found)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result.read(ThriftHiveMetastore.java)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partitions(ThriftHiveMetastore.java:2289)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partitions(ThriftHiveMetastore.java:2274)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitions(HiveMetaStoreClient.java:1384)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154)
	at com.sun.proxy.$Proxy18.listPartitions(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2562)
	at com.sun.proxy.$Proxy18.listPartitions(Unknown Source)
	at org.apache.hudi.hive.HoodieHiveClient.scanTablePartitions(HoodieHiveClient.java:236)
	at org.apache.hudi.hive.HiveSyncTool.syncPartitions(HiveSyncTool.java:196)
	... 3 more
{panel}
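
One thing worth checking, based on the stack trace above (this is a hypothesis, not a confirmed diagnosis): the DDL runs over JDBC through HiveServer2 ("Executing SQL CREATE EXTERNAL TABLE ...") and succeeds, but the partition scan goes through HiveMetaStoreClient.listPartitions, which is configured from the hive-site.xml on the tool's classpath. If that configuration resolves to a different metastore (for example a local or embedded one) than the metastore behind the JDBC URL, the freshly created table would not be visible to the partition scan and get_partitions would fail with exactly this NoSuchObjectException. A minimal hive-site.xml fragment to verify on the host running run_sync_tool.sh (host and port are placeholders, not taken from this report):

{code:xml}
<!-- hive-site.xml visible on run_sync_tool.sh's classpath (e.g. under /etc/hadoop/conf) -->
<!-- Placeholder values: point this at the SAME metastore that serves the JDBC HiveServer2 -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://remote-metastore-host:9083</value>
</property>
{code}

If the two endpoints already match, the mismatch hypothesis can be ruled out and the cause lies elsewhere.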

> hudi-sync-tool error
> --------------------
>
>                 Key: HUDI-1528
>                 URL: https://issues.apache.org/jira/browse/HUDI-1528
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: Hive Integration
>            Reporter: Trevorzhang
>            Assignee: Trevorzhang
>            Priority: Major
>             Fix For: 0.8.0
>
>
> When using hudi-sync-tool to synchronize a table to a remote Hive, the Hive 
> metastore throws an exception (NoSuchObjectException: table not found).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
