[ https://issues.apache.org/jira/browse/HUDI-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17375099#comment-17375099 ]

ASF GitHub Bot commented on HUDI-2089:
--------------------------------------

hudi-bot edited a comment on pull request #3182:
URL: https://github.com/apache/hudi/pull/3182#issuecomment-870381931


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "d59d9848553983cedbad7e2a08d4ac33b725a68a",
       "status" : "DELETED",
       "url" : 
"https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=531";,
       "triggerID" : "d59d9848553983cedbad7e2a08d4ac33b725a68a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9582f7dcde72e98c09fc12e0f69f5beef28dbaea",
       "status" : "DELETED",
       "url" : 
"https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=550";,
       "triggerID" : "9582f7dcde72e98c09fc12e0f69f5beef28dbaea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b2cc0dd8817120e5d7ea4894afbef1ae4e0ec265",
       "status" : "DELETED",
       "url" : 
"https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=640";,
       "triggerID" : "b2cc0dd8817120e5d7ea4894afbef1ae4e0ec265",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c7a4a11a1e28af225c0039f5c4f7994761921e2a",
       "status" : "SUCCESS",
       "url" : 
"https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=671";,
       "triggerID" : "c7a4a11a1e28af225c0039f5c4f7994761921e2a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "636c067d8f8ca4675be4b3336c8b40b82058897d",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "636c067d8f8ca4675be4b3336c8b40b82058897d",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * c7a4a11a1e28af225c0039f5c4f7994761921e2a Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=671)
   * 636c067d8f8ca4675be4b3336c8b40b82058897d UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run travis` re-run the last Travis build
    - `@hudi-bot run azure` re-run the last Azure build
   </details>



> Fix the bug that the metadata table cannot support non-partitioned tables
> --------------------------------------------------------------------------
>
>                 Key: HUDI-2089
>                 URL: https://issues.apache.org/jira/browse/HUDI-2089
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: Spark Integration
>    Affects Versions: 0.8.0
>         Environment: Spark 3.1.1, Hive 3.1.1, Hadoop 3.1.1
>            Reporter: tao meng
>            Assignee: tao meng
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.9.0
>
>
> Now, we found that when we enable the metadata table for a non-partitioned Hudi table, the following error occurs:
> org.apache.hudi.exception.HoodieMetadataException: Error syncing to metadata table.
>  at org.apache.hudi.client.SparkRDDWriteClient.syncTableMetadata(SparkRDDWriteClient.java:447)
>  at org.apache.hudi.client.AbstractHoodieWriteClient.postCommit(AbstractHoodieWriteClient.java:433)
>  at org.apache.hudi.client.AbstractHoodieWriteClient.commitStats(AbstractHoodieWriteClient.java:187)
> We use Hudi 0.8, but we also see this problem in the latest Hudi code.
> Test steps:
> val df = spark.range(0, 1000).toDF("keyid")
>   .withColumn("col3", expr("keyid"))
>   .withColumn("age", lit(1))
>   .withColumn("p", lit(2))
>
> df.write.format("hudi").
>   option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL).
>   option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "col3").
>   option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "keyid").
>   option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "").
>   option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, "org.apache.hudi.keygen.NonpartitionedKeyGenerator").
>   option(DataSourceWriteOptions.OPERATION_OPT_KEY, "insert").
>   option("hoodie.insert.shuffle.parallelism", "4").
>   option("hoodie.metadata.enable", "true").
>   option(HoodieWriteConfig.TABLE_NAME, "hoodie_test").
>   mode(SaveMode.Overwrite).save(basePath)
>
> // upsert same record again
> df.write.format("hudi").
>   option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL).
>   option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "col3").
>   option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "keyid").
>   option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "").
>   option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, "org.apache.hudi.keygen.NonpartitionedKeyGenerator").
>   option(DataSourceWriteOptions.OPERATION_OPT_KEY, "upsert").
>   option("hoodie.insert.shuffle.parallelism", "4").
>   option("hoodie.metadata.enable", "true").
>   option(HoodieWriteConfig.TABLE_NAME, "hoodie_test").
>   mode(SaveMode.Append).save(basePath)
>  
> org.apache.hudi.exception.HoodieMetadataException: Error syncing to metadata table.
>  at org.apache.hudi.client.SparkRDDWriteClient.syncTableMetadata(SparkRDDWriteClient.java:447)
>  at org.apache.hudi.client.AbstractHoodieWriteClient.postCommit(AbstractHoodieWriteClient.java:433)
>  at org.apache.hudi.client.AbstractHoodieWriteClient.commitStats(AbstractHoodieWriteClient.java:187)
>  at org.apache.hudi.client.SparkRDDWriteClient.commit(SparkRDDWriteClient.java:121)
>  at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:564)
>  at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:230)
>  at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:162)
>  at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
>  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
>  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
>  
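
A minimal workaround sketch, not taken from the report above: since the exception is raised from the metadata-table sync path (SparkRDDWriteClient.syncTableMetadata), the same non-partitioned upsert is expected to complete when hoodie.metadata.enable is left at its 0.8 default of false. Every option key below is the same one used in the test steps; only the metadata flag differs, and the claim that this avoids the error is an assumption, not something verified in the issue.

    // Hedged workaround sketch (assumption): identical non-partitioned upsert,
    // with the metadata table disabled so the failing sync is never triggered.
    df.write.format("hudi").
      option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL).
      option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "col3").
      option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "keyid").
      option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "").
      option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, "org.apache.hudi.keygen.NonpartitionedKeyGenerator").
      option(DataSourceWriteOptions.OPERATION_OPT_KEY, "upsert").
      option("hoodie.insert.shuffle.parallelism", "4").
      option("hoodie.metadata.enable", "false"). // assumption: disabling the metadata table sidesteps the sync error
      option(HoodieWriteConfig.TABLE_NAME, "hoodie_test").
      mode(SaveMode.Append).save(basePath)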



