Nazerra opened a new issue, #6022:
URL: https://github.com/apache/hudi/issues/6022

   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
   
   - Join the mailing list to engage in conversations and get faster support at dev-subscr...@hudi.apache.org.
   
   - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   A clear and concise description of the problem.
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1. Create a Hudi table with the config and data size specified below. Both the Hudi insert and delete jobs run in parallel every hour.
   
   **Expected behavior**
   
   A clear and concise description of what you expected to happen.
   
   **Environment Description**
   
   * Hudi version : 0.11.1
   
   * Spark version : 2.4.7
   
   * Hive version : 2.3.7
   
   * Hadoop version :
   
   * Storage (HDFS/S3/GCS..) : GCS
   
   * Running on Docker? (yes/no) : NO

   There are two Hudi write jobs, both working on the same table. One is an insert with the config below:
    val hudiOptions = Map(
      "hoodie.datasource.write.keygenerator.class" -> "org.apache.hudi.keygen.ComplexKeyGenerator",
      "hoodie.datasource.write.table.type" -> "COPY_ON_WRITE",
      "hoodie.datasource.write.operation" -> "insert",
      "hoodie.upsert.shuffle.parallelism" -> "100",
      "hoodie.insert.shuffle.parallelism" -> "100",
      "hoodie.cleaner.commits.retained" -> "1",
      "hoodie.parquet.max.file.size" -> "134217728",
      "hoodie.parquet.small.file.limit" -> "104857600",
      // "hoodie.copyonwrite.record.size.estimate" -> "128",
      "hoodie.parquet.compression.codec" -> "snappy",
      "hoodie.parquet.block.size" -> "134217728",
      "hoodie.combine.before.insert" -> "true"
      // "hoodie.merge.allow.duplicate.on.inserts" -> "true"
    )
   
    val hudiHiveOptions = Map(
      "hoodie.datasource.write.hive_style_partitioning" -> "true",
      "hoodie.datasource.hive_sync.support_timestamp" -> "true",
      "hoodie.datasource.write.recordkey.field" -> datasourceRecordKey,
      "hoodie.datasource.write.partitionpath.field" -> datasourcePartitionKey,
      "hoodie.datasource.write.precombine.field" -> datasourcePrecombineKey,
      "hoodie.table.name" -> hudiTableName,
      "hoodie.datasource.write.table.name" -> hudiDatasourceTablename,
      "hoodie.database.name" -> hudiDatabaseName,
      "path" -> basePath
    )
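
   For context, a minimal sketch of how these two maps are applied in the hourly insert job might look like the following (the DataFrame name inputDf is an assumption, not the actual variable in my job):

    // Sketch only: write the hourly batch, assumed to be in inputDf.
    import org.apache.spark.sql.SaveMode

    inputDf.write
      .format("hudi")
      .options(hudiOptions ++ hudiHiveOptions) // write options plus table/key options above
      .mode(SaveMode.Append)
      .save() // "path" is already supplied via hudiHiveOptions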
   
   The other is a delete with the config below:

    val hudiOptions = Map(
      "hoodie.datasource.write.keygenerator.class" -> "org.apache.hudi.keygen.ComplexKeyGenerator",
      "hoodie.datasource.write.table.type" -> "COPY_ON_WRITE",
      "hoodie.datasource.write.operation" -> "delete",
      "hoodie.cleaner.commits.retained" -> "1",
      "hoodie.delete.shuffle.parallelism" -> "100",
      "hoodie.clean.automatic" -> "true",
      "hoodie.clean.async" -> "true",
      "hoodie.parquet.compression.codec" -> "snappy"
    )
   
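   Similarly, a minimal sketch of the delete write (assuming a DataFrame keysToDeleteDf carrying the record key and partition path columns, and assuming the delete job passes the same table/key/path options as the insert job; these names are assumptions, not the actual code):

    // Sketch only: rows in keysToDeleteDf identify the records to delete.
    import org.apache.spark.sql.SaveMode

    keysToDeleteDf.write
      .format("hudi")
      .options(hudiOptions ++ hudiHiveOptions) // delete options above plus table/key/path options
      .mode(SaveMode.Append)
      .save()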
   
   The table has 13 columns and 3,900 rows.
   I faced two issues when running these Hudi write jobs on an hourly schedule, with the insert and delete jobs running in parallel.
   The jobs ran fine for about 24 hours at a one-hour interval.

   ISSUE 1:
      diagnostics: User class threw exception: java.lang.IllegalArgumentException
        at org.apache.hudi.common.util.ValidationUtils.checkArgument(ValidationUtils.java:31)
        at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.transitionState(HoodieActiveTimeline.java:555)
        at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.transitionState(HoodieActiveTimeline.java:536)
        at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.saveAsComplete(HoodieActiveTimeline.java:183)
        at org.apache.hudi.client.BaseHoodieWriteClient.commit(BaseHoodieWriteClient.java:270)
        at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:234)
        at org.apache.hudi.client.SparkRDDWriteClient.commit(SparkRDDWriteClient.java:122)
        at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:650)
        at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:313)
        at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:163)
        at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:136)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:160)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:132)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
        at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:696)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:80)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:696)
        at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:310)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:291)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:249)
   
   I got this error at irregular intervals, in either the insert or the delete job, and on rerun it would complete without error.
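
   It may be relevant that neither job sets any multi-writer concurrency configs even though both write to the same table in parallel. For reference only, here is a minimal sketch of the optimistic concurrency control options documented by Hudi for concurrent writers; the lock provider choice and the ZooKeeper endpoint values below are placeholders, not taken from my setup:

    // Sketch only -- NOT part of my jobs. Hudi's documented multi-writer
    // (optimistic concurrency control) settings with a ZooKeeper lock provider.
    val concurrencyOptions = Map(
      "hoodie.write.concurrency.mode" -> "optimistic_concurrency_control",
      "hoodie.cleaner.policy.failed.writes" -> "LAZY",
      "hoodie.write.lock.provider" -> "org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider",
      // Placeholder ZooKeeper endpoint values:
      "hoodie.write.lock.zookeeper.url" -> "zk-host",
      "hoodie.write.lock.zookeeper.port" -> "2181",
      "hoodie.write.lock.zookeeper.lock_key" -> "hudi_table_lock",
      "hoodie.write.lock.zookeeper.base_path" -> "/hudi/locks"
    )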
   
   ISSUE 2: This occurred in the Hudi insert job while the Hudi delete job was idle (not running):
      Caused by: org.apache.hudi.exception.HoodieException: java.io.FileNotFoundException: File not found: gs://7e79e4985f762913d2be704c34505ddf5188b571bb799f3a0a1ae7eb086204/tableName/partition=value/c7579698-c1c4-4066-8cef-6b4558768e75-0_0-26-1212_20220701011007824.parquet
        at org.apache.hudi.table.action.commit.HoodieMergeHelper.runMerge(HoodieMergeHelper.java:149)
        at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpdateInternal(BaseSparkCommitActionExecutor.java:358)
        at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpdate(BaseSparkCommitActionExecutor.java:349)
        at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpsertPartition(BaseSparkCommitActionExecutor.java:322)
        ... 29 more
      Caused by: java.io.FileNotFoundException: File not found: gs://7e79e4985f762913d2be704c34505ddf5188b571bb799f3a0a1ae7eb086204/table_name/partition=value/c7579698-c1c4-4066-8cef-6b4558768e75-0_0-26-1212_20220701011007824.parquet
        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.getFileStatus(GoogleHadoopFileSystemBase.java:1082)
        at org.apache.parquet.hadoop.ParquetReader$Builder.build(ParquetReader.java:300)
        at org.apache.hudi.io.storage.HoodieParquetReader.getRecordIterator(HoodieParquetReader.java:70)
        at org.a
   
   After I got this error, I saw there were 4 parquet files in the base path.
   
   I increased "hoodie.cleaner.commits.retained" from "1" to "4".
   
   After the first run I could see only 2 parquet files.

   After the second run there was only one parquet file.

   On the 3rd run no parquet files could be found.

   It should also be noted that the source did not send any data for the last 3 hours.
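
   To see what the cleaner actually removed between runs, here is a minimal sketch of listing the .hoodie timeline on GCS with the Hadoop FileSystem API (this assumes an active SparkSession named spark and the same basePath as above):

    // Sketch: list the instants in the .hoodie timeline to see which
    // commit/clean/rollback actions happened around the failing writes.
    import org.apache.hadoop.fs.Path

    val fs = new Path(basePath).getFileSystem(spark.sparkContext.hadoopConfiguration)
    fs.listStatus(new Path(basePath, ".hoodie"))
      .map(_.getPath.getName)
      .sorted
      .foreach(println)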


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
