Adrian Tanase created HUDI-1079:
-----------------------------------

             Summary: Cannot upsert on schema with Array of Record with single field
                 Key: HUDI-1079
                 URL: https://issues.apache.org/jira/browse/HUDI-1079
             Project: Apache Hudi
          Issue Type: Bug
          Components: Spark Integration
    Affects Versions: 0.5.3
         Environment: spark 2.4.4, local 
            Reporter: Adrian Tanase


I am trying to trigger upserts on a table that has an array field whose element records have just one field.
Here is the code to reproduce:
{code:scala}
  import org.apache.spark.sql.{Row, SparkSession}
  import org.apache.spark.sql.types.{ArrayType, IntegerType, StringType, StructType}

  val spark = SparkSession.builder()
      .master("local[1]")
      .appName("SparkByExamples.com")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()

  // https://sparkbyexamples.com/spark/spark-dataframe-array-of-struct/

  val arrayStructData = Seq(
    Row("James",List(Row("Java","XX",120),Row("Scala","XA",300))),
    Row("Michael",List(Row("Java","XY",200),Row("Scala","XB",500))),
    Row("Robert",List(Row("Java","XZ",400),Row("Scala","XC",250))),
    Row("Washington",null)
  )

  val arrayStructSchema = new StructType()
      .add("name", StringType)
      .add("booksIntersted", ArrayType(
        new StructType()
          .add("bookName", StringType)
          // Commenting out the two fields below leaves the element record with
          // a single field -- the shape that triggers the upsert failure.
//          .add("author", StringType)
//          .add("pages", IntegerType)
      ))

  val df = spark.createDataFrame(spark.sparkContext.parallelize(arrayStructData), arrayStructSchema)
{code}
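For clarity, printing the schema of this frame shows the single-field element struct:
{code:scala}
  df.printSchema()
  // root
  //  |-- name: string (nullable = true)
  //  |-- booksIntersted: array (nullable = true)
  //  |    |-- element: struct (containsNull = true)
  //  |    |    |-- bookName: string (nullable = true)
{code}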
Running an insert followed by an upsert fails:
{code:scala}
  // Table name and base path are illustrative, in the style of the Hudi quickstart.
  val tableName = "array_upsert_repro"
  val basePath = "file:///tmp/array_upsert_repro"

  import org.apache.hudi.DataSourceWriteOptions
  import org.apache.hudi.QuickstartUtils.getQuickstartWriteConfigs
  import org.apache.hudi.config.HoodieWriteConfig
  import org.apache.spark.sql.SaveMode.{Append, Overwrite}

  // The initial write succeeds.
  df.write
      .format("hudi")
      .options(getQuickstartWriteConfigs)
      .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "name")
      .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "name")
      .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, "COPY_ON_WRITE")
      .option(HoodieWriteConfig.TABLE_NAME, tableName)
      .mode(Overwrite)
      .save(basePath)

  // The upsert of the same data then fails.
  df.write
      .format("hudi")
      .options(getQuickstartWriteConfigs)
      .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "name")
      .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "name")
      .option(HoodieWriteConfig.TABLE_NAME, tableName)
      .mode(Append)
      .save(basePath)
{code}
If I create the books record with all of its fields (at least two), it works as expected.
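For reference, this is the variant that upserts cleanly: the same schema with the commented-out fields restored, so the element record has more than one field (reusing the imports from the first snippet):
{code:scala}
  // Same schema as above, but the element record now has three fields instead
  // of one; with this shape the insert-then-upsert sequence succeeds.
  val workingSchema = new StructType()
      .add("name", StringType)
      .add("booksIntersted", ArrayType(
        new StructType()
          .add("bookName", StringType)
          .add("author", StringType)
          .add("pages", IntegerType)
      ))
{code}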

Another way to reproduce is to change the generated test data so that the tips_history field keeps just the amount and drops the currency; the tests then start failing:

[https://github.com/apache/hudi/compare/release-0.5.3...aditanase:avro-arrays-upsert?expand=1]
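In Avro terms, the shape that appears to trigger the problem is any array whose element record has a single field. A minimal sketch (the record and field names are illustrative, loosely modeled on the tips_history example; this is not the actual Hudi test schema):
{code:scala}
  import org.apache.avro.Schema

  // An array of one-field records -- the shape that appears to trigger the bug.
  val singleFieldArraySchema: Schema = new Schema.Parser().parse(
    """{"type": "record", "name": "trip", "fields": [
      |  {"name": "tips_history", "type": {"type": "array", "items":
      |    {"type": "record", "name": "tip", "fields": [
      |      {"name": "amount", "type": "double"}
      |    ]}}}
      |]}""".stripMargin)
{code}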

I have narrowed this down to this block in the parquet-avro integration: 
[https://github.com/apache/parquet-mr/blob/master/parquet-avro/src/main/java/org/apache/parquet/avro/AvroRecordConverter.java#L846-L875]

That block always returns false after trying to decide whether the reader and writer schemas are compatible. Stepping through that code path makes me think it's related to the fields being optional, as the inferred schema seems to be (null, string) with default null instead of (string, null) with no default.
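If that block ends up delegating to Avro's own SchemaCompatibility check, something like the following probes the two union shapes in isolation. The schema strings are my assumption of what the two sides look like, not schemas dumped from an actual run:
{code:scala}
  import org.apache.avro.{Schema, SchemaCompatibility}

  // What the Spark-to-Avro conversion appears to produce for an optional field:
  // a (null, string) union with default null.
  val writerSchema: Schema = new Schema.Parser().parse(
    """{"type": "record", "name": "element", "fields": [
      |  {"name": "bookName", "type": ["null", "string"], "default": null}
      |]}""".stripMargin)

  // What parquet-avro seems to reconstruct from the Parquet file schema:
  // a (string, null) union with no default.
  val readerSchema: Schema = new Schema.Parser().parse(
    """{"type": "record", "name": "element", "fields": [
      |  {"name": "bookName", "type": ["string", "null"]}
      |]}""".stripMargin)

  val result = SchemaCompatibility.checkReaderWriterCompatibility(readerSchema, writerSchema)
  println(result.getType) // SchemaCompatibilityType: COMPATIBLE or INCOMPATIBLE
{code}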

At this point I'm stuck; I tried to figure something out based on [https://github.com/apache/hudi/pull/1406/files] but I'm not sure where to start.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
