[GitHub] [hudi] teeyog commented on a change in pull request #2431: [HUDI-1526] Translate the api partitionBy to hoodie.datasource.write.partitionpath.field

2021-02-05 Thread GitBox


teeyog commented on a change in pull request #2431:
URL: https://github.com/apache/hudi/pull/2431#discussion_r570669951



##
File path: 
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/hudi/functional/TestCOWDataSource.scala
##
@@ -348,4 +352,141 @@ class TestCOWDataSource extends HoodieClientTestBase {
 
 assertTrue(HoodieDataSourceHelpers.hasNewCommits(fs, basePath, "000"))
   }
+
+  private def getDataFrameWriter(keyGenerator: String): DataFrameWriter[Row] = {
+    val records = recordsToStrings(dataGen.generateInserts("000", 100)).toList
+    val inputDF = spark.read.json(spark.sparkContext.parallelize(records, 2))
+
+    inputDF.write.format("hudi")
+      .options(commonOpts)
+      .option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, keyGenerator)
+      .mode(SaveMode.Overwrite)
+  }
+
+  @Test def testTranslateSparkParamsToHudiParamsWithCustomKeyGenerator(): Unit = {
+    // Without fieldType, the default is SIMPLE
+    var writer = getDataFrameWriter(classOf[CustomKeyGenerator].getName)
+    writer.partitionBy("current_ts")
+      .save(basePath)
+
+    var recordsReadDF = spark.read.format("org.apache.hudi")
+      .load(basePath + "/*/*")
+
+    assertTrue(recordsReadDF.filter(col("_hoodie_partition_path") =!= col("current_ts").cast("string")).count() == 0)
+
+    // Specify fieldType as TIMESTAMP
+    writer = getDataFrameWriter(classOf[CustomKeyGenerator].getName)
+    writer.partitionBy("current_ts:TIMESTAMP")
+      .option(Config.TIMESTAMP_TYPE_FIELD_PROP, "EPOCHMILLISECONDS")
+      .option(Config.TIMESTAMP_OUTPUT_DATE_FORMAT_PROP, "yyyyMMdd")
+      .save(basePath)
+
+    recordsReadDF = spark.read.format("org.apache.hudi")
+      .load(basePath + "/*/*")
+
+    val udf_date_format = udf((data: Long) => new DateTime(data).toString(DateTimeFormat.forPattern("yyyyMMdd")))
+    assertTrue(recordsReadDF.filter(col("_hoodie_partition_path") =!= udf_date_format(col("current_ts"))).count() == 0)
+
+    // Mixed fieldType
+    writer = getDataFrameWriter(classOf[CustomKeyGenerator].getName)
+    writer.partitionBy("driver", "rider:SIMPLE", "current_ts:TIMESTAMP")
+      .option(Config.TIMESTAMP_TYPE_FIELD_PROP, "EPOCHMILLISECONDS")
+      .option(Config.TIMESTAMP_OUTPUT_DATE_FORMAT_PROP, "yyyyMMdd")
+      .save(basePath)
+
+    recordsReadDF = spark.read.format("org.apache.hudi")
+      .load(basePath + "/*/*/*")
+    assertTrue(recordsReadDF.filter(col("_hoodie_partition_path") =!=
+      concat(col("driver"), lit("/"), col("rider"), lit("/"), udf_date_format(col("current_ts")))).count() == 0)
+
+    // Test invalid partitionKeyType
+    writer = getDataFrameWriter(classOf[CustomKeyGenerator].getName)
+    writer = writer.partitionBy("current_ts:DUMMY")
+      .option(Config.TIMESTAMP_TYPE_FIELD_PROP, "EPOCHMILLISECONDS")
+      .option(Config.TIMESTAMP_OUTPUT_DATE_FORMAT_PROP, "yyyyMMdd")
+    try {
+      writer.save(basePath)
+      fail("should fail when invalid PartitionKeyType is provided!")
+    } catch {
+      case e: Exception =>
+        assertTrue(e.getMessage.contains("No enum constant org.apache.hudi.keygen.CustomAvroKeyGenerator.PartitionKeyType.DUMMY"))
+    }
+  }
+
+  @Test def testTranslateSparkParamsToHudiParamsWithSimpleKeyGenerator() {
+    // Use the `driver` field as the partition key
+    var writer = getDataFrameWriter(classOf[SimpleKeyGenerator].getName)
+    writer.partitionBy("driver")
+      .save(basePath)
+
+    var recordsReadDF = spark.read.format("org.apache.hudi")
+      .load(basePath + "/*/*")
+
+    assertTrue(recordsReadDF.filter(col("_hoodie_partition_path") =!= col("driver")).count() == 0)
+
+    // Use the `driver,rider` fields as the partition key; if no such field exists, the default value `default` is used
+    writer = getDataFrameWriter(classOf[SimpleKeyGenerator].getName)
+    writer.partitionBy("driver", "rider")

Review comment:
   If we use SimpleKeyGenerator and specify multiple fields such as ```.partitionBy("a", "b")```, the effect we want is not achieved: the columns are translated into a single field name like ```"a,b"```, and since no such field exists, the ```default``` partition value is used.
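
   To make the pitfall concrete, here is a minimal standalone sketch (hypothetical helper names, not Hudi's actual implementation) of how joining the partitionBy columns into one field name degenerates to the `default` partition under a SimpleKeyGenerator-style lookup:

   ```scala
   // Sketch only: mimics translating partitionBy columns into a single
   // comma-separated field name, then a SimpleKeyGenerator-style field lookup.
   object SimpleKeyGenPitfall {
     val DefaultPartitionPath = "default"

     // partitionBy("a", "b") ends up as one comma-separated config value
     def translate(partitionColumns: Seq[String]): String =
       partitionColumns.mkString(",")

     // the whole value is treated as a single field name
     def partitionPath(record: Map[String, String], partitionPathField: String): String =
       record.getOrElse(partitionPathField, DefaultPartitionPath)

     def main(args: Array[String]): Unit = {
       val record = Map("driver" -> "driver-001", "rider" -> "rider-001")
       val field = translate(Seq("driver", "rider")) // "driver,rider"
       println(partitionPath(record, field))         // prints "default": no field named "driver,rider"
     }
   }
   ```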





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hudi] teeyog commented on a change in pull request #2431: [HUDI-1526] Translate the api partitionBy to hoodie.datasource.write.partitionpath.field

2021-02-02 Thread GitBox


teeyog commented on a change in pull request #2431:
URL: https://github.com/apache/hudi/pull/2431#discussion_r568434336



##
File path: 
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/hudi/functional/TestCOWDataSource.scala
##
@@ -348,4 +351,65 @@ class TestCOWDataSource extends HoodieClientTestBase {
 
 assertTrue(HoodieDataSourceHelpers.hasNewCommits(fs, basePath, "000"))
   }
+
+  @Test def testPartitionByTranslateToPartitionPath() {
+    val records = recordsToStrings(dataGen.generateInserts("000", 100)).toList
+    val inputDF = spark.read.json(spark.sparkContext.parallelize(records, 2))
+
+    val noPartitionPathOpts = commonOpts - DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY
+
+    // partitionBy takes effect
+    inputDF.write.format("hudi")
+      .partitionBy("current_date")
+      .options(noPartitionPathOpts)
+      .mode(SaveMode.Overwrite)
+      .save(basePath)
+
+    var recordsReadDF = spark.read.format("org.apache.hudi")
+      .load(basePath + "/*/*")
+
+    assertTrue(recordsReadDF.filter(col("_hoodie_partition_path") =!= col("current_date").cast("string")).count() == 0)
+
+    // PARTITIONPATH_FIELD_OPT_KEY takes effect
+    inputDF.write.format("hudi")
+      .partitionBy("current_date")
+      .options(noPartitionPathOpts)
+      .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "timestamp")
+      .mode(SaveMode.Overwrite)
+      .save(basePath)
+
+    recordsReadDF = spark.read.format("org.apache.hudi")
+      .load(basePath + "/*/*")
+
+    assertTrue(recordsReadDF.filter(col("_hoodie_partition_path") =!= col("timestamp").cast("string")).count() == 0)
+
+    // CustomKeyGenerator with SIMPLE
+    inputDF.write.format("hudi")
+      .partitionBy("current_ts")
+      .options(noPartitionPathOpts)
+      .option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, "org.apache.hudi.keygen.CustomKeyGenerator")
+      .mode(SaveMode.Overwrite)
+      .save(basePath)
+
+    recordsReadDF = spark.read.format("org.apache.hudi")
+      .load(basePath + "/*/*")
+
+    assertTrue(recordsReadDF.filter(col("_hoodie_partition_path") =!= col("current_ts").cast("string")).count() == 0)
+
+    // CustomKeyGenerator with TIMESTAMP
+    inputDF.write.format("hudi")
+      .partitionBy("current_ts")
+      .options(noPartitionPathOpts)
+      .option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, "org.apache.hudi.keygen.CustomKeyGenerator")
+      .option(Config.TIMESTAMP_TYPE_FIELD_PROP, "EPOCHMILLISECONDS")
+      .option(Config.TIMESTAMP_OUTPUT_DATE_FORMAT_PROP, "yyyyMMdd")
+      .mode(SaveMode.Overwrite)
+      .save(basePath)
+
+    recordsReadDF = spark.read.format("org.apache.hudi")
+      .load(basePath + "/*/*")
+
+    val udf_date_format = udf((data: Long) => new DateTime(data).toString(DateTimeFormat.forPattern("yyyyMMdd")))

Review comment:
   @nsivabalan 
   Thank you for your review; the code has been adjusted according to your suggestions.









[GitHub] [hudi] teeyog commented on a change in pull request #2431: [HUDI-1526]translate the api partitionBy to hoodie.datasource.write.partitionpath.field

2021-01-25 Thread GitBox


teeyog commented on a change in pull request #2431:
URL: https://github.com/apache/hudi/pull/2431#discussion_r563598187



##
File path: 
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/DataSourceOptions.scala
##
@@ -181,16 +183,33 @@ object DataSourceWriteOptions {
   @Deprecated
   val DEFAULT_STORAGE_TYPE_OPT_VAL = COW_STORAGE_TYPE_OPT_VAL
 
-  def translateStorageTypeToTableType(optParams: Map[String, String]) : Map[String, String] = {
+  def translateOptParams(optParams: Map[String, String]): Map[String, String] = {
+    // translate StorageType to TableType
+    var newOptParams = optParams
     if (optParams.contains(STORAGE_TYPE_OPT_KEY) && !optParams.contains(TABLE_TYPE_OPT_KEY)) {
       log.warn(STORAGE_TYPE_OPT_KEY + " is deprecated and will be removed in a later release; Please use " + TABLE_TYPE_OPT_KEY)
-      optParams ++ Map(TABLE_TYPE_OPT_KEY -> optParams(STORAGE_TYPE_OPT_KEY))
-    } else {
-      optParams
+      newOptParams = optParams ++ Map(TABLE_TYPE_OPT_KEY -> optParams(STORAGE_TYPE_OPT_KEY))
     }
+    // translate the api partitionBy of spark DataFrameWriter to PARTITIONPATH_FIELD_OPT_KEY
+    if (optParams.contains(SparkDataSourceUtils.PARTITIONING_COLUMNS_KEY) && !optParams.contains(PARTITIONPATH_FIELD_OPT_KEY)) {
+      val partitionColumns = optParams.get(SparkDataSourceUtils.PARTITIONING_COLUMNS_KEY)
+        .map(SparkDataSourceUtils.decodePartitioningColumns)
+        .getOrElse(Nil)
+
+      val keyGeneratorClass = optParams.getOrElse(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY,
+        DataSourceWriteOptions.DEFAULT_KEYGENERATOR_CLASS_OPT_VAL)
+      val partitionPathField =
+        keyGeneratorClass match {
+          case "org.apache.hudi.keygen.CustomKeyGenerator" =>
+            partitionColumns.map(e => s"$e:SIMPLE").mkString(",")

Review comment:
   @wangxianghu  Thank you for your review. My opinion is this: in keeping with how Spark's partitionBy is normally used, the partition field value should be the original column value, so SIMPLE is the default. If we instead inferred TIMESTAMP automatically from the field type, the rules would be hard to pin down. For example, if a field is a long, should we convert it to TIMESTAMP? If we convert it but the value is not actually a timestamp, an error will be reported. So SIMPLE is used by default; users who want TIMESTAMP can specify it directly via ```hoodie.datasource.write.partitionpath.field```.

##
File path: 
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/DataSourceOptions.scala
##
@@ -181,16 +183,33 @@ object DataSourceWriteOptions {
   @Deprecated
   val DEFAULT_STORAGE_TYPE_OPT_VAL = COW_STORAGE_TYPE_OPT_VAL
 
-  def translateStorageTypeToTableType(optParams: Map[String, String]) : Map[String, String] = {
+  def translateOptParams(optParams: Map[String, String]): Map[String, String] = {
+    // translate StorageType to TableType
+    var newOptParams = optParams
     if (optParams.contains(STORAGE_TYPE_OPT_KEY) && !optParams.contains(TABLE_TYPE_OPT_KEY)) {
       log.warn(STORAGE_TYPE_OPT_KEY + " is deprecated and will be removed in a later release; Please use " + TABLE_TYPE_OPT_KEY)
-      optParams ++ Map(TABLE_TYPE_OPT_KEY -> optParams(STORAGE_TYPE_OPT_KEY))
-    } else {
-      optParams
+      newOptParams = optParams ++ Map(TABLE_TYPE_OPT_KEY -> optParams(STORAGE_TYPE_OPT_KEY))
     }
+    // translate the api partitionBy of spark DataFrameWriter to PARTITIONPATH_FIELD_OPT_KEY
+    if (optParams.contains(SparkDataSourceUtils.PARTITIONING_COLUMNS_KEY) && !optParams.contains(PARTITIONPATH_FIELD_OPT_KEY)) {
+      val partitionColumns = optParams.get(SparkDataSourceUtils.PARTITIONING_COLUMNS_KEY)
+        .map(SparkDataSourceUtils.decodePartitioningColumns)
+        .getOrElse(Nil)
+
+      val keyGeneratorClass = optParams.getOrElse(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY,
+        DataSourceWriteOptions.DEFAULT_KEYGENERATOR_CLASS_OPT_VAL)
+      val partitionPathField =
+        keyGeneratorClass match {
+          case "org.apache.hudi.keygen.CustomKeyGenerator" =>
+            partitionColumns.map(e => s"$e:SIMPLE").mkString(",")

Review comment:
   Yes. Now, if the parameters include ```TIMESTAMP_TYPE_FIELD_PROP``` and ```TIMESTAMP_OUTPUT_DATE_FORMAT_PROP```, TIMESTAMP is used by default; otherwise SIMPLE.
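
   A minimal sketch of that inference (assumed shape only, not the merged code; the string keys mirror the `Config` constants, and the function name is hypothetical):

   ```scala
   // Default each partitionBy column to TIMESTAMP only when both time-based
   // key-generator properties are supplied; otherwise fall back to SIMPLE.
   def inferPartitionPathField(partitionColumns: Seq[String],
                               optParams: Map[String, String]): String = {
     val timestampProps = Seq(
       "hoodie.deltastreamer.keygen.timebased.timestamp.type",    // Config.TIMESTAMP_TYPE_FIELD_PROP
       "hoodie.deltastreamer.keygen.timebased.output.dateformat") // Config.TIMESTAMP_OUTPUT_DATE_FORMAT_PROP
     val fieldType = if (timestampProps.forall(optParams.contains)) "TIMESTAMP" else "SIMPLE"
     partitionColumns.map(c => s"$c:$fieldType").mkString(",")
   }
   ```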









[GitHub] [hudi] teeyog commented on a change in pull request #2431: [HUDI-1526]translate the api partitionBy to hoodie.datasource.write.partitionpath.field

2021-01-22 Thread GitBox


teeyog commented on a change in pull request #2431:
URL: https://github.com/apache/hudi/pull/2431#discussion_r563026619



##
File path: 
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/DataSourceOptions.scala
##
@@ -181,16 +183,33 @@ object DataSourceWriteOptions {
   @Deprecated
   val DEFAULT_STORAGE_TYPE_OPT_VAL = COW_STORAGE_TYPE_OPT_VAL
 
-  def translateStorageTypeToTableType(optParams: Map[String, String]) : Map[String, String] = {
+  def translateOptParams(optParams: Map[String, String]): Map[String, String] = {
+    // translate StorageType to TableType
+    var newOptParams = optParams
     if (optParams.contains(STORAGE_TYPE_OPT_KEY) && !optParams.contains(TABLE_TYPE_OPT_KEY)) {
       log.warn(STORAGE_TYPE_OPT_KEY + " is deprecated and will be removed in a later release; Please use " + TABLE_TYPE_OPT_KEY)
-      optParams ++ Map(TABLE_TYPE_OPT_KEY -> optParams(STORAGE_TYPE_OPT_KEY))
-    } else {
-      optParams
+      newOptParams = optParams ++ Map(TABLE_TYPE_OPT_KEY -> optParams(STORAGE_TYPE_OPT_KEY))
     }
+    // translate the api partitionBy of spark DataFrameWriter to PARTITIONPATH_FIELD_OPT_KEY
+    if (optParams.contains(SparkDataSourceUtils.PARTITIONING_COLUMNS_KEY) && !optParams.contains(PARTITIONPATH_FIELD_OPT_KEY)) {
+      val partitionColumns = optParams.get(SparkDataSourceUtils.PARTITIONING_COLUMNS_KEY)
+        .map(SparkDataSourceUtils.decodePartitioningColumns)
+        .getOrElse(Nil)
+
+      val keyGeneratorClass = optParams.getOrElse(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY,
+        DataSourceWriteOptions.DEFAULT_KEYGENERATOR_CLASS_OPT_VAL)
+      val partitionPathField =
+        keyGeneratorClass match {
+          case "org.apache.hudi.keygen.CustomKeyGenerator" =>

Review comment:
   @wangxianghu All KeyGenerators are considered; only ```CustomKeyGenerator``` is special, because it requires the user to specify fields in the form ```field1:PartitionKeyType1,field2:PartitionKeyType2```.
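
   For illustration, a small parser for that format (hypothetical code, not Hudi's actual parser; entries without a type suffix fall back to SIMPLE):

   ```scala
   def parseCustomKeyGenSpec(spec: String): Seq[(String, String)] =
     spec.split(",").toSeq.map { entry =>
       entry.split(":") match {
         case Array(field, keyType) => (field.trim, keyType.trim.toUpperCase) // e.g. "current_ts:TIMESTAMP"
         case Array(field)          => (field.trim, "SIMPLE")                 // no type suffix given
         case _ => throw new IllegalArgumentException(s"Bad partition spec entry: $entry")
       }
     }

   // parseCustomKeyGenSpec("driver:SIMPLE,current_ts:TIMESTAMP")
   //   == Seq(("driver", "SIMPLE"), ("current_ts", "TIMESTAMP"))
   ```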









[GitHub] [hudi] teeyog commented on a change in pull request #2431: [HUDI-1526]translate the api partitionBy to hoodie.datasource.write.partitionpath.field

2021-01-21 Thread GitBox


teeyog commented on a change in pull request #2431:
URL: https://github.com/apache/hudi/pull/2431#discussion_r561454150



##
File path: 
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/hudi/DefaultSource.scala
##
@@ -46,6 +46,11 @@ class DefaultSource extends RelationProvider
   with StreamSinkProvider
   with Serializable {
 
+  SparkSession.getActiveSession.foreach { spark =>
+    // Enable "passPartitionByAsOptions" to support "write.partitionBy(...)"
+    spark.conf.set("spark.sql.legacy.sources.write.passPartitionByAsOptions", "true")

Review comment:
   Thank you for your review; a TODO has been added.
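
   For context, a hedged sketch of why this flag matters: for sources that are not built-in file formats, Spark forwards the ```.partitionBy(...)``` columns to the source as a write option (the key behind ```SparkDataSourceUtils.PARTITIONING_COLUMNS_KEY```) only when this legacy flag is enabled; otherwise the source never sees them. The config key is Spark's own; `df` and `basePath` are assumed to exist:

   ```scala
   spark.conf.set("spark.sql.legacy.sources.write.passPartitionByAsOptions", "true")

   df.write.format("hudi")
     .partitionBy("driver") // now forwarded to Hudi and translated to hoodie.datasource.write.partitionpath.field
     .save(basePath)
   ```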








