[GitHub] [hudi] leesf commented on a change in pull request #3754: [HUDI-2482] support 'drop partition' sql

2021-10-14 Thread GitBox


leesf commented on a change in pull request #3754:
URL: https://github.com/apache/hudi/pull/3754#discussion_r729045601



##
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/HoodieSqlUtils.scala
##
@@ -92,7 +93,45 @@ object HoodieSqlUtils extends SparkAdapterSupport {
     properties.putAll((spark.sessionState.conf.getAllConfs ++ table.storage.properties).asJava)
     HoodieMetadataConfig.newBuilder.fromProperties(properties).build()
   }
-  FSUtils.getAllPartitionPaths(sparkEngine, metadataConfig, HoodieSqlUtils.getTableLocation(table, spark)).asScala
+  FSUtils.getAllPartitionPaths(sparkEngine, metadataConfig, getTableLocation(table, spark)).asScala
+  }
+
+  /**
+   * This method is used to stay compatible with old non-hive-styled partitioned tables.
+   * By default we enable "hoodie.datasource.write.hive_style_partitioning"
+   * when writing data to a hudi table through Spark SQL.
+   * If the existing table is a non-hive-styled partitioned table, we should
+   * disable "hoodie.datasource.write.hive_style_partitioning" when
+   * merging or updating the table. Otherwise we will get an incorrect merge
+   * result because the partition paths mismatch.
+   */
+  def isHiveStylePartitionPartitioning(partitionPaths: Seq[String], table: CatalogTable): Boolean = {

Review comment:
   rename to isHiveStyledPartitioning
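   
   For context, hive-style partitioning embeds the partition column name in
   every path segment, which is what the check in the diff above tests
   fragment by fragment. A minimal standalone sketch of the distinction (the
   column names and paths below are illustrative, not taken from the PR):
   
   // Hive-style paths carry "column=value" segments; non-hive-style paths carry values only.
   val partitionColumns = Seq("year", "month")
   
   def isHiveStyled(path: String): Boolean = {
     val fragments = path.split("/")
     fragments.length == partitionColumns.length &&
       fragments.zip(partitionColumns).forall { case (fragment, column) =>
         fragment.startsWith(s"$column=")
       }
   }
   
   assert(isHiveStyled("year=2021/month=10"))  // hive-style: column names present
   assert(!isHiveStyled("2021/10"))            // non-hive-style: values only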




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hudi] leesf commented on a change in pull request #3754: [HUDI-2482] support 'drop partition' sql

2021-10-14 Thread GitBox


leesf commented on a change in pull request #3754:
URL: https://github.com/apache/hudi/pull/3754#discussion_r729045075



##
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/HoodieSqlUtils.scala
##
@@ -92,7 +93,45 @@ object HoodieSqlUtils extends SparkAdapterSupport {
     properties.putAll((spark.sessionState.conf.getAllConfs ++ table.storage.properties).asJava)
     HoodieMetadataConfig.newBuilder.fromProperties(properties).build()
   }
-  FSUtils.getAllPartitionPaths(sparkEngine, metadataConfig, HoodieSqlUtils.getTableLocation(table, spark)).asScala
+  FSUtils.getAllPartitionPaths(sparkEngine, metadataConfig, getTableLocation(table, spark)).asScala
+  }
+
+  /**
+   * This method is used to stay compatible with old non-hive-styled partitioned tables.
+   * By default we enable "hoodie.datasource.write.hive_style_partitioning"
+   * when writing data to a hudi table through Spark SQL.
+   * If the existing table is a non-hive-styled partitioned table, we should
+   * disable "hoodie.datasource.write.hive_style_partitioning" when
+   * merging or updating the table. Otherwise we will get an incorrect merge
+   * result because the partition paths mismatch.
+   */
+  def isHiveStylePartitionPartitioning(partitionPaths: Seq[String], table: CatalogTable): Boolean = {
+    if (table.partitionColumnNames.nonEmpty) {
+      val isHiveStylePartitionPath = (path: String) => {
+        val fragments = path.split("/")
+        if (fragments.size != table.partitionColumnNames.size) {
+          false
+        } else {
+          fragments.zip(table.partitionColumnNames).forall {
+            case (pathFragment, partitionColumn) => pathFragment.startsWith(s"$partitionColumn=")
+          }
+        }
+      }
+      partitionPaths.forall(isHiveStylePartitionPath)
+    } else {
+      true

Review comment:
   So this means that if it is not a partitioned table, we treat it as
   hive-style partitioned?
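   
   To make the branch in question concrete, here is a hedged, self-contained
   restatement of the method's logic, with the partition columns passed in
   directly instead of read from CatalogTable (names simplified for
   illustration):
   
   def isHiveStylePartitioning(partitionPaths: Seq[String], partitionColumns: Seq[String]): Boolean = {
     if (partitionColumns.nonEmpty) {
       partitionPaths.forall { path =>
         val fragments = path.split("/")
         fragments.length == partitionColumns.length &&
           fragments.zip(partitionColumns).forall { case (fragment, column) =>
             fragment.startsWith(s"$column=")
           }
       }
     } else {
       // The branch being asked about: a table with no partition columns
       // falls through here and is reported as hive-styled.
       true
     }
   }
   
   // A non-partitioned table (no partition columns, no partition paths) returns true:
   assert(isHiveStylePartitioning(Seq.empty, Seq.empty))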








[GitHub] [hudi] leesf commented on a change in pull request #3754: [HUDI-2482] support 'drop partition' sql

2021-10-07 Thread GitBox


leesf commented on a change in pull request #3754:
URL: https://github.com/apache/hudi/pull/3754#discussion_r724012463



##
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/HoodieSqlUtils.scala
##
@@ -92,7 +93,45 @@ object HoodieSqlUtils extends SparkAdapterSupport {
     properties.putAll((spark.sessionState.conf.getAllConfs ++ table.storage.properties).asJava)
     HoodieMetadataConfig.newBuilder.fromProperties(properties).build()
   }
-  FSUtils.getAllPartitionPaths(sparkEngine, metadataConfig, HoodieSqlUtils.getTableLocation(table, spark)).asScala
+  FSUtils.getAllPartitionPaths(sparkEngine, metadataConfig, getTableLocation(table, spark)).asScala
+  }
+
+  /**
+   * This method is used to stay compatible with old non-hive-styled partitioned tables.
+   * By default we enable "hoodie.datasource.write.hive_style_partitioning"
+   * when writing data to a hudi table through Spark SQL.
+   * If the existing table is a non-hive-styled partitioned table, we should
+   * disable "hoodie.datasource.write.hive_style_partitioning" when
+   * merging or updating the table. Otherwise we will get an incorrect merge
+   * result because the partition paths mismatch.
+   */
+  def isNotHiveStyledPartitionTable(partitionPaths: Seq[String], table: CatalogTable): Boolean = {

Review comment:
   would rename to `isHiveStylePartitionPartitioning`

##
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/HoodieSqlUtils.scala
##
@@ -92,7 +93,45 @@ object HoodieSqlUtils extends SparkAdapterSupport {
     properties.putAll((spark.sessionState.conf.getAllConfs ++ table.storage.properties).asJava)
     HoodieMetadataConfig.newBuilder.fromProperties(properties).build()
   }
-  FSUtils.getAllPartitionPaths(sparkEngine, metadataConfig, HoodieSqlUtils.getTableLocation(table, spark)).asScala
+  FSUtils.getAllPartitionPaths(sparkEngine, metadataConfig, getTableLocation(table, spark)).asScala
+  }
+
+  /**
+   * This method is used to stay compatible with old non-hive-styled partitioned tables.
+   * By default we enable "hoodie.datasource.write.hive_style_partitioning"
+   * when writing data to a hudi table through Spark SQL.
+   * If the existing table is a non-hive-styled partitioned table, we should
+   * disable "hoodie.datasource.write.hive_style_partitioning" when
+   * merging or updating the table. Otherwise we will get an incorrect merge
+   * result because the partition paths mismatch.
+   */
+  def isNotHiveStyledPartitionTable(partitionPaths: Seq[String], table: CatalogTable): Boolean = {
+    if (table.partitionColumnNames.nonEmpty) {
+      val isHiveStylePartitionPath = (path: String) => {
+        val fragments = path.split("/")
+        if (fragments.size != table.partitionColumnNames.size) {
+          false
+        } else {
+          fragments.zip(table.partitionColumnNames).forall {
+            case (pathFragment, partitionColumn) => pathFragment.startsWith(s"$partitionColumn=")
+          }
+        }
+      }
+      !partitionPaths.forall(isHiveStylePartitionPath)
+    } else {
+      false
+    }
+  }
+
+  /**
+   * If this table has disabled the url encode, Spark SQL should also disable
+   * it when writing to the table.
+   */
+  def isUrlEncodeDisable(partitionPaths: Seq[String], table: CatalogTable): Boolean = {

Review comment:
   `isUrlEncodeEnabled`  and fix the description please
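   
   For reference, one way such a check could be sketched: treat the table as
   url-encoded if any partition path segment decodes to something different
   from itself (for example "%2F" standing in for "/"). This is an
   illustrative heuristic only, not the PR's actual implementation, and the
   helper name below is hypothetical:
   
   import java.net.URLDecoder
   import java.nio.charset.StandardCharsets
   
   // Hypothetical helper: a segment that changes under url-decoding
   // (e.g. "2021%2F10%2F14" -> "2021/10/14") suggests the table was
   // written with url-encoded partition paths.
   def looksUrlEncoded(partitionPaths: Seq[String]): Boolean =
     partitionPaths.exists { path =>
       path.split("/").exists { segment =>
         URLDecoder.decode(segment, StandardCharsets.UTF_8.name()) != segment
       }
     }
   
   assert(looksUrlEncoded(Seq("2021%2F10%2F14")))       // one encoded segment
   assert(!looksUrlEncoded(Seq("year=2021/month=10")))  // nothing to decode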








[GitHub] [hudi] leesf commented on a change in pull request #3754: [HUDI-2482] support 'drop partition' sql

2021-10-07 Thread GitBox


leesf commented on a change in pull request #3754:
URL: https://github.com/apache/hudi/pull/3754#discussion_r724013236



##
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/HoodieSqlUtils.scala
##
@@ -92,7 +93,45 @@ object HoodieSqlUtils extends SparkAdapterSupport {
     properties.putAll((spark.sessionState.conf.getAllConfs ++ table.storage.properties).asJava)
     HoodieMetadataConfig.newBuilder.fromProperties(properties).build()
   }
-  FSUtils.getAllPartitionPaths(sparkEngine, metadataConfig, HoodieSqlUtils.getTableLocation(table, spark)).asScala
+  FSUtils.getAllPartitionPaths(sparkEngine, metadataConfig, getTableLocation(table, spark)).asScala
+  }
+
+  /**
+   * This method is used to stay compatible with old non-hive-styled partitioned tables.
+   * By default we enable "hoodie.datasource.write.hive_style_partitioning"
+   * when writing data to a hudi table through Spark SQL.
+   * If the existing table is a non-hive-styled partitioned table, we should
+   * disable "hoodie.datasource.write.hive_style_partitioning" when
+   * merging or updating the table. Otherwise we will get an incorrect merge
+   * result because the partition paths mismatch.
+   */
+  def isNotHiveStyledPartitionTable(partitionPaths: Seq[String], table: CatalogTable): Boolean = {
+    if (table.partitionColumnNames.nonEmpty) {
+      val isHiveStylePartitionPath = (path: String) => {
+        val fragments = path.split("/")
+        if (fragments.size != table.partitionColumnNames.size) {
+          false
+        } else {
+          fragments.zip(table.partitionColumnNames).forall {
+            case (pathFragment, partitionColumn) => pathFragment.startsWith(s"$partitionColumn=")
+          }
+        }
+      }
+      !partitionPaths.forall(isHiveStylePartitionPath)
+    } else {
+      false
+    }
+  }
+
+  /**
+   * If this table has disabled the url encode, Spark SQL should also disable
+   * it when writing to the table.
+   */
+  def isUrlEncodeDisable(partitionPaths: Seq[String], table: CatalogTable): Boolean = {

Review comment:
   `isUrlEncodeEnabled`  and fix the description please



