[GitHub] [spark] cloud-fan commented on a change in pull request #35132: [SPARK-37841][SQL] BasicWriteTaskStatsTracker should not try get status for a skipped file

2022-01-07 Thread GitBox


cloud-fan commented on a change in pull request #35132:
URL: https://github.com/apache/spark/pull/35132#discussion_r780458052



##
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/BasicWriteStatsTracker.scala
##
@@ -142,8 +142,13 @@ class BasicWriteTaskStatsTracker(
     numSubmittedFiles += 1
   }
 
-  override def closeFile(filePath: String): Unit = {
-    updateFileStats(filePath)
+  override def closeFile(filePath: String, isPathCreated: Boolean): Unit = {
+    if (isPathCreated) {
+      updateFileStats(filePath)
+    } else {
+      logDebug(s"$filePath is not pre-touched by writer, skipping update file stats")

Review comment:
   ```suggestion
      logDebug(s"$filePath is not created due to no data, skip updating file stats")
   ```
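The pattern under review can be sketched in isolation. This is a minimal stand-in, not Spark's actual `BasicWriteTaskStatsTracker` (the `fileSize` parameter here replaces a real `FileSystem.getFileStatus` call): stats are only collected for files the writer actually created, because an empty partition may close a path that was never materialized on disk.

```scala
import scala.collection.mutable

// Minimal sketch (hypothetical class, not Spark's tracker): only stat files
// the writer actually created. Calling getFileStatus on a never-created path
// would throw, which is the bug this PR avoids.
class TinyStatsTracker(fileSize: String => Long) {
  private val bytesPerFile = mutable.Map.empty[String, Long]

  def closeFile(filePath: String, isPathCreated: Boolean): Unit = {
    if (isPathCreated) {
      bytesPerFile(filePath) = fileSize(filePath)
    }
    // else: no data was written and the path does not exist, so skip stats
  }

  def trackedFiles: Set[String] = bytesPerFile.keySet.toSet
}
```

With this shape, `closeFile("part-1", isPathCreated = false)` is a no-op rather than a failed filesystem lookup.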




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] cloud-fan commented on a change in pull request #35132: [SPARK-37841][SQL] BasicWriteTaskStatsTracker should not try get status for a skipped file

2022-01-07 Thread GitBox


cloud-fan commented on a change in pull request #35132:
URL: https://github.com/apache/spark/pull/35132#discussion_r780459615



##
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcFileFormat.scala
##
@@ -305,6 +305,12 @@ private[orc] class OrcOutputWriter(
       recordWriter.close(Reporter.NULL)
     }
   }
+
+  /**
+   * If `recordWriterInstantiated` is false, the output file is not pretouched.

Review comment:
   Alternatively, can we fix the Hive ORC data source to always write the file? This seems wrong to me. We should at least write one file even if the input query is empty, to record the output schema.
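The reasoning behind "write one file even for an empty query" can be shown with a toy model (the names here are illustrative, not Spark's API): a columnar file's footer is the only place the output schema survives, so a read over zero files has nothing to infer from.

```scala
// Toy illustration of schema recovery from file footers (hypothetical types,
// not the real ORC reader). Each written file carries its schema; with no
// files at all, the schema of the empty result is unrecoverable.
case class Field(name: String, dataType: String)

def inferSchema(fileFooters: Seq[Seq[Field]]): Option[Seq[Field]] =
  fileFooters.headOption
```

A single footer-only (zero-row) file makes `inferSchema` return the schema; an empty directory makes it return `None`, which is why skipping the file entirely loses information.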






[GitHub] [spark] cloud-fan commented on a change in pull request #35132: [SPARK-37841][SQL] BasicWriteTaskStatsTracker should not try get status for a skipped file

2022-01-10 Thread GitBox


cloud-fan commented on a change in pull request #35132:
URL: https://github.com/apache/spark/pull/35132#discussion_r780998540



##
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcFileFormat.scala
##
@@ -299,11 +292,26 @@ private[orc] class OrcOutputWriter(
   }
 
   override def close(): Unit = {
-    if (recordWriterInstantiated) {
+    try {
+      OrcUtils.addSparkVersionMetadata(getOrCreateInternalWriter())
+    } catch {
+      case NonFatal(e) => log.warn(e.toString, e)
+    }
+    recordWriter.close(Reporter.NULL)
+  }
+
+  private def getOrCreateInternalWriter(): Writer = {
+    val writerField = recordWriter.getClass.getDeclaredField("writer")
+    writerField.setAccessible(true)
+    var writer = writerField.get(recordWriter).asInstanceOf[Writer]
+    if (writer == null) {

Review comment:
   Do we need to consider multi-threading here?
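If the reflective check-then-create could race, one way to make it safe is to guard it with a lock. This is a sketch with illustrative stand-in types (not the real Hive ORC `RecordWriter`/`Writer` classes), showing the shape such a fix could take:

```scala
// Sketch of a thread-safe version of the reflective lazy init (hypothetical
// names): without the lock, two threads could both read null from the private
// "writer" field and create two writers.
class RecordWriterLike {
  private var writer: AnyRef = _ // stands in for the Hive ORC Writer field
}

object LazyWriterAccess {
  private val lock = new Object

  def getOrCreate(rw: RecordWriterLike, create: () => AnyRef): AnyRef =
    lock.synchronized {
      val field = rw.getClass.getDeclaredField("writer")
      field.setAccessible(true)
      var w = field.get(rw)
      if (w == null) {
        w = create() // real code would construct the ORC writer here
        field.set(rw, w)
      }
      w
    }
}
```

In practice a Spark write task calls `close()` from a single thread, so the lock would mostly be cheap insurance; whether it is needed is exactly the reviewer's question.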



