Re: [PR] [SPARK-47793][TEST][FOLLOWUP] Fix flaky test for Python data source exactly once. [spark]

2024-05-10 Thread via GitHub


dongjoon-hyun closed pull request #46481: [SPARK-47793][TEST][FOLLOWUP] Fix flaky test for Python data source exactly once.
URL: https://github.com/apache/spark/pull/46481





Re: [PR] [SPARK-47793][TEST][FOLLOWUP] Fix flaky test for Python data source exactly once. [spark]

2024-05-10 Thread via GitHub


dongjoon-hyun commented on PR #46481:
URL: https://github.com/apache/spark/pull/46481#issuecomment-2104801936

   Let me bring this first for further monitoring. Thank you, @chaoqin-li1123 and @allisonwang-db.
   Merged to master.





Re: [PR] [SPARK-47793][TEST][FOLLOWUP] Fix flaky test for Python data source exactly once. [spark]

2024-05-09 Thread via GitHub


dongjoon-hyun commented on PR #46481:
URL: https://github.com/apache/spark/pull/46481#issuecomment-2103611989

   Could you do the final review and sign-off, please, @HyukjinKwon?





Re: [PR] [SPARK-47793][TEST][FOLLOWUP] Fix flaky test for Python data source exactly once. [spark]

2024-05-08 Thread via GitHub


chaoqin-li1123 commented on code in PR #46481:
URL: https://github.com/apache/spark/pull/46481#discussion_r1594775969


##
sql/core/src/test/scala/org/apache/spark/sql/execution/python/PythonStreamingDataSourceSuite.scala:
##
@@ -326,8 +326,11 @@ class PythonStreamingDataSourceSuite extends PythonDataSourceSuiteBase {
 lastBatch = q.lastProgress.batchId.toInt
   }
   assert(lastBatch > 20)
+  val rowCount = spark.read.format("json").load(outputDir.getAbsolutePath).count()
+  // There may be one uncommitted batch that is not recorded in query progress.
+  assert(rowCount == 2 * lastBatch + 2 || rowCount == 2 * lastBatch + 4)

Review Comment:
   More comments added. We can check an upper bound here, but it is not really meaningful.
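
   For context, a minimal sketch of the arithmetic behind the two accepted row counts, assuming (as the assertion implies) that each micro-batch writes exactly two rows and that batch IDs are zero-based; this is an illustration, not the suite's actual code:

   // Sketch only. If the last batch reported in query progress is `lastBatch`,
   // then lastBatch + 1 batches have been recorded, contributing
   // 2 * (lastBatch + 1) = 2 * lastBatch + 2 rows to the JSON sink.
   val committedRows = 2 * (lastBatch + 1)
   // One more batch may have reached the sink without its progress event being
   // observed before shutdown, adding another 2 rows.
   val withExtraBatch = committedRows + 2        // 2 * lastBatch + 4
   assert(rowCount == committedRows || rowCount == withExtraBatch)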






Re: [PR] [SPARK-47793][TEST][FOLLOWUP] Fix flaky test for Python data source exactly once. [spark]

2024-05-08 Thread via GitHub


allisonwang-db commented on code in PR #46481:
URL: https://github.com/apache/spark/pull/46481#discussion_r1594584246


##
sql/core/src/test/scala/org/apache/spark/sql/execution/python/PythonStreamingDataSourceSuite.scala:
##
@@ -326,8 +326,11 @@ class PythonStreamingDataSourceSuite extends PythonDataSourceSuiteBase {
 lastBatch = q.lastProgress.batchId.toInt
   }
   assert(lastBatch > 20)
+  val rowCount = spark.read.format("json").load(outputDir.getAbsolutePath).count()
+  // There may be one uncommitted batch that is not recorded in query progress.
+  assert(rowCount == 2 * lastBatch + 2 || rowCount == 2 * lastBatch + 4)

Review Comment:
   Can we explain the `2 * lastBatch + 2` and `2 * lastBatch + 4` values in the comment here? Just curious: why can't we provide an upper bound for the row count?






Re: [PR] [SPARK-47793][TEST][FOLLOWUP] Fix flaky test for Python data source exactly once. [spark]

2024-05-08 Thread via GitHub


chaoqin-li1123 commented on code in PR #46481:
URL: https://github.com/apache/spark/pull/46481#discussion_r1594551803


##
sql/core/src/test/scala/org/apache/spark/sql/execution/python/PythonStreamingDataSourceSuite.scala:
##
@@ -326,8 +326,11 @@ class PythonStreamingDataSourceSuite extends PythonDataSourceSuiteBase {
 lastBatch = q.lastProgress.batchId.toInt
   }
   assert(lastBatch > 20)
+  val rowCount = spark.read.format("json").load(outputDir.getAbsolutePath).count()
+  // There may be one uncommitted batch that is not recorded in query progress.
+  assert(rowCount == 2 * lastBatch + 2 || rowCount == 2 * lastBatch + 4)

Review Comment:
   Unfortunately, there is no graceful way to shut down a streaming query. I can't think of any alternative.
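
   For illustration, a hedged sketch of the stop-then-verify pattern being discussed; `pythonSourceName`, `checkpointDir`, and `outputDir` are hypothetical placeholders rather than the suite's identifiers, and the point is only that `q.stop()` does not drain in-flight work:

   // Sketch under assumptions, not the suite's code: the query is stopped while
   // a micro-batch may still be in flight, so the JSON sink can hold rows from
   // a batch that never appears in q.lastProgress.
   val q = spark.readStream.format(pythonSourceName).load()
     .writeStream.format("json")
     .option("checkpointLocation", checkpointDir.getAbsolutePath)
     .start(outputDir.getAbsolutePath)
   // ... poll q.lastProgress until batchId exceeds 20 ...
   q.stop()  // no graceful drain: an extra, unreported batch may already be on disk
   val rowCount = spark.read.format("json").load(outputDir.getAbsolutePath).count()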






Re: [PR] [SPARK-47793][TEST][FOLLOWUP] Fix flaky test for Python data source exactly once. [spark]

2024-05-08 Thread via GitHub


dongjoon-hyun commented on code in PR #46481:
URL: https://github.com/apache/spark/pull/46481#discussion_r1594531248


##
sql/core/src/test/scala/org/apache/spark/sql/execution/python/PythonStreamingDataSourceSuite.scala:
##
@@ -326,8 +326,11 @@ class PythonStreamingDataSourceSuite extends PythonDataSourceSuiteBase {
 lastBatch = q.lastProgress.batchId.toInt
   }
   assert(lastBatch > 20)
+  val rowCount = spark.read.format("json").load(outputDir.getAbsolutePath).count()
+  // There may be one uncommitted batch that is not recorded in query progress.
+  assert(rowCount == 2 * lastBatch + 2 || rowCount == 2 * lastBatch + 4)

Review Comment:
   I'm wondering if this is the right way to fix the root cause of the flakiness. Is there a better way to make this deterministic?






Re: [PR] [SPARK-47793][TEST][FOLLOWUP] Fix flaky test for Python data source exactly once. [spark]

2024-05-08 Thread via GitHub


dongjoon-hyun commented on code in PR #46481:
URL: https://github.com/apache/spark/pull/46481#discussion_r1594531248


##
sql/core/src/test/scala/org/apache/spark/sql/execution/python/PythonStreamingDataSourceSuite.scala:
##
@@ -326,8 +326,11 @@ class PythonStreamingDataSourceSuite extends PythonDataSourceSuiteBase {
 lastBatch = q.lastProgress.batchId.toInt
   }
   assert(lastBatch > 20)
+  val rowCount = spark.read.format("json").load(outputDir.getAbsolutePath).count()
+  // There may be one uncommitted batch that is not recorded in query progress.
+  assert(rowCount == 2 * lastBatch + 2 || rowCount == 2 * lastBatch + 4)

Review Comment:
   Is this the right way to fix the root cause of the flakiness?






Re: [PR] [SPARK-47793][TEST][FOLLOWUP] Fix flaky test for Python data source exactly once. [spark]

2024-05-08 Thread via GitHub


dongjoon-hyun commented on PR #46481:
URL: https://github.com/apache/spark/pull/46481#issuecomment-2101258433

   cc @HeartSaVioR and @allisonwang-db from #45977 

