[ https://issues.apache.org/jira/browse/SPARK-18907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15826909#comment-15826909 ]

Xin Ren commented on SPARK-18907:
---------------------------------

Hi Shixiong, what do you mean by "flaky" here? Should the test intercept these exceptions?

I'm not sure about the expected outcome of this test case. Thanks a lot.
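
For context, the failures in the log below all come from the option validation in FileStreamOptions (FileStreamOptions.scala:33-35 in the stack traces). My rough reading of that check, paraphrased from the error message rather than copied from the source, and with a helper name of my own, is:

{code}
import scala.util.Try

// Paraphrased sketch of the validation, not the exact Spark source:
// the option value must parse to a positive Int, otherwise an
// IllegalArgumentException is thrown while the file stream source is created.
def parseMaxFilesPerTrigger(str: String): Int =
  Try(str.toInt).toOption.filter(_ > 0).getOrElse {
    throw new IllegalArgumentException(
      s"Invalid value '$str' for option 'maxFilesPerTrigger', must be a positive integer")
  }
{code}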

{code}
13:54:38.697 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13:54:50.257 ERROR org.apache.spark.sql.execution.streaming.StreamExecution: Query maxFilesPerTrigger_test [id = 118d8397-3dab-49f4-a1e7-eb8ec7dd4fc2, runId = ae622c2d-eb0e-4647-beca-907b5dac59b0] terminated with error
java.lang.IllegalArgumentException: Invalid value 'not-a-integer' for option 'maxFilesPerTrigger', must be a positive integer
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2$$anonfun$apply$3.apply(FileStreamOptions.scala:35)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2$$anonfun$apply$3.apply(FileStreamOptions.scala:35)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2.apply(FileStreamOptions.scala:34)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2.apply(FileStreamOptions.scala:33)
        at scala.Option.map(Option.scala:146)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions.<init>(FileStreamOptions.scala:33)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions.<init>(FileStreamOptions.scala:31)
        at org.apache.spark.sql.execution.streaming.FileStreamSource.<init>(FileStreamSource.scala:44)
        at org.apache.spark.sql.execution.datasources.DataSource.createSource(DataSource.scala:256)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.applyOrElse(StreamExecution.scala:140)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.applyOrElse(StreamExecution.scala:136)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:287)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:277)
        at org.apache.spark.sql.execution.streaming.StreamExecution.logicalPlan$lzycompute(StreamExecution.scala:136)
        at org.apache.spark.sql.execution.streaming.StreamExecution.logicalPlan(StreamExecution.scala:131)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:246)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:186)
13:54:52.364 ERROR org.apache.spark.sql.execution.streaming.StreamExecution: Query maxFilesPerTrigger_test [id = 6c42063d-39f4-4722-b529-fc3d379c691d, runId = df0c4fae-49db-4be3-8270-662fe0947559] terminated with error
java.lang.IllegalArgumentException: Invalid value '-1' for option 'maxFilesPerTrigger', must be a positive integer
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2$$anonfun$apply$3.apply(FileStreamOptions.scala:35)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2$$anonfun$apply$3.apply(FileStreamOptions.scala:35)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2.apply(FileStreamOptions.scala:34)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2.apply(FileStreamOptions.scala:33)
        at scala.Option.map(Option.scala:146)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions.<init>(FileStreamOptions.scala:33)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions.<init>(FileStreamOptions.scala:31)
        at org.apache.spark.sql.execution.streaming.FileStreamSource.<init>(FileStreamSource.scala:44)
        at org.apache.spark.sql.execution.datasources.DataSource.createSource(DataSource.scala:256)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.applyOrElse(StreamExecution.scala:140)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.applyOrElse(StreamExecution.scala:136)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:287)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:277)
        at org.apache.spark.sql.execution.streaming.StreamExecution.logicalPlan$lzycompute(StreamExecution.scala:136)
        at org.apache.spark.sql.execution.streaming.StreamExecution.logicalPlan(StreamExecution.scala:131)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:246)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:186)
13:54:52.473 ERROR org.apache.spark.sql.execution.streaming.StreamExecution: Query maxFilesPerTrigger_test [id = ff834351-65c6-48d5-abb0-3ac8728a67b6, runId = ddf613f9-67c6-4668-8c11-9a12ed271c4b] terminated with error
java.lang.IllegalArgumentException: Invalid value '0' for option 'maxFilesPerTrigger', must be a positive integer
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2$$anonfun$apply$3.apply(FileStreamOptions.scala:35)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2$$anonfun$apply$3.apply(FileStreamOptions.scala:35)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2.apply(FileStreamOptions.scala:34)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2.apply(FileStreamOptions.scala:33)
        at scala.Option.map(Option.scala:146)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions.<init>(FileStreamOptions.scala:33)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions.<init>(FileStreamOptions.scala:31)
        at org.apache.spark.sql.execution.streaming.FileStreamSource.<init>(FileStreamSource.scala:44)
        at org.apache.spark.sql.execution.datasources.DataSource.createSource(DataSource.scala:256)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.applyOrElse(StreamExecution.scala:140)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.applyOrElse(StreamExecution.scala:136)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:287)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:277)
        at org.apache.spark.sql.execution.streaming.StreamExecution.logicalPlan$lzycompute(StreamExecution.scala:136)
        at org.apache.spark.sql.execution.streaming.StreamExecution.logicalPlan(StreamExecution.scala:131)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:246)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:186)
13:54:52.557 ERROR org.apache.spark.sql.execution.streaming.StreamExecution: Query maxFilesPerTrigger_test [id = 2ea98fb9-9748-4b9e-8b45-c34b91c28dea, runId = 8065a0aa-ecad-432c-a967-52b1f65f4d2d] terminated with error
java.lang.IllegalArgumentException: Invalid value '10.1' for option 'maxFilesPerTrigger', must be a positive integer
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2$$anonfun$apply$3.apply(FileStreamOptions.scala:35)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2$$anonfun$apply$3.apply(FileStreamOptions.scala:35)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2.apply(FileStreamOptions.scala:34)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions$$anonfun$2.apply(FileStreamOptions.scala:33)
        at scala.Option.map(Option.scala:146)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions.<init>(FileStreamOptions.scala:33)
        at org.apache.spark.sql.execution.streaming.FileStreamOptions.<init>(FileStreamOptions.scala:31)
        at org.apache.spark.sql.execution.streaming.FileStreamSource.<init>(FileStreamSource.scala:44)
        at org.apache.spark.sql.execution.datasources.DataSource.createSource(DataSource.scala:256)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.applyOrElse(StreamExecution.scala:140)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.applyOrElse(StreamExecution.scala:136)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:287)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:277)
        at org.apache.spark.sql.execution.streaming.StreamExecution.logicalPlan$lzycompute(StreamExecution.scala:136)
        at org.apache.spark.sql.execution.streaming.StreamExecution.logicalPlan(StreamExecution.scala:131)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:246)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:186)
{code}
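
If the expected outcome is simply that each of these values fails the query, one way I could imagine intercepting the error is to let processAllAvailable() rethrow it as a StreamingQueryException and then check its cause. This is just a sketch, not the suite's actual code: spark stands for the test SparkSession and src for a temp source directory.

{code}
import org.apache.spark.sql.streaming.StreamingQueryException
import org.scalatest.Assertions.intercept

// Sketch only: assert that an invalid 'maxFilesPerTrigger' value terminates
// the streaming query with the expected IllegalArgumentException.
def assertInvalidMaxFilesPerTrigger(value: String): Unit = {
  val df = spark.readStream
    .option("maxFilesPerTrigger", value)
    .text(src.getCanonicalPath)  // src is an assumed temp directory
  val e = intercept[StreamingQueryException] {
    // The option is validated on the stream thread when the source is created,
    // so the failure surfaces when we wait on the query, not at start().
    val q = df.writeStream.format("memory").queryName("maxFilesPerTrigger_test").start()
    try q.processAllAvailable() finally q.stop()
  }
  assert(e.getCause.isInstanceOf[IllegalArgumentException])
  assert(e.getCause.getMessage.contains("maxFilesPerTrigger"))
}

Seq("not-a-integer", "-1", "0", "10.1").foreach(assertInvalidMaxFilesPerTrigger)
{code}

Is something along those lines what you had in mind?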

> Fix flaky test: o.a.s.sql.streaming.FileStreamSourceSuite max files per trigger - incorrect values
> --------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-18907
>                 URL: https://issues.apache.org/jira/browse/SPARK-18907
>             Project: Spark
>          Issue Type: Test
>            Reporter: Shixiong Zhu
>            Priority: Minor
>



