[jira] [Updated] (SPARK-21123) Options for file stream source are in a wrong table

2017-06-16 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-21123:
-
Description: 
Right now, the options for the file stream source are documented together with 
the file sink options. We should create a separate table for the source options and fix it.


  was:
!Screen Shot 2017-06-15 at 11.25.49 AM.png|thumbnail!




> Options for file stream source are in a wrong table
> ---
>
> Key: SPARK-21123
> URL: https://issues.apache.org/jira/browse/SPARK-21123
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation, Structured Streaming
>Affects Versions: 2.1.1
>Reporter: Shixiong Zhu
>Priority: Minor
> Attachments: Screen Shot 2017-06-15 at 11.25.49 AM.png
>
>
> Right now, the options for the file stream source are documented together with 
> the file sink options. We should create a separate table for the source options and fix it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21123) Options for file stream source are in a wrong table

2017-06-16 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-21123:
-
Description: 
!Screen Shot 2017-06-15 at 11.25.49 AM.png|thumbnail!



  was:!attachment-name.jpg|thumbnail!


> Options for file stream source are in a wrong table
> ---
>
> Key: SPARK-21123
> URL: https://issues.apache.org/jira/browse/SPARK-21123
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation, Structured Streaming
>Affects Versions: 2.1.1
>Reporter: Shixiong Zhu
>Priority: Minor
> Attachments: Screen Shot 2017-06-15 at 11.25.49 AM.png
>
>
> !Screen Shot 2017-06-15 at 11.25.49 AM.png|thumbnail!






[jira] [Updated] (SPARK-21123) Options for file stream source are in a wrong table

2017-06-16 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-21123:
-
Description: !attachment-name.jpg|thumbnail!

> Options for file stream source are in a wrong table
> ---
>
> Key: SPARK-21123
> URL: https://issues.apache.org/jira/browse/SPARK-21123
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation, Structured Streaming
>Affects Versions: 2.1.1
>Reporter: Shixiong Zhu
>Priority: Minor
> Attachments: Screen Shot 2017-06-15 at 11.25.49 AM.png
>
>
> !attachment-name.jpg|thumbnail!






[jira] [Created] (SPARK-21123) Options for file stream source are in a wrong table

2017-06-16 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created SPARK-21123:


 Summary: Options for file stream source are in a wrong table
 Key: SPARK-21123
 URL: https://issues.apache.org/jira/browse/SPARK-21123
 Project: Spark
  Issue Type: Documentation
  Components: Documentation, Structured Streaming
Affects Versions: 2.1.1
Reporter: Shixiong Zhu
Priority: Minor









[jira] [Updated] (SPARK-21123) Options for file stream source are in a wrong table

2017-06-16 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-21123:
-
Attachment: Screen Shot 2017-06-15 at 11.25.49 AM.png

> Options for file stream source are in a wrong table
> ---
>
> Key: SPARK-21123
> URL: https://issues.apache.org/jira/browse/SPARK-21123
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation, Structured Streaming
>Affects Versions: 2.1.1
>Reporter: Shixiong Zhu
>Priority: Minor
> Attachments: Screen Shot 2017-06-15 at 11.25.49 AM.png
>
>







[jira] [Updated] (SPARK-20979) Add a rate source to generate values for tests and benchmark

2017-06-13 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20979:
-
Fix Version/s: 2.2.0

> Add a rate source to generate values for tests and benchmark
> 
>
> Key: SPARK-20979
> URL: https://issues.apache.org/jira/browse/SPARK-20979
> Project: Spark
>  Issue Type: New Feature
>  Components: Structured Streaming
>Affects Versions: 2.3.0
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
> Fix For: 2.2.0, 2.3.0
>
>







[jira] [Updated] (SPARK-21069) Add rate source to programming guide

2017-06-12 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-21069:
-
Labels: starter  (was: )

> Add rate source to programming guide
> 
>
> Key: SPARK-21069
> URL: https://issues.apache.org/jira/browse/SPARK-21069
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation, Structured Streaming
>Affects Versions: 2.3.0
>Reporter: Shixiong Zhu
>  Labels: starter
>
> SPARK-20979 added a new structured streaming source: rate source. We should 
> document it.






[jira] [Updated] (SPARK-20979) Add a rate source to generate values for tests and benchmark

2017-06-12 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20979:
-
Affects Version/s: (was: 2.2.0)
   2.3.0

> Add a rate source to generate values for tests and benchmark
> 
>
> Key: SPARK-20979
> URL: https://issues.apache.org/jira/browse/SPARK-20979
> Project: Spark
>  Issue Type: New Feature
>  Components: Structured Streaming
>Affects Versions: 2.3.0
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
> Fix For: 2.3.0
>
>







[jira] [Created] (SPARK-21069) Add rate source to programming guide

2017-06-12 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created SPARK-21069:


 Summary: Add rate source to programming guide
 Key: SPARK-21069
 URL: https://issues.apache.org/jira/browse/SPARK-21069
 Project: Spark
  Issue Type: Documentation
  Components: Documentation, Structured Streaming
Affects Versions: 2.3.0
Reporter: Shixiong Zhu


SPARK-20979 added a new structured streaming source: rate source. We should 
document it.






[jira] [Resolved] (SPARK-20979) Add a rate source to generate values for tests and benchmark

2017-06-12 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20979.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

> Add a rate source to generate values for tests and benchmark
> 
>
> Key: SPARK-20979
> URL: https://issues.apache.org/jira/browse/SPARK-20979
> Project: Spark
>  Issue Type: New Feature
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
> Fix For: 2.3.0
>
>







[jira] [Commented] (SPARK-20927) Add cache operator to Unsupported Operations in Structured Streaming

2017-06-12 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046740#comment-16046740
 ] 

Shixiong Zhu commented on SPARK-20927:
--

Do nothing except log a warning.

> Add cache operator to Unsupported Operations in Structured Streaming 
> -
>
> Key: SPARK-20927
> URL: https://issues.apache.org/jira/browse/SPARK-20927
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Jacek Laskowski
>Priority: Trivial
>  Labels: starter
>
> Just [found 
> out|https://stackoverflow.com/questions/42062092/why-does-using-cache-on-streaming-datasets-fail-with-analysisexception-queries]
>  that {{cache}} is not allowed on streaming datasets.
> {{cache}} on streaming datasets leads to the following exception:
> {code}
> scala> spark.readStream.text("files").cache
> org.apache.spark.sql.AnalysisException: Queries with streaming sources must 
> be executed with writeStream.start();;
> FileSource[files]
>   at 
> org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$.org$apache$spark$sql$catalyst$analysis$UnsupportedOperationChecker$$throwError(UnsupportedOperationChecker.scala:297)
>   at 
> org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$$anonfun$checkForBatch$1.apply(UnsupportedOperationChecker.scala:36)
>   at 
> org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$$anonfun$checkForBatch$1.apply(UnsupportedOperationChecker.scala:34)
>   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
>   at 
> org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$.checkForBatch(UnsupportedOperationChecker.scala:34)
>   at 
> org.apache.spark.sql.execution.QueryExecution.assertSupported(QueryExecution.scala:63)
>   at 
> org.apache.spark.sql.execution.QueryExecution.withCachedData$lzycompute(QueryExecution.scala:74)
>   at 
> org.apache.spark.sql.execution.QueryExecution.withCachedData(QueryExecution.scala:72)
>   at 
> org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:78)
>   at 
> org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:78)
>   at 
> org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:84)
>   at 
> org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:80)
>   at 
> org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:89)
>   at 
> org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:89)
>   at 
> org.apache.spark.sql.execution.CacheManager$$anonfun$cacheQuery$1.apply(CacheManager.scala:104)
>   at 
> org.apache.spark.sql.execution.CacheManager.writeLock(CacheManager.scala:68)
>   at 
> org.apache.spark.sql.execution.CacheManager.cacheQuery(CacheManager.scala:92)
>   at org.apache.spark.sql.Dataset.persist(Dataset.scala:2603)
>   at org.apache.spark.sql.Dataset.cache(Dataset.scala:2613)
>   ... 48 elided
> {code}
> It should be included in Structured Streaming's [Unsupported 
> Operations|http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#unsupported-operations].






[jira] [Updated] (SPARK-21029) All StreamingQuery should be stopped when the SparkSession is stopped

2017-06-09 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-21029:
-
Issue Type: Improvement  (was: Bug)

> All StreamingQuery should be stopped when the SparkSession is stopped
> -
>
> Key: SPARK-21029
> URL: https://issues.apache.org/jira/browse/SPARK-21029
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Felix Cheung
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-20952) TaskContext should be an InheritableThreadLocal

2017-06-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043732#comment-16043732
 ] 

Shixiong Zhu commented on SPARK-20952:
--

For `ParquetFileFormat#readFootersInParallel`, I would suggest that you just 
set the TaskContext in "parFiles.flatMap". 

{code}
val taskContext = TaskContext.get

val parFiles = partFiles.par
parFiles.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(8))
parFiles.flatMap { currentFile =>
  TaskContext.setTaskContext(taskContext)
  ...
}.seq
{code}

In this special case, it's safe since this is a local one-time thread pool.
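
The underlying trick can be sketched outside Spark (a generic illustration with made-up names, not Spark internals): snapshot the ThreadLocal's value into a local val on the caller thread, so worker threads reach the value through the closure instead of through the ThreadLocal.

```scala
// Hypothetical sketch: `context` stands in for TaskContext's ThreadLocal.
val context = new ThreadLocal[String]
context.set("task-1")
val captured = context.get // snapshot taken on the caller thread

var seenInWorker: String = null
val worker = new Thread(new Runnable {
  def run(): Unit = {
    // context.get would be null on this thread; `captured` still holds the value
    seenInWorker = captured
  }
})
worker.start()
worker.join()
println(seenInWorker) // "task-1"
```

The closure capture works for any thread pool, whereas a plain ThreadLocal is only visible on the thread that set it.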

> TaskContext should be an InheritableThreadLocal
> ---
>
> Key: SPARK-20952
> URL: https://issues.apache.org/jira/browse/SPARK-20952
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Robert Kruszewski
>Priority: Minor
>
> TaskContext is a ThreadLocal as a result when you fork a thread inside your 
> executor task you lose the handle on the original context set by the 
> executor. We should change it to InheritableThreadLocal so we can access it 
> inside thread pools on executors. 
> See ParquetFileFormat#readFootersInParallel for example of code that uses 
> thread pools inside the tasks.






[jira] [Commented] (SPARK-20952) TaskContext should be an InheritableThreadLocal

2017-06-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043710#comment-16043710
 ] 

Shixiong Zhu commented on SPARK-20952:
--

Although I don't know what you plan to do, you can save the TaskContext into a 
local variable like this:
{code}
  private[parquet] def readParquetFootersInParallel(
      conf: Configuration,
      partFiles: Seq[FileStatus],
      ignoreCorruptFiles: Boolean): Seq[Footer] = {
    val taskContext = TaskContext.get

    val parFiles = partFiles.par
    parFiles.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(8))
    parFiles.flatMap { currentFile =>
      try {
        // Use `taskContext` rather than `TaskContext.get`

        // Skips row group information since we only need the schema.
        // ParquetFileReader.readFooter throws RuntimeException, instead of IOException,
        // when it can't read the footer.
        Some(new Footer(currentFile.getPath(),
          ParquetFileReader.readFooter(conf, currentFile, SKIP_ROW_GROUPS)))
      } catch { case e: RuntimeException =>
        if (ignoreCorruptFiles) {
          logWarning(s"Skipped the footer in the corrupted file: $currentFile", e)
          None
        } else {
          throw new IOException(s"Could not read footer for file: $currentFile", e)
        }
      }
    }.seq
  }
{code}

> TaskContext should be an InheritableThreadLocal
> ---
>
> Key: SPARK-20952
> URL: https://issues.apache.org/jira/browse/SPARK-20952
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Robert Kruszewski
>Priority: Minor
>
> TaskContext is a ThreadLocal as a result when you fork a thread inside your 
> executor task you lose the handle on the original context set by the 
> executor. We should change it to InheritableThreadLocal so we can access it 
> inside thread pools on executors. 
> See ParquetFileFormat#readFootersInParallel for example of code that uses 
> thread pools inside the tasks.






[jira] [Commented] (SPARK-20952) TaskContext should be an InheritableThreadLocal

2017-06-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043708#comment-16043708
 ] 

Shixiong Zhu commented on SPARK-20952:
--

Why does it need the TaskContext?

> TaskContext should be an InheritableThreadLocal
> ---
>
> Key: SPARK-20952
> URL: https://issues.apache.org/jira/browse/SPARK-20952
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Robert Kruszewski
>Priority: Minor
>
> TaskContext is a ThreadLocal as a result when you fork a thread inside your 
> executor task you lose the handle on the original context set by the 
> executor. We should change it to InheritableThreadLocal so we can access it 
> inside thread pools on executors. 
> See ParquetFileFormat#readFootersInParallel for example of code that uses 
> thread pools inside the tasks.






[jira] [Commented] (SPARK-20952) TaskContext should be an InheritableThreadLocal

2017-06-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043694#comment-16043694
 ] 

Shixiong Zhu commented on SPARK-20952:
--

[~robert3005] could you show me your code? Are you modifying 
"ParquetFileFormat#readFootersInParallel"?

> TaskContext should be an InheritableThreadLocal
> ---
>
> Key: SPARK-20952
> URL: https://issues.apache.org/jira/browse/SPARK-20952
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Robert Kruszewski
>Priority: Minor
>
> TaskContext is a ThreadLocal as a result when you fork a thread inside your 
> executor task you lose the handle on the original context set by the 
> executor. We should change it to InheritableThreadLocal so we can access it 
> inside thread pools on executors. 
> See ParquetFileFormat#readFootersInParallel for example of code that uses 
> thread pools inside the tasks.






[jira] [Commented] (SPARK-20952) TaskContext should be an InheritableThreadLocal

2017-06-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043655#comment-16043655
 ] 

Shixiong Zhu commented on SPARK-20952:
--

If TaskContext is not inheritable, we can always find a way to pass it to the 
code that needs to access it. But if it's inheritable, it's pretty hard to 
avoid TaskContext pollution (to avoid using a stale TaskContext, you would 
have to set it manually in every task running in a cached thread).

[~joshrosen] listed many tickets caused by localProperties being an 
InheritableThreadLocal: 
https://issues.apache.org/jira/browse/SPARK-14686?focusedCommentId=15244478=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244478
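
The stale-context hazard can be sketched outside Spark (hypothetical example; `ctx` and the pool are made up for illustration). An InheritableThreadLocal is copied when a thread is *created*, so a cached pool thread keeps whatever value the creating task had:

```scala
import java.util.concurrent.{Callable, Executors}

val ctx = new InheritableThreadLocal[String]
val pool = Executors.newFixedThreadPool(1)

ctx.set("task-A")
// The first submit creates the pool thread, which inherits "task-A".
pool.submit(new Runnable { def run(): Unit = () }).get()

ctx.set("task-B")
val seen = pool.submit(new Callable[String] {
  def call(): String = ctx.get // the reused thread never re-inherits
}).get()

println(seen) // "task-A", even though the submitter is now in "task-B"
pool.shutdown()
```

This is exactly the pollution described above: the pooled thread silently runs task B's work under task A's context.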

> TaskContext should be an InheritableThreadLocal
> ---
>
> Key: SPARK-20952
> URL: https://issues.apache.org/jira/browse/SPARK-20952
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Robert Kruszewski
>Priority: Minor
>
> TaskContext is a ThreadLocal as a result when you fork a thread inside your 
> executor task you lose the handle on the original context set by the 
> executor. We should change it to InheritableThreadLocal so we can access it 
> inside thread pools on executors. 
> See ParquetFileFormat#readFootersInParallel for example of code that uses 
> thread pools inside the tasks.






[jira] [Issue Comment Deleted] (SPARK-21022) RDD.foreach swallows exceptions

2017-06-08 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-21022:
-
Comment: was deleted

(was: ~~Good catch...~~)

> RDD.foreach swallows exceptions
> ---
>
> Key: SPARK-21022
> URL: https://issues.apache.org/jira/browse/SPARK-21022
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Colin Woodbury
>Assignee: Shixiong Zhu
>Priority: Minor
>
> A `RDD.foreach` or `RDD.foreachPartition` call will swallow Exceptions thrown 
> inside its closure, but not if the exception was thrown earlier in the call 
> chain. An example:
> {code:none}
>  package examples
>  import org.apache.spark._
>  object Shpark {
>def main(args: Array[String]) {
>  implicit val sc: SparkContext = new SparkContext(
>new SparkConf().setMaster("local[*]").setAppName("blahfoobar")
>  )
>  /* DOESN'T THROW
>  sc.parallelize(0 until 1000)
>.foreachPartition { _.map { i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  println(i)
>}}
>   */
>  /* DOESN'T THROW, nor does anything print.
>   * Commenting out the exception runs the prints.
>   * (i.e. `foreach` is sufficient to "run" an RDD)
>  sc.parallelize(0 until 10)
>.foreach({ i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  println(i)
>})
>   */
>  /* Throws! */
>  sc.parallelize(0 until 10)
>.map({ i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  i
>})
>.foreach(i => println(i))
>  println("JOB DONE!")
>  System.in.read
>  sc.stop()
>}
>  }
> {code}
> When exceptions are swallowed, the jobs don't seem to fail, and the driver 
> exits normally. When one _is_ thrown, as in the last example, the exception 
> successfully rises up to the driver and can be caught with try/catch.
> The expected behaviour is for exceptions in `foreach` to throw and crash the 
> driver, as they would with `map`.






[jira] [Commented] (SPARK-21022) RDD.foreach swallows exceptions

2017-06-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043266#comment-16043266
 ] 

Shixiong Zhu commented on SPARK-21022:
--

Wait. I also checked the `foreach` method. It does throw the exception. 
Perhaps you just missed the exception in the large volume of log output?

> RDD.foreach swallows exceptions
> ---
>
> Key: SPARK-21022
> URL: https://issues.apache.org/jira/browse/SPARK-21022
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Colin Woodbury
>Assignee: Shixiong Zhu
>Priority: Minor
>
> A `RDD.foreach` or `RDD.foreachPartition` call will swallow Exceptions thrown 
> inside its closure, but not if the exception was thrown earlier in the call 
> chain. An example:
> {code:none}
>  package examples
>  import org.apache.spark._
>  object Shpark {
>def main(args: Array[String]) {
>  implicit val sc: SparkContext = new SparkContext(
>new SparkConf().setMaster("local[*]").setAppName("blahfoobar")
>  )
>  /* DOESN'T THROW
>  sc.parallelize(0 until 1000)
>.foreachPartition { _.map { i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  println(i)
>}}
>   */
>  /* DOESN'T THROW, nor does anything print.
>   * Commenting out the exception runs the prints.
>   * (i.e. `foreach` is sufficient to "run" an RDD)
>  sc.parallelize(0 until 10)
>.foreach({ i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  println(i)
>})
>   */
>  /* Throws! */
>  sc.parallelize(0 until 10)
>.map({ i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  i
>})
>.foreach(i => println(i))
>  println("JOB DONE!")
>  System.in.read
>  sc.stop()
>}
>  }
> {code}
> When exceptions are swallowed, the jobs don't seem to fail, and the driver 
> exits normally. When one _is_ thrown, as in the last example, the exception 
> successfully rises up to the driver and can be caught with try/catch.
> The expected behaviour is for exceptions in `foreach` to throw and crash the 
> driver, as they would with `map`.






[jira] [Comment Edited] (SPARK-21022) RDD.foreach swallows exceptions

2017-06-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043223#comment-16043223
 ] 

Shixiong Zhu edited comment on SPARK-21022 at 6/8/17 7:05 PM:
--

~~Good catch...~~


was (Author: zsxwing):
Good catch...

> RDD.foreach swallows exceptions
> ---
>
> Key: SPARK-21022
> URL: https://issues.apache.org/jira/browse/SPARK-21022
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Colin Woodbury
>Assignee: Shixiong Zhu
>Priority: Minor
>
> A `RDD.foreach` or `RDD.foreachPartition` call will swallow Exceptions thrown 
> inside its closure, but not if the exception was thrown earlier in the call 
> chain. An example:
> {code:none}
>  package examples
>  import org.apache.spark._
>  object Shpark {
>def main(args: Array[String]) {
>  implicit val sc: SparkContext = new SparkContext(
>new SparkConf().setMaster("local[*]").setAppName("blahfoobar")
>  )
>  /* DOESN'T THROW
>  sc.parallelize(0 until 1000)
>.foreachPartition { _.map { i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  println(i)
>}}
>   */
>  /* DOESN'T THROW, nor does anything print.
>   * Commenting out the exception runs the prints.
>   * (i.e. `foreach` is sufficient to "run" an RDD)
>  sc.parallelize(0 until 10)
>.foreach({ i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  println(i)
>})
>   */
>  /* Throws! */
>  sc.parallelize(0 until 10)
>.map({ i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  i
>})
>.foreach(i => println(i))
>  println("JOB DONE!")
>  System.in.read
>  sc.stop()
>}
>  }
> {code}
> When exceptions are swallowed, the jobs don't seem to fail, and the driver 
> exits normally. When one _is_ thrown, as in the last example, the exception 
> successfully rises up to the driver and can be caught with try/catch.
> The expected behaviour is for exceptions in `foreach` to throw and crash the 
> driver, as they would with `map`.






[jira] [Commented] (SPARK-21022) RDD.foreach swallows exceptions

2017-06-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043247#comment-16043247
 ] 

Shixiong Zhu commented on SPARK-21022:
--

By the way, `foreachPartition` doesn't have this issue. It's just that 
"Iterator.map" is lazy and the Iterator is never consumed.
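
The laziness can be sketched in isolation (a minimal illustration, not the reporter's code):

```scala
import scala.util.Try

// `Iterator.map` only builds a lazy view; the throwing closure never runs
// until the iterator is consumed.
val mapped = Iterator(1, 2, 3).map { i =>
  throw new Exception("Testing exception handling")
  i
}
// Nothing has thrown yet. Consuming the iterator forces the closure:
val forced = Try(mapped.foreach(_ => ()))
println(forced.isFailure) // true: the exception surfaces only on consumption
```

So wrapping the body in `_.map { ... }` inside `foreachPartition` produces an iterator that is discarded unconsumed, and the exception never fires.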

> RDD.foreach swallows exceptions
> ---
>
> Key: SPARK-21022
> URL: https://issues.apache.org/jira/browse/SPARK-21022
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Colin Woodbury
>Assignee: Shixiong Zhu
>Priority: Minor
>
> A `RDD.foreach` or `RDD.foreachPartition` call will swallow Exceptions thrown 
> inside its closure, but not if the exception was thrown earlier in the call 
> chain. An example:
> {code:none}
>  package examples
>  import org.apache.spark._
>  object Shpark {
>def main(args: Array[String]) {
>  implicit val sc: SparkContext = new SparkContext(
>new SparkConf().setMaster("local[*]").setAppName("blahfoobar")
>  )
>  /* DOESN'T THROW
>  sc.parallelize(0 until 1000)
>.foreachPartition { _.map { i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  println(i)
>}}
>   */
>  /* DOESN'T THROW, nor does anything print.
>   * Commenting out the exception runs the prints.
>   * (i.e. `foreach` is sufficient to "run" an RDD)
>  sc.parallelize(0 until 10)
>.foreach({ i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  println(i)
>})
>   */
>  /* Throws! */
>  sc.parallelize(0 until 10)
>.map({ i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  i
>})
>.foreach(i => println(i))
>  println("JOB DONE!")
>  System.in.read
>  sc.stop()
>}
>  }
> {code}
> When exceptions are swallowed, the jobs don't seem to fail, and the driver 
> exits normally. When one _is_ thrown, as in the last example, the exception 
> successfully rises up to the driver and can be caught with try/catch.
> The expected behaviour is for exceptions in `foreach` to throw and crash the 
> driver, as they would with `map`.






[jira] [Assigned] (SPARK-21022) RDD.foreach swallows exceptions

2017-06-08 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu reassigned SPARK-21022:


Assignee: Shixiong Zhu

> RDD.foreach swallows exceptions
> ---
>
> Key: SPARK-21022
> URL: https://issues.apache.org/jira/browse/SPARK-21022
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Colin Woodbury
>Assignee: Shixiong Zhu
>Priority: Minor
>
> A `RDD.foreach` or `RDD.foreachPartition` call will swallow Exceptions thrown 
> inside its closure, but not if the exception was thrown earlier in the call 
> chain. An example:
> {code:none}
>  package examples
>  import org.apache.spark._
>  object Shpark {
>def main(args: Array[String]) {
>  implicit val sc: SparkContext = new SparkContext(
>new SparkConf().setMaster("local[*]").setAppName("blahfoobar")
>  )
>  /* DOESN'T THROW
>  sc.parallelize(0 until 1000)
>.foreachPartition { _.map { i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  println(i)
>}}
>   */
>  /* DOESN'T THROW, nor does anything print.
>   * Commenting out the exception runs the prints.
>   * (i.e. `foreach` is sufficient to "run" an RDD)
>  sc.parallelize(0 until 10)
>.foreach({ i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  println(i)
>})
>   */
>  /* Throws! */
>  sc.parallelize(0 until 10)
>.map({ i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  i
>})
>.foreach(i => println(i))
>  println("JOB DONE!")
>  System.in.read
>  sc.stop()
>}
>  }
> {code}
> When exceptions are swallowed, the jobs don't seem to fail, and the driver 
> exits normally. When one _is_ thrown, as in the last example, the exception 
> successfully rises up to the driver and can be caught with try/catch.
> The expected behaviour is for exceptions in `foreach` to throw and crash the 
> driver, as they would with `map`.






[jira] [Commented] (SPARK-21022) RDD.foreach swallows exceptions

2017-06-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043223#comment-16043223
 ] 

Shixiong Zhu commented on SPARK-21022:
--

Good catch...

> RDD.foreach swallows exceptions
> ---
>
> Key: SPARK-21022
> URL: https://issues.apache.org/jira/browse/SPARK-21022
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Colin Woodbury
>Priority: Minor
>
> An `RDD.foreach` or `RDD.foreachPartition` call will swallow exceptions thrown 
> inside its closure, but not if the exception was thrown earlier in the call 
> chain. An example:
> {code:none}
>  package examples
>  import org.apache.spark._
>  object Shpark {
>def main(args: Array[String]) {
>  implicit val sc: SparkContext = new SparkContext(
>new SparkConf().setMaster("local[*]").setAppName("blahfoobar")
>  )
>  /* DOESN'T THROW
>  sc.parallelize(0 until 1000)
>    .foreachPartition { _.map { i =>
>      println("BEFORE THROW")
>      throw new Exception("Testing exception handling")
>      println(i)
>    }}
>   */
>  /* DOESN'T THROW, nor does anything print.
>   * Commenting out the exception runs the prints.
>   * (i.e. `foreach` is sufficient to "run" an RDD)
>  sc.parallelize(0 until 10)
>    .foreach({ i =>
>      println("BEFORE THROW")
>      throw new Exception("Testing exception handling")
>      println(i)
>    })
>   */
>  /* Throws! */
>  sc.parallelize(0 until 10)
>.map({ i =>
>  println("BEFORE THROW")
>  throw new Exception("Testing exception handling")
>  i
>})
>.foreach(i => println(i))
>  println("JOB DONE!")
>  System.in.read
>  sc.stop()
>}
>  }
> {code}
> When exceptions are swallowed, the jobs don't seem to fail, and the driver 
> exits normally. When one _is_ thrown, as in the last example, the exception 
> successfully rises up to the driver and can be caught with try/catch.
> The expected behaviour is for exceptions in `foreach` to throw and crash the 
> driver, as they would with `map`.






[jira] [Commented] (SPARK-20971) Purge the metadata log for FileStreamSource

2017-06-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043150#comment-16043150
 ] 

Shixiong Zhu commented on SPARK-20971:
--

FileStreamSource saves the seen files on disk/HDFS; we can purge the file 
entries in a way similar to 
org.apache.spark.sql.execution.streaming.FileStreamSource.SeenFilesMap.
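The age-based purge idea mentioned in the comment can be sketched as follows. This is a minimal illustrative model, not Spark's actual SeenFilesMap (whose details differ):

```python
class SeenFilesMap:
    """Illustrative sketch of an age-based purge: entries older than
    max_age_ms relative to the newest timestamp are forgotten."""

    def __init__(self, max_age_ms: int):
        self.max_age_ms = max_age_ms
        self.entries = {}        # path -> timestamp in ms
        self.latest_ts = 0

    def add(self, path: str, ts_ms: int) -> None:
        self.entries[path] = ts_ms
        self.latest_ts = max(self.latest_ts, ts_ms)

    def purge(self) -> None:
        # Drop every entry whose timestamp falls behind the age window.
        threshold = self.latest_ts - self.max_age_ms
        self.entries = {p: t for p, t in self.entries.items() if t >= threshold}

m = SeenFilesMap(max_age_ms=100)
m.add("a.txt", 0)
m.add("b.txt", 150)
m.purge()    # threshold = 150 - 100 = 50, so "a.txt" is dropped
```

The same windowing logic could bound the metadata log: old entries are safe to drop once they can no longer be re-read.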

> Purge the metadata log for FileStreamSource
> ---
>
> Key: SPARK-20971
> URL: https://issues.apache.org/jira/browse/SPARK-20971
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.1.1
>Reporter: Shixiong Zhu
>
> Currently 
> [FileStreamSource.commit|https://github.com/apache/spark/blob/16186cdcbce1a2ec8f839c550e6b571bf5dc2692/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala#L258]
>  is empty. We can delete unused metadata logs in this method to reduce the 
> size of log files.






[jira] [Updated] (SPARK-21008) Streaming applications read stale credentials file when recovering from checkpoint.

2017-06-07 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-21008:
-
Component/s: (was: Structured Streaming)
 DStreams

> Streaming applications read stale credentials file when recovering from 
> checkpoint.
> ---
>
> Key: SPARK-21008
> URL: https://issues.apache.org/jira/browse/SPARK-21008
> Project: Spark
>  Issue Type: Bug
>  Components: DStreams
>Affects Versions: 1.6.3, 2.0.2, 2.1.1, 2.2.0
>Reporter: Xing Shi
>
> On a security(Kerberos) enabled cluster, streaming applications renew HDFS 
> delegation tokens periodically and save them in 
> {{/.sparkStaging//}} directory on HDFS.
> The path of the credentials file is written into the checkpoint and reloaded 
> under the *old applicationId* when the application restarts, even though the 
> application has been assigned a new id.
> This issue can be reproduced by restarting a checkpoint-enabled streaming 
> application on a kerberized cluster.
> The application appears to run well - but with thousands of 
> {{java.io.FileNotFoundException}} logged - and finally fails due to token 
> expiration.
> The log file is something like this:
> {code:title=the_first_run.log}
> 17/06/07 14:52:06 INFO executor.CoarseGrainedExecutorBackend: Will 
> periodically update credentials from: 
> hdfs://nameservice1/user/xxx/.sparkStaging/application_1496384469444_0035/credentials-19a7c11e-8c93-478c-ab0a-cdbfae5b2ae5
> 17/06/07 14:52:06 INFO security.CredentialUpdater: Scheduling credentials 
> refresh from HDFS in 92263 ms.
> {code}
> {code:title=after_restart.log}
> 17/06/07 15:11:24 INFO executor.CoarseGrainedExecutorBackend: Will 
> periodically update credentials from: 
> hdfs://nameservice1/user/xxx/.sparkStaging/application_1496384469444_0035/credentials-19a7c11e-8c93-478c-ab0a-cdbfae5b2ae5
> ...
> 17/06/07 15:12:24 WARN yarn.YarnSparkHadoopUtil: Error while attempting to 
> list files from application staging dir
> java.io.FileNotFoundException: File 
> hdfs://nameservice1/user/xxx/.sparkStaging/application_1496384469444_0035 
> does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:697)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:105)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:755)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:751)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:751)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1485)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1525)
> at 
> org.apache.spark.deploy.SparkHadoopUtil.listFilesSorted(SparkHadoopUtil.scala:257)
> at 
> org.apache.spark.deploy.yarn.security.CredentialUpdater.org$apache$spark$deploy$yarn$security$CredentialUpdater$$updateCredentialsIfRequired(CredentialUpdater.scala:72)
> at 
> org.apache.spark.deploy.yarn.security.CredentialUpdater$$anon$1$$anonfun$run$1.apply$mcV$sp(CredentialUpdater.scala:53)
> at 
> org.apache.spark.deploy.yarn.security.CredentialUpdater$$anon$1$$anonfun$run$1.apply(CredentialUpdater.scala:53)
> at 
> org.apache.spark.deploy.yarn.security.CredentialUpdater$$anon$1$$anonfun$run$1.apply(CredentialUpdater.scala:53)
> at 
> org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1962)
> at 
> org.apache.spark.deploy.yarn.security.CredentialUpdater$$anon$1.run(CredentialUpdater.scala:53)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Notice that the applicationId after restart is 
> application_1496384469444_{color:red}0036{color}, but the application still 
> attempts to read credentials from 0035's directory.
> I currently use Spark 1.6 in my cluster and have tested this issue with Spark 
> 1.6.3 and 2.1.1, but it should affect all versions from 1.5.x to the current 
> master (2.3.x).
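One direction for a fix, sketched in Python with a hypothetical helper (not Spark's API): rebuild the staging directory from the *current* applicationId at restart, instead of restoring an absolute path from the checkpoint.

```python
def staging_dir(user: str, application_id: str) -> str:
    # Hypothetical helper: derive the credentials directory from the id
    # of the currently running application, so a restarted app
    # (..._0036) never reads ..._0035's directory.
    return f"hdfs://nameservice1/user/{user}/.sparkStaging/{application_id}"

before = staging_dir("xxx", "application_1496384469444_0035")
after = staging_dir("xxx", "application_1496384469444_0036")
```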




[jira] [Assigned] (SPARK-20991) BROADCAST_TIMEOUT conf should be a timeoutConf

2017-06-05 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu reassigned SPARK-20991:


Assignee: Feng Liu

> BROADCAST_TIMEOUT conf should be a timeoutConf
> --
>
> Key: SPARK-20991
> URL: https://issues.apache.org/jira/browse/SPARK-20991
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.2.0, 2.2.1
>Reporter: Feng Liu
>Assignee: Feng Liu
> Fix For: 2.3.0
>
>







[jira] [Resolved] (SPARK-20991) BROADCAST_TIMEOUT conf should be a timeoutConf

2017-06-05 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20991.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

> BROADCAST_TIMEOUT conf should be a timeoutConf
> --
>
> Key: SPARK-20991
> URL: https://issues.apache.org/jira/browse/SPARK-20991
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.2.0, 2.2.1
>Reporter: Feng Liu
> Fix For: 2.3.0
>
>







[jira] [Created] (SPARK-20979) Add a rate source to generate values for tests and benchmark

2017-06-04 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created SPARK-20979:


 Summary: Add a rate source to generate values for tests and 
benchmark
 Key: SPARK-20979
 URL: https://issues.apache.org/jira/browse/SPARK-20979
 Project: Spark
  Issue Type: New Feature
  Components: Structured Streaming
Affects Versions: 2.2.0
Reporter: Shixiong Zhu
Assignee: Shixiong Zhu









[jira] [Created] (SPARK-20971) Purge the metadata log for FileStreamSource

2017-06-02 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created SPARK-20971:


 Summary: Purge the metadata log for FileStreamSource
 Key: SPARK-20971
 URL: https://issues.apache.org/jira/browse/SPARK-20971
 Project: Spark
  Issue Type: Improvement
  Components: Structured Streaming
Affects Versions: 2.1.1
Reporter: Shixiong Zhu


Currently 
[FileStreamSource.commit|https://github.com/apache/spark/blob/16186cdcbce1a2ec8f839c550e6b571bf5dc2692/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala#L258]
 is empty. We can delete unused metadata logs in this method to reduce the size 
of log files.






[jira] [Commented] (SPARK-20952) TaskContext should be an InheritableThreadLocal

2017-06-02 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035302#comment-16035302
 ] 

Shixiong Zhu commented on SPARK-20952:
--

What I'm concerned about is global thread pools, such as 
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/BroadcastExchangeExec.scala#L128

> TaskContext should be an InheritableThreadLocal
> ---
>
> Key: SPARK-20952
> URL: https://issues.apache.org/jira/browse/SPARK-20952
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Robert Kruszewski
>Priority: Minor
>
> TaskContext is a ThreadLocal as a result when you fork a thread inside your 
> executor task you lose the handle on the original context set by the 
> executor. We should change it to InheritableThreadLocal so we can access it 
> inside thread pools on executors. 
> See ParquetFileFormat#readFootersInParallel for example of code that uses 
> thread pools inside the tasks.






[jira] [Resolved] (SPARK-20955) A lot of duplicated "executorId" strings in "TaskUIData"s

2017-06-02 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20955.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

> A lot of duplicated "executorId" strings in "TaskUIData"s 
> --
>
> Key: SPARK-20955
> URL: https://issues.apache.org/jira/browse/SPARK-20955
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
> Fix For: 2.2.0
>
>







[jira] [Created] (SPARK-20957) Flaky Test: o.a.s.sql.streaming.StreamingQueryManagerSuite listing

2017-06-01 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created SPARK-20957:


 Summary: Flaky Test: 
o.a.s.sql.streaming.StreamingQueryManagerSuite listing
 Key: SPARK-20957
 URL: https://issues.apache.org/jira/browse/SPARK-20957
 Project: Spark
  Issue Type: Test
  Components: Structured Streaming
Affects Versions: 2.2.0
Reporter: Shixiong Zhu
Assignee: Shixiong Zhu


{code}
sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException: 
org.apache.spark.sql.execution.streaming.StreamingQueryWrapper@74d70cd4 did not 
equal null
at 
org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
at 
org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
at 
org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:466)
at 
org.apache.spark.sql.streaming.StreamingQueryManagerSuite$$anonfun$3$$anonfun$apply$mcV$sp$2.apply(StreamingQueryManagerSuite.scala:82)
at 
org.apache.spark.sql.streaming.StreamingQueryManagerSuite$$anonfun$3$$anonfun$apply$mcV$sp$2.apply(StreamingQueryManagerSuite.scala:61)
at 
org.apache.spark.sql.streaming.StreamingQueryManagerSuite$$anonfun$org$apache$spark$sql$streaming$StreamingQueryManagerSuite$$withQueriesOn$1.apply$mcV$sp(StreamingQueryManagerSuite.scala:268)
at 
org.apache.spark.sql.streaming.StreamingQueryManagerSuite$$anonfun$org$apache$spark$sql$streaming$StreamingQueryManagerSuite$$withQueriesOn$1.apply(StreamingQueryManagerSuite.scala:244)
at 
org.apache.spark.sql.streaming.StreamingQueryManagerSuite$$anonfun$org$apache$spark$sql$streaming$StreamingQueryManagerSuite$$withQueriesOn$1.apply(StreamingQueryManagerSuite.scala:244)
at 
org.scalatest.concurrent.Timeouts$class.timeoutAfter(Timeouts.scala:326)
at org.scalatest.concurrent.Timeouts$class.failAfter(Timeouts.scala:245)
at 
org.apache.spark.sql.streaming.StreamingQueryManagerSuite.failAfter(StreamingQueryManagerSuite.scala:39)
at 
org.apache.spark.sql.streaming.StreamingQueryManagerSuite.org$apache$spark$sql$streaming$StreamingQueryManagerSuite$$withQueriesOn(StreamingQueryManagerSuite.scala:244)
at 
org.apache.spark.sql.streaming.StreamingQueryManagerSuite$$anonfun$3.apply$mcV$sp(StreamingQueryManagerSuite.scala:61)
at 
org.apache.spark.sql.streaming.StreamingQueryManagerSuite$$anonfun$3.apply(StreamingQueryManagerSuite.scala:56)
at 
org.apache.spark.sql.streaming.StreamingQueryManagerSuite$$anonfun$3.apply(StreamingQueryManagerSuite.scala:56)
at org.apache.spark.sql.catalyst.util.package$.quietly(package.scala:42)
at 
org.apache.spark.sql.test.SQLTestUtils$$anonfun$testQuietly$1.apply$mcV$sp(SQLTestUtils.scala:310)
at 
org.apache.spark.sql.test.SQLTestUtils$$anonfun$testQuietly$1.apply(SQLTestUtils.scala:310)
at 
org.apache.spark.sql.test.SQLTestUtils$$anonfun$testQuietly$1.apply(SQLTestUtils.scala:310)
at 
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:68)
at 
org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
at 
org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at 
org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
at 
org.apache.spark.sql.streaming.StreamingQueryManagerSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(StreamingQueryManagerSuite.scala:39)
at 
org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:255)
at 
org.apache.spark.sql.streaming.StreamingQueryManagerSuite.org$scalatest$BeforeAndAfter$$super$runTest(StreamingQueryManagerSuite.scala:39)
at org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:200)
at 
org.apache.spark.sql.streaming.StreamingQueryManagerSuite.runTest(StreamingQueryManagerSuite.scala:39)
at 
org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at 
org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at 
org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
at 
org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
at scala.collection.immutable.List.foreach(List.scala:381)
at 

[jira] [Comment Edited] (SPARK-20952) TaskContext should be an InheritableThreadLocal

2017-06-01 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16033915#comment-16033915
 ] 

Shixiong Zhu edited comment on SPARK-20952 at 6/1/17 11:43 PM:
---

InheritableThreadLocal values are only inherited when a new thread is created. 
Here you are talking about thread pools: a reused thread may see a wrong (stale) 
TaskContext if it is an InheritableThreadLocal.


was (Author: zsxwing):
InheritableThreadLocal only works when creating a new thread. Here you were 
talking about thread pools.
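The pool-reuse pitfall described in the comment can be sketched in Python. The "context" and the pool initializer below are illustrative stand-ins for Spark's TaskContext and an InheritableThreadLocal snapshot, not real Spark APIs:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

current_context = "task-1"       # stands in for the driver-side TaskContext
local = threading.local()

def init_worker():
    # Runs once, when the pool thread is created -- the only moment an
    # InheritableThreadLocal-style value would be inherited.
    local.ctx = current_context

pool = ThreadPoolExecutor(max_workers=1, initializer=init_worker)
first = pool.submit(lambda: local.ctx).result()    # sees "task-1"

current_context = "task-2"                         # a new task begins
second = pool.submit(lambda: local.ctx).result()   # reused thread: stale!
pool.shutdown()
```

Because the thread is reused rather than recreated, the second task still observes the context captured for the first one.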

> TaskContext should be an InheritableThreadLocal
> ---
>
> Key: SPARK-20952
> URL: https://issues.apache.org/jira/browse/SPARK-20952
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Robert Kruszewski
>Priority: Minor
>
> TaskContext is a ThreadLocal as a result when you fork a thread inside your 
> executor task you lose the handle on the original context set by the 
> executor. We should change it to InheritableThreadLocal so we can access it 
> inside thread pools on executors. 
> See ParquetFileFormat#readFootersInParallel for example of code that uses 
> thread pools inside the tasks.






[jira] [Commented] (SPARK-20952) TaskContext should be an InheritableThreadLocal

2017-06-01 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16033915#comment-16033915
 ] 

Shixiong Zhu commented on SPARK-20952:
--

InheritableThreadLocal only works when creating a new thread. Here you were 
talking about thread pools.

> TaskContext should be an InheritableThreadLocal
> ---
>
> Key: SPARK-20952
> URL: https://issues.apache.org/jira/browse/SPARK-20952
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Robert Kruszewski
>Priority: Minor
>
> TaskContext is a ThreadLocal as a result when you fork a thread inside your 
> executor task you lose the handle on the original context set by the 
> executor. We should change it to InheritableThreadLocal so we can access it 
> inside thread pools on executors. 
> See ParquetFileFormat#readFootersInParallel for example of code that uses 
> thread pools inside the tasks.






[jira] [Assigned] (SPARK-20894) Error while checkpointing to HDFS

2017-06-01 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu reassigned SPARK-20894:


Assignee: Shixiong Zhu

> Error while checkpointing to HDFS
> -
>
> Key: SPARK-20894
> URL: https://issues.apache.org/jira/browse/SPARK-20894
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.1.1
> Environment: Ubuntu, Spark 2.1.1, hadoop 2.7
>Reporter: kant kodali
>Assignee: Shixiong Zhu
> Fix For: 2.3.0
>
> Attachments: driver_info_log, executor1_log, executor2_log
>
>
> Dataset df2 = df1.groupBy(functions.window(df1.col("Timestamp5"), "24 
> hours", "24 hours"), df1.col("AppName")).count();
> StreamingQuery query = df2.writeStream().foreach(new 
> KafkaSink()).option("checkpointLocation","/usr/local/hadoop/checkpoint").outputMode("update").start();
> query.awaitTermination();
> This, for some reason, fails with the error:
> ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
> java.lang.IllegalStateException: Error reading delta file 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta of HDFSStateStoreProvider[id = 
> (op=0, part=0), dir = /usr/local/hadoop/checkpoint/state/0/0]: 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta does not exist
> I did clear all the checkpoint data in /usr/local/hadoop/checkpoint/  and all 
> consumer offsets in Kafka from all brokers prior to running and yet this 
> error still persists. 






[jira] [Updated] (SPARK-20894) Error while checkpointing to HDFS

2017-06-01 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20894:
-
Fix Version/s: 2.3.0

> Error while checkpointing to HDFS
> -
>
> Key: SPARK-20894
> URL: https://issues.apache.org/jira/browse/SPARK-20894
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.1.1
> Environment: Ubuntu, Spark 2.1.1, hadoop 2.7
>Reporter: kant kodali
>Assignee: Shixiong Zhu
> Fix For: 2.3.0
>
> Attachments: driver_info_log, executor1_log, executor2_log
>
>
> Dataset df2 = df1.groupBy(functions.window(df1.col("Timestamp5"), "24 
> hours", "24 hours"), df1.col("AppName")).count();
> StreamingQuery query = df2.writeStream().foreach(new 
> KafkaSink()).option("checkpointLocation","/usr/local/hadoop/checkpoint").outputMode("update").start();
> query.awaitTermination();
> This, for some reason, fails with the error:
> ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
> java.lang.IllegalStateException: Error reading delta file 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta of HDFSStateStoreProvider[id = 
> (op=0, part=0), dir = /usr/local/hadoop/checkpoint/state/0/0]: 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta does not exist
> I did clear all the checkpoint data in /usr/local/hadoop/checkpoint/  and all 
> consumer offsets in Kafka from all brokers prior to running and yet this 
> error still persists. 






[jira] [Commented] (SPARK-20927) Add cache operator to Unsupported Operations in Structured Streaming

2017-06-01 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16033731#comment-16033731
 ] 

Shixiong Zhu commented on SPARK-20927:
--

Neither cache nor checkpoint makes sense for a streaming query. We should just 
change them to no-ops for streaming queries instead of throwing an exception.
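The proposed no-op behaviour can be sketched with a toy model (the `Dataset` class below is an illustrative stand-in, not Spark's):

```python
class Dataset:
    """Toy model of the proposal: cache() is a no-op for streaming
    Datasets instead of raising AnalysisException."""

    def __init__(self, is_streaming: bool):
        self.is_streaming = is_streaming
        self.cached = False

    def cache(self) -> "Dataset":
        if self.is_streaming:
            return self          # proposed: silently do nothing
        self.cached = True
        return self

batch = Dataset(is_streaming=False).cache()
stream = Dataset(is_streaming=True).cache()
```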

> Add cache operator to Unsupported Operations in Structured Streaming 
> -
>
> Key: SPARK-20927
> URL: https://issues.apache.org/jira/browse/SPARK-20927
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Jacek Laskowski
>Priority: Trivial
>  Labels: starter
>
> Just [found 
> out|https://stackoverflow.com/questions/42062092/why-does-using-cache-on-streaming-datasets-fail-with-analysisexception-queries]
>  that {{cache}} is not allowed on streaming datasets.
> {{cache}} on streaming datasets leads to the following exception:
> {code}
> scala> spark.readStream.text("files").cache
> org.apache.spark.sql.AnalysisException: Queries with streaming sources must 
> be executed with writeStream.start();;
> FileSource[files]
>   at 
> org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$.org$apache$spark$sql$catalyst$analysis$UnsupportedOperationChecker$$throwError(UnsupportedOperationChecker.scala:297)
>   at 
> org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$$anonfun$checkForBatch$1.apply(UnsupportedOperationChecker.scala:36)
>   at 
> org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$$anonfun$checkForBatch$1.apply(UnsupportedOperationChecker.scala:34)
>   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
>   at 
> org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$.checkForBatch(UnsupportedOperationChecker.scala:34)
>   at 
> org.apache.spark.sql.execution.QueryExecution.assertSupported(QueryExecution.scala:63)
>   at 
> org.apache.spark.sql.execution.QueryExecution.withCachedData$lzycompute(QueryExecution.scala:74)
>   at 
> org.apache.spark.sql.execution.QueryExecution.withCachedData(QueryExecution.scala:72)
>   at 
> org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:78)
>   at 
> org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:78)
>   at 
> org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:84)
>   at 
> org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:80)
>   at 
> org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:89)
>   at 
> org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:89)
>   at 
> org.apache.spark.sql.execution.CacheManager$$anonfun$cacheQuery$1.apply(CacheManager.scala:104)
>   at 
> org.apache.spark.sql.execution.CacheManager.writeLock(CacheManager.scala:68)
>   at 
> org.apache.spark.sql.execution.CacheManager.cacheQuery(CacheManager.scala:92)
>   at org.apache.spark.sql.Dataset.persist(Dataset.scala:2603)
>   at org.apache.spark.sql.Dataset.cache(Dataset.scala:2613)
>   ... 48 elided
> {code}
> It should be included in Structured Streaming's [Unsupported 
> Operations|http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#unsupported-operations].






[jira] [Updated] (SPARK-20927) Add cache operator to Unsupported Operations in Structured Streaming

2017-06-01 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20927:
-
Affects Version/s: (was: 2.3.0)
   2.2.0

> Add cache operator to Unsupported Operations in Structured Streaming 
> -
>
> Key: SPARK-20927
> URL: https://issues.apache.org/jira/browse/SPARK-20927
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Jacek Laskowski
>Priority: Trivial
>  Labels: starter
>
> Just [found 
> out|https://stackoverflow.com/questions/42062092/why-does-using-cache-on-streaming-datasets-fail-with-analysisexception-queries]
>  that {{cache}} is not allowed on streaming datasets.
> {{cache}} on streaming datasets leads to the following exception:
> {code}
> scala> spark.readStream.text("files").cache
> org.apache.spark.sql.AnalysisException: Queries with streaming sources must 
> be executed with writeStream.start();;
> FileSource[files]
>   at 
> org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$.org$apache$spark$sql$catalyst$analysis$UnsupportedOperationChecker$$throwError(UnsupportedOperationChecker.scala:297)
>   at 
> org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$$anonfun$checkForBatch$1.apply(UnsupportedOperationChecker.scala:36)
>   at 
> org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$$anonfun$checkForBatch$1.apply(UnsupportedOperationChecker.scala:34)
>   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
>   at 
> org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$.checkForBatch(UnsupportedOperationChecker.scala:34)
>   at 
> org.apache.spark.sql.execution.QueryExecution.assertSupported(QueryExecution.scala:63)
>   at 
> org.apache.spark.sql.execution.QueryExecution.withCachedData$lzycompute(QueryExecution.scala:74)
>   at 
> org.apache.spark.sql.execution.QueryExecution.withCachedData(QueryExecution.scala:72)
>   at 
> org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:78)
>   at 
> org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:78)
>   at 
> org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:84)
>   at 
> org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:80)
>   at 
> org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:89)
>   at 
> org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:89)
>   at 
> org.apache.spark.sql.execution.CacheManager$$anonfun$cacheQuery$1.apply(CacheManager.scala:104)
>   at 
> org.apache.spark.sql.execution.CacheManager.writeLock(CacheManager.scala:68)
>   at 
> org.apache.spark.sql.execution.CacheManager.cacheQuery(CacheManager.scala:92)
>   at org.apache.spark.sql.Dataset.persist(Dataset.scala:2603)
>   at org.apache.spark.sql.Dataset.cache(Dataset.scala:2613)
>   ... 48 elided
> {code}
> It should be included in Structured Streaming's [Unsupported 
> Operations|http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#unsupported-operations].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-20935) A daemon thread, "BatchedWriteAheadLog Writer", left behind after terminating StreamingContext.

2017-06-01 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20935:
-
Component/s: (was: Structured Streaming)
 DStreams

> A daemon thread, "BatchedWriteAheadLog Writer", left behind after terminating 
> StreamingContext.
> ---
>
> Key: SPARK-20935
> URL: https://issues.apache.org/jira/browse/SPARK-20935
> Project: Spark
>  Issue Type: Bug
>  Components: DStreams
>Affects Versions: 1.6.3, 2.1.1
>Reporter: Terence Yim
>
> With the batched write-ahead log on by default in the driver (SPARK-11731), if 
> there is no receiver-based {{InputDStream}}, the "BatchedWriteAheadLog Writer" 
> thread created by {{BatchedWriteAheadLog}} never gets shut down. 
> The root cause is 
> https://github.com/apache/spark/blob/master/streaming/src/main/scala/org/apache/spark/streaming/scheduler/ReceiverTracker.scala#L168:
> {{ReceivedBlockTracker.stop()}} (which in turn calls 
> {{BatchedWriteAheadLog.close()}}) is never called if there is no receiver-based 
> input.






[jira] [Commented] (SPARK-20955) A lot of duplicated "executorId" strings in "TaskUIData"s

2017-06-01 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16033664#comment-16033664
 ] 

Shixiong Zhu commented on SPARK-20955:
--

In [this 
line|https://github.com/apache/spark/blob/f7cf2096fdecb8edab61c8973c07c6fc877ee32d/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala#L128],
 the executorId string received from executors is used as-is and eventually ends 
up in TaskUIData. Since deserializing the executorId string always creates a new 
instance, we end up with a lot of duplicated string instances.
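A minimal sketch of the deduplication idea (the class and names are hypothetical, not Spark's actual code): canonicalize each deserialized executorId through a shared pool so that equal strings share one instance.

```java
import java.util.concurrent.ConcurrentHashMap;

public class StringInterner {
    // Shared pool mapping each string to its canonical instance.
    private static final ConcurrentHashMap<String, String> POOL = new ConcurrentHashMap<>();

    // Return the canonical instance for s, registering it on first sight.
    public static String intern(String s) {
        String prev = POOL.putIfAbsent(s, s);
        return prev != null ? prev : s;
    }

    public static void main(String[] args) {
        // Deserialization creates distinct instances of equal strings...
        String a = new String("executor-7");
        String b = new String("executor-7");
        // ...interning collapses them to one shared instance.
        System.out.println(intern(a) == intern(b));  // prints "true"
    }
}
```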

> A lot of duplicated "executorId" strings in "TaskUIData"s 
> --
>
> Key: SPARK-20955
> URL: https://issues.apache.org/jira/browse/SPARK-20955
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
>







[jira] [Created] (SPARK-20955) A lot of duplicated "executorId" strings in "TaskUIData"s

2017-06-01 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created SPARK-20955:


 Summary: A lot of duplicated "executorId" strings in "TaskUIData"s 
 Key: SPARK-20955
 URL: https://issues.apache.org/jira/browse/SPARK-20955
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 2.1.1
Reporter: Shixiong Zhu
Assignee: Shixiong Zhu









[jira] [Resolved] (SPARK-20940) AccumulatorV2 should not throw IllegalAccessError

2017-05-31 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20940.
--
   Resolution: Fixed
Fix Version/s: 2.2.0
   2.1.2
   2.0.3

> AccumulatorV2 should not throw IllegalAccessError
> -
>
> Key: SPARK-20940
> URL: https://issues.apache.org/jira/browse/SPARK-20940
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.0.2, 2.1.1, 2.2.0
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
> Fix For: 2.0.3, 2.1.2, 2.2.0
>
>
> IllegalAccessError is a LinkageError, which is a fatal error. We should use 
> IllegalStateException instead.






[jira] [Created] (SPARK-20940) AccumulatorV2 should not throw IllegalAccessError

2017-05-31 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created SPARK-20940:


 Summary: AccumulatorV2 should not throw IllegalAccessError
 Key: SPARK-20940
 URL: https://issues.apache.org/jira/browse/SPARK-20940
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 2.1.1, 2.0.2, 2.2.0
Reporter: Shixiong Zhu
Assignee: Shixiong Zhu


IllegalAccessError is a LinkageError, which is a fatal error. We should use 
IllegalStateException instead.
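A small illustration of the difference (the guard itself is hypothetical, not AccumulatorV2's actual code): an IllegalStateException can be handled by an ordinary catch (Exception ...) block, whereas IllegalAccessError extends Error and would sail past such a handler.

```java
public class ErrorVsException {
    // Hypothetical guard: signal API misuse with an unchecked exception
    // rather than a LinkageError.
    static void requireRegistered(boolean registered) {
        if (!registered) {
            throw new IllegalStateException("accumulator is not registered");
        }
    }

    public static void main(String[] args) {
        try {
            requireRegistered(false);
        } catch (Exception e) {
            // Reached for IllegalStateException; an IllegalAccessError
            // (an Error, not an Exception) would not be caught here.
            System.out.println("recovered: " + e.getMessage());
        }
    }
}
```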






[jira] [Resolved] (SPARK-20883) Improve StateStore APIs for efficiency

2017-05-30 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20883.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

> Improve StateStore APIs for efficiency
> --
>
> Key: SPARK-20883
> URL: https://issues.apache.org/jira/browse/SPARK-20883
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Tathagata Das
>Assignee: Tathagata Das
> Fix For: 2.3.0
>
>
> The current state store API has several problems that create too many 
> transient objects, causing memory pressure:
> - StateStore.get() returns an Option, which forces creation of a Some/None 
> object for every get
> - StateStore.iterator() returns tuples, which forces creation of a new tuple 
> for each record returned
> - StateStore.updates() requires the implementation to keep track of updates, 
> even though this is used minimally (only by Append mode in streaming 
> aggregations); it can be removed entirely.






[jira] [Commented] (SPARK-20894) Error while checkpointing to HDFS (similar to JIRA SPARK-19268)

2017-05-30 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030160#comment-16030160
 ] 

Shixiong Zhu commented on SPARK-20894:
--

The root issue here is that the driver uses the local file system for checkpoints 
while executors use HDFS.

I reopened this ticket because I think we can improve the error message here.
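For reference, a sketch of the user-side fix under that diagnosis (the NameNode URI is hypothetical): give checkpointLocation a fully qualified URI on a filesystem visible to both the driver and the executors, instead of a bare local path.

```java
// Fragment only; df2 and KafkaSink are from the snippet quoted below.
StreamingQuery query = df2.writeStream()
    .foreach(new KafkaSink())
    // Fully qualified HDFS URI shared by driver and executors
    .option("checkpointLocation", "hdfs://namenode:8020/user/spark/checkpoint")
    .outputMode("update")
    .start();
```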

> Error while checkpointing to HDFS (similar to JIRA SPARK-19268)
> ---
>
> Key: SPARK-20894
> URL: https://issues.apache.org/jira/browse/SPARK-20894
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.1.1
> Environment: Ubuntu, Spark 2.1.1, hadoop 2.7
>Reporter: kant kodali
> Attachments: driver_info_log, executor1_log, executor2_log
>
>
> {code}
> Dataset df2 = df1.groupBy(functions.window(df1.col("Timestamp5"), "24 hours", "24 hours"), df1.col("AppName")).count();
> StreamingQuery query = df2.writeStream().foreach(new KafkaSink()).option("checkpointLocation","/usr/local/hadoop/checkpoint").outputMode("update").start();
> query.awaitTermination();
> {code}
> This for some reason fails with the error:
> {code}
> ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
> java.lang.IllegalStateException: Error reading delta file 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta of HDFSStateStoreProvider[id = 
> (op=0, part=0), dir = /usr/local/hadoop/checkpoint/state/0/0]: 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta does not exist
> {code}
> I cleared all the checkpoint data in /usr/local/hadoop/checkpoint/ and all 
> consumer offsets in Kafka from all brokers prior to running, and yet this 
> error still persists.






[jira] [Updated] (SPARK-20894) Error while checkpointing to HDFS

2017-05-30 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20894:
-
Summary: Error while checkpointing to HDFS  (was: Error while checkpointing 
to HDFS (similar to JIRA SPARK-19268))

> Error while checkpointing to HDFS
> -
>
> Key: SPARK-20894
> URL: https://issues.apache.org/jira/browse/SPARK-20894
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.1.1
> Environment: Ubuntu, Spark 2.1.1, hadoop 2.7
>Reporter: kant kodali
> Attachments: driver_info_log, executor1_log, executor2_log
>
>
> {code}
> Dataset df2 = df1.groupBy(functions.window(df1.col("Timestamp5"), "24 hours", "24 hours"), df1.col("AppName")).count();
> StreamingQuery query = df2.writeStream().foreach(new KafkaSink()).option("checkpointLocation","/usr/local/hadoop/checkpoint").outputMode("update").start();
> query.awaitTermination();
> {code}
> This for some reason fails with the error:
> {code}
> ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
> java.lang.IllegalStateException: Error reading delta file 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta of HDFSStateStoreProvider[id = 
> (op=0, part=0), dir = /usr/local/hadoop/checkpoint/state/0/0]: 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta does not exist
> {code}
> I cleared all the checkpoint data in /usr/local/hadoop/checkpoint/ and all 
> consumer offsets in Kafka from all brokers prior to running, and yet this 
> error still persists.






[jira] [Updated] (SPARK-20894) Error while checkpointing to HDFS (similar to JIRA SPARK-19268)

2017-05-30 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20894:
-
Issue Type: Improvement  (was: Bug)

> Error while checkpointing to HDFS (similar to JIRA SPARK-19268)
> ---
>
> Key: SPARK-20894
> URL: https://issues.apache.org/jira/browse/SPARK-20894
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.1.1
> Environment: Ubuntu, Spark 2.1.1, hadoop 2.7
>Reporter: kant kodali
> Attachments: driver_info_log, executor1_log, executor2_log
>
>
> {code}
> Dataset df2 = df1.groupBy(functions.window(df1.col("Timestamp5"), "24 hours", "24 hours"), df1.col("AppName")).count();
> StreamingQuery query = df2.writeStream().foreach(new KafkaSink()).option("checkpointLocation","/usr/local/hadoop/checkpoint").outputMode("update").start();
> query.awaitTermination();
> {code}
> This for some reason fails with the error:
> {code}
> ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
> java.lang.IllegalStateException: Error reading delta file 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta of HDFSStateStoreProvider[id = 
> (op=0, part=0), dir = /usr/local/hadoop/checkpoint/state/0/0]: 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta does not exist
> {code}
> I cleared all the checkpoint data in /usr/local/hadoop/checkpoint/ and all 
> consumer offsets in Kafka from all brokers prior to running, and yet this 
> error still persists.






[jira] [Reopened] (SPARK-20894) Error while checkpointing to HDFS (similar to JIRA SPARK-19268)

2017-05-30 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu reopened SPARK-20894:
--

> Error while checkpointing to HDFS (similar to JIRA SPARK-19268)
> ---
>
> Key: SPARK-20894
> URL: https://issues.apache.org/jira/browse/SPARK-20894
> Project: Spark
>  Issue Type: Bug
>  Components: Structured Streaming
>Affects Versions: 2.1.1
> Environment: Ubuntu, Spark 2.1.1, hadoop 2.7
>Reporter: kant kodali
> Attachments: driver_info_log, executor1_log, executor2_log
>
>
> {code}
> Dataset df2 = df1.groupBy(functions.window(df1.col("Timestamp5"), "24 hours", "24 hours"), df1.col("AppName")).count();
> StreamingQuery query = df2.writeStream().foreach(new KafkaSink()).option("checkpointLocation","/usr/local/hadoop/checkpoint").outputMode("update").start();
> query.awaitTermination();
> {code}
> This for some reason fails with the error:
> {code}
> ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
> java.lang.IllegalStateException: Error reading delta file 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta of HDFSStateStoreProvider[id = 
> (op=0, part=0), dir = /usr/local/hadoop/checkpoint/state/0/0]: 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta does not exist
> {code}
> I cleared all the checkpoint data in /usr/local/hadoop/checkpoint/ and all 
> consumer offsets in Kafka from all brokers prior to running, and yet this 
> error still persists.






[jira] [Updated] (SPARK-20597) KafkaSourceProvider falls back on path as synonym for topic

2017-05-30 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20597:
-
Labels: starter  (was: )

> KafkaSourceProvider falls back on path as synonym for topic
> ---
>
> Key: SPARK-20597
> URL: https://issues.apache.org/jira/browse/SPARK-20597
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Jacek Laskowski
>Priority: Trivial
>  Labels: starter
>
> # {{KafkaSourceProvider}} supports a {{topic}} option that sets the Kafka topic 
> to save a DataFrame's rows to
> # {{KafkaSourceProvider}} can use a {{topic}} column to assign rows to Kafka 
> topics for writing
> An interesting addition would be to support {{start(path: String)}} as the 
> lowest-precedence option, in which case {{path}} would designate the default 
> topic when no other options are used.
> {code}
> df.writeStream.format("kafka").start("topic")
> {code}
> See 
> http://apache-spark-developers-list.1001551.n3.nabble.com/KafkaSourceProvider-Why-topic-option-and-column-without-reverting-to-path-as-the-least-priority-td21458.html
>  for discussion






[jira] [Updated] (SPARK-20599) ConsoleSink should work with write (batch)

2017-05-30 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20599:
-
Labels: starter  (was: )

> ConsoleSink should work with write (batch)
> --
>
> Key: SPARK-20599
> URL: https://issues.apache.org/jira/browse/SPARK-20599
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL, Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Jacek Laskowski
>Priority: Minor
>  Labels: starter
>
> I think the following should just work.
> {code}
> spark.
>   read.  // <-- it's a batch query not streaming query if that matters
>   format("kafka").
>   option("subscribe", "topic1").
>   option("kafka.bootstrap.servers", "localhost:9092").
>   load.
>   write.
>   format("console").  // <-- that's not supported currently
>   save
> {code}
> The above combination of {{kafka}} source and {{console}} sink leads to the 
> following exception:
> {code}
> java.lang.RuntimeException: 
> org.apache.spark.sql.execution.streaming.ConsoleSinkProvider does not allow 
> create table as select.
>   at scala.sys.package$.error(package.scala:27)
>   at 
> org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:479)
>   at 
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
>   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
>   at 
> org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:93)
>   at 
> org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:93)
>   at 
> org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:610)
>   at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:233)
>   ... 48 elided
> {code}






[jira] [Updated] (SPARK-20919) Simplification of CachedKafkaConsumer using a Guava cache.

2017-05-30 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20919:
-
Affects Version/s: (was: 2.3.0)
   2.2.0
 Target Version/s: 2.3.0

> Simplification of CachedKafkaConsumer using a Guava cache.
> 
>
> Key: SPARK-20919
> URL: https://issues.apache.org/jira/browse/SPARK-20919
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Prashant Sharma
>
> Along the lines of SPARK-19968, a Guava cache can be used to simplify the code 
> in CachedKafkaConsumer as well, with the additional feature of automatically 
> cleaning up a consumer that has been unused for a configurable time.
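A stdlib-only sketch of the caching idea (the proposal itself would use Guava's CacheBuilder; this class and the Consumer handle are hypothetical): an access-ordered LRU map that closes and evicts the least recently used consumer once capacity is exceeded. Guava adds time-based expiry (expireAfterAccess) on top of this.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for CachedKafkaConsumer.
class Consumer {
    final String topicPartition;
    Consumer(String topicPartition) { this.topicPartition = topicPartition; }
    void close() { /* release the underlying Kafka connection */ }
}

// Access-ordered LinkedHashMap: removeEldestEntry is consulted after each
// put, so the least recently used consumer is closed and dropped when the
// cache grows past its capacity.
class ConsumerCache extends LinkedHashMap<String, Consumer> {
    private final int capacity;

    ConsumerCache(int capacity) {
        super(16, 0.75f, true);  // true = iterate in access order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Consumer> eldest) {
        if (size() > capacity) {
            eldest.getValue().close();
            return true;
        }
        return false;
    }
}
```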






[jira] [Updated] (SPARK-20919) Simplification of CachedKafkaConsumer using a Guava cache.

2017-05-30 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20919:
-
Issue Type: Improvement  (was: Bug)

> Simplification of CachedKafkaConsumer using a Guava cache.
> 
>
> Key: SPARK-20919
> URL: https://issues.apache.org/jira/browse/SPARK-20919
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Prashant Sharma
>
> Along the lines of SPARK-19968, a Guava cache can be used to simplify the code 
> in CachedKafkaConsumer as well, with the additional feature of automatically 
> cleaning up a consumer that has been unused for a configurable time.






[jira] [Resolved] (SPARK-19968) Use a cached instance of KafkaProducer for writing to kafka via KafkaSink.

2017-05-29 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-19968.
--
  Resolution: Fixed
Assignee: Prashant Sharma
   Fix Version/s: 2.2.0
Target Version/s:   (was: 2.3.0)

> Use a cached instance of KafkaProducer for writing to kafka via KafkaSink.
> --
>
> Key: SPARK-19968
> URL: https://issues.apache.org/jira/browse/SPARK-19968
> Project: Spark
>  Issue Type: Bug
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Prashant Sharma
>Assignee: Prashant Sharma
>  Labels: kafka
> Fix For: 2.2.0
>
>
> KafkaProducer is thread-safe and an instance can be reused for writing every 
> batch out. According to the Kafka docs, this sort of usage is encouraged. It 
> also has an impact on performance: on average, an addBatch operation takes 
> 25 ms with this patch and 250+ ms without it.
> Benchmark results are posted on the GitHub PR.






[jira] [Resolved] (SPARK-20907) Use testQuietly for test suites that generate long log output

2017-05-29 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20907.
--
   Resolution: Fixed
 Assignee: Kazuaki Ishizaki
Fix Version/s: 2.2.0

> Use testQuietly for test suites that generate long log output
> -
>
> Key: SPARK-20907
> URL: https://issues.apache.org/jira/browse/SPARK-20907
> Project: Spark
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 2.2.0
>Reporter: Kazuaki Ishizaki
>Assignee: Kazuaki Ishizaki
> Fix For: 2.2.0
>
>
> Use `testQuietly` instead of `test` for test cases that generate long output






[jira] [Updated] (SPARK-20907) Use testQuietly for test suites that generate long log output

2017-05-29 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20907:
-
Affects Version/s: (was: 2.3.0)

> Use testQuietly for test suites that generate long log output
> -
>
> Key: SPARK-20907
> URL: https://issues.apache.org/jira/browse/SPARK-20907
> Project: Spark
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 2.2.0
>Reporter: Kazuaki Ishizaki
>Assignee: Kazuaki Ishizaki
> Fix For: 2.2.0
>
>
> Use `testQuietly` instead of `test` for test cases that generate long output






[jira] [Updated] (SPARK-20907) Use testQuietly for test suites that generate long log output

2017-05-29 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20907:
-
Priority: Minor  (was: Major)

> Use testQuietly for test suites that generate long log output
> -
>
> Key: SPARK-20907
> URL: https://issues.apache.org/jira/browse/SPARK-20907
> Project: Spark
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 2.2.0
>Reporter: Kazuaki Ishizaki
>Assignee: Kazuaki Ishizaki
>Priority: Minor
> Fix For: 2.2.0
>
>
> Use `testQuietly` instead of `test` for test cases that generate long output






[jira] [Updated] (SPARK-19372) Code generation for Filter predicate including many OR conditions exceeds JVM method size limit

2017-05-26 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-19372:
-
Fix Version/s: 2.2.0

> Code generation for Filter predicate including many OR conditions exceeds JVM 
> method size limit 
> 
>
> Key: SPARK-19372
> URL: https://issues.apache.org/jira/browse/SPARK-19372
> Project: Spark
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Jay Pranavamurthi
>Assignee: Kazuaki Ishizaki
> Fix For: 2.2.0, 2.3.0
>
> Attachments: wide400cols.csv
>
>
> For the attached csv file, the code below causes the exception 
> "org.codehaus.janino.JaninoRuntimeException: Code of method 
> "(Lorg/apache/spark/sql/catalyst/InternalRow;)Z" of class 
> "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificPredicate" 
> grows beyond 64 KB"
> Code:
> {code:borderStyle=solid}
>   val conf = new SparkConf().setMaster("local[1]")
>   val sqlContext = 
> SparkSession.builder().config(conf).getOrCreate().sqlContext
>   val dataframe =
> sqlContext
>   .read
>   .format("com.databricks.spark.csv")
>   .load("wide400cols.csv")
>   val filter = (0 to 399)
> .foldLeft(lit(false))((e, index) => 
> e.or(dataframe.col(dataframe.columns(index)) =!= s"column${index+1}"))
>   val filtered = dataframe.filter(filter)
>   filtered.show(100)
> {code}






[jira] [Resolved] (SPARK-20843) Cannot gracefully kill drivers which take longer than 10 seconds to die

2017-05-26 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20843.
--
   Resolution: Fixed
 Assignee: Shixiong Zhu
Fix Version/s: 2.2.0
   2.1.2

> Cannot gracefully kill drivers which take longer than 10 seconds to die
> ---
>
> Key: SPARK-20843
> URL: https://issues.apache.org/jira/browse/SPARK-20843
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Michael Allman
>Assignee: Shixiong Zhu
>  Labels: regression
> Fix For: 2.1.2, 2.2.0
>
>
> Commit 
> https://github.com/apache/spark/commit/1c9a386c6b6812a3931f3fb0004249894a01f657
>  changed the behavior of driver process termination. Whereas before 
> `Process.destroyForcibly` was never called, now it is called (on JVMs 
> supporting that API) if the driver process does not die within 10 seconds.
> This prevents apps that take longer than 10 seconds to die from shutting down 
> gracefully. For example, streaming apps with a large batch duration (say, 30+ 
> seconds) can take minutes to shut down.






[jira] [Commented] (SPARK-20894) Error while checkpointing to HDFS (similar to JIRA SPARK-19268)

2017-05-26 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026846#comment-16026846
 ] 

Shixiong Zhu commented on SPARK-20894:
--

Thanks for reporting it. I'm wondering if you can provide the driver log and the 
logs of all executors. "HDFSBackedStateStoreProvider" writes a log message when 
it writes a file, and I want to know whether it actually wrote 
"/usr/local/hadoop/checkpoint/state/0/0/1.delta". The write operation may have 
happened on another executor, which is why I want to see all the logs.

> Error while checkpointing to HDFS (similar to JIRA SPARK-19268)
> ---
>
> Key: SPARK-20894
> URL: https://issues.apache.org/jira/browse/SPARK-20894
> Project: Spark
>  Issue Type: Bug
>  Components: Structured Streaming
>Affects Versions: 2.1.1
> Environment: Ubuntu, Spark 2.1.1, hadoop 2.7
>Reporter: kant kodali
>
> Dataset df2 = df1.groupBy(functions.window(df1.col("Timestamp5"), "24 
> hours", "24 hours"), df1.col("AppName")).count();
> StreamingQuery query = df2.writeStream().foreach(new 
> KafkaSink()).option("checkpointLocation","/usr/local/hadoop/checkpoint").outputMode("update").start();
> query.awaitTermination();
> This for some reason fails with the Error 
> ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
> java.lang.IllegalStateException: Error reading delta file 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta of HDFSStateStoreProvider[id = 
> (op=0, part=0), dir = /usr/local/hadoop/checkpoint/state/0/0]: 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta does not exist
> I did clear all the checkpoint data in /usr/local/hadoop/checkpoint/  and all 
> consumer offsets in Kafka from all brokers prior to running and yet this 
> error still persists. 






[jira] [Updated] (SPARK-20894) Error while checkpointing to HDFS (similar to JIRA SPARK-19268)

2017-05-26 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20894:
-
Docs Text:   (was: Using Spark's default log4j profile: 
org/apache/spark/log4j-defaults.properties
17/05/25 23:01:05 INFO CoarseGrainedExecutorBackend: Started daemon with 
process name: 1453@ip-172-31-25-189
17/05/25 23:01:05 INFO SignalUtils: Registered signal handler for TERM
17/05/25 23:01:05 INFO SignalUtils: Registered signal handler for HUP
17/05/25 23:01:05 INFO SignalUtils: Registered signal handler for INT
17/05/25 23:01:06 WARN NativeCodeLoader: Unable to load native-hadoop library 
for your platform... using builtin-java classes where applicable
17/05/25 23:01:06 INFO SecurityManager: Changing view acls to: ubuntu
17/05/25 23:01:06 INFO SecurityManager: Changing modify acls to: ubuntu
17/05/25 23:01:06 INFO SecurityManager: Changing view acls groups to: 
17/05/25 23:01:06 INFO SecurityManager: Changing modify acls groups to: 
17/05/25 23:01:06 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users  with view permissions: Set(ubuntu); groups 
with view permissions: Set(); users  with modify permissions: Set(ubuntu); 
groups with modify permissions: Set()
17/05/25 23:01:06 INFO TransportClientFactory: Successfully created connection 
to /192.31.29.39:52000 after 55 ms (0 ms spent in bootstraps)
17/05/25 23:01:06 INFO SecurityManager: Changing view acls to: ubuntu
17/05/25 23:01:06 INFO SecurityManager: Changing modify acls to: ubuntu
17/05/25 23:01:06 INFO SecurityManager: Changing view acls groups to: 
17/05/25 23:01:06 INFO SecurityManager: Changing modify acls groups to: 
17/05/25 23:01:06 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users  with view permissions: Set(ubuntu); groups 
with view permissions: Set(); users  with modify permissions: Set(ubuntu); 
groups with modify permissions: Set()
17/05/25 23:01:06 INFO TransportClientFactory: Successfully created connection 
to /192.31.29.39:52000 after 1 ms (0 ms spent in bootstraps)
17/05/25 23:01:06 INFO DiskBlockManager: Created local directory at 
/usr/local/spark/temp/spark-14760b98-21b0-458f-9646-5321c472e66d/executor-d13962fb-be68-4243-832b-78e68f65e784/blockmgr-bc0640eb-2d3d-4933-b83c-1b0222740de5
17/05/25 23:01:06 INFO MemoryStore: MemoryStore started with capacity 912.3 MB
17/05/25 23:01:06 INFO CoarseGrainedExecutorBackend: Connecting to driver: 
spark://CoarseGrainedScheduler@192.31.29.39:52000
17/05/25 23:01:06 INFO WorkerWatcher: Connecting to worker 
spark://Worker@192.31.25.189:58000
17/05/25 23:01:06 INFO WorkerWatcher: Successfully connected to 
spark://Worker@192.31.25.189:58000
17/05/25 23:01:06 INFO TransportClientFactory: Successfully created connection 
to /192.31.25.189:58000 after 4 ms (0 ms spent in bootstraps)
17/05/25 23:01:06 INFO CoarseGrainedExecutorBackend: Successfully registered 
with driver
17/05/25 23:01:06 INFO Executor: Starting executor ID 0 on host 192.31.25.189
17/05/25 23:01:06 INFO Utils: Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 52100.
17/05/25 23:01:06 INFO NettyBlockTransferService: Server created on 
192.31.25.189:52100
17/05/25 23:01:06 INFO BlockManager: Using 
org.apache.spark.storage.RandomBlockReplicationPolicy for block replication 
policy
17/05/25 23:01:06 INFO BlockManagerMaster: Registering BlockManager 
BlockManagerId(0, 192.31.25.189, 52100, None)
17/05/25 23:01:06 INFO BlockManagerMaster: Registered BlockManager 
BlockManagerId(0, 192.31.25.189, 52100, None)
17/05/25 23:01:06 INFO BlockManager: Initialized BlockManager: 
BlockManagerId(0, 192.31.25.189, 52100, None)
17/05/25 23:01:10 INFO CoarseGrainedExecutorBackend: Got assigned task 0
17/05/25 23:01:10 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
17/05/25 23:01:10 INFO Executor: Fetching 
spark://192.31.29.39:52000/jars/dataprocessing-client-stream.jar with timestamp 
1495753264902
17/05/25 23:01:10 INFO TransportClientFactory: Successfully created connection 
to /192.31.29.39:52000 after 2 ms (0 ms spent in bootstraps)
17/05/25 23:01:10 INFO Utils: Fetching 
spark://192.31.29.39:52000/jars/dataprocessing-client-stream.jar to 
/usr/local/spark/temp/spark-14760b98-21b0-458f-9646-5321c472e66d/executor-d13962fb-be68-4243-832b-78e68f65e784/spark-4d4ed6ae-202c-4ef0-88ea-ab9f1bdf424e/fetchFileTemp2546265996353018358.tmp
17/05/25 23:01:10 INFO Utils: Copying 
/usr/local/spark/temp/spark-14760b98-21b0-458f-9646-5321c472e66d/executor-d13962fb-be68-4243-832b-78e68f65e784/spark-4d4ed6ae-202c-4ef0-88ea-ab9f1bdf424e/2978712601495753264902_cache
 to 
/usr/local/spark/work/app-20170525230105-0019/0/./dataprocessing-client-stream.jar
17/05/25 23:01:10 INFO Executor: Adding 
file:/usr/local/spark/work/app-20170525230105-0019/0/./dataprocessing-client-stream.jar
 to class loader
17/05/25 23:01:10 INFO TorrentBroadcast: 

[jira] [Commented] (SPARK-20894) Error while checkpointing to HDFS (similar to JIRA SPARK-19268)

2017-05-26 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026848#comment-16026848
 ] 

Shixiong Zhu commented on SPARK-20894:
--

By the way, you can click "More" -> "Attach Files" to upload logs instead.

> Error while checkpointing to HDFS (similar to JIRA SPARK-19268)
> ---
>
> Key: SPARK-20894
> URL: https://issues.apache.org/jira/browse/SPARK-20894
> Project: Spark
>  Issue Type: Bug
>  Components: Structured Streaming
>Affects Versions: 2.1.1
> Environment: Ubuntu, Spark 2.1.1, hadoop 2.7
>Reporter: kant kodali
>
> {code}
> Dataset df2 = df1.groupBy(functions.window(df1.col("Timestamp5"), "24 hours", "24 hours"), df1.col("AppName")).count();
> StreamingQuery query = df2.writeStream().foreach(new KafkaSink()).option("checkpointLocation", "/usr/local/hadoop/checkpoint").outputMode("update").start();
> query.awaitTermination();
> {code}
> This for some reason fails with the error:
> ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
> java.lang.IllegalStateException: Error reading delta file 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta of HDFSStateStoreProvider[id = 
> (op=0, part=0), dir = /usr/local/hadoop/checkpoint/state/0/0]: 
> /usr/local/hadoop/checkpoint/state/0/0/1.delta does not exist
> I cleared all the checkpoint data in /usr/local/hadoop/checkpoint/ and all 
> consumer offsets in Kafka from all brokers prior to running, and yet this 
> error still persists. 
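The failure mode can be sketched with a toy versioned store ({{ToyStateStore}} is a hypothetical name, not Spark's HDFSStateStoreProvider): state for version N is rebuilt by replaying delta files 1..N, so a missing delta file makes recovery fail with a "does not exist" error much like the one reported above.

```python
import os
import tempfile

class ToyStateStore:
    """Minimal sketch (not Spark's implementation): state for version N is
    rebuilt by replaying 1.delta .. N.delta, so any missing delta breaks
    recovery."""

    def __init__(self, checkpoint_dir):
        self.dir = checkpoint_dir

    def commit(self, version, updates):
        # Each commit writes one delta file, e.g. "1.delta".
        with open(os.path.join(self.dir, f"{version}.delta"), "w") as f:
            for key, value in updates.items():
                f.write(f"{key}={value}\n")

    def restore(self, version):
        # Replay every delta up to `version`; fail fast like the reported error.
        state = {}
        for v in range(1, version + 1):
            path = os.path.join(self.dir, f"{v}.delta")
            if not os.path.exists(path):
                raise RuntimeError(f"Error reading delta file {path}: does not exist")
            with open(path) as f:
                for line in f:
                    key, value = line.strip().split("=", 1)
                    state[key] = value
        return state

checkpoint = tempfile.mkdtemp()
store = ToyStateStore(checkpoint)
store.commit(1, {"app": "3"})
store.commit(2, {"app": "7"})
assert store.restore(2) == {"app": "7"}

# Deleting one delta (as partially clearing a checkpoint would) breaks recovery:
os.remove(os.path.join(checkpoint, "1.delta"))
try:
    store.restore(2)
    recovered = True
except RuntimeError:
    recovered = False
assert not recovered
```

This is only an illustration of why stale or partially deleted checkpoint directories produce this error class, not a reproduction of the Spark bug itself.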



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-20843) Cannot gracefully kill drivers which take longer than 10 seconds to die

2017-05-26 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026809#comment-16026809
 ] 

Shixiong Zhu commented on SPARK-20843:
--

[~michael] Will a per-cluster config be enough for your usage? A per-app config 
requires more changes and it's too risky for 2.2 now.

> Cannot gracefully kill drivers which take longer than 10 seconds to die
> ---
>
> Key: SPARK-20843
> URL: https://issues.apache.org/jira/browse/SPARK-20843
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Michael Allman
>  Labels: regression
>
> Commit 
> https://github.com/apache/spark/commit/1c9a386c6b6812a3931f3fb0004249894a01f657
>  changed the behavior of driver process termination. Whereas before 
> `Process.destroyForcibly` was never called, now it is called (on Java VM's 
> supporting that API) if the driver process does not die within 10 seconds.
> This prevents apps which take longer than 10 seconds to shutdown gracefully 
> from shutting down gracefully. For example, streaming apps with a large batch 
> duration (say, 30 seconds+) can take minutes to shutdown.
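The terminate-then-force-kill sequence the commit introduced can be sketched as follows (a hypothetical POSIX sketch, not Spark's code; `stop_driver` and `grace_seconds` are made-up names, with `grace_seconds` standing in for the hardcoded 10 seconds the issue asks to make configurable):

```python
import subprocess
import sys
import time

def stop_driver(proc, grace_seconds):
    """Sketch of terminate-then-force-kill: ask politely, wait for the
    grace period, then force-kill (the destroyForcibly() analogue)."""
    proc.terminate()                      # polite SIGTERM first
    try:
        proc.wait(timeout=grace_seconds)  # let shutdown hooks run
        return "graceful"
    except subprocess.TimeoutExpired:
        proc.kill()                       # forceful SIGKILL
        proc.wait()
        return "forced"

# A child that honors SIGTERM stops within the grace period...
quick = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
result_quick = stop_driver(quick, grace_seconds=5)

# ...but one that ignores SIGTERM is force-killed once the grace period ends.
stubborn = subprocess.Popen([sys.executable, "-c",
    "import signal, time; signal.signal(signal.SIGTERM, signal.SIG_IGN); time.sleep(60)"])
time.sleep(1)  # give the child time to install its signal handler
result_stubborn = stop_driver(stubborn, grace_seconds=1)

assert result_quick == "graceful"
assert result_stubborn == "forced"
```

A streaming driver mid-batch behaves like the "stubborn" case here: it is still doing useful shutdown work when the grace period expires, which is why a fixed 10-second window is too short.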



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-20014) Optimize mergeSpillsWithFileStream method

2017-05-26 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20014.
--
   Resolution: Fixed
 Assignee: Sital Kedia
Fix Version/s: 2.3.0

> Optimize mergeSpillsWithFileStream method
> -
>
> Key: SPARK-20014
> URL: https://issues.apache.org/jira/browse/SPARK-20014
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.0.0
>Reporter: Sital Kedia
>Assignee: Sital Kedia
> Fix For: 2.3.0
>
>
> When the individual partition size in a spill is small, the 
> mergeSpillsWithTransferTo method performs many small disk I/Os, which is 
> really inefficient. One way to improve performance is to use the 
> mergeSpillsWithFileStream method instead, turning off transferTo and using 
> buffered file reads/writes to improve I/O throughput. 
> However, the current implementation of mergeSpillsWithFileStream does not do 
> buffered reads/writes of the files, and in addition it unnecessarily 
> flushes the output file for each partition.  
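The buffered-merge idea can be sketched in a few lines (`merge_spills_buffered` is a hypothetical name; this is not Spark's implementation, just the I/O pattern the ticket asks for: large buffered reads and writes, with a single flush on close rather than one per partition):

```python
import os
import tempfile

def merge_spills_buffered(spill_paths, out_path, buffer_size=64 * 1024):
    """Concatenate spill files through buffered streams: a few large reads
    per file into one buffered output, flushed once when it closes."""
    with open(out_path, "wb", buffering=buffer_size) as out:
        for path in spill_paths:
            with open(path, "rb", buffering=buffer_size) as spill:
                while True:
                    chunk = spill.read(buffer_size)  # few large reads...
                    if not chunk:
                        break
                    out.write(chunk)                 # ...into one buffered stream
        # single implicit flush on close, not once per partition

tmp = tempfile.mkdtemp()
spills = []
for i in range(3):
    path = os.path.join(tmp, f"spill-{i}")
    with open(path, "wb") as f:
        f.write(bytes([i]) * 10)  # tiny per-partition payloads
    spills.append(path)

merged = os.path.join(tmp, "merged")
merge_spills_buffered(spills, merged)
with open(merged, "rb") as f:
    data = f.read()
assert data == bytes([0]) * 10 + bytes([1]) * 10 + bytes([2]) * 10
```

With many tiny partitions, the buffer absorbs the small writes so the OS sees a handful of large sequential I/Os instead of one syscall per partition.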



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-20844) Remove experimental from API and docs

2017-05-26 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20844.
--
   Resolution: Fixed
 Assignee: Michael Armbrust
Fix Version/s: 2.2.0

> Remove experimental from API and docs
> -
>
> Key: SPARK-20844
> URL: https://issues.apache.org/jira/browse/SPARK-20844
> Project: Spark
>  Issue Type: Task
>  Components: Structured Streaming
>Affects Versions: 2.1.1
>Reporter: Michael Armbrust
>Assignee: Michael Armbrust
>Priority: Blocker
> Fix For: 2.2.0
>
>
> As of Spark 2.2 we know of several large-scale production use cases of 
> Structured Streaming.  We should update the wording in the API and docs 
> accordingly before the release.
> Let's leave {{Evolving}} on the internal APIs that we still plan to change, 
> though (source and sink).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-20882) Executor is waiting for ShuffleBlockFetcherIterator

2017-05-26 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026595#comment-16026595
 ] 

Shixiong Zhu commented on SPARK-20882:
--

[~cenyuhai] do you mind sharing what the root cause was?

> Executor is waiting for ShuffleBlockFetcherIterator
> ---
>
> Key: SPARK-20882
> URL: https://issues.apache.org/jira/browse/SPARK-20882
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.0, 2.1.1
>Reporter: cen yuhai
> Attachments: executor_jstack, executor_log, screenshot-1.png, 
> screenshot-2.png, screenshot-3.png
>
>
> This bug is like https://issues.apache.org/jira/browse/SPARK-19300.
> but I have updated my client netty version to 4.0.43.Final.
> The shuffle service handler is still 4.0.42.Final
> spark.sql.adaptive.enabled is true
> {code}
> "Executor task launch worker for task 4808985" #5373 daemon prio=5 os_prio=0 
> tid=0x7f54ef437000 nid=0x1aed0 waiting on condition [0x7f53aebfe000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> parking to wait for <0x000498c249c0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:189)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:332)
> at 
> org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:58)
> at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
> at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at 
> org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
> at 
> org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at 
> org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:199)
> at 
> org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:97)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:54)
> at org.apache.spark.scheduler.Task.run(Task.scala:114)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:323)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
> at java.lang.Thread.run(Thread.java:834)
> {code}
> {code}
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: remainingBlocks: 
> Set(shuffle_5_1431_805, shuffle_5_1431_808, shuffle_5_1431_806, 
> shuffle_5_1431_809, shuffle_5_1431_807)
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: remainingBlocks: 
> Set(shuffle_5_1431_808, shuffle_5_1431_806, shuffle_5_1431_809, 
> shuffle_5_1431_807)
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: remainingBlocks: 
> Set(shuffle_5_1431_808, shuffle_5_1431_809, shuffle_5_1431_807)
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: remainingBlocks: 
> Set(shuffle_5_1431_808, shuffle_5_1431_809)
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: remainingBlocks: 
> Set(shuffle_5_1431_809)
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: remainingBlocks: Set()
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 21
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 20
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 19
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 18
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 17
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 16
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 15
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 14
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 13
> 17/05/26 12:04:06 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 12
> 17/05/26 12:04:06 DEBUG 
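The jstack above shows the fetcher thread parked in {{LinkedBlockingQueue.take}}: it blocks forever if the remote side dies without a result being enqueued for every outstanding block. A hypothetical sketch of that pattern (names like `remote_fetch` are made up, not Spark's API) and of the fix — enqueueing a failure for each outstanding request when the connection closes:

```python
import queue
import threading

# Mirrors ShuffleBlockFetcherIterator's internal LinkedBlockingQueue of results.
results = queue.Queue()

def remote_fetch(block_ids, connection_ok):
    """A fetch handler must enqueue *something* for every outstanding block,
    even when the connection dies, or the consumer blocks forever on take()."""
    for block_id in block_ids:
        if connection_ok:
            results.put(("success", block_id, b"data"))
        else:
            # The fix: fail all outstanding requests when the channel closes.
            results.put(("failure", block_id, "connection closed"))

threading.Thread(target=remote_fetch,
                 args=(["shuffle_5_1431_805"], False)).start()

# Without the failure result above, this take()-equivalent would park forever,
# exactly like the WAITING thread in the jstack.
kind, block_id, detail = results.get(timeout=5)
assert kind == "failure"
assert block_id == "shuffle_5_1431_805"
```

If the connection-close path drops outstanding requests on the floor (e.g. because of a handler bug or version mismatch), the consumer side has no timeout and hangs indefinitely, which matches the reported symptom.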

[jira] [Commented] (SPARK-20882) Executor is waiting for ShuffleBlockFetcherIterator

2017-05-25 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025588#comment-16025588
 ] 

Shixiong Zhu commented on SPARK-20882:
--

[~cenyuhai] Did you see this log: "Still have {} requests outstanding when 
connection from {} is closed"?

> Executor is waiting for ShuffleBlockFetcherIterator
> ---
>
> Key: SPARK-20882
> URL: https://issues.apache.org/jira/browse/SPARK-20882
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.0, 2.1.1
>Reporter: cen yuhai
> Attachments: executor_jstack, executor_log, screenshot-1.png, 
> screenshot-2.png, screenshot-3.png
>
>
> This bug is like https://issues.apache.org/jira/browse/SPARK-19300.
> but I have updated my client netty version to 4.0.43.Final.
> The shuffle service handler is still 4.0.42.Final
> spark.sql.adaptive.enabled is true
> {code}
> "Executor task launch worker for task 4808985" #5373 daemon prio=5 os_prio=0 
> tid=0x7f54ef437000 nid=0x1aed0 waiting on condition [0x7f53aebfe000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> parking to wait for <0x000498c249c0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:189)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:332)
> at 
> org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:58)
> at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
> at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at 
> org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
> at 
> org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at 
> org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:199)
> at 
> org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:97)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:54)
> at org.apache.spark.scheduler.Task.run(Task.scala:114)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:323)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
> at java.lang.Thread.run(Thread.java:834)
> {code}
> {code}
> 7/05/26 02:01:55 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 3
> 17/05/26 02:01:55 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 2
> 17/05/26 02:01:55 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 1
> 17/05/26 02:01:55 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 1
> 17/05/26 02:01:55 DEBUG ShuffleBlockFetcherIterator: Number of requests in 
> flight 1
> 17/05/26 02:02:03 WARN TransportChannelHandler: Exception in connection from 
> bigdata-apache-hdp-132.xg01/10.0.132.58:7337
> java.io.IOException: Connection reset by peer
>   at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>   at sun.nio.ch.IOUtil.read(IOUtil.java:192)
>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
>   at 
> io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:221)
>   at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:899)
>   at 
> io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:275)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
>   at 

[jira] [Commented] (SPARK-20882) Executor is waiting for ShuffleBlockFetcherIterator

2017-05-25 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025506#comment-16025506
 ] 

Shixiong Zhu commented on SPARK-20882:
--

[~cenyuhai] Is it possible to reproduce it and get logs of the shuffle service?


[jira] [Commented] (SPARK-20882) Executor is waiting for ShuffleBlockFetcherIterator

2017-05-25 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025377#comment-16025377
 ] 

Shixiong Zhu commented on SPARK-20882:
--

NVM. I saw NettyBlockTransferService in the log.


[jira] [Commented] (SPARK-20882) Executor is waiting for ShuffleBlockFetcherIterator

2017-05-25 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025227#comment-16025227
 ] 

Shixiong Zhu commented on SPARK-20882:
--

[~cenyuhai] are you using the shuffle service?


[jira] [Resolved] (SPARK-20874) The "examples" project doesn't depend on Structured Streaming Kafka source

2017-05-25 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20874.
--
Resolution: Fixed

> The "examples" project doesn't depend on Structured Streaming Kafka source
> --
>
> Key: SPARK-20874
> URL: https://issues.apache.org/jira/browse/SPARK-20874
> Project: Spark
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 2.1.0, 2.1.1, 2.2.0
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
>Priority: Minor
> Fix For: 2.1.2, 2.2.0
>
>
> Right now running `bin/run-example StructuredKafkaWordCount ...` will throw 
> an error saying "kafka" source not found.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-20874) The "examples" project doesn't depend on Structured Streaming Kafka source

2017-05-25 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20874:
-
Fix Version/s: 2.2.0
   2.1.2

> The "examples" project doesn't depend on Structured Streaming Kafka source
> --
>
> Key: SPARK-20874
> URL: https://issues.apache.org/jira/browse/SPARK-20874
> Project: Spark
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 2.1.0, 2.1.1, 2.2.0
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
>Priority: Minor
> Fix For: 2.1.2, 2.2.0
>
>
> Right now running `bin/run-example StructuredKafkaWordCount ...` will throw 
> an error saying "kafka" source not found.






[jira] [Updated] (SPARK-20874) The "examples" project doesn't depend on Structured Streaming Kafka source

2017-05-25 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20874:
-
Affects Version/s: 2.2.0

> The "examples" project doesn't depend on Structured Streaming Kafka source
> --
>
> Key: SPARK-20874
> URL: https://issues.apache.org/jira/browse/SPARK-20874
> Project: Spark
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 2.1.0, 2.1.1, 2.2.0
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
>Priority: Minor
> Fix For: 2.1.2, 2.2.0
>
>
> Right now running `bin/run-example StructuredKafkaWordCount ...` will throw 
> an error saying "kafka" source not found.






[jira] [Created] (SPARK-20874) The "examples" project doesn't depend on Structured Streaming Kafka source

2017-05-24 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created SPARK-20874:


 Summary: The "examples" project doesn't depend on Structured 
Streaming Kafka source
 Key: SPARK-20874
 URL: https://issues.apache.org/jira/browse/SPARK-20874
 Project: Spark
  Issue Type: Bug
  Components: Examples
Affects Versions: 2.1.1, 2.1.0
Reporter: Shixiong Zhu
Assignee: Shixiong Zhu
Priority: Minor


Right now running `bin/run-example StructuredKafkaWordCount ...` will throw an 
error saying "kafka" source not found.






[jira] [Updated] (SPARK-20799) Unable to infer schema for ORC on reading ORC from S3

2017-05-21 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20799:
-
Component/s: (was: Spark Core)
 SQL

> Unable to infer schema for ORC on reading ORC from S3
> -
>
> Key: SPARK-20799
> URL: https://issues.apache.org/jira/browse/SPARK-20799
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
>Reporter: Jork Zijlstra
>
> We are getting the following exception: 
> {code}org.apache.spark.sql.AnalysisException: Unable to infer schema for ORC. 
> It must be specified manually.{code}
> Combining following factors will cause it:
> - Use S3
> - Use format ORC
> - Don't apply partitioning on the data
> - Embed AWS credentials in the path
> The problem is in the PartitioningAwareFileIndex def allFiles()
> {code}
> leafDirToChildrenFiles.get(qualifiedPath)
>   .orElse { leafFiles.get(qualifiedPath).map(Array(_)) }
>   .getOrElse(Array.empty)
> {code}
> leafDirToChildrenFiles uses the path WITHOUT credentials as its key while the 
> qualifiedPath contains the path WITH credentials.
> So leafDirToChildrenFiles.get(qualifiedPath) doesn't find any files, so no 
> data is read and the schema cannot be defined.
> Spark does log the S3xLoginHelper warning "The Filesystem URI contains login 
> details. This is insecure and may be unsupported in future.", but being 
> discouraged should not mean it stops working.
> Workaround:
> Move the AWS credentials from the path to the SparkSession
> {code}
> SparkSession.builder
>   .config("spark.hadoop.fs.s3n.awsAccessKeyId", {awsAccessKeyId})
>   .config("spark.hadoop.fs.s3n.awsSecretAccessKey", {awsSecretAccessKey})
>   .getOrCreate()
> {code}






[jira] [Resolved] (SPARK-20792) Support same timeout operations in mapGroupsWithState function in batch queries as in streaming queries

2017-05-21 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20792.
--
Resolution: Fixed

> Support same timeout operations in mapGroupsWithState function in batch 
> queries as in streaming queries
> ---
>
> Key: SPARK-20792
> URL: https://issues.apache.org/jira/browse/SPARK-20792
> Project: Spark
>  Issue Type: Sub-task
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Tathagata Das
>Assignee: Tathagata Das
> Fix For: 2.2.0
>
>
> Currently, in batch queries, timeout is disabled (i.e. 
> GroupStateTimeout.NoTimeout), which means any GroupState.setTimeout*** 
> operation throws UnsupportedOperationException. This makes it awkward when 
> converting a streaming query into a batch query by changing the input DF from 
> streaming to batch: if the timeout was enabled and used, the batch 
> query will start throwing UnsupportedOperationException.






[jira] [Resolved] (SPARK-13747) Concurrent execution in SQL doesn't work with Scala ForkJoinPool

2017-05-17 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-13747.
--
  Resolution: Fixed
   Fix Version/s: 2.2.0
Target Version/s: 2.1.0, 2.0.2  (was: 2.0.2, 2.1.0)

> Concurrent execution in SQL doesn't work with Scala ForkJoinPool
> 
>
> Key: SPARK-13747
> URL: https://issues.apache.org/jira/browse/SPARK-13747
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.0, 2.0.1
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
> Fix For: 2.2.0
>
>
> Run the following codes may fail
> {code}
> (1 to 100).par.foreach { _ =>
>   println(sc.parallelize(1 to 5).map { i => (i, i) }.toDF("a", "b").count())
> }
> java.lang.IllegalArgumentException: spark.sql.execution.id is already set 
> at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:87)
>  
> at 
> org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1904) 
> at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1385) 
> {code}
> This is because SparkContext.runJob can be suspended when using a 
> ForkJoinPool (e.g., scala.concurrent.ExecutionContext.Implicits.global), as it 
> calls Await.ready (introduced by https://github.com/apache/spark/pull/9264).
> So when SparkContext.runJob is suspended, the ForkJoinPool will run another 
> task in the same thread, and that task then sees the polluted local properties.
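A minimal, Spark-free JVM sketch of the mechanism described above (the class name, pool size, and the "execution-id-1" value are invented for illustration): when a task blocks in join(), the ForkJoinPool worker helps by running another task on the same thread, and that task observes the first task's thread-local state — the same way spark.sql.execution.id leaks between queries.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinTask;

public class ThreadLocalPollution {
    // Stands in for Spark's per-thread local properties (spark.sql.execution.id)
    static final ThreadLocal<String> localProps = new ThreadLocal<>();

    static String run() throws Exception {
        ForkJoinPool pool = new ForkJoinPool(1);      // a single worker thread
        try {
            return pool.submit(() -> {
                localProps.set("execution-id-1");     // outer task sets its property
                // While the outer task blocks in join(), the worker executes the
                // forked inner task on the SAME thread, so the inner task reads
                // the outer task's ThreadLocal value.
                return ForkJoinTask.adapt((Callable<String>) localProps::get)
                                   .fork().join();
            }).join();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("inner task saw: " + run()); // prints: inner task saw: execution-id-1
    }
}
```

With one worker thread the interleaving is deterministic: the only way the inner task can run while the outer one is joining is on the outer task's own thread.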






[jira] [Commented] (SPARK-17463) Serialization of accumulators in heartbeats is not thread-safe

2017-05-17 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014799#comment-16014799
 ] 

Shixiong Zhu commented on SPARK-17463:
--

[~allengeorge] it's protected by `Collections.synchronizedList(new 
ArrayList[T]())`.

> Serialization of accumulators in heartbeats is not thread-safe
> --
>
> Key: SPARK-17463
> URL: https://issues.apache.org/jira/browse/SPARK-17463
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.0.0
>Reporter: Josh Rosen
>Assignee: Shixiong Zhu
>Priority: Critical
> Fix For: 2.0.1, 2.1.0
>
>
> Check out the following {{ConcurrentModificationException}}:
> {code}
> 16/09/06 16:10:29 WARN NettyRpcEndpointRef: Error sending message [message = 
> Heartbeat(2,[Lscala.Tuple2;@66e7b6e7,BlockManagerId(2, HOST, 57743))] in 1 
> attempts
> org.apache.spark.SparkException: Exception thrown in awaitResult
> at 
> org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
> at 
> org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
> at 
> scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
> at 
> org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
> at 
> org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
> at scala.PartialFunction$OrElse.apply(PartialFunction.scala:162)
> at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
> at 
> org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
> at 
> org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:518)
> at 
> org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply$mcV$sp(Executor.scala:547)
> at 
> org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:547)
> at 
> org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:547)
> at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1862)
> at org.apache.spark.executor.Executor$$anon$1.run(Executor.scala:547)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.ConcurrentModificationException
> at java.util.ArrayList.writeObject(ArrayList.java:766)
> at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1028)
> at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
> at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
> at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
> at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
> at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
> at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
> at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
> at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
> at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
> at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
> at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
> at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
> at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
> at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
> at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
> at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
> at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
> at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
> 
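The ConcurrentModificationException in the trace above comes from ArrayList's writeObject iterating the list while the heartbeat thread appends to it. A minimal, single-threaded JVM sketch (class and element names are invented) reproduces the same failure mode deterministically by mutating the list from inside serialization, which trips ArrayList's modCount check just like a concurrent writer would.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class HeartbeatSerialization {
    // An element whose own serialization mutates the list containing it.
    // This stands in for another thread appending accumulator updates while
    // the executor serializes the same unsynchronized ArrayList.
    static class MutatingElement implements Serializable {
        transient List<Object> owner;             // back-reference, not serialized
        MutatingElement(List<Object> owner) { this.owner = owner; }
        private void writeObject(ObjectOutputStream out) throws IOException {
            owner.add("appended mid-serialization");  // plays the "other thread"
            out.defaultWriteObject();
        }
    }

    // Returns true when serializing the list fails with
    // ConcurrentModificationException, as in the stack trace above.
    static boolean serializeFails() throws IOException {
        List<Object> accumulators = new ArrayList<>();
        accumulators.add(new MutatingElement(accumulators));
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(accumulators);  // ArrayList.writeObject checks modCount
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("serialization failed with CME: " + serializeFails());
    }
}
```

Wrapping the list with Collections.synchronizedList (as the fix referenced in the comment above does) makes real concurrent serialization safe, because SynchronizedList's writeObject holds the list's mutex; it would not help this single-threaded reproduction, which mutates from inside the same serialization call.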

[jira] [Resolved] (SPARK-20788) Fix the Executor task reaper's false alarm warning logs

2017-05-17 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20788.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

> Fix the Executor task reaper's false alarm warning logs
> ---
>
> Key: SPARK-20788
> URL: https://issues.apache.org/jira/browse/SPARK-20788
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.2.0
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
> Fix For: 2.2.0
>
>
> The executor task reaper may fail to detect whether a task has finished when 
> the task is finishing but being killed at the same time.






[jira] [Created] (SPARK-20788) Fix the Executor task reaper's false alarm warning logs

2017-05-17 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created SPARK-20788:


 Summary: Fix the Executor task reaper's false alarm warning logs
 Key: SPARK-20788
 URL: https://issues.apache.org/jira/browse/SPARK-20788
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 2.2.0
Reporter: Shixiong Zhu
Assignee: Shixiong Zhu


The executor task reaper may fail to detect whether a task has finished when the 
task is finishing but being killed at the same time.







[jira] [Assigned] (SPARK-19372) Code generation for Filter predicate including many OR conditions exceeds JVM method size limit

2017-05-16 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu reassigned SPARK-19372:


Assignee: Kazuaki Ishizaki

> Code generation for Filter predicate including many OR conditions exceeds JVM 
> method size limit 
> 
>
> Key: SPARK-19372
> URL: https://issues.apache.org/jira/browse/SPARK-19372
> Project: Spark
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Jay Pranavamurthi
>Assignee: Kazuaki Ishizaki
> Fix For: 2.3.0
>
> Attachments: wide400cols.csv
>
>
> For the attached csv file, the code below causes the exception 
> "org.codehaus.janino.JaninoRuntimeException: Code of method 
> "(Lorg/apache/spark/sql/catalyst/InternalRow;)Z" of class 
> "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificPredicate" 
> grows beyond 64 KB
> Code:
> {code:borderStyle=solid}
>   val conf = new SparkConf().setMaster("local[1]")
>   val sqlContext = 
> SparkSession.builder().config(conf).getOrCreate().sqlContext
>   val dataframe =
> sqlContext
>   .read
>   .format("com.databricks.spark.csv")
>   .load("wide400cols.csv")
>   val filter = (0 to 399)
> .foldLeft(lit(false))((e, index) => 
> e.or(dataframe.col(dataframe.columns(index)) =!= s"column${index+1}"))
>   val filtered = dataframe.filter(filter)
>   filtered.show(100)
> {code}






[jira] [Resolved] (SPARK-19372) Code generation for Filter predicate including many OR conditions exceeds JVM method size limit

2017-05-16 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-19372.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

> Code generation for Filter predicate including many OR conditions exceeds JVM 
> method size limit 
> 
>
> Key: SPARK-19372
> URL: https://issues.apache.org/jira/browse/SPARK-19372
> Project: Spark
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Jay Pranavamurthi
> Fix For: 2.3.0
>
> Attachments: wide400cols.csv
>
>
> For the attached csv file, the code below causes the exception 
> "org.codehaus.janino.JaninoRuntimeException: Code of method 
> "(Lorg/apache/spark/sql/catalyst/InternalRow;)Z" of class 
> "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificPredicate" 
> grows beyond 64 KB
> Code:
> {code:borderStyle=solid}
>   val conf = new SparkConf().setMaster("local[1]")
>   val sqlContext = 
> SparkSession.builder().config(conf).getOrCreate().sqlContext
>   val dataframe =
> sqlContext
>   .read
>   .format("com.databricks.spark.csv")
>   .load("wide400cols.csv")
>   val filter = (0 to 399)
> .foldLeft(lit(false))((e, index) => 
> e.or(dataframe.col(dataframe.columns(index)) =!= s"column${index+1}"))
>   val filtered = dataframe.filter(filter)
>   filtered.show(100)
> {code}






[jira] [Created] (SPARK-20774) BroadcastExchangeExec doesn't cancel the Spark job if broadcasting a relation timeouts.

2017-05-16 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created SPARK-20774:


 Summary: BroadcastExchangeExec doesn't cancel the Spark job if 
broadcasting a relation timeouts.
 Key: SPARK-20774
 URL: https://issues.apache.org/jira/browse/SPARK-20774
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.1.1, 2.1.0
Reporter: Shixiong Zhu


When broadcasting a table takes too long and triggers the timeout, the SQL query 
fails. However, the background Spark job keeps running and wastes 
resources.






[jira] [Resolved] (SPARK-20529) Worker should not use the received Master address

2017-05-16 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20529.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

> Worker should not use the received Master address
> -
>
> Key: SPARK-20529
> URL: https://issues.apache.org/jira/browse/SPARK-20529
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.6.3, 2.0.2, 2.1.0, 2.2.0
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
> Fix For: 2.2.0
>
>
> Right now, when a worker connects to the master, the master sends its address to 
> the worker. The worker then saves this address and uses it to reconnect in case 
> of failure.
> However, sometimes this address is not correct: if there is a proxy between the 
> master and the worker, the address the master sent is not the address of the proxy.






[jira] [Updated] (SPARK-20666) Flaky test - SparkListenerBus randomly failing java.lang.IllegalAccessError

2017-05-15 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20666:
-
Fix Version/s: (was: 2.2.1)
   (was: 2.3.0)
   2.2.0

> Flaky test - SparkListenerBus randomly failing java.lang.IllegalAccessError
> ---
>
> Key: SPARK-20666
> URL: https://issues.apache.org/jira/browse/SPARK-20666
> Project: Spark
>  Issue Type: Bug
>  Components: ML, Spark Core, SparkR
>Affects Versions: 2.2.0
>Reporter: Felix Cheung
>Assignee: Wenchen Fan
>Priority: Critical
> Fix For: 2.2.0
>
>
> Seeing quite a bit of this on AppVeyor (i.e. Windows only); it seems to show up 
> in other test runs too, but always only when running ML tests.
> {code}
> Exception in thread "SparkListenerBus" java.lang.IllegalAccessError: 
> Attempted to access garbage collected accumulator 159454
>   at 
> org.apache.spark.util.AccumulatorContext$$anonfun$get$1.apply(AccumulatorV2.scala:265)
>   at 
> org.apache.spark.util.AccumulatorContext$$anonfun$get$1.apply(AccumulatorV2.scala:261)
>   at scala.Option.map(Option.scala:146)
>   at 
> org.apache.spark.util.AccumulatorContext$.get(AccumulatorV2.scala:261)
>   at org.apache.spark.util.AccumulatorV2.name(AccumulatorV2.scala:88)
>   at 
> org.apache.spark.sql.execution.metric.SQLMetric.toInfo(SQLMetrics.scala:67)
>   at 
> org.apache.spark.sql.execution.ui.SQLListener$$anonfun$onTaskEnd$1.apply(SQLListener.scala:216)
>   at 
> org.apache.spark.sql.execution.ui.SQLListener$$anonfun$onTaskEnd$1.apply(SQLListener.scala:216)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> org.apache.spark.sql.execution.ui.SQLListener.onTaskEnd(SQLListener.scala:216)
>   at 
> org.apache.spark.scheduler.SparkListenerBus$class.doPostEvent(SparkListenerBus.scala:45)
>   at 
> org.apache.spark.scheduler.LiveListenerBus.doPostEvent(LiveListenerBus.scala:36)
>   at 
> org.apache.spark.scheduler.LiveListenerBus.doPostEvent(LiveListenerBus.scala:36)
>   at 
> org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:63)
>   at 
> org.apache.spark.scheduler.LiveListenerBus.postToAll(LiveListenerBus.scala:36)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:94)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
>   at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78)
>   at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1268)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77)
> 1
> MLlib recommendation algorithms: Spark package found in SPARK_HOME: 
> C:\projects\spark\bin\..
> {code}
> {code}
> java.lang.IllegalStateException: SparkContext has been shutdown
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2015)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2044)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2063)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2088)
>   at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>   at 
> org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:275)
>   at 
> org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2923)
>   at 
> org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2474)
>   at 
> org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2474)
>   at 

[jira] [Updated] (SPARK-20666) Flaky test - SparkListenerBus randomly failing java.lang.IllegalAccessError

2017-05-15 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20666:
-
Affects Version/s: (was: 2.3.0)
   2.2.0

> Flaky test - SparkListenerBus randomly failing java.lang.IllegalAccessError
> ---
>
> Key: SPARK-20666
> URL: https://issues.apache.org/jira/browse/SPARK-20666
> Project: Spark
>  Issue Type: Bug
>  Components: ML, Spark Core, SparkR
>Affects Versions: 2.2.0
>Reporter: Felix Cheung
>Assignee: Wenchen Fan
>Priority: Critical
> Fix For: 2.2.0
>
>
> Seeing quite a bit of this on AppVeyor (i.e. Windows only); it seems to show up 
> in other test runs too, but always only when running ML tests.
> {code}
> Exception in thread "SparkListenerBus" java.lang.IllegalAccessError: 
> Attempted to access garbage collected accumulator 159454
>   at 
> org.apache.spark.util.AccumulatorContext$$anonfun$get$1.apply(AccumulatorV2.scala:265)
>   at 
> org.apache.spark.util.AccumulatorContext$$anonfun$get$1.apply(AccumulatorV2.scala:261)
>   at scala.Option.map(Option.scala:146)
>   at 
> org.apache.spark.util.AccumulatorContext$.get(AccumulatorV2.scala:261)
>   at org.apache.spark.util.AccumulatorV2.name(AccumulatorV2.scala:88)
>   at 
> org.apache.spark.sql.execution.metric.SQLMetric.toInfo(SQLMetrics.scala:67)
>   at 
> org.apache.spark.sql.execution.ui.SQLListener$$anonfun$onTaskEnd$1.apply(SQLListener.scala:216)
>   at 
> org.apache.spark.sql.execution.ui.SQLListener$$anonfun$onTaskEnd$1.apply(SQLListener.scala:216)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> org.apache.spark.sql.execution.ui.SQLListener.onTaskEnd(SQLListener.scala:216)
>   at 
> org.apache.spark.scheduler.SparkListenerBus$class.doPostEvent(SparkListenerBus.scala:45)
>   at 
> org.apache.spark.scheduler.LiveListenerBus.doPostEvent(LiveListenerBus.scala:36)
>   at 
> org.apache.spark.scheduler.LiveListenerBus.doPostEvent(LiveListenerBus.scala:36)
>   at 
> org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:63)
>   at 
> org.apache.spark.scheduler.LiveListenerBus.postToAll(LiveListenerBus.scala:36)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:94)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
>   at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78)
>   at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1268)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77)
> 1
> MLlib recommendation algorithms: Spark package found in SPARK_HOME: 
> C:\projects\spark\bin\..
> {code}
> {code}
> java.lang.IllegalStateException: SparkContext has been shutdown
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2015)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2044)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2063)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2088)
>   at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>   at 
> org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:275)
>   at 
> org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2923)
>   at 
> org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2474)
>   at 
> org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2474)
>   at org.apache.spark.sql.Dataset$$anonfun$57.apply(Dataset.scala:2907)
>   at 
> 

[jira] [Issue Comment Deleted] (SPARK-20666) Flaky test - SparkListenerBus randomly failing java.lang.IllegalAccessError

2017-05-15 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20666:
-
Comment: was deleted

(was: User 'shivaram' has created a pull request for this issue:
https://github.com/apache/spark/pull/17966)

> Flaky test - SparkListenerBus randomly failing java.lang.IllegalAccessError
> ---
>
> Key: SPARK-20666
> URL: https://issues.apache.org/jira/browse/SPARK-20666
> Project: Spark
>  Issue Type: Bug
>  Components: ML, Spark Core, SparkR
>Affects Versions: 2.3.0
>Reporter: Felix Cheung
>Assignee: Wenchen Fan
>Priority: Critical
> Fix For: 2.2.1, 2.3.0
>
>
> Seeing quite a bit of this on AppVeyor (i.e., Windows only). It seems to show 
> up in other test runs too, but always and only when running the ML tests.
> {code}
> Exception in thread "SparkListenerBus" java.lang.IllegalAccessError: 
> Attempted to access garbage collected accumulator 159454
>   at 
> org.apache.spark.util.AccumulatorContext$$anonfun$get$1.apply(AccumulatorV2.scala:265)
>   at 
> org.apache.spark.util.AccumulatorContext$$anonfun$get$1.apply(AccumulatorV2.scala:261)
>   at scala.Option.map(Option.scala:146)
>   at 
> org.apache.spark.util.AccumulatorContext$.get(AccumulatorV2.scala:261)
>   at org.apache.spark.util.AccumulatorV2.name(AccumulatorV2.scala:88)
>   at 
> org.apache.spark.sql.execution.metric.SQLMetric.toInfo(SQLMetrics.scala:67)
>   at 
> org.apache.spark.sql.execution.ui.SQLListener$$anonfun$onTaskEnd$1.apply(SQLListener.scala:216)
>   at 
> org.apache.spark.sql.execution.ui.SQLListener$$anonfun$onTaskEnd$1.apply(SQLListener.scala:216)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> org.apache.spark.sql.execution.ui.SQLListener.onTaskEnd(SQLListener.scala:216)
>   at 
> org.apache.spark.scheduler.SparkListenerBus$class.doPostEvent(SparkListenerBus.scala:45)
>   at 
> org.apache.spark.scheduler.LiveListenerBus.doPostEvent(LiveListenerBus.scala:36)
>   at 
> org.apache.spark.scheduler.LiveListenerBus.doPostEvent(LiveListenerBus.scala:36)
>   at 
> org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:63)
>   at 
> org.apache.spark.scheduler.LiveListenerBus.postToAll(LiveListenerBus.scala:36)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:94)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
>   at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78)
>   at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1268)
>   at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77)
> 1
> MLlib recommendation algorithms: Spark package found in SPARK_HOME: 
> C:\projects\spark\bin\..
> {code}
> {code}
> java.lang.IllegalStateException: SparkContext has been shutdown
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2015)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2044)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2063)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2088)
>   at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>   at 
> org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:275)
>   at 
> org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2923)
>   at 
> org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2474)
>   at 
> org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2474)
>   at 
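The failure mode in the trace above — resolving an accumulator id through a weak-reference registry after the owning accumulator has been garbage collected, as `AccumulatorContext.get` does — can be mimicked in miniature. This is a Python analogy under the assumption that the registry holds weak references; it is not Spark code, and the names are hypothetical:

```python
import gc
import weakref

registry = {}  # id -> weakref, like a weak accumulator registry

class Acc:
    def __init__(self, acc_id):
        self.acc_id = acc_id
        registry[acc_id] = weakref.ref(self)  # registry does not keep it alive

def lookup(acc_id):
    ref = registry.get(acc_id)
    acc = ref() if ref is not None else None
    if acc is None:
        # mirrors "Attempted to access garbage collected accumulator <id>"
        raise RuntimeError(
            f"attempted to access garbage collected accumulator {acc_id}")
    return acc

acc = Acc(159454)
assert lookup(159454) is acc   # while alive, the lookup succeeds
del acc
gc.collect()                   # once collected, a late listener lookup fails
try:
    lookup(159454)
except RuntimeError as exc:
    print(exc)
```

A listener thread that races with garbage collection hits exactly this window: the id is still in the registry, but the weak reference has gone dead.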

[jira] [Resolved] (SPARK-20716) StateStore.abort() should not throw further exception

2017-05-15 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20716.
--
Resolution: Fixed

> StateStore.abort() should not throw further exception
> -
>
> Key: SPARK-20716
> URL: https://issues.apache.org/jira/browse/SPARK-20716
> Project: Spark
>  Issue Type: Bug
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Tathagata Das
>Assignee: Tathagata Das
> Fix For: 2.2.0
>
>
> StateStore.abort() should do a best effort attempt to clean up temporary 
> resources. It should not throw errors.
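A best-effort cleanup of this shape — attempt every resource, log failures rather than rethrow — can be sketched as follows. This is a minimal illustration, not Spark's actual StateStore code; the class and method names are hypothetical:

```python
import logging

logger = logging.getLogger("statestore")
calls = []

def failing_cleanup():
    raise RuntimeError("disk gone")

class StateStoreSketch:
    """Toy store whose abort() must never propagate cleanup errors."""

    def __init__(self, cleanups):
        self.cleanups = cleanups  # cleanup callables for temporary resources

    def abort(self):
        # Best effort: attempt every cleanup, swallow and log failures
        # so that abort() itself never raises.
        for cleanup in self.cleanups:
            try:
                cleanup()
            except Exception as exc:
                logger.warning("error cleaning up temp resource: %s", exc)

store = StateStoreSketch([
    lambda: calls.append("a"),
    failing_cleanup,
    lambda: calls.append("b"),
])
store.abort()   # does not raise, and later cleanups still run
print(calls)    # ['a', 'b']
```

The point of catching inside the loop, rather than around it, is that one failed resource does not prevent the remaining ones from being cleaned up.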



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-20717) Tweak MapGroupsWithState update function behavior

2017-05-15 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20717.
--
Resolution: Fixed

> Tweak MapGroupsWithState update function behavior
> -
>
> Key: SPARK-20717
> URL: https://issues.apache.org/jira/browse/SPARK-20717
> Project: Spark
>  Issue Type: Sub-task
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Tathagata Das
>Assignee: Tathagata Das
> Fix For: 2.2.0
>
>
> Timeout and state data are two independent entities and should be settable 
> independently. Therefore, in the same call of the user-defined function, one 
> should be able to set the timeout before initializing the state and also 
> after removing the state. Whether timeouts can be set or not should not 
> depend on the current state, and vice versa. 
> However, a limitation of the current implementation is that state cannot be 
> null while timeout is set. This is checked lazily after the function call has 
> completed.
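The intended contract can be sketched with a toy state handle: timeout and state are settable independently, in any order, during the user function, and the one disallowed combination (timeout with no state) is only checked lazily afterwards. This is an illustration only; `GroupStateSketch` and its methods are hypothetical stand-ins, not Spark's GroupState API:

```python
class GroupStateSketch:
    """Toy state handle: timeout and state are independently settable
    during the user function; constraints are enforced only afterwards."""

    def __init__(self):
        self._state = None
        self._timeout_ms = None

    def update(self, value):
        self._state = value

    def remove(self):
        self._state = None

    def set_timeout_duration(self, ms):
        # Allowed at any point in the call, before or after update()/remove().
        self._timeout_ms = ms

    def check_after_call(self):
        # Lazy check once the user-defined function has returned:
        # a timeout with no state is the disallowed combination.
        if self._timeout_ms is not None and self._state is None:
            raise ValueError("cannot set a timeout without any state")

def user_fn(state):
    state.set_timeout_duration(60000)  # timeout first...
    state.update({"count": 1})         # ...state later: either order works

s = GroupStateSketch()
user_fn(s)
s.check_after_call()  # passes: state exists alongside the timeout
```

Because the check runs after the call rather than inside `set_timeout_duration`, the ordering of the two operations within the function no longer matters.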



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-20714) Fix match error when watermark is set with timeout = no timeout / processing timeout

2017-05-12 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20714.
--
Resolution: Fixed

> Fix match error when watermark is set with timeout = no timeout / processing 
> timeout
> 
>
> Key: SPARK-20714
> URL: https://issues.apache.org/jira/browse/SPARK-20714
> Project: Spark
>  Issue Type: Sub-task
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Tathagata Das
>Assignee: Tathagata Das
> Fix For: 2.2.0
>
>
> When watermark is set, and timeout conf is NoTimeout or ProcessingTimeTimeout 
> (both do not need the watermark), the query fails at runtime with the 
> following exception.
> {code}
> MatchException: 
> Some(org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificPredicate@1a9b798e)
>  (of class scala.Some)
> 
> org.apache.spark.sql.execution.streaming.FlatMapGroupsWithStateExec$$anonfun$doExecute$1.apply(FlatMapGroupsWithStateExec.scala:120)
> 
> org.apache.spark.sql.execution.streaming.FlatMapGroupsWithStateExec$$anonfun$doExecute$1.apply(FlatMapGroupsWithStateExec.scala:116)
> 
> org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1.apply(package.scala:70)
> 
> org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1.apply(package.scala:65)
> 
> org.apache.spark.sql.execution.streaming.state.StateStoreRDD.compute(StateStoreRDD.scala:64)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-20702) TaskContextImpl.markTaskCompleted should not hide the original error

2017-05-12 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20702.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

> TaskContextImpl.markTaskCompleted should not hide the original error
> 
>
> Key: SPARK-20702
> URL: https://issues.apache.org/jira/browse/SPARK-20702
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1, 2.2.0
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
> Fix For: 2.2.0
>
>
> If a TaskCompletionListener throws an error, 
> TaskContextImpl.markTaskCompleted will hide the original error.
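The failure mode is the classic "exception in cleanup masks the task's own exception". A minimal sketch of the desired behavior — illustrative only, not Spark's TaskContextImpl code — keeps the original error primary and still runs every listener:

```python
def mark_task_completed(task_error, listeners):
    """Run completion listeners; a listener failure must not hide task_error."""
    listener_error = None
    for listener in listeners:
        try:
            listener()
        except Exception as exc:
            # remember the first listener failure instead of raising right away
            listener_error = listener_error or exc
    if task_error is not None:
        raise task_error        # the task's own error stays primary
    if listener_error is not None:
        raise listener_error    # surfaces only if the task itself succeeded

log = []

def bad_listener():
    raise RuntimeError("listener boom")

try:
    mark_task_completed(ValueError("task failed"),
                        [bad_listener, lambda: log.append("ran")])
except ValueError as exc:
    log.append(str(exc))

print(log)  # ['ran', 'task failed']: later listeners ran, original error kept
```

The buggy shape, by contrast, is a bare `try/finally` where the listener call sits in the `finally` block: if it raises, its exception replaces the task's.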



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-20600) KafkaRelation should be pretty printed in web UI (Details for Query)

2017-05-11 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-20600.
--
   Resolution: Fixed
 Assignee: Jacek Laskowski
Fix Version/s: 2.3.0
   2.2.1

> KafkaRelation should be pretty printed in web UI (Details for Query)
> 
>
> Key: SPARK-20600
> URL: https://issues.apache.org/jira/browse/SPARK-20600
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Jacek Laskowski
>Assignee: Jacek Laskowski
>Priority: Trivial
> Fix For: 2.2.1, 2.3.0
>
> Attachments: kafka-source-scan-webui.png
>
>
> Executing the following batch query gives the default stringified/internal 
> name of {{KafkaRelation}} in web UI (under Details for Query), i.e. 
> http://localhost:4040/SQL/execution/?id=3 (<-- change the {{id}}). See the 
> attachment.
> {code}
> spark.
>   read.
>   format("kafka").
>   option("subscribe", "topic1").
>   option("kafka.bootstrap.servers", "localhost:9092").
>   load.
>   select('value cast "string").
>   write.
>   csv("fromkafka.csv")
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-20702) TaskContextImpl.markTaskCompleted should not hide the original error

2017-05-10 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created SPARK-20702:


 Summary: TaskContextImpl.markTaskCompleted should not hide the 
original error
 Key: SPARK-20702
 URL: https://issues.apache.org/jira/browse/SPARK-20702
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 2.1.1, 2.2.0
Reporter: Shixiong Zhu
Assignee: Shixiong Zhu


If a TaskCompletionListener throws an error, TaskContextImpl.markTaskCompleted 
will hide the original error.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-20373) Batch queries with 'Dataset/DataFrame.withWatermark()` does not execute

2017-05-09 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu reassigned SPARK-20373:


Assignee: Genmao Yu

> Batch queries with 'Dataset/DataFrame.withWatermark()` does not execute
> ---
>
> Key: SPARK-20373
> URL: https://issues.apache.org/jira/browse/SPARK-20373
> Project: Spark
>  Issue Type: Bug
>  Components: Structured Streaming
>Affects Versions: 2.1.0, 2.2.0
>Reporter: Tathagata Das
>Assignee: Genmao Yu
>Priority: Minor
> Fix For: 2.2.1, 2.3.0
>
>
> Any Dataset/DataFrame batch query with the operation `withWatermark` does not 
> execute because the batch planner does not have any rule to explicitly handle 
> the EventTimeWatermark logical plan. The right solution is to simply remove 
> the plan node, as the watermark should not affect any batch query in any way.
> {code}
> from pyspark.sql.functions import *
> eventsDF = spark.createDataFrame([("2016-03-11 09:00:07", "dev1", 
> 123)]).toDF("eventTime", "deviceId", 
> "signal").select(col("eventTime").cast("timestamp").alias("eventTime"), 
> "deviceId", "signal")
> windowedCountsDF = \
>   eventsDF \
> .withWatermark("eventTime", "10 minutes") \
> .groupBy(
>   "deviceId",
>   window("eventTime", "5 minutes")) \
> .count()
> windowedCountsDF.collect()
> {code}
> This throws the following error:
> {code}
> java.lang.AssertionError: assertion failed: No plan for EventTimeWatermark 
> eventTime#3762657: timestamp, interval 10 minutes
> +- Project [cast(_1#3762643 as timestamp) AS eventTime#3762657, _2#3762644 AS 
> deviceId#3762651]
>+- LogicalRDD [_1#3762643, _2#3762644, _3#3762645L]
>   at scala.Predef$.assert(Predef.scala:170)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:74)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:66)
>   at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>   at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:74)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:66)
>   at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>   at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>   at 
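The fix described in the report — removing the EventTimeWatermark plan node for batch queries, since the watermark is a no-op there — amounts to a small tree rewrite. A toy sketch with hypothetical node classes (not Catalyst's API):

```python
class Node:
    def __init__(self, name, *children):
        self.name, self.children = name, list(children)

    def __repr__(self):
        if not self.children:
            return self.name
        return f"{self.name}({', '.join(map(repr, self.children))})"

def strip_watermark(plan):
    """Replace each EventTimeWatermark node with its single child."""
    children = [strip_watermark(c) for c in plan.children]
    if plan.name == "EventTimeWatermark":
        return children[0]  # the watermark is a no-op for batch queries
    return Node(plan.name, *children)

plan = Node("Aggregate",
            Node("EventTimeWatermark",
                 Node("Project", Node("LogicalRDD"))))
print(strip_watermark(plan))  # Aggregate(Project(LogicalRDD))
```

After the rewrite, the batch planner never sees the node it has no rule for, so the assertion above can no longer fire.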

[jira] [Updated] (SPARK-20373) Batch queries with 'Dataset/DataFrame.withWatermark()` does not execute

2017-05-09 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-20373:
-
Fix Version/s: 2.3.0
   2.2.1

> Batch queries with 'Dataset/DataFrame.withWatermark()` does not execute
> ---
>
> Key: SPARK-20373
> URL: https://issues.apache.org/jira/browse/SPARK-20373
> Project: Spark
>  Issue Type: Bug
>  Components: Structured Streaming
>Affects Versions: 2.1.0, 2.2.0
>Reporter: Tathagata Das
>Priority: Minor
> Fix For: 2.2.1, 2.3.0
>
>
> Any Dataset/DataFrame batch query with the operation `withWatermark` does not 
> execute because the batch planner does not have any rule to explicitly handle 
> the EventTimeWatermark logical plan. The right solution is to simply remove 
> the plan node, as the watermark should not affect any batch query in any way.
> {code}
> from pyspark.sql.functions import *
> eventsDF = spark.createDataFrame([("2016-03-11 09:00:07", "dev1", 
> 123)]).toDF("eventTime", "deviceId", 
> "signal").select(col("eventTime").cast("timestamp").alias("eventTime"), 
> "deviceId", "signal")
> windowedCountsDF = \
>   eventsDF \
> .withWatermark("eventTime", "10 minutes") \
> .groupBy(
>   "deviceId",
>   window("eventTime", "5 minutes")) \
> .count()
> windowedCountsDF.collect()
> {code}
> This throws the following error:
> {code}
> java.lang.AssertionError: assertion failed: No plan for EventTimeWatermark 
> eventTime#3762657: timestamp, interval 10 minutes
> +- Project [cast(_1#3762643 as timestamp) AS eventTime#3762657, _2#3762644 AS 
> deviceId#3762651]
>+- LogicalRDD [_1#3762643, _2#3762644, _3#3762645L]
>   at scala.Predef$.assert(Predef.scala:170)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:74)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:66)
>   at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>   at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:74)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:66)
>   at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>   at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
>   at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>   at 

[jira] [Comment Edited] (SPARK-20600) KafkaRelation should be pretty printed in web UI (Details for Query)

2017-05-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001680#comment-16001680
 ] 

Shixiong Zhu edited comment on SPARK-20600 at 5/8/17 10:38 PM:
---

[~jlaskowski] Hope you can do it soon. Then we can put it into 2.2.0 if RC2 
fails.


was (Author: zsxwing):
[~jlaskowski] Hope you want do it soon. Then we can put it into 2.2.0 if RC2 
fails.

> KafkaRelation should be pretty printed in web UI (Details for Query)
> 
>
> Key: SPARK-20600
> URL: https://issues.apache.org/jira/browse/SPARK-20600
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Jacek Laskowski
>Priority: Trivial
> Attachments: kafka-source-scan-webui.png
>
>
> Executing the following batch query gives the default stringified/internal 
> name of {{KafkaRelation}} in web UI (under Details for Query), i.e. 
> http://localhost:4040/SQL/execution/?id=3 (<-- change the {{id}}). See the 
> attachment.
> {code}
> spark.
>   read.
>   format("kafka").
>   option("subscribe", "topic1").
>   option("kafka.bootstrap.servers", "localhost:9092").
>   load.
>   select('value cast "string").
>   write.
>   csv("fromkafka.csv")
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-20600) KafkaRelation should be pretty printed in web UI (Details for Query)

2017-05-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001680#comment-16001680
 ] 

Shixiong Zhu commented on SPARK-20600:
--

[~jlaskowski] Hope you can do it soon. Then we can put it into 2.2.0 if RC2 
fails.

> KafkaRelation should be pretty printed in web UI (Details for Query)
> 
>
> Key: SPARK-20600
> URL: https://issues.apache.org/jira/browse/SPARK-20600
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Jacek Laskowski
>Priority: Trivial
> Attachments: kafka-source-scan-webui.png
>
>
> Executing the following batch query gives the default stringified/internal 
> name of {{KafkaRelation}} in web UI (under Details for Query), i.e. 
> http://localhost:4040/SQL/execution/?id=3 (<-- change the {{id}}). See the 
> attachment.
> {code}
> spark.
>   read.
>   format("kafka").
>   option("subscribe", "topic1").
>   option("kafka.bootstrap.servers", "localhost:9092").
>   load.
>   select('value cast "string").
>   write.
>   csv("fromkafka.csv")
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-19268) File does not exist: /tmp/temporary-157b89c1-27bb-49f3-a70c-ca1b75022b4d/state/0/2/1.delta

2017-05-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001631#comment-16001631
 ] 

Shixiong Zhu commented on SPARK-19268:
--

[~skrishna] could you provide your code, or the output of 
"dataset.explain(true)", please? Perhaps there is another bug in aggregation.

> File does not exist: 
> /tmp/temporary-157b89c1-27bb-49f3-a70c-ca1b75022b4d/state/0/2/1.delta
> --
>
> Key: SPARK-19268
> URL: https://issues.apache.org/jira/browse/SPARK-19268
> Project: Spark
>  Issue Type: Bug
>  Components: Structured Streaming
>Affects Versions: 2.0.0, 2.0.1, 2.0.2, 2.1.0
> Environment: - hadoop2.7
> - Java 7
>Reporter: liyan
>Assignee: Shixiong Zhu
>Priority: Critical
> Fix For: 2.1.1, 2.2.0
>
>
> bq. ./run-example sql.streaming.JavaStructuredKafkaWordCount 
> 192.168.3.110:9092 subscribe topic03
> When I run the Spark example, it raises the following error:
> {quote}
> Exception in thread "main" 17/01/17 14:13:41 DEBUG ContextCleaner: Got 
> cleaning task CleanBroadcast(4)
> org.apache.spark.sql.streaming.StreamingQueryException: Job aborted due to 
> stage failure: Task 2 in stage 9.0 failed 1 times, most recent failure: Lost 
> task 2.0 in stage 9.0 (TID 46, localhost, executor driver): 
> java.lang.IllegalStateException: Error reading delta file 
> /tmp/temporary-157b89c1-27bb-49f3-a70c-ca1b75022b4d/state/0/2/1.delta of 
> HDFSStateStoreProvider[id = (op=0, part=2), dir = 
> /tmp/temporary-157b89c1-27bb-49f3-a70c-ca1b75022b4d/state/0/2]: 
> /tmp/temporary-157b89c1-27bb-49f3-a70c-ca1b75022b4d/state/0/2/1.delta does 
> not exist
>   at 
> org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$updateFromDeltaFile(HDFSBackedStateStoreProvider.scala:354)
>   at 
> org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap$1$$anonfun$6.apply(HDFSBackedStateStoreProvider.scala:306)
>   at 
> org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap$1$$anonfun$6.apply(HDFSBackedStateStoreProvider.scala:303)
>   at scala.Option.getOrElse(Option.scala:121)
>   at 
> org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap$1.apply(HDFSBackedStateStoreProvider.scala:303)
>   at 
> org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap$1.apply(HDFSBackedStateStoreProvider.scala:302)
>   at scala.Option.getOrElse(Option.scala:121)
>   at 
> org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap(HDFSBackedStateStoreProvider.scala:302)
>   at 
> org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.getStore(HDFSBackedStateStoreProvider.scala:220)
>   at 
> org.apache.spark.sql.execution.streaming.state.StateStore$.get(StateStore.scala:151)
>   at 
> org.apache.spark.sql.execution.streaming.state.StateStoreRDD.compute(StateStoreRDD.scala:61)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>   at org.apache.spark.scheduler.Task.run(Task.scala:99)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /tmp/temporary-157b89c1-27bb-49f3-a70c-ca1b75022b4d/state/0/2/1.delta
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
>   at 
> 

[jira] [Commented] (SPARK-18057) Update structured streaming kafka from 10.0.1 to 10.2.0

2017-05-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001474#comment-16001474
 ] 

Shixiong Zhu commented on SPARK-18057:
--

[~helena_e] I didn't mean for Spark. Even in Spark, the required code changes 
are in tests. I meant: as a Spark user, why can't you add the Kafka client as 
a dependency and upgrade it yourself? Is it because you have test code similar 
to Spark's, or are you using the Kafka API directly in your code?

> Update structured streaming kafka from 10.0.1 to 10.2.0
> ---
>
> Key: SPARK-18057
> URL: https://issues.apache.org/jira/browse/SPARK-18057
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Reporter: Cody Koeninger
>
> There are a couple of relevant KIPs here, 
> https://archive.apache.org/dist/kafka/0.10.1.0/RELEASE_NOTES.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-13747) Concurrent execution in SQL doesn't work with Scala ForkJoinPool

2017-05-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001426#comment-16001426
 ] 

Shixiong Zhu commented on SPARK-13747:
--

[~revolucion09] The default dispatcher uses ForkJoinPool. See 
http://doc.akka.io/docs/akka/current/scala/dispatchers.html#Default_dispatcher 

> Concurrent execution in SQL doesn't work with Scala ForkJoinPool
> 
>
> Key: SPARK-13747
> URL: https://issues.apache.org/jira/browse/SPARK-13747
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.0, 2.0.1
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
>
> Running the following code may fail:
> {code}
> (1 to 100).par.foreach { _ =>
>   println(sc.parallelize(1 to 5).map { i => (i, i) }.toDF("a", "b").count())
> }
> java.lang.IllegalArgumentException: spark.sql.execution.id is already set 
> at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:87)
>  
> at 
> org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1904) 
> at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1385) 
> {code}
> This is because SparkContext.runJob can be suspended when using a 
> ForkJoinPool (e.g.,scala.concurrent.ExecutionContext.Implicits.global) as it 
> calls Await.ready (introduced by https://github.com/apache/spark/pull/9264).
> So when SparkContext.runJob is suspended, ForkJoinPool will run another task 
> in the same thread; however, the local properties have already been polluted.
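The pollution mechanism — two logical tasks observing the same thread-local slot because they end up on the same thread — can be mimicked with a plain thread pool. This is a Python analogy, not Spark's actual mechanism (which involves ForkJoinPool scheduling a second task onto a thread blocked in `Await.ready`); the names are hypothetical:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

local = threading.local()
errors = []

def run_query(execution_id):
    # mirrors SQLExecution.withNewExecutionId: refuse a leftover id
    if getattr(local, "execution_id", None) is not None:
        errors.append(
            f"spark.sql.execution.id is already set ({local.execution_id})")
        return
    local.execution_id = execution_id
    # ... run the job; a correct implementation clears the id in a
    # finally block (omitted here to demonstrate the pollution)

with ThreadPoolExecutor(max_workers=1) as pool:  # both tasks share one thread
    pool.submit(run_query, "q1").result()
    pool.submit(run_query, "q2").result()        # sees q1's leftover id

print(errors)
```

The second query fails the "already set" check purely because it ran on a thread whose local state was left behind by the first, which is the same observable symptom as the ForkJoinPool case.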



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-13747) Concurrent execution in SQL doesn't work with Scala ForkJoinPool

2017-05-08 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001415#comment-16001415
 ] 

Shixiong Zhu commented on SPARK-13747:
--

Okay, I see: the thread name is "Sake-akka.actor.default-dispatcher-3", so I 
think it's just the Akka default dispatcher.

> Concurrent execution in SQL doesn't work with Scala ForkJoinPool
> 
>
> Key: SPARK-13747
> URL: https://issues.apache.org/jira/browse/SPARK-13747
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.0, 2.0.1
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
>
> Running the following code may fail:
> {code}
> (1 to 100).par.foreach { _ =>
>   println(sc.parallelize(1 to 5).map { i => (i, i) }.toDF("a", "b").count())
> }
> java.lang.IllegalArgumentException: spark.sql.execution.id is already set 
> at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:87)
>  
> at 
> org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1904) 
> at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1385) 
> {code}
> This is because SparkContext.runJob can be suspended when using a 
> ForkJoinPool (e.g.,scala.concurrent.ExecutionContext.Implicits.global) as it 
> calls Await.ready (introduced by https://github.com/apache/spark/pull/9264).
> So when SparkContext.runJob is suspended, ForkJoinPool will run another task 
> in the same thread; however, the local properties have already been polluted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org


