[jira] [Commented] (SPARK-30585) scalatest fails for Apache Spark SQL project

2020-01-22 Thread Hyukjin Kwon (Jira)


[ https://issues.apache.org/jira/browse/SPARK-30585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021675#comment-17021675 ]

Hyukjin Kwon commented on SPARK-30585:
--------------------------------------

And please don't just copy and paste the logs. You should file an issue that 
describes the symptoms, with a reproducer if possible.
I am resolving this until those details are provided clearly.
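
For reference, this is a minimal sketch of the kind of standalone reproducer 
being requested, assuming the failures come from the continuous-processing 
tests (the names "flatMap", "filter", "deduplicate" in the quoted log match 
ContinuousSuite); it is illustrative only, not code from this thread:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.Trigger

    object Spark30585Repro {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .master("local[2]")
          .appName("SPARK-30585-repro")
          .getOrCreate()

        // Trigger.Continuous drives the ContinuousQueuedDataReader code
        // path that appears in the stack trace quoted below.
        val query = spark.readStream
          .format("rate")
          .load()
          .selectExpr("value")
          .writeStream
          .format("console")
          .trigger(Trigger.Continuous("1 second"))
          .start()

        query.awaitTermination(10000L) // run briefly, then shut down
        query.stop()
        spark.stop()
      }
    }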

> scalatest fails for Apache Spark SQL project
> --------------------------------------------
>
> Key: SPARK-30585
> URL: https://issues.apache.org/jira/browse/SPARK-30585
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.4.0
>Reporter: Rashmi
>Priority: Major
>
> Error logs:-
> 23:36:49.039 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in 
> stage 3.0 (TID 6, localhost, executor driver): TaskKilled (Stage cancelled)
> 23:36:49.039 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in 
> stage 3.0 (TID 7, localhost, executor driver): TaskKilled (Stage cancelled)
> 23:36:51.354 WARN 
> org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor: Current 
> batch is falling behind. The trigger interval is 100 milliseconds, but spent 
> 1854 milliseconds
> 23:36:51.381 WARN 
> org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader$DataReaderThread:
>  data reader thread failed
> org.apache.spark.SparkException: Exception thrown in awaitResult:
>  at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
>  at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
>  at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:92)
>  at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:76)
>  at 
> org.apache.spark.sql.execution.streaming.sources.ContinuousMemoryStreamInputPartitionReader.getRecord(ContinuousMemoryStream.scala:195)
>  at 
> org.apache.spark.sql.execution.streaming.sources.ContinuousMemoryStreamInputPartitionReader.next(ContinuousMemoryStream.scala:181)
>  at 
> org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader$DataReaderThread.run(ContinuousQueuedDataReader.scala:143)
> Caused by: org.apache.spark.SparkException: Could not find 
> ContinuousMemoryStreamRecordEndpoint-f7d4460c-9f4e-47ee-a846-258b34964852-9.
>  at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:160)
>  at 
> org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:135)
>  at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:229)
>  at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:523)
>  at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:91)
>  ... 4 more
> 23:36:51.389 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in 
> stage 4.0 (TID 9, localhost, executor driver): TaskKilled (Stage cancelled)
> 23:36:51.390 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in 
> stage 4.0 (TID 8, localhost, executor driver): TaskKilled (Stage cancelled)
> - flatMap
> 23:36:51.754 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in 
> stage 5.0 (TID 11, localhost, executor driver): TaskKilled (Stage cancelled)
> 23:36:51.754 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in 
> stage 5.0 (TID 10, localhost, executor driver): TaskKilled (Stage cancelled)
> 23:36:52.248 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in 
> stage 6.0 (TID 13, localhost, executor driver): TaskKilled (Stage cancelled)
> 23:36:52.249 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in 
> stage 6.0 (TID 12, localhost, executor driver): TaskKilled (Stage cancelled)
> - filter
> 23:36:52.611 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in 
> stage 7.0 (TID 14, localhost, executor driver): TaskKilled (Stage cancelled)
> 23:36:52.611 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in 
> stage 7.0 (TID 15, localhost, executor driver): TaskKilled (Stage cancelled)
> - deduplicate
> - timestamp
> 23:36:53.015 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in 
> stage 8.0 (TID 16, localhost, executor driver): TaskKilled (Stage cancelled)
> 23:36:53.015 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in 
> stage 8.0 (TID 17, localhost, executor driver): TaskKilled (Stage cancelled)
> - subquery alias
> 23:36:53.572 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in 
> stage 9.0 (TID 19, localhost, executor driver): TaskKilled (Stage cancelled)
> 23:36:53.572 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in 
> stage 9.0 (TID 18, localhost, executor driver): TaskKilled (Stage cancelled)
> 23:36:53.953 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in 
> stage 10.0 (TID 21, localhost, executor driver): TaskKilled (Stage cancelled)
> 23:36:53.953 WARN 
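
When reading the trace above, note that the outer SparkException ("Exception 
thrown in awaitResult:") is only a wrapper added by ThreadUtils.awaitResult; 
the actionable failure is the nested cause, the missing 
ContinuousMemoryStreamRecordEndpoint, which suggests the driver-side RPC 
endpoint had already been unregistered when the reader thread called askSync. 
A small sketch of that wrapping behaviour using plain Scala futures (an 
analogy, not Spark's internal code):

    import scala.concurrent.{Await, Promise}
    import scala.concurrent.duration._

    object AwaitResultDemo {
      def main(args: Array[String]): Unit = {
        val p = Promise[String]()
        // Analogous to the Dispatcher's "Could not find ... Endpoint" error.
        p.failure(new RuntimeException("Could not find endpoint"))
        try {
          Await.result(p.future, 1.second)
        } catch {
          case e: Throwable =>
            // Spark's ThreadUtils.awaitResult rethrows this wrapped as
            // SparkException("Exception thrown in awaitResult:", e), so the
            // "Caused by:" section carries the real diagnosis.
            println(s"underlying cause: ${e.getMessage}")
        }
      }
    }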

[jira] [Commented] (SPARK-30585) scalatest fails for Apache Spark SQL project

2020-01-22 Thread Dongjoon Hyun (Jira)


[ https://issues.apache.org/jira/browse/SPARK-30585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021518#comment-17021518 ]

Dongjoon Hyun commented on SPARK-30585:
---------------------------------------

Hi, [~rashmi_sakhalkar]
Thank you for reporting, but please don't use `Blocker` when creating an 
issue.


[jira] [Commented] (SPARK-30585) scalatest fails for Apache Spark SQL project

2020-01-20 Thread Rashmi (Jira)


[ https://issues.apache.org/jira/browse/SPARK-30585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17019425#comment-17019425 ]

Rashmi commented on SPARK-30585:
--------------------------------

- cte.sql
- datetime.sql
- describe-table-column.sql
03:48:33.567 WARN org.apache.spark.sql.execution.command.DropTableCommand: 
org.apache.spark.sql.AnalysisException: Table or view not found: default.t; 
line 1 pos 14
org.apache.spark.sql.AnalysisException: Table or view not found: default.t; 
line 1 pos 14
 at 
org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:47)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:733)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:685)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:715)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:708)
 at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
 at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
 at 
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
 at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:89)
 at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
 at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
 at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsUp(AnalysisHelper.scala:86)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
 at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$1.apply(AnalysisHelper.scala:87)
 at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$1.apply(AnalysisHelper.scala:87)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
 at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
 at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:87)
 at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
 at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
 at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsUp(AnalysisHelper.scala:86)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:708)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:654)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
 at 
scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
 at scala.collection.immutable.List.foldLeft(List.scala:84)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
 at scala.collection.immutable.List.foreach(List.scala:392)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$35.apply(Analyzer.scala:699)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$35.apply(Analyzer.scala:692)
 at 
org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withAnalysisContext(Analyzer.scala:87)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:692)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:703)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:715)
 at 
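
The DropTableCommand warning at the top of this trace is usually cleanup 
noise: a DROP TABLE was issued for a table (default.t) that was never 
created, so the analyzer fails to resolve it. A minimal sketch of the 
difference (hypothetical code, not from this thread; assumes a local 
SparkSession):

    import org.apache.spark.sql.{AnalysisException, SparkSession}

    object DropTableDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().master("local[2]").getOrCreate()

        // Guarded drop: succeeds quietly when default.t does not exist.
        spark.sql("DROP TABLE IF EXISTS default.t")

        // Unguarded drop: raises the same AnalysisException logged above
        // when the table is absent.
        try spark.sql("DROP TABLE default.t")
        catch { case e: AnalysisException => println(e.getMessage) }

        spark.stop()
      }
    }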

[jira] [Commented] (SPARK-30585) scalatest fails for Apache Spark SQL project

2020-01-20 Thread Rashmi (Jira)


[ https://issues.apache.org/jira/browse/SPARK-30585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17019335#comment-17019335 ]

Rashmi commented on SPARK-30585:
--------------------------------

Trying to build Apache Spark on Power.
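
For reference, the standard commands for that (from the Spark build 
documentation, not from this thread) are ./build/mvn -DskipTests clean 
package for the build itself, followed by something like 
./build/mvn -pl sql/core test to rerun only the SQL module's scalatest 
suites; rerunning per module should narrow down which suite actually fails 
on the Power machine.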

> scalatest fails for Apache Spark SQL project
> --------------------------------------------
>
> Key: SPARK-30585
> URL: https://issues.apache.org/jira/browse/SPARK-30585
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.4.0
>Reporter: Rashmi
>Priority: Blocker
>