[jira] [Commented] (SPARK-30585) scalatest fails for Apache Spark SQL project
[ https://issues.apache.org/jira/browse/SPARK-30585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019425#comment-17019425 ] Rashmi commented on SPARK-30585:

- cte.sql
- datetime.sql
- describe-table-column.sql

03:48:33.567 WARN org.apache.spark.sql.execution.command.DropTableCommand: org.apache.spark.sql.AnalysisException: Table or view not found: default.t; line 1 pos 14
org.apache.spark.sql.AnalysisException: Table or view not found: default.t; line 1 pos 14
	at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:47)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:733)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:685)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:715)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:708)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:89)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsUp(AnalysisHelper.scala:86)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$1.apply(AnalysisHelper.scala:87)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$1.apply(AnalysisHelper.scala:87)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
	at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
	at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:87)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsUp(AnalysisHelper.scala:86)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:708)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:654)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
	at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
	at scala.collection.immutable.List.foldLeft(List.scala:84)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$35.apply(Analyzer.scala:699)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$35.apply(Analyzer.scala:692)
	at org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withAnalysisContext(Analyzer.scala:87)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:692)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:703)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:715)
	at ...
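The query files listed above (cte.sql, datetime.sql, describe-table-column.sql) are driven by SQLQueryTestSuite, so a single file can be re-run in isolation to get a cleaner failure trace. A minimal sketch, assuming a Spark 2.4.x source checkout and the bundled sbt/Maven launchers; the suite filter uses ScalaTest's `-z` substring match:

```shell
# Assumes a Spark 2.4.x source tree; run from the repository root.
# Re-run only SQLQueryTestSuite, filtered to a single input file:
build/sbt "sql/testOnly *SQLQueryTestSuite -- -z cte.sql"

# The same suite via Maven, without the per-file filter:
build/mvn -pl sql/core test -Dtest=none -DwildcardSuites=org.apache.spark.sql.SQLQueryTestSuite
```

Narrowing to one file also makes it easier to tell whether the `Table or view not found: default.t` warning is just cleanup noise from `DROP TABLE IF EXISTS` or part of the actual failure.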
[jira] [Created] (SPARK-30585) scalatest fails for Apache Spark SQL project
Rashmi created SPARK-30585:

Summary: scalatest fails for Apache Spark SQL project
Key: SPARK-30585
URL: https://issues.apache.org/jira/browse/SPARK-30585
Project: Spark
Issue Type: Bug
Components: Build
Affects Versions: 2.4.0
Reporter: Rashmi

Error logs:
23:36:49.039 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 3.0 (TID 6, localhost, executor driver): TaskKilled (Stage cancelled)
23:36:49.039 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 3.0 (TID 7, localhost, executor driver): TaskKilled (Stage cancelled)
23:36:51.354 WARN org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor: Current batch is falling behind. The trigger interval is 100 milliseconds, but spent 1854 milliseconds
23:36:51.381 WARN org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader$DataReaderThread: data reader thread failed
org.apache.spark.SparkException: Exception thrown in awaitResult:
	at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
	at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:92)
	at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:76)
	at org.apache.spark.sql.execution.streaming.sources.ContinuousMemoryStreamInputPartitionReader.getRecord(ContinuousMemoryStream.scala:195)
	at org.apache.spark.sql.execution.streaming.sources.ContinuousMemoryStreamInputPartitionReader.next(ContinuousMemoryStream.scala:181)
	at org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader$DataReaderThread.run(ContinuousQueuedDataReader.scala:143)
Caused by: org.apache.spark.SparkException: Could not find ContinuousMemoryStreamRecordEndpoint-f7d4460c-9f4e-47ee-a846-258b34964852-9.
	at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:160)
	at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:135)
	at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:229)
	at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:523)
	at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:91)
	... 4 more
23:36:51.389 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 4.0 (TID 9, localhost, executor driver): TaskKilled (Stage cancelled)
23:36:51.390 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 4.0 (TID 8, localhost, executor driver): TaskKilled (Stage cancelled)
- flatMap
23:36:51.754 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 5.0 (TID 11, localhost, executor driver): TaskKilled (Stage cancelled)
23:36:51.754 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 5.0 (TID 10, localhost, executor driver): TaskKilled (Stage cancelled)
23:36:52.248 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 6.0 (TID 13, localhost, executor driver): TaskKilled (Stage cancelled)
23:36:52.249 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 6.0 (TID 12, localhost, executor driver): TaskKilled (Stage cancelled)
- filter
23:36:52.611 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 7.0 (TID 14, localhost, executor driver): TaskKilled (Stage cancelled)
23:36:52.611 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 7.0 (TID 15, localhost, executor driver): TaskKilled (Stage cancelled)
- deduplicate
- timestamp
23:36:53.015 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 8.0 (TID 16, localhost, executor driver): TaskKilled (Stage cancelled)
23:36:53.015 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 8.0 (TID 17, localhost, executor driver): TaskKilled (Stage cancelled)
- subquery alias
23:36:53.572 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 9.0 (TID 19, localhost, executor driver): TaskKilled (Stage cancelled)
23:36:53.572 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 9.0 (TID 18, localhost, executor driver): TaskKilled (Stage cancelled)
23:36:53.953 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 10.0 (TID 21, localhost, executor driver): TaskKilled (Stage cancelled)
23:36:53.953 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 10.0 (TID 20, localhost, executor driver): TaskKilled (Stage cancelled)
23:36:54.552 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 11.0 (TID 23, localhost, executor driver): TaskKilled (Stage cancelled)
23:36:54.552 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 11.0 (TID 22, localhost, executor driver): TaskKilled (Stage cancelled)
- repeatedly restart
23:36:54.591 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage ...
[jira] [Commented] (SPARK-30585) scalatest fails for Apache Spark SQL project
[ https://issues.apache.org/jira/browse/SPARK-30585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019335#comment-17019335 ] Rashmi commented on SPARK-30585:

Trying to build Apache Spark on Power.

> scalatest fails for Apache Spark SQL project
> Key: SPARK-30585
> URL: https://issues.apache.org/jira/browse/SPARK-30585
> Project: Spark
> Issue Type: Bug
> Components: Build
> Affects Versions: 2.4.0
> Reporter: Rashmi
> Priority: Blocker
[jira] [Commented] (SPARK-30400) Test failure in SQL module on ppc64le
[ https://issues.apache.org/jira/browse/SPARK-30400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17014052#comment-17014052 ] Rashmi commented on SPARK-30400:

I am facing the same error building Spark on Power. Any pointers to fix this issue?

> Test failure in SQL module on ppc64le
> Key: SPARK-30400
> URL: https://issues.apache.org/jira/browse/SPARK-30400
> Project: Spark
> Issue Type: Bug
> Components: SQL, Tests
> Affects Versions: 2.4.0
> Environment: os: rhel 7.6
> arch: ppc64le
> Reporter: AK97
> Priority: Major
>
> I have been trying to build Apache Spark on rhel_7.6/ppc64le; however, the test cases are failing in the SQL module with the following error:
> {code}
> - CREATE TABLE USING AS SELECT based on the file without write permission *** FAILED ***
>   Expected exception org.apache.spark.SparkException to be thrown, but no exception was thrown (CreateTableAsSelectSuite.scala:92)
> - create a table, drop it and create another one with the same name *** FAILED ***
>   org.apache.spark.sql.AnalysisException: Table default.jsonTable already exists. You need to drop it first.;
>   at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:159)
>   at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
>   at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
>   at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
>   at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:195)
>   at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:195)
>   at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3365)
>   at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
>   at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
>   at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
> {code}
> Would like some help understanding the cause of this. I am running it on a high-end VM with good connectivity.
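Both failures quoted above come from CreateTableAsSelectSuite, so re-running just that suite keeps the log manageable. One possible cause worth ruling out for the "without write permission" case: that test removes write permission from a directory and expects the write to fail, but when the build runs as root the permission bits are ignored, so no exception is thrown. A sketch, assuming a Spark 2.4.x source checkout:

```shell
# Assumes a Spark 2.4.x source tree; run from the repository root.
# Permission-based tests are only meaningful for a non-root user:
whoami

# Re-run only the failing suite:
build/sbt "sql/testOnly *CreateTableAsSelectSuite"
```

If the suite passes as a non-root user, the failure is environmental rather than ppc64le-specific.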