[jira] [Updated] (SPARK-16870) add "spark.sql.broadcastTimeout" to docs/sql-programming-guide.md to help people fix this timeout error when it happens
[ https://issues.apache.org/jira/browse/SPARK-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Ke updated SPARK-16870:
    Description:
Here is my workload and what I found: I run a large number of jobs with spark-sql at the same time, and some of the jobs (those containing a broadcast-join operator) fail with a timeout error:

16/08/03 15:43:23 ERROR SparkExecuteStatementOperation: Error executing query, currentState RUNNING,
java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
        at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
        at scala.concurrent.Await$.result(package.scala:107)
        at org.apache.spark.sql.execution.joins.BroadcastHashOuterJoin.doExecute(BroadcastHashOuterJoin.scala:113)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
        at org.apache.spark.sql.execution.Filter.doExecute(basicOperators.scala:70)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
        at org.apache.spark.sql.execution.Project.doExecute(basicOperators.scala:46)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
        at org.apache.spark.sql.execution.ConvertToSafe.doExecute(rowFormatConverters.scala:56)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
        at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:201)
        at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:127)
        at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:276)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
        at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
        at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
        at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:211)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:154)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:151)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1793)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:164)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at
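The configuration this issue asks to document already exists: when many concurrent broadcast joins queue up behind busy executors, building and broadcasting the small side can take longer than the 300-second default of spark.sql.broadcastTimeout. A minimal sketch of the two usual knobs, assuming a Spark 2.x SparkSession named spark (the 600-second value is only an illustration, not a recommendation):

    // Allow broadcast joins to wait longer than the default 300 seconds.
    spark.conf.set("spark.sql.broadcastTimeout", "600")

    // Alternatively, disable automatic broadcast joins so the planner
    // falls back to a shuffle-based join (-1 turns the size threshold off).
    spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")

The same keys can be passed on the command line, e.g. spark-sql --conf spark.sql.broadcastTimeout=600, or set per session with SET spark.sql.broadcastTimeout=600; inside spark-sql.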
[jira] [Commented] (SPARK-16870) add "spark.sql.broadcastTimeout" to docs/sql-programming-guide.md to help people fix this timeout error when it happens
[ https://issues.apache.org/jira/browse/SPARK-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406834#comment-15406834 ] Liang Ke commented on SPARK-16870: -- Thanks :) I have updated it.
> add "spark.sql.broadcastTimeout" to docs/sql-programming-guide.md to help
> people fix this timeout error when it happens
> -
>
> Key: SPARK-16870
> URL: https://issues.apache.org/jira/browse/SPARK-16870
> Project: Spark
> Issue Type: Improvement
> Reporter: Liang Ke
> Priority: Trivial
[jira] [Created] (SPARK-16870) Timeout in seconds for the broadcast wait time in broadcast joins
Liang Ke created SPARK-16870: Summary: Timeout in seconds for the broadcast wait time in broadcast joins Key: SPARK-16870 URL: https://issues.apache.org/jira/browse/SPARK-16870 Project: Spark Issue Type: Improvement Reporter: Liang Ke Priority: Trivial
[jira] [Commented] (SPARK-16735) Fail to create a map containing decimal type with literals having different inferred precisions and scales
[ https://issues.apache.org/jira/browse/SPARK-16735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394864#comment-15394864 ] Liang Ke commented on SPARK-16735: -- https://github.com/apache/spark/pull/14374
> Fail to create a map containing decimal type with literals having different
> inferred precisions and scales
> -
>
> Key: SPARK-16735
> URL: https://issues.apache.org/jira/browse/SPARK-16735
[jira] [Commented] (SPARK-16715) Fix a potential ExprId conflict for SubexpressionEliminationSuite."Semantic equals and hash"
[ https://issues.apache.org/jira/browse/SPARK-16715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394860#comment-15394860 ] Liang Ke commented on SPARK-16715: -- Sorry, I mistyped the JIRA id.
> Fix a potential ExprId conflict for SubexpressionEliminationSuite."Semantic
> equals and hash"
>
> Key: SPARK-16715
> URL: https://issues.apache.org/jira/browse/SPARK-16715
> Project: Spark
> Issue Type: Bug
> Components: Tests
> Reporter: Shixiong Zhu
> Assignee: Shixiong Zhu
> Fix For: 2.0.1, 2.1.0
>
> SubexpressionEliminationSuite."Semantic equals and hash" assumes the default
> AttributeReference's exprId won't be "ExprId(1)". However, that depends on
> when this test runs. It may happen to use "ExprId(1)".
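For context on why the test is order-dependent: Catalyst draws expression ids from a JVM-global counter, so the concrete ExprId an expression receives depends on how many expressions earlier tests already created. A minimal sketch of that nondeterminism, assuming Catalyst's internal API is on the classpath (internal, so signatures vary across Spark versions):

    import org.apache.spark.sql.catalyst.expressions.AttributeReference
    import org.apache.spark.sql.types.IntegerType

    // Each AttributeReference gets a fresh ExprId from a global counter,
    // so the values printed below depend on what ran earlier in this JVM.
    val a = AttributeReference("a", IntegerType)()
    val b = AttributeReference("b", IntegerType)()
    println(a.exprId)
    println(b.exprId)

A test that hard-codes a particular value such as ExprId(1) can therefore pass or fail depending on which suites ran before it.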
[jira] [Issue Comment Deleted] (SPARK-16735) Fail to create a map containing decimal type with literals having different inferred precisions and scales
[ https://issues.apache.org/jira/browse/SPARK-16735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Ke updated SPARK-16735: - Comment: was deleted (was: Hi, can someone help me push my patch to GitHub? Thanks a lot :) )
> Fail to create a map containing decimal type with literals having different
> inferred precisions and scales
> -
>
> Key: SPARK-16735
> URL: https://issues.apache.org/jira/browse/SPARK-16735
[jira] [Updated] (SPARK-16735) Fail to create a map containing decimal type with literals having different inferred precisions and scales
[ https://issues.apache.org/jira/browse/SPARK-16735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Ke updated SPARK-16735: - Attachment: (was: SPARK-16735.patch)
> Fail to create a map containing decimal type with literals having different
> inferred precisions and scales
> -
>
> Key: SPARK-16735
> URL: https://issues.apache.org/jira/browse/SPARK-16735
[jira] [Comment Edited] (SPARK-16735) Fail to create a map containing decimal type with literals having different inferred precisions and scales
[ https://issues.apache.org/jira/browse/SPARK-16735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393639#comment-15393639 ] Liang Ke edited comment on SPARK-16735 at 7/26/16 12:00 PM: Hi, can someone help me push my patch to GitHub? Thanks a lot :)
was (Author: biglobster): hi, if some one can help me to push my patch to github? thx alot:)
> Fail to create a map containing decimal type with literals having different
> inferred precisions and scales
> -
>
> Key: SPARK-16735
> URL: https://issues.apache.org/jira/browse/SPARK-16735
> Attachments: SPARK-16735.patch
[jira] [Updated] (SPARK-16735) Fail to create a map containing decimal type with literals having different inferred precisions and scales
[ https://issues.apache.org/jira/browse/SPARK-16735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Ke updated SPARK-16735: - Attachment: SPARK-16735.patch
Hi, can someone help me push my patch to GitHub? Thanks a lot :)
> Fail to create a map containing decimal type with literals having different
> inferred precisions and scales
> -
>
> Key: SPARK-16735
> URL: https://issues.apache.org/jira/browse/SPARK-16735
> Attachments: SPARK-16735.patch
[jira] [Created] (SPARK-16735) Fail to create a map containing decimal type with literals having different inferred precisions and scales
Liang Ke created SPARK-16735: Summary: Fail to create a map containing decimal type with literals having different inferred precisions and scales Key: SPARK-16735 URL: https://issues.apache.org/jira/browse/SPARK-16735 Project: Spark Issue Type: Sub-task Affects Versions: 2.0.0, 2.0.1 Reporter: Liang Ke

In Spark 2.0, float literals are parsed as decimals. However, this introduces a side effect, described below.

spark-sql> select map(0.1,0.01, 0.2,0.033);
Error in query: cannot resolve 'map(CAST(0.1 AS DECIMAL(1,1)), CAST(0.01 AS DECIMAL(2,2)), CAST(0.2 AS DECIMAL(1,1)), CAST(0.033 AS DECIMAL(3,3)))' due to data type mismatch: The given values of function map should all be the same type, but they are [decimal(2,2), decimal(3,3)]; line 1 pos 7
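The mismatch comes from typing each literal independently: 0.01 infers decimal(2,2) while 0.033 infers decimal(3,3), and map() requires all of its values to share one type (the keys 0.1 and 0.2 already agree on decimal(1,1)). A workaround sketch, assuming a Spark 2.0 SparkSession named spark; the choice of decimal(3,3) as the common type is just an illustration:

    // Casting both value literals to a single decimal type satisfies
    // map()'s same-type requirement, so the query now resolves.
    spark.sql(
      "select map(0.1, cast(0.01 as decimal(3,3)), 0.2, cast(0.033 as decimal(3,3)))"
    ).show()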
[jira] [Commented] (SPARK-16603) Spark 2.0 fails to execute SQL statements whose field name begins with a number, like "d.30_day_loss_user", while Spark 1.6 supports them
[ https://issues.apache.org/jira/browse/SPARK-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391588#comment-15391588 ] Liang Ke commented on SPARK-16603: -- [~marymwu] This is not a bug.
> Spark 2.0 fails to execute SQL statements whose field name begins with a
> number, like "d.30_day_loss_user", while Spark 1.6 supports them
> --
>
> Key: SPARK-16603
> URL: https://issues.apache.org/jira/browse/SPARK-16603
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.0.0
> Reporter: marymwu
> Priority: Minor
>
> Error: org.apache.spark.sql.catalyst.parser.ParseException: mismatched input
> '.30' expecting {')', ','}
[jira] [Comment Edited] (SPARK-16603) Spark 2.0 fails to execute SQL statements whose field name begins with a number, like "d.30_day_loss_user", while Spark 1.6 supports them
[ https://issues.apache.org/jira/browse/SPARK-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391566#comment-15391566 ] Liang Ke edited comment on SPARK-16603 at 7/25/16 9:20 AM: --- Sorry. I read the Spark source code again and found it is a usage error: the right usage is to quote the column name, and the query then runs without error.

> select * from tsp where 'tsp.20_user_addr' <10;
16/07/25 18:05:34 INFO SparkSqlParser: Parsing command: select * from tsp where 'tsp.20_user_addr' <10
16/07/25 18:05:35 INFO HiveMetaStore: 0: create_database: Database(name:default, description:default database, locationUri:hdfs://ht-chen-slave1:8020/apps/root/warehouse, parameters:{})
16/07/25 18:05:35 INFO audit: ugi=root ip=unknown-ip-addr cmd=create_database: Database(name:default, description:default database, locationUri:hdfs://ht-chen-slave1:8020/apps/root/warehouse, parameters:{})
16/07/25 18:05:35 INFO HiveMetaStore: 0: get_table : db=default tbl=tsp
16/07/25 18:05:35 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=tsp
16/07/25 18:05:35 INFO CatalystSqlParser: Parsing command: int
16/07/25 18:05:35 INFO CatalystSqlParser: Parsing command: string
16/07/25 18:05:35 INFO HiveMetaStore: 0: get_table : db=default tbl=tsp
16/07/25 18:05:35 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=tsp
16/07/25 18:05:35 INFO CatalystSqlParser: Parsing command: int
16/07/25 18:05:35 INFO CatalystSqlParser: Parsing command: string
16/07/25 18:05:36 INFO CodeGenerator: Code generated in 300.833934 ms
Time taken: 1.793 seconds
16/07/25 18:05:36 INFO CliDriver: Time taken: 1.793 seconds

so, [~marymwu] this is not a bug

was (Author: biglobster): Sorry. I read the spark sourcecode again, find it's a usage error: right usage is quoted this column name. and query again without error. so, this is not a bug
> Spark 2.0 fails to execute SQL statements whose field name begins with a
> number, like "d.30_day_loss_user", while Spark 1.6 supports them
> --
>
> Key: SPARK-16603
> URL: https://issues.apache.org/jira/browse/SPARK-16603
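One caveat on the quoting shown in that log: single quotes make 'tsp.20_user_addr' a SQL string literal rather than a column reference, so the statement parses but does not actually filter on the column. Spark SQL's way to reference an identifier that begins with a digit is backticks. A sketch, assuming a table tsp with a numeric column 20_user_addr and a SparkSession named spark:

    // Backticks quote an identifier that starts with a digit;
    // single quotes would produce a string literal instead of a column.
    spark.sql("select * from tsp where `20_user_addr` < 10").show()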
[jira] [Issue Comment Deleted] (SPARK-16603) Spark 2.0 fails to execute SQL statements whose field name begins with a number, like "d.30_day_loss_user", while Spark 1.6 supports them
[ https://issues.apache.org/jira/browse/SPARK-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Ke updated SPARK-16603: - Comment: was deleted (was: [~marymwu] this is not a bug)
> Spark 2.0 fails to execute SQL statements whose field name begins with a
> number, like "d.30_day_loss_user", while Spark 1.6 supports them
> --
>
> Key: SPARK-16603
> URL: https://issues.apache.org/jira/browse/SPARK-16603
[jira] [Comment Edited] (SPARK-16603) Spark 2.0 fails to execute SQL statements whose field name begins with a number, like "d.30_day_loss_user", while Spark 1.6 supports them
[ https://issues.apache.org/jira/browse/SPARK-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391566#comment-15391566 ] Liang Ke edited comment on SPARK-16603 at 7/25/16 9:13 AM: --- Sorry. I read the Spark source code again and found it is a usage error: the right usage is to quote the column name, and the query then runs without error.
> select * from tsp where 'tsp.20_user_addr' <10;
so, this is not a bug @marymwu
was (Author: biglobster): Sorry. I read the spark sourcecode again, find it's a usage error: right usage is quoted this column name. and query again without error. so, this is not a bug
> Spark 2.0 fails to execute SQL statements whose field name begins with a
> number, like "d.30_day_loss_user", while Spark 1.6 supports them
> --
>
> Key: SPARK-16603
> URL: https://issues.apache.org/jira/browse/SPARK-16603
[jira] [Comment Edited] (SPARK-16603) Spark 2.0 fails to execute SQL statements whose field name begins with a number, like "d.30_day_loss_user", while Spark 1.6 supports them
[ https://issues.apache.org/jira/browse/SPARK-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391566#comment-15391566 ] Liang Ke edited comment on SPARK-16603 at 7/25/16 9:13 AM: --- Sorry. I read the Spark source code again and found it is a usage error: the right usage is to quote the column name, and the query then runs without error.
> select * from tsp where 'tsp.20_user_addr' <10;
so, this is not a bug
was (Author: biglobster): Sorry. I read the spark sourcecode again, find it's a usage error: right usage is quoted this column name. and query again without error. so, this is not a bug @marymwu
> Spark 2.0 fails to execute SQL statements whose field name begins with a
> number, like "d.30_day_loss_user", while Spark 1.6 supports them
> --
>
> Key: SPARK-16603
> URL: https://issues.apache.org/jira/browse/SPARK-16603
[jira] [Comment Edited] (SPARK-16603) Spark 2.0 fails to execute SQL statements whose field name begins with a number, like "d.30_day_loss_user", while Spark 1.6 supports them
[ https://issues.apache.org/jira/browse/SPARK-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391566#comment-15391566 ] Liang Ke edited comment on SPARK-16603 at 7/25/16 9:00 AM: --- Sorry. I read the Spark source code again and found it is a usage error: the right usage is to quote the column name, and the query then runs without error.
> select * from tsp where 'tsp.20_user_addr' <10;
so, this is not a bug
was (Author: biglobster): Sorry. I read the spark sourcecode again, find it's a usage error: right usage is quoted this column name. and query again without error. select * from tsp where 'tsp.20_user_addr' <10 so, this is not a bug
> Spark 2.0 fails to execute SQL statements whose field name begins with a
> number, like "d.30_day_loss_user", while Spark 1.6 supports them
> --
>
> Key: SPARK-16603
> URL: https://issues.apache.org/jira/browse/SPARK-16603
[jira] [Issue Comment Deleted] (SPARK-16603) Spark 2.0 fails to execute SQL statements whose field name begins with a number, like "d.30_day_loss_user", while Spark 1.6 supports them
[ https://issues.apache.org/jira/browse/SPARK-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Ke updated SPARK-16603: - Comment: was deleted (was: so, this is not a bug)
> Spark 2.0 fails to execute SQL statements whose field name begins with a
> number, like "d.30_day_loss_user", while Spark 1.6 supports them
> --
>
> Key: SPARK-16603
> URL: https://issues.apache.org/jira/browse/SPARK-16603
[jira] [Comment Edited] (SPARK-16603) Spark 2.0 fails to execute SQL statements whose field name begins with a number, like "d.30_day_loss_user", while Spark 1.6 supports them
[ https://issues.apache.org/jira/browse/SPARK-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391566#comment-15391566 ] Liang Ke edited comment on SPARK-16603 at 7/25/16 9:00 AM: --- Sorry. I read the Spark source code again and found it is a usage error: the right usage is to quote the column name, and the query then runs without error. select * from tsp where 'tsp.20_user_addr' <10 so, this is not a bug
was (Author: biglobster): Sorry. I read the spark sourcecode again, find it's a usage error: right usage is quoted this column name. and query again without error. select * from tsp where 'tsp.20_user_addr' <10
> Spark 2.0 fails to execute SQL statements whose field name begins with a
> number, like "d.30_day_loss_user", while Spark 1.6 supports them
> --
>
> Key: SPARK-16603
> URL: https://issues.apache.org/jira/browse/SPARK-16603
[jira] [Commented] (SPARK-16603) Spark 2.0 fails to execute SQL statements whose field name begins with a number, like "d.30_day_loss_user", while Spark 1.6 supports them
[ https://issues.apache.org/jira/browse/SPARK-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391568#comment-15391568 ] Liang Ke commented on SPARK-16603: -- so, this is not a bug
> Spark 2.0 fails to execute SQL statements whose field name begins with a
> number, like "d.30_day_loss_user", while Spark 1.6 supports them
> --
>
> Key: SPARK-16603
> URL: https://issues.apache.org/jira/browse/SPARK-16603
[jira] [Comment Edited] (SPARK-16603) Spark 2.0 fails to execute SQL statements whose field name begins with a number, like "d.30_day_loss_user", while Spark 1.6 supports them
[ https://issues.apache.org/jira/browse/SPARK-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391566#comment-15391566 ] Liang Ke edited comment on SPARK-16603 at 7/25/16 8:58 AM: --- Sorry. I read the Spark source code again and found it is a usage error: the right usage is to quote the column name, and the query then runs without error. select * from tsp where 'tsp.20_user_addr' <10
was (Author: biglobster): Sorry. I read the spark sourcecode again, find it's a usage error: right usage is adding quota between this column name. and query again without error. select * from tsp where 'tsp.20_user_addr' <10
> Spark 2.0 fails to execute SQL statements whose field name begins with a
> number, like "d.30_day_loss_user", while Spark 1.6 supports them
> --
>
> Key: SPARK-16603
> URL: https://issues.apache.org/jira/browse/SPARK-16603
[jira] [Commented] (SPARK-16603) Spark 2.0 fails to execute SQL statements whose field name begins with a number, like "d.30_day_loss_user", while Spark 1.6 supports them
[ https://issues.apache.org/jira/browse/SPARK-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391566#comment-15391566 ] Liang Ke commented on SPARK-16603: -- Sorry. I read the Spark source code again and found it is a usage error: the right usage is to add quotes around the column name, and the query then runs without error. select * from tsp where 'tsp.20_user_addr' <10
> Spark 2.0 fails to execute SQL statements whose field name begins with a
> number, like "d.30_day_loss_user", while Spark 1.6 supports them
> --
>
> Key: SPARK-16603
> URL: https://issues.apache.org/jira/browse/SPARK-16603