[jira] [Created] (SPARK-19835) Running CAST SQL on Spark hung

2017-03-06 Thread bruce xu (JIRA)
bruce xu created SPARK-19835:


 Summary: Running CAST SQL on Spark hung
 Key: SPARK-19835
 URL: https://issues.apache.org/jira/browse/SPARK-19835
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.0.1
 Environment: Spark 2.0.1, Hadoop 2.6.0, JDK 1.7
Reporter: bruce xu


When I run a CTAS query such as:
-
create table A as
select a.id, b.id, c.id from
(select * from B) a
left join
(select * from C) b
left join
(select momo_id from D) c
on a.id = b.id and a.id = c.id;
-
the query hung and did not continue running, and no error was returned. The last 
few INFO messages are as follows:
17/03/06 18:57:47 INFO spark.SparkContext: Starting job: processCmd at 
CliDriver.java:376
17/03/06 18:57:47 INFO scheduler.DAGScheduler: Got job 2 (processCmd at 
CliDriver.java:376) with 2 output partitions
17/03/06 18:57:47 INFO scheduler.DAGScheduler: Final stage: ResultStage 3 
(processCmd at CliDriver.java:376)
17/03/06 18:57:47 INFO scheduler.DAGScheduler: Parents of final stage: List()
17/03/06 18:57:47 INFO scheduler.DAGScheduler: Missing parents: List()
17/03/06 18:57:47 INFO scheduler.DAGScheduler: Submitting ResultStage 3 
(MapPartitionsRDD[20] at processCmd at CliDriver.java:376), which has no 
missing parents
17/03/06 18:57:47 INFO memory.MemoryStore: Block broadcast_8 stored as values 
in memory (estimated size 150.1 KB, free 3.7 GB)
17/03/06 18:57:47 INFO memory.MemoryStore: Block broadcast_8_piece0 stored as 
bytes in memory (estimated size 55.0 KB, free 3.7 GB)
17/03/06 18:57:47 INFO storage.BlockManagerInfo: Added broadcast_8_piece0 in 
memory on 10.87.101.151:64500 (size: 55.0 KB, free: 3.8 GB)
17/03/06 18:57:47 INFO spark.SparkContext: Created broadcast 8 from broadcast 
at DAGScheduler.scala:1012
17/03/06 18:57:47 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from 
ResultStage 3 (MapPartitionsRDD[20] at processCmd at CliDriver.java:376)
17/03/06 18:57:47 INFO cluster.YarnScheduler: Adding task set 3.0 with 2 tasks
17/03/06 18:57:47 INFO scheduler.FairSchedulableBuilder: Added task set 
TaskSet_3 tasks to pool default
17/03/06 18:57:47 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 3.0 
(TID 204, hadoop491.dx.momo.com, partition 0, RACK_LOCAL, 5824 bytes)
17/03/06 18:57:47 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: 
Launching task 204 on executor id: 50 hostname: hadoop491.dx.momo.com.
17/03/06 18:57:48 INFO storage.BlockManagerInfo: Added broadcast_8_piece0 in 
memory on hadoop491.dx.momo.com:33300 (size: 55.0 KB, free: 2.8 GB)
17/03/06 18:57:48 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in 
memory on hadoop491.dx.momo.com:33300 (size: 42.7 KB, free: 2.8 GB)
17/03/06 18:57:50 INFO storage.BlockManagerInfo: Added broadcast_5_piece0 in 
memory on hadoop491.dx.momo.com:33300 (size: 1678.6 KB, free: 2.8 GB)
17/03/06 18:57:50 INFO storage.BlockManagerInfo: Added broadcast_7_piece0 in 
memory on hadoop491.dx.momo.com:33300 (size: 3.8 MB, free: 2.8 GB)
17/03/06 18:57:51 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 3.0 
(TID 205, hadoop605.dx.momo.com, partition 1, ANY, 5824 bytes)
17/03/06 18:57:51 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: 
Launching task 205 on executor id: 9 hostname: hadoop605.dx.momo.com.
17/03/06 18:57:51 INFO storage.BlockManagerInfo: Added broadcast_8_piece0 in 
memory on hadoop605.dx.momo.com:37394 (size: 55.0 KB, free: 2.8 GB)
17/03/06 18:57:52 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in 
memory on hadoop605.dx.momo.com:37394 (size: 42.7 KB, free: 2.8 GB)
17/03/06 18:57:54 INFO storage.BlockManagerInfo: Added broadcast_5_piece0 in 
memory on hadoop605.dx.momo.com:37394 (size: 1678.6 KB, free: 2.8 GB)
17/03/06 18:57:54 INFO storage.BlockManagerInfo: Added broadcast_7_piece0 in 
memory on hadoop605.dx.momo.com:37394 (size: 3.8 MB, free: 2.8 GB)
17/03/06 19:05:31 INFO storage.BlockManagerInfo: Removed broadcast_6_piece0 on 
10.87.101.151:64500 in memory (size: 10.0 KB, free: 3.8 GB)
17/03/06 19:05:31 INFO storage.BlockManagerInfo: Removed broadcast_6_piece0 on 
hadoop572.dx.momo.com:22730 in memory (size: 10.0 KB, free: 2.8 GB)
---
When I press Ctrl+C to cancel the job, the output stack trace is as follows:
org.apache.spark.SparkException: Job 2 cancelled as part of cancellation of all 
jobs
at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
at 
org.apache.spark.scheduler.DAGScheduler.handleJobCancellation(DAGScheduler.scala:1393)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply$mcVI$sp(DAGScheduler.scala:725)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply(DAGScheduler.scala:725)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply(DAG

[jira] [Updated] (SPARK-19835) Running CTAS SQL on Spark 2.0.1 hung

2017-03-06 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-19835:
-
Summary: Running CTAS SQL on Spark 2.0.1 hung  (was: Running CAST SQL on 
Spark hung)



[jira] [Updated] (SPARK-19835) Running CTAS SQL on Spark 2.0.1 hung

2017-03-06 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-19835:
-
Description: 
When I run a CTAS query such as:
-
create table A as
select a.id, b.id, c.id from
(select * from B) a
left join
(select * from C) b
left join
(select id from D group by id) c
on a.id = b.id and a.id = c.id;
-
the query hung and did not continue running, and no error was returned. The last 
few INFO messages are as follows:
---
17/03/06 18:57:47 INFO spark.SparkContext: Starting job: processCmd at 
CliDriver.java:376
17/03/06 18:57:47 INFO scheduler.DAGScheduler: Got job 2 (processCmd at 
CliDriver.java:376) with 2 output partitions
17/03/06 18:57:47 INFO scheduler.DAGScheduler: Final stage: ResultStage 3 
(processCmd at CliDriver.java:376)
17/03/06 18:57:47 INFO scheduler.DAGScheduler: Parents of final stage: List()
17/03/06 18:57:47 INFO scheduler.DAGScheduler: Missing parents: List()
17/03/06 18:57:47 INFO scheduler.DAGScheduler: Submitting ResultStage 3 
(MapPartitionsRDD[20] at processCmd at CliDriver.java:376), which has no 
missing parents
17/03/06 18:57:47 INFO memory.MemoryStore: Block broadcast_8 stored as values 
in memory (estimated size 150.1 KB, free 3.7 GB)
17/03/06 18:57:47 INFO memory.MemoryStore: Block broadcast_8_piece0 stored as 
bytes in memory (estimated size 55.0 KB, free 3.7 GB)
17/03/06 18:57:47 INFO storage.BlockManagerInfo: Added broadcast_8_piece0 in 
memory on 10.87.101.151:64500 (size: 55.0 KB, free: 3.8 GB)
17/03/06 18:57:47 INFO spark.SparkContext: Created broadcast 8 from broadcast 
at DAGScheduler.scala:1012
17/03/06 18:57:47 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from 
ResultStage 3 (MapPartitionsRDD[20] at processCmd at CliDriver.java:376)
17/03/06 18:57:47 INFO cluster.YarnScheduler: Adding task set 3.0 with 2 tasks
17/03/06 18:57:47 INFO scheduler.FairSchedulableBuilder: Added task set 
TaskSet_3 tasks to pool default
17/03/06 18:57:47 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 3.0 
(TID 204, hadoop491.dx.momo.com, partition 0, RACK_LOCAL, 5824 bytes)
17/03/06 18:57:47 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: 
Launching task 204 on executor id: 50 hostname: hadoop491.dx.momo.com.
17/03/06 18:57:48 INFO storage.BlockManagerInfo: Added broadcast_8_piece0 in 
memory on hadoop491.dx.momo.com:33300 (size: 55.0 KB, free: 2.8 GB)
17/03/06 18:57:48 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in 
memory on hadoop491.dx.momo.com:33300 (size: 42.7 KB, free: 2.8 GB)
17/03/06 18:57:50 INFO storage.BlockManagerInfo: Added broadcast_5_piece0 in 
memory on hadoop491.dx.momo.com:33300 (size: 1678.6 KB, free: 2.8 GB)
17/03/06 18:57:50 INFO storage.BlockManagerInfo: Added broadcast_7_piece0 in 
memory on hadoop491.dx.momo.com:33300 (size: 3.8 MB, free: 2.8 GB)
17/03/06 18:57:51 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 3.0 
(TID 205, hadoop605.dx.momo.com, partition 1, ANY, 5824 bytes)
17/03/06 18:57:51 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: 
Launching task 205 on executor id: 9 hostname: hadoop605.dx.momo.com.
17/03/06 18:57:51 INFO storage.BlockManagerInfo: Added broadcast_8_piece0 in 
memory on hadoop605.dx.momo.com:37394 (size: 55.0 KB, free: 2.8 GB)
17/03/06 18:57:52 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in 
memory on hadoop605.dx.momo.com:37394 (size: 42.7 KB, free: 2.8 GB)
17/03/06 18:57:54 INFO storage.BlockManagerInfo: Added broadcast_5_piece0 in 
memory on hadoop605.dx.momo.com:37394 (size: 1678.6 KB, free: 2.8 GB)
17/03/06 18:57:54 INFO storage.BlockManagerInfo: Added broadcast_7_piece0 in 
memory on hadoop605.dx.momo.com:37394 (size: 3.8 MB, free: 2.8 GB)
17/03/06 19:05:31 INFO storage.BlockManagerInfo: Removed broadcast_6_piece0 on 
10.87.101.151:64500 in memory (size: 10.0 KB, free: 3.8 GB)
17/03/06 19:05:31 INFO storage.BlockManagerInfo: Removed broadcast_6_piece0 on 
hadoop572.dx.momo.com:22730 in memory (size: 10.0 KB, free: 2.8 GB)
--
When I press Ctrl+C to cancel the job, the output stack trace is as follows:
--
org.apache.spark.SparkException: Job 2 cancelled as part of cancellation of all 
jobs
at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
at 
org.apache.spark.scheduler.DAGScheduler.handleJobCancellation(DAGScheduler.scala:1393)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply$mcVI$sp(DAGScheduler.scala:725)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply(DAGScheduler.scala:725)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply(DAGScheduler.scala:725)
at scala.collection.mutab
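
A plausible reading of the hang, offered as a sketch rather than a confirmed 
diagnosis: in Hive-style join syntax the ON clause is optional and binds to the 
nearest preceding join, so the single ON clause above attaches only to the last 
LEFT JOIN. The first join of B and C is left without a condition and degenerates 
into a cross join, which on large tables can run long enough to look like a hang 
rather than fail. A rewrite with an explicit ON clause per join, using the id 
columns from the report (the b_id/c_id aliases are additions here, since a CTAS 
cannot emit three output columns all named id):

{code}
create table A as
select a.id, b.id as b_id, c.id as c_id
from (select * from B) a
left join (select * from C) b
  on a.id = b.id   -- condition for the first join, now explicit
left join (select id from D group by id) c
  on a.id = c.id;  -- condition for the second join
{code}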


[jira] [Commented] (SPARK-19835) Running CTAS SQL on Spark 2.0.1 hung

2017-03-06 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15897269#comment-15897269
 ] 

bruce xu commented on SPARK-19835:
--

Thanks for your response. I have corrected the wrong SQL; please check again.


[jira] [Commented] (SPARK-13983) HiveThriftServer2 cannot get "--hiveconf" or "--hivevar" variables since version 1.6 (both multi-session and single session)

2017-03-09 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-13983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15902924#comment-15902924
 ] 

bruce xu commented on SPARK-13983:
--

Spark 2.0.1 still has the bug.

bin/beeline -f test.sql --hivevar db_name=offline

The content of test.sql:
---
use ${hivevar:db_name};
---
The errors:
---
Error: org.apache.spark.sql.catalyst.parser.ParseException: 
no viable alternative at input '$'(line 1, pos 4)

== SQL ==
use ${hivevar:db_name}
^^^ (state=,code=0)
---

When can it be resolved?
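
For contrast, the quoted report below shows the same kind of variable working 
through the spark-sql CLI, so running the two entry points side by side isolates 
the Thrift server path. A sketch assembled from the commands already in this 
thread (paths and variable names are the reporters'):

{code}
-- works per the quoted report (spark-sql CLI path):
--   bin/spark-sql --hivevar db_name=default
--   spark-sql> use ${db_name};
-- fails per this comment (beeline path through HiveThriftServer2):
--   bin/beeline -f test.sql --hivevar db_name=offline
-- where test.sql contains:
use ${hivevar:db_name};
{code}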

> HiveThriftServer2 cannot get "--hiveconf" or "--hivevar" variables since 
> version 1.6 (both multi-session and single session)
> --
>
> Key: SPARK-13983
> URL: https://issues.apache.org/jira/browse/SPARK-13983
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 1.6.0, 1.6.1
> Environment: ubuntu, spark 1.6.0 standalone, spark 1.6.1 standalone
> (tried spark branch-1.6 snapshot as well)
> compiled with scala 2.10.5 and hadoop 2.6
> (-Phadoop-2.6 -Psparkr -Phive -Phive-thriftserver)
>Reporter: Teng Qiu
>Assignee: Cheng Lian
>
> HiveThriftServer2 should be able to get "--hiveconf" or "--hivevar" 
> variables from the JDBC client, either from a command-line parameter of 
> beeline, such as
> {{beeline --hiveconf spark.sql.shuffle.partitions=3 --hivevar 
> db_name=default}}
> or from JDBC connection string, like
> {{jdbc:hive2://localhost:1?spark.sql.shuffle.partitions=3#db_name=default}}
> this worked in Spark 1.5.x, but after upgrading to 1.6 it doesn't 
> work.
> to reproduce this issue, try to connect to HiveThriftServer2 with beeline:
> {code}
> bin/beeline -u jdbc:hive2://localhost:1 \
> --hiveconf spark.sql.shuffle.partitions=3 \
> --hivevar db_name=default
> {code}
> or
> {code}
> bin/beeline -u 
> jdbc:hive2://localhost:1?spark.sql.shuffle.partitions=3#db_name=default
> {code}
> will get following results:
> {code}
> 0: jdbc:hive2://localhost:1> set spark.sql.shuffle.partitions;
> +-------------------------------+--------+
> |              key              | value  |
> +-------------------------------+--------+
> | spark.sql.shuffle.partitions  | 200    |
> +-------------------------------+--------+
> 1 row selected (0.192 seconds)
> 0: jdbc:hive2://localhost:1> use ${db_name};
> Error: org.apache.spark.sql.AnalysisException: cannot recognize input near 
> '$' '{' 'db_name' in switch database statement; line 1 pos 4 (state=,code=0)
> {code}
> -
> but this bug does not affect current versions of the spark-sql CLI; the 
> following commands work:
> {code}
> bin/spark-sql --master local[2] \
>   --hiveconf spark.sql.shuffle.partitions=3 \
>   --hivevar db_name=default
> spark-sql> set spark.sql.shuffle.partitions
> spark.sql.shuffle.partitions   3
> Time taken: 1.037 seconds, Fetched 1 row(s)
> spark-sql> use ${db_name};
> OK
> Time taken: 1.697 seconds
> {code}
> so I think it may be caused by this change: 
> https://github.com/apache/spark/pull/8909 ([SPARK-10810] [SPARK-10902] [SQL] 
> Improve session management in SQL)
> perhaps by calling {{hiveContext.newSession}}, the variables from 
> {{sessionConf}} were not loaded into the new session? 
> (https://github.com/apache/spark/pull/8909/files#diff-8f8b7f4172e8a07ff20a4dbbbcc57b1dR69)






[jira] [Created] (SPARK-19927) SparkThriftServer2 cannot get "--hivevar" variables in Spark 2.1

2017-03-12 Thread bruce xu (JIRA)
bruce xu created SPARK-19927:


 Summary: SparkThriftServer2 cannot get "--hivevar" variables in 
Spark 2.1
 Key: SPARK-19927
 URL: https://issues.apache.org/jira/browse/SPARK-19927
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.1.0
 Environment: CentOS 6.5, Spark 2.1 built with mvn -Pyarn -Phadoop-2.6 
-Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
Reporter: bruce xu












[jira] [Updated] (SPARK-19927) SparkThriftServer2 cannot get "--hivevar" variables in Spark 2.1

2017-03-12 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-19927:
-
External issue ID: https://issues.apache.org/jira/browse/SPARK-13983, 
https://issues.apache.org/jira/browse/SPARK-18086  
(was: https://issues.apache.org/jira/browse/SPARK-13983)






[jira] [Updated] (SPARK-19927) SparkThriftServer2 cannot get "--hivevar" variables in Spark 2.1

2017-03-12 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-19927:
-
Description: 
Suppose the content of test1.sql is:
-
USE ${hivevar:db_name};
-

When executing: bin/spark-sql -f /tmp/test1.sql --hivevar db_name=offline

the output is: 
Error: org.apache.spark.sql.catalyst.parser.ParseException: 
no viable alternative at input ''(line 1, pos 4)

== SQL ==
use 
^^^ (state=,code=0)

So hivevar variables cannot be read from the CLI.
The bug still appears with the beeline command bin/beeline -f /tmp/test2.sql 
--hivevar db_name=offline, where test2.sql contains:

!connect jdbc:hive2://localhost:1 test test
USE ${hivevar:db_name};
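
A possible session-level workaround, sketched under two assumptions that this 
thread does not verify: that spark.sql.variable.substitute (on by default) is 
enabled, and that the substitution pass resolves un-prefixed ${...} references 
against keys assigned with SET in the same session:

{code}
-- hypothetical workaround: define the variable inside the session with SET
-- instead of passing it via --hivevar on the command line; note the reference
-- is the un-prefixed ${db_name}, not ${hivevar:db_name}
SET db_name=offline;
USE ${db_name};
{code}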







[jira] [Commented] (SPARK-19927) SparkThriftServer2 cannot get "--hivevar" variables in Spark 2.1

2017-03-12 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15906809#comment-15906809
 ] 

bruce xu commented on SPARK-19927:
--

I found this bug while migrating HiveQL production scripts to a Spark SQL 
production environment.

My assumption is that Spark SQL inherits Hive's user-facing behavior, and I 
guess the majority of Spark users share this assumption.

> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of test1.sql:
> -
> USE  ${hivevar:db_name};
> -
>  
> when execute: bin/spark-sql -f /tmp/test.sql  --hivevar db_name=offline
> the output is: 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> so hivevar can not be read from CLI.
> the bug still appears with beeline command: bin/beeline  -f /tmp/test2.sql  
> --hivevar db_name=offline with test2.sql:
> 
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> 






[jira] [Updated] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-12 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-19927:
-
Description: 
suppose the content of test1.sql:
-
USE${hivevar:db_name};
-
 
when execute: bin/spark-sql -f /tmp/test.sql  --hivevar db_name=offline

the output is: 
Error: org.apache.spark.sql.catalyst.parser.ParseException: 
no viable alternative at input ''(line 1, pos 4)

== SQL ==
use 
^^^ (state=,code=0)


so hivevar can not be read from CLI.
the bug still appears with beeline command: bin/beeline  -f /tmp/test2.sql  
--hivevar db_name=offline with test2.sql:

!connect jdbc:hive2://localhost:1 test test
USE ${hivevar:db_name};




  was:
suppose the content of test1.sql:
-
USE  ${hivevar:db_name};
-
 
when execute: bin/spark-sql -f /tmp/test.sql  --hivevar db_name=offline

the output is: 
Error: org.apache.spark.sql.catalyst.parser.ParseException: 
no viable alternative at input ''(line 1, pos 4)

== SQL ==
use 
^^^ (state=,code=0)


so hivevar can not be read from CLI.
the bug still appears with beeline command: bin/beeline  -f /tmp/test2.sql  
--hivevar db_name=offline with test2.sql:

!connect jdbc:hive2://localhost:1 test test
USE ${hivevar:db_name};





> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of test1.sql:
> -
> USE${hivevar:db_name};
> -
>  
> when execute: bin/spark-sql -f /tmp/test.sql  --hivevar db_name=offline
> the output is: 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> so hivevar can not be read from CLI.
> the bug still appears with beeline command: bin/beeline  -f /tmp/test2.sql  
> --hivevar db_name=offline with test2.sql:
> 
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> 






[jira] [Updated] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-12 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-19927:
-
Description: 
suppose the content of test1.sql:
-
USE ${hivevar:db_name};
-
 
when execute: bin/spark-sql -f /tmp/test.sql  --hivevar db_name=offline
the output is: 

Error: org.apache.spark.sql.catalyst.parser.ParseException: 
no viable alternative at input ''(line 1, pos 4)

== SQL ==
use 
^^^ (state=,code=0)
-

so hivevar can not be read from CLI.
the bug still appears with beeline command: bin/beeline  -f /tmp/test2.sql  
--hivevar db_name=offline with test2.sql:

!connect jdbc:hive2://localhost:1 test test
USE ${hivevar:db_name};




  was:
suppose the content of test1.sql:
-
USE${hivevar:db_name};
-
 
when execute: bin/spark-sql -f /tmp/test.sql  --hivevar db_name=offline

the output is: 
Error: org.apache.spark.sql.catalyst.parser.ParseException: 
no viable alternative at input ''(line 1, pos 4)

== SQL ==
use 
^^^ (state=,code=0)


so hivevar can not be read from CLI.
the bug still appears with beeline command: bin/beeline  -f /tmp/test2.sql  
--hivevar db_name=offline with test2.sql:

!connect jdbc:hive2://localhost:1 test test
USE ${hivevar:db_name};





> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of test1.sql:
> -
> USE ${hivevar:db_name};
> -
>  
> when execute: bin/spark-sql -f /tmp/test.sql  --hivevar db_name=offline
> the output is: 
> 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> -
> so hivevar can not be read from CLI.
> the bug still appears with beeline command: bin/beeline  -f /tmp/test2.sql  
> --hivevar db_name=offline with test2.sql:
> 
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> 






[jira] [Commented] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-15 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15925652#comment-15925652
 ] 

bruce xu commented on SPARK-19927:
--

ping [~r...@databricks.com], hoping you can review this issue.

> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of test1.sql:
> -
> USE ${hivevar:db_name};
> -
>  
> when execute: bin/spark-sql -f /tmp/test.sql  --hivevar db_name=offline
> the output is: 
> 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> -
> so hivevar can not be read from CLI.
> the bug still appears with beeline command: bin/beeline  -f /tmp/test2.sql  
> --hivevar db_name=offline with test2.sql:
> 
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> 






[jira] [Updated] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-17 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-19927:
-
Affects Version/s: 2.0.1

> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1, 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of test1.sql:
> -
> USE ${hivevar:db_name};
> -
>  
> when execute: bin/spark-sql -f /tmp/test.sql  --hivevar db_name=offline
> the output is: 
> 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> -
> so hivevar can not be read from CLI.
> the bug still appears with beeline command: bin/beeline  -f /tmp/test2.sql  
> --hivevar db_name=offline with test2.sql:
> 
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> 






[jira] [Updated] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-17 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-19927:
-
Description: 
suppose the content of file test1.sql:
-
USE ${hivevar:db_name};
-
 
when executing the command: bin/spark-sql -f /tmp/test.sql --hivevar db_name=offline
the output is: 

Error: org.apache.spark.sql.catalyst.parser.ParseException: 
no viable alternative at input ''(line 1, pos 4)

== SQL ==
use 
^^^ (state=,code=0)
-

so the parameter --hivevar can not be read from the CLI.
the bug still appears with the beeline command bin/beeline -f /tmp/test2.sql 
--hivevar db_name=offline, given test2.sql:

!connect jdbc:hive2://localhost:1 test test
USE ${hivevar:db_name};
--
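
For reference, here is a minimal sketch (Scala) of the substitution behavior the 
script relies on; this is an assumption about the intended behavior, not Spark's 
actual implementation. Variables passed with --hivevar should be expanded into 
${hivevar:...} references before the statement reaches the parser:

{code}
// Hedged sketch: expand ${hivevar:name} references using a variable map.
// This mirrors Hive's behavior, which the report assumes Spark SQL inherits.
def substituteHiveVars(sql: String, hivevars: Map[String, String]): String =
  hivevars.foldLeft(sql) { case (s, (name, value)) =>
    s.replace("${hivevar:" + name + "}", value)
  }

// With --hivevar db_name=offline, the statement should reach the parser as:
substituteHiveVars("USE ${hivevar:db_name};", Map("db_name" -> "offline"))
// => "USE offline;" -- instead, the parser currently sees the bare "use "
{code}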



  was:
suppose the content of test1.sql:
-
USE ${hivevar:db_name};
-
 
when execute: bin/spark-sql -f /tmp/test.sql  --hivevar db_name=offline
the output is: 

Error: org.apache.spark.sql.catalyst.parser.ParseException: 
no viable alternative at input ''(line 1, pos 4)

== SQL ==
use 
^^^ (state=,code=0)
-

so hivevar can not be read from CLI.
the bug still appears with beeline command: bin/beeline  -f /tmp/test2.sql  
--hivevar db_name=offline with test2.sql:

!connect jdbc:hive2://localhost:1 test test
USE ${hivevar:db_name};





> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1, 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of file test1.sql:
> -
> USE ${hivevar:db_name};
> -
>  
> when execute command: bin/spark-sql -f /tmp/test.sql  --hivevar 
> db_name=offline
> the output is: 
> 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> -
> so the parameter --hivevar can not be read from CLI.
> the bug still appears with beeline command: bin/beeline  -f /tmp/test2.sql  
> --hivevar db_name=offline with test2.sql:
> 
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> --






[jira] [Commented] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-22 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937644#comment-15937644
 ] 

bruce xu commented on SPARK-19927:
--

[~q79969786] Thanks for the response. It is half right, for two reasons:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-13983 has still not been 
merged into master.

- SPARK-18086 has been merged into master; however, it only fixes the 
bin/spark-sql shell interface (a code change in SparkSQLCLIDriver) and does not 
deal with the bin/beeline interface (no code change in 
SparkSQLOperationManager).

That is why the command bin/beeline -f test.sql --hivevar db_name=online can 
not work.

So SPARK-19927 deals with this problem.
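
Below is a rough sketch (Scala) of the kind of change this comment argues is 
still missing: the beeline/Thrift path would need to apply the same variable 
substitution before parsing. The function name handleStatement and the session 
variable map are illustrative only, not the actual API of 
SparkSQLOperationManager.

{code}
// Hypothetical sketch: expand ${hivevar:name} references on the beeline/Thrift
// path before the SQL string is handed to the parser.
def handleStatement(rawSql: String, sessionHiveVars: Map[String, String]): String =
  sessionHiveVars.foldLeft(rawSql) { case (s, (name, value)) =>
    s.replace("${hivevar:" + name + "}", value)
  } // parse this result instead of rawSql

// e.g. handleStatement("USE ${hivevar:db_name};", Map("db_name" -> "online"))
// => "USE online;"
{code}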



> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1, 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of file test1.sql:
> -
> USE ${hivevar:db_name};
> -
>  
> when execute command: bin/spark-sql -f /tmp/test.sql  --hivevar 
> db_name=offline
> the output is: 
> 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> -
> so the parameter --hivevar can not be read from CLI.
> the bug still appears with beeline command: bin/beeline  -f /tmp/test2.sql  
> --hivevar db_name=offline with test2.sql:
> 
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> --






[jira] [Updated] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-22 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-19927:
-
Description: 
suppose the content of file test.sql:
-
!connect jdbc:hive2://localhost:1 test test
USE ${hivevar:db_name};
-
 
when executing the beeline command: bin/beeline -f /tmp/test.sql --hivevar 
db_name=offline
the output is: 

Error: org.apache.spark.sql.catalyst.parser.ParseException: 
no viable alternative at input ''(line 1, pos 4)

== SQL ==
use 
^^^ (state=,code=0)
-

so the parameter --hivevar can not be read from the beeline CLI.


  was:
suppose the content of file test1.sql:
-
USE ${hivevar:db_name};
-
 
when execute command: bin/spark-sql -f /tmp/test.sql  --hivevar db_name=offline
the output is: 

Error: org.apache.spark.sql.catalyst.parser.ParseException: 
no viable alternative at input ''(line 1, pos 4)

== SQL ==
use 
^^^ (state=,code=0)
-

so the parameter --hivevar can not be read from CLI.
the bug still appears with beeline command: bin/beeline  -f /tmp/test2.sql  
--hivevar db_name=offline with test2.sql:

!connect jdbc:hive2://localhost:1 test test
USE ${hivevar:db_name};
--




> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1, 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of file test.sql:
> -
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> -
>  
> when execute beeline command: bin/beeline  -f /tmp/test.sql  --hivevar 
> db_name=offline 
> the output is: 
> 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> -
> so the parameter --hivevar can not be read from beeline CLI.






[jira] [Comment Edited] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-22 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937644#comment-15937644
 ] 

bruce xu edited comment on SPARK-19927 at 3/23/17 3:34 AM:
---

[~q79969786] Thx for response. it is half right.  reason:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-13983 still not merge 
into master. 

- SPARK-18086 has merged into master. however this issue only resolve 
bin/spark-sql shell interface(code change in SparkSQLCLIDriver)  problem but 
not dealing with bin/beeline interface(without code change in 
SparkSQLOperationManager).

that's why cmd: bin/beeline -f test.sql --hivevar db_name=online can not work. 

so SPARK-19927 deal with this problem.




was (Author: xwc3504):
[~q79969786] Thx for response. it is half right.  reason:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-1398 still not merge 
into master. 

- SPARK-18086 has merged into master. however this issue only resolve 
bin/spark-sql shell interface(code change in SparkSQLCLIDriver)  problem but 
not dealing with bin/beeline interface(without code change in 
SparkSQLOperationManager).

that's why cmd: bin/beeline -f test.sql --hivevar db_name=online can not work. 

so SPARK-19927 deal with this problem.



> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1, 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of file test.sql:
> -
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> -
>  
> when execute beeline command: bin/beeline  -f /tmp/test.sql  --hivevar 
> db_name=offline 
> the output is: 
> 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> -
> so the parameter --hivevar can not be read from beeline CLI.






[jira] [Comment Edited] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-22 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937644#comment-15937644
 ] 

bruce xu edited comment on SPARK-19927 at 3/23/17 3:35 AM:
---

[~q79969786] Thx for response. it is half right.  reason:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-13983 still not merge 
into master. 

- SPARK-18086 has merged into master. however this issue only resolve 
bin/spark-sql shell interface(code change in SparkSQLCLIDriver)  problem but 
not dealing with bin/beeline interface(without code change in 
SparkSQLOperationManager).

that's why cmd: bin/beeline -f test.sql --hivevar db_name=online can not work 
in spark 2.X. 

so SPARK-19927's value is to deal with this problem.




was (Author: xwc3504):
[~q79969786] Thx for response. it is half right.  reason:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-13983 still not merge 
into master. 

- SPARK-18086 has merged into master. however this issue only resolve 
bin/spark-sql shell interface(code change in SparkSQLCLIDriver)  problem but 
not dealing with bin/beeline interface(without code change in 
SparkSQLOperationManager).

that's why cmd: bin/beeline -f test.sql --hivevar db_name=online can not work 
in spark 2.X. 

so SPARK-19927 deal with this problem.



> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1, 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of file test.sql:
> -
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> -
>  
> when execute beeline command: bin/beeline  -f /tmp/test.sql  --hivevar 
> db_name=offline 
> the output is: 
> 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> -
> so the parameter --hivevar can not be read from beeline CLI.






[jira] [Comment Edited] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-22 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937644#comment-15937644
 ] 

bruce xu edited comment on SPARK-19927 at 3/23/17 3:34 AM:
---

[~q79969786] Thx for response. it is half right.  reason:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-13983 still not merge 
into master. 

- SPARK-18086 has merged into master. however this issue only resolve 
bin/spark-sql shell interface(code change in SparkSQLCLIDriver)  problem but 
not dealing with bin/beeline interface(without code change in 
SparkSQLOperationManager).

that's why cmd: bin/beeline -f test.sql --hivevar db_name=online can not work 
in spark 2.X. 

so SPARK-19927 deal with this problem.




was (Author: xwc3504):
[~q79969786] Thx for response. it is half right.  reason:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-13983 still not merge 
into master. 

- SPARK-18086 has merged into master. however this issue only resolve 
bin/spark-sql shell interface(code change in SparkSQLCLIDriver)  problem but 
not dealing with bin/beeline interface(without code change in 
SparkSQLOperationManager).

that's why cmd: bin/beeline -f test.sql --hivevar db_name=online can not work. 

so SPARK-19927 deal with this problem.



> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1, 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of file test.sql:
> -
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> -
>  
> when execute beeline command: bin/beeline  -f /tmp/test.sql  --hivevar 
> db_name=offline 
> the output is: 
> 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> -
> so the parameter --hivevar can not be read from beeline CLI.






[jira] [Comment Edited] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-22 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937644#comment-15937644
 ] 

bruce xu edited comment on SPARK-19927 at 3/23/17 3:37 AM:
---

[~q79969786] Thx for response. your comment is half right.  reason:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-13983 still not merge 
into master. 

- SPARK-18086 has merged into master. however this issue only resolve 
bin/spark-sql shell interface(code change in SparkSQLCLIDriver)  problem but 
not dealing with bin/beeline interface(without code change in 
SparkSQLOperationManager).

that's why cmd: bin/beeline -f test.sql --hivevar db_name=online can not work 
in spark 2.X. 

so SPARK-19927's value is to deal with this problem.




was (Author: xwc3504):
[~q79969786] Thx for response. it is half right.  reason:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-13983 still not merge 
into master. 

- SPARK-18086 has merged into master. however this issue only resolve 
bin/spark-sql shell interface(code change in SparkSQLCLIDriver)  problem but 
not dealing with bin/beeline interface(without code change in 
SparkSQLOperationManager).

that's why cmd: bin/beeline -f test.sql --hivevar db_name=online can not work 
in spark 2.X. 

so SPARK-19927's value is to deal with this problem.



> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1, 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of file test.sql:
> -
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> -
>  
> when execute beeline command: bin/beeline  -f /tmp/test.sql  --hivevar 
> db_name=offline 
> the output is: 
> 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> -
> so the parameter --hivevar can not be read from beeline CLI.






[jira] [Comment Edited] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-22 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937644#comment-15937644
 ] 

bruce xu edited comment on SPARK-19927 at 3/23/17 3:39 AM:
---

[~q79969786] Thx for response. your comment is half right.  reason:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-13983 still not merge 
into master. 

- SPARK-18086 has merged into master. however this issue only resolve 
bin/spark-sql shell interface(code change in SparkSQLCLIDriver)  problem but 
not dealing with bin/beeline interface(without code change in 
SparkSQLOperationManager).

that's why cmd: bin/beeline -f test.sql --hivevar db_name=online can not work 
in spark 2.X. 

so SPARK-19927's value is to deal with this problem. hope to review again.




was (Author: xwc3504):
[~q79969786] Thx for response. your comment is half right.  reason:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-13983 still not merge 
into master. 

- SPARK-18086 has merged into master. however this issue only resolve 
bin/spark-sql shell interface(code change in SparkSQLCLIDriver)  problem but 
not dealing with bin/beeline interface(without code change in 
SparkSQLOperationManager).

that's why cmd: bin/beeline -f test.sql --hivevar db_name=online can not work 
in spark 2.X. 

so SPARK-19927's value is to deal with this problem.



> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1, 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of file test.sql:
> -
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> -
>  
> when execute beeline command: bin/beeline  -f /tmp/test.sql  --hivevar 
> db_name=offline 
> the output is: 
> 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> -
> so the parameter --hivevar can not be read from beeline CLI.






[jira] [Comment Edited] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-22 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937644#comment-15937644
 ] 

bruce xu edited comment on SPARK-19927 at 3/23/17 3:41 AM:
---

[~q79969786] Thx for response. your comment is half right.  reason:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-13983 still not merge 
into master. 

- SPARK-18086 has been merged into master. however this issue only resolve 
bin/spark-sql shell interface(code change in SparkSQLCLIDriver)  problem but 
not dealing with bin/beeline interface(without code change in 
SparkSQLOperationManager).

that's why cmd: bin/beeline -f test.sql --hivevar db_name=online can not work 
in spark 2.X. 

so SPARK-19927's value is to deal with this problem. hope to review again.




was (Author: xwc3504):
[~q79969786] Thx for response. your comment is half right.  reason:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-13983 still not merge 
into master. 

- SPARK-18086 has merged into master. however this issue only resolve 
bin/spark-sql shell interface(code change in SparkSQLCLIDriver)  problem but 
not dealing with bin/beeline interface(without code change in 
SparkSQLOperationManager).

that's why cmd: bin/beeline -f test.sql --hivevar db_name=online can not work 
in spark 2.X. 

so SPARK-19927's value is to deal with this problem. hope to review again.



> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1, 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of file test.sql:
> -
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> -
>  
> when execute beeline command: bin/beeline  -f /tmp/test.sql  --hivevar 
> db_name=offline 
> the output is: 
> 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> -
> so the parameter --hivevar can not be read from beeline CLI.






[jira] [Comment Edited] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-22 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937644#comment-15937644
 ] 

bruce xu edited comment on SPARK-19927 at 3/23/17 3:46 AM:
---

[~q79969786] Thanks for the response. Your comment is half right, for two reasons:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-13983 has still not been 
merged into master.

- SPARK-18086 has been merged into master; however, it only fixes the 
bin/spark-sql shell interface (a code change in SparkSQLCLIDriver) and does not 
deal with the bin/beeline interface (no code change in 
SparkSQLOperationManager).

That is why the command bin/beeline -f test.sql --hivevar db_name=online can 
not work in Spark 2.x.

So the value of SPARK-19927 is to deal with this problem. Please review again; 
alternatively, merging SPARK-13983 would be a workaround.




was (Author: xwc3504):
[~q79969786] Thx for response. your comment is half right.  reason:

- issue SPARK-19927 derives from SPARK-13983, but SPARK-13983 still not merge 
into master. 

- SPARK-18086 has been merged into master. however this issue only resolve 
bin/spark-sql shell interface(code change in SparkSQLCLIDriver)  problem but 
not dealing with bin/beeline interface(without code change in 
SparkSQLOperationManager).

that's why cmd: bin/beeline -f test.sql --hivevar db_name=online can not work 
in spark 2.X. 

so SPARK-19927's value is to deal with this problem. hope to review again.



> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1, 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of file test.sql:
> -
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> -
>  
> when execute beeline command: bin/beeline  -f /tmp/test.sql  --hivevar 
> db_name=offline 
> the output is: 
> 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> -
> so the parameter --hivevar can not be read from beeline CLI.






[jira] [Commented] (SPARK-19927) SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1

2017-03-22 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937822#comment-15937822
 ] 

bruce xu commented on SPARK-19927:
--

[~q79969786] Thanks, I will give it a try, and I still hope this bug can be 
fixed in master.

> SparkThriftServer2 can not get ''--hivevar" variables in spark 2.1
> --
>
> Key: SPARK-19927
> URL: https://issues.apache.org/jira/browse/SPARK-19927
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1, 2.1.0
> Environment: CentOS 6.5,spark 2.1 build with mvn -Pyarn -Phadoop-2.6 
> -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -Dscala-2.11
>Reporter: bruce xu
>
> suppose the content of file test.sql:
> -
> !connect jdbc:hive2://localhost:1 test test
> USE ${hivevar:db_name};
> -
>  
> when execute beeline command: bin/beeline  -f /tmp/test.sql  --hivevar 
> db_name=offline 
> the output is: 
> 
> Error: org.apache.spark.sql.catalyst.parser.ParseException: 
> no viable alternative at input ''(line 1, pos 4)
> == SQL ==
> use 
> ^^^ (state=,code=0)
> -
> so the parameter --hivevar can not be read from beeline CLI.






[jira] [Created] (SPARK-20069) enter spark thriftserver2 web UI very slow

2017-03-23 Thread bruce xu (JIRA)
bruce xu created SPARK-20069:


 Summary: enter spark thriftserver2 web UI very slow
 Key: SPARK-20069
 URL: https://issues.apache.org/jira/browse/SPARK-20069
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 2.0.1
 Environment: centos6, java 1.7, hadoop 2.6,, spark 2.0.1 on Yarn
Reporter: bruce xu
Priority: Minor


when the spark thriftserver2 has been in service for roughly 14 hours or more, 
it becomes very slow (it may take 10 seconds or even more) to open the 
thriftserver2 UI (e.g. 
http://hadoop003.dx.momo.com:8088/proxy/application_1489971774015_47233/jobs/).

every day we have over 70 jobs running on spark thriftserver2.

has anyone else reproduced this problem?






[jira] [Created] (SPARK-20135) spark thriftserver2: no job running but cores not release on yarn

2017-03-28 Thread bruce xu (JIRA)
bruce xu created SPARK-20135:


 Summary: spark thriftserver2: no job running but cores not release 
on yarn
 Key: SPARK-20135
 URL: https://issues.apache.org/jira/browse/SPARK-20135
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.0.1
 Environment: spark 2.0.1 with hadoop 2.6.0 
Reporter: bruce xu


I enabled the executor dynamic allocation feature; however, it sometimes does 
not work.

I set the initial executor num to 50, but after the job finished the cores and 
memory were not released.

From the Spark web UI, the active job/running task/stage num is 0, but the 
executors page shows 1276 cores and 7288 active tasks.

From the YARN web UI, the thriftserver job still holds 639 running containers.

This may be a bug.
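
For context, below is a sketch of the dynamic-allocation configuration this 
report describes. The values are assumptions reconstructed from the description; 
the keys are standard Spark settings. Note that releasing idle executors 
normally also requires the external shuffle service to be enabled.

{code}
// Sketch (Scala): values assumed from the report above; keys are standard
// Spark configuration properties.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.dynamicAllocation.initialExecutors", "50")
  .set("spark.dynamicAllocation.executorIdleTimeout", "60s") // default value
  .set("spark.shuffle.service.enabled", "true") // required to reclaim executors
{code}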







[jira] [Updated] (SPARK-20135) spark thriftserver2: no job running but cores not release on yarn

2017-03-28 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-20135:
-
Attachment: 0329-3.png
0329-2.png
0329-1.png

cores and memory not release for a long time when no job running

> spark thriftserver2: no job running but cores not release on yarn
> -
>
> Key: SPARK-20135
> URL: https://issues.apache.org/jira/browse/SPARK-20135
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1
> Environment: spark 2.0.1 with hadoop 2.6.0 
>Reporter: bruce xu
> Attachments: 0329-1.png, 0329-2.png, 0329-3.png
>
>
> i opened the executor dynamic allocation feature, however it doesn't work 
> sometimes.
> i set the initial executor num 50,  after job finished the cores and mem 
> resource did not release. 
> from the spark web UI, the active job/running task/stage num is 0 , but the 
> executors page show  cores 1276, active task 7288.
> from the yarn web UI,  the thriftserver job's running containers is 639
> this may be a bug. 






[jira] [Issue Comment Deleted] (SPARK-20135) spark thriftserver2: no job running but cores not release on yarn

2017-03-29 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-20135:
-
Comment: was deleted

(was: cores and memory not release for a long time when no job running)

> spark thriftserver2: no job running but cores not release on yarn
> -
>
> Key: SPARK-20135
> URL: https://issues.apache.org/jira/browse/SPARK-20135
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1
> Environment: spark 2.0.1 with hadoop 2.6.0 
>Reporter: bruce xu
> Attachments: 0329-1.png, 0329-2.png, 0329-3.png
>
>
> i opened the executor dynamic allocation feature, however it doesn't work 
> sometimes.
> i set the initial executor num 50,  after job finished the cores and mem 
> resource did not release. 
> from the spark web UI, the active job/running task/stage num is 0 , but the 
> executors page show  cores 1276, active task 7288.
> from the yarn web UI,  the thriftserver job's running containers is 639
> this may be a bug. 






[jira] [Updated] (SPARK-20135) spark thriftserver2: no job running but containers not release on yarn

2017-03-29 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-20135:
-
Summary: spark thriftserver2: no job running but containers not release on 
yarn  (was: spark thriftserver2: no job running but cores not release on yarn)

> spark thriftserver2: no job running but containers not release on yarn
> --
>
> Key: SPARK-20135
> URL: https://issues.apache.org/jira/browse/SPARK-20135
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1
> Environment: spark 2.0.1 with hadoop 2.6.0 
>Reporter: bruce xu
> Attachments: 0329-1.png, 0329-2.png, 0329-3.png
>
>
> i opened the executor dynamic allocation feature, however it doesn't work 
> sometimes.
> i set the initial executor num 50,  after job finished the cores and mem 
> resource did not release. 
> from the spark web UI, the active job/running task/stage num is 0 , but the 
> executors page show  cores 1276, active task 7288.
> from the yarn web UI,  the thriftserver job's running containers is 639
> this may be a bug. 






[jira] [Updated] (SPARK-20135) spark thriftserver2: no job running but containers not release on yarn

2017-03-29 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-20135:
-
Description: 
I enabled the executor dynamic allocation feature; however, it sometimes does 
not work.

I set the initial executor num to 50, but after the job finished the cores and 
memory were not released.

From the Spark web UI, the active job/running task/stage num is 0, but the 
executors page shows 1276 cores and 7288 active tasks.

From the YARN web UI, the thriftserver job still holds 639 running containers 
without releasing them.

This may be a bug.


  was:
i opened the executor dynamic allocation feature, however it doesn't work 
sometimes.

i set the initial executor num 50,  after job finished the cores and mem 
resource did not release. 

from the spark web UI, the active job/running task/stage num is 0 , but the 
executors page show  cores 1276, active task 7288.

from the yarn web UI,  the thriftserver job's running containers is 639

this may be a bug. 



> spark thriftserver2: no job running but containers not release on yarn
> --
>
> Key: SPARK-20135
> URL: https://issues.apache.org/jira/browse/SPARK-20135
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1
> Environment: spark 2.0.1 with hadoop 2.6.0 
>Reporter: bruce xu
> Attachments: 0329-1.png, 0329-2.png, 0329-3.png
>
>
> i opened the executor dynamic allocation feature, however it doesn't work 
> sometimes.
> i set the initial executor num 50,  after job finished the cores and mem 
> resource did not release. 
> from the spark web UI, the active job/running task/stage num is 0 , but the 
> executors page show  cores 1276, active task 7288.
> from the yarn web UI,  the thriftserver job's running containers is 639 
> without releasing. 
> this may be a bug. 






[jira] [Commented] (SPARK-20135) spark thriftserver2: no job running but containers not release on yarn

2017-04-04 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956235#comment-15956235
 ] 

bruce xu commented on SPARK-20135:
--

OK, Thanks.

> spark thriftserver2: no job running but containers not release on yarn
> --
>
> Key: SPARK-20135
> URL: https://issues.apache.org/jira/browse/SPARK-20135
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1
> Environment: spark 2.0.1 with hadoop 2.6.0 
>Reporter: bruce xu
> Attachments: 0329-1.png, 0329-2.png, 0329-3.png
>
>
> i opened the executor dynamic allocation feature, however it doesn't work 
> sometimes.
> i set the initial executor num 50,  after job finished the cores and mem 
> resource did not release. 
> from the spark web UI, the active job/running task/stage num is 0 , but the 
> executors page show  cores 1276, active task 7288.
> from the yarn web UI,  the thriftserver job's running containers is 639 
> without releasing. 
> this may be a bug. 






[jira] [Updated] (SPARK-22365) Spark UI executors empty list with 500 error

2017-12-03 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-22365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-22365:
-
Attachment: spark-executor-500error.png

> Spark UI executors empty list with 500 error
> 
>
> Key: SPARK-22365
> URL: https://issues.apache.org/jira/browse/SPARK-22365
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.2.0
>Reporter: Jakub Dubovsky
> Attachments: spark-executor-500error.png
>
>
> No data loaded on "executors" tab in sparkUI with stack trace below. Apart 
> from exception I have nothing more. But if I can test something to make this 
> easier to resolve I am happy to help.
> {code}
> java.lang.NullPointerException
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
>   at 
> org.spark_project.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
>   at 
> org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:164)
>   at 
> org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
>   at 
> org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.spark_project.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:461)
>   at 
> org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.spark_project.jetty.server.Server.handle(Server.java:524)
>   at 
> org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:319)
>   at 
> org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
>   at 
> org.spark_project.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at 
> org.spark_project.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.spark_project.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.spark_project.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (SPARK-22365) Spark UI executors empty list with 500 error

2017-12-03 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276300#comment-16276300
 ] 

bruce xu commented on SPARK-22365:
--

@Jakub Dubovsky  I have reproduced the same issue as yours. 
!spark-executor-500error.png!

> Spark UI executors empty list with 500 error
> 
>
> Key: SPARK-22365
> URL: https://issues.apache.org/jira/browse/SPARK-22365
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.2.0
>Reporter: Jakub Dubovsky
> Attachments: spark-executor-500error.png
>
>
> No data loaded on "executors" tab in sparkUI with stack trace below. Apart 
> from exception I have nothing more. But if I can test something to make this 
> easier to resolve I am happy to help.
> {code}
> java.lang.NullPointerException
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
>   at 
> org.spark_project.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
>   at 
> org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:164)
>   at 
> org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
>   at 
> org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.spark_project.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:461)
>   at 
> org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.spark_project.jetty.server.Server.handle(Server.java:524)
>   at 
> org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:319)
>   at 
> org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
>   at 
> org.spark_project.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at 
> org.spark_project.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.spark_project.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.spark_project.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (SPARK-22365) Spark UI executors empty list with 500 error

2017-12-04 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276618#comment-16276618
 ] 

bruce xu commented on SPARK-22365:
--

Hi [~dubovsky]. Glad to have your response. I hit this issue using Spark 
ThriftServer as a JDBC service; the Spark version is 2.2.1-rc1. I will also try 
to find the root cause. It may well be a bug.

> Spark UI executors empty list with 500 error
> 
>
> Key: SPARK-22365
> URL: https://issues.apache.org/jira/browse/SPARK-22365
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.2.0
>Reporter: Jakub Dubovsky
> Attachments: spark-executor-500error.png
>
>
> No data loaded on "executors" tab in sparkUI with stack trace below. Apart 
> from exception I have nothing more. But if I can test something to make this 
> easier to resolve I am happy to help.
> {code}
> java.lang.NullPointerException
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
>   at 
> org.spark_project.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
>   at 
> org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:164)
>   at 
> org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
>   at 
> org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.spark_project.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:461)
>   at 
> org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.spark_project.jetty.server.Server.handle(Server.java:524)
>   at 
> org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:319)
>   at 
> org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
>   at 
> org.spark_project.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at 
> org.spark_project.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.spark_project.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.spark_project.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Comment Edited] (SPARK-22365) Spark UI executors empty list with 500 error

2017-12-04 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276618#comment-16276618
 ] 

bruce xu edited comment on SPARK-22365 at 12/5/17 3:46 AM:
---

Hi [~dubovsky]. Glad to have your response. I hit this issue using the Spark 
ThriftServer as a JDBC service; the Spark version is 2.2.1-rc1. I will also 
try to find the cause. It may well be a bug.

UPDATE:
[~dubovsky] I solved the problem by deleting jsr311-api-1.1.1.jar from 
$SPARK_HOME/jars. The reason is explained in [NoSuchMethodError on startup 
in Java Jersey 
app|https://stackoverflow.com/questions/28509370/nosuchmethoderror-on-startup-in-java-jersey-app]


was (Author: xwc3504):
Hi [~dubovsky]. Glad to have your response. I hit this issue using the Spark 
ThriftServer as a JDBC service; the Spark version is 2.2.1-rc1. I will also 
try to find the cause. It may well be a bug.

> Spark UI executors empty list with 500 error
> 
>
> Key: SPARK-22365
> URL: https://issues.apache.org/jira/browse/SPARK-22365
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.2.0
>Reporter: Jakub Dubovsky
> Attachments: spark-executor-500error.png
>
>
> No data is loaded on the "executors" tab in the Spark UI; stack trace below. Apart 
> from the exception I have nothing more, but if I can test something to make this 
> easier to resolve I am happy to help.
> {code}
> java.lang.NullPointerException
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
>   at 
> org.spark_project.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
>   at 
> org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:164)
>   at 
> org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
>   at 
> org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.spark_project.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:461)
>   at 
> org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.spark_project.jetty.server.Server.handle(Server.java:524)
>   at 
> org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:319)
>   at 
> org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
>   at 
> org.spark_project.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at 
> org.spark_project.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.spark_project.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.spark_project.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-22365) Spark UI executors empty list with 500 error

2017-12-04 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276618#comment-16276618
 ] 

bruce xu edited comment on SPARK-22365 at 12/5/17 3:47 AM:
---

Hi [~dubovsky]. Glad to have your response. I hit this issue using the Spark 
ThriftServer as a JDBC service; the Spark version is 2.2.1-rc1. I will also 
try to find the cause. It may well be a bug.

UPDATE:
[~dubovsky] I solved the problem by deleting jsr311-api-1.1.1.jar from 
$SPARK_HOME/jars. The reason is explained in [NoSuchMethodError on startup 
in Java Jersey 
app|https://stackoverflow.com/questions/28509370/nosuchmethoderror-on-startup-in-java-jersey-app]


was (Author: xwc3504):
Hi [~dubovsky]. Glad to have your response. I hit this issue using the Spark 
ThriftServer as a JDBC service; the Spark version is 2.2.1-rc1. I will also 
try to find the cause. It may well be a bug.

UPDATE:
[~dubovsky] I solved the problem by deleting jsr311-api-1.1.1.jar from 
$SPARK_HOME/jars. The reason is explained in [NoSuchMethodError on startup 
in Java Jersey 
app|https://stackoverflow.com/questions/28509370/nosuchmethoderror-on-startup-in-java-jersey-app]

> Spark UI executors empty list with 500 error
> 
>
> Key: SPARK-22365
> URL: https://issues.apache.org/jira/browse/SPARK-22365
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.2.0
>Reporter: Jakub Dubovsky
> Attachments: spark-executor-500error.png
>
>
> No data is loaded on the "executors" tab in the Spark UI; stack trace below. Apart 
> from the exception I have nothing more, but if I can test something to make this 
> easier to resolve I am happy to help.
> {code}
> java.lang.NullPointerException
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
>   at 
> org.spark_project.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
>   at 
> org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:164)
>   at 
> org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
>   at 
> org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.spark_project.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:461)
>   at 
> org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.spark_project.jetty.server.Server.handle(Server.java:524)
>   at 
> org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:319)
>   at 
> org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
>   at 
> org.spark_project.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at 
> org.spark_project.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.spark_project.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.spark_project.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-22365) Spark UI executors empty list with 500 error

2017-12-04 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276618#comment-16276618
 ] 

bruce xu edited comment on SPARK-22365 at 12/5/17 4:31 AM:
---

Hi [~dubovsky]. Glad to have your response. I hit this issue using the Spark 
ThriftServer as a JDBC service; the Spark version is 2.2.1-rc1. I will also 
try to find the cause. It may well be a bug.

UPDATE:
[~dubovsky] I solved the problem by deleting jsr311-api-1.1.1.jar from 
$SPARK_HOME/jars. The reason is explained in [NoSuchMethodError on startup 
in Java Jersey 
app|https://stackoverflow.com/questions/28509370/nosuchmethoderror-on-startup-in-java-jersey-app].

[~sowen] Deleting jsr311-api-1.1.1.jar solves the problem, but I wonder whether 
that addresses the root cause.
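
For anyone hitting the same NPE, a minimal diagnostic sketch (illustrative, not 
part of Spark) that shows which jar supplies the old JSR-311 javax.ws.rs API on 
the driver classpath:

{code:java}
// Illustrative diagnostic: print where the javax.ws.rs API is loaded from.
// If the location points at jsr311-api-1.1.1.jar rather than the newer
// javax.ws.rs-api jar that Jersey 2 expects, the old API is shadowing it.
object WhichJar {
  def main(args: Array[String]): Unit = {
    val clazz = Class.forName("javax.ws.rs.core.Application")
    val location = Option(clazz.getProtectionDomain.getCodeSource)
      .map(_.getLocation.toString)
      .getOrElse("bootstrap classpath")
    println(s"javax.ws.rs.core.Application loaded from: $location")
  }
}
{code}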


was (Author: xwc3504):
Hi [~dubovsky]. Glad to have your response. I hit this issue using the Spark 
ThriftServer as a JDBC service; the Spark version is 2.2.1-rc1. I will also 
try to find the cause. It may well be a bug.

UPDATE:
[~dubovsky] I solved the problem by deleting jsr311-api-1.1.1.jar from 
$SPARK_HOME/jars. The reason is explained in [NoSuchMethodError on startup 
in Java Jersey 
app|https://stackoverflow.com/questions/28509370/nosuchmethoderror-on-startup-in-java-jersey-app]

> Spark UI executors empty list with 500 error
> 
>
> Key: SPARK-22365
> URL: https://issues.apache.org/jira/browse/SPARK-22365
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.2.0
>Reporter: Jakub Dubovsky
> Attachments: spark-executor-500error.png
>
>
> No data is loaded on the "executors" tab in the Spark UI; stack trace below. Apart 
> from the exception I have nothing more, but if I can test something to make this 
> easier to resolve I am happy to help.
> {code}
> java.lang.NullPointerException
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
>   at 
> org.spark_project.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
>   at 
> org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:164)
>   at 
> org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
>   at 
> org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.spark_project.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:461)
>   at 
> org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.spark_project.jetty.server.Server.handle(Server.java:524)
>   at 
> org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:319)
>   at 
> org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
>   at 
> org.spark_project.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at 
> org.spark_project.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.spark_project.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.spark_project.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@

[jira] [Created] (SPARK-22846) table's owner property in hive metastore is null

2017-12-20 Thread bruce xu (JIRA)
bruce xu created SPARK-22846:


 Summary: table's owner property in hive metastore is null
 Key: SPARK-22846
 URL: https://issues.apache.org/jira/browse/SPARK-22846
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.2.1
 Environment: spark 2.2.1, hive 0.14, hadoop 2.6.0
Reporter: bruce xu
Priority: Critical


I met this issue after upgrading from Spark 2.0.1 to Spark 2.2.1.

When creating a table using Spark SQL or the Spark ThriftServer, the table's 
owner info in the metastore is null, which may cause other issues in my 
environment.

After digging into the code, I found that in class HiveClientImpl:

{code:java}
private val userName = state.getAuthenticator.getUserName
{code}

the result of state.getAuthenticator.getUserName is null, which causes all 
operations on tables to carry a null user name, for example in:

{code:java}
 def toHiveTable(table: CatalogTable, userName: Option[String] = None): 
HiveTable = {
{code}
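
A minimal sketch of one possible fallback (illustrative only, not Spark's 
actual fix): if the Hive authenticator reports no user, fall back to the 
Hadoop UGI user so the owner field never ends up null.

{code:java}
import org.apache.hadoop.security.UserGroupInformation

// Hypothetical helper (names are illustrative): prefer the authenticator's
// user name, otherwise fall back to the Hadoop UGI short user name so the
// metastore owner field is never null.
def resolveOwner(authenticatorUser: String): String =
  Option(authenticatorUser)
    .getOrElse(UserGroupInformation.getCurrentUser.getShortUserName)
{code}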






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-22846) table's owner property in hive metastore is null

2017-12-20 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-22846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-22846:
-
Description: 
I met this issue after upgrading from Spark 2.0.1 to Spark 2.2.1.

When creating a table using Spark SQL or the Spark ThriftServer, the table's 
owner info in the metastore is null, which may cause other issues with table 
authentication. It may be a bug.

After digging into the code, I found that in class HiveClientImpl:

{code:java}
private val userName = state.getAuthenticator.getUserName
{code}

the result of state.getAuthenticator.getUserName is null, which causes all 
operations on tables to carry a null user name, for example in:

{code:java}
 def toHiveTable(table: CatalogTable, userName: Option[String] = None): 
HiveTable = {
{code}




  was:
I met this issue after upgrading from Spark 2.0.1 to Spark 2.2.1.

When creating a table using Spark SQL or the Spark ThriftServer, the table's 
owner info in the metastore is null, which may cause other issues in my 
environment.

After digging into the code, I found that in class HiveClientImpl:

{code:java}
private val userName = state.getAuthenticator.getUserName
{code}

the result of state.getAuthenticator.getUserName is null, which causes all 
operations on tables to carry a null user name, for example in:

{code:java}
 def toHiveTable(table: CatalogTable, userName: Option[String] = None): 
HiveTable = {
{code}





> table's owner property in hive metastore is null
> 
>
> Key: SPARK-22846
> URL: https://issues.apache.org/jira/browse/SPARK-22846
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.2.1
> Environment: spark 2.2.1, hive 0.14, hadoop 2.6.0
>Reporter: bruce xu
>Priority: Critical
> Attachments: talbe_owner_null.png
>
>
> I met this issue after upgrading from Spark 2.0.1 to Spark 2.2.1.
> When creating a table using Spark SQL or the Spark ThriftServer, the table's 
> owner info in the metastore is null, which may cause other issues with table 
> authentication. It may be a bug.
> After digging into the code, I found that in class HiveClientImpl:
> {code:java}
> private val userName = state.getAuthenticator.getUserName
> {code}
> the result of state.getAuthenticator.getUserName is null, which causes all 
> operations on tables to carry a null user name, for example in:
> {code:java}
>  def toHiveTable(table: CatalogTable, userName: Option[String] = None): 
> HiveTable = {
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-22846) table's owner property in hive metastore is null

2017-12-20 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-22846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-22846:
-
Attachment: talbe_owner_null.png

select info from hive metastore

> table's owner property in hive metastore is null
> 
>
> Key: SPARK-22846
> URL: https://issues.apache.org/jira/browse/SPARK-22846
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.2.1
> Environment: spark 2.2.1, hive 0.14, hadoop 2.6.0
>Reporter: bruce xu
>Priority: Critical
> Attachments: talbe_owner_null.png
>
>
> I met this issue after upgrading from Spark 2.0.1 to Spark 2.2.1.
> When creating a table using Spark SQL or the Spark ThriftServer, the table's 
> owner info in the metastore is null, which may cause other issues with table 
> authentication. It may be a bug.
> After digging into the code, I found that in class HiveClientImpl:
> {code:java}
> private val userName = state.getAuthenticator.getUserName
> {code}
> the result of state.getAuthenticator.getUserName is null, which causes all 
> operations on tables to carry a null user name, for example in:
> {code:java}
>  def toHiveTable(table: CatalogTable, userName: Option[String] = None): 
> HiveTable = {
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-22846) table's owner property in hive metastore is null

2017-12-20 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-22846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-22846:
-
Description: 
I met this issue after upgrading from Spark 2.0.1 to Spark 2.2.1.

When creating a table using Spark SQL or the Spark ThriftServer, the table's 
owner info in the metastore is null, which may cause other issues with table 
authentication. It may be a bug.

After digging into the code, I found that in class HiveClientImpl:

{code:java}
private val userName = state.getAuthenticator.getUserName
{code}

the result of state.getAuthenticator.getUserName is null, which causes all 
operations on tables to carry a null user name, for example the method toHiveTable:

{code:java}
 def toHiveTable(table: CatalogTable, userName: Option[String] = None): 
HiveTable = {
{code}

My create table command: create table datapm.test_xwc9(id string, name string);
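
A quick way to verify what actually got recorded (illustrative sketch; the 
exact label of the owner row in DESCRIBE FORMATTED output can vary by version):

{code:java}
import org.apache.spark.sql.SparkSession

// Illustrative check: create the table and inspect the owner row that
// DESCRIBE FORMATTED reports for it.
val spark = SparkSession.builder()
  .appName("owner-check")
  .enableHiveSupport()
  .getOrCreate()

spark.sql("create table datapm.test_xwc9(id string, name string)")
spark.sql("describe formatted datapm.test_xwc9")
  .where("col_name in ('Owner', 'Owner:')")
  .show(truncate = false)
{code}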





  was:
I met this issue after upgrading from Spark 2.0.1 to Spark 2.2.1.

When creating a table using Spark SQL or the Spark ThriftServer, the table's 
owner info in the metastore is null, which may cause other issues with table 
authentication. It may be a bug.

After digging into the code, I found that in class HiveClientImpl:

{code:java}
private val userName = state.getAuthenticator.getUserName
{code}

the result of state.getAuthenticator.getUserName is null, which causes all 
operations on tables to carry a null user name, for example in:

{code:java}
 def toHiveTable(table: CatalogTable, userName: Option[String] = None): 
HiveTable = {
{code}





> table's owner property in hive metastore is null
> 
>
> Key: SPARK-22846
> URL: https://issues.apache.org/jira/browse/SPARK-22846
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.2.1
> Environment: spark 2.2.1, hive 0.14, hadoop 2.6.0
>Reporter: bruce xu
>Priority: Critical
> Attachments: talbe_owner_null.png
>
>
> I met this issue after upgrading from Spark 2.0.1 to Spark 2.2.1.
> When creating a table using Spark SQL or the Spark ThriftServer, the table's 
> owner info in the metastore is null, which may cause other issues with table 
> authentication. It may be a bug.
> After digging into the code, I found that in class HiveClientImpl:
> {code:java}
> private val userName = state.getAuthenticator.getUserName
> {code}
> the result of state.getAuthenticator.getUserName is null, which causes all 
> operations on tables to carry a null user name, for example the method 
> toHiveTable:
> {code:java}
>  def toHiveTable(table: CatalogTable, userName: Option[String] = None): 
> HiveTable = {
> {code}
> My create table command: create table datapm.test_xwc9(id string, name 
> string);



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Issue Comment Deleted] (SPARK-22846) table's owner property in hive metastore is null

2017-12-20 Thread bruce xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-22846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce xu updated SPARK-22846:
-
Comment: was deleted

(was: select info from hive metastore)

> table's owner property in hive metastore is null
> 
>
> Key: SPARK-22846
> URL: https://issues.apache.org/jira/browse/SPARK-22846
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.2.1
> Environment: spark 2.2.1, hive 0.14, hadoop 2.6.0
>Reporter: bruce xu
>Priority: Critical
> Attachments: talbe_owner_null.png
>
>
> I met this issue after upgrading from Spark 2.0.1 to Spark 2.2.1.
> When creating a table using Spark SQL or the Spark ThriftServer, the table's 
> owner info in the metastore is null, which may cause other issues with table 
> authentication. It may be a bug.
> After digging into the code, I found that in class HiveClientImpl:
> {code:java}
> private val userName = state.getAuthenticator.getUserName
> {code}
> the result of state.getAuthenticator.getUserName is null, which causes all 
> operations on tables to carry a null user name, for example the method 
> toHiveTable:
> {code:java}
>  def toHiveTable(table: CatalogTable, userName: Option[String] = None): 
> HiveTable = {
> {code}
> My create table command: create table datapm.test_xwc9(id string, name 
> string);



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-18665) Spark ThriftServer jobs that are canceled are still “STARTED”

2018-02-04 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16352018#comment-16352018
 ] 

bruce xu commented on SPARK-18665:
--

This issue still occurs in Spark 2.2.1; it should be resolved and the fix merged 
to master.
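
A minimal sketch of the kind of guard that would avoid the illegal transitions 
quoted below (illustrative only, not the actual Hive/Spark code): refuse to 
move an operation that is already in a terminal state to ERROR.

{code:java}
// Illustrative model of the operation states seen in the logs below.
sealed trait OperationState
case object Running  extends OperationState
case object Closed   extends OperationState
case object Canceled extends OperationState
case object Error    extends OperationState

// CLOSED and CANCELED are terminal: keep them instead of attempting the
// "Illegal Operation state transition" shown in the stack traces.
def toErrorIfLegal(current: OperationState): OperationState = current match {
  case Closed | Canceled => current
  case _                 => Error
}
{code}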

> Spark ThriftServer jobs that are canceled are still “STARTED”
> --
>
> Key: SPARK-18665
> URL: https://issues.apache.org/jira/browse/SPARK-18665
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 1.6.3, 2.0.2, 2.1.0
>Reporter: cen yuhai
>Priority: Major
> Attachments: 1179ACF7-3E62-44C5-B01D-CA71C876ECCE.png, 
> 83C5E8AD-59DE-4A85-A483-2BE3FB83F378.png
>
>
> I find that some jobs are canceled, but their state is still "STARTED". I 
> think this bug was introduced by SPARK-6964.
> Some relevant logs:
> {code}
> 16/12/01 11:43:34 ERROR SparkExecuteStatementOperation: Error running hive 
> query: 
> org.apache.hive.service.cli.HiveSQLException: Illegal Operation state 
> transition from CLOSED to ERROR
>   at 
> org.apache.hive.service.cli.OperationState.validateTransition(OperationState.java:91)
>   at 
> org.apache.hive.service.cli.OperationState.validateTransition(OperationState.java:97)
>   at 
> org.apache.hive.service.cli.operation.Operation.setState(Operation.java:126)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:259)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:166)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:176)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> {code}
> org.apache.hive.service.cli.HiveSQLException: Illegal Operation state 
> transition from CANCELED to ERROR
>   at 
> org.apache.hive.service.cli.OperationState.validateTransition(OperationState.java:91)
>   at 
> org.apache.hive.service.cli.OperationState.validateTransition(OperationState.java:97)
>   at 
> org.apache.hive.service.cli.operation.Operation.setState(Operation.java:126)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:259)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:166)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:176)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-18665) Spark ThriftServer jobs that are canceled are still “STARTED”

2018-02-04 Thread bruce xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16352018#comment-16352018
 ] 

bruce xu edited comment on SPARK-18665 at 2/5/18 3:43 AM:
--

This issue still occurs in Spark 2.2.1; this PR could resolve the issue with a 
little fix.


was (Author: xwc3504):
This issue still occurs in Spark 2.2.1; it should be resolved and the fix merged 
to master.

> Spark ThriftServer jobs that are canceled are still “STARTED”
> --
>
> Key: SPARK-18665
> URL: https://issues.apache.org/jira/browse/SPARK-18665
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 1.6.3, 2.0.2, 2.1.0
>Reporter: cen yuhai
>Priority: Major
> Attachments: 1179ACF7-3E62-44C5-B01D-CA71C876ECCE.png, 
> 83C5E8AD-59DE-4A85-A483-2BE3FB83F378.png
>
>
> I find that some jobs are canceled, but their state is still "STARTED". I 
> think this bug was introduced by SPARK-6964.
> Some relevant logs:
> {code}
> 16/12/01 11:43:34 ERROR SparkExecuteStatementOperation: Error running hive 
> query: 
> org.apache.hive.service.cli.HiveSQLException: Illegal Operation state 
> transition from CLOSED to ERROR
>   at 
> org.apache.hive.service.cli.OperationState.validateTransition(OperationState.java:91)
>   at 
> org.apache.hive.service.cli.OperationState.validateTransition(OperationState.java:97)
>   at 
> org.apache.hive.service.cli.operation.Operation.setState(Operation.java:126)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:259)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:166)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:176)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> {code}
> org.apache.hive.service.cli.HiveSQLException: Illegal Operation state 
> transition from CANCELED to ERROR
>   at 
> org.apache.hive.service.cli.OperationState.validateTransition(OperationState.java:91)
>   at 
> org.apache.hive.service.cli.OperationState.validateTransition(OperationState.java:97)
>   at 
> org.apache.hive.service.cli.operation.Operation.setState(Operation.java:126)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:259)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:166)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:176)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org