[ https://issues.apache.org/jira/browse/SPARK-15345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301456#comment-15301456 ]

Yi Zhou commented on SPARK-15345:
---------------------------------

I issued 'show databases;', 'use XXX;' and 'show tables;' and found that the result of 
'show tables' is empty; there are no tables to show at all. BTW, I can see the tables via 
'show tables' in the Hive CLI.

{code}
spark-sql> show databases;
16/05/26 11:11:47 INFO execution.SparkSqlParser: Parsing command: show databases
16/05/26 11:11:47 INFO log.PerfLogger: <PERFLOG method=create_database 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
16/05/26 11:11:47 INFO metastore.HiveMetaStore: 0: create_database: 
Database(name:default, description:default database, 
locationUri:hdfs://hw-node2:8020/user/hive/warehouse, parameters:{})
16/05/26 11:11:47 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr      
cmd=create_database: Database(name:default, description:default database, 
locationUri:hdfs://hw-node2:8020/user/hive/warehouse, parameters:{})
16/05/26 11:11:47 ERROR metastore.RetryingHMSHandler: 
AlreadyExistsException(message:Database default already exists)
        at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:944)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138)
        at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
        at com.sun.proxy.$Proxy34.create_database(Unknown Source)
        at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createDatabase(HiveMetaStoreClient.java:646)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:105)
        at com.sun.proxy.$Proxy35.createDatabase(Unknown Source)
        at org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:345)
        at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createDatabase$1.apply$mcV$sp(HiveClientImpl.scala:289)
        at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createDatabase$1.apply(HiveClientImpl.scala:289)
        at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createDatabase$1.apply(HiveClientImpl.scala:289)
        at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:260)
        at 
org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:207)
        at 
org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:206)
        at 
org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:249)
        at 
org.apache.spark.sql.hive.client.HiveClientImpl.createDatabase(HiveClientImpl.scala:288)
        at 
org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createDatabase$1.apply$mcV$sp(HiveExternalCatalog.scala:94)
        at 
org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createDatabase$1.apply(HiveExternalCatalog.scala:94)
        at 
org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createDatabase$1.apply(HiveExternalCatalog.scala:94)
        at 
org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:68)
        at 
org.apache.spark.sql.hive.HiveExternalCatalog.createDatabase(HiveExternalCatalog.scala:93)
        at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:142)
        at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:84)
        at 
org.apache.spark.sql.hive.HiveSessionCatalog.<init>(HiveSessionCatalog.scala:50)
        at 
org.apache.spark.sql.hive.HiveSessionState.catalog$lzycompute(HiveSessionState.scala:49)
        at 
org.apache.spark.sql.hive.HiveSessionState.catalog(HiveSessionState.scala:48)
        at 
org.apache.spark.sql.hive.HiveSessionState$$anon$1.<init>(HiveSessionState.scala:63)
        at 
org.apache.spark.sql.hive.HiveSessionState.analyzer$lzycompute(HiveSessionState.scala:63)
        at 
org.apache.spark.sql.hive.HiveSessionState.analyzer(HiveSessionState.scala:62)
        at 
org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:62)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:532)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:652)
        at 
org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:62)
        at 
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:323)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
        at 
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:239)
        at 
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:724)
        at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/05/26 11:11:47 INFO log.PerfLogger: </PERFLOG method=create_database 
start=1464232307903 end=1464232307908 duration=5 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=0 
retryCount=-1 error=true>
16/05/26 11:11:48 INFO log.PerfLogger: <PERFLOG method=get_databases 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
16/05/26 11:11:48 INFO metastore.HiveMetaStore: 0: get_databases: *
16/05/26 11:11:48 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr      
cmd=get_databases: *
16/05/26 11:11:48 INFO log.PerfLogger: </PERFLOG method=get_databases 
start=1464232308202 end=1464232308208 duration=6 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=0 
retryCount=0 error=false>
16/05/26 11:11:48 INFO spark.SparkContext: Starting job: processCmd at 
CliDriver.java:376
16/05/26 11:11:48 INFO scheduler.DAGScheduler: Got job 0 (processCmd at 
CliDriver.java:376) with 1 output partitions
16/05/26 11:11:48 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 
(processCmd at CliDriver.java:376)
16/05/26 11:11:48 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/05/26 11:11:48 INFO scheduler.DAGScheduler: Missing parents: List()
16/05/26 11:11:48 INFO scheduler.DAGScheduler: Submitting ResultStage 0 
(MapPartitionsRDD[2] at processCmd at CliDriver.java:376), which has no missing 
parents
16/05/26 11:11:48 INFO memory.MemoryStore: Block broadcast_0 stored as values 
in memory (estimated size 3.9 KB, free 511.1 MB)
16/05/26 11:11:48 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as 
bytes in memory (estimated size 2.3 KB, free 511.1 MB)
16/05/26 11:11:48 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in 
memory on 192.168.3.11:39454 (size: 2.3 KB, free: 511.1 MB)
16/05/26 11:11:48 INFO spark.SparkContext: Created broadcast 0 from broadcast 
at DAGScheduler.scala:1012
16/05/26 11:11:48 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from 
ResultStage 0 (MapPartitionsRDD[2] at processCmd at CliDriver.java:376)
16/05/26 11:11:48 INFO cluster.YarnScheduler: Adding task set 0.0 with 1 tasks
16/05/26 11:11:49 INFO spark.ExecutorAllocationManager: Requesting 1 new 
executor because tasks are backlogged (new desired total will be 1)
16/05/26 11:11:53 INFO cluster.YarnClientSchedulerBackend: Registered executor 
NettyRpcEndpointRef(null) (192.168.3.15:44052) with ID 1
16/05/26 11:11:53 INFO spark.ExecutorAllocationManager: New executor 1 has 
registered (new total is 1)
16/05/26 11:11:53 INFO storage.BlockManagerMasterEndpoint: Registering block 
manager hw-node5:54623 with 511.1 MB RAM, BlockManagerId(1, hw-node5, 54623)
16/05/26 11:11:53 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 
(TID 0, 192.168.3.15, partition 0, PROCESS_LOCAL, 5549 bytes)
16/05/26 11:11:53 INFO cluster.YarnClientSchedulerBackend: Launching task 0 on 
executor id: 1 hostname: 192.168.3.15.
16/05/26 11:11:54 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in 
memory on hw-node5:54623 (size: 2.3 KB, free: 511.1 MB)
16/05/26 11:11:56 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 
(TID 0) in 2734 ms on 192.168.3.15 (1/1)
16/05/26 11:11:56 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks 
have all completed, from pool
16/05/26 11:11:56 INFO scheduler.DAGScheduler: ResultStage 0 (processCmd at 
CliDriver.java:376) finished in 7.670 s
16/05/26 11:11:56 INFO scheduler.DAGScheduler: Job 0 finished: processCmd at 
CliDriver.java:376, took 7.882660 s
bigbench_bb101_3tb_240_sparksql
default
{code}

{code}
use bigbench_bb101_3tb_240_sparksql;
16/05/26 11:15:49 INFO execution.SparkSqlParser: Parsing command: use 
bigbench_bb101_3tb_240_sparksql
16/05/26 11:15:49 INFO log.PerfLogger: <PERFLOG method=get_database 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
16/05/26 11:15:49 INFO metastore.HiveMetaStore: 0: get_database: 
bigbench_bb101_3tb_240_sparksql
16/05/26 11:15:49 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr      
cmd=get_database: bigbench_bb101_3tb_240_sparksql
16/05/26 11:15:49 INFO log.PerfLogger: </PERFLOG method=get_database 
start=1464232549404 end=1464232549408 duration=4 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=0 
retryCount=0 error=false>
16/05/26 11:15:49 INFO log.PerfLogger: <PERFLOG method=get_database 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
16/05/26 11:15:49 INFO metastore.HiveMetaStore: 0: get_database: 
bigbench_bb101_3tb_240_sparksql
16/05/26 11:15:49 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr      
cmd=get_database: bigbench_bb101_3tb_240_sparksql
16/05/26 11:15:49 INFO log.PerfLogger: </PERFLOG method=get_database 
start=1464232549410 end=1464232549412 duration=2 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=0 
retryCount=0 error=false>
16/05/26 11:15:49 INFO spark.SparkContext: Starting job: processCmd at 
CliDriver.java:376
16/05/26 11:15:49 INFO scheduler.DAGScheduler: Got job 1 (processCmd at 
CliDriver.java:376) with 1 output partitions
16/05/26 11:15:49 INFO scheduler.DAGScheduler: Final stage: ResultStage 1 
(processCmd at CliDriver.java:376)
16/05/26 11:15:49 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/05/26 11:15:49 INFO scheduler.DAGScheduler: Missing parents: List()
16/05/26 11:15:49 INFO scheduler.DAGScheduler: Submitting ResultStage 1 
(MapPartitionsRDD[5] at processCmd at CliDriver.java:376), which has no missing 
parents
16/05/26 11:15:49 INFO memory.MemoryStore: Block broadcast_1 stored as values 
in memory (estimated size 3.2 KB, free 511.1 MB)
16/05/26 11:15:49 INFO memory.MemoryStore: Block broadcast_1_piece0 stored as 
bytes in memory (estimated size 1964.0 B, free 511.1 MB)
16/05/26 11:15:49 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in 
memory on 192.168.3.11:39454 (size: 1964.0 B, free: 511.1 MB)
16/05/26 11:15:49 INFO spark.SparkContext: Created broadcast 1 from broadcast 
at DAGScheduler.scala:1012
16/05/26 11:15:49 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from 
ResultStage 1 (MapPartitionsRDD[5] at processCmd at CliDriver.java:376)
16/05/26 11:15:49 INFO cluster.YarnScheduler: Adding task set 1.0 with 1 tasks
16/05/26 11:15:50 INFO spark.ExecutorAllocationManager: Requesting 1 new 
executor because tasks are backlogged (new desired total will be 1)
16/05/26 11:15:52 INFO spark.ContextCleaner: Cleaned accumulator 0
16/05/26 11:15:52 INFO storage.BlockManagerInfo: Removed broadcast_0_piece0 on 
192.168.3.11:39454 in memory (size: 2.3 KB, free: 511.1 MB)
16/05/26 11:15:53 INFO cluster.YarnClientSchedulerBackend: Registered executor 
NettyRpcEndpointRef(null) (192.168.3.15:44072) with ID 2
16/05/26 11:15:53 INFO spark.ExecutorAllocationManager: New executor 2 has 
registered (new total is 1)
16/05/26 11:15:53 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1.0 
(TID 1, 192.168.3.15, partition 0, PROCESS_LOCAL, 5389 bytes)
16/05/26 11:15:53 INFO cluster.YarnClientSchedulerBackend: Launching task 1 on 
executor id: 2 hostname: 192.168.3.15.
16/05/26 11:15:53 INFO storage.BlockManagerMasterEndpoint: Registering block 
manager hw-node5:56967 with 511.1 MB RAM, BlockManagerId(2, hw-node5, 56967)
16/05/26 11:15:53 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in 
memory on hw-node5:56967 (size: 1964.0 B, free: 511.1 MB)
16/05/26 11:15:55 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1.0 
(TID 1) in 2497 ms on 192.168.3.15 (1/1)
16/05/26 11:15:55 INFO cluster.YarnScheduler: Removed TaskSet 1.0, whose tasks 
have all completed, from pool
16/05/26 11:15:55 INFO scheduler.DAGScheduler: ResultStage 1 (processCmd at 
CliDriver.java:376) finished in 6.284 s
16/05/26 11:15:55 INFO scheduler.DAGScheduler: Job 1 finished: processCmd at 
CliDriver.java:376, took 6.308676 s
Time taken: 6.371 seconds
16/05/26 11:15:55 INFO CliDriver: Time taken: 6.371 seconds
{code}

{code}
show tables;
16/05/26 11:18:01 INFO execution.SparkSqlParser: Parsing command: show tables
16/05/26 11:18:01 INFO log.PerfLogger: <PERFLOG method=get_database 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
16/05/26 11:18:01 INFO metastore.HiveMetaStore: 0: get_database: 
bigbench_bb101_3tb_240_sparksql
16/05/26 11:18:01 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr      
cmd=get_database: bigbench_bb101_3tb_240_sparksql
16/05/26 11:18:01 INFO log.PerfLogger: </PERFLOG method=get_database 
start=1464232681190 end=1464232681193 duration=3 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=0 
retryCount=0 error=false>
16/05/26 11:18:01 INFO log.PerfLogger: <PERFLOG method=get_database 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
16/05/26 11:18:01 INFO metastore.HiveMetaStore: 0: get_database: 
bigbench_bb101_3tb_240_sparksql
16/05/26 11:18:01 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr      
cmd=get_database: bigbench_bb101_3tb_240_sparksql
16/05/26 11:18:01 INFO log.PerfLogger: </PERFLOG method=get_database 
start=1464232681194 end=1464232681196 duration=2 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=0 
retryCount=0 error=false>
16/05/26 11:18:01 INFO log.PerfLogger: <PERFLOG method=get_tables 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
16/05/26 11:18:01 INFO metastore.HiveMetaStore: 0: get_tables: 
db=bigbench_bb101_3tb_240_sparksql pat=*
16/05/26 11:18:01 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr      
cmd=get_tables: db=bigbench_bb101_3tb_240_sparksql pat=*
16/05/26 11:18:01 INFO log.PerfLogger: </PERFLOG method=get_tables 
start=1464232681197 end=1464232681238 duration=41 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=0 
retryCount=0 error=false>
16/05/26 11:18:01 INFO spark.SparkContext: Starting job: processCmd at 
CliDriver.java:376
16/05/26 11:18:01 INFO scheduler.DAGScheduler: Got job 2 (processCmd at 
CliDriver.java:376) with 1 output partitions
16/05/26 11:18:01 INFO scheduler.DAGScheduler: Final stage: ResultStage 2 
(processCmd at CliDriver.java:376)
16/05/26 11:18:01 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/05/26 11:18:01 INFO scheduler.DAGScheduler: Missing parents: List()
16/05/26 11:18:01 INFO scheduler.DAGScheduler: Submitting ResultStage 2 
(MapPartitionsRDD[8] at processCmd at CliDriver.java:376), which has no missing 
parents
16/05/26 11:18:01 INFO memory.MemoryStore: Block broadcast_2 stored as values 
in memory (estimated size 4.0 KB, free 511.1 MB)
16/05/26 11:18:01 INFO memory.MemoryStore: Block broadcast_2_piece0 stored as 
bytes in memory (estimated size 2.4 KB, free 511.1 MB)
16/05/26 11:18:01 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in 
memory on 192.168.3.11:39454 (size: 2.4 KB, free: 511.1 MB)
16/05/26 11:18:01 INFO spark.SparkContext: Created broadcast 2 from broadcast 
at DAGScheduler.scala:1012
16/05/26 11:18:01 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from 
ResultStage 2 (MapPartitionsRDD[8] at processCmd at CliDriver.java:376)
16/05/26 11:18:01 INFO cluster.YarnScheduler: Adding task set 2.0 with 1 tasks
16/05/26 11:18:02 INFO spark.ExecutorAllocationManager: Requesting 1 new 
executor because tasks are backlogged (new desired total will be 1)
16/05/26 11:18:04 INFO cluster.YarnClientSchedulerBackend: Registered executor 
NettyRpcEndpointRef(null) (192.168.3.15:44086) with ID 3
16/05/26 11:18:04 INFO spark.ExecutorAllocationManager: New executor 3 has 
registered (new total is 1)
16/05/26 11:18:04 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 2.0 
(TID 2, 192.168.3.15, partition 0, PROCESS_LOCAL, 5365 bytes)
16/05/26 11:18:04 INFO cluster.YarnClientSchedulerBackend: Launching task 2 on 
executor id: 3 hostname: 192.168.3.15.
16/05/26 11:18:04 INFO storage.BlockManagerMasterEndpoint: Registering block 
manager hw-node5:57277 with 511.1 MB RAM, BlockManagerId(3, hw-node5, 57277)
16/05/26 11:18:05 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in 
memory on hw-node5:57277 (size: 2.4 KB, free: 511.1 MB)
16/05/26 11:18:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 2.0 
(TID 2) in 1952 ms on 192.168.3.15 (1/1)
16/05/26 11:18:06 INFO cluster.YarnScheduler: Removed TaskSet 2.0, whose tasks 
have all completed, from pool
16/05/26 11:18:06 INFO scheduler.DAGScheduler: ResultStage 2 (processCmd at 
CliDriver.java:376) finished in 5.532 s
16/05/26 11:18:06 INFO scheduler.DAGScheduler: Job 2 finished: processCmd at 
CliDriver.java:376, took 5.555326 s
Time taken: 5.677 seconds
16/05/26 11:18:06 INFO CliDriver: Time taken: 5.677 seconds
{code}
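
For what it's worth, here is a minimal PySpark sketch of the pattern named in the issue title 
(a SparkContext that already exists before the SparkSession is built). The app name, master URL 
and the expectations in the comments are my own assumptions for illustration, not taken from the 
logs above:

{code}
# Sketch only: a SparkContext is created first, so options passed later through
# the SparkSession builder may be silently ignored.
from pyspark import SparkContext
from pyspark.sql import SparkSession

sc = SparkContext(master="local[*]", appName="pre-existing-context")  # context exists first

spark = (SparkSession.builder
         .enableHiveSupport()   # intended to switch the session to the Hive catalog
         .getOrCreate())        # reuses the already-running SparkContext

# If the builder's conf took effect, this should list the Hive databases,
# not just 'default'.
print(spark.sql("SHOW DATABASES").collect())
{code}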


> SparkSession's conf doesn't take effect when there's already an existing 
> SparkContext
> -------------------------------------------------------------------------------------
>
>                 Key: SPARK-15345
>                 URL: https://issues.apache.org/jira/browse/SPARK-15345
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, SQL
>            Reporter: Piotr Milanowski
>            Assignee: Reynold Xin
>            Priority: Blocker
>             Fix For: 2.0.0
>
>
> I am working with branch-2.0; Spark is compiled with Hive support (-Phive and 
> -Phive-thriftserver).
> I am trying to access databases using this snippet:
> {code}
> from pyspark.sql import HiveContext
> hc = HiveContext(sc)
> hc.sql("show databases").collect()
> [Row(result='default')]
> {code}
> This means that Spark doesn't find any of the databases specified in the configuration.
> Using the same configuration (i.e. hive-site.xml and core-site.xml) in Spark 
> 1.6 and launching the above snippet, I can print out the existing databases.
> When run in DEBUG mode, this is what Spark (2.0) prints out:
> {code}
> 16/05/16 12:17:47 INFO SparkSqlParser: Parsing command: show databases
> 16/05/16 12:17:47 DEBUG SimpleAnalyzer: 
> === Result of Batch Resolution ===
> !'Project [unresolveddeserializer(createexternalrow(if (isnull(input[0, 
> string])) null else input[0, string].toString, 
> StructField(result,StringType,false)), result#2) AS #3]   Project 
> [createexternalrow(if (isnull(result#2)) null else result#2.toString, 
> StructField(result,StringType,false)) AS #3]
>  +- LocalRelation [result#2]                                                  
>                                                                               
>                      +- LocalRelation [result#2]
>         
> 16/05/16 12:17:47 DEBUG ClosureCleaner: +++ Cleaning closure <function1> 
> (org.apache.spark.sql.Dataset$$anonfun$53) +++
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + declared fields: 2
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public static final long 
> org.apache.spark.sql.Dataset$$anonfun$53.serialVersionUID
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      private final 
> org.apache.spark.sql.types.StructType 
> org.apache.spark.sql.Dataset$$anonfun$53.structType$1
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + declared methods: 2
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public final java.lang.Object 
> org.apache.spark.sql.Dataset$$anonfun$53.apply(java.lang.Object)
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public final java.lang.Object 
> org.apache.spark.sql.Dataset$$anonfun$53.apply(org.apache.spark.sql.catalyst.InternalRow)
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + inner classes: 0
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + outer classes: 0
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + outer objects: 0
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + populating accessed fields because 
> this is the starting closure
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + fields accessed by starting 
> closure: 0
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + there are no enclosing objects!
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  +++ closure <function1> 
> (org.apache.spark.sql.Dataset$$anonfun$53) is now cleaned +++
> 16/05/16 12:17:47 DEBUG ClosureCleaner: +++ Cleaning closure <function1> 
> (org.apache.spark.sql.execution.python.EvaluatePython$$anonfun$javaToPython$1)
>  +++
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + declared fields: 1
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public static final long 
> org.apache.spark.sql.execution.python.EvaluatePython$$anonfun$javaToPython$1.serialVersionUID
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + declared methods: 2
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public final java.lang.Object 
> org.apache.spark.sql.execution.python.EvaluatePython$$anonfun$javaToPython$1.apply(java.lang.Object)
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public final 
> org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler 
> org.apache.spark.sql.execution.python.EvaluatePython$$anonfun$javaToPython$1.apply(scala.collection.Iterator)
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + inner classes: 0
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + outer classes: 0
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + outer objects: 0
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + populating accessed fields because 
> this is the starting closure
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + fields accessed by starting 
> closure: 0
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + there are no enclosing objects!
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  +++ closure <function1> 
> (org.apache.spark.sql.execution.python.EvaluatePython$$anonfun$javaToPython$1)
>  is now cleaned +++
> 16/05/16 12:17:47 DEBUG ClosureCleaner: +++ Cleaning closure <function1> 
> (org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13) +++
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + declared fields: 2
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public static final long 
> org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.serialVersionUID
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      private final 
> org.apache.spark.rdd.RDD$$anonfun$collect$1 
> org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.$outer
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + declared methods: 2
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public final java.lang.Object 
> org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(java.lang.Object)
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public final java.lang.Object 
> org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(scala.collection.Iterator)
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + inner classes: 0
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + outer classes: 2
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      
> org.apache.spark.rdd.RDD$$anonfun$collect$1
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      org.apache.spark.rdd.RDD
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + outer objects: 2
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      <function0>
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      MapPartitionsRDD[5] at collect 
> at <stdin>:1
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + populating accessed fields because 
> this is the starting closure
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + fields accessed by starting 
> closure: 2
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      (class 
> org.apache.spark.rdd.RDD$$anonfun$collect$1,Set($outer))
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      (class 
> org.apache.spark.rdd.RDD,Set(org$apache$spark$rdd$RDD$$evidence$1))
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + outermost object is not a closure 
> or REPL line object, so do not clone it: (class 
> org.apache.spark.rdd.RDD,MapPartitionsRDD[5] at collect at <stdin>:1)
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + cloning the object <function0> of 
> class org.apache.spark.rdd.RDD$$anonfun$collect$1
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + cleaning cloned closure 
> <function0> recursively (org.apache.spark.rdd.RDD$$anonfun$collect$1)
> 16/05/16 12:17:47 DEBUG ClosureCleaner: +++ Cleaning closure <function0> 
> (org.apache.spark.rdd.RDD$$anonfun$collect$1) +++
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + declared fields: 2
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public static final long 
> org.apache.spark.rdd.RDD$$anonfun$collect$1.serialVersionUID
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      private final 
> org.apache.spark.rdd.RDD org.apache.spark.rdd.RDD$$anonfun$collect$1.$outer
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + declared methods: 2
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public org.apache.spark.rdd.RDD 
> org.apache.spark.rdd.RDD$$anonfun$collect$1.org$apache$spark$rdd$RDD$$anonfun$$$outer()
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public final java.lang.Object 
> org.apache.spark.rdd.RDD$$anonfun$collect$1.apply()
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + inner classes: 1
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      
> org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + outer classes: 1
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      org.apache.spark.rdd.RDD
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + outer objects: 1
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      MapPartitionsRDD[5] at collect 
> at <stdin>:1
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + fields accessed by starting 
> closure: 2
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      (class 
> org.apache.spark.rdd.RDD$$anonfun$collect$1,Set($outer))
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      (class 
> org.apache.spark.rdd.RDD,Set(org$apache$spark$rdd$RDD$$evidence$1))
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + outermost object is not a closure 
> or REPL line object, so do not clone it: (class 
> org.apache.spark.rdd.RDD,MapPartitionsRDD[5] at collect at <stdin>:1)
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  +++ closure <function0> 
> (org.apache.spark.rdd.RDD$$anonfun$collect$1) is now cleaned +++
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  +++ closure <function1> 
> (org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13) is now cleaned +++
> 16/05/16 12:17:47 DEBUG ClosureCleaner: +++ Cleaning closure <function2> 
> (org.apache.spark.SparkContext$$anonfun$runJob$5) +++
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + declared fields: 2
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public static final long 
> org.apache.spark.SparkContext$$anonfun$runJob$5.serialVersionUID
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      private final scala.Function1 
> org.apache.spark.SparkContext$$anonfun$runJob$5.cleanedFunc$1
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + declared methods: 2
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public final java.lang.Object 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(java.lang.Object,java.lang.Object)
> 16/05/16 12:17:47 DEBUG ClosureCleaner:      public final java.lang.Object 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(org.apache.spark.TaskContext,scala.collection.Iterator)
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + inner classes: 0
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + outer classes: 0
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + outer objects: 0
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + populating accessed fields because 
> this is the starting closure
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + fields accessed by starting 
> closure: 0
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  + there are no enclosing objects!
> 16/05/16 12:17:47 DEBUG ClosureCleaner:  +++ closure <function2> 
> (org.apache.spark.SparkContext$$anonfun$runJob$5) is now cleaned +++
> 16/05/16 12:17:47 INFO SparkContext: Starting job: collect at <stdin>:1
> 16/05/16 12:17:47 INFO DAGScheduler: Got job 1 (collect at <stdin>:1) with 1 
> output partitions
> 16/05/16 12:17:47 INFO DAGScheduler: Final stage: ResultStage 1 (collect at 
> <stdin>:1)
> 16/05/16 12:17:47 INFO DAGScheduler: Parents of final stage: List()
> 16/05/16 12:17:47 INFO DAGScheduler: Missing parents: List()
> 16/05/16 12:17:47 DEBUG DAGScheduler: submitStage(ResultStage 1)
> 16/05/16 12:17:47 DEBUG DAGScheduler: missing: List()
> 16/05/16 12:17:47 INFO DAGScheduler: Submitting ResultStage 1 
> (MapPartitionsRDD[5] at collect at <stdin>:1), which has no missing parents
> 16/05/16 12:17:47 DEBUG DAGScheduler: submitMissingTasks(ResultStage 1)
> 16/05/16 12:17:47 INFO MemoryStore: Block broadcast_1 stored as values in 
> memory (estimated size 3.1 KB, free 5.8 GB)
> 16/05/16 12:17:47 DEBUG BlockManager: Put block broadcast_1 locally took  1 ms
> 16/05/16 12:17:47 DEBUG BlockManager: Putting block broadcast_1 without 
> replication took  1 ms
> 16/05/16 12:17:47 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes 
> in memory (estimated size 1856.0 B, free 5.8 GB)
> 16/05/16 12:17:47 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory 
> on 188.165.13.157:35738 (size: 1856.0 B, free: 5.8 GB)
> 16/05/16 12:17:47 DEBUG BlockManagerMaster: Updated info of block 
> broadcast_1_piece0
> 16/05/16 12:17:47 DEBUG BlockManager: Told master about block 
> broadcast_1_piece0
> 16/05/16 12:17:47 DEBUG BlockManager: Put block broadcast_1_piece0 locally 
> took  1 ms
> 16/05/16 12:17:47 DEBUG BlockManager: Putting block broadcast_1_piece0 
> without replication took  2 ms
> 16/05/16 12:17:47 INFO SparkContext: Created broadcast 1 from broadcast at 
> DAGScheduler.scala:1012
> 16/05/16 12:17:47 INFO DAGScheduler: Submitting 1 missing tasks from 
> ResultStage 1 (MapPartitionsRDD[5] at collect at <stdin>:1)
> 16/05/16 12:17:47 DEBUG DAGScheduler: New pending partitions: Set(0)
> 16/05/16 12:17:47 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
> 16/05/16 12:17:47 DEBUG TaskSetManager: Epoch for TaskSet 1.0: 0
> 16/05/16 12:17:47 DEBUG TaskSetManager: Valid locality levels for TaskSet 
> 1.0: NO_PREF, ANY
> 16/05/16 12:17:47 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_1, 
> runningTasks: 0
> 16/05/16 12:17:47 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, 
> xxx3, partition 0, PROCESS_LOCAL, 5542 bytes)
> 16/05/16 12:17:47 DEBUG TaskSetManager: No tasks for locality level NO_PREF, 
> so moving to locality level ANY
> 16/05/16 12:17:47 INFO SparkDeploySchedulerBackend: Launching task 1 on 
> executor id: 0 hostname: xxx3.
> 16/05/16 12:17:48 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_1, 
> runningTasks: 1
> 16/05/16 12:17:48 DEBUG BlockManager: Getting local block broadcast_1_piece0 
> as bytes
> 16/05/16 12:17:48 DEBUG BlockManager: Level for block broadcast_1_piece0 is 
> StorageLevel(disk=true, memory=true, offheap=false, deserialized=false, 
> replication=1)
> 16/05/16 12:17:48 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory 
> on 188.165.13.158:53616 (size: 1856.0 B, free: 14.8 GB)
> 16/05/16 12:17:49 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_1, 
> runningTasks: 1
> 16/05/16 12:17:50 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_1, 
> runningTasks: 1
> 16/05/16 12:17:50 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_1, 
> runningTasks: 0
> 16/05/16 12:17:50 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) 
> in 2156 ms on xxx3 (1/1)
> 16/05/16 12:17:50 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks 
> have all completed, from pool 
> 16/05/16 12:17:50 INFO DAGScheduler: ResultStage 1 (collect at <stdin>:1) 
> finished in 2.158 s
> 16/05/16 12:17:50 DEBUG DAGScheduler: After removal of stage 1, remaining 
> stages = 0
> 16/05/16 12:17:50 INFO DAGScheduler: Job 1 finished: collect at <stdin>:1, 
> took 2.174808 s
> {code}
> I can't see any information about a Hive connection in this trace.
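
As a hedged way to confirm that last point (no Hive connection visible), one could check which 
catalog implementation the session actually ended up with. Reading the setting through the 
SparkContext's private conf is an assumption on my part, purely for illustration, not something 
shown in the trace above:

{code}
# Sketch only: check whether the session is backed by the Hive catalog or the
# in-memory one. "spark.sql.catalogImplementation" is the setting Spark 2.0 uses
# to choose the catalog; sc._conf is PySpark's (private) handle to the underlying
# SparkConf, used here only for illustration.
impl = sc._conf.get("spark.sql.catalogImplementation", "in-memory")
print(impl)  # expected 'hive' when hive-site.xml is honored, 'in-memory' otherwise
{code}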


