Hi Ted,

I tried setting spark.sql.dialect to sql to indicate that I only need SQLContext, 
not HiveContext, but it didn't work: spark-shell still instantiates HiveContext. 
I don't use HiveContext, and I don't want to stand up a MySQL-backed metastore 
just so that more than one spark-shell session can run at the same time. Is there 
an easy way around this? (The partial workarounds I'm aware of are sketched after 
the exception below.) More of the exception here:

Caused by: java.sql.SQLException: Unable to open a test connection to the given database. JDBC url = jdbc:derby:;databaseName=metastore_db;create=true, username = APP. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
java.sql.SQLException: Failed to start database 'metastore_db' with class loader org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@53a39109, see the next exception for details.
        at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
        at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
        at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
        at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source)
        at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
        at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
        at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
        at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
        at org.apache.derby.jdbc.Driver20.connect(Unknown Source)
        at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
        at java.sql.DriverManager.getConnection(DriverManager.java:571)

Best Regards,

Jerry

> On Nov 6, 2015, at 12:12 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> 
> In SQLContext.scala :
>     // After we have populated SQLConf, we call setConf to populate other 
> confs in the subclass
>     // (e.g. hiveconf in HiveContext).
>     properties.foreach {
>       case (key, value) => setConf(key, value)
>     }
> 
> I don't see a config for skipping the above call.
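> 
> To illustrate with the public API (a sketch, assuming Spark 1.x; sc is the 
> shell's SparkContext): the second line below is essentially what spark-shell 
> does at startup, and it is that constructor, via the setConf loop above, that 
> lazily boots the Derby-backed metastore client:
> 
>     // plain SQLContext: never touches the Hive metastore
>     val sqlContext = new org.apache.spark.sql.SQLContext(sc)
>     // HiveContext: its constructor runs the setConf loop, which forces
>     // metadataHive and opens metastore_db; a second concurrent shell
>     // fails right here on the Derby lock
>     val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)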
> 
> FYI
> 
> On Fri, Nov 6, 2015 at 8:53 AM, Jerry Lam <chiling...@gmail.com> wrote:
> Hi spark users and developers,
> 
> Is it possible to disable HiveContext from being instantiated when using 
> spark-shell? I get the following errors when more than one session starts. 
> Since I don't use HiveContext, it would be great if I could have more than 
> one spark-shell running at the same time.
> 
> Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
>         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
>         at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:171)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.spark.sql.hive.client.IsolatedClientLoader.liftedTree1$1(IsolatedClientLoader.scala:183)
>         at org.apache.spark.sql.hive.client.IsolatedClientLoader.<init>(IsolatedClientLoader.scala:179)
>         at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:226)
>         at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:185)
>         at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:392)
>         at org.apache.spark.sql.SQLContext$$anonfun$5.apply(SQLContext.scala:235)
>         at org.apache.spark.sql.SQLContext$$anonfun$5.apply(SQLContext.scala:234)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>         at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>         at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>         at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:234)
>         at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:72)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
>         at org.apache.spark.repl.SparkILoopExt.importSpark(SparkILoopExt.scala:154)
>         at org.apache.spark.repl.SparkILoopExt$$anonfun$process$1.apply$mcZ$sp(SparkILoopExt.scala:127)
>         at org.apache.spark.repl.SparkILoopExt$$anonfun$process$1.apply(SparkILoopExt.scala:113)
>         at org.apache.spark.repl.SparkILoopExt$$anonfun$process$1.apply(SparkILoopExt.scala:113)
> 
> Best Regards,
> 
> Jerry
> 
