You may want to check whether you have hive-site.xml under Zeppelin's and Spark's conf folders…
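
For example (a minimal sketch; the paths below are taken from your zeppelin-env.sh and the error output, so adjust to your install):

    # look for hive-site.xml in the places Spark and Zeppelin read config from
    ls -l $SPARK_HOME/conf/hive-site.xml
    ls -l /opt/zeppelin/zeppelin-0.6.2-bin-all/conf/hive-site.xml

    # if it's missing, copying in the cluster's copy is the usual fix
    cp /etc/hive/conf/hive-site.xml $SPARK_HOME/conf/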

Thanks
Herman.



> On Nov 14, 2016, at 03:07, Ruslan Dautkhanov <dautkha...@gmail.com> wrote:
> 
> Dear Apache Zeppelin User group,
> 
> Got Zeppelin running, but I can't get the %spark.sql interpreter working
> correctly; I'm getting [1] in the console output.
> 
> Running latest Zeppelin (0.6.2), Spark 2.0, Hive 1.1, Hadoop 2.6, Java 7.
> 
> My understanding is that Spark wants to initialize a Hive context, but Hive
> isn't getting its config correctly, so it tries to create a local Derby database
> in ZEPPELIN_HOME/bin/.
> We don't use Derby for HMS, though; our metastore uses a remote RDBMS, so clearly
> Hive/Spark isn't picking up the Hive settings correctly.
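> 
> For reference, the property that distinguishes a remote HMS from embedded
> Derby is hive.metastore.uris; a quick sanity check (the thrift host is
> whatever your cluster uses):
> 
>     grep -A 1 hive.metastore.uris /etc/hive/conf/hive-site.xml
>     # expect something like thrift://<metastore-host>:9083; if Spark never
>     # sees this file, Hive falls back to embedded Derby in the current
>     # working directory, hence ZEPPELIN_HOME/bin/metastore_db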
> 
> Btw, adding
> export ZEPPELIN_SPARK_USEHIVECONTEXT=false
> to zeppelin-env.sh didn't change anything.
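> 
> (The same toggle also exists as the Spark interpreter property
> zeppelin.spark.useHiveContext; one way to confirm which value actually took
> effect, assuming the default conf location:
> 
>     grep -i usehivecontext /opt/zeppelin/zeppelin-0.6.2-bin-all/conf/interpreter.json
> )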
> 
> See [0] for zeppelin-env.sh.
> 
> 
> PS. I was trying to work around this by setting
> export SPARK_CLASSPATH=$HIVE_HOME/hive/lib/*:/etc/hive/conf
> but that brings its own problem; Spark complains:
> "Caused by: org.apache.spark.SparkException: Found both
> spark.driver.extraClassPath and SPARK_CLASSPATH. Use only the former."
> Sidenote: interestingly, spark-shell with the same SPARK_CLASSPATH
> gives the opposite message:
> "SPARK_CLASSPATH was detected (set to '...').
> This is deprecated in Spark 1.0+."
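> 
> Since Spark says to prefer spark.driver.extraClassPath, one alternative I
> may try (a sketch; --driver-class-path and --files are standard spark-submit
> flags, paths as in [0]):
> 
>     export SPARK_SUBMIT_OPTIONS="$SPARK_SUBMIT_OPTIONS \
>       --driver-class-path /etc/hive/conf \
>       --files /etc/hive/conf/hive-site.xml"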
> 
> 
> 
> Thank you,
> Ruslan Dautkhanov
> 
> 
> [0]
> 
> export JAVA_HOME=/usr/java/java7
> export ZEPPELIN_MEM="-Xms1024m -Xmx2048m -XX:MaxPermSize=512m"
> 
> export ZEPPELIN_LOG_DIR="/home/rdautkha/zeppelin/log"
> export ZEPPELIN_PID_DIR="/home/rdautkha/zeppelin/run"
> export ZEPPELIN_WAR_TEMPDIR="/home/rdautkha/zeppelin/tmp"
> export ZEPPELIN_NOTEBOOK_DIR="/home/rdautkha/zeppelin/notebooks"
> 
> export SPARK_HOME=/opt/cloudera/parcels/SPARK2/lib/spark2
> export SPARK_SUBMIT_OPTIONS="--principal rdautkha...@corp.epsilon.com --keytab /home/rdautkha/.kt --conf spark.driver.memory=7g --conf spark.executor.cores=2 --conf spark.executor.memory=8g"
> export SPARK_APP_NAME="Zeppelin (test notebook 1)"
> 
> export HADOOP_CONF_DIR=/etc/hadoop/conf
> 
> export PYSPARK_PYTHON="/opt/cloudera/parcels/Anaconda/bin/python2"
> export PYTHONPATH="/opt/cloudera/parcels/SPARK2/lib/spark2/python:/opt/cloudera/parcels/SPARK2/lib/spark2/python/lib/py4j-0.10.3-src.zip"
> export MASTER="yarn-client"
> 
> # these last three lines (and their combinations) were added in an attempt to resolve the problem:
> export SPARK_CLASSPATH=/opt/cloudera/parcels/CDH/lib/hive/lib/*:/etc/hive/conf
> export ZEPPELIN_SPARK_USEHIVECONTEXT=false
> export HIVE_CONF_DIR=/etc/hive/conf
> 
> 
> 
> [1]
> 
> Caused by: ERROR XBM0H: Directory /opt/zeppelin/zeppelin-0.6.2-bin-all/bin/metastore_db cannot be created.
>         at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
>         at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
>         at org.apache.derby.impl.services.monitor.StorageFactoryService$10.run(Unknown Source)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown Source)
>         at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown Source)
>         at org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown Source)
>         at org.apache.derby.impl.services.monitor.FileMonitor.createPersistentService(Unknown Source)
>         at org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown Source)
>         ... 104 more
> ============= end nested exception, level (1) ===========
> ============= begin nested exception, level (2) ===========
> ERROR XBM0H: Directory /opt/zeppelin/zeppelin-0.6.2-bin-all/bin/metastore_db cannot be created.
>         at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
>         at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
>         at org.apache.derby.impl.services.monitor.StorageFactoryService$10.run(Unknown Source)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown Source)
>         at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown Source)
>         at org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown Source)
>         at org.apache.derby.impl.services.monitor.FileMonitor.createPersistentService(Unknown Source)
>         at org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown Source)
>         at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown Source)
>         at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
>         at org.apache.derby.jdbc.InternalDriver.getNewEmbedConnection(Unknown Source)
>         at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
>         at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
>         at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
>         at java.sql.DriverManager.getConnection(DriverManager.java:571)
>         at java.sql.DriverManager.getConnection(DriverManager.java:187)
>         at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
>         at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
>         at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
>         at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:501)
>         at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:298)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
>         at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
>         at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
>         at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
>         at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
>         at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
>         at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
>         at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
>         at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
>         at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
>         at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:411)
>         at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:440)
>         at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:335)
>         at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:291)
>         at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>         at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>         at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
> 
> 
> 
