There are two processes: the first one runs in embedded mode, and the second
is the normal spark-submit launch mode. I am not sure which process you are
using. Can you kill both processes and run again?
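Something like the following would stop both (a sketch; it assumes `pkill` is available and uses the process name from the `ps` output quoted below):

```shell
# Show any running Zeppelin interpreter JVMs; the [R] trick keeps grep
# from matching its own command line, and || true tolerates no matches.
ps aux | grep '[R]emoteInterpreterServer' || true
# Stop them by matching the full command line.
pkill -f RemoteInterpreterServer || echo "no interpreter process found"
```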

And are you sure you put the Hadoop conf files under /opt/hadoop/conf/?
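A quick check like this (a sketch; the paths are the ones from this thread, so adjust them if yours differ) would show whether the YARN client configs are actually there:

```shell
# The Spark interpreter needs the Hadoop client configs on its classpath.
# If yarn-site.xml is missing or unreadable, the YARN client falls back to
# the default ResourceManager address 0.0.0.0:8032 -- exactly what the log
# below shows.
HADOOP_CONF_DIR=/opt/hadoop/conf   # path used in this thread
for f in core-site.xml yarn-site.xml; do
  if [ -r "$HADOOP_CONF_DIR/$f" ]; then
    echo "found $f"
  else
    echo "missing $f"
  fi
done
# The same directory should be exported in conf/zeppelin-env.sh:
#   export HADOOP_CONF_DIR=/opt/hadoop/conf
```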

On Sun, Oct 9, 2016 at 3:14 PM, Xi Shen <davidshe...@gmail.com> wrote:

>
> ps aux | grep RemoteInterpreterServer
>
> root        71  0.1  3.3 3528164 133668 ?      Sl   06:21   0:04
> /usr/lib/jvm/java-8-openjdk-amd64//bin/java -Dfile.encoding=UTF-8
> -Dlog4j.configuration=file:///opt/zeppelin-0.6.1-bin-all/conf/log4j.properties
> -Dzeppelin.log.file=/opt/zeppelin-0.6.1-bin-all/logs/zeppelin-interpreter-sh--cad975db8a21.log
> -Xms1024m -Xmx1024m -XX:MaxPermSize=512m
> -cp ::/opt/zeppelin-0.6.1-bin-all/interpreter/sh/*::/opt/zeppelin-0.6.1-bin-all/lib/zeppelin-interpreter-0.6.1.jar
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer 45877
> root       346  0.7 13.7 3612088 555092 ?      Sl   06:28   0:18
> /usr/lib/jvm/java-8-openjdk-amd64//bin/java
> -cp /opt/zeppelin-0.6.1-bin-all/interpreter/spark/*:/opt/zeppelin-0.6.1-bin-all/lib/zeppelin-interpreter-0.6.1.jar:/opt/zeppelin-0.6.1-bin-all/interpreter/spark/zeppelin-spark_2.11-0.6.1.jar:/opt/spark/conf/:/opt/spark/jars/*:/opt/hadoop/conf/
> -Xmx1g -Dfile.encoding=UTF-8
> -Dlog4j.configuration=file:///opt/zeppelin-0.6.1-bin-all/conf/log4j.properties
> -Dzeppelin.log.file=/opt/zeppelin-0.6.1-bin-all/logs/zeppelin-interpreter-spark--cad975db8a21.log
> org.apache.spark.deploy.SparkSubmit
> --conf spark.driver.extraClassPath=::/opt/zeppelin-0.6.1-bin-all/interpreter/spark/*::/opt/zeppelin-0.6.1-bin-all/lib/zeppelin-interpreter-0.6.1.jar:/opt/zeppelin-0.6.1-bin-all/interpreter/spark/zeppelin-spark_2.11-0.6.1.jar
> --conf spark.driver.extraJavaOptions= -Dfile.encoding=UTF-8
> -Dlog4j.configuration=file:///opt/zeppelin-0.6.1-bin-all/conf/log4j.properties
> -Dzeppelin.log.file=/opt/zeppelin-0.6.1-bin-all/logs/zeppelin-interpreter-spark--cad975db8a21.log
> --class org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer
> /opt/zeppelin-0.6.1-bin-all/interpreter/spark/zeppelin-spark_2.11-0.6.1.jar 44320
>
> I started my Zeppelin instance with
>
> /opt/zeppelin-0.6.1-bin-all/bin/zeppelin.sh --conf
> /opt/zeppelin-0.6.1-bin-all/conf
>
> On Sun, Oct 9, 2016 at 2:54 PM Jeff Zhang <zjf...@gmail.com> wrote:
>
>> Could you check the classpath of the interpreter process? I suspect you
>> didn't export HADOOP_CONF_DIR correctly.
>>
>> Run the following command:
>>
>>       ps aux | grep RemoteInterpreterServer
>>
>> On Sun, Oct 9, 2016 at 2:36 PM, Xi Shen <davidshe...@gmail.com> wrote:
>>
>> Hi,
>>
>> I followed http://zeppelin.apache.org/docs/latest/interpreter/spark.html,
>> and set up SPARK_HOME and HADOOP_CONF_DIR.
>>
>> My Spark build is 2.0. My Zeppelin build is the 0.6.1 binary from the web.
>>
>> After I started Zeppelin, I went to the interpreter settings page and
>> changed the Spark interpreter settings as follows:
>>
>> master: yarn
>> deploy-mode: client
>>
>> Then, I created a new notebook and executed:
>>
>> %spark
>>
>> spark.version
>>
>> The block never finishes, and there is no error either.
>>
>> In the ./logs/zeppelin-interpreter-spark-*.log, I found the following,
>> which I think is the cause of my problem:
>>
>>  INFO [2016-10-09 06:28:37,074] ({pool-2-thread-2}
>> Logging.scala[logInfo]:54) - Added JAR
>> file:/opt/zeppelin-0.6.1-bin-all/interpreter/spark/zeppelin-spark_2.11-0.6.1.jar
>> at spark://172.17.0.3:38775/jars/zeppelin-spark_2.11-0.6.1.jar with
>> timestamp 1475994517073
>>  INFO [2016-10-09 06:28:37,150] ({pool-2-thread-2}
>> Logging.scala[logInfo]:54) - Created default pool default, schedulingMode:
>> FIFO, minShare: 0, weight: 1
>>  INFO [2016-10-09 06:28:38,205] ({pool-2-thread-2}
>> RMProxy.java[createRMProxy]:98) - Connecting to ResourceManager at
>> /0.0.0.0:8032
>>
>> It looks like Zeppelin is not using my Spark binary, and not using my
>> Hadoop configuration either.
>>
>> What did I miss?
>>
>> --
>>
>>
>> Thanks,
>> David S.
>>
>>
>>
>>
>> --
>> Best Regards
>>
>> Jeff Zhang
>>
> --
>
>
> Thanks,
> David S.
>



-- 
Best Regards

Jeff Zhang
