Jeff:

    When I run a simple spark.version paragraph, I sometimes get this:
INFO [2019-03-15 01:12:18,720] ({pool-2-thread-49} RemoteInterpreter.java[call]:142) - Open RemoteInterpreter org.apache.zeppelin.spark.SparkInterpreter
 INFO [2019-03-15 01:12:18,721] ({pool-2-thread-49} RemoteInterpreter.java[pushAngularObjectRegistryToRemote]:436) - Push local angular object registry from ZeppelinServer to remote interpreter group spark:shared_process
 WARN [2019-03-15 01:13:30,593] ({pool-2-thread-49} NotebookServer.java[afterStatusChange]:2316) - Job 20190207-030535_192412278 is finished, status: ERROR, exception: null, result: %text
java.lang.IllegalStateException: Spark context stopped while waiting for backend
        at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:614)
        at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:169)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:567)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:117)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2336)
        at org.apache.spark.SparkContext.getOrCreate(SparkContext.scala)
        at org.apache.zeppelin.spark.Spark2Shims.setupSparkListener(Spark2Shims.java:38)
        at org.apache.zeppelin.spark.NewSparkInterpreter.open(NewSparkInterpreter.java:120)
        at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:62)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:616)
        at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
        at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

 INFO [2019-03-15 01:13:30,598] ({pool-2-thread-49} VFSNotebookRepo.java[save]:196) - Saving note:2E4D6HQ3F
 INFO [2019-03-15 01:13:30,600] ({pool-2-thread-49} SchedulerFactory.java[jobFinished]:120) - Job 20190207-030535_192412278 finished by scheduler org.apache.zeppelin.interpreter.remote.RemoteInterpreter-spark:shared_process-shared_session
When I run this Spark SQL paragraph:

// DataStore params to a hypothetical GeoMesa Accumulo table
val dsParams = Map(
  "instanceId" -> "oedl",
  "zookeepers" -> "oedevnode00,oedevnode01,oedevnode02",
  "user"       -> "oe_user",
  "password"   -> "XXXXXXX",
  "tableName"  -> "CoalesceSearch")

// Create DataFrame using the "geomesa" format
val docdataFrame = spark.read.format("geomesa")
  .options(dsParams)
  .option("geomesa.feature", "oedocumentrecordset")
  .load()
docdataFrame.createOrReplaceTempView("documentview")

Here is the complete stack trace:

INFO [2019-03-15 01:07:21,569] ({pool-2-thread-43} Paragraph.java[jobRun]:380) - Run paragraph [paragraph_id: 20190222-204451_856915056, interpreter: , note_id: 2E6X2CDWW, user: anonymous]
 WARN [2019-03-15 01:07:27,098] ({pool-2-thread-43} NotebookServer.java[afterStatusChange]:2316) - Job 20190222-204451_856915056 is finished, status: ERROR, exception: null, result: %text
java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:

org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.zeppelin.spark.BaseSparkScalaInterpreter.spark2CreateContext(BaseSparkScalaInterpreter.scala:259)
org.apache.zeppelin.spark.BaseSparkScalaInterpreter.createSparkContext(BaseSparkScalaInterpreter.scala:178)
org.apache.zeppelin.spark.SparkScala211Interpreter.open(SparkScala211Interpreter.scala:89)
org.apache.zeppelin.spark.NewSparkInterpreter.open(NewSparkInterpreter.java:102)
org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:62)
org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:616)
org.apache.zeppelin.scheduler.Job.run(Job.java:188)
org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

The currently active SparkContext was created at:

(No active SparkContext.)

  at org.apache.spark.SparkContext.assertNotStopped(SparkContext.scala:100)
  at org.apache.spark.SparkContext$$anonfun$parallelize$1.apply(SparkContext.scala:716)
  at org.apache.spark.SparkContext$$anonfun$parallelize$1.apply(SparkContext.scala:715)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.SparkContext.withScope(SparkContext.scala:701)
  at org.apache.spark.SparkContext.parallelize(SparkContext.scala:715)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
  at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:2822)
  at org.apache.spark.sql.Dataset.createOrReplaceTempView(Dataset.scala:2605)
  ... 47 elided

 INFO [2019-03-15 01:07:27,118] ({pool-2-thread-43} VFSNotebookRepo.java[save]:196) - Saving note:2E6X2CDWW
 INFO [2019-03-15 01:07:27,124] ({pool-2-thread-43} SchedulerFactory.java[jobFinished]:120) - Job 20190222-204451_856915056 finished by scheduler org.apache.zeppelin.interpreter.remote.RemoteInterpreter-spark:shared_process-shared_session

On 3/14/19 9:02 PM, Jeff Zhang wrote:
Hi Dave,

Could you paste the full stack trace? You can find it in the Spark interpreter
log file, which is located under ZEPPELIN_HOME/logs.
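For example, a sketch like this narrows down where to look (the default install location and the interpreter log file name pattern are assumptions; adjust for your deployment):

```shell
# Resolve the Zeppelin log directory; /opt/zeppelin is only an assumed default.
ZEPPELIN_HOME="${ZEPPELIN_HOME:-/opt/zeppelin}"
LOG_DIR="$ZEPPELIN_HOME/logs"
echo "$LOG_DIR"
# On the Zeppelin host, the Spark interpreter log can then be searched with:
#   grep -n -B2 -A20 'IllegalStateException' "$LOG_DIR"/zeppelin-interpreter-spark-*.log
```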

Xun Liu <neliu...@163.com> wrote on Fri, Mar 15, 2019 at 8:21 AM:
Hi

First, execute a simple statement in Spark, through Spark SQL, to see whether
it runs normally on YARN.
If Spark SQL runs without problems, then look into Zeppelin and Spark-on-YARN
integration issues.
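One way to do that check outside Zeppelin entirely is to run the stock SparkPi example against YARN. This is only a sketch: SPARK_HOME and the examples jar version below are assumptions, so substitute your own.

```shell
# Compose a spark-submit sanity check for Spark on YARN (client mode, matching
# Zeppelin's yarn-client). Run the printed command on a cluster edge node.
SPARK_HOME="${SPARK_HOME:-/opt/spark}"
CMD="$SPARK_HOME/bin/spark-submit --master yarn --deploy-mode client --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.11-2.2.0.jar 10"
echo "$CMD"
```

If SparkPi succeeds, YARN itself is healthy and the problem is more likely in how Zeppelin launches the interpreter.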

Also, which version are you using: zeppelin-0.7.4 or zeppelin-0.8.2? Or is it
a branch that you maintain yourself?

On Mar 15, 2019, at 6:31 AM, Dave Boyd <db...@incadencecorp.com> wrote:


All:

   I have some code that worked fine in Zeppelin 0.7.4, but I am having issues
in 0.8.2 when switching the Spark master from local to yarn-client. Yarn-client
worked in 0.7.4.

When my master is set to local[*] it runs just fine. However, as soon as I
switch to yarn-client I get the "Cannot call methods on a stopped SparkContext"
error. Looking at my YARN logs, everything is created fine and the job finishes
without an error; the executors start just fine from what I see in the YARN
logs.
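Roughly, that YARN log check can be done like this (the application ID below is a placeholder; the real one comes from the ResourceManager UI or `yarn application -list`):

```shell
# Compose the command to fetch aggregated YARN logs for the Spark session and
# scan them for failures. APP_ID is a placeholder, not a real application ID.
APP_ID="${APP_ID:-application_0000000000000_0001}"
CMD="yarn logs -applicationId $APP_ID"
echo "$CMD | grep -iE 'error|exception|stopped'"
```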

Any suggestions on where to look? This happens with any note that tries to run
Spark.

If I try this very simple code:

// Spark Version
spark.version

I get this error:

java.lang.IllegalStateException: Spark context stopped while waiting for backend
        at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:614)
        at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:169)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:567)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.zeppelin.spark.BaseSparkScalaInterpreter.spark2CreateContext(BaseSparkScalaInterpreter.scala:259)
        at org.apache.zeppelin.spark.BaseSparkScalaInterpreter.createSparkContext(BaseSparkScalaInterpreter.scala:178)
        at org.apache.zeppelin.spark.SparkScala211Interpreter.open(SparkScala211Interpreter.scala:89)
        at org.apache.zeppelin.spark.NewSparkInterpreter.open(NewSparkInterpreter.java:102)
        at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:62)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:616)
        at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
        at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

What am I missing?

--
========= mailto:db...@incadencecorp.com ============
David W. Boyd
VP,  Data Solutions
10432 Balls Ford, Suite 240
Manassas, VA 20109
office:   +1-703-552-2862
cell:     +1-703-402-7908
============== http://www.incadencecorp.com/ ============
ISO/IEC JTC1 WG9, editor ISO/IEC 20547 Big Data Reference Architecture
Chair ANSI/INCITS TC Big Data
Co-chair NIST Big Data Public Working Group Reference Architecture
First Robotic Mentor - FRC, FTC - www.iliterobotics.org
Board Member - USSTEM Foundation - www.usstem.org

The information contained in this message may be privileged
and/or confidential and protected from disclosure.
If the reader of this message is not the intended recipient
or an employee or agent responsible for delivering this message
to the intended recipient, you are hereby notified that any
dissemination, distribution or copying of this communication
is strictly prohibited.  If you have received this communication
in error, please notify the sender immediately by replying to
this message and deleting the material from any computer.





--
Best Regards

Jeff Zhang


