Thanks.

With Zeppelin 0.8.1, I see a different issue.

When I run the first paragraph of the Spark notebook, it creates executor
pods, but these executor pods fail immediately, new ones are launched, and
the cycle repeats indefinitely.
Because the pods are short-lived, I cannot even look at their logs to
debug this further.
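For reference, these are the kinds of kubectl commands I have been trying in
order to catch the logs before the pods disappear (pod names are
placeholders; namespace is the one from my conf):

```shell
# Watch executor pods as they are created and terminated
kubectl get pods -n spark -w

# Logs of the previous (crashed) container, while the pod object still exists
kubectl logs <executor-pod-name> -n spark --previous

# Pod events often survive a little longer than the container itself
kubectl describe pod <executor-pod-name> -n spark
kubectl get events -n spark --sort-by=.metadata.creationTimestamp
```

Even with these, the pods are recycled too quickly for me to capture anything useful.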

The conf params remain the same as before. Any help is appreciated.

Thanks,
Amogh Shetkar
SnappyData Technologies (http://snappydata.io)


On Tue, Feb 12, 2019 at 6:58 PM Jeff Zhang <zjf...@gmail.com> wrote:

> Zeppelin 0.7.3 doesn't support Spark 2.4; please try Zeppelin 0.8.1.
>
> Amogh Shetkar <ashet...@snappydata.io> wrote on Tue, Feb 12, 2019 at 7:18 PM:
>
>> I have a Docker image with Zeppelin 0.7.3 + Spark interpreter installed.
>> I also have Spark 2.4 distribution in that image.
>>
>> When I launch this image on k8s and run the sample Spark notebook (Basic
>> Features (Spark)), it fails with a NullPointerException.
>>
>> java.lang.NullPointerException
>> at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
>> at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)
>> at
>> org.apache.zeppelin.spark.SparkInterpreter.createSparkContext_2(SparkInterpreter.java:398)
>> at
>> org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:387)
>> at
>> org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:146)
>> at
>> org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:843)
>> at
>> org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
>> at
>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491)
>> at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
>> at
>> org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
>> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>> at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>> at java.lang.Thread.run(Thread.java:748)
>>
>> The Zeppelin container was launched with the following SPARK_SUBMIT_OPTIONS:
>>
>> --conf spark.kubernetes.container.image=<spark2.4-docker-image> \
>> --conf spark.executor.instances=2 \
>> --deploy-mode client \
>> --conf spark.kubernetes.namespace=spark \
>> --conf spark.driver.host=<zeppelin-container-IP> \
>> --conf spark.kubernetes.driver.pod.name=<zeppelin-container-hostname> \
>> --conf spark.kubernetes.authenticate.driver.serviceAccountName=default \
>> --conf spark.ui.port=4040
>>
>> The logs indicate that the task scheduler could not be instantiated:
>>
>> ERROR [2019-01-17 13:58:11,881] ({pool-2-thread-2}
>> Utils.java[invokeMethod]:40) -
>> java.lang.reflect.InvocationTargetException
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:498)
>>         at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
>>         at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)
>>         at
>> org.apache.zeppelin.spark.SparkInterpreter.createSparkSession(SparkInterpreter.java:368)
>>         at
>> org.apache.zeppelin.spark.SparkInterpreter.getSparkSession(SparkInterpreter.java:233)
>>         at
>> org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:841)
>>         at
>> org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
>>         at
>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491)
>>         at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
>>         at
>> org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
>>         at
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>         at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>>         at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>         at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>         at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>         at java.lang.Thread.run(Thread.java:748)
>> Caused by: org.apache.spark.SparkException: External scheduler cannot be
>> instantiated
>>         at
>> org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2794)
>>         at org.apache.spark.SparkContext.<init>(SparkContext.scala:493)
>>         at
>> org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
>>         at
>> org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
>>         at
>> org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:926)
>>         at scala.Option.getOrElse(Option.scala:121)
>>         at
>> org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
>>         ... 20 more
>> Caused by: io.fabric8.kubernetes.client.KubernetesClientException:
>> Operation: [get]  for kind: [Pod]  with name:
>> [zep-zeppelin-with-spark-6956f448d9-mmtsk]  in namespace: [spark]  failed.
>>         at
>> io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62)
>>         at
>> io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:71)
>>         at
>> io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:228)
>>         at
>> io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:184)
>>         at
>> org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator$$anonfun$1.apply(ExecutorPodsAllocator.scala:57)
>>         at
>> org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator$$anonfun$1.apply(ExecutorPodsAllocator.scala:55)
>>         at scala.Option.map(Option.scala:146)
>>         at
>> org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.<init>(ExecutorPodsAllocator.scala:55)
>>         at
>> org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:89)
>>         at
>> org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2788)
>>         ... 26 more
>> Caused by: java.net.ProtocolException: Unexpected status line:
>> ^U^C^A^@^B^B
>>         at okhttp3.internal.http.StatusLine.parse(StatusLine.java:69)
>>         at
>> okhttp3.internal.http1.Http1Codec.readResponseHeaders(Http1Codec.java:189)
>>         at
>> okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:75)
>>         at
>> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>>         at
>> okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45)
>>         at
>> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>>         at
>> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
>>         at
>> okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
>>         at
>> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>>         at
>> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
>>         at
>> okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
>>         at
>> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>>         at
>> okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:120)
>>         at
>> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>>         at
>> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
>>         at
>> io.fabric8.kubernetes.client.utils.HttpClientUtils$2.intercept(HttpClientUtils.java:93)
>>         at
>> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>>         at
>> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
>>         at
>> okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:185)
>>         at okhttp3.RealCall.execute(RealCall.java:69)
>>         at
>> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:377)
>>         at
>> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343)
>>         at
>> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:312)
>>         at
>> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:295)
>>         at
>> io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:783)
>>         at
>> io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:217)
>>
>> If I explicitly submit a simple job in client mode from within the
>> Zeppelin container, it works fine:
>>
>> <spark2.4-dir>/bin/spark-submit \
>>     --master $MASTER \
>>     --deploy-mode client \
>>     --name spark-pi \
>>     --class org.apache.spark.examples.SparkPi \
>>     --conf spark.executor.instances=2 \
>>     --conf spark.kubernetes.container.image=<spark2.4-docker-image> \
>>     --conf spark.kubernetes.namespace=spark \
>>     --conf spark.driver.host=<zeppelin-container-IP> \
>>     --conf spark.kubernetes.driver.pod.name=<zeppelin-container-hostname> \
>>     --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
>>     local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar
>>
>> Has anyone tried this, or does anyone know why it may not be working for me?
>>
>> Thanks,
>> Amogh Shetkar
>> SnappyData Technologies (http://snappydata.io)
>>
>
>
> --
> Best Regards
>
> Jeff Zhang
>
