Thanks Jeff and Liuxun for your responses. I will try to make the changes as
suggested and set up the Spark environment.
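
For reference, pointing Zeppelin at an external Spark might look like the
zeppelin-env.sh fragment below (the Spark version and install path are
assumptions; adjust to your environment):

```shell
# conf/zeppelin-env.sh -- fragment (paths/version are assumptions)
# Point Zeppelin at an externally installed Spark instead of the built-in one.
export SPARK_HOME=/opt/spark-2.3.2-bin-hadoop2.7

# Optional: give the interpreter JVM more heap for local experiments.
export ZEPPELIN_INTP_MEM="-Xmx4g"
```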

On Sat, Dec 15, 2018 at 7:03 PM Jeff Zhang <zjf...@gmail.com> wrote:

> Hi Vivek,
>
> The built-in Spark is not recommended for production; it is only for small
> data experiments. You'd better download Spark yourself, specify
> SPARK_HOME in Zeppelin, and make the proper configuration as @liuxun
> <hzliu...@corp.netease.com> mentioned above.
>
>
>
> liuxun <neliu...@163.com> wrote on Saturday, December 15, 2018 at 3:30 PM:
>
>> Hi *VIVEK NARAYANASETTY:*
>> I see the following error message in your run log:
>>
>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>   at java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:76)
>>   at
>> org.apache.spark.storage.DiskBlockObjectWriter$ManualCloseBufferedOutputStream$1.<init>(DiskBlockObjectWriter.scala:107)
>>
>> I have encountered a similar problem before and solved it by modifying the
>> configuration in spark-defaults.conf. This is my configuration; I hope
>> it helps you. :-)
>> spark.ui.port                               0
>> spark.local.dir                             /home/hadoop/spark-tmp
>> spark.cleaner.periodicGC.interval           5min
>> spark.driver.maxResultSize                  2g
>> spark.driver.memory                         16g
>> spark.driver.cores                          4
>> spark.driver.memoryOverhead                 4g
>> spark.dynamicAllocation.enabled             true
>> spark.dynamicAllocation.executorIdleTimeout 120s
>> spark.dynamicAllocation.cachedExecutorIdleTimeout       180s
>> spark.dynamicAllocation.initialExecutors    3
>> spark.dynamicAllocation.maxExecutors        74
>> spark.dynamicAllocation.minExecutors        0
>> spark.executor.cores                        4
>> spark.executor.extraJavaOptions             -XX:MetaspaceSize=256m
>> -XX:MaxMetaspaceSize=256m -verbose:gc -XX:+PrintGCDetails
>> -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution
>> spark.executor.instances                    3
>> spark.executor.memory                       16g
>> spark.yarn.executor.memoryOverhead          2048
>> spark.hadoop.fs.file.impl.disable.cache     true
>> spark.hadoop.fs.hdfs.impl.disable.cache     true
>> spark.kryoserializer.buffer.max             128m
>> spark.master                                yarn-client
>> spark.scheduler.mode                        FIFO
>> spark.serializer
>> org.apache.spark.serializer.KryoSerializer
>> spark.shuffle.service.enabled               true
>> spark.sql.autoBroadcastJoinThreshold        209715200
>> spark.sql.runSQLOnFiles                     false
>> spark.sql.shuffle.partitions                888
>> spark.yarn.am.cores                         2
>> spark.yarn.am.memory                        1g
>> spark.yarn.am.memoryOverhead                1g
>> spark.yarn.am.extraJavaOptions -XX:MetaspaceSize=128m
>> -XX:MaxMetaspaceSize=128m -verbose:gc -XX:+PrintGCDetails
>> -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution
>>
>>
>>
>>
>>
>> On December 15, 2018 at 3:15 AM, VIVEK NARAYANASETTY <vive....@gmail.com> wrote:
>>
>> Hi All,
>>
>> I restarted the Zeppelin service, and this time I got an out-of-memory
>> error when executing the 2nd paragraph. Any suggestions on configuration
>> changes?
>>
>> WARN [2018-12-14 19:12:03,158] ({pool-2-thread-2}
>> NotebookServer.java[afterStatusChange]:2302) - Job 20181214-131313_96754935
>> is finished, status: ERROR, exception: null, result: %text
>> org.apache.spark.SparkException: Job aborted due to stage failure: Task 3
>> in stage 2.0 failed 1 times, most recent failure: Lost task 3.0 in stage
>> 2.0 (TID 36, localhost, executor driver): java.lang.OutOfMemoryError: Java
>> heap space
>> at java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:76)
>> at
>> org.apache.spark.storage.DiskBlockObjectWriter$ManualCloseBufferedOutputStream$1.<init>(DiskBlockObjectWriter.scala:107)
>> at
>> org.apache.spark.storage.DiskBlockObjectWriter.initialize(DiskBlockObjectWriter.scala:108)
>> at
>> org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:116)
>> at
>> org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:237)
>> at
>> org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:151)
>> at
>> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>> at
>> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>> at org.apache.spark.scheduler.Task.run(Task.scala:109)
>> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>>
>> Driver stacktrace:
>>   at
>> org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
>>   at
>> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
>>   at
>> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
>>   at
>> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>>   at
>> org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
>>   at
>> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
>>   at
>> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
>>   at scala.Option.foreach(Option.scala:257)
>>   at
>> org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
>>   at
>> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
>>   at
>> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
>>   at
>> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
>>   at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>>   at
>> org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
>>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
>>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
>>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
>>   at
>> org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:363)
>>   at
>> org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
>>   at
>> org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3278)
>>   at
>> org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2489)
>>   at
>> org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2489)
>>   at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
>>   at
>> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
>>   at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
>>   at org.apache.spark.sql.Dataset.head(Dataset.scala:2489)
>>   at org.apache.spark.sql.Dataset.take(Dataset.scala:2703)
>>   at org.apache.spark.sql.Dataset.showString(Dataset.scala:254)
>>   at org.apache.spark.sql.Dataset.show(Dataset.scala:725)
>>   at org.apache.spark.sql.Dataset.show(Dataset.scala:702)
>>   ... 52 elided
>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>   at java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:76)
>>   at
>> org.apache.spark.storage.DiskBlockObjectWriter$ManualCloseBufferedOutputStream$1.<init>(DiskBlockObjectWriter.scala:107)
>>   at
>> org.apache.spark.storage.DiskBlockObjectWriter.initialize(DiskBlockObjectWriter.scala:108)
>>   at
>> org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:116)
>>   at
>> org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:237)
>>   at
>> org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:151)
>>   at
>> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>>   at
>> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>>   at org.apache.spark.scheduler.Task.run(Task.scala:109)
>>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
>>   ... 3 more
>>
>>  INFO [2018-12-14 19:12:03,218] ({pool-2-thread-2}
>> VFSNotebookRepo.java[save]:196) - Saving note:2DYCV6BGX
>>  INFO [2018-12-14 19:12:03,235] ({pool-2-thread-2}
>> SchedulerFactory.java[jobFinished]:115) - Job 20181214-131313_96754935
>> finished by scheduler
>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter-spark:shared_process-shared_session
>> ERROR [2018-12-14 19:12:03,862] ({qtp1355316001-80}
>> HeliumRestApi.java[suggest]:126) -
>> org.apache.thrift.transport.TTransportException: java.net.SocketException:
>> Connection reset
>> java.lang.RuntimeException:
>> org.apache.thrift.transport.TTransportException: java.net.SocketException:
>> Connection reset
>> at
>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:139)
>> at
>> org.apache.zeppelin.interpreter.InterpreterSettingManager.getAllResourcesExcept(InterpreterSettingManager.java:531)
>> at
>> org.apache.zeppelin.interpreter.InterpreterSettingManager.getAllResources(InterpreterSettingManager.java:513)
>> at org.apache.zeppelin.helium.Helium.suggestApp(Helium.java:361)
>> at org.apache.zeppelin.rest.HeliumRestApi.suggest(HeliumRestApi.java:123)
>> at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at
>> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
>> at
>> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
>> at
>> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
>> at
>> org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
>> at
>> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
>> at
>> org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
>> at
>> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
>> at
>> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
>> at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
>> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
>> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
>> at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
>> at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
>> at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
>> at
>> org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
>> at
>> org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
>> at
>> org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
>> at
>> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
>> at
>> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>> at
>> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>> at
>> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>> at
>> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
>> at
>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669)
>> at org.apache.zeppelin.server.CorsFilter.doFilter(CorsFilter.java:72)
>> at
>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>> at
>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>> at
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>> at
>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>> at
>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>> at
>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>> at
>> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>> at
>> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>> at
>> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>> at
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>> at
>> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>> at
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>> at org.eclipse.jetty.server.Server.handle(Server.java:499)
>> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>> at
>> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>> at
>> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>> at
>> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>> at
>> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>> at java.lang.Thread.run(Thread.java:745)
>> Caused by: org.apache.thrift.transport.TTransportException:
>> java.net.SocketException: Connection reset
>> at
>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>> at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
>> at
>> org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
>> at
>> org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
>> at
>> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
>> at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
>> at
>> org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_resourcePoolGetAll(RemoteInterpreterService.java:521)
>> at
>> org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.resourcePoolGetAll(RemoteInterpreterService.java:509)
>> at
>> org.apache.zeppelin.interpreter.InterpreterSettingManager$3.call(InterpreterSettingManager.java:535)
>> at
>> org.apache.zeppelin.interpreter.InterpreterSettingManager$3.call(InterpreterSettingManager.java:532)
>> at
>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:135)
>> ... 51 more
>> Caused by: java.net.SocketException: Connection reset
>> at java.net.SocketInputStream.read(SocketInputStream.java:209)
>> at java.net.SocketInputStream.read(SocketInputStream.java:141)
>> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>> at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>> at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>> at
>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>> ... 61 more
>>  INFO [2018-12-14 19:12:03,869] ({Exec Default Executor}
>> RemoteInterpreterManagedProcess.java[onProcessComplete]:243) - Interpreter
>> process exited 0
>>
>>
>> On Sat, Dec 15, 2018 at 12:31 AM VIVEK NARAYANASETTY <vive....@gmail.com>
>> wrote:
>>
>>> Hi All,
>>>
>>> I am using Apache Zeppelin 0.8 with the built-in Spark. I get the below
>>> error message randomly when running paragraphs. Do I need to change any
>>> configuration to resolve this error?
>>>
>>> Paragraph 1: Reading a CSV into a DataFrame -> No error
>>> Note: the size of the DataFrame should be 300 MB at most.
>>>
>>> Paragraph 2: Error
>>> df.groupBy("account").
>>>       agg(
>>>         collect_list("month")
>>>       ).show(false)
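
One note on the paragraph above: `collect_list` materializes every month per
account into an in-memory list, and `show(false)` renders the output
untruncated, both of which are heavy for the embedded Spark's small default
heap. A lighter-weight variant might look like this sketch (assuming the same
`df` as above; `n_months` is a hypothetical column name):

```scala
// Sketch: count distinct months per account instead of collecting full
// lists, and let show() truncate output so the driver stays small.
import org.apache.spark.sql.functions._

df.groupBy("account")
  .agg(countDistinct("month").as("n_months"))
  .show(20) // default truncation keeps driver-side output bounded
```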
>>>
>>> INFO [2018-12-14 18:47:28,408] ({qtp1355316001-75}
>>> VFSNotebookRepo.java[save]:196) - Saving note:2DYCV6BGX
>>>  INFO [2018-12-14 18:47:28,424] ({pool-2-thread-3}
>>> SchedulerFactory.java[jobStarted]:109) - Job 20181214-131313_96754935
>>> started by scheduler
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter-spark:shared_process-shared_session
>>>  INFO [2018-12-14 18:47:28,426] ({pool-2-thread-3}
>>> Paragraph.java[jobRun]:380) - Run paragraph [paragraph_id:
>>> 20181214-131313_96754935, interpreter: , note_id: 2DYCV6BGX, user:
>>> anonymous]
>>> ERROR [2018-12-14 18:48:00,000] ({JobProgressPoller,
>>> jobId=20181214-131313_96754935} JobProgressPoller.java[run]:58) - Can not
>>> get or update progress
>>> java.lang.RuntimeException:
>>> org.apache.thrift.transport.TTransportException: java.net.SocketException:
>>> Connection reset
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:139)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getProgress(RemoteInterpreter.java:334)
>>> at org.apache.zeppelin.notebook.Paragraph.progress(Paragraph.java:314)
>>> at
>>> org.apache.zeppelin.scheduler.JobProgressPoller.run(JobProgressPoller.java:55)
>>> Caused by: org.apache.thrift.transport.TTransportException:
>>> java.net.SocketException: Connection reset
>>> at
>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>>> at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
>>> at
>>> org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
>>> at
>>> org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
>>> at
>>> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
>>> at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
>>> at
>>> org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_getProgress(RemoteInterpreterService.java:321)
>>> at
>>> org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.getProgress(RemoteInterpreterService.java:306)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter$7.call(RemoteInterpreter.java:338)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter$7.call(RemoteInterpreter.java:335)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:135)
>>> ... 3 more
>>> Caused by: java.net.SocketException: Connection reset
>>> at java.net.SocketInputStream.read(SocketInputStream.java:209)
>>> at java.net.SocketInputStream.read(SocketInputStream.java:141)
>>> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>>> at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>>> at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>>> at
>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>>> ... 13 more
>>> ERROR [2018-12-14 18:48:00,000] ({pool-2-thread-3} Job.java[run]:190) -
>>> Job failed
>>> java.lang.RuntimeException:
>>> org.apache.thrift.transport.TTransportException
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:139)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:228)
>>> at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:437)
>>> at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
>>> at
>>> org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:307)
>>> at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by: org.apache.thrift.transport.TTransportException
>>> at
>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
>>> at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
>>> at
>>> org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
>>> at
>>> org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
>>> at
>>> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
>>> at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
>>> at
>>> org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_interpret(RemoteInterpreterService.java:274)
>>> at
>>> org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.interpret(RemoteInterpreterService.java:258)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter$4.call(RemoteInterpreter.java:233)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter$4.call(RemoteInterpreter.java:229)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:135)
>>> ... 11 more
>>>  INFO [2018-12-14 18:48:00,005] ({Exec Default Executor}
>>> RemoteInterpreterManagedProcess.java[onProcessComplete]:243) - Interpreter
>>> process exited 0
>>> ERROR [2018-12-14 18:48:00,013] ({pool-2-thread-3}
>>> NotebookServer.java[afterStatusChange]:2294) - Error
>>> java.lang.RuntimeException:
>>> org.apache.thrift.transport.TTransportException
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:139)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:228)
>>> at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:437)
>>> at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
>>> at
>>> org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:307)
>>> at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by: org.apache.thrift.transport.TTransportException
>>> at
>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
>>> at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
>>> at
>>> org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
>>> at
>>> org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
>>> at
>>> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
>>> at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
>>> at
>>> org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_interpret(RemoteInterpreterService.java:274)
>>> at
>>> org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.interpret(RemoteInterpreterService.java:258)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter$4.call(RemoteInterpreter.java:233)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter$4.call(RemoteInterpreter.java:229)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:135)
>>> ... 11 more
>>>  WARN [2018-12-14 18:48:00,014] ({pool-2-thread-3}
>>> NotebookServer.java[afterStatusChange]:2302) - Job 20181214-131313_96754935
>>> is finished, status: ERROR, exception: java.lang.RuntimeException:
>>> org.apache.thrift.transport.TTransportException, result: %text
>>> org.apache.thrift.transport.TTransportException
>>> at
>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
>>> at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
>>> at
>>> org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
>>> at
>>> org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
>>> at
>>> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
>>> at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
>>> at
>>> org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_interpret(RemoteInterpreterService.java:274)
>>> at
>>> org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.interpret(RemoteInterpreterService.java:258)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter$4.call(RemoteInterpreter.java:233)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter$4.call(RemoteInterpreter.java:229)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:135)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:228)
>>> at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:437)
>>> at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
>>> at
>>> org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:307)
>>> at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>>
>>>  INFO [2018-12-14 18:48:00,058] ({pool-2-thread-3}
>>> VFSNotebookRepo.java[save]:196) - Saving note:2DYCV6BGX
>>>  INFO [2018-12-14 18:48:00,072] ({pool-2-thread-3}
>>> SchedulerFactory.java[jobFinished]:115) - Job 20181214-131313_96754935
>>> finished by scheduler
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter-spark:shared_process-shared_session
>>>
>>>
>>> --
>>> Thanks & Regards
>>> *Vivek Narayanasetty*
>>>
>>>
>>>
>>>
>>> *Go Green: Think before you print this e-mail or its attachment. You can
>>> save paper if you do not really need to print.*
>>>
>>
>>
>> --
>> Thanks & Regards
>> *Vivek Narayanasetty*
>>
>>
>>
>>
>>
>>
>>
>
> --
> Best Regards
>
> Jeff Zhang
>


-- 
Thanks & Regards
*Vivek Narayanasetty*




