I have solved a similar issue before. You should check the Spark UI, and you 
will probably see that a single job is taking all the resources. Any further 
jobs submitted to the same cluster will therefore just hang. When you restart 
Zeppelin, the old job is killed and all the resources it held are released.
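
If that is what you see, one mitigation is to cap what a single application 
can grab. A minimal sketch, assuming a Spark standalone cluster (the values 
are illustrative, adjust them for your cluster), in conf/spark-defaults.conf:

    # conf/spark-defaults.conf -- illustrative values
    # Cap the cores one application may hold (standalone mode):
    spark.cores.max 4
    # Heap per executor:
    spark.executor.memory 2g
    # Return idle executors to the cluster (needs the external shuffle service):
    spark.dynamicAllocation.enabled true
    spark.shuffle.service.enabled true

With a cap like this, one runaway notebook can no longer starve every other 
job submitted to the same cluster.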



xyun...@simuwell.com
 
From: RUSHIKESH RAUT
Date: 2017-02-17 02:29
To: users
Subject: Re: Zeppelin unable to respond after some time
Yes, it happens frequently with both R and Spark code.

On Feb 17, 2017 3:25 PM, "小野圭二" <onoke...@gmail.com> wrote:
Yes, almost every time.
There are no special operations involved; I just run the tutorial demos.
In my experience it happens most frequently in the R demo.

2017-02-17 18:50 GMT+09:00 Jeff Zhang <zjf...@gmail.com>:

Is it easy to reproduce?

On Fri, Feb 17, 2017 at 5:47 PM, 小野圭二 <onoke...@gmail.com> wrote:
I am facing the same issue now.

2017-02-17 18:25 GMT+09:00 RUSHIKESH RAUT <rushikeshraut...@gmail.com>:
Hi all, 

I am facing an issue while using Zeppelin. I am trying to load some data (not 
that big) into Zeppelin and then build some visualizations on it. The problem 
is that the first time I run the code it works, but after some time the same 
code stops working. It remains in the running state in the GUI, but no logs 
are generated in the Zeppelin logs. All further tasks also hang in the 
pending state. 
As soon as I restart Zeppelin it works again, so I am guessing it's some 
memory issue. I have read that Zeppelin stores data in memory, so it is 
possible that it runs out of memory after some time.
How do I debug this issue? How much memory does Zeppelin take by default at 
startup? Also, is there a way to run Zeppelin with a specified amount of 
memory, so that I can start the process with more memory (see the sketch 
below)? It doesn't make sense to restart Zeppelin every half hour. 

Thanks, 
Rushikesh Raut 
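
On the memory questions: the Zeppelin server runs as a JVM with a fairly 
small default heap (around 1 GB in the versions I have used), and the 
interpreter processes get their own heap. Both can be raised in 
conf/zeppelin-env.sh. A sketch, assuming a recent Zeppelin release; check 
your conf/zeppelin-env.sh.template for the exact variable names and defaults 
in your version:

    # conf/zeppelin-env.sh -- illustrative sizes
    # Heap for the Zeppelin server JVM:
    export ZEPPELIN_MEM="-Xms1024m -Xmx4096m"
    # Heap for interpreter processes (Spark, R, ...):
    export ZEPPELIN_INTP_MEM="-Xms1024m -Xmx4096m"

Restart Zeppelin after editing the file so the new settings take effect.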


