Dear Victor,

Thanks for your reply. I'll create a ticket for this feature.

Dear Moon,

Thanks for the information. Sadly, our tasks are usually written in scala
and python. FIFO is OK for us for now, but being able to see the job queue
would be helpful.
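
For anyone else reading this later who mainly runs Spark SQL: as I understand
it, the setting moon mentioned is a property of the Spark interpreter on the
Interpreter page. The property name and the value true come from moon's mail;
the exact layout below is only my guess at how it looks in the properties table:

    zeppelin.spark.concurrentSQL    true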

Wush

2015-08-25 23:20 GMT+08:00 moon soo Lee <m...@apache.org>:

> Hi Wush,
>
> Spark SQL can run concurrently by setting 'zeppelin.spark.concurrentSQL' to
> true on the Interpreter page.
>
> scala/python code cannot run concurrently at the moment. Here's a related
> discussion:
>
> http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/why-zeppelin-SparkInterpreter-use-FIFOScheduler-td579.html
>
> Best,
> moon
>
>
> On Tue, Aug 25, 2015 at 3:48 AM Victor Manuel Garcia <
> victor.gar...@beeva.com> wrote:
>
>> Hi Wush,
>>
>> At the moment there is no way to see the task queue, but it would be a nice
>> feature...
>>
>> thanks
>>
>> 2015-08-25 10:03 GMT+02:00 Wush Wu <w...@bridgewell.com>:
>>
>>> Dear all,
>>>
>>> Our team is using zeppelin to submit ad hoc queries to our spark
>>> cluster. Many people use zeppelin at the same time, so sometimes we have
>>> to wait for each other and tasks stay pending for a long time. Is there a
>>> place to see the task queue in zeppelin?
>>>
>>> Thanks,
>>> Wush
>>>
>>>
>>
>>
>> --
>> Victor Manuel Garcia Martinez
>> Software Engineer
>>
>> +34 672104297 | victor.gar...@beeva.com | victormanuel.garcia.marti...@bbva.com
>>
>> http://www.beeva.com/
>>
