Yes, you're right, and I can connect to it through tools like Tableau, but I don't
know how to connect from a shell so that I can submit more jobs to this
application.

Any insight on how I can connect using a shell?
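For what it's worth, one common way to do this from a shell is the `beeline` JDBC client that ships with Spark/Hive, pointed at the Thrift Server. A minimal sketch, assuming the server listens on the default port 10000; the hostname and username below are placeholders, not values from this thread:

```shell
# Connect to the running Spark Thrift Server over JDBC using beeline.
# "thrift-server-host" and "myuser" are assumptions -- substitute the
# host the Thrift Server runs on and your own cluster username.
beeline -u "jdbc:hive2://thrift-server-host:10000" -n myuser

# Once connected, SQL statements typed at the beeline prompt run as jobs
# inside the shared Thrift Server application, e.g.:
#   SELECT count(*) FROM some_table;
```

Because every beeline session submits work into the one long-running Thrift Server application, you share its already-allocated executors instead of competing for new cores.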

On Thu, Sep 3, 2015 at 1:39 PM, Steve Loughran <ste...@hortonworks.com>
wrote:

> If it's running the thrift server from hive, it's got a SQL API for you to
> connect to...
>
> On 3 Sep 2015, at 17:03, Dhaval Patel <dhaval1...@gmail.com> wrote:
>
> I am accessing a shared Spark cluster environment. However, there is
> an existing application (SparkSQL/Thrift Server), running under a different
> user, that occupies all available cores. Please see the attached screenshot
> for an idea of the current resource utilization.
>
> Is there a way I can use this application to submit my jobs (under a
> different user than mapr) without restarting the app with a reduced number
> of cores (spark.deploy.defaultCores)?
>
> What would be the optimal solution for this type of resource-sharing issue?
> The Fair Scheduler, or some other approach? -
> http://spark.apache.org/docs/latest/job-scheduling.html#scheduling-within-an-application
>
>
> <image.png>
>
>
>
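The resource-sharing options raised in the quoted message roughly split into two knobs: capping how many cores any one application may claim, and switching scheduling within an application to FAIR pools. A sketch of both, assuming a standalone-mode cluster; the file paths, core counts, and pool file below are illustrative, not taken from this cluster:

```shell
# Option 1: cap the cores a single application can grab in standalone
# mode, so one app (e.g. the Thrift Server) cannot starve other users.
# The value 8 is an example; pick a cap that fits your cluster.
spark-submit --conf spark.cores.max=8 ...

# Option 2: enable FAIR scheduling *within* an application, so concurrent
# jobs (e.g. multiple Thrift Server sessions) share its resources fairly.
# Typically set in conf/spark-defaults.conf:
#   spark.scheduler.mode               FAIR
#   spark.scheduler.allocation.file    /path/to/fairscheduler.xml
```

Note the distinction: `spark.scheduler.mode=FAIR` shares resources between jobs inside one application, while `spark.cores.max` (or `spark.deploy.defaultCores` as a cluster-wide default) governs sharing between applications, which is what the mapr-owned Thrift Server is monopolizing here.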
