>
> it sure is not able to get sufficient resources from YARN to start the
> containers.
>
That's right. It worked when I reduced the executors for the Thrift server,
but that also reduced Thrift's performance.

But that is not the solution I am looking for. My Sqoop import job runs
just once a day, while the Thrift apps run 24/7, fetching, processing, and
displaying online reports on a website. Reducing the Thrift executors and
keeping some capacity in reserve does let other jobs run in parallel with
Thrift, but it wastes cores when no other jobs are running.

Is there something that can allocate resources dynamically? Something that
would automatically give maximum resources to Thrift when no other jobs are
running, and automatically share resources with jobs/apps other than Thrift
when they are?

I've heard of a property for Spark on YARN - dynamicAllocation - can this help?
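
For reference, this is roughly what I imagine setting when starting the
Thrift server (a rough sketch based on my reading of the Spark docs; the
min/max numbers below are only placeholders for my cluster):

    # in spark-defaults.conf, or passed as --conf when starting the Thrift server
    spark.dynamicAllocation.enabled=true
    spark.shuffle.service.enabled=true
    spark.dynamicAllocation.minExecutors=2           # keep a small floor for the 24/7 reports
    spark.dynamicAllocation.maxExecutors=20          # let Thrift grow when the cluster is idle
    spark.dynamicAllocation.executorIdleTimeout=60s  # release idle executors back to YARN

If I understand correctly, on YARN this also needs the external shuffle
service (the spark_shuffle auxiliary service) configured on the NodeManagers
in yarn-site.xml - is that right?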


Thanks.

On Sat, Feb 24, 2018 at 7:14 AM, vijay.bvp <bvpsa...@gmail.com> wrote:

> it sure is not able to get sufficient resources from YARN to start the
> containers.
> is it only with this import job, or does any other job you submit also
> fail to start?
>
> As a test, just try to run another spark job or a mapreduce job and see if
> the job can be started.
>
> Reduce the thrift server executors and see if overall there is available
> cluster capacity for new jobs.
