That's also available in standalone mode.
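
For reference, a minimal sketch of what that might look like (assuming Spark
1.6+; in standalone mode the external shuffle service has to be enabled on
each worker, and the master URL and executor bounds below are placeholders):

  # spark-defaults.conf on each worker: the worker then runs the external
  # shuffle service that dynamic allocation needs
  spark.shuffle.service.enabled  true

  # submit an app without fixing its executor count up front
  spark-shell --master spark://<standalone-master>:7077 \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.shuffle.service.enabled=true \
    --conf spark.dynamicAllocation.minExecutors=0 \
    --conf spark.dynamicAllocation.maxExecutors=20

Executors are then requested while tasks are queued and released once they
sit idle, so several such shells can share one cluster.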

On Thu, Apr 14, 2016 at 12:47 PM, Alexander Pivovarov <apivova...@gmail.com>
wrote:

> Spark on YARN supports dynamic resource allocation.
>
> So you can run several spark-shells / spark-submits / spark-jobserver /
> Zeppelin instances on one cluster without defining upfront how many
> executors or how much memory to allocate to each app.
>
> It's a great feature for regular users who just want to run Spark / Spark SQL.
>
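
On YARN the same idea applies, roughly like this (a sketch only; the
aux-service is usually already wired up on EMR / CDH / HDP, and the app
class and jar below are made up):

  <!-- yarn-site.xml on each NodeManager: register Spark's shuffle service -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle,spark_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
    <value>org.apache.spark.network.yarn.YarnShuffleService</value>
  </property>

  # then any app can be submitted without fixing --num-executors up front
  spark-submit --master yarn \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.shuffle.service.enabled=true \
    --class com.example.MyApp my-app.jar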
>
> On Thu, Apr 14, 2016 at 12:05 PM, Sean Owen <so...@cloudera.com> wrote:
>
>> I don't think usage is the differentiating factor. YARN and standalone
>> are pretty well supported. If you are only running a Spark cluster by
>> itself with nothing else, standalone is probably simpler than setting
>> up YARN just for Spark. However, if you're running on a cluster that
>> will host other applications, you'll need to integrate with a shared
>> resource manager and its security model, and for anything
>> Hadoop-related that's YARN. Standalone wouldn't make as much sense.
>>
>> On Thu, Apr 14, 2016 at 6:46 PM, Alexander Pivovarov
>> <apivova...@gmail.com> wrote:
>> > AWS EMR includes Spark on YARN.
>> > The Hortonworks and Cloudera platforms include Spark on YARN as well.
>> >
>> >
>> > On Thu, Apr 14, 2016 at 7:29 AM, Arkadiusz Bicz <
>> arkadiusz.b...@gmail.com>
>> > wrote:
>> >>
>> >> Hello,
>> >>
>> >> Are there any statistics on YARN vs. standalone Spark usage in
>> >> production?
>> >>
>> >> I would like to choose the most supported and most widely used
>> >> technology in production for our project.
>> >>
>> >>
>> >> BR,
>> >>
>> >> Arkadiusz Bicz
>> >>
>> >
>>
>
>
