I believe that in the future the functional-style API popularized by Spark
will dominate the big data world, and very few people will use the native
MapReduce API. Even now, most users reach for third-party MapReduce
libraries such as Cascading, Scalding, or Scoobi, or for higher-level
languages like Hive and Pig, rather than the native MapReduce API.
This functional style of API is compatible with both Hadoop's MapReduce
and Spark's RDDs, so the underlying execution engine can be transparent to
users. So I guess, or rather hope, that in the future the API will be
unified, while the underlying execution engine is chosen intelligently
according to the resources you have and the metadata of the data you
operate on.
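To illustrate what I mean by the shared functional style, here is a rough
sketch in plain Python (no Spark dependency; the helper name is my own) of
a word count expressed as map/reduce-style transformations, the same shape
that Spark's RDD API and the Scala MapReduce wrappers expose:

```python
# A word count in functional style, mirroring the shape of a
# flatMap -> map -> reduceByKey pipeline. This is only an
# illustration of the programming model, not Spark's actual API.
lines = ["spark and hadoop", "spark on yarn"]

# "flatMap": split each line into words
words = [w for line in lines for w in line.split()]

# "reduceByKey"-like helper (hypothetical name): sum values per key
def reduce_by_key(pairs):
    counts = {}
    for key, value in pairs:
        counts[key] = counts.get(key, 0) + value
    return counts

# "map": emit (word, 1) pairs, then reduce them per word
counts = reduce_by_key((w, 1) for w in words)
print(counts)  # {'spark': 2, 'and': 1, 'hadoop': 1, 'on': 1, 'yarn': 1}
```

The point is that nothing in this pipeline says *which* engine runs it;
the same chain of transformations could be planned onto MapReduce jobs or
onto an in-memory DAG.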


On Thu, Mar 6, 2014 at 9:02 AM, Edward Capriolo <edlinuxg...@gmail.com> wrote:

> The thing about YARN is you choose what is right for the workload.
>
> For example: Spark may not be the right choice if, say, the join tables
> do not fit in memory.
>
>
> On Wednesday, March 5, 2014, Anthony Mattas <anth...@mattas.net> wrote:
> > With Tez and Spark becoming mainstream, what does MapReduce look like
> > longer term? Will it become a component that sits on top of Tez, or will
> > they continue to live side by side utilizing YARN?
> > I'm struggling a little bit to understand what the roadmap looks like
> > for the technologies that sit on top of YARN.
> >
> > Anthony Mattas
> > anth...@mattas.net
>
> --
> Sorry this was sent from mobile. Will do less grammar and spell check than
> usual.
>
