Hi Gary,

I’m currently testing Mesos 0.17, CDH5 beta2, and Spark 0.9.1.

From what I've seen so far, it is sufficient to run only HDFS; there is no need to run 
or install YARN or MapReduce.

However, if you are using MR1 or MR2, then you do need to install them. I believe 
support for running MR2 on Mesos is still in progress; please correct me if I am wrong.
---
Best regards
Lukasz Jastrzebski




On Mar 25, 2014, at 5:14 PM, ext Gary Malouf 
<[email protected]<mailto:[email protected]>> wrote:

For various reasons, our team needs to keep all of our projects on the same 
protobuf version.  We've now hit a point where we need to upgrade protobuf from 
2.4.1 to 2.5.0 across the board in our projects and dependent platforms.

Current stack: Mesos 0.15, Chronos, CDH 4.2.1-MRV1, Spark 0.9-pre-scala-2.10 
build off master

Ideal stack after upgrade: Mesos 0.17, Chronos, CDH5 beta2, Spark 0.9.1 (Hadoop 
2.2 build)

From what we understand, we need a dependency on Hadoop 2.2 to get the 
necessary protobuf upgrade.  From reading Cloudera's documentation and 
multiple Google searches, it is not clear to me how we can construct the stack 
to continue to work.

Has anyone else requested info on getting this combination to work?  From 
others we've spoken to, they've basically said that we'll be forced to use YARN 
for Hadoop support in the very near future anyway, so we should switch.  Since 
we colocate Spark with our HDFS nodes, it's hard to see how we would run both 
YARN and Mesos on the same servers.
