That really depends on what you're doing.
I've been running Spark in production on Mesos for as long as Spark has
been open source.
Earlier this year, we added Cassandra to the mix by running it through
Docker and Marathon in host network mode with a volume. Nothing fancy, since
it was for non cr
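For what it's worth, that kind of setup can be expressed as a Marathon app definition along these lines (a sketch only; the image tag, resource sizes, instance count, and host paths are illustrative, not our actual values):

```json
{
  "id": "/cassandra",
  "instances": 3,
  "cpus": 2,
  "mem": 8192,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "cassandra:2.2",
      "network": "HOST"
    },
    "volumes": [
      {
        "containerPath": "/var/lib/cassandra",
        "hostPath": "/data/cassandra",
        "mode": "RW"
      }
    ]
  }
}
```

Host networking avoids the port-mapping layer (Cassandra's gossip dislikes NAT), and the RW volume keeps the SSTables on the host disk so data survives container restarts.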
Have you tried giving Spark the namenode address explicitly, e.g.
hdfs://namenode_ip:8020/hdfs/ ?
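Concretely, something like this (a sketch: the ZooKeeper hosts, job file, and the /hdfs/ prefix are placeholders; namenode_ip is the suggestion above, not a real host):

```shell
# Pass the namenode explicitly, both as the default filesystem and in the path.
# spark.hadoop.* properties are forwarded by Spark into the Hadoop Configuration,
# so fs.defaultFS here overrides whatever core-site.xml the executors pick up.
spark-submit \
  --master mesos://zk://zk1:2181,zk2:2181,zk3:2181/mesos \
  --conf spark.hadoop.fs.defaultFS=hdfs://namenode_ip:8020 \
  my_job.py hdfs://namenode_ip:8020/hdfs/input
```

If the fully-qualified URI works while a bare path doesn't, the executors are resolving the default filesystem from a missing or stale core-site.xml.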
On Mon, Sep 14, 2015 at 11:09 PM, Rodrick Brown
wrote:
> I have separate systems for the following services
>
> — Mesos (3 masters + 3 slaves)
> — Hadoop (2 NN + 8 slaves)
> — ZooKeeper (3 node
Hi Gerard,
isn't this the same issue as this?
https://issues.apache.org/jira/browse/MESOS-1688
On Mon, Jan 26, 2015 at 9:17 PM, Gerard Maas wrote:
> Hi,
>
> We are observing with certain regularity that our Spark jobs, as Mesos
> framework, are hoarding resources and not releasing them, resulti