On 01/14/2015 07:37 AM, John Spray wrote:
On Tue, Jan 13, 2015 at 1:25 PM, James <wirel...@tampabay.rr.com> wrote:
I was wondering if anyone has Mesos running on top of Ceph?
I want to test/use Ceph in lieu of HDFS.

You might be interested in http://ceph.com/docs/master/cephfs/hadoop/

It allows you to expose CephFS to applications that expect HDFS.

However, as I understand it HDFS is optional with Mesos anyway, so
it's not completely clear what you're trying to accomplish.

Cheers,
John



Hello one and all,

I am supposed to be able to post to this group via gmane, but
I'm not seeing the postings there:

http://news.gmane.org/gmane.comp.file-systems.ceph.user

Maybe my application to this group did not get processed?


Long version (hopefully clearer):

I want a distributed, heterogeneous cluster, without Hadoop. Spark (in-memory) processing [1] of large FEM [2] (Finite Element Method) problems is the daunting application; it will be used in all sorts of scientific simulations with very large datasets. This will also include rendering some very complex 3D video/simulations of fluid-type flows [3]. Hopefully the simulations can be computed and rendered in real time (less than 200 ms of latency). Surely other massive types of scientific simulations can benefit from Spark/Mesos/cephfs/btrfs, imho.
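As a rough sketch of how the Spark-on-cephfs piece might be wired up through the Hadoop plugin John pointed at (pyspark here; the fs.ceph.* / ceph.* property names are taken from the cephfs/hadoop docs and should be treated as assumptions until checked against the plugin version in use, and the monitor address, cephx user and paths are made up for illustration):

    from pyspark import SparkConf, SparkContext

    conf = (
        SparkConf()
        .setAppName("fem-on-cephfs")
        # spark.hadoop.* settings are forwarded into the Hadoop configuration,
        # which is where the cephfs-hadoop plugin reads its options from.
        .set("spark.hadoop.fs.defaultFS", "ceph://mon-host:6789/")          # hypothetical monitor address
        .set("spark.hadoop.fs.ceph.impl", "org.apache.hadoop.fs.ceph.CephFileSystem")
        .set("spark.hadoop.ceph.conf.file", "/etc/ceph/ceph.conf")
        .set("spark.hadoop.ceph.auth.id", "admin")                          # hypothetical cephx user
        .set("spark.hadoop.ceph.auth.keyring", "/etc/ceph/ceph.client.admin.keyring")
    )
    sc = SparkContext(conf=conf)

    # Read a (hypothetical) set of FEM mesh/result files straight out of CephFS,
    # the same way one would read hdfs:// paths.
    meshes = sc.textFile("ceph://mon-host:6789/fem/meshes/part-*")
    print(meshes.count())

The same properties could equally live in core-site.xml instead of being set on the SparkConf.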

Being able to use the cluster for routine distcc compilations, Continuous Integration [4], log-file processing, security scans, and most other forms of routine server usage is of keen interest too.
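For the routine log-file processing side, the Hadoop plugin isn't strictly needed; if cephfs is mounted as a plain POSIX filesystem, the jobs can just read file:// paths. A toy sketch (the /mnt/cephfs mount point and log names are hypothetical):

    from pyspark import SparkConf, SparkContext

    sc = SparkContext(conf=SparkConf().setAppName("log-scan"))

    # /mnt/cephfs is a hypothetical kernel/FUSE mount of the same filesystem;
    # no Hadoop shim is involved when the data is reachable as ordinary files.
    lines = sc.textFile("file:///mnt/cephfs/logs/syslog-*")
    errors = lines.filter(lambda l: "ERROR" in l)
    print(errors.count())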


Eventually, the cluster(s) will use GPUs, x86_64, and the new ARM 64-bit processors, all with as much RAM as possible. This is a long journey, but I believe that cephfs on top of btrfs will eventually mature into the robust solution that is necessary.

The other portions of the solution, such as distributed features (Chronos, Ansible/Puppet/Chef, databases, etc.), will also be needed, but there does seem to be an abundance of choices for those needs; so discussion
is warmly received in these areas too, as they relate to cephfs/btrfs.


Cephfs on top of btrfs is the most challenging part of this journey so
far. I use OpenRC on Gentoo, and have no interest in systemd, just so
you know.


James

[1] https://amplab.cs.berkeley.edu/

[2] http://dune.mathematik.uni-freiburg.de/

[3] http://www.opengeosys.org/

[4] http://www.zentoo.org/
