Hi,
1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/)
This has been in production for more than a year now and was heavily tested beforehand.
Performance was not expected, since the frontend servers mainly do reads (90%).
Cheers.
Sébastien Han
Cloud Engineer
Hi Sebastien.
Thanks! When you say performance was not expected, can you elaborate a
little? Specifically, what did you notice in terms of performance?
On Mon, Nov 25, 2013 at 4:39 AM, Sebastien Han
sebastien@enovance.com wrote:
Hi,
1) nfs over rbd
Hi,
Well, basically, the frontend is composed of web servers.
They mostly do reads on the NFS mount.
I believe that the biggest frontend has around 60 virtual machines, accessing
the share and serving it.
Unfortunately, I don't have any figures anymore, but performance was really
poor in
On 11/19/2013 08:02 PM, YIP Wai Peng wrote:
Hm, so maybe this nfsceph is not _that_ bad after all! :) Your read clearly
wins, so I'm guessing the DRBD write is the slow one. Which DRBD mode are
you using?
Active/passive pair, meta-disk internal, protocol C over a 5-long
crossover cable on
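For readers who haven't set up DRBD before, here is a minimal sketch of what such an active/passive, protocol C, internal-metadata resource typically looks like (the hostnames, devices and addresses are made up for illustration, not taken from this thread):

# /etc/drbd.d/r0.res on both nodes (DRBD 8.x style)
resource r0 {
    protocol C;          # synchronous: writes ack only once both nodes have them
    meta-disk internal;  # metadata lives on the backing device itself
    on node-a {
        device  /dev/drbd0;
        disk    /dev/sdb1;
        address 192.168.100.1:7789;
    }
    on node-b {
        device  /dev/drbd0;
        disk    /dev/sdb1;
        address 192.168.100.2:7789;
    }
}

# Bring the resource up on both nodes, then promote only the active side:
drbdadm create-md r0
drbdadm up r0
drbdadm primary --force r0   # on the node that will be active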
2) Can't grow once you reach the hard limit of 14TB, and if you have
multiple of such machines, then fragmentation becomes a problem
3) might have the risk of 14TB partition corruption wiping out all
your shares
Is the 14TB limit due to an EXT(3/4) recommendation (or implementation limit)?
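For what it's worth, ext3 (and ext4 without the 64bit feature) tops out at roughly 16 TiB with 4 KiB blocks, which is presumably the ceiling being alluded to. If the volume ever does need to grow within that limit, it is the usual two-step operation, sketched here for an RBD-backed image (the image name and device are illustrative only):

# Grow the block device first (rbd sizes are given in MB here).
rbd resize rbd/nfsdata --size 15728640   # grow the example image to 15 TB

# Then grow the filesystem on top of it; ext4 can do this online.
resize2fs /dev/rbd0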
Hi Yip,
Thanks for the code. With respect to can't grow, I think I can (with some
difficulty perhaps?) resize the VM if I needed to, but I'm really just
trying to buy myself time till CEPH-FS is production ready. Point #3
scares me, so I'll have to think about that one. Most likely I'd use a
On Wednesday, 20 November 2013, Dimitri Maziuk wrote:
On 11/18/2013 01:19 AM, YIP Wai Peng wrote:
Hi Dima,
Benchmark FYI.
$ /usr/sbin/bonnie++ -s 0 -n 5:1m:4k
Hi all,
I've uploaded it via github - https://github.com/waipeng/nfsceph. Standard
disclaimer applies. :)
Actually #3 is a novel idea; I had not thought of it. Thinking about the
differences just off the top of my head, though, #3 will comparatively have
1) more overhead (because of the
Hi Dima,
Benchmark FYI.
$ /usr/sbin/bonnie++ -s 0 -n 5:1m:4k
Version 1.97        ------Sequential Create------ --------Random Create--------
altair              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
I've recently accepted the fact that CEPH-FS is not stable enough for production,
based on 1) a recent discussion this week with Inktank engineers, 2) the
discovery that the documentation now explicitly states this all over the
place (http://eu.ceph.com/docs/wip-3060/cephfs/), and 3) a reading of the
recent
On 2013-11-14 16:08, Gautam Saxena wrote:
I've recently accepted the fact CEPH-FS is not stable...SAMBA no
longer working...
Alternatives
1) nfs over rbd...
2) nfs-ganesha for ceph...
3) create a large CentOS 6.4 VM (e.g. 15 TB: 1 TB for the OS using EXT4, the
remaining 14 TB using either EXT4 or
On 2013-11-14 19:59, Dimitri Maziuk wrote:
Cephfs is in fact one of Ceph's big selling points.
IMO the issue is more that, since it's not supported, the Enterprise
sector won't touch it.
I've been using CephFS for a meager 40TB store of video clips for editing,
from Dumpling to Emperor, and (fingers crossed) so far I haven't had any
problems. The only disruption I've seen is that the metadata server will
crash every couple of days, and one of the standby MDS will pick up. The
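For context, that failover happens because any extra ceph-mds daemons register as standbys and one is promoted when the active MDS dies. A quick way to check the state (the daemon name and the output line are examples, not from this cluster):

ceph mds stat
# e45: 1/1/1 up {0=mds-a=up:active}, 1 up:standby   <- example output format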
On Fri, Nov 15, 2013 at 12:08 AM, Gautam Saxena gsax...@i-a-inc.com wrote:
1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/)
We are now running this - basically an intermediate/gateway node that
mounts ceph rbd objects and exports them as NFS.
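For anyone wanting to reproduce that gateway, a minimal sketch of the moving parts (the pool/image names, sizes, paths and client subnet are placeholders, not taken from this thread):

# On the gateway node: create and map an RBD image (size is in MB here).
rbd create rbd/nfsdata --size 1048576   # 1 TB example image
rbd map rbd/nfsdata                     # appears as e.g. /dev/rbd0

# Put a local filesystem on it and mount it.
mkfs.ext4 /dev/rbd0
mkdir -p /export/nfsdata
mount /dev/rbd0 /export/nfsdata

# Export the mount point over NFS to the frontends.
echo "/export/nfsdata 10.0.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra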