Re: [ceph-users] Storing VM Images on CEPH with RBD-QEMU driver

2013-12-20 Thread James Pearce
fio --size=100m --ioengine=libaio --invalidate=1 --direct=1 --numjobs=10 --rw=read --name=fiojob --blocksize_range=4K-512k --iodepth=16

Since size=100m, the reads would be entirely cached and, if the hypervisor is write-back, potentially many of the writes would never make it to the cluster either?
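A sketch of how the run might be adjusted so caches can't satisfy it - the 20g working set and job count are assumptions, anything comfortably larger than guest RAM and the RBD cache would do:

  # --direct=1 bypasses the guest page cache; a random-read working set
  # larger than any hypervisor/RBD cache forces the I/O to the cluster:
  fio --size=20g --ioengine=libaio --invalidate=1 --direct=1 \
      --numjobs=4 --rw=randread --name=fiojob \
      --blocksize_range=4k-512k --iodepth=16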

Re: [ceph-users] is the manual correct?

2013-12-16 Thread James Pearce
The OSD can be stopped from the host directly: sudo stop ceph-osd id=3. I don't know if that's the 'proper' way, mind. On 2013-12-16 09:40, david.zhang...@gmail.com wrote: ceph osd start osd.{num}
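For what it's worth, the sequence usually suggested looks like this (a sketch; '3' is the example id above, and the stop/start syntax assumes an Upstart-based install such as Ubuntu):

  # Tell the cluster not to rebalance while the OSD is down:
  ceph osd set noout
  # Stop the daemon on its host (Upstart syntax):
  sudo stop ceph-osd id=3
  # ...do the maintenance, then bring it back:
  sudo start ceph-osd id=3
  # Allow normal recovery behaviour again:
  ceph osd unset noout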

Re: [ceph-users] 1MB/s throughput to 33-ssd test cluster

2013-12-09 Thread James Pearce
What SSDs are you using, and is there any under-provisioning on them? On 2013-12-09 16:06, Greg Poirier wrote: On Sun, Dec 8, 2013 at 8:33 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: I'd suggest testing the components separately - try to rule out NIC (and switch) issues and SSD

Re: [ceph-users] Journal, SSD and OS

2013-12-06 Thread James Pearce
Most servers also have internal SD card slots. There are SD cards advertising 90MB/s, though I haven't tried them as OS boot personally. On 2013-12-06 11:14, Gandalf Corvotempesta wrote: 2013/12/6 Sebastien Han sebastien@enovance.com: @James: I think that Gandalf’s main idea was to save

Re: [ceph-users] Impact of fancy striping

2013-12-06 Thread James Pearce
believing that fast-but-not-super-fast journals are the main reason for the poor performance observed. Maybe I am mistaken? Best regards, Nicolas Canceill, Scalable Storage Systems, SURFsara (Amsterdam, NL) On 12/03/2013 03:01 PM, James Pearce wrote: I would really appreciate it if someone could

Re: [ceph-users] Journal, SSD and OS

2013-12-05 Thread James Pearce
Another option is to run journals on individually presented SSDs, in a 5:1 ratio (spinning-disk:SSD), and have the OS somewhere else. Then the failure domain is smaller. Ideally implement some way to monitor SSD write-life SMART data - at least it gives a guide as to device condition compared
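For the SMART monitoring part, a minimal sketch assuming smartmontools is installed - the device path is a placeholder and the wear attribute name varies by vendor (e.g. Media_Wearout_Indicator on Intel, Wear_Leveling_Count on Samsung), so the grep pattern is only illustrative:

  # Dump SMART attributes for the journal SSD and pick out wear/life counters:
  sudo smartctl -A /dev/sdb | egrep -i 'wear|life|media'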

Re: [ceph-users] Mounting Ceph on Linux/Windows

2013-12-05 Thread James Pearce
Native block support is coming for Hyper-V next year, we hope... would be great to hear from Inktank on anything that can be shared publicly on that front :) On 2013-12-05 22:02, James Harper wrote: Can someone point me to directions on how to mount a Ceph storage volume on Linux as well as

Re: [ceph-users] SSD journal partition and trim

2013-12-03 Thread James Pearce
Since the journal partitions are generally small, it shouldn't need to be. For example, implement with substantial under-provisioning, either via an HPA (Host Protected Area) or simple partitions. On 2013-12-03 12:18, Loic Dachary wrote: Hi Ceph, When an SSD partition is used to store a journal
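Two ways that under-provisioning is typically done, as a sketch - the device and sizes are illustrative, and hdparm -N usually needs a power cycle before the drive reports the new capacity:

  # Option 1: only partition part of the SSD and leave the rest untouched,
  # e.g. a 10GiB journal partition on a much larger drive (GPT label assumed):
  sudo parted -s /dev/sdb mklabel gpt mkpart journal 1MiB 10GiB

  # Option 2: shrink the drive's visible capacity with a Host Protected Area
  # ('p' makes the new max sector count persistent):
  sudo hdparm -N p400000000 /dev/sdb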

Re: [ceph-users] SSD journal partition and trim

2013-12-03 Thread James Pearce
How much (%) is left unprovisioned on those (840s?)? And were they trim'd/secure-erased before deployment? On 2013-12-03 12:45, Emmanuel Lacour wrote: On Tue, Dec 03, 2013 at 12:38:54PM +, James Pearce wrote: Since the journal partitions are generally small, it shouldn't need
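For reference, a secure-erase sketch using hdparm (destructive; the device name and password are placeholders, and the drive must not be in the 'frozen' security state):

  # Set a temporary security password, then issue an ATA secure erase,
  # which returns all NAND cells to their factory (trimmed) state:
  sudo hdparm --user-master u --security-set-pass p /dev/sdX
  sudo hdparm --user-master u --security-erase p /dev/sdX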

Re: [ceph-users] SSD journal partition and trim

2013-12-03 Thread James Pearce
still a good idea - enabling the discard mount option is generally counter-productive since trim is issued way too often, destroying performance (in my testing). On 2013-12-03 13:12, Emmanuel Lacour wrote: On Tue, Dec 03, 2013 at 12:48:21PM +, James Pearce wrote: How much (%) is left
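The usual alternative is a periodic batched trim rather than the discard mount option; a minimal sketch, assuming util-linux fstrim and an illustrative OSD mount point:

  # Trim the free space in bulk, once (or from a weekly cron job), rather
  # than issuing trim on every unlink via the discard mount option:
  sudo fstrim -v /var/lib/ceph/osd/ceph-3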

Re: [ceph-users] Impact of fancy striping

2013-12-03 Thread James Pearce
I would really appreciate it if someone could: - explain why the journal setup is way more important than striping settings; I'm not sure if it's what you're asking, but any write must be physically written to the journal before the operation is acknowledged. So the overall cluster
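For illustration, a minimal ceph.conf sketch of putting the journal on a faster device - the partition path and label are assumptions, not from the thread:

  [osd]
      # journal on a dedicated SSD partition instead of a file on the
      # spinning OSD data disk (the by-partlabel path is just an example):
      osd journal = /dev/disk/by-partlabel/journal-$id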

Re: [ceph-users] SSD journal partition and trim

2013-12-03 Thread James Pearce
http://en.wikipedia.org/wiki/Write_amplification Hope that helps. On 2013-12-03 15:10, Loic Dachary wrote: On 03/12/2013 13:38, James Pearce wrote: Since the journal partitions are generally small, it shouldn't need to be. For example implement with substantial under-provisioning, either via HPA or simple

Re: [ceph-users] Impact of fancy striping

2013-11-29 Thread James Pearce
Did you try moving the journals to separate SSDs? It was recently discovered that due to a kernel bug/design, the journal writes are translated into device cache flush commands, so thinking about that I wonder also whether there would be performance improvement in the case that journal and

Re: [ceph-users] Impact of fancy striping

2013-11-29 Thread James Pearce
I will try to look into this issue of device cache flush. Do you have a tracker link for the bug? How I wish this were a forum! But here is a link: http://www.spinics.net/lists/ceph-users/msg05966.html And this:

Re: [ceph-users] ZFS on Ceph (rbd-fuse)

2013-11-29 Thread James Pearce
On a related note, is there any discard/trim support in rbd-fuse? Apparently so (but not in the kernel module unfortunately).

[ceph-users] One SSD Per Drive

2013-11-28 Thread James Pearce
Expensive at launch and limited to Windows as of now, but very interesting nevertheless. 120GB SSD and 1TB spinning disk - separately addressable - all in a 2.5" form factor: http://www.wdc.com/en/products/products.aspx?id=1190

[ceph-users] tracker.ceph.com - public email address visibility?

2013-11-27 Thread James Pearce
I was going to add something to the bug tracker, but it looks to me that contributor email addresses all have public (unauthenticated) visibility? Can this be set in user preferences? Many thanks!

Re: [ceph-users] installing OS on software RAID

2013-11-26 Thread James Pearce
On 2013-11-25 21:55, Kyle Bader wrote: Several people have reported issues with combining OS and OSD journals on the same SSD drives/RAID due to contention. I wonder whether this has been due to the ATA write cache flush command bug that was found yesterday. It would seem to explain why a

Re: [ceph-users] Fancy striping support

2013-11-26 Thread James Pearce
Does rbd-fuse support cephx? I can't see any reference in the docs: http://ceph.com/docs/master/man/8/rbd-fuse/ On 2013-11-26 14:40, nicolasc wrote: Thank you for that quick answer. I will try the fuse client.
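The man page only documents -c and -p, so if cephx does work it is presumably picked up from the usual ceph.conf/keyring locations; a hedged sketch of an invocation (pool name and paths are assumptions):

  # Mount pool 'rbd' via FUSE; librados should read the keyring named in
  # ceph.conf - whether cephx is actually honoured here is the open question:
  sudo rbd-fuse -c /etc/ceph/ceph.conf -p rbd /mnt/rbd-fuse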

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-19 Thread James Pearce
2) Can't grow once you reach the hard limit of 14TB, and if you have multiple of such machines, then fragmentation becomes a problem 3) might have the risk of 14TB partition corruption wiping out all your shares 14TB limit is due to EXT(3/4) recommendation(/implementation)?

Re: [ceph-users] [ANN] ceph-deploy 1.3.2 released!

2013-11-14 Thread James Pearce
On 2013-11-13 13:29, Alfredo Deza wrote: I'm happy to announce a new release of ceph-deploy, the easy deployment tool for Ceph. Hi, is 1.3.2 known to work with the 0.72 release? With Ubuntu 13.04 clean builds it seems to be failing even to generate keys. I've also seen the breaks mentioned

Re: [ceph-users] [ANN] ceph-deploy 1.3.2 released!

2013-11-14 Thread James Pearce
Sorry, should have included some output as well: $ ceph-deploy mon create mon0-0 ... Exception in thread Thread-5 (most likely raised during interpreter shutdown): Traceback (most recent call last): File /usr/lib/python2.7/threading.py, line 810, in __bootstrap_inner File
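For context, the sequence leading up to the key step looks roughly like this (a sketch using the hostname above, not the exact commands run):

  ceph-deploy new mon0-0          # writes ceph.conf and the initial mon keyring
  ceph-deploy install mon0-0      # installs the ceph packages on the target
  ceph-deploy mon create mon0-0   # where the traceback above appears
  ceph-deploy gatherkeys mon0-0   # fails if the monitor never reached quorum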

Re: [ceph-users] [ANN] ceph-deploy 1.3.2 released!

2013-11-14 Thread James Pearce
On 2013-11-14 17:14, Alfredo Deza wrote: I think that is just some leftover mishandled connection from the library that ceph-deploy uses to connect remotely and can be ignored. Could you share the complete log that got you to this point? I believe this bit is coming at the very end, right?

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-14 Thread James Pearce
On 2013-11-14 16:08, Gautam Saxena wrote: I've recently accepted the fact CEPH-FS is not stable...SAMBA no longer working... Alternatives 1) nfs over rbd... 2) nfs-ganesha for ceph... 3) create a large Centos 6.4 VM (eg 15 TB, 1 TB for OS using EXT4, remaining 14 TB using either EXT4 or
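As an illustration of option 1, NFS over RBD usually boils down to something like this on a gateway host - the image name, size and export path below are placeholders:

  # On the NFS gateway: map an RBD image, put a filesystem on it, export it.
  sudo rbd create nfsdata --size 1048576        # 1 TB image (size in MB)
  sudo rbd map nfsdata                          # appears as e.g. /dev/rbd0
  sudo mkfs.xfs /dev/rbd0
  sudo mkdir -p /export/nfsdata
  sudo mount /dev/rbd0 /export/nfsdata
  echo '/export/nfsdata *(rw,no_root_squash)' | sudo tee -a /etc/exports
  sudo exportfs -ra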

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-14 Thread James Pearce
On 2013-11-14 19:59, Dimitri Maziuk wrote: Cephfs is in fact one of Ceph's big selling points. IMO the issue is more that, since it's not supported, the Enterprise sector won't touch it.

Re: [ceph-users] Running on disks that lose their head

2013-11-07 Thread James Pearce
when one head out of ten fails: disks can keep working with the nine remaining heads... some info on this at last in the SATA-IO 3.2 Spec... Rebuild Assist... Some info on the command set (SAS SATA implementations):