[ceph-users] NAND-backed DRAM for Ceph journals

2014-04-16 Thread Charles 'Boyo
Hello list. It is well known that speeding up the OSD journals improves overall performance, and most installations use SSDs to gain this benefit. But is anyone using, or considering using, NAND-backed DRAM like the Viking ArxCiS-NV and similar NVDIMM solutions? I think these…
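For context, a FileStore OSD's journal location is just the "osd journal" path in ceph.conf, so moving it onto such a device is mostly a matter of repointing that path. A minimal sketch, assuming (hypothetically) the NVDIMM appears as /dev/pmem0 and has been partitioned with one slice per OSD on the host:

  # stop the OSD and drain its current journal
  service ceph stop osd.0
  ceph-osd -i 0 --flush-journal

  # in /etc/ceph/ceph.conf:
  #   [osd.0]
  #   osd journal = /dev/pmem0p1

  # build the journal on the new device and bring the OSD back
  ceph-osd -i 0 --mkjournal
  service ceph start osd.0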

[ceph-users] Disabling OSD journals, parallel reads and eventual consistency for RBD

2014-06-12 Thread Charles 'Boyo
Hello list. Is it possible, or will it ever be possible, to disable the OSD's journalling activity? I understand it is risky and has the potential for data loss, but in my use case the data is easily rebuilt from scratch and I'm really bothered by the throughput "wasted" on journalling…

Re: [ceph-users] Disabling OSD journals, parallel reads and eventual consistency for RBD

2014-06-12 Thread Charles 'Boyo
…From: Sage Weil To: Charles 'Boyo Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Disabling OSD journals, parallel reads and eventual consistency for RBD Sent: Jun 13, 2014 00:29 On Thu, 12 Jun 2014, Charles 'Boyo wrote: > Hello list. > Is it possible, or will it ever be possible…

Re: [ceph-users] Disabling OSD journals, parallel reads and eventual consistency for RBD

2014-06-12 Thread Charles 'Boyo
All OSDs in a replicated set writing at least to their journals raises latency concerns over my WAN-type links. Charles --Original Message-- From: Sage Weil To: Charles 'Boyo Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Disabling OSD journals, parallel reads and eventual consistency for RBD…

[ceph-users] Sparse RBD instance snapshots in OpenStack

2015-03-12 Thread Charles 'Boyo
Hello all. The current behavior of snapshotting RBD-backed instances in OpenStack involves uploading the snapshot into Glance. The resulting Glance image is fully allocated, causing originally sparse RAW images to balloon. Is there a way to preserve the sparseness? Otherwise I can use qemu-img…
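A manual workaround sketch along those lines, assuming a RAW image and direct access to the backing pool (the file names, pool name and image id below are placeholders):

  # pull down the fully allocated snapshot, punch the zero runs back out,
  # and re-import the result as a new RBD image
  glance image-download --file snap.raw <image-id>
  qemu-img convert -f raw -O raw snap.raw snap-sparse.raw
  rbd import snap-sparse.raw images/snap-sparse

rbd import should skip the zeroed runs, so the copy in the images pool ends up thin again.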

Re: [ceph-users] Replication question

2015-03-12 Thread Charles 'Boyo
Hello, On Thu, Mar 12, 2015 at 3:07 PM, Thomas Foster wrote: > I am looking into how I can maximize my space with replication, and I am trying to understand how I can do that. > I have 145TB of space and a replication of 3 for the pool and was thinking that the max data I can have in the c…
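The back-of-the-envelope answer is simply raw capacity divided by the pool's size (replica count):

  145 TB raw / 3 replicas ≈ 48.3 TB of usable data

and in practice you would keep the cluster well below that (a common rule of thumb is 70-80% full at most) so it can re-replicate after a failure without running out of space.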

Re: [ceph-users] Sparse RBD instance snapshots in OpenStack

2015-03-12 Thread Charles 'Boyo
…not that bad, since booting from that snapshot will do a clone. So I'm not sure doing a sparsify is a good idea (libguestfs should be able to do that). However, it would be better if we could do that via RBD snapshots so we can have the best of both worlds. > On 12 Mar 2015, at 03:45, Charles 'Boyo…
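For reference, the libguestfs route mentioned there would be something along the lines of (the file names are made up):

  virt-sparsify instance-snap.raw instance-snap-sparse.raw

which detects free space inside the guest filesystems and writes the image back out with that space as holes.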

[ceph-users] Localized reads (RADOS/RBD)

2015-03-13 Thread Charles 'Boyo
Hello all. When, if ever, will Ceph clients have the ability to prefer certain OSDs/hosts over others? I am running 3-replica pools across 3 data centers connected by relatively narrow links. Writes have to travel out anyway, but I'd prefer to keep reads local. The thinking is that since all w…
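For background: RBD reads in a replicated pool are normally served by each PG's acting primary, so there is no per-client read preference to turn on. The nearest knob is primary affinity, which biases which OSDs become primaries cluster-wide rather than per site. A rough sketch (osd.12 stands in for a hypothetical remote OSD):

  # let the monitors honor primary affinity (off by default in older releases)
  ceph tell 'mon.*' injectargs '--mon_osd_allow_primary_affinity=true'
  # make osd.12 much less likely to be chosen as a primary
  ceph osd primary-affinity osd.12 0.25

Since the setting is global, it can pull reads toward one site but cannot make them local for clients in every data center at once.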

[ceph-users] SSD recommendations for OSD journals

2013-07-21 Thread Charles 'Boyo
Hello. I intend to build a Ceph cluster using several Dell C6100 multi-node chassis servers. These have only 3 disk bays per node (12 x 3.5" drives across 4 nodes), so I can't afford to sacrifice a third of my capacity for SSDs. However, fitting an SSD via PCIe seems a valid option. Un…
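Whichever device ends up holding the journals, the journal sizing rule from the Ceph documentation of that era is worth keeping in mind:

  osd journal size ≈ 2 * (expected throughput * filestore max sync interval)

so, as a purely illustrative example, disks sustaining ~100 MB/s with the default 5 s sync interval call for roughly 2 * 100 MB/s * 5 s = 1000 MB of journal per OSD.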

Re: [ceph-users] SSD recommendations for OSD journals

2013-07-22 Thread Charles 'Boyo
On Mon, Jul 22, 2013 at 7:10 PM, Mark Nelson wrote: > On 07/22/2013 01:02 PM, Oliver Fuckner wrote: >> Good evening, >> On a second look you see that they use 4 SanDisk X100 SSDs in RAID5, and those SSDs only have 80 TBytes of write endurance each... that makes me nervous. > I'm less…
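The worry is easy to put numbers on: every client byte written lands in the journal once, so endurance burns down at the node's average ingest rate. Using a purely illustrative average of 10 MB/s of journal traffic per SSD:

  10 MB/s * 86,400 s/day ≈ 0.86 TB/day
  80 TB endurance / 0.86 TB/day ≈ 93 days

and RAID5 parity writes only make this worse, so the nervousness seems justified unless the real write load is much lower.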

Re: [ceph-users] SSD recommendations for OSD journals

2013-07-22 Thread Charles 'Boyo
Hi, On Mon, Jul 22, 2013 at 2:08 AM, Chen, Xiaoxi wrote: >> Can you share any information on the SSD you are using, is it PCIe connected? > Depends, if you use an HDD as your OSD data disk, a SATA/SAS SSD is enough for you. Instead of the Intel 520, I would…

[ceph-users] Inability to create cluster using ceph-deploy and alternate cluster name (ceph rc script issue)

2013-10-17 Thread Charles 'Boyo
Hello list. I am trying to create a new single-node cluster using the ceph-deploy tool, but the 'mon create' step keeps failing, apparently because the 'ceph' cluster name is hardwired into the /etc/init.d/ceph rc script, or more correctly, the rc script has no support for "--cluster". Ha…
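For reference, the sequence being attempted amounts to something like this (cluster and host names are placeholders), with the last step being the one that trips over the init script:

  ceph-deploy --cluster backup new node1
  ceph-deploy --cluster backup mon create node1   # fails: /etc/init.d/ceph only knows "ceph"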

[ceph-users] ZFS on Ceph (rbd-fuse)

2013-11-29 Thread Charles 'Boyo
Hello all. I have a Ceph cluster using XFS on the OSDs. Btrfs is not available to me at the moment (the cluster is running CentOS 6.4 with the stock kernel). I intend to maintain a full replica of an active ZFS dataset on the Ceph infrastructure by installing an OpenSolaris KVM guest, using rbd-fuse to ex…
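The host-side plumbing for that would look something like the sketch below (pool, image and mount-point names are made up); qemu is then pointed at the image file instead of a block device:

  mkdir -p /mnt/rbd
  rbd-fuse -p zfs /mnt/rbd     # every image in pool "zfs" shows up as a file under /mnt/rbd
  qemu-kvm ... -drive file=/mnt/rbd/tank0,format=raw,if=virtio,cache=none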

Re: [ceph-users] ZFS on Ceph (rbd-fuse)

2013-11-29 Thread Charles 'Boyo
…in Ceph, then this would mean a single OSD failure could cause data loss. For that reason, it seems it would be better to do the replication in Ceph than in ZFS in this case. > John > On Fri, Nov 29, 2013 at 11:13 AM, Charles 'Boyo wrote: >> Hello all. …

Re: [ceph-users] ZFS on Ceph (rbd-fuse)

2013-11-29 Thread Charles 'Boyo
>> On a related note, is there any discard/trim support in rbd-fuse? > Apparently so (but not in the kernel module, unfortunately). Ok, so librbd (which is used by the qemu alternative) supports discard, but the rbd kernel module does not. Neither of these is available to me right now. Is rbd-fuse…

Re: [ceph-users] ZFS on Ceph (rbd-fuse)

2013-11-29 Thread Charles 'Boyo
>> That's because qemu-kvm in CentOS 6.4 doesn't support librbd. > RedHat just added RBD support in qemu-kvm-rhev in RHEV 6.5. I don't know if that will trickle down to CentOS, but you can probably recompile it yourself like we did. > https://rhn.redhat.com/errata/RHSA-2013-1754.html > (h…
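A rough sketch of that rebuild-it-yourself route, assuming you can obtain the matching source RPM from the RHEV 6.5 channel or a mirror:

  # install the build dependencies, then rebuild the package locally
  yum-builddep qemu-kvm-rhev-*.src.rpm
  rpmbuild --rebuild qemu-kvm-rhev-*.src.rpm

The resulting binary RPMs land under ~/rpmbuild/RPMS/ and can be installed in place of the stock qemu-kvm.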