Hello all.
When, if ever, will Ceph clients have the ability to prefer certain OSDs/hosts
over others?
I am running 3-replica pools across 3 data centers connected by relatively
low-bandwidth links. Writes have to travel out anyway, but I'd prefer to keep
reads local.
The thinking is that since all w
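(For readers landing here later: Ceph releases newer than the ones in this thread grew client-side options for exactly this. A rough sketch, assuming an Octopus-or-later client and cluster; 'dc1' is a made-up datacenter name:)

    [client]
        # tell the client where it sits in the CRUSH hierarchy
        crush location = datacenter=dc1
        # serve reads from the nearest replica instead of always from the primary
        rbd read from replica policy = localize

(The cluster also has to permit it, e.g. via 'ceph osd set-require-min-compat-client octopus'.)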
not that bad since booting from that snapshot will do a
clone.
So I'm not sure whether running sparsify is a good idea (libguestfs should be
able to do that).
However, it would be better if we could do that via RBD snapshots, so we can
have the best of both worlds.
> On 12 Mar 2015, at 03:45, Charles 'Boy
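(A rough sketch of the snapshot-and-clone flow being discussed, using the stock rbd CLI; the pool and image names are made up for illustration:)

    rbd snap create vms/instance-disk@base        # snapshot the instance's image
    rbd snap protect vms/instance-disk@base       # clones require a protected snapshot
    rbd clone vms/instance-disk@base vms/new-boot-disk   # copy-on-write clone to boot from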
Hello,
On Thu, Mar 12, 2015 at 3:07 PM, Thomas Foster
wrote:
> I am looking into how I can maximize my space with replication, and I am
> trying to understand how I can do that.
>
> I have 145TB of space and a replication of 3 for the pool and was thinking
> that the max data I can have in the c
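(For what it's worth, the rough arithmetic behind those numbers, treating 145 TB as raw capacity and assuming the default near-full ratio of 0.85:)

    145 TB raw / 3 replicas      = ~48.3 TB usable
    48.3 TB x 0.85 near-full     = ~41 TB that can safely be filled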
Hello all.
The current behavior when snapshotting RBD-backed instances in OpenStack
involves uploading the snapshot into Glance.
The resulting Glance image is fully allocated, so originally sparse RAW images
balloon to their full size. Is there a way to preserve the sparseness? Otherwise
I can use qemu-img
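(A hedged sketch of the qemu-img route mentioned above; the file, pool and image names are placeholders, and it assumes qemu-img was built with rbd support:)

    # qemu-img skips runs of zeroes while converting, so the RBD image
    # should end up thin-provisioned rather than fully allocated
    qemu-img convert -f raw -O raw instance-snap.raw rbd:images/instance-snap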
OSDs in a replicated set writing at
least to their journal causes latency concerns with my WAN-type links.
Charles
--Original Message--
From: Sage Weil
To: Charles 'Boyo
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Disabling OSD journals, parallel reads and eventual
consistency for RBD
Sent: Jun 13, 2014 00:29
On Thu, 12 Jun 2014, Charles 'Boyo wrote:
> Hello list.
>
> Is it possible, or will it ever be possi
Hello list.
Is it possible, or will it ever be possible to disable the OSD's journalling
activity?
I understand it is risky and has the potential for data loss, but in my use
case the data is easily rebuilt from scratch and I'm really bothered by the
reduced throughput "wasted" on journall
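(The filestore journal can't simply be switched off, but as a hedged sketch of the sort of workaround people use when the data really is disposable, the journal can be pointed at a RAM-backed path; the path and size below are made up, and a power loss means rebuilding the OSD:)

    [osd]
        osd journal = /dev/shm/osd.$id.journal    # tmpfs-backed, gone after a reboot
        osd journal size = 1024                   # MB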
Hello list.
It is a well-known fact that speeding up the OSD journals results in overall
performance improvement. And most installations use SSDs to gain this benefit.
But is anyone using or considering using NAND-backed DRAM like the Viking
ArxCiS-NV and similar NVDIMM solutions?
I think thes
>> That's because qemu-kvm
>> in CentOS 6.4 doesn't support librbd.
>
> RedHat just added RBD support in qemu-kvm-rhev in RHEV 6.5. I don't
> know if that will trickle down to CentOS but you can probably
> recompile it yourself like we did.
>
> https://rhn.redhat.com/errata/RHSA-2013-1754.html
> (h
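(A quick hedged way to check whether a given qemu build has RBD support; the binary path is the usual RHEL/CentOS one and may differ elsewhere:)

    ldd /usr/libexec/qemu-kvm | grep librbd    # shows a librbd line if linked against it
    qemu-img --help | grep rbd                 # 'rbd' should be listed among supported formats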
>> On a related note, is there any discard/trim support in rbd-fuse?
>
>
> Apparently so (but not in the kernel module unfortunately).
>
OK, so librbd (which is used by the qemu alternative) supports discard,
but the rbd kernel module does not. Neither of these is available to
me right now.
Is rbd-f
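(For reference, a hedged sketch of passing discard through when qemu talks to librbd directly; pool, image and device IDs are placeholders, and it assumes a virtio-scsi disk since virtio-blk of that era did not forward discards:)

    qemu-system-x86_64 ... \
      -drive file=rbd:rbd/vm1-disk,format=raw,if=none,id=drive0,cache=writeback,discard=unmap \
      -device virtio-scsi-pci,id=scsi0 \
      -device scsi-hd,bus=scsi0.0,drive=drive0

(The guest then issues the actual trims, e.g. with fstrim.)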
in Ceph, then this would mean a single OSD
> failure could cause data loss. For that reason, it seems it would be
> better to do the replication in Ceph than in ZFS in this case.
>
> John
>
> On Fri, Nov 29, 2013 at 11:13 AM, Charles 'Boyo wrote:
>> Hello all.
>
Hello all.
I have a Ceph cluster using XFS on the OSDs. Btrfs is not available to
me at the moment (cluster is running CentOS 6.4 with stock kernel).
I intend to maintain a full replica of an active ZFS dataset on the
Ceph infrastructure by installing an OpenSolaris KVM guest using
rbd-fuse to ex
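(A hedged sketch of the rbd-fuse side of that setup; pool, image and mountpoint names are invented:)

    rbd create --size 102400 rbd/zfs-replica    # 100 GB image to hold the dataset
    mkdir -p /mnt/rbd
    rbd-fuse -p rbd /mnt/rbd                    # each image in the pool appears as a file
    # /mnt/rbd/zfs-replica can then be attached to the KVM guest as a file-backed disk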
Hello list.
I am trying to create a new single-node cluster using the ceph-deploy
tool, but the 'mon create' step keeps failing, apparently because the
'ceph' cluster name is hardwired into the /etc/init.d/ceph rc script,
or, more correctly, because the rc script does not have any support for
"--cluster ". Ha
Hi,
On Mon, Jul 22, 2013 at 2:08 AM, Chen, Xiaoxi wrote:
> Hi,
>
>
> > Can you share any information on the SSD you are using, is it
> PCIe connected?
>
> Depends, if you use HDD as your OSD data disk, a SATA/SAS SSD is
> enough for you. Instead of Intel 520, I woul
On Mon, Jul 22, 2013 at 7:10 PM, Mark Nelson wrote:
> On 07/22/2013 01:02 PM, Oliver Fuckner wrote:
>> Good evening,
>>
>>
>>
>> On the second look you see that they use 4 Sandisk X100 SSDs in RAID5
>> and those SSDs only have 80TBytes Write Endurance each... that makes me
>> nervous.
>
> I'm less
Hello.
I am intending to build a Ceph cluster using several Dell C6100 multi-node
chassis servers.
These have only 3 disk bays per node (12 x 3.5" drives across 4 nodes), so I
can't afford to sacrifice a third of my capacity for SSDs. However, fitting the
SSD via PCI-e seems like a valid option.
Un
16 matches