Christian,

On Sat, 4 Oct 2014 17:56:22 +0100 (BST) Andrei Mikhailovsky wrote:
> > Read the above link again, carefully. ^o^
> > In it I state that:
> > a) despite reading such in old posts, setting read_ahead on the OSD
> > nodes has no or even negative effects. Inside the VM, it is very helpful:
> > b) the read speed increased about 10 times, from 35MB/s to 380MB/s
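The in-guest read_ahead tuning described above is done through sysfs; a minimal sketch, assuming a virtio disk named vda and a 4 MB readahead (both the device name and the value are assumptions to adjust per VM):

```shell
# Show the current readahead in KB (the kernel default is often 128)
cat /sys/block/vda/queue/read_ahead_kb

# Raise it to 4 MB for this boot (not persistent; use a udev rule to
# make it survive reboots)
echo 4096 > /sys/block/vda/queue/read_ahead_kb

# Equivalent via blockdev: its unit is 512-byte sectors, so 8192 = 4 MB
blockdev --setra 8192 /dev/vda
```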
On Sat, 4 Oct 2014 11:16:05 +0100 (BST) Andrei Mikhailovsky wrote:
> While I doubt you're hitting any particular bottlenecks on your storage
> servers, I don't think Zabbix (very limited experience with it, so I
> might be wrong) monitors everything, nor does it do so at sufficiently
> high frequency to show what is going on during a peak or fio test from
> a client
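A client-side fio run of the kind mentioned above might look like the following; the filename, block size, and runtime are illustrative assumptions, not the exact test used in this thread:

```shell
# Sequential-read benchmark from inside a guest VM; --direct=1 bypasses
# the page cache so the result reflects rbd rather than local memory
fio --name=seqread --rw=read --bs=4M --size=2G \
    --ioengine=libaio --direct=1 \
    --filename=/tmp/fio.testfile --runtime=60 --time_based
```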
On Fri, 3 Oct 2014 11:24:38 +0100 (BST) Andrei Mikhailovsky wrote:
> From: "Christian Balzer"
> To: ceph-users@lists.ceph.com
> Sent: Friday, 3 October, 2014 2:06:48 AM
> Subject: Re: [ceph-users] ceph, ssds, hdds, journals and caching
We have also seen unrecoverable XFS errors with bcache. Our experience is
that SSD journals provide about the same performance benefit as bcache
(sometimes better), and SSD journals are easier to set up.
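For reference, pointing OSD journals at SSD partitions is a ceph.conf setting; a sketch, where the journal size and the by-partlabel naming scheme are assumptions, not values from this thread:

```ini
[osd]
; journal size in MB; 10 GB here is an assumed value, size to your workload
osd journal size = 10240
; point each OSD's journal at an SSD partition; the partition label
; scheme below is hypothetical ($id expands to the OSD number)
osd journal = /dev/disk/by-partlabel/journal-$id
```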
On Fri, Oct 3, 2014 at 5:04 AM, Vladislav Gorbunov wrote:
> > Has anyone tried using bcache or dm-cache with ceph?
That is what I am afraid of!
- Original Message -
> From: "Vladislav Gorbunov"
> To: "Andrei Mikhailovsky"
> Cc: "Christian Balzer" , ceph-users@lists.ceph.com
> Sent: Friday, 3 October, 2014 12:04:37 PM
> Subject: Re: [ceph-users] ceph, ssds, hdds, journals and caching
> Has anyone tried using bcache or dm-cache with ceph?
I tested lvmcache (based on dm-cache) with ceph 0.80.5 on CentOS 7 and
got an unrecoverable XFS error and the total loss of an OSD server.
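For context, an lvmcache setup of the sort being tested looks roughly like this; the device names and extent sizes are assumptions, and given the XFS corruption reported above, treat it as experimental under ceph:

```shell
# Hypothetical devices: sdb is the HDD holding the origin LV,
# sdc is the SSD holding the cache pool
pvcreate /dev/sdb /dev/sdc
vgcreate vg_osd /dev/sdb /dev/sdc
lvcreate -n osd_data -l 95%PVS vg_osd /dev/sdb
lvcreate --type cache-pool -n osd_cache -l 95%PVS vg_osd /dev/sdc
# Attach the cache pool to the origin LV; cachemode defaults to
# writethrough (writeback is faster but riskier on SSD failure)
lvconvert --type cache --cachepool vg_osd/osd_cache vg_osd/osd_data
mkfs.xfs /dev/vg_osd/osd_data
```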
On Thu, 2 Oct 2014 21:54:54 +0100 (BST) Andrei Mikhailovsky wrote:
> Hello Cephers,
>
> I am a bit lost on the best ways of using ssds and hdds for a ceph
> cluster which uses rbd + kvm for guest vms.
>
> At the moment I've got 2 osd servers which currently have 8 hdd osds
> (max 16 bays) each and 4 ssd disks. Currently, I am using 2 ssds for
> osd journals and I've got 2x512