[ceph-users] Re: ceph fs crashes on simple fio test

2019-08-24 Thread Frank Schilder
Same set-up as Robert. Two different VLANs for the front and back networks on the same switch. We did load tests before and the switches have no problems routing the traffic. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 Fro

[ceph-users] Re: ceph fs crashes on simple fio test

2019-08-24 Thread Frank Schilder
Hi Robert, thanks for your reply. These are actually settings I found in the cases I referred to as "other cases" in my mail. These settings could be a first step. Looking at the documentation, solving the overload problem might require some QoS settings I found below the description of "osd op
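
If the options meant here are the op-queue settings documented under "osd op queue", a minimal sketch of a starting point could look like this (values are illustrative, not a recommendation):

    # ceph.conf on the OSD hosts -- assumed to be the settings referenced above
    [osd]
    osd_op_queue = wpq            # or one of the mclock schedulers for QoS-based scheduling
    osd_op_queue_cut_off = high   # route most ops through the weighted queue instead of the strict one

    # the same can be applied cluster-wide at runtime (osd_op_queue itself needs an OSD restart)
    ceph config set osd osd_op_queue_cut_off high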

[ceph-users] ceph's replicas question

2019-08-24 Thread Wesley Peng
Hi, We have all SSD disks as ceph's backend storage. Considering the cost factor, can we set up the cluster to have only two replicas for objects? thanks & regards Wesley
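
Replica count is a per-pool setting, so a two-copy setup is just a pool configuration; a minimal sketch, assuming a replicated pool named "rbd" (the pool name is only an example):

    ceph osd pool set rbd size 2       # keep two copies of every object
    ceph osd pool set rbd min_size 2   # stop serving I/O when only one copy is left

With size 2 and min_size 2 the pool blocks I/O as soon as one replica is unavailable; lowering min_size to 1 avoids the blocking but means writes can be accepted with a single copy.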

[ceph-users] Re: ceph's replicas question

2019-08-24 Thread Darren Soothill
So can you do it? Yes, you can. Should you do it is the bigger question. So my first question would be: what type of drives are you using? Enterprise-class drives with a low failure rate? Then you have to ask yourself: are you feeling lucky? If you do a scrub and 1 drive returns 1 value and anot
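
That scrub case is exactly where two copies hurt: there is no third replica to break the tie. A sketch of how such a mismatch is usually inspected (the PG id 2.1a is only an example):

    ceph health detail                                      # e.g. "pg 2.1a is active+clean+inconsistent"
    rados list-inconsistent-obj 2.1a --format=json-pretty   # show which objects/shards disagree
    ceph pg repair 2.1a                                     # with only two copies there is no majority to pick from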

[ceph-users] ceph rbd disk performance question

2019-08-24 Thread linghucongsong
Hi all! I use ceph as the openstack VM disk. I have a VM running postgresql. I found that the disk on the VM running postgresql is very busy and slow! But the ceph cluster is very healthy and without any slow requests. Even though the VM disk is very busy, the ceph cluster looks very idle. My cep
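
The numbers that usually clarify this kind of report are the per-request size and latency seen inside the VM, for example (sdb is the RBD-backed disk mentioned later in the thread):

    iostat -x sdb 1    # the request-size and await columns show how large and how slow the individual I/Os are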

[ceph-users] Re: ceph rbd disk performance question

2019-08-24 Thread Brett Chancellor
You aren't showing the I/O size, only the latency. It looks like this is mostly sequential writes, since it's merging most of the I/O. Because you only assigned 1 volume (sdb), you will be limited to a single queue. I'd recommend adding more volumes and striping the data across them. On Sat, Aug 24, 20
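
The host-side striping could be done with LVM, roughly like this, assuming three additional RBD volumes show up in the guest as sdc, sdd and sde (device and volume-group names are hypothetical):

    pvcreate /dev/sdc /dev/sdd /dev/sde
    vgcreate pgvg /dev/sdc /dev/sdd /dev/sde
    lvcreate -n pgdata -i 3 -I 64k -l 100%FREE pgvg   # -i 3 stripes extents across all three PVs
    mkfs.xfs /dev/pgvg/pgdata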

[ceph-users] Re: ceph rbd disk performance question

2019-08-24 Thread Ronny Aasen
On 24.08.2019 18:00, Brett Chancellor wrote: You aren't showing the I/O size, only the latency. It looks like this is mostly sequential writes, since it's merging most of the I/O. Because you only assigned 1 volume (sdb), you will be limited to a single queue. I'd recommend adding more volumes

[ceph-users] Re: ceph rbd disk performance question

2019-08-24 Thread Brett Chancellor
Do both. Host striping will give you more queues, RBD striping will use more OSDs. On Sat, Aug 24, 2019, 12:39 PM Ronny Aasen wrote: > On 24.08.2019 18:00, Brett Chancellor wrote: > > You aren't showing the I/O size, only the latency. It looks like this > > is mostly sequential writes, since it'
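
On the RBD side, striping is set when the image is created; an illustrative example (pool, image name and values are made up):

    rbd create volumes/pgdata --size 200G --object-size 4M --stripe-unit 64K --stripe-count 8
    # consecutive 64K chunks land in different objects, so writes fan out over more OSDs in parallel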

[ceph-users] Re: ceph's replicas question

2019-08-24 Thread Wido den Hollander
> On 24 Aug 2019 at 16:36, Darren Soothill wrote the > following: > > So can you do it? > > Yes you can. > > Should you do it is the bigger question. > > So my first question would be what type of drives are you using? Enterprise > class drives with a low failure rate? > Doesn'