Hello,

Yes, 1) and 2) are correct; the server provider does their own internal checking
before they allow a particular disk model to be used.

The two disk models are:
TOSHIBA MG06ACA10TEY
ST10000NM0156-2AA111

They are just using the onboard motherboard SATA3 ports. Again, the only
difference between each server is the disks; the MB/CPU/RAM etc. is the same
for all 4.
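
For what it's worth, here is a minimal sketch of confirming the reported model
and media type per node straight from sysfs; the sd* glob and device layout are
assumptions on my part, so adjust to the actual data disks:

import glob
import os

# List every sd* block device with its reported model and whether the
# kernel sees it as rotational (1 = spinning disk, 0 = SSD).
for dev in sorted(glob.glob("/sys/block/sd*")):
    name = os.path.basename(dev)
    try:
        with open(os.path.join(dev, "device", "model")) as f:
            model = f.read().strip()
        with open(os.path.join(dev, "queue", "rotational")) as f:
            rotational = f.read().strip()
    except OSError:
        continue  # skip devices that lack these sysfs entries
    print("{}: model={} rotational={}".format(name, model, rotational))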

,Ashley



On Thu, Dec 13, 2018 at 7:36 PM Jake Grimmett <j...@mrc-lmb.cam.ac.uk> wrote:

> Hi Ashley,
>
> Always interesting to see hardware benchmarks :)
>
> Do I understand the following correctly?
>
> 1) your host (server provider) rates the Toshiba drives as faster
> 2) Ceph osd perf rates the Seagate drives as faster
>
> Could you share the benchmark output and drive model numbers?
>
> Presumably the nodes have otherwise identical hardware?
>
> Finally, what disk controller are you using?
>
> many thanks
>
> Jake
>
> On 12/13/18 9:26 AM, Ashley Merrick wrote:
> > Hello,
> >
> > Thanks for your reply! While the performance within Ceph is different,
> > the two disks are exactly the same in rated performance / type etc.,
> > just from two different manufacturers. Obviously I'd have expected such
> > a big difference between 5400 and 7200 RPM, or SMR and CMR, for example,
> > but these are identical in this area. Both seem to perform the same
> > under standard workloads; just with the default Ceph setup they are
> > miles apart.
> >
> > Changing disks is one option, but I wanted to first see if there were
> > some things I could at least try to level the performance across the
> > field.
> >
> > ,Ashley
> >
> > On Thu, Dec 13, 2018 at 5:21 PM Maged Mokhtar <mmokh...@petasan.org> wrote:
> >
> >
> >     On 13/12/2018 09:53, Ashley Merrick wrote:
> >>     I have a Mimic BlueStore EC RBD pool running on 8+2; this is
> >>     currently running across 4 nodes.
> >>
> >>     3 nodes are running Toshiba disks while one node is running
> >>     Seagate disks (same size, spinning speed, enterprise disks, etc.).
> >>     I have noticed a huge difference in IOWAIT and disk latency
> >>     between the two sets of disks, which can also be seen from
> >>     ceph osd perf during read and write operations.
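
As a side note, a minimal sketch of capturing those per-OSD latencies from
ceph osd perf so the two vendors can be compared over time; it assumes the
Mimic-era JSON layout (osd_perf_infos / perf_stats / commit_latency_ms /
apply_latency_ms), so check the keys on your release:

import json
import subprocess

# Grab the current per-OSD latency snapshot in JSON form.
raw = subprocess.check_output(["ceph", "osd", "perf", "--format", "json"])
perf = json.loads(raw)

# Print commit/apply latency per OSD; group by host/vendor as needed.
for osd in sorted(perf.get("osd_perf_infos", []), key=lambda o: o["id"]):
    stats = osd["perf_stats"]
    print("osd.{}: commit={} ms apply={} ms".format(
        osd["id"], stats["commit_latency_ms"], stats["apply_latency_ms"]))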
> >>
> >>     Speaking to my host (server provider), they benchmarked the two
> >>     disks before approving them for use in this type of server, and
> >>     they actually saw higher performance from the Toshiba disks
> >>     during their tests.
> >>
> >>     They did, however, state that their tests were at higher / larger
> >>     block sizes; I imagine that with Ceph using EC of 8+2, the block
> >>     sizes / requests are quite small?
> >>
> >>     Is there anything I can do? Changing the RBD object size & stripe
> >>     unit to something bigger than the default? Would this send data to
> >>     the disks in larger chunks at once, compared to lots of smaller
> >>     blocks?
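
For anyone wanting to try that, a minimal sketch of creating a test image with
a larger-than-default object size and explicit striping; the pool/image names
and sizes below are placeholders rather than recommendations, and the flags
should be checked against rbd help create on your version (this only affects
newly created images):

import subprocess

# Create a test image whose data lives in the EC pool, with 8M objects
# instead of the 4M default, striped 1 MiB x 8. Names are examples only.
subprocess.check_call([
    "rbd", "create", "rbd/testimg",
    "--size", "100G",
    "--data-pool", "ecpool",      # the 8+2 EC data pool (placeholder name)
    "--object-size", "8M",        # default object size is 4M
    "--stripe-unit", "1048576",   # 1 MiB; must evenly divide the object size
    "--stripe-count", "8",
])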
> >>
> >>     If anyone else has any advice I'm open to trying.
> >>
> >>     P.S. I have already disabled the disk cache on all disks, as it
> >>     was causing high write latency across all of them.
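
For reference, a minimal sketch of how that volatile on-disk write cache is
commonly checked and disabled on SATA drives with hdparm (an assumption that
hdparm applies here; SAS disks would need sdparm instead, /dev/sdb is a
placeholder, and it needs root):

import subprocess

dev = "/dev/sdb"  # placeholder -- repeat for each OSD data disk

# Report the current write-cache setting.
subprocess.check_call(["hdparm", "-W", dev])

# Uncomment to disable the volatile write cache:
# subprocess.check_call(["hdparm", "-W", "0", dev])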
> >>
> >>     Thanks
> >>
> >
> >     Since you say there is a huge difference between the disk types
> >     under your current workload, I would focus on that; the logical
> >     thing to do is to replace them. You could run further benchmarks
> >     of fsync write speed at lower block sizes, but I think your
> >     current observation is conclusive enough.
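
A minimal sketch of that kind of small-block fsync benchmark with fio,
assuming fio is installed and /dev/sdX is a spare disk that can be
overwritten; 4k sync writes at queue depth 1 roughly mimic the small,
latency-sensitive writes an OSD issues:

import subprocess

# DESTRUCTIVE: writes directly to the raw device. Point it only at a disk
# that holds no data, and run it once against a Toshiba and once against a
# Seagate disk to compare.
subprocess.check_call([
    "fio",
    "--name=synctest",
    "--filename=/dev/sdX",   # placeholder device name
    "--rw=randwrite",
    "--bs=4k",
    "--iodepth=1",
    "--direct=1",
    "--fsync=1",             # fsync after every write
    "--runtime=60",
    "--time_based",
])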
> >
> >     Other, less recommended, options: use a lower EC profile such as
> >     k=4, m=2, or get a controller with a write-back cache. For
> >     sequential I/O, increase your read_ahead_kb, use the librbd client
> >     cache, and adjust your client OS cache parameters. Also, if you
> >     have a controlled application like a backup app where you can
> >     specify the block size, increase it to above 1 MB. But again, I
> >     would recommend you focus on changing disks.
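
A minimal sketch of the read_ahead_kb part of that advice; the 4096 value and
device names are illustrative only, the change needs root and does not persist
across reboots (a udev rule would be needed for that). The librbd client cache
side is controlled by the rbd cache options in the client's ceph.conf:

# Raise the kernel readahead on the OSD data disks for sequential reads.
devices = ["sdb", "sdc", "sdd"]   # placeholder OSD data disk names
for dev in devices:
    with open("/sys/block/{}/queue/read_ahead_kb".format(dev), "w") as f:
        f.write("4096")           # value in KB; the default is usually 128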
> >
> >     /Maged
> >
> >
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
