Re: [ceph-users] Does SSD Journal improve the performance?

2015-10-19 Thread Libin Wu
Hi,
My environment has a 32-core CPU and 256GB of memory. The SSD can reach
about 30k write IOPS when tested with direct I/O.
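
For reference, a direct I/O check of that kind can be run roughly like this
(sdX is only a placeholder for the SSD, and the run destroys any data on the
device):

fio --name=ssd-directio --filename=/dev/sdX --ioengine=libaio \
    --direct=1 --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
    --runtime=60 --time_based --group_reporting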

Finally, I figured out the problem: after changing the I/O scheduler of the
SSD to noop, the performance improved noticeably.
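
In case anyone hits the same thing: the scheduler can be checked and changed
at runtime through sysfs, roughly like this (sdX again being a placeholder
for the journal SSD):

cat /sys/block/sdX/queue/scheduler     # the entry in brackets is active, e.g. [cfq] deadline noop
echo noop > /sys/block/sdX/queue/scheduler

To keep the setting across reboots it is usually applied via a udev rule or
the elevator= kernel boot parameter.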

Please forgive me, I didn't realize the I/O scheduler could impact performance
so much.

Thanks!

2015-10-15 9:37 GMT+08:00 Christian Balzer :

>
> Hello,
>
> Firstly, this is clearly a ceph-users question, don't cross post to
> ceph-devel.
>
> On Thu, 15 Oct 2015 09:29:03 +0800 hzwuli...@gmail.com wrote:
>
> > Hi,
> >
> > An SSD journal should certainly improve IOPS performance. But
> > unfortunately that is not what I see in my test.
> >
> > I have two pools with the same number of osds:
> > pool1, ssdj_sas:
> > 9 osd servers, 8 OSDs(SAS) on every server
> > Journal on SSD, one SSD disk for 4 SAS disks.
> >
> Details. All of them.
> Specific HW (CPU, RAM, etc.) of these servers and the network, what type of
> SSDs, HDDs, controllers.
>
> > pool 2, sas:
> > 9 osd servers, 8 OSDs(SAS) on every server
> > Journal on the SAS disk itself.
> >
> Is the HW identical to pool1 except for the journal placement?
>
> > I use rbd to create a volume in pool1 and pool2 separately and use fio
> > to test the random-write IOPS. Here is the fio configuration:
> >
> > rw=randwrite
> > ioengine=libaio
> > direct=1
> > iodepth=128
> > bs=4k
> > numjobs=1
> >
> > The result I got is:
> > volume in pool1, about 5k
> > volume in pool2, about 12k
> >
> Now this job will stress the CPUs quite a bit (which you should be able to
> see with atop or the like).
>
> However, if the HW is identical in both pools, your SSDs may be among the
> models that perform abysmally with direct IO.
>
> There are plenty of threads in the ML archives about this topic.
>
> Christian
>
> > It's a big gap here, anyone can give me some suggestion here?
> >
> > ceph version: hammer(0.94.3)
> > kernel: 3.10
> >
> >
> >
> > hzwuli...@gmail.com
>
>
> --
> Christian Balzer        Network/Systems Engineer
> ch...@gol.com   Global OnLine Japan/Fusion Communications
> http://www.gol.com/
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Does SSD Journal improve the performance?

2015-10-14 Thread Christian Balzer

Hello,

Firstly, this is clearly a ceph-users question, don't cross post to
ceph-devel.

On Thu, 15 Oct 2015 09:29:03 +0800 hzwuli...@gmail.com wrote:

> Hi, 
> 
> An SSD journal should certainly improve IOPS performance. But
> unfortunately that is not what I see in my test.
> 
> I have two pools with the same number of osds:
> pool1, ssdj_sas:
> 9 osd servers, 8 OSDs(SAS) on every server
> Journal on SSD, one SSD disk for 4 SAS disks.
> 
Details. All of them.
Specific HW (CPU, RAM, etc.) of these servers and the network, what type of
SSDs, HDDs, controllers.
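
Concretely, something like the following covers most of it (sdX standing in
for each data device):

lscpu                                 # CPU model and core count
free -g                               # installed RAM
lsblk -o NAME,MODEL,ROTA,SIZE         # drives and their models
smartctl -i /dev/sdX                  # exact drive model and firmware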

> pool 2, sas:
> 9 osd servers, 8 OSDs(SAS) on every server
> Journal on the SAS disk itself.
> 
Is the HW identical to pool1 except for the journal placement?

> I use rbd to create a volume in pool1 and pool2 separately and use fio
> to test the random-write IOPS. Here is the fio configuration:
> 
> rw=randwrite
> ioengine=libaio
> direct=1
> iodepth=128
> bs=4k
> numjobs=1
> 
> The result I got is:
> volume in pool1, about 5k
> volume in pool2, about 12k
> 
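(For reference, one common way to drive a job file like the above against an
RBD volume is to map the image and point fio at the resulting block device;
pool1/volume1 is only a placeholder image name here:

rbd map pool1/volume1                 # maps the image to e.g. /dev/rbd0
fio --name=rbd-randwrite --filename=/dev/rbd0 --ioengine=libaio \
    --direct=1 --rw=randwrite --bs=4k --iodepth=128 --numjobs=1 \
    --runtime=60 --time_based --group_reporting

fio also ships an rbd ioengine that drives the cluster through librbd and
bypasses the kernel client entirely.)
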
Now this job will stress the CPUs quite a bit (which you should be able to
see with atop or the like).

However, if the HW is identical in both pools, your SSDs may be among the
models that perform abysmally with direct IO.

There are plenty of threads in the ML archives about this topic.
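
(For reference, the check those threads usually boil down to is a small
synchronous-write fio run against the journal device itself, roughly like
this; sdX is a placeholder and the run destroys any data on the device:

fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based --group_reporting

Good journal SSDs typically sustain such synchronous 4k writes in the tens of
thousands of IOPS, while many consumer drives drop to a few hundred, which
then caps the whole OSD.)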
 
Christian

> It's a big gap here, anyone can give me some suggestion here?
> 
> ceph version: hammer(0.94.3)
> kernel: 3.10
> 
> 
> 
> hzwuli...@gmail.com


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com