Re: [ceph-users] Testing a node by fio - strange results to me

2017-01-22 Thread Ahmed Khuraidah
Hi Udo, thanks for the reply - I was already starting to think my message had
missed the list.
I'm not sure I understand correctly. Do you mean "rbd cache = true"? If yes,
then that is RBD client-side cache behavior, not something on the OSD side,
isn't it?
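(Just to be sure we mean the same thing: the client-side RBD cache is the
librbd setting in ceph.conf, along these lines - the values below are just the
documented defaults, not taken from my setup:

    [client]
    rbd cache = true
    rbd cache size = 33554432        # 32 MB, held in the client process
    rbd cache max dirty = 25165824   # writeback threshold

As far as I know this lives entirely on the client and should not apply to
CephFS reads at all.)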


Regards
Ahmed


On Sun, Jan 22, 2017 at 6:45 PM, Udo Lembke  wrote:

> Hi,
>
> I don't use MDS, but I think it's the same as with RBD - the data that has
> been read is cached on the OSD nodes.
>
> The 4 MB chunks of the 3G file fit completely in that cache; those of the
> 320G file do not.
>
>
> Udo
>
>
> On 18.01.2017 07:50, Ahmed Khuraidah wrote:
> > Hello community,
> >
> > I need your help to understand a little bit more about the current MDS
> > architecture.
> > I have created a one-node CephFS deployment and tried to test it with fio.
> > I used two file sizes, 3G and 320G. My question is why I get around 1k+
> > IOPS when performing random reads from the 3G file, compared to the
> > expected ~100 IOPS from the 320G file. Could somebody clarify where read
> > buffering/caching happens here and how to control it?
> >
> > A little bit about the setup: an Ubuntu 14.04 server running a Jewel-based
> > deployment - one MON, one MDS (default parameters, except mds_log = false)
> > and one OSD using a SATA drive (XFS) for data and an SSD drive for
> > journaling. No RAID controller and no pool tiering are used.
> >
> > Thanks
> >
> >
> >
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Testing a node by fio - strange results to me

2017-01-22 Thread Udo Lembke
Hi,

I don't use MDS, but I think it's the same as with RBD - the data that has
been read is cached on the OSD nodes.

The 4 MB chunks of the 3G file fit completely in that cache; those of the
320G file do not.
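If you want to verify that, one thing you could try (a rough sketch, assuming
the OSD runs on an ordinary Linux box and the cache in question is the kernel
page cache) is to drop the page cache on the OSD node between fio runs:

    # on the OSD node, as root: flush dirty pages, then drop the page cache
    sync
    echo 3 > /proc/sys/vm/drop_caches

If the 3G numbers then fall towards what you see with the 320G file, the extra
IOPS were coming from that cache.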


Udo


On 18.01.2017 07:50, Ahmed Khuraidah wrote:
> Hello community,
>
> I need your help to understand a little bit more about the current MDS
> architecture.
> I have created a one-node CephFS deployment and tried to test it with fio.
> I used two file sizes, 3G and 320G. My question is why I get around 1k+
> IOPS when performing random reads from the 3G file, compared to the
> expected ~100 IOPS from the 320G file. Could somebody clarify where read
> buffering/caching happens here and how to control it?
>
> A little bit about the setup: an Ubuntu 14.04 server running a Jewel-based
> deployment - one MON, one MDS (default parameters, except mds_log = false)
> and one OSD using a SATA drive (XFS) for data and an SSD drive for
> journaling. No RAID controller and no pool tiering are used.
>
> Thanks
>  
>
>

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Testing a node by fio - strange results to me (Ahmed Khuraidah)

2017-01-20 Thread Ahmed Khuraidah
Still looking forward to some good help here.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Testing a node by fio - strange results to me

2017-01-17 Thread Ahmed Khuraidah
Hello community,

I need your help to understand a little bit more about the current MDS
architecture.
I have created a one-node CephFS deployment and tried to test it with fio. I
used two file sizes, 3G and 320G. My question is why I get around 1k+ IOPS
when performing random reads from the 3G file, compared to the expected ~100
IOPS from the 320G file. Could somebody clarify where read buffering/caching
happens here and how to control it?
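Roughly, the fio runs look like this (illustrative only - the path and queue
depth are placeholders rather than my exact settings):

    fio --name=randread-3g --directory=/mnt/cephfs --size=3G \
        --rw=randread --bs=4k --ioengine=libaio --direct=1 \
        --iodepth=32 --runtime=60 --time_based --group_reporting

and the same job with --size=320G for the large-file case.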

A little bit about the setup: an Ubuntu 14.04 server running a Jewel-based
deployment - one MON, one MDS (default parameters, except mds_log = false)
and one OSD using a SATA drive (XFS) for data and an SSD drive for journaling.
No RAID controller and no pool tiering are used.

Thanks
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com