Re: [ceph-users] rbd cache did not help improve performance

2016-03-01 Thread Josh Durgin

On 03/01/2016 10:03 PM, min fang wrote:

thanks, with your help I set the readahead parameter. What are the
cache parameters for the kernel rbd module?
For example:
1) What is the cache size?
2) Does it support writeback?
3) Is readahead disabled once a maximum number of bytes has been read
into the cache (similar to the concept of
"rbd_readahead_disable_after_bytes")?

thanks again.


The kernel rbd module does not implement any caching itself. If you're
doing I/O to a file on a filesystem on top of a kernel rbd device,
it will go through the usual kernel page cache (unless you use O_DIRECT
of course).
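
The page-cache effect is easy to see with dd (a quick sketch, assuming
a filesystem on the rbd device is mounted at /mnt/rbd and contains a
test file; the paths here are illustrative):

  # the first read populates the page cache; repeating it is served from RAM
  dd if=/mnt/rbd/testfile of=/dev/null bs=4k
  dd if=/mnt/rbd/testfile of=/dev/null bs=4k
  # O_DIRECT bypasses the page cache, so every read goes to the device
  dd if=/mnt/rbd/testfile of=/dev/null bs=4k iflag=direct
  # drop caches between runs for a fair comparison (as root)
  echo 3 > /proc/sys/vm/drop_caches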

Josh



2016-03-01 21:31 GMT+08:00 Adrien Gillard <gillard.adr...@gmail.com>:

As Tom stated, RBD cache only works if your client is using librbd
(KVM clients, for instance).
With the kernel RBD client, one of the parameters you can tune to
optimize sequential reads is increasing
/sys/class/block/rbd4/queue/read_ahead_kb
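
For example (a sketch; rbd4 is the device from the original post, and
4096 is just an illustrative value to be sized to the workload):

  # check the current readahead window, in KB
  cat /sys/class/block/rbd4/queue/read_ahead_kb
  # raise it to 4 MB for large sequential reads (as root)
  echo 4096 > /sys/class/block/rbd4/queue/read_ahead_kb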

Adrien



On Tue, Mar 1, 2016 at 12:48 PM, min fang <louisfang2...@gmail.com> wrote:

I can use the following command to change a parameter, as in the
example below, but I am not sure whether it will take effect:

  ceph --admin-daemon /var/run/ceph/ceph-mon.openpower-0.asok \
      config set rbd_readahead_disable_after_bytes 0
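
Note that this .asok belongs to a monitor daemon, and "config set"
through an admin socket only changes that one running daemon. librbd
options like this one apply to the client process that opens the
image, so they normally go in the [client] section of ceph.conf before
the client starts. To see what a daemon is actually running with (a
sketch, reusing the same socket path):

  ceph --admin-daemon /var/run/ceph/ceph-mon.openpower-0.asok \
      config show | grep rbd_readahead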

2016-03-01 15:07 GMT+08:00 Tom Christensen <pav...@gmail.com>:

If you are mapping the RBD with the kernel driver then you're not
using librbd, so these settings will have no effect, I believe. The
kernel driver does its own caching, but I don't believe there are any
settings to change its default behavior.


On Mon, Feb 29, 2016 at 9:36 PM, Shinobu Kinjo
<ski...@redhat.com> wrote:

You may want to set "ioengine=rbd", I guess.
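
For example, a minimal rbd-engine invocation might look like the
sketch below (assuming fio is built with rbd support; the pool and
image names are placeholders):

  ./fio -ioengine=rbd -clientname=admin -pool=rbd -rbdname=myimage \
      -direct=1 -iodepth=64 -rw=read -bs=4K -size=500G -runtime=300 \
      -group_reporting -name=mytest2

The rbd engine goes through librbd, so the rbd cache settings in
ceph.conf actually apply to the test.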

Cheers,

----- Original Message -----
From: "min fang" <louisfang2...@gmail.com>
To: "ceph-users" <ceph-users@lists.ceph.com>
Sent: Tuesday, March 1, 2016 1:28:54 PM
Subject: [ceph-users] rbd cache did not help improve performance

Hi, I set the following parameters in ceph.conf

[client]
rbd cache = true
rbd cache size = 25769803776
rbd readahead disable after bytes = 0


I map an rbd image to an rbd device, then run fio doing 4k reads with
this command:
./fio -filename=/dev/rbd4 -direct=1 -iodepth 64 -thread -rw=read
-ioengine=aio -bs=4K -size=500G -numjobs=32 -runtime=300
-group_reporting -name=mytest2

Comparing the results between rbd cache=false and the cache-enabled
configuration, I did not see any performance improvement from the
librbd cache.

Is my configuration wrong, or is it true that the ceph librbd cache
gives no benefit for 4k sequential reads?

thanks.



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

