Re: [ceph-users] large reads become 512 kbyte reads on qemu-kvm rbd

2014-12-01 Thread Ilya Dryomov
On Mon, Dec 1, 2014 at 1:09 PM, Dan Van Der Ster
 wrote:
> Hi Ilya,
>
>> On 28 Nov 2014, at 17:56, Ilya Dryomov  wrote:
>>
>> On Fri, Nov 28, 2014 at 5:46 PM, Dan Van Der Ster
>>  wrote:
>>> Hi Andrei,
>>> Yes, I’m testing from within the guest.
>>>
>>> Here is an example. First, I do 2MB reads when the max_sectors_kb=512, and
>>> we see the reads are split into 4. (fio sees 25 iops, though iostat reports
>>> 100 smaller iops):
>>>
>>> # echo 512 >  /sys/block/vdb/queue/max_sectors_kb  # this is the default
>>> # fio --readonly --name /dev/vdb --rw=read --size=1G  --ioengine=libaio
>>> --direct=1 --runtime=10s --blocksize=2m
>>> /dev/vdb: (g=0): rw=read, bs=2M-2M/2M-2M/2M-2M, ioengine=libaio, iodepth=1
>>> fio-2.0.13
>>> Starting 1 process
>>> Jobs: 1 (f=1): [R] [100.0% done] [51200K/0K/0K /s] [25 /0 /0  iops] [eta
>>> 00m:00s]
>>>
>>> meanwhile iostat is reporting 100 iops of average size 1024 sectors (i.e.
>>> 512kB):
>>>
>>> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
>>> vdb               0.00     0.00  100.00    0.00    50.00     0.00  1024.00     3.02   30.25  10.00 100.00
>>>
>>>
>>>
>>> Now increase the max_sectors_kb to 4MB, and the IOs are no longer split:
>>>
>>> # echo 4096 >  /sys/block/vdb/queue/max_sectors_kb
>>> # fio --readonly --name /dev/vdb --rw=read --size=1G  --ioengine=libaio
>>> --direct=1 --runtime=10s --blocksize=2m
>>> /dev/vdb: (g=0): rw=read, bs=2M-2M/2M-2M/2M-2M, ioengine=libaio, iodepth=1
>>> fio-2.0.13
>>> Starting 1 process
>>> Jobs: 1 (f=1): [R] [100.0% done] [200.0M/0K/0K /s] [100 /0 /0  iops] [eta
>>> 00m:00s]
>>>
>>> iostat reports 100 iops, 4096 sectors each read (i.e. 2MB):
>>>
>>> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
>>> vdb             300.00     0.00  100.00    0.00   200.00     0.00  4096.00     0.99    9.94   9.94  99.40
>>
>> We set the hard request size limit to rbd object size (4M typically)
>>
>>    blk_queue_max_hw_sectors(q, segment_size / SECTOR_SIZE);
>>
>
> Are you referring to librbd or krbd? My observations are limited to librbd at 
> the moment. (I didn’t try this on krbd).

Yes, I was referring to krbd.  But it looks like that patch from
Christoph will change this for qemu+librbd as well - an artificial soft
limit imposed by the VM kernel will disappear.  CC'ing Josh.

>
>> but block layer then sets the soft limit for fs requests to 512K
>>
>>   BLK_DEF_MAX_SECTORS  = 1024,
>>
>>   limits->max_sectors = min_t(unsigned int, max_hw_sectors,
>>   BLK_DEF_MAX_SECTORS);
>>
>> which you are supposed to change on a per-device basis via sysfs.  We
>> could probably raise the soft limit to rbd object size by default as
>> well - I don't see any harm in that.
>>
>
> Indeed, there is this patch, which was being targeted for 3.19:
>
> https://lkml.org/lkml/2014/9/6/123

Oh good, I was just about to send a patch for krbd.

Thanks,

Ilya


Re: [ceph-users] large reads become 512 kbyte reads on qemu-kvm rbd

2014-12-01 Thread Dan Van Der Ster
Hi Ilya,

> On 28 Nov 2014, at 17:56, Ilya Dryomov  wrote:
> 
> On Fri, Nov 28, 2014 at 5:46 PM, Dan Van Der Ster
>  wrote:
>> Hi Andrei,
>> Yes, I’m testing from within the guest.
>> 
>> Here is an example. First, I do 2MB reads when the max_sectors_kb=512, and
>> we see the reads are split into 4. (fio sees 25 iops, though iostat reports
>> 100 smaller iops):
>> 
>> # echo 512 >  /sys/block/vdb/queue/max_sectors_kb  # this is the default
>> # fio --readonly --name /dev/vdb --rw=read --size=1G  --ioengine=libaio
>> --direct=1 --runtime=10s --blocksize=2m
>> /dev/vdb: (g=0): rw=read, bs=2M-2M/2M-2M/2M-2M, ioengine=libaio, iodepth=1
>> fio-2.0.13
>> Starting 1 process
>> Jobs: 1 (f=1): [R] [100.0% done] [51200K/0K/0K /s] [25 /0 /0  iops] [eta
>> 00m:00s]
>> 
>> meanwhile iostat is reporting 100 iops of average size 1024 sectors (i.e.
>> 512kB):
>> 
>> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
>> vdb               0.00     0.00  100.00    0.00    50.00     0.00  1024.00     3.02   30.25  10.00 100.00
>> 
>> 
>> 
>> Now increase the max_sectors_kb to 4MB, and the IOs are no longer split:
>> 
>> # echo 4096 >  /sys/block/vdb/queue/max_sectors_kb
>> # fio --readonly --name /dev/vdb --rw=read --size=1G  --ioengine=libaio
>> --direct=1 --runtime=10s --blocksize=2m
>> /dev/vdb: (g=0): rw=read, bs=2M-2M/2M-2M/2M-2M, ioengine=libaio, iodepth=1
>> fio-2.0.13
>> Starting 1 process
>> Jobs: 1 (f=1): [R] [100.0% done] [200.0M/0K/0K /s] [100 /0 /0  iops] [eta
>> 00m:00s]
>> 
>> iostat reports 100 iops, 4096 sectors each read (i.e. 2MB):
>> 
>> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
>> vdb             300.00     0.00  100.00    0.00   200.00     0.00  4096.00     0.99    9.94   9.94  99.40
> 
> We set the hard request size limit to rbd object size (4M typically)
> 
>blk_queue_max_hw_sectors(q, segment_size / SECTOR_SIZE);
> 

Are you referring to librbd or krbd? My observations are limited to librbd at 
the moment. (I didn’t try this on krbd).

> but block layer then sets the soft limit for fs requests to 512K
> 
>   BLK_DEF_MAX_SECTORS  = 1024,
> 
>   limits->max_sectors = min_t(unsigned int, max_hw_sectors,
>   BLK_DEF_MAX_SECTORS);
> 
> which you are supposed to change on a per-device basis via sysfs.  We
> could probably raise the soft limit to rbd object size by default as
> well - I don't see any harm in that.
> 

Indeed, there is this patch, which was being targeted for 3.19:

https://lkml.org/lkml/2014/9/6/123

Cheers, Dan


Re: [ceph-users] large reads become 512 kbyte reads on qemu-kvm rbd

2014-11-28 Thread Lindsay Mathieson
On Fri, 28 Nov 2014 08:56:24 PM Ilya Dryomov wrote:
> which you are supposed to change on a per-device basis via sysfs.


Is there a way to do this for Windows VMs?
-- 
Lindsay



Re: [ceph-users] large reads become 512 kbyte reads on qemu-kvm rbd

2014-11-28 Thread Ilya Dryomov
On Fri, Nov 28, 2014 at 5:46 PM, Dan Van Der Ster
 wrote:
> Hi Andrei,
> Yes, I’m testing from within the guest.
>
> Here is an example. First, I do 2MB reads when the max_sectors_kb=512, and
> we see the reads are split into 4. (fio sees 25 iops, though iostat reports
> 100 smaller iops):
>
> # echo 512 >  /sys/block/vdb/queue/max_sectors_kb  # this is the default
> # fio --readonly --name /dev/vdb --rw=read --size=1G  --ioengine=libaio
> --direct=1 --runtime=10s --blocksize=2m
> /dev/vdb: (g=0): rw=read, bs=2M-2M/2M-2M/2M-2M, ioengine=libaio, iodepth=1
> fio-2.0.13
> Starting 1 process
> Jobs: 1 (f=1): [R] [100.0% done] [51200K/0K/0K /s] [25 /0 /0  iops] [eta
> 00m:00s]
>
> meanwhile iostat is reporting 100 iops of average size 1024 sectors (i.e.
> 512kB):
>
> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
> vdb               0.00     0.00  100.00    0.00    50.00     0.00  1024.00     3.02   30.25  10.00 100.00
>
>
>
> Now increase the max_sectors_kb to 4MB, and the IOs are no longer split:
>
> # echo 4096 >  /sys/block/vdb/queue/max_sectors_kb
> # fio --readonly --name /dev/vdb --rw=read --size=1G  --ioengine=libaio
> --direct=1 --runtime=10s --blocksize=2m
> /dev/vdb: (g=0): rw=read, bs=2M-2M/2M-2M/2M-2M, ioengine=libaio, iodepth=1
> fio-2.0.13
> Starting 1 process
> Jobs: 1 (f=1): [R] [100.0% done] [200.0M/0K/0K /s] [100 /0 /0  iops] [eta
> 00m:00s]
>
> iostat reports 100 iops, 4096 sectors each read (i.e. 2MB):
>
> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
> vdb             300.00     0.00  100.00    0.00   200.00     0.00  4096.00     0.99    9.94   9.94  99.40

We set the hard request size limit to rbd object size (4M typically)

blk_queue_max_hw_sectors(q, segment_size / SECTOR_SIZE);

but block layer then sets the soft limit for fs requests to 512K

   BLK_DEF_MAX_SECTORS  = 1024,

   limits->max_sectors = min_t(unsigned int, max_hw_sectors,
   BLK_DEF_MAX_SECTORS);

which you are supposed to change on a per-device basis via sysfs.  We
could probably raise the soft limit to rbd object size by default as
well - I don't see any harm in that.
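
For illustration, both limits are visible from inside the guest and the soft one is writable (a minimal sketch, assuming the device shows up as vdb as in the examples above):

# cat /sys/block/vdb/queue/max_hw_sectors_kb       # hard limit from blk_queue_max_hw_sectors(), in KB
# cat /sys/block/vdb/queue/max_sectors_kb          # soft limit for fs requests, defaults to 512
# echo 4096 > /sys/block/vdb/queue/max_sectors_kb  # raise the soft limit to the 4M object size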

Thanks,

Ilya


Re: [ceph-users] large reads become 512 kbyte reads on qemu-kvm rbd

2014-11-28 Thread Dan Van Der Ster
Hi Andrei,
Yes, I’m testing from within the guest.

Here is an example. First, I do 2MB reads when the max_sectors_kb=512, and we 
see the reads are split into 4. (fio sees 25 iops, though iostat reports 100 
smaller iops):

# echo 512 >  /sys/block/vdb/queue/max_sectors_kb  # this is the default
# fio --readonly --name /dev/vdb --rw=read --size=1G  --ioengine=libaio 
--direct=1 --runtime=10s --blocksize=2m
/dev/vdb: (g=0): rw=read, bs=2M-2M/2M-2M/2M-2M, ioengine=libaio, iodepth=1
fio-2.0.13
Starting 1 process
Jobs: 1 (f=1): [R] [100.0% done] [51200K/0K/0K /s] [25 /0 /0  iops] [eta 
00m:00s]

meanwhile iostat is reporting 100 iops of average size 1024 sectors (i.e. 
512kB):

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
vdb               0.00     0.00  100.00    0.00    50.00     0.00  1024.00     3.02   30.25  10.00 100.00



Now increase the max_sectors_kb to 4MB, and the IOs are no longer split:

# echo 4096 >  /sys/block/vdb/queue/max_sectors_kb
# fio --readonly --name /dev/vdb --rw=read --size=1G  --ioengine=libaio 
--direct=1 --runtime=10s --blocksize=2m
/dev/vdb: (g=0): rw=read, bs=2M-2M/2M-2M/2M-2M, ioengine=libaio, iodepth=1
fio-2.0.13
Starting 1 process
Jobs: 1 (f=1): [R] [100.0% done] [200.0M/0K/0K /s] [100 /0 /0  iops] [eta 
00m:00s]

iostat reports 100 iops, 4096 sectors each read (i.e. 2MB):

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
vdb             300.00     0.00  100.00    0.00   200.00     0.00  4096.00     0.99    9.94   9.94  99.40
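
For reference, extended per-device statistics like the above come from an invocation along these lines (the exact command isn't shown here, so treat it as an assumption):

# iostat -x -m 1   # extended statistics in MB/s for all devices, refreshed every second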

Cheers, Dan


On 28 Nov 2014, at 15:28, Andrei Mikhailovsky <and...@arhont.com> wrote:

Dan, are you setting this on the guest vm side? Did you run some tests to see 
if this impacts performance? Like small block size performance, etc?

Cheers




From: "Dan Van Der Ster" 
mailto:daniel.vanders...@cern.ch>>
To: "ceph-users" mailto:ceph-users@lists.ceph.com>>
Sent: Friday, 28 November, 2014 1:33:20 PM
Subject: Re: [ceph-users] large reads become 512 kbyte reads on qemu-kvm rbd

Hi,
After some more tests we’ve found that max_sectors_kb is the reason for 
splitting large IOs.
We increased it to 4MB:
   echo 4096 > /sys/block/vdb/queue/max_sectors_kb
and now fio/iostat are showing reads up to 4MB are getting through to the block 
device unsplit.

We use 4MB to match the size of the underlying RBD objects. I can’t think of a 
reason to split IOs smaller than the RBD objects -- with a small max_sectors_kb 
the client would use 8 IOs to read a single object.

Does anyone know of a reason that max_sectors_kb should not be set to the RBD 
object size? Is there any udev rule or similar that could set max_sectors_kb 
when a RBD device is attached?

Cheers, Dan


On 27 Nov 2014, at 20:29, Dan Van Der Ster <daniel.vanders...@cern.ch> wrote:

Oops, I was off by a factor of 1000 in my original subject. We actually have 4M 
and 8M reads being split into 100 512kB reads per second. So perhaps these are 
limiting:
# cat /sys/block/vdb/queue/max_sectors_kb
512
# cat /sys/block/vdb/queue/read_ahead_kb
512
Questions below remain.
Cheers, Dan
On 27 Nov 2014 18:26, Dan Van Der Ster <daniel.vanders...@cern.ch> wrote:
Hi all,
We throttle (with qemu-kvm) rbd devices to 100 w/s and 100 r/s (and 80MB/s 
write and read).
With fio we cannot exceed 51.2MB/s sequential or random reads, no matter the 
reading block size. (But with large writes we can achieve 80MB/s).

I just realised that the VM subsystem is probably splitting large reads into 512
byte reads, following at least one of:

# cat /sys/block/vdb/queue/hw_sector_size
512
# cat /sys/block/vdb/queue/minimum_io_size
512
# cat /sys/block/vdb/queue/optimal_io_size
0

vdb is an RBD device coming over librbd, with rbd cache=true and mounted like 
this:

  /dev/vdb on /vicepa type xfs (rw)

Did anyone observe this before?

Is there a kernel setting to stop splitting reads like that? Or a way to change
the io_sizes reported by RBD to the kernel?

(I found a similar thread on the lvm mailing list, but lvm shouldn’t be 
involved here.)

All components here are running latest dumpling. Client VM is running CentOS 
6.6.

Cheers, Dan


Re: [ceph-users] large reads become 512 kbyte reads on qemu-kvm rbd

2014-11-28 Thread Andrei Mikhailovsky
Dan, are you setting this on the guest vm side? Did you run some tests to see 
if this impacts performance? Like small block size performance, etc? 

Cheers 

----- Original Message -----

> From: "Dan Van Der Ster" 
> To: "ceph-users" 
> Sent: Friday, 28 November, 2014 1:33:20 PM
> Subject: Re: [ceph-users] large reads become 512 kbyte reads on
> qemu-kvm rbd

> Hi,
> After some more tests we’ve found that max_sectors_kb is the reason
> for splitting large IOs.
> We increased it to 4MB:
> echo 4096 > /sys/block/vdb/queue/max_sectors_kb
> and now fio/iostat are showing reads up to 4MB are getting through to
> the block device unsplit.

> We use 4MB to match the size of the underlying RBD objects. I can’t
> think of a reason to split IOs smaller than the RBD objects -- with
> a small max_sectors_kb the client would use 8 IOs to read a single
> object.

> Does anyone know of a reason that max_sectors_kb should not be set to
> the RBD object size? Is there any udev rule or similar that could
> set max_sectors_kb when a RBD device is attached?

> Cheers, Dan

> > On 27 Nov 2014, at 20:29, Dan Van Der Ster <daniel.vanders...@cern.ch> wrote:
> 

> > Oops, I was off by a factor of 1000 in my original subject. We
> > actually have 4M and 8M reads being split into 100 512kB reads per
> > second. So perhaps these are limiting:
> 
> > # cat /sys/block/vdb/queue/max_sectors_kb
> 
> > 512
> 
> > # cat /sys/block/vdb/queue/read_ahead_kb
> 
> > 512
> 
> > Questions below remain.
> 
> > Cheers, Dan
> 
> > On 27 Nov 2014 18:26, Dan Van Der Ster <daniel.vanders...@cern.ch> wrote:
> 

> > > Hi all,
> > 
> 
> > > We throttle (with qemu-kvm) rbd devices to 100 w/s and 100 r/s
> > > (and
> > > 80MB/s write and read).
> > 
> 
> > > With fio we cannot exceed 51.2MB/s sequential or random reads, no
> > > matter the reading block size. (But with large writes we can
> > > achieve
> > > 80MB/s).
> > 
> 

> > > I just realised that the VM subsystem is probably splitting large
> > > reads into 512 byte reads, following at least one of:
> > 
> 

> > > # cat /sys/block/vdb/queue/hw_sector_size
> > 
> 
> > > 512
> > 
> 
> > > # cat /sys/block/vdb/queue/minimum_io_size
> > 
> 
> > > 512
> > 
> 
> > > # cat /sys/block/vdb/queue/optimal_io_size
> > 
> 
> > > 0
> > 
> 

> > > vdb is an RBD device coming over librbd, with rbd cache=true and
> > > mounted like this:
> > 
> 

> > > /dev/vdb on /vicepa type xfs (rw)
> > 
> 

> > > Did anyone observe this before?
> > 
> 

> > > Is there a kernel setting to stop splitting reads like that? Or a way
> > > to change the io_sizes reported by RBD to the kernel?
> > 
> 

> > > (I found a similar thread on the lvm mailing list, but lvm
> > > shouldn’t
> > > be involved here.)
> > 
> 

> > > All components here are running latest dumpling. Client VM is
> > > running
> > > CentOS 6.6.
> > 
> 

> > > Cheers, Dan
> > 
> 


Re: [ceph-users] large reads become 512 kbyte reads on qemu-kvm rbd

2014-11-28 Thread Dan Van Der Ster
Hi,
After some more tests we’ve found that max_sectors_kb is the reason for 
splitting large IOs.
We increased it to 4MB:
   echo 4096 > /sys/block/vdb/queue/max_sectors_kb
and now fio/iostat are showing reads up to 4MB are getting through to the block 
device unsplit.

We use 4MB to match the size of the underlying RBD objects. I can’t think of a 
reason to split IOs smaller than the RBD objects -- with a small max_sectors_kb 
the client would use 8 IOs to read a single object.

Does anyone know of a reason that max_sectors_kb should not be set to the RBD 
object size? Is there any udev rule or similar that could set max_sectors_kb 
when a RBD device is attached?
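
One possible sketch of such a rule (the file name and match key below are assumptions: KERNEL=="rbd*" only matches krbd-mapped devices, while a librbd-backed disk inside a guest shows up as a plain virtio device such as vdb and would need a different match):

# /etc/udev/rules.d/99-rbd-max-sectors.rules  (hypothetical)
ACTION=="add|change", KERNEL=="rbd*", SUBSYSTEM=="block", ATTR{queue/max_sectors_kb}="4096"

After installing the rule, udevadm control --reload-rules makes it apply to devices attached afterwards.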

Cheers, Dan


On 27 Nov 2014, at 20:29, Dan Van Der Ster <daniel.vanders...@cern.ch> wrote:


Oops, I was off by a factor of 1000 in my original subject. We actually have 4M 
and 8M reads being split into 100 512kB reads per second. So perhaps these are 
limiting:

# cat /sys/block/vdb/queue/max_sectors_kb
512
# cat /sys/block/vdb/queue/read_ahead_kb
512

Questions below remain.

Cheers, Dan

On 27 Nov 2014 18:26, Dan Van Der Ster <daniel.vanders...@cern.ch> wrote:
Hi all,
We throttle (with qemu-kvm) rbd devices to 100 w/s and 100 r/s (and 80MB/s 
write and read).
With fio we cannot exceed 51.2MB/s sequential or random reads, no matter the 
reading block size. (But with large writes we can achieve 80MB/s).

I just realised that the VM subsystem is probably splitting large reads into 512
byte reads, following at least one of:

# cat /sys/block/vdb/queue/hw_sector_size
512
# cat /sys/block/vdb/queue/minimum_io_size
512
# cat /sys/block/vdb/queue/optimal_io_size
0

vdb is an RBD device coming over librbd, with rbd cache=true and mounted like 
this:

  /dev/vdb on /vicepa type xfs (rw)

Did anyone observe this before?

Is there a kernel setting to stop splitting reads like that? Or a way to change
the io_sizes reported by RBD to the kernel?

(I found a similar thread on the lvm mailing list, but lvm shouldn’t be 
involved here.)

All components here are running latest dumpling. Client VM is running CentOS 
6.6.

Cheers, Dan


Re: [ceph-users] large reads become 512 kbyte reads on qemu-kvm rbd

2014-11-27 Thread Dan Van Der Ster
Oops, I was off by a factor of 1000 in my original subject. We actually have 4M 
and 8M reads being split into 100 512kB reads per second. So perhaps these are 
limiting:

# cat /sys/block/vdb/queue/max_sectors_kb
512
# cat /sys/block/vdb/queue/read_ahead_kb
512

Questions below remain.

Cheers, Dan

On 27 Nov 2014 18:26, Dan Van Der Ster  wrote:
Hi all,
We throttle (with qemu-kvm) rbd devices to 100 w/s and 100 r/s (and 80MB/s 
write and read).
With fio we cannot exceed 51.2MB/s sequential or random reads, no matter the 
reading block size. (But with large writes we can achieve 80MB/s).
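
The throttle itself is applied on the hypervisor side; as a sketch, assuming the guest is managed through libvirt and the disk target is vdb (neither is stated here), limits like these could be set with:

# virsh blkdeviotune <guest> vdb --read-iops-sec 100 --write-iops-sec 100 \
      --read-bytes-sec 83886080 --write-bytes-sec 83886080   # 80 MB/s = 83886080 bytes/s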

I just realised that the VM subsystem is probably splitting large reads into 512
byte reads, following at least one of:

# cat /sys/block/vdb/queue/hw_sector_size
512
# cat /sys/block/vdb/queue/minimum_io_size
512
# cat /sys/block/vdb/queue/optimal_io_size
0

vdb is an RBD device coming over librbd, with rbd cache=true and mounted like 
this:

  /dev/vdb on /vicepa type xfs (rw)

Did anyone observe this before?

Is there a kernel setting to stop splitting reads like that? Or a way to change
the io_sizes reported by RBD to the kernel?
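
On the second question: with librbd the guest only sees what the virtio-blk device advertises, so the reported I/O sizes would be changed on the hypervisor side rather than inside RBD. QEMU block devices have min_io_size and opt_io_size properties that end up in the guest's minimum_io_size/optimal_io_size; a hypothetical fragment of the qemu-kvm command line is below (the drive id, cache mode and the assumption that these properties take bytes should all be verified):

   -drive file=rbd:pool/image,format=raw,if=none,id=drive-vdb,cache=writeback \
   -device virtio-blk-pci,drive=drive-vdb,min_io_size=512,opt_io_size=4194304

Note that this only changes the hints the guest reports; it does not raise the max_sectors_kb soft limit.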

(I found a similar thread on the lvm mailing list, but lvm shouldn’t be 
involved here.)

All components here are running latest dumpling. Client VM is running CentOS 
6.6.

Cheers, Dan