Re: [ceph-users] Increase queue_depth in KVM

2018-07-13 Thread Damian Dabrowski
Konstantin, thanks for the explanation. Unfortunately, upgrading qemu is
nearly impossible in my case.

So is there anything else I can do, or do I have to accept that write IOPS
will be roughly 8x lower inside KVM than outside KVM? :|

Fri, 13 Jul 2018 at 04:22 Konstantin Shalygin  wrote:

> > I've seen some people using 'num_queues' but I don't have this parameter
> > in my schema (libvirt version = 1.3.1, qemu version = 2.5.0)
>
>
> num-queues is available from qemu 2.7 [1]
>
>
> [1] https://wiki.qemu.org/ChangeLog/2.7
>
>
>
>
> k
>
>


Re: [ceph-users] Increase queue_depth in KVM

2018-07-12 Thread Konstantin Shalygin

I've seen some people using 'num_queues' but I don't have this parameter
in my schema (libvirt version = 1.3.1, qemu version = 2.5.0)



num-queues is available from qemu 2.7 [1]


[1] https://wiki.qemu.org/ChangeLog/2.7
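
For illustration, a hedged sketch of how num-queues can be passed when a qemu
>= 2.7 guest is started by hand; the pool/image name and drive id below are
made-up placeholders, and with libvirt the equivalent is a queues attribute on
the disk's <driver> element (which also needs a newer libvirt than 1.3.1):

# remaining VM options omitted
qemu-system-x86_64 \
  -drive file=rbd:volumes-nvme/test-image,format=raw,if=none,id=drive0,cache=none \
  -device virtio-blk-pci,drive=drive0,num-queues=4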




k



Re: [ceph-users] Increase queue_depth in KVM

2018-07-12 Thread Damian Dabrowski
Hello,

Steffen, thanks for your reply. Sorry, I was on holiday; now I'm back
and still digging into my problem. :(


I've read through countless Google results but can't find anything that
helps.

- tried all qemu drive IO (io=) and cache (cache=) modes; nothing came even
close to the results I'm getting outside KVM
- enabling blk-mq inside the KVM guest didn't help
- enabling iothreads didn't help (a rough sketch of both of these follows the
list)
- the 'queues' parameter in my libvirt schema can only be applied to
'virtio-serial'; I can't use it with virtio-scsi or virtio-blk
- I've seen some people using 'num_queues' but I don't have this parameter
in my schema (libvirt version = 1.3.1, qemu version = 2.5.0)
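
For context, a rough sketch of the kind of settings those two attempts usually
involve; the device name, iothread count and cache/io modes below are
placeholders, not values taken from this setup:

# guest: on kernels of this era, blk-mq for SCSI devices is opt-in and is
# usually enabled via kernel boot parameters
scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y
# once active, one directory per hardware queue shows up in sysfs
ls /sys/block/sda/mq/

<!-- host: iothreads are declared in the libvirt domain XML and referenced
     from a virtio-blk disk's <driver> element (the iothread attribute is
     generally only honoured for virtio-blk disks) -->
<domain type='kvm'>
  <iothreads>1</iothreads>
  ...
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native' iothread='1'/>
    ...
  </disk>
</domain>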



So, is there really no way to increase the queue depth of an rbd device in a
KVM domain, or any other way to achieve results similar to those obtained
outside KVM? :/

Tue, 26 Jun 2018 at 15:19 Steffen Winther Sørensen wrote:

>
>
> > On 26 Jun 2018, at 14.04, Damian Dabrowski  wrote:
> >
> > Hi Stefan, thanks for the reply.
> >
> > Unfortunately it didn't work.
> >
> > disk config (XML markup stripped by the archive; the surviving attributes
> > were discard='unmap', the rbd source
> > name='volumes-nvme/volume-ce247187-a625-49f1-bacd-fc03df215395' and the
> > serial ce247187-a625-49f1-bacd-fc03df215395):
> >
> >
> > Controller config (XML markup likewise stripped; only a PCI address
> > fragment, function='0x0', survived):
> >
> >
> > benchmark command: fio --randrepeat=1 --ioengine=libaio --direct=1
> --name=test --filename=test --bs=4k --iodepth=64 --size=1G
> --readwrite=randwrite --time_based --runtime=60
> --write_iops_log=write_results --numjobs=8
> >
> > And I'm still getting very low random write IOPS inside the KVM instance
> with 8 vCPUs (3-5k, compared to 20k+ outside KVM)
> >
> > Do you have any idea how to deal with it?
> What about trying with io='threads' and/or maybe cache='none', or swapping
> from virtio-scsi to blk-mq?
>
> Other people have had similar issues; try asking 'G':
>
> https://serverfault.com/questions/425607/kvm-guest-io-is-much-slower-than-host-io-is-that-normal
> https://wiki.mikejung.biz/KVM_/_Xen
>
> /Steffen
>
>


Re: [ceph-users] Increase queue_depth in KVM

2018-06-26 Thread Steffen Winther Sørensen


> On 26 Jun 2018, at 14.04, Damian Dabrowski  wrote:
> 
> Hi Stefan, thanks for the reply.
> 
> Unfortunately it didn't work.
> 
> disk config (XML markup stripped by the archive; the surviving attributes
> were discard='unmap', the rbd source
> name='volumes-nvme/volume-ce247187-a625-49f1-bacd-fc03df215395' and the
> serial ce247187-a625-49f1-bacd-fc03df215395):
> 
> 
> Controller config (XML markup likewise stripped; only a PCI address
> fragment, function='0x0', survived):
> 
> 
> 
> benchmark command: fio --randrepeat=1 --ioengine=libaio --direct=1 
> --name=test --filename=test --bs=4k --iodepth=64 --size=1G 
> --readwrite=randwrite --time_based --runtime=60 
> --write_iops_log=write_results --numjobs=8
> 
> And I'm still getting very low random write IOPS inside the KVM instance with
> 8 vCPUs (3-5k, compared to 20k+ outside KVM)
> 
> Do you have any idea how to deal with it?
What about trying with io='threads' and/or maybe cache='none', or swapping
from virtio-scsi to blk-mq?
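
For reference, those knobs sit on the disk's <driver> element in the libvirt
domain XML; a minimal, abbreviated sketch (everything except the driver line
is elided):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='threads' discard='unmap'/>
  ...
</disk>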

Other people have had similar issues; try asking 'G':
https://serverfault.com/questions/425607/kvm-guest-io-is-much-slower-than-host-io-is-that-normal
https://wiki.mikejung.biz/KVM_/_Xen

/Steffen



Re: [ceph-users] Increase queue_depth in KVM

2018-06-26 Thread Damian Dabrowski
Hi Stefan, thanks for the reply.

Unfortunately it didn't work.

disk config (XML markup stripped by the archive; only the serial
ce247187-a625-49f1-bacd-fc03df215395 survived here — the copies quoted in the
replies above additionally show discard='unmap' and the rbd source
name='volumes-nvme/volume-ce247187-a625-49f1-bacd-fc03df215395'):


Controller config (XML markup likewise stripped; the copies quoted in the
replies above show only a PCI address fragment, function='0x0'):



benchmark command: fio --randrepeat=1 --ioengine=libaio --direct=1
--name=test --filename=test --bs=4k --iodepth=64 --size=1G
--readwrite=randwrite --time_based --runtime=60
--write_iops_log=write_results --numjobs=8

And I'm still getting very low random write IOPS inside the KVM instance with
8 vCPUs (3-5k, compared to 20k+ outside KVM)

Do you have any idea how to deal with it?
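
For reference, a hedged sketch of what the stripped disk and controller
definitions above typically look like for an RBD-backed virtio-scsi volume;
the monitor host, auth secret, target device and PCI address below are
placeholders, not values from this message:

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none' discard='unmap'/>
  <auth username='cinder'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <source protocol='rbd' name='volumes-nvme/volume-ce247187-a625-49f1-bacd-fc03df215395'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <target dev='sda' bus='scsi'/>
  <serial>ce247187-a625-49f1-bacd-fc03df215395</serial>
</disk>

<controller type='scsi' index='0' model='virtio-scsi'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</controller>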


Tue, 26 Jun 2018 at 09:37 Stefan Kooman wrote:

> Quoting Damian Dabrowski (scoot...@gmail.com):
> > Hello,
> >
> > When I mount an rbd image with -o queue_depth=1024 I can see much
> > improvement, generally on writes (random write improvement from 3k IOPS on
> > standard queue_depth to 24k IOPS on queue_depth=1024).
> >
> > But is there any way to attach an rbd disk to a KVM instance with a custom
> > queue_depth? I can't find any information about it.
>
> Not specifically "queue depth", but if you use virtio-scsi, and have a
> VM with more than 1 vCPU, you can give each vCPU its own queue [1]:
>
> <controller type='scsi' index='0' model='virtio-scsi'>
>    <driver queues='N'/>
> </controller>
>
> Gr. Stefan
>
> [1]:
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-blockio-techniques
>
>
> --
> | BIT BV  http://www.bit.nl/  Kamer van Koophandel 09090351
> | GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
>


Re: [ceph-users] Increase queue_depth in KVM

2018-06-26 Thread Stefan Kooman
Quoting Damian Dabrowski (scoot...@gmail.com):
> Hello,
> 
> When I mount an rbd image with -o queue_depth=1024 I can see much improvement,
> generally on writes (random write improvement from 3k IOPS on standard
> queue_depth to 24k IOPS on queue_depth=1024).
> 
> But is there any way to attach an rbd disk to a KVM instance with a custom
> queue_depth? I can't find any information about it.

Not specifically "queue depth", but if you use virtio-scsi, and have a
VM with more than 1 vCPU, you can give each vCPU its own queue [1]:

<controller type='scsi' index='0' model='virtio-scsi'>
   <driver queues='N'/>
</controller>

Gr. Stefan

[1]: 
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-blockio-techniques


-- 
| BIT BV  http://www.bit.nl/  Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl


[ceph-users] Increase queue_depth in KVM

2018-06-25 Thread Damian Dabrowski
Hello,

When I mount an rbd image with -o queue_depth=1024 I can see much improvement,
generally on writes (random write improvement from 3k IOPS on the standard
queue_depth to 24k IOPS on queue_depth=1024).
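
For reference, a minimal sketch of mapping with krbd and the queue_depth map
option; the pool/image name below is a placeholder:

rbd map -o queue_depth=1024 volumes-nvme/test-image
# the resulting /dev/rbdX should advertise the larger queue:
cat /sys/block/rbd0/queue/nr_requests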

But is there any way to attach an rbd disk to a KVM instance with a custom
queue_depth? I can't find any information about it.

Thanks for any information.