Re: [GLLUG] KVM Performance

2020-06-15 Thread Ken Smith via GLLUG



James Courtier-Dutton via GLLUG wrote:

On Fri, 12 Jun 2020 at 08:55, Tim Woodall via GLLUG
 wrote:

On Tue, 9 Jun 2020, Ken Smith via GLLUG wrote:


Hi All,

While in lockdown I decided to do some performance testing on KVM.


Interesting, I've been looking into this myself trying to improve
performance/reduce cpu usage.


Hi,

My experience with KVM is that it is pretty good, performance-wise,
with block storage, e.g. qcow2 files or actual block devices.
KVM appears very slow for USB devices.
Some of your tests appear to include USB in the path.
USB devices are also very slow when used with WINE (running Windows
apps on Linux).

Kind Regards

James

Not in my tests. Mine had two SATA 7200rpm 3TB disks.





Re: [GLLUG] KVM Performance

2020-06-13 Thread James Courtier-Dutton via GLLUG
On Fri, 12 Jun 2020 at 08:55, Tim Woodall via GLLUG
 wrote:
>
> On Tue, 9 Jun 2020, Ken Smith via GLLUG wrote:
>
> > Hi All,
> >
> > While in lockdown I decided to do some performance testing on KVM. I had
> > believed that passing a block device through to a guest rather than using a
> > QCOW2 file would get better performance. I wanted to see whether that was
> > true and indeed whether using iSCSI storage was any better/worse.
> >
>
> Interesting, I've been looking into this myself trying to improve
> performance/reduce cpu usage.
>

Hi,

My experience with KVM is that it is pretty good, performance-wise,
with block storage, e.g. qcow2 files or actual block devices.
KVM appears very slow for USB devices.
Some of your tests appear to include USB in the path.
USB devices are also very slow when used with WINE (running Windows
apps on Linux).

Kind Regards

James


Re: [GLLUG] KVM Performance

2020-06-12 Thread Tim Woodall via GLLUG

On Tue, 9 Jun 2020, Ken Smith via GLLUG wrote:


Hi All,

While in lockdown I decided to do some performance testing on KVM. I had 
believed that passing a block device through to a guest rather than using a 
QCOW2 file would get better performance. I wanted to see whether that was 
true and indeed whether using iSCSI storage was any better/worse.




Interesting, I've been looking into this myself trying to improve
performance/reduce cpu usage.

This is a random file I happened to have lying around:
 scp _usr.dmp localhost:/mnt/nobackup/
 _usr.dmp    100% 1990MB 149.9MB/s   00:13

Using nc (no encryption)
time cat _usr.dmp >/dev/tcp/::1/
real    0m5.617s
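
The listening end of that test isn't shown above; a minimal sketch of the
pair, assuming nc on the receiver and an arbitrary port (5000 here; the port
in the original commands was not given), might look like:

  # receiving end; exact flags depend on the netcat variant
  nc -l 5000 > /dev/null        # OpenBSD netcat
  # nc -l -p 5000 > /dev/null   # traditional/GNU netcat

  # sending end, using bash's /dev/tcp redirection as above
  time cat _usr.dmp >/dev/tcp/::1/5000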

When I copy it over the network (1Gbit) I get:
 scp _usr.dmp xen17:/dev/shm/
 _usr.dmp    100% 1990MB  55.5MB/s   00:35

time cat _usr.dmp >/dev/tcp/fe80::d250:99ff:fec1:5e59%usb0/
real    0m19.093s

(This is pretty close to the theoretical maximum for the network!)


Onto a virtual host that is running on xen17

#vm writing to /dev/zero
 time cat _usr.dmp >/dev/tcp/fe80::216:3eff:fee0:7253%usb0/
real    0m19.798s

#vm writing to an iscsi device (on the xen17 host)
 time cat _usr.dmp >/dev/tcp/fe80::216:3eff:fee0:7253%usb0/
real    0m40.941s

#using ssh:
scp _usr.dmp debootstrap17:/mnt/tmp/x
_usr.dmp    100% 1990MB  26.9MB/s   01:14

#And when the vm has the device mounted as a raw device, not via iscsi:
 time cat _usr.dmp >/dev/tcp/fe80::216:3eff:fee0:7253%usb0/
real    0m34.968s

And via SSH:
scp _usr.dmp debootstrap17:/mnt/tmp/x
_usr.dmp    100% 1990MB  30.1MB/s   01:06


In my particular case, using ssh to move files on the LAN is by far the
biggest hit, and ssh tends to be used for everything nowadays. I will
probably patch ssh at some point to allow the null cipher so that
encryption can be disabled in the .ssh/config file on a per-host basis.
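
Stock OpenSSH doesn't ship a null ("none") cipher, so short of patching, the
usual stopgap is to pick a cheaper cipher per host in ~/.ssh/config. A sketch
only; the cipher choice below is an example, not what is actually used here:

  # ~/.ssh/config
  Host xen17 debootstrap17
      Ciphers aes128-ctr,aes128-gcm@openssh.com
      Compression no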

xen17 is an Intel(R) Celeron(R) CPU J1900 @ 1.99GHz with 16GB RAM, and
the source machine was an Intel(R) Core(TM) i3-7100U CPU @ 2.40GHz.

Tim.


My test hardware is quite modest and this may have adversely affected what I
measured. The processor is an Intel Core2 6300 @ 1.86GHz with VT-x support.
It shows 3733 Bogomips at startup. There's 8GB RAM and an Intel 82801HB SATA
controller on a Gigabyte motherboard. The disks are two 3TB SATA 7200RPM
drives set up with a Raid 1 LVM Ext3 partition as well as other non-Raid
partitions to use for testing.


I used Fedora 32 as the KVM host and my testing was with Centos 8 as a guest.

On the host I got 60 MB/s write and 143 MB/s read on Raid1/LVM/Ext3. I
wrote/read 10GB files using dd, 10GB so as to overflow any memory-based
caching. Without LVM that changed to 80 MB/s write and 149 MB/s read.


I tried all kinds of VM setups: normal QCOW2, and pass-through of block
devices, both Raid/LVM and non-Raid/LVM. I consistently got around 14.5 MB/s
write and 16.5 MB/s read. Similar figures with iSCSI operating from both
file-based devices and block devices on the same host. The best I got by
tweaking the performance settings in KVM was a modest improvement to 15 MB/s
write and 17 MB/s read.


As a reference point I did a test on a configuration that has Centos 6 on 
Hyper-V on an HP ML350 with SATA 7200 rpm disks. I appreciate that's much 
more capable hardware, although SATA rather than SAS, but I measured 176 MB/s 
write and 331 MB/s read. That system is using a file on the underlying NTFS 
file system to provide a block device to the Centos 6 VM.


I also tried booting the C8 guest via iSCSI on a Centos6 Laptop, which worked 
fine on a 1G network. I measured 16.8 MB/s write and 23.1 MB/s read that way.


I noticed an increase in processor load while running my dd tests, although I
didn't take any actual measurements.


What to conclude? Is the hardware just not fast enough? Are newer processors 
better at abstracting the VM guests with less performance impact? What am I 
missing??


Any thoughts from virtualisation experts here most welcome.

Thanks

Ken







Re: [GLLUG] KVM Performance

2020-06-10 Thread John Hearns via GLLUG
https://www.architecting.it/blog/wekaio-matrix-performance-das/

On Tue, 9 Jun 2020 at 21:52, Ken Smith via GLLUG 
wrote:

> Hi All,
>
> While in lockdown I decided to do some performance testing on KVM. I had
> believed that passing a block device through to a guest rather than
> using a QCOW2 file would get better performance. I wanted to see whether
> that was true and indeed whether using iSCSI storage was any better/worse.
>
> My test hardware is quite modest and this may have adversely affected
> what I measured. The processor is an Intel Core2 6300 @ 1.86GHz with
> VT-x support. It shows 3733 Bogomips at startup. There's 8GB RAM and an
> Intel 82801HB SATA controller on a Gigabyte motherboard. The disks are
> two 3TB SATA 7200RPM drives set up with a Raid 1 LVM Ext3 partition as
> well as other non-Raid partitions to use for testing.
>
> I used Fedora 32 as the KVM host and my testing was with Centos 8 as a
> guest.
>
> On the host I got 60 MB/s write and 143 MB/s read on Raid1/LVM/Ext3. I
> wrote/read 10GB files using dd, 10GB so as to overflow any memory-based
> caching. Without LVM that changed to 80 MB/s write and 149 MB/s read.
>
> I tried all kinds of VM setups: normal QCOW2, and pass-through of block
> devices, both Raid/LVM and non-Raid/LVM. I consistently got around 14.5
> MB/s write and 16.5 MB/s read. Similar figures with iSCSI operating from
> both file-based devices and block devices on the same host. The best I
> got by tweaking the performance settings in KVM was a modest improvement
> to 15 MB/s write and 17 MB/s read.
>
> As a reference point I did a test on a configuration that has Centos 6
> on Hyper-V on an HP ML350 with SATA 7200 rpm disks. I appreciate that's
> much more capable hardware, although SATA rather than SAS, but I
> measured 176 MB/s write and 331 MB/s read. That system is using a file
> on the underlying NTFS file system to provide a block device to the
> Centos 6 VM.
>
> I also tried booting the C8 guest via iSCSI on a Centos6 Laptop, which
> worked fine on a 1G network. I measured 16.8 MB/s write and 23.1 MB/s
> read that way.
>
> I noticed an increase in processor load while running my dd tests,
> although I didn't take any actual measurements.
>
> What to conclude? Is the hardware just not fast enough? Are newer
> processors better at abstracting the VM guests with less performance
> impact? What am I missing??
>
> Any thoughts from virtualisation experts here most welcome.
>
> Thanks
>
> Ken
>
>
>

Re: [GLLUG] KVM Performance

2020-06-10 Thread Ken Smith via GLLUG




Mike Brodbelt via GLLUG wrote:

On 09/06/2020 22:52, Ken Smith via GLLUG wrote:




I can't help thinking that there must be something more behind a
drop from 60 MB/s to 15 MB/s in write performance.





Can't speak to your exact bottleneck here, but I think I'd try and 
eliminate your disk hardware from the game for a start, to see what 
happens there.


Try creating a null block device 
(https://www.kernel.org/doc/html/latest/block/null_blk.html), and then 
benchmark writes to that on the host and the guest. See what that does 
as a baseline


Mike

Interesting - thanks for the tip. I measured 16.3 MB/s write to a null 
block device on one of the guests.


Conclusion - that's as fast as that hardware can manage.

:-) Ken





Re: [GLLUG] KVM Performance

2020-06-09 Thread Mike Brodbelt via GLLUG

On 09/06/2020 22:52, Ken Smith via GLLUG wrote:

Indeed. So wouldn't passing a block device from the host through to the
guest minimise the 'components' that are 'in the way'?


I can't help thinking that there must be something more behind a drop
from 60 MB/s to 15 MB/s in write performance.


I'd value your thoughts :-) Ken


Can't speak to your exact bottleneck here, but I think I'd try and 
eliminate your disk hardware from the game for a start, to see what 
happens there.


Try creating a null block device 
(https://www.kernel.org/doc/html/latest/block/null_blk.html), and then 
benchmark writes to that on the host and the guest. See what that does 
as a baseline
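
A minimal sketch of that baseline test, assuming the null_blk module is
available and using the same dd approach as elsewhere in this thread
(/dev/nullb0 is the module's default device):

  # load the module; by default it creates /dev/nullb0
  modprobe null_blk

  # write benchmark on the host, bypassing the page cache
  dd if=/dev/zero of=/dev/nullb0 bs=1M count=10240 oflag=direct

  # then pass /dev/nullb0 through to the guest as a disk, repeat the same
  # dd command inside the guest, and compare the two figures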


Mike


Re: [GLLUG] KVM Performance

2020-06-09 Thread Ken Smith via GLLUG


Martin A. Brooks via GLLUG wrote:


The more levels of indirection you have between any 2 components in a 
system the slower stuff will move.


Indeed. So wouldn't passing a block device from the host through to the
guest minimise the 'components' that are 'in the way'?


I can't help thinking that there must be something more behind a drop
from 60 MB/s to 15 MB/s in write performance.


I'd value your thoughts :-) Ken





Re: [GLLUG] KVM Performance

2020-06-09 Thread Martin A. Brooks via GLLUG

On 2020-06-09 21:51, Ken Smith via GLLUG wrote:

What to conclude? Is the hardware just not fast enough? Are newer
processors better at abstracting the VM guests with less performance
impact? What am I missing??


The more levels of indirection you have between any two components in a
system, the slower stuff will move.


You can offset the indirection cost by making components or interconnects
faster than they would otherwise need to be if the indirection wasn't
there.


Faster, cheaper, better: pick any two.




[GLLUG] KVM Performance

2020-06-09 Thread Ken Smith via GLLUG

Hi All,

While in lockdown I decided to do some performance testing on KVM. I had 
believed that passing a block device through to a guest rather than 
using a QCOW2 file would get better performance. I wanted to see whether 
that was true and indeed whether using iSCSI storage was any better/worse.
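
For reference, a sketch of how a raw block device might be handed to a guest
with virtio and the cache settings that usually matter for throughput. The
guest name and device paths below are placeholders, not the actual test
setup:

  # create a guest straight onto a raw block device (names are placeholders)
  virt-install --name c8test --memory 2048 --vcpus 2 \
    --disk path=/dev/vg_test/c8_root,bus=virtio,cache=none,io=native \
    --cdrom /var/lib/libvirt/images/CentOS-8.iso

  # or attach a block device to an existing guest
  virsh attach-disk c8test /dev/vg_test/c8_data vdb --cache none --persistent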


My test hardware is quite modest and this may have adversely affected
what I measured. The processor is an Intel Core2 6300 @ 1.86GHz with
VT-x support. It shows 3733 Bogomips at startup. There's 8GB RAM and an
Intel 82801HB SATA controller on a Gigabyte motherboard. The disks are
two 3TB SATA 7200RPM drives set up with a Raid 1 LVM Ext3 partition as
well as other non-Raid partitions to use for testing.


I used Fedora 32 as the KVM host and my testing was with Centos 8 as a 
guest.


On the host I got 60 MB/s write and 143 MB/s read on Raid1/LVM/Ext3. I
wrote/read 10GB files using dd, 10GB so as to overflow any memory-based
caching. Without LVM that changed to 80 MB/s write and 149 MB/s read.
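
The exact dd invocation isn't given; a sketch of the kind of commands
implied, assuming synced/direct I/O to defeat caching and placeholder paths:

  # write test: 10GB, flushed to disk before dd reports its rate
  dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=10240 conv=fdatasync

  # read test: drop caches first so the file really comes off the disks
  sync; echo 3 > /proc/sys/vm/drop_caches
  dd if=/mnt/test/bigfile of=/dev/null bs=1M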


I tried all kinds of VM setups: normal QCOW2, and pass-through of block
devices, both Raid/LVM and non-Raid/LVM. I consistently got around 14.5 MB/s
write and 16.5 MB/s read. Similar figures with iSCSI operating from both
file-based devices and block devices on the same host. The best I got by
tweaking the performance settings in KVM was a modest improvement to 15
MB/s write and 17 MB/s read.


As a reference point I did a test on a configuration that has Centos 6 
on Hyper-V on an HP ML350 with SATA 7200 rpm disks. I appreciate that's 
much more capable hardware, although SATA rather than SAS, but I 
measured 176 MB/s write and 331 MB/s read. That system is using a file 
on the underlying NTFS file system to provide a block device to the 
Centos 6 VM.


I also tried booting the C8 guest via iSCSI on a Centos6 Laptop, which 
worked fine on a 1G network. I measured 16.8 MB/s write and 23.1 MB/s 
read that way.
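
A rough sketch of how such an iSCSI export might be set up, assuming
targetcli on the machine exporting the disk and open-iscsi on the initiator;
the IQNs, device path and address below are placeholders, and the ACL/auth
step is omitted:

  # on the machine exporting the disk
  targetcli /backstores/block create name=c8disk dev=/dev/vg_test/c8_root
  targetcli /iscsi create iqn.2020-06.uk.example:c8disk
  targetcli /iscsi/iqn.2020-06.uk.example:c8disk/tpg1/luns \
      create /backstores/block/c8disk

  # on the machine using it
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10
  iscsiadm -m node -T iqn.2020-06.uk.example:c8disk -p 192.168.1.10 --login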


I noticed an increase in processor load while running my dd tests,
although I didn't take any actual measurements.


What to conclude? Is the hardware just not fast enough? Are newer 
processors better at abstracting the VM guests with less performance 
impact? What am I missing??


Any thoughts from virtualisation experts here most welcome.

Thanks

Ken





--
GLLUG mailing list
GLLUG@mailman.lug.org.uk
https://mailman.lug.org.uk/mailman/listinfo/gllug