Re: Looking for Xen blkfront driver xbf(4) tests

2016-12-18 Thread mabi
Hi Mike,

Thanks for your explanations. So far I have not had any trouble with this 
specific domU with xbf enabled. I tried to run your shell script in order to 
find out the num-ring-pages property, but there must be a small issue 
with it, as I get the following output (after having removed the "#" that 
appeared to comment out the first line with the for loop):

hostctl: ioctl: No such file or directory
sd0 32

Regards,
M.



 Original Message 
Subject: Re: Looking for Xen blkfront driver xbf(4) tests
Local Time: December 13, 2016 8:46 PM
UTC Time: December 13, 2016 7:46 PM
From: m...@belopuhov.com
To: mabi <m...@protonmail.ch>
misc@openbsd.org <misc@openbsd.org>

On Sun, Dec 11, 2016 at 05:09 -0500, mabi wrote:
> Hi,
>
> Thanks for your efforts and making OpenBSD work even better on
> Xen. I use Xen for all types of virtualization and started only
> recently using OpenBSD 6.0 as domU. My current test setup is a 2
> node redundant cluster with Xen 4.4.1 and Debian 8 with DRBD for
> sync-replication and ZFS (RAIDZ-1) as storage with 3 Seagate
> enterprise 7.2k SATA (ST5000NM0024) disks on each node.
>
> So far so good: I managed to re-configure the current kernel and
> re-compile it with xbf enabled, and at reboot it immediately used
> the xbf driver and switched to using sd instead of wd. You will find
> the output of my dmesg below.
>
> For now the only thing I tried out is a quick "dd", as I was
> wondering how much more write throughput I could get on my guest's
> disk using xbf. As you can see below, I get around 81 MB/s, whereas I
> remember getting around 25 MB/s before using xbf. The read
> throughput didn't change much; if I remember correctly, I got around
> 60 MB/s in both cases, with and without xbf.
>
> $ dd if=/dev/zero of=file2.xbf bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes transferred in 12.277 secs (85405965 bytes/sec)
>
> Now is there anything else you would like to know/test or benchmarks
> you would like me to run? Keep in mind I am no dev but I am happy to
> help if it can make things progress with running OpenBSD even better
> on Xen.
>
> Cheers,
> Mabi
>

Hi,

Thanks for taking the time to test and report. There's nothing
special to test, just using the disk in a normal way is enough.
After a few reports from Nathanael Rensen several bugs have been
fixed.

I've looked through a bunch of Xen disk subsystem documents and
noted that one of the ways to improve performance is to use
persistent grants. However, it would first be nice to establish a
baseline, i.e. to see what kind of performance NetBSD, FreeBSD and
Linux guests get out of Blkfront in a VM with the same
configuration on the same host, compared to OpenBSD.

It's worth noting that the MAXPHYS value limiting the size of an
individual I/O transfer differs between systems.
Furthermore, the xbf(4) driver currently limits it to 11
page segments (44k), since we don't support indirect requests
that could potentially get us an extra 20k (MAXPHYS is 64k on
OpenBSD) but would put an additional tax on grant table entries.
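
As a quick sanity check of those numbers (assuming the usual 4 KiB
page size, nothing xbf-specific):

  # echo $((11 * 4096)) $((64 * 1024 - 11 * 4096))
  45056 20480

i.e. about 44k per transfer as it stands, with roughly 20k more to
gain once indirect requests are supported.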

The other point of interest is the number of outstanding
requests configured by the driver. If not limited by the
host system, xbf(4) attempts to use 256 requests, but smaller
EC2 instances limit that to just 32 requests, which can result
in a large performance difference. To learn the number of
configured outstanding requests, the num-ring-pages property
must be queried:

  # for xbf in $(hostctl device/vbd); do
      dev=$(dmesg | grep $xbf | cut -f 1 -d ' ')
      npages=$(hostctl device/vbd/$xbf/num-ring-pages)
      if [ $? -eq 0 ]; then
        echo $dev $((npages * 32))
      else
        echo $dev 32
      fi
    done

Output would look like so:

  sd0 256
  sd1 256
  cd0 256
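
For what it's worth, the script's arithmetic assumes 32 request slots
per ring page, so the 256 above corresponds to 8 ring pages negotiated
with the backend:

  # echo $((8 * 32))
  256

and a backend that doesn't expose the num-ring-pages property at all
(hostctl fails) ends up at the 32-request fallback the script prints.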

Cheers,
Mike



Re: Looking for Xen blkfront driver xbf(4) tests

2016-12-13 Thread Mike Belopuhov
On Sun, Dec 11, 2016 at 05:09 -0500, mabi wrote:
> Hi,
> 
> Thanks for your efforts and making OpenBSD work even better on
> Xen. I use Xen for all types of virtualization and started only
> recently using OpenBSD 6.0 as domU. My current test setup is a 2
> node redundant cluster with Xen 4.4.1 and Debian 8 with DRBD for
> sync-replication and ZFS (RAIDZ-1) as storage with 3 Seagate
> enterprise 7.2k SATA (ST5000NM0024) disks on each node.
> 
> So far so good: I managed to re-configure the current kernel and
> re-compile it with xbf enabled, and at reboot it immediately used
> the xbf driver and switched to using sd instead of wd. You will find
> the output of my dmesg below.
> 
> For now the only thing I tried out is a quick "dd", as I was
> wondering how much more write throughput I could get on my guest's
> disk using xbf. As you can see below, I get around 81 MB/s, whereas I
> remember getting around 25 MB/s before using xbf. The read
> throughput didn't change much; if I remember correctly, I got around
> 60 MB/s in both cases, with and without xbf.
> 
> $ dd if=/dev/zero of=file2.xbf bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes transferred in 12.277 secs (85405965 bytes/sec)
> 
> Now is there anything else you would like to know/test or benchmarks
> you would like me to run? Keep in mind I am no dev but I am happy to
> help if it can make things progress with running OpenBSD even better
> on Xen.
> 
> Cheers,
> Mabi
> 

Hi,

Thanks for taking the time to test and report.  There's nothing
special to test, just using the disk in a normal way is enough.
After a few reports from Nathanael Rensen several bugs have been
fixed.

I've looked through a bunch of Xen disk subsystem documents and
noted that one of the ways to improve performance is to use
persistent grants.  However, it would first be nice to establish a
baseline, i.e. to see what kind of performance NetBSD, FreeBSD and
Linux guests get out of Blkfront in a VM with the same
configuration on the same host, compared to OpenBSD.

It's worth noting that the MAXPHYS value limiting the size of an
individual I/O transfer differs between systems.  Furthermore, the
xbf(4) driver currently limits it to 11 page segments (44k), since
we don't support indirect requests that could potentially get us an
extra 20k (MAXPHYS is 64k on OpenBSD) but would put an additional
tax on grant table entries.

The other point of interest is the number of outstanding
requests configured by the driver.  If not limited by the
host system, xbf(4) attempts to use 256 requests, but smaller
EC2 instances limit that to just 32 requests, which can result
in a large performance difference.  To learn the number of
configured outstanding requests, the num-ring-pages property
must be queried:

  # for xbf in $(hostctl device/vbd); do
      dev=$(dmesg | grep $xbf | cut -f 1 -d ' ')
      npages=$(hostctl device/vbd/$xbf/num-ring-pages)
      if [ $? -eq 0 ]; then
        echo $dev $((npages * 32))
      else
        echo $dev 32
      fi
    done

Output would look like so:

  sd0 256
  sd1 256
  cd0 256

Cheers,
Mike



Re: Looking for Xen blkfront driver xbf(4) tests

2016-12-11 Thread mabi
0
vscsi0 at root
scsibus2 at vscsi0: 256 targets
softraid0 at root
scsibus3 at softraid0: 256 targets
root on sd0a (3f0ed8d22a8ed12f.a) swap on sd0b dump on sd0b

 Original Message 

Subject: Looking for Xen blkfront driver xbf(4) tests
Local Time: December 7, 2016 7:30 PM
UTC Time: December 7, 2016 6:30 PM
From: m...@belopuhov.com
To: t...@openbsd.org
misc@openbsd.org

Hi,

I've committed today a driver for the Xen paravirtualized disk
interface also known as Blkfront. Despite being pretty stable
for me so far, it's not enabled by default at the moment.
Therefore I'm looking for additional tests on different Xen
versions and EC2 instances to ensure robustness and performance
of the software.

To enable the driver, uncomment the xbf line in the kernel
config file (/sys/arch/amd64/conf/GENERIC) and re-configure and
re-build the kernel. The system will automatically switch all
available wd* disks to sd* but, unless you have opted out of
using disklabel UIDs in the /etc/fstab, there's no configuration
tweaking required.
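
For anyone who hasn't rebuilt a kernel before, the procedure is roughly
as follows (a sketch only; use GENERIC.MP instead of GENERIC on
multiprocessor setups, and the exact wording of the xbf line is whatever
the config file says):

  # cd /sys/arch/amd64/conf
  # vi GENERIC                     # uncomment the xbf line
  # config GENERIC
  # cd ../compile/GENERIC
  # make clean && make
  # make install
  # reboot

If /etc/fstab still uses device names (e.g. /dev/wd0a) rather than
disklabel UIDs, the entries would need to be adjusted: either renamed
from wd* to the new sd* devices, or switched to the DUID form. Using
the DUID from the dmesg in this thread as an example, a DUID-style root
line would look something like:

  3f0ed8d22a8ed12f.a / ffs rw 1 1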

Please report successes and failures. In case of a reproducible
issue, please enable the XEN_DEBUG define in /sys/dev/pv/xenvar.h,
rebuild your kernel and send me the relevant lines from the log (copied
from the console or /var/log/messages).
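
Once rebuilt with XEN_DEBUG and the issue reproduced, something along
these lines should pull out the relevant bits (adjust the pattern as
needed):

  # dmesg | grep -i xbf
  # grep -i xbf /var/log/messages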

Cheers,
Mike



Re: Looking for Xen blkfront driver xbf(4) tests

2016-12-07 Thread Mike Belopuhov
On Wed, Dec 07, 2016 at 19:30 +0100, Mike Belopuhov wrote:
> Hi,
> 
> I've committed today a driver for the Xen paravirtualized disk
> interface also known as Blkfront.  Despite being pretty stable
> for me so far, it's not enabled by default at the moment.
> Therefore I'm looking for additional tests on different Xen
> versions and EC2 instances to ensure robustness and performance
> of the software.
> 
> To enable the driver, uncomment the xbf line in the kernel
> config file (/sys/arch/amd64/conf/GENERIC) and re-configure and
> re-build the kernel.  The system will automatically switch all
> available wd* disks to sd* but, unless you have opted out of
> using disklabel UIDs in the /etc/fstab, there's no configuration
> tweaking required.
> 
> Please report successes and failures.  In case of a reproducible
> issue, please enable the XEN_DEBUG define in /sys/dev/pv/xenvar.h,
> rebuild your kernel and send me the relevant lines from the log (copied
> from the console or /var/log/messages).
> 
> Cheers,
> Mike

Reyk has endured some EC2 breakage and helped a great deal with
debugging.  As a result there have been some critical changes since
the initial check-in, so please make sure that xbf.c is at 1.5.
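
A quick way to check which revision your tree has is the CVS tag at the
top of the file (assuming the source lives under /usr/src, with /sys
pointing at it as usual):

  # grep '\$OpenBSD:' /sys/dev/pv/xbf.c | head -n 1

The revision in that tag should read 1.5 or newer.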

Cheers,
Mike



Looking for Xen blkfront driver xbf(4) tests

2016-12-07 Thread Mike Belopuhov
Hi,

I've committed today a driver for the Xen paravirtualized disk
interface also known as Blkfront.  Despite being pretty stable
for me so far, it's not enabled by default at the moment.
Therefore I'm looking for additional tests on different Xen
versions and EC2 instances to ensure robustness and performance
of the software.

To enable the driver, uncomment the xbf line in the kernel
config file (/sys/arch/amd64/conf/GENERIC) and re-configure and
re-build the kernel.  The system will automatically switch all
available wd* disks to sd* but, unless you have opted out of
using disklabel UIDs in the /etc/fstab, there's no configuration
tweaking required.

Please report successes and failures.  In case of a reproducible
issue, please enable the XEN_DEBUG define in /sys/dev/pv/xenvar.h,
rebuild your kernel and send me the relevant lines from the log (copied
from the console or /var/log/messages).

Cheers,
Mike