On 14 July 2016 at 14:54, Maxim Khitrov <m...@mxcrypt.com> wrote:
> On Wed, Jul 13, 2016 at 11:47 PM, Tinker <ti...@openmailbox.org> wrote:
>> On 2016-07-14 07:27, Maxim Khitrov wrote:
>> [...]
>>>
>>> No, the tests are run sequentially. Write performance is measured
>>> first (20 MB/s), then rewrite (12 MB/s), then read (37 MB/s), then
>>> seeks (95 IOPS).
>>
>> Okay, you are on a totally weird platform. Or, on an OK platform with
>> a totally weird configuration.
>>
>> Or on an OK platform and configuration with a totally weird underlying
>> storage device.
>>
>> Are you on a magnetic disk, are you using a virtual block device or
>> virtual SATA connection, or some legacy interface like IDE?
>>
>> I get some feeling that your hardware + platform + configuration
>> crappiness factor is fairly much through the ceiling.
>
> Dell R720 and R620 servers, 10 gigabit Ethernet SAN, Dell MD3660i
> storage array, 1.2 TB 10K RPM SAS disks in RAID6. I don't think there
> is anything crappy or weird about the configuration. Test results for
> CentOS on the same system: 170 MB/s write, 112 MB/s rewrite, 341 MB/s
> read, 746 IOPS.
>
> I'm assuming that there are others running OpenBSD on Xen, so I was
> hoping that someone else could share either bonnie++ or even just dd
> performance numbers. That would help us figure out if there really is
> an anomaly in our setup.
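For anyone who wants to share the dd numbers Maxim asks for, a minimal sequential-throughput check might look like the following. This is a sketch, not a command from the thread: the file path, transfer size, and block size are illustrative, and dd only measures streaming throughput, not the seek/IOPS figures bonnie++ reports.

```shell
# Sequential write: create a 64 MiB test file (1024 blocks of 64 KiB).
# dd prints bytes transferred, elapsed time, and throughput on stderr.
dd if=/dev/zero of=/tmp/ddtest bs=64k count=1024

# Sequential read of the same file. Note: if the file is still in the
# buffer cache this will look unrealistically fast; read a file larger
# than RAM, or test after a fresh boot, for an honest number.
dd if=/tmp/ddtest of=/dev/null bs=64k

# Clean up the test file.
rm -f /tmp/ddtest
```

The `bs=64k` spelling is accepted by both OpenBSD and GNU dd; larger block sizes (e.g. 1 MiB) typically get closer to the device's peak streaming rate.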
Hi,

Since you have already discovered that we don't provide a driver for the
paravirtualized disk interface (blkfront), I'd say that most likely your
setup is just fine, but emulated pciide performance is subpar.

I plan to implement it, but right now the focus is on making networking,
and specifically interrupt delivery, reliable and efficient.

Regards,
Mike
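As a quick sanity check (these commands are my illustration, not from the thread), the guest's autoconf output shows which disk attachment is actually in use; with no blkfront driver, an OpenBSD Xen guest's disk can only attach through the emulated controller:

```shell
# Hypothetical check on an OpenBSD guest: an emulated IDE disk attaches
# as wd(4) on pciide(4). Lines like "pciide0 at pci0 ..." and
# "wd0 at pciide0 channel 0 drive 0" confirm the emulated path.
dmesg | grep -iE 'pciide|wd[0-9]|sd[0-9]'
```

If every disk hangs off pciide, the throughput ceiling Mike describes applies regardless of how fast the backing SAN is.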