[ovirt-users] Re: very very bad iscsi performance

2020-07-24 Thread Stefan Hajnoczi
On Thu, Jul 23, 2020 at 07:25:14AM -0700, Philip Brown wrote:
> Usually in that kind of situation, if you don't turn on sync-to-disk on every
> write, you get benchmarks that are artificially HIGH.
> Forcing O_DIRECT slows throughput down.
> Don't you think the results are bad enough already? :-}

The results that were posted do not show iSCSI performance in isolation
so it's hard to diagnose the problem.

The page cache is used when the O_DIRECT flag is absent. I/O is not sent
to the disk at all when it can be fulfilled from the page cache in
memory. Therefore the benchmark is not an accurate indicator of disk I/O
performance.

In addition to this, page cache behavior depends on various factors such
as available free memory, operating system implementation and version,
etc. This makes it hard to compare results across VMs, different
machines, etc.
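
For illustration, here is a minimal sketch of what a single O_DIRECT access
looks like at the syscall level. The device path /dev/sdX is only a
placeholder, and a real measurement should still use a proper benchmark tool
such as fio:

  /* Minimal O_DIRECT read sketch (illustrative only; /dev/sdX is a placeholder).
   * O_DIRECT requires the buffer, offset, and length to be aligned, typically
   * to the device's logical block size (512 bytes or 4 KiB).
   */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
      const size_t block_size = 4096;
      void *buf;
      int fd;
      ssize_t n;

      if (posix_memalign(&buf, block_size, block_size) != 0) {
          perror("posix_memalign");
          return 1;
      }

      fd = open("/dev/sdX", O_RDONLY | O_DIRECT); /* placeholder path */
      if (fd < 0) {
          perror("open");
          return 1;
      }

      /* This read is serviced by the device, not the page cache. */
      n = pread(fd, buf, block_size, 0);
      if (n < 0)
          perror("pread");
      else
          printf("read %zd bytes directly from the device\n", n);

      close(fd);
      free(buf);
      return 0;
  }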

Stefan




[ovirt-users] Re: [BULK] Re: very very bad iscsi performance

2020-07-23 Thread Stefan Hajnoczi
On Tue, Jul 21, 2020 at 07:14:53AM -0700, Philip Brown wrote:
> Thank you for the analysis. I have some further comments:
> 
> First off, filebench pre-writes the files before doing OLTP benchmarks, so I
> don't think thin provisioning is at play here.
> I will double check this, but if you don't hear otherwise, please presume that
> is the case :)
>
> Secondly, I am surprised at your recommendation to use virtio instead of
> virtio-scsi, since the writeup for virtio-scsi claims it has equivalent
> performance in general, and adds better scaling:
> https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi.html
>
> As far as your suggestion for using multiple disks for scaling higher:
> We are using an SSD. Isn't the whole advantage of using SSD drives that you
> can get the IOPS performance of 10 drives out of a single drive?
> We certainly get that using it natively, outside of a VM.
> So it would be nice to see performance approaching that within an oVirt VM.

Hi,
At first glance it appears that the filebench OLTP workload does not use
O_DIRECT, so this isn't a measurement of pure disk I/O performance:
https://github.com/filebench/filebench/blob/master/workloads/oltp.f

If you suspect that disk performance is the issue, please run a benchmark
that bypasses the page cache using O_DIRECT.

The fio setting is direct=1.

Here is an example fio job for 70% read/30% write 4KB random I/O:

  [global]
  filename=/path/to/device
  runtime=120
  ioengine=libaio
  direct=1
  ramp_time=10    # start measuring after warm-up time

  [read]
  readwrite=randrw
  rwmixread=70
  rwmixwrite=30
  iodepth=64
  blocksize=4k

(Based on 
https://blog.vmsplice.net/2017/11/common-disk-benchmarking-mistakes.html)
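
(To try this, save the job to a file with an arbitrary name such as
randrw.fio, point filename= at the block device you want to measure inside
the guest, and run it with "fio randrw.fio". Note that the write portion will
overwrite data on that device, so only run it against a disk you can scratch.)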

Stefan




[ovirt-users] Re: [OT] Major and minor numbers assigned to /dev/vdx virtio devices

2020-07-14 Thread Stefan Hajnoczi
On Mon, Jul 13, 2020 at 09:53:31PM +0300, Nir Soffer wrote:
> On Wed, Jul 1, 2020 at 5:55 PM Gianluca Cecchi
>  wrote:
> >
> > Hello,
> > isn't there an official major/minor numbering scheme for virtio disks?
> > Sometimes I see 251 major or 252 or so... what is the udev assignment logic?
> > Reading here:
> > https://www.kernel.org/doc/Documentation/admin-guide/devices.txt
> >
> >  240-254 block LOCAL/EXPERIMENTAL USE
> > Allocated for local/experimental use.  For devices not
> > assigned official numbers, these ranges should be
> > used in order to avoid conflicting with future assignments.
> >
> > it seems they are in the range of experimental ones, while for example Xen 
> > /dev/xvdx devices have their own static assignment (202 major)

No, the Linux virtio_blk driver does not use a static device major number.
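
Since the major number is assigned dynamically, anything that needs it should
look it up at runtime instead of hard-coding a value. A minimal sketch (the
/dev/vda path is just an example):

  /* Print the dynamically assigned major/minor numbers of a block device.
   * /dev/vda is only an example path; pass the device you care about.
   */
  #include <stdio.h>
  #include <sys/stat.h>
  #include <sys/sysmacros.h>

  int main(int argc, char **argv)
  {
      const char *path = argc > 1 ? argv[1] : "/dev/vda";
      struct stat st;

      if (stat(path, &st) != 0) {
          perror("stat");
          return 1;
      }

      printf("%s: major %u minor %u\n", path,
             major(st.st_rdev), minor(st.st_rdev));
      return 0;
  }

The same numbers also show up under "Block devices" in /proc/devices, where
the virtio_blk driver registers as "virtblk".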

Regarding udev, on my Fedora system
/usr/lib/udev/rules.d/60-persistent-storage.rules has rules like this:

KERNEL=="vd*[!0-9]", ATTRS{serial}=="?*", ENV{ID_SERIAL}="$attr{serial}", 
SYMLINK+="disk/by-id/virtio-$env{ID_SERIAL}"

The rules match on the "vd*" name. If you are writing udev rules you
could use the same approach.

Is there a specific problem faced when there is no static device major
number?

Stefan




[ovirt-users] Re: [Qemu-block] Re: Debugging ceph access

2018-06-04 Thread Stefan Hajnoczi
On Thu, May 31, 2018 at 11:02:01PM +0300, Nir Soffer wrote:
> On Thu, May 31, 2018 at 1:55 AM Bernhard Dick  wrote:
> 
> > Hi,
> >
> > I found the reason for my timeout problems: it is the version of librbd1
> > (which is 0.94.5) in conjunction with my Ceph test environment, which is
> > running the Luminous release.
> > When I install the librbd1 (and librados2) packages from the
> > centos-ceph-luminous repository (version 12.2.5) I'm able to start and
> > migrate VMs between the hosts.
> >
> 
> vdsm does not require librbd since qemu brings this dependency, and vdsm
> does not access ceph directly yet.
> 
> Maybe qemu should require a newer version of librbd?

Upstream QEMU builds against any librbd version that exports the
necessary APIs.

The choice of library versions is mostly up to distro package
maintainers.

Have you filed a bug against Ceph on the distro you are using?

Stefan

> >
> >Regards
> >  Bernhard
> >
> > Am 25.05.2018 um 17:08 schrieb Bernhard Dick:
> > > Hi,
> > >
> > > as you might already know, I am trying to use Ceph with OpenStack in an
> > > oVirt test environment. I'm able to create and remove volumes. But if I
> > > try to run a VM which contains a Ceph volume it stays in the "Wait for
> > > launch" state for a very long time. Then it goes into the "down" state
> > > again. The qemu log states:
> > >
> > > 2018-05-25T15:03:41.100401Z qemu-kvm: -drive
> > >
> > file=rbd:rbd/volume-3bec499e-d0d0-45ef-86ad-2c187cdb2811:id=cinder:auth_supported=cephx\;none:mon_host=[mon0]\:6789\;[mon1]\:6789,file.password-secret=scsi0-0-0-0-secret0,format=raw,if=none,id=drive-scsi0-0-0-0,serial=3bec499e-d0d0-45ef-86ad-2c187cdb2811,cache=none,werror=stop,rerror=stop,aio=threads:
> >
> > > error connecting: Connection timed out
> > >
> > > 2018-05-25 15:03:41.109+: shutting down, reason=failed
> > >
> > > On the monitor hosts I see traffic on the ceph-mon port, but not on
> > > other ports (the OSDs, for example). In the Ceph logs, however, I don't
> > > really see what happens.
> > > Do you have some tips on how to debug this problem?
> > >
> > >Regards
> > >  Bernhard




Re: [Users] EL5 support for VirtIO SCSI?

2013-11-14 Thread Stefan Hajnoczi
On Thu, Nov 14, 2013 at 02:39:33AM -0500, Ayal Baron wrote:
> - Original Message -
> > Hello Itamar.
> > The specific use case is a particular proprietary filesystem that needs to
> > see a SCSI device. It will do SCSI inquiry commands to verify suitability.
> > In talking to the devs - of the filesystem - there is no way around it. I'd
> > previously tried virtio-block - resulting in the /dev/vd* device - and the
> > filesystem would not work.
> >
> > From doing a bit of web searching it appears that KVM/QEMU supports (or did
> > support) an emulated LSI SCSI controller. My understanding is that the
> > various virtualization platforms will emulate a well-supported device (by
> > the guest OSes) so that drivers are not an issue. For example this should
> > allow a VM on VMware vSphere/vCenter to be exported to oVirt and have it
> > boot up. The potential for further optimising the guest is there by
> > installing oVirt/QEMU/KVM guest utils that then allow the guest OS to
> > understand the virtio nic and SCSI devices. The guest could then be shut
> > down, the nic and SCSI controller changed and the guest booted up again.
> > You can do the same thing in the VMware world by installing their guest
> > tools, shutting down the guest VM, then reconfiguring it with a vmxnet3 nic
> > and pvscsi SCSI adapter, then booting up again.
> > It does seem somewhat inconsistent in oVirt that we allow a choice of Intel
> > e1000 or virtio nics, but do not offer any choice with the SCSI adapter.
>
> virtio-scsi support was just recently added to oVirt to allow for SCSI
> passthrough and improved performance over virtio-blk.
> I believe the emulated SCSI device in qemu never matured enough but possibly
> Stefan (cc'd) can correct me here.

The only supported emulated SCSI HBA device is virtio-scsi.  It was Tech
Preview in RHEL 6.3 and became fully supported in RHEL 6.4.  virtio-scsi
is not available in RHEL 5.

Stefan