[ovirt-users] Re: very very bad iscsi performance

2020-07-23 Thread Paolo Bonzini
Getting meaningful results is more important than getting good results. If
the benchmark is not meaningful, it is not useful towards fixing the issue.

Did you try virtio-blk with direct LUN?

Paolo

On Thu, 23 Jul 2020 at 16:35, Philip Brown  wrote:

> I'm in the middle of a priority issue right now, so I can't take time out to
> rerun the benchmark, but...
> Usually in that kind of situation, if you don't turn on sync-to-disk on
> every write, you get benchmarks that are artificially HIGH.
> Forcing O_DIRECT slows throughput down.
> Don't you think the results are bad enough already? :-}
>
> - Original Message -
> From: "Stefan Hajnoczi" 
> To: "Philip Brown" 
> Cc: "Nir Soffer" , "users" ,
> "qemu-block" , "Paolo Bonzini" ,
> "Sergio Lopez Pascual" , "Mordechai Lehrer" <
> mleh...@redhat.com>, "Kevin Wolf" 
> Sent: Thursday, July 23, 2020 6:09:39 AM
> Subject: Re: [BULK]  Re: [ovirt-users] very very bad iscsi performance
>
>
> Hi,
> At first glance it appears that the filebench OLTP workload does not use
> O_DIRECT, so this isn't a measurement of pure disk I/O performance:
> https://github.com/filebench/filebench/blob/master/workloads/oltp.f
>
> If you suspect that disk performance is the issue please run a benchmark
> that bypasses the page cache using O_DIRECT.
>
> The fio setting is direct=1.
>
> Here is an example fio job for 70% read/30% write 4KB random I/O:
>
>   [global]
>   filename=/path/to/device
>   runtime=120
>   ioengine=libaio
>   direct=1
>   ramp_time=10   # start measuring after warm-up time
>
>   [read]
>   readwrite=randrw
>   rwmixread=70
>   rwmixwrite=30
>   iodepth=64
>   blocksize=4k
>
> (Based on
> https://blog.vmsplice.net/2017/11/common-disk-benchmarking-mistakes.html)
>
> Stefan
>
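
For reference, the same 70/30 random-I/O job can be expressed as a single
fio invocation; the options below mirror the job file parameters quoted
above, and the device path is a placeholder for the LUN or disk under test:

  fio --name=randrw-70-30 --filename=/path/to/device --runtime=120 \
      --ioengine=libaio --direct=1 --ramp_time=10 \
      --rw=randrw --rwmixread=70 --iodepth=64 --bs=4k

Note that a random-write test against the raw device overwrites data on it,
so it should only be pointed at a scratch LUN.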
>


[ovirt-users] Re: very very bad iscsi performance

2020-07-20 Thread Paolo Bonzini
On Mon, 20 Jul 2020 at 23:42, Nir Soffer  wrote:

> I think you will get the best performance using direct LUN.


Is direct LUN using the QEMU iSCSI initiator, or SG_IO, and if so is it
using /dev/sg or has that been fixed? SG_IO is definitely not going to be
the fastest, especially with /dev/sg.

> Storage domain is best if you want
> to use features provided by storage domain. If your important feature
> is performance, you want
> to connect the storage in the most direct way to your VM.
>

Agreed, but you want a virtio-blk device, not SG_IO; direct LUN with SG_IO
is only recommended if you want to do clustering and other stuff that
requires SCSI-level access.

Paolo
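
To illustrate the distinction with a hand-written sketch (not the exact
command line oVirt generates; the device path is a placeholder), the two
attachment modes look roughly like this at the QEMU level:

  # LUN exposed as a plain virtio-blk disk: shortest I/O path
  -drive file=/dev/mapper/lun,format=raw,if=none,id=drive0,cache=none,aio=native \
  -device virtio-blk-pci,drive=drive0

  # LUN passed through with SG_IO (scsi-block on a virtio-scsi bus):
  # the guest can issue real SCSI commands, at a performance cost
  -device virtio-scsi-pci,id=scsi0 \
  -drive file=/dev/mapper/lun,format=raw,if=none,id=lun0 \
  -device scsi-block,bus=scsi0.0,drive=lun0

The second form is what clustering and other SCSI-level use cases need; for
raw throughput the first one is preferable.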


> Mordechai, did we do any similar performance tests in our lab?
> Do you have example results?
>
> Nir
>
>


[ovirt-users] Re: [OT] Major and minor numbers assigned to /dev/vdx virtio devices

2020-07-14 Thread Paolo Bonzini
I think they are assigned at boot time but I would have to check the
sources and I am on vacation. :-)

Paolo
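
One way to check on a running guest without reading the sources: virtio-blk
registers its block major dynamically, so it appears in /proc/devices under
the name "virtblk", and the number simply depends on which majors were free
when the driver registered -- hence the 251 vs 252 variation:

  grep virtblk /proc/devices   # shows the dynamically assigned block major
  ls -l /dev/vda               # the same major appears on the device node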

On Mon, 13 Jul 2020 at 16:37, Sandro Bonazzola  wrote:

> +Paolo Bonzini  can you help here?
>
> On Wed, 1 Jul 2020 at 16:56, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> Hello,
>> isn't there an official major/minor numbering scheme for virtio disks?
>> Sometimes I see 251 major or 252 or so... what is the udev assignment
>> logic?
>> Reading here:
>> https://www.kernel.org/doc/Documentation/admin-guide/devices.txt
>>
>>  240-254 block   LOCAL/EXPERIMENTAL USE
>>  Allocated for local/experimental use.  For devices not
>>  assigned official numbers, these ranges should be
>>  used in order to avoid conflicting with future assignments.
>>
>> it seems they are in the range of experimental ones, while for example
>> Xen /dev/xvdx devices have their own static assignment (202 major)
>>
>> Thanks,
>> Gianluca
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://www.redhat.com/>
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
> <https://mojo.redhat.com/docs/DOC-1199578>*
>


Re: [ovirt-users] ovirt 4.1 - skylake - no avx512 support in virtual machines

2018-02-27 Thread Paolo Bonzini
On 27/02/2018 09:37, Sandro Bonazzola wrote:
> 
> Virtual Machine settings
> A) VM:
> Custom CPU Type: Use cluster default(Intel Skylake Family)
> General Tab shows: Guest CPU Type: Skylake-Client
> 
> avx512: NO
> 
> B) VM:
> Custom CPU Type: Skylake-Client
> General Tab shows: Guest CPU Type: Skylake-Client
> 
> avx512: NO
> 
> C) VM:
> Custom CPU Type: Use cluster default(Intel Skylake Family) [grey -
> cannot modify]
> Migration mode: Do not allow migration
> Pass-Through Host CPU
> General Tab shows: Guest CPU Type: Skylake-Client
> 
> avx512: YES   ( cat /proc/cpuinfo  |grep avx512: avx512f avx512dq
> avx512cd avx512bw avx512vl )
> 
> Using pass-through host cpu (disabling vm migration) is the only way to
> access avx512 in a VM, is it a bug or am I missing something?

Skylake-Client does _not_ have AVX512 (I tried just now on a Kaby Lake Core
i7 laptop).  Only Skylake-Server has it, and that CPU model will be in RHEL 7.5.

Thanks,

Paolo
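
For completeness: if migration has to stay enabled, QEMU can also extend a
named CPU model with explicit feature flags, provided every host in the
cluster actually supports them. A minimal sketch of the command-line form
(oVirt does not necessarily expose this through the engine UI):

  -cpu Skylake-Client,+avx512f,+avx512dq,+avx512cd,+avx512bw,+avx512vl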


Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6

2016-12-22 Thread Paolo Bonzini


On 22/12/2016 15:29, Michal Skrivanek wrote:
>> it is important for 3.6 and 4.0, but in 4.1 we should not be using any 
>> rhel-6 machine type anymore
>> There is https://bugzilla.redhat.com/show_bug.cgi?id=1402435 for HE which 
>> should be fixed (if real)
> 
> though, the bug opened by Paolo talks about rhel-6.6.0 machine type, but we 
> are using rhel-6.5.0, and then only 7.0,7.2,7.3
> Can you confirm? The report from Juergen indicates 6.5 is broken too

All RHEL-6 machine types are broken in the same way.

Paolo


Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6

2016-12-15 Thread Paolo Bonzini


On 15/12/2016 17:07, InterNetX - Juergen Gotteswinter wrote:
> On 15.12.2016 at 16:46, Sandro Bonazzola wrote:
>>
>>
>> On 15 Dec 2016 at 16:17, "InterNetX - Juergen Gotteswinter"
>> wrote:
>>
>> On 15.12.2016 at 15:51, Sandro Bonazzola wrote:
>> >
>> >
>> > On Thu, Dec 15, 2016 at 3:02 PM, InterNetX - Juergen Gotteswinter
>> > wrote:
>> >
>> > i can confirm that it will break ...
>> >
>> > Dec 15 14:58:43 vm1 journal: internal error: qemu unexpectedly
>> closed
>> > the monitor: Unexpected error in object_property_find() at
>> > qom/object.c:1003:#0122016-12-15T13:58:43.140073Z qemu-kvm:
>> can't apply
>> > global Opteron_G4-x86_64-cpu.x1apic=off: Property '.x1apic'
>> not found
>> >
>> >
>> > Just a heads up that qemu-kvm-ev 2.6 is now
>> > in http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/
>>
>> [16:16:47][root@vm1:/var/log]$rpm -aq |grep qemu-kvm-ev
>> qemu-kvm-ev-2.6.0-27.1.el7.x86_64
>> [16:16:52][root@vm1:/var/log]$
>>
>> this message is from 2.6
>>
>>
>> Adding Paolo and Michal.
> 
> sorry, theres a little bit more in the startup log which might be helpful
> 
> Unexpected error in object_property_find() at qom/object.c:1003:
> 2016-12-15T13:58:43.140073Z qemu-kvm: can't apply global
> Opteron_G4-x86_64-cpu.x1apic=off: Property '.x1apic' not found

This is now bug 1405123.

Paolo

> 
> the complete startup parameters in that case are
> 
> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> guest=jg123_vm1_loadtest,debug-threads=on -S -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-jg123_vm1_loadtest/master-key.aes
> -machine rhel6.5.0,accel=kvm,usb=off -cpu Opteron_G4 -m 65536 -realtime
> mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1 -uuid
> 20047459-7e48-4160-ac77-0e26a4f99472 -smbios
> 'type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0039-3310-8043-B2C04F463032,uuid=20047459-7e48-4160-ac77-0e26a4f99472'
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-2-jg123_vm1_loadtest/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2016-12-15T13:58:41,driftfix=slew -global
> kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> file=/rhev/data-center/0002-0002-0002-0002-02f7/d5b56ea4-782e-4002-bb9a-478b337b5c9f/images/f022eca0-1af3-43ad-acad-4731ceceed3e/94b35a95-c80b-434c-afe7-e8ab4391395c,format=qcow2,if=none,id=drive-scsi0-0-0-0,serial=f022eca0-1af3-43ad-acad-4731ceceed3e,cache=none,werror=stop,rerror=stop,aio=native
> -device
> scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1
> -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:5e:43:04,bus=pci.0,addr=0x3,bootindex=2
> -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/20047459-7e48-4160-ac77-0e26a4f99472.com.redhat.rhevm.vdsm,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/20047459-7e48-4160-ac77-0e26a4f99472.org.qemu.guest_agent.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice
> tls-port=5900,addr=192.168.210.80,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
> -k en-us -device
> qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2
> -msg timestamp=on
> 
> 
> 
>> > cheers,
>> >
>> > Juergen
>> >
>> > On 13.12.2016 at 10:30, Ralf Schenk wrote:
>> > > Hello
>> 

Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6

2016-12-15 Thread Paolo Bonzini


On 15/12/2016 16:46, Sandro Bonazzola wrote:
> 
> 
> On 15 Dec 2016 at 16:17, "InterNetX - Juergen Gotteswinter"
> wrote:
> 
> On 15.12.2016 at 15:51, Sandro Bonazzola wrote:
> >
> >
> > On Thu, Dec 15, 2016 at 3:02 PM, InterNetX - Juergen Gotteswinter
> > wrote:
> >
> > i can confirm that it will break ...
> >
> > Dec 15 14:58:43 vm1 journal: internal error: qemu unexpectedly
> closed
> > the monitor: Unexpected error in object_property_find() at
> > qom/object.c:1003:#0122016-12-15T13:58:43.140073Z qemu-kvm:
> can't apply
> > global Opteron_G4-x86_64-cpu.x1apic=off: Property '.x1apic'
> not found
> >
> >
> > Just a heads up that qemu-kvm-ev 2.6 is now
> > in http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/
> 
> 
> [16:16:47][root@vm1:/var/log]$rpm -aq |grep qemu-kvm-ev
> qemu-kvm-ev-2.6.0-27.1.el7.x86_64
> [16:16:52][root@vm1:/var/log]$
> 
> this message is from 2.6
> 
> 
> Adding Paolo and Michal.

The message is ugly, but that "x1apic" should have read "x2apic".

Paolo
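
For reference, the property named in that error corresponds to the x2apic
CPUID flag; a sketch of the two usual ways to turn it off explicitly,
assuming a QEMU version where CPU feature flags are exposed as properties:

  -cpu Opteron_G4,-x2apic
  -global Opteron_G4-x86_64-cpu.x2apic=off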


Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6

2016-12-14 Thread Paolo Bonzini


On 13/12/2016 18:28, Gianluca Cecchi wrote:
> - So I have to try the mix of 7.3 kernel and qemu 2.6, correct?

Yes, please.  If it works, the problem is transient.

Thanks,

Paolo

> Perhaps it was a problem only during install and not happening now that
> the VM has been deployed?
> Gianluca


Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6

2016-12-13 Thread Paolo Bonzini


On 13/12/2016 12:38, Gianluca Cecchi wrote:
> flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb
> rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
> nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx
> est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt
> tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch
> ida arat epb pln pts dtherm hwp hwp_noitfy hwp_act_window hwp_epp
> tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep
> bmi2 erms invpcid mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1
> xsaves
> bogomips: 3600.06
> clflush size: 64
> cache_alignment: 64
> address sizes: 39 bits physical, 48 bits virtual
> power management:
> 
> . . . 
> 
> What is the flag to check?

It's erms, which is there.  But it's not the culprit.
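
(For anyone checking their own host, a one-liner such as

  grep -wo erms /proc/cpuinfo | sort -u

prints the flag once if the CPU advertises it.)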

Sorry if you have already tested it, but have you tried using 7.2 kernel
with QEMU 2.6, and then 7.3 kernel with QEMU 2.3?  That would allow
finding the guilty component more easily.

Thanks,

> Even if harmless, it seems that qemu-2.6 doesn't boot the self hosted
> engine VM, because it remains somehow inside a sort of bios window,
> while qemu-2.3 boots it without any problem and able to configure hosted
> engine...
> Gianluca