[ovirt-devel] Reviews

2019-02-14 Thread Germano Veit Michel
Hello,

I've pinged about this a few times before; sorry for doing it again. If there is
interest in incorporating these changes into oVirt to make troubleshooting
LSM/snapshots easier, please review them; otherwise we can abandon
them.

https://gerrit.ovirt.org/#/q/topic:snapshot-tools
https://gerrit.ovirt.org/#/q/topic:dump-chains-sqlite

Thanks,
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FPLRFL2ROBVEWPBSOCYNU42EYIX3GI7P/


[ovirt-devel] Re: oVirt Glance repo not accessible blocked OST

2019-02-14 Thread Tian Xu
After skipping the verify_glance_import test, the "add_graphics_console" test
failed with the exception below:



  [testsuite XML; tags stripped by the mail archive]
  add_graphics_console (time 0.156s) failed:
    status: 400
    reason: Bad Request
    detail: Cannot ${action} ${type}. One VM can not contain more than one
    device with the same graphics type.


On 2019/2/14 22:00, Tian Xu wrote:

Hi Experts,

I am trying to run ovirt-system-tests in my local lab, which needs a proxy 
to access the internet. oVirt cannot work with a proxy when accessing the 
oVirt Glance repository at http://glance.ovirt.org:9292; a related 
bugzilla was filed some time ago and is still open: 
https://bugzilla.redhat.com/show_bug.cgi?id=1362433


Some oVirt system tests depend on the oVirt Glance repository for VM 
images; these tests are either skipped or fail, and then subsequent 
VM-related tests cannot run. Here is the test file: 
https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-4.2/test-scenarios/004_basic_sanity.py
verify_glance_import failed, so the other tests were blocked too.


Below is the exception message; for the detailed test log, see the attachment.


Running test scenario 004_basic_sanity.py


  [testsuite XML; tags stripped by the mail archive - recorded times: 2.473s, 0.050s, 0.266s]


Thanks,

Xu




[ovirt-devel] Re: Failed dependencies Ovirt 4.3 on clean install Centos 7.6

2019-02-14 Thread Erick Perez
In case a user wants to go ahead and install cockpit and the dashboard anyway, 
they need to:

1- do not run yum update on a fresh CentOS 7.6
2- install dpdk-17.11-15.el7.x86_64.rpm:
# yum install dpdk-17.11-15.el7
3- install cockpit and the dashboard:
# yum install cockpit cockpit-ovirt-dashboard -y
4- permanently exclude the dpdk package in yum.conf until a fix is available:
modify /etc/yum.conf and add a line that says:
exclude=dpdk-*
5- use yum update as usual

Alternate method, if you do not want to modify yum.conf:
# yum install dpdk-17.11-15.el7
# yum install cockpit cockpit-ovirt-dashboard -y
# yum -x 'dpdk-*' update
# the -x parameter excludes dpdk-* packages (quoted so the shell does not expand the glob)

The install will then succeed.


[ovirt-devel] Re: Failed dependencies Ovirt 4.3 on clean install Centos 7.6

2019-02-14 Thread Greg Sheremeta
Unfortunately there was an ovs packaging problem in CentOS.
It might work if you install this
https://cbs.centos.org/koji/buildinfo?buildID=25150
but I haven't tried it yet.
"""
* Thu Feb 14 2019 Alfredo Moralejo  - 2.10.1-3 -
Disabled dpdk support until a new release with support for dpdk-18.11 is
created.
"""

Best wishes,
Greg

On Thu, Feb 14, 2019 at 4:19 PM Erick Perez  wrote:

> Centos 7.6 (minimal install ISO)
> UEFI boot
> yum -y update
> reboot
> [root@ovirt01] yum install cockpit-ovirt-dashboard
> Error: Package: 1:openvswitch-2.10.1-1.el7.x86_64
> (ovirt-4.3-centos-ovirt43)
> Requires: librte_mbuf.so.3()(64bit)
> Available: dpdk-17.11-13.el7.x86_64 (extras)
> librte_mbuf.so.3()(64bit)
> Available: dpdk-17.11-15.el7.x86_64 (extras)
> librte_mbuf.so.3()(64bit)
> Installed:dpdk-18.11-2.el7_6.x86_64 (@extras)
> ~librte_mbuf.so.4()(64bit)
> You could try using --skip-broken to work around the problem
> You could try running: rpm -Va --nofiles --nodigest
> [root@ovirt01]
>
> Note: The above message appears for librte_mbuf/mempool/pmd and several
> others.
> It seems openvswitch specifically needs DPDK v17 and not v18.
> Please clarify.
>
> thanks,


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com | IRC: gshereme



[ovirt-devel] Failed dependencies Ovirt 4.3 on clean install Centos 7.6

2019-02-14 Thread Erick Perez
Centos 7.6 (minimal install ISO)
UEFI boot
yum -y update
reboot
[root@ovirt01] yum install cockpit-ovirt-dashboard
Error: Package: 1:openvswitch-2.10.1-1.el7.x86_64 (ovirt-4.3-centos-ovirt43)
Requires: librte_mbuf.so.3()(64bit)
Available: dpdk-17.11-13.el7.x86_64 (extras)
librte_mbuf.so.3()(64bit)
Available: dpdk-17.11-15.el7.x86_64 (extras)
librte_mbuf.so.3()(64bit)
Installed:dpdk-18.11-2.el7_6.x86_64 (@extras)
~librte_mbuf.so.4()(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
[root@ovirt01]

Note: The above message appears for librte_mbuf/mempool/pmd and several others.
It seems openvswitch specifically needs DPDK v17 and not v18.
Please clarify.
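For anyone reproducing this, one way to confirm the soname mismatch is to ask yum which packages provide the library openvswitch was built against (a sketch; repoquery comes from yum-utils):

```shell
# Which available dpdk builds provide the old soname? (per the error
# above, this should be the dpdk-17.11 builds in extras)
repoquery --whatprovides 'librte_mbuf.so.3()(64bit)'
# And which sonames does the installed dpdk-18.11 provide instead?
rpm -q --provides dpdk | grep librte_mbuf
```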

thanks,


[ovirt-devel] Re: oVirt Glance repo not accessible blocked OST

2019-02-14 Thread Dan Kenigsberg
On Thu, Feb 14, 2019 at 7:51 PM Tian Xu  wrote:
>
> Hi Experts,
>
> I am trying to run ovirt-system-tests in my local lab, which needs a proxy
> to access the internet. oVirt cannot work with a proxy when accessing the
> oVirt Glance repository at http://glance.ovirt.org:9292; a related
> bugzilla was filed some time ago and is still open:
> https://bugzilla.redhat.com/show_bug.cgi?id=1362433

Well, patches are most welcome. I suspect that one patch should go to
https://github.com/woorea/openstack-java-sdk and another should make
this configurable in ovirt-engine-setup.

>
> Some oVirt system tests depend on the oVirt Glance repository for VM images;
> these tests are either skipped or fail, and then subsequent VM-related tests
> cannot run. Here is the test file:
> https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-4.2/test-scenarios/004_basic_sanity.py
> verify_glance_import failed, so the other tests were blocked too.

I think that we are not being consistent here. snapshot_cold_merge
seems to be skipped if Glance is unreachable, yet verify_glance_import
fails the whole suite loudly.
I'd prefer that an OST_SKIP_GLANCE environment variable be used to
control whether these tests are attempted. Then, if you do not care for
Glance, you can simply drop it.

Theoretically you can raise a SkipTest() from verify_glance_import.
However, trying and then skipping is not a good practice. It leads to false
negatives: production code can break without anyone noticing that the
tests are skipped.
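A minimal sketch of what such an opt-out could look like (the variable name comes from this thread; the helper names and the unittest-style decorator are assumptions, not the actual OST code):

```python
import os
import unittest


def glance_enabled():
    # Glance-dependent tests run unless the user explicitly opts out
    # via OST_SKIP_GLANCE=1 (hypothetical variable name from this thread).
    return os.environ.get("OST_SKIP_GLANCE", "").lower() not in ("1", "true", "yes")


def require_glance(test_func):
    # Decorator for tests such as verify_glance_import: skip them up
    # front instead of probing Glance and failing the whole suite.
    return unittest.skipUnless(glance_enabled(), "OST_SKIP_GLANCE is set")(test_func)
```

This makes the skip an explicit user decision, so a CI run without the variable still fails loudly when Glance is broken.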

>
> Below is the exception message; for the detailed test log, see the attachment.
>
>
> Running test scenario 004_basic_sanity.py
> [testsuite XML; tags stripped by the mail archive - recorded times: 2.473s, 0.050s, 0.266s]
>
> Thanks,
>
> Xu
>


[ovirt-devel] oVirt Glance repo not accessible blocked OST

2019-02-14 Thread Tian Xu

Hi Experts,

I am trying to run ovirt-system-tests in my local lab, which needs a proxy 
to access the internet. oVirt cannot work with a proxy when accessing the 
oVirt Glance repository at http://glance.ovirt.org:9292; a related 
bugzilla was filed some time ago and is still open: 
https://bugzilla.redhat.com/show_bug.cgi?id=1362433


Some oVirt system tests depend on the oVirt Glance repository for VM images; 
these tests are either skipped or fail, and then subsequent VM-related tests 
cannot run. Here is the test file: 
https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-4.2/test-scenarios/004_basic_sanity.py
verify_glance_import failed, so the other tests were blocked too.


Below is the exception message; for the detailed test log, see the attachment.


Running test scenario 004_basic_sanity.py


  [testsuite XML; tags stripped by the mail archive - recorded times: 2.473s, 0.050s, 0.266s]


Thanks,

Xu

2019-02-14 10:37:33.608866112+ run_suite.sh::main::INFO:: Using lago 0.45.0
2019-02-14 10:37:34.108960551+ run_suite.sh::main::INFO:: Using lago ovirt 0.45.0
2019-02-14 10:37:34.114868893+ run_suite.sh::main::INFO:: Running suite found in /mnt/zz/bak/ovirt-system-tests-master/basic-suite-4.2
2019-02-14 10:37:34.117947363+ run_suite.sh::main::INFO:: Environment will be deployed at /mnt/zz/bak/ovirt-system-tests-master/deployment-basic-suite-4.2
nat-settings: &nat-settings
  type: nat
  dhcp:
    start: 100
    end: 254
  management: False

vm-common-settings: &vm-common-settings
  root-password: 123456
  service_provider: systemd
  artifacts:
    - /var/log
    - /etc/resolv.conf

domains:
  lago-basic-suite-4-2-engine:
    <<: *vm-common-settings
    vm-type: ovirt-engine
    memory: 4096
    nics:
      - net: lago-basic-suite-4-2-net-management
      - net: lago-basic-suite-4-2-net-storage
    disks:
      - template_name: el7.6-base
        type: template
        name: root
        dev: vda
        format: qcow2
      - comment: Main NFS device
        size: 101G
        type: empty
        name: nfs
        dev: sda
        format: raw
      - comment: Main iSCSI device
        size: 105G
        type: empty
        name: iscsi
        dev: sdc
        format: raw
    metadata:
      ovirt-engine-password: 123
      deploy-scripts:
        - $LAGO_INITFILE_PATH/deploy-scripts/add_local_repo.sh
        - $LAGO_INITFILE_PATH/deploy-scripts/setup_engine.sh
    artifacts:
      - /var/log
      - /var/cache/ovirt-engine
      - /var/lib/pgsql/upgrade_rh-postgresql95-postgresql.log
      - /var/lib/ovirt-engine/setup/answers
      - /etc/ovirt-engine
      - /etc/ovirt-engine-dwh
      - /etc/ovirt-engine-metrics
      - /etc/ovirt-engine-setup.conf.d
      - /etc/ovirt-engine-setup.env.d
      - /etc/ovirt-host-deploy.conf.d
      - /etc/ovirt-imageio-proxy
      - /etc/ovirt-provider-ovn
      - /etc/ovirt-vmconsole
      - /etc/ovirt-web-ui
      - /etc/dnf
      - /etc/firewalld
      - /etc/httpd
      - /etc/sysconfig
      - /etc/yum
      - /etc/resolv.conf
      - /tmp/ovirt*
      - /tmp/otopi*
  lago-basic-suite-4-2-host-0:
    <<: *vm-common-settings
    vm-type: ovirt-host
    memory: 2047
    nics:
      - net: lago-basic-suite-4-2-net-management
      - net: lago-basic-suite-4-2-net-storage
      - net: lago-basic-suite-4-2-net-bonding
      - net: lago-basic-suite-4-2-net-bonding
    disks:
      - template_name: el7.6-base
        type: template
        name: root
        dev: vda
        format: qcow2
    metadata:
      deploy-scripts:
        - $LAGO_INITFILE_PATH/deploy-scripts/add_local_repo.sh
        - $LAGO_INITFILE_PATH/deploy-scripts/setup_host_el7.sh
        - $LAGO_INITFILE_PATH/deploy-scripts/setup_1st_host_el7.sh
    artifacts:
      - /etc/resolv.conf
      - /var/log
  lago-basic-suite-4-2-host-1:
    <<: *vm-common-settings
    vm-type: ovirt-host
    memory: 2047
    nics:
      - net: lago-basic-suite-4-2-net-management
      - net: lago-basic-suite-4-2-net-storage
      - net: lago-basic-suite-4-2-net-bonding
      - net: lago-basic-suite-4-2-net-bonding
    disks:
      - template_name: el7.6-base
        type: template
        name: root
        dev: vda
        format: qcow2
    metadata:
      deploy-scripts:
        - $LAGO_INITFILE_PATH/deploy-scripts/add_local_repo.sh
        - $LAGO_INITFILE_PATH/deploy-scripts/setup_host_el7.sh
    artifacts:
      - /etc/resolv.conf
      - /var/log

nets:
  lago-basic-suite-4-2-net-management:
    <<: *nat-settings
    management: true
    dns_domain_name: lago.local
  lago-basic-suite-4-2-net-bonding:
    <<: *nat-settings
  lago-basic-suite-4-2-net-storage:
    <<: *nat-settings
/mnt/zz/bak/ovirt-system-tests-master
[Prefix]:
Base directory: 
/mnt/zz/bak/ovirt-system-tests-master/deployment-basic-suite-4.2/default
[Networks]:
[lago-basic-suite-4-2-net-bonding]:
gateway: 192.168.204.1
management: False

[ovirt-devel] [VDSM] Running the new storage tests on your laptop

2019-02-14 Thread Nir Soffer
I want to share our new block storage tests: running on your laptop, from
your editor, creating a real block storage domain with real logical volumes.

One catch - these tests require root; there is no way to create devices
without root. To make it easy to run, as root, only the tests that need
root, they are marked with a "root" mark.

Here is an example of running the root tests for the block storage domain:

$ sudo ~/.local/bin/tox -e storage-py27 tests/storage/blocksd_test.py
-- -m root

And for lvm:

$ sudo ~/.local/bin/tox -e storage-py27 tests/storage/lvm_test.py -- -m
root

To run all storage tests that require root:

$ sudo ~/.local/bin/tox -e storage-py27 -- -m root tests/storage

Another issue - after running the tests as root, you need to fix the
ownership of some files in .tox/, tests/htmlcov*, and /var/tmp/vdsm. You can:

$ sudo chown -R $USER:$USER .tox tests /var/tmp/vdsm

We will improve this later.

Note that I'm running tox installed as a user:

$ pip install --user tox

This gets the most recent tox with minimal breakage of the system Python.

With the new tests, our code coverage is now 57%:
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/2888/artifact/check-patch.tests-py27.fc28.x86_64/htmlcov-storage-py27/index.html

- blockSD: 47%
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/2888/artifact/check-patch.tests-py27.fc28.x86_64/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blockSD_py.html

- lvm: 74%
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/2888/artifact/check-patch.tests-py27.fc28.x86_64/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_lvm_py.html

These tests are rather slow - all the root tests take 26 seconds. But OST
takes more than 40 minutes, and covers less code in this area.

OST coverage for the lvm module: 71%
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/4071/artifact/exported-artifacts/coverage/vdsm/html/_usr_lib_python2_7_site-packages_vdsm_storage_lvm_py.html

To debug the tests, you can use the new --log-cli-level option in recent
pytest:

$ sudo ~/.local/bin/tox -e storage-py27 tests/storage/blocksd_test.py
-- -m root --log-cli-level=info

Here is an example output from a test creating a storage domain
(use --log-cli-level=debug if this is not verbose enough):

---
live log call
---
blockSD.py1034 INFO
 sdUUID=d4d7649d-4849-4413-bdc2-b7b84f239092 domainName=loop-domain
domClass=1 vgUUID=3OJX6U-UDLc-VFtg-2cRO-q3kR-UH2g-Nvf78I storageType=3
version=3, block_size=512, alignment=1048576
blockSD.py 600 INFO size 512 MB (metaratio 262144)
lvm.py1168 INFO Creating LV
(vg=d4d7649d-4849-4413-bdc2-b7b84f239092, lv=metadata, size=512m,
activate=True, contiguous=False, initialTags=(), device=None)
lvm.py1198 WARNING  Could not change ownership of one
or more volumes in vg (d4d7649d-4849-4413-bdc2-b7b84f239092) - metadata
blockSD.py 522 INFO Create: SORT MAPPING: ['/dev/loop2']
lvm.py1168 INFO Creating LV
(vg=d4d7649d-4849-4413-bdc2-b7b84f239092, lv=inbox, size=16m,
activate=True, contiguous=False, initialTags=(), device=None)
lvm.py1198 WARNING  Could not change ownership of one
or more volumes in vg (d4d7649d-4849-4413-bdc2-b7b84f239092) - inbox
lvm.py1168 INFO Creating LV
(vg=d4d7649d-4849-4413-bdc2-b7b84f239092, lv=outbox, size=16m,
activate=True, contiguous=False, initialTags=(), device=None)
lvm.py1198 WARNING  Could not change ownership of one
or more volumes in vg (d4d7649d-4849-4413-bdc2-b7b84f239092) - outbox
lvm.py1168 INFO Creating LV
(vg=d4d7649d-4849-4413-bdc2-b7b84f239092, lv=ids, size=8m, activate=True,
contiguous=False, initialTags=(), device=None)
lvm.py1198 WARNING  Could not change ownership of one
or more volumes in vg (d4d7649d-4849-4413-bdc2-b7b84f239092) - ids
lvm.py1168 INFO Creating LV
(vg=d4d7649d-4849-4413-bdc2-b7b84f239092, lv=leases, size=2048m,
activate=True, contiguous=False, initialTags=(), device=None)
lvm.py1198 WARNING  Could not change ownership of one
or more volumes in vg (d4d7649d-4849-4413-bdc2-b7b84f239092) - leases
lvm.py1168 INFO Creating LV
(vg=d4d7649d-4849-4413-bdc2-b7b84f239092, lv=master, size=1024m,
activate=True, contiguous=False, initialTags=(), device=None)
lvm.py1198 WARNING  Could not change ownership of one
or more volumes in vg (d4d7649d-4849-4413-bdc2-b7b84f239092) - master
lvm.py1333 INFO Deactivating lvs:
vg=d4d7649d-4849-4413-bdc2-b7b84f239092 lvs=['master']
blockdev.py 84 INFO Zeroing device
/dev

[ovirt-devel] Fwd: [ovirt-users] Ovirt Cluster completely unstable

2019-02-14 Thread Sandro Bonazzola
Any suggestion from the Gluster team on how to get back to a stable system in a
very short loop?
I opened https://bugzilla.redhat.com/show_bug.cgi?id=1677160 to track this on
the Gluster side.


-- Forwarded message -
From: 
Date: gio 14 feb 2019 alle ore 00:26
Subject: [ovirt-users] Ovirt Cluster completely unstable
To: 


I'm abandoning my production oVirt cluster due to instability. I have a 7
host cluster running about 300 VMs and have been for over a year. It has
become unstable over the past three days. I have random hosts, both
compute and storage, disconnecting, AND many VMs disconnecting and becoming
unusable.

The 7 hosts are 4 compute hosts running oVirt 4.2.8 and three GlusterFS hosts
running 3.12.5. I submitted a bugzilla bug and they immediately assigned
it to the storage people, but they have not responded with any meaningful
information. I have submitted several logs.

I have found some discussion of instability problems with Gluster
3.12.5. I would be willing to upgrade my Gluster to a more stable version
if that's the culprit. I installed Gluster using the oVirt GUI, and this is
the version the oVirt GUI installed.

Is there an oVirt health monitor available? Where should I be looking to
get a resolution to the problems I'm facing?
___
Users mailing list -- us...@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/us...@ovirt.org/message/BL4M3JQA3IEXCQUY4IGQXOAALRUQ7TVB/


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com



[ovirt-devel] Re: [Qemu-block] disk cache issues

2019-02-14 Thread Stefan Hajnoczi
On Thu, Feb 14, 2019 at 02:05:00AM +0200, Nir Soffer wrote:
> On Thu, Feb 14, 2019 at 1:28 AM Hetz Ben Hamo  wrote:
> 
> > Hi,
> >
> > After digging around and finding a bit of info about viodiskcache - I
> > understand that if the user enables it - then the VM cannot be live migrated.
> >
> > Umm, unless the op decides to do a live migration including changing
> > storage - I don't understand why the live migration is disabled. If the VM
> > will only be live migrated between nodes, then the storage is the same,
> > nothing is saved locally on the node's hard disk, so what is the reason to
> > disable live migration?
> >
> 
> I think the issue is synchronizing the host buffer cache on different
> hosts. Each kernel thinks it controls the storage, while the storage is
> accessed by two hosts at the same time.

Yes, the problem is that the host page cache on the destination may
contain stale data.  This is because the destination QEMU reads the
shared disk before the migration handover.

QEMU 3.0.0 introduced support for live migration even with the host page
cache.  This happened in commit dd577a26ff03b6829721b1ffbbf9e7c411b72378
("block/file-posix: implement bdrv_co_invalidate_cache() on Linux").

Libvirt still considers such configurations unsafe for live migration
but this can be overridden with the "virsh migrate --unsafe" option.
Work is required so that libvirt can detect QEMU binaries that support
live migration when the host page cache is in use.

(cache=writeback can produce misleading performance results, so think
carefully if you're using it because it appears faster.  The results may
look good during benchmarking but change depending on host load/memory
pressure.  In production there are probably other VMs on the same host
so using cache=none leads to more consistent results and the host page
cache isn't a great help if the host is close to capacity anyway.)
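For illustration, the libvirt override mentioned above looks like this (VM and host names are hypothetical; this is only safe when both ends run a QEMU >= 3.0.0 that invalidates the destination's page cache):

```shell
# Tell libvirt to allow the live migration it would otherwise refuse
# as unsafe because of the disk cache mode
virsh migrate --live --unsafe guest01 qemu+ssh://dest.example.com/system
```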

Stefan

