[ovirt-users] Re: status of oVirt 4.4.x and CentOS 8.2

2020-06-29 Thread Ales Musil
On Wed, Jun 24, 2020 at 12:17 AM Mark R  wrote:

> Following up: in addition to the logged error in supervdsm.log, I see this in
> dmesg, so it appears that it very briefly does create DMZ0 and the bond0.22
> interface but then rolls everything back.
>
> ```
> [ 2205.551920] IPv6: ADDRCONF(NETDEV_UP): DMZ0: link is not ready
> [ 2205.558366] IPv6: ADDRCONF(NETDEV_UP): bond0.22: link is not ready
> [ 2205.655177] DMZ0: port 1(bond0.22) entered blocking state
> [ 2205.655179] DMZ0: port 1(bond0.22) entered disabled state
> ```
> Both ports for the bond are up and active (of course, as this is currently
> the first host of the HE deployment, so I couldn't manage it if they
> weren't).
>
> ```
> # cat /proc/net/bonding/bond0
> Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
>
> Bonding Mode: IEEE 802.3ad Dynamic link aggregation
> Transmit Hash Policy: layer2 (0)
> MII Status: up
> MII Polling Interval (ms): 100
> Up Delay (ms): 0
> Down Delay (ms): 0
> Peer Notification Delay (ms): 0
>
> 802.3ad info
> LACP rate: slow
> Min links: 0
> Aggregator selection policy (ad_select): stable
> System priority: 65535
> System MAC address: bc:97:e1:24:c5:40
> Active Aggregator Info:
> Aggregator ID: 1
> Number of ports: 2
> Actor Key: 21
> Partner Key: 3
> Partner Mac Address: 0a:33:5e:69:1f:1e
>
> Slave Interface: eno33np0
> MII Status: up
> Speed: 25000 Mbps
> Duplex: full
> Link Failure Count: 0
> Permanent HW addr: bc:97:e1:24:c5:40
> Slave queue ID: 0
> Aggregator ID: 1
> Actor Churn State: none
> Partner Churn State: none
> Actor Churned Count: 0
> Partner Churned Count: 0
> details actor lacp pdu:
> system priority: 65535
> system mac address: bc:97:e1:24:c5:40
> port key: 21
> port priority: 255
> port number: 1
> port state: 61
> details partner lacp pdu:
> system priority: 4096
> system mac address: 0a:33:5e:69:1f:1e
> oper key: 3
> port priority: 8192
> port number: 65
> port state: 61
>
> Slave Interface: ens2f0np0
> MII Status: up
> Speed: 25000 Mbps
> Duplex: full
> Link Failure Count: 0
> Permanent HW addr: b0:26:28:cd:ec:d0
> Slave queue ID: 0
> Aggregator ID: 1
> Actor Churn State: none
> Partner Churn State: none
> Actor Churned Count: 0
> Partner Churned Count: 0
> details actor lacp pdu:
> system priority: 65535
> system mac address: bc:97:e1:24:c5:40
> port key: 21
> port priority: 255
> port number: 2
> port state: 61
> details partner lacp pdu:
> system priority: 4096
> system mac address: 0a:33:5e:69:1f:1e
> oper key: 3
> port priority: 32768
> port number: 33
> port state: 61
> ```
>
> Thanks,
> Mark
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FY5227PI4PUFPOMTCE3UY3R7J6NHRQH3/
>

Hello,

sorry for the late response. We have not been able to reproduce this issue.
Can you please provide us with more info?

I need to know the exact package versions of nmstate and NetworkManager.

Also, if possible, please enable trace logs for NetworkManager:

Please create `/etc/NetworkManager/conf.d/debug.conf` with the following
content:
[logging]
level=TRACE
domains=ALL

Restart the NetworkManager service and trigger the issue again.
The NetworkManager log is then available via journalctl -u NetworkManager.
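The steps above can be sketched as a small script. The config content is exactly what is given in the message; the temp-file indirection is only so the snippet is safe to try outside the affected host:

```shell
# Generate the debug.conf content described above; on the affected host it
# belongs at /etc/NetworkManager/conf.d/debug.conf.
tmp=$(mktemp)
printf '[logging]\nlevel=TRACE\ndomains=ALL\n' > "$tmp"
cat "$tmp"
# On the real host, as root:
#   cp "$tmp" /etc/NetworkManager/conf.d/debug.conf
#   systemctl restart NetworkManager
#   journalctl -u NetworkManager > nm-trace.log   # after reproducing the issue
```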

Thank you.
Best regards,
Ales Musil

-- 

Ales Musil

Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.com    IM: amusil

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NFTLIKUZRFI7HEHIFL6CVRNBS7VJKL5U/


[ovirt-users] oVirt 2020 online conference

2020-06-29 Thread Sandro Bonazzola
It is our pleasure to invite you to the oVirt 2020 online conference. The
conference, organized by the oVirt community, will take place online on
Monday, September 7th 2020!

oVirt 2020 is a free conference for oVirt community project users and
contributors coming to a web browser near you!
There is no admission or ticket charge for this event. However, you will be
required to complete a free registration.
Watch https://blogs.ovirt.org/ovirt-2020-online-conference/ for updates
about registration. Talks, presentations and workshops will all be in
English.

We encourage students and new graduates as well as professionals to submit
proposals to oVirt conferences.
We will be looking for talks and discussions across virtualization, and how
oVirt 4.4 can effectively solve user issues around:

   - Upgrade flows
   - New features
   - Integration with other projects
   - User stories

The deadline to submit abstracts is July 26th 2020.
To submit your abstract, please click on the following link: submission form

More information is available at
https://blogs.ovirt.org/ovirt-2020-online-conference/

Thanks,
-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V75LDWIZM52LMARJSWM2XGFNIJFDATVW/


[ovirt-users] How to renew an Ovirt host certificate (vdsmcert.pem) ?

2020-06-29 Thread noua . toukourou
Hi,

I have an oVirt host whose vdsmcert.pem has expired. The problem is that this
host runs the self-hosted engine.
How can I renew the certificate without breaking the self-hosted engine?

Thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EEHMUJMOZFXEQUEJSRHLRRYUGBGVFXO6/


[ovirt-users] Re: How to renew an Ovirt host certificate (vdsmcert.pem) ?

2020-06-29 Thread Martin Perina
Hi,

just migrate the hosted engine VM to a different host, move the host to
Maintenance, and execute Enroll Certificate. After the new certificate has
been enrolled successfully, you can activate the host again.
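Before and after enrolling, the certificate expiry can be checked on the host itself. A minimal sketch, assuming the usual vdsm certificate path (adjust if your layout differs):

```shell
# Print the expiry date of the vdsm host certificate (default oVirt host path).
cert=/etc/pki/vdsm/certs/vdsmcert.pem
if [ -f "$cert" ]; then
    openssl x509 -enddate -noout -in "$cert"
else
    echo "certificate not found at $cert"
fi
```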

Regards,
Martin

On Mon, Jun 29, 2020 at 10:53 AM  wrote:

> Hi,
>
> I have an Ovirt host that the vdsmcert.pem expired. The problem is that
> host contains the self-hosted engine.
> How to renew the certificate without breaking the self-hosted engine ?
>
> Thanks,
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EEHMUJMOZFXEQUEJSRHLRRYUGBGVFXO6/
>


-- 
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VHSWC6WYQUO6AVSKWZRH23CYVMJNJOOY/


[ovirt-users] Re: NVMe-oF

2020-06-29 Thread Daniel Menzel
If I could help with my limited programming skills, I would do so right away.
So why the snide remark?

On 27.06.20 at 23:20, tho...@hoberg.net wrote:
> Somehow I get the feeling that what you have in mind there is a turnkey
> solution that you can sell on directly...

-- 
Daniel Menzel
Managing Director

Menzel IT GmbH
Charlottenburger Str. 33a
13086 Berlin

+49 (0) 30 / 5130 444 - 00
daniel.men...@menzel-it.net
https://menzel-it.net

Managing Directors: Daniel Menzel, Josefin Menzel
Registered office: Berlin
Commercial register: Amtsgericht Charlottenburg
Commercial register number: HRB 149835 B
VAT ID: DE 309 226 751

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MPZTPGGCZ5TEURWJIHLOEQD2ICJ6IK44/


[ovirt-users] vdsm error every hour

2020-06-29 Thread eevans
In the web ui I get an error about every hour: VDSM command 
SetVolumeDescriptionVDS failed: Volume does not exist: 
(u'e3f79840-8355-45b0-ad2b-440c877be637',)
I looked in storage and disks and this disk does not exist. It's more of an
annoyance than a problem, but if there is a way to get rid of this error I
would like to know.
My research says I can install vdsm-tools and vdsm-cli, but vdsm-cli is not
available and I really don't want to install anything until I know it's what
I need.
Is there a vdsm command to purge a missing disk so this error won't show up?
Thanks in advance.
Eric
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/USWMCT7F4AHA46SFUNNQ4XUEGGGXWOXM/


[ovirt-users] Re: Error with Imported VM Run

2020-06-29 Thread Ian Easter
Yes - copied the wrong line it seems.

Attempted a fresh restart to get clean logs to go off of here.

2020-06-29 11:53:23,239-04 INFO  [org.ovirt.engine.core.bll.RunVmCommand]
(default task-40) [44ec18fa-de38-4f7c-b291-44dbffc81044] Lock Acquired to
object
'EngineLock:{exclusiveLocks='[8877c89c-3640-4031-96c7-f4b6ff64b92a=VM]',
sharedLocks=''}'
2020-06-29 11:53:23,255-04 INFO
 [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default
task-40) [44ec18fa-de38-4f7c-b291-44dbffc81044] START,
IsVmDuringInitiatingVDSCommand(
IsVmDuringInitiatingVDSCommandParameters:{vmId='8877c89c-3640-4031-96c7-f4b6ff64b92a'}),
log id: 4c62e9d1
2020-06-29 11:53:23,255-04 INFO
 [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default
task-40) [44ec18fa-de38-4f7c-b291-44dbffc81044] FINISH,
IsVmDuringInitiatingVDSCommand, return: false, log id: 4c62e9d1
2020-06-29 11:53:23,308-04 INFO  [org.ovirt.engine.core.bll.RunVmCommand]
(EE-ManagedThreadFactory-engine-Thread-121870)
[44ec18fa-de38-4f7c-b291-44dbffc81044] Running command: RunVmCommand
internal: false. Entities affected :  ID:
8877c89c-3640-4031-96c7-f4b6ff64b92a Type: VMAction group RUN_VM with role
type USER
2020-06-29 11:53:23,313-04 INFO
 [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils]
(EE-ManagedThreadFactory-engine-Thread-121870)
[44ec18fa-de38-4f7c-b291-44dbffc81044] Emulated machine
'pc-i440fx-rhel8.1.0' which is different than that of the cluster is set
for 'mtl-portal-01'(8877c89c-3640-4031-96c7-f4b6ff64b92a)
2020-06-29 11:53:23,319-04 INFO
 [org.ovirt.engine.core.bll.scheduling.SchedulingManager]
(EE-ManagedThreadFactory-engine-Thread-121870)
[44ec18fa-de38-4f7c-b291-44dbffc81044] Candidate host 'mtl-hv-14.teve.inc'
('64ee2f32-0b33-4cc8-9e76-667310f4899b') was filtered out by
'VAR__FILTERTYPE__INTERNAL' filter 'Emulated-Machine' (correlation id:
44ec18fa-de38-4f7c-b291-44dbffc81044)
2020-06-29 11:53:23,319-04 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
(EE-ManagedThreadFactory-engine-Thread-121870)
[44ec18fa-de38-4f7c-b291-44dbffc81044] Can't find VDS to run the VM
'8877c89c-3640-4031-96c7-f4b6ff64b92a' on, so this VM will not be run.
2020-06-29 11:53:23,329-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-121870)
[44ec18fa-de38-4f7c-b291-44dbffc81044] EVENT_ID: USER_FAILED_RUN_VM(54),
Failed to run VM mtl-portal-01  (User: admin@internal-authz).
2020-06-29 11:53:23,336-04 INFO  [org.ovirt.engine.core.bll.RunVmCommand]
(EE-ManagedThreadFactory-engine-Thread-121870)
[44ec18fa-de38-4f7c-b291-44dbffc81044] Lock freed to object
'EngineLock:{exclusiveLocks='[8877c89c-3640-4031-96c7-f4b6ff64b92a=VM]',
sharedLocks=''}'
2020-06-29 11:53:23,337-04 INFO
 [org.ovirt.engine.core.bll.ProcessDownVmCommand]
(EE-ManagedThreadFactory-engine-Thread-121871) [10cf061a] Running command:
ProcessDownVmCommand internal: true.
2020-06-29 11:53:24,236-04 INFO
 [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
(EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1)
[] Thread pool 'default' is using 0 threads out of 1, 5 threads waiting for
tasks.
2020-06-29 11:53:24,237-04 INFO
 [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
(EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1)
[] Thread pool 'engine' is using 0 threads out of 500, 22 threads waiting
for tasks and 0 tasks in queue.
2020-06-29 11:53:24,237-04 INFO
 [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
(EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1)
[] Thread pool 'engineScheduledThreadPool' is using 0 threads out of 1, 100
threads waiting for tasks.
2020-06-29 11:53:24,237-04 INFO
 [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
(EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1)
[] Thread pool 'engineThreadMonitoringThreadPool' is using 1 threads out of
1, 0 threads waiting for tasks.
2020-06-29 11:53:24,237-04 INFO
 [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
(EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1)
[] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5, 5 threads
waiting for tasks.

httpd/ovirt-requests-log is the only other log with a reference
to 44ec18fa-de38-4f7c-b291-44dbffc81044

[29/Jun/2020:11:53:23 -0400] 10.1.55.107 "Correlation-Id:
44ec18fa-de38-4f7c-b291-44dbffc81044" "Duration: 83201us" "POST
/ovirt-engine/webadmin/GenericApiGWTService HTTP/1.1" 252

Host machine doesn't have anything in vdsm log(s) relating to the start
attempt.


Checked for the previous ID and same thing.

*Thank you,*
*Ian*




On Fri, Jun 26, 2020 at 3:46 PM Nir Soffer  wrote:

> On Fri, Jun 26, 2020 at 7:23 PM Ian Easter  wrote:
> >
> >
> > I imported an orphaned Storage Domain into a new oVirt instance (4.4)
> and imported the VMs that were found in that storage domain.  The SD was
> previously attached to, 4.2/4.3 instance of oVi

[ovirt-users] Re: Ovirt 4.3.10 Glusterfs SSD slow performance over 10GE

2020-06-29 Thread Strahil Nikolov via Users


On 29 June 2020 at 4:14:33 GMT+03:00, jury cat  wrote:
>If i destroy the brick, i might upgrade to ovirt 4.4 and Centos 8.2.
>Do you think upgrade to ovirt 4.4 with glusterfs improves performance
>or i am better with NFS ?

Actually only you can find out, as we cannot know the workload of your VMs.
oVirt 4.4 uses Gluster v7, but I have to warn you that several people have
reported issues after upgrading from v6.5 to 6.6+ or from 7.0 to 7.1+. It's
still under investigation.
>
>If that partition alignment is so important, can i have an example
>command how to set it up ?

You are using a 64k stripe size, but Red Hat usually recommends either 128k
for RAID 6 or 256k for RAID 10. In your case 256k sounds nice.
With your current layout, the stripe width will be 64k x 2 data disks = 128k,
so you should use:
pvcreate --dataalignment 128k /dev/raid-device

For details, check the RHGS documentation:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/brick_configuration
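The alignment arithmetic above can be sketched as follows (the device name is hypothetical; only the 64k-stripe, 2-data-disk case from this thread is shown):

```shell
# Stripe width = stripe size x number of data disks: 64k x 2 = 128k here.
stripe_kb=64
data_disks=2
align_kb=$((stripe_kb * data_disks))
echo "--dataalignment ${align_kb}k"   # prints: --dataalignment 128k
# On the real host (hypothetical device name):
#   pvcreate --dataalignment ${align_kb}k /dev/raid-device
```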

>I have upload an image with my current Raid 0 size and strip size.
>
>Btw i manage to enable Jumbo Frames with 9k MTU on the Storage Gluster
>Network and i can also try to enable multique scheduler

Verify that the MTU is the same on every device.
As the IP + ICMP headers need 28 bytes, you can try:
ping -M do -c 10 -s 8972 <remote gluster node>
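The 8972 figure comes from subtracting the 20-byte IPv4 header and the 8-byte ICMP header from the 9000-byte MTU:

```shell
# Largest ICMP payload that fits a 9000-byte MTU without fragmentation.
mtu=9000
payload=$((mtu - 20 - 8))   # 20 = IPv4 header, 8 = ICMP header
echo "$payload"             # prints 8972; use as: ping -M do -s $payload <node>
```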

Also, you can try changing the I/O scheduler.

>Can i use the latest glusterfs version 8 with ovirt 4.3.10 or 4.4 ? if
>of course has performance benefits.
Gluster v8.0 is planned for community tests - it's too early for it - use the
4.4 default (v7.x).

>Also can you share the rhgs-random-io.settings you use.

I can't claim those are universal, but here is mine:

[main]
summary=Optimize for running KVM guests on Gluster (Random IO)
include=throughput-performance


[cpu]
governor=ondemand|powersave
energy_perf_bias=powersave|power

[sysctl]
#vm.dirty_ratio = 5
#Random io -> 2 , vm host -> 5
#vm.dirty_background_ratio = 4
vm.dirty_background_bytes = 2
vm.dirty_bytes = 45000

# The total time the scheduler will consider a migrated process
# "cache hot" and thus less likely to be re-migrated
# (system default is 50, i.e. 0.5 ms)
kernel.sched_migration_cost_ns = 500

I'm using the powersave governor, as I'm chasing power efficiency rather than
performance. I would recommend you take a look at the source RPM from the
previous e-mail, which contains Red Hat's tuned profile.
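For reference, a custom profile like the one above is activated by dropping it into its own directory under /etc/tuned and selecting it with tuned-adm. A sketch (the profile name is illustrative, and the directory is parameterized so the file-layout part can be tried anywhere):

```shell
# Write the profile shown above into a tuned profile directory; on a real
# host TUNED_DIR is /etc/tuned.
TUNED_DIR=${TUNED_DIR:-$(mktemp -d)}
mkdir -p "$TUNED_DIR/gluster-random-io"
cat > "$TUNED_DIR/gluster-random-io/tuned.conf" <<'EOF'
[main]
summary=Optimize for running KVM guests on Gluster (Random IO)
include=throughput-performance
EOF
ls "$TUNED_DIR/gluster-random-io"
# On the real host: tuned-adm profile gluster-random-io && tuned-adm active
```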
>
>Thanks,
>Emy
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T7GH43RJ2DMDDBEUH2VBS2NIJ5WQDUII/


[ovirt-users] Re: vdsm error every hour

2020-06-29 Thread Nir Soffer
On Mon, Jun 29, 2020 at 7:02 PM  wrote:
>
> In the web ui I get an error about every hour: VDSM command 
> SetVolumeDescriptionVDS failed: Volume does not exist: 
> (u'e3f79840-8355-45b0-ad2b-440c877be637',)
> I looked in storage and disks and this disk does not exist. Its more of an 
> annoyance than a problem but if there is way to get rid of this error I would 
> like to know.
> My research says I can install vdsm-tools and vdsm-cli but vdsm-cli is not 
> available and I really don;t want to install anything until I know it's what 
> I need.

vdsm-cli was an old command line tool in historic versions. It was replaced
by vdsm-client.

vdsm-tool is always installed; it is part of the vdsm package.

> Is there a vdsm command to  purge a missing disk so this error won't show up?
> Thanks in advance.

Which versions are you running (engine, vdsm)?
Which storage domain type (NFS, iSCSI, FC, Gluster, local)?

Engine updates the OVF_STORE volumes every hour. These volumes are created by
the engine when you create a storage domain, and they are updated every hour
or when the storage domain is deactivated.

The error you get may be a bug in vdsm (failing to update or find the
volume), or a real issue if the volume is missing.

If there is a bug in vdsm, it may be fixed by restarting vdsm. Do you
experience the same error after that?

Please file a bug for this and attach all engine and vdsm logs that contain
this volume (e3f79840-8355-45b0-ad2b-440c877be637). The logs may explain why
the volume is missing or why the operation fails.

If you run 4.4, attaching output of this may help:

$ vdsm-client StorageDomain dump sd_id=storage-domain-uuid full=1

If you run 4.3, output of this tool may help:

vdsm-tool dump-volume-chains -o json storage-domain-uuid
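When filing the bug, something like this can locate the logs worth attaching (the UUID is the one from the error above; the log paths are the usual defaults and may differ on your host):

```shell
# List engine/vdsm logs that mention the problem volume, for the bug report.
vol=e3f79840-8355-45b0-ad2b-440c877be637
logs=$(grep -ls "$vol" /var/log/vdsm/vdsm.log* \
                       /var/log/ovirt-engine/engine.log* 2>/dev/null || true)
echo "matching logs: ${logs:-none found}"
# tar czf vol-logs.tgz $logs   # then attach vol-logs.tgz to the bug
```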


Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G57DKSBUD2W6BZKVF7OSU5CC2TVE5FR2/


[ovirt-users] VMs shutdown mysteriously

2020-06-29 Thread Bobby
Hello,

All 4 VMs on one of my oVirt cluster nodes shut down for an unknown reason
almost simultaneously.
Please help me find the root cause.
Thanks.

Please note the host seems to be doing fine - it never crashed or hung, and I
can migrate VMs back to it later.
Here is the exact timeline of all the related events, combined from the host
and the VM(s):

On oVirt host:
/var/log/vdsm/vdsm.log:
2020-06-25 15:25:16,944-0500 WARN  (qgapoller/3)
[virt.periodic.VmDispatcher] could not run  at
0x7f4ed2f9f5f0> on ['e0257b06-28fd-4d41-83a9-adf1904d3622'] (periodic:289)
2020-06-25 15:25:19,203-0500 WARN  (libvirt/events) [root] File:
/var/lib/libvirt/qemu/channels/e0257b06-28fd-4d41-83a9-adf1904d3622.ovirt-guest-agent.0
already removed (fileutils:54)
2020-06-25 15:25:19,203-0500 WARN  (libvirt/events) [root] File:
/var/lib/libvirt/qemu/channels/e0257b06-28fd-4d41-83a9-adf1904d3622.org.qemu.guest_agent.0
already removed (fileutils:54)

[root@athos log]# journalctl -u NetworkManager --since=today
-- Logs begin at Wed 2020-05-20 22:07:33 CDT, end at Thu 2020-06-25
16:36:05 CDT. --
Jun 25 15:25:18 athos NetworkManager[1600]:   [1593116718.1136]
device (vnet0): state change: disconnected -> unmanaged (reason
'unmanaged', sys-iface-state: 'removed')
Jun 25 15:25:18 athos NetworkManager[1600]:   [1593116718.1146]
device (vnet0): released from master device SRV-VL

/var/log/messages:
Jun 25 15:25:18 athos kernel: SRV-VL: port 2(vnet0) entered disabled state
Jun 25 15:25:18 athos NetworkManager[1600]:   [1593116718.1136]
device (vnet0): state change: disconnected -> unmanaged (reason
'unmanaged', sys-iface-state: 'removed')
Jun 25 15:25:18 athos NetworkManager[1600]:   [1593116718.1146]
device (vnet0): released from master device SRV-VL
Jun 25 15:25:18 athos libvirtd: 2020-06-25 20:25:18.122+: 2713: error :
qemuMonitorIO:718 : internal error: End of file from qemu monitor

/var/log/libvirt/qemu/aries.log:
2020-06-25T20:25:28.353975Z qemu-kvm: terminating on signal 15 from pid
2713 (/usr/sbin/libvirtd)
2020-06-25 20:25:28.584+: shutting down, reason=shutdown
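Since the aries.log line shows qemu receiving SIGTERM from libvirtd itself (pid 2713), one thing worth checking is whether libvirtd was restarted or reloaded around 15:25. A sketch, assuming journalctl is available on the host:

```shell
# Look for libvirtd service state changes around the time the VMs went down.
journalctl -u libvirtd \
    --since "2020-06-25 15:20:00" --until "2020-06-25 15:30:00" 2>/dev/null \
  | grep -Ei 'start|stopp|terminat|sigterm' \
  || echo "no libvirtd state changes found in that window"
```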

=
On the first VM affected (same thing on the others):
/var/log/ovirt-guest-agent/ovirt-guest-agent.log:
MainThread::INFO::2020-06-25
15:25:20,270::ovirt-guest-agent::104::root::Stopping oVirt guest agent
CredServer::INFO::2020-06-25
15:25:20,626::CredServer::262::root::CredServer has stopped.
MainThread::INFO::2020-06-25
15:25:21,150::ovirt-guest-agent::78::root::oVirt guest agent is down.

=
Package versions installed:
Host OS version: CentOS 7.7.1908:
ovirt-hosted-engine-ha-2.3.5-1.el7.noarch
ovirt-provider-ovn-driver-1.2.22-1.el7.noarch
ovirt-release43-4.3.6-1.el7.noarch
ovirt-imageio-daemon-1.5.2-0.el7.noarch
ovirt-vmconsole-1.0.7-2.el7.noarch
ovirt-imageio-common-1.5.2-0.el7.x86_64
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-vmconsole-host-1.0.7-2.el7.noarch
ovirt-host-4.3.4-1.el7.x86_64
libvirt-4.5.0-23.el7_7.1.x86_64
libvirt-daemon-4.5.0-23.el7_7.1.x86_6
qemu-kvm-ev-2.12.0-33.1.el7.x86_64
qemu-kvm-common-ev-2.12.0-33.1.el7.x86_64

On guest VM:
ovirt-guest-agent-1.0.13-1.el6.noarch
qemu-guest-agent-0.12.1.2-2.491.el6_8.3.x86_64
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LGQSLTNG37VZDJM2GYXRVHPSLWOLOKSC/


[ovirt-users] Re: Ovirt 4.3.10 Glusterfs SSD slow performance over 10GE

2020-06-29 Thread shadow emy
Thank you for the information provided.

Yep, the MTU is working OK with jumbo frames on all Gluster nodes.

In the next days, if I have time, I will try to compare oVirt 4.4 with
Gluster 7.x against oVirt 4.4 with NFS to check performance.
I might even try Ceph with oVirt 4.4.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YNJ3KF73OVQKIEUMY4FAOEIZWEZYHS4I/


[ovirt-users] Re: VMs shutdown mysteriously

2020-06-29 Thread Lev Veyde
Hi Bobby,

Can you please share the engine logs as well?
It could help to understand what happened there.

Right now, looking at the pieces of the logs you sent I couldn't spot
anything unusual.

Thanks in advance,

On Mon, Jun 29, 2020 at 10:40 PM Bobby  wrote:

> Hello,
>
> All 4 VMs on one of my oVirt cluster node shutdown for an unknown reason
> almost simultaneously.
> Please help me to find the root cause.
> Thanks.
>
> Please note the host seems doing fine and never crash or hangs and I can
> migrate VMs back to it later.
> Here is the exact timeline of all the related events combined from the
> host and the VM(s):
>
> On oVirt host:
> /var/log/vdsm/vdsm.log:
> 2020-06-25 15:25:16,944-0500 WARN  (qgapoller/3)
> [virt.periodic.VmDispatcher] could not run  at
> 0x7f4ed2f9f5f0> on ['e0257b06-28fd-4d41-83a9-adf1904d3622'] (periodic:289)
> 2020-06-25 15:25:19,203-0500 WARN  (libvirt/events) [root] File:
> /var/lib/libvirt/qemu/channels/e0257b06-28fd-4d41-83a9-adf1904d3622.ovirt-guest-agent.0
> already removed (fileutils:54)
> 2020-06-25 15:25:19,203-0500 WARN  (libvirt/events) [root] File:
> /var/lib/libvirt/qemu/channels/e0257b06-28fd-4d41-83a9-adf1904d3622.org.qemu.guest_agent.0
> already removed (fileutils:54)
>
> [root@athos log]# journalctl -u NetworkManager --since=today
> -- Logs begin at Wed 2020-05-20 22:07:33 CDT, end at Thu 2020-06-25
> 16:36:05 CDT. --
> Jun 25 15:25:18 athos NetworkManager[1600]:   [1593116718.1136]
> device (vnet0): state change: disconnected -> unmanaged (reason
> 'unmanaged', sys-iface-state: 'removed')
> Jun 25 15:25:18 athos NetworkManager[1600]:   [1593116718.1146]
> device (vnet0): released from master device SRV-VL
>
> /var/log/messages:
> Jun 25 15:25:18 athos kernel: SRV-VL: port 2(vnet0) entered disabled state
> Jun 25 15:25:18 athos NetworkManager[1600]:   [1593116718.1136]
> device (vnet0): state change: disconnected -> unmanaged (reason
> 'unmanaged', sys-iface-state: 'removed')
> Jun 25 15:25:18 athos NetworkManager[1600]:   [1593116718.1146]
> device (vnet0): released from master device SRV-VL
> Jun 25 15:25:18 athos libvirtd: 2020-06-25 20:25:18.122+: 2713: error
> : qemuMonitorIO:718 : internal error: End of file from qemu monitor
>
> /var/log/libvirt/qemu/aries.log:
> 2020-06-25T20:25:28.353975Z qemu-kvm: terminating on signal 15 from pid
> 2713 (/usr/sbin/libvirtd)
> 2020-06-25 20:25:28.584+: shutting down, reason=shutdown
>
>
> =
> On the first VM effected (same thing on others):
> /var/log/ovirt-guest-agent/ovirt-guest-agent.log:
> MainThread::INFO::2020-06-25
> 15:25:20,270::ovirt-guest-agent::104::root::Stopping oVirt guest agent
> CredServer::INFO::2020-06-25
> 15:25:20,626::CredServer::262::root::CredServer has stopped.
> MainThread::INFO::2020-06-25
> 15:25:21,150::ovirt-guest-agent::78::root::oVirt guest agent is down.
>
>
> =
> Packages version installated:
> Host OS version: CentOS 7.7.1908:
> ovirt-hosted-engine-ha-2.3.5-1.el7.noarch
> ovirt-provider-ovn-driver-1.2.22-1.el7.noarch
> ovirt-release43-4.3.6-1.el7.noarch
> ovirt-imageio-daemon-1.5.2-0.el7.noarch
> ovirt-vmconsole-1.0.7-2.el7.noarch
> ovirt-imageio-common-1.5.2-0.el7.x86_64
> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
> ovirt-vmconsole-host-1.0.7-2.el7.noarch
> ovirt-host-4.3.4-1.el7.x86_64
> libvirt-4.5.0-23.el7_7.1.x86_64
> libvirt-daemon-4.5.0-23.el7_7.1.x86_6
> qemu-kvm-ev-2.12.0-33.1.el7.x86_64
> qemu-kvm-common-ev-2.12.0-33.1.el7.x86_64
>
> On guest VM:
> ovirt-guest-agent-1.0.13-1.el6.noarch
> qemu-guest-agent-0.12.1.2-2.491.el6_8.3.x86_64
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LGQSLTNG37VZDJM2GYXRVHPSLWOLOKSC/
>


-- 

Lev Veyde

Senior Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

TRIED. TESTED. TRUSTED. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EJA44VI5GHUDEBCK4DWBXWIQMRIPIAPU/


[ovirt-users] Re: 4.4.1-rc5: Looking for correct way to configure machine=q35 instead of machine=pc for arch=x86_64

2020-06-29 Thread Glenn Marcy
Hello Sandro,

Having been wholly unsuccessful in getting even the latest master
snapshot drivers to work, I applied the following PR from GitHub to
my environment, and everything installed cleanly.

https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/pull/331

That PR was closed last week without being applied.  I have added
some comments describing my experiences with this approach for you
to consider.  I am happy to try any alternative solutions, but so
far this is the only one that has worked on my server.
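For reference, the override attempt described in the quoted message below can be reproduced as follows (the file name is hypothetical; whether the q35 value actually survives into vm.conf is exactly the open issue in this thread):

```shell
# Build the --config-append file with the emulated-machine override
# (the key/value pair is the one quoted later in this thread).
cat > he-q35.conf <<'EOF'
[environment:default]
OVEHOSTED_VM/emulatedMachine=str:q35
EOF
cat he-q35.conf
# Then: ovirt-hosted-engine-setup --config-append=he-q35.conf
```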

Regards,

Glenn Marcy

Sandro Bonazzola  wrote on 06/22/2020 04:49:13 AM:

> From: Sandro Bonazzola 
> To: Glenn Marcy , Asaf Rachmani 
> , Evgeny Slutsky 
> Cc: users 
> Date: 06/22/2020 04:52 AM
> Subject: [EXTERNAL] [ovirt-users] Re: 4.4.1-rc5: Looking for correct
> way to configure machine=q35 instead of machine=pc for arch=x86_64
> 
> +Asaf Rachmani , +Evgeny Slutsky can you please investigate?
> 
> On Mon, Jun 22, 2020 at 08:07 Glenn Marcy  wrote:
> Hello, I am hoping for some insight from folks with more hosted 
> engine install experience.
> 
> When I try to install the hosted engine using the RC5 dist I get the
> following error during the startup
> of the HostedEngine VM:
> 
>   XML error: The PCI controller with index='0' must be model='pci-
> root' for this machine type, but model='pcie-root' was found instead
> 
> This is due to the HE Domain XML description using machine="pc-
> i440fx-rhel7.6.0".
> 
> I've tried to override the default of 'pc' from ovirt-ansible-
> hosted-engine-setup/defaults/main.yml:
> 
>   he_emulated_machine: pc
> 
> by passing to the ovirt-hosted-engine-setup script a --config-
> append=file parameter where file contains:
> 
>   [environment:default]
>   OVEHOSTED_VM/emulatedMachine=str:q35
> 
> When the "Create ovirt-hosted-engine-ha run directory" step finishes
> the vm.conf file contains:
> 
> cpuType=IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear
> emulatedMachine=q35
> 
> At the "Start ovirt-ha-broker service on the host" step that file is
> removed.  When that file appears
> again during the "Check engine VM health" step it now contains:
> 
> cpuType=IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear
> emulatedMachine=pc-i440fx-rhel7.6.0
> 
> After that the install fails with the metadata from "virsh dumpxml 
> HostedEngine" containing:
> 
> 1
> XML error: The PCI controller with 
> index='0' must be model='pci-root' for this machine type, but 
> model='pcie-root' was found instead
> 
> Interestingly enough, the HostedEngineLocal VM that is running the 
> appliance image has the value I need:
> 
>   hvm
> 
> Does anyone on the list have any experience with where this needs to
> be overridden?  Somewhere in the
> hosted engine setup or do I need to do something at a deeper level 
> like vdsm or libvirt?
> 
> Help much appreciated !
> 
> Thanks,
> Glenn
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: https://www.ovirt.org/community/about/
> community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/
> users@ovirt.org/message/2S5NKX4L7VUYGMEAPKT553IBFAYZZESD/
> 
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA
> sbona...@redhat.com   
> 
> Red Hat respects your work life balance. Therefore there is no need 
> to answer this email out of your office hours.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/IM3EQSBHBTORQZM5MAHPOWKYUXIKZCHQ/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/user