[ovirt-users] Strange drive performance over 10Gb

2017-11-30 Thread Wesley Stewart
I was curious if anyone else has seen this or had any suggestions.

I have recently begun playing around with my two servers (a FreeNAS box and
an oVirt box) and their 10Gb Ethernet ports.

I can access the FreeNAS SMB share over the 10Gb port without issue and I
have been playing around with its capabilities.  After finding out that my
Linux RAID (mdadm) mirror has horrible write performance, I decided to plug
in an NVMe drive that I had lying around and check out its performance.

*For my first test*, I added the NVMe drive as a passthrough device to a
Windows guest and was able to transfer to and from the FreeNAS box without
issue.  Speeds were typically ~350-400 MB/s but could drop down to 250 MB/s
or so, and would top out around 525 MB/s, pretty slick!

*For my second test*, I decided to mount the NVMe drive on the CentOS oVirt
host and make it a local datastore.  I migrated my Windows guest to it,
tested what sort of transfer speeds I got, and saw some weird results...

Writing TO the NAS worked about the same.  Perhaps a little slower, but it
at least held a steady 250-300 MB/s.

Writing to the Windows Guest had a very "Fast and then slow, fast and then
slow" type of throughput.  I took a few screenshots:

(Writing TO the NAS was fairly consistent)

https://i.imgur.com/jWNNvfp.png


(Writing TO the Windows Guest on NVMe storage)
Sometimes these hit the *low 10-20 MB/s* during the transfer.
https://i.imgur.com/aizG6n0.png

https://i.imgur.com/AjRpR0K.png
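
My next step will probably be to benchmark the drive directly on the CentOS
host, to rule out the NVMe itself versus the guest I/O path. A minimal
sketch (assuming the local datastore is mounted at /data -- the path is
hypothetical):

# sequential write test that bypasses the page cache
fio --name=seqwrite --filename=/data/fio.test --rw=write --bs=1M \
    --size=4G --direct=1 --ioengine=libaio --iodepth=16

A consistently fast result here would point at the guest or network path
rather than the drive.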
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted engine and about more than 1000 vms

2017-11-30 Thread 董青龙
Hi, all
We want to deploy a hosted-engine environment which will manage
more than 1000 VMs. How many vCPUs and how much memory should we give the
hosted engine? And are there any other things we should pay attention to?
Hope someone can help. Thanks!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Convert local storage domain to shared

2017-11-30 Thread Gianluca Cecchi
On Wed, Nov 29, 2017 at 6:49 PM, Demeter Tibor  wrote:

> Hi,
>
> Yes, I understand what you are talking about. It isn't too safe... :(
> We have terabytes under that VM.
> I could take a downtime of at most eight hours (maybe), but meanwhile I
> have to copy 3 TB of vdisks. First I need to export (over a gigabit NIC) to
> the export domain, and then back over a 10GbE NIC.
> I don't know whether that will be enough.
>
> Thanks
>
> Tibor


Hi Tibor,
I'm short of time these days, but I have to admit your problem was so
intriguing that I couldn't resist, and I decided to try to reproduce it.
It all happened on my laptop with Fedora 26 (time to upgrade? not enough
time... ;-)

So this is the test environment, with all VMs inside virt-manager:

1) Create 3.5.6 environment

- CentOS 6.6 VM (this was the iso I had at hand...) with hostname
c6engine35, where I installed oVirt 3.5.6 as the engine
- CentOS 6.6 VM with hostname c6rhv35 (sorry for the rhv in the name, but
these weeks I'm also working on it, so it came out quite naturally...) where
I installed the hypervisor from the 3.5.6 repo

I created a local DC on top of a directory of the hypervisor (/ldomain)
I created a CentOS 6.6 VM in this storage domain with a 4 GB disk

2) Detach the local domain from DC

HERE YOUR THEORETICAL DOWNTIME BEGINS

To do so, I powered off the test VM and created a further, fake local domain
based on another directory of c6rhv35.
Then I put the local domain that is to be imported into 4.1 into maintenance.
The fake local domain becomes the master.
Detach the local domain.


3) Create the 4.1.7 environment (in your case it is already there...)
- CentOS 7.4 VM with hostname c7engine41, where I installed oVirt 4.1.7 as
the engine
- CentOS 7.4 VM with hostname c7rhv41, where I installed the hypervisor from
the 4.1.7 repo

I created a shared DC NFSDC with a cluster NFSCL.
To speed things up, I exported a directory from the engine and used it to
create an NFS storage domain (DATANFS) for the 4.1 host, then activated it.
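
For completeness, a minimal sketch of that engine-side export (the directory
name is made up; uid/gid 36 is vdsm:kvm, which the hosts use to access NFS
domains):

mkdir -p /nfsdata
chown 36:36 /nfsdata
echo '/nfsdata c7rhv41.localdomain.local(rw)' >> /etc/exports
systemctl restart nfs-server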

4) Shut down the 3.5 environment and start/configure the 3.5 hypervisor to
export its previously-local storage domain directory

Start c6rhv35 in single user mode, then disable the services that would
otherwise claim the storage at boot:

for svc in ebtables ip6tables iptables libvirt-guests libvirtd momd numad \
           sanlock supervdsmd vdsmd wdmd; do
    chkconfig $svc off
done

Reboot, then create an entry in /etc/exports:

/ldomain c7rhv41.localdomain.local(rw)

service nfs start
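
One caveat not spelled out above: the 4.1 host will access the export as
vdsm:kvm (uid/gid 36). Since /ldomain was already an oVirt storage domain,
its ownership should already match, but it is worth a quick check:

ls -ld /ldomain    # expect ownership 36:36 (vdsm:kvm)
exportfs -v        # confirm /ldomain is actually being exported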

set up the /etc/hosts files of all the servers involved accordingly, so that
they can all resolve each other...
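
For example (all IP addresses below are made up; only the hostnames come
from this walkthrough):

192.168.100.10  c6engine35.localdomain.local  c6engine35
192.168.100.11  c6rhv35.localdomain.local     c6rhv35
192.168.100.20  c7engine41.localdomain.local  c7engine41
192.168.100.21  c7rhv41.localdomain.local     c7rhv41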

5) Import the domain in 4.1
Select Storage -> Import domain and put

c6rhv35.localdomain.local:/ldomain

You will get a warning about it being already part of another DC:
https://drive.google.com/file/d/1HjFZhW6fCkasPak0jQH5k49Bdsg1NLSN/view?usp=sharing

Approve the operation and you arrive here:
https://drive.google.com/file/d/10d1ea0TbPCZhoaAf7br5IVqnvZx0LzSu/view?usp=sharing

Activate the domain and you arrive here:
https://drive.google.com/file/d/1-4sMfVVj5WyaglPI8zhWUsdJqkVkxzAT/view?usp=sharing

Now you can proceed with importing your VMs; in my case, only the testvm.
Select the imported storage domain and then the "VM Import" tab; select the
VM and click "Import":

https://drive.google.com/file/d/18yjPvoHjTw6mOhUrlHJ2RpsdPph4qBxL/view?usp=sharing

https://drive.google.com/file/d/1CrCzVUYC3vI4aQ2ly83b3uAQ3QQh1xhm/view?usp=sharing

Note that this is an immediate operation; it does not depend on the size of
the VM's disks.
At the end you get your VM imported; here are the details:

https://drive.google.com/file/d/1W00TpIKAQ7cWUit_tLIQkm30wj5j56AN/view?usp=sharing
https://drive.google.com/file/d/1qq7sZV2vwapRRdjbi21Z43OOBM0m2NuY/view?usp=sharing

While you import, you can gradually start your VMs, so that your
downtime becomes partial rather than total:
https://drive.google.com/file/d/1kwrzSJBXISC0wBTtZIpdh3yJdA0g3G0A/view?usp=sharing

When you have started all your imported VMs, YOUR THEORETICAL DOWNTIME ENDS

Your VMs are now running on your old local storage, exported from your old
3.5 host to your new 4.1 hosts via NFS.
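
Before relying on it, you can quickly verify from the 4.1 host that the
export is reachable; a small sketch:

showmount -e c6rhv35.localdomain.local    # should list /ldomain
mount -t nfs c6rhv35.localdomain.local:/ldomain /mnt && umount /mnt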

You can now execute live storage migration of your disks one by one to the
desired 4.1 storage domain:
https://drive.google.com/file/d/1p6OOgDBbOFFGgy3uuWT-V8VnCMaxk4iP/view?usp=sharing

and at the end of the move
https://drive.google.com/file/d/1dUuKQQxI0r4Bhz-N0TmRcsnjwQl1bwU6/view?usp=sharing

Obviously there are many caveats in a real environment, such as:

- the actual origin and target oVirt versions could differ from mine, and
the behavior could be different
- network visibility between the two oVirt environments
- the layout of the logical networks of the two oVirt environments: when you
import, you could need to change the logical network and could have
conflicting MACs; in my test scenario everything was on ovirtmgmt with the
same MAC range
- live storage migration of TBs of disks... not tested yet (by me at
least)
- other things that don't come to mind right now

HIH,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fedora support (was: [ANN] oVirt 4.2.0 Second Beta Release is now available for testing)

2017-11-30 Thread Blaster

Thank you.

The mention of Fedora should then be removed from the release notes,
maybe even with a statement that it's not recommended?


On 11/30/2017 4:21 AM, Yedidyah Bar David wrote:

On Wed, Nov 29, 2017 at 7:29 PM, Blaster  wrote:

Is Fedora not supported anymore?

I've read the release notes for the 4.2r2 beta and 4.1.7, they mention
specific versions of RHEL and CentOS, but only mention Fedora by name, with
no specific version information.

We currently have too many problems with Fedora to call it even 'Technical
Preview', as was done in the past.

You can still use the nightly snapshots, and most things work, more-or-less,
with some issues having known workarounds. See e.g.:

https://bugzilla.redhat.com/showdependencytree.cgi?id=1460625&hide_resolved=1

And also:

http://lists.ovirt.org/pipermail/devel/2017-August/030990.html

(not sure that one is still relevant for Fedora 27, didn't check recently).


On 11/15/2017 9:17 AM, Sandro Bonazzola wrote:


This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 7.4 or later

* CentOS Linux (or similar) 7.4 or later


This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:

* Red Hat Enterprise Linux 7.4 or later

* CentOS Linux (or similar) 7.4 or later

* oVirt Node 4.2 (available for x86_64 only)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Fedora support (was: [ANN] oVirt 4.2.0 Second Beta Release is now available for testing)

2017-11-30 Thread Yedidyah Bar David
On Wed, Nov 29, 2017 at 7:29 PM, Blaster  wrote:
> Is Fedora not supported anymore?
>
> I've read the release notes for the 4.2r2 beta and 4.1.7, they mention
> specific versions of RHEL and CentOS, but only mention Fedora by name, with
> no specific version information.

We currently have too many problems with Fedora to call it even 'Technical
Preview', as was done in the past.

You can still use the nightly snapshots, and most things work, more-or-less,
with some issues having known workarounds. See e.g.:

https://bugzilla.redhat.com/showdependencytree.cgi?id=1460625&hide_resolved=1

And also:

http://lists.ovirt.org/pipermail/devel/2017-August/030990.html

(not sure that one is still relevant for Fedora 27, didn't check recently).

>
> On 11/15/2017 9:17 AM, Sandro Bonazzola wrote:
>
>
> This release is available now on x86_64 architecture for:
>
> * Red Hat Enterprise Linux 7.4 or later
>
> * CentOS Linux (or similar) 7.4 or later
>
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
> for:
>
> * Red Hat Enterprise Linux 7.4 or later
>
> * CentOS Linux (or similar) 7.4 or later
>
> * oVirt Node 4.2 (available for x86_64 only)
>



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-node-ng-update

2017-11-30 Thread Nathanaël Blanchet

Thanks for replying.


On 30/11/2017 at 10:19, Yuval Turgeman wrote:

2 more questions -

1.  Which ovirt repos are enabled on your node ?
centos-opstools-release/x86_64            CentOS-7 - OpsTools - release                       263
ovirt-4.1/7                               Latest oVirt 4.1 Release                          1 997
ovirt-4.1-centos-gluster38/x86_64         CentOS-7 - Gluster 3.8                               30
ovirt-4.1-epel/x86_64                     Extra Packages for Enterprise Linux 7 - x86_64   12 131
ovirt-4.1-patternfly1-noarch-epel/x86_64  Copr repo for patternfly1 owned by patternfly         2
ovirt-centos-ovirt41/x86_64               CentOS-7 - oVirt 4.1                                405
sac-gdeploy/x86_64                        Copr repo for gdeploy owned by sac                    4
virtio-win-stable                         virtio-win builds roughly matching what was shipped in



2.  Can you share the output from `rpm -qa | grep ovirt-node-ng` ?

[root@mascota ~]# rpm -qva | grep ovirt-node
ovirt-node-ng-nodectl-4.1.5-0.20171107.0.el7.noarch
ovirt-node-ng-image-update-placeholder-4.1.7-1.el7.centos.noarch



Thanks,
Yuval.

On Thu, Nov 30, 2017 at 11:02 AM, Nathanaël Blanchet wrote:




On 30/11/2017 at 08:58, Yuval Turgeman wrote:

Hi,

Which version are you using ?

4.1.7



Thanks,
Yuval.

On Wed, Nov 29, 2017 at 4:17 PM, Nathanaël Blanchet wrote:

Hi all,

I didn't find any explicit howto about upgrading ovirt-node,
but I may be mistaken...

However, here is what I guess: after installing a fresh
ovirt-node-ng iso, the engine's upgrade check finds an
available update, "ovirt-node-ng-image-update".

But the available update is the same as the current one.  If
I choose to install it, it succeeds, but after rebooting,
ovirt-node-ng-image-update is still not part of the installed
rpms, so the engine tells me an update of ovirt-node is still
available.

-- 
Nathanaël Blanchet


Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr






-- 
Nathanaël Blanchet


Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr   








--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-node-ng-update

2017-11-30 Thread Yuval Turgeman
2 more questions -

1.  Which ovirt repos are enabled on your node ?
2.  Can you share the output from `rpm -qa | grep ovirt-node-ng` ?

Thanks,
Yuval.

On Thu, Nov 30, 2017 at 11:02 AM, Nathanaël Blanchet 
wrote:

>
>
> On 30/11/2017 at 08:58, Yuval Turgeman wrote:
>
> Hi,
>
> Which version are you using ?
>
> 4.1.7
>
>
> Thanks,
> Yuval.
>
> On Wed, Nov 29, 2017 at 4:17 PM, Nathanaël Blanchet 
> wrote:
>
>> Hi all,
>>
>> I didn't find any explicit howto about upgrading ovirt-node, but I may
>> be mistaken...
>>
>> However, here is what I guess: after installing a fresh ovirt-node-ng
>> iso, the engine's upgrade check finds an available update,
>> "ovirt-node-ng-image-update".
>>
>> But the available update is the same as the current one.  If I choose
>> to install it, it succeeds, but after rebooting, ovirt-node-ng-image-update
>> is still not part of the installed rpms, so the engine tells me an update
>> of ovirt-node is still available.
>>
>> --
>> Nathanaël Blanchet
>>
>> Supervision réseau
>> Pôle Infrastrutures Informatiques
>> 227 avenue Professeur-Jean-Louis-Viala
>> 
>> 34193 MONTPELLIER CEDEX 5
>> Tél. 33 (0)4 67 54 84 55
>> Fax  33 (0)4 67 54 84 14
>> blanc...@abes.fr
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Configuring LDAP backend in non interactive mode

2017-11-30 Thread Luca 'remix_tj' Lorenzetto
Hello,

I'm looking for some examples of how to configure LDAP authn and authz
in non-interactive mode. I'd like to add some authentication sources
immediately after the hosted-engine deployment.

I've seen that ovirt-engine-extension-aaa-ldap-setup has a
config-append option. The help refers to an answer file, but I can't find
out how to generate it. All the servers I'm deploying should use the same
configuration to access 3 different Active Directory servers.

Does anyone have experience with this?
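
What I have pieced together so far (untested, so treat it as a sketch): the
tool is otopi-based, so the usual pattern would be to run it interactively
once, keep the answer file it generates (like engine-setup, it should report
the file's location when it finishes), and then replay that file on the
other servers. The path below is my assumption:

# first server: interactive run, note the generated answer file
ovirt-engine-extension-aaa-ldap-setup

# remaining servers: replay the captured answers non-interactively
ovirt-engine-extension-aaa-ldap-setup \
    --config-append=/root/aaa-ldap-answers.conf

With three Active Directory domains, this would presumably mean three
profiles, i.e. three setup runs (and answer files) per server.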

Luca

-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-node-ng-update

2017-11-30 Thread Nathanaël Blanchet



On 30/11/2017 at 08:58, Yuval Turgeman wrote:

Hi,

Which version are you using ?

4.1.7


Thanks,
Yuval.

On Wed, Nov 29, 2017 at 4:17 PM, Nathanaël Blanchet wrote:


Hi all,

I didn't find any explicit howto about upgrading ovirt-node, but
I may be mistaken...

However, here is what I guess: after installing a fresh
ovirt-node-ng iso, the engine's upgrade check finds an available
update, "ovirt-node-ng-image-update".

But the available update is the same as the current one. If I
choose to install it, it succeeds, but after rebooting,
ovirt-node-ng-image-update is still not part of the installed rpms,
so the engine tells me an update of ovirt-node is still available.

-- 
Nathanaël Blanchet


Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr






--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node ng upgrade failed

2017-11-30 Thread Yuval Turgeman
Looks like it, yes - we try to add setfiles_t to permissive, because we
assume selinux is on, and if it's disabled, semanage fails with the error
you mentioned.  Can you open a bug on this ?

If you would like to fix the system, you will need to clean the unused LVs,
remove the relevant boot entries from grub (if they exist) and
/boot/ovirt-node-ng-4.1.7-0.20171108.0+1 (if it exists), then reinstall the
rpm.
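
A rough sketch of that cleanup, assuming the default "onn" VG and LV/boot
names matching the 4.1.7 image above -- please check every name against your
own lvs/grubby output before removing anything:

lvs -a onn                                         # find the leftover 4.1.7 layer LVs
lvremove onn/ovirt-node-ng-4.1.7-0.20171108.0+1    # LV name assumed from the boot path
grubby --info=ALL                                  # locate any matching boot entry to drop
rm -rf /boot/ovirt-node-ng-4.1.7-0.20171108.0+1
yum reinstall ovirt-node-ng-image-update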


On Thu, Nov 30, 2017 at 10:16 AM, Kilian Ries  wrote:

> Yes, selinux is disabled via /etc/selinux/config; is that the problem? :/
> --
> *From:* Yuval Turgeman
> *Sent:* Thursday, November 30, 2017 09:13:34
> *To:* Kilian Ries
> *Cc:* users
>
> *Subject:* Re: [ovirt-users] oVirt Node ng upgrade failed
>
> Kilian, did you disable selinux by any chance? (selinux=0 on boot)?
>
> On Thu, Nov 30, 2017 at 9:57 AM, Yuval Turgeman  wrote:
>
>> Looks like selinux is broken on your machine for some reason, can you
>> share /etc/selinux ?
>>
>> Thanks,
>> Yuval.
>>
>> On Tue, Nov 28, 2017 at 6:31 PM, Kilian Ries  wrote:
>>
>>> @Yuval Turgeman
>>>
>>>
>>> ###
>>>
>>>
>>> [17:27:10][root@vm5:~]$semanage permissive -a setfiles_t
>>>
>>> SELinux:  Could not downgrade policy file 
>>> /etc/selinux/targeted/policy/policy.30,
>>> searching for an older version.
>>>
>>> SELinux:  Could not open policy file <= 
>>> /etc/selinux/targeted/policy/policy.30:
>>>  No such file or directory
>>>
>>> /sbin/load_policy:  Can't load policy:  No such file or directory
>>>
>>> libsemanage.semanage_reload_policy: load_policy returned error code 2.
>>> (No such file or directory).
>>>
>>> SELinux:  Could not downgrade policy file 
>>> /etc/selinux/targeted/policy/policy.30,
>>> searching for an older version.
>>>
>>> SELinux:  Could not open policy file <= 
>>> /etc/selinux/targeted/policy/policy.30:
>>>  No such file or directory
>>>
>>> /sbin/load_policy:  Can't load policy:  No such file or directory
>>>
>>> libsemanage.semanage_reload_policy: load_policy returned error code 2.
>>> (No such file or directory).
>>>
>>> OSError: No such file or directory
>>>
>>>
>>> ###
>>>
>>>
>>> @Ryan Barry
>>>
>>>
>>> Manual yum upgrade finished without any error but imgbased.log still
>>> shows me the following:
>>>
>>>
>>> ###
>>>
>>>
>>> 2017-11-28 17:25:28,372 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Creating /home as
>>> {'attach': True, 'size': '1G'}
>>>
>>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling binary: (['vgs',
>>> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'stderr':
>>> }
>>>
>>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling: (['vgs',
>>> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'close_fds':
>>> True, 'stderr': }
>>>
>>> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Returned: onn/home
>>>
>>>   onn/tmp
>>>
>>>   onn/var_log
>>>
>>>   onn/var_log_audit
>>>
>>> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Calling binary: (['umount',
>>> '-l', '/etc'],) {}
>>>
>>> 2017-11-28 17:25:28,534 [DEBUG] (MainThread) Calling: (['umount', '-l',
>>> '/etc'],) {'close_fds': True, 'stderr': -2}
>>>
>>> 2017-11-28 17:25:28,539 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling binary: (['umount',
>>> '-l', u'/tmp/mnt.tuHU8'],) {}
>>>
>>> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling: (['umount', '-l',
>>> u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}
>>>
>>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling binary: (['rmdir',
>>> u'/tmp/mnt.tuHU8'],) {}
>>>
>>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling: (['rmdir',
>>> u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}
>>>
>>> 2017-11-28 17:25:28,640 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:28,641 [ERROR] (MainThread) Failed to migrate etc
>>>
>>> Traceback (most recent call last):
>>>
>>>   File 
>>> "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py",
>>> line 109, in on_new_layer
>>>
>>> check_nist_layout(imgbase, new_lv)
>>>
>>>   File 
>>> "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py",
>>> line 179, in check_nist_layout
>>>
>>> v.create(t, paths[t]["size"], paths[t]["attach"])
>>>
>>>   File 
>>> "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/volume.py",
>>> line 48, in create
>>>
>>> "Path is already a volume: %s" % where
>>>
>>> AssertionError: Path is already a volume: /home
>>>
>>> 2017-11-28 17:25:28,642 [DEBUG] (MainThread) Calling binary: (['umount',
>>> '-l', u'/tmp/mnt.bEW2k'],) {}
>>>
>>> 2017-11-28 17:25:28,642 [DEBUG] (MainThread) Calling: (['umount', '-l',
>>> u'/tmp/mnt.bEW2k'],) {'close_fds': True, 'stderr': -2}
>>>
>>> 2017-11-28 17:25:29,061 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:29,061 [DEBUG] 

Re: [ovirt-users] oVirt Node ng upgrade failed

2017-11-30 Thread Kilian Ries
Yes, selinux is disabled via /etc/selinux/config; is that the problem? :/


From: Yuval Turgeman
Sent: Thursday, November 30, 2017 09:13:34
To: Kilian Ries
Cc: users
Subject: Re: [ovirt-users] oVirt Node ng upgrade failed

Kilian, did you disable selinux by any chance? (selinux=0 on boot)?

On Thu, Nov 30, 2017 at 9:57 AM, Yuval Turgeman wrote:
Looks like selinux is broken on your machine for some reason, can you share 
/etc/selinux ?

Thanks,
Yuval.

On Tue, Nov 28, 2017 at 6:31 PM, Kilian Ries wrote:

@Yuval Turgeman


###


[17:27:10][root@vm5:~]$semanage permissive -a setfiles_t

SELinux:  Could not downgrade policy file /etc/selinux/targeted/policy/policy.30, searching for an older version.

SELinux:  Could not open policy file <= /etc/selinux/targeted/policy/policy.30:  No such file or directory

/sbin/load_policy:  Can't load policy:  No such file or directory

libsemanage.semanage_reload_policy: load_policy returned error code 2. (No such file or directory).

SELinux:  Could not downgrade policy file /etc/selinux/targeted/policy/policy.30, searching for an older version.

SELinux:  Could not open policy file <= /etc/selinux/targeted/policy/policy.30:  No such file or directory

/sbin/load_policy:  Can't load policy:  No such file or directory

libsemanage.semanage_reload_policy: load_policy returned error code 2. (No such file or directory).

OSError: No such file or directory


###


@Ryan Barry


Manual yum upgrade finished without any error but imgbased.log still shows me 
the following:


###


2017-11-28 17:25:28,372 [DEBUG] (MainThread) Returned:

2017-11-28 17:25:28,434 [DEBUG] (MainThread) Creating /home as {'attach': True, 
'size': '1G'}

2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling binary: (['vgs', 
'--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'stderr': }

2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', 
'@imgbased:volume', '-o', 'lv_full_name'],) {'close_fds': True, 'stderr': }

2017-11-28 17:25:28,533 [DEBUG] (MainThread) Returned: onn/home

  onn/tmp

  onn/var_log

  onn/var_log_audit

2017-11-28 17:25:28,533 [DEBUG] (MainThread) Calling binary: (['umount', '-l', 
'/etc'],) {}

2017-11-28 17:25:28,534 [DEBUG] (MainThread) Calling: (['umount', '-l', 
'/etc'],) {'close_fds': True, 'stderr': -2}

2017-11-28 17:25:28,539 [DEBUG] (MainThread) Returned:

2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling binary: (['umount', '-l', 
u'/tmp/mnt.tuHU8'],) {}

2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling: (['umount', '-l', 
u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}

2017-11-28 17:25:28,635 [DEBUG] (MainThread) Returned:

2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling binary: (['rmdir', 
u'/tmp/mnt.tuHU8'],) {}

2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling: (['rmdir', 
u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}

2017-11-28 17:25:28,640 [DEBUG] (MainThread) Returned:

2017-11-28 17:25:28,641 [ERROR] (MainThread) Failed to migrate etc

Traceback (most recent call last):

  File 
"/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py",
 line 109, in on_new_layer

check_nist_layout(imgbase, new_lv)

  File 
"/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py",
 line 179, in check_nist_layout

v.create(t, paths[t]["size"], paths[t]["attach"])

  File 
"/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/volume.py", line 
48, in create

"Path is already a volume: %s" % where

AssertionError: Path is already a volume: /home

2017-11-28 17:25:28,642 [DEBUG] (MainThread) Calling binary: (['umount', '-l', 
u'/tmp/mnt.bEW2k'],) {}

2017-11-28 17:25:28,642 [DEBUG] (MainThread) Calling: (['umount', '-l', 
u'/tmp/mnt.bEW2k'],) {'close_fds': True, 'stderr': -2}

2017-11-28 17:25:29,061 [DEBUG] (MainThread) Returned:

2017-11-28 17:25:29,061 [DEBUG] (MainThread) Calling binary: (['rmdir', 
u'/tmp/mnt.bEW2k'],) {}

2017-11-28 17:25:29,061 [DEBUG] (MainThread) Calling: (['rmdir', 
u'/tmp/mnt.bEW2k'],) {'close_fds': True, 'stderr': -2}

2017-11-28 17:25:29,067 [DEBUG] (MainThread) Returned:

2017-11-28 17:25:29,067 [DEBUG] (MainThread) Calling binary: (['umount', '-l', 
u'/tmp/mnt.UB5Yg'],) {}

2017-11-28 17:25:29,067 [DEBUG] (MainThread) Calling: (['umount', '-l', 
u'/tmp/mnt.UB5Yg'],) {'close_fds': True, 'stderr': -2}

2017-11-28 17:25:29,625 [DEBUG] (MainThread) Returned:

2017-11-28 17:25:29,625 [DEBUG] (MainThread) Calling binary: (['rmdir', 
u'/tmp/mnt.UB5Yg'],) {}

2017-11-28 17:25:29,626 [DEBUG] (MainThread) Calling: (['rmdir', 
u'/tmp/mnt.UB5Yg'],) {'close_fds': True, 'stderr': -2}

2017-11-28 17:25:29,631 [DEBUG] (MainThread) Returned:

Traceback (most recent call last):

  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main

"__main__", 

Re: [ovirt-users] oVirt Node ng upgrade failed

2017-11-30 Thread Yuval Turgeman
Kilian, did you disable selinux by any chance? (selinux=0 on boot)?

On Thu, Nov 30, 2017 at 9:57 AM, Yuval Turgeman  wrote:

> Looks like selinux is broken on your machine for some reason, can you
> share /etc/selinux ?
>
> Thanks,
> Yuval.
>
> On Tue, Nov 28, 2017 at 6:31 PM, Kilian Ries  wrote:
>
>> @Yuval Turgeman
>>
>>
>> ###
>>
>>
>> [17:27:10][root@vm5:~]$semanage permissive -a setfiles_t
>>
>> SELinux:  Could not downgrade policy file 
>> /etc/selinux/targeted/policy/policy.30,
>> searching for an older version.
>>
>> SELinux:  Could not open policy file <= 
>> /etc/selinux/targeted/policy/policy.30:
>>  No such file or directory
>>
>> /sbin/load_policy:  Can't load policy:  No such file or directory
>>
>> libsemanage.semanage_reload_policy: load_policy returned error code 2.
>> (No such file or directory).
>>
>> SELinux:  Could not downgrade policy file 
>> /etc/selinux/targeted/policy/policy.30,
>> searching for an older version.
>>
>> SELinux:  Could not open policy file <= 
>> /etc/selinux/targeted/policy/policy.30:
>>  No such file or directory
>>
>> /sbin/load_policy:  Can't load policy:  No such file or directory
>>
>> libsemanage.semanage_reload_policy: load_policy returned error code 2.
>> (No such file or directory).
>>
>> OSError: No such file or directory
>>
>>
>> ###
>>
>>
>> @Ryan Barry
>>
>>
>> Manual yum upgrade finished without any error but imgbased.log still
>> shows me the following:
>>
>>
>> ###
>>
>>
>> 2017-11-28 17:25:28,372 [DEBUG] (MainThread) Returned:
>>
>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Creating /home as {'attach':
>> True, 'size': '1G'}
>>
>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling binary: (['vgs',
>> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'stderr':
>> }
>>
>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling: (['vgs',
>> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'close_fds':
>> True, 'stderr': }
>>
>> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Returned: onn/home
>>
>>   onn/tmp
>>
>>   onn/var_log
>>
>>   onn/var_log_audit
>>
>> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Calling binary: (['umount',
>> '-l', '/etc'],) {}
>>
>> 2017-11-28 17:25:28,534 [DEBUG] (MainThread) Calling: (['umount', '-l',
>> '/etc'],) {'close_fds': True, 'stderr': -2}
>>
>> 2017-11-28 17:25:28,539 [DEBUG] (MainThread) Returned:
>>
>> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling binary: (['umount',
>> '-l', u'/tmp/mnt.tuHU8'],) {}
>>
>> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling: (['umount', '-l',
>> u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}
>>
>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Returned:
>>
>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling binary: (['rmdir',
>> u'/tmp/mnt.tuHU8'],) {}
>>
>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling: (['rmdir',
>> u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}
>>
>> 2017-11-28 17:25:28,640 [DEBUG] (MainThread) Returned:
>>
>> 2017-11-28 17:25:28,641 [ERROR] (MainThread) Failed to migrate etc
>>
>> Traceback (most recent call last):
>>
>>   File 
>> "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py",
>> line 109, in on_new_layer
>>
>> check_nist_layout(imgbase, new_lv)
>>
>>   File 
>> "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py",
>> line 179, in check_nist_layout
>>
>> v.create(t, paths[t]["size"], paths[t]["attach"])
>>
>>   File 
>> "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/volume.py",
>> line 48, in create
>>
>> "Path is already a volume: %s" % where
>>
>> AssertionError: Path is already a volume: /home
>>
>> 2017-11-28 17:25:28,642 [DEBUG] (MainThread) Calling binary: (['umount',
>> '-l', u'/tmp/mnt.bEW2k'],) {}
>>
>> 2017-11-28 17:25:28,642 [DEBUG] (MainThread) Calling: (['umount', '-l',
>> u'/tmp/mnt.bEW2k'],) {'close_fds': True, 'stderr': -2}
>>
>> 2017-11-28 17:25:29,061 [DEBUG] (MainThread) Returned:
>>
>> 2017-11-28 17:25:29,061 [DEBUG] (MainThread) Calling binary: (['rmdir',
>> u'/tmp/mnt.bEW2k'],) {}
>>
>> 2017-11-28 17:25:29,061 [DEBUG] (MainThread) Calling: (['rmdir',
>> u'/tmp/mnt.bEW2k'],) {'close_fds': True, 'stderr': -2}
>>
>> 2017-11-28 17:25:29,067 [DEBUG] (MainThread) Returned:
>>
>> 2017-11-28 17:25:29,067 [DEBUG] (MainThread) Calling binary: (['umount',
>> '-l', u'/tmp/mnt.UB5Yg'],) {}
>>
>> 2017-11-28 17:25:29,067 [DEBUG] (MainThread) Calling: (['umount', '-l',
>> u'/tmp/mnt.UB5Yg'],) {'close_fds': True, 'stderr': -2}
>>
>> 2017-11-28 17:25:29,625 [DEBUG] (MainThread) Returned:
>>
>> 2017-11-28 17:25:29,625 [DEBUG] (MainThread) Calling binary: (['rmdir',
>> u'/tmp/mnt.UB5Yg'],) {}
>>
>> 2017-11-28 17:25:29,626 [DEBUG] (MainThread) Calling: (['rmdir',
>> u'/tmp/mnt.UB5Yg'],) {'close_fds': True, 'stderr': -2}
>>
>> 2017-11-28 17:25:29,631 [DEBUG] (MainThread) Returned:
>>
>> Traceback (most recent call last):
>>
>>   File "/usr/lib64/python2.7/runpy.py",