Re: [ovirt-users] oVirt node and NFS

2017-11-22 Thread Pavol Brilla
Hi

If the issue really is only NFS being blocked by firewalld on the host (i.e. you have
correct permissions and ownership on the storage directory), the solution should be
as simple as:

firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload

But be careful: the availability of such storage depends on this host.
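
To double-check afterwards, a quick verification sketch (plain firewall-cmd syntax, nothing oVirt-specific):

# list what the active zone currently allows
firewall-cmd --list-services
# or query the individual services
firewall-cmd --query-service=nfs
firewall-cmd --query-service=mountd
firewall-cmd --query-service=rpc-bind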

On Tue, Nov 21, 2017 at 5:38 PM, Shani Leviim  wrote:

> Hi Magnus,
> Have you tried the troubleshooting-nfs-storage-issues page?
> https://www.ovirt.org/documentation/how-to/troubleshooting/
> troubleshooting-nfs-storage-issues/
>
>
> *Regards,*
>
> *Shani Leviim*
>
> On Tue, Nov 21, 2017 at 12:26 PM, Magnus Isaksson  wrote:
>
>> Anyone?
>>
>>
>>
>> //Magnus
>>
>>
>>
>> *From:* Magnus Isaksson
>> *Sent:* den 20 november 2017 16:01
>> *To:* 'users@ovirt.org' 
>> *Subject:* oVirt node and NFS
>>
>>
>>
>> Hi,
>>
>>
>>
>> This is probably an easy thing, but I can’t seem to find the solution.
>>
>>
>>
>> On my oVirt node 4.1 I have some NFS shares that I want other hosts to
>> reach, but I noticed that the firewall is not open for that on the host.
>>
>> So, how do I configure the Node's firewall?
>>
>>
>>
>> Regards
>>
>> Magnus Isaksson
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

PAVOL BRILLA

RHV QUALITY ENGINEER, CLOUD

Red Hat Czech Republic, Brno 

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt node and NFS

2017-11-22 Thread Magnus Isaksson
Hi,

This is probably an easy thing, but I can't seem to find the solution.

On my oVirt node 4.1 I have some NFS shares that I want other hosts to reach, 
but I noticed that the firewall is not open for that on the host.
So, how do I configure the Node's firewall?

Regards
Magnus Isaksson

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] multiple ip routing table issue

2017-11-22 Thread Edward Haas
On Wed, Nov 22, 2017 at 1:26 AM, Edward Clay 
wrote:

> On Tue, 2017-11-21 at 16:01 -0700, Edward Clay wrote:
>
> On Wed, 2017-11-22 at 00:17 +0200, Edward Haas wrote:
>
>
>
> On Tue, Nov 21, 2017 at 6:16 PM, Edward Clay 
> wrote:
>
>
> On Tue, 2017-11-21 at 09:00 +0200, Edward Haas wrote:
>
>
>
> On Tue, Nov 21, 2017 at 1:24 AM, Edward Clay 
> wrote:
>
> Hello,
>
> We have an issue where hosts are configured with the public-facing network
> interface as the ovirtmgmt network, and its default route is added to an
> oVirt-created table but not to the main routing table. From my searching
> I've found this snippet from https://www.ovirt.org/develop/
> release-management/features/network/multiple-gateways/ which seems to
> explain why I can't ping anything or communicate with any other system
> needing a default route.
>
>
> By default, the default route is set on the ovirtmgmt network (the default
> one, defined on the interface/IP through which you added the host to Engine).
> Do you have a different network set up on which you would like to set the
> default route?
>
>
>
> "And finally, here's the host's main routing table. Any traffic coming in
> to the host will use the ip rules and an interface's routing table. The
> main routing table is only used for traffic originating from the host."
>
> I'm seeing the following main and custom ovirt created tables.
>
> main:
> # ip route show table main
> 10.0.0.0/8 via 10.4.16.1 dev enp3s0.106
> 10.4.16.0/24 dev enp3s0.106 proto kernel scope link src 10.4.16.15
> 1.1.1.0/24 dev PUBLICB proto kernel scope link src 1.1.1.1 169.254.0.0/16
> dev enp6s0 scope link metric 1002
> 169.254.0.0/16 dev enp3s0 scope link metric 1003
> 169.254.0.0/16 dev enp7s0 scope link metric 1004
> 169.254.0.0/16 dev enp3s0.106 scope link metric 1020
> 169.254.0.0/16 dev PRIVATE scope link metric 1022
> 169.254.0.0/16 dev PUBLIC scope link metric 1024
>
> table 1138027711
> # ip route show table 1138027711
> default via 1.1.1.1 dev PUBLIC
> 1.1.1.0/24 via 1.1.1.1 dev PUBLIC
>
> If I manually execute the following command to add the default route to
> the main table as well, I can ping outside of the local network.
>
> ip route add 0.0.0.0/0 via 1.1.1.1 dev PUBLIC
>
> If I attempt to modify /etc/sysconfig/network-scripts/route-PUBLIC and
> reboot the server, the change is lost, so one would think this file is recreated by vdsm on boot.
>
> What I'm looking for is the correct way to setup a default gateway for the
> main routing table so the hosts can get OS updates and communicate with the
> outside world.
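
For reference, the policy routing that vdsm sets up can be inspected like this (a read-only sketch; the table number is taken from the output above):

# rules that direct traffic into the per-network tables
ip rule show
# routes installed for the PUBLIC network's table
ip route show table 1138027711
# routes the host itself uses for outgoing traffic
ip route show table main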
>
>
> Providing the output from "ip addr" may help clear up some things.
> It looks like the host's default route is set to 10.4.16.1 (on
> enp3s0.106); could you elaborate on what this interface is?
>
>
> We have set up VLAN tagging to utilize the 2 internal network interfaces
> (originally enp6s0 and enp7s0), each configured with multiple networks.
> We eventually added 10Gb NICs to all servers to improve SAN glusterfs
> performance; that is enp3s0, which replaced enp6s0 in our setup.
>
> enp3s0.106 = ovirtmgmt network access to private internal networks only
> enp3s0.206 = private network bridge PRIVATE used for private internal
> network access for VMs
> enp7s0.606 = used for public access for both VMs (bridge) and each
> host/cp/san in our oVirt setup, named PUBLIC
>
> # ip addr show
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: enp6s0:  mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 00:25:90:38:d6:2c brd ff:ff:ff:ff:ff:ff
> inet6 fe80::225:90ff:fe38:d62c/64 scope link
>valid_lft forever preferred_lft forever
> 3: enp3s0:  mtu 1500 qdisc mq state UP
> qlen 1000
> link/ether 90:e2:ba:1d:a4:00 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::92e2:baff:fe1d:a400/64 scope link
>valid_lft forever preferred_lft forever
> 4: enp7s0:  mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 00:25:90:38:d6:2d brd ff:ff:ff:ff:ff:ff
> 20: enp3s0.106@enp3s0:  mtu 1500 qdisc
> noqueue state UP qlen 1000
> link/ether 90:e2:ba:1d:a4:00 brd ff:ff:ff:ff:ff:ff
> inet 10.4.16.15/24 brd 10.4.16.255 scope global enp3s0.106
>valid_lft forever preferred_lft forever
> 21: enp3s0.206@enp3s0:  mtu 1500 qdisc
> noqueue master PRIVATEB state UP qlen 1000
> link/ether 90:e2:ba:1d:a4:00 brd ff:ff:ff:ff:ff:ff
> 22: PRIVATE:  mtu 1500 qdisc noqueue
> state UP qlen 1000
> link/ether 90:e2:ba:1d:a4:00 brd ff:ff:ff:ff:ff:ff
> 23: enp7s0.606@enp7s0:  mtu 1500 qdisc
> noqueue master PUBLICB state UP qlen 1000
> link/ether 00:25:90:38:d6:2d brd ff:ff:ff:ff:ff:ff
> 24: PUBLIC:  mtu 1500 qdisc noqueue
> state UP qlen 1000
> link/ether 00:25:90:38:d6:2d brd ff:ff:ff:ff:ff:ff
> inet 1.1.1.10/24 brd 1.1.1.255 scope global PUBLICB
> 

Re: [ovirt-users] Failed to open grubx64.efi

2017-11-22 Thread Yuval Turgeman
Hi,

I checked a little more: anaconda uses blivet to detect whether the machine is
EFI for its boot partition requirements, and blivet checks if
/sys/firmware/efi exists [1].
The kernel registers /sys/firmware/efi only if EFI_BOOT is enabled [2], and
this is set during setup, when the kernel searches for an EFI loader signature
in boot_params [3] (the signatures are defined as "EL32" and "EL64").
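
A quick check from a shell, as a trivial sketch (/sys/firmware/efi is the standard kernel path, nothing oVirt-specific):

# the directory exists only when the kernel booted via EFI
if [ -d /sys/firmware/efi ]; then echo "booted via EFI"; else echo "legacy BIOS boot"; fi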

You can access boot_params under /sys/kernel/boot_params/data, so if you want
to understand what's going on with your machine, you can try to use strings to
see if it's enabled while running anaconda - on my test machine it looks
like this:


[anaconda root@localhost ~]# strings /sys/kernel/boot_params/data

EL64

fHdrS


Hope this helps :)
Yuval.

[1]
https://github.com/storaged-project/blivet/blob/44bd6738a49cd15398dd151cc2653f175efccf14/blivet/arch.py#L242
[2]
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/drivers/firmware/efi/efi.c?h=v3.10.108#n90
[3]
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/arch/x86/kernel/setup.c#n930


On Tue, Nov 21, 2017 at 8:57 PM, Yuval Turgeman  wrote:

> Boot partition requirements should be handled the same on both oVirt Node and
> CentOS (or RHEL), and you having to add this manually on RHEL can give us a
> hint that this is a bug in anaconda (doesn't detect EFI?).
>
> In other words, if you need to add this to RHEL you'd need to add it to
> oVirt Node, and autopart shouldn't scare you off; just follow the
> partitioning guidelines, add your changes and you are all set.  You can
> grab some ks examples here:
>
> git clone https://gerrit.ovirt.org/ovirt-node-ng
>
> On Nov 21, 2017 20:13, "Luca 'remix_tj' Lorenzetto" <
> lorenzetto.l...@gmail.com> wrote:
>
>> On Tue, Nov 21, 2017 at 4:54 PM, Yuval Turgeman 
>> wrote:
>> > Hi,
>> >
>> > I tried to recreate this without success, I'll try with different hw
>> > tomorrow.
>> > The thing is, autopart with thinp doesn't mean that everything is
>> lvm-thin -
>> > /boot should be a regular (primary) partition (for details you can check
>> > anaconda's ovirt install class)
>> > This could be a bug in anaconda or in the kickstart that deploys the
>> node
>> > (if not installing directly from the iso), can you install CentOS-7.4
>> with
>> > UEFI enabled on this machine ?  If you have some installation logs, that
>> > would help :)
>>
>> Hi,
>>
>> on the same hardware I can successfully install RHEL 7.4 using UEFI.
>> This required changing my default partitioning to another one containing
>> a /boot/efi partition, adding this entry:
>>
>> part /boot/efi --fstype=efi --size=200 --ondisk=sda
>>
>> I suppose that autopart doesn't create this partition.
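
For reference, a rough sketch of where such an entry sits in a manual UEFI + thin-LVM layout (VG/LV names and sizes below are only placeholders, not the official oVirt Node layout - follow the kickstarts from the ovirt-node-ng repository mentioned above for the real partitioning guidelines):

# EFI system partition plus a regular /boot, then thin-provisioned LVM for the rest
part /boot/efi --fstype=efi --size=200 --ondisk=sda
part /boot --fstype=ext4 --size=1024 --ondisk=sda
part pv.01 --size=1 --grow --ondisk=sda
volgroup onn pv.01
logvol none --vgname=onn --name=pool00 --thinpool --size=1 --grow
logvol / --vgname=onn --name=root --thin --poolname=pool00 --fstype=ext4 --size=6000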
>>
>> Luca
>>
>>
>> --
>> "E' assurdo impiegare gli uomini di intelligenza eccellente per fare
>> calcoli che potrebbero essere affidati a chiunque se si usassero delle
>> macchine"
>> Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
>>
>> "Internet è la più grande biblioteca del mondo.
>> Ma il problema è che i libri sono tutti sparsi sul pavimento"
>> John Allen Paulos, Matematico (1945-vivente)
>>
>> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
>> lorenzetto.l...@gmail.com>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI multipathing missing tab

2017-11-22 Thread Nicolas Ecarnot

On 21/11/2017 at 15:21, Nicolas Ecarnot wrote:

Hello,

oVirt 4.1.6.2-1.el7.centos

Under the datacenter section, I see no iSCSI multipathing tab.
As I'm building this new DC, could this be because this DC is not yet 
initialized?




Self-replying (sorry, once again), for the record:

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html-single/administration_guide/#Configuring_iSCSI_Multipathing


Prerequisites

Ensure you have created an iSCSI storage domain and discovered and logged into all the paths to the iSCSI target(s). 
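
On the host side, "discovered and logged into all the paths" boils down to something like the following iscsiadm sketch (the portal address is a placeholder; in oVirt this is normally driven from the Administration Portal when editing the storage domain):

# discover targets on the additional portal
iscsiadm -m discovery -t sendtargets -p 192.168.1.20:3260
# log in to all discovered target paths
iscsiadm -m node -L all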


As usual: Me, Read The Fine Manual...

--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt node and NFS

2017-11-22 Thread Magnus Isaksson
Hi Shani,

I am currently using NFS shares on this oVirt Node 4.1.6 host without a problem;
I only have one host at the moment, so it is set up with a self-hosted engine.
But it is when I want to access these NFS shares from another host/client that
I cannot connect.
When I run “iptables -L” I can see that the NFS ports are not open.
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source   destination
ACCEPT all  --  anywhere anywhere state 
RELATED,ESTABLISHED
ACCEPT icmp --  anywhere anywhere
ACCEPT all  --  anywhere anywhere
ACCEPT tcp  --  anywhere anywhere tcp dpt:54321
ACCEPT tcp  --  anywhere anywhere tcp dpt:54322
ACCEPT tcp  --  anywhere anywhere tcp dpt:sunrpc
ACCEPT udp  --  anywhere anywhere udp dpt:sunrpc
ACCEPT tcp  --  anywhere anywhere tcp dpt:ssh
ACCEPT udp  --  anywhere anywhere udp dpt:snmp
ACCEPT tcp  --  anywhere anywhere tcp dpt:websm
ACCEPT tcp  --  anywhere anywhere tcp dpt:16514
ACCEPT tcp  --  anywhere anywhere multiport dports 
rockwell-csp2
ACCEPT tcp  --  anywhere anywhere multiport dports 
rfb:6923
ACCEPT tcp  --  anywhere anywhere multiport dports 
49152:49216
ACCEPT udp  --  anywhere anywhere udp dpt:6081
REJECT all  --  anywhere anywhere reject-with 
icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target prot opt source   destination
REJECT all  --  anywhere anywhere PHYSDEV match ! 
--physdev-is-bridged reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination
ACCEPT udp  --  anywhere anywhere udp dpt:6081

My first thought was to check with firewalld, but that is not running, so how
are these iptables rules set? And how do I add an opening for NFS?
I have checked that troubleshooting guide, but it has no info about this.
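
For a quick test the missing openings can be added directly with iptables - a sketch only; rules added like this are not persistent and may be overwritten when the host is redeployed from the engine, and the port numbers assume the default EL7 nfs/mountd configuration. rpcbind (sunrpc) already appears open in the output above, so what is typically missing is nfs itself and mountd:

# NFS
iptables -I INPUT -p tcp --dport 2049 -j ACCEPT
# mountd - pin it to a fixed port first (e.g. MOUNTD_PORT=20048 in /etc/sysconfig/nfs)
iptables -I INPUT -p tcp --dport 20048 -j ACCEPT
iptables -I INPUT -p udp --dport 20048 -j ACCEPT

As far as I know, on 4.1 these iptables rules are pushed by the engine during host deployment, so a permanent opening is better added on the engine side (for example via the IPTablesConfigSiteCustom option of engine-config) rather than only on the host.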

Regards
Magnus Isaksson

From: Shani Leviim [mailto:slev...@redhat.com]
Sent: den 21 november 2017 17:39
To: Magnus Isaksson 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] oVirt node and NFS

Hi Magnus,
Have you tried the troubleshooting-nfs-storage-issues page?
https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/

Regards,
Shani Leviim

On Tue, Nov 21, 2017 at 12:26 PM, Magnus Isaksson <mag...@vmar.se> wrote:
Anyone?

//Magnus

From: Magnus Isaksson
Sent: den 20 november 2017 16:01
To: 'users@ovirt.org' <users@ovirt.org>
Subject: oVirt node and NFS

Hi,

This is probably an easy thing, but I can’t seem to find the solution.

On my oVirt node 4.1 I have some NFS shares that I want other hosts to reach, 
but I noticed that the firewall is not open for that on the host.
So, how do I configure the Node's firewall?

Regards
Magnus Isaksson


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] New post on oVirt blog: LLDP Information Now Available via the Administration Portal

2017-11-22 Thread John Marks
Hello!

Just a quick heads up that there is a new post on the oVirt blog:

LLDP Information Now Available via the Administration Portal


In a nutshell:

oVirt 4.2 now includes support for the LLDP protocol, for easy management
of network resources. The LLDP protocol simplifies life for administrators
handling complex networks. Read the post.


See you on the oVirt blog!

Best,

John


-- 
John Marks
Technical Writer, oVirt
redhat Israel
Cell: +972 52 8644 491



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Upgrading from Hosted Engine 3.6 to 4.X

2017-11-22 Thread Cam Wright
Hi there,

We're looking to upgrade our hosted engine setup from 3.6 to 4.0 (or 4.1)...

We built the 3.6 setup a couple of years ago with Fedora 22 (we wanted
the newer-at-the-time kernel 4.4) on the hosts and engine, but when we
move to 4.X we'd like to move to EL7 on the engine (as that seems to
be the supported version) and to the oVirt Node ISO installer on the
hypervisors.

We've got only four hosts in our oVirt datacentre, configured in two clusters.

Our current idea is to take a backup of the oVirt database using the
backup-restore tool, and to take a 'dd' of the virtual disk too, for
good measure. Then upgrade the engine to 4.X and confirm that the 3.6
hosts will run, and then finally piecemeal upgrade the hosts to 4.X
using the oVirt Node ISO installer.

Looking at this page -
https://www.ovirt.org/documentation/self-hosted/chap-Maintenance_and_Upgrading_Resources/
- it seems the 'hosted-engine --upgrade-appliance' path is the best
way to do this... but because our hosts are running Fedora instead of
EL, I think that makes this option moot to us.

Is what I've suggested a valid upgrade path, or is there a more sane
way of going about this?

-C

Cam Wright - Systems and Technical Resource Administrator
CUTTINGEDGE /
90 Victoria St, West End, Brisbane, QLD, 4101
T + 61 7 3013 6200  M 0420 827 007
E cwri...@cuttingedge.com.au | W www.cuttingedge.com.au

/SYD /BNE /TYO

-- 


This email is confidential and solely for the use of the intended recipient.
  If you have received this email in error please notify the author and 
delete it immediately. This email is not to be distributed without the 
author's written consent. Unauthorised forwarding, printing, copying or use 
is strictly prohibited and may be a breach of copyright. Any views 
expressed in this email are those of the individual sender unless 
specifically stated to be the views of Cutting Edge Post Pty Ltd (Cutting 
Edge). Although this email has been sent in the belief that it is 
virus-free, it is the responsibility of the recipient to ensure that it is 
virus free. No responsibility is accepted by Cutting Edge for any loss or 
damage arising in any way from receipt or use of this email.  This email may 
contain legally privileged information and privilege is not waived if you 
have received this email in error.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] slow performance with export storage on glusterfs

2017-11-22 Thread Shani Leviim
Hi Jiri,
Sorry for the delay.

Do you experience the same issue for non-gluster domains?

In order to profile your gluster volume while the export is in progress, follow
the instructions in this link [1].
(Please execute "gluster volume profile <VOLNAME> start" and then "gluster
volume profile <VOLNAME> info")

[1]
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Monitor_Workload.html
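
Spelled out, a minimal sketch (replace <VOLNAME> with the name of the volume backing the export domain):

gluster volume profile <VOLNAME> start
# ... run the export, then collect the statistics
gluster volume profile <VOLNAME> info
# optionally stop profiling once done
gluster volume profile <VOLNAME> stop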


*Regards,*

*Shani Leviim*

On Mon, Nov 20, 2017 at 5:20 PM, Jiří Sléžka  wrote:

> Hi,
>
> I am trying to work out why exporting a VM to an export storage domain on
> glusterfs is so slow.
>
> I am using oVirt and RHV, both installations on version 4.1.7.
>
> Hosts have dedicated nics for rhevm network - 1gbps, data storage itself
> is on FC.
>
> The GlusterFS cluster lives separately on 4 dedicated hosts. It has slow disks
> but I can achieve about 200-400mbit throughput in other applications (we
> are using it for "cold" data, backups mostly).
>
> I am using this glusterfs cluster as backend for export storage. When I
> am exporting vm I can see only about 60-80mbit throughput.
>
> What could be the bottleneck here?
>
> Could it be qemu-img utility?
>
> vdsm  97739  0.3  0.0 354212 29148 ?S /usr/bin/qemu-img convert -p -t none -T none -f raw
> /rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/
> ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-
> c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> -O raw
> /rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__
> export/81094499-a392-4ea2-b081-7c6288fbb636/images/
> ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
>
> Any idea how to make it work faster, or what throughput I should expect?
>
> Cheers,
>
> Jiri
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM command GetVmsInfoVDS failed: Missing OVF file from VM

2017-11-22 Thread Алексей Максимов
Hello, Benny. I deleted the empty directory and the problem disappeared.Thank you for your help. PS:I don't know how to properly open a bug on https://bugzilla.redhat.com/Don't know which option to choose (https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt).Maybe you can open a bug and attach my logs? 20.11.2017, 13:08, "Benny Zlotnik" :Yes, you can remove it On Mon, Nov 20, 2017 at 8:10 AM, Алексей Максимов  wrote:I found an empty directory in the Export domain storage: # ls -la /rhev/data-center/mnt/fs01.my.dom-holding.com:_mnt_quadstor-vv1_ovirt-vm-backup/3a514c90-e574-4282-b1ee-779602e35f24/master/vms/f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6 total 16drwxr-xr-x.   2 vdsm kvm  4096 Nov  9 02:32 .drwxr-xr-x. 106 vdsm kvm 12288 Nov  9 02:32 .. I can just remove this directory? 19.11.2017, 18:51, "Benny Zlotnik" :+ ovirt-users On Sun, Nov 19, 2017 at 5:40 PM, Benny Zlotnik  wrote:Hi, There are a couple of issues here, can you please open a bug so we can track this properly? https://bugzilla.redhat.com/and attach all relevant logs  I went over the logs, are you sure the export domain was formatted properly? Couldn't find it in the engine.logLooking at the logs it seems VMs were found on the export domain (id=3a514c90-e574-4282-b1ee-779602e35f24) 2017-11-19 13:18:13,007+0300 INFO  (jsonrpc/2) [storage.StorageDomain] vmList=[u'01a4f53e-699e-4ea5-aef4-458638f23ce9', u'03c9e965-710d-4fc8-be06-583abbd1d7a9', u'07dab4f6-d677-4faa-9875-97bd6d601f49', u'0b94a559-b31a-475d-9599-36e0dbea579a', u'13b42f3a-3057-4eb1-ad4b-f4e52f6ff196', u'151a4e75-d67a-4603-8f52-abfb46cb74c1', u'177479f5-2ed8-4b6c-9120-ec067d1a1247', u'18945b31-3ba5-4e54-9bf0-8fdc3a7d7411', u'1e72be16-f540-4cfd-b0e9-52b66220a98b', u'1ec85134-a7b5-46c2-9c6c-eaba340c5ffd', u'20b88cfc-bfae-4983-8d83-ba4e0c7feeb7', u'25fa96d1-6083-4daa-9755-026e632553d9', u'273ffd05-6f93-4e4a-aac9-149360b5f0b4', u'28188426-ae8b-4999-8e31-4c04fbba4dac', u'28e9d5f2-4312-4d0b-9af9-ec1287bae643', u'2b7093dc-5d16-4204-b211-5b3a1d729872', u'32ecfcbb-2678-4f43-8d59-418e03920693', u'3376ef0b-2af5-4a8b-9987-18f28f6bb334', u'34d1150f-7899-44d9-b8cf-1c917822f624', u'383bbfc6-6841-4476-b108-a1878ed9ce43', u'388e372f-b0e8-408f-b21b-0a5c4a84c457', u'39396196-42eb-4a27-9a57-a3e0dad8a361', u'3fc02ca2-7a03-4d5e-bc21-688f138a914f', u'4101ac1e-0582-4ebe-b4fb-c4aed39fadcf', u'44e10588-8047-4734-81b3-6a98c229b637', u'4794ca9c-5abd-4111-b19c-bdfbf7c39c86', u'47a83986-d3b8-4905-b017-090276e967f5', u'49d83471-a312-412e-b791-8ee0badccbb5', u'4b1b9360-a48a-425b-9a2e-19197b167c99', u'4d783e2a-2d81-435a-98c4-f7ed862e166b', u'51976b6e-d93f-477e-a22b-0fa84400ff84', u'56b77077-707c-4949-9ea9-3aca3ea912ec', u'56dc5c41-6caf-435f-8146-6503ea3eaab9', u'5729e036-5f6e-473b-9d1d-f1c4c5c55b2d', u'5873f804-b992-4559-aff5-797f97bfebf7', u'58b7a4ea-d572-4ab4-a4f1-55dddc5dc8e8', u'590d1adb-52e4-4d29-af44-c9aa5d328186', u'5c79f970-6e7b-4996-a2ce-1781c28bff79', u'5feab1f2-9a3d-4870-a0f3-fd97ea3c85c3', u'63749307-4486-4702-ade9-4324f5bfe80c', u'6555ac11-7b20-4074-9d71-f86bc10c01f9', u'66b4b8a0-b53b-40ea-87ab-75f6d9eef728', u'672c4e12-628f-4dcd-a57e-b4ff822a19f3', u'679c0445-512c-4988-8903-64c0c08b5fab', u'6ae337d0-e6a0-489f-82e6-57a85f63176a', u'6d713cb9-993d-4822-a030-ac7591794050', u'72a50ef0-945d-428a-a336-6447c4a70b99', u'751dfefc-9e18-4f26-bed6-db412cdb258c', u'7587db59-e840-41bc-96f3-b212b7b837a4', u'778c969e-1d22-46e3-bdbe-e20e0c5bb967', u'7810dec1-ee1c-4291-93f4-18e9a15fa8e2', u'7a6cfe35-e493-4c04-8fc6-e0bc72efc72d', u'7a7d814e-4586-40d5-9750-8896b00a6490', 
u'7af76921-4cf2-4c3c-9055-59c24d9e8b08', u'7d781e21-6613-41f4-bcea-8b57417e1211', u'7da51499-d7db-49fd-88f6-bcac30e5dd86', u'850a8041-77a4-4ae3-98f9-8d5f3a5778e6', u'85169fe8-8198-492f-b988-b8e24822fd01', u'87839926-8b84-482b-adec-5d99573edd9e', u'8a7eb414-71fa-4f91-a906-d70f95ccf995', u'8a9a1071-b005-4448-ba3f-c72bd7e0e34b', u'8b73e593-8513-4a8e-b051-ce91765b22bd', u'8cbd5615-4206-4e4a-992d-8705b2f2aac2', u'92e9d966-c552-4cf9-b84a-21dda96f3f81', u'95209226-a9a5-4ada-8eed-a672d58ba72c', u'986ce2a5-9912-4069-bfa9-e28f7a17385d', u'9f6c8d1d-da81-4020-92e5-1c14cf082d2c', u'9ff87197-d089-4b2d-8822-b0d6f6e67292', u'a0a0c756-fbe9-4f8e-b6e9-1f2d58f1d957', u'a46d5615-8d9f-4944-9334-2fca2b53c27e', u'a6a50244-366b-4b7c-b80f-04d7ce2d8912', u'aa6a4de6-cc9e-4d79-a795-98326bbd83db', u'accc0bc3-c501-4f0b-aeeb-6858f7e894fd', u'b09e5783-6765-4514-a5a3-86e5e73b729b', u'b1ecfe29-7563-44a9-b814-0faefac5465b', u'baa542e1-492a-4b1b-9f54-e9566a4fe315', u'bb91f9f5-98df-45b1-b8ca-9f67a92eef03', u'bd11f11e-be3d-4456-917c-f93ba9a19abe', u'bee3587e-50f4-44bc-a199-35b38a19ffc5', u'bf573d58-1f49-48a9-968d-039e0916c973', u'c01d466a-8ad8-4afe-b383-e365deebc6b8', u'c0be5c12-be26-47b7-ad26-3ec2469f1d3f', u'c31f4f53-c22b-40ff-8408-f36f591f55b5', u'c530e339-99bf-48a2-a63a-cfd2a4dba198', u'c8a610c8-72e5-4217-b4d9-130f85db1db7', u'ca0567e1-d445-4875-94b1-85e31f331b87', u'd2c3dab7-eb8c-410e-87c1-3d0139d9903c', u'd330c4b

Re: [ovirt-users] Upgrading from Hosted Engine 3.6 to 4.X

2017-11-22 Thread Yedidyah Bar David
On Wed, Nov 22, 2017 at 2:46 PM, Cam Wright  wrote:
>
> Hi there,
>
> We're looking to upgrade our hosted engine setup from 3.6 to 4.0 (or 4.1)...
>
> We built the 3.6 setup a couple of years ago with Fedora 22 (we wanted
> the newer-at-the-time kernel 4.4) on the hosts and engine, but when we
> move to 4.X we'd like to move to EL7 on the engine (as that seems to
> be the supported version) and to the oVirt Node ISO installer on the
> hypervisors.
>
> We've got only four hosts in our oVirt datacentre, configured in two clusters.
>
> Our current idea is to take a backup of the oVirt database using the
> backup-restore tool, and to take a 'dd' of the virtual disk too, for
> good measure. Then upgrade the engine to 4.X and confirm that the 3.6
> hosts will run, and then finally piecemeal upgrade the hosts to 4.X
> using the oVirt Node ISO installer.
>
> Looking at this page -
> https://www.ovirt.org/documentation/self-hosted/chap-Maintenance_and_Upgrading_Resources/
> - it seems the 'hosted-engine --upgrade-appliance' path is the best
> way to do this... but because our hosts are running Fedora instead of
> EL, I think that makes this option moot to us.

Basically yes. You might be able to somehow patch it to enforce
this, not sure it's worth it.

>
> Is what I've suggested a valid upgrade path, or is there a more sane
> way of going about this?

Sounds reasonable.

You didn't mention if you can stand downtime for your VMs or not.
If not, or if you need to minimize it, you should design and test carefully.

If you can, something like this should work:

1. Take down all VMs on all hosts that are hosted-engine hosts
2. Move all hosted-engine hosts to maintenance
3. Remove one hosted-engine host from the engine
4. Take a backup
5. Reinstall the host as el7
6. Deploy new hosted-engine on new storage on this host, tell it to
not run engine-setup
7. Inside the new engine vm, restore the backup and engine-setup
8. See that you can start the VMs on the new host
9. Remove the other host on that cluster, reinstall it with el7, add
10. Handle the other cluster
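
For steps 4 and 7, the usual tool is engine-backup; a minimal sketch (file names are placeholders, and the exact restore options may differ between versions):

# step 4, on the old engine
engine-backup --mode=backup --file=engine-36.backup --log=engine-36-backup.log

# step 7, inside the new engine VM, before running engine-setup
engine-backup --mode=restore --file=engine-36.backup --log=engine-36-restore.log --provision-db --restore-permissions
engine-setup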

Plan well and test well. You can use VMs and nested-kvm for the testing.
Do not restore a backup of the real engine on a test vm that has access
to your hosts - it will try to manage them. Do the testing in an isolated
network.

Best regards,

>
> -C
>
> Cam Wright - Systems and Technical Resource Administrator
> CUTTINGEDGE /
> 90 Victoria St, West End, Brisbane, QLD, 4101
> T + 61 7 3013 6200  M 0420 827 007
> E cwri...@cuttingedge.com.au | W www.cuttingedge.com.au
>
> /SYD /BNE /TYO
>
> --
>
>
> This email is confidential and solely for the use of the intended recipient.
>   If you have received this email in error please notify the author and
> delete it immediately. This email is not to be distributed without the
> author's written consent. Unauthorised forwarding, printing, copying or use
> is strictly prohibited and may be a breach of copyright. Any views
> expressed in this email are those of the individual sender unless
> specifically stated to be the views of Cutting Edge Post Pty Ltd (Cutting
> Edge). Although this email has been sent in the belief that it is
> virus-free, it is the responsibility of the recipient to ensure that it is
> virus free. No responsibility is accepted by Cutting Edge for any loss or
> damage arising in any way from receipt or use of this email.  This email may
> contain legally privileged information and privilege is not waived if you
> have received this email in error.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users




-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM command GetVmsInfoVDS failed: Missing OVF file from VM

2017-11-22 Thread Benny Zlotnik
Hi, glad to hear it helped.

https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
The component is BLL.Storage
and the team is Storage

Thanks
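
For anyone hitting the same GetVmsInfoVDS error, a quick way to spot leftover empty VM directories on an export domain is a sketch like this (the path components in angle brackets are placeholders):

# list empty VM directories under the export domain's master/vms
find /rhev/data-center/mnt/<export-mount>/<domain-uuid>/master/vms -mindepth 1 -maxdepth 1 -type d -empty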

On Wed, Nov 22, 2017 at 3:51 PM, Алексей Максимов <
aleksey.i.maksi...@yandex.ru> wrote:

> Hello, Benny.
>
> I deleted the empty directory and the problem disappeared.
> Thank you for your help.
>
> PS:I don't know how to properly open a bug on https://bugzilla.redhat.com/
> Don't know which option to choose (https://bugzilla.redhat.com/
> enter_bug.cgi?classification=oVirt).
> Maybe you can open a bug and attach my logs?
>
> 20.11.2017, 13:08, "Benny Zlotnik" :
>
> Yes, you can remove it
>
> On Mon, Nov 20, 2017 at 8:10 AM, Алексей Максимов <
> aleksey.i.maksi...@yandex.ru> wrote:
>
> I found an empty directory in the Export domain storage:
>
> # ls -la /rhev/data-center/mnt/fs01.my.dom-holding.com:_mnt_quadstor-
> vv1_ovirt-vm-backup/3a514c90-e574-4282-b1ee-779602e35f24/
> master/vms/f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6
>
> total 16
> drwxr-xr-x.   2 vdsm kvm  4096 Nov  9 02:32 .
> drwxr-xr-x. 106 vdsm kvm 12288 Nov  9 02:32 ..
>
> I can just remove this directory?
>
> 19.11.2017, 18:51, "Benny Zlotnik" :
>
> + ovirt-users
>
> On Sun, Nov 19, 2017 at 5:40 PM, Benny Zlotnik 
> wrote:
>
> Hi,
>
> There are a couple of issues here, can you please open a bug so we can
> track this properly? https://bugzilla.redhat.com/
> and attach all relevant logs
>
> I went over the logs, are you sure the export domain was formatted
> properly? Couldn't find it in the engine.log
> Looking at the logs it seems VMs were found on the export domain
> (id=3a514c90-e574-4282-b1ee-779602e35f24)
>
> 2017-11-19 13:18:13,007+0300 INFO  (jsonrpc/2) [storage.StorageDomain]
> vmList=[u'01a4f53e-699e-4ea5-aef4-458638f23ce9',
> u'03c9e965-710d-4fc8-be06-583abbd1d7a9', 
> u'07dab4f6-d677-4faa-9875-97bd6d601f49',
> u'0b94a559-b31a-475d-9599-36e0dbea579a', 
> u'13b42f3a-3057-4eb1-ad4b-f4e52f6ff196',
> u'151a4e75-d67a-4603-8f52-abfb46cb74c1', 
> u'177479f5-2ed8-4b6c-9120-ec067d1a1247',
> u'18945b31-3ba5-4e54-9bf0-8fdc3a7d7411', 
> u'1e72be16-f540-4cfd-b0e9-52b66220a98b',
> u'1ec85134-a7b5-46c2-9c6c-eaba340c5ffd', 
> u'20b88cfc-bfae-4983-8d83-ba4e0c7feeb7',
> u'25fa96d1-6083-4daa-9755-026e632553d9', 
> u'273ffd05-6f93-4e4a-aac9-149360b5f0b4',
> u'28188426-ae8b-4999-8e31-4c04fbba4dac', 
> u'28e9d5f2-4312-4d0b-9af9-ec1287bae643',
> u'2b7093dc-5d16-4204-b211-5b3a1d729872', u'32ecfcbb-2678-4f43-8d59-418e
> 03920693', u'3376ef0b-2af5-4a8b-9987-18f28f6bb334',
> u'34d1150f-7899-44d9-b8cf-1c917822f624', 
> u'383bbfc6-6841-4476-b108-a1878ed9ce43',
> u'388e372f-b0e8-408f-b21b-0a5c4a84c457', 
> u'39396196-42eb-4a27-9a57-a3e0dad8a361',
> u'3fc02ca2-7a03-4d5e-bc21-688f138a914f', 
> u'4101ac1e-0582-4ebe-b4fb-c4aed39fadcf',
> u'44e10588-8047-4734-81b3-6a98c229b637', 
> u'4794ca9c-5abd-4111-b19c-bdfbf7c39c86',
> u'47a83986-d3b8-4905-b017-090276e967f5', 
> u'49d83471-a312-412e-b791-8ee0badccbb5',
> u'4b1b9360-a48a-425b-9a2e-19197b167c99', 
> u'4d783e2a-2d81-435a-98c4-f7ed862e166b',
> u'51976b6e-d93f-477e-a22b-0fa84400ff84', 
> u'56b77077-707c-4949-9ea9-3aca3ea912ec',
> u'56dc5c41-6caf-435f-8146-6503ea3eaab9', 
> u'5729e036-5f6e-473b-9d1d-f1c4c5c55b2d',
> u'5873f804-b992-4559-aff5-797f97bfebf7', 
> u'58b7a4ea-d572-4ab4-a4f1-55dddc5dc8e8',
> u'590d1adb-52e4-4d29-af44-c9aa5d328186', 
> u'5c79f970-6e7b-4996-a2ce-1781c28bff79',
> u'5feab1f2-9a3d-4870-a0f3-fd97ea3c85c3', 
> u'63749307-4486-4702-ade9-4324f5bfe80c',
> u'6555ac11-7b20-4074-9d71-f86bc10c01f9', 
> u'66b4b8a0-b53b-40ea-87ab-75f6d9eef728',
> u'672c4e12-628f-4dcd-a57e-b4ff822a19f3', 
> u'679c0445-512c-4988-8903-64c0c08b5fab',
> u'6ae337d0-e6a0-489f-82e6-57a85f63176a', u'6d713cb9-993d-4822-a030-ac75
> 91794050', u'72a50ef0-945d-428a-a336-6447c4a70b99',
> u'751dfefc-9e18-4f26-bed6-db412cdb258c', 
> u'7587db59-e840-41bc-96f3-b212b7b837a4',
> u'778c969e-1d22-46e3-bdbe-e20e0c5bb967', 
> u'7810dec1-ee1c-4291-93f4-18e9a15fa8e2',
> u'7a6cfe35-e493-4c04-8fc6-e0bc72efc72d', 
> u'7a7d814e-4586-40d5-9750-8896b00a6490',
> u'7af76921-4cf2-4c3c-9055-59c24d9e8b08', 
> u'7d781e21-6613-41f4-bcea-8b57417e1211',
> u'7da51499-d7db-49fd-88f6-bcac30e5dd86', 
> u'850a8041-77a4-4ae3-98f9-8d5f3a5778e6',
> u'85169fe8-8198-492f-b988-b8e24822fd01', 
> u'87839926-8b84-482b-adec-5d99573edd9e',
> u'8a7eb414-71fa-4f91-a906-d70f95ccf995', 
> u'8a9a1071-b005-4448-ba3f-c72bd7e0e34b',
> u'8b73e593-8513-4a8e-b051-ce91765b22bd', 
> u'8cbd5615-4206-4e4a-992d-8705b2f2aac2',
> u'92e9d966-c552-4cf9-b84a-21dda96f3f81', 
> u'95209226-a9a5-4ada-8eed-a672d58ba72c',
> u'986ce2a5-9912-4069-bfa9-e28f7a17385d', 
> u'9f6c8d1d-da81-4020-92e5-1c14cf082d2c',
> u'9ff87197-d089-4b2d-8822-b0d6f6e67292', 
> u'a0a0c756-fbe9-4f8e-b6e9-1f2d58f1d957',
> u'a46d5615-8d9f-4944-9334-2fca2b53c27e', 
> u'a6a50244-366b-4b7c-b80f-04d7ce2d8912',
> u'aa6a4de6-cc9e-4d79-a795-98326bbd83db', 
> u'accc0bc3-c501-4f0b-aeeb-6858f7e894fd',
> u'b09e5783-6765-4514-a5a3-86e5e73b729b', 
> u'b1ecfe29-7563-44a9-b814-0faefac5465b',
> u'baa

Re: [ovirt-users] multiple ip routing table issue

2017-11-22 Thread Edward Clay
On Wed, 2017-11-22 at 10:46 +0200, Edward Haas wrote:
> On Wed, Nov 22, 2017 at 1:26 AM, Edward Clay  m> wrote:
> > On Tue, 2017-11-21 at 16:01 -0700, Edward Clay wrote:
> > > On Wed, 2017-11-22 at 00:17 +0200, Edward Haas wrote:
> > > > On Tue, Nov 21, 2017 at 6:16 PM, Edward Clay  > > > oup.com> wrote:
> > > > > On Tue, 2017-11-21 at 09:00 +0200, Edward Haas wrote:
> > > > > > On Tue, Nov 21, 2017 at 1:24 AM, Edward Clay  > > > > > k2group.com> wrote:
> > > > > > > Hello,
> > > > > > > 
> > > > > > > We have an issue where hosts are configured with the
> > > > > > > public facing nework interface as the ovirtmgmt network
> > > > > > > and it's default route is added to a ovirt created table
> > > > > > > but not to the main routing table.  From my searching
> > > > > > > I've found this snippet from https://www.ovirt.org/develo
> > > > > > > p/release-management/features/network/multiple-gateways/
> > > > > > > which seems to explain why I can't ping anything or communicate 
> > > > > > > with any other system needing a default route.
> > > > > > 
> > > > > > By default, the default route is set on the ovirtmgmt
> > > > > > network (the default one, defined on the interface/ip which
> > > > > > you added the host to Engine).
> > > > > > Do you have a different network set up which you will like
> > > > > > to set the default route on?
> > > > > > 
> > > > > >  
> > > > > > > "And finally, here's the host's main routing table. Any
> > > > > > > traffic coming in to the host will use the ip rules and
> > > > > > > an interface's routing table. The main routing table is
> > > > > > > only used for traffic originating from the host."
> > > > > > > 
> > > > > > > I'm seeing the following main and custom ovirt created
> > > > > > > tables.
> > > > > > > 
> > > > > > > main:
> > > > > > > # ip route show table main
> > > > > > > 10.0.0.0/8 via 10.4.16.1 dev enp3s0.106 
> > > > > > > 10.4.16.0/24 dev enp3s0.106 proto kernel scope link src
> > > > > > > 10.4.16.15 
> > > > > > > 1.1.1.0/24 dev PUBLICB proto kernel scope link src
> > > > > > > 1.1.1.1 169.254.0.0/16 dev enp6s0 scope link metric 1002 
> > > > > > > 169.254.0.0/16 dev enp3s0 scope link metric 1003 
> > > > > > > 169.254.0.0/16 dev enp7s0 scope link metric 1004 
> > > > > > > 169.254.0.0/16 dev enp3s0.106 scope link metric 1020 
> > > > > > > 169.254.0.0/16 dev PRIVATE scope link metric 1022 
> > > > > > > 169.254.0.0/16 dev PUBLIC scope link metric 1024 
> > > > > > > 
> > > > > > > table 1138027711
> > > > > > > # ip route show table 1138027711
> > > > > > > default via 1.1.1.1 dev PUBLIC
> > > > > > > 1.1.1.0/24 via 1.1.1.1 dev PUBLIC
> > > > > > > 
> > > > > > > If I manually execute the following command to add the
> > > > > > > default route as well to the main table I can ping ouside
> > > > > > > of the local network.
> > > > > > > 
> > > > > > > ip route add 0.0.0.0/0 via 1.1.1.1 dev PUBLIC
> > > > > > > 
> > > > > > > If I attempt to modify the /etc/sysconfig/network-
> > > > > > > scripts/route-PUBLIC ad reboot the server ad one would
> > > > > > > think this file is recreated by vdsm on boot.
> > > > > > > 
> > > > > > > What I'm looking for is the correct way to setup a
> > > > > > > default gateway for the main routing table so the hosts
> > > > > > > can get OS updates and communicate with the outside
> > > > > > > world.
> > > > > > 
> > > > > > Providing the output from "ip addr" may help clear up some
> > > > > > things.
> > > > > > It looks like you have on the host the default route set as
> > > > > > 10.4.16.1 (on enp3s0.106), could you elaborate what this
> > > > > > interface is?
> > > > > 
> > > > > We have setup vlan taging to utilize the 2 internetal network
> > > > > interfaces (originally enp6s0 and enp7s0) to be configured
> > > > > with mulitiple networks each.  We eventually added 10Gb nics
> > > > > to all servers to improve san glusterfs performance which is
> > > > > enp3s0 which replaced enp6s0 in our setup.
> > > > > 
> > > > > enp3s0.106 = ovirtmgmt network access to private internal
> > > > > networks only
> > > > > enp3s0.206 = private network bridge PRIVATE used for private
> > > > > internal network access for VMs
> > > > > enp7s0.606 = is used for public access for both VMs (bridge)
> > > > > and each host/cp/san in our ovirt setup named PUBLIC
> > > > > 
> > > > > # ip addr show
> > > > > 1: lo:  mtu 65536 qdisc noqueue state
> > > > > UNKNOWN qlen 1
> > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > > inet 127.0.0.1/8 scope host lo
> > > > >valid_lft forever preferred_lft forever
> > > > > inet6 ::1/128 scope host 
> > > > >valid_lft forever preferred_lft forever
> > > > > 2: enp6s0:  mtu 1500 qdisc
> > > > > pfifo_fast state UP qlen 1000
> > > > > link/ether 00:25:90:38:d6:2c brd ff:ff:ff:ff:ff:ff
> > > > > inet6 fe80::225:90ff:fe38:d62c/64 scope link 
> > > > >valid_lft forever preferred_lft forever
> > > > > 3: enp3s0:  mtu 1500 qdisc
> > >

Re: [ovirt-users] VDSM command GetVmsInfoVDS failed: Missing OVF file from VM

2017-11-22 Thread Алексей Максимов
https://bugzilla.redhat.com/show_bug.cgi?id=1516494 22.11.2017, 17:47, "Benny Zlotnik" :Hi, glad to hear it helped. https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engineThe component is BLL.Storageand the team is Storage Thanks On Wed, Nov 22, 2017 at 3:51 PM, Алексей Максимов  wrote:Hello, Benny. I deleted the empty directory and the problem disappeared.Thank you for your help. PS:I don't know how to properly open a bug on https://bugzilla.redhat.com/Don't know which option to choose (https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt).Maybe you can open a bug and attach my logs? 20.11.2017, 13:08, "Benny Zlotnik" :Yes, you can remove it On Mon, Nov 20, 2017 at 8:10 AM, Алексей Максимов  wrote:I found an empty directory in the Export domain storage: # ls -la /rhev/data-center/mnt/fs01.my.dom-holding.com:_mnt_quadstor-vv1_ovirt-vm-backup/3a514c90-e574-4282-b1ee-779602e35f24/master/vms/f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6 total 16drwxr-xr-x.   2 vdsm kvm  4096 Nov  9 02:32 .drwxr-xr-x. 106 vdsm kvm 12288 Nov  9 02:32 .. I can just remove this directory? 19.11.2017, 18:51, "Benny Zlotnik" :+ ovirt-users On Sun, Nov 19, 2017 at 5:40 PM, Benny Zlotnik  wrote:Hi, There are a couple of issues here, can you please open a bug so we can track this properly? https://bugzilla.redhat.com/and attach all relevant logs  I went over the logs, are you sure the export domain was formatted properly? Couldn't find it in the engine.logLooking at the logs it seems VMs were found on the export domain (id=3a514c90-e574-4282-b1ee-779602e35f24) 2017-11-19 13:18:13,007+0300 INFO  (jsonrpc/2) [storage.StorageDomain] vmList=[u'01a4f53e-699e-4ea5-aef4-458638f23ce9', u'03c9e965-710d-4fc8-be06-583abbd1d7a9', u'07dab4f6-d677-4faa-9875-97bd6d601f49', u'0b94a559-b31a-475d-9599-36e0dbea579a', u'13b42f3a-3057-4eb1-ad4b-f4e52f6ff196', u'151a4e75-d67a-4603-8f52-abfb46cb74c1', u'177479f5-2ed8-4b6c-9120-ec067d1a1247', u'18945b31-3ba5-4e54-9bf0-8fdc3a7d7411', u'1e72be16-f540-4cfd-b0e9-52b66220a98b', u'1ec85134-a7b5-46c2-9c6c-eaba340c5ffd', u'20b88cfc-bfae-4983-8d83-ba4e0c7feeb7', u'25fa96d1-6083-4daa-9755-026e632553d9', u'273ffd05-6f93-4e4a-aac9-149360b5f0b4', u'28188426-ae8b-4999-8e31-4c04fbba4dac', u'28e9d5f2-4312-4d0b-9af9-ec1287bae643', u'2b7093dc-5d16-4204-b211-5b3a1d729872', u'32ecfcbb-2678-4f43-8d59-418e03920693', u'3376ef0b-2af5-4a8b-9987-18f28f6bb334', u'34d1150f-7899-44d9-b8cf-1c917822f624', u'383bbfc6-6841-4476-b108-a1878ed9ce43', u'388e372f-b0e8-408f-b21b-0a5c4a84c457', u'39396196-42eb-4a27-9a57-a3e0dad8a361', u'3fc02ca2-7a03-4d5e-bc21-688f138a914f', u'4101ac1e-0582-4ebe-b4fb-c4aed39fadcf', u'44e10588-8047-4734-81b3-6a98c229b637', u'4794ca9c-5abd-4111-b19c-bdfbf7c39c86', u'47a83986-d3b8-4905-b017-090276e967f5', u'49d83471-a312-412e-b791-8ee0badccbb5', u'4b1b9360-a48a-425b-9a2e-19197b167c99', u'4d783e2a-2d81-435a-98c4-f7ed862e166b', u'51976b6e-d93f-477e-a22b-0fa84400ff84', u'56b77077-707c-4949-9ea9-3aca3ea912ec', u'56dc5c41-6caf-435f-8146-6503ea3eaab9', u'5729e036-5f6e-473b-9d1d-f1c4c5c55b2d', u'5873f804-b992-4559-aff5-797f97bfebf7', u'58b7a4ea-d572-4ab4-a4f1-55dddc5dc8e8', u'590d1adb-52e4-4d29-af44-c9aa5d328186', u'5c79f970-6e7b-4996-a2ce-1781c28bff79', u'5feab1f2-9a3d-4870-a0f3-fd97ea3c85c3', u'63749307-4486-4702-ade9-4324f5bfe80c', u'6555ac11-7b20-4074-9d71-f86bc10c01f9', u'66b4b8a0-b53b-40ea-87ab-75f6d9eef728', u'672c4e12-628f-4dcd-a57e-b4ff822a19f3', u'679c0445-512c-4988-8903-64c0c08b5fab', u'6ae337d0-e6a0-489f-82e6-57a85f63176a', u'6d713cb9-993d-4822-a030-ac7591794050', 
u'72a50ef0-945d-428a-a336-6447c4a70b99', u'751dfefc-9e18-4f26-bed6-db412cdb258c', u'7587db59-e840-41bc-96f3-b212b7b837a4', u'778c969e-1d22-46e3-bdbe-e20e0c5bb967', u'7810dec1-ee1c-4291-93f4-18e9a15fa8e2', u'7a6cfe35-e493-4c04-8fc6-e0bc72efc72d', u'7a7d814e-4586-40d5-9750-8896b00a6490', u'7af76921-4cf2-4c3c-9055-59c24d9e8b08', u'7d781e21-6613-41f4-bcea-8b57417e1211', u'7da51499-d7db-49fd-88f6-bcac30e5dd86', u'850a8041-77a4-4ae3-98f9-8d5f3a5778e6', u'85169fe8-8198-492f-b988-b8e24822fd01', u'87839926-8b84-482b-adec-5d99573edd9e', u'8a7eb414-71fa-4f91-a906-d70f95ccf995', u'8a9a1071-b005-4448-ba3f-c72bd7e0e34b', u'8b73e593-8513-4a8e-b051-ce91765b22bd', u'8cbd5615-4206-4e4a-992d-8705b2f2aac2', u'92e9d966-c552-4cf9-b84a-21dda96f3f81', u'95209226-a9a5-4ada-8eed-a672d58ba72c', u'986ce2a5-9912-4069-bfa9-e28f7a17385d', u'9f6c8d1d-da81-4020-92e5-1c14cf082d2c', u'9ff87197-d089-4b2d-8822-b0d6f6e67292', u'a0a0c756-fbe9-4f8e-b6e9-1f2d58f1d957', u'a46d5615-8d9f-4944-9334-2fca2b53c27e', u'a6a50244-366b-4b7c-b80f-04d7ce2d8912', u'aa6a4de6-cc9e-4d79-a795-98326bbd83db', u'accc0bc3-c501-4f0b-aeeb-6858f7e894fd', u'b09e5783-6765-4514-a5a3-86e5e73b729b', u'b1ecfe29-7563-44a9-b814-0faefac5465b', u'baa542e1-492a-4b1b-9f54-e9566a4fe315', u'bb91f9f5-98df-45b1-b8ca-9f67a92eef03', u'bd11f11e-be3d-4456-917c-f93ba9a19abe', u'bee3587e-50f4-44bc-a199-35b38a1

Re: [ovirt-users] slow performance with export storage on glusterfs

2017-11-22 Thread Nir Soffer
On Mon, Nov 20, 2017 at 5:22 PM Jiří Sléžka  wrote:

> Hi,
>
> I am trying to work out why exporting a VM to an export storage domain on
> glusterfs is so slow.
>
> I am using oVirt and RHV, both installations on version 4.1.7.
>
> Hosts have dedicated nics for rhevm network - 1gbps, data storage itself
> is on FC.
>
> The GlusterFS cluster lives separately on 4 dedicated hosts. It has slow disks
> but I can achieve about 200-400mbit throughput in other applications (we
> are using it for "cold" data, backups mostly).
>
> I am using this glusterfs cluster as backend for export storage. When I
> am exporting vm I can see only about 60-80mbit throughput.
>
> What could be the bottleneck here?
>
> Could it be qemu-img utility?
>
> vdsm  97739  0.3  0.0 354212 29148 ?S /usr/bin/qemu-img convert -p -t none -T none -f raw
>
> /rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> -O raw
> /rhev/data-center/mnt/glusterSD/10.20.30.41:
> _rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
>
> Any idea how to make it work faster, or what throughput I should expect?
>

gluster storage operations use a FUSE mount - so every write has to:
- travel to the kernel
- travel back to the gluster fuse helper process
- travel to all 3 replicas - replication is done on the client side
- return to the kernel when all writes have succeeded
- return to the caller

So gluster will never set any speed record.

Additionally, you are copying from a raw LV on FC - qemu-img cannot do anything
smart to avoid copying unused clusters. Instead it copies gigabytes of zeros
from FC.

However 7.5-10 MiB/s sounds too slow.

I would try to test with dd - how much time does it take to copy
the same image from FC to your gluster storage?

dd
if=/rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/__test__
bs=8M oflag=direct status=progress

If dd can do this faster, please ask on qemu-discuss mailing list:
https://lists.nongnu.org/mailman/listinfo/qemu-discuss

If both give similar results, I think asking in gluster mailing list
about this can help. Maybe your gluster setup can be optimized.

Nir


>
> Cheers,
>
> Jiri
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VDSM multipath.conf - prevent automatic management of local devices

2017-11-22 Thread Ben Bradley

Hi All

I have been running ovirt in a lab environment on CentOS 7 for several 
months but have only just got around to really testing things.
I understand that VDSM manages multipath.conf and I understand that I 
can make changes to that file and set it to private to prevent VDSM 
making further changes.


I don't mind VDSM managing the file, but is it possible to prevent
local devices from being automatically added to multipathd?


Many times I have had to flush local devices from multipath when they 
are added/removed or re-partitioned or the system is rebooted.
It doesn't even look like oVirt does anything with these devices once
they are set up in multipathd.


I'm assuming it's the VDSM additions to multipath that are causing this. 
Can anyone else confirm this?


Is there a way to prevent new or local devices being added automatically?
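
As a sketch of the usual multipath-level approach (the WWID and devnode values below are placeholders for your local disks): add a blacklist section to /etc/multipath.conf and, as I understand it, add a "# VDSM PRIVATE" line near the top (right under the VDSM revision header) so VDSM leaves the edited file alone:

# VDSM PRIVATE

blacklist {
    # exclude a local disk by WWID (preferred)
    wwid "3600508b1001c1234567890abcdef1234"
    # or exclude device nodes by name pattern
    devnode "^sda$"
}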

Regards
Ben
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading from Hosted Engine 3.6 to 4.X

2017-11-22 Thread Cam Wright
Thanks for your response. Good to know that what I've suggested isn't
completely whacky!

> You didn't mention if you can stand downtime for your VMs or not.
We can't afford a lot of downtime on the VMs, probably 1-2 hours early
morning at maximum given business needs, but we'd prefer to not have
any downtime if at all possible.

...on your suggested plan, as that seems the more sane option if we
can get a bigger maintenance window than the aforementioned couple of
hours.
The biggest issue I can see even in step one, however, is that all of
our hosts are hosted-engine hosts, in that all four hosts are capable
of running the engine, so we'd need to bring both clusters (i.e. all
hosts) down.

Having said that, it seems like a safer plan... as long as business
requirements can accommodate.

Thanks again.

-C



Cam Wright - Systems and Technical Resource Administrator
CUTTINGEDGE /
90 Victoria St, West End, Brisbane, QLD, 4101
T + 61 7 3013 6200  M 0420 827 007
E cwri...@cuttingedge.com.au | W www.cuttingedge.com.au

/SYD /BNE /TYO


On Wed, Nov 22, 2017 at 11:56 PM, Yedidyah Bar David  wrote:
> On Wed, Nov 22, 2017 at 2:46 PM, Cam Wright  
> wrote:
>>
>> Hi there,
>>
>> We're looking to upgrade our hosted engine setup from 3.6 to 4.0 (or 4.1)...
>>
>> We built the 3.6 setup a couple of years ago with Fedora 22 (we wanted
>> the newer-at-the-time kernel 4.4) on the hosts and engine, but when we
>> move to 4.X we'd like to move to EL7 on the engine (as that seems to
>> be the supported version) and to the oVirt Node ISO installer on the
>> hypervisors.
>>
>> We've got only four hosts in our oVirt datacentre, configured in two 
>> clusters.
>>
>> Our current idea is to take a backup of the oVirt database using the
>> backup-restore tool, and to take a 'dd' of the virtual disk too, for
>> good measure. Then upgrade the engine to 4.X and confirm that the 3.6
>> hosts will run, and then finally piecemeal upgrade the hosts to 4.X
>> using the oVirt Node ISO installer.
>>
>> Looking at this page -
>> https://www.ovirt.org/documentation/self-hosted/chap-Maintenance_and_Upgrading_Resources/
>> - it seems the 'hosted-engine --upgrade-appliance' path is the best
>> way to do this... but because our hosts are running Fedora instead of
>> EL, I think that makes this option moot to us.
>
> Basically yes. You might be able to somehow patch it to enforce
> this, not sure it's worth it.
>
>>
>> Is what I've suggested a valid upgrade path, or is there a more sane
>> way of going about this?
>
> Sounds reasonable.
>
> You didn't mention if you can stand downtime for your VMs or not.
> If not, or if you need to minimize it, you should design and test carefully.
>
> If you can, something like this should work:
>
> 1. Take down all VMs on all hosts that are hosted-engine hosts
> 2. Move all hosted-engine hosts to maintenance
> 3. Remove one hosted-engine host from the engine
> 4. Take a backup
> 5. Reinstall the host as el7
> 6. Deploy new hosted-engine on new storage on this host, tell it to
> not run engine-setup
> 7. Inside the new engine vm, restore the backup and engine-setup
> 8. See that you can start the VMs on the new host
> 9. Remove the other host on that cluster, reinstall it with el7, add
> 10. Handle the other cluster
>
> Plan well and test well. You can use VMs and nested-kvm for the testing.
> Do not restore a backup of the real engine on a test vm that has access
> to your hosts - it will try to manage them. Do the testing in an isolated
> network.
>
> Best regards,
>
>>
>> -C
>>
>> Cam Wright - Systems and Technical Resource Administrator
>> CUTTINGEDGE /
>> 90 Victoria St, West End, Brisbane, QLD, 4101
>> T + 61 7 3013 6200  M 0420 827 007
>> E cwri...@cuttingedge.com.au | W www.cuttingedge.com.au
>>
>> /SYD /BNE /TYO
>>
>> --
>>
>>
>> This email is confidential and solely for the use of the intended recipient.
>>   If you have received this email in error please notify the author and
>> delete it immediately. This email is not to be distributed without the
>> author's written consent. Unauthorised forwarding, printing, copying or use
>> is strictly prohibited and may be a breach of copyright. Any views
>> expressed in this email are those of the individual sender unless
>> specifically stated to be the views of Cutting Edge Post Pty Ltd (Cutting
>> Edge). Although this email has been sent in the belief that it is
>> virus-free, it is the responsibility of the recipient to ensure that it is
>> virus free. No responsibility is accepted by Cutting Edge for any loss or
>> damage arising in any way from receipt or use of this email.  This email may
>> contain legally privileged information and privilege is not waived if you
>> have received this email in error.
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
> --
> Didi

-- 


This email is confidential and solely for the use of the intended recipient.
  If