[ovirt-users] Re: Removing Direct Mapped LUNs

2021-07-15 Thread Nir Soffer
On Thu, Jul 15, 2021 at 3:50 PM Gianluca Cecchi
 wrote:
>
> On Fri, Apr 23, 2021 at 7:15 PM Nir Soffer  wrote:
>>
>>
>> >> > 1) Is this the expected behavior?
>> >>
>> >> yes, before removing multipath devices, you need to unzone the LUN on the
>> >> storage server. As oVirt doesn't manage the storage server in the case of
>> >> iSCSI, it has to be done by the storage server admin, and therefore oVirt
>> >> cannot manage the whole flow.
>> >>
>> > Thank you for the information. Perhaps you can expand, then, on how the
>> > volumes are picked up once mapped from the storage system? Traditionally,
>> > when mapping storage from an iSCSI or Fibre Channel array, we have to
>> > initiate a LIP or an iSCSI login. How is it that oVirt doesn't need to do
>> > this?
>> >
>> >> > 2) Are we supposed to go to each KVM host and manually remove the
>> >> > underlying multipath devices?
>> >>
>> >> oVirt provides an Ansible playbook for it:
>> >>
>> >> https://github.com/oVirt/ovirt-ansible-collection/blob/master/examples/
>> >> remove_mpath_device.yml
>> >>
>> >> Usage is as follows:
>> >>
>> >> ansible-playbook --extra-vars "lun=<LUN WWID>" remove_mpath_device.yml
>> >
>
>
> I had to decommission an iSCSI-based storage domain, after having added a
> new iSCSI one (with another portal) and moved all the objects into the new
> one (VM disks, template disks, ISO disks, leases).
> The environment is based on 4.4.6, with 3 hosts and an external engine.
> So I tried the Ansible playbook way to verify it.
>
> The initial situation is shown below; the storage domain to decommission is
> ovsd3750, based on the 5 TB LUN.
>
> $ sudo multipath -l
> 364817197c52f98316900666e8c2b0b2b dm-13 EQLOGIC,100E-00
> size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
> `-+- policy='round-robin 0' prio=0 status=active
>   |- 16:0:0:0 sde 8:64 active undef running
>   `- 17:0:0:0 sdf 8:80 active undef running
> 36090a0d800851c9d2195d5b837c9e328 dm-2 EQLOGIC,100E-00
> size=5.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
> `-+- policy='round-robin 0' prio=0 status=active
>   |- 13:0:0:0 sdb 8:16 active undef running
>   `- 14:0:0:0 sdc 8:32 active undef running
>
> Connections are using iSCSI multipathing (iscsi1 and iscsi2 in the web admin
> GUI), so that I have two paths to each LUN:
>
> $ sudo iscsiadm -m node
> 10.10.100.7:3260,1 
> iqn.2001-05.com.equallogic:0-8a0906-9d1c8500d-28e3c937b8d59521-ovsd3750
> 10.10.100.7:3260,1 
> iqn.2001-05.com.equallogic:0-8a0906-9d1c8500d-28e3c937b8d59521-ovsd3750
> 10.10.100.9:3260,1 
> iqn.2001-05.com.equallogic:4-771816-31982fc59-2b0b2b8c6e660069-ovsd3920
> 10.10.100.9:3260,1 
> iqn.2001-05.com.equallogic:4-771816-31982fc59-2b0b2b8c6e660069-ovsd3920
>
> $ sudo iscsiadm -m session
> tcp: [1] 10.10.100.7:3260,1 
> iqn.2001-05.com.equallogic:0-8a0906-9d1c8500d-28e3c937b8d59521-ovsd3750 
> (non-flash)
> tcp: [2] 10.10.100.7:3260,1 
> iqn.2001-05.com.equallogic:0-8a0906-9d1c8500d-28e3c937b8d59521-ovsd3750 
> (non-flash)
> tcp: [4] 10.10.100.9:3260,1 
> iqn.2001-05.com.equallogic:4-771816-31982fc59-2b0b2b8c6e660069-ovsd3920 
> (non-flash)
> tcp: [5] 10.10.100.9:3260,1 
> iqn.2001-05.com.equallogic:4-771816-31982fc59-2b0b2b8c6e660069-ovsd3920 
> (non-flash)
>
> One point not taken into consideration in the previously opened bugs, in my
> opinion, is the deletion of the iSCSI connections and nodes on the host side
> (probably to be done by the OS admin, but it could be handled by the Ansible
> playbook...)
> The bugs I'm referring to are:
> Bug 1310330 - [RFE] Provide a way to remove stale LUNs from hypervisors
> Bug 1928041 - Stale DM links after block SD removal
>
> Actions done:
> put storage domain into maintenance
> detach storage domain
> remove storage domain
> remove access from equallogic admin gui
>
> I have a group named ovirt in the Ansible inventory, composed of my 3 hosts:
> ov200, ov300 and ov301, and executed:
> $ ansible-playbook -b -l ovirt --extra-vars 
> "lun=36090a0d800851c9d2195d5b837c9e328" remove_mpath_device.yml
>
> It all went OK with ov200 and ov300, but for ov301 I got:
>
> fatal: [ov301]: FAILED! => {"changed": true, "cmd": "multipath -f 
> \"36090a0d800851c9d2195d5b837c9e328\"", "delta": "0:00:00.009003", "end": 
> "2021-07-15 11:17:37.340584", "msg": "non-zero return code", "rc": 1, 
> "start": "2021-07-15 11:17:37.331581", "stderr": "Jul 15 11:17:37 | 
> 36090a0d800851c9d2195d5b837c9e328: map in use", "stderr_lines": ["Jul 15 
> 11:17:37 | 36090a0d800851c9d2195d5b837c9e328: map in use"], "stdout": "", 
> "stdout_lines": []}
>
> the complete output:
>
> $ ansible-playbook -b -l ovirt --extra-vars 
> "lun=36090a0d800851c9d2195d5b837c9e328" remove_mpath_device.yml
>
> PLAY [Cleanly remove unzoned storage devices (LUNs)] 
> *
>
> TASK [Gathering Facts] 
> ***
> ok: [ov200]
> ok: [ov300]
> ok: [ov301]
>
> TASK [Get underlying disks (paths) for a multipath device and turn them into a list.]

[ovirt-users] Re: Removing Direct Mapped LUNs

2021-07-15 Thread Gianluca Cecchi
On Fri, Apr 23, 2021 at 7:15 PM Nir Soffer  wrote:

>
> >> > 1) Is this the expected behavior?
> >>
> >> yes, before removing multipath devices, you need to unzone the LUN on the
> >> storage server. As oVirt doesn't manage the storage server in the case of
> >> iSCSI, it has to be done by the storage server admin, and therefore oVirt
> >> cannot manage the whole flow.
> >>
> > Thank you for the information. Perhaps you can expand, then, on how the
> > volumes are picked up once mapped from the storage system? Traditionally,
> > when mapping storage from an iSCSI or Fibre Channel array, we have to
> > initiate a LIP or an iSCSI login. How is it that oVirt doesn't need to do this?
> >
> >> > 2) Are we supposed to go to each KVM host and manually remove the
> >> > underlying multipath devices?
> >>
> >> oVirt provides an Ansible playbook for it:
> >>
> >> https://github.com/oVirt/ovirt-ansible-collection/blob/master/examples/
> >> remove_mpath_device.yml
> >>
> >> Usage is as follows:
> >>
> >> ansible-playbook --extra-vars "lun=<LUN WWID>" remove_mpath_device.yml
> >
>

I had to decommission an iSCSI-based storage domain, after having added a
new iSCSI one (with another portal) and moved all the objects into the new
one (VM disks, template disks, ISO disks, leases).
The environment is based on 4.4.6, with 3 hosts and an external engine.
So I tried the Ansible playbook way to verify it.
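
(For reference, the cleanup the playbook performs on each host boils down to
roughly the following manual steps; this is just a sketch, assuming the map is
no longer in use, with the WWID and sdX names taken from the output below:)

# flush the multipath map of the unzoned LUN
$ sudo multipath -f 36090a0d800851c9d2195d5b837c9e328
# then delete each underlying SCSI path device (here sdb and sdc)
$ echo 1 | sudo tee /sys/block/sdb/device/delete
$ echo 1 | sudo tee /sys/block/sdc/device/delete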

The initial situation is shown below; the storage domain to decommission is
ovsd3750, based on the 5 TB LUN.

$ sudo multipath -l
364817197c52f98316900666e8c2b0b2b dm-13 EQLOGIC,100E-00
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 16:0:0:0 sde 8:64 active undef running
  `- 17:0:0:0 sdf 8:80 active undef running
36090a0d800851c9d2195d5b837c9e328 dm-2 EQLOGIC,100E-00
size=5.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 13:0:0:0 sdb 8:16 active undef running
  `- 14:0:0:0 sdc 8:32 active undef running

Connections are using iSCSI multipathing (iscsi1 and iscsi2 in the web admin
GUI), so that I have two paths to each LUN:

$ sudo iscsiadm -m node
10.10.100.7:3260,1
iqn.2001-05.com.equallogic:0-8a0906-9d1c8500d-28e3c937b8d59521-ovsd3750
10.10.100.7:3260,1
iqn.2001-05.com.equallogic:0-8a0906-9d1c8500d-28e3c937b8d59521-ovsd3750
10.10.100.9:3260,1
iqn.2001-05.com.equallogic:4-771816-31982fc59-2b0b2b8c6e660069-ovsd3920
10.10.100.9:3260,1
iqn.2001-05.com.equallogic:4-771816-31982fc59-2b0b2b8c6e660069-ovsd3920

$ sudo iscsiadm -m session
tcp: [1] 10.10.100.7:3260,1
iqn.2001-05.com.equallogic:0-8a0906-9d1c8500d-28e3c937b8d59521-ovsd3750
(non-flash)
tcp: [2] 10.10.100.7:3260,1
iqn.2001-05.com.equallogic:0-8a0906-9d1c8500d-28e3c937b8d59521-ovsd3750
(non-flash)
tcp: [4] 10.10.100.9:3260,1
iqn.2001-05.com.equallogic:4-771816-31982fc59-2b0b2b8c6e660069-ovsd3920
(non-flash)
tcp: [5] 10.10.100.9:3260,1
iqn.2001-05.com.equallogic:4-771816-31982fc59-2b0b2b8c6e660069-ovsd3920
(non-flash)

One point not taken into consideration in the previously opened bugs, in my
opinion, is the deletion of the iSCSI connections and nodes on the host side
(probably to be done by the OS admin, but it could be handled by the Ansible
playbook, e.g. along the lines of the sketch after the bug list below...)
The bugs I'm referring to are:
Bug 1310330 - [RFE] Provide a way to remove stale LUNs from hypervisors
Bug 1928041 - Stale DM links after block SD removal
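
(To illustrate the kind of host-side cleanup I mean, something along these
lines could be added; just a sketch, using the ovsd3750 target and portal
from the output above:)

# log out of the sessions for the decommissioned target
$ sudo iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-9d1c8500d-28e3c937b8d59521-ovsd3750 -p 10.10.100.7:3260 --logout
# delete the node records so the sessions are not re-created at the next login
$ sudo iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-9d1c8500d-28e3c937b8d59521-ovsd3750 -p 10.10.100.7:3260 -o delete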

Actions done:
put storage domain into maintenance
detach storage domain
remove storage domain
remove access from equallogic admin gui

I have a group named ovirt in the Ansible inventory, composed of my 3 hosts:
ov200, ov300 and ov301, and executed:
$ ansible-playbook -b -l ovirt --extra-vars
"lun=36090a0d800851c9d2195d5b837c9e328" remove_mpath_device.yml

It all went OK with ov200 and ov300, but for ov301 I got:

fatal: [ov301: FAILED! => {"changed": true, "cmd": "multipath -f
\"36090a0d800851c9d2195d5b837c9e328\"", "delta": "0:00:00.009003", "end":
"2021-07-15 11:17:37.340584", "msg": "non-zero return code", "rc": 1,
"start": "2021-07-15 11:17:37.331581", "stderr": "Jul 15 11:17:37 |
36090a0d800851c9d2195d5b837c9e328: map in use", "stderr_lines": ["Jul 15
11:17:37 | 36090a0d800851c9d2195d5b837c9e328: map in use"], "stdout": "",
"stdout_lines": []}

the complete output:

$ ansible-playbook -b -l ovirt --extra-vars
"lun=36090a0d800851c9d2195d5b837c9e328" remove_mpath_device.yml

PLAY [Cleanly remove unzoned storage devices (LUNs)]
*

TASK [Gathering Facts]
***
ok: [ov200]
ok: [ov300]
ok: [ov301]

TASK [Get underlying disks (paths) for a multipath device and turn them
into a list.] 
changed: [ov300]
changed: [ov200]
changed: [ov301]

TASK [Remove from multipath device.]
*
changed: [ov200]
changed: [ov300]
fatal: [ov301]: FAILED! => {"changed": true, "cmd": "multipath -f
\"36090a0d800851c9d2195d5b837c9e328\"", "delta": "0:00:00.009003", "end":
"2021-07-15 11:17:37.340584", "msg": "non-zero return code", "rc": 1,
"start": "2021-07-15 11:17:37.331581", "stderr": "Jul 15 11:17:37 |
36090a0d800851c9d2195d5b837c9e328: map in use", "stderr_lines": ["Jul 15
11:17:37 | 36090a0d800851c9d2195d5b837c9e328: map in use"], "stdout": "",
"stdout_lines": []}
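
(In a situation like this, a few quick checks on ov301 usually show what is
still holding the map open; just a sketch, with the device names taken from
the error and from the multipath output above:)

# open count and state of the map
$ sudo dmsetup info 36090a0d800851c9d2195d5b837c9e328
# anything (e.g. stale LVM LVs) still stacked on top of the dm device?
$ ls /sys/block/dm-2/holders/
# any process keeping the device busy?
$ sudo fuser -vm /dev/mapper/36090a0d800851c9d2195d5b837c9e328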

[ovirt-users] Re: random adapter (NIC) resets on oVirt Node 4.4.6

2021-07-15 Thread Lev Veyde
Hi Tivon,

I think that the most interesting one to see is /var/log/messages;
however, I think it's best to simply archive the whole /var/log.
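
(Something along these lines should grab everything in one go; just a sketch,
adjust the host name, paths and time range as needed:)

# kernel messages around the resets, plus an archive of the whole /var/log
$ journalctl -k --since "2021-07-14" > /tmp/n3-kernel.log
$ sudo tar czf /tmp/n3-logs.tar.gz /var/log /tmp/n3-kernel.log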

Thanks in advance,

On Thu, Jul 15, 2021 at 1:36 PM Tivon Häberlein 
wrote:

> Hi Lev,
>
> thanks for your reply.
> I'll gladly grab the logs in the next couple of days (got to go back to
> the DC to swap the cards back).
>
> Can you give me a list of logs I should grab so I don't miss any?
>
> --
> Best regards
> Tivon Häberlein
>
> On 15.07.2021 01:25, Lev Veyde wrote:
>
> Hi Tivon,
>
I personally think that it's worth it to reproduce the issue and get the
logs, even though it does really sound like a driver/kernel issue.
That may help us better understand why it happens, and maybe even lead to a
driver/kernel fix.
>
> Thanks in advance,
>
> On Thu, Jul 15, 2021 at 12:38 AM Tivon Häberlein <
> tivon.haeberl...@secges.de> wrote:
>
>> Hi Nathaniel,
>>
>> thanks for your time here and sorry for my late reply now.
>>
>> Even though my NICs didn't use the E1000E driver, I now got a Broadcom NIC
>> from the stash and gave it a try.
>> I'm happy to announce that the NICs don't seem to be resetting on the
>> Broadcom NIC.
>> This obviously means that there's some driver issue with the Intel NICs I
>> have been trying.
>>
>> I still can't get the host into an operational state because of "Failed to
>> connect Host n3 to Storage Pool cl1", even though NFS is mounted properly,
>> but this is a different issue, I think.
>>
>> If you want, I can reproduce this issue and grab all the logs to maybe find a
>> fix other than "get a Broadcom NIC" for the community.
>> To be honest though, I think this can just be added to the "weird driver
>> fuckups in CentOS" list if we start digging.
>>
>> --
>> Best regards
>> Tivon Häberlein
>>
>>
>>
>> On 13.07.2021 01:07, Nathaniel Roach via Users wrote:
>>
>>
>> On 12/7/21 11:44 pm, Nathaniel Roach via Users wrote:
>>
>> Do you get anything in the logs at all? For something like this I would
>> expect it to show in syslog from the kernel.
>>
>> It really does sound like the E1000E issue, but will probably have a
>> different fix - I first encountered it on a router when I was pushing
>> >100Mbps in *and then back out* the same NIC. Otherwise it wouldn't
>> happen at all. That would explain why it's not an issue in maintenance mode
>> and downloading an image works fine.
>> On 12/7/21 7:57 am, Tivon Häberlein wrote:
>>
>> Hi Strahil,
>>
>> the server uses Intel NICs with the ixgbe and igb kernel drivers.
>> I did upgrade the firmware to the latest available one (through the Dell
>> Lifecycle Controller).
>> I also tried replacing the network card itself, but without success.
>>
>> As this issue did not arise when running Debian 10, or even oVirt Node
>> before adding it to the cluster, I don't think it's hardware related. For my
>> testing I mounted my oVirt datastore manually on the fresh install of oVirt
>> Node (using the ISO) and then copied a large ISO file to the local disk.
>> This fills the NIC up to the full 1 Gbit/s I have available there for a
>> good 5 to 10 minutes.
>> Also the administration through cockpit works perfectly before adding it
>> to the cluster.
>>
>> As soon as I add the node to the cluster, the trouble starts:
>> 1. oVirt reports that the install has failed on this host
>> 2. the node logs (kernel log) adapter resets on some interfaces (even
>> ones that aren't UP)
>>
>> Having read your message again, are you able to capture these log events
>> before the node gets fenced (or just disable fencing for the time being)?
>>
>> 3. the engine loses connection to the host and declares it "Unresponsive"
>> 4. the node becomes unmanageable through Cockpit or SSH because the
>> connection is lost repeatedly.
>> 5. the fencing agent reboots the node (If fencing is enabled)
>> 6. node comes up and gets added to the cluster (oVirt says the node is in
>> state UP)
>> 7. repeat from step 2
>>
>> It seems that this behavior stops when I put the node into maintenance.
>> Then I can even mount the Datastore manually and transfer large ISOs
>> without it dropping the connection.
>>
>> This is all very strange and I don't understand what causes this.
>>
>> Thank you.
>>
>> --
>> Best regards
>> Tivon Häberlein
>>
>> On 11.07.2021 13:51, Strahil Nikolov wrote:
>>
>> Are you sure it's not a HW issue?
>> Try to update the server to the latest firmware and test again. At least it
>> won't hurt.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Sat, Jul 10, 2021 at 14:45, Tivon Häberlein
>>   wrote:
>>
>> Hi,
>>
>> I've been trying to get oVirt Node 4.4.6 up and running on my Dell R620
>> hosts, but am facing a strange issue where seemingly all network adapters
>> get reset at random times after install.
>> The interfaces reset as soon as a bit of traffic is flowing through them.
>> The logs also show NFS timeouts.
>>
>> This only happens after I have installed the host using the oVirt engine
>> and it also only happens when the host is connected to the engine. When the
>> host is in maintenance mode it also seems to work fine.

[ovirt-users] Re: random adapter (NIC) resets on oVirt Node 4.4.6

2021-07-15 Thread Tivon Häberlein

Hi Lev,

thanks for your reply.
I'll gladly grab the logs in the next couple of days (got to go back to 
the DC to swap the cards back).


Can you give me a list of logs I should grab so I don't miss any?

--
Best regards
Tivon Häberlein

On 15.07.2021 01:25, Lev Veyde wrote:

Hi Tivon,

I personally think that it's worth it to reproduce the issue and get
the logs, even though it does really sound like a driver/kernel issue.
That may help us better understand why it happens, and maybe even
lead to a driver/kernel fix.


Thanks in advance,

On Thu, Jul 15, 2021 at 12:38 AM Tivon Häberlein 
<tivon.haeberl...@secges.de> wrote:


Hi Nathaniel,

thanks for your time here and sorry for my late reply now.

Even though my NICs didn't use the E1000E driver, I now got a
Broadcom NIC from the stash and gave it a try.
I'm happy to announce that the NICs don't seem to be resetting on
the Broadcom NIC.
This obviously means that there's some driver issue with the Intel
NICs I have been trying.

I still can't get the host into an operational state because of "Failed
to connect Host n3 to Storage Pool cl1", even though NFS is mounted
properly, but this is a different issue, I think.

If you want, I can reproduce this issue and grab all the logs to maybe
find a fix other than "get a Broadcom NIC" for the community.
To be honest though, I think this can just be added to the "weird
driver fuckups in CentOS" list if we start digging.

-- 
Best regards

Tivon Häberlein



On 13.07.2021 01:07, Nathaniel Roach via Users wrote:



On 12/7/21 11:44 pm, Nathaniel Roach via Users wrote:


Do you get anything in the logs at all? For something like this
I would expect it to show in syslog from the kernel.

It really does sound like the E1000E issue, but will probably
have a different fix - I first encountered it on a router when I
was pushing >100Mbps in *and then back out* the same NIC.
Otherwise it wouldn't happen at all. That would explain why it's
not an issue in maintenance mode and downloading an image works
fine.

On 12/7/21 7:57 am, Tivon Häberlein wrote:


Hi Strahil,

the server uses Intel NICs with the ixgbe and igb kernel drivers.
I did upgrade the firmware to the latest available one (through the
Dell Lifecycle Controller).
I also tried replacing the network card itself, but without success.
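
(As a quick cross-check, the driver and firmware each interface is actually
running can be confirmed with something like the following; the interface
name is only an example:)

# shows the driver (ixgbe/igb), driver version and firmware-version
$ ethtool -i eno1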

As this issue did not arise when running Debian 10, or even
oVirt Node before adding it to the cluster, I don't think it's
hardware related. For my testing I mounted my oVirt datastore
manually on the fresh install of oVirt Node (using the ISO) and
then copied a large ISO file to the local disk. This fills the
NIC up to the full 1 Gbit/s I have available there for a good 5
to 10 minutes.
Also the administration through cockpit works perfectly before
adding it to the cluster.

As soon as I add the node to the cluster, the trouble starts:
1. oVirt reports that the install has failed on this host
2. the node logs (kernel log) adapter resets on some interfaces
(even ones that aren't UP)


Having read your message again, are you able to capture these log
events before the node gets fenced (or just disable fencing for
the time being)?


3. the engine loses connection to the host and declares it
"Unresponsive"
4. the node becomes unmanageable through Cockpit or SSH because
the connection is lost repeatedly.
5. the fencing agent reboots the node (If fencing is enabled)
6. node comes up and gets added to the cluster (oVirt says the
node is in state UP)
7. repeat from step 2

It seems that this behavior stops when I put the node into
maintenance. Then I can even mount the Datastore manually and
transfer large ISOs without it dropping the connection.

This is all very strange and I don't understand what causes this.

Thank you.

-- 
Best regards

Tivon Häberlein
On 11.07.2021 13:51, Strahil Nikolov wrote:

Are you sure it's not a HW issue?
Try to update the server to the latest firmware and test again. At
least it won't hurt.

Best Regards,
Strahil Nikolov

On Sat, Jul 10, 2021 at 14:45, Tivon Häberlein

 wrote:

Hi,

I've been trying to get oVirt Node 4.4.6 up and running on
my Dell R620 hosts, but am facing a strange issue where
seemingly all network adapters get reset at random times
after install.
The interfaces reset as soon as a bit of traffic is
flowing through them.
The logs also show NFS timeouts.

This only happens after I have installed the host using
the oVirt engine and it also only happens when the host is
connected to the engine. When the host is in maintenance
mode it also seems to work fine.

The host and ne

[ovirt-users] Re: oVirt and ARM

2021-07-15 Thread Marko Vrgotic
Thank you all.

-
kind regards/met vriendelijke groeten

Marko Vrgotic
Sr. System Engineer @ System Administration

ActiveVideo
o: +31 (35) 6774131
m: +31 (65) 5734174
e: m.vrgo...@activevideo.com
w: www.activevideo.com

ActiveVideo Networks BV. Mediacentrum 3745 Joop van den Endeplein 1.1217 WJ 
Hilversum, The Netherlands. The information contained in this message may be 
legally privileged and confidential. It is intended to be read only by the 
individual or entity to whom it is addressed or by their designee. If the 
reader of this message is not the intended recipient, you are on notice that 
any distribution of this message, in any form, is strictly prohibited.  If you 
have received this message in error, please immediately notify the sender 
and/or ActiveVideo Networks, LLC by telephone at +1 408.931.9200 and delete or 
destroy any copy of this message.



From: Arik Hadas 
Date: Wednesday, 14 July 2021 at 15:26
To: Milan Zamazal 
Cc: Marko Vrgotic , Sandro Bonazzola 
, Evgheni Dereveanchin , Zhenyu Zheng 
, Joey Ma , users@ovirt.org 

Subject: Re: oVirt and ARM

On Wed, Jul 14, 2021 at 3:14 PM Milan Zamazal 
<mzama...@redhat.com> wrote:
Arik Hadas <aha...@redhat.com> writes:

> On Wed, Jul 14, 2021 at 10:36 AM Milan Zamazal 
> <mzama...@redhat.com> wrote:
>
>> Marko Vrgotic <m.vrgo...@activevideo.com>
>> writes:
>>
>> > Dear Arik and Milan,
>> >
>> > In the meantime, I was asked to check if in current 4.4 version or
>> > coming 4.5, are/will there any capabilities or options of emulating
>> > aarch64 on x86_64 platform and if so, what would be the steps to
>> > test/enable it.
>> >
>> > Can you provide some information?
>>
>> Hi Marko,
>>
>> I don't think there is a way to emulate a non-native architecture.
>> Engine doesn't have ARM support and it cannot handle ARM (native or
>> emulated) hosts.  You could try to run emulated ARM VMs presented as x86
>> to Engine using Vdsm hooks but I doubt it would work.
>>
>
> Oh, I just sent a draft I had in my mailbox without noticing this comment,
> and I see we both mentioned Vdsm hooks.
> What is the source of the doubts about Vdsm hooks working for this?

It's possible to override the domain XML obtained from Engine to change
it from x86 to ARM, but Engine will get back the non-x86 domain XML.
Engine may not care about the emulator, but perhaps it can be confused by
the reported CPU etc. There can be problems with devices and
architecture-specific settings in both directions. Engine will base its
assumptions about the VM's capabilities on x86, which won't exactly
match ARM.
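
(For anyone who wants to experiment with that: if I recall the hook mechanism
correctly, a Vdsm hook is just an executable dropped under
/usr/libexec/vdsm/hooks/before_vm_start/ that edits the domain XML file whose
path Vdsm passes in the _hook_domxml environment variable. A very rough shell
sketch, purely illustrative; a real x86-to-ARM rewrite would need to touch far
more than this:)

#!/bin/bash
# before_vm_start hook: rewrite a couple of domain XML fields (illustration only)
set -e
domxml="$_hook_domxml"   # temporary file containing the libvirt domain XML
xmlstarlet ed -L \
  -u '/domain/os/type/@arch' -v 'aarch64' \
  -u '/domain/os/type/@machine' -v 'virt' \
  "$domxml"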

Yeah, it would be best to try it and see where the misalignment between what the
engine thinks and what the VM actually runs with leads.
But off the top of my head, I don't think a misalignment with regard to CPU or
capabilities would be problematic, since the engine doesn't check what the VM is
actually set with; it just assumes that the VM is set with what it wrote to its
domain XML.
The engine certainly looks at the devices, but as long as they preserve their
user-alias, managed devices won't be unplugged. Other devices would be added as
unmanaged. So I think that should also be fine.


Maybe it'd be possible to simply run and stop a VM with some effort, and
that would be enough for certain purposes. But for more than that, it's a
question whether the effort would be better spent on implementing
proper architecture support.

>> I'm afraid the only way is to add ARM support to oVirt. A former
>> colleague of mine played with running oVirt on Raspberry Pi hosts some years
>> ago (there are traces of that effort in Vdsm), and I think adding ARM
>> support should be, at least in theory, possible. The particular features
>> available would mostly depend on ARM support in QEMU and libvirt.
>
>
>> Regards,
>> Milan
>>
>> > -
>> > kind regards/met vriendelijke groeten
>> >
>> > Marko Vrgotic
>> > Sr. System Engineer @ System Administration
>> >
>> > ActiveVideo
>> > o: +31 (35) 6774131
>> > m: +31 (65) 5734174
>> > e: m.vrgo...@activevideo.com
>> > w: www.activevideo.com
>> >