Re: [ovirt-users] hosted-engine engine crash

2016-07-13 Thread Mark Gagnon
Sorry, hit send by accident.

More details :

When I notice that the engine is down, if I run hosted-engine --vm-status
on any host, it hangs and then prints a bunch of output saying the engine is down.
If I run hosted-engine --vm-start on any one of the hosts, the engine just
starts and gets back to business.

hosted-engine --vm-status result :

ovirt_hosted_engine_ha.lib.exceptions.RequestError: Failed to set storage
domain FilesystemBackend, options {'dom_type': 'nfs3', 'sd_uuid':
'3d67cf89-92de-428d-9714-e02aceae281e'}: Connection timed out


Here are some logs from vdsm.log:

Thread-98649::WARNING::2016-07-13
22:54:04,418::fileSD::749::Storage.scanDomains::(collectMetaFiles) Could
not collect metadata file for domain path
/rhev/data-center/mnt/engine.domain.com:_var_lib_exports_iso
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/fileSD.py", line 735, in collectMetaFiles
sd.DOMAIN_META_DATA))
  File "/usr/share/vdsm/storage/outOfProcess.py", line 121, in glob
return self._iop.glob(pattern)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 534,
in glob
return self._sendCommand("glob", {"pattern": pattern}, self.timeout)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 419,
in _sendCommand
raise Timeout(os.strerror(errno.ETIMEDOUT))
Timeout: Connection timed out
Thread-63::ERROR::2016-07-13
22:54:04,418::sdc::145::Storage.StorageDomainCache::(_findDomain) domain
bd73cb0f-bb9c-432a-90ee-a32757a8bc10 not found


Thread-98498::ERROR::2016-07-13
22:50:33,895::brokerlink::279::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(_communicate)
Connection closed: Connection timed out
Thread-98498::ERROR::2016-07-13 22:50:33,895::API::1871::vds::(_getHaInfo)
failed to retrieve Hosted Engine HA info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1851, in _getHaInfo
stats = instance.get_all_stats()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 103, in get_all_stats
self._configure_broker_conn(broker)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 180, in _configure_broker_conn
dom_type=dom_type)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 176, in set_storage_domain
.format(sd_type, options, e))
RequestError: Failed to set storage domain FilesystemBackend, options
{'dom_type': 'nfs3', 'sd_uuid': '3d67cf89-92de-428d-9714-e02aceae281e'}:
Connection timed out


Thanks for your input. Even if it is a storage problem, when it does
happen, how can I force the engine to be restarted?
At first I thought it was a split-brain issue, so I added a third host, but I
still have the same problem.
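
(In case it helps, this is the kind of crude watchdog I'm considering running
from cron on one of the hosts until the root cause is fixed -- the engine URL
is taken from the mount path above, and the timeout is just an illustrative
guess:)

#!/bin/bash
# Crude watchdog sketch: if the engine web UI stops responding, try to start
# the hosted-engine VM on this host. URL and timeout are assumptions.
ENGINE_URL="https://engine.domain.com/ovirt-engine/"

if ! curl --insecure --silent --fail --max-time 30 --output /dev/null "${ENGINE_URL}"; then
    logger "hosted-engine watchdog: engine unreachable, attempting --vm-start"
    hosted-engine --vm-start
fi

That said, fixing the NFS/ioprocess timeouts above is the real solution; the HA
agents should normally restart the engine on their own.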



On Wed, Jul 13, 2016 at 11:13 PM, Mark Gagnon  wrote:

> Hi,
> We have a three-node hosted-engine setup using two NFS3 shares, and the
> engine keeps crashing every few days.
>
> Looking at the VDSM logs, it looks like a storage problem, but I'm wondering
> why they don't restart the engine?
>
>


[ovirt-users] hosted-engine engine crash

2016-07-13 Thread Mark Gagnon
Hi,
We have a three-node hosted-engine setup using two NFS3 shares, and the
engine keeps crashing every few days.

Looking at the VDSM logs, it looks like a storage problem, but I'm wondering
why they don't restart the engine?


[ovirt-users] Trunk port for a guest nic?

2016-07-13 Thread Dan Lavu
Hello,

I remember reading some posts about this in the past, but I don't know if
anything came of it. Is this now possible? If so, does anybody have any
documentation on how to do this in 4.0?

Thanks,

Dan


Re: [ovirt-users] CARP Fails on Bond mode=1

2016-07-13 Thread Jorick Astrego
Hi,

You could try adding "net.inet.carp.drop_echoed=1" to /etc/sysctl.conf on
pfSense.

It is an old fix for VMware and FreeBSD. I am not able to test it at the
moment, but I can see it's not in the config of the latest version of
pfSense.
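
Roughly, and assuming the tunable still exists in your FreeBSD/pfSense
release, applying it would look like this:

# On the pfSense shell: set it at runtime, then persist it across reboots.
sysctl net.inet.carp.drop_echoed=1
echo 'net.inet.carp.drop_echoed=1' >> /etc/sysctl.conf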

Maybe this will help?

https://doc.pfsense.org/index.php/CARP_Configuration_Troubleshooting


Client Port Issues

If a physical CARP cluster is connected to a switch with an ESX box
using multiple ports on the ESX box (lagg group or similar), and
only certain devices/IPs are reachable by the target VM, then the
port group settings in ESX may need adjusted to set the load
balancing for the group to hash based on IP, not the originating
interface.

Side effects of having that set incorrectly include:

  * Traffic only reaching the target VM in promisc mode on its NIC
  * Inability to reach the CARP IP from the target VM when the
"real" IP of the primary firewall is reachable
  * Port forwards or other inbound connections to the target VM work
from some IPs and not others.






On 07/13/2016 03:59 PM, Matt . wrote:
> As addition: I get the same result using mode=4, only when I use
> multiple VLANS on the interface.
>
> 2016-07-13 15:58 GMT+02:00 Matt . :
>> Hi Pavel,
>>
>> Thanks for your update. I also saw that the post are both online but I
>> thought the second nic only advertises the mac so the switch does not
>> get confused.
>>
>> The issue might be that i do VRRP, so the bond is connected to two
>> switches, they are not stacked, only trunked as that's what VRRP
>> requires and works well on the side where there is only one VLAN on
>> the Host interface.
>>
>> It just goes wrong on multiple vlans.
>>
>> This is what I see everywhere.
>>
>> Mode 1 (active-backup)
>> This mode places one of the interfaces into a backup state and will
>> only make it active if the link is lost by the active interface. Only
>> one slave in the bond is active at an instance of time. A different
>> slave becomes active only when the active slave fails. This mode
>> provides fault tolerance.
>>
>> It's sure I need to get my traffic back on my sending port, so that is
>> why the arp for the passive port was there I thought.
>>
>> Are there other modes that should be working on VRRP in your understanding ?
>>
>> Thanks a lot,
>>
>> Matt
>>
>>
>>
>> 2016-07-13 15:43 GMT+02:00 Pavel Gashev :
>>> In mode=1 the active interface sends traffic, but both interfaces accept 
>>> incoming traffic. Hardware switches send broadcast/multicast/unknown 
>>> destination MACs to all ports, including the passive interface. So packet 
>>> sent from the active interface can be received back from the passive 
>>> interface. FreeBSD CARP just would go mad when it receives its own packets.
>>>
>>> I believe if you get Linux implementation, it will work well in the same 
>>> network setup. I use keepalived in oVirt VMs with bonded network, and have 
>>> no issues.
>>>
>>> -Original Message-
>>> From: "Matt ." 
>>> Date: Wednesday 13 July 2016 at 15:54
>>> To: Pavel Gashev , users 
>>> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>>>
>>> How can it lead into packet duplication when the passive should not be
>>> active and only it's mac-address should be visible on the switch to
>>> prevent confusion on the switch ?
>>>
>>> For a VRRP setup on the switch there is no other option then mode=1 as
>>> far as I know ?
>>>
>>> 2016-07-13 14:50 GMT+02:00 Pavel Gashev :
 I would say that bonding breaks CARP somehow. In example mode=1 can lead 
 to packet duplication, so pfsense can receive it's own packets. Try 
 firewall in pfsense all incomming packets that have the same source MAC 
 address as pfsense.

 -Original Message-
 From: "Matt ." 
 Date: Wednesday 13 July 2016 at 15:29
 To: Pavel Gashev 
 Subject: Re: [ovirt-users] CARP Fails on Bond mode=1

 Hi Pavel,

 No it's Pfsense, so FreeBSD.

 Is there something different there ?



 2016-07-13 13:59 GMT+02:00 Pavel Gashev :
> Matt,
>
> How is CARP implemented? Is it OpenBSD?
>
> -Original Message-
> From:  on behalf of "Matt ." 
> 
> Date: Wednesday 13 July 2016 at 12:42
> Cc: users 
> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>
> Hi Pavel,
>
> This is done and used without the Bond before.
>
> Now I applied a bond it goes wrong and I'm searching but can't find a
> thing about it.
>
>
>
> 2016-07-13 11:03 GMT+02:00 Pavel Gashev :
>> Matt,
>>
>> In order to use CARP/VRRP in a VM you have to disable MAC spoofing 
>> 

Re: [ovirt-users] CARP Fails on Bond mode=1

2016-07-13 Thread Matt .
As an addition: I get the same result using mode=4, but only when I use
multiple VLANs on the interface.

2016-07-13 15:58 GMT+02:00 Matt . :
> Hi Pavel,
>
> Thanks for your update. I also saw that the post are both online but I
> thought the second nic only advertises the mac so the switch does not
> get confused.
>
> The issue might be that i do VRRP, so the bond is connected to two
> switches, they are not stacked, only trunked as that's what VRRP
> requires and works well on the side where there is only one VLAN on
> the Host interface.
>
> It just goes wrong on multiple vlans.
>
> This is what I see everywhere.
>
> Mode 1 (active-backup)
> This mode places one of the interfaces into a backup state and will
> only make it active if the link is lost by the active interface. Only
> one slave in the bond is active at an instance of time. A different
> slave becomes active only when the active slave fails. This mode
> provides fault tolerance.
>
> It's sure I need to get my traffic back on my sending port, so that is
> why the arp for the passive port was there I thought.
>
> Are there other modes that should be working on VRRP in your understanding ?
>
> Thanks a lot,
>
> Matt
>
>
>
> 2016-07-13 15:43 GMT+02:00 Pavel Gashev :
>> In mode=1 the active interface sends traffic, but both interfaces accept 
>> incoming traffic. Hardware switches send broadcast/multicast/unknown 
>> destination MACs to all ports, including the passive interface. So packet 
>> sent from the active interface can be received back from the passive 
>> interface. FreeBSD CARP just would go mad when it receives its own packets.
>>
>> I believe if you get Linux implementation, it will work well in the same 
>> network setup. I use keepalived in oVirt VMs with bonded network, and have 
>> no issues.
>>
>> -Original Message-
>> From: "Matt ." 
>> Date: Wednesday 13 July 2016 at 15:54
>> To: Pavel Gashev , users 
>> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>>
>> How can it lead into packet duplication when the passive should not be
>> active and only it's mac-address should be visible on the switch to
>> prevent confusion on the switch ?
>>
>> For a VRRP setup on the switch there is no other option then mode=1 as
>> far as I know ?
>>
>> 2016-07-13 14:50 GMT+02:00 Pavel Gashev :
>>> I would say that bonding breaks CARP somehow. In example mode=1 can lead to 
>>> packet duplication, so pfsense can receive it's own packets. Try firewall 
>>> in pfsense all incomming packets that have the same source MAC address as 
>>> pfsense.
>>>
>>> -Original Message-
>>> From: "Matt ." 
>>> Date: Wednesday 13 July 2016 at 15:29
>>> To: Pavel Gashev 
>>> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>>>
>>> Hi Pavel,
>>>
>>> No it's Pfsense, so FreeBSD.
>>>
>>> Is there something different there ?
>>>
>>>
>>>
>>> 2016-07-13 13:59 GMT+02:00 Pavel Gashev :
 Matt,

 How is CARP implemented? Is it OpenBSD?

 -Original Message-
 From:  on behalf of "Matt ." 
 
 Date: Wednesday 13 July 2016 at 12:42
 Cc: users 
 Subject: Re: [ovirt-users] CARP Fails on Bond mode=1

 Hi Pavel,

 This is done and used without the Bond before.

 Now I applied a bond it goes wrong and I'm searching but can't find a
 thing about it.



 2016-07-13 11:03 GMT+02:00 Pavel Gashev :
> Matt,
>
> In order to use CARP/VRRP in a VM you have to disable MAC spoofing 
> prevention.
> http://lists.ovirt.org/pipermail/users/2015-May/032839.html
>
> -Original Message-
> From:  on behalf of "Matt ." 
> 
> Date: Tuesday 12 July 2016 at 21:58
> To: users 
> Subject: [ovirt-users] CARP Fails on Bond mode=1
>
> Hi guys,
>
> I have been testing bonding with a vm connected to the network on this
> bond mode=1 (vlans on top of it) where the vm uses a carp IP for
> failover.
>
> It seems that when the VM which holds the Carp IP and so is Master you
> can ping both IP's, so interface IP and Carp IP, but you cannot
> throw/route any traffic over it.
>
> You can route traffic over the interface IP of the Carp Slave.
>
> Is this known or just not possible ?
>
> I hope it's a "bug" :)
>
> Thanks,
>
> Matt
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


>>>
>>>
>>

Re: [ovirt-users] CARP Fails on Bond mode=1

2016-07-13 Thread Matt .
Hi Pavel,

Thanks for your update. I also saw that the ports are both online, but I
thought the second NIC only advertises its MAC so the switch does not
get confused.

The issue might be that I do VRRP, so the bond is connected to two
switches; they are not stacked, only trunked, as that's what VRRP
requires. It works well on the side where there is only one VLAN on
the host interface.

It just goes wrong with multiple VLANs.

This is what I see everywhere:

Mode 1 (active-backup)
This mode places one of the interfaces into a backup state and will
only make it active if the link is lost by the active interface. Only
one slave in the bond is active at an instance of time. A different
slave becomes active only when the active slave fails. This mode
provides fault tolerance.
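
(For what it's worth, this is how I check on the host which slave is
currently active -- the bond name below is just an assumption:)

# Show the bonding mode, link status and the currently active slave.
grep -E "Bonding Mode|Currently Active Slave|MII Status" /proc/net/bonding/bond0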

I'm sure I need to get my traffic back on my sending port, which is
why I thought the ARP entry for the passive port was there.

Are there other modes that should work with VRRP, in your understanding?

Thanks a lot,

Matt



2016-07-13 15:43 GMT+02:00 Pavel Gashev :
> In mode=1 the active interface sends traffic, but both interfaces accept 
> incoming traffic. Hardware switches send broadcast/multicast/unknown 
> destination MACs to all ports, including the passive interface. So packet 
> sent from the active interface can be received back from the passive 
> interface. FreeBSD CARP just would go mad when it receives its own packets.
>
> I believe if you get Linux implementation, it will work well in the same 
> network setup. I use keepalived in oVirt VMs with bonded network, and have no 
> issues.
>
> -Original Message-
> From: "Matt ." 
> Date: Wednesday 13 July 2016 at 15:54
> To: Pavel Gashev , users 
> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>
> How can it lead into packet duplication when the passive should not be
> active and only it's mac-address should be visible on the switch to
> prevent confusion on the switch ?
>
> For a VRRP setup on the switch there is no other option then mode=1 as
> far as I know ?
>
> 2016-07-13 14:50 GMT+02:00 Pavel Gashev :
>> I would say that bonding breaks CARP somehow. In example mode=1 can lead to 
>> packet duplication, so pfsense can receive it's own packets. Try firewall in 
>> pfsense all incomming packets that have the same source MAC address as 
>> pfsense.
>>
>> -Original Message-
>> From: "Matt ." 
>> Date: Wednesday 13 July 2016 at 15:29
>> To: Pavel Gashev 
>> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>>
>> Hi Pavel,
>>
>> No it's Pfsense, so FreeBSD.
>>
>> Is there something different there ?
>>
>>
>>
>> 2016-07-13 13:59 GMT+02:00 Pavel Gashev :
>>> Matt,
>>>
>>> How is CARP implemented? Is it OpenBSD?
>>>
>>> -Original Message-
>>> From:  on behalf of "Matt ." 
>>> 
>>> Date: Wednesday 13 July 2016 at 12:42
>>> Cc: users 
>>> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>>>
>>> Hi Pavel,
>>>
>>> This is done and used without the Bond before.
>>>
>>> Now I applied a bond it goes wrong and I'm searching but can't find a
>>> thing about it.
>>>
>>>
>>>
>>> 2016-07-13 11:03 GMT+02:00 Pavel Gashev :
 Matt,

 In order to use CARP/VRRP in a VM you have to disable MAC spoofing 
 prevention.
 http://lists.ovirt.org/pipermail/users/2015-May/032839.html

 -Original Message-
 From:  on behalf of "Matt ." 
 
 Date: Tuesday 12 July 2016 at 21:58
 To: users 
 Subject: [ovirt-users] CARP Fails on Bond mode=1

 Hi guys,

 I have been testing bonding with a vm connected to the network on this
 bond mode=1 (vlans on top of it) where the vm uses a carp IP for
 failover.

 It seems that when the VM which holds the Carp IP and so is Master you
 can ping both IP's, so interface IP and Carp IP, but you cannot
 throw/route any traffic over it.

 You can route traffic over the interface IP of the Carp Slave.

 Is this known or just not possible ?

 I hope it's a "bug" :)

 Thanks,

 Matt
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>>
>
>


Re: [ovirt-users] CARP Fails on Bond mode=1

2016-07-13 Thread Pavel Gashev
In mode=1 the active interface sends traffic, but both interfaces accept 
incoming traffic. Hardware switches send broadcast/multicast/unknown-destination 
MACs to all ports, including the passive interface. So a packet sent from the 
active interface can be received back on the passive interface. 
FreeBSD CARP would just go mad when it receives its own packets.

I believe that if you use a Linux implementation, it will work well in the same 
network setup. I use keepalived in oVirt VMs with bonded networks, and have no 
issues.
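
For reference, a minimal keepalived setup inside the VM could look roughly
like this (a sketch only -- the interface name, VRID, priority and virtual IP
below are illustrative, not taken from this thread):

# Install keepalived in the VM and drop in a minimal VRRP instance.
yum install -y keepalived

cat > /etc/keepalived/keepalived.conf << 'EOF'
vrrp_instance VI_1 {
    state MASTER            # use BACKUP on the second VM
    interface eth0          # assumed interface name
    virtual_router_id 51    # assumed VRID
    priority 100            # use a lower value on the backup VM
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24       # assumed virtual IP
    }
}
EOF

systemctl enable --now keepalived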

-Original Message-
From: "Matt ." 
Date: Wednesday 13 July 2016 at 15:54
To: Pavel Gashev , users 
Subject: Re: [ovirt-users] CARP Fails on Bond mode=1

How can it lead into packet duplication when the passive should not be
active and only it's mac-address should be visible on the switch to
prevent confusion on the switch ?

For a VRRP setup on the switch there is no other option then mode=1 as
far as I know ?

2016-07-13 14:50 GMT+02:00 Pavel Gashev :
> I would say that bonding breaks CARP somehow. In example mode=1 can lead to 
> packet duplication, so pfsense can receive it's own packets. Try firewall in 
> pfsense all incomming packets that have the same source MAC address as 
> pfsense.
>
> -Original Message-
> From: "Matt ." 
> Date: Wednesday 13 July 2016 at 15:29
> To: Pavel Gashev 
> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>
> Hi Pavel,
>
> No it's Pfsense, so FreeBSD.
>
> Is there something different there ?
>
>
>
> 2016-07-13 13:59 GMT+02:00 Pavel Gashev :
>> Matt,
>>
>> How is CARP implemented? Is it OpenBSD?
>>
>> -Original Message-
>> From:  on behalf of "Matt ." 
>> 
>> Date: Wednesday 13 July 2016 at 12:42
>> Cc: users 
>> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>>
>> Hi Pavel,
>>
>> This is done and used without the Bond before.
>>
>> Now I applied a bond it goes wrong and I'm searching but can't find a
>> thing about it.
>>
>>
>>
>> 2016-07-13 11:03 GMT+02:00 Pavel Gashev :
>>> Matt,
>>>
>>> In order to use CARP/VRRP in a VM you have to disable MAC spoofing 
>>> prevention.
>>> http://lists.ovirt.org/pipermail/users/2015-May/032839.html
>>>
>>> -Original Message-
>>> From:  on behalf of "Matt ." 
>>> 
>>> Date: Tuesday 12 July 2016 at 21:58
>>> To: users 
>>> Subject: [ovirt-users] CARP Fails on Bond mode=1
>>>
>>> Hi guys,
>>>
>>> I have been testing bonding with a vm connected to the network on this
>>> bond mode=1 (vlans on top of it) where the vm uses a carp IP for
>>> failover.
>>>
>>> It seems that when the VM which holds the Carp IP and so is Master you
>>> can ping both IP's, so interface IP and Carp IP, but you cannot
>>> throw/route any traffic over it.
>>>
>>> You can route traffic over the interface IP of the Carp Slave.
>>>
>>> Is this known or just not possible ?
>>>
>>> I hope it's a "bug" :)
>>>
>>> Thanks,
>>>
>>> Matt
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>




Re: [ovirt-users] CARP Fails on Bond mode=1

2016-07-13 Thread Matt .
How can it lead to packet duplication when the passive interface should not be
active, and only its MAC address should be visible on the switch to
prevent confusion on the switch?

For a VRRP setup on the switch there is no other option than mode=1, as
far as I know?

2016-07-13 14:50 GMT+02:00 Pavel Gashev :
> I would say that bonding breaks CARP somehow. In example mode=1 can lead to 
> packet duplication, so pfsense can receive it's own packets. Try firewall in 
> pfsense all incomming packets that have the same source MAC address as 
> pfsense.
>
> -Original Message-
> From: "Matt ." 
> Date: Wednesday 13 July 2016 at 15:29
> To: Pavel Gashev 
> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>
> Hi Pavel,
>
> No it's Pfsense, so FreeBSD.
>
> Is there something different there ?
>
>
>
> 2016-07-13 13:59 GMT+02:00 Pavel Gashev :
>> Matt,
>>
>> How is CARP implemented? Is it OpenBSD?
>>
>> -Original Message-
>> From:  on behalf of "Matt ." 
>> 
>> Date: Wednesday 13 July 2016 at 12:42
>> Cc: users 
>> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>>
>> Hi Pavel,
>>
>> This is done and used without the Bond before.
>>
>> Now I applied a bond it goes wrong and I'm searching but can't find a
>> thing about it.
>>
>>
>>
>> 2016-07-13 11:03 GMT+02:00 Pavel Gashev :
>>> Matt,
>>>
>>> In order to use CARP/VRRP in a VM you have to disable MAC spoofing 
>>> prevention.
>>> http://lists.ovirt.org/pipermail/users/2015-May/032839.html
>>>
>>> -Original Message-
>>> From:  on behalf of "Matt ." 
>>> 
>>> Date: Tuesday 12 July 2016 at 21:58
>>> To: users 
>>> Subject: [ovirt-users] CARP Fails on Bond mode=1
>>>
>>> Hi guys,
>>>
>>> I have been testing bonding with a vm connected to the network on this
>>> bond mode=1 (vlans on top of it) where the vm uses a carp IP for
>>> failover.
>>>
>>> It seems that when the VM which holds the Carp IP and so is Master you
>>> can ping both IP's, so interface IP and Carp IP, but you cannot
>>> throw/route any traffic over it.
>>>
>>> You can route traffic over the interface IP of the Carp Slave.
>>>
>>> Is this known or just not possible ?
>>>
>>> I hope it's a "bug" :)
>>>
>>> Thanks,
>>>
>>> Matt
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>


Re: [ovirt-users] CARP Fails on Bond mode=1

2016-07-13 Thread Pavel Gashev
Matt,

How is CARP implemented? Is it OpenBSD?

-Original Message-
From:  on behalf of "Matt ." 
Date: Wednesday 13 July 2016 at 12:42
Cc: users 
Subject: Re: [ovirt-users] CARP Fails on Bond mode=1

Hi Pavel,

This is done and used without the Bond before.

Now I applied a bond it goes wrong and I'm searching but can't find a
thing about it.



2016-07-13 11:03 GMT+02:00 Pavel Gashev :
> Matt,
>
> In order to use CARP/VRRP in a VM you have to disable MAC spoofing prevention.
> http://lists.ovirt.org/pipermail/users/2015-May/032839.html
>
> -Original Message-
> From:  on behalf of "Matt ." 
> Date: Tuesday 12 July 2016 at 21:58
> To: users 
> Subject: [ovirt-users] CARP Fails on Bond mode=1
>
> Hi guys,
>
> I have been testing bonding with a vm connected to the network on this
> bond mode=1 (vlans on top of it) where the vm uses a carp IP for
> failover.
>
> It seems that when the VM which holds the Carp IP and so is Master you
> can ping both IP's, so interface IP and Carp IP, but you cannot
> throw/route any traffic over it.
>
> You can route traffic over the interface IP of the Carp Slave.
>
> Is this known or just not possible ?
>
> I hope it's a "bug" :)
>
> Thanks,
>
> Matt
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


[ovirt-users] [ANN] oVirt 4.0.1 Second Release candidate is now available for testing

2016-07-13 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Second
Release Candidate of oVirt 4.0.1 for testing, as of July 13th, 2016.

This is pre-release software. Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.

This release is available now for:
* Fedora 23 (tech preview)
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 23 (tech preview)
* oVirt Next Generation Node 4.0

See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
* A new oVirt Live ISO is already available [4].
* A new oVirt Next Generation Node will be available soon [4].
* A new oVirt Engine Appliance will be available soon.
* Mirrors[5] might need up to one day to synchronize.

Additional Resources:
* Read more about the oVirt 4.0.1 release candidate highlights:
  http://www.ovirt.org/release/4.0.1/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
  http://www.ovirt.org/blog/

[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.0.1/
[4] http://resources.ovirt.org/pub/ovirt-4.0-pre/iso/
[5] http://www.ovirt.org/Repository_mirrors#Current_mirrors


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ovirt-users] oVirt metrics

2016-07-13 Thread Dominique Taffin
Hello!

Very nice implementation, and a very good blog entry.

IMHO a very important metric should be made available: IOPS.

IOPS are a very important factor when using shared storage across a
cluster. They are a key indicator for identifying issues and performance
bottlenecks, and also provide a basis for properly sizing storage devices.
Being able to display IOPS per VM would be a great benefit for large-scale
enterprises.

best,
 Dominique


On Wed, 2016-07-13 at 13:36 +0300, Yaniv Bronheim wrote:
> Hi,
> 
> In oVirt-4.0 we introduced integration with metrics collectors, In
> [1] you will find a guide for utilizing your environment to retrieve
> visualized reports about hosts and vms statistics.
> 
> I encourage to try that out and send us requests for additional
> valuable metrics that you think vdsm should publish.
> This area is still work in progress and we plan to support more
> technologies and different architectures for metrics collections as
> describes in the post. This will follow by additional links in the
> post ([1]) that describe how to do so.. stay tuned.
> 
> [1] https://bronhaim.wordpress.com/2016/06/26/ovirt-metrics
> 
> -- 
> Yaniv Bronhaim.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt-shell command to move a disk

2016-07-13 Thread Juan Hernández
On 07/13/2016 10:30 AM, Jure Kranjc wrote:
> On 01. 12. 2014 14:40, Nicolas Ecarnot wrote:
>> Le 01/12/2014 13:23, Juan Hernández a écrit :
>>> On 12/01/2014 12:51 PM, Michael Pasternak wrote:
 not sure what sdk version 3.4.4 is, but according to log, latest
 official for 3.4 is 3.4.1.1-1
 (make you have it installed)

>>>
>>> There are two issues here. First is that the "move" disk operation on
>>> the top level collection isn't correctly documented in the RSDL
>>> metadata. As a result the Python SDK and the CLI don't support this
>>> operation. You can however use the same operation in the context of 
>>> the VM:
>>>
>>># action disk {disk:id} move --vm-identifier {vm:id}
>>> --storage_domain-name={storagedomain:name}
>>>
>>> Please open a bug requesting a fix for this.
>>
>> Done!
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1169376
>>
>>> The other issue is that the 3.4 version doesn't support specifying disks
>>> by alias, only by id. This has been fixed in 3.5.
>>>
>>> So, all in all, at the moment you will need a command like this:
>>>
>>># action disk c6aab66a-b551-4cc5-8628-efe9622c0dce move
>>> --vm-identifier myvm --storage_domain-name mysd
>>
>> Your workaround is working : thank you.
>>
> Hi,
> 
> i know this is an old thread but i need to move a bunch of disks from 
> one storage domain to another. I am unable to move disks with 
> ovirt-shell as it seems it does not support moving disks when quota 
> enabled and enforced on datacenter. Is that correct? Any help appreciated.
> 
> ovirt shell
> action disk 689ce8fe-0d40-47e1-a933-7bae5ed0812b move 
> --storage_domain-name NLSAS_PRIM
>status: 400
>reason: Bad Request
>detail: Cannot move Virtual Machine Disk. Quota is not valid.
> 
> I can move disks normally via webadmin.
> Using ovirt-engine-cli-3.6.2.0-1.fc23.noarch, 
> ovirt-engine-3.5.6.2-1.el6.noarch
> 

Doron, Roy, internally the API uses the "MoveDisks" command to move the
disks, and that action is marked as "QuotaDependency.STORAGE". Is that
correct? Can you take a look?

-- 
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.


Re: [ovirt-users] [ovirt-devel] oVirt metrics

2016-07-13 Thread Roman Mohr
On Wed, Jul 13, 2016 at 12:36 PM, Yaniv Bronheim  wrote:
> Hi,
>
> In oVirt-4.0 we introduced integration with metrics collectors, In [1] you
> will find a guide for utilizing your environment to retrieve visualized
> reports about hosts and vms statistics.
>
> I encourage to try that out and send us requests for additional valuable
> metrics that you think vdsm should publish.
> This area is still work in progress and we plan to support more technologies
> and different architectures for metrics collections as describes in the
> post. This will follow by additional links in the post ([1]) that describe
> how to do so.. stay tuned.
>
> [1] https://bronhaim.wordpress.com/2016/06/26/ovirt-metrics
>

Very nice work! Looking forward to see the VM stats there :)

Roman

> --
> Yaniv Bronhaim.
>
> ___
> Devel mailing list
> de...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel


[ovirt-users] oVirt metrics

2016-07-13 Thread Yaniv Bronheim
Hi,

In oVirt 4.0 we introduced integration with metrics collectors. In [1] you
will find a guide for setting up your environment to retrieve visualized
reports about host and VM statistics.

I encourage you to try it out and send us requests for additional valuable
metrics that you think VDSM should publish.
This area is still a work in progress, and we plan to support more
technologies and different architectures for metrics collection, as
described in the post. This will be followed by additional links in the post
([1]) that describe how to do so. Stay tuned.

[1] https://bronhaim.wordpress.com/2016/06/26/ovirt-metrics

--
*Yaniv Bronhaim.*


Re: [ovirt-users] CARP Fails on Bond mode=1

2016-07-13 Thread Matt .
Hi Pavel,

This was done and used without the bond before.

Now that I've applied a bond it goes wrong, and I'm searching but can't find
anything about it.



2016-07-13 11:03 GMT+02:00 Pavel Gashev :
> Matt,
>
> In order to use CARP/VRRP in a VM you have to disable MAC spoofing prevention.
> http://lists.ovirt.org/pipermail/users/2015-May/032839.html
>
> -Original Message-
> From:  on behalf of "Matt ." 
> Date: Tuesday 12 July 2016 at 21:58
> To: users 
> Subject: [ovirt-users] CARP Fails on Bond mode=1
>
> Hi guys,
>
> I have been testing bonding with a vm connected to the network on this
> bond mode=1 (vlans on top of it) where the vm uses a carp IP for
> failover.
>
> It seems that when the VM which holds the Carp IP and so is Master you
> can ping both IP's, so interface IP and Carp IP, but you cannot
> throw/route any traffic over it.
>
> You can route traffic over the interface IP of the Carp Slave.
>
> Is this known or just not possible ?
>
> I hope it's a "bug" :)
>
> Thanks,
>
> Matt
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


[ovirt-users] Kernel related errors with Fedora 24 Guest

2016-07-13 Thread Alexis HAUSER
This doesn't look really good, right? Should I report it somewhere?

I actually hit this bug when using the RHEL7 profile for a Fedora 24 guest (to 
provide enough VRAM, because the default with the other profiles is much lower).



[Wed Jul 13 11:00:12 2016] [ cut here ]
[Wed Jul 13 11:00:12 2016] WARNING: CPU: 2 PID: 1750 at 
drivers/gpu/drm/drm_irq.c:689 drm_calc_timestamping_constants+0x15b/0x160 
[drm]()
[Wed Jul 13 11:00:12 2016] Modules linked in: uinput fuse 
nf_conntrack_netbios_ns nf_conntrack_broadcast ip6t_rpfilter ip6t_REJECT 
nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_broute bridge stp llc 
ebtable_nat ip6table_security ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 
nf_nat_ipv6 ip6table_raw ip6table_mangle iptable_security iptable_nat 
nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_raw 
iptable_mangle ebtable_filter ebtables ip6table_filter ip6_tables 
crct10dif_pclmul crc32_pclmul ghash_clmulni_intel ppdev joydev i2c_piix4 
virtio_balloon parport_pc parport acpi_cpufreq tpm_tis tpm nfsd auth_rpcgss 
nfs_acl lockd grace sunrpc virtio_console virtio_scsi virtio_blk virtio_net qxl 
drm_kms_helper ttm crc32c_intel drm serio_raw virtio_pci virtio_ring virtio 
ata_generic pata_acpi

[Wed Jul 13 11:00:12 2016] CPU: 2 PID: 1750 Comm: Xorg Tainted: GW  
 4.5.5-300.fc24.x86_64 #1
[Wed Jul 13 11:00:12 2016] Hardware name: Red Hat RHEV Hypervisor, BIOS 
seabios-1.7.5-11.el7 04/01/2014
[Wed Jul 13 11:00:12 2016]  0286 9e0fbed4 880074e93978 
813d35af
[Wed Jul 13 11:00:12 2016]   a009b9dc 880074e939b0 
810a5f12
[Wed Jul 13 11:00:12 2016]  8800360b7800 880036b92800 880036b92b78 
0001
[Wed Jul 13 11:00:12 2016] Call Trace:
[Wed Jul 13 11:00:12 2016]  [] dump_stack+0x63/0x84
[Wed Jul 13 11:00:12 2016]  [] warn_slowpath_common+0x82/0xc0
[Wed Jul 13 11:00:12 2016]  [] warn_slowpath_null+0x1a/0x20
[Wed Jul 13 11:00:12 2016]  [] 
drm_calc_timestamping_constants+0x15b/0x160 [drm]
[Wed Jul 13 11:00:12 2016]  [] 
drm_crtc_helper_set_mode+0x42f/0x510 [drm_kms_helper]
[Wed Jul 13 11:00:12 2016]  [] 
drm_crtc_helper_set_config+0xa43/0xb90 [drm_kms_helper]
[Wed Jul 13 11:00:12 2016]  [] 
drm_mode_set_config_internal+0x62/0x100 [drm]
[Wed Jul 13 11:00:12 2016]  [] drm_mode_setcrtc+0x2ef/0x520 
[drm]
[Wed Jul 13 11:00:12 2016]  [] drm_ioctl+0x152/0x540 [drm]
[Wed Jul 13 11:00:12 2016]  [] ? 
drm_mode_setplane+0x1b0/0x1b0 [drm]
[Wed Jul 13 11:00:12 2016]  [] do_vfs_ioctl+0xa3/0x5d0
[Wed Jul 13 11:00:12 2016]  [] SyS_ioctl+0x79/0x90
[Wed Jul 13 11:00:12 2016]  [] 
entry_SYSCALL_64_fastpath+0x12/0x6d
[Wed Jul 13 11:00:12 2016] ---[ end trace d65ce2e725b31419 ]---
[Wed Jul 13 11:00:12 2016] input: spice vdagent tablet as 
/devices/virtual/input/input12
[Wed Jul 13 11:00:18 2016] input: spice vdagent tablet as 
/devices/virtual/input/input13
[Wed Jul 13 11:00:20 2016] input: spice vdagent tablet as 
/devices/virtual/input/input14
[Wed Jul 13 11:00:38 2016] input: spice vdagent tablet as 
/devices/virtual/input/input15


Re: [ovirt-users] CARP Fails on Bond mode=1

2016-07-13 Thread Pavel Gashev
Matt,

In order to use CARP/VRRP in a VM you have to disable MAC spoofing prevention.
http://lists.ovirt.org/pipermail/users/2015-May/032839.html
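
From memory -- please verify against the linked post, which may describe a
different mechanism -- one common way at the time was the macspoof VDSM hook
plus a VM custom property; package and property names below are assumptions:

# On each host: install the hook that lets a VM bypass MAC-spoof filtering.
yum install -y vdsm-hook-macspoof

# On the engine: expose the custom property (add --cver=<cluster level> if
# your engine-config version requires it), then restart the engine.
engine-config -s 'UserDefinedVMProperties=macspoof=^(true|false)$'
service ovirt-engine restart

# Finally, set macspoof=true in the VM's custom properties and power-cycle the VM.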

-Original Message-
From:  on behalf of "Matt ." 
Date: Tuesday 12 July 2016 at 21:58
To: users 
Subject: [ovirt-users] CARP Fails on Bond mode=1

Hi guys,

I have been testing bonding with a VM connected to the network on this
bond in mode=1 (VLANs on top of it), where the VM uses a CARP IP for
failover.

It seems that when the VM holds the CARP IP, and so is master, you
can ping both IPs (the interface IP and the CARP IP), but you cannot
route any traffic over it.

You can route traffic over the interface IP of the CARP slave.

Is this known, or just not possible?

I hope it's a "bug" :)

Thanks,

Matt


Re: [ovirt-users] ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable to process messages

2016-07-13 Thread Martin Perina
Hi,

could you please also share vdsm.log from your hosts, as well as server.log
and the setup logs from the /var/log/ovirt-engine/setup directory?

Thanks

Martin Perina


On Wed, Jul 13, 2016 at 10:36 AM,  wrote:

> Hi,
>
> We upgraded from 3.6.6 to 4.0.0 and we have a big issue since the engine
> cannot connect to hosts. In the logs all we see is this error:
>
> ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp
> Reactor) [] Unable to process messages
>
> I'm attaching full logs.
>
> Could someone help please?
>
> Thanks.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable to process messages

2016-07-13 Thread Michal Skrivanek

> On 13 Jul 2016, at 10:36, nico...@devels.es wrote:
> 
> Hi,
> 
> We upgraded from 3.6.6 to 4.0.0 and we have a big issue since the engine 
> cannot connect to hosts. In the logs all we see is this error:

4.0.1 might be a better bet. There were quite a few json-rpc issues around the 
time of GA, and you may be suffering from those. 4.0.1 is about to go GA, and if 
things are broken right now, the latest RC can only help.
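
The usual minor-upgrade flow on the engine host is roughly the following (a
sketch only -- check the 4.0.1 release notes first and take a backup):

# Back up the engine, make sure the 4.0 repo is in place, update the setup
# packages, then rerun engine-setup.
engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
yum update "ovirt-*-setup*"
engine-setup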

> 
>ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) 
> [] Unable to process messages
> 
> I'm attaching full logs.
> 
> Could someone help please?
> 
> Thanks.___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



Re: [ovirt-users] oVirt-shell command to move a disk

2016-07-13 Thread Jure Kranjc

On 01. 12. 2014 14:40, Nicolas Ecarnot wrote:

On 01/12/2014 at 13:23, Juan Hernández wrote:

On 12/01/2014 12:51 PM, Michael Pasternak wrote:

not sure what sdk version 3.4.4 is, but according to log, latest
official for 3.4 is 3.4.1.1-1
(make sure you have it installed)



There are two issues here. First is that the "move" disk operation on
the top level collection isn't correctly documented in the RSDL
metadata. As a result the Python SDK and the CLI don't support this
operation. You can however use the same operation in the context of 
the VM:


   # action disk {disk:id} move --vm-identifier {vm:id}
--storage_domain-name={storagedomain:name}

Please open a bug requesting a fix for this.


Done!

https://bugzilla.redhat.com/show_bug.cgi?id=1169376


The other issue is that the 3.4 version doesn't support specifying disks
by alias, only by id. This has been fixed in 3.5.

So, all in all, at the moment you will need a command like this:

   # action disk c6aab66a-b551-4cc5-8628-efe9622c0dce move
--vm-identifier myvm --storage_domain-name mysd


Your workaround is working : thank you.


Hi,

I know this is an old thread, but I need to move a bunch of disks from 
one storage domain to another. I am unable to move disks with 
ovirt-shell, as it seems it does not support moving disks when quota is 
enabled and enforced on the datacenter. Is that correct? Any help appreciated.


ovirt shell
action disk 689ce8fe-0d40-47e1-a933-7bae5ed0812b move 
--storage_domain-name NLSAS_PRIM

  status: 400
  reason: Bad Request
  detail: Cannot move Virtual Machine Disk. Quota is not valid.

I can move disks normally via webadmin.
Using ovirt-engine-cli-3.6.2.0-1.fc23.noarch, 
ovirt-engine-3.5.6.2-1.el6.noarch
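
(For reference, once the quota issue is sorted out -- or quota is temporarily
switched to audit mode -- I was planning to batch the moves roughly like this;
the disk-ids.txt input file and the target domain name are only assumptions:)

#!/bin/bash
# Generate one "action disk ... move ..." line per disk ID, to be pasted into
# an interactive ovirt-shell session (or fed to it with your version's
# execute-from-file option, if it has one).
TARGET_SD="NLSAS_PRIM"
while read -r DISK_ID; do
    echo "action disk ${DISK_ID} move --storage_domain-name ${TARGET_SD}"
done < disk-ids.txt > move-disks.cli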

