Re: [ovirt-users] Importing existing (dirty) storage domains

2017-02-16 Thread Doug Ingham
Hi Nir,

On 16 Feb 2017 22:41, "Nir Soffer"  wrote:

On Fri, Feb 17, 2017 at 3:16 AM, Doug Ingham  wrote:
> Well that didn't go so well. I deleted both dom_md/ids & dom_md/leases in
> the cloned volume, and I still can't import the storage domain.

You cannot delete files from dom_md; doing so will invalidate the storage
domain, and you will not be able to use it until these files are restored.

The leases file on a file-based storage domain is unused, so creating an
empty file is enough.

The ids file must be created and initialized using sanlock; you should find
instructions on how to do this in the list archives.

> The snapshot was also taken some 4 hours before the attempted import, so
I'm
> surprised the locks haven't expired by themselves...

Leases do not expire as long as vdsm is connected to the storage and sanlock
can access it.

I'm not sure what you mean by volume snapshots.


A snapshot is like a save-point: the state of a storage volume at a specific
point in time.

In this case, it means I have created a copy/clone of my active data
volume. It's a completely new, separate volume, and is not attached to any
running services.

I'm using this copy/clone to test the import process, before doing it with
my live volume. If I "break" something in the cloned volume, no worries, I
can just delete it and recreate it from the snapshot.

Hope that clears things up a bit!


To import a storage domain, you should first make sure that no other setup
is using the storage domain.

The best way to do it is to detach the storage domain from the other setup.

If you are using hosted engine, you must also put the hosted engine agent in
global maintenance mode.

If your engine is broken, the best way to disconnect from storage is to
reboot
the hosts.


So this is the issue. My current tests emulate exactly this; however, I'm
still not able to import the domain into the new Engine. When I try to do
so, I get the logs I copied in my earlier email.


Doug


Nir


>
>
> 2017-02-16 21:58:24,630-03 INFO
> [org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
> (default task-45) [d59bc8c0-3c53-4a34-9d7c-8c982ee14e14] Lock Acquired to
> object
> 'EngineLock:{exclusiveLocks='[localhost:data-teste2= ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
> 2017-02-16 21:58:24,645-03 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-45) [d59bc8c0-3c53-4a34-9d7c-8c982ee14e14] START,
> ConnectStorageServerVDSCommand(HostName = v5.dc0.example.com,
> StorageServerConnectionManagementVDSParameters:{runAsync='true',
> hostId='1a3f10f2-e4ce-44b9-9495-06e445cfa0b0',
> storagePoolId='----',
> storageType='GLUSTERFS',
> connectionList='[StorageServerConnections:{id='null',
> connection='localhost:data-teste2', iqn='null', vfsType='glusterfs',
> mountOptions='null', nfsVersion='null', nfsRetrans='null',
nfsTimeo='null',
> iface='null', netIfaceName='null'}]'}), log id: 726df65e
> 2017-02-16 21:58:26,046-03 INFO
> [org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
> (default task-45) [d59bc8c0-3c53-4a34-9d7c-8c982ee14e14] Lock freed to
> object 'EngineLock:{exclusiveLocks='[localhost:data
> teste2=]',
> sharedLocks='null'}'
> 2017-02-16 21:58:26,206-03 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainsListVDSCommand]
> (default task-52) [85548427-713f-4ffb-a385-a97a7ee4109d] START,
> HSMGetStorageDomainsListVDSCommand(HostName = v5.dc0.example.com,
> HSMGetStorageDomainsListVDSCommandParameters:{runAsync='true',
> hostId='1a3f10f2-e4ce-44b9-9495-06e445cfa0b0',
> storagePoolId='----', storageType='null',
> storageDomainType='Data', path='localhost:data-teste2'}), log id: 79f6cc88
> 2017-02-16 21:58:27,899-03 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainsListVDSCommand]
> (default task-50) [38e87311-a7a5-49a8-bf18-857dd969cd5f] START,
> HSMGetStorageDomainsListVDSCommand(HostName = v5.dc0.example.com,
> HSMGetStorageDomainsListVDSCommandParameters:{runAsync='true',
> hostId='1a3f10f2-e4ce-44b9-9495-06e445cfa0b0',
> storagePoolId='----', storageType='null',
> storageDomainType='Data', path='localhost:data-teste2'}), log id: 7280d13
> 2017-02-16 21:58:29,156-03 INFO
> [org.ovirt.engine.core.bll.storage.connection.RemoveStorageServerConnectionCommand]
> (default task-56) [1b3826e4-4890-43d4-8854-16f3c573a31f] Lock Acquired to
> object
> 'EngineLock:{exclusiveLocks='[localhost:data-teste2= ACTION_TYPE_FAILED_OBJECT_LOCKED>,
> 5e5f6610-c759-448b-a53d-9a456f513681= ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
> 2017-02-16 21:58:29,168-03 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
> (default task-57) [5e4b20cf-60d2-4ae9-951b-c2693603aa6f] START,
> DisconnectStorageServerVDSCommand(HostName = v5.dc0.example.com,
> StorageServerConnectionManagementVDSParam

Re: [ovirt-users] Importing existing (dirty) storage domains

2017-02-16 Thread Nir Soffer
On Fri, Feb 17, 2017 at 3:16 AM, Doug Ingham  wrote:
> Well that didn't go so well. I deleted both dom_md/ids & dom_md/leases in
> the cloned volume, and I still can't import the storage domain.

You cannot delete files from dom_md; doing so will invalidate the storage
domain, and you will not be able to use it until these files are restored.

The leases file on a file-based storage domain is unused, so creating an
empty file is enough.

The ids file must be created and initialized using sanlock; you should find
instructions on how to do this in the list archives.
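
To give an idea of what that looks like for a file-based (Gluster/NFS)
domain, a rough sketch follows. This is only an illustration based on the
sanlock manual page, not a verified oVirt procedure - SD_UUID and the dom_md
path are placeholders, and you should check the exact steps in the archives
before touching a real domain:

# sketch only - SD_UUID and DOMD (the domain's dom_md directory) are placeholders
DOMD=/path/to/<SD_UUID>/dom_md
truncate -s 0 "$DOMD/leases"   # leases is unused on file storage, empty is enough
truncate -s 1M "$DOMD/ids"     # recreate the ids file
# initialize the sanlock lockspace inside the ids file
# (note: colons in the path would need escaping for sanlock)
sanlock direct init -s "<SD_UUID>:0:$DOMD/ids:0"
chown vdsm:kvm "$DOMD/ids" "$DOMD/leases"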

> The snapshot was also taken some 4 hours before the attempted import, so I'm
> surprised the locks haven't expired by themselves...

Leases do not expire as long as vdsm is connected to the storage and sanlock
can access it.

I'm not sure what you mean by volume snapshots.

To import a storage domain, you should first make sure that no other setup
is using the storage domain.

The best way to do it is to detach the storage domain from the other setup.

If you are using hosted engine, you must also put the hosted engine agent in
global maintenance mode.
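
For reference, on a host of that setup this is typically done with the
hosted-engine CLI (assuming the standard tooling is installed there):

# put the hosted engine agents into global maintenance
hosted-engine --set-maintenance --mode=global
# and later, once the import is done, leave it again
hosted-engine --set-maintenance --mode=none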

If your engine is broken, the best way to disconnect from storage is to reboot
the hosts.

Nir

>
>
> 2017-02-16 21:58:24,630-03 INFO
> [org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
> (default task-45) [d59bc8c0-3c53-4a34-9d7c-8c982ee14e14] Lock Acquired to
> object
> 'EngineLock:{exclusiveLocks='[localhost:data-teste2= ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
> 2017-02-16 21:58:24,645-03 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-45) [d59bc8c0-3c53-4a34-9d7c-8c982ee14e14] START,
> ConnectStorageServerVDSCommand(HostName = v5.dc0.example.com,
> StorageServerConnectionManagementVDSParameters:{runAsync='true',
> hostId='1a3f10f2-e4ce-44b9-9495-06e445cfa0b0',
> storagePoolId='----',
> storageType='GLUSTERFS',
> connectionList='[StorageServerConnections:{id='null',
> connection='localhost:data-teste2', iqn='null', vfsType='glusterfs',
> mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
> iface='null', netIfaceName='null'}]'}), log id: 726df65e
> 2017-02-16 21:58:26,046-03 INFO
> [org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
> (default task-45) [d59bc8c0-3c53-4a34-9d7c-8c982ee14e14] Lock freed to
> object 'EngineLock:{exclusiveLocks='[localhost:data
> teste2=]',
> sharedLocks='null'}'
> 2017-02-16 21:58:26,206-03 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainsListVDSCommand]
> (default task-52) [85548427-713f-4ffb-a385-a97a7ee4109d] START,
> HSMGetStorageDomainsListVDSCommand(HostName = v5.dc0.example.com,
> HSMGetStorageDomainsListVDSCommandParameters:{runAsync='true',
> hostId='1a3f10f2-e4ce-44b9-9495-06e445cfa0b0',
> storagePoolId='----', storageType='null',
> storageDomainType='Data', path='localhost:data-teste2'}), log id: 79f6cc88
> 2017-02-16 21:58:27,899-03 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainsListVDSCommand]
> (default task-50) [38e87311-a7a5-49a8-bf18-857dd969cd5f] START,
> HSMGetStorageDomainsListVDSCommand(HostName = v5.dc0.example.com,
> HSMGetStorageDomainsListVDSCommandParameters:{runAsync='true',
> hostId='1a3f10f2-e4ce-44b9-9495-06e445cfa0b0',
> storagePoolId='----', storageType='null',
> storageDomainType='Data', path='localhost:data-teste2'}), log id: 7280d13
> 2017-02-16 21:58:29,156-03 INFO
> [org.ovirt.engine.core.bll.storage.connection.RemoveStorageServerConnectionCommand]
> (default task-56) [1b3826e4-4890-43d4-8854-16f3c573a31f] Lock Acquired to
> object
> 'EngineLock:{exclusiveLocks='[localhost:data-teste2= ACTION_TYPE_FAILED_OBJECT_LOCKED>,
> 5e5f6610-c759-448b-a53d-9a456f513681= ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
> 2017-02-16 21:58:29,168-03 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
> (default task-57) [5e4b20cf-60d2-4ae9-951b-c2693603aa6f] START,
> DisconnectStorageServerVDSCommand(HostName = v5.dc0.example.com,
> StorageServerConnectionManagementVDSParameters:{runAsync='true',
> hostId='1a3f10f2-e4ce-44b9-9495-06e445cfa0b0',
> storagePoolId='----',
> storageType='GLUSTERFS',
> connectionList='[StorageServerConnections:{id='5e5f6610-c759-448b-a53d-9a456f513681',
> connection='localhost:data-teste2', iqn='null', vfsType='glusterfs',
> mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
> iface='null', netIfaceName='null'}]'}), log id: 6042b108
> 2017-02-16 21:58:29,193-03 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
> (default task-56) [1b3826e4-4890-43d4-8854-16f3c573a31f] START,
> DisconnectStorageServerVDSCommand(HostName = v5.dc0.example.com,
> StorageServerConnectionManagementVDSParameters:{runAsync='true',
> hostId='

Re: [ovirt-users] Importing existing (dirty) storage domains

2017-02-16 Thread Doug Ingham
Well that didn't go so well. I deleted both dom_md/ids & dom_md/leases in
the cloned volume, and I still can't import the storage domain.
The snapshot was also taken some 4 hours before the attempted import, so
I'm surprised the locks haven't expired by themselves...


2017-02-16 21:58:24,630-03 INFO
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
(default task-45) [d59bc8c0-3c53-4a34-9d7c-8c982ee14e14] Lock Acquired to
object
'EngineLock:{exclusiveLocks='[localhost:data-teste2=]', sharedLocks='null'}'
2017-02-16 21:58:24,645-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-45) [d59bc8c0-3c53-4a34-9d7c-8c982ee14e14] START,
ConnectStorageServerVDSCommand(HostName = v5.dc0.example.com,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='1a3f10f2-e4ce-44b9-9495-06e445cfa0b0',
storagePoolId='----',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='null',
connection='localhost:data-teste2', iqn='null', vfsType='glusterfs',
mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id: 726df65e
2017-02-16 21:58:26,046-03 INFO
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
(default task-45) [d59bc8c0-3c53-4a34-9d7c-8c982ee14e14] Lock freed to
object 'EngineLock:{exclusiveLocks='[localhost:data
teste2=]',
sharedLocks='null'}'
2017-02-16 21:58:26,206-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainsListVDSCommand]
(default task-52) [85548427-713f-4ffb-a385-a97a7ee4109d] START,
HSMGetStorageDomainsListVDSCommand(HostName = v5.dc0.example.com,
HSMGetStorageDomainsListVDSCommandParameters:{runAsync='true',
hostId='1a3f10f2-e4ce-44b9-9495-06e445cfa0b0',
storagePoolId='----', storageType='null',
storageDomainType='Data', path='localhost:data-teste2'}), log id: 79f6cc88
2017-02-16 21:58:27,899-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainsListVDSCommand]
(default task-50) [38e87311-a7a5-49a8-bf18-857dd969cd5f] START,
HSMGetStorageDomainsListVDSCommand(HostName = v5.dc0.example.com,
HSMGetStorageDomainsListVDSCommandParameters:{runAsync='true',
hostId='1a3f10f2-e4ce-44b9-9495-06e445cfa0b0',
storagePoolId='----', storageType='null',
storageDomainType='Data', path='localhost:data-teste2'}), log id: 7280d13
2017-02-16 21:58:29,156-03 INFO
[org.ovirt.engine.core.bll.storage.connection.RemoveStorageServerConnectionCommand]
(default task-56) [1b3826e4-4890-43d4-8854-16f3c573a31f] Lock Acquired to
object
'EngineLock:{exclusiveLocks='[localhost:data-teste2=,
5e5f6610-c759-448b-a53d-9a456f513681=]', sharedLocks='null'}'
2017-02-16 21:58:29,168-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-57) [5e4b20cf-60d2-4ae9-951b-c2693603aa6f] START,
DisconnectStorageServerVDSCommand(HostName = v5.dc0.example.com,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='1a3f10f2-e4ce-44b9-9495-06e445cfa0b0',
storagePoolId='----',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='5e5f6610-c759-448b-a53d-9a456f513681',
connection='localhost:data-teste2', iqn='null', vfsType='glusterfs',
mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id: 6042b108
2017-02-16 21:58:29,193-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-56) [1b3826e4-4890-43d4-8854-16f3c573a31f] START,
DisconnectStorageServerVDSCommand(HostName = v5.dc0.example.com,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='1a3f10f2-e4ce-44b9-9495-06e445cfa0b0',
storagePoolId='----',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='5e5f6610-c759-448b-a53d-9a456f513681',
connection='localhost:data-teste2', iqn='null', vfsType='glusterfs',
mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id: 4e9421cf
2017-02-16 21:58:31,398-03 INFO
[org.ovirt.engine.core.bll.storage.connection.RemoveStorageServerConnectionCommand]
(default task-56) [1b3826e4-4890-43d4-8854-16f3c573a31f] Lock freed to
object
'EngineLock:{exclusiveLocks='[localhost:data-teste2=,
5e5f6610-c759-448b-a53d-9a456f513681=]', sharedLocks='null'}'

Again, many thanks!
 Doug

On 16 February 2017 at 18:53, Doug Ingham  wrote:

> Hi Nir,
>
> On 16 February 2017 at 13:55, Nir Soffer  wrote:
>
>> On Mon, Feb 13, 2017 at 3:35 PM, Doug Ingham  wrote:
>> > Hi Sahina,
>> >
>> > On 13 February 2017 at 05:45, Sahina Bose  wrote:
>> >>
>> >> Any errors in the gluster mount logs for this gluster volume?
>> >>
>> >> How about "gluster vol heal  info" - does it list any entries
>> to
>> >> heal?
>> >
>> >

Re: [ovirt-users] OVN routing and firewalling in oVirt

2017-02-16 Thread Gianluca Cecchi
On Thu, Feb 16, 2017 at 5:09 PM, Simone Tiraboschi 
wrote:

>
>
>> http://blog.spinhirne.com/2016/09/the-ovn-gateway-router.html
>>
>
>> Great!
>> Actually using the previous blog post of the series:
>> http://blog.spinhirne.com/2016/09/an-introduction-to-ovn-routing.html
>>
>
> It was something I wished to show this Monday in the workshop but we were
> really out of time!
>

Don't worry Simone; you were super fast for the time you had available, and
you didn't make any mistakes jumping from one presentation to another in
real time... superb ;-)



>
>
>>
>>
>>
>> And now vm1 is able to ping both the gateways ip on subn1 and subn2 and
>> to ssh into vm2
>> It remains a sort of spof the fact of the central ovn server, where the
>> logical router lives... but for initial testing it is ok
>>
>
> Are you sure? did you tried bringing it down?
>
> AFAIU, OVN is already providing distributed routing since 2.6: if the node
> where you have the oVirt OVN provider and the OVN controller with
> northbound and southbound DB is down you cannot edit logical networks but
> the existing flows should still be there.
>
>
>

No, I'm not sure... it was only my wrong assumption.
And you are right. This is a single-host environment with self-hosted engine.
I put the provider on the hosted engine.
I set global maintenance and shut down the engine.
And I'm still able to go from ovn_net1 to ovn_net2 without any problem...
Fine!

After exiting global maintenance and the automatic power-on of the engine, I
can verify that the configuration has been retained, with the configured
virtual router and its gateway ports still in the NB database.

Just a question: where does the virtual router actually live? Which command
can I run on the host to verify the software-defined router configuration
while the provider is down, and how is this information mapped onto the host
itself so that it routes packets from ovn_net1 to ovn_net2?

Cheers,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Importing existing (dirty) storage domains

2017-02-16 Thread Doug Ingham
Hi Nir,

On 16 February 2017 at 13:55, Nir Soffer  wrote:

> On Mon, Feb 13, 2017 at 3:35 PM, Doug Ingham  wrote:
> > Hi Sahina,
> >
> > On 13 February 2017 at 05:45, Sahina Bose  wrote:
> >>
> >> Any errors in the gluster mount logs for this gluster volume?
> >>
> >> How about "gluster vol heal  info" - does it list any entries
> to
> >> heal?
> >
> >
> > After more investigating, I found out that there is a sanlock daemon that
> > runs with VDSM, independently of the HE, so I'd basically have to bring
> the
> > volume down & wait for the leases to expire/delete them* before I can
> import
> > the domain.
> >
> > *I understand removing /dom_md/leases/ should do the job?
>
> No, the issue is probably dom_md/ids accessed by sanlock, but removing
> files
> accessed by sanlock will not help, an open file will remain open until
> sanlock
> close the file.
>

I'm testing this with volume snapshots at the moment, so there are no
processes accessing the new volume.


> Did you try to reboot the host before installing it again? If you did and
> you
> still have these issues, you probably need to remove the previous
> installation
> properly before installing again.
>
> Adding Simone to help with uninstalling and reinstalling hosted engine.
>

The Hosted-Engine database had been corrupted and the restore wasn't
running correctly, so I installed a new engine on a new server - no
restores or old data. The aim is to import the old storage domain into the
new Engine & then import the VMs into the new storage domain.
My only problem with this is that there appear to be some file-based leases
somewhere that, unless I manage to locate & delete them, force me to wait
for the leases to time out before I can import the old storage domain.
To minimise downtime, I'm trying to avoid having to wait for the leases to
time out.

Regards,
 Doug


>
> Nir
>
> >
> >
> >>
> >>
> >> On Thu, Feb 9, 2017 at 11:57 PM, Doug Ingham  wrote:
> >>>
> >>> Some interesting output from the vdsm log...
> >>>
> >>>
> >>> 2017-02-09 15:16:24,051 INFO  (jsonrpc/1) [storage.StorageDomain]
> >>> Resource namespace 01_img_60455567-ad30-42e3-a9df-62fe86c7fd25 already
> >>> registered (sd:731)
> >>> 2017-02-09 15:16:24,051 INFO  (jsonrpc/1) [storage.StorageDomain]
> >>> Resource namespace 02_vol_60455567-ad30-42e3-a9df-62fe86c7fd25 already
> >>> registered (sd:740)
> >>> 2017-02-09 15:16:24,052 INFO  (jsonrpc/1) [storage.SANLock] Acquiring
> >>> Lease(name='SDM',
> >>> path=u'/rhev/data-center/mnt/glusterSD/localhost:data2/
> 60455567-ad30-42e3-a9df-6
> >>> 2fe86c7fd25/dom_md/leases', offset=1048576) for host id 1
> >>> (clusterlock:343)
> >>> 2017-02-09 15:16:24,057 INFO  (jsonrpc/1) [storage.SANLock] Releasing
> >>> host id for domain 60455567-ad30-42e3-a9df-62fe86c7fd25 (id: 1)
> >>> (clusterlock:305)
> >>> 2017-02-09 15:16:25,149 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
> >>> call GlusterHost.list succeeded in 0.17 seconds (__init__:515)
> >>> 2017-02-09 15:16:25,264 INFO  (Reactor thread)
> >>> [ProtocolDetector.AcceptorImpl] Accepted connection from
> >>> :::127.0.0.1:55060 (protocoldetector:72)
> >>> 2017-02-09 15:16:25,270 INFO  (Reactor thread)
> >>> [ProtocolDetector.Detector] Detected protocol stomp from
> >>> :::127.0.0.1:55060 (protocoldetector:127)
> >>> 2017-02-09 15:16:25,271 INFO  (Reactor thread) [Broker.StompAdapter]
> >>> Processing CONNECT request (stompreactor:102)
> >>> 2017-02-09 15:16:25,271 INFO  (JsonRpc (StompReactor))
> >>> [Broker.StompAdapter] Subscribe command received (stompreactor:129)
> >>> 2017-02-09 15:16:25,416 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC
> >>> call Host.getHardwareInfo succeeded in 0.01 seconds (__init__:515)
> >>> 2017-02-09 15:16:25,419 INFO  (jsonrpc/6) [dispatcher] Run and protect:
> >>> repoStats(options=None) (logUtils:49)
> >>> 2017-02-09 15:16:25,419 INFO  (jsonrpc/6) [dispatcher] Run and protect:
> >>> repoStats, Return response: {u'e8d04da7-ad3d-4227-a45d-b5a29b2f43e5':
> >>> {'code': 0, 'actual': True
> >>> , 'version': 4, 'acquired': True, 'delay': '0.000854128', 'lastCheck':
> >>> '5.1', 'valid': True}, u'a77b8821-ff19-4d17-a3ce-a6c3a69436d5':
> {'code': 0,
> >>> 'actual': True, 'vers
> >>> ion': 4, 'acquired': True, 'delay': '0.000966556', 'lastCheck': '2.6',
> >>> 'valid': True}} (logUtils:52)
> >>> 2017-02-09 15:16:25,447 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
> >>> call Host.getStats succeeded in 0.03 seconds (__init__:515)
> >>> 2017-02-09 15:16:25,450 ERROR (JsonRpc (StompReactor)) [vds.dispatcher]
> >>> SSL error receiving from  connected
> >>> (':::127.0.0.1', 55060, 0, 0) at 0x7f69c0043cf8>: unexpected eof
> >>> (betterAsyncore:113)
> >>> 2017-02-09 15:16:25,812 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
> >>> call GlusterVolume.list succeeded in 0.10 seconds (__init__:515)
> >>> 2017-02-09 15:16:25,940 INFO  (Reactor thread)
> >>> [ProtocolDetector.AcceptorImpl] Accepted connection from
> >>> :::127.0.0.1:55062 (protocoldetector:72)
> >>> 2017-02-09 

Re: [ovirt-users] questions on OVN

2017-02-16 Thread Lance Richardson
> From: "Gianluca Cecchi" 
> To: "Marcin Mirecki" 
> Cc: "Ovirt Users" 
> Sent: Thursday, February 16, 2017 4:40:46 AM
> Subject: Re: [ovirt-users] questions on OVN
> 
> On Thu, Feb 16, 2017 at 9:54 AM, Marcin Mirecki < mmire...@redhat.com >
> wrote:
> 
> 
> 
> OVN is aleady using GENEVE, VXLAN or STT tunnels (the user can choose any),
> so the isolation is already assured.
> The scripts provided by ovirt configure a geneve tunnel.
> You are free so override this manually to vxlan or stt if you want, let me
> know if you need any howto info.
> 

A small correction/clarification: for hypervisor-hypervisor tunnels, the
only tunnel encapsulations that are currently supported are GENEVE and STT.
The rationale is explained in detail at
http://openvswitch.org/support/dist-docs/ovn-architecture.7.html in the
"Design Decisions" section.  VXLAN tunnels are supported for hypervisor-gateway
tunnels only.


> yes, please.
> I have used in the mean time the vdsm-tool command that takes care of
> creating the default geneve tunnel
> In my case
> vdsm-tool ovn-config 10.4.168.80 10.4.168.81
> 
> but I would like to know how to manually use other types too.
> I watched the deep dive demo about ovn but at the bottom of the related slide
> there are three lines that should be equivalent to the above command,
> something like
> 
> ovs-vsctl set open ? external-ids:ovn-remote=tcp: 10.4.168.80:6642
> ovs-vsctl set open ? external-ids:ovn-encap=type=geneve
> ovs-vsctl set open ? external-ids:ovn-encap-ip=10.4.168.81
> 
> The ? character seems a dot or a comma, I have not understood the syntax
> (what are the accepted words for type= in the second line?)
> 

The syntax here is "ovs-vsctl set <table> <record> <column>[:<key>]=<value> ...".
In this case, the table name is "Open_vSwitch"; "open" can be used as a
shorthand because the table name is not case-sensitive and prefixes of
the table name are accepted as long as they are unique.

The "." character specifies the record name as explained in the ovs-vsctl man
page at http://openvswitch.org/support/dist-docs-2.5/ovs-vsctl.8.txt:

   Open_vSwitch
  Global  configuration  for an ovs-vswitchd.  This table contains
  exactly one record, identified by specifying  .  as  the  record
  name.

Valid settings for external-ids:ovn-encap-type= are given in the ovn-controller
man page http://openvswitch.org/support/dist-docs-2.5/ovn-controller.8.txt:

  external_ids:ovn-encap-type
 The  encapsulation type that a chassis should use to con‐
 nect to this node.  Multiple encapsulation types  may  be
 specified  with  a  comma-separated  list.   Each  listed
 encapsulation type will be paired with ovn-encap-ip.

 Supported tunnel types  for  connecting  hypervisors  are
 geneve and stt.  Gateways may use geneve, vxlan, or stt.

 Due to the limited amount of metadata in vxlan, the capa‐
 bilities and performance of connected  gateways  will  be
 reduced versus other tunnel formats.
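
Putting it together, the three lines from the slide would read roughly as
below (a sketch, not checked against the slide itself; the IP addresses are
taken from your vdsm-tool example, so substitute your own central server and
local tunnel endpoint):

# point ovn-controller at the southbound DB on the central OVN server
ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=tcp:10.4.168.80:6642
# pick the tunnel encapsulation; geneve or stt for hypervisor-to-hypervisor tunnels
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-type=geneve
# the local IP this host should use as its tunnel endpoint
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=10.4.168.81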

Hope this helps,

Lance

> Thanks again,
> Gianluca
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Unable to export VM

2017-02-16 Thread Pat Riehecky
Any attempt to export my VM errors out.  Last night the disk images got
'unregistered' from oVirt and I had to rescan the storage domain to find
them again.  Now I'm just trying to get a backup of the VM.


The snapshots of the old disks are still listed, but I don't know if
the LVM slices are still real or if that is even what is wrong.


steps I followed ->
Halt VM
Click Export
leave things unchecked and click OK

oVirt version:
ovirt-engine-4.0.3-1.el7.centos.noarch
ovirt-engine-backend-4.0.3-1.el7.centos.noarch
ovirt-engine-cli-3.6.9.2-1.el7.noarch
ovirt-engine-dashboard-1.0.3-1.el7.centos.noarch
ovirt-engine-dbscripts-4.0.3-1.el7.centos.noarch
ovirt-engine-dwh-4.0.2-1.el7.centos.noarch
ovirt-engine-dwh-setup-4.0.2-1.el7.centos.noarch
ovirt-engine-extension-aaa-jdbc-1.1.0-1.el7.noarch
ovirt-engine-extension-aaa-ldap-1.2.1-1.el7.noarch
ovirt-engine-extension-aaa-ldap-setup-1.2.1-1.el7.noarch
ovirt-engine-extensions-api-impl-4.0.3-1.el7.centos.noarch
ovirt-engine-lib-4.0.3-1.el7.centos.noarch
ovirt-engine-restapi-4.0.3-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-engine-setup-4.0.3-1.el7.centos.noarch
ovirt-engine-setup-base-4.0.3-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.0.3-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.0.3-1.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.0.3-1.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.0.3-1.el7.centos.noarch
ovirt-engine-tools-4.0.3-1.el7.centos.noarch
ovirt-engine-tools-backup-4.0.3-1.el7.centos.noarch
ovirt-engine-userportal-4.0.3-1.el7.centos.noarch
ovirt-engine-vmconsole-proxy-helper-4.0.3-1.el7.centos.noarch
ovirt-engine-webadmin-portal-4.0.3-1.el7.centos.noarch
ovirt-engine-websocket-proxy-4.0.3-1.el7.centos.noarch
ovirt-engine-wildfly-10.0.0-1.el7.x86_64
ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
ovirt-guest-agent-common-1.0.12-4.el7.noarch
ovirt-host-deploy-1.5.1-1.el7.centos.noarch
ovirt-host-deploy-java-1.5.1-1.el7.centos.noarch
ovirt-imageio-common-0.3.0-1.el7.noarch
ovirt-imageio-proxy-0.3.0-0.201606191345.git9f3d6d4.el7.centos.noarch
ovirt-imageio-proxy-setup-0.3.0-0.201606191345.git9f3d6d4.el7.centos.noarch
ovirt-image-uploader-4.0.0-1.el7.centos.noarch
ovirt-iso-uploader-4.0.0-1.el7.centos.noarch
ovirt-setup-lib-1.0.2-1.el7.centos.noarch
ovirt-vmconsole-1.0.4-1.el7.centos.noarch
ovirt-vmconsole-proxy-1.0.4-1.el7.centos.noarch




log snippet:
2017-02-16 11:34:44,959 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand] 
(default task-28) [] START, GetVmsInfoVDSCommand( 
GetVmsInfoVDSCommandParameters:{runAsync='true', 
storagePoolId='0001-0001-0001-0001-01a5', 
ignoreFailoverLimit='false', 
storageDomainId='13127103-3f59-418a-90f1-5b1ade8526b1', 
vmIdList='null'}), log id: 3c406c84
2017-02-16 11:34:45,967 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand] 
(default task-28) [] FINISH, GetVmsInfoVDSCommand, log id: 3c406c84
2017-02-16 11:34:46,178 INFO 
[org.ovirt.engine.core.bll.exportimport.ExportVmCommand] (default 
task-24) [50b27eef] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[ba806b93-b6fe-4873-99ec-55bb34c12e5f=ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-02-16 11:34:46,221 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand] 
(default task-24) [50b27eef] START, GetVmsInfoVDSCommand( 
GetVmsInfoVDSCommandParameters:{runAsync='true', 
storagePoolId='0001-0001-0001-0001-01a5', 
ignoreFailoverLimit='false', 
storageDomainId='13127103-3f59-418a-90f1-5b1ade8526b1', 
vmIdList='null'}), log id: 61bfd908
2017-02-16 11:34:47,227 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand] 
(default task-24) [50b27eef] FINISH, GetVmsInfoVDSCommand, log id: 61bfd908
2017-02-16 11:34:47,242 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand] 
(default task-24) [50b27eef] START, GetVmsInfoVDSCommand( 
GetVmsInfoVDSCommandParameters:{runAsync='true', 
storagePoolId='0001-0001-0001-0001-01a5', 
ignoreFailoverLimit='false', 
storageDomainId='13127103-3f59-418a-90f1-5b1ade8526b1', 
vmIdList='null'}), log id: 7cd19381
2017-02-16 11:34:47,276 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand] 
(default task-24) [50b27eef] FINISH, GetVmsInfoVDSCommand, log id: 7cd19381
2017-02-16 11:34:47,294 INFO 
[org.ovirt.engine.core.bll.exportimport.ExportVmCommand] 
(org.ovirt.thread.pool-8-thread-39) [50b27eef] Running command: 
ExportVmCommand internal: false. Entities affected :  ID: 
13127103-3f59-418a-90f1-5b1ade8526b1 Type: StorageAction group 
IMPORT_EXPORT_VM with role type ADMIN
2017-02-16 11:34:47,296 INFO 
[org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-39) [50b27eef] START, 
SetVmStatusVDSCommand( SetVmStatusVDSCommandParameters:{runAsync='true', 
vmId='ba806b93-b6fe-4873-99ec-55bb34c12e5f', status='ImageLocked', 
exitStatus='Normal'}), log 

Re: [ovirt-users] Importing existing (dirty) storage domains

2017-02-16 Thread Nir Soffer
On Mon, Feb 13, 2017 at 3:35 PM, Doug Ingham  wrote:
> Hi Sahina,
>
> On 13 February 2017 at 05:45, Sahina Bose  wrote:
>>
>> Any errors in the gluster mount logs for this gluster volume?
>>
>> How about "gluster vol heal  info" - does it list any entries to
>> heal?
>
>
> After more investigating, I found out that there is a sanlock daemon that
> runs with VDSM, independently of the HE, so I'd basically have to bring the
> volume down & wait for the leases to expire/delete them* before I can import
> the domain.
>
> *I understand removing /dom_md/leases/ should do the job?

No, the issue is probably dom_md/ids being accessed by sanlock, but removing
files accessed by sanlock will not help; an open file will remain open until
sanlock closes it.

Did you try to reboot the host before installing it again? If you did and you
still have these issues, you probably need to remove the previous installation
properly before installing again.

Adding Simone to help with uninstalling and reinstalling hosted engine.

Nir

>
>
>>
>>
>> On Thu, Feb 9, 2017 at 11:57 PM, Doug Ingham  wrote:
>>>
>>> Some interesting output from the vdsm log...
>>>
>>>
>>> 2017-02-09 15:16:24,051 INFO  (jsonrpc/1) [storage.StorageDomain]
>>> Resource namespace 01_img_60455567-ad30-42e3-a9df-62fe86c7fd25 already
>>> registered (sd:731)
>>> 2017-02-09 15:16:24,051 INFO  (jsonrpc/1) [storage.StorageDomain]
>>> Resource namespace 02_vol_60455567-ad30-42e3-a9df-62fe86c7fd25 already
>>> registered (sd:740)
>>> 2017-02-09 15:16:24,052 INFO  (jsonrpc/1) [storage.SANLock] Acquiring
>>> Lease(name='SDM',
>>> path=u'/rhev/data-center/mnt/glusterSD/localhost:data2/60455567-ad30-42e3-a9df-6
>>> 2fe86c7fd25/dom_md/leases', offset=1048576) for host id 1
>>> (clusterlock:343)
>>> 2017-02-09 15:16:24,057 INFO  (jsonrpc/1) [storage.SANLock] Releasing
>>> host id for domain 60455567-ad30-42e3-a9df-62fe86c7fd25 (id: 1)
>>> (clusterlock:305)
>>> 2017-02-09 15:16:25,149 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
>>> call GlusterHost.list succeeded in 0.17 seconds (__init__:515)
>>> 2017-02-09 15:16:25,264 INFO  (Reactor thread)
>>> [ProtocolDetector.AcceptorImpl] Accepted connection from
>>> :::127.0.0.1:55060 (protocoldetector:72)
>>> 2017-02-09 15:16:25,270 INFO  (Reactor thread)
>>> [ProtocolDetector.Detector] Detected protocol stomp from
>>> :::127.0.0.1:55060 (protocoldetector:127)
>>> 2017-02-09 15:16:25,271 INFO  (Reactor thread) [Broker.StompAdapter]
>>> Processing CONNECT request (stompreactor:102)
>>> 2017-02-09 15:16:25,271 INFO  (JsonRpc (StompReactor))
>>> [Broker.StompAdapter] Subscribe command received (stompreactor:129)
>>> 2017-02-09 15:16:25,416 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC
>>> call Host.getHardwareInfo succeeded in 0.01 seconds (__init__:515)
>>> 2017-02-09 15:16:25,419 INFO  (jsonrpc/6) [dispatcher] Run and protect:
>>> repoStats(options=None) (logUtils:49)
>>> 2017-02-09 15:16:25,419 INFO  (jsonrpc/6) [dispatcher] Run and protect:
>>> repoStats, Return response: {u'e8d04da7-ad3d-4227-a45d-b5a29b2f43e5':
>>> {'code': 0, 'actual': True
>>> , 'version': 4, 'acquired': True, 'delay': '0.000854128', 'lastCheck':
>>> '5.1', 'valid': True}, u'a77b8821-ff19-4d17-a3ce-a6c3a69436d5': {'code': 0,
>>> 'actual': True, 'vers
>>> ion': 4, 'acquired': True, 'delay': '0.000966556', 'lastCheck': '2.6',
>>> 'valid': True}} (logUtils:52)
>>> 2017-02-09 15:16:25,447 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
>>> call Host.getStats succeeded in 0.03 seconds (__init__:515)
>>> 2017-02-09 15:16:25,450 ERROR (JsonRpc (StompReactor)) [vds.dispatcher]
>>> SSL error receiving from >> (':::127.0.0.1', 55060, 0, 0) at 0x7f69c0043cf8>: unexpected eof
>>> (betterAsyncore:113)
>>> 2017-02-09 15:16:25,812 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
>>> call GlusterVolume.list succeeded in 0.10 seconds (__init__:515)
>>> 2017-02-09 15:16:25,940 INFO  (Reactor thread)
>>> [ProtocolDetector.AcceptorImpl] Accepted connection from
>>> :::127.0.0.1:55062 (protocoldetector:72)
>>> 2017-02-09 15:16:25,946 INFO  (Reactor thread)
>>> [ProtocolDetector.Detector] Detected protocol stomp from
>>> :::127.0.0.1:55062 (protocoldetector:127)
>>> 2017-02-09 15:16:25,947 INFO  (Reactor thread) [Broker.StompAdapter]
>>> Processing CONNECT request (stompreactor:102)
>>> 2017-02-09 15:16:25,947 INFO  (JsonRpc (StompReactor))
>>> [Broker.StompAdapter] Subscribe command received (stompreactor:129)
>>> 2017-02-09 15:16:26,058 ERROR (jsonrpc/1) [storage.TaskManager.Task]
>>> (Task='02cad901-5fe8-4f2d-895b-14184f67feab') Unexpected error (task:870)
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/task.py", line 877, in _run
>>> return fn(*args, **kargs)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 50, in
>>> wrapper
>>> res = f(*args, **kwargs)
>>>   File "/usr/share/vdsm/storage/hsm.py", line 812, in
>>> forcedDetachStorageDomain
>>> self._deatchStorageDomainFromOldPools(sdUUID)
>>>   File "/usr/

Re: [ovirt-users] iscsi or iser targets are not detached from the host in the maintenance mode

2017-02-16 Thread Arman Khalatyan
done: https://bugzilla.redhat.com/show_bug.cgi?id=1422959


On Thu, Feb 16, 2017 at 5:00 PM, Nir Soffer  wrote:

> On Thu, Feb 16, 2017 at 5:55 PM, Arman Khalatyan 
> wrote:
> > The test disks are not attached to any virtual machine. nothing done on
> the
> > hosts.
> >  I just saw that the all test LUNs are still logged into target side so I
> > went to the host(it was in the maintenance ) all disks are still there.
> >
> > I just managing everything over the web gui:
> > Select host as SPM, then disks->new->directLUN->
> discovertargets->login->ok
> > on the host the disks are visible.
> > iscsiadm -m session -o show
> > tcp: [6] 10.10.10.35:3260,1
> > iqn.2003-01.org.linux-iscsi.c1701.x8664:sn.5b791971cd78 (non-flash)
> >
> > Putting host to maintenance:
> > the disks are still there:
> > lsscsi
> > [0:0:0:0]diskATA  ST3500630NS  K /dev/sda
> > [11:0:0:0]   diskLIO-ORG  c1701iser4.0   /dev/sdb
> > [11:0:0:1]   diskLIO-ORG  c1701iser32k 4.0   /dev/sdc
> >
> > activating deactivating the host does not change situation.
> > I dont see any attempt of vdsm to logout the disks. I can see in the
> > vdsmd.logs that "[storage.Mount] unmounting /rhev/data-center/mnt/.."
> > unmounting the nfs part but nothing from [storage.ISCSI]
>
> Sounds like a bug in target discovery flow. We add nodes and sessions and
> do not clean them.
>
> Would you file a bug?
>
> Nir
>
> >
> >
> >
> > On Thu, Feb 16, 2017 at 4:25 PM, Nir Soffer  wrote:
> >>
> >> On Thu, Feb 16, 2017 at 5:13 PM, Arman Khalatyan 
> >> wrote:
> >> > Hi,
> >> > In ovirt 4.1 when I put the host into maintenance mode then the nfs
> >> > mounts
> >> > are unmounted as expected.
> >> > but the hosts are still logged into the targets.
> >> >
> >> > Is it expected behavior?? If yes what is  the use of it?
> >>
> >> No, if ovirt connected to the target, it should disconnect from the
> >> target.
> >>
> >> Maybe you connected manually to the target before that?
> >>
> >> A good test to verify this would be to do this in maintenance mode:
> >>
> >> iscsiadm -m node -o delete
> >>
> >> Then activate and deactivate the host several times, and check that
> >> no iscsi session are active when host enter maintenance.
> >>
> >> > Another thing concerning to the permanently removed direct LUNs.
> >> > They are still in the /var/lib/iscsi/nodes and
> >> > /var/lib/iscsi/send_targets/*
> >> > Would be good to cleanup the folders if users are removing permanently
> >> > the
> >> > LUNs.
> >>
> >> We don't manage the LUNs - if you are removing the LUNs manually, and
> >> the target providing this LUNs is not needed any more, you are
> responsible
> >> for removing the nodes from iscsi database.
> >>
> >> I don't think we are removing nodes and targets from a host, only
> updating
> >> them when you connect to a server. We also don't have a way to remove
> >> a target from engine database, so engine cannot ask vdsm to remove
> >> targets.
> >>
> >> Nir
> >
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVN routing and firewalling in oVirt

2017-02-16 Thread Simone Tiraboschi
On Thu, Feb 16, 2017 at 4:49 PM, Gianluca Cecchi 
wrote:

> On Thu, Feb 16, 2017 at 2:26 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Thu, Feb 16, 2017 at 2:20 PM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> Hello,
>>> how do we manage routing between different OVN networks in oVirt?
>>> And between OVN networks and physical ones?
>>>
>>
>> Take a look at this blog post:
>> http://blog.spinhirne.com/2016/09/the-ovn-gateway-router.html
>>
>
> Great!
> Actually using the previous blog post of the series:
> http://blog.spinhirne.com/2016/09/an-introduction-to-ovn-routing.html
>

It was something I wished to show this Monday in the workshop but we were
really out of time!


>
>
> I was able to complete routing between two different oVirt subnets:
>
> In oVirt I have previously created:
>
> ovn_net1 network with subnet subn1 (defined as 172.16.10.0/24 with gw
> 172.16.10.1)
> so that ip usable range is from 172.16.10.1 to 172.16.10.254
>
> ovn_net2 network with subnet subn2 (defined as 192.168.10.0/24 with gw
> 192.168.10.1)
> so that ip usable range is from 192.168.10.1 to 192.168.10.254
>
> I have to VMs defined on the two subnets:
> vm1 172.16.10.2
> vm2 192.168.10.101
>
> on central server (that is my engine)
> # define the new logical switches
> # no, already created from inside oVirt: they are ovn_net1 and ovn_net2
>
> # add the router
> ovn-nbctl lr-add net1net2
>
> # create router port for the connection to net1
> ovn-nbctl lrp-add net1net2 net1 02:ac:10:ff:01:29 172.16.10.1/24
>
> # create the net1 switch port for connection to net1net2
> ovn-nbctl lsp-add ovn_net1 net1-net1net2
> ovn-nbctl lsp-set-type net1-net1net2 router
> ovn-nbctl lsp-set-addresses net1-net1net2 02:ac:10:ff:01:29
> ovn-nbctl lsp-set-options net1-net1net2 router-port=net1
>
> # create router port for the connection to net2
> ovn-nbctl lrp-add net1net2 net2 02:ac:10:ff:01:93 192.168.10.1/24
>
> # create the net2 switch port for connection to net1net2
> ovn-nbctl lsp-add ovn_net2 net2-net1net2
> ovn-nbctl lsp-set-type net2-net1net2 router
> ovn-nbctl lsp-set-addresses net2-net1net2 02:ac:10:ff:01:93
> ovn-nbctl lsp-set-options net2-net1net2 router-port=net2
>
> # show config
> ovn-nbctl show
>
> [root@ractorshe ~]# ovn-nbctl show
> switch 38cca50c-e8b2-43fe-b585-2ee815191939 (ovn_net1)
> port 5562d95d-060f-4c64-b535-0e460ae6aa5a
> addresses: ["00:1a:4a:16:01:52 dynamic"]
> port 87fea70a-583b-4484-b72b-030e2f175aa6
> addresses: ["00:1a:4a:16:01:53 dynamic"]
> port net1-net1net2
> addresses: ["02:ac:10:ff:01:29"]
> port 99f619fc-29d2-4d40-8c28-4ce9291eb97a
> addresses: ["00:1a:4a:16:01:51 dynamic"]
> switch 6a0e7a92-8edc-44dd-970a-2b1f5c07647d (ovn_net2)
> port net2-net1net2
> addresses: ["02:ac:10:ff:01:93"]
> port 9b7a79a3-aa38-43b1-abd4-58370171755e
> addresses: ["00:1a:4a:16:01:54 dynamic"]
> router 59d79312-a434-4150-be46-285a9f37df8d (net1net2)
> port net2
> mac: "02:ac:10:ff:01:93"
> networks: ["192.168.10.1/24"]
> port net1
> mac: "02:ac:10:ff:01:29"
> networks: ["172.16.10.1/24"]
> [root@ractorshe ~]#
>
> And now vm1 is able to ping both the gateways ip on subn1 and subn2 and to
> ssh into vm2
> It remains a sort of spof the fact of the central ovn server, where the
> logical router lives... but for initial testing it is ok
>

Are you sure? Did you try bringing it down?

AFAIU, OVN has been providing distributed routing since 2.6: if the node
where you have the oVirt OVN provider and the OVN central (northbound and
southbound) databases is down, you cannot edit logical networks, but the
existing flows should still be there.
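
As a rough way to check this locally (just a sketch on my side, assuming the
default OVN integration bridge name br-int): the flows that implement the
logical switches and router are programmed into the host's integration
bridge, so you can still inspect them with plain Open vSwitch tools while the
central databases are unreachable:

# on the host, while the OVN provider / central DBs are down
ovs-vsctl list-br             # the integration bridge (normally br-int) should be listed
ovs-ofctl dump-flows br-int   # the OpenFlow rules realizing the logical topology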



>
> Thanks again,
> Gianluca
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iscsi or iser targets are not detached from the host in the maintenance mode

2017-02-16 Thread Nir Soffer
On Thu, Feb 16, 2017 at 5:55 PM, Arman Khalatyan  wrote:
> The test disks are not attached to any virtual machine. nothing done on the
> hosts.
>  I just saw that the all test LUNs are still logged into target side so I
> went to the host(it was in the maintenance ) all disks are still there.
>
> I just managing everything over the web gui:
> Select host as SPM, then disks->new->directLUN->discovertargets->login->ok
> on the host the disks are visible.
> iscsiadm -m session -o show
> tcp: [6] 10.10.10.35:3260,1
> iqn.2003-01.org.linux-iscsi.c1701.x8664:sn.5b791971cd78 (non-flash)
>
> Putting host to maintenance:
> the disks are still there:
> lsscsi
> [0:0:0:0]diskATA  ST3500630NS  K /dev/sda
> [11:0:0:0]   diskLIO-ORG  c1701iser4.0   /dev/sdb
> [11:0:0:1]   diskLIO-ORG  c1701iser32k 4.0   /dev/sdc
>
> activating deactivating the host does not change situation.
> I dont see any attempt of vdsm to logout the disks. I can see in the
> vdsmd.logs that "[storage.Mount] unmounting /rhev/data-center/mnt/.."
> unmounting the nfs part but nothing from [storage.ISCSI]

Sounds like a bug in target discovery flow. We add nodes and sessions and
do not clean them.

Would you file a bug?

Nir

>
>
>
> On Thu, Feb 16, 2017 at 4:25 PM, Nir Soffer  wrote:
>>
>> On Thu, Feb 16, 2017 at 5:13 PM, Arman Khalatyan 
>> wrote:
>> > Hi,
>> > In ovirt 4.1 when I put the host into maintenance mode then the nfs
>> > mounts
>> > are unmounted as expected.
>> > but the hosts are still logged into the targets.
>> >
>> > Is it expected behavior?? If yes what is  the use of it?
>>
>> No, if ovirt connected to the target, it should disconnect from the
>> target.
>>
>> Maybe you connected manually to the target before that?
>>
>> A good test to verify this would be to do this in maintenance mode:
>>
>> iscsiadm -m node -o delete
>>
>> Then activate and deactivate the host several times, and check that
>> no iscsi session are active when host enter maintenance.
>>
>> > Another thing concerning to the permanently removed direct LUNs.
>> > They are still in the /var/lib/iscsi/nodes and
>> > /var/lib/iscsi/send_targets/*
>> > Would be good to cleanup the folders if users are removing permanently
>> > the
>> > LUNs.
>>
>> We don't manage the LUNs - if you are removing the LUNs manually, and
>> the target providing this LUNs is not needed any more, you are responsible
>> for removing the nodes from iscsi database.
>>
>> I don't think we are removing nodes and targets from a host, only updating
>> them when you connect to a server. We also don't have a way to remove
>> a target from engine database, so engine cannot ask vdsm to remove
>> targets.
>>
>> Nir
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iscsi or iser targets are not detached from the host in the maintenance mode

2017-02-16 Thread Arman Khalatyan
The test disks are not attached to any virtual machine; nothing was done on
the hosts.
I just saw that all the test LUNs are still logged in on the target side, so
I went to the host (it was in maintenance) and all the disks are still there.

I'm just managing everything through the web GUI:
select the host as SPM, then Disks -> New -> Direct LUN -> discover targets
-> login -> OK, and on the host the disks are visible.
iscsiadm -m session -o show
tcp: [6] 10.10.10.35:3260,1
iqn.2003-01.org.linux-iscsi.c1701.x8664:sn.5b791971cd78
(non-flash)

Putting the host into maintenance, the disks are still there:
lsscsi
[0:0:0:0]diskATA  ST3500630NS  K /dev/sda
[11:0:0:0]   diskLIO-ORG  c1701iser4.0   /dev/sdb
[11:0:0:1]   diskLIO-ORG  c1701iser32k 4.0   /dev/sdc

Activating/deactivating the host does not change the situation.
I don't see any attempt by vdsm to log out of the disks. I can see
"[storage.Mount] unmounting /rhev/data-center/mnt/.." in the vdsmd logs for
the NFS part, but nothing from [storage.ISCSI].



On Thu, Feb 16, 2017 at 4:25 PM, Nir Soffer  wrote:

> On Thu, Feb 16, 2017 at 5:13 PM, Arman Khalatyan 
> wrote:
> > Hi,
> > In ovirt 4.1 when I put the host into maintenance mode then the nfs
> mounts
> > are unmounted as expected.
> > but the hosts are still logged into the targets.
> >
> > Is it expected behavior?? If yes what is  the use of it?
>
> No, if ovirt connected to the target, it should disconnect from the target.
>
> Maybe you connected manually to the target before that?
>
> A good test to verify this would be to do this in maintenance mode:
>
> iscsiadm -m node -o delete
>
> Then activate and deactivate the host several times, and check that
> no iscsi session are active when host enter maintenance.
>
> > Another thing concerning to the permanently removed direct LUNs.
> > They are still in the /var/lib/iscsi/nodes and
> /var/lib/iscsi/send_targets/*
> > Would be good to cleanup the folders if users are removing permanently
> the
> > LUNs.
>
> We don't manage the LUNs - if you are removing the LUNs manually, and
> the target providing this LUNs is not needed any more, you are responsible
> for removing the nodes from iscsi database.
>
> I don't think we are removing nodes and targets from a host, only updating
> them when you connect to a server. We also don't have a way to remove
> a target from engine database, so engine cannot ask vdsm to remove targets.
>
> Nir
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVN routing and firewalling in oVirt

2017-02-16 Thread Gianluca Cecchi
On Thu, Feb 16, 2017 at 2:26 PM, Simone Tiraboschi 
wrote:

>
>
> On Thu, Feb 16, 2017 at 2:20 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> Hello,
>> how do we manage routing between different OVN networks in oVirt?
>> And between OVN networks and physical ones?
>>
>
> Take a look at this blog post:
> http://blog.spinhirne.com/2016/09/the-ovn-gateway-router.html
>

Great!
Actually using the previous blog post of the series:
http://blog.spinhirne.com/2016/09/an-introduction-to-ovn-routing.html

I was able to complete routing between two different oVirt subnets:

In oVirt I have previously created:

ovn_net1 network with subnet subn1 (defined as 172.16.10.0/24 with gw
172.16.10.1)
so that ip usable range is from 172.16.10.1 to 172.16.10.254

ovn_net2 network with subnet subn2 (defined as 192.168.10.0/24 with gw
192.168.10.1)
so that ip usable range is from 192.168.10.1 to 192.168.10.254

I have to VMs defined on the two subnets:
vm1 172.16.10.2
vm2 192.168.10.101

on central server (that is my engine)
# define the new logical switches
# no, already created from inside oVirt: they are ovn_net1 and ovn_net2

# add the router
ovn-nbctl lr-add net1net2

# create router port for the connection to net1
ovn-nbctl lrp-add net1net2 net1 02:ac:10:ff:01:29 172.16.10.1/24

# create the net1 switch port for connection to net1net2
ovn-nbctl lsp-add ovn_net1 net1-net1net2
ovn-nbctl lsp-set-type net1-net1net2 router
ovn-nbctl lsp-set-addresses net1-net1net2 02:ac:10:ff:01:29
ovn-nbctl lsp-set-options net1-net1net2 router-port=net1

# create router port for the connection to net2
ovn-nbctl lrp-add net1net2 net2 02:ac:10:ff:01:93 192.168.10.1/24

# create the net2 switch port for connection to net1net2
ovn-nbctl lsp-add ovn_net2 net2-net1net2
ovn-nbctl lsp-set-type net2-net1net2 router
ovn-nbctl lsp-set-addresses net2-net1net2 02:ac:10:ff:01:93
ovn-nbctl lsp-set-options net2-net1net2 router-port=net2

# show config
ovn-nbctl show

[root@ractorshe ~]# ovn-nbctl show
switch 38cca50c-e8b2-43fe-b585-2ee815191939 (ovn_net1)
port 5562d95d-060f-4c64-b535-0e460ae6aa5a
addresses: ["00:1a:4a:16:01:52 dynamic"]
port 87fea70a-583b-4484-b72b-030e2f175aa6
addresses: ["00:1a:4a:16:01:53 dynamic"]
port net1-net1net2
addresses: ["02:ac:10:ff:01:29"]
port 99f619fc-29d2-4d40-8c28-4ce9291eb97a
addresses: ["00:1a:4a:16:01:51 dynamic"]
switch 6a0e7a92-8edc-44dd-970a-2b1f5c07647d (ovn_net2)
port net2-net1net2
addresses: ["02:ac:10:ff:01:93"]
port 9b7a79a3-aa38-43b1-abd4-58370171755e
addresses: ["00:1a:4a:16:01:54 dynamic"]
router 59d79312-a434-4150-be46-285a9f37df8d (net1net2)
port net2
mac: "02:ac:10:ff:01:93"
networks: ["192.168.10.1/24"]
port net1
mac: "02:ac:10:ff:01:29"
networks: ["172.16.10.1/24"]
[root@ractorshe ~]#

And now vm1 is able to ping both the gateways ip on subn1 and subn2 and to
ssh into vm2
It remains a sort of spof the fact of the central ovn server, where the
logical router lives... but for initial testing it is ok

Thanks again,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iscsi or iser targets are not detached from the host in the maintenance mode

2017-02-16 Thread Nir Soffer
On Thu, Feb 16, 2017 at 5:13 PM, Arman Khalatyan  wrote:
> Hi,
> In ovirt 4.1 when I put the host into maintenance mode then the nfs mounts
> are unmounted as expected.
> but the hosts are still logged into the targets.
>
> Is it expected behavior?? If yes what is  the use of it?

No, if oVirt connected to the target, it should disconnect from it.

Maybe you connected manually to the target before that?

A good test to verify this would be to do this in maintenance mode:

iscsiadm -m node -o delete

Then activate and deactivate the host several times, and check that
no iSCSI sessions are active when the host enters maintenance.
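
A quick way to check (a sketch; these are standard iscsiadm commands, run on
the host while it is in maintenance):

# should report no active sessions once vdsm has logged out of the targets
iscsiadm -m session
# lists any node records still left in the local iscsi database
iscsiadm -m node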

> Another thing concerning to the permanently removed direct LUNs.
> They are still in the /var/lib/iscsi/nodes and /var/lib/iscsi/send_targets/*
> Would be good to cleanup the folders if users are removing permanently the
> LUNs.

We don't manage the LUNs - if you are removing the LUNs manually, and
the target providing these LUNs is not needed any more, you are responsible
for removing the nodes from the iSCSI database.

I don't think we remove nodes and targets from a host, only update
them when you connect to a server. We also don't have a way to remove
a target from the engine database, so the engine cannot ask vdsm to remove
targets.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] iscsi or iser targets are not detached from the host in the maintenance mode

2017-02-16 Thread Arman Khalatyan
Hi,
In oVirt 4.1, when I put the host into maintenance mode the NFS mounts are
unmounted as expected, but the hosts are still logged into the targets.

Is this expected behavior? If so, what is the use of it?
Another thing, concerning permanently removed direct LUNs: they are still
listed in /var/lib/iscsi/nodes and /var/lib/iscsi/send_targets/*.
It would be good to clean up these folders when users permanently remove the
LUNs.

Thanks,
Arman.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ANN] oVirt 4.1.1 Second Test compose

2017-02-16 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Second
Test Compose of oVirt 4.1.1 for testing, as of February 16th, 2017

This is pre-release software. Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.

This update is a test compose allowing you to test the fixes included in
ovirt-4.1.1 since the 4.1.0 release.
See the release notes [3] for installation / upgrade instructions and a
list of new features and bugs fixed.


This release is available now for:
* Fedora 24 (tech preview)
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* Fedora 24 (tech preview)

See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
- oVirt Live and the oVirt Engine appliance have already been built [4]
- oVirt Node has not been built for this test compose.

Additional Resources:
* Read more about the oVirt 4.1.1 release highlights:
http://www.ovirt.org/release/4.1.1/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.1/
[4] http://resources.ovirt.org/pub/ovirt-4.1-pre/iso/


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] What is the libcacard-ev.x86_64 ??

2017-02-16 Thread Gianluca Cecchi
On Thu, Feb 16, 2017 at 2:38 PM, Miroslav Rezanina 
wrote:

>
> > >
> > How to solve on already installed hosts?
> > [root@ractor ~]# yum install libcacard
>
> What happened when you try yum update libcacard?
>

That was the first thing I tried; nothing to be updated.

- with yum update libcacard
Package(s) libcacard available, but not installed.
No packages marked for update

- with yum update libcacard-ev
No packages marked for update

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VMs are not running/booting on one host

2017-02-16 Thread Peter Hudec
memtest ran without issues.
I'm thinking about installing the latest oVirt on this host and trying to run
the VM. That may solve the issues if it's oVirt-related and not hw/KVM.

The latest stable version is 4.1.

Peter

I'm thinking to try the latest oVirt.
On 15/02/2017 21:46, Peter Hudec wrote:
> On 15/02/2017 21:20, Nir Soffer wrote:
>> On Wed, Feb 15, 2017 at 10:05 PM, Peter Hudec  wrote:
>>> Hi,
>>>
>>> so theproblem is little bit different. When I wait for a long time, the
>>> VM boots ;(
>>
>> Is this an issue only with old vms imported from the old setup, or
>> also with new vms?
> I do not have new VMs, so with the OLD one.But I did not import them
> from old setup.
> The Host OS upgrade I did by our docs, creating new cluster, host
> upgrade and vm migrations. There was no outage until now.
> 
> I tried to install new VM, but the installer hangs on that host.
> 
> 
>>
>>>
>>> But ... /see the log/. I'm invetigating the reason.
>>> The difference between the dipovirt0{1,2} and the dipovirt03 isthe
>>> installation time. The first 2 was migrated last week, the last one
>>> yesterday. There some newer packages, but nothing related to KVM.
>>>
>>> [  292.429622] INFO: rcu_sched self-detected stall on CPU { 0}  (t=72280
>>> jiffies g=393 c=392 q=35)
>>> [  292.430294] sending NMI to all CPUs:
>>> [  292.430305] NMI backtrace for cpu 0
>>> [  292.430309] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.16.0-4-amd64
>>> #1 Debian 3.16.39-1
>>> [  292.430311] Hardware name: oVirt oVirt Node, BIOS 0.5.1 01/01/2011
>>> [  292.430313] task: 8181a460 ti: 8180 task.ti:
>>> 8180
>>> [  292.430315] RIP: 0010:[]  []
>>> native_write_msr_safe+0x6/0x10
>>> [  292.430323] RSP: 0018:88001fc03e08  EFLAGS: 0046
>>> [  292.430325] RAX: 0400 RBX:  RCX:
>>> 0830
>>> [  292.430326] RDX:  RSI: 0400 RDI:
>>> 0830
>>> [  292.430327] RBP: 818e2a80 R08: 818e2a80 R09:
>>> 01e8
>>> [  292.430329] R10:  R11: 88001fc03b96 R12:
>>> 
>>> [  292.430330] R13: a0ea R14: 0002 R15:
>>> 0008
>>> [  292.430335] FS:  () GS:88001fc0()
>>> knlGS:
>>> [  292.430337] CS:  0010 DS:  ES:  CR0: 8005003b
>>> [  292.430339] CR2: 01801000 CR3: 1c6de000 CR4:
>>> 06f0
>>> [  292.430343] Stack:
>>> [  292.430344]  8104b30d 0002 0082
>>> 88001fc0d6a0
>>> [  292.430347]  81853800  818e2fe0
>>> 0023
>>> [  292.430349]  81853800 81047d63 88001fc0d6a0
>>> 810c73fa
>>> [  292.430352] Call Trace:
>>> [  292.430354]  
>>>
>>> [  292.430360]  [] ? __x2apic_send_IPI_mask+0xad/0xe0
>>> [  292.430365]  [] ?
>>> arch_trigger_all_cpu_backtrace+0xc3/0x140
>>> [  292.430369]  [] ? rcu_check_callbacks+0x42a/0x670
>>> [  292.430373]  [] ? account_process_tick+0xde/0x180
>>> [  292.430376]  [] ? tick_sched_handle.isra.16+0x60/0x60
>>> [  292.430381]  [] ? update_process_times+0x40/0x70
>>> [  292.430404]  [] ? tick_sched_handle.isra.16+0x20/0x60
>>> [  292.430407]  [] ? tick_sched_timer+0x3c/0x60
>>> [  292.430410]  [] ? __run_hrtimer+0x67/0x210
>>> [  292.430412]  [] ? hrtimer_interrupt+0xe9/0x220
>>> [  292.430416]  [] ? smp_apic_timer_interrupt+0x3b/0x50
>>> [  292.430420]  [] ? apic_timer_interrupt+0x6d/0x80
>>> [  292.430422]  
>>>
>>> [  292.430425]  [] ? sched_clock_local+0x15/0x80
>>> [  292.430428]  [] ? mwait_idle+0xa0/0xa0
>>> [  292.430431]  [] ? native_safe_halt+0x2/0x10
>>> [  292.430434]  [] ? default_idle+0x19/0xd0
>>> [  292.430437]  [] ? cpu_startup_entry+0x374/0x470
>>> [  292.430440]  [] ? start_kernel+0x497/0x4a2
>>> [  292.430442]  [] ? set_init_arg+0x4e/0x4e
>>> [  292.430445]  [] ? early_idt_handler_array+0x120/0x120
>>> [  292.430447]  [] ? x86_64_start_kernel+0x14d/0x15c
>>> [  292.430448] Code: c2 48 89 d0 c3 89 f9 0f 32 31 c9 48 c1 e2 20 89 c0
>>> 89 0e 48 09 c2 48 89 d0 c3 66 66 2e 0f 1f 84 00 00 00 00 00 89 f0 89 f9
>>> 0f 30 <31> c0 c3 0f 1f 80 00 00 00 00 89 f9 0f 33 48 c1 e2 20 89 c0 48
>>> [  292.430579] Clocksource tsc unstable (delta = -289118137838 ns)
>>>
>>>
>>> On 15/02/2017 20:39, Peter Hudec wrote:
 Hi,

 I did already, but not find any suspicious, see attached logs and the
 spice screenshot.

 Actually the VM is booting, but is stuck in some bad state.
 When migrating, the migration is successful, but the vm is not accessible
 /even on network/

 Right now I found one VM, which is working well.

 In logs look for diplci01 at 2017-02-15 20:23:00,420, the VM ID is
 7ddf349b-fb9a-44f4-9e88-73e84625a44e

   thanks
   Peter

 On 15/02/2017 19:40, Nir Soffer wrote:
> On Wed, Feb 15, 2017 at 8:11 PM, Peter Hudec  wrote:
>> Hi,
>>
>> I'

Re: [ovirt-users] What is the libcacard-ev.x86_64 ??

2017-02-16 Thread Sandro Bonazzola
On Thu, Feb 16, 2017 at 3:33 PM, Gianluca Cecchi 
wrote:

> On Thu, Feb 16, 2017 at 2:38 PM, Miroslav Rezanina 
> wrote:
>
>>
>> > >
>> > How to solve on already installed hosts?
>> > [root@ractor ~]# yum install libcacard
>>
>> What happened when you try yum update libcacard?
>>
>
> It was the first try I executed. Nothing to be updated
>
> - with yum update libcacard
> Package(s) libcacard available, but not installed.
> No packages marked for update
>
> - with yum update libcacard-ev
> No packages marked for update
>

Give me a couple of days, I'll fix this.


>
> Gianluca
>
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] What is the libcacard-ev.x86_64 ??

2017-02-16 Thread Miroslav Rezanina


- Original Message -
> From: "Gianluca Cecchi" 
> To: "Miroslav Rezanina" 
> Cc: "Sandro Bonazzola" , "Karanbir Singh" 
> , "Alan Pevec"
> , "Arman Khalatyan" , "users" 
> 
> Sent: Thursday, February 16, 2017 2:25:41 PM
> Subject: Re: [ovirt-users] What is the libcacard-ev.x86_64 ??
> 
> On Thu, Feb 16, 2017 at 1:47 PM, Miroslav Rezanina 
> wrote:
> 
> >
> >
> > This issue is probably due to move libcacard-rhev from qemu-kvm-rhev
> > subpackage
> > to separate component libcacard (and so merged with regular libcacard from
> > rhel).
> >
> > This transition was most probably not properly mirrored
> > to ovirt so it still expect libcacard-ev to be part of qemu-kvm-ev build.
> > In case
> > qemu-kvm-ev is based on qemu-kvm-rhev, libcacard subpackage disappeared
> > but was not
> > added as new component.
> >
> > Mirek
> >
> >
> How to solve on already installed hosts?
> [root@ractor ~]# yum install libcacard

What happens when you try yum update libcacard?

> ...
> Resolving Dependencies
> --> Running transaction check
> ---> Package libcacard.x86_64 40:2.5.2-2.el7 will be installed
> ...
> Transaction check error:
>   file /usr/lib64/libcacard.so.0.0.0 from install of
> libcacard-40:2.5.2-2.el7.x86_64 conflicts with file from package
> libcacard-ev-10:2.3.0-31.el7.16.1.x86_64
> 
> [root@ractor ~]# yum remove libcacard-ev
> ...
> 
> Removing:
>  libcacard-ev x86_6410:2.3.0-31.el7.16.1
>  @ovirt-4.0 47 k
> Removing for dependencies:
>  libguestfs   x86_641:1.32.7-3.el7.centos.2
> @updates  3.8 M
>  libguestfs-tools-c   x86_641:1.32.7-3.el7.centos.2
> @updates   14 M
>  libguestfs-winsupportx86_647.2-1.el7
> installed 2.2 M
>  libvirt-daemon-kvm   x86_642.0.0-10.el7_3.4
>  @updates  0.0
>  ovirt-hosted-engine-ha   noarch2.1.0.1-1.el7.centos
>  @ovirt-4.11.8 M
>  ovirt-hosted-engine-setupnoarch2.1.0.1-1.el7.centos
>  @ovirt-4.12.0 M
>  ovirt-provider-ovn-drivernoarch1.0-1.20161219125609.git.el7.centos
> @ovirt-4.19.4 k
>  python-libguestfsx86_641:1.32.7-3.el7.centos.2
> @updates  1.1 M
>  qemu-kvm-ev  x86_6410:2.6.0-28.el7_3.3.1
> @ovirt-4.09.6 M
>  safeleasex86_641.0-7.el7
> installed  43 k
>  spice-glib   x86_640.31-6.el7_3.2
>  @updates  1.2 M
>  spice-gtk3   x86_640.31-6.el7_3.2
>  @updates  112 k
>  spice-xpix86_642.8-8.el7
> @base 179 k
>  vdsm x86_644.19.4-1.el7.centos
> @ovirt-4.12.5 M
>  vdsm-hook-vmfex-dev  noarch4.19.4-1.el7.centos
> @ovirt-4.1 21 k
>  virt-v2v x86_641:1.32.7-3.el7.centos.2
> @updates   16 M
>  virt-viewer  x86_642.0-12.el7
>  @base 1.2 M
> 
> Probably it is needed a new libcacard package that obsoletes libcacard-ev,
> before doing anything, correct?
> 

Yes, this can be solved by a new build. The RHEL libcacard is aware only of the
RHEL/RHV environment. When the package is transferred to the CentOS environment,
the rhev suffix should be replaced with the ev suffix.

Mirek

> Gianluca
> 

-- 
Miroslav Rezanina
Software Engineer - Virtualization Team

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] What is the libcacard-ev.x86_64 ??

2017-02-16 Thread Miroslav Rezanina


- Original Message -
> From: "Sandro Bonazzola" 
> To: "Gianluca Cecchi" , "Miroslav Rezanina" 
> , "Karanbir Singh"
> , "Alan Pevec" 
> Cc: "Arman Khalatyan" , "users" 
> Sent: Thursday, February 16, 2017 1:34:30 PM
> Subject: Re: [ovirt-users] What is the libcacard-ev.x86_64 ??
> 
> On Thu, Feb 16, 2017 at 1:25 PM, Sandro Bonazzola 
> wrote:
> 
> >
> >
> > On Tue, Feb 14, 2017 at 9:20 AM, Gianluca Cecchi <
> > gianluca.cec...@gmail.com> wrote:
> >
> >> On Fri, Feb 10, 2017 at 11:49 AM, Arman Khalatyan 
> >> wrote:
> >>
> >>> I have a host which was updated since 3.6
> >>> libcacard is disappeared from ovirt4.1 but it has a many dependencies
> >>> which stops to remove it:
> >>>
> >>>
> >> Hello,
> >> I reconnect to this thread, as I have similar problem/doubt relatd to
> >> libcacard / libcacard-ev packages.
> >> I had a cluster composed by two nodes, plain CentOS 7.3 nodes.
> >> They were installed in 4.0.6 (on 21/01) and this involved installation of
> >> libcacard-ev-2.3.0-31.el7.16.1.x86_64 package.
> >> Then I updated to 4.1 (on 08/02) using "yum update" strategy and nothing
> >> changed about this package.
> >> Yesteray I added a third node, but I notice that on it I don't have
> >> libcacard-ev but libcacard-2.5.2-2.el7.x86_64
> >>
> >
> >
> > I think the right one is libcacard-2.5.2-2.el7.x86_64 since libcacard is
> > not built anymore with qemu-kkvm-ev.
> > Not sure why libcacard-ev-2.3.0-31.el7.16.1.x86_64 has not been updated
> > but I guess it's related to the Obsolete clause.
> > I guess I'll need to rebuild libcacard adding an obsolete on libcacard-ev
> > too.
> >
> > Adding Miroslav, Alan and Karanbir to let them know about the issue.

This issue is probably due to the move of libcacard-rhev from the qemu-kvm-rhev
subpackage to the separate libcacard component (and so merged with the regular
libcacard from RHEL).

This transition was most probably not properly mirrored to oVirt, so it still
expects libcacard-ev to be part of the qemu-kvm-ev build. Since qemu-kvm-ev is
based on qemu-kvm-rhev, the libcacard subpackage disappeared but was not added
as a new component.

Mirek
> >
> 
> Tracked here: https://bugzilla.redhat.com/show_bug.cgi?id=1422868
> 
> 
> 
> >
> >
> >>
> >> Source for the -ev package is qemu-kvm-ev-2.3.0-31.el7.16.1.src.rpm and
> >> build host el7-vm28.phx.ovirt.org and this makes me think it is the
> >> right package.
> >> But it is not pulled in into the new host where I think I have the
> >> standard CentOS one with source RPM libcacard-2.5.2-2.el7.src.rpm and
> >> Packager CentOS BuildSystem
> >>
> >> The strange thing is that in their changelog I have:
> >>
> >> - for standard libcacard:
> >> * Fri Mar 18 2016 Miroslav Rezanina  - 2.5.2-2.el7
> >> - Obsolete libcacard-rhev (bz#1315953)
> >>
> >> * Fri Jan 29 2016 Miroslav Rezanina  - 2.5.2-1.el7
> >> - Initial build
> >>
> >> - for libcacard-ev
> >> * Fri Jul 08 2016 Sandro Bonazzola  -
> >> ev-2.3.0-31.el7_2.16.1
> >> - Removing RH branding from package name
> >>
> >> * Thu Jun 16 2016 Miroslav Rezanina  -
> >> rhev-2.3.0-31.el7_2.16
> >> - kvm-vga-add-sr_vbe-register-set.patch [bz#1347185]
> >> - Resolves: bz#1347185
> >>   (Regression from CVE-2016-3712: windows installer fails to start)
> >> ...
> >>
> >> What to choose and to do?
> >> Thanks,
> >> Gianluca
> >>
> >>
> >> ___
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >>
> >>
> >
> >
> > --
> > Sandro Bonazzola
> > Better technology. Faster innovation. Powered by community collaboration.
> > See how it works at redhat.com
> >
> 
> 
> 
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> 

-- 
Miroslav Rezanina
Software Engineer - Virtualization Team

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm conf files sync

2017-02-16 Thread Yaniv Kaul
On Thu, Feb 16, 2017 at 3:10 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
> On 16 Feb 2017, at 14:05, Gianluca Cecchi 
> wrote:
>
> On Thu, Feb 16, 2017 at 1:06 PM, Nir Soffer  wrote:
>
>>
>> Would you open RFE for this?
>>
>> Nir
>>
>
> Done against ovirt-engine, host-deploy component:
> https://bugzilla.redhat.com/show_bug.cgi?id=1422880
>
>
> I’m sorry to say that a bit late, but this has been discussed and
> considered years ago, and ultimately put away.
> We do not plan to work on any vdsm.conf sync mechanism
> Rather, anything beyond host-specific customization should be properly
> handled in engine, be it a complete implementation or spec params or custom
> properties.
>

Or via configuration management tools, taking host status into account.
For example, an Ansible playbook using the oVirt Ansible modules can do this
properly.

Y.
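
For illustration only, a minimal ad-hoc sketch of pushing the drop-in file from
this thread to a few hosts over SSH. This is an assumption-laden example, not a
recommendation: the host names are placeholders, it ignores host status (which a
proper configuration management tool would take into account), and it assumes
restarting vdsmd on those hosts is acceptable at that moment.

HOSTS="host1.example.com host2.example.com host3.example.com"   # placeholders
CONF=/etc/vdsm/vdsm.conf.d/50_thin_block_extension_rules.conf

for h in $HOSTS; do
    # copy the drop-in file and restart vdsmd so it re-reads its configuration
    scp "$CONF" "root@$h:$CONF"
    ssh "root@$h" "systemctl restart vdsmd"
done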


>
> Thanks,
> michal
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVN routing and firewalling in oVirt

2017-02-16 Thread Simone Tiraboschi
On Thu, Feb 16, 2017 at 2:20 PM, Gianluca Cecchi 
wrote:

> Hello,
> how do we manage routing between different OVN networks in oVirt?
> And between OVN networks and physical ones?
>

Take a look at this blog post:
http://blog.spinhirne.com/2016/09/the-ovn-gateway-router.html

The good news is that a distributed NAT is going to be introduced with OVN
2.7:
https://patchwork.ozlabs.org/patch/726766/


> Based on architecture read here:
> http://openvswitch.org/support/dist-docs/ovn-architecture.7.html
>
> I see terms for logical routers and gateway routers respectively but how
> to apply to oVirt configuration?
> Do I have to choose between setting up a specialized VM or a physical one:
> is it applicable/advisable to put on oVirt host itself the gateway
> functionality?
>
> Is there any security policy (like security groups in Openstack) to
> implement?
>
> Thanks,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] What is the libcacard-ev.x86_64 ??

2017-02-16 Thread Gianluca Cecchi
On Thu, Feb 16, 2017 at 1:47 PM, Miroslav Rezanina 
wrote:

>
>
> This issue is probably due to move libcacard-rhev from qemu-kvm-rhev
> subpackage
> to separate component libcacard (and so merged with regular libcacard from
> rhel).
>
> This transition was most probably not properly mirrored
> to ovirt so it still expect libcacard-ev to be part of qemu-kvm-ev build.
> In case
> qemu-kvm-ev is based on qemu-kvm-rhev, libcacard subpackage disappeared
> but was not
> added as new component.
>
> Mirek
>
>
How can this be solved on already installed hosts?
[root@ractor ~]# yum install libcacard
...
Resolving Dependencies
--> Running transaction check
---> Package libcacard.x86_64 40:2.5.2-2.el7 will be installed
...
Transaction check error:
  file /usr/lib64/libcacard.so.0.0.0 from install of
libcacard-40:2.5.2-2.el7.x86_64 conflicts with file from package
libcacard-ev-10:2.3.0-31.el7.16.1.x86_64

[root@ractor ~]# yum remove libcacard-ev
...

Removing:
 libcacard-ev               x86_64  10:2.3.0-31.el7.16.1                 @ovirt-4.0   47 k
Removing for dependencies:
 libguestfs                 x86_64  1:1.32.7-3.el7.centos.2              @updates    3.8 M
 libguestfs-tools-c         x86_64  1:1.32.7-3.el7.centos.2              @updates     14 M
 libguestfs-winsupport      x86_64  7.2-1.el7                            installed   2.2 M
 libvirt-daemon-kvm         x86_64  2.0.0-10.el7_3.4                     @updates    0.0
 ovirt-hosted-engine-ha     noarch  2.1.0.1-1.el7.centos                 @ovirt-4.1  1.8 M
 ovirt-hosted-engine-setup  noarch  2.1.0.1-1.el7.centos                 @ovirt-4.1  2.0 M
 ovirt-provider-ovn-driver  noarch  1.0-1.20161219125609.git.el7.centos  @ovirt-4.1  9.4 k
 python-libguestfs          x86_64  1:1.32.7-3.el7.centos.2              @updates    1.1 M
 qemu-kvm-ev                x86_64  10:2.6.0-28.el7_3.3.1                @ovirt-4.0  9.6 M
 safelease                  x86_64  1.0-7.el7                            installed    43 k
 spice-glib                 x86_64  0.31-6.el7_3.2                       @updates    1.2 M
 spice-gtk3                 x86_64  0.31-6.el7_3.2                       @updates    112 k
 spice-xpi                  x86_64  2.8-8.el7                            @base       179 k
 vdsm                       x86_64  4.19.4-1.el7.centos                  @ovirt-4.1  2.5 M
 vdsm-hook-vmfex-dev        noarch  4.19.4-1.el7.centos                  @ovirt-4.1   21 k
 virt-v2v                   x86_64  1:1.32.7-3.el7.centos.2              @updates     16 M
 virt-viewer                x86_64  2.0-12.el7                           @base       1.2 M

Probably a new libcacard package that obsoletes libcacard-ev is needed
before doing anything, correct?

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] OVN routing and firewalling in oVirt

2017-02-16 Thread Gianluca Cecchi
Hello,
how do we manage routing between different OVN networks in oVirt?
And between OVN networks and physical ones?

Based on architecture read here:
http://openvswitch.org/support/dist-docs/ovn-architecture.7.html

I see terms for logical routers and gateway routers respectively but how to
apply to oVirt configuration?
Do I have to choose between setting up a specialized VM or a physical one:
is it applicable/advisable to put on oVirt host itself the gateway
functionality?

Is there any security policy (like security groups in Openstack) to
implement?

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm conf files sync

2017-02-16 Thread Michal Skrivanek

> On 16 Feb 2017, at 14:05, Gianluca Cecchi  wrote:
> 
> On Thu, Feb 16, 2017 at 1:06 PM, Nir Soffer  > wrote:
> 
> Would you open RFE for this?
> 
> Nir
> 
> Done against ovirt-engine, host-deploy component:
> https://bugzilla.redhat.com/show_bug.cgi?id=1422880 
> 

I’m sorry to say this a bit late, but this has been discussed and considered
years ago, and ultimately put aside.
We do not plan to work on any vdsm.conf sync mechanism.
Rather, anything beyond host-specific customization should be properly handled
in the engine, be it a complete implementation, spec params or custom properties.

Thanks,
michal

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm conf files sync

2017-02-16 Thread Gianluca Cecchi
On Thu, Feb 16, 2017 at 1:06 PM, Nir Soffer  wrote:

>
> Would you open RFE for this?
>
> Nir
>

Done against ovirt-engine, host-deploy component:
https://bugzilla.redhat.com/show_bug.cgi?id=1422880
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Change Vlan ID's which are connected to the Host/VM's

2017-02-16 Thread Matt .
Hi,

OK, good to know. Why would it not be possible to change the name as
well? I would think it's not that difficult to do in the same command?

Cheers,

Matt

2017-02-16 13:54 GMT+01:00 Michael Burman :
> Hi Matt
>
> Are you trying to change the network's name? if this is the case, then it
> doesn't allowed while network is attached to host/s.
>
> Yes, it is a bug in the error message when trying to rename network while it
> is attached to multiple hosts. I will report a bug for this issue.
>
> Thanks!
>
> On Thu, Feb 16, 2017 at 2:41 PM, Matt .  wrote:
>>
>> Hi Micheal,
>>
>> I trying pro process the change the issue is that I still get the
>> message (with some variable bug in it) that some hosts have the
>> logical network in usage. Is this Only on the name of the network ?
>>
>> I'm on Ovirt 4.0.6
>>
>> Error while executing action: Cannot edit Network. This logical
>> network is used by hosts:
>> (${ACTION_TYPE_FAILED_NETWORK_IN_ONE_USE_LIST_COUNTER}):
>> ${ACTION_TYPE_FAILED_NETWORK_IN_ONE_USE_LIST}
>> - Please detach hosts using this logical network and try again.
>>
>> Thanks,
>>
>> Matt
>>
>> 2017-02-15 9:46 GMT+01:00 Matt . :
>> > HI Micheal,
>> >
>> > Thanks! it looks like it's what I need, checking out!
>> >
>> > Thank you very much!
>> >
>> > Matt
>> >
>> > 2017-02-15 7:12 GMT+01:00 Michael Burman :
>> >> Check this feature page as well)
>> >>
>> >>
>> >> http://www.ovirt.org/develop/release-management/features/network/multihostnetworkconfiguration/
>> >>
>> >>
>> >>
>> >> On Wed, Feb 15, 2017 at 8:06 AM, Michael Burman 
>> >> wrote:
>> >>>
>> >>> Hello Matt
>> >>>
>> >>> Yes, there is a way, it can be done as part of our MultiHost Network
>> >>> feature(since 3.4).
>> >>> If you have the same logical network attached to multiple hosts in
>> >>> your
>> >>> DC, you can update their vlan IDs via the 'Networks' main tab and this
>> >>> will
>> >>> send a setup networks command through all of your hosts in the DC and
>> >>> update
>> >>> the logical network with the new vlan ID.
>> >>> - Note that this can be done when the VMs are running on the host/s as
>> >>> well.
>> >>>
>> >>> Good luck)
>> >>>
>> >>> On Tue, Feb 14, 2017 at 11:51 PM, Matt . 
>> >>> wrote:
>> 
>>  Hi Guys,
>> 
>>  Is there a way to change Vlan ID's which are connected to the Hosts
>>  when the VM's on it are down ?
>> 
>>  As I need to change a lot of them it would be great if the engine
>>  can
>>  perform some process which allows me to hammer my hosts with the new
>>  vlan ID's for their bridges.
>> 
>>  I hope someone has an idea.
>> 
>>  Thanks!
>> 
>>  Matt
>>  ___
>>  Users mailing list
>>  Users@ovirt.org
>>  http://lists.ovirt.org/mailman/listinfo/users
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Michael Burman
>> >>> RedHat Israel, RHV-M Network QE
>> >>>
>> >>> Mobile: 054-5355725
>> >>> IRC: mburman
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> Michael Burman
>> >> RedHat Israel, RHV-M Network QE
>> >>
>> >> Mobile: 054-5355725
>> >> IRC: mburman
>
>
>
>
> --
> Michael Burman
> RedHat Israel, RHV-M Network QE
>
> Mobile: 054-5355725
> IRC: mburman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Change Vlan ID's which are connected to the Host/VM's

2017-02-16 Thread Michael Burman
Hi Matt

Are you trying to change the network's name? If this is the case, then it
is not allowed while the network is attached to host(s).

Yes, it is a bug in the error message when trying to rename a network while
it is attached to multiple hosts. I will report a bug for this issue.

Thanks!

On Thu, Feb 16, 2017 at 2:41 PM, Matt .  wrote:

> Hi Micheal,
>
> I trying pro process the change the issue is that I still get the
> message (with some variable bug in it) that some hosts have the
> logical network in usage. Is this Only on the name of the network ?
>
> I'm on Ovirt 4.0.6
>
> Error while executing action: Cannot edit Network. This logical
> network is used by hosts:
> (${ACTION_TYPE_FAILED_NETWORK_IN_ONE_USE_LIST_COUNTER}):
> ${ACTION_TYPE_FAILED_NETWORK_IN_ONE_USE_LIST}
> - Please detach hosts using this logical network and try again.
>
> Thanks,
>
> Matt
>
> 2017-02-15 9:46 GMT+01:00 Matt . :
> > HI Micheal,
> >
> > Thanks! it looks like it's what I need, checking out!
> >
> > Thank you very much!
> >
> > Matt
> >
> > 2017-02-15 7:12 GMT+01:00 Michael Burman :
> >> Check this feature page as well)
> >>
> >> http://www.ovirt.org/develop/release-management/features/network/
> multihostnetworkconfiguration/
> >>
> >>
> >>
> >> On Wed, Feb 15, 2017 at 8:06 AM, Michael Burman 
> wrote:
> >>>
> >>> Hello Matt
> >>>
> >>> Yes, there is a way, it can be done as part of our MultiHost Network
> >>> feature(since 3.4).
> >>> If you have the same logical network attached to multiple hosts in your
> >>> DC, you can update their vlan IDs via the 'Networks' main tab and this
> will
> >>> send a setup networks command through all of your hosts in the DC and
> update
> >>> the logical network with the new vlan ID.
> >>> - Note that this can be done when the VMs are running on the host/s as
> >>> well.
> >>>
> >>> Good luck)
> >>>
> >>> On Tue, Feb 14, 2017 at 11:51 PM, Matt . 
> wrote:
> 
>  Hi Guys,
> 
>  Is there a way to change Vlan ID's which are connected to the Hosts
>  when the VM's on it are down ?
> 
>  As I need to change a lot of them it would be great if the engine can
>  perform some process which allows me to hammer my hosts with the new
>  vlan ID's for their bridges.
> 
>  I hope someone has an idea.
> 
>  Thanks!
> 
>  Matt
>  ___
>  Users mailing list
>  Users@ovirt.org
>  http://lists.ovirt.org/mailman/listinfo/users
> >>>
> >>>
> >>>
> >>>
> >>> --
> >>> Michael Burman
> >>> RedHat Israel, RHV-M Network QE
> >>>
> >>> Mobile: 054-5355725
> >>> IRC: mburman
> >>
> >>
> >>
> >>
> >> --
> >> Michael Burman
> >> RedHat Israel, RHV-M Network QE
> >>
> >> Mobile: 054-5355725
> >> IRC: mburman
>



-- 
Michael Burman
RedHat Israel, RHV-M Network QE

Mobile: 054-5355725
IRC: mburman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Change Vlan ID's which are connected to the Host/VM's

2017-02-16 Thread Matt .
Hi Micheal,

I'm trying to process the change; the issue is that I still get the
message (with some variable bug in it) that some hosts have the
logical network in use. Is this only about the name of the network?

I'm on Ovirt 4.0.6

Error while executing action: Cannot edit Network. This logical
network is used by hosts:
(${ACTION_TYPE_FAILED_NETWORK_IN_ONE_USE_LIST_COUNTER}):
${ACTION_TYPE_FAILED_NETWORK_IN_ONE_USE_LIST}
- Please detach hosts using this logical network and try again.

Thanks,

Matt

2017-02-15 9:46 GMT+01:00 Matt . :
> HI Micheal,
>
> Thanks! it looks like it's what I need, checking out!
>
> Thank you very much!
>
> Matt
>
> 2017-02-15 7:12 GMT+01:00 Michael Burman :
>> Check this feature page as well)
>>
>> http://www.ovirt.org/develop/release-management/features/network/multihostnetworkconfiguration/
>>
>>
>>
>> On Wed, Feb 15, 2017 at 8:06 AM, Michael Burman  wrote:
>>>
>>> Hello Matt
>>>
>>> Yes, there is a way, it can be done as part of our MultiHost Network
>>> feature(since 3.4).
>>> If you have the same logical network attached to multiple hosts in your
>>> DC, you can update their vlan IDs via the 'Networks' main tab and this will
>>> send a setup networks command through all of your hosts in the DC and update
>>> the logical network with the new vlan ID.
>>> - Note that this can be done when the VMs are running on the host/s as
>>> well.
>>>
>>> Good luck)
>>>
>>> On Tue, Feb 14, 2017 at 11:51 PM, Matt .  wrote:

 Hi Guys,

 Is there a way to change Vlan ID's which are connected to the Hosts
 when the VM's on it are down ?

 As I need to change a lot of them it would be great if the engine can
 perform some process which allows me to hammer my hosts with the new
 vlan ID's for their bridges.

 I hope someone has an idea.

 Thanks!

 Matt
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>>
>>>
>>> --
>>> Michael Burman
>>> RedHat Israel, RHV-M Network QE
>>>
>>> Mobile: 054-5355725
>>> IRC: mburman
>>
>>
>>
>>
>> --
>> Michael Burman
>> RedHat Israel, RHV-M Network QE
>>
>> Mobile: 054-5355725
>> IRC: mburman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] What is the libcacard-ev.x86_64 ??

2017-02-16 Thread Sandro Bonazzola
On Thu, Feb 16, 2017 at 1:25 PM, Sandro Bonazzola 
wrote:

>
>
> On Tue, Feb 14, 2017 at 9:20 AM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Fri, Feb 10, 2017 at 11:49 AM, Arman Khalatyan 
>> wrote:
>>
>>> I have a host which was updated since 3.6
>>> libcacard is disappeared from ovirt4.1 but it has a many dependencies
>>> which stops to remove it:
>>>
>>>
>> Hello,
>> I reconnect to this thread, as I have similar problem/doubt relatd to
>> libcacard / libcacard-ev packages.
>> I had a cluster composed by two nodes, plain CentOS 7.3 nodes.
>> They were installed in 4.0.6 (on 21/01) and this involved installation of
>> libcacard-ev-2.3.0-31.el7.16.1.x86_64 package.
>> Then I updated to 4.1 (on 08/02) using "yum update" strategy and nothing
>> changed about this package.
>> Yesteray I added a third node, but I notice that on it I don't have
>> libcacard-ev but libcacard-2.5.2-2.el7.x86_64
>>
>
>
> I think the right one is libcacard-2.5.2-2.el7.x86_64 since libcacard is
> not built anymore with qemu-kkvm-ev.
> Not sure why libcacard-ev-2.3.0-31.el7.16.1.x86_64 has not been updated
> but I guess it's related to the Obsolete clause.
> I guess I'll need to rebuild libcacard adding an obsolete on libcacard-ev
> too.
>
> Adding Miroslav, Alan and Karanbir to let them know about the issue.
>

Tracked here: https://bugzilla.redhat.com/show_bug.cgi?id=1422868



>
>
>>
>> Source for the -ev package is qemu-kvm-ev-2.3.0-31.el7.16.1.src.rpm and
>> build host el7-vm28.phx.ovirt.org and this makes me think it is the
>> right package.
>> But it is not pulled in into the new host where I think I have the
>> standard CentOS one with source RPM libcacard-2.5.2-2.el7.src.rpm and
>> Packager CentOS BuildSystem
>>
>> The strange thing is that in their changelog I have:
>>
>> - for standard libcacard:
>> * Fri Mar 18 2016 Miroslav Rezanina  - 2.5.2-2.el7
>> - Obsolete libcacard-rhev (bz#1315953)
>>
>> * Fri Jan 29 2016 Miroslav Rezanina  - 2.5.2-1.el7
>> - Initial build
>>
>> - for libcacard-ev
>> * Fri Jul 08 2016 Sandro Bonazzola  -
>> ev-2.3.0-31.el7_2.16.1
>> - Removing RH branding from package name
>>
>> * Thu Jun 16 2016 Miroslav Rezanina  -
>> rhev-2.3.0-31.el7_2.16
>> - kvm-vga-add-sr_vbe-register-set.patch [bz#1347185]
>> - Resolves: bz#1347185
>>   (Regression from CVE-2016-3712: windows installer fails to start)
>> ...
>>
>> What to choose and to do?
>> Thanks,
>> Gianluca
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] What is the libcacard-ev.x86_64 ??

2017-02-16 Thread Sandro Bonazzola
On Tue, Feb 14, 2017 at 9:20 AM, Gianluca Cecchi 
wrote:

> On Fri, Feb 10, 2017 at 11:49 AM, Arman Khalatyan 
> wrote:
>
>> I have a host which was updated since 3.6
>> libcacard is disappeared from ovirt4.1 but it has a many dependencies
>> which stops to remove it:
>>
>>
> Hello,
> I reconnect to this thread, as I have similar problem/doubt relatd to
> libcacard / libcacard-ev packages.
> I had a cluster composed by two nodes, plain CentOS 7.3 nodes.
> They were installed in 4.0.6 (on 21/01) and this involved installation of
> libcacard-ev-2.3.0-31.el7.16.1.x86_64 package.
> Then I updated to 4.1 (on 08/02) using "yum update" strategy and nothing
> changed about this package.
> Yesteray I added a third node, but I notice that on it I don't have
> libcacard-ev but libcacard-2.5.2-2.el7.x86_64
>


I think the right one is libcacard-2.5.2-2.el7.x86_64 since libcacard is
not built anymore with qemu-kvm-ev.
Not sure why libcacard-ev-2.3.0-31.el7.16.1.x86_64 has not been updated, but
I guess it's related to the Obsoletes clause.
I guess I'll need to rebuild libcacard adding an Obsoletes on libcacard-ev
too.

Adding Miroslav, Alan and Karanbir to let them know about the issue.
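
Once a rebuilt libcacard is published, a hedged way to check whether it really
declares the extra Obsoletes (repoquery is part of yum-utils; the package names
are simply the ones discussed in this thread):

$ repoquery --obsoletes libcacard     # Obsoletes declared by the repo package
$ rpm -q --obsoletes libcacard        # Obsoletes declared by the installed package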


>
> Source for the -ev package is qemu-kvm-ev-2.3.0-31.el7.16.1.src.rpm and
> build host el7-vm28.phx.ovirt.org and this makes me think it is the right
> package.
> But it is not pulled in into the new host where I think I have the
> standard CentOS one with source RPM libcacard-2.5.2-2.el7.src.rpm and
> Packager CentOS BuildSystem
>
> The strange thing is that in their changelog I have:
>
> - for standard libcacard:
> * Fri Mar 18 2016 Miroslav Rezanina  - 2.5.2-2.el7
> - Obsolete libcacard-rhev (bz#1315953)
>
> * Fri Jan 29 2016 Miroslav Rezanina  - 2.5.2-1.el7
> - Initial build
>
> - for libcacard-ev
> * Fri Jul 08 2016 Sandro Bonazzola  -
> ev-2.3.0-31.el7_2.16.1
> - Removing RH branding from package name
>
> * Thu Jun 16 2016 Miroslav Rezanina  -
> rhev-2.3.0-31.el7_2.16
> - kvm-vga-add-sr_vbe-register-set.patch [bz#1347185]
> - Resolves: bz#1347185
>   (Regression from CVE-2016-3712: windows installer fails to start)
> ...
>
> What to choose and to do?
> Thanks,
> Gianluca
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm conf files sync

2017-02-16 Thread Nir Soffer
On Thu, Feb 16, 2017 at 12:00 PM, Gianluca Cecchi
 wrote:
> Hello,
> in my 2 original 4.1 hosts I got some storage errors using rdbms machines
> when restoring or doing hevy I/O.
> My storage domain is FC SAN based.
> I solved the problem setting this conservative settings into
> /etc/vdsm/vdsm.conf.d
>
> cat 50_thin_block_extension_rules.conf
> [irs]
>
> # Together with volume_utilization_chunk_mb, set the minimal free
> # space before a thin provisioned block volume is extended. Use lower
> # values to extend earlier.
> volume_utilization_percent = 25
>
> # Size of extension chunk in megabytes, and together with
> # volume_utilization_percent, set the free space limit. Use higher
> # values to extend in bigger chunks.
> volume_utilization_chunk_mb = 4096
>
> Then I added a third host in a second time and I wrongly supposed that an
> equal vdsm configurtion would have been deployed with "New Host" from
> gui
> But is not so.
> Yesterday with a VM running on this third hypervisor I got the previous
> experimented messages; some cycles of these
>
> VM dbatest6 has recovered from paused back to up.
> VM dbatest6 has been paused due to no Storage space error.
> VM dbatest6 has been paused.
>
> in a 2 hours period.
>
> Two questions:
> - why not align hypervisor configuration when adding host and in particular
> the vdsm one? Any reason in general for having different config in hosts of
> the same cluster?

Host configuration is not managed by the system. You are responsible for
configuring a new host using your special configuration.

> - the host that was running the VM was not the SPM.
> Who is in charge of applying the settings about volume extension when a VM
> I/O load requires it because of a thin provisioned disk in use?
> I presume not the SPM but the host that has in charge the VM, based on what
> I saw yesterday...

The host running the VM is monitoring the data written to the disks and asks
the SPM to extend the disks when needed.

I think that the engine should manage the configuration. Being able to
configure a host in a different way may be important, but it should not be
the normal way.

The way it should work is configuring stuff on the engine and letting the
engine apply the configuration on all hosts, the same way we do with anything
else.

Would you open RFE for this?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Requirements to use ovirt-image-repository

2017-02-16 Thread Fred Rolland
The SPM one

On Thu, Feb 16, 2017 at 12:01 PM, Gianluca Cecchi  wrote:

> On Thu, Feb 16, 2017 at 10:53 AM, Fred Rolland 
> wrote:
>
>> The VDSM is doing the download
>>
>> On Sun, Feb 12, 2017 at 2:09 AM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> Hello,
>>> is it sufficient to open outside glance port (9292) of glance.ovirt.org
>>> or is there anything else to do?
>>> Making a test from an environment without outside restrictions it seems
>>> that is only the engine that connects, correct?
>>>
>>> Thanks,
>>> Gianluca
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
> vdsm of which host if I have n hosts?
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Secure installation via RPM

2017-02-16 Thread Sandro Bonazzola
On Wed, Feb 15, 2017 at 5:22 AM, Michael Gauthier 
wrote:

> Hello,
>
> The getting download documentation (http://www.ovirt.org/download/) says
> to use yum to install a RPM directly from a plain HTTP URL. Is there a
> HTTPS host for the oVirt RPMs?


We're not running https on resources.ovirt.org. I'm opening a ticket on
infra for this.



> If not, is the RPM signed with a GPG key? Is the GPG signature available
> via HTTPS?
>

Release RPMs are signed; you can get the GPG key manually as described on the
website:

$ gpg --recv-keys FE590CB7
$ gpg --list-keys --with-fingerprint FE590CB7
---
pub   2048R/FE590CB7 2014-03-30 [expires: 2021-04-03]
  Key fingerprint = 31A5 D783 7FAD 7CB2 86CD  3469 AB8C 4F9D FE59 0CB7
uid  oVirt 
sub   2048R/004BC303 2014-03-30 [expires: 2021-04-03]
---
$ gpg --export --armor FE590CB7 > ovirt-infra.pub
# rpm --import ovirt-infra.pub
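
For completeness, a hedged sketch of checking a locally downloaded release rpm
against that key once it has been imported; the file name below is only an
example of what you might have fetched from resources.ovirt.org:

$ rpm -K ovirt-release41.rpm

A good signature is reported as OK; if the key has not been imported yet, rpm
should flag the signature with NOKEY instead.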




>
> Please see https://access.redhat.com/blogs/766093/posts/1976693
>
> Thanks,
> Mike
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Requirements to use ovirt-image-repository

2017-02-16 Thread Gianluca Cecchi
On Thu, Feb 16, 2017 at 10:53 AM, Fred Rolland  wrote:

> The VDSM is doing the download
>
> On Sun, Feb 12, 2017 at 2:09 AM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> Hello,
>> is it sufficient to open outside glance port (9292) of glance.ovirt.org
>> or is there anything else to do?
>> Making a test from an environment without outside restrictions it seems
>> that is only the engine that connects, correct?
>>
>> Thanks,
>> Gianluca
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
The vdsm of which host, if I have n hosts?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] vdsm conf files sync

2017-02-16 Thread Gianluca Cecchi
Hello,
on my 2 original 4.1 hosts I got some storage errors using RDBMS machines
when restoring or doing heavy I/O.
My storage domain is FC SAN based.
I solved the problem setting this conservative settings into
/etc/vdsm/vdsm.conf.d

cat 50_thin_block_extension_rules.conf
[irs]

# Together with volume_utilization_chunk_mb, set the minimal free
# space before a thin provisioned block volume is extended. Use lower
# values to extend earlier.
volume_utilization_percent = 25

# Size of extension chunk in megabytes, and together with
# volume_utilization_percent, set the free space limit. Use higher
# values to extend in bigger chunks.
volume_utilization_chunk_mb = 4096

Then I added a third host at a later time and I wrongly supposed that an
equal vdsm configuration would have been deployed with "New Host" from the
GUI. But it is not so.
Yesterday, with a VM running on this third hypervisor, I got the previously
experienced messages; some cycles of these:

VM dbatest6 has recovered from paused back to up.
VM dbatest6 has been paused due to no Storage space error.
VM dbatest6 has been paused.

in a 2 hours period.

Two questions:
- why not align the hypervisor configuration when adding a host, and in
particular the vdsm one? Is there any reason in general for having different
configs on hosts of the same cluster?
- the host that was running the VM was not the SPM.
Who is in charge of applying the settings about volume extension when a VM's
I/O load requires it because of a thin provisioned disk in use?
I presume not the SPM but the host that is in charge of the VM, based on what
I saw yesterday...

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Requirements to use ovirt-image-repository

2017-02-16 Thread Fred Rolland
The VDSM is doing the download

On Sun, Feb 12, 2017 at 2:09 AM, Gianluca Cecchi 
wrote:

> Hello,
> is it sufficient to open outside glance port (9292) of glance.ovirt.org
> or is there anything else to do?
> Making a test from an environment without outside restrictions it seems
> that is only the engine that connects, correct?
>
> Thanks,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] questions on OVN

2017-02-16 Thread Gianluca Cecchi
On Thu, Feb 16, 2017 at 9:54 AM, Marcin Mirecki  wrote:

> OVN is aleady using GENEVE, VXLAN or STT tunnels (the user can choose
> any), so the isolation is already assured.
> The scripts provided by ovirt configure a geneve tunnel.
> You are free so override this manually to vxlan or stt if you want, let me
> know if you need any howto info.
>

yes, please.
In the meantime I have used the vdsm-tool command that takes care of
creating the default geneve tunnel.
In my case
vdsm-tool ovn-config 10.4.168.80 10.4.168.81

but I would like to know how to manually use other types too.
I watched the deep dive demo about ovn but at the bottom of the related
slide there are three lines that should be equivalent to the above command,
something like

ovs-vsctl set open ?  external-ids:ovn-remote=tcp:10.4.168.80:6642
ovs-vsctl set open ?  external-ids:ovn-encap=type=geneve
ovs-vsctl set open ?  external-ids:ovn-encap-ip=10.4.168.81

The ? character seems to be a dot or a comma; I have not understood the
syntax.
(What are the accepted words for type= in the second line?)

Thanks again,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] questions on OVN

2017-02-16 Thread Marcin Mirecki
I missed the last line.

A vdsm configuration is created from a default vdsm configuration file
under /etc/vdsm/vdsm.conf. Then it reads conf files from drop-in dirs and
updates the configuration according to the files:

- /etc/vdsm/vdsm.conf - for user configuration. We install this file if
missing, and never touch this file during upgrade.
- /etc/vdsm/vdsm.conf.d/ - for admin drop-in conf files.
- /usr/lib/vdsm/vdsm.conf.d/ - for vendor drop-in configuration files.
- /var/run/vdsm/vdsm.conf.d/ - for admin temporary configuration.

Files with a .conf suffix can be placed into any of the vdsm.conf.d drop-in
directories.

The priority of the configuration files is determined by the number prefix
of each file.
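
As a concrete sketch, the 01_hidden.conf file mentioned in the quoted mail
below could look like this (an assumption on my side: the nic pattern is only
an illustration, and the hidden_nics option is taken to live in the [vars]
section as in the default vdsm.conf):

# /etc/vdsm/vdsm.conf.d/01_hidden.conf
[vars]
# hide the nics used for the dedicated backup/monitoring lan from vdsm
hidden_nics = ens1f*,usb*

A file with a higher number prefix, such as 99_override.conf, is presumably
read later and would win over 01_hidden.conf for any key they both set.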


On Wed, Feb 15, 2017 at 4:43 PM, Gianluca Cecchi 
wrote:

> On Wed, Feb 15, 2017 at 4:23 PM, Marcin Mirecki 
> wrote:
>
>> It should not have any negative interference on configuration issues,
>> but
>> it could have a negative impact on performace of your ovirtmgmt network,
>> in case your OVN traffic saturates the connection.
>>
>> >Cannot edit Interface. External network cannot be changed while the
>> virtual machine is running.
>> The error message is incorrect (it predates the introduction of nic
>> hotplugging)
>> It is enough to unplug/plug the nic before/after doing changes (the nic
>> must be in the unplugged state to change it).
>> As far as I know there is already a bug reported about the error message
>> being incorrect.
>>
>
> OK. I just verified that it works as you described, thanks
>
>
>> >In the sense that the tunnel basically already realizes the isolation
>> from the ovirtmgmt network itself (what usually we do making vlans) without
>> >interfering in case I have a great exchange of data for example over the
>> tunnel between 2 VMs placed on different hosts?
>> If the traffic going over the tunnel saturates that link, it will
>> interfere with with your ovirtmgm traffic. For testing this setup should be
>> ok, I would not recommend it for production.
>>
>
> OK, but at least the packets would be invisible to the ovirtmgmt network
> I mean, typically on the same adapter you put separate vlans to segregate
> traffic. This doesn't give you the double of the bandwidth but the
> isolation of the network so that it doesn't to go and inspect the packet to
> see what is the target and so on...
> Does this make sense in this way for the tunnel too or nothing at all?
>
>
>
>>
>>
>> >BTW: does it make sense to create another vlan on the bonding (that is
>> already setup with vlans), assigning an ip on the hosts and then use it?
>> The tunnel should take care of the isolation, so I don't think it would
>> add any value.
>>
>> >The same question could also apply to a general case where for example
>> my hosts have to integrate into a dedicated lan in the infrastructure (eg
>> for backup or monitoring or what else)... would I configure this lan from
>> oVirt or better from hosts themselves?
>> Any configuration changes made manually would cause ovirt to see them as
>> unsynchronized. To do it cleanly you would have to hide the nics used for
>> this by adding them to 'hidden_nic' in vdsm configuration (nics ignored by
>> ovirt). Let me know if you want more information on this.
>> If you need a network to be used by the host, a better solution would be
>> to just create a separate network from ovirt (a non-vm network if you don't
>> need a bridge on top of the nic).
>>
>
> Ah, I see. I think the relevant lines in vdsm.conf are:
>
> # Comma-separated list of fnmatch-patterns for host nics to be hidden
> # from vdsm.
> # hidden_nics = w*,usb*
>
> # Comma-separated list of fnmatch-patterns for host bonds to be hidden
> # from vdsm.
> # hidden_bonds =
>
> # Comma-separated list of fnmatch-patterns for host vlans to be hidden
> # from vdsm. vlan names must be in the format "dev.VLANID" (e.g.
> # eth0.100, em1.20, eth2.200). vlans with alternative names must be
> # hidden from vdsm (e.g. eth0.10-fcoe, em1.myvlan100, vlan200)
> # hidden_vlans =
>
> And in case I have to create some file of type 01_hidden.conf in
> /etc/vdsm/vdsm.conf.d/ to preserve across upgrades, correct?
>
> Gianluca
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] questions on OVN

2017-02-16 Thread Marcin Mirecki
OVN is already using GENEVE, VXLAN or STT tunnels (the user can choose any),
so the isolation is already assured.
The scripts provided by oVirt configure a geneve tunnel.
You are free to override this manually to vxlan or stt if you want; let me
know if you need any howto info.
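
A minimal sketch of what such a manual override might look like, reusing the
example IPs from this thread and assuming the usual ovs-vsctl external-ids
keys (ovn-remote, ovn-encap-type, ovn-encap-ip); the '.' refers to the single
record of the Open_vSwitch table:

ovs-vsctl set open . external-ids:ovn-remote=tcp:10.4.168.80:6642
ovs-vsctl set open . external-ids:ovn-encap-type=vxlan    # or geneve / stt
ovs-vsctl set open . external-ids:ovn-encap-ip=10.4.168.81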

On Wed, Feb 15, 2017 at 4:43 PM, Gianluca Cecchi 
wrote:

> On Wed, Feb 15, 2017 at 4:23 PM, Marcin Mirecki 
> wrote:
>
>> It should not have any negative interference on configuration issues,
>> but
>> it could have a negative impact on performace of your ovirtmgmt network,
>> in case your OVN traffic saturates the connection.
>>
>> >Cannot edit Interface. External network cannot be changed while the
>> virtual machine is running.
>> The error message is incorrect (it predates the introduction of nic
>> hotplugging)
>> It is enough to unplug/plug the nic before/after doing changes (the nic
>> must be in the unplugged state to change it).
>> As far as I know there is already a bug reported about the error message
>> being incorrect.
>>
>
> OK. I just verified that it works as you described, thanks
>
>
>> >In the sense that the tunnel basically already realizes the isolation
>> from the ovirtmgmt network itself (what usually we do making vlans) without
>> >interfering in case I have a great exchange of data for example over the
>> tunnel between 2 VMs placed on different hosts?
>> If the traffic going over the tunnel saturates that link, it will
>> interfere with with your ovirtmgm traffic. For testing this setup should be
>> ok, I would not recommend it for production.
>>
>
> OK, but at least the packets would be invisible to the ovirtmgmt network
> I mean, typically on the same adapter you put separate vlans to segregate
> traffic. This doesn't give you the double of the bandwidth but the
> isolation of the network so that it doesn't to go and inspect the packet to
> see what is the target and so on...
> Does this make sense in this way for the tunnel too or nothing at all?
>
>
>
>>
>>
>> >BTW: does it make sense to create another vlan on the bonding (that is
>> already setup with vlans), assigning an ip on the hosts and then use it?
>> The tunnel should take care of the isolation, so I don't think it would
>> add any value.
>>
>> >The same question could also apply to a general case where for example
>> my hosts have to integrate into a dedicated lan in the infrastructure (eg
>> for backup or monitoring or what else)... would I configure this lan from
>> oVirt or better from hosts themselves?
>> Any configuration changes made manually would cause ovirt to see them as
>> unsynchronized. To do it cleanly you would have to hide the nics used for
>> this by adding them to 'hidden_nic' in vdsm configuration (nics ignored by
>> ovirt). Let me know if you want more information on this.
>> If you need a network to be used by the host, a better solution would be
>> to just create a separate network from ovirt (a non-vm network if you don't
>> need a bridge on top of the nic).
>>
>
> Ah, I see. I think the relevant lines in vdsm.conf are:
>
> # Comma-separated list of fnmatch-patterns for host nics to be hidden
> # from vdsm.
> # hidden_nics = w*,usb*
>
> # Comma-separated list of fnmatch-patterns for host bonds to be hidden
> # from vdsm.
> # hidden_bonds =
>
> # Comma-separated list of fnmatch-patterns for host vlans to be hidden
> # from vdsm. vlan names must be in the format "dev.VLANID" (e.g.
> # eth0.100, em1.20, eth2.200). vlans with alternative names must be
> # hidden from vdsm (e.g. eth0.10-fcoe, em1.myvlan100, vlan200)
> # hidden_vlans =
>
> And in case I have to create some file of type 01_hidden.conf in
> /etc/vdsm/vdsm.conf.d/ to preserve across upgrades, correct?
>
> Gianluca
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disaster Recovery Testing

2017-02-16 Thread Fred Rolland
Gary,

See this wiki page, it explains how to import storage domains :
http://www.ovirt.org/develop/release-management/features/storage/importstoragedomain/

Regards,

Fred

On Wed, Feb 15, 2017 at 10:15 PM, Nir Soffer  wrote:

> On Wed, Feb 15, 2017 at 9:30 PM, Gary Lloyd  wrote:
> > Hi Nir thanks for the guidance
> >
> > We started to use ovirt a good few years ago now (version 3.2).
> >
> > At the time iscsi multipath wasn't supported, so we made our own
> > modifications to vdsm and this worked well with direct lun.
> > We decided to go with direct lun in case things didn't work out with
> OVirt
> > and in that case we would go back to using vanilla kvm / virt-manager.
> >
> > At the time I don't believe that you could import iscsi data domains that
> > had already been configured into a different installation, so we
> replicated
> > each raw VM volume using the SAN to another server room for DR purposes.
> > We use Dell Equallogic and there is a documented limitation of 1024 iscsi
> > connections and 256 volume replications. This isn't a problem at the
> moment,
> > but the more VMs that we have the more conscious I am about us reaching
> > those limits (we have around 300 VMs at the moment and we have a vdsm
> hook
> > that closes off iscsi connections if a vm is migrated /powered off).
> >
> > Moving to storage domains keeps the number of iscsi connections /
> replicated
> > volumes down and we won't need to make custom changes to vdsm when we
> > upgrade.
> > We can then use the SAN to replicate the storage domains to another data
> > centre and bring that online with a different install of OVirt (we will
> have
> > to use these arrays for at least the next 3 years).
> >
> > I didn't realise that each storage domain contained the configuration
> > details/metadata for the VMs.
> > This to me is an extra win as we can recover VMs faster than we can now
> if
> > we have to move them to a different data centre in the event of a
> disaster.
> >
> >
> > Are there any maximum size / vm limits or recommendations for each
> storage
> > domain ?
>
> The recommended limit in rhel 6 was 350 lvs per storage domain. We believe
> this
> limit is not correct for rhel 7 and recent ovirt versions. We are
> testing currently
> 1000 lvs per storage domain, but we did not finish testing yet, so I
> cannot say
> what is the recommended limit yet.
>
> Preallocated disk has one lv, if you have thin disk, you have one lv
> per snapshot.
>
> There is no practical limit to the size of a storage domain.
>
> > Does Ovirt support moving VM's between different storage domain type e.g.
> > ISCSI to gluster ?
>
> Sure, you can move vm disks from any storage domain to any storage domain
> (except ceph).
>
> >
> >
> > Many Thanks
> >
> > Gary Lloyd
> > 
> > I.T. Systems:Keele University
> > Finance & IT Directorate
> > Keele:Staffs:IC1 Building:ST5 5NB:UK
> > +44 1782 733063
> > 
> >
> > On 15 February 2017 at 18:56, Nir Soffer  wrote:
> >>
> >> On Wed, Feb 15, 2017 at 2:32 PM, Gary Lloyd 
> wrote:
> >> > Hi
> >> >
> >> > We currently use direct lun for our virtual machines and I would like
> to
> >> > move away from doing this and move onto storage domains.
> >> >
> >> > At the moment we are using an ISCSI SAN and we use on replicas created
> >> > on
> >> > the SAN for disaster recovery.
> >> >
> >> > As a test I thought I would replicate an existing storage domain's
> >> > volume
> >> > (via the SAN) and try to mount again as a separate storage domain
> (This
> >> > is
> >> > with ovirt 4.06 (cluster mode 3.6))
> >>
> >> Why do want to replicate a storage domain and connect to it?
> >>
> >> > I can log into the iscsi disk but then nothing gets listed under
> Storage
> >> > Name / Storage ID (VG Name)
> >> >
> >> >
> >> > Should this be possible or will it not work due the the uids being
> >> > identical
> >> > ?
> >>
> >> Connecting 2 storage domains with same uid will not work. You can use
> >> either
> >> the old or the new, but not both at the same time.
> >>
> >> Can you explain how replicating the storage domain volume is related to
> >> moving from direct luns to storage domains?
> >>
> >> If you want to move from direct lun to storage domain, you need to
> create
> >> a new disk on the storage domain, and copy the direct lun data to the
> new
> >> disk.
> >>
> >> We don't support this yet, but you can copy manually like this:
> >>
> >> 1. Find the lv of the new disk
> >>
> >> lvs -o name --select "{IU_} = lv_tags" vg-name
> >>
> >> 2. Activate the lv
> >>
> >> lvchange -ay vg-name/lv-name
> >>
> >> 3. Copy the data from the lun
> >>
> >> qemu-img convert -p -f raw -O raw -t none -T none
> >> /dev/mapper/xxxyyy /dev/vg-name/lv-name
> >>
> >> 4. Deactivate the disk
> >>
> >> lvchange -an vg-name/lv-name
> >>
> >> Nir
> >
> >
> ___
> Users mailing list
> Users@ovirt.org
> 

Re: [ovirt-users] VMs are not running/booting on one host

2017-02-16 Thread Peter Hudec
Hi,

On 16/02/2017 06:09, Karli Sjöberg wrote:
> 
> Den 15 feb. 2017 9:06 em skrev Peter Hudec :
>>
>> Hi,
>>
>> so the problem is a little bit different. When I wait for a long time, the
>> VM boots ;(
> 
> Does the VM have VirtIO serial console activated? I had the same problem
> and removing that fixed it.
No, they don't. I planned to activate it in 3.6.

Peter

> /K
> 
>>
>> But ... /see the log/. I'm invetigating the reason.
>> The difference between the dipovirt0{1,2} and the dipovirt03 is the
>> installation time. The first 2 was migrated last week, the last one
>> yesterday. There some newer packages, but nothing related to KVM.
>>
>> [  292.429622] INFO: rcu_sched self-detected stall on CPU { 0}  (t=72280
>> jiffies g=393 c=392 q=35)
>> [  292.430294] sending NMI to all CPUs:
>> [  292.430305] NMI backtrace for cpu 0
>> [  292.430309] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.16.0-4-amd64
>> #1 Debian 3.16.39-1
>> [  292.430311] Hardware name: oVirt oVirt Node, BIOS 0.5.1 01/01/2011
>> [  292.430313] task: 8181a460 ti: 8180 task.ti:
>> 8180
>> [  292.430315] RIP: 0010:[]  []
>> native_write_msr_safe+0x6/0x10
>> [  292.430323] RSP: 0018:88001fc03e08  EFLAGS: 0046
>> [  292.430325] RAX: 0400 RBX:  RCX:
>> 0830
>> [  292.430326] RDX:  RSI: 0400 RDI:
>> 0830
>> [  292.430327] RBP: 818e2a80 R08: 818e2a80 R09:
>> 01e8
>> [  292.430329] R10:  R11: 88001fc03b96 R12:
>> 
>> [  292.430330] R13: a0ea R14: 0002 R15:
>> 0008
>> [  292.430335] FS:  () GS:88001fc0()
>> knlGS:
>> [  292.430337] CS:  0010 DS:  ES:  CR0: 8005003b
>> [  292.430339] CR2: 01801000 CR3: 1c6de000 CR4:
>> 06f0
>> [  292.430343] Stack:
>> [  292.430344]  8104b30d 0002 0082
>> 88001fc0d6a0
>> [  292.430347]  81853800  818e2fe0
>> 0023
>> [  292.430349]  81853800 81047d63 88001fc0d6a0
>> 810c73fa
>> [  292.430352] Call Trace:
>> [  292.430354]  
>>
>> [  292.430360]  [] ? __x2apic_send_IPI_mask+0xad/0xe0
>> [  292.430365]  [] ?
>> arch_trigger_all_cpu_backtrace+0xc3/0x140
>> [  292.430369]  [] ? rcu_check_callbacks+0x42a/0x670
>> [  292.430373]  [] ? account_process_tick+0xde/0x180
>> [  292.430376]  [] ? tick_sched_handle.isra.16+0x60/0x60
>> [  292.430381]  [] ? update_process_times+0x40/0x70
>> [  292.430404]  [] ? tick_sched_handle.isra.16+0x20/0x60
>> [  292.430407]  [] ? tick_sched_timer+0x3c/0x60
>> [  292.430410]  [] ? __run_hrtimer+0x67/0x210
>> [  292.430412]  [] ? hrtimer_interrupt+0xe9/0x220
>> [  292.430416]  [] ? smp_apic_timer_interrupt+0x3b/0x50
>> [  292.430420]  [] ? apic_timer_interrupt+0x6d/0x80
>> [  292.430422]  
>>
>> [  292.430425]  [] ? sched_clock_local+0x15/0x80
>> [  292.430428]  [] ? mwait_idle+0xa0/0xa0
>> [  292.430431]  [] ? native_safe_halt+0x2/0x10
>> [  292.430434]  [] ? default_idle+0x19/0xd0
>> [  292.430437]  [] ? cpu_startup_entry+0x374/0x470
>> [  292.430440]  [] ? start_kernel+0x497/0x4a2
>> [  292.430442]  [] ? set_init_arg+0x4e/0x4e
>> [  292.430445]  [] ? early_idt_handler_array+0x120/0x120
>> [  292.430447]  [] ? x86_64_start_kernel+0x14d/0x15c
>> [  292.430448] Code: c2 48 89 d0 c3 89 f9 0f 32 31 c9 48 c1 e2 20 89 c0
>> 89 0e 48 09 c2 48 89 d0 c3 66 66 2e 0f 1f 84 00 00 00 00 00 89 f0 89 f9
>> 0f 30 <31> c0 c3 0f 1f 80 00 00 00 00 89 f9 0f 33 48 c1 e2 20 89 c0 48
>> [  292.430579] Clocksource tsc unstable (delta = -289118137838 ns)
>>
>>
>> On 15/02/2017 20:39, Peter Hudec wrote:
>> > Hi,
>> >
>> > I did already, but not find any suspicious, see attached logs and the
>> > spice screenshot.
>> >
>> > Actually the VM is booting, but is stuck in some bad state.
>> > When migrating, the migration is successful, but the vm is not accessible
>> > /even on network/
>> >
>> > Right now I found one VM, which is working well.
>> >
>> > In logs look for diplci01 at 2017-02-15 20:23:00,420, the VM ID is
>> > 7ddf349b-fb9a-44f4-9e88-73e84625a44e
>> >
>> >thanks
>> >Peter
>> >
>> > On 15/02/2017 19:40, Nir Soffer wrote:
>> >> On Wed, Feb 15, 2017 at 8:11 PM, Peter Hudec  wrote:
>> >>> Hi,
>> >>>
>> >>> I'm preparing to migrate from 3.5 to 3.6
>> >>> The first step is the CentOS6 -> CentOS7 for hosts.
>> >>>
>> >>> setup:
>> >>>   - 3x hosts /dipovitrt01, dipovirt02, dipovirt03/
>> >>>   - 1x hosted engine /on all 3 hosts/
>> >>>
>> >>> The upgrade of the first 2 hosts was OK, all VM are running OK.
>> >>> When I upgraded the 3rd host /dipovirt03/, some  VMs are not able
> to run
>> >>> on the or boot on this host. I tried  to full reinstall the host, but
>> >>> with the same result.
>> >>>
>> >>> In case of migration the VMm will stop running in a while.
>> >>> In ca