Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-04 Thread Sahina Bose
On Wed, Jul 5, 2017 at 3:10 AM, Gianluca Cecchi 
wrote:

> On Tue, Jul 4, 2017 at 2:57 PM, Gianluca Cecchi  > wrote:
>
>>
>>> No, it's not. One option is to update glusterfs packages to 3.10.
>>>
>>
>> Is it supported in oVirt to use CentOS Storage SIG packages
>> instead of the oVirt-provided ones? I imagine that's what you mean, correct?
>>
>> If this is a case, would I have to go with Gluster 3.9 (non LTS)
>> https://lists.centos.org/pipermail/centos-announce/2017-Janu
>> ary/022249.html
>>
>> Or Gluster 3.10 (LTS)
>> https://lists.centos.org/pipermail/centos-announce/2017-March/022337.html
>>
>> I suppose the latter...
>> Any problem then with updates of oVirt itself, eg going through 4.1.2 to
>> 4.1.3?
>>
>> Thanks
>> Gianluca
>>
>>>
>>> Is 3.9 version of Gluster packages provided when updating to upcoming
>>> 4.1.3, perhaps?
>>>
>>
> Never mind, I will verify. In the end this is a test system.
> I put the nodes in maintenance one by one and then installed glusterfs
> 3.10 with:
>
> yum install centos-release-gluster
> yum update
>
> All were able to self heal then and I see the 4 storage domains (engine,
> data, iso, export) up and running.
> See some notes at the end of the e-mail.
> Now I'm ready to test the change of gluster network traffic.
>
> In my case the current hostnames that are also matching the ovirtmgmt
> network are ovirt0N.localdomain.com with N=1,2,3
>
> On my vlan2, defined with the gluster network role in the cluster, I have
> defined (in each node's /etc/hosts file) the hostnames
>
> 10.10.2.102 gl01.localdomain.local gl01
> 10.10.2.103 gl02.localdomain.local gl02
> 10.10.2.104 gl03.localdomain.local gl03
>
> I need more details about the commands to run:
>
> Currently I have
>
> [root@ovirt03 ~]# gluster peer status
> Number of Peers: 2
>
> Hostname: ovirt01.localdomain.local
> Uuid: e9717281-a356-42aa-a579-a4647a29a0bc
> State: Peer in Cluster (Connected)
> Other names:
> 10.10.2.102
>
> Hostname: ovirt02.localdomain.local
> Uuid: b89311fe-257f-4e44-8e15-9bff6245d689
> State: Peer in Cluster (Connected)
> Other names:
> 10.10.2.103
>
> Suppose I start from the export volume, which has this info:
>
> [root@ovirt03 ~]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt01.localdomain.local:/gluster/brick3/export
> Brick2: ovirt02.localdomain.local:/gluster/brick3/export
> Brick3: ovirt03.localdomain.local:/gluster/brick3/export (arbiter)
> ...
>
> then the commands I need to run would be:
>
> gluster volume reset-brick export 
> ovirt01.localdomain.local:/gluster/brick3/export
> start
> gluster volume reset-brick export 
> ovirt01.localdomain.local:/gluster/brick3/export
> gl01.localdomain.local:/gluster/brick3/export commit force
>
> Correct?
>

Yes, correct. gl01.localdomain.local should resolve correctly on all 3
nodes.
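
For example, a quick check that could be run on each node, using only the names
from the /etc/hosts entries above:

getent hosts gl01.localdomain.local gl02.localdomain.local gl03.localdomain.local

Each name should resolve to its 10.10.2.x address on all three nodes.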


> Is it sufficient to run it on a single node? And then, on the same node, to
> run it also for the other bricks of the same volume:
>

Yes, it is sufficient to run it on a single node. You can run the reset-brick
for all bricks from the same node.


>
> gluster volume reset-brick export 
> ovirt02.localdomain.local:/gluster/brick3/export
> start
> gluster volume reset-brick export 
> ovirt02.localdomain.local:/gluster/brick3/export
> gl02.localdomain.local:/gluster/brick3/export commit force
>
> and
>
> gluster volume reset-brick export 
> ovirt03.localdomain.local:/gluster/brick3/export
> start
> gluster volume reset-brick export 
> ovirt03.localdomain.local:/gluster/brick3/export
> gl03.localdomain.local:/gluster/brick3/export commit force
>
> Correct? Do I have to wait for self-heal after each commit command, before
> proceeding with the other ones?
>

Ideally, gluster should recognize this as the same brick as before, and a heal
will not be needed. Please confirm that this is indeed the case before
proceeding.
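
For example, a check along these lines for the export volume should report zero
entries per brick before and after each commit:

gluster volume heal export info

If entries are listed, it is safer to wait for the heal to finish before
resetting the next brick.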


>
> Thanks in advance for input so that I can test it.
>
> Gianluca
>
>
> NOTE: during the update of gluster packages from 3.8 to 3.10 I got these:
>
> warning: /var/lib/glusterd/vols/engine/engine.ovirt01.localdomain.
> local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/
> engine.ovirt01.localdomain.local.gluster-brick1-engine.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/engine.ovirt02.localdomain.
> local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/
> engine.ovirt02.localdomain.local.gluster-brick1-engine.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/engine.ovirt03.localdomain.
> local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/
> engine.ovirt03.localdomain.local.gluster-brick1-engine.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol saved
> as /var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol.rpmsave
> warning: 

Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-04 Thread Vinícius Ferrão
Adding another question to what Matthias has said.

I also noted that oVirt (and RHV) documentation does not mention the supported 
block size on iSCSI domains.

RHV: 
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/html/administration_guide/chap-storage
oVirt: http://www.ovirt.org/documentation/admin-guide/chap-Storage/

I’m interested in 4K blocks over iSCSI, but this isn’t really widely supported. 
The question is: does oVirt support this? Or should we stay with the default 
512-byte block size?
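
For what it's worth, a quick way to see what a given iSCSI LUN reports to the
host (a generic sketch; the multipath device path is just a placeholder):

blockdev --getss --getpbsz /dev/mapper/<your-lun>

--getss prints the logical sector size and --getpbsz the physical block size;
512/512 is still the common case.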

Thanks,
V.

On 4 Jul 2017, at 09:10, Matthias Leopold 
> 
wrote:



Am 2017-07-04 um 10:01 schrieb Simone Tiraboschi:
On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão 
 > wrote:
   Thanks, Konstantin.
   Just to be clear enough: the first deployment would be made on
   classic eth interfaces and later after the deployment of Hosted
   Engine I can convert the "ovirtmgmt" network to a LACP Bond, right?
   Another question: what about iSCSI Multipath on Self Hosted Engine?
   I've looked through the net and only found this issue:
   https://bugzilla.redhat.com/show_bug.cgi?id=1193961
   
    Appears to be unsupported as of today, but there's a workaround in the
    comments. Is it safe to deploy this way? Should I use NFS instead?
It's probably not the most tested path but once you have an engine you should 
be able to create an iSCSI bond on your hosts from the engine.
Network configuration is persisted across host reboots, and so is the iSCSI bond 
configuration.
A different story is instead having ovirt-ha-agent connecting multiple IQNs or 
multiple targets over your SAN. This is currently not supported for the 
hosted-engine storage domain.
See:
https://bugzilla.redhat.com/show_bug.cgi?id=1149579

Hi Simone,

i think my post to this list titled "iSCSI multipathing setup troubles" just 
recently is about the exact same problem, except i'm not talking about the 
hosted-engine storage domain. i would like to configure _any_ iSCSI storage 
domain the way you describe it in 
https://bugzilla.redhat.com/show_bug.cgi?id=1149579#c1. i would like to do so 
using the oVirt "iSCSI Multipathing" GUI after everything else is setup. i 
can't find a way to do this. is this now possible? i think the iSCSI 
Multipathing documentation could be improved by describing an example IP setup 
for this.

thanks a lot
matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-04 Thread Gianluca Cecchi
On Tue, Jul 4, 2017 at 2:57 PM, Gianluca Cecchi 
wrote:

>
>> No, it's not. One option is to update glusterfs packages to 3.10.
>>
>
> Is it supported in oVirt to use CentOS Storage SIG packages
> instead of the oVirt-provided ones? I imagine that's what you mean, correct?
>
> If this is a case, would I have to go with Gluster 3.9 (non LTS)
> https://lists.centos.org/pipermail/centos-announce/2017-
> January/022249.html
>
> Or Gluster 3.10 (LTS)
> https://lists.centos.org/pipermail/centos-announce/2017-March/022337.html
>
> I suppose the latter...
> Any problem then with updates of oVirt itself, eg going through 4.1.2 to
> 4.1.3?
>
> Thanks
> Gianluca
>
>>
>> Is 3.9 version of Gluster packages provided when updating to upcoming
>> 4.1.3, perhaps?
>>
>
Never mind, I will verify. In the end this is a test system.
I put the nodes in maintenance one by one and then installed glusterfs 3.10
with:

yum install centos-release-gluster
yum update

All were able to self heal then and I see the 4 storage domains (engine,
data, iso, export) up and running.
See some notes at the end of the e-mail.
Now I'm ready to test the change of gluster network traffic.
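
For reference, a per-node check before taking the next node out of maintenance
could look something like this (a sketch; the volume names are the ones listed
above):

systemctl status glusterd
gluster volume heal engine info
gluster volume heal data info

The heal outputs should show "Number of entries: 0" for every brick before
moving on.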

In my case the current hostnames that are also matching the ovirtmgmt
network are ovirt0N.localdomain.com with N=1,2,3

On my vlan2, defined with the gluster network role in the cluster, I have defined
(in each node's /etc/hosts file) the hostnames

10.10.2.102 gl01.localdomain.local gl01
10.10.2.103 gl02.localdomain.local gl02
10.10.2.104 gl03.localdomain.local gl03

I need more details about the commands to run:

Currently I have

[root@ovirt03 ~]# gluster peer status
Number of Peers: 2

Hostname: ovirt01.localdomain.local
Uuid: e9717281-a356-42aa-a579-a4647a29a0bc
State: Peer in Cluster (Connected)
Other names:
10.10.2.102

Hostname: ovirt02.localdomain.local
Uuid: b89311fe-257f-4e44-8e15-9bff6245d689
State: Peer in Cluster (Connected)
Other names:
10.10.2.103

Suppose I start from the export volume, which has this info:

[root@ovirt03 ~]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt01.localdomain.local:/gluster/brick3/export
Brick2: ovirt02.localdomain.local:/gluster/brick3/export
Brick3: ovirt03.localdomain.local:/gluster/brick3/export (arbiter)
...

then the commands I need to run would be:

gluster volume reset-brick export
ovirt01.localdomain.local:/gluster/brick3/export start
gluster volume reset-brick export
ovirt01.localdomain.local:/gluster/brick3/export
gl01.localdomain.local:/gluster/brick3/export commit force

Correct?
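
(After the commit, re-running "gluster volume info export" should presumably
show the brick renamed, e.g.:

gluster volume info export | grep Brick

with Brick1 now listed as gl01.localdomain.local:/gluster/brick3/export.)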

Is it sufficient to run it on a single node? And then, on the same node, to
run it also for the other bricks of the same volume:

gluster volume reset-brick export
ovirt02.localdomain.local:/gluster/brick3/export start
gluster volume reset-brick export
ovirt02.localdomain.local:/gluster/brick3/export
gl02.localdomain.local:/gluster/brick3/export commit force

and

gluster volume reset-brick export
ovirt03.localdomain.local:/gluster/brick3/export start
gluster volume reset-brick export
ovirt03.localdomain.local:/gluster/brick3/export
gl03.localdomain.local:/gluster/brick3/export commit force

Correct? Do I have to wait for self-heal after each commit command, before
proceeding with the other ones?

Thanks in advance for input so that I can test it.

Gianluca


NOTE: during the update of gluster packages from 3.8 to 3.10 I got these:

warning:
/var/lib/glusterd/vols/engine/engine.ovirt01.localdomain.local.gluster-brick1-engine.vol
saved as
/var/lib/glusterd/vols/engine/engine.ovirt01.localdomain.local.gluster-brick1-engine.vol.rpmsave
warning:
/var/lib/glusterd/vols/engine/engine.ovirt02.localdomain.local.gluster-brick1-engine.vol
saved as
/var/lib/glusterd/vols/engine/engine.ovirt02.localdomain.local.gluster-brick1-engine.vol.rpmsave
warning:
/var/lib/glusterd/vols/engine/engine.ovirt03.localdomain.local.gluster-brick1-engine.vol
saved as
/var/lib/glusterd/vols/engine/engine.ovirt03.localdomain.local.gluster-brick1-engine.vol.rpmsave
warning: /var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol saved as
/var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol.rpmsave
warning: /var/lib/glusterd/vols/engine/engine.tcp-fuse.vol saved as
/var/lib/glusterd/vols/engine/engine.tcp-fuse.vol.rpmsave
warning:
/var/lib/glusterd/vols/data/data.ovirt01.localdomain.local.gluster-brick2-data.vol
saved as
/var/lib/glusterd/vols/data/data.ovirt01.localdomain.local.gluster-brick2-data.vol.rpmsave
warning:
/var/lib/glusterd/vols/data/data.ovirt02.localdomain.local.gluster-brick2-data.vol
saved as
/var/lib/glusterd/vols/data/data.ovirt02.localdomain.local.gluster-brick2-data.vol.rpmsave
warning:
/var/lib/glusterd/vols/data/data.ovirt03.localdomain.local.gluster-brick2-data.vol
saved as
/var/lib/glusterd/vols/data/data.ovirt03.localdomain.local.gluster-brick2-data.vol.rpmsave
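
As a side note on the .rpmsave files: a plain diff against the regenerated
volfiles shows what the package update actually changed, for example:

diff /var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol.rpmsave \
     /var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol

The same can be repeated for the per-brick .vol files listed above.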

Re: [ovirt-users] ovirt can't find user

2017-07-04 Thread Fabrice Bacchella

> Le 1 juil. 2017 à 09:09, Fabrice Bacchella  a 
> écrit :
> 
> 
>> Le 30 juin 2017 à 23:25, Ondra Machacek  a écrit :
>> 
>> On Thu, Jun 29, 2017 at 5:16 PM, Fabrice Bacchella
>>  wrote:
>>> 
 Le 29 juin 2017 à 14:42, Fabrice Bacchella  a 
 écrit :
 
 
> Le 29 juin 2017 à 13:41, Ondra Machacek  a écrit :
> 
> How do you login? Do you use webadmin or API/SDK, if using SDK, don't
> you use kerberos=True?
 
 Ok, got it.
 It's tested with the sdk, using kerberos. But Kerberos authentication is 
 done in Apache and I configure a profile for that, so I needed to add: 
 config.artifact.arg = X-Remote-User in my 
 /etc/ovirt-engine/extensions.d/MyProfile.authn.properties. But this is 
 missing from internal-authn.properties. So rexecutor@internal is checked 
 with my profile, and not found. But as the internal profile doesn't know about 
 X-Remote-User, it can't check the user and fails silently. That's why I'm 
 getting only one line. Perhaps the log line should have said the 
 extension name that was failing, not the generic "External 
 Authentication" that didn't catch my eye.
 
 I will check that as soon as I have a few minutes to spare and tell you.
>>> 
>>> I'm starting to understand. I need two authn modules, both using 
>>> org.ovirt.engineextensions.aaa.misc.http.AuthnExtension but with a 
>>> different authz.plugin. Is that possible? If I do that, in what order will the 
>>> different Authn modules be tried? Are they all tried until one succeeds in both 
>>> authn and authz?
>>> 
>> 
>> Yes, you can have multiple authn profiles and it tries to log in until
>> one succeeds:
>> 
>> https://github.com/oVirt/ovirt-engine/blob/de46aa78f3117cbe436ab10926ac0c23fcdd7cfc/backend/manager/modules/aaa/src/main/java/org/ovirt/engine/core/aaa/filters/NegotiationFilter.java#L125
>> 
>> The order isn't guaranteed, but I think it's not important, or is it for you?
> 
> I'm not sure. As I need two 
> org.ovirt.engineextensions.aaa.misc.http.AuthnExtension, the authentication 
> will always succeed. It's the authz that fails, as the user is either in one 
> backend or the other. So if ExtMap output = profile.getAuthn().invoke(..) 
> calls the authz part, I will be fine.
> 

I think it's not possible to have 2 
org.ovirt.engineextensions.aaa.misc.http.AuthnExtension with different authz.

The first LDAP-based authz backend is tried and returns:
2017-07-04 17:50:25,711+02 DEBUG 
[org.ovirt.engineextensions.aaa.ldap.AuthzExtension] (default task-2) [] 
Exception: java.lang.RuntimeException: Cannot resolve principal 'rexecutor'
at 
org.ovirt.engineextensions.aaa.ldap.AuthzExtension.doFetchPrincipalRecord(AuthzExtension.java:579)
 [ovirt-engine-extension-aaa-ldap.jar:]
at 
org.ovirt.engineextensions.aaa.ldap.AuthzExtension.invoke(AuthzExtension.java:478)
 [ovirt-engine-extension-aaa-ldap.jar:]
at 
org.ovirt.engine.core.extensions.mgr.ExtensionProxy.invoke(ExtensionProxy.java:49)
at 
org.ovirt.engine.core.extensions.mgr.ExtensionProxy.invoke(ExtensionProxy.java:73)
at 
org.ovirt.engine.core.extensions.mgr.ExtensionProxy.invoke(ExtensionProxy.java:109)
at 
org.ovirt.engine.core.sso.utils.NegotiateAuthUtils.doAuth(NegotiateAuthUtils.java:122)
at 
org.ovirt.engine.core.sso.utils.NegotiateAuthUtils.doAuth(NegotiateAuthUtils.java:68)
at 
org.ovirt.engine.core.sso.utils.NonInteractiveAuth$2.doAuth(NonInteractiveAuth.java:51)
at 
org.ovirt.engine.core.sso.servlets.OAuthTokenServlet.issueTokenUsingHttpHeaders(OAuthTokenServlet.java:183)
at 
org.ovirt.engine.core.sso.servlets.OAuthTokenServlet.service(OAuthTokenServlet.java:72)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at 
io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
at 
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
at 
org.ovirt.engine.core.branding.BrandingFilter.doFilter(BrandingFilter.java:73)
at 
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at 
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at 
org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:66)
at 
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at 
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at 
org.ovirt.engine.core.utils.servlet.HeaderFilter.doFilter(HeaderFilter.java:94)
at 
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at 
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at 

Re: [ovirt-users] ovirt-guest-agent - Ubuntu 16.04

2017-07-04 Thread FERNANDO FREDIANI
I am still getting problems with ovirt-guest-agent on Ubuntu machines in 
any scenario, new or upgraded installation.


One of the VMs has been upgraded to Ubuntu 17.04 (zesty) and the 
upgraded version of ovirt-guest-agent also doesn't start, due to something 
with python.


When trying to run it manually with: "/usr/bin/python 
/usr/share/ovirt-guest-agent/ovirt-guest-agent.py" I get the following 
error:
root@hostname:~# /usr/bin/python 
/usr/share/ovirt-guest-agent/ovirt-guest-agent.py

*** stack smashing detected ***: /usr/bin/python terminated
Aborted (core dumped)

I also tried to install the previous version (16.04) from evilissimo, but it 
doesn't work either.
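
A few generic checks that might help narrow this down (assuming the systemd
unit is named ovirt-guest-agent; nothing specific to the evilissimo build):

python --version
dpkg -l | grep ovirt-guest-agent
journalctl -u ovirt-guest-agent -b --no-pager

The journal output usually shows whether the failure is in the python
interpreter itself or in the agent code.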


Fernando


On 30/06/2017 06:16, Sandro Bonazzola wrote:
Adding Laszlo Boszormenyi (GCS), who is the maintainer according to 
http://it.archive.ubuntu.com/ubuntu/ubuntu/ubuntu/pool/universe/o/ovirt-guest-agent/ovirt-guest-agent_1.0.13.dfsg-1.dsc 



On Wed, Jun 28, 2017 at 5:37 PM, FERNANDO FREDIANI 
> wrote:


Hello

Is the maintainer of ovirt-guest-agent for Ubuntu on this mailing list?

I have noticed that if you install the ovirt-guest-agent package from
the Ubuntu repositories it doesn't start. It throws an error about python
and never starts. Has anyone noticed the same? The OS in this case
is a clean minimal install of Ubuntu 16.04.

Installing it from the following repository works fine -

http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04



Fernando

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA 

  
TRIED. TESTED. TRUSTED. 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-04 Thread Gianluca Cecchi
On Tue, Jul 4, 2017 at 12:45 PM, Sahina Bose  wrote:

>
>
> On Tue, Jul 4, 2017 at 3:18 PM, Gianluca Cecchi  > wrote:
>
>>
>>
>> On Mon, Jul 3, 2017 at 12:48 PM, Sahina Bose  wrote:
>>
>>>
>>>
>>> On Sun, Jul 2, 2017 at 12:21 AM, Doug Ingham  wrote:
>>>

 Only problem I would like to manage is that I have gluster network
> shared with ovirtmgmt one.
> Can I move it now with these updated packages?
>

 Are the gluster peers configured with the same hostnames/IPs as your
 hosts within oVirt?

 Once they're configured on the same network, separating them might be a
 bit difficult. Also, the last time I looked, oVirt still doesn't support
 managing HCI oVirt/Gluster nodes running each service on a different
 interface (see below).

 In theory, the procedure would involve stopping all of the Gluster
 processes on all of the peers, updating the peer addresses in the gluster
 configs on all of the nodes, then restarting glusterd & the bricks. I've
 not tested this however, and it's not a "supported" procedure. I've no idea
 how oVirt would deal with these changes either.

>>>
>>> Which version of glusterfs do you have running now? With glusterfs>=
>>> 3.9, there's a reset-brick command that can help you do this.
>>>
>>
>> At this moment on my oVirt nodes I have gluster packages as provided by
>> 4.1.2 repos, so:
>>
>> glusterfs-3.8.13-1.el7.x86_64
>> glusterfs-api-3.8.13-1.el7.x86_64
>> glusterfs-cli-3.8.13-1.el7.x86_64
>> glusterfs-client-xlators-3.8.13-1.el7.x86_64
>> glusterfs-fuse-3.8.13-1.el7.x86_64
>> glusterfs-geo-replication-3.8.13-1.el7.x86_64
>> glusterfs-libs-3.8.13-1.el7.x86_64
>> glusterfs-server-3.8.13-1.el7.x86_64
>> vdsm-gluster-4.19.15-1.el7.centos.noarch
>>
>> Is 3.9 version of Gluster packages provided when updating to upcoming
>> 4.1.3, perhaps?
>>
>
> No, it's not. One option is to update glusterfs packages to 3.10.
>

Is it supported in oVirt to use CentOS Storage SIG packages instead
of the oVirt-provided ones? I imagine that's what you mean, correct?

If this is a case, would I have to go with Gluster 3.9 (non LTS)
https://lists.centos.org/pipermail/centos-announce/2017-January/022249.html

Or Gluster 3.10 (LTS)
https://lists.centos.org/pipermail/centos-announce/2017-March/022337.html

I suppose the latter...
Any problem then with updates of oVirt itself, eg going through 4.1.2 to
4.1.3?

Thanks
Gianluca



>
> There's an RFE open to add this to GUI. For now, this has to be done from
> command line of one of the gluster nodes.
>

Ok. Depending on the answer about which version of Gluster to use, I will try it.
In the meantime I have completed steps 1 and 2 and I'm going to read the
referenced docs for the reset-brick command.

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 4.1.2 and rubygem-fluent-plugin packages missing

2017-07-04 Thread Gianluca Cecchi
Hello,
an environment with the engine in 4.1.2 and 3 hosts too (all updated from 4.0.5
3 days ago).
In the web admin GUI the 3 hosts keep showing the symbol that there are updates
available.

In the events message board I have

Check for available updates on host ovirt01.localdomain.local was completed
successfully with message 'found updates for packages
rubygem-fluent-plugin-collectd-nest-0.1.3-1.el7,
rubygem-fluent-plugin-viaq_data_model-0.0.3-1.el7'.

But on host:

[root@ovirt01 qemu]# yum update
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: it.centos.contactlab.it
 * epel: mirror.spreitzer.ch
 * extras: it.centos.contactlab.it
 * ovirt-4.1: ftp.nluug.nl
 * ovirt-4.1-epel: mirror.spreitzer.ch
 * updates: it.centos.contactlab.it
No packages marked for update
[root@ovirt01 qemu]#

And
[root@ovirt01 qemu]# rpm -q rubygem-fluent-plugin-collectd-nest
rubygem-fluent-plugin-viaq_data_model
package rubygem-fluent-plugin-collectd-nest is not installed
package rubygem-fluent-plugin-viaq_data_model is not installed
[root@ovirt01 qemu]#
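
A way to check whether any enabled repository actually provides these two
packages (plain yum, nothing oVirt-specific assumed):

yum provides rubygem-fluent-plugin-collectd-nest rubygem-fluent-plugin-viaq_data_model
yum --showduplicates list rubygem-fluent-plugin-collectd-nest

If no provider is found, the engine's "updates available" check and the host
repositories would appear to be out of sync.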

Is it a bug in 4.1.2? Or should I manually install these two packages?

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-04 Thread Matthias Leopold



Am 2017-07-04 um 10:01 schrieb Simone Tiraboschi:



On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão > wrote:


Thanks, Konstantin.

Just to be clear enough: the first deployment would be made on
classic eth interfaces and later after the deployment of Hosted
Engine I can convert the "ovirtmgmt" network to a LACP Bond, right?

Another question: what about iSCSI Multipath on Self Hosted Engine?
I've looked through the net and only found this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1193961


Appears to be unsupported as of today, but there's a workaround in the
comments. Is it safe to deploy this way? Should I use NFS instead?


It's probably not the most tested path but once you have an engine you 
should be able to create an iSCSI bond on your hosts from the engine.
Network configuration is persisted across host reboots, and so is the iSCSI 
bond configuration.


A different story is instead having ovirt-ha-agent connecting multiple 
IQNs or multiple targets over your SAN. This is currently not supported 
for the hosted-engine storage domain.

See:
https://bugzilla.redhat.com/show_bug.cgi?id=1149579



Hi Simone,

i think my post to this list titled "iSCSI multipathing setup troubles" 
just recently is about the exact same problem, except i'm not talking 
about the hosted-engine storage domain. i would like to configure _any_ 
iSCSI storage domain the way you describe it in 
https://bugzilla.redhat.com/show_bug.cgi?id=1149579#c1. i would like to 
do so using the oVirt "iSCSI Multipathing" GUI after everything else is 
setup. i can't find a way to do this. is this now possible? i think the 
iSCSI Multipathing documentation could be improved by describing an 
example IP setup for this.


thanks a lot
matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Networking and oVirt 4.1

2017-07-04 Thread Gabriel Stein
Hi all,

I'm installing oVirt for the first time and I'm having some issues with the
Networking.

Setup:

OS: CentOS 7 Mininal
3 Bare Metal Servers(1 for Engine, 2 for Nodes).
Network:
Nn Trunk Interfaces with VLANs and Bridges.
e.g.:
trunk.100, VLAN: 100, Bridge: vmbr100. IPV4 only.

I already have a VLAN for MGMNT, without a DHCP server (not needed for oVirt,
but explaining my setup).


Networking works as expected, I can ping/ssh each host without problems.

On the two nodes, I have an interface named ovirtmgmt and dhcp...

Question 1: What kind of configuration can I use here? Can I set static IPs
from the MGMNT VLAN and put everything from oVirt on that VLAN? oVirt doesn't
have an internal DHCP server for nodes, does it?

Question 2: Should I let oVirt set it up (the ovirtmgmt interface) for me?


Problems:

I configured the Engine with the IP 1.1.1.1, and I reach the web interface
with https://FQDN (which is IP 1.1.1.1).

But, when I add a Host to the Cluster, I have some errors:

"Host  does not comply with the cluster Default networks, the following
networks are missing on host: 'ovirtmgmt'"

Question 3: I saw that the Engine tries to call dhclient and set up an IP for
it, but could I have static IPs? Where can I configure that?

* vdsm.log

2017-07-03 15:15:01,772+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities succeeded in 0.11 seconds (__init__:533)
2017-07-03 15:15:01,808+0200 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.01 seconds (__init__:533)
2017-07-03 15:15:06,870+0200 INFO  (periodic/3) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:15:06,871+0200 INFO  (periodic/3) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
2017-07-03 15:15:10,059+0200 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:11,643+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:12,270+0200 INFO  (jsonrpc/3) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:15:12,271+0200 INFO  (jsonrpc/3) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
2017-07-03 15:15:12,277+0200 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:21,915+0200 INFO  (periodic/3) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:15:21,916+0200 INFO  (periodic/3) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
2017-07-03 15:15:25,078+0200 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:27,273+0200 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:28,330+0200 INFO  (jsonrpc/6) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:15:28,330+0200 INFO  (jsonrpc/6) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
2017-07-03 15:15:28,337+0200 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:36,960+0200 INFO  (periodic/3) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:15:36,960+0200 INFO  (periodic/3) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
2017-07-03 15:15:40,096+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:43,280+0200 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:44,408+0200 INFO  (jsonrpc/1) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:15:44,408+0200 INFO  (jsonrpc/1) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
2017-07-03 15:15:44,415+0200 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:533)
2017-07-03 15:15:52,006+0200 INFO  (periodic/3) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:15:52,006+0200 INFO  (periodic/3) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
2017-07-03 15:15:55,115+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:59,287+0200 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:16:00,465+0200 INFO  (jsonrpc/4) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:16:00,465+0200 INFO  (jsonrpc/4) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
* supervdsm.log

Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-04 Thread Sahina Bose
On Tue, Jul 4, 2017 at 3:18 PM, Gianluca Cecchi 
wrote:

>
>
> On Mon, Jul 3, 2017 at 12:48 PM, Sahina Bose  wrote:
>
>>
>>
>> On Sun, Jul 2, 2017 at 12:21 AM, Doug Ingham  wrote:
>>
>>>
>>> Only problem I would like to manage is that I have gluster network
 shared with ovirtmgmt one.
 Can I move it now with these updated packages?

>>>
>>> Are the gluster peers configured with the same hostnames/IPs as your
>>> hosts within oVirt?
>>>
>>> Once they're configured on the same network, separating them might be a
>>> bit difficult. Also, the last time I looked, oVirt still doesn't support
>>> managing HCI oVirt/Gluster nodes running each service on a different
>>> interface (see below).
>>>
>>> In theory, the procedure would involve stopping all of the Gluster
>>> processes on all of the peers, updating the peer addresses in the gluster
>>> configs on all of the nodes, then restarting glusterd & the bricks. I've
>>> not tested this however, and it's not a "supported" procedure. I've no idea
>>> how oVirt would deal with these changes either.
>>>
>>
>> Which version of glusterfs do you have running now? With glusterfs>= 3.9,
>> there's a reset-brick command that can help you do this.
>>
>
> At this moment on my oVirt nodes I have gluster packages as provided by
> 4.1.2 repos, so:
>
> glusterfs-3.8.13-1.el7.x86_64
> glusterfs-api-3.8.13-1.el7.x86_64
> glusterfs-cli-3.8.13-1.el7.x86_64
> glusterfs-client-xlators-3.8.13-1.el7.x86_64
> glusterfs-fuse-3.8.13-1.el7.x86_64
> glusterfs-geo-replication-3.8.13-1.el7.x86_64
> glusterfs-libs-3.8.13-1.el7.x86_64
> glusterfs-server-3.8.13-1.el7.x86_64
> vdsm-gluster-4.19.15-1.el7.centos.noarch
>
> Is 3.9 version of Gluster packages provided when updating to upcoming
> 4.1.3, perhaps?
>

No, it's not. One option is to update glusterfs packages to 3.10.


>
>
>
>>
>> It's possible to move to the new interface for gluster.
>>
>> The procedure would be:
>>
>> 1. Create a network with "gluster" network role.
>> 2. On each host, use "Setup networks" to associate the gluster network on
>> the desired interface. (This would ensure that the engine will peer probe
>> this interface's IP address as well, so that it can be used to identify the
>> host in the brick definition)
>> 3. For each of the volume's bricks - change the definition of the brick,
>> so that the new ip address is used. Ensure that there's no pending heal
>> (i.e. gluster volume heal info - should list 0 entries) before you start
>> this(see https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/ -
>> Introducing reset-brick command)
>>
>> gluster volume reset-brick VOLNAME :BRICKPATH start
>> gluster volume reset-brick VOLNAME :BRICKPATH 
>> :BRICKPATH commit force
>>
>>
>>
>
> So do you think I can use any other commands with oVirt 4.1.2 and gluster
> 3.8?
> Can I safely proceed with steps 1 and 2? When I set up a gluster network
> and associate it to one host, what exactly are the implications? Will I
> disrupt anything, or is it only seen as an option for having gluster traffic
> going on...?
>

Steps 1 & 2 will ensure that the IP address associated with the gluster
network is peer probed. It does not ensure that brick communication happens
using that interface. This happens only when the brick is identified using
that IP as well. (Step 3)
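
After step 2, one way to confirm the peer probe took effect is to check that the
gluster-network address shows up under "Other names" in:

gluster peer status

as in the output Gianluca posted, where 10.10.2.102 and 10.10.2.103 already
appear as additional names for ovirt01 and ovirt02.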


>
> BTW: How would I complete the webadmin GUI part of step 3? I don't see an
> "edit" brick functionality; I only see "Add" and "Replace Brick"...
>

There's an RFE open to add this to GUI. For now, this has to be done from
command line of one of the gluster nodes.


>
> Thanks,
> Gianluca
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Access VM Console on a Smart Phone with User Permission

2017-07-04 Thread Filip Krepinsky
On Tue, Jun 27, 2017 at 12:26 PM, Tomas Jelinek  wrote:

>
>
> On Tue, Jun 27, 2017 at 12:08 PM, Jerome R 
> wrote:
>
>> I tried this workaround: I logged the user account into moVirt with one
>> of the admin permission resources. It works, and I can access the assigned VM;
>> however, I'm able to see what an admin can see in the portal, though not able to
>> perform actions. That's still one of my concerns: the user should be able to
>> see just his/her assigned VMs.
>>
>
> yes, this is a consequence of using the admin API - you can see all the
> entities and do actions only on the ones you have explicit rights to.
>
> Unfortunately, until the https://github.com/oVirt/moVirt/issues/282 is
> done, there is nothing better I can offer you.
>
> We can try to give that item a priority, just need to get the current RC
> out of the door (hopefully soon).
>
>
>>
>> Thanks,
>> Jerome
>>
>> On Tue, Jun 27, 2017 at 3:20 PM, Tomas Jelinek 
>> wrote:
>>
>>>
>>>
>>> On Tue, Jun 27, 2017 at 10:13 AM, Jerome Roque 
>>> wrote:
>>>
 Hi Tomas,

 Thanks for your response. What do you mean by "removing the support for
 user permissions"? I'm using

>>>
>>> The oVirt permission model expects to be told explicitly by one header
>>> if the logged-in user has some admin permissions or not. In the past the
>>> API behaved differently in these two cases, so we needed to remove the option
>>> to use it without admin permissions.
>>>
>>> Now the situation is better so we may be able to bring this support
>>> back, but it will require some testing.
>>>
>>
I created an experimental apk here
https://github.com/suomiy/moVirt/raw/user-roles/moVirt/moVirt-release.apk
It has some limitations for user roles, so entity events and event search
query are disabled. Also the apk has not been tested thoroughly.


>
>>>
 the latest version of moVirt 1.7.1, and ovirt-engine 4.1.
 Is there anyone tried running user role in moVirt?

>>>
Please let us know if this works for you or if you encounter any bugs.

Regards
Filip


>>> you will get permission denied from the API if you try to log in with a
>>> user which has no admin permission. If you give him any admin permission on
>>> any resource, it might work as a workaround.
>>>
>>>

 Best Regards,
 Jerome

 On Tue, Jun 20, 2017 at 5:14 PM, Tomas Jelinek 
 wrote:

>
>
> On Fri, Jun 16, 2017 at 6:14 AM, Jerome Roque  > wrote:
>
>> Good day oVirt Users,
>>
>> I need a little help. I have a KVM host and use oVirt for the
>> management of VMs. What I want is that my clients will log on to their
>> accounts and access their virtual machines using their smart phones. I tried
>> to install moVirt and yes, I can connect to the console of my machine, but it
>> is only accessible for the admin console.
>>
>
> moVirt originally worked both with admin and user permissions. We had
> to remove the support for user permissions since the oVirt API did not
> provide all features moVirt needed for user permissions (search for
> example). But the API moved significantly since then (the search works 
> also
> for users now for one) so we can move it back. I have opened an issue 
> about
> it: https://github.com/oVirt/moVirt/issues/282 - we can try to do it
> in next version.
>
>
>> Tried to use web console, it downloaded console.vv but can't open it.
>> By any chance could make this thing possible?
>>
>
> If you want to use a web console for users, I would suggest to try the
> new ovirt-web-ui [1] - you have a link to it from oVirt landing page and
> since 4.1 it is installed by default with oVirt.
>
> The .vv file can not be opened using aSPICE AFAIK - adding Iordan as
> the author of aSPICE to comment on this.
>
> [1]: https://github.com/oVirt/ovirt-web-ui
>
>
>>
>> Thank you,
>> Jerome
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>

>>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-04 Thread Simone Tiraboschi
On Tue, Jul 4, 2017 at 10:30 AM, Vinícius Ferrão  wrote:

> Thanks for your input, Simone.
>
> On 4 Jul 2017, at 05:01, Simone Tiraboschi  wrote:
>
>
>
> On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão  wrote:
>
>> Thanks, Konstantin.
>>
>> Just to be clear enough: the first deployment would be made on classic
>> eth interfaces and later after the deployment of Hosted Engine I can
>> convert the "ovirtmgmt" network to a LACP Bond, right?
>>
>> Another question: what about iSCSI Multipath on Self Hosted Engine? I've
>> looked through the net and only found this issue: https://bugzilla.redhat
>> .com/show_bug.cgi?id=1193961
>>
>> Appears to be unsupported as of today, but there's a workaround in the
>> comments. Is it safe to deploy this way? Should I use NFS instead?
>>
>
> It's probably not the most tested path but once you have an engine you
> should be able to create an iSCSI bond on your hosts from the engine.
> Network configuration is persisted across host reboots, and so is the iSCSI
> bond configuration.
>
> A different story is instead having ovirt-ha-agent connecting multiple
> IQNs or multiple targets over your SAN. This is currently not supported for
> the hosted-engine storage domain.
> See:
> https://bugzilla.redhat.com/show_bug.cgi?id=1149579
>
>
> Just to be clear, when we talk about bonding on iSCSI, we’re talking about
> iSCSI MPIO and not LACP (or something similar) on iSCSI interfaces, right?
>

Yes, correct.


> In my case there are two different fabrics dedicated to iSCSI. They do not
> even transit on the same switch, so it’s plain ethernet (with fancy things,
> like mtu 9216 enabled and QoS).
>
> So I think we’re talking about the unsupported feature of multiple IQN’s
> right?
>

Multiple IQNs on the host side (multiple initiators) should work through
iSCSI bonding as managed by oVirt engine:
https://www.ovirt.org/documentation/admin-guide/chap-Storage/#configuring-iscsi-multipathing

Multiple IQNs on your SAN are instead currently not supported by
ovirt-ha-agent for the hosted-engine storage domain.



>
> Thanks once again,
> V.
>
>
>
>>
>> Thanks,
>> V.
>>
>> Sent from my iPhone
>>
>> On 3 Jul 2017, at 21:55, Konstantin Shalygin  wrote:
>>
>> Hello,
>>
>>
>> I’m deploying oVirt for the first time and a question has emerged: what
>> is the good practice to enable LACP on oVirt Node? Should I create 802.3ad
>> bond during the oVirt Node installation in Anaconda, or it should be done
>> in a posterior moment inside the Hosted Engine manager?
>>
>>
>> In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP
>> bond for management and servers VLAN’s, while eth1 and eth2 are Multipath
>> iSCSI disks (MPIO).
>>
>>
>> Thanks,
>>
>> V.
>>
>>
>> Do all your network settings in ovirt-engine webadmin.
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-04 Thread Gianluca Cecchi
On Mon, Jul 3, 2017 at 12:48 PM, Sahina Bose  wrote:

>
>
> On Sun, Jul 2, 2017 at 12:21 AM, Doug Ingham  wrote:
>
>>
>> Only problem I would like to manage is that I have gluster network shared
>>> with ovirtmgmt one.
>>> Can I move it now with these updated packages?
>>>
>>
>> Are the gluster peers configured with the same hostnames/IPs as your
>> hosts within oVirt?
>>
>> Once they're configured on the same network, separating them might be a
>> bit difficult. Also, the last time I looked, oVirt still doesn't support
>> managing HCI oVirt/Gluster nodes running each service on a different
>> interface (see below).
>>
>> In theory, the procedure would involve stopping all of the Gluster
>> processes on all of the peers, updating the peer addresses in the gluster
>> configs on all of the nodes, then restarting glusterd & the bricks. I've
>> not tested this however, and it's not a "supported" procedure. I've no idea
>> how oVirt would deal with these changes either.
>>
>
> Which version of glusterfs do you have running now? With glusterfs>= 3.9,
> there's a reset-brick command that can help you do this.
>

At this moment on my oVirt nodes I have gluster packages as provided by
4.1.2 repos, so:

glusterfs-3.8.13-1.el7.x86_64
glusterfs-api-3.8.13-1.el7.x86_64
glusterfs-cli-3.8.13-1.el7.x86_64
glusterfs-client-xlators-3.8.13-1.el7.x86_64
glusterfs-fuse-3.8.13-1.el7.x86_64
glusterfs-geo-replication-3.8.13-1.el7.x86_64
glusterfs-libs-3.8.13-1.el7.x86_64
glusterfs-server-3.8.13-1.el7.x86_64
vdsm-gluster-4.19.15-1.el7.centos.noarch

Is 3.9 version of Gluster packages provided when updating to upcoming
4.1.3, perhaps?



>
> It's possible to move to the new interface for gluster.
>
> The procedure would be:
>
> 1. Create a network with "gluster" network role.
> 2. On each host, use "Setup networks" to associate the gluster network on
> the desired interface. (This would ensure that the engine will peer probe
> this interface's IP address as well, so that it can be used to identify the
> host in the brick definition)
> 3. For each of the volume's bricks - change the definition of the brick,
> so that the new ip address is used. Ensure that there's no pending heal
> (i.e. gluster volume heal info - should list 0 entries) before you start
> this(see https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/ -
> Introducing reset-brick command)
>
> gluster volume reset-brick VOLNAME :BRICKPATH start
> gluster volume reset-brick VOLNAME :BRICKPATH 
> :BRICKPATH commit force
>
>
>

So do you think I can use any other commands with oVirt 4.1.2 and gluster
3.8?
Can I safely proceed with steps 1 and 2? When I set up a gluster network and
associate it to one host, what exactly are the implications? Will I
disrupt anything, or is it only seen as an option for having gluster traffic
going on...?

BTW: How would I complete the webadmin GUI part of step 3? I don't see an
"edit" brick functionality; I only see "Add" and "Replace Brick"...

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] moVirt 2.0 released!

2017-07-04 Thread Karli Sjöberg
On 30 June 2017 at 6:56 PM, Filip Krepinsky wrote:

Hello everyone,

moVirt 2.0 has just been released and should arrive to your devices soon! You can also get the apk from our GitHub [1].

The main feature of this release is managing multiple oVirt installations + many other cool features [2].

Thanks everybody who helped with testing and especially big thanks to Shira who gave us lots of valuable input.

Have a nice day
Filip

[1]: https://github.com/oVirt/moVirt/releases/tag/v2.0
[2]: https://github.com/oVirt/moVirt/wiki/Changelog
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Hi!

Been using the new version since release and haven't found anything wrong with it in my light day-to-day use. Good job!

/K
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-04 Thread Vinícius Ferrão
Thanks for your input, Simone.

On 4 Jul 2017, at 05:01, Simone Tiraboschi 
> wrote:



On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão 
> wrote:
Thanks, Konstantin.

Just to be clear enough: the first deployment would be made on classic eth 
interfaces and later after the deployment of Hosted Engine I can convert the 
"ovirtmgmt" network to a LACP Bond, right?

Another question: what about iSCSI Multipath on Self Hosted Engine? I've looked 
through the net and only found this issue: 
https://bugzilla.redhat.com/show_bug.cgi?id=1193961

Appears to be unsupported as of today, but there's a workaround in the comments. 
Is it safe to deploy this way? Should I use NFS instead?

It's probably not the most tested path but once you have an engine you should 
be able to create an iSCSI bond on your hosts from the engine.
Network configuration is persisted across host reboots, and so is the iSCSI bond 
configuration.

A different story is instead having ovirt-ha-agent connecting multiple IQNs or 
multiple targets over your SAN. This is currently not supported for the 
hosted-engine storage domain.
See:
https://bugzilla.redhat.com/show_bug.cgi?id=1149579

Just to be clear, when we talk about bonding on iSCSI, we’re talking about 
iSCSI MPIO and not LACP (or something similar) on iSCSI interfaces, right? In 
my case there are two different fabrics dedicated to iSCSI. They do not even 
transit on the same switch, so it’s plain ethernet (with fancy things, like mtu 
9216 enabled and QoS).

So I think we’re talking about the unsupported feature of multiple IQN’s right?

Thanks once again,
V.



Thanks,
V.

Sent from my iPhone

On 3 Jul 2017, at 21:55, Konstantin Shalygin 
> wrote:

Hello,

I’m deploying oVirt for the first time and a question has emerged: what is the 
good practice to enable LACP on oVirt Node? Should I create 802.3ad bond during 
the oVirt Node installation in Anaconda, or it should be done in a posterior 
moment inside the Hosted Engine manager?

In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP bond 
for management and servers VLAN’s, while eth1 and eth2 are Multipath iSCSI 
disks (MPIO).

Thanks,
V.

Do all your network settings in ovirt-engine webadmin.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-04 Thread Vinícius Ferrão

> On 4 Jul 2017, at 02:49, Yedidyah Bar David  wrote:
> 
> On Tue, Jul 4, 2017 at 3:51 AM, Vinícius Ferrão  wrote:
>> Hello,
>> 
>> I’m deploying oVirt for the first time and a question has emerged: what is 
>> the good practice to enable LACP on oVirt Node? Should I create 802.3ad bond 
>> during the oVirt Node installation in Anaconda, or it should be done in a 
>> posterior moment inside the Hosted Engine manager?
> 
> Adding Simone for this, but I think that hosted-engine --deploy does
> not know to create bonds, so you better do this beforehand. It does
> know to recognize bonds and their slaves, and so will not let you
> configure the ovirtmgmt bridge on one of the slave nics of a bond.
> 
>> 
>> In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP 
>> bond for management and servers VLAN’s, while eth1 and eth2 are Multipath 
>> iSCSI disks (MPIO).
> 
> You probably meant eth2 and eth3 for the latter bond.
> 
> This is probably more a matter of personal preference than a result of
> a scientific examination, but I personally prefer, assuming that eth0
> and eth1 are managed by a single PCI component and eth2 and eth3 by
> another one, and especially if they are different, and using different
> kernel modules, to have one bond on eth0 and eth2, and another on eth1
> and eth3. This way, presumably, if some (hardware or software) bug hits
> one of the PCI devices, both bonds hopefully keep working.

It’s one single card with 4 interfaces. It came onboard on the IBM System x3550 
M4 servers that I’m using; they are Intel based, but I don’t remember exactly 
which chipset. Anyway, this is interesting. I usually avoid mixing different 
controllers in a bond to keep things stable, but you’ve got a point.

> Just my two cents,
> 
>> 
>> Thanks,
>> V.
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> 
> -- 
> Didi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-04 Thread Simone Tiraboschi
On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão  wrote:

> Thanks, Konstantin.
>
> Just to be clear enough: the first deployment would be made on classic eth
> interfaces and later after the deployment of Hosted Engine I can convert
> the "ovirtmgmt" network to a LACP Bond, right?
>
> Another question: what about iSCSI Multipath on Self Hosted Engine? I've
> looked through the net and only found this issue: https://bugzilla.
> redhat.com/show_bug.cgi?id=1193961
>
> Appears to be unsupported as of today, but there's a workaround in the
> comments. Is it safe to deploy this way? Should I use NFS instead?
>

It's probably not the most tested path but once you have an engine you
should be able to create an iSCSI bond on your hosts from the engine.
Network configuration is persisted across host reboots, and so is the iSCSI
bond configuration.

A different story is instead having ovirt-ha-agent connecting multiple IQNs
or multiple targets over your SAN. This is currently not supported for the
hosted-engine storage domain.
See:
https://bugzilla.redhat.com/show_bug.cgi?id=1149579


>
> Thanks,
> V.
>
> Sent from my iPhone
>
> On 3 Jul 2017, at 21:55, Konstantin Shalygin  wrote:
>
> Hello,
>
>
> I’m deploying oVirt for the first time and a question has emerged: what is
> the good practice to enable LACP on oVirt Node? Should I create 802.3ad
> bond during the oVirt Node installation in Anaconda, or it should be done
> in a posterior moment inside the Hosted Engine manager?
>
>
> In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP
> bond for management and servers VLAN’s, while eth1 and eth2 are Multipath
> iSCSI disks (MPIO).
>
>
> Thanks,
>
> V.
>
>
> Do all your network settings in ovirt-engine webadmin.
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Ovirt 4.0.6] Suggestion required for Network Throughput options

2017-07-04 Thread TranceWorldLogic .
Hi All,

I tried using hwrng as the source for the VM and I got the error below.

qemu/backends/rng-random.c:44:entropy_available: assertion failed: (len != -1)

Please help to understand it more.

Thanks,

~Rohit
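
A hedged way to see which entropy sources the guest kernel exposes and which
one is currently selected (standard sysfs paths, nothing oVirt-specific):

cat /sys/class/misc/hw_random/rng_available
cat /sys/class/misc/hw_random/rng_current

On the host side, the assertion in rng-random.c appears to be raised when qemu
fails to read from the configured backend file, so checking that the chosen
source (e.g. /dev/hwrng) actually returns data is probably worthwhile too.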


On Mon, Jul 3, 2017 at 8:22 PM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> Hi Yaniv,
>
> I tried looking in the rng direction and I found the stats below.
> I am not familiar with rng devices, but it looks to me like /dev/urandom gives me
> a better option.
> But I am unaware of how I can use the urandom device in oVirt.
>
> RANDOM DEVICE ==>
> cat /dev/random | rngtest -c 1000
> rngtest 5
> Copyright (c) 2004 by Henrique de Moraes Holschuh
> This is free software; see the source for copying conditions.  There is NO
> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>
> rngtest: starting FIPS tests...
> rngtest: bits received from input: 2032
> rngtest: FIPS 140-2 successes: 1000
> rngtest: FIPS 140-2 failures: 0
> rngtest: FIPS 140-2(2001-10-10) Monobit: 0
> rngtest: FIPS 140-2(2001-10-10) Poker: 0
> rngtest: FIPS 140-2(2001-10-10) Runs: 0
> rngtest: FIPS 140-2(2001-10-10) Long run: 0
> rngtest: FIPS 140-2(2001-10-10) Continuous run: 0
> rngtest: input channel speed: (min=3.594; avg=4.813; max=5.968)Mibits/s
> rngtest: FIPS tests speed: (min=93.958; avg=129.073; max=157.632)Mibits/s
> rngtest: Program run time: 4111375 microseconds
>
> URANDOM DEVICE ==>
> cat /dev/urandom | rngtest -c 1000
> rngtest 5
> Copyright (c) 2004 by Henrique de Moraes Holschuh
> This is free software; see the source for copying conditions.  There is NO
> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>
> rngtest: starting FIPS tests...
> rngtest: bits received from input: 2032
> rngtest: FIPS 140-2 successes: 1000
> rngtest: FIPS 140-2 failures: 0
> rngtest: FIPS 140-2(2001-10-10) Monobit: 0
> rngtest: FIPS 140-2(2001-10-10) Poker: 0
> rngtest: FIPS 140-2(2001-10-10) Runs: 0
> rngtest: FIPS 140-2(2001-10-10) Long run: 0
> rngtest: FIPS 140-2(2001-10-10) Continuous run: 0
> rngtest: input channel speed: (min=1.035; avg=17.311; max=18.626)Gibits/s
> rngtest: FIPS tests speed: (min=119.959; avg=161.107; max=164.427)Mibits/s
> rngtest: Program run time: 120154 microseconds
>
> Thanks,
> ~Rohit
>
>
>
> On Fri, Jun 30, 2017 at 7:42 PM, TranceWorldLogic . <
> tranceworldlo...@gmail.com> wrote:
>
>> Your understanding is correct: the issue is only due to the encryption/decryption
>> process, but I have no idea why it doesn't work.
>> I found that in CentOS 7 we do not have rng-tools installed.
>>
>> Is it required to install it for the random generator?
>>
>> I have changed nothing; I just increased the number of queues on the vnet. It
>> definitely increases throughput and creates multiple softIRQs in the VM.
>> But for normal traffic all these things are not required; it gives 10G
>> throughput.
>>
>> Thanks,
>> ~Rohit
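
(A hedged way to confirm from inside the guest that the extra queues set in the
vNIC profile are actually visible, assuming eth0 is the guest interface:

ethtool -l eth0

The "Combined" count under the current settings should match the number of
queues configured in the profile.)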
>>
>> On Fri, Jun 30, 2017 at 7:08 PM, Yaniv Kaul  wrote:
>>
>>>
>>>
>>> On Fri, Jun 30, 2017 at 4:14 PM, TranceWorldLogic . <
>>> tranceworldlo...@gmail.com> wrote:
>>>
 Hi Yaniv,

 I have enabled the random generator in the cluster and also in the VM.
 But I still do not see any improvement in throughput.

 lsmod | grep -i virtio
 virtio_rng 13019  0

>>>
>>> Are you sure it's being used? What is the qemu command line (do you see
>>> the device in the guest?)
>>>
>>>
 virtio_balloon 13834  0
 virtio_console 28115  2
 virtio_blk 18156  4
 virtio_scsi18361  0
 virtio_net 28024  0
 virtio_pci 22913  0
 virtio_ring21524  7 virtio_blk,virtio_net,virtio_p
 ci,virtio_rng,virtio_balloon,virtio_console,virtio_scsi
 virtio 15008  7 virtio_blk,virtio_net,virtio_p
 ci,virtio_rng,virtio_balloon,virtio_console,virtio_scsi

 Would please check do I missing some virtio module ?

 One more finding: if I set the queue property in the vNIC profile, then I get
 good throughput.

>>>
>>> Interesting - I had assumed the bottleneck would be the
>>> encryption/decryption process, not the network. What do you set exactly?
>>> Does it matter in non-encrypted traffic as well? Are the packets (and the
>>> whole communication) large or small (i.e, would jumbo frames help) ?
>>>  Y.
>>>
>>>
 Thanks,
 ~Rohit


 On Fri, Jun 30, 2017 at 12:11 AM, Yaniv Kaul  wrote:

>
>
> On Thu, Jun 29, 2017 at 4:02 PM, TranceWorldLogic . <
> tranceworldlo...@gmail.com> wrote:
>
>> Got it, just I need to do modprobe to add virtio-rng driver.
>> I will try with this option.
>>
>
> Make sure it is checked on the cluster.
> Y.
>
>>
>> Thanks for your help,
>> ~Rohit
>>
>> On Thu, Jun 29, 2017 at 6:20 PM, TranceWorldLogic . <
>> tranceworldlo...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I am using host as Centos 7.3 and guest also centos 7.3

[ovirt-users] Ris: Best practices for LACP bonds on oVirt

2017-07-04 Thread NUNIN Roberto




 Original message 
Subject: Re: [ovirt-users] Best practices for LACP bonds on oVirt
From: Yedidyah Bar David
To: Vinícius Ferrão, Simone Tiraboschi
CC: users


On Tue, Jul 4, 2017 at 3:51 AM, Vinícius Ferrão wrote:

> In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP bond 
> for management and servers VLAN’s, while eth1 and eth2 are Multipath iSCSI 
> disks (MPIO).

>You probably meant eth2 and eth3 for the latter bond.

Sorry to jump inside this interesting discussion, but LACP on iSCSI ?

If I remember correctly, there was a warning about using LACP on iSCSI comms; 
active/standby was the preferred option, due to performance?
Has something changed about this topic?

Thanks,
Roberto
___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



--
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise private information. If you have received it in 
error, please notify the sender immediately, deleting the original and all 
copies and destroying any hard copies. Any other use is strictly prohibited and 
may be unlawful.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users