[ovirt-users] Installation problem - missing dependencies?

2017-07-05 Thread Paweł Zaskórski
Hi everyone!

I'm trying to install oVirt on a fresh CentOS 7 (up-to-date). According
to the documentation I did:

# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
# yum install ovirt-engine

Unfortunately, I'm getting an error:

--> Finished Dependency Resolution
Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
   Requires: ovirt-engine-cli >= 3.6.2.0
Error: Package:
ovirt-engine-setup-plugin-ovirt-engine-4.1.3.5-1.el7.centos.noarch
(ovirt-4.1)
   Requires: ovirt-engine-dwh-setup >= 4.0
Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
   Requires: ovirt-iso-uploader >= 4.0.0
Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
   Requires: ovirt-engine-wildfly >= 10.1.0
Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
   Requires: ovirt-engine-wildfly-overlay >= 10.0.0
Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
   Requires: ovirt-imageio-proxy
Error: Package:
ovirt-engine-setup-plugin-ovirt-engine-4.1.3.5-1.el7.centos.noarch
(ovirt-4.1)
   Requires: ovirt-imageio-proxy-setup
Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
   Requires: ovirt-engine-dashboard >= 1.0.0
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

My enabled repos:

# yum --noplugins repolist | awk 'FNR > 1 {print $1}' | head -n-1
base/7/x86_64
centos-opstools-release/7/x86_64
extras/7/x86_64
ovirt-4.1/7
ovirt-4.1-centos-gluster38/x86_64
ovirt-4.1-epel/x86_64
ovirt-4.1-patternfly1-noarch-epel/x86_64
ovirt-centos-ovirt41/7/x86_64
sac-gdeploy/x86_64
updates/7/x86_64
virtio-win-stable

Did I miss something? Which repository provides packages like
ovirt-engine-dashboard or ovirt-engine-dwh-setup?
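
A check like the following should show which enabled repository, if any, is
expected to provide the missing packages (plain yum "provides" lookups):

# yum provides ovirt-engine-dashboard
# yum provides ovirt-engine-dwh-setup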

Thank you in advance for your help!

Best regards,
Paweł
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)

2017-07-05 Thread Atin Mukherjee
On Thu, Jul 6, 2017 at 3:47 AM, Gianluca Cecchi 
wrote:

> On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee 
> wrote:
>
>> OK, so the log just hints to the following:
>>
>> [2017-07-05 15:04:07.178204] E [MSGID: 106123]
>> [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit
>> failed for operation Reset Brick on local node
>> [2017-07-05 15:04:07.178214] E [MSGID: 106123]
>> [glusterd-replace-brick.c:649:glusterd_mgmt_v3_initiate_replace_brick_cmd_phases]
>> 0-management: Commit Op Failed
>>
>> While going through the code, glusterd_op_reset_brick () failed, resulting
>> in these logs. Now I don't see any error logs generated from
>> glusterd_op_reset_brick (), which makes me think that we have failed from a
>> place where we log the failure in debug mode. Would you be able to restart
>> the glusterd service in debug log mode, rerun this test and share the log?
>>
>>
> Do you mean to run the reset-brick command for another volume or for the
> same? Can I run it against this "now broken" volume?
>
> Or perhaps can I modify /usr/lib/systemd/system/glusterd.service and
> change in [service] section
>
> from
> Environment="LOG_LEVEL=INFO"
>
> to
> Environment="LOG_LEVEL=DEBUG"
>
> and then
> systemctl daemon-reload
> systemctl restart glusterd
>

Yes, that's how you can run glusterd in debug log mode.

>
> I think it would be better to keep gluster in debug mode for as little time
> as possible, as there are other volumes active right now, and I want to keep
> the log files from filling up the file system.
> Best to put only some components in debug mode if possible, as in the
> example commands above.
>

You can switch back to info mode the moment this is hit one more time with
the debug log enabled. What I'd need here is the glusterd log (with debug
mode) to figure out the exact cause of the failure.
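
If you prefer not to edit the packaged unit file directly, a systemd drop-in
gives the same effect (a sketch, assuming glusterd.service keeps reading the
LOG_LEVEL environment variable shown above):

# mkdir -p /etc/systemd/system/glusterd.service.d
# cat > /etc/systemd/system/glusterd.service.d/loglevel.conf <<'EOF'
[Service]
Environment="LOG_LEVEL=DEBUG"
EOF
# systemctl daemon-reload
# systemctl restart glusterd

Removing the drop-in file and repeating daemon-reload/restart brings glusterd
back to INFO.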


>
> Let me know,
> thanks
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)

2017-07-05 Thread Gianluca Cecchi
On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee  wrote:

> OK, so the log just hints to the following:
>
> [2017-07-05 15:04:07.178204] E [MSGID: 106123] 
> [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit]
> 0-management: Commit failed for operation Reset Brick on local node
> [2017-07-05 15:04:07.178214] E [MSGID: 106123]
> [glusterd-replace-brick.c:649:glusterd_mgmt_v3_initiate_replace_brick_cmd_phases]
> 0-management: Commit Op Failed
>
> While going through the code, glusterd_op_reset_brick () failed, resulting
> in these logs. Now I don't see any error logs generated from
> glusterd_op_reset_brick (), which makes me think that we have failed from a
> place where we log the failure in debug mode. Would you be able to restart
> the glusterd service in debug log mode, rerun this test and share the log?
>
>
What's the best way to set glusterd in debug mode?
Can I set it on this volume, and work on it, even though it is now compromised?

I ask because I have tried this:

[root@ovirt01 ~]# gluster volume get export diagnostics.brick-log-level
Option                                  Value
------                                  -----
diagnostics.brick-log-level             INFO


[root@ovirt01 ~]# gluster volume set export diagnostics.brick-log-level DEBUG
volume set: failed: Error, Validation Failed
[root@ovirt01 ~]#

While on another volume that is in good state, I can run

[root@ovirt01 ~]# gluster volume set iso diagnostics.brick-log-level DEBUG
volume set: success
[root@ovirt01 ~]#

[root@ovirt01 ~]# gluster volume get iso diagnostics.brick-log-level
Option                                  Value
------                                  -----
diagnostics.brick-log-level             DEBUG

[root@ovirt01 ~]# gluster volume set iso diagnostics.brick-log-level INFO
volume set: success
[root@ovirt01 ~]#

[root@ovirt01 ~]# gluster volume get iso diagnostics.brick-log-level
Option                                  Value
------                                  -----
diagnostics.brick-log-level             INFO
[root@ovirt01 ~]#

Do you mean to run the reset-brick command for another volume or for the
same? Can I run it against this "now broken" volume?

Or perhaps can I modify /usr/lib/systemd/system/glusterd.service and change
in [service] section

from
Environment="LOG_LEVEL=INFO"

to
Environment="LOG_LEVEL=DEBUG"

and then
systemctl daemon-reload
systemctl restart glusterd

I think it would be better to keep gluster in debug mode for as little time
as possible, as there are other volumes active right now, and I want to keep
the log files from filling up the file system.
Best to put only some components in debug mode if possible, as in the
example commands above.
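
At the volume level that would look roughly like the commands below
(diagnostics.client-log-level is the client-side counterpart of the brick
option used above; note that these volume options affect brick and client
logs only, not the glusterd management log being asked for):

gluster volume set <volname> diagnostics.brick-log-level DEBUG
gluster volume set <volname> diagnostics.client-log-level DEBUG
# ... reproduce the failure and collect the logs ...
gluster volume set <volname> diagnostics.brick-log-level INFO
gluster volume set <volname> diagnostics.client-log-level INFO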

Let me know,
thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Networking and oVirt 4.1

2017-07-05 Thread Dominik Holler
On Mon, 3 Jul 2017 15:27:36 +0200
Gabriel Stein  wrote:

> Hi all,
> 
> I'm installing oVirt for the first time and I'm having some issues
> with the Networking.
> 
> Setup:
> 
> OS: CentOS 7 Minimal
> 3 Bare Metal Servers(1 for Engine, 2 for Nodes).
> Network:
> Nn Trunk Interfaces with VLANs and Bridges.
> e.g.:
> trunk.100, VLAN: 100, Bridge: vmbr100. IPV4 only.
> 
> I have already a VLAN for MGMNT, without DHCP Server(not needed for
> oVirt, but explaining my setup).
> 
> 
> Networking works as expected, I can ping/ssh each host without
> problems.
> 
> On the two nodes, I have an interface named ovirtmgmt and dhcp...
> 
> Question 1: What kind of configuration can I use here? Can I set
> static IPs from VLAN MGMNT and put everything from oVirt on that
> VLAN? 

Yes.

> oVirt doesn't have an internal DHCP server for nodes, does it?
> 

oVirt doesn't have an internal DHCP server.

> Question 2: Should I let oVirt set it up (the ovirtmgmt interface) for
> me?
> 

I would use the convenience oVirt provides to configure the hosts, unless
there is a good reason to take that burden on myself.


> Problems:
> 
> I configured the Engine with the IP 1.1.1.1, and I reach the web
> interface with https://FQDN( which is IP: 1.1.1.1)
> 
> But, when I add a Host to the Cluster, I have some errors:
> 
> "Host  does not comply with the cluster Default networks, the
> following networks are missing on host: 'ovirtmgmt'"

Try to use the "Setup Host Networks" functionality of oVirt.
If you select a host in the Administration Portal, there is
a tab labeled "Network Interfaces", which provides a button "Setup Host
Networks".
In the "Setup Host Networks" dialog is a graphical representation of
all logical networks, which can be assigned by drag'n drop to the
network interfaces of the host.

> Question 3: I saw that the Engine tries to call dhclient and set up an IP
> for it, but could I have static IPs? Where can I configure it?

In the "Setup Host Networks" dialog each assigned logical network has an
icon of a pencil, which can be used to open an other dialog to
configure IP addresses.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)

2017-07-05 Thread Atin Mukherjee
OK, so the log just hints to the following:

[2017-07-05 15:04:07.178204] E [MSGID: 106123]
[glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit failed
for operation Reset Brick on local node
[2017-07-05 15:04:07.178214] E [MSGID: 106123]
[glusterd-replace-brick.c:649:glusterd_mgmt_v3_initiate_replace_brick_cmd_phases]
0-management: Commit Op Failed

While going through the code, glusterd_op_reset_brick () failed, resulting
in these logs. Now I don't see any error logs generated from
glusterd_op_reset_brick (), which makes me think that we have failed from a
place where we log the failure in debug mode. Would you be able to restart
the glusterd service in debug log mode, rerun this test and share the log?


On Wed, Jul 5, 2017 at 9:12 PM, Gianluca Cecchi 
wrote:

>
>
> On Wed, Jul 5, 2017 at 5:22 PM, Atin Mukherjee 
> wrote:
>
>> And what does glusterd log indicate for these failures?
>>
>
>
> See here in gzip format
>
> https://drive.google.com/file/d/0BwoPbcrMv8mvYmlRLUgyV0pFN0k/view?usp=sharing
>
> It seems that on each host the peer files have been updated with a new
> entry "hostname2":
>
> [root@ovirt01 ~]# cat /var/lib/glusterd/peers/*
> uuid=b89311fe-257f-4e44-8e15-9bff6245d689
> state=3
> hostname1=ovirt02.localdomain.local
> hostname2=10.10.2.103
> uuid=ec81a04c-a19c-4d31-9d82-7543cefe79f3
> state=3
> hostname1=ovirt03.localdomain.local
> hostname2=10.10.2.104
> [root@ovirt01 ~]#
>
> [root@ovirt02 ~]# cat /var/lib/glusterd/peers/*
> uuid=e9717281-a356-42aa-a579-a4647a29a0bc
> state=3
> hostname1=ovirt01.localdomain.local
> hostname2=10.10.2.102
> uuid=ec81a04c-a19c-4d31-9d82-7543cefe79f3
> state=3
> hostname1=ovirt03.localdomain.local
> hostname2=10.10.2.104
> [root@ovirt02 ~]#
>
> [root@ovirt03 ~]# cat /var/lib/glusterd/peers/*
> uuid=b89311fe-257f-4e44-8e15-9bff6245d689
> state=3
> hostname1=ovirt02.localdomain.local
> hostname2=10.10.2.103
> uuid=e9717281-a356-42aa-a579-a4647a29a0bc
> state=3
> hostname1=ovirt01.localdomain.local
> hostname2=10.10.2.102
> [root@ovirt03 ~]#
>
>
> But not the gluster info on the second and third node that have lost the
> ovirt01/gl01 host brick information...
>
> Eg on ovirt02
>
>
> [root@ovirt02 peers]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 0 x (2 + 1) = 2
> Transport-type: tcp
> Bricks:
> Brick1: ovirt02.localdomain.local:/gluster/brick3/export
> Brick2: ovirt03.localdomain.local:/gluster/brick3/export
> Options Reconfigured:
> transport.address-family: inet
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
> [root@ovirt02 peers]#
>
> And on ovirt03
>
> [root@ovirt03 ~]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 0 x (2 + 1) = 2
> Transport-type: tcp
> Bricks:
> Brick1: ovirt02.localdomain.local:/gluster/brick3/export
> Brick2: ovirt03.localdomain.local:/gluster/brick3/export
> Options Reconfigured:
> transport.address-family: inet
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
> [root@ovirt03 ~]#
>
> While on ovirt01 it seems isolated...
>
> [root@ovirt01 ~]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 0 x (2 + 1) = 1
> Transport-type: tcp
> Bricks:
> Brick1: gl01.localdomain.local:/gluster/brick3/export
> Options Reconfigured:
> transport.address-family: inet
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> 

[ovirt-users] user permissions

2017-07-05 Thread Fabrice Bacchella
I'm trying to give a user the permissions to stop/start a specific server.

This user is given the generic UserRole for the System.

I tried to give him the roles :
UserVmManager
UserVmRunTimeManager
UserInstanceManager
InstanceCreator
UserRole

for that specific VM, I always get: query execution failed due to insufficient 
permissions.

As soon as I give him the SuperUser role, he can stop/start it.

What role should I give him for that VM? I don't want to give him the
privilege to destroy the VM or add disks, but he should be able to change
the OS settings too.
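
One way to see exactly what each role allows is to list its permits through
the REST API, e.g. (engine FQDN, password and role id here are placeholders):

curl -s -k -u admin@internal:PASSWORD https://ENGINE_FQDN/ovirt-engine/api/roles
curl -s -k -u admin@internal:PASSWORD https://ENGINE_FQDN/ovirt-engine/api/roles/ROLE_ID/permits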
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)

2017-07-05 Thread Gianluca Cecchi
On Wed, Jul 5, 2017 at 5:22 PM, Atin Mukherjee  wrote:

> And what does glusterd log indicate for these failures?
>


See here in gzip format

https://drive.google.com/file/d/0BwoPbcrMv8mvYmlRLUgyV0pFN0k/view?usp=sharing


It seems that on each host the peer files have been updated with a new
entry "hostname2":

[root@ovirt01 ~]# cat /var/lib/glusterd/peers/*
uuid=b89311fe-257f-4e44-8e15-9bff6245d689
state=3
hostname1=ovirt02.localdomain.local
hostname2=10.10.2.103
uuid=ec81a04c-a19c-4d31-9d82-7543cefe79f3
state=3
hostname1=ovirt03.localdomain.local
hostname2=10.10.2.104
[root@ovirt01 ~]#

[root@ovirt02 ~]# cat /var/lib/glusterd/peers/*
uuid=e9717281-a356-42aa-a579-a4647a29a0bc
state=3
hostname1=ovirt01.localdomain.local
hostname2=10.10.2.102
uuid=ec81a04c-a19c-4d31-9d82-7543cefe79f3
state=3
hostname1=ovirt03.localdomain.local
hostname2=10.10.2.104
[root@ovirt02 ~]#

[root@ovirt03 ~]# cat /var/lib/glusterd/peers/*
uuid=b89311fe-257f-4e44-8e15-9bff6245d689
state=3
hostname1=ovirt02.localdomain.local
hostname2=10.10.2.103
uuid=e9717281-a356-42aa-a579-a4647a29a0bc
state=3
hostname1=ovirt01.localdomain.local
hostname2=10.10.2.102
[root@ovirt03 ~]#


But the gluster volume info was not updated the same way on the second and
third node, which have lost the ovirt01/gl01 host brick information...

Eg on ovirt02


[root@ovirt02 peers]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 2
Transport-type: tcp
Bricks:
Brick1: ovirt02.localdomain.local:/gluster/brick3/export
Brick2: ovirt03.localdomain.local:/gluster/brick3/export
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
[root@ovirt02 peers]#

And on ovirt03

[root@ovirt03 ~]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 2
Transport-type: tcp
Bricks:
Brick1: ovirt02.localdomain.local:/gluster/brick3/export
Brick2: ovirt03.localdomain.local:/gluster/brick3/export
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
[root@ovirt03 ~]#

While on ovirt01 it seems isolated...

[root@ovirt01 ~]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 1
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
[root@ovirt01 ~]#
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)

2017-07-05 Thread Atin Mukherjee
And what does glusterd log indicate for these failures?

On Wed, Jul 5, 2017 at 8:43 PM, Gianluca Cecchi 
wrote:

>
>
> On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose  wrote:
>
>>
>>
>> On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>>
>>>
>>> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose  wrote:
>>>


> ...
>
> then the commands I need to run would be:
>
> gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export start
> gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export gl01.localdomain.local:/gluster/brick3/export commit force
>
> Correct?
>

 Yes, correct. gl01.localdomain.local should resolve correctly on all 3
 nodes.

>>>
>>>
>>> It fails at first step:
>>>
>>>  [root@ovirt01 ~]# gluster volume reset-brick export
>>> ovirt01.localdomain.local:/gluster/brick3/export start
>>> volume reset-brick: failed: Cannot execute command. The cluster is
>>> operating at version 30712. reset-brick command reset-brick start is
>>> unavailable in this version.
>>> [root@ovirt01 ~]#
>>>
>>> It seems somehow in relation with this upgrade not of the commercial
>>> solution Red Hat Gluster Storage
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Installation_Guide/chap-Upgrading_Red_Hat_Storage.html
>>>
>>> So it seems I have to run some command of the type:
>>>
>>> gluster volume set all cluster.op-version X
>>>
>>> with X > 30712
>>>
>>> It seems that latest version of commercial Red Hat Gluster Storage is
>>> 3.1 and its op-version is indeed 30712..
>>>
>>> So the question is which particular op-version I have to set and if the
>>> command can be set online without generating disruption
>>>
>>
>> It should have worked with the glusterfs 3.10 version from Centos repo.
>> Adding gluster-users for help on the op-version
>>
>>
>>>
>>> Thanks,
>>> Gianluca
>>>
>>
>>
>
> It seems op-version is not updated automatically by default, so that it
> can manage mixed versions while you update one by one...
>
> I followed what described here:
> https://gluster.readthedocs.io/en/latest/Upgrade-Guide/op_version/
>
>
> - Get current version:
>
> [root@ovirt01 ~]# gluster volume get all cluster.op-version
> Option                                  Value
> ------                                  -----
> cluster.op-version                      30712
> [root@ovirt01 ~]#
>
>
> - Get maximum version I can set for current setup:
>
> [root@ovirt01 ~]# gluster volume get all cluster.max-op-version
> Option                                  Value
> ------                                  -----
> cluster.max-op-version                  31000
> [root@ovirt01 ~]#
>
>
> - Get op version information for all the connected clients:
>
> [root@ovirt01 ~]# gluster volume status all clients | grep ":49" | awk
> '{print $4}' | sort | uniq -c
>  72 31000
> [root@ovirt01 ~]#
>
> --> ok
>
>
> - Update op-version
>
> [root@ovirt01 ~]# gluster volume set all cluster.op-version 31000
> volume set: success
> [root@ovirt01 ~]#
>
>
> - Verify:
> [root@ovirt01 ~]# gluster volume get all cluster.op-version
> Option                                  Value
> ------                                  -----
> cluster.op-version                      31000
> [root@ovirt01 ~]#
>
> --> ok
>
> [root@ovirt01 ~]# gluster volume reset-brick export
> ovirt01.localdomain.local:/gluster/brick3/export start
> volume reset-brick: success: reset-brick start operation successful
>
> [root@ovirt01 ~]# gluster volume reset-brick export
> ovirt01.localdomain.local:/gluster/brick3/export 
> gl01.localdomain.local:/gluster/brick3/export
> commit force
> volume reset-brick: failed: Commit failed on ovirt02.localdomain.local.
> Please check log file for details.
> Commit failed on ovirt03.localdomain.local. Please check log file for
> details.
> [root@ovirt01 ~]#
>
> [root@ovirt01 bricks]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gl01.localdomain.local:/gluster/brick3/export
> Brick2: ovirt02.localdomain.local:/gluster/brick3/export
> Brick3: ovirt03.localdomain.local:/gluster/brick3/export (arbiter)
> Options Reconfigured:
> transport.address-family: inet
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> 

Re: [ovirt-users] [Gluster-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)

2017-07-05 Thread Atin Mukherjee
On Wed, Jul 5, 2017 at 8:32 PM, Sahina Bose  wrote:

>
>
> On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi  > wrote:
>
>>
>>
>> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose  wrote:
>>
>>>
>>>
 ...

 then the commands I need to run would be:

 gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export start
 gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export gl01.localdomain.local:/gluster/brick3/export commit force

 Correct?

>>>
>>> Yes, correct. gl01.localdomain.local should resolve correctly on all 3
>>> nodes.
>>>
>>
>>
>> It fails at first step:
>>
>>  [root@ovirt01 ~]# gluster volume reset-brick export
>> ovirt01.localdomain.local:/gluster/brick3/export start
>> volume reset-brick: failed: Cannot execute command. The cluster is
>> operating at version 30712. reset-brick command reset-brick start is
>> unavailable in this version.
>> [root@ovirt01 ~]#
>>
>> It seems somehow in relation with this upgrade not of the commercial
>> solution Red Hat Gluster Storage
>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Installation_Guide/chap-Upgrading_Red_Hat_Storage.html
>>
>> So it seems I have to run some command of the type:
>>
>> gluster volume set all cluster.op-version X
>>
>> with X > 30712
>>
>> It seems that latest version of commercial Red Hat Gluster Storage is 3.1
>> and its op-version is indeed 30712..
>>
>> So the question is which particular op-version I have to set and if the
>> command can be set online without generating disruption
>>
>
> It should have worked with the glusterfs 3.10 version from Centos repo.
> Adding gluster-users for help on the op-version
>

This definitely means your cluster op-version is running < 3.9.0

if (conf->op_version < GD_OP_VERSION_3_9_0 &&
    strcmp (cli_op, "GF_REPLACE_OP_COMMIT_FORCE")) {
        snprintf (msg, sizeof (msg), "Cannot execute command. The "
                  "cluster is operating at version %d. reset-brick "
                  "command %s is unavailable in this version.",
                  conf->op_version, gd_rb_op_to_str (cli_op));
        ret = -1;
        goto out;
}

What version of the gluster bits are you running across the gluster
cluster? Please note that cluster.op-version is not exactly the same as the
rpm version, and with every upgrade it's recommended to bump up the op-version.
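
In practice that check and bump looks roughly like this (the value passed to
the last command should be the one reported by cluster.max-op-version):

# rpm -q glusterfs-server
# gluster volume get all cluster.op-version
# gluster volume get all cluster.max-op-version
# gluster volume set all cluster.op-version <max-op-version value>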


>
>>
>> Thanks,
>> Gianluca
>>
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)

2017-07-05 Thread Gianluca Cecchi
On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose  wrote:

>
>
> On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi  > wrote:
>
>>
>>
>> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose  wrote:
>>
>>>
>>>
 ...

 then the commands I need to run would be:

 gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export start
 gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export gl01.localdomain.local:/gluster/brick3/export commit force

 Correct?

>>>
>>> Yes, correct. gl01.localdomain.local should resolve correctly on all 3
>>> nodes.
>>>
>>
>>
>> It fails at first step:
>>
>>  [root@ovirt01 ~]# gluster volume reset-brick export
>> ovirt01.localdomain.local:/gluster/brick3/export start
>> volume reset-brick: failed: Cannot execute command. The cluster is
>> operating at version 30712. reset-brick command reset-brick start is
>> unavailable in this version.
>> [root@ovirt01 ~]#
>>
>> It seems somehow in relation with this upgrade not of the commercial
>> solution Red Hat Gluster Storage
>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Installation_Guide/chap-Upgrading_Red_Hat_Storage.html
>>
>> So it seems I have to run some command of the type:
>>
>> gluster volume set all cluster.op-version X
>>
>> with X > 30712
>>
>> It seems that latest version of commercial Red Hat Gluster Storage is 3.1
>> and its op-version is indeed 30712..
>>
>> So the question is which particular op-version I have to set and if the
>> command can be set online without generating disruption
>>
>
> It should have worked with the glusterfs 3.10 version from Centos repo.
> Adding gluster-users for help on the op-version
>
>
>>
>> Thanks,
>> Gianluca
>>
>
>

It seems the op-version is not updated automatically by default, so that the
cluster can handle mixed versions while you update nodes one by one...

I followed what described here:
https://gluster.readthedocs.io/en/latest/Upgrade-Guide/op_version/


- Get current version:

[root@ovirt01 ~]# gluster volume get all cluster.op-version
Option                                  Value
------                                  -----
cluster.op-version                      30712

[root@ovirt01 ~]#


- Get maximum version I can set for current setup:

[root@ovirt01 ~]# gluster volume get all cluster.max-op-version
Option                                  Value
------                                  -----
cluster.max-op-version                  31000

[root@ovirt01 ~]#


- Get op version information for all the connected clients:

[root@ovirt01 ~]# gluster volume status all clients | grep ":49" | awk
'{print $4}' | sort | uniq -c
 72 31000
[root@ovirt01 ~]#

--> ok


- Update op-version

[root@ovirt01 ~]# gluster volume set all cluster.op-version 31000
volume set: success
[root@ovirt01 ~]#


- Verify:
[root@ovirt01 ~]# gluster volume get all cluster.op-version
Option                                  Value
------                                  -----
cluster.op-version                      31000

[root@ovirt01 ~]#

--> ok

[root@ovirt01 ~]# gluster volume reset-brick export
ovirt01.localdomain.local:/gluster/brick3/export start
volume reset-brick: success: reset-brick start operation successful

[root@ovirt01 ~]# gluster volume reset-brick export
ovirt01.localdomain.local:/gluster/brick3/export
gl01.localdomain.local:/gluster/brick3/export commit force
volume reset-brick: failed: Commit failed on ovirt02.localdomain.local.
Please check log file for details.
Commit failed on ovirt03.localdomain.local. Please check log file for
details.
[root@ovirt01 ~]#

[root@ovirt01 bricks]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Brick2: ovirt02.localdomain.local:/gluster/brick3/export
Brick3: ovirt03.localdomain.local:/gluster/brick3/export (arbiter)
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
[root@ovirt01 bricks]# gluster volume reset-brick export
ovirt02.localdomain.local:/gluster/brick3/export start
volume reset-brick: success: reset-brick start operation successful
[root@ovirt01 bricks]# gluster volume 

[ovirt-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)

2017-07-05 Thread Sahina Bose
On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi 
wrote:

>
>
> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose  wrote:
>
>>
>>
>>> ...
>>>
>>> then the commands I need to run would be:
>>>
>>> gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export start
>>> gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export gl01.localdomain.local:/gluster/brick3/export commit force
>>>
>>> Correct?
>>>
>>
>> Yes, correct. gl01.localdomain.local should resolve correctly on all 3
>> nodes.
>>
>
>
> It fails at first step:
>
>  [root@ovirt01 ~]# gluster volume reset-brick export
> ovirt01.localdomain.local:/gluster/brick3/export start
> volume reset-brick: failed: Cannot execute command. The cluster is
> operating at version 30712. reset-brick command reset-brick start is
> unavailable in this version.
> [root@ovirt01 ~]#
>
> It seems somehow in relation with this upgrade not of the commercial
> solution Red Hat Gluster Storage
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Installation_Guide/chap-Upgrading_Red_Hat_Storage.html
>
> So it seems I have to run some command of the type:
>
> gluster volume set all cluster.op-version X
>
> with X > 30712
>
> It seems that latest version of commercial Red Hat Gluster Storage is 3.1
> and its op-version is indeed 30712..
>
> So the question is which particular op-version I have to set and if the
> command can be set online without generating disruption
>

It should have worked with the glusterfs 3.10 version from Centos repo.
Adding gluster-users for help on the op-version


>
> Thanks,
> Gianluca
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Libguestfs] virt-v2v import from KVM without storage-pool ?

2017-07-05 Thread Arik Hadas
On Wed, Jul 5, 2017 at 4:15 PM, Richard W.M. Jones 
wrote:

> On Wed, Jul 05, 2017 at 11:14:09AM +0200, Matthias Leopold wrote:
> > hi,
> >
> > i'm trying to import a VM in oVirt from a KVM host that doesn't use
> > storage pools. this fails with the following message in
> > /var/log/vdsm/vdsm.log:
> >
> > 2017-07-05 09:34:20,513+0200 ERROR (jsonrpc/5) [root] Error getting
> > disk size (v2v:1089)
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 1078, in
> > _get_disk_info
> > vol = conn.storageVolLookupByPath(disk['alias'])
> >   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4770,
> > in storageVolLookupByPath
> > if ret is None:raise libvirtError('virStorageVolLookupByPath()
> > failed', conn=self)
> > libvirtError: Storage volume not found: no storage vol with matching path
> >
> > the disks in the origin VM are defined as
> >
> > 
> >   
> >   
> >
> > 
> >   
> >   
> >
> > is this a virt-v2v or oVirt problem?
>
> Well the stack trace is in the oVirt code, so I guess it's an oVirt
> problem.  Adding ovirt-users mailing list.
>

Right, import of KVM VMs into oVirt doesn't involve virt-v2v.
The current process gets the virtual size (i.e., capacity) and the actual
size (i.e., allocation) of the volume using a libvirt API that seems to rely
on having the volume in a storage pool. So the process would need to be
extended in order to support volumes that are not part of a storage pool.
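
To illustrate the difference, on the KVM host the same numbers can be read
either through the storage-pool API or directly from the domain; a rough
sketch (pool, volume, guest name and disk target below are placeholders):

# pool-based lookup, which is what the current import path relies on:
virsh vol-info VOLNAME --pool POOLNAME

# domain-based lookup, which also works for disks outside any storage pool:
virsh domblkinfo GUESTNAME vda

Something along the lines of the second call is what the import flow would
need to fall back to for pool-less disks.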


>
> Rich.
>
> --
> Richard Jones, Virtualization Group, Red Hat
> http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> virt-df lists disk usage of guests without needing to install any
> software inside the virtual machine.  Supports Linux and Windows.
> http://people.redhat.com/~rjones/virt-df/
>
> ___
> Libguestfs mailing list
> libgues...@redhat.com
> https://www.redhat.com/mailman/listinfo/libguestfs
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Libguestfs] virt-v2v import from KVM without storage-pool ?

2017-07-05 Thread Richard W.M. Jones
On Wed, Jul 05, 2017 at 11:14:09AM +0200, Matthias Leopold wrote:
> hi,
> 
> i'm trying to import a VM in oVirt from a KVM host that doesn't use
> storage pools. this fails with the following message in
> /var/log/vdsm/vdsm.log:
> 
> 2017-07-05 09:34:20,513+0200 ERROR (jsonrpc/5) [root] Error getting
> disk size (v2v:1089)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 1078, in
> _get_disk_info
> vol = conn.storageVolLookupByPath(disk['alias'])
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4770,
> in storageVolLookupByPath
> if ret is None:raise libvirtError('virStorageVolLookupByPath()
> failed', conn=self)
> libvirtError: Storage volume not found: no storage vol with matching path
> 
> the disks in the origin VM are defined as
> 
> 
>   
>   
> 
> 
>   
>   
> 
> is this a virt-v2v or oVirt problem?

Well the stack trace is in the oVirt code, so I guess it's an oVirt
problem.  Adding ovirt-users mailing list.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine.  Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-05 Thread Vinícius Ferrão

On 5 Jul 2017, at 05:35, Yaniv Kaul > 
wrote:



On Wed, Jul 5, 2017 at 7:12 AM, Vinícius Ferrão 
> wrote:
Adding another question to what Matthias has said.

I also noted that oVirt (and RHV) documentation does not mention the supported 
block size on iSCSI domains.

RHV: 
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/html/administration_guide/chap-storage
oVirt: http://www.ovirt.org/documentation/admin-guide/chap-Storage/

I’m interested in 4K blocks over iSCSI, but this isn’t really widely supported.
The question is: does oVirt support this? Or should we stay with the default
512-byte block size?

It does not.
Y.

Discovered this the hard way: the system is able to detect it as a 4K LUN,
but ovirt-hosted-engine-setup gets confused:

[2] 36589cfc0071cbf2f2ef314a6212c   1600GiB FreeNAS 
iSCSI Disk
status: free, paths: 4 active

[3] 36589cfc0043589992bce09176478   200GiB  FreeNAS 
iSCSI Disk
status: free, paths: 4 active

[4] 36589cfc00992f7abf38c11295bb6   400GiB  FreeNAS 
iSCSI Disk
status: free, paths: 4 active

[2] is 4k
[3] is 512bytes
[4] is 1k (just to prove the point)

On the system it appears to be OK:

Disk /dev/mapper/36589cfc0071cbf2f2ef314a6212c: 214.7 GB, 214748364800 
bytes, 52428800 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 1048576 bytes


Disk /dev/mapper/36589cfc0043589992bce09176478: 214.7 GB, 214748364800 
bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 1048576 bytes
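
For anyone who wants to double-check what a LUN reports before handing it to
oVirt, something like this prints the logical and physical sector sizes
(device names taken from the multipath output above):

# blockdev --getss --getpbsz /dev/mapper/36589cfc0071cbf2f2ef314a6212c
# blockdev --getss --getpbsz /dev/mapper/36589cfc0043589992bce09176478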

But whatever, just reporting back to the list. It’s a good idea to have a note
about it in the documentation.

V.



Thanks,
V.

On 4 Jul 2017, at 09:10, Matthias Leopold 
> 
wrote:



On 2017-07-04 at 10:01, Simone Tiraboschi wrote:
On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão 
 > wrote:
   Thanks, Konstantin.
   Just to be clear enough: the first deployment would be made on
   classic eth interfaces and later after the deployment of Hosted
   Engine I can convert the "ovirtmgmt" network to a LACP Bond, right?
   Another question: what about iSCSI Multipath on Self Hosted Engine?
   I've looked through the net and only found this issue:
   https://bugzilla.redhat.com/show_bug.cgi?id=1193961
   
   Appears to be unsupported as of today, but there's a workaround in the
   comments. Is it safe to deploy this way? Should I use NFS instead?
It's probably not the most tested path but once you have an engine you should 
be able to create an iSCSI bond on your hosts from the engine.
Network configuration is persisted across host reboots, and so is the iSCSI
bond configuration.
A different story is instead having ovirt-ha-agent connecting multiple IQNs or 
multiple targets over your SAN. This is currently not supported for the 
hosted-engine storage domain.
See:
https://bugzilla.redhat.com/show_bug.cgi?id=1149579

Hi Simone,

i think my post to this list titled "iSCSI multipathing setup troubles" just 
recently is about the exact same problem, except i'm not talking about the 
hosted-engine storage domain. i would like to configure _any_ iSCSI storage 
domain the way you describe it in 
https://bugzilla.redhat.com/show_bug.cgi?id=1149579#c1. i would like to do so 
using the oVirt "iSCSI Multipathing" GUI after everything else is setup. i 
can't find a way to do this. is this now possible? i think the iSCSI 
Multipathing documentation could be improved by describing an example IP setup 
for this.

thanks a lot
matthias


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-guest-agent - Ubuntu 16.04

2017-07-05 Thread Tomáš Golembiovský
Hi Fernando,

this seems to be a bug in python-ethtool package (or the ethtool library). I
have opened a bug report for Ubuntu here:

https://bugs.launchpad.net/ubuntu/+source/python-ethtool/+bug/1702437
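
A quick way to check whether the crash is reproducible outside the agent is a
one-liner like this (assuming python-ethtool still exposes
get_active_devices()/get_interfaces_info(), which is roughly what the agent
calls); if it aborts with the same stack-smashing message, the agent itself is
not at fault:

$ python -c 'import ethtool; print(ethtool.get_interfaces_info(ethtool.get_active_devices()))'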

Hope this helps,

Tomas


On Tue, 4 Jul 2017 11:26:48 -0300
FERNANDO FREDIANI  wrote:

> I am still getting problems with ovirt-guest-agent on Ubuntu machines in 
> any scenario, new or upgraded installation.
> 
> One of the VMs has been upgraded to Ubuntu 17.04 (zesty) and the 
> upgraded version of ovirt-guest-agent also doesn't start, due to something 
> with python.
> 
> When trying to run it manually with: "/usr/bin/python 
> /usr/share/ovirt-guest-agent/ovirt-guest-agent.py" I get the following 
> error:
> root@hostname:~# /usr/bin/python 
> /usr/share/ovirt-guest-agent/ovirt-guest-agent.py
> *** stack smashing detected ***: /usr/bin/python terminated
> Aborted (core dumped)
> 
> I also tried to install the previous version (16.04) from evilissimo, but 
> it doesn't work either.
> 
> Fernando
> 
> 
> On 30/06/2017 06:16, Sandro Bonazzola wrote:
> > Adding Laszlo Boszormenyi (GCS), who is the maintainer according to
> > http://it.archive.ubuntu.com/ubuntu/ubuntu/ubuntu/pool/universe/o/ovirt-guest-agent/ovirt-guest-agent_1.0.13.dfsg-1.dsc
> >  
> >
> >
> > On Wed, Jun 28, 2017 at 5:37 PM, FERNANDO FREDIANI 
> > > wrote:
> >
> > Hello
> >
> > Is the maintainer of ovirt-guest-agent for Ubuntu on this mail list ?
> >
> > I have noticed that if you install the ovirt-guest-agent package from
> > the Ubuntu repositories it doesn't start. It throws an error about python
> > and never starts. Has anyone noticed the same? The OS in this case
> > is a clean minimal install of Ubuntu 16.04.
> >
> > Installing it from the following repository works fine -
> > 
> > http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04
> > 
> > 
> >
> > Fernando
> >
> > ___
> > Users mailing list
> > Users@ovirt.org 
> > http://lists.ovirt.org/mailman/listinfo/users
> > 
> >
> >
> >
> >
> > -- 
> >
> > SANDRO BONAZZOLA
> >
> > ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
> >
> > Red Hat EMEA 
> >
> > 
> > TRIED. TESTED. TRUSTED. 
> >  
> 


-- 
Tomáš Golembiovský 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.2 and rubygem-fluent-plugin packages missing

2017-07-05 Thread Shirly Radco
Hi,


These packages are required for oVirt Metrics.
In 4.1.3 we changed the setup script to also install these
packages if they are missing.
You are not required to install them manually.
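
If you want the hosts' "updates available" flag to clear before moving to
4.1.3, installing the two packages by hand should also work (names as reported
in the event message quoted below):

# yum install rubygem-fluent-plugin-collectd-nest rubygem-fluent-plugin-viaq_data_model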

Best regards,


--

SHIRLY RADCO

BI SOFTWARE ENGINEER,

Red Hat Israel 

sra...@redhat.com
 
 


On Tue, Jul 4, 2017 at 3:24 PM, Gianluca Cecchi 
wrote:

>
> Hello,
> an environment with engine in 4.1.2 and 3 hosts too (updated all from
> 4.0.5 3 days ago).
> In web admin gui the 3 hosts keep the symbol that there are updates
> available.
>
> In events message board I have
>
> Check for available updates on host ovirt01.localdomain.local was
> completed successfully with message 'found updates for packages
> rubygem-fluent-plugin-collectd-nest-0.1.3-1.el7,
> rubygem-fluent-plugin-viaq_data_model-0.0.3-1.el7'.
>
> But on host:
>
> [root@ovirt01 qemu]# yum update
> Loaded plugins: fastestmirror, langpacks
> Loading mirror speeds from cached hostfile
>  * base: it.centos.contactlab.it
>  * epel: mirror.spreitzer.ch
>  * extras: it.centos.contactlab.it
>  * ovirt-4.1: ftp.nluug.nl
>  * ovirt-4.1-epel: mirror.spreitzer.ch
>  * updates: it.centos.contactlab.it
> No packages marked for update
> [root@ovirt01 qemu]#
>
> And
> [root@ovirt01 qemu]# rpm -q rubygem-fluent-plugin-collectd-nest
> rubygem-fluent-plugin-viaq_data_model
> package rubygem-fluent-plugin-collectd-nest is not installed
> package rubygem-fluent-plugin-viaq_data_model is not installed
> [root@ovirt01 qemu]#
>
> Is it a bug in 4.1.2? Or should I manually install these two packages?
>
> Thanks,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-05 Thread Yaniv Kaul
On Wed, Jul 5, 2017 at 7:12 AM, Vinícius Ferrão  wrote:

> Adding another question to what Matthias has said.
>
> I also noted that oVirt (and RHV) documentation does not mention the
> supported block size on iSCSI domains.
>
> RHV: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/html/administration_guide/chap-storage
> oVirt: http://www.ovirt.org/documentation/admin-guide/chap-Storage/
>
> I’m interested on 4K blocks over iSCSI, but this isn’t really widely
> supported. The question is: oVirt supports this? Or should we stay with the
> default 512 bytes of block size?
>

It does not.
Y.


>
> Thanks,
> V.
>
> On 4 Jul 2017, at 09:10, Matthias Leopold  ac.at> wrote:
>
>
>
On 2017-07-04 at 10:01, Simone Tiraboschi wrote:
>
> On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão  mailto:fer...@if.ufrj.br >> wrote:
>Thanks, Konstantin.
>Just to be clear enough: the first deployment would be made on
>classic eth interfaces and later after the deployment of Hosted
>Engine I can convert the "ovirtmgmt" network to a LACP Bond, right?
>Another question: what about iSCSI Multipath on Self Hosted Engine?
>I've looked through the net and only found this issue:
>https://bugzilla.redhat.com/show_bug.cgi?id=1193961
>
>Appears to be unsupported as of today, but there's a workaround in the
>comments. Is it safe to deploy this way? Should I use NFS instead?
> It's probably not the most tested path but once you have an engine you
> should be able to create an iSCSI bond on your hosts from the engine.
> Network configuration is persisted across host reboots, and so is the iSCSI
> bond configuration.
> A different story is instead having ovirt-ha-agent connecting multiple
> IQNs or multiple targets over your SAN. This is currently not supported for
> the hosted-engine storage domain.
> See:
> https://bugzilla.redhat.com/show_bug.cgi?id=1149579
>
>
> Hi Simone,
>
> i think my post to this list titled "iSCSI multipathing setup troubles"
> just recently is about the exact same problem, except i'm not talking about
> the hosted-engine storage domain. i would like to configure _any_ iSCSI
> storage domain the way you describe it in
> https://bugzilla.redhat.com/show_bug.cgi?id=1149579#c1. i would like to do so
> using the oVirt "iSCSI
> Multipathing" GUI after everything else is setup. i can't find a way to do
> this. is this now possible? i think the iSCSI Multipathing documentation
> could be improved by describing an example IP setup for this.
>
> thanks a lot
> matthias
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to create template

2017-07-05 Thread Fred Rolland
Can you please open bugs for the two issues for future tracking?
These need further investigation.
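
When filing those, the logs usually needed for this kind of issue are roughly
the following (paths assumed from a default 4.1 installation):

/var/log/ovirt-engine/engine.log               (engine machine)
/var/log/vdsm/vdsm.log                         (host running the operation)
/var/log/ovirt-imageio-proxy/image-proxy.log   (engine machine)
/var/log/ovirt-imageio-daemon/daemon.log       (host)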

On Mon, Jul 3, 2017 at 2:17 AM, aduckers  wrote:

> Thanks for the assistance.  Versions are:
>
> vdsm.x86_64 4.19.15-1.el7.centos
> ovirt-engine.noarch4.1.2.2-1.el7.centos
>
> Logs are attached.  The GUI shows a creation date of 2017-06-23 11:30:13
> for the disk image that is stuck finalizing, so that might be a good place
> to start in the logs.
>
>
>
>
>
> > On Jul 2, 2017, at 3:52 AM, Fred Rolland  wrote:
> >
> > Hi,
> >
> > Thanks for the logs.
> >
> > What exact version are you using ? (VDSM,engine)
> >
> > Regarding the upload issue, can you please provide imageio-proxy and
> imageio-daemon logs ?
> > Issue in [1] looks with the same symptoms, but we need more info.
> >
> > Regarding the template issue, it looks like [2].
> > There were some issues when calculating the estimated size of the target
> > volume, which should already be fixed.
> > Please provide the exact versions, so I can check if it includes the
> fixes.
> >
> > Thanks,
> >
> > Fred
> >
> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1357269
> > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1448606
> >
> >
> > On Fri, Jun 30, 2017 at 5:11 AM, aduckers 
> wrote:
> >
> >
> > Attached.  I’ve also got an image upload to the ISO domain stuck in
> “Finalizing”, and can’t cancel or clear it.  Not sure if related or not,
> but it might show in the logs and if that can be cleared that’d be great
> too.
> >
> > Thanks
> >
> >
> >> On Jun 29, 2017, at 9:20 AM, Fred Rolland  wrote:
> >>
> >> Can you please attach engine and Vdsm logs ?
> >>
> >> On Thu, Jun 29, 2017 at 6:21 PM, aduckers 
> wrote:
> >> I’m running 4.1 with a hosted engine, using FC SAN storage.  I’ve
> uploaded a qcow2 image, then created a VM and attached that image.
> >> When trying to create a template from that VM, we get failures with:
> >>
> >> failed: low level image copy failed
> >> VDSM command DeleteImageGroupVDS failed: Image does not exist in domain
> >> failed to create template
> >>
> >> What should I be looking at to resolve this?  Anyone recognize this
> issue?
> >>
> >> Thanks
> >>
> >>
> >> ___
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >>
> >
> >
> >
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users