[ovirt-users] "remove" option greyed out on Permissions tab

2017-07-06 Thread Ian Neilsen
Hey guys

I've just noticed that I am unable to choose the "remove" option on any
"Permissions" tab in oVirt self-hosted 4.1.

Does anyone have a suggestion on how to fix this? I'm logged in as admin,
the original admin created during installation.

Thanks in Advance

-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cloning VM on NFS Leads to Locked Disks

2017-07-06 Thread Charles Tassell

Hi Fred,

  Unfortunately we reinstalled the cluster a while ago, so I don't know 
the exact version we were running at the time.  It would have been the 
latest version in the 4.1 production repos though if that helps.



On 2017-07-02 11:32 AM, Fred Rolland wrote:

Seems you hit [1].
Can you tell me what is the exact version of the engine ?

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1395793

On Thu, May 11, 2017 at 10:29 PM, Charles Tassell wrote:


Sure, it's pretty big so I've put it online for download at
http://krissy.islandadmin.ca/public/engine.log.txt



On 2017-05-11 04:08 PM, Fred Rolland wrote:

The locking is on the engine side and restarting the vdsm will
not affect it .
Can you send the whole engine log ?
Which exact version are you using ?


On Thu, May 11, 2017 at 9:30 PM, Charles Tassell wrote:

Just as an update, I created a new VM and had the same issue:
the disk remains locked.  So I then added a new data store
(this one iSCSI not NFS) and create a new VM on that.  Again,
the disk remains locked.  So the problem seems to be that any
action that tries to modify a disk image on my cluster locks
the disk and keeps it locked permanently.
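For engine-side disk locks like this, oVirt ships a cleanup helper on the engine host. A hedged sketch follows (the tool path is the usual one for 4.x; verify the exact flags with `-h` first, back up the engine database before clearing anything, and note `<disk-id>` is a placeholder):

```shell
# List the disks the engine currently considers locked:
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk -q

# Clear the lock on one specific disk (ID is a placeholder):
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk <disk-id>
```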

I tried restarting the vdsm daemon, but that didn't make a
difference.  I'm seeing this in my sanlock.log file though,
which worries me:


2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error
-202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error
-202 delta_length 10 last_success 1738508

Here's the last 20 lines:
2017-05-07 07:51:41-0300 1738538 [13580]: s3 renewal error
-202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [13575]: 20423d5e aio
timeout RD 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 ioto
10 to_count 67
2017-05-07 07:51:41-0300 1738538 [13575]: s2 delta_renew read
timeout 10 sec offset 0

/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/ids
2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error
-202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: hosted-e aio
timeout RD 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 ioto
10 to_count 65
2017-05-07 07:51:41-0300 1738538 [11513]: s1 delta_renew read
timeout 10 sec offset 0

/var/run/vdsm/storage/5dccd07d-a923-4d4b-9cb1-3b51ebfdca4d/5a9c284f-0faa-4a25-94ce-c9efdae07484/ab2443f1-95ed-475d-886c-c1653257cf04
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error
-202 delta_length 10 last_success 1738508
2017-05-07 07:51:47-0300 1738544 [13575]: 20423d5e aio
collect RD 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000
result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [13580]: 5dccd07d aio
collect RD 0x7fe13c0008c0:0x7fe13c0008d0:0x7fe14e5fa000
result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [11513]: hosted-e aio
collect RD 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000
result 1048576:0 match reap
2017-05-07 07:53:57-0300 1738674 [13590]: s2:r15 resource

20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576
for 7,21,78395
2017-05-07 07:59:49-0300 1739027 [13575]: s2 delta_renew long
write time 10 sec
2017-05-09 08:38:34-0300 1914151 [13590]: s2:r16 resource

20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576
for 7,21,78395
2017-05-11 15:07:45-0300 2110302 [13590]: s2:r17 resource

20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576
for 7,21,112346
2017-05-11 15:17:24-0300 2110881 [13590]: s4 lockspace

b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:17:45-0300 2110902 [1395]: s4 host 1 1 2110881
44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:17:45-0300 2110902 [1400]: s4:r18 resource

b010093e-1924-46e1-bd57-2cf2b2445087:SDM:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/leases:1048576
for 7,21,112346
2017-05-11 15:17:52-0300 2110909 [1399]: s5 lockspace

b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:18:13-0300 2110930 

[ovirt-users] How to reset volume info in storage domain metadata LV

2017-07-06 Thread Kuko Armas

Recently I got a disk snapshot marked as illegal in a VM due to a failed live
merge.

I fixed the snapshot manually with qemu-img rebase; now qemu-img check shows
everything OK.
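The manual repair described here can be sketched roughly as follows, using the two volume IDs shown later in this message. This is a sketch, not the author's exact commands: run it from the image directory on the SPM host with the VM down, and back up the volumes first.

```shell
# Point the broken leaf volume back at its parent ("unsafe" rebase -u
# only rewrites the backing-file pointer, it does not copy data):
qemu-img rebase -u -b 07729f62-2cd2-45d0-993a-ec8d7fbb6ee0 \
    ee733323-308a-40c8-95d4-b33ca6307362

# Verify the chain is consistent again:
qemu-img check ee733323-308a-40c8-95d4-b33ca6307362
```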

I also set the imagestatus to OK in the engine database, as shown below

engine=# select 
image_guid,parentid,imagestatus,vm_snapshot_id,volume_type,volume_format,active 
from images where image_group_id='c35ccdc5-a256-4460-8dd2-9e639b8430e9';
              image_guid              |               parentid               | imagestatus |            vm_snapshot_id            | volume_type | volume_format | active
--------------------------------------+--------------------------------------+-------------+--------------------------------------+-------------+---------------+--------
 07729f62-2cd2-45d0-993a-ec8d7fbb6ee0 | ----                                 |           1 | 7ae58a5b-eacf-4f6b-a06f-bda1d85170b5 |           1 |             5 | f
 ee733323-308a-40c8-95d4-b33ca6307362 | 07729f62-2cd2-45d0-993a-ec8d7fbb6ee0 |           1 | 2ae07078-cc48-4e20-a249-52d3f44082b4 |           2 |             4 | t
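The imagestatus change mentioned above can be done with an UPDATE along these lines. This is a sketch: in the engine schema 1 = OK and 4 = ILLEGAL, but verify the values against your engine version and back up the database before touching it.

```shell
# Run on the engine host; the DB is typically owned by the postgres user.
sudo -u postgres psql engine -c "UPDATE images SET imagestatus = 1 \
    WHERE image_guid = 'ee733323-308a-40c8-95d4-b33ca6307362';"
```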

But when I try to boot the VM it still fails with this error in the SPM

Thread-292345::ERROR::2017-07-06 
14:20:33,155::task::866::Storage.TaskManager.Task::(_setError) 
Task=`2d262a05-5a03-4ff9-8347-eaa22b6e143c`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3227, in prepareImage
raise se.prepareIllegalVolumeError(volUUID)
prepareIllegalVolumeError: Cannot prepare illegal volume: 
('ee733323-308a-40c8-95d4-b33ca6307362',)
Thread-292345::DEBUG::2017-07-06 
14:20:33,156::task::885::Storage.TaskManager.Task::(_run) 
Task=`2d262a05-5a03-4ff9-8347-eaa22b6e143c`::Task._run: 
2d262a05-5a03-4ff9-8347-eaa22b6e143c (u'146dca57-05fd-4b3f-af8d-b253a7ca6f6e', 
u'0001-0001-0001-0001-014d', 
u'c35ccdc5-a256-4460-8dd2-9e639b8430e9', 
u'ee733323-308a-40c8-95d4-b33ca6307362') {} failed - stopping task

Just before that error I see this operation in the log:
/usr/bin/taskset --cpu-list 0-23 /usr/bin/dd iflag=direct skip=11 bs=512 
if=/dev/146dca57-05fd-4b3f-af8d-b253a7ca6f6e/metadata count=1

If I run it manually I get this:
[root@blade6 ~]# /usr/bin/taskset --cpu-list 0-23 /usr/bin/dd iflag=direct 
skip=11 bs=512 if=/dev/146dca57-05fd-4b3f-af8d-b253a7ca6f6e/metadata count=1
DOMAIN=146dca57-05fd-4b3f-af8d-b253a7ca6f6e
VOLTYPE=LEAF
CTIME=1497399380
FORMAT=COW
IMAGE=c35ccdc5-a256-4460-8dd2-9e639b8430e9
DISKTYPE=2
PUUID=07729f62-2cd2-45d0-993a-ec8d7fbb6ee0
LEGALITY=ILLEGAL
MTIME=0
POOL_UUID=
SIZE=209715200
TYPE=SPARSE
DESCRIPTION=
EOF

So I guess the volume info is cached in the storage domain's metadata LV, and
it's still in ILLEGAL status there.

Is there a way to force ovirt to update the information in the metadata LV?

Of course I've thought of updating it manually with dd, but it seems too risky
(and scary) to do in production.

Salu2!
-- 
Miguel Armas
CanaryTek Consultoria y Sistemas SL
http://www.canarytek.com/


Re: [ovirt-users] SQL : last time halted?

2017-07-06 Thread Juan Hernández

On 07/06/2017 02:07 PM, Nicolas Ecarnot wrote:

[For the record]

Juan,

Thanks to your hint, I eventually found it more convenient to use
a SQL query to find out which VMs were unused for months:


SELECT
   vm_static.vm_name,
   vm_dynamic.status,
   vm_dynamic.vm_ip,
   vm_dynamic.vm_host,
   vm_dynamic.last_start_time,
   vm_dynamic.vm_guid,
   vm_dynamic.last_stop_time
FROM
   public.vm_dynamic,
   public.vm_static
WHERE
   vm_dynamic.vm_guid = vm_static.vm_guid AND
   vm_dynamic.status = 0
ORDER BY
   vm_dynamic.last_stop_time ASC;

Thank you.



That is nice. Just keep in mind that the database schema isn't kept 
backwards compatible. A minor change in the engine can make your query 
fail or return incorrect results.



[ovirt-users] [ANN] oVirt 4.1.3 Release is now available

2017-07-06 Thread Lev Veyde
The oVirt Project is pleased to announce the availability of the oVirt
4.1.3 Release, as of July 6th, 2017

This update is the third in a series of stabilization updates to the 4.1
series. 4.1.3 brings 20 enhancements and 138 bugfixes, including 76 high or
urgent severity fixes, on top of the oVirt 4.1 series.

This release is available now for:
* Fedora 24 (tech preview)
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* oVirt Node 4.1
* Fedora 24 (tech preview)

See the release notes [3] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Live has been already built [4]
- oVirt Node will be available soon [4]

Additional Resources:
* Read more about the oVirt 4.1.3 release highlights: http://www.ovirt.org/release/4.1.3/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/

[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.3/
[4] http://resources.ovirt.org/pub/ovirt-4.1/iso/



-- 

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

TRIED. TESTED. TRUSTED. 


Re: [ovirt-users] [Gluster-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)

2017-07-06 Thread Gianluca Cecchi
On Thu, Jul 6, 2017 at 2:16 PM, Atin Mukherjee  wrote:

>
>
> On Thu, Jul 6, 2017 at 5:26 PM, Gianluca Cecchi wrote:
>
>> On Thu, Jul 6, 2017 at 8:38 AM, Gianluca Cecchi wrote:
>>
>>>
>>> Eventually I can destroy and recreate this "export" volume again with
>>> the old names (ovirt0N.localdomain.local) if you give me the sequence of
>>> commands, then enable debug and retry the reset-brick command
>>>
>>> Gianluca
>>>
>>
>>
>> So it seems I was able to destroy and re-create.
>> Now I see that volume creation uses the new IP by default, so I
>> reversed the hostname roles in the commands after putting glusterd in
>> debug mode on the host where I execute the reset-brick command (do I
>> have to set debug on the other nodes too?)
>>
>
> You have to set the log level to debug for glusterd instance where the
> commit fails and share the glusterd log of that particular node.
>
>

Ok, done.

Command executed on ovirt01 with timestamp "2017-07-06 13:04:12" in
glusterd log files

[root@ovirt01 export]# gluster volume reset-brick export
gl01.localdomain.local:/gluster/brick3/export start
volume reset-brick: success: reset-brick start operation successful

[root@ovirt01 export]# gluster volume reset-brick export
gl01.localdomain.local:/gluster/brick3/export
ovirt01.localdomain.local:/gluster/brick3/export commit force
volume reset-brick: failed: Commit failed on ovirt02.localdomain.local.
Please check log file for details.
Commit failed on ovirt03.localdomain.local. Please check log file for
details.
[root@ovirt01 export]#

See glusterd log files for the 3 nodes in debug mode here:
ovirt01:
https://drive.google.com/file/d/0BwoPbcrMv8mvY1RTTGp3RUhScm8/view?usp=sharing
ovirt02:
https://drive.google.com/file/d/0BwoPbcrMv8mvSVpJUHNhMzhMSU0/view?usp=sharing
ovirt03:
https://drive.google.com/file/d/0BwoPbcrMv8mvT2xiWEdQVmJNb0U/view?usp=sharing

Hope it helps the debugging.
Gianluca


Re: [ovirt-users] [Gluster-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)

2017-07-06 Thread Atin Mukherjee
On Thu, Jul 6, 2017 at 5:26 PM, Gianluca Cecchi 
wrote:

> On Thu, Jul 6, 2017 at 8:38 AM, Gianluca Cecchi wrote:
>
>>
>> Eventually I can destroy and recreate this "export" volume again with the
>> old names (ovirt0N.localdomain.local) if you give me the sequence of
>> commands, then enable debug and retry the reset-brick command
>>
>> Gianluca
>>
>
>
> So it seems I was able to destroy and re-create.
> Now I see that volume creation uses the new IP by default, so I
> reversed the hostname roles in the commands after putting glusterd in
> debug mode on the host where I execute the reset-brick command (do I
> have to set debug on the other nodes too?)
>

You have to set the log level to debug for glusterd instance where the
commit fails and share the glusterd log of that particular node.
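On CentOS-based Gluster nodes, one common way to do this is via the glusterd unit's options file. This is a sketch: the file location and flag may vary by distribution and Gluster version, so check your packaging before applying it.

```shell
# Run glusterd at DEBUG log level, then watch its log:
echo 'GLUSTERD_OPTIONS="--log-level DEBUG"' >> /etc/sysconfig/glusterd
systemctl restart glusterd
tail -f /var/log/glusterfs/glusterd.log
```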


>
>
> [root@ovirt01 ~]# gluster volume reset-brick export
> gl01.localdomain.local:/gluster/brick3/export start
> volume reset-brick: success: reset-brick start operation successful
>
> [root@ovirt01 ~]# gluster volume reset-brick export
> gl01.localdomain.local:/gluster/brick3/export 
> ovirt01.localdomain.local:/gluster/brick3/export
> commit force
> volume reset-brick: failed: Commit failed on ovirt02.localdomain.local.
> Please check log file for details.
> Commit failed on ovirt03.localdomain.local. Please check log file for
> details.
> [root@ovirt01 ~]#
>
> See here the glusterd.log in zip format:
> https://drive.google.com/file/d/0BwoPbcrMv8mvYmlRLUgyV0pFN0k/view?usp=sharing
>
> Time of the reset-brick operation in the logfile is 2017-07-06 11:42
> (BTW: can I have log times in local time rather than UTC, since my
> system uses CEST?)
>
> I see a difference, because the brick doesn't seem isolated as before...
>
> [root@ovirt01 glusterfs]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: e278a830-beed-4255-b9ca-587a630cbdbf
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt01.localdomain.local:/gluster/brick3/export
> Brick2: 10.10.2.103:/gluster/brick3/export
> Brick3: 10.10.2.104:/gluster/brick3/export (arbiter)
>
> [root@ovirt02 ~]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: e278a830-beed-4255-b9ca-587a630cbdbf
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt01.localdomain.local:/gluster/brick3/export
> Brick2: 10.10.2.103:/gluster/brick3/export
> Brick3: 10.10.2.104:/gluster/brick3/export (arbiter)
>
> And also in oVirt I see all 3 bricks online
>
> Gianluca
>
>


Re: [ovirt-users] SQL : last time halted?

2017-07-06 Thread Nicolas Ecarnot

[For the record]

Juan,

Thanks to your hint, I eventually found it more convenient to use
a SQL query to find out which VMs were unused for months:


SELECT
  vm_static.vm_name,
  vm_dynamic.status,
  vm_dynamic.vm_ip,
  vm_dynamic.vm_host,
  vm_dynamic.last_start_time,
  vm_dynamic.vm_guid,
  vm_dynamic.last_stop_time
FROM
  public.vm_dynamic,
  public.vm_static
WHERE
  vm_dynamic.vm_guid = vm_static.vm_guid AND
  vm_dynamic.status = 0
ORDER BY
  vm_dynamic.last_stop_time ASC;

Thank you.

--
Nicolas ECARNOT

Le 30/05/2017 à 17:29, Juan Hernández a écrit :

On 05/30/2017 05:02 PM, Nicolas Ecarnot wrote:

Hello,

I'm trying to find a way to clean up the VMs list of my DCs.
I think some of my users have created VMs they're not using anymore, but
it's difficult to sort them out.
In some cases, I can shutdown some of them and wait.
Is the date of the last VM extinction stored somewhere in the
db tables?

Thank you.



Did you consider using the API? There is a 'stop_time' attribute that
you can use. For example, to list all the VMs and sort them by stop time
you can use the following Python script:

---8<---
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Create the connection to the server:
connection = sdk.Connection(
 url='https://engine.example.com/ovirt-engine/api',
 username='admin@internal',
 password='...',
 ca_file='/etc/pki/ovirt-engine/ca.pem'
)

# List the virtual machines:
vms_service = connection.system_service().vms_service()
vms = vms_service.list()

# Sort them by stop time:
vms.sort(key=lambda vm: vm.stop_time)

# Print the result:
for vm in vms:
 print("%s: %s" % (vm.name, vm.stop_time))

# Close the connection to the server:
connection.close()
--->8---




--
Nicolas ECARNOT


Re: [ovirt-users] [Gluster-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)

2017-07-06 Thread Gianluca Cecchi
On Thu, Jul 6, 2017 at 8:38 AM, Gianluca Cecchi 
wrote:

>
> Eventually I can destroy and recreate this "export" volume again with the
> old names (ovirt0N.localdomain.local) if you give me the sequence of
> commands, then enable debug and retry the reset-brick command
>
> Gianluca
>


So it seems I was able to destroy and re-create.
Now I see that volume creation uses the new IP by default, so I
reversed the hostname roles in the commands after putting glusterd in
debug mode on the host where I execute the reset-brick command (do I
have to set debug on the other nodes too?)


[root@ovirt01 ~]# gluster volume reset-brick export
gl01.localdomain.local:/gluster/brick3/export start
volume reset-brick: success: reset-brick start operation successful

[root@ovirt01 ~]# gluster volume reset-brick export
gl01.localdomain.local:/gluster/brick3/export
ovirt01.localdomain.local:/gluster/brick3/export commit force
volume reset-brick: failed: Commit failed on ovirt02.localdomain.local.
Please check log file for details.
Commit failed on ovirt03.localdomain.local. Please check log file for
details.
[root@ovirt01 ~]#

See here the glusterd.log in zip format:
https://drive.google.com/file/d/0BwoPbcrMv8mvYmlRLUgyV0pFN0k/view?usp=sharing

Time of the reset-brick operation in the logfile is 2017-07-06 11:42
(BTW: can I have log times in local time rather than UTC, since my
system uses CEST?)

I see a difference, because the brick doesn't seem isolated as before...

[root@ovirt01 glusterfs]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: e278a830-beed-4255-b9ca-587a630cbdbf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt01.localdomain.local:/gluster/brick3/export
Brick2: 10.10.2.103:/gluster/brick3/export
Brick3: 10.10.2.104:/gluster/brick3/export (arbiter)

[root@ovirt02 ~]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: e278a830-beed-4255-b9ca-587a630cbdbf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt01.localdomain.local:/gluster/brick3/export
Brick2: 10.10.2.103:/gluster/brick3/export
Brick3: 10.10.2.104:/gluster/brick3/export (arbiter)

And also in oVirt I see all 3 bricks online

Gianluca


Re: [ovirt-users] How to create a new Gluster volume

2017-07-06 Thread Gianluca Cecchi
On Thu, Jul 6, 2017 at 11:51 AM, Gianluca Cecchi 
wrote:

> Hello,
> I'm trying to create a new volume. I'm in 4.1.2
> I'm following these indications:
> http://www.ovirt.org/documentation/admin-guide/chap-Working_with_Gluster_Storage/
>
> When I click the "add brick" button, I don't see anything in the "Brick
> Directory" dropdown field and I cannot manually input a directory name.
>
> On the 3 nodes I already have formatted and mounted fs
>
> [root@ovirt01 ~]# df -h /gluster/brick3/
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/mapper/gluster-export   50G   33M   50G   1% /gluster/brick3
> [root@ovirt01 ~]#
>
> The guide tells
>
> 7. Click the Add Bricks button to select bricks to add to the volume.
> Bricks must be created externally on the Gluster Storage nodes.
>
> What is meant by "created externally"?
> The next step from the OS point of view would be volume creation, but that
> is exactly what I would like to do from the GUI...
>
> Thanks,
> Gianluca
>
>
It seems I have to de-select the checkbox "Show available bricks from host",
after which I can manually enter the brick directories.

BTW: after creating a volume optimized for oVirt in the 4.1.2 web admin
GUI, I get slightly different options compared with a pre-existing
volume created in 4.0.5 during the initial setup with gdeploy.

NOTE: during 4.0.5 setup I had gluster 3.7 installed, while now I have
gluster 3.10 (manually updated from CentOS storage SIG)

Making a "gluster volume info" and then diffing the output for the 2
volumes I have (lines marked "<" are the new volume, ">" the old volume):

< cluster.shd-max-threads: 8
---
> cluster.shd-max-threads: 6
13a13,14
> features.shard-block-size: 512MB
16c17
< network.remote-dio: enable
---
> network.remote-dio: off
23a25
> performance.readdir-ahead: on
25c27
< server.allow-insecure: on
---
> performance.strict-o-direct: on
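Where the options differ, they can be changed at run time with `gluster volume set`. For example (a sketch only, and whether each value is the right one for this cluster is exactly the open question, so verify the implications of each option before changing it):

```shell
# Change one option at a time and re-check "gluster volume info":
gluster volume set export network.remote-dio enable
gluster volume set export performance.strict-o-direct on
```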

Do I have to change anything for the newly created one?

Thanks,
Gianluca


Re: [ovirt-users] Installation problem - missing dependencies?

2017-07-06 Thread Paweł Zaskórski
Yes, I can confirm - it works now :)

Thanks!


Re: [ovirt-users] Installation problem - missing dependencies?

2017-07-06 Thread Simone Tiraboschi
On Thu, Jul 6, 2017 at 11:29 AM, Simone Tiraboschi 
wrote:

> Yes, thanks,
> we had an issue with repo composition for 4.1.3.
> We are recomposing it.
>
>
Ok, now the repo should be fine.
Sorry for the issue.


>
> On Thu, Jul 6, 2017 at 7:30 AM, Paweł Zaskórski  wrote:
>
>> Hi everyone!
>>
>> I'm trying to install oVirt on fresh CentOS 7 (up-to-date). According
>> to documentation i did:
>>
>> # yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
>> # yum install ovirt-engine
>>
>> Unfortunately, I'm getting an error:
>>
>> --> Finished Dependency Resolution
>> Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
>>Requires: ovirt-engine-cli >= 3.6.2.0
>> Error: Package:
>> ovirt-engine-setup-plugin-ovirt-engine-4.1.3.5-1.el7.centos.noarch
>> (ovirt-4.1)
>>Requires: ovirt-engine-dwh-setup >= 4.0
>> Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
>>Requires: ovirt-iso-uploader >= 4.0.0
>> Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
>>Requires: ovirt-engine-wildfly >= 10.1.0
>> Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
>>Requires: ovirt-engine-wildfly-overlay >= 10.0.0
>> Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
>>Requires: ovirt-imageio-proxy
>> Error: Package:
>> ovirt-engine-setup-plugin-ovirt-engine-4.1.3.5-1.el7.centos.noarch
>> (ovirt-4.1)
>>Requires: ovirt-imageio-proxy-setup
>> Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
>>Requires: ovirt-engine-dashboard >= 1.0.0
>>  You could try using --skip-broken to work around the problem
>>  You could try running: rpm -Va --nofiles --nodigest
>>
>> My enabled repos:
>>
>> # yum --noplugins repolist | awk 'FNR > 1 {print $1}' | head -n-1
>> base/7/x86_64
>> centos-opstools-release/7/x86_64
>> extras/7/x86_64
>> ovirt-4.1/7
>> ovirt-4.1-centos-gluster38/x86_64
>> ovirt-4.1-epel/x86_64
>> ovirt-4.1-patternfly1-noarch-epel/x86_64
>> ovirt-centos-ovirt41/7/x86_64
>> sac-gdeploy/x86_64
>> updates/7/x86_64
>> virtio-win-stable
>>
>> Did I miss something? From which repository come packages like
>> ovirt-engine-dashboard or ovirt-engine-dwh-setup?
>>
>> Thank you in advance for your help!
>>
>> Best regards,
>> Paweł
>>
>
>


Re: [ovirt-users] Configure an ovirt 4.1 hosted engine using SAS storage (aka DAS storage)

2017-07-06 Thread yayo (j)
Hi,

I have seen that if I choose the "FC" option, the "hosted-engine
--deploy" wizard shows me the available LUNs exposed via SAS:


  Please specify the storage you would like to use (glusterfs,
iscsi, fc, nfs3, nfs4)[nfs3]: fc
  The following luns have been found on the requested target:
[1] 3600a098000afe51200eb5940ba27   2048GiB
NETAPP  INF-01-00
status: used, paths: 2 active

[2] 3600a098000afe57e01045940baa3   2048GiB
NETAPP  INF-01-00
status: used, paths: 2 active



Can I go forward or is this totally not supported?

Thank you


2017-07-06 11:43 GMT+02:00 yayo (j) :

>
> Hi all,
>
> I'm trying to install a new oVirt 4.1 cluster (CentOS 7) configured to use
> a SAN that exposes LUNs via SAS. When I start to deploy oVirt and the engine
> using "hosted-engine --deploy", the only options I have are:
>
> (glusterfs, iscsi, fc, nfs3, nfs4)
>
> There is no option for "local" storage (which is not actually local, but
> consists of multipath devices exposed by the SAN via LUNs)
>
> Can you help me? What is the right configuration?
>
> Thank you
>



-- 
Linux User: 369739 http://counter.li.org


Re: [ovirt-users] user permissions

2017-07-06 Thread Fabrice Bacchella
It's getting stranger. I have written code to dump roles and permits for a 
given user.

./ovcmd user -n rexecutor roles | gsort -V
...
has role 'InstanceCreator' on vm 'fa42'
has role 'UserInstanceManager' on vm 'fa42'
has role 'UserRole' on vm 'fa42'
has role 'UserVmManager' on vm 'fa42'
has role 'UserVmRunTimeManager' on vm 'fa42'

So no super-user role for that VM.

./ovcmd user -n rexecutor permits
...
vm/fa42:
  add_users_and_groups_from_directory
  assign_cpu_profile
  attach_disk
  change_vm_cd
  configure_vm_network
  configure_vm_storage
  connect_to_vm
  create_disk
  create_vm
  delete_disk
  delete_vm
  edit_disk_properties
  edit_vm_properties
  hibernate_vm
  login
  manipulate_permissions
  reboot_vm
  run_vm
  shut_down_vm
  sparsify_disk
  stop_vm

./ovcmd  -u rexecutor@internal --passwordfile=/tmp/passwordfile vm -n fa42 stop
The action "vm stop" failed with: query execution failed due to insufficient 
permissions.

The role grants the stop_vm permit, but the user still can't stop the VM.

Now I add the SuperUser role for that VM.

./ovcmd user -n rexecutor roles | gsort -V
...
has role 'InstanceCreator' on vm 'fa42'
has role 'SuperUser' on vm 'fa42'
has role 'UserInstanceManager' on vm 'fa42'
has role 'UserRole' on vm 'fa42'
has role 'UserVmManager' on vm 'fa42'
has role 'UserVmRunTimeManager' on vm 'fa42'


The permits are the same:

./ovcmd user -n rexecutor permits
vm/fa42:
  add_users_and_groups_from_directory
  assign_cpu_profile
  attach_disk
  change_vm_cd
  configure_vm_network
  configure_vm_storage
  connect_to_vm
  create_disk
  create_vm
  delete_disk
  delete_vm
  edit_disk_properties
  edit_vm_properties
  hibernate_vm
  login
  manipulate_permissions
  reboot_vm
  run_vm
  shut_down_vm
  sparsify_disk
  stop_vm

./ovcmd  -u rexecutor@internal --passwordfile=/tmp/passwordfile vm -n fa42 stop
(OK)

But now the user can stop the VM. Why?
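One thing worth checking in cases like this is whether the client sends the oVirt API's `Filter` header: with `Filter: true` a request is evaluated with user-level permissions, while without it (admin mode) the engine requires an admin role on the resource, which would explain why only SuperUser unblocks the action. A hedged curl sketch (hostname and password are placeholders):

```shell
# User-level request: the engine evaluates per-user permissions.
curl -s -k -u 'rexecutor@internal:password' \
     -H 'Filter: true' \
     'https://engine.example.com/ovirt-engine/api/vms?search=name%3Dfa42'
```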


> Le 5 juil. 2017 à 17:55, Fabrice Bacchella  a 
> écrit :
> 
> I'm trying to give a user the permissions to stop/start a specific server.
> 
> This user is given the generic UserRole for the System.
> 
> I tried to give him the roles :
> UserVmManager
> UserVmRunTimeManager
> UserInstanceManager
> InstanceCreator
> UserRole
> 
> for that specific VM, I always get: query execution failed due to 
> insufficient permissions.
> 
> As soon as I give him the SuperUser role, he can stop/start it.
> 
> What role should I give him for that VM ? I don't want to give the privilege 
> to destroy the vm, or add disks. But he should be able to change the os 
> settings too.



[ovirt-users] How to create a new Gluster volume

2017-07-06 Thread Gianluca Cecchi
Hello,
I'm trying to create a new volume. I'm in 4.1.2
I'm following these indications:
http://www.ovirt.org/documentation/admin-guide/chap-Working_with_Gluster_Storage/

When I click the "add brick" button, I don't see anything in the "Brick
Directory" dropdown field and I cannot manually input a directory name.

On the 3 nodes I already have formatted and mounted fs

[root@ovirt01 ~]# df -h /gluster/brick3/
Filesystem  Size  Used Avail Use% Mounted on
/dev/mapper/gluster-export   50G   33M   50G   1% /gluster/brick3
[root@ovirt01 ~]#

The guide tells

7. Click the Add Bricks button to select bricks to add to the volume.
Bricks must be created externally on the Gluster Storage nodes.

What is meant by "created externally"?
The next step from the OS point of view would be volume creation, but that
is exactly what I would like to do from the GUI...
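For reference, "created externally" generally means preparing the brick filesystem and directory by hand on each node before using the Add Bricks dialog. A rough sketch, with device and VG names assumed to match the `/dev/mapper/gluster-export` mount shown above:

```shell
# Carve out an LV, format it as XFS and mount it as the brick filesystem:
lvcreate -L 50G -n export gluster
mkfs.xfs -i size=512 /dev/gluster/export   # 512-byte inodes are commonly recommended for Gluster
mkdir -p /gluster/brick3
mount /dev/gluster/export /gluster/brick3

# The brick itself is a directory inside that filesystem:
mkdir /gluster/brick3/export
```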

Thanks,
Gianluca


[ovirt-users] Configure an ovirt 4.1 hosted engine using SAS storage (aka DAS storage)

2017-07-06 Thread yayo (j)
Hi all,

I'm trying to install a new oVirt 4.1 cluster (CentOS 7) configured to use a
SAN that exposes LUNs via SAS. When I start to deploy oVirt and the engine
using "hosted-engine --deploy", the only options I have are:

(glusterfs, iscsi, fc, nfs3, nfs4)

There is no option for "local" storage (which is not actually local, but
consists of multipath devices exposed by the SAN via LUNs)

Can you help me? What is the right configuration?

Thank you


Re: [ovirt-users] Access VM Console on a Smart Phone with User Permission

2017-07-06 Thread Jerome R
Hello everyone! I haven't tried the uploaded app yet. I just want to ask
about this experimental build: with just the user role, can a user log on to
moVirt and see their VMs, with no admin role applied?
Thanks!

Best Regards,

On Tue, Jul 4, 2017 at 5:39 PM, Filip Krepinsky  wrote:

>
>
> On Tue, Jun 27, 2017 at 12:26 PM, Tomas Jelinek 
> wrote:
>
>>
>>
>> On Tue, Jun 27, 2017 at 12:08 PM, Jerome R 
>> wrote:
>>
>>> I tried this workaround: I logged the user account on to moVirt with
>>> admin permission on one resource. It works - I can access the assigned
>>> VM - but the user can see everything an admin sees in the portal, even
>>> though they cannot perform actions. My concern remains that a user
>>> should only be able to see his/her own assigned VMs.
>>>
>>
>> yes, this is a consequence of using the admin API - you can see all the
>> entities and do actions only on the ones you have explicit rights to.
>>
>> Unfortunately, until the https://github.com/oVirt/moVirt/issues/282 is
>> done, there is nothing better I can offer you.
>>
>> We can try to give that item a priority, just need to get the current RC
>> out of the door (hopefully soon).
>>
>>
>>>
>>> Thanks,
>>> Jerome
>>>
>>> On Tue, Jun 27, 2017 at 3:20 PM, Tomas Jelinek 
>>> wrote:
>>>


 On Tue, Jun 27, 2017 at 10:13 AM, Jerome Roque  wrote:

> Hi Tomas,
>
> Thanks for your response. What do you mean by "removing the support
> for user permissions"? I'm using
>

 The oVirt permission model expects to be told explicitly, by a header,
 whether the logged-in user has admin permissions or not. In the past the
 API behaved differently in these two cases, so we needed to remove the option
 to use it without admin permissions.

 Now the situation is better so we may be able to bring this support
 back, but it will require some testing.

>>>
> I created an experimental apk here https://github.com/suomiy/
> moVirt/raw/user-roles/moVirt/moVirt-release.apk
> It has some limitations for user roles, so entity events and event search
> query are disabled. Also the apk has not been tested thoroughly.
>
>
>>

> the latest version of moVirt 1.7.1, and ovirt-engine 4.1.
> Is there anyone tried running user role in moVirt?
>

> Please let us know if this works for you or if you encounter any bugs.
>
> Regards
> Filip
>
>
 you will get permission denied from the API if you try to log in with a
 user who has no admin permission. If you give them any admin permission on
 any resource, it might work as a workaround.
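The workaround above (granting the user some admin-level permission on a single
resource) can also be scripted against the oVirt v4 REST API. The sketch below
is an assumption, not a command from this thread: the engine URL, VM ID, user
ID, and role name are placeholders, the role must be one flagged as an admin
role in your setup, and the exact payload shape should be checked against the
REST API guide for your engine version:

```shell
# Illustrative placeholders -- replace with real values from your engine
ENGINE="https://engine.example.com/ovirt-engine/api"
VM_ID="123"
USER_ID="456"

# Add a permission for the user on one VM (role name is a placeholder;
# for the workaround it needs to be an admin-flagged role)
curl -k -u "admin@internal:password" \
  -H "Content-Type: application/xml" \
  -X POST "$ENGINE/vms/$VM_ID/permissions" \
  -d "<permission>
        <role><name>UserVmManager</name></role>
        <user id=\"$USER_ID\"/>
      </permission>"
```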


>
> Best Regards,
> Jerome
>
> On Tue, Jun 20, 2017 at 5:14 PM, Tomas Jelinek 
> wrote:
>
>>
>>
>> On Fri, Jun 16, 2017 at 6:14 AM, Jerome Roque <
>> jerzkie102...@gmail.com> wrote:
>>
>>> Good day oVirt Users,
>>>
>>> I need a little help. I have a KVM host and use oVirt to manage its VMs.
>>> What I want is for my clients to log on to their accounts and access their
>>> virtual machines from their smartphones. I tried to install moVirt and,
>>> yes, I can connect to the console of my machine, but it is only accessible
>>> through the admin console.
>>>
>>
>> moVirt originally worked with both admin and user permissions. We had to
>> remove the support for user permissions since the oVirt API did not provide
>> all the features moVirt needed for user permissions (search, for example).
>> But the API has moved on significantly since then (for one, search now
>> works for users too), so we can bring it back. I have opened an issue about
>> it: https://github.com/oVirt/moVirt/issues/282 - we can try to do it in the
>> next version.
>>
>>
>>> Tried to use web console, it downloaded console.vv but can't open
>>> it. By any chance could make this thing possible?
>>>
>>
>> If you want to use a web console for users, I would suggest to try
>> the new ovirt-web-ui [1] - you have a link to it from oVirt landing page
>> and since 4.1 it is installed by default with oVirt.
>>
>> The .vv file can not be opened using aSPICE AFAIK - adding Iordan as
>> the author of aSPICE to comment on this.
>>
>> [1]: https://github.com/oVirt/ovirt-web-ui
>>
>>
>>>
>>> Thank you,
>>> Jerome
>>>
>>>
>>>
>>
>

>>>
>>
>>
>>
>

Re: [ovirt-users] Installation problem - missing dependencies?

2017-07-06 Thread Simone Tiraboschi
Yes, thanks,
we had an issue with repo composition for 4.1.3.
We are recomposing it.
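Once the recomposed repo is published, one way to verify that the missing
dependencies are resolvable again (illustrative commands, run as root):

```shell
# Flush cached repo metadata so the recomposed repo is fetched fresh
yum clean all

# Check which repository now provides the previously missing packages
yum provides ovirt-engine-dashboard ovirt-engine-dwh-setup

# Dry-run the install (answers "no" at the prompt) to confirm that
# dependency resolution now succeeds without --skip-broken
yum install --assumeno ovirt-engine
```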

On Thu, Jul 6, 2017 at 7:30 AM, Paweł Zaskórski  wrote:

> Hi everyone!
>
> I'm trying to install oVirt on a fresh CentOS 7 (up-to-date). According
> to the documentation I did:
>
> # yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
> # yum install ovirt-engine
>
> Unfortunately, I'm getting an error:
>
> --> Finished Dependency Resolution
> Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
>Requires: ovirt-engine-cli >= 3.6.2.0
> Error: Package:
> ovirt-engine-setup-plugin-ovirt-engine-4.1.3.5-1.el7.centos.noarch
> (ovirt-4.1)
>Requires: ovirt-engine-dwh-setup >= 4.0
> Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
>Requires: ovirt-iso-uploader >= 4.0.0
> Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
>Requires: ovirt-engine-wildfly >= 10.1.0
> Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
>Requires: ovirt-engine-wildfly-overlay >= 10.0.0
> Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
>Requires: ovirt-imageio-proxy
> Error: Package:
> ovirt-engine-setup-plugin-ovirt-engine-4.1.3.5-1.el7.centos.noarch
> (ovirt-4.1)
>Requires: ovirt-imageio-proxy-setup
> Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
>Requires: ovirt-engine-dashboard >= 1.0.0
>  You could try using --skip-broken to work around the problem
>  You could try running: rpm -Va --nofiles --nodigest
>
> My enabled repos:
>
> # yum --noplugins repolist | awk 'FNR > 1 {print $1}' | head -n-1
> base/7/x86_64
> centos-opstools-release/7/x86_64
> extras/7/x86_64
> ovirt-4.1/7
> ovirt-4.1-centos-gluster38/x86_64
> ovirt-4.1-epel/x86_64
> ovirt-4.1-patternfly1-noarch-epel/x86_64
> ovirt-centos-ovirt41/7/x86_64
> sac-gdeploy/x86_64
> updates/7/x86_64
> virtio-win-stable
>
> Did I miss something? Which repository do packages like
> ovirt-engine-dashboard or ovirt-engine-dwh-setup come from?
>
> Thank you in advance for your help!
>
> Best regards,
> Paweł
>


Re: [ovirt-users] [Gluster-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)

2017-07-06 Thread Gianluca Cecchi
On Thu, Jul 6, 2017 at 6:55 AM, Atin Mukherjee  wrote:

>
>
>>
> You can switch back to info mode the moment this is hit one more time with
> the debug log enabled. What I'd need here is the glusterd log (with debug
> mode) to figure out the exact cause of the failure.
>
>
>>
>> Let me know,
>> thanks
>>
>>
>
Yes, but with the volume in the current state I cannot run the reset-brick
command.
I have another volume, named "iso", that I could use, but I would like to
keep it clean until I understand the problem on the "export" volume.
Currently, on the "export" volume, I in fact have this:

[root@ovirt01 ~]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 1
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Options Reconfigured:
...

While on the other two nodes

[root@ovirt02 ~]#  gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 2
Transport-type: tcp
Bricks:
Brick1: ovirt02.localdomain.local:/gluster/brick3/export
Brick2: ovirt03.localdomain.local:/gluster/brick3/export
Options Reconfigured:


[root@ovirt03 ~]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 2
Transport-type: tcp
Bricks:
Brick1: ovirt02.localdomain.local:/gluster/brick3/export
Brick2: ovirt03.localdomain.local:/gluster/brick3/export
Options Reconfigured:
...

If needed, I can destroy and recreate this "export" volume again with the
old names (ovirt0N.localdomain.local) if you give me the sequence of
commands, then enable debug and retry the reset-brick command.
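A possible destroy-and-recreate sequence would look like the sketch below.
This is an assumption, not a confirmed procedure from the thread: the brick
paths and hostnames are taken from the volume info above, the replica-3 /
arbiter layout is guessed from the "(2 + 1)" count, and each gluster command
will ask for confirmation unless run with --mode=script:

```shell
# Stop and delete the inconsistent volume (run once, on one node)
gluster volume stop export
gluster volume delete export

# On EACH node: wipe the stale brick directory so "volume create"
# does not refuse it as already part of a volume, then recreate it
rm -rf /gluster/brick3/export
mkdir -p /gluster/brick3/export

# Recreate as replica 3 with one arbiter brick (layout assumed)
gluster volume create export replica 3 arbiter 1 \
  ovirt01.localdomain.local:/gluster/brick3/export \
  ovirt02.localdomain.local:/gluster/brick3/export \
  ovirt03.localdomain.local:/gluster/brick3/export

gluster volume start export
```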

Gianluca