Re: [ovirt-users] [Gluster-users] open error -13 = sanlock

2016-03-01 Thread Ravishankar N

On 03/02/2016 12:02 PM, Sahina Bose wrote:



On 03/02/2016 03:45 AM, Nir Soffer wrote:
On Tue, Mar 1, 2016 at 10:51 PM, p...@email.cz wrote:

>
> HI,
> requested output:
>
> # ls -lh /rhev/data-center/mnt/glusterSD/localhost:*/*/dom_md
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md:
> total 2,1M
> -rw-rw---- 1 vdsm kvm 1,0M  1. bře 21.28 ids   <-- good
> -rw-rw---- 1 vdsm kvm  16M  7. lis 22.16 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 22.17 leases
> -rw-r--r-- 1 vdsm kvm  335  7. lis 22.17 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 22.16 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P1/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md:
> total 1,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.41 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.14 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 03.56 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 03.56 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.14 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md:
> total 1,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.43 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.15 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 22.14 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 22.14 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.15 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P3/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md:
> total 1,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.43 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M 23. úno 22.51 inbox
> -rw-rw---- 1 vdsm kvm 2,0M 23. úno 23.12 leases
> -rw-r--r-- 1 vdsm kvm  998 25. úno 00.35 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.16 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md:
> total 1,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.44 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.17 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 00.18 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 00.18 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.17 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P1/42d710a9-b844-43dc-be41-77002d1cd553/dom_md:
> total 1,1M
> -rw-rw-r-- 1 vdsm kvm    0 24. úno 07.32 ids   <-- bad (other can read)
> -rw-rw---- 1 vdsm kvm  16M  7. lis 22.18 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 22.18 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 22.18 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 22.18 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/ff71b47b-0f72-4528-9bfe-c3da888e47f0/dom_md:
> total 3,0M
> -rw-rw-r-- 1 vdsm kvm 1,0M  1. bře 21.28 ids   <-- bad (other can read)
> -rw-rw---- 1 vdsm kvm  16M 25. úno 00.42 inbox
> -rw-rw---- 1 vdsm kvm 2,0M 25. úno 00.44 leases
> -rw-r--r-- 1 vdsm kvm  997 24. úno 02.46 metadata
> -rw-rw---- 1 vdsm kvm  16M 25. úno 00.44 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P3/ef010d08-aed1-41c4-ba9a-e6d9bdecb4b4/dom_md:
> total 2,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.34 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M 23. úno 22.35 inbox
> -rw-rw---- 1 vdsm kvm 2,0M 23. úno 22.38 leases
> -rw-r--r-- 1 vdsm kvm 1,1K 24. úno 19.07 metadata
> -rw-rw---- 1 vdsm kvm  16M 23. úno 22.27 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12__P4/300e9ac8-3c2f-4703-9bb1-1df2130c7c97/dom_md:
> total 3,0M
> -rw-rw-r-- 1 vdsm kvm 1,0M  1. bře 21.28 ids   <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm  16M  6. lis 23.50 inbox   <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm 2,0M  6. lis 23.51 leases   <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm  734  7. lis 02.13 metadata   <-- bad (group can write, other can read)
> -rw-rw-r-- 1 vdsm kvm  16M  6. lis 16.55 outbox   <-- bad (other can read)
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P5/1ca56b45-701e-4c22-9f59-3aebea4d8477/dom_md:
> total 1,1M
> -rw-rw-r-- 1 vdsm kvm    0 24. úno 07.35 ids   <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm  16M 24. úno 01.06 inbox
> -rw-rw-r-- 1 vdsm kvm 2,0M 24. úno 02.44 leases
> -rw-r--r-- 1 vdsm kvm  998 24. úno 19.07 metadata
> -rw-rw-r-- 1 vdsm kvm  16M  7. lis 22.20 outbox


It should look like this:

-rw-rw----. 1 vdsm kvm 1.0M Mar  1 23:36 ids
-rw-rw----. 1 vdsm kvm 2.0M Mar  1 23:35 leases
-rw-r--r--. 1 vdsm kvm  353 Mar  1 23:35 metadata
-rw-rw----. 1 vdsm kvm  16M Mar  1 23:34 outbox
-rw-rw----. 1 vdsm kvm  16M Mar  1 23:34 inbox

This explains the EACCES error.

You can start by fixing the permissions manually, you can do this online.

> The ids files were generated by the "touch" command after deleting them due to a "sanlock locking hang" gluster crash & reboot
> I expected that they would be filled automatically after gluster reboot ( the 

Re: [ovirt-users] open error -13 = sanlock

2016-03-01 Thread Sahina Bose



On 03/02/2016 03:45 AM, Nir Soffer wrote:
On Tue, Mar 1, 2016 at 10:51 PM, p...@email.cz wrote:

>
> HI,
> requested output:
>
> # ls -lh /rhev/data-center/mnt/glusterSD/localhost:*/*/dom_md
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md:
> total 2,1M
> -rw-rw---- 1 vdsm kvm 1,0M  1. bře 21.28 ids   <-- good
> -rw-rw---- 1 vdsm kvm  16M  7. lis 22.16 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 22.17 leases
> -rw-r--r-- 1 vdsm kvm  335  7. lis 22.17 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 22.16 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P1/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md:
> total 1,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.41 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.14 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 03.56 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 03.56 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.14 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md:
> total 1,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.43 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.15 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 22.14 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 22.14 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.15 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P3/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md:
> total 1,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.43 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M 23. úno 22.51 inbox
> -rw-rw---- 1 vdsm kvm 2,0M 23. úno 23.12 leases
> -rw-r--r-- 1 vdsm kvm  998 25. úno 00.35 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.16 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md:
> total 1,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.44 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.17 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 00.18 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 00.18 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.17 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P1/42d710a9-b844-43dc-be41-77002d1cd553/dom_md:
> total 1,1M
> -rw-rw-r-- 1 vdsm kvm    0 24. úno 07.32 ids   <-- bad (other can read)
> -rw-rw---- 1 vdsm kvm  16M  7. lis 22.18 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 22.18 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 22.18 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 22.18 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/ff71b47b-0f72-4528-9bfe-c3da888e47f0/dom_md:
> total 3,0M
> -rw-rw-r-- 1 vdsm kvm 1,0M  1. bře 21.28 ids   <-- bad (other can read)
> -rw-rw---- 1 vdsm kvm  16M 25. úno 00.42 inbox
> -rw-rw---- 1 vdsm kvm 2,0M 25. úno 00.44 leases
> -rw-r--r-- 1 vdsm kvm  997 24. úno 02.46 metadata
> -rw-rw---- 1 vdsm kvm  16M 25. úno 00.44 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P3/ef010d08-aed1-41c4-ba9a-e6d9bdecb4b4/dom_md:
> total 2,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.34 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M 23. úno 22.35 inbox
> -rw-rw---- 1 vdsm kvm 2,0M 23. úno 22.38 leases
> -rw-r--r-- 1 vdsm kvm 1,1K 24. úno 19.07 metadata
> -rw-rw---- 1 vdsm kvm  16M 23. úno 22.27 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12__P4/300e9ac8-3c2f-4703-9bb1-1df2130c7c97/dom_md:
> total 3,0M
> -rw-rw-r-- 1 vdsm kvm 1,0M  1. bře 21.28 ids   <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm  16M  6. lis 23.50 inbox   <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm 2,0M  6. lis 23.51 leases   <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm  734  7. lis 02.13 metadata   <-- bad (group can write, other can read)
> -rw-rw-r-- 1 vdsm kvm  16M  6. lis 16.55 outbox   <-- bad (other can read)
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P5/1ca56b45-701e-4c22-9f59-3aebea4d8477/dom_md:
> total 1,1M
> -rw-rw-r-- 1 vdsm kvm    0 24. úno 07.35 ids   <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm  16M 24. úno 01.06 inbox
> -rw-rw-r-- 1 vdsm kvm 2,0M 24. úno 02.44 leases
> -rw-r--r-- 1 vdsm kvm  998 24. úno 19.07 metadata
> -rw-rw-r-- 1 vdsm kvm  16M  7. lis 22.20 outbox


It should look like this:

-rw-rw----. 1 vdsm kvm 1.0M Mar  1 23:36 ids
-rw-rw----. 1 vdsm kvm 2.0M Mar  1 23:35 leases
-rw-r--r--. 1 vdsm kvm  353 Mar  1 23:35 metadata
-rw-rw----. 1 vdsm kvm  16M Mar  1 23:34 outbox
-rw-rw----. 1 vdsm kvm  16M Mar  1 23:34 inbox

This explains the EACCES error.

You can start by fixing the permissions manually, you can do this online.

> The ids files were generated by the "touch" command after deleting them due to a "sanlock locking hang" gluster crash & reboot
> I expected that they would be filled 

[ovirt-users] Know the console client IP from guest

2016-03-01 Thread Simon Lévesuqe
From a Windows guest (this can be useful for Linux guests as well), is it 
possible to get the IP of the SPICE client connected to the console?


I set up oVirt for a VDI. The users use thin clients that PXE boot using 
LTSP. I need the client IP to offer remote support via VNC. Right now I use 
epoptes as the remote support tool; it's an excellent piece of software, but I 
need technicians from other branches of the company to be able to offer 
remote support to my users, and they are all Windows guys and don't want 
to connect to the LTSP server to start epoptes... They already use VNC 
to support Wyse-type clients, so I installed x11vnc in my clients' chroot 
and set it up to start at boot. I know this is a security issue, but this 
is the boss's decision.


Some clients log in to terminal servers using xfreerdp (LTSP clients boot 
directly into xfreerdp). In that case everything is OK: we set up bginfo 
to print the client name and IP, as they are Windows environment variables.


The problem is that many other clients boot to the oVirt user portal and 
users log in to a Windows 7 VM using virt-viewer and SPICE. In bginfo, the 
only thing I can get is the IP of the Windows guest. That is OK, since in most 
cases we can take control of the guest and it does the trick, but sometimes 
it's useful to VNC to the thin client itself.


Thanks!

Simon L
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] My oVirt 3.6 installation guide

2016-03-01 Thread Glenn Snead
I'm not planning on it... for now.  This is my learning lab, and I only need
a few VMs.  If anything, I'll swap motherboards and up the RAM.  It's more
than sufficient for my needs.

On Tue, Mar 1, 2016 at 5:25 PM Nir Soffer  wrote:

> Cool!
>
> But it looks like you installed hosted engine and gluster on a single
> host. Do you plan to add more hosts?
>
> Nir
>
> On Tue, Mar 1, 2016 at 11:38 PM, Glenn Snead  wrote:
>
>> I rebuilt my humble oVirt server over the weekend, and I wrote a blog
>> posting about how I did it.
>> https://glennsnead.wordpress.com/2016/02/28/ovirt-3-6-installation/
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] My oVirt 3.6 installation guide

2016-03-01 Thread Nir Soffer
Cool!

But it looks like you installed hosted engine and gluster on a single host.
Do you plan to add more hosts?

Nir

On Tue, Mar 1, 2016 at 11:38 PM, Glenn Snead  wrote:

> I rebuilt my humble oVirt server over the weekend, and I wrote a blog
> posting about how I did it.
> https://glennsnead.wordpress.com/2016/02/28/ovirt-3-6-installation/
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] open error -13 = sanlock

2016-03-01 Thread David Teigland
On Wed, Mar 02, 2016 at 12:15:17AM +0200, Nir Soffer wrote:
> 1. Stop engine,  so it will not try to start vdsm
> 2. Stop vdsm on all hosts, so they do not try to acquire a host id with
> sanlock
> This does not affect running vms
> 3. Fix the permissions on the ids file, via glusterfs mount
> 4. Initialize the ids files from one of the hosts, via the glusterfs mount
> This should fix the ids files on all replicas
> 5. Start vdsm on all hosts
> 6. Start engine
> 
> Engine will connect to all hosts, hosts will connect to storage and try to
> acquire a host id.
> Then Engine will start the SPM on one of the hosts, and your DC should
> become up.
> 
> David, Sahina, can you confirm that this procedure is safe?

Looks right.
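
For reference, a rough sketch of those steps as shell commands (using the 1KVM12-P4 path and storage domain UUID from the listings in this thread as an example; the sanlock direct init call and the sudo usage are one assumed way to do step 4, so treat this as an outline rather than an exact procedure, and adjust for each affected domain):

# 1-2. stop the engine, then vdsm on every host (running VMs are not affected)
systemctl stop ovirt-engine    # on the engine machine
systemctl stop vdsmd           # on each host

# 3. fix ownership and mode of the ids file through the glusterfs mount
chown vdsm:kvm /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids
chmod 0660 /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids

# 4. re-initialize the lockspace in the empty ids file, from one host only
#    (the lockspace name is the storage domain UUID)
sudo -u vdsm sanlock direct init -s 7f52b697-c199-4f58-89aa-102d44327124:0:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids:0

# 5-6. start vdsm on all hosts again, then the engine
systemctl start vdsmd          # on each host
systemctl start ovirt-engine   # on the engine machine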

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] open error -13 = sanlock

2016-03-01 Thread Nir Soffer
On Tue, Mar 1, 2016 at 10:51 PM, p...@email.cz  wrote:
>
> HI,
> requested output:
>
> # ls -lh /rhev/data-center/mnt/glusterSD/localhost:*/*/dom_md
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md:
> total 2,1M
> -rw-rw---- 1 vdsm kvm 1,0M  1. bře 21.28 ids   <-- good
> -rw-rw---- 1 vdsm kvm  16M  7. lis 22.16 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 22.17 leases
> -rw-r--r-- 1 vdsm kvm  335  7. lis 22.17 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 22.16 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P1/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md:
> total 1,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.41 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.14 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 03.56 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 03.56 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.14 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md:
> total 1,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.43 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.15 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 22.14 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 22.14 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.15 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P3/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md:
> total 1,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.43 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M 23. úno 22.51 inbox
> -rw-rw---- 1 vdsm kvm 2,0M 23. úno 23.12 leases
> -rw-r--r-- 1 vdsm kvm  998 25. úno 00.35 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.16 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md:
> total 1,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.44 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.17 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 00.18 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 00.18 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 00.17 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P1/42d710a9-b844-43dc-be41-77002d1cd553/dom_md:
> total 1,1M
> -rw-rw-r-- 1 vdsm kvm    0 24. úno 07.32 ids   <-- bad (other can read)
> -rw-rw---- 1 vdsm kvm  16M  7. lis 22.18 inbox
> -rw-rw---- 1 vdsm kvm 2,0M  7. lis 22.18 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 22.18 metadata
> -rw-rw---- 1 vdsm kvm  16M  7. lis 22.18 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/ff71b47b-0f72-4528-9bfe-c3da888e47f0/dom_md:
> total 3,0M
> -rw-rw-r-- 1 vdsm kvm 1,0M  1. bře 21.28 ids   <-- bad (other can read)
> -rw-rw---- 1 vdsm kvm  16M 25. úno 00.42 inbox
> -rw-rw---- 1 vdsm kvm 2,0M 25. úno 00.44 leases
> -rw-r--r-- 1 vdsm kvm  997 24. úno 02.46 metadata
> -rw-rw---- 1 vdsm kvm  16M 25. úno 00.44 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P3/ef010d08-aed1-41c4-ba9a-e6d9bdecb4b4/dom_md:
> total 2,1M
> -rw-r--r-- 1 vdsm kvm    0 24. úno 07.34 ids   <-- bad (sanlock cannot write, other can read)
> -rw-rw---- 1 vdsm kvm  16M 23. úno 22.35 inbox
> -rw-rw---- 1 vdsm kvm 2,0M 23. úno 22.38 leases
> -rw-r--r-- 1 vdsm kvm 1,1K 24. úno 19.07 metadata
> -rw-rw---- 1 vdsm kvm  16M 23. úno 22.27 outbox
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12__P4/300e9ac8-3c2f-4703-9bb1-1df2130c7c97/dom_md:
> total 3,0M
> -rw-rw-r-- 1 vdsm kvm 1,0M  1. bře 21.28 ids   <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm  16M  6. lis 23.50 inbox   <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm 2,0M  6. lis 23.51 leases   <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm  734  7. lis 02.13 metadata   <-- bad (group can write, other can read)
> -rw-rw-r-- 1 vdsm kvm  16M  6. lis 16.55 outbox   <-- bad (other can read)
>
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P5/1ca56b45-701e-4c22-9f59-3aebea4d8477/dom_md:
> total 1,1M
> -rw-rw-r-- 1 vdsm kvm    0 24. úno 07.35 ids   <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm  16M 24. úno 01.06 inbox
> -rw-rw-r-- 1 vdsm kvm 2,0M 24. úno 02.44 leases
> -rw-r--r-- 1 vdsm kvm  998 24. úno 19.07 metadata
> -rw-rw-r-- 1 vdsm kvm  16M  7. lis 22.20 outbox


It should look like this:

-rw-rw----. 1 vdsm kvm 1.0M Mar  1 23:36 ids
-rw-rw----. 1 vdsm kvm 2.0M Mar  1 23:35 leases
-rw-r--r--. 1 vdsm kvm  353 Mar  1 23:35 metadata
-rw-rw----. 1 vdsm kvm  16M Mar  1 23:34 outbox
-rw-rw----. 1 vdsm kvm  16M Mar  1 23:34 inbox

This explains the EACCES error.

You can start by fixing the permissions manually, you can do this online.
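
For illustration, a minimal sketch of that manual fix, assuming the usual vdsm defaults (vdsm:kvm ownership, mode 0660 for ids/leases/inbox/outbox and 0644 for metadata) and the glusterfs mount points shown above:

for d in /rhev/data-center/mnt/glusterSD/localhost:*/*/dom_md; do
    # reset ownership and modes of the storage-domain metadata files
    chown vdsm:kvm "$d"/{ids,leases,inbox,outbox,metadata}
    chmod 0660 "$d"/{ids,leases,inbox,outbox}
    chmod 0644 "$d"/metadata
done

A zero-byte ids file still needs to be re-initialized afterwards; fixing the mode only addresses the EACCES part.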

> The ids files were generated by the "touch" command after deleting them due to a "sanlock locking hang" gluster crash & reboot
> I expected that they would be filled automatically after gluster reboot ( the shadow copy from the ".gluster" directory was deleted & created

[ovirt-users] My oVirt 3.6 installation guide

2016-03-01 Thread Glenn Snead
I rebuilt my humble oVirt server over the weekend, and I wrote a blog
posting about how I did it.
https://glennsnead.wordpress.com/2016/02/28/ovirt-3-6-installation/
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 3.6.3 Final Release is now available

2016-03-01 Thread Arman Khalatyan
Looks good!! Thank you!
On 01.03.2016 at 13:10, "Sandro Bonazzola" wrote:

> The oVirt Project is pleased to announce today the general availability of
> oVirt 3.6.3.
>
> This latest community release includes numerous bug fixes and several new
> features, such as:
>
>
>    - The WebSocketProxy VDC option (and a few others) can now be updated
>      without needing to restart the engine.
>
>    - OVIRT-CLI now uses remote-viewer instead of spicec for SPICE-based
>      consoles.
>
>    - The Unassigned host status now better reflects the real status.
>
>    - The cloud-init service is now disabled after appliance deployment.
>
>
> oVirt is an open-source, openly-governed enterprise virtualization
> management application, developed by a global community. You can use the
> oVirt management interface (oVirt Engine) to manage hardware nodes, storage
> and network resources, and to deploy and monitor virtual machines running
> in your data center.
>
> If you are familiar with VMware products, oVirt is conceptually similar to
> vSphere. oVirt serves as the bedrock for Red Hat's Enterprise
> Virtualization product, and it is the "upstream" project where new features
> are developed prior to their inclusion in Red Hat's supported product
> offering.
>
> Additional Resources:
>
> * Read more about the oVirt 3.6.3 release highlights:
> http://www.ovirt.org/release/3.6.3/
> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
> * Read more about oVirt Project community events:
> http://www.ovirt.org/events/
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] open error -13 = sanlock

2016-03-01 Thread Nir Soffer
On Tue, Mar 1, 2016 at 5:07 PM, p...@email.cz  wrote:

> Hello, can anybody explain this error no. 13 (open file) in sanlock.log?
>

This is EACCES (permission denied).

Can you share the output of:

ls -lh /rhev/data-center/mnt/<server>:<_path>/<sd_uuid>/dom_md


>
> The size of  "ids" file is zero (0)
>

This is how we create the ids file when initializing it.

But then we use sanlock to initialize the ids file, and it should be 1MiB
after that.

Were these ids files created by vdsm, or did you create them yourself?
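
As a quick check (a sketch, reusing the path from the log below): an initialized ids file is 1 MiB and holds the lockspace delta-lease records, which sanlock can dump, while a zero-byte file holds nothing:

ls -lh /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids
# dump whatever lockspace/lease records are on disk, if any
sanlock direct dump /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids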


> 2016-02-28 03:25:46+0100 269626 [1951]: open error -13
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids
> 2016-02-28 03:25:46+0100 269626 [1951]: s187985 open_disk
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids
> error -13
> 2016-02-28 03:25:56+0100 269636 [11304]: s187992 lockspace
> 7f52b697-c199-4f58-89aa-102d44327124:1:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids:0
>
> If the main problem is about the zero file size, can I regenerate this file
> online securely, with no VM dependence?
>

Yes, I think I already referred to the instructions for how to do that in a
previous mail.


>
> dist = RHEL - 7 - 2.1511
> kernel = 3.10.0 - 327.10.1.el7.x86_64
> KVM = 2.3.0 - 29.1.el7
> libvirt = libvirt-1.2.17-13.el7_2.3
> vdsm = vdsm-4.16.30-0.el7
> GlusterFS = glusterfs-3.7.8-1.el7
>
>
> regs.
> Pavel
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Not seeing Gluster options on Cluster properties

2016-03-01 Thread Sahina Bose



On 03/01/2016 09:50 PM, Will Dennis wrote:


Ah, although this worked, and I see the Gluster volumes now in the 
tree & can manage them, I see for both volumes I have (“engine” and 
“vm_data”) that the number of bricks is “0”... Why can’t it see the 
existing bricks?




This is because the hosts were imported before the gluster service was 
enabled, and hence the gluster server UUID info was not updated in the 
engine.
You can put the hosts into maintenance and then activate them to refresh 
these details.


Could you also log a bug to have the gluster details refreshed when the 
gluster service is enabled?




*From:*Will Dennis
*Sent:* Tuesday, March 01, 2016 11:13 AM
*To:* 'Sahina Bose'; users
*Subject:* RE: [ovirt-users] Not seeing Gluster options on Cluster 
properties


Great, that worked – thanks Sahina!



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Not seeing Gluster options on Cluster properties

2016-03-01 Thread Will Dennis
Great, that worked – thanks Sahina!


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Not seeing Gluster options on Cluster properties

2016-03-01 Thread Will Dennis
Ah, although this worked, and I see the Gluster volumes now in the tree & can 
manage them, I see for both volumes I have (“engine” and “vm_data”) that the 
number of bricks is “0”... Why can’t it see the existing bricks?

From: Will Dennis
Sent: Tuesday, March 01, 2016 11:13 AM
To: 'Sahina Bose'; users
Subject: RE: [ovirt-users] Not seeing Gluster options on Cluster properties

Great, that worked – thanks Sahina!


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] open error -13 = sanlock

2016-03-01 Thread Fred Rolland
Hi,

Can you share VDSM logs ?

There was something similar in this thread:
http://lists.ovirt.org/pipermail/users/2016-February/038046.html

Thanks,
Fred

On Tue, Mar 1, 2016 at 5:07 PM, p...@email.cz  wrote:

> Hello, can anybody explain this error no. 13 (open file) in sanlock.log?
>
> The size of  "ids" file is zero (0)
>
> 2016-02-28 03:25:46+0100 269626 [1951]: open error -13
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids
> 2016-02-28 03:25:46+0100 269626 [1951]: s187985 open_disk
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids
> error -13
> 2016-02-28 03:25:56+0100 269636 [11304]: s187992 lockspace
> 7f52b697-c199-4f58-89aa-102d44327124:1:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids:0
>
> If the main problem is about the zero file size, can I regenerate this file
> online securely, with no VM dependence?
>
>
> dist = RHEL - 7 - 2.1511
> kernel = 3.10.0 - 327.10.1.el7.x86_64
> KVM = 2.3.0 - 29.1.el7
> libvirt = libvirt-1.2.17-13.el7_2.3
> vdsm = vdsm-4.16.30-0.el7
> GlusterFS = glusterfs-3.7.8-1.el7
>
>
> regs.
> Pavel
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Not seeing Gluster options on Cluster properties

2016-03-01 Thread Sahina Bose



On 03/01/2016 08:48 PM, Will Dennis wrote:

-Original Message-
From: Sahina Bose [mailto:sab...@redhat.com]
Sent: Tuesday, March 01, 2016 10:01 AM
To: Will Dennis; users
Subject: Re: [ovirt-users] Not seeing Gluster options on Cluster properties


How did you install the engine? Automatic install via ovirt-engine appliance?
If so, it's likely that the engine is installed in "Virt" only mode.

Yes, I used the appliance OVF


If using the appliance OVF, you should opt to manually install the engine (see 
http://blogs-ramesh.blogspot.in/2016/01/ovirt-and-gluster-hyperconvergence.html)


To correct this now, you will need to update the value in the engine 
database and restart the ovirt-engine service:


su - postgres -c "psql -d engine -c \"update vdc_options set option_value=255 where option_name='ApplicationMode';\""
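
To double-check afterwards (a sketch; 255 is the value that enables both the Virt and Gluster application modes):

su - postgres -c "psql -d engine -c \"select option_value from vdc_options where option_name='ApplicationMode';\""
systemctl restart ovirt-engine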



Can you also check "engine-config -g AllowClusterWithVirtGlusterEnabled"
- this needs to be true.

[root@ovirt-engine-01 ~]# engine-config -g AllowClusterWithVirtGlusterEnabled
AllowClusterWithVirtGlusterEnabled: true version: general

What additional changes need to be made to support Gluster management?


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Not seeing Gluster options on Cluster properties

2016-03-01 Thread Will Dennis
> -Original Message-
> From: Sahina Bose [mailto:sab...@redhat.com] 
> Sent: Tuesday, March 01, 2016 10:01 AM
> To: Will Dennis; users
> Subject: Re: [ovirt-users] Not seeing Gluster options on Cluster properties
>
>
> How did you install the engine? Automatic install via ovirt-engine appliance?
> If so, it's likely that the engine is installed in "Virt" only mode.

Yes, I used the appliance OVF

> Can you also check "engine-config -g AllowClusterWithVirtGlusterEnabled" 
> - this needs to be true.

[root@ovirt-engine-01 ~]# engine-config -g AllowClusterWithVirtGlusterEnabled
AllowClusterWithVirtGlusterEnabled: true version: general

What additional changes need to be made to support Gluster management?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] open error -13 = sanlock

2016-03-01 Thread p...@email.cz

Hello, can anybody explain this error no. 13 (open file) in sanlock.log?

The size of  "ids" file is zero (0)

2016-02-28 03:25:46+0100 269626 [1951]: open error -13 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids
2016-02-28 03:25:46+0100 269626 [1951]: s187985 open_disk 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids 
error -13
2016-02-28 03:25:56+0100 269636 [11304]: s187992 lockspace 
7f52b697-c199-4f58-89aa-102d44327124:1:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids:0


If the main problem is about the zero file size, can I regenerate this file 
online securely, with no VM dependence?



dist = RHEL - 7 - 2.1511
kernel = 3.10.0 - 327.10.1.el7.x86_64
KVM = 2.3.0 - 29.1.el7
libvirt = libvirt-1.2.17-13.el7_2.3
vdsm = vdsm-4.16.30-0.el7
GlusterFS = glusterfs-3.7.8-1.el7


regs.
Pavel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Not seeing Gluster options on Cluster properties

2016-03-01 Thread Sahina Bose



On 02/29/2016 07:04 PM, Will Dennis wrote:

Hi all,

I am running a hyperconverged setup of oVirt 3.6, where I pre-made the Gluster 
volumes that are used for the hosted engine and the VM storage domains. I have seen 
in screenshots of 3.6 HC setups that there should be options to enable Gluster 
integration in oVirt by checking a box in Cluster properties > General tab 
(“Enable Gluster Service”) but I do not see that checkbox in my Cluster properties 
> General tab. What must I do to enable this integration?


How did you install the engine? Automatic install via ovirt-engine 
appliance? If so, it's likely that the engine is installed in "Virt" 
only mode.


Can you also check "engine-config -g AllowClusterWithVirtGlusterEnabled" 
- this needs to be true.
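
If it had come back false, it could be flipped with something like the following (a sketch only; engine-config changes take effect after an engine restart):

engine-config -s AllowClusterWithVirtGlusterEnabled=true
systemctl restart ovirt-engine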





BTW, I see that there is a package "vdsm-gluster-4.17.18-0.el7.centos.noarch” 
installed on all of my oVirt nodes… Is there another package that I am missing?

Thanks,
Will
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ANN] oVirt 3.6.3 Final Release is now available

2016-03-01 Thread Sandro Bonazzola
The oVirt Project is pleased to announce today the general availability of
oVirt 3.6.3.

This latest community release includes numerous bug fixes and several new
features, such as:


   - The WebSocketProxy VDC option (and a few others) can now be updated
     without needing to restart the engine.

   - OVIRT-CLI now uses remote-viewer instead of spicec for SPICE-based
     consoles.

   - The Unassigned host status now better reflects the real status.

   - The cloud-init service is now disabled after appliance deployment.


oVirt is an open-source, openly-governed enterprise virtualization
management application, developed by a global community. You can use the
oVirt management interface (oVirt Engine) to manage hardware nodes, storage
and network resources, and to deploy and monitor virtual machines running
in your data center.

If you are familiar with VMware products, oVirt is conceptually similar to
vSphere. oVirt serves as the bedrock for Red Hat's Enterprise
Virtualization product, and it is the "upstream" project where new features
are developed prior to their inclusion in Red Hat's supported product
offering.

Additional Resources:

* Read more about the oVirt 3.6.3 release highlights:
http://www.ovirt.org/release/3.6.3/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Read more about oVirt Project community events:
http://www.ovirt.org/events/


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] virt-v2v converting multi-disk

2016-03-01 Thread Richard W.M. Jones
On Mon, Feb 29, 2016 at 06:31:31PM -0600, Clint Boggio wrote:
>
> I've tried the -i libvirtxml method and it fails, and I suspect it's
> because the legacy KVM environment is Ubuntu-based. Any tricks or
> pointers would be appreciated.

Run `virt-v2v -v -x -i libvirtxml [...]' and capture the
complete output and post it somewhere.
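
For example, something along these lines (an illustrative sketch only; the guest name, output mode and paths are placeholders):

# on the legacy KVM host: dump the domain XML, then convert with full debugging
virsh dumpxml guestname > guest.xml
virt-v2v -v -x -i libvirtxml guest.xml -o local -os /var/tmp/v2v-out 2>&1 | tee v2v.log
# every disk path referenced in guest.xml must be readable from where virt-v2v runs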

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users