Re: [ovirt-users] Safe to upgrade HE hosts from GUI?

2016-07-28 Thread Wee Sritippho

On 28/7/2559 15:54, Simone Tiraboschi wrote:
On Thu, Jul 28, 2016 at 10:41 AM, Wee Sritippho > wrote:


On 21/7/2559 16:53, Simone Tiraboschi wrote:

On Thu, Jul 21, 2016 at 11:43 AM, Wee Sritippho
> wrote:

Can I just follow

http://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
until step 3 and do everything else via GUI?

Yes, absolutely.


Hi, I upgraded a host (host02) via the GUI and now its score is 0.
I restarted the services but the result is still the same. I'm a bit
lost now. What should I do next?
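
(For reference, checking the score and restarting the HA services looks
roughly like this on an EL7 hosted-engine host -- a sketch, with the standard
service names and log path assumed:)

hosted-engine --vm-status                         # shows each host's score and agent state
systemctl restart ovirt-ha-broker ovirt-ha-agent  # the hosted-engine HA services
tail -n 200 /var/log/ovirt-hosted-engine-ha/agent.log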


Can you please attach ovirt-ha-agent logs?

Yes, here are the logs:
https://app.box.com/s/b4urjty8dsuj98n3ywygpk3oh5o7pbsh

--
Wee

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread David Gossage
On Thu, Jul 28, 2016 at 11:44 AM, Siavash Safi 
wrote:

> It seems that dir modes are wrong!?
> [root@node1 ~]# ls -ld /data/brick*/brick*
> drw---. 5 vdsm kvm 107 Jul 28 20:13 /data/brick1/brick1
> drw---. 5 vdsm kvm  82 Jul 27 23:08 /data/brick2/brick2
> [root@node2 ~]# ls -ld /data/brick*/brick*
> drwxr-xr-x. 5 vdsm kvm 107 Apr 26 19:33 /data/brick1/brick1
> drw---. 5 vdsm kvm  82 Jul 27 23:08 /data/brick2/brick2
> drw---. 5 vdsm kvm 107 Jul 28 20:13 /data/brick3/brick3
> [root@node3 ~]# ls -ld /data/brick*/brick*
> drw---. 5 vdsm kvm 107 Jul 28 20:10 /data/brick1/brick1
> drw---. 5 vdsm kvm  82 Jul 27 23:08 /data/brick2/brick2
>

That would probably do it.  The kvm group needs read access, and the lack of
x on the directories isn't great either.  Since they are the bricks, I'd think
you could maybe just chmod them to an appropriate 755.  I "think" gluster only
tracks the files under /data/brick1/brick1/*, not /data/brick1/brick1 itself.
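
(Something along these lines on each node -- a sketch only; double-check the
brick paths against your own layout before running anything:)

chmod 0755 /data/brick*/brick*
chown vdsm:kvm /data/brick*/brick*
ls -ld /data/brick*/brick*    # should now show drwxr-xr-x. ... vdsm kvm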


Is /data/brick3/brick3 the only "new" one?  I wonder if it could be a bug
from the brick move in some way.  Might be something worth posting to the
gluster list about.

>
> On Thu, Jul 28, 2016 at 9:06 PM Sahina Bose  wrote:
>
>>
>>
>> - Original Message -
>> > From: "Siavash Safi" 
>> > To: "Sahina Bose" 
>> > Cc: "David Gossage" , "users" <
>> users@ovirt.org>, "Nir Soffer" ,
>> > "Allon Mureinik" 
>> > Sent: Thursday, July 28, 2016 9:04:32 PM
>> > Subject: Re: [ovirt-users] Cannot find master domain
>> >
>> > Please check the attachment.
>>
>> Nothing out of place in the mount logs.
>>
>> Can you ensure the brick dir permissions are vdsm:kvm - even for the
>> brick that was replaced?
>>
>> >
>> > On Thu, Jul 28, 2016 at 7:46 PM Sahina Bose  wrote:
>> >
>> > >
>> > >
>> > > - Original Message -
>> > > > From: "Siavash Safi" 
>> > > > To: "Sahina Bose" 
>> > > > Cc: "David Gossage" , "users" <
>> > > users@ovirt.org>
>> > > > Sent: Thursday, July 28, 2016 8:35:18 PM
>> > > > Subject: Re: [ovirt-users] Cannot find master domain
>> > > >
>> > > > [root@node1 ~]# ls -ld /rhev/data-center/mnt/glusterSD/
>> > > > drwxr-xr-x. 2 vdsm kvm 6 Jul 28 19:28
>> /rhev/data-center/mnt/glusterSD/
>> > > > [root@node1 ~]# getfacl /rhev/data-center/mnt/glusterSD/
>> > > > getfacl: Removing leading '/' from absolute path names
>> > > > # file: rhev/data-center/mnt/glusterSD/
>> > > > # owner: vdsm
>> > > > # group: kvm
>> > > > user::rwx
>> > > > group::r-x
>> > > > other::r-x
>> > > >
>> > >
>> > >
>> > > The ACLs look correct to me. Adding Nir/Allon for insights.
>> > >
>> > > Can you attach the gluster mount logs from this host?
>> > >
>> > >
>> > > > And as I mentioned in another message, the directory is empty.
>> > > >
>> > > > On Thu, Jul 28, 2016 at 7:24 PM Sahina Bose 
>> wrote:
>> > > >
>> > > > > Error from vdsm log: Permission settings on the specified path do
>> not
>> > > > > allow access to the storage. Verify permission settings on the
>> > > specified
>> > > > > storage path.: 'path = /rhev/data-center/mnt/glusterSD/
>> 172.16.0.11:
>> > > _ovirt'
>> > > > >
>> > > > > I remember another thread about a similar issue - can you check
>> the ACL
>> > > > > settings on the storage path?
>> > > > >
>> > > > > - Original Message -
>> > > > > > From: "Siavash Safi" 
>> > > > > > To: "David Gossage" 
>> > > > > > Cc: "users" 
>> > > > > > Sent: Thursday, July 28, 2016 7:58:29 PM
>> > > > > > Subject: Re: [ovirt-users] Cannot find master domain
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > On Thu, Jul 28, 2016 at 6:29 PM David Gossage <
>> > > > > dgoss...@carouselchecks.com >
>> > > > > > wrote:
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi <
>> > > siavash.s...@gmail.com >
>> > > > > > wrote:
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > Hi,
>> > > > > >
>> > > > > > Issue: Cannot find master domain
>> > > > > > Changes applied before issue started to happen: replaced
>> > > > > > 172.16.0.12:/data/brick1/brick1 with 172.16.0.12:
>> > > /data/brick3/brick3,
>> > > > > did
>> > > > > > minor package upgrades for vdsm and glusterfs
>> > > > > >
>> > > > > > vdsm log: https://paste.fedoraproject.org/396842/
>> > > > > >
>> > > > > >
>> > > > > > Any errrors in glusters brick or server logs? The client
>> gluster logs
>> > > > > from
>> > > > > > ovirt?
>> > > > > > Brick errors:
>> > > > > > [2016-07-28 14:03:25.002396] E [MSGID: 113091]
>> > > [posix.c:178:posix_lookup]
>> > > > > > 0-ovirt-posix: null gfid for path (null)
>> > > > > > [2016-07-28 14:03:25.002430] E [MSGID: 113018]
>> > > [posix.c:196:posix_lookup]
>> > > > > > 0-ovirt-posix: lstat on null 

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread Siavash Safi
It seems that dir modes are wrong!?
[root@node1 ~]# ls -ld /data/brick*/brick*
drw---. 5 vdsm kvm 107 Jul 28 20:13 /data/brick1/brick1
drw---. 5 vdsm kvm  82 Jul 27 23:08 /data/brick2/brick2
[root@node2 ~]# ls -ld /data/brick*/brick*
drwxr-xr-x. 5 vdsm kvm 107 Apr 26 19:33 /data/brick1/brick1
drw---. 5 vdsm kvm  82 Jul 27 23:08 /data/brick2/brick2
drw---. 5 vdsm kvm 107 Jul 28 20:13 /data/brick3/brick3
[root@node3 ~]# ls -ld /data/brick*/brick*
drw---. 5 vdsm kvm 107 Jul 28 20:10 /data/brick1/brick1
drw---. 5 vdsm kvm  82 Jul 27 23:08 /data/brick2/brick2

On Thu, Jul 28, 2016 at 9:06 PM Sahina Bose  wrote:

>
>
> - Original Message -
> > From: "Siavash Safi" 
> > To: "Sahina Bose" 
> > Cc: "David Gossage" , "users" <
> users@ovirt.org>, "Nir Soffer" ,
> > "Allon Mureinik" 
> > Sent: Thursday, July 28, 2016 9:04:32 PM
> > Subject: Re: [ovirt-users] Cannot find master domain
> >
> > Please check the attachment.
>
> Nothing out of place in the mount logs.
>
> Can you ensure the brick dir permissions are vdsm:kvm - even for the brick
> that was replaced?
>
> >
> > On Thu, Jul 28, 2016 at 7:46 PM Sahina Bose  wrote:
> >
> > >
> > >
> > > - Original Message -
> > > > From: "Siavash Safi" 
> > > > To: "Sahina Bose" 
> > > > Cc: "David Gossage" , "users" <
> > > users@ovirt.org>
> > > > Sent: Thursday, July 28, 2016 8:35:18 PM
> > > > Subject: Re: [ovirt-users] Cannot find master domain
> > > >
> > > > [root@node1 ~]# ls -ld /rhev/data-center/mnt/glusterSD/
> > > > drwxr-xr-x. 2 vdsm kvm 6 Jul 28 19:28
> /rhev/data-center/mnt/glusterSD/
> > > > [root@node1 ~]# getfacl /rhev/data-center/mnt/glusterSD/
> > > > getfacl: Removing leading '/' from absolute path names
> > > > # file: rhev/data-center/mnt/glusterSD/
> > > > # owner: vdsm
> > > > # group: kvm
> > > > user::rwx
> > > > group::r-x
> > > > other::r-x
> > > >
> > >
> > >
> > > The ACLs look correct to me. Adding Nir/Allon for insights.
> > >
> > > Can you attach the gluster mount logs from this host?
> > >
> > >
> > > > And as I mentioned in another message, the directory is empty.
> > > >
> > > > On Thu, Jul 28, 2016 at 7:24 PM Sahina Bose 
> wrote:
> > > >
> > > > > Error from vdsm log: Permission settings on the specified path do
> not
> > > > > allow access to the storage. Verify permission settings on the
> > > specified
> > > > > storage path.: 'path = /rhev/data-center/mnt/glusterSD/172.16.0.11
> :
> > > _ovirt'
> > > > >
> > > > > I remember another thread about a similar issue - can you check
> the ACL
> > > > > settings on the storage path?
> > > > >
> > > > > - Original Message -
> > > > > > From: "Siavash Safi" 
> > > > > > To: "David Gossage" 
> > > > > > Cc: "users" 
> > > > > > Sent: Thursday, July 28, 2016 7:58:29 PM
> > > > > > Subject: Re: [ovirt-users] Cannot find master domain
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Thu, Jul 28, 2016 at 6:29 PM David Gossage <
> > > > > dgoss...@carouselchecks.com >
> > > > > > wrote:
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi <
> > > siavash.s...@gmail.com >
> > > > > > wrote:
> > > > > >
> > > > > >
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > Issue: Cannot find master domain
> > > > > > Changes applied before issue started to happen: replaced
> > > > > > 172.16.0.12:/data/brick1/brick1 with 172.16.0.12:
> > > /data/brick3/brick3,
> > > > > did
> > > > > > minor package upgrades for vdsm and glusterfs
> > > > > >
> > > > > > vdsm log: https://paste.fedoraproject.org/396842/
> > > > > >
> > > > > >
> > > > > > Any errrors in glusters brick or server logs? The client gluster
> logs
> > > > > from
> > > > > > ovirt?
> > > > > > Brick errors:
> > > > > > [2016-07-28 14:03:25.002396] E [MSGID: 113091]
> > > [posix.c:178:posix_lookup]
> > > > > > 0-ovirt-posix: null gfid for path (null)
> > > > > > [2016-07-28 14:03:25.002430] E [MSGID: 113018]
> > > [posix.c:196:posix_lookup]
> > > > > > 0-ovirt-posix: lstat on null failed [Invalid argument]
> > > > > > (Both repeated many times)
> > > > > >
> > > > > > Server errors:
> > > > > > None
> > > > > >
> > > > > > Client errors:
> > > > > > None
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > yum log: https://paste.fedoraproject.org/396854/
> > > > > >
> > > > > > What version of gluster was running prior to update to 3.7.13?
> > > > > > 3.7.11-1 from gluster.org repository(after update ovirt
> switched to
> > > > > centos
> > > > > > repository)
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > Did it create gluster mounts on server when attempting to start?
> > > > > > As I checked 

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread Sahina Bose


- Original Message -
> From: "Siavash Safi" 
> To: "Sahina Bose" 
> Cc: "David Gossage" , "users" , 
> "Nir Soffer" ,
> "Allon Mureinik" 
> Sent: Thursday, July 28, 2016 9:04:32 PM
> Subject: Re: [ovirt-users] Cannot find master domain
> 
> Please check the attachment.

Nothing out of place in the mount logs.

Can you ensure the brick dir permissions are vdsm:kvm - even for the brick that 
was replaced?

> 
> On Thu, Jul 28, 2016 at 7:46 PM Sahina Bose  wrote:
> 
> >
> >
> > - Original Message -
> > > From: "Siavash Safi" 
> > > To: "Sahina Bose" 
> > > Cc: "David Gossage" , "users" <
> > users@ovirt.org>
> > > Sent: Thursday, July 28, 2016 8:35:18 PM
> > > Subject: Re: [ovirt-users] Cannot find master domain
> > >
> > > [root@node1 ~]# ls -ld /rhev/data-center/mnt/glusterSD/
> > > drwxr-xr-x. 2 vdsm kvm 6 Jul 28 19:28 /rhev/data-center/mnt/glusterSD/
> > > [root@node1 ~]# getfacl /rhev/data-center/mnt/glusterSD/
> > > getfacl: Removing leading '/' from absolute path names
> > > # file: rhev/data-center/mnt/glusterSD/
> > > # owner: vdsm
> > > # group: kvm
> > > user::rwx
> > > group::r-x
> > > other::r-x
> > >
> >
> >
> > The ACLs look correct to me. Adding Nir/Allon for insights.
> >
> > Can you attach the gluster mount logs from this host?
> >
> >
> > > And as I mentioned in another message, the directory is empty.
> > >
> > > On Thu, Jul 28, 2016 at 7:24 PM Sahina Bose  wrote:
> > >
> > > > Error from vdsm log: Permission settings on the specified path do not
> > > > allow access to the storage. Verify permission settings on the
> > specified
> > > > storage path.: 'path = /rhev/data-center/mnt/glusterSD/172.16.0.11:
> > _ovirt'
> > > >
> > > > I remember another thread about a similar issue - can you check the ACL
> > > > settings on the storage path?
> > > >
> > > > - Original Message -
> > > > > From: "Siavash Safi" 
> > > > > To: "David Gossage" 
> > > > > Cc: "users" 
> > > > > Sent: Thursday, July 28, 2016 7:58:29 PM
> > > > > Subject: Re: [ovirt-users] Cannot find master domain
> > > > >
> > > > >
> > > > >
> > > > > On Thu, Jul 28, 2016 at 6:29 PM David Gossage <
> > > > dgoss...@carouselchecks.com >
> > > > > wrote:
> > > > >
> > > > >
> > > > >
> > > > > On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi <
> > siavash.s...@gmail.com >
> > > > > wrote:
> > > > >
> > > > >
> > > > >
> > > > > Hi,
> > > > >
> > > > > Issue: Cannot find master domain
> > > > > Changes applied before issue started to happen: replaced
> > > > > 172.16.0.12:/data/brick1/brick1 with 172.16.0.12:
> > /data/brick3/brick3,
> > > > did
> > > > > minor package upgrades for vdsm and glusterfs
> > > > >
> > > > > vdsm log: https://paste.fedoraproject.org/396842/
> > > > >
> > > > >
> > > > > Any errrors in glusters brick or server logs? The client gluster logs
> > > > from
> > > > > ovirt?
> > > > > Brick errors:
> > > > > [2016-07-28 14:03:25.002396] E [MSGID: 113091]
> > [posix.c:178:posix_lookup]
> > > > > 0-ovirt-posix: null gfid for path (null)
> > > > > [2016-07-28 14:03:25.002430] E [MSGID: 113018]
> > [posix.c:196:posix_lookup]
> > > > > 0-ovirt-posix: lstat on null failed [Invalid argument]
> > > > > (Both repeated many times)
> > > > >
> > > > > Server errors:
> > > > > None
> > > > >
> > > > > Client errors:
> > > > > None
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > yum log: https://paste.fedoraproject.org/396854/
> > > > >
> > > > > What version of gluster was running prior to update to 3.7.13?
> > > > > 3.7.11-1 from gluster.org repository(after update ovirt switched to
> > > > centos
> > > > > repository)
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > Did it create gluster mounts on server when attempting to start?
> > > > > As I checked the master domain is not mounted on any nodes.
> > > > > Restarting vdsmd generated following errors:
> > > > >
> > > > > jsonrpc.Executor/5::DEBUG::2016-07-28
> > > > > 18:50:57,661::fileUtils::143::Storage.fileUtils::(createdir) Creating
> > > > > directory: /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt mode:
> > None
> > > > > jsonrpc.Executor/5::DEBUG::2016-07-28
> > > > >
> > > >
> > 18:50:57,661::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
> > > > > Using bricks: ['172.16.0.11', '172.16.0.12', '172.16.0.13']
> > > > > jsonrpc.Executor/5::DEBUG::2016-07-28
> > > > > 18:50:57,662::mount::229::Storage.Misc.excCmd::(_runcmd)
> > /usr/bin/taskset
> > > > > --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/systemd-run --scope
> > > > > --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
> > > > > backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt
> > > > > 

[ovirt-users] made the mistake of configuring bond before 'hosted-engine --deploy'

2016-07-28 Thread Kenneth Bingham
Do you know of a way to force vdsmd to forget what it knows about the
network configuration on the hypervisor host? If I synchronize or make any
change through the engine, the host drops off the network because of an
invalid configuration pushed by some component of oVirt, which creates bond1
with BRIDGE=ovirtmgmt when the bridge should be assigned to bond1.$vid. If I
manually edit the ifcfg files or make changes with the ip command I can
restore network connectivity, but cannot make any changes in the engine. It's
not clear how to recover from this situation without reinstalling the OS.

I've concluded that I should have manually configured only the VLAN
sub-interface before deploying the hosted engine (which activates vdsmd), and
then configured the bonds and VLANs through the engine. I believe the
ovirtmgmt bridge became assigned to the bond instead of the VLAN interface
because I de-selected VLAN tagging in the logical network configuration in
the engine, and I think most of the problems I'm having are due to my needing
to manually edit the configuration files written by VDSM in order to bring
the host back on the network so that it can be managed by the engine.
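
(For reference, the layout I'm trying to end up with looks roughly like this
-- a sketch of the ifcfg files, with "100" standing in for $vid and the
bonding options and BOOTPROTO only as examples:)

# /etc/sysconfig/network-scripts/ifcfg-bond1      (the bond itself, no BRIDGE= here)
DEVICE=bond1
TYPE=Bond
BONDING_OPTS="mode=802.3ad miimon=100"
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-bond1.100  (VLAN sub-interface, carries the bridge)
DEVICE=bond1.100
VLAN=yes
BRIDGE=ovirtmgmt
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt  (the management bridge)
DEVICE=ovirtmgmt
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes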

# node 4.0.1
# yum list installed |grep ovirt
cockpit-ovirt-dashboard.noarch 0.10.5-1.0.0.el7.centos
 installed
libgovirt.x86_64   0.3.3-1.el7_2.1
 installed
ovirt-engine-appliance.noarch  4.0-20160718.1.el7.centos
 @ovirt-4.0
ovirt-engine-sdk-python.noarch 3.6.7.0-1.el7.centos
installed
ovirt-host-deploy.noarch   1.5.1-1.el7.centos
installed
ovirt-hosted-engine-ha.noarch  2.0.1-1.el7.centos
installed
ovirt-hosted-engine-setup.noarch   2.0.1-1.el7.centos
installed
ovirt-imageio-common.noarch
 0.3.0-0.201606191345.git9f3d6d4.el7.centos
ovirt-imageio-daemon.noarch
 0.3.0-0.201606191345.git9f3d6d4.el7.centos
ovirt-node-ng-image-update-placeholder.noarch
ovirt-release-host-node.noarch 4.0.1-1.el7
 installed
ovirt-release40-pre.noarch 4.0.1-1
 installed
ovirt-setup-lib.noarch 1.0.2-1.el7.centos
installed
ovirt-vmconsole.noarch 1.0.3-1.el7.centos
installed
ovirt-vmconsole-host.noarch1.0.3-1.el7.centos
installed

# hosted engine appliance
# yum list installed|grep ovirt
ebay-cors-filter.noarch 1.0.1-3.el7
 @centos-ovirt40-candidate
libtomcrypt.x86_64  1.17-23.el7
 @ovirt-4.0-epel
libtommath.x86_64   0.42.0-4.el7
@ovirt-4.0-epel
novnc.noarch0.5.1-2.el7
 @centos-ovirt40-candidate
otopi.noarch1.5.1-1.el7.centos   @ovirt-4.0

otopi-java.noarch   1.5.1-1.el7.centos   @ovirt-4.0

ovirt-engine.noarch 4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-backend.noarch 4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-cli.noarch 3.6.8.0-1.el7.centos @ovirt-4.0

ovirt-engine-dashboard.noarch   1.0.0-0.2.20160610git5d210ea.el7.centos
 @ovirt-4.0

ovirt-engine-dbscripts.noarch   4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-dwh.noarch 4.0.1-1.el7.centos   @ovirt-4.0

ovirt-engine-dwh-setup.noarch   4.0.1-1.el7.centos   @ovirt-4.0

ovirt-engine-extension-aaa-jdbc.noarch
1.1.0-1.el7  @ovirt-4.0

ovirt-engine-extensions-api-impl.noarch
4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-lib.noarch 4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-restapi.noarch 4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-sdk-python.noarch  3.6.7.0-1.el7.centos @ovirt-4.0

ovirt-engine-setup.noarch   4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-setup-base.noarch  4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-setup-plugin-ovirt-engine.noarch
4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-setup-plugin-ovirt-engine-common.noarch
4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch
4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-setup-plugin-websocket-proxy.noarch
4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-tools.noarch   4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-tools-backup.noarch4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-userportal.noarch  4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-vmconsole-proxy-helper.noarch
4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-webadmin-portal.noarch 4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-websocket-proxy.noarch 4.0.1.1-1.el7.centos @ovirt-4.0

ovirt-engine-wildfly.x86_64 10.0.0-1.el7   

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread Siavash Safi
Yes, the dir is missing on all nodes. I only created it on node1 (node2 &
node3 were put in maintenance mode manually).

Yes, manual mount works fine:

[root@node1 ~]# /usr/bin/mount -t glusterfs -o
backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt /mnt
[root@node1 ~]# ls -l /mnt/
total 4
drwxr-xr-x. 5 vdsm kvm 4096 Apr 26 19:34
4697fbde-45fb-4f91-ac4c-5516bc59f683
-rwxr-xr-x. 1 vdsm kvm0 Jul 27 23:05 __DIRECT_IO_TEST__
[root@node1 ~]# touch /mnt/test
[root@node1 ~]# ls -l /mnt/
total 4
drwxr-xr-x. 5 vdsm kvm  4096 Apr 26 19:34
4697fbde-45fb-4f91-ac4c-5516bc59f683
-rwxr-xr-x. 1 vdsm kvm 0 Jul 27 23:05 __DIRECT_IO_TEST__
-rw-r--r--. 1 root root0 Jul 28 20:10 test
[root@node1 ~]# chown vdsm:kvm /mnt/test
[root@node1 ~]# ls -l /mnt/
total 4
drwxr-xr-x. 5 vdsm kvm 4096 Apr 26 19:34
4697fbde-45fb-4f91-ac4c-5516bc59f683
-rwxr-xr-x. 1 vdsm kvm0 Jul 27 23:05 __DIRECT_IO_TEST__
-rw-r--r--. 1 vdsm kvm0 Jul 28 20:10 test
[root@node1 ~]# echo foo > /mnt/test
[root@node1 ~]# cat /mnt/test
foo


On Thu, Jul 28, 2016 at 8:06 PM David Gossage 
wrote:

> On Thu, Jul 28, 2016 at 10:28 AM, Siavash Safi 
> wrote:
>
>> I created the directory with correct permissions:
>> drwxr-xr-x. 2 vdsm kvm 6 Jul 28 19:51
>> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt/
>>
>> It was removed after I tried to activate the storage from web.
>>
>> Is dir missing on all 3 oVirt nodes?  Did you create on all 3?
>
> When you did test mount with oVirts mount options did permissions on files
> after mount look proper?  Can you read/write to mount?
>
>
>> Engine displays the master storage as inactive:
>> [image: oVirt_Engine_Web_Administration.png]
>>
>>
>> On Thu, Jul 28, 2016 at 7:40 PM David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>> On Thu, Jul 28, 2016 at 10:00 AM, Siavash Safi 
>>> wrote:
>>>


 On Thu, Jul 28, 2016 at 7:19 PM David Gossage <
 dgoss...@carouselchecks.com> wrote:

> On Thu, Jul 28, 2016 at 9:38 AM, Siavash Safi 
> wrote:
>
>> file system: xfs
>> features.shard: off
>>
>
> Ok was just seeing if matched up to the issues latest 3.7.x releases
> have with zfs and sharding but doesn't look like your issue.
>
>  In your logs I see it mounts with thee commands.  What happens if you
> use same to a test dir?
>
>  /usr/bin/mount -t glusterfs -o 
> backup-volfile-servers=172.16.0.12:172.16.0.13
> 172.16.0.11:/ovirt /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt
>

 It mounts successfully:
 [root@node1 ~]# /usr/bin/mount -t glusterfs -o
 backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt /mnt
 [root@node1 ~]# ls /mnt/
 4697fbde-45fb-4f91-ac4c-5516bc59f683  __DIRECT_IO_TEST__


> It then umounts it and complains short while later of permissions.
>
> StorageServerAccessPermissionError: Permission settings on the
> specified path do not allow access to the storage. Verify permission
> settings on the specified storage path.: 'path =
> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
>
> Are the permissions of dirs to
> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt as expected?
>

 /rhev/data-center/mnt/glusterSD/ is empty. Maybe it remove the
 directory after failure to cleanup?

>>>
>>> Maybe though I don't recall it ever being deleted unless you maybe
>>> destroy detach storage. What if you create that directory and permissions
>>> appropriately on any node missing then try and activate storage?
>>>
>>> In engine is it still displaying the master storage domain?
>>>
>>>
 How about on the bricks anything out of place?
>

 I didn't notice anything.


> Is gluster still using same options as before?  could it have reset
> the user and group to not be 36?
>

 All options seem to be correct, to make sure I ran "Optimize for Virt
 Store" from web.

 Volume Name: ovirt
 Type: Distributed-Replicate
 Volume ID: b224d9bc-d120-4fe1-b233-09089e5ca0b2
 Status: Started
 Number of Bricks: 2 x 3 = 6
 Transport-type: tcp
 Bricks:
 Brick1: 172.16.0.11:/data/brick1/brick1
 Brick2: 172.16.0.12:/data/brick3/brick3
 Brick3: 172.16.0.13:/data/brick1/brick1
 Brick4: 172.16.0.11:/data/brick2/brick2
 Brick5: 172.16.0.12:/data/brick2/brick2
 Brick6: 172.16.0.13:/data/brick2/brick2
 Options Reconfigured:
 performance.readdir-ahead: on
 nfs.disable: off
 user.cifs: enable
 auth.allow: *
 performance.quick-read: off
 performance.read-ahead: off
 performance.io-cache: off
 performance.stat-prefetch: off
 cluster.eager-lock: enable
 network.remote-dio: enable
 cluster.quorum-type: auto
 cluster.server-quorum-type: server
 storage.owner-uid: 36
 

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread David Gossage
On Thu, Jul 28, 2016 at 10:28 AM, Siavash Safi 
wrote:

> I created the directory with correct permissions:
> drwxr-xr-x. 2 vdsm kvm 6 Jul 28 19:51
> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt/
>
> It was removed after I tried to activate the storage from web.
>
> Is dir missing on all 3 oVirt nodes?  Did you create on all 3?

When you did the test mount with oVirt's mount options, did the permissions
on the files after mounting look proper?  Can you read/write to the mount?
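
(A quick way to check that -- a sketch, writing through the test mount as the
vdsm user; the file name is just an example, and the direct-I/O write is in
the spirit of the __DIRECT_IO_TEST__ file you can see in the listing:)

sudo -u vdsm touch /mnt/rwtest
sudo -u vdsm dd if=/dev/zero of=/mnt/rwtest bs=4096 count=1 oflag=direct
sudo -u vdsm cat /mnt/rwtest > /dev/null
rm -f /mnt/rwtest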


> Engine displays the master storage as inactive:
> [image: oVirt_Engine_Web_Administration.png]
>
>
> On Thu, Jul 28, 2016 at 7:40 PM David Gossage 
> wrote:
>
>> On Thu, Jul 28, 2016 at 10:00 AM, Siavash Safi 
>> wrote:
>>
>>>
>>>
>>> On Thu, Jul 28, 2016 at 7:19 PM David Gossage <
>>> dgoss...@carouselchecks.com> wrote:
>>>
 On Thu, Jul 28, 2016 at 9:38 AM, Siavash Safi 
 wrote:

> file system: xfs
> features.shard: off
>

 Ok was just seeing if matched up to the issues latest 3.7.x releases
 have with zfs and sharding but doesn't look like your issue.

  In your logs I see it mounts with thee commands.  What happens if you
 use same to a test dir?

  /usr/bin/mount -t glusterfs -o 
 backup-volfile-servers=172.16.0.12:172.16.0.13
 172.16.0.11:/ovirt /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt

>>>
>>> It mounts successfully:
>>> [root@node1 ~]# /usr/bin/mount -t glusterfs -o
>>> backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt /mnt
>>> [root@node1 ~]# ls /mnt/
>>> 4697fbde-45fb-4f91-ac4c-5516bc59f683  __DIRECT_IO_TEST__
>>>
>>>
 It then umounts it and complains short while later of permissions.

 StorageServerAccessPermissionError: Permission settings on the
 specified path do not allow access to the storage. Verify permission
 settings on the specified storage path.: 'path =
 /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'

 Are the permissions of dirs to
 /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt as expected?

>>>
>>> /rhev/data-center/mnt/glusterSD/ is empty. Maybe it remove the
>>> directory after failure to cleanup?
>>>
>>
>> Maybe though I don't recall it ever being deleted unless you maybe
>> destroy detach storage. What if you create that directory and permissions
>> appropriately on any node missing then try and activate storage?
>>
>> In engine is it still displaying the master storage domain?
>>
>>
>>> How about on the bricks anything out of place?

>>>
>>> I didn't notice anything.
>>>
>>>
 Is gluster still using same options as before?  could it have reset the
 user and group to not be 36?

>>>
>>> All options seem to be correct, to make sure I ran "Optimize for Virt
>>> Store" from web.
>>>
>>> Volume Name: ovirt
>>> Type: Distributed-Replicate
>>> Volume ID: b224d9bc-d120-4fe1-b233-09089e5ca0b2
>>> Status: Started
>>> Number of Bricks: 2 x 3 = 6
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 172.16.0.11:/data/brick1/brick1
>>> Brick2: 172.16.0.12:/data/brick3/brick3
>>> Brick3: 172.16.0.13:/data/brick1/brick1
>>> Brick4: 172.16.0.11:/data/brick2/brick2
>>> Brick5: 172.16.0.12:/data/brick2/brick2
>>> Brick6: 172.16.0.13:/data/brick2/brick2
>>> Options Reconfigured:
>>> performance.readdir-ahead: on
>>> nfs.disable: off
>>> user.cifs: enable
>>> auth.allow: *
>>> performance.quick-read: off
>>> performance.read-ahead: off
>>> performance.io-cache: off
>>> performance.stat-prefetch: off
>>> cluster.eager-lock: enable
>>> network.remote-dio: enable
>>> cluster.quorum-type: auto
>>> cluster.server-quorum-type: server
>>> storage.owner-uid: 36
>>> storage.owner-gid: 36
>>> server.allow-insecure: on
>>> network.ping-timeout: 10
>>>
>>>
> On Thu, Jul 28, 2016 at 7:03 PM David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Thu, Jul 28, 2016 at 9:28 AM, Siavash Safi > > wrote:
>>
>>>
>>>
>>> On Thu, Jul 28, 2016 at 6:29 PM David Gossage <
>>> dgoss...@carouselchecks.com> wrote:
>>>
 On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi <
 siavash.s...@gmail.com> wrote:

> Hi,
>
> Issue: Cannot find master domain
> Changes applied before issue started to happen: replaced
> 172.16.0.12:/data/brick1/brick1 with 172.16.0.12:/data/brick3/brick3,
> did minor package upgrades for vdsm and glusterfs
>
> vdsm log: https://paste.fedoraproject.org/396842/
>


 Any errrors in glusters brick or server logs?  The client gluster
 logs from ovirt?

>>> Brick errors:
>>> [2016-07-28 14:03:25.002396] E [MSGID: 113091]
>>> [posix.c:178:posix_lookup] 0-ovirt-posix: null gfid for path (null)
>>> [2016-07-28 14:03:25.002430] E [MSGID: 113018]
>>> [posix.c:196:posix_lookup] 

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread Siavash Safi
Please check the attachment.

On Thu, Jul 28, 2016 at 7:46 PM Sahina Bose  wrote:

>
>
> - Original Message -
> > From: "Siavash Safi" 
> > To: "Sahina Bose" 
> > Cc: "David Gossage" , "users" <
> users@ovirt.org>
> > Sent: Thursday, July 28, 2016 8:35:18 PM
> > Subject: Re: [ovirt-users] Cannot find master domain
> >
> > [root@node1 ~]# ls -ld /rhev/data-center/mnt/glusterSD/
> > drwxr-xr-x. 2 vdsm kvm 6 Jul 28 19:28 /rhev/data-center/mnt/glusterSD/
> > [root@node1 ~]# getfacl /rhev/data-center/mnt/glusterSD/
> > getfacl: Removing leading '/' from absolute path names
> > # file: rhev/data-center/mnt/glusterSD/
> > # owner: vdsm
> > # group: kvm
> > user::rwx
> > group::r-x
> > other::r-x
> >
>
>
> The ACLs look correct to me. Adding Nir/Allon for insights.
>
> Can you attach the gluster mount logs from this host?
>
>
> > And as I mentioned in another message, the directory is empty.
> >
> > On Thu, Jul 28, 2016 at 7:24 PM Sahina Bose  wrote:
> >
> > > Error from vdsm log: Permission settings on the specified path do not
> > > allow access to the storage. Verify permission settings on the
> specified
> > > storage path.: 'path = /rhev/data-center/mnt/glusterSD/172.16.0.11:
> _ovirt'
> > >
> > > I remember another thread about a similar issue - can you check the ACL
> > > settings on the storage path?
> > >
> > > - Original Message -
> > > > From: "Siavash Safi" 
> > > > To: "David Gossage" 
> > > > Cc: "users" 
> > > > Sent: Thursday, July 28, 2016 7:58:29 PM
> > > > Subject: Re: [ovirt-users] Cannot find master domain
> > > >
> > > >
> > > >
> > > > On Thu, Jul 28, 2016 at 6:29 PM David Gossage <
> > > dgoss...@carouselchecks.com >
> > > > wrote:
> > > >
> > > >
> > > >
> > > > On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi <
> siavash.s...@gmail.com >
> > > > wrote:
> > > >
> > > >
> > > >
> > > > Hi,
> > > >
> > > > Issue: Cannot find master domain
> > > > Changes applied before issue started to happen: replaced
> > > > 172.16.0.12:/data/brick1/brick1 with 172.16.0.12:
> /data/brick3/brick3,
> > > did
> > > > minor package upgrades for vdsm and glusterfs
> > > >
> > > > vdsm log: https://paste.fedoraproject.org/396842/
> > > >
> > > >
> > > > Any errrors in glusters brick or server logs? The client gluster logs
> > > from
> > > > ovirt?
> > > > Brick errors:
> > > > [2016-07-28 14:03:25.002396] E [MSGID: 113091]
> [posix.c:178:posix_lookup]
> > > > 0-ovirt-posix: null gfid for path (null)
> > > > [2016-07-28 14:03:25.002430] E [MSGID: 113018]
> [posix.c:196:posix_lookup]
> > > > 0-ovirt-posix: lstat on null failed [Invalid argument]
> > > > (Both repeated many times)
> > > >
> > > > Server errors:
> > > > None
> > > >
> > > > Client errors:
> > > > None
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > yum log: https://paste.fedoraproject.org/396854/
> > > >
> > > > What version of gluster was running prior to update to 3.7.13?
> > > > 3.7.11-1 from gluster.org repository(after update ovirt switched to
> > > centos
> > > > repository)
> > > >
> > > >
> > > >
> > > >
> > > > Did it create gluster mounts on server when attempting to start?
> > > > As I checked the master domain is not mounted on any nodes.
> > > > Restarting vdsmd generated following errors:
> > > >
> > > > jsonrpc.Executor/5::DEBUG::2016-07-28
> > > > 18:50:57,661::fileUtils::143::Storage.fileUtils::(createdir) Creating
> > > > directory: /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt mode:
> None
> > > > jsonrpc.Executor/5::DEBUG::2016-07-28
> > > >
> > >
> 18:50:57,661::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
> > > > Using bricks: ['172.16.0.11', '172.16.0.12', '172.16.0.13']
> > > > jsonrpc.Executor/5::DEBUG::2016-07-28
> > > > 18:50:57,662::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset
> > > > --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/systemd-run --scope
> > > > --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
> > > > backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt
> > > > /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
> > > > jsonrpc.Executor/5::DEBUG::2016-07-28
> > > > 18:50:57,789::__init__::318::IOProcessClient::(_run) Starting
> > > IOProcess...
> > > > jsonrpc.Executor/5::DEBUG::2016-07-28
> > > > 18:50:57,802::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset
> > > > --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/umount -f -l
> > > > /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
> > > > jsonrpc.Executor/5::ERROR::2016-07-28
> > > > 18:50:57,813::hsm::2473::Storage.HSM::(connectStorageServer) Could
> not
> > > > connect to storageServer
> > > > Traceback (most recent call last):
> > > > File "/usr/share/vdsm/storage/hsm.py", line 2470, in
> connectStorageServer
> > > > conObj.connect()
> > > > 

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread Siavash Safi
I created the directory with correct permissions:
drwxr-xr-x. 2 vdsm kvm 6 Jul 28 19:51
/rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt/

It was removed after I tried to activate the storage from the web UI.

The engine displays the master storage as inactive:
[image: oVirt_Engine_Web_Administration.png]


On Thu, Jul 28, 2016 at 7:40 PM David Gossage 
wrote:

> On Thu, Jul 28, 2016 at 10:00 AM, Siavash Safi 
> wrote:
>
>>
>>
>> On Thu, Jul 28, 2016 at 7:19 PM David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>> On Thu, Jul 28, 2016 at 9:38 AM, Siavash Safi 
>>> wrote:
>>>
 file system: xfs
 features.shard: off

>>>
>>> Ok was just seeing if matched up to the issues latest 3.7.x releases
>>> have with zfs and sharding but doesn't look like your issue.
>>>
>>>  In your logs I see it mounts with thee commands.  What happens if you
>>> use same to a test dir?
>>>
>>>  /usr/bin/mount -t glusterfs -o 
>>> backup-volfile-servers=172.16.0.12:172.16.0.13
>>> 172.16.0.11:/ovirt /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt
>>>
>>
>> It mounts successfully:
>> [root@node1 ~]# /usr/bin/mount -t glusterfs -o
>> backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt /mnt
>> [root@node1 ~]# ls /mnt/
>> 4697fbde-45fb-4f91-ac4c-5516bc59f683  __DIRECT_IO_TEST__
>>
>>
>>> It then umounts it and complains short while later of permissions.
>>>
>>> StorageServerAccessPermissionError: Permission settings on the specified
>>> path do not allow access to the storage. Verify permission settings on the
>>> specified storage path.: 'path =
>>> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
>>>
>>> Are the permissions of dirs to
>>> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt as expected?
>>>
>>
>> /rhev/data-center/mnt/glusterSD/ is empty. Maybe it remove the directory
>> after failure to cleanup?
>>
>
> Maybe though I don't recall it ever being deleted unless you maybe destroy
> detach storage. What if you create that directory and permissions
> appropriately on any node missing then try and activate storage?
>
> In engine is it still displaying the master storage domain?
>
>
>> How about on the bricks anything out of place?
>>>
>>
>> I didn't notice anything.
>>
>>
>>> Is gluster still using same options as before?  could it have reset the
>>> user and group to not be 36?
>>>
>>
>> All options seem to be correct, to make sure I ran "Optimize for Virt
>> Store" from web.
>>
>> Volume Name: ovirt
>> Type: Distributed-Replicate
>> Volume ID: b224d9bc-d120-4fe1-b233-09089e5ca0b2
>> Status: Started
>> Number of Bricks: 2 x 3 = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: 172.16.0.11:/data/brick1/brick1
>> Brick2: 172.16.0.12:/data/brick3/brick3
>> Brick3: 172.16.0.13:/data/brick1/brick1
>> Brick4: 172.16.0.11:/data/brick2/brick2
>> Brick5: 172.16.0.12:/data/brick2/brick2
>> Brick6: 172.16.0.13:/data/brick2/brick2
>> Options Reconfigured:
>> performance.readdir-ahead: on
>> nfs.disable: off
>> user.cifs: enable
>> auth.allow: *
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> cluster.eager-lock: enable
>> network.remote-dio: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> server.allow-insecure: on
>> network.ping-timeout: 10
>>
>>
 On Thu, Jul 28, 2016 at 7:03 PM David Gossage <
 dgoss...@carouselchecks.com> wrote:

> On Thu, Jul 28, 2016 at 9:28 AM, Siavash Safi 
> wrote:
>
>>
>>
>> On Thu, Jul 28, 2016 at 6:29 PM David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>> On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi <
>>> siavash.s...@gmail.com> wrote:
>>>
 Hi,

 Issue: Cannot find master domain
 Changes applied before issue started to happen: replaced
 172.16.0.12:/data/brick1/brick1 with 172.16.0.12:/data/brick3/brick3,
 did minor package upgrades for vdsm and glusterfs

 vdsm log: https://paste.fedoraproject.org/396842/

>>>
>>>
>>> Any errrors in glusters brick or server logs?  The client gluster
>>> logs from ovirt?
>>>
>> Brick errors:
>> [2016-07-28 14:03:25.002396] E [MSGID: 113091]
>> [posix.c:178:posix_lookup] 0-ovirt-posix: null gfid for path (null)
>> [2016-07-28 14:03:25.002430] E [MSGID: 113018]
>> [posix.c:196:posix_lookup] 0-ovirt-posix: lstat on null failed [Invalid
>> argument]
>> (Both repeated many times)
>>
>> Server errors:
>> None
>>
>> Client errors:
>> None
>>
>>
>>>
 yum log: https://paste.fedoraproject.org/396854/

>>>
>>> What version of gluster was running prior to update to 3.7.13?
>>>
>> 3.7.11-1 from gluster.org repository(after update ovirt switched to
>> 

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread Sahina Bose


- Original Message -
> From: "Siavash Safi" 
> To: "Sahina Bose" 
> Cc: "David Gossage" , "users" 
> Sent: Thursday, July 28, 2016 8:35:18 PM
> Subject: Re: [ovirt-users] Cannot find master domain
> 
> [root@node1 ~]# ls -ld /rhev/data-center/mnt/glusterSD/
> drwxr-xr-x. 2 vdsm kvm 6 Jul 28 19:28 /rhev/data-center/mnt/glusterSD/
> [root@node1 ~]# getfacl /rhev/data-center/mnt/glusterSD/
> getfacl: Removing leading '/' from absolute path names
> # file: rhev/data-center/mnt/glusterSD/
> # owner: vdsm
> # group: kvm
> user::rwx
> group::r-x
> other::r-x
> 


The ACLs look correct to me. Adding Nir/Allon for insights.

Can you attach the gluster mount logs from this host?
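
(They usually land under /var/log/glusterfs/ on the host, named after the
mount point -- a sketch, assuming the default log locations:)

ls /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log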


> And as I mentioned in another message, the directory is empty.
> 
> On Thu, Jul 28, 2016 at 7:24 PM Sahina Bose  wrote:
> 
> > Error from vdsm log: Permission settings on the specified path do not
> > allow access to the storage. Verify permission settings on the specified
> > storage path.: 'path = /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
> >
> > I remember another thread about a similar issue - can you check the ACL
> > settings on the storage path?
> >
> > - Original Message -
> > > From: "Siavash Safi" 
> > > To: "David Gossage" 
> > > Cc: "users" 
> > > Sent: Thursday, July 28, 2016 7:58:29 PM
> > > Subject: Re: [ovirt-users] Cannot find master domain
> > >
> > >
> > >
> > > On Thu, Jul 28, 2016 at 6:29 PM David Gossage <
> > dgoss...@carouselchecks.com >
> > > wrote:
> > >
> > >
> > >
> > > On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi < siavash.s...@gmail.com >
> > > wrote:
> > >
> > >
> > >
> > > Hi,
> > >
> > > Issue: Cannot find master domain
> > > Changes applied before issue started to happen: replaced
> > > 172.16.0.12:/data/brick1/brick1 with 172.16.0.12:/data/brick3/brick3,
> > did
> > > minor package upgrades for vdsm and glusterfs
> > >
> > > vdsm log: https://paste.fedoraproject.org/396842/
> > >
> > >
> > > Any errrors in glusters brick or server logs? The client gluster logs
> > from
> > > ovirt?
> > > Brick errors:
> > > [2016-07-28 14:03:25.002396] E [MSGID: 113091] [posix.c:178:posix_lookup]
> > > 0-ovirt-posix: null gfid for path (null)
> > > [2016-07-28 14:03:25.002430] E [MSGID: 113018] [posix.c:196:posix_lookup]
> > > 0-ovirt-posix: lstat on null failed [Invalid argument]
> > > (Both repeated many times)
> > >
> > > Server errors:
> > > None
> > >
> > > Client errors:
> > > None
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > yum log: https://paste.fedoraproject.org/396854/
> > >
> > > What version of gluster was running prior to update to 3.7.13?
> > > 3.7.11-1 from gluster.org repository(after update ovirt switched to
> > centos
> > > repository)
> > >
> > >
> > >
> > >
> > > Did it create gluster mounts on server when attempting to start?
> > > As I checked the master domain is not mounted on any nodes.
> > > Restarting vdsmd generated following errors:
> > >
> > > jsonrpc.Executor/5::DEBUG::2016-07-28
> > > 18:50:57,661::fileUtils::143::Storage.fileUtils::(createdir) Creating
> > > directory: /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt mode: None
> > > jsonrpc.Executor/5::DEBUG::2016-07-28
> > >
> > 18:50:57,661::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
> > > Using bricks: ['172.16.0.11', '172.16.0.12', '172.16.0.13']
> > > jsonrpc.Executor/5::DEBUG::2016-07-28
> > > 18:50:57,662::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
> > > --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/systemd-run --scope
> > > --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
> > > backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt
> > > /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
> > > jsonrpc.Executor/5::DEBUG::2016-07-28
> > > 18:50:57,789::__init__::318::IOProcessClient::(_run) Starting
> > IOProcess...
> > > jsonrpc.Executor/5::DEBUG::2016-07-28
> > > 18:50:57,802::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
> > > --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/umount -f -l
> > > /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
> > > jsonrpc.Executor/5::ERROR::2016-07-28
> > > 18:50:57,813::hsm::2473::Storage.HSM::(connectStorageServer) Could not
> > > connect to storageServer
> > > Traceback (most recent call last):
> > > File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
> > > conObj.connect()
> > > File "/usr/share/vdsm/storage/storageServer.py", line 248, in connect
> > > six.reraise(t, v, tb)
> > > File "/usr/share/vdsm/storage/storageServer.py", line 241, in connect
> > > self.getMountObj().getRecord().fs_file)
> > > File "/usr/share/vdsm/storage/fileSD.py", line 79, in validateDirAccess
> > > raise se.StorageServerAccessPermissionError(dirPath)
> > > 

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread David Gossage
On Thu, Jul 28, 2016 at 10:00 AM, Siavash Safi 
wrote:

>
>
> On Thu, Jul 28, 2016 at 7:19 PM David Gossage 
> wrote:
>
>> On Thu, Jul 28, 2016 at 9:38 AM, Siavash Safi 
>> wrote:
>>
>>> file system: xfs
>>> features.shard: off
>>>
>>
>> Ok was just seeing if matched up to the issues latest 3.7.x releases have
>> with zfs and sharding but doesn't look like your issue.
>>
>>  In your logs I see it mounts with thee commands.  What happens if you
>> use same to a test dir?
>>
>>  /usr/bin/mount -t glusterfs -o 
>> backup-volfile-servers=172.16.0.12:172.16.0.13
>> 172.16.0.11:/ovirt /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt
>>
>
> It mounts successfully:
> [root@node1 ~]# /usr/bin/mount -t glusterfs -o
> backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt /mnt
> [root@node1 ~]# ls /mnt/
> 4697fbde-45fb-4f91-ac4c-5516bc59f683  __DIRECT_IO_TEST__
>
>
>> It then umounts it and complains short while later of permissions.
>>
>> StorageServerAccessPermissionError: Permission settings on the specified
>> path do not allow access to the storage. Verify permission settings on the
>> specified storage path.: 'path =
>> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
>>
>> Are the permissions of dirs to
>> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt as expected?
>>
>
> /rhev/data-center/mnt/glusterSD/ is empty. Maybe it remove the directory
> after failure to cleanup?
>

Maybe, though I don't recall it ever being deleted unless you destroy or
detach the storage. What if you create that directory and permissions
appropriately on any node where it's missing, then try to activate the
storage?
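
(Roughly, on any node where it's missing -- a sketch; vdsm normally creates
this directory itself when connecting the storage:)

mkdir -p '/rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
chown vdsm:kvm '/rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
chmod 0755 '/rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'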

In the engine, is it still displaying the master storage domain?


> How about on the bricks anything out of place?
>>
>
> I didn't notice anything.
>
>
>> Is gluster still using same options as before?  could it have reset the
>> user and group to not be 36?
>>
>
> All options seem to be correct, to make sure I ran "Optimize for Virt
> Store" from web.
>
> Volume Name: ovirt
> Type: Distributed-Replicate
> Volume ID: b224d9bc-d120-4fe1-b233-09089e5ca0b2
> Status: Started
> Number of Bricks: 2 x 3 = 6
> Transport-type: tcp
> Bricks:
> Brick1: 172.16.0.11:/data/brick1/brick1
> Brick2: 172.16.0.12:/data/brick3/brick3
> Brick3: 172.16.0.13:/data/brick1/brick1
> Brick4: 172.16.0.11:/data/brick2/brick2
> Brick5: 172.16.0.12:/data/brick2/brick2
> Brick6: 172.16.0.13:/data/brick2/brick2
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: off
> user.cifs: enable
> auth.allow: *
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> server.allow-insecure: on
> network.ping-timeout: 10
>
>
>>> On Thu, Jul 28, 2016 at 7:03 PM David Gossage <
>>> dgoss...@carouselchecks.com> wrote:
>>>
 On Thu, Jul 28, 2016 at 9:28 AM, Siavash Safi 
 wrote:

>
>
> On Thu, Jul 28, 2016 at 6:29 PM David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi > > wrote:
>>
>>> Hi,
>>>
>>> Issue: Cannot find master domain
>>> Changes applied before issue started to happen: replaced
>>> 172.16.0.12:/data/brick1/brick1 with 172.16.0.12:/data/brick3/brick3,
>>> did minor package upgrades for vdsm and glusterfs
>>>
>>> vdsm log: https://paste.fedoraproject.org/396842/
>>>
>>
>>
>> Any errrors in glusters brick or server logs?  The client gluster
>> logs from ovirt?
>>
> Brick errors:
> [2016-07-28 14:03:25.002396] E [MSGID: 113091]
> [posix.c:178:posix_lookup] 0-ovirt-posix: null gfid for path (null)
> [2016-07-28 14:03:25.002430] E [MSGID: 113018]
> [posix.c:196:posix_lookup] 0-ovirt-posix: lstat on null failed [Invalid
> argument]
> (Both repeated many times)
>
> Server errors:
> None
>
> Client errors:
> None
>
>
>>
>>> yum log: https://paste.fedoraproject.org/396854/
>>>
>>
>> What version of gluster was running prior to update to 3.7.13?
>>
> 3.7.11-1 from gluster.org repository(after update ovirt switched to
> centos repository)
>

 What file system do your bricks reside on and do you have sharding
 enabled?


>> Did it create gluster mounts on server when attempting to start?
>>
> As I checked the master domain is not mounted on any nodes.
> Restarting vdsmd generated following errors:
>
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,661::fileUtils::143::Storage.fileUtils::(createdir) Creating
> directory: /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt mode:
> None
> 

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread Siavash Safi
[root@node1 ~]# ls -ld /rhev/data-center/mnt/glusterSD/
drwxr-xr-x. 2 vdsm kvm 6 Jul 28 19:28 /rhev/data-center/mnt/glusterSD/
[root@node1 ~]# getfacl /rhev/data-center/mnt/glusterSD/
getfacl: Removing leading '/' from absolute path names
# file: rhev/data-center/mnt/glusterSD/
# owner: vdsm
# group: kvm
user::rwx
group::r-x
other::r-x

And as I mentioned in another message, the directory is empty.

On Thu, Jul 28, 2016 at 7:24 PM Sahina Bose  wrote:

> Error from vdsm log: Permission settings on the specified path do not
> allow access to the storage. Verify permission settings on the specified
> storage path.: 'path = /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
>
> I remember another thread about a similar issue - can you check the ACL
> settings on the storage path?
>
> - Original Message -
> > From: "Siavash Safi" 
> > To: "David Gossage" 
> > Cc: "users" 
> > Sent: Thursday, July 28, 2016 7:58:29 PM
> > Subject: Re: [ovirt-users] Cannot find master domain
> >
> >
> >
> > On Thu, Jul 28, 2016 at 6:29 PM David Gossage <
> dgoss...@carouselchecks.com >
> > wrote:
> >
> >
> >
> > On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi < siavash.s...@gmail.com >
> > wrote:
> >
> >
> >
> > Hi,
> >
> > Issue: Cannot find master domain
> > Changes applied before issue started to happen: replaced
> > 172.16.0.12:/data/brick1/brick1 with 172.16.0.12:/data/brick3/brick3,
> did
> > minor package upgrades for vdsm and glusterfs
> >
> > vdsm log: https://paste.fedoraproject.org/396842/
> >
> >
> > Any errrors in glusters brick or server logs? The client gluster logs
> from
> > ovirt?
> > Brick errors:
> > [2016-07-28 14:03:25.002396] E [MSGID: 113091] [posix.c:178:posix_lookup]
> > 0-ovirt-posix: null gfid for path (null)
> > [2016-07-28 14:03:25.002430] E [MSGID: 113018] [posix.c:196:posix_lookup]
> > 0-ovirt-posix: lstat on null failed [Invalid argument]
> > (Both repeated many times)
> >
> > Server errors:
> > None
> >
> > Client errors:
> > None
> >
> >
> >
> >
> >
> >
> >
> > yum log: https://paste.fedoraproject.org/396854/
> >
> > What version of gluster was running prior to update to 3.7.13?
> > 3.7.11-1 from gluster.org repository(after update ovirt switched to
> centos
> > repository)
> >
> >
> >
> >
> > Did it create gluster mounts on server when attempting to start?
> > As I checked the master domain is not mounted on any nodes.
> > Restarting vdsmd generated following errors:
> >
> > jsonrpc.Executor/5::DEBUG::2016-07-28
> > 18:50:57,661::fileUtils::143::Storage.fileUtils::(createdir) Creating
> > directory: /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt mode: None
> > jsonrpc.Executor/5::DEBUG::2016-07-28
> >
> 18:50:57,661::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
> > Using bricks: ['172.16.0.11', '172.16.0.12', '172.16.0.13']
> > jsonrpc.Executor/5::DEBUG::2016-07-28
> > 18:50:57,662::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
> > --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/systemd-run --scope
> > --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
> > backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt
> > /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
> > jsonrpc.Executor/5::DEBUG::2016-07-28
> > 18:50:57,789::__init__::318::IOProcessClient::(_run) Starting
> IOProcess...
> > jsonrpc.Executor/5::DEBUG::2016-07-28
> > 18:50:57,802::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
> > --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/umount -f -l
> > /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
> > jsonrpc.Executor/5::ERROR::2016-07-28
> > 18:50:57,813::hsm::2473::Storage.HSM::(connectStorageServer) Could not
> > connect to storageServer
> > Traceback (most recent call last):
> > File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
> > conObj.connect()
> > File "/usr/share/vdsm/storage/storageServer.py", line 248, in connect
> > six.reraise(t, v, tb)
> > File "/usr/share/vdsm/storage/storageServer.py", line 241, in connect
> > self.getMountObj().getRecord().fs_file)
> > File "/usr/share/vdsm/storage/fileSD.py", line 79, in validateDirAccess
> > raise se.StorageServerAccessPermissionError(dirPath)
> > StorageServerAccessPermissionError: Permission settings on the specified
> path
> > do not allow access to the storage. Verify permission settings on the
> > specified storage path.: 'path =
> > /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
> > jsonrpc.Executor/5::DEBUG::2016-07-28
> > 18:50:57,817::hsm::2497::Storage.HSM::(connectStorageServer) knownSDs: {}
> > jsonrpc.Executor/5::INFO::2016-07-28
> > 18:50:57,817::logUtils::51::dispatcher::(wrapper) Run and protect:
> > connectStorageServer, Return response: {'statuslist': [{'status': 469,
> 'id':
> > u'2d285de3-eede-42aa-b7d6-7b8c6e0667bc'}]}
> > jsonrpc.Executor/5::DEBUG::2016-07-28
> > 

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread Siavash Safi
On Thu, Jul 28, 2016 at 7:19 PM David Gossage 
wrote:

> On Thu, Jul 28, 2016 at 9:38 AM, Siavash Safi 
> wrote:
>
>> file system: xfs
>> features.shard: off
>>
>
> Ok was just seeing if matched up to the issues latest 3.7.x releases have
> with zfs and sharding but doesn't look like your issue.
>
>  In your logs I see it mounts with thee commands.  What happens if you use
> same to a test dir?
>
>  /usr/bin/mount -t glusterfs -o backup-volfile-servers=172.16.0.12:172.16.0.13
> 172.16.0.11:/ovirt /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt
>

It mounts successfully:
[root@node1 ~]# /usr/bin/mount -t glusterfs -o
backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt /mnt
[root@node1 ~]# ls /mnt/
4697fbde-45fb-4f91-ac4c-5516bc59f683  __DIRECT_IO_TEST__


> It then umounts it and complains short while later of permissions.
>
> StorageServerAccessPermissionError: Permission settings on the specified
> path do not allow access to the storage. Verify permission settings on the
> specified storage path.: 'path =
> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
>
> Are the permissions of dirs to 
> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt
> as expected?
>

/rhev/data-center/mnt/glusterSD/ is empty. Maybe it removed the directory
after failing to clean up?

How about on the bricks anything out of place?
>

I didn't notice anything.


> Is gluster still using same options as before?  could it have reset the
> user and group to not be 36?
>

All options seem to be correct; to make sure, I ran "Optimize for Virt
Store" from the web UI.

Volume Name: ovirt
Type: Distributed-Replicate
Volume ID: b224d9bc-d120-4fe1-b233-09089e5ca0b2
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 172.16.0.11:/data/brick1/brick1
Brick2: 172.16.0.12:/data/brick3/brick3
Brick3: 172.16.0.13:/data/brick1/brick1
Brick4: 172.16.0.11:/data/brick2/brick2
Brick5: 172.16.0.12:/data/brick2/brick2
Brick6: 172.16.0.13:/data/brick2/brick2
Options Reconfigured:
performance.readdir-ahead: on
nfs.disable: off
user.cifs: enable
auth.allow: *
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
server.allow-insecure: on
network.ping-timeout: 10
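
(The two settings that map to the vdsm:kvm ownership oVirt expects are
storage.owner-uid/gid = 36; a quick way to re-check just those -- a sketch:)

gluster volume info ovirt | grep owner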


>> On Thu, Jul 28, 2016 at 7:03 PM David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>> On Thu, Jul 28, 2016 at 9:28 AM, Siavash Safi 
>>> wrote:
>>>


 On Thu, Jul 28, 2016 at 6:29 PM David Gossage <
 dgoss...@carouselchecks.com> wrote:

> On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi 
> wrote:
>
>> Hi,
>>
>> Issue: Cannot find master domain
>> Changes applied before issue started to happen: replaced 
>> 172.16.0.12:/data/brick1/brick1
>> with 172.16.0.12:/data/brick3/brick3, did minor package upgrades for
>> vdsm and glusterfs
>>
>> vdsm log: https://paste.fedoraproject.org/396842/
>>
>
>
> Any errrors in glusters brick or server logs?  The client gluster logs
> from ovirt?
>
 Brick errors:
 [2016-07-28 14:03:25.002396] E [MSGID: 113091]
 [posix.c:178:posix_lookup] 0-ovirt-posix: null gfid for path (null)
 [2016-07-28 14:03:25.002430] E [MSGID: 113018]
 [posix.c:196:posix_lookup] 0-ovirt-posix: lstat on null failed [Invalid
 argument]
 (Both repeated many times)

 Server errors:
 None

 Client errors:
 None


>
>> yum log: https://paste.fedoraproject.org/396854/
>>
>
> What version of gluster was running prior to update to 3.7.13?
>
 3.7.11-1 from gluster.org repository(after update ovirt switched to
 centos repository)

>>>
>>> What file system do your bricks reside on and do you have sharding
>>> enabled?
>>>
>>>
> Did it create gluster mounts on server when attempting to start?
>
 As I checked the master domain is not mounted on any nodes.
 Restarting vdsmd generated following errors:

 jsonrpc.Executor/5::DEBUG::2016-07-28
 18:50:57,661::fileUtils::143::Storage.fileUtils::(createdir) Creating
 directory: /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt mode:
 None
 jsonrpc.Executor/5::DEBUG::2016-07-28
 18:50:57,661::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
 Using bricks: ['172.16.0.11', '172.16.0.12', '172.16.0.13']
 jsonrpc.Executor/5::DEBUG::2016-07-28
 18:50:57,662::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
 --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/systemd-run --scope
 --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
 backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt
 

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread Sahina Bose
Error from vdsm log: Permission settings on the specified path do not allow 
access to the storage. Verify permission settings on the specified storage 
path.: 'path = /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'

I remember another thread about a similar issue - can you check the ACL 
settings on the storage path?
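
(For example — a minimal check, assuming the mount root used in the vdsm log above
and that getfacl from the acl package is installed:)

getfacl /rhev/data-center/mnt/glusterSD/
getfacl /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt   # once the volume is mounted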

- Original Message -
> From: "Siavash Safi" 
> To: "David Gossage" 
> Cc: "users" 
> Sent: Thursday, July 28, 2016 7:58:29 PM
> Subject: Re: [ovirt-users] Cannot find master domain
> 
> 
> 
> On Thu, Jul 28, 2016 at 6:29 PM David Gossage < dgoss...@carouselchecks.com >
> wrote:
> 
> 
> 
> On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi < siavash.s...@gmail.com >
> wrote:
> 
> 
> 
> Hi,
> 
> Issue: Cannot find master domain
> Changes applied before issue started to happen: replaced
> 172.16.0.12:/data/brick1/brick1 with 172.16.0.12:/data/brick3/brick3, did
> minor package upgrades for vdsm and glusterfs
> 
> vdsm log: https://paste.fedoraproject.org/396842/
> 
> 
> Any errrors in glusters brick or server logs? The client gluster logs from
> ovirt?
> Brick errors:
> [2016-07-28 14:03:25.002396] E [MSGID: 113091] [posix.c:178:posix_lookup]
> 0-ovirt-posix: null gfid for path (null)
> [2016-07-28 14:03:25.002430] E [MSGID: 113018] [posix.c:196:posix_lookup]
> 0-ovirt-posix: lstat on null failed [Invalid argument]
> (Both repeated many times)
> 
> Server errors:
> None
> 
> Client errors:
> None
> 
> 
> 
> 
> 
> 
> 
> yum log: https://paste.fedoraproject.org/396854/
> 
> What version of gluster was running prior to update to 3.7.13?
> 3.7.11-1 from gluster.org repository(after update ovirt switched to centos
> repository)
> 
> 
> 
> 
> Did it create gluster mounts on server when attempting to start?
> As I checked the master domain is not mounted on any nodes.
> Restarting vdsmd generated following errors:
> 
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,661::fileUtils::143::Storage.fileUtils::(createdir) Creating
> directory: /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt mode: None
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,661::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
> Using bricks: ['172.16.0.11', '172.16.0.12', '172.16.0.13']
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,662::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
> --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/systemd-run --scope
> --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
> backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt
> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,789::__init__::318::IOProcessClient::(_run) Starting IOProcess...
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,802::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
> --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/umount -f -l
> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
> jsonrpc.Executor/5::ERROR::2016-07-28
> 18:50:57,813::hsm::2473::Storage.HSM::(connectStorageServer) Could not
> connect to storageServer
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
> conObj.connect()
> File "/usr/share/vdsm/storage/storageServer.py", line 248, in connect
> six.reraise(t, v, tb)
> File "/usr/share/vdsm/storage/storageServer.py", line 241, in connect
> self.getMountObj().getRecord().fs_file)
> File "/usr/share/vdsm/storage/fileSD.py", line 79, in validateDirAccess
> raise se.StorageServerAccessPermissionError(dirPath)
> StorageServerAccessPermissionError: Permission settings on the specified path
> do not allow access to the storage. Verify permission settings on the
> specified storage path.: 'path =
> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,817::hsm::2497::Storage.HSM::(connectStorageServer) knownSDs: {}
> jsonrpc.Executor/5::INFO::2016-07-28
> 18:50:57,817::logUtils::51::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslist': [{'status': 469, 'id':
> u'2d285de3-eede-42aa-b7d6-7b8c6e0667bc'}]}
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,817::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`21487eb4-de9b-47a3-aa37-7dce06533cc9`::finished: {'statuslist':
> [{'status': 469, 'id': u'2d285de3-eede-42aa-b7d6-7b8c6e0667bc'}]}
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,817::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`21487eb4-de9b-47a3-aa37-7dce06533cc9`::moving from state preparing ->
> state finished
> 
> I can manually mount the gluster volume on the same server.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Setup:
> engine running on a separate node
> 3 x kvm/glusterd nodes
> 
> Status of volume: ovirt
> Gluster process TCP Port RDMA Port Online Pid
> 

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread David Gossage
On Thu, Jul 28, 2016 at 9:38 AM, Siavash Safi 
wrote:

> file system: xfs
> features.shard: off
>

OK, I was just checking whether this matched the issues the latest 3.7.x releases
have with ZFS and sharding, but that doesn't look like your problem.

In your logs I see it mounts with these commands.  What happens if you run the
same mount against a test dir?

 /usr/bin/mount -t glusterfs -o backup-volfile-servers=172.16.0.12:172.16.0.13
172.16.0.11:/ovirt /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt

It then unmounts it and a short while later complains about permissions.

StorageServerAccessPermissionError: Permission settings on the specified
path do not allow access to the storage. Verify permission settings on the
specified storage path.: 'path =
/rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'

Are the permissions of dirs to
/rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt
as expected?
How about on the bricks anything out of place?

Is gluster still using the same options as before?  Could it have reset the
owner user and group to something other than 36?
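
(Roughly the check I have in mind — a sketch only, reusing the mount command quoted
earlier in this message against a throwaway directory, then testing whether the vdsm
user can read it, which is what validateDirAccess in fileSD.py is doing:)

mkdir /tmp/gtest
/usr/bin/mount -t glusterfs -o backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt /tmp/gtest
ls -ld /tmp/gtest
sudo -u vdsm ls /tmp/gtest   # should list the domain directory without a permission error
umount /tmp/gtest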

>
> On Thu, Jul 28, 2016 at 7:03 PM David Gossage 
> wrote:
>
>> On Thu, Jul 28, 2016 at 9:28 AM, Siavash Safi 
>> wrote:
>>
>>>
>>>
>>> On Thu, Jul 28, 2016 at 6:29 PM David Gossage <
>>> dgoss...@carouselchecks.com> wrote:
>>>
 On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi 
 wrote:

> Hi,
>
> Issue: Cannot find master domain
> Changes applied before issue started to happen: replaced 
> 172.16.0.12:/data/brick1/brick1
> with 172.16.0.12:/data/brick3/brick3, did minor package upgrades for
> vdsm and glusterfs
>
> vdsm log: https://paste.fedoraproject.org/396842/
>


 Any errrors in glusters brick or server logs?  The client gluster logs
 from ovirt?

>>> Brick errors:
>>> [2016-07-28 14:03:25.002396] E [MSGID: 113091]
>>> [posix.c:178:posix_lookup] 0-ovirt-posix: null gfid for path (null)
>>> [2016-07-28 14:03:25.002430] E [MSGID: 113018]
>>> [posix.c:196:posix_lookup] 0-ovirt-posix: lstat on null failed [Invalid
>>> argument]
>>> (Both repeated many times)
>>>
>>> Server errors:
>>> None
>>>
>>> Client errors:
>>> None
>>>
>>>

> yum log: https://paste.fedoraproject.org/396854/
>

 What version of gluster was running prior to update to 3.7.13?

>>> 3.7.11-1 from gluster.org repository(after update ovirt switched to
>>> centos repository)
>>>
>>
>> What file system do your bricks reside on and do you have sharding
>> enabled?
>>
>>
 Did it create gluster mounts on server when attempting to start?

>>> As I checked the master domain is not mounted on any nodes.
>>> Restarting vdsmd generated following errors:
>>>
>>> jsonrpc.Executor/5::DEBUG::2016-07-28
>>> 18:50:57,661::fileUtils::143::Storage.fileUtils::(createdir) Creating
>>> directory: /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt mode: None
>>> jsonrpc.Executor/5::DEBUG::2016-07-28
>>> 18:50:57,661::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
>>> Using bricks: ['172.16.0.11', '172.16.0.12', '172.16.0.13']
>>> jsonrpc.Executor/5::DEBUG::2016-07-28
>>> 18:50:57,662::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
>>> --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/systemd-run --scope
>>> --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
>>> backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt
>>> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
>>> jsonrpc.Executor/5::DEBUG::2016-07-28
>>> 18:50:57,789::__init__::318::IOProcessClient::(_run) Starting IOProcess...
>>> jsonrpc.Executor/5::DEBUG::2016-07-28
>>> 18:50:57,802::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
>>> --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/umount -f -l
>>> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
>>> jsonrpc.Executor/5::ERROR::2016-07-28
>>> 18:50:57,813::hsm::2473::Storage.HSM::(connectStorageServer) Could not
>>> connect to storageServer
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/hsm.py", line 2470, in
>>> connectStorageServer
>>> conObj.connect()
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 248, in connect
>>> six.reraise(t, v, tb)
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 241, in connect
>>> self.getMountObj().getRecord().fs_file)
>>>   File "/usr/share/vdsm/storage/fileSD.py", line 79, in validateDirAccess
>>> raise se.StorageServerAccessPermissionError(dirPath)
>>> StorageServerAccessPermissionError: Permission settings on the specified
>>> path do not allow access to the storage. Verify permission settings on the
>>> specified storage path.: 'path =
>>> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
>>> jsonrpc.Executor/5::DEBUG::2016-07-28
>>> 18:50:57,817::hsm::2497::Storage.HSM::(connectStorageServer) knownSDs: {}
>>> jsonrpc.Executor/5::INFO::2016-07-28
>>> 

[ovirt-users] How to access ovirt 4.0 desktop from iOS spice client?

2016-07-28 Thread 敖青云
How can I access an oVirt 4.0 desktop from an iPad with an iOS SPICE client? As far as
I know, oVirt 4.0 does not seem to support access by IP address.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread Siavash Safi
file system: xfs
features.shard: off

On Thu, Jul 28, 2016 at 7:03 PM David Gossage 
wrote:

> On Thu, Jul 28, 2016 at 9:28 AM, Siavash Safi 
> wrote:
>
>>
>>
>> On Thu, Jul 28, 2016 at 6:29 PM David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>> On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi 
>>> wrote:
>>>
 Hi,

 Issue: Cannot find master domain
 Changes applied before issue started to happen: replaced 
 172.16.0.12:/data/brick1/brick1
 with 172.16.0.12:/data/brick3/brick3, did minor package upgrades for
 vdsm and glusterfs

 vdsm log: https://paste.fedoraproject.org/396842/

>>>
>>>
>>> Any errrors in glusters brick or server logs?  The client gluster logs
>>> from ovirt?
>>>
>> Brick errors:
>> [2016-07-28 14:03:25.002396] E [MSGID: 113091] [posix.c:178:posix_lookup]
>> 0-ovirt-posix: null gfid for path (null)
>> [2016-07-28 14:03:25.002430] E [MSGID: 113018] [posix.c:196:posix_lookup]
>> 0-ovirt-posix: lstat on null failed [Invalid argument]
>> (Both repeated many times)
>>
>> Server errors:
>> None
>>
>> Client errors:
>> None
>>
>>
>>>
 yum log: https://paste.fedoraproject.org/396854/

>>>
>>> What version of gluster was running prior to update to 3.7.13?
>>>
>> 3.7.11-1 from gluster.org repository(after update ovirt switched to
>> centos repository)
>>
>
> What file system do your bricks reside on and do you have sharding
> enabled?
>
>
>>> Did it create gluster mounts on server when attempting to start?
>>>
>> As I checked the master domain is not mounted on any nodes.
>> Restarting vdsmd generated following errors:
>>
>> jsonrpc.Executor/5::DEBUG::2016-07-28
>> 18:50:57,661::fileUtils::143::Storage.fileUtils::(createdir) Creating
>> directory: /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt mode: None
>> jsonrpc.Executor/5::DEBUG::2016-07-28
>> 18:50:57,661::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
>> Using bricks: ['172.16.0.11', '172.16.0.12', '172.16.0.13']
>> jsonrpc.Executor/5::DEBUG::2016-07-28
>> 18:50:57,662::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
>> --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/systemd-run --scope
>> --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
>> backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt
>> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
>> jsonrpc.Executor/5::DEBUG::2016-07-28
>> 18:50:57,789::__init__::318::IOProcessClient::(_run) Starting IOProcess...
>> jsonrpc.Executor/5::DEBUG::2016-07-28
>> 18:50:57,802::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
>> --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/umount -f -l
>> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
>> jsonrpc.Executor/5::ERROR::2016-07-28
>> 18:50:57,813::hsm::2473::Storage.HSM::(connectStorageServer) Could not
>> connect to storageServer
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/storage/hsm.py", line 2470, in
>> connectStorageServer
>> conObj.connect()
>>   File "/usr/share/vdsm/storage/storageServer.py", line 248, in connect
>> six.reraise(t, v, tb)
>>   File "/usr/share/vdsm/storage/storageServer.py", line 241, in connect
>> self.getMountObj().getRecord().fs_file)
>>   File "/usr/share/vdsm/storage/fileSD.py", line 79, in validateDirAccess
>> raise se.StorageServerAccessPermissionError(dirPath)
>> StorageServerAccessPermissionError: Permission settings on the specified
>> path do not allow access to the storage. Verify permission settings on the
>> specified storage path.: 'path =
>> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
>> jsonrpc.Executor/5::DEBUG::2016-07-28
>> 18:50:57,817::hsm::2497::Storage.HSM::(connectStorageServer) knownSDs: {}
>> jsonrpc.Executor/5::INFO::2016-07-28
>> 18:50:57,817::logUtils::51::dispatcher::(wrapper) Run and protect:
>> connectStorageServer, Return response: {'statuslist': [{'status': 469,
>> 'id': u'2d285de3-eede-42aa-b7d6-7b8c6e0667bc'}]}
>> jsonrpc.Executor/5::DEBUG::2016-07-28
>> 18:50:57,817::task::1191::Storage.TaskManager.Task::(prepare)
>> Task=`21487eb4-de9b-47a3-aa37-7dce06533cc9`::finished: {'statuslist':
>> [{'status': 469, 'id': u'2d285de3-eede-42aa-b7d6-7b8c6e0667bc'}]}
>> jsonrpc.Executor/5::DEBUG::2016-07-28
>> 18:50:57,817::task::595::Storage.TaskManager.Task::(_updateState)
>> Task=`21487eb4-de9b-47a3-aa37-7dce06533cc9`::moving from state preparing ->
>> state finished
>>
>> I can manually mount the gluster volume on the same server.
>>
>>
>>>
>>>
 Setup:
 engine running on a separate node
 3 x kvm/glusterd nodes

 Status of volume: ovirt
 Gluster process TCP Port  RDMA Port  Online
  Pid

 --
 Brick 172.16.0.11:/data/brick1/brick1   49152 0  Y
   17304
 Brick 

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread David Gossage
On Thu, Jul 28, 2016 at 9:28 AM, Siavash Safi 
wrote:

>
>
> On Thu, Jul 28, 2016 at 6:29 PM David Gossage 
> wrote:
>
>> On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi 
>> wrote:
>>
>>> Hi,
>>>
>>> Issue: Cannot find master domain
>>> Changes applied before issue started to happen: replaced 
>>> 172.16.0.12:/data/brick1/brick1
>>> with 172.16.0.12:/data/brick3/brick3, did minor package upgrades for
>>> vdsm and glusterfs
>>>
>>> vdsm log: https://paste.fedoraproject.org/396842/
>>>
>>
>>
>> Any errrors in glusters brick or server logs?  The client gluster logs
>> from ovirt?
>>
> Brick errors:
> [2016-07-28 14:03:25.002396] E [MSGID: 113091] [posix.c:178:posix_lookup]
> 0-ovirt-posix: null gfid for path (null)
> [2016-07-28 14:03:25.002430] E [MSGID: 113018] [posix.c:196:posix_lookup]
> 0-ovirt-posix: lstat on null failed [Invalid argument]
> (Both repeated many times)
>
> Server errors:
> None
>
> Client errors:
> None
>
>
>>
>>> yum log: https://paste.fedoraproject.org/396854/
>>>
>>
>> What version of gluster was running prior to update to 3.7.13?
>>
> 3.7.11-1 from gluster.org repository(after update ovirt switched to
> centos repository)
>

What file system do your bricks reside on and do you have sharding enabled?


>> Did it create gluster mounts on server when attempting to start?
>>
> As I checked the master domain is not mounted on any nodes.
> Restarting vdsmd generated following errors:
>
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,661::fileUtils::143::Storage.fileUtils::(createdir) Creating
> directory: /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt mode: None
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,661::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
> Using bricks: ['172.16.0.11', '172.16.0.12', '172.16.0.13']
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,662::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
> --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/systemd-run --scope
> --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
> backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt
> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,789::__init__::318::IOProcessClient::(_run) Starting IOProcess...
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,802::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
> --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/umount -f -l
> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
> jsonrpc.Executor/5::ERROR::2016-07-28
> 18:50:57,813::hsm::2473::Storage.HSM::(connectStorageServer) Could not
> connect to storageServer
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
> conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 248, in connect
> six.reraise(t, v, tb)
>   File "/usr/share/vdsm/storage/storageServer.py", line 241, in connect
> self.getMountObj().getRecord().fs_file)
>   File "/usr/share/vdsm/storage/fileSD.py", line 79, in validateDirAccess
> raise se.StorageServerAccessPermissionError(dirPath)
> StorageServerAccessPermissionError: Permission settings on the specified
> path do not allow access to the storage. Verify permission settings on the
> specified storage path.: 'path =
> /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,817::hsm::2497::Storage.HSM::(connectStorageServer) knownSDs: {}
> jsonrpc.Executor/5::INFO::2016-07-28
> 18:50:57,817::logUtils::51::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslist': [{'status': 469,
> 'id': u'2d285de3-eede-42aa-b7d6-7b8c6e0667bc'}]}
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,817::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`21487eb4-de9b-47a3-aa37-7dce06533cc9`::finished: {'statuslist':
> [{'status': 469, 'id': u'2d285de3-eede-42aa-b7d6-7b8c6e0667bc'}]}
> jsonrpc.Executor/5::DEBUG::2016-07-28
> 18:50:57,817::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`21487eb4-de9b-47a3-aa37-7dce06533cc9`::moving from state preparing ->
> state finished
>
> I can manually mount the gluster volume on the same server.
>
>
>>
>>
>>> Setup:
>>> engine running on a separate node
>>> 3 x kvm/glusterd nodes
>>>
>>> Status of volume: ovirt
>>> Gluster process TCP Port  RDMA Port  Online
>>>  Pid
>>>
>>> --
>>> Brick 172.16.0.11:/data/brick1/brick1   49152 0  Y
>>>   17304
>>> Brick 172.16.0.12:/data/brick3/brick3   49155 0  Y
>>>   9363
>>> Brick 172.16.0.13:/data/brick1/brick1   49152 0  Y
>>>   23684
>>> Brick 172.16.0.11:/data/brick2/brick2   49153 0  Y
>>>   17323
>>> Brick 

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread Siavash Safi
On Thu, Jul 28, 2016 at 6:29 PM David Gossage 
wrote:

> On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi 
> wrote:
>
>> Hi,
>>
>> Issue: Cannot find master domain
>> Changes applied before issue started to happen: replaced 
>> 172.16.0.12:/data/brick1/brick1
>> with 172.16.0.12:/data/brick3/brick3, did minor package upgrades for
>> vdsm and glusterfs
>>
>> vdsm log: https://paste.fedoraproject.org/396842/
>>
>
>
> Any errrors in glusters brick or server logs?  The client gluster logs
> from ovirt?
>
Brick errors:
[2016-07-28 14:03:25.002396] E [MSGID: 113091] [posix.c:178:posix_lookup]
0-ovirt-posix: null gfid for path (null)
[2016-07-28 14:03:25.002430] E [MSGID: 113018] [posix.c:196:posix_lookup]
0-ovirt-posix: lstat on null failed [Invalid argument]
(Both repeated many times)

Server errors:
None

Client errors:
None


>
>> yum log: https://paste.fedoraproject.org/396854/
>>
>
> What version of gluster was running prior to update to 3.7.13?
>
3.7.11-1 from the gluster.org repository (after the update, oVirt switched to the CentOS
repository)

>
> Did it create gluster mounts on server when attempting to start?
>
As far as I can see, the master domain is not mounted on any of the nodes.
Restarting vdsmd generated the following errors:

jsonrpc.Executor/5::DEBUG::2016-07-28
18:50:57,661::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt mode: None
jsonrpc.Executor/5::DEBUG::2016-07-28
18:50:57,661::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
Using bricks: ['172.16.0.11', '172.16.0.12', '172.16.0.13']
jsonrpc.Executor/5::DEBUG::2016-07-28
18:50:57,662::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
--cpu-list 0-31 /usr/bin/sudo -n /usr/bin/systemd-run --scope
--slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt
/rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
jsonrpc.Executor/5::DEBUG::2016-07-28
18:50:57,789::__init__::318::IOProcessClient::(_run) Starting IOProcess...
jsonrpc.Executor/5::DEBUG::2016-07-28
18:50:57,802::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
--cpu-list 0-31 /usr/bin/sudo -n /usr/bin/umount -f -l
/rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
jsonrpc.Executor/5::ERROR::2016-07-28
18:50:57,813::hsm::2473::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 248, in connect
six.reraise(t, v, tb)
  File "/usr/share/vdsm/storage/storageServer.py", line 241, in connect
self.getMountObj().getRecord().fs_file)
  File "/usr/share/vdsm/storage/fileSD.py", line 79, in validateDirAccess
raise se.StorageServerAccessPermissionError(dirPath)
StorageServerAccessPermissionError: Permission settings on the specified
path do not allow access to the storage. Verify permission settings on the
specified storage path.: 'path =
/rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
jsonrpc.Executor/5::DEBUG::2016-07-28
18:50:57,817::hsm::2497::Storage.HSM::(connectStorageServer) knownSDs: {}
jsonrpc.Executor/5::INFO::2016-07-28
18:50:57,817::logUtils::51::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 469,
'id': u'2d285de3-eede-42aa-b7d6-7b8c6e0667bc'}]}
jsonrpc.Executor/5::DEBUG::2016-07-28
18:50:57,817::task::1191::Storage.TaskManager.Task::(prepare)
Task=`21487eb4-de9b-47a3-aa37-7dce06533cc9`::finished: {'statuslist':
[{'status': 469, 'id': u'2d285de3-eede-42aa-b7d6-7b8c6e0667bc'}]}
jsonrpc.Executor/5::DEBUG::2016-07-28
18:50:57,817::task::595::Storage.TaskManager.Task::(_updateState)
Task=`21487eb4-de9b-47a3-aa37-7dce06533cc9`::moving from state preparing ->
state finished

I can manually mount the gluster volume on the same server.


>
>
>> Setup:
>> engine running on a separate node
>> 3 x kvm/glusterd nodes
>>
>> Status of volume: ovirt
>> Gluster process TCP Port  RDMA Port  Online
>>  Pid
>>
>> --
>> Brick 172.16.0.11:/data/brick1/brick1   49152 0  Y
>> 17304
>> Brick 172.16.0.12:/data/brick3/brick3   49155 0  Y
>> 9363
>> Brick 172.16.0.13:/data/brick1/brick1   49152 0  Y
>> 23684
>> Brick 172.16.0.11:/data/brick2/brick2   49153 0  Y
>> 17323
>> Brick 172.16.0.12:/data/brick2/brick2   49153 0  Y
>> 9382
>> Brick 172.16.0.13:/data/brick2/brick2   49153 0  Y
>> 23703
>> NFS Server on localhost 2049  0  Y
>> 30508
>> Self-heal Daemon on localhost   N/A   N/AY
>> 30521
>> NFS Server on 172.16.0.11   2049  0   

Re: [ovirt-users] Debian - based OS and SSO

2016-07-28 Thread Tadas
Thank you for your reply.
Strange, but I do not see any errors in the gdm debug log, just this:
http://paste.ubuntu.com/21275558/

I will try installing debian unstable and several ubuntu versions tomorrow.

From: Vinzenz Feenstra 
Sent: Thursday, July 28, 2016 4:18 PM
To: ta...@ring.lt 
Cc: users 
Subject: Re: [ovirt-users] Debian - based OS and SSO


  On Jul 28, 2016, at 3:11 PM, Vinzenz Feenstra  wrote:


On Jul 28, 2016, at 11:53 AM, Tadas  wrote:

Hello,
still having issues with ovirt SSO and Debian OS.
Other OSes (Windows/Fedora 24) works just fine.
Some information:
OS: Debian 8.5 (jessie)
I've followed manual on https://www.ovirt.org/documentation/how-to/gues
t-agent/install-the-guest-agent-in-debian/ and installed ovirt-agent.
I can get info via spice socket on hypervisor side, this means that
agent works fine.
I've compiled pam-ovirt-cred and copied it into /lib/x86_64-linux-
gnu/security/


  It should be in /lib/security afaik


I've configured /etc/pamd/gdm-ovirtcred (just copied from working
Fedora 24)


  replace in that file all occurences of password-auth with passwd




But still login fails. I can see this in ovirt-agent log file:


  It some how fails for me in some cases with this now:


Correction its here:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=794064


  https://bugs.freedesktop.org/show_bug.cgi?id=71525

  There’s not much I can do about that though





Dummy-2::INFO::2016-07-28
12:49:51,046::OVirtAgentLogic::270::root::Received an external command:
login...
Dummy-2::DEBUG::2016-07-28
12:49:51,047::OVirtAgentLogic::304::root::User log-in (credentials =
'\x00\x00\x00\x04test\x00')
Dummy-2::INFO::2016-07-28 12:49:51,047::CredServer::207::root::The
following users are allowed to connect: [0]
Dummy-2::DEBUG::2016-07-28 12:49:51,047::CredServer::272::root::Token:
760258
Dummy-2::INFO::2016-07-28 12:49:51,047::CredServer::273::root::Opening
credentials channel...
Dummy-2::INFO::2016-07-28 12:49:51,047::CredServer::132::root::Emitting
user authenticated signal (760258).
Dummy-2::INFO::2016-07-28
12:49:51,178::CredServer::277::root::Credentials channel was closed.







This looks okay. The error is on pam side (auth.log):

Jul 28 12:49:39 desktop64 gdm-ovirtcred]: pam_succeed_if(gdm-
ovirtcred:auth): error retrieving user name: Conversation error
Jul 28 12:49:39 desktop64 gdm-ovirtcred]: pam_ovirt_cred(gdm-
ovirtcred:auth): Failed to acquire user's credentials

Have no idea, where it fails.
Would appreciate, if you could help me here a bit.
Thank you.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] disk not bootable

2016-07-28 Thread Fernando Fuentes
Nir,

I've been busy and have not yet been able to run through your request.

I will as soon as I get a chance.

Thanks again for the help.

Regards,



-- 
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org

On Tue, Jul 19, 2016, at 08:54 AM, Fernando Fuentes wrote:
> Nir,
> 
> Thanks for all the help!
> I am on it and will reply with the requested info asap.
> 
> Regards,
> 
> -- 
> Fernando Fuentes
> ffuen...@txweather.org
> http://www.txweather.org
> 
> On Tue, Jul 19, 2016, at 07:16 AM, Nir Soffer wrote:
> > On Mon, Jul 18, 2016 at 11:16 PM, Fernando Fuentes 
> > wrote:
> > > Ops... forgot the link:
> > >
> > > http://pastebin.com/LereJgyw
> > >
> > > The requested infor is in the pastebin.
> > 
> > So the issue is clear now, the template on NFS is using raw format, and
> > on block storage, qcow format:
> > 
> > NFS:
> > 
> > root@zeta ~]# cat
> > /rhev/data-center/mnt/172.30.10.5\:_opt_libvirtd_images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f-
> > ...
> > FORMAT=RAW
> > ...
> > 
> > [root@alpha ~]# qemu-img info
> > /opt/libvirtd/images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f-363338ee0c0e
> > image:
> > /opt/libvirtd/images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f-363338ee0c0e
> > ...
> > file format: raw
> > ...
> > 
> > iSCSI:
> > 
> > [root@zeta ~]# qemu-img info
> > /dev/0ef17024-0eae-4590-8eea-6ec8494fe223/25b6b0fe-d416-458f-b89f-363338ee0c0e
> > image:
> > /dev/0ef17024-0eae-4590-8eea-6ec8494fe223/25b6b0fe-d416-458f-b89f-363338ee0c0e
> > ...
> > file format: qcow2
> > ...
> > 
> > [root@zeta ~]# dd
> > if=/dev/0ef17024-0eae-4590-8eea-6ec8494fe223/metadata bs=512 skip=4
> > count=1 iflag=direct
> > ...
> > FORMAT=COW
> > ...
> > 
> > This format conversion is expected, as we don't support raw/sparse on
> > block storage.
> > 
> > It looks like the vm is started with the template disk as "raw"
> > format, which is expected
> > to fail when the format is actually "qcow2". The guest will see the
> > qcow headers instead
> > of the actual data.
> > 
> > The next step to debug this is:
> > 
> > 1. Copy a disk using this template to the block storage domain
> > 2. Create a new vm using this disk
> > 3. Start the vm
> > 
> > Does it start? If not, attach engine and vdsm logs from this timeframe.
> > 
> > If this works, you can try:
> > 
> > 1. Move vm disk from NFS to block storage
> > 2. Start the vm
> > 
> > Again, if it does not work, add engine and vdsm logs.
> > 
> > Nir
> > 
> > >
> > > Regards,
> > >
> > >
> > > --
> > > Fernando Fuentes
> > > ffuen...@txweather.org
> > > http://www.txweather.org
> > >
> > >
> > >
> > > On Mon, Jul 18, 2016, at 03:16 PM, Fernando Fuentes wrote:
> > >
> > > Nir,
> > >
> > > After some playing around with pvscan I was able to get all of the need it
> > > information.
> > >
> > > Please see:
> > >
> > >
> > > --
> > > Fernando Fuentes
> > > ffuen...@txweather.org
> > > http://www.txweather.org
> > >
> > >
> > >
> > > On Mon, Jul 18, 2016, at 02:30 PM, Nir Soffer wrote:
> > >
> > > On Mon, Jul 18, 2016 at 6:48 PM, Fernando Fuentes 
> > > wrote:
> > >> Nir,
> > >>
> > >> As requested:
> > >>
> > >> [root@gamma ~]# lsblk
> > >> NAME  MAJ:MIN RM
> > >>   SIZE RO TYPE  MOUNTPOINT
> > >> sda 8:00
> > >>   557G  0 disk
> > >> ├─sda1  8:10
> > >>   500M  0 part  /boot
> > >> └─sda2  8:20
> > >> 556.5G  0 part
> > >>   ├─vg_gamma-lv_root (dm-0)   253:00
> > >>  50G  0 lvm   /
> > >>   ├─vg_gamma-lv_swap (dm-1)   253:10
> > >>   4G  0 lvm   [SWAP]
> > >>   └─vg_gamma-lv_home (dm-2)   253:20
> > >>   502.4G  0 lvm   /home
> > >> sr011:01
> > >>  1024M  0 rom
> > >> sdb 8:16   0
> > >> 2T  0 disk
> > >> └─36589cfc00881b9b93c2623780840 (dm-4)253:40
> > >> 2T  0 mpath
> > >> sdc 8:32   0
> > >> 2T  0 disk
> > >> └─36589cfc0050564002c7e51978316 (dm-3)253:30
> > >> 2T  0 mpath
> > >>   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-metadata (dm-5)  253:50
> > >> 512M  0 lvm
> > >>   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-outbox (dm-6)253:60
> > >> 128M  0 lvm
> > >>   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-leases (dm-7)253:70
> > >>   2G  0 lvm
> > >>   

Re: [ovirt-users] Cannot find master domain

2016-07-28 Thread David Gossage
On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi 
wrote:

> Hi,
>
> Issue: Cannot find master domain
> Changes applied before issue started to happen: replaced 
> 172.16.0.12:/data/brick1/brick1
> with 172.16.0.12:/data/brick3/brick3, did minor package upgrades for vdsm
> and glusterfs
>
> vdsm log: https://paste.fedoraproject.org/396842/
>


Any errors in the gluster brick or server logs?  The client gluster logs from
oVirt?


> yum log: https://paste.fedoraproject.org/396854/
>

What version of gluster was running prior to update to 3.7.13?

Did it create gluster mounts on server when attempting to start?
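
(If it helps, the logs I mean — assuming default glusterfs log locations, which may
differ on your install — are roughly these:)

ls /var/log/glusterfs/bricks/                 # per-brick logs, on each gluster node
less /var/log/glusterfs/rhev-data-center-mnt-glusterSD-172.16.0.11:_ovirt.log   # client/mount log on the oVirt host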



> Setup:
> engine running on a separate node
> 3 x kvm/glusterd nodes
>
> Status of volume: ovirt
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick 172.16.0.11:/data/brick1/brick1   49152 0  Y
> 17304
> Brick 172.16.0.12:/data/brick3/brick3   49155 0  Y
> 9363
> Brick 172.16.0.13:/data/brick1/brick1   49152 0  Y
> 23684
> Brick 172.16.0.11:/data/brick2/brick2   49153 0  Y
> 17323
> Brick 172.16.0.12:/data/brick2/brick2   49153 0  Y
> 9382
> Brick 172.16.0.13:/data/brick2/brick2   49153 0  Y
> 23703
> NFS Server on localhost 2049  0  Y
> 30508
> Self-heal Daemon on localhost   N/A   N/AY
> 30521
> NFS Server on 172.16.0.11   2049  0  Y
> 24999
> Self-heal Daemon on 172.16.0.11 N/A   N/AY
> 25016
> NFS Server on 172.16.0.13   2049  0  Y
> 25379
> Self-heal Daemon on 172.16.0.13 N/A   N/AY
> 25509
>
> Task Status of Volume ovirt
>
> --
> Task : Rebalance
> ID   : 84d5ab2a-275e-421d-842b-928a9326c19a
> Status   : completed
>
> Thanks,
> Siavash
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Cannot find master domain

2016-07-28 Thread Siavash Safi
Hi,

Issue: Cannot find master domain
Changes applied before issue started to happen: replaced
172.16.0.12:/data/brick1/brick1
with 172.16.0.12:/data/brick3/brick3, did minor package upgrades for vdsm
and glusterfs

vdsm log: https://paste.fedoraproject.org/396842/
yum log: https://paste.fedoraproject.org/396854/

Setup:
engine running on a separate node
3 x kvm/glusterd nodes

Status of volume: ovirt
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick 172.16.0.11:/data/brick1/brick1   49152 0  Y
17304
Brick 172.16.0.12:/data/brick3/brick3   49155 0  Y
9363
Brick 172.16.0.13:/data/brick1/brick1   49152 0  Y
23684
Brick 172.16.0.11:/data/brick2/brick2   49153 0  Y
17323
Brick 172.16.0.12:/data/brick2/brick2   49153 0  Y
9382
Brick 172.16.0.13:/data/brick2/brick2   49153 0  Y
23703
NFS Server on localhost 2049  0  Y
30508
Self-heal Daemon on localhost   N/A   N/AY
30521
NFS Server on 172.16.0.11   2049  0  Y
24999
Self-heal Daemon on 172.16.0.11 N/A   N/AY
25016
NFS Server on 172.16.0.13   2049  0  Y
25379
Self-heal Daemon on 172.16.0.13 N/A   N/AY
25509

Task Status of Volume ovirt
--
Task : Rebalance
ID   : 84d5ab2a-275e-421d-842b-928a9326c19a
Status   : completed

Thanks,
Siavash
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] one export domain two DC

2016-07-28 Thread Fernando Fuentes
Fred,

It actually worked with no issues.
Thanks for the help!

Regards,

--
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org



On Wed, Jul 27, 2016, at 06:36 AM, Fred Rolland wrote:
> Hi,
> Did you perform an "Import Domain" on the second setup ? It
> should work.
> Thanks,
> Fred
>
> On Mon, Jul 25, 2016 at 11:21 PM, Fernando Fuentes
>  wrote:
>> __
>> It did not work for me.
>> When I put the export on maintenance mode/detach it than try to add
>> to my second dc I get:
>>
>> Error while executing action New NFS Storage Domain: Error in
>> creating a Storage Domain. The selected storage path is not empty
>> (probably contains another Storage Domain). Either remove the
>> existing Storage Domain from this path, or change the Storage path).
>>
>> I am not sure if I explain my self right...
>> I have two clusters and two different ovirt DC. each ovirt  manager
>> is installed on a different host... One host has oVirt 3.6 and the
>> other host has ovirt 4.0
>>
>> I want to export my vms on my ovirt 3.6 and move those vm's to my
>> ovirt 4.0.
>>
>> What are my options?
>>
>>
>> Regards,
>>
>>
>> --
>> Fernando Fuentes
>> ffuen...@txweather.org
>> http://www.txweather.org
>>
>>
>>
>>
>> On Fri, Jul 22, 2016, at 09:10 AM, Fernando Fuentes wrote:
>>> To All,
>>>
>>> Thank you for the help!
>>>
>>> Regards,
>>>
>>> --
>>> Fernando Fuentes
>>> ffuen...@txweather.org
>>> http://www.txweather.org
>>>
>>>
>>>
>>> On Fri, Jul 22, 2016, at 07:12 AM, Yaniv Kaul wrote:


 On Wed, Jul 20, 2016 at 2:04 PM, Fernando Fuentes
  wrote:
> Is it possible to export all of my vms on my oVirt 3.5 Domain and
> than
> attach my export domain on my oVirt 4.0 DC and import the vm's?

 Yes, you can do this + just import a storage domain (see [1] for
 details - since 3.5)
 Y.

 [1] 
 http://www.ovirt.org/develop/release-management/features/storage/importstoragedomain/

>
> Regards,
>
> --
>  Fernando Fuentes ffuen...@txweather.org http://www.txweather.org
>  ___
>  Users mailing list Users@ovirt.org
>  http://lists.ovirt.org/mailman/listinfo/users
>>>
>>> _
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>> ___
>>  Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Debian - based OS and SSO

2016-07-28 Thread Vinzenz Feenstra

> On Jul 28, 2016, at 3:11 PM, Vinzenz Feenstra  wrote:
> 
> 
>> On Jul 28, 2016, at 11:53 AM, Tadas > 
>> wrote:
>> 
>> Hello,
>> still having issues with ovirt SSO and Debian OS.
>> Other OSes (Windows/Fedora 24) works just fine.
>> Some information:
>> OS: Debian 8.5 (jessie)
>> I've followed manual on https://www.ovirt.org/documentation/how-to/gues 
>> 
>> t-agent/install-the-guest-agent-in-debian/ and installed ovirt-agent.
>> I can get info via spice socket on hypervisor side, this means that
>> agent works fine.
>> I've compiled pam-ovirt-cred and copied it into /lib/x86_64-linux-
>> gnu/security/
> 
> It should be in /lib/security afaik
> 
>> I've configured /etc/pamd/gdm-ovirtcred (just copied from working
>> Fedora 24)
> 
> replace in that file all occurences of password-auth with passwd
> 
> 
>> 
>> But still login fails. I can see this in ovirt-agent log file:
> 
> It some how fails for me in some cases with this now:
> 

Correction, it's here:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=794064 

> https://bugs.freedesktop.org/show_bug.cgi?id=71525 
> 
> 
> There’s not much I can do about that though
> 
> 
> 
>> 
>> Dummy-2::INFO::2016-07-28
>> 12:49:51,046::OVirtAgentLogic::270::root::Received an external command:
>> login...
>> Dummy-2::DEBUG::2016-07-28
>> 12:49:51,047::OVirtAgentLogic::304::root::User log-in (credentials =
>> '\x00\x00\x00\x04test\x00')
>> Dummy-2::INFO::2016-07-28 12:49:51,047::CredServer::207::root::The
>> following users are allowed to connect: [0]
>> Dummy-2::DEBUG::2016-07-28 12:49:51,047::CredServer::272::root::Token:
>> 760258
>> Dummy-2::INFO::2016-07-28 12:49:51,047::CredServer::273::root::Opening
>> credentials channel...
>> Dummy-2::INFO::2016-07-28 12:49:51,047::CredServer::132::root::Emitting
>> user authenticated signal (760258).
>> Dummy-2::INFO::2016-07-28
>> 12:49:51,178::CredServer::277::root::Credentials channel was closed.
>> 
> 
> 
> 
> 
>> This looks okay. The error is on pam side (auth.log):
>> 
>> Jul 28 12:49:39 desktop64 gdm-ovirtcred]: pam_succeed_if(gdm-
>> ovirtcred:auth): error retrieving user name: Conversation error
>> Jul 28 12:49:39 desktop64 gdm-ovirtcred]: pam_ovirt_cred(gdm-
>> ovirtcred:auth): Failed to acquire user's credentials
>> 
>> Have no idea, where it fails.
>> Would appreciate, if you could help me here a bit.
>> Thank you.
>> 
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org 
>> http://lists.ovirt.org/mailman/listinfo/users
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Debian - based OS and SSO

2016-07-28 Thread Vinzenz Feenstra

> On Jul 28, 2016, at 11:53 AM, Tadas  wrote:
> 
> Hello,
> still having issues with ovirt SSO and Debian OS.
> Other OSes (Windows/Fedora 24) works just fine.
> Some information:
> OS: Debian 8.5 (jessie)
> I've followed manual on https://www.ovirt.org/documentation/how-to/gues
> t-agent/install-the-guest-agent-in-debian/ and installed ovirt-agent.
> I can get info via spice socket on hypervisor side, this means that
> agent works fine.
> I've compiled pam-ovirt-cred and copied it into /lib/x86_64-linux-
> gnu/security/

It should be in /lib/security afaik

> I've configured /etc/pamd/gdm-ovirtcred (just copied from working
> Fedora 24)

replace in that file all occurrences of password-auth with passwd
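
(One way to do that — a sketch only, assuming the file lives under the standard
/etc/pam.d/ directory; keep a backup first:)

cp /etc/pam.d/gdm-ovirtcred /etc/pam.d/gdm-ovirtcred.bak
sed -i 's/password-auth/passwd/g' /etc/pam.d/gdm-ovirtcred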


> 
> But still login fails. I can see this in ovirt-agent log file:

It somehow fails for me in some cases with this now:

https://bugs.freedesktop.org/show_bug.cgi?id=71525 


There’s not much I can do about that though



> 
> Dummy-2::INFO::2016-07-28
> 12:49:51,046::OVirtAgentLogic::270::root::Received an external command:
> login...
> Dummy-2::DEBUG::2016-07-28
> 12:49:51,047::OVirtAgentLogic::304::root::User log-in (credentials =
> '\x00\x00\x00\x04test\x00')
> Dummy-2::INFO::2016-07-28 12:49:51,047::CredServer::207::root::The
> following users are allowed to connect: [0]
> Dummy-2::DEBUG::2016-07-28 12:49:51,047::CredServer::272::root::Token:
> 760258
> Dummy-2::INFO::2016-07-28 12:49:51,047::CredServer::273::root::Opening
> credentials channel...
> Dummy-2::INFO::2016-07-28 12:49:51,047::CredServer::132::root::Emitting
> user authenticated signal (760258).
> Dummy-2::INFO::2016-07-28
> 12:49:51,178::CredServer::277::root::Credentials channel was closed.
> 




> This looks okay. The error is on pam side (auth.log):
> 
> Jul 28 12:49:39 desktop64 gdm-ovirtcred]: pam_succeed_if(gdm-
> ovirtcred:auth): error retrieving user name: Conversation error
> Jul 28 12:49:39 desktop64 gdm-ovirtcred]: pam_ovirt_cred(gdm-
> ovirtcred:auth): Failed to acquire user's credentials
> 
> Have no idea, where it fails.
> Would appreciate, if you could help me here a bit.
> Thank you.
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Trunk port for a guest nic?

2016-07-28 Thread Dan Lavu
Sorry Edy, I totally missed this email.

So, on my hypervisor I have 4 NICs:

0 ovirtmgmt (untagged vlan71)
1/2 bonded, trunk vlan 70,72-80
4 unplugged

So I just create a trunk network with no VLAN tags and connect it to the bonded
interface? Then attach it to my guest?
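
(For the in-guest part — "define a trunk in the VM", as mentioned below — a minimal
sketch, assuming the guest NIC is eth0, picking VLAN 72 from the trunk above, and
using an illustrative address only:)

ip link add link eth0 name eth0.72 type vlan id 72
ip link set eth0.72 up
ip addr add 192.0.2.72/24 dev eth0.72   # illustrative address, replace with a real one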

Cheers,
Dan

On Fri, Jul 15, 2016 at 3:30 PM, Edward Haas  wrote:

>
>
> On Wed, Jul 13, 2016 at 11:40 PM, Dan Lavu  wrote:
>
>> Hello,
>>
>> I remember reading some posts about this in the past, but I don't know if
>> anything came of it. Is this now possible? If so, does anybody have any
>> documentation on how to do this in 4.0?
>>
>> Thanks,
>>
>> Dan
>>
>>
> Hello Dan,
>
> If you mean allowing a VM to communicate on multiple vlan/s, then
> attaching a VM to a non-vlan network should do the job.
> You just need to define a trunk in the VM.
>
> Thanks,
> Edy.
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fencing errors on oVirt Engine Version: 4.0.1

2016-07-28 Thread Peter Michael Calum
Hi Martin

I see no firewall issues.

I have made some more tests :

RHEV  3.6 host khk9dsk30  test to khk9dsk34 & khk9dsk35 - OK
RHEV  3.6 host khk9dsk31  test to khk9dsk34 & khk9dsk35 - OK
RHEV  3.5 host khk9dsk32  test to khk9dsk34 & khk9dsk35 - OK
RHEV  3.5 host khk9dsk33  test to khk9dsk34 & khk9dsk35 – OK
OVIRT 4.01 host khk9dsk34  test to khk9dsk34 & khk9dsk35 – FAIL
OVIRT 4.01 host khk9dsk35  test to khk9dsk34 & khk9dsk35 – FAIL
OVIRT 4.01 host khk9dsk34  test to khk9dsk30 & khk9dsk31 – FAIL

Could you check in the code that ipmilan sends on UDP port 623?

All hosts are on the same VLAN.
[root@khk9dsk31 ~]# fence_ipmilan -a khk9dsk35-mgnt.ip.tdk.dk -l mgntuser -p 
 -o status
Status: ON
[root@khk9dsk31 ~]# fence_ipmilan -a khk9dsk34-mgnt.ip.tdk.dk -l mgntuser -p 
 -o status
Status: ON
[root@khk9dsk35 ~]# fence_ipmilan -a khk9dsk33-mgnt.ip.tdk.dk -l mgntuser -p 
 -o status
Failed: Unable to obtain correct plug status or plug is not available

[root@khk9dsk35 ~]# fence_ipmilan -a khk9dsk30-mgnt.ip.tdk.dk -l mgntuser -p 
 -o status
Failed: Unable to obtain correct plug status or plug is not available

[root@khk9dsk35 ~]# fence_ipmilan -a khk9dsk34-mgnt.ip.tdk.dk -l mgntuser -p 
 -o status
Failed: Unable to obtain correct plug status or plug is not available

[root@khk9dsk35 ~]# fence_ipmilan -a khk9dsk35-mgnt.ip.tdk.dk -l mgntuser -p 
 -o status
Failed: Unable to obtain correct plug status or plug is not available
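
(To rule the new firewall rules in or out for the failing hosts — a sketch, assuming
nmap is available; IPMI-over-LAN normally uses UDP port 623:)

nmap -sU -p 623 khk9dsk30-mgnt.ip.tdk.dk khk9dsk33-mgnt.ip.tdk.dk
# compare the result from a working 3.6 host and a failing 4.0 host;
# "filtered" only from the 4.0 hosts would point at the firewall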

Br,
Peter Calum
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Peter
Michael Calum
Sent: 27 July 2016 21:30
To: Martin Perina
Cc: users@ovirt.org
Subject: [Phish] - Re: [ovirt-users] Fencing errors on oVirt Engine Version: 4.0.1

Hi Martin
I was wondering if this could be a firewall problem. We have recently 
introduced new fw rules, and I have not tested fencing on the 2 hypervisors 
before I switched them to ovirt 4, but there was no alarm in the old setup 
before I switched. - I will investigate this further to be sure, and will 
return.
Thanks for your help
/Peter


From: Martin Perina [mailto:mper...@redhat.com]
Sent: 27 July 2016 20:19
To: Peter Michael Calum
Cc: users@ovirt.org; Eli Mesika
Subject: Re: [ovirt-users] Fencing errors on oVirt Engine Version: 4.0.1

Hmm, it's really strange that this is working on 3.6 and not on 4.0. I don't
see any obvious error, so we need to find out what the correct parameters are for
your fencing device.
The easiest way is to execute fence_ipmilan; according to vdsm.log, the following
are your current options:
fence_ipmilan -a khk9dsk35-mgnt.ip.tdk.dk -l mgntuser -p  -o status
Does the above work if it's executed from khk9dsk34-mgnt.ip.tdk.dk? Btw, is the
username correct? Shouldn't it be 'mgmtuser'?
If all of the above is correct and you are still not able to get the power status,
here are options you could try (a combined example follows the list):
-v
--lanplus
-4
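
(Combined into a single invocation — a sketch only; same address and user as above,
with the real management password substituted for the empty placeholder:)

fence_ipmilan --lanplus -4 -v -a khk9dsk35-mgnt.ip.tdk.dk -l mgntuser -p '' -o status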
Martin Perina

On Wed, Jul 27, 2016 at 5:07 PM, Peter Michael Calum 
> wrote:
Hi,

Thank you for answering.

Here are the logs, one from each node.
I see the test from khk9dsk34 goes over khk9dsk35 and vice versa.

Same setup as on my redhat 3.6 setup, and no custom options.

Thanks
Peter


From: Martin Perina [mailto:mper...@redhat.com]
Sent: 27 July 2016 11:56
To: Peter Michael Calum
Cc: users@ovirt.org; Eli Mesika
Subject: Re: [ovirt-users] Fencing errors on oVirt Engine Version: 4.0.1

Hi Peter,
could you please share with us vdsm.log from the host that was used as fence 
proxy (the one that actually executed fence_ipmi agent)?
Also could you please check that fence agent options are the same as on 3.6 
setup? Do you have any custom options for this specific agent?
Thanks
Martin

On Wed, Jul 27, 2016 at 8:34 AM, Peter Michael Calum 
> wrote:
Hi
I’m testing on Ovirt 4.01 and got errors when testing fencing on the hosts
I use IBM x3550M4 as host and ipmilan as fence agent.

I get this error when testing.
 [Failed: Unable to obtain correct plug status or plug is not available, , ]

I also have a 3.6 setup and there it works.

Ovirt Node 4.02
oVirt Engine Version: 4.0.1.1-1.el7.centos

Is this list the correct way to report errors ?

Venlig hilsen / Kind regards
Peter Calum
TDC, Denmark


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt Hosted-Engine not installing ERROR: 'OVEHOSTED_NETWORK/host_name'

2016-07-28 Thread Simone Tiraboschi
On Thu, Jul 28, 2016 at 1:07 PM, Florian Nolden  wrote:
> I'm using the oVirt 4.0 release repo:
>
> http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
>
> But the ovirt-4.0-dependencies.repo contains:
>
> [centos-ovirt40-candidate]
> name=CentOS-7 - oVirt 4.0
> baseurl=http://cbs.centos.org/repos/virt7-ovirt-40-candidate/x86_64/os/
> gpgcheck=0
> enabled=1
>
> I believe that shouldn't be there, should it?

Yes, you are absolutely right.
Thanks for reporting it.
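
(Until a fixed release package lands, one way to neutralize that entry locally — a
sketch, assuming the stock file path and that yum-utils is installed:)

yum-config-manager --disable centos-ovirt40-candidate
# or edit /etc/yum.repos.d/ovirt-4.0-dependencies.repo and set enabled=0 in that section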

> 2016-07-28 10:17 GMT+02:00 Simone Tiraboschi :
>>
>> On Thu, Jul 28, 2016 at 9:22 AM, Simone Tiraboschi 
>> wrote:
>> > On Thu, Jul 28, 2016 at 7:50 AM, Yedidyah Bar David 
>> > wrote:
>> >> On Wed, Jul 27, 2016 at 8:42 PM, Florian Nolden 
>> >> wrote:
>> >>> Hello,
>> >>>
>> >>> I try to install Ovirt 4.0.1-1 on a fresh installed CentOS 7.2 using a
>>
>> Another thing: both the bugged version (2.0.1.2) and the fixed one
>> (2.0.1.3) are available only in the 4.0.2 Second Release Candidate
>> repo, which has not yet reached GA status.
>> The latest release is oVirt 4.0.1, so maybe you are also using the
>> wrong repo if you want that.
>>
>> >>> replica 3 glusterfs. But I have trouble deploying the hosted engine.
>> >>>
>> >>> hosted-engine --deploy
>> >>>
>> >>>
>> >>> /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15:
>> >>> DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is
>> >>> deprecated, please use vdsm.jsonrpcvdscli
>> >>>   import vdsm.vdscli
>> >>>
>> >>> [ ERROR ] Failed to execute stage 'Environment customization':
>> >>> 'OVEHOSTED_NETWORK/host_name'
>> >
>> > The issue was caused by this patch
>> >  https://gerrit.ovirt.org/#/c/61078/
>> > yesterday we reverted it and built a new version (2.0.1.3) of
>> > hosted-engine-setup without that.
>> > It's already available:
>> >
>> > http://resources.ovirt.org/pub/ovirt-4.0-pre/rpm/el7/noarch/ovirt-hosted-engine-setup-2.0.1.3-1.el7.centos.noarch.rpm
>> >
>> >>> VDSM also did not create the ovirtmgmt bridge or the routing tables.
>> >>>
>> >>> I used the CentOS 7 minimal, and selected Infrastructure Server. I
>> >>> added the
>> >>> Puppet 4 repo and the Ovirt 4.0 Repo, no EPEL.
>> >>> I can reproduce it on 3 similar installed servers.
>> >>>
>> >>> Any Ideas?
>> >>
>> >> Please share the setup log. Thanks.
>> >>
>> >> Best,
>> >> --
>> >> Didi
>> >> ___
>> >> Users mailing list
>> >> Users@ovirt.org
>> >> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OEL virt-v2v

2016-07-28 Thread Shahar Havivi
On 28.07.16 11:30, Стаценко Константин Юрьевич wrote:
> Hello!
> Libguestfs in CentOS 7 is old and do not support Oracle Enterprise Linux 
> conversion via virt-v2v.
> How can I convert OEL VMware images ?
Best to ask at libgues...@redhat.com

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] OEL virt-v2v

2016-07-28 Thread Стаценко Константин Юрьевич
Hello!
Libguestfs in CentOS 7 is old and does not support Oracle Enterprise Linux
conversion via virt-v2v.
How can I convert OEL VMware images?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Spice in 4.0

2016-07-28 Thread Michal Skrivanek

> On 28 Jul 2016, at 13:07, Ivan Bulatović  wrote:
> 
> ‎To be able to use the .vv file, virt-viewer 2.0-8 is required and it can't 
> be found in CentOS 7 repos. CentOS 6 repos contain 2.0-14 version however. 
> Any info about when will the required version land in the CentOS 7 
> repositories?

It is going to be fixed in the final 4.0.2 build by a proper fix of 
https://bugzilla.redhat.com/show_bug.cgi?id=1285883
You can change it for the time being via engine-config's
RemoteViewerSupportedVersions option
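
(For reference, a sketch of the engine-config round trip — the exact value string is
deployment-specific, so inspect the current one first rather than trusting an example:)

engine-config -g RemoteViewerSupportedVersions          # show the current value and its format
engine-config -s RemoteViewerSupportedVersions='...'    # set it, following the format printed above
systemctl restart ovirt-engine                          # the engine must be restarted to pick up the change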

Thanks,
michal

> 
> TIA,
> 
> Ivan
>   Original Message  
> From: Michal Skrivanek
> Sent: Friday, July 22, 2016 11:53
> To: Alexander Wels; Melissa Mesler
> Cc: Users@ovirt.org List
> Subject: Re: [ovirt-users] Spice in 4.0
> 
> 
>> On 21 Jul 2016, at 21:43, Alexander Wels  wrote:
>> 
>> On Thursday, July 21, 2016 02:38:41 PM Melissa Mesler wrote:
>>> That worked! Thank you!!!
>>> 
>> 
>> Okay, great now if you want to set that as the default, so you don't have to 
>> do that manually each time do the following:
>> 
>> ssh into the engine box as root.
>> engine-config -s ClientModeSpiceDefault=Plugin
>> 
>> That should default it to be plugin for all the VMs.
> 
> Thank you Alex for jumping in
> Additional note - the 4.0 distribution of spice components does not contain 
> the XPI/ActiveX anymore, so in case you want to use the deprecated way you 
> need to make sure you install the latest 3.6 version of ActiveX/XPI from 3.6 
> channels/repos
> 
> Melissa, may I ask what is the thin client you use? Does the vendor have a 
> plan for future? Another non-intrusive option is to use the web-based console 
> clients….which are of varying quality though, so not a feasible replacement 
> in many cases.
> 
> Thanks,
> michal
> 
>> 
>>> On Thu, Jul 21, 2016, at 02:14 PM, Alexander Wels wrote:
 On Thursday, July 21, 2016 01:58:05 PM Melissa Mesler wrote:
> Yes, this is a workaround for us after upgrading to 4.0. I know it's
> going away in 4.1 but this buys us some time.
> 
> So the first option didn't work b/c like yous aid, it wasn't there. So
> we did the second step and it still didn't seem to work. We even tried
> restarting ovirt but that didn't help either. Any other ideas?
 
 Looking at the code it also set the default mode to 'native' instead of
 plugin. After you set the flag to true and restarted the engine, do you
 see the
 'plugin' option when you right click on a running VM in webadmin and
 select
 console options? It should be under Console Invocation
 
> On Thu, Jul 21, 2016, at 01:39 PM, Alexander Wels wrote:
>> On Thursday, July 21, 2016 01:19:35 PM Melissa Mesler wrote:
>>> Yes we are trying to get spice working on a thin client where
>>> we can't
>>> use virt-viewer. I just don't know the steps in the bugzilla to
>>> accomplish it as it's not completely clear.
>> 
>> Okay gotcha,
>> Note in 4.1 this ability will be completely removed, the bugzilla is
>> for cases where people need it in some kind of production environment.
>> 
>> Assuming you can ssh into the machine that is running the engine
>> (either HE or standard engine) as root:
>> 
>> engine-config -s EnableDeprecatedClientModeSpicePlugin=true
>> 
>> If that doesn't work because the option was not added to engine-config
>> (couldn't tell from reading the code). Try this:
>> 
>> sudo su postgres
>> --this will log you in as the postgres user
>> psql -d engine
>> --this will start psql connecing to the engine database. The
>> default will be
>> --engine if you didn't specify something else during engine-
>> setup. psql -l
>> --will list all the databases.
>> update vdc_options set option_value=true where
>> option_name='EnableDeprecatedClientModeSpicePlugin';
>> 
>>> On Thu, Jul 21, 2016, at 01:14 PM, Alexander Wels wrote:
 On Thursday, July 21, 2016 01:08:49 PM Melissa Mesler wrote:
> So I am trying to get spice working in ovirt 4.0. I found the
> following
> solution:
> https://bugzilla.redhat.com/show_bug.cgi?id=1316560
 
 That bugzilla relates to the legacy spice.xpi FF plugin, and
 possibly
 some
 activex plugin for IE. The current way is the following:
 
 1. Get virt-viewer for your platform.
 2. Associated virt-viewer with .vv files in your browser.
 3. Click the button, which will download the .vv file with the
 appropriate
 ticket.
 4. The browser will launch virt-viewer with the .vv file as a
 
 parameter
 
 and it
 should just all work.
 
> Where do you set
> vdc_options.EnableDeprecatedClientModeSpicePlugin to
> 'true'?? I see it says ENGINE_DB but what steps do I follow to
> do this?

Re: [ovirt-users] Spice in 4.0

2016-07-28 Thread Ivan Bulatović
‎To be able to use the .vv file, virt-viewer 2.0-8 is required and it can't be 
found in the CentOS 7 repos. The CentOS 6 repos contain version 2.0-14, however. Any
info about when the required version will land in the CentOS 7 repositories?

TIA,

Ivan
  Original Message  
From: Michal Skrivanek
Sent: Friday, July 22, 2016 11:53
To: Alexander Wels; Melissa Mesler
Cc: Users@ovirt.org List
Subject: Re: [ovirt-users] Spice in 4.0


> On 21 Jul 2016, at 21:43, Alexander Wels  wrote:
> 
> On Thursday, July 21, 2016 02:38:41 PM Melissa Mesler wrote:
>> That worked! Thank you!!!
>> 
> 
> Okay, great now if you want to set that as the default, so you don't have to 
> do that manually each time do the following:
> 
> ssh into the engine box as root.
> engine-config -s ClientModeSpiceDefault=Plugin
> 
> That should default it to be plugin for all the VMs.

Thank you Alex for jumping in.
Additional note - the 4.0 distribution of SPICE components does not contain the 
XPI/ActiveX plugins anymore, so in case you want to use the deprecated way you need to 
make sure you install the latest 3.6 version of the ActiveX/XPI plugin from the 3.6 
channels/repos.
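
For example, on the client box, something along these lines (the package name is the one from the 3.6 era, so please double-check it against the 3.6 repo you enable):

yum install spice-xpi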

Melissa, may I ask what thin client you use? Does the vendor have a plan 
for the future? Another non-intrusive option is to use the web-based console 
clients... which are of varying quality though, so not a feasible replacement in 
many cases.

Thanks,
michal

> 
>> On Thu, Jul 21, 2016, at 02:14 PM, Alexander Wels wrote:
>>> On Thursday, July 21, 2016 01:58:05 PM Melissa Mesler wrote:
 Yes, this is a workaround for us after upgrading to 4.0. I know it's
 going away in 4.1 but this buys us some time.
 
 So the first option didn't work b/c like you said, it wasn't there. So
 we did the second step and it still didn't seem to work. We even tried
 restarting ovirt but that didn't help either. Any other ideas?
>>> 
>>> Looking at the code it also sets the default mode to 'native' instead of
>>> plugin. After you set the flag to true and restarted the engine, do you
>>> see the
>>> 'plugin' option when you right click on a running VM in webadmin and
>>> select
>>> console options? It should be under Console Invocation
>>> 
 On Thu, Jul 21, 2016, at 01:39 PM, Alexander Wels wrote:
> On Thursday, July 21, 2016 01:19:35 PM Melissa Mesler wrote:
>> Yes we are trying to get spice working on a thin client where
>> we can't
>> use virt-viewer. I just don't know the steps in the bugzilla to
>> accomplish it as it's not completely clear.
> 
> Okay gotcha,
> Note in 4.1 this ability will be completely removed, the bugzilla is
> for cases where people need it in some kind of production environment.
> 
> Assuming you can ssh into the machine that is running the engine
> (either HE or standard engine) as root:
> 
> engine-config -s EnableDeprecatedClientModeSpicePlugin=true
> 
> If that doesn't work because the option was not added to engine-config
> (couldn't tell from reading the code). Try this:
> 
> sudo su postgres
> --this will log you in as the postgres user
> psql -d engine
> --this will start psql connecting to the engine database. The
> default will be
> --engine if you didn't specify something else during engine-
> setup. psql -l
> --will list all the databases.
> update vdc_options set option_value=true where
> option_name='EnableDeprecatedClientModeSpicePlugin';
> 
>> On Thu, Jul 21, 2016, at 01:14 PM, Alexander Wels wrote:
>>> On Thursday, July 21, 2016 01:08:49 PM Melissa Mesler wrote:
 So I am trying to get spice working in ovirt 4.0. I found the
 following
 solution:
 https://bugzilla.redhat.com/show_bug.cgi?id=1316560
>>> 
>>> That bugzilla relates to the legacy spice.xpi FF plugin, and
>>> possibly
>>> some
>>> activex plugin for IE. The current way is the following:
>>> 
>>> 1. Get virt-viewer for your platform.
>>> 2. Associate virt-viewer with .vv files in your browser.
>>> 3. Click the button, which will download the .vv file with the
>>> appropriate
>>> ticket.
>>> 4. The browser will launch virt-viewer with the .vv file as a
>>> 
>>> parameter
>>> 
>>> and it
>>> should just all work.
>>> 
 Where do you set
 vdc_options.EnableDeprecatedClientModeSpicePlugin to
 'true'?? I see it says ENGINE_DB but what steps do I follow to
 do this?
 Can someone help me?
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 

___
Users mailing list
Users@ovirt.org

Re: [ovirt-users] Ovirt Hosted-Engine not installing ERROR: 'OVEHOSTED_NETWORK/host_name'

2016-07-28 Thread Florian Nolden
I'm using the oVirt 4.0 release repo:

http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm

But the ovirt-4.0-dependencies.repo contains:

[centos-ovirt40-candidate]
name=CentOS-7 - oVirt 4.0
baseurl=http://cbs.centos.org/repos/virt7-ovirt-40-candidate/x86_64/os/
gpgcheck=0
enabled=1

I believe that shouldn't be there, should it?
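
In case it matters, I can keep it out of the way while testing; a quick way (just an example, assuming yum-utils is installed) would be:

yum-config-manager --disable centos-ovirt40-candidate

or setting enabled=0 for that section in /etc/yum.repos.d/ovirt-4.0-dependencies.repo.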

2016-07-28 10:17 GMT+02:00 Simone Tiraboschi :

> On Thu, Jul 28, 2016 at 9:22 AM, Simone Tiraboschi 
> wrote:
> > On Thu, Jul 28, 2016 at 7:50 AM, Yedidyah Bar David 
> wrote:
> >> On Wed, Jul 27, 2016 at 8:42 PM, Florian Nolden 
> wrote:
> >>> Hello,
> >>>
> >>> I try to install Ovirt 4.0.1-1 on a fresh installed CentOS 7.2 using a
>
> Another thing: both the bugged version (2.0.1.2) and the fixed one
> (2.0.1.3) are available only in the 4.0.2 Second Release Candidate
> repo, which has not yet reached GA status.
> The latest release is oVirt 4.0.1, so maybe you are also using the
> wrong repo if you want that.
>
> >>> replica 3 glusterfs. But I have trouble deploying the hosted engine.
> >>>
> >>> hosted-engine --deploy
> >>>
> >>>
> /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15:
> >>> DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is
> >>> deprecated, please use vdsm.jsonrpcvdscli
> >>>   import vdsm.vdscli
> >>>
> >>> [ ERROR ] Failed to execute stage 'Environment customization':
> >>> 'OVEHOSTED_NETWORK/host_name'
> >
> > The issue was caused by this patch
> >  https://gerrit.ovirt.org/#/c/61078/
> > yesterday we reverted it and built a new version (2.0.1.3) of
> > hosted-engine-setup without that.
> > It's already available:
> >
> http://resources.ovirt.org/pub/ovirt-4.0-pre/rpm/el7/noarch/ovirt-hosted-engine-setup-2.0.1.3-1.el7.centos.noarch.rpm
> >
> >>> VDSM also did not create the ovirtmgmt bridge or the routing tables.
> >>>
> >>> I used the CentOS 7 minimal, and selected Infrastructure Server. I
> added the
> >>> Puppet 4 repo and the Ovirt 4.0 Repo, no EPEL.
> >>> I can reproduce it on 3 similar installed servers.
> >>>
> >>> Any Ideas?
> >>
> >> Please share the setup log. Thanks.
> >>
> >> Best,
> >> --
> >> Didi
> >> ___
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fencing errors on oVirt Engine Version: 4.0.1 (SOLVED)

2016-07-28 Thread Peter Michael Calum
Hi Martin

Now you got it !

Executing: /usr/bin/ipmitool -I lan -H khk9dsk34-mgnt.ip.tdk.dk -U mgntuser -P 
[set] -p 623 -L ADMINISTRATOR chassis power status

1  Address lookup for khk9dsk34-mgnt.ip.tdk.dk failed
Address lookup for khk9dsk34-mgnt.ip.tdk.dk failed
Address lookup for khk9dsk34-mgnt.ip.tdk.dk failed
Unable to get Chassis Power Status

DNS was defined during installation, but both hosts have been rebooted, and now 
/etc/resolv.conf is empty.
After I redefined the nameservers in /etc/resolv.conf it works:

Executing: /usr/bin/ipmitool -I lan -H khk9dsk34-mgnt.ip.tdk.dk -U mgntuser -P 
[set] -p 623 -L ADMINISTRATOR chassis power status
0 Chassis Power is on

I have seen this problem earlier - after a reboot /etc/resolv.conf is empty!
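
In case anyone hits the same thing: one way to keep the nameservers across reboots on CentOS 7 (only a sketch, assuming the standard ifcfg network scripts; the interface name and addresses below are placeholders) is to define them in the interface config instead of editing /etc/resolv.conf by hand:

# /etc/sysconfig/network-scripts/ifcfg-em1   (example interface name)
DNS1=192.0.2.53
DNS2=192.0.2.54
PEERDNS=yes

systemctl restart network    # or NetworkManager, whichever manages the interface

/etc/resolv.conf is then regenerated with those entries after every reboot.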

Thank you for your help

Peter Calum
TDC


Fra: Martin Perina [mailto:mper...@redhat.com]
Sendt: 28. juli 2016 11:08
Til: Peter Michael Calum 
Cc: users@ovirt.org; Eli Mesika 
Emne: Re: [ovirt-users] Fencing errors on oVirt Engine Version: 4.0.1

Hi Peter,
see my inline responses:

On Thu, Jul 28, 2016 at 9:32 AM, Peter Michael Calum 
> wrote:
Hi Martin

I see no firewall issues.

I have made some more tests :

RHEV  3.6 host khk9dsk30  test to khk9dsk34 & khk9dsk35 - OK
RHEV  3.6 host khk9dsk31  test to khk9dsk34 & khk9dsk35 - OK
RHEV  3.5 host khk9dsk32  test to khk9dsk34 & khk9dsk35 - OK
RHEV  3.5 host khk9dsk33  test to khk9dsk34 & khk9dsk35 – OK
OVIRT 4.01 host khk9dsk34  test to khk9dsk34 & khk9dsk35 – FAIL
OVIRT 4.01 host khk9dsk35  test to khk9dsk34 & khk9dsk35 – FAIL
OVIRT 4.01 host khk9dsk34  test to khk9dsk30 & khk9dsk31 – FAIL

Could you check in the code that ipmilan sends on UDP port 623?

Yes, this is the default port.


All hosts are on the same VLAN
[root@khk9dsk31 ~]# fence_ipmilan -a khk9dsk35-mgnt.ip.tdk.dk -l mgntuser -p  -o status
Status: ON
[root@khk9dsk31 ~]# fence_ipmilan -a khk9dsk34-mgnt.ip.tdk.dk -l mgntuser -p  -o status
Status: ON
[root@khk9dsk35 ~]# fence_ipmilan -a khk9dsk33-mgnt.ip.tdk.dk -l mgntuser -p  -o status
Failed: Unable to obtain correct plug status or plug is not available

[root@khk9dsk35 ~]# fence_ipmilan -a khk9dsk30-mgnt.ip.tdk.dk -l mgntuser -p  -o status
Failed: Unable to obtain correct plug status or plug is not available

[root@khk9dsk35 ~]# fence_ipmilan -a khk9dsk34-mgnt.ip.tdk.dk -l mgntuser -p  -o status
Failed: Unable to obtain correct plug status or plug is not available

From the above I can say that there are some networking issues on host khk9dsk35. You said 
that the firewall is OK, so are the IPMI hostnames resolvable?
Please also try to execute the above with '-v' as we may get a bit more info about 
the issue.


[root@khk9dsk35 ~]# fence_ipmilan -a khk9dsk35-mgnt.ip.tdk.dk -l mgntuser -p  -o status
Failed: Unable to obtain correct plug status or plug is not available

This is normal, you are usually not able to connect to the IPMI port of the host 
from the host itself.

Thanks
Martin Perina

Br,
Peter Calum
Fra: users-boun...@ovirt.org 
[mailto:users-boun...@ovirt.org] På vegne af 
Peter Michael Calum
Sendt: 27. juli 2016 21:30
Til: Martin Perina >
Cc: users@ovirt.org
Emne: [Phish] - Re: [ovirt-users] Fencing errors on oVirt Engine Version: 4.0.1

Hi Martin
I was wondering if this could be a firewall problem. We have recently 
introduced new fw rules, and I have not tested fencing on the 2 hypervisors 
before I switched them to ovirt 4, but there was no alarm in the old setup 
before I switched. - I will investigate this further to be sure, and will 
return.
Thanks for your help
/Peter


Fra: Martin Perina [mailto:mper...@redhat.com]
Sendt: 27. juli 2016 20:19
Til: Peter Michael Calum >
Cc: users@ovirt.org; Eli Mesika 
>
Emne: Re: [ovirt-users] Fencing errors on oVirt Engine Version: 4.0.1

Hmm, it's really strange that this is working on 3.6 and not on 4.0. I don't 
see any obvious error, so we need to find out what the correct parameters for 
your fencing device are.
The easiest way is to execute fence_ipmilan; according to vdsm.log the following 
are your current options:
fence_ipmilan -a khk9dsk35-mgnt.ip.tdk.dk -l mgntuser -p  -o status
Does the above work if it's executed from khk9dsk34-mgnt.ip.tdk.dk? Btw, is the 
username correct? Shouldn't it be 'mgmtuser'?
If all of the above is correct and you are still not able to get power status, here 
are the options you could try:
-v
--lanplus
-4
Martin 

[ovirt-users] Debian - based OS and SSO

2016-07-28 Thread Tadas
Hello,
still having issues with oVirt SSO and Debian OS.
Other OSes (Windows/Fedora 24) work just fine.
Some information:
OS: Debian 8.5 (jessie)
I've followed the manual at
https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-debian/
and installed the ovirt guest agent.
I can get info via the SPICE socket on the hypervisor side, which means that
the agent works fine.
I've compiled pam-ovirt-cred and copied it into /lib/x86_64-linux-gnu/security/
I've configured /etc/pam.d/gdm-ovirtcred (just copied it from a working
Fedora 24 machine).

But login still fails. I can see this in the ovirt guest agent log file:

Dummy-2::INFO::2016-07-28
12:49:51,046::OVirtAgentLogic::270::root::Received an external command:
login...
Dummy-2::DEBUG::2016-07-28
12:49:51,047::OVirtAgentLogic::304::root::User log-in (credentials =
'\x00\x00\x00\x04test\x00')
Dummy-2::INFO::2016-07-28 12:49:51,047::CredServer::207::root::The
following users are allowed to connect: [0]
Dummy-2::DEBUG::2016-07-28 12:49:51,047::CredServer::272::root::Token:
760258
Dummy-2::INFO::2016-07-28 12:49:51,047::CredServer::273::root::Opening
credentials channel...
Dummy-2::INFO::2016-07-28 12:49:51,047::CredServer::132::root::Emitting
user authenticated signal (760258).
Dummy-2::INFO::2016-07-28
12:49:51,178::CredServer::277::root::Credentials channel was closed.

This looks okay. The error is on the PAM side (auth.log):

Jul 28 12:49:39 desktop64 gdm-ovirtcred]: pam_succeed_if(gdm-
ovirtcred:auth): error retrieving user name: Conversation error
Jul 28 12:49:39 desktop64 gdm-ovirtcred]: pam_ovirt_cred(gdm-
ovirtcred:auth): Failed to acquire user's credentials

I have no idea where it fails.
I would appreciate it if you could help me here a bit.
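
Two quick sanity checks I can also run (a sketch, assuming the compiled module ended up as pam_ovirt_cred.so in the path mentioned above):

ls -l /lib/x86_64-linux-gnu/security/pam_ovirt_cred.so
ldd /lib/x86_64-linux-gnu/security/pam_ovirt_cred.so

just to confirm the module is where PAM expects it and that all its shared-library dependencies resolve.
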
Thank you.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6 : Moving the hosted-engine to another storage

2016-07-28 Thread Simone Tiraboschi
On Thu, Jul 28, 2016 at 10:59 AM, Alexis HAUSER
 wrote:
>>Unfortunately we know that migrating from HE to HE is not as simple as
>>from physical to HE:
>>https://bugzilla.redhat.com/show_bug.cgi?id=1240466#c21
>>In general the issue is that the DB backup from the old hosted-engine
>>VM contains a lot of references to the previous hosted-engine env and
>>you cannot simply remove/edit them from the engine since they are locked,
>>so you have to manually remove them from the DB which is quite
>>risky/error prone.
>
> This is a bit scary. In case of an issue with the engine and trying to recover, it 
> could also happen.

The backup and restore procedure on the same env is well tested; the
issue only arises if you need to restore on a different environment
(it's basically a migration) because you have to remove all the
references to the old env.

> What other way would you suggest for backing up the engine VM and being sure to 
> be able to restore it as it was without errors ? Have you ever tried to 
> backup/restore from rsync ?
> If there are data in the DB written when you're performing it, do you think 
> it can cause issues ? If ovirt-engine service is stopped, is that problem 
> avoided ?

Postgres is transactional, so if you do it in the proper way I don't
see an issue, but stopping the ovirt-engine service will of course help.
But the issue is not that you risk data corruption; the issue is that
when you import a backup of the engine DB into a different env, that
backup says the hosted-engine storage domain is still the old one
since it was that in the previous env and so on, and you cannot simply
edit the hosted-engine storage domain location from the engine itself
since we are preventing it.
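
If what you need is a consistent backup of the engine itself, the engine-backup tool is the supported way (the file names here are only examples):

engine-backup --mode=backup --file=engine-backup-$(date +%F).tar.gz --log=engine-backup.log

and the matching --mode=restore on the target, keeping in mind the caveat above about restoring into a different environment.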

>>In the mean time I'd suggest, if feasible, to redeploy a new
>>hosted-engine env and reattach there your storage domains and your
>>hosts.
>>This will imply a downtime.
>
> Ok, I think I'll do that. A downtime isn't a problem right now, as I'm still 
> at a pre-production step. (preparing it for production soon)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fencing errors on oVirt Engine Version: 4.0.1

2016-07-28 Thread Martin Perina
Hi Peter,

see my inline responses:

On Thu, Jul 28, 2016 at 9:32 AM, Peter Michael Calum  wrote:

> Hi Martin
>
>
>
> I see no firewall issues.
>
>
>
> I have made some more tests :
>
>
>
> RHEV  3.6 host khk9dsk30  test to khk9dsk34 & khk9dsk35 - OK
>
> RHEV  3.6 host khk9dsk31  test to khk9dsk34 & khk9dsk35 - OK
>
> RHEV  3.5 host khk9dsk32  test to khk9dsk34 & khk9dsk35 - OK
>
> RHEV  3.5 host khk9dsk33  test to khk9dsk34 & khk9dsk35 – OK
>
> OVIRT 4.01 host khk9dsk34  test to khk9dsk34 & khk9dsk35 – FAIL
>
> OVIRT 4.01 host khk9dsk35  test to khk9dsk34 & khk9dsk35 – FAIL
>
OVIRT 4.01 host khk9dsk34  test to khk9dsk30 & khk9dsk31 – FAIL
>

>
> Could you check in the code that ipmilan sends on UDP port 623?
>

Yes, this is the default port.


>
>
> All hosts are on same VLAN
>
> [root@khk9dsk31 ~]# fence_ipmilan -a khk9dsk35-mgnt.ip.tdk.dk -l mgntuser
> –p  -o status
>
> Status: ON
>
> [root@khk9dsk31 ~]# fence_ipmilan -a khk9dsk34-mgnt.ip.tdk.dk -l mgntuser
> -p  -o status
>
> Status: ON
>
> [root@khk9dsk35 ~]# fence_ipmilan -a khk9dsk33-mgnt.ip.tdk.dk -l mgntuser
> -p  -o status
>
> Failed: Unable to obtain correct plug status or plug is not available
>
>
> [root@khk9dsk35 ~]# fence_ipmilan -a khk9dsk30-mgnt.ip.tdk.dk -l mgntuser
> -p  -o status
>
> Failed: Unable to obtain correct plug status or plug is not available
>
>
>
> [root@khk9dsk35 ~]# fence_ipmilan -a khk9dsk34-mgnt.ip.tdk.dk -l mgntuser
> -p  -o status
>
> Failed: Unable to obtain correct plug status or plug is not available
>
>

From the above I can say that there are some networking issues on host khk9dsk35. You
said that the firewall is OK, so are the IPMI hostnames resolvable?
Please also try to execute the above with '-v' as we may get a bit more info
about the issue.
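
For example (only a sketch, reusing the hostnames/user from your tests; put the real password after -p):

getent hosts khk9dsk30-mgnt.ip.tdk.dk
fence_ipmilan -v -a khk9dsk30-mgnt.ip.tdk.dk -l mgntuser -p <password> -o status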


>
>
> [root@khk9dsk35 ~]# fence_ipmilan -a khk9dsk35-mgnt.ip.tdk.dk -l mgntuser -p
>  -o status
>
> Failed: Unable to obtain correct plug status or plug is not available
>

This is normal, you are usually not able to connect to the IPMI port of the
host from the host itself.


Thanks

Martin Perina

Br,
>
> Peter Calum
>
> *Fra:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *På vegne
> af *Peter Michael Calum
> *Sendt:* 27. juli 2016 21:30
> *Til:* Martin Perina 
> *Cc:* users@ovirt.org
> *Emne:* [Phish] - Re: [ovirt-users] Fencing errors on oVirt Engine
> Version: 4.0.1
>
>
>
> Hi Martin
>
> I was wondering if this could be a firewall problem. We have recently
> introduced new fw rules, and I have not tested fencing on the 2 hypervisors
> before I switched them to ovirt 4, but there was no alarm in the old setup
> before I switched. - I will investigate this further to be sure, and will
> return.
> Thanks for your help
>
> /Peter
>
>
>
>
>
> *Fra:* Martin Perina [mailto:mper...@redhat.com ]
> *Sendt:* 27. juli 2016 20:19
> *Til:* Peter Michael Calum 
> *Cc:* users@ovirt.org; Eli Mesika 
> *Emne:* Re: [ovirt-users] Fencing errors on oVirt Engine Version: 4.0.1
>
>
>
> Hmm, it's really strange that this is working on 3.6 and not on 4.0. I
> don't see any obvious error, so we need to find out what the correct
> parameters for your fencing device are.
>
> The easiest way is to execute fence_ipmilan, according to vdsm.log
> following are your current options:
>
> fence_ipmilan -a khk9dsk35-mgnt.ip.tdk.dk -l mgntuser -p  -o
> status
>
> Does the above work if it's executed from khk9dsk34-mgnt.ip.tdk.dk? Btw
> is the username correct? Shouldn't it be 'mgmtuser'?
>
> If all of above is correct and you are still not able to get power status,
> here are options you could try:
>
> -v
>
> --lanplus
> -4
>
> Martin Perina
>
>
>
> On Wed, Jul 27, 2016 at 5:07 PM, Peter Michael Calum  wrote:
>
> Hi,
>
>
>
> Thank you for answering.
>
>
>
> Here is the logs, one from each node.
>
> I see the test from khk9dsk34 goes over khk9dsk35 and vice versa.
>
>
>
> Same setup as on my redhat 3.6 setup, and no custom options.
>
>
>
> Thanks
>
> Peter
>
>
>
>
>
> *Fra:* Martin Perina [mailto:mper...@redhat.com]
> *Sendt:* 27. juli 2016 11:56
> *Til:* Peter Michael Calum 
> *Cc:* users@ovirt.org; Eli Mesika 
> *Emne:* Re: [ovirt-users] Fencing errors on oVirt Engine Version: 4.0.1
>
>
>
> Hi Peter,
>
> could you please share with us vdsm.log from the host that was used as
> fence proxy (the one that actually executed fence_ipmi agent)?
>
> Also could you please check that fence agent options are the same as on
> 3.6 setup? Do you any any custom options for this specific agent?
>
> Thanks
>
> Martin
>
>
>
> On Wed, Jul 27, 2016 at 8:34 AM, Peter Michael Calum  wrote:
>
> Hi
>
> I’m testing on Ovirt 4.01 and got errors when testing fencing on the hosts
>
> I use IBM x3550M4 as host and ipmilan as fence agent.
>
>
>
> I get this error when testing.
>
>  [Failed: Unable to obtain correct 

Re: [ovirt-users] 3.6 : Moving the hosted-engine to another storage

2016-07-28 Thread Alexis HAUSER
>Unfortunately we know that migrating from HE to HE is not as simple as
>from physical to HE:
>https://bugzilla.redhat.com/show_bug.cgi?id=1240466#c21
>In general the issue is that the DB backup from the old hosted-engine
>VM contains a lot of references to the previous hosted-engine env and
>you cannot simply remove/edit them from the engine since they are locked,
>so you have to manually remove them from the DB which is quite
>risky/error prone.

This is a bit scary. In case of an issue with the engine and trying to recover, it 
could also happen.

What other way would you suggest for backing up the engine VM and being sure to be 
able to restore it as it was without errors? Have you ever tried to 
backup/restore with rsync?
If data is written to the DB while you're performing it, do you think it 
can cause issues? If the ovirt-engine service is stopped, is that problem avoided?

>In the mean time I'd suggest, if feasible, to redeploy a new
>hosted-engine env and reattach there your storage domains and your
>hosts.
>This will imply a downtime.

Ok, I think I'll do that. A downtime isn't a problem right now, as I'm still at 
a pre-production step. (preparing it for production soon)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Safe to upgrade HE hosts from GUI?

2016-07-28 Thread Simone Tiraboschi
On Thu, Jul 28, 2016 at 10:41 AM, Wee Sritippho  wrote:

> On 21/7/2559 16:53, Simone Tiraboschi wrote:
>
> On Thu, Jul 21, 2016 at 11:43 AM, Wee Sritippho 
> wrote:
>
>
>> Can I just follow
>> http://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
>> until step 3 and do everything else via GUI?
>>
> Yes, absolutely.
>
>
> Hi, I upgraded a host (host02) via GUI and now its score is 0. Restarted
> the services but the result is still the same. Kinda lost now. What should
> I do next?
>
>
Can you please attach ovirt-ha-agent logs?
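
(If the default paths are unchanged they should be on the host under /var/log/ovirt-hosted-engine-ha/, e.g.:

/var/log/ovirt-hosted-engine-ha/agent.log
/var/log/ovirt-hosted-engine-ha/broker.log )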


> [root@host02 ~]# service vdsmd restart
> Redirecting to /bin/systemctl restart  vdsmd.service
> [root@host02 ~]# systemctl restart ovirt-ha-broker && systemctl restart
> ovirt-ha-agent
> [root@host02 ~]# systemctl status ovirt-ha-broker
> ● ovirt-ha-broker.service - oVirt Hosted Engine High Availability
> Communications Broker
>Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service;
> enabled; vendor preset: disabled)
>Active: active (running) since Thu 2016-07-28 15:09:38 ICT; 20min ago
>  Main PID: 4614 (ovirt-ha-broker)
>CGroup: /system.slice/ovirt-ha-broker.service
>└─4614 /usr/bin/python
> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker --no-daemon
>
> Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> established
> Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> closed
> Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> established
> Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> closed
> Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> established
> Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> closed
> Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> established
> Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> closed
> Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> established
> Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> closed
> [root@host02 ~]# systemctl status ovirt-ha-agent
> ● ovirt-ha-agent.service - oVirt Hosted Engine High Availability
> Monitoring Agent
>Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service;
> enabled; vendor preset: disabled)
>Active: active (running) since Thu 2016-07-28 15:28:34 ICT; 1min 19s ago
>  Main PID: 11488 (ovirt-ha-agent)
>CGroup: /system.slice/ovirt-ha-agent.service
>└─11488 /usr/bin/python
> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent --no-daemon
>
> Jul 28 15:29:52 host02.ovirt.forest.go.th ovirt-ha-agent[11488]:
> /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
> DeprecationWarning: Dispatcher.pend...instead.
> Jul 28 15:29:52 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending
> = getattr(dispatcher, 'pending', lambda: 0)
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]:
> /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
> DeprecationWarning: Dispatcher.pend...instead.
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending
> = getattr(dispatcher, 'pending', lambda: 0)
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]:
> /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
> DeprecationWarning: Dispatcher.pend...instead.
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending
> = getattr(dispatcher, 'pending', lambda: 0)
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]:
> /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
> DeprecationWarning: Dispatcher.pend...instead.
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending
> = getattr(dispatcher, 'pending', lambda: 0)
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]:
> ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Error:
> 'Attempt to call functi...rt agent
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]:
> ERROR:ovirt_hosted_engine_ha.agent.agent.Agent:Error: 'Attempt to call
> function: teardownIma...rt agent
> Hint: Some lines were ellipsized, use -l to show in full.
> [root@host01 ~]# hosted-engine --vm-status
>
>
> --== Host 

Re: [ovirt-users] Safe to upgrade HE hosts from GUI?

2016-07-28 Thread Wee Sritippho

On 21/7/2559 16:53, Simone Tiraboschi wrote:
On Thu, Jul 21, 2016 at 11:43 AM, Wee Sritippho > wrote:


Can I just follow

http://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
until step 3 and do everything else via GUI?

Yes, absolutely.

Hi, I upgraded a host (host02) via GUI and now its score is 0. Restarted 
the services but the result is still the same. Kinda lost now. What 
should I do next?


[root@host02 ~]# service vdsmd restart
Redirecting to /bin/systemctl restart  vdsmd.service
[root@host02 ~]# systemctl restart ovirt-ha-broker && systemctl restart 
ovirt-ha-agent

[root@host02 ~]# systemctl status ovirt-ha-broker
● ovirt-ha-broker.service - oVirt Hosted Engine High Availability 
Communications Broker
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; 
enabled; vendor preset: disabled)

   Active: active (running) since Thu 2016-07-28 15:09:38 ICT; 20min ago
 Main PID: 4614 (ovirt-ha-broker)
   CGroup: /system.slice/ovirt-ha-broker.service
   └─4614 /usr/bin/python 
/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker --no-daemon


Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
closed
Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
closed
Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
closed
Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
closed
Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
closed

[root@host02 ~]# systemctl status ovirt-ha-agent
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability 
Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; 
enabled; vendor preset: disabled)

   Active: active (running) since Thu 2016-07-28 15:28:34 ICT; 1min 19s ago
 Main PID: 11488 (ovirt-ha-agent)
   CGroup: /system.slice/ovirt-ha-agent.service
   └─11488 /usr/bin/python 
/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent --no-daemon


Jul 28 15:29:52 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: 
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: 
DeprecationWarning: Dispatcher.pend...instead.
Jul 28 15:29:52 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending 
= getattr(dispatcher, 'pending', lambda: 0)
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: 
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: 
DeprecationWarning: Dispatcher.pend...instead.
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending 
= getattr(dispatcher, 'pending', lambda: 0)
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: 
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: 
DeprecationWarning: Dispatcher.pend...instead.
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending 
= getattr(dispatcher, 'pending', lambda: 0)
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: 
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: 
DeprecationWarning: Dispatcher.pend...instead.
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending 
= getattr(dispatcher, 'pending', lambda: 0)
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: 
ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Error: 
'Attempt to call functi...rt agent
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: 
ERROR:ovirt_hosted_engine_ha.agent.agent.Agent:Error: 'Attempt to call 
function: teardownIma...rt agent

Hint: Some lines were ellipsized, use -l to show in full.
[root@host01 ~]# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date  : True
Hostname   : host01.ovirt.forest.go.th
Host ID: 1
Engine status  : {"health": "good", "vm": "up", 

Re: [ovirt-users] [ovirt-devel] Is there some guideline how to replace Linux Bridge with ovs on vdsm?

2016-07-28 Thread Edward Haas
On Thu, Jul 28, 2016 at 4:32 AM, lifuqiong  wrote:

> *Is there some solution or advice for using OVS instead of the Linux bridge if
> we are still using oVirt 4.0.0?*
>
>
>
> *Thank you*
>

I guess you could try the OVS hook, but it was used as a PoC only. You can
play with it, but I would recommend upgrading to 4.0.2.
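
If you want to check whether the experimental hook is even packaged in the repos you have enabled, a quick (purely illustrative) search is:

yum search vdsm-hook | grep -i ovs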

What exactly is your setup goal?


>
>
>
> On Wed, Jul 27, 2016 at 4:52 PM, lifuqiong 
> wrote:
>
> It's announced that OVS is already supported in build 4.0.0. Do you mean we
> can't use OVS yet?
>
>
>
> It is available in 4.0.2 as a tech-preview.
>
> You should be aware that it may be unstable at this point, work on it is
> still in progress
>
> and we look forward for the community feedback.
>
>
>
>
>
>
>
> Hello Mark,
>
> OVS limited support is coming out in the next build (4.0.2), it is in
> tech-preview stage.
>
> At the moment, there is no ovs-dpdk integration support.
>
> Thanks,
>
> Edy.
>
>
>
> On Wed, Jul 27, 2016 at 10:03 AM, lifuqiong 
> wrote:
>
> Hi,
>
>  I want to use OVS to replace the Linux bridge. Is there some
> guideline or advice? If I want to use a DPDK vhost-user port, how can I do
> that?
>
>
>
> Thank you
>
> Mark
>
>
> ___
> Devel mailing list
> de...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt Hosted-Engine not installing ERROR: 'OVEHOSTED_NETWORK/host_name'

2016-07-28 Thread Simone Tiraboschi
On Thu, Jul 28, 2016 at 9:22 AM, Simone Tiraboschi  wrote:
> On Thu, Jul 28, 2016 at 7:50 AM, Yedidyah Bar David  wrote:
>> On Wed, Jul 27, 2016 at 8:42 PM, Florian Nolden  wrote:
>>> Hello,
>>>
>>> I try to install Ovirt 4.0.1-1 on a fresh installed CentOS 7.2 using a

Another thing: both the bugged version (2.0.1.2) and the fixed one
(2.0.1.3) are available only in the 4.0.2 Second Release Candidate
repo, which has not yet reached GA status.
The latest release is oVirt 4.0.1, so maybe you are also using the
wrong repo if you want that.

>>> replica 3 glusterfs. But I have trouble deploying the hosted engine.
>>>
>>> hosted-engine --deploy
>>>
>>> /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15:
>>> DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is
>>> deprecated, please use vdsm.jsonrpcvdscli
>>>   import vdsm.vdscli
>>>
>>> [ ERROR ] Failed to execute stage 'Environment customization':
>>> 'OVEHOSTED_NETWORK/host_name'
>
> The issue was caused by this patch
>  https://gerrit.ovirt.org/#/c/61078/
> yesterday we reverted it and built a new version (2.0.1.3) of
> hosted-engine-setup without that.
> It's already available:
> http://resources.ovirt.org/pub/ovirt-4.0-pre/rpm/el7/noarch/ovirt-hosted-engine-setup-2.0.1.3-1.el7.centos.noarch.rpm
>
>>> VDSM also did not create the ovirtmgmt bridge or the routing tables.
>>>
>>> I used the CentOS 7 minimal, and selected Infrastructure Server. I added the
>>> Puppet 4 repo and the Ovirt 4.0 Repo, no EPEL.
>>> I can reproduce it on 3 similar installed servers.
>>>
>>> Any Ideas?
>>
>> Please share the setup log. Thanks.
>>
>> Best,
>> --
>> Didi
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt Hosted-Engine not installing ERROR: 'OVEHOSTED_NETWORK/host_name'

2016-07-28 Thread Simone Tiraboschi
On Thu, Jul 28, 2016 at 7:50 AM, Yedidyah Bar David  wrote:
> On Wed, Jul 27, 2016 at 8:42 PM, Florian Nolden  wrote:
>> Hello,
>>
>> I try to install Ovirt 4.0.1-1 on a fresh installed CentOS 7.2 using a
>> replica 3 glusterfs. But I have trouble deploying the hosted engine.
>>
>> hosted-engine --deploy
>>
>> /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15:
>> DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is
>> deprecated, please use vdsm.jsonrpcvdscli
>>   import vdsm.vdscli
>>
>> [ ERROR ] Failed to execute stage 'Environment customization':
>> 'OVEHOSTED_NETWORK/host_name'

The issue was caused by this patch:
 https://gerrit.ovirt.org/#/c/61078/
Yesterday we reverted it and built a new version (2.0.1.3) of
hosted-engine-setup without it.
It's already available:
http://resources.ovirt.org/pub/ovirt-4.0-pre/rpm/el7/noarch/ovirt-hosted-engine-setup-2.0.1.3-1.el7.centos.noarch.rpm
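
If you want to pick it up right away, yum can install directly from that URL (just an example invocation):

yum install http://resources.ovirt.org/pub/ovirt-4.0-pre/rpm/el7/noarch/ovirt-hosted-engine-setup-2.0.1.3-1.el7.centos.noarch.rpm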

>> VDSM also did not create the ovirtmgmt bridge or the routing tables.
>>
>> I used the CentOS 7 minimal, and selected Infrastructure Server. I added the
>> Puppet 4 repo and the Ovirt 4.0 Repo, no EPEL.
>> I can reproduce it on 3 similar installed servers.
>>
>> Any Ideas?
>
> Please share the setup log. Thanks.
>
> Best,
> --
> Didi
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ansible oVirt storage management module

2016-07-28 Thread Oved Ourfali
Hi

Adding some relevant folks here.
Until our module is ready I don't think there should be an issue with
pushing this.
However, there might be a name collision in the future.

Juan/Ondra - thoughts?

Thanks,
Oved


On Wed, Jul 27, 2016 at 8:07 PM, Groten, Ryan 
wrote:

> Thanks for the feedback Yaniv, I’d love to see a python module for ovirt
> actions.  What I’ve built is simple and a bit limited to the features I
> needed, so if we can implement an existing library that has all the
> features and is proven to work then the automation tasks would be much
> simpler.
>
> And of course, a supported Ansible module would be even better, it would
> probably get more traction towards being added as an extra or core module
> that way too.
>
>
>
> If you think this would be a valuable addition to the extra modules that
> ship with Ansible (at least temporarily), I’d appreciate if you comment
> ‘shipit’ on the pull request to mark it for inclusion!
>
>
>
> Thanks,
>
> Ryan
>
>
>
> *From:* Yaniv Kaul [mailto:yk...@redhat.com]
> *Sent:* Sunday, July 24, 2016 4:52 AM
> *To:* Groten, Ryan 
> *Cc:* users 
> *Subject:* Re: [ovirt-users] Ansible oVirt storage management module
>
>
>
>
>
>
>
> On Wed, Jul 20, 2016 at 12:00 AM, Groten, Ryan 
> wrote:
>
> Hey Ansible users,
>
>
>
> I wrote a module for storage management and created a pull request to have
> it added as an Extra module in Ansible.  It can be used to
> create/delete/attach/destroy pool disks.
>
>
>
> https://github.com/ansible/ansible-modules-extras/pull/2509
>
>
>
> Ryan
>
>
>
> Hi Ryan,
>
>
>
> This looks really interesting and surely is useful.
>
> My only comment would be that I think we should start to think about some
> Python module for oVirt actions.
>
> Otherwise, every project (this, ovirt-system-tests, others) that uses the
> oVirt Python SDK for oVirt automation more or less re-implements the same
> functions.
>
> What do you think?
>
>
>
> Also, we are thinking about auto-generating such Ansible playbook (same
> way as the SDKs are generated).
>
> It might look less 'human', but it will always be complete and up-to-date
> with all features.
>
> Thanks,
>
> Y.
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users