Re: [vdsm] Help needed to debug segfault when using ctypes

2013-08-22 Thread Deepak C Shetty
I haven't worked much with ctypes in Python, but did you try looking into the
core dump to see where exactly the segfault is happening? That might give some
clues for further debugging.
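
A very common cause of ctypes segfaults on 64-bit hosts is leaving restype at its
default (int), so the pointer a library returns gets truncated before it is passed
back in. A minimal sketch of the kind of declarations a libgfapi binding needs to
guard against that (function names are from the public gfapi.h; the actual gfapi.py
in the patch above may of course differ):

import ctypes

# Load the GlusterFS client API library.
api = ctypes.CDLL("libgfapi.so.0")

# Declare return/argument types explicitly. Without restype = c_void_p,
# the glfs_t* returned by glfs_new() is truncated to a C int on 64-bit
# and the next call into the library can segfault.
api.glfs_new.restype = ctypes.c_void_p
api.glfs_new.argtypes = [ctypes.c_char_p]
api.glfs_set_volfile_server.argtypes = [ctypes.c_void_p, ctypes.c_char_p,
                                        ctypes.c_char_p, ctypes.c_int]
api.glfs_init.argtypes = [ctypes.c_void_p]
api.glfs_fini.argtypes = [ctypes.c_void_p]

fs = api.glfs_new("gv1")
api.glfs_set_volfile_server(fs, "tcp", "localhost", 24007)
api.glfs_init(fs)
# ... call glfs_statvfs() here with a statvfs buffer ...
api.glfs_fini(fs)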


On 08/22/2013 05:53 PM, Aravinda wrote:

Hi,

In the following patch I am using ctypes to load libgfapi (the GlusterFS API
library) to get the Gluster volume statvfs information.

http://gerrit.ovirt.org/#/c/17822

I am getting *segfault* when I run
vdsClient 0 glusterVolumeSizeInfoGet volumeName=gv1

But I checked as below and it is working.

cd /usr/share/vdsm
python
 >>> from gluster import gfapi
 >>> print gfapi.volumeStatvfs('gv1')
posix.statvfs_result(f_bsize=4096L, f_frsize=4096L, 
f_blocks=25803070L, f_bfree=19130426L, f_bavail=17819706L, 
f_files=6553600L, f_ffree=5855876L, f_favail=5855876L, 
f_flag=4096L, f_namemax=255L)


Please suggest how I can debug this issue.

--
Regards
Aravinda






Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-12 Thread Deepak C Shetty

On 08/12/2013 04:51 PM, Andrew Cathrow wrote:

- Forwarded Message -

From: Itamar Heim ih...@redhat.com
To: Sahina Bose sab...@redhat.com
Cc: engine-devel engine-de...@ovirt.org, VDSM Project
Development vdsm-devel@lists.fedorahosted.org
Sent: Wednesday, August 7, 2013 1:30:54 PM
Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage
Domain

On 08/07/2013 08:21 AM, Sahina Bose wrote:

[Adding engine-devel]

On 08/06/2013 10:48 AM, Deepak C Shetty wrote:

Hi All,
 There were 2 learnings from BZ
https://bugzilla.redhat.com/show_bug.cgi?id=988299

1) Gluster RPM deps were not proper in VDSM when using Gluster
Storage
Domain. This has been partly addressed
by the gluster-devel thread @
http://lists.gnu.org/archive/html/gluster-devel/2013-08/msg8.html
and will be fully addressed once Gluster folks ensure their
packaging
is friendly enuf for VDSM to consume
just the needed bits. Once that happens, i will be sending a
patch to
vdsm.spec.in to update the gluster
deps correctly. So this issue gets addressed in near term.

2) Gluster storage domain needs minimum libvirt 1.0.1 and qemu
1.3.

libvirt 1.0.1 has the support for representing gluster as a
network
block device and qemu 1.3 has the
native support for gluster block backend which supports
gluster://...
URI way of representing a gluster
based file (aka volume/vmdisk in VDSM case). Many distros (incl.
centos 6.4 in the BZ) won't have qemu
1.3 in their distro repos! How do we handle this dep in VDSM ?

Do we disable gluster storage domain in oVirt engine if VDSM reports
qemu < 1.3 as part of getCapabilities ?
or
Do we ensure qemu 1.3 is present in ovirt.repo assuming
ovirt.repo is
always present on VDSM hosts in which
case when VDSM gets installed, qemu 1.3 dep in vdsm.spec.in will
install qemu 1.3 from the ovirt.repo
instead of the distro repo. This means vdsm.spec.in will have
qemu >= 1.3 under Requires.


Is this possible to make this a conditional install? That is,
only if
Storage Domain = GlusterFS in the Data center, the bootstrapping
of host
will install the qemu 1.3 and dependencies.

(The question still remains as to where the qemu 1.3 rpms will be
available)

RHEL 6.5 (and so CentOS 6.5) will get backported libgfapi support, so we 
shouldn't need to require qemu 1.3, just the appropriate qemu-kvm version from 
6.5.

https://bugzilla.redhat.com/show_bug.cgi?id=848070


So IIUC this means we don't do anything special in vdsm.spec.in to 
handle the qemu 1.3 dep ?
If so... what happens when a user uses F17/F18 (as an example) on the 
VDSM host.. their repos probably
won't have a qemu-kvm that has libgfapi support... how do we handle it ?
Do we just release-note it ?


thanx,
deepak



Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-12 Thread Deepak C Shetty

On 08/12/2013 06:32 PM, Andrew Cathrow wrote:


- Original Message -

From: Deepak C Shetty deepa...@linux.vnet.ibm.com
To: vdsm-devel@lists.fedorahosted.org
Sent: Monday, August 12, 2013 8:59:37 AM
Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

On 08/12/2013 04:51 PM, Andrew Cathrow wrote:

- Forwarded Message -

From: Itamar Heim ih...@redhat.com
To: Sahina Bose sab...@redhat.com
Cc: engine-devel engine-de...@ovirt.org, VDSM Project
Development vdsm-devel@lists.fedorahosted.org
Sent: Wednesday, August 7, 2013 1:30:54 PM
Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster
Storage
Domain

On 08/07/2013 08:21 AM, Sahina Bose wrote:

[Adding engine-devel]

On 08/06/2013 10:48 AM, Deepak C Shetty wrote:

Hi All,
  There were 2 learnings from BZ
https://bugzilla.redhat.com/show_bug.cgi?id=988299

1) Gluster RPM deps were not proper in VDSM when using Gluster
Storage
Domain. This has been partly addressed
by the gluster-devel thread @
http://lists.gnu.org/archive/html/gluster-devel/2013-08/msg8.html
and will be fully addressed once Gluster folks ensure their
packaging
is friendly enuf for VDSM to consume
just the needed bits. Once that happens, i will be sending a
patch to
vdsm.spec.in to update the gluster
deps correctly. So this issue gets addressed in near term.

2) Gluster storage domain needs minimum libvirt 1.0.1 and qemu
1.3.

libvirt 1.0.1 has the support for representing gluster as a
network
block device and qemu 1.3 has the
native support for gluster block backend which supports
gluster://...
URI way of representing a gluster
based file (aka volume/vmdisk in VDSM case). Many distros
(incl.
centos 6.4 in the BZ) won't have qemu
1.3 in their distro repos! How do we handle this dep in VDSM ?

Do we disable gluster storage domain in oVirt engine if VDSM reports
qemu < 1.3 as part of getCapabilities ?
or
Do we ensure qemu 1.3 is present in ovirt.repo assuming
ovirt.repo is
always present on VDSM hosts in which
case when VDSM gets installed, qemu 1.3 dep in vdsm.spec.in
will
install qemu 1.3 from the ovirt.repo
instead of the distro repo. This means vdsm.spec.in will have
qemu >= 1.3 under Requires.


Is this possible to make this a conditional install? That is,
only if
Storage Domain = GlusterFS in the Data center, the bootstrapping
of host
will install the qemu 1.3 and dependencies.

(The question still remains as to where the qemu 1.3 rpms will
be
available)

RHEL6.5 (and so CentOS 6.5) will get backported libgfapi support so
we shouldn't need to require qemu 1.3 just the appropriate
qemu-kvm version from 6.5

https://bugzilla.redhat.com/show_bug.cgi?id=848070

So IIUC this means we don't do anything special in vdsm.spec.in to
handle qemu 1.3 dep ?
If so... what happens when User uses F17/F18 ( as an example) on the
VDSM host.. their repos probably
won't have qemu-kvm which has libgfapi support... how do we handle
it.
Do we just release-note it ?


For the Fedora spec we'd need to use a >= 1.3 dependency, but for *EL6 it'd need 
to be 0.12-whatever-6.5-has


I would love to hear how. I am waiting on some resolution for this, so 
that I can close the 3.3 blocker BZ


For Fedora, if I put qemu-kvm >= 1.3 in vdsm.spec.in, then F17/F18 can't 
be used as a VDSM host, and that may not be acceptable.


thanx,
deepak



Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-12 Thread Deepak C Shetty

On 08/12/2013 07:22 PM, Andrew Cathrow wrote:


- Original Message -

From: Deepak C Shetty deepa...@linux.vnet.ibm.com
To: VDSM Project Development vdsm-devel@lists.fedorahosted.org
Sent: Monday, August 12, 2013 9:39:21 AM
Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

On 08/12/2013 06:32 PM, Andrew Cathrow wrote:

- Original Message -

From: Deepak C Shetty deepa...@linux.vnet.ibm.com
To: vdsm-devel@lists.fedorahosted.org
Sent: Monday, August 12, 2013 8:59:37 AM
Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage
Domain

On 08/12/2013 04:51 PM, Andrew Cathrow wrote:

- Forwarded Message -

From: Itamar Heim ih...@redhat.com
To: Sahina Bose sab...@redhat.com
Cc: engine-devel engine-de...@ovirt.org, VDSM Project
Development vdsm-devel@lists.fedorahosted.org
Sent: Wednesday, August 7, 2013 1:30:54 PM
Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster
Storage
Domain

On 08/07/2013 08:21 AM, Sahina Bose wrote:

[Adding engine-devel]

On 08/06/2013 10:48 AM, Deepak C Shetty wrote:

Hi All,
   There were 2 learnings from BZ
https://bugzilla.redhat.com/show_bug.cgi?id=988299

1) Gluster RPM deps were not proper in VDSM when using
Gluster
Storage
Domain. This has been partly addressed
by the gluster-devel thread @
http://lists.gnu.org/archive/html/gluster-devel/2013-08/msg8.html
and will be fully addressed once Gluster folks ensure their
packaging
is friendly enuf for VDSM to consume
just the needed bits. Once that happens, i will be sending a
patch to
vdsm.spec.in to update the gluster
deps correctly. So this issue gets addressed in near term.

2) Gluster storage domain needs minimum libvirt 1.0.1 and
qemu
1.3.

libvirt 1.0.1 has the support for representing gluster as a
network
block device and qemu 1.3 has the
native support for gluster block backend which supports
gluster://...
URI way of representing a gluster
based file (aka volume/vmdisk in VDSM case). Many distros
(incl.
centos 6.4 in the BZ) won't have qemu
1.3 in their distro repos! How do we handle this dep in VDSM
?

Do we disable gluster storage domain in oVirt engine if VDSM reports
qemu < 1.3 as part of getCapabilities ?
or
Do we ensure qemu 1.3 is present in ovirt.repo assuming
ovirt.repo is
always present on VDSM hosts in which
case when VDSM gets installed, qemu 1.3 dep in vdsm.spec.in
will
install qemu 1.3 from the ovirt.repo
instead of the distro repo. This means vdsm.spec.in will have
qemu >= 1.3 under Requires.


Is this possible to make this a conditional install? That is,
only if
Storage Domain = GlusterFS in the Data center, the
bootstrapping
of host
will install the qemu 1.3 and dependencies.

(The question still remains as to where the qemu 1.3 rpms will
be
available)

RHEL6.5 (and so CentOS 6.5) will get backported libgfapi support
so
we shouldn't need to require qemu 1.3 just the appropriate
qemu-kvm version from 6.5

https://bugzilla.redhat.com/show_bug.cgi?id=848070

So IIUC this means we don't do anything special in vdsm.spec.in to
handle qemu 1.3 dep ?
If so... what happens when User uses F17/F18 ( as an example) on
the
VDSM host.. their repos probably
won't have qemu-kvm which has libgfapi support... how do we handle
it.
Do we just release-note it ?


For the Fedora spec we'd need to use a >= 1.3 dependency, but for
*EL6 it'd need to be 0.12-whatever-6.5-has

I would love to hear how. I am waiting on some resolution for this,
so
that I can close the 3.3 blocker BZ

For Fedora, if I put qemu-kvm >= 1.3 in vdsm.spec.in, then F17/F18 can't
be used as a VDSM host, and that may not be acceptable.


What options do we have for fedora f19?
virt-preview may be an option for F18 but F17 is out of luck ..


what do you mean by 'out of luck'.. I thought virt-preview had F17/F18 
repos, no ?
Another Q to answer would be.. do we support F17 as a valid vdsm host 
for 3.3 ?




Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-12 Thread Deepak C Shetty

On 08/12/2013 09:50 PM, Vijay Bellur wrote:




On Mon, Aug 12, 2013 at 9:45 PM, Itamar Heim ih...@redhat.com 
mailto:ih...@redhat.com wrote:


On 08/12/2013 04:55 PM, Deepak C Shetty wrote:


what do you mean by 'out of luck'.. I thot virt-preview had
F17/F18
repos, no ?
Another Q to answer would be.. Do we support F17 as a valid
vdsm host
for 3.3 ?


iirc, F17 isn't supported by fedora once F19 is out, so no more
updates to it. using fedora you are moving fast, but with a
shorter support/update cycle i guess.


F17 entered EOL on 07/30:

http://fedoraproject.org/wiki/End_of_life

-Vijay


Thanks all.. I was concerned about F17, since vdsm.spec still has >= F17 
in many places.
Good to know that I don't have to worry about F17. Will post vdsm.spec 
changes in a patch soon.



I don't think anyone tested 3.3 on F17.







[vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-05 Thread Deepak C Shetty

Hi All,
There were 2 learnings from BZ 
https://bugzilla.redhat.com/show_bug.cgi?id=988299


1) Gluster RPM deps were not proper in VDSM when using Gluster Storage 
Domain. This has been partly addressed
by the gluster-devel thread @ 
http://lists.gnu.org/archive/html/gluster-devel/2013-08/msg8.html
and will be fully addressed once Gluster folks ensure their packaging is 
friendly enough for VDSM to consume
just the needed bits. Once that happens, I will be sending a patch to 
vdsm.spec.in to update the gluster

deps correctly. So this issue gets addressed in near term.

2) Gluster storage domain needs minimum libvirt 1.0.1 and qemu 1.3.

libvirt 1.0.1 has the support for representing gluster as a network 
block device and qemu 1.3 has the
native support for gluster block backend which supports gluster://... 
URI way of representing a gluster
based file (aka volume/vmdisk in VDSM case). Many distros (incl. centos 
6.4 in the BZ) won't have qemu

1.3 in their distro repos! How do we handle this dep in VDSM ?

Do we disable gluster storage domain in oVirt engine if VDSM reports 
qemu < 1.3 as part of getCapabilities ?

or
Do we ensure qemu 1.3 is present in ovirt.repo assuming ovirt.repo is 
always present on VDSM hosts in which
case when VDSM gets installed, qemu 1.3 dep in vdsm.spec.in will install 
qemu 1.3 from the ovirt.repo
instead of the distro repo. This means vdsm.spec.in will have qemu >= 
1.3 under Requires.


What will be a good way to handle this ?
Appreciate your response

thanx,
deepak
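
For what it's worth, the first option boils down to a version check against what
the host reports. A rough sketch, assuming the qemu-kvm version shows up under the
'packages2' key of getVdsCapabilities (the exact key names are an assumption here):

from distutils.version import LooseVersion

def supportsGlusterDomain(caps):
    """Return True if the host's qemu is new enough for the
    native gluster:// block backend (qemu >= 1.3)."""
    # 'packages2' / 'qemu-kvm' layout assumed from getVdsCapabilities output
    qemu = caps.get('packages2', {}).get('qemu-kvm', {})
    version = qemu.get('version')
    if version is None:
        return False
    return LooseVersion(version) >= LooseVersion('1.3')

Of course, as discussed in the replies above, EL6 gets the libgfapi support
backported into qemu-kvm 0.12.x, so a plain version compare like this would not be
enough there.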



Re: [vdsm] Exploiting domain specific offload features

2013-07-25 Thread Deepak C Shetty

On 07/24/2013 07:55 PM, Federico Simoncelli wrote:

- Original Message -

From: Federico Simoncelli fsimo...@redhat.com
To: Itamar Heim ih...@redhat.com
Cc: Deepak C Shetty deepa...@linux.vnet.ibm.com, vdsm-devel@lists.fedorahosted.org, 
Ayal Baron
aba...@redhat.com
Sent: Wednesday, July 24, 2013 4:22:02 PM
Subject: Re: [vdsm] Exploiting domain specific offload features
- Original Message -

From: Itamar Heim ih...@redhat.com
To: Federico Simoncelli fsimo...@redhat.com
Cc: Deepak C Shetty deepa...@linux.vnet.ibm.com,
vdsm-devel@lists.fedorahosted.org
Sent: Wednesday, July 24, 2013 3:35:35 PM
Subject: Re: [vdsm] Exploiting domain specific offload features

On 07/24/2013 03:38 PM, Federico Simoncelli wrote:

I think we can already start exploiting cloning whenever we need to copy
a volume maintaining the same format (raw=raw, cow=cow).


you still need to tailor the flow from engine's perspective, right?
or just override the entity created by the engine with the native cloned
one for simplicity?

No, this change would be transparent to the engine. When vdsm is asked to
clone/copy an image (eg. iirc create a non-thin-provisioned vm from template)

Maybe in this case we use a hard-link (no time to check now). Anyway, the concept is
that if we ever need to copy a volume within the same storage domain, that can be
offloaded to gluster.


I am not sure if it's clear, but I wanted to stress that when Gluster 
provides the clone/snapshot
offloads.. the new files (which map to LVs, as Gluster is configured with 
the block backend) will be seen as
normal files on the Gluster mount (aka the Gluster storage domain). But VDSM 
expects the snapshot
to appear as base <-- qcow2 in the FS domain, which won't happen in this 
case. Will something
break in engine/VDSM assumptions and/or flows when this happens ?

thanx,
deepak





it would use the gluster clone capability to offload the volume copy.




Re: [vdsm] Exploiting domain specific offload features

2013-07-17 Thread Deepak C Shetty

On 07/17/2013 12:48 PM, M. Mohan Kumar wrote:

Hello,

We are adding features such as server-offloaded cloning, snapshots of
the files (i.e. VM disks) and zeroed VM disk allocation in GlusterFS. As of
now only the BD xlator supports offloaded cloning & snapshot. Server-offloaded
zeroing of VM disks is supported by both the posix and BD xlators.
The initial approach is to use the xattr interface to trigger these offload
features, such as
# setfattr -n clone -v <path-to-new-clone-file> <path-to-source-file>
which will create a clone of <path-to-source-file> in <path-to-new-clone-file>.
Cloning is done on the GlusterFS server side and is a kind of server-offloaded
copy. Similarly a snapshot can also be taken using the setfattr approach.

GlusterFS storage domain is already part of VDSM and we want to exploit
offload features provided by GlusterFS through VDSM. Is there any way to
exploit these features from VDSM as of now?


Mohan,
IIUC, zeroing of files in GlusterFS is supported for both the posix and 
block backends of GlusterFS.
Today VDSM does the zeroing itself (as part of the preallocated vmdisk flow) 
using 'dd'. If GlusterFS supports
zeroing, this feature can be exploited in VDSM (by overriding the create 
volume flow as needed) so that we can save
compute resources on the VDSM host when a Gluster domain is being used.

Regarding exploiting clone and snapshot: IIUC these are handled natively 
by VDSM today... it expects that snapshots are qcow2 based and that they form 
the image chain etc. With snapshot and clone handled transparently in Gluster, 
these notions of VDSM will be broken, so it probably needs a lot of changes in 
lots of places in VDSM to exploit these.
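
For reference, a minimal sketch of how the xattr-based clone trigger from Mohan's
mail could be driven from Python (the xattr name 'clone' and its semantics are taken
from that mail; this is only illustrative, not an agreed VDSM flow):

import subprocess

def cloneOffload(srcPath, dstPath):
    """Ask the GlusterFS server to clone srcPath into dstPath
    (server-side copy), instead of copying data on the host."""
    # Equivalent of: setfattr -n clone -v <dstPath> <srcPath>
    rc = subprocess.call(['setfattr', '-n', 'clone', '-v', dstPath, srcPath])
    if rc != 0:
        raise RuntimeError("clone offload failed for %s -> %s" % (srcPath, dstPath))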


Federico/Ayal,
Wanted to know your comments/opinion on this ?
Is there a way to exploit these features in VDSM Gluster storage domain 
in an elegant way ?


thanx,
deepak



Re: [vdsm] [Gluster-devel] How to figure out the transport type of Gluster volume from VDSM host (which is not a gluster peer) ?

2013-05-09 Thread Deepak C Shetty

On 05/09/2013 10:22 AM, Kaushal M wrote:

On Thu, May 9, 2013 at 8:17 AM, Vijay Bellur vbel...@redhat.com wrote:

On 05/07/2013 10:20 AM, Deepak C Shetty wrote:

On 05/07/2013 01:13 AM, Vijay Bellur wrote:

On 05/06/2013 08:03 PM, Deepak C Shetty wrote:

2) Use gluster ::system getspec volname

I tried this but it never worked for me... whats the right way of using
this ?
For me.. it just returned back to shell w/o dumping the volfile at all!


The right way to use would be this:

# gluster --remote-host=<server> system:: getspec <volname>


This worked for me.. when I was on a non-peer which was in the same
subnet as the gluster host
But when i tried the same from my laptop (not in the same subnet) it
didn't work.. Pls see below

Also, you had indicated that this may not be a long supported option..
so wondering if it makes sense to use it in VDSM ?


Supporting --remote-host for other volume operations doesn't look like a
good idea. But we can retain the interface for fetching a volume spec file.



  From my laptop
--
[root@deepakcs-lx ~]# gluster --version
glusterfs 3.2.7 built on Aug 27 2012 19:47:26
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU
General Public License.


[root@deepakcs-lx ~]# gluster --remote-host=llmvm02.in.ibm.com system::
getspec fio
[root@deepakcs-lx ~]# echo $?
255

and on server side.. the below error was reported...

[2013-05-07 04:44:06.517924] W [rpcsvc.c:180:rpcsvc_program_actor]
0-rpc-service: RPC program version not available (req 14398633 1)
[2013-05-07 04:44:06.517992] E
[rpcsvc.c:448:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed
to complete successfully

So maybe its due to my gluster being old ?


Yes, this is related to interaction between 3.2 and a later version.



  From my colleague's laptop, having recent gluster


<bharata> # gluster --version
<bharata> glusterfs 3.4.0alpha2 built on Apr 10 2013 08:28:37
<bharata> Repository revision: git://git.gluster.com/glusterfs.git
<bharata> Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
<bharata> GlusterFS comes with ABSOLUTELY NO WARRANTY.
<bharata> You may redistribute copies of GlusterFS under the terms of
the GNU General Public License.

It fails here too.. the cli returned error 240 instead of 255 (in my
laptop case) and server side had below error...

<bharata> [2013-05-07 04:45:08.124567] I
[glusterd-handshake.c:155:server_getspec] 0-management: Client
9.124.35.231:1023 doesn't support required op-version. Rejecting getspec
request.


This seems to be related to the recent op-version implementation. CC'ing
Kaushal for that.


I didn't know about the 'system:: getspec' command before, so didn't
account for it in the getspec handler.
I'll do the necessary changes.


Kaushal, thanks, can u pls Cc me on the bug, so that I can track it. 
This is now a dep for me/VDSM work.


Vijay,
Another Q: with the above fix only getting into 3.4... I believe the 
VDSM dependency on gluster will then be a minimum of 3.4 ?






So it looks like for getspec to work the non-peer host and gluster host
versions should match exactly or something... and if it's so stringent, I
am not sure if it makes sense to use the --remote-host approach in
VDSM.. the concern being that there could be too many such version issues, with VDSM
failing and users not getting it working... what say ?


3.3 and 3.4 should be interoperable. Anything that impedes this should be
treated as a bug. If you have a real need for the fetch spec interface to
work with --remote-host, we can retain it.

Thanks,

Vijay








Re: [vdsm] [Gluster-devel] How to figure out the transport type of Gluster volume from VDSM host (which is not a gluster peer) ?

2013-05-09 Thread Deepak C Shetty

On 05/09/2013 08:17 AM, Vijay Bellur wrote:

On 05/07/2013 10:20 AM, Deepak C Shetty wrote:

On 05/07/2013 01:13 AM, Vijay Bellur wrote:

On 05/06/2013 08:03 PM, Deepak C Shetty wrote:

2) Use gluster ::system getspec volname

I tried this but it never worked for me... whats the right way of 
using

this ?
For me.. it just returned back to shell w/o dumping the volfile at 
all!


The right way to use would be this:

# gluster --remote-host=<server> system:: getspec <volname>


This worked for me.. when I was on a non-peer which was in the same
subnet as the gluster host
But when i tried the same from my laptop (not in the same subnet) it
didn't work.. Pls see below

Also, you had indicated that this may not be a long supported option..
so wondering if it makes sense to use it in VDSM ?


Supporting --remote-host for other volume operations doesn't look like 
a good idea. But we can retain the interface for fetching a volume 
spec file.


Thanks!




 From my laptop
--
[root@deepakcs-lx ~]# gluster --version
glusterfs 3.2.7 built on Aug 27 2012 19:47:26
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU
General Public License.


[root@deepakcs-lx ~]# gluster --remote-host=llmvm02.in.ibm.com system::
getspec fio
[root@deepakcs-lx ~]# echo $?
255

and on server side.. the below error was reported...

[2013-05-07 04:44:06.517924] W [rpcsvc.c:180:rpcsvc_program_actor]
0-rpc-service: RPC program version not available (req 14398633 1)
[2013-05-07 04:44:06.517992] E
[rpcsvc.c:448:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed
to complete successfully

So maybe its due to my gluster being old ?


Yes, this is related to interaction between 3.2 and a later version.



 From my colleague's laptop, having recent gluster


<bharata> # gluster --version
<bharata> glusterfs 3.4.0alpha2 built on Apr 10 2013 08:28:37
<bharata> Repository revision: git://git.gluster.com/glusterfs.git
<bharata> Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
<bharata> GlusterFS comes with ABSOLUTELY NO WARRANTY.
<bharata> You may redistribute copies of GlusterFS under the terms of
the GNU General Public License.

It fails here too.. the cli returned error 240 instead of 255 (in my
laptop case) and server side had below error...

<bharata> [2013-05-07 04:45:08.124567] I
[glusterd-handshake.c:155:server_getspec] 0-management: Client
9.124.35.231:1023 doesn't support required op-version. Rejecting getspec
request.


This seems to be related to the recent op-version implementation. 
CC'ing Kaushal for that.





So it looks like for getspec to work the non-peer host and gluster host
versions should match exactly or something... and if it's so stringent, I
am not sure if it makes sense to use the --remote-host approach in
VDSM.. the concern being that there could be too many such version issues, with VDSM
failing and users not getting it working... what say ?


3.3 and 3.4 should be interoperable. Anything that impedes this should 
be treated as a bug. If you have a real need for the fetch spec 
interface to work with --remote-host, we can retain it.


I hope the VDSM Gluster storage domain use case satisfies the need to 
retain --remote-host.


Any idea if --remote-host is supported in VDSM's gluster plugin ? I did 
a quick find and didn't see it

(CCing Bala too)

thanx,
deepak




[vdsm] How to figure out the transport type of Gluster volume from VDSM host (which is not a gluster peer) ?

2013-05-06 Thread Deepak C Shetty

Hi Lists,
I am looking at options to figure out the transport type of a gluster 
volume (given the volfileserver:volname) from a host that is *not* part 
of the gluster volume (aka not a gluster peer).


The context here is GlusterFS as a storage domain in oVirt/VDSM, which 
is currently available in upstream oVirt.
This feature exploits the QEMU-GlusterFS native integration, where the 
VM disk is specified using the gluster+transport://... protocol.


For e.g. if the transport is TCP, the URI looks like gluster+tcp://..., 
otherwise gluster+rdma://...


Thus, to generate the gluster QEMU URI in VDSM, I need to know the 
Gluster volume's transport type, and the only inputs that oVirt gets for a 
GlusterFS storage domain are...

a) volfileserver (the host running glusterd)
b) volname (the name of the volume)

Currently I use VDSM's gluster plugin to do the equivalent of 'gluster volume 
info <volname>' to determine the Gluster volume's transport type, but this 
won't work if the VDSM host is not a gluster peer, which is a 
constraint! ... and I would like to fix/remove this constraint.


So I discussed this a bit on #gluster-dev IRC and want to put down the 
options here for the community to help provide inputs on what's the best 
way to approach this...


1) Use gluster --remote-host=<host_running_glusterd> volume info <volname>

This is not a supported way and there is no guarantee on how long the 
--remote-host option will be supported in gluster, since it has some security 
issues.


2) Use gluster ::system getspec <volname>

I tried this but it never worked for me... what's the right way of using 
this ?

For me.. it just returned back to the shell w/o dumping the volfile at all!

3) Have the oVirt user provide the transport type as well (while creating the 
Gluster storage domain) in addition to the volfileserver:volname options


This would be easiest, since VDSM can form the gluster QEMU URI by 
directly using the transport type specified by the user, and this won't 
need the vdsm-gluster plugin, hence no need for the VDSM host 
to be a gluster peer... but this would mean additional input for the user to 
provide during Gluster domain creation, and oVirt UI changes to take the 
transport type as input in addition to volfileserver:volname.
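
To illustrate option 3: once the transport type is known, forming the QEMU URI is
trivial. A rough sketch (the gluster+<transport>:// form follows the examples above;
the path layout inside the volume is just illustrative):

def glusterUri(volfileServer, volName, imagePath, transport='tcp'):
    """Build the QEMU network disk URI for a gluster-backed image,
    e.g. gluster+tcp://server/volname/path/to/image"""
    transport = transport.lower()
    if transport not in ('tcp', 'rdma'):
        raise ValueError("unsupported gluster transport: %s" % transport)
    return "gluster+%s://%s/%s/%s" % (transport, volfileServer, volName, imagePath)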


Comments/Opinions/Inputs appreciated

thanx,
deepak

(P.S. cross-posting this to VDSM and Gluster devel lists, as it relates 
to both)




Re: [vdsm] How to figure out the transport type of Gluster volume from VDSM host (which is not a gluster peer) ?

2013-05-06 Thread Deepak C Shetty

On 05/06/2013 08:47 PM, Shu Ming wrote:

2013-5-6 22:33, Deepak C Shetty:

Hi Lists,
I am looking at options to figure the transport type of a gluster 
volume (given the volfileserver:volname) from a host that is *not* 
part of gluster volume (aka not a gluster peer).


The context here is GlusterFS as a storage domain in oVirt/VDSM, 
which is currently available in upstream oVirt.
This features exploits the QEMU-GlusterFS native integration, where 
the VM disk is specified using gluster+transport://... protocol.


For eg. if transport is TCP.. the URI looks liek 
gluster+tcp://..., otherwise gluster+rdma://...


Thus, to generate the gluster QEMU URI in VDSM, i need to know the 
Gluster volume's transport type and the only inputs that oVirt gets 
for GlusterFS storage domain are...

a) volfileserver (the host running glusterd)
b) volname (the name of the volume)

Currently i use VDSM's gluster plugin to do the eq. of gluster 
volume info volname to determine Gluster volume's transport type, 
but this won't work if the VDSM host is not a gluster peer, 
What do you mean by using gluster peer?  Does gluster peer mean 
the host is running glusterd?


In Gluster, the hosts that are part of a gluster storage volume (serving 
the bricks (aka storage) to the volume) are called gluster peers.
So if the VDSM host is not serving storage to the gluster volume, it's a 
non-peer and you cannot invoke the gluster cli from a non-peer host, simply 
because it wouldn't make sense for a non-participating host to know about 
gluster volumes ... thus there is the --remote-host option.. whose use is 
not encouraged and which is not guaranteed to be supported in future.


Yes.. and all gluster peers run glusterd



which is a constraint! ... and I would like to fix/remove this 
constraint.


So i discussed a bit on #glsuter-dev IRC and want to put down the 
options here for the community to help provide inputs on whats the 
best way to approach this...


1) Use gluster --remote-host=host_running_glusterd volume info 
volname


This is not a supported way and there is no guarantee on how long the 
--remote-host option be supported in gluster, since it has some 
security issues


2) Use gluster ::system getspec volname

I tried this but it never worked for me... whats the right way of 
using this ?

For me.. it just returned back to shell w/o dumping the volfile at all!

3) Have oVirt user provide the transport type as well (while creating 
Gluster storgae domain) in addition to volfileserver:volname options


This would be easiest, since VDSM can form the gluster QEMU URI by 
directly using the transport type specified by the user, and this 
won't have a need to use the vdsm-gluster plugin, hence no need for 
VDSM host to be part of gluster peer...but this would mean addnl 
input for user to provide during Gluster domain creation and oVirt UI 
changes to take the transport type as input in addition to 
volfileserver:volname

What will happen if a user gives a wrong transport type to VDSM?


Simply said... the gluster QEMU URI will be formed wrongly, QEMU will 
error out, and hence VDSM and the oVirt user will see the error while 
creating the VM.
But.. in the oVirt GUI we can have a combo box prefilled with TCP, RDMA 
so that the user has to choose one of the valid types only.. then the 
problem of giving a wrong transport type does not arise.


thanx,
deepak






Comments/Opinions/Inputs appreciated

thanx,
deepak

(P.S. cross-posting this to VDSM and Gluster devel lists, as it 
relates to both)









Re: [vdsm] [Gluster-devel] How to figure out the transport type of Gluster volume from VDSM host (which is not a gluster peer) ?

2013-05-06 Thread Deepak C Shetty

On 05/07/2013 01:13 AM, Vijay Bellur wrote:

On 05/06/2013 08:03 PM, Deepak C Shetty wrote:

2) Use gluster ::system getspec volname

I tried this but it never worked for me... whats the right way of using
this ?
For me.. it just returned back to shell w/o dumping the volfile at all!


The right way to use would be this:

# gluster --remote-host=<server> system:: getspec <volname>


This worked for me.. when I was on a non-peer which was in the same 
subnet as the gluster host
But when i tried the same from my laptop (not in the same subnet) it 
didn't work.. Pls see below


Also, you had indicated that this may not be a long supported option.. 
so wondering if it makes sense to use it in VDSM ?


gluster volume name fio running in llmvm02 system
---

[root@llmvm02 ~]# gluster volume info fio

Volume Name: fio
Type: Distribute
Volume ID: eea6f1a3-ad52-49cb-8135-670ea5ab23fe
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: llmvm02:/storage/fiobrick

[root@llmvm02 ~]# gluster --remote-host=llmvm02.in.ibm.com volume info fio

Volume Name: fio
Type: Distribute
Volume ID: eea6f1a3-ad52-49cb-8135-670ea5ab23fe
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: llmvm02:/storage/fiobrick

From /var/lib/glusterd/vols/fio/fio.llmvm02.storage-fiobrick.vol
auth allow is set to * as seen below

volume fio-server
type protocol/server
option auth.addr./storage/fiobrick.allow *
option auth.login.0656965d-f385-42df-b413-7ba16b09044d.password 
99e32205-521a-4343-a65e-8122a8eafd7c
option auth.login./storage/fiobrick.allow 
0656965d-f385-42df-b413-7ba16b09044d

option transport-type tcp
subvolumes /storage/fiobrick
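
If the getspec interface does end up being usable, pulling the transport type out
of a fetched volfile would be straightforward; a sketch based on the 'option
transport-type' line visible in the volfile above:

def transportTypeFromVolfile(volfileText):
    """Scan a fetched volfile (e.g. the output of
    'gluster --remote-host=<server> system:: getspec <volname>')
    for the 'option transport-type' line."""
    for line in volfileText.splitlines():
        words = line.split()
        if len(words) >= 3 and words[0] == 'option' and words[1] == 'transport-type':
            return words[2]        # e.g. 'tcp' or 'rdma'
    return None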




From llmvm03 system .. in the same subnet as 02
---

[root@llmvm03 ~]# gluster --remote-host=llmvm02.in.ibm.com volume info fio

Volume Name: fio
Type: Distribute
Volume ID: eea6f1a3-ad52-49cb-8135-670ea5ab23fe
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: llmvm02:/storage/fiobrick

[root@llmvm03 ~]# gluster peer status
peer status: No peers present

From my laptop
--
[root@deepakcs-lx ~]# gluster --version
glusterfs 3.2.7 built on Aug 27 2012 19:47:26
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU 
General Public License.



[root@deepakcs-lx ~]# gluster --remote-host=llmvm02.in.ibm.com system:: 
getspec fio

[root@deepakcs-lx ~]# echo $?
255

and on server side.. the below error was reported...

[2013-05-07 04:44:06.517924] W [rpcsvc.c:180:rpcsvc_program_actor] 
0-rpc-service: RPC program version not available (req 14398633 1)
[2013-05-07 04:44:06.517992] E 
[rpcsvc.c:448:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed 
to complete successfully


So maybe its due to my gluster being old ?

From my colleague's laptop, having recent gluster


<bharata> # gluster --version
<bharata> glusterfs 3.4.0alpha2 built on Apr 10 2013 08:28:37
<bharata> Repository revision: git://git.gluster.com/glusterfs.git
<bharata> Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
<bharata> GlusterFS comes with ABSOLUTELY NO WARRANTY.
<bharata> You may redistribute copies of GlusterFS under the terms of 
the GNU General Public License.


It fails here too.. the cli returned error 240 instead of 255 (in my 
laptop case) and server side had below error...


<bharata> [2013-05-07 04:45:08.124567] I 
[glusterd-handshake.c:155:server_getspec] 0-management: Client 
9.124.35.231:1023 doesn't support required op-version. Rejecting getspec 
request.



So it looks like for getspec to work the non-peer host and gluster host 
versions should match exactly or something... and if it's so stringent, I 
am not sure if it makes sense to use the --remote-host approach in 
VDSM.. the concern being that there could be too many such version issues, with VDSM 
failing and users not getting it working... what say ?


thanx,
deepak




-Vijay






Re: [vdsm] Enabling clusterLevels for 3.3 in dsaversion.py

2013-04-30 Thread Deepak C Shetty

On 04/30/2013 12:12 PM, Itamar Heim wrote:

On 04/30/2013 07:37 AM, Deepak C Shetty wrote:

On 04/21/2013 02:40 AM, Itamar Heim wrote:

On 03/22/2013 06:42 AM, Deepak C Shetty wrote:

On 03/21/2013 04:07 PM, Vinzenz Feenstra wrote:

On 03/21/2013 10:32 AM, Deepak C Shetty wrote:

On 03/21/2013 01:11 PM, Dan Kenigsberg wrote:

On Thu, Mar 21, 2013 at 10:42:27AM +0530, Deepak C Shetty wrote:

Hi,
 I am trying to validate GlusterFS domain engine patches,
against
VDSM. GlusterFS domain is enabled for 3.3 only
So when i try to add my VDSM as a new host to engine, it doesn't
allow me to do so since clusterLevels (returned by VDSM as part of
engine calling getCap) doesn't have 3.3

I hacked VDSM's dsaversion.py to return 3.3 as well as part of
getCap and now I am able to add my VDSM host as a new host from
engine for DC of type GLUSTERFS_DOMAIN.

Is this the right way to test a 3.3. feature, if yes, should I 
send

a vdsm patch to add 3.3 in dsaversion.py ?

You are right - it's time to expose this clusterLevel.
Shouldn't the supportedENGINEs value also be updated to 3.2 and 
3.3? I

am a bit confused that this one stays at 3.0 and 3.1


I am really not sure whats the use of supportedENGINEs. I changed
clusterLevels bcos doing that allowed me to add my VDSM host to a 3.3
cluster. Can someone throw more light on what is supportedENGINEs used
for ?

engine also has supported vdsm versions.
If the vdsm version isn't in that list, vdsm can declare to the engine that
vdsm can work with that version of the engine - it was meant to make
sure only tested/supported versions of vdsm work with tested versions
of engine, etc.




Hmm... but vdsm having supportedENGINEs as 3.0 and 3.1 did work with
Engine 3.2 and 3.2 !
So does that mean engine is not honoring this field now ?


I assume your engine had the vdsm version listed as supported in its 
config.


Okay, so this is the one you meant ?

engine-config  -g BootstrapMinimalVdsmVersion
BootstrapMinimalVdsmVersion: 4.9 version: general










Re: [vdsm] Enabling clusterLevels for 3.3 in dsaversion.py

2013-04-29 Thread Deepak C Shetty

On 04/21/2013 02:40 AM, Itamar Heim wrote:

On 03/22/2013 06:42 AM, Deepak C Shetty wrote:

On 03/21/2013 04:07 PM, Vinzenz Feenstra wrote:

On 03/21/2013 10:32 AM, Deepak C Shetty wrote:

On 03/21/2013 01:11 PM, Dan Kenigsberg wrote:

On Thu, Mar 21, 2013 at 10:42:27AM +0530, Deepak C Shetty wrote:

Hi,
 I am trying to validate GlusterFS domain engine patches, 
against

VDSM. GlusterFS domain is enabled for 3.3 only
So when i try to add my VDSM as a new host to engine, it doesn't
allow me to do so since clusterLevels (returned by VDSM as part of
engine calling getCap) doesn't have 3.3

I hacked VDSM's dsaversion.py to return 3.3 as well as part of
getCap and now I am able to add my VDSM host as a new host from
engine for DC of type GLUSTERFS_DOMAIN.

Is this the right way to test a 3.3. feature, if yes, should I send
a vdsm patch to add 3.3 in dsaversion.py ?

You are right - it's time to expose this clusterLevel.

Shouldn't the supportedENGINEs value also be updated to 3.2 and 3.3? I
am a bit confused that this one stays at 3.0 and 3.1


I am really not sure whats the use of supportedENGINEs. I changed
clusterLevels bcos doing that allowed me to add my VDSM host to a 3.3
cluster. Can someone throw more light on what is supportedENGINEs used
for ?

engine also has supported vdsm versions.
If the vdsm version isn't in that list, vdsm can declare to the engine that 
vdsm can work with that version of the engine - it was meant to make 
sure only tested/supported versions of vdsm work with tested versions 
of engine, etc.





Hmm... but vdsm having supportedENGINEs as 3.0 and 3.1 did work with 
Engine 3.2 and 3.2 !

So does that mean engine is not honoring this field now ?




[vdsm] Seeing gnutls issue when installing libvirt-python dep for VDSM

2013-04-05 Thread Deepak C Shetty

Hi All,
I am running ./autogen.sh --system and installing the required pkgs 
as it reports them missing.


I installed libvirt and qemu-kvm, which brought in the gnutls pkg as a dep 
(amongst others).

No errors/issues were reported by yum during the above pkg installs.

Then I installed the libvirt-python pkg, but vdsm autogen still complains 
that it is unable to find the libvirt pkg.


So i tried the below and see this:

# python
Python 2.7.3 (default, Apr 30 2012, 21:18:11)
[GCC 4.7.0 20120416 (Red Hat 4.7.0-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
 >>> import libvirt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 25, in <module>

raise lib_e
ImportError: /lib64/libgnutls.so.26: symbol asn1_read_node_value, 
version LIBTASN1_0_3 not defined in file libtasn1.so.3 with link time 
reference


So it looks like some incompatible version of gnutls got installed as part of 
the libvirt/qemu-kvm install.


*My system details:*

# uname -a
Linux libSM_upstream 3.3.4-5.fc17.x86_64 #1 SMP Mon May 7 17:29:34 UTC 
2012 x86_64 x86_64 x86_64 GNU/Linux


# rpm -qa| grep libvirt
libvirt-daemon-0.9.11.9-1.fc17.x86_64
libvirt-daemon-config-nwfilter-0.9.11.9-1.fc17.x86_64
libvirt-python-0.9.11.9-1.fc17.x86_64
libvirt-0.9.11.9-1.fc17.x86_64
libvirt-client-0.9.11.9-1.fc17.x86_64
libvirt-daemon-config-network-0.9.11.9-1.fc17.x86_64

# rpm -qa| grep gnutls
gnutls-devel-2.12.23-1.fc17.x86_64
gnutls-utils-2.12.23-1.fc17.x86_64
gnutls-c++-2.12.23-1.fc17.x86_64
gnutls-2.12.23-1.fc17.x86_64


/etc/fedora-release:Fedora release 17 (Beefy Miracle)
/etc/os-release:NAME=Fedora
/etc/os-release:ID=fedora
/etc/os-release:PRETTY_NAME=Fedora 17 (Beefy Miracle)
/etc/os-release:CPE_NAME=cpe:/o:fedoraproject:fedora:17
/etc/redhat-release:Fedora release 17 (Beefy Miracle)
/etc/system-release:Fedora release 17 (Beefy Miracle)
checking for pyflakes... /bin/pyflakes
checking for pep8... /bin/pep8
checking for python-config... /bin/python-config
checking for nosetests... /bin/nosetests
checking python module: ethtool... yes
checking python module: libvirt... no
configure: error: failed to find required module libvirt



Re: [vdsm] [SOLVED] Seeing gnutls issue when installing libvirt-python dep for VDSM

2013-04-05 Thread Deepak C Shetty

This gets solved by doing yum update libtasn1
See comment #8 of https://bugzilla.redhat.com/show_bug.cgi?id=928674



Re: [vdsm] Enabling clusterLevels for 3.3 in dsaversion.py

2013-03-21 Thread Deepak C Shetty

On 03/21/2013 01:11 PM, Dan Kenigsberg wrote:

On Thu, Mar 21, 2013 at 10:42:27AM +0530, Deepak C Shetty wrote:

Hi,
 I am trying to validate GlusterFS domain engine patches, against
VDSM. GlusterFS domain is enabled for 3.3 only
So when i try to add my VDSM as a new host to engine, it doesn't
allow me to do so since clusterLevels (returned by VDSM as part of
engine calling getCap) doesn't have 3.3

I hacked VDSM's dsaversion.py to return 3.3 as well as part of
getCap and now I am able to add my VDSM host as a new host from
engine for DC of type GLUSTERFS_DOMAIN.

Is this the right way to test a 3.3. feature, if yes, should I send
a vdsm patch to add 3.3 in dsaversion.py ?

You are right - it's time to expose this clusterLevel.


I sent the patch @
http://gerrit.ovirt.org/#/c/13236/



Re: [vdsm] Enabling clusterLevels for 3.3 in dsaversion.py

2013-03-21 Thread Deepak C Shetty

On 03/21/2013 04:07 PM, Vinzenz Feenstra wrote:

On 03/21/2013 10:32 AM, Deepak C Shetty wrote:

On 03/21/2013 01:11 PM, Dan Kenigsberg wrote:

On Thu, Mar 21, 2013 at 10:42:27AM +0530, Deepak C Shetty wrote:

Hi,
 I am trying to validate GlusterFS domain engine patches, against
VDSM. GlusterFS domain is enabled for 3.3 only
So when i try to add my VDSM as a new host to engine, it doesn't
allow me to do so since clusterLevels (returned by VDSM as part of
engine calling getCap) doesn't have 3.3

I hacked VDSM's dsaversion.py to return 3.3 as well as part of
getCap and now I am able to add my VDSM host as a new host from
engine for DC of type GLUSTERFS_DOMAIN.

Is this the right way to test a 3.3. feature, if yes, should I send
a vdsm patch to add 3.3 in dsaversion.py ?

You are right - it's time to expose this clusterLevel.
Shouldn't the supportedENGINEs value also be updated to 3.2 and 3.3? I 
am a bit confused that this one stays at 3.0 and 3.1


I am really not sure what the use of supportedENGINEs is. I changed 
clusterLevels because doing that allowed me to add my VDSM host to a 3.3 
cluster. Can someone throw more light on what supportedENGINEs is used for ?




[vdsm] Enabling clusterLevels for 3.3 in dsaversion.py

2013-03-20 Thread Deepak C Shetty

Hi,
I am trying to validate the GlusterFS domain engine patches against 
VDSM. The GlusterFS domain is enabled for 3.3 only.
So when I try to add my VDSM as a new host to the engine, it doesn't allow 
me to do so, since clusterLevels (returned by VDSM as part of the engine 
calling getCap) doesn't have 3.3.


I hacked VDSM's dsaversion.py to return 3.3 as well as part of getCap 
and now I am able to add my VDSM host as a new host from engine for DC 
of type GLUSTERFS_DOMAIN.
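
For reference, the hack in question is just adding '3.3' to the cluster levels that
dsaversion.py reports via getCaps; roughly like this (a sketch, the exact dict layout
and version strings in dsaversion.py may differ):

# vdsm/dsaversion.py (sketch)
version_info = {
    'software_version': '4.10',        # whatever the build carries
    'software_revision': '3',
    'supportedENGINEs': ['3.0', '3.1'],
    'clusterLevels': ['3.0', '3.1', '3.2', '3.3'],   # '3.3' added here
}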


Is this the right way to test a 3.3 feature ? If yes, should I send a 
vdsm patch to add 3.3 in dsaversion.py ?

If not, then what is the right process to follow here ?

thanx,
deepak



[vdsm] setupNetworks failure - Host non-operational

2013-01-22 Thread Deepak C Shetty

Hi All,
I have a multi-VM setup, where I have ovirt engine on one VM and 
VDSM host on another.
Discovering the host from the engine puts the host in Unassigned state, 
with the error saying 'ovirtmgmt' network not found.


When I select setupNetworks and drag-drop the ovirtmgmt network to set it up over 
eth0, I see the below error in VDSM & the host goes to a non-operational state.


I tried the steps mentioned by Alon in 
http://lists.ovirt.org/pipermail/users/2012-December/011257.html

but still see the same error

= dump from vdsm.log 

MainProcess|Thread-23::ERROR::2013-01-22 
18:25:53,496::configNetwork::1438::setupNetworks::(setupNetworks) 
Requested operation is not valid: cannot set autostart for transient network

Traceback (most recent call last):
  File /usr/share/vdsm/configNetwork.py, line 1420, in setupNetworks
implicitBonding=True, **d)
  File /usr/share/vdsm/configNetwork.py, line 1030, in addNetwork
configWriter.createLibvirtNetwork(network, bridged, iface)
  File /usr/share/vdsm/configNetwork.py, line 208, in 
createLibvirtNetwork

self._createNetwork(netXml)
  File /usr/share/vdsm/configNetwork.py, line 192, in _createNetwork
net.setAutostart(1)
  File /usr/lib64/python2.7/site-packages/libvirt.py, line 2148, in 
setAutostart
if ret == -1: raise libvirtError ('virNetworkSetAutostart() 
failed', net=self)
libvirtError: Requested operation is not valid: cannot set autostart for 
transient network
MainProcess|Thread-23::ERROR::2013-01-22 
18:25:53,502::supervdsmServer::77::SuperVdsm.ServerCallback::(wrapper) 
Error in setupNetworks

Traceback (most recent call last):
  File /usr/share/vdsm/supervdsmServer.py, line 75, in wrapper
return func(*args, **kwargs)
  File /usr/share/vdsm/supervdsmServer.py, line 170, in setupNetworks
return configNetwork.setupNetworks(networks, bondings, **options)
  File /usr/share/vdsm/configNetwork.py, line 1420, in setupNetworks
implicitBonding=True, **d)
  File /usr/share/vdsm/configNetwork.py, line 1030, in addNetwork
configWriter.createLibvirtNetwork(network, bridged, iface)
  File /usr/share/vdsm/configNetwork.py, line 208, in 
createLibvirtNetwork

self._createNetwork(netXml)
  File /usr/share/vdsm/configNetwork.py, line 192, in _createNetwork
net.setAutostart(1)
  File /usr/lib64/python2.7/site-packages/libvirt.py, line 2148, in 
setAutostart
if ret == -1: raise libvirtError ('virNetworkSetAutostart() 
failed', net=self)
libvirtError: Requested operation is not valid: cannot set autostart for 
transient network
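
The error itself is libvirt refusing setAutostart() on a *transient* network, i.e.
one created with createXML() instead of being defined persistently. A sketch of the
distinction with plain libvirt-python (illustrative only, not the actual VDSM fix;
the network name/XML here are made up):

import libvirt

conn = libvirt.open('qemu:///system')
netXml = ("<network><name>vdsm-ovirtmgmt</name>"
          "<forward mode='bridge'/><bridge name='ovirtmgmt'/></network>")

# Transient network: lives only until destroyed, so autostart is invalid:
#   net = conn.networkCreateXML(netXml)
#   net.setAutostart(1)   # raises: cannot set autostart for transient network

# Persistent network: define it first, then start and mark it for autostart.
net = conn.networkDefineXML(netXml)
net.create()
net.setAutostart(1)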





Re: [vdsm] [Users] setupNetworks failure - Host non-operational

2013-01-22 Thread Deepak C Shetty

On 01/22/2013 07:13 PM, Antoni Segura Puimedon wrote:

Which version of vdsm is it running? Once I know I'll try to see why
does this happen.

I am working on a git version of VDSM. let me know if u need more info.

rpm -qa| grep vdsm
vdsm-debug-plugin-4.10.3-0.75.git87b668d.fc17.noarch
vdsm-debuginfo-4.10.3-0.75.git87b668d.fc17.x86_64
vdsm-jsonrpc-4.10.3-0.75.git87b668d.fc17.noarch
vdsm-4.10.3-0.75.git87b668d.fc17.x86_64
vdsm-xmlrpc-4.10.3-0.75.git87b668d.fc17.noarch
vdsm-gluster-4.10.3-0.75.git87b668d.fc17.noarch
vdsm-tests-4.10.3-0.75.git87b668d.fc17.noarch
vdsm-cli-4.10.3-0.75.git87b668d.fc17.noarch
vdsm-python-4.10.3-0.75.git87b668d.fc17.x86_64




[vdsm] Q on git-review error

2013-01-15 Thread Deepak C Shetty

Hi All,
I got this error while trying to push a topic branch.. but in the 
end it looks like it did push the GLUSTERFS_DOMAIN patch (which had a 
minor change). The other 2 patches didn't have any change. Finally, when 
I look it up in gerrit, it shows the GLUSTERFS_DOMAIN patch fine, but its link 
with the other 2 patches seems to have been broken.


 git-review -t gluster_domain_support
You have more than one commit that you are about to submit.
The outstanding commits are:

29f2048 (HEAD, dpk-gluster-support) tests/functional: Add GlusterSD 
functional test

3cc6fca tests/functional: Use deleteVolume instead of deleteImage
6d71286 Support for GLUSTERFS_DOMAIN

Is this really what you meant to do?
Type 'yes' to confirm: yes
remote: Resolving deltas: 100% (20/20)
remote: Processing changes: done
remote: error: internal error while processing changes   <-- throwing some 
error

To ssh://dpkshe...@gerrit.ovirt.org:29418/vdsm.git
 * [new branch]  HEAD -> 
refs/publish/master/gluster_domain_support   <-- it did push tho'




Re: [vdsm] RFC: New Storage API

2012-12-16 Thread Deepak C Shetty

On 12/08/2012 01:23 AM, Saggi Mizrahi wrote:


- Original Message -

From: Deepak C Shetty deepa...@linux.vnet.ibm.com
To: Saggi Mizrahi smizr...@redhat.com
Cc: Shu Ming shum...@linux.vnet.ibm.com, engine-devel engine-de...@ovirt.org, 
VDSM Project Development
vdsm-devel@lists.fedorahosted.org, Deepak C Shetty 
deepa...@linux.vnet.ibm.com
Sent: Friday, December 7, 2012 12:23:15 AM
Subject: Re: [vdsm] RFC: New Storage API

On 12/06/2012 10:22 PM, Saggi Mizrahi wrote:

- Original Message -

From: Shu Ming shum...@linux.vnet.ibm.com
To: Saggi Mizrahi smizr...@redhat.com
Cc: VDSM Project Development
vdsm-devel@lists.fedorahosted.org, engine-devel
engine-de...@ovirt.org
Sent: Thursday, December 6, 2012 11:02:02 AM
Subject: Re: [vdsm] RFC: New Storage API

Saggi,

Thanks for sharing your thought and I get some comments below.


Saggi Mizrahi:

I've been throwing a lot of bits out about the new storage API
and
I think it's time to talk a bit.
I will purposefully try and keep implementation details away and
concentrate about how the API looks and how you use it.

First major change is in terminology, there is no longer a storage
domain but a storage repository.
This change is done because so many things are already called
domain in the system and this will make things less confusing for
newcomers with a libvirt background.

One other changes is that repositories no longer have a UUID.
The UUID was only used in the pool members manifest and is no
longer needed.


connectStorageRepository(repoId, repoFormat,
connectionParameters={}):
repoId - is a transient name that will be used to refer to the
connected domain, it is not persisted and doesn't have to be the
same across the cluster.
repoFormat - Similar to what used to be type (eg. localfs-1.0,
nfs-3.4, clvm-1.2).
connectionParameters - This is format specific and will used to
tell VDSM how to connect to the repo.

Where does repoID come from? I think repoID doesn't exist before
connectStorageRepository() returns.  Isn't repoID a return value of
connectStorageRepository()?

No, repoIDs are no longer part of the domain, they are just a
transient handle.
The user can put whatever it wants there as long as it isn't
already taken by another currently connected domain.

So what happens when user mistakenly gives a repoID that is in use
before.. there should be something in the return value that specifies
the error and/or reason for error so that user can try with a
new/diff
repoID ?

As I said, connect fails if the repoId is in use ATM.

disconnectStorageRepository(self, repoId)


In the new API there are only images, some images are mutable and
some are not.
mutable images are also called VirtualDisks
immutable images are also called Snapshots

There are no explicit templates, you can create as many images as
you want from any snapshot.

There are 4 major image operations:


createVirtualDisk(targetRepoId, size, baseSnapshotId=None,
 userData={}, options={}):

targetRepoId - ID of a connected repo where the disk will be
created
size - The size of the image you wish to create
baseSnapshotId - the ID of the snapshot you want the base the new
virtual disk on
userData - optional data that will be attached to the new VD,
could
be anything that the user desires.
options - options to modify VDSMs default behavior

IIUC, i can use options to do storage offloads ? For eg. I can create
a
LUN that represents this VD on my storage array based on the
'options'
parameter ? Is this the intended way to use 'options' ?

No, this has nothing to do with offloads.
If by offloads you mean having other VDSM hosts do the heavy lifting, then 
this is what the option autoFix=False and the fix mechanism is for.
If you are talking about advanced scsi features (i.e. WRITE SAME), they will be 
used automatically whenever possible.
In any case, how we manage LUNs (if they are even used) is an implementation 
detail.


I am a bit more interested in how storage array offloads (by that I 
mean, offloading VD creation, snapshot, clone etc. to the storage array when 
available/possible) can be done from VDSM ?
In the past there were talks of using libSM to do that. How does that 
strategy play in this new Storage API scenario ? I agree it's an implementation 
detail, but how & where that implementation sits and how it would be 
triggered is not very clear to me. Looking at the createVD args, it sounded 
like 'options' could be a trigger point for deciding whether to use 
storage offloads or not, but you spoke otherwise :) Can you provide your 
vision on how VDSM can understand the storage array capabilities & 
exploit storage array offloads in this New Storage API context ?

Thanks,
deepak



___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] RFC: New Storage API

2012-12-06 Thread Deepak C Shetty

On 12/06/2012 10:22 PM, Saggi Mizrahi wrote:


- Original Message -

From: Shu Ming shum...@linux.vnet.ibm.com
To: Saggi Mizrahi smizr...@redhat.com
Cc: VDSM Project Development vdsm-devel@lists.fedorahosted.org, engine-devel 
engine-de...@ovirt.org
Sent: Thursday, December 6, 2012 11:02:02 AM
Subject: Re: [vdsm] RFC: New Storage API

Saggi,

Thanks for sharing your thought and I get some comments below.


Saggi Mizrahi:

I've been throwing a lot of bits out about the new storage API and
I think it's time to talk a bit.
I will purposefully try and keep implementation details away and
concentrate about how the API looks and how you use it.

First major change is in terminology, there is no longer a storage
domain but a storage repository.
This change is done because so many things are already called
domain in the system and this will make things less confusing for
newcomers with a libvirt background.

One other change is that repositories no longer have a UUID.
The UUID was only used in the pool members manifest and is no
longer needed.


connectStorageRepository(repoId, repoFormat,
connectionParameters={}):
repoId - is a transient name that will be used to refer to the
connected domain, it is not persisted and doesn't have to be the
same across the cluster.
repoFormat - Similar to what used to be type (eg. localfs-1.0,
nfs-3.4, clvm-1.2).
connectionParameters - This is format specific and will be used to
tell VDSM how to connect to the repo.


Where does repoID come from? I think repoID doesn't exist before
connectStorageRepository() returns.  Isn't repoID a return value of
connectStorageRepository()?

No, repoIDs are no longer part of the domain, they are just a transient handle.
The user can put whatever it wants there as long as it isn't already taken by 
another currently connected domain.


So what happens when the user mistakenly gives a repoID that is already in 
use? There should be something in the return value that specifies 
the error and/or the reason for the error, so that the user can retry with a 
new/different repoID ?



disconnectStorageRepository(self, repoId)


In the new API there are only images, some images are mutable and
some are not.
mutable images are also called VirtualDisks
immutable images are also called Snapshots

There are no explicit templates, you can create as many images as
you want from any snapshot.

There are 4 major image operations:


createVirtualDisk(targetRepoId, size, baseSnapshotId=None,
userData={}, options={}):

targetRepoId - ID of a connected repo where the disk will be
created
size - The size of the image you wish to create
baseSnapshotId - the ID of the snapshot you want to base the new
virtual disk on
userData - optional data that will be attached to the new VD, could
be anything that the user desires.
options - options to modify VDSMs default behavior


IIUC, I can use options to do storage offloads ? For eg. I can create a 
LUN that represents this VD on my storage array based on the 'options' 
parameter ? Is this the intended way to use 'options' ?




returns the id of the new VD

I think we will also need a function to check if a VirtualDisk is
based on a specific snapshot.
Like: isSnapshotOf(virtualDiskId, baseSnapshotID):

No, the design is that volume dependencies are an implementation detail.
There is no reason for you to know that an image is physically a snapshot of 
another.
Logical snapshots, template information, and any other information can be set 
by the user by using the userData field available for every image.
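
A tiny illustration of that idea against the proposed createVirtualDisk
signature; the userData keys and variable names are purely hypothetical,
VDSM would just store them with the image:

# Hypothetical userData usage: the keys are the caller's own convention.
size = 20 * 2**30   # desired size (units per whatever the final API defines)
vdId = vdsOK(s.createVirtualDisk("repo1", size,
                                 baseSnapshotId=goldSnapId,
                                 userData={"template": "fedora-17-gold",
                                           "logicalParent": goldSnapId}))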

createSnapshot(targetRepoId, baseVirtualDiskId,
 userData={}, options={}):
targetRepoId - The ID of a connected repo where the new snapshot
will be created and the original image exists as well.
size - The size of the image you wish to create
baseVirtualDisk - the ID of a mutable image (Virtual Disk) you want
to snapshot
userData - optional data that will be attached to the new Snapshot,
could be anything that the user desires.
options - options to modify VDSMs default behavior

returns the id of the new Snapshot
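
Since there are no explicit templates, a short sketch of the
snapshot-then-clone flow this enables (illustrative variable names, proposed
API assumed as described above):

# Illustration only: freeze a virtual disk into a snapshot, then create as
# many new virtual disks based on that snapshot as needed.
snapId = vdsOK(s.createSnapshot("repo1", vdId))
cloneId = vdsOK(s.createVirtualDisk("repo1", size, baseSnapshotId=snapId))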

copyImage(targetRepoId, imageId, baseImageId=None, userData={},
options={})
targetRepoId - The ID of a connected repo where the new image will
be created
imageId - The image you wish to copy
baseImageId - if specified, the new image will contain only the
diff between image and Id.
If None the new image will contain all the bits of
image Id. This can be used to copy partial parts of
images for export.
userData - optional data that will be attached to the new image,
could be anything that the user desires.
options - options to modify VDSMs default behavior

Does this function mean that we can copy the image from one
repository
to another repository? Does it cover the semantics of storage
migration,
storage backup, storage incremental backup?

Yes, the main purpose is copying to another repo, and you can even do 
incremental backups.
Also the 
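
(For what it is worth, a sketch of a full plus incremental copy with that
signature; the repo and image ids here are illustrative only.)

# Illustration only: back up snap1 in full, later ship only the delta
# between snap2 and snap1 to the same backup repository.
vdsOK(s.copyImage("backupRepo", snap1Id))
vdsOK(s.copyImage("backupRepo", snap2Id, baseImageId=snap1Id))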

[vdsm] Feature page for GLUSTERFS_DOMAIN support is now available.

2012-10-22 Thread Deepak C Shetty

Hi list,
I created a wiki page for GLUSTERFS_DOMAIN support.
Comments/Suggestions are welcome.

http://wiki.ovirt.org/wiki/Features/GlusterFS_Storage_Domain

thanx,
deepak

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Q on 'git push' to vdsm gerrit

2012-09-24 Thread Deepak C Shetty

On 09/23/2012 11:59 PM, Alon Bar-Lev wrote:

Try to branch and re-push, remove your own commits from master so you won't 
have future problems.
I am not following here. Can you elaborate please ? I am not a gerrit 
expert, so maybe I am missing something.


FYI - I am not doing git push from the vdsm master branch; in my local git 
repo I have a different branch and I push my patches to gerrit from 
that branch. Did you mean this ?


thanx,
deepak

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Q on 'git push' to vdsm gerrit

2012-09-24 Thread Deepak C Shetty

On 09/24/2012 03:54 PM, Alon Bar-Lev wrote:


- Original Message -

From: Deepak C Shetty deepa...@linux.vnet.ibm.com
To: Alon Bar-Lev alo...@redhat.com
Cc: VDSM Project Development vdsm-devel@lists.fedorahosted.org
Sent: Monday, September 24, 2012 12:19:19 PM
Subject: Re: [vdsm] Q on 'git push' to vdsm gerrit

On 09/23/2012 11:59 PM, Alon Bar-Lev wrote:

Try to branch and re-push, remove your own commits from master so
you won't have future problems.

I am not following here. Can you elaborate please ? I am not a gerrit
expert, so maybe I am missing something.

FYI - I am not doing git push from the vdsm master branch; in my local
git
repo I have a different branch and I push my patches to gerrit from
that branch. Did you mean this ?

thanx,
deepak



So that's is fine.
If you push the branch again what message do you get?


Will try & let you know, working on the next version of the patchset.
In the meanwhile, I wanted to understand what the issue was behind me 
getting the error, hence I had sent out this mail :)


thanx,
deepak

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Q on 'git push' to vdsm gerrit

2012-09-23 Thread Deepak C Shetty

On 09/22/2012 09:01 PM, Alon Bar-Lev wrote:


- Original Message -

From: Deepak C Shetty deepa...@linux.vnet.ibm.com
To: VDSM Project Development vdsm-devel@lists.fedorahosted.org
Sent: Saturday, September 22, 2012 3:47:24 PM
Subject: [vdsm] Q on 'git push' to vdsm gerrit

Hi,

 I got the below error from git push, but my patchset is actually
pushed, when I see it in gerrit.
See change # 6856, patchset 8.

git push gerrit.ovirt.org:vdsm HEAD:refs/for/master
Counting objects: 36, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (22/22), done.
Writing objects: 100% (22/22), 4.33 KiB, done.
Total 22 (delta 18), reused 0 (delta 0)
remote: Resolving deltas: 100% (18/18)
remote: Processing changes: updated: 2, refs: 1, done
To gerrit.ovirt.org:vdsm
   ! [remote rejected] HEAD -> refs/for/master (no changes made)
error: failed to push some refs to 'gerrit.ovirt.org:vdsm'

Background:
1) I was asked to split patchset 7 into multiple patches, rebase and
post patchset 8
2) So as part of patchset 8, I had to git pull, rebase and post it via
git push ( keeping the same Change-Id ), which is when I see the above
error.
3) But patchset 8 is visible in change 6856, so not sure if I need to be
concerned about the above error ? What does it mean then ?
 3a) If you see patchset 8, the dependency is empty; is it because
the prev patchset 7 had a different dep. than 8 ?
 3b) But as part of 'parent' I see the reference to the dep.
 patch.

Question
1) Is this the right way to do a git push ?
2) Do I need to be concerned about the git push error or can I ignore
it ?
3) Dependency for patchset 8 in gerrit is empty, tho' parent shows a
reference to the dep. patch.. is this ok? If not, what is the right
procedure to follow here ?

thanx,
deepak


It should work without any error.

Hmm, I do see the error tho' as above.



If you are to submit a patch series (several patches that depend on each 
other), you should have a branch based on master, and your 8 commits, each with 
its own unique Change-Id.
When pushing this branch you should see each commit depend on the previous commit.
I am seeing each commit depend on the previous commit for all the patches 
except the topmost, in gerrit. Not sure why.
I agree, I should have probably started a new topic, but I forgot, hence 
continued to post on master itself.


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [fedora-virt] libvirt 0.10.0 dependency issue

2012-09-18 Thread Deepak C Shetty

On 09/12/2012 09:21 PM, Cole Robinson wrote:

On 09/09/2012 02:21 PM, Dan Kenigsberg wrote:

On Sun, Sep 09, 2012 at 12:35:00PM +0300, Dan Kenigsberg wrote:

On Thu, Sep 06, 2012 at 10:10:17PM +0800, Royce Lv wrote:

Guys,
 Because vdsm depends on a patch in libvirt 0.10.0, but libvirt
0.10.0 is not available for fc17. You can download the rpm
here:http://libvirt.org/sources/
Other packages can be found with yum.
Sorry for the inconvenience I've brought.

Yeah, this unfortunate consequence was not foreseen by your reviewer.
Shame on him. In the meanwhile, I've built libvirt-0.10.1-2 for my F16
(yes, I'm backward) and put it in http://danken.fedorapeople.org/f16/


I've moved it around under
http://danken.fedorapeople.org/my-virt-preview, and added f17, but then
realised, that we should better request whomever responsible on
fedora-virt-preview to update libvirt under
http://fedorapeople.org/groups/virt/virt-preview/



I got an error while installing the latest VDSM rpm from my git 
src, as below...


--> Finished Dependency Resolution
Error: Package: vdsm-4.10.0-0.447.git62a10e1.fc16.x86_64 
(/vdsm-4.10.0-0.447.git62a10e1.fc16.x86_64)

   Requires: libvirt >= 0.10.1-1
   Installed: libvirt-0.9.11.4-3.fc17.x86_64 (@updates)


So, I installed the latest libvirt from the virt-preview F17 repo ( my 
system runs F16 though, and there is no 0.10.x-x available under F16/ )


After that VDSM rpm install went fine, but got the below error for 
`service vdsmd start`


# service vdsmd start
Redirecting to /bin/systemctl  start vdsmd.service
Job failed. See system logs and 'systemctl status' for details.

From /var/log/messages.. I see this...

7 kvmfs01-hs22 systemd[1]: Reloading.
Sep 18 17:27:57 kvmfs01-hs22 systemd[1]: 
[/usr/lib/systemd/system/libvirt-guests.service:13] Failed to parse 
output specifier, ignoring: journal+console
Sep 18 17:27:57 kvmfs01-hs22 systemd[1]: 
[/usr/lib/systemd/system/wdmd.service:1] Assignment outside of section. 
Ignoring.
Sep 18 17:27:57 kvmfs01-hs22 systemd[1]: 
[/usr/lib/systemd/system/wdmd.service:2] Assignment outside of section. 
Ignoring.
Sep 18 17:27:57 kvmfs01-hs22 systemd[1]: 
[/usr/lib/systemd/system/sanlock.service:1] Assignment outside of 
section. Ignoring.
Sep 18 17:27:57 kvmfs01-hs22 systemd[1]: 
[/usr/lib/systemd/system/sanlock.service:2] Assignment outside of 
section. Ignoring.
Sep 18 17:27:57 kvmfs01-hs22 systemd-vdsmd[27288]: vdsm: libvirt already 
configured for vdsm [  OK  ]

Sep 18 17:27:57 kvmfs01-hs22 systemd-vdsmd[27288]: Starting iscsid:
Sep 18 17:27:57 kvmfs01-hs22 systemd-vdsmd[27288]: Starting libvirtd 
(via systemctl):  [  OK  ]
Sep 18 17:27:57 kvmfs01-hs22 abrt: detected unhandled Python exception 
in '/usr/share/vdsm/nwfilter.pyc'

Sep 18 17:27:57 kvmfs01-hs22 abrtd: New client connected
Sep 18 17:27:57 kvmfs01-hs22 abrtd: Directory 
'pyhook-2012-09-18-17:27:57-27402' creation detected
Sep 18 17:27:57 kvmfs01-hs22 abrt-server[27403]: Saved Python crash dump 
of pid 27402 to /var/spool/abrt/pyhook-2012-09-18-17:27:57-27402
Sep 18 17:27:57 kvmfs01-hs22 systemd-vdsmd[27288]: vdsm: Failed to 
define network filters on libvirt[FAILED]
Sep 18 17:27:57 kvmfs01-hs22 systemd[1]: vdsmd.service: control process 
exited, code=exited status=1
Sep 18 17:27:57 kvmfs01-hs22 systemd[1]: Unit vdsmd.service entered 
failed state.
Sep 18 17:27:57 kvmfs01-hs22 abrtd: DUP_OF_DIR: 
/var/spool/abrt/pyhook-2012-09-18-17:11:42-26406
Sep 18 17:27:57 kvmfs01-hs22 abrtd: Problem directory is a duplicate of 
/var/spool/abrt/pyhook-2012-09-18-17:11:42-26406
Sep 18 17:27:57 kvmfs01-hs22 abrtd: Deleting problem directory 
pyhook-2012-09-18-17:27:57-27402 (dup of pyhook-2012-09-18-17:11:42-26406)




___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [fedora-virt] libvirt 0.10.0 dependency issue

2012-09-18 Thread Deepak C Shetty

On 09/18/2012 05:31 PM, Deepak C Shetty wrote:

On 09/12/2012 09:21 PM, Cole Robinson wrote:

On 09/09/2012 02:21 PM, Dan Kenigsberg wrote:

On Sun, Sep 09, 2012 at 12:35:00PM +0300, Dan Kenigsberg wrote:

On Thu, Sep 06, 2012 at 10:10:17PM +0800, Royce Lv wrote:

Guys,
 Because vdsm depends on a patch in libvirt 0.10.0, but libvirt
0.10.0 is not available for fc17. You can download the rpm
here:http://libvirt.org/sources/
Other packages can be found with yum.
Sorry for the inconvenience I've brought.

Yeah, this unfortunate consequence was not foreseen by your reviewer.
Shame on him. In the meanwhile, I've built libvirt-0.10.1-2 for my 
F16

(yes, I'm backward) and put it in http://danken.fedorapeople.org/f16/


I've moved it around under
http://danken.fedorapeople.org/my-virt-preview, and added f17, but then
realised, that we should better request whomever responsible on
fedora-virt-preview to update libvirt under
http://fedorapeople.org/groups/virt/virt-preview/



I got an error while doing installing the latest VDSM rpm from my git 
src, as below...


-- Finished Dependency Resolution
Error: Package: vdsm-4.10.0-0.447.git62a10e1.fc16.x86_64 
(/vdsm-4.10.0-0.447.git62a10e1.fc16.x86_64)

   Requires: libvirt = 0.10.1-1
   Installed: libvirt-0.9.11.4-3.fc17.x86_64 (@updates)


So, I installed latest libvirt from the virt-preview F17 repo ( my 
system run F16 tho' and there is no 0.10.x-x available under F16/)


After that VDSM rpm install went fine, but got the below error for 
`service vdsmd start`


# service vdsmd start
Redirecting to /bin/systemctl  start vdsmd.service
Job failed. See system logs and 'systemctl status' for details.

From /var/log/messages.. I see this...

7 kvmfs01-hs22 systemd[1]: Reloading.
Sep 18 17:27:57 kvmfs01-hs22 systemd[1]: 
[/usr/lib/systemd/system/libvirt-guests.service:13] Failed to parse 
output specifier, ignoring: journal+console
Sep 18 17:27:57 kvmfs01-hs22 systemd[1]: 
[/usr/lib/systemd/system/wdmd.service:1] Assignment outside of 
section. Ignoring.
Sep 18 17:27:57 kvmfs01-hs22 systemd[1]: 
[/usr/lib/systemd/system/wdmd.service:2] Assignment outside of 
section. Ignoring.
Sep 18 17:27:57 kvmfs01-hs22 systemd[1]: 
[/usr/lib/systemd/system/sanlock.service:1] Assignment outside of 
section. Ignoring.
Sep 18 17:27:57 kvmfs01-hs22 systemd[1]: 
[/usr/lib/systemd/system/sanlock.service:2] Assignment outside of 
section. Ignoring.
Sep 18 17:27:57 kvmfs01-hs22 systemd-vdsmd[27288]: vdsm: libvirt 
already configured for vdsm [  OK  ]

Sep 18 17:27:57 kvmfs01-hs22 systemd-vdsmd[27288]: Starting iscsid:
Sep 18 17:27:57 kvmfs01-hs22 systemd-vdsmd[27288]: Starting libvirtd 
(via systemctl):  [  OK  ]
Sep 18 17:27:57 kvmfs01-hs22 abrt: detected unhandled Python exception 
in '/usr/share/vdsm/nwfilter.pyc'

Sep 18 17:27:57 kvmfs01-hs22 abrtd: New client connected
Sep 18 17:27:57 kvmfs01-hs22 abrtd: Directory 
'pyhook-2012-09-18-17:27:57-27402' creation detected
Sep 18 17:27:57 kvmfs01-hs22 abrt-server[27403]: Saved Python crash 
dump of pid 27402 to /var/spool/abrt/pyhook-2012-09-18-17:27:57-27402
Sep 18 17:27:57 kvmfs01-hs22 systemd-vdsmd[27288]: vdsm: Failed to 
define network filters on libvirt[FAILED]
Sep 18 17:27:57 kvmfs01-hs22 systemd[1]: vdsmd.service: control 
process exited, code=exited status=1
Sep 18 17:27:57 kvmfs01-hs22 systemd[1]: Unit vdsmd.service entered 
failed state.
Sep 18 17:27:57 kvmfs01-hs22 abrtd: DUP_OF_DIR: 
/var/spool/abrt/pyhook-2012-09-18-17:11:42-26406
Sep 18 17:27:57 kvmfs01-hs22 abrtd: Problem directory is a duplicate 
of /var/spool/abrt/pyhook-2012-09-18-17:11:42-26406
Sep 18 17:27:57 kvmfs01-hs22 abrtd: Deleting problem directory 
pyhook-2012-09-18-17:27:57-27402 (dup of 
pyhook-2012-09-18-17:11:42-26406)





As a workaround, I commented out the call to nwfilter.py in 
/lib/systemd/systemd-vdsmd, which then leads me to this error when 
starting the vdsmd service...


MainThread::INFO::2012-09-18 18:35:34,019::vdsm::87::vds::(run) VDSM 
main thread ended. Waiting for 1 other threads...
MainThread::INFO::2012-09-18 18:35:34,019::vdsm::91::vds::(run) 
_MainThread(MainThread, started 140244831332096)
MainThread::INFO::2012-09-18 18:35:34,019::vdsm::91::vds::(run) 
Thread(Thread-1, started daemon 140244778272512)
MainThread::WARNING::2012-09-18 
18:35:34,079::vdsmDebugPlugin::36::DebugInterpreter::(__turnOnDebugPlugin) 
Starting Debug Interpreter. Tread lightly!
MainThread::INFO::2012-09-18 18:35:34,080::vdsm::81::vds::(run) I am the 
actual vdsm 4.10-0.447
MainThread::ERROR::2012-09-18 18:35:34,166::vdsm::84::vds::(run) 
Exception raised

Traceback (most recent call last):
  File "/usr/share/vdsm/vdsm", line 82, in run
    serve_clients(log)
  File "/usr/share/vdsm/vdsm", line 49, in serve_clients
    from clientIF import clientIF  # must import after config is read
  File "/usr/share/vdsm/clientIF.py", line 33, in <module>
    from vdsm import netinfo
  File "/usr/lib64/python2.7/site-packages/vdsm/netinfo.py", line 31, in <module>

Re: [vdsm] Jenkins testing.

2012-08-21 Thread Deepak C Shetty

On 08/22/2012 07:40 AM, Robert Middleswarth wrote:

On 08/14/2012 04:54 AM, Deepak C Shetty wrote:

On 08/14/2012 12:52 PM, Deepak C Shetty wrote:

On 08/14/2012 11:13 AM, Robert Middleswarth wrote:

After a few false starts it looks like we have per patch testing
working on VDSM, oVirt-engine, oVirt-engine-sdk and
oVirt-engine-cli.  There are 3 statuses a patch can get.  1) Success -
Means the patch ran through the tests without issue.  2) Failure -
Means the tests failed.  3) Aborted - Generally means the submitter
is not in the whitelist and the tests were never run.  If you have
any questions please feel free to ask.


So what is needed for the submitter to be in the whitelist ?
I once got Success for a few of my patches.. then got failure for some
other patch ( maybe that's due to the false starts you had ) and then for
the latest patch of mine, it says aborted.

So not sure if I am in the whitelist or not ?
If not, what do I need to do to be part of it ?
If yes, why did the build abort for my latest patch ?


Pls see http://gerrit.ovirt.org/#/c/6856/
For patch 1 it says build success, for patch 2 it says aborted.. why ?

The abort just means that, as a protective measure, we don't run the tests 
unless we know the committer.  With that said, you are now in the 
whitelist so it shouldn't be an issue in the future.



Thanks for putting me in the whitelist.
But it still doesn't clarify how patch 1 got a build success and the 
subsequent patch 2 was aborted ?


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Jenkins testing.

2012-08-14 Thread Deepak C Shetty

On 08/14/2012 12:52 PM, Deepak C Shetty wrote:

On 08/14/2012 11:13 AM, Robert Middleswarth wrote:
After a few false starts it looks like we have per patch testing 
working on VDSM, oVirt-engine, oVirt-engine-sdk and 
oVirt-engine-cli.  There are 3 statuses a patch can get.  1) Success - 
Means the patch ran through the tests without issue.  2) Failure - 
Means the tests failed.  3) Aborted - Generally means the submitter 
is not in the whitelist and the tests were never run.  If you have 
any questions please feel free to ask.



So what is needed for the submitter to be in the whitelist ?
I once got Success for a few of my patches.. then got failure for some 
other patch ( maybe that's due to the false starts you had ) and then for 
the latest patch of mine, it says aborted.


So not sure if I am in the whitelist or not ?
If not, what do I need to do to be part of it ?
If yes, why did the build abort for my latest patch ?


Pls see http://gerrit.ovirt.org/#/c/6856/
For patch 1 it says build success, for patch 2 it says aborted.. why ?

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Error building RPM - solved

2012-08-07 Thread Deepak C Shetty

(top posting)

Adding my hook in spec.in outside the %if hook worked !

On 08/07/2012 11:00 AM, Deepak C Shetty wrote:

On 08/07/2012 07:47 AM, Xu He Jie wrote:

Hi, Deepak:
   You need to note that there is a compile argument --enable-hooks. Without 
--enable-hooks, some hooks will not be compiled and packaged.




I want my hook to be compiled and packaged without enabling 
--enable-hooks.

Just like how faqemu and vhostmd are being compiled and packaged today.


   Please see my inline comment.

On 08/07/2012 12:08 AM, Deepak C Shetty wrote:

Hello,
   I am creating a new hook for qemu cmdline support.
Facing the below issue :

Checking for unpackaged file(s): /usr/lib/rpm/check-files 
/root/rpmbuild/BUILDROOT/vdsm-4.10.0-0.255.gitc80d988.fc16.x86_64

error: Installed (but unpackaged) file(s) found:
   /usr/libexec/vdsm/hooks/before_vm_start/50_qemucmdline


RPM build errors:
Installed (but unpackaged) file(s) found:
   /usr/libexec/vdsm/hooks/before_vm_start/50_qemucmdline
make: *** [rpm] Error 1


I am not a RPM expert, so looking for suggestions.
I followed the steps that are done for faqemu and vhostmd hooks 
which are shipped today.

But its not working.


Changes I made..

diff --git a/configure.ac b/configure.ac
index ec35a49..519ba88 100644
--- a/configure.ac
+++ b/configure.ac
@@ -199,6 +199,7 @@ AC_OUTPUT([
vdsm_hooks/pincpu/Makefile
vdsm_hooks/promisc/Makefile
vdsm_hooks/qos/Makefile
+   vdsm_hooks/qemucmdline/Makefile
vdsm_hooks/scratchpad/Makefile
vdsm_hooks/smartcard/Makefile
vdsm_hooks/smbios/Makefile
diff --git a/vdsm.spec.in b/vdsm.spec.in
index 60f49ff..3a24193 100644
--- a/vdsm.spec.in
+++ b/vdsm.spec.in
@@ -260,6 +260,16 @@ BuildArch:  noarch
 VDSM promiscuous mode let user define a VM interface that will capture
 all network traffic.



you can find '%if 0%{?with_hooks}' at line 728; it means your package 
will be packaged only with --enable-hooks.


I don't want this. I want my hook to be packaged even when 
--enable-hooks is not given.





+%package hook-qemucmdline
+Summary:QEMU cmdline hook for VDSM
+BuildArch:  noarch
+Requires:   vdsm
+
+%description hook-qemucmdline
+Provides support for injecting QEMU cmdline via VDSM hook.
+It exploits libvirt's qemu:commandline facility available in the
+qemu xml namespace.
+
 %package hook-qos
 Summary:QoS network in/out traffic support for VDSM
 BuildArch:  noarch
@@ -773,6 +783,11 @@ exit 0
 %attr (755,vdsm,kvm) 
%{_libexecdir}/%{vdsm_name}/hooks/after_vm_start/50_promisc
 %attr (755,vdsm,kvm) 
%{_libexecdir}/%{vdsm_name}/hooks/before_vm_destroy/50_promisc


+%files hook-qemucmdline
+%defattr(-, vdsm, kvm, -)
+%doc COPYING
+%attr (755,vdsm,kvm) 
%{_libexecdir}/%{vdsm_name}/hooks/before_vm_start/50_qemucmdline

+
 %files hook-qos
 %defattr(-, vdsm, kvm, -)
 %attr (755,vdsm,kvm) 
%{_libexecdir}/%{vdsm_name}/hooks/before_vm_start/50_qos

diff --git a/vdsm_hooks/Makefile.am b/vdsm_hooks/Makefile.am
index 091cd73..e6a8280 100644
--- a/vdsm_hooks/Makefile.am
+++ b/vdsm_hooks/Makefile.am
@@ -18,7 +18,7 @@
 # Refer to the README and COPYING files for full details of the 
license

 #

-SUBDIRS = faqemu vhostmd
+SUBDIRS = faqemu vhostmd qemucmdline


But you added your hooks there, which means they will be compiled 
and packaged even without --enable-hooks.


Yes, that is what I want.


you can add your hooks to:

# Additional hooks
if HOOKS
SUBDIRS += \
directlun \
fileinject \
floppy \
hostusb \
hugepages \
isolatedprivatevlan \
numa \
pincpu \
promisc \
qos \
scratchpad \
smartcard \
smbios \
sriov \
vmdisk
endif



But faqemu and vhostmd are not added here, yet they are still 
packaged. How ?


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Using vdsm hook to exploit gluster backend of qemu

2012-07-29 Thread Deepak C Shetty

On 07/29/2012 03:47 PM, Dan Kenigsberg wrote:

Deepak,

I know that I am not relating to your main issue (sorry...), but...
I like the idea of a hook mangling qemu:commandline.
Could you (or someone else) contribute such a hook to upstream vdsm?
I'm sure many would appreciate a hook that accepts a general qemu command line as a
custom property and passes it to the qemu command line.


Dan,
Sure, I remember you asking for this on IRC. It's on my TODO list, 
and I will get to it soon. My priority is the VDSM gluster integration for 
exploiting the gluster backend of qemu, and I am trying all the different 
options possible, hooks being one of them.
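
For reference, a rough sketch of what such a before_vm_start hook could look
like; it assumes the extra arguments reach the hook through a custom property
(the name qemu_cmdline below is just an example) that vdsm exposes to the hook
as an environment variable:

#!/usr/bin/python
# Sketch only: inject arbitrary qemu arguments through libvirt's
# qemu:commandline namespace, driven by a hypothetical 'qemu_cmdline'
# custom property.
import os
import hooking

QEMU_NS = 'http://libvirt.org/schemas/domain/qemu/1.0'

if 'qemu_cmdline' in os.environ:
    domxml = hooking.read_domxml()
    domain = domxml.getElementsByTagName('domain')[0]
    domain.setAttribute('xmlns:qemu', QEMU_NS)
    cmdline = domxml.createElement('qemu:commandline')
    for arg in os.environ['qemu_cmdline'].split():
        qarg = domxml.createElement('qemu:arg')
        qarg.setAttribute('value', arg)
        cmdline.appendChild(qarg)
    domain.appendChild(cmdline)
    hooking.write_domxml(domxml)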


thanx,
deepak

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


[vdsm] RFC: Proposal to support network disk type in PosixFS

2012-07-20 Thread Deepak C Shetty

Hello,
I am proposing a method for VDSM to exploit a disk of 'network' type 
under PosixFS.
Although I am taking Gluster as the storage backend example, it should 
apply to any other backends (that support the network disk type) as well.


Currently under PosixFS, the design is to mount the 'server:/export' and 
use that as storage domain.

The libvirt XML generated for such a disk is something like below...

<disk device="disk" snapshot="no" type="file">
    <source file="/rhev/data-center/8fe261ea-43c2-4635-a08a-ccbafe0cde0e/4f31ea5c-c01e-4578-8353-8897b2d691b4/images/c94c9cf2-fa1c-4e43-8c77-f222dbfb032d/eff4db09-1fde-43cd-a75b-34054a64182b"/>
    <target bus="ide" dev="hda"/>
    <serial>c94c9cf2-fa1c-4e43-8c77-f222dbfb032d</serial>
    <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
</disk>

This works well, but does not help exploit the gluster block backend of 
QEMU, since the QEMU cmdline generated is -drive 
file='/rhev/data-center/'


Gluster fits as a network block device in QEMU, similar to the ceph and 
sheepdog backends QEMU already has.

The proposed libvirt XML for Gluster based disks is ... (WIP)

<disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='gluster' name='volname:imgname'>
        <host name='server' port='xxx'/>
    </source>
    <target dev='vda' bus='virtio'/>
</disk>

This causes libvirt to generate a QEMU cmdline like : -drive 
file=gluster:server@port:volname:imgname. The imgname is relative to the 
gluster mount point.


I am proposing the below to help VDSM exploit a disk as a network device 
under PosixFS.
Here is a code snippet (taken from a vdsm standalone script) of how a 
storage domain & VM are created in VDSM


# When storage domain is mounted

gluster_conn = "kvmfs01-hs22:dpkvol"  # gluster_server:volume_name
vdsOK(s.connectStorageServer(SHAREDFS_DOMAIN, "my gluster mount",
      [dict(id=1, connection=gluster_conn, vfs_type="glusterfs", mnt_options="")]))


# do other things...createStoragePool, SPM start etc...

...
...

# Now create a VM

vmId = str(uuid.uuid4())
vdsOK(
    s.create(dict(vmId=vmId,
                  drives=[dict(poolID=spUUID, domainID=sdUUID,
                               imageID=imgUUID, volumeID=volUUID,
                               disk_type="network", protocol="gluster",
                               connection=gluster_conn)],  # Proposed way
                  # drives=[dict(poolID=spUUID, domainID=sdUUID,
                  #              imageID=imgUUID, volumeID=volUUID)],  # Existing way
                  memSize=256,
                  display="vnc",
                  vmName="vm-backed-by-gluster",
                  ))
)


1) User (engine in the oVirt case) passes the disk_type, protocol & connection 
keywords as depicted above. NOTE: disk_type is used instead of just type 
to avoid confusion with driver_type
    -- protocol and connection are already available to the User as he/she 
used them as part of connectStorageServer ( connection and vfs_type )
    -- disk_type is something that the User chooses instead of the default 
(which is the file type)


2) Based on these extra keywords passed, the getXML() of 'class Drive' 
in libvirtvm.py can be modified to generate <disk type='network'>... as 
shown above (a rough sketch follows below).
Some parsing would be needed to extract the server and volname. The imgname 
relative to the gluster mount point can be extracted from drive['path'], 
which holds the fully qualified path.


3) Since these keywords are drive specific, User can choose which drives 
he/she wants to use network protocol Vs file. Not passing these keywords 
defaults to file, which is what happens today.


This approach would help VDSM to support network disk types under 
PosixFS and thus provide the ability to the User to choose file or 
network disk types on a per drive basis.
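
As referenced in point 2 above, a rough sketch (not the actual libvirtvm.py
code) of how such a network disk element could be built from the proposed
drive keywords; the 'mountpoint' key used below is an assumption, only
drive['path'] is part of the proposal:

# Rough sketch only: build a network <disk> element from the proposed
# drive keywords instead of the default file based element.
import os
import xml.dom.minidom

def network_disk_xml(drive):
    server, volname = drive['connection'].split(':', 1)   # "host:volume"
    imgname = os.path.relpath(drive['path'], drive['mountpoint'])
    doc = xml.dom.minidom.Document()
    disk = doc.createElement('disk')
    disk.setAttribute('type', drive['disk_type'])          # 'network'
    disk.setAttribute('device', 'disk')
    source = doc.createElement('source')
    source.setAttribute('protocol', drive['protocol'])     # 'gluster'
    source.setAttribute('name', '%s:%s' % (volname, imgname))
    host = doc.createElement('host')
    host.setAttribute('name', server)
    source.appendChild(host)
    disk.appendChild(source)
    return disk.toxml()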


I will post an RFC patch soon ( awaiting libvirt changes ), comments welcome.

thanx,
deepak




___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Using vdsm hook to exploit gluster backend of qemu

2012-07-16 Thread Deepak C Shetty

(top posting)

Hello All,
I am posting a VDSM hook example that helps exploit the 
QEMU-GlusterFS native integration from VDSM.


Since the last time I posted on this thread, there are some changes to 
the GlusterFS based image/file specification for QEMU.
This was done based on the discussion with the GlusterFS folks. Bharata (in 
CC) is primarily working on this.


The latest QEMU way of specifying an image/file served by GlusterFS is as 
below...

-drive file=gluster:server@port:volname:imagename,format=gluster

Here it takes volname ( instead of volumefile ) and server@port as additional 
parameters.


I have been able to write a sample VDSM stand-alone script 
& a VDSM hook which works along with the stand-alone script
to create a VM that exploits QEMU's native GlusterFS options ( as 
depicted above ).


( see attached: glusterfs_strg_domain.py & 55_qemu_gluster.py )

Few important points to note...

1) Quite a few things in the attached example .py's are hardcoded for my 
env. But it shows that things work from a VDSM perspective.


2) Pre-req: vdsmd service is started and gluster volume is setup and 
started. Gluster volume used in the example is...
`kvmfs01-hs22:dpkvol` where `kvmfs01-hs22` is the hostname and `dpkvol` 
is the GlusterFS volname


3) Copy 55_qemu_gluster.py to /usr/libexec/vdsm/hooks/before_vm_start/ 
directory


4) Run `python glusterfs_strg_domain.py`  -- This should create a blank 
vmdisk in gluster mount point and create a VM that boots
from the blank vmdisk using the -drive qemu option as depicted above, 
thus exploiting QEMU's gluster block backend support.


4a) While creating the VM, i pass a custom arg ( `use_qemu_gluster` 
in this case), which causes the VDSM hook of mine to be invoked.


4b) The hook replaces the existing disk xml tag (generated as a 
normal file path pointing to gluster mount point)
   with the `-drive 
file=gluster:server@port:volname:imagename,format=gluster` using 
qemu:commandline tag support of libvirt.


    4c) It also adds an emulator tag to point to my custom qemu, which 
has gluster block backend support.


    4d) Currently libvirt native support for GlusterFS is not yet 
there; once it's present, the hook can be changed/modified to
exploit the right libvirt tags for the same.

5) If all goes fine :), one should be able to see the VM getting created 
and from VNC it should be stuck at No boot device found
which is obvious, since the VDSM standalone script creates a new Volume 
( file in this case ) as a vmdisk, which is a blank disk.


6) I have tried extending the hook to add -cdrom path/to/iso and boot 
from cdrom and install the OS on the Gluster based vmdisk

as part of the VM execution, which also works fine.

7) Since the scenario works fine from a VDSM standalone script, it 
should work from the oVirt side as well, provided the steps
necessary to register the custom arg ( `use_qemu_gluster` in this case) 
with oVirt and to supply the custom arg as part
of the VM create step are followed.
of VM create step is followed.

I would like comments/feedback on the VDSM hook approach and 
suggestions on how to improve the hook implementation,
especially for some of the stuff that is hardcoded.

I am sure a VDSM hook is not the ideal way to add this functionality in 
VDSM; I would request inputs from experts on this list on 
what would be a better way in VDSM to exploit the QEMU-GlusterFS native 
integration. Ideally, based on the Storage Domain type 
and options used, there should be a way in VDSM to modify the libvirt 
XML formed.


Appreciate feedback/suggestions.

thanx,
deepak



On 07/05/2012 05:24 PM, Deepak C Shetty wrote:

Hello All,
Any updates/comments on this mail, anybody ?

More comments/questions inline below.
I would appreciate a response which can help me here.

thanx,
deepak

On 06/27/2012 06:44 PM, Deepak C Shetty wrote:

Hello,
Recently there were patches posted in qemu-devel to support 
gluster as a block backend for qemu.


This introduced new way of specifying drive location to qemu as ...
-drive file=gluster:volumefile:image name

where...
volumefile is the gluster volume file name ( say gluster volume 
is pre-configured on the host )

image name is the name of the image file on the gluster mount point

I wrote a vdsm standalone script using SHAREDFS ( which maps to 
PosixFs ) taking cues from http://www.ovirt.org/wiki/Vdsm_Standalone

The conndict passed to connectStorageServer is as below...
[dict(id=1, connection=kvmfs01-hs22:dpkvol, vfs_type=glusterfs, 
mnt_options=)]


Here note that 'dpkvol' is the name of the gluster volume

I am able to create and invoke a VM backed by an image file 
residing on the gluster mount.


But since this is SHAREDFS way, the qemu -drive cmdline generated via 
VDSM is ...
-drive file=/rhev/datacentre/mnt/ -- which eventually softlinks 
to the image file on the gluster mount point.


I was looking to write a vdsm hook to be able to change the above to 


-drive file

[vdsm] Q on createVolume(..) filesize.

2012-07-12 Thread Deepak C Shetty

Hello,
I am working on a VDSM standalone script that is using the PosixFS 
interface to mount a gluster volume
& trying to create a volume (file) inside the storage domain.

Snip of the code is below...

==
sizeGiB = 4

tid = vdsOK(s.createVolume(sdUUID, spUUID, imgUUID, sizeGiB,
                           RAW_FORMAT, PREALLOCATED_VOL, LEAF_VOL,
                           volUUID, "glustervol",
                           BLANK_UUID, BLANK_UUID))['uuid']
waitTask(s, tid)
==

But the file size created is not 4G, it's 8K as seen below...

qemu-img info 
/rhev/data-center/mnt/kvmfs01-hs22:dpkvol/f0443ec4-3c94-49c9-a239-797562ee4926/images/073b3309-e4cd-4b6c-978e-5744a9afb8b7/c8c4f92e-818d-433a-b013-b0060cd7cc87
image: 
/rhev/data-center/mnt/kvmfs01-hs22:dpkvol/f0443ec4-3c94-49c9-a239-797562ee4926/images/073b3309-e4cd-4b6c-978e-5744a9afb8b7/c8c4f92e-818d-433a-b013-b0060cd7cc87

file format: raw
virtual size: 2.0K (2048 bytes)
disk size: 8.0K

It should have created a raw file of size 4G, but it's not.
Wondering if the sizeGiB argument is not in GB but something else ?


thanx,
deepak


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Q on createVolume(..) filesize.

2012-07-12 Thread Deepak C Shetty

On 07/12/2012 06:32 PM, Lee Yarwood wrote:

On 07/12/2012 01:25 PM, Deepak C Shetty wrote:

Hello,
 I am working on a VDSM standalone script that is using the PosixFS
interface to mount a gluster volume
 & trying to create a volume (file) inside the storage domain.

Snip of the code is below...

==
sizeGiB = 4

tid = vdsOK(s.createVolume(sdUUID, spUUID, imgUUID, sizeGiB,
RAW_FORMAT, PREALLOCATED_VOL, LEAF_VOL,
volUUID, glustervol,
BLANK_UUID, BLANK_UUID))['uuid']
waitTask(s, tid)
==

But the file size created is not 4G, its 8K as seen below...

qemu-img info
/rhev/data-center/mnt/kvmfs01-hs22:dpkvol/f0443ec4-3c94-49c9-a239-797562ee4926/images/073b3309-e4cd-4b6c-978e-5744a9afb8b7/c8c4f92e-818d-433a-b013-b0060cd7cc87

image:
/rhev/data-center/mnt/kvmfs01-hs22:dpkvol/f0443ec4-3c94-49c9-a239-797562ee4926/images/073b3309-e4cd-4b6c-978e-5744a9afb8b7/c8c4f92e-818d-433a-b013-b0060cd7cc87

file format: raw
virtual size: 2.0K (2048 bytes)
disk size: 8.0K

It should have created a raw file of size 4G, but its not.
Wondering if the sizeGiB argument is not in GB but somethign else ?

Isn't this argument actually the number of sectors?

vdsm/storage/fileVolume.py

119 def create(cls, repoPath, sdUUID, imgUUID, size, volFormat,
preallocate,
120diskType, volUUID, desc, srcImgUUID, srcVolUUID):
121 
122 Create a new volume with given size or snapshot
123 'size' - in sectors
124 'volFormat' - volume format COW / RAW
125 'preallocate' - Preallocate / Sparse
126 'diskType' - enum (API.Image.DiskTypes)
127 'srcImgUUID' - source image UUID
128 'srcVolUUID' - source volume UUID
129 

Lee


Thanks, I realised it after seeing the 'dd' cmd vdsm invokes to 
preallocate; 'bs' is not specified, so it defaults to a 512 byte block size.
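
So, with the size argument actually being in sectors, the snippet above would
need a conversion along these lines (assuming 512-byte sectors, which is what
the dd default implies):

# size is expected in 512-byte sectors, not GiB
sizeGiB = 4
sizeSectors = sizeGiB * 1024 * 1024 * 1024 / 512   # 8388608 sectors for 4 GiB
tid = vdsOK(s.createVolume(sdUUID, spUUID, imgUUID, sizeSectors,
                           RAW_FORMAT, PREALLOCATED_VOL, LEAF_VOL,
                           volUUID, "glustervol",
                           BLANK_UUID, BLANK_UUID))['uuid']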


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Using vdsm hook to exploit gluster backend of qemu

2012-07-05 Thread Deepak C Shetty

Hello All,
Any updates/comments on this mail, anybody ?

More comments/questions inline below.
I would appreciate a response which can help me here.

thanx,
deepak

On 06/27/2012 06:44 PM, Deepak C Shetty wrote:

Hello,
Recently there were patches posted in qemu-devel to support 
gluster as a block backend for qemu.


This introduced new way of specifying drive location to qemu as ...
-drive file=gluster:volumefile:image name

where...
volumefile is the gluster volume file name ( say gluster volume is 
pre-configured on the host )

image name is the name of the image file on the gluster mount point

I wrote a vdsm standalone script using SHAREDFS ( which maps to 
PosixFs ) taking cues from http://www.ovirt.org/wiki/Vdsm_Standalone

The conndict passed to connectStorageServer is as below...
[dict(id=1, connection="kvmfs01-hs22:dpkvol", vfs_type="glusterfs", 
mnt_options="")]


Here note that 'dpkvol' is the name of the gluster volume

I am able to create and invoke a VM backed by an image file 
residing on the gluster mount.


But since this is SHAREDFS way, the qemu -drive cmdline generated via 
VDSM is ...
-drive file=/rhev/datacentre/mnt/ -- which eventually softlinks to 
the image file on the gluster mount point.


I was looking to write a vdsm hook to be able to change the above to 
-drive file=gluster:volumefile:image name

which means I would need access to some of the conndict params inside 
the hook, esp. the 'connection' to extract the volume name.


1) In looking at the current VDSM code, i don't see a way for the hook 
to know anything abt the storage domain setup. So the only
way is to have the user pass a custom param which provides the path to 
the volumefile  image and use it in the hook. Is there
a better way ? Can i use the vdsm gluster plugin support inside the 
hook to determine the volfile from the volname, assuming I
only take the volname as the custom param, and determine imagename 
from the existing source file = .. tag ( basename is the
image name). Wouldn't it be better to provide a way for hooks to 
access ( readonly) storage domain parameters, so that they can

use that do implement the hook logic in a more saner way ?

2) In talking to Eduardo, it seems there are discussion going on to 
see how prepareVolumePath and prepareImage could be exploited
to fit gluster ( and in future other types) based images. I am not 
very clear on the image and volume code of vdsm, frankly its very

complex and hard to understand due to lack of comments.

I would appreciate if someone can guide me on what is the best way to 
achive my goal (-drive file=gluster:volumefile:image name)
here. Any short term solutions if not perfect solution are also 
appreciated, so that I can atleast have a working setup where I just
run my VDSM standaloen script and my qemu cmdline using gluster:... is 
generated.


Currently I am using qemu:commandline tag facility of libvirt to 
inject the needed qemu options and hardcoding the volname, imagename
but i would like to do this based on the conndict passed by the user 
when creating SHAREDFS domain.




I am using a VDSM hook to customise the libvirt xml to add the -drive 
file=gluster:... cmdline option, but am facing issues as below...
NOTE: I am using libvirt's generic qemu:commandline tag facility to 
add my needed qemu options.


1) I replace the existing <disk> tag with my new qemu:commandline tag to 
introduce -drive file=gluster:...


This is what I add in my vdsm hook...
<qemu:commandline>
    <qemu:arg value='-drive'/>
    <qemu:arg value='file=gluster:/var/lib/glusterd/vols/dpkvol/dpkvol-qemu.vol:/d536ca42-9dd2-40a2-bd45-7e5c67751698/images/e9d31bc2-9fb6-4803-aa88-5563229aad41/1c3463aa-be2c-4405--7283b166e981,format=gluster'/>
</qemu:commandline>
</domain>

In this case the qemu process is created ( as seen from ps aux ) but the 
VM is in a stopped state; vdsm does not start it, and I cannot start it using 
virsh either, it says 'unable to acquire some lock'.

There is no way I can force start it from the vdscli cmdline either.
From the vdsm.log, all I can see is up to the point where vdsm dumps the 
libvirt xml... then nothing happens.


In other cases ( when I am not using this custom cmdline and the 
standard <disk> tag is present ).. I see the below msgs in vdsm.log 
after it dumps the libvirt xml...


libvirtEventLoop::DEBUG::2012-07-05 
13:52:17,780::libvirtvm::2409::vm.Vm::(_onLibvirtLifecycleEvent) 
vmId=`1eb2b3f7-a319-44fe-8263-fd6e770db983`::event Started detail 0 
opaque None
Thread-49::DEBUG::2012-07-05 13:52:17,819::utils::329::vm.Vm::(start) 
vmId=`1eb2b3f7-a319-44fe-8263-fd6e770db983`::Start statistics collection
Thread-51::DEBUG::2012-07-05 13:52:17,819::utils::358::vm.Vm::(run) 
vmId=`1eb2b3f7-a319-44fe-8263-fd6e770db983`::Stats thread started
Thread-51::DEBUG::2012-07-05 
13:52:17,821::task::588::TaskManager.Task::(_updateState) 
Task=`f66ac43a-1528-491c-bdee-37112dac536c`::moving from state init -> 
state preparing
Thread-51::INFO::2012-07-05 
13:52:17,822::logUtils::37

[vdsm] Readonly leases error in vdsm log

2012-07-05 Thread Deepak C Shetty

Hello,
I am creating a VM using a vdsm standalone script and facing issues 
when I add a cdrom and make it boot from the cdrom.




vdsOK(
    s.create(dict(vmId=vmId,
                  drives=[dict(poolID=spUUID, domainID=sdUUID,
                               imageID=imgUUID, volumeID=volUUID)],
                  memSize=256,
                  display="vnc",
                  vmName="vm-backed-by-gluster",
                  cdrom="/home/deepakcs/Fedora-16-x86_64-Live-Desktop.iso",
                  boot="d",
                  custom={"dpktry": 1},
                  ))
)

When I added the cdrom= and boot= lines and ran it, I see the below in vdsm.log

Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 570, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1364, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2420, in createXML
    if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error unsupported configuration: Readonly leases 
are not supported
Thread-49::DEBUG::2012-07-05 
17:35:59,780::vm::920::vm.Vm::(setDownStatus) 
vmId=`8b400342-2cda-478d-93f6-a36bac8538c8`::Changed state to Down: 
internal error unsupported configuration: Readonly leases are not supported



It looks like it's trying to use the cdrom disk device as a disk lease, 
hence the complaint?

But the libvirt xml has disk tags which look like this...

<disk device="disk" snapshot="no" type="file">
    <source file="/rhev/data-center/98fc6cd9-d857-4735-bc7a-59a289bc0f55/01400276-e3c8-44d9-8353-924ab2183af2/images/95c9f6a4-def1-4300-baf6-db884dc8ccca/9d62e802-859e-4381-8029-7be5a6b1de26"/>
    <target bus="ide" dev="hda"/>
    <serial>95c9f6a4-def1-4300-baf6-db884dc8ccca</serial>
    <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
</disk>
<disk device="cdrom" snapshot="no" type="file">
    <source file="/home/deepakcs/Fedora-16-x86_64-Live-Desktop.iso" startupPolicy="optional"/>
    <target bus="ide" dev="hdc"/>
    <serial></serial>
</disk>

Any idea why it throws Readonly leases are not supported ?

thanx,
deepak

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


[vdsm] Agenda for today's call

2012-07-02 Thread Deepak C Shetty

Stuff I'd like to mention today:

- bulk of REST patches unilaterally pushed. I still consider that API
 preliminary though, since I do not think it has undergone the scrutiny
 it deserves.
 The patches were not really verified with current API.py, so
 they are currently broken. Please review and ack
 o http://gerrit.ovirt.org/5815/
 o http://gerrit.ovirt.org/3757

- Volunteer is needed to write a Jenkins job for running the functional
 tests. Otherwise, REST (and other stuff) is going to be broken without
 us even noticing.

- State of Saggi's API overhaul

- Implementation of libvdsm has been discussed on list. I'd like to
 hear more about its suggested interface.

- what else?

During the call, please keep an eye on #v...@irc.freenode.net so that
folks who cannot dial into the conference call, can still participate in
a way.

Regards,
Dan.
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Agenda for tomorrow's call

2012-06-29 Thread Deepak C Shetty

On 06/29/2012 03:24 PM, Ewoud Kohl van Wijngaarden wrote:

On Mon, Jun 18, 2012 at 05:24:26PM +0300, Dan Kenigsberg wrote:

Deepak: please review libstorage writeup. Saggi, Adam: will do.
Deepak: is there an irc bot logging #vdsm? No but I'd love if you
  configure one ;-)
Deepak to research if there's an available bot to do this. It can be
 run on a private server, and then post to somewhere like
 you.fedorapeople.org.

Since my IRC client already logs them I can easily publish them.
http://ekohl.nl/vdsm has logs since december 2011. Currently manually
synchronized and I'll set up a cronjob which does copies the logs.
They've been anonymized by 's/ \[[^[]*\]/:/' so IPs shouldn't be in
there.
Thanks Ewoud. Can you publish this link on the VDSM main wikipage or 
create a new page
on the ovirt wiki ? Somewhere so that folks googling would find it... maybe 
it's a good idea to publish this permanently as part of the #vdsm topic ?


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


[vdsm] Using vdsm hook to exploit gluster backend of qemu

2012-06-27 Thread Deepak C Shetty

Hello,
Recently there were patches posted in qemu-devel to support gluster 
as a block backend for qemu.


This introduced new way of specifying drive location to qemu as ...
-drive file=gluster:volumefile:image name

where...
volumefile is the gluster volume file name ( say gluster volume is 
pre-configured on the host )

image name is the name of the image file on the gluster mount point

I wrote a vdsm standalone script using SHAREDFS ( which maps to PosixFs 
) taking cues from http://www.ovirt.org/wiki/Vdsm_Standalone

The conndict passed to connectStorageServer is as below...
[dict(id=1, connection="kvmfs01-hs22:dpkvol", vfs_type="glusterfs", 
mnt_options="")]


Here note that 'dpkvol' is the name of the gluster volume

I am able to create and invoke a VM backed by an image file residing 
on the gluster mount.


But since this is SHAREDFS way, the qemu -drive cmdline generated via 
VDSM is ...
-drive file=/rhev/datacentre/mnt/ -- which eventually softlinks to 
the image file on the gluster mount point.


I was looking to write a vdsm hook to be able to change the above to 
-drive file=gluster:volumefile:image name

which means I would need access to some of the conndict params inside 
the hook, esp. the 'connection' to extract the volume name.


1) In looking at the current VDSM code, I don't see a way for the hook 
to know anything about the storage domain setup. So the only
way is to have the user pass a custom param which provides the path to 
the volumefile & image and use it in the hook. Is there
a better way ? Can I use the vdsm gluster plugin support inside the hook 
to determine the volfile from the volname, assuming I
only take the volname as the custom param, and determine the imagename from 
the existing <source file=".."/> tag ( the basename is the
image name ) ? Wouldn't it be better to provide a way for hooks to access 
( readonly ) storage domain parameters, so that they can
use that to implement the hook logic in a saner way ?
use that do implement the hook logic in a more saner way ?

2) In talking to Eduardo, it seems there are discussions going on to see 
how prepareVolumePath and prepareImage could be exploited
to fit gluster ( and in future other types ) based images. I am not very 
clear on the image and volume code of vdsm; frankly it is very
complex and hard to understand due to the lack of comments.

I would appreciate it if someone can guide me on the best way to 
achieve my goal (-drive file=gluster:volumefile:image name)
here. Any short term solutions, if not a perfect solution, are also 
appreciated, so that I can at least have a working setup where I just
run my VDSM standalone script and my qemu cmdline using gluster:... is 
generated.


Currently I am using the qemu:commandline tag facility of libvirt to 
inject the needed qemu options, hardcoding the volname and imagename,
but I would like to do this based on the conndict passed by the user 
when creating the SHAREDFS domain.
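
Just to illustrate the kind of logic the hook would need, a small sketch; the
volfile path is assumed to arrive as a custom param (hooks cannot see the
conndict today), and the image name is taken from the basename of the existing
<source file=".."/> path:

# Sketch only: build the gluster drive spec the hook would substitute for
# the file based path; 'volfile' is a hypothetical custom property.
import os

def gluster_drive_spec(volfile, source_file):
    imgname = os.path.basename(source_file)   # basename of <source file=".."/>
    return "gluster:%s:%s" % (volfile, imgname)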


thanx,
deepak


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-26 Thread Deepak C Shetty

On 06/25/2012 11:13 PM, Itamar Heim wrote:

On 06/25/2012 10:14 AM, Deepak C Shetty wrote:

On 06/25/2012 07:47 AM, Shu Ming wrote:

On 2012-6-25 10:10, Andrew Cathrow wrote:


- Original Message -

From: Andy Grover agro...@redhat.com
To: Shu Ming shum...@linux.vnet.ibm.com
Cc: libstoragemgmt-de...@lists.sourceforge.net,
engine-de...@ovirt.org, VDSM Project Development
vdsm-devel@lists.fedorahosted.org
Sent: Sunday, June 24, 2012 10:05:45 PM
Subject: Re: [vdsm] [Engine-devel] RFC: Writeup on
VDSM-libstoragemgmt integration

On 06/24/2012 07:28 AM, Shu Ming wrote:

On 2012-6-23 20:40, Itamar Heim wrote:

On 06/23/2012 03:09 AM, Andy Grover wrote:

On 06/22/2012 04:46 PM, Itamar Heim wrote:

On 06/23/2012 02:31 AM, Andy Grover wrote:

On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:

Also, there is no mention on credentials in any part of the
process.
How does VDSM or the host get access to actually modify the
storage
array? Who holds the creds for that and how? How does the user
set
this up?

It seems to me more natural to have the oVirt-engine use
libstoragemgmt
directly to allocate and export a volume on the storage array,
and
then
pass this info to the vdsm on the node creating the vm. This
answers
Saggi's question about creds -- vdsm never needs array
modification
creds, it only gets handed the params needed to connect and use
the
new
block device (ip, iqn, chap, lun).

Is this usage model made difficult or impossible by the current
software
architecture?

what about live snapshots?

I'm not a virt guy, so extreme handwaving:

vm X uses luns 1 & 2

engine -> vdsm pause vm X

that's pausing the VM. live snapshot isn't supposed to do so.

Though we don't expect to do a pausing operation on the VM when a live
snapshot is in progress, the VM should be blocked on access to the
specific luns for a while. The blocking time should be very short
to
avoid storage IO timeouts in the VM.

OK my mistake, we don't pause the VM during live snapshot, we block
on
access to the luns while snapshotting. Does this keep live snapshots
working and mean ovirt-engine can use libsm to config the storage
array
instead of vdsm?

Because that was really my main question, should we be talking about
engine-libstoragemgmt integration rather than vdsm-libstoragemgmt
integration.

for snapshotting wouldn't we want VDSM to handle the coordination of
the various atomic functions?


I think VDSM-libstoragemgmt will let the storage array itself to make
the snapshot and handle the coordination of the various atomic
functions. VDSM should be blocked on the following access to the
specific luns which are under snapshotting.


I kind of agree. If the snapshot is being done at the array level, then the
array takes care of quiescing the I/O, taking the snapshot and allowing
the I/O again, so why does VDSM have to worry about anything here? It should all
happen transparently for VDSM, isn't it?


I may be missing something, but AFAIU you need to ask the guest to 
perform the quiesce, and I'm sure the storage array can't do that.


No, you are not, I missed it. After Tony & Shu Ming's replies, I realised 
that the guest has to quiesce the I/O before VDSM can ask the storage array 
to take the snapshot.





___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-25 Thread Deepak C Shetty

On 06/19/2012 01:45 AM, Saggi Mizrahi wrote:

First of all I'd like to suggest not using the LSM acronym as it can also mean 
live-storage-migration and maybe other things.


Sure, what do you suggest ? libSM ?


Secondly I would like to avoid talking about what needs to be changed in VDSM 
before we figure out what exactly we want to accomplish.



Also, there is no mention on credentials in any part of the process.
How does VDSM or the host get access to actually modify the storage array?
Who holds the creds for that and how?
How does the user set this up?


Per my original discussion on this with Ayal, this is what he had 
suggested...
In addition, I'm assuming we will either need a new 'storage array' 
entity in engine to keep credentials, or, in case of storage array as 
storage domain, just keep this info as part of the domain at engine level.


Either we can have the libstoragemgmt creds stored in the engine as part 
of engine-setup, or have the user input them as part of Storage Provisioning 
with a "remember credentials" option, so the engine saves them and passes them 
to VDSM as needed. Either way, the creds should come from the 
user/admin; there is no other way, correct?



In the array as domain case. How are the luns being mapped to initiators. What 
about setting discovery credentials.
In the array set up case. How will the hosts be represented in regards to 
credentials?
How will the different schemes and capabilities in regard to authentication 
methods will be expressed.


Not clear on what the concern here is. Can you pls provide more clarity 
on the problem here ?

Maybe providing some examples will help.


Rest of the comments inline

- Original Message -

From: Deepak C Shettydeepa...@linux.vnet.ibm.com
To: VDSM Project Developmentvdsm-devel@lists.fedorahosted.org
Cc: libstoragemgmt-de...@lists.sourceforge.net, engine-de...@ovirt.org
Sent: Wednesday, May 30, 2012 5:38:46 AM
Subject: [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

Hello All,

  I have a draft write-up on the VDSM-libstoragemgmt integration.
I wanted to run this thru' the mailing list(s) to help tune and
crystallize it, before putting it on the ovirt wiki.
I have run this once thru Ayal and Tony, so have some of their
comments
incorporated.

I still have few doubts/questions, which I have posted below with
lines
ending with '?'

Comments / Suggestions are welcome & appreciated.

thanx,
deepak

[Ccing engine-devel and libstoragemgmt lists as this stuff is
relevant
to them too]

--

1) Background:

VDSM provides high level API for node virtualization management. It
acts
in response to the requests sent by oVirt Engine, which uses VDSM to
do
all node virtualization related tasks, including but not limited to
storage management.

libstoragemgmt aims to provide vendor agnostic API for managing
external
storage array. It should help system administrators utilizing open
source solutions have a way to programmatically manage their storage
hardware in a vendor neutral way. It also aims to facilitate
management
automation, ease of use and take advantage of storage vendor
supported
features which improve storage performance and space utilization.

Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/

libstoragemgmt (LSM) today supports C and python plugins for talking
to
external storage array using SMI-S as well as native interfaces (eg:
netapp plugin )
Plan is to grow the SMI-S interface as needed over time and add more
vendor specific plugins for exploiting features not possible via
SMI-S
or have better alternatives than using SMI-S.
For eg: Many of the copy offload features require to use vendor
specific
commands, which justifies the need for a vendor specific plugin.


2) Goals:

  2a) Ability to plugin external storage array into oVirt/VDSM
virtualization stack, in a vendor neutral way.

  2b) Ability to list features/capabilities and other statistical
info of the array

  2c) Ability to utilize the storage array offload capabilities
  from
oVirt/VDSM.


3) Details:

LSM will sit as a new repository engine in VDSM.
VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192

Current plan is to have LSM co-exist with VDSM on the virtualization
nodes.

*Note : 'storage' used below is generic. It can be a file/nfs-export
for
NAS targets and LUN/logical-drive for SAN targets.

VDSM can use LSM and do the following...
  - Provision storage
  - Consume storage

3.1) Provisioning Storage using LSM

Typically this will be done by a Storage administrator.

oVirt/VDSM should provide storage admin the
  - ability to list the different storage arrays along with their
types (NAS/SAN), capabilities, free/used space.
  - ability to provision storage using any of the array
  capabilities
(eg: thin provisioned lun or new NFS export )
  - ability to manage the provisioned storage 

Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-25 Thread Deepak C Shetty

On 06/25/2012 08:28 PM, Ryan Harper wrote:

* Andrew Cathrowacath...@redhat.com  [2012-06-24 21:11]:


- Original Message -

From: Andy Groveragro...@redhat.com
To: Shu Mingshum...@linux.vnet.ibm.com
Cc: libstoragemgmt-de...@lists.sourceforge.net, engine-de...@ovirt.org, VDSM 
Project Development
vdsm-devel@lists.fedorahosted.org
Sent: Sunday, June 24, 2012 10:05:45 PM
Subject: Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt  
integration

On 06/24/2012 07:28 AM, Shu Ming wrote:

On 2012-6-23 20:40, Itamar Heim wrote:

On 06/23/2012 03:09 AM, Andy Grover wrote:

On 06/22/2012 04:46 PM, Itamar Heim wrote:

On 06/23/2012 02:31 AM, Andy Grover wrote:

On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:

Also, there is no mention on credentials in any part of the
process.
How does VDSM or the host get access to actually modify the
storage
array? Who holds the creds for that and how? How does the user
set
this up?

It seems to me more natural to have the oVirt-engine use
libstoragemgmt
directly to allocate and export a volume on the storage array,
and
then
pass this info to the vdsm on the node creating the vm. This
answers
Saggi's question about creds -- vdsm never needs array
modification
creds, it only gets handed the params needed to connect and use
the
new
block device (ip, iqn, chap, lun).

Is this usage model made difficult or impossible by the current
software
architecture?

what about live snapshots?

I'm not a virt guy, so extreme handwaving:

vm X uses luns 1 & 2

engine -> vdsm pause vm X

that's pausing the VM. live snapshot isn't supposed to do so.

Though we don't expect to do a pausing operation on the VM when a live
snapshot is in progress, the VM should be blocked on access to the
specific luns for a while.  The blocking time should be very short
to
avoid storage IO timeouts in the VM.

OK my mistake, we don't pause the VM during live snapshot, we block
on
access to the luns while snapshotting. Does this keep live snapshots
working and mean ovirt-engine can use libsm to config the storage
array
instead of vdsm?

Because that was really my main question, should we be talking about
engine-libstoragemgmt integration rather than vdsm-libstoragemgmt
integration.

for snapshotting wouldn't we want VDSM to handle the coordination of
the various atomic functions?

Absolutely.  Requiring every management application (engine, etc) to
integrate with libstoragemanagement is a win here.  We want to simplify
working with KVM, storage, etc not require every mgmt application to
know deep details about how to create a live VM snapshot.



Sorry, but it's not clear to me. Are you saying engine-libstoragemgmt 
integration is a win here?
VDSM is the common factor here, so integrating libstoragemgmt with VDSM 
helps anybody talking with VDSM in the future, AFAIU.


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [Libstoragemgmt-devel] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-25 Thread Deepak C Shetty

On 06/25/2012 08:38 PM, Tony Asleson wrote:

On 06/25/2012 09:14 AM, Deepak C Shetty wrote:

On 06/25/2012 07:47 AM, Shu Ming wrote:

I think VDSM-libstoragemgmt will let the storage array itself to make
the snapshot and handle the coordination of the various atomic
functions. VDSM should be blocked on the following access to the
specific luns which are under snapshotting.

I kind of agree. If the snapshot is being done at the array level, then the
array takes care of quiescing the I/O, taking the snapshot and allowing
the I/O again, so why does VDSM have to worry about anything here? It should all
happen transparently for VDSM, isn't it?

The array can take a snapshot in flight, but the data may be in an
inconsistent state.  Only the end application/user of the storage knows
when a point in time is consistent.  Typically the application(s) are
quiesced, the OS buffers flushed (outstanding tagged IO is allowed to
complete) and then the storage is told to make a point in time copy.
This is the only way to be sure of what you have on disk is coherent.

A transactional database (two-phase commit) and logging file systems
(meta data) are specifically written to handle these inconsistencies,
but many applications are not.



Thanks for clarifying, Tony. So that means we need to do whatever is needed from 
VDSM to quiesce the I/O,

and then VDSM should instruct the array to take the snapshot.
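
Just to pin down the ordering being agreed on here, a rough sketch (not 
actual VDSM code) of freeze -> array snapshot -> thaw. It is written 
against today's libvirt-python binding; dom.fsFreeze()/fsThaw() need 
qemu-guest-agent in the guest and were not available when this thread was 
written, and take_array_snapshot is a hypothetical placeholder for whatever 
libstoragemgmt call ends up doing the array-side copy.

import libvirt

def array_snapshot_with_quiesce(vm_name, take_array_snapshot):
    """Quiesce the guest, take an array-level snapshot, then thaw.

    take_array_snapshot: hypothetical callable that asks the storage
    array (e.g. via libstoragemgmt) to snapshot the LUN(s) backing the VM.
    """
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByName(vm_name)
        dom.fsFreeze()              # guest flushes and freezes its filesystems
        try:
            take_array_snapshot()   # point-in-time copy is now consistent
        finally:
            dom.fsThaw()            # always resume guest I/O
    finally:
        conn.close()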

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [virt-node] VDSM as a general purpose virt host manager

2012-06-19 Thread Deepak C Shetty

On 06/19/2012 01:13 AM, Ryan Harper wrote:

* Saggi Mizrahismizr...@redhat.com  [2012-06-18 10:05]:

I would like to put on the table for discussion the growing need for a way
to more easily reuse the functionality of VDSM in order to service projects
other than Ovirt-Engine.

Originally VDSM was created as a proprietary agent for the sole purpose of
serving the then proprietary version of what is known as ovirt-engine. Red Hat,
after acquiring the technology, pressed on with its commitment to open source
ideals and released the code. But just releasing code into the wild doesn't
build a community or make a project successful. Furthermore, when building
open source software you should aspire to build reusable components instead of
monolithic stacks.


Saggi,

Thanks for sending this out.  I've been trying to pull together some
thoughts on what else is needed for vdsm as a community.  I know that
for some time downstream has been the driving force for all of the work
and now with a community there are challenges in finding our own way.

While we certainly don't want to make downstream efforts harder, I think
we need to develop and support our own vision for what vdsm can become,
somewhat independent of downstream and other exploiters.

Revisiting the API is definitely a much needed endeavor and I think
adding some use-cases or sample applications would be useful in
demonstrating whether or not we're evolving the API into something
easier to use for applications beyond engine.


We would like to expose a stable, documented, well supported API. This gives
us a chance to rethink the VDSM API from the ground up. There is already work
in progress of making the internal logic of VDSM separate enough from the API
layer so we could continue feature development and bug fixing while designing
the API of the future.

In order to achieve this though we need to do several things:
1. Declare API supportability guidelines
2. Decide on an API transport (e.g. REST, ZMQ, AMQP)
3. Make the API easily consumable (e.g. proper docs, example code, extending
   the API, etc)
4. Implement the API itself

I agree with the list, but I'd like to work on the redesign discussion so
that we're not doing all of 1-4 around the existing API that's
engine-focused.

I'm over due for posting a feature page on vdsm standalone mode, and I
have some other thoughts on various uses.

Some other paths of thought for use-cases I've been mulling over:

 - Simplifying using QEMU/KVM
 - consuming qemu via command line
 - can we manage/support developers launching qemu directly
 - consuming qemu via libvirt
 - can we integrate with systems that are already using
 libvirt

 - Addressing issues with libvirt
 - are there kvm specific features we can exploit that libvirt
 doesn't?

 - Scale-up/fail-over
 - can we support a single vdsm node, but allow for building up
 clusters/groups without bringing in something like ovirt-engine
 - can we look at decentralized fail-over for reliability without
 a central mgmt server?

 - pluggability
 - can we support an API that allows for third-party plugins to
 support new features or changes in implementation?


A pluggability feature would be nice. Even nicer would be the ability to 
introspect and figure out what's supported by VDSM. For example, it would be nice 
to query what plugins/capabilities are supported, so the 
client can take a decision and/or call the appropriate APIs without worrying 
about ENOTSUPP kinds of errors.
It does become blurry when we talk about Repository Engines... that was 
also targeted to provide pluggability in managing Images.. how will 
that co-exist with API-level pluggability ?


IIUC, StorageProvisioning (via libstoragemgmt) can be one such optional 
support that can fit as a plug-in nicely, right ?
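
As a small illustration of the introspection idea, here is a sketch that 
probes a host through vdscli before deciding which verbs to call. 
getVdsCapabilities is an existing verb, but the 'supportedFeatures' / 
'glusterfs' keys checked below are made-up examples of what an advertised 
capability could look like, not fields VDSM reports today.

import sys

sys.path.append('/usr/share/vdsm')
import vdscli

s = vdscli.connect()

caps = s.getVdsCapabilities()
if caps['status']['code']:
    raise Exception(caps['status']['message'])

# Hypothetical capability check: only use the gluster verbs if the host
# says it supports them (key names are illustrative only).
info = caps['info']
if 'glusterfs' in info.get('supportedFeatures', []):
    print("host advertises gluster support")
else:
    print("falling back to plain PosixFS flows")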



 - kvm tool integration into the API
 - there are lots of different kvm virt tools for various tasks
 and they are all stand-alone tools.  Can we integrate their
 use into the node level API.  Think libguestfs, virt-install,
 p2v/v2v tooling.  All of these are available, but there isn't an
 easy way to use this tools through an API.

 - host management operations
 - vdsm already does some host level configuration (see
   networking e.g.) it would be good to think about extending
 the API to cover other areas of configuration and updates
 - hardware enumeration
 - driver level information
 - storage configuration
 (we've got a bit of a discussion going around
  libstoragemgmt here)

 - performance monitoring/debugging
 - is the host collecting enough information to do debug/perf
 analysis
 - can we support specific configurations of a host that optimize
 

Re: [vdsm] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-19 Thread Deepak C Shetty

On 06/18/2012 09:26 PM, Shu Ming wrote:

On 2012-5-30 17:38, Deepak C Shetty wrote:

Hello All,

I have a draft write-up on the VDSM-libstoragemgmt integration.
I wanted to run this thru' the mailing list(s) to help tune and 
crystallize it, before putting it on the ovirt wiki.
I have run this once thru Ayal and Tony, so have some of their 
comments incorporated.


I still have few doubts/questions, which I have posted below with 
lines ending with '?'


Comments / Suggestions are welcome & appreciated.

thanx,
deepak

[Ccing engine-devel and libstoragemgmt lists as this stuff is 
relevant to them too]


-- 



1) Background:

VDSM provides high level API for node virtualization management. It 
acts in response to the requests sent by oVirt Engine, which uses 
VDSM to do all node virtualization related tasks, including but not 
limited to storage management.


libstoragemgmt aims to provide vendor agnostic API for managing 
external storage array. It should help system administrators 
utilizing open source solutions have a way to programmatically manage 
their storage hardware in a vendor neutral way. It also aims to 
facilitate management automation, ease of use and take advantage of 
storage vendor supported features which improve storage performance 
and space utilization.


Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/

libstoragemgmt (LSM) today supports C and python plugins for talking 
to external storage array using SMI-S as well as native interfaces 
(eg: netapp plugin )
Plan is to grow the SMI-S interface as needed over time and add more 
vendor specific plugins for exploiting features not possible via 
SMI-S or have better alternatives than using SMI-S.
For eg: Many of the copy offload features require to use vendor 
specific commands, which justifies the need for a vendor specific 
plugin.



2) Goals:

2a) Ability to plugin external storage array into oVirt/VDSM 
virtualization stack, in a vendor neutral way.


2b) Ability to list features/capabilities and other statistical 
info of the array


2c) Ability to utilize the storage array offload capabilities 
from oVirt/VDSM.



3) Details:

LSM will sit as a new repository engine in VDSM.
VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192

Current plan is to have LSM co-exist with VDSM on the virtualization 
nodes.


Does that mean LSM will be a different daemon process than VDSM?  
Also, how about the vendor's plugin, another process in the nodes?


Please see the LSM home page on sourceforge.net for how LSM works. It already 
has lsmd (a daemon) which invokes the appropriate plugin based on the 
URI prefix.
Vendor plugins are supported in LSM as .py modules, invoked based on the 
vendor-specific URI prefix. See the 
netapp vendor plugin .py in the LSM source.






*Note : 'storage' used below is generic. It can be a file/nfs-export 
for NAS targets and LUN/logical-drive for SAN targets.


VDSM can use LSM and do the following...
- Provision storage
- Consume storage

3.1) Provisioning Storage using LSM

Typically this will be done by a Storage administrator.

oVirt/VDSM should provide storage admin the
- ability to list the different storage arrays along with their 
types (NAS/SAN), capabilities, free/used space.
- ability to provision storage using any of the array 
capabilities (eg: thin provisioned lun or new NFS export )
- ability to manage the provisioned storage (eg: resize/delete 
storage)


Once the storage is provisioned by the storage admin, VDSM will have 
to refresh the host(s) for them to be able to see the newly 
provisioned storage.


3.1.1) Potential flows:

Mgmt -> vdsm -> lsm: create LUN + LUN Mapping / Zoning / whatever is 
needed to make LUN available to list of hosts passed by mgmt

Mgmt -> vdsm: getDeviceList (refreshes host and gets list of devices)
 Repeat above for all relevant hosts (depending on list passed 
earlier, mostly relevant when extending an existing VG)

Mgmt -> use LUN in normal flows.


3.1.2) How oVirt Engine will know which LSM to use ?

Normally the way this works today is that user can choose the host to 
use (default today is SPM), however there are a few flows where mgmt 
will know which host to use:
1. extend storage domain (add LUN to existing VG) - Use SPM and make 
sure *all* hosts that need access to this SD can see the new LUN
2. attach new LUN to a VM which is pinned to a specific host - use 
this host
3. attach new LUN to a VM which is not pinned - use a host from the 
cluster the VM belongs to and make sure all nodes in cluster can see 
the new LUN


So does this model depend on the work of removing the storage pool?


I am not sure and want the experts to comment here. I am not very clear 
yet on how things will work once SPM is gone. Here it is assumed SPM is 
present.






Flows for which there is no clear

Re: [vdsm] Agenda for tomorrow's call

2012-06-18 Thread Deepak C Shetty

On 06/17/2012 11:34 PM, Dan Kenigsberg wrote:

Hi!

tomorrow I would like to discuss:

- the abysmal review condition of the rest api patches

- vdsm status for ovirt-3.1
   I know networking requires a heavy cherry-pick from upstream. There
   is probably more.
   Everybody invited to care for vdsm bugs that blocks Bug 822145 -
   Tracker: oVirt 3.1 release.

- plenty pep8 patches applied, but there is plenty more.

- Patches with pending verification. I see 11 of those now
   
http://gerrit.ovirt.org/#/q/status:open+project:vdsm+verified%253D0+codereview%253E%253D%252B2+-codereview%253C%253D-1,n,z
   Please do not send your patches out to the cold and desert them there.
   Pet them, nag folks to review and verify them, and rebase (only!) when
   required.

- Your issue comes here (or above, if it's more urgent).

Regards,
Dan.


Hello Dan,
 The India dial in ...
India Dial-In #: 000-800-650-1533

never works.. so I am unable to connect to this call from home.
I cannot use the other India number as that is not supported for my 
telecom carrier.


Who can help in resolving this issue? The above number always results in 
an 'engaged' tone.

It never asks for the conference id.

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [virt-node] VDSM as a general purpose virt host manager

2012-06-18 Thread Deepak C Shetty

On 06/18/2012 08:32 PM, Saggi Mizrahi wrote:

I would like to put on the table for discussion the growing need for a way
to more easily reuse the functionality of VDSM in order to service projects
other than Ovirt-Engine.

Originally VDSM was created as a proprietary agent for the sole purpose of
serving the then proprietary version of what is known as ovirt-engine. Red Hat,
after acquiring the technology, pressed on with its commitment to open source
ideals and released the code. But just releasing code into the wild doesn't
build a community or make a project successful. Furthermore, when building
open source software you should aspire to build reusable components instead of
monolithic stacks.



Can you list the issues that block tools (other than ovirt-engine) from using 
VDSM?

That will help provide more clarity on the scope of the work described here.

I understand the lack of a REST API, which is where Adam's work comes in. 
With REST API support in vdsm, other tools can integrate with 
VDSM and exploit it. What else? How does the current API layer 
design/implementation inhibit tools other than ovirt-engine from using VDSM?



We would like to expose a stable, documented, well supported API. This gives
us a chance to rethink the VDSM API from the ground up. There is already work
in progress of making the internal logic of VDSM separate enough from the API
layer so we could continue feature development and bug fixing while designing
the API of the future.

In order to achieve this though we need to do several things:
1. Declare API supportability guidelines
2. Decide on an API transport (e.g. REST, ZMQ, AMQP)
3. Make the API easily consumable (e.g. proper docs, example code, extending
   the API, etc)
4. Implement the API itself

All of these are dependent on one another and the permutations are endless.
This is why I think we should try and work on each one separately. All
discussions will be done openly on the mailing list and until the final version
comes out nothing is set in stone.

If you think you have anything to contribute to this process, please do so
either by commenting on the discussions or by sending code/docs/whatever
patches. Once the API solidifies it will be quite difficult to change
fundamental things, so speak now or forever hold your peace. Note that this is
just an introductory email. There will be a quick follow up email to kick start
the discussions.
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [Users] How to configure sharedFS ?

2012-06-17 Thread Deepak C Shetty

On 06/17/2012 06:20 PM, Deepak C Shetty wrote:


Hello,
Got more questions on this, now that I am re-visiting this.
The last time I tried using SHAREDFS, I started from the 
createStorageDomain verb, and it works fine.


But now we have the connectStorageServer verb... which I believe is the 
new way of doing things?
If I start from the connectStorageServer verb to mount using SHAREDFS (
which goes via the PosixFs... MountConnection flow), that won't help me 
entirely here, right?
Because it only mounts based on the dict sent, but does not do 
anything with the image and metadata stuff (which the createStorageDomain 
flow did).


I am wondering if it's too early to start using connectStorageServer? 
If not, how can I re-write the above vdsm standalone example using 
connectStorageServer instead of the createStorageDomain flow?


'Guess I got confused.. the standalone example does use 
connectStorageServer followed by createStorageDomain.

Scratch the question .. my bad..

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] A Tool for PEP 8 Patches to Find Code Logic Changes

2012-06-11 Thread Deepak C Shetty

On 06/11/2012 02:06 PM, Ewoud Kohl van Wijngaarden wrote:

On Sun, Jun 10, 2012 at 11:15:48AM +0300, Dan Kenigsberg wrote:

On Thu, Jun 07, 2012 at 11:13:14PM +0800, Shu Ming wrote:

On 2012-6-7 21:26, Adam Litke wrote:
Yes, I agree with you.  Also, we should merge this tool into vdsm as
a helper for PEP8 clean work.

Thanks, Zhou Zheng! I hope this expedites the pep8 conversion process.

As the tool is not vdsm-specific, I'd rather see it in pypi.python.org
than in vdsm.

+1 on pypi rather than vdsm.

+1 from me too :)

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] A Tool for PEP 8 Patches to Find Code Logic Changes

2012-06-07 Thread Deepak C Shetty
I haven't used the tool yet, but saw your mail with the examples.
I think it's a very nice tool and very helpful. Why don't you submit this
tool to the Python project itself?
I think it deserves it.

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


[vdsm] pep8 questions

2012-06-05 Thread Deepak C Shetty

Hi,
I was looking at resolving pep8 issues in 
vdsm/storage/blockVolume.py. Haven't been able to resolve the below.. 
Pointers appreciated.


vdsm/storage/blockVolume.py:99:55: E225 missing whitespace around operator
vdsm/storage/blockVolume.py:148:28: E201 whitespace after '{'
vdsm/storage/blockVolume.py:207:28: E701 multiple statements on one line 
(colon)



line 99:  cls.log.warn("Could not get size for vol %s/%s using optimized
Googling, I found some links indicating this pep8 warning is incorrect.

line 148: cls.__putMetadata({ "NONE": "#" * (sd.METASIZE-10) }, metaid)
It gives some other error if I remove the whitespace after '{'

lines 206 & 207:
raise se.VolumeCannotGetParent("blockVolume can't get parent %s for
                               volume %s: %s" % (srcVolUUID, volUUID, str(e)))
I split this line to overcome the > 80 error, but I am unable to decipher 
what this error means ?
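
For the line 206/207 case, one way that usually clears both the long-line 
complaint and the E701 (pep8 ends up mis-reading the colon once the string 
literal is broken across lines) is to keep the literal whole and break 
inside the call's parentheses instead; a sketch using the names from the 
snippet above:

# Keep the string literal on one line; the continuation inside the
# parentheses is purely syntactic, so no backslash or split string is needed.
raise se.VolumeCannotGetParent(
    "blockVolume can't get parent %s for volume %s: %s" %
    (srcVolUUID, volUUID, str(e)))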


thanx,
deepak

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Agenda for today's call

2012-06-04 Thread Deepak C Shetty

On 06/04/2012 03:28 PM, Dan Kenigsberg wrote:

Hi All,

I have fewer talk issues for today, please suggest others, or else the
call would be short and to the point!


- reviewers/verifiers are still missing for pep8 patches.
   A branch was created, but not much action has taken place on it
   
http://gerrit.ovirt.org/#/q/status:open+project:vdsm+branch:master+topic:pep8cleaning,n,z

- Upcoming oVirt-3.1 release: version bump to 4.9.7? to 4.10?

- Vdsm/MOM integration: could we move MOM to gerrit.ovirt.org?


I would like to propose...

VDSM - libstoragemgmt integration.

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Fwd: RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-04 Thread Deepak C Shetty
(For some reason I never received Adam's note though I am subscribed to all 
the 3 lists Cc'ed here, strange!
 Replying from the mail forwarded to me by my colleague; please see my 
responses inline below. Thanks. )





-- Forwarded message --
From: *Adam Litke* a...@us.ibm.com
Date: Thu, May 31, 2012 at 7:31 PM
Subject: Re: [vdsm] RFC: Writeup on VDSM-libstoragemgmt integration
To: Deepak C Shetty deepa...@linux.vnet.ibm.com
Cc: libstoragemgmt-de...@lists.sourceforge.net, engine-de...@ovirt.org, VDSM Project Development vdsm-devel@lists.fedorahosted.org



On Wed, May 30, 2012 at 03:08:46PM +0530, Deepak C Shetty wrote:
 Hello All,

 I have a draft write-up on the VDSM-libstoragemgmt integration.
 I wanted to run this thru' the mailing list(s) to help tune and
 crystallize it, before putting it on the ovirt wiki.
 I have run this once thru Ayal and Tony, so have some of their
 comments incorporated.

 I still have few doubts/questions, which I have posted below with
 lines ending with '?'

 Comments / Suggestions are welcome & appreciated.

 thanx,
 deepak

 [Ccing engine-devel and libstoragemgmt lists as this stuff is
 relevant to them too]

 
--


 1) Background:

 VDSM provides high level API for node virtualization management. It
 acts in response to the requests sent by oVirt Engine, which uses
 VDSM to do all node virtualization related tasks, including but not
 limited to storage management.

 libstoragemgmt aims to provide vendor agnostic API for managing
 external storage array. It should help system administrators
 utilizing open source solutions have a way to programmatically
 manage their storage hardware in a vendor neutral way. It also aims
 to facilitate management automation, ease of use and take advantage
 of storage vendor supported features which improve storage
 performance and space utilization.

 Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/

 libstoragemgmt (LSM) today supports C and python plugins for talking
 to external storage array using SMI-S as well as native interfaces
 (eg: netapp plugin )
 Plan is to grow the SMI-S interface as needed over time and add more
 vendor specific plugins for exploiting features not possible via
 SMI-S or have better alternatives than using SMI-S.
 For eg: Many of the copy offload features require to use vendor
 specific commands, which justifies the need for a vendor specific
 plugin.


 2) Goals:

 2a) Ability to plugin external storage array into oVirt/VDSM
 virtualization stack, in a vendor neutral way.

 2b) Ability to list features/capabilities and other statistical
 info of the array

 2c) Ability to utilize the storage array offload capabilities
 from oVirt/VDSM.


 3) Details:

 LSM will sit as a new repository engine in VDSM.
 VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192

 Current plan is to have LSM co-exist with VDSM on the virtualization 
nodes.


 *Note : 'storage' used below is generic. It can be a file/nfs-export
 for NAS targets and LUN/logical-drive for SAN targets.

 VDSM can use LSM and do the following...
 - Provision storage
 - Consume storage

 3.1) Provisioning Storage using LSM

 Typically this will be done by a Storage administrator.

 oVirt/VDSM should provide storage admin the
 - ability to list the different storage arrays along with their
 types (NAS/SAN), capabilities, free/used space.
 - ability to provision storage using any of the array
 capabilities (eg: thin provisioned lun or new NFS export )
 - ability to manage the provisioned storage (eg: resize/delete 
storage)


I guess vdsm will need to model a new type of object (perhaps 
StorageTarget) to
be used for performing the above provisioning operations.  Then, to 
consume the
provisioned storage, we could create a StorageConnectionRef by passing 
in a

StorageTarget object and some additional parameters.  Sound about right?


Sounds right to me, but I am not an expert in the VDSM object model; 
Saggi/Ayal/Dan can provide
more inputs here.  The (proposed) storage array entity in ovirt engine 
can use this vdsm object to

communicate and work with the storage array in doing the provisioning work.

Going ahead with the change to the new Image Repository, I was envisioning 
that LSM, when integrated as
a new repo engine, will exhibit Storage Provisioning as an implicit 
feature/capability; only then will it

be picked up by the StorageTarget, else not.
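
Purely to make the shape of the suggestion concrete, here is a hypothetical 
sketch of what a StorageTarget / StorageConnectionRef pair could look like; 
none of these classes, fields or URIs exist in VDSM today, they are 
illustrative only.

class StorageTarget(object):
    """Hypothetical handle for one array managed through libstoragemgmt."""

    def __init__(self, lsm_uri, array_type, capabilities):
        self.lsm_uri = lsm_uri            # e.g. "smispy://admin@array1"
        self.array_type = array_type      # "NAS" or "SAN"
        self.capabilities = capabilities  # e.g. {"thin_provisioning": True}

    def supports(self, feature):
        return self.capabilities.get(feature, False)


class StorageConnectionRef(object):
    """Hypothetical reference used to consume storage provisioned above."""

    def __init__(self, target, lun_or_export, conn_params):
        self.target = target              # the StorageTarget it came from
        self.lun_or_export = lun_or_export
        self.conn_params = conn_params    # ip/iqn/chap for a LUN, path for NFS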



 Once the storage is provisioned by the storage admin, VDSM will have
 to refresh the host(s) for them to be able to see the newly
 provisioned storage.

How would this refresh affect currently connected storage and running VMs?


I am not too sure

[vdsm] configure / autogen error on F16

2012-05-31 Thread Deepak C Shetty

Hello,
I have a lab machine that does not have internet access.
So when i take the latest vdsm git src, tar it and put it on the lab 
machine and try to run ./configure, i get this...


./configure
configure: error: package version not defined

Note that all the dep packages etc have been resolved, and i first got 
this error when i ran 'make'.

Somehow the PACKAGE_VERSION in ./configure is getting cleared off.


I also see this...

./autogen.sh --prefix=/
aclocal.m4:16: warning: this file was generated for autoconf 2.67.
You have another version of autoconf.  It may work, but is not 
guaranteed to.

If you have problems, you may need to regenerate the build system entirely.
To do so, use the procedure documented by the package, typically 
`autoreconf'.

aclocal.m4:16: warning: this file was generated for autoconf 2.67.
You have another version of autoconf.  It may work, but is not 
guaranteed to.

If you have problems, you may need to regenerate the build system entirely.
To do so, use the procedure documented by the package, typically 
`autoreconf'.


Running autoreconf did not help.

On another working system I compared the autoconf and other auto* 
packages with this system, and they are the same, so I am not sure why I am 
seeing this issue on this particular system only.


thanx,
deepak

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


[vdsm] RFC: Writeup on VDSM-libstoragemgmt integration

2012-05-30 Thread Deepak C Shetty

Hello All,

I have a draft write-up on the VDSM-libstoragemgmt integration.
I wanted to run this thru' the mailing list(s) to help tune and 
crystallize it, before putting it on the ovirt wiki.
I have run this once thru Ayal and Tony, so have some of their comments 
incorporated.


I still have few doubts/questions, which I have posted below with lines 
ending with '?'


Comments / Suggestions are welcome & appreciated.

thanx,
deepak

[Ccing engine-devel and libstoragemgmt lists as this stuff is relevant 
to them too]


--

1) Background:

VDSM provides high level API for node virtualization management. It acts 
in response to the requests sent by oVirt Engine, which uses VDSM to do 
all node virtualization related tasks, including but not limited to 
storage management.


libstoragemgmt aims to provide vendor agnostic API for managing external 
storage array. It should help system administrators utilizing open 
source solutions have a way to programmatically manage their storage 
hardware in a vendor neutral way. It also aims to facilitate management 
automation, ease of use and take advantage of storage vendor supported 
features which improve storage performance and space utilization.


Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/

libstoragemgmt (LSM) today supports C and python plugins for talking to 
external storage array using SMI-S as well as native interfaces (eg: 
netapp plugin )
The plan is to grow the SMI-S interface as needed over time and add more 
vendor-specific plugins for exploiting features not possible via SMI-S, 
or where there are better alternatives to SMI-S.
For example, many of the copy offload features require vendor-specific 
commands, which justifies the need for a vendor-specific plugin.
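
For a feel of what driving LSM from Python looks like, here is a small 
sketch against the libstoragemgmt Python binding as it exists today (the 
2012-era binding differed); sim:// is the bundled simulator plugin, and a 
real array would use its vendor or SMI-S URI plus credentials.

import lsm

# Connect to the simulator plugin; a real array would use something like
# "smispy://admin@array1" (plus the password argument).
client = lsm.Client("sim://")

pools = client.pools()      # storage pools the array exposes
volumes = client.volumes()  # existing volumes/LUNs
print("pools: %d, volumes: %d" % (len(pools), len(volumes)))

client.close()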



2) Goals:

2a) Ability to plugin external storage array into oVirt/VDSM 
virtualization stack, in a vendor neutral way.


2b) Ability to list features/capabilities and other statistical 
info of the array


2c) Ability to utilize the storage array offload capabilities from 
oVirt/VDSM.



3) Details:

LSM will sit as a new repository engine in VDSM.
VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192

Current plan is to have LSM co-exist with VDSM on the virtualization nodes.

*Note : 'storage' used below is generic. It can be a file/nfs-export for 
NAS targets and LUN/logical-drive for SAN targets.


VDSM can use LSM and do the following...
- Provision storage
- Consume storage

3.1) Provisioning Storage using LSM

Typically this will be done by a Storage administrator.

oVirt/VDSM should provide storage admin the
- ability to list the different storage arrays along with their 
types (NAS/SAN), capabilities, free/used space.
- ability to provision storage using any of the array capabilities 
(eg: thin provisioned lun or new NFS export )

- ability to manage the provisioned storage (eg: resize/delete storage)

Once the storage is provisioned by the storage admin, VDSM will have to 
refresh the host(s) for them to be able to see the newly provisioned 
storage.


3.1.1) Potential flows:

Mgmt -> vdsm -> lsm: create LUN + LUN Mapping / Zoning / whatever is 
needed to make LUN available to list of hosts passed by mgmt

Mgmt -> vdsm: getDeviceList (refreshes host and gets list of devices)
 Repeat above for all relevant hosts (depending on list passed earlier, 
mostly relevant when extending an existing VG)

Mgmt -> use LUN in normal flows.
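
The getDeviceList step above can already be exercised against a host with 
vdscli; a minimal sketch follows (getDeviceList is an existing vdsm verb; 
the ISCSI_DOMAIN constant and the 'devList'/'GUID'/'capacity' keys are 
taken from vdsm's storage code as I read it, so treat them as assumptions 
to verify).

import sys

sys.path.append('/usr/share/vdsm')
import vdscli
from storage.sd import ISCSI_DOMAIN

s = vdscli.connect()

# Rescans the host and returns the block devices it can now see; mgmt
# would repeat this on every host that must see the newly provisioned LUN.
res = s.getDeviceList(ISCSI_DOMAIN)
if res['status']['code']:
    raise Exception(res['status']['message'])

for dev in res['devList']:
    print("%s capacity=%s" % (dev['GUID'], dev['capacity']))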


3.1.2) How oVirt Engine will know which LSM to use ?

Normally the way this works today is that user can choose the host to 
use (default today is SPM), however there are a few flows where mgmt 
will know which host to use:
1. extend storage domain (add LUN to existing VG) - Use SPM and make 
sure *all* hosts that need access to this SD can see the new LUN

2. attach new LUN to a VM which is pinned to a specific host - use this host
3. attach new LUN to a VM which is not pinned - use a host from the 
cluster the VM belongs to and make sure all nodes in cluster can see the 
new LUN


Flows for which there is no clear candidate (Maybe we can use the SPM 
host itself which is the default ?)

1. create a new disk without attaching it to any VM
2. create a LUN for a new storage domain


3.2) Consuming storage using LSM

Typically this will be done by a virtualization administrator

oVirt/VDSM should allow virtualization admin to
- Create a new storage domain using the storage on the array.
- Be able to specify whether VDSM should use the storage offload 
capability (default) or override it to use its own internal logic.


4) VDSM potential changes:

4.1) How to represent a VM disk: 1 LUN == 1 VM disk or 1 LV == 1 VM disk? 
Which brings another question... 1 array == 1 storage domain, OR 1 
LUN/nfs-export on the array == 1 storage domain?


Pros & Cons of each...

1 array == 1 storage domain
- 

Re: [vdsm] [Users] glusterfs and ovirt

2012-05-18 Thread Deepak C Shetty

On 05/17/2012 11:05 PM, Itamar Heim wrote:

On 05/17/2012 06:55 PM, Bharata B Rao wrote:

On Wed, May 16, 2012 at 3:29 PM, Itamar Heimih...@redhat.com  wrote:

On 05/15/2012 07:35 PM, Andrei Vakhnin wrote:


Yair

Thanks for an update. Can I have KVM hypervisors also function as 
storage
nodes for glusterfs? What is a release date for glusterfs support? 
We're

looking for a production deployment in June. Thanks



current status is
1. patches for provisioning gluster clusters and volumes via ovirt 
are in

review, trying to cover this feature set [1].
I'm not sure if all of them will make the ovirt 3.1 version which is 
slated

to branch for stabilization June 1st, but i think enough is there.
so i'd start trying current upstream version to help find issues 
blocking
you, and following on them during june as we stabilize ovirt 3.1 for 
release

(planned for end of june).

2. you should be able to use same hosts for both gluster and virt, 
but there

is no special logic/handling for this yet (i.e., trying and providing
feedback would help improve this mode).
I would suggest start from separate clusters though first, and only 
later

trying the joint mode.

3. creating a storage domain on top of gluster:
- expose NFS on top of it, and consume as a normal nfs storage domain
- use posixfs storage domain with gluster mount semantics
- future: probably native gluster storage domain, up to native
  integration with qemu


I am looking at GlusterFS integration with QEMU which involves adding
GlusterFS as block backend in QEMU. This will involve QEMU talking to
gluster directly via libglusterfs bypassing FUSE. I could specify a
volume file and the VM image directly on QEMU command line to boot
from the VM image that resides on a gluster volume.

Eg: qemu -drive file=client.vol:/Fedora.img,format=gluster

In this example, Fedora.img is being served by gluster and client.vol
would have client-side translators specified.

I am not sure if this use case would be served if GlusterFS is
integrated as posixfs storage domain in VDSM. Posixfs would involve
normal FUSE mount and QEMU would be required to work with images from
FUSE mount path ?

With QEMU supporting GlusterFS backend natively, further optimizations
are possible in case of gluster volume being local to the host node.
In this case, one could provide QEMU with a simple volume file that
would not contain client or server xlators, but instead just the posix
xlator. This would lead to most optimal IO path that bypasses RPC
calls.

So do you think, this use case (QEMU supporting GlusterFS backend
natively and using volume file to specify the needed translators)
warrants a specialized storage domain type for GlusterFS in VDSM ?


I'm not sure if a special storage domain, or a PosixFS based domain 
with enhanced capabilities.

Ayal?


Related Question:
With QEMU using the GlusterFS backend natively (as described above), it 
also means that
it needs additional options/parameters as part of the qemu command line (as given 
above).


How does VDSM today support generating a custom qemu cmdline? I know 
VDSM talks to libvirt,
so is there a framework in VDSM to edit/modify the domxml based on some 
pre-conditions,
and how / where should one hook up to do that modification? I know of the 
libvirt hooks
framework in VDSM, but that was more for temporary/experimental needs, 
or am I completely

wrong here?

Irrespective of whether GlusterFS integrates into VDSM as PosixFS or a 
special storage domain,
it won't address the need to generate a custom qemu cmdline if a 
file/image is served by

GlusterFS. What's the way to address this issue in VDSM?

I am assuming here that a special storage domain (aka repo engine) is only 
for managing the image
repository and image-related operations; it won't help in modifying the qemu 
cmdline being generated.


[Ccing vdsm-devel also]

thanx,
deepak


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] createStorageDomain failure but dir structure is created !

2012-03-04 Thread Deepak C Shetty

On 03/02/2012 11:54 PM, Deepak C Shetty wrote:

On 03/02/2012 11:27 PM, Deepak C Shetty wrote:

Hi,
In my simple experiment, I connected to a SHAREDFS storage server 
and then created a data domain.
But the createStorageDomain failed with code 351, which just says 
"Error creating a storage domain".


How do I find out the real reason behind the failure?

Surprisingly, the domain dir structure does get created, so it looks 
like it worked, but still it gives

failure as the return result, why ?

 Sample code...

#!/usr/bin/python
# GPLv2+

import sys
import uuid
import time

sys.path.append('/usr/share/vdsm')

import vdscli
from storage.sd import SHAREDFS_DOMAIN, DATA_DOMAIN, ISO_DOMAIN
from storage.volume import COW_FORMAT, SPARSE_VOL, LEAF_VOL, BLANK_UUID

spUUID = str(uuid.uuid4())
sdUUID = str(uuid.uuid4())
imgUUID = str(uuid.uuid4())
volUUID = str(uuid.uuid4())

print "spUUID = %s" % spUUID
print "sdUUID = %s" % sdUUID
print "imgUUID = %s" % imgUUID
print "volUUID = %s" % volUUID

gluster_conn = "llm65.in.ibm.com:myvol"

s = vdscli.connect()

masterVersion = 1
hostID = 1

def vdsOK(d):
    print d
    if d['status']['code']:
        raise Exception(str(d))
    return d

def waitTask(s, taskid):
    while vdsOK(s.getTaskStatus(taskid))['taskStatus']['taskState'] != 'finished':
        time.sleep(3)
    vdsOK(s.clearTask(taskid))

vdsOK(s.connectStorageServer(SHAREDFS_DOMAIN, "my gluster mount",
                             [dict(id=1, spec=gluster_conn,
                                   vfs_type="glusterfs", mnt_options="")]))


vdsOK(s.createStorageDomain(SHAREDFS_DOMAIN, sdUUID, "my gluster domain",
                            gluster_conn, DATA_DOMAIN, 0))


 Output...

./dpk-sharedfs-vm.py
spUUID = 852110d5-c3d2-456e-ae75-b72e929e9bae
sdUUID = 1c15bc91-f62b-43c8-b68a-fd2bd3ed18fe
imgUUID = c29100e7-19cd-4a27-adc6-4c35cc5e690c
volUUID = 1d074f24-8bf0-4b68-8a35-40c3f2c33723
{'status': {'message': 'OK', 'code': 0}, 'statuslist': [{'status': 0, 
'id': 1}]}
{'status': {'message': Error creating a storage domain: 
('storageType=6, sdUUID=1c15bc91-f62b-43c8-b68a-fd2bd3ed18fe, 
domainName=my gluster domain, domClass=1, 
typeSpecificArg=llm65.in.ibm.com:myvol domVersion=0',), 'code': 351}}

Traceback (most recent call last):
  File ./dpk-sharedfs-vm.py, line 74, in module
vdsOK(s.createStorageDomain(SHAREDFS_DOMAIN, sdUUID, my gluster 
domain, gluster_conn, DATA_DOMAIN, 0))

  File ./dpk-sharedfs-vm.py, line 62, in vdsOK
raise Exception(str(d))
Exception: {'status': {'message': Error creating a storage domain: 
('storageType=6, sdUUID=1c15bc91-f62b-43c8-b68a-fd2bd3ed18fe, 
domainName=my gluster domain, domClass=1, 
typeSpecificArg=llm65.in.ibm.com:myvol domVersion=0',), 'code': 351}}


 But it did create the dir structure...

]# find /rhev/data-center/mnt/llm65.in.ibm.com\:myvol/
/rhev/data-center/mnt/llm65.in.ibm.com:myvol/
/rhev/data-center/mnt/llm65.in.ibm.com:myvol/1c15bc91-f62b-43c8-b68a-fd2bd3ed18fe 

/rhev/data-center/mnt/llm65.in.ibm.com:myvol/1c15bc91-f62b-43c8-b68a-fd2bd3ed18fe/dom_md 

/rhev/data-center/mnt/llm65.in.ibm.com:myvol/1c15bc91-f62b-43c8-b68a-fd2bd3ed18fe/dom_md/metadata 

/rhev/data-center/mnt/llm65.in.ibm.com:myvol/1c15bc91-f62b-43c8-b68a-fd2bd3ed18fe/dom_md/leases 

/rhev/data-center/mnt/llm65.in.ibm.com:myvol/1c15bc91-f62b-43c8-b68a-fd2bd3ed18fe/dom_md/outbox 

/rhev/data-center/mnt/llm65.in.ibm.com:myvol/1c15bc91-f62b-43c8-b68a-fd2bd3ed18fe/dom_md/inbox 

/rhev/data-center/mnt/llm65.in.ibm.com:myvol/1c15bc91-f62b-43c8-b68a-fd2bd3ed18fe/dom_md/ids 

/rhev/data-center/mnt/llm65.in.ibm.com:myvol/1c15bc91-f62b-43c8-b68a-fd2bd3ed18fe/images 



# mount | grep gluster
llm65.in.ibm.com:myvol on 
/rhev/data-center/mnt/llm65.in.ibm.com:myvol type fuse.glusterfs 
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072) 




Attaching the vdsm.log

Thread-46::INFO::2012-03-03 
04:49:16,092::nfsSD::64::Storage.StorageDomain::(create) 
sdUUID=1c15bc91-f62b-43c8-b68a-fd2bd3ed18fe domainName=my gluster 
domain remotePath=llm65.in.ibm.com:myvol domClass=1
Thread-46::DEBUG::2012-03-03 
04:49:16,111::persistentDict::175::Storage.PersistentDict::(__init__) 
Created a persistant dict with FileMetadataRW backend
Thread-46::DEBUG::2012-03-03 
04:49:16,113::persistentDict::216::Storage.PersistentDict::(refresh) 
read lines (FileMetadataRW)=[]
Thread-46::WARNING::2012-03-03 
04:49:16,113::persistentDict::238::Storage.PersistentDict::(refresh) 
data has no embedded checksum - trust it as it is
Thread-46::DEBUG::2012-03-03 
04:49:16,113::persistentDict::152::Storage.PersistentDict::(transaction) 
Starting transaction
Thread-46::DEBUG::2012-03-03 
04:49:16,114::persistentDict::158::Storage.PersistentDict::(transaction) 
Flushing changes
Thread-46::DEBUG::2012-03-03 
04:49:16,114::persistentDict::277::Storage.PersistentDict::(flush) 
about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=my 
gluster domain', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 
'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 
'POOL_UUID=', 'REMOTE_PATH=llm65.in.ibm.com:myvol', 'ROLE=Regular

Re: [vdsm] Remove and Add host does not work

2012-02-28 Thread Deepak C Shetty

On 02/28/2012 06:46 PM, Itamar Heim wrote:

On 02/28/2012 02:48 PM, Deepak C Shetty wrote:

Hi,
I had a host managed via OE, completely working fine. Was even able to
create and run VMs off it.
I tried removing the host from the OE (put host into maint. mode and
then remove) and when i re-discover
the same host, OE just keeps seeing it as Non-responsive.


what do you mean by re-discover?



I just meant that i removed the host and re-added it by selecting New 
on the Hosts tab

and putting the IP and hostname.



I am able to ssh into the host and see that none of the vdsm processes
have been started.
I tried doing Confirm host has been rebooted on the OE, did not help.
I tried putting
host into maint. mode and re-activating the host, just doesn't help


did you do any change to the host?
just removing it from engine shouldn't cause vdsm to know/care and 
should work just like before.




Nothing changed on the host. In fact, when I removed and added the host back,
it says everything is installed so it does nothing but reboot the host; 
post reboot,
vdsm does not start automatically and the host status is non-responsive 
on OE




I waited for ~45 mins thinking OE might connect to the host, start vdsm
and get me the Up status
but it failed.

Do i need to manually start vdsm in such a scenario on the host ?
Are there ways or methods to have OE forcibly start vdsm on the host ?

thanx,
deepak

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel





___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Remove and Add host does not work

2012-02-28 Thread Deepak C Shetty

On 02/28/2012 06:57 PM, Itamar Heim wrote:

On 02/28/2012 03:22 PM, Deepak C Shetty wrote:

On 02/28/2012 06:46 PM, Itamar Heim wrote:

On 02/28/2012 02:48 PM, Deepak C Shetty wrote:

Hi,
I had a host managed via OE, completely working fine. Was even able to
create and run VMs off it.
I tried removing the host from the OE (put host into maint. mode and
then remove) and when i re-discover
the same host, OE just keeps seeing it as Non-responsive.


what do you mean by re-discover?



I just meant that i removed the host and re-added it by selecting New
on the Hosts tab
and putting the IP and hostname.



I am able to ssh into the host and see that none of the vdsm processes
have been started.
I tried doing Confirm host has been rebooted on the OE, did not 
help.

I tried putting
host into maint. mode and re-activating the host, just doesn't help


did you do any change to the host?
just removing it from engine shouldn't cause vdsm to know/care and
should work just like before.



Nothing changed on the host. In fact when i removed and added the 
host back

it says everything is installed so does nothing but reboots the host,
post reboot
vdsm does not start automatically and host status is non-responsive
on OE


if vdsm does not start, OE is correct...
does vdsm try to start and fails (and if so, log excerpt?)


Right, but I don't see any traces of vdsm trying to start. Nothing in the 
vdsm.log
that's relevant to this. I just did chkconfig --list and don't see vdsmd 
in that; could

that be the reason?

On the host I manually did `service vdsmd restart` and then everything works
fine.





___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Remove and Add host does not work

2012-02-28 Thread Deepak C Shetty

On 02/28/2012 07:11 PM, Itamar Heim wrote:

On 02/28/2012 03:31 PM, Deepak C Shetty wrote:

On 02/28/2012 06:57 PM, Itamar Heim wrote:

On 02/28/2012 03:22 PM, Deepak C Shetty wrote:

On 02/28/2012 06:46 PM, Itamar Heim wrote:

On 02/28/2012 02:48 PM, Deepak C Shetty wrote:

Hi,
I had a host managed via OE, completely working fine. Was even 
able to

create and run VMs off it.
I tried removing the host from the OE (put host into maint. mode and
then remove) and when i re-discover
the same host, OE just keeps seeing it as Non-responsive.


what do you mean by re-discover?



I just meant that i removed the host and re-added it by selecting 
New

on the Hosts tab
and putting the IP and hostname.



I am able to ssh into the host and see that none of the vdsm 
processes

have been started.
I tried doing Confirm host has been rebooted on the OE, did not
help.
I tried putting
host into maint. mode and re-activating the host, just doesn't help


did you do any change to the host?
just removing it from engine shouldn't cause vdsm to know/care and
should work just like before.



Nothing changed on the host. In fact when i removed and added the
host back
it says everything is installed so does nothing but reboots the host,
post reboot
vdsm does not start automatically and host status is 
non-responsive

on OE


if vdsm does not start, OE is correct...
does vdsm try to start and fails (and if so, log excerpt?)


Right, but I dont see any traces of vdsm trying to start. Nothing in the
vdsm.log
thats relevant to this. I just did chkconfig --list and don't see vdsmd
in that, could
that be the reason.

On the host I manually did `service vdsmd restart` and then everythign
works
fine.







you are basically saying re-installing a host causes vdsm to not 
start by default.
reproducing this again to make sure and opening a bug seems the right 
course.
trying to trace the install flow to provide root cause or even a patch 
would help more to fix this


Are you saying that vdsmd service should start by default post reboot 
and its entry

should be listed as part of chkconfig --list ?


Found this examining the vds bootstrap complete py log

2012-02-28 23:45:32,022 DEBUGdeployUtil 707 _updateFileLine: return: 
True

2012-02-28 23:45:32,022 DEBUGdeployUtil 228 setVdsConf: ended.
2012-02-28 23:45:32,022 DEBUGdeployUtil 103 ['/bin/systemctl', 
'reconfigure', 'vdsmd.service']

2012-02-28 23:45:32,026 DEBUGdeployUtil 107
2012-02-28 23:45:32,026 DEBUGdeployUtil 108 Unknown operation 
reconfigure


2012-02-28 23:45:32,026 DEBUGdeployUtil 103 ['/sbin/reboot']
2012-02-28 23:45:32,325 DEBUGdeployUtil 107



___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [Users] Remove and Add host does not work

2012-02-28 Thread Deepak C Shetty

On 02/28/2012 10:38 PM, Douglas Landgraf wrote:

On 02/28/2012 11:01 AM, Douglas Landgraf wrote:

On 02/28/2012 08:49 AM, Deepak C Shetty wrote:


Are you saying that the vdsmd service should start by default post reboot
and that its entry
should be listed as part of chkconfig --list?


vdsm uses systemd.
http://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet

Try chkconfig vdsmd on; you will see it is just a wrapper around the systemd command.
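
For example, roughly what that forwarding looks like on an F16 host
(illustrative only; the exact forwarding notice may differ):

   chkconfig vdsmd on               # forwarded to: systemctl enable vdsmd.service
   service vdsmd restart            # forwarded to: systemctl restart vdsmd.service
   systemctl is-enabled vdsmd.service
   systemctl status vdsmd.service   # check whether the unit is enabled and running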


Found this while examining the vds bootstrap complete py log:

2012-02-28 23:45:32,022 DEBUG deployUtil 707 _updateFileLine: return: True
2012-02-28 23:45:32,022 DEBUG deployUtil 228 setVdsConf: ended.
2012-02-28 23:45:32,022 DEBUG deployUtil 103 ['/bin/systemctl', 'reconfigure', 'vdsmd.service']
2012-02-28 23:45:32,026 DEBUG deployUtil 107
2012-02-28 23:45:32,026 DEBUG deployUtil 108 Unknown operation reconfigure

2012-02-28 23:45:32,026 DEBUG deployUtil 103 ['/sbin/reboot']
2012-02-28 23:45:32,325 DEBUG deployUtil 107


Which vdsm version are you using? If I am not wrong, I remember seeing
a patch for this report.. /me going to check..




From vdsm.spec:
===
* Sun Feb  5 2012 Dan Kenigsberg dan...@redhat.com - 4.9.3.3-0.fc16
snip
- BZ#773371 call `vdsmd reconfigure` after bootstrap
snip
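
In other words, `reconfigure` looks like an action of the vdsmd service
script itself rather than a systemctl verb, which would explain the
Unknown operation reconfigure failure in the log above when bootstrap
calls /bin/systemctl directly; newer vdsm invokes it properly. On a host
with the fixed packaging, the manual equivalent would presumably be just:

   service vdsmd reconfigure    # action name assumed from the changelog entry above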



Thanks Douglas, I am on a bit older version of vdsm and can't readily update,
as my lab system is not directly connected to the internet. So until I update,
I guess I will have to live with a manual vdsmd restart.


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


[vdsm] Procedure for restarting vdsm

2012-02-21 Thread Deepak C Shetty

Hi,
I haven't found this info elsewhere, hence asking here.

1) What is the procedure to restart vdsm on the host? Do I need to put the
host in maint. mode
and then manually kill the vdsm and supervdsm parent processes, or use pkill
to kill all the numerous vdsm processes?
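
(For context, so far I have simply been doing the following, but I am not
sure it is the sanctioned way:

   service vdsmd restart             # SysV-style wrapper
   systemctl restart vdsmd.service   # equivalent on a systemd host
)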

2) I don't have a git repo on the vdsm host, so if I have made changes
to vdsm (in my development env),
can I just create a tarball and copy it to the host
(/usr/share/vdsm/vdsm) and restart vdsm for
my changes to take effect?
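
(What I had in mind is roughly the following, untested and with
illustrative paths only, assuming the tarball mirrors the layout under
/usr/share/vdsm:

   # on the development machine
   tar czf vdsm-changes.tar.gz -C /path/to/my/vdsm/vdsm .
   scp vdsm-changes.tar.gz root@host:/tmp/

   # on the host
   tar xzf /tmp/vdsm-changes.tar.gz -C /usr/share/vdsm
   service vdsmd restart
)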

Appreciate any help provided.

thanx,
deepak

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


[vdsm] Error creating fc based storage domain

2012-02-21 Thread Deepak C Shetty

Hello,
This is the error I see in vdsm.log.
Any pointers appreciated.

Thread-237764::INFO::2012-02-21 
22:04:52,785::logUtils::37::dispatcher::(wrapper) Run and protect: 
createVG(vgname='2b63c813-6bd5-4488-904c-dba1c8e52822', 
devlist=['3600a0b800017dd4f0e2e4a4ab484'], options=None)
Thread-237764::DEBUG::2012-02-21 
22:04:52,788::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n 
/sbin/lvm pvcreate --config  devices { preferred_names = 
[\\^/dev/mapper/\\] ignore_suspended_devices=1 write_cache_state=0 
disable_after_error_count=3 filter = [ 
\\a%3600a0b800017dcec0f324a4ab9ef|3600a0b800017dcec0f334a4aba67|3600a0b800017dcec0f344a4aba75|3600a0b800017dcec0f354a4aba8d|3600a0b800017dd4f0e2e4a4ab484|3600a0b800017dd4f0e2f4a4ab56c%\\, 
\\r%.*%\\ ] }  global {  locking_type=1  prioritise_write_locks=1  
wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 }  
--metadatasize 128m --metadatacopies 2 --metadataignore y 
/dev/mapper/3600a0b800017dd4f0e2e4a4ab484' (cwd None)
Thread-237764::DEBUG::2012-02-21 
22:04:52,854::lvm::287::Storage.Misc.excCmd::(cmd) FAILED: err =   
Can't open /dev/mapper/3600a0b800017dd4f0e2e4a4ab484 exclusively.  
Mounted filesystem?\n; rc = 5
Thread-237764::DEBUG::2012-02-21 
22:04:52,857::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n 
/sbin/lvm pvs --config  devices { preferred_names = 
[\\^/dev/mapper/\\] ignore_suspended_devices=1 write_cache_state=0 
disable_after_error_count=3 filter = [ 
\\a%3600a0b800017dcec0f324a4ab9ef|3600a0b800017dcec0f334a4aba67|3600a0b800017dcec0f344a4aba75|3600a0b800017dcec0f354a4aba8d|3600a0b800017dd4f0e2e4a4ab484|3600a0b800017dd4f0e2f4a4ab56c%\\, 
\\r%.*%\\ ] }  global {  locking_type=1  prioritise_write_locks=1  
wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 }  -o 
vg_name,pv_name --noheading 
/dev/mapper/3600a0b800017dd4f0e2e4a4ab484' (cwd None)
Thread-237764::DEBUG::2012-02-21 
22:04:52,914::lvm::287::Storage.Misc.excCmd::(cmd) FAILED: err = '  No 
physical volume label read from 
/dev/mapper/3600a0b800017dd4f0e2e4a4ab484\n  Failed to read physical 
volume /dev/mapper/3600a0b800017dd4f0e2e4a4ab484\n'; rc = 5
Thread-237764::ERROR::2012-02-21 
22:04:52,916::task::855::TaskManager.Task::(_setError) 
Task=`17d5267f-fe25-49d4-9a7d-e04f2785d3b7`::Unexpected error

Traceback (most recent call last):
  File /usr/share/vdsm/storage/task.py, line 863, in _run
return fn(*args, **kargs)
  File /usr/share/vdsm/logUtils.py, line 38, in wrapper
res = f(*args, **kwargs)
  File /usr/share/vdsm/storage/hsm.py, line 1556, in createVG
metadataSize=blockSD.VG_METADATASIZE)
  File /usr/share/vdsm/storage/lvm.py, line 796, in createVG
_initpvs(pvs, metadataSize)
  File /usr/share/vdsm/storage/lvm.py, line 634, in _initpvs
found, notFound)
PhysDevInitializationError: Failed to initialize physical device: 
('found: %s notFound: %s', {}, <generator object <genexpr> at 
0x7fd8c031a780>)
Thread-237764::DEBUG::2012-02-21 
22:04:52,942::task::874::TaskManager.Task::(_run) 
Task=`17d5267f-fe25-49d4-9a7d-e04f2785d3b7`::Task._run: 
17d5267f-fe25-49d4-9a7d-e04f2785d3b7 
('2b63c813-6bd5-4488-904c-dba1c8e52822', 
['3600a0b800017dd4f0e2e4a4ab484']) {} failed - stopping task
Thread-237764::DEBUG::2012-02-21 
22:04:52,942::task::1201::TaskManager.Task::(stop) 
Task=`17d5267f-fe25-49d4-9a7d-e04f2785d3b7`::stopping in state preparing 
(force False)
Thread-237764::DEBUG::2012-02-21 
22:04:52,942::task::980::TaskManager.Task::(_decref) 
Task=`17d5267f-fe25-49d4-9a7d-e04f2785d3b7`::ref 1 aborting True
Thread-237764::INFO::2012-02-21 
22:04:52,943::task::1159::TaskManager.Task::(prepare) 
Task=`17d5267f-fe25-49d4-9a7d-e04f2785d3b7`::aborting: Task is aborted: 
'Failed to initialize physical device' - code 601
Thread-237764::DEBUG::2012-02-21 
22:04:52,943::task::1164::TaskManager.Task::(prepare) 
Task=`17d5267f-fe25-49d4-9a7d-e04f2785d3b7`::Prepare: aborted: Failed to 
initialize physical device
Thread-237764::DEBUG::2012-02-21 
22:04:52,943::task::980::TaskManager.Task::(_decref) 
Task=`17d5267f-fe25-49d4-9a7d-e04f2785d3b7`::ref 0 aborting True
Thread-237764::DEBUG::2012-02-21 
22:04:52,943::task::915::TaskManager.Task::(_doAbort) 
Task=`17d5267f-fe25-49d4-9a7d-e04f2785d3b7`::Task._doAbort: force False
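
The part that looks suspicious is pvcreate failing with Can't open
/dev/mapper/3600a0b800017dd4f0e2e4a4ab484 exclusively. Mounted
filesystem? -- i.e. something else seems to be holding the device. A few
checks that might help narrow it down (device name taken from the log
above):

   grep 3600a0b800017dd4f0e2e4a4ab484 /proc/mounts      # mounted somewhere?
   lsblk /dev/mapper/3600a0b800017dd4f0e2e4a4ab484      # partitions/holders on top of it
   dmsetup ls --tree                                    # other dm targets stacked on it
   fuser -vm /dev/mapper/3600a0b800017dd4f0e2e4a4ab484  # processes using it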



___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


[vdsm] Some naive questions on ovirt / vdsm

2012-02-02 Thread Deepak C Shetty

Hi All,

I have a few naive questions to help me understand why things
are not working in my setup.

I have a very basic/simple setup: 1 box hosting ovirt-engine and 1
host running f16,
which is discovered and being managed by ovirt. I don't have the
luxury (at least currently)
of a shared SAN/NAS storage setup, so with help on IRC, I
configured and set up a local DC
with a local cluster and added the host to it. Now I am in the
process of adding a virtual
disk to the VM which I created using ovirt.

1) The virtual disk ovirt helps create is a disk with all zeroes... so
even if I am able to
create the vdisk and attach it to the VM, when I start the VM it obviously
won't boot, as the boot
disk is not found. How do I let ovirt use an existing .img image file
which already has an OS
and root fs installed (I have it from my virt-manager setup)? I tried
creating a new storage
domain of type iso, but I am not sure how to add my .img for ovirt to
see/recognize and allow me to
select it while creating a new VM. Again, this is all local, so I
created /iso/images
and /data/images directories on my host, and tried keeping my .imgs there,
but it does not work.
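
(One possible workaround, sketched here untested and with illustrative
paths/UUIDs only: create an empty disk for the VM from the oVirt UI, then
overwrite its volume file on the host with the existing image -- the
created disk needs to be at least as large as the source:

   qemu-img info /path/to/existing.img      # check the source format first
   qemu-img convert -O raw /path/to/existing.img \
       /data/images/<sd-uuid>/images/<img-uuid>/<vol-uuid>
)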

2) Is thin provisioning supported with just 1 host in the DC/cluster?
From the vdsm_storage pdf
I found on the wiki, there is a diagram which uses 2 hosts to do the
thin provisioning...
so are 2 hosts a must? It also talks about a mailbox LV, where messages are
sent and received between
vdsm and the SPM to do the lvextend operation.. I am not clear on where
this LV physically resides:
on host 1, host 2, or somewhere else? Assuming it's on shared storage, does
that mean I cannot
do thin provisioning with local storage, as is the setup in my case?
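
(A related aside, to the extent I understand it: on block (iSCSI/FC)
storage domains the mailbox is a pair of dedicated LVs on the domain's VG,
so it lives on the shared storage itself and every host sees the same one;
on file-based/local domains there is no such LV. Something like this should
list them on a block domain, with <sd-uuid> standing in for the storage
domain UUID / VG name:

   sudo lvm lvs -o lv_name,lv_size <sd-uuid>   # look for the inbox/outbox LVs
)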

thanx,
deepak



___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel