Re: [vdsm] [Users] glusterfs and ovirt

2012-05-20 Thread Dor Laor

On 05/21/2012 06:15 AM, Bharata B Rao wrote:

On Sun, May 20, 2012 at 4:57 PM, Dor Laor  wrote:

On 05/18/2012 04:28 PM, Deepak C Shetty wrote:


On 05/17/2012 11:05 PM, Itamar Heim wrote:


On 05/17/2012 06:55 PM, Bharata B Rao wrote:

I am looking at GlusterFS integration with QEMU which involves adding
GlusterFS as block backend in QEMU. This will involve QEMU talking to
gluster directly via libglusterfs bypassing FUSE. I could specify a
volume file and the VM image directly on QEMU command line to boot
from the VM image that resides on a gluster volume.

Eg: qemu -drive file=client.vol:/Fedora.img,format=gluster

In this example, Fedora.img is being served by gluster and client.vol
would have client-side translators specified.

I am not sure if this use case would be served if GlusterFS is
integrated as a posixfs storage domain in VDSM. Posixfs would involve a
normal FUSE mount, and QEMU would be required to work with images from
the FUSE mount path?

With QEMU supporting GlusterFS backend natively, further optimizations
are possible in case of gluster volume being local to the host node.
In this case, one could provide QEMU with a simple volume file that
would not contain client or server xlators, but instead just the posix
xlator. This would lead to most optimal IO path that bypasses RPC
calls.
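To make the two volfile shapes concrete, a hypothetical minimal pair of volfiles might look like the following (the volume names, `server1`, and the brick path are placeholders, not taken from the thread):

```
# client.vol -- hypothetical client-side volfile: a protocol/client
# xlator that reaches a remote brick over RPC
volume remote1
  type protocol/client
  option remote-host server1
  option remote-subvolume /data/brick1
end-volume

# local.vol -- hypothetical volfile for the local-volume case: only the
# storage/posix xlator, with no client/server xlators and no RPC
volume posix1
  type storage/posix
  option directory /data/brick1
end-volume
```

The second shape is what the paragraph above describes: when the volume is local to the host, the RPC-bearing xlators can be dropped entirely.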

So do you think, this use case (QEMU supporting GlusterFS backend
natively and using volume file to specify the needed translators)
warrants a specialized storage domain type for GlusterFS in VDSM ?



I'm not sure if a special storage domain, or a PosixFS based domain
with enhanced capabilities.
Ayal?



Related question:
With QEMU using the GlusterFS backend natively (as described above), it
also means that QEMU needs additional options/parameters on its command
line (as given above).



There is no support in qemu for gluster yet, but it will not be far
away.


As I said above, I am working on this. Will post the patches shortly.


/me apologizes for the useless noise; I'm using a new Thunderbird plugin
that collapses quotes, and it made me lose the context.




Regards,
Bharata.


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Agenda for tomorrow's call

2012-05-20 Thread Shu Ming

What is the latest status of the new repository system?
http://gerrit.ovirt.org/#change,192

On 2012-5-21 3:55, Ayal Baron wrote:

Hi all,

I would like to discuss the following on our upcoming call:

- reviewers are missing!
- reviewers/verifiers are missing for pep8 patches. I would like to
   ask a volunteer to aggregate them all in one branch, and get some
   folks from Red Hat QE to run some sanity test on them.
- functional tests: Wenchao Xia's http://gerrit.ovirt.org/#change,4454
   and Adam Litke's http://gerrit.ovirt.org/#change,4451
- Saggi's unicode fixes to betterPopen
- Stories about negative flows hurt by commit 1676396f18cf5c300d87e181
   "Change safelease APIs to match SANLock flow"
- Upcoming oVirt-3.1 release: when to break from master branch?

Does anyone else have more interesting items? We can skip some of my
bullets for a few of yours if we run out of time.

Regards,
Dan.



--
Shu Ming
IBM China Systems and Technology Laboratory




Re: [vdsm] a problem with pepe8

2012-05-20 Thread ShaoHe Feng

On 05/18/2012 09:58 PM, Dan Kenigsberg wrote:

On Fri, May 18, 2012 at 09:47:43PM +0800, ShaoHe Feng wrote:

On 05/18/2012 08:30 PM, Dan Kenigsberg wrote:

On Fri, May 18, 2012 at 06:57:10AM -0500, Adam Litke wrote:

On Fri, May 18, 2012 at 03:56:05PM +0800, ShaoHe Feng wrote:

A comment exceeds 80 characters, and it is a URL link, such as:
# http:///bb///eee/fff/

What can I do? Is this OK?
# "http://bb//
# /eee/fff/"
# (the link is too long to fit on one line; copy it and paste it back
# into one line)

It would be nice if we could annotate the source code to disable certain checks
in places such as this.  Clearly the rigid line length restriction would result
in a less readable comment if followed here.

Agreed. PEP-0008 is here to help us. If the script that enforces it
actually hurts readability in a certain case, we should not use it.

Please fix the other PEP-0008 issues in the file, and try to filter out
the URL warning. If that proves impossible, the module will simply not
be whitelisted.


Yes, pep8 has an "--ignore=errors" option, but if it is given, all
errors of that type will be ignored.

Hey, it is open source. You can hack it to ignore a specific error (and
push that upstream, and wait until it's in Fedora), or you can `grep -v`
its output.

We can grep the output.

Thank you, that is a good way:
grep -v -e "\s*#\s\+http:"  to ignore a URL that is too long
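The filtering idea above can be sketched as a small stand-alone checker rather than a patch to pep8 itself; the regex and the 79-column limit here are assumptions for illustration:

```python
import re

# Comments that consist only of a long URL -- the case that pep8's
# blanket E501 line-length check cannot special-case.
URL_COMMENT = re.compile(r'^\s*#\s*https?://\S+\s*$')

def long_lines(lines, limit=79):
    """Yield (lineno, line) for over-long lines that are not URL comments."""
    for n, line in enumerate(lines, 1):
        line = line.rstrip('\n')
        if len(line) > limit and not URL_COMMENT.match(line):
            yield n, line
```

Running the real pep8 and piping its output through `grep -v`, as suggested above, achieves the same effect without modifying the tool.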






[vdsm] constrain call supervdsm only to vdsm process

2012-05-20 Thread Royce Lv
Hi guys,
    I went through the current code and found that supervdsm (via
getProxy()) is only called from threads of vdsm, yet in the current
scheme supervdsm can also be called from other processes. My plan to
change the supervdsm and vdsm startup process is meant to limit calls
to getProxy to the vdsm process and its threads; that is to say,
subprocesses and other processes would not be allowed to call
supervdsm. I know we are going to move all the "sudo" usage to
supervdsm, so I want to ask whether my plan would constrain that work
or introduce other trouble?
    Thanks for your answer!


Re: [vdsm] [Users] glusterfs and ovirt

2012-05-20 Thread Bharata B Rao
On Sun, May 20, 2012 at 4:57 PM, Dor Laor  wrote:
> On 05/18/2012 04:28 PM, Deepak C Shetty wrote:
>>
>> On 05/17/2012 11:05 PM, Itamar Heim wrote:
>>>
>>> On 05/17/2012 06:55 PM, Bharata B Rao wrote:
 I am looking at GlusterFS integration with QEMU which involves adding
 GlusterFS as block backend in QEMU. This will involve QEMU talking to
 gluster directly via libglusterfs bypassing FUSE. I could specify a
 volume file and the VM image directly on QEMU command line to boot
 from the VM image that resides on a gluster volume.

 Eg: qemu -drive file=client.vol:/Fedora.img,format=gluster

 In this example, Fedora.img is being served by gluster and client.vol
 would have client-side translators specified.

I am not sure if this use case would be served if GlusterFS is
integrated as a posixfs storage domain in VDSM. Posixfs would involve a
normal FUSE mount, and QEMU would be required to work with images from
the FUSE mount path?

 With QEMU supporting GlusterFS backend natively, further optimizations
 are possible in case of gluster volume being local to the host node.
 In this case, one could provide QEMU with a simple volume file that
 would not contain client or server xlators, but instead just the posix
 xlator. This would lead to most optimal IO path that bypasses RPC
 calls.

 So do you think, this use case (QEMU supporting GlusterFS backend
 natively and using volume file to specify the needed translators)
 warrants a specialized storage domain type for GlusterFS in VDSM ?
>>>
>>>
>>> I'm not sure if a special storage domain, or a PosixFS based domain
>>> with enhanced capabilities.
>>> Ayal?
>>
>>
>> Related question:
>> With QEMU using the GlusterFS backend natively (as described above), it
>> also means that QEMU needs additional options/parameters on its command
>> line (as given above).
>
>
> There is no support in qemu for gluster yet, but it will not be far
> away.

As I said above, I am working on this. Will post the patches shortly.

Regards,
Bharata.
-- 
http://bharata.sulekha.com/blog/posts.htm,  http://raobharata.wordpress.com/


[vdsm] Agenda for tomorrow's call

2012-05-20 Thread Ayal Baron
Hi all,

I would like to discuss the following on our upcoming call:

- reviewers are missing!
- reviewers/verifiers are missing for pep8 patches. I would like to
  ask a volunteer to aggregate them all in one branch, and get some
  folks from Red Hat QE to run some sanity test on them.
- functional tests: Wenchao Xia's http://gerrit.ovirt.org/#change,4454
  and Adam Litke's http://gerrit.ovirt.org/#change,4451
- Saggi's unicode fixes to betterPopen
- Stories about negative flows hurt by commit 1676396f18cf5c300d87e181
  "Change safelease APIs to match SANLock flow"
- Upcoming oVirt-3.1 release: when to break from master branch?

Does anyone else have more interesting items? We can skip some of my
bullets for a few of yours if we run out of time.

Regards,
Dan.


Re: [vdsm] Agenda for today's call

2012-05-20 Thread Dan Kenigsberg
On Mon, Apr 23, 2012 at 10:41:30PM +0800, ShaoHe Feng wrote:
> I wonder this call is a phone call or on an IRC channel ?

Phone call. See details in
http://www.ovirt.org/wiki/Meetings#Meeting_Time_and_Place



Re: [vdsm] [Users] glusterfs and ovirt

2012-05-20 Thread Dor Laor

On 05/18/2012 04:28 PM, Deepak C Shetty wrote:

On 05/17/2012 11:05 PM, Itamar Heim wrote:

On 05/17/2012 06:55 PM, Bharata B Rao wrote:

On Wed, May 16, 2012 at 3:29 PM, Itamar Heim wrote:

On 05/15/2012 07:35 PM, Andrei Vakhnin wrote:


Yair

Thanks for an update. Can I have KVM hypervisors also function as
storage
nodes for glusterfs? What is a release date for glusterfs support?
We're
looking for a production deployment in June. Thanks



Current status:

1. Patches for provisioning gluster clusters and volumes via ovirt are
in review, trying to cover this feature set [1].
I'm not sure all of them will make the oVirt 3.1 version, which is
slated to branch for stabilization June 1st, but I think "enough" is
there.
So I'd start trying the current upstream version to help find issues
blocking you, and follow up on them during June as we stabilize oVirt
3.1 for release (planned for end of June).

2. You should be able to use the same hosts for both gluster and virt,
but there is no special logic/handling for this yet (i.e., trying it
and providing feedback would help improve this mode).
I would suggest starting with separate clusters first, though, and only
later trying the joint mode.

3. Creating a storage domain on top of gluster:
- expose NFS on top of it, and consume it as a normal NFS storage domain
- use a posixfs storage domain with gluster mount semantics
- future: probably a native gluster storage domain, up to native
integration with qemu


I am looking at GlusterFS integration with QEMU which involves adding
GlusterFS as block backend in QEMU. This will involve QEMU talking to
gluster directly via libglusterfs bypassing FUSE. I could specify a
volume file and the VM image directly on QEMU command line to boot
from the VM image that resides on a gluster volume.

Eg: qemu -drive file=client.vol:/Fedora.img,format=gluster

In this example, Fedora.img is being served by gluster and client.vol
would have client-side translators specified.

I am not sure if this use case would be served if GlusterFS is
integrated as a posixfs storage domain in VDSM. Posixfs would involve a
normal FUSE mount, and QEMU would be required to work with images from
the FUSE mount path?

With QEMU supporting GlusterFS backend natively, further optimizations
are possible in case of gluster volume being local to the host node.
In this case, one could provide QEMU with a simple volume file that
would not contain client or server xlators, but instead just the posix
xlator. This would lead to most optimal IO path that bypasses RPC
calls.

So do you think, this use case (QEMU supporting GlusterFS backend
natively and using volume file to specify the needed translators)
warrants a specialized storage domain type for GlusterFS in VDSM ?


I'm not sure if a special storage domain, or a PosixFS based domain
with enhanced capabilities.
Ayal?


Related question:
With QEMU using the GlusterFS backend natively (as described above), it
also means that QEMU needs additional options/parameters on its command
line (as given above).


There is no support in qemu for gluster yet, but it will not be far
away.




How does VDSM today support generating a custom qemu cmdline? I know
VDSM talks to libvirt, so is there a framework in VDSM to edit/modify
the domxml based on some pre-conditions, and how/where should one hook
in to do that modification? I know of the libvirt hooks framework in
VDSM, but that was more for temporary/experimental needs, or am I
completely wrong here?

Irrespective of whether GlusterFS integrates into VDSM as PosixFS or as
a special storage domain, that won't address the need to generate a
custom qemu cmdline when a file/image is served by GlusterFS. What is
the way to address this issue in VDSM?

I am assuming here that a special storage domain (aka repo engine) only
manages the image repository and image-related operations, and won't
help in modifying the qemu cmdline being generated.
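For illustration, a hook along the lines discussed above could be sketched as plain domxml surgery with the standard library; a real vdsm hook would receive and return the domxml through vdsm's hooking framework, and the paths and function name here are assumptions:

```python
import xml.dom.minidom as minidom

def rewrite_disk_source(domxml_str, old_path, new_path):
    """Rewrite a disk <source file=...> attribute in a libvirt domxml string.

    Hypothetical sketch: a vdsm before_vm_start hook would do this kind
    of edit to point a disk at a gluster-served image instead of the
    path the storage domain handed out.
    """
    dom = minidom.parseString(domxml_str)
    for disk in dom.getElementsByTagName('disk'):
        for src in disk.getElementsByTagName('source'):
            if src.getAttribute('file') == old_path:
                src.setAttribute('file', new_path)
    return dom.toxml()
```

This only answers the "edit the domxml" half of the question; making libvirt emit a gluster-specific `-drive` option would still need support in libvirt itself.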

[Ccing vdsm-devel also]

thanx,
deepak



