Re: [vdsm] pep8 questions

2012-06-05 Thread Zhou Zheng Sheng

Hi, I think a space is needed after "%s/%s", like the following:

cls.log.warn("Could not get size for vol %s/%s "
 "using optimized methods",
 sdobj.sdUUID, volUUID, exc_info=True)
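The reason the space matters: Python concatenates adjacent string literals at compile time, so the two fragments become one format string. A quick sketch (the format arguments here are placeholders):

```python
# Adjacent string literals are joined at compile time, so a missing
# trailing space glues "vol1" and "using" together in the final message.
without_space = ("Could not get size for vol %s/%s"
                 "using optimized methods")
with_space = ("Could not get size for vol %s/%s "
              "using optimized methods")

print(without_space % ("sd1", "vol1"))  # ...vol sd1/vol1using optimized methods
print(with_space % ("sd1", "vol1"))    # ...vol sd1/vol1 using optimized methods
```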


On 2012-06-06 04:11, Saggi Mizrahi wrote:

 cls.log.warn("Could not get size for vol %s/%s"
  "using optimized methods",
  sdobj.sdUUID, volUUID, exc_info=True)


--
Thanks and best regards!

Zhou Zheng Sheng / 周征晟
E-mail: zhshz...@linux.vnet.ibm.com
Telephone: 86-10-82454397

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [Users] glusterfs and ovirt

2012-06-05 Thread Itamar Heim

On 05/18/2012 04:28 PM, Deepak C Shetty wrote:

On 05/17/2012 11:05 PM, Itamar Heim wrote:

On 05/17/2012 06:55 PM, Bharata B Rao wrote:

On Wed, May 16, 2012 at 3:29 PM, Itamar Heim wrote:

On 05/15/2012 07:35 PM, Andrei Vakhnin wrote:


Yair

Thanks for the update. Can I have KVM hypervisors also function as storage
nodes for glusterfs? What is the release date for glusterfs support? We're
looking for a production deployment in June. Thanks



Current status is:
1. Patches for provisioning gluster clusters and volumes via ovirt are in
review, trying to cover this feature set [1]. I'm not sure if all of them
will make the ovirt 3.1 version, which is slated to branch for stabilization
June 1st, but I think "enough" is there. So I'd start trying the current
upstream version to help find issues blocking you, and follow up on them
during June as we stabilize ovirt 3.1 for release (planned for end of June).

2. You should be able to use the same hosts for both gluster and virt, but
there is no special logic/handling for this yet (i.e., trying it and
providing feedback would help improve this mode). I would suggest starting
with separate clusters first, though, and only trying the joint mode later.

3. creating a storage domain on top of gluster:
- expose NFS on top of it, and consume as a normal nfs storage domain
- use posixfs storage domain with gluster mount semantics
- future: probably native gluster storage domain, up to native
integration with qemu


I am looking at GlusterFS integration with QEMU, which involves adding
GlusterFS as a block backend in QEMU. This will have QEMU talking to
gluster directly via libglusterfs, bypassing FUSE. I could specify a
volume file and the VM image directly on the QEMU command line to boot
from the VM image that resides on a gluster volume.

Eg: qemu -drive file=client.vol:/Fedora.img,format=gluster

In this example, Fedora.img is being served by gluster and client.vol
would have client-side translators specified.

I am not sure if this use case would be served if GlusterFS is
integrated as a posixfs storage domain in VDSM. Posixfs would involve a
normal FUSE mount, and QEMU would be required to work with images from
the FUSE mount path?

With QEMU supporting the GlusterFS backend natively, further optimizations
are possible when the gluster volume is local to the host node. In this
case, one could provide QEMU with a simple volume file that contains
neither client nor server xlators, but just the posix xlator. This would
give the most direct IO path, bypassing RPC calls.

So do you think this use case (QEMU supporting the GlusterFS backend
natively and using a volume file to specify the needed translators)
warrants a specialized storage domain type for GlusterFS in VDSM?


I'm not sure whether this calls for a special storage domain, or a
PosixFS-based domain with enhanced capabilities.
Ayal?


Related question:
With QEMU using the GlusterFS backend natively (as described above), it also
means that it needs additional options/parameters as part of the qemu
command line (as given above).

How does VDSM today support generating a custom qemu command line? I know
VDSM talks to libvirt, so is there a framework in VDSM to edit/modify the
domxml based on some pre-conditions, and how/where should one hook in to do
that modification? I know of the libvirt hooks framework in VDSM, but that
was more for temporary/experimental needs, or am I completely wrong here?


For something vdsm is not aware of yet, you can use vdsm custom hooks
to manipulate the libvirt xml.
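Concretely, such a custom hook is an executable placed under a hook point directory (e.g. /usr/libexec/vdsm/hooks/before_vm_start/) that reads and rewrites the domain XML via vdsm's hooking module. A minimal sketch, with the hooking calls replaced by a stand-in sample document so the transformation itself is visible; the sample XML and the disk-source rewrite are purely illustrative:

```python
# Sketch of the domxml edit a vdsm custom hook performs. In a real hook the
# document would come from hooking.read_domxml() and go back out through
# hooking.write_domxml(domxml); here a sample domxml stands in for both.
import xml.dom.minidom

SAMPLE_DOMXML = """<domain type='kvm'>
  <devices>
    <disk type='file' device='disk'>
      <source file='/rhev/data-center/mnt/image.img'/>
    </disk>
  </devices>
</domain>"""

def rewrite_disk_source(domxml, new_file):
    """Point every file-backed disk at new_file (illustrative edit only)."""
    for disk in domxml.getElementsByTagName('disk'):
        for source in disk.getElementsByTagName('source'):
            if source.hasAttribute('file'):
                source.setAttribute('file', new_file)
    return domxml

dom = xml.dom.minidom.parseString(SAMPLE_DOMXML)
rewrite_disk_source(dom, 'client.vol:/Fedora.img')
print(dom.toxml())
```

A real hook would additionally exit non-zero on failure so vdsm can abort the operation; check vdsm's hooks documentation for the exact contract.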




Irrespective of whether GlusterFS integrates into VDSM as PosixFS or a
special storage domain, that won't address the need to generate a custom
qemu command line when a file/image is served by GlusterFS. What's the way
to address this issue in VDSM?


When vdsm supports this, I expect it will know to pass these.
It won't necessarily be a generic PosixFS at that time.



I am assuming here that a special storage domain (aka repo engine) only
manages the image repository and image-related operations, and won't help
in modifying the qemu command line being generated.


Support by vdsm for specific qemu options (via libvirt) will be done by
either having a special type of storage domain, or some capability
exchange, etc.




[Ccing vdsm-devel also]

thanx,
deepak






Re: [vdsm] pep8 questions

2012-06-05 Thread Saggi Mizrahi
I think this is the correct formatting:

self.__putMetadata({"NONE": "#" * (sd.METASIZE - 10)}, metaid)

cls.log.warn("Could not get size for vol %s/%s"
 "using optimized methods",
 sdobj.sdUUID, volUUID, exc_info=True)

- Original Message -
> From: "Deepak C Shetty" 
> To: "VDSM Project Development" 
> Sent: Tuesday, June 5, 2012 2:19:04 PM
> Subject: [vdsm] pep8 questions
> 
> Hi,
>  I was looking at resolving pep8 issues in
> vdsm/storage/blockVolume.py. Haven't been able to resolve the below..
> Pointers appreciated.
> 
> vdsm/storage/blockVolume.py:99:55: E225 missing whitespace around
> operator
> vdsm/storage/blockVolume.py:148:28: E201 whitespace after '{'
> vdsm/storage/blockVolume.py:207:28: E701 multiple statements on one
> line
> (colon)
> 
> 
> line 99:  cls.log.warn("Could not get size for vol %s/%s using
> optimized
> googling i found some links indicating this pep8 warning is
> incorrect.
> 
> line 148: cls.__putMetadata({ "NONE": "#" * (sd.METASIZE-10) },
> metaid)
> It gives some other error if i remove the whitespace after {
> 
> line 206 & 207:
>  raise se.VolumeCannotGetParent("blockVolume can't get
> parent %s for
>volume %s: %s" % (srcVolUUID, volUUID, str(e)))
> I split this line to overcome the > 80 error, but unable to decipher
> what this error means ?
> 
> thanx,
> deepak
> 


[vdsm] pep8 questions

2012-06-05 Thread Deepak C Shetty

Hi,
I was looking at resolving pep8 issues in
vdsm/storage/blockVolume.py. I haven't been able to resolve the ones below;
pointers appreciated.


vdsm/storage/blockVolume.py:99:55: E225 missing whitespace around operator
vdsm/storage/blockVolume.py:148:28: E201 whitespace after '{'
vdsm/storage/blockVolume.py:207:28: E701 multiple statements on one line (colon)



line 99:  cls.log.warn("Could not get size for vol %s/%s using optimized
Googling, I found some links indicating this pep8 warning is incorrect.

line 148: cls.__putMetadata({ "NONE": "#" * (sd.METASIZE-10) }, metaid)
It gives some other error if I remove the whitespace after {

line 206 & 207:
        raise se.VolumeCannotGetParent("blockVolume can't get parent %s for
            volume %s: %s" % (srcVolUUID, volUUID, str(e)))
I split this line to overcome the >80 error, but I am unable to decipher
what this error means?
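For what it's worth, a sketch of formattings that should satisfy pep8 for the E201 and line-length complaints; everything here is stubbed (sd, se and putMetadata are hypothetical stand-ins for the real blockVolume.py context) so the snippet runs on its own:

```python
# Stand-ins for vdsm's storage-domain module, exception module and the
# name-mangled cls.__putMetadata -- purely illustrative stubs.
class sd:
    METASIZE = 512

class se:
    class VolumeCannotGetParent(Exception):
        pass

def putMetadata(tags, metaid):
    return tags

# E201 fix: no whitespace just inside the braces, spaces around '-'.
tags = putMetadata({"NONE": "#" * (sd.METASIZE - 10)}, "meta-1")

# For the long raise, one way that stays under 80 columns is to break
# inside the parentheses rather than in the middle of the string literal:
srcVolUUID, volUUID, e = "src-uuid", "vol-uuid", OSError("lookup failed")
try:
    raise se.VolumeCannotGetParent(
        "blockVolume can't get parent %s for volume %s: %s" %
        (srcVolUUID, volUUID, str(e)))
except se.VolumeCannotGetParent as exc:
    msg = str(exc)
    print(msg)
```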


thanx,
deepak



Re: [vdsm] spicec + vncviewer query

2012-06-05 Thread Anil Vettathu
Hi David,

You are absolutely right. That was an authentication issue. I set the
ticket for spice and it worked.

It's now working for VNC also. Earlier, I think I made a mistake while
setting the vmticket for VNC.

Thanks a lot.
Anil


On Tue, Jun 5, 2012 at 3:44 PM, David Jaša  wrote:

> Anil Vettathu wrote on Tue, 05. 06. 2012 at 15:36 +0530:
> >
> > Hi,
> >
> > I was able to get the details of the display of both spice and vnc
> > using vdsclient. Now how can I connect to the console using spicec or
> > virtviewer.
> >
> > spicec is failiing with the following log.
> >
> > 1338808977 INFO [32318:32318] Application::main: command line: spicec
> > --host 192.165.210.136 --port 5900 --secure-port 5901 --ca-file
> > ca-cert.pem
> > 1338808977 INFO [32318:32318] init_key_map: using evdev mapping
> > 1338808979 INFO [32318:32318] MultyMonScreen::MultyMonScreen:
> > platform_win: 77594625
> > 1338808979 INFO [32318:32318] GUI::GUI:
> > 1338808979 INFO [32318:32318] ForeignMenu::ForeignMenu: Creating a
> > foreign menu connection /tmp/SpiceForeignMenu-32318.uds
> > 1338808979 INFO [32318:32319] RedPeer::connect_unsecure: Connected to
> > 192.165.210.136 5900
> > 1338808979 INFO [32318:32319] RedPeer::connect_secure: Connected to
> > 192.165.210.136 5901
> > 1338808979 WARN [32318:32319] RedChannel::run: connect failed 7
>
> This indicates authentication failure. Have you set the ticket via
> vdsClient for spice, too?
>
> David
>
> >
> > virt-viewer is failing due to authentication even though i use a
> > password set by vmticket.
> >
> > Please note that the VMs are managed by ovirt
> > Is it mandatory that we need to use ovirt to connect to vm consoles?
> > Can someone guide me?
> >
> > Thanks,
> > Anil
>
> --
>
> David Jaša, RHCE
>
> SPICE QE based in Brno
> GPG Key: 22C33E24
> Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24
>
>
>
>


-- 
http://www.anilv.in


Re: [vdsm] spicec + vncviewer query

2012-06-05 Thread David Jaša
Anil Vettathu wrote on Tue, 05. 06. 2012 at 15:36 +0530:
> 
> Hi,
> 
> I was able to get the details of the display of both spice and vnc
> using vdsclient. Now how can I connect to the console using spicec or
> virtviewer.
> 
> spicec is failiing with the following log.
> 
> 1338808977 INFO [32318:32318] Application::main: command line: spicec
> --host 192.165.210.136 --port 5900 --secure-port 5901 --ca-file
> ca-cert.pem
> 1338808977 INFO [32318:32318] init_key_map: using evdev mapping
> 1338808979 INFO [32318:32318] MultyMonScreen::MultyMonScreen:
> platform_win: 77594625
> 1338808979 INFO [32318:32318] GUI::GUI:
> 1338808979 INFO [32318:32318] ForeignMenu::ForeignMenu: Creating a
> foreign menu connection /tmp/SpiceForeignMenu-32318.uds
> 1338808979 INFO [32318:32319] RedPeer::connect_unsecure: Connected to
> 192.165.210.136 5900
> 1338808979 INFO [32318:32319] RedPeer::connect_secure: Connected to
> 192.165.210.136 5901
> 1338808979 WARN [32318:32319] RedChannel::run: connect failed 7

This indicates authentication failure. Have you set the ticket via
vdsClient for spice, too?

David
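For reference, a hypothetical sketch of what the vdsClient step amounts to over vdsm's XML-RPC API; the setVmTicket verb, its argument order, and port 54321 are assumptions to verify against your vdsm version:

```python
# Hypothetical sketch of setting a console ticket over vdsm's XML-RPC
# endpoint. Method name, argument order and default port are assumptions
# based on the vdsClient usage discussed in this thread.
import xmlrpc.client  # xmlrpclib on the Python 2 of this era

host = "192.165.210.136"                      # host from the spicec log above
server = xmlrpc.client.ServerProxy(
    "https://%s:54321" % host)                # no connection is made yet

vm_id = "00000000-0000-0000-0000-000000000000"  # placeholder VM UUID
ticket, ttl = "s3cret", 120

# The actual call, commented out to keep this sketch side-effect free:
# server.setVmTicket(vm_id, ticket, ttl)
print("would call setVmTicket(%s, ****, %d)" % (vm_id, ttl))
```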

> 
> virt-viewer is failing due to authentication even though i use a
> password set by vmticket.
> 
> Please note that the VMs are managed by ovirt
> Is it mandatory that we need to use ovirt to connect to vm consoles?
> Can someone guide me?
> 
> Thanks,
> Anil

-- 

David Jaša, RHCE

SPICE QE based in Brno
GPG Key: 22C33E24 
Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24





[vdsm] spicec + vncviewer query

2012-06-05 Thread Anil Vettathu
Hi,

I was able to get the details of the display of both spice and vnc using
vdsclient. Now how can I connect to the console using spicec or virtviewer?

spicec is failing with the following log.

1338808977 INFO [32318:32318] Application::main: command line: spicec
--host 192.165.210.136 --port 5900 --secure-port 5901 --ca-file ca-cert.pem
1338808977 INFO [32318:32318] init_key_map: using evdev mapping
1338808979 INFO [32318:32318] MultyMonScreen::MultyMonScreen: platform_win:
77594625
1338808979 INFO [32318:32318] GUI::GUI:
1338808979 INFO [32318:32318] ForeignMenu::ForeignMenu: Creating a foreign
menu connection /tmp/SpiceForeignMenu-32318.uds
1338808979 INFO [32318:32319] RedPeer::connect_unsecure: Connected to
192.165.210.136 5900
1338808979 INFO [32318:32319] RedPeer::connect_secure: Connected to
192.165.210.136 5901
1338808979 WARN [32318:32319] RedChannel::run: connect failed 7

virt-viewer is failing due to authentication even though I use a password
set by vmticket.

Please note that the VMs are managed by ovirt.
Is it mandatory to use ovirt to connect to vm consoles?
Can someone guide me?

Thanks,
Anil


Re: [vdsm] Agenda for today's call

2012-06-05 Thread Itamar Heim

On 06/04/2012 05:35 PM, Dan Kenigsberg wrote:

>  - Upcoming oVirt-3.1 release: version bump to 4.9.7? to 4.10?

Adam and Ayal prefer 4.9.7, and suggest lying only over xmlrpc?



will vdsm move to 5.0 and skip 4.10 later altogether?
