[Openstack] Libguestfs??

2011-11-01 Thread Joshua Harlow
Hi all,

I was wondering if there is a reason that OpenStack isn't using libguestfs more widely.
Is there a technical reason for that, a lack of packages in distributions, or something else?
Just wondering, since it seems to be aiming to be a library that can unify mounting different VM disk images (similar in spirit to libvirt).
It's also made by the same company as libvirt (for better or worse)...

I've seen some branches of code that seem to have incorporated it, but was it ever officially adopted?
Thx,
Josh

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cannot boot from volume with 2 devices

2011-11-01 Thread Scott Moser
On Tue, 1 Nov 2011, Vishvananda Ishaya wrote:

> Sounds like we can work around this pretty easily by sorting the disks before 
> we pass them into the xml template.

The long-term solution here is not to load the kernel and the ramdisk
outside the image, but rather to let grub load them with root=LABEL=
or root=UUID= .
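
For illustration (the values below are placeholders, not from a real instance), that means the kernel command line inside the guest references the root filesystem by UUID rather than by a /dev/vdX name:

$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-<version> root=UUID=<uuid-of-root-filesystem> ro console=ttyS0

so it no longer matters which slot the disk ends up in.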

If you boot one of the full-disk Ubuntu images (-disk1.img) at
https://cloud-images.ubuntu.com/releases/oneiric/release/ or
https://cloud-images.ubuntu.com/server/natty/current/ , then you won't have
the problem.  You'll also be able to 'apt-get update && apt-get
dist-upgrade && reboot' and get a new kernel.  That is not possible with
the hypervisor doing the kernel and ramdisk loading.

This is assuming that in the multiple-disks-attached scenario, the *real*
root disk (the one with the bootloader on it) is found by the BIOS.

Static device names were deprecated several years ago by all Linux
distributions.  Let's move towards using the better solution.

>
> Vish
>
> On Nov 1, 2011, at 9:52 AM, Gaurav Gupta wrote:
>
> > Hi all, I asked a question on Launchpad. but haven't heard back anything 
> > yet. Trying this forum to see if someone has any idea how to resolve this 
> > issue:
> > https://answers.launchpad.net/nova/+question/176938
> >
> > To summarize:
> > --
> >
> > Say I had 2 disks, disk1 and disk2 (represented by 2 volumes). disk1 has 
> > the root-file-system and disk2 has some data. I boot an instances using the 
> > boot-from-volumes extension, and specify the 2 disks such as disk1 should 
> > be attached to /dev/vda and disk2 to /dev/vdb. When the instance is 
> > launched it fails to boot, because it tries to find the root-filesystem on 
> > disk2 instead.
> >
> > The underlying problem is with virsh/libvirt. Boot fails because in the 
> > libvirt.xml file created by Openstack, disk2 (/dev/vdb) is listed before 
> > disk1 (/dev/vda). So, what happens is that the hypervisor attaches disk2 
> > first (since its listed first in the XML). Therefore when these disks are 
> > attached on the guest, disk2 appears as /dev/vda and disk1 as /dev/vdb, 
> > which causes the boot failure. Later the kernel tries to find the root 
> > filesystem on '/dev/vda' (because thats' what is selected as the root) and 
> > it fails for obvious reason. I think it's a virsh bug. It should be smart 
> > about it and attach the devices in the right order. 
> > ___

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cannot boot from volume with 2 devices

2011-11-01 Thread Gaurav Gupta
Opened bug:
https://bugs.launchpad.net/nova/+bug/884984

On Tue, Nov 1, 2011 at 12:49 PM, Vishvananda Ishaya
wrote:

> Sounds like we can work around this pretty easily by sorting the disks
> before we pass them into the xml template.
>
> Vish
>
> On Nov 1, 2011, at 9:52 AM, Gaurav Gupta wrote:
>
> Hi all, I asked a question on Launchpad. but haven't heard back anything
> yet. Trying this forum to see if someone has any idea how to resolve this
> issue:
> https://answers.launchpad.net/nova/+question/176938
>
> To summarize:
> --
>
> Say I had 2 disks, disk1 and disk2 (represented by 2 volumes). disk1 has
> the root-file-system and disk2 has some data. I boot an instances using the
> boot-from-volumes extension, and specify the 2 disks such as disk1 should
> be attached to /dev/vda and disk2 to /dev/vdb. When the instance is
> launched it fails to boot, because it tries to find the root-filesystem on
> disk2 instead.
>
> The underlying problem is with virsh/libvirt. Boot fails because in the
> libvirt.xml file created by Openstack, disk2 (/dev/vdb) is listed before
> disk1 (/dev/vda). So, what happens is that the hypervisor attaches disk2
> first (since its listed first in the XML). Therefore when these disks are
> attached on the guest, disk2 appears as /dev/vda and disk1 as /dev/vdb,
> which causes the boot failure. Later the kernel tries to find the root
> filesystem on '/dev/vda' (because thats' what is selected as the root) and
> it fails for obvious reason. I think it's a virsh bug. It should be smart
> about it and attach the devices in the right order.
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Gflags / conf -> common?

2011-11-01 Thread Akira Yoshiyama
+1
 2011/11/02 2:38 "Brian Lamar" :

> From what I understand, Nova is in the middle of a transition from gflags
> to optparse.
>
> It's difficult to tell exactly what is going on, but the flags file is
> still being read by gflags and then optparse seems to take over from there.
> Regardless, both libraries are still being used and the scenario that
> Joshua bring up is still a concern.
>
> I'm all for switching to `optparse` but it's going to be a heck of a
> transition.
>
> I worry about the the tight coupling that Glance has with `paste` and I
> would caution against Nova coupling with `paste` in a similar fashion.
>
> IMO if the API wants to use `paste.deploy` as a configuration mechanism
> that is great but the entire project should not be configured out of a
> paste config file just because they happen to use INI syntax.
>
> I'd like to treat paste deploy files as code and our configuration files
> as configuration files. (This will be the biggest point of controversy?)
>
> As an example, without thinking too much about it, we could have:
>
> $ cat /etc/nova/nova.conf
>
> [logging]
> driver=nova.log.drivers.SyslogDriver
> syslog_dev=/dev/log
> verbose=true
>
> [nova-network]
> manager=nova.network.quantum.QuantumManager
> vlan_interface=eth1
>
> [nova-api]
> driver=nova.api.drivers.PasteDriver
> config=/etc/nova/api-paste.ini
> pipeline=osapi-with-keystone
>
>
> $ cat /etc/nova/api-paste.ini
>
> ...
>
> [pipeline:osapi]
> pipeline = faultwrap noauth ratelimit serialize extensions osapiapp11
>
> [pipeline:osapi-with-keystone]
> pipeline = faultwrap keystone-auth ratelimit serialize extensions
> osapiapp11
>
> ...
>
>
>
>
> -Original Message-
> From: "Jay Pipes" 
> Sent: Monday, October 31, 2011 5:42pm
> To: "Joshua Harlow" 
> Cc: "openstack" 
> Subject: Re: [Openstack] Gflags / conf -> common?
>
> Hi!
>
> GFlags has now been removed, AFAIK. The flags module has an
> optparse-based emulator for GFlags to ease transition for Nova joining
> the rest of the OpenStack core project implementations' use of
> standard config files/Paste.Deploy.
>
> Cheers,
> -jay
>
> On Mon, Oct 31, 2011 at 5:08 PM, Joshua Harlow 
> wrote:
> > Hi all,
> >
> > I was wondering if there is any plans in essex to standardize either
> using
> > gflags or using configuration files for these types of settings.
> > One of the complaints that I receive a lot with gflags is that by
> including
> > a python file, u automatically inject all of its flags (even if they are
> not
> > used) into gflags (since its global).
> > Thus say u are just using the nova-compute run time, but that itself
> > includes say “flags.py” which itself seems to be a common area for flags
> > that may or may not be used by that runtime. Similarly if a file is
> imported
> > has say 1 method used by the calling code but itself defines 10 flags
> (for
> > its components) then those 10 flags get injected. This makes it very
> > confusing to figure out what should be set (or what could be set).
> >
> > Has there been any thought on fixing this (or making a standard
> > recommendation that subprojects can follow) that would avoid this
> problem?
> > I could imagine fixes being in the code structure itself (having said 1
> > method stated above not be in a file what pulls in other code that
> defines
> > 10 flags) or another type of configuration mechanism?
> > I think this was mentioned at the conference, but not sure what came out
> of
> > that :-)
> >
> > -Josh
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
> >
> >
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Gflags / conf -> common?

2011-11-01 Thread Jay Pipes
On Tue, Nov 1, 2011 at 1:05 PM, Brian Lamar  wrote:
> From what I understand, Nova is in the middle of a transition from gflags to 
> optparse.
>
> It's difficult to tell exactly what is going on, but the flags file is still 
> being read by gflags and then optparse seems to take over from there. 
> Regardless, both libraries are still being used and the scenario that Joshua 
> bring up is still a concern.
>
> I'm all for switching to `optparse` but it's going to be a heck of a 
> transition.
>
> I worry about the the tight coupling that Glance has with `paste` and I would 
> caution against Nova coupling with `paste` in a similar fashion.

Sure, agreed. We've had a task in Glance to remove this coupling for a
while now:

https://bugs.launchpad.net/glance/+bug/815208

Happy to work on it, but it's not the highest priority right now :)

> IMO if the API wants to use `paste.deploy` as a configuration mechanism that 
> is great but the entire project should not be configured out of a paste 
> config file just because they happen to use INI syntax.
>
> I'd like to treat paste deploy files as code and our configuration files as 
> configuration files. (This will be the biggest point of controversy?)
>
> As an example, without thinking too much about it, we could have:
>
> $ cat /etc/nova/nova.conf
>
> [logging]
> driver=nova.log.drivers.SyslogDriver
> syslog_dev=/dev/log
> verbose=true
>
> [nova-network]
> manager=nova.network.quantum.QuantumManager
> vlan_interface=eth1
>
> [nova-api]
> driver=nova.api.drivers.PasteDriver
> config=/etc/nova/api-paste.ini
> pipeline=osapi-with-keystone

Yup, I'd be in favour of the above (and below).

-jay

> $ cat /etc/nova/api-paste.ini
>
> ...
>
> [pipeline:osapi]
> pipeline = faultwrap noauth ratelimit serialize extensions osapiapp11
>
> [pipeline:osapi-with-keystone]
> pipeline = faultwrap keystone-auth ratelimit serialize extensions osapiapp11
>
> ...
>
>
>
>
> -Original Message-
> From: "Jay Pipes" 
> Sent: Monday, October 31, 2011 5:42pm
> To: "Joshua Harlow" 
> Cc: "openstack" 
> Subject: Re: [Openstack] Gflags / conf -> common?
>
> Hi!
>
> GFlags has now been removed, AFAIK. The flags module has an
> optparse-based emulator for GFlags to ease transition for Nova joining
> the rest of the OpenStack core project implementations' use of
> standard config files/Paste.Deploy.
>
> Cheers,
> -jay
>
> On Mon, Oct 31, 2011 at 5:08 PM, Joshua Harlow  wrote:
>> Hi all,
>>
>> I was wondering if there is any plans in essex to standardize either using
>> gflags or using configuration files for these types of settings.
>> One of the complaints that I receive a lot with gflags is that by including
>> a python file, u automatically inject all of its flags (even if they are not
>> used) into gflags (since its global).
>> Thus say u are just using the nova-compute run time, but that itself
>> includes say “flags.py” which itself seems to be a common area for flags
>> that may or may not be used by that runtime. Similarly if a file is imported
>> has say 1 method used by the calling code but itself defines 10 flags (for
>> its components) then those 10 flags get injected. This makes it very
>> confusing to figure out what should be set (or what could be set).
>>
>> Has there been any thought on fixing this (or making a standard
>> recommendation that subprojects can follow) that would avoid this problem?
>> I could imagine fixes being in the code structure itself (having said 1
>> method stated above not be in a file what pulls in other code that defines
>> 10 flags) or another type of configuration mechanism?
>> I think this was mentioned at the conference, but not sure what came out of
>> that :-)
>>
>> -Josh
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to     : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Gflags / conf -> common?

2011-11-01 Thread Vishvananda Ishaya
The only code that is used from gflags is ReadFlagsFromFiles, which recursively 
reads flags from files and converts them into args (removing comments). We 
could rewrite or copy this code and remove the gflags dependency, but if we are 
moving towards a config file instead of a flag file we will be specifying these 
using a different config parser, so it makes sense to leave it for now.
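
Roughly, what that code does is expand --flagfile arguments into plain args before parsing, something like this simplified sketch (not the actual gflags implementation; the helper name is made up):

import os


def expand_flagfiles(args):
    """Recursively expand --flagfile=<path> args, skipping comment lines."""
    expanded = []
    for arg in args:
        if arg.startswith('--flagfile='):
            path = os.path.expanduser(arg.split('=', 1)[1])
            with open(path) as f:
                lines = [line.strip() for line in f
                         if line.strip() and not line.startswith(('#', '//'))]
            # Flag files may themselves contain --flagfile lines.
            expanded.extend(expand_flagfiles(lines))
        else:
            expanded.append(arg)
    return expanded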

Vish

On Nov 1, 2011, at 10:05 AM, Brian Lamar wrote:

> From what I understand, Nova is in the middle of a transition from gflags to 
> optparse.
> 
> It's difficult to tell exactly what is going on, but the flags file is still 
> being read by gflags and then optparse seems to take over from there. 
> Regardless, both libraries are still being used and the scenario that Joshua 
> bring up is still a concern.
> 
> I'm all for switching to `optparse` but it's going to be a heck of a 
> transition.
> 
> I worry about the the tight coupling that Glance has with `paste` and I would 
> caution against Nova coupling with `paste` in a similar fashion.
> 
> IMO if the API wants to use `paste.deploy` as a configuration mechanism that 
> is great but the entire project should not be configured out of a paste 
> config file just because they happen to use INI syntax.
> 
> I'd like to treat paste deploy files as code and our configuration files as 
> configuration files. (This will be the biggest point of controversy?)
> 
> As an example, without thinking too much about it, we could have:
> 
> $ cat /etc/nova/nova.conf
> 
> [logging]
> driver=nova.log.drivers.SyslogDriver
> syslog_dev=/dev/log
> verbose=true
> 
> [nova-network]
> manager=nova.network.quantum.QuantumManager
> vlan_interface=eth1
> 
> [nova-api]
> driver=nova.api.drivers.PasteDriver
> config=/etc/nova/api-paste.ini
> pipeline=osapi-with-keystone
> 
> 
> $ cat /etc/nova/api-paste.ini
> 
> ...
> 
> [pipeline:osapi]
> pipeline = faultwrap noauth ratelimit serialize extensions osapiapp11
> 
> [pipeline:osapi-with-keystone]
> pipeline = faultwrap keystone-auth ratelimit serialize extensions osapiapp11
> 
> ...
> 
> 
> 
> 
> -Original Message-
> From: "Jay Pipes" 
> Sent: Monday, October 31, 2011 5:42pm
> To: "Joshua Harlow" 
> Cc: "openstack" 
> Subject: Re: [Openstack] Gflags / conf -> common?
> 
> Hi!
> 
> GFlags has now been removed, AFAIK. The flags module has an
> optparse-based emulator for GFlags to ease transition for Nova joining
> the rest of the OpenStack core project implementations' use of
> standard config files/Paste.Deploy.
> 
> Cheers,
> -jay
> 
> On Mon, Oct 31, 2011 at 5:08 PM, Joshua Harlow  wrote:
>> Hi all,
>> 
>> I was wondering if there is any plans in essex to standardize either using
>> gflags or using configuration files for these types of settings.
>> One of the complaints that I receive a lot with gflags is that by including
>> a python file, u automatically inject all of its flags (even if they are not
>> used) into gflags (since its global).
>> Thus say u are just using the nova-compute run time, but that itself
>> includes say “flags.py” which itself seems to be a common area for flags
>> that may or may not be used by that runtime. Similarly if a file is imported
>> has say 1 method used by the calling code but itself defines 10 flags (for
>> its components) then those 10 flags get injected. This makes it very
>> confusing to figure out what should be set (or what could be set).
>> 
>> Has there been any thought on fixing this (or making a standard
>> recommendation that subprojects can follow) that would avoid this problem?
>> I could imagine fixes being in the code structure itself (having said 1
>> method stated above not be in a file what pulls in other code that defines
>> 10 flags) or another type of configuration mechanism?
>> I think this was mentioned at the conference, but not sure what came out of
>> that :-)
>> 
>> -Josh
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>> 
>> 
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
> 
> 
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cannot boot from volume with 2 devices

2011-11-01 Thread Vishvananda Ishaya
Sounds like we can work around this pretty easily by sorting the disks before 
we pass them into the xml template.
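
As a rough sketch (made-up structures, not the actual template code), the workaround is just:

# Order the disks by their requested device name before rendering the
# libvirt XML, so /dev/vda is always emitted ahead of /dev/vdb.
block_device_mapping = [
    {'mount_device': '/dev/vdb', 'volume_id': 'vol-data'},
    {'mount_device': '/dev/vda', 'volume_id': 'vol-root'},
]

sorted_mapping = sorted(block_device_mapping,
                        key=lambda bdm: bdm['mount_device'])

With that, the guest sees the root volume where the kernel expects it.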

Vish

On Nov 1, 2011, at 9:52 AM, Gaurav Gupta wrote:

> Hi all, I asked a question on Launchpad. but haven't heard back anything yet. 
> Trying this forum to see if someone has any idea how to resolve this issue:
> https://answers.launchpad.net/nova/+question/176938
> 
> To summarize:
> --
> 
> Say I had 2 disks, disk1 and disk2 (represented by 2 volumes). disk1 has the 
> root-file-system and disk2 has some data. I boot an instances using the 
> boot-from-volumes extension, and specify the 2 disks such as disk1 should be 
> attached to /dev/vda and disk2 to /dev/vdb. When the instance is launched it 
> fails to boot, because it tries to find the root-filesystem on disk2 instead. 
> 
> The underlying problem is with virsh/libvirt. Boot fails because in the 
> libvirt.xml file created by Openstack, disk2 (/dev/vdb) is listed before 
> disk1 (/dev/vda). So, what happens is that the hypervisor attaches disk2 
> first (since its listed first in the XML). Therefore when these disks are 
> attached on the guest, disk2 appears as /dev/vda and disk1 as /dev/vdb, which 
> causes the boot failure. Later the kernel tries to find the root filesystem 
> on '/dev/vda' (because thats' what is selected as the root) and it fails for 
> obvious reason. I think it's a virsh bug. It should be smart about it and 
> attach the devices in the right order. 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] keystone Endpoint schema

2011-11-01 Thread Ziad Sawalha
I logged this here: 
https://blueprints.launchpad.net/keystone/+spec/endpoint-template-types

I don't know when we'll get to it, though. Essex is booked and right now the 
focus is on stabilizing. This is also an API change, so it might be fitting for 
a v3.0 of the API whenever we decide to move to that. Feels like Essex+1 to me.

Is there a piece of this or a blocker we need to address today?

From: Marcelo Martins <btorch...@zeroaccess.org>
Date: Tue, 1 Nov 2011 10:16:34 -0500
To: Ziad Sawalha <ziad.sawa...@rackspace.com>
Cc: Joseph Heck <he...@mac.com>, "openstack@lists.launchpad.net" <openstack@lists.launchpad.net>
Subject: Re: [Openstack] keystone Endpoint schema

Aww I see, that would be cool


Marcelo Martins
Openstack-swift
btorch...@zeroaccess.org

“Knowledge is the wings on which our aspirations take flight and soar. When it 
comes to surfing and life if you know what to do you can do it. If you desire 
anything become educated about it and succeed. “




On Nov 1, 2011, at 9:05 AM, Ziad Sawalha wrote:

We also need to consider the use case where a role may have rights over 
multiple services. Cloud Admin for example.

EndpointType would allow us to do this:

endpointTemplate add [region] [service] [type=public|internal|admin…other] 
[url] [enabled] [is_global]

That would allow services to register as many endpoints and endpoint types as 
they needed.

Z

From: Marcelo Martins <btorch...@zeroaccess.org>
Date: Mon, 31 Oct 2011 19:26:12 -0500
To: Ziad Sawalha <ziad.sawa...@rackspace.com>
Cc: Joseph Heck <he...@mac.com>, "openstack@lists.launchpad.net" <openstack@lists.launchpad.net>
Subject: Re: [Openstack] keystone Endpoint schema

Hi Ziad,

Sorry, that was my mistake. I meant to have "case service.name:"  on that 
pseudocode and not type. I wasn't proposing any EndpointType and don't see how 
that would help.

The way that I was thinking was, you can either have the "services" table  
pre-populated during keystone install/setup or have the user do it. Also 
provide information on the docs about the services that keystone currently 
support. The documentation would provide information to the user on how to add 
an endpointTemplate to a particular  service.


Perhaps this is a bit more clear:


case service.name:

openstack-swift)
try
    endpointTemplate add [region] [service] [public_url] [internal_url] [enabled] [is_global]
except
    "Failed with improper number of arguments"
    show_some_help()

openstack-compute)
try
    endpointTemplate add [region] [service] [public_url] [admin_url] [internal_url] [enabled] [is_global]
except
    ...



If one needs to add a "generic/custom"  service (one that keystone does not 
know yet), perhaps keystone could be flexible enough  to accept  N numbers of 
URLs for this "generic/custom"  service. But I think this is something more for 
the future.






Marcelo Martins
Openstack-swift
btorch...@zeroaccess.org

“Knowledge is the wings on which our aspirations take flight and soar. When it 
comes to surfing and life if you know what to do you can do it. If you desire 
anything become educated about it and succeed. “




On Oct 31, 2011, at 4:42 PM, Ziad Sawalha wrote:

The list of URLs comes from what we have historically done at Rackspace and the 
conversations had in OpenStack about a management/admin API.

I agree that not all services need those three. And some may want to create 
additional ones. You mention "type" below. Not to be confused with the 
serviceType (like compute, identity, image-service, object-store, etc...). Are 
you proposing an EndpointType (maybe admin, public, private, etc..)?

That does seem like a more flexible approach.

It would help to have some well-known types, such as:
- public: Internet-accessible
- admin: private, with elevated-privilege calls available
- internal: provides a high bandwidth, low latency, unmetered endpoint

Thoughts?

Z



On Oct 31, 2011, at 2:17 PM, "Marcelo Martins" <btorch...@zeroaccess.org> wrote:


It should require/accept the number of URLs that is required by the type of 
service one is adding. For example, swift only has public and localnet storage 
URLs. No admin URL.
So, regardless if one is using keystone-manage or not (not sure what else one 
can use, Rest calls maybe ? ),  it should only accept what the service type 
requires.


case type:

swift)
try
   endpointTemplate add [region] [service] [public_url] [internal_url] 
[enabled] [is_global]
except
"Failed with improper number of arguments"
show_some_help()

nova)


keystone)
...
another-service)


whatever_else)
...






Marcelo Martins
Openstack-swift
btorch...@zeroaccess.org

“Knowledge is the wings on which our aspirations take flight and soar. When it 
comes to 

Re: [Openstack] Which nova scheduler for different hardware sizes?

2011-11-01 Thread Lorin Hochstein
Christian:

Sandy's branch just landed in the repository. You should be able to use the 
distributed scheduler with the least cost functionality by specifying the 
following flag in nova.conf for the nova-scheduler service:

--compute_scheduler_driver=nova.scheduler.distributed_scheduler.DistributedScheduler

By default, this uses the nova.scheduler.least_cost.compute_fill_first_cost_fn 
weighting function.

Note, however, that this function will favor scheduling instances to nodes that 
have the smallest amount of RAM available that can still fit the instance. If 
you're looking for the opposite effect (deploy to the node that has the most 
amount of RAM free), then you'll have to write your own cost function.  One way 
would be to add the following method to least_cost.py:


def compute_least_loaded_cost_fn(host_info):
    # Negate the fill-first cost so hosts with the most free RAM score lowest.
    return -compute_fill_first_cost_fn(host_info)


Then add the following flag to your nova.conf

--least_cost_functions=nova.scheduler.least_cost.compute_least_loaded_cost_fn


Lorin
--
Lorin Hochstein, Computer Scientist
USC Information Sciences Institute
703.812.3710
http://www.east.isi.edu/~lorin




On Nov 1, 2011, at 11:37 AM, Sandy Walsh wrote:

> I'm hoping to land this branch asap. 
> https://review.openstack.org/#change,1192
> 
> It replaces all the "kind of alike" schedulers with a single 
> DistributedScheduler.
> 
> -S
> 
> 
> From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
> [openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf 
> of Christian Wittwer [wittwe...@gmail.com]
> Sent: Tuesday, November 01, 2011 5:38 AM
> To: Lorin Hochstein
> Cc: openstack@lists.launchpad.net
> Subject: Re: [Openstack] Which nova scheduler for different hardware sizes?
> 
> Lorin,
> Thanks for your reply. Well the least cost scheduler with these cost
> functions looks interesting.
> Unfortunately there is not much documenation about it. Can somebody
> give me an example how to switch to that scheduler using the memory
> cost function which already exist?
> 
> Cheers,
> Christian
> 
> 2011/10/24 Lorin Hochstein :
>> Christian:
>> You could use the least cost scheduler, but I think you'd have to write your
>> own cost function to take into account the different number of cores.
>> Looking at the source, the only cost function it comes with only takes into
>> account the amount of memory that's free, not loading in terms of total
>> physical cores and allocated virtual cores. (We use a custom scheduler at
>> our site, so I don't have any firsthand experience with the least-cost
>> scheduler).
>> Lorin
>> --
>> Lorin Hochstein, Computer Scientist
>> USC Information Sciences Institute
>> 703.812.3710
>> http://www.east.isi.edu/~lorin
>> 
>> 
>> 
>> On Oct 22, 2011, at 3:17 AM, Christian Wittwer wrote:
>> 
>> I'm planning to build a openstack nova installation with older
>> hardware. These servers obviously doesn't have the same hardware
>> configuration like memory and cores.
>> It ranges from 2 core and 4GB memory to 16 core and 64GB memory. I
>> know that there are different scheduler, but I'm not sure which one to
>> choose.
>> The simple scheduler tries to find the least used host, but the amount
>> of used cores per host (max_cores) is a constant, which doesn't work
>> for me.
>> Maybe the least cost scheduler would be the right one? But I'm not
>> sure, because I did not find any documenation about how to use it.
>> 
>> Cheers,
>> Christian
>> 
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>> 
>> 
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Gflags / conf -> common?

2011-11-01 Thread Brian Lamar
From what I understand, Nova is in the middle of a transition from gflags to 
optparse.

It's difficult to tell exactly what is going on, but the flags file is still 
being read by gflags and then optparse seems to take over from there. 
Regardless, both libraries are still being used and the scenario that Joshua 
brings up is still a concern.

I'm all for switching to `optparse` but it's going to be a heck of a transition.

I worry about the tight coupling that Glance has with `paste` and I would 
caution against Nova coupling with `paste` in a similar fashion.

IMO if the API wants to use `paste.deploy` as a configuration mechanism that is 
great but the entire project should not be configured out of a paste config 
file just because they happen to use INI syntax.

I'd like to treat paste deploy files as code and our configuration files as 
configuration files. (This will be the biggest point of controversy?)

As an example, without thinking too much about it, we could have:

$ cat /etc/nova/nova.conf

[logging]
driver=nova.log.drivers.SyslogDriver
syslog_dev=/dev/log
verbose=true

[nova-network]
manager=nova.network.quantum.QuantumManager
vlan_interface=eth1

[nova-api]
driver=nova.api.drivers.PasteDriver
config=/etc/nova/api-paste.ini
pipeline=osapi-with-keystone


$ cat /etc/nova/api-paste.ini

...

[pipeline:osapi]
pipeline = faultwrap noauth ratelimit serialize extensions osapiapp11

[pipeline:osapi-with-keystone]
pipeline = faultwrap keystone-auth ratelimit serialize extensions osapiapp11

...
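
To make the split concrete, here's a rough sketch of how the [nova-api] section above could be consumed -- plain ConfigParser for nova.conf, paste.deploy only for the pipeline (option names are just the made-up ones from my example, not working Nova code):

import ConfigParser

from paste.deploy import loadapp

conf = ConfigParser.ConfigParser()
conf.read('/etc/nova/nova.conf')

# Operator-facing configuration stays in nova.conf...
paste_config = conf.get('nova-api', 'config')      # /etc/nova/api-paste.ini
pipeline_name = conf.get('nova-api', 'pipeline')    # osapi-with-keystone

# ...while the WSGI plumbing stays in the paste file, loaded as "code".
app = loadapp('config:%s' % paste_config, name=pipeline_name)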




-Original Message-
From: "Jay Pipes" 
Sent: Monday, October 31, 2011 5:42pm
To: "Joshua Harlow" 
Cc: "openstack" 
Subject: Re: [Openstack] Gflags / conf -> common?

Hi!

GFlags has now been removed, AFAIK. The flags module has an
optparse-based emulator for GFlags to ease transition for Nova joining
the rest of the OpenStack core project implementations' use of
standard config files/Paste.Deploy.

Cheers,
-jay

On Mon, Oct 31, 2011 at 5:08 PM, Joshua Harlow  wrote:
> Hi all,
>
> I was wondering if there is any plans in essex to standardize either using
> gflags or using configuration files for these types of settings.
> One of the complaints that I receive a lot with gflags is that by including
> a python file, u automatically inject all of its flags (even if they are not
> used) into gflags (since its global).
> Thus say u are just using the nova-compute run time, but that itself
> includes say “flags.py” which itself seems to be a common area for flags
> that may or may not be used by that runtime. Similarly if a file is imported
> has say 1 method used by the calling code but itself defines 10 flags (for
> its components) then those 10 flags get injected. This makes it very
> confusing to figure out what should be set (or what could be set).
>
> Has there been any thought on fixing this (or making a standard
> recommendation that subprojects can follow) that would avoid this problem?
> I could imagine fixes being in the code structure itself (having said 1
> method stated above not be in a file what pulls in other code that defines
> 10 flags) or another type of configuration mechanism?
> I think this was mentioned at the conference, but not sure what came out of
> that :-)
>
> -Josh
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Cannot boot from volume with 2 devices

2011-11-01 Thread Gaurav Gupta
Hi all, I asked a question on Launchpad but haven't heard anything back
yet. Trying this forum to see if someone has any idea how to resolve this
issue:
https://answers.launchpad.net/nova/+question/176938

To summarize:
--

Say I have 2 disks, disk1 and disk2 (represented by 2 volumes). disk1 has
the root filesystem and disk2 has some data. I boot an instance using the
boot-from-volumes extension and specify the 2 disks such that disk1 should
be attached to /dev/vda and disk2 to /dev/vdb. When the instance is
launched it fails to boot, because it tries to find the root filesystem on
disk2 instead.

The underlying problem is with virsh/libvirt. Boot fails because in the
libvirt.xml file created by OpenStack, disk2 (/dev/vdb) is listed before
disk1 (/dev/vda). So what happens is that the hypervisor attaches disk2
first (since it's listed first in the XML). Therefore, when these disks are
attached on the guest, disk2 appears as /dev/vda and disk1 as /dev/vdb,
which causes the boot failure. Later the kernel tries to find the root
filesystem on '/dev/vda' (because that's what is selected as the root) and
it fails for obvious reasons. I think it's a virsh bug. It should be smart
about it and attach the devices in the right order.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Is there a reason Nova doesn't use scoped sessions in sqlalchemy ?

2011-11-01 Thread Vishvananda Ishaya
I'm still a bit unconvinced for two reasons:

a) clobbering a session would require a monkeypatched call while the session 
is still open. AFAIK we don't have any calls in sqlalchemy/api.py that are 
doing any fancy socket stuff, so calls through the db layer "should" be 
happening synchronously.

b) if this were really an issue it seems like we would see it all the time, 
instead of just under a specific set of circumstances.

It would be great to have a simple repro test case that proves that this is 
blowing up. That said, I have no issue switching over to scoped sessions. We 
discussed it at one point and felt that it wasn't necessary, but if you are 
reasonably sure that is the issue, we can definitely switch.
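
For anyone who wants to poke at it, a standalone illustration of the failure mode (plain SQLAlchemy with made-up models, no eventlet or nova code involved) would be something like:

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker

Base = declarative_base()


class Network(Base):
    __tablename__ = 'networks'
    id = sa.Column(sa.Integer, primary_key=True)


class Instance(Base):
    __tablename__ = 'instances'
    id = sa.Column(sa.Integer, primary_key=True)
    network_id = sa.Column(sa.Integer, sa.ForeignKey('networks.id'))
    network = relationship(Network)  # lazy-loaded unless joinedload'ed

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
shared_session = sessionmaker(bind=engine)()

shared_session.add(Instance(network=Network()))
shared_session.commit()

inst = shared_session.query(Instance).first()  # no joinedload('network')
shared_session.close()  # what another user of the shared session might do

# Raises "Parent instance ... is not bound to a Session; lazy load
# operation of attribute 'network' cannot proceed"
print(inst.network)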

I would expect that the issues with the migrations and the test cases are 
fixable.

Vish

On Nov 1, 2011, at 7:20 AM, Day, Phil wrote:

> Hi Vish,
>  
> I probably wasn’t careful enough with my wording – the API server may not be 
> threaded as such, but the use of eventlets gives effectively the same 
> concurrency issues that point towards needing to use scoped sessions.
>  
> Our basis for concluding that this is some form of concurrency issue is that 
> we can easily reproduce the issue by running concurrent requests into an API 
> server, and we have seen the problem disappear if we reduce the eventlet pool 
> to 1 or change to scoped sessions.   Whilst the symptom is that the session 
> has terminated by the time the lazy load is requested, as far as we can see 
> the eventlet handing the query hasn’t itself terminated the session – 
> although it does seem likely that another eventlet using the same shared 
> session could have. This seems to be specifically the type of issue that 
> scoped sessions are intended to address.   
>  
> http://www.sqlalchemy.org/docs/orm/session.html#contextual-thread-local-sessions
>  
> All of this is based on a limited understanding of how sqlalchemy is used in 
> Nova – I’d be more than happy to be corrected by others with more experience, 
> hence the question to the mailing list. 
>  
> I fully understand the drive to clean up the database layer, and I’m not 
> knocking the fix to 855660 – its clearly a good template for the way the DB 
> needs to go in Essex.   My concern is that as shown by 855660 these changes 
> have a pretty wide scope, and by the time that’s been expanded to all of the 
> current joinedloads it feels like it would be such a large set of changes 
> that I’d be concerned about them coming back into Diablo.Stable.
>  
> Hence instead we were looking for a much smaller change that can address the 
> whole class of problem of joinedloads in Diablo for now ahead of the DB 
> refactoring in Essex – and from our testing scoped sessions seem to address 
> that.  However as changing to scoped session breaks the migrate code in unit 
> tests, and not really understanding why this is or the intricacies of the DB 
> unit tests I wanted to see if we were heading down a path that had already 
> been examined and discarded before we spend too much time on it.
>  
> I’d be really interested in hearing from anyone with experience of 
> scoped_sessions, and/or willing to help us understand the issues we’re seeing 
> in the Unit Tests.
>  
> And of course I’d like to know what the communities feeling is towards a 
> simpler approach to fixing the issue in Diablo.Final vs the backport of DB 
> simplification changes from Essex – which I’m assuming will take some tiem 
> yet to work through all of the joinedloads.
>  
> Phil
>  
> From: Vishvananda Ishaya [mailto:vishvana...@gmail.com] 
> Sent: 31 October 2011 19:50
> To: Day, Phil
> Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net); Johnson, 
> Andrew Gordon (HP Cloud Services); Hassan, Ahmad; Haynes, David; 
> nova-datab...@lists.launchpad.net
> Subject: Re: [Openstack] Is there a reason Nova doesn't use scoped sessions 
> in sqlalchemy ?
>  
> All of the workers are single-threaded, so I'm not sure that scoped sessions 
> are really necessary.
>  
> We did however decide that objects from the db layer are supposed to be 
> simple dictionaries.  We currently allow nested dictionaries to optimize 
> joined objects. Unfortunately we never switched to sanitizing data from 
> sqlalchemy, and instead we make the sqlalchemy objects provide a 
> dictionary-like interface and pass the object itself.
>  
> The issue that you're seeing is because network wasn't properly 
> 'joinedload'ed in the initial query, and because the data is not sanitized, 
> sqlalchemy tries to joinedload, but the session has been terminated.  If we 
> had sanitized data, we would get a more useful error like a key error when 
> network is accessed. The current solution is to add the proper joinedload.
>  
> One of the goals of the nova-database team is to do the necessary data 
> sanitization and to remove as many of the joinedloads as possible (hopefully 
> all of them).
>  
> Vish
>  
> On Oct 31, 2011, at 12

Re: [Openstack] Which nova scheduler for different hardware sizes?

2011-11-01 Thread Sandy Walsh
I'm hoping to land this branch asap. 
https://review.openstack.org/#change,1192

It replaces all the "kind of alike" schedulers with a single 
DistributedScheduler.

-S


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Christian Wittwer [wittwe...@gmail.com]
Sent: Tuesday, November 01, 2011 5:38 AM
To: Lorin Hochstein
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Which nova scheduler for different hardware sizes?

Lorin,
Thanks for your reply. Well the least cost scheduler with these cost
functions looks interesting.
Unfortunately there is not much documenation about it. Can somebody
give me an example how to switch to that scheduler using the memory
cost function which already exist?

Cheers,
Christian

2011/10/24 Lorin Hochstein :
> Christian:
> You could use the least cost scheduler, but I think you'd have to write your
> own cost function to take into account the different number of cores.
> Looking at the source, the only cost function it comes with only takes into
> account the amount of memory that's free, not loading in terms of total
> physical cores and allocated virtual cores. (We use a custom scheduler at
> our site, so I don't have any firsthand experience with the least-cost
> scheduler).
> Lorin
> --
> Lorin Hochstein, Computer Scientist
> USC Information Sciences Institute
> 703.812.3710
> http://www.east.isi.edu/~lorin
>
>
>
> On Oct 22, 2011, at 3:17 AM, Christian Wittwer wrote:
>
> I'm planning to build a openstack nova installation with older
> hardware. These servers obviously doesn't have the same hardware
> configuration like memory and cores.
> It ranges from 2 core and 4GB memory to 16 core and 64GB memory. I
> know that there are different scheduler, but I'm not sure which one to
> choose.
> The simple scheduler tries to find the least used host, but the amount
> of used cores per host (max_cores) is a constant, which doesn't work
> for me.
> Maybe the least cost scheduler would be the right one? But I'm not
> sure, because I did not find any documenation about how to use it.
>
> Cheers,
> Christian
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] keystone Endpoint schema

2011-11-01 Thread Marcelo Martins
Aww I see, that would be cool 


Marcelo Martins
Openstack-swift
btorch...@zeroaccess.org

“Knowledge is the wings on which our aspirations take flight and soar. When it 
comes to surfing and life if you know what to do you can do it. If you desire 
anything become educated about it and succeed. “




On Nov 1, 2011, at 9:05 AM, Ziad Sawalha wrote:

> We also need to consider the use case where a role may have rights over 
> multiple services. Cloud Admin for example.
> 
> EndpointType would allow us to do this:
> 
> endpointTemplate add [region] [service] [type=public|internal|admin…other] 
> [url] [enabled] [is_global]
> 
> That would allow services to register as many endpoints and endpoint types as 
> they needed.
> 
> Z
> 
> From: Marcelo Martins 
> Date: Mon, 31 Oct 2011 19:26:12 -0500
> To: Ziad Sawalha 
> Cc: Joseph Heck , "openstack@lists.launchpad.net" 
> 
> Subject: Re: [Openstack] keystone Endpoint schema
> 
> Hi Ziad,
> 
> Sorry, that was my mistake. I meant to have "case service.name:"  on that 
> pseudocode and not type. I wasn't proposing any EndpointType and don't see 
> how that would help. 
> 
> The way that I was thinking was, you can either have the "services" table  
> pre-populated during keystone install/setup or have the user do it. Also 
> provide information on the docs about the services that keystone currently 
> support. The documentation would provide information to the user on how to 
> add an endpointTemplate to a particular  service. 
> 
> 
> Perhaps this is a bit more clear:
> 
> 
> case service.name:
> 
> openstack-swift)
> try
>     endpointTemplate add [region] [service] [public_url] [internal_url] [enabled] [is_global]
> except
>     "Failed with improper number of arguments"
>     show_some_help()
> 
> openstack-compute)
> try
>     endpointTemplate add [region] [service] [public_url] [admin_url] [internal_url] [enabled] [is_global]
> except
>     ...
> 
> 
> 
> If one needs to add a "generic/custom"  service (one that keystone does not 
> know yet), perhaps keystone could be flexible enough  to accept  N numbers of 
> URLs for this "generic/custom"  service. But I think this is something more 
> for the future.
> 
> 
> 
> 
> 
> 
> Marcelo Martins
> Openstack-swift
> btorch...@zeroaccess.org
> 
> “Knowledge is the wings on which our aspirations take flight and soar. When 
> it comes to surfing and life if you know what to do you can do it. If you 
> desire anything become educated about it and succeed. “
> 
> 
> 
> 
> On Oct 31, 2011, at 4:42 PM, Ziad Sawalha wrote:
> 
>> The list of URLs comes from what we have historically done at Rackspace and 
>> the conversations had in OpenStack about a management/admin API.
>> 
>> I agree that not all services need those three. And some may want to create 
>> additional ones. You mention "type" below. Not to be confused with the 
>> serviceType (like compute, identity, image-service, object-store, etc...). 
>> Are you proposing an EndpointType (maybe admin, public, private, etc..)?
>> 
>> That does seem like a more flexible approach.
>> 
>> It would help to have some well-known types, such as:
>> - public: Internet-accessible
>> - admin: private, with elevated-privilege calls available
>> - internal: provides a high bandwidth, low latency, unmetered endpoint
>> 
>> Thoughts?
>> 
>> Z
>> 
>> 
>> 
>> On Oct 31, 2011, at 2:17 PM, "Marcelo Martins"  
>> wrote:
>> 
>>> 
>>> It should require/accept the number of URLs that is required by the type of 
>>> service one is adding. For example, swift only has public and localnet 
>>> storage URLs. No admin URL.
>>> So, regardless if one is using keystone-manage or not (not sure what else 
>>> one can use, Rest calls maybe ? ),  it should only accept what the service 
>>> type requires.   
>>> 
>>> 
>>> case type:
>>> 
>>> swift) 
>>> try 
>>>endpointTemplate add [region] [service] [public_url] [internal_url] 
>>> [enabled] [is_global]
>>> except
>>> "Failed with improper number of arguments"
>>> show_some_help()
>>> 
>>> nova)
>>> 
>>> 
>>> keystone)
>>> ...
>>> another-service)
>>> 
>>> 
>>> whatever_else)
>>> ...
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Marcelo Martins
>>> Openstack-swift
>>> btorch...@zeroaccess.org
>>> 
>>> “Knowledge is the wings on which our aspirations take flight and soar. When 
>>> it comes to surfing and life if you know what to do you can do it. If you 
>>> desire anything become educated about it and succeed. “
>>> 
>>> 
>>> 
>>> 
>>> On Oct 31, 2011, at 1:52 PM, Joseph Heck wrote:
>>> 
 Can you provide an example?
 
 I think you're asserting that you'd like the keystone-manage command to 
 not require 3 different URLs when they don't exist separately, is that 
 correct?
 
 -joe
 
 On Oct 31, 2011, at 11:45 AM, Marcelo Martins wrote:
> Well, If you need to specify a "type" when adding an endpointTemplate, 
> then keystone should be smart enough to identify the type given and only 
> accept the numb

[Openstack] Reminder: OpenStack team meeting - 21:00 UTC

2011-11-01 Thread Thierry Carrez
Hello everyone,

Our general meeting will take place at 21:00 UTC this Tuesday in
#openstack-meeting on IRC. PTLs, if you can't make it, please name a
substitute on [2].

We have one week left before essex-1 is branched out of trunk, so we'll
review progress on the currently-published essex-1 plans.

Please double-check the meeting time! DST ended in Europe last weekend.
[1] http://www.timeanddate.com/worldclock/fixedtime.html?iso=2001T21

See the meeting agenda, edit the wiki to add new topics for discussion:
[2] http://wiki.openstack.org/Meetings/TeamMeeting

Cheers,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] info: wiki maintenance at 10:00 UTC Sat. 11/5/11

2011-11-01 Thread Anne Gentle
Hi all -

As much as I enjoy hearing from stackers requesting wiki accounts daily,
we're hoping to move towards a single sign-on solution for the OpenStack
wiki, using your Launchpad account as your identity.

We're going to need a wiki maintenance window to test the switch at 10:00
UTC Sat. 11/5/11. The wiki will be unavailable for about an hour this Saturday.

Once we test the solution, I'll send another email letting you know how the
logins should work. We're hoping that after this change is in effect, new
wiki users won't have to request a separate wiki account and can use their
Launchpad ID. The trickier part is that we're hoping to migrate current
wiki users and match their Launchpad ID to their wiki account, but if
that's not possible we'll let you know.

Thanks to Chmouel for the bravery in taking this on. :)

Anne


*Anne Gentle*
a...@openstack.org
my blog | my book | LinkedIn | Delicious | Twitter
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Is there a reason Nova doesn't use scoped sessions in sqlalchemy ?

2011-11-01 Thread Day, Phil
Hi Vish,

I probably wasn't careful enough with my wording - the API server may not be 
threaded as such, but the use of eventlets gives effectively the same 
concurrency issues that point towards needing to use scoped sessions.

Our basis for concluding that this is some form of concurrency issue is that we 
can easily reproduce the issue by running concurrent requests into an API 
server, and we have seen the problem disappear if we reduce the eventlet pool 
to 1 or change to scoped sessions.   Whilst the symptom is that the session has 
terminated by the time the lazy load is requested, as far as we can see the 
eventlet handling the query hasn't itself terminated the session - although it 
does seem likely that another eventlet using the same shared session could 
have. This seems to be specifically the type of issue that scoped sessions are 
intended to address.

http://www.sqlalchemy.org/docs/orm/session.html#contextual-thread-local-sessions
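
For reference, the kind of change we have been experimenting with looks roughly like this (a sketch only, not the actual patch; with eventlet the scope function has to key off the current greenthread rather than the OS thread, otherwise all greenthreads in a worker still share one session):

import eventlet
import sqlalchemy as sa
from sqlalchemy.orm import scoped_session, sessionmaker

engine = sa.create_engine('sqlite://')  # stand-in for the real connection URL

# Each greenthread that calls Session() gets its own session, so one request
# closing its session can no longer detach objects that a concurrent request
# is still lazy-loading from.
Session = scoped_session(sessionmaker(bind=engine),
                         scopefunc=eventlet.greenthread.getcurrent)

session = Session()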

All of this is based on a limited understanding of how sqlalchemy is used in 
Nova - I'd be more than happy to be corrected by others with more experience, 
hence the question to the mailing list.

I fully understand the drive to clean up the database layer, and I'm not 
knocking the fix to 855660 - it's clearly a good template for the way the DB 
needs to go in Essex.   My concern is that, as shown by 855660, these changes 
have a pretty wide scope, and by the time that's been expanded to all of the 
current joinedloads it feels like it would be such a large set of changes that 
I'd be concerned about them coming back into Diablo.Stable.

Hence instead we were looking for a much smaller change that can address the 
whole class of joinedload problems in Diablo for now, ahead of the DB 
refactoring in Essex - and from our testing scoped sessions seem to address 
that.  However, as changing to scoped sessions breaks the migrate code in the 
unit tests, and as we don't really understand why this is or the intricacies of 
the DB unit tests, I wanted to see if we were heading down a path that had 
already been examined and discarded before we spend too much time on it.

I'd be really interested in hearing from anyone with experience of 
scoped_sessions, and/or willing to help us understand the issues we're seeing 
in the Unit Tests.

And of course I'd like to know what the community's feeling is towards a 
simpler approach to fixing the issue in Diablo.Final vs the backport of DB 
simplification changes from Essex - which I'm assuming will take some time yet 
to work through all of the joinedloads.

Phil

From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
Sent: 31 October 2011 19:50
To: Day, Phil
Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net); Johnson, 
Andrew Gordon (HP Cloud Services); Hassan, Ahmad; Haynes, David; 
nova-datab...@lists.launchpad.net
Subject: Re: [Openstack] Is there a reason Nova doesn't use scoped sessions in 
sqlalchemy ?

All of the workers are single-threaded, so I'm not sure that scoped sessions 
are really necessary.

We did however decide that objects from the db layer are supposed to be simple 
dictionaries.  We currently allow nested dictionaries to optimize joined 
objects. Unfortunately we never switched to sanitizing data from sqlalchemy, 
and instead we make the sqlalchemy objects provide a dictionary-like interface 
and pass the object itself.

The issue that you're seeing is because network wasn't properly 'joinedload'ed 
in the initial query, and because the data is not sanitized, sqlalchemy tries 
to joinedload, but the session has been terminated.  If we had sanitized data, 
we would get a more useful error like a key error when network is accessed. The 
current solution is to add the proper joinedload.

One of the goals of the nova-database team is to do the necessary data 
sanitization and to remove as many of the joinedloads as possible (hopefully 
all of them).

Vish

On Oct 31, 2011, at 12:25 PM, Day, Phil wrote:


Hi Folks,

We've been looking into a problem which looks a lot like:

https://bugs.launchpad.net/nova/+bug/855660



2011-10-21 14:13:31,035 ERROR nova.api [5bd52130-d46f-4702-b06b-9ca5045473d7 
smokeuser smokeproject] Unexpected error raised: Parent instance  is not bound to a Session; lazy load operation of attribute 
'network' cannot proceed
(nova.api): TRACE: Traceback (most recent call last):
(nova.api): TRACE: File 
"/usr/lib/python2.7/dist-packages/nova/api/ec2/__init__.py", line 363, in 
__call__
(nova.api): TRACE: result = api_request.invoke(context)
(nova.api): TRACE: File 
"/usr/lib/python2.7/dist-packages/nova/api/ec2/apirequest.py", line 90, in 
invoke
(nova.api): TRACE: result = method(context, **args)
(nova.api): TRACE: File 
"/usr/lib/python2.7/dist-packages/nova/api/ec2/cloud.py", line 1195, in 
describe_instances
(nova.api): TRACE: instance_id=instance_id)
(nova.api): TRACE: File 
"/usr/lib/python2.7/dist-packages/nova/api/ec2/cloud.py", line 1204, in 
_format_describe_instances
(nova.ap

Re: [Openstack] how to configure nova block_migration

2011-11-01 Thread DeadSun
I think you can report it as a bug.

But why does nova-compute need to grab the image from Glance rather than from
instances/_bases and instances-i000x?

2011/11/1 

> I have exactly the same problem myself DeadSun.
>
>
>
> nova-compute on the target node is attempting to retrieve the base image
> from Glance. Glance in response is saying the requestor is unauthenticated.
>
>
>
> Unfortunately I’ve not been able to find a solution. I posted a question
> on Launchpad but I’ve had no response yet,
> https://answers.launchpad.net/nova/+question/176608
>
>
>
> Adrian
>
>
>
>
>
> *From:* DeadSun [mailto:mwjpi...@gmail.com]
> *Sent:* Tuesday, November 01, 2011 12:55 PM
> *To:* Smith, Adrian F
> *Cc:* openstack@lists.launchpad.net
> *Subject:* Re: [Openstack] how to configure nova block_migration
>
>
>
> Hi,Adrian
>
>
>
> Thanks for your help.
>
>
>
> Do I need set some flags in nova.conf?
>
> in my nova.conf ,I set
>
> *--live_migration_uri=qemu+ssh://%s/system*
>
>
>
> and I have tried it but error log show
>
>
>
> *(nova.compute.manager): TRACE:**
> 2011-11-01 16:55:59,136 DEBUG nova.rpc [-] Making asynchronous cast on
> compute.node1-test... from (pid=6815) cast
> /home/openstack/nova/nova/rpc/impl_kombu.py:746
> 2011-11-01 16:55:59,140 ERROR nova.rpc [-] Exception during message
> handling
> (nova.rpc): TRACE: Traceback (most recent call last):
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/rpc/impl_kombu.py",
> line 620, in _process_data
> (nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/compute/manager.py",
> line 1615, in live_migration
> (nova.rpc): TRACE: raise exc
> (nova.rpc): TRACE: RemoteError: Remote error: None None
> (nova.rpc): TRACE: None.
> (nova.rpc): TRACE:*
>
>
>
> and in dest node, compute.log show
>
>
>
> *2011-11-01 16:58:44,035 DEBUG nova.utils [-] Attempting to grab
> semaphore "0ade7c2cf97f75d009975f4d720d1fa6c19f4897_sm" for method
> "call_if_not_exists"... from (pid=17799) inner
> /home/openstack/nova/nova/utils.py:717**
> 2011-11-01 16:58:44,043 ERROR nova.rpc [-] Exception during message
> handling
> (nova.rpc): TRACE: Traceback (most recent call last):
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/rpc/impl_kombu.py",
> line 620, in _process_data
> (nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/compute/manager.py",
> line 1569, in pre_live_migration
> (nova.rpc): TRACE: disk)
> (nova.rpc): TRACE: File
> "/home/openstack/nova/nova/virt/libvirt/connection.py", line 1803, in
> pre_block_migration
> (nova.rpc): TRACE: size=instance_ref['local_gb'])
> (nova.rpc): TRACE: File
> "/home/openstack/nova/nova/virt/libvirt/connection.py", line 804, in
> _cache_image
> (nova.rpc): TRACE: call_if_not_exists(base, fn, *args, **kwargs)
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/utils.py", line 730, in
> inner
> (nova.rpc): TRACE: retval = f(*args, **kwargs)
> (nova.rpc): TRACE: File
> "/home/openstack/nova/nova/virt/libvirt/connection.py", line 802, in
> call_if_not_exists
> (nova.rpc): TRACE: fn(target=base, *args, **kwargs)
> (nova.rpc): TRACE: File
> "/home/openstack/nova/nova/virt/libvirt/connection.py", line 816, in
> _fetch_image
> (nova.rpc): TRACE: images.fetch_to_raw(context, image_id, target, user_id,
> project_id)
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/virt/images.py", line
> 52, in fetch_to_raw
> (nova.rpc): TRACE: metadata = fetch(context, image_href, path_tmp,
> user_id, project_id)
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/virt/images.py", line
> 46, in fetch
> (nova.rpc): TRACE: metadata = image_service.get(context, image_id,
> image_file)
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/image/glance.py", line
> 239, in get
> (nova.rpc): TRACE: image_meta, image_chunks = client.get_image(image_id)
> (nova.rpc): TRACE: File
> "/usr/lib/python2.7/dist-packages/glance/client.py", line 83, in get_image
> (nova.rpc): TRACE: res = self.do_request("GET", "/images/%s" % image_id)
> (nova.rpc): TRACE: File
> "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 145, in
> do_request
> (nova.rpc): TRACE: method, action, body=body, headers=headers,
> params=params)
> (nova.rpc): TRACE: File
> "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 222, in
> _do_request
> (nova.rpc): TRACE: raise exception.NotAuthorized(res.read())
> (nova.rpc): TRACE: NotAuthorized: 401 Unauthorized
> (nova.rpc): TRACE:
> (nova.rpc): TRACE: This server could not verify that you are authorized to
> access the document you requested. Either you supplied the wrong
> credentials (e.g., bad password), or your browser does not understand how
> to supply the credentials required.
> (nova.rpc): TRACE:
> (nova.rpc): TRACE: Authentication required
> (nova.rpc): TRACE:
> 2011-11-01 16:58:44,044 ERROR nova.rpc [-] Returning exception 401
> Unauthorized
>
> This server could not verify that you are authorized to access the
> document you

Re: [Openstack] keystone Endpoint schema

2011-11-01 Thread Ziad Sawalha
We also need to consider the use case where a role may have rights over 
multiple services. Cloud Admin for example.

EndpointType would allow us to do this:

endpointTemplate add [region] [service] [type=public|internal|admin…other] 
[url] [enabled] [is_global]

That would allow services to register as many endpoints and endpoint types as 
they needed.

Z

From: Marcelo Martins <btorch...@zeroaccess.org>
Date: Mon, 31 Oct 2011 19:26:12 -0500
To: Ziad Sawalha <ziad.sawa...@rackspace.com>
Cc: Joseph Heck <he...@mac.com>, "openstack@lists.launchpad.net" <openstack@lists.launchpad.net>
Subject: Re: [Openstack] keystone Endpoint schema

Hi Ziad,

Sorry, that was my mistake. I meant to have "case service.name:" in that
pseudocode, not "type". I wasn't proposing any EndpointType and don't see how
that would help.

The way I was thinking of it, you could either have the "services" table
pre-populated during keystone install/setup or have the user do it, and also
provide information in the docs about the services that keystone currently
supports. The documentation would tell the user how to add an endpointTemplate
to a particular service.


Perhaps this is a bit more clear:


case service.name:

openstack-swift)
try
endpointTemplate add [region] [service] [public_url] [internal_url] [enabled] [is_global]
except
  "Failed with improper number of arguments"
  show_some_help()

openstack-compute)
try
endpointTemplate add [region] [service] [public_url] [admin_url] [internal_url] [enabled] [is_global]
...

If one needs to add a "generic/custom" service (one that keystone does not know
about yet), perhaps keystone could be flexible enough to accept N URLs for that
"generic/custom" service. But I think this is something more for the future.
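
As a Python sketch of that per-service check (the service names and the
required-URL sets here are illustrative assumptions, not keystone's actual
code):

# Illustrative only: which endpoint URLs each known service needs.
REQUIRED_URLS = {
    'openstack-swift': ('public_url', 'internal_url'),
    'openstack-compute': ('public_url', 'admin_url', 'internal_url'),
}

def validate_endpoint_urls(service_name, **urls):
    # Accept only the URLs the given service actually requires.
    try:
        required = REQUIRED_URLS[service_name]
    except KeyError:
        # Unknown "generic/custom" service: accept whatever was supplied,
        # matching the "something for the future" case above.
        return dict(urls)

    missing = [name for name in required if not urls.get(name)]
    extra = [name for name in urls if name not in required]
    if missing or extra:
        raise ValueError("Failed with improper number of arguments: "
                         "missing=%s, unexpected=%s" % (missing, extra))
    return dict((name, urls[name]) for name in required)

So validate_endpoint_urls('openstack-swift', public_url='http://...',
internal_url='http://...') passes, while handing swift an admin_url (or
omitting internal_url) fails with a usage error, which is the behaviour the
pseudocode above describes.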






Marcelo Martins
Openstack-swift
btorch...@zeroaccess.org

“Knowledge is the wings on which our aspirations take flight and soar. When it 
comes to surfing and life if you know what to do you can do it. If you desire 
anything become educated about it and succeed. “




On Oct 31, 2011, at 4:42 PM, Ziad Sawalha wrote:

The list of URLs comes from what we have historically done at Rackspace and
from the conversations we have had in OpenStack about a management/admin API.

I agree that not all services need those three, and some may want to create
additional ones. You mention "type" below, not to be confused with the
serviceType (like compute, identity, image-service, object-store, etc.). Are
you proposing an EndpointType (maybe admin, public, private, etc.)?

That does seem like a more flexible approach.

It would help to have some well-known types, such as:
- public: Internet-accessible
- admin: private, with elevated-privilege calls available
- internal: provides a high bandwidth, low latency, unmetered endpoint

Thoughts?

Z



On Oct 31, 2011, at 2:17 PM, "Marcelo Martins" <btorch...@zeroaccess.org> wrote:


It should require/accept the number of URLs that is required by the type of
service one is adding. For example, swift only has public and localnet storage
URLs, and no admin URL.
So, regardless of whether one is using keystone-manage or not (not sure what
else one can use, REST calls maybe?), it should only accept what the service
type requires.


case type:

swift)
try
   endpointTemplate add [region] [service] [public_url] [internal_url] 
[enabled] [is_global]
except
"Failed with improper number of arguments"
show_some_help()

nova)


keystone)
...
another-service)


whatever_else)
...






Marcelo Martins
Openstack-swift
btorch...@zeroaccess.org

“Knowledge is the wings on which our aspirations take flight and soar. When it 
comes to surfing and life if you know what to do you can do it. If you desire 
anything become educated about it and succeed. “




On Oct 31, 2011, at 1:52 PM, Joseph Heck wrote:

Can you provide an example?

I think you're asserting that you'd like the keystone-manage command to not 
require 3 different URLs when they don't exist separately, is that correct?

-joe

On Oct 31, 2011, at 11:45 AM, Marcelo Martins wrote:
Well, if you need to specify a "type" when adding an endpointTemplate, then
keystone should be smart enough to identify the given type and accept only the
number of URLs needed for that type of service.

Marcelo Martins
Openstack-swift
btorch...@zeroaccess.org

On Oct 31, 2011, at 1:40 PM, Joseph Heck wrote:
That's just what it sees today - the only one of the service endpoints that 
uses all three (right now anyway) is Keystone itself. Can you share a different 
pattern that you're interested in seeing supported?

-joe

On Oct 31, 2011, at 9:46 AM, Marcelo Martins wrote:
What makes keystone assume that all types of services will have " [public_url] 
[admin_url] [internal_ur

Re: [Openstack] how to configure nova block_migration

2011-11-01 Thread DeadSun
Thank you for your help.

2011/11/1 

> I have exactly the same problem myself DeadSun.
>
>
>
> nova-compute on the target node is attempting to retrieve the base image
> from Glance. Glance in response is saying the requestor is unauthenticated.
>
>
>
> Unfortunately I’ve not been able to find a solution. I posted a question
> on Launchpad but I’ve had no response yet,
> https://answers.launchpad.net/nova/+question/176608
>
>
>
> Adrian
>
>
>
>
>
> *From:* DeadSun [mailto:mwjpi...@gmail.com]
> *Sent:* Tuesday, November 01, 2011 12:55 PM
> *To:* Smith, Adrian F
> *Cc:* openstack@lists.launchpad.net
> *Subject:* Re: [Openstack] how to configure nova block_migration
>
>
>
> Hi,Adrian
>
>
>
> Thanks for your help.
>
>
>
> Do I need set some flags in nova.conf?
>
> in my nova.conf ,I set
>
> *--live_migration_uri=qemu+ssh://%s/system*
>
>
>
> and I have tried it but error log show
>
>
>
> *(nova.compute.manager): TRACE:**
> 2011-11-01 16:55:59,136 DEBUG nova.rpc [-] Making asynchronous cast on
> compute.node1-test... from (pid=6815) cast
> /home/openstack/nova/nova/rpc/impl_kombu.py:746
> 2011-11-01 16:55:59,140 ERROR nova.rpc [-] Exception during message
> handling
> (nova.rpc): TRACE: Traceback (most recent call last):
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/rpc/impl_kombu.py",
> line 620, in _process_data
> (nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/compute/manager.py",
> line 1615, in live_migration
> (nova.rpc): TRACE: raise exc
> (nova.rpc): TRACE: RemoteError: Remote error: None None
> (nova.rpc): TRACE: None.
> (nova.rpc): TRACE:*
>
>
>
> and in dest node, compute.log show
>
>
>
> *2011-11-01 16:58:44,035 DEBUG nova.utils [-] Attempting to grab
> semaphore "0ade7c2cf97f75d009975f4d720d1fa6c19f4897_sm" for method
> "call_if_not_exists"... from (pid=17799) inner
> /home/openstack/nova/nova/utils.py:717**
> 2011-11-01 16:58:44,043 ERROR nova.rpc [-] Exception during message
> handling
> (nova.rpc): TRACE: Traceback (most recent call last):
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/rpc/impl_kombu.py",
> line 620, in _process_data
> (nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/compute/manager.py",
> line 1569, in pre_live_migration
> (nova.rpc): TRACE: disk)
> (nova.rpc): TRACE: File
> "/home/openstack/nova/nova/virt/libvirt/connection.py", line 1803, in
> pre_block_migration
> (nova.rpc): TRACE: size=instance_ref['local_gb'])
> (nova.rpc): TRACE: File
> "/home/openstack/nova/nova/virt/libvirt/connection.py", line 804, in
> _cache_image
> (nova.rpc): TRACE: call_if_not_exists(base, fn, *args, **kwargs)
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/utils.py", line 730, in
> inner
> (nova.rpc): TRACE: retval = f(*args, **kwargs)
> (nova.rpc): TRACE: File
> "/home/openstack/nova/nova/virt/libvirt/connection.py", line 802, in
> call_if_not_exists
> (nova.rpc): TRACE: fn(target=base, *args, **kwargs)
> (nova.rpc): TRACE: File
> "/home/openstack/nova/nova/virt/libvirt/connection.py", line 816, in
> _fetch_image
> (nova.rpc): TRACE: images.fetch_to_raw(context, image_id, target, user_id,
> project_id)
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/virt/images.py", line
> 52, in fetch_to_raw
> (nova.rpc): TRACE: metadata = fetch(context, image_href, path_tmp,
> user_id, project_id)
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/virt/images.py", line
> 46, in fetch
> (nova.rpc): TRACE: metadata = image_service.get(context, image_id,
> image_file)
> (nova.rpc): TRACE: File "/home/openstack/nova/nova/image/glance.py", line
> 239, in get
> (nova.rpc): TRACE: image_meta, image_chunks = client.get_image(image_id)
> (nova.rpc): TRACE: File
> "/usr/lib/python2.7/dist-packages/glance/client.py", line 83, in get_image
> (nova.rpc): TRACE: res = self.do_request("GET", "/images/%s" % image_id)
> (nova.rpc): TRACE: File
> "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 145, in
> do_request
> (nova.rpc): TRACE: method, action, body=body, headers=headers,
> params=params)
> (nova.rpc): TRACE: File
> "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 222, in
> _do_request
> (nova.rpc): TRACE: raise exception.NotAuthorized(res.read())
> (nova.rpc): TRACE: NotAuthorized: 401 Unauthorized
> (nova.rpc): TRACE:
> (nova.rpc): TRACE: This server could not verify that you are authorized to
> access the document you requested. Either you supplied the wrong
> credentials (e.g., bad password), or your browser does not understand how
> to supply the credentials required.
> (nova.rpc): TRACE:
> (nova.rpc): TRACE: Authentication required
> (nova.rpc): TRACE:
> 2011-11-01 16:58:44,044 ERROR nova.rpc [-] Returning exception 401
> Unauthorized
>
> This server could not verify that you are authorized to access the
> document you requested. Either you supplied the wrong credentials (e.g.,
> bad password), or your browser does not unders

Re: [Openstack] how to configure nova block_migration

2011-11-01 Thread Adrian_F_Smith
I have exactly the same problem myself DeadSun.

nova-compute on the target node is attempting to retrieve the base image from 
Glance. Glance in response is saying the requestor is unauthenticated.

Unfortunately I’ve not been able to find a solution. I posted a question on 
Launchpad but I’ve had no response yet, 
https://answers.launchpad.net/nova/+question/176608

Adrian


From: DeadSun [mailto:mwjpi...@gmail.com]
Sent: Tuesday, November 01, 2011 12:55 PM
To: Smith, Adrian F
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] how to configure nova block_migration

Hi, Adrian

Thanks for your help.

Do I need to set some flags in nova.conf?
In my nova.conf I set
--live_migration_uri=qemu+ssh://%s/system

and I have tried it, but the error log shows

(nova.compute.manager): TRACE:
2011-11-01 16:55:59,136 DEBUG nova.rpc [-] Making asynchronous cast on 
compute.node1-test... from (pid=6815) cast 
/home/openstack/nova/nova/rpc/impl_kombu.py:746
2011-11-01 16:55:59,140 ERROR nova.rpc [-] Exception during message handling
(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE: File "/home/openstack/nova/nova/rpc/impl_kombu.py", line 
620, in _process_data
(nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/compute/manager.py", line 
1615, in live_migration
(nova.rpc): TRACE: raise exc
(nova.rpc): TRACE: RemoteError: Remote error: None None
(nova.rpc): TRACE: None.
(nova.rpc): TRACE:

and on the dest node, compute.log shows

2011-11-01 16:58:44,035 DEBUG nova.utils [-] Attempting to grab semaphore 
"0ade7c2cf97f75d009975f4d720d1fa6c19f4897_sm" for method 
"call_if_not_exists"... from (pid=17799) inner 
/home/openstack/nova/nova/utils.py:717
2011-11-01 16:58:44,043 ERROR nova.rpc [-] Exception during message handling
(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE: File "/home/openstack/nova/nova/rpc/impl_kombu.py", line 
620, in _process_data
(nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/compute/manager.py", line 
1569, in pre_live_migration
(nova.rpc): TRACE: disk)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/virt/libvirt/connection.py", 
line 1803, in pre_block_migration
(nova.rpc): TRACE: size=instance_ref['local_gb'])
(nova.rpc): TRACE: File "/home/openstack/nova/nova/virt/libvirt/connection.py", 
line 804, in _cache_image
(nova.rpc): TRACE: call_if_not_exists(base, fn, *args, **kwargs)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/utils.py", line 730, in inner
(nova.rpc): TRACE: retval = f(*args, **kwargs)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/virt/libvirt/connection.py", 
line 802, in call_if_not_exists
(nova.rpc): TRACE: fn(target=base, *args, **kwargs)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/virt/libvirt/connection.py", 
line 816, in _fetch_image
(nova.rpc): TRACE: images.fetch_to_raw(context, image_id, target, user_id, 
project_id)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/virt/images.py", line 52, in 
fetch_to_raw
(nova.rpc): TRACE: metadata = fetch(context, image_href, path_tmp, user_id, 
project_id)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/virt/images.py", line 46, in 
fetch
(nova.rpc): TRACE: metadata = image_service.get(context, image_id, image_file)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/image/glance.py", line 239, 
in get
(nova.rpc): TRACE: image_meta, image_chunks = client.get_image(image_id)
(nova.rpc): TRACE: File "/usr/lib/python2.7/dist-packages/glance/client.py", 
line 83, in get_image
(nova.rpc): TRACE: res = self.do_request("GET", "/images/%s" % image_id)
(nova.rpc): TRACE: File 
"/usr/lib/python2.7/dist-packages/glance/common/client.py", line 145, in 
do_request
(nova.rpc): TRACE: method, action, body=body, headers=headers, params=params)
(nova.rpc): TRACE: File 
"/usr/lib/python2.7/dist-packages/glance/common/client.py", line 222, in 
_do_request
(nova.rpc): TRACE: raise exception.NotAuthorized(res.read())
(nova.rpc): TRACE: NotAuthorized: 401 Unauthorized
(nova.rpc): TRACE:
(nova.rpc): TRACE: This server could not verify that you are authorized to 
access the document you requested. Either you supplied the wrong credentials 
(e.g., bad password), or your browser does not understand how to supply the 
credentials required.
(nova.rpc): TRACE:
(nova.rpc): TRACE: Authentication required
(nova.rpc): TRACE:
2011-11-01 16:58:44,044 ERROR nova.rpc [-] Returning exception 401 Unauthorized

This server could not verify that you are authorized to access the document you 
requested. Either you supplied the wrong credentials (e.g., bad password), or 
your browser does not understand how to supply the credentials required.

Authentication required to caller
2011-11-01 16:58:44,044 ERROR nova.rpc [-] ['Traceback (most recent call 
last):\n', ' File "/home/openstack/nova/nova/rpc/impl_kombu.py", line 620, in 
_process_data\n rval = node_func(context=ctxt, **node_args)\n', '

Re: [Openstack] how to configure nova block_migration

2011-11-01 Thread DeadSun
Hi, Adrian

Thanks for your help.

Do I need to set some flags in nova.conf?
In my nova.conf I set
*--live_migration_uri=qemu+ssh://%s/system*

and I have tried it, but the error log shows

*(nova.compute.manager): TRACE:
2011-11-01 16:55:59,136 DEBUG nova.rpc [-] Making asynchronous cast on
compute.node1-test... from (pid=6815) cast
/home/openstack/nova/nova/rpc/impl_kombu.py:746
2011-11-01 16:55:59,140 ERROR nova.rpc [-] Exception during message handling
(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE: File "/home/openstack/nova/nova/rpc/impl_kombu.py", line
620, in _process_data
(nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/compute/manager.py",
line 1615, in live_migration
(nova.rpc): TRACE: raise exc
(nova.rpc): TRACE: RemoteError: Remote error: None None
(nova.rpc): TRACE: None.
(nova.rpc): TRACE:*

and on the dest node, compute.log shows

*2011-11-01 16:58:44,035 DEBUG nova.utils [-] Attempting to grab semaphore
"0ade7c2cf97f75d009975f4d720d1fa6c19f4897_sm" for method
"call_if_not_exists"... from (pid=17799) inner
/home/openstack/nova/nova/utils.py:717
2011-11-01 16:58:44,043 ERROR nova.rpc [-] Exception during message handling
(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE: File "/home/openstack/nova/nova/rpc/impl_kombu.py", line
620, in _process_data
(nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/compute/manager.py",
line 1569, in pre_live_migration
(nova.rpc): TRACE: disk)
(nova.rpc): TRACE: File
"/home/openstack/nova/nova/virt/libvirt/connection.py", line 1803, in
pre_block_migration
(nova.rpc): TRACE: size=instance_ref['local_gb'])
(nova.rpc): TRACE: File
"/home/openstack/nova/nova/virt/libvirt/connection.py", line 804, in
_cache_image
(nova.rpc): TRACE: call_if_not_exists(base, fn, *args, **kwargs)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/utils.py", line 730, in
inner
(nova.rpc): TRACE: retval = f(*args, **kwargs)
(nova.rpc): TRACE: File
"/home/openstack/nova/nova/virt/libvirt/connection.py", line 802, in
call_if_not_exists
(nova.rpc): TRACE: fn(target=base, *args, **kwargs)
(nova.rpc): TRACE: File
"/home/openstack/nova/nova/virt/libvirt/connection.py", line 816, in
_fetch_image
(nova.rpc): TRACE: images.fetch_to_raw(context, image_id, target, user_id,
project_id)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/virt/images.py", line
52, in fetch_to_raw
(nova.rpc): TRACE: metadata = fetch(context, image_href, path_tmp, user_id,
project_id)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/virt/images.py", line
46, in fetch
(nova.rpc): TRACE: metadata = image_service.get(context, image_id,
image_file)
(nova.rpc): TRACE: File "/home/openstack/nova/nova/image/glance.py", line
239, in get
(nova.rpc): TRACE: image_meta, image_chunks = client.get_image(image_id)
(nova.rpc): TRACE: File
"/usr/lib/python2.7/dist-packages/glance/client.py", line 83, in get_image
(nova.rpc): TRACE: res = self.do_request("GET", "/images/%s" % image_id)
(nova.rpc): TRACE: File
"/usr/lib/python2.7/dist-packages/glance/common/client.py", line 145, in
do_request
(nova.rpc): TRACE: method, action, body=body, headers=headers,
params=params)
(nova.rpc): TRACE: File
"/usr/lib/python2.7/dist-packages/glance/common/client.py", line 222, in
_do_request
(nova.rpc): TRACE: raise exception.NotAuthorized(res.read())
(nova.rpc): TRACE: NotAuthorized: 401 Unauthorized
(nova.rpc): TRACE:
(nova.rpc): TRACE: This server could not verify that you are authorized to
access the document you requested. Either you supplied the wrong
credentials (e.g., bad password), or your browser does not understand how
to supply the credentials required.
(nova.rpc): TRACE:
(nova.rpc): TRACE: Authentication required
(nova.rpc): TRACE:
2011-11-01 16:58:44,044 ERROR nova.rpc [-] Returning exception 401
Unauthorized

This server could not verify that you are authorized to access the document
you requested. Either you supplied the wrong credentials (e.g., bad
password), or your browser does not understand how to supply the
credentials required.

Authentication required to caller
2011-11-01 16:58:44,044 ERROR nova.rpc [-] ['Traceback (most recent call
last):\n', ' File "/home/openstack/nova/nova/rpc/impl_kombu.py", line 620,
in _process_data\n rval = node_func(context=ctxt, **node_args)\n', ' File
"/home/openstack/nova/nova/compute/manager.py", line 1569, in
pre_live_migration\n disk)\n', ' File
"/home/openstack/nova/nova/virt/libvirt/connection.py", line 1803, in
pre_block_migration\n size=instance_ref[\'local_gb\'])\n', ' File
"/home/openstack/nova/nova/virt/libvirt/connection.py", line 804, in
_cache_image\n call_if_not_exists(base, fn, *args, **kwargs)\n', ' File
"/home/openstack/nova/nova/utils.py", line 730, in inner\n retval =
f(*args, **kwargs)\n', ' File
"/home/openstack/nova/nova/virt/libvirt/connection.py", line 802, in
call_if_not_exists\n fn(target=base, *args, **kwargs)\n',

Re: [Openstack] how to configure nova block_migration

2011-11-01 Thread Adrian_F_Smith
There isn't really any block-migration-specific configuration. As long as your
multi-node nova installation is working, you should be fine.



The source and target nodes must have compatible CPU architectures and 
capabilities. Nova-compute on the target node will need access to the VM’s base 
image (i.e. on Glance if that’s what you’re using) so that it can fetch it 
during the migration.



The command to do the migration is,



$ nova-manage vm block_migration  



Adrian


From: openstack-bounces+adrian_f_smith=dell@lists.launchpad.net 
[mailto:openstack-bounces+adrian_f_smith=dell@lists.launchpad.net] On 
Behalf Of DeadSun
Sent: Tuesday, November 01, 2011 9:07 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] how to configure nova block_migration

Because I have no shared storage, I want to know how nova uses KVM block
migration and how to configure it.

--
Without detachment one cannot clarify one's aspirations; without tranquility
one cannot reach far.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] how to configure nova block_migration

2011-11-01 Thread DeadSun
Because I have no shared storage, I want to know how nova uses KVM block
migration and how to configure it.

-- 
Without detachment one cannot clarify one's aspirations; without tranquility
one cannot reach far.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Which nova scheduler for different hardware sizes?

2011-11-01 Thread Christian Wittwer
Lorin,
Thanks for your reply. Well, the least cost scheduler with these cost
functions looks interesting.
Unfortunately there is not much documentation about it. Can somebody give me
an example of how to switch to that scheduler using the memory cost function
which already exists?
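
For what it's worth, here is a sketch of the kind of custom cost function
Lorin mentions below; the (hostname, capabilities) argument shape, the
capability key names, and the flags in the comments are assumptions from a
Diablo-era tree, so please verify them against nova/scheduler/least_cost.py
and your nova.conf before relying on them:

# Sketch only. Assumed Diablo-era wiring (verify in your tree):
#   --scheduler_driver=nova.scheduler.least_cost.LeastCostScheduler
#   --least_cost_functions=mymodule.core_fill_first_cost_fn
# The built-in compute_fill_first_cost_fn only weighs free RAM; this one also
# accounts for how many of a host's cores are already committed, which matters
# when hosts range from 2 cores / 4GB up to 16 cores / 64GB.

def core_fill_first_cost_fn(host_info):
    hostname, capabilities = host_info            # assumed (host, caps) pair
    total_cores = float(capabilities.get('vcpus', 1) or 1)
    used_cores = float(capabilities.get('vcpus_used', 0))
    # The least-cost scheduler prefers the lowest-cost host, so returning the
    # committed-core fraction favours the host with the most spare capacity
    # relative to its size instead of treating all hosts as equal.
    return used_cores / total_cores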

Cheers,
Christian

2011/10/24 Lorin Hochstein :
> Christian:
> You could use the least cost scheduler, but I think you'd have to write your
> own cost function to take into account the different number of cores.
> Looking at the source, the only cost function it comes with only takes into
> account the amount of memory that's free, not loading in terms of total
> physical cores and allocated virtual cores. (We use a custom scheduler at
> our site, so I don't have any firsthand experience with the least-cost
> scheduler).
> Lorin
> --
> Lorin Hochstein, Computer Scientist
> USC Information Sciences Institute
> 703.812.3710
> http://www.east.isi.edu/~lorin
>
>
>
> On Oct 22, 2011, at 3:17 AM, Christian Wittwer wrote:
>
> I'm planning to build a openstack nova installation with older
> hardware. These servers obviously doesn't have the same hardware
> configuration like memory and cores.
> It ranges from 2 core and 4GB memory to 16 core and 64GB memory. I
> know that there are different scheduler, but I'm not sure which one to
> choose.
> The simple scheduler tries to find the least used host, but the amount
> of used cores per host (max_cores) is a constant, which doesn't work
> for me.
> Maybe the least cost scheduler would be the right one? But I'm not
> sure, because I did not find any documenation about how to use it.
>
> Cheers,
> Christian
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp