Re: [Openstack] Cloud Computing StackExchange site proposal

2011-11-29 Thread Thierry Carrez
Stefano Maffulli wrote:
 Leaving aside naming the tools, what would be the most important
 features to have? Here is my personal list, in no particular order:
 
 * good search engine
 * ability to promote question and best answer to FAQ
 * good looking and nice to use
 * custom domain (like ask.openstack.org)
 * layout customizable, to give it the openstack.org look
 * use Launchpad login
 * possibility to turn question into bug report (nice to have)
 
 anything else? Please focus on features; we'll shop around for tools at a
 later stage.

* Tagging
* Use of a reputation system to encourage participation and get a better
handle on authoritative answers

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Reminder: OpenStack team meeting - 21:00 UTC

2011-11-29 Thread Thierry Carrez
Hello everyone,

Our general meeting will take place at 21:00 UTC this Tuesday in
#openstack-meeting on IRC. PTLs, if you can't make it, please name a
substitute on [2].

We will look at general progress towards essex-2, improving visibility
into the global Essex plans, and the status of the client projects'
inclusion and the horizon repo split (if any).

You can doublecheck what 21:00 UTC means for your timezone at [1]:
[1] http://www.timeanddate.com/worldclock/fixedtime.html?iso=2029T21

See the meeting agenda, and edit the wiki to add new topics for discussion:
[2] http://wiki.openstack.org/Meetings/TeamMeeting

Cheers,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



[Openstack] Database stuff

2011-11-29 Thread Soren Hansen
Hi, guys.

Gosh, this turned out to be long. Sorry about that.

I'm adding some tests for the DB api, and I've stumbled across something
that we should probably discuss.

First of all, most (if not all) of the various *_create methods we have
return quite amputated objects. Any attempt to access related objects
will fail with the much too familiar:

   DetachedInstanceError: Parent instance <Whatever at 0x4f5c8d0> is not
   bound to a Session; lazy load operation of attribute 'other_things'
   cannot proceed

Also, with the SQLAlchemy driver, this test would pass:

    network = db.network_create(ctxt, {})
    network = db.network_get(ctxt, network['id'])

    instance = db.instance_create(ctxt, {})
    self.assertEquals(len(network['virtual_interfaces']), 0)
    db.virtual_interface_create(ctxt, {'network_id': network['id'],
                                       'instance_id': instance['id']})

    self.assertEquals(len(network['virtual_interfaces']), 0)
    network = db.network_get(ctxt, network['id'])
    self.assertEquals(len(network['virtual_interfaces']), 1)

I create a network, pull it out again (as per my comment above), verify
that it has no virtual_interfaces related to it, create a virtual
interface in this network, and check the network's virtual_interfaces
key, finding that it still has length 0. Reloading the network now
reveals the new virtual interface.

SQLAlchemy does support looking these things up on the fly. In fact,
AFAIK, this is its default behaviour. We just override it with
joinedload options, because we don't use scoped sessions.

My fake db driver looks stuff like this up on the fly (so the
assertEquals after the virtual_interface_create will fail with that db
driver).

So my question is this: Should this be

a) looked up on the fly,
b) looked up on first key access and then cached,
c) looked up when the parent object is loaded and then never again,
d) or up to the driver author?

Or should we do away with this stuff altogether? I.e. no more looking up
related objects by way of __getitem__ lookups, and instead only allow
lookups through db methods. So, instead of
network['virtual_interfaces'], you'd always do
db.virtual_interfaces_get_by_network(ctxt, network['id']).  Let's call
this option e).
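To make option e) concrete, here is a minimal runnable sketch of that style, using an in-memory stand-in for the db driver (the implementation details are illustrative, not the actual nova code):

```python
# In-memory stand-in for the db driver; implementation is illustrative.
_networks, _vifs = {}, {}

def network_create(ctxt, values):
    net = dict(values, id=len(_networks) + 1)
    _networks[net['id']] = net
    return dict(net)  # a plain dict: no lazy relations attached

def virtual_interface_create(ctxt, values):
    vif = dict(values, id=len(_vifs) + 1)
    _vifs[vif['id']] = vif
    return dict(vif)

def virtual_interfaces_get_by_network(ctxt, network_id):
    # Under option e), this explicit call is the ONLY way to reach
    # related objects, and the result is always fresh.
    return [dict(v) for v in _vifs.values() if v['network_id'] == network_id]

ctxt = object()
network = network_create(ctxt, {})
virtual_interface_create(ctxt, {'network_id': network['id']})

# network carries no 'virtual_interfaces' key at all, so staleness is
# impossible; consumers must ask the db explicitly.
vifs = virtual_interfaces_get_by_network(ctxt, network['id'])
assert len(vifs) == 1
```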

I'm pretty undecided myself. If we go with option e) it becomes clear to
consumers of the DB api when they're pulling out fresh stuff from the DB
and when they're reusing potentially old results.  Explicit is better
than implicit, but it'll take quite a bit of work to change this.

If we go with one of options a) through d), my order of preference is
(from most to least preferred): a), d), c), b).

There's value in having a right way and a wrong way to do this. If it's
either-or, it makes testing (as in validation) more awkward. I'd say
it's always possible to do on-the-fly lookups. Overriding __getitem__
and fetching fresh results is pretty simple, and, as mentioned earlier,
I believe this is SQLAlchemy's default behaviour (somebody please
correct me if I'm wrong). Forcing an arbitrary ORM to replicate the
behaviour of b) and c) could be incredibly awkward, and c) is also
complicated because there might be reference loops involved. Also,
reviewing correct use of something where the need for reloads depends on
previous use of your db objects (which might itself be conditional or
happen in called methods) sounds like no fun at all.  With d) it's
pretty straightforward: Do you want to be sure to have fresh responses?
Then reload the object.  Otherwise, behaviour is undefined. It's
straightforward to explain and review.  Option e) is also easy to
explain and do reviews for, btw.
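For illustration, the on-the-fly __getitem__ override described above could look roughly like this (a toy example with made-up names, not the actual SQLAlchemy or nova code):

```python
# Toy model of option a): a dict-like row whose related-object keys are
# resolved with a fresh lookup on every access. Names are illustrative.
class LiveRow(dict):
    def __init__(self, data, relations):
        super().__init__(data)
        self._relations = relations  # key -> zero-argument loader

    def __getitem__(self, key):
        if key in self._relations:
            return self._relations[key]()  # fetched fresh, never cached
        return super().__getitem__(key)

vif_store = []  # stands in for the virtual_interfaces table
network = LiveRow({'id': 1},
                  {'virtual_interfaces': lambda: list(vif_store)})

assert len(network['virtual_interfaces']) == 0
vif_store.append({'id': 1, 'network_id': 1})
# No reload needed: the new row is visible immediately.
assert len(network['virtual_interfaces']) == 1
```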

DB objects with options a) through d) will have trouble making it
through a serializer. After dict serialisation, the object isn't going
to change, unresolved related objects will not be resolved, and
prepopulating everything prior to serialisation is out of the question
due to the potential for loops and very big sets of related objects (and
their related objects and their related objects' related objects,
etc.). I think it would be valuable to not have to think a whole lot
about whether a particular db-like object is a real db object or
whether it came in as a JSON object or over the message bus or whatever.
Only option e) will give us that, AFAICS.

It seems I've talked myself into preferring option e). It's too much
work to do on my own, though, and it's going to be disruptive, so we
need to do it real soon. I think it'll be worth it, though.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



Re: [Openstack] Database stuff

2011-11-29 Thread Jason Kölker
On Tue, 2011-11-29 at 16:20 +0100, Soren Hansen wrote:
 
 It seems I've talked myself into preferring option e). It's too much
 work to do on my own, though, and it's going to be disruptive, so we
 need to do it real soon. I think it'll be worth it, though. 

I agree. This will also make it easier to swap out the storage with
other Non-SQLAlchemy datastores *cough* ElasticSearch *cough*.

On the network side of things, I've spent the last few weeks untying
network FKs and joinloads for the untie-nova-network-models blueprint.
Those will be merge prop'd as soon as I get some merge conflicts squared
away.

Also, Trey is working on standardized Network Model objects that are
really just a dict++, so serialization is easy. Our plan in the network
fiefdom is to eventually pass around only these objects, to get us off
relying on the underlying SQLAlchemy objects.
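As a rough sketch of what a "dict++" model might look like (the class name and fields here are hypothetical, not the actual blueprint code): a plain dict subclass that adds behavior but holds no ORM state, so it passes through json.dumps, or the message bus, unchanged.

```python
import json

# Hypothetical "dict++" model: behavior on top of a plain dict, no ORM state.
class NetworkModel(dict):
    @property
    def cidr(self):
        return self.get('cidr')

    def add_vif(self, vif):
        self.setdefault('virtual_interfaces', []).append(dict(vif))

net = NetworkModel(id=1, cidr='10.0.0.0/24')
net.add_vif({'id': 7, 'address': 'aa:bb:cc:dd:ee:ff'})

payload = json.dumps(net)               # serializes like any dict
restored = NetworkModel(json.loads(payload))
assert restored['virtual_interfaces'][0]['address'] == 'aa:bb:cc:dd:ee:ff'
```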

Happy Hacking!

7-11




Re: [Openstack] Database stuff

2011-11-29 Thread Brian Waldon
I think option e is our best bet. It's the *only* option if we want to 
efficiently separate our services (as Jason has pointed out).

Waldon


On Nov 29, 2011, at 10:52 AM, Devin Carlen wrote:

 Hey Soren,
 
 On Nov 29, 2011, at 7:20 AM, Soren Hansen wrote:
 
 SQLAlchemy does support looking these things up on the fly. In fact,
 AFAIK, this is its default behaviour. We just override it with
 joinedload options, because we don't use scoped sessions.
 
 My fake db driver looks stuff like this up on the fly (so the
 assertEquals after the virtual_interface_create will fail with that db
 driver).
 
 So my question is this: Should this be
 
 a) looked up on the fly,
 b) looked up on first key access and then cached,
 c) looked up when the parent object is loaded and then never again,
 d) or up to the driver author?
 
 Or should we do away with this stuff altogether? I.e. no more looking up
 related objects by way of __getitem__ lookups, and instead only allow
 lookups through db methods. So, instead of
 network['virtual_interfaces'], you'd always do
 db.virtual_interfaces_get_by_network(ctxt, network['id']).  Let's call
 this option e).
 
 I think a simpler expectation of what the data objects should be capable of 
 enables a much wider variety of possible implementations.
 
 The main advantage to option e) is that it is simple both from an 
 implementation and from a debugging point of view.  You treat the entire db 
 layer as though it's just dumb dictionaries, and then you enable a wider 
 range of implementations.  Sure, SQLAlchemy supports lookups on __getitem__, 
 but maybe other potential implementations won't.
 
 I'm pretty undecided myself. If we go with option e) it becomes clear to
 consumers of the DB api when they're pulling out fresh stuff from the DB
 and when they're reusing potentially old results.  Explicit is better
 than implicit, but it'll take quite a bit of work to change this.
 
 Well, this is the way nova *used* to work.  I'm not exactly sure when and if 
 that changed.
 
 


Re: [Openstack] Database stuff

2011-11-29 Thread Mark McLoughlin
On Tue, 2011-11-29 at 16:20 +0100, Soren Hansen wrote:
 It seems I've talked myself into preferring option e). It's too much
 work to do on my own, though, and it's going to be disruptive, so we
 need to do it real soon. I think it'll be worth it, though.

(e) sounds right to me. But hopefully doesn't need to be so disruptive -
add the new APIs, gradually port the codebase to them and then delete
the old APIs.

Cheers,
Mark.




Re: [Openstack] Database stuff

2011-11-29 Thread Aaron Lee
For this I think we need to separate the models from the queries. A query 
method should be able to populate as many of the models as needed to return the 
data collected. I also think separating the queries from the models themselves 
will help us make the storage engine replaceable, and will allow us to extract 
common behavior into the models themselves instead of peppered throughout the 
codebase.
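A minimal sketch of that separation, assuming a hypothetical driver interface and model class (all names are illustrative): models are dumb containers carrying the shared behavior, and queries live behind a driver, so the storage engine is replaceable.

```python
# Models are dumb containers with shared behavior; queries live behind a
# driver interface, so the storage engine can be swapped out.
class Instance(dict):
    def is_active(self):
        return self.get('state') == 'active'

class InMemoryDriver:
    """One possible engine; a SQL or non-SQL driver could replace it."""
    def __init__(self, rows):
        self._rows = rows

    def instances_get_all(self):
        # The query layer decides how much data to populate into each model.
        return [Instance(r) for r in self._rows]

driver = InMemoryDriver([{'id': 1, 'state': 'active'},
                         {'id': 2, 'state': 'shutoff'}])
active = [i for i in driver.instances_get_all() if i.is_active()]
assert [i['id'] for i in active] == [1]
```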

Aaron

On Nov 29, 2011, at 10:31 AM, Chris Behrens wrote:

 e) sounds good as long as we don't remove the ability to joinload up front.  
 Sometimes we need to join.  Sometimes we can be lazy.  With 'list instances 
 with details', we need to get all instances from the DB and join network 
 information in a single DB query.  Doing 1 + (n*x) DB queries to support a 
 'nova list' will not be acceptable when it can be done in 1 query today.  
 That means we'll need instance['virtual_interfaces'] (or similar) to work in 
 cases where we 'join up front'.  In most cases when we're pulling instances 
 from the DB, we don't need to join anything.  It'd be more efficient to not 
 join network information for those calls.  This is where e) is fine.
 
 - Chris
 

Re: [Openstack] Database stuff

2011-11-29 Thread Vishvananda Ishaya
e) is the right solution imho.  The only reason joinedloads slipped in is for 
efficiency reasons.

In an ideal world the solution would be:

1) (explicitness) Every object or list of related objects is retrieved with an 
explicit call:

    instance = db.instance_get(id)
    ifaces = db.interfaces_get_by_instance(id)
    for iface in ifaces:
        ip = db.fixed_ip_get_by_interface(iface['id'])

2) (efficiency) Queries are perfectly efficient, and all joins that will be 
used are made at once. So the above would be a single db query that joins all 
instances, ifaces, and ips.

Unless we're doing source code analysis to generate our queries, we're 
probably going to have to make some tradeoffs to get as much efficiency and 
explicitness as possible.

Brainstorming: perhaps we could use a hinting/caching mechanism that the 
backend could support, something like db.interfaces_get_by_instance(id, 
hint='fixed_ip'), which states that you are about to make another db request 
to get the fixed ips, so the backend could prejoin and cache the results. 
Then the next request could be db.fixed_ip_get_by_interface(iface['id'], 
cached=True) or some such.

I would like to move towards 1) but I think we really have to solve 2) or we 
will be smashing the database with too many queries.

Vish


[Openstack] Proposal for Lorin Hochstein to join nova-core

2011-11-29 Thread Vishvananda Ishaya
Lorin has been a great contributor to Nova for a long time and has been 
participating heavily in reviews over the past couple of months.  I think he 
would be a great addition to nova-core.

Vish


Re: [Openstack] OpenStack Security Group Extension Prohibits same group source in rules

2011-11-29 Thread Vishvananda Ishaya
+1.  This sounds like a bug. FYI, there are some issues related to adding 
source group rules that specify ports, which need to be fixed.  We have also 
discussed whether the same group should be allow-all by default; in EC2 it 
is.  I personally like having it explicit like this, but I don't know if it 
is confusing to people coming from other clouds.

Vish

On Nov 28, 2011, at 8:32 PM, Hookway, Ray wrote:

 I would like to be able to create a security group rule which allows 
 communication between VMs within the group. Using the EC2 API this can be 
 done as follows:
  
 rjh@cloud1:~$ euca-describe-groups
 GROUP rjhproject  default default
 PERMISSION  rjhproject  default ALLOWS  tcp   2222FROM  CIDR  
 0.0.0.0/0
 PERMISSION  rjhproject  default ALLOWS  icmp  -1-1FROM  CIDR  
 0.0.0.0/0
 PERMISSION  rjhproject  default ALLOWS  tcp   8080GRPNAME 
 default
 rjh@cloud1:~$ euca-add-group -d 'permissive group' rjhgroup
 GROUP rjhgrouppermissive group
 rjh@cloud1:~$ euca-authorize -o rjhgroup rjhgroup
 rjhgroup rjhgroup None tcp None None 0.0.0.0/0
 GROUP rjhgroup
 PERMISSION  rjhgroupALLOWS  tcp   GRPNAME rjhgroupFROM  CIDR  
 0.0.0.0/0
 rjh@cloud1:~$ euca-describe-groups
 GROUP rjhproject  default default
 PERMISSION  rjhproject  default ALLOWS  tcp   2222FROM  CIDR  
 0.0.0.0/0
 PERMISSION  rjhproject  default ALLOWS  icmp  -1-1FROM  CIDR  
 0.0.0.0/0
 PERMISSION  rjhproject  default ALLOWS  tcp   8080GRPNAME 
 default
 GROUP rjhproject  rjhgrouppermissive group
 PERMISSION  rjhproject  rjhgroupALLOWS  icmp  -1-1GRPNAME 
 rjhgroup
 PERMISSION  rjhproject  rjhgroupALLOWS  tcp   1 65535 GRPNAME 
 rjhgroup
 PERMISSION  rjhproject  rjhgroupALLOWS  udp   1 65536 GRPNAME 
 rjhgroup
  
 So, it looks like security groups support the notion of a group with rules 
 that mention the group containing the rule as a source. However, the 
 security_groups.py extension contains an explicit check that the source group 
 id is not the same as the parent group id. Why is this done? I would like to 
 remove this restriction allowing rules to be created similar to the one 
 created above using EC2. Any objections?
  
 -Ray Hookway (rjh)
  


Re: [Openstack] Cloud Computing StackExchange site proposal

2011-11-29 Thread Michael Pittaro
On Tue, Nov 29, 2011 at 2:35 AM, Thierry Carrez thie...@openstack.org wrote:
 Stefano Maffulli wrote:
 Leaving aside naming the tools, what would be the most important
 features to have? Here is my personal list, in no particular order:

 * good search engine
 * ability to promote question and best answer to FAQ
 * good looking and nice to use
 * custom domain (like ask.openstack.org)
 * layout customizable, to give it the openstack.org look
 * use Launchpad login
 * possibility to turn question into bug report (nice to have)

 anything else? Please focus on features; we'll shop around for tools at a
 later stage.

 * Tagging
 * Use of a reputation system to encourage participation and get a better
 handle on authoritative answers


 * a method or process for flagging topics which should migrate into
documentation and/or the wiki



Re: [Openstack] boot from ISO

2011-11-29 Thread Michaël Van de Borne

Hi Donal, hi all,

I'm trying to test the Boot From ISO feature, so I've set up a XenServer 
host and installed an Ubuntu 11.10 PV DomU on it.


Then I used the following commands but, as you can see in the attached 
nova-compute log excerpt, there was a problem.


glance add name=fedora_iso disk_format=iso < ../Fedora-16-x86_64-Live-LXDE.iso

ID: 4
nova boot test_iso --flavor 2 --image 4

I can see the ISO images using nova list but not using glance index.

The error seems to be: 'Cannot find SR of content-type ISO'. However, 
I've set up an NFS ISO Library using XenCenter, so there is an actual 
ISO content-typed SR. How do I tell OpenStack to use this SR for the ISO 
images I post using glance?


Any clue? I feel I'm rather close to making it work.


thanks,

michaël


Michaël Van de Borne
RD Engineer, SOA team, CETIC
Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli
www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi


Le 22/11/11 00:18, Donal Lafferty a écrit :


Hi Michaël,

Boot from ISO should be ISO image agnostic.  The feature overcomes 
restrictions placed on the distribution of modified Windows images.  
People can use their ISO instead, but they may still need to use 
dedicated hardware.


You should have no problem with a Linux distribution.

However, I wrote it for XenAPI, so we need someone to duplicate the 
work for KVM and VMWare.


DL

*From:*openstack-bounces+donal.lafferty=citrix@lists.launchpad.net 
[mailto:openstack-bounces+donal.lafferty=citrix@lists.launchpad.net] 
*On Behalf Of *Michaël Van de Borne

*Sent:* 21 November 2011 17:28
*To:* openstack@lists.launchpad.net
*Subject:* Re: [Openstack] boot from ISO

up? anybody?


Le 14/11/11 14:44, Michaël Van de Borne a écrit :

Hi all,

I'm very interested in the Boot From ISO feature described here: 
http://wiki.openstack.org/bootFromISO


In a few words, it's about the ability to boot a VM from the CDROM 
with an ISO image attached, with a blank hard disk also attached on 
which to install the OS files.


I've got some questions about this:
1. Is the feature available today using a standard Diablo install? 
I've seen the code about this feature is stored under nova/tests and 
glance/tests. Does this mean it isn't finished yet and could only be 
tested under specific conditions? Which ones?
2. the spec talks about a Windows use case. Why just Windows? What 
should I do to test with a Linux distribution?
3. I can see here 
http://bazaar.launchpad.net/%7Ehudson-openstack/nova/trunk/revision/1433?start_revid=1433 
that the Xen hypervisor only has been impacted by the source code 
changes. Are KVM and VMWare planned to be supported in the future? May 
I help/be helped to develop KVM and VMWare support for this 'Boot From 
Iso' feature?


Any help appreciated

thank you,


michaël



--
Michaël Van de Borne
RD Engineer, SOA team, CETIC
Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli
www.cetic.be  http://www.cetic.be, rue des Frères Wright, 29/3, B-6041 
Charleroi




2011-11-29 18:11:41,970 DEBUG nova.rpc [-] received {u'_context_roles': [], u'_context_request_id': u'180d772e-f33d-4d9d-b1bc-e8cf2235cf1d', u'_context_read_deleted': False, u'args': {u'request_spec': {u'num_instances': 1, u'image': {u'status': u'active', u'name': u'fedora_iso', u'deleted': False, u'container_format': u'ovf', u'created_at': u'2011-11-29 17:10:27.140409', u'disk_format': u'iso', u'updated_at': u'2011-11-29 17:10:47.737498', u'properties': {u'min_disk': u'0', u'owner': None, u'min_ram': u'0'}, u'location': u'file:///var/lib/glance/images/4', u'checksum': u'20c39452cac6b1e6966f6b7edc704236', u'is_public': False, u'deleted_at': None, u'id': 4, u'size': 567279616}, u'filter': None, u'instance_type': {u'rxtx_quota': 0, u'flavorid': 2, u'name': u'm1.small', u'deleted': False, u'created_at': None, u'updated_at': None, u'memory_mb': 2048, u'vcpus': 1, u'rxtx_cap': 0, u'extra_specs': {}, u'swap': 0, u'deleted_at': None, u'id': 5, u'local_gb': 20}, u'blob': None, u'instance_properties': {u'vm_state': u'building', u'availability_zone': None, u'ramdisk_id': u'', u'instance_type_id': 5, u'user_data': u'', u'vm_mode': None, u'reservation_id': u'r-40470cnx', u'user_id': u'mb', u'display_description': u'test_iso', u'key_data': None, u'power_state': 0, u'project_id': u'openstack', u'metadata': {}, u'access_ip_v6': None, u'access_ip_v4': None, u'kernel_id': u'', u'key_name': None, u'display_name': u'test_iso', u'config_drive_id': u'', u'local_gb': 20, u'architecture': None, u'locked': False, u'launch_time': u'2011-11-29T17:11:41Z', u'memory_mb': 2048, u'vcpus': 1, u'image_ref': 4, 

Re: [Openstack] Database stuff

2011-11-29 Thread Jay Pipes
On Tue, Nov 29, 2011 at 10:49 AM, Jason Kölker jkoel...@rackspace.com wrote:
 On Tue, 2011-11-29 at 16:20 +0100, Soren Hansen wrote:

 It seems I've talked myself into preferring option e). It's too much
 work to do on my own, though, and it's going to be disruptive, so we
 need to do it real soon. I think it'll be worth it, though.

 I agree. This will also make it easier to swap out the storage with
 other Non-SQLAlchemy datastores *cough* ElasticSearch *cough*.

There's a very good reason this hasn't happened so far: handling
highly relational datasets with a non-relational data store is a bad
idea. In fact, I seem to remember that is exactly how Nova's data
store started out life (*cough* Redis *cough*).

-jay



Re: [Openstack] Database stuff

2011-11-29 Thread Jay Pipes
+10

On Tue, Nov 29, 2011 at 12:55 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
 e) is the right solution imho.  The only reason joinedloads slipped in is for 
 efficiency reasons.

 In an ideal world the solution would be:

 1) (explicitness) Every object or list of related objects is retrieved with 
 an explicit call:
  instance = db.instance_get(id)
  ifaces = db.interfaces_get_by_instance(id)
  for iface in ifaces:
     ip = db.fixed_ip_get_by_interface(iface['id'])
 2) (efficiency) Queries are perfectly efficient and all joins that will be 
 used are made at once.
  So the above would be a single db query that joins all instances ifaces and 
 ips.

 Unless we're doing source code analysis to generate our queries, then we're 
 probably going
 to have to make some tradeoffs to get as much efficiency and explicitness as 
 possible.

 Brainstorming, perhaps we could use a hinting/caching mechanism that the 
 backend could support.
 something like db.interfaces_get_by_instance(id, hint='fixed_ip'), which 
 states that you are about to make
 another db request to get the fixed ips, so the backend could prejoin and 
 cache the results. Then the next
 request could be: db.fixed_ip_get_by_interface(iface['id'], cached=True) or 
 some such.

 I would like to move towards 1) but I think we really have to solve 2) or we 
 will be smashing the database with too many queries.

 Vish
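[Editor's sketch] Vish's hint/cache brainstorm above could look roughly like the following. Everything here is hypothetical illustration — the driver class, the `hint=` and `cached=` parameters, and the returned rows are invented for this sketch and are not existing Nova APIs:

```python
class FakeDriver:
    """Toy backend illustrating the hint/prejoin-and-cache idea."""

    def __init__(self):
        self._cache = {}      # (table, key) -> prefetched rows
        self.query_count = 0  # backend round-trips issued

    def interfaces_get_by_instance(self, instance_id, hint=None):
        # One backend query; if the caller hints that fixed_ips will be
        # requested next, prejoin them now and cache the results.
        self.query_count += 1
        ifaces = [{'id': 1, 'instance_id': instance_id},
                  {'id': 2, 'instance_id': instance_id}]
        if hint == 'fixed_ip':
            for iface in ifaces:
                self._cache[('fixed_ip', iface['id'])] = [
                    {'interface_id': iface['id'],
                     'address': '10.0.0.%d' % iface['id']}]
        return ifaces

    def fixed_ip_get_by_interface(self, iface_id, cached=False):
        # Serve from the prejoined cache when the caller opts in;
        # otherwise fall back to a real backend query.
        if cached and ('fixed_ip', iface_id) in self._cache:
            return self._cache.pop(('fixed_ip', iface_id))
        self.query_count += 1
        return [{'interface_id': iface_id,
                 'address': '10.0.0.%d' % iface_id}]

db = FakeDriver()
ifaces = db.interfaces_get_by_instance(42, hint='fixed_ip')
ips = [db.fixed_ip_get_by_interface(i['id'], cached=True) for i in ifaces]
assert db.query_count == 1  # only the initial, prejoined query hit the backend
```

The calls stay explicit (point 1) while the backend is free to satisfy them with a single joined query (point 2); the cost is that correctness now depends on the hint matching the follow-up calls.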

 On Nov 29, 2011, at 7:20 AM, Soren Hansen wrote:

 Hi, guys.

 Gosh, this turned out to be long. Sorry about that.

 I'm adding some tests for the DB api, and I've stumbled across something
 that we should probably discuss.

 First of all, most (if not all) of the various *_create methods we have,
 return quite amputated objects. Any attempt to access related objects
 will fail with the much too familiar:

   DetachedInstanceError: Parent instance Whatever at 0x4f5c8d0 is not
   bound to a Session; lazy load operation of attribute 'other_things'
   cannot proceed

 Also, with the SQLAlchemy driver, this test would pass:

    network = db.network_create(ctxt, {})
    network = db.network_get(ctxt, network['id'])

    instance = db.instance_create(ctxt, {})
    self.assertEquals(len(network['virtual_interfaces']), 0)
    db.virtual_interface_create(ctxt, {'network_id': network['id'],
                                       'instance_id': instance['id']})

    self.assertEquals(len(network['virtual_interfaces']), 0)
    network = db.network_get(ctxt, network['id'])
    self.assertEquals(len(network['virtual_interfaces']), 1)

 I create a network, pull it out again (as per my comment above), verify
 that it has no virtual_interfaces related to it, create a virtual
 interface in this network, and check the network's virtual_interfaces
 key and find that it still has length 0. Reloading the network now
 reveals the new virtual interface.

 SQLAlchemy does support looking these things up on the fly. In fact,
 AFAIK, this is its default behaviour. We just override it with
 joinedload options, because we don't use scoped sessions.

 My fake db driver looks stuff like this up on the fly (so the
 assertEquals after the virtual_interface_create will fail with that db
 driver).

 So my question is this: Should this be

 a) looked up on the fly,
 b) looked up on first key access and then cached,
 c) looked up when the parent object is loaded and then never again,
 d) or up to the driver author?

 Or should we do away with this stuff altogether? I.e. no more looking up
 related objects by way of __getitem__ lookups, and instead only allow
 lookups through db methods. So, instead of
 network['virtual_interfaces'], you'd always do
 db.virtual_interfaces_get_by_network(ctxt, network['id']).  Let's call
 this option e).

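[Editor's sketch] A minimal toy illustration of option e) — rows carry only their own columns, and every related lookup is a visible call into the db layer. The class and data here are invented for illustration, not the real DB API:

```python
class ExplicitDB:
    """Toy driver for option e): no implicit relation keys on results."""

    def __init__(self):
        self._networks = {1: {'id': 1, 'label': 'net1'}}
        self._vifs = [{'id': 7, 'network_id': 1}]

    def network_get(self, ctxt, network_id):
        # Returns only the network's own columns -- no joined relations.
        return dict(self._networks[network_id])

    def virtual_interfaces_get_by_network(self, ctxt, network_id):
        # Related objects are always fetched through an explicit call.
        return [v for v in self._vifs if v['network_id'] == network_id]

db = ExplicitDB()
net = db.network_get(None, 1)
assert 'virtual_interfaces' not in net          # nothing implicit to go stale
vifs = db.virtual_interfaces_get_by_network(None, net['id'])
assert len(vifs) == 1                           # freshness is explicit
```

With this shape there is no question of whether `net['virtual_interfaces']` is joined, lazy, or stale — the key simply does not exist.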

 I'm pretty undecided myself. If we go with option e) it becomes clear to
 consumers of the DB api when they're pulling out fresh stuff from the DB
 and when they're reusing potentially old results.  Explicit is better
 than implicit, but it'll take quite a bit of work to change this.

 If we go with one of options a) through d), my order of preference is
 (from most to least preferred): a), d), c), b).

 There's value in having a right way and a wrong way to do this. If it's
 either-or, it makes testing (as in validation) more awkward. I'd say
 it's always possible to do on-the-fly lookups. Overriding __getitem__
 and fetching fresh results is pretty simple, and, as mentioned earlier,
 I believe this is SQLAlchemy's default behaviour (somebody please
 correct me if I'm wrong). Forcing an arbitrary ORM to replicate the
 behaviour of b) and c) could be incredibly awkward, and c) is also
 complicated because there might be reference loops involved. Also,
 reviewing correct use of something where the need for reloads depends on
 previous use of your db objects (which might itself be conditional or
 happen in called methods) sounds like no fun at all.  With d) it's
 pretty straightforward: Do you want to be sure to have fresh 

Re: [Openstack] Database stuff

2011-11-29 Thread Duncan McGreggor
On 29 Nov 2011 - 17:54, Aaron Lee wrote:
 For this I think we need to separate the models from the queries.

+1

 A query method should be able to populate as many of the models as
 needed to return the data collected. I also think separating the
 queries from the models themselves will help us make the storage
 engine replaceable, and will allow us to extract common behavior into
 the models themselves instead of peppered throughout the codebase.

Couldn't agree more.

d

 Aaron

 On Nov 29, 2011, at 10:31 AM, Chris Behrens wrote:

  e) sounds good as long as we don't remove the ability to joinload up
  front.  Sometimes we need to join.  Sometimes we can be lazy.  With
  'list instances with details', we need to get all instances from the
  DB and join network information in a single DB query.  Doing 1 +
  (n*x) DB queries to support a 'nova list' will not be acceptable
  when it can be done in 1 query today.  That means we'll need
  instance['virtual_interfaces'] (or similar) to work in cases where
  we 'join up front'.  Most cases when we're pulling instances from
  the DB, we don't need to join anything.  It'd be more efficient to
  not join network information for those calls.  This is where e) is
  fine.
 
  - Chris
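[Editor's sketch] The 1 + (n*x) cost Chris describes can be made concrete with a toy query counter. Method names and data are hypothetical, chosen only to mirror the discussion:

```python
class CountingDB:
    """Toy driver that counts backend round-trips."""

    def __init__(self, n):
        self.queries = 0
        self._instances = [{'id': i} for i in range(n)]

    def instance_get_all(self):
        self.queries += 1
        return list(self._instances)

    def vifs_get_by_instance(self, instance_id):
        # One extra round-trip per instance.
        self.queries += 1
        return [{'instance_id': instance_id}]

    def instance_get_all_joined(self):
        # One query that joins the network info up front.
        self.queries += 1
        return [dict(i, virtual_interfaces=[{'instance_id': i['id']}])
                for i in self._instances]

# Lazy, per-row lookups: 1 + n queries for a 'list instances with details'.
db = CountingDB(100)
for inst in db.instance_get_all():
    inst['virtual_interfaces'] = db.vifs_get_by_instance(inst['id'])
assert db.queries == 101

# Joined up front: a single query.
db2 = CountingDB(100)
db2.instance_get_all_joined()
assert db2.queries == 1
```

This is why e) alone is not enough: the API must still let callers express "join this up front" where the per-row cost would be prohibitive.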
 
  On Nov 29, 2011, at 9:20 AM, Soren Hansen wrote:
 
  Hi, guys.
 
  Gosh, this turned out to be long. Sorry about that.
 
  I'm adding some tests for the DB api, and I've stumbled across something
  that we should probably discuss.
 
  First of all, most (if not all) of the various *_create methods we have,
  return quite amputated objects. Any attempt to access related objects
  will fail with the much too familiar:
 
   DetachedInstanceError: Parent instance Whatever at 0x4f5c8d0 is not
   bound to a Session; lazy load operation of attribute 'other_things'
   cannot proceed
 
  Also, with the SQLAlchemy driver, this test would pass:
 
network = db.network_create(ctxt, {})
network = db.network_get(ctxt, network['id'])
 
instance = db.instance_create(ctxt, {})
self.assertEquals(len(network['virtual_interfaces']), 0)
db.virtual_interface_create(ctxt, {'network_id': network['id'],
    'instance_id': instance['id']})
 
self.assertEquals(len(network['virtual_interfaces']), 0)
network = db.network_get(ctxt, network['id'])
self.assertEquals(len(network['virtual_interfaces']), 1)
 
  I create a network, pull it out again (as per my comment above), verify
  that it has no virtual_interfaces related to it, create a virtual
  interface in this network, and check the network's virtual_interfaces
   key and find that it still has length 0. Reloading the network now
  reveals the new virtual interface.
 
  SQLAlchemy does support looking these things up on the fly. In fact,
  AFAIK, this is its default behaviour. We just override it with
  joinedload options, because we don't use scoped sessions.
 
  My fake db driver looks stuff like this up on the fly (so the
  assertEquals after the virtual_interface_create will fail with that db
  driver).
 
  So my question is this: Should this be
 
  a) looked up on the fly,
  b) looked up on first key access and then cached,
  c) looked up when the parent object is loaded and then never again,
  d) or up to the driver author?
 
  Or should we do away with this stuff altogether? I.e. no more looking up
  related objects by way of __getitem__ lookups, and instead only allow
  lookups through db methods. So, instead of
  network['virtual_interfaces'], you'd always do
  db.virtual_interfaces_get_by_network(ctxt, network['id']).  Let's call
  this option e).
 
  I'm pretty undecided myself. If we go with option e) it becomes clear to
  consumers of the DB api when they're pulling out fresh stuff from the DB
  and when they're reusing potentially old results.  Explicit is better
  than implicit, but it'll take quite a bit of work to change this.
 
  If we go with one of options a) through d), my order of preference is
  (from most to least preferred): a), d), c), b).
 
  There's value in having a right way and a wrong way to do this. If it's
  either-or, it makes testing (as in validation) more awkward. I'd say
  it's always possible to do on-the-fly lookups. Overriding __getitem__
  and fetching fresh results is pretty simple, and, as mentioned earlier,
  I believe this is SQLAlchemy's default behaviour (somebody please
  correct me if I'm wrong). Forcing an arbitrary ORM to replicate the
  behaviour of b) and c) could be incredibly awkward, and c) is also
  complicated because there might be reference loops involved. Also,
  reviewing correct use of something where the need for reloads depends on
  previous use of your db objects (which might itself be conditional or
  happen in called methods) sounds like no fun at all.  With d) it's
  pretty straightforward: Do you want to be sure to have fresh responses?
  Then reload the object.  Otherwise, behaviour is undefined. It's
  straightforward to 

Re: [Openstack] Database stuff

2011-11-29 Thread Michael Pittaro
I like e).

I know it's heresy in some circles, but this approach can also support
the use of SqlAlchemy core, without the use of the SQLAlchemy ORM.
It's more work in some respects, but does allow writing very efficient
queries.

mike

On Tue, Nov 29, 2011 at 9:55 AM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
 e) is the right solution imho.  The only reason joinedloads slipped in is for 
 efficiency reasons.

 In an ideal world the solution would be:

 1) (explicitness) Every object or list of related objects is retrieved with 
 an explicit call:
  instance = db.instance_get(id)
  ifaces = db.interfaces_get_by_instance(id)
  for iface in ifaces:
     ip = db.fixed_ip_get_by_interface(iface['id'])
 2) (efficiency) Queries are perfectly efficient and all joins that will be 
 used are made at once.
  So the above would be a single db query that joins all instances ifaces and 
 ips.

 Unless we're doing source code analysis to generate our queries, then we're 
 probably going
 to have to make some tradeoffs to get as much efficiency and explicitness as 
 possible.

 Brainstorming, perhaps we could use a hinting/caching mechanism that the 
 backend could support.
 something like db.interfaces_get_by_instance(id, hint='fixed_ip'), which 
 states that you are about to make
 another db request to get the fixed ips, so the backend could prejoin and 
 cache the results. Then the next
 request could be: db.fixed_ip_get_by_interface(iface['id'], cached=True) or 
 some such.

 I would like to move towards 1) but I think we really have to solve 2) or we 
 will be smashing the database with too many queries.

 Vish

 On Nov 29, 2011, at 7:20 AM, Soren Hansen wrote:

 Hi, guys.

 Gosh, this turned out to be long. Sorry about that.

 I'm adding some tests for the DB api, and I've stumbled across something
 that we should probably discuss.

 First of all, most (if not all) of the various *_create methods we have,
 return quite amputated objects. Any attempt to access related objects
 will fail with the much too familiar:

   DetachedInstanceError: Parent instance Whatever at 0x4f5c8d0 is not
   bound to a Session; lazy load operation of attribute 'other_things'
   cannot proceed

 Also, with the SQLAlchemy driver, this test would pass:

    network = db.network_create(ctxt, {})
    network = db.network_get(ctxt, network['id'])

    instance = db.instance_create(ctxt, {})
    self.assertEquals(len(network['virtual_interfaces']), 0)
    db.virtual_interface_create(ctxt, {'network_id': network['id'],
                                        'instance_id': instance['id']})

    self.assertEquals(len(network['virtual_interfaces']), 0)
    network = db.network_get(ctxt, network['id'])
    self.assertEquals(len(network['virtual_interfaces']), 1)

 I create a network, pull it out again (as per my comment above), verify
 that it has no virtual_interfaces related to it, create a virtual
 interface in this network, and check the network's virtual_interfaces
  key and find that it still has length 0. Reloading the network now
 reveals the new virtual interface.

 SQLAlchemy does support looking these things up on the fly. In fact,
 AFAIK, this is its default behaviour. We just override it with
 joinedload options, because we don't use scoped sessions.

 My fake db driver looks stuff like this up on the fly (so the
 assertEquals after the virtual_interface_create will fail with that db
 driver).

 So my question is this: Should this be

 a) looked up on the fly,
 b) looked up on first key access and then cached,
 c) looked up when the parent object is loaded and then never again,
 d) or up to the driver author?

 Or should we do away with this stuff altogether? I.e. no more looking up
 related objects by way of __getitem__ lookups, and instead only allow
 lookups through db methods. So, instead of
 network['virtual_interfaces'], you'd always do
 db.virtual_interfaces_get_by_network(ctxt, network['id']).  Let's call
 this option e).

 I'm pretty undecided myself. If we go with option e) it becomes clear to
 consumers of the DB api when they're pulling out fresh stuff from the DB
 and when they're reusing potentially old results.  Explicit is better
 than implicit, but it'll take quite a bit of work to change this.

 If we go with one of options a) through d), my order of preference is
 (from most to least preferred): a), d), c), b).

 There's value in having a right way and a wrong way to do this. If it's
 either-or, it makes testing (as in validation) more awkward. I'd say
 it's always possible to do on-the-fly lookups. Overriding __getitem__
 and fetching fresh results is pretty simple, and, as mentioned earlier,
 I believe this is SQLAlchemy's default behaviour (somebody please
 correct me if I'm wrong). Forcing an arbitrary ORM to replicate the
 behaviour of b) and c) could be incredibly awkward, and c) is also
 complicated because there might be reference loops involved. Also,
 reviewing correct use of something where the need 

Re: [Openstack] Proposal for Lorin Hochstein to join nova-core

2011-11-29 Thread Brian Waldon
+1

On Nov 29, 2011, at 1:03 PM, Vishvananda Ishaya wrote:

 Lorin has been a great contributor to Nova for a long time and has been 
 participating heavily in reviews over the past couple of months.  I think he 
 would be a great addition to nova-core.
 
 Vish


Re: [Openstack] Proposal for Lorin Hochstein to join nova-core

2011-11-29 Thread Brian Schott
+1
-
Brian Schott, CTO
Nimbis Services, Inc.
brian.sch...@nimbisservices.com
ph: 443-274-6064  fx: 443-274-6060







On Nov 29, 2011, at 1:03 PM, Vishvananda Ishaya wrote:

 Lorin has been a great contributor to Nova for a long time and has been 
 participating heavily in reviews over the past couple of months.  I think he 
 would be a great addition to nova-core.
 
 Vish


Re: [Openstack] Database stuff

2011-11-29 Thread Chris Behrens
+1 on the thoughts here.  Exactly what I meant by my reply.  Not sure what the 
interface should look like for #2, but we must be able to do it somehow.

- Chris

On Nov 29, 2011, at 11:55 AM, Vishvananda Ishaya wrote:

 e) is the right solution imho.  The only reason joinedloads slipped in is for 
 efficiency reasons.
 
 In an ideal world the solution would be:
 
 1) (explicitness) Every object or list of related objects is retrieved with 
 an explicit call:
 instance = db.instance_get(id)
 ifaces = db.interfaces_get_by_instance(id)
 for iface in ifaces:
ip = db.fixed_ip_get_by_interface(iface['id'])
 2) (efficiency) Queries are perfectly efficient and all joins that will be 
 used are made at once.
 So the above would be a single db query that joins all instances ifaces and 
 ips.
 
 Unless we're doing source code analysis to generate our queries, then we're 
 probably going
 to have to make some tradeoffs to get as much efficiency and explicitness as 
 possible.
 
 Brainstorming, perhaps we could use a hinting/caching mechanism that the 
 backend could support.
 something like db.interfaces_get_by_instance(id, hint='fixed_ip'), which 
 states that you are about to make
 another db request to get the fixed ips, so the backend could prejoin and 
 cache the results. Then the next
 request could be: db.fixed_ip_get_by_interface(iface['id'], cached=True) or 
 some such.
 
 I would like to move towards 1) but I think we really have to solve 2) or we 
 will be smashing the database with too many queries.
 
 Vish
 
 On Nov 29, 2011, at 7:20 AM, Soren Hansen wrote:
 
 Hi, guys.
 
 Gosh, this turned out to be long. Sorry about that.
 
 I'm adding some tests for the DB api, and I've stumbled across something
 that we should probably discuss.
 
 First of all, most (if not all) of the various *_create methods we have,
 return quite amputated objects. Any attempt to access related objects
 will fail with the much too familiar:
 
 DetachedInstanceError: Parent instance Whatever at 0x4f5c8d0 is not
 bound to a Session; lazy load operation of attribute 'other_things'
 cannot proceed
 
 Also, with the SQLAlchemy driver, this test would pass:
 
  network = db.network_create(ctxt, {})
  network = db.network_get(ctxt, network['id'])
 
  instance = db.instance_create(ctxt, {})
  self.assertEquals(len(network['virtual_interfaces']), 0)
  db.virtual_interface_create(ctxt, {'network_id': network['id'],
 'instance_id': instance['id']})
 
  self.assertEquals(len(network['virtual_interfaces']), 0)
  network = db.network_get(ctxt, network['id'])
  self.assertEquals(len(network['virtual_interfaces']), 1)
 
 I create a network, pull it out again (as per my comment above), verify
 that it has no virtual_interfaces related to it, create a virtual
 interface in this network, and check the network's virtual_interfaces
 key and find that it still has length 0. Reloading the network now
 reveals the new virtual interface.
 
 SQLAlchemy does support looking these things up on the fly. In fact,
 AFAIK, this is its default behaviour. We just override it with
 joinedload options, because we don't use scoped sessions.
 
 My fake db driver looks stuff like this up on the fly (so the
 assertEquals after the virtual_interface_create will fail with that db
 driver).
 
 So my question is this: Should this be
 
 a) looked up on the fly,
 b) looked up on first key access and then cached,
 c) looked up when the parent object is loaded and then never again,
 d) or up to the driver author?
 
 Or should we do away with this stuff altogether? I.e. no more looking up
 related objects by way of __getitem__ lookups, and instead only allow
 lookups through db methods. So, instead of
 network['virtual_interfaces'], you'd always do
 db.virtual_interfaces_get_by_network(ctxt, network['id']).  Let's call
 this option e).
 
 I'm pretty undecided myself. If we go with option e) it becomes clear to
 consumers of the DB api when they're pulling out fresh stuff from the DB
 and when they're reusing potentially old results.  Explicit is better
 than implicit, but it'll take quite a bit of work to change this.
 
 If we go with one of options a) through d), my order of preference is
 (from most to least preferred): a), d), c), b).
 
 There's value in having a right way and a wrong way to do this. If it's
 either-or, it makes testing (as in validation) more awkward. I'd say
 it's always possible to do on-the-fly lookups. Overriding __getitem__
 and fetching fresh results is pretty simple, and, as mentioned earlier,
 I believe this is SQLAlchemy's default behaviour (somebody please
 correct me if I'm wrong). Forcing an arbitrary ORM to replicate the
 behaviour of b) and c) could be incredibly awkward, and c) is also
 complicated because there might be reference loops involved. Also,
 reviewing correct use of something where the need for reloads depends on
 previous use of your db objects (which might itself be conditional or
 happen 

[Openstack] Lloyd Learning Docs website content processes

2011-11-29 Thread Lloyd Dewolf
I had a good call with Anne Gentle a few hours ago, 08:00 PT, and got
a first introduction to the fantastic documentation work, and
infrastructure to support it.

There is a lot for me to get up to speed on, and Anne has generously
agreed to continue to mentor me. We'll have another one-on-one call
tomorrow, 08:00 PT [1]. If there are up to a few people who would like
to join this call, let Anne and I know off list -- note the call is
primarily to get me, Lloyd, up to speed on documentation.

Out of our first call, there were two items:

1. Anne went and confirmed that the process for updates and additions
to the content of the website [ http://openstack.org/  ] is to file
tasks, bugs, etc in http://launchpad.net/openstack-manuals. This
assists with public visibility, and tracking of items. Anne will be
updating the wiki with this information, and fleshing out the process.

2. Lloyd will log Thierry "ttx" Carrez's solid openstack.org/security
content from http://etherpad.openstack.org/8hWNQwkWf9 to
http://launchpad.net/openstack-manuals , if it is not already there.
He will do a copyedit to the etherpad, and also upload his revision to
openstack-manuals. We'll take it from there based on the process Anne
is updating to the wiki.

Thanks Anne!

Best regards,
Lloyd

--
1. 
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Lloyd+Dewolf+%26+Anne+Gentle+Chat&iso=2030T08&p1=283&ah=1



Re: [Openstack] Database stuff

2011-11-29 Thread Soren Hansen
2011/11/29 Vishvananda Ishaya vishvana...@gmail.com:
 e) is the right solution imho.  The only reason joinedloads slipped in is for 
 efficiency reasons.

 In an ideal world the solution would be:

 1) (explicitness) Every object or list of related objects is retrieved with 
 an explicit call:
  instance = db.instance_get(id)
  ifaces = db.interfaces_get_by_instance(id)
  for iface in ifaces:
     ip = db.fixed_ip_get_by_interface(iface['id'])
 2) (efficiency) Queries are perfectly efficient and all joins that will be 
 used are made at once.
  So the above would be a single db query that joins all instances ifaces and 
 ips.

The way I'd attack these expensive-if-done-one-at-a-time-but-dirt-cheap-
if-done-as-one-big-query lookups is to have a method in the generic layer
that is tailored for this use case. E.g.

def instances_get_all_for_network_with_fixed_ip_addresses():
    retval = []
    for inst in instance_get_all_by_network():
        x = inst.copy()
        x['fixed_ip_addresses'] = []
        for ip in fixed_ip_get_by_instance(inst['id']):
            x['fixed_ip_addresses'].append(ip['address'])
        retval.append(x)
    return retval

And then, in the sqlalchemy driver, I could override that method with
one that issues a query with joinedloads and all the rest of it. The
intent is explicit, drivers that have no speedier way to achieve this
get a free implementation made up of the more primitive methods.
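[Editor's sketch] The override pattern Soren describes — a generic composite method built from the primitives, which a specific driver may replace with one optimized query — could be sketched like this. Class and method names are hypothetical illustrations, not Nova's actual API:

```python
class GenericDriver:
    """Toy base class: primitives are abstract, composites come free."""

    def instance_get_all_by_network(self, network_id):
        raise NotImplementedError

    def fixed_ip_get_by_instance(self, instance_id):
        raise NotImplementedError

    def instances_get_all_for_network_with_fixed_ip_addresses(self, network_id):
        # Free-but-slow default built from the primitive methods: correct
        # for any driver, but issues one extra lookup per instance.  A
        # SQLAlchemy driver would override this with a single joined query.
        retval = []
        for inst in self.instance_get_all_by_network(network_id):
            x = dict(inst)
            x['fixed_ip_addresses'] = [
                ip['address']
                for ip in self.fixed_ip_get_by_instance(inst['id'])]
            retval.append(x)
        return retval

class FakeDriver(GenericDriver):
    """A new driver only has to supply the primitives."""

    def instance_get_all_by_network(self, network_id):
        return [{'id': 1, 'network_id': network_id}]

    def fixed_ip_get_by_instance(self, instance_id):
        return [{'instance_id': instance_id, 'address': '10.0.0.5'}]

rows = FakeDriver().instances_get_all_for_network_with_fixed_ip_addresses(3)
assert rows == [{'id': 1, 'network_id': 3,
                 'fixed_ip_addresses': ['10.0.0.5']}]
```

The intent of the composite method stays explicit at the call site, and only drivers with a faster path need to override it.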

fixed_ip_get_by_instance might also have a default implementation that
issues a fixed_ip_get_all() and then filters the results. This way, a
new driver would be quick to add, and then we could optimize each query
as we move along.
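[Editor's sketch] The get-all-and-filter default mentioned above, as a runnable toy (class names invented for illustration):

```python
class BaseDriver:
    """Hypothetical driver base; only fixed_ip_get_all must be implemented."""

    def fixed_ip_get_all(self):
        raise NotImplementedError

    def fixed_ip_get_by_instance(self, instance_id):
        # Default: fetch everything and filter in Python.  Correct for
        # any backend; an SQL driver can override it with an indexed
        # WHERE query once the slow path shows up in profiling.
        return [ip for ip in self.fixed_ip_get_all()
                if ip['instance_id'] == instance_id]

class MemoryDriver(BaseDriver):
    def __init__(self, ips):
        self._ips = ips

    def fixed_ip_get_all(self):
        return list(self._ips)

db = MemoryDriver([{'instance_id': 1, 'address': '10.0.0.2'},
                   {'instance_id': 2, 'address': '10.0.0.3'}])
assert db.fixed_ip_get_by_instance(1) == [
    {'instance_id': 1, 'address': '10.0.0.2'}]
```

A brand-new driver is functional the moment it can enumerate rows; every optimization after that is an override, not a rewrite.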

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



Re: [Openstack] Cloud Computing StackExchange site proposal

2011-11-29 Thread Stefano Maffulli
On Tue, 2011-11-29 at 10:10 -0800, Lloyd Dewolf wrote:
 Where do I find this previous discussion?

around here:
https://lists.launchpad.net/openstack/msg02169.html

What do you think of the requirements we're gathering for the Q&A
system? I'd like your opinion on that as we move on.

thanks
stef




Re: [Openstack] Database stuff

2011-11-29 Thread Vishvananda Ishaya
Inline

On Nov 29, 2011, at 11:16 AM, Soren Hansen wrote:

 2011/11/29 Vishvananda Ishaya vishvana...@gmail.com:
 e) is the right solution imho.  The only reason joinedloads slipped in is 
 for efficiency reasons.
 
 In an ideal world the solution would be:
 
 1) (explicitness) Every object or list of related objects is retrieved with 
 an explicit call:
  instance = db.instance_get(id)
  ifaces = db.interfaces_get_by_instance(id)
  for iface in ifaces:
 ip = db.fixed_ip_get_by_interface(iface['id'])
 2) (efficiency) Queries are perfectly efficient and all joins that will be 
 used are made at once.
  So the above would be a single db query that joins all instances ifaces and 
 ips.
 
 The way I'd attack these expensive-if-done-one-at-a-time-but-dirt-cheap-
 if-done-as-one-big-query lookups is to have a method in the generic layer
 that is tailored for this use case. E.g.
 
 def instances_get_all_for_network_with_fixed_ip_addresses():
     retval = []
     for inst in instance_get_all_by_network():
         x = inst.copy()
         x['fixed_ip_addresses'] = []
         for ip in fixed_ip_get_by_instance(inst['id']):
             x['fixed_ip_addresses'].append(ip['address'])
         retval.append(x)
     return retval

I think there are a couple of issues with this approach:

 1) combinatorial explosion of queries.
Every time we need an additional joined field we will be adding a new 
method. This could get out of hand
if we aren't careful.
 2) interface clarity.
Sometimes an instance dict contains extra fields and other times it 
doesn't. This is especially annoying
if the resulting object is passed through a queue or another method.  Does 
the instance object have
an address or not?  Do I need to explicitly request it or is it already 
embedded?

These are probably surmountable obstacles if we want to go this route.  I just 
want to point out that it has its
own drawbacks.

 
 And then, in the sqlalchemy driver, I could override that method with
 one that issues a query with joinedloads and all the rest of it. The
 intent is explicit, drivers that have no speedier way to achieve this
 get a free implementation made up of the more primitive methods.
 
 fixed_ip_get_by_instance might also have a default implementation that
 issues a fixed_ip_get_all() and then filters the results. This way, a
 new driver would be quick to add, and then we could optimize each query
 as we move along.
 
 -- 
 Soren Hansen| http://linux2go.dk/
 Ubuntu Developer| http://www.ubuntu.com/
 OpenStack Developer | http://www.openstack.org/




Re: [Openstack] Cloud Computing StackExchange site proposal

2011-11-29 Thread Vishvananda Ishaya
It was here:

http://area51.stackexchange.com/proposals/31788

It was rejected on the grounds of being able to be covered on StackOverflow and 
ServerFault.

Vish

On Nov 29, 2011, at 10:10 AM, Lloyd Dewolf wrote:

 On Fri, Nov 18, 2011 at 10:38 AM, Anne Gentle a...@openstack.org
 wrote: "We had put forward an OpenStack StackExchange proposal earlier
 this year which was rejected"
 Hi Anne,
 
 Where do I find this previous discussion?
 
 
 Thank you,
 Lloyd
 


Re: [Openstack] openstack Diablo Installation manual [Simplified Chinese]

2011-11-29 Thread Anne Gentle
Great, thanks for your contribution! I am guessing that you did not
translate but wrote an independent one, that's fine with us. I've
worked with FLOSS Manuals, an open source documentation community,
where we would encourage new content in another language and not
consider English to be the only source. With that in mind, one way you
can invite collaborators is considering using the openstack-manuals
project (on Launchpad and Github) for sharing the source files with
others.

I'll send a second email to the list with thoughts on a translation
process for documentation.

Thanks,
Anne


On Tue, Nov 29, 2011 at 10:31 AM, Stefano Maffulli
stef...@openstack.org wrote:
 Hello
 
 On Tue, 2011-11-29 at 11:29 +0800, darkfower wrote:
           I come from China. In my spare time I wrote a Simplified
 Chinese installation guide for the OpenStack Diablo release; I hope it
 helps Chinese-speaking friends.

 That's really good news! Which manual did you translate exactly? Can you
 coordinate with Anne to release the source of your translation and put
 your files in a version control system?

 thanks,
 stef




[Openstack] Documentation translations

2011-11-29 Thread Anne Gentle
Hi all,
As a follow-up to the interest in simplified Chinese manuals, I would
also like to experiment with English-as-source translations. I had
been delaying a frozen set of English source files but I think we
could just pick a date and start translating if there is enough
interest and we identify a point-person to manage translations.

Here's a rough outline of a possible process for translations:

1. Take the DocBook source files from openstack-manuals and use
Publican to create .pot files of each manual. [1] Publican produces a
.pot file per XML file.
2. Manually upload .pot files to Launchpad to manage translations.
3. Translators go to
https://translations.launchpad.net/openstack-manuals to download files
to work on and upload translated files. When enabling translations, I
have to choose a Launchpad group to own them; does anyone know whether
the Launchpad Translators group would be interested?
4. Jenkins automatically copies translated files and builds them and
publishes to docs.openstack.org. (This part I'm not sure how to
automate, needs research).

I would like input on the process and I would like to identify someone
who would like to manage the translation process. If you are
interested, please discuss this topic at the next DocTeam meeting
December 12 at 20:00 UTC. [2]

Thanks,
Anne


[1] http://jfearn.fedorapeople.org/en-US/Publican/2.4/html/Users_Guide/chap-Users_Guide-Creating_a_document.html#sect-Users_Guide-Preparing_a_document_for_translation
[2] http://wiki.openstack.org/Meetings/DocTeamMeeting



Re: [Openstack] Database stuff

2011-11-29 Thread Duncan McGreggor
On 29 Nov 2011 - 13:15, Jay Pipes wrote:
 On Tue, Nov 29, 2011 at 10:49 AM, Jason Kölker jkoel...@rackspace.com 
 wrote:
  On Tue, 2011-11-29 at 16:20 +0100, Soren Hansen wrote:
 
  It seems I've talked myself into preferring option e). It's too much
  work to do on my own, though, and it's going to be disruptive, so we
  need to do it real soon. I think it'll be worth it, though.
 
  I agree. This will also make it easier to swap out the storage with
  other Non-SQLAlchemy datastores *cough* ElasticSearch *cough*.

 There's a very good reason this hasn't happened so far: handling
 highly relational datasets with a non-relational data store is a bad
 idea. In fact, I seem to remember that is exactly how Nova's data
 store started out life (*cough* Redis *cough*)

I haven't played much with Riak's linking capabilities yet, but I'm
wondering how close something like that (graph-like databases) would get
us... I haven't explored the relationships in the OpenStack schema yet.

(I'm in the middle of putting together some *simple* experiments with
linking, with my goal being a rough initial performance comparison with
MySQL querying similar data.)

d



Re: [Openstack] Cloud Computing StackExchange site proposal

2011-11-29 Thread Stefano Maffulli
On Tue, 2011-11-29 at 10:14 -0800, Michael Pittaro wrote:
  * a method or process for flagging topics which should migrate  into
 documentation and/or the wiki 

Sounds interesting. If I understand you correctly, you want to have a
way to mark questions about topics that may be improved in the official
docs. Would this be something like 'transform this question into a bug
filed against the documentation project' or something different?

Can you elaborate a bit more on the use case? How would this work?

thanks,
stef




Re: [Openstack] Database stuff

2011-11-29 Thread Soren Hansen
2011/11/29 Jay Pipes jaypi...@gmail.com:
 There's a very good reason this hasn't happened so far: handling
 highly relational datasets with a non-relational data store is a bad
 idea. In fact, I seem to remember that is exactly how Nova's data
 store started out life (*cough* Redis *cough*)

To be fair, we're only barely making use of this in our DB
implementation. I don't think we do any foreign key checking at all,
and deletes (because we don't actually delete anything, we just mark
it as deleted) don't cascade, so there are all sorts of ways in which
our data store could be inconsistent.
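A minimal sqlite3 sketch of the soft-delete pattern described above (the
table and column names are illustrative, not Nova's actual schema): rows
flagged as deleted are never removed, so no delete ever cascades, and child
rows can end up referencing a "deleted" parent.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE instances (id INTEGER PRIMARY KEY, deleted INTEGER DEFAULT 0);
    CREATE TABLE fixed_ips (id INTEGER PRIMARY KEY, instance_id INTEGER,
                            deleted INTEGER DEFAULT 0);
    INSERT INTO instances (id) VALUES (1);
    INSERT INTO fixed_ips (id, instance_id) VALUES (10, 1);
""")

# "Delete" the instance by flagging it; no ON DELETE CASCADE ever fires,
# because the row is never actually deleted.
db.execute("UPDATE instances SET deleted = 1 WHERE id = 1")

# The child row still references the soft-deleted parent: a live orphan.
orphans = db.execute("""
    SELECT f.id FROM fixed_ips f
    JOIN instances i ON i.id = f.instance_id
    WHERE f.deleted = 0 AND i.deleted = 1
""").fetchall()
print(orphans)  # [(10,)]
```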

Besides, we don't really use transactions. I could easily read the
same data from two separate nodes, make different (irreconcilable)
changes on both nodes, and write them back, and the last one to write
simply wins.

In short, it seems to me we're not really getting much out of having a
relational data store?
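The lost-update scenario described above takes only a few lines to
reproduce; this sqlite3 sketch (schema invented for illustration) shows two
blind read-modify-write cycles where the last writer silently wins.

```python
import sqlite3

# Simulate two services doing a read-modify-write against the same row
# with no locking and no compare-and-swap.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, state TEXT)")
db.execute("INSERT INTO instances VALUES (1, 'building')")

# Both nodes read the same starting state...
read_a = db.execute("SELECT state FROM instances WHERE id = 1").fetchone()[0]
read_b = db.execute("SELECT state FROM instances WHERE id = 1").fetchone()[0]

# ...then each writes back a different, irreconcilable change.
db.execute("UPDATE instances SET state = 'active' WHERE id = 1")  # node A
db.execute("UPDATE instances SET state = 'error' WHERE id = 1")   # node B

# Node A's update is gone, and nothing in the store flagged the conflict.
final = db.execute("SELECT state FROM instances WHERE id = 1").fetchone()[0]
print(final)  # error
```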

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



Re: [Openstack] Database stuff

2011-11-29 Thread Jay Pipes
On Tue, Nov 29, 2011 at 2:58 PM, Soren Hansen so...@linux2go.dk wrote:
 2011/11/29 Jay Pipes jaypi...@gmail.com:
 There's a very good reason this hasn't happened so far: handling
 highly relational datasets with a non-relational data store is a bad
 idea. In fact, I seem to remember that is exactly how Nova's data
 store started out life (*cough* Redis *cough*)

 To be fair, we're only barely making use of this in our DB
 implementation. I don't think we do any foreign key checking at all,
 and deletes (because we don't actually delete anything, we just mark
 it as deleted) don't cascade, so there are all sort of ways in which
 our data store could be inconsistent.

Because the database schema isn't properly protecting against
referential integrity failures does not mean the relational database
store is a failure itself.

 Besides, we don't really use transactions. I could easily read the
 same data from two separate nodes, make different (irreconcilable)
 changes on both nodes, and write them back, and the last one to write
 simply wins.

Sure, but using a KV store doesn't solve this problem...

 In short, it seems to me we're not really getting much out of having a
 relational data store?

We're getting out of it what we ask of it. We aren't using scoped
sessions properly, aren't using transactions properly, and we aren't
enforcing referential integrity. But those are choices we've made, not
some native deficiency in relational data stores.

As soon as someone can demonstrate the performance, scalability, and
robustness advantages of rewriting the data layer to use a
non-relational data store, I'm all ears. Until that point, I remain
unconvinced that the relational database is the source of major
bottlenecks.

Cheers,
-jay
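Jay's point that integrity enforcement is a choice rather than a property
of the store can be illustrated with SQLite, where foreign-key checking is
literally opt-in (the schema here is invented for illustration).

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")  # off by default; enforcement is opt-in
db.executescript("""
    CREATE TABLE instances (id INTEGER PRIMARY KEY);
    CREATE TABLE fixed_ips (id INTEGER PRIMARY KEY,
                            instance_id INTEGER REFERENCES instances(id));
    INSERT INTO instances (id) VALUES (1);
""")

# With enforcement on, the store itself rejects an orphan row.
try:
    db.execute("INSERT INTO fixed_ips VALUES (10, 99)")  # no instance 99 exists
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```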



Re: [Openstack] Database stuff

2011-11-29 Thread Aaron Lee

On Nov 29, 2011, at 11:55 AM, Vishvananda Ishaya wrote:
 … some stuff I agree with, then...
 something like db.interfaces_get_by_instance(id, hint='fixed_ip'), 

This is very similar to ActiveRecord. In a case like this you would say 

Instance.find(id, :include = :fixed_ip)

If you are including the hint, why not just populate those models in the 
initial query?
Something like

db.instances_by_id_including_fixed_ips(id)

or

db.instances_by_id_including_vifs(id)

or whatever.
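A rough sketch of that idea, using plain sqlite3 and invented table and
function names rather than the real Nova DB API: the related rows are
joined in the initial query instead of hinted and lazily fetched later.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fixed_ips (id INTEGER PRIMARY KEY, instance_id INTEGER,
                            address TEXT);
    INSERT INTO instances VALUES (1, 'vm1');
    INSERT INTO fixed_ips VALUES (10, 1, '10.0.0.5');
    INSERT INTO fixed_ips VALUES (11, 1, '10.0.0.6');
""")

def instance_by_id_including_fixed_ips(instance_id):
    # One round trip: join the related rows up front instead of hinting
    # and loading them later. All names here are hypothetical.
    rows = db.execute("""
        SELECT i.id, i.name, f.address
        FROM instances i LEFT JOIN fixed_ips f ON f.instance_id = i.id
        WHERE i.id = ?
        ORDER BY f.id
    """, (instance_id,)).fetchall()
    return {"id": rows[0][0], "name": rows[0][1],
            "fixed_ips": [r[2] for r in rows if r[2] is not None]}

print(instance_by_id_including_fixed_ips(1))
# {'id': 1, 'name': 'vm1', 'fixed_ips': ['10.0.0.5', '10.0.0.6']}
```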

Aaron Lee


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-29 Thread Soren Hansen
It's been a bit over a week since I started this thread. So far we've
agreed that running the test suite is too slow, mostly because there
are too many things in there that aren't unit tests.

We've also discussed my fake db implementation at length. I think
we've generally agreed that it isn't completely insane, so that's
moving along nicely.

Duncan has taken the first steps needed to split the test suite into
unit tests and everything else:

   https://review.openstack.org/#change,1879

Just one more core +1 needed. Will someone beat me to it? Only time
will tell :) Thanks, Duncan!

Anything else around unit testing anyone wants to get into The Great
Big Plan[tm]?

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



Re: [Openstack] Proposal for Lorin Hochstein to join nova-core

2011-11-29 Thread Todd Willey
+1

On Tue, Nov 29, 2011 at 1:03 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
 Lorin has been a great contributor to Nova for a long time and has been 
 participating heavily in reviews over the past couple of months.  I think he 
 would be a great addition to nova-core.

 Vish




Re: [Openstack] Database stuff

2011-11-29 Thread Soren Hansen
2011/11/29 Jay Pipes jaypi...@gmail.com:
 On Tue, Nov 29, 2011 at 2:58 PM, Soren Hansen so...@linux2go.dk wrote:
 2011/11/29 Jay Pipes jaypi...@gmail.com:
 There's a very good reason this hasn't happened so far: handling
 highly relational datasets with a non-relational data store is a bad
 idea. In fact, I seem to remember that is exactly how Nova's data
 store started out life (*cough* Redis *cough*)
 To be fair, we're only barely making use of this in our DB
 implementation. I don't think we do any foreign key checking at all,
 and deletes (because we don't actually delete anything, we just mark
 it as deleted) don't cascade, so there are all sort of ways in which
 our data store could be inconsistent.
 Because the database schema isn't properly protecting against
 referential integrity failures does not mean the relational database
 store is a failure itself.

I'm not suggesting it's a failure at all.

 Besides, we don't really use transactions. I could easily read the
 same data from two separate nodes, make different (irreconcilable)
 changes on both nodes, and write them back, and the last one to write
 simply wins.
 Sure, but using a KV store doesn't solve this problem...

I'm not suggesting it will. My point is simply that using a KV store
wouldn't lose us anything in that respect.

 In short, it seems to me we're not really getting much out of having a
 relational data store?
 We're getting out of it what we ask of it. We aren't using scoped
 sessions properly, aren't using transactions properly, and we aren't
 enforcing referential integrity. But those are choices we've made, not
 some native deficiency in relational data stores.

I didn't mean to suggest that that was the case at all. The point I'm
trying (but failing, clearly) to make is that with the way we're using
it, we're not reaping the usual benefits from it, and that we'd in
fact not lose anything by using a KV store.

 As soon as someone can demonstrate the performance, scalability, and
 robustness advantages of rewriting the data layer to use a
 non-relational data store, I'm all ears. Until that point, I remain
 unconvinced that the relational database is the source of major
 bottlenecks.

I understand that MySQL (and the other backends supported by
SQLAlchemy, too) scales very well. Vertically. I doubt they'll be
bottlenecks. Heck, they're even well-understood enough that people
have built very decent HA setups using them. I just don't think
they're a particularly good fit for a distributed system. You can have
a highly available datastore all you want, but I'd sleep better
knowing that our data is stored in a distributed system that is
designed to handle network partitions well.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



Re: [Openstack] Cloud Computing StackExchange site proposal

2011-11-29 Thread Lloyd Dewolf
Thanks Vish. Now my failed search makes more sense. Awesome how they
delete it -- invitation to resubmit every month, jokes.

On Tue, Nov 29, 2011 at 11:35 AM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
 It was here:
 http://area51.stackexchange.com/proposals/31788
 It was rejected on the grounds of being able to be covered on StackOverflow
 and ServerFault.
 Vish
 On Nov 29, 2011, at 10:10 AM, Lloyd Dewolf wrote:

 On Fri, Nov 18, 2011 at 10:38 AM, Anne Gentle a...@openstack.org wrote:
  We had put forward an OpenStack StackExchange proposal earlier this year
  which was rejected
 Hi Anne,

 Where do I find this previous discussion?


 Thank you,
 Lloyd






Re: [Openstack] Database stuff

2011-11-29 Thread Jay Pipes
On Tue, Nov 29, 2011 at 3:43 PM, Soren Hansen so...@linux2go.dk wrote:
 2011/11/29 Jay Pipes jaypi...@gmail.com:
 Besides, we don't really use transactions. I could easily read the
 same data from two separate nodes, make different (irreconcilable)
 changes on both nodes, and write them back, and the last one to write
 simply wins.
 Sure, but using a KV store doesn't solve this problem...

 I'm not suggesting it will. My point is simply that using a KV store
 wouldn't lose us anything in that respect.

I see your point. But then again, it comes down to whether we care
about referential integrity or transactional safety. If we don't, then
we're just building a distributed system that has unreliable
persistent storage built into it, and that, IMHO, is a bigger problem
than the as-yet-unproven assertions around scalability of a relational
database in a distributed system. (more below)

 As soon as someone can demonstrate the performance, scalability, and
 robustness advantages of rewriting the data layer to use a
 non-relational data store, I'm all ears. Until that point, I remain
 unconvinced that the relational database is the source of major
 bottlenecks.

 I understand that MySQL (and the other backends supported by
 SQLAlchemy, too) scales very well. Vertically. I doubt they'll be
 bottlenecks. Heck, they're even well-understood enough that people
 have built very decent HA setups using them. I just don't think
 they're a particularly good fit for a distributed system. You can have
 a highly available datastore all you want, but I'd sleep better
 knowing that our data is stored in a distributed system that is
 designed to handle network partitions well.

I guess I don't understand this. How do you sleep at night TODAY
knowing that the data Nova stores in its persistent storage is wide
open to referential integrity problems and transactional state
inconsistencies? What's the point of having a data store that
understands network partitions if we don't care enough to protect
the integrity of the data we're putting in the data store in the first
place? :(

-jay



Re: [Openstack] Database stuff

2011-11-29 Thread Soren Hansen
2011/11/29 Jay Pipes jaypi...@gmail.com:
 On Tue, Nov 29, 2011 at 3:43 PM, Soren Hansen so...@linux2go.dk wrote:
 2011/11/29 Jay Pipes jaypi...@gmail.com:
 Besides, we don't really use transactions. I could easily read the
 same data from two separate nodes, make different (irreconcilable)
 changes on both nodes, and write them back, and the last one to write
 simply wins.
 Sure, but using a KV store doesn't solve this problem...

 I'm not suggesting it will. My point is simply that using a KV store
 wouldn't lose us anything in that respect.
 I see your point. But then again, it comes down to whether we care
 about referential integrity or transactional safety.

...and right now we have neither (by choice, not by limitations imposed
by the data store). Would you not agree?

 If we don't, then we're just building a distributed system that has
 unreliable persistent storage built into it, and that, IMHO, is a
 bigger problem than the as-yet-unproven assertions around scalability
 of a relational database in a distributed system. (more below)

Yes. This is what we have now. And it sucks.

 As soon as someone can demonstrate the performance, scalability, and
 robustness advantages of rewriting the data layer to use a
 non-relational data store, I'm all ears. Until that point, I remain
 unconvinced that the relational database is the source of major
 bottlenecks.
 I understand that MySQL (and the other backends supported by
 SQLAlchemy, too) scales very well. Vertically. I doubt they'll be
 bottlenecks. Heck, they're even well-understood enough that people
 have built very decent HA setups using them. I just don't think
 they're a particularly good fit for a distributed system. You can
 have a highly available datastore all you want, but I'd sleep better
 knowing that our data is stored in a distributed system that is
 designed to handle network partitions well.
 I guess I don't understand this. How do you sleep at night TODAY
 knowing that the data Nova stores in its persistent storage is wide
 open to referential integrity problems and transactional state
 inconsistencies?

Not very well at all. If I thought everything was in good shape, I
wouldn't have bothered with all of this :)

 What's the point of having a data store that understands network
 partitions if we don't care enough to protect the integrity of the
 data we're putting in the data store in the first place? :(

None at all. I hope I haven't said anything to suggest otherwise.

MySQL simply was not designed to be distributed. Generally speaking, if
you do end up in a situation where there's been a network partition and
your master is on one side and you have a slave on the other side, a
couple of things can happen:

1. You can automatically promote the slave to master, thus letting both
sides of the partition keep going.

2. You can leave the slave be and let the entire one side of the
partition be in read-only mode.

I think the usual case is 1, since MySQL HA setups are usually designed
to handle the case where the master dies rather than handling network
partitions. Would you agree with this assertion?

If both have acted as master, what happens when the network is joined
again? Hell breaks loose, because MySQL wasn't designed for this sort of
thing.

Something like Riak, on the other hand, is designed to excel for exactly
this sort of situation. It makes no attempt to handle these conflicts
(unless you explicitly tell it to just let last write win). If there are
conflicts, you get to handle it in your application in whatever way
makes sense.
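A sketch of what that application-side conflict resolution could look
like after a partition heals: the store hands back both divergent versions
("siblings") and the application merges them, instead of letting the last
write silently win. The merge policy and record shapes below are
hypothetical, not Riak's API.

```python
# Hypothetical merge policy: union the group sets and keep the name
# from the sibling with the higher version counter.
def merge_siblings(siblings):
    newest = max(siblings, key=lambda s: s["version"])
    groups = set()
    for s in siblings:
        groups |= set(s["groups"])
    return {"name": newest["name"], "groups": sorted(groups)}

# One side of the partition renamed the record; the other added a group.
side_a = {"version": 2, "name": "web-1", "groups": ["default", "ssh"]}
side_b = {"version": 3, "name": "web-01", "groups": ["default", "http"]}

print(merge_siblings([side_a, side_b]))
# {'name': 'web-01', 'groups': ['default', 'http', 'ssh']}
```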

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



Re: [Openstack] boot from ISO

2011-11-29 Thread Donal Lafferty
Off the top of my head, I'd look to see if the compute node can see that ISO SR.

DL


From: Michaël Van de Borne [mailto:michael.vandebo...@cetic.be]
Sent: 29 November 2011 18:15
To: Donal Lafferty; openstack@lists.launchpad.net
Subject: Re: [Openstack] boot from ISO

Hi Donal, hi all,

I'm trying to test the Boot From ISO feature, so I've set up a XenServer host
and installed an Ubuntu 11.10 PV DomU on it.

Then I used the following commands but, as you can see in the attached 
nova-compute log excerpt, there was a problem.

glance add name=fedora_iso disk_format=iso < ../Fedora-16-x86_64-Live-LXDE.iso
ID: 4
nova boot test_iso --flavor 2 --image 4

I can see the ISO images using nova list but not using glance index.

The error seems to be: 'Cannot find SR of content-type ISO'. However, I've set
up an NFS ISO library using XenCenter, so there is an SR with content type ISO.
How do I tell OpenStack to use this SR for the ISO images I upload using glance?

Any clue? I feel I'm rather close to making it work.


thanks,

michaël




Michaël Van de Borne

R&D Engineer, SOA team, CETIC

Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli

www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi

On 22/11/11 00:18, Donal Lafferty wrote:
Hi Michaël,

Boot from ISO should be ISO-image agnostic. The feature overcomes
restrictions placed on the distribution of modified Windows images. People
can use their own ISO instead, but they may still need dedicated hardware.

You should have no problem with a Linux distribution.

However, I wrote it for XenAPI, so we need someone to duplicate the work for 
KVM and VMWare.

DL


From: openstack-bounces+donal.lafferty=citrix@lists.launchpad.net On Behalf
Of Michaël Van de Borne
Sent: 21 November 2011 17:28
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] boot from ISO

up? anybody?


On 14/11/11 14:44, Michaël Van de Borne wrote:
Hi all,

I'm very interested in the Boot From ISO feature described here: 
http://wiki.openstack.org/bootFromISO

In a few words, it's about the ability to boot a VM from the CDROM with an
ISO image attached; a blank hard disk is attached as well, so the OS files
can be installed onto it.

I've got some questions about this:
1. Is the feature available today using a standard Diablo install? I've seen
that the code for this feature is stored under nova/tests and glance/tests.
Does this mean it isn't finished yet and can only be tested under specific
conditions? Which ones?
2. The spec describes a Windows use case. Why just Windows? What should I do
to test with a Linux distribution?
3. I can see here
(http://bazaar.launchpad.net/%7Ehudson-openstack/nova/trunk/revision/1433?start_revid=1433)
that only the Xen hypervisor has been impacted by the source code changes. Are 
KVM and VMWare planned to be supported in the future? May I help/be helped to 
develop KVM and VMWare support for this 'Boot From Iso' feature?

Any help appreciated.

thank you,


michaël





--

Michaël Van de Borne

R&D Engineer, SOA team, CETIC

Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli

www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi







Re: [Openstack] Proposal for Mark McLoughlin to join nova-core

2011-11-29 Thread Dan Prince

+1
 
-Original Message-
From: Vishvananda Ishaya vishvana...@gmail.com
Sent: Tuesday, November 29, 2011 1:05pm
To: openstack (openstack@lists.launchpad.net) openstack@lists.launchpad.net
Subject: [Openstack] Proposal for Mark McLoughlin to join nova-core



Mark is maintaining openstack for Fedora and has made some excellent 
contributions to nova.  He has also been very prolific with reviews lately. 
Let's add him to core and make his reviews count towards potential merges!

Vish


[Openstack] openstack.common module (was: Re: [RFC] Common config options module)

2011-11-29 Thread Jason Kölker
On Mon, 2011-11-28 at 20:57 +, Mark McLoughlin wrote:
 Hi Jason,
 
 On Mon, 2011-11-28 at 10:24 -0600, Jason Kölker wrote:
  On Mon, 2011-11-28 at 08:06 -0800, Monty Taylor wrote: 
The idea is to unify option handling across projects with this new API.
The module would eventually (soon?) live in openstack-common.
   
   Awesome. So - whaddya think about making openstack-common an
   installable/consumable module?
  
  I've extracted openstack-common as a standalone module from
  openstack-skeleton at https://github.com/jkoelker/openstack-common for
  melange. I also converted the rest of openstack-skeleton to a paster
  template in the openstack-paste repo. Its there for the picking if
  anyone wants to use it.
  
  I'd love to see a unified effort on some front extracting bits and
  moving them into the openstack namespace as an official gerrit, et. al.
  project.
  
  Happy Hacking!
 
 Cool stuff. I'll dig into it soon and send you a pull request with the
 cfg module.

So along these lines. Melange currently relies on that openstack-common
repo/namespace. As Melange is moving to gerrit, are there any
objections to promoting that openstack-common repo to gerrit as well?

Happy Hacking!

7-11




[Openstack] development document: how to write a filter module to swift

2011-11-29 Thread pf shineyear
http://wiki.openstack.org/development/swift/filter

It's not perfect, but I think it can help people who are just getting started.

If someone can help me move this document into the Swift Developer
Documentation (http://swift.openstack.org/), I would be very grateful.


[Openstack] some of my understand of swift object replication (chinese and english)

2011-11-29 Thread pf shineyear
1. When only 2 object servers receive the upload successfully, who fills in
the failed third replica, and how?

If there are 3 object servers and 2 of them are down, the upload fails.

If there are 3 object servers, 1 node is down, and all 3 writes succeed, the
write that could not reach the failed node is handed off by the proxy to one
of the 2 available servers. Because the handoff partition is different, the
file lands under a different path even though its content is identical. The
replicator process can therefore use the ring to work out that a file under
that path does not belong on this server, and once the failed node comes
back, the file is rsynced over to it.

If there are 3 object servers, 1 node is down, and only 2 of the 3 writes
succeed because the third timed out partway through due to a network
problem, the partially written file is incomplete. The auditor process will
move the incomplete file to the quarantine directory so it can be repaired.

2. How is a file change detected via the hash suffix, and how do we know
which change is the newest?

On upload, the proxy server sends an X-Timestamp header, set to its current
time, to every object server, so the timestamps tell you which copy is newer
and which is older. You must use NTP to keep every proxy server's clock in
sync, or consistency problems can arise.

Here is the reply to that question from a Swift core developer:

On Mon, Nov 28, 2011 at 9:04 PM, pf shineyear shin...@gmail.com wrote:
 hi all
 I think the X-Timestamp header comes from the proxy server to the object
 storage node, and its value is the proxy server's current time.
 If I have 2 or more proxy servers running in one cluster, should I make
 sure the same account/container/filename always uses the same proxy
 server?
 Because if I upload one file twice through different proxy servers, the
 server times differ, and I think there may be a consistency problem if I
 do so. Am I right?

Yeah, Swift's last-write-wins logic is only as good as how well the proxy
server times are synchronized.  The idea is you'd use NTP or similar to
keep them synced.

And NTP generally does a really good job, with clock skews an order of
magnitude smaller than the time it takes to PUT an object into Swift
(which is about the best conflict resolution level you could hope for
anyways).

We talked a lot about using logical clocks (e.g. vector clocks) when
designing Swift, but realistically they'd probably usually just have
to fall back on timestamps to resolve conflicts.  Or version objects
when there's a conflict and let the client decide which is right,
and that's a whole mess for both the clients and the backend.

We've also talked about tie breaker ideas, because there's only so
much resolution in those timestamps.  But in reality, it's a pretty
low priority because it's really difficult to exploit and only screws
up the user's own data if they manage it.

- Mike
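The last-write-wins rule described above is easy to sketch. The X-Timestamp
header name is real, but the replica records below are invented for
illustration.

```python
# Each PUT carries the proxy's clock in X-Timestamp; the replica with the
# highest value is treated as current.
replicas = [
    {"node": "a", "x_timestamp": "1322600000.00001", "etag": "stale"},
    {"node": "b", "x_timestamp": "1322600012.73410", "etag": "current"},
    {"node": "c", "x_timestamp": "1322600012.73410", "etag": "current"},
]

def newest(replicas):
    # Compare timestamps numerically; NTP keeps proxy clock skew much
    # smaller than the duration of a typical PUT.
    return max(replicas, key=lambda r: float(r["x_timestamp"]))

print(newest(replicas)["etag"])  # current
```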



3. If the replicator processes on two object servers rsync the same file to
a third object server at the same time, how do we guarantee the file is not
overwritten or corrupted by overlapping writes?

A read/write lock is taken on the parent directory using fcntl.
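A hypothetical sketch of that locking idea (not Swift's actual
implementation): an exclusive fcntl lock on a lock file inside the
directory serializes concurrent updaters instead of letting their writes
interleave.

```python
import fcntl
import os
import tempfile

def lock_directory(path):
    # Hypothetical helper: hold an exclusive lock on a ".lock" file inside
    # the directory for the duration of an update.
    fd = os.open(os.path.join(path, ".lock"), os.O_CREAT | os.O_RDWR)
    fcntl.lockf(fd, fcntl.LOCK_EX)  # blocks until any other holder releases
    return fd

def unlock_directory(fd):
    fcntl.lockf(fd, fcntl.LOCK_UN)
    os.close(fd)

workdir = tempfile.mkdtemp()
fd = lock_directory(workdir)
# ... rsync or rename files under workdir while holding the lock ...
unlock_directory(fd)
print("released")  # released
```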


Re: [Openstack] Cloud Computing StackExchange site proposal

2011-11-29 Thread Michael Pittaro
On Tue, Nov 29, 2011 at 11:54 AM, Stefano Maffulli
stef...@openstack.org wrote:
 On Tue, 2011-11-29 at 10:14 -0800, Michael Pittaro wrote:
  * a method or process for flagging topics which should migrate  into
 documentation and/or the wiki

 Sounds interesting. If I understand you correctly, you want to have a
 way to mark questions about topics that may be improved in the official
 docs. Would this be something like 'transform this question into a bug
 filed against the documentation project' or something different?

 Can you elaborate a bit more on the use case? How would this work?

 thanks,
 stef


Maybe the anti-pattern I'm trying to avoid here is a better place
to start :-)

A lot of knowledge discovery happens on a QA site, as well as lists
and forums.   However, a common problem with those tools is that all the
knowledge ends up scattered across those locations (and often replicated),
and the real documentation and/or wiki never gets updated.   (This seems
to be more common with lists and forums
than QA sites.)  This is compounded by the natural aging of a
discussion or question - at some point, it's just no longer relevant.

I think two pieces are required:

1) As you suggest, a way of flagging a question or discussion
   as a doc bug, a potential enhancement, or even a product bug.

2) A way of closing the loop, and updating the question to indicate
  the issue is resolved/fixed, and no longer relevant.

The method I've used in the past was that the 'question' had a link
to one or more 'bugs', and when a 'bug' was fixed the 'question'
got updated automatically.

There are various ways to do this; I think the important point is
just to close the knowledge loop in some way, and to avoid having
to do it manually.
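That loop could be as simple as the toy sketch below; every name here is hypothetical, and no real tracker or QA-site API is assumed:

```python
# Toy sketch of "closing the knowledge loop": questions carry links to
# bugs, and a bug-closed event marks linked questions resolved without
# anyone doing it manually. All names are invented for illustration.

questions = {
    41: {"title": "Why does X loop forever?",
         "bugs": {883293},
         "resolved": False},
}


def on_bug_closed(bug_id):
    """Called (e.g. by a tracker webhook) when a bug is Fix Released."""
    for question in questions.values():
        if bug_id in question["bugs"]:
            question["resolved"] = True  # question updated automatically


on_bug_closed(883293)
assert questions[41]["resolved"]
```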

mike



Re: [Openstack] Cloud Computing StackExchange site proposal

2011-11-29 Thread Stefano Maffulli
On Tue, 2011-11-29 at 16:06 -0800, Michael Pittaro wrote:
 A lot of knowledge discovery happens on a QA site, as well as on lists
 and forums.   However, a common problem with those tools is that all
 the knowledge ends up scattered across those locations (and often
 replicated), and the real documentation and/or wiki never gets
 updated.   ( This seems to be more common with lists and forums
 than QA sites.)  This is compounded by the natural aging of a
 discussion or question - at some point, it's just no longer relevant.

Got it, very clear now. Thanks,

stef




Re: [Openstack] Cloud Computing StackExchange site proposal

2011-11-29 Thread Lloyd Dewolf
On Tue, Nov 29, 2011 at 11:16 AM, Stefano Maffulli
stef...@openstack.org wrote:
 On Tue, 2011-11-29 at 10:10 -0800, Lloyd Dewolf wrote:
 Where do I find this previous discussion?

 around here:
 https://lists.launchpad.net/openstack/msg02169.html

 What do you think of the requirements we're gathering for the QA
 system? I'd like your opinion on that as we move on.

Thanks Stefano. I really like everyone reframing the discussion to
figure out what our needs are as opposed to ... shiny!

I do think stackexchange (SE) is miles [1] ahead and the only system
that will meet the majority of our requirements.

If we can get our own Area51 then it's by far the best immediate solution.

I spoke to a friend at Area51, and he suggested we might have
different results if we tried again. So I feel like this is on the
table if we want to pursue it.


Of course, having very active SE participants (high reputation)
putting the proposal forward and committing to it carries a lot of weight.

My reputation [2] is weak today, but I'm sure myself and others could
ramp up the levels quickly over the next few months.

Cheers,
Lloyd

--
1. See I'm getting used to United States customary units,
http://en.wikipedia.org/wiki/Customary_units
2. http://stackexchange.com/users/25765?tab=accounts



[Openstack] Meetup in Seattle 11/30, 6:30pm at Opscode HQ

2011-11-29 Thread Rob_Hirschfeld
Seattle Area Stackers,

We're having an informal meetup at the Opscode HQ 
(http://www.opscode.com/about/#contact) tomorrow.

Rob
__
Rob Hirschfeld
Principal Cloud Solution Architect
Dell | Cloud Edge, Data Center Solutions
blog robhirschfeld.com, twitter @zehicle



Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-29 Thread Nachi Ueno
Hi folks

Jay
Thank you for pointing this out! :)
Hey OpenStackers, please help with the forward-porting :P

Soren
Anything else around unit testing anyone wants to get into The Great
Big Plan[tm]?

We should also have a policy for unit tests. Something like this:

- New code should have a concrete specification doc, and all unit
tests should be written based on the specs
- New code should include negative test cases for each parameter.
- New code should not lower coverage.
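The negative-test point might look like the sketch below; `create_instance` is a hypothetical helper invented for illustration, not a real Nova API:

```python
# Illustration of "a negative test case for each parameter".
# create_instance is a made-up function standing in for real code.
import unittest


def create_instance(name, vcpus):
    """Hypothetical code under test (not a real Nova call)."""
    if not name:
        raise ValueError("name must be non-empty")
    if vcpus < 1:
        raise ValueError("vcpus must be >= 1")
    return {"name": name, "vcpus": vcpus}


class CreateInstanceNegativeTests(unittest.TestCase):
    # One negative case per parameter, per the policy above.
    def test_empty_name_rejected(self):
        self.assertRaises(ValueError, create_instance, "", 1)

    def test_zero_vcpus_rejected(self):
        self.assertRaises(ValueError, create_instance, "vm1", 0)
```

Run with `python -m unittest <module>` alongside the usual positive tests.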

Cheers
Nati

2011/11/29 Jay Pipes jaypi...@gmail.com:
 On Tue, Nov 29, 2011 at 3:21 PM, Soren Hansen so...@linux2go.dk wrote:
 Anything else around unit testing anyone wants to get into The Great
 Big Plan[tm]?

 Well, NTT has written well over a thousand new unit tests for Nova. It
 would be great to get some more help from everyone in forward-porting
 them. To date, we've been a bit stymied by lack of resources to do the
 forward-porting, so if anyone has some spare cycles, there are an
 absolute ton of both new tests and bug fixes needing
 forward-porting...

 From another email, here are instructions on how to do the
 forward-porting, for those interested in helping out.

 This is the NTT bug fix + unit test branch. Note that this is the
 branch that is based on stable/diablo:

 https://github.com/ntt-pf-lab/nova/branches

 All the bugs in the OpenStack QA project (and Nova project) that need
 forward-porting are tagged with forward-port-needed. You can see a
 list of the unassigned ones needing forward-porting here:

 http://bit.ly/rPVjCf


 The workflow for forward-porting these fixes/new tests is like this:

 A. Pick a bug from above list (http://bit.ly/rPVjCf)

 B. Assign yourself to bug

 C. Fix problem and review request

 I believe folks need some further instructions on this step.
 Basically, the NTT team has *already* fixed the bug, but we need to
 apply the bug fix to trunk and propose this fix for merging into
 trunk.

 The following steps are how to do this. I'm going to take a bug fix as
 an example, and show the steps needed to forward port it to trunk.
 Here is the bug and associated fix I will forward port:

 https://bugs.launchpad.net/openstack-qa/+bug/883293

 Nati's original bug fix branch is linked on the bug report:

 https://github.com/ntt-pf-lab/nova/tree/openstack-qa-nova-883293

 When looking at the branch, you can see the latest commits by clicking
 the Commits tab near the top of the page:

 https://github.com/ntt-pf-lab/nova/commits/openstack-qa-nova-883293

 As you can see, the top 2 commits form the bug fix from Nati -- the
 last commit being a test case, and the second to last commit being a
 fix for the infinite loop references in the bug report. The two
 commits have the following two SHA1 identifiers:

 9cf5945c9e64d1c6a2eb6d9499e80d6c19aed058
 2a95311263cbda5886b9409284fea2d155b3cada

 These are the two commits I need to apply to my local *trunk* branch
 of Nova. To do so, I do the following locally:

 1) Before doing anything, we first need to set up a remote for the NTT
 team repo on GitHub:

 jpipes@uberbox:~/repos/nova$ git remote add ntt
 https://github.com/ntt-pf-lab/nova.git
 jpipes@uberbox:~/repos/nova$ git fetch ntt
 remote: Counting objects: 2255, done.
 remote: Compressing objects: 100% (432/432), done.
 remote: Total 2120 (delta 1694), reused 2108 (delta 1686)
 Receiving objects: 100% (2120/2120), 547.09 KiB | 293 KiB/s, done.
 Resolving deltas: 100% (1694/1694), completed with 81 local objects.
 From https://github.com/ntt-pf-lab/nova
  * [new branch]      int001     - ntt/int001
  * [new branch]      int001_base - ntt/int001_base
  * [new branch]      int002.d1  - ntt/int002.d1
  * [new branch]      int003     - ntt/int003
  * [new branch]      ntt/stable/diablo - ntt/ntt/stable/diablo
  * [new branch]      openstack-qa-api-validation -
 ntt/openstack-qa-api-validation
 snip
  * [new branch]      openstack-qa-nova-888229 - ntt/openstack-qa-nova-888229
  * [new branch]      openstack-qa-test-branch - ntt/openstack-qa-test-branch
  * [new branch]      stable/diablo - ntt/stable/diablo

 2) Now that we have fetched the NTT branches (containing all the bug
 fixes we need to forward-port), we create a local branch based off of
 Essex trunk. On my machine, this local Essex trunk branch is called
 master:

 jpipes@uberbox:~/repos/nova$ git branch
 * diablo
  master
 jpipes@uberbox:~/repos/nova$ git checkout master
 Switched to branch 'master'
 jpipes@uberbox:~/repos/nova$ git checkout -b bug883293
 Switched to a new branch 'bug883293'

 3) We now need to cherry-pick the two commits from above. I do so in
 reverse order, as I want to apply the patch with the bug fix first and
 then the patch with the test case:

 jpipes@uberbox:~/repos/nova$ git cherry-pick
 2a95311263cbda5886b9409284fea2d155b3cada
 [bug883293 81e49b7] combination of log_notifier and
 log.PublishErrorsHandler causes infinite loop Fixes bug 883293.
  Author: Nachi Ueno ueno.na...@lab.ntt.co.jp
  1 files changed, 4 insertions(+), 0 

Re: [Openstack] Proposal for Mark McLoughlin to join nova-core

2011-11-29 Thread Monty Taylor
+1

On 11/29/2011 02:12 PM, Dan Prince wrote:
 +1
 
  
 
 -Original Message-
 From: Vishvananda Ishaya vishvana...@gmail.com
 Sent: Tuesday, November 29, 2011 1:05pm
 To: openstack (openstack@lists.launchpad.net)
 openstack@lists.launchpad.net
 Subject: [Openstack] Proposal for Mark McLoughlin to join nova-core
 
 Mark is maintaining openstack for Fedora and has made some excellent
 contributions to nova. He has also been very prolific with reviews
 lately. Let's add him to core and make his reviews count towards
 potential merges!
 
 Vish



[Openstack-poc] Meeting today

2011-11-29 Thread Jonathan Bryce
Anyone have anything you'd like to discuss today?

Jonathan
___
Mailing list: https://launchpad.net/~openstack-poc
Post to : openstack-poc@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-poc
More help   : https://help.launchpad.net/ListHelp