Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-03 Thread Murray, Paul (HP Cloud Services)
Hi Sylvain,

I would go with keeping AZs exclusive. It is a well-established concept even if 
it is up to providers to implement what it actually means in terms of 
isolation. Some good use cases have been presented on this topic recently, but 
for me they suggest we should develop a better concept rather than bend the 
meaning of the old one. We certainly don't have hosts in more than one AZ in HP 
Cloud and I think some of our users would be very surprised if we changed that.

Paul.

From: Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
Sent: 03 April 2014 15:53
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] Hosts within two Availability Zones : 
possible or not ?

+1 for AZs not sharing hosts.

Because it's the only mechanism that allows us to segment the datacenter. 
Otherwise we cannot provide redundancy to clients except by using Regions, which 
are dedicated, separately networked infrastructure, or the anti-affinity filter, 
which IMO is not pragmatic as it is prone to abuse. Why sacrifice this power just 
so that users can select the type of physical host they want? The latter can be 
exposed using flavor metadata, which is a lot safer and more controllable than 
using AZs. If someone insists that we really need to let users choose the type of 
physical host, then I suggest creating a new hint and using aggregates with it. 
Don't sacrifice AZ exclusivity!

Btw, there is a datacenter design called dual-room [1] which I think is the best 
fit for AZs to make your cloud redundant even with a single datacenter.

Best regards,

Toan

[1] IBM and Cisco: Together for a World Class Data Center, Page 141. 
http://books.google.fr/books?id=DHjJAgAAQBAJ&pg=PA141#v=onepage&q&f=false



De : Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Envoyé : jeudi 3 avril 2014 15:52
À : OpenStack Development Mailing List (not for usage questions)
Objet : [openstack-dev] [Nova] Hosts within two Availability Zones : possible 
or not ?

Hi,

I'm currently trying to reproduce [1]. This bug requires having the same host 
in two different aggregates, each one having an AZ.

IIRC, the Nova API prevents hosts from being part of two distinct AZs [2], so IMHO 
this request should not be possible.
That said, there are two flaws where I can see that no validation is done:
 - when specifying an AZ in nova.conf, the host overrides the existing AZ with 
its own
 - when adding a host to an aggregate with no AZ defined, and afterwards 
updating the aggregate to add an AZ


So, I need direction. Either we consider that it is not possible for the same 
host to be in two AZs, and then we need to fix the two scenarios above, or we say 
it's fine to have two AZs for the same host, and then we both remove the 
validation check in the API and fix the output issue reported in the original bug [1].


Your comments are welcome.
Thanks,
-Sylvain


[1] : https://bugs.launchpad.net/nova/+bug/1277230

[2] : 
https://github.com/openstack/nova/blob/9d45e9cef624a4a972c24c47c7abd57a72d74432/nova/compute/api.py#L3378
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-12 Thread Murray, Paul (HP Cloud Services)
Reviewing this thread to come to a conclusion (for myself at least - and 
hopefully so I can document something so reviewers know why I did it):

For approach:
1. plugins should use stevedore with entry points (as stated by Russell)
2. the plugins should be explicitly selected through configuration 

For api stability:
I'm not sure there was a consensus. Personally I would write a base class for 
the plugins and document in it that the interface is unstable. Sound good?
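For illustration, a rough sketch of what 1 and 2 together could look like with 
stevedore; the namespace and config option names here are made up, not something 
agreed:

    # Sketch only - namespace and config option are hypothetical.
    from oslo.config import cfg
    from stevedore import named

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.ListOpt('resource_plugins', default=[],
                    help='Names of the plugins to load (explicit selection).'),
    ])

    def load_plugins():
        # Only the plugins named in configuration are loaded; nothing is
        # pulled in just because a package happens to be installed.
        mgr = named.NamedExtensionManager(
            namespace='nova.example.plugins',
            names=CONF.resource_plugins,
            invoke_on_load=True)
        return [ext.obj for ext in mgr]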

BTW: this is one of those things that could be put in a place to make and 
record decisions (like the gerrit idea for blueprints). But now I am referring 
to another thread 
[http://lists.openstack.org/pipermail/openstack-dev/2014-March/029232.html ]

Paul.


-Original Message-
From: Sandy Walsh [mailto:sandy.wa...@rackspace.com] 
Sent: 04 March 2014 21:25
To: Murray, Paul (HP Cloud Services)
Cc: OpenStack Development Mailing List (not for usage questions); 
d...@danplanet.com
Subject: Re: [openstack-dev] [Nova] What is the currently accepted way to do 
plugins

And sorry, as to your original problem, the loadables approach is kinda messy 
since only the classes that are loaded when *that* module is loaded are used 
(vs. explicitly specifying them in a config). You may get different results 
when the flow changes.

Either entry-points or config would give reliable results.


On 03/04/2014 03:21 PM, Murray, Paul (HP Cloud Services) wrote:
 In a chat with Dan Smith on IRC, he was suggesting that the important thing 
 was not to use class paths in the config file. I can see that internal 
 implementation should not be exposed in the config files - that way the 
 implementation can change without impacting the nova users/operators.

There's plenty of easy ways to deal with that problem vs. entry points.

MyModule.get_my_plugin() ... which can point to anywhere in the module 
permanently.
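A tiny sketch of that indirection (module and class names are hypothetical):

    # mymodule/__init__.py -- hypothetical package
    from mymodule import impl as _impl

    def get_my_plugin():
        # Stable public factory referenced by operators/config; the real
        # implementation can move between internal modules without breaking
        # anyone.
        return _impl.Plugin()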

Also, we don't have any of the headaches of merging setup.cfg sections (as we 
see with oslo.* integration).

 Sandy, I'm not sure I really get the security argument. Python provides every 
 means possible to inject code, not sure plugins are so different. Certainly 
 agree on choosing which plugins you want to use though.

The concern is that any compromised part of the python eco-system can get 
auto-loaded with the entry-point mechanism. Let's say Nova auto-loads all 
modules with entry-points in the [foo] section. All I have to do is create a setup 
that has a [foo] section and my code is loaded.
Explicit is better than implicit.
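A hypothetical illustration of that concern - any installed distribution can 
register itself under a namespace that gets scanned:

    # setup.py of a hypothetical third-party package; if Nova auto-loaded
    # everything registered under the namespace, this code would get imported.
    from setuptools import setup

    setup(
        name='innocent-looking-package',
        entry_points={
            'nova.example.plugins': [
                'sneaky = innocent_looking_package.hooks:Hook',
            ],
        },
    )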

So, assuming we don't auto-load modules ... what does the entry-point approach 
buy us?


 From: Russell Bryant [rbry...@redhat.com] We should be careful though.  
 We need to limit what we expose as external plug points, even if we consider 
 them unstable.  If we don't want it to be public, it may not make sense for 
 it to be a plugin interface at all.

I'm not sure what the concern with introducing new extension points is?
OpenStack is basically just a big bag of plugins. If it's optional, it's 
supposed to be a plugin (according to the design tenets).



 
 --
 Russell Bryant
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review approval

2014-03-07 Thread Murray, Paul (HP Cloud Services)
The principle is excellent, I think there are two points/objectives worth 
keeping in mind:

1. We need an effective way to make and record the design decisions
2. We should make the whole development process easier

In my mind the point of the design review part is to agree up front something 
that should not be over-turned (or is hard to over-turn) late in patch review. 
I agree with others that a patch should not be blocked (or should be hard to 
block) because the reviewer disagrees with an agreed design decision. Perhaps 
an author can ask for a -2 or -1 to be removed if they can point out the agreed 
design decision, without having to reopen the debate. 

I also think that blueprints tend to have parts that should be agreed up front, 
like changes to apis, database migrations, or integration points in general. 
They also have parts that don't need to be agreed up front, there is no point 
in a heavyweight process for everything. Some blueprints might not need any of 
this at all. For example, a new plugin for the filter scheduler might not need a 
lot of design review, or at least, adding the design review is unlikely to ease 
the development cycle.

So, we could use the blueprint template to identify things that need to be 
agreed in the design review. These could include anything the proposer wants 
agreed up front and possibly specifics of a defined set of integration points. 
Some blueprints might have nothing to be formally agreed in design review. 
Additionally, sometimes plans change, so it should be possible to return to 
design review. Possibly the notion of a design decision could be broken out 
from a blueprint in the same way as a patch-set? Maybe it only makes sense to 
do it as a whole? Certainly design decisions should be made in relation to 
other blueprints and so it should be easy to see that there are two blueprints 
making related design decisions.

The main point is that there should be an identifiable set of design decisions 
that have been reviewed and agreed, and that can also be found.

**The reward for authors in doing this is the author can defend their patch-set 
against late objections to design decisions.**
**The reward for reviewers is they get a way to know what has been agreed in 
relation to a blueprint.**

On another point...
...sometimes I fall foul of writing code using an approach I have seen in the 
code base, only to be told it was decided not to do it that way anymore. 
Sometimes I had no way of knowing that, and exactly what has been decided, when 
it was decided, and who did the deciding has been lost. Clearly the PTL and ML 
do help out here, but it would be helpful if such things were easy to find out. 
These kinds of design decision should be reviewed and recorded.

Again, I think it is excellent that this is being addressed.

Paul.



-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: 07 March 2014 12:01
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint 
review  approval

On 03/07/2014 06:30 AM, Thierry Carrez wrote:
 Sean Dague wrote:
 One of the issues that the Nova team has definitely hit is Blueprint 
 overload. At some point there were over 150 blueprints. Many of them 
 were a single sentence.

 The results of this have been that design review today is typically 
 not happening on Blueprint approval, but is instead happening once 
 the code shows up in the code review. So -1s and -2s on code review 
 are a mix of design and code review. A big part of which is that 
 design was never in any way sufficiently reviewed before the code started.

 In today's Nova meeting a new thought occurred. We already have 
 Gerrit which is good for reviewing things. It gives you detailed 
 commenting abilities, voting, and history. Instead of attempting (and 
 usually
 failing) on doing blueprint review in launchpad (or launchpad + an 
 etherpad, or launchpad + a wiki page) we could do something like follows:

 1. create bad blueprint
 2. create gerrit review with detailed proposal on the blueprint
 3. iterate in gerrit working towards blueprint approval
 4. once approved copy back the approved text into the blueprint (which should 
 now be sufficiently detailed)

 Basically blueprints would get design review, and we'd be pretty sure 
 we liked the approach before the blueprint is approved. This would 
 hopefully reduce the late design review in the code reviews that's 
 happening a lot now.

 There are plenty of niggly details that would be need to be worked 
 out

  * what's the basic text / template format of the design to be 
 reviewed (probably want a base template for folks to just keep things 
 consistent).
  * is this happening in the nova tree (somewhere in docs/ - NEP (Nova 
 Enhancement Proposals), or is it happening in a separate gerrit tree.
  * are there timelines for blueprint approval in a cycle? after which 
 point, we don't review any new items.

[openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Murray, Paul (HP Cloud Services)
Hi All,

One of my patches has a query asking if I am using the agreed way to load 
plugins: https://review.openstack.org/#/c/71557/

I followed the same approach as filters/weights/metrics using nova.loadables. 
Was there an agreement to do it a different way? And if so, what is the agreed 
way of doing it? A pointer to an example or even documentation/wiki page would 
be appreciated.

Thanks in advance,
Paul

Paul Murray
HP Cloud Services
+44 117 312 9309


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Murray, Paul (HP Cloud Services)
In a chat with Dan Smith on IRC, he was suggesting that the important thing was 
not to use class paths in the config file. I can see that internal 
implementation should not be exposed in the config files - that way the 
implementation can change without impacting the nova users/operators.

Sandy, I'm not sure I really get the security argument. Python provides every 
means possible to inject code, not sure plugins are so different. Certainly 
agree on choosing which plugins you want to use though.

-Original Message-
From: Sandy Walsh [mailto:sandy.wa...@rackspace.com] 
Sent: 04 March 2014 17:50
To: OpenStack Development Mailing List (not for usage questions); Murray, Paul 
(HP Cloud Services)
Subject: RE: [openstack-dev] [Nova] What is the currently accepted way to do 
plugins

This brings up something that's been gnawing at me for a while now ... why use 
entry-point based loaders at all? I don't see the problem they're trying to 
solve. (I thought I got it for a while, but I was clearly fooling myself)

1. If you use the "load all drivers in this category" feature, that's a 
security risk since any compromised python library could hold a trojan.

2. otherwise you have to explicitly name the plugins you want (or don't want) 
anyway, so why have the extra indirection of the entry-point? Why not just name 
the desired modules directly? 

3. the real value of a loader would be to also extend/manage the python path 
... that's where the deployment pain is. "Use fully qualified filename driver 
and take care of the pathing for me." Abstracting the module/class/function 
name isn't a great win. 
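For comparison, a sketch of the direct approach in point 2 - naming the classes 
in configuration and importing them, with no entry-point layer (the option name 
here is hypothetical):

    from oslo.config import cfg
    from nova.openstack.common import importutils

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.ListOpt('plugin_classes', default=[],
                    help='Fully qualified class names of the plugins to use.'),
    ])

    def load_plugins():
        # The config names the classes directly; import_class does the rest.
        return [importutils.import_class(name)()
                for name in CONF.plugin_classes]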

I don't see where the value is for the added pain (entry-point 
management/package metadata) it brings. 

CMV,

-S

From: Russell Bryant [rbry...@redhat.com]
Sent: Tuesday, March 04, 2014 1:29 PM
To: Murray, Paul (HP Cloud Services); OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] What is the currently accepted way to do 
plugins

On 03/04/2014 06:27 AM, Murray, Paul (HP Cloud Services) wrote:
 One of my patches has a query asking if I am using the agreed way to 
 load plugins: https://review.openstack.org/#/c/71557/

 I followed the same approach as filters/weights/metrics using 
 nova.loadables. Was there an agreement to do it a different way? And 
 if so, what is the agreed way of doing it? A pointer to an example or 
 even documentation/wiki page would be appreciated.

The short version is entry-point based plugins using stevedore.

We should be careful though.  We need to limit what we expose as external plug 
points, even if we consider them unstable.  If we don't want it to be public, 
it may not make sense for it to be a plugin interface at all.

--
Russell Bryant


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] do nova objects work for plugins?

2014-02-04 Thread Murray, Paul (HP Cloud Services)
Hi Dan,

Yes, we did plan to do something simple with a DictOfStringsField, I was trying 
to see if I could do the more complicated thing. BTW the simple version for 
extra_resources is here: https://review.openstack.org/#/c/66959/ 

It will be good to go through any remaining problems next week, but I think I 
can get on with it.

Paul.

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com] 
Sent: 03 February 2014 20:59
To: Murray, Paul (HP Cloud Services); OpenStack Development Mailing List (not 
for usage questions)
Subject: Re: [Nova] do nova objects work for plugins?

 Basically, if object A has object B as a child, and deserialization 
 finds object B to be an unrecognized version, it will try to back port 
 the object A to the version number of object B.

Right, which is why we rev the version of, say, the InstanceList when we have 
to rev Instance itself, and why we have unit tests to make sure that happens.

 It is not reasonable to bump the version of the compute_node when a new 
 external plugin is developed. So currently the versioning seems too 
 rigid to implement extensible/pluggable objects this way.

So we're talking about an out-of-tree closed-source plugin, right? IMHO, Nova's 
versioning infrastructure is in place to make Nova able to handle upgrades; 
adding requirements for supporting out-of-tree plugins wouldn't be high on my 
priority list.

 A reasonable alternative might be for all objects to be deserialized 
 individually within a tree data structure, but I'm not sure what might 
 happen to parent/child compatibility without some careful tracking.

I think it would probably be possible to make the deserializer specify the 
object and version it tripped over when passing the whole thing back to 
conductor to be backleveled. That seems reasonably useful to Nova itself.

 Another might be to say that nova objects are for nova use only and 
 that's just tough for plugin writers!

Well, for the same reason we don't provide a stable virt driver API (among 
other things) I don't think we need to be overly concerned with allowing 
arbitrary bolt-on code to hook in at this point.

Your concern is, I assume, allowing a resource metric plugin to shove actual 
NovaObject items into a container object of compute node metrics?
Is there some reason that we can't just coerce all of these to a 
dict-of-strings or dict-of-known-primitive-types to save all of this 
complication? I seem to recall the proposal that led us down this road being 
"store/communicate arbitrary JSON blobs", but certainly there is a happy medium?
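A rough sketch (illustrative only, not Nova's actual objects) of the 
dict-of-strings coercion being suggested:

    from nova.objects import base
    from nova.objects import fields

    class ComputeNodeMetrics(base.NovaObject):
        # A single version covers all plugins because the values are plain
        # strings rather than nested plugin-defined objects with their own
        # version numbers.
        VERSION = '1.0'
        fields = {
            'host': fields.StringField(),
            'metrics': fields.DictOfStringsField(nullable=True),
        }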

Given that the nova meetup is next week, perhaps that would be a good time to 
actually figure out a path forward?

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] about the bp cpu-entitlement

2014-02-04 Thread Murray, Paul (HP Cloud Services)
Hi Sahid,

This is being done by Oshrit Feder, so I'll let her answer, but I know that it 
is going to be implemented as an extensible resource (see: 
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking) so it 
is waiting for that to be done. That blueprint is making good progress now and 
it should have more patches up this week. There is another resource example 
nearly done for network entitlement (see: 
https://blueprints.launchpad.net/nova/+spec/network-bandwidth-entitlement) 

Paul.

-Original Message-
From: sahid [mailto:sahid.ferdja...@cloudwatt.com] 
Sent: 04 February 2014 09:24
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova] about the bp cpu-entitlement

Greetings,

  I saw a really interesting blueprint about cpu entitlement; it will be 
targeted for icehouse-3 and I would like to get some details about the 
progress. Does the developer need help? I can give part of my time to it.

https://blueprints.launchpad.net/nova/+spec/cpu-entitlement

Thanks a lot,
s.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] do nova objects work for plugins?

2014-02-03 Thread Murray, Paul (HP Cloud Services)

I was looking through Nova Objects with a view to creating an extensible object 
that can be used by writers of plugins to include data generated by the plugin 
(others have also done the same e.g. https://review.openstack.org/#/c/65826/ ) 
On the way I noticed what I think is a bug in Nova Objects serialization (but 
might be considered by design by some - see: 
https://bugs.launchpad.net/nova/+bug/1275675). Basically, if object A has 
object B as a child, and deserialization finds object B to be an unrecognized 
version, it will try to back port the object A to the version number of object 
B.

Now this is not a problem if the version of A is always bumped when the version 
of B changes. If the A and B versions are always deployed together, because 
they are revised and built together, then A will always be the one that is 
found to be incompatible first and in back porting it will always know what 
version its child should be. If that is not the way things are meant to work 
then there is a problem (I think).
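A rough sketch of that convention (not Nova's real classes): the parent's 
version is bumped whenever the child's is, so the parent is always the one found 
to be incompatible first and knows which child version to back-port to:

    from nova.objects import base
    from nova.objects import fields

    class Child(base.NovaObject):
        # Version 1.1: added 'extra'
        VERSION = '1.1'
        fields = {'value': fields.StringField(),
                  'extra': fields.StringField(nullable=True)}

    class Parent(base.NovaObject):
        # Version 1.1: Child object bumped to 1.1
        VERSION = '1.1'
        fields = {'child': fields.ObjectField('Child')}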

Going back to the extensible object, what I would like to be able to do is 
allow the writer of a plugin to implement a nova object for data specific to 
that plugin, so that it can be communicated by Nova. (For example, a resource 
plugin on the compute node generates resource specific data that is passed to 
the scheduler, where another plugin consumes it). This object will be 
communicated as a child of another object (e.g. the compute_node). It would be 
useful if the plugins at each end benefit from the same version handling that 
nova does itself.

It is not reasonable to bump the version of the compute_node when a new external 
plugin is developed. So currently the versioning seems too rigid to implement 
extensible/pluggable objects this way.

A reasonable alternative might be for all objects to be deserialized 
individually within a tree data structure, but I'm not sure what might happen 
to parent/child compatibility without some careful tracking.

Another might be to say that nova objects are for nova use only and that's just 
tough for plugin writers!

Thoughts?

Paul



Paul Murray
HP Cloud Services
+44 117 312 9309


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Scheduler] Will the Scheuler use Nova Objects?

2014-01-30 Thread Murray, Paul (HP Cloud Services)
Hi,

I have heard a couple of conflicting comments about the scheduler and nova 
objects that I would like to clear up. In one scheduler/gantt meeting, Gary 
Kotton offered to convert the scheduler to use Nova objects. In another I heard 
that with the creation of Gantt, the scheduler would avoid using any Nova 
specific features including Nova objects.

I can see that these things are evolving at the same time, so it makes sense 
that plans or opinions might change. But I am at a point where it would be nice 
to know.

Which way should this go?

Paul.

Paul Murray
HP Cloud Services
+44 117 312 9309


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-27 Thread Murray, Paul (HP Cloud Services)
Hi Justin,

My thought process is to go back to basics. To perform discovery there is no 
getting away from the fact that you have to start with a well-known address 
that your peers can access on the network. The second part is a 
service/protocol accessible at that address that can perform the discovery. So 
the questions are: what well-known addresses can I reach? And is that a 
suitable place to implement the service/protocol.

The metadata service is different to the others in that it can be accessed 
without credentials (correct me if I'm wrong), so it is the only possibility 
out of the openstack services if you do not want to have credentials on the 
peer instances. If that is not the case then the other services are options. 
All services require security groups and/or networks to be configured 
appropriately to access them.

(Yes, the question "can all instances access the same metadata service" really 
did mean "are they all local". Sorry for being unclear. But I think your answer 
is yes, they are, right?)

Implementing the peer discovery in the instances themselves requires some kind 
of multicast or knowing a list of addresses to try. In both cases either the 
actual addresses or some name resolved through a naming service would do. 
Whatever is starting your instances does have access to at least nova, so it 
can find out if there are any running instances and what their addresses are. 
These could be used as the addresses they try first. These are the way that 
internet p2p services work and they work in the cloud.
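For example, a launcher could seed new instances with the addresses of peers 
that already exist - a hedged sketch only; the client API version, credentials 
and endpoint below are placeholders:

    from novaclient import client

    nova = client.Client('2', 'user', 'password', 'project',
                         'http://keystone.example.com:5000/v2.0')
    seeds = []
    for server in nova.servers.list():
        for addrs in server.addresses.values():
            seeds.extend(a['addr'] for a in addrs)
    # Pass 'seeds' to newly launched instances (e.g. through user-data) as
    # their initial peer list for a gossip protocol to bootstrap from.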

So there are options. The metadata service is a good place in terms of 
accessibility, but may not be for other reasons. In particular, the lack of 
credentials relates to the fact it is only allowed to see its own information. 
Making that more dynamic and including information about other things in the 
system might change the security model slightly. Secondly, is it the purpose of 
the metadata server to do this job? That's more a matter of choice.

Personally, I think no, this is not the right place.

Paul.



From: Justin Santa Barbara [mailto:jus...@fathomdb.com]
Sent: 24 January 2014 21:01
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances 
through metadata service

Murray, Paul (HP Cloud Services)  wrote:

Multicast is not generally used over the internet, so the comment about 
removing multicast is not really justified, and any of the approaches that work 
there could be used.

I think multicast/broadcast is commonly used 'behind the firewall', but I'm 
happy to hear of any other alternatives that you would recommend - particularly 
if they can work on the cloud!


I agree that the metadata service is a sensible alternative. Do you imagine 
your instances all having access to the same metadata service? Is there 
something more generic and not tied to the architecture of a single openstack 
deployment?


Not sure I understand - doesn't every Nova instance have access to the metadata 
service, and they all connect to the same back-end database?  Has anyone not 
deployed the metadata service?  It is not cross-region / cross-provider - is 
that what you mean?  In terms of implementation 
(https://review.openstack.org/#/c/68825/) it is supposed to be the same as if 
you had done a list-instances call on the API provider.  I know there's been 
talk of federation here; when this happens it would be awesome to have a 
cross-provider view (optionally, probably).

Although this is a simple example, it is also the first of quite a lot of 
useful primitives that are commonly provided by configuration services. As it 
is possible to do what you want by other means (including using an 
implementation that has multicast within subnets - I'm sure neutron does 
actually have this), it seems that this makes it less of a special case and more 
a requirement for a more general notification service?

I don't see any other solution offering as easy a solution for users (either 
the developer of the application or the person that launches the instances).  
If every instance had an automatic keystone token/trust with read-only access 
to its own project, that would be great.  If Heat intercepted every Nova call 
and added metadata, that would be great.  If Marconi offered every instance a 
'broadcast' queue where it could reach all its peers, and we had a Keystone 
trust for that, that would be great.  But, those are all 12 month projects, and 
even if you built them and they were awesome they still wouldn't get deployed 
on all the major clouds, so I _still_ couldn't rely on them as an application 
developer.

My hope is to find something that every cloud can be comfortable deploying, 
that solves discovery just as broadcast/multicast solves it on typical LANs.  
It may be that anything other than IP addresses will make e.g. HP public cloud 
uncomfortable; if so then I'll tweak it to just be IPs.  Finding

Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-24 Thread Murray, Paul (HP Cloud Services)
Hi Justin,

It's nice to see someone bringing this kind of thing up. Seeding discovery is a 
handy primitive to have.

Multicast is not generally used over the internet, so the comment about 
removing multicast is not really justified, and any of the approaches that work 
there could be used. Alternatively your instances could use the nova or neutron 
APIs to obtain any information you want - if they are network connected - but 
certainly whatever is starting them has access, so something can at least 
provide the information.

I agree that the metadata service is a sensible alternative. Do you imagine 
your instances all having access to the same metadata service? Is there 
something more generic and not tied to the architecture of a single openstack 
deployment?

Although this is a simple example, it is also the first of quite a lot of 
useful primitives that are commonly provided by configuration services. As it 
is possible to do what you want by other means (including using an 
implementation that has multicast within subnets - I'm sure neutron does 
actually have this), it seems that this makes it less of a special case and more 
a requirement for a more general notification service?

Having said that I do like this kind of stuff :)

Paul.


From: Justin Santa Barbara [mailto:jus...@fathomdb.com]
Sent: 24 January 2014 15:43
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances 
through metadata service

Good points - thank you.  For arbitrary operations, I agree that it would be 
better to expose a token in the metadata service, rather than allowing the 
metadata service to expose unbounded amounts of API functionality.  We should 
therefore also have a per-instance token in the metadata, though I don't see 
Keystone getting the prerequisite IAM-level functionality for two+ releases (?).

However, I think I can justify peer discovery as the 'one exception'.  Here's 
why: discovery of peers is widely used for self-configuring clustered services, 
including those built in pre-cloud days.  Multicast/broadcast used to be the 
solution, but cloud broke that.  The cloud is supposed to be about distributed 
systems, yet we broke the primary way distributed systems do peer discovery. 
Today's workarounds are pretty terrible, e.g. uploading to an S3 bucket, or 
sharing EC2 credentials with the instance (tolerable now with IAM, but painful 
to configure).  We're not talking about allowing instances to program the 
architecture (e.g. attach volumes etc), but rather just to do the equivalent of 
a multicast for discovery.  In other words, we're restoring some functionality 
we took away (discovery via multicast) rather than adding 
programmable-infrastructure cloud functionality.

We expect the instances to start a gossip protocol to determine who is actually 
up/down, who else is in the cluster, etc.  As such, we don't need accurate 
information - we only have to help a node find one living peer.  
(Multicast/broadcast was not entirely reliable either!)  Further, instance #2 
will contact instance #1, so it doesn't matter if instance #1 doesn't have 
instance #2 in the list, as long as instance #2 sees instance #1.  I'm relying 
on the idea that instance launching takes time > 0, so other instances will be 
in the starting state when the metadata request comes in, even if we launch 
instances simultaneously.  (Another reason why I don't filter instances by 
state!)

I haven't actually found where metadata caching is implemented, although the 
constructor of InstanceMetadata documents restrictions that really only make 
sense if it is.  Anyone know where it is cached?

In terms of information exposed: An alternative would be to try to connect to 
every IP in the subnet we are assigned; this blueprint can be seen as an 
optimization on that (to avoid DDOS-ing the public clouds).  So I've tried to 
expose only the information that enables directed scanning: availability zone, 
reservation id, security groups, network ids & labels & cidrs & IPs [example 
below].  A naive implementation will just try every peer; a smarter 
implementation might check the security groups to try to filter it, or the zone 
information to try to connect to nearby peers first.  Note that I don't expose 
e.g. the instance state: if you want to know whether a node is up, you have to 
try connecting to it.  I don't believe any of this information is at all 
sensitive, particularly not to instances in the same project.
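(The concrete example was cut from this digest; purely as a hypothetical 
illustration, a per-peer record of that shape might look something like the 
following.)

    # Hypothetical illustration only - the real example from the original
    # mail is not reproduced here.
    peer = {
        'name': 'instance-0002',
        'reservation_id': 'r-abcdefgh',
        'availability_zone': 'az1',
        'security_groups': ['default', 'cluster'],
        'networks': [{'label': 'private',
                      'cidr': '10.0.0.0/24',
                      'ips': ['10.0.0.4']}],
    }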

On external agents doing the configuration: yes, they could put this into user 
defined metadata, but then we're tied to a configuration system.  We have to 
get 20 configuration systems to agree on a common format (Heat, Puppet, Chef, 
Ansible, SaltStack, Vagrant, Fabric, all the home-grown systems!)  It also 
makes it hard to launch instances concurrently (because you want node #2 to 
have the metadata for node #1, so you have to wait for 

Re: [openstack-dev] [nova] how is resource tracking supposed to work for live migration and evacuation?

2014-01-17 Thread Murray, Paul (HP Cloud Services)
To be clear - the changes that Yunhong describes below are not part of the 
extensible-resource-tracking blueprint. Extensible-resource-tracking has the 
more modest aim to provide plugins to track additional resource data.

Paul.

-Original Message-
From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com] 
Sent: 17 January 2014 05:54
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] how is resource tracking supposed to work 
for live migration and evacuation?

There are some related discussion on this before. 

There is a BP at 
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking which 
try to support more resources.

And I have a documentation at  
https://docs.google.com/document/d/1gI_GE0-H637lTRIyn2UPfQVebfk5QjDi6ohObt6MIc0 
. My idea is to keep the claim as an object which can be invoked remotely, and 
the claim result is kept in DB as the instance's usage. I'm working on it now.

Thanks
--jyh

 -Original Message-
 From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
 Sent: Thursday, January 16, 2014 2:27 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] how is resource tracking supposed 
 to work for live migration and evacuation?
 
 
 On Jan 16, 2014, at 1:12 PM, Chris Friesen 
 chris.frie...@windriver.com
 wrote:
 
  Hi,
 
  I'm trying to figure out how resource tracking is intended to work 
  for live
 migration and evacuation.
 
  For a while I thought that maybe we were relying on the call to
 ComputeManager._instance_update() in
 ComputeManager.post_live_migration_at_destination().  However, in
  ResourceTracker.update_usage() we see that on a live migration the
 instance that has just migrated over isn't listed in 
 self.tracked_instances and so we don't actually update its usage.
 
  As far as I can see, the current code will just wait for the audit 
  to run at
 some unknown time in the future and call update_available_resource(), 
 which will add the newly-migrated instance to self.tracked_instances 
 and update the resource usage.
 
  From my poking around so far the same thing holds true for 
  evacuation
 as well.
 
  In either case, just waiting for the audit seems somewhat haphazard.
 
  Would it make sense to do something like
 ResourceTracker.instance_claim() during the migration/evacuate and 
 properly track the resources rather than wait for the audit?
 
 Yes that makes sense to me. Live migration was around before we had a 
 resource tracker so it probably was just never updated.
 
 Vish
 
 
  Chris
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Detect changes in object model

2014-01-13 Thread Murray, Paul (HP Cloud Services)
Hi Dan,

I was actually thinking of changes to the list itself rather than the objects 
in the list. To try and be clear, I actually mean the following:

ObjectListBase has a field called objects that is typed 
fields.ListOfObjectsField('NovaObject'). I can see methods for count and index, 
and I guess you are talking about adding a method for "are any of your contents 
changed" here. I don't see other list operations (like append, insert, remove, 
pop) that modify the list. If these were included they would have to mark the 
list as changed so it is picked up when looking for changes. 
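A rough sketch of what such an operation could look like if it were added (names 
hypothetical, mixed into an ObjectListBase-style class; not current behaviour):

    class ChangeTrackingListMixin(object):
        def append(self, obj):
            # Mutate the underlying 'objects' field and mark it dirty so
            # obj_what_changed() reports the structural change to the list.
            self.objects.append(obj)
            self._changed_fields.add('objects')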

Do you see these belonging here or would you expect those to go in a sub-class 
if they were wanted?

Paul.

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com] 
Sent: 10 January 2014 16:22
To: Murray, Paul (HP Cloud Services); Wang, Shane; OpenStack Development 
Mailing List (not for usage questions)
Cc: Lee, Alexis; Tan, Lin
Subject: Re: [Nova] Detect changes in object model

 Sounds good to me. The list base objects don't have methods to make changes 
 to the list - so it would be a case of iterating looking at each object in 
 the list. That would be ok. 

Hmm? You mean for NovaObjects that are lists? I hesitate to expose lists as 
changed when one of the objects inside has changed because I think that sends 
the wrong message. However, I think it makes sense to have a different method 
on lists for "are any of your contents changed?"

I'll cook up a patch to implement what I'm talking about so you can take a look.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Detect changes in object model

2014-01-13 Thread Murray, Paul (HP Cloud Services)
Yes, I agree. 

Actually, I am trying to infer what the programming model for this is as we 
go along. 

Personally I would have been happy with only marking the fields when they are 
set. Then, if you want to change a list somehow you would get it and then set 
it again, e.g.: 
  mylist = obj.alist
  # ... do something to mylist ...
  obj.alist = mylist
  obj.save()

Having said that, it can be convenient to use the data structures in place. In 
which case we need all these means to track the changes and they should go in 
the base classes so they are used consistently.

So in short, I am happy with your dirty children :)

Paul.

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com] 
Sent: 13 January 2014 15:26
To: Murray, Paul (HP Cloud Services); Wang, Shane; OpenStack Development 
Mailing List (not for usage questions)
Cc: Lee, Alexis; Tan, Lin
Subject: Re: [Nova] Detect changes in object model

 ObjectListBase has a field called objects that is typed 
 fields.ListOfObjectsField('NovaObject'). I can see methods for count 
 and index, and I guess you are talking about adding a method for "are 
 any of your contents changed" here. I don't see other list operations 
 (like append, insert, remove, pop) that modify the list. If these were 
 included they would have to mark the list as changed so it is picked 
 up when looking for changes.
 
 Do you see these belonging here or would you expect those to go in a 
 sub-class if they were wanted?

Well, I've been trying to avoid implying the notion that a list of things 
represents the content of the database. Meaning, I don't think it makes sense 
for someone to get a list of Foo objects, add another Foo to the list and then 
call save() on the list. I think that ends up with the assumption that the list 
matches the contents of the database, and if I add or remove things from the 
list, I can save() the contents to the database atomically. That definitely 
isn't something we can or would want to support.

That said, if we make the parent object consider the child to be dirty if any 
of its contents are dirty or the list itself is dirty (i.e. the list of objects 
has changed) that should give us the desired behavior for change tracking, 
right?

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Detect changes in object model

2014-01-10 Thread Murray, Paul (HP Cloud Services)
Sounds good to me. The list base objects don't have methods to make changes to 
the list - so it would be a case of iterating looking at each object in the 
list. That would be ok. 

Do we need the contents of the lists to be modified without assigning a new 
list? - that would need a little more work to allow the changes and to track 
them there too.

Paul.

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com] 
Sent: 10 January 2014 14:42
To: Wang, Shane; OpenStack Development Mailing List (not for usage questions)
Cc: Murray, Paul (HP Cloud Services); Lee, Alexis; Tan, Lin
Subject: Re: [Nova] Detect changes in object model

 If an object A contains another object or object list (called 
 sub-object), any change that happens in the sub-object can't be detected 
 by obj_what_changed() in object A.

Well, like the Instance object does, you can override obj_what_changed() to 
expose that fact to the caller. However, I think it might be good to expand the 
base class to check, for any NovaObject fields, for the
obj_what_changed() of the child.
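Roughly what that check could look like as a method on the NovaObject base class 
(a sketch only, not the actual patch):

    def obj_what_changed(self):
        changes = set(self._changed_fields)
        for name in self.fields:
            # Treat a field as changed if it holds a child NovaObject whose
            # own contents have changed.
            if (self.obj_attr_is_set(name)
                    and isinstance(getattr(self, name), NovaObject)
                    and getattr(self, name).obj_what_changed()):
                changes.add(name)
        return changes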

How does that sound?

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][database] Update compute_nodes table

2013-12-04 Thread Murray, Paul (HP Cloud Services)
Hi Abbass,

I guess you read the blueprint Russell referred to. I think you actually are 
saying the same thing - but please read the steps below and tell me if they don't cover 
what you want.

This is what it will do:

1.   Add a column to the compute_nodes table for a JSON blob

2.   Add a plug-in framework for additional resources in the resource_tracker 
(just like filters in the filter scheduler)

3.   Resource plugin classes will implement things like:

a.   a "claims test" method

b.   an "add your data here" method (so it can populate the JSON blob)

4.   The additional column is available in host_state at the filter scheduler

You will then be able to do any or all of the following:

1.   Add new parameters to requests in extra_specs

2.   Add new filter/weight classes as scheduler plugins

a.   Will have access to filter properties (including extra_specs)

b.  Will have access to extra resource data (from compute node)

c.   Can generate limits

3.   Add new resource classes as scheduler plugins

a.   Will have access to filter properties (including extra specs)

b.  Will have access to limits (from scheduler)

c.   Can generate extra resource data to go to scheduler

Does this match your needs?
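To make step 3 concrete, a resource plugin interface along these lines might look 
roughly like this (names illustrative only, not an agreed API):

    class ResourcePlugin(object):
        """Base class for extensible resource tracker plugins."""

        def test(self, usage, limits):
            # 3a: the "claims test" - return None if the request fits,
            # otherwise a reason string, mirroring the existing claims.
            raise NotImplementedError()

        def write_resources(self, extra_resources):
            # 3b: "add your data here" - populate this plugin's entries in
            # the JSON blob stored in the new compute_nodes column (step 1).
            raise NotImplementedError()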

There are also plans to change how data goes from compute nodes to scheduler 
(i.e. not through the database). This will remove the database from the 
equation. But that can be kept as a separate concern.

Paul.



From: Abbass MAROUNI [mailto:abbass.maro...@virtualscale.fr]
Sent: 03 December 2013 08:54
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][database] Update compute_nodes table

I am aware of this work, in fact I reused a column (pci_stats) in the 
compute_nodes table to store a JSON blob.
I track the resource in the resource_tracker and update the column and then use 
the blob in a filter.
Maybe I should reformulate my question: how can I add a column to the table and 
use it in the resource_tracker without breaking something?

Best regards,

2013/12/2 
openstack-dev-requ...@lists.openstack.org


--

Message: 1
Date: Mon, 02 Dec 2013 12:06:21 -0500
From: Russell Bryant rbry...@redhat.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][database] Update compute_nodes
table
Message-ID: 529cbe0d@redhat.com
Content-Type: text/plain; charset=ISO-8859-1

On 12/02/2013 11:47 AM, Abbass MAROUNI wrote:
 Hello,

 I'm looking for a way to add a new attribute to the compute nodes by
  adding a column to the compute_nodes table in the nova database in order to
 track a metric on the compute nodes and use it later in nova-scheduler.

 I checked the  sqlalchemy/migrate_repo/versions and thought about adding
 my own upgrade then sync using nova-manage db sync.

 My question is :
 What is the process of upgrading a table in the database ? Do I have to
 modify or add a new variable in some class in order to associate the
 newly added column with a variable that I can use ?

Don't add this.  :-)

There is work in progress to just have a column with a json blob in it
for additional metadata like this.

https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
https://wiki.openstack.org/wiki/ExtensibleResourceTracking

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][metrics] Additional metrics

2013-11-29 Thread Murray, Paul (HP Cloud Services)
Hi Abbass, 

I am in the process of coding some of this now - take a look at 

https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking - now 
has a specification document attached 
https://etherpad.openstack.org/p/IcehouseNovaExtensibleSchedulerMetrics - the 
design summit session on this topic

see what you think and feel free to comment - I think it covers exactly what 
you describe.

Paul.


Paul Murray
HP Cloud Services
+44 117 312 9309




-Original Message-
From: Lu, Lianhao [mailto:lianhao...@intel.com] 
Sent: 22 November 2013 02:03
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][scheduler][metrics] Additional metrics


Abbass MAROUNI wrote on 2013-11-21:
 Hello,
 
 I'm in the process of writing a new scheduling algorithm for openstack nova.
 I have a set of compute nodes that I'm going to filter and weigh according to 
 some metrics collected from these compute nodes.
 I saw nova.compute.resource_tracker and metrics (ram, disk and cpu) 
 that it collects from compute nodes and updates the rows corresponding to 
 compute nodes in the database.
 
 I'm planning to write some modules that will collect the new metrics 
 but I'm wondering if I need to modify the database schema by adding 
 more columns in the 'compute_nodes' table for my new metrics. Will 
 this require some modification to the compute model ? Then how can I use 
 these metrics during the scheduling process, do I fetch each compute node row 
 from the database ? Is there any easier way around this problem ?
 
 Best Regards,

There are currently some effort on this:
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking 

- Lianhao



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Ceilometer vs. Nova internal metrics collector for scheduling (was: New DB column or new DB table?)

2013-07-19 Thread Murray, Paul (HP Cloud Services)
Hi Sean,

Do you think the existing static allocators should be migrated to going through 
ceilometer - or do you see that as different? Ignoring backward compatibility.

The reason I ask is I want to extend the static allocators to include a couple 
more. These plugins are the way I would have done it. Which way do you think 
that should be done?

Paul.

-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: 19 July 2013 12:04
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] Ceilometer vs. Nova internal metrics 
collector for scheduling (was: New DB column or new DB table?)

On 07/19/2013 06:18 AM, Day, Phil wrote:
 Ceilometer is a great project for taking metrics available in Nova and other 
 systems and making them available for use by Operations, Billing, Monitoring, 
 etc - and clearly we should try and avoid having multiple collectors of the 
 same data.

 But making the Nova scheduler dependent on Ceilometer seems to be the wrong 
 way round to me - scheduling is such a fundamental operation that I want Nova 
 to be self sufficient in this regard.   In particular I don't want the 
 availability of my core compute platform to be constrained by the 
 availability of my (still evolving) monitoring system.

 If Ceilometer can be fed from the data used by the Nova scheduler then that's 
 a good plus - but not the other way round.

I assume it would gracefully degrade to the existing static allocators if 
something went wrong. If not, well that would be very bad.

Ceilometer is an integrated project in Havana. Utilization based scheduling 
would be a new feature. I'm not sure why we think that duplicating the metrics 
collectors in new code would be less buggy than working with Ceilometer. Nova 
depends on external projects all the time.

If we have a concern about robustness here, we should be working as an overall 
project to address that.

-Sean

--
Sean Dague
http://dague.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Ceilometer vs. Nova internal metrics collector for scheduling

2013-07-19 Thread Murray, Paul (HP Cloud Services)
If we agree that something like capabilities should go through Nova, what do 
you suggest should be done with the change that sparked this debate: 
https://review.openstack.org/#/c/35760/ 

I would be happy to use it or a modified version.

Paul.

-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: 19 July 2013 14:28
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] Ceilometer vs. Nova internal metrics 
collector for scheduling

On 07/19/2013 08:30 AM, Andrew Laski wrote:
 On 07/19/13 at 12:08pm, Murray, Paul (HP Cloud Services) wrote:
 Hi Sean,

 Do you think the existing static allocators should be migrated to 
 going through ceilometer - or do you see that as different? Ignoring 
 backward compatibility.

 It makes sense to keep some things in Nova, in order to handle the 
 graceful degradation needed if Ceilometer couldn't be reached.  I see 
 the line as something like capabilities should be handled by Nova, 
 memory free, vcpus available, etc... and utilization metrics handled 
 by Ceilometer.

Yes, that makes sense to me. I'd be happy with that.

-Sean

--
Sean Dague
http://dague.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] New DB column or new DB table?

2013-07-18 Thread Murray, Paul (HP Cloud Services)
Hi All,

I would like to chip in with something from the side here (sorry to stretch the 
discussion out).

I was looking for a mechanism to do something like this in the context of this 
blueprint on network aware scheduling: 
https://blueprints.launchpad.net/nova/+spec/network-bandwidth-entitlement 
Essentially the problem is that I need to add network bandwidth resource 
allocation information just like vcpu, memory and disk space already has. I 
could hard code this just as they are, but I can also think of a couple of 
others we would like to add that may be more specific to a given installation. 
So I could do with a generic way to feed this information back from the compute 
node to the scheduler just like this.

However, my use case is not the same - it is not meant to be for 
monitored/statistical utilization info. But I would like a similar mechanism to 
allow the scheduler to keep track of more general / extensible resource 
allocation.

Do you have any thoughts on that? Again, I don't mean to deflect the discussion - 
I just have another use case.

Paul.


-Original Message-
From: Sean Dague [mailto:s...@dague.net]
Sent: 18 July 2013 12:05
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] New DB column or new DB table?

On 07/17/2013 10:54 PM, Lu, Lianhao wrote:
 Hi fellows,

 Currently we're implementing the BP
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling.
The main idea is to have an extensible plugin framework on nova-compute 
where every plugin can get different metrics (e.g. CPU utilization, 
memory cache utilization, network bandwidth, etc.) to store into the 
DB, and the nova-scheduler will use that data from the DB for scheduling decisions.

 Currently we add a new table to store all the metric data and have the 
nova-scheduler join-load the new table with the compute_nodes table to get 
all the data (https://review.openstack.org/35759). Someone is concerned 
about the performance penalty of the join-load operation when there is a lot 
of metric data stored in the DB for every single compute node. Don 
suggested adding a new column to the current compute_nodes table in the DB, 
putting all metric data into a dictionary key/value format and storing 
the JSON-encoded string of the dictionary in that new column.

 I'm just wondering which way has less performance impact: join-loading a 
new table with quite a lot of rows, or JSON encoding/decoding a dictionary 
with a lot of key/value pairs?

 Thanks,
 -Lianhao

I'm really confused. Why are we talking about collecting host metrics 
in nova when we've got a whole project to do that in ceilometer? I 
think utilization based scheduling would be a great thing, but it 
really ought to be interfacing with ceilometer to get that data. Storing 
it again in nova (or even worse collecting it a second time in nova) seems 
like the wrong direction.

I think there was an equiv patch series at the end of Grizzly that was 
pushed out for the same reasons.

If there is a reason ceilometer can't be used in this case, we should 
have that discussion here on the list. Because my initial reading of 
this blueprint and the code patches is that it partially duplicates 
ceilometer function, which we definitely don't want to do. Would be happy to 
be proved wrong on that.

   -Sean

--
Sean Dague
http://dague.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] New DB column or new DB table?

2013-07-18 Thread Murray, Paul (HP Cloud Services)
Hi Jay, Lianhao, All,

Sorry if this comes out of order - for some reason I am not receiving the 
messages so I'm cut-and-pasting from the archive :( 

I think I might mean something closer to Brian's blueprint (now I've seen it): 
https://blueprints.launchpad.net/nova/+spec/heterogeneous-instance-types 

Really I want to do resource management the way vcpu, memory and disk do. The 
scheduler chooses where to place instances according to an understanding of the 
available and free resources (and updates that when scheduling multiple 
instances, as in the consume_from_instance method of 
nova.scheduler.host_manager.HostState). Likewise, the compute node checks (in 
the test method of nova.compute.claims.Claim) that they are available before 
accepting an instance. When the instance is created it reports the usage back 
to the database via the resource tracker. This is actually accounting for what 
has been allocated, not an on-going measure of what is being used. 
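A small sketch of that accounting pattern applied to a hypothetical bandwidth 
resource (not existing Nova code):

    class BandwidthResource(object):
        def __init__(self, total_mbps):
            self.total = total_mbps
            self.used = 0

        def test(self, requested_mbps, limit=None):
            # Compute-node side, like nova.compute.claims.Claim: check the
            # allocation fits before accepting the instance.
            ceiling = limit if limit is not None else self.total
            return self.used + requested_mbps <= ceiling

        def consume_from_instance(self, requested_mbps):
            # Scheduler side, like HostState.consume_from_instance: account
            # for what has been allocated, not what is currently being used.
            self.used += requested_mbps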

Extra specs can certainly be used, but that does not provide the feedback loop 
between the compute nodes and the scheduler necessary to do the accounting of 
resource consumption.

What I would need for a generic way to do this is plugins at the compute node, 
a way to pass arbitrary resource consumption information back through the 
database, and plugins at the scheduler. So I am going beyond what is described 
here but the basic mechanisms are the same. The alternative is to code in each 
new resource we want to manage (which may not be that many really - but they 
may not be there for all installations).

Interestingly the 
https://blueprints.launchpad.net/nova/+spec/generic-host-state-for-scheduler 
blueprint (referenced in the patch) does talk about going to ceilometer. And 
that does seem to make sense to me. 

BTW, I'm getting all the other emails - just not this thread!

Bemused...
Paul


On 07/18/2013 10:44 AM, Murray, Paul (HP Cloud Services) wrote:
 Hi All,

 I would like to chip in with something from the side here (sorry to stretch 
 the discussion out).

 I was looking for a mechanism to do something like this in the context of 
 this blueprint on network aware scheduling: 
 https://blueprints.launchpad.net/nova/+spec/network-bandwidth-entitlement 
 Essentially the problem is that I need to add network bandwidth resource 
 allocation information just like vcpu, memory and disk space already has. I 
 could hard code this just as they are, but I can also think of a couple of 
 others we would like to add that may be more specific to a given 
 installation. So I could do with a generic way to feed this information back 
 from the compute node to the scheduler just like this.

 However, my use case is not the same - it is not meant to be for 
 monitored/statistical utilization info. But I would like a similar mechanism 
 to allow the scheduler to keep track of more general / extensible resource 
 allocation.

How is that a different use case from Lianhao's? You mean instead of 
collected usage metrics you want to allocate based on the value of a 
transient statistic like current network bandwidth utilisation?

 Do you have any thoughts on that? Again, don't mean to deflect the discussion 
 - just I have another use case.

I tend to agree with both Brian and Sean on this. I agree with Sean in 
that it seems duplicative to store compute_node_resources in the Nova 
database when a simple REST call to Ceilometer would avoid the 
duplication. And I agree with Brian that the extra_specs scheduler 
filters seem like they would fit the "check a current bandwidth 
statistic" type of use case you describe above, Paul.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev