Re: [openstack-dev] Tricky questions - 1/2 Quantum Network Object

2013-10-14 Thread Marco Fornaro
Hi Gong,

Thanks so much for your answers

Just one more question: you wrote "You can create networks with just one
subnet, but the vlan id will run out soon if vlan is used."
Sorry, but how can the VLAN IDs "run out soon"? Is it really possible to
exhaust them?

Best Regards

Marco


From: Yongsheng Gong [mailto:gong...@unitedstack.com]
Sent: den 11 oktober 2013 10:56
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Tricky questions - 1/2 Quantum Network Object



On Fri, Oct 11, 2013 at 4:41 PM, Marco Fornaro 
marco.forn...@huawei.com wrote:
Hi All,

(I already posted this on the openstack mailing list, but perhaps it's more of a
developer topic :))
Some Tricky questions I ask help for (email 1 of 2):


Quantum Network object
In the "OpenStack Networking Guide" - "Using OpenStack Compute with
OpenStack" - "Advanced VM creation"
(http://docs.openstack.org/grizzly/openstack-network/admin/content/advanceed_vm_creation.html)
there are examples of booting a VM on one or more NETWORKs (meaning the quantum
Network object):
nova boot --image img --flavor flavor \
--nic net-id=net1-id --nic net-id=net2-id vm-name

BUT if you look at the description of the network object in the API abstraction,
it looks like a collection of subnets (meaning the quantum object), so
basically a collection of IP addresses like 192.168.100.0/24

SO (first question): what happens if the network where I boot the VM has more
than one subnet?...I suppose the VM should have a nic for EACH subnet of the
network!
You will just get a nic for each network, not for each subnet of the network.
To choose the subnet, use --nic net-id=net-uuid,v4-fixed-ip=ip-addr
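
For illustration, a minimal sketch of that syntax (the network ID and address
below are placeholders, not values from this thread):

nova boot --image img --flavor flavor \
    --nic net-id=net1-id,v4-fixed-ip=192.168.100.12 vm-name

This requests a single NIC on net1 and asks for the fixed IP 192.168.100.12,
which implicitly selects the subnet containing that address.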

THEN (second question): why do I need a network object? Shouldn't it be more
practical to have just the subnet object?...why do I need to create a Network if
it's just a collection of subnets?
Under the hood, the traffic among networks is isolated by tunnel id, vlan id
or something else. You can create networks with just one subnet, but the vlan
ids will run out soon if vlan is used.

We can have many networks, and subnets in different networks can have overlapping IPs.
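
As a hedged sketch of the "one network, many subnets" case (Grizzly-era quantum
CLI assumed; names and CIDRs are placeholders):

quantum net-create net1
quantum subnet-create net1 192.168.100.0/24
quantum subnet-create net1 10.10.10.0/24
nova boot --image img --flavor flavor --nic net-id=net1-id vm-name

The VM still gets only one NIC on net1; its fixed IP is simply allocated from
one of the two subnets unless v4-fixed-ip is used to pick one explicitly.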


Thanks in advance for any help

Best Regards

Marco






Re: [openstack-dev] Tricky questions - 1/2 Quantum Network Object

2013-10-14 Thread Yongsheng Gong
On Mon, Oct 14, 2013 at 2:55 PM, Marco Fornaro marco.forn...@huawei.com wrote:

 Hi Gong,

 Thanks so much for your answers

 Just one more question: you wrote “You can create networks with just one
 subnet, but the vlan id will run out soon if vlan is used.”

 Sorry, but how can the VLAN IDs “run out soon”? Is it really possible to
 exhaust them?

For example, each network uses one vlan ID if vlan is used for the linux
bridge plugin. If we create networks with just one subnet each, we will need
more vlan ids than if networks have more than one subnet.



 Best Regards

 Marco



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-14 Thread Thierry Carrez
Joe Gordon wrote:
 [...]
 This sounds like a very myopic solution to the issue you originally
 raised, and I don't think it will solve the underlying issues.
 
 Taking a step back, you originally raised a concern about how we
 prioritize reviews with the havana-rc-potential tag.
 [...]

I'm with Joe here. Additionally, I don't see how the proposed solution
would solve anything for the original issue.

You propose letting a subteam approve incremental patches in a specific
branch, and propose a big blob every milestone to merge in Nova proper,
so that the result can be considered golden and maintained by the Nova
team. So nova-core would still have to review it, since they put their
name on it. I don't see how reviewing the big blob is a lot easier than
reviewing incremental patches. Doing it under the time pressure of the
upcoming milestone won't drive better results.

Furthermore, the issue you raised was with havana release candidates,
for which we'd definitely not take the big blob approach anyway, and
go incremental all the way.


The subsystem mechanism works for the Linux kernel due to a trust model.
Linus doesn't review all patches coming from a subsystem maintainer, he
developed a trust in the work coming from that person over the years.

The equivalent in the OpenStack world would be to demonstrate that you
understand enough of the rest of the Nova code to avoid breaking it (and
to follow new conventions and features added there). This is done by
participating in reviews on the rest of the code. Then your +1s can be
considered as +2s, since you're the domain expert and you are trusted to
know enough of the rest of the code. That model works quite well with
oslo-incubator, without needing a separate branch for every incubated API.

Finally, the best way to not run into those priority hurdles is by
anticipating them. Since hyper-V is not tested at the gate, reviewers
will always be more reluctant to accept late features and RC fixes
that affect hyper-V code. Landing features at the beginning of a cycle
and working on bugfixes well before we enter the release candidate
phases... that's the best way to make sure your work gets in before release.

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Tricky questions - 1/2 Quantum Network Object

2013-10-14 Thread Salvatore Orlando
Hi Marco,

At least two of your questions clearly hint at the dichotomy between subnet
and network, which appear to be redundant.
A multi-homing use case on a single network is a potential use case for
this, albeit a very limited one, since one might argue that in a cloud
scenario instead of allocating two IPs from two different subnets on a NIC
one would rather have two NICs with one IP each.

I agree that perhaps in 99% of cases there's no need for separating these
two concepts. Automatically provisioning a network when a subnet with no
network id is created is something which might be considered.
In my opinion, a reason for which the network and subnet concepts are separated
is to allow both L2-only and L2/L3 use cases. For instance, with
L2-only you might have Neutron provisioning your networks and then either
no IP configuration at all, or some other service doing IP configuration.
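
As a minimal sketch of the L2-only case (assuming the Grizzly-era quantum CLI;
the names are placeholders):

quantum net-create l2-only-net
quantum port-create l2-only-net

The network has no subnet, so the port carries no fixed IP: Neutron provides
only L2 connectivity, and any IP configuration would come from another service.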

On a separate note, exhaustion of the VLAN pool is very likely in cloud
deployments. There is a hard limit given the 12-bit size of the VLAN
identifier. Also, if switches are not trunked and VLANs are provisioned
there too, there might be a further limitation on the number of VLANs which
can be configured on each switch port. But please don't trust me when it
comes to networking appliances, as I have a rather limited knowledge on the
subject.
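
For reference, a hedged example of how that hard limit surfaces when creating
a provider network (assuming the provider extension is enabled and physnet1 is
configured in the plugin; the values are placeholders):

quantum net-create vlan-net --provider:network_type vlan \
    --provider:physical_network physnet1 --provider:segmentation_id 1000

The segmentation_id must fall in the 1-4094 VLAN range, which is the 12-bit
limit mentioned above, so each VLAN-backed tenant network consumes one of at
most ~4094 usable IDs per physical network.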

Salvatore




[openstack-dev] [Ceilometer] Havana RC2 available

2013-10-14 Thread Thierry Carrez
Good morning,

Due to various issues detected in RC1 testing, we just created a new
Havana release candidate for OpenStack Metering (Ceilometer).

You can find the RC2 tarball and see the list of fixed bugs at:

https://launchpad.net/ceilometer/havana/havana-rc2

This is hopefully the last Havana release candidate for Ceilometer.
Unless a last-minute release-critical regression is found that warrants
another release candidate respin, this RC2 will be formally included in
the common OpenStack 2013.2 final release next Thursday. You are
therefore strongly encouraged to test and validate this tarball.

Alternatively, you can grab the code at:
https://github.com/openstack/ceilometer/tree/milestone-proposed

If you find a regression that could be considered release-critical,
please file it at https://bugs.launchpad.net/ceilometer/+filebug and tag
it *havana-rc-potential* to bring it to the release crew's attention.

Happy regression hunting,

-- 
Thierry Carrez (ttx)



[openstack-dev] Incubation Request for OpenStack UX

2013-10-14 Thread Jaromir Coufal

Dear Technical Committee Members and OpenStack Community,

after previous brainstorming, suggestions and discussions in the community,
we came up with the OpenStack User Experience incubation proposal [0], and
by this e-mail we would like to officially apply for incubation. The
official application is available on the OpenStack wiki:


https://wiki.openstack.org/wiki/UX/Incubation

We are aware of the fact that there are ongoing TC elections, so we need
to wait for the new TC to be in place.


If at all possible, I would like to ask for one extra slot at the
Icehouse Design Summit, so we can talk about the UX of OpenStack, get in
touch with all OpenStack projects at once and figure out the best way
for our cooperation.


[0] https://plus.google.com/115177641540718537778/posts/iumZKYXbe4N


Sincerely yours,
Jaromir Coufal
Proposed UX Technical Lead
Red Hat


Re: [openstack-dev] [nova] [tempest] [ceilometer] Looking for clarification on the diagnostics API

2013-10-14 Thread Bob Ball
I'm happy with that approach - again I've not seen any discussions about how 
this should be done.

I've added [tempest] and [ceilometer] tags so we can hopefully get input from 
the guys involved.

Bob

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: 13 October 2013 05:21
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] Looking for clarification on the 
diagnostics API

Hi,
I agree with Matt here. This is not broad enough. One option is to have a 
tempest class that overrides for various backend plugins. Then the test can be 
hardened for each driver. I am not sure if that is something that has been
talked about.
Thanks
Gary

From: Matt Riedemann mrie...@us.ibm.com
Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: Sunday, October 13, 2013 6:13 AM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Looking for clarification on the 
diagnostics API

There is also a tempest patch now to ease some of the libvirt-specific keys 
checked in the new diagnostics tests there:

https://review.openstack.org/#/c/51412/

To relay some of my concerns that I put in that patch:

I'm not sure how I feel about this. It should probably be more generic but I 
think we need more than just a change in tempest to enforce it, i.e. we should 
have a nova patch that changes the doc strings for the abstract compute driver 
method to specify what the minimum keys are for the info returned, maybe a doc 
api sample change, etc?

For reference, here is the mailing list post I started on this last week:

http://lists.openstack.org/pipermail/openstack-dev/2013-October/016385.html

There are also docs here (these examples use xen and libvirt):

http://docs.openstack.org/grizzly/openstack-compute/admin/content/configuring-openstack-compute-basics.html

And under procedure 4.4 here:

http://docs.openstack.org/admin-guide-cloud/content/ch_introduction-to-openstack-compute.html#section_manage-the-cloud

=

I also found this wiki page related to metering and the nova diagnostics API:

https://wiki.openstack.org/wiki/EfficientMetering/FutureNovaInteractionModel

So it seems that if at some point this will be used with ceilometer, it should
be standardized a bit, which is what the Tempest patch starts, but I don't want
it to get lost there.


Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development


Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com

[IBM]

3605 Hwy 52 N
Rochester, MN 55901-1407
United States






From: Gary Kotton gkot...@vmware.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,
Date: 10/12/2013 01:42 PM
Subject: Re: [openstack-dev] [nova] Looking for clarification on the
diagnostics API




Yup, it seems to be hypervisor specific. I have added in the VMware support
following your correction in the VMware driver.
Thanks
Gary

From: Matt Riedemann mrie...@us.ibm.com
Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: Thursday, October 10, 2013 10:17 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Looking for clarification on the 
diagnostics API

Looks like this has been brought up a couple of times:

https://lists.launchpad.net/openstack/msg09138.html

https://lists.launchpad.net/openstack/msg08555.html

But they seem to kind of end up in the same place I already am - it seems to be 
an open-ended API that is hypervisor-specific.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development


Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com

[IBM]

3605 Hwy 52 N
Rochester, MN 55901-1407
United States







From: Matt Riedemann/Rochester/IBM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,
Date: 10/10/2013 02:12 PM
Subject: [nova] Looking for clarification on the diagnostics API



Tempest recently got some new tests for the nova diagnostics API [1] which 
failed when I was running against the powervm driver since it doesn't implement 
that API.  I started looking at other drivers that did and found that libvirt, 
vmware and xenapi at least had code for the get_diagnostics method.  I found 
that the vmware driver was re-using its get_info method for get_diagnostics
which led to bug 

Re: [openstack-dev] Incubation Request for OpenStack UX

2013-10-14 Thread Thierry Carrez
Jaromir Coufal wrote:
 Dear Technical Committee Members and OpenStack Community,
 
 after previous brainstorming, suggestions and discussions in community,
 we came out with OpenStack User Experience incubation proposal [0] and
 by this e-mail we would like to officially apply for incubation. The
 official application is available at OpenStack wiki:
 
 https://wiki.openstack.org/wiki/UX/Incubation

Incubation is a process for a project to become part of the
deliverables that are shipped as the common OpenStack release every 6
months:

https://wiki.openstack.org/wiki/Governance/NewProjects

If I read your application correctly, what you want is to become a new
Program (like QA or Documentation). The process to follow here is
not incubation but applying to become a new program:

https://wiki.openstack.org/wiki/Governance/NewPrograms

The first process is for projects, the second process is for teams.

 We are aware of the fact that there are ongoing TC elections, so we need
 to wait for new TC to take place.

Right, whatever process is followed, we probably won't be considering it
until after the Design Summit (during which the first informal TC
gathering will be held).

 If it would be all possible, I would like to ask for one extra slot at
 Icehouse Design Summit, so we can talk about UX of OpenStack, get in
 touch with all OpenStack projects at once and figure out the best way
 for our cooperation.

It's a bit difficult because our resources are limited and we have to
draw the line in the sand somewhere... giving a slot to one prospective
program and not another would be playing favorites.

You can get invited in slots allocated to another program, though
(Horizon?) to discuss UX in general and the opportunity to make it a
separate program from Horizon and TripleO.

-- 
Thierry Carrez (ttx)



[openstack-dev] [novaclient]should administrator can see all servers of all tenants by default?

2013-10-14 Thread Lingxian Kong
Hi there:

When I perform some operations on servers with the administrator role, using the
CLI, I met the same strange behavior as described in
https://bugs.launchpad.net/python-novaclient/+bug/1050901. I'm afraid
Glance has the same issue.

What I want to figure out is: what is the expected behavior when taking
some actions on another tenant's server, using a server name that is globally
unique, with the admin role?

I apologize if this question was already covered and I missed it, but it
has really bothered me for a while.
-- 
**
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com


[openstack-dev] [Murano] Team meeting reminder - October 14

2013-10-14 Thread Alexander Tivelkov
Hi!

This is just a reminder about the regular meeting of Murano-team in IRC.
The meeting will be held in #openstack-meeting-alt channel at 8am Pacific.

The complete agenda of the meeting is available here:
https://wiki.openstack.org/wiki/Meetings/MuranoAgenda


--
Kind Regards,
Alexander Tivelkov
Principal Software Engineer

OpenStack Platform Product division
Mirantis, Inc

+7(495) 640 4904, ext 0236
+7-926-267-37-97(cell)
Vorontsovskaya street 35 B, building 3,
Moscow, Russia.
ativel...@mirantis.com


Re: [openstack-dev] Incubation Request for OpenStack UX

2013-10-14 Thread Jaromir Coufal

Hey Thierry,

thanks for your notes. Few comments are following inline.

On 2013/14/10 11:00, Thierry Carrez wrote:

Jaromir Coufal wrote:

Dear Technical Committee Members and OpenStack Community,

after previous brainstorming, suggestions and discussions in community,
we came out with OpenStack User Experience incubation proposal [0] and
by this e-mail we would like to officially apply for incubation. The
official application is available at OpenStack wiki:

https://wiki.openstack.org/wiki/UX/Incubation

Incubation is a process for an project to become part of the
deliverables that are shipped as the common OpenStack release every 6
months:

https://wiki.openstack.org/wiki/Governance/NewProjects

If I read your application correctly, what you want is to become a new
Program (like QA or Documentation). The process to follow here is
not incubation but applying to become a new program:

https://wiki.openstack.org/wiki/Governance/NewPrograms

The first process is for projects, the second process is for teams.

Oh yes, right. I am sorry I confused those two.

What we want is to become an official part of OpenStack and Program is 
the right fit, exactly (like QA or Doc). The application might remain 
the same (the structure is very similar).


[snip]


If it would be all possible, I would like to ask for one extra slot at
Icehouse Design Summit, so we can talk about UX of OpenStack, get in
touch with all OpenStack projects at once and figure out the best way
for our cooperation.

It's a bit difficult because our resources are limited and we have to
draw the line in the sand somewhere... giving a slot to one prospective
program and not another would be playing favorites.

You can get invited in slots allocated to another program, though
(Horizon?) to discuss UX in general and the opportunity to make it a
separate program from Horizon and TripleO.
Sure, I completely understand. The session proposal is already placed
under Horizon; I just wanted to try not to consume Horizon slots since
UX will be a general topic. No problem, we will try to fit under Horizon for now.


Thanks
-- Jarda


Re: [openstack-dev] Change I3e080c30: Fix resource length in project_user_quotas table for Havana?

2013-10-14 Thread Joshua Hesketh


Rackspace Australia

On 10/12/13 12:35 AM, Russell Bryant wrote:

On 10/10/2013 11:15 PM, Joshua Hesketh wrote:

Hi there,

I've been reviewing this change which is currently proposed for master
and I think it needs to be considered for the next Havana RC.

Change I3e080c30: Fix resource length in project_user_quotas table
https://review.openstack.org/#/c/47299/

I'm new to the process around these kinds of patches but I imagine that
we should use one of the placeholder migrations in the havana branch and
cherry-pick it back into master?

The fix looks good, thanks!

I agree that this is good for Havana.  I'll see if I can slip it into
havana-rc2.

The process is generally merging the fix to master and then backporting
it.  In this case the backport can't be the same.  Instead of using a
new migration number, we'll use one of the migration numbers reserved
for havana backports.

So I'm a bit late responding to this but I'm confused about this process 
and was wondering if you could please clarify.


Why wouldn't we use one of the placeholders in master and backport it as
is to Havana? What happens when an administrator installs Havana and
then later upgrades to Icehouse? Their migration version number will be
at 226, having already applied the patch at 217. However, in Icehouse the
patch reappears as 227 and will therefore attempt to apply again*


Now typing this I am seeing this will pose issues for those running 
against trunk who will have already made it to migration 226 before the 
fix is implemented at 217. Still, how do we prevent the above scenario 
from causing problems?


Cheers,
Josh

*granted in this particular case there is no harm in the migration 
applying again but in other migration backports this could be problematic.





Re: [openstack-dev] [Swift] container forwarding/cluster federation blueprint

2013-10-14 Thread Deliot, Eric
Hi,

With a global cluster, you can have your replicas stored in different regions 
within your cluster whereas container forwarding is about storing all objects 
within a container in a different swift cluster than the one you contacted in 
the first place.  These two clusters could well be co-located.  The swift 
cluster forwarded to may have a different replica count than the other clusters 
so that you can provide different QoS for different containers or it may just 
be there to give you extra space to accommodate a large account across several 
clusters but within a single namespace. The forwarding is happening between 
proxies of the clusters involved and each cluster is a regular cluster with its 
own configuration.

More details on container-forwarding  can be found in this wiki page:

https://wiki.openstack.org/wiki/Swift/ClusterFederationBlueprint

Eric

On 14/10/2013 03:31, Sam Morrison wrote:
Hi,

I'd be interested in the differences this has to using swift global clusters?

Cheers,
Sam



On 12/10/2013, at 3:49 AM, Coles, Alistair alistair.co...@hp.com wrote:

We’ve just committed a first set of patches to gerrit that address this 
blueprint:

https://blueprints.launchpad.net/swift/+spec/cluster-federation

Quoting from that page: “The goal of this work is to enable account contents to 
be dispersed across multiple clusters, motivated by (a) accounts that might 
grow beyond the remaining capacity of a single cluster and (b) clusters 
offering differentiated service levels such as different levels of redundancy 
or different storage tiers. Following feedback at the Portland summit, the work 
is initially limited to dispersal at the container level, i.e. each container 
within an account may be stored on a different cluster, whereas every object 
within a container will be stored on the same cluster.”

It is work in progress, but we’d welcome feedback on this thread, or in person 
for anyone who might be at the hackathon in Austin next week.

The bulk of the new features are in this patch:
https://review.openstack.org/51236 (Middleware module for container forwarding.)

There’s a couple of patches refactoring/adding support to existing modules:
https://review.openstack.org/51242 (Refactor proxy/controllers obj & base http
code)
https://review.openstack.org/51228 (Store x-container-attr-* headers in 
container db.)

And some tests…
https://review.openstack.org/51245 (Container-forwarding unit and functional 
tests)


Regards,
Alistair Coles, Eric Deliot, Aled Edwards

HP Labs, Bristol, UK
-
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
. Registered No: 690597 England
The contents of this message and any attachments to it are confidential and may 
be legally privileged. If you have received this message in error, you should 
delete it from your system immediately and advise the sender.




Re: [openstack-dev] [Hyper-V] Havana status

2013-10-14 Thread Alessandro Pilotti


 On 14.10.2013, at 11:18, Thierry Carrez thie...@openstack.org wrote:
 
 Joe Gordon wrote:
 [...]
 This sounds like a very myopic solution to the issue you originally
 raised, and I don't think it will solve the underlying issues.
 
 Taking a step back, you originally raised a concern about how we
 prioritize reviews with the havana-rc-potential tag.
 [...]
 
 I'm with Joe here. Additionally, I don't see how the proposed solution
 would solve anything for the original issue.
 
 You propose letting a subteam approve incremental patches in a specific
 branch, and propose a big blob every milestone to merge in Nova proper,
 so that the result can be considered golden and maintained by the Nova
 team. So nova-core would still have to review it, since they put their
 name on it. I don't see how reviewing the big blob is a lot easier than
 reviewing incremental patches. Doing it under the time pressure of the
 upcoming milestone won't drive better results.
 
I already replied on this in the following emails, it was a proposal based on 
all the feedbacks to try to find a common ground, surely not the best option.

 Furthermore, the issue you raised was with havana release candidates,
 for which we'd definitely not take the big blob approach anyway, and
 go incremental all the way.
 
Ditto

 
 The subsystem mechanism works for the Linux kernel due to a trust model.
 Linus doesn't review all patches coming from a subsystem maintainer, he
 developed a trust in the work coming from that person over the years.
 
That's the only way to go IMO as the project gets bigger.

 The equivalent in the OpenStack world would be to demonstrate that you
 understand enough of the rest of the Nova code to avoid breaking it (and
 to follow new conventions and features added there). This is done by
 participating to reviews on the rest of the code. Then your +1s can be
 considered as +2s, since you're the domain expert and you are trusted to
 know enough of the rest of the code. That model works quite well with
 oslo-incubator, without needing a separate branch for every incubated API.
 

How should a driver break Nova? It's 100% decoupled. The only contact area is 
the driver interface that we simply consume, without changing it.

In the very rare cases in which we propose changes to Nova code (only the RDP 
patch so far in 3 releases) that'd be of course part of the Nova project, not 
the driver.

Oslo-incubator is definitely not a good example here as its code gets consumed 
by the other projects.

A separate project would give no concerns on breaking anything, only users of 
that specific driver would install it, e.g.:

pip install nova-driver-hyperv

For the moment, our Windows code is included in every Linux release with 
OpenStack (Ubuntu, RH/CentOS+RDO, etc). I find it quite funny to be honest.

 Finally, the best way to not run into those priority hurdles is by
 anticipating them. Since hyper-V is not tested at the gate, reviewers
 will always be more reluctant in accepting late features and RC fixes
 that affect hyper-V code. Landing features at the beginning of a cycle
 and working on bugfixes well before we enter the release candidate
 phases... that's the best way to make sure your work gets in before release.
 

What if a bug gets reported during the RC phase like it happened now? How can 
we work on it before it gets reported? Should I look for a crystal ball? :-)

Landing all features at the beginning of the cycle and spending the next 3
months begging for reviews that add almost nothing to the patches, and
without any guarantee that this will happen? That would simply mean being
constantly one entire release late in the development cycle, without any
advantage.

Besides the blueprints, the big problem is with bug fixes. Once you have a fix,
why wait weeks before releasing it and make users unhappy?

As an example, we have a couple of critical bugs for Havana, with their fixes
already under review, that nobody has even cared to triage, let alone review.

Considering that we are not singled out here, the only explanation is that the
Nova team is simply no longer able to keep up with the increasing amount of bugs
and new features, with the obvious negative impact on the users.

Let's face it: the Nova team cannot scale fast enough as the project size 
increases at this pace.

Delegation of responsibility on partitioned and decoupled areas is the only 
proven way out, as for example the Linux kernel project clearly shows.

Alessandro

 Regards,
 
 -- 
 Thierry Carrez (ttx)
 


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-14 Thread Christopher Yeoh
On Mon, 14 Oct 2013 11:58:22 +
Alessandro Pilotti apilo...@cloudbasesolutions.com wrote:
 
 As a an example we have a couple of critical bugs for Havana with
 their fix already under review that nobody cared even to triage, let
 alone review.
 

Anyone can join the nova-bugs team on launchpad and help triage the
incoming bugs. You don't need to be a core to do that.

Regards,

Chris



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-14 Thread Russell Bryant
On 10/14/2013 04:10 AM, Thierry Carrez wrote:
 Joe Gordon wrote:
 [...]
 This sounds like a very myopic solution to the issue you originally
 raised, and I don't think it will solve the underlying issues.

 Taking a step back, you originally raised a concern about how we
 prioritize reviews with the havana-rc-potential tag.
 [...]
 
 I'm with Joe here. Additionally, I don't see how the proposed solution
 would solve anything for the original issue.
 
 You propose letting a subteam approve incremental patches in a specific
 branch, and propose a big blob every milestone to merge in Nova proper,
 so that the result can be considered golden and maintained by the Nova
 team. So nova-core would still have to review it, since they put their
 name on it. I don't see how reviewing the big blob is a lot easier than
 reviewing incremental patches. Doing it under the time pressure of the
 upcoming milestone won't drive better results.
 
 Furthermore, the issue you raised was with havana release candidates,
 for which we'd definitely not take the big blob approach anyway, and
 go incremental all the way.

Regarding the original issue, I actually try very hard to stay on top
of how nova is doing with the review queue.  I wrote about this in
detail in my PTL candidacy (see the Code Review Process section of [1]).

I still maintain that the problem with review times is not quite as bad
as some people make it out to be now and then.  If there's an angle I'm
not tracking, I would love help adding more to these stats.

Of course, there's probably also quite a bit of variety in expectations.
 Perhaps we could do a better job of communicating what is a reasonable
expectation when posting reviews.

Note that the times look worse than usual right now, but that's
explained by a bunch of patches that were blocked by the feature freeze
being restored, and it looks like they've been waiting for review the
whole time, even though they were abandoned for a while.

http://russellbryant.net/openstack-stats/nova-openreviews.html

 The subsystem mechanism works for the Linux kernel due to a trust model.
 Linus doesn't review all patches coming from a subsystem maintainer, he
 developed a trust in the work coming from that person over the years.
 
 The equivalent in the OpenStack world would be to demonstrate that you
 understand enough of the rest of the Nova code to avoid breaking it (and
 to follow new conventions and features added there). This is done by
 participating to reviews on the rest of the code. Then your +1s can be
 considered as +2s, since you're the domain expert and you are trusted to
 know enough of the rest of the code. That model works quite well with
 oslo-incubator, without needing a separate branch for every incubated API.

While we don't have a MAINTAINERS file, I feel that we do this for Nova
today.  I do not expect everyone on nova-core to be an expert across the
whole tree.  Part of being on the core team is a trust in your reviews
that you would only +2 stuff that you are comfortable with.


[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015370.html

-- 
Russell Bryant



[openstack-dev] Ceilometer

2013-10-14 Thread Afef MDHAFFAR
Hi all,

I tried to install openstack via devstack, while including ceilometer.
The install of openstack is ok. The dashborad is accessible, and everything
seem to be fine.
However, I am not able to launch the ceilometer services. For instance,
ceilometer-collector returns the following error, which seems related to
a python coding error. Would you please help me to fix this bug?

Thanks,
Afef

-
 root@onodedomU:/etc/ceilometer# ceilometer-collector
2013-10-14 13:53:12.638 32022 INFO ceilometer.openstack.common.rpc.common
[-] Connected to AMQP server on localhost:5672
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:599:
DeprecationWarning: auto_delete exchanges has been deprecated
  'auto_delete exchanges has been deprecated'))
Traceback (most recent call last):
  File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py, line
99, in wait
writers.get(fileno, noop).cb(fileno)
  File /usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py,
line 194, in main
result = function(*args, **kwargs)
  File /opt/stack/ceilometer/ceilometer/openstack/common/service.py, line
448, in run_service
service.start()
  File /opt/stack/ceilometer/ceilometer/collector/service.py, line 138,
in start
super(CollectorService, self).start()
  File /opt/stack/ceilometer/ceilometer/openstack/common/rpc/service.py,
line 66, in start
self.manager.initialize_service_hook(self)
  File /opt/stack/ceilometer/ceilometer/collector/service.py, line 147,
in initialize_service_hook
'ceilometer.transformer',
  File /opt/stack/ceilometer/ceilometer/pipeline.py, line 356, in
setup_pipeline
with open(cfg_file) as fap:
TypeError: coercing to Unicode: need string or buffer, NoneType found
Removing descriptor: 4
2013-10-14 13:53:12.662 32022 ERROR ceilometer.openstack.common.threadgroup
[-] coercing to Unicode: need string or buffer, NoneType found
2013-10-14 13:53:12.662 32022 TRACE ceilometer.openstack.common.threadgroup
Traceback (most recent call last):
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup   File
/opt/stack/ceilometer/ceilometer/openstack/common/threadgroup.py, line
117, in wait
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup x.wait()
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup   File
/opt/stack/ceilometer/ceilometer/openstack/common/threadgroup.py, line
49, in wait
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup return self.thread.wait()
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup   File
/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168,
in wait
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup return self._exit_event.wait()
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup   File
/usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 116, in
wait
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup return hubs.get_hub().switch()
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup   File
/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in
switch
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup return self.greenlet.switch()
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup   File
/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194,
in main
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup result = function(*args,
**kwargs)
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup   File
/opt/stack/ceilometer/ceilometer/openstack/common/service.py, line 448,
in run_service
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup service.start()
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup   File
/opt/stack/ceilometer/ceilometer/collector/service.py, line 138, in start
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup super(CollectorService,
self).start()
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup   File
/opt/stack/ceilometer/ceilometer/openstack/common/rpc/service.py, line
66, in start
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup
self.manager.initialize_service_hook(self)
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup   File
/opt/stack/ceilometer/ceilometer/collector/service.py, line 147, in
initialize_service_hook
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup 'ceilometer.transformer',
2013-10-14 13:53:12.662 32022 TRACE
ceilometer.openstack.common.threadgroup   File
/opt/stack/ceilometer/ceilometer/pipeline.py, line 356, in setup_pipeline
2013-10-14 

[openstack-dev] [Climate] Nova dependencies in Climate

2013-10-14 Thread Dina Belova
Hello everyone!

Guys, now we have a Nova dependency in Climate, like:

-f http://tarballs.openstack.org/nova/nova-master.tar.gz#egg=nova-master
nova=master

That was done to implement a Nova-based scheduler filter for physical
reservations and store its code in the Climate project itself. Now Climate is
a really lightweight project that does not really need all these deps
connected with Nova. Still, for now that future filter is part of Climate's
physical reservations logic.

That's why we have the following question: do you think it's normal to have
such a dependency there, or would it be better to create a separate repo like
climate-filters and put any filters and their dependencies there?

What can you advise?

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] [Climate] Nova dependencies in Climate

2013-10-14 Thread Julien Danjou
On Mon, Oct 14 2013, Dina Belova wrote:

 That's why we have the following question: do you think it's normal to have
 such dependency there, or it will be better to create separate repo like
 climate-filters and put there any filters and their dependencies?

 What can you advise?

We do this in Ceilometer because we provide a special notifier for Nova.

You should be aware that if you rely on Nova internals, they change a
lot, and we have had our gate broken many times because of such changes
and unit tests not passing anymore.

Having this in a separate repository can make sure you don't have such
a problem in your whole repository, so that might be a good idea indeed.

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info




Re: [openstack-dev] [novaclient]should administrator can see all servers of all tenants by default?

2013-10-14 Thread Ben Nemec
 

I agree that this needs to be fixed. It's very counterintuitive, if
nothing else (which is also my argument against requiring all-tenants
for admin users in the first place). The only question for me is whether
to fix it in novaclient or in Nova itself. The comments on the review
from the previous discussion seemed to support the latter, but there was
no follow-up that I'm aware of. Maybe it's a problem because it's an API
change? For reference, see the discussion here:
https://review.openstack.org/#/c/39705/3/novaclient/v1_1/shell.py 

I think it should definitely be fixed for v3. v2 is a little trickier
because it's a change in behavior, but since v3 is coming soon I'm not
too hung up on getting it changed in v2. 

-Ben 

On 2013-10-14 10:13, Lingxian Kong wrote: 

 Seems no consensus was reached. I think we should bring it up and discuss it
 again. IMHO, it's a bug: http://paste.openstack.org/show/48359/
 [1]. I didn't see any explanation in the API doc or CLI help saying that
 we can only use server-name as a non-admin role.
 
 2013/10/14 Ben Nemec openst...@nemebean.com
 
 On 2013-10-14 04:29, Lingxian Kong wrote: 
 Hi there: 
 
 When I perform some operations on servers with administrator role, using CLI, 
 I met tha same strage behavior as described: 
 https://bugs.launchpad.net/python-novaclient/+bug/1050901 [2]. I'm afraid the 
 Glance has the same issue. 
 
 What I want to figure out is, what is the expected behavior when taking some 
 actions on a server of other tenants using server name that is global unique 
 with admin role?
 
 I apologize if this question was already covered and I missed it, but it 
 really bother me for a while. 
 
 This sounds related to a discussion we had a few months ago: 
 http://lists.openstack.org/pipermail/openstack-dev/2013-June/010461.html [3] 
 
 I don't know that any changes ever came out of that though. :-/ 
 
 -Ben

 

Links:
--
[1] http://paste.openstack.org/show/48359/
[2] https://bugs.launchpad.net/python-novaclient/+bug/1050901
[3]
http://lists.openstack.org/pipermail/openstack-dev/2013-June/010461.html


Re: [openstack-dev] Scheduler meeting and Icehouse Summit

2013-10-14 Thread Day, Phil
Hi Folks,

In the weekly scheduler meeting we've been trying to pull together a 
consolidated list of Summit sessions so that we can find logical groupings and 
make a more structured set of sessions for the limited time available at the 
summit.

https://etherpad.openstack.org/p/IceHouse-Nova-Scheduler-Sessions

With the deadline for sessions being this Thursday 17th, tomorrow's IRC meeting
is the last chance to decide which sessions we want to combine / prioritize.
Russell has indicated that a starting assumption of three scheduler sessions is 
reasonable, with any extras depending on what else is submitted.

I've matched the list on the Etherpad to submitted sessions below, and added
links to any other proposed sessions that look like they are related.


1) Instance Group Model and API
Session Proposal:  http://summit.openstack.org/cfp/details/190
  
2) Smart Resource Placement:
Session Proposal:  http://summit.openstack.org/cfp/details/33
Possibly related sessions:  Resource optimization service for nova  
(http://summit.openstack.org/cfp/details/201)

3) Heat and Scheduling and Software, Oh My!:
Session Proposal: http://summit.openstack.org/cfp/details/113

4) Generic Scheduler Metrics and Ceilometer:
Session Proposal: http://summit.openstack.org/cfp/details/218
Possibly related sessions:  Making Ceilometer and Nova play nice  
http://summit.openstack.org/cfp/details/73

5) Image Properties and Host Capabilities
Session Proposal:  NONE

6) Scheduler Performance:
Session Proposal:  NONE
Possibly related Sessions: Rethinking Scheduler Design  
http://summit.openstack.org/cfp/details/34

7) Scheduling Across Services:
Session Proposal: NONE

8) Private Clouds:
Session Proposal:   http://summit.openstack.org/cfp/details/228

9) Multiple Scheduler Policies:
Session Proposal: NONE


The proposal from last weeks meeting was to use the three slots for:
- Instance Group Model and API   (1)
- Smart Resource Placement (2)
- Performance (6)

However, at the moment there doesn't seem to be a session proposed to cover the 
performance work?

It also seems to me that the Group Model and Smart Placement are pretty closely
linked along with (3) (which says it wants to combine 1 & 2 into the same
topic), so if we only have three slots available then these look like logical
candidates for consolidating into a single session. That would free up a
session to cover the generic metrics (4) and Ceilometer - where a lot of work
in Havana stalled because we couldn't get a consensus on the way forward. The
third slot would be kept for performance - which, based on the lively debate in
the scheduler meetings, I'm assuming will still be submitted as a session.
Private Clouds isn't really a scheduler topic, so I suggest it takes its
chances as a general session. Hence my revised proposal for the three slots is:

  i) Group Scheduling / Smart Placement / Heat and Scheduling  (1), (2), (3), (7)
- How do you schedule something more complex than a single VM?

ii) Generalized scheduling metrics / Ceilometer integration (4)
- How do we extend the set of resources a scheduler can use to make its
decisions?
- How do we make this work with / be compatible with Ceilometer?

iii) Scheduler Performance (6)
 
In that way we will at least give airtime to all of the topics. If a 4th 
scheduler slot becomes available then we could break up the first session into 
two parts.

Thoughts welcome here or in tomorrows IRC meeting.

Cheers,
Phil  

 











Re: [openstack-dev] [neutron] VPNaaS questions

2013-10-14 Thread Paul Michali
See @PCM in-line…

PCM (Paul Michali)

MAIL p...@cisco.com
IRC   pcm_  (irc.freenode.net)
TW   @pmichali

On Oct 12, 2013, at 5:04 PM, Eugene Nikanorov enikano...@mirantis.com wrote:

 Hi folks,
 
  I was wondering in general how providers can customize service features,
  based on their capabilities (better or worse than reference). I could create
  a Summit session topic on this, but wanted to know if this is something that
  has already been addressed or if a different architectural approach has
  already been defined.
 
 This seems to be a multilayered feature that needs to be discussed.
 Mark McClain will be speaking about vendor cli extensions in
 http://summit.openstack.org/cfp/details/10.
 It requires an API counterpart on the server side. I was planning to speak about this
 in this session:
 http://summit.openstack.org/cfp/details/22
 Feel free to add your suggestions to the etherpad.

@PCM Thanks Eugene! I saw Mark's previously, but it didn't seem to have much in 
it. I had created a session suggestion this morning: 
http://summit.openstack.org/cfp/details/230 and then added a comment that maybe 
it could be combined (added to) with others like yours (or maybe yours covers 
it all :)


 
 more specifically:
  7) If a provider as additional attributes (can't think of any yet), how can
  the attribute be extended, only for that provider (or is that the wrong way
  to handle this)?
 I think it should be an additional extension mechanism different from the
 framework that we're using right now.
 The service plugin should gather extended resources or attribute maps from
 supported drivers and return them to the layer that will make wsgi
 controllers for the collections. So it should be pretty much the same as the
 extension framework, but instead of loading common extensions, it should load
 resources from the service plugin.
 

@PCM That's a great idea and would be good to discuss more.

Regards,

PCM

 
 Thanks,
 Eugene.
 
 
 On Sat, Oct 12, 2013 at 1:40 AM, Nachi Ueno na...@ntti3.com wrote:
 Hi Paul
 
 2013/10/11 Paul Michali p...@cisco.com:
  Hi folks,
 
  I have a bunch of questions for you on VPNaaS in specific, and services in
  general...
 
  Nachi,
 
  1) You had a bug fix to do service provider framework support for VPN
  (41827). It was held for Icehouse. Is that pretty much a working patch?
  2) When are you planning on reopening the review?
 
 I'm not sure it will work without a rebase.
 I'll rebase and test it again next week.
 
 
  Anyone,
 
  I see that there is an agent.py file for VPN that has a main() and it starts
  up an L3 agent, specifying the VPNAgent class (in same file).
 
  3) How does this file get invoked? IOW how does the main() get invoked?
 
 We should use the neutron-vpn-agent command to run the vpn-agent.
 This command invokes the VPN agent class.
 It is defined in setup.cfg:
 
 https://github.com/openstack/neutron/blob/master/setup.cfg#L98
 
  4) I take it we can specify multiple device drivers in the config file for
  the agent?
 
 Yes.
 
 
  Currently, for the reference device driver, the hierarchy is
  DeviceDriver [ABC] -> IPsecDriver [Swan based logic] -> OpenSwanDriver [one
  function, OpenSwan specific]. The ABC has a specific set of APIs. Wondering
  how to incorporate provider based device drivers.

 It was designed when we knew of only one Swan based driver,
 so it won't fit other device drivers.
 If so, you can also extend or modify DeviceDriver.
 
  5) Should I push up more general methods from IPsecDriver to DeviceDriver,
  so that they can be reused by other providers?
 
 That would be great.
 
  6) Should I push down the swan based methods from DeviceDriver to
  IPsecDriver and maybe name it SwanDeviceDriver?
 
 yes
 
 
  I see that vpnaas.py is an extension for VPN that defines attributes and the
  base plugin functions.
 
  7) If a provider has additional attributes (can't think of any yet), how can
  the attribute be extended, only for that provider (or is that the wrong way
  to handle this)?
 
 You can extend existing extension.
 
  For VPN, there are several attributes, each with varying ranges of values
  allowed. This is reflected in the CLI help messages, the database (e.g.
  enums), and is validated (some) in the client code and in the VPN service.
 
 Changing existing attributes may be challenging on the client side.
 But let's discuss this with a concrete example.
 
  8) How do we provide different limits/allowed values for attributes, for a
  specific provider (e.g. let's say the provider supports or doesn't support
  an encryption method, or doesn't support IKE v1 or v2)?
 
 The driver can throw an 'unsupported' exception (it is not defined yet).
 
  9) Should the code be changed not to do any client validation, and to have
  generic help, so that different values could be provided, or is there a way
  to customize this based on provider?
 
 That could be one way.
 
  10) If customized, is it possible to reflect the difference in allowed
  values in 

Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-14 Thread Yathiraj Udupi (yudupi)
Hi Mike,

I read your email where you expressed concerns regarding create-time 
dependencies, and I agree they are valid concerns to be addressed.  But like we 
all agree, as a starting point, we are just focusing on the APIs for now, and 
will leave that aside as implementation details to be addressed later.
Thanks for sharing your suggestions on how we can simplify the APIs.  I think 
we are getting closer to finalizing this one.

Let us start at the model proposed here -
[1] 
https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit?usp=sharing
(Ignore the white diamonds - they will be black, when I edit the doc)

The InstanceGroup represents all the information necessary to capture the group 
- nodes, edges, policies, and metadata

InstanceGroupMember - is a reference to an Instance, which is saved separately, 
using the existing Instance Model in Nova.

InstanceGroupMemberConnection - represents the edge

InstanceGroupPolicy is a reference to a Policy, which will also be saved 
separately (it does not currently exist in the model, but has to be created). Here 
in the Policy model, I don't mind adding any number of additional fields, and 
key-value pairs to be able to fully define a policy.  I guess a Policy-metadata 
dictionary is sufficient to capture all the required arguments.
The InstanceGroupPolicy will be associated to a group as a whole or an edge.

InstanceGroupMetadata - represents key-value dictionary for any additional 
metadata for the instance group.

I think this should fully support what we care about - nodes, edges, policies 
and metadata.
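
As a purely illustrative sketch (plain Python, field names are my assumptions 
and not the final schema), the objects above might look like:

    class InstanceGroup(object):
        def __init__(self, uuid, name):
            self.uuid = uuid
            self.name = name
            self.members = []      # InstanceGroupMember entries (nodes)
            self.connections = []  # InstanceGroupMemberConnection entries (edges)
            self.policies = []     # InstanceGroupPolicy references
            self.metadata = {}     # InstanceGroupMetadata key/value pairs

    class InstanceGroupMember(object):
        def __init__(self, group_uuid, instance_uuid):
            self.group_uuid = group_uuid
            self.instance_uuid = instance_uuid  # existing Nova Instance reference

    class InstanceGroupMemberConnection(object):
        def __init__(self, group_uuid, source_member, target_member):
            self.group_uuid = group_uuid
            self.source_member = source_member
            self.target_member = target_member

    class InstanceGroupPolicy(object):
        def __init__(self, group_uuid, policy_ref, edge=None):
            self.group_uuid = group_uuid
            self.policy_ref = policy_ref  # name/id of a separately saved Policy
            self.edge = edge              # None means it applies to the whole group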

Do we all agree ?


Now going to the APIs,

Register GROUP API (from my doc [1]):


POST  /v3.0/{tenant_id}/groups  --  Register a group


I think the confusion is only about when the members (all the nested members) and 
policies are saved in the DB (registered, but not actually CREATED), such that we 
can associate a UUID with them.  This led to my original thinking 
that it is a 3-phase operation where we have to register (save in the DB) the 
nested members first, then register the group as a whole.  But this is not 
client friendly.

Like I had suggested earlier, as an implementation detail of the Group 
registration API (CREATE part 1 in your terminology), we can support this: as 
part of the group registration transaction,  complete the registration of the 
nested members, get their UUIDs,  create the InstanceGroupMemberConnections, 
and then complete saving the group - resulting in a UUID for the group,  all in 
a single transaction-scope.  While you start the transaction, you can start 
with a UUID for the group, so that you can add the group_id pointers to the 
individual members,  and then finally complete the transaction.
This means you provide as an input to the REST API the complete nested 
tree, including all the details about the nested members and policies, and the 
register API will handle the saving of all the individual objects required.
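
For illustration only, the nested input to the register call could look roughly 
like this (field names are assumptions, not the agreed format):

    group_request = {
        "group": {
            "name": "web-tier",
            "members": [
                {"name": "vm-1", "flavor": "m1.small", "image": "img-uuid"},
                {"name": "vm-2", "flavor": "m1.small", "image": "img-uuid"},
            ],
            "connections": [
                {"from": "vm-1", "to": "vm-2"},
            ],
            "policies": [
                {"policy": "anti-affinity", "metadata": {}},
            ],
            "metadata": {"tier": "web"},
        }
    }

    # Within one transaction scope the register API would: generate the group
    # UUID, save each member and connection with that group_id, save the policy
    # references, and return the fully registered group.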

But I think it does help to also add additional APIs to just register an 
InstanceGroupMember and an InstanceGroupPolicy separately.  This might help 
the client while creating a group, rather than giving the entire nested tree.   
(This makes it a 3-phase operation.)  These APIs will support adding members and 
policies to an instance group that has already been created.  (You can start with 
an empty group.)


POST  /v3.0/{tenant_id}/groups/instance  --  Register an instance belonging to an instance group



POST  /v3.0/{tenant_id}/groups/policy  --  Register a policy belonging to an instance group



Are we okay with this ?


The next API is the actual creation of the resources (CREATE part 2 in 
your terminology).  This was my create API in the doc -


POST  /v3.0/{tenant_id}/groups/{id}/create  --  Create and schedule an Instance group


This is just the API proposal, the underlying implementation details will 
involve all the required logic to ensure the creation of all the group members. 
 Here like you also suggest, as a starting point,  we can try to first use 
existing Nova mechanisms to create the members of this group.  Eventually we 
will need to get to the discussions for scheduling this entire group as a 
whole, which covers cross-services support like I discuss in the unified 
resource placement document - 
https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit?pli=1
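
Purely as a usage illustration of the two-phase flow (client code, endpoint 
host/port and response format are all hypothetical):

    import json
    import requests

    tenant_id = "demo-tenant"
    token = "an-auth-token"
    base = "http://api-host:8774/v3.0/" + tenant_id
    headers = {"X-Auth-Token": token, "Content-Type": "application/json"}

    # group_request: the nested dictionary sketched earlier (trimmed here).
    group_request = {"group": {"name": "web-tier", "members": [], "policies": []}}

    # CREATE part 1: register the group definition (members, edges, policies).
    resp = requests.post(base + "/groups",
                         data=json.dumps(group_request), headers=headers)
    group_id = resp.json()["group"]["id"]

    # CREATE part 2: actually create and schedule the registered group.
    requests.post(base + "/groups/%s/create" % group_id, headers=headers)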

Let me know your thoughts on this.

Thanks,
Yathi.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][infra] testing nodepool against tripleo clouds

2013-10-14 Thread Monty Taylor
Hey all!

Currently, nodepool does not work against the two TripleO clouds (well,
I'm trying against the grizzly POC cloud first). So far, the problems
have been combinations of bugs/assumptions in nodepool, along with at
least one actual config issue in the TripleO cloud.

I thought I'd share info on how to spin up a nodepool pointed at a
cloud, so that if you want to play along, you can.

# do this in a virtualenv if you care about stuff

Step one - clone, apply patches and install nodepool:

cd ~/src
git clone git://git.openstack.org/openstack-infra/config
git clone git://git.openstack.org/openstack-infra/nodepool
cd nodepool
git review -x 49833
git review -x 49639
git review -x 51465
pip install -U -r requirements.txt
pip install -e .

Step two - make a MySQL user and database for nodepool:

mysql -u root

mysql> create database nodepool;
mysql> GRANT ALL ON nodepool.* TO 'nodepool'@'localhost';
mysql> flush privileges;

Step three - make a nodepool.yaml file (I'm using shell variable syntax
for things you should replace with real values):

script-dir: $HOME/src/config/modules/openstack_project/files/nodepool/scripts
dburi: 'mysql://nodepool@localhost/nodepool'

cron:
  cleanup: '*/5 * * * *'
  check: '*/15 * * * *'
  update-image: '14 2 * * *'

zmq-publishers:
  - tcp://localhost:

providers:
  - name: tripleo-test-cloud
    service-type: 'compute'
    service-name: 'nova'
    username: '$OS_USERNAME'
    password: '$OS_PASSWORD'
    project-id: '$OS_PROJECT_ID'
    auth-url: '$CLOUD_ENDPOINT'
    boot-timeout: 120
    max-servers: 2
    images:
      - name: tripleo-precise
        base-image: 'Ubuntu Precise 12.04 LTS Server 64-bit'
        min-ram: 8192
        setup: prepare_node_tripleo.sh
        username: jenkins
        private-key: $HOME/.ssh/id_rsa

targets:
  - name: fake-jenkins
    jenkins:
      url: https://localhost
      user: fake
      apikey: fake
    images:
      - name: tripleo-precise
        min-ready: 2
        providers:
          - name: tripleo-test-cloud

Step 4 - in a different shell, start nodepool

nodepoold -d -c $HOME/src/nodepool/nodepool.yaml

voila! you're now running a nodepool against a cloud.

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] Policy Model

2013-10-14 Thread Yathiraj Udupi (yudupi)
Mike,

Like I proposed in my previous email about the model and the APIs,

About the InstanceGroupPolicy, why not leave it as is and introduce a new 
abstract model class called Policy?
The InstanceGroupPolicy will be a reference to a Policy object saved separately, 
and the policy field will point to the saved Policy object's unique name or id.

The new class Policy – can have the usual fields – id, name, uuid, and a 
dictionary of key-value pairs for any additional arguments about the policy.

This is in alignment with the model for InstanceGroupMember, which is a 
reference to an actual Instance Object saved in the DB.
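
As a purely illustrative sketch (field names are assumptions):

    class Policy(object):
        def __init__(self, uuid, name, properties=None):
            self.uuid = uuid
            self.name = name
            self.properties = properties or {}  # arbitrary key/value arguments

    class InstanceGroupPolicy(object):
        def __init__(self, group_uuid, policy, edge=None):
            self.group_uuid = group_uuid
            self.policy = policy  # unique name or id of a saved Policy object
            self.edge = edge      # optional: apply to an edge instead of the group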

I will color all the diamonds black to make it a composition in the UML diagram.

Thanks,
Yathi.







From: Mike Spreitzer mspre...@us.ibm.com
Date: Monday, October 14, 2013 7:14 AM
To: Yathiraj Udupi yud...@cisco.com
Cc: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: [scheduler] Policy Model

Could we agree on the following small changes to the model you posted last week?

1.  Rename InstanceGroupPolicy to InstanceGroupPolicyUse

2.  In InstanceGroupPolicy[Use], rename the policy field to policy_type

3.  Add an InstanceGroupPolicyUseProperty table, holding key/value pairs (two 
strings) giving the properties of the policy uses

4.  Color all the diamonds black

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Meeting Tuesday October 15th at 19:00 UTC

2013-10-14 Thread Elizabeth Krumbach Joseph
The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday October 15th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler meeting and Icehouse Summit

2013-10-14 Thread Alex Glikson
IMO, the three themes make sense, but I would suggest waiting until the 
submission deadline and discuss at the following IRC meeting on the 22nd. 
Maybe there will be more relevant proposals to consider.

Regards,
Alex

P.S. I plan to submit a proposal regarding scheduling policies, and maybe 
one more related to theme #1 below



From:   Day, Phil philip@hp.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   14/10/2013 06:50 PM
Subject: Re: [openstack-dev] Scheduler meeting and Icehouse Summit



Hi Folks,

In the weekly scheduler meeting we've been trying to pull together a 
consolidated list of Summit sessions so that we can find logical groupings 
and make a more structured set of sessions for the limited time available 
at the summit.

https://etherpad.openstack.org/p/IceHouse-Nova-Scheduler-Sessions

With the deadline for sessions being this Thursday 17th, tomorrow's IRC 
meeting is the last chance to decide which sessions we want to combine / 
prioritize.  Russell has indicated that a starting assumption of three 
scheduler sessions is reasonable, with any extras depending on what else 
is submitted.

I've matched the list on the etherpad to submitted sessions below, and 
added links to any other proposed sessions that look like they are 
related.


1) Instance Group Model and API
 Session Proposal:  
http://summit.openstack.org/cfp/details/190
 
2) Smart Resource Placement:
 Session Proposal:  
http://summit.openstack.org/cfp/details/33
 Possibly related sessions:  Resource 
optimization service for nova  (
http://summit.openstack.org/cfp/details/201)

3) Heat and Scheduling and Software, Oh My!:
 Session Proposal: 
http://summit.openstack.org/cfp/details/113

4) Generic Scheduler Metrics and Ceilometer:
 Session Proposal: 
http://summit.openstack.org/cfp/details/218
 Possibly related sessions:  Making Ceilometer and Nova 
play nice  http://summit.openstack.org/cfp/details/73

5) Image Properties and Host Capabilities
 Session Proposal:  NONE

6) Scheduler Performance:
 Session Proposal:  NONE
 Possibly related Sessions: Rethinking Scheduler Design  
http://summit.openstack.org/cfp/details/34

7) Scheduling Across Services:
 Session Proposal: NONE

8) Private Clouds:
 Session Proposal:   
http://summit.openstack.org/cfp/details/228

9) Multiple Scheduler Policies:
 Session Proposal: NONE


The proposal from last weeks meeting was to use the three slots for:
 - Instance Group Model and API   (1)
 - Smart Resource Placement (2)
 - Performance (6)

However, at the moment there doesn't seem to be a session proposed to 
cover the performance work ?

It also seems to me that the Group Model and Smart Placement are pretty 
closely linked along with (3) (which says it wants to combine 1 & 2 into 
the same topic), so if we only have three slots available then these look 
like logical candidates for consolidating into a single session.  That 
would free up a session to cover the generic metrics (4) and Ceilometer - 
where a lot of work in Havana stalled because we couldn't get a consensus 
on the way forward.  The third slot would be kept for performance - which, 
based on the lively debate in the scheduler meetings, I'm assuming will 
still be submitted as a session.  Private Clouds isn't really a 
scheduler topic, so I suggest it takes its chances as a general session. 
Hence my revised proposal for the three slots is:

  i) Group Scheduling / Smart Placement / Heat and Scheduling  (1), (2), 
(3), & (7)
 - How do you schedule something more complex that a 
single VM ?
 
 ii) Generalized scheduling metrics / Ceilometer integration (4)
  - How do we extend the set of resources a scheduler can 
 use to make its decisions ?
  - How do we make this work with / be compatible with 
 Ceilometer ?

iii) Scheduler Performance (6)
 
In that way we will at least give airtime to all of the topics. If a 
4th scheduler slot becomes available then we could break up the first 
session into two parts.

Thoughts welcome here or in tomorrows IRC meeting.

Cheers,
Phil 

 









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][oslo] Trusted Messaging Question

2013-10-14 Thread Simo Sorce
On Fri, 2013-10-11 at 17:49 +, Sangeeta Singh wrote:
 Hi,
 
 
 I had some questions about the trusted messaging project.
 

   1. During your design did you consider a kerberos style
  ticketing service for KDS? If yes what were the reasons
  against it?
   2. The Keystone documentation does say that it can support
  kerberos style authentication. Are there any know
  implementations and deployments?
   3. Does the secured messaging framework support plugging in
  one's own key service, or is there a plan of going in that
  direction? I think that would be something that would be useful
  to the community giving the flexibility to hook up different
  security enforcing agents similar to the higher level
  message abstractions to allow multiple message transport in
  the oslo messaging library.
  I am interested to know how can one use the proposed framework and
  be able to plugin different key distribution mechanism.
  

Just to keep the list in the loop, as we had a quick conversation on
IRC.

1. we did consider using Kerberos, however we felt that introducing such
a big dependency point-blank was a bit of a stretch, plus we are trying
to address issues for which Kerberos does not have standard answers
(yet), like for group messages or broadcasts (and the current solution
doesn't work well for this last case either). So in the end I proposed a
simplified system that would allow us to experiment with all the pieces
and introduce the necessary interfaces in more agile way. The aim has
always been to experiment and find out the corner cases and be able to
change the system if needed.

2. KDS is a symmetric key management system exclusively dedicated to the
RPC messaging layer, it has nothing to do with Keystone authentication.
Keystone can use HTTP authentication so you can plug in things like
mod_auth_kerb in apache in front of keystone. Same for x509 certs, but
again these are orthogonal uses.

3. As I explained on IRC, the idea for the secure messaging  code was to
draw a 'template' for the client side. So that additional methods could
be plugged in. The securemessage class is quite abstract and all
self-contained, so that it should be easy to drop in a replacement that
uses a different security mechanism.
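
To illustrate the shape of that (a hypothetical drop-in interface, not the 
actual oslo code):

    class SecureMessageBackend(object):
        """Hypothetical interface a replacement security mechanism would fill in."""

        def encode(self, version, target, json_msg):
            """Sign/encrypt an outgoing message for the given target."""
            raise NotImplementedError()

        def decode(self, version, source, envelope):
            """Verify/decrypt an incoming envelope from the given source."""
            raise NotImplementedError()

    # A KDS-backed implementation and, say, a Kerberos-backed one would each
    # provide these two calls; the RPC layer above would not need to change.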

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] baremetal nova boot issue

2013-10-14 Thread Ravikanth Samprathi
Thank you for your help and pointers Rob.
I followed instructions from the baremetal wiki and i am lost in the last
step.

First:
I setup pxelinux.cfg/default to point to my-initrd and my-vmlinuz for pxe
boot.
My baremetal node goes into this prompt:
(initramfs)
I don't know if the above is correct.

Second:
Then i do nova boot, which gives me this:
root@os:/tftpboot# nova boot --flavor my-baremetal-flavor --image my-image
my-baremetal-node
+-+--+
| Property|
Value|
+-+--+
| status  |
BUILD|
| updated |
2013-10-14T18:08:50Z |
| OS-EXT-STS:task_state   |
scheduling   |
| OS-EXT-SRV-ATTR:host|
None |
| key_name|
None |
| image   |
my-image |
| hostId
|  |
| OS-EXT-STS:vm_state |
building |
| OS-EXT-SRV-ATTR:instance_name   |
instance-0021|
| OS-EXT-SRV-ATTR:hypervisor_hostname |
None |
| flavor  |
my-baremetal-flavor  |
| id  |
c9fd52de-e93e-44dd-9009-9395e964829e |
| security_groups | [{u'name':
u'default'}]  |
| user_id |
251bd0a9388a477b9c24c99b223e7b2a |
| name|
my-baremetal-node|
| adminPass   |
5kHEX46xremR |
| tenant_id   |
8a34123d83824f3ea52527c5a28ad81e |
| created |
2013-10-14T18:08:49Z |
| OS-DCF:diskConfig   |
MANUAL   |
| metadata|
{}   |
| accessIPv4
|  |
| accessIPv6
|  |
| progress|
0|
| OS-EXT-STS:power_state  |
0|
| OS-EXT-AZ:availability_zone |
nova |
| config_drive
|  |
+-+--+
root@os:/tftpboot#

Before doing this, I have done the nova-baremetal-create and interface add,
and this is what i get:
==

root@os:/tftpboot# nova baremetal-node-create os 1 1024 10 00:50:56:AA:20:54
+--+---+
| Property | Value |
+--+---+
| instance_uuid| None  |
| pm_address   | None  |
| interfaces   | []|
| prov_vlan_id | None  |
| cpus | 1 |
| memory_mb| 1024  |
| prov_mac_address | 00:50:56:AA:20:54 |
| service_host | os|
| local_gb | 10|
| id   | 2 |
| pm_user  | None  |
| terminal_port| None  |
+--+---+
root@os:/tftpboot# nova baremetal-interface-add 2 00:50:56:AA:20:54
+-+---+
| Property| Value |
+-+---+
| datapath_id | 0 |
| id  | 2 |
| port_no | 0 |
| address | 00:50:56:AA:20:54 |
+-+---+

The baremetal node is stuck at (initramfs) no auto powering and no auto
booting happens.
I have not given the --pm options; should I give them, and if so what are the
values?
What am I missing?
Thanks
ravi




On Sun, Oct 13, 2013 at 2:27 AM, Robert Collins
robe...@robertcollins.netwrote:

 On 13 October 2013 20:16, Ravikanth Samprathi rsamp...@gmail.com wrote:
  Hi Rob
  The steps are well known, the devil is in the details. Following the
  instructions in the wiki was not very straightforward, and led to many
  issues down the line.
  Now which images are deploy ramdisk and kernel?

 The ones you create.

  How is the nova agent downloaded to the baremetal node, in which step and
  how?

 What agent?

  What does ''nova boot'' do?

 It triggers the deployment process as normal for nova. For the pxe
 baremetal driver that means extracting the images from glance, writing
 them to tftp tthen powering on the machine.

  I see that using any of the diskbuilder built images (ramdisk kernel) is
 

Re: [openstack-dev] baremetal nova boot issue

2013-10-14 Thread Clint Byrum
No, I said if your machines are not IPMI, and are not Tilera based, then
they will not work.

If they are IPMI, they will work.

Excerpts from Ravikanth Samprathi's message of 2013-10-14 11:42:31 -0700:
 You mean there is no power driver for anything other than tilera?  I am
 using supermicro intel based baremetal nodes. And for testing I am simulating
 using VMs as baremetals as well.
 Please let me know.
 Thanks
 Ravi
 
 On Mon, Oct 14, 2013 at 11:30 AM, Clint Byrum cl...@fewbar.com wrote:
 
  Excerpts from Ravikanth Samprathi's message of 2013-10-14 11:15:15 -0700:
   Thank you for your help and pointers Rob.
   I followed instructions from the baremetal wiki and i am lost in the last
   step.
  
   First:
   I setup pxelinux.cfg/default to point to my-initrd and my-vmlinuz for pxe
   boot.
   My baremetal node goes into this prompt:
   (initramfs)
   I dont know if the above is correct.
  
   Second:
   Then i do nova boot, which gives me this:
   root@os:/tftpboot# nova boot --flavor my-baremetal-flavor --image
  my-image
   my-baremetal-node
  
  +-+--+
   | Property|
   Value|
  
  +-+--+
   | status  |
   BUILD|
   | updated |
   2013-10-14T18:08:50Z |
   | OS-EXT-STS:task_state   |
   scheduling   |
   | OS-EXT-SRV-ATTR:host|
   None |
   | key_name|
   None |
   | image   |
   my-image |
   | hostId
   |  |
   | OS-EXT-STS:vm_state |
   building |
   | OS-EXT-SRV-ATTR:instance_name   |
   instance-0021|
   | OS-EXT-SRV-ATTR:hypervisor_hostname |
   None |
   | flavor  |
   my-baremetal-flavor  |
   | id  |
   c9fd52de-e93e-44dd-9009-9395e964829e |
   | security_groups | [{u'name':
   u'default'}]  |
   | user_id |
   251bd0a9388a477b9c24c99b223e7b2a |
   | name|
   my-baremetal-node|
   | adminPass   |
   5kHEX46xremR |
   | tenant_id   |
   8a34123d83824f3ea52527c5a28ad81e |
   | created |
   2013-10-14T18:08:49Z |
   | OS-DCF:diskConfig   |
   MANUAL   |
   | metadata|
   {}   |
   | accessIPv4
   |  |
   | accessIPv6
   |  |
   | progress|
   0|
   | OS-EXT-STS:power_state  |
   0|
   | OS-EXT-AZ:availability_zone |
   nova |
   | config_drive
   |  |
  
  +-+--+
   root@os:/tftpboot#
  
   Before doing this, I have done the nova-baremetal-create and interface
  add,
   and this is what i get:
   ==
  
   root@os:/tftpboot# nova baremetal-node-create os 1 1024 10
  00:50:56:AA:20:54
   +--+---+
   | Property | Value |
   +--+---+
   | instance_uuid| None  |
   | pm_address   | None  |
   | interfaces   | []|
   | prov_vlan_id | None  |
   | cpus | 1 |
   | memory_mb| 1024  |
   | prov_mac_address | 00:50:56:AA:20:54 |
   | service_host | os|
   | local_gb | 10|
   | id   | 2 |
   | pm_user  | None  |
   | terminal_port| None  |
   +--+---+
   root@os:/tftpboot# nova baremetal-interface-add 2 00:50:56:AA:20:54
   +-+---+
   | Property| Value |
   +-+---+
   | datapath_id | 0 |
   | id  | 2 |
   | port_no | 0 |
   | address | 00:50:56:AA:20:54 |
   +-+---+
  
   The baremetal node is stuck at (initramfs) no auto powering and no auto
   booting happens.
   I have not 

Re: [openstack-dev] [tripleo][infra] testing nodepool against tripleo clouds

2013-10-14 Thread Monty Taylor
Updated list of outstanding reviews nodepool needs for happy making:

https://review.openstack.org/#/c/51687/
https://review.openstack.org/#/c/51644/
https://review.openstack.org/#/c/51641/
https://review.openstack.org/#/c/49833/
https://review.openstack.org/#/c/49639/
https://review.openstack.org/#/c/51465/

(It's possible one of them isn't strictly necessary, but that's what I'm
running with locally.)  With the above set added, and assuming you've at
least allowed port 22 in your default security group on your cloud
account, we now get as far as the tripleo image prep scripts.


On 10/14/2013 02:27 PM, Monty Taylor wrote:
 Hey all!
 
 Currently, nodepool does not work against the two TripleO clouds (well,
 I'm trying against the grizzly POC cloud first) So far, the problems
 have been combinations of bugs/assumptions in nodepool, along with at
 least one actual config issue in the TripleO cloud.
 
 I thought I'd share info on how to spin up a nodepool pointed at a
 cloud, so that if you want to play along, you can.
 
 # do this in a virtualenv if you care about stuff
 
 Step one - clone, apply patches and install nodepool:
 
 cd ~/src
 git clone git://git.openstack.org/openstack-infra/config
 git clone git://git.openstack.org/openstack-infra/nodepool
 cd nodepool
 git review -x 49833
 git review -x 49639
 git review -x 51465
 pip install -U -r requirements.txt
 pip install -e .
 
 Step two - make a MySQL user and database for nodepool:
 
 mysql -u root
 
 mysql create database nodepool;
 mysql GRANT ALL ON nodepool.* TO 'nodepool'@'localhost';
 mysql flush privileges;
 
 Step three - make a nodepool.yaml file (I'm using shell variable syntax
 for things you should replace with real values
 
 script-dir:
 $HOME/src/config/modules/openstack_project/files/nodepool/scripts
 dburi: 'mysql://nodepool@localhost/nodepool'
 
 cron:
   cleanup: '*/5 * * * *'
   check: '*/15 * * * *'
   update-image: '14 2 * * *'
 
 zmq-publishers:
   - tcp://localhost:
 
 providers:
   - name: tripleo-test-cloud
 service-type: 'compute'
 service-name: 'nova'
 username: '$OS_USERNAME'
 password: '$OS_PASSWORD'
 project-id: '$OS_PROJECT_ID'
 auth-url: '$CLOUD_ENDPOINT'
 boot-timeout: 120
 max-servers: 2
 images:
   - name: tripleo-precise
 base-image: 'Ubuntu Precise 12.04 LTS Server 64-bit'
 min-ram: 8192
 setup: prepare_node_tripleo.sh
 username: jenkins
 private-key: $HOME/.ssh/id_rsa
 
 targets:
   - name: fake-jenkins
 jenkins:
   url: https://localhost
   user: fake
   apikey: fake
 images:
   - name: tripleo-precise
 min-ready: 2
 providers:
   - name: tripleo-test-cloud
 
 Step 4 - in a different shell, start nodepool
 
 nodepoold -d -c $HOME/src/nodepool/nodepool.yaml
 
 voila! you're now running a nodepool against a cloud.
 
 Monty
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Announcing a new task scheduling and orchestration service for OpenStack

2013-10-14 Thread Renat Akhmerov
Hi OpenStackers,

I am proud to announce the official launch of the Mistral project. At Mirantis 
we have a team to start contributing to the project right away. We invite 
anybody interested in task service  state management to join the initiative.

Mistral is a new OpenStack service designed for task flow control, scheduling, 
and execution. The project will implement Convection proposal 
(https://wiki.openstack.org/wiki/Convection) and provide an API and 
domain-specific language that enables users to manage tasks and their 
dependencies, and to define workflows, triggers, and events. The service will 
provide the ability to schedule tasks, as well as to define and manage external 
sources of events to act as task execution triggers. 

The project will allow integration with the existing task flow library. Mistral 
will initially expose task flow library functionality through an API by 
providing a domain-specific language for describing task flow concepts. Going 
forward there is an intention to extend DSL to provide a support for additional 
concepts, like external events or scheduling rules.
 
The project page can be found at https://launchpad.net/Mistral.
 
We will be holding weekly meetings on Mondays at 16:00 UTC on 
#openstack-meeting at Freenode. If you have any questions or would like to 
contribute, you can find us on #openstack-mistral.


Renat Akhmerov
Software Engineer
@ Mirantis Inc.

+7(495) 640 4904, ext 0236
+7-905-984-2469 (cell)
rakhme...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Contract Opportunity in Bay Area, CA

2013-10-14 Thread Mathew Zacharias
Hi,


 *We are an executive search firm based in Fremont CA and are currently
 looking to help one of our Direct Client fill some open position which
 seems to be a good match with your profile. We would appreciate if you
 could review the same and let us know your interest to pursue with the
 opportunity. Including a brief job description below for your reference. *

 *Senior OpenStack Software Developer *
 *Contract*

 *Responsibilities and Activities:*

 · Lead OpenStack development for integration of value adds
 features including Ironic bare metal compute, Cinder storage API, and
 Neutron networking; Includes OpenStack blueprints and commits

 · Develop and maintain the tools and workflows to automate
 deployment and management of OpenStack components on the client's Fabric
 architecture

 · Work with our OpenStack ecosystem partners to develop
 integration with the client solution

 · Support the solution engineering and customer facing teams to
 develop end-to-end OpenStack solutions for customers

 *Qualifications:*

 · Recent and relevant experience with OpenStack, as well as a
 solid foundation in cloud computing and virtualization; OpenStack commit
 experience preferred

 · 7+ years of software development experience with proficiency in
 Python, shell scripts, REST API, Linux Systems, Git

 · Experience with configuration management and automation systems
 (Puppet, Chef, Cobbler)

 · Experience with data center infrastructure including servers,
 networking, and storage

 · Strong analytical and problem solving skills

 · Work well in a team environment

 · BS or equivalent experience

  Look forward to hearing from you soon.

 Regards,


 Mathew

 510-356-3811

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Announcing a new task scheduling and orchestration service for OpenStack

2013-10-14 Thread Clint Byrum
Excerpts from Renat Akhmerov's message of 2013-10-14 12:40:28 -0700:
 Hi OpenStackers,
 
 I am proud to announce the official launch of the Mistral project. At 
 Mirantis we have a team to start contributing to the project right away. We 
 invite anybody interested in task service  state management to join the 
 initiative.
 
 Mistral is a new OpenStack service designed for task flow control, 
 scheduling, and execution. The project will implement Convection proposal 
 (https://wiki.openstack.org/wiki/Convection) and provide an API and 
 domain-specific language that enables users to manage tasks and their 
 dependencies, and to define workflows, triggers, and events. The service will 
 provide the ability to schedule tasks, as well as to define and manage 
 external sources of events to act as task execution triggers. 

Why exactly aren't you just calling this Convection and/or collaborating
with the developers who came up with it?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Announcing a new task scheduling and orchestration service for OpenStack

2013-10-14 Thread Stan Lagun
 Why exactly aren't you just calling this Convection and/or collaborating
 with the developers who came up with it?

We do actively collaborate with TaskFlow/StateManagement team who are also
the authors of Convection proposal. This is a joint project and we invite
you and other developers to join and contribute.
Convection is a Microsoft trademark. That's why it's called Mistral.


On Tue, Oct 15, 2013 at 12:04 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Renat Akhmerov's message of 2013-10-14 12:40:28 -0700:
  Hi OpenStackers,
 
  I am proud to announce the official launch of the Mistral project. At
 Mirantis we have a team to start contributing to the project right away. We
 invite anybody interested in task service  state management to join the
 initiative.
 
  Mistral is a new OpenStack service designed for task flow control,
 scheduling, and execution. The project will implement Convection proposal (
 https://wiki.openstack.org/wiki/Convection) and provide an API and
 domain-specific language that enables users to manage tasks and their
 dependencies, and to define workflows, triggers, and events. The service
 will provide the ability to schedule tasks, as well as to define and manage
 external sources of events to act as task execution triggers.

 Why exactly aren't you just calling this Convection and/or collaborating
 with the developers who came up with it?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Announcing a new task scheduling and orchestration service for OpenStack

2013-10-14 Thread Georgy Okrokvertskhov
Hi Clint,

This is done in collaboration with Joshua Harlow and the task flow team who
come up with the original proposal. We had a hangout session where we
defined a project scope and its relationship with task flow library.

We can't use the name Convection as this is a trademark owned by Microsoft. We
saw the naming problems with Quantum, so we decided to pick a name which
is not associated with any IT company.

Thanks
Georgy


On Mon, Oct 14, 2013 at 1:04 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Renat Akhmerov's message of 2013-10-14 12:40:28 -0700:
  Hi OpenStackers,
 
  I am proud to announce the official launch of the Mistral project. At
 Mirantis we have a team to start contributing to the project right away. We
 invite anybody interested in task service  state management to join the
 initiative.
 
  Mistral is a new OpenStack service designed for task flow control,
 scheduling, and execution. The project will implement Convection proposal (
 https://wiki.openstack.org/wiki/Convection) and provide an API and
 domain-specific language that enables users to manage tasks and their
 dependencies, and to define workflows, triggers, and events. The service
 will provide the ability to schedule tasks, as well as to define and manage
 external sources of events to act as task execution triggers.

 Why exactly aren't you just calling this Convection and/or collaborating
 with the developers who came up with it?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone Apache2 Installation Question

2013-10-14 Thread Adam Young

On 10/09/2013 08:43 PM, Fox, Kevin M wrote:

Thanks for the docs. It looks like I got through all of that already, its the 
authentication module part that is throwing me.

I managed to manually get a token by putting mod_krb5 on Location 
/keystone/main/v2.0/tokens and using curl against it, giving curl a 
username/password.
If I try and give that generated token back though its failing because krb5 
wants a username and password.
That is not right.  krb5 should use Negotiate, not basic auth, and you 
should not need UID/PW in order to get a token.


I guess I need one endpoint url to take in a kerb5 username/password and give 
me a token.

No, a krb service ticket, not password


another url to validate tokens I guess. Maybe that's what the split between 
main and admin is for though?
Validation probably should not be done via Kerberos, unless you have a 
way to automatically update the service tickets for Nova etc. There are 
mechanisms for doing that in the latest version of the GSSAPI, but I 
would not expect it to be in the current  RHEL6 or the latest LTS of 
Ubuntu yet.  So the validate calls need to go to an URL not protected 
via Kerberos.




And none of the clients seem to pass a basic auth username/password, so I'd 
have to modify all of those too? I think its a middleware thing though, so I 
might be able to tweak them all at once?


Correct.  I am working on a patch for Basic-Auth in Icehouse, but it 
won't be in Havana.




Thanks,
Kevin

From: Miller, Mark M (EB SW Cloud - RD - Corvallis) [mark.m.mil...@hp.com]
Sent: Wednesday, October 09, 2013 5:17 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Keystone Apache2 Installation Question

Hi Kevin,

It has been awhile, but here are some notes I took.

Regards,

Mark Miller

-

Keystone Apache2 frontend Installation and Configuration

Instructions below are based off of documentation/examples from URL 
https://keystone-voms.readthedocs.org/en/latest/requirements.html
Install Apache2 WSGI with mod_ssl enabled. To do so, install the packages, and 
enable the relevant modules:
sudo apt-get install apache2 libapache2-mod-wsgi
sudo a2enmod ssl
sudo ufw disable  #Note: not sure if need to  disable firewall

Then configure your Apache server to use CA certificates. If you have some installed in 
the default location, enable the default-ssl site (a2ensite default-ssl) and modify its 
configuration file (normally in /etc/apache2/sites-enabled/default-ssl). If not, create 
configuration file /etc/apache2/sites-enabled/keystone for your keystone 
installation.
Note: I created file /etc/apache2/sites-enabled/keystone shown below.
Example:
WSGIDaemonProcess keystone user=keystone group=nogroup processes=3 threads=10

Listen 5000
<VirtualHost _default_:5000>
 LogLevel info
 ErrorLog ${APACHE_LOG_DIR}/error.log
 CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

 SSLEngine on
 SSLCertificateFile/etc/ssl/certs/apache.cert
 SSLCertificateKeyFile /etc/ssl/private/apache.key

 SSLCACertificatePath /etc/ssl/certs
 SSLCARevocationPath /etc/ssl/certs
 SSLVerifyClient optional
 SSLVerifyDepth 10
 SSLProtocol all -SSLv2
 SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
 SSLOptions +StdEnvVars +ExportCertData

 WSGIScriptAlias /  /usr/lib/cgi-bin/keystone/main
 WSGIProcessGroup keystone
</VirtualHost>

Listen 35357
<VirtualHost _default_:35357>
 LogLevel info
 ErrorLog ${APACHE_LOG_DIR}/error.log
 CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

 SSLEngine on
 SSLCertificateFile/etc/ssl/certs/apache.cert
 SSLCertificateKeyFile /etc/ssl/private/apache.key


 SSLCACertificatePath /etc/ssl/certs
 SSLCARevocationPath /etc/ssl/certs
 SSLVerifyClient optional
 SSLVerifyDepth 10
 SSLProtocol all -SSLv2
 SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
 SSLOptions +StdEnvVars +ExportCertData

 WSGIScriptAlias / /usr/lib/cgi-bin/keystone/admin
 WSGIProcessGroup keystone
</VirtualHost>

Note1: By changing settings in this file you can turn on and off the 
Apache2-SSL frontend to Keystone (variable SSL_Engine).
Note2: The [ssl] section of file keystone.conf needs to match this file in 
that if SSL is turned on in one of them, then it needs to be turned on in the other.
To run keystone as a WSGI app, copy file keystone.py to the correct location 
and create links to it.
sudo mkdir -p /usr/lib/cgi-bin/keystone
sudo cp /path/keystone-2013.2.b2/httpd/keystone.py 
/usr/lib/cgi-bin/keystone/keystone.py
sudo ln /usr/lib/cgi-bin/keystone/keystone.py /usr/lib/cgi-bin/keystone/main
sudo ln /usr/lib/cgi-bin/keystone/keystone.py /usr/lib/cgi-bin/keystone/admin

If the keystone service is running, shut it down. The Apache2 service will now start up 
as many instances of keystone as are specified on the first line of file 

Re: [openstack-dev] [Mistral] Announcing a new task scheduling and orchestration service for OpenStack

2013-10-14 Thread Joshua Harlow
+2 More collaboration the better :)

From: Stan Lagun sla...@mirantis.com
Reply-To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date: Monday, October 14, 2013 1:20 PM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Announcing a new task scheduling and 
orchestration service for OpenStack


 Why exactly aren't you just calling this Convection and/or collaborating
 with the developers who came up with it?

We do actively collaborate with TaskFlow/StateManagement team who are also the 
authors of Convection proposal. This is a joint project and we invite you and 
other developers to join and contribute.
Convection is a Microsoft trademark. That's why Mistral


On Tue, Oct 15, 2013 at 12:04 AM, Clint Byrum 
cl...@fewbar.com wrote:
Excerpts from Renat Akhmerov's message of 2013-10-14 12:40:28 -0700:
 Hi OpenStackers,

 I am proud to announce the official launch of the Mistral project. At 
 Mirantis we have a team to start contributing to the project right away. We 
 invite anybody interested in task service  state management to join the 
 initiative.

 Mistral is a new OpenStack service designed for task flow control, 
 scheduling, and execution. The project will implement Convection proposal 
 (https://wiki.openstack.org/wiki/Convection) and provide an API and 
 domain-specific language that enables users to manage tasks and their 
 dependencies, and to define workflows, triggers, and events. The service will 
 provide the ability to schedule tasks, as well as to define and manage 
 external sources of events to act as task execution triggers.

Why exactly aren't you just calling this Convection and/or collaborating
with the developers who came up with it?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] nova boot command issue

2013-10-14 Thread Ravikanth Samprathi
I have set up openstack according to the baremetal wiki.

The hostname on which I am running the baremetal openstack controller is ''os''

1 I have this baremetal-flavor:
=
| 9  | my-baremetal-flavor | 1024  | 10   | 0 |  | 1 |
1.0 | True  | {u'cpu_arch': u'x86_64',
u'baremetal:deploy_kernel_id': u'5b9a76b6-c8e1-4bdf-9216-e31084a2b381',
u'baremetal:deploy_ramdisk_id': u'939f2278-2936-499b-aba2-7f3705160e65'} |





2 I have enrolled this baremetal node:

root@os:/etc/init.d# nova baremetal-node-list
++--+--+---+-+---++-+-+---+
| ID | Host | CPUs | Memory_MB | Disk_GB | MAC Address   | PM Address |
PM Username | PM Password | Terminal Port |
++--+--+---+-+---++-+-+---+
| 3  | os   | 1| 1024  | 10  | 00:50:56:AA:20:54 | 10.40.0.99 |
admin   | | None  |
++--+--+---+-+---++-+-+---+




3 I have this baremetal interface:
=
root@os:/etc/init.d# nova baremetal-interface-list 3
++-+-+---+
| ID | Datapath_ID | Port_No | Address   |
++-+-+---+
| 4  | 0   | 0   | 00:50:56:AA:20:54 |
++-+-+---+





4 I do nova baremetal boot:
=
root@os:/etc/init.d# nova boot --flavor my-baremetal-flavor --image
my-image my-baremetal-node
+-+--+
| Property|
Value|
+-+--+
| status  |
BUILD|
| updated |
2013-10-14T20:32:52Z |
| OS-EXT-STS:task_state   |
scheduling   |
| OS-EXT-SRV-ATTR:host|
None |
| key_name|
None |
| image   |
my-image |
| hostId
|  |
| OS-EXT-STS:vm_state |
building |
| OS-EXT-SRV-ATTR:instance_name   |
instance-0029|
| OS-EXT-SRV-ATTR:hypervisor_hostname |
None |
| flavor  |
my-baremetal-flavor  |
| id  |
66a1f372-3217-4a5c-81e5-9c2cc975711c |
| security_groups | [{u'name':
u'default'}]  |
| user_id |
251bd0a9388a477b9c24c99b223e7b2a |
| name|
my-baremetal-node|
| adminPass   |
DLMqNtw59W4F |
| tenant_id   |
8a34123d83824f3ea52527c5a28ad81e |
| created |
2013-10-14T20:32:52Z |
| OS-DCF:diskConfig   |
MANUAL   |
| metadata|
{}   |
| accessIPv4
|  |
| accessIPv6
|  |
| progress|
0|
| OS-EXT-STS:power_state  |
0|
| OS-EXT-AZ:availability_zone |
nova |
| config_drive
|  |
+-+--+




5 But I see an error in nova show:
===
root@os:/etc/init.d# nova show 66a1f372-3217-4a5c-81e5-9c2cc975711c
+-+-+
| Property|
Value
|
+-+-+
| status  |
ERROR
|
| name|
my-baremetal-node
|
| fault   | {u'message': u'NoValidHost',
u'code': 500, u'details': u'No valid host was
found.   |
| |   File
/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py, line
97, in schedule_run_instance |
| | raise
exception.NoValidHost(reason=)
|
|  

Re: [openstack-dev] [Mistral] Announcing a new task scheduling and orchestration service for OpenStack

2013-10-14 Thread Joshua Harlow
So from my understanding of the whole situation.

Convection doesn't exist (at the moment), and I think mistral will be an
implementation of what u call convection.

Its planned to be along the lines of convection, in fact I think we should
move https://wiki.openstack.org/wiki/Convection to
https://wiki.openstack.org/wiki/MistralDesign (or something similar?).

Convection was planned to use taskflow to provide the underlying workflow
engine (taskflow is targeted for more than just this usage btw, since its
a generic library/concept).

I for one think it will be very neat to see where mistral ends up :)

On 10/14/13 1:04 PM, Clint Byrum cl...@fewbar.com wrote:

Excerpts from Renat Akhmerov's message of 2013-10-14 12:40:28 -0700:
 Hi OpenStackers,
 
 I am proud to announce the official launch of the Mistral project. At
Mirantis we have a team to start contributing to the project right away.
We invite anybody interested in task service  state management to join
the initiative.
 
 Mistral is a new OpenStack service designed for task flow control,
scheduling, and execution. The project will implement Convection
proposal (https://wiki.openstack.org/wiki/Convection) and provide an API
and domain-specific language that enables users to manage tasks and
their dependencies, and to define workflows, triggers, and events. The
service will provide the ability to schedule tasks, as well as to define
and manage external sources of events to act as task execution triggers.

Why exactly aren't you just calling this Convection and/or collaborating
with the developers who came up with it?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-14 Thread Thomas Spatzier
Steven Dake sd...@redhat.com wrote on 11.10.2013 21:02:38:
 From: Steven Dake sd...@redhat.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
 Date: 11.10.2013 21:04
 Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
 proposal for workflows

 On 10/11/2013 11:55 AM, Lakshminaraya Renganarayana wrote:
 Clint Byrum cl...@fewbar.com wrote on 10/11/2013 12:40:19 PM:

  From: Clint Byrum cl...@fewbar.com
  To: openstack-dev openstack-dev@lists.openstack.org
  Date: 10/11/2013 12:43 PM
  Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
  proposal for workflows
 
   3. Ability to return arbitrary (JSON-compatible) data structure
 from config
   application and use attributes of that structure as an input for
other
   configs
 
  Note that I'd like to see more use cases specified for this ability.
The
  random string generator that Steve Baker has put up should handle most
  cases where you just need passwords. Generated key sharing might best
  be deferred to something like Barbican which does a lot more than Heat
  to try and keep your secrets safe.

 I had seen a deployment scenario that needed more than random string
 generator. It was during the deployment of a system that has
 clustered application servers, i.e., a cluster of application server
 nodes + a cluster manager node. The deployment progresses by all the
 VMs (cluster-manager and cluster-nodes) starting concurrently. Then
 the cluster-nodes wait for the cluster-manager to send them data
 (xml) to configure themselves. The cluster-manager after reading its
 own config file, generates config-data for each cluster-node and
 sends it to them.

 Is the config data per cluster node unique to each node?  If not:

I think Lakshmi's example (IBM WebSphere, right?) talks about a case where
the per cluster member info is unique per member, so the one fits all
approach does not work. In addition, I think there is a constraint that
members must join one by one and cannot join concurrently.


 Change deployment to following model:
 1. deploy cluster-manager as a resource with a waitcondition -
 passing the data using the cfn-signal  -d to send the xml blob
 2. have cluster nodes wait on wait condition in #1, using data from
 the cfn-signal

 If so, join the config data sent in cfn-signal and break it apart by
 the various cluster nodes in #2
 Thanks,
 LN


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [novaclient]should administrator can see all servers of all tenants by default?

2013-10-14 Thread Christopher Yeoh
On Mon, 14 Oct 2013 10:37:10 -0500
Ben Nemec openst...@nemebean.com wrote:
  
 
 I agree that this needs to be fixed. It's very counterintuitive, if
 nothing else (which is also my argument against requiring all-tenants
 for admin users in the first place). The only question for me is
 whether to fix it in novaclient or in Nova itself. The comments on
 the review from the previous discussion seemed to support the latter,
 but there was no follow-up that I'm aware of. Maybe it's a problem
 because it's an API change? For reference, see the discussion here:
 https://review.openstack.org/#/c/39705/3/novaclient/v1_1/shell.py 
 
 I think it should definitely be fixed for v3. v2 is a little trickier
 because it's a change in behavior, but since v3 is coming soon I'm not
 too hung up on getting it changed in v2. 

I think it should be fixed for v3. It's been the behaviour for v2 for
so long I don't think we should change it. That being said I don't see
why we can't paper over the issue for V2 at the novaclient level like
was suggested in 39705.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone Apache2 Installation Question

2013-10-14 Thread Fox, Kevin M
Hi Adam,

I was trying to get both kerberos negotiate and kerberos basic auth working. 
Negotiate does not seem to be supported by any of the clients so I think it 
will be a fair amount of work to get working.

/keystone/main/v2.0/tokens can't support having an apache auth module on it, it 
seems, because it is overloaded to do too many things. After playing around with 
it, it looks like some services (like horizon) assume they can give it a token 
and get back a restricted token without doing basic auth/negotiate all the 
time. You can't put auth around it in apache and Require valid-user and still 
have it perform its other functions. The tokens endpoint needs to be able to be 
split out so that you can do something like /auth/type/tokens, so you can put 
a different handler on each url while /tokens keeps all the rest of the 
functionality. I guess this will have to wait for Icehouse.

I also played around with basic auth as an alternative to negotiate in the 
meantime and ran into that same issue. It also requires changes to not just 
python-keystoneclient but a lot of the other python-*clients as well, and even 
then, horizon breaks as described above.

I found a work around for basic auth though that is working quite nicely. I'm 
trying to get the patch through our legal department, but they are tripping 
over the contributor agreement. :/

The trick is, if you are using basic auth, you only support a username/password 
anyway and havana keystone is plugable in its handling of username/passwords.

So, I'll just tell you the idea of the patch so you can work on reimplementing 
it if you'd like.
 * I made a new file 
/usr/lib/python2.6/site-packages/keystone/identity/backends/basic_auth_sql.py
 * I made a class Identity that inherits from the sql Identity class.
 * I overrode the _check_password function.
 * I took the username/password and base64 encoded it, then made an http request 
with it to whatever http basic auth service url you want to validate with. 
Apache on localhost works great.
 * Check the result for status 200. You can even fall back to the super class's 
_check_password to support both basic auth and sql passwords if you'd like. (A 
rough sketch of the idea follows below.)
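
Roughly, the idea looks like this (import path and method signature are my 
rough recollection, not the exact Havana code):

    import base64
    import httplib

    from keystone.identity.backends import sql

    class Identity(sql.Identity):
        """Check passwords against an external HTTP basic-auth protected URL."""

        def _check_password(self, password, user_ref):
            creds = base64.b64encode('%s:%s' % (user_ref.get('name'), password))
            conn = httplib.HTTPConnection('localhost', 80)
            try:
                conn.request('GET', '/basic-auth-check/',
                             headers={'Authorization': 'Basic ' + creds})
                ok = conn.getresponse().status == 200
            finally:
                conn.close()
            # Optionally fall back to the regular SQL hash check.
            return ok or super(Identity, self)._check_password(password, user_ref)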

The interesting bit about this configuration is keystone does not need to be 
embedded in apache to support apache basic auth, while still providing you most 
of the flexability of apache basic auth plugins. The only thing that doesn't 
work is REMOTE_USER rewriting. Though you could probably add that feature in 
somehow using a http response header or something.

Thanks,
Kevin

From: Adam Young [ayo...@redhat.com]
Sent: Monday, October 14, 2013 1:22 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Keystone Apache2 Installation Question

On 10/09/2013 08:43 PM, Fox, Kevin M wrote:
 Thanks for the docs. It looks like I got through all of that already, its the 
 authentication module part that is throwing me.

 I managed to manually get a token by putting mod_krb5 on Location 
 /keystone/main/v2.0/tokens and using curl against it, giving curl a 
 username/password.
 If I try and give that generated token back though its failing because krb5 
 wants a username and password.
THat is not right.  krb5 should use Negotiate, not basic auth, and you
should not need UID/PW in order to get a token

 I guess I need one endpoint url to take in a kerb5 username/password and give 
 me a token.
No, a krb service ticket, not password

 another url to validate tokens I guess. Maybe that's what the split between 
 main and admin is for though?
Validation probably should not be done via Kerberos, unless you have a
way to automatically update the service tickets for Nova etc. There are
mechanisms for doing that in the latest version of the GSSAPI, but I
would not expect it to be in the current  RHEL6 or the latest LTS of
Ubuntu yet.  So the validate calls need to go to an URL not protected
via Kerberos.


 And none of the clients seem to pass a basic auth username/password, so I'd 
 have to modify all of those too? I think its a middleware thing though, so I 
 might be able to tweak them all at once?

Correct.  I am working on a patch for Basic-Auth in Icehouse, but it
won't be in Havana.


 Thanks,
 Kevin
 
 From: Miller, Mark M (EB SW Cloud - RD - Corvallis) [mark.m.mil...@hp.com]
 Sent: Wednesday, October 09, 2013 5:17 PM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] Keystone Apache2 Installation Question

 Hi Kevin,

 It has been awhile, but here are some notes I took.

 Regards,

 Mark Miller

 -

 Keystone Apache2 frontend Installation and Configuration

 Instructions below are based off of documentation/examples from URL 
 https://keystone-voms.readthedocs.org/en/latest/requirements.html
 Install Apache2 WSGI with mod_ssl enabled. To do so, install the packages, 
 and enable the relevant modules:
 sudo apt-get install 

[openstack-dev] [Swift] team meeting

2013-10-14 Thread John Dickinson
This week's team meeting is cancelled since most of the active contributors 
will be together in Austin for the Swift hackathon during the regularly 
scheduled meeting time.

Regular bi-weekly meetings will resume on October 30 at 1900UTC in 
#openstack-meeting

--John





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] UpgradeImpact commit message tag

2013-10-14 Thread Russell Bryant
I was talking to Dan Smith today about a patch series I was starting to
work on.  These changes affect people doing continuous deployment, so we
came up with the idea of tagging the commits with UpgradeImpact,
similar to how we use DocImpact for changes that affect docs.

This seems like a good convention to start using for all changes that
affect upgrades in some way.
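
For example, a commit message for a hypothetical change might look like this,
with the tag on its own line the way DocImpact is used today:

    Switch the foo RPC calls to the new bar interface

    Deployers doing continuous deployment need to restart nova-compute
    before nova-conductor when picking up this change.

    UpgradeImpact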

Any comments/suggestions/objections?  If not, I'll get this documented on:

https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] UpgradeImpact commit message tag

2013-10-14 Thread Robert Collins
I think it's good to call that out, but also perhaps there should be
some discussion with such deployers about such changes?

-Rob

On 15 October 2013 12:32, Russell Bryant rbry...@redhat.com wrote:
 I was talking to Dan Smith today about a patch series I was starting to
 work on.  These changes affect people doing continuous deployment, so we
 came up with the idea of tagging the commits with UpgradeImpact,
 similar to how we use DocImpact for changes that affect docs.

 This seems like a good convention to start using for all changes that
 affect upgrades in some way.

 Any comments/suggestions/objections?  If not, I'll get this documented on:

 https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [novaclient]should administrator can see all servers of all tenants by default?

2013-10-14 Thread Caitlin Bestler

On 10/14/2013 8:37 AM, Ben Nemec wrote:

I agree that this needs to be fixed.  It's very counterintuitive, if
nothing else (which is also my argument against requiring all-tenants
for admin users in the first place).  The only question for me is
whether to fix it in novaclient or in Nova itself.


If it is fixed in novaclient, then any unscrupulous tenant would be able
to unfix it in novaclient themselves and gain the same information about
other tenants that the bug is allowing.

So if the intent is to prevent leakage of information across tenant
lines, then the correct solution is a real lock (i.e. in Nova) rather
than just a screen-door lock.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Plugin packaging

2013-10-14 Thread Sam Alba
Hello,

I am working on a Heat plugin that makes a new resource available in a
template. It's working great and I will open-source it this week if I
can get the packaging right...

Right now, I am linking my module.py file in /usr/lib/heat to get it
loaded when heat-engine starts. But according to the doc, I am
supposed to be able to make the plugin discoverable by heat-engine if
the module appears in the package heat.engine.plugins[1]
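
For reference, the module I'm linking in is essentially of this shape (names
are made up; heat-engine finds it through the plugin_dirs option in heat.conf,
which I believe includes /usr/lib/heat by default):

    # module.py -- dropped into (or linked from) a plugin_dirs directory
    from heat.engine import resource


    class MyThing(resource.Resource):
        # Minimal old-style schema; the real properties are omitted here.
        properties_schema = {
            'endpoint': {'Type': 'String', 'Required': True},
        }

        def handle_create(self):
            # Call out to the external service and remember its identifier.
            self.resource_id_set('some-external-id')


    def resource_mapping():
        # heat-engine calls this to learn which template resource types
        # the module provides.
        return {'MyCompany::MyService::Thing': MyThing}

So the question is really just how to turn that into an installable Python
package that lands in heat.engine.plugins instead of relying on plugin_dirs.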

I looked into the plugin_loader module in the Heat source code and it
looks like it should work. However I was unable to get a proper Python
package.

Has anyone been able to make this packaging right for an external Heat plugin?

Thanks in advance,


[1] https://wiki.openstack.org/wiki/Heat/Plugins#Installation_and_Configuration

-- 
@sam_alba

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] CD Cloud MVP2 completed

2013-10-14 Thread Robert Collins
That is, we're now deploying a trunk KVM-based OpenStack using heat +
nova baremetal on a continuous basis into the test rack we have *and*
there are users set up on it for any TripleO ATC that wants an
account, and networking is set up: you can access instances over the
internet, and the endpoint is also publicly accessible.

So far it's done 121 successful deploys and 4 failures. Woo - that's a
fairly decent reliability rate today; we're not at the point of
analysing every failure yet - I think we'll add that discipline in
when we have state preservation.

We'll need to do a retrospective for this as well...
https://wiki.openstack.org/wiki/TripleO/TripleOCloud/MVP2Retrospective
is where we'll capture the info from it.

Following on the current strategy, MVP 3 will add in stateful upgrades
(with downtime).

https://etherpad.openstack.org/p/tripleo-image-updates - see
'Rebuild:' in there.
https://trello.com/b/0jIoMrdo/tripleo has the cards as expected...

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change I3e080c30: Fix resource length in project_user_quotas table for Havana?

2013-10-14 Thread Vishvananda Ishaya

On Oct 14, 2013, at 3:38 AM, Joshua Hesketh joshua.hesk...@rackspace.com 
wrote:

 
 Rackspace Australia
 
 On 10/12/13 12:35 AM, Russell Bryant wrote:
 On 10/10/2013 11:15 PM, Joshua Hesketh wrote:
 Hi there,
 
 I've been reviewing this change which is currently proposed for master
 and I think it needs to be considered for the next Havana RC.
 
 Change I3e080c30: Fix resource length in project_user_quotas table
 https://review.openstack.org/#/c/47299/
 
 I'm new to the process around these kinds of patches but I imagine that
 we should use one of the placeholder migrations in the havana branch and
 cherry-pick it back into master?
 The fix looks good, thanks!
 
 I agree that this is good for Havana.  I'll see if I can slip it into
 havana-rc2.
 
 The process is generally merging the fix to master and then backporting
 it.  In this case the backport can't be the same.  Instead of using a
 new migration number, we'll use one of the migration numbers reserved
 for havana backports.
 
 So I'm a bit late responding to this but I'm confused about this process and 
 was wondering if you could please clarify.
 
 Why wouldn't we use one of the placeholders in master and backport it as is 
 to Havana? What happens when an administrator installs Havana and then later 
 upgrades to Icehouse? Their migration version number will be at 226 having 
 already applied the patch at 217. However in Icehouse the patch re-represents 
 itself as 227 and will therefore attempt to apply again*

Our plan for this has been to make the migration idempotent so it doesn't 
matter if the migration is run again. So yes, migrations that we plan to 
backport have to be carefully constructed.
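
For what it's worth, a rough sketch of what "idempotent" means here, using the
resource-length fix as the example (the .alter() call comes from
sqlalchemy-migrate, as in other Nova migrations; the target length of 255 is
an assumption to be checked against the review):

    from sqlalchemy import MetaData, String, Table


    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        table = Table('project_user_quotas', meta, autoload=True)
        resource = table.c.resource
        # Only widen the column if it is still too short, so replaying the
        # migration under a different number (Havana placeholder vs. the
        # Icehouse number) is a harmless no-op.
        if resource.type.length < 255:
            resource.alter(type=String(length=255))


    def downgrade(migrate_engine):
        # Narrowing the column back could truncate data, so do nothing.
        pass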

Vish

 
 Now typing this I am seeing this will pose issues for those running against 
 trunk who will have already made it to migration 226 before the fix is 
 implemented at 217. Still, how do we prevent the above scenario from causing 
 problems?
 
 Cheers,
 Josh
 
 *granted in this particular case there is no harm in the migration applying 
 again but in other migration backports this could be problematic.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [novaclient]should administrator can see all servers of all tenants by default?

2013-10-14 Thread Christopher Yeoh
On Tue, Oct 15, 2013 at 10:25 AM, Caitlin Bestler 
caitlin.best...@nexenta.com wrote:

 On 10/14/2013 8:37 AM, Ben Nemec wrote:

 I agree that this needs to be fixed.  It's very counterintuitive, if
 nothing else (which is also my argument against requiring all-tenants
 for admin users in the first place).  The only question for me is
 whether to fix it in novaclient or in Nova itself.


 If it is fixed in novaclient, then any unscrupulous tenant would be able
 to unfix it in novaclient themselves and gain the same information about
 other tenants that the bug is allowing.

 So if the intent is to protect leakage of information across tenant lines
 then the correct solution is a real lock (i.e. in Nova) rather
 than just a screen door lock.


The novaclient fix for V2 would be simply to automatically pass all-tenants
where needed. It would not give a non-admin user any extra privileges even
if they modified novaclient.
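
In other words, roughly this on the client side (credentials are placeholders;
all_tenants maps onto the existing query parameter, so Nova's own policy
checks still apply):

    from novaclient.v1_1 import client

    USER, PASSWORD, TENANT = 'admin', 'secret', 'admin'        # placeholders
    AUTH_URL = 'http://keystone.example.com:5000/v2.0'          # placeholder

    nova = client.Client(USER, PASSWORD, TENANT, AUTH_URL)

    # What "nova list --all-tenants" does today, and what novaclient could do
    # automatically for admins: all_tenants=1 is sent as a query parameter on
    # GET /servers/detail, and Nova decides whether the caller may use it.
    servers = nova.servers.list(search_opts={'all_tenants': 1})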

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] UpgradeImpact commit message tag

2013-10-14 Thread Morgan Fainberg
I think this could be a significant win for all projects (I'd like to see
it adopted beyond nova).  This should help ferret out the upgrade impacts
that sometimes sneak up on us (and cause issues later on).

+1 from me.

Cheers,
Morgan Fainberg

IRC: morganfainberg


On Mon, Oct 14, 2013 at 4:51 PM, Robert Collins
robe...@robertcollins.netwrote:

 I think it's good to call that out, but also perhaps there should be
 some discussion with such deployers such changes?

 -Rob

 On 15 October 2013 12:32, Russell Bryant rbry...@redhat.com wrote:
  I was talking to Dan Smith today about a patch series I was starting to
  work on.  These changes affect people doing continuous deployment, so we
  came up with the idea of tagging the commits with UpgradeImpact,
  similar to how we use DocImpact for changes that affect docs.
 
  This seems like a good convention to start using for all changes that
  affect upgrades in some way.
 
  Any comments/suggestions/objections?  If not, I'll get this documented
 on:
 
 
 https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references
 
  --
  Russell Bryant
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone Apache2 Installation Question

2013-10-14 Thread Simo Sorce
On Mon, 2013-10-14 at 14:31 -0700, Fox, Kevin M wrote:
 Hi Adam,
 
 I was trying to get both kerberos negotiate and kerberos basic auth working. 
 Negotiate does not seem to be supported by any of the clients so I think it 
 will be a fair amount of work to get working.
 
 /keystone/main/v2.0/tokens can't support having an apache auth module on it, 
 it seems because it is overloaded to do too many things. After playing around 
 with it, it looks like some services (like horizon) assume they can give it a 
 token and get back a restricted token without doing basic auth/negotiate all 
 the time. You can't put auth around it in apache and Require valid-user and 
 still have it perform its other functions. the tokens endpoint needs to be 
 able to be split out so that you can do something like /auth/type/tokens so 
 you can put a different handler on each url and /tokens has all the rest of 
 the functionality. I guess this will have to wait for Icehouse.
 
 I also played around with basic auth as an alternative in the mean time to 
 negotiate and ran into that same issue. It also requires changes to not just 
 python-keystoneclient but a lot of the other python-*clients as well, and 
 even then, horizon breaks as described above.
 
 I found a work around for basic auth though that is working quite nicely. I'm 
 trying to get the patch through our legal department, but they are tripping 
 over the contributor agreement. :/
 
 The trick is, if you are using basic auth, you only support a 
 username/password anyway and havana keystone is plugable in its handling of 
 username/passwords.
 
 So, I'll just tell you the idea of the patch so you can work on 
 reimplementing it if you'd like.
  * I made a new file 
 /usr/lib/python2.6/site-packages/keystone/identity/backends/basic_auth_sql.py
  * I made a class Identity that inherits from the sql Identity class.
  * I overrode the _check_password function.
  * I took the username/password and base64 encoded it, then make a http 
 request with it to whatever http basic auth service url you want to validate 
 with. apache on localhost works great.
  * Check the result for status 200. You can even fall back to the super 
 class's _chck_password to support both basic auth and sql passwords if you'd 
 like.
 
 The interesting bit about this configuration is keystone does not need to be 
 embedded in apache to support apache basic auth, while still providing you 
 most of the flexability of apache basic auth plugins. The only thing that 
 doesn't work is REMOTE_USER rewriting. Though you could probably add that 
 feature in somehow using a http response header or something.

If all you end up using is basic auth, what is the point of using
Kerberos at all?

Basic Auth should never be used with Kerberos except in exceptional
cases.

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-14 Thread Lakshminaraya Renganarayana
Hi Angus,

Thanks for the detailed reply. I have a few comments, written below in
context.


Angus Salkeld asalk...@redhat.com wrote on 10/13/2013 06:40:01 PM:

 
 - INPUTS: all the attributes that are consumed/used/read by that
resource
 (currently, we have Ref, GetAttrs that can give this implicitly)
 
 - OUTPUTS: all the attributes that are produced/written by that resource
(I
 do not know if this write-set is currently well-defined for a resource.
I
 think some of them are implicitly defined by Heat on particular resource
 types.)
 
 - Global name-space and data-space : all the values produced and
consumed
 (INPUTS/OUTPUTS) are described using a names that are fully qualified
 (XXX.stack_name.resource_name.property_name). The data values associated
 with these names are stored in a global data-space.  Reads are blocking,
 i.e., reading a value will block the execution resource/thread until the
 value is available. Writes are non-blocking, i.e., any thread can write
a
 value and the write will succeed immediately.

 I don't believe this would give us any new behaviour.

I believe that in today's Heat, wait-conditions and signals are the only
mechanism for synchronization during software configuration. The proposed
mechanism would provide a higher-level synchronization based on
blocking reads.
For example, if one is using Chef for software configuration, then the
recipes can use the proposed mechanism to wait for all the node[][]
attributes they require before starting the recipe execution. And Heat can
actually analyze and reason about the deadlock properties of such a
synchronization. On the other hand, if the recipe were using
wait-conditions, how would Heat reason about its deadlock properties?


 
 The ability to define resources at arbitrary levels of granularity
together
 with the explicit specification of INPUTS/OUTPUTS allows us to reap the
 benefits G1 and G2 outlined above. Note that the ability to reason about
 the inputs/outputs of each resource and the induced dependencies will
also
 allow Heat to detect dead-locks via dependence cycles (benefit G3). This
is
 already done today in Heat for Refs, GetAttr on base-resources, but the
 proposal is to extend the same to arbitrary attributes for any resource.

 How are TemplateResources and NestedStacks any different? To my
 knowledge this is aleady the case.

 The blocking-read and non-blocking writes further structures the
 specification to avoid deadlocks and race conditions (benefit G3).

 Have you experienced deadlocks with heat? I have never seen this...

Heat as it is today does not tackle the problem of synchronization during
software configuration, and hence the problems I see cannot be attributed to
Heat; they can only be attributed to the scripts / recipes that do the
software configuration. However, if we envision Heat providing some
support for software configuration, I can easily imagine cases where
it is impossible for Heat to analyze/reason about wait-conditions, which
leads to deadlocks. Wait-conditions and signals are equal to
timed semaphores in their power and expressivity, and those are
known for their problems with deadlocks.


 To me what is missing to better support complex software configuration
 is :
 - better integrating with existing configuration tools (puppet, chef,
salt, ansible, etc). (resource types)

One question is, whether in this integration the synchronization is
completely
left to the configuration tools or Heat would be involved in it. If it is
left to configuration tools, say chef, then the question is how does the
iterative convergence style execution of chef interfere with the schedule
order that Heat determines for a template. On the other hand, if Heat
provides
the mechanism for synchronization, then the question is whether
wait-conditions
and signals are the right abstractions for them. What are your thoughts on
this?

Thanks,
LN
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] Policy Model

2013-10-14 Thread Mike Spreitzer
Consider the example at 
https://docs.google.com/drawings/d/1nridrUUwNaDrHQoGwSJ_KXYC7ik09wUuV3vXw1MyvlY

We could indeed have distinct policy objects.  But I think they are policy 
*uses*, not policy *definitions* --- which is why I prefer to give them 
less prominent lifecycles.  In the example cited above, one policy use 
object might be: {id: some int, type: anti_collocation, properties: 
{level: rack}}, and there are four references to it; another policy use 
object might be {id: some int, type: network_reachability}, and there 
are three references to it.  What object should own the policy use 
objects?  You might answer that policy uses are owned by groups.  I do not 
think it makes sense to give them a more prominent lifecycle.  As I said, 
my preference would be to give them a less prominent lifecycle.  I would 
be happy to see each policy use owned by an InstanceGroupPolicy[Use] that 
references it and allow only one reference per policy use --- in other 
words, make the InstanceGroupPolicy[Use] class inherit from the Policy Use 
class.  And since I am not proposing that anything else inherit from the 
Policy Use class, I would even more prefer to see its contents simply 
merged inline into the InstanceGroupPolicy[Use] class.

Regards,
Mike



From:   Yathiraj Udupi (yudupi) yud...@cisco.com
To: Mike Spreitzer/Watson/IBM@IBMUS, 
Cc: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date:   10/14/2013 01:38 PM
Subject:Re: [scheduler] Policy Model



Mike, 

Like I proposed in my previous email about the model and the APIs, 

About the InstanceGroupPolicy, why not leave it as is, and introduce a new 
abstract model class called Policy. 
The InstanceGroupPolicy will be a reference to a Policy object saved 
separately. 
and the policy field will point to the saved Policy object's unique name 
or id. 

The new class Policy – can have the usual fields – id, name, uuid, and a 
dictionary of key-value pairs for any additional arguments about the 
policy. 

This is in alignment with the model for InstanceGroupMember, which is a 
reference to an actual Instance Object saved in the DB. 

I will color all the diamonds black to make it a composition in the UML 
diagram. 

Thanks,
Yathi. 







From: Mike Spreitzer mspre...@us.ibm.com
Date: Monday, October 14, 2013 7:14 AM
To: Yathiraj Udupi yud...@cisco.com
Cc: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: [scheduler] Policy Model

Could we agree on the following small changes to the model you posted last 
week?

1.  Rename InstanceGroupPolicy to InstanceGroupPolicyUse

2.  In InstanceGroupPolicy[Use], rename the policy field to 
policy_type

3.  Add an InstanceGroupPolicyUseProperty table, holding key/value pairs 
(two strings) giving the properties of the policy uses

4.  Color all the diamonds black 

Thanks, 
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystoneclient] self-signed keystone not accessible from other services

2013-10-14 Thread Jamie Lennox
On Mon, 2013-10-14 at 18:36 -0700, Bhuvan Arumugam wrote:
 Just making sure i'm not the only one facing this problem.
 https://bugs.launchpad.net/nova/+bug/1239894

Yep, we thought this may raise some issues but insecure by default was
just not acceptable. 

 keystoneclient v0.4.0 was released last week and used by all openstack
 services now. The insecure=False, as defined in
 keystoneclient.middleware.auth_token. The keystone client is happy as
 long as --insecure flag is used. There is no way to configure it in
 other openstack services like nova, neutron or glance while it is
 integrated with self-signed keystone instance.

I'm not following the problem. As you mentioned before the equivalent
setting for --insecure in auth_token is setting insecure=True in the
service's config file along with all the other keystone auth_token
settings. The equivalent when using the client library is passing
insecure=True to the client initialization. 
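
That is, in the service's own config file (nova.conf here, purely as an
example), next to the other auth_token settings:

    [keystone_authtoken]
    # ... existing auth_host / admin_user / admin_password settings ...
    # Equivalent of the keystone CLI's --insecure: skip SSL certificate
    # verification when auth_token talks to keystone.
    insecure = True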

 We should introduce new config parameter keystone_api_insecure and
 configure keystoneclient behavior based on this parameter. The config
 parameter should be defined in all other openstack services, as all of
 them integrate with keystone.

A new config parameter where? I guess we could make insecure in
auth_token also respond to an OS_SSL_INSECURE but that pattern is not
followed for any other service or parameter. 

 Until it's resolved, I think the known workaround is to use
 keystoneclient==0.3.2.
 
 
 Is there any other workaround for this issue?

Signed certificates.

 -- 
 Regards,
 Bhuvan Arumugam
 www.livecipher.com
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-14 Thread Mike Spreitzer
That came through beautifully formatted to me, but it looks much worse in 
the archive.  I'm going to use crude email tech here, so that I know it 
won't lose anything in handling.

Yathiraj Udupi (yudupi) yud...@cisco.com wrote on 10/14/2013 01:17:47 
PM:

 I read your email where you expressed concerns regarding create-time
 dependencies, and I agree they are valid concerns to be addressed. 
 But like we all agree, as a starting point, we are just focusing on 
 the APIs for now, and will leave that aside as implementation 
 details to be addressed later. 

I am not sure I understand your language here.  To me, design decisions 
that affect what calls the clients make are not implementation details, 
they are part of the API design.

 Thanks for sharing your suggestions on how we can simplify the APIs.
 I think we are getting closer to finalizing this one. 
 
 Let us start at the model proposed here - 
 [1] https://docs.google.com/document/d/
 17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit?usp=sharing 
 (Ignore the white diamonds - they will be black, when I edit the doc)
 
 The InstanceGroup represents all the information necessary to 
 capture the group - nodes, edges, policies, and metadata
 
 InstanceGroupMember - is a reference to an Instance, which is saved 
 separately, using the existing Instance Model in Nova.

I think you mean this is a reference to either a group or an individual 
Compute instance.

 
 InstanceGroupMemberConnection - represents the edge
 
 InstanceGroupPolicy is a reference to a Policy, which will also be 
 saved separately, (currently not existing in the model, but has to 
 be created). Here in the Policy model, I don't mind adding any 
 number of additional fields, and key-value pairs to be able to fully
 define a policy.  I guess a Policy-metadata dictionary is sufficient
 to capture all the required arguments. 
 The InstanceGroupPolicy will be associated to a group as a whole or an 
edge.

Like I said under separate cover, I think one of these is a policy *use* 
rather than a policy *definition*.  I go further and emphasize that the 
interesting out-of-scope definitions are of policy *types*.  A policy type 
takes parameters.  For example, policies of the anti-collocation (AKA 
anti-affinity) type have a parameter that specifies the level in the 
physical hierarchy where the location must differ (rack, host, ...).  Each 
policy type specifies a set of parameters, just like a procedure specifies 
parameters; each use of a policy type supplies values for the parameters, 
just like a procedure invocation supplies values for the procedure's 
parameters.  I suggest separating parameter values from metadata; the 
former are described by the policy type, while the latter are unknown to 
the policy type and are there for other needs of the client.

Yes, a use of a policy type is associated with a group or an edge.  In my 
own writing I have suggested a third possibility: that a policy use can be 
directly associated with an individual resource.  It just so happens that 
the code my group already has been running also has your restriction: it 
supports only policies associated with groups and relationships.  But I 
suggested allowing direct attachment to resources (as well as 
relationships also being able to directly reference resources instead of 
groups) because I think this restriction --- while it simplifies 
implementation --- makes templates more verbose; I felt the latter was a 
more important consideration than the former.  If you want to roadmap this 
--- restricted first, liberal later --- that's fine with me.

 
 InstanceGroupMetadata - represents key-value dictionary for any 
 additional metadata for the instance group. 
 
 I think this should fully support what we care about - nodes, edges,
 policies and metadata. 
 
 Do we all agree ? 

Yes, with exceptions noted above.

 
 Now going to the APIs, 
 
 Register GROUP API (from my doc [1]): 
 
 POST  /v3.0/{tenant_id}/groups --- Register a group

In such specs it would be good to be explicit about the request parameters 
and body.  If I follow correctly, 
https://review.openstack.org/#/c/30028/25/doc/api_samples/os-instance-groups/instance-groups-post-req.json
 
shows us that you intended (as of that patch) the body to carry a group 
definition.

 I think the confusion is only about when the member (all nested 
 members) and policy about when they are saved in the DB (registered,
 but not CREATED actually), such that we can associate a UUID.  This 
 led to my original thinking that it is a 3-phase operation where we 
 have to register (save in DB) the nested members first, then 
 register the group as a whole.  But this is not client friendly. 
 
 Like I had suggested earlier, as an implementation detail of the 
 Group registration API (CREATE part 1 in your terminology), we can 
 support this: as part of the group registration transaction, 
 complete the registration of the nested members, get their UUIDs, 
 create 

Re: [openstack-dev] [Neutron] Common requirements for services' discussion

2013-10-14 Thread Sumit Naiksatam
Thanks all for attending the IRC meeting today for the Neutron advanced
services discussion. We have an etherpad for this:
https://etherpad.openstack.org/p/NeutronAdvancedServices

It was also felt that we need to have more ongoing discussions, so we will
have follow up meetings. We will try to propose a more convenient time for
everyone involved for a meeting next week. Meanwhile, we can continue to
use the mailing list, etherpad, and/or comment on the specific proposals.

Thanks,
~Sumit.


On Tue, Oct 8, 2013 at 8:30 PM, Sumit Naiksatam sumitnaiksa...@gmail.comwrote:

 Hi All,

 We had a VPNaaS meeting yesterday and it was felt that we should have a
 separate meeting to discuss the topics common to all services. So, in
 preparation for the Icehouse summit, I am proposing an IRC meeting on Oct
 14th 22:00 UTC (immediately after the Neutron meeting) to discuss common
 aspects related to the FWaaS, LBaaS, and VPNaaS.

 We will begin with service insertion and chaining discussion, and I hope
 we can collect requirements for other common aspects such as service
 agents, services instances, etc. as well.

 Etherpad for service insertion  chaining can be found here:
 https://etherpad.openstack.org/icehouse-neutron-service-insertion-chaining

 Hope you all can join.

 Thanks,
 ~Sumit.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] What validation feature is necessary for Nova v3 API?

2013-10-14 Thread Kenichi Oomichi

Hi,

I'd like to know what validation features are really needed for the Nova v3
API, and I hope this mail will kick off some brainstorming about it.

 Introduction 
I have submitted a blueprint nova-api-validation-fw[1].
The purpose is comprehensive validation of API input parameters.
32% of Nova v3 API parameters are not validated in any way[2], and that can
cause an internal error if a client simply sends an invalid request. When an
internal error happens, the error message is written to a log file and
OpenStack operators have to investigate its cause. That is hard work for the
operators.

In the Havana development cycle I proposed the implementation code for the BP,
but it was abandoned. The Nova web framework will move to Pecan/WSME, and my
code depended on WSGI, so it would have had merit in the short term but not in
the long term.
Now some Pecan/WSME sessions are proposed for the Hong Kong summit, so I feel
this is a good chance to revisit the topic.

As a first step, I'd like to find out through discussion which validation
features are really necessary for the Nova v3 API. After listing them, the
necessary features can be proposed as WSME improvements if we need them.


To support the discussion, I have investigated how all current Nova v3 API
parameters are validated. There are 79 API methods, and 49 of them take API
parameters in a request body. In total, they have 148 API parameters.
(details: [3])

The necessary features, as I see them now, are the following:

 Basic Validation Feature 
Through this investigation, it seems that we need some basic validation
features such as the following (a small illustrative sketch comes after this
list):
* Type validation
  str(name, ..), int(vcpus, ..), float(rxtx_factor), dict(metadata, ..),
  list(networks, ..), bool(conbine, ..), None(availability_zone)
* String length validation
  1 - 255
* Value range validation
  value = 0(rotation, ..), value  0(vcpus, ..),
  value = 1(os-multiple-create:min_count, os-multiple-create:max_count)
* Data format validation
  * Pattern:
uuid(volume_id, ..), boolean(on_shared_storage, ..), 
base64encoded(contents),
ipv4(access_ip_v4, fixed_ip), ipv6(access_ip_v6)
  * Allowed list:
'active' or 'error'(state), 'parent' or 'child'(cells.type),
'MANUAL' or 'AUTO'(os-disk-config:disk_config), ...
  * Allowed string:
not contain '!' and '.'(cells.name),
contain [a-zA-Z0-9_.- ] only(flavor.name, flavor.id)
* Mandatory validation
  * Required: server.name, flavor.name, ..
  * Optional: flavor.ephemeral, flavor.swap, ..
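
Just to make the above concrete, here is a small illustrative sketch (not the
proposed framework itself) of how a few of these checks could be written
declaratively with the jsonschema library; the body fields are a made-up
subset of a "create server" request:

    import jsonschema

    server_schema = {
        'type': 'object',
        'properties': {
            # Type + string length validation.
            'name': {'type': 'string', 'minLength': 1, 'maxLength': 255},
            # Data format validation via a pattern (uuid-like).
            'image_ref': {'type': 'string',
                          'pattern': '^[0-9a-fA-F-]{36}$'},
            # Type + value range validation.
            'min_count': {'type': 'integer', 'minimum': 1},
        },
        # Mandatory validation: 'name' is required, the rest are optional.
        'required': ['name'],
        'additionalProperties': False,
    }


    def validate_body(body):
        try:
            jsonschema.validate(body, server_schema)
        except jsonschema.ValidationError as e:
            # Would become a 400 response instead of an internal error.
            raise ValueError(e.message)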


 Auxiliary Validation Feature 
Some parameters have dependencies on other parameters.
For example, name and/or availability_zone should be specified when updating an
aggregate. There are only a few such cases, so the dependency validation
feature would not be mandatory.

The cases are the following:
* Required if not specifying other:
  (update aggregate: name or availability_zone), (host: status or 
maintenance_mode),
  (server: os-block-device-mapping:block_device_mapping or image_ref)
* Should not specify both:
  (interface_attachment: net_id and port_id),
  (server: fixed_ip and port)


 API Documentation Feature 
WSME has a unique feature which generates API documentation from source code.
The documentation[4] contains:
* Method, URL (GET /v2/resources/, etc)
* Parameters
* Return type
* Parameter samples in both JSON and XML

This feature will help us with the Nova v3 API, because the API consists of
many methods and the API sample implementations were a hard process in Havana
development.
The documentation currently generated with this feature contains only Type as
each parameter's attribute.
If the Nova v3 API documentation also contained String length, Value range,
Data format, and Mandatory, it would help many developers and users.


Please let me know which validation features you think are necessary.
Of course, any comments are welcome :-)


Thanks
Ken'ichi Ohmichi

---
[1]: https://blueprints.launchpad.net/nova/+spec/nova-api-validation-fw
[2]: 
https://wiki.openstack.org/wiki/NovaApiValidationFramework#Appendix:_Nova-v3-API_parameters
 
[3]: 
https://wiki.openstack.org/wiki/NovaApiValidationFramework#What_is_necessary_for_Nova-v3-API
[4]: http://docs.openstack.org/developer/ceilometer/webapi/v2.html


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Are you coming to linux.conf.au 2014?

2013-10-14 Thread Michael Still
To follow up on this, I've decided to give running a meetup a go. The
details are at http://sites.rcbops.com/lca2014_openstack/?p=38

Cheers,
Michael

On Thu, Oct 3, 2013 at 10:30 AM, Michael Still mi...@stillhq.com wrote:
 If you're coming to linux.conf.au 2014, would you be interested in a
 couple of day meetup for OpenStack developers the week after the
 conference? I'm trying to judge interest before I try and get it
 booked.

 The timing (early January) is nice because its basically mid cycle for
 Icehouse, so its a nice opportunity to checkpoint our progress.

 Michael

 --
 Rackspace Australia



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler meeting and Icehouse Summit

2013-10-14 Thread Debojyoti Dutta
Hi Phil

Good summary ...

I was wondering if we are shooting for too much to discuss by clubbing
 i) Group Scheduling / Smart Placement / Heat and Scheduling  (1),
(2), (3),  (7)
- How do you schedule something more complex than a single VM?

I think specifying something more complex than a single VM is a great
theme, but I don't know if we can do it justice in one session. I think even
a simple nova scheduling API for groups/bundles of resources would by itself
be a lot for one session. In fact, in order to specify what you want in your
resource bundle, you would need to think about policies. So maybe just the
simple Nova API and policies might be useful.

Also, we might have a session correlating the different models of how
more than one VM can be requested - you could start from nova and then
generalize across services, or you could start from heat workload
models and drill down. There are passionate people on both sides, and
maybe that debate needs a session.

I think smart resource placement is very interesting and might
need at least half a slot, since one can show how it can be done today
in nova and how it can handle cross-service scenarios.

See you tomorrow on IRC

debo


On Mon, Oct 14, 2013 at 10:56 AM, Alex Glikson glik...@il.ibm.com wrote:
 IMO, the three themes make sense, but I would suggest waiting until the
 submission deadline and discuss at the following IRC meeting on the 22nd.
 Maybe there will be more relevant proposals to consider.

 Regards,
 Alex

 P.S. I plan to submit a proposal regarding scheduling policies, and maybe
 one more related to theme #1 below



 From:Day, Phil philip@hp.com
 To:OpenStack Development Mailing List
 openstack-dev@lists.openstack.org,
 Date:14/10/2013 06:50 PM
 Subject:Re: [openstack-dev] Scheduler meeting and Icehouse Summit
 



 Hi Folks,

 In the weekly scheduler meeting we've been trying to pull together a
 consolidated list of Summit sessions so that we can find logical groupings
 and make a more structured set of sessions for the limited time available at
 the summit.

 https://etherpad.openstack.org/p/IceHouse-Nova-Scheduler-Sessions

 With the deadline for sessions being this Thursday 17th, tomorrows IRC
 meeting is the last chance to decide which sessions we want to combine /
 prioritize.Russell has indicated that a starting assumption of three
 scheduler sessions is reasonable, with any extras depending on what else is
 submitted.

 I've matched the list on the Either pad to submitted sessions below, and
 added links to any other proposed sessions that look like they are related.


 1) Instance Group Model and API
Session Proposal:
 http://summit.openstack.org/cfp/details/190

 2) Smart Resource Placement:
   Session Proposal:
 http://summit.openstack.org/cfp/details/33
Possibly related sessions:  Resource
 optimization service for nova  (http://summit.openstack.org/cfp/details/201)

 3) Heat and Scheduling and Software, Oh My!:
 Session Proposal:
 http://summit.openstack.org/cfp/details/113

 4) Generic Scheduler Metrics and Celiometer:
 Session Proposal:
 http://summit.openstack.org/cfp/details/218
 Possibly related sessions:  Making Ceilometer and Nova play
 nice  http://summit.openstack.org/cfp/details/73

 5) Image Properties and Host Capabilities
 Session Proposal:  NONE

 6) Scheduler Performance:
 Session Proposal:  NONE
 Possibly related Sessions: Rethinking Scheduler Design
 http://summit.openstack.org/cfp/details/34

 7) Scheduling Across Services:
 Session Proposal: NONE

 8) Private Clouds:
 Session Proposal:
 http://summit.openstack.org/cfp/details/228

 9) Multiple Scheduler Policies:
 Session Proposal: NONE


 The proposal from last weeks meeting was to use the three slots for:
 - Instance Group Model and API   (1)
 - Smart Resource Placement (2)
 - Performance (6)

 However, at the moment there doesn't seem to be a session proposed to cover
 the performance work ?

 It also seems to me that the Group Model and Smart Placement are pretty
 closely linked along with (3) (which says it wants to combine 1  2 into the
 same topic) , so if we only have three slots available then these look like
 logical candidates for consolidating into a single session.That would
 free up a session to cover the generic metrics (4) and Ceilometer - where a
 lot of work in Havana stalled because we couldn't get a consensus on the way
 forward.  The third slot would be kept for performance - which based on the
 lively debate in the scheduler meetings I'm assuming will still be submitted
 as a session.Private Clouds isn't really a scheduler topic, so I suggest
 it takes its chances as a general session.  

Re: [openstack-dev] [TripleO] CD Cloud MVP2 completed

2013-10-14 Thread Monty Taylor


On 10/14/2013 09:39 PM, Robert Collins wrote:
 That is, we're now deploying a trunk KVM based OpenStack using heat +
 nova baremetal on a continuous basis into the test rack we have *and*
 there are users setup on it for any TripleO ATC that want's an
 account, and networking is setup: you can access instances over the
 internet, and the endpoint is also publically accessible.
 
 So far it's done 121 successful deploys, and 4 failures. Woo - thats a
 fairly decent reliability rate today; we're not at the point of
 analysing every failure yet - I think we'll add that discipline in
 when we have state preservation.

Woot. Well done.

 We'll need to do a retrospective for this as well...
 https://wiki.openstack.org/wiki/TripleO/TripleOCloud/MVP2Retrospective
 is where we'll capture the info from it.
 
 Following on the current strategy, MVP 3 will add in stateful upgrades
 (with downtime).
 
 https://etherpad.openstack.org/p/tripleo-image-updates - see
 'Rebuild:' in there.
 https://trello.com/b/0jIoMrdo/tripleo has the cards as expected...
 
 -Rob
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler meeting and Icehouse Summit

2013-10-14 Thread Mike Spreitzer
Yes, Rethinking Scheduler Design  
http://summit.openstack.org/cfp/details/34 is not the same as the 
performance issue that Boris raised.  I think the former would be a 
natural consequence of moving to an optimization-based joint 
decision-making framework, because such a thing necessarily takes a "good 
enough" attitude.  The issue Boris raised is more efficient tracking of 
the true state of resources, and I am interested in that issue too.  A 
holistic scheduler needs such tracking, in addition to the needs of the 
individual services.  Having multiple consumers makes the issue more 
interesting :-)

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] Policy Model

2013-10-14 Thread Mike Spreitzer
Yathiraj Udupi (yudupi) yud...@cisco.com wrote on 10/14/2013 11:43:34 
PM:

 ... 
 
 For the policy model, you can expect rows in the DB each 
 representing different policy instances something like- 

  {id: , uuid: SOME-UUID-1, name: anti-colocation-1,  type: 
 anti-colocation, properties: {level: rack}}

  {id: , uuid: SOME-UUID-2, name: anti-colocation-2,  type: 
 anti-colocation, properties: {level: PM}}

  {id: , uuid: SOME-UUID-3, name: network-reachabilty-1, 
 type: network-reachability properties: {}}
 
 And for the InstanceGroupPolicy model, you can expect rows such as 

 {id: 5, policy: SOME-UUID-1, type: group, edge_id: , 
 group_id: 12345}

 {id: 6, policy: SOME-UUID-1, type: group, edge_id: , 
 group_id: 22334} 

Do you imagine just one policy object for a given set of contents, or many?  Put 
another way, would every InstanceGroupPolicy object that wants to apply a 
rack-level anti-collocation policy use SOME-UUID-1?

Who or what created the record with id ?  Who or what decides to 
delete it, and when and why?  What about dangling references?  It seems to 
me that needing to answer these questions simply imposes unnecessary 
burdens.  If the type and properties fields of record id  were 
merged inline (replacing the policy:SOME-UUID-1 field) into records id 
, , and the other uses, then there are no hard questions to 
answer; the group author knows what policies he wants to apply and where, 
and he simply writes them there.

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] Policy Model

2013-10-14 Thread Alex Glikson
I would suggest not generalizing too much, e.g., restrict the discussion
to PlacementPolicy. If anyone else wants to use a similar construct
for other purposes, it can be generalized later.
For example, the notion of 'policy' already exists in other places in 
OpenStack in the context of security, and we also plan to introduce a 
different kind of 'policies' for scheduler configurations in different 
managed domains (e.g., aggregates), but I wonder whether it is important 
(or makes sense) making all of them 'inherit' from the same base model.

Regards,
Alex




From:   Yathiraj Udupi (yudupi) yud...@cisco.com
To: Mike Spreitzer mspre...@us.ibm.com, 
Cc: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date:   15/10/2013 07:33 AM
Subject:Re: [openstack-dev] [scheduler] Policy Model



The Policy model object has a lifecycle of its own.  This is because this
policy object can possibly be used outside the scope of the InstanceGroup
effort.  Hence I don't see a problem in a policy administrator, or any
user, if allowed, to maintain this set of policies outside the scope of
InstanceGroups. 

However a group author will maintain the InstanceGroupPolicy objects, and
refer a policy that is appropriate to his requirement.  If a new Policy
object needs to be registered for a new requirement, that has to be done
by this user, if allowed.

About your question regarding dangling references, that situation should
not be allowed,  hence a delete of the Policy object should not be
allowed, if there is some other object referring it. This can be
implemented right, by adding a proper association between the models.

This way, a generic Policy model can apply to other scenarios that may
come up in Openstack.

Regards,
Yathi. 


















___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] how can i get volume name in create snapshot call

2013-10-14 Thread Dinakar Gorti Maruti
Dear all,
We are in the process of implementing a new driver for the Cinder
services. We have a scenario where we need the volume name when creating a
snapshot. In detail, we designed the driver in such a way that it communicates
with our server through HTTP calls, and now we need the volume name and other
details in the create snapshot function. How can we get those details?
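
For context, this is roughly the shape of the driver method in question; my
understanding (which needs confirming) is that the snapshot dict handed to the
driver already carries the source volume's name and id, as other drivers seem
to use:

    # Sketch only; the snapshot key names are my assumption and should be
    # double-checked against the existing in-tree drivers.
    class OurDriver(object):  # stands in for the real cinder driver base class
        def create_snapshot(self, snapshot):
            volume_name = snapshot['volume_name']   # name of the source volume
            volume_id = snapshot['volume_id']       # its UUID, if needed
            payload = {'volume': volume_name,
                       'volume_id': volume_id,
                       'snapshot': snapshot['name']}
            # ... send payload to our server over HTTP from here ...
            return payload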

Thanks
Dinakar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev