Re: [openstack-dev] Newbie python novaclient question

2013-10-11 Thread Alex
Thank you, Noorul. I looked at the review. My question is: in
OpenStackComputeShell.main, which line calls the v1_1/shell.py functions?


Thanks
Al


On Oct 10, 2013, at 9:03 PM, Noorul Islam K M noo...@noorul.com wrote:

 A L la6...@gmail.com writes:
 
 Dear Openstack Dev Gurus,
 
 I am trying to understand the python-novaclient code. Can someone please
 point me to where in the OpenStackComputeShell class in shell.py the
 actual function or class related to an argument gets called?
 
 
 This review [1] is something I submitted and it adds a sub command. Maybe
 this will give you some clue.
 
 [1] https://review.openstack.org/#/c/40181/
 
 Thanks and Regards
 Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Newbie python novaclient question

2013-10-11 Thread Noorul Islam K M
Alex la6...@gmail.com writes:

 Thank you, Noorul. I looked at the review. My question is: in
 OpenStackComputeShell.main, which line calls the v1_1/shell.py functions?


I would look at get_subcommand_parser() method.

Thanks and Regards
Noorul



 On Oct 10, 2013, at 9:03 PM, Noorul Islam K M noo...@noorul.com wrote:

 A L la6...@gmail.com writes:
 
 Dear Openstack Dev Gurus,
 
 I am trying to understand the python-novaclient code. Can someone please
 point me to where in the OpenStackComputeShell class in shell.py the
 actual function or class related to an argument gets called?
 
 
 This review [1] is something I submitted and it adds a sub command. Maybe
 this will give you some clue.
 
 [1] https://review.openstack.org/#/c/40181/
 
 Thanks and Regards
 Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Newbie python novaclient question

2013-10-11 Thread Alex
Yes , this method seems to look for the corresponding action but still doesn't 
seem to be the one actually calling them.

Regards
Al



On Oct 10, 2013, at 11:07 PM, Noorul Islam K M noo...@noorul.com wrote:

 Alex la6...@gmail.com writes:
 
 Thank you, Noorul. I looked at the review. My question is: in
 OpenStackComputeShell.main, which line calls the v1_1/shell.py functions?
 
 
 I would look at get_subcommand_parser() method.
 
 Thanks and Regards
 Noorul
 
 
 
 On Oct 10, 2013, at 9:03 PM, Noorul Islam K M noo...@noorul.com wrote:
 
 A L la6...@gmail.com writes:
 
 Dear Openstack Dev Gurus,
 
 I am trying to understand the python-novaclient code. Can someone please
 point me to where in the OpenStackComputeShell class in shell.py the
 actual function or class related to an argument gets called?
 
 
 This review [1] is something I submitted and it adds a sub command. Maybe
 this will give you some clue.
 
 [1] https://review.openstack.org/#/c/40181/
 
 Thanks and Regards
 Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-11 Thread Mike Spreitzer
Regarding Alex's question of which component does holistic infrastructure 
scheduling, I hesitate to simply answer heat.  Heat is about 
orchestration, and infrastructure scheduling is another matter.  I have 
attempted to draw pictures to sort this out, see 
https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U 
and 
https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-GQQ1bRVgBpJdstpu0lH_TONw6g 
.  In those you will see that I identify holistic infrastructure 
scheduling as separate functionality from infrastructure orchestration 
(the main job of today's heat engine) and also separate from software 
orchestration concerns.  However, I also see a close relationship between 
holistic infrastructure scheduling and heat, as should be evident in those 
pictures too.

Alex made a remark about the needed inputs, and I agree but would like to 
expand a little on the topic.  One thing any scheduler needs is knowledge 
of the amount, structure, and capacity of the hosting thingies (I wish I 
could say resources, but that would be confusing) onto which the 
workload is to be scheduled.  Scheduling decisions are made against 
available capacity.  I think the most practical way to determine available 
capacity is to separately track raw capacity and current (plus already 
planned!) allocations from that capacity, finally subtracting the latter 
from the former.

In Nova, for example, sensing raw capacity is handled by the various 
nova-compute agents reporting that information.  I think a holistic 
infrastructure scheduler should get that information from the various 
individual services (Nova, Cinder, etc) that it is concerned with 
(presumably they have it anyway).

A holistic infrastructure scheduler can keep track of the allocations it 
has planned (regardless of whether they have been executed yet).  However, 
there may also be allocations that did not originate in the holistic 
infrastructure scheduler.  The individual underlying services should be 
able to report (to the holistic infrastructure scheduler, even if lowly 
users are not so authorized) all the allocations currently in effect.  An 
accurate union of the current and planned allocations is what we want to 
subtract from raw capacity to get available capacity.
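
To make this concrete, here is a toy sketch of the bookkeeping I have in mind 
(purely illustrative Python, not real OpenStack code; the field names are made up):

    # available = raw capacity - (current + planned) allocations
    raw_capacity = {'host1': {'vcpus': 16, 'ram_mb': 65536}}

    current_allocations = [('host1', {'vcpus': 4, 'ram_mb': 8192})]  # reported by the services
    planned_allocations = [('host1', {'vcpus': 2, 'ram_mb': 4096})]  # planned, not yet executed

    def available_capacity(raw, allocations):
        avail = {host: dict(caps) for host, caps in raw.items()}
        for host, alloc in allocations:
            for key, amount in alloc.items():
                avail[host][key] -= amount
        return avail

    # in practice the two lists must be merged without double counting
    print(available_capacity(raw_capacity,
                             current_allocations + planned_allocations))
    # {'host1': {'vcpus': 10, 'ram_mb': 53248}}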

If there is a long delay between planning and executing an allocation, 
there can be nasty surprises from competitors --- if there are any 
competitors.  Actually, there can be nasty surprises anyway.  Any 
scheduler should be prepared for nasty surprises, and react by some 
sensible retrying.  If nasty surprises are rare, we are pretty much done. 
If nasty surprises due to the presence of competing managers are common, 
we may be able to combat the problem by changing the long delay to a short 
one --- by moving the allocation execution earlier into a stage that is 
only about locking in allocations, leaving all the other work involved in 
creating virtual resources to later (perhaps Climate will be good for 
this).  If the delay between planning and executing an allocation is short 
and there are many nasty surprises due to competing managers, then you 
have too much competition between managers --- don't do that.

Debo wants a simpler nova-centric story.  OK, how about the following. 
This is for the first step in the roadmap, where scheduling decisions are 
still made independently for each VM instance.  For the client/service 
interface, I think we can do this with a simple clean two-phase interface 
when traditional software orchestration is in play, a one-phase interface 
when slick new software orchestration is used.  Let me outline the 
two-phase flow.  We extend the Nova API with CRUD operations on VRTs 
(top-level groups).  For example, the CREATE operation takes a definition 
of a top-level group and all its nested groups, definitions (excepting 
stuff like userdata) of all the resources (only VM instances, for now) 
contained in those groups, all the relationships among those 
groups/resources, and all the applications of policy to those groups, 
resources, and relationships.  This is a rest-style interface; the CREATE 
operation takes a definition of the thing (a top-level group and all that 
it contains) being created; the UPDATE operation takes a revised 
definition of the whole thing.  Nova records the presented information; 
the familiar stuff is stored essentially as it is today (but marked as 
being in some new sort of tentative state), and the grouping, 
relationship, and policy stuff is stored according to a model like the one 
Debo/Yathi wrote.  The CREATE operation returns a UUID for the newly 
created top-level group.  The invocation of the top-level group CRUD is a 
single operation and it is the first of the two phases.  In the second 
phase of a CREATE flow, the client creates individual resources with the 
same calls as are used today, except that each VM instance create call is 
augmented with a pointer into the policy information.  That 

[openstack-dev] [Glance] Havana RC2 available

2013-10-11 Thread Thierry Carrez
Hello everyone,

Due to various issues and regressions detected in RC1 testing, we just
created a new Havana release candidate for OpenStack Image Service
(Glance).

You can find the RC2 tarball and the list of fixed bugs at:

https://launchpad.net/glance/havana/havana-rc2

This is hopefully the last Havana release candidate for Glance.
Unless a last-minute release-critical regression is found that warrants
another release candidate respin, this RC2 will be formally included in
the common OpenStack 2013.2 final release next Thursday. You are
therefore strongly encouraged to test and validate this tarball.

Alternatively, you can grab the code at:
https://github.com/openstack/glance/tree/milestone-proposed

If you find a regression that could be considered release-critical,
please file it at https://bugs.launchpad.net/glance/+filebug and tag
it *havana-rc-potential* to bring it to the release crew's attention.

Happy regression hunting,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Tricky questions - 1/2 Quantum Network Object

2013-10-11 Thread Marco Fornaro
Hi All,

(I already posted this on the openstack mailing list, but perhaps it's more of a 
developer topic :))
Some tricky questions I'd like help with (email 1 of 2):


Quantum Network object
In the "openstack networking guide" > "Using Openstack compute with 
Openstack" > "Advanced VM creation" 
(http://docs.openstack.org/grizzly/openstack-network/admin/content/advanceed_vm_creation.html)
there are examples of booting a VM on one or more NETWORKs (meaning the quantum 
Network object):
nova boot --image img --flavor flavor \
--nic net-id=net1-id --nic net-id=net2-id vm-name

BUT if you look at the description of the network object in the API abstraction 
it looks like a collection of subnets (meaning the quantum object), so 
basically a collection of IP address ranges like 192.168.100.0/24

SO (first question): what happens if the network where I boot the VM has more 
than one subnet? I suppose the VM should have a NIC for EACH subnet of the 
network!

THEN (second question): why do I need a network object? Wouldn't it be more 
practical to have just the subnet object? Why do I need to create a Network if 
it's just a collection of subnets?

Thanks in advance for any help

Best Regards

Marco



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Tricky questions - 2/2 NOVA COMPUTE: are Booted VMs independent by other openstack/quantum services?

2013-10-11 Thread Marco Fornaro
Hi All,

(I already posted this on the openstack mailing list, but perhaps it's more of a 
developer topic :))
Some tricky questions I'd like help with (email 2 of 2):

(please refer to a scenario with Openstack+Quantum, so we can have complex 
networks)

Nova Compute
Are booted VMs independent of other nova/quantum services?
I mean: once a VM is booted, does it still need to talk to other nova/quantum 
services (apart from nova-compute), like the quantum server, the quantum 
plugin agent, keystone, etc.?
For example, and in other words: when a VM is running, could I turn off the 
quantum server (or even physically disconnect the dedicated server itself)?

Because... if this is NOT possible, it means that the various servers (for 
example the quantum server) are involved during the whole lifecycle of a VM. Could 
this be a bottleneck? Something to take into account for high availability and 
performance?


Thanks in advance for any help

Best Regards

Marco








___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tricky questions - 1/2 Quantum Network Object

2013-10-11 Thread Yongsheng Gong
On Fri, Oct 11, 2013 at 4:41 PM, Marco Fornaro marco.forn...@huawei.com wrote:

  Hi All,

 (I already posted this on the openstack mailing list, but perhaps it's more of a
 developer topic :))

 Some tricky questions I'd like help with (email 1 of 2):

 *Quantum Network object*

 In the "openstack networking guide" > "Using Openstack compute with
 Openstack" > "Advanced VM creation" (
 http://docs.openstack.org/grizzly/openstack-network/admin/content/advanceed_vm_creation.html)
 there are examples of booting a VM on one or more NETWORKs (meaning the quantum
 Network object):

 nova boot --image img --flavor flavor \

 *--nic net-id=net1-id --nic net-id=net2-id* vm-name

 BUT if you look at the description of the network object in the API
 abstraction it looks like a collection of subnets (meaning the quantum
 object), so basically a collection of IP address ranges like 192.168.100.0/24

 *SO (first question): what happens if the network where I boot the VM has
 more than one subnet? I suppose the VM should have a NIC for EACH subnet of
 the network!*

You will just get a NIC for each network, not for each subnet of the
network. To choose the subnet, use --nic
net-id=net-uuid,v4-fixed-ip=ip-addr
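
For example, something like this (the fixed IP here is just an illustration):

  nova boot --image img --flavor flavor \
    --nic net-id=net1-id,v4-fixed-ip=192.168.100.10 vm-name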

 *THEN (second question): why do I need a network object? Wouldn't it be
 more practical to have just the subnet object? Why do I need to create a
 Network if it's just a collection of subnets?*

Under the hood, the traffic among networks is isolated by tunnel id, VLAN
id or something else. You can create networks with just one subnet, but the
VLAN ids will run out soon if VLAN is used.
We can have many networks, and the subnets within a network can have
overlapping IPs.

 Thanks in advance for any help

 Best Regards

 Marco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change I3e080c30: Fix resource length in project_user_quotas table for Havana?

2013-10-11 Thread Thierry Carrez
Joshua Hesketh wrote:
 I've been reviewing this change which is currently proposed for master
 and I think it needs to be considered for the next Havana RC.
 
 Change I3e080c30: Fix resource length in project_user_quotas table
 https://review.openstack.org/#/c/47299/

The bug was properly tagged and Russell and I should look into it soon.
I'm not a big fan of changing DB schema less than one week before final
release though.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] dd performance for wipe in cinder

2013-10-11 Thread cosmos cosmos
Hello.
My name is Rucia, from Samsung SDS.

Now I am having trouble with cinder volume deletion.
I am working on supporting big data storage on LVM.

But it takes too much time to delete a cinder LVM volume because of dd.
The cinder volume is 200GB, holding hadoop master data.
When I delete the cinder volume using 'dd if=/dev/zero of=$cinder-volume
count=100 bs=1M' it takes about 30 minutes.

Is there a better and quicker way to delete it?

Cheers.
Rucia.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tricky questions - 2/2 NOVA COMPUTE: are Booted VMs independent by other openstack/quantum services?

2013-10-11 Thread Yongsheng Gong
On Fri, Oct 11, 2013 at 4:41 PM, Marco Fornaro marco.forn...@huawei.com wrote:

  Hi All,

 (I already posted this on the openstack mailing list, but perhaps it's more of a
 developer topic :))

 Some tricky questions I'd like help with (email 2 of 2):

 (please refer to a scenario with Openstack+Quantum, so we can have complex
 networks)

 *Nova Compute*

 *Are booted VMs independent of other nova/quantum services?*

 *I mean:* once a VM is booted, does it still need to talk to other
 nova/quantum services (apart from nova-compute), like the quantum server,
 the quantum plugin agent, keystone, etc.?

 For example, and in other words: when a VM is running, could I turn off the
 quantum server (or even physically disconnect the dedicated server itself)?

 Because... if this is NOT possible, it means that the various servers (for
 example the quantum server) are involved during the whole lifecycle of a
 VM. Could this be a bottleneck? Something to take into account for high
 availability and performance?

It depends on how your VM gets its IP. If it is using DHCP, you need to keep
dnsmasq alive. Other quantum services can be down while the VM is
running normally.

 Thanks in advance for any help

 Best Regards

 Marco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Neutron support for passthrough of networking devices?

2013-10-11 Thread Prashant Upadhyaya
 But if there are two
 physical NIC's which were diced up with SRIOV, then VM's on the diced
 parts of the first  physical NIC cannot communicate easily with the
 VM's on the diced parts of the second physical NIC. So a native
 implementation has to be there on the Compute Node which will aid this
 (this native implementation will take over the Physical Function, PF
 of each NIC) and will be able to 'switch' the packets between VM's of
 different physical diced up NIC's [if we need that usecase]

Is this strictly necessary?  It seems like it would be simpler to let the 
packets be sent out over the wire and the switch/router would send them back to 
the other NIC.  Of course this would result in higher use of the physical link, 
but on the other hand it would mean less work for the CPU on the compute node.

PU Not strictly necessary. I come from a data plane background (Intel DPDK + 
SRIOV), and the Intel DPDK guide suggests the above usecase for accelerating the 
data path. I agree, it would be much simpler to go to the switch 
and back into the 2nd NIC; let's solve this first in OpenStack with SRIOV, as that 
itself will be a major step forward.

Regards
-Prashant

-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com]
Sent: Thursday, October 10, 2013 8:21 PM
To: Prashant Upadhyaya
Cc: OpenStack Development Mailing List; Jiang, Yunhong; 
openst...@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack] Neutron support for passthrough of 
networking devices?

On 10/10/2013 01:19 AM, Prashant Upadhyaya wrote:
 Hi Chris,

 I note two of your comments --

 When we worked on H release, we target for basic PCI support like
 accelerator card or encryption card etc.

 PU So I note that you are already solving the PCI pass through
 usecase somehow ? How ? If you have solved this already in terms of
 architecture then SRIOV should not be difficult.

Notice the double indent...that was actually Jiang's statement that I quoted.


 Do we run into the same complexity if we have spare physical NICs on
 the host that get passed in to the guest?

 PU In part you are correct. However there is one additional thing.
 When we have multiple physical NIC's, the Compute Node's linux is
 still in control over those.

snip

 In case of SRIOV, you can dice up a single physical NIC into multiple
 NIC's (effectively), and expose each of these diced up NIC's to a VM
 each. This means that the VM will now 'directly' access the NIC
 bypassing the Hypervisor.

snip

 But if there are two
 physical NIC's which were diced up with SRIOV, then VM's on the diced
 parts of the first  physical NIC cannot communicate easily with the
 VM's on the diced parts of the second physical NIC. So a native
 implementation has to be there on the Compute Node which will aid this
 (this native implementation will take over the Physical Function, PF
 of each NIC) and will be able to 'switch' the packets between VM's of
 different physical diced up NIC's [if we need that usecase]

Is this strictly necessary?  It seems like it would be simpler to let the 
packets be sent out over the wire and the switch/router would send them back to 
the other NIC.  Of course this would result in higher use of the physical link, 
but on the other hand it would mean less work for the CPU on the compute node.

Chris




===
Please refer to http://www.aricent.com/legal/email_disclaimer.html
for important disclosures regarding this electronic communication.
===

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Newbie python novaclient question

2013-10-11 Thread Noorul Islam K M
Alex la6...@gmail.com writes:

 Yes , this method seems to look for the corresponding action but still 
 doesn't seem to be the one actually calling them.


Are you looking for this, which is inside the main() method?

args = subcommand_parser.parse_args(argv)

Thanks and Regards
Noorul

 Regards
 Al



 On Oct 10, 2013, at 11:07 PM, Noorul Islam K M noo...@noorul.com wrote:

 Alex la6...@gmail.com writes:
 
 Thank you, Noorul. I looked at the review. My question is: in
 OpenStackComputeShell.main, which line calls the v1_1/shell.py functions?
 
 
 I would look at get_subcommand_parser() method.
 
 Thanks and Regards
 Noorul
 
 
 
 On Oct 10, 2013, at 9:03 PM, Noorul Islam K M noo...@noorul.com wrote:
 
 A L la6...@gmail.com writes:
 
 Dear Openstack Dev Gurus,
 
 I am trying to understand the python-novaclient code. Can someone please
 point me to where in the OpenStackComputeShell class in shell.py the
 actual function or class related to an argument gets called?
 
 
 This review [1] is something I submitted and it adds a sub command. Maybe
 this will give you some clue.
 
 [1] https://review.openstack.org/#/c/40181/
 
 Thanks and Regards
 Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Voting in the Technical Committee election in now open

2013-10-11 Thread Thierry Carrez
TC elections are underway and will remain open for you to cast your vote
until at least 23:59 UTC on Thursday, October 17.

If you are a Foundation individual member and had a commit in one of the
official OpenStack program projects over the Grizzly-Havana timeframe
(from 2012-09-27 to 2013-09-26, 23:59 PST) then you are eligible to
vote. You should find an email with a link to the Condorcet page where you can
cast your vote, in the inbox of the email address gerrit knows about.

If you didn't get a voting email although you authored a commit in an
official OpenStack project in the designated timeframe:
* check trash/spambox of your email, in case it went in there
* wait a bit and check again, in case your email server is a bit slow
* find the sha of at least one commit you authored and email me (or the
other election official, ante...@anteaya.info). If we can confirm that
you are entitled to vote, we will add you to the voters list for this
election.

Our democratic process is important to the health of OpenStack, please
exercise your right to vote.

More information on this election, as well as candidate
statements/platforms can be found on this page:

https://wiki.openstack.org/wiki/TC_Elections_Fall_2013

Happy voting,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Need suggestions and pointers to start contributing for development :

2013-10-11 Thread Mayank Mittal
Hi Teams,

Please suggest and guide me on getting started with contributing to development. About
me - I have been working on L2/L3 protocol, SNMP, and NMS development and am ready
to contribute to openstack as a full-timer.

PS: My interest lies in LB and MPLS. Any pointers to the respective teams will
help a lot.

Thanks,
Mayank
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS Agent and VxLan UDP Ports

2013-10-11 Thread Kyle Mestery (kmestery)
On Oct 8, 2013, at 4:01 AM, P Balaji-B37839 b37...@freescale.com wrote:
 Hi,
 
 The current OVS agent creates the tunnel with dst_port set to the port configured in the 
 INI file on the compute node. If all the compute nodes on the VXLAN network are 
 configured for the DEFAULT port, it is fine.
 
 When any of the compute nodes is configured with a CUSTOM UDP port as the VXLAN UDP 
 port, then how will the tunnel be established with the remote IP?
 
 It is observed that the fan-out RPC message does not have the destination 
 port information.
 
Balaji, is this with the ML2 or OVS plugin?

 Regards,
 Balaji.P 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Sean Dague

On 10/10/2013 08:43 PM, Tim Smith wrote:
snip

Again, I don't have any vested interest in this discussion, except that
I believe the concept of reviewer karma to be counter to both software
quality and openness. In this particular case it would seem that the
simplest solution to this problem would be to give one of the hyper-v
team members core reviewer status, but perhaps there are consequences to
that that elude me.


There are very deep consequences to that. The core team model, where you 
have 15 - 20 reviewers, but it only takes 2 to land code, only works 
when the core teams share a culture. This means they know, or are 
willing to learn, code outside their comfort zone. Will they catch all 
the bugs in that? nope. But code blindness hits everyone, and there are 
real implications for the overall quality and maintainability of a 
project as complicated as Nova if everyone only stays in their 
comfortable corner.


Also, from my experience in Nova, code contributions written by people 
that aren't regularly reviewing outside of their corner of the world are 
demonstrably lower quality than those who are. Reviewing code outside 
your specific area is also educational, gets you familiar with norms and 
idioms beyond what simple style checking handles, and makes you a better 
developer.


We need to all be caring about the whole. That culture is what makes 
OpenStack long term sustainable, and there is a reason that it is 
behavior that's rewarded with more folks looking at your proposed 
patches. When people only care about their corner world, and don't put 
in hours on keeping things whole, they balkanize and fragment.


Review bandwidth, and people working on core issues, are our most 
constrained resources. If teams feel they don't need to contribute 
there, because it doesn't directly affect their code, we end up with 
this - http://en.wikipedia.org/wiki/Tragedy_of_the_commons


So it's really crazy to call OpenStack less open by having a culture 
that encourages people to actually work and help on the common parts. 
It's good for the project, as it keeps us whole; it's good for everyone 
working on the project, because they learn about more parts of 
OpenStack, and how their part fits in with the overall system; and it 
makes everyone better developers from learning from each other, on both 
sides of the review line.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-11 Thread Sylvain Bauza

Long-story short, sounds like we do have the same concerns here in Climate.

I'll be present at the Summit, any chance to do an unconference meeting 
in between all parties ?


Thanks,
-Sylvain

On 11/10/2013 08:25, Mike Spreitzer wrote:
Regarding Alex's question of which component does holistic 
infrastructure scheduling, I hesitate to simply answer heat.  Heat 
is about orchestration, and infrastructure scheduling is another 
matter.  I have attempted to draw pictures to sort this out, see 
https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U and 
https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-GQQ1bRVgBpJdstpu0lH_TONw6g . 
 In those you will see that I identify holistic infrastructure 
scheduling as separate functionality from infrastructure orchestration 
(the main job of today's heat engine) and also separate from software 
orchestration concerns.  However, I also see a close relationship 
between holistic infrastructure scheduling and heat, as should be 
evident in those pictures too.


Alex made a remark about the needed inputs, and I agree but would like 
to expand a little on the topic.  One thing any scheduler needs is 
knowledge of the amount, structure, and capacity of the hosting 
thingies (I wish I could say resources, but that would be confusing) 
onto which the workload is to be scheduled.  Scheduling decisions are 
made against available capacity.  I think the most practical way to 
determine available capacity is to separately track raw capacity and 
current (plus already planned!) allocations from that capacity, 
finally subtracting the latter from the former.


In Nova, for example, sensing raw capacity is handled by the various 
nova-compute agents reporting that information.  I think a holistic 
infrastructure scheduler should get that information from the various 
individual services (Nova, Cinder, etc) that it is concerned with 
(presumably they have it anyway).


A holistic infrastructure scheduler can keep track of the allocations 
it has planned (regardless of whether they have been executed yet). 
 However, there may also be allocations that did not originate in the 
holistic infrastructure scheduler.  The individual underlying services 
should be able to report (to the holistic infrastructure scheduler, 
even if lowly users are not so authorized) all the allocations 
currently in effect.  An accurate union of the current and planned 
allocations is what we want to subtract from raw capacity to get 
available capacity.


If there is a long delay between planning and executing an allocation, 
there can be nasty surprises from competitors --- if there are any 
competitors.  Actually, there can be nasty surprises anyway.  Any 
scheduler should be prepared for nasty surprises, and react by some 
sensible retrying.  If nasty surprises are rare, we are pretty much 
done.  If nasty surprises due to the presence of competing managers 
are common, we may be able to combat the problem by changing the long 
delay to a short one --- by moving the allocation execution earlier 
into a stage that is only about locking in allocations, leaving all 
the other work involved in creating virtual resources to later 
(perhaps Climate will be good for this).  If the delay between 
planning and executing an allocation is short and there are many nasty 
surprises due to competing managers, then you have too much 
competition between managers --- don't do that.


Debo wants a simpler nova-centric story.  OK, how about the following. 
 This is for the first step in the roadmap, where scheduling decisions 
are still made independently for each VM instance.  For the 
client/service interface, I think we can do this with a simple clean 
two-phase interface when traditional software orchestration is in 
play, a one-phase interface when slick new software orchestration is 
used.  Let me outline the two-phase flow.  We extend the Nova API with 
CRUD operations on VRTs (top-level groups).  For example, the CREATE 
operation takes a definition of a top-level group and all its nested 
groups, definitions (excepting stuff like userdata) of all the 
resources (only VM instances, for now) contained in those groups, all 
the relationships among those groups/resources, and all the 
applications of policy to those groups, resources, and relationships. 
 This is a rest-style interface; the CREATE operation takes a 
definition of the thing (a top-level group and all that it contains) 
being created; the UPDATE operation takes a revised definition of the 
whole thing.  Nova records the presented information; the familiar 
stuff is stored essentially as it is today (but marked as being in 
some new sort of tentative state), and the grouping, relationship, and 
policy stuff is stored according to a model like the one Debo/Yathi 
wrote.  The CREATE operation returns a UUID for the newly created 
top-level group.  The invocation of the top-level group CRUD is a 
single operation and it is the 

Re: [openstack-dev] Newbie python novaclient question

2013-10-11 Thread Alex
Is that actually doing something, or just parsing the sub-commands?

I think after args is initialized it is probably the args.func call that 
actually does something. For example, the following line in main:

args.func(self.cs, args) 

What do you think the .func method is?
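
My guess is that it comes from the standard argparse pattern: get_subcommand_parser() 
registers each do_* function from v1_1/shell.py as the default 'func' of its 
sub-parser, so args.func ends up pointing at that handler. A rough sketch of the 
idea (simplified and illustrative, not the actual novaclient code):

    import argparse

    def do_list(cs, args):
        # stand-in for a do_* handler from v1_1/shell.py
        print("listing servers")

    parser = argparse.ArgumentParser(prog='nova')
    subparsers = parser.add_subparsers(metavar='<subcommand>')

    list_parser = subparsers.add_parser('list', help='List servers.')
    list_parser.set_defaults(func=do_list)      # binds the handler to the subcommand

    args = parser.parse_args(['list'])
    args.func(None, args)                       # same shape as args.func(self.cs, args)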

Regards
Al

On Oct 11, 2013, at 3:07 AM, Noorul Islam K M noo...@noorul.com wrote:

 Alex la6...@gmail.com writes:
 
 Yes , this method seems to look for the corresponding action but still 
 doesn't seem to be the one actually calling them.
 
 
 Are you looking for this, which is inside the main() method?
 
 args = subcommand_parser.parse_args(argv)
 
 Thanks and Regards
 Noorul
 
 Regards
 Al
 
 
 
 On Oct 10, 2013, at 11:07 PM, Noorul Islam K M noo...@noorul.com wrote:
 
 Alex la6...@gmail.com writes:
 
 Thank you, Noorul. I looked at the review. My question is: in
 OpenStackComputeShell.main, which line calls the v1_1/shell.py functions?
 
 
 I would look at get_subcommand_parser() method.
 
 Thanks and Regards
 Noorul
 
 
 
 On Oct 10, 2013, at 9:03 PM, Noorul Islam K M noo...@noorul.com wrote:
 
 A L la6...@gmail.com writes:
 
 Dear Openstack Dev Gurus,
 
 I am trying to understand the python-novaclient code. Can someone please
 point me to where in the OpenStackComputeShell class in shell.py the
 actual function or class related to an argument gets called?
 
 
 This review [1] is something I submitted and it adds a sub command. Maybe
 this will give you some clue.
 
 [1] https://review.openstack.org/#/c/40181/
 
 Thanks and Regards
 Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Havana RC1 available in the Ubuntu Cloud Archive for 12.04

2013-10-11 Thread Tom Fifield

Thanks James!!

On 11/10/13 23:47, James Page wrote:


Hi Folks

I've just finishing promoting all of the Havana RC1 packages and
associated dependencies to the Ubuntu Cloud Archive for Ubuntu 12.04 LTS.

For details of how to use the Ubuntu Cloud Archive for Havana, please
refer to:

   https://wiki.ubuntu.com/ServerTeam/CloudArchive

The latest packages are all now in the updates pocket (they have been
kicking around in proposed for a few days now - we were just waiting
on Swift so that we could complete final smoke testing).

You can track which versions of what are where here:


http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/havana_versions.html

Please use the 'ubuntu-bug' tool to report bugs back to Launchpad, for
example:

   ubuntu-bug nova-compute

This will log the bug against the right project in launchpad and
collect some basic information about which versions of packages you
are using.

Enjoy

James

--
James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Alessandro Pilotti

On Oct 11, 2013, at 14:15 , Sean Dague s...@dague.net
 wrote:

 On 10/10/2013 08:43 PM, Tim Smith wrote:
 snip
 Again, I don't have any vested interest in this discussion, except that
 I believe the concept of reviewer karma to be counter to both software
 quality and openness. In this particular case it would seem that the
 simplest solution to this problem would be to give one of the hyper-v
 team members core reviewer status, but perhaps there are consequences to
 that that elude me.
 
 There are very deep consequences to that. The core team model, where you have 
 15 - 20 reviewers, but it only takes 2 to land code, only works when the core 
 teams share a culture. This means they know, or are willing to learn, code 
 outside their comfort zone. Will they catch all the bugs in that? nope. But 
 code blindness hits everyone, and there are real implications for the overall 
 quality and maintainability of a project as complicated as Nova if everyone 
 only stays in their comfortable corner.
 
 Also, from my experience in Nova, code contributions written by people that 
 aren't regularly reviewing outside of their corner of the world are 
 demonstrably lower quality than those who are. Reviewing code outside your 
 specific area is also educational, gets you familiar with norms and idioms 
 beyond what simple style checking handles, and makes you a better developer.


There's IMO a practical contradiction here: most people contribute code and do 
reviews on partitioned areas of OpenStack only. For example, Nova devs rarely 
commit on Neutron, so you can say that for a Nova dev the comfort zone is 
Nova, but by your description, a fair amount of time should be spent 
reviewing and learning all the OpenStack projects' code, unless you want to 
limit the scope of this discussion to Nova, which does not make much sense when 
you work on a whole technology layer like in our case.

On the contrary, as an example, our job as driver/plugin/agent maintainers 
brings us in contact will all the major projects codebases, with the result 
that we are learning a lot from each of them. Beside that, obviously a 
driver/plugin/agent dev normally spends time learning how similar solutions are 
implemented for other technologies already in the tree, which leads to further 
improvement in the code due to the same knowledge sharing that you are 
referring to.

 
 We need to all be caring about the whole. That culture is what makes 
 OpenStack long term sustainable, and there is a reason that it is behavior 
 that's rewarded with more folks looking at your proposed patches. When people 
 only care about their corner world, and don't put in hours on keeping things 
 whole, they balkanize and fragment.
 
 Review bandwidth, and people working on core issues, are our most constrained 
 resources. If teams feel they don't need to contribute there, because it 
 doesn't directly affect their code, we end up with this - 
 http://en.wikipedia.org/wiki/Tragedy_of_the_commons
 

This reminds me of how peer-to-peer sharing technologies work. Why don't we 
introduce some ratios, for example requiring that for each commit a dev does at 
least 2-3 reviews of other people's code? Enforcing it wouldn't be that 
complicated. The negative part is that it might lead to low quality or fake 
reviews, but at least it could be easy to outline in the stats.

One thing is sure: review bandwidth is the obvious bottleneck in today's 
OpenStack status. If we don't find a reasonably quick solution, the more 
OpenStack grows, the more complicated it will become, leading to even worse 
response times in merging bug fixes and limiting the new features that each new 
version can bring, which is IMO the negation of what a vital and dynamic 
project should be.

From what I see on the Linux kernel project, which can be considered as a good 
source of inspiration when it comes to review bandwidth optimization in a 
large project, they have a pyramidal structure in the way in which the git 
repo origins are interconnected. This looks pretty similar to what we are 
proposing: teams work on specific areas with a topic mantainer and somebody 
merges their work at a higher level, with Linus ultimately managing the root 
repo. 

OpenStack is organized differently: there are lots of separate projects (Nova, 
Neutron, Glance, etc) instead of a single one (which is a good thing), but I 
believe that a similar approach can be applied. Specific contributors can be 
nominated core reviewers on specific directories in the tree only, and that 
would immediately scale the core review bandwidth. 

As a practical example for Nova: in our case that would simply include the 
following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv. Other 
projects didn't hit the review bandwidth limits yet as heavily as Nova did, but 
the same concept could be applied everywhere. 

Alessandro




 So it's really crazy to call OpenStack less open by having a culture that 
 encourages people 

Re: [openstack-dev] Need suggestions and pointers to start contributing for development :

2013-10-11 Thread Dolph Mathews
On Friday, October 11, 2013, Mayank Mittal wrote:

 Hi Teams,

 Please suggest and guide me on getting started with contributing to development. About
 me - I have been working on L2/L3 protocol, SNMP, and NMS development and am ready
 to contribute to openstack as a full-timer.

 PS: My interest lies in LB and MPLS. Any pointers to the respective teams
 will help a lot.


Welcome! It sounds like you'd be interested in contributing to neutron:
https://github.com/openstack/neutron

This should get you pointed in the right direction:
https://wiki.openstack.org/wiki/How_To_Contribute




 Thanks,
 Mayank



-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? Oct 11 2013

2013-10-11 Thread Anne Gentle
We met at our regular meeting this week and will meet again next week at
the same time, Tuesday 1300 UTC in #openstack-meeting. [1]

1. In review and merged this past week:

We got the install guide reorg patch this week and have been working on it.
Architectures and assignments are here:
https://etherpad.openstack.org/havanainstall. We discussed it at our weekly
team meeting, and although it was incomplete, merged it so that we can all
continue work.

2. High priority doc work:

Install guide has to be top priority. We need neutron help the most. There
are use cases but people always want more detail and step-by-step
instructions with diagrams. Please review works in progress at
https://review.openstack.org/#/q/status:open+project:openstack/openstack-manuals,n,z and
also submit patches as soon as possible; it gets published next
Thursday.

3. Doc work going on that I know of:

Install guide and release notes. The doc team has made some great strides
in the last six months and I'm so excited that we can release
simultaneously with the code.

4. New incoming doc requests:

Get that release out the door October 17th!

5. Doc tools updates:

We have a doc bug reporting link on each output page now when using 1.11.0.
Andreas Jaeger wrote a nice blog post about the automation, see
http://jaegerandi.blogspot.de/2013/10/improving-openstack-documentation-build.html
.

6. Other doc news:

The O'Reilly contract is signed for the OpenStack Operations Guide. We'll
have an early release available November 5th (electronic format) and the
serious development editing will take 4-6 months after that.

Nick Chase met with Alice King to discuss the documentation licensing.
Alice King will write a draft memo to share with the Foundation staff, Mark
Radcliffe the Foundation lawyer, the Board, the TC, the Legal Affairs
Committee, and community members who have an interest in licensing. For
those of you who are armchair law afficionados, the Bylaws state that we
have to use the licensing for documentation that the Board adopts (Article
VII, Section 7.2). The license approved by the Board is CC BY for
standalone docs (Apache 2.0 still applies for docs in line with the
software code).

1. https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change I3e080c30: Fix resource length in project_user_quotas table for Havana?

2013-10-11 Thread Russell Bryant
On 10/10/2013 11:15 PM, Joshua Hesketh wrote:
 Hi there,
 
 I've been reviewing this change which is currently proposed for master
 and I think it needs to be considered for the next Havana RC.
 
 Change I3e080c30: Fix resource length in project_user_quotas table
 https://review.openstack.org/#/c/47299/
 
 I'm new to the process around these kinds of patches but I imagine that
 we should use one of the placeholder migrations in the havana branch and
 cherry-pick it back into master?

The fix looks good, thanks!

I agree that this is good for Havana.  I'll see if I can slip it into
havana-rc2.

The process is generally merging the fix to master and then backporting
it.  In this case the backport can't be the same.  Instead of using a
new migration number, we'll use one of the migration numbers reserved
for havana backports.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change I3e080c30: Fix resource length in project_user_quotas table for Havana?

2013-10-11 Thread Russell Bryant
On 10/11/2013 05:04 AM, Thierry Carrez wrote:
 Joshua Hesketh wrote:
 I've been reviewing this change which is currently proposed for master
 and I think it needs to be considered for the next Havana RC.

 Change I3e080c30: Fix resource length in project_user_quotas table
 https://review.openstack.org/#/c/47299/
 
 The bug was properly tagged and Russell and I should look into it soon.
 I'm not a big fan of changing DB schema less than one week before final
 release though.
 

Yeah, we could consider it for a stable/havana backport instead of
havana-rc2.  It's a nice to have IMO.

It's a trivial error in a migration that affects a new feature in
havana.  So, at least it's not a regression.  It also leaves most of the
feature working fine (setting most user quotas should work just fine).

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Russell Bryant
On 10/11/2013 09:02 AM, Alessandro Pilotti wrote:
 OpenStack is organized differently: there are lots of separate projects 
 (Nova, Neutron, Glance, etc) instead of a single one (which is a good thing), 
 but I believe that a similar approach can be applied. Specific contributors 
 can be nominated core reviewers on specific directories in the tree only, 
 and that would immediately scale the core review bandwidth. 
 
 As a practical example for Nova: in our case that would simply include the 
 following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv. Other 
 projects didn't hit the review bandwidth limits yet as heavily as Nova did, 
 but the same concept could be applied everywhere. 

If maintainers of a particular driver would prefer this sort of
autonomy, I'd rather look at creating new repositories.  I'm completely
open to going that route on a per-driver basis.  Thoughts?

For the main tree, I think we already do something like this in
practice.  Core reviewers look for feedback (+1/-1) from experts of that
code and take it heavily into account when doing the review.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-11 Thread Stan Lagun
Hello,

Thanks Angus, Clint, I've got your design.

It seems that Murano can be built on top of that. With service metadata
knowledge Murano can generate HOT templates with set of interdependent
configs.
Here is what will be needed:

1. Ability to implement support for custom software configuration tool
(type: OS::SoftwareConfig::MuranoAgent)
2. Ability to provide arbitrary input values for the config
3. Ability to return arbitrary (JSON-compatible) data structure from config
application and use attributes of that structure as an input for other
configs
4. Ability to provide config body that is an input to Murano Agent of
arbitrary size
5. Work well with large graph of configs with a lot of dependencies.
Independent configs on different VMs should be applied in parallel.

Does it conform to your plans?




On Fri, Oct 11, 2013 at 3:47 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Angus Salkeld's message of 2013-10-10 15:27:48 -0700:
  On 10/10/13 11:59 +0400, Stan Lagun wrote:
  This rises number of questions:
  
  1. What about conditional dependencies? Like config3 depends on config1
 AND
  config2 OR config3.
 
  We have the AND, but not an OR. To depend on two resources you just
  have 2 references to the 2 resources.
 

 AND is concrete. OR is not. I don't actually think it is useful for what
 Heat is intended to do. This is not packaging, this is deploying.
 For deploying, Heat needs to know _what to do_, not what is possible.

  
  2. How do I pass values between configs? For example config1 requires
 value
  from user input and config2 needs an output value obtained from applying
  config1
 
  {Fn::GetAtt: [config2, the_name_of_the_attribute]}
 

 This is a little bit misleading. Heat does not have any good ways to
 get a value obtained from applying config1. The data attribute of
 the WaitCondition is the only way I know, and it is really unwieldy,
 as it can basically only dump a json string of all of the things each
 signaler has fed back in.

 That said, I wonder how many of the value obtained from applying config1
 would be satisfied by the recently proposed random string generation
 resource. Most of the time what people want to communicate back is just
 auth details. If we push auth details in from Heat to both sides, that
 alleviates all of my current use cases for this type of feature.

  
  3. How would you do error handling? For example config3 on server3
 requires
  config1 to be applied on server1 and config2 on server2. Suppose that
 there
  was an error while applying config2 (and config1 succeeded). How do I
  specify reaction for that? Maybe I need then to try to apply config4 to
  server2 and continue or maybe just roll everything back
 
  We currently have no on_error but it is not out of scope. The
  current action is either to rollback the stack or leave it in the
  failed state (depending on what you choose).
 

 Right, I can definitely see more actions being added as we identify the
 commonly desired options.

  
  4. How these config dependencies play with nested stacks and resources
 like
  LoadBalancer that create such stacks? How do I specify that myConfig
  depends on HA proxy being configured if that config was declared in
 nested
  stack that is generated by resource's Python code and is not declared
 in my
  HOT template?
 
  It is normally based on the actual data/variable that you are
  dependant on.
  loadbalancer: depends on autoscaling instance_list
  (actually in the loadbalancer config would be a GetAtt: [scalegroup,
 InstanceList])
 
  Then if you want to depend on that config you could depend on an
  attribute of that resource that changes on reconfigure.
 
  config1:
 type: OS::SoftwareConfig::Ssh
 properties:
   script: {GetAtt: [scalegroup, InstanceList]}
   hosted_on: loadbalancer
   ...
 
  config2:
 type: OS::SoftwareConfig::Ssh
 properties:
   script: {GetAtt: [config1, ConfigAppliedCount]}
   hosted_on: somewhere_else
   ...
 
  I am sure we could come up with some better syntax for this. But
  the logic seems easily possible to me.
 
  As far as nested stacks go: you just need an output to be useable
  externally - basically design your API.
 
  
  5. The solution is not generic. For example I want to write HOT template
  for my custom load-balancer and a scalable web-servers group. Load
 balancer
  config depends on all configs of web-servers. But web-servers are
 created
  dynamically (autoscaling). That means dependency graph needs to be also
  dynamically modified. But if you explicitly list config names instead of
  something like depends on all configs of web-farm X you have no way to
  describe such rule. In other words we need generic dependency, not just
  dependency on particular config
 
  Why won't just depending on the scaling group be enough? if it needs
  to be updated it will update all within the group before progressing
  to the dependants.
 

 In the example, loadbalancer doesn't 

Re: [openstack-dev] Need suggestions and pointers to start contributing for development :

2013-10-11 Thread Mark McClain

On Oct 11, 2013, at 9:14 AM, Dolph Mathews dolph.math...@gmail.com wrote:
 
 On Friday, October 11, 2013, Mayank Mittal wrote:
 Hi Teams,
 
 Please suggest and guide me on getting started with contributing to development. About me 
 - I have been working on L2/L3 protocol, SNMP, and NMS development and am ready to 
 contribute to openstack as a full-timer. 

 PS: My interest lies in LB and MPLS. Any pointers to the respective teams will 
 help a lot.
 
 Welcome! It sounds like you'd be interested in contributing to neutron: 
 https://github.com/openstack/neutron
 
 This should get you pointed in the right direction: 
 https://wiki.openstack.org/wiki/How_To_Contribute
 
  

Mayank-

Dolph is correct that Neutron matches up with your interests.  Here's bit more 
specific information on Neutron development: 
https://wiki.openstack.org/wiki/NeutronDevelopment

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] dd performance for wipe in cinder

2013-10-11 Thread Matt Riedemann
Have you looked at the volume_clear and volume_clear_size options in 
cinder.conf?

https://github.com/openstack/cinder/blob/2013.2.rc1/etc/cinder/cinder.conf.sample#L1073
 


The default is to zero out the volume.  You could try 'none' to see if 
that helps with performance.
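
For example, something along these lines in cinder.conf (illustrative values; check 
the sample config linked above for the exact semantics in your release):

    [DEFAULT]
    # how volumes are wiped on delete: 'zero' (the default), 'shred' or 'none'
    volume_clear = none
    # wipe only the first N MiB of each volume; 0 means wipe the whole volume
    volume_clear_size = 0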



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   cosmos cosmos cosmos0...@gmail.com
To: openstack-dev@lists.openstack.org, 
Date:   10/11/2013 04:26 AM
Subject:[openstack-dev]  dd performance for wipe in cinder



Hello.
My name is Rucia, from Samsung SDS.

Now I am having trouble with cinder volume deletion.
I am working on supporting big data storage on LVM.

But it takes too much time to delete a cinder LVM volume because of dd.
The cinder volume is 200GB, holding hadoop master data.
When I delete the cinder volume using 'dd if=/dev/zero of=$cinder-volume 
count=100 bs=1M' it takes about 30 minutes.

Is there a better and quicker way to delete it?

Cheers. 
Rucia.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Alessandro Pilotti


On Oct 11, 2013, at 17:17 , Russell Bryant 
rbry...@redhat.commailto:rbry...@redhat.com
 wrote:

On 10/11/2013 09:02 AM, Alessandro Pilotti wrote:
OpenStack is organized differently: there are lots of separate projects (Nova, 
Neutron, Glance, etc) instead of a single one (which is a good thing), but I 
believe that a similar approach can be applied. Specific contributors can be 
nominated core reviewers on specific directories in the tree only, and that 
would immediately scale the core review bandwidth.

As a practical example for Nova: in our case that would simply include the 
following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv. Other 
projects didn't hit the review bandwidth limits yet as heavily as Nova did, but 
the same concept could be applied everywhere.

If maintainers of a particular driver would prefer this sort of
autonomy, I'd rather look at creating new repositories.  I'm completely
open to going that route on a per-driver basis.  Thoughts?

Well, as long as it is an official project this would make definitely sense, at 
least for Hyper-V.
Stability of the driver's interface has never been a particular issue 
preventing this from happening, IMO.
We should think about how to handle the testing, considering that we are 
getting ready with the CI gate.

For the main tree, I think we already do something like this in
practice.  Core reviewers look for feedback (+1/-1) from experts of that
code and take it heavily into account when doing the review.


There's only one small issue with the current approach.

Current reviews require:

+1 de facto driver X maintainer(s)
+2  core reviewer
+2A  core reviewer

While with the proposed scenario we'd get to a way faster route:

+2  driver X maintainer
+2A another driver X maintainer or a core reviewer

This would make a big difference in terms of review time.

Thanks,

Alessandro


--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] dd performance for wipe in cinder

2013-10-11 Thread Chris Friesen

On 10/11/2013 03:20 AM, cosmos cosmos wrote:

Hello.
My name is Rucia for Samsung SDS.

Now I am in trouble in cinder volume deleting.
I am developing for supporting big data storage in lvm

But it takes too much time for deleting of cinder lvm volume because of dd.
Cinder volume is 200GB for supporting hadoop master data.
When i delete cinder volume in using 'dd if=/dev/zero of $cinder-volume
count=100 bs=1M' it takes about 30 minutes.

Is there the better and quickly way for deleting?


Is there a particular reason why you're overwriting the entire volume 
with zeros?


A simple way to delete the contents of a filesystem would be rm -rf 
/path/to/directory


Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] dd performance for wipe in cinder

2013-10-11 Thread John Griffith
On Fri, Oct 11, 2013 at 8:41 AM, Matt Riedemann mrie...@us.ibm.com wrote:

 Have you looked at the volume_clear and volume_clear_size options in
 cinder.conf?

 *
 https://github.com/openstack/cinder/blob/2013.2.rc1/etc/cinder/cinder.conf.sample#L1073
 *https://github.com/openstack/cinder/blob/2013.2.rc1/etc/cinder/cinder.conf.sample#L1073

 The default is to zero out the volume.  You could try 'none' to see if
 that helps with performance.



 Thanks,

 *MATT RIEDEMANN*
 Advisory Software Engineer
 Cloud Solutions and OpenStack Development
 --
  *Phone:* 1-507-253-7622 | *Mobile:* 1-507-990-1889*
 E-mail:* *mrie...@us.ibm.com* mrie...@us.ibm.com
 [image: IBM]

 3605 Hwy 52 N
 Rochester, MN 55901-1407
 United States





 From:cosmos cosmos cosmos0...@gmail.com
 To:openstack-dev@lists.openstack.org,
 Date:10/11/2013 04:26 AM
 Subject:[openstack-dev]  dd performance for wipe in cinder
 --



 Hello.
 My name is Rucia for Samsung SDS.

 Now I am in trouble in cinder volume deleting.
 I am developing for supporting big data storage in lvm

 But it takes too much time for deleting of cinder lvm volume because of dd.
 Cinder volume is 200GB for supporting hadoop master data.
 When i delete cinder volume in using 'dd if=/dev/zero of $cinder-volume
 count=100 bs=1M' it takes about 30 minutes.

 Is there the better and quickly way for deleting?

 Cheers.
 Rucia.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 As Matt pointed out there's an option to turn off secure-delete
altogether.  The reason for the volume_clear setting (aka secure delete) is
that since we're allocating volumes via LVM from a shared VG there is the
possibility that a user had a volume with sensitive data and
deleted/removed the logical volume they were using.  If there was no
encryption or if no secure delete operation were performed it is possible
that another tenant when creating a new volume from the Volume Group could
be allocated some of the blocks that the previous volume utilized and
potentially inspect/read those blocks and obtain some of the other users'
data.

To be honest the options provided won't likely make this operation as
fast as you'd like, especially when dealing with 200GB volumes.
 Depending on your environment you may want to consider using encryption or,
if acceptable, using volume_clear=none.

John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Russell Bryant
On 10/11/2013 10:41 AM, Alessandro Pilotti wrote:
 
 
 On Oct 11, 2013, at 17:17 , Russell Bryant rbry...@redhat.com
 mailto:rbry...@redhat.com
  wrote:
 
 On 10/11/2013 09:02 AM, Alessandro Pilotti wrote:
 OpenStack is organized differently: there are lots of separate
 projects (Nova, Neutrom, Glance, etc) instead of a single one (which
 is a good thing), but I believe that a similar approach can be
 applied. Specific contributors can be nominated core rewievers on
 specific directories in the tree only and that would scale
 immediately the core review bandwidth.

 As a practical example for Nova: in our case that would simply
 include the following subtrees: nova/virt/hyperv and
 nova/tests/virt/hyperv. Other projects didn't hit the review
 bandwidth limits yet as heavily as Nova did, but the same concept
 could be applied everywhere.

 If maintainers of a particular driver would prefer this sort of
 autonomy, I'd rather look at creating new repositories.  I'm completely
 open to going that route on a per-driver basis.  Thoughts?
 
 Well, as long as it is an official project this would make definitely
 sense, at least for Hyper-V.

What I envision here would be another repository/project under the
OpenStack Compute program.  You can sort of look at it as similar to
python-novaclient, even though that project uses the same review team
right now.

So, that means it would also be a separate release deliverable.  It
wouldn't be integrated into the main nova release.  They could be
released at the same time, though.

We could either have a single repo:

openstack/nova-extra-drivers

or a repo per driver that wants to split:

openstack/nova-driver-hyperv

The latter is a bit more to keep track of, but might make the most sense
so that we can have a review team per driver.

 Stability of the driver's interface has never been a particular issue to
 prevent this to happen IMO.

Note that I would actually *not* want to necessarily guarantee a stable
API here for master.  We should be able to mitigate sync issues with CI.

 We should think about how to handle the testing, considering that we are
 getting ready with the CI gate.

Hopefully the testing isn't too much different.  It's just grabbing the
bits from another repo.

Also note that I have a session for the summit that is intended to talk
about all of this, as well:

http://summit.openstack.org/cfp/details/4

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] memcache connections and multiprocess api

2013-10-11 Thread Pitucha, Stanislaw Izaak
Hi all,
I'm seeing a lot of memcache connections from my api hosts to memcached and it 
looks like the number is way too high for what I expect.
The way I understand the code at the moment, there's going to be a specified 
number of workers (20 in my case) and each will have access to a greenthread 
pool (which at the moment is set to the default size of 1000). This seems a bit 
unreasonable, I think. I'm not sure what the model for sql connections is, but 
at least memcache will get a connection per greenthread... and it will 
almost never disconnect in practice.

This results in hundreds of idle connections to memcached at the moment, which 
quickly hits any reasonable open-files limit on the memcached side.
Has anyone seen this behavior before and tried to tweak the pool_size at all? 
I'd expect that 1000 greenthreads in one process's pool is too much for any 
typical use case, apart from trying not to miss bursts of connections (but 
those will have to wait for the db and rpc pools anyway, and there's a 
128-connection backlog for that).

So... has anyone looked at fixing this in context of memcache connections? 
Lower wsgi pool_size? Timing out wsgi greenthreads? Pooling memcache 
connections?
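
For the last option, a minimal sketch of what pooling could look like, assuming
the python-memcached client and eventlet's pools module (illustrative only, not
how any of the services do it today):

    # Sketch: share a bounded number of memcache connections among
    # greenthreads instead of one connection per greenthread.
    from eventlet import pools
    import memcache


    class MemcacheClientPool(pools.Pool):
        def __init__(self, servers, max_size=10):
            self._servers = servers
            super(MemcacheClientPool, self).__init__(max_size=max_size)

        def create(self):
            # Called by the pool only when it needs a new connection.
            return memcache.Client(self._servers)


    _pool = MemcacheClientPool(['127.0.0.1:11211'], max_size=10)


    def cache_get(key):
        client = _pool.get()
        try:
            return client.get(key)
        finally:
            _pool.put(client)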

Regards,
Stanisław Pitucha
Cloud Services 
Hewlett Packard


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Oslo meeting this week

2013-10-11 Thread Doug Hellmann
The notes from the meeting and a link to the full logs can be found at
http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-10-11-14.00.html


On Mon, Oct 7, 2013 at 3:24 PM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:

 The Oslo team will be meeting this week to discuss delayed message
 translation.

 Please refer to https://wiki.openstack.org/wiki/Meetings/Oslo for a few
 links relevant to the conversation.

 Date: 11 Oct 2013
 Time: 1400 UTC
 Location: #openstack-meeting on freenode

 See you there!
 Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Bob Ball
 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: 11 October 2013 15:18
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Hyper-V] Havana status
 
  As a practical example for Nova: in our case that would simply include the
 following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv.
 
 If maintainers of a particular driver would prefer this sort of
 autonomy, I'd rather look at creating new repositories.  I'm completely
 open to going that route on a per-driver basis.  Thoughts?

I think that all drivers that are officially supported must be treated in the 
same way.

If we are going to split out drivers into a separate but still official 
repository then we should do so for all drivers.  This would allow Nova core 
developers to focus on the architectural side rather than how each individual 
driver implements the API that is presented.

Of course, with the current system it is much easier for a Nova core to 
identify and request a refactor or generalisation of code written in one or 
multiple drivers so they work for all of the drivers - we've had a few of those 
with XenAPI where code we have written has been pushed up into Nova core rather 
than the XenAPI tree.

Perhaps one approach would be to re-use the incubation approach we have; if 
drivers want to have the fast-development cycles uncoupled from core reviewers 
then they can be moved into an incubation project.  When there is a suitable 
level of integration (and automated testing to maintain it of course) then they 
can graduate.  I imagine at that point there will be more development of new 
features which affect Nova in general (to expose each hypervisor's strengths), 
so there would be fewer cases of them being restricted just to the virt/* tree.

Bob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Alessandro Pilotti


On Oct 11, 2013, at 18:02 , Russell Bryant rbry...@redhat.com
 wrote:

 On 10/11/2013 10:41 AM, Alessandro Pilotti wrote:
 
 
 On Oct 11, 2013, at 17:17 , Russell Bryant rbry...@redhat.com
 mailto:rbry...@redhat.com
 wrote:
 
 On 10/11/2013 09:02 AM, Alessandro Pilotti wrote:
 OpenStack is organized differently: there are lots of separate
 projects (Nova, Neutron, Glance, etc) instead of a single one (which
 is a good thing), but I believe that a similar approach can be
 applied. Specific contributors can be nominated core reviewers on
 specific directories in the tree only and that would scale
 immediately the core review bandwidth.
 
 As a practical example for Nova: in our case that would simply
 include the following subtrees: nova/virt/hyperv and
 nova/tests/virt/hyperv. Other projects didn't hit the review
 bandwidth limits yet as heavily as Nova did, but the same concept
 could be applied everywhere.
 
 If maintainers of a particular driver would prefer this sort of
 autonomy, I'd rather look at creating new repositories.  I'm completely
 open to going that route on a per-driver basis.  Thoughts?
 
 Well, as long as it is an official project this would make definitely
 sense, at least for Hyper-V.
 
 What I envision here would be another repository/project under the
 OpenStack Compute program.  You can sort of look at it as similar to
 python-novaclient, even though that project uses the same review team
 right now.
 
 So, that means it would also be a separate release deliverable.  It
 wouldn't be integrated into the main nova release.  They could be
 released at the same time, though.
 
 We could either have a single repo:
 
openstack/nova-extra-drivers
 
 or a repo per driver that wants to split:
 
openstack/nova-driver-hyperv
 

+1 for openstack/nova-driver-hyperv

That would be perfect. Fast bug fixes, independent reviewers and autonomous 
blueprint management.

Our users would cry for joy at such a solution. :-)


 The latter is a bit more to keep track of, but might make the most sense
 so that we can have a review team per driver.
 
 Stability of the driver's interface has never been a particular issue to
 prevent this to happen IMO.
 
 Note that I would actually *not* want to necessarily guarantee a stable
 API here for master.  We should be able to mitigate sync issues with CI.
 
 We should think about how to handle the testing, considering that we are
 getting ready with the CI gate.
 
 Hopefully the testing isn't too much different.  It's just grabbing the
 bits from another repo.
 
 Also note that I have a session for the summit that is intended to talk
 about all of this, as well:
 
http://summit.openstack.org/cfp/details/4

Sure, looking forward to meet you there!

 
 -- 
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Dan Smith
 We could either have a single repo:
 
 openstack/nova-extra-drivers

This would be my preference for sure, just from the standpoint of
additional release complexity otherwise. I know it might complicate how
the core team works, but presumably we could get away with just having
driver maintainers with core abilities on the whole project, especially
given that most of them care only about their own driver.

 Note that I would actually *not* want to necessarily guarantee a stable
 API here for master.  We should be able to mitigate sync issues with CI.

Agreed, a stable virt driver API is not feasible or healthy at this
point, IMHO. However, it doesn't change that much as it is. I know I'll
be making changes to virt drivers in the coming cycle due to objects and
I have no problem submitting the corresponding changes to the
nova-extra-drivers tree for those drivers alongside any that go for the
main one.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Dan Smith
 I think that all drivers that are officially supported must be
 treated in the same way.

Well, we already have multiple classes of support due to the various
states of testing that the drivers have.

 If we are going to split out drivers into a separate but still
 official repository then we should do so for all drivers.  This would
 allow Nova core developers to focus on the architectural side rather
 than how each individual driver implements the API that is
 presented.

I really don't want to see KVM and XenAPI pulled out of the main tree,
FWIW. I think we need a critical mass of the (currently) most used
drivers there to be the reference platform. Going the route of kicking
everything out of tree means the virt driver API necessarily needs to
be a stable thing that the others can depend on, and I definitely don't
want to see that happen at this point.

The other thing is, this is driven mostly by the desire of some driver
maintainers to be able to innovate in their driver without the
restrictions of being in the main tree. That's not to say that once they
reach a level of completeness that they might want back into the main
tree as other new drivers continue to be cultivated in the faster-moving
extra drivers tree.

 Perhaps one approach would be to re-use the incubation approach we
 have; if drivers want to have the fast-development cycles uncoupled
 from core reviewers then they can be moved into an incubation
 project.  When there is a suitable level of integration (and
 automated testing to maintain it of course) then they can graduate.

Yeah, I think this makes sense. New drivers from here on out start in
the extra drivers tree and graduate to the main tree. It sounds like
Hyper-V will move back there to achieve a fast pace of development for a
while, which I think is fine. It will bring with it some additional
review overhead when the time comes to bring it back into the main nova
tree, but hopefully we can plan for that and make it happen swiftly.

Also, we have a looming deadline of required CI integration for the
drivers, so having an extra drivers tree gives us a very good landing
spot and answer to the question of what if we can't satisfy the CI
requirements?

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Newbie python novaclient question

2013-10-11 Thread Noorul Islam Kamal Malmiyoda
On Oct 11, 2013 6:04 PM, Alex la6...@gmail.com wrote:

 Is that actually doing something or parsing the sub commands?

 I think after args is initialized it is probably the args.func call that
actually does something. For example, the following line in main():

 args.func(self.cs, args)

 What do you think is the .func method?


https://github.com/openstack/python-novaclient/blob/master/novaclient/shell.py#L473

- Noorul
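
For what it's worth, the dispatch boils down to a standard argparse pattern:
get_subcommand_parser() does the equivalent of binding each do_* function as
a subparser default via set_defaults(func=...), and main() then calls
args.func(self.cs, args). A standalone sketch of that pattern (not the actual
novaclient code):

    # Minimal sketch of the argparse sub-command dispatch pattern.
    import argparse
    import sys


    def do_list(cs, args):
        # In novaclient this would call something like cs.servers.list().
        print('listing servers...')


    def do_show(cs, args):
        print('showing server %s...' % args.server)


    def main(argv):
        parser = argparse.ArgumentParser(prog='demo')
        subparsers = parser.add_subparsers()

        # Equivalent of get_subcommand_parser(): every do_* callable is
        # registered as the 'func' default of its subparser.
        for name, callback in (('list', do_list), ('show', do_show)):
            subparser = subparsers.add_parser(name)
            if name == 'show':
                subparser.add_argument('server')
            subparser.set_defaults(func=callback)

        args = parser.parse_args(argv)
        cs = object()  # stands in for the authenticated client instance
        # This is the line that finally runs the chosen sub-command,
        # mirroring args.func(self.cs, args) in the shell's main().
        args.func(cs, args)


    if __name__ == '__main__':
        main(sys.argv[1:])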


 On Oct 11, 2013, at 3:07 AM, Noorul Islam K M noo...@noorul.com wrote:

  Alex la6...@gmail.com writes:
 
  Yes , this method seems to look for the corresponding action but still
doesn't seem to be the one actually calling them.
 
 
  Are you looking for this which is inside **main** method?
 
  args = subcommand_parser.parse_args(argv)
 
  Thanks and Regards
  Noorul
 
  Regards
  Al
 
 
 
  On Oct 10, 2013, at 11:07 PM, Noorul Islam K M noo...@noorul.com
wrote:
 
  Alex la6...@gmail.com writes:
 
  Thank you Noorul. I looked at the review. My question is that in
openstackcomputeshell.main which line call the v1_1/ shell.py.function?
 
 
  I would look at get_subcommand_parser() method.
 
  Thanks and Regards
  Noorul
 
 
 
  On Oct 10, 2013, at 9:03 PM, Noorul Islam K M noo...@noorul.com
wrote:
 
  A L la6...@gmail.com writes:
 
  Dear Openstack Dev Gurus,
 
  I am trying to understand the python novaclient code. Can someone
please
  point me to where in openstackcomputeshell class in shell.py does
the
  actual function or class related to an argument gets called?
 
 
  This review [1] is something I submitted and it adds a sub command.
May
  be this will give you some clue.
 
  [1] https://review.openstack.org/#/c/40181/
 
  Thanks and Regards
  Noorul
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread John Griffith
On Fri, Oct 11, 2013 at 9:12 AM, Bob Ball bob.b...@citrix.com wrote:

  -Original Message-
  From: Russell Bryant [mailto:rbry...@redhat.com]
  Sent: 11 October 2013 15:18
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Hyper-V] Havana status
 
   As a practical example for Nova: in our case that would simply include
 the
  following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv.
 
  If maintainers of a particular driver would prefer this sort of
  autonomy, I'd rather look at creating new repositories.  I'm completely
  open to going that route on a per-driver basis.  Thoughts?

 I think that all drivers that are officially supported must be treated in
 the same way.

 If we are going to split out drivers into a separate but still official
 repository then we should do so for all drivers.  This would allow Nova
 core developers to focus on the architectural side rather than how each
 individual driver implements the API that is presented.

 Of course, with the current system it is much easier for a Nova core to
 identify and request a refactor or generalisation of code written in one or
 multiple drivers so they work for all of the drivers - we've had a few of
 those with XenAPI where code we have written has been pushed up into Nova
 core rather than the XenAPI tree.

 Perhaps one approach would be to re-use the incubation approach we have;
 if drivers want to have the fast-development cycles uncoupled from core
 reviewers then they can be moved into an incubation project.  When there is
 a suitable level of integration (and automated testing to maintain it of
 course) then they can graduate.  I imagine at that point there will be more
 development of new features which affect Nova in general (to expose each
 hypervisor's strengths), so there would be fewer cases of them being
 restricted just to the virt/* tree.

 Bob

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I've thought about this in the past, but always come back to a couple of
things.

Being a community driven project, if a vendor doesn't want to participate
in the project then why even pretend (ie having their own project/repo,
reviewers etc).  Just post your code up in your own github and let people
that want to use it pull it down.  If it's a vendor project, then that's
fine; have it be a vendor project.

In my opinion pulling out and leaving things up to the vendors as is being
described has significant negative impacts.  Not the least of which is
consistency in behaviors.  On the Cinder side, the core team spends the
bulk of their review time looking at things like consistent behaviors,
missing features or paradigms that are introduced that break other
drivers.  For example, looking at things like: are all the base features
implemented, do they work the same way, are we all using the same
vocabulary, will it work in a multi-backend environment?  In addition,
it's rare that a vendor implements a new feature in their driver that
doesn't impact/touch the core code somewhere.

Having drivers be a part of the core project is very valuable in my
opinion.  It's also very important in my view that the core team for Nova
actually has some idea and notion of what's being done by the drivers that
it's supporting.  Moving everybody further and further into additional
private silos seems like a very bad direction to me, it makes things like
knowledge transfer, documentation and worst of all bug triaging extremely
difficult.

I could go on and on here, but nobody likes to hear anybody go on a rant.
 I would just like to see if there are other alternatives to improving the
situation than fragmenting the projects.

John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Alessandro Pilotti


On Oct 11, 2013, at 18:36 , Dan Smith d...@danplanet.com
 wrote:

 I think that all drivers that are officially supported must be
 treated in the same way.
 
 Well, we already have multiple classes of support due to the various
 states of testing that the drivers have.
 
 If we are going to split out drivers into a separate but still
 official repository then we should do so for all drivers.  This would
 allow Nova core developers to focus on the architectural side rather
 than how each individual driver implements the API that is
 presented.
 
 I really don't want to see KVM and XenAPI pulled out of the main tree,
 FWIW. I think we need a critical mass of the (currently) most used
 drivers there to be the reference platform. Going the route of kicking
 everything out of tree means the virt driver API is necessarily needs to
 be a stable thing that the others can depend on and I definitely don't
 want to see that happen at this point.
 

I see libvirt/KVM treated as the reference driver, so it could make sense to 
leave it in Nova. 

My only request here is that we make sure that new driver features can land 
for other drivers without necessarily having them implemented for libvirt/KVM 
first. 

 The other thing is, this is driven mostly by the desire of some driver
 maintainers to be able to innovate in their driver without the
 restrictions of being in the main tree. That's not to say that once they
 reach a level of completeness that they might want back into the main
 tree as other new drivers continue to be cultivated in the faster-moving
 extra drivers tree.
 
 Perhaps one approach would be to re-use the incubation approach we
 have; if drivers want to have the fast-development cycles uncoupled
 from core reviewers then they can be moved into an incubation
 project.  When there is a suitable level of integration (and
 automated testing to maintain it of course) then they can graduate.
 
 Yeah, I think this makes sense. New drivers from here on out start in
 the extra drivers tree and graduate to the main tree. It sounds like
 Hyper-V will move back there to achieve a fast pace of development for a
 while, which I think is fine. It will bring with it some additional
 review overhead when the time comes to bring it back into the main nova
 tree, but hopefully we can plan for that and make it happen swiftly.
 

I personally don't agree with this option, as it would create an "A class" 
version of the driver, supposedly mature, and a "B class" version, supposedly 
experimental, which would just confuse users.

It's not a matter of stability, as the code is already stable so I really don't 
see a point in the incubator approach. We already have our forks for 
experimental features without needing to complicate things more.
The code that we publish in the OpenStack repos is meant to be production ready.

As Dan was pointing out, merging the code back into Nova would require a review 
at some point in time of a huge patch that would send us straight back into 
review hell. No thanks!

The best option for us is to have a separate project (nova-driver-hyperv would 
be perfect) where we can handle blueprints and commit bug fixes independently, 
with no intention to merge it back into the Nova tree, much as I guess there's 
no reason to merge, say, python-novaclient. 

Any area that would require additional features in Nova (e.g. our notorious RDP 
blueprint :-) ) would anyway go through the Nova review process, while 
blueprints that implement a feature already present in Nova (e.g. 
live-snapshots) can be handled entirely independently.

This approach would save quite a bit of precious review bandwidth for the Nova 
reviewers and give us the required headroom to innovate and fix bugs in a 
timely manner, bringing the best OpenStack experience to our users.


 Also, we have a looming deadline of required CI integration for the
 drivers, so having an extra drivers tree gives us a very good landing
 spot and answer to the question of what if we can't satisfy the CI
 requirements?
 

I agree on this point: purgatory -ahem-, I mean, incubation, for the drivers 
that will not have a CI ready in time for Icehouse.

Alessandro


 --Dan
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Russell Bryant
On 10/11/2013 12:04 PM, John Griffith wrote:
 
 
 
 On Fri, Oct 11, 2013 at 9:12 AM, Bob Ball bob.b...@citrix.com
 mailto:bob.b...@citrix.com wrote:
 
  -Original Message-
  From: Russell Bryant [mailto:rbry...@redhat.com
 mailto:rbry...@redhat.com]
  Sent: 11 October 2013 15:18
  To: openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Hyper-V] Havana status
 
   As a practical example for Nova: in our case that would simply
 include the
  following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv.
 
  If maintainers of a particular driver would prefer this sort of
  autonomy, I'd rather look at creating new repositories.  I'm
 completely
  open to going that route on a per-driver basis.  Thoughts?
 
 I think that all drivers that are officially supported must be
 treated in the same way.
 
 If we are going to split out drivers into a separate but still
 official repository then we should do so for all drivers.  This
 would allow Nova core developers to focus on the architectural side
 rather than how each individual driver implements the API that is
 presented.
 
 Of course, with the current system it is much easier for a Nova core
 to identify and request a refactor or generalisation of code written
 in one or multiple drivers so they work for all of the drivers -
 we've had a few of those with XenAPI where code we have written has
 been pushed up into Nova core rather than the XenAPI tree.
 
 Perhaps one approach would be to re-use the incubation approach we
 have; if drivers want to have the fast-development cycles uncoupled
 from core reviewers then they can be moved into an incubation
 project.  When there is a suitable level of integration (and
 automated testing to maintain it of course) then they can graduate.
  I imagine at that point there will be more development of new
 features which affect Nova in general (to expose each hypervisor's
 strengths), so there would be fewer cases of them being restricted
 just to the virt/* tree.
 
 Bob
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 I've thought about this in the past, but always come back to a couple of
 things.
 
 Being a community driven project, if a vendor doesn't want to
 participate in the project then why even pretend (ie having their own
 project/repo, reviewers etc).  Just post your code up in your own github
 and let people that want to use it pull it down.  If it's a vendor
 project, then that's fine; have it be a vendor project.
 
 In my opinion pulling out and leaving things up to the vendors as is
 being described has significant negative impacts.  Not the least of
 which is consistency in behaviors.  On the Cinder side, the core team
 spends the bulk of their review time looking at things like consistent
 behaviors, missing features or paradigms that are introduced that
 break other drivers.  For example looking at things like, are all the
 base features implemented, do they work the same way, are we all using
 the same vocabulary, will it work in an multi-backend environment.  In
 addition, it's rare that a vendor implements a new feature in their
 driver that doesn't impact/touch the core code somewhere.
 
 Having drivers be a part of the core project is very valuable in my
 opinion.  It's also very important in my view that the core team for
 Nova actually has some idea and notion of what's being done by the
 drivers that it's supporting.  Moving everybody further and further into
 additional private silos seems like a very bad direction to me, it makes
 things like knowledge transfer, documentation and worst of all bug
 triaging extremely difficult.
 
 I could go on and on here, but nobody likes to hear anybody go on a
 rant.  I would just like to see if there are other alternatives to
 improving the situation than fragmenting the projects.

Really good points here.  I'm glad you jumped in, because the underlying
issue here applies well to other projects (especially Cinder and Neutron).

So, the alternative to the split official repos is to either:

1) Stay in tree, participate, and help share the burden of maintenance
of the project

or

2) Truly be a vendor project, and to make that more clear, split out
into your own (not nova) repository.

#2 really isn't so bad if that's what you want, and it honestly sounds
like this may be the case for the Hyper-V team.  You could still be very
close to the OpenStack community by using the same tools.  Use
stackforge for the code (same gerrit, jenkins, etc), and have your own
launchpad project.  If you go that route, you get all of the control you
want, but the project 

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Dan Smith
 My only request here is that we can make sure that new driver
 features can land for other drivers without necessarilky having them
 implemented for libvirt/KVM first.

We've got lots of things supported by the XenAPI drivers that aren't
supported by libvirt, so I don't think this is a problem even today.

 I personally don't agree with this option, as it would create a A
 class version of the driver supposely mature and a B version
 supposely experimental which would just confuse users.

If you're expecting that there would be two copies of the driver, one in
the main tree and one in the extra drivers tree, that's not what I was
suggesting.

 The best option for us, is to have a separate project
 (nova-driver-hyperv would be perfect) where we can handle blueprints,
 commit bug fixes independently with no intention to merge it back
 into the Nova tree as much as I guess there's no reason to merge,
 say, python-nova-client.

So if that's really the desire, why not go John's route and just push
your official version of the driver to a github repo and be done with it?

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-11 Thread Clint Byrum
Excerpts from Stan Lagun's message of 2013-10-11 07:22:37 -0700:
 Hello,
 
 Thanks Angus, Clint, I've got your design.
 
 It seems that Murano can be built on top of that. With service metadata
 knowledge Murano can generate HOT templates with set of interdependent
 configs.
 Here is what will be needed:
 
 1. Ability to implement support for custom software configuration tool
 (type: OS::SoftwareConfig::MuranoAgent)

These are really syntactic sugar, but you can implement them as resource
plugins for users who want Murano resources. In the absence of this,
just putting things in the free form metadata area of the resource that
the MuranoAgent can interpret would suffice.

 2. Ability to provide arbitrary input values for the config

We already have that, there's a free-form json document called metadata
attached to every resource. Or maybe I missed what you mean here. The
new capability that is in the works that will make that better is to
have multiple reusable metadata blocks referenced on one instance.

 3. Ability to return arbitrary (JSON-compatible) data structure from config
 application and use attributes of that structure as an input for other
 configs

Note that I'd like to see more use cases specified for this ability. The
random string generator that Steve Baker has put up should handle most
cases where you just need passwords. Generated key sharing might best
be deferred to something like Barbican which does a lot more than Heat
to try and keep your secrets safe.

 4. Ability to provide config body that is an input to Murano Agent of
 arbitrary size

Isn't this the same as 2?

 5. Work well with large graph of configs with a lot of dependencies.
 Independent configs on different VMs should be applied in parallel.


Yes, this does look good. For dependent configs, the existing wait
condition can be used.

 Does it confirm to your plans?
 

I think it confirms that we're heading toward consensus on where to draw
the software config vs. infrastructure orchestration line. That is very
exciting. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-11 Thread Caitlin Bestler

On 10/9/2013 12:55 PM, Joshua Harlow wrote:

Your example sounds a lot like what taskflow is built to do.

https://github.com/stackforge/taskflow/blob/master/taskflow/examples/calculate_in_parallel.py
 is
a decent example.

In that one, tasks are created and input/output dependencies are
specified (provides, rebind, and the execute function arguments itself).

This is combined into the taskflow concept of a flow; one of those flow
types is a dependency graph.

Using a parallel engine (similar in concept to a heat engine) we can run
all non-dependent tasks in parallel.

An example that I just created shows this (and shows it running) and
more closely matches your example.

Program (this will work against the current taskflow
codebase): http://paste.openstack.org/show/48156/
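
For readers without access to the paste, here is a generic sketch of the same
idea: repeatedly run, in parallel, every task whose prerequisites are already
satisfied. This is not taskflow's API, just an illustration of what a
dependency-graph flow with a parallel engine does (it assumes the graph is
acyclic):

    # Generic parallel execution of a dependency graph; NOT taskflow's API.
    from concurrent import futures


    def run_graph(tasks, deps, max_workers=4):
        """tasks: name -> callable; deps: name -> set of prerequisite names."""
        done = set()
        results = {}
        with futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
            while len(done) < len(tasks):
                # Everything whose prerequisites are satisfied can run now.
                ready = [n for n in tasks
                         if n not in done and deps.get(n, set()) <= done]
                futs = {pool.submit(tasks[n]): n for n in ready}
                for fut in futures.as_completed(futs):
                    name = futs[fut]
                    results[name] = fut.result()
                    done.add(name)
        return results


    if __name__ == '__main__':
        # b and c only depend on a, so they run in parallel; d waits for both.
        graph = {'a': lambda: 'a', 'b': lambda: 'b',
                 'c': lambda: 'c', 'd': lambda: 'd'}
        deps = {'b': {'a'}, 'c': {'a'}, 'd': {'b', 'c'}}
        print(run_graph(graph, deps))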



I think that there is a major difference between building a set of 
virtual servers (what Heat does) and performing a specific task on

a set of servers (what taskflow is designed for).

Taskflow is currently planned to have a more complex and robust
state machine than what Heat plans. This is natural given that
simplicity has a relatively higher value for deployment, while
efficiency of execution matters more for a task that will be
performed repeatedly.

However, if a simpler state model is needed, keep in mind that
a simpler interface can always be translated into the more complete
interface with a shim layer. You cannot build a more flexible solution
on top of a simple solution.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] dd performance for wipe in cinder

2013-10-11 Thread Chris Friesen

On 10/11/2013 09:02 AM, John Griffith wrote:


As Matt pointed out there's an option to turn off secure-delete
altogether.  The reason for the volume_clear setting (aka secure delete)
is that since we're allocating volumes via LVM from a shared VG there is
the possibility that a user had a volume with sensitive data and
deleted/removed the logical volume they were using.  If there was no
encryption or if no secure delete operation were performed it is
possible that another tenant when creating a new volume from the Volume
Group could be allocated some of the blocks that the previous volume
utilized and potentially inspect/read those blocks and obtain some of
the other users data.


Sounds like we could use some kind of layer that will zero out blocks on 
read if they haven't been written by that user.


That way the performance penalty would only affect people that try to 
read data from the volume without writing it first (which nobody should 
actually be doing).
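
To make the idea concrete, here is a toy sketch of such a layer: it tracks
which blocks the current user has written and returns zeros for everything
else. Purely illustrative; thin provisioning implements this idea for real.

    # Toy block layer that hides whatever a previous tenant left behind.
    class ZeroOnReadVolume(object):
        def __init__(self, backing_file, block_size=4096):
            self._f = backing_file      # file-like object opened 'r+b'
            self._bs = block_size
            self._written = set()       # block indexes written by this user

        def write_block(self, index, data):
            assert len(data) == self._bs
            self._f.seek(index * self._bs)
            self._f.write(data)
            self._written.add(index)

        def read_block(self, index):
            if index not in self._written:
                # Never written by this user: return zeros instead of the
                # previous tenant's data, so no wipe-on-delete is needed.
                return b'\0' * self._bs
            self._f.seek(index * self._bs)
            return self._f.read(self._bs)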


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] container forwarding/cluster federation blueprint

2013-10-11 Thread Coles, Alistair
We've just committed a first set of patches to gerrit that address this 
blueprint:

https://blueprints.launchpad.net/swift/+spec/cluster-federation

Quoting from that page: The goal of this work is to enable account contents to 
be dispersed across multiple clusters, motivated by (a) accounts that might 
grow beyond the remaining capacity of a single cluster and (b) clusters 
offering differentiated service levels such as different levels of redundancy 
or different storage tiers. Following feedback at the Portland summit, the work 
is initially limited to dispersal at the container level, i.e. each container 
within an account may be stored on a different cluster, whereas every object 
within a container will be stored on the same cluster.
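
As a rough illustration of what forwarding at the container level means, here
is a hypothetical, much-simplified WSGI middleware sketch; the names and
behaviour are illustrative only and are not taken from the patches below.

    # Hypothetical sketch of container-level forwarding as Swift-style
    # WSGI middleware; illustrative only.
    class ContainerForwarder(object):
        def __init__(self, app, forwarding_map):
            self.app = app
            # (account, container) -> base URL of the remote cluster
            self.forwarding_map = forwarding_map

        def __call__(self, env, start_response):
            parts = env.get('PATH_INFO', '').lstrip('/').split('/')
            # Expect /v1/<account>/<container>[/<object>]
            if len(parts) >= 3:
                remote = self.forwarding_map.get((parts[1], parts[2]))
                if remote:
                    # Send the client to the cluster that owns this container.
                    location = remote.rstrip('/') + '/' + '/'.join(parts)
                    start_response('307 Temporary Redirect',
                                   [('Location', location),
                                    ('Content-Length', '0')])
                    return [b'']
            return self.app(env, start_response)


    def filter_factory(global_conf, **local_conf):
        # Paste-deploy style factory, as Swift middleware normally uses.
        forwarding_map = {}  # would be built from config or container metadata
        def factory(app):
            return ContainerForwarder(app, forwarding_map)
        return factory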

It is work in progress, but we'd welcome feedback on this thread, or in person 
for anyone who might be at the hackathon in Austin next week.

The bulk of the new features are in this patch:
https://review.openstack.org/51236 (Middleware module for container forwarding.)

There's a couple of patches refactoring/adding support to existing modules:
https://review.openstack.org/51242 (Refactor proxy/controllers obj  base http 
code)
https://review.openstack.org/51228 (Store x-container-attr-* headers in 
container db.)

And some tests...
https://review.openstack.org/51245 (Container-forwarding unit and functional 
tests)

Regards,
Alistair Coles, Eric Deliot, Aled Edwards

HP Labs, Bristol, UK


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Enhance UX of Launch Instance Form

2013-10-11 Thread Cédric Soulas
Hi,

I just started a draft with suggestions to enhance the UX of the Launch 
Instance form:
https://docs.google.com/document/d/1hUdmyxpVxbYwgGtPbzDsBUXsv0_rtKbfgCHYxOgFjlo

Try the live prototype:
http://cedricss.github.io/openstack-dashboard-ux-blueprints/launch-instance

Best,

Cédric

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Joe Gordon
On Fri, Oct 11, 2013 at 6:02 AM, Alessandro Pilotti 
apilo...@cloudbasesolutions.com wrote:


 On Oct 11, 2013, at 14:15 , Sean Dague s...@dague.net
  wrote:

  On 10/10/2013 08:43 PM, Tim Smith wrote:
  snip
  Again, I don't have any vested interest in this discussion, except that
  I believe the concept of reviewer karma to be counter to both software
  quality and openness. In this particular case it would seem that the
  simplest solution to this problem would be to give one of the hyper-v
  team members core reviewer status, but perhaps there are consequences to
  that that elude me.
 
  There are very deep consequences to that. The core team model, where you
 have 15 - 20 reviewers, but it only takes 2 to land code, only works when
 the core teams share a culture. This means they know, or are willing to
 learn, code outside their comfort zone. Will they catch all the bugs in
 that? nope. But code blindness hits everyone, and there are real
 implications for the overall quality and maintainability of a project as
 complicated as Nova if everyone only stays in their comfortable corner.
 
  Also, from my experience in Nova, code contributions written by people
 that aren't regularly reviewing outside of their corner of the world are
 demonstrably lower quality than those who are. Reviewing code outside your
 specific area is also educational, gets you familiar with norms and idioms
 beyond what simple style checking handles, and makes you a better developer.


 There's IMO a practical contradiction here: most people contribute code
 and do reviews on partitioned areas of OpenStack only. For example, Nova
 devs rarely commit on Neutron, so you can say that for a Nova dev the
 comfort zone is Nova, but by your description, a fair amount of time
 should be spent reviewing and learning all the OpenStack projects' code,
 unless you want to limit the scope of this discussion to Nova, which does
 not make much sense when you work on a whole technology layer like in our
 case.

 On the contrary, as an example, our job as driver/plugin/agent maintainers
 brings us in contact with all the major projects' codebases, with the result
 that we are learning a lot from each of them. Besides that, obviously a
 driver/plugin/agent dev normally spends time learning how similar solutions
 are implemented for other technologies already in the tree, which leads to
 further improvement in the code due to the same knowledge sharing that you
 are referring to.

 
  We need to all be caring about the whole. That culture is what makes
 OpenStack long term sustainable, and there is a reason that it is behavior
 that's rewarded with more folks looking at your proposed patches. When
 people only care about their corner world, and don't put in hours on
 keeping things whole, they balkanize and fragment.
 
  Review bandwidth, and people working on core issues, are our most
 constrained resources. If teams feel they don't need to contribute there,
 because it doesn't directly affect their code, we end up with this -
 http://en.wikipedia.org/wiki/Tragedy_of_the_commons
 

 This reminds me of how peer-to-peer sharing technologies work. Why
 don't we set some ratios, for example requiring at least 2-3 reviews of
 other people's code for each commit a dev makes? Enforcing it
 wouldn't be that complicated. The downside is that it might lead to
 low-quality or fake reviews, but at least those would be easy to spot in
 the stats.

 One thing is sure: review bandwidth is the obvious bottleneck in OpenStack
 today. If we don't find a reasonably quick solution, the more
 OpenStack grows, the more complicated it will become, leading to even worse
 response times in merging bug fixes and limiting the new features that each
 new version can bring, which is IMO the negation of what a vital and
 dynamic project should be.


Yes, review bandwidth is a bottleneck, and there are some organizational
changes that may help, at the risk of changing our entire review process and
culture (which perhaps we should consider?).  The easiest solution is for
everyone to do more reviews. With just one review a day you can make the
whole project much stronger.  Complaining about the review bandwidth issue
while only doing 33 reviews in all of OpenStack [1] in the past 90 days (I
don't mean to pick on you here, you are just an example) doesn't seem right.

[1] http://www.russellbryant.net/openstack-stats/all-reviewers-90.txt



 From what I see on the Linux kernel project, which can be considered as a
 good source of inspiration when it comes to review bandwidth optimization
 in a large project, they have a pyramidal structure in the way in which the
 git repo origins are interconnected. This looks pretty similar to what we
 are proposing: teams work on specific areas with a topic maintainer and
 somebody merges their work at a higher level, with Linus ultimately
 managing the root repo.

 OpenStack is organized differently: there 

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Matt Riedemann
I'd like to see the powervm driver fall into that first category.  We 
don't nearly have the rapid development that the hyper-v driver does, but 
we do have some out of tree stuff anyway simply because it hasn't landed 
upstream yet (DB2, config drive support for the powervm driver, etc), and 
maintaining that out of tree code is not fun.  So I definitely don't want 
to move out of tree.

Given that, I think I'm at least trying to contribute overall [1][2] by 
doing reviews outside my comfort zone, triaging bugs, fixing bugs when I can, 
and, because we run tempest in house (with neutron-openvswitch), pushing 
patches for the issues we find there.

Having said all that, it's moot for the powervm driver if we don't get the 
CI hooked up in Icehouse and I completely understand that so it's a top 
priority.


[1] 
http://stackalytics.com/?release=havana&metric=commits&project_type=openstack&module=&company=&user_id=mriedem
 

[2] 
https://review.openstack.org/#/q/reviewer:6873+project:openstack/nova,n,z 


Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Russell Bryant rbry...@redhat.com
To: openstack-dev@lists.openstack.org, 
Date:   10/11/2013 11:33 AM
Subject:Re: [openstack-dev] [Hyper-V] Havana status



On 10/11/2013 12:04 PM, John Griffith wrote:
 
 
 
 On Fri, Oct 11, 2013 at 9:12 AM, Bob Ball bob.b...@citrix.com
 mailto:bob.b...@citrix.com wrote:
 
  -Original Message-
  From: Russell Bryant [mailto:rbry...@redhat.com
 mailto:rbry...@redhat.com]
  Sent: 11 October 2013 15:18
  To: openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Hyper-V] Havana status
 
   As a practical example for Nova: in our case that would simply
 include the
  following subtrees: nova/virt/hyperv and 
nova/tests/virt/hyperv.
 
  If maintainers of a particular driver would prefer this sort of
  autonomy, I'd rather look at creating new repositories.  I'm
 completely
  open to going that route on a per-driver basis.  Thoughts?
 
 I think that all drivers that are officially supported must be
 treated in the same way.
 
 If we are going to split out drivers into a separate but still
 official repository then we should do so for all drivers.  This
 would allow Nova core developers to focus on the architectural side
 rather than how each individual driver implements the API that is
 presented.
 
 Of course, with the current system it is much easier for a Nova core
 to identify and request a refactor or generalisation of code written
 in one or multiple drivers so they work for all of the drivers -
 we've had a few of those with XenAPI where code we have written has
 been pushed up into Nova core rather than the XenAPI tree.
 
 Perhaps one approach would be to re-use the incubation approach we
 have; if drivers want to have the fast-development cycles uncoupled
 from core reviewers then they can be moved into an incubation
 project.  When there is a suitable level of integration (and
 automated testing to maintain it of course) then they can graduate.
  I imagine at that point there will be more development of new
 features which affect Nova in general (to expose each hypervisor's
 strengths), so there would be fewer cases of them being restricted
 just to the virt/* tree.
 
 Bob
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 I've thought about this in the past, but always come back to a couple of
 things.
 
 Being a community driven project, if a vendor doesn't want to
 participate in the project then why even pretend (ie having their own
 project/repo, reviewers etc).  Just post your code up in your own github
 and let people that want to use it pull it down.  If it's a vendor
 project, then that's fine; have it be a vendor project.
 
 In my opinion pulling out and leaving things up to the vendors as is
 being described has significant negative impacts.  Not the least of
 which is consistency in behaviors.  On the Cinder side, the core team
 spends the bulk of their review time looking at things like consistent
 behaviors, missing features or paradigms that are introduced that
 break other drivers.  For example looking at things like, are all the
 base features implemented, do they work the same way, are we all using
 the same vocabulary, will it work in an multi-backend environment.  In
 addition, it's rare that a vendor implements a new feature in their
 driver that doesn't impact/touch the core code somewhere.
 
 Having 

Re: [openstack-dev] [cinder] dd performance for wipe in cinder

2013-10-11 Thread Jeremy Stanley
On 2013-10-11 10:50:33 -0600 (-0600), Chris Friesen wrote:
 Sounds like we could use some kind of layer that will zero out
 blocks on read if they haven't been written by that user.
[...]

You've mostly just described thin provisioning... reads to
previously unused blocks are returned empty/all-zero and don't get
allocated actual addresses on the underlying storage medium until
written.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Enhance UX of Launch Instance Form

2013-10-11 Thread Jesse Pretorius
+1

A few comments:

1. Bear in mind that sometimes a user may not have access to any Ephemeral
flavors, so the tabbing should ideally be adaptive. An alternative would be
not to bother with the tabs and just show a flavor list. In our deployment
we have no flavors with ephemeral disk space larger than 0.
2. Whenever there's a selection, but only one choice, make it a default
choice. It's tedious to pick the only available option just because you have to.
It's common for our users to have one network/subnet defined, but the
current UI requires them to switch tabs and select the network, which is
rather tedious.
3. The selection of the flavor is divorced from the quota available and
from the image requirements. Ideally those two items should somehow be
incorporated. A user needs to know up-front that the server will build
based on both their quota and the image minimum requirements.
4. We'd like to see options for sorting on items like flavors. Currently
the sort is by 'id' and we'd like to see an option to sort by name
alphabetically.
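
On point 4, the sort itself is trivial once the flavor list is fetched; a small
sketch, where the flavors list stands in for the result of python-novaclient's
nova.flavors.list() (an assumption, since the dashboard-side change would of
course live in Horizon):

    # Sketch: list flavors sorted by name instead of id.
    from collections import namedtuple

    Flavor = namedtuple('Flavor', 'id name vcpus ram')
    flavors = [Flavor(3, 'm1.medium', 2, 4096),
               Flavor(1, 'm1.tiny', 1, 512),
               Flavor(2, 'm1.small', 1, 2048)]

    # Default listing order is by id; the dashboard could instead do:
    for f in sorted(flavors, key=lambda f: f.name.lower()):
        print(f.name, f.vcpus, f.ram)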



On 11 October 2013 18:53, Cédric Soulas cedric.sou...@cloudwatt.com wrote:

 Hi,

 I just started a draft with suggestions to enhance the UX of the Launch
 Instance form:

 https://docs.google.com/document/d/1hUdmyxpVxbYwgGtPbzDsBUXsv0_rtKbfgCHYxOgFjlo

 Try the live prototype:
 http://cedricss.github.io/openstack-dashboard-ux-blueprints/launch-instance

 Best,

 Cédric

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Jesse Pretorius
mobile: +27 83 680 5492
email: jesse.pretor...@gmail.com
skype: jesse.pretorius
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] dd performance for wipe in cinder

2013-10-11 Thread John Griffith
On Fri, Oct 11, 2013 at 11:05 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2013-10-11 10:50:33 -0600 (-0600), Chris Friesen wrote:
  Sounds like we could use some kind of layer that will zero out
  blocks on read if they haven't been written by that user.
 [...]

 You've mostly just described thin provisioning... reads to
 previously unused blocks are returned empty/all-zero and don't get
 allocated actual addresses on the underlying storage medium until
 written.


+1, which by the way was the number one driving factor for adding the thin
provisioning LVM option in Grizzly.

 --
 Jeremy Stanley

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Alessandro Pilotti


On Oct 11, 2013, at 19:04, John Griffith john.griff...@solidfire.com wrote:




On Fri, Oct 11, 2013 at 9:12 AM, Bob Ball bob.b...@citrix.com wrote:
 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: 11 October 2013 15:18
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Hyper-V] Havana status

  As a practical example for Nova: in our case that would simply include the
 following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv.

 If maintainers of a particular driver would prefer this sort of
 autonomy, I'd rather look at creating new repositories.  I'm completely
 open to going that route on a per-driver basis.  Thoughts?

I think that all drivers that are officially supported must be treated in the 
same way.

If we are going to split out drivers into a separate but still official 
repository then we should do so for all drivers.  This would allow Nova core 
developers to focus on the architectural side rather than how each individual 
driver implements the API that is presented.

Of course, with the current system it is much easier for a Nova core to 
identify and request a refactor or generalisation of code written in one or 
multiple drivers so they work for all of the drivers - we've had a few of those 
with XenAPI where code we have written has been pushed up into Nova core rather 
than the XenAPI tree.

Perhaps one approach would be to re-use the incubation approach we have; if 
drivers want to have the fast-development cycles uncoupled from core reviewers 
then they can be moved into an incubation project.  When there is a suitable 
level of integration (and automated testing to maintain it of course) then they 
can graduate.  I imagine at that point there will be more development of new 
features which affect Nova in general (to expose each hypervisor's strengths), 
so there would be fewer cases of them being restricted just to the virt/* tree.

Bob

___
OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I've thought about this in the past, but always come back to a couple of things.

Being a community driven project, if a vendor doesn't want to participate in 
the project then why even pretend (ie having their own project/repo, reviewers 
etc).  Just post your code up in your own github and let people that want to 
use it pull it down.  If it's a vendor project, then that's fine; have it be a 
vendor project.


There are quite a few reasons why putting this project somewhere else wouldn't 
make sense:

1) It's not a vendor project; we're getting contributions from community members 
belonging to other companies as well
2) Legitimacy. Users want to know that this code is going to be there with or 
without us
3) Driver interface stability, as everybody is against a stable interface (even 
if de facto it is perfectly stable)
4) It's not a vendor project, did I say that already? :-)

That said, we are constantly on the verge of starting to push code to customers 
from a fork, but we are trying as hard as possible to avoid that, as it is 
definitely bad for the whole community.


In my opinion pulling out and leaving things up to the vendors as is being 
described has significant negative impacts.  Not the least of which is 
consistency in behaviors.  On the Cinder side, the core team spends the bulk of 
their review time looking at things like consistent behaviors, missing features 
or paradigms that are introduced that break other drivers.  For example 
looking at things like, are all the base features implemented, do they work the 
same way, are we all using the same vocabulary, will it work in a 
multi-backend environment.  In addition, it's rare that a vendor implements a 
new feature in their driver that doesn't impact/touch the core code somewhere.


The moment you have a separate project for a driver, why should you care 
about whether a driver breaks something or not? IMO it's a job for the driver 
maintainers and for its CI.

Having drivers be a part of the core project is very valuable in my opinion.  
It's also very important in my view that the core team for Nova actually has 
some idea and notion of what's being done by the drivers that it's supporting.  
Moving everybody further and further into additional private silos seems like a 
very bad direction to me, it makes things like knowledge transfer, 
documentation and worst of all bug triaging extremely difficult.


That code is not going to disappear. Nova devs can look into these offspring 
projects and contribute at any time. I also expect driver devs to contribute to 
the Nova project as much as possible, as it is a common interest.


I could go on and on here, but nobody likes to hear 

[openstack-dev] [Keystone][oslo] Trusted Messaging Question

2013-10-11 Thread Sangeeta Singh
Hi,

I had some questions about the trusted messaging project.


  1.  During your design did you consider a kerberos style ticketing service 
for KDS? If yes what were the reasons against it?
  2.  The Keystone documentation does say that it can support kerberos style 
authentication. Are there any know implementations and deployments?
  3.  Does the secured messaging framework supports plugging in one's own key 
service or is there a plan of going in that direction. I think that would 
something that would be useful to the community giving the flexibility to hook 
up different security enforcing agents similar to the higher level message 
abstractions to allow multiple message transport in the oslo messaging library.

I am interested to know how can one use the proposed framework and be able to 
plugin different key distribution mechanism.

Thanks,
Sangeeta
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Common requirements for services' discussion

2013-10-11 Thread Edgar Magana
Excellent points guys!

Salvatore, was it you presenting that blueprint in San Diego? Hopefully we
can get more people involved. I will not call it NATaaS ...

Harshad,

I also like to simplify NAT configuration as much as possible: in and out
networks and that is it :-)

Rudra,

IPAM extensions sound very interesting and somehow align with having NAT
implemented as an individual service, allowing vendor technologies to be
included in Neutron as well. But in the case of AWS I don't see it going in
the same direction; it seems like two different approaches. What do you
think?

Thanks,

Edgar


From:  Rudra Rugge rru...@juniper.net
Reply-To:  OpenStack List openstack-dev@lists.openstack.org
Date:  Thursday, October 10, 2013 5:22 PM
To:  OpenStack List openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Neutron] Common requirements for services'
discussion

Here are the blueprints (mentioned by Harshad below) to add complete AWS
VPC compatibility in Openstack. AWS EC2 compatibility already exists in
Openstack. 

https://blueprints.launchpad.net/neutron/+spec/ipam-extensions-for-neutron
https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron
https://blueprints.launchpad.net/nova/+spec/aws-vpc-support

The services extension is relevant to NATaaS (or Natasha :)) and VPNaaS
in AWS VPC.

Regards,
Rudra

On Oct 10, 2013, at 6:15 AM, Harshad Nakil hna...@contrailsystems.com
 wrote:

 Agree, 
 I like what AWS had done. Have a concept of NAT instance. 90 % use cases are
 solved by just specifying
 Inside and outside networks for the NAT instance.
 
 If one wants fancier NAT config they can always use NATaas API(s)
 To configure this instance.
 
 There is a blueprint for bringing Amazon VPC API compatibility to nova and
 related extensions to quantum already propose concept of NAT instance.
 
 How the NAT instance is implemented is left to the plugin.
 
 Regards 
 -Harshad
 
 
 On Oct 10, 2013, at 1:47 AM, Salvatore Orlando sorla...@nicira.com wrote:
 
 Can I just ask you to not call it NATaas... if you want to pick a name for
 it, go for Natasha :)
 
 By the way, the idea of a NAT service plugin was first introduced at the
 Grizzly summit in San Diego.
 One hurdle, not a big one however, would be that the external gateway and
 floating IP features of the L3 extension already implicitly implements NAT.
 It will be important to find a solution to ensure NAT can be configured
 explicitly as well while allowing for configuring external gateway and
 floating IPs through the API in the same way that we do today.
 
 Apart from this, another interesting aspect would be to be see if we can come
 up with an approach which will result in an API which abstracts as much as
 possible networking aspects. In other words, I would like to avoid an API
 which ends up being iptables over rest, if possible.
 
 Regards,
 Salvatore
 
 
 On 10 October 2013 09:55, Bob Melander (bmelande) bmela...@cisco.com wrote:
 Hi Edgar,
 
 I'm also interested in a broadening of NAT capability in Neutron using the
 evolving service framework.
 
 Thanks,
 Bob
 
 From: Edgar Magana emag...@plumgrid.com
 Reply-To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Date: onsdag 9 oktober 2013 21:38
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Common requirements for services'
 discussion
 
 Hello all,
 
 Is anyone working on NATaaS?
 I know we have some developer working on Router as a Service and they
 probably want to include NAT functionality but I have some interest in
 having NAT as a Service.
 
 Please, response is somebody is interested in having some discussions about
 it.  
 
 Thanks,
 
 Edgar
 
 From: Sumit Naiksatam sumitnaiksa...@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Tuesday, October 8, 2013 8:30 PM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Neutron] Common requirements for services'
 discussion
 
 Hi All,
 
 We had a VPNaaS meeting yesterday and it was felt that we should have a
 separate meeting to discuss the topics common to all services. So, in
 preparation for the Icehouse summit, I am proposing an IRC meeting on Oct
 14th 22:00 UTC (immediately after the Neutron meeting) to discuss common
 aspects related to the FWaaS, LBaaS, and VPNaaS.
 
 We will begin with service insertion and chaining discussion, and I hope we
 can collect requirements for other common aspects such as service agents,
 services instances, etc. as well.
 
 Etherpad for service insertion  chaining can be found here:
 https://etherpad.openstack.org/icehouse-neutron-service-insertion-chaining
 
 Hope you all can join.
 
 Thanks,
 ~Sumit.
 
 
 ___ OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Russell Bryant
On 10/11/2013 01:18 PM, Alessandro Pilotti wrote:
 
 
 On Oct 11, 2013, at 19:04, John Griffith john.griff...@solidfire.com wrote:
 



 On Fri, Oct 11, 2013 at 9:12 AM, Bob Ball bob.b...@citrix.com wrote:

   -Original Message-
   From: Russell Bryant [mailto:rbry...@redhat.com]
   Sent: 11 October 2013 15:18
   To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Hyper-V] Havana status
 
   As a practical example for Nova: in our case that would simply
 include the
  following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv.
 
  If maintainers of a particular driver would prefer this sort of
  autonomy, I'd rather look at creating new repositories.  I'm
 completely
  open to going that route on a per-driver basis.  Thoughts?

 I think that all drivers that are officially supported must be
 treated in the same way.

 If we are going to split out drivers into a separate but still
 official repository then we should do so for all drivers.  This
 would allow Nova core developers to focus on the architectural
 side rather than how each individual driver implements the API
 that is presented.

 Of course, with the current system it is much easier for a Nova
 core to identify and request a refactor or generalisation of code
 written in one or multiple drivers so they work for all of the
 drivers - we've had a few of those with XenAPI where code we have
 written has been pushed up into Nova core rather than the XenAPI tree.

 Perhaps one approach would be to re-use the incubation approach we
 have; if drivers want to have the fast-development cycles
 uncoupled from core reviewers then they can be moved into an
 incubation project.  When there is a suitable level of integration
 (and automated testing to maintain it of course) then they can
 graduate.  I imagine at that point there will be more development
 of new features which affect Nova in general (to expose each
 hypervisor's strengths), so there would be fewer cases of them
 being restricted just to the virt/* tree.

 Bob

 ___
 OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 I've thought about this in the past, but always come back to a couple
 of things.

 Being a community driven project, if a vendor doesn't want to
 participate in the project then why even pretend (ie having their own
 project/repo, reviewers etc).  Just post your code up in your own
 github and let people that want to use it pull it down.  If it's a
 vendor project, then that's fine; have it be a vendor project.

 
 There are quite a few reasons why putting this project soemwehere else
 wouldn't make sense:
 
 1) It's not a vendor project, we're having contributions from community
 members belonging to other companies as well
 2) Legitimation. Users want to know that this code is going to be there
 with or without us
 3) Driver interface stability, as everybody is against a stable
 interface (even if de facto is perfectly stable)
 4) it's not a vendor project, did I say it already? :-)
 
 Said that, we are constantly on the verge of starting pushing code to
 customers from a fork, but we are trying as hard as possible to avoid as
 it is definitely bad for the whole community. 

A vendor project doesn't mean you couldn't accept contributions.  It
means that it would be primarily developed/maintained/managed by someone
other than the OpenStack project, which would in this case be Microsoft
(or its contractor(s)).

I totally agree with the benefits of staying in tree.  The question is
whether you are willing to pay the cost to get those benefits.

Splitting into repos and giving over control is starting to feel like
giving you all of the benefits (primarily being legitimate as you
say), without having to pay the cost (more involvement).

The reason we're at this point and having this conversation about the
fate of hyper-v is that there has been an imbalance.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Alessandro Pilotti




On Oct 11, 2013, at 19:29, Russell Bryant rbry...@redhat.com wrote:

On 10/11/2013 12:04 PM, John Griffith wrote:



On Fri, Oct 11, 2013 at 9:12 AM, Bob Ball bob.b...@citrix.com wrote:

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: 11 October 2013 15:18
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Hyper-V] Havana status

As a practical example for Nova: in our case that would simply
   include the
following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv.

If maintainers of a particular driver would prefer this sort of
autonomy, I'd rather look at creating new repositories.  I'm
   completely
open to going that route on a per-driver basis.  Thoughts?

   I think that all drivers that are officially supported must be
   treated in the same way.

   If we are going to split out drivers into a separate but still
   official repository then we should do so for all drivers.  This
   would allow Nova core developers to focus on the architectural side
   rather than how each individual driver implements the API that is
   presented.

   Of course, with the current system it is much easier for a Nova core
   to identify and request a refactor or generalisation of code written
   in one or multiple drivers so they work for all of the drivers -
   we've had a few of those with XenAPI where code we have written has
   been pushed up into Nova core rather than the XenAPI tree.

   Perhaps one approach would be to re-use the incubation approach we
   have; if drivers want to have the fast-development cycles uncoupled
   from core reviewers then they can be moved into an incubation
   project.  When there is a suitable level of integration (and
   automated testing to maintain it of course) then they can graduate.
I imagine at that point there will be more development of new
   features which affect Nova in general (to expose each hypervisor's
   strengths), so there would be fewer cases of them being restricted
   just to the virt/* tree.

   Bob

   ___
   OpenStack-dev mailing list
    OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I've thought about this in the past, but always come back to a couple of
things.

Being a community driven project, if a vendor doesn't want to
participate in the project then why even pretend (ie having their own
project/repo, reviewers etc).  Just post your code up in your own github
and let people that want to use it pull it down.  If it's a vendor
project, then that's fine; have it be a vendor project.

In my opinion pulling out and leaving things up to the vendors as is
being described has significant negative impacts.  Not the least of
which is consistency in behaviors.  On the Cinder side, the core team
spends the bulk of their review time looking at things like consistent
behaviors, missing features or paradigms that are introduced that
break other drivers.  For example looking at things like, are all the
base features implemented, do they work the same way, are we all using
the same vocabulary, will it work in an multi-backend environment.  In
addition, it's rare that a vendor implements a new feature in their
driver that doesn't impact/touch the core code somewhere.

Having drivers be a part of the core project is very valuable in my
opinion.  It's also very important in my view that the core team for
Nova actually has some idea and notion of what's being done by the
drivers that it's supporting.  Moving everybody further and further into
additional private silos seems like a very bad direction to me, it makes
things like knowledge transfer, documentation and worst of all bug
triaging extremely difficult.

I could go on and on here, but nobody likes to hear anybody go on a
rant.  I would just like to see if there are other alternatives to
improving the situation than fragmenting the projects.

Really good points here.  I'm glad you jumped in, because the underlying
issue here applies well to other projects (especially Cinder and Neutron).

So, the alternative to the split official repos is to either:

1) Stay in tree, participate, and help share the burden of maintenance
of the project


Which means getting back to the status quo with all the problems we had. I hope 
we'll be able to find something better than that.

or

2) Truly be a vendor project, and to make that more clear, split out
into your own (not nova) repository.

I explained in my previous reply some points about why it would be IMO totally 
counterproductive to have a fork outside of OpenStack.
Our goal is to have more and more independent community 

Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-11 Thread Lakshminaraya Renganarayana

Excellent discussion on various issues around orchestration and
coordination -- thanks to you all, in particular to Clint, Angus, Stan,
Thomas, Joshua, Zane, Steve ...

After reading the discussions, I am finding the following themes emerging
(please feel free to correct/add):

1. Most of the building blocks needed for effective coordination and
orchestration are already in Heat/HOT.
2. Heat would like to view software configuration as a resource (type) with
related providers + plugins
3. There is scope for communication/synchronization mechanisms that would
complement the wait-conditions and signals

I would like to propose a simple abstraction that would complement the
current wait-conditions and signals. My proposal is based on our experience
with supporting such an abstraction on our DSL and also on an extension of
Heat.  In a nut-shell, this abstraction is a global data space (visible
across resources, stacks) from which resources can read and write their
inputs / outputs PLUS the semantics that reads will block until the read
values are available and writes are non-blocking. We used ZooKeeper to
implement this global data space and the blocking-read/non-blocking-writes
semantics. But, these could be implemented using several other mechanisms
and I believe the techniques currently used by Heat for meta-data service
can be used here.

I would like to make clear that I am not proposing a replacement for
wait-conditions and signals. I am hoping that wait-conditions and signals
would be used by power-users (concurrent/distributed programming experts)
and the proposed abstraction would be used by folks (like me) who do not
want to reason about concurrency and related problems. Also, the proposed
global data-space with blocking reads and non-blocking writes is not a new
idea (google tuple-spaces, linda) and it has been proven in other domains
such as coordination languages to improve the level of abstraction and
productivity.

The benefits of the proposed abstraction are:
G1. Support finer granularity of dependences
G2. Allow Heat to reason/analyze about these dependences so that it can
order resource creations/management
G3. Avoid classic synchronization problems such as dead-locks and race
conditions
G4 *Conjecture* : Capture most of the coordination use cases (including
those required for software configuration / orchestration).

Here is a more detailed description: Let us say that we can use either
pre-defined or custom resource types to define resources at arbitrary
levels of granularity. This can be easily supported and I guess is already
possible in the current version of Heat/HOT. Given this, the proposed
abstraction has two parts: (1) an interface-style specification of a
resource's inputs and outputs and (2) a global name/data space. The
interface specification would capture

- INPUTS: all the attributes that are consumed/used/read by that resource
(currently, we have Ref, GetAttrs that can give this implicitly)

- OUTPUTS: all the attributes that are produced/written by that resource (I
do not know if this write-set is currently well-defined for a resource. I
think some of them are implicitly defined by Heat on particular resource
types.)

- Global name-space and data-space : all the values produced and consumed
(INPUTS/OUTPUTS) are described using a names that are fully qualified
(XXX.stack_name.resource_name.property_name). The data values associated
with these names are stored in a global data-space.  Reads are blocking,
i.e., reading a value will block the execution resource/thread until the
value is available. Writes are non-blocking, i.e., any thread can write a
value and the write will succeed immediately.
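
To make these semantics concrete, here is a small self-contained Python sketch
of such a data space (an in-process stand-in for the ZooKeeper-backed
implementation mentioned above; the names are illustrative only): writes
publish a value and return immediately, reads block until the named value has
been written.

import threading

class GlobalDataSpace(object):
    """Toy model: blocking reads, non-blocking writes on fully-qualified names."""

    def __init__(self):
        self._values = {}
        self._cond = threading.Condition()

    def write(self, name, value):
        # Non-blocking: publish the value and wake up any waiting readers.
        with self._cond:
            self._values[name] = value
            self._cond.notify_all()

    def read(self, name):
        # Blocking: wait until some resource has written this name.
        with self._cond:
            while name not in self._values:
                self._cond.wait()
            return self._values[name]

space = GlobalDataSpace()

def consumer():
    # Blocks until the producing resource publishes its output attribute.
    print(space.read('XXX.stack1.cluster_manager.config_xml'))

t = threading.Thread(target=consumer)
t.start()
space.write('XXX.stack1.cluster_manager.config_xml', '<config/>')
t.join()

In a real deployment the same blocking-read/non-blocking-write contract would
sit on top of ZooKeeper or the Heat metadata service rather than an in-process
dictionary.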

The ability to define resources at arbitrary levels of granularity together
with the explicit specification of INPUTS/OUTPUTS allows us to reap the
benefits G1 and G2 outlined above. Note that the ability to reason about
the inputs/outputs of each resource and the induced dependencies will also
allow Heat to detect dead-locks via dependence cycles (benefit G3). This is
already done today in Heat for Refs, GetAttr on base-resources, but the
proposal is to extend the same to arbitrary attributes for any resource.
The blocking-read and non-blocking writes further structures the
specification to avoid deadlocks and race conditions (benefit G3).

As for G4, the conjecture, I can only give as evidence our experience with
using our DSL with the proposed abstraction to deploy a few reasonably
large applications :-)

I would like to know your comments and suggestions. Also, if there is
interest I can write a Blueprint / proposal with more details and
use-cases.

Thanks,
LN



Clint Byrum cl...@fewbar.com wrote on 10/11/2013 12:40:19 PM:

 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org
 Date: 10/11/2013 12:43 PM
 Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
 proposal for workflows

 Excerpts from Stan Lagun's message of 2013-10-11 07:22:37 

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Russell Bryant
On 10/11/2013 02:03 PM, Alessandro Pilotti wrote:

 Talking about new community involvements, newcomers are getting very
 frustrated to have to wait for weeks to get a meaningful review and I
 cannot blame them if they don't want to get involved anymore after the
 first patch!
 This makes appear public bureocracy here in eastern Europe a lightweight
 process in comparison! :-)

You keep making it sound like the situation is absolutely terrible.  The
stats that I track say otherwise.  That's why I brought them up at the
very beginning of this message.  So:

1) I don't think it's as bad as you make it out to be (based on actual
numbers).

http://russellbryant.net/openstack-stats/nova-openreviews.html
http://russellbryant.net/openstack-stats/all-openreviews.html

2) I don't think you (or hyper-v in general) is a victim (again based on
my stats).  If review times need to improve, it's a much more general
problem.

3) There's only one way to improve review times, which is more people
reviewing.  We could use review help in Nova, as could all projects I'm
sure.  We've also established that your review contribution is rather
small (30 reviews over 3 months across *all* openstack projects) [1], I
don't think you can really claim to be helping the problem.  I wouldn't
normally call anyone out like this.  It's not necessarily a *problem*
... until you complain.

So, are you in?  Let's work together to make things better.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-October/016470.html

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Thanks for fixing my patch

2013-10-11 Thread Clint Byrum
Recently in the TripleO meeting we identified situations where we need
to make it very clear that it is ok to pick up somebody else's patch
and finish it. We are broadly distributed, time-zone-wise, and I know
other teams working on OpenStack projects have the same situation. So
when one of us starts the day and sees an obvious issue with a patch,
we have decided to take action, rather than always -1 and move on. We
clarified for our core reviewers that this does not mean that now both
of you cannot +2. We just need at least one person who hasn't been in
the code to also +2 for an approval*.

I think all projects can benefit from this model, as it will raise
velocity. It is not perfect for everything, but it is really great when
running up against deadlines or when a patch has a lot of churn and thus
may take a long time to get through the rebase gauntlet.

So, all of that said, I want to encourage all OpenStack developers to
say thanks for fixing my patch when somebody else does so. It may seem
obvious, but publicly expressing gratitude will make it clear that you
do not take things personally and that we're all working together.

Thanks for your time -Clint

* If all core reviewers have been in on the patch, then any two +2's
work.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS Agent and VxLan UDP Ports

2013-10-11 Thread P Balaji-B37839
Hi Kyle,

This observation is with OVS Plugin.

Regards,
Balaji.P

 -Original Message-
 From: Kyle Mestery (kmestery) [mailto:kmest...@cisco.com]
 Sent: Friday, October 11, 2013 4:14 PM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [Neutron] OVS Agent and VxLan UDP Ports
 
 On Oct 8, 2013, at 4:01 AM, P Balaji-B37839 b37...@freescale.com wrote:
  Hi,
 
  Current OVS Agent is creating tunnel with dst_port as the port
 configured in INI file on Compute Node. If all the compute nodes on VXLAN
 network are configured for DEFAULT port it is fine.
 
  When any of the Compute Nodes are configured for CUSTOM udp port as
 VXLAN UDP Port, Then how does the tunnel will be established with remote
 IP.
 
  It is observed that the fan-out RPC message is not having the
 destination port information.
 
 Balaji, is this with the ML2 or OVS plugin?
 
  Regards,
  Balaji.P
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread David Kranz

On 10/11/2013 02:03 PM, Alessandro Pilotti wrote:





On Oct 11, 2013, at 19:29, Russell Bryant rbry...@redhat.com wrote:


On 10/11/2013 12:04 PM, John Griffith wrote:


[... snip ...]


Talking about new community involvements, newcomers are getting very 
frustrated to have to wait for weeks to get a meaningful review and I 
cannot blame them if they don't want to get involved anymore after the 
first patch!
This makes appear public bureocracy here in eastern Europe a 
lightweight process in comparison! :-)


Let me add another practical reason about why a separate OpenStack 
project would be a good idea:


Anytime that we commit a driver specific patch, a lot of Tempests 
tests are executed on Libvirt and XenServer (for Icehouse those will 
be joined by another pack of CIs, including Hyper-V).
On the jenkins side, we have to wait for regression tests that have 
nothing to do with the code that we are pushing. During the H3 push, 
this meant waiting for hours and hoping not to have to issue the 100th 
recheck / reverify bug xxx.


A separate project would obviously include only the required tests and 
be definitely more lightweight, offloading quite some work from the 
SmokeStack / Jenkins job for everybody's happiness.



I'm glad you brought this up. There are two issues here, both discussed 
by the qe/infra groups and others at the Havana summit and after.


How do you/we know which regression tests have nothing to do with the 
code changed in a particular patch? Or that the answer won't change 
tomorrow? The only way to do that is to assert dependencies and 
non-dependencies between components that will be used to decide which 
tests should be run for each patch. There was a lively discussion (with 
me taking your side initially) at the summit and it was decided that a 
generic wasting resources argument was not sufficient to introduce 
that fragility and so we would run the whole test suite as a gate on all 
projects. That decision was to be revisited if resources became a problem.


As for the 100th recheck, that is a result of the recent introduction of 
parallel tempest runs before the Havana rush. It was decided that the 
benefit in throughput from drastically reduced gate job times outweighed 
the pain of potentially doing a lot of rechecks. For the most part the 
bugs being surfaced were real OpenStack bugs that were showing up due to 
the new stress of parallel test execution. This was a good thing, 
though certainly painful to all. With hindsight I'm not sure if that was 
the right decision or not.


This is just an explanation of what has happened and why. There are 
obviously costs and benefits of being tightly bound to the project.


 -David
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thanks for fixing my patch

2013-10-11 Thread David Kranz

On 10/11/2013 02:34 PM, Clint Byrum wrote:

Recently in the TripleO meeting we identified situations where we need
to make it very clear that it is ok to pick up somebody else's patch
and finish it. We are broadly distributed, time-zone-wise, and I know
other teams working on OpenStack projects have the same situation. So
when one of us starts the day and sees an obvious issue with a patch,
we have decided to take action, rather than always -1 and move on. We
clarified for our core reviewers that this does not mean that now both
of you cannot +2. We just need at least one person who hasn't been in
the code to also +2 for an approval*.

I think all projects can benefit from this model, as it will raise
velocity. It is not perfect for everything, but it is really great when
running up against deadlines or when a patch has a lot of churn and thus
may take a long time to get through the rebase gauntlet.

So, all of that said, I want to encourage all OpenStack developers to
say thanks for fixing my patch when somebody else does so. It may seem
obvious, but publicly expressing gratitude will make it clear that you
do not take things personally and that we're all working together.

Thanks for your time -Clint

* If all core reviewers have been in on the patch, then any two +2's
work.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Thanks, Clint. I have wanted to do this in the past but was not sure 
how. Can you provide the steps to take over someone else's patch and 
submit it? I volunteer to add it to 
https://wiki.openstack.org/wiki/Gerrit_Workflow.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-11 Thread Lakshminaraya Renganarayana

Clint Byrum cl...@fewbar.com wrote on 10/11/2013 12:40:19 PM:

 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org
 Date: 10/11/2013 12:43 PM
 Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
 proposal for workflows

  3. Ability to return arbitrary (JSON-compatible) data structure from
config
  application and use attributes of that structure as an input for other
  configs

 Note that I'd like to see more use cases specified for this ability. The
 random string generator that Steve Baker has put up should handle most
 cases where you just need passwords. Generated key sharing might best
 be deferred to something like Barbican which does a lot more than Heat
 to try and keep your secrets safe.

I had seen a deployment scenario that needed more than a random string
generator. It was during the deployment of a system that has clustered
application servers, i.e., a cluster of application server nodes + a
cluster manager node. The deployment progresses by all the VMs
(cluster-manager and cluster-nodes) starting concurrently. Then the
cluster-nodes wait for the cluster-manager to send them data (xml) to
configure themselves. The cluster-manager after reading its own config
file, generates config-data for each cluster-node and sends it to them.

Thanks,
LN___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thanks for fixing my patch

2013-10-11 Thread Doug Hellmann
Running git review -d $gerrit_id will download the patch and create a
local branch for you.

For example, if I wanted to work on Sandy's patch
https://review.openstack.org/#/c/51249 I would git review -d 51249. I can
then amend the changeset, rebase, or whatever. Running git review will
push it up to gerrit again, and as long as I leave the Change-Id in the
commit message intact, gerrit will add a new patchset to the existing
review.

One small procedural suggestion: Leave a comment on the review to minimize
race conditions with other reviewers who are also considering providing
fixes.




On Fri, Oct 11, 2013 at 2:46 PM, David Kranz dkr...@redhat.com wrote:

 On 10/11/2013 02:34 PM, Clint Byrum wrote:

 Recently in the TripleO meeting we identified situations where we need
 to make it very clear that it is ok to pick up somebody else's patch
 and finish it. We are broadly distributed, time-zone-wise, and I know
 other teams working on OpenStack projects have the same situation. So
 when one of us starts the day and sees an obvious issue with a patch,
 we have decided to take action, rather than always -1 and move on. We
 clarified for our core reviewers that this does not mean that now both
 of you cannot +2. We just need at least one person who hasn't been in
 the code to also +2 for an approval*.

 I think all projects can benefit from this model, as it will raise
 velocity. It is not perfect for everything, but it is really great when
 running up against deadlines or when a patch has a lot of churn and thus
 may take a long time to get through the rebase gauntlet.

 So, all of that said, I want to encourage all OpenStack developers to
 say thanks for fixing my patch when somebody else does so. It may seem
 obvious, but publicly expressing gratitude will make it clear that you
 do not take things personally and that we're all working together.

 Thanks for your time -Clint

 * If all core reviewers have been in on the patch, then any two +2's
 work.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 Thanks, Clint. I have wanted to do this in the past but was not sure how.
 Can you provide the steps to take over some one else's patch and submit it?
  I volunteer to add it to https://wiki.openstack.org/wiki/Gerrit_Workflow.

  -David


  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-11 Thread Steven Dake

On 10/11/2013 11:55 AM, Lakshminaraya Renganarayana wrote:


Clint Byrum cl...@fewbar.com wrote on 10/11/2013 12:40:19 PM:

 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org
 Date: 10/11/2013 12:43 PM
 Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
 proposal for workflows

  3. Ability to return arbitrary (JSON-compatible) data structure 
from config

  application and use attributes of that structure as an input for other
  configs

 Note that I'd like to see more use cases specified for this ability. The
 random string generator that Steve Baker has put up should handle most
 cases where you just need passwords. Generated key sharing might best
 be deferred to something like Barbican which does a lot more than Heat
 to try and keep your secrets safe.

I had seen a deployment scenario that needed more than random string 
generator. It was during the deployment of a system that has clustered 
application servers, i.e., a cluster of application server nodes + a 
cluster manager node. The deployment progresses by all the VMs 
(cluster-manager and cluster-nodes) starting concurrently. Then the 
cluster-nodes wait for the cluster-manager to send them data (xml) to 
configure themselves. The cluster-manager after reading its own config 
file, generates config-data for each cluster-node and sends it to them.



Is the config data per cluster node unique to each node?  If not:

Change the deployment to the following model:
1. deploy the cluster-manager as a resource with a waitcondition - passing 
the data using cfn-signal -d to send the xml blob
2. have the cluster nodes wait on the wait condition in #1, using the data 
from the cfn-signal


If it is unique, join the per-node config data sent in the cfn-signal and break 
it apart by the various cluster nodes in #2 (a rough sketch of this follows below).
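
Here is a rough sketch of the "join and break apart" idea, assuming
(hypothetically) that the manager packs one XML fragment per node into a
single JSON blob passed via cfn-signal -d, and each node then extracts only
its own entry from the wait condition data:

import json

def pack_cluster_config(per_node_xml):
    # Manager side: join all per-node fragments into one blob that fits
    # in a single cfn-signal -d payload.
    return json.dumps(per_node_xml)

def extract_node_config(signal_data, node_name):
    # Node side: break the joined blob apart and keep only our own piece.
    return json.loads(signal_data)[node_name]

blob = pack_cluster_config({'node1': '<config id="1"/>',
                            'node2': '<config id="2"/>'})
print(extract_node_config(blob, 'node2'))  # -> <config id="2"/>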


Thanks,
LN


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread John Griffith
On Fri, Oct 11, 2013 at 12:43 PM, David Kranz dkr...@redhat.com wrote:

  On 10/11/2013 02:03 PM, Alessandro Pilotti wrote:





  On Oct 11, 2013, at 19:29 , Russell Bryant rbry...@redhat.com
  wrote:

 On 10/11/2013 12:04 PM, John Griffith wrote:


Umm... just to clarify the section below is NOT from my message.  :)


 [... snip ...]


  Talking about new community involvements, newcomers are getting very
 frustrated to have to wait for weeks to get a meaningful review and I
 cannot blame them if they don't want to get involved anymore after the
 first patch!
 This makes appear public bureocracy here in eastern Europe a lightweight
 process in comparison! :-)

  Let me add another practical reason about why a separate OpenStack
 project would be a good idea:

  Anytime that we commit a driver specific patch, a lot of Tempests tests
 are executed on Libvirt and XenServer (for Icehouse those will be joined by
 another pack of CIs, including Hyper-V).
 On the jenkins side, we have to wait for regression tests that have
 nothing to do with the code that we are pushing. During the H3 push, this
 meant waiting for hours and hoping not to have to issue the 100th recheck
 / revery bug xxx.

  A separate project would obviously include only the required tests and
 be definitely more lightweight, offloading quite some work from the
 SmokeStack / Jenkins job for everybody's happiness.


  I'm glad you brought this up. There are two issues here, both discussed
 by the qe/infra groups and others at the Havana summit and after.

 How do you/we know which regression tests have nothing to do with the code
 changed in a particular patch? Or that the answer won't change tomorrow?
 The only way to do that is to assert dependencies and non-dependencies
 between components that will be used to decide which tests should be run
 for each patch. There was a lively discussion (with me taking your side
 initially) at the summit and it was decided that a generic wasting
 resources argument was not sufficient to introduce that fragility and so
 we would run the whole test suite as a gate on all projects. That decision
 was to be revisited if resources became a problem.

 As for the 100th recheck, that is a result of the recent introduction of
 parallel tempest runs before the Havana rush. It was decided that the
 benefit in throughput from drastically reduced gate job times outweighed
 the pain of potentially doing a lot of rechecks. For the most part the bugs
 being surfaced were real OpenStack bugs that were showing up due to the new
 stress of parallel test execution. This was a good thing, though
 certainly painful to all. With hindsight I'm not sure if that was the right
 decision or not.

 This is just an explanation of what has happened and why. There are
 obviously costs and benefits of being tightly bound to the project.

  -David

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thanks for fixing my patch

2013-10-11 Thread Dolph Mathews
On Fri, Oct 11, 2013 at 1:34 PM, Clint Byrum cl...@fewbar.com wrote:

 Recently in the TripleO meeting we identified situations where we need
 to make it very clear that it is ok to pick up somebody else's patch
 and finish it. We are broadly distributed, time-zone-wise, and I know
 other teams working on OpenStack projects have the same situation. So
 when one of us starts the day and sees an obvious issue with a patch,
 we have decided to take action, rather than always -1 and move on. We
 clarified for our core reviewers that this does not mean that now both
 of you cannot +2. We just need at least one person who hasn't been in
 the code to also +2 for an approval*.

 I think all projects can benefit from this model, as it will raise
 velocity. It is not perfect for everything, but it is really great when
 running up against deadlines or when a patch has a lot of churn and thus
 may take a long time to get through the rebase gauntlet.

 So, all of that said, I want to encourage all OpenStack developers to
 say thanks for fixing my patch when somebody else does so. It may seem
 obvious, but publicly expressing gratitude will make it clear that you
 do not take things personally and that we're all working together.

 Thanks for your time -Clint

 * If all core reviewers have been in on the patch, then any two +2's
 work.


+1 across the board -- keystone-core follows this approach, especially
around feature freeze / release candidate time.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Mathew R Odden
Not to derail the current direction this thread is heading but my 2 cents
on the topic of moving drivers out of tree:

I share a lot of the same concerns that John Griffith pointed out. As one
of the maintainers of the PowerVM driver in nova,
I view the official-ness of having the driver in tree as a huge benefit.

For the hyper-v driver, it might make sense to be out of tree, or have an
out of tree copy for fast iteration. As one of the original
authors of the PowerVM driver, this is how we started. We had an internal
project and were able to iterate fast, fix issues quickly and
efficiently, and release as often or as little as we wanted. The copy of
the driver in Nova today is that same driver, but evolved and has a
different purpose. It is an 'official' shared copy that other teams and
community members can contribute to. It is meant to be
the community's driver, not a vendor driver. There is nothing stopping
anyone from taking that code and making their own version if they
want to go back to a fast iteration model, but there are obvious
consequences to that. Some of those consequences might come to
light during the Icehouse development cycle, so stick around if you want to
see an example of the problems with that approach.

I don't think we should move drivers out of tree, but I did like the idea
of an incubator area for new drivers. As Dan pointed out already,
this gives us a workflow to match the new requirement of CI integration for
each driver as well.

Also, I think someone already pointed this out, but doing code reviews and
helping out elsewhere in the community is important and
would definitely help the hyper-V case. Code reviews are obviously the most
in demand activity needed right now, but the idea is
that contributors should be involved in the entire project and help
optimize the whole. No matter how many bugs you fix in the hyper-V
driver,
the driver itself would be useless if the rest of Nova was so buggy it was
unusable.
Similar case between Nova and the other OpenStack projects.

Mathew Odden, Software Developer
IBM STG OpenStack Development___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] VPNaaS questions

2013-10-11 Thread Paul Michali
Hi folks,

I have a bunch of questions for you on VPNaaS in specific, and services in 
general...

Nachi,

1) You had a bug fix to do service provider framework support for VPN (41827). 
It was held for Icehouse. Is that pretty much a working patch? 
2) When are you planning on reopening the review?


Anyone,

I see that there is an agent.py file for VPN that has a main() and it starts up 
an L3 agent, specifying the VPNAgent class (in same file).

3) How does this file get invoked? IOW how does the main() get invoked?
4) I take it we can specify multiple device drivers in the config file for the 
agent?


For the reference device driver, the hierarchy is currently 
DeviceDriver [ABC] - IPsecDriver [Swan based logic] - OpenSwanDriver [one 
function, OpenSwan specific]. The ABC has a specific set of APIs. Wondering how 
to incorporate provider based device drivers.

5) Should I push up more general methods from IPsecDriver to DeviceDriver, so 
that they can be reused by other providers?
6) Should I push down the swan based methods from DeviceDriver to IPsecDriver 
and maybe name it SwanDeviceDriver?


I see that vpnaas.py is an extension for VPN that defines attributes and the 
base plugin functions.

7) If a provider has additional attributes (can't think of any yet), how can the 
attribute be extended, only for that provider (or is that the wrong way to 
handle this)?

For VPN, there are several attributes, each with varying ranges of values 
allowed. This is reflected in the CLI help messages, the database (e.g. enums), 
and is validated (some) in the client code and in the VPN service.

8) How do we provide different limits/allowed values for attributes, for a 
specific provider (e.g. let's say the provider supports or doesn't support an 
encryption method, or doesn't support IKE v1 or v2)?
9) Should the code be changed not to do any client validation, and to have 
generic help, so that different values could be provided, or is there a way to 
customize this based on provider?
10) If customized, is it possible to reflect the difference in allowed values 
in the help strings (and client validation)?
11) How do we handle the variation in the database (e.g. when enums specify 
a fixed set of values)? Do we need to change the database to be more generic 
(strings and ints) or do we somehow extend the database?

I was wondering in general how providers can customize service features, based 
on their capabilities (better or worse than reference). I could create a Summit 
session topic on this, but wanted to know if this is something that has already 
been addressed or if a different architectural approach has already been 
defined.


Regards,


PCM (Paul Michali)

MAIL p...@cisco.com
IRC   pcm_  (irc.freenode.net)
TW   @pmichali



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thanks for fixing my patch

2013-10-11 Thread Nikhil Manchanda
Just wanted to chime in that Trove also follows this approach and it's
worked pretty well for us.
+1 on Doug's suggestion to leave a comment on the patch so that two
reviewers don't end up doing the same work fixing it.

Cheers,
-Nikhil



On Fri, Oct 11, 2013 at 12:17 PM, Dolph Mathews dolph.math...@gmail.com wrote:


 On Fri, Oct 11, 2013 at 1:34 PM, Clint Byrum cl...@fewbar.com wrote:

 Recently in the TripleO meeting we identified situations where we need
 to make it very clear that it is ok to pick up somebody else's patch
 and finish it. We are broadly distributed, time-zone-wise, and I know
 other teams working on OpenStack projects have the same situation. So
 when one of us starts the day and sees an obvious issue with a patch,
 we have decided to take action, rather than always -1 and move on. We
 clarified for our core reviewers that this does not mean that now both
 of you cannot +2. We just need at least one person who hasn't been in
 the code to also +2 for an approval*.

 I think all projects can benefit from this model, as it will raise
 velocity. It is not perfect for everything, but it is really great when
 running up against deadlines or when a patch has a lot of churn and thus
 may take a long time to get through the rebase gauntlet.

 So, all of that said, I want to encourage all OpenStack developers to
 say thanks for fixing my patch when somebody else does so. It may seem
 obvious, but publicly expressing gratitude will make it clear that you
 do not take things personally and that we're all working together.

 Thanks for your time -Clint

 * If all core reviewers have been in on the patch, then any two +2's
 work.


 +1 across the board -- keystone-core follows this approach, especially
 around feature freeze / release candidate time.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 -Dolph

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Havana RC2 available

2013-10-11 Thread Thierry Carrez
Good evening everyone,

Due to various issues detected in RC1 testing, we just created a new
Havana release candidate for OpenStack Block Storage (Cinder).

You can find the RC2 tarball and the list of fixed bugs at:

https://launchpad.net/cinder/havana/havana-rc2

This is hopefully the last Havana release candidate for Cinder.
Unless a last-minute release-critical regression is found that warrants
another release candidate respin, this RC2 will be formally included in
the common OpenStack 2013.2 final release next Thursday. You are
therefore strongly encouraged to test and validate this tarball.

Alternatively, you can grab the code at:
https://github.com/openstack/cinder/tree/milestone-proposed

If you find a regression that could be considered release-critical,
please file it at https://bugs.launchpad.net/cinder/+filebug and tag
it *havana-rc-potential* to bring it to the release crew's attention.

Happy regression hunting,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-11 Thread Mike Spreitzer
I'll be at the summit too.  Available Nov 4 if we want to do some prep 
then.  It will be my first summit, I am not sure how overbooked my summit 
time will be.

Regards,
Mike



From:   Sylvain Bauza sylvain.ba...@bull.net
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Cc: Mike Spreitzer/Watson/IBM@IBMUS
Date:   10/11/2013 08:19 AM
Subject:Re: [openstack-dev] [scheduler] APIs for Smart Resource 
Placement - Updated Instance Group Model and API extension model - WIP 
Draft



Long-story short, sounds like we do have the same concerns here in 
Climate.

I'll be present at the Summit, any chance to do an unconference meeting in between all parties?

Thanks,
-Sylvain

On 11/10/2013 08:25, Mike Spreitzer wrote:
Regarding Alex's question of which component does holistic infrastructure 
scheduling, I hesitate to simply answer heat.  Heat is about 
orchestration, and infrastructure scheduling is another matter.  I have 
attempted to draw pictures to sort this out, see 
https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U 
and 
https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-GQQ1bRVgBpJdstpu0lH_TONw6g .
In those you will see that I identify holistic infrastructure 
scheduling as separate functionality from infrastructure orchestration 
(the main job of today's heat engine) and also separate from software 
orchestration concerns.  However, I also see a close relationship between 
holistic infrastructure scheduling and heat, as should be evident in those 
pictures too. 

Alex made a remark about the needed inputs, and I agree but would like to 
expand a little on the topic.  One thing any scheduler needs is knowledge 
of the amount, structure, and capacity of the hosting thingies (I wish I 
could say resources, but that would be confusing) onto which the 
workload is to be scheduled.  Scheduling decisions are made against 
available capacity.  I think the most practical way to determine available 
capacity is to separately track raw capacity and current (plus already 
planned!) allocations from that capacity, finally subtracting the latter 
from the former. 

In Nova, for example, sensing raw capacity is handled by the various 
nova-compute agents reporting that information.  I think a holistic 
infrastructure scheduler should get that information from the various 
individual services (Nova, Cinder, etc) that it is concerned with 
(presumably they have it anyway). 

A holistic infrastructure scheduler can keep track of the allocations it 
has planned (regardless of whether they have been executed yet).  However, 
there may also be allocations that did not originate in the holistic 
infrastructure scheduler.  The individual underlying services should be 
able to report (to the holistic infrastructure scheduler, even if lowly 
users are not so authorized) all the allocations currently in effect.  An 
accurate union of the current and planned allocations is what we want to 
subtract from raw capacity to get available capacity. 
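To put that arithmetic in code form (a toy sketch only; the names are 
illustrative, not a proposed API):

def available_capacity(raw, current, planned):
    # raw: host -> capacity; current/planned: allocation id -> (host, amount).
    # Taking the union keyed by allocation id avoids double-counting an
    # allocation that is both already in effect and still in the plan.
    allocations = dict(current)
    allocations.update(planned)
    available = dict(raw)
    for host, amount in allocations.values():
        available[host] -= amount
    return available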

If there is a long delay between planning and executing an allocation, 
there can be nasty surprises from competitors --- if there are any 
competitors.  Actually, there can be nasty surprises anyway.  Any 
scheduler should be prepared for nasty surprises, and react by some 
sensible retrying.  If nasty surprises are rare, we are pretty much done. 
If nasty surprises due to the presence of competing managers are common, 
we may be able to combat the problem by changing the long delay to a short 
one --- by moving the allocation execution earlier into a stage that is 
only about locking in allocations, leaving all the other work involved in 
creating virtual resources to later (perhaps Climate will be good for 
this).  If the delay between planning and executing an allocation is short 
and there are many nasty surprises due to competing managers, then you 
have too much competition between managers --- don't do that. 

Debo wants a simpler nova-centric story.  OK, how about the following. 
This is for the first step in the roadmap, where scheduling decisions are 
still made independently for each VM instance.  For the client/service 
interface, I think we can do this with a simple clean two-phase interface 
when traditional software orchestration is in play, a one-phase interface 
when slick new software orchestration is used.  Let me outline the 
two-phase flow.  We extend the Nova API with CRUD operations on VRTs 
(top-level groups).  For example, the CREATE operation takes a definition 
of a top-level group and all its nested groups, definitions (excepting 
stuff like userdata) of all the resources (only VM instances, for now) 
contained in those groups, all the relationships among those 
groups/resources, and all the applications of policy to those groups, 
resources, and relationships.  This is a rest-style interface; the CREATE 
operation takes a definition of the thing (a top-level group and all that 
it 

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Alessandro Pilotti


On 11.10.2013, at 22:58, Rochelle.Grober 
rochelle.gro...@huawei.com wrote:

Pardon me for cutting out most of the discussion.  I’d like to summarize a bit 
here and make a proposal.

Issues:


· Driver and Plugin writers for Nova (and other Core OpenStack 
projects) have a different development focus than core developers which can 
create both delays in getting submitted code reviewed and tensions between to 
two camps.

· It is in OpenStack’s best interests to have these driver/plugin 
writers participating in OpenStack development as their contributions help make 
OpenStack a more relevant and compelling set of products in the Cloud space

· Delays of reviews are painful to driver writers causing extra 
branching, lots of duplicated work, etc.

· Nova Core reviewers are overworked and are less versed on the 
driver/plugin code, architecture, and issues, which makes them a little averse to 
performing reviews on these patches

· [developers|reviewers] aren’t appreciated

· Tempers flare

Proposed solution:
There have been a couple of solutions proposed.  I’m presenting a merged/hybrid 
solution that may work

· Create a new repository for the extra drivers:

o   Keep kvm and Xenapi in the Nova project as “reference” drivers

o   openstack/nova-extra-drivers (proposed by rbryant)

o   Have all drivers other than reference drivers in the extra-drivers project 
until they meet the maturity of the ones in Nova

o   The core reviewers for nova-extra-drivers will come from its developer 
pool.  As Alessandro pointed out, all the driver developers have more in common 
with each other than core Nova, so they should be able to do a better job of 
reviewing these patches than Nova core.  Plus, this might create some synergy 
between different drivers that will result in more commonalities across drivers 
and better stability.  This also reduces the workloads on both Nova Core 
reviewers and the driver developers/core reviewers.

The Hyper-V driver is definitely stable, production grade and feature complete 
for our targets since Grizzly; the fact that we push a lot on the blueprint 
development side is simply because we see potential in new features.

So if a nova-extra-drivers projects means a ghetto for B class drivers, my 
answer is no way, unless they miss a CI gate starting from Icehouse. :-)

Getting back to the initial topic, we have only a small bunch of bug fixes that 
need to be merged for the features that got added in Havana, which are just 
stuck in review limbo and that originated all this discussion 
(incidentally all in Nova).

I still see our work completely independent from Nova, but getting along with 
the entire community has of course a value that goes beyond the merits of our 
driver or any other single aspect of OpenStack. My suggestion is to bring this 
discussion to HK, possibly with a few beers in front and sort it out :-)


o   If you don’t feel comfortable with the last bullet, have  Nova core 
reviewers do the final approval, but only for the obvious “does this code meet 
our standards?”

The proposed solution focuses the strengths of the different developers in 
their strong areas.  Everyone will still have to stretch to do reviews and now 
there is a possibility that the developers that best understand the drivers 
might be able to advance the state of the drivers by sharing their expertise 
amongst each other.

The proposal also offloads some of the workload for Nova Core reviewers and 
places it where it is best handled.

And, no more sniping about participation.  The driver developers will 
participate more because their vested interests are communal to the project 
they are now in.  Maybe the integration of tests, etc will even happen faster 
and expand coverage faster.

And by the way,  the statistics on participation are just that: statistics.  If 
you look at rbryant’s numbers, they are different from Stackalytics which are 
different from Launchpad which are different from 
review.openstack.org.

And as an FYI.  Guess what?  Anyone working on a branch, such as stable (which 
promotes the commercial viability of OpenStack) gets ignored for their 
contributions once the branch has happened.  At least on Stackalytics.  I don’t 
know about rbryant’s numbers.

--Rocky Grober

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] VPNaaS (and services) questions

2013-10-11 Thread Paul Michali
Added several more questions inline…


PCM (Paul Michali)

MAIL p...@cisco.com
IRC   pcm_  (irc.freenode.net)
TW   @pmichali

On Oct 11, 2013, at 3:28 PM, Paul Michali p...@cisco.com wrote:

 Hi folks,
 
 I have a bunch of questions for you on VPNaaS in specific, and services in 
 general...
 
 Nachi,
 
 1) You had a bug fix to do service provider framework support for VPN (41827). 
 It was held for Icehouse. Is that pretty much a working patch? 
 2) When are you planning on reopening the review?
 
 
 Anyone,
 
 I see that there is an agent.py file for VPN that has a main() and it starts 
 up an L3 agent, specifying the VPNAgent class (in same file).
 
 3) How does this file get invoked? IOW how does the main() get invoked?
 4) I take it we can specify multiple device drivers in the config file for 
 the agent?
 
 
 Currently, for the reference device driver, the hierarchy is currently 
 DeviceDriver [ABC] -> IPsecDriver [Swan based logic] -> OpenSwanDriver [one 
 function, OpenSwan specific]. The ABC has a specific set of APIs. Wondering 
 how to incorporate provider based device drivers.
 
 5) Should I push up more general methods from IPsecDriver to DeviceDriver, so 
 that they can be reused by other providers?
 6) Should I push down the swan based methods from DeviceDriver to IPsecDriver 
 and maybe name it SwanDeviceDriver?
 
 
 I see that vpnaas.py is an extension for VPN that defines attributes and the 
 base plugin functions.
 
 7) If a provider has additional attributes (can't think of any yet), how can 
 the attribute be extended, only for that provider (or is that the wrong way 
 to handle this)?
 
 For VPN, there are several attributes, each with varying ranges of values 
 allowed. This is reflected in the CLI help messages, the database (e.g. 
 enums), and is validated (some) in the client code and in the VPN service.
 
 8) How do we provide different limits/allowed values for attributes, for a 
 specific provider (e.g. let's say the provider supports or doesn't support an 
 encryption method, or doesn't support IKE v1 or v2)?
 9) Should the code be changed not to do any client validation, and to have 
 generic help, so that different values could be provided, or is there a way 
 to customize this based on provider?
 10) If customized, is it possible to reflect the difference in allowed values 
 in the help strings (and client validation)?
 11) How do we handle the variation in the database (e.g. when enums 
 specify a fixed set of values)? Do we need to change the database to be 
 more generic (strings and ints) or do we somehow extend the database?
 
 I was wondering in general how providers can customize service features, 
 based on their capabilities (better or worse than reference). I could create 
 a Summit session topic on this, but wanted to know if this is something that 
 has already been addressed or if a different architectural approach has 
 already been defined.
 

For the RPC, I see there is an IPSEC_DRIVER_TOPIC and IPSEC_AGENT_TOPIC, each 
of which gets the host appended to the end.

12) For a provider, I'm assuming I'd create a provider specific topic name?
13) Is there any convention to the naming of the topics? 
svc_provider_type.host?
14) Is it just me, or is the agent topic name a bit misleading? 
Seems like the RPCs are all between the service and device drivers.

In Icehouse, we're talking about other protocols for VPN, like SSL-VPN, 
MPLS/BGP.

15) Has anyone thought about the class naming for the service/device drivers, 
and the RPC topics, or will it be a totally separate hierarchy?


On the underlying provider device, there could be a case where a mapping is 
needed. For example, the VPN connection UUID may need to be mapped to a Tunnel 
number.

16) Is there any precedent for persisting this type of information in 
OpenStack?
17) Would the device driver send that back up to the plugin, or is there some 
way to persist at the driver level?


I'm probably going to use a REST API in the device driver to talk to the 
provider process (in a VM).

18) Are there any libraries in Neutron for REST API client that I can use 
(versus rolling my own)?


Thanks!

PCM


 
 Regards,
 
 
 PCM (Paul Michali)
 
 MAIL p...@cisco.com
 IRC   pcm_  (irc.freenode.net)
 TW   @pmichali
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] VPNaaS questions

2013-10-11 Thread Nachi Ueno
Hi Paul

2013/10/11 Paul Michali p...@cisco.com:
 Hi folks,

 I have a bunch of questions for you on VPNaaS in specific, and services in
 general...

 Nachi,

 1) You had a bug fix to do service provider framework support for VPN
 (41827). It was held for Icehouse. Is that pretty much a working patch?
 2) When are you planning on reopening the review?

I'm not sure it will work without rebase.
I'll rebase and test it again next week.


 Anyone,

 I see that there is an agent.py file for VPN that has a main() and it starts
 up an L3 agent, specifying the VPNAgent class (in same file).

 3) How does this file get invoked? IOW how does the main() get invoked?

We should use the neutron-vpn-agent command to run the VPN agent.
This command invokes the VPNAgent class.
It is defined in setup.cfg:

https://github.com/openstack/neutron/blob/master/setup.cfg#L98
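For reference, the console_scripts entry looks roughly like this
(paraphrased; see the link above for the exact line):

[entry_points]
console_scripts =
    ...
    neutron-vpn-agent = neutron.services.vpn.agent:main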

 4) I take it we can specify multiple device drivers in the config file for
 the agent?

Yes.


 Currently, for the reference device driver, the hierarchy is currently
 DeviceDriver [ABC] -> IPsecDriver [Swan based logic] -> OpenSwanDriver [one
 function, OpenSwan specific]. The ABC has a specific set of APIs. Wondering
 how to incorporate provider based device drivers.

It was designed when we knew of only one *Swan-based driver,
so it won't fit other device drivers.
If so, you can also extend or modify DeviceDriver.

 5) Should I push up more general methods from IPsecDriver to DeviceDriver,
 so that they can be reused by other providers?

That would be great.

 6) Should I push down the swan based methods from DeviceDriver to
 IPsecDriver and maybe name it SwanDeviceDriver?

yes


 I see that vpnaas.py is an extension for VPN that defines attributes and the
 base plugin functions.

 7) If a provider has additional attributes (can't think of any yet), how can
 the attribute be extended, only for that provider (or is that the wrong way
 to handle this)?

You can extend the existing extension.

 For VPN, there are several attributes, each with varying ranges of values
 allowed. This is reflected in the CLI help messages, the database (e.g.
 enums), and is validated (some) in the client code and in the VPN service.

Changing existing attributes may be challenging on the client side,
but let's discuss this with a concrete example.

 8) How do we provide different limits/allowed values for attributes, for a
 specific provider (e.g. let's say the provider supports or doesn't support
 an encryption method, or doesn't support IKE v1 or v2)?

The driver can throw an unsupported exception (it is not defined yet).
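Something along these lines, purely as an illustration (the class name is
made up; only the base class is real):

from neutron.common import exceptions

class IKEVersionNotSupported(exceptions.NeutronException):
    # illustrative only: a provider driver could raise this when it
    # cannot honor a requested attribute value
    message = ("IKE version %(version)s is not supported "
               "by this service provider")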

 9) Should the code be changed not to do any client validation, and to have
 generic help, so that different values could be provided, or is there a way
 to customize this based on provider?

That could be one way.

 10) If customized, is it possible to reflect the difference in allowed
 values in the help strings (and client validation)?

Maybe the server side can tell the client hey, I'm supporting this set of values.
Then the client can use it as the help string.
# This change may need a bp.

 11) How do we handle the variation in the database (e.g. when enums
 specify a fixed set of values)? Do we need to change the database to be
 more generic (strings and ints) or do we somehow extend the database?

More than one driver will use the same DB,
so I'm +1 for a generic DB structure if it is really needed.

 I was wondering in general how providers can customize service features,
 based on their capabilities (better or worse than reference). I could create
 a Summit session topic on this, but wanted to know if this is something that
 has already been addressed or if a different architectural approach has
 already been defined.

AFAIK, that's a new challenge.
I think the LB team and the FW team are facing the same issue.

Best
Nachi


 Regards,


 PCM (Paul Michali)

 MAIL p...@cisco.com
 IRC   pcm_  (irc.freenode.net)
 TW   @pmichali


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Extraroute and router extensions

2013-10-11 Thread Nachi Ueno
Hi Artem

Thank you for pointing this out.
I'm still thinking about the design. Once I have the draft, I'll share
it in the bp and here.

Best
Nachi

2013/10/10 Artem Dmytrenko nexton...@yahoo.com:
 Hi Rudra, Nachi.

 Glad to see this discussion on the mailing list! The ExtraRoute routes are
 fairly
 limited and it would be great to be able to store more complete routing
 information in Neutron. I've submitted a blueprint proposing expanding
 ExtraRoute
 parameters to include more information (extended-route-params). But it still
 has a problem where routes are stored in a list and are not indexed. So an
 update
 could be painful.

 Could you share what attributes would you like to see in your RIB API?

 Thanks!
 Artem

 P.S.
  I'm OpenStack newbie, looking forward to learning from and working with
 you!

Hi Rudra

ExtraRoute bp was designed for adding some extra routing for the router.
The spec is very handy for simple and small use cases.
However it won't fit large use cases, because it takes all routes in a JSON list.
# It means we need to send the full route list for updating.

As Salvatore suggests, we need to keep backward compatibility.
so, IMO, we should create Routing table extension.

I'm thinking about this in the context of L3VPN (MPLS) extension.
My Idea is to have a RIB API in the Neutron.
For vpnv4 routes it may have RT or RDs.

Best
Nachi


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] baremetal provisioning - issue with image and boot

2013-10-11 Thread Ravikanth Samprathi
Hi
I am new to baremetal provisioning with openstack.
I have followed this link for the setup:
https://wiki.openstack.org/wiki/Baremetal

I have followed instructions from the above link and generated the vmlinuz
and ramdisk images.  I loaded these vmlinuz and ramdisk into the baremetal
node (server) through dnsmasq and PXE.

Few questions:
1 The baremetal node has an initramfs, but the interfaces are not up and do
not have any IP address.   Does this mean the kernel
and ramdisk that I extracted using disk-image-create as specified in the
above link are wrong?   What should I see as the image that boots up?
2 The baremetal agent does not seem to be present in the filesystem on the
baremetal node. How should this get loaded/downloaded into the node?

Then i ran this command on the openstack controller:

nova boot --flavor my-baremetal-flavor --image my-image my-baremetal-node

This produces the following error:

ERROR: Quota exceeded for instances: Requested 1, but already used 10
of 10 instances (HTTP 413) (Request-ID: req-xxx)

Questions:
1 What should i do to get this working?

2 Is the image that is currently present in the baremetal server correct?

Greatly appreciate any help and pointers.

Thanks
Ravi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] baremetal provisioning - issue with image and boot

2013-10-11 Thread Ravikanth Samprathi
Hi
I am past the quota issue; I increased the quota for the project.
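(For anyone hitting the same error, something along the lines of
    nova quota-update --instances 20 <tenant-id>
should raise the limit; adjust the number as needed.)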

Where should I get the kernel and ramdisk for the tinycore bootstrap?
And how do I download the baremetal agent to the baremetal node?

Now when I do nova boot I see the following issue:



All nova services are up and working.

Thanks
Ravi


On Fri, Oct 11, 2013 at 3:23 PM, Ravikanth Samprathi rsamp...@gmail.comwrote:

 Hi
 I am new to baremetal provisioning with openstack.
 I have followed this link for the setup:
 https://wiki.openstack.org/wiki/Baremetal

 I have followed instructions from the above link and generated the vmlinuz
 and ramdisk images.  I loaded these vmlinuz and ramdisk into the baremetal
 node (server) through dnsmasq and PXE.

 Few questions:
 1 The baremetal node has initramfs but the interfaces are not up and does
 not have any ip address for the interfaces.   Does this mean, the kernel
 and ramdisk that i extracted using disk-image-create as specified from the
 above link is wrong?   What should I see as the image that boots up?
 2 The baremetal agent does not seem to be present in the filesystem on
 the baremetal node. How should this get loaded/downloaded into the node?

 Then i ran this command on the openstack controller:

 nova boot --flavor my-baremetal-flavor --image my-image my-baremetal-node

 This produces the following error:

 ERROR: Quota exceeded for instances: Requested 1, but already used 10 of 10 
 instances (HTTP 413) (Request-ID: req-xxx)


 Questions:
 1 What should i do to get this working?

 2 Is the image that is currently present in the baremetal server correct?

 Greatly appreciate any help and pointers.


 Thanks
 Ravi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Russell Bryant
On 10/11/2013 05:09 PM, Alessandro Pilotti wrote:
 My suggestion is to bring this discussion to HK, possibly with a few beers
 in front and sort it out :-)

Sounds like a good plan to me!

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Rochelle.Grober
When you do, have a beer for me.  I'll be looking for what you guys come up 
with.

And I don't think a separate project would be a second class project.  The 
driver guys could be so successful that all the drivers end up there and the 
interfaces between Nova and the drivers get *real* clean and fast.

--Rocky Grober

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com] 
Sent: Friday, October 11, 2013 3:59 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Hyper-V] Havana status

On 10/11/2013 05:09 PM, Alessandro Pilotti wrote:
 My suggestion is to bring this discussion to HK, possibly with a few beers
 in front and sort it out :-)

Sounds like a good plan to me!

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] odd behaviour from sqlalchemy

2013-10-11 Thread Chris Friesen

Hi,

I'm using grizzly with sqlalchemy 0.7.9.

I'm seeing some funny behaviour related to the automatic update of the 
updated_at column for the Service class in the sqlalchemy model.


I added a new column to the Service class, and I want to be able to 
update that column without triggering the automatic update of the 
updated_at field.


While trying to do this, I noticed the following behaviour.  If I do

values = {'updated_at': new_value}
self.service_update(context, service, values)

this sets the updated_at column to new_value as expected.  However, if 
I do


values = {'updated_at': new_value, 'other_key': other_value}
self.service_update(context, service, values)

then the other key is set as expected, but updated_at gets 
auto-updated to the current timestamp.


The onupdate description in the sqlalchemy docs indicates that it 
will be invoked upon update if this column is not present in the SET 
clause of the update.  Anyone know why it's being invoked even though 
I'm passing in an explicit value?



On a slightly different note, does anyone have a good way to update a 
column in the Service class without triggering the updated_at field to 
be changed?  Is there a way to tell the database set this column to 
this value, and set the updated_at column to its current value?  I 
don't want to read the updated_at value and then write it back in 
another operation since that leads to a potential race with other 
entities accessing the database.
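Something like this untested sketch is what I'm imagining (names are 
approximate, using nova's usual session/model helpers):

from sqlalchemy import literal_column

# naming updated_at explicitly in the SET clause should keep the
# onupdate default from firing, while leaving its value unchanged
session.query(models.Service).\
    filter_by(id=service_id).\
    update({'other_key': other_value,
            'updated_at': literal_column('updated_at')},
           synchronize_session=False)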


Thanks,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] baremetal nova boot issue

2013-10-11 Thread Ravikanth Samprathi
Hi
I am trying to issue the boot command to provision a baremetal server.  But I
see the following error:

Also, where can I get the bootstrap kernel and ramdisk images to boot into
the baremetal node?  And how do I get the baremetal agent installed in the
baremetal node?

command:
=
root@os:/home/versa# nova boot --flavor 6 --image
39f4fd3b-15cc-4810-a808-e2c4764ba657 bm
ERROR: The server has either erred or is incapable of performing the
requested operation. (HTTP 500) (Request-ID:
req-c463e02b-7c35-448e-b0a7-97d1c02c6088)

The log is here:
==
BATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIEwVVbnNldDEOMAwGA1UEBxMFVW5zZXQxDjAMBgNVBAoTBVVuc2V0MRgwFgYDVQQDEw93d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEgYCEx607Bw1UBm9A87zNIcwDj5VsPwOrLmlq2EG3uWRfyjNoqSZo0jnK-VskJ29hAq1lPZsqe5bnhacWuUUr0nW+aAe-39pcGg9+lXPMOFQEjtRYdwUzhwMz05qm1yWjrdzXl0Hofv7ncdggF8SZbyBG0O68CRwzXRFXeSpGDrHeFw==

INFO (connectionpool:191) Starting new HTTP connection (1): 10.40.0.99
DEBUG (connectionpool:283) GET
/v2/8a34123d83824f3ea52527c5a28ad81e/servers/36e71635-5f73-4895-87a9-6f1082e8cb6a
HTTP/1.1 500 128
RESP: [500] {'date': 'Fri, 11 Oct 2013 23:43:43 GMT', 'content-length':
'128', 'content-type': 'application/json; charset=UTF-8',
'x-compute-request-id': 'req-2fff698c-ddf2-47f1-ae82-47fb0dc67d41'}
RESP BODY: {computeFault: {message: The server has either erred or is
incapable of performing the requested operation., code: 500}}

DEBUG (shell:768) The server has either erred or is incapable of performing
the requested operation. (HTTP 500) (Request-ID:
req-2fff698c-ddf2-47f1-ae82-47fb0dc67d41)
Traceback (most recent call last):
  File /usr/lib/python2.7/dist-packages/novaclient/shell.py, line 765, in
main
OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
  File /usr/lib/python2.7/dist-packages/novaclient/shell.py, line 701, in
main
args.func(self.cs, args)
  File /usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py, line
286, in do_boot
server = cs.servers.get(info['id'])
  File /usr/lib/python2.7/dist-packages/novaclient/v1_1/servers.py, line
350, in get
return self._get(/servers/%s % base.getid(server), server)
  File /usr/lib/python2.7/dist-packages/novaclient/base.py, line 140, in
_get
_resp, body = self.api.client.get(url)
  File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 230,
in get
return self._cs_request(url, 'GET', **kwargs)
  File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 217,
in _cs_request
**kwargs)
  File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 199,
in _time_request
resp, body = self.request(url, method, **kwargs)
  File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 193,
in request
raise exceptions.from_response(resp, body, url, method)
ClientException: The server has either erred or is incapable of performing
the requested operation. (HTTP 500) (Request-ID:
req-2fff698c-ddf2-47f1-ae82-47fb0dc67d41)
ERROR: The server has either erred or is incapable of performing the
requested operation. (HTTP 500) (Request-ID:
req-2fff698c-ddf2-47f1-ae82-47fb0dc67d41)

Appreciate any  help.
Thanks
Ravi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] baremetal nova boot issue

2013-10-11 Thread Joe Gordon
On Fri, Oct 11, 2013 at 5:17 PM, Ravikanth Samprathi rsamp...@gmail.comwrote:

 Thanks Joe.

 Also may I please request the info about which kernel and ramdisk image to
 load and how to get the baremetal agent loaded into the baremetal server?

 The nova-api.log is here:
 ==
 2013-10-11 16:43:43.514 ERROR nova.api.openstack
 [req-2fff698c-ddf2-47f1-ae82-47fb0dc67d41 251bd0a9388a477b9c24c99b223
 e7b2a 8a34123d83824f3ea52527c5a28ad81e] Caught error: [Errno 111]
 ECONNREFUSED
 3746 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack Traceback (most
 recent call last):
 3747 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/nova/api/openstack/__in it__.py,
 line 81, in __call__
 3748 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack return
 req.get_response(self.application)
 3749 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/webob/request.py, line  1296, in send
 3750 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack
 application, catch_exc_info=False)
 3751 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/webob/request.py, line  1260, in
 call_application
 3752 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack app_iter =
 application(self.environ, start_response)
 3753 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/webob/dec.py, line 144 , in __call__
 3754 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack return
 resp(environ, start_response)
 3755 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/keystoneclient/middlewa
 re/auth_token.py, line 450, in __call__
 3756 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack return
 self.app(env, start_response)
 3757 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/webob/dec.py, line 144 , in __call__
 3758 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack return
 resp(environ, start_response)
 3759 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/webob/dec.py, line 144 , in __call__
 3760 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack return
 resp(environ, start_response)
 3761 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/webob/dec.py, line 144 , in __call__
 3762 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack return
 resp(environ, start_response)
 3763 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/routes/middleware.py,  line 131, in
 __call__
 3764 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack response =
 self.app(environ, start_response)
 3765 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/webob/dec.py, line 144 , in __call__
 3766 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack return
 resp(environ, start_response)
 3767 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/webob/dec.py, line 130 , in __call__
 3768 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack resp =
 self.call_func(req, *args, **self.kwargs)
 3769 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/webob/dec.py, line 195 , in call_func
 3770 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack return
 self.func(req, *args, **kwargs)
 3771 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi .py, line
 890, in __call__
 3772 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack
 content_type, body, accept)
 3773 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi .py, line
 969, in _process_stack
 3774 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack request,
 action_args)
 3775 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi .py, line
 863, in post_process_extensions
 3776 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack
 **action_args)
 3777 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/nova/api/openstack/comp
 ute/contrib/security_groups.py, line 526, in show
 3778 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack return
 self._show(req, resp_obj)
 3779 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 /usr/lib/python2.7/dist-packages/nova/api/openstack/comp
 ute/contrib/security_groups.py, line 522, in _show
 3780 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack
 self._extend_servers(req, [resp_obj.obj['server']])
 3781 2013-10-11 16:43:43.514 4034 TRACE nova.api.openstack   File
 

Re: [openstack-dev] [nova][Libvirt] Disabling nova-compute when a connection to libvirt is broken.

2013-10-11 Thread Joe Gordon
On Thu, Oct 10, 2013 at 4:47 AM, Vladik Romanovsky 
vladik.romanov...@enovance.com wrote:

 Hello everyone,

 I have been recently working on a migration bug in nova (Bug #1233184).

 I noticed that the compute service remains available, even if the connection to
 libvirt is broken.
 I thought that it might be better to disable the service (using
 conductor.manager.update_service()) and resume it once it's connected again
 (maybe keep the host_stats periodic task running or create a dedicated
 one; once it succeeds, the service will become available again).
 This way new VMs won't be scheduled or migrated to the disconnected host.

 Any thoughts on that?


Sounds reasonable to me. If we can't reach libvirt there isn't much that
nova-compute can / should do.


 Is anyone already working on that?

 Thank you,
 Vladik

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Extraroute and router extensions

2013-10-11 Thread Artem Dmytrenko
Great!

Looking forward to seeing more of this discussion. I've mentioned that I 
submitted a blueprint request extending the ExtraRoute extension to include more 
routing attributes. It's located here: 
https://blueprints.launchpad.net/neutron/+spec/extended-route-params/ and it 
contains an editable google doc. I don't know if it is of much use as it talks 
heavily about the extra route extension. Please feel free to copy any part you 
would find useful (e.g. a screenshot), and if you would appreciate any help with 
your blueprint I'd be very glad to pitch in.


Have a good weekend.

Sincerely,
Artem Dmytrenko


On Friday, October 11, 2013 3:02 PM, Nachi Ueno na...@ntti3.com wrote:
 
Hi Artem

Thank you for pointing this out.
I'm still thinking about the design. Once I have the draft, I'll share
it in the bp and here.

Best
Nachi


2013/10/10 Artem Dmytrenko nexton...@yahoo.com:
 Hi Rudra, Nachi.

 Glad to see this discussion on the mailing list! The ExtraRoute routes are
 fairly
 limited and it would be great to be able to store more complete routing
 information in Neutron. I've submitted a blueprint proposing expanding
 ExtraRoute
 parameters to include more information (extended-route-params). But it still
 has a problem where routes are stored in a list and are not indexed. So an
 update
 could be painful.

 Could you share what attributes would you like to see in your RIB API?

 Thanks!
 Artem

 P.S.
  I'm OpenStack newbie, looking forward to learning from and working with
 you!

Hi Rudra

ExtraRoute bp was designed for adding some extra routing for the router.
The spec is very handy for simple and small use cases.
However it won't fit large use cases, because it takes all routes in a JSON list.
# It means we need to send the full route list for updating.

As Salvatore suggests, we need to keep backward compatibility.
so, IMO, we should create Routing table extension.

I'm thinking about this in the context of L3VPN (MPLS) extension.
My Idea is to have a RIB API in the Neutron.
For vpnv4 routes it may have RT or RDs.

Best
Nachi


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-11 Thread Stan Lagun
On Fri, Oct 11, 2013 at 8:40 PM, Clint Byrum cl...@fewbar.com wrote:


  2. Ability to provide arbitrary input values for the config
 We already have that, there's a free-form json document called metadata
 attached to every resource. Or maybe I missed what you mean here. The
 new capability that is in the works that will make that better is to
 have multiple reusable metadata blocks referenced on one instance.


Yes, that's what I meant - metadata attached to configs rather than
instances.



   3. Ability to return arbitrary (JSON-compatible) data structure from
 config
  application and use attributes of that structure as an input for other
  configs

 Note that I'd like to see more use cases specified for this ability. The
 random string generator that Steve Baker has put up should handle most
 cases where you just need passwords. Generated key sharing might best
 be deferred to something like Barbican which does a lot more than Heat
 to try and keep your secrets safe.


Murano's execution plans that are sent to Murano Agent are similar to
Python functions in that they have input and output. The output may be a
script exit code, captured stdout/stderr, a value returned from a
PowerShell/Python function, etc. It is a rare case where the output from
one execution plan is required as an input for another plan, but it happens.
For example, execution plan 1 created a network interface on a VM with DHCP
enabled, and execution plan 2 (which may be executed on another machine)
requires the IP address obtained on that interface. In this case the IP address
would be the returned value.
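To illustrate with made-up data, plan 1 might return something like

    {"exit_code": 0, "result": {"ip_address": "10.0.0.5"}}

and plan 2 would then take result.ip_address as one of its inputs.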


  4. Ability to provide config body that is an input to Murano Agent of
  arbitrary size

 Isn't this the same as 2?


Not exactly the same, but the config body can be an attribute of the config's
metadata with a special reserved name.



 I think it confirms that we're heading toward consensus on where to draw
 the software config vs. infrastructure orchestration line. That is very
 exciting. :)


This is indeed very promising. If Murano can do all the orchestration via
HOT templates and Heat doesn't deal with service metadata that is out of
scope of HOT, we can eliminate most of the overlap between the projects and
actually complement each other. The only thing I'm concerned about in this
context is
https://wiki.openstack.org/wiki/Heat/DSL2 and
https://wiki.openstack.org/wiki/Heat/Open_API as this is very similar to
what Murano does


-- 
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev