Re: [openstack-dev] [All] Maintenance mode in OpenStack during patching/upgrades

2014-09-09 Thread Tim Bell
It would be great if each OpenStack component could provide a maintenance mode 
like this… there was some work being considered on Cells 
https://blueprints.launchpad.net/nova/+spec/disable-child-cell-support which 
would have allowed parts of Nova to indicate they were in maintenance.

Something generic would be very useful. Some operators have asked for 
‘read-only’ modes also where query is OK but update is not permitted.

Tim

From: Mike Scherbakov [mailto:mscherba...@mirantis.com]
Sent: 09 September 2014 23:20
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [All] Maintenance mode in OpenStack during 
patching/upgrades

Sergii, Clint,
to rephrase what you are saying: there might be situations when our
OpenStack APIs will not be responding, simply because services are down
for upgrade.
Do we want to support that somehow? For example, if we know that Nova is going
to be down, can we respond with HTTP 503 and an appropriate Retry-After time in
the header?

The idea is not simply to deny or hang client requests, but to tell them "we
are in maintenance mode, retry in X seconds".
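A minimal sketch of that behaviour as a stand-alone WSGI app (the header set and the 120-second window are illustrative assumptions, not an existing OpenStack middleware):

```python
# assumed upgrade window, in seconds
MAINTENANCE_RETRY_AFTER = 120

def maintenance_app(environ, start_response):
    # answer every request with 503 plus a Retry-After hint,
    # instead of hanging or refusing the connection
    body = b'{"error": "service in maintenance mode"}'
    start_response('503 Service Unavailable', [
        ('Content-Type', 'application/json'),
        ('Content-Length', str(len(body))),
        ('Retry-After', str(MAINTENANCE_RETRY_AFTER)),
    ])
    return [body]

# exercise it without a real server
collected = {}

def fake_start_response(status, headers):
    collected['status'] = status
    collected['headers'] = dict(headers)

result = maintenance_app({}, fake_start_response)
assert collected['status'] == '503 Service Unavailable'
assert collected['headers']['Retry-After'] == '120'
```

A deployer could drop something like this in front of an API endpoint for the duration of the upgrade, so clients get a machine-readable retry hint.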

> Turbo Hipster was added to the gate
Great idea - I think we should use it in Fuel too.

> You probably would want 'nova host-servers-migrate '
Yeah, for migrations - but as far as I understand, it doesn't help with
disabling the host in the scheduler; there is still a chance that some
workloads will be scheduled to the host.


On Tue, Sep 9, 2014 at 6:02 PM, Clint Byrum <cl...@fewbar.com> wrote:
Excerpts from Mike Scherbakov's message of 2014-09-09 00:35:09 -0700:
> Hi all,
> please see the original email from Dmitry below. I've modified the
> subject to bring a larger audience to the issue.
>
> I'd like to split the issue into two parts:
>
>1. Maintenance mode for OpenStack controllers in HA mode (HA-ed
>Keystone, Glance, etc.)
>2. Maintenance mode for OpenStack computes/storage nodes (no HA)
>
> For the first category, we might not need a maintenance mode at all. For
> example, if we apply patching/upgrades node by node to a 3-node HA cluster,
> 2 nodes will serve requests normally. Is that possible for our HA solutions
> in Fuel, TripleO, and other frameworks?

You may have a broken cloud if you are pushing out an update that
requires a new schema. Some services are better than others about
handling old schemas, and can be upgraded before doing schema upgrades.
But most of the time you have to take at least a brief downtime:

 * turn off DB accessing services
 * update code
 * run db migration
 * turn on DB accessing services
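The four steps above can be sketched as a plan builder; the service names and commands below are placeholders, not real deployment tooling:

```python
def brief_downtime_plan(db_services, migrate_cmd):
    """Order matters: everything that touches the DB stops before the
    schema migration runs, and restarts only afterwards."""
    plan = ['stop %s' % s for s in db_services]     # turn off DB-accessing services
    plan.append('update code')                      # stand-in for deploying new packages
    plan.append(migrate_cmd)                        # run the db migration
    plan += ['start %s' % s for s in db_services]   # turn services back on
    return plan

plan = brief_downtime_plan(['nova-api', 'nova-conductor'],
                           'nova-manage db sync')
# no service that reads the schema is up while the migration runs
assert plan.index('nova-manage db sync') > plan.index('stop nova-conductor')
assert plan.index('start nova-api') > plan.index('nova-manage db sync')
```

The downtime window is exactly the span between the last "stop" and the first "start"; services tolerant of old schemas can be moved out of that window.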

It is for this very reason, I believe, that Turbo Hipster was added to
the gate, so that deployers running against the upstream master branches
can have a chance at performing these upgrades in a reasonable amount of
time.

>
> For the second category, can't we simply do "nova-manage service disable...",
> so the scheduler will stop scheduling new workloads on the particular host
> which we want to do maintenance on?
>

You probably would want 'nova host-servers-migrate ' at that
point, assuming you have migration set up.

http://docs.openstack.org/user-guide/content/novaclient_commands.html
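Taken together, the two suggestions amount to a drain sequence: first take the host out of scheduling, then migrate what is already there. A sketch of that ordering, exercised with a stand-in client object since there is no cloud to talk to (the call names mirror python-novaclient's v2 API, but treat the details as assumptions):

```python
calls = []

class FakeServer:
    def live_migrate(self):
        calls.append('migrate')

class FakeNova:
    class services:
        @staticmethod
        def disable(host, binary):
            calls.append('disable %s/%s' % (host, binary))
    class servers:
        @staticmethod
        def list(search_opts=None):
            return [FakeServer(), FakeServer()]

def drain_host(nova, host):
    # 1. take the host out of scheduling so nothing new lands on it
    nova.services.disable(host, 'nova-compute')
    # 2. move the existing workloads elsewhere (assumes migration is set up)
    for server in nova.servers.list(search_opts={'host': host}):
        server.live_migrate()

drain_host(FakeNova, 'compute-01')
assert calls[0] == 'disable compute-01/nova-compute'
assert calls.count('migrate') == 2
```

The point of the ordering is that disabling the service first closes the race where the scheduler places a new instance on the host mid-drain.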

> On Thu, Aug 28, 2014 at 6:44 PM, Dmitry Pyzhov <dpyz...@mirantis.com> wrote:
>
> > All,
> >
> > I'm not sure if it deserves to be mentioned in our documentation, as this
> > seems to be common practice. If an administrator wants to patch his
> > environment, he should be prepared for a temporary downtime of OpenStack
> > services. And he should plan to perform patching in advance: choose a time
> > with minimal load and warn users about possible interruptions of service
> > availability.
> >
> > Our current implementation of patching does not protect from downtime
> > during the patching procedure. HA deployments seem to be more or less
> > stable. But it looks like it is possible to schedule an action on a compute
> > node and get an error because of service restart. Deployments with one
> > controller... well, you won’t be able to use your cluster until the
> > patching is finished. There is no way to get rid of downtime here.
> >
> > As I understand, we can get rid of possible issues with computes in HA.
> > But it will require migration of instances and stopping of nova-compute
> > service before patching. And it will make the overall patching procedure
> > much longer. Do we want to investigate this process?
> >
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Mike Scherbakov
#mihgen
___

Re: [openstack-dev] [Heat] convergence flow diagrams

2014-09-09 Thread Clint Byrum
Excerpts from Angus Salkeld's message of 2014-09-08 17:15:04 -0700:
> On Mon, Sep 8, 2014 at 11:22 PM, Tyagi, Ishant  wrote:
> 
> >  Hi All,
> >
> >
> >
> > As per the heat mid cycle meetup whiteboard, we have created the
> > flowchart and sequence diagram for the convergence . Can you please review
> > these diagrams and provide your feedback?
> >
> >
> >
> > https://www.dropbox.com/sh/i8qbjtgfdxn4zx4/AAC6J-Nps8J12TzfuCut49ioa?dl=0
> >
> >
> Great! Good to see something.
> 
> 
> I was expecting something like:
> engine ~= like nova-conductor (it's the only process that talks to the db -
> make upgrading easier)

This complicates things immensely. The engine can just be the workers
too, we're just not going to do the observing and converging in the same
greenthread.

> observer - purely gets the actual state/properties and writes then to the
> db (via engine)

If you look closely at the diagrams, that's what it does.

> worker - has a "job" queue and grinds away at running those (resource
> actions)
> 

The convergence worker is just another set of RPC API calls that split
out work into isolated chunks.

> Then engine then "triggers" on differences on goal vs. actual state and
> create a job and sends it to the job queue.

Remember, we're not targeting continuous convergence yet. Just
convergence when we ask for things.

> - so, on create it sees there is no actual state so it sends a create job
> for the first resource to the worker queue

The diagram shows that, but confusingly says "is difference = 1". In
the original whiteboard this is 'if diff = DNE'. DNE stands for Does
Not Exist.
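The check being described can be put in miniature like this (the DNE sentinel and action names are illustrative, not Heat code):

```python
# sketch of the goal-vs-actual comparison discussed above; DNE
# ("Does Not Exist") marks a resource with no observed state yet.
DNE = object()

def next_action(goal, actual):
    if actual is DNE:
        return 'create'        # no actual state: send a create job
    if goal != actual:
        return 'update'        # states differ: converge toward the goal
    return None                # already converged

assert next_action({'size': 1}, DNE) == 'create'
assert next_action({'size': 2}, {'size': 1}) == 'update'
assert next_action({'size': 1}, {'size': 1}) is None
```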

> - when the observer writes the new state for that resource it triggers the
> next resource create in the dependency tree.

Not the next resource create, but the next resource convergence. And not
just one either. I think one of the graphs was forgotten, it goes like
this:

https://www.dropbox.com/s/1h2ee151iriv4i1/resolve_graph.svg?dl=0

That is what we called "return happy" because we were at hour 9 or so of
talking and we got a bit punchy. I've renamed it 'resolve_graph'.

> - like any system that relies on notifications we need timeouts and each
> stack needs a periodic "notification" to make sure


This is, again, the continuous observer model.

https://review.openstack.org/#/c/100012/

>   that progress is being made, or notify the user that no progress is being
> made.
> 
> One question about the observer (in either my setup or the one in the
> diagram).
> - If we are relying on rpc notifications all the observer processes will
> receive a copy of the same notification

Please read that spec. We talk about a filter.



Re: [openstack-dev] [nova][neutron] default allow security group

2014-09-09 Thread Baohua Yang
I'm not arguing whether it's suitable to implement this with security-group
commands.

To solve the problem, I guess the 20 rules are not necessary at all.

You can just add one rule like the following to allow all traffic going
out of the VM:

iptables -I neutron-openvswi-o9LETTERID -j RETURN

where the ID part is the first 9 characters of the VM's attached port ID.
This rule bypasses all security filtering for the outgoing traffic.
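For illustration, deriving that chain name can be sketched as below (the "neutron-openvswi-o" prefix and the 9-character truncation follow the message above; the exact truncation length can differ between Neutron versions, so verify against your deployment):

```python
def egress_chain(port_id, prefix='neutron-openvswi-o', n=9):
    # neutron names the per-port egress chain after a truncated port id
    return prefix + port_id[:n]

# hypothetical port UUID, purely for demonstration
chain = egress_chain('3674c089-99c2-4ce7-a16e-a45d26a2cb0e')
assert chain == 'neutron-openvswi-o3674c089-'
```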

On Fri, Sep 5, 2014 at 11:27 PM, Monty Taylor  wrote:

> Hi!
>
> I've decided that as I have problems with OpenStack while using it in the
> service of Infra, I'm going to just start spamming the list.
>
> Please make something like this:
>
> neutron security-group-create default --allow-every-damn-thing
>
> Right now, to make security groups get the hell out of our way because
> they do not provide us any value because we manage our own iptables, it
> takes adding something like 20 rules.
>
> 15:24:05  clarkb | one each for ingress and egress udp tcp over
> ipv4 then ipv6 and finaly icmp
>
> That may be great for someone using my-first-server-pony, but for me, I
> know how the internet works, and when I ask for a server, I want it to just
> work.
>
> Now, I know, I know - the DEPLOYER can make decisions blah blah blah.
>
> BS
>
> If OpenStack is going to let my deployer make the absolutely asinine
> decision that all of my network traffic should be blocked by default, it
> should give me, the USER, a get-out-of-jail-free card.
>
> kthxbai
>



-- 
Best wishes!
Baohua


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-09 Thread Clint Byrum
Excerpts from Samuel Merritt's message of 2014-09-09 19:04:58 -0700:
> On 9/9/14, 4:47 PM, Devananda van der Veen wrote:
> > On Tue, Sep 9, 2014 at 4:12 PM, Samuel Merritt  wrote:
> >> On 9/9/14, 12:03 PM, Monty Taylor wrote:
> > [snip]
> >>> So which is it? Because it sounds like to me it's a thing that actually
> >>> does NOT need to diverge in technology in any way, but that I've been
> >>> told that it needs to diverge because it's delivering a different set of
> >>> features - and I'm pretty sure if it _is_ the thing that needs to
> >>> diverge in technology because of its feature set, then it's a thing I
> >>> don't think we should be implementing in python in OpenStack because it
> >>> already exists and it's called AMQP.
> >>
> >>
> >> Whether Zaqar is more like AMQP or more like email is a really strange
> >> metric to use for considering its inclusion.
> >>
> >
> > I don't find this strange at all -- I had been judging the technical
> > merits of Zaqar (ex-Marconi) for the last ~18 months based on the
> > understanding that it aimed to provide Queueing-as-a-Service, and
> > found its delivery of that to be lacking on technical grounds. The
> > implementation did not meet my view of what a queue service should
> > provide; it is based on some serious antipatterns (storing a queue in
> > an RDBMS is probably the most obvious); and in fact, it isn't even
> > queue-like in the access patterns enabled by the REST API (random
> > access to a set != a queue). That was the basis for a large part of my
> > objections to the project over time, and a source of frustration for
> > me as the developers justified many of their positions rather than
> > accepted feedback and changed course during the incubation period. The
> > reason for this seems clear now...
> >
> > As was pointed out in the TC meeting today, Zaqar is (was?) actually
> > aiming to provide Messaging-as-a-Service -- not queueing as a service!
> > This is another way of saying "it's more like email and less like
> > AMQP", which means my but-its-not-a-queue objection to the project's
> > graduation is irrelevant, and I need to rethink about all my previous
> > assessments of the project.
> >
> > The questions now before us are:
> > - should OpenStack include, in the integrated release, a
> > messaging-as-a-service component?
> 
> I certainly think so. I've worked on a few reasonable-scale web 
> applications, and they all followed the same pattern: HTTP app servers 
> serving requests quickly, background workers for long-running tasks, and 
> some sort of durable message-broker/queue-server thing for conveying 
> work from the first to the second.
> 
> A quick straw poll of my nearby coworkers shows that every non-trivial 
> web application that they've worked on in the last decade follows the 
> same pattern.
> 
> While not *every* application needs such a thing, web apps are quite 
> common these days, and Zaqar satisfies one of their big requirements. 
> Not only that, it does so in a way that requires much less babysitting 
> than run-your-own-broker does.
> 

I think you missed the distinction.

What you describe is _message queueing_. Not messaging. The difference
being the durability and addressability of each message.

As Devananda pointed out, a queue doesn't allow addressing the items in
the queue directly. You can generally only send, receive, ACK, or NACK.



[openstack-dev] [swiftclient]Implement "swift service-list" in python-swiftclient

2014-09-09 Thread Ashish Chandra
Hi,

In the Horizon dashboard, under Admin -> System Info, we have service lists
for Compute and Block Storage. I have filed a blueprint to populate the Swift
services there. But while going through the implementation details of the
Compute and Block Storage service lists, I found that the details there are
populated through API calls to python-novaclient and python-cinderclient
respectively, which in turn use "nova service-list" and "cinder service-list"
to return the details.

Whereas no such method is implemented in python-swiftclient to get the list
of services.

So my question is:
Do we have plans to include "swift service-list" in python-swiftclient?
If yes, then I will file a blueprint in python-swiftclient to implement it,
because I need it to populate the Admin -> System Info -> Object Storage
Services panel.

As a side note, I can also see that it has not been implemented in some other
services, like Glance and Heat. Is that a design decision, or has the feature
simply not been implemented?

Thanks and Regards

Ashish Chandra


[openstack-dev] [all] i need some help on this bug Bug #1365892

2014-09-09 Thread Li Tianqing
Hello,
I used the eventlet backdoor to enable gc.DEBUG_LEAK, and after waiting a few
minutes I can confirm that there are some objects in gc.garbage that cannot be
collected by gc.collect().
They look like this (caught in ceilometer-collector):


['_context_auth_token', 'auth_token', 'new_pass'],
(<cell object ...>,),
<function _fix_passwords ...>,
... (the same list/cell/function triple repeats)


and I suspect this code in oslo.messaging:


def _safe_log(log_func, msg, msg_data):
    """Sanitizes the msg_data field before logging."""
    SANITIZE = ['_context_auth_token', 'auth_token', 'new_pass']

    def _fix_passwords(d):
        """Sanitizes the password fields in the dictionary."""
        for k in d.iterkeys():
            if k.lower().find('password') != -1:
                d[k] = '<SANITIZED>'
            elif k.lower() in SANITIZE:
                d[k] = '<SANITIZED>'
            elif isinstance(d[k], dict):
                _fix_passwords(d[k])
        return d

    return log_func(msg, _fix_passwords(copy.deepcopy(msg_data)))


I can resolve this problem by adding "_fix_passwords = None" before _safe_log
returns.
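For what it's worth, the cycle can be reproduced in isolation. In the sketch below (Python 3 spelling, list(d) instead of d.iterkeys(); names are illustrative), the inner function refers to itself, so the function object and its closure cell form a reference cycle on every call. Reference counting alone never frees that cycle; only the cyclic collector reclaims it, and with gc.DEBUG_LEAK on (which includes DEBUG_SAVEALL) every collected cycle is retained in gc.garbage, which looks like a leak:

```python
import gc

SANITIZE = ['_context_auth_token', 'auth_token', 'new_pass']

def safe_copy(msg_data):
    # same shape as _safe_log: a fresh recursive closure per call
    def _fix_passwords(d):
        for k in list(d):
            if k.lower() in SANITIZE:
                d[k] = '<SANITIZED>'
            elif isinstance(d[k], dict):
                _fix_passwords(d[k])   # self-reference through the closure cell
        return d
    return _fix_passwords(dict(msg_data))

gc.disable()
gc.collect()                           # start from a clean slate
safe_copy({'auth_token': 'secret', 'nested': {'auth_token': 'x'}})
# the function object and its closure cell now reference each other;
# plain reference counting cannot free the pair
cycles = gc.collect()
assert cycles >= 1                     # the cyclic collector found garbage
gc.enable()
```

Setting _fix_passwords = None before returning breaks the cell-to-function edge, which is why that workaround makes the objects disappear.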


But I do not really understand why this problem happens, and more deeply why
the gc cannot collect those objects. Although I can make the uncollectable
objects disappear, that is not good enough: if you do not understand it, you
may write similar code again in the future and get the same memory leak.


So can someone help me with some details on recursive closures like the one in
the code above, and on why the gc cannot collect them?
Thanks a lot.


--

Best
Li Tianqing


Re: [openstack-dev] On an API proxy from baremetal to ironic

2014-09-09 Thread Russell Bryant
On 09/09/2014 05:24 PM, Michael Still wrote:
> Hi.
> 
> One of the last things blocking Ironic from graduating is deciding
> whether or not we need a Nova API proxy for the old baremetal
> extension to new fangled Ironic API. The TC has asked that we discuss
> whether we think this functionality is actually necessary.
> 
> It should be noted that we're _not_ talking about migration of
> deployed instances from baremetal to Ironic. That is already
> implemented. What we are talking about is if users post-migration
> should be able to expect their previous baremetal Nova API extension
> to continue to function, or if they should use the Ironic APIs from
> that point onwards.
> 
> Nova had previously thought this was required, but it hasn't made it
> in time for Juno unless we do a FFE, and it has been suggested that
> perhaps it's not needed at all because it is an admin extension.
> 
> To be super specific, we're talking about the "baremetal nodes" admin
> extension here. This extension has the ability to:
> 
>  - list nodes running baremetal
>  - show detail of one of those nodes
>  - create a new baremetal node
>  - delete a baremetal node
> 
> Only the first two of those would be supported if we implemented a proxy.
> 
> So, discuss.

I'm in favor of proceeding with deprecation without requiring the API proxy.

In the case of user facing APIs, the administrators in charge of
upgrading the cloud do not have full control over all of the apps using
the APIs.  In this particular case, I would expect that the cloud
administrators have *complete* control over the use of these APIs.

Assuming we have one overlap release (Juno) to allow the migration to
occur and given proper documentation of the migration plan and release
notes stating the fact that the old APIs are going away, we should be fine.

In summary, +1 to moving forward without the API proxy requirement.

-- 
Russell Bryant



Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-09 Thread Baohua Yang
Agree.
It's necessary for neutron to have GBP, and we can certainly utilize
stackforge to help improve it.


On Fri, Sep 5, 2014 at 11:08 PM, Mohammad Banikazemi  wrote:

> I can only see the use of a separate project for Group Policy as a
> tactical and temporary solution. In my opinion, it does not make sense to
> have the Group Policy as a separate project outside Neutron (unless the new
> project is aiming to replace Neutron and I do not think anybody is
> suggesting that). In this regard, Group Policy is not similar to Advanced
> Services such as FW and LB.
>
> So, using StackForge to get things moving again is fine but let us keep in
> mind (and see if we can agree on) that we want to have the Group Policy
> abstractions as part of OpenStack Networking (when/if it proves to be a
> valuable extension to what we currently have). I do not want to see our
> decision to make things moving quickly right now prevent us from achieving
> that goal. That is why I think the other two approaches (from the little I
> know about the incubator option, and even littler I know about the feature
> branch option) may be better options in the long run.
>
> If I understand it correctly some members of the community are actively
> working on these options (that is, the incubator and the Neutron feature
> branch options) . In order to make a better judgement as to how to proceed,
> it would be very helpful if we get a bit more information on these two
> options and their status here on this mailing list.
>
> Mohammad
>
>
>
>
> From: Kevin Benton 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 09/05/2014 04:31 AM
> Subject: Re: [openstack-dev] [neutron][policy] Group-based Policy next
> steps
> --
>
>
>
> Tl;dr - Neutron incubator is only a wiki page with many uncertainties. Use
> StackForge to make progress and re-evaluate when the incubator exists.
>
>
> I also agree that starting out in StackForge as a separate repo is a
> better first step. In addition to the uncertainty around packaging and
> other processes brought up by Mandeep, I really doubt the Neutron incubator
> is going to have the review velocity desired by the group policy
> contributors. I believe this will be the case based on the Neutron
> incubator patch approval policy in conjunction with the nature of the
> projects it will attract.
>
> Due to the requirement for two core +2's in the Neutron incubator, moving
> group policy there is hardly going to do anything to reduce the load on the
> Neutron cores who are in a similar overloaded position as the Nova
> cores.[1] Consequently, I wouldn't be surprised if patches to the Neutron
> incubator receive even less core attention than the main repo simply
> because their location outside of openstack/neutron will be a good reason
> to treat them with a lower priority.
>
> If you combine that with the fact that the incubator is designed to house
> all of the proposed experimental features to Neutron, there will be a very
> high volume of patches constantly being proposed to add new features, make
> changes to features, and maybe even fix bugs in those features. This new
> demand for reviewers will not be met by the existing core reviewers because
> they will be busy with refactoring, fixing, and enhancing the core Neutron
> code.
>
> Even ignoring the review velocity issues, I see very little benefit to GBP
> starting inside of the Neutron incubator. It doesn't guarantee any
> packaging with Neutron and Neutron code cannot reference any incubator
> code. It's effectively a separate repo without the advantage of being able
> to commit code quickly.
>
> There is one potential downside to not immediately using the Neutron
> incubator. If the Neutron cores decide that all features must live in the
> incubator for at least 2 cycles regardless of quality or usage in
> deployments, starting outside in a StackForge project would delay the start
> of the timer until GBP makes it into the incubator. However, this can be
> considered once the incubator actually exists and starts accepting
> submissions.
>
> In summary, I think GBP should move to a StackForge project as soon as
> possible so development can progress. A transition to the Neutron incubator
> can be evaluated once it actually becomes something more than a wiki page.
>
>
> 1.
> *http://lists.openstack.org/pipermail/openstack-dev/2014-September/044872.html*
> 
>
> --
> Kevin Benton
>
>
> On Thu, Sep 4, 2014 at 11:24 PM, Mandeep Dhami <*dh...@noironetworks.com*
> > wrote:
>
>
>I agree. Also, as this does n

Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-09 Thread Samuel Merritt

On 9/9/14, 4:47 PM, Devananda van der Veen wrote:
> On Tue, Sep 9, 2014 at 4:12 PM, Samuel Merritt  wrote:
>> On 9/9/14, 12:03 PM, Monty Taylor wrote:
> [snip]
>>> So which is it? Because it sounds like to me it's a thing that actually
>>> does NOT need to diverge in technology in any way, but that I've been
>>> told that it needs to diverge because it's delivering a different set of
>>> features - and I'm pretty sure if it _is_ the thing that needs to
>>> diverge in technology because of its feature set, then it's a thing I
>>> don't think we should be implementing in python in OpenStack because it
>>> already exists and it's called AMQP.
>>
>> Whether Zaqar is more like AMQP or more like email is a really strange
>> metric to use for considering its inclusion.
>>
> I don't find this strange at all -- I had been judging the technical
> merits of Zaqar (ex-Marconi) for the last ~18 months based on the
> understanding that it aimed to provide Queueing-as-a-Service, and
> found its delivery of that to be lacking on technical grounds. The
> implementation did not meet my view of what a queue service should
> provide; it is based on some serious antipatterns (storing a queue in
> an RDBMS is probably the most obvious); and in fact, it isn't even
> queue-like in the access patterns enabled by the REST API (random
> access to a set != a queue). That was the basis for a large part of my
> objections to the project over time, and a source of frustration for
> me as the developers justified many of their positions rather than
> accepted feedback and changed course during the incubation period. The
> reason for this seems clear now...
>
> As was pointed out in the TC meeting today, Zaqar is (was?) actually
> aiming to provide Messaging-as-a-Service -- not queueing as a service!
> This is another way of saying "it's more like email and less like
> AMQP", which means my but-its-not-a-queue objection to the project's
> graduation is irrelevant, and I need to rethink about all my previous
> assessments of the project.
>
> The questions now before us are:
> - should OpenStack include, in the integrated release, a
> messaging-as-a-service component?

I certainly think so. I've worked on a few reasonable-scale web
applications, and they all followed the same pattern: HTTP app servers
serving requests quickly, background workers for long-running tasks, and
some sort of durable message-broker/queue-server thing for conveying
work from the first to the second.

A quick straw poll of my nearby coworkers shows that every non-trivial
web application that they've worked on in the last decade follows the
same pattern.

While not *every* application needs such a thing, web apps are quite
common these days, and Zaqar satisfies one of their big requirements.
Not only that, it does so in a way that requires much less babysitting
than run-your-own-broker does.

> - is Zaqar a technically sound implementation of such a service?
>
> As an aside, there are still references to Zaqar as a queue in both
> the wiki [0], in the governance repo [1], and on launchpad [2].
>
> Regards,
> Devananda
>
> [0] "Multi-tenant queues based on Keystone project IDs"
>    https://wiki.openstack.org/wiki/Zaqar#Key_features
>
> [1] "Queue service" is even the official OpenStack Program name, and
> the mission statement starts with "To produce an OpenStack message
> queueing API and service."
>    http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n315
>
> [2] "Zaqar is a new OpenStack project to create a multi-tenant cloud
> queuing service"
>    https://launchpad.net/zaqar





Re: [openstack-dev] [Heat] convergence flow diagrams

2014-09-09 Thread Angus Salkeld
On Wed, Sep 10, 2014 at 4:12 AM, Tyagi, Ishant  wrote:

>  Thanks Angus for your comments.
>
>
>
> Your design is almost the same as this one. I also agree that only the engine
> should have DB access, via DB RPC APIs. I will update the diagrams with
> this change.
>
>
>
> Regarding the worker communicating with the observer, the flow would be like
> this:
>
> - Engine tells worker to create or update a resource.
>
> - Worker then just calls the resource plugin's handle_create /
> handle_update etc., calls the observer RPC API to observe the resource
> (check_create_complete), and then exits.
>
> - Observer then checks the resource status until it reaches the
> desired state.
>
> - Main engine then gets the notification back from the observer and
> schedules the next parent resource to converge.
>
>
>
> If the observer and worker are independent entities, then who will invoke the
> observer to check resource state?
>

We could do what we do for autoscaling, tag each resource's metadata with
the heat stack id.
https://github.com/openstack/heat/blob/master/heat/engine/resources/autoscaling.py#L262-L273

Then the observer never needs to be told anything: it looks for
notifications that have "heat-stack-id" as a key in the metadata,
knows they are associated with a heat stack, retrieves whatever
other info it needs, and sends an update to heat-engine (via RPC).
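A toy sketch of that filtering (the notification shape and the key are assumptions based on the message above, not actual Heat or Ceilometer structures):

```python
# the observer only acts on notifications whose resource metadata
# carries the heat stack id tag described above
def is_heat_resource(notification, key='heat-stack-id'):
    metadata = notification.get('payload', {}).get('metadata', {})
    return key in metadata

events = [
    {'event_type': 'compute.instance.create.end',
     'payload': {'metadata': {'heat-stack-id': 'stack-1'}}},
    {'event_type': 'compute.instance.create.end',
     'payload': {'metadata': {}}},                 # not heat-managed
]
matched = [e for e in events if is_heat_resource(e)]
assert len(matched) == 1
assert matched[0]['payload']['metadata']['heat-stack-id'] == 'stack-1'
```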

-Angus


>
>
> -Ishant
>
>  *From:* Angus Salkeld [mailto:asalk...@mirantis.com]
> *Sent:* Tuesday, September 9, 2014 5:45 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Heat] convergence flow diagrams
>
>
>
> On Mon, Sep 8, 2014 at 11:22 PM, Tyagi, Ishant 
> wrote:
>
>  Hi All,
>
>
>
> As per the heat mid cycle meetup whiteboard, we have created the
> flowchart and sequence diagram for the convergence . Can you please review
> these diagrams and provide your feedback?
>
>
>
> https://www.dropbox.com/sh/i8qbjtgfdxn4zx4/AAC6J-Nps8J12TzfuCut49ioa?dl=0
>
>
>
> Great! Good to see something.
>
>   I was expecting something like:
>
> engine ~= like nova-conductor (it's the only process that talks to the db
> - make upgrading easier)
>
> observer - purely gets the actual state/properties and writes then to the
> db (via engine)
>
> worker - has a "job" queue and grinds away at running those (resource
> actions)
>
>
>
> Then engine then "triggers" on differences on goal vs. actual state and
> create a job and sends it to the job queue.
>
> - so, on create it sees there is no actual state so it sends a create job
> for the first resource to the worker queue
>
> - when the observer writes the new state for that resource it triggers the
> next resource create in the dependency tree.
>
> - like any system that relies on notifications we need timeouts and each
> stack needs a periodic "notification" to make sure
>
>   that progress is being made, or notify the user that no progress is being
> made.
>
>
>
> One question about the observer (in either my setup or the one in the
> diagram).
>
> - If we are relying on rpc notifications all the observer processes will
> receive a copy of the same notification
>
>   (say nova create end) how are we going to decide on which one does
> anything with it?
>
>   We don't want 10 observers getting more detailed info from nova and then
> writing to the db
>
>
>
> In your diagram worker is communicating with observer, which seems odd to
> me. I thought observer and worker were very
>
> independent entities.
>
>
>
>
> In my setup there are less API to worry about too:
>
> - RPC api for the engine (access to the db)
>
> - RPC api for sending a job to the worker
>
> - the plugin API
>
> - the observer might need an api just for the engine to tell it to
> start/stop observing a stack
>
> -Angus
>
>
>
>
>
> Thanks,
>
> Ishant
>
>
>
>
>
>


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-09 Thread Devananda van der Veen
On Tue, Sep 9, 2014 at 5:31 PM, Boris Pavlovic  wrote:
>
> Devananda,
>>
>>
>> While that is de rigueur today, it's actually at the core of the
>> current problem space. Blessing a project by integrating it is not a
>> scalable long-term solution. We don't have a model to integrate >1
>> project for the same space // of the same type, or to bless the
>> stability of a non-integrated project. You won't see two messaging
>> services, or two compute services, in the integrated release. In fact,
>> integration is supposed to occur only *after* the community has sorted
>> out "a winner" within a given space. In my view, it should also happen
>> only after the community has proven a project to be stable and
>> scalable in production.
>
>
> After looking at such profiles:
> http://boris-42.github.io/ngk.html
> And getting 150 DB requests (without neutron) to create one single VM, I
> don't believe that the current set of integrated OpenStack projects scales
> well (I mean without customization).

I'm not going to defend the DB performance of Nova or other services.
This thread isn't the place for that discussion.

>
> So I would like to say 2 things:
>
> - Rules should be the same for all projects (including incubated/integrated)

Yup. This is why the TC revisits integrated projects once per cycle now, too.

>
> - Nothing should be incubated/integrated.

This is a blatant straw-man. If you're suggesting we stop all
integration testing, release management, etc -- the very things which
the integrated release process coordinates... well, I don't think
that's what you're saying. Except it is.

> Because projects have to evolve, and to evolve they need competition. In other
> words, monopoly sucks at any moment in time (even after the community decided
> to choose project A and not project B)
>

In order for a project to evolve, a project needs people contributing
to it. More often than not, that is because someone is using the
project, and it doesn't do what they want, so they improve it in some
way. Incubation was intended to be a signal to early adopters to begin
using (and thus, hopefully, contributing to) a project, encouraging
collaboration and reducing NIH friction between corporations within
the ecosystem. It hasn't gone exactly as planned, but it's also worked
fairly well for _this_ purpose, in my opinion.

However, adding more and more projects into the integrated release,
and thus increasing the testing complexity and imposing greater
requirements on operators -- this is an imminent scaling problem, as
Sean has eloquently pointed out before in several long email threads
which I won't recount here.

All of this is to say that Kurt's statement:
  "[You don't get] broad exposure and usage... as a non-integrated
project in the OpenStack ecosystem."
is an accurate representation of one problem facing OpenStack today. I
don't think we solve that problem by following the established norm -
we solve it by creating a mechanism for non-integrated projects to get
the exposure and usage they need _without_ becoming a burden on our
QA, docs, and release teams, and without forcing that project upon
operators.

But as I said earlier, we shouldn't hold Zaqar hostage while we sort
out what that solution looks like...


Anyhow, my apologies for the bike shed. I felt it was worth voicing my
disagreement with Kurt's statement that graduation should not be
viewed as an official blessing of Zaqar as OpenStack's Messaging
Service. Today, I believe that's exactly what it is. With that
blessing comes an additional burden on the community to support it.

Perhaps that will change in the future.

-Devananda



Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-09 Thread Boris Pavlovic
Devananda,



> While that is de rigueur today, it's actually at the core of the
> current problem space. Blessing a project by integrating it is not a
> scalable long-term solution. We don't have a model to integrate >1
> project for the same space // of the same type, or to bless the
> stability of a non-integrated project. You won't see two messaging
> services, or two compute services, in the integrated release. In fact,
> integration is supposed to occur only *after* the community has sorted
> out "a winner" within a given space. In my view, it should also happen
> only after the community has proven a project to be stable and
> scalable in production.


After looking at such profiles:
http://boris-42.github.io/ngk.html
And getting 150 DB requests (without neutron) to create one single VM, I
don't believe that the current set of integrated OpenStack projects scales
well (I mean without customization).

So I would like to say 2 things:

- Rules should be the same for all projects (including
incubated/integrated)
- Nothing should be incubated/integrated. Because projects have to evolve, and
to evolve they need competition. In other words, monopoly sucks at any moment
in time (even after the community decided to choose project A and not project B).


Best regards,
Boris Pavlovic



On Wed, Sep 10, 2014 at 4:18 AM, Adam Lawson  wrote:

> *"should OpenStack include, in the integrated release,
> a messaging-as-a-service component"*
>
> Assuming this is truly a question that represents where we are and not an
> exploration of what we might want to address, I would say the answer is a
> resounding no, as queuing is within the scope of what OpenStack is and has
> always been. If we get into integrated messaging, I'm struggling to
> understand what value it adds to the IaaS goal. We might as well start
> integrating office and productivity applications while we're at it.
>
> Sorry if I sound cheeky, but considering this seems rather odd to me.
>
>
> *Adam Lawson*
> *CEO, Principal Architect*
>
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
>
>
> On Tue, Sep 9, 2014 at 5:03 PM, Clint Byrum  wrote:
>
>> Excerpts from Devananda van der Veen's message of 2014-09-09 16:47:27
>> -0700:
>> > On Tue, Sep 9, 2014 at 4:12 PM, Samuel Merritt 
>> wrote:
>> > > On 9/9/14, 12:03 PM, Monty Taylor wrote:
>> > [snip]
>> > >> So which is it? Because it sounds like to me it's a thing that
>> actually
>> > >> does NOT need to diverge in technology in any way, but that I've been
>> > >> told that it needs to diverge because it's delivering a different
>> set of
>> > >> features - and I'm pretty sure if it _is_ the thing that needs to
>> > >> diverge in technology because of its feature set, then it's a thing I
>> > >> don't think we should be implementing in python in OpenStack because
>> it
>> > >> already exists and it's called AMQP.
>> > >
>> > >
>> > > Whether Zaqar is more like AMQP or more like email is a really strange
>> > > metric to use for considering its inclusion.
>> > >
>> >
>> > I don't find this strange at all -- I had been judging the technical
>> > merits of Zaqar (ex-Marconi) for the last ~18 months based on the
>> > understanding that it aimed to provide Queueing-as-a-Service, and
>> > found its delivery of that to be lacking on technical grounds. The
>> > implementation did not meet my view of what a queue service should
>> > provide; it is based on some serious antipatterns (storing a queue in
>> > an RDBMS is probably the most obvious); and in fact, it isn't even
>> > queue-like in the access patterns enabled by the REST API (random
>> > access to a set != a queue). That was the basis for a large part of my
>> > objections to the project over time, and a source of frustration for
>> > me as the developers justified many of their positions rather than
>> > accepting feedback and changing course during the incubation period. The
>> > reason for this seems clear now...
>> >
>> > As was pointed out in the TC meeting today, Zaqar is (was?) actually
>> > aiming to provide Messaging-as-a-Service -- not queueing as a service!
>> > This is another way of saying "it's more like email and less like
>> > AMQP", which means my but-its-not-a-queue objection to the project's
>> > graduation is irrelevant, and I need to rethink all my previous
>> > assessments of the project.
>>
>> Well said.
>>

Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-09 Thread Adam Lawson
Deleting unnecessary code, introducing a stabilization cycle and/or making
definite steps towards a unified SDK are definitely my votes.


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Tue, Sep 9, 2014 at 5:09 PM, Joe Gordon  wrote:

>
>
> On Wed, Sep 3, 2014 at 8:37 AM, Joe Gordon  wrote:
>
>> As you all know, there have recently been several very active discussions
>> around how to improve assorted aspects of our development process. One
>> idea
>> that was brought up is to come up with a list of cycle goals/project
>> priorities for Kilo [0].
>>
>> To that end, I would like to propose an exercise as discussed in the TC
>> meeting yesterday [1]:
>> Have anyone interested (especially TC members) come up with a list of
>> what they think the project wide Kilo cycle goals should be and post them
>> on this thread by end of day Wednesday, September 10th. After which time we
>> can begin discussing the results.
>> The goal of this exercise is to help us see if our individual world views
>> align with the greater community, and to get the ball rolling on a larger
>> discussion of where as a project we should be focusing more time.
>>
>
>
>
> 1. Strengthen our north bound APIs
>
> * API micro-versioning
> * Improved CLI's and SDKs
> * Better capability discovery
> * Hide usability issues with client side logic
> * Improve reliability
>
> As others have said in this thread trying to use OpenStack as a user is a
> very frustrating experience. For a long time now we have focused on
> southbound APIs such as drivers, configuration options, supported
> architectures etc. But as a project we have not spent nearly enough time on
> the end user experience. If our northbound APIs aren't something developers
> want to use, our southbound API work doesn't matter.
>
> 2. 'Fix' our development process
>
> * openstack-specs. Currently we don't have any good way to work on big
> entire-project efforts; hopefully something like an openstack-specs repo
> (with liaisons from each core team reviewing it) will help make it possible
> for us to tackle these issues. I see us addressing the API
> micro-versioning and capability discovery issues here.
> * functional testing and post merge testing. As discussed elsewhere in
> this thread our current testing model isn't meeting our current
> requirements.
>
> 3. Pay down technical debt
>
> This is the one I am actually least sure about, as I can really only speak
> for nova on this one. In our constant push forward we have accumulated a
> lot of technical debt. The debt manifests itself as hard to maintain code,
> bugs (nova had over 1000 open bugs until yesterday), performance/scaling
> issues and missing basic features. I think it's time for us to take
> inventory of our technical debt and fix some of the biggest issues.
>
>
>>
>> best,
>> Joe Gordon
>>
>> [0]
>> http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
>> [1]
>> http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html
>>
>
>


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-09 Thread Adam Lawson
*"should OpenStack include, in the integrated release,
a messaging-as-a-service component"*

Assuming this is truly a question that represents where we are and not an
exploration of what we might want to address, I would say the answer is a
resounding no, as queuing is within the scope of what OpenStack is and has
always been. If we get into integrated messaging, I'm struggling to
understand what value it adds to the IaaS goal. We might as well start
integrating office and productivity applications while we're at it.

Sorry if I sound cheeky, but considering this seems rather odd to me.


*Adam Lawson*
*CEO, Principal Architect*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Tue, Sep 9, 2014 at 5:03 PM, Clint Byrum  wrote:

> Excerpts from Devananda van der Veen's message of 2014-09-09 16:47:27
> -0700:
> > On Tue, Sep 9, 2014 at 4:12 PM, Samuel Merritt 
> wrote:
> > > On 9/9/14, 12:03 PM, Monty Taylor wrote:
> > [snip]
> > >> So which is it? Because it sounds like to me it's a thing that
> actually
> > >> does NOT need to diverge in technology in any way, but that I've been
> > >> told that it needs to diverge because it's delivering a different set
> of
> > >> features - and I'm pretty sure if it _is_ the thing that needs to
> > >> diverge in technology because of its feature set, then it's a thing I
> > >> don't think we should be implementing in python in OpenStack because
> it
> > >> already exists and it's called AMQP.
> > >
> > >
> > > Whether Zaqar is more like AMQP or more like email is a really strange
> > > metric to use for considering its inclusion.
> > >
> >
> > I don't find this strange at all -- I had been judging the technical
> > merits of Zaqar (ex-Marconi) for the last ~18 months based on the
> > understanding that it aimed to provide Queueing-as-a-Service, and
> > found its delivery of that to be lacking on technical grounds. The
> > implementation did not meet my view of what a queue service should
> > provide; it is based on some serious antipatterns (storing a queue in
> > an RDBMS is probably the most obvious); and in fact, it isn't even
> > queue-like in the access patterns enabled by the REST API (random
> > access to a set != a queue). That was the basis for a large part of my
> > objections to the project over time, and a source of frustration for
> > me as the developers justified many of their positions rather than
> > accepting feedback and changing course during the incubation period. The
> > reason for this seems clear now...
> >
> > As was pointed out in the TC meeting today, Zaqar is (was?) actually
> > aiming to provide Messaging-as-a-Service -- not queueing as a service!
> > This is another way of saying "it's more like email and less like
> > AMQP", which means my but-its-not-a-queue objection to the project's
> > graduation is irrelevant, and I need to rethink all my previous
> > assessments of the project.
>
> Well said.
>


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-09 Thread Stefano Maffulli
On 09/09/2014 06:55 AM, James Bottomley wrote:
> CLAs are a well known and documented barrier to casual contributions

I'm not convinced about this statement, at all. And since I think it's
secondary to what we're discussing, I'll leave it as is and go on.

> I've done both ... I do prefer the patch workflow to the gerrit one,
[...]

Do you consider yourself a 'committed' developer or a casual one?
Because ultimately I think this is what it comes down to: a developer
who has a commitment to get a patch landed in tree has a different
motivation and set of incentives that make climbing the learning curve
more appealing. A casual contributor is a different persona.

> Bad code is a bit of a pejorative term.

I used the wrong term, I apologize if I offended someone: it wasn't my
intention.

> However, I can sympathize with the view: In the Linux Kernel, drivers
> are often the biggest source of coding style and maintenance issues.
> I maintain a driver subsystem and I would have to admit that a lot of
> code that goes into those drivers that wouldn't be of sufficient
> quality to be admitted to the core kernel without a lot more clean up
> and flow changes.

thanks for saying this a lot more nicely than my rough expression.

> To me, this means you don't really want a sin bin where you dump
> drivers and tell them not to come out until they're fit to be
> reviewed by the core; You want a trusted driver community which does
> its own reviews and means the core doesn't have to review them.

I think we're getting somewhere here, based on your comment and others':
we may achieve some result if we empower a new set of people to manage
drivers, keeping them in the same repositories where they are now. This
new set of people may not be the current core reviewers but others with
different skillsets, more capable of understanding the drivers'
ecosystem, needs, motivations, etc.

I have the impression this idea has been circling around for a while but
for some reason or another (like lack of capabilities in gerrit and
other reasons) we never tried to implement it. Maybe it's time to think
about an implementation. We have been thinking about mentors
https://wiki.openstack.org/wiki/Mentors, maybe that's a way to go?
Sub-team with +1.5 scoring capabilities?

/stef

-- 
Ask and answer questions on https://ask.openstack.org



Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-09 Thread Joe Gordon
On Wed, Sep 3, 2014 at 8:37 AM, Joe Gordon  wrote:

> As you all know, there have recently been several very active discussions
> around how to improve assorted aspects of our development process. One idea
> that was brought up is to come up with a list of cycle goals/project
> priorities for Kilo [0].
>
> To that end, I would like to propose an exercise as discussed in the TC
> meeting yesterday [1]:
> Have anyone interested (especially TC members) come up with a list of what
> they think the project wide Kilo cycle goals should be and post them on
> this thread by end of day Wednesday, September 10th. After which time we
> can begin discussing the results.
> The goal of this exercise is to help us see if our individual world views
> align with the greater community, and to get the ball rolling on a larger
> discussion of where as a project we should be focusing more time.
>



1. Strengthen our north bound APIs

* API micro-versioning
* Improved CLI's and SDKs
* Better capability discovery
* Hide usability issues with client side logic
* Improve reliability

As others have said in this thread trying to use OpenStack as a user is a
very frustrating experience. For a long time now we have focused on
southbound APIs such as drivers, configuration options, supported
architectures etc. But as a project we have not spent nearly enough time on
the end user experience. If our northbound APIs aren't something developers
want to use, our southbound API work doesn't matter.
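For the micro-versioning and capability-discovery bullets above, the core mechanic is small: the client asks for a version, the server checks it against its supported range, and per-request behavior varies accordingly. The header name, version bounds, and fields below are illustrative assumptions, not any project's final API.

```python
MIN_VERSION = (2, 1)
MAX_VERSION = (2, 4)

def negotiate(headers):
    """Pick the microversion to serve for a single request."""
    raw = headers.get("OpenStack-API-Version", "")
    if not raw:
        return MIN_VERSION  # no header: serve the oldest, most compatible form
    requested = tuple(int(part) for part in raw.split("."))
    if not (MIN_VERSION <= requested <= MAX_VERSION):
        raise ValueError("406 Not Acceptable: unsupported version %s" % raw)
    return requested

def show_server(headers, server):
    version = negotiate(headers)
    body = {"id": server["id"], "name": server["name"]}
    if version >= (2, 3):  # newer fields appear only when explicitly requested
        body["locked"] = server.get("locked", False)
    return body

server = {"id": "s1", "name": "vm1", "locked": True}
print(show_server({}, server))                                # old default view
print(show_server({"OpenStack-API-Version": "2.3"}, server))  # adds 'locked'
```

Clients written against an old version keep getting the old response shape, which is what lets the API evolve without breaking existing SDKs.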

2. 'Fix' our development process

* openstack-specs. Currently we don't have any good way to work on big
entire-project efforts; hopefully something like an openstack-specs repo
(with liaisons from each core team reviewing it) will help make it possible
for us to tackle these issues. I see us addressing the API
micro-versioning and capability discovery issues here.
* functional testing and post merge testing. As discussed elsewhere in this
thread our current testing model isn't meeting our current requirements.

3. Pay down technical debt

This is the one I am actually least sure about, as I can really only speak
for nova on this one. In our constant push forward we have accumulated a
lot of technical debt. The debt manifests itself as hard to maintain code,
bugs (nova had over 1000 open bugs until yesterday), performance/scaling
issues and missing basic features. I think it's time for us to take
inventory of our technical debt and fix some of the biggest issues.


>
> best,
> Joe Gordon
>
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
> [1]
> http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html
>


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-09 Thread Clint Byrum
Excerpts from Devananda van der Veen's message of 2014-09-09 16:47:27 -0700:
> On Tue, Sep 9, 2014 at 4:12 PM, Samuel Merritt  wrote:
> > On 9/9/14, 12:03 PM, Monty Taylor wrote:
> [snip]
> >> So which is it? Because it sounds like to me it's a thing that actually
> >> does NOT need to diverge in technology in any way, but that I've been
> >> told that it needs to diverge because it's delivering a different set of
> >> features - and I'm pretty sure if it _is_ the thing that needs to
> >> diverge in technology because of its feature set, then it's a thing I
> >> don't think we should be implementing in python in OpenStack because it
> >> already exists and it's called AMQP.
> >
> >
> > Whether Zaqar is more like AMQP or more like email is a really strange
> > metric to use for considering its inclusion.
> >
> 
> I don't find this strange at all -- I had been judging the technical
> merits of Zaqar (ex-Marconi) for the last ~18 months based on the
> understanding that it aimed to provide Queueing-as-a-Service, and
> found its delivery of that to be lacking on technical grounds. The
> implementation did not meet my view of what a queue service should
> provide; it is based on some serious antipatterns (storing a queue in
> an RDBMS is probably the most obvious); and in fact, it isn't even
> queue-like in the access patterns enabled by the REST API (random
> access to a set != a queue). That was the basis for a large part of my
> objections to the project over time, and a source of frustration for
> me as the developers justified many of their positions rather than
> accepting feedback and changing course during the incubation period. The
> reason for this seems clear now...
> 
> As was pointed out in the TC meeting today, Zaqar is (was?) actually
> aiming to provide Messaging-as-a-Service -- not queueing as a service!
> This is another way of saying "it's more like email and less like
> AMQP", which means my but-its-not-a-queue objection to the project's
> graduation is irrelevant, and I need to rethink all my previous
> assessments of the project.

Well said.
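The access-pattern distinction drawn above can be made concrete with a toy model: a strict queue exposes only ordered claim-and-delete, while a messaging service additionally allows random access to any message by ID. This model is an assumption for discussion, not Zaqar's actual API.

```python
from collections import OrderedDict
import itertools

class MessageStore:
    def __init__(self):
        self._messages = OrderedDict()  # preserves posting order
        self._ids = itertools.count(1)

    def post(self, body):
        msg_id = next(self._ids)
        self._messages[msg_id] = body
        return msg_id

    def claim(self):
        """Queue-like access: take the oldest message, removing it."""
        msg_id, body = next(iter(self._messages.items()))
        del self._messages[msg_id]
        return msg_id, body

    def get(self, msg_id):
        """Random access by ID -- exactly what a strict queue forbids."""
        return self._messages[msg_id]

store = MessageStore()
first = store.post("first")
second = store.post("second")
print(store.get(second))  # random access -> 'second'
print(store.claim()[1])   # FIFO claim    -> 'first'
```

If consumers only ever call `claim()`, the service behaves like a queue; once `get()` exists, the backing store must support indexed lookup, which pushes implementations toward database-like storage.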



Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-09 Thread Devananda van der Veen
On Thu, Sep 4, 2014 at 1:44 PM, Kurt Griffiths
 wrote:
[snip]
> Does a Qpid/Rabbit/Kafka provisioning service make sense? Probably. Would
> such a service totally overlap in terms of use-cases with Zaqar? Community
> feedback suggests otherwise. Will there be some other kind of thing that
> comes out of the woodwork? Possibly. (Heck, if something better comes
> along I for one have no qualms in shifting resources to the more elegant
> solution--again, use the best tool for the job.) This process happens all
> the time in the broader open-source world. But this process takes a
> healthy amount of time, plus broad exposure and usage, which is something
> that you simply don’t get as a non-integrated project in the OpenStack
> ecosystem.

While that is de rigueur today, it's actually at the core of the
current problem space. Blessing a project by integrating it is not a
scalable long-term solution. We don't have a model to integrate >1
project for the same space // of the same type, or to bless the
stability of a non-integrated project. You won't see two messaging
services, or two compute services, in the integrated release. In fact,
integration is supposed to occur only *after* the community has sorted
out "a winner" within a given space. In my view, it should also happen
only after the community has proven a project to be stable and
scalable in production.

It should be self-evident that, for a large and healthy ecosystem of
production-quality projects to be created and flourish, we can not
pick a winner and shut down competition by integrating a project
*prior* to that project getting "broad exposure and usage". A practice
of integrating projects merely to get them exposure and contributors
is self-defeating.


> In any case, it’s pretty clear to me that Zaqar graduating should not be
> viewed as making it "the officially blessed messaging service for the
> cloud”

That's exactly what graduation does, though. Your statement in the
previous paragraph - that non-integrated projects don't get adoption -
only furthers this point.

> and nobody is allowed to have any other ideas, ever.

Of course other people can have other ideas -- but we don't have a
precedent for handling it inside the community. Look at Ceilometer -
there are at least two other projects which attempted to fill that
space, but we haven't any means to accept them into OpenStack without
either removing Ceilometer or encouraging those projects to merge into
Ceilometer.

> If that
> happens, it’s only a symptom of a deeper perception/process problem that
> is far from unique to Zaqar. In fact, I think it touches on all
> non-integrated projects, and many integrated ones as well.
>

Yup.

I agree that we shouldn't hold Zaqar hostage while the community sorts
out the small-tent-big-camp questions. But I also feel like we _must_
sort that out soon, because the current system (integrate all the
things!) doesn't appear to be sustainable for much longer.


-Devananda



Re: [openstack-dev] [tripleo] Puppet elements support

2014-09-09 Thread Emilien Macchi
So this is the patch to move the repo on Stackforge:
https://review.openstack.org/#/c/120285

Of course, I copied the Gerrit permissions from the tripleo-image-elements
project, so people who are core on tripleo-image-elements will obviously be
core on tripleo-puppet-elements.

Emilien Macchi

On 09/08/2014 07:11 PM, Emilien Macchi wrote:
> Hi TripleO community,
>
> I would be really interested in helping to bring Puppet elements support
> to TripleO.
> So far I've seen this work:
> https://github.com/agroup/tripleo-puppet-elements/tree/puppet_dev_heat
> which is a very good bootstrap but really outdated.
> After some discussion with Greg Haynes on IRC, we came up with the idea
> to create a repo (that would be moved to Stackforge or OpenStack git) and
> push the bits from what has been done by HP folks, with updates &
> improvements.
>
> I started a basic repo
> https://github.com/enovance/tripleo-puppet-elements that could be moved
> right now to Stackforge to let the community start the work.
>
> My proposal is:
> * move this repo (or create a new one directly on
> github/{stackforge,openstack?})
> * push some bits from agroup's original work.
> * continue the contributions, updates & improvements.
>
> Any thoughts?
>




Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-09 Thread Clint Byrum
Excerpts from Samuel Merritt's message of 2014-09-09 16:12:09 -0700:
> On 9/9/14, 12:03 PM, Monty Taylor wrote:
> > On 09/04/2014 01:30 AM, Clint Byrum wrote:
> >> Excerpts from Flavio Percoco's message of 2014-09-04 00:08:47 -0700:
> >>> Greetings,
> >>>
> >>> Last Tuesday the TC held the first graduation review for Zaqar. During
> >>> the meeting some concerns arose. I've listed those concerns below with
> >>> some comments hoping that it will help starting a discussion before the
> >>> next meeting. In addition, I've added some comments about the project
> >>> stability at the bottom and an etherpad link pointing to a list of use
> >>> cases for Zaqar.
> >>>
> >>
> >> Hi Flavio. This was an interesting read. As somebody whose attention has
> >> recently been drawn to Zaqar, I am quite interested in seeing it
> >> graduate.
> >>
> >>> # Concerns
> >>>
> >>> - Concern on operational burden of requiring NoSQL deploy expertise to
> >>> the mix of openstack operational skills
> >>>
> >>> For those of you not familiar with Zaqar, it currently supports 2 nosql
> >>> drivers - MongoDB and Redis - and those are the only 2 drivers it
> >>> supports for now. This will require operators willing to use Zaqar to
> >>> maintain a new (?) NoSQL technology in their system. Before expressing
> >>> our thoughts on this matter, let me say that:
> >>>
> >>>  1. By removing the SQLAlchemy driver, we basically removed the
> >>> chance
> >>> for operators to use an already deployed "OpenStack-technology"
> >>>  2. Zaqar won't be backed by any AMQP based messaging technology for
> >>> now. Here's[0] a summary of the research the team (mostly done by
> >>> Victoria) did during Juno
> >>>  3. We (OpenStack) used to require Redis for the zmq matchmaker
> >>>  4. We (OpenStack) also use memcached for caching and as the oslo
> >>> caching lib becomes available - or a wrapper on top of dogpile.cache -
> >>> Redis may be used in place of memcached in more and more deployments.
> >>>  5. Ceilometer's recommended storage driver is still MongoDB,
Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-09 Thread Jay Pipes

On 09/09/2014 06:57 PM, Kevin Benton wrote:

Hi Jay,

The main component that won't work without direct integration is
enforcing policy on calls directly to Neutron and calls between the
plugins inside of Neutron. However, that's only one component of GBP.
All of the declarative abstractions, rendering of policy, etc can be
experimented with here in the stackforge project until the incubator is
figured out.


OK, thanks for the explanation Kevin, that helps!

Best,
-jay


On Tue, Sep 9, 2014 at 12:01 PM, Jay Pipes <jaypi...@gmail.com> wrote:

On 09/04/2014 12:07 AM, Sumit Naiksatam wrote:

Hi,

There's been a lot of lively discussion on GBP a few weeks back
and we
wanted to drive forward the discussion on this a bit more. As you
might imagine, we're excited to move this forward so more people can
try it out.  Here are the options:

* Neutron feature branch: This presumably allows the GBP feature
to be
developed independently, and will perhaps help in faster iterations.
There does seem to be a significant packaging issue [1] with this
approach that hasn’t been completely addressed.

* Neutron-incubator: This allows a path to graduate into
Neutron, and
will be managed by the Neutron core team. That said, the proposal is
under discussion and there are still some open questions [2].

* Stackforge: This allows the GBP team to make rapid and iterative
progress, while still leveraging the OpenStack infra. It also
provides
option of immediately exposing the existing implementation to early
adopters.

Each of the above options does not preclude moving to the other
at a later time.

Which option do people think is more preferable?

(We could also discuss this in the weekly GBP IRC meeting on
Thursday:
https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy)

Thanks!

[1]

http://lists.openstack.org/pipermail/openstack-dev/2014-August/044283.html


[2]

http://lists.openstack.org/pipermail/openstack-dev/2014-August/043577.html




Hi all,

IIRC, Kevin was saying to me in IRC that GBP really needs to live
in-tree due to it needing access to various internal plugin points
and to be able to call across different plugin layers/drivers inside
of Neutron.

If this is the case, how would the stackforge GBP project work if it
wasn't a fork of Neutron in its entirety?

Just curious,
-jay


_
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Kevin Benton


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-09 Thread Devananda van der Veen
On Tue, Sep 9, 2014 at 4:12 PM, Samuel Merritt  wrote:
> On 9/9/14, 12:03 PM, Monty Taylor wrote:
[snip]
>> So which is it? Because it sounds like to me it's a thing that actually
>> does NOT need to diverge in technology in any way, but that I've been
>> told that it needs to diverge because it's delivering a different set of
>> features - and I'm pretty sure if it _is_ the thing that needs to
>> diverge in technology because of its feature set, then it's a thing I
>> don't think we should be implementing in python in OpenStack because it
>> already exists and it's called AMQP.
>
>
> Whether Zaqar is more like AMQP or more like email is a really strange
> metric to use for considering its inclusion.
>

I don't find this strange at all -- I had been judging the technical
merits of Zaqar (ex-Marconi) for the last ~18 months based on the
understanding that it aimed to provide Queueing-as-a-Service, and
found its delivery of that to be lacking on technical grounds. The
implementation did not meet my view of what a queue service should
provide; it is based on some serious antipatterns (storing a queue in
an RDBMS is probably the most obvious); and in fact, it isn't even
queue-like in the access patterns enabled by the REST API (random
access to a set != a queue). That was the basis for a large part of my
objections to the project over time, and a source of frustration for
me as the developers justified many of their positions rather than
accepted feedback and changed course during the incubation period. The
reason for this seems clear now...
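The access-pattern objection above (random access to a set != a queue) is easy to see in a few lines of Python. This is only an illustration of the two semantics being contrasted, not Zaqar's actual API:

```python
import collections
import uuid

# Queue semantics: consumers only ever see the head; ordering is the contract.
fifo = collections.deque()
fifo.append("msg-1")
fifo.append("msg-2")
assert fifo.popleft() == "msg-1"  # strictly first-in, first-out

# Set-with-IDs semantics: every message gets an ID, and any message can be
# fetched or deleted by ID at any time, in any order.
store = {}
for body in ("msg-1", "msg-2", "msg-3"):
    store[uuid.uuid4().hex] = body

some_id = sorted(store)[-1]   # pick an arbitrary message, not the oldest
print(store[some_id] in ("msg-1", "msg-2", "msg-3"))  # True
del store[some_id]            # delete out of order; no head or tail involved
```

A REST API built on the second model supports listing, claiming, and deleting arbitrary messages, which is exactly the behavior that does not fit a queue abstraction.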

As was pointed out in the TC meeting today, Zaqar is (was?) actually
aiming to provide Messaging-as-a-Service -- not queueing as a service!
This is another way of saying "it's more like email and less like
AMQP", which means my but-its-not-a-queue objection to the project's
graduation is irrelevant, and I need to rethink all my previous
assessments of the project.

The questions now before us are:
- should OpenStack include, in the integrated release, a
messaging-as-a-service component?
- is Zaqar a technically sound implementation of such a service?

As an aside, there are still references to Zaqar as a queue in the
wiki [0], the governance repo [1], and on Launchpad [2].

Regards,
Devananda


[0] "Multi-tenant queues based on Keystone project IDs"
  https://wiki.openstack.org/wiki/Zaqar#Key_features

[1] "Queue service" is even the official OpenStack Program name, and
the mission statement starts with "To produce an OpenStack message
queueing API and service."
  
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n315

[2] "Zaqar is a new OpenStack project to create a multi-tenant cloud
queuing service"
  https://launchpad.net/zaqar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-09 Thread Samuel Merritt

On 9/9/14, 12:03 PM, Monty Taylor wrote:

On 09/04/2014 01:30 AM, Clint Byrum wrote:

Excerpts from Flavio Percoco's message of 2014-09-04 00:08:47 -0700:

Greetings,

Last Tuesday the TC held the first graduation review for Zaqar. During
the meeting some concerns arose. I've listed those concerns below with
some comments hoping that it will help starting a discussion before the
next meeting. In addition, I've added some comments about the project
stability at the bottom and an etherpad link pointing to a list of use
cases for Zaqar.



Hi Flavio. This was an interesting read. As somebody whose attention has
recently been drawn to Zaqar, I am quite interested in seeing it
graduate.


# Concerns

- Concern on operational burden of requiring NoSQL deploy expertise to
the mix of openstack operational skills

For those of you not familiar with Zaqar, it currently supports 2 nosql
drivers - MongoDB and Redis - and those are the only 2 drivers it
supports for now. This will require operators willing to use Zaqar to
maintain a new (?) NoSQL technology in their system. Before expressing
our thoughts on this matter, let me say that:

 1. By removing the SQLAlchemy driver, we basically removed the
chance
for operators to use an already deployed "OpenStack-technology"
 2. Zaqar won't be backed by any AMQP based messaging technology for
now. Here's[0] a summary of the research the team (mostly done by
Victoria) did during Juno
 3. We (OpenStack) used to require Redis for the zmq matchmaker
 4. We (OpenStack) also use memcached for caching and as the oslo
caching lib becomes available - or a wrapper on top of dogpile.cache -
Redis may be used in place of memcached in more and more deployments.
 5. Ceilometer's recommended storage driver is still MongoDB,
although
Ceilometer has now support for sqlalchemy. (Please correct me if I'm
wrong).

That being said, it's obvious we already, to some extent, promote some
NoSQL technologies. However, for the sake of the discussion, let's assume
we don't.

I truly believe, with my OpenStack (not Zaqar's) hat on, that we can't
keep avoiding these technologies. NoSQL technologies have been around
for years and we should be prepared - including OpenStack operators - to
support these technologies. Not every tool is good for all tasks - one
of the reasons we removed the sqlalchemy driver in the first place -
therefore it's impossible to keep a homogeneous environment for all
services.



I wholeheartedly agree that non-traditional storage technologies that
are becoming mainstream are good candidates for use cases where SQL
based storage gets in the way. I wish there wasn't so much FUD
(warranted or not) about MongoDB, but that is the reality we live in.


With this, I'm not suggesting to ignore the risks and the extra burden
this adds but, instead of attempting to avoid it completely by not
evolving the stack of services we provide, we should probably work on
defining a reasonable subset of NoSQL services we are OK with
supporting. This will help making the burden smaller and it'll give
operators the option to choose.

[0] http://blog.flaper87.com/post/marconi-amqp-see-you-later/


- Concern on should we really reinvent a queue system rather than
piggyback on one

As mentioned in the meeting on Tuesday, Zaqar is not reinventing message
brokers. Zaqar provides a service akin to SQS from AWS with an OpenStack
flavor on top. [0]



I think Zaqar is more like SMTP and IMAP than AMQP. You're not really
trying to connect two processes in real time. You're trying to do fully
asynchronous messaging with fully randomized access to any message.

Perhaps somebody should explore whether the approaches taken by large
scale IMAP providers could be applied to Zaqar.

Anyway, I can't imagine writing a system to intentionally use the
semantics of IMAP and SMTP. I'd be very interested in seeing actual use
cases for it, apologies if those have been posted before.


It seems like you're EITHER describing something called XMPP that has at
least one open source scalable backend called ejabberd. OR, you've
actually hit the nail on the head with bringing up SMTP and IMAP but for
some reason that feels strange.

SMTP and IMAP already implement every feature you've described, as well
as retries/failover/HA and a fully end-to-end secure transport (if
installed properly). If you don't actually set them up to run as a public
messaging interface but just as a cloud-local exchange, then you could
get by with very low overhead for a massive throughput - it can very
easily be run on a single machine for Sean's simplicity, and could just
as easily be scaled out using well-known techniques for public-cloud
sized deployments.

So why not use existing daemons that do this? You could still use the
REST API you've got, but instead of writing it to a mongo backend and
trying to implement all of the things that already exist in SMTP/IMAP -
you could just have them front to it. You could even bypass normal
d
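The mapping Clint suggests is straightforward to picture with Python's standard library alone: one Zaqar-style message becomes one RFC 822 message, with tenant and queue carried in headers that an IMAP-like store could index on. The header names and JSON payload here are hypothetical, sketched only to show the shape of the idea, not any real Zaqar or SMTP/IMAP deployment:

```python
from email import message_from_bytes
from email.message import EmailMessage

# Hypothetical mapping: one queue message == one RFC 822 message.
msg = EmailMessage()
msg["X-Tenant-Id"] = "project-1234"      # assumed header name
msg["X-Queue-Name"] = "build-events"     # assumed header name
msg["Subject"] = "instance.create.end"
msg.set_content('{"instance_id": "abc", "status": "ACTIVE"}')

# What a REST front end would hand to an SMTP-like ingest, and what an
# IMAP-like store would hand back on fetch:
wire = bytes(msg)
parsed = message_from_bytes(wire)
print(parsed["X-Queue-Name"])  # build-events
```

The point is that serialization, routing metadata, and storage semantics all already exist in the mail stack; only the REST translation layer would be new.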

Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-09 Thread Kevin Benton
Hi Jay,

The main component that won't work without direct integration is enforcing
policy on calls directly to Neutron and calls between the plugins inside of
Neutron. However, that's only one component of GBP. All of the declarative
abstractions, rendering of policy, etc can be experimented with here in the
stackforge project until the incubator is figured out.

On Tue, Sep 9, 2014 at 12:01 PM, Jay Pipes  wrote:

> On 09/04/2014 12:07 AM, Sumit Naiksatam wrote:
>
>> Hi,
>>
>> There's been a lot of lively discussion on GBP a few weeks back and we
>> wanted to drive forward the discussion on this a bit more. As you
>> might imagine, we're excited to move this forward so more people can
>> try it out.  Here are the options:
>>
>> * Neutron feature branch: This presumably allows the GBP feature to be
>> developed independently, and will perhaps help in faster iterations.
>> There does seem to be a significant packaging issue [1] with this
>> approach that hasn’t been completely addressed.
>>
>> * Neutron-incubator: This allows a path to graduate into Neutron, and
>> will be managed by the Neutron core team. That said, the proposal is
>> under discussion and there are still some open questions [2].
>>
>> * Stackforge: This allows the GBP team to make rapid and iterative
>> progress, while still leveraging the OpenStack infra. It also provides
>> option of immediately exposing the existing implementation to early
>> adopters.
>>
>> Each of the above options does not preclude moving to the other at a
>> later time.
>>
>> Which option do people think is more preferable?
>>
>> (We could also discuss this in the weekly GBP IRC meeting on Thursday:
>> https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy)
>>
>> Thanks!
>>
>> [1] http://lists.openstack.org/pipermail/openstack-dev/2014-
>> August/044283.html
>> [2] http://lists.openstack.org/pipermail/openstack-dev/2014-
>> August/043577.html
>>
>
> Hi all,
>
> IIRC, Kevin was saying to me in IRC that GBP really needs to live in-tree
> due to it needing access to various internal plugin points and to be able
> to call across different plugin layers/drivers inside of Neutron.
>
> If this is the case, how would the stackforge GBP project work if it
> wasn't a fork of Neutron in its entirety?
>
> Just curious,
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Current state of play for nova FFEs

2014-09-09 Thread Michael Still
Hi.

So feature freeze seems to be going quite well to me. We've already
landed 10 things which had requested exceptions.

However, there are still six things in flight, so I would like to
remind people to keep focussing on those please. The deadline for
these patches to be approved is Friday midnight UTC. If something is
still in the gate when the deadline comes we'll try and sneak it
through, but things which aren't approved by then will be delayed
until Kilo.

To that end, if there is anything on the FFE list which we think wont
make it in time, let's defer it early so we can focus more on the
things which do stand a chance.

Thanks,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] On an API proxy from baremetal to ironic

2014-09-09 Thread Ben Nemec
On 09/09/2014 04:24 PM, Michael Still wrote:
> Hi.
> 
> One of the last things blocking Ironic from graduating is deciding
> whether or not we need a Nova API proxy for the old baremetal
> extension to new fangled Ironic API. The TC has asked that we discuss
> whether we think this functionality is actually necessary.
> 
> It should be noted that we're _not_ talking about migration of
> deployed instances from baremetal to Ironic. That is already
> implemented. What we are talking about is if users post-migration
> should be able to expect their previous baremetal Nova API extension
> to continue to function, or if they should use the Ironic APIs from
> that point onwards.
> 
> Nova had previously thought this was required, but it hasn't made it
> in time for Juno unless we do a FFE, and it has been suggested that
> perhaps its not needed at all because it is an admin extension.
> 
> To be super specific, we're talking about the "baremetal nodes" admin
> extension here. This extension has the ability to:
> 
>  - list nodes running baremetal
>  - show detail of one of those nodes
>  - create a new baremetal node
>  - delete a baremetal node
> 
> Only the first two of those would be supported if we implemented a proxy.
> 
> So, discuss.

nova-baremetal is, and has been for a while, a dead end.  That shouldn't
be a surprise to anyone who cares about it at this point.  By rights it
should have been ripped out a while ago since it's never met the
hypervisor requirements Nova enforces on everyone else.  Spending
additional effort on a partial compatibility layer for it seems like a
complete waste of time to me.

In addition, if we _do_ add a proxy then we're committing to supporting
that going forward, and frankly I think we've got better things to spend
our time on.  Unless we're planning to add it and immediately deprecate
it so we can remove it (and baremetal?) next cycle.  Which seems like a
silly thing to do. :-)

I mean, even assuming we did this, what's the real benefit?  If you
switch to Ironic without updating your tools, you've just made your
entire infrastructure read-only.  Nobody's actually going to do that,
are they?  And if they are, do they care that much about being able to
view their node list/details?  What are they going to do with that
information?  Can they even trust that the information is correct?  How
are we going to test this proxy when we don't test baremetal to any
meaningful degree today?

So yeah, -1 from me to a proxy that's at best a half-measure anyway.
Let's just put baremetal to rest already.

-Ben

> 
> Thanks,
> Michael
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] On an API proxy from baremetal to ironic

2014-09-09 Thread Michael Still
On Wed, Sep 10, 2014 at 7:43 AM, Solly Ross  wrote:
> With my admittedly limited knowledge of the whole Ironic process, the 
> question seems to me to be: "If we don't implement a proxy, which people are 
> going to have a serious problem?"
>
> Do we have any data on which users/operators are making use of the baremetal
> API in any extensive fashion?  If nobody's using it, or the people using it
> aren't using it in an
> extensive fashion, I think we don't need to make a proxy for it.
> Strengthening this
> argument is the fact that we would only be proxying the first two calls, so 
> it wouldn't
> be a drop-in replacement anyway.

You make a fair point, and this is something we've struggled for
during the Ironic driver implementation. We _know_ that baremetal
works (I know of at least one 1,000 node deployment), but we _don't_
know how widely its deployed and we don't have a good way to find out.

So, I think we're left assuming that people do use it, and acting accordingly.

Then again, is it ok to assume admins can tweak their code to use the
ironic API? I suspect it is, because the number of admins is small...

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Designate][Horizon][Tempest][DevStack] Supporting code for incubated projects

2014-09-09 Thread Gabriel Hurley
I would also like to add that incubated != integrated. There's no telling how 
long a project may stay in incubation or how many changes it may undergo before 
it's deemed ready (see David's reasoning around client changes during RC's).

While the Horizon team has always made every effort to work closely with the 
incubated projects in getting them ready for merge as soon as they're 
officially integrated, doing so prior to that promise of stability, distro 
support, etc. isn't just infeasible, it's dangerous to the Horizon project.

Perhaps instead we should focus on making a better flow for installing 
"experimental" modules in a more official capacity. I always go back to how we 
can help build a better ecosystem. This problem applies to projects which are 
not and may never be incubated just as much as to incubated ones.

Sure, anyone with some know-how *can* load in another dashboard or panel into 
Horizon through their settings, but maybe it's time we looked at how to do this 
in a more user-friendly way.

- Gabriel

> -Original Message-
> From: Lyle, David [mailto:david.l...@hp.com]
> Sent: Tuesday, September 09, 2014 2:07 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Designate][Horizon][Tempest][DevStack]
> Supporting code for incubated projects
> 
> Adding support for incubated projects in Horizon is blocked mainly for
> dependency issues. Because of the way Horizon utilizes the python-*clients, we
> force a requirement on distros to include that version of the client even though
> it is not officially part of OpenStack's integrated release.
> Additionally, we've made exceptions and included incubated project support
> in the past and doing so has been borderline disastrous from a release point
> of view. Things like non-backward compatible client releases have happened
> in the RC cycles.
> 
> I've been struggling with a better way forward. But including directly in the
> Horizon tree, will create problems.
> 
> David
> 
> On 9/9/14, 8:12 AM, "Sean Dague"  wrote:
> 
> >On 09/09/2014 07:58 AM, Mac Innes, Kiall wrote:
> >> Hi all,
> >>
> >>
> >>
>> While requesting an openstack/designate-dashboard project from the TC/
>> Infra - the topic of why Designate panels, as an incubated project,
>> can't be merged into openstack/horizon was raised.
> >>
> >>
> >>
> >> In the openstack/governance review[1], Russell asked:
> >>
> >>
> >>
>> Hm, I think we should discuss this with the horizon team, then.
>> We are telling projects that incubation is a key time for integrating
>> with other projects. I would expect merging horizon integration into
>> horizon itself to be a part of that.
> >>
> >>
> >>
>> With this in mind - I'd like to start a conversation with the
>> Horizon, Tempest and DevStack teams around merging of code to support
>> incubated projects: What are the drawbacks? Why is this currently
>> frowned upon by the various teams? And what do each of the parties
>> believe is the Right Way forward?
> >
> >I though the Devstack and Tempest cases were pretty clear, once things
> >are incubated they are fair game to get added in.
> >
> >Devstack is usually the right starting point, as that makes it easy for
> >everyone to have access to the code, and makes the testability by other
> >systems viable.
> >
> >I currently don't see any designate changes that are passing Jenkins
> >that need to be addressed, is there something that got missed?
> >
> > -Sean
> >
> >--
> >Sean Dague
> >http://dague.net
> >
> >___
> >OpenStack-dev mailing list
> >OpenStack-dev@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-09 Thread Dmitry Borodaenko
A clarification on 2: we are going to keep fuel_5.1_* jobs around for
the benefit of 5.1.x maintenance releases; that should take care of
Icehouse testing for us, so I don't think we should keep Icehouse jobs
in 6.0/master after Juno is stabilized. What we should do instead is
drop Icehouse jobs and introduce Kilo jobs tracking OpenStack master,
and try to keep manifests in Fuel master forward-compatible with Kilo
through the 6.x release series.

On Tue, Sep 9, 2014 at 1:39 PM, Mike Scherbakov
 wrote:
> Aleksandra,
> you've got us exactly right. Fuel CI for OSTF can wait a bit longer, but "4
> fuel-library tests" should happen right after we create stable/5.1. Also,
> for Fuel CI for OSTF - I don't think it's actually necessary to support <5.0
> envs.
>
> Your questions:
>
> Create jobs for both Icehouse and Juno, but it doesn't make sense to do
> staging for Juno till it starts to pass deployment in HA mode. Once it
> passes deployment in HA, staging should be enabled. Then, once it passes
> OSTF - we extend criteria, and pass only those mirrors which also pass OSTF
> phase
> Once Juno starts to pass BVT with OSTF check enabled, I think we can disable
> Icehouse checks. Not sure about fuel-library tests on Fuel CI with Icehouse
> - we might want to continue using them.
>
> Thanks,
>
> On Wed, Sep 10, 2014 at 12:22 AM, Aleksandra Fedorova
>  wrote:
>>
>> > Our Fuel CI can do 4 builds against puppet modules: 2 voting, with
>> > Icehouse packages; 2 non-voting, with Juno packages.
>> > Then, I'd suggest to create ISO with 2 releases (Icehouse, Juno)
>> > actually before Juno becomes stable. We will be able to run 2 sets of BVTs
>> > (against Icehouse and Juno), and it means that we will be able to see 
>> > almost
>> > immediately if something in nailgun/astute/puppet integration broke. For
>> > Juno builds it's going to be all red initially.
>>
>> Let me rephrase:
>>
>> We keep one Fuel master branch for two OpenStack releases. And we make
>> sure that Fuel master code is compatible with both of them. And we use
>> current release (Icehouse) as a reference for test results of upcoming
>> release, till we obtain stable enough reference point in Juno itself.
>> Moreover we'd like to have OSTF code running on all previous Fuel releases.
>>
>> Changes to CI workflow look as follows:
>>
>> Nightly builds:
>>   1) We build two mirrors: one for Icehouse and one for Juno.
>>   2) From each mirror we build Fuel ISO using exactly the same fuel master
>> branch code.
>>   3) Then we run BVT tests on both (using the same fuel-main code for
>> system tests).
>>   4) If Icehouse BVT tests pass, we deploy both ISO images (even with
>> failed Juno tests) onto Fuel CI.
>>
>> On Fuel CI we should run:
>>   - 4 fuel-library tests (revert master node, inject fuel-library code in
>> master node and run deployment):
>> 2 (ubuntu and centos) voting Icehouse tests and 2 non-voting
>> Juno tests
>>   - 5 OSTF tests (revert deployed environment, inject OSTF code into
>> master node, run OSTF):
>> voting on 4.1, 5.0, 5.1, master/icehouse and non-voting on
>> master/Juno
>>   - other tests, which don't use prebuilt environment, work as before
>>
>> The major action point here would be OSTF tests, as we don't have yet
>> working implementation of injecting OSTF code into deployed environment. And
>> we don't run any tests on old environments.
>>
>>
>> Questions:
>>
>> 1) How should we test mirrors?
>>
>> Current master mirrors go through the 4 hours test cycle involving Fuel
>> ISO build:
>>   1. we build temporary mirror
>>   2. build custom iso from it
>>   3. run two custom bvt jobs
>>   4. if they pass we move mirror to stable and sitch to it for our
>> "primary" fuel_master_iso
>>
>> Should we test only Icehouse mirrors, or both, but ignoring again failed
>> BVT for Juno? Maybe we should enable these tests only later in release
>> cycle, say, after SCF?
>>
>> 2) It is not clear for me when and how we will switch from supporting two
>> releases back to one.
>> Should we add one more milestone to our release process? The "Switching
>> point", when we disable and remove Icehouse tasks and move to Juno
>> completely? I guess it should happen before next SCF?
>>
>>
>>
>> On Tue, Sep 9, 2014 at 9:52 PM, Mike Scherbakov 
>> wrote:
>>>
>>> > What we need to achieve that is have 2 build series based on Fuel
>>> master: one with Icehouse packages, and one with Juno, and, as Mike
>>> proposed, keep our manifests backwards compatible with Icehouse.
>>> Exactly. Our Fuel CI can do 4 builds against puppet modules: 2 voting,
>>> with Icehouse packages; 2 non-voting, with Juno packages.
>>>
>>> Then, I'd suggest to create ISO with 2 releases (Icehouse, Juno) actually
>>> before Juno becomes stable. We will be able to run 2 sets of BVTs (against
>>> Icehouse and Juno), and it means that we will be able to see almost
>>> immediately if something in nailgun/astute/puppet integration broke. For
>>> Juno builds it's going to be all red initially.
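The staged promotion criteria discussed in this thread (gate mirrors on Icehouse results, ignore Juno until it deploys in HA, then progressively tighten the Juno gate) can be sketched as one small decision function. This is a hypothetical encoding of the proposal, not Fuel's actual CI code, and the function and flag names are made up for illustration:

```python
def should_promote_mirror(release, bvt_passed, ostf_passed, juno_ha_stable):
    """Decide whether a nightly mirror may be promoted to stable.

    Hypothetical encoding of the criteria from this thread:
    - Icehouse mirrors gate on BVT plus OSTF.
    - Juno mirrors are ignored entirely until Juno passes HA deployment;
      after that they must meet the same BVT plus OSTF bar.
    """
    if release == "icehouse":
        return bvt_passed and ostf_passed
    if release == "juno":
        if not juno_ha_stable:
            return False  # staging disabled until HA deployments work
        return bvt_passed and ostf_passed
    raise ValueError("unknown release: %s" % release)

print(should_promote_mirror("icehouse", True, True, False))  # True
print(should_promote_mirror("juno", True, True, False))      # False
```

Once Juno stabilizes, flipping `juno_ha_stable` is the only change needed; the Icehouse path can later be dropped at the "switching point" Aleksandra asks about.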

Re: [openstack-dev] On an API proxy from baremetal to ironic

2014-09-09 Thread Solly Ross
With my admittedly limited knowledge of the whole Ironic process, the question 
seems to me to be: "If we don't implement a proxy, which people are going to 
have a serious problem?"

Do we have any data on which users/operators are making use of the baremetal API
in any extensive fashion?  If nobody's using it, or the people using it aren't
using it in an
extensive fashion, I think we don't need to make a proxy for it.  Strengthening 
this
argument is the fact that we would only be proxying the first two calls, so it 
wouldn't
be a drop-in replacement anyway.

Best Regards,
Solly Ross

- Original Message -
> From: "Michael Still" 
> To: "OpenStack Development Mailing List" 
> Sent: Tuesday, September 9, 2014 5:24:11 PM
> Subject: [openstack-dev] On an API proxy from baremetal to ironic
> 
> Hi.
> 
> One of the last things blocking Ironic from graduating is deciding
> whether or not we need a Nova API proxy for the old baremetal
> extension to new fangled Ironic API. The TC has asked that we discuss
> whether we think this functionality is actually necessary.
> 
> It should be noted that we're _not_ talking about migration of
> deployed instances from baremetal to Ironic. That is already
> implemented. What we are talking about is if users post-migration
> should be able to expect their previous baremetal Nova API extension
> to continue to function, or if they should use the Ironic APIs from
> that point onwards.
> 
> Nova had previously thought this was required, but it hasn't made it
> in time for Juno unless we do a FFE, and it has been suggested that
> perhaps its not needed at all because it is an admin extension.
> 
> To be super specific, we're talking about the "baremetal nodes" admin
> extension here. This extension has the ability to:
> 
>  - list nodes running baremetal
>  - show detail of one of those nodes
>  - create a new baremetal node
>  - delete a baremetal node
> 
> Only the first two of those would be supported if we implemented a proxy.
> 
> So, discuss.
> 
> Thanks,
> Michael
> 
> --
> Rackspace Australia
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] global-reqs on tooz pulls in worrisome transitive dep

2014-09-09 Thread Solly Ross
For the future
==

IMHO, we shouldn't be pulling in duplicate dependencies when we can control it. 
Since tooz is part of stackforge, it's somewhat part of OpenStack.  We should
strive to make all OpenStack projects use one memcached client.

That being said, a quick Google search indicates that pymemcache has some 
benefits
over python-memcache, the latter not being python 3 compatible.  Additionally, 
pymemcache
was written by the Pinterest people, so I'd imagine it stands up fairly well 
under stress.
Perhaps we should consider porting the existing OpenStack projects from 
python-memcache
to pymemcache.
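Such a port would be mostly mechanical; one concrete difference is the
server-address format, which a small (hypothetical) helper could bridge.
Neither client is imported here, so this stays a self-contained sketch:

```python
# Hypothetical helper for a python-memcached -> pymemcache port: the former
# takes 'host:port' strings, the latter (host, port) tuples.
def to_pymemcache_servers(servers, default_port=11211):
    """Convert python-memcached style 'host[:port]' strings to tuples."""
    converted = []
    for server in servers:
        host, _, port = server.partition(':')
        converted.append((host, int(port) if port else default_port))
    return converted

print(to_pymemcache_servers(['127.0.0.1:11211', 'cache1.example.org']))
# → [('127.0.0.1', 11211), ('cache1.example.org', 11211)]
```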

In the future, though, we need to make sure to avoid getting into a situation 
like this.

For the present
===

Probably, we'll just have to get pymemcache packaged for Fedora in some form.
Like you said, it can be bundled in RDO.  If you're not using RDO I think it's
safe to just tell people to install it from other sources (pip install) until
we can get the package into Fedora.

Best Regards,
Solly Ross


- Original Message -
> From: "Matt Riedemann" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Tuesday, September 9, 2014 4:30:02 PM
> Subject: [openstack-dev] global-reqs on tooz pulls in worrisome transitive
> dep
> 
> It took me a while to untangle this so prepare for links. :)
> 
> I noticed this change [1] today for global-requirements to require tooz
> [2] for a ceilometer blueprint [3].
> 
> The sad part is that tooz requires pymemcache [4] which is, from what I
> can tell, a memcached client that is not the same as python-memcached [5].
> 
> Note that python-memcached is listed in global-requirements already [6].
> 
> The problem I have with this is it doesn't appear that RHEL/Fedora
> package pymemcache (they do package python-memcached).  I see that
> openSUSE builds separate packages for each.  It looks like Ubuntu also
> has separate packages.
> 
> My question is, is this a problem?  I'm assuming RDO will just have to
> package python-pymemcache themselves but what about people not using RDO
> (SOL? Don't care? Other?).
> 
> Reverting the requirements change would probably mean reverting the
> ceilometer blueprint (or getting a version of tooz out that works with
> python-memcached which is probably too late for that right now).  Given
> the point in the schedule that seems pretty drastic.
> 
> Maybe I'm making more of this than it's worth but wanted to bring it up
> in case anyone else has concerns.
> 
> [1] https://review.openstack.org/#/c/93443/
> [2] https://github.com/stackforge/tooz/blob/master/requirements.txt#L6
> [3]
> http://specs.openstack.org/openstack/ceilometer-specs/specs/juno/central-agent-partitioning.html
> [4] https://pypi.python.org/pypi/pymemcache
> [5] https://pypi.python.org/pypi/python-memcached/
> [6]
> https://github.com/openstack/requirements/blob/master/global-requirements.txt#L108
> 
> --
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



[openstack-dev] On an API proxy from baremetal to ironic

2014-09-09 Thread Michael Still
Hi.

One of the last things blocking Ironic from graduating is deciding
whether or not we need a Nova API proxy for the old baremetal
extension to new fangled Ironic API. The TC has asked that we discuss
whether we think this functionality is actually necessary.

It should be noted that we're _not_ talking about migration of
deployed instances from baremetal to Ironic. That is already
implemented. What we are talking about is if users post-migration
should be able to expect their previous baremetal Nova API extension
to continue to function, or if they should use the Ironic APIs from
that point onwards.

Nova had previously thought this was required, but it hasn't made it
in time for Juno unless we do a FFE, and it has been suggested that
perhaps it's not needed at all because it is an admin extension.

To be super specific, we're talking about the "baremetal nodes" admin
extension here. This extension has the ability to:

 - list nodes running baremetal
 - show detail of one of those nodes
 - create a new baremetal node
 - delete a baremetal node

Only the first two of those would be supported if we implemented a proxy.

So, discuss.

Thanks,
Michael

-- 
Rackspace Australia



[openstack-dev] [release] client release deadline - Sept 18th

2014-09-09 Thread Sean Dague
As we try to stabilize OpenStack Juno, many server projects need to get
out final client releases that expose new features of their servers.
While this seems like not a big deal, each of these clients releases
ends up having possibly destabilizing impacts on the OpenStack whole (as
the clients do double duty in cross communicating between services).

As such in the release meeting today it was agreed clients should have
their final release by Sept 18th. We'll start applying the dependency
freeze to oslo and clients shortly after that, all other requirements
should be frozen at this point unless there is a high priority bug
around them.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [All] Maintenance mode in OpenStack during patching/upgrades

2014-09-09 Thread Mike Scherbakov
Sergii, Clint,
to rephrase what you are saying - there might be situations when our
OpenStack API will not be responding, as services would simply be down for
upgrade.
Do we want to support it somehow? For example, if we know that Nova is
going to be down, can we respond with HTTP 503 with appropriate Retry-After
time in header?

The idea is not to simply deny or hang requests from clients, but to tell
them "we are in maintenance mode, retry in X seconds".
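A minimal sketch of what that could look like as WSGI middleware (the class
name and the flag-file mechanism are illustrative assumptions, not an
existing OpenStack interface):

```python
import os

class MaintenanceMiddleware(object):
    """Answer 503 + Retry-After while a maintenance flag file exists."""

    def __init__(self, app, flag_file='/var/run/maintenance', retry_after=120):
        self.app = app
        self.flag_file = flag_file
        self.retry_after = retry_after

    def __call__(self, environ, start_response):
        if os.path.exists(self.flag_file):
            body = b'Service is in maintenance mode, retry later.\n'
            start_response('503 Service Unavailable',
                           [('Retry-After', str(self.retry_after)),
                            ('Content-Type', 'text/plain'),
                            ('Content-Length', str(len(body)))])
            return [body]
        return self.app(environ, start_response)
```

An operator would touch the flag file before stopping backend services and
remove it afterwards, so clients get a well-defined 503 with a retry hint
instead of connection errors.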

> Turbo Hipster was added to the gate
great idea, I think we should use it in Fuel too

> You probably would want 'nova host-servers-migrate '
yeah for migrations - but as far as I understand, it doesn't help with
disabling this host in the scheduler - there can still be a chance that
some workloads will be scheduled to the host.
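Combining the two would give a drain sequence along these lines (a hedged
sketch: the host name is illustrative, and NOVA defaults to a dry-run echo
so the commands can be previewed; set NOVA=nova against a real cloud):

```shell
# Hypothetical drain of a compute host before maintenance.
NOVA="${NOVA:-echo nova}"
HOST="compute-01"

$NOVA service-disable "$HOST" nova-compute   # scheduler stops placing new workloads
$NOVA host-servers-migrate "$HOST"           # then migrate instances already there
```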


On Tue, Sep 9, 2014 at 6:02 PM, Clint Byrum  wrote:

> Excerpts from Mike Scherbakov's message of 2014-09-09 00:35:09 -0700:
> > Hi all,
> > please see below original email below from Dmitry. I've modified the
> > subject to bring larger audience to the issue.
> >
> > I'd like to split the issue into two parts:
> >
> >1. Maintenance mode for OpenStack controllers in HA mode (HA-ed
> >Keystone, Glance, etc.)
> >2. Maintenance mode for OpenStack computes/storage nodes (no HA)
> >
> > For the first category, we might not need to have maintenance mode at
> > all. For example, if we apply patching/upgrades node by node to a
> > 3-node HA cluster, 2 nodes will serve requests normally. Is that
> > possible for our HA solutions in Fuel, TripleO, other frameworks?
>
> You may have a broken cloud if you are pushing out an update that
> requires a new schema. Some services are better than others about
> handling old schemas, and can be upgraded before doing schema upgrades.
> But most of the time you have to do at least a brief downtime:
>
>  * turn off DB accessing services
>  * update code
>  * run db migration
>  * turn on DB accessing services
>
> It is for this very reason, I believe, that Turbo Hipster was added to
> the gate, so that deployers running against the upstream master branches
> can have a chance at performing these upgrades in a reasonable amount of
> time.
>
> >
> > For the second category, can we not simply do "nova-manage service
> > disable ...", so the scheduler will simply stop scheduling new workloads
> > on the particular host which we want to do maintenance on?
> >
>
> You probably would want 'nova host-servers-migrate ' at that
> point, assuming you have migration set up.
>
> http://docs.openstack.org/user-guide/content/novaclient_commands.html
>
> > On Thu, Aug 28, 2014 at 6:44 PM, Dmitry Pyzhov wrote:
> >
> > > All,
> > >
> > > I'm not sure if it deserves to be mentioned in our documentation; this
> > > seems to be a common practice. If an administrator wants to patch his
> > > environment, he should be prepared for a temporary downtime of
> > > OpenStack services. And he should plan to perform patching in advance:
> > > choose a time with minimal load and warn users about possible
> > > interruptions of service availability.
> > >
> > > Our current implementation of patching does not protect from downtime
> > > during the patching procedure. HA deployments seem to be more or less
> > > stable. But it looks like it is possible to schedule an action on a
> > > compute node and get an error because of a service restart. Deployments
> > > with one controller... well, you won't be able to use your cluster
> > > until the patching is finished. There is no way to get rid of downtime
> > > here.
> > >
> > > As I understand, we can get rid of possible issues with computes in
> > > HA. But it will require migration of instances and stopping of the
> > > nova-compute service before patching. And it will make the overall
> > > patching procedure much longer. Do we want to investigate this process?
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
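The four-step downtime window Clint describes could be sketched as a
tooling-agnostic sequence - the steps are injected callables here, not a
real deployment API:

```python
# Minimal sketch of the brief-downtime upgrade sequence.
def upgrade_with_downtime(stop_services, update_code, run_migration,
                          start_services):
    stop_services()     # 1. no DB-accessing services while the schema changes
    update_code()       # 2. roll out the new code
    run_migration()     # 3. apply the schema migration
    start_services()    # 4. downtime ends here

# Dry run with recording stubs:
log = []
upgrade_with_downtime(lambda: log.append('stop'),
                      lambda: log.append('update'),
                      lambda: log.append('migrate'),
                      lambda: log.append('start'))
print(log)  # → ['stop', 'update', 'migrate', 'start']
```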



-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [Designate][Horizon][Tempest][DevStack] Supporting code for incubated projects

2014-09-09 Thread Lyle, David
Adding support for incubated projects in Horizon is blocked mainly by
dependency issues. Because of the way Horizon utilizes the python-*clients,
we would force distros to include that version of the client even though it
is not officially part of OpenStack's integrated release. Additionally,
we've made exceptions and included incubated project support in the past,
and doing so has been borderline disastrous from a release point of view.
Things like non-backward-compatible client releases have happened in the RC
cycles.

I've been struggling to find a better way forward, but including the code
directly in the Horizon tree will create problems.

David

On 9/9/14, 8:12 AM, "Sean Dague"  wrote:

>On 09/09/2014 07:58 AM, Mac Innes, Kiall wrote:
>> Hi all,
>> 
>>  
>> 
>> While requesting an openstack/designate-dashboard project from the
>> TC/Infra – the topic of why Designate panels, as an incubated project,
>> can't be merged into openstack/horizon was raised.
>> 
>>  
>> 
>> In the openstack/governance review[1], Russell asked:
>> 
>>  
>> 
>> Hm, I think we should discuss this with the horizon team, then. We are
>> telling projects that incubation is a key time for integrating with
>> other projects. I would expect merging horizon integration into
>> horizon itself to be a part of that.
>> 
>>  
>> 
>> With this in mind – I'd like to start a conversation with the Horizon,
>> Tempest and DevStack teams around merging code to support incubated
>> projects – what are the drawbacks? Why is this currently frowned upon
>> by the various teams? And what do each of the parties believe is the
>> Right Way forward?
>
>I thought the Devstack and Tempest cases were pretty clear: once things
>are incubated they are fair game to get added in.
>
>Devstack is usually the right starting point, as that makes it easy for
>everyone to have access to the code, and makes the testability by other
>systems viable.
>
>I currently don't see any designate changes that are passing Jenkins
>that need to be addressed, is there something that got missed?
>
>   -Sean
>
>-- 
>Sean Dague
>http://dague.net
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [solum] pep8 - splitting expressions

2014-09-09 Thread Jay Pipes

On 09/09/2014 03:05 PM, Gilbert Pilz wrote:

I have a question with regards to splitting expressions in order to
conform to the pep8 line-length restriction. I have the following bit of
code:

        res = amodel.Assemblies(uri=common.ASSEM_URI_STR %
                                pecan.request.host_url,
                                name='Solum_CAMP_assemblies',
                                type='assemblies',
                                description=common.ASSEM_DESC_STR,
                                assembly_links=a_links,
                                parameter_definitions_uri=common.ASSEM_PARAM_STR %
                                pecan.request.host_url)

The line that assigns a value to 'parameter_definitions_uri' is (as you
might be able to tell) too long. What is the best way to split this
expression up?


pdu = common.ASSEM_PARAM_STR % pecan.request.host_url
res = amodel.Assemblies(uri=common.ASSEM_URI_STR %
                        pecan.request.host_url,
                        name='Solum_CAMP_assemblies',
                        type='assemblies',
                        description=common.ASSEM_DESC_STR,
                        assembly_links=a_links,
                        parameter_definitions_uri=pdu)

Best,
-jay
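Another pattern, when several arguments run long, is to build the keyword
arguments as a dict first and unpack it. A self-contained sketch with
stand-in values, since amodel/common/pecan aren't available here:

```python
def make_assemblies(**kwargs):           # stand-in for amodel.Assemblies
    return kwargs

host_url = 'http://example.com'          # stand-in for pecan.request.host_url
kwargs = {
    'uri': '%s/camp/v1_1/assemblies' % host_url,       # illustrative paths
    'name': 'Solum_CAMP_assemblies',
    'type': 'assemblies',
    'description': 'assemblies collection resource',   # stand-in text
    'assembly_links': [],
    'parameter_definitions_uri': '%s/camp/v1_1/pdef' % host_url,
}
res = make_assemblies(**kwargs)
print(res['uri'])  # → http://example.com/camp/v1_1/assemblies
```

This keeps every key/value pair on its own short line, which sidesteps the
line-length problem entirely.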



Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-09 Thread Mike Scherbakov
Aleksandra,
you've got us exactly right. Fuel CI for OSTF can wait a bit longer, but "4
fuel-library tests" should happen right after we create stable/5.1. Also,
for Fuel CI for OSTF - I don't think it's actually necessary to support
<5.0 envs.

Your questions:

   1. Create jobs for both Icehouse and Juno, but it doesn't make sense to
   do staging for Juno till it starts to pass deployment in HA mode. Once it
   passes deployment in HA, staging should be enabled. Then, once it passes
   OSTF - we extend criteria, and pass only those mirrors which also pass OSTF
   phase
   2. Once Juno starts to pass BVT with OSTF check enabled, I think we can
   disable Icehouse checks. Not sure about fuel-library tests on Fuel CI with
   Icehouse - we might want to continue using them.

Thanks,

On Wed, Sep 10, 2014 at 12:22 AM, Aleksandra Fedorova <
afedor...@mirantis.com> wrote:

> > Our Fuel CI can do 4 builds against puppet modules: 2 voting, with
> Icehouse packages; 2 non-voting, with Juno packages.
> > Then, I'd suggest to create ISO with 2 releases (Icehouse, Juno)
> actually before Juno becomes stable. We will be able to run 2 sets of BVTs
> (against Icehouse and Juno), and it means that we will be able to see
> almost immediately if something in nailgun/astute/puppet integration broke.
> For Juno builds it's going to be all red initially.
>
> Let me rephrase:
>
> We keep one Fuel master branch for two OpenStack releases. And we make
> sure that Fuel master code is compatible with both of them. And we use
> current release (Icehouse) as a reference for test results of upcoming
> release, till we obtain stable enough reference point in Juno itself.
> Moreover we'd like to have OSTF code running on all previous Fuel releases.
>
> Changes to CI workflow look as follows:
>
> Nightly builds:
>   1) We build two mirrors: one for Icehouse and one for Juno.
>   2) From each mirror we build Fuel ISO using exactly the same fuel master
> branch code.
>   3) Then we run BVT tests on both (using the same fuel-main code for
> system tests).
>   4) If Icehouse BVT tests pass, we deploy both ISO images (even with
> failed Juno tests) onto Fuel CI.
>
> On Fuel CI we should run:
>   - 4 fuel-library tests (revert master node, inject fuel-library code in
> master node and run deployment):
> 2 (ubuntu and centos) voting Icehouse tests and 2 non-voting
> Juno tests
>   - 5 OSTF tests (revert deployed environment, inject OSTF code into
> master node, run OSTF):
> voting on 4.1, 5.0, 5.1, master/icehouse and non-voting on
> master/Juno
>   - other tests, which don't use prebuilt environment, work as before
>
> The major action point here would be OSTF tests, as we don't yet have a
> working implementation of injecting OSTF code into a deployed environment.
> And we don't run any tests on old environments.
>
>
> Questions:
>
> 1) How should we test mirrors?
>
> Current master mirrors go through the 4 hours test cycle involving Fuel
> ISO build:
>   1. we build temporary mirror
>   2. build custom iso from it
>   3. run two custom bvt jobs
>   4. if they pass we move mirror to stable and switch to it for our
> "primary" fuel_master_iso
>
> Should we test only Icehouse mirrors, or both, but ignoring again failed
> BVT for Juno? Maybe we should enable these tests only later in release
> cycle, say, after SCF?
>
> 2) It is not clear for me when and how we will switch from supporting two
> releases back to one.
> Should we add one more milestone to our release process? The "Switching
> point", when we disable and remove Icehouse tasks and move to Juno
> completely? I guess it should happen before next SCF?
>
>
>
> On Tue, Sep 9, 2014 at 9:52 PM, Mike Scherbakov 
> wrote:
>
>> > What we need to achieve that is have 2 build series based on Fuel
>> master: one with Icehouse packages, and one with Juno, and, as Mike
>> proposed, keep our manifests backwards compatible with Icehouse.
>> Exactly. Our Fuel CI can do 4 builds against puppet modules: 2 voting,
>> with Icehouse packages; 2 non-voting, with Juno packages.
>>
>> Then, I'd suggest to create ISO with 2 releases (Icehouse, Juno) actually
>> before Juno becomes stable. We will be able to run 2 sets of BVTs (against
>> Icehouse and Juno), and it means that we will be able to see almost
>> immediately if something in nailgun/astute/puppet integration broke. For
>> Juno builds it's going to be all red initially.
>>
>> Another suggestion would be to lower green switch in BVTs for Juno:
>> first, when it passes deployment; and then, if it finally passes OSTF.
>>
>> I'd like to hear QA & DevOps opinion on all the above. Immediately we
>> would need just standard stuff which is in checklists for OSCI & DevOps
>> teams, and ideally soon after that - ability to have Fuel CI running 4
>> builds, not 2, against our master, as mentioned above.
>>
>> On Tue, Sep 9, 2014 at 9:28 PM, Roman Vyalov 
>> wrote:
>>
>>> All OSCI action items for preparing the HCF check list have been done
>>>
>>

[openstack-dev] global-reqs on tooz pulls in worrisome transitive dep

2014-09-09 Thread Matt Riedemann

It took me a while to untangle this so prepare for links. :)

I noticed this change [1] today for global-requirements to require tooz 
[2] for a ceilometer blueprint [3].


The sad part is that tooz requires pymemcache [4] which is, from what I 
can tell, a memcached client that is not the same as python-memcached [5].


Note that python-memcached is listed in global-requirements already [6].

The problem I have with this is it doesn't appear that RHEL/Fedora 
package pymemcache (they do package python-memcached).  I see that 
openSUSE builds separate packages for each.  It looks like Ubuntu also 
has separate packages.


My question is, is this a problem?  I'm assuming RDO will just have to 
package python-pymemcache themselves but what about people not using RDO 
(SOL? Don't care? Other?).


Reverting the requirements change would probably mean reverting the 
ceilometer blueprint (or getting a version of tooz out that works with 
python-memcached which is probably too late for that right now).  Given 
the point in the schedule that seems pretty drastic.


Maybe I'm making more of this than it's worth but wanted to bring it up 
in case anyone else has concerns.


[1] https://review.openstack.org/#/c/93443/
[2] https://github.com/stackforge/tooz/blob/master/requirements.txt#L6
[3] 
http://specs.openstack.org/openstack/ceilometer-specs/specs/juno/central-agent-partitioning.html

[4] https://pypi.python.org/pypi/pymemcache
[5] https://pypi.python.org/pypi/python-memcached/
[6] 
https://github.com/openstack/requirements/blob/master/global-requirements.txt#L108


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-09 Thread Aleksandra Fedorova
> Our Fuel CI can do 4 builds against puppet modules: 2 voting, with
Icehouse packages; 2 non-voting, with Juno packages.
> Then, I'd suggest to create ISO with 2 releases (Icehouse, Juno) actually
before Juno becomes stable. We will be able to run 2 sets of BVTs (against
Icehouse and Juno), and it means that we will be able to see almost
immediately if something in nailgun/astute/puppet integration broke. For
Juno builds it's going to be all red initially.

Let me rephrase:

We keep one Fuel master branch for two OpenStack releases. And we make sure
that Fuel master code is compatible with both of them. And we use current
release (Icehouse) as a reference for test results of upcoming release,
till we obtain stable enough reference point in Juno itself. Moreover we'd
like to have OSTF code running on all previous Fuel releases.

Changes to CI workflow look as follows:

Nightly builds:
  1) We build two mirrors: one for Icehouse and one for Juno.
  2) From each mirror we build Fuel ISO using exactly the same fuel master
branch code.
  3) Then we run BVT tests on both (using the same fuel-main code for
system tests).
  4) If Icehouse BVT tests pass, we deploy both ISO images (even with
failed Juno tests) onto Fuel CI.

On Fuel CI we should run:
  - 4 fuel-library tests (revert master node, inject fuel-library code in
master node and run deployment):
2 (ubuntu and centos) voting Icehouse tests and 2 non-voting
Juno tests
  - 5 OSTF tests (revert deployed environment, inject OSTF code into master
node, run OSTF):
voting on 4.1, 5.0, 5.1, master/icehouse and non-voting on
master/Juno
  - other tests, which don't use prebuilt environment, work as before

The major action point here would be OSTF tests, as we don't yet have a
working implementation of injecting OSTF code into a deployed environment.
And we don't run any tests on old environments.


Questions:

1) How should we test mirrors?

Current master mirrors go through the 4 hours test cycle involving Fuel ISO
build:
  1. we build temporary mirror
  2. build custom iso from it
  3. run two custom bvt jobs
  4. if they pass we move mirror to stable and switch to it for our
"primary" fuel_master_iso

Should we test only Icehouse mirrors, or both, but ignoring again failed
BVT for Juno? Maybe we should enable these tests only later in release
cycle, say, after SCF?

2) It is not clear for me when and how we will switch from supporting two
releases back to one.
Should we add one more milestone to our release process? The "Switching
point", when we disable and remove Icehouse tasks and move to Juno
completely? I guess it should happen before next SCF?



On Tue, Sep 9, 2014 at 9:52 PM, Mike Scherbakov 
wrote:

> > What we need to achieve that is have 2 build series based on Fuel
> master: one with Icehouse packages, and one with Juno, and, as Mike
> proposed, keep our manifests backwards compatible with Icehouse.
> Exactly. Our Fuel CI can do 4 builds against puppet modules: 2 voting,
> with Icehouse packages; 2 non-voting, with Juno packages.
>
> Then, I'd suggest to create ISO with 2 releases (Icehouse, Juno) actually
> before Juno becomes stable. We will be able to run 2 sets of BVTs (against
> Icehouse and Juno), and it means that we will be able to see almost
> immediately if something in nailgun/astute/puppet integration broke. For
> Juno builds it's going to be all red initially.
>
> Another suggestion would be to lower green switch in BVTs for Juno: first,
> when it passes deployment; and then, if it finally passes OSTF.
>
> I'd like to hear QA & DevOps opinion on all the above. Immediately we
> would need just standard stuff which is in checklists for OSCI & DevOps
> teams, and ideally soon after that - ability to have Fuel CI running 4
> builds, not 2, against our master, as mentioned above.
>
> On Tue, Sep 9, 2014 at 9:28 PM, Roman Vyalov  wrote:
>
>> All OSCI action items for preparing the HCF check list have been done
>>
>>
>> On Tue, Sep 9, 2014 at 6:27 PM, Mike Scherbakov wrote:
>>
>>> Thanks Alexandra.
>>>
>>> We land a few patches a day currently, so I think we can open stable
>>> branch. If we see no serious objections in next 12 hours, let's do it. We
>>> would need to immediately notify everyone in mailing list - that for every
>>> patch for 5.1, it should go first to master, and then to stable/5.1.
>>>
>>> Is everything ready from DevOps, OSCI (packaging) side to do this? Fuel
>>> CI, OBS, etc.?
>>>
>>> On Tue, Sep 9, 2014 at 2:28 PM, Aleksandra Fedorova <
>>> afedor...@mirantis.com> wrote:
>>>
 As I understand your proposal, we need to split our HCF milestone into
 two check points: Branching Point and HCF itself.

 Branching point should happen somewhere in between SCF and HCF. And
 though it may coincide with HCF, it needs its own list of requirements.
 This will give us the possibility to untie two events and make a separate
 decision on branching without enforcing all HCF criteria.

 

Re: [openstack-dev] [Infra] Meeting Tuesday September 9th at 19:00 UTC

2014-09-09 Thread Elizabeth K. Joseph
On Mon, Sep 8, 2014 at 10:34 AM, Elizabeth K. Joseph
 wrote:
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting on Tuesday September 9th, at 19:00 UTC in #openstack-meeting

Meeting minutes and log available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-09-09-19.04.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-09-09-19.04.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-09-09-19.04.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] [glance] HTTPS client breaks nova

2014-09-09 Thread Rob Crittenden
Flavio Percoco wrote:
> On 07/23/2014 06:05 PM, Rob Crittenden wrote:
>> Rob Crittenden wrote:
>>> It looks like the switch to requests in python-glanceclient
>>> (https://review.openstack.org/#/c/78269/) has broken nova when SSL is
>>> enabled.
>>>
>>> I think it is related to the custom object that the glanceclient uses.
>>> If another connection gets pushed into the pool then things fail because
>>> the object isn't a glanceclient VerifiedHTTPSConnection object.
>>>
>>> The error seen is:
>>>
>>> 2014-07-22 16:20:57.571 ERROR nova.api.openstack
>>> req-e9a94169-9af4-45e8-ab95-1ccd3f8caf04 admin admin Caught error:
>>> VerifiedHTTPSConnection instance has no attribute 'insecure'
>>>
>>> What I see is that nova works until glance is invoked.
>>>
>>> These all work:
>>>
>>> $ nova flavor-list
>>> $ glance image-list
>>> $ nova net-list
>>>
>>> Now make it go boom:
>>>
>>> $ nova image-list
>>> ERROR (Unauthorized): Unauthorized (HTTP 401) (Request-ID:
>>> req-ee964e9a-c2a9-4be9-bd52-3f42c805cf2c)
>>>
>>> Now that a bad object is now in the pool nothing in nova works:
>>>
>>> $ nova list
>>> ERROR (Unauthorized): Unauthorized (HTTP 401) (Request-ID:
>>> req-f670db83-c830-4e75-b29f-44f61ae161a1)
>>>
>>> A restart of nova gets things back to normal.
>>>
>>> I'm working on enabling SSL everywhere
>>> (https://bugs.launchpad.net/devstack/+bug/1328226) either directly or
>>> using TLS proxies (stud).
>>> I'd like to eventually get SSL testing done as a gate job which will
>>> help catch issues like this in advance.
>>>
>>> rob
>>
>> FYI, my temporary workaround is to change the queue name (scheme) so the
>> glance clients are handled separately:
>>
>> diff --git a/glanceclient/common/https.py b/glanceclient/common/https.py
>> index 6416c19..72ed929 100644
>> --- a/glanceclient/common/https.py
>> +++ b/glanceclient/common/https.py
>> @@ -72,7 +72,7 @@ class HTTPSAdapter(adapters.HTTPAdapter):
>>  def __init__(self, *args, **kwargs):
>>  # NOTE(flaper87): This line forces poolmanager to use
>>  # glanceclient HTTPSConnection
>> -poolmanager.pool_classes_by_scheme["https"] = HTTPSConnectionPool
>> +poolmanager.pool_classes_by_scheme["glance_https"] =
>> HTTPSConnectionPool
>>  super(HTTPSAdapter, self).__init__(*args, **kwargs)
>>
>>  def cert_verify(self, conn, url, verify, cert):
>> @@ -92,7 +92,7 @@ class
>> HTTPSConnectionPool(connectionpool.HTTPSConnectionPool):
>>  be used just when the user sets --no-ssl-compression.
>>  """
>>
>> -scheme = 'https'
>> +scheme = 'glance_https'
>>
>>  def _new_conn(self):
>>  self.num_connections += 1
>>
>> This at least lets me continue working.
>>
>> rob
> 
> Hey Rob,
> 
> Sorry for the late reply, I'll take a look into this.

Ping, have you had a chance to look into it?

thanks

rob
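For anyone else debugging this, the failure mode can be sketched without the
real libraries: urllib3-style pool managers pick a connection-pool class
from a module-level mapping keyed by URL scheme, so a client that swaps in
its own class mutates global state for every other client in the process.
The class names below are stand-ins:

```python
# Self-contained sketch of the scheme -> pool-class registry collision.
pool_classes_by_scheme = {"https": "StandardHTTPSConnectionPool"}

# glanceclient's adapter replaces the shared "https" entry globally...
pool_classes_by_scheme["https"] = "GlanceHTTPSConnectionPool"

# ...so any later HTTPS pool -- including nova's -- gets the glance class:
assert pool_classes_by_scheme["https"] == "GlanceHTTPSConnectionPool"

# Rob's workaround registers under a private scheme instead, leaving the
# shared "https" entry alone:
pool_classes_by_scheme["https"] = "StandardHTTPSConnectionPool"
pool_classes_by_scheme["glance_https"] = "GlanceHTTPSConnectionPool"
print(sorted(pool_classes_by_scheme))  # → ['glance_https', 'https']
```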




[openstack-dev] [zaqar] Juno Performance Testing (Round 2)

2014-09-09 Thread Kurt Griffiths
Hi folks,

In this second round of performance testing, I benchmarked the new Redis
driver. I used the same setup and tests as in Round 1 to make it easier to
compare the two drivers. I did not test Redis in master-slave mode, but
that likely would not make a significant difference in the results since
Redis replication is asynchronous[1].

As always, the usual benchmarking disclaimers apply (i.e., take these
numbers with a grain of salt; they are only intended to provide a ballpark
reference; you should perform your own tests, simulating your specific
scenarios and using your own hardware; etc.).

## Setup ##

Rather than VMs, I provisioned some Rackspace OnMetal[3] servers to
mitigate noisy-neighbor effects when running the performance tests:

* 1x Load Generator
* Hardware
* 1x Intel Xeon E5-2680 v2 2.8Ghz
* 32 GB RAM
* 10Gbps NIC
* 32GB SATADOM
* Software
* Debian Wheezy
* Python 2.7.3
* zaqar-bench
* 1x Web Head
* Hardware
* 1x Intel Xeon E5-2680 v2 2.8Ghz
* 32 GB RAM
* 10Gbps NIC
* 32GB SATADOM
* Software
* Debian Wheezy
* Python 2.7.3
* zaqar server
* storage=mongodb
* partitions=4
* MongoDB URI configured with w=majority
* uWSGI + gevent
* config: http://paste.openstack.org/show/100592/
* app.py: http://paste.openstack.org/show/100593/
* 3x MongoDB Nodes
* Hardware
* 2x Intel Xeon E5-2680 v2 2.8Ghz
* 128 GB RAM
* 10Gbps NIC
* 2x LSI Nytro WarpDrive BLP4-1600[2]
* Software
* Debian Wheezy
* mongod 2.6.4
* Default config, except setting replSet and enabling periodic
  logging of CPU and I/O
* Journaling enabled
* Profiling on message DBs enabled for requests over 10ms
* 1x Redis Node
* Hardware
* 2x Intel Xeon E5-2680 v2 2.8Ghz
* 128 GB RAM
* 10Gbps NIC
* 2x LSI Nytro WarpDrive BLP4-1600[2]
* Software
* Debian Wheezy
* Redis 2.4.14
* Default config (snapshotting and AOF enabled)
* One process

As in Round 1, Keystone auth is disabled and requests go over HTTP, not
HTTPS. The latency introduced by enabling these is outside the control of
Zaqar, but should be quite minimal (speaking anecdotally, I would expect
an additional 1-3ms for cached tokens and assuming an optimized TLS
termination setup).

For generating the load, I again used the zaqar-bench tool. I would like
to see the team complete a large-scale Tsung test as well (including a
full HA deployment with Keystone and HTTPS enabled), but decided not to
wait for that before publishing the results for the Redis driver using
zaqar-bench.

CPU usage on the Redis node peaked at around 75% for the one process. To
better utilize the hardware, a production deployment would need to run
multiple Redis processes and use Zaqar's backend pooling feature to
distribute queues across the various instances.

Several different messaging patterns were tested, taking inspiration
from: https://wiki.openstack.org/wiki/Use_Cases_(Zaqar)

Each test was executed three times and the best time recorded.

A ~1K sample message (1398 bytes) was used for all tests.

## Results ##

### Event Broadcasting (Read-Heavy) ###

OK, so let's say you have a somewhat low-volume source, but tons of event
observers. In this case, the observers easily outpace the producer, making
this a read-heavy workload.

Options
* 1 producer process with 5 gevent workers
* 1 message posted per request
* 2 observer processes with 25 gevent workers each
* 5 messages listed per request by the observers
* Load distributed across 4[6] queues
* 10-second duration

Results
* Redis
* Producer: 1.7 ms/req,  585 req/sec
* Observer: 1.5 ms/req, 1254 req/sec
* Mongo
* Producer: 2.2 ms/req,  454 req/sec
* Observer: 1.5 ms/req, 1224 req/sec

### Event Broadcasting (Balanced) ###

This test uses the same number of producers and consumers, but note that
the observers are still listing (up to) 5 messages at a time[4], so they
still outpace the producers, but not as quickly as before.

Options
* 2 producer processes with 25 gevent workers each
* 1 message posted per request
* 2 observer processes with 25 gevent workers each
* 5 messages listed per request by the observers
* Load distributed across 4 queues
* 10-second duration

Results
* Redis
* Producer: 1.4 ms/req, 1374 req/sec
* Observer: 1.6 ms/req, 1178 req/sec
* Mongo
* Producer: 2.2 ms/req, 883 req/sec
* Observer: 2.8 ms/req, 348 req/sec

### Point-to-Point Messaging ###

In this scenario I simulated one client sending messages directly to a
different client. Only one queue is required in this case[5].

Options
* 1 producer process with 1 gevent worker

[openstack-dev] [solum] pep8 - splitting expressions

2014-09-09 Thread Gilbert Pilz
I have a question with regards to splitting expressions in order to conform to 
the pep8 line-length restriction. I have the following bit of code:

res = amodel.Assemblies(uri=common.ASSEM_URI_STR %
                        pecan.request.host_url,
                        name='Solum_CAMP_assemblies',
                        type='assemblies',
                        description=common.ASSEM_DESC_STR,
                        assembly_links=a_links,
                        parameter_definitions_uri=common.ASSEM_PARAM_STR %
                        pecan.request.host_url)

The line that assigns a value to 'parameter_definitions_uri' is (as you might 
be able to tell) too long. What is the best way to split this expression up?
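One common way to keep such a call under the 79-character limit is to bind the long expressions to local names before the call. The snippet below is an illustrative sketch only; the constants and host URL are stand-ins for the `common` and `pecan` objects in the original code.

```python
# Stand-ins for common.ASSEM_URI_STR, common.ASSEM_PARAM_STR and
# pecan.request.host_url -- hypothetical values for illustration.
ASSEM_URI_STR = '%s/camp/v1_1/assemblies'
ASSEM_PARAM_STR = '%s/camp/v1_1/parameter_definitions'
host_url = 'http://localhost:8080'

# Pre-compute the long keyword values; every line stays under 79 chars.
uri = ASSEM_URI_STR % host_url
param_uri = ASSEM_PARAM_STR % host_url

# A plain dict stands in for amodel.Assemblies here.
res = dict(uri=uri,
           name='Solum_CAMP_assemblies',
           type='assemblies',
           parameter_definitions_uri=param_uri)
print(res['uri'])   # http://localhost:8080/camp/v1_1/assemblies
```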

~ gp
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-09 Thread Monty Taylor

On 09/04/2014 01:30 AM, Clint Byrum wrote:

Excerpts from Flavio Percoco's message of 2014-09-04 00:08:47 -0700:

Greetings,

Last Tuesday the TC held the first graduation review for Zaqar. During
the meeting some concerns arose. I've listed those concerns below with
some comments hoping that it will help starting a discussion before the
next meeting. In addition, I've added some comments about the project
stability at the bottom and an etherpad link pointing to a list of use
cases for Zaqar.



Hi Flavio. This was an interesting read. As somebody whose attention has
recently been drawn to Zaqar, I am quite interested in seeing it
graduate.


# Concerns

- Concern on operational burden of requiring NoSQL deploy expertise to
the mix of openstack operational skills

For those of you not familiar with Zaqar, it currently supports 2 nosql
drivers - MongoDB and Redis - and those are the only 2 drivers it
supports for now. This will require operators willing to use Zaqar to
maintain a new (?) NoSQL technology in their system. Before expressing
our thoughts on this matter, let me say that:

 1. By removing the SQLAlchemy driver, we basically removed the chance
for operators to use an already deployed "OpenStack-technology"
 2. Zaqar won't be backed by any AMQP based messaging technology for
now. Here's[0] a summary of the research the team (mostly done by
Victoria) did during Juno
 3. We (OpenStack) used to require Redis for the zmq matchmaker
 4. We (OpenStack) also use memcached for caching and as the oslo
caching lib becomes available - or a wrapper on top of dogpile.cache -
Redis may be used in place of memcached in more and more deployments.
 5. Ceilometer's recommended storage driver is still MongoDB, although
Ceilometer has now support for sqlalchemy. (Please correct me if I'm wrong).

That being said, it's obvious we already, to some extent, promote some
NoSQL technologies. However, for the sake of the discussion, lets assume
we don't.

I truly believe, with my OpenStack (not Zaqar's) hat on, that we can't
keep avoiding these technologies. NoSQL technologies have been around
for years and we should be prepared - including OpenStack operators - to
support these technologies. Not every tool is good for all tasks - one
of the reasons we removed the sqlalchemy driver in the first place -
therefore it's impossible to keep a homogeneous environment for all
services.



I wholeheartedly agree that non-traditional storage technologies that
are becoming mainstream are good candidates for use cases where SQL
based storage gets in the way. I wish there wasn't so much FUD
(warranted or not) about MongoDB, but that is the reality we live in.


With this, I'm not suggesting to ignore the risks and the extra burden
this adds but, instead of attempting to avoid it completely by not
evolving the stack of services we provide, we should probably work on
defining a reasonable subset of NoSQL services we are OK with
supporting. This will help make the burden smaller and it'll give
operators the option to choose.

[0] http://blog.flaper87.com/post/marconi-amqp-see-you-later/


- Concern on should we really reinvent a queue system rather than
piggyback on one

As mentioned in the meeting on Tuesday, Zaqar is not reinventing message
brokers. Zaqar provides a service akin to SQS from AWS with an OpenStack
flavor on top. [0]



I think Zaqar is more like SMTP and IMAP than AMQP. You're not really
trying to connect two processes in real time. You're trying to do fully
asynchronous messaging with fully randomized access to any message.

Perhaps somebody should explore whether the approaches taken by large
scale IMAP providers could be applied to Zaqar.

Anyway, I can't imagine writing a system to intentionally use the
semantics of IMAP and SMTP. I'd be very interested in seeing actual use
cases for it, apologies if those have been posted before.


It seems like you're EITHER describing something called XMPP that has at 
least one open source scalable backend called ejabberd. OR, you've 
actually hit the nail on the head with bringing up SMTP and IMAP but for 
some reason that feels strange.


SMTP and IMAP already implement every feature you've described, as well 
as retries/failover/HA and a fully end-to-end secure transport (if 
installed properly). If you don't actually set them up to run as a public 
messaging interface but just as a cloud-local exchange, then you could 
get by with very low overhead for a massive throughput - it can very 
easily be run on a single machine for Sean's simplicity, and could just 
as easily be scaled out using well known techniques for public cloud 
sized deployments?


So why not use existing daemons that do this? You could still use the 
REST API you've got, but instead of writing it to a mongo backend and 
trying to implement all of the things that already exist in SMTP/IMAP - 
you could just have them front to it. You could even bypass normal 
delivery mechanisms and do 

Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-09 Thread Jay Pipes

On 09/04/2014 12:07 AM, Sumit Naiksatam wrote:

Hi,

There's been a lot of lively discussion on GBP a few weeks back and we
wanted to drive forward the discussion on this a bit more. As you
might imagine, we're excited to move this forward so more people can
try it out.  Here are the options:

* Neutron feature branch: This presumably allows the GBP feature to be
developed independently, and will perhaps help in faster iterations.
There does seem to be a significant packaging issue [1] with this
approach that hasn’t been completely addressed.

* Neutron-incubator: This allows a path to graduate into Neutron, and
will be managed by the Neutron core team. That said, the proposal is
under discussion and there are still some open questions [2].

* Stackforge: This allows the GBP team to make rapid and iterative
progress, while still leveraging the OpenStack infra. It also provides
option of immediately exposing the existing implementation to early
adopters.

Each of the above options does not preclude moving to the other at a later time.

Which option do people think is more preferable?

(We could also discuss this in the weekly GBP IRC meeting on Thursday:
https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy)

Thanks!

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-August/044283.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2014-August/043577.html


Hi all,

IIRC, Kevin was saying to me in IRC that GBP really needs to live 
in-tree due to it needing access to various internal plugin points and 
to be able to call across different plugin layers/drivers inside of Neutron.


If this is the case, how would the stackforge GBP project work if it 
wasn't a fork of Neutron in its entirety?


Just curious,
-jay



Re: [openstack-dev] [Designate][Horizon][Tempest][DevStack] Supporting code for incubated projects

2014-09-09 Thread Sean Dague
On 09/09/2014 12:23 PM, Mac Innes, Kiall wrote:
>> -Original Message-
>> From: Sean Dague [mailto:s...@dague.net]
>> Sent: 09 September 2014 15:13
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [Designate][Horizon][Tempest][DevStack]
>> Supporting code for incubated projects
>>
>> On 09/09/2014 07:58 AM, Mac Innes, Kiall wrote:
>>> Hi all,
>>>
>>>
>>>
>>> While requesting a openstack/designate-dashboard project from the TC/
>>>
>>> Infra – The topic of why Designate panels, as an incubated project,
>>> can’t be merged into openstack/horizon was raised.
>>>
>>>
>>>
>>> In the openstack/governance review[1], Russell asked:
>>>
>>>
>>>
>>> Hm, I think we should discuss this with the horizon team, then. We are
>>> telling projects that incubation is a key time for integrating
>>> with other
>>> projects. I would expect merging horizon integration into horizon itself
>>> to be a part of that.
>>>
>>>
>>>
>>> With this in mind – I’d like to start a conversation with the Horizon,
>>> Tempest and DevStack teams around merging of code to support
>> Incubated
>>> projects – What are the drawbacks?, Why is this currently frowned upon
>>> by the various teams? And – What do each of the parties believe is the
>>> Right Way forward?
>>
>> I thought the Devstack and Tempest cases were pretty clear: once things are
>> incubated they are fair game to get added in.
>>
>> Devstack is usually the right starting point, as that makes it easy for
>> everyone to have access to the code, and makes the testability by other
>> systems viable.
>>
>> I currently don't see any designate changes that are passing Jenkins that
>> need to be addressed, is there something that got missed?
>>
>>  -Sean
> 
> From previous discussions with Tempest team members, we had been informed
> this was not the case - this could have been miscommunication.

Once officially incubated it should be fine to add tempest tests. There
are tests for queues and baremetal in there. DNS should be the same.

> For DevStack, I never even asked - After two "Not till you're integrated"'s,
> I made the assumption DevStack would be the same.

Devstack policy is basically once incubated, it's all good. We try to
resist every stackforge project under the sun trying to put config code
in devstack, however we've got a plugin interface so that projects can
maintain their devstack integration in their own tree.
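For reference, a sketch of what consuming that plugin interface looks like (illustrative only: the repository URL and plugin name are placeholders, and the project must ship a devstack/plugin.sh in its own tree):

```ini
# In devstack's local.conf -- enable an out-of-tree plugin by name,
# git URL and branch (all values here are placeholders).
[[local|localrc]]
enable_plugin designate https://git.openstack.org/openstack/designate master
```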

> I'll get our DevStack parts submitted for review ASAP if that's not the case!
> 
> The Horizon integration though, the spark for this conversation, still stands.
> 
> Thanks,
> Kiall


-- 
Sean Dague
http://dague.net



[openstack-dev] [TripleO] Propose adding StevenK to core reviewers

2014-09-09 Thread Gregory Haynes
Hello everyone!

I have been working on a meta-review of StevenK's reviews and I would
like to propose him as a new member of our core team.

As I'm sure many have noticed, he has been above our stats requirements
for several months now. More importantly, he has been reviewing a wide
breadth of topics and seems to have a strong understanding of our code
base. He also seems to be doing a great job at providing valuable
feedback and being attentive to responses on his reviews.

As such, I think he would make a great addition to our core team. Can
the other core team members please reply with your votes if you agree or
disagree.

Thanks!
Greg



Re: [openstack-dev] [Heat] Request for python-heatclient project to adopt heat-translator

2014-09-09 Thread Sahdev P Zala
Hi Steve, sure. Please see my reply in-line. 

Thanks! 

Regards, 
Sahdev




From:   Steven Hardy 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   09/09/2014 05:55 AM
Subject:Re: [openstack-dev] [Heat] Request for python-heatclient 
project to adopt heat-translator



Hi Sahdev,

On Tue, Sep 02, 2014 at 11:52:30AM -0400, Sahdev P Zala wrote:
>Hello guys,
> 
>As you know, the heat-translator project was started early this year with
>an aim to create a tool to translate non-Heat templates to HOT. It is a
>StackForge project licensed under Apache 2. We have made good progress
>with its development and a demo was given at the OpenStack 2014 Atlanta
>summit during a half-a-day session that was dedicated to heat-translator
>project and related TOSCA discussion. Currently the development and
>testing is done with the TOSCA template format but the tool is designed to
>be generic enough to work with templates other than TOSCA. There are five
>developers actively contributing to the development. In addition, all
>current Heat core members are already core members of the heat-translator
>project.
> 
>Recently, I attended Heat Mid Cycle Meet Up for Juno in Raleigh and
>updated the attendees on heat-translator project and ongoing progress. I
>also requested everyone for a formal adoption of the project in the
>python-heatclient and the consensus was that it is the right thing to do.
>Also when the project was started, the initial plan was to make it
>available in python-heatclient. Hereby, the heat-translator team would
>like to make a request to have the heat-translator project to be adopted
>by the python-heatclient/Heat program.

Obviously I wasn't at the meetup, so I may be missing some context here,
but can you answer some questions please?

- Is the scope for heat-translator only tosca simple-profile, or also the
  original more heavyweight tosca too?

Heat-translator is designed to be used to translate any non-Heat templates 
to HOT. However, current development is done for the TOSCA simple-profile 
only and there is no plan to use it for heavyweight TOSCA.

- If it's only tosca simple-profile, has any thought been given to moving
  towards implementing support via a template parser plugin, rather than
  baking the translation into the client?

At the meetup, Randall and Zane also mentioned that we should dig into the 
plugin and see if that can also be used for TOSCA. However, we all agreed 
that translation is still good to have and if plugin can be used that will 
be another option for TOSCA users.

While I see this effort as valuable, integrating the translator into the
client seems the worst of all worlds to me:

- Any users/services not interfacing with heat via python-heatclient can't 
use it

With python-heatclient, the translator will just add a command line option, 
i.e. something like ‘heat-translator  
', which will produce the output as HOT. The user 
needs to take the translated template and run it with Heat.

- You preempt the decision about integration with any higher level 
services,
  e.g. Mistral, Murano, Solum, if you bake in the translator at the
  heat level.

Hopefully that won't happen. The translator can be a simple integration at 
the client level and provided just as a command line option without any 
added complexity. 

The scope question is probably key here - if you think the translator can
do (or will be able to do) a 100% non-lossy conversion to HOT using only
Heat, maybe it's time we considered discussing integration into Heat the
service rather than the client.

   When the project was started, there was a discussion with Steve Baker 
and others on IRC that initially it is a good idea to provide the 
translator tool to users via python-heatclient and eventually, as the tool 
matures, we can discuss making it available in the Heat engine to 
provide seamless deployment of translated templates.

Conversely, if you're going to need other services to fully implement the
spec, it probably makes sense for the translator to remain layered over
heat (or integrated with another project which is layered over heat).

  The translator project has no dependency on other services, and we hope 
to keep it that way in the future.


I hope my answers make sense. Please let me know if you have further 
questions.


Thanks!

Steve



Re: [openstack-dev] [Heat] convergence flow diagrams

2014-09-09 Thread Tyagi, Ishant
Thanks Angus for your comments.

Your design is almost the same as this one. I also agree that only the engine 
should have DB access, via DB RPC APIs. I will update the diagrams with this change.

Regarding the worker communicating with the observer, the flow would be like this:

* The engine tells the worker to create or update a resource.

* The worker then just calls the resource plugin's handle_create / handle_update 
etc., calls the observer RPC API to observe the resource (check_create_complete), 
and then exits.

* The observer then checks the resource status until it reaches the 
desired state.

* The main engine then gets the notification back from the observer and then 
schedules the next parent resource to converge.

If the observer and worker are independent entities, then who will invoke the 
observer to check the resource state?

-Ishant
From: Angus Salkeld [mailto:asalk...@mirantis.com]
Sent: Tuesday, September 9, 2014 5:45 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] convergence flow diagrams

On Mon, Sep 8, 2014 at 11:22 PM, Tyagi, Ishant 
mailto:ishant.ty...@hp.com>> wrote:
Hi All,

As per the heat mid cycle meetup whiteboard, we have created the flowchart and 
sequence diagram for the convergence . Can you please review these diagrams and 
provide your feedback?

https://www.dropbox.com/sh/i8qbjtgfdxn4zx4/AAC6J-Nps8J12TzfuCut49ioa?dl=0

Great! Good to see something.

I was expecting something like:
engine ~= nova-conductor (it's the only process that talks to the db - 
makes upgrading easier)
observer - purely gets the actual state/properties and writes them to the db 
(via engine)
worker - has a "job" queue and grinds away at running those jobs (resource actions)

The engine then "triggers" on differences between goal and actual state, creates 
a job, and sends it to the job queue.
- so, on create it sees there is no actual state so it sends a create job for 
the first resource to the worker queue
- when the observer writes the new state for that resource it triggers the next 
resource create in the dependency tree.
- like any system that relies on notifications, we need timeouts, and each stack 
needs a periodic "notification" to make sure
  that progress is being made, or to notify the user that no progress is being made.
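That trigger loop can be sketched in a few lines (a toy illustration, not Heat code): the engine diffs the goal state against observer-reported state and enqueues a job per divergent resource.

```python
import queue

goal = {'server': 'ACTIVE', 'volume': 'ACTIVE'}   # desired stack state
actual = {}                                       # observer-reported state
jobs = queue.Queue()                              # worker job queue

def converge():
    # Engine: compare goal vs. observed state, enqueue work for the diff.
    for res, want in sorted(goal.items()):
        if actual.get(res) != want:
            jobs.put((res, want))

converge()                    # nothing observed yet: two jobs queued
print(jobs.qsize())           # 2

actual['server'] = 'ACTIVE'   # observer writes back the actual state
jobs = queue.Queue()          # (drained by workers in a real system)
converge()                    # only the volume still differs
print(jobs.qsize())           # 1
```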

One question about the observer (in either my setup or the one in the diagram).
- If we are relying on RPC notifications, all the observer processes will 
receive a copy of the same notification
  (say, nova create end); how are we going to decide which one does anything 
with it?
  We don't want 10 observers getting more detailed info from nova and then 
writing to the db

In your diagram worker is communicating with observer, which seems odd to me. I 
thought observer and worker were very
independent entities.

In my setup there are fewer APIs to worry about too:
- RPC api for the engine (access to the db)
- RPC api for sending a job to the worker
- the plugin API
- the observer might need an api just for the engine to tell it to start/stop 
observing a stack
-Angus


Thanks,
Ishant




Re: [openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-09 Thread Doug Hellmann

On Sep 9, 2014, at 10:51 AM, Sean Dague  wrote:

> On 09/09/2014 10:41 AM, Doug Hellmann wrote:
>> 
>> On Sep 8, 2014, at 8:18 PM, James E. Blair  wrote:
>> 
>>> Sean Dague  writes:
>>> 
 The crux of the issue is that zookeeper python modules are C extensions.
 So you have to either install from packages (which we don't do in unit
 tests) or install from pip, which means forcing zookeeper dev packages
 locally. Realistically this is the same issue we end up with for mysql
 and pg, but given their wider usage we just forced that pain on developers.
>>> ...
 Which feels like we need some decoupling on our requirements vs. tox
 targets to get there. CC to Monty and Clark as our super awesome tox
 hackers to help figure out if there is a path forward here that makes 
 sense.
>>> 
>>> From a technical standpoint, all we need to do to make this work is to
>>> add the zookeeper python client bindings to (test-)requirements.txt.
>>> But as you point out, that makes it more difficult for developers who
>>> want to run unit tests locally without having the requisite libraries
>>> and header files installed.
>> 
>> I don’t think I’ve ever tried to run any of our unit tests on a box where I 
>> hadn’t also previously run devstack to install all of those sorts of 
>> dependencies. Is that unusual?
> 
> It is for Linux users, running local unit tests is the norm for me.

To be clear, I run the tests on the same host where I ran devstack, not in a 
VM. I just use devstack as a way to bootstrap all of the libraries needed for 
the unit test dependencies. I guess I’m just being lazy. :-)

Doug
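One possible shape for the decoupling Sean asks about (a sketch, nothing agreed in the thread) is an opt-in tox environment that carries the C-extension bindings, so the default unit test envs stay free of them:

```ini
# tox.ini fragment (illustrative; 'zkpython' stands in for whichever
# zookeeper C-extension bindings end up being chosen, and the test
# command is a generic placeholder).
[testenv:zookeeper]
deps =
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt
    zkpython
commands = python -m unittest discover
```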




Re: [openstack-dev] [Heat] Request for python-heatclient project to adopt heat-translator

2014-09-09 Thread Sahdev P Zala
Hi Angus, please see my reply in-line. 

Thanks!

Regards, 
Sahdev




From:   Angus Salkeld 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   09/09/2014 12:25 AM
Subject:Re: [openstack-dev] [Heat] Request for python-heatclient 
project to adopt heat-translator



Hi

Would the translator ever need to talk to something like Mistral for 
workflow?

   I do not think so. There is no current plan that needs the 
heat-translator to talk to Mistral or other services.

If so does it make sense to hook the translator into heat client.

(this might not be an issue, just asking).

-Angus

On Wed, Sep 3, 2014 at 1:52 AM, Sahdev P Zala  wrote:
Hello guys, 
  
As you know, the heat-translator project was started early this year with 
an aim to create a tool to translate non-Heat templates to HOT. It is a 
StackForge project licensed under Apache 2. We have made good progress 
with its development and a demo was given at the OpenStack 2014 Atlanta 
summit during a half-a-day session that was dedicated to heat-translator 
project and related TOSCA discussion. Currently the development and 
testing is done with the TOSCA template format but the tool is designed to 
be generic enough to work with templates other than TOSCA. There are five 
developers actively contributing to the development. In addition, all 
current Heat core members are already core members of the heat-translator 
project. 
Recently, I attended Heat Mid Cycle Meet Up for Juno in Raleigh and 
updated the attendees on heat-translator project and ongoing progress. I 
also requested everyone for a formal adoption of the project in the 
python-heatclient and the consensus was that it is the right thing to do. 
Also when the project was started, the initial plan was to make it 
available in python-heatclient. Hereby, the heat-translator team would 
like to make a request to have the heat-translator project to be adopted 
by the python-heatclient/Heat program. 
Below are some of links related to the project, 
https://github.com/stackforge/heat-translator 
https://launchpad.net/heat-translator 
https://blueprints.launchpad.net/heat-translator 
https://bugs.launchpad.net/heat-translator 
http://heat-translator.readthedocs.org/ (in progress)
Thanks! 

Regards, 
Sahdev Zala 
IBM SWG Standards Strategy 
Durham, NC 
(919)486-2915 T/L: 526-2915 



Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-09 Thread Mike Scherbakov
> What we need to achieve that is have 2 build series based on Fuel
master: one with Icehouse packages, and one with Juno, and, as Mike
proposed, keep our manifests backwards compatible with Icehouse.
Exactly. Our Fuel CI can do 4 builds against puppet modules: 2 voting, with
Icehouse packages; 2 non-voting, with Juno packages.

Then, I'd suggest to create ISO with 2 releases (Icehouse, Juno) actually
before Juno becomes stable. We will be able to run 2 sets of BVTs (against
Icehouse and Juno), and it means that we will be able to see almost
immediately if something in nailgun/astute/puppet integration broke. For
Juno builds it's going to be all red initially.

Another suggestion would be to lower the bar for the green switch in Juno BVTs:
first, when it passes deployment; and then, when it finally passes OSTF.

I'd like to hear QA & DevOps opinion on all the above. Immediately we would
need just standard stuff which is in checklists for OSCI & DevOps teams,
and ideally soon after that - ability to have Fuel CI running 4 builds, not
2, against our master, as mentioned above.

On Tue, Sep 9, 2014 at 9:28 PM, Roman Vyalov  wrote:

> All OSCI action items to prepare the HCF checklist have been done
>
>
> On Tue, Sep 9, 2014 at 6:27 PM, Mike Scherbakov 
> wrote:
>
>> Thanks Alexandra.
>>
>> We land a few patches a day currently, so I think we can open stable
>> branch. If we see no serious objections in next 12 hours, let's do it. We
>> would need to immediately notify everyone in mailing list - that for every
>> patch for 5.1, it should go first to master, and then to stable/5.1.
>>
>> Is everything ready from DevOps, OSCI (packaging) side to do this? Fuel
>> CI, OBS, etc.?
>>
>> On Tue, Sep 9, 2014 at 2:28 PM, Aleksandra Fedorova <
>> afedor...@mirantis.com> wrote:
>>
>>> As I understand your proposal, we need to split our HCF milestone into
>>> two check points: Branching Point and HCF itself.
>>>
>>> Branching point should happen somewhere in between SCF and HCF. And
>>> though It may coincide with HCF, it needs its own list of requirements.
>>> This will give us the possibility to untie two events and make a separate
>>> decision on branching without enforcing all HCF criteria.
>>>
>>> From the DevOps point of view it changes almost nothing, it just adds a
>>> bit more discussion items on the management side and slight modifications
>>> to our checklists.
>>>
>>>
>>> On Tue, Sep 9, 2014 at 5:55 AM, Dmitry Borodaenko <
>>> dborodae...@mirantis.com> wrote:
>>>
 TL;DR: Yes, our work on 6.0 features is currently blocked and it is
 becoming a major problem. No, I don't think we should create
 pre-release or feature branches. Instead, we should create stable/5.1
 branches and open master for 6.0 work.

 We have reached a point in 5.1 release cycle where the scope of issues
 we are willing to address in this release is narrow enough to not
 require full attention of the whole team. We have engineers working on
 6.0 features, and their work is essentially blocked until they have
 somewhere to commit their changes.

 Simply creating new branches is not even close to solving this
 problem: we have a whole CI infrastructure around every active release
 series (currently 5.1, 5.0, 4.1), including test jobs for gerrit
 commits, package repository mirrors updates, ISO image builds, smoke,
 build verification, and swarm tests for ISO images, documentation
 builds, etc. A branch without all that infrastructure isn't any better
 than current status quo: every developer tracking their own 6.0 work
 locally.

 Unrelated to all that, we also had a lot of very negative experience
 with feature branches in the past [0] [1], which is why we have
 decided to follow the OpenStack branching strategy: commit all feature
 changes directly to master and track bugfixes for stable releases in
 stable/* branches.

 [0] https://lists.launchpad.net/fuel-dev/msg00127.html
 [1] https://lists.launchpad.net/fuel-dev/msg00028.html

 I'm also against declaring a "hard code freeze with exceptions", HCF
 should remain tied to our ability to declare a release candidate. If
 we can't release with the bugs we already know about, declaring HCF
 before fixing these bugs would be an empty gesture.

 Creating stable/5.1 now instead of waiting for hard code freeze for
 5.1 will cost us two things:

 1) DevOps team will have to update our CI infrastructure for one more
 release series. It's something we have to do for 6.0 sooner or later,
 so this may be a disruption, but not an additional effort.

 2) All commits targeted for 5.1 will have to be proposed for two
 branches (master and stable/5.1) instead of just one (master). This
 will require additional effort, but I think that it is significantly
 smaller than the cost of spinning our wheels on 6.0 efforts.

 -DmitryB
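The two-branch workflow described above can be sketched as follows (my illustration, not from the thread; repo contents and branch names are made up -- the real flow goes through Gerrit via git-review):

```shell
# Sketch of "land in master first, then backport to stable/5.1".
set -e
work=$(mktemp -d)
git init -q "$work/repo" && cd "$work/repo"
git config user.email fuel@example.com
git config user.name fuel
git checkout -q -b master 2>/dev/null || true
echo base > manifest.pp
git add manifest.pp && git commit -qm "initial manifest"
git branch stable/5.1                 # release branch forks here
echo fix >> manifest.pp
git commit -qam "Fix deployment bug"  # 1) fix lands in master
sha=$(git rev-parse HEAD)
git checkout -q stable/5.1
git cherry-pick -x "$sha" >/dev/null  # 2) same fix proposed for stable
git log --oneline -1
```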


 

Re: [openstack-dev] [neutron] [nova] non-deterministic gate failures due to unclosed eventlet Timeouts

2014-09-09 Thread Kevin L. Mitchell
On Mon, 2014-09-08 at 17:25 -0400, Jay Pipes wrote:
> > Thanks, that might be what's causing this timeout/gate failure in the
> > nova unit tests. [1]
> >
> > [1] https://bugs.launchpad.net/nova/+bug/1357578
> 
> Indeed, there are a couple places where eventlet.timeout.Timeout() seems 
> to be used in the test suite without a context manager or calling 
> close() explicitly:
> 
> tests/virt/libvirt/test_driver.py
> 8925:raise eventlet.timeout.Timeout()
> 
> tests/virt/hyperv/test_vmops.py
> 196:mock_with_timeout.side_effect = etimeout.Timeout()

I looked into that too, but the docs for Timeout indicate that it's an
Exception subclass, and passing it no args doesn't seem to start the
timer running.  I think you have to explicitly pass a duration value for
Timeout to enable its timeout behavior, but that's just a guess on my
part at this point…
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-09 Thread Roman Vyalov
All OSCI action items on the HCF preparation checklist have been done

On Tue, Sep 9, 2014 at 6:27 PM, Mike Scherbakov 
wrote:

> Thanks Alexandra.
>
> We land a few patches a day currently, so I think we can open stable
> branch. If we see no serious objections in next 12 hours, let's do it. We
> would need to immediately notify everyone in mailing list - that for every
> patch for 5.1, it should go first to master, and then to stable/5.1.
>
> Is everything ready from DevOps, OSCI (packaging) side to do this? Fuel
> CI, OBS, etc.?
>
> On Tue, Sep 9, 2014 at 2:28 PM, Aleksandra Fedorova <
> afedor...@mirantis.com> wrote:
>
>> As I understand your proposal, we need to split our HCF milestone into
>> two check points: Branching Point and HCF itself.
>>
>> Branching Point should happen somewhere in between SCF and HCF. And
>> though it may coincide with HCF, it needs its own list of requirements.
>> This will give us the possibility to untie the two events and make a separate
>> decision on branching without enforcing all HCF criteria.
>>
>> From the DevOps point of view it changes almost nothing; it just adds a
>> few more discussion items on the management side and slight modifications
>> to our checklists.
>>
>>
>> On Tue, Sep 9, 2014 at 5:55 AM, Dmitry Borodaenko <
>> dborodae...@mirantis.com> wrote:
>>
>>> TL;DR: Yes, our work on 6.0 features is currently blocked and it is
>>> becoming a major problem. No, I don't think we should create
>>> pre-release or feature branches. Instead, we should create stable/5.1
>>> branches and open master for 6.0 work.
>>>
>>> We have reached a point in 5.1 release cycle where the scope of issues
>>> we are willing to address in this release is narrow enough to not
>>> require full attention of the whole team. We have engineers working on
>>> 6.0 features, and their work is essentially blocked until they have
>>> somewhere to commit their changes.
>>>
>>> Simply creating new branches is not even close to solving this
>>> problem: we have a whole CI infrastructure around every active release
>>> series (currently 5.1, 5.0, 4.1), including test jobs for gerrit
>>> commits, package repository mirrors updates, ISO image builds, smoke,
>>> build verification, and swarm tests for ISO images, documentation
>>> builds, etc. A branch without all that infrastructure isn't any better
>>> than current status quo: every developer tracking their own 6.0 work
>>> locally.
>>>
>>> Unrelated to all that, we also had a lot of very negative experience
>>> with feature branches in the past [0] [1], which is why we have
>>> decided to follow the OpenStack branching strategy: commit all feature
>>> changes directly to master and track bugfixes for stable releases in
>>> stable/* branches.
>>>
>>> [0] https://lists.launchpad.net/fuel-dev/msg00127.html
>>> [1] https://lists.launchpad.net/fuel-dev/msg00028.html
>>>
>>> I'm also against declaring a "hard code freeze with exceptions", HCF
>>> should remain tied to our ability to declare a release candidate. If
>>> we can't release with the bugs we already know about, declaring HCF
>>> before fixing these bugs would be an empty gesture.
>>>
>>> Creating stable/5.1 now instead of waiting for hard code freeze for
>>> 5.1 will cost us two things:
>>>
>>> 1) DevOps team will have to update our CI infrastructure for one more
>>> release series. It's something we have to do for 6.0 sooner or later,
>>> so this may be a disruption, but not an additional effort.
>>>
>>> 2) All commits targeted for 5.1 will have to be proposed for two
>>> branches (master and stable/5.1) instead of just one (master). This
>>> will require additional effort, but I think that it is significantly
>>> smaller than the cost of spinning our wheels on 6.0 efforts.
>>>
>>> -DmitryB
>>>
>>>
>>> On Mon, Sep 8, 2014 at 10:10 AM, Dmitry Mescheryakov
>>>  wrote:
>>> > Hello Fuelers,
>>> >
>>> > Right now we have the following policy in place: the branches for a
>>> > release are opened only after its 'parent' release have reached hard
>>> > code freeze (HCF). Say, 5.1 release is parent releases for 5.1.1 and
>>> > 6.0.
>>> >
>>> > And that is the problem: if the parent release is delayed, we can't
>>> > properly start development of a child release because we don't have
>>> > branches to commit to. That is the current issue with 6.0: we already started
>>> > to work on pushing Juno into 6.0, but if we are to make changes to
>>> > our deployment code we have nowhere to store them.
>>> >
>>> > IMHO the issue could easily be resolved by creation of pre-release
>>> > branches, which are merged together with parent branches once the
>>> > parent reaches HCF. Say, we use branch 'pre-6.0' for initial
>>> > development of 6.0. Once 5.1 reaches HCF, we merge pre-6.0 into master
>>> > and continue development here. After that pre-6.0 is abandoned.
>>> >
>>> > What do you think?
>>> >
>>> > Thanks,
>>> >
>>> > Dmitry
>>> >

Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-09 Thread Dmitry Borodaenko
+1 on adding flow-based criteria for HCF and the Branching Point. Tracking
down how many bugs were reported in LP on a given day is a bit tricky,
so I think in both cases it would be easier to rely on the flow of commits
(which after Soft Code Freeze becomes a direct indicator of how many
bugs are fixed per day).

For HCF, not having merged any code changes for 24 hours (on top of
meeting the bug count criteria) would be a solid proof of code
stabilization.

For Branching Point, the threshold has to be more relaxed, for
example, less than 10 commits across all our code repositories within
last 24 hours.
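The two thresholds above could be computed mechanically from merge timestamps; a rough sketch (the criteria values come from this thread, the merge history is made up):

```python
from datetime import datetime, timedelta

def commit_flow(merge_times, now, window_hours=24):
    """Count commits merged within the trailing window."""
    cutoff = now - timedelta(hours=window_hours)
    return sum(1 for t in merge_times if t > cutoff)

now = datetime(2014, 9, 9, 12, 0)
# Hypothetical merge history: two recent commits, two older ones.
merges = [now - timedelta(hours=h) for h in (2, 5, 30, 48)]

flow = commit_flow(merges, now)
hcf_ready = flow == 0        # nothing merged for 24h (plus bug criteria)
branching_ready = flow < 10  # relaxed threshold for the Branching Point
```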

I agree that landing Juno packages right now is going to be disruptive. I
continue to think that the only way to address this long term is to
have a parallel OpenStack-master-based Fuel/MOS build series.
What we need to achieve that is to have 2 build series based on Fuel
master: one with Icehouse packages and one with Juno, and, as Mike
proposed, keep our manifests backwards compatible with Icehouse. That
way, we can land changes unrelated to Juno into Fuel master and debug
them using the Icehouse-based build series, and land changes needed
for Juno integration into the same Fuel master branch and debug them
in the Juno-based build series.

In order to minimize the impact of non-Juno-related changes on Juno
integration, we should minimize breakage of Icehouse-based builds. Two
ways to address that problem would be to serialize landing of major
changes (so that only one feature at a time can be blamed for
breakage), and aggressively revert changes that cause BVT failures if
they are not fixed within 24 hours.

As soon as Juno builds reach the same level of stability as Icehouse,
we can drop the Icehouse-based builds and introduce Kilo-based builds
instead.

Thoughts?
-DmitryB


On Tue, Sep 9, 2014 at 7:27 AM, Mike Scherbakov
 wrote:
> Thanks Alexandra.
>
> We land a few patches a day currently, so I think we can open the stable
> branch. If we see no serious objections in the next 12 hours, let's do it. We
> would need to immediately notify everyone on the mailing list that every
> patch for 5.1 should go first to master, and then to stable/5.1.
>
> Is everything ready from DevOps, OSCI (packaging) side to do this? Fuel CI,
> OBS, etc.?
>
> On Tue, Sep 9, 2014 at 2:28 PM, Aleksandra Fedorova 
> wrote:
>>
>> As I understand your proposal, we need to split our HCF milestone into two
>> check points: Branching Point and HCF itself.
>>
>> Branching Point should happen somewhere in between SCF and HCF. And though
>> it may coincide with HCF, it needs its own list of requirements. This will
>> give us the possibility to untie the two events and make a separate decision on
>> branching without enforcing all HCF criteria.
>>
>> From the DevOps point of view it changes almost nothing; it just adds a
>> few more discussion items on the management side and slight modifications to
>> our checklists.
>>
>>
>> On Tue, Sep 9, 2014 at 5:55 AM, Dmitry Borodaenko
>>  wrote:
>>>
>>> TL;DR: Yes, our work on 6.0 features is currently blocked and it is
>>> becoming a major problem. No, I don't think we should create
>>> pre-release or feature branches. Instead, we should create stable/5.1
>>> branches and open master for 6.0 work.
>>>
>>> We have reached a point in 5.1 release cycle where the scope of issues
>>> we are willing to address in this release is narrow enough to not
>>> require full attention of the whole team. We have engineers working on
>>> 6.0 features, and their work is essentially blocked until they have
>>> somewhere to commit their changes.
>>>
>>> Simply creating new branches is not even close to solving this
>>> problem: we have a whole CI infrastructure around every active release
>>> series (currently 5.1, 5.0, 4.1), including test jobs for gerrit
>>> commits, package repository mirrors updates, ISO image builds, smoke,
>>> build verification, and swarm tests for ISO images, documentation
>>> builds, etc. A branch without all that infrastructure isn't any better
>>> than current status quo: every developer tracking their own 6.0 work
>>> locally.
>>>
>>> Unrelated to all that, we also had a lot of very negative experience
>>> with feature branches in the past [0] [1], which is why we have
>>> decided to follow the OpenStack branching strategy: commit all feature
>>> changes directly to master and track bugfixes for stable releases in
>>> stable/* branches.
>>>
>>> [0] https://lists.launchpad.net/fuel-dev/msg00127.html
>>> [1] https://lists.launchpad.net/fuel-dev/msg00028.html
>>>
>>> I'm also against declaring a "hard code freeze with exceptions", HCF
>>> should remain tied to our ability to declare a release candidate. If
>>> we can't release with the bugs we already know about, declaring HCF
>>> before fixing these bugs would be an empty gesture.
>>>
>>> Creating stable/5.1 now instead of waiting for hard code freeze for
>>> 5.1 will cost us two things:
>>>
>>> 1) DevOps team will have to update our CI infrastructure for one

Re: [openstack-dev] [Glance][Nova][All] requests 2.4.0 breaks glanceclient

2014-09-09 Thread Ian Cordasco


On 9/3/14, 3:59 PM, "Ian Cordasco"  wrote:

>On 9/3/14, 2:20 PM, "Sean Dague"  wrote:
>
>>On 09/03/2014 03:12 PM, Gregory Haynes wrote:
>>> Excerpts from Kuvaja, Erno's message of 2014-09-03 12:30:08 +:
 Hi All,

 While investigating glanceclient gating issues we narrowed it down to
requests 2.4.0, which was released 2014-08-29. Urllib3 seems to be
raising a new ProtocolError which does not get caught and breaks at
least glanceclient.
 The following error can be seen on the console: "ProtocolError: ('Connection
aborted.', gaierror(-2, 'Name or service not known'))".

 Unfortunately we hit this issue just before the freeze. Apparently
this breaks novaclient as well, and there is a change
(https://review.openstack.org/#/c/118332/) proposed to requirements to
limit the version to <2.4.0.

 Are there any other projects using requirements and seeing issues with
the latest version?
>>> 
>>> We've run into this in tripleo, specifically with os-collect-config.
>>> Here's the upstream bug:
>>> https://github.com/kennethreitz/requests/issues/2192
>>> 
>>> We had to pin it in our project to unwedge CI (otherwise we would be
>>> blocked on cutting an os-collect-config release).
>>
>>Ok, given the details of the bug, I'd be ok with a != 2.4.0, it looks
>>like they are working on a merge now.
>
>There’s a patch waiting to be merged here:
>https://github.com/kennethreitz/requests/pull/2193. Unfortunately, it
>might take a while for Kenneth to show up, merge it, and cut a minor
>release.

For what it’s worth, 2.4.1 was released by the requests team this morning
fixing this issue.
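For reference, the exclusion pin discussed above would look something like this in requirements.txt (the lower bound shown is illustrative, not necessarily the value from review 118332):

```
# blacklist only the broken release; 2.4.1 fixes the uncaught ProtocolError
requests>=1.1,!=2.4.0
```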



Re: [openstack-dev] [Designate][Horizon][Tempest][DevStack] Supporting code for incubated projects

2014-09-09 Thread Mac Innes, Kiall
> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: 09 September 2014 15:13
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Designate][Horizon][Tempest][DevStack]
> Supporting code for incubated projects
> 
> On 09/09/2014 07:58 AM, Mac Innes, Kiall wrote:
> > Hi all,
> >
> >
> >
> > While requesting an openstack/designate-dashboard project from the TC/
> >
> > Infra – The topic of why Designate panels, as an incubated project,
> > can’t be merged into openstack/horizon was raised.
> >
> >
> >
> > In the openstack/governance review[1], Russell asked:
> >
> >
> >
> > Hm, I think we should discuss this with the horizon team, then. We are
> > telling projects that incubation is a key time for integrating
> > with other
> > projects. I would expect merging horizon integration into horizon itself
> > to be a part of that.
> >
> >
> >
> > With this in mind – I’d like to start a conversation with the Horizon,
> > Tempest and DevStack teams around merging of code to support
> Incubated
> > projects – What are the drawbacks?, Why is this currently frowned upon
> > by the various teams? And – What do each of the parties believe is the
> > Right Way forward?
> 
> I thought the Devstack and Tempest cases were pretty clear: once things are
> incubated, they are fair game to get added in.
>
> Devstack is usually the right starting point, as that makes it easy for
> everyone to have access to the code, and makes testing by other systems
> viable.
>
> I currently don't see any designate changes that are passing Jenkins that
> need to be addressed. Is there something that got missed?
> 
>   -Sean

From previous discussions with Tempest team members, we had been informed
this was not the case - this could have been a miscommunication.

For DevStack, I never even asked - after two "Not till you're integrated"'s, I
made the assumption DevStack would be the same.

I'll get our DevStack parts submitted for review ASAP if that's not the case!

The Horizon integration though, the spark for this conversation, still stands.

Thanks,
Kiall


[openstack-dev] [FUEL] Re: SSL in Fuel.

2014-09-09 Thread Stanislaw Bogatkin
I think that since we have 3 blueprints that each cover some SSL
functionality, we can discuss them here.
My vision of SSL in Fuel splits into 3 parts:

A) We need to implement blueprint [1], because it is the only way to generate
certificates.
How I see that:
1.0 We sync puppet-openssl from upstream and adapt it for Fuel tasks.
1.1 We create a docker container (we already have many, so a containerized
CA should work well) with OpenSSL and puppet manifests in it.
1.2 When the container starts for the first time, it will create a CA that
will be stored on the master node.

Our workitems here are:
- Create the docker container
- Sync upstream puppet-openssl and adapt it for Fuel
- Write code to create the CA

B) We need to implement blueprint [2]. How I see that:
1.3 When the CA container starts for the first time and creates the CA, it
will check for a keypair for the master node (Fuel UI). If that keypair is
not found, the CA will create it, change the nginx conffile accordingly and
restart nginx on the master node.

Our workitems here are:
- Write code to check whether we already have a generated certificate, and
generate a new one if we have not.

C) Then we need to implement blueprint [3].
For the next step we have 2 options:
  First:
1.3 When we create a new cluster, we know all the information needed to
create the new keypair(s). When the user presses the "Deploy changes"
button, we will create the new keypair(s).
Q: The main question here is - how many keypairs will we create? One for every
service or one for all?
1.4 We will distribute the key(s) with the mcollective agent (similar to how
we sync puppet manifests from the master node to other nodes). After that,
the private key(s) will be deleted from the master node.
1.5 On the nodes, puppet will do all the work. We need to write some code
for that.
Pros of that method:
+ It's relatively simple; we can create clean and lucid code that
will be easy to support
Cons of that method:
- We need to send every private key over the network. We can reduce
this danger because we will already have passwordless sync over the network
between the master node and the other nodes, since we will generate ssh keys
for the nodes before we distribute any data at the deployment stage.

  Second:
1.3 When we create a new cluster, we do all the work the same way as we do
now, but after provisioning we will create a keypair on the first node, make
a csr for every service (or for one, if we create one certificate for all
services) and send that csr to the master node, where it will be signed and
the certificate sent back.
1.4 Puppet will do all the work on the nodes. We obviously need to write
some code for it. But we need to sync our keys across the controllers all
the same (and right now we don't have a reliable mechanism to do this).
Pros of that method:
+ I don't see any
Cons of that method:
- The code will be less obvious
- To generate a cert we need to rely on another service (the
network); if the network is unreachable, the csr signing will fail and the
whole following deployment will fail because we will not have a valid
certificate.

Independent of the choice (first or second), our workitems here are:
- Write code providing functions for generating keypairs and csr's, and for
signing them.
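The CA-plus-signing flow in steps A-C could be prototyped with the openssl CLI roughly as follows; all names and paths here are hypothetical, and the real implementation would live in puppet-openssl manifests:

```shell
set -e
WORK=$(mktemp -d) && cd "$WORK"

# A) one-time CA creation when the container first starts
openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -x509 -new -key ca.key -days 365 -subj "/CN=Fuel-CA" -out ca.crt

# B/C) per-service keypair and CSR, signed by the master-node CA
openssl genrsa -out service.key 2048 2>/dev/null
openssl req -new -key service.key -subj "/CN=horizon.example" -out service.csr
openssl x509 -req -in service.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out service.crt 2>/dev/null

# any node that trusts ca.crt now trusts the service certificate
openssl verify -CAfile ca.crt service.crt
```

In the second option from this thread, only `service.csr` would cross the network to the master node and only `service.crt` would come back, so the private key never leaves the node.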

1.5. When we have created the CA and the certs for services, we can do the
usual puppet apply to deploy changes.

Workitems at that stage:
- Sync upstream modules that already have big changes for SSL support (e.g.
HAProxy) and adapt those modules for Fuel usage
- Rewrite some of the manifests to support https

As I see it, at that phase we can say that Stage I of blueprint [3] is ready.
What can we do next? My thoughts are:

2. We need to think about the use case where a user wants to import their
own certificate for the Fuel or Openstack service endpoints (either because
users will then not see a warning popup in the browser about an untrusted
certificate, or because corporate policy says to do so). I see it this way:

2.1 We can provide the ability to change some keypairs (e.g. for Fuel UI
and Horizon)
Q: How many keys can the user change? We can provide the ability to change
all keys (but why would we need to do that?)
Q: If the user replaces some keys with their own - how will we check that it
is a valid user key and not something malicious? Or will we trust all keys
by default?
To do that we will need:
- Some UI changes to provide the ability to upload user keys
- Some Nailgun and Astute changes to operate with the new keys
- Some puppet manifest changes to apply the new keys and restart services
- Some changes to check the basic validity of uploaded keys (expiry date, key
length, key type)

3. We can give the user the ability to change the CA keypair (if the user
trusts the certificate from that keypair, then they automatically trust all
certificates signed with that CA; so if the user company's trusted CA issues
a cross-certificate for Fuel, the user automatically agrees that all
certificates in the services deployed by Fuel are trusted). To do that we
need:
- Some UI changes to provide the ability to upload a user CA key
- Some Nailgun and Astute changes to operate with the new CA keys
- Write so

Re: [openstack-dev] memory usage in devstack-gate (the oom-killer strikes again)

2014-09-09 Thread Mike Bayer
Yes. Guppy seems to have some nicer string formatting for this dump as well,
but I was unable to figure out how to get this string format to write to a
file; it seems like the tool is very geared towards interactive console use.
We should pick a nice memory formatter we like, there's a bunch of them, and
then add it to our standard toolset.


On Sep 9, 2014, at 10:35 AM, Doug Hellmann  wrote:

> 
> On Sep 8, 2014, at 8:12 PM, Mike Bayer  wrote:
> 
>> Hi All - 
>> 
>> Joe had me do some quick memory profiling on nova, just an FYI if anyone 
>> wants to play with this technique, I place a little bit of memory profiling 
>> code using Guppy into nova/api/__init__.py, or anywhere in your favorite app 
>> that will definitely get imported when the thing first runs:
>> 
>> from guppy import hpy
>> import signal
>> import datetime
>> 
>> def handler(signum, frame):
>> print "guppy memory dump"
>> 
>> fname = "/tmp/memory_%s.txt" % datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
>> prof = hpy().heap()
>> with open(fname, 'w') as handle:
>> prof.dump(handle)
>> del prof
>> 
>> signal.signal(signal.SIGUSR2, handler)
> 
> This looks like something we could build into our standard service startup 
> code. Maybe in 
> http://git.openstack.org/cgit/openstack/oslo-incubator/tree/openstack/common/service.py
>  for example?
> 
> Doug
> 
>> 
>> 
>> 
>> Then, run nova-api, run some API calls, then you hit the nova-api process 
>> with a SIGUSR2 signal, and it will dump a profile into /tmp/ like this:
>> 
>> http://paste.openstack.org/show/108536/
>> 
>> Now obviously everyone is like, oh boy memory lets go beat up SQLAlchemy 
>> again…..which is fine I can take it.  In that particular profile, there’s a 
>> bunch of SQLAlchemy stuff, but that is all structural to the classes that 
>> are mapped in Nova API, e.g. 52 classes with a total of 656 attributes 
>> mapped.   That stuff sets up once and doesn’t change.   If Nova used less 
>> ORM,  e.g. didn’t map everything, that would be less.  But in that profile 
>> there’s no “data” lying around.
>> 
>> But even if you don’t have that many objects resident, your Python process 
>> might still be using up a ton of memory.  The reason for this is that the 
>> cPython interpreter has a model where it will grab all the memory it needs 
>> to do something, a time consuming process by the way, but then it really 
>> doesn’t ever release it  (see 
>> http://effbot.org/pyfaq/why-doesnt-python-release-the-memory-when-i-delete-a-large-object.htm
>>  for the “classic” answer on this, things may have improved/modernized in 
>> 2.7 but I think this is still the general idea).
>> 
>> So in terms of SQLAlchemy, a good way to suck up a ton of memory all at once 
>> that probably won’t get released is to do this:
>> 
>> 1. fetching a full ORM object with all of its data
>> 
>> 2. fetching lots of them all at once
>> 
>> 
>> So to avoid doing that, the answer isn’t necessarily that simple.   The 
>> quick wins to loading full objects are to …not load the whole thing!   E.g. 
>> assuming we can get Openstack onto 0.9 in requirements.txt, we can start 
>> using load_only():
>> 
>> session.query(MyObject).options(load_only(“id”, “name”, “ip”))
>> 
>> or with any version, just load those columns - we should be using this as 
>> much as possible for any query that is row/time intensive and doesn’t need 
>> full ORM behaviors (like relationships, persistence):
>> 
>> session.query(MyObject.id, MyObject.name, MyObject.ip)
>> 
>> Another quick win, if we *really* need an ORM object, not a row, and we have 
>> to fetch a ton of them in one big result, is to fetch them using yield_per():
>> 
>>for obj in session.query(MyObject).yield_per(100):
>> # work with obj and then make sure to lose all references to it
>> 
>> yield_per() will dish out objects drawing from batches of the number you 
>> give it.   But it has two huge caveats: one is that it isn’t compatible with 
>> most forms of eager loading, except for many-to-one joined loads.  The other 
>> is that the DBAPI, e.g. like the MySQL driver, does *not* stream the rows; 
>> virtually all DBAPIs by default load a result set fully before you ever see 
>> the first row.  psycopg2 is one of the only DBAPIs that even offers a 
>> special mode to work around this (server side cursors).
>> 
>> Which means its even *better* to paginate result sets, so that you only ask 
>> the database for a chunk at a time, only storing at most a subset of objects 
>> in memory at once.  Pagination itself is tricky, if you are using a naive 
>> LIMIT/OFFSET approach, it takes awhile if you are working with a large 
>> OFFSET.  It’s better to SELECT into windows of data, where you can specify a 
>> start and end criteria (against an indexed column) for each window, like a 
>> timestamp.
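The windowed-SELECT idea above can be shown with plain SQL; a sqlite sketch with a made-up table, paginating on an indexed key instead of a growing OFFSET:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO instances (name) VALUES (?)",
                 [("vm-%d" % i,) for i in range(10)])

def windows(conn, size):
    """Yield result batches using keyset pagination: each query resumes
    past the last indexed key seen, so no OFFSET scan is needed."""
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, name FROM instances WHERE id > ? "
            "ORDER BY id LIMIT ?", (last_id, size)).fetchall()
        if not rows:
            return
        yield rows
        last_id = rows[-1][0]

batches = list(windows(conn, 4))
```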
>> 
>> Then of course, using Core only is another level of fastness/low memory.  
>> Though querying for individual columns with ORM is not 

[openstack-dev] [oslo] deferring incomplete juno specs

2014-09-09 Thread Doug Hellmann
We haven’t talked about the process for deferring incomplete specs. I submitted 
a review [1] to simply remove them from juno, with the understanding that their 
author (or a new owner) can resubmit them for kilo. Please look it over and 
vote on the review. If we need to have a process discussion, we can do that in 
this thread since it’s easier to follow than gerrit.

Doug

[1] https://review.openstack.org/120095


Re: [openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-09 Thread Sean Dague
On 09/09/2014 10:41 AM, Doug Hellmann wrote:
> 
> On Sep 8, 2014, at 8:18 PM, James E. Blair  wrote:
> 
>> Sean Dague  writes:
>>
>>> The crux of the issue is that zookeeper python modules are C extensions.
>>> So you have to either install from packages (which we don't do in unit
>>> tests) or install from pip, which means forcing zookeeper dev packages
>>> locally. Realistically this is the same issue we end up with for mysql
>>> and pg, but given their wider usage we just forced that pain on developers.
>> ...
>>> Which feels like we need some decoupling on our requirements vs. tox
>>> targets to get there. CC to Monty and Clark as our super awesome tox
>>> hackers to help figure out if there is a path forward here that makes sense.
>>
>> From a technical standpoint, all we need to do to make this work is to
>> add the zookeeper python client bindings to (test-)requirements.txt.
>> But as you point out, that makes it more difficult for developers who
>> want to run unit tests locally without having the requisite libraries
>> and header files installed.
> 
> I don’t think I’ve ever tried to run any of our unit tests on a box where I 
> hadn’t also previously run devstack to install all of those sorts of 
> dependencies. Is that unusual?

It is for Linux users; running local unit tests is the norm for me.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] Hyper-V meeting today.

2014-09-09 Thread Peter Pouliot
Hi everyone,

Due to an overload of critical work in the CI we will be postponing this
week's Hyper-V meeting.
We will resume with the regular schedule next week.

p

Peter J. Pouliot CISSP
Sr. SDET OpenStack
Microsoft
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com



Re: [openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-09 Thread Doug Hellmann

On Sep 8, 2014, at 8:18 PM, James E. Blair  wrote:

> Sean Dague  writes:
> 
>> The crux of the issue is that zookeeper python modules are C extensions.
>> So you have to either install from packages (which we don't do in unit
>> tests) or install from pip, which means forcing zookeeper dev packages
>> locally. Realistically this is the same issue we end up with for mysql
>> and pg, but given their wider usage we just forced that pain on developers.
> ...
>> Which feels like we need some decoupling on our requirements vs. tox
>> targets to get there. CC to Monty and Clark as our super awesome tox
>> hackers to help figure out if there is a path forward here that makes sense.
> 
> From a technical standpoint, all we need to do to make this work is to
> add the zookeeper python client bindings to (test-)requirements.txt.
> But as you point out, that makes it more difficult for developers who
> want to run unit tests locally without having the requisite libraries
> and header files installed.

I don’t think I’ve ever tried to run any of our unit tests on a box where I 
hadn’t also previously run devstack to install all of those sorts of 
dependencies. Is that unusual?

Doug

> 
> We could add another requirements file with heavyweight optional
> dependencies, and use that in gate testing, but also have a lightweight
> tox environment that does not include them for ease of use in local
> testing.
> 
> What would be really great is if we could use setuptools extras_require
> for this:
> 
> https://pythonhosted.org/setuptools/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies
> 
> However, I'm not sure what the situation is with support for that in pip
> (and we might need pbr support too).
> 
> -Jim
> 


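The setuptools extras mechanism Jim links to would look roughly like this; a setup.py fragment (package and dependency names invented for illustration), not a runnable script:

```python
from setuptools import setup

setup(
    name="example-service",
    version="0.0.1",
    extras_require={
        # heavyweight optional deps, installed only where needed:
        #   pip install example-service[zookeeper]
        "zookeeper": ["kazoo>=1.3.1"],
    },
)
```

A gate tox environment could then depend on `.[zookeeper]` while the default local environment skips the heavyweight dependency entirely.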


Re: [openstack-dev] memory usage in devstack-gate (the oom-killer strikes again)

2014-09-09 Thread Doug Hellmann

On Sep 8, 2014, at 8:12 PM, Mike Bayer  wrote:

> Hi All - 
> 
> Joe had me do some quick memory profiling on nova, just an FYI if anyone 
> wants to play with this technique, I place a little bit of memory profiling 
> code using Guppy into nova/api/__init__.py, or anywhere in your favorite app 
> that will definitely get imported when the thing first runs:
> 
> from guppy import hpy
> import signal
> import datetime
> 
> def handler(signum, frame):
> print "guppy memory dump"
> 
> fname = "/tmp/memory_%s.txt" % datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
> prof = hpy().heap()
> with open(fname, 'w') as handle:
> prof.dump(handle)
> del prof
> 
> signal.signal(signal.SIGUSR2, handler)

This looks like something we could build into our standard service startup 
code. Maybe in 
http://git.openstack.org/cgit/openstack/oslo-incubator/tree/openstack/common/service.py
 for example?

Doug
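A stdlib-only variant of the same pattern could look like this (tracemalloc here instead of Guppy, since Guppy targets Python 2; the dump path and top-25 cutoff are arbitrary):

```python
import datetime
import os
import signal
import tracemalloc

tracemalloc.start()

def handler(signum, frame):
    """Dump the current top allocation sites to a timestamped file."""
    fname = "/tmp/memory_%s.txt" % (
        datetime.datetime.now().strftime("%Y%m%d_%H%M%S"))
    snapshot = tracemalloc.take_snapshot()
    with open(fname, "w") as out:
        for stat in snapshot.statistics("lineno")[:25]:
            out.write("%s\n" % stat)
    handler.last_dump = fname  # remember the path for later inspection

signal.signal(signal.SIGUSR2, handler)

# Simulate `kill -USR2 <pid>` against the current process.
os.kill(os.getpid(), signal.SIGUSR2)
```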

> 
> 
> 
> Then, run nova-api, run some API calls, then you hit the nova-api process 
> with a SIGUSR2 signal, and it will dump a profile into /tmp/ like this:
> 
> http://paste.openstack.org/show/108536/
> 
> Now obviously everyone is like, oh boy memory lets go beat up SQLAlchemy 
> again…..which is fine I can take it.  In that particular profile, there’s a 
> bunch of SQLAlchemy stuff, but that is all structural to the classes that are 
> mapped in Nova API, e.g. 52 classes with a total of 656 attributes mapped.   
> That stuff sets up once and doesn’t change.   If Nova used less ORM,  e.g. 
> didn’t map everything, that would be less.  But in that profile there’s no 
> “data” lying around.
> 
> But even if you don’t have that many objects resident, your Python process 
> might still be using up a ton of memory.  The reason for this is that the 
> cPython interpreter has a model where it will grab all the memory it needs to 
> do something, a time consuming process by the way, but then it really doesn’t 
> ever release it  (see 
> http://effbot.org/pyfaq/why-doesnt-python-release-the-memory-when-i-delete-a-large-object.htm
>  for the “classic” answer on this, things may have improved/modernized in 2.7 
> but I think this is still the general idea).
> 
> So in terms of SQLAlchemy, a good way to suck up a ton of memory all at once 
> that probably won’t get released is to do this:
> 
> 1. fetching a full ORM object with all of its data
> 
> 2. fetching lots of them all at once
> 
> 
> So to avoid doing that, the answer isn’t necessarily that simple.   The quick 
> wins to loading full objects are to …not load the whole thing!   E.g. 
> assuming we can get Openstack onto 0.9 in requirements.txt, we can start 
> using load_only():
> 
> session.query(MyObject).options(load_only(“id”, “name”, “ip”))
> 
> or with any version, just load those columns - we should be using this as 
> much as possible for any query that is row/time intensive and doesn’t need 
> full ORM behaviors (like relationships, persistence):
> 
> session.query(MyObject.id, MyObject.name, MyObject.ip)
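For illustration, a runnable sketch of both column-limiting approaches described above, against a hypothetical `MyObject` model and an in-memory SQLite database (SQLAlchemy 1.4+ assumed, for the attribute form of `load_only()`):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, load_only

Base = declarative_base()

class MyObject(Base):
    __tablename__ = "my_object"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    ip = Column(String)
    payload = Column(String)  # a wide column we do not want to load

engine = create_engine("sqlite://")  # in-memory DB for the demo
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(MyObject(name="vm1", ip="10.0.0.1", payload="x" * 10000))
    session.commit()

    # Full ORM objects, but only the named columns loaded eagerly;
    # 'payload' would be lazily loaded only on first access.
    objs = session.query(MyObject).options(
        load_only(MyObject.id, MyObject.name, MyObject.ip)).all()

    # Plain tuples: no ORM identity map or persistence machinery at all.
    rows = session.query(MyObject.id, MyObject.name, MyObject.ip).all()
    print(rows)
```

The tuple form is the cheaper of the two, since no mapped instances are constructed at all.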
> 
> Another quick win, if we *really* need an ORM object, not a row, and we have 
> to fetch a ton of them in one big result, is to fetch them using yield_per():
> 
>    for obj in session.query(MyObject).yield_per(100):
>        # work with obj and then make sure to lose all references to it
> 
> yield_per() will dish out objects drawing from batches of the number you give 
> it.   But it has two huge caveats: one is that it isn’t compatible with most 
> forms of eager loading, except for many-to-one joined loads.  The other is 
> that the DBAPI, e.g. like the MySQL driver, does *not* stream the rows; 
> virtually all DBAPIs by default load a result set fully before you ever see 
> the first row.  psycopg2 is one of the only DBAPIs that even offers a special 
> mode to work around this (server side cursors).
> 
> Which means it's even *better* to paginate result sets, so that you only ask 
> the database for a chunk at a time, only storing at most a subset of objects 
> in memory at once.  Pagination itself is tricky: if you are using a naive 
> LIMIT/OFFSET approach, it takes a while if you are working with a large 
> OFFSET.  It’s better to SELECT into windows of data, where you can specify a 
> start and end criteria (against an indexed column) for each window, like a 
> timestamp.
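A minimal sketch of that windowed approach, using the stdlib sqlite3 module and a hypothetical `instances` table, with the start criterion applied against the indexed `id` column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO instances (name) VALUES (?)",
                 [("vm-%d" % i,) for i in range(1000)])

def windowed(conn, window=100):
    """Yield all rows, fetching one window at a time.

    Each window is bounded by a start criterion on the indexed id
    column, so the database never scans past a large OFFSET and the
    client holds at most `window` rows in memory at once."""
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, name FROM instances WHERE id > ? "
            "ORDER BY id LIMIT ?", (last_id, window)).fetchall()
        if not rows:
            break
        for row in rows:
            yield row
        last_id = rows[-1][0]  # next window starts past the last id seen

total = sum(1 for _ in windowed(conn))
print(total)  # 1000
```

Unlike LIMIT/OFFSET, each query here seeks directly to the window's start via the primary-key index, so later windows cost no more than early ones.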
> 
> Then of course, using Core only is another level of fastness/low memory.  
> Though querying for individual columns with ORM is not far off, and I’ve also 
> made some major improvements to that in 1.0 so that query(*cols) is pretty 
> competitive with straight Core (and Core is…well I’d say becoming visible in 
> raw DBAPI’s rear view mirror, at least….).
> 
> What I’d suggest here is that we start to be mindful of memory/performance 
> patterns and start to work out naive ORM use into more savvy patterns; being 
> aware of what columns are needed, what rows, how many SQL queries we really 
> need to emit, what the “worst case” number of row

Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-09 Thread Mike Scherbakov
Thanks Alexandra.

We land a few patches a day currently, so I think we can open the stable
branch. If we see no serious objections in the next 12 hours, let's do it. We
would need to immediately notify everyone on the mailing list that every
patch for 5.1 should go first to master, and then to stable/5.1.

Is everything ready from DevOps, OSCI (packaging) side to do this? Fuel CI,
OBS, etc.?

On Tue, Sep 9, 2014 at 2:28 PM, Aleksandra Fedorova 
wrote:

> As I understand your proposal, we need to split our HCF milestone into two
> checkpoints: Branching Point and HCF itself.
>
> Branching point should happen somewhere in between SCF and HCF. And though
> it may coincide with HCF, it needs its own list of requirements. This will
> give us the possibility to untie two events and make a separate decision on
> branching without enforcing all HCF criteria.
>
> From the DevOps point of view it changes almost nothing, it just adds a
> bit more discussion items on the management side and slight modifications
> to our checklists.
>
>
> On Tue, Sep 9, 2014 at 5:55 AM, Dmitry Borodaenko <
> dborodae...@mirantis.com> wrote:
>
>> TL;DR: Yes, our work on 6.0 features is currently blocked and it is
>> becoming a major problem. No, I don't think we should create
>> pre-release or feature branches. Instead, we should create stable/5.1
>> branches and open master for 6.0 work.
>>
>> We have reached a point in 5.1 release cycle where the scope of issues
>> we are willing to address in this release is narrow enough to not
>> require full attention of the whole team. We have engineers working on
>> 6.0 features, and their work is essentially blocked until they have
>> somewhere to commit their changes.
>>
>> Simply creating new branches is not even close to solving this
>> problem: we have a whole CI infrastructure around every active release
>> series (currently 5.1, 5.0, 4.1), including test jobs for gerrit
>> commits, package repository mirrors updates, ISO image builds, smoke,
>> build verification, and swarm tests for ISO images, documentation
>> builds, etc. A branch without all that infrastructure isn't any better
>> than current status quo: every developer tracking their own 6.0 work
>> locally.
>>
>> Unrelated to all that, we also had a lot of very negative experience
>> with feature branches in the past [0] [1], which is why we have
>> decided to follow the OpenStack branching strategy: commit all feature
>> changes directly to master and track bugfixes for stable releases in
>> stable/* branches.
>>
>> [0] https://lists.launchpad.net/fuel-dev/msg00127.html
>> [1] https://lists.launchpad.net/fuel-dev/msg00028.html
>>
>> I'm also against declaring a "hard code freeze with exceptions", HCF
>> should remain tied to our ability to declare a release candidate. If
>> we can't release with the bugs we already know about, declaring HCF
>> before fixing these bugs would be an empty gesture.
>>
>> Creating stable/5.1 now instead of waiting for hard code freeze for
>> 5.1 will cost us two things:
>>
>> 1) DevOps team will have to update our CI infrastructure for one more
>> release series. It's something we have to do for 6.0 sooner or later,
>> so this may be a disruption, but not an additional effort.
>>
>> 2) All commits targeted for 5.1 will have to be proposed for two
>> branches (master and stable/5.1) instead of just one (master). This
>> will require additional effort, but I think that it is significantly
>> smaller than the cost of spinning our wheels on 6.0 efforts.
>>
>> -DmitryB
>>
>>
>> On Mon, Sep 8, 2014 at 10:10 AM, Dmitry Mescheryakov
>>  wrote:
>> > Hello Fuelers,
>> >
>> > Right now we have the following policy in place: the branches for a
>> > release are opened only after its 'parent' release has reached hard
>> > code freeze (HCF). Say, the 5.1 release is the parent release for 5.1.1
>> > and 6.0.
>> >
>> > And that is the problem: if the parent release is delayed, we can't
>> > properly start development of a child release because we don't have
>> > branches to commit to. That is the current issue with 6.0: we already
>> > started to work on pushing Juno into 6.0, but if we are to make changes
>> > to our deployment code we have nowhere to store them.
>> >
>> > IMHO the issue could easily be resolved by creation of pre-release
>> > branches, which are merged together with parent branches once the
>> > parent reaches HCF. Say, we use branch 'pre-6.0' for initial
>> > development of 6.0. Once 5.1 reaches HCF, we merge pre-6.0 into master
>> > and continue development here. After that pre-6.0 is abandoned.
>> >
>> > What do you think?
>> >
>> > Thanks,
>> >
>> > Dmitry
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>> Dmitry Borodaenko
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> ht

Re: [openstack-dev] [Designate][Horizon][Tempest][DevStack] Supporting code for incubated projects

2014-09-09 Thread Sean Dague
On 09/09/2014 07:58 AM, Mac Innes, Kiall wrote:
> Hi all,
> 
>  
> 
> While requesting an openstack/designate-dashboard project from the TC/
> 
> Infra – The topic of why Designate panels, as an incubated project, can’t
> be merged into openstack/horizon was raised.
> 
>  
> 
> In the openstack/governance review[1], Russell asked:
> 
>  
> 
> Hm, I think we should discuss this with the horizon team, then. We are
> telling projects that incubation is a key time for integrating with
> other
> projects. I would expect merging horizon integration into horizon itself
> to be a part of that.
> 
>  
> 
> With this in mind – I’d like to start a conversation with the Horizon,
> Tempest and DevStack teams around merging of code to support
> Incubated projects – What are the drawbacks? Why is this currently
> frowned upon by the various teams? And what does each of the parties
> believe is the Right Way forward?

I thought the Devstack and Tempest cases were pretty clear: once things
are incubated they are fair game to get added in.

Devstack is usually the right starting point, as that makes it easy for
everyone to have access to the code, and makes the testability by other
systems viable.

I currently don't see any designate changes that are passing Jenkins
that need to be addressed, is there something that got missed?

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Maintenance mode in OpenStack during patching/upgrades

2014-09-09 Thread Clint Byrum
Excerpts from Mike Scherbakov's message of 2014-09-09 00:35:09 -0700:
> Hi all,
> please see below original email below from Dmitry. I've modified the
> subject to bring larger audience to the issue.
> 
> I'd like to split the issue into two parts:
> 
>1. Maintenance mode for OpenStack controllers in HA mode (HA-ed
>Keystone, Glance, etc.)
>2. Maintenance mode for OpenStack computes/storage nodes (no HA)
> 
> For the first category, we might not need a maintenance mode at all. For
> example, if we apply a patch/upgrade node by node to a 3-node HA cluster,
> 2 nodes will keep serving requests normally. Is that possible for our HA
> solutions in Fuel, TripleO, and other frameworks?

You may have a broken cloud if you are pushing out an update that
requires a new schema. Some services are better than others about
handling old schemas, and can be upgraded before doing schema upgrades.
But most of the time you have to do at least a brief downtime:

 * turn off DB accessing services
 * update code
 * run db migration
 * turn on DB accessing services

It is for this very reason, I believe, that Turbo Hipster was added to
the gate, so that deployers running against the upstream master branches
can have a chance at performing these upgrades in a reasonable amount of
time.

> 
> For the second category, can we not simply do "nova-manage service
> disable...", so the scheduler will simply stop scheduling new workloads
> onto the particular host which we want to do maintenance on?
> 

You probably would want 'nova host-servers-migrate ' at that
point, assuming you have migration set up.

http://docs.openstack.org/user-guide/content/novaclient_commands.html

> On Thu, Aug 28, 2014 at 6:44 PM, Dmitry Pyzhov  wrote:
> 
> > All,
> >
> > I'm not sure if it deserves to be mentioned in our documentation; this
> > seems to be a common practice. If an administrator wants to patch his
> > environment, he should be prepared for a temporary downtime of OpenStack
> > services. And he should plan to perform patching in advance: choose a time
> > with minimal load and warn users about possible interruptions of service
> > availability.
> >
> > Our current implementation of patching does not protect from downtime
> > during the patching procedure. HA deployments seems to be more or less
> > stable. But it looks like it is possible to schedule an action on a compute
> > node and get an error because of service restart. Deployments with one
> > controller... well, you won’t be able to use your cluster until the
> > patching is finished. There is no way to get rid of downtime here.
> >
> > As I understand, we can get rid of possible issues with computes in HA.
> > But it will require migration of instances and stopping of nova-compute
> > service before patching. And it will make the overall patching procedure
> > much longer. Do we want to investigate this process?
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Designate][Horizon][Tempest][DevStack] Supporting code for incubated projects

2014-09-09 Thread Akihiro Motoki
On Tue, Sep 9, 2014 at 10:23 PM, Thierry Carrez  wrote:
> Mac Innes, Kiall wrote:
>> While requesting an openstack/designate-dashboard project from the TC/Infra
>> – The topic of why Designate panels, as an incubated project, can’t
>> be merged into openstack/horizon was raised.
>>
>> In the openstack/governance review[1], Russell asked:
>>
>>> Hm, I think we should discuss this with the horizon team, then. We are
>>> telling projects that incubation is a key time for integrating with
>>> other
>>> projects. I would expect merging horizon integration into horizon itself
>>> to be a part of that.
>
> We are actually telling projects that they should work on their Horizon
> panels while in incubation, and use their first "integrated" cycle (once
> they graduate, before their first release), to get their panels into
> Horizon mainline code.
>
> That's what Sahara did over this cycle (they had a dashboard, they got
> it merged in Horizon during juno, in time for final Juno release).
>
> Now it's not a perfect setup: it put a lot of stress between Sahara and
> Horizon teams -- it was essential for Sahara to get it merged, while no
> horizon-core really signed up to review it. It took a bit of
> cross-project coordination to get it in in time... I expect the same to
> happen again.

Speaking as the Horizon team: the reason reviews take time is not just
bandwidth or attention for the reviews themselves (apart from
the fact that the Sahara dashboard is relatively big).
The Horizon team needs to maintain the code once it is merged, and to
accomplish that, a good understanding of the newly integrated project is
important. In the first "integrated" cycle of a new project, the project
itself needs to do a lot of things for graduation, so we tend to explore
how it can work with DevStack and what concepts lie behind the API.
Input such as DevStack instructions and/or an introduction is really
helpful for the Horizon team. In most cases Horizon reviewers are the
first users of a new project :-)

I hope this input helps us move things forward smoothly and
results in good collaboration among the teams.

Thanks,
Akihiro


>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-09 Thread James Bottomley
On Mon, 2014-09-08 at 17:20 -0700, Stefano Maffulli wrote:
> On 09/05/2014 07:07 PM, James Bottomley wrote:
> > Actually, I don't think this analysis is accurate.  Some people are
> > simply interested in small aspects of a project.  It's the "scratch your
> > own itch" part of open source.  The thing which makes itch scratchers
> > not lone wolfs is the desire to go the extra mile to make what they've
> > done useful to the community.  If they never do this, they likely have a
> > forked repo with only their changes (and are the epitome of a lone
> > wolf).  If you scratch your own itch and make the effort to get it
> > upstream, you're assisting the community (even if that's the only piece
> > of code you do) and that assistance makes you (at least for a time) part
> > of the community.
> 
> I'm starting to think that the processes we have implemented are slowing
> down (if not preventing) "scratch your own itch" contributions. The CLA
> has been identified as the cause for this but after carefully looking at
> our development processes and the documentation, I think that's only one
> part of the problem (and maybe not even as big as initially thought).

CLAs are a well known and documented barrier to casual contributions
(just look at all the Project Harmony discussion); they affect one-offs
disproportionately, since they require an investment of effort to
understand, and legal resources are often unavailable to individuals.
The key problem for individuals in the US is usually do I or my employer
own my contribution?  Because that makes a huge difference to the
process for signing.

> The gerrit workflow for example is something that requires quite an
> investment in time and energy and casual developers (think operators
> fixing small bugs in code, or documentation) have little incentive to go
> through the learning curve.

I've done both ... I do prefer the patch workflow to the gerrit one, but
I think that's just because the former is what I used for ten years and
I'm very comfortable with it.  The good thing about the patch workflow
is that the initial barrier is very low.  However, the later barriers
can be as high or higher.

> To go back in topic, to the proposal to split drivers out of tree, I
> think we may want to evaluate other, simpler, paths before we embark in
> a huge task which is already quite clear will require more cross-project
> coordination.
> 
> From conversations with PTLs and core reviewers I get the impression
> that lots of drivers contributions come with bad code.

Bad code is a bit of a pejorative term.  However, I can sympathize with
the view: In the Linux Kernel, drivers are often the biggest source of
coding style and maintenance issues.  I maintain a driver subsystem and
I would have to admit that a lot of code that goes into those drivers
that wouldn't be of sufficient quality to be admitted to the core kernel
without a lot more clean up and flow changes.  However, is this bad
code?  It mostly works, so it does the job it's designed for.  Usually
the company producing the device is the one maintaining the driver so as
long as they have the maintenance burden and do their job there's no
real harm.  It's a balance, and sometimes I get it wrong, but I do know
from bitter effort that there's a limit to what you can get busy
developers to do in the driver space.

>  These require a
> lot of time and reviewers energy to be cleaned up, causing burn out and
> bad feelings on all sides. What if we establish a new 'place' of some
> sort where we can send people to improve their code (or dump it without
> interfering with core?) Somewhere there may be a workflow
> "go-improve-over-there" where a Community Manager (or mentors or some
> other program we may invent) takes over and does what core reviewers
> have been trying to do 'on the side'? The advantage is that this way we
> don't have to change radically how current teams operate, we may be able
> to start this immediately with Kilo. Thoughts?

I think it's a question of communities, like Daniel said.  In the
kernel, the driver reviewers are a different community from the core
kernel code reviewers.  Most core reviewers would probably fry their own
eyeballs before they'd review device driver code.  So the solution is
not to make them; instead we set up a review community of people who
understand driver code and make allowances for some of its
eccentricities.  At the end of the day, bad code is measured by defect
count which impacts usability for drivers and the reputation of that
driver is what suffers.  I'm sure in OpenStack, driver reputation is an
easy way to encourage better drivers ... after all hypervisors are
pretty fungible: if the Bar hypervisor driver is awful, you can use the
Foo hypervisor instead.  People who want you to switch to the Baz
hypervisor would need to make sure you have a pretty awesome experience
when you take it for a spin, so they're most naturally inclined to spend
the time writing good code.

To me,

Re: [openstack-dev] [Sahara][FFE] Requesting exception for Swift trust authentication blueprint

2014-09-09 Thread Sergey Lukjanov
As Thierry said, for Sahara we don't need to find review sponsors. We
discussed the list of bps proposed for rc1 and I've just mailed the
list of approved FFEs for Sahara in juno:

http://lists.openstack.org/pipermail/openstack-dev/2014-September/045448.html

Thanks.

On Fri, Sep 5, 2014 at 7:48 PM, Thierry Carrez  wrote:
> Smaller review teams don't really need to line up core sponsors as much
> as Nova does. As long as Sergey and myself are fine with it, you can go
> for it. I'm +1 on this one because it's actually a security bug we need
> to plug before release.
>
> Trevor McKay wrote:
>> Not sure how this is done, but I'm a core member for Sahara, and I
>> hereby sponsor it.
>>
>> On Fri, 2014-09-05 at 09:57 -0400, Michael McCune wrote:
>>> hey folks,
>>>
>>> I am requesting an exception for the Swift trust authentication 
>>> blueprint[1]. This blueprint addresses a security bug in Sahara and 
>>> represents a significant move towards increased security for Sahara 
>>> clusters. There are several reviews underway[2] with 1 or 2 more starting 
>>> today or monday.
>>>
>>> This feature is initially implemented as optional and as such will have 
>>> minimal impact on current user deployments. By default it is disabled and 
>>> requires no additional configuration or management from the end user.
>>>
>>> My feeling is that there has been vigorous debate and discussion 
>>> surrounding the implementation of this blueprint and there is consensus 
>>> among the team that these changes are needed. The code reviews for the bulk 
>>> of the work have been positive thus far and I have confidence these patches 
>>> will be accepted within the next week.
>>>
>>> thanks for considering this exception,
>>> mike
>>>
>>>
>>> [1]: 
>>> https://blueprints.launchpad.net/sahara/+spec/edp-swift-trust-authentication
>>> [2]: 
>>> https://review.openstack.org/#/q/status:open+topic:bp/edp-swift-trust-authentication,n,z
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][scheduler] metrics based scheduling

2014-09-09 Thread Abbass MAROUNI

Hi guys,

Is metrics-based scheduling available in Icehouse? And if so, can we 
use it to add custom metrics (other than CPU, network, or power)? Is 
there any documentation on how to use it?


Best Regards,

--
--
Abbass MAROUNI
VirtualScale


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Designate][Horizon][Tempest][DevStack] Supporting code for incubated projects

2014-09-09 Thread Thierry Carrez
Mac Innes, Kiall wrote:
> While requesting an openstack/designate-dashboard project from the TC/Infra
> – The topic of why Designate panels, as an incubated project, can’t
> be merged into openstack/horizon was raised.
> 
> In the openstack/governance review[1], Russell asked:
> 
>> Hm, I think we should discuss this with the horizon team, then. We are
>> telling projects that incubation is a key time for integrating with
>> other
>> projects. I would expect merging horizon integration into horizon itself
>> to be a part of that.

We are actually telling projects that they should work on their Horizon
panels while in incubation, and use their first "integrated" cycle (once
they graduate, before their first release), to get their panels into
Horizon mainline code.

That's what Sahara did over this cycle (they had a dashboard, they got
it merged in Horizon during juno, in time for final Juno release).

Now it's not a perfect setup: it put a lot of stress between Sahara and
Horizon teams -- it was essential for Sahara to get it merged, while no
horizon-core really signed up to review it. It took a bit of
cross-project coordination to get it in in time... I expect the same to
happen again.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] FFEs list

2014-09-09 Thread Sergey Lukjanov
Hi sahara folks,

here is a list of approved Feature Freeze Exceptions:

* 
https://blueprints.launchpad.net/sahara/+spec/cluster-persist-sahara-configuration
* https://blueprints.launchpad.net/sahara/+spec/edp-swift-trust-authentication
* https://blueprints.launchpad.net/sahara/+spec/move-rest-samples-to-docs

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Juno RC1 and beyond

2014-09-09 Thread Doug Hellmann
With some help from Thierry, we finally have our launchpad projects cleaned up 
and configured so we can use them correctly for tracking.

I have reviewed our etherpad [1] and updated all of the bugs referenced in the 
RC1 section so they are now listed on 
https://launchpad.net/oslo/+milestone/juno-rc1. If you find a bug missing from 
that list, let me know and I can update its target.

If you have a change you think needs to land for RC1 that does not yet have an 
associated bug, please open one in the appropriate project on launchpad. If you 
can’t set the target to juno-rc1, let me know and I’ll take care of it.

As we discussed on the list previously, our priority is fixing bugs and 
reviewing patches for the 18 Sept release candidate date.

For non-critical items that can land after RC-1, I set their target to 
“next-juno” so they will appear on 
https://launchpad.net/oslo/+milestone/next-juno. 

Doug

[1] https://etherpad.openstack.org/p/juno-oslo-feature-freeze


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][all][Heat] Packaging of functional tests

2014-09-09 Thread Zane Bitter

On 04/09/14 10:45, Jay Pipes wrote:

On 08/29/2014 05:15 PM, Zane Bitter wrote:

On 29/08/14 14:27, Jay Pipes wrote:

On 08/26/2014 10:14 AM, Zane Bitter wrote:

Steve Baker has started the process of moving Heat tests out of the
Tempest repository and into the Heat repository, and we're looking for
some guidance on how they should be packaged in a consistent way.
Apparently there are a few projects already packaging functional tests
in the package .tests.functional (alongside
.tests.unit for the unit tests).

That strikes me as odd in our context, because while the unit tests run
against the code in the package in which they are embedded, the
functional tests run against some entirely different code - whatever
OpenStack cloud you give it the auth URL and credentials for. So these
tests run from the outside, just like their ancestors in Tempest do.

There's all kinds of potential confusion here for users and packagers.
None of it is fatal and all of it can be worked around, but if we
refrain from doing the thing that makes zero conceptual sense then
there
will be no problem to work around :)

I suspect from reading the previous thread about "In-tree functional
test vision" that we may actually be dealing with three categories of
test here rather than two:

* Unit tests that run against the package they are embedded in
* Functional tests that run against the package they are embedded in
* Integration tests that run against a specified cloud

i.e. the tests we are now trying to add to Heat might be qualitatively
different from the .tests.functional suites that already
exist in a few projects. Perhaps someone from Neutron and/or Swift can
confirm?

I'd like to propose that tests of the third type get their own
top-level
package with a name of the form -integrationtests (second
choice: -tempest on the principle that they're essentially
plugins for Tempest). How would people feel about standardising that
across OpenStack?


By its nature, Heat is one of the only projects that would have
integration tests of this nature. For Nova, there are some "functional"
tests in nova/tests/integrated/ (yeah, badly named, I know) that are
tests of the REST API endpoints and running service daemons (the things
that are RPC endpoints), with a bunch of stuff faked out (like RPC
comms, image services, authentication and the hypervisor layer itself).
So, the "integrated" tests in Nova are really not testing integration
with other projects, but rather integration of the subsystems and
processes inside Nova.

I'd support a policy that true integration tests -- tests that test the
interaction between multiple real OpenStack service endpoints -- be left
entirely to Tempest. Functional tests that test interaction between
internal daemons and processes to a project should go into
/$project/tests/functional/.

For Heat, I believe tests that rely on faked-out other OpenStack
services but stress the interaction between internal Heat
daemons/processes should be in /heat/tests/functional/ and any tests the
rely on working, real OpenStack service endpoints should be in Tempest.


Well, the problem with that is that last time I checked there was
exactly one Heat scenario test in Tempest because tempest-core doesn't
have the bandwidth to merge all (any?) of the other ones folks submitted.

So we're moving them to openstack/heat for the pure practical reason
that it's the only way to get test coverage at all, rather than concerns
about overloading the gate or theories about the best venue for
cross-project integration testing.


Hmm, speaking of passive aggressivity...


That's probably a fair criticism, in light of what Matt said about the 
failures of communication on both sides. I think a formal liaison 
program will be an enormous help here. However, it won't change the fact 
that keeping the tests for every project in a single repo with a single 
core team just won't scale.



Where can I see a discussion of the Heat integration tests with Tempest
QA folks? If you give me some background on what efforts have been made
already and what is remaining to be reviewed/merged/worked on, then I
can try to get some resources dedicated to helping here.


I made a list at one point:

https://wiki.openstack.org/w/index.php?title=Governance/TechnicalCommittee/Heat_Gap_Coverage&oldid=58358#Improved_functional_testing_with_Tempest

I'm not sure how complete it is, because a lot of those patches came 
from folks who were not Heat core members, and in some cases not even 
closely engaged in Heat development.


That wiki page was reviewed by the TC, although I was unable to make it 
to the meeting:


http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-07-29-20.02.html


I would greatly prefer just having a single source of integration
testing in OpenStack, versus going back to the bad ol' days of everybody
under the sun rewriting their own.


I'd actually prefer some sort of plug-in system, where individual 
projects could supply tests and Tempest coul

[openstack-dev] [neutron] List of BPs with an FFE

2014-09-09 Thread Kyle Mestery
The list of BPs for Neutron with an FFE is now targeted on the RC1
page here [1]. Please focus on reviewing these; we have a short window
to merge them. I believe the window closes on Friday of this week
(9-12-2014), but I'll verify with Thierry in my 1:1 with him today.

We'll also spend a good amount of time covering these in the Neutron
meeting today.

Thanks!
Kyle

[1] https://launchpad.net/neutron/+milestone/juno-rc1

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Request for J3 FFE - add reset-state function for backups

2014-09-09 Thread yunling
Hi Cinder Folks,

I would like to request an FFE for adding a reset-state function for
backups [1][2]. The spec for this function has been reviewed and
merged [2], and the code changes have been well tested and are not very
complex [3]. I would appreciate any consideration for an FFE.

Thanks,
-ling-yun

[1] https://blueprints.launchpad.net/cinder/+spec/support-reset-state-for-backup
[2] https://review.openstack.org/#/c/98316/
[3] https://review.openstack.org/#/c/116849/


Re: [openstack-dev] [ceilometer]How to collect the real-time data

2014-09-09 Thread lijian
Hello Dina,

By 'real-time data' I mean meters like vCPU, memory, and storage
utilization for an instance, which are collected by the pollster. I
really want to collect these meters on demand, without storing the data
in the database.
I know that Ceilometer events cannot cover the above meters,
so I think we can enhance Ceilometer's monitoring function.

Thanks,
Jian Li
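As an illustration only, the "poll on demand, don't persist" idea above might look roughly like the following; the inspector interface and meter names here are hypothetical stand-ins, not Ceilometer's actual API:

```python
class OnDemandSampler(object):
    """Sketch of on-demand metering: fetch the current utilization for
    one instance and hand it straight back to the caller instead of
    writing it to the metering database."""

    def __init__(self, inspector):
        # `inspector` is a stand-in for a hypervisor-inspection layer.
        self.inspector = inspector

    def sample(self, instance_id):
        stats = self.inspector.inspect(instance_id)
        # Return a plain dict; nothing is persisted anywhere.
        return {
            "instance": instance_id,
            "cpu_util": stats["cpu_util"],
            "memory_mb": stats["memory_mb"],
        }


class FakeInspector(object):
    """Toy inspector returning canned numbers, for demonstration."""

    def inspect(self, instance_id):
        return {"cpu_util": 12.5, "memory_mb": 512}


if __name__ == "__main__":
    sampler = OnDemandSampler(FakeInspector())
    print(sampler.sample("vm-1"))
```

A real implementation would plug a libvirt- or hypervisor-backed inspector into the same interface.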




At 2014-09-09 07:50:28, "Dina Belova"  wrote:

Jian, hello


What do you actually mean by 'real-time data'? Here in Ceilometer we have 
an 'events' feature, for instance - so services like Nova, Cinder, etc. 
notify Ceilometer about recent changes like 'VM was created', 'IP was 
assigned', etc. - this data is about as recent as it gets.


Could you give us some kind of use case or example of what you mean by 
'real-time data'?


Cheers,
Dina


On Tue, Sep 9, 2014 at 3:45 PM, lijian  wrote:

Hi folks,
We know that Ceilometer collects data through pollsters periodically.
But how can we collect real-time data? Is there a plan to implement it or not?

Thanks!
Jian Li











--


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] [All] Maintenance mode in OpenStack during patching/upgrades

2014-09-09 Thread Sergii Golovatiuk
Hi Fuelers,

1. Sometimes Fuel has irreversible changes. Here are a couple of examples:
a new version may need to change/adjust Pacemaker primitives, and such
changes affect all controllers in the cluster. An old API can be
deprecated or a new API can be introduced; until all components are
configured to use the new API, it's almost impossible to keep half of the
cluster on the old API and half on the new one.

2. For computes, even if we stop services, VM instances should keep
working. I think it's possible to upgrade without downtime of VM
instances, though I am not sure whether that's possible for Ceph nodes.
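As a rough sketch, the drain-before-patching flow discussed in this thread (disable the host in the scheduler, migrate instances away, patch, re-enable) could be captured as a helper that emits the CLI steps. The exact nova commands are assumptions based on the ones mentioned in this thread and vary between releases:

```python
def maintenance_plan(host):
    """Return the ordered steps for draining one compute node before
    patching. Purely illustrative: it emits CLI commands as strings
    rather than calling any real API."""
    return [
        # 1. Keep the scheduler from placing new workloads on the host.
        "nova service-disable {0} nova-compute".format(host),
        # 2. Move the running instances to other hosts.
        "nova host-servers-migrate {0}".format(host),
        # 3. Apply the patches (placeholder for the actual procedure).
        "apply-patches {0}".format(host),
        # 4. Put the host back into the scheduling pool.
        "nova service-enable {0} nova-compute".format(host),
    ]


if __name__ == "__main__":
    for step in maintenance_plan("compute-01"):
        print(step)
```

The point of the ordering is that no new workloads can land on the host between steps 1 and 4, which addresses the scheduling concern raised earlier in the thread.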




--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Tue, Sep 9, 2014 at 9:35 AM, Mike Scherbakov 
wrote:

> Hi all,
> please see below original email below from Dmitry. I've modified the
> subject to bring larger audience to the issue.
>
> I'd like to split the issue into two parts:
>
>1. Maintenance mode for OpenStack controllers in HA mode (HA-ed
>Keystone, Glance, etc.)
>2. Maintenance mode for OpenStack computes/storage nodes (no HA)
>
> For the first category, we might not need a maintenance mode at all. For
> example, if we apply patching/upgrades node by node in a 3-node HA cluster,
> 2 nodes will serve requests normally. Is that possible for our HA solutions
> in Fuel, TripleO, and other frameworks?
>
> For the second category, can't we simply do "nova-manage service
> disable...", so the scheduler will simply stop scheduling new workloads on
> the particular host we want to do maintenance on?
>
>
> On Thu, Aug 28, 2014 at 6:44 PM, Dmitry Pyzhov 
> wrote:
>
>> All,
>>
>> I'm not sure if it deserves to be mentioned in our documentation, this
>> seems to be a common practice. If an administrator wants to patch his
>> environment, he should be prepared for a temporary downtime of OpenStack
>> services. And he should plan to perform patching in advance: choose a time
>> with minimal load and warn users about possible interruptions of service
>> availability.
>>
>> Our current implementation of patching does not protect from downtime
>> during the patching procedure. HA deployments seems to be more or less
>> stable. But it looks like it is possible to schedule an action on a compute
>> node and get an error because of service restart. Deployments with one
>> controller... well, you won’t be able to use your cluster until the
>> patching is finished. There is no way to get rid of downtime here.
>>
>> As I understand, we can get rid of possible issues with computes in HA.
>> But it will require migration of instances and stopping of nova-compute
>> service before patching. And it will make the overall patching procedure
>> much longer. Do we want to investigate this process?
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Mike Scherbakov
> #mihgen
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [Designate][Horizon][Tempest][DevStack] Supporting code for incubated projects

2014-09-09 Thread Mac Innes, Kiall
Hi all,

While requesting an openstack/designate-dashboard project from the TC/
Infra, the topic of why Designate panels, as an incubated project, can't
be merged into openstack/horizon was raised.

In the openstack/governance review[1], Russell asked:

Hm, I think we should discuss this with the horizon team, then. We are
telling projects that incubation is a key time for integrating with other
projects. I would expect merging horizon integration into horizon itself
to be a part of that.

With this in mind, I'd like to start a conversation with the Horizon,
Tempest and DevStack teams about merging code to support incubated
projects. What are the drawbacks? Why is this currently frowned upon by
the various teams? And what does each of the parties believe is the
right way forward?

Thanks,
Kiall

[1]: https://review.openstack.org/#/c/119549/



Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-09 Thread Gary Kotton


On 9/8/14, 7:23 PM, "Sylvain Bauza"  wrote:

>
>Le 08/09/2014 18:06, Steven Dake a écrit :
>> On 09/05/2014 06:10 AM, Sylvain Bauza wrote:
>>>
>>> Le 05/09/2014 12:48, Sean Dague a écrit :
 On 09/05/2014 03:02 AM, Sylvain Bauza wrote:
> Le 05/09/2014 01:22, Michael Still a écrit :
>> On Thu, Sep 4, 2014 at 5:24 AM, Daniel P. Berrange
>>  wrote:
>>
>> [Heavy snipping because of length]
>>
>>> The radical (?) solution to the nova core team bottleneck is thus
>>>to
>>> follow this lead and split the nova virt drivers out into separate
>>> projects and delegate their maintainence to new dedicated teams.
>>>
>>>- Nova becomes the home for the public APIs, RPC system,
>>>database
>>>  persistent and the glue that ties all this together with the
>>>  virt driver API.
>>>
>>>- Each virt driver project gets its own core team and is
>>> responsible
>>>  for dealing with review, merge & release of their codebase.
>> I think this is the crux of the matter. We're not doing a great
>> job of
>> landing code at the moment, because we can't keep up with the review
>> workload.
>>
>> So far we've had two proposals mooted:
>>
>>- slots / runways, where we try to rate limit the number of
>>things
>> we're trying to review at once to maintain focus
>>- splitting all the virt drivers out of the nova tree
> Ahem, IIRC, there is a third proposal for Kilo :
>   - create subteam's half-cores responsible for reviewing patch's
> iterations and send to cores approvals requests once they consider
>the
> patch enough stable for it.
>
> As I explained, it would allow us to free up reviewing time for cores
> without losing control over what is being merged.
 I don't really understand how the half core idea works outside of a
 math
 equation, because the point is in core is to have trust over the
 judgement of your fellow core members so that they can land code when
 you aren't looking. I'm not sure how I manage to build up half trust
in
 someone any quicker.
>>>
>>> Well, this thread is becoming huge so that's becoming hard to follow
>>> all the discussion but I explained the idea elsewhere. Let me just
>>> provide it here too :
>>> The idea is *not* to land patches by the halfcores. Core team will
>>> still be fully responsible for approving patches. The main problem in
>>> Nova is that cores are spending lots of time because they review each
>>> iteration of a patch, and also have to look at if a patch is good or
>>> not.
>>>
>>> That's really time consuming, and for most of the time, quite
>>> frustrating as it requires to follow the patch's life, so there are
>>> high risks that your core attention is becoming distracted over the
>>> life of the patch.
>>>
>>> Here, the idea is to reduce dramatically this time by having teams
>>> dedicated to specific areas (as it's already done anyway for the
>>> various majority of reviewers) who could on their own take time for
>>> reviewing all the iterations. Of course, that doesn't mean cores
>>> would lose the possibility to specifically follow a patch and bypass
>>> the halfcores, that's just for helping them if they're overwhelmed.
>>>
>>> About the question of trusting cores or halfcores, I can just say
>>> that Nova team is anyway needing to grow up or divide it so the
>>> trusting delegation has to be real anyway.
>>>
>>> This whole process is IMHO very encouraging for newcomers because
>>> that creates dedicated teams that could help them to improve their
>>> changes, and not waiting 2 months for getting a -1 and a frank reply.
>>>
>>>
>> Interesting idea, but having been core on Heat for ~2 years, it is
>> critical to be involved in the review from the beginning of the patch
>> set.  Typically you won't see core reviewer's participate in a review
>> that is already being handled by two core reviewers.
>>
>> The reason it is important from the beginning of the change request is
>> that the project core can store the iterations and purpose of the
>> change in their heads.  Delegating all that up front work to a
>> non-core just seems counter to the entire process of code reviews.
>> Better would be reduce the # of reviews in the queue (what is proposed
>> by this change) or trust new reviewers "faster".  I'm not sure how you
>> do that - but this second model is what your proposing.
>>
>> I think one thing that would be helpful is to point out somehow in the
>> workflow that two core reviewers are involved in the review so core
>> reviewers don't have to sift through 10 pages of reviews to find new
>> work.
>>
>
>Now that the specs repo is in place and has been proved with Juno, most
>of the design stage is approved before the implementation is going. If
>the cores are getting more time because they wouldn't be focused on each
>single patchset, they could really find some pat

Re: [openstack-dev] [ceilometer]How to collect the real-time data

2014-09-09 Thread Dina Belova
Jian, hello

What do you actually mean by 'real-time data'? Here in Ceilometer we have
an 'events' feature, for instance - so services like Nova, Cinder, etc.
notify Ceilometer about recent changes like 'VM was created', 'IP
was assigned', etc. - this data is about as recent as it gets.

Could you give us some kind of use case or example of what you mean
by 'real-time data'?

Cheers,
Dina

On Tue, Sep 9, 2014 at 3:45 PM, lijian  wrote:

> Hi folks,
> We know that Ceilometer collects data through pollsters periodically.
> But how can we collect real-time data? Is there a plan to implement it
> or not?
>
> Thanks!
> Jian Li
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


[openstack-dev] [ceilometer]How to collect the real-time data

2014-09-09 Thread lijian
Hi folks,
We know that Ceilometer collects data through pollsters periodically.
But how can we collect real-time data? Is there a plan to implement it or not?

Thanks!
Jian Li


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-09 Thread Matthew Booth
On 09/09/14 01:20, Stefano Maffulli wrote:
> From conversations with PTLs and core reviewers I get the impression
> that lots of drivers contributions come with bad code. These require a
> lot of time and reviewers energy to be cleaned up, causing burn out and
> bad feelings on all sides. What if we establish a new 'place' of some
> sort where we can send people to improve their code (or dump it without
> interfering with core?) Somewhere there may be a workflow
> "go-improve-over-there" where a Community Manager (or mentors or some
> other program we may invent) takes over and does what core reviewers
> have been trying to do 'on the side'? The advantage is that this way we
> don't have to change radically how current teams operate, we may be able
> to start this immediately with Kilo. Thoughts?

I can't speak for other areas of the codebase, but certainly in the
VMware driver the technical debt has been allowed to accrue in the past
precisely because the review process itself is so tortuously slow. This
results in a death spiral of code quality, and ironically the review
process has been the cause, not the solution.

In Juno we have put our major focus on refactor work, which has meant
essentially no feature work for an entire cycle. This is painful, but
unfortunately necessary with the current process.

As an exercise, look at what has been merged in the VMware driver during
Juno. Consider how many developer weeks that should reasonably have
taken. Then consider how many developer weeks it actually took. Is the
current process conducive to productivity? The answer is clearly and
emphatically no. Is it worth it in its current form? Obviously not.

So, should driver contributors be forced to play in the sandpit before
mixing with the big boys? If a tortuously slow review process is a
primary cause of technical debt, will adding more steps to it improve
the situation? I hope the answer is obvious. And I'll be honest, I found
the suggestion more than a little patronising.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490



Re: [openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-09 Thread Sean Dague
On 09/08/2014 08:18 PM, James E. Blair wrote:
> Sean Dague  writes:
> 
>> The crux of the issue is that zookeeper python modules are C extensions.
>> So you have to either install from packages (which we don't do in unit
>> tests) or install from pip, which means forcing zookeeper dev packages
>> locally. Realistically this is the same issue we end up with for mysql
>> and pg, but given their wider usage we just forced that pain on developers.
> ...
>> Which feels like we need some decoupling on our requirements vs. tox
>> targets to get there. CC to Monty and Clark as our super awesome tox
>> hackers to help figure out if there is a path forward here that makes sense.
> 
> From a technical standpoint, all we need to do to make this work is to
> add the zookeeper python client bindings to (test-)requirements.txt.
> But as you point out, that makes it more difficult for developers who
> want to run unit tests locally without having the requisite libraries
> and header files installed.
> 
> We could add another requirements file with heavyweight optional
> dependencies, and use that in gate testing, but also have a lightweight
> tox environment that does not include them for ease of use in local
> testing.
> 
> What would be really great is if we could use setuptools extras_require
> for this:
> 
> https://pythonhosted.org/setuptools/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies
> 
> However, I'm not sure what the situation is with support for that in pip
> (and we might need pbr support too).

Right, some optional test path like that would be nice.

Honestly, one thing I was thinking about was effectively a bunch of tox
targets for local running, but that we run them all as a single target
upstream.

So testenv:zookeeper, testenv:mysql, testenv:pg

And then some way to have py27all be py27 + all these. py27all is what
upstream runs, devs can easily run without the extra requirements (which
will be sufficient 95% of the time), and when they hit a different
failure can run the wider tests.
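A minimal sketch of the extras_require idea from James's mail, with a lightweight default set and heavyweight optional groups; the project and package names (kazoo, etc.) are illustrative assumptions only, and a real OpenStack project would declare this through pbr:

```python
# setup.py -- illustrative sketch only.
from setuptools import setup

setup(
    name="example-service",
    version="0.1",
    # Lightweight dependencies everyone gets.
    install_requires=["six"],
    # Heavyweight optional dependency groups, pulled in on demand with
    #   pip install example-service[zookeeper]
    # so local unit-test runs can skip the C-extension build pain.
    extras_require={
        "zookeeper": ["kazoo"],
        "mysql": ["MySQL-python"],
        "pg": ["psycopg2"],
    },
)
```

The gate job would install all the extras (the py27all idea above), while a plain local tox run could stay dependency-light.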

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [sahara] summit session brainstorming

2014-09-09 Thread Sergey Lukjanov
Hi sahara folks,

I'd like to start brainstorming ideas for the upcoming summit design
sessions earlier than previous times to have more time to discuss
topics and prioritize / filter / prepare them.

Here is an etherpad to start the brainstorming:

https://etherpad.openstack.org/p/kilo-sahara-summit-topics

If you have ideas for summit sessions, please, add them to the
etherpad and we'll select the most important topics later before the
summit.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-09 Thread Aleksandra Fedorova
As I understand your proposal, we need to split our HCF milestone into two
checkpoints: the branching point and HCF itself.

The branching point should happen somewhere between SCF and HCF. And though
it may coincide with HCF, it needs its own list of requirements. This will
give us the possibility to untie the two events and make a separate decision
on branching without enforcing all the HCF criteria.

From the DevOps point of view it changes almost nothing; it just adds a few
more discussion items on the management side and slight modifications to
our checklists.


On Tue, Sep 9, 2014 at 5:55 AM, Dmitry Borodaenko 
wrote:

> TL;DR: Yes, our work on 6.0 features is currently blocked and it is
> becoming a major problem. No, I don't think we should create
> pre-release or feature branches. Instead, we should create stable/5.1
> branches and open master for 6.0 work.
>
> We have reached a point in 5.1 release cycle where the scope of issues
> we are willing to address in this release is narrow enough to not
> require full attention of the whole team. We have engineers working on
> 6.0 features, and their work is essentially blocked until they have
> somewhere to commit their changes.
>
> Simply creating new branches is not even close to solving this
> problem: we have a whole CI infrastructure around every active release
> series (currently 5.1, 5.0, 4.1), including test jobs for gerrit
> commits, package repository mirrors updates, ISO image builds, smoke,
> build verification, and swarm tests for ISO images, documentation
> builds, etc. A branch without all that infrastructure isn't any better
> than current status quo: every developer tracking their own 6.0 work
> locally.
>
> Unrelated to all that, we also had a lot of very negative experience
> with feature branches in the past [0] [1], which is why we have
> decided to follow the OpenStack branching strategy: commit all feature
> changes directly to master and track bugfixes for stable releases in
> stable/* branches.
>
> [0] https://lists.launchpad.net/fuel-dev/msg00127.html
> [1] https://lists.launchpad.net/fuel-dev/msg00028.html
>
> I'm also against declaring a "hard code freeze with exceptions", HCF
> should remain tied to our ability to declare a release candidate. If
> we can't release with the bugs we already know about, declaring HCF
> before fixing these bugs would be an empty gesture.
>
> Creating stable/5.1 now instead of waiting for hard code freeze for
> 5.1 will cost us two things:
>
> 1) DevOps team will have to update our CI infrastructure for one more
> release series. It's something we have to do for 6.0 sooner or later,
> so this may be a disruption, but not an additional effort.
>
> 2) All commits targeted for 5.1 will have to be proposed for two
> branches (master and stable/5.1) instead of just one (master). This
> will require additional effort, but I think that it is significantly
> smaller than the cost of spinning our wheels on 6.0 efforts.
>
> -DmitryB
>
>
> On Mon, Sep 8, 2014 at 10:10 AM, Dmitry Mescheryakov
>  wrote:
> > Hello Fuelers,
> >
> > Right now we have the following policy in place: the branches for a
> > release are opened only after its 'parent' release have reached hard
> > code freeze (HCF). Say, 5.1 release is parent releases for 5.1.1 and
> > 6.0.
> >
> > And that is the problem: if parent release is delayed, we can't
> > properly start development of a child release because we don't have
> > branches to commit. That is current issue with 6.0: we already started
> > to work on pushing Juno in to 6.0, but if we are to make changes to
> > our deployment code we have nowhere to store them.
> >
> > IMHO the issue could easily be resolved by creation of pre-release
> > branches, which are merged together with parent branches once the
> > parent reaches HCF. Say, we use branch 'pre-6.0' for initial
> > development of 6.0. Once 5.1 reaches HCF, we merge pre-6.0 into master
> > and continue development here. After that pre-6.0 is abandoned.
> >
> > What do you think?
> >
> > Thanks,
> >
> > Dmitry
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Dmitry Borodaenko
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Aleksandra Fedorova
bookwar


Re: [openstack-dev] [Fuel] SSL in Fuel

2014-09-09 Thread Guillaume Thouvenin
I think that the management of certificates should be discussed in the
ca-deployment blueprint [3].

We had some discussions and it seems that one idea is to use a docker
container as the root authority. By doing this we should be able to sign
certificate from Nailgun and distribute the certificate to the
corresponding controllers. So one way to see this is:

1) A new environment is created.
2) Nailgun generates a key pair that will be used for the new environment.
3) Nailgun sends the docker "root CA" a CSR that contains the VIP used by
the new environment and is signed by the newly created private key.
4) The docker "CA" sends back a signed certificate.
5) Nailgun distributes this signed certificate and the environment's private
key to the corresponding controllers through mcollective.

It's not clear to me how Nailgun will interact with the docker CA, and I
also have some concerns about storing the private keys of the different
environments, but that is the idea...
If needed I can start to fill in the ca-deployment blueprint according to
this scenario, but I guess that we first need to approve the BP [3].

So I think that we need to start on [3]. As this is required for the
OpenStack public endpoint SSL and also for Fuel SSL, it may be quicker to
have a first stage where a self-signed certificate is managed from Nailgun
and a second stage with the docker CA...

Best regards,
Guillaume

[3] https://blueprints.launchpad.net/fuel/+spec/ca-deployment


Re: [openstack-dev] [Heat] Request for python-heatclient project to adopt heat-translator

2014-09-09 Thread Steven Hardy
Hi Sahdev,

On Tue, Sep 02, 2014 at 11:52:30AM -0400, Sahdev P Zala wrote:
>Hello guys,
> 
>As you know, the heat-translator project was started early this year with
>an aim to create a tool to translate non-Heat templates to HOT. It is a
>StackForge project licensed under Apache 2. We have made good progress
>with its development and a demo was given at the OpenStack 2014 Atlanta
>summit during a half-a-day session that was dedicated to heat-translator
>project and related TOSCA discussion. Currently the development and
>testing is done with the TOSCA template format but the tool is designed to
>be generic enough to work with templates other than TOSCA. There are five
>developers actively contributing to the development. In addition, all
>current Heat core members are already core members of the heat-translator
>project.
> 
>Recently, I attended Heat Mid Cycle Meet Up for Juno in Raleigh and
>updated the attendees on heat-translator project and ongoing progress. I
>also requested everyone for a formal adoption of the project in the
>python-heatclient and the consensus was that it is the right thing to do.
>Also when the project was started, the initial plan was to make it
>available in python-heatclient. Hereby, the heat-translator team would
>like to make a request to have the heat-translator project to be adopted
>by the python-heatclient/Heat program.

Obviously I wasn't at the meetup, so I may be missing some context here,
but can you answer some questions please?

- Is the scope for heat-translator only tosca simple-profile, or also the
  original more heavyweight tosca too?

- If it's only tosca simple-profile, has any thought been given to moving
  towards implementing support via a template parser plugin, rather than
  baking the translation into the client?

While I see this effort as valuable, integrating the translator into the
client seems the worst of all worlds to me:

- Any users/services not interfacing with heat via python-heatclient can't use it

- You preempt the decision about integration with any higher-level services,
  e.g. Mistral, Murano, Solum, if you bake in the translator at the
  heat level.

The scope question is probably key here - if you think the translator can
do (or will be able to do) a 100% non-lossy conversion to HOT using only
Heat, maybe it's time we considered discussing integration into Heat the
service rather than the client.

Conversely, if you're going to need other services to fully implement the
spec, it probably makes sense for the translator to remain layered over
heat (or integrated with another project which is layered over heat).

Thanks!

Steve



[openstack-dev] [Fuel] master access control - future work

2014-09-09 Thread Lukasz Oles
Dear Fuelers,

I have some ideas and questions to share regarding Fuel Master access
control.

During the 5.1 cycle we made some non-optimal decisions which we have to
fix. The following blueprint describes the required changes:

https://blueprints.launchpad.net/fuel/+spec/access-control-master-node-improvments

The next step to improve security is to introduce secure connections using
HTTPS; it is described here:

https://blueprints.launchpad.net/fuel/+spec/fuel-ssl-endpoints

And now, there is question about next stages from original blueprint:

https://blueprints.launchpad.net/fuel/+spec/access-control-master-node

For example, from stage 3:
- Node agent authorization, which will increase security. Currently,
anyone can change node data.
What do you think: do we need it now?

Please read and comment first two blueprints.

-- 
Łukasz Oleś


Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-09 Thread Mike Scherbakov
Currently I think we should treat it as an exception and discuss the two
points which I brought up. Obviously, if we open stable/5.1, then we are
opening master for new features.

We will modify the HCF definition once we settle on a final decision.

Thanks,
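The flow-based HCF criteria Mike sketches in the quoted mail below (at most 5 open High bugs right now, and fewer than 10 confirmed High/Critical bugs over the last 3 days) could be expressed as a simple predicate; the thresholds are just the examples from this thread and would obviously be tuned:

```python
def can_declare_hcf(open_high_bugs, confirmed_last_3_days,
                    max_open_high=5, max_recent=10):
    """Sketch of a flow-style hard-code-freeze check: freeze only when
    few High bugs are open *and* the recent influx of confirmed
    High/Critical bugs has slowed down."""
    return (open_high_bugs <= max_open_high and
            confirmed_last_3_days < max_recent)


if __name__ == "__main__":
    print(can_declare_hcf(4, 8))   # calm enough to freeze
    print(can_declare_hcf(4, 60))  # 60 bugs in the last 3 days: no HCF
```

The second condition is what distinguishes this from the current criteria: squashing 30 bugs in one day no longer qualifies if QA is still filing new ones at the same rate.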

On Tue, Sep 9, 2014 at 1:02 PM, Igor Marnat  wrote:

> Mike,
> just to clarify - do we want to consider this an exception, which is
> not going to be repeated next release? If not, we might want to
> consider updating the statement "It is the time when master opens for
> next release changes, including features." in [1]. If I understood you
> correctly, we are going to open master for development of new features
> now.
>
> [1] https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze
> Regards,
> Igor Marnat
>
>
> On Tue, Sep 9, 2014 at 12:07 PM, Mike Scherbakov
>  wrote:
> > +1 to DmitryB, I think in this particular time and case we should open
> > stable/5.1. But not to call it HCF [1].
> >
> > Though I think we should retrospect our approaches here.
> >
> > Sometimes we can squash 30 bugs a day and formally reach HCF, though the
> > day after we will get 30 new bugs from QA. We might want to reconsider
> > the whole approach to criteria and come up with a "flow" instead. For
> > example, if we have <=5 High bugs at the moment, and over the last 3
> > days we saw <10 confirmed High/Critical bugs, then we can call for HCF
> > (if there were 60 bugs in the last 3 days, then there is no way to
> > call HCF).
> > Consumption of a new OpenStack release is hard and will remain so unless
> > we use Fuel in the gating process for every patch pushed to OpenStack
> > upstream. We want to deploy Juno now for 6.0, and the only way to do it
> > now is to build all packages, try to run them, observe issues, fix,
> > run again, observe other issues... - and this process continues for many
> > iterations before we get a stable ISO which passes BVTs. It is obvious
> > that if we drop the Juno packages in, then our master is going to be
> > broken. If we do any other feature development, then we won't know
> > whether a failure is because of Juno or that other feature. What should
> > we do then?
> >
> > My suggestion on #2 is that we could keep backward compatibility with
> > the Icehouse code (on the puppet side) and continue to run BVTs and
> > other testing against the master branch using both Icehouse and Juno
> > packages. Thus we can keep using the gating process for the fuel
> > library, relying on the stable Icehouse version.
> >
> > As for immediate action, again, I'm in favor of creating stable/5.1 in
> > order to unblock feature development in master while we are fixing the
> > last issues with OpenStack patching.
> >
> > [1] https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze
> >
> >
> > On Tue, Sep 9, 2014 at 5:55 AM, Dmitry Borodaenko <
> dborodae...@mirantis.com>
> > wrote:
> >>
> >> TL;DR: Yes, our work on 6.0 features is currently blocked and it is
> >> becoming a major problem. No, I don't think we should create
> >> pre-release or feature branches. Instead, we should create stable/5.1
> >> branches and open master for 6.0 work.
> >>
> >> We have reached a point in 5.1 release cycle where the scope of issues
> >> we are willing to address in this release is narrow enough to not
> >> require full attention of the whole team. We have engineers working on
> >> 6.0 features, and their work is essentially blocked until they have
> >> somewhere to commit their changes.
> >>
> >> Simply creating new branches is not even close to solving this
> >> problem: we have a whole CI infrastructure around every active release
> >> series (currently 5.1, 5.0, 4.1), including test jobs for gerrit
> >> commits, package repository mirrors updates, ISO image builds, smoke,
> >> build verification, and swarm tests for ISO images, documentation
> >> builds, etc. A branch without all that infrastructure isn't any better
> >> than current status quo: every developer tracking their own 6.0 work
> >> locally.
> >>
> >> Unrelated to all that, we also had a lot of very negative experience
> >> with feature branches in the past [0] [1], which is why we have
> >> decided to follow the OpenStack branching strategy: commit all feature
> >> changes directly to master and track bugfixes for stable releases in
> >> stable/* branches.
> >>
> >> [0] https://lists.launchpad.net/fuel-dev/msg00127.html
> >> [1] https://lists.launchpad.net/fuel-dev/msg00028.html
> >>
> >> I'm also against declaring a "hard code freeze with exceptions", HCF
> >> should remain tied to our ability to declare a release candidate. If
> >> we can't release with the bugs we already know about, declaring HCF
> >> before fixing these bugs would be an empty gesture.
> >>
> >> Creating stable/5.1 now instead of waiting for hard code freeze for
> >> 5.1 will cost us two things:
> >>
> >> 1) DevOps team will have to update our CI infrastructure for one more
> >> release series. It's something we have to do for 6.0 sooner or later,
> >> so this may be a disruption, but not an additional e

Re: [openstack-dev] [NFV] NFV Meetings

2014-09-09 Thread MENDELSOHN, ITAI (ITAI)
Hi,

Was looking at the wrong channel last week….
Looked at the minutes. Tnx for raising the topic.
Let’s discuss this week.

Tnx!
I

On 9/8/14, 5:49 PM, "Steve Gordon"  wrote:

>- Original Message -
>> From: "ITAI MENDELSOHN (ITAI)" 
>> To: "OpenStack Development Mailing List (not for usage questions)"
>>
>> 
>> Hi,
>> 
>> Hope you are doing good.
>> Did we have a meeting last week?
>> I was under the impression it was scheduled for Thursday (as in the
>> wiki)
>> but found other meetings in the IRC…
>> What am I missing?
>> Do we have one this week?
>
>Hi Itai,
>
>Yes there was a meeting last Thursday IN #openstack-meeting @ 1600 UTC,
>the minutes are here:
>
>
>http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-09-04-16.00.log.
>html
>
>This week's meeting will be on Wednesday at 1400 UTC in
>#openstack-meeting-alt.
>
>> Also,
>> I sent a mail about the sub groups goals as we agreed ten days ago.
>> Did you see it?
>> 
>> Happy to hear your thoughts.
>
>I did see this and thought it was a great attempt to re-frame the
>discussion (I think I said as much in the meeting). I'm personally still
>mulling over my own thoughts on the matter and how to respond. Maybe we
>will have more opportunity to discuss this week?
>
>Thanks,
>
>Steve
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-09 Thread Igor Marnat
Mike,
just to clarify - do we want to consider this an exception, one that is
not going to be repeated next release? If not, we might want to
consider updating the statement "It is the time when master opens for
next release changes, including features." in [1]. If I understood you
correctly, we are going to open master for development of new features
now.

[1] https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze
Regards,
Igor Marnat


On Tue, Sep 9, 2014 at 12:07 PM, Mike Scherbakov
 wrote:
> +1 to DmitryB, I think in this particular time and case we should open
> stable/5.1. But not to call it HCF [1].
>
> Though I think we should retrospect our approaches here.
>
> Sometimes we can squash 30 bugs a day, and formally reach HCF. Though the
> day after we will get 30 New bugs from QA. We might want to reconsider the
> whole approach on criteria, and come up with "flow" instead. Like if we have
> <=5 High bugs at the moment, and over last 3 days we were seeing <10
> confirmed High/Critical bugs, then we can call for HCF (if 60 bugs in 3 last
> days, then no way for HCF)
> Consumption of new OpenStack release is hard and will be as such unless we
> will be using Fuel in gating process for every patch being pushed to
> OpenStack upstream. We want to deploy Juno now for 6.0, and the only way to
> do it now - is to build all packages, try to run it, observe issues, fix,
> run again, observe other issues... - and this process continues for many
> iterations before we get stable ISO which passes BVTs. It is obvious, that
> if we drop the Juno packages in, then our master is going to be broken.
> If we do any other feature development, then we don't know whether it's
> because Juno or that another feature. What should we do then?
>
> My suggestion on #2 is that we could keep backward compatibility with
> Icehouse code (on puppet side), and can continue to use BVT's, other testing
> against master branch using both Icehouse packages and Juno. Thus we can
> keep using gating process for fuel library, relying on stable Icehouse
> version.
>
> As for immediate action, again, I'm in favor of creating stable/5.1 in order
> to unblock feature development in master, while we are fixing last issues
> with OpenStack patching.
>
> [1] https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze
>
>
> On Tue, Sep 9, 2014 at 5:55 AM, Dmitry Borodaenko 
> wrote:
>>
>> TL;DR: Yes, our work on 6.0 features is currently blocked and it is
>> becoming a major problem. No, I don't think we should create
>> pre-release or feature branches. Instead, we should create stable/5.1
>> branches and open master for 6.0 work.
>>
>> We have reached a point in 5.1 release cycle where the scope of issues
>> we are willing to address in this release is narrow enough to not
>> require full attention of the whole team. We have engineers working on
>> 6.0 features, and their work is essentially blocked until they have
>> somewhere to commit their changes.
>>
>> Simply creating new branches is not even close to solving this
>> problem: we have a whole CI infrastructure around every active release
>> series (currently 5.1, 5.0, 4.1), including test jobs for gerrit
>> commits, package repository mirrors updates, ISO image builds, smoke,
>> build verification, and swarm tests for ISO images, documentation
>> builds, etc. A branch without all that infrastructure isn't any better
>> than current status quo: every developer tracking their own 6.0 work
>> locally.
>>
>> Unrelated to all that, we also had a lot of very negative experience
>> with feature branches in the past [0] [1], which is why we have
>> decided to follow the OpenStack branching strategy: commit all feature
>> changes directly to master and track bugfixes for stable releases in
>> stable/* branches.
>>
>> [0] https://lists.launchpad.net/fuel-dev/msg00127.html
>> [1] https://lists.launchpad.net/fuel-dev/msg00028.html
>>
>> I'm also against declaring a "hard code freeze with exceptions", HCF
>> should remain tied to our ability to declare a release candidate. If
>> we can't release with the bugs we already know about, declaring HCF
>> before fixing these bugs would be an empty gesture.
>>
>> Creating stable/5.1 now instead of waiting for hard code freeze for
>> 5.1 will cost us two things:
>>
>> 1) DevOps team will have to update our CI infrastructure for one more
>> release series. It's something we have to do for 6.0 sooner or later,
>> so this may be a disruption, but not an additional effort.
>>
>> 2) All commits targeted for 5.1 will have to be proposed for two
>> branches (master and stable/5.1) instead of just one (master). This
>> will require additional effort, but I think that it is significantly
>> smaller than the cost of spinning our wheels on 6.0 efforts.
>>
>> -DmitryB
>>
>>
>> On Mon, Sep 8, 2014 at 10:10 AM, Dmitry Mescheryakov
>>  wrote:
>> > Hello Fuelers,
>> >
>> > Right now we have the following policy in place: the branches for a
>> > release are opened only after its 'parent' release ha

Re: [openstack-dev] about Distributed OpenStack Cluster

2014-09-09 Thread Vo Hoang, Tri
Hi Jesse Pretorius,

if you read my whole mail carefully, it’s about the current development of
OpenStack Cascading [1] and the future development of OpenStack for
distributed clusters. And I don’t see how it fits the Ops list at all. Many
thanks.

Kind Regards,
Tri Hoang Vo

From: Jesse Pretorius [mailto:jesse.pretor...@gmail.com]
Sent: Montag, 8. September 2014 14:18
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] about Distributed OpenStack Cluster

The openstack-dev list is meant to be for the discussion of current and future 
development of OpenStack itself, whereas the question you're asking is more 
suited to the openstack-operators list. I encourage you to send your question 
there instead.



Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-09 Thread Mike Scherbakov
+1 to DmitryB, I think in this particular case we should open
stable/5.1, but not call it HCF [1].

Though I think we should take a retrospective look at our approach here.

   1. Sometimes we can squash 30 bugs a day and formally reach HCF, though
   the day after we may get 30 New bugs from QA. We might want to reconsider
   the whole approach to the criteria and come up with a "flow" instead: for
   example, if we have <=5 High bugs at the moment and saw <10 confirmed
   High/Critical bugs over the last 3 days, we can call HCF (with 60 bugs in
   the last 3 days, there is no way to call HCF).
   2. Consuming a new OpenStack release is hard and will remain so unless
   we use Fuel in the gating process for every patch pushed to OpenStack
   upstream. We want to deploy Juno now for 6.0, and the only way to do it
   now is to build all packages, try to run them, observe issues, fix, run
   again, observe other issues... and this process continues for many
   iterations before we get a stable ISO which passes BVTs. Obviously, if
   we drop the Juno packages in, our master is going to be broken. If we
   then do any other feature development, we won't know whether a breakage
   is caused by Juno or by that other feature. What should we do then?
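The "flow" criterion in point 1 can be sketched as a simple predicate (a hypothetical illustration using the thresholds proposed above; the function name and signature are made up):

```python
def ready_for_hcf(open_high_bugs: int, confirmed_high_critical_last_3_days: int) -> bool:
    """Hypothetical HCF 'flow' criterion: at most 5 High bugs open right
    now, AND fewer than 10 High/Critical bugs confirmed over the last
    3 days. Thresholds are the ones proposed above, not project policy."""
    return open_high_bugs <= 5 and confirmed_high_critical_last_3_days < 10
```

With these thresholds, 5 open High bugs plus 9 recently confirmed ones would still allow calling HCF, while 60 confirmed bugs in the last 3 days would not, regardless of the current open count.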


My suggestion on #2 is that we keep backward compatibility with the
Icehouse code (on the puppet side) and continue to run BVTs and other
testing against the master branch using both Icehouse and Juno packages.
Thus we can keep using the gating process for the fuel library, relying
on the stable Icehouse version.

As for immediate action, again, I'm in favor of creating stable/5.1 in
order to unblock feature development in master while we fix the last
issues with OpenStack patching.

[1] https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze


On Tue, Sep 9, 2014 at 5:55 AM, Dmitry Borodaenko 
wrote:

> TL;DR: Yes, our work on 6.0 features is currently blocked and it is
> becoming a major problem. No, I don't think we should create
> pre-release or feature branches. Instead, we should create stable/5.1
> branches and open master for 6.0 work.
>
> We have reached a point in 5.1 release cycle where the scope of issues
> we are willing to address in this release is narrow enough to not
> require full attention of the whole team. We have engineers working on
> 6.0 features, and their work is essentially blocked until they have
> somewhere to commit their changes.
>
> Simply creating new branches is not even close to solving this
> problem: we have a whole CI infrastructure around every active release
> series (currently 5.1, 5.0, 4.1), including test jobs for gerrit
> commits, package repository mirrors updates, ISO image builds, smoke,
> build verification, and swarm tests for ISO images, documentation
> builds, etc. A branch without all that infrastructure isn't any better
> than current status quo: every developer tracking their own 6.0 work
> locally.
>
> Unrelated to all that, we also had a lot of very negative experience
> with feature branches in the past [0] [1], which is why we have
> decided to follow the OpenStack branching strategy: commit all feature
> changes directly to master and track bugfixes for stable releases in
> stable/* branches.
>
> [0] https://lists.launchpad.net/fuel-dev/msg00127.html
> [1] https://lists.launchpad.net/fuel-dev/msg00028.html
>
> I'm also against declaring a "hard code freeze with exceptions", HCF
> should remain tied to our ability to declare a release candidate. If
> we can't release with the bugs we already know about, declaring HCF
> before fixing these bugs would be an empty gesture.
>
> Creating stable/5.1 now instead of waiting for hard code freeze for
> 5.1 will cost us two things:
>
> 1) DevOps team will have to update our CI infrastructure for one more
> release series. It's something we have to do for 6.0 sooner or later,
> so this may be a disruption, but not an additional effort.
>
> 2) All commits targeted for 5.1 will have to be proposed for two
> branches (master and stable/5.1) instead of just one (master). This
> will require additional effort, but I think that it is significantly
> smaller than the cost of spinning our wheels on 6.0 efforts.
>
> -DmitryB
>
>
> On Mon, Sep 8, 2014 at 10:10 AM, Dmitry Mescheryakov
>  wrote:
> > Hello Fuelers,
> >
> > Right now we have the following policy in place: the branches for a
> > release are opened only after its 'parent' release have reached hard
> > code freeze (HCF). Say, 5.1 release is parent releases for 5.1.1 and
> > 6.0.
> >
> > And that is the problem: if parent release is delayed, we can't
> > properly start development of a child release because we don't have
> > branches to commit. That is current issue with 6.0: we already started
> > to work on pushing Juno in to 6.0, but if we are to make changes to
> > our deployment code we have nowhere to store them.
> >
> > IMHO the issue could easily be resolved by creation of pre-release
> > branches, which are merged toget

Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-09 Thread Michael Still
I haven't had a chance to read other people's posts, so I am sure
there is duplication here.

What would I have all of OpenStack working on if I was ruler of the
universe? Let's see...

1. Fixing our flakey gate: we're all annoyed by our code failing tests
with transient errors, but most people just recheck. Here's the kicker
though -- those errors sometimes affect production deployments as
well. I don't have a magical solution to incent people to work on
fixing these bugs, but we need to fix these. Now.

2. Pay down our tech debt in general. The most obvious example of this
is bugs. Nova has nearly 1,000 of these, and not enough people working
on them compared with features. This is a horrible user experience for
our users, and we should all be embarrassed by it.

3. Find a way to scale nova and neutron development. Our biggest
projects are suffering, and we need to come up with a consistent way
to solve that problem.

Michael

On Thu, Sep 4, 2014 at 1:37 AM, Joe Gordon  wrote:
> As you all know, there has recently been several very active discussions
> around how to improve assorted aspects of our development process. One idea
> that was brought up is to come up with a list of cycle goals/project
> priorities for Kilo [0].
>
> To that end, I would like to propose an exercise as discussed in the TC
> meeting yesterday [1]:
> Have anyone interested (especially TC members) come up with a list of what
> they think the project wide Kilo cycle goals should be and post them on this
> thread by end of day Wednesday, September 10th. After which time we can
> begin discussing the results.
> The goal of this exercise is to help us see if our individual world views
> align with the greater community, and to get the ball rolling on a larger
> discussion of where as a project we should be focusing more time.
>
>
> best,
> Joe Gordon
>
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
> [1]
> http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html
>



-- 
Rackspace Australia



Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-09 Thread Sumit Naiksatam
On Sat, Sep 6, 2014 at 9:54 AM, Prasad Vellanki <
prasad.vella...@oneconvergence.com> wrote:

> Good discussion.
>
> Based on this I think we should get started on the stackforge right away.
>
> Sumit - It would be great if you get started on the StackForge soon. We
> have a few changes that needs to be submitted and have been holding up.
>
>
The stackforge repo has been created:
https://github.com/stackforge/group-based-policy


> On Fri, Sep 5, 2014 at 8:08 AM, Mohammad Banikazemi  wrote:
>
>> I can only see the use of a separate project for Group Policy as a
>> tactical and temporary solution. In my opinion, it does not make sense to
>> have the Group Policy as a separate project outside Neutron (unless the new
>> project is aiming to replace Neutron and I do not think anybody is
>> suggesting that). In this regard, Group Policy is not similar to Advanced
>> Services such as FW and LB.
>>
>> So, using StackForge to get things moving again is fine but let us keep
>> in mind (and see if we can agree on) that we want to have the Group Policy
>> abstractions as part of OpenStack Networking (when/if it proves to be a
>> valuable extension to what we currently have). I do not want to see our
>> decision to make things moving quickly right now prevent us from achieving
>> that goal. That is why I think the other two approaches (from the little I
>> know about the incubator option, and even littler I know about the feature
>> branch option) may be better options in the long run.
>>
>> If I understand it correctly some members of the community are actively
>> working on these options (that is, the incubator and the Neutron feature
>> branch options) . In order to make a better judgement as to how to proceed,
>> it would be very helpful if we get a bit more information on these two
>> options and their status here on this mailing list.
>>
>> Mohammad
>>
>>
>>
>>
>> From: Kevin Benton 
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Date: 09/05/2014 04:31 AM
>> Subject: Re: [openstack-dev] [neutron][policy] Group-based Policy next
>> steps
>> --
>>
>>
>>
>> Tl;dr - Neutron incubator is only a wiki page with many uncertainties.
>> Use StackForge to make progress and re-evaluate when the incubator exists.
>>
>>
>> I also agree that starting out in StackForge as a separate repo is a
>> better first step. In addition to the uncertainty around packaging and
>> other processes brought up by Mandeep, I really doubt the Neutron incubator
>> is going to have the review velocity desired by the group policy
>> contributors. I believe this will be the case based on the Neutron
>> incubator patch approval policy in conjunction with the nature of the
>> projects it will attract.
>>
>> Due to the requirement for two core +2's in the Neutron incubator, moving
>> group policy there is hardly going to do anything to reduce the load on the
>> Neutron cores who are in a similar overloaded position as the Nova
>> cores.[1] Consequently, I wouldn't be surprised if patches to the Neutron
>> incubator receive even less core attention than the main repo simply
>> because their location outside of openstack/neutron will be a good reason
>> to treat them with a lower priority.
>>
>> If you combine that with the fact that the incubator is designed to house
>> all of the proposed experimental features to Neutron, there will be a very
>> high volume of patches constantly being proposed to add new features, make
>> changes to features, and maybe even fix bugs in those features. This new
>> demand for reviewers will not be met by the existing core reviewers because
>> they will be busy with refactoring, fixing, and enhancing the core Neutron
>> code.
>>
>> Even ignoring the review velocity issues, I see very little benefit to
>> GBP starting inside of the Neutron incubator. It doesn't guarantee any
>> packaging with Neutron and Neutron code cannot reference any incubator
>> code. It's effectively a separate repo without the advantage of being able
>> to commit code quickly.
>>
>> There is one potential downside to not immediately using the Neutron
>> incubator. If the Neutron cores decide that all features must live in the
>> incubator for at least 2 cycles regardless of quality or usage in
>> deployments, starting outside in a StackForge project would delay the start
>> of the timer until GBP makes it into the incubator. However, this can be
>> considered once the incubator actually exists and starts accepting
>> submissions.
>>
>> In summary, I think GBP should move to a StackForge project as soon as
>> possible so development can progress. A transition to the Neutron

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-09 Thread Daniel P. Berrange
On Mon, Sep 08, 2014 at 05:20:54PM -0700, Stefano Maffulli wrote:
> On 09/05/2014 07:07 PM, James Bottomley wrote:
> > Actually, I don't think this analysis is accurate.  Some people are
> > simply interested in small aspects of a project.  It's the "scratch your
> > own itch" part of open source.  The thing which makes itch scratchers
> > not lone wolfs is the desire to go the extra mile to make what they've
> > done useful to the community.  If they never do this, they likely have a
> > forked repo with only their changes (and are the epitome of a lone
> > wolf).  If you scratch your own itch and make the effort to get it
> > upstream, you're assisting the community (even if that's the only piece
> > of code you do) and that assistance makes you (at least for a time) part
> > of the community.

[snip]

> From conversations with PTLs and core reviewers I get the impression
> that lots of drivers contributions come with bad code. These require a
> lot of time and reviewers energy to be cleaned up, causing burn out and
> bad feelings on all sides. What if we establish a new 'place' of some
> sort where we can send people to improve their code (or dump it without
> interfering with core?) Somewhere there may be a workflow
> "go-improve-over-there" where a Community Manager (or mentors or some
> other program we may invent) takes over and does what core reviewers
> have been trying to do 'on the side'? The advantage is that this way we
> don't have to change radically how current teams operate, we may be able
> to start this immediately with Kilo. Thoughts?

I don't really agree with the suggestion that contributions to drivers
are largely "bad code". Sure there are some contributors who are worse
than others, and when reviewing I've seen that pretty much anywhere in
the code tree. That's just life when you have a project that welcomes
contributions from anyone. I wouldn't want to send new contributors
to a different place to "improve their code" as it would be yet another
thing for them to go through before getting their code accepted and I
don't think it'd really make a significant difference to our workload
overall. 

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [All] Maintenance mode in OpenStack during patching/upgrades

2014-09-09 Thread Mike Scherbakov
Hi all,
please see below original email below from Dmitry. I've modified the
subject to bring larger audience to the issue.

I'd like to split the issue into two parts:

   1. Maintenance mode for OpenStack controllers in HA mode (HA-ed
   Keystone, Glance, etc.)
   2. Maintenance mode for OpenStack computes/storage nodes (no HA)

For the first category, we might not need a maintenance mode at all. For
example, if we patch/upgrade a 3-node HA cluster one node at a time, the
other 2 nodes will keep serving requests normally. Is that possible for
our HA solutions in Fuel, TripleO, and other frameworks?

For the second category, can't we simply run "nova-manage service
disable...", so that the scheduler stops scheduling new workloads on the
particular host we want to do maintenance on?
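On the API side, one idea raised in this thread is to answer requests with HTTP 503 plus a Retry-After header while services are being patched, rather than hanging or dropping connections. A minimal stdlib-only WSGI sketch of that behavior (all names here are hypothetical illustrations, not an actual OpenStack middleware):

```python
# Toggleable maintenance flag; a real deployment would read this from
# configuration or a coordination service, not a module-level dict.
MAINTENANCE = {"active": False, "retry_after": 60}

def maintenance_middleware(app):
    """Wrap a WSGI app so that, during maintenance, every request gets
    503 with a Retry-After hint instead of reaching the service."""
    def wrapper(environ, start_response):
        if MAINTENANCE["active"]:
            body = b"Service under maintenance, retry later\n"
            start_response("503 Service Unavailable", [
                ("Content-Type", "text/plain"),
                ("Content-Length", str(len(body))),
                ("Retry-After", str(MAINTENANCE["retry_after"])),
            ])
            return [body]
        return app(environ, start_response)
    return wrapper

def demo_app(environ, start_response):
    # Stand-in for the real API application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok\n"]
```

Clients that honor Retry-After then know exactly when to come back, which is friendlier than a connection refused during the upgrade window.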


On Thu, Aug 28, 2014 at 6:44 PM, Dmitry Pyzhov  wrote:

> All,
>
> I'm not sure if it deserves to be mentioned in our documentation, this
> seems to be a common practice. If an administrator wants to patch his
> environment, he should be prepared for a temporary downtime of OpenStack
> services. And he should plan to perform patching in advance: choose a time
> with minimal load and warn users about possible interruptions of service
> availability.
>
> Our current implementation of patching does not protect from downtime
> during the patching procedure. HA deployments seem to be more or less
> stable. But it looks like it is possible to schedule an action on a compute
> node and get an error because of service restart. Deployments with one
> controller... well, you won’t be able to use your cluster until the
> patching is finished. There is no way to get rid of downtime here.
>
> As I understand, we can get rid of possible issues with computes in HA.
> But it will require migration of instances and stopping of nova-compute
> service before patching. And it will make the overall patching procedure
> much longer. Do we want to investigate this process?
>
>
>


-- 
Mike Scherbakov
#mihgen


[openstack-dev] [Cinder] [Devstack] - Create Volume is called in an infinite loop

2014-09-09 Thread Amit Das
Hi All,

I have been running tempest tests on my cinder driver for about a month
now.

Since last week, however, I see that the create-volume logic was attempted
three times by the scheduler after a failure during volume creation.

I would like the scheduler not to retry after a failure in volume
creation. With this requirement in mind, I modified
/etc/cinder/cinder.conf to set scheduler_max_attempts = 1.

My cinder.conf details: http://paste.openstack.org/show/108740/

However, I now see that this results in the create-volume logic executing
in an endless loop.

Please let me know if I have to use any specific scheduler filter etc.
that does not retry in case of create_volume failures.
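For reference, the intent behind a max-attempts setting boils down to a bounded retry counter; a simplified illustration of that idea (this is not Cinder's actual retry code, and the names are made up):

```python
def may_reschedule(failed_attempts: int, max_attempts: int) -> bool:
    """Return True if the scheduler may try another host after a failure.

    With max_attempts = 1, a failed create_volume is never rescheduled;
    with max_attempts = 3, up to two retries are allowed after the first
    failure. Illustrative sketch only, not Cinder's implementation.
    """
    return failed_attempts < max_attempts
```

Under this model, setting the limit to 1 should make the first failure final rather than loop, so behavior like the endless loop described above would point at a bug or at the option being read from a different section than expected.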

Regards,
Amit
*CloudByte Inc.* 

