Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Jonathan D. Proulx
On Fri, Oct 09, 2015 at 01:01:20PM -0400, Shamail wrote:
:> On Oct 9, 2015, at 12:28 PM, Monty Taylor  wrote:
:> 
:>> On 10/09/2015 11:21 AM, Shamail wrote:
:>> 
:>> 
:>>> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
:>>> 
:>>> It looks like some great conversation got going on the service catalog
:>>> standardization spec / discussion at the last cross project meeting.
:>>> Sorry I wasn't there to participate.
:>> Apologies if this is a question that has already been addressed, but why can't 
we just leverage something like consul.io?
:> 
:> It's a good question and there have actually been some discussions about 
leveraging it on the backend. However, even if we did, we'd still need keystone 
to provide the multi-tenancy view on the subject. consul wasn't designed (quite 
correctly I think) to be a user-facing service for 50k users.
:> 
:> I think it would be an excellent backend.
:Thanks, that makes sense.  I agree that it might be a good backend but not the 
overall solution... I was bringing it up to ensure we consider existing options 
(where possible) and spend cycles on the unsolved bits.

As an operator I'd be happy to use SRV records to define endpoints,
though multiple regions could make that messy.

Would we make subdomains per region, or include the region name in the
service name?

_compute-regionone._tcp.example.com 
   -vs-
_compute._tcp.regionone.example.com
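
For concreteness, a hypothetical zone snippet for each option (SRV fields are
priority, weight, port, target; the port and target names here are just
placeholders):

  ; region encoded in the service name
  _compute-regionone._tcp.example.com.  IN SRV 10 5 8774 api.example.com.
  ; region as a subdomain
  _compute._tcp.regionone.example.com.  IN SRV 10 5 8774 api.regionone.example.com.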

Also, not all operators can control their DNS to this level, so it
couldn't be the only option.

Or are you talking about using an internal DNS implementation private
to the OpenStack Deployment?  I'm actually a bit less happy with that
idea.

-Jon
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread David Stanek
On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx 
wrote:

> On Fri, Oct 09, 2015 at 01:01:20PM -0400, Shamail wrote:
> :> On Oct 9, 2015, at 12:28 PM, Monty Taylor  wrote:
> :>
> :>> On 10/09/2015 11:21 AM, Shamail wrote:
> :>>
> :>>
> :>>> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
> :>>>
> :>>> It looks like some great conversation got going on the service catalog
> :>>> standardization spec / discussion at the last cross project meeting.
> :>>> Sorry I wasn't there to participate.
> :>> Apologies if this is a question that has already been addressed but why
> can't we just leverage something like consul.io?
> :>
> :> It's a good question and there have actually been some discussions
> about leveraging it on the backend. However, even if we did, we'd still
> need keystone to provide the multi-tenancy view on the subject. consul
> wasn't designed (quite correctly I think) to be a user-facing service for
> 50k users.
> :>
> :> I think it would be an excellent backend.
> :Thanks, that makes sense.  I agree that it might be a good backend but
> not the overall solution... I was bringing it up to ensure we consider
> existing options (where possible) and spend cycles on the unsolved bits.
>
> As an operator I'd be happy to use SRV records to define endpoints,
> though multiple regions could make that messy.
>
> would we make subdomains per region or include region name in the
> service name?
>
> _compute-regionone._tcp.example.com
>-vs-
> _compute._tcp.regionone.example.com
>
> Also not all operators can control their DNS to this level so it
> couldn't be the only option.
>
> Or are you talking about using an internal DNS implementation private
> to the OpenStack Deployment?  I'm actually a bit less happy with that
> idea.
>

I was able to put together an implementation[1] of DNS-SD loosely based on
RFC-6763[2]. It's really a proof of concept, but we've talked so much about
it that I decided to get something working. If this seems like a viable
option, there's still much work to be done.

I'd love feedback.

1. https://gist.github.com/dstanek/093f851fdea8ebfd893d
2. https://tools.ietf.org/html/rfc6763
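
To give a feel for the client side, here is a minimal sketch (assuming the
dnspython library; the record name is made up) of resolving a service
endpoint from an SRV record:

    # Minimal sketch: resolve a compute endpoint from an SRV record.
    import dns.resolver

    def lookup_endpoint(service, proto, domain):
        name = '_%s._%s.%s' % (service, proto, domain)
        answers = dns.resolver.query(name, 'SRV')
        # Lowest priority value wins; weight breaks ties (simplified here).
        best = min(answers, key=lambda rr: (rr.priority, -rr.weight))
        return str(best.target).rstrip('.'), best.port

    host, port = lookup_endpoint('compute', 'tcp', 'regionone.example.com')
    print('compute endpoint: %s:%d' % (host, port))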

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] py26 support in python-muranoclient

2015-10-09 Thread Clark Boylan


On Fri, Oct 9, 2015, at 10:32 AM, Vahid S Hashemian wrote:
> Serg, Jeremy,
> 
> Thank you for your response, so the issue I ran into with my patch is the 
> gate job failing on python26.
> You can see it here: https://review.openstack.org/#/c/232271/
> 
> Serg suggested that we add 2.6 support to tosca-parser, which is fine
> with 
> us.
> But I got a bit confused after reading Jeremy's response.
> It seems to me that the support will be going away, but there is no 
> timeline (and therefore no near-term plan?)
> So, I'm hoping Jeremy can advise whether he also recommends the same 
> thing, or not.
There is a timeline (though admittedly hard to find) at
https://etherpad.openstack.org/p/YVR-relmgt-stable-branch which says
Juno support would run through the end of November. Since Juno is the
last release to support python2.6 we will remove python2.6 support from
the test infrastructure at that time as well.

I personally probably wouldn't bother with extra work to support
python2.6, but that all depends on how much work it is and whether or
not you find value in it. Ultimately it is up to you, just know that the
Infrastructure team will stop hosting testing for python2.6 when Juno is
EOLed.

Hope this helps,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Returning of HEAD files?

2015-10-09 Thread Anna Kamyshnikova
Some time ago we merged change [1] that removed the HEADS file. Validation of
migration revisions using the HEADS file was replaced with a pep8 check. This
allows us to avoid the merge conflicts that appeared every time a new
migration was merged.

Kevin Benton pointed out a problem: the original idea of the HEAD file [2]
was not only to validate revisions, but also to keep outdated changes from
entering the merge queue, which can be very important at the end of the cycle
when a lot of patches get approved.

I proposed change [3] that brings back HEAD files, but this time they are
created per branch, which will reduce merge conflicts a bit.

I understand it would have been better to ask this while [1] was still under
review (should we have HEAD files and merge conflicts, or not?), but I want to
ask it now: should I continue working on [3], or are we not expecting
problems with big merge queues?


[1] - https://review.openstack.org/#/c/227319/
[2] -
https://github.com/openstack/neutron/commit/36d85f831ae8eb21383806261bfc4c3d53dd1929
[3] - https://review.openstack.org/#/c/232607/

-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] py26 support in python-muranoclient

2015-10-09 Thread Jeremy Stanley
On 2015-10-09 12:22:12 +0300 (+0300), Serg Melikyan wrote:
> unfortunately we don't plan to remove support for py26 from
> python-muranoclient; most of the python clients support py26
> in order to work out of the box on different OSes, including CentOS 6.5
> and so on.
[...]

Bear in mind that we were only keeping 2.6 testing around to support
the stable/juno branch, and intend to begin removing support for
running Python 2.6 tests from our CI when that branch reaches EOL in
a few weeks. The end-of-2.6 discussions we had now many summits ago
involved representatives from several Enterprise/LTS Linux
distributions who agreed that supporting it in Kilo and beyond would
not be necessary.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-09 Thread William M Edmonds

Cory Benfield  writes:
> > The problem that occurs is the result of a few interacting things:
> >  - requests has very very specific versions of urllib3 it works with.
> > So specific they aren't always released yet.
>
> This should no longer be true. Our downstream redistributors pointed out
> to us that this was making their lives harder than they needed to be, so
> it's now our policy to only update to actual release versions of urllib3.

That's great... except that I'm confused as to why requests would continue
to repackage urllib3 if that's the case. Why not just prereq the version of
urllib3 that it needs? I thought the one and only answer to that question
had been so that requests could package non-standard versions.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] mox to mock migration

2015-10-09 Thread Jay Dobies
I forget where we left things at the last meeting with regard to whether
or not there should be a blueprint on this. I was going to work on some
conversions during some downtime, but I wanted to make sure I wasn't
overlapping with what others may be converting (it's more time consuming
than I anticipated).
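
For anyone who hasn't started one yet, here is a self-contained sketch of the
general shape of a conversion; the Client class and method names below are
made up for illustration, not taken from Heat:

    import mock
    import mox
    import testtools

    class Client(object):
        def get_stack(self, stack_id):
            raise NotImplementedError()

    class BeforeTest(testtools.TestCase):
        # mox style: record expectations, replay, exercise, then verify.
        def test_get_stack(self):
            self.m = mox.Mox()
            self.addCleanup(self.m.UnsetStubs)
            client = Client()
            self.m.StubOutWithMock(client, 'get_stack')
            client.get_stack('stack-id').AndReturn('fake')
            self.m.ReplayAll()
            self.assertEqual('fake', client.get_stack('stack-id'))
            self.m.VerifyAll()

    class AfterTest(testtools.TestCase):
        # mock style: patch, exercise, then assert on the recorded calls.
        def test_get_stack(self):
            client = Client()
            with mock.patch.object(client, 'get_stack',
                                   return_value='fake') as get_stack:
                self.assertEqual('fake', client.get_stack('stack-id'))
                get_stack.assert_called_once_with('stack-id')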


Any thoughts on how to track it?

Thanks :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sender Auth Failure] [neutron] New cycle started. What are you up to, folks?

2015-10-09 Thread Howard, Victor
I'd like to help out in some fashion with FWaaS and the IPv6 needs
mentioned. Currently working on the DSCP spec and patches to implement it
on top of QoS.

I liked Sean's comments about devstack; we have been doing tons of testing
for DSCP there without really understanding how things are going to be
updated or knowing if it's current with production on master. I would like to
help out in discussing the best way to move forward to keep devstack in
sync, or help to update it.

On 10/1/15, 9:45 AM, "Ihar Hrachyshka"  wrote:

>Hi all,
>
>I talked recently with several contributors about what each of us plans
>for the next cycle, and found it's quite useful to share thoughts with
>others, because you have immediate yay/nay feedback, and maybe find
>companions for next adventures, and what not. So I've decided to ask
>everyone what you see the team and you personally doing the next cycle,
>for fun or profit.
>
>That's like a PTL nomination letter, but open to everyone! :) No
>commitments, no deadlines, just list random ideas you have in mind or in
>your todo lists, and we'll all appreciate the huge pile of awesomeness no
>one will ever have time to implement even if scheduled for Xixao release.
>
>To start the fun, I will share my silly ideas in the next email.
>
>Ihar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Jeremy Stanley
On 2015-10-09 13:51:39 +0200 (+0200), Ihar Hrachyshka wrote:
[...]
> Another IRC service that I find useful to encourage collaboration
> in teams is a karma bot. Something that would calculate
> ++ messages in tracked channels. Having such a
> lightweight and visible way to tell ‘thank you’ to a contributor
> would be great. Do we have plans to implement it in infra?

If you write it! An easy compromise would be to just add similar
support like "#thanks ttx for your awesome successbot
implementation!" and interleave those into the same Successes
article or direct them to a separate Karma article. If you want an
interface to tabulate/summarize #thanks calls though, that would
likely need some additional service.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Russell Bryant
On 10/09/2015 05:42 AM, Thierry Carrez wrote:
> Hello everyone,
> 
> OpenStack has become quite big, and it's easier than ever to feel lost,
> to feel like nothing is really happening. It's more difficult than ever
> to feel part of a single community, and to celebrate little successes
> and progress.
> 
> In a (small) effort to help with that, I suggested making it easier to
> record little moments of joy and small success bits. Those are usually
> not worth the effort of a blog post or a new mailing-list thread, but
> they show that our community makes progress *every day*.
> 
> So whenever you feel like you made progress, or had a little success in
> your OpenStack adventures, or have some joyful moment to share, just
> throw the following message on your local IRC channel:
> 
> #success [Your message here]
> 
> The openstackstatus bot will take that and record it on this wiki page:
> 
> https://wiki.openstack.org/wiki/Successes
> 
> We'll add a few of those every week to the weekly newsletter (as part of
> the developer digest that we recently added there).
> 
> Caveats: Obviously that only works on channels where openstackstatus is
> present (the official OpenStack IRC channels), and we may remove entries
> that are off-topic or spam.
> 
> So... please use #success liberally and record little everyday OpenStack
> successes. Share the joy and make the OpenStack community a happy place.
> 

This is *really* cool.  I'm excited to use this and see all the things
others record.  Thanks!!

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Ihar Hrachyshka
> On 09 Oct 2015, at 15:46, Jeremy Stanley  wrote:
> 
> On 2015-10-09 13:51:39 +0200 (+0200), Ihar Hrachyshka wrote:
> [...]
>> Another IRC service that I find useful to encourage collaboration
>> in teams is a karma bot. Something that would calculate
>> ++ messages in tracked channels. Having such a
>> lightweight and visible way to tell ‘thank you’ to a contributor
>> would be great. Do we have plans to implement it in infra?
> 
> If you write it! An easy compromise would be to just add similar
> support like "#thanks ttx for your awesome successbot
> implementation!" and interleave those into the same Successes
> article or direct them to a separate Karma article. If you want an
> interface to tabulate/summarize #thanks calls though, that would
> likely need some additional service.


There are already multiple karmabot implementations that could be reused, like 
https://github.com/chromakode/karmabot

Can we just adopt one of those?

Ihar


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Returning of HEAD files?

2015-10-09 Thread Ihar Hrachyshka
> On 09 Oct 2015, at 15:28, Anna Kamyshnikova  
> wrote:
> 
> Some time ago we merged change [1] that removes HEADS file. Validation of 
> migration revisions using HEADS file was replaced with pep8. This allows us 
> to avoid merge conflicts that appeared every time a new migration was merged.
> 
> The problem was pointed by Kevin Benton as the original idea of HEAD file [2] 
> was not only to validate revisions, so as not to allow outdated changes go 
> into merge queue, that could be very important for the end of the cycle when 
> a lot of patches get approved.
> 
> I introduced change [3] that returns HEAD files, but this time they are 
> created per branch, so that will reduce merge conflicts a bit.
> 
> I understand that it was better to ask at the first time when [1] was on 
> review, should we have HEAD files and merge conflicts or not, but I want to 
> ask it now: Should I continue work on [3] or we are not expecting to have 
> problems with big merge queues?
> 
> 
> [1] - https://review.openstack.org/#/c/227319/
> [2] - 
> https://github.com/openstack/neutron/commit/36d85f831ae8eb21383806261bfc4c3d53dd1929
> [3] - https://review.openstack.org/#/c/232607/


I think it’s worth describing merge scenarios with and without HEAD files.

1. both patches in merge queue

Currently (no HEAD files), if two patches that touch the same alembic branch 
head are pushed into the gate:

- first patch passes the gate;
- second patch fails on pep8 and is moved out of the merge queue; other jobs 
continue to run to report the failure back to Gerrit;
- if a patch above the first patch resets the queue, then both patches are 
re-added into the merge queue; again, the second patch fails on pep8 quickly 
and is moved out of the queue;
- the second patch author is notified about the failure once the first patch is 
merged and all jobs for the second patch are complete (at least pep8 is failed).

With HEAD files,
- first patch passes the gate;
- second patch does not get into the queue until the first patch merges or 
fails (because there are now git conflicts);
- if a patch above the first patch resets the queue, only the first patch is 
re-added into the merge queue; the second patch waits until the first patch 
merges or fails to merge; in the former case, zuul reports a git conflict back 
to the author of the second patch; in the latter case, the second patch is 
added into the merge queue and merges if all goes well;
- meaning, the second patch author is notified about the failure once the first 
patch is merged.

I see the following speed-ups with HEAD files:
- there is no need to wait for jobs of the second patch to complete before we 
notify the author about the problem;
- the queue is not reset by second-patch failures after each gate reset; since 
the pep8 job fails quickly, it's ~5 mins per reset (which should not occur 
frequently);

I see the following speed-up without HEAD files:
- if the first patch fails to merge, the second patch gets into the merge queue 
at the point where it was pushed into the gate, not at the end of the list at 
the moment the first patch failed.

2. one patch in merge queue

Currently, when a patch is merged in the gate, all other patches are tested for 
git conflicts, but since there are no conflicts due to HEAD files (there are no 
such files now), authors are not notified about the issue. Still, the patch 
can proceed with review (there is no -1 vote from CI, which scares a lot of 
people), and if pushed into the gate, it will immediately fail. Then the author 
will need to rebase, and reviewers repeat the push into the gate.

With HEAD files, we would immediately detect the git conflict and report the 
issue to the author, setting a -1 CI vote. Then the author updates the patch, 
gets a fresh vote, and hopes that other reviewers get back to the patch.

In that scenario, it’s not clear what’s better for review velocity. My 
experience shows that git conflicts and -1 CI votes slow down reviews; if a 
patch was in the gate before, it should be easy to respin it for a small change 
in the head file and push it again. On the other hand, git conflicts consume a 
lot of reviewer time and scare reviewers away.

3. Another tiny benefit of not having git conflicts on HEAD files is that 
reviewers can distinguish legitimate git conflicts from branching failures, and 
apply appropriate review attention based on the nature of the failure. We also 
get a fresh CI run for the patch (except for the pep8 job, which is doomed to 
fail until a rebase).

There are pros and cons for both approaches, but overall, I don’t see how the 
former justifies having HEAD files and the complexity of handling them in code 
and in the file system.

Ihar


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Heat] mox to mock migration

2015-10-09 Thread Sergey Kraynev
Jay,

I think the only person partially doing the same job is Qiming.
Please ask him about it.
Also, I think it would be good to create a BP for it, which you can
mention in commit messages.
In this BP you can specify a list of directories for parallel work.
There is nothing else from my side. :)
Thank you for volunteering!

On 9 October 2015 at 16:06, Jay Dobies  wrote:
> I forget where we left things at the last meeting with regard to whether or
> not there should be a blueprint on this. I was going to work on some during
> some downtime but I wanted to make sure I wasn't overlapping with what
> others may be converting (it's more time consuming than I anticipated).
>
> Any thoughts on how to track it?
>
> Thanks :)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards,
Sergey.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-09 Thread William M Edmonds

Robert Collins  writes:
>  - Linux vendors often unbundle urllib3 from requests and then apply
> what patches were needed to their urllib3; while not updating their
> requests package dependencies to reflect this.

I opened a bug on Fedora for them to update their requests package
dependencies. See https://bugzilla.redhat.com/show_bug.cgi?id=1253823. Of
course that may continue to be an issue on older versions and other
distros.

>  - if for any reason we have a distro-altered requests + a
> pip-installed urllib3, requests will [usually] break... see the 'not
> always released yet' key thing above.
>
> Now, there are lots of places this last thing can happen; they all
> depend on us having a dependency on requests that is compatible with
> the version installed by the distro, but a urllib3 dependency that
> triggers an upgrade of just urllib3. When constraints are in use, the
> requests version has to match the distro requests version exactly, but
> that will happen from time to time.

When you're using a distro, you're always going to have to worry about
someone pip installing something that conflicts with the rpm, no? That
could be for any reason, could be completely unrelated to OpenStack
dependencies. Unless the distros have a way to put in protection against
this, preventing pip install of something that is already installed by RPM?

>  - make sure none of our testing environments include distro
> requests packages.

It's not like requests is an unusual package for someone to have installed
from their distro in a base OS image. So when they take that base OS and go
to set up OpenStack, they'll be hitting this case, whether we tested it or
not. So while not testing this case seems nice from a development
perspective, it doesn't seem to fit real-world usage. I don't think it
would make operators very happy.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Clint Byrum
Excerpts from Chris Friesen's message of 2015-10-09 10:54:36 -0700:
> On 10/09/2015 11:09 AM, Zane Bitter wrote:
> 
> > The optimal way to do this would be a weighted random selection, where the
> > probability of any given host being selected is proportional to its 
> > weighting.
> > (Obviously this is limited by the accuracy of the weighting function in
> > expressing your actual preferences - and it's at least conceivable that this
> > could vary with the number of schedulers running.)
> >
> > In fact, the choice of the name 'weighting' would normally imply that it's 
> > done
> > this way; hearing that the 'weighting' is actually used as a 'score' with 
> > the
> > highest one always winning is quite surprising.
> 
> If you've only got one scheduler, there's no need to get fancy, you just pick 
> the "best" host based on your weighing function.
> 
> It's only when you've got parallel schedulers that things get tricky.
> 

Note that I think you mean _concurrent_ not _parallel_ schedulers.

Parallel schedulers would be trying to solve the same unit of work by
breaking it up into smaller components and doing them at the same time.

Concurrent means they're just doing different things at the same time.

I know this is nit-picky, but we use the wrong word _A LOT_, and the
problem space is actually vastly different: parallelizable problems
have a whole set of optimizations and advantages, while generic concurrent
problems (especially those involving mutating state!) have a whole set
of race conditions that must be managed.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] mox to mock migration

2015-10-09 Thread Jay Dobies
This sounds good; I was hoping it'd be acceptable to use an etherpad. I 
filed a blueprint [1] but I'm anticipating using the etherpad [2] much more 
regularly to track which files are being worked on or completed.


[1] https://blueprints.launchpad.net/heat/+spec/mox-to-mock-conversion
[2] https://etherpad.openstack.org/p/heat-mox-to-mock

Thanks for the guidance :)

On 10/09/2015 12:42 PM, Steven Hardy wrote:

On Fri, Oct 09, 2015 at 09:06:57AM -0400, Jay Dobies wrote:

I forget where we left things at the last meeting with regard to whether or
not there should be a blueprint on this. I was going to work on some during
some downtime but I wanted to make sure I wasn't overlapping with what
others may be converting (it's more time consuming than I anticipated).

Any thoughts on how to track it?


I'd probably suggest raising either a bug or a blueprint (not spec), then
link from that to an etherpad where you can track all the tests requiring
rework, and who's working on them.

"it's more time consuming than I anticipated" is pretty much my default
response for anything to do with heat unit tests btw, good luck! :)

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-10-09 Thread Matt Riedemann



On 10/9/2015 12:03 PM, Jay Pipes wrote:

On 10/07/2015 11:04 AM, Matt Riedemann wrote:

I'm wondering why we don't reverse sort the tables using the sqlalchemy
metadata object before processing the tables for delete?  That's the
same thing I did in the 267 migration since we needed to process the
tree starting with the leaves and then eventually get back to the
instances table (since most roads lead to the instances table).


Yes, that would make a lot of sense to me if we used the SA metadata
object for reverse sorting.
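
For illustration, a rough sketch of that reverse sort (module path and
attribute names are approximate; nova's declarative base is assumed to be
models.BASE):

    # Process child tables before their parents by reversing SQLAlchemy's
    # dependency-sorted table list.
    from nova.db.sqlalchemy import models

    def tables_leaf_first():
        # MetaData.sorted_tables is ordered parents-first based on foreign
        # key dependencies, so reversing it yields leaf tables first.
        return list(reversed(models.BASE.metadata.sorted_tables))

    for table in tables_leaf_first():
        print(table.name)  # archive rows from this table next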


When I get some free time next week I'm going to play with this.




Another thing that's really weird is how max_rows is used in this code.
There is cumulative tracking of the max_rows value so if the value you
pass in is too small, you might not actually be removing anything.

I figured max_rows meant up to max_rows from each table, not max_rows
*total* across all tables. By my count, there are 52 tables in the nova
db model. The way I read the code, if I pass in max_rows=10 and say it
processes table A and archives 7 rows, then when it processes table B it
will pass max_rows=(max_rows - rows_archived), which would be 3 for
table B. If we archive 3 rows from table B, rows_archived >= max_rows
and we quit. So to really make this work, you have to pass in something
big for max_rows, like 1000, which seems completely random.

Does this seem odd to anyone else?


Uhm, yes it does.

Given the relationships between tables, I'd think you'd want to try and
delete max_rows for all tables, so archive 10 instances, 10
block_device_mapping, 10 pci_devices, etc.

I'm also bringing this up now because there is a thread in the operators
list which pointed me to a set of scripts that operators at GoDaddy are
using for archiving deleted rows:

http://lists.openstack.org/pipermail/openstack-operators/2015-October/008392.html


Presumably because the command in nova doesn't work. We should either
make this thing work or just punt and delete it because no one cares.


The db archive code in Nova just doesn't make much sense to me at all.
The algorithm for purging stuff, like you mention above, does not take
into account the relationships between tables; instead of diving into
the children relations and archiving those first, the code just uses a
simplistic "well, if we hit a foreign key error, just ignore and
continue archiving other things, we will eventually repeat the call to
delete this row" strategy:

https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L6021-L6023


Yeah, I noticed that too and I don't think it actually does anything. We 
never actually come back since that would require some 
tracking/stack/recursion stuff to retry failed tables, which we don't do.





I had a proposal [1] to completely rework the whole shadow table mess
and db archiving functionality. I continue to believe that is the
appropriate solution for this, and that we should rip out the existing
functionality because it simply does not work properly.

Best,
-jay

[1] https://review.openstack.org/#/c/137669/


Are you going to pick that back up? Or sic some minions on it.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Alec Hothan (ahothan)

Still, the point from Chris is valid.
I guess the main reason OpenStack is going with multiple concurrent schedulers 
is to scale out by distributing the load between multiple scheduler instances, 
because one instance is too slow.
This discussion is about coordinating the many scheduler instances in a way 
that works, and this is actually a difficult problem that will get worse as the 
number of variables for instance placement increases (for example, NFV is going 
to require a lot more than just CPU pinning, huge pages, and NUMA).

Has anybody looked at why one instance is too slow and what it would take to 
make one scheduler instance work fast enough? This does not preclude the use of 
concurrency for finer-grained tasks in the background.




On 10/9/15, 11:05 AM, "Clint Byrum"  wrote:

>Excerpts from Chris Friesen's message of 2015-10-09 10:54:36 -0700:
>> On 10/09/2015 11:09 AM, Zane Bitter wrote:
>> 
>> > The optimal way to do this would be a weighted random selection, where the
>> > probability of any given host being selected is proportional to its 
>> > weighting.
>> > (Obviously this is limited by the accuracy of the weighting function in
>> > expressing your actual preferences - and it's at least conceivable that 
>> > this
>> > could vary with the number of schedulers running.)
>> >
>> > In fact, the choice of the name 'weighting' would normally imply that it's 
>> > done
>> > this way; hearing that the 'weighting' is actually used as a 'score' with 
>> > the
>> > highest one always winning is quite surprising.
>> 
>> If you've only got one scheduler, there's no need to get fancy, you just 
>> pick 
>> the "best" host based on your weighing function.
>> 
>> It's only when you've got parallel schedulers that things get tricky.
>> 
>
>Note that I think you mean _concurrent_ not _parallel_ schedulers.
>
>Parallel schedulers would be trying to solve the same unit of work by
>breaking it up into smaller components and doing them at the same time.
>
>Concurrent means they're just doing different things at the same time.
>
>I know this is nit-picky, but we use the wrong word _A LOT_ and the
>problem space is actually vastly different, as parallelizable problems
>have a whole set of optimizations and advantages that generic concurrent
>problems (especially those involving mutating state!) have a whole set
>of race conditions that must be managed.
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-09 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2015-10-09 09:51:28 -0500:
> 
> On 10/9/2015 1:49 AM, Paul Carlton wrote:
> >
> > On 08/10/15 16:49, Doug Hellmann wrote:
> >> Excerpts from Matt Riedemann's message of 2015-10-07 14:38:07 -0500:
> >>> Here's why:
> >>>
> >>> https://review.openstack.org/#/c/220622/
> >>>
> >>> That's marked as fixing an OSSA which means we'll have to backport the
> >>> fix in nova but it depends on a change to strutils.mask_password in
> >>> oslo.utils, which required a release and a minimum version bump in
> >>> global-requirements.
> >>>
> >>> To backport the change in nova, we either have to:
> >>>
> >>> 1. Copy mask_password out of oslo.utils and add it to nova in the
> >>> backport or,
> >>>
> >>> 2. Backport the oslo.utils change to a stable branch, release it as a
> >>> patch release, bump minimum required version in stable g-r and then
> >>> backport the nova change and depend on the backported oslo.utils stable
> >>> release - which also makes it a dependent library version bump for any
> >>> packagers/distros that have already frozen libraries for their stable
> >>> releases, which is kind of not fun.
> >> Bug fix releases do not generally require a minimum version bump. The
> >> API hasn't changed, and there's nothing new in the library in this case,
> >> so it's a documentation issue to ensure that users update to the new
> >> release. All we should need to do is backport the fix to the appropriate
> >> branch of oslo.utils and release a new version from that branch that is
> >> compatible with the same branch of nova.
> >>
> >> Doug
> >>
> >>> So I'm thinking this is one of those things that should ultimately live
> >>> in oslo-incubator so it can live in the respective projects. If
> >>> mask_password were in oslo-incubator, we'd have just fixed and
> >>> backported it there and then synced to nova on master and stable
> >>> branches, no dependent library version bumps required.
> >>>
> >>> Plus I miss the good old days of reviewing oslo-incubator
> >>> syncs...(joking of course).
> >>>
> >> __
> >>
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > I've been following this discussion, is there now a consensus on the way
> > forward?
> >
> > My understanding is that Doug is suggesting back porting my oslo.utils
> > change to the stable juno and kilo branches?
> >
> 
> It means you'll have to backport the oslo.utils change to each stable 
> branch that you also backport the nova change to, which probably goes 
> back to stable/juno (so liberty->kilo->juno backports in both projects).
> 

That sounds right. Ping the Oslo team in #openstack-oslo for reviews on
those stable branches as you prepare them and I'm sure we can help
expedite the updates.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L2 gateway project

2015-10-09 Thread Sukhdev Kapur
Hey Kyle,

We are down to a couple of patches that are awaiting approval/merge. As soon
as they are done, I will let you know.

Thanks
-Sukhdev


On Fri, Oct 9, 2015 at 9:39 AM, Kyle Mestery  wrote:

> On Fri, Oct 9, 2015 at 10:13 AM, Gary Kotton  wrote:
>
>> Hi,
>> Who will be creating the stable/liberty branch?
>> Thanks
>> Gary
>>
>>
> I'll be doing this once someone from the L2GW team lets me know a commit
> SHA to create it from.
>
> Thanks,
> Kyle
>
>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Chris Friesen

On 10/09/2015 11:09 AM, Zane Bitter wrote:


The optimal way to do this would be a weighted random selection, where the
probability of any given host being selected is proportional to its weighting.
(Obviously this is limited by the accuracy of the weighting function in
expressing your actual preferences - and it's at least conceivable that this
could vary with the number of schedulers running.)

In fact, the choice of the name 'weighting' would normally imply that it's done
this way; hearing that the 'weighting' is actually used as a 'score' with the
highest one always winning is quite surprising.


If you've only got one scheduler, there's no need to get fancy, you just pick 
the "best" host based on your weighing function.


It's only when you've got parallel schedulers that things get tricky.
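
For what it's worth, a minimal sketch (plain Python; host names and weights
are made up) contrasting picking the top-scored host with the weighted random
selection described above:

    import random

    # Hypothetical weighted hosts; higher weight means more preferred.
    weighted_hosts = {'host1': 5.0, 'host2': 3.0, 'host3': 1.0}

    # Pick-the-best: every scheduler with the same data chooses the same host.
    best = max(weighted_hosts, key=weighted_hosts.get)

    def weighted_choice(hosts):
        # Roulette-wheel selection: probability proportional to weight, so
        # concurrent schedulers spread their choices across suitable hosts.
        total = sum(hosts.values())
        r = random.uniform(0, total)
        for host, weight in hosts.items():
            r -= weight
            if r <= 0:
                return host
        return max(hosts, key=hosts.get)  # float-rounding fallback

    picked = weighted_choice(weighted_hosts)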

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Backport policy for Liberty

2015-10-09 Thread Fox, Kevin M
As an Op, that sounds reasonable so long as they aren't defaulted on. In theory 
it shouldn't be much different than a distro adding additional packages. The 
new packages don't affect existing systems unless the op requests them to be 
installed.

With my App Catalog hat on, I'm curious how horizon plugins might fit into that 
scheme. The App Catalog plugin would need to be added directly to the Horizon 
container. I'm sure there are other plugins that may want to get loaded into 
the container too. They should all be able to be enabled/disabled via docker 
env variables, though. Any thoughts there?

Thanks,
Kevin

From: Steven Dake (stdake) [std...@cisco.com]
Sent: Thursday, October 08, 2015 12:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [kolla] Backport policy for Liberty

Kolla operators and developers,

The general consensus of the Core Reviewer team for Kolla is that we should 
embrace a liberal backport policy for the Liberty release.  An example of 
liberal -> if we add a new server service to Ansible, we would backport the 
feature to Liberty.  This is a break with the typical OpenStack backport 
policy.  It also creates a whole bunch more work and has the potential to 
introduce regressions in the Liberty release.

Given these realities I want to put on hold any liberal backporting until after 
Summit.  I will schedule a fishbowl session for a backport policy discussion 
where we will decide as a community what type of backport policy we want.  The 
delivery required before we introduce any liberal backporting policy then 
should be a description of that backport policy discussion at Summit distilled 
into a RST file in our git repository.

If you have any questions, comments, or concerns, please chime in on the thread.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Monty Taylor

On 10/09/2015 01:39 PM, David Stanek wrote:


On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx wrote:

On Fri, Oct 09, 2015 at 01:01:20PM -0400, Shamail wrote:
:> On Oct 9, 2015, at 12:28 PM, Monty Taylor wrote:
:>
:>> On 10/09/2015 11:21 AM, Shamail wrote:
:>>
:>>
:>>> On Oct 9, 2015, at 10:39 AM, Sean Dague wrote:
:>>>
:>>> It looks like some great conversation got going on the service
catalog
:>>> standardization spec / discussion at the last cross project
meeting.
:>>> Sorry I wasn't there to participate.
:>> Apologies if this is a question that has already been addressed,
but why can't we just leverage something like consul.io?
:>
:> It's a good question and there have actually been some
discussions about leveraging it on the backend. However, even if we
did, we'd still need keystone to provide the multi-tenancy view on
the subject. consul wasn't designed (quite correctly I think) to be
a user-facing service for 50k users.
:>
:> I think it would be an excellent backend.
:Thanks, that makes sense.  I agree that it might be a good backend
but not the overall solution... I was bringing it up to ensure we
consider existing options (where possible) and spend cycles on the
unsolved bits.

As an operator I'd be happy to use SRV records to define endpoints,
though multiple regions could make that messy.

would we make subdomains per region or include region name in the
service name?

_compute-regionone._tcp.example.com 
-vs-
_compute._tcp.regionone.example.com 

Also not all operators can control their DNS to this level so it
couldn't be the only option.


SO - XMPP does this. The way it works is that if your XMPP provider has 
put the appropriate records in DNS, then everything Just Works. If not, 
then you, as a consumer, have several pieces of information you need to 
provide by hand.


Of course, there are already several pieces of information you have to 
provide by hand to connect to OpenStack, so needing to download a 
manifest file or something like that to talk to a cloud in an 
environment where the people running a cloud do not have the ability to 
add information to DNS (boggles) shouldn't be that terrible.


One could also imagine an in-between option where OpenStack could run an 
_optional_ DNS for this purpose - and then the only 'by-hand' you'd need 
for clouds with no real DNS is the location of the discovery DNS.



Or are you talking about using an internal DNS implementation private
to the OpenStack Deployment?  I'm actually a bit less happy with that
idea.


I was able to put together an implementation[1] of DNS-SD loosely based
on RFC-6763[2]. It's really a proof of concept, but we've talked so much
about it that I decided to get something working. Although if this seems
like a viable option then there's still much work to be done.

I'd love feedback.

1. https://gist.github.com/dstanek/093f851fdea8ebfd893d
2. https://tools.ietf.org/html/rfc6763

--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-09 Thread Robert Collins
On 10 October 2015 at 03:57, Cory Benfield  wrote:
>
>> On 9 Oct 2015, at 15:18, Jeremy Stanley  wrote:
>>
>> On 2015-10-09 14:58:36 +0100 (+0100), Cory Benfield wrote:
>> [...]
>>> IMO, what OpenStack needs is a decision about where it’s getting
>>> its packages from, and then to refuse to mix the two.
>>
>> I have yet to find a Python-based operating system installable in
>> whole via pip. There will always be _at_least_some_ packages you
>> install from your operating system's package management. What you
>> seem to be missing is that Linux distros are now shipping base
>> images which include their python-requests and python-urllib3
>> packages already pre-installed as dependencies of Python-based tools
>> they deem important to their users.
>>
>
> Yeah, this has been an ongoing problem.
>
> For my part, Donald Stufft has informed me that if the distribution-provided 
> requests package has the appropriate install_requires field in its setup.py, 
> pip will respect that dependency.

It should but it won't :).

https://github.com/pypa/pip/issues/2687
and
https://github.com/pypa/pip/issues/988

The first one means that if someone does 'pip install -U urllib3' and
an unbundled requests with appropriate pin on urllib3 is already
installed, that pip will happily upgrade urllib3, breaking requests,
without complaining. It is fixable (with correct metadata of course).

The second one means that if anything - another package, or the user
via direct mention or requirements/constraints files - specifies a
urllib3 dependency (of any sort) then the requests dependency will be
silently ignored.

Both of these will be solved in the medium-term future - we're now at the
point of having POC branches, and once we've finished with the
constraints rollout and PEP-426 marker polish we will be moving on to the
resolver work.

> Given that requests has recently switched to not providing mid-cycle urllib3 
> versions, it should be entirely possible for downstream redistributors in 
> Debian/Fedora to put that metadata into their packages when they unbundle 
> requests. I’m chasing up with our downstream redistributors right now to ask 
> them to start doing that.
>
> This should resolve the problem for systems where requests 2.7.0 or higher 
> are being used. In other systems, this problem still exists and cannot be 
> fixed by requests directly.

Well, if we get to a future where it is in-principle fixed, I'll be happy.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] py26 support in python-muranoclient

2015-10-09 Thread Sahdev P Zala
> From: Clark Boylan 
> To: openstack-dev@lists.openstack.org
> Date: 10/09/2015 02:00 PM
> Subject: Re: [openstack-dev] [Murano] py26 support in 
python-muranoclient
> 
> 
> 
> On Fri, Oct 9, 2015, at 10:32 AM, Vahid S Hashemian wrote:
> > Serg, Jeremy,
> > 
> > Thank you for your response, so the issue I ran into with my patch is 
the 
> > gate job failing on python26.
> > You can see it here: https://review.openstack.org/#/c/232271/
> > 
> > Serg suggested that we add 2.6 support to tosca-parser, which is fine
> > with 
> > us.
> > But I got a bit confused after reading Jeremy's response.
> > It seems to me that the support will be going away, but there is no 
> > timeline (and therefore no near-term plan?)
> > So, I'm hoping Jeremy can advise whether he also recommends the same 
> > thing, or not.
> There is a timeline (though admittedly hard to find) at
> https://etherpad.openstack.org/p/YVR-relmgt-stable-branch which says
> Juno support would run through the end of November. Since Juno is the
> last release to support python2.6 we will remove python2.6 support from
> the test infrastructure at that time as well.
> 
> I personally probably wouldn't bother with extra work to support
> python2.6, but that all depends on how much work it is and whether or
> not you find value in it. Ultimately it is up to you, just know that the
> Infrastructure team will stop hosting testing for python2.6 when Juno is
> EOLed.
> 
> Hope this helps,
> Clark

Thanks Clark and Jeremy! This is very helpful. 

Serg, now knowing that CI testing is not going to continue in a few weeks 
and that many other projects have dropped python 2.6 support or are getting 
there, it would be great if Murano decides the same. If the Murano team 
decides to continue 2.6 support, we will need to enable support in 
tosca-parser as well. As you mentioned, it may not be a lot of work for us 
and we are totally fine with making changes, but without automated tests it 
can be challenging in the future. 

Thanks! 
Sahdev Zala



> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] py26 support in python-muranoclient

2015-10-09 Thread Tim Bell
There is a need to distinguish between server-side py26 support, which is
generally under the control of the service provider, and py26 support on the
client side. Pushing all of their hypervisors and service machines to RHEL 7
is under a service provider's control, but requiring all of their users to do
the same is much more difficult.

 

Thus, I feel there should be different decisions and communication w.r.t.
the time scales for deprecation of py26 on clients compared to the server
side. A project may choose to make them together but equally some may choose
to delay the mandatory client migration to py27 while requiring the server
to move.

 

Tim

 

From: Sahdev P Zala [mailto:spz...@us.ibm.com] 
Sent: 09 October 2015 20:42
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [Murano] py26 support in python-muranoclient

 

> From: Clark Boylan  >
> To: openstack-dev@lists.openstack.org
 
> Date: 10/09/2015 02:00 PM
> Subject: Re: [openstack-dev] [Murano] py26 support in python-muranoclient
> 
> 
> 
> On Fri, Oct 9, 2015, at 10:32 AM, Vahid S Hashemian wrote:
> > Serg, Jeremy,
> > 
> > Thank you for your response, so the issue I ran into with my patch is
the 
> > gate job failing on python26.
> > You can see it here:  
https://review.openstack.org/#/c/232271/
> > 
> > Serg suggested that we add 2.6 support to tosca-parser, which is fine
> > with 
> > us.
> > But I got a bit confused after reading Jeremy's response.
> > It seems to me that the support will be going away, but there is no 
> > timeline (and therefore no near-term plan?)
> > So, I'm hoping Jeremy can advise whether he also recommends the same 
> > thing, or not.
> There is a timeline (though admittedly hard to find) at
> https://etherpad.openstack.org/p/YVR-relmgt-stable-branch which says
> Juno support would run through the end of November. Since Juno is the
> last release to support python2.6 we will remove python2.6 support from
> the test infrastructure at that time as well.
> 
> I personally probably wouldn't bother with extra work to support
> python2.6, but that all depends on how much work it is and whether or
> not you find value in it. Ultimately it is up to you, just know that the
> Infrastructure team will stop hosting testing for python2.6 when Juno is
> EOLed.
> 
> Hope this helps,
> Clark

Thanks Clark and Jeremy! This is very helpful. 

Serg, now knowing that CI testing is not going to continue in few weeks and
many other projects has dropped python 2.6 support or getting there, if
Murano decides the same that would be great. If Murano team decide to
continue the 2.6 support, we will need to enable support in tosca-parser as
well. As you mentioned it may not be a lot of work for us and we are totally
fine in making changes, but without automated tests it can be challenging in
future. 

Thanks! 
Sahdev Zala



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 
>  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Jonathan D. Proulx
On Fri, Oct 09, 2015 at 02:17:26PM -0400, Monty Taylor wrote:
:On 10/09/2015 01:39 PM, David Stanek wrote:
:>
:>On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx wrote:
:>As an operator I'd be happy to use SRV records to define endpoints,
:>though multiple regions could make that messy.
:>
:>would we make subdomains per region or include region name in the
:>service name?
:>
:>_compute-regionone._tcp.example.com 
:>-vs-
:>_compute._tcp.regionone.example.com 
:>
:>Also not all operators can control their DNS to this level so it
:>couldn't be the only option.
:
:SO - XMPP does this. The way it works is that if your XMPP provider
:has put the appropriate records in DNS, then everything Just Works. If
:not, then you, as a consumer, have several pieces of information you
:need to provide by hand.
:
:Of course, there are already several pieces of information you have
:to provide by hand to connect to OpenStack, so needing to download a
:manifest file or something like that to talk to a cloud in an
:environment where the people running a cloud do not have the ability
:to add information to DNS (boggles) shouldn't be that terrible.

Yes, but XMPP requires 2 (maybe 3) SRV records, so an equivalent number
of local config options is manageable. A cloud with X endpoints and Y
regions is significantly more.

Not to say this couldn't be done by packing more stuff into the openrc
or equivalent so users don't need to directly enter all that, but that
would be a significant change, and one I think would be more difficult
for smaller operations.

:One could also imagine an in-between option where OpenStack could run
:an _optional_ DNS for this purpose - and then the only 'by-hand'
:you'd need for clouds with no real DNS is the location of the
:discovery DNS.

Yes a special purpose DNS (a la dnsbl) might be preferable to
pushing around static configs.

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Gregory Haynes
Excerpts from Zane Bitter's message of 2015-10-09 17:09:46 +:
> On 08/10/15 21:32, Ian Wells wrote:
> >
> > 2. if many hosts suit the 5 VMs then this is *very* unlucky, because 
> > we should be choosing a host at random from the set of
> > suitable hosts and that's a huge coincidence - so this is a tiny
> > corner case that we shouldn't be designing around
> >
> > Here is where we differ in our understanding. With the current
> > system of filters and weighers, 5 schedulers getting requests for
> > identical VMs and having identical information are *expected* to
> > select the same host. It is not a tiny corner case; it is the most
> > likely result for the current system design. By catching this
> > situation early (in the scheduling process) we can avoid multiple
> > RPC round-trips to handle the fail/retry mechanism.
> >
> >
> > And so maybe this would be a different fix - choose, at random, one of
> > the hosts above a weighting threshold, not choose the top host every
> > time? Technically, any host passing the filter is adequate to the task
> > from the perspective of an API user (and they can't prove if they got
> > the highest weighting or not), so if we assume weighting an operator
> > preference, and just weaken it slightly, we'd have a few more options.
> 
> The optimal way to do this would be a weighted random selection, where 
> the probability of any given host being selected is proportional to its 
> weighting. (Obviously this is limited by the accuracy of the weighting 
> function in expressing your actual preferences - and it's at least 
> conceivable that this could vary with the number of schedulers running.)
> 
> In fact, the choice of the name 'weighting' would normally imply that 
> it's done this way; hearing that the 'weighting' is actually used as a 
> 'score' with the highest one always winning is quite surprising.
> 
> cheers,
> Zane.
> 
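
To make that concrete: weighted random selection in the sense Zane describes
is only a few lines of Python. A sketch, with made-up host names and weights:

    import random

    def weighted_choice(weighted_hosts):
        # Probability of picking a host is proportional to its weight.
        total = sum(weight for _, weight in weighted_hosts)
        threshold = random.uniform(0, total)
        running = 0.0
        for host, weight in weighted_hosts:
            running += weight
            if running >= threshold:
                return host
        return weighted_hosts[-1][0]  # guard against float rounding

    print(weighted_choice([('node-1', 5.0), ('node-2', 3.0), ('node-3', 1.0)]))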

There is a more generalized version of this algorithm for concurrent
scheduling I've seen a few times - Pick N options at random, apply
heuristic over that N to pick the best, attempt to schedule at your
choice, retry on failure. As long as you have a fast heuristic and your
N is sufficiently smaller than the total number of options then the
retries are rare-ish and cheap. It also can scale out extremely well.
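
A rough sketch of that sample-then-score loop, where score() and try_claim()
stand in for the weighing function and an atomic resource claim on the chosen
host (N and the retry limit are made up too):

    import random

    def schedule(hosts, score, try_claim, n=20, retries=5):
        # Sample N hosts, keep the best-scoring one, retry if we lose the race.
        for _ in range(retries):
            sample = random.sample(hosts, min(n, len(hosts)))
            best = max(sample, key=score)
            if try_claim(best):
                return best
        raise RuntimeError('gave up after %d attempts' % retries)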

Obviously you lose some of the ability to micro-manage where things are
placed with a scheduling setup like that, but if scaling up is the
concern I really hope that isn't a problem...

Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-10-09 Thread Jay Pipes

On 10/09/2015 02:16 PM, Matt Riedemann wrote:

On 10/9/2015 12:03 PM, Jay Pipes wrote:

I had a proposal [1] to completely rework the whole shadow table mess
and db archiving functionality. I continue to believe that is the
appropriate solution for this, and that we should rip out the existing
functionality because it simply does not work properly.

Best,
-jay

[1] https://review.openstack.org/#/c/137669/


Are you going to pick that back up? Or sic some minions on it.


I don't personally have the bandwidth to do this. If anyone out there in 
Nova contributor land has interest, just find me on IRC. :)


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Blueprint to change (expand) traditional Ethernet interface naming schema in Fuel

2015-10-09 Thread Sergey Vasilenko
>
> > I would like to pay your attention to the changing interface naming
> > schema, which is proposed to be implemented in Fuel [1]. In brief,
> > Ethernet network interfaces may not be named as ethX, and there is a
> > reported bug about it [2].
> > There are a lot of reasons to switch to the new naming schema, not only
> > because it has been used in CentOS 7 (and probably will be used in the
> > next Ubuntu LTS), but because the new naming schema gives more
> > predictable interface names [3]. There is a reported bug related to the
> > topic [4].
>

The L23network module is interface-naming-scheme agnostic.
The only protection found is on bridge and bond interface names -- you can't
name a bond or a bridge 'enp2s0', because that name is reserved for NICs.



> You might be interested to look at the os-net-config tool - we faced this
> exact same issue with TripleO, and solved it via os-net-config, which
> provides abstractions for network configuration, including mapping device
> aliases (e.g "nic1") to real NIC names (e.g "em1" or whatever).
>
> https://github.com/openstack/os-net-config
>
>
It's an interesting project, and the proposed network configuration format is
interesting, but...
the project is too young, and it doesn't allow configuring some things that
L23network already supports.
The main problem of this project is its approach to changing interface
options. It doesn't use prefetch/flush mechanics as in Puppet; in most cases
it just executes commands to make the change. Such an approach doesn't allow
properly re-configuring an existing cloud if it is under production load.

I can support the config format from os-net-config as an additional network
scheme format too, but, IMHO, this hierarchical format is not as convenient
as a flat one.

NIC mapping in Nailgun is already implemented in the template networking. If
we need to use it for other cases, please ask Alexey Kasatkin.

/sv
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Chris Friesen

On 10/09/2015 12:55 PM, Gregory Haynes wrote:


There is a more generalized version of this algorithm for concurrent
scheduling I've seen a few times - Pick N options at random, apply
heuristic over that N to pick the best, attempt to schedule at your
choice, retry on failure. As long as you have a fast heuristic and your
N is sufficiently smaller than the total number of options then the
retries are rare-ish and cheap. It also can scale out extremely well.


If you're looking for a resource that is relatively rare (say you want a 
particular hardware accelerator, or a very large number of CPUs, or even to be 
scheduled "near" to a specific other instance) then you may have to retry quite 
a lot.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Chris Friesen

On 10/09/2015 12:25 PM, Alec Hothan (ahothan) wrote:


Still the point from Chris is valid. I guess the main reason openstack is
going with multiple concurrent schedulers is to scale out by distributing the
load between multiple instances of schedulers because 1 instance is too
slow. This discussion is about coordinating the many instances of schedulers
in a way that works and this is actually a difficult problem and will get
worse as the number of variables for instance placement increases (for
example NFV is going to require a lot more than just cpu pinning, huge pages
and numa).

Has anybody looked at why 1 instance is too slow and what it would take to
make 1 scheduler instance work fast enough? This does not preclude the use of
concurrency for finer grain tasks in the background.


Currently we pull data on all (!) of the compute nodes out of the database via a 
series of RPC calls, then evaluate the various filters in python code.


I suspect it'd be a lot quicker if each filter was a DB query.

Also, ideally we'd want to query for the most "strict" criteria first, to reduce 
the total number of comparisons.  For example, if you want to implement the 
"affinity" server group policy, you only need to test a single host.  If you're 
matching against host aggregate metadata, you only need to test against hosts in 
matching aggregates.
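
To illustrate the idea (with a hypothetical table and columns, not the real
Nova schema), a RAM filter pushed down into the database could look roughly
like this with SQLAlchemy:

    from sqlalchemy import (Column, Integer, MetaData, String, Table,
                            create_engine, select)

    metadata = MetaData()
    compute_nodes = Table(
        'compute_nodes', metadata,
        Column('host', String, primary_key=True),
        Column('free_ram_mb', Integer),
        Column('aggregate_id', Integer))

    def ram_filter_query(requested_mb, aggregate_id=None):
        # Let the database do the comparison instead of looping in Python.
        query = select(compute_nodes.c.host).where(
            compute_nodes.c.free_ram_mb >= requested_mb)
        if aggregate_id is not None:
            # The most restrictive criterion becomes just another WHERE clause.
            query = query.where(compute_nodes.c.aggregate_id == aggregate_id)
        return query

    engine = create_engine('sqlite:///:memory:')
    metadata.create_all(engine)
    with engine.connect() as conn:
        hosts = [row.host for row in conn.execute(ram_filter_query(4096))]

Chaining filters would then just mean chaining WHERE clauses, which the
database is much better at than a Python loop over every compute node.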


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Different OpenStack components

2015-10-09 Thread Abhishek Talwar
Hi Folks,

I have been working with OpenStack for a while now. I know that other than the main components (nova, neutron, glance, cinder, horizon, tempest, keystone, etc.) there are many more components in OpenStack (like Sahara and Trove).

So, where can I see the list of all existing OpenStack components, and is there any documentation for these components so that I can read what roles they play?

Thanks and Regards
Abhishek Talwar



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Joshua Harlow

And also we should probably deprecate/not recommend:

http://docs.openstack.org/developer/nova/api/nova.scheduler.filters.json_filter.html#nova.scheduler.filters.json_filter.JsonFilter

That filter IMHO basically disallows optimizations like forming SQL
statements for each filter (and then letting the DB do the heavy
lifting), or having each filter say 'oh, my logic can be performed by
a prepared statement ABC and you should just use that instead' (again
letting the DB do the heavy lifting).


Chris Friesen wrote:

On 10/09/2015 12:25 PM, Alec Hothan (ahothan) wrote:


Still the point from Chris is valid. I guess the main reason openstack is
going with multiple concurrent schedulers is to scale out by
distributing the
load between multiple instances of schedulers because 1 instance is too
slow. This discussion is about coordinating the many instances of
schedulers
in a way that works and this is actually a difficult problem and will get
worse as the number of variables for instance placement increases (for
example NFV is going to require a lot more than just cpu pinning, huge
pages
and numa).

Has anybody looked at why 1 instance is too slow and what it would
take to
make 1 scheduler instance work fast enough? This does not preclude the
use of
concurrency for finer grain tasks in the background.


Currently we pull data on all (!) of the compute nodes out of the
database via a series of RPC calls, then evaluate the various filters in
python code.

I suspect it'd be a lot quicker if each filter was a DB query.

Also, ideally we'd want to query for the most "strict" criteria first,
to reduce the total number of comparisons. For example, if you want to
implement the "affinity" server group policy, you only need to test a
single host. If you're matching against host aggregate metadata, you
only need to test against hosts in matching aggregates.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Sean Dague
It looks like some great conversation got going on the service catalog
standardization spec / discussion at the last cross project meeting.
Sorry I wasn't there to participate.

A lot of that ended up in here (which was an ether pad stevemar and I
started working on the other day) -
https://etherpad.openstack.org/p/mitaka-service-catalog which is great.

A couple of things that would make this more useful:

1) if you are commenting, please (ircnick) your comments. It's not easy
to always track down folks later if the comment was not understood.

2) please provide link to code when explaining a point. Github supports
the ability to very nicely link to (and highlight) a range of code by a
stable object ref. For instance -
https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132

That will make comments about X does Y, or Z can't do W, more clear
because we'll all be looking at the same chunk of code and start to
build more shared context here. One of the reasons this has been long
and difficult is that we're missing a lot of that shared context between
projects. Reassembling that by reading each other's relevant code will
go a long way to understanding the whole picture.


Lastly, I think it's pretty clear we probably need a dedicated workgroup
meeting to keep this ball rolling, come to a reasonable plan that
doesn't break any existing deployed code, but lets us get to a better
world in a few cycles. annegentle, stevemar, and I have been pushing on
that ball so far, however I'd like to know who else is willing to commit
a chunk of time over this cycle to this. Once we know that we can try to
figure out when a reasonable weekly meeting point would be.

Thanks,

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread John Griffith
On Fri, Oct 9, 2015 at 3:42 AM, Thierry Carrez 
wrote:

> Hello everyone,
>
> OpenStack has become quite big, and it's easier than ever to feel lost,
> to feel like nothing is really happening. It's more difficult than ever
> to feel part of a single community, and to celebrate little successes
> and progress.
>
> In a (small) effort to help with that, I suggested making it easier to
> record little moments of joy and small success bits. Those are usually
> not worth the effort of a blog post or a new mailing-list thread, but
> they show that our community makes progress *every day*.
>
> So whenever you feel like you made progress, or had a little success in
> your OpenStack adventures, or have some joyful moment to share, just
> throw the following message on your local IRC channel:
>
> #success [Your message here]
>
> The openstackstatus bot will take that and record it on this wiki page:
>
> https://wiki.openstack.org/wiki/Successes
>
> We'll add a few of those every week to the weekly newsletter (as part of
> the developer digest that we recently added there).
>
> Caveats: Obviously that only works on channels where openstackstatus is
> present (the official OpenStack IRC channels), and we may remove entries
> that are off-topic or spam.
>
> So... please use #success liberally and record little everyday OpenStack
> successes. Share the joy and make the OpenStack community a happy place.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Great idea Thierry, great to promote some positive things!  Thanks for
putting this together.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Dean Troyer
On Fri, Oct 9, 2015 at 9:39 AM, Sean Dague  wrote:

> Lastly, I think it's pretty clear we probably need a dedicated workgroup
> meeting to keep this ball rolling, come to a reasonable plan that
> doesn't break any existing deployed code, but lets us get to a better
> world in a few cycles. annegentle, stevemar, and I have been pushing on
> that ball so far, however I'd like to know who else is willing to commit
> a chunk of time over this cycle to this. Once we know that we can try to
> figure out when a reasonable weekly meeting point would be.
>

Count me in...

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-09 Thread Joshua Harlow
For those who are interested in more of the historical aspect around this,

https://github.com/kennethreitz/requests/issues/1811

https://github.com/kennethreitz/requests/pull/1812

My own thoughts are varied here: I get the angle of vendoring, but I don't get 
the resistance to unvendoring it (which quite a few people have asked for); if 
many people want it unvendored then this just ends up creating a bad taste in 
the mouths of many people (this is a bad thing to have happen in open source 
and is how forks and such get created...).

But as was stated,

The decision to stop vendoring it likely won't be made here anyway ;)

From: c...@lukasa.co.uk
Date: Fri, 9 Oct 2015 14:58:36 +0100
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Requests + urllib3 + distro packages

 
> On 9 Oct 2015, at 14:40, William M Edmonds  wrote:
> 
> Cory Benfield  writes:
>>> The problem that occurs is the result of a few interacting things:
>>>  - requests has very very specific versions of urllib3 it works with.
>>> So specific they aren't always released yet.
>>
>> This should no longer be true. Our downstream redistributors pointed out to us
>> that this was making their lives harder than they needed to be, so it's now
>> our policy to only update to actual release versions of urllib3.
> 
> That's great... except that I'm confused as to why requests would continue to 
> repackage urllib3 if that's the case. Why not just prereq the version of 
> urllib3 that it needs? I thought the one and only answer to that question had 
> been so that requests could package non-standard versions.
> 
 
That is not and was never the only reason for vendoring urllib3. However, and I 
cannot stress this enough, the decision to vendor urllib3 is *not going to be 
changed on this thread*. If and when it changes, it will be by consensus 
decision from the requests maintenance team, which we do not have at this time.
 
Further, as I pointed out to Donald Stufft on IRC, if requests unbundled 
urllib3 *today* that would not fix the problem. The reason is that we’d specify 
our urllib3 dependency as: urllib3>=1.12,   
 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Suggestions for handling new panels and refactors in the future

2015-10-09 Thread Douglas Fish
I have two suggestions for handling both new panels and refactoring existing panels that I think could benefit us in the future:
1) When we are creating a panel that's a major refactor of an existing panel, it should be a new, separate panel, not a direct code replacement of the existing panel.
2) New panels (including refactors of existing panels) should be developed in an out-of-tree gerrit repository.
 
Why make refactors a separate panel?
 
I was taken a bit off guard after we merged the Network Topology->Curvature improvement: this was a surprise to some people outside of the Horizon community (though it had been discussed within Horizon for as long as I've been on the project). In retrospect, I think it would have been better to keep both the old Network Topology and new curvature based topology in our Horizon codebase. Doing so would have allowed operators to perform A-B/ Red-Black testing if they weren't immediately convinced of the awesomeness of the panel. It also would have allowed anyone with a customization of the Network Topology panel to have some time to configure their Horizon instance to continue to use the Legacy panel while they updated their customization to work with the new panel.
 
Perhaps we should treat panels more like an API element and take them through a deprecation cycle before removing them completely. Giving time for customizers to update their code is going to be especially important as we build angular replacements for python panels. While we have much better plugin support for angular there is still a learning curve for those developers.
 
Why build refactors and new panels out of tree?
 
First off, it appears to me that trying to build new panels in tree has been fairly painful. I've seen big, long-lived patches pushed along without being merged. It's quite acceptable and expected to quickly merge half-complete patches into a brand new repository - but you can't behave that way working in tree in Horizon. Horizon needs to be kept production/operator ready. External repositories do not. Merging code quickly can ease collaboration and avoid this kind of long-lived patch set.
 
Secondly, keeping new panels/plugins in a separate repository decentralizes decisions about which panels are "ready" and which aren't. If one group feels a plugin is "ready" they can make it their default version of the panel, and perhaps put resources toward translating it. If we develop these panels in-tree we need to make a common decision about what "ready" means - and once it's in everyone who wants a translated Horizon will need to translate it.
 
Finally, I believe developing new panels out of tree will help improve our plugin support in Horizon. It's this whole "eating your own dog food" idea. As soon as we start using our own Horizon plugin mechanism for our own development we are going to become aware of its shortcomings (like quotas) and will be sufficiently motivated to fix them.
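 
To make that dog-food point concrete, registering an out-of-tree panel through the existing plugin mechanism is roughly a matter of shipping an "enabled" file like the one below (the module and panel names here are made up; only the keys reflect the plugin settings we already have):

    # _3100_network_topology_v2.py -- hypothetical enabled file for an
    # out-of-tree replacement panel, installed alongside Horizon.
    PANEL = 'network_topology_v2'
    PANEL_DASHBOARD = 'project'
    PANEL_GROUP = 'network'

    # Python path to the Panel class provided by the external package.
    ADD_PANEL = 'horizon_topology_v2.panel.NetworkTopologyV2'
    ADD_INSTALLED_APPS = ['horizon_topology_v2']

If doing that for our own panels turns out to be painful, that pain is exactly the feedback the plugin mechanism needs.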
 
Looking forward to further discussion and other ideas on this!
Doug Fish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-09 Thread Matt Riedemann



On 10/9/2015 1:49 AM, Paul Carlton wrote:


On 08/10/15 16:49, Doug Hellmann wrote:

Excerpts from Matt Riedemann's message of 2015-10-07 14:38:07 -0500:

Here's why:

https://review.openstack.org/#/c/220622/

That's marked as fixing an OSSA which means we'll have to backport the
fix in nova but it depends on a change to strutils.mask_password in
oslo.utils, which required a release and a minimum version bump in
global-requirements.

To backport the change in nova, we either have to:

1. Copy mask_password out of oslo.utils and add it to nova in the
backport or,

2. Backport the oslo.utils change to a stable branch, release it as a
patch release, bump minimum required version in stable g-r and then
backport the nova change and depend on the backported oslo.utils stable
release - which also makes it a dependent library version bump for any
packagers/distros that have already frozen libraries for their stable
releases, which is kind of not fun.

Bug fix releases do not generally require a minimum version bump. The
API hasn't changed, and there's nothing new in the library in this case,
so it's a documentation issue to ensure that users update to the new
release. All we should need to do is backport the fix to the appropriate
branch of oslo.utils and release a new version from that branch that is
compatible with the same branch of nova.

Doug


So I'm thinking this is one of those things that should ultimately live
in oslo-incubator so it can live in the respective projects. If
mask_password were in oslo-incubator, we'd have just fixed and
backported it there and then synced to nova on master and stable
branches, no dependent library version bumps required.

Plus I miss the good old days of reviewing oslo-incubator
syncs...(joking of course).


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I've been following this discussion; is there now a consensus on the way
forward?

My understanding is that Doug is suggesting backporting my oslo.utils
change to the stable juno and kilo branches?



It means you'll have to backport the oslo.utils change to each stable 
branch that you also backport the nova change to, which probably goes 
back to stable/juno (so liberty->kilo->juno backports in both projects).


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-09 Thread Cory Benfield

> On 9 Oct 2015, at 15:18, Jeremy Stanley  wrote:
> 
> On 2015-10-09 14:58:36 +0100 (+0100), Cory Benfield wrote:
> [...]
>> IMO, what OpenStack needs is a decision about where it’s getting
>> its packages from, and then to refuse to mix the two.
> 
> I have yet to find a Python-based operating system installable in
> whole via pip. There will always be _at_least_some_ packages you
> install from your operating system's package management. What you
> seem to be missing is that Linux distros are now shipping base
> images which include their python-requests and python-urllib3
> packages already pre-installed as dependencies of Python-based tools
> they deem important to their users.
> 

Yeah, this has been an ongoing problem.

For my part, Donald Stufft has informed me that if the distribution-provided 
requests package has the appropriate install_requires field in its setup.py, 
pip will respect that dependency. Given that requests has recently switched to 
not providing mid-cycle urllib3 versions, it should be entirely possible for 
downstream redistributors in Debian/Fedora to put that metadata into their 
packages when they unbundle requests. I’m chasing up with our downstream 
redistributors right now to ask them to start doing that.
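
In other words, something along these lines in the downstream package's 
setup.py should be enough for pip to see the real dependency (the version 
numbers here are purely illustrative; the real bounds would come from the 
requests release being packaged):

    from setuptools import find_packages, setup

    setup(
        name='requests',
        version='2.7.0',
        packages=find_packages(),
        # Illustrative pin matching whatever urllib3 the release unbundles.
        install_requires=['urllib3>=1.10,<1.11'],
    )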

This should resolve the problem for systems where requests 2.7.0 or higher are 
being used. In other systems, this problem still exists and cannot be fixed by 
requests directly.

Cory


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread David Lyle
I'm in too.

David

On Fri, Oct 9, 2015 at 8:51 AM, Dean Troyer  wrote:
> On Fri, Oct 9, 2015 at 9:39 AM, Sean Dague  wrote:
>>
>> Lastly, I think it's pretty clear we probably need a dedicated workgroup
>> meeting to keep this ball rolling, come to a reasonable plan that
>> doesn't break any existing deployed code, but lets us get to a better
>> world in a few cycles. annegentle, stevemar, and I have been pushing on
>> that ball so far, however I'd like to know who else is willing to commit
>> a chunk of time over this cycle to this. Once we know that we can try to
>> figure out when a reasonable weekly meeting point would be.
>
>
> Count me in...
>
> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Shamail


> On Oct 9, 2015, at 10:49 AM, John Griffith  wrote:
> 
> Great idea Thierry, great to promote some positive things!  Thanks for 
> putting this together.

+1
Great indeed... thanks Thierry.

Regards,
Shamail 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] L2 gateway project

2015-10-09 Thread Gary Kotton
Hi,
Who will be creating the stable/liberty branch?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Monty Taylor

On 10/09/2015 11:07 AM, David Lyle wrote:

I'm in too.


Yes please.


On Fri, Oct 9, 2015 at 8:51 AM, Dean Troyer  wrote:

On Fri, Oct 9, 2015 at 9:39 AM, Sean Dague  wrote:


Lastly, I think it's pretty clear we probably need a dedicated workgroup
meeting to keep this ball rolling, come to a reasonable plan that
doesn't break any existing deployed code, but lets us get to a better
world in a few cycles. annegentle, stevemar, and I have been pushing on
that ball so far, however I'd like to know who else is willing to commit
a chunk of time over this cycle to this. Once we know that we can try to
figure out when a reasonable weekly meeting point would be.



Count me in...

dt

--

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Try to introduce RFC mechanism to CI.

2015-10-09 Thread Znoinski, Waldemar
 >-Original Message-
 >From: Jeremy Stanley [mailto:fu...@yuggoth.org]
 >Sent: Friday, October 9, 2015 1:17 PM
 >To: OpenStack Development Mailing List (not for usage questions)
 >
 >Subject: Re: [openstack-dev] [infra] Try to introduce RFC mechanism to CI.
 >
 >On 2015-10-09 18:06:55 +0800 (+0800), Tang Chen wrote:
 >[...]
 >> It is just a waste of resources if reviewers are discussing where
 >> this function should be, or what the function should be named. After
 >> all these details are agreed on, run the CI.
 >[...]
 
[WZ] I'm maintaining 2 3rdparty CIs here, for Nova and Neutron each, and to me 
there's no big difference in maintaining/supporting a CI that runs 5 or 150 
times a day. The only difference may be in resources required to keep up with 
the Gerrit stream. In my opinion (3rdparty) CIs should help early-discover the 
problems so should run on all patchsets as they appear - that's their main 
purpose to me.

 >As one of the people maintaining the upstream CI and helping coordinate our
 >resources/quotas, I don't see that providing early test feedback is a waste.
 >We're steadily increasing the instance quotas available to us, so check
 >pipeline utilization should continue to become less and less of a concern
 >anyway.
 >
 >For a change which is still under debate, feel free to simply ignore test 
 >results
 >until you get it to a point where you see them start to become relevant.
 >--
 >Jeremy Stanley
 >
 >__
 >
 >OpenStack Development Mailing List (not for usage questions)
 >Unsubscribe: OpenStack-dev-
 >requ...@lists.openstack.org?subject:unsubscribe
 >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Shamail


> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
> 
> It looks like some great conversation got going on the service catalog
> standardization spec / discussion at the last cross project meeting.
> Sorry I wasn't there to participate.
> 
Apologies if this is a question that has already been addressed, but why can't we 
just leverage something like consul.io?

> A lot of that ended up in here (which was an ether pad stevemar and I
> started working on the other day) -
> https://etherpad.openstack.org/p/mitaka-service-catalog which is great.
I didn't see anything immediately in the etherpad that couldn't be covered with 
the tool mentioned above.  It is open-source so we could always try to 
contribute there if we need something extra (written in golang though).
> 
> A couple of things that would make this more useful:
> 
> 1) if you are commenting, please (ircnick) your comments. It's not easy
> to always track down folks later if the comment was not understood.
> 
> 2) please provide link to code when explaining a point. Github supports
> the ability to very nicely link to (and highlight) a range of code by a
> stable object ref. For instance -
> https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132
> 
> That will make comments about X does Y, or Z can't do W, more clear
> because we'll all be looking at the same chunk of code and start to
> build more shared context here. One of the reasons this has been long
> and difficult is that we're missing a lot of that shared context between
> projects. Reassembling that by reading each other's relevant code will
> go a long way to understanding the whole picture.
> 
> 
> Lastly, I think it's pretty clear we probably need a dedicated workgroup
> meeting to keep this ball rolling, come to a reasonable plan that
> doesn't break any existing deployed code, but lets us get to a better
> world in a few cycles. annegentle, stevemar, and I have been pushing on
> that ball so far, however I'd like to know who else is willing to commit
> a chunk of time over this cycle to this. Once we know that we can try to
> figure out when a reasonable weekly meeting point would be.
> 
> Thanks,
> 
>-Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Monty Taylor

On 10/09/2015 10:39 AM, Sean Dague wrote:

It looks like some great conversation got going on the service catalog
standardization spec / discussion at the last cross project meeting.
Sorry I wasn't there to participate.


Just so folks know, the collection of existing service catalogs has been 
updated:


https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog

It now includes a new and correct catalog for Rackspace Private (the 
previous entry was just a copy of Rackspace Public) as well as entries 
for every public cloud I have an account on.


Hopefully that is useful information for folks looking at this.


A lot of that ended up in here (which was an ether pad stevemar and I
started working on the other day) -
https://etherpad.openstack.org/p/mitaka-service-catalog which is great.

A couple of things that would make this more useful:

1) if you are commenting, please (ircnick) your comments. It's not easy
to always track down folks later if the comment was not understood.

2) please provide link to code when explaining a point. Github supports
the ability to very nicely link to (and highlight) a range of code by a
stable object ref. For instance -
https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132

That will make comments about X does Y, or Z can't do W, more clear
because we'll all be looking at the same chunk of code and start to
build more shared context here. One of the reasons this has been long
and difficult is that we're missing a lot of that shared context between
projects. Reassembling that by reading each other's relevant code will
go a long way to understanding the whole picture.


Lastly, I think it's pretty clear we probably need a dedicated workgroup
meeting to keep this ball rolling, come to a reasonable plan that
doesn't break any existing deployed code, but lets us get to a better
world in a few cycles. annegentle, stevemar, and I have been pushing on
that ball so far, however I'd like to know who else is willing to commit
a chunk of time over this cycle to this. Once we know that we can try to
figure out when a reasonable weekly meeting point would be.

Thanks,

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

