Re: [openstack-dev] how to set default security group rules?

2017-06-09 Thread Kevin Benton
This isn't about the operating system of the instance or even the host.
It's the behavior of the Neutron API WRT what traffic will be filtered by
the default security group.

If we go down this route, users will have to expect effectively random sets
of security group rules from cloud to cloud and manually inspect each one.
If those are the semantics we want to provide, why have a default security
group at all?

Is your suggestion that since clouds are already inconsistent, we should
make it easier for operators to make it worse? It sounds silly, but the
main supporting argument for this seems to be that operators are already
breaking consistency using other scripts, etc., so we shouldn't care.
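
For illustration, the kind of scan-and-diff logic described in the quoted
message below could look roughly like this minimal sketch (assuming the
openstacksdk client; the desired rule set and cloud name are hypothetical
placeholders, not recommended defaults):

# Illustrative sketch only: reconcile a project's default security group
# with a desired rule set. Not production code.
import openstack

DESIRED_RULES = [
    # (direction, ethertype, protocol, port_min, port_max, remote_prefix)
    ("ingress", "IPv4", "tcp", 22, 22, "0.0.0.0/0"),
    ("ingress", "IPv4", "icmp", None, None, "0.0.0.0/0"),
]

def rule_key(rule):
    # Normalize an SDK rule object into a comparable tuple.
    return (rule.direction, rule.ether_type, rule.protocol,
            rule.port_range_min, rule.port_range_max, rule.remote_ip_prefix)

conn = openstack.connect(cloud="mycloud")  # cloud name from clouds.yaml
sg = conn.network.find_security_group("default")  # current project's default SG

existing = {rule_key(r)
            for r in conn.network.security_group_rules(security_group_id=sg.id)}

# Create only the rules this particular cloud is missing.
for rule in DESIRED_RULES:
    if rule not in existing:
        direction, ethertype, proto, pmin, pmax, prefix = rule
        conn.network.create_security_group_rule(
            security_group_id=sg.id, direction=direction, ether_type=ethertype,
            protocol=proto, port_range_min=pmin, port_range_max=pmax,
            remote_ip_prefix=prefix)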

On Fri, Jun 9, 2017 at 6:03 AM, Paul Belanger  wrote:

> On Fri, Jun 09, 2017 at 05:20:03AM -0700, Kevin Benton wrote:
> > This was an intentional decision. One of the goals of OpenStack is to
> > provide consistency across different clouds and configurable defaults for
> > new tenants default rules hurts consistency.
> >
> > If I write a script to boot up a workload on one OpenStack cloud that
> > allows everything by default and it doesn't work on another cloud that
> > doesn't allow everything by default, that leads to a pretty bad user
> > experience. I would now need logic to scan all of the existing security
> > group rules and do a diff between what I want and what is there and have
> > logic to resolve the difference.
> >
> FWIW: While that argument is valid, the reality is that every cloud provider
> runs a different version of the operating system you boot your workload on,
> so it is pretty much assumed that every cloud is different out of the box.
>
> What we do now in openstack-infra is place the expected cloud
> configuration[2] in ansible-role-cloud-launcher[1] and run ansible against
> the cloud. This has been one of the ways we ensure consistency between
> clouds. Bonus points: we build and upload images daily to ensure our
> workloads are also the same.
>
> [1] http://git.openstack.org/cgit/openstack/ansible-role-cloud-launcher
> [2] http://git.openstack.org/cgit/openstack-infra/system-config/
> tree/playbooks/clouds_layouts.yml
>
> > It's a backwards-incompatible change so we'll probably be stuck with the
> > current behavior.
> >
> >
> > On Fri, Jun 9, 2017 at 2:27 AM, Ahmed Mostafa wrote:
> >
> > > I believe there is no feature implemented in neutron that allows
> > > changing the rules for the default security group.
> > >
> > > I am also interested in seeing such a feature implemented.
> > >
> > > I see only this blueprint :
> > >
> > > https://blueprints.launchpad.net/neutron/+spec/default-rules-for-default-security-group
> > >
> > > But no work has been done on it so far.
> > >
> > >
> > >
> > > On Fri, Jun 9, 2017 at 9:16 AM, Paul Schlacter wrote:
> > >
> > >> I see that the neutron code which adds the default rules is written
> > >> very rigidly: only the IPv4 and IPv6 rules plus two others. What if I
> > >> want to customize the default rules?
> > >>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-09 Thread Chris Dent

On Fri, 9 Jun 2017, Dan Smith wrote:


In other words, I would expect to be able to explain the purpose of the
scheduler as "applies nova-specific logic to the generic resources that
placement says are _valid_, with the goal of determining which one is
_best_".


This sounds great as an explanation. If we can reach this we done good.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-09 Thread Chris Dent

On Fri, 9 Jun 2017, Jay Pipes wrote:


Sorry, been in a three-hour meeting. Comments inline...


Thanks for getting to this, it's very helpful to me.


* Part of the reason for having nested resource providers is because
  it can allow affinity/anti-affinity below the compute node (e.g.,
  workloads on the same host but different numa cells).


Mmm, kinda, yeah.


What I meant by this was that if it didn't matter which of more than
one nested rp was used, then it would be easier to simply consider
the group of them as members of an inventory (that came out a bit
more in one of the later questions).


* Does a claim made in the scheduler need to be complete? Is there
  value in making a partial claim from the scheduler that consumes a
  vcpu and some ram, and then in the resource tracker is corrected
  to consume a specific pci device, numa cell, gpu and/or fpga?
  Would this be better or worse than what we have now? Why?


Good question. I think the answer to this is probably pretty theoretical at 
this point. My gut instinct is that we should treat the consumption of 
resources in an atomic fashion, and that transactional nature of allocation 
will result in fewer race conditions and cleaner code. But, admittedly, this 
is just my gut reaction.


I suppose if we were more spread oriented than pack oriented, an
allocation of vcpu and ram would almost operate as a proxy for a
lock, allowing the later correcting allocation proposed above to be
somewhat safe because other near concurrent emplacements would be
happening on some other machine. But we don't have that reality.
I've always been in favor of making the allocation as early as
possible. I remember those halcyon days when we even thought it
might be possible to make a request and claim of resources in one
HTTP request.


  that makes it difficult or impossible for an allocation against a
  parent provider to be able to determine the correct child
  providers to which to cascade some of the allocation? (And by
  extension make the earlier scheduling decision.)


See above. The sorting/weighing logic, which is very much deployer-defined 
and reeks of customization, is what would need to be added to the placement 
API.


And enough of that sorting/weighing logic likely has to do with child or
shared providers that it's not possible to constrain the weighing
and sorting to solely compute nodes? Not just whether the host is on
fire, but the shared disk farm too?

Okay, thank you, that helps set the stage more clearly and leads
straight to my remaining big question, which is asked on the spec
you've proposed:

https://review.openstack.org/#/c/471927/

What are the broad-strokes mechanisms for connecting the non-allocation
data in the response to GET /allocation_requests to the sorting and
weighing logic? Answering on the spec works fine for me; I'm just
repeating it here in case people following along want to make the
transition over to the spec.

Thanks again.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-09 Thread Dan Smith
>> b) a compute node could very well have both local disk and shared 
>> disk. how would the placement API know which one to pick? This is a
>> sorting/weighing decision and thus is something the scheduler is 
>> responsible for.

> I remember having this discussion, and we concluded that a
> compute node could either have local or shared resources, but not
> both. There would be a trait to indicate shared disk. Has this
> changed?

I've always thought we discussed that one of the benefits of this
approach was that it _could_ have both. Maybe we said "initially we
won't implement stuff so it can have both" but I think the plan has been
that we'd be able to support it.

>>> * We already have the information the filter scheduler needs now
>>>  by some other means, right?  What are the reasons we don't want
>>>  to use that anymore?
>> 
>> The filter scheduler has most of the information, yes. What it 
>> doesn't have is the *identifier* (UUID) for things like SRIOV PFs 
>> or NUMA cells that the Placement API will use to distinguish 
>> between things. In other words, the filter scheduler currently does
>> things like unpack a NUMATopology object into memory and determine
>> a NUMA cell to place an instance to. However, it has no concept
>> that that NUMA cell is (or will soon be once 
>> nested-resource-providers is done) a resource provider in the 
>> placement API. Same for SRIOV PFs. Same for VGPUs. Same for FPGAs,
>>  etc. That's why we need to return information to the scheduler 
>> from the placement API that will allow the scheduler to understand 
>> "hey, this NUMA cell on compute node X is resource provider 
>> $UUID".

Why shouldn't scheduler know those relationships? You were the one (well
one of them :P) that specifically wanted to teach the nova scheduler to
be in the business of arranging and making claims (allocations) against
placement before returning. Why should some parts of the scheduler know
about resource providers, but not others? And, how would scheduler be
able to make the proper decisions (which require knowledge of
hierarchical relationships) without that knowledge? I'm sure I'm missing
something obvious, so please correct me.

IMHO, the scheduler should eventually evolve into a thing that mostly
deals in the currency of placement, translating those into nova concepts
where needed to avoid placement having to know anything about them.
In other words, I would expect to be able to explain the purpose of the
scheduler as "applies nova-specific logic to the generic resources that
placement says are _valid_, with the goal of determining which one is
_best_".

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pike m2 has been released

2017-06-09 Thread Emilien Macchi
On Fri, Jun 9, 2017 at 5:01 PM, Ben Nemec  wrote:
> Hmm, I was expecting an instack-undercloud release as part of m2.  Is there
> a reason we didn't do that?

You just released a new tag (https://review.openstack.org/#/c/471066/)
with a new release model, so why would we release m2? If you want
it, I think we can still do it on Monday.

> On 06/08/2017 03:47 PM, Emilien Macchi wrote:
>>
>> We have a new release of TripleO, pike milestone 2.
>> All bugs targeted on Pike-2 have been moved into Pike-3.
>>
>> I'll take care of moving the blueprints into Pike-3.
>>
>> Some numbers:
>> Blueprints: 3 Unknown, 18 Not started, 14 Started, 3 Slow progress, 11
>> Good progress, 9 Needs Code Review, 7 Implemented
>> Bugs: 197 Fix Released
>>
>> Thanks everyone!
>>
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-09 Thread Ed Leafe
On Jun 9, 2017, at 4:35 PM, Jay Pipes  wrote:

>> We can declare that allocating for shared disk is fairly deterministic
>> if we assume that any given compute node is only associated with one
>> shared disk provider.
> 
> a) we can't assume that
> b) a compute node could very well have both local disk and shared disk. how 
> would the placement API know which one to pick? This is a sorting/weighing 
> decision and thus is something the scheduler is responsible for.

I remember having this discussion, and we concluded that a compute node could 
either have local or shared resources, but not both. There would be a trait to 
indicate shared disk. Has this changed?

>> * We already have the information the filter scheduler needs now by
>>  some other means, right?  What are the reasons we don't want to
>>  use that anymore?
> 
> The filter scheduler has most of the information, yes. What it doesn't have 
> is the *identifier* (UUID) for things like SRIOV PFs or NUMA cells that the 
> Placement API will use to distinguish between things. In other words, the 
> filter scheduler currently does things like unpack a NUMATopology object into 
> memory and determine a NUMA cell to place an instance to. However, it has no 
> concept that that NUMA cell is (or will soon be once 
> nested-resource-providers is done) a resource provider in the placement API. 
> Same for SRIOV PFs. Same for VGPUs. Same for FPGAs, etc. That's why we need 
> to return information to the scheduler from the placement API that will allow 
> the scheduler to understand "hey, this NUMA cell on compute node X is 
> resource provider $UUID".

I guess that this was the point that confused me. The RP uuid is part of the 
provider: the compute node's uuid, and (after 
https://review.openstack.org/#/c/469147/ merges) the PCI device's uuid. So in 
the code that passes the PCI device information to the scheduler, we could add 
that new uuid field, and then the scheduler would have the information to a) 
select the best fit and then b) claim it with the specific uuid. Same for all 
the other nested/shared devices.

I don't mean to belabor this, but to my mind this seems a lot less disruptive 
to the existing code.


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-09 Thread Jay Pipes

Sorry, been in a three-hour meeting. Comments inline...

On 06/06/2017 10:56 AM, Chris Dent wrote:

On Mon, 5 Jun 2017, Ed Leafe wrote:


One proposal is to essentially use the same logic in placement
that was used to include that host in those matching the
requirements. In other words, when it tries to allocate the amount
of disk, it would determine that that host is in a shared storage
aggregate, and be smart enough to allocate against that provider.
This was referred to in our discussion as "Plan A".


What would help for me is greater explanation of if and if so, how and
why, "Plan A" doesn't work for nested resource providers.


We'd have to add all the sorting/weighing logic from the existing 
scheduler into the Placement API. Otherwise, the Placement API won't 
understand which child provider to pick out of many providers that meet 
resource/trait requirements.



We can declare that allocating for shared disk is fairly deterministic
if we assume that any given compute node is only associated with one
shared disk provider.


a) we can't assume that
b) a compute node could very well have both local disk and shared disk. 
how would the placement API know which one to pick? This is a 
sorting/weighing decision and thus is something the scheduler is 
responsible for.



My understanding is this determinism is not the case with nested
resource providers because there's some fairly late in the game
choosing of which pci device or which numa cell is getting used.
The existing resource tracking doesn't have this problem because the
claim of those resources is made very late in the game. <- Is this
correct?


No, it's not about determinism or how late in the game a claim decision 
is made. It's really just that the scheduler is the thing that does 
sorting/weighing, not the placement API. We made this decision due to 
the operator feedback that they were not willing to give up their 
ability to add custom weighers and be able to have scheduling policies 
that rely on transient data like thermal metrics collection.



The problem comes into play when we want to claim from the scheduler
(or conductor). Additional information is required to choose which
child providers to use. <- Is this correct?


Correct.


Plan B overcomes the information deficit by including more
information in the response from placement (as straw-manned in the
etherpad [1]) allowing code in the filter scheduler to make accurate
claims. <- Is this correct?


Partly, yes. But, more than anything it's about the placement API 
returning resource provider UUIDs for child providers and sharing 
providers so that the scheduler, when it picks one of those SRIOV 
physical functions, or NUMA cells, or shared storage pools, has the 
identifier with which to tell the placement API "ok, claim *this* 
resource against *this* provider".
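
As a rough illustration (following the strawman in the etherpad rather than
any settled API -- the field names and values here are hypothetical), the
shape being discussed looks something like this, with provider UUIDs the
scheduler can hand straight back when it claims:

# Hypothetical sketch of the strawman response shape, expressed as a
# Python literal. Identifiers and numbers are made up for illustration.
allocation_candidates = {
    "allocation_requests": [
        {
            # One viable way to satisfy the request: VCPU/RAM from the
            # compute node itself, disk from a sharing provider.
            "allocations": [
                {"resource_provider": {"uuid": "<compute-node-X-uuid>"},
                 "resources": {"VCPU": 2, "MEMORY_MB": 4096}},
                {"resource_provider": {"uuid": "<shared-storage-pool-uuid>"},
                 "resources": {"DISK_GB": 100}},
            ],
        },
        # ... one entry per viable combination of providers ...
    ],
    # Extra, non-allocation detail the scheduler can feed into its
    # sorting/weighing logic.
    "provider_summaries": {
        "<compute-node-X-uuid>": {
            "resources": {"VCPU": {"capacity": 16, "used": 6},
                          "MEMORY_MB": {"capacity": 65536, "used": 24576}},
        },
        "<shared-storage-pool-uuid>": {
            "resources": {"DISK_GB": {"capacity": 2000, "used": 500}},
        },
    },
}

# The scheduler picks the allocation_request it weighs as "best" and sends
# it back (e.g. PUT /allocations/{consumer_uuid}) to claim exactly those
# amounts against exactly those providers.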



* We already have the information the filter scheduler needs now by
  some other means, right?  What are the reasons we don't want to
  use that anymore?


The filter scheduler has most of the information, yes. What it doesn't 
have is the *identifier* (UUID) for things like SRIOV PFs or NUMA cells 
that the Placement API will use to distinguish between things. In other 
words, the filter scheduler currently does things like unpack a 
NUMATopology object into memory and determine a NUMA cell to place an 
instance to. However, it has no concept that that NUMA cell is (or will 
soon be once nested-resource-providers is done) a resource provider in 
the placement API. Same for SRIOV PFs. Same for VGPUs. Same for FPGAs, 
etc. That's why we need to return information to the scheduler from the 
placement API that will allow the scheduler to understand "hey, this 
NUMA cell on compute node X is resource provider $UUID".



* Part of the reason for having nested resource providers is because
  it can allow affinity/anti-affinity below the compute node (e.g.,
  workloads on the same host but different numa cells).


Mmm, kinda, yeah.

If I remember correctly, the modelling and tracking of this kind of
information in this way comes out of the time when we imagined the
placement service would be doing considerably more filtering than
is planned now. Plan B appears to be an acknowledgement of "on
some of this stuff, we can't actually do anything but provide you
some info, you need to decide".


Not really. Filtering is still going to be done in the placement API. 
It's the thing that says "hey, these providers (or trees of providers) 
meet these resource and trait requirements". The scheduler however is 
what takes that set of filtered providers and does its sorting/weighing 
magic and selects one.


If that's the case, is the topological modelling on the placement DB
side of things solely a convenient place to store information? If
there were some other way to model that topology could things
currently being considered for modelling as nested providers be
instead simply modelled as inventories of a 

Re: [openstack-dev] [release][barbican][congress][designate][neutron][zaqar] missing pike-2 milestone releases

2017-06-09 Thread Armando M.
On 9 June 2017 at 06:36, Doug Hellmann  wrote:

> We have several projects with deliverables following the
> cycle-with-milestones release model without pike 2 releases. Please
> check the list below and prepare those release requests as soon as
> possible. Remember that this milestone is date-based, not feature-based,
> so unless your gate is completely broken there is no reason to wait to
> tag the milestone.
>
> Doug
>
> barbican
> congress
> designate-dashboard
> designate
> networking-bagpipe
> networking-bgpvpn
> networking-midonet
> networking-odl
> networking-ovn
> networking-sfc
> neutron-dynamic-routing
> neutron-fwaas
> neutron
>

I was waiting on the PTL ack [1]; we should be good to go now.

[1] https://review.openstack.org/#/c/471414/


> zaqar-ui
> zaqar
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-09 Thread Mike Bayer



On 06/09/2017 11:12 AM, Lance Bragstad wrote:



I should have clarified. The idea was to put the keys used to encrypt 
and decrypt the tokens in etcd so that synchronizing the repository 
across a cluster for keystone nodes is easier for operators (but not 
without other operator pain as Kevin pointed out). The tokens themselves 
will remain completely non-persistent. Fernet key creation is explicitly 
controlled by operators and isn't something that end users generate.


Makes sense, and I agree it's entirely appropriate. Thanks!





[0] 
https://github.com/openstack/keystone/blob/c528539879e824b8e6d5654292a85ccbee6dcf89/keystone/conf/fernet_tokens.py#L44-L54

[1] https://launchpad.net/bugs/1649616
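
For illustration, key distribution along those lines could be as small as
the following sketch, assuming the python-etcd3 client; the key paths and
flow are placeholders, not what keystone actually implements:

# Hypothetical sketch: publishing fernet keys to etcd so every keystone
# node can pull the same repository. Paths and flow are illustrative only.
import etcd3

client = etcd3.client(host="etcd.example.com", port=2379)

def publish_keys(key_repository):
    # Operator-driven: push the current keys, e.g. {0: b"...", 1: b"..."}.
    for index, key in key_repository.items():
        client.put("/keystone/fernet-keys/%d" % index, key)

def fetch_keys():
    # Each keystone node: pull the keys before (re)writing its local
    # key repository on disk.
    keys = {}
    for value, metadata in client.get_prefix("/keystone/fernet-keys/"):
        index = int(metadata.key.decode().rsplit("/", 1)[-1])
        keys[index] = value
    return keys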






On Thu, Jun 8, 2017 at 11:37 AM, Mike Bayer wrote:

> On 06/08/2017 12:47 AM, Joshua Harlow wrote:
>
>> So just out of curiosity, but do people really even know what
>> etcd is good for? I am thinking that there should be some
>> guidance from folks in the community as to where etcd should be
>> used and where it shouldn't (otherwise we just all end up in a
>> mess).
>
> So far I've seen a proposal of etcd3 as a replacement for memcached
> in keystone, and a new dogpile connector was added to oslo.cache to
> handle referring to etcd3 as a cache backend.  This is a really
> simplistic / minimal kind of use case for a key-store.
>
> But, keeping in mind I don't know anything about etcd3 other than
> "it's another key-store", it's the only database used by Kubernetes
> as a whole, which suggests it's doing a better job than Redis in
> terms of "durable".  So I wouldn't be surprised if new / existing
> openstack applications express some gravitational pull towards using
> it as their own datastore as well.  I'll be trying to hang onto
> the etcd3 track as much as possible so that if/when that happens I
> still have a job :).
>
>> Perhaps a good idea to actually give examples of how it should
>> be used, how it shouldn't be used, what it offers, what it
>> doesn't... Or at least provide links for people to read up on this.
>>
>> Thoughts?
>>
>> Davanum Srinivas wrote:
>>
>>> One clarification: Since https://pypi.python.org/pypi/etcd3gw
>>> just uses the HTTP API (/v3alpha) it will work under both
>>> eventlet and non-eventlet environments.
>>>
>>> Thanks,
>>> Dims
>>>
>>> On Wed, Jun 7, 2017 at 6:47 AM, Davanum Srinivas wrote:
>>>
>>>> Team,
>>>>
>>>> Here's the update to the base services resolution from the TC:
>>>> https://governance.openstack.org/tc/reference/base-services.html
>>>>
>>>> First request is to Distros, Packagers, Deployers, anyone who
>>>> installs/configures OpenStack:
>>>> Please make sure you have latest etcd 3.x available in your
>>>> environment for Services to use. Fedora already does; we need
>>>> help in making sure all distros and architectures are covered.
>>>>
>>>> Any project who wants to use the etcd v3 API via grpc, please use:
>>>> https://pypi.python.org/pypi/etcd3 (works only for
>>>> non-eventlet services)
>>>>
>>>> Those that depend on eventlet, please use the etcd3
>>>> v3alpha HTTP API using:
>>>> https://pypi.python.org/pypi/etcd3gw
>>>>
>>>> If you use 

[openstack-dev] [tc][qa][all]Potential New Interoperability Programs: The Current Thinking

2017-06-09 Thread Mark Voelker
Hi Everyone,

Happy Friday!  There have been a number of discussions (at the PTG, at 
OpenStack Summit, in Interop WG and Board of Directors meetings, etc) over the 
past several months about the possibility of creating new interoperability 
programs in addition to the existing OpenStack Powered program administered by 
the Interop Working Group (formerly the DefCore Committee).  In particular, 
lately there have been a lot of discussions [1] about where to put 
tests associated with trademark programs with respect to some existing TC 
guidance [2] and community goals for Queens [3].  Although these potential new 
programs have been discussed in a number of places, it’s a little hard to keep 
tabs on where we’re at with them unless you’re actively following the Interop 
WG.  Given the recent discussions on openstack-dev, I thought it might be 
useful to try and brain dump our current thinking on what these new programs 
might look like into a document somewhere that people could point at in 
discussions rather than discussing abstracts and working off memories from 
prior meetings.  To that end, I took a first stab at it this week which you can 
find here:

https://review.openstack.org/#/c/472785/

Needless to say this is just a draft to try to get some of the ideas out of 
neurons and on to electrons, so please don’t take it to be firm 
consensus—rather consider it a draft of what we’re currently thinking and an 
invitation to collaborate.  I expect that other members of the Interop Working 
Group will be leaving comments in Gerrit as we hash through this, and we’d love 
to have input from other folks in the community as well.  These programs 
potentially touch a lot of you (in fact, almost all of you) in some way or 
another, so we’re happy to hear your input as we work on evolving the interop 
programs.  Quite a lot has happened over the past couple of years, so we hope 
this will help folks understand where we came from and think about whether we 
want to make changes going forward.  

By the way, for those of you who might find an HTML-rendered document easier to 
read, click on the "gate-interop-docs-ubuntu-xenial” link in the comments left 
by Jenkins and then on “Extension Programs - Current Direction”.  Thanks, and 
have a great weekend!

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#117657
[2] 
https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html
[3] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html

At Your Service,

Mark T. Voelker
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-09 Thread Melvin Hillsman
...

No words can express, will try to keep in touch, and congratulations on
your new adventure sir! Continue to be a great influence and valued member
of your new team.

On Thu, Jun 8, 2017 at 7:45 AM, Jim Rollenhagen 
wrote:

> Hey friends,
>
> I've been mostly missing for the past six weeks while looking for a new
> job, so maybe you've forgotten me already, maybe not. I'm happy to tell you
> I've found one that I think is a great opportunity for me. But, I'm sad to
> tell you that it's totally outside of the OpenStack community.
>
> The last 3.5 years have been amazing. I'm extremely grateful that I've
> been able to work in this community - I've learned so much and met so many
> awesome people. I'm going to miss the insane(ly awesome) level of
> collaboration, the summits, the PTGs, and even some of the bikeshedding.
> We've built amazing things together, and I'm sure y'all will continue to do
> so without me.
>
> I'll still be lurking in #openstack-dev and #openstack-ironic for a while,
> if people need me to drop a -2 or dictate old knowledge or whatever, feel
> free to ping me. Or if you just want to chat. :)
>
> <3 jroll
>
> P.S. obviously my core permissions should be dropped now :P
>


-- 
-- 
Kind regards,

Melvin Hillsman
mrhills...@gmail.com
mobile: (832) 264-2646

Learner | Ideation | Belief | Responsibility | Command
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Making stack outputs static

2017-06-09 Thread Zane Bitter
History lesson: a long, long time ago we made a very big mistake. We 
treated stack outputs as things that would be resolved dynamically when 
you requested them, instead of having values fixed at the time the 
template was created or updated. This makes performance of reading 
outputs slow, especially for e.g. large stacks, because it requires 
making ReST calls, and it can result in inconsistencies between Heat's 
internal model of the world and what it actually outputs.


As unfortunate as this is, it's difficult to change the behaviour and be 
certain that no existing users will get broken. For that reason, this 
issue has never been addressed. Now is the time to address it.


Here's the tracker bug: https://bugs.launchpad.net/heat/+bug/1660831

It turns out that the correct fix is to store the attributes of a 
resource in the DB - this accounts for the fact that outputs may contain 
attributes of multiple resources, and that these resources might get 
updated at different times. It also solves a related consistency issue, 
which is that during a stack update a resource that is not updated may 
nevertheless report new attribute values, and thus cause things 
downstream to be updated, or to fail, unexpectedly (e.g. 
https://bugzilla.redhat.com/show_bug.cgi?id=1430753#c13).


The proposal[1] is to make this change in Pike for convergence stacks 
only. This is to allow some warning for existing users who might be 
relying on the current behaviour - at least if they control their own 
cloud then they can opt to keep convergence disabled, and even once they 
opt to enable it for new stacks they can keep using existing stacks in 
legacy mode until they are ready to convert them to convergence or 
replace them. In addition, it avoids the difficulty of trying to get 
consistency out of the legacy path's crazy backup-stack shenanigans - 
there's basically no way to get the outputs to behave in exactly the 
same way in the legacy path as they will in convergence.


This topic was raised at the Forum, and there was some feedback that:

1) There are users who require the old behaviour even after they move to 
convergence.
2) Specifically, there are users who don't have public API endpoints for 
services other than Heat, and who rely on Heat proxying requests to 
other services to get any information at all about their resources o.O
3) There are users still using the legacy path (*cough*TripleO) that 
want the performance benefits of quick output resolution.


The suggestion is that instead of tying the change to the convergence 
flag, we should make it configurable by the user on a per-stack basis.


I am vehemently opposed to this suggestion.

It's a total cop-out to make the user decide. The existing behaviour is 
clearly buggy and inconsistent. Users are not, and should not have to 
be, sufficiently steeped in the inner workings of Heat to be able to 
decide whether and when to subject themselves to random inconsistencies 
and hope for the best. If we make the change the default then we'll 
still break people, and if we don't we'll still be saying "OMG, you 
forgot to enable the --not-suck option??!" 10 years from now.


Instead, this is what I'm proposing as the solution to the above feedback:

1) The 'show' attribute of each resource will be marked CACHE_NONE[2] 
(see the sketch after this list). This ensures that the live data is 
always available via this attribute.
2) When showing a resource's attributes via the API (as opposed to 
referencing them from within a template), always return live values.[3] 
Since we only store the attribute values that are actually referenced in 
the template anyway, we more or less have to do this if we want the 
attributes output through this API to be consistent with each other.
3) Move to convergence. Seriously, the memory and database usage are 
much improved, and there are even more memory improvements in the 
pipeline,[4] and they might even get merged in Pike as long as we don't 
have to stop and reimplement the attribute storage patches that they 
depend on. If TripleO were to move to convergence in Queens, which I 
believe is 100% feasible, then it would get the performance improvements 
at least as soon as it would if we tried to implement attribute storage 
in the legacy path.
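
For point 1, the change is roughly of the following shape (an illustrative 
sketch against the attribute schema API, not the actual patch in [2]):

# Sketch only: how a resource's attribute schema marks an attribute as
# uncached, so it is always resolved live rather than stored in the DB.
from heat.engine import attributes
from heat.engine import resource


class ExampleResource(resource.Resource):
    attributes_schema = {
        'show': attributes.Schema(
            'Live, detailed information about the resource.',
            # CACHE_NONE: never persist this attribute's value; resolve
            # it against the live resource on every access.
            cache_mode=attributes.Schema.CACHE_NONE,
        ),
    }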


Is anyone still dissatisfied? Speak now or... you know the drill.

cheers,
Zane.

[1] 
https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bug/1660831

[2] https://review.openstack.org/#/c/422983/33/heat/engine/resource.py
[3] https://review.openstack.org/472501
[4] 
https://review.openstack.org/#/q/status:open+project:openstack/heat+topic:bp/stack-definition


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] swift3 Plugin Development

2017-06-09 Thread Pete Zaitcev
On Fri, 9 Jun 2017 10:37:15 +0530
Niels de Vos  wrote:

> > > we are looking for S3 plugin with ACLS so that we can integrate gluster 
> > > with that.
> > 
> > Did you look into porting Ceph RGW on top of Gluster?
> 
> This is one of the longer term options that we have under consideration.
> I am very interested in your reasons to suggest it, care to elaborate a
> little?

RGW seems like the least worst starting point in terms of the end
result you're likely to get.

The swift3 does a good job for us in OpenStack Swift, providing a degree
of compatibility with S3. When Kota et al. took over from Tomo, they revived
the development successfully. However, it remains fundamentally limited in
what it does, and its main function is to massage S3 to fit it on top
of Swift. If you place it in front of Gluster, you're saddled with
this fundamental incompatibility, unless you fork swift3 and rework it
beyond recognition.

In addition, surely you realize that swift3 is only a shim and you need
to have an object store to back it. Do you even have one in Gluster?

Fedora used to ship a self-contained S3 store "tabled", so unlike swift3
it's complete. It's written in C, so may be better compatible with Gluster's
development environment. However, it was out of development for years and
it only supports canned ACL. You aren't getting the full ACLs with it that
you're after.

The RGW gives you all that. It's well-compatible with S3, because S3 is
its native API (with the Swift API being grafted on). Yehuda and crew maintain
good compatibility. Yes, it's in C++, but the dialect is reasonable.
The worst downside is, yes, it's wedded to Ceph's RADOS and you need
a major surgery to place it on top of Gluster. Nonetheless, it seems like
a better defined task to me than trying to maintain your own webserver,
which you must do if you select swift3.

There are still some parts of RGW which will give you trouble. In particular,
it uses loadable classes, which run in the context of Ceph OSD. There's no
place in Gluster to run them. You may have to drag parts of OSD into the
project. But I didn't look closely enough to determine the feasibility.

In your shoes, I'd talk to Yehuda about this. He knows the problem domain
exceptionally and will give you a good advice, even though you're a
competitor in Open Source in general. Kinda like I do now :-)

Cheers,
-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [forum] Future of Stackalytics

2017-06-09 Thread Mike Perez
On June 8, 2017 at 07:20:23, Jeremy Stanley (fu...@yuggoth.org) wrote:
> On 2017-06-07 16:36:45 -0700 (-0700), Ken'ichi Ohmichi wrote:
> [...]
> > one of the config files is 30K lines due to so much user information
> > and that makes the maintenance hard now. I am trying to separate the
> > user part from the existing file but I cannot find a way to reach
> > consensus on such a thing.
>
> There is a foundation member directory API now which provides
> affiliation details and history, so if it were my project (it's not
> though) I'd switch to querying that and delete all the static
> affiliation mapping out of that config instead. Not only would it
> significantly reduce the reviewer load for Stackalytics, but it
> would also provide a greater incentive for contributors to keep
> their affiliation data updated in the foundation member directory.

+1. This would really help me when generating the stats for our yearly
reports/keynotes/etc. instead of having to query multiple sources and
figure out which one is more current.

—
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Action Items WG Chairs: Requesting your input to a cross Working Group session

2017-06-09 Thread MCCABE, JAMEY A
Per the request below, the most widely supported time for a cross Working Group 
status meeting seems to be Wednesdays at 0500 UTC.  We will bring this to the 
UC meeting on Monday.  The proposal is that the first UC meeting with WG status 
would be Wednesday 6/21 at 0500 UTC (Tuesday late evening US time).

From: MCCABE, JAMEY A
Sent: Wednesday, May 31, 2017 12:11 PM
To: 'user-commit...@lists.openstack.org' ; 
'openstack-operat...@lists.openstack.org' 
; 'openstack-dev@lists.openstack.org.' 

Subject: Action Items WG Chairs: Requesting your input to a cross Working Group 
session

Working group (WG) chairs or delegates, please enter your name (and WG name) 
and what times you could meet at this poll: 
https://beta.doodle.com/poll/6k36zgre9ttciwqz#table

As back ground and to share progress:

  *   We started and generally confirmed the desire to have a regular cross WG 
status meeting at the Boston Summit.
  *   Specifically the groups interested in Telco NFV and Fog Edge agreed to 
collaborate more often and in a more organized fashion.
  *   In e-mails and then in today’s Operators Telco/NFV meeting we finalized a 
proposal to have all WGs meet for high level status monthly and to bring the 
collaboration back to our individual WG sessions.
  *   the User Committee sessions are appropriate for the Monthly WG Status 
meeting
  *   more detailed coordination across Telco/NFV and Fog Edge groups should 
take place in the Operators Telco NFV WG meetings which already occur every 2 
weeks.
  *   we need participation of each WG Chair (or a delegate)
  *   we welcome and request the OPNFV and Linux Foundation and other WGs to 
join us in the cross WG status meetings

The Doodle was setup to gain concurrence for a time of week in which we could 
schedule and is not intended to be for a specific week.

Jamey McCabe – AT&T Integrated Cloud - jm6819 - mobile if needed 
847-496-1176


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-09 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2017-06-09 16:52:25 +:
> On Fri, Jun 9, 2017 at 11:30 AM Britt Houser (bhouser) 
> wrote:
> 
> > How does confd run inside the container?  Does this mean we’d need some
> > kind of systemd in every container which would spawn both confd and the
> > real service?  That seems like a very large architectural change.  But
> > maybe I’m misunderstanding it.
> >
> >
> Copying part of my reply to Doug's email:
> 
> 1. Run confd + openstack service inside the container. My concern in this
> case
> would be that we'd have to run 2 services inside the container and structure
> things in a way we can monitor both services and make sure they are both
> running. Nothing impossible but one more thing to do.
> 
> 2. Run confd `-onetime` and then run the openstack service.
> 
> 
> In either case, we could run confd as part of the entrypoint and have it run
> in
> background for the case #1 or just run it sequentially for case #2.

I think all of this is moot unless we can solve the case where we don't
know in advance of the deployment what settings to tell confd to look at
(what I've been calling the "cinder case", since that's where I saw it
come up first).

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-09 Thread Paul Belanger
On Fri, Jun 09, 2017 at 04:52:25PM +, Flavio Percoco wrote:
> On Fri, Jun 9, 2017 at 11:30 AM Britt Houser (bhouser) 
> wrote:
> 
> > How does confd run inside the container?  Does this mean we’d need some
> > kind of systemd in every container which would spawn both confd and the
> > real service?  That seems like a very large architectural change.  But
> > maybe I’m misunderstanding it.
> >
> >
> Copying part of my reply to Doug's email:
> 
> 1. Run confd + openstack service inside the container. My concern in this
> case
> would be that we'd have to run 2 services inside the container and structure
> things in a way we can monitor both services and make sure they are both
> running. Nothing impossible but one more thing to do.
> 
> 2. Run confd `-onetime` and then run the openstack service.
> 
> 
> In either case, we could run confd as part of the entrypoint and have it run
> in
> background for the case #1 or just run it sequentially for case #2.
> 
Both approaches are valid; it all depends on your use case.  I suspect in the
case of openstack, you'll be running 2 daemons in your containers. Otherwise,
with -onetime you'll need to launch new containers on each config change.

> 
> > Thx,
> > britt
> >
> > On 6/9/17, 9:04 AM, "Doug Hellmann"  wrote:
> >
> > Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:
> >
> > > Unless I'm missing something, to use confd with an OpenStack
> > deployment on
> > > k8s, we'll have to do something like this:
> > >
> > > * Deploy confd in every node where we may want to run a pod
> > (basically
> > > wvery node)
> >
> > Oh, no, no. That's not how it works at all.
> >
> confd runs *inside* the containers. Its input files and command line
> > arguments tell it how to watch for the settings to be used just for
> > that
> > one container instance. It does all of its work (reading templates,
> > watching settings, HUPing services, etc.) from inside the container.
> >
> > The only inputs confd needs from outside of the container are the
> > connection information to get to etcd. Everything else can be put
> > in the system package for the application.
> >
> > Doug
> >
> >


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-09 Thread Doug Hellmann
Excerpts from Alex Schultz's message of 2017-06-09 10:54:16 -0600:
> I ran into a case where I wanted to add python-tripleoclient to
> test-requirements for tripleo-heat-templates but it's not in the
> global requirements. In looking into adding this, I noticed that
> python-tripleoclient and tripleo-common are not
> cycle-with-intermediary either. Should/can we update these as well?
> tripleo-common is already in the global requirements but I guess since
> we've been releasing non-prerelease versions fairly regularly with the
> milestones it hasn't been a problem.

Yes, let's get all of the tripleo team's libraries onto the
cycle-with-intermediary release model.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-09 Thread Flavio Percoco
(sorry if duplicate, having troubles with email)

Hi Team,

I've been working a bit with the Glance team and trying to help where I can,
and I can't help but be worried about the critical status of the Glance team.
Unfortunately, the number of participants in the Glance team has been
reduced a
lot resulting in the project not being able to keep up with the goals, the
reviews required, etc.[0]

I've always said that Glance is one of those critical projects that not many
people notice until it breaks. It's in every OpenStack cloud sitting in a
corner
and allowing for VMs to be booted. So, before things get even worse, I'd
like us to brainstorm a bit on what solutions/options we have now.

I know Glance is not the only project "suffering" from lack of contributors
but
I don't want us to get to the point where there won't be contributors left.

How do people feel about adding Glance to the list of "help wanted" areas of
interest?

Would it be possible to get help w/ reviews from folks from teams like
nova/cinder/keystone? Any help is welcomed, of course, but I'm trying to
think
about teams that may be familiar with the Glance code/api already.

Cheers,
Flavio

[0] http://stackalytics.com/?module=glance-group&metric=marks
[1] https://review.openstack.org/#/c/466684/

--
@flaper87
Flavio Percoco
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-09 Thread Monty Taylor

On 06/08/2017 07:45 AM, Jim Rollenhagen wrote:

Hey friends,

I've been mostly missing for the past six weeks while looking for a new 
job, so maybe you've forgotten me already, maybe not. I'm happy to tell 
you I've found one that I think is a great opportunity for me. But, I'm 
sad to tell you that it's totally outside of the OpenStack community.


The last 3.5 years have been amazing. I'm extremely grateful that I've 
been able to work in this community - I've learned so much and met so 
many awesome people. I'm going to miss the insane(ly awesome) level of 
collaboration, the summits, the PTGs, and even some of the bikeshedding. 
We've built amazing things together, and I'm sure y'all will continue to 
do so without me.


I'll still be lurking in #openstack-dev and #openstack-ironic for a 
while, if people need me to drop a -2 or dictate old knowledge or 
whatever, feel free to ping me. Or if you just want to chat. :)


I'm sad to see you go. You will definitely be missed.

Thank you for all of your amazing hard work over the last 3.5 years, and 
good luck with your next adventure!


Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-09 Thread Alex Schultz
On Tue, May 30, 2017 at 3:08 PM, Emilien Macchi  wrote:
> On Tue, May 30, 2017 at 8:36 PM, Matthew Thode
>  wrote:
>> We have a problem in requirements that projects that don't have the
>> cycle-with-intermediary release model (most of the cycle-with-milestones
>> model) don't get integrated with requirements until the cycle is fully
>> done.  This causes a few problems.
>>
>> * These projects don't produce a consumable release for requirements
>> until end of cycle (which does not accept beta releases).
>>
>> * The former causes old requirements to be kept in place, meaning caps,
>> exclusions, etc. are being kept, which can cause conflicts.
>>
>> * Keeping the old version in requirements means that cross dependencies
>> are not tested with updated versions.
>>
>> This has hit us with the mistral and tripleo projects particularly
>> (tagged in the title).  They disallow pbr-3.0.0 and in the case of
>> mistral sqlalchemy updates.
>>
>> [mistral]
>> mistral - blocking sqlalchemy - milestones
>>
>> [tripleo]
>> os-refresh-config - blocking pbr - milestones
>> os-apply-config - blocking pbr - milestones
>> os-collect-config - blocking pbr - milestones
>
> These are cycle-with-milestones, like os-net-config for example,
> which wasn't mentioned in this email. It has the same releases as
> os-net-config also, so I'm confused why these 3 cause an issue; I
> probably missed something.
>
> Anyway, I'm happy to change os-*-config (from TripleO) to be
> cycle-with-intermediary. Quick question though, which tag would you
> like to see, regarding what we already did for pike-1?
>

I ran into a case where I wanted to add python-tripleoclient to
test-requirements for tripleo-heat-templates but it's not in the
global requirements. In looking into adding this, I noticed that
python-tripleoclient and tripleo-common are not
cycle-with-intermediary either. Should/can we update these as well?
tripleo-common is already in the global requirements but I guess since
we've been releasing non-prerelease versions fairly regularly with the
milestones it hasn't been a problem.

Thanks,
-Alex

> Thanks,
>
>> [nova]
>> os-vif - blocking pbr - intermediary
>>
>> [horizon]
>> django-openstack-auth - blocking django - intermediary
>>
>>
>> So, here's what needs doing.
>>
>> Those projects that are already using the cycle-with-intermediary model
>> should just do a release.
>>
>> For those that are using cycle-with-milestones, you will need to change
>> to the cycle-with-intermediary model, and do a full release, both can be
>> done at the same time.
>>
>> If anyone has any questions or wants clarifications this thread is good,
>> or I'm on irc as prometheanfire in the #openstack-requirements channel.
>>
>> --
>> Matthew Thode (prometheanfire)
>>
>>
>
>
>
> --
> Emilien Macchi
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-09 Thread Flavio Percoco
On Fri, Jun 9, 2017 at 11:30 AM Britt Houser (bhouser) 
wrote:

> How does confd run inside the container?  Does this mean we’d need some
> kind of systemd in every container which would spawn both confd and the
> real service?  That seems like a very large architectural change.  But
> maybe I’m misunderstanding it.
>
>
Copying part of my reply to Doug's email:

1. Run confd + openstack service inside the container. My concern in this
case
would be that we'd have to run 2 services inside the container and structure
things in a way we can monitor both services and make sure they are both
running. Nothing impossible but one more thing to do.

2. Run confd `-onetime` and then run the openstack service.


In either case, we could run confd as part of the entrypoint and have it run
in
background for the case #1 or just run it sequentially for case #2.


> Thx,
> britt
>
> On 6/9/17, 9:04 AM, "Doug Hellmann"  wrote:
>
> Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:
>
> > Unless I'm missing something, to use confd with an OpenStack
> deployment on
> > k8s, we'll have to do something like this:
> >
> > * Deploy confd in every node where we may want to run a pod
> (basically
> > wvery node)
>
> Oh, no, no. That's not how it works at all.
>
> confd runs *inside* the containers. Its input files and command line
> arguments tell it how to watch for the settings to be used just for
> that
> one container instance. It does all of its work (reading templates,
> watching settings, HUPing services, etc.) from inside the container.
>
> The only inputs confd needs from outside of the container are the
> connection information to get to etcd. Everything else can be put
> in the system package for the application.
>
> Doug
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-09 Thread Flavio Percoco
On Fri, Jun 9, 2017 at 8:07 AM Doug Hellmann  wrote:

> Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:
>
> > Unless I'm missing something, to use confd with an OpenStack deployment
> on
> > k8s, we'll have to do something like this:
> >
> > * Deploy confd in every node where we may want to run a pod (basically
> > wvery node)
>
> Oh, no, no. That's not how it works at all.
>
> confd runs *inside* the containers. Its input files and command line
> arguments tell it how to watch for the settings to be used just for that
> one container instance. It does all of its work (reading templates,
> watching settings, HUPing services, etc.) from inside the container.
>
> The only inputs confd needs from outside of the container are the
> connection information to get to etcd. Everything else can be put
> in the system package for the application.
>

A-ha, ok! I figured this was another option. In this case I guess we would
have 2 options:

1. Run confd + openstack service inside the container. My concern in this case
would be that we'd have to run 2 services inside the container and structure
things in a way we can monitor both services and make sure they are both
running. Nothing impossible, but one more thing to do.

2. Run confd `-onetime` and then run the openstack service.


Either would work, but #2 means we won't have the config files monitored and the
container would have to be restarted to update the config files.

Thanks, Doug.
Flavio
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Call For Proposals: KVM Forum 2017 [Submission deadline: 15-JUN-2017]

2017-06-09 Thread Kashyap Chamarthy

KVM Forum 2017: Call For Participation
October 25-27, 2017 - Hilton Prague - Prague, Czech Republic

(All submissions must be received before midnight June 15, 2017)
=


KVM Forum is an annual event that presents a rare opportunity
for developers and users to meet, discuss the state of Linux   
virtualization technology, and plan for the challenges ahead.
We invite you to lead part of the discussion by submitting a speaking
proposal for KVM Forum 2017.

At this highly technical conference, developers driving innovation
in the KVM virtualization stack (Linux, KVM, QEMU, libvirt) can
meet users who depend on KVM as part of their offerings, or to
power their data centers and clouds.

KVM Forum will include sessions on the state of the KVM
virtualization stack, planning for the future, and many
opportunities for attendees to collaborate. As we celebrate ten years
of KVM development in the Linux kernel, KVM continues to be a
critical part of the FOSS cloud infrastructure.

This year, KVM Forum is joining Open Source Summit in Prague,
Czech Republic. Selected talks from KVM Forum will be presented on
Wednesday October 25 to the full audience of the Open Source Summit.
Also, attendees of KVM Forum will have access to all of the talks from
Open Source Summit on Wednesday.


===
IMPORTANT DATES
===
Submission deadline: June 15, 2017
Notification: August 10, 2017
Schedule announced: August 17, 2017
Event dates: October 25-27, 2017

http://events.linuxfoundation.org/cfp

Suggested topics:
* Scaling, latency optimizations, performance tuning, real-time guests
* Hardening and security
* New features
* Testing

KVM and the Linux kernel:
* Nested virtualization
* Resource management (CPU, I/O, memory) and scheduling
* VFIO: IOMMU, SR-IOV, virtual GPU, etc.
* Networking: Open vSwitch, XDP, etc.
* virtio and vhost
* Architecture ports and new processor features

QEMU:
* Management interfaces: QOM and QMP
* New devices, new boards, new architectures
* Graphics, desktop virtualization and virtual GPU
* New storage features
* High availability, live migration and fault tolerance
* Emulation and TCG
* Firmware: ACPI, UEFI, coreboot, U-Boot, etc.

Management and infrastructure
* Managing KVM: Libvirt, OpenStack, oVirt, etc.
* Storage: Ceph, Gluster, SPDK, etc.
* Network Function Virtualization: DPDK, OPNFV, OVN, etc.
* Provisioning


===
SUBMITTING YOUR PROPOSAL
===
Abstracts due: June 15, 2017

Please submit a short abstract (~150 words) describing your presentation
proposal. Slots vary in length up to 45 minutes. Also include the proposal
type -- one of:
- technical talk
- end-user talk

Submit your proposal here:
http://events.linuxfoundation.org/cfp
Please only use the categories "presentation" and "panel discussion".

You will receive a notification whether or not your presentation proposal
was accepted by August 10, 2017.

Speakers will receive a complimentary pass for the event. In the case that
your submission has multiple presenters, only the primary speaker for a
proposal will receive a complimentary event pass. For panel discussions, all
panelists will receive a complimentary event pass.

TECHNICAL TALKS

A good technical talk should not just report on what has happened over
the last year; it should present a concrete problem and how it impacts
the user and/or developer community. Whenever applicable, focus on
work that needs to be done, difficulties that haven't yet been solved,
and on decisions that other developers should be aware of. Summarizing
recent developments is okay but it should not be more than a small
portion of the overall talk.

END-USER TALKS

One of the big challenges as developers is to know what, where and how
people actually use our software. We will reserve a few slots for end
users talking about their deployment challenges and achievements.

If you are using KVM in production, you are encouraged to submit a speaking
proposal. Simply mark it as an end-user talk. As an end user, this is a
unique opportunity to get your input to developers.

HANDS-ON / BOF SESSIONS

We will reserve some time for people to get together and discuss
strategic decisions as well as other topics that are best solved within
smaller groups.

These sessions will be announced during the event. If you are interested
in organizing such a session, please add it to the list at

  http://www.linux-kvm.org/page/KVM_Forum_2017_BOF

Let people who you think might be interested know about your BOF, and encourage
them to add their names to the wiki page as well. Please try to
add your ideas to the list before KVM Forum starts.


PANEL DISCUSSIONS

If you are proposing a panel discussion, please make sure that you list
all of your potential panelists in your abstract. We will request full
biographies if a panel is accepted.


===
HOTEL / 

Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-09 Thread Dan Smith
>> My current feeling is that we got ourselves into our existing mess
>> of ugly, convoluted code when we tried to add these complex 
>> relationships into the resource tracker and the scheduler. We set
>> out to create the placement engine to bring some sanity back to how
>> we think about things we need to virtualize.
> 
> Sorry, I completely disagree with your assessment of why the
> placement engine exists. We didn't create it to bring some sanity
> back to how we think about things we need to virtualize. We created
> it to add consistency and structure to the representation of
> resources in the system.
> 
> I don't believe that exposing this structured representation of 
> resources is a bad thing or that it is leaking "implementation
> details" out of the placement API. It's not an implementation detail
> that a resource provider is a child of another or that a different
> resource provider is supplying some resource to a group of other
> providers. That's simply an accurate representation of the underlying
> data structures.

This ^.

With the proposal Jay has up, placement is merely exposing some of its
own data structures to a client that has declared what it wants. The
client has made a request for resources, and placement is returning some
allocations that would be valid. None of them are nova-specific at all
-- they're all data structures that you would pass to and/or retrieve
from placement already.

>> I don't know the answer. I'm hoping that we can have a discussion 
>> that might uncover a clear approach, or, at the very least, one
>> that is less murky than the others.
> 
> I really like Dan's idea of returning a list of HTTP request bodies
> for POST /allocations/{consumer_uuid} calls along with a list of
> provider information that the scheduler can use in its
> sorting/weighing algorithms.
> 
> We've put this straw-man proposal here:
> 
> https://review.openstack.org/#/c/471927/
> 
> I'm hoping to keep the conversation going there.

This is the most clear option that we have, in my opinion. It simplifies
what the scheduler has to do, it simplifies what conductor has to do
during a retry, and it minimizes the amount of work that something else
like cinder would have to do to use placement to schedule resources.
Without this, cinder/neutron/whatever has to know about things like
aggregates and hierarchical relationships between providers in order to
make *any* sane decision about selecting resources. If placement returns
valid options with that stuff figured out, then those services can look
at the bits they care about and make a decision.
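
To make that concrete, each candidate placement hands back would essentially
be a ready-to-send allocation body; a purely illustrative sketch (hypothetical
UUIDs, and the exact schema is what the straw-man spec is iterating on) could
look something like:

    # Purely illustrative: one candidate out of the list placement would
    # return. The UUID strings are placeholders and the schema is only
    # loosely modeled on the existing allocation format.
    candidate = {
        "allocations": [
            {
                "resource_provider": {"uuid": "<compute-node-rp-uuid>"},
                "resources": {"VCPU": 2, "MEMORY_MB": 4096},
            },
            {
                "resource_provider": {"uuid": "<shared-storage-rp-uuid>"},
                "resources": {"DISK_GB": 100},
            },
        ]
    }
    # The scheduler would weigh/sort a list of such candidates and then send
    # the chosen one to POST /allocations/{consumer_uuid} more or less as-is.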

I'd really like us to use the existing strawman spec as a place to
iterate on what that API would look like, assuming we're going to go
that route, and work on actual code in both placement and the scheduler
to use it. I'm hoping that doing so will help clarify whether this is
the right approach or not, and whether there are other gotchas that we
don't yet have on our radar. We're rapidly running out of runway for
pike here and I feel like we've got to get moving on this or we're going
to have to punt. Since several other things depend on this work, we need
to consider the impact to a lot of our pike commitments if we're not
able to get something merged.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-09 Thread Lance Bragstad
On Fri, Jun 9, 2017 at 11:17 AM, Clint Byrum  wrote:

> Excerpts from Lance Bragstad's message of 2017-06-08 16:10:00 -0500:
> > On Thu, Jun 8, 2017 at 3:21 PM, Emilien Macchi 
> wrote:
> >
> > > On Thu, Jun 8, 2017 at 7:34 PM, Lance Bragstad 
> > > wrote:
> > > > After digging into etcd a bit, one place this might help deployer
> > > > experience would be the handling of fernet keys for token encryption
> in
> > > > keystone. Currently, all keys used to encrypt and decrypt tokens are
> > > kept on
> > > > disk for each keystone node in the deployment. While simple, it
> requires
> > > > operators to perform rotation on a single node and then push, or
> sync,
> > > the
> > > > new key set to the rest of the nodes. This must be done in lock step
> in
> > > > order to prevent early token invalidation and inconsistent token
> > > responses.
> > >
> > > This is what we discussed a few months ago :-)
> > >
> > > http://lists.openstack.org/pipermail/openstack-dev/2017-
> March/113943.html
> > >
> > > I'm glad it's coming back ;-)
> > >
> >
> > Yep! I've proposed a pretty basic spec to backlog [0] in an effort to
> > capture the discussion. I've also noted the point Kevin brought up about
> > authorization in etcd (thanks, Kevin!)
> >
> > If someone feels compelled to take that and run with it, they are more
> than
> > welcome to.
> >
> > [0] https://review.openstack.org/#/c/472385/
> >
>
> I commented on the spec. I think this is a misguided idea. etcd3 is a
> _coordination_ service. Not a key manager. It lacks the audit logging
> and access control one expects to protect and manage key material. I'd
> much rather see something like Hashicorp's Vault [1] implemented for
> Fernet keys than etcd3. We even have a library for such things called
> Castellan[2].
>

Great point, and thanks for leaving it in the spec. I'm glad we're getting
this documented since this specific discussion has cropped up a couple
times.


>
> [1] https://www.vaultproject.io/
> [2] https://docs.openstack.org/developer/castellan/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Call for check: is your project ready for pylint 1.7.1?

2017-06-09 Thread Doug Hellmann
Excerpts from Akihiro Motoki's message of 2017-06-09 03:53:34 +0900:
> Hi all,
> 
> Is your project ready for pylint 1.7.1?
> If you use pylint in your pep8 job, it is worth checking.
> 
> Our current version of pylint is 1.4.5, but it is not safe under Python 3.5.
> The global-requirements update was merged once [1].
> However, some projects (at least neutron) are not ready for pylint
> 1.7.1 and it was reverted [2].
> It is reasonable to give individual projects time to cope with pylint 1.7.1.
> 
> I believe bumping the pylint version to 1.7.1 (or later) is the right
> direction in the long term.
> I would suggest making your project ready for pylint 1.7.1 soon (two
> weeks or so?).
> You can disable new rules in pylint 1.7.1 temporarily and clean up
> your code later,
> as neutron does [3]. As far as I checked, most rules are reasonable
> and worth enabling.
> 
> Thanks,
> Akihiro Motoki
> 
> [1] https://review.openstack.org/#/c/470800/
> [2] https://review.openstack.org/#/c/471756/
> [3] https://review.openstack.org/#/c/471763/
> 

I thought we had linters in a list that didn't require the versions
to be synced across projects, to allow projects to update at their
own pace. Did we undo that work?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-09 Thread Britt Houser (bhouser)
How does confd run inside the container?  Does this mean we’d need some kind of 
systemd in every container which would spawn both confd and the real service?  
That seems like a very large architectural change.  But maybe I’m 
misunderstanding it.

Thx,
britt

On 6/9/17, 9:04 AM, "Doug Hellmann"  wrote:

Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:

> Unless I'm missing something, to use confd with an OpenStack deployment on
> k8s, we'll have to do something like this:
> 
> * Deploy confd in every node where we may want to run a pod (basically
> wvery node)

Oh, no, no. That's not how it works at all.

confd runs *inside* the containers. It's input files and command line
arguments tell it how to watch for the settings to be used just for that
one container instance. It does all of its work (reading templates,
watching settings, HUPing services, etc.) from inside the container.

The only inputs confd needs from outside of the container are the
connection information to get to etcd. Everything else can be put
in the system package for the application.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-09 Thread Clint Byrum
Excerpts from Lance Bragstad's message of 2017-06-08 16:10:00 -0500:
> On Thu, Jun 8, 2017 at 3:21 PM, Emilien Macchi  wrote:
> 
> > On Thu, Jun 8, 2017 at 7:34 PM, Lance Bragstad 
> > wrote:
> > > After digging into etcd a bit, one place this might help deployer
> > > experience would be the handling of fernet keys for token encryption in
> > > keystone. Currently, all keys used to encrypt and decrypt tokens are
> > kept on
> > > disk for each keystone node in the deployment. While simple, it requires
> > > operators to perform rotation on a single node and then push, or sync,
> > the
> > > new key set to the rest of the nodes. This must be done in lock step in
> > > order to prevent early token invalidation and inconsistent token
> > responses.
> >
> > This is what we discussed a few months ago :-)
> >
> > http://lists.openstack.org/pipermail/openstack-dev/2017-March/113943.html
> >
> > I'm glad it's coming back ;-)
> >
> 
> Yep! I've proposed a pretty basic spec to backlog [0] in an effort to
> capture the discussion. I've also noted the point Kevin brought up about
> authorization in etcd (thanks, Kevin!)
> 
> If someone feels compelled to take that and run with it, they are more than
> welcome to.
> 
> [0] https://review.openstack.org/#/c/472385/
> 

I commented on the spec. I think this is a misguided idea. etcd3 is a
_coordination_ service. Not a key manager. It lacks the audit logging
and access control one expects to protect and manage key material. I'd
much rather see something like Hashicorp's Vault [1] implemented for
Fernet keys than etcd3. We even have a library for such things called
Castellan[2].

[1] https://www.vaultproject.io/
[2] https://docs.openstack.org/developer/castellan/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Call for check: is your project ready for pylint 1.7.1?

2017-06-09 Thread Amrith Kumar
Thanks, Sean.

I will heartily second that request/proposal.

-amrith

--
Amrith Kumar
Phone: +1-978-563-9590


On Fri, Jun 9, 2017 at 11:07 AM, Sean Dague  wrote:

> On 06/08/2017 02:53 PM, Akihiro Motoki wrote:
> > Hi all,
> >
> > Is your project ready for pylint 1.7.1?
> > If you use pylint in your pep8 job, it is worth checking.
> >
> > Our current version of pylint is 1.4.5, but it is not safe under Python 3.5.
> > The global-requirements update was merged once [1].
> > However, some projects (at least neutron) are not ready for pylint
> > 1.7.1 and it was reverted [2].
> > It is reasonable to give individual projects time to cope with pylint
> > 1.7.1.
> >
> > I believe bumping the pylint version to 1.7.1 (or later) is the right
> > direction in the long term.
> > I would suggest making your project ready for pylint 1.7.1 soon (two
> > weeks or so?).
> > You can disable new rules in pylint 1.7.1 temporarily and clean up
> > your code later,
> > as neutron does [3]. As far as I checked, most rules are reasonable
> > and worth enabling.
> >
> > Thanks,
> > Akihiro Motoki
> >
> > [1] https://review.openstack.org/#/c/470800/
> > [2] https://review.openstack.org/#/c/471756/
> > [3] https://review.openstack.org/#/c/471763/
>
> Please only make changes like this in the first milestone of the cycle.
> Lint requirements changes are distracting, and definitely shouldn't be
> happening during the final milestone.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Call for check: is your project ready for pylint 1.7.1?

2017-06-09 Thread Sean Dague
On 06/08/2017 02:53 PM, Akihiro Motoki wrote:
> Hi all,
> 
> Is your project ready for pylint 1.7.1?
> If you use pylint in your pep8 job, it is worth checking.
> 
> Our current version of pylint is 1.4.5, but it is not safe under Python 3.5.
> The global-requirements update was merged once [1].
> However, some projects (at least neutron) are not ready for pylint
> 1.7.1 and it was reverted [2].
> It is reasonable to give individual projects time to cope with pylint 1.7.1.
> 
> I believe bumping the pylint version to 1.7.1 (or later) is the right
> direction in the long term.
> I would suggest making your project ready for pylint 1.7.1 soon (two
> weeks or so?).
> You can disable new rules in pylint 1.7.1 temporarily and clean up
> your code later,
> as neutron does [3]. As far as I checked, most rules are reasonable
> and worth enabling.
> 
> Thanks,
> Akihiro Motoki
> 
> [1] https://review.openstack.org/#/c/470800/
> [2] https://review.openstack.org/#/c/471756/
> [3] https://review.openstack.org/#/c/471763/

Please only make changes like this in the first milestone of the cycle.
Lint requirements changes are distracting, and definitely shouldn't be
happening during the final milestone.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Call for check: is your project ready for pylint 1.7.1?

2017-06-09 Thread Amrith Kumar
Is there a driving reason why this has to be done in the Pike cycle? The
requirements freeze is coincident with Pike-3, and your two-week deadline
puts it pretty close to that date, so I'm going to assume that you will have
to make this change before P3.

Trove is another of the projects that went down in flames with the new
pylint, and I'm wondering what benefit this has for projects in general. The
notion of accumulating more technical debt (enable pylint 1.7.1 and disable
the new checks, clean up the code later) strikes me as less than ideal.

Thanks,

-amrith

--
Amrith Kumar
Phone: +1-978-563-9590


On Thu, Jun 8, 2017 at 1:53 PM, Akihiro Motoki  wrote:

> Hi all,
>
> Is your project ready for pylint 1.7.1?
> If you use pylint in your pep8 job, it is worth checking.
>
> Our current version of pylint is 1.4.5, but it is not safe under Python 3.5.
> The global-requirements update was merged once [1].
> However, some projects (at least neutron) are not ready for pylint
> 1.7.1 and it was reverted [2].
> It is reasonable to give individual projects time to cope with pylint
> 1.7.1.
>
> I believe bumping the pylint version to 1.7.1 (or later) is the right
> direction in the long term.
> I would suggest making your project ready for pylint 1.7.1 soon (two
> weeks or so?).
> You can disable new rules in pylint 1.7.1 temporarily and clean up
> your code later,
> as neutron does [3]. As far as I checked, most rules are reasonable
> and worth enabling.
>
> Thanks,
> Akihiro Motoki
>
> [1] https://review.openstack.org/#/c/470800/
> [2] https://review.openstack.org/#/c/471756/
> [3] https://review.openstack.org/#/c/471763/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Target classes in Cinder

2017-06-09 Thread Walter Boring
I had initially looked into this for the 3PAR drivers when we were first
working on the target driver code. The problem I found was that it would
take a fair amount of time to refactor the code, with marginal benefit.
Yes, the design is better, but I couldn't justify the refactoring time,
effort and testing of the new driver model just to get the same
functionality. Also, we would still need 2 CIs to ensure that the FC vs.
iSCSI target drivers for 3PAR would work correctly, so it doesn't really
save much CI effort. I guess what I'm trying to say is that, even though
it's a better model, we always have to weigh the time investment against
the reward, and I couldn't justify it with all the other efforts I was
involved with at the time.

I kind of assume that, for the most part, most developers don't even
understand why we have the target driver model, and secondly that, even if
they were educated on it, they'd run into the same issue I had.
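
To make the difference concrete, the decoupled model boils down to something
like the sketch below (class and option names are illustrative, not the
actual ones under cinder/volume/targets):

    # Illustrative sketch of the decoupled target model; class and option
    # names are made up, not the exact ones in cinder/volume/targets.
    class ISCSITarget(object):
        def create_export(self, volume):
            return "export %s over iSCSI" % volume


    class FCTarget(object):
        def create_export(self, volume):
            return "export %s over FC" % volume


    TARGETS = {"iscsi": ISCSITarget, "fc": FCTarget}


    class FooDriver(object):
        """One backend driver; the transport is just configuration."""

        def __init__(self, target_protocol="iscsi"):
            # Instead of inheriting from an iSCSI or FC base class, the
            # driver instantiates whichever target helper was configured.
            self.target = TARGETS[target_protocol]()

        def create_export(self, volume):
            return self.target.create_export(volume)


    print(FooDriver(target_protocol="fc").create_export("vol-1"))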

On Fri, Jun 2, 2017 at 12:47 PM, John Griffith 
wrote:

> Hey Everyone,
>
> So quite a while back we introduced a new model for dealing with target
> management in the drivers (ie initialize_connection, ensure_export etc).
>
> Just to summarize a bit:  The original model was that all of the target
> related stuff lived in a base class of the base drivers.  Folks would
> inherit from said base class and off they'd go.  This wasn't very flexible,
> and it's why we ended up with things like two drivers per backend in the
> case of FibreChannel support.  So instead of just say having "driver-foo",
> we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their
> own CI, configs etc.  Kind of annoying.
>
> So we introduced this new model for targets, independent connectors or
> fabrics so to speak that live in `cinder/volume/targets`.  The idea being
> that drivers were no longer locked in to inheriting from a base class to
> get the transport layer they wanted, but instead, the targets class was
> decoupled, and your driver could just instantiate whichever type they
> needed and use it.  This was great in theory for folks like me that if I
> ever did FC, rather than create a second driver (the pattern of 3 classes:
> common, iscsi and FC), it would just be a config option for my driver, and
> I'd use the one you selected in config (or both).
>
> Anyway, I won't go too far into the details around the concept (unless
> somebody wants to hear more), but the reality is it's been a couple years
> now and currently it looks like there are a total of 4 out of the 80+
> drivers in Cinder using this design, blockdevice, solidfire, lvm and drbd
> (and I implemented 3 of them I think... so that's not good).
>
> What I'm wondering is, even though I certainly think this is a FAR
> SUPERIOR design to what we had, I don't like having both code-paths and
> designs in the code base.  Should we consider reverting the drivers that
> are using the new model back and remove cinder/volume/targets?  Or should
> we start flagging those new drivers that don't use the new model during
> review?  Also, what about the legacy/burden of all the other drivers that
> are already in place?
>
> Like I said, I'm biased and I think the new approach is much better in a
> number of ways, but that's a different debate.  I'd be curious to see what
> others think and what might be the best way to move forward.
>
> Thanks,
> John
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][glance][barbican][telemetry][keystone][designate][congress][magnum][searchlight][swift][tacker] unreleased libraries

2017-06-09 Thread Hanxi Liu
Thanks, Doug.

Here is the ceilometerclient release:

 https://review.openstack.org/#/c/472736/

Cheers,
Hanxi Liu

On Fri, Jun 9, 2017 at 11:02 PM, Spyros Trigazis  wrote:

> Thanks for the reminder.
>
> python-magnumclient https://review.openstack.org/#/c/472718/
>
> Cheers,
> Spyros
>
> On 9 June 2017 at 16:39, Doug Hellmann  wrote:
>
>> We have several teams with library deliverables that haven't seen
>> any releases at all yet this cycle. Please review the list below,
>> and if there are changes on master since the last release prepare
>> a release request.  Remember that because of the way our CI system
>> works, patches that land in libraries are not used in tests for
>> services that use the libs unless the library has a release and the
>> constraints list is updated.
>>
>> Doug
>>
>> glance-store
>> instack
>> pycadf
>> python-barbicanclient
>> python-ceilometerclient
>> python-congressclient
>> python-designateclient
>> python-keystoneclient
>> python-magnumclient
>> python-searchlightclient
>> python-swiftclient
>> python-tackerclient
>> requestsexceptions
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Role updates

2017-06-09 Thread Alex Schultz
Hey folks,

I wanted to bring to your attention that we've merged the change[0] to
add a basic set of roles that can be combined to create your own
roles_data.yaml as needed.  With this change the roles_data.yaml and
roles_data_undercloud.yaml files in THT should not be changed by hand.
Instead if you have an update to a role, please update the appropriate
roles/*.yaml file. I have proposed a change[1] to THT with additional
tools to validate that the roles/*.yaml files are updated and that
there are no unaccounted-for roles_data.yaml changes. Additionally,
this change adds a new tox target to assist in the generation of
these basic roles data files that we provide.

Ideally I would like to get rid of the roles_data.yaml and
roles_data_undercloud.yaml so that the end user doesn't have to
generate this file at all, but that won't happen this cycle. In the
meantime, additional documentation around how to work with roles has
been added to the roles README[2].
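
Conceptually, generating a roles_data.yaml is just merging the per-role files
you want into a single list; here is a rough sketch of the idea (the file
names and merge logic are illustrative, and the supported path is the tox
target mentioned above):

    # Rough sketch only: the supported path is the tox target mentioned
    # above, and the role file names here are just examples.
    import yaml

    selected = ["roles/Controller.yaml", "roles/Compute.yaml"]

    roles = []
    for path in selected:
        with open(path) as f:
            # Each roles/*.yaml file contains a list with one role definition.
            roles.extend(yaml.safe_load(f))

    with open("roles_data.yaml", "w") as f:
        yaml.safe_dump(roles, f, default_flow_style=False)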

Thanks,
-Alex

[0] https://review.openstack.org/#/c/445687/
[1] https://review.openstack.org/#/c/472731/
[2] 
https://github.com/openstack/tripleo-heat-templates/blob/master/roles/README.rst

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-09 Thread Lance Bragstad
On Fri, Jun 9, 2017 at 9:57 AM, Mike Bayer  wrote:

>
>
> On 06/08/2017 01:34 PM, Lance Bragstad wrote:
>
>> After digging into etcd a bit, one place this might help deployer
>> experience would be the handling of fernet keys for token encryption in
>> keystone. Currently, all keys used to encrypt and decrypt tokens are kept
>> on disk for each keystone node in the deployment. While simple, it requires
>> operators to perform rotation on a single node and then push, or sync, the
>> new key set to the rest of the nodes. This must be done in lock step in
>> order to prevent early token invalidation and inconsistent token responses.
>>
>> An alternative would be to keep the keys in etcd and make the fernet bits
>> pluggable so that it's possible to read keys from disk or etcd (pending
>> configuration). The advantage would be that operators could initiate key
>> rotations from any keystone node in the deployment (or using etcd directly)
>> and not have to worry about distributing the new key set. Since etcd
>> associates metadata to the key-value pairs, we might be able to simplify
>> the rotation strategy as well.
>>
>
> Interesting, I had the misconception that "fernet" keys no longer
> required any server-side storage (how is "kept-on-disk" now implemented?).


Currently, the keys used to encrypt and decrypt fernet tokens are stored
as files on the keystone server. The repository's default location is
`/etc/keystone/fernet-keys`. The size of this repository is regulated by
the rotation process we provide in the keystone-manage tooling [0].


> We've had continuous issues with the pre-fernet Keystone tokens filling up
> databases, even when operators were correctly expunging old tokens; some
> environments just did so many requests that the keystone-token table still
> blew up to where MySQL can no longer delete from it without producing a
> too-large transaction for Galera.
>

Yep - we actually just fixed a bug related to this [1].


>
> So after all the "finally fernet solves this problem" we propose, hey, let's
> put them *back* in the database :).  That's great.  But, let's please not
> leave "cleaning out old tokens" as some kind of cron/worry-about-it-later
> thing.  That was a terrible architectural decision, with apologies to
> whoever made it.  If you're putting some kind of "we create an infinite,
> rapidly growing, turns-to-garbage-in-30-seconds" kind of data in a
> database, removing that data robustly and ASAP needs to be part of the
> process.
>
>
I should have clarified. The idea was to put the keys used to encrypt and
decrypt the tokens in etcd so that synchronizing the repository across a
cluster of keystone nodes is easier for operators (but not without other
operator pain as Kevin pointed out). The tokens themselves will remain
completely non-persistent. Fernet key creation is explicitly controlled by
operators and isn't something that end users generate.

[0]
https://github.com/openstack/keystone/blob/c528539879e824b8e6d5654292a85ccbee6dcf89/keystone/conf/fernet_tokens.py#L44-L54
[1] https://launchpad.net/bugs/1649616
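
To sketch the mechanics only (this is not a keystone design; the key layout
and the python-etcd3 usage below are assumptions for illustration):

    # Mechanics-only sketch, not a keystone design: keep the fernet key
    # repository in etcd rather than on local disk. The key layout and
    # naming below are assumptions for illustration.
    import etcd3
    from cryptography.fernet import Fernet

    PREFIX = "/keystone/fernet-keys/"
    client = etcd3.client(host="127.0.0.1", port=2379)


    def add_key():
        # A real rotation would also handle the staged/primary key semantics
        # that `keystone-manage fernet_rotate` implements for the on-disk
        # repository; this only appends a new key.
        index = len(list(client.get_prefix(PREFIX)))
        client.put(PREFIX + str(index), Fernet.generate_key())


    def load_keys():
        # Every keystone node reads the same set of keys, so there is
        # nothing to rsync between nodes after a rotation.
        return [value for value, _meta in client.get_prefix(PREFIX)]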


>
>
>
>
>
>> On Thu, Jun 8, 2017 at 11:37 AM, Mike Bayer  mba...@redhat.com>> wrote:
>>
>>
>>
>> On 06/08/2017 12:47 AM, Joshua Harlow wrote:
>>
>> So just out of curiosity, but do people really even know what
>> etcd is good for? I am thinking that there should be some
>> guidance from folks in the community as to where etcd should be
>> used and where it shouldn't (otherwise we just all end up in a
>> mess).
>>
>>
>> So far I've seen a proposal of etcd3 as a replacement for memcached
>> in keystone, and a new dogpile connector was added to oslo.cache to
>> handle referring to etcd3 as a cache backend.  This is a really
>> simplistic / minimal kind of use case for a key-store.
>>
>> But, keeping in mind I don't know anything about etcd3 other than
>> "it's another key-store", it's the only database used by Kubernetes
>> as a whole, which suggests it's doing a better job than Redis in
>> terms of "durable".   So I wouldn't be surprised if new / existing
>> openstack applications express some gravitational pull towards using
>> it as their own datastore as well.I'll be trying to hang onto
>> the etcd3 track as much as possible so that if/when that happens I
>> still have a job :).
>>
>>
>>
>>
>>
>> Perhaps a good idea to actually give examples of how it should
>> be used, how it shouldn't be used, what it offers, what it
>> doesn't... Or at least provide links for people to read up on
>> this.
>>
>> Thoughts?
>>
>> Davanum Srinivas wrote:
>>
One clarification: Since https://pypi.python.org/pypi/etcd3gw just
uses the HTTP API (/v3alpha) it will work under both eventlet and
non-eventlet environments.

Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-09 Thread gordon chung


On 09/06/17 10:57 AM, Mike Bayer wrote:
> Interesting, I had the misconception that "fernet" keys no longer
> required any server-side storage (how is "kept-on-disk" now
> implemented?).  We've had continuous issues with the pre-fernet
> Keystone tokens filling up databases, even when operators were correctly
> expunging old tokens; some environments just did so many requests that
> the keystone-token table still blew up to where MySQL can no longer
> delete from it without producing a too-large transaction for Galera.

i feel your pain. had exact same "can't clean token table because it's 
too damn big" issue.

>
> So after all the "finally fernet solves this problem" we propose, hey
> lets put them *back* in the database :).  That's great.  But, lets
> please not leave "cleaning out old tokens" as some kind of
> cron/worry-about-it-later thing.  That was a terrible architectural
> decision, with apologies to whoever made it.  If you're putting some
> kind of "we create an infinite, rapidly growing,
> turns-to-garbage-in-30-seconds" kind of data in a database, removing
> that data robustly and ASAP needs to be part of the process.

my very basic understanding is that only the key used to generate tokens is
stored. so it in theory will expire less often but, more importantly,
isn't affected by the number of requests.

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][glance][barbican][telemetry][keystone][designate][congress][magnum][searchlight][swift][tacker] unreleased libraries

2017-06-09 Thread Spyros Trigazis
Thanks for the reminder.

python-magnumclient https://review.openstack.org/#/c/472718/

Cheers,
Spyros

On 9 June 2017 at 16:39, Doug Hellmann  wrote:

> We have several teams with library deliverables that haven't seen
> any releases at all yet this cycle. Please review the list below,
> and if there are changes on master since the last release prepare
> a release request.  Remember that because of the way our CI system
> works, patches that land in libraries are not used in tests for
> services that use the libs unless the library has a release and the
> constraints list is updated.
>
> Doug
>
> glance-store
> instack
> pycadf
> python-barbicanclient
> python-ceilometerclient
> python-congressclient
> python-designateclient
> python-keystoneclient
> python-magnumclient
> python-searchlightclient
> python-swiftclient
> python-tackerclient
> requestsexceptions
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pike m2 has been released

2017-06-09 Thread Ben Nemec
Hmm, I was expecting an instack-undercloud release as part of m2.  Is 
there a reason we didn't do that?


On 06/08/2017 03:47 PM, Emilien Macchi wrote:

We have a new release of TripleO, pike milestone 2.
All bugs targeted on Pike-2 have been moved into Pike-3.

I'll take care of moving the blueprints into Pike-3.

Some numbers:
Blueprints: 3 Unknown, 18 Not started, 14 Started, 3 Slow progress, 11
Good progress, 9 Needs Code Review, 7 Implemented
Bugs: 197 Fix Released

Thanks everyone!



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-09 Thread Mike Bayer



On 06/08/2017 04:24 PM, Julien Danjou wrote:

On Thu, Jun 08 2017, Mike Bayer wrote:



So I wouldn't be surprised if new / existing openstack applications
express some gravitational pull towards using it as their own
datastore as well. I'll be trying to hang onto the etcd3 track as much
as possible so that if/when that happens I still have a job :).


Sounds like a recipe for disaster. :)


What architectural decision in any of OpenStack is *not* considered by
some subset of folks to be a "recipe for disaster"?  :)








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][glance][barbican][telemetry][keystone][designate][congress][magnum][searchlight][swift][tacker] unreleased libraries

2017-06-09 Thread Lance Bragstad
Just pushed a release for pycadf as well [1].

[1] https://review.openstack.org/#/c/472717/

On Fri, Jun 9, 2017 at 9:43 AM, Lance Bragstad  wrote:

> We have a review in flight to release python-keystoneclient [0]. Thanks
> for the reminder!
>
> [0] https://review.openstack.org/#/c/472667/
>
> On Fri, Jun 9, 2017 at 9:39 AM, Doug Hellmann 
> wrote:
>
>> We have several teams with library deliverables that haven't seen
>> any releases at all yet this cycle. Please review the list below,
>> and if there are changes on master since the last release prepare
>> a release request.  Remember that because of the way our CI system
>> works, patches that land in libraries are not used in tests for
>> services that use the libs unless the library has a release and the
>> constraints list is updated.
>>
>> Doug
>>
>> glance-store
>> instack
>> pycadf
>> python-barbicanclient
>> python-ceilometerclient
>> python-congressclient
>> python-designateclient
>> python-keystoneclient
>> python-magnumclient
>> python-searchlightclient
>> python-swiftclient
>> python-tackerclient
>> requestsexceptions
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-09 Thread Mike Bayer



On 06/08/2017 01:34 PM, Lance Bragstad wrote:
After digging into etcd a bit, one place this might help deployer
experience would be the handling of fernet keys for token encryption in 
keystone. Currently, all keys used to encrypt and decrypt tokens are 
kept on disk for each keystone node in the deployment. While simple, it 
requires operators to perform rotation on a single node and then push, 
or sync, the new key set to the rest of the nodes. This must be done in 
lock step in order to prevent early token invalidation and inconsistent 
token responses.


An alternative would be to keep the keys in etcd and make the fernet 
bits pluggable so that it's possible to read keys from disk or etcd 
(pending configuration). The advantage would be that operators could 
initiate key rotations from any keystone node in the deployment (or 
using etcd directly) and not have to worry about distributing the new 
key set. Since etcd associates metadata to the key-value pairs, we might 
be able to simplify the rotation strategy as well.


Interesting, I had the misconception that "fernet" keys no longer
required any server-side storage (how is "kept-on-disk" now
implemented?).  We've had continuous issues with the pre-fernet
Keystone tokens filling up databases, even when operators were correctly 
expunging old tokens; some environments just did so many requests that 
the keystone-token table still blew up to where MySQL can no longer 
delete from it without producing a too-large transaction for Galera.


So after all the "finally fernet solves this problem" we propose, hey,
let's put them *back* in the database :).  That's great.  But, let's
please not leave "cleaning out old tokens" as some kind of
cron/worry-about-it-later thing.  That was a terrible architectural
decision, with apologies to whoever made it.  If you're putting some
kind of "we create an infinite, rapidly growing, 
turns-to-garbage-in-30-seconds" kind of data in a database, removing 
that data robustly and ASAP needs to be part of the process.








On Thu, Jun 8, 2017 at 11:37 AM, Mike Bayer > wrote:




On 06/08/2017 12:47 AM, Joshua Harlow wrote:

So just out of curiosity, but do people really even know what
etcd is good for? I am thinking that there should be some
guidance from folks in the community as to where etcd should be
used and where it shouldn't (otherwise we just all end up in a
mess).


So far I've seen a proposal of etcd3 as a replacement for memcached
in keystone, and a new dogpile connector was added to oslo.cache to
handle referring to etcd3 as a cache backend.  This is a really
simplistic / minimal kind of use case for a key-store.

But, keeping in mind I don't know anything about etcd3 other than
"it's another key-store", it's the only database used by Kubernetes
as a whole, which suggests it's doing a better job than Redis in
terms of "durable".   So I wouldn't be surprised if new / existing
openstack applications express some gravitational pull towards using
it as their own datastore as well.I'll be trying to hang onto
the etcd3 track as much as possible so that if/when that happens I
still have a job :).





Perhaps a good idea to actually give examples of how it should
be used, how it shouldn't be used, what it offers, what it
doesn't... Or at least provide links for people to read up on this.

Thoughts?

Davanum Srinivas wrote:

One clarification: Since https://pypi.python.org/pypi/etcd3gw just
uses the HTTP API (/v3alpha) it will work under both eventlet and
non-eventlet environments.

Thanks,
Dims


On Wed, Jun 7, 2017 at 6:47 AM, Davanum
Srinivas>  wrote:

Team,

Here's the update to the base services resolution from
the TC:
https://governance.openstack.org/tc/reference/base-services.html



First request is to Distros, Packagers, Deployers,
anyone who
installs/configures OpenStack:
Please make sure you have latest etcd 3.x available in your
environment for Services to use, Fedora already does, we
need help in
making sure all distros and architectures are covered.

Any project that wants to use the etcd v3 API via grpc, please use:
https://pypi.python.org/pypi/etcd3 (works only for
non-eventlet services)

Those that depend on eventlet, please use https://pypi.python.org/pypi/etcd3gw.

Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-09 Thread Jay Pipes

On 06/05/2017 05:22 PM, Ed Leafe wrote:

Another proposal involved a change to how placement responds to the
scheduler. Instead of just returning the UUIDs of the compute nodes
that satisfy the required resources, it would include a whole bunch
of additional information in a structured response. A straw man
example of such a response is here:
https://etherpad.openstack.org/p/placement-allocations-straw-man.
This was referred to as "Plan B".


Actually, this was Plan "C". Plan "B" was to modify the return of the 
GET /resource_providers Placement REST API endpoint.


> The main feature of this approach

is that part of that response would be the JSON dict for the
allocation call, containing the specific resource provider UUID for
each resource. This way, when the scheduler selects a host


Important clarification is needed here. The proposal is to have the 
scheduler actually select *more than just the compute host*. The 
scheduler would select the host, any sharing providers and any child 
providers within a host that actually contained the resources/traits 
that the request demanded.


>, it would

simply pass that dict back to the /allocations call, and placement
would be able to do the allocations directly against that
information.

There was another issue raised: simply providing the host UUIDs
didn't give the scheduler enough information in order to run its
filters and weighers. Since the scheduler uses those UUIDs to
construct HostState objects, the specific missing information was
never completely clarified, so I'm just including this aspect of the
conversation for completeness. It is orthogonal to the question of
how to allocate when the resource provider is not "simple".


The specific missing information includes the following, but is not limited to:

* Whether or not a resource can be provided by a sharing provider or a 
"local provider" or either. For example, assume a compute node that is 
associated with a shared storage pool via an aggregate but that also has 
local disk for instances. The Placement API currently returns just the 
compute host UUID but no indication of whether the compute host has 
local disk to consume from, has shared disk to consume from, or both. 
The scheduler is the thing that must weigh these choices and make a 
choice. The placement API gives the scheduler the choices and the 
scheduler makes a decision based on sorting/weighing algorithms.


It is imperative to remember the reason *why* we decided (way back in 
Portland at the Nova mid-cycle last year) to keep sorting/weighing in 
the Nova scheduler. The reason is because operators (and some 
developers) insisted on being able to weigh the possible choices in ways 
that "could not be pre-determined". In other words, folks wanted to keep 
the existing uber-flexibility and customizability that the scheduler 
weighers (and home-grown weigher plugins) currently allow, including 
being able to sort possible compute hosts by such things as the average 
thermal temperature of the power supply the hardware was connected to 
over the last five minutes (I kid you friggin not.)


* Which SR-IOV physical function should provide an SRIOV_NET_VF 
resource to an instance. Imagine a situation where a compute host has 4 
SR-IOV physical functions, each having some traits representing hardware 
offload support and each having an inventory of 8 SRIOV_NET_VF. 
Currently the scheduler absolutely has the information to pick one of 
these SRIOV physical functions to assign to a workload. What the 
scheduler does *not* have, however, is a way to tell the Placement API 
to consume an SRIOV_NET_VF from that particular physical function. Why? 
Because the scheduler doesn't know that a particular physical function 
even *is* a resource provider in the placement API. *Something* needs to 
inform the scheduler that the physical function is a resource provider 
and has a particular UUID to identify it. This is precisely what the 
proposed GET /allocation_requests HTTP response data provides to the 
scheduler.
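
Purely as an illustration of the kind of data involved (hypothetical UUIDs,
and the response schema is exactly what is under review, not something that
exists yet):

    # Hypothetical UUIDs; the point is only that the returned data names the
    # physical function itself as the provider to consume the VF from.
    allocation_for_vf = {
        "resource_provider": {"uuid": "<pf-2-rp-uuid>"},  # one of the 4 PFs
        "resources": {"SRIOV_NET_VF": 1},
    }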



My current feeling is that we got ourselves into our existing mess of
ugly, convoluted code when we tried to add these complex
relationships into the resource tracker and the scheduler. We set out
to create the placement engine to bring some sanity back to how we
think about things we need to virtualize.


Sorry, I completely disagree with your assessment of why the placement 
engine exists. We didn't create it to bring some sanity back to how we 
think about things we need to virtualize. We created it to add 
consistency and structure to the representation of resources in the system.


I don't believe that exposing this structured representation of 
resources is a bad thing or that it is leaking "implementation details" 
out of the placement API. It's not an implementation detail that a 
resource provider is a child of another or that a different resource 
provider is supplying some resource to a group of other providers. 
That's simply an accurate representation of the underlying data structures.

[openstack-dev] [tripleo] Containers Deep Dive - 15th June

2017-06-09 Thread Jiří Stránský

Hello,

as discussed previously on the list and at the weekly meeting, we'll do 
a deep dive about containers. The time:


Thursday 15th June, 14:00 UTC (the usual time)

The link for attending will be at the deep dives etherpad [1], the preliminary
agenda is in another etherpad [2], and I hope I'll be able to record it too.


This time it may be more of a "broad dive" :) as that's what containers
in TripleO mostly are -- they add new bits into many TripleO
areas/topics (composable services/upgrades, Quickstart/CI, etc.). So
I'll be trying to shed light on the container-specific parts of the
mix, and assume some familiarity with the generic TripleO
concepts/features (e.g. via docs and previous deep dives). Given this
pattern, I'll have slides with links into code. I'll post them online,
so that you can revisit or examine some code more closely later, in
case you want to.



Have a good day!

Jirka

[1] https://etherpad.openstack.org/p/tripleo-deep-dive-topics
[2] https://etherpad.openstack.org/p/tripleo-deep-dive-containers

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][glance][barbican][telemetry][keystone][designate][congress][magnum][searchlight][swift][tacker] unreleased libraries

2017-06-09 Thread Lance Bragstad
We have a review in flight to release python-keystoneclient [0]. Thanks for
the reminder!

[0] https://review.openstack.org/#/c/472667/

On Fri, Jun 9, 2017 at 9:39 AM, Doug Hellmann  wrote:

> We have several teams with library deliverables that haven't seen
> any releases at all yet this cycle. Please review the list below,
> and if there are changes on master since the last release prepare
> a release request.  Remember that because of the way our CI system
> works, patches that land in libraries are not used in tests for
> services that use the libs unless the library has a release and the
> constraints list is updated.
>
> Doug
>
> glance-store
> instack
> pycadf
> python-barbicanclient
> python-ceilometerclient
> python-congressclient
> python-designateclient
> python-keystoneclient
> python-magnumclient
> python-searchlightclient
> python-swiftclient
> python-tackerclient
> requestsexceptions
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][glance][barbican][telemetry][keystone][designate][congress][magnum][searchlight][swift][tacker] unreleased libraries

2017-06-09 Thread Doug Hellmann
We have several teams with library deliverables that haven't seen
any releases at all yet this cycle. Please review the list below,
and if there are changes on master since the last release prepare
a release request.  Remember that because of the way our CI system
works, patches that land in libraries are not used in tests for
services that use the libs unless the library has a release and the
constraints list is updated.

Doug

glance-store
instack
pycadf
python-barbicanclient
python-ceilometerclient
python-congressclient
python-designateclient
python-keystoneclient
python-magnumclient
python-searchlightclient
python-swiftclient
python-tackerclient
requestsexceptions

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic] Hardware provisioning testing for Ocata

2017-06-09 Thread Mark Goddard
This is great information Justin, thanks for sharing. It will prove useful
as we scale up our ironic deployments.

It seems to me that a reference configuration of ironic would be a useful
resource for many people. Some key decisions may at first seem arbitrary
but have a real impact on performance and scalability, such as:

- BIOS vs. UEFI
- PXE vs. iPXE bootloader
- TFTP vs. HTTP for kernel/ramdisk transfer
- iSCSI vs. Swift (or one day standalone HTTP?) for image transfer
- Hardware specific drivers vs. IPMI
- Local boot vs. netboot
- Fat images vs. slim + post-configuration
- Any particularly useful configuration tunables (power state polling
interval, nova build concurrency, others?)

I personally use kolla + kolla-ansible, which by default uses PXE + TFTP +
iSCSI, arguably not the best combination.

Cheers,
Mark

On 9 June 2017 at 12:28, Justin Kilpatrick  wrote:

> On Fri, Jun 9, 2017 at 5:25 AM, Dmitry Tantsur 
> wrote:
> > This number of "300", does it come from your testing or from other
> sources?
> > If the former, which driver were you using? What exactly problems have
> you
> > seen approaching this number?
>
> I haven't encountered this issue personally, but from talking to Joe
> Talerico and some operators at summit, around this number a single
> conductor begins to fall behind when polling all of the out-of-band
> interfaces for the machines that it's responsible for. You start to
> see what you would expect when polling runs behind, like incorrect
> power states listed for machines and a general inability to perform
> machine operations in a timely manner.
>
> Having spent some time at the Ironic operators forum, this is pretty
> normal and the correct response is just to scale out conductors. This
> is a problem for TripleO because we don't really have a scale-out
> option with a single-machine design. Fortunately, just increasing the
> time between interface polling acts as a pretty good stopgap for this
> and lets Ironic catch up.
>
> I may get some time on a cloud of that scale in the future, at which
> point I will have hard numbers to give you. One of the reasons I made
> YODA was the frustrating prevalence of anecdotes instead of hard data
> when it came to one of the most important parts of the user
> experience. If it doesn't deploy people don't use it, full stop.
>
> > Could you please elaborate? (a bug could also help). What exactly were
> you
> > doing?
>
> https://bugs.launchpad.net/ironic/+bug/1680725
>
> Describes exactly what I'm experiencing. Essentially the problem is
> that nodes can and do fail to PXE boot, then cleaning fails and you
> just lose the nodes. Users have to spend time going back and
> babysitting these nodes, and there are no good instructions on what to
> do with failed nodes anyway. The answer is to move them to manageable
> and then to available, at which point they go back into cleaning until
> it finally works.
>
> Like introspection a year ago, this is a cavalcade of documentation
> problems and software issues. Everything technically *works*, but the
> documentation acts like cleaning will succeed every time, and so does
> the software, leaving the user to figure out how to accommodate the
> realities of the situation without so much as a warning that failures
> might happen.
>
> This comes out as more of a UX issue than a software one, but we can't
> just ignore these.
>
> - Justin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [tripleo] CI Squad Meeting Summary (week 23) - images, devmode and the RDO Cloud

2017-06-09 Thread Attila Darazs
If the topics below interest you and you want to contribute to the 
discussion, feel free to join the next meeting:


Time: Thursdays, 14:30-15:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

We had a packed agenda and intense discussion as always! Let's start 
with an announcement:


The smoothly named "TripleO deploy time optimization hackathlon" will be 
held on 21st and 22nd of June. It would be great to have the cooperation 
of multiple teams here. See the etherpad[1] for details.


= Extending our image building =

It seems that multiple teams would like to utilize the upstream/RDO 
image building process and produce images just like we do upstream. 
Unfortunately, our current image storage systems do not have enough
bandwidth (either upstream or at the RDO level) to increase the number
of images served.


Paul Belanger joined us and explained the longer-term plans of OpenStack
infra, which would provide a proper image/binary blob hosting solution
within a six-month time frame.


In the short term, we will recreate both the upstream and RDO image 
hosting instances on the new RDO Cloud and will test the throughput.


= Transitioning the promotion jobs =

This task still needs some further work. We're missing feature parity on 
the ovb-updates job. As the CI Squad is not able to take responsibility 
for the update functionality, we will probably migrate the job with 
everything else but the update part and make that the new promotion job.


We will also extend the number of jobs voting on a promotion, probably
with the scenario jobs.


= Devmode =

Quickstart's devmode.sh seems to be picking up popularity among the 
TripleO developers. Meanwhile we're starting to realize the limitations 
of the interface it provides for Quickstart. We're going to have a 
design session next week on Tuesday (13th) at 1pm UTC where we will try 
to come up with some ideas to improve this.


Ian Main suggested making devmode.sh deploy a containerized system by
default so that developers get more familiar with it. We agreed this is
a good idea and will follow it up with some changes.


= RDO Cloud =

The RDO Cloud transition is continuing; however, Paul requested that we
don't add the new cloud to the tripleo queue upstream, but rather use
rdoproject's own zuul and nodepool to be a bit more independent and run
it like a third-party CI system. This will require further cooperation
with the RDO Infra folks.


Meanwhile Sagi is setting up the infrastructure needed on the RDO Cloud 
instance to run CI jobs.


Thank you for reading the summary. Have a great weekend!

Best regards,
Attila

[1] https://etherpad.openstack.org/p/tripleo-deploy-time-hack



[openstack-dev] [sahara] Pike 2 released

2017-06-09 Thread Telles Nobrega
Hey Saharans and interested parties,

Just announcing that Sahara Pike 2 was released yesterday and I will be
taking care of bugs and blueprints targeted to P2 and moving them to P3.

This release was a bit crazy; thanks to everyone who helped us get things
together before the deadline.

Let's keep up the good work, folks.

Regards,
-- 

TELLES NOBREGA

SOFTWARE ENGINEER

Red Hat

tenob...@redhat.com

TRIED. TESTED. TRUSTED. 


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-09 Thread gordon chung


On 09/06/17 12:37 AM, Joshua Harlow wrote:
> My thinking is that people should look over https://raft.github.io/ or
> http://thesecretlivesofdata.com/raft/ (or both or others...)
>

this was really useful. thanks for this! love how they described it so 
simply with visuals. spend a few minutes and look at this ^

cheers,

-- 
gord


Re: [openstack-dev] [tripleo] Install Kubernetes in the overcloud using TripleO

2017-06-09 Thread Jeremy Eder
Do you intend to support a scenario where overcloud nodes are bare metal?

On Thu, Jun 8, 2017 at 12:36 PM, Flavio Percoco  wrote:

> Hey y'all,
>
> Just wanted to give an update on the work around tripleo+kubernetes. This
> is
> still far in the future but as we move tripleo to containers using
> docker-cmd,
> we're also working on the final goal, which is to have it run these
> containers
> on kubernetes.
>
> One of the first steps is to have TripleO install Kubernetes in the
> overcloud
> nodes and I've moved forward with this work:
>
> https://review.openstack.org/#/c/471759/
>
> The patch depends on the `ceph-ansible` work and it uses the
> mistral-ansible
> action to deploy kubernetes by leveraging kargo. As it is, the patch
> doesn't
> quite work as it requires some files to be in some places (ssh keys) and a
> couple of other things. None of these "things" are blockers as in they can
> be
> solved by just sending some patches here and there.
>
> I thought I'd send this out as an update and to request some early
> feedback on
> the direction of this patch. The patch, of course, works in my local
> environment
> ;)
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

-- Jeremy Eder


[openstack-dev] [release][barbican][congress][designate][neutron][zaqar] missing pike-2 milestone releases

2017-06-09 Thread Doug Hellmann
We have several projects with deliverables following the
cycle-with-milestones release model without pike 2 releases. Please
check the list below and prepare those release requests as soon as
possible. Remember that this milestone is date-based, not feature-based,
so unless your gate is completely broken there is no reason to wait to
tag the milestone.

Doug

barbican
congress
designate-dashboard
designate
networking-bagpipe
networking-bgpvpn
networking-midonet
networking-odl
networking-ovn
networking-sfc
neutron-dynamic-routing
neutron-fwaas
neutron
zaqar-ui
zaqar



Re: [openstack-dev] Swift3 Plugin Development

2017-06-09 Thread Nicolas Trangez
On Thu, 2017-06-08 at 17:06 +0530, Venkata R Edara wrote:
> Hello,
> 
> we  have storage product called Gluster which is file storage system,
> we 
> are looking to support S3 APIs for it.

Hello Venkata,

Did you consider using the S3 Server project [1] to implement this
functionality? S3 Server has supported object and bucket ACLs since its
very first release and was designed to provide a fully compatible AWS S3 API
(including e.g. object versioning) on top of existing storage systems
like Scality RING, Docker volumes and other cloud storage providers.
It's a fully open-source project, under the Apache-2 license.

See https://github.com/Scality/S3 for code and http://s3.scality.com
for some more background.

I believe it should be easy to integrate with Gluster, as it's meant to
have pluggable metadata and data back-ends.
 
Feel free to get in touch if you have any questions or would like to
discuss how to move forward with this project, we're happy to help and
collaborate!

Cheers,

Nicolas

-- 
 




Re: [openstack-dev] [Openstack-operators] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-09 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2017-06-07 14:00:38 -0400:
> Excerpts from Emilien Macchi's message of 2017-06-07 16:42:13 +0200:
> > On Wed, Jun 7, 2017 at 3:31 PM, Doug Hellmann  wrote:
> > >
> > > On Jun 7, 2017, at 7:20 AM, Emilien Macchi  wrote:
> > >
> > > I'm also wondering if we could use oslo-config-generate directly to
> > > generate confd templates, with a new format. So we would have ini,
> > > yaml, json and confd.
> > > "confd" format would be useful when building rpms that we ship in
> > > containers.
> > > "yaml" format would be useful for installers to expose the options
> > > directly to the User Interface, so we know which params OpenStack
> > > provide and we could re-use the data to push it into etcd.
> > >
> > > Would it make sense?
> > >
> > >
> > > I did think about making oslo-config-generator also take the YAML file as
> > > input instead of scanning plugins, and then including all the output 
> > > formats
> > > in the single command. I haven’t looked to see how much extra complexity
> > > that would add.
> > 
> > Do you mean taking the YAML file that we generate with Ben's work
> > (which would include the parameters values, added by some other
> > tooling maybe)?
> > 
> > I see 2 options at least:
> > 
> > * Let installers to feed etcd with the parameters by using this etcd
> > namespace $CUSTOM_PREFIX + /project/section/parameter (example
> > /node1/keystone/DEFAULT/debug).
> >   And patch oslo.config to be able to generate confd templates with
> > all the options (and ship the template in the package)
> >   I like this option because it provides a way for operators to learn
> > about all possible options in the configuration, with documentation
> > and default values.
> > 
> > * Also let installers to feed etcd but use a standard template like
> > you showed me last week (credits to you for the code):
> > http://paste.openstack.org/show/2KZUQsWYpgrcG2K8TDcE/
> >I like this option because nothing has to be done in oslo.config,
> > since we use a standard template for all OpenStack configs (see the
> > paste ^)
> > 
> > Thoughts?

[My apologies, I sent this reply directly to Emilien the first time.]

> There are 2 problems with using the generic template.
> 
> 1. In order for confd to work, you have to give it a list of all of the
>keys in etcd that it should monitor, and that list is
>application-specific.
> 
> 2. Not all of our configuration values are simple strings or numbers.
>We have options for managing lists of values, and there is even
>an Opt class for loading a dictionary for some reason. So,
>rendering the value in the template will depend on the type of
>the option.
> 
> Given those constraints, it makes sense to generate a custom template
> for each set of options. We need to generate the confd file anyway, and
> the template can have the correct logic for rendering multi-value
> options.
> 

> One further problem I don't know how to address yet is the applications
> that use dynamic sections in configuration files. I think Cinder
> is still the primary example of this, but other apps may use that
> ability.  I don't know how to tell confd that it needs to look at
> the keys in those groups, since we don't know the names in advance.

The more I think about dealing with configuration files, the more
I think this is going to be the killer issue. If an application
doesn't know what sections go in the file, it can't monitor the
right parts of etcd or any other database looking at individual
settings.

The configmap approach assumes that something publishes the entire
INI file, which at least moves the problem outside of the container
to a place where we've already implemented the logic to deal with
the dynamic aspect of the configuration files.

Using configmap to inject config files, we gain the ability to have
a full accurate INI file but lose the ability to monitor the file
for updates and have mutable options.  Given that we're running the
service in a container, and starting a new container is easy, maybe
that's fine.
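
As a side note on point 2 above, here is a minimal sketch of why template
rendering has to be type-aware, using standard oslo.config option classes
(the option names are invented for the example):

from oslo_config import cfg

# Invented option names, standard oslo.config option types.
opts = [
    cfg.StrOpt('auth_strategy', default='keystone'),
    cfg.ListOpt('enabled_backends', default=['lvm', 'ceph']),
    cfg.DictOpt('extra_headers', default={'X-Trace': 'on'}),
]

def render(opt, value):
    # How the value must appear in the INI file depends on the Opt class.
    if isinstance(opt, cfg.ListOpt):
        return ','.join(str(v) for v in value)
    if isinstance(opt, cfg.DictOpt):
        return ','.join('%s:%s' % (k, v) for k, v in value.items())
    return str(value)

for opt in opts:
    print('%s = %s' % (opt.name, render(opt, opt.default)))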

Doug



Re: [openstack-dev] [release][keystone] Can we get a new python-keystoneclient release?

2017-06-09 Thread Hanxi Liu
Hi,

I have proposed one for the new release:

https://review.openstack.org/#/c/472667/

Best Regards,
Hanxi Liu


On Fri, Jun 9, 2017 at 6:09 PM, Javier Pena  wrote:

> Hi,
>
> The latest python-keystoneclient release (3.10.0) dates back to Ocata, and
> it can't be properly packaged in Pike because it fails to run unit tests,
> since [1] is required.
>
> Can we get a new release?
>
> Thanks,
> Javier
>
> [1] - https://github.com/openstack/python-keystoneclient/commit/
> cfd33730868350cd475e45569a8c1573803a6895
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-09 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:

> Unless I'm missing something, to use confd with an OpenStack deployment on
> k8s, we'll have to do something like this:
> 
> * Deploy confd in every node where we may want to run a pod (basically
> every node)

Oh, no, no. That's not how it works at all.

confd runs *inside* the containers. Its input files and command line
arguments tell it how to watch for the settings to be used just for that
one container instance. It does all of its work (reading templates,
watching settings, HUPing services, etc.) from inside the container.

The only inputs confd needs from outside of the container are the
connection information to get to etcd. Everything else can be put
in the system package for the application.
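
For anyone who wants to see the shape of that pattern without confd
itself, here is a rough Python sketch of the same watch-and-rerender
loop. It assumes the python-etcd3 client and the
/<node>/<service>/<section>/<option> key layout discussed earlier in the
thread; the host, prefix and target path are all made up:

import etcd3

client = etcd3.client(host='etcd.example.com', port=2379)  # connection info comes from outside
PREFIX = '/node1/keystone/'        # invented key layout: /<node>/<service>/<section>/<option>
TARGET = '/etc/keystone/keystone.conf'

def render_ini(prefix=PREFIX):
    sections = {}
    for value, meta in client.get_prefix(prefix):
        # keys look like /node1/keystone/DEFAULT/debug
        _, _, section, option = meta.key.decode().rsplit('/', 3)
        sections.setdefault(section, {})[option] = value.decode()
    lines = []
    for section in sorted(sections):
        lines.append('[%s]' % section)
        lines.extend('%s = %s' % kv for kv in sorted(sections[section].items()))
        lines.append('')
    return '\n'.join(lines)

# Re-render the file (and, in a real setup, HUP the service) on any change.
events, cancel = client.watch_prefix(PREFIX)
for event in events:
    with open(TARGET, 'w') as f:
        f.write(render_ini())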

Doug



Re: [openstack-dev] how to set default security group rules?

2017-06-09 Thread Paul Belanger
On Fri, Jun 09, 2017 at 05:20:03AM -0700, Kevin Benton wrote:
> This was an intentional decision. One of the goals of OpenStack is to
> provide consistency across different clouds and configurable defaults for
> new tenants default rules hurts consistency.
> 
> If I write a script to boot up a workload on one OpenStack cloud that
> allows everything by default and it doesn't work on another cloud that
> doesn't allow everything by default, that leads to a pretty bad user
> experience. I would now need logic to scan all of the existing security
> group rules and do a diff between what I want and what is there and have
> logic to resolve the difference.
> 
FWIW: While that argument is valid, the reality is every cloud provider runs a
different version of operating system you boot up your workload on, so it is
pretty much assume that every cloud is different out of box.

What we do now in openstack-infra, is place expected cloud configuration[2] in 
ansible-role-cloud-launcher[1], and run ansible against the cloud. This has been
one of the ways we ensure consistency between clouds. Bonus point, we build and
upload images daily to ensure our workloads are also the same.

[1] http://git.openstack.org/cgit/openstack/ansible-role-cloud-launcher
[2] 
http://git.openstack.org/cgit/openstack-infra/system-config/tree/playbooks/clouds_layouts.yml

> It's a backwards-incompatible change so we'll probably be stuck with the
> current behavior.
> 
> 
> On Fri, Jun 9, 2017 at 2:27 AM, Ahmed Mostafa 
> wrote:
> 
> > I believe that there are no features impelemented in neutron that allows
> > changing the rules for the default security group.
> >
> > I am also interested in seeing such a feature implemented.
> >
> > I see only this blueprint :
> >
> > https://blueprints.launchpad.net/neutron/+spec/default-
> > rules-for-default-security-group
> >
> > But no work has been done on it so far.
> >
> >
> >
> > On Fri, Jun 9, 2017 at 9:16 AM, Paul Schlacter 
> > wrote:
> >
> >> I see the neutron code, which added the default rules to write very
> >> rigid, only for ipv4 ipv6 plus two rules. What if I want to customize the
> >> default rules?
> >>
> >> 
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
> >> e
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [nova] placement/resource providers update 26

2017-06-09 Thread Chris Dent


Placement update 26.

First a note from your editor: This will be the last of these I do
until July 7th. I'll be taking a break from sometime next week until
July. If someone else would like to do the three updates on the state of
placement and resource providers in that window that would be great.

# What Matters Most

It's actually a bit hard to be clear at the moment. Things are a bit
up in the air with some unresolved questions. The work on how to
deal with claims in the scheduler was proceeding apace at

https://review.openstack.org/#/q/status:open+topic:bp/placement-claims

and then at this week's scheduler subteam meeting we started talking
about "complex" allocations (reflected in an email thread:

http://lists.openstack.org/pipermail/openstack-dev/2017-June/117913.html

and etherpad:

https://etherpad.openstack.org/p/placement-allocations-straw-man.

and a proposed (wip-ish) spec:

https://review.openstack.org/#/c/471927/

). Part of what's up in the air about this is a) there isn't yet
agreement that this is the right way to go, b) it's not clear what
the impact of this change will be on the processing done in the
filter scheduler, mostly in terms of how the provided information is
used (if at all) during the filtering and weighing.

In the thread linked above there's a series of questions that are
trying to shine light into the various corners so we can make an
informed decision. From my perspective, writing answers to those
questions will help us to resolve the current situation and help us
to not get caught in these common cycles of questioning that we seem
to do when working on scheduler-related stuff.

# What's Changed

Incremental progress across the board. Some new code linked in
below.

# Help Wanted

Areas where volunteers are needed.

* General attention to bugs tagged placement:
https://bugs.launchpad.net/nova/+bugs?field.tag=placement

* Helping to create api documentation for placement (see the Docs
section below).

# Main Themes

## Claims in the Scheduler

Work is in progress on having the scheduler make resource claims but
is having a period of reflection and discussion. See the "matters
most" section above.

https://review.openstack.org/#/q/status:open+topic:bp/placement-claims

## Traits

The traits blueprint has been completed, based on the work described
in the spec, but we still have no way to make use of the traits:

* The placement API and the get_all_by_filters method on the
  ResourceProviderList object have no way of expressing that a
  request for resource providers should be filtered by a set of
  trait requirements (or preferences).

* There's nothing on the nova-scheduler side of things which allows
  expressing a trait that would be sent to the placement API.

There is a new spec (for queens) for the second part:

https://review.openstack.org/#/c/468797/

And there is some old code for the first part that will need to be
revitalized and will likely be impacted by the outcome of "what
matters most" above:

https://review.openstack.org/#/c/429364/

## Shared Resource Providers

Currently pending resolution of the complex allocations discussion.

## Nested Resource Providers

Work has resumed on nested resource providers.

 
https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers

Currently having some good review discussion on data structures and
graph traversal and search. It's a bit like being back in school.

## User and Project IDs in Allocations

This will allow placement allocations to be considered when doing
resource accounting for things like quota. User id and project id
information is added to allocation records and a new API resource is
added to be able to get summaries of usage by user or project.

 https://review.openstack.org/#/q/topic:bp/placement-project-user
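
Once that API resource merges, querying the summaries would presumably
look something like this sketch (plain requests against the placement
endpoint; the /usages path, parameters and microversion are my reading of
the in-flight patches, so treat them as assumptions):

import requests

PLACEMENT = 'http://placement.example.com/placement'  # endpoint is a placeholder
TOKEN = 'gAAAA...'                                     # keystone token, elided

resp = requests.get(
    PLACEMENT + '/usages',
    params={'project_id': 'PROJECT_UUID', 'user_id': 'USER_UUID'},  # placeholder IDs
    headers={
        'X-Auth-Token': TOKEN,
        # usage summaries are gated behind a newer placement microversion
        'OpenStack-API-Version': 'placement 1.9',
    })
print(resp.status_code, resp.json())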

## Docs

Lots of placement-related api docs have merged or are in progress:
topics:

* https://review.openstack.org/#/q/status:open+topic:cd/placement-api-ref

Soon the whole API will be documented, so it will be time to turn on
a publishing job (only drafts are created now, as far as I recall).

It's interesting to note that in the process of documenting the API
we found (and fixed) a few (minor) bugs.

# Other Code/Specs

* https://review.openstack.org/#/c/472378/
  A proposed fix to using multiple config locations with the
  placement wsgi app. There's some active discussion on whether the
  solution in mind is the right solution, or even whether the bug is
  a bug (it is!).

* https://review.openstack.org/#/c/470578/
  Add functional test for local delete allocations

* https://review.openstack.org/#/c/460147/
  Use DELETE inventories method in report client.

* https://review.openstack.org/#/c/427200/
  Add a status check for legacy filters in nova-status.

* https://review.openstack.org/#/c/453916/
  Don't send instance updates from compute if not using filter
  scheduler

* 

Re: [openstack-dev] how to set default security group rules?

2017-06-09 Thread Kevin Benton
This was an intentional decision. One of the goals of OpenStack is to
provide consistency across different clouds and configurable defaults for
new tenants default rules hurts consistency.

If I write a script to boot up a workload on one OpenStack cloud that
allows everything by default and it doesn't work on another cloud that
doesn't allow everything by default, that leads to a pretty bad user
experience. I would now need logic to scan all of the existing security
group rules and do a diff between what I want and what is there and have
logic to resolve the difference.

It's a backwards-incompatible change so we'll probably be stuck with the
current behavior.


On Fri, Jun 9, 2017 at 2:27 AM, Ahmed Mostafa 
wrote:

> I believe that there are no features impelemented in neutron that allows
> changing the rules for the default security group.
>
> I am also interested in seeing such a feature implemented.
>
> I see only this blueprint :
>
> https://blueprints.launchpad.net/neutron/+spec/default-
> rules-for-default-security-group
>
> But no work has been done on it so far.
>
>
>
> On Fri, Jun 9, 2017 at 9:16 AM, Paul Schlacter 
> wrote:
>
>> I see the neutron code, which added the default rules to write very
>> rigid, only for ipv4 ipv6 plus two rules. What if I want to customize the
>> default rules?
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [ironic] - Global Request ID progress - update 6/9

2017-06-09 Thread Sean Dague
A bunch more work landed this week, here is where we stand:


STATUS

oslo.context / oslo.middleware - DONE

devstack logging additional global_request_id - DONE

cinder: DONE
- client supports global_request_id - DONE
- call Nova & Glance with global_request_id - DONE

neutron: BLOCKED
- client supports global_request_id - DONE
- neutron calls Nova with global_request_id - BLOCKED (see below)

nova: DONE
- Convert to oslo.middleware (to accept global_request_id) - DONE
- client supports global_request_id - DONE
- call Neutron / Cinder / Glance with global_request_id - DONE

glance: BLOCKED
- client supports global_request_id - DONE
- Glance supports setting global_request_id - BLOCKED (see below)

ironic (NEW): in progress
- Ironic supports accepting global_request_id - IN REVIEW


BLOCKED ITEMS

Neutron:

There is a mailing list post out here
http://lists.openstack.org/pipermail/openstack-dev/2017-June/118031.html.
The neutron code for interactions back to Nova is wildly different than
the patterns in other services, so I'm actually stumped on the right
path forward. Some questions are there. Any neutron experts that could
advise or help dive in would be appreciated.

Glance:

The review that would set the global_request_id in the context is
blocked - https://review.openstack.org/#/c/468443/ - over differing
perspectives on the API change there. There are only 2 of us in this review
so far, so it would be good to get more perspectives from folks as well.


STRETCH GOALS

Ironic:

My original intent was to get through Nova, Neutron, Glance, Cinder this
cycle. As that is nearly done, I thought that the next logical service
to loop in would be Ironic. There is an initial patch there to add the
global_request_id inbound - https://review.openstack.org/#/c/472258/.
Ironic reviews to get that into shape for merge would be appreciated.


Comments / questions welcomed. As well as anyone that's interested in
expanding this support to additional services.
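
For anyone wiring up a new service, the context side of this is small. A
minimal sketch, assuming an oslo.context release that carries this work
(where RequestContext accepts a global_request_id argument):

from oslo_context import context

# The global id is minted once by the first service in the chain and then
# passed along on every cross-service call; each service still generates
# its own local request id.
ctx = context.RequestContext(
    global_request_id='req-3d6d5b1f-7a49-4cf5-a7b2-0a9a1f1cdb4f')

print(ctx.request_id)         # local to this service
print(ctx.global_request_id)  # shared across services for the whole operation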


-Sean


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [tripleo][ironic] Hardware provisioning testing for Ocata

2017-06-09 Thread Justin Kilpatrick
On Fri, Jun 9, 2017 at 5:25 AM, Dmitry Tantsur  wrote:
> This number of "300", does it come from your testing or from other sources?
> If the former, which driver were you using? Exactly what problems have you
> seen when approaching this number?

I haven't encountered this issue personally, but from talking to Joe
Talerico and some operators at summit, around this number a single
conductor begins to fall behind when polling all of the out-of-band
interfaces for the machines it's responsible for. You start to see
what you would expect from polling running behind: incorrect power
states listed for machines and a general inability to perform machine
operations in a timely manner.

Having spent some time at the Ironic operators forum, this is pretty
normal and the correct response is just to scale out conductors. This
is a problem for TripleO because we don't really have a scale-out
option with a single-machine design. Fortunately, just increasing the
time between interface polling acts as a pretty good stopgap and lets
Ironic catch up.

I may get some time on a cloud of that scale in the future, at which
point I will have hard numbers to give you. One of the reasons I made
YODA was the frustrating prevalence of anecdotes instead of hard data
when it came to one of the most important parts of the user
experience. If it doesn't deploy people don't use it, full stop.

> Could you please elaborate? (a bug could also help). What exactly were you
> doing?

https://bugs.launchpad.net/ironic/+bug/1680725

Describes exactly what I'm experiencing. Essentially the problem is
that nodes can and do fail to PXE boot, then cleaning fails and you
just lose the nodes. Users have to spend time going back and
babysitting these nodes, and there are no good instructions on what to
do with failed nodes anyway. The answer is to move them to manageable
and then to available, at which point they go back into cleaning until
it finally works.
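
For what it's worth, the babysitting can at least be scripted. A rough
sketch with python-ironicclient (credentials are placeholders, and in
real use you would wait for each node to actually reach manageable
before asking for provide):

from ironicclient import client

ironic = client.get_client(1, os_auth_url='http://undercloud.example:5000/v2.0',
                           os_username='admin', os_password='PASSWORD',
                           os_tenant_name='admin')

# Nodes stuck in "clean failed": push them back through manageable so that
# "provide" re-runs cleaning and (hopefully) lands them in "available".
for node in ironic.node.list(provision_state='clean failed'):
    ironic.node.set_provision_state(node.uuid, 'manage')
    # ... wait here for the node to actually reach 'manageable' ...
    ironic.node.set_provision_state(node.uuid, 'provide')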

Like introspection a year ago, this is a cavalcade of documentation
problems and software issues. Everything technically *works*, but the
documentation acts like cleaning will succeed every time, and so does
the software, leaving the user to figure out how to accommodate the
realities of the situation without so much as a warning that failures
might happen.

This comes out as more of a UX issue than a software one, but we can't
just ignore these.

- Justin



[openstack-dev]   [neutron][nova]   sriov bond

2017-06-09 Thread yan.songming
Hi all:




In the NFV field, many scenarios need L2 redundancy for SR-IOV. But currently
the nova/neutron solution for an L2 bond is usually to configure multiple
neutron ports and allocate different VFs on different physical network
adapters to implement this, like this bp:

https://blueprints.launchpad.net/nova/+spec/distribute-pci-allocation

We think there are some limits to it, so we are trying another way to solve
this problem. We would be grateful if anyone could have a look at [1][2] and
share your opinions on this issue.

[1] https://review.openstack.org/#/c/463526/
[2] https://blueprints.launchpad.net/nova/+spec/sriov-bond

yansongming

M: +86 13813871418
E: yan.songm...@zte.com.cn
www.zte.com.cn


Re: [openstack-dev] [neutron][l2gw] OVS code currently broken

2017-06-09 Thread Ricardo Noriega De Soto
Hi Kevin,

There was already a bug filed about the Tempest plugin:

https://bugs.launchpad.net/networking-l2gw/+bug/1692529

Thanks

On Thu, Jun 8, 2017 at 9:31 PM, Kevin Benton  wrote:

> Can you file a bug against Neutron and reference it here?
>
> On Thu, Jun 8, 2017 at 8:28 AM, Ricardo Noriega De Soto <
> rnori...@redhat.com> wrote:
>
>> There is actually a bunch of patches waiting to be reviewed and approved.
>>
>> Please, we'd need core reviewers to jump in.
>>
>> I'd like to thank Gary for all his support and reviews.
>>
>> Thanks Gary!
>>
>> On Tue, May 30, 2017 at 3:56 PM, Gary Kotton  wrote:
>>
>>> Hi,
>>>
>>> Please note that the L2 GW code is currently broken due to the commit
>>> e6333593ae6005c4b0d73d9dfda5eb47f40dd8da
>>>
>>> If someone has the cycles can they please take a look.
>>>
>>> Thanks
>>>
>>> gary
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Ricardo Noriega
>>
>> Senior Software Engineer - NFV Partner Engineer | Office of Technology
>>  | Red Hat
>> irc: rnoriega @freenode
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
Red Hat
irc: rnoriega @freenode


[openstack-dev] [release][keystone] Can we get a new python-keystoneclient release?

2017-06-09 Thread Javier Pena
Hi,

The latest python-keystoneclient release (3.10.0) dates back to Ocata, and it 
can't be properly packaged in Pike because it fails to run unit tests, since 
[1] is required.

Can we get a new release?

Thanks,
Javier

[1] - 
https://github.com/openstack/python-keystoneclient/commit/cfd33730868350cd475e45569a8c1573803a6895



Re: [openstack-dev] [tripleo] Install Kubernetes in the overcloud using TripleO

2017-06-09 Thread Bogdan Dobrelya
On 08.06.2017 18:36, Flavio Percoco wrote:
> Hey y'all,
> 
> Just wanted to give an update on the work around tripleo+kubernetes.
> This is
> still far in the future but as we move tripleo to containers using
> docker-cmd,
> we're also working on the final goal, which is to have it run these
> containers
> on kubernetes.
> 
> One of the first steps is to have TripleO install Kubernetes in the
> overcloud
> nodes and I've moved forward with this work:
> 
> https://review.openstack.org/#/c/471759/
> 
> The patch depends on the `ceph-ansible` work and it uses the
> mistral-ansible
> action to deploy kubernetes by leveraging kargo. As it is, the patch
> doesn't
> quite work as it requires some files to be in some places (ssh keys) and a
> couple of other things. None of these "things" are blockers as in they
> can be
> solved by just sending some patches here and there.
> 
> I thought I'd send this out as an update and to request some early
> feedback on
> the direction of this patch. The patch, of course, works in my local
> environment
> ;)

Kudos for using Kargo [0], an incubated Kubernetes project (installers'
docs home page [1]) that needs more love and adoption by OpenStack and
perhaps OpenShift communities. Flavio, I'd love to join the research and
start contributing to this effort as soon as possible.

Even though the adoption of COEs for managing OpenStack clouds looks to be
in the distant future, one thing that concerns me now is *early*
consolidation of design decisions, as well as the upstream development
approaches of the teams working on installing Kubernetes in the overcloud:
TripleO with heat templates vs. OpenShift on OpenStack with providers
backed by ansible & shade. See these examples: [2], where heat templates
do only provisioning, and [3], where os_stack [4] is used instead.

And it seems that heat templates will soon be replaced for provisioning
*and* software deployment/configuration tasks, which seems to be the
current development trend.

[0] https://github.com/kubernetes-incubator/kargo/
[1] https://kubernetes.io/docs/home/
[2] https://github.com/openshift/openshift-ansible-contrib/pull/397
[3] https://github.com/openshift/openshift-ansible/pull/4317
[4] https://docs.ansible.com/ansible/os_stack_module.html


> 
> Flavio
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] how to set default security group rules?

2017-06-09 Thread Ahmed Mostafa
I believe that there are no features impelemented in neutron that allows
changing the rules for the default security group.

I am also interested in seeing such a feature implemented.

I see only this blueprint :

https://blueprints.launchpad.net/neutron/+spec/default-rules-for-default-security-group

But no work has been done on it so far.



On Fri, Jun 9, 2017 at 9:16 AM, Paul Schlacter  wrote:

> I see the neutron code, which added the default rules to write very
> rigid, only for ipv4 ipv6 plus two rules. What if I want to customize the
> default rules?
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [tripleo][ironic] Hardware provisioning testing for Ocata

2017-06-09 Thread Dmitry Tantsur

On 06/08/2017 02:21 PM, Justin Kilpatrick wrote:

Morning everyone,

I've been working on a performance testing tool for TripleO hardware
provisioning operations off and on for about a year now and I've been
using it to try and collect more detailed data about how TripleO
performs in scale and production use cases. Perhaps more importantly,
YODA (Yet Openstack Deployment Tool, Another) automates the task enough
that days of deployment testing become a set-it-and-forget-it operation.
You can find my testing tool here [0], and the test report [1] has
links to raw data and visualization. Just scroll down, click the
captcha and click "go to kibana". I still need to port that machine
from my own solution over to Search Guard.

If you have too much email to consider clicking links I'll copy the
results summary here.

TripleO inspection workflows have seen massive improvements since
Newton, with the failure rate for 50 nodes with the default workflow
falling from 100% to <15%. Using patches slated for Pike, that spurious
failure rate reaches zero.


\o/



Overcloud deployments show a significant improvement in deployment
speed in HA and stack update tests.

Ironic deployments in the overcloud allow the use of Ironic for bare
metal scale-out alongside more traditional VM compute. Considering that
a single conductor starts to struggle around 300 nodes, it will be
difficult to push a multi-conductor setup to its limits.


This number of "300", does it come from your testing or from other sources? If 
the former, which driver were you using? Exactly what problems have you seen
when approaching this number?




Finally, Ironic node cleaning shows a similar failure rate to
inspection and will require similar attention in TripleO workflows to
become painless.


Could you please elaborate? (a bug could also help). What exactly were you 
doing?



[0] https://review.openstack.org/#/c/384530/
[1] 
https://docs.google.com/document/d/194ww0Pi2J-dRG3-X75mphzwUZVPC2S1Gsy1V0K0PqBo/

Thanks for your time!


Thanks for YOUR time, this work is extremely valuable!



- Justin







Re: [openstack-dev] [infra] openstack-ubuntu-testing-bot -- please turn off

2017-06-09 Thread James Page
Hi Ian

On Fri, 9 Jun 2017 at 07:57 Ian Wienand  wrote:

> Hi,
>
> If you know of someone in control of whatever is trying to use this
> account, running on 91.189.91.27 (a canonical IP), can you please turn
> it off.  It's in a tight loop failing to connect to gerrit, which
> probably isn't good for either end :)


Disabled - apologies for any issues caused.


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-09 Thread Dmitry Tantsur

Sigh.

Jim, you're one of the brightest people I've ever worked with. The project will
definitely have a hard time recovering from the loss, and so will I personally.
Thank you for your great patches and discussions, and for your leadership during
your time as PTL.


I heartily wish you the very best of luck with your new challenges, and please 
don't disappear :)


On 06/08/2017 02:45 PM, Jim Rollenhagen wrote:

Hey friends,

I've been mostly missing for the past six weeks while looking for a new job, so 
maybe you've forgotten me already, maybe not. I'm happy to tell you I've found 
one that I think is a great opportunity for me. But, I'm sad to tell you that 
it's totally outside of the OpenStack community.


The last 3.5 years have been amazing. I'm extremely grateful that I've been able 
to work in this community - I've learned so much and met so many awesome people. 
I'm going to miss the insane(ly awesome) level of collaboration, the summits, 
the PTGs, and even some of the bikeshedding. We've built amazing things 
together, and I'm sure y'all will continue to do so without me.


I'll still be lurking in #openstack-dev and #openstack-ironic for a while, if 
people need me to drop a -2 or dictate old knowledge or whatever, feel free to 
ping me. Or if you just want to chat. :)


<3 jroll

P.S. obviously my core permissions should be dropped now :P


Sure, sure. I'm getting used to doing it :(










[openstack-dev] Tasks update of Mogan Project

2017-06-09 Thread hao wang
Hi,

We are glad to present this week's tasks update of Mogan.


Essential Priorities


1.Node aggregates (liudong, zhangyang, zhenguo)
---

blueprint: https://blueprints.launchpad.net/mogan/+spec/node-aggregate

spec: https://review.openstack.org/#/c/470927/

code: expose admin node list API https://review.openstack.org/#/c/470183/


2.Server groups and scheduler hints(liudong, liusheng)
-

blueprint: 
https://blueprints.launchpad.net/mogan/+spec/server-group-api-extension

https://blueprints.launchpad.net/mogan/+spec/support-schedule-hints

spec:

code: scheduler hints: https://review.openstack.org/#/c/463534/


3. Adopt servers (wanghao, litao)


blueprint: https://blueprints.launchpad.net/mogan/+spec/manage-existing-bms

spec: https://review.openstack.org/#/c/459967/ under review

Changed the spec according to the review comments; it needs more review.


4. Valence integration (zhenguo, shaohe, luyao, Xinran)
--

blueprint: https://blueprints.launchpad.net/mogan/+spec/valence-integration

spec: 
https://review.openstack.org/#/c/441790/3/specs/pike/approved/valence-integration.rst

code: No updates



Re: [openstack-dev] how to set default security group rules?

2017-06-09 Thread Paul Schlacter
The following is the code; there is no configuration option to customize the
default rules:

for ethertype in ext_sg.sg_supported_ethertypes:
    if default_sg:
        # Allow intercommunication
        ingress_rule = sg_models.SecurityGroupRule(
            id=uuidutils.generate_uuid(), tenant_id=tenant_id,
            security_group=security_group_db,
            direction='ingress',
            ethertype=ethertype,
            source_group=security_group_db)
        context.session.add(ingress_rule)

        egress_rule = sg_models.SecurityGroupRule(
            id=uuidutils.generate_uuid(), tenant_id=tenant_id,
            security_group=security_group_db,
            direction='egress',
            ethertype=ethertype)
        context.session.add(egress_rule)
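
For comparison, here is roughly what the requested customization could
look like. This is a hypothetical sketch only, not existing neutron
behaviour: the rule list would have to come from a new (invented)
configuration option, and it reuses the same sg_models/uuidutils imports
as the snippet above:

# Hypothetical only: neutron has no such option today. The idea is that the
# hard-coded "allow everything" pairs above would be replaced by rules read
# from operator configuration, e.g. a list of
# (direction, ethertype, protocol, port_min, port_max) tuples.
CONFIGURED_DEFAULT_RULES = [
    ('ingress', 'IPv4', 'tcp', 22, 22),
    ('ingress', 'IPv4', 'icmp', None, None),
    ('egress', 'IPv4', None, None, None),
    ('egress', 'IPv6', None, None, None),
]

def seed_default_rules(context, tenant_id, security_group_db):
    for direction, ethertype, protocol, pmin, pmax in CONFIGURED_DEFAULT_RULES:
        rule = sg_models.SecurityGroupRule(
            id=uuidutils.generate_uuid(), tenant_id=tenant_id,
            security_group=security_group_db,
            direction=direction,
            ethertype=ethertype,
            protocol=protocol,
            port_range_min=pmin,
            port_range_max=pmax)
        context.session.add(rule)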

On Fri, Jun 9, 2017 at 3:16 PM, Paul Schlacter  wrote:

> I see the neutron code, which added the default rules to write very
> rigid, only for ipv4 ipv6 plus two rules. What if I want to customize the
> default rules?
>


[openstack-dev] how to set default security group rules?

2017-06-09 Thread Paul Schlacter
I see the neutron code, which added the default rules to write very
rigid, only for ipv4 ipv6 plus two rules. What if I want to customize the
default rules?


[openstack-dev] [infra] openstack-ubuntu-testing-bot -- please turn off

2017-06-09 Thread Ian Wienand

Hi,

If you know of someone in control of whatever is trying to use this
account, running on 91.189.91.27 (a Canonical IP), can you please turn
it off? It's in a tight loop failing to connect to gerrit, which
probably isn't good for either end :)

-i
