This isn't about the operating system of the instance or even the host.
It's the behavior of the Neutron API WRT what traffic will be filtered by
the default security group.
If we go down this route, users will have to expect effectively random sets
of security group rules from cloud to cloud and
On Fri, 9 Jun 2017, Dan Smith wrote:
In other words, I would expect to be able to explain the purpose of the
scheduler as "applies nova-specific logic to the generic resources that
placement says are _valid_, with the goal of determining which one is
_best_".
This sounds great as an explanatio
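The "valid vs. best" split described above can be sketched in a toy example (illustrative only, not nova or placement code; the host dicts and the free-RAM weigher are invented for the sketch):

```python
# Toy sketch of the division of labor: placement returns the *valid*
# candidates, the scheduler applies nova-specific policy to pick the *best*.
def placement_valid(hosts, requested):
    # placement: generic resource accounting only
    return [h for h in hosts if h["free_ram"] >= requested["ram"]]

def scheduler_best(candidates, weigher=lambda h: h["free_ram"]):
    # scheduler: nova-specific weighing (here: most free RAM wins)
    return max(candidates, key=weigher)

hosts = [
    {"name": "cn1", "free_ram": 4096},
    {"name": "cn2", "free_ram": 8192},
    {"name": "cn3", "free_ram": 1024},
]
valid = placement_valid(hosts, {"ram": 2048})
assert [h["name"] for h in valid] == ["cn1", "cn2"]
assert scheduler_best(valid)["name"] == "cn2"
```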
On Fri, 9 Jun 2017, Jay Pipes wrote:
Sorry, been in a three-hour meeting. Comments inline...
Thanks for getting to this, it's very helpful to me.
* Part of the reason for having nested resource providers is because
it can allow affinity/anti-affinity below the compute node (e.g.,
workloa
>> b) a compute node could very well have both local disk and shared
>> disk. how would the placement API know which one to pick? This is a
>> sorting/weighing decision and thus is something the scheduler is
>> responsible for.
> I remember having this discussion, and we concluded that a
> comp
On Fri, Jun 9, 2017 at 5:01 PM, Ben Nemec wrote:
> Hmm, I was expecting an instack-undercloud release as part of m2. Is there
> a reason we didn't do that?
You just released a new tag: https://review.openstack.org/#/c/471066/
with a new release model, why would we release m2? In case you want
it
On Jun 9, 2017, at 4:35 PM, Jay Pipes wrote:
>> We can declare that allocating for shared disk is fairly deterministic
>> if we assume that any given compute node is only associated with one
>> shared disk provider.
>
> a) we can't assume that
> b) a compute node could very well have both local
Sorry, been in a three-hour meeting. Comments inline...
On 06/06/2017 10:56 AM, Chris Dent wrote:
On Mon, 5 Jun 2017, Ed Leafe wrote:
One proposal is to essentially use the same logic in placement
that was used to include that host in those matching the
requirements. In other words, when it tr
On 9 June 2017 at 06:36, Doug Hellmann wrote:
> We have several projects with deliverables following the
> cycle-with-milestones release model without pike 2 releases. Please
> check the list below and prepare those release requests as soon as
> possible. Remember that this milestone is date-base
On 06/09/2017 11:12 AM, Lance Bragstad wrote:
I should have clarified. The idea was to put the keys used to encrypt
and decrypt the tokens in etcd so that synchronizing the repository
across a cluster of keystone nodes is easier for operators (but not

without other operator pain as Kevin
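Whatever the key store ends up being (disk today, etcd as proposed), the rotation semantics it must preserve can be sketched with the `cryptography` library's fernet primitives (a minimal illustration, not keystone's actual key-repository code):

```python
# Sketch of fernet key-repository semantics: index 0 is the primary
# (encryption) key, the rest are secondary keys kept only for decryption.
from cryptography.fernet import Fernet, MultiFernet

keys = [Fernet.generate_key(), Fernet.generate_key()]
repo = MultiFernet([Fernet(k) for k in keys])

token = repo.encrypt(b"payload")          # encrypted with keys[0]
assert repo.decrypt(token) == b"payload"  # any key in the list may decrypt

# After rotation a new primary is prepended; old tokens must still
# decrypt as long as their key remains in the repository -- which is
# why every keystone node needs the same, synchronized key list.
keys.insert(0, Fernet.generate_key())
repo = MultiFernet([Fernet(k) for k in keys])
assert repo.decrypt(token) == b"payload"
```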
Hi Everyone,
Happy Friday! There have been a number of discussions (at the PTG, at
OpenStack Summit, in Interop WG and Board of Directors meetings, etc) over the
past several months about the possibility of creating new interoperability
programs in addition to the existing OpenStack Powered pr
...
No words can express, will try to keep in touch, and congratulations on
your new adventure sir! Continue to be a great influence and valued member
of your new team.
On Thu, Jun 8, 2017 at 7:45 AM, Jim Rollenhagen
wrote:
> Hey friends,
>
> I've been mostly missing for the past six weeks whil
History lesson: a long, long time ago we made a very big mistake. We
treated stack outputs as things that would be resolved dynamically when
you requested them, instead of having values fixed at the time the
template was created or updated. This makes performance of reading
outputs slow, especi
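The two strategies can be contrasted in a toy model (illustrative only, not Heat code; the `Stack` class and its methods are invented for the sketch):

```python
# Toy contrast of the two strategies: resolving outputs on every read
# vs. snapshotting them when the template is created or updated.
class Stack:
    def __init__(self, resources):
        self.resources = resources          # live resource attribute getters
        self._output_cache = None

    def _resolve(self):
        # imagine each lookup being an RPC/DB round trip in the real system
        return {name: fn() for name, fn in self.resources.items()}

    def outputs_dynamic(self):
        return self._resolve()              # slow: recomputed per request

    def update(self):
        self._output_cache = self._resolve()  # fixed at update time

    def outputs_cached(self):
        return self._output_cache           # fast: a stored snapshot

stack = Stack({"ip": lambda: "10.0.0.5"})
stack.update()
assert stack.outputs_cached() == {"ip": "10.0.0.5"}
```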
On Fri, 9 Jun 2017 10:37:15 +0530
Niels de Vos wrote:
> > > we are looking for an S3 plugin with ACLs so that we can integrate
> > > gluster with that.
> >
> > Did you look into porting Ceph RGW on top of Gluster?
>
> This is one of the longer term options that we have under consideration.
> I am
On June 8, 2017 at 07:20:23, Jeremy Stanley (fu...@yuggoth.org) wrote:
> On 2017-06-07 16:36:45 -0700, Ken'ichi Ohmichi wrote:
> [...]
> > one of the config files is 30K lines due to the amount of user
> > information, and that makes the maintenanc
Per the request below the most widely supported time for a cross Working Group
status meeting seems to be Wednesdays at 0500 UTC. We will bring this to the
UC meeting on Monday. Proposal is that the first UC meeting with WG status
would be 6/21 Wednesday 0500 UTC (Tuesday Late Evening US time)
Excerpts from Flavio Percoco's message of 2017-06-09 16:52:25 +:
> On Fri, Jun 9, 2017 at 11:30 AM Britt Houser (bhouser)
> wrote:
>
> > How does confd run inside the container? Does this mean we’d need some
> > kind of systemd in every container which would spawn both confd and the
> > real
On Fri, Jun 09, 2017 at 04:52:25PM +, Flavio Percoco wrote:
> On Fri, Jun 9, 2017 at 11:30 AM Britt Houser (bhouser)
> wrote:
>
> > How does confd run inside the container? Does this mean we’d need some
> > kind of systemd in every container which would spawn both confd and the
> > real serv
Excerpts from Alex Schultz's message of 2017-06-09 10:54:16 -0600:
> I ran into a case where I wanted to add python-tripleoclient to
> test-requirements for tripleo-heat-templates but it's not in the
> global requirements. In looking into adding this, I noticed that
> python-tripleoclient and tripl
(sorry if duplicate, having troubles with email)
Hi Team,
I've been working a bit with the Glance team, trying to help where I can,
and I can't help but be worried about the critical status of the Glance
team. Unfortunately, the number of participants in the Glance team has
been reduced a lot, result
On 06/08/2017 07:45 AM, Jim Rollenhagen wrote:
Hey friends,
I've been mostly missing for the past six weeks while looking for a new
job, so maybe you've forgotten me already, maybe not. I'm happy to tell
you I've found one that I think is a great opportunity for me. But, I'm
sad to tell you t
On Tue, May 30, 2017 at 3:08 PM, Emilien Macchi wrote:
> On Tue, May 30, 2017 at 8:36 PM, Matthew Thode
> wrote:
>> We have a problem in requirements that projects that don't have the
>> cycle-with-intermediary release model (most of the cycle-with-milestones
>> model) don't get integrated with r
On Fri, Jun 9, 2017 at 11:30 AM Britt Houser (bhouser)
wrote:
> How does confd run inside the container? Does this mean we’d need some
> kind of systemd in every container which would spawn both confd and the
> real service? That seems like a very large architectural change. But
> maybe I’m mi
On Fri, Jun 9, 2017 at 8:07 AM Doug Hellmann wrote:
> Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:
>
> > Unless I'm missing something, to use confd with an OpenStack deployment
> on
> > k8s, we'll have to do something like this:
> >
> > * Deploy confd in every node where w
KVM Forum 2017: Call For Participation
October 25-27, 2017 - Hilton Prague - Prague, Czech Republic
(All submissions must be received before midnight June 15, 2017)
=================================================================
>> My current feeling is that we got ourselves into our existing mess
>> of ugly, convoluted code when we tried to add these complex
>> relationships into the resource tracker and the scheduler. We set
>> out to create the placement engine to bring some sanity back to how
>> we think about things
On Fri, Jun 9, 2017 at 11:17 AM, Clint Byrum wrote:
> Excerpts from Lance Bragstad's message of 2017-06-08 16:10:00 -0500:
> > On Thu, Jun 8, 2017 at 3:21 PM, Emilien Macchi
> wrote:
> >
> > > On Thu, Jun 8, 2017 at 7:34 PM, Lance Bragstad
> > > wrote:
> > > > After digging into etcd a bit, one
Excerpts from Akihiro Motoki's message of 2017-06-09 03:53:34 +0900:
> Hi all,
>
> Is your project ready for pylint 1.7.1?
> If you use pylint in your pep8 job, it is worth checking.
>
> Our current version of pylint is 1.4.5 but it is not safe in python 3.5.
> The global-requirements update was m
How does confd run inside the container? Does this mean we’d need some kind of
systemd in every container which would spawn both confd and the real service?
That seems like a very large architectural change. But maybe I’m
misunderstanding it.
Thx,
britt
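One way to picture the concern: the container would need some tiny "init" to supervise both processes. A hypothetical sketch follows (not a real confd setup; the `sleep` commands are stand-ins for the confd watcher and the real service binary):

```python
# Hypothetical mini-init: spawn confd next to the real service and,
# when either exits, stop the other so the orchestrator can restart
# the container as a unit.
import subprocess
import time

def run_both(confd_cmd, service_cmd):
    """Return the exit code of whichever process finishes first,
    terminating the survivor."""
    procs = [subprocess.Popen(confd_cmd), subprocess.Popen(service_cmd)]
    while True:
        for p in procs:
            code = p.poll()
            if code is not None:
                for other in procs:
                    if other.poll() is None:
                        other.terminate()
                        other.wait()
                return code
        time.sleep(0.05)

# stand-in commands: the "service" would outlive the "confd" watcher here
exit_code = run_both(["sleep", "0.1"], ["sleep", "5"])
assert exit_code == 0
```

Whether carrying a supervisor like this in every image is acceptable is exactly the architectural question raised above.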
On 6/9/17, 9:04 AM, "Doug Hellmann"
Excerpts from Lance Bragstad's message of 2017-06-08 16:10:00 -0500:
> On Thu, Jun 8, 2017 at 3:21 PM, Emilien Macchi wrote:
>
> > On Thu, Jun 8, 2017 at 7:34 PM, Lance Bragstad
> > wrote:
> > > After digging into etcd a bit, one place this might help deployer
> > > experience would be the ha
Thanks, Sean.
I will heartily second that request/proposal.
-amrith
--
Amrith Kumar
Phone: +1-978-563-9590
On Fri, Jun 9, 2017 at 11:07 AM, Sean Dague wrote:
> On 06/08/2017 02:53 PM, Akihiro Motoki wrote:
> > Hi all,
> >
> > Is your project ready for pylint 1.7.1?
> > If you use p
On 06/08/2017 02:53 PM, Akihiro Motoki wrote:
> Hi all,
>
> Is your project ready for pylint 1.7.1?
> If you use pylint in your pep8 job, it is worth checking.
>
> Our current version of pylint is 1.4.5 but it is not safe in python 3.5.
> The global-requirements update was merged once [1],
> Howev
Is there a driving reason why this has to be done in the Pike cycle? The
requirements freeze is coincident with Pike-3 and your two week deadline
puts it pretty close to that date so I'm going to assume that you will have
to make this change before p3.
Trove is another of the projects that went d
I had initially looked into this for the 3PAR drivers when we were
working on the target driver code. The problem I found was, it would
take a fair amount of time to refactor the code, with marginal benefit.
Yes, the design is better, but I couldn't justify the refactoring time,
effort
thanks Doug,
Here is ceilometerclient release:
https://review.openstack.org/#/c/472736/
Cheers,
Hanxi Liu
On Fri, Jun 9, 2017 at 11:02 PM, Spyros Trigazis wrote:
> Thanks for the reminder.
>
> python-magnumclient https://review.openstack.org/#/c/472718/
>
> Cheers,
> Spyros
>
> O
Hey folks,
I wanted to bring to your attention that we've merged the change[0] to
add a basic set of roles that can be combined to create your own
roles_data.yaml as needed. With this change the roles_data.yaml and
roles_data_undercloud.yaml files in THT should not be changed by hand.
Instead if
On Fri, Jun 9, 2017 at 9:57 AM, Mike Bayer wrote:
>
>
> On 06/08/2017 01:34 PM, Lance Bragstad wrote:
>
>> After digging into etcd a bit, one place this might help deployer
>> experience would be the handling of fernet keys for token encryption in
>> keystone. Currently, all keys used to encry
On 09/06/17 10:57 AM, Mike Bayer wrote:
> Interesting, I had the misconception that "fernet" keys no longer
> required any server-side storage (how is "kept-on-disk" now
> implemented?) . We've had continuous issues with the pre-fernet
> Keystone tokens filling up databases, even when operators
Thanks for the reminder.
python-magnumclient https://review.openstack.org/#/c/472718/
Cheers,
Spyros
On 9 June 2017 at 16:39, Doug Hellmann wrote:
> We have several teams with library deliverables that haven't seen
> any releases at all yet this cycle. Please review the list below,
> and if th
Hmm, I was expecting an instack-undercloud release as part of m2. Is
there a reason we didn't do that?
On 06/08/2017 03:47 PM, Emilien Macchi wrote:
We have a new release of TripleO, pike milestone 2.
All bugs targeted on Pike-2 have been moved into Pike-3.
I'll take care of moving the bluepr
On 06/08/2017 04:24 PM, Julien Danjou wrote:
On Thu, Jun 08 2017, Mike Bayer wrote:
So I wouldn't be surprised if new / existing openstack applications
express some gravitational pull towards using it as their own
datastore as well. I'll be trying to hang onto the etcd3 track as much
as poss
Just pushed a release for pycadf as well [1].
[1] https://review.openstack.org/#/c/472717/
On Fri, Jun 9, 2017 at 9:43 AM, Lance Bragstad wrote:
> We have a review in flight to release python-keystoneclient [0]. Thanks
> for the reminder!
>
> [0] https://review.openstack.org/#/c/472667/
>
> On
On 06/08/2017 01:34 PM, Lance Bragstad wrote:
After digging into etcd a bit, one place this might help deployer
experience would be the handling of fernet keys for token encryption in
keystone. Currently, all keys used to encrypt and decrypt tokens are
kept on disk for each keystone node i
On 06/05/2017 05:22 PM, Ed Leafe wrote:
Another proposal involved a change to how placement responds to the
scheduler. Instead of just returning the UUIDs of the compute nodes
that satisfy the required resources, it would include a whole bunch
of additional information in a structured response. A
Hello,
as discussed previously on the list and at the weekly meeting, we'll do
a deep dive about containers. The time:
Thursday 15th June, 14:00 UTC (the usual time)
Link for attending will be at the deep dives etherpad [1], preliminary
agenda is in another etherpad [2], and i hope i'll be a
We have a review in flight to release python-keystoneclient [0]. Thanks for
the reminder!
[0] https://review.openstack.org/#/c/472667/
On Fri, Jun 9, 2017 at 9:39 AM, Doug Hellmann wrote:
> We have several teams with library deliverables that haven't seen
> any releases at all yet this cycle. P
We have several teams with library deliverables that haven't seen
any releases at all yet this cycle. Please review the list below,
and if there are changes on master since the last release prepare
a release request. Remember that because of the way our CI system
works, patches that land in librar
This is great information Justin, thanks for sharing. It will prove useful
as we scale up our ironic deployments.
It seems to me that a reference configuration of ironic would be a useful
resource for many people. Some key decisions affecting scalability and
performance may at first seem arbitrary
If the topics below interest you and you want to contribute to the
discussion, feel free to join the next meeting:
Time: Thursdays, 14:30-15:30 UTC
Place: https://bluejeans.com/4113567798/
Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
We had a packed agenda and intens
Hey Saharans and interested parties,
Just announcing that Sahara Pike 2 was released yesterday and I will be
taking care of bugs and blueprints targeted to P2 and moving them to P3.
This release was a bit crazy, thanks for everyone that helped us get things
together before the deadline.
Let's ke
On 09/06/17 12:37 AM, Joshua Harlow wrote:
> My thinking is that people should look over https://raft.github.io/ or
> http://thesecretlivesofdata.com/raft/ (or both or others...)
>
this was really useful. thanks for this! love how they described it so
simply with visuals. spend a few minutes an
Do you intend to support a scenario where overcloud nodes are bare metal?
On Thu, Jun 8, 2017 at 12:36 PM, Flavio Percoco wrote:
> Hey y'all,
>
> Just wanted to give an updated on the work around tripleo+kubernetes. This
> is
> still far in the future but as we move tripleo to containers using
>
We have several projects with deliverables following the
cycle-with-milestones release model without pike 2 releases. Please
check the list below and prepare those release requests as soon as
possible. Remember that this milestone is date-based, not feature-based,
so unless your gate is completely
On Thu, 2017-06-08 at 17:06 +0530, Venkata R Edara wrote:
> Hello,
>
> we have a storage product called Gluster, which is a file storage
> system, and we are looking to support S3 APIs for it.
Hello Venkata,
Did you consider using the S3 Server project [1] to implement this
functionality? S3 Server s
Excerpts from Doug Hellmann's message of 2017-06-07 14:00:38 -0400:
> Excerpts from Emilien Macchi's message of 2017-06-07 16:42:13 +0200:
> > On Wed, Jun 7, 2017 at 3:31 PM, Doug Hellmann wrote:
> > >
> > > On Jun 7, 2017, at 7:20 AM, Emilien Macchi wrote:
> > >
> > > I'm also wondering if we co
Hi,
I have proposed one to the new release:
https://review.openstack.org/#/c/472667/
Best Regards,
Hanxi Liu
On Fri, Jun 9, 2017 at 6:09 PM, Javier Pena wrote:
> Hi,
>
> The latest python-keystoneclient release (3.10.0) dates back to Ocata, and
> it can't be properly packaged in Pike because
Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:
> Unless I'm missing something, to use confd with an OpenStack deployment on
> k8s, we'll have to do something like this:
>
> * Deploy confd in every node where we may want to run a pod (basically
> wvery node)
Oh, no, no. That
On Fri, Jun 09, 2017 at 05:20:03AM -0700, Kevin Benton wrote:
> This was an intentional decision. One of the goals of OpenStack is to
> provide consistency across different clouds, and configurable defaults
> for new tenants' security group rules hurt consistency.
>
> If I write a script to boot up a work
Placement update 26.
First a note from your editor: This will be the last of these I do
until July 7th. I'll be taking a break from sometime next week until
July. If someone else would like to do the three updates on the state of
placement and resource providers in that window that would be grea
This was an intentional decision. One of the goals of OpenStack is to
provide consistency across different clouds, and configurable defaults
for new tenants' security group rules hurt consistency.
If I write a script to boot up a workload on one OpenStack cloud that
allows everything by default and it doe
A bunch more work landed this week, here is where we stand:
STATUS
oslo.context / oslo.middleware - DONE
devstack logging additional global_request_id - DONE
cinder: DONE
- client supports global_request_id - DONE
- call Nova & Glance with global_request_id - DONE
neutron: BLOCKED
- client su
On Fri, Jun 9, 2017 at 5:25 AM, Dmitry Tantsur wrote:
> This number of "300", does it come from your testing or from other sources?
> If the former, which driver were you using? What exactly problems have you
> seen approaching this number?
I haven't encountered this issue personally, but talking
Hi all:
In the NFV field, many scenarios need L2 redundancy for SR-IOV. But
currently the nova/neutron solution for L2 bonding is usually to
configure multiple neutron ports and allocate VFs from different
physical network adapters, like this bp.
https://blueprints.launchp
Hi Kevin,
There was already a bug filed about the Tempest plugin:
https://bugs.launchpad.net/networking-l2gw/+bug/1692529
Thanks
On Thu, Jun 8, 2017 at 9:31 PM, Kevin Benton wrote:
> Can you file a bug against Neutron and reference it here?
>
> On Thu, Jun 8, 2017 at 8:28 AM, Ricardo Noriega
Hi,
The latest python-keystoneclient release (3.10.0) dates back to Ocata, and it
can't be properly packaged in Pike because it fails to run unit tests, since
[1] is required.
Can we get a new release?
Thanks,
Javier
[1] -
https://github.com/openstack/python-keystoneclient/commit/cfd33730868
On 08.06.2017 18:36, Flavio Percoco wrote:
> Hey y'all,
>
> Just wanted to give an updated on the work around tripleo+kubernetes.
> This is
> still far in the future but as we move tripleo to containers using
> docker-cmd,
> we're also working on the final goal, which is to have it run these
> con
I believe there is no feature implemented in Neutron that allows
changing the rules of the default security group.
I am also interested in seeing such a feature implemented.
I see only this blueprint :
https://blueprints.launchpad.net/neutron/+spec/default-rules-for-default-security-gro
On 06/08/2017 02:21 PM, Justin Kilpatrick wrote:
Morning everyone,
I've been working on a performance testing tool for TripleO hardware
provisioning operations off and on for about a year now and I've been
using it to try and collect more detailed data about how TripleO
performs in scale and pro
Hi Ian
On Fri, 9 Jun 2017 at 07:57 Ian Wienand wrote:
> Hi,
>
> If you know of someone in control of whatever is trying to use this
> account, running on 91.189.91.27 (a canonical IP), can you please turn
> it off. It's in a tight loop failing to connect to gerrit, which
> probably isn't good f
Sigh.
Jim, you're one of the brightest people I've ever worked with. The project will
definitely have a hard time recovering from the loss, and so will I personally.
Thank you for your great patches, discussions, for your leadership during your
times as a PTL.
I heartily wish you the very best
Hi,
We are glad to present this week's tasks update of Mogan.
Essential Priorities
1.Node aggregates (liudong, zhangyang, zhenguo)
---
blueprint: https://blueprints.launchpad.net/mogan/+spec/node-aggregate
sp
The following is the code; there is no configuration option for the
default rules:
    for ethertype in ext_sg.sg_supported_ethertypes:
        if default_sg:
            # Allow intercommunication
            ingress_rule = sg_models.SecurityGroupRule(
Looking at the Neutron code, the default rules it adds are written very
rigidly: just the hard-coded rules for IPv4 and IPv6. What if I want to
customize the default rules?
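A configurable variant might look roughly like this (a hypothetical sketch, not Neutron code; the field names only loosely mirror Neutron's security-group API, and `build_default_rules` is an invented helper):

```python
# Hypothetical sketch: driving the default security group rules from a
# data table instead of hardcoding the per-ethertype rules, so operators
# could override the table -- at the cost of the cross-cloud consistency
# argued for earlier in the thread.
DEFAULT_RULES = [
    {"direction": "ingress", "ethertype": "IPv4", "remote_group": "self"},
    {"direction": "ingress", "ethertype": "IPv6", "remote_group": "self"},
    {"direction": "egress", "ethertype": "IPv4"},
    {"direction": "egress", "ethertype": "IPv6"},
]

def build_default_rules(sg_id, rules=DEFAULT_RULES):
    """Attach every configured default rule to the new security group."""
    return [dict(rule, security_group_id=sg_id) for rule in rules]

rules = build_default_rules("sg-1")
assert len(rules) == 4
assert all(r["security_group_id"] == "sg-1" for r in rules)
```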
__
OpenStack Development Mailing List (not for usage questions)
Un