After discussion in the Large Deployments Team session this morning, we wanted
to follow up on the earlier thread [1,2] about overriding endpoint URLs.
That topic exposes an underlying question about the purpose of the
service catalog. The LDT position is that the service catalog should
> computes. We consider these glance-apis as part of the underlying
> cloud infra rather than user-facing, so I think we'd prefer not to see
> them in the service-catalog returned to users either... is there going
> to be a (standard) way to hide them?
>
> On 28 April 2017 at 09:15, Mike Dorman <mdor...@godaddy.com> wrote:
We make extensive use of the [glance]/api_servers list. We configure that on
hypervisors to direct them to Glance servers which are more “local”
network-wise (in order to reduce network traffic across security
zones/firewalls/etc.) This way nova-compute can fail over in case one of the
We noticed an issue in one of our larger clouds (~700 hypervisors and ovs
agents) where (Liberty) neutron-server CPU and RAM load would spike up quite a
bit whenever a DHCP agent port was updated. So much load that processes were
getting OOM killed on our API servers, and so many queries were
I wonder if we should just refactor the Neutron provider to support either
format? That way we stay independent from whatever the particular
installation’s cliff/tablib situation is.
We can probably safely assume that none of the Neutron objects will have
attributes called ‘Field’ or ‘Value’,
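A minimal sketch of that idea, hypothetical rather than the actual puppet-neutron provider code: parse the CLI's show output the same way whether or not the installed cliff/tablib combination emits a "Field | Value" header row, relying on the assumption above that no Neutron attribute is literally named 'Field' or 'Value'.

```python
def parse_show_output(text):
    """Parse `neutron ...-show`-style table output into a dict,
    tolerating both output formats (with or without a header row)."""
    attrs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith('|'):
            continue  # skip +----+ border lines and anything non-tabular
        cells = [c.strip() for c in line.strip('|').split('|')]
        if len(cells) != 2:
            continue  # ignore rows that are not simple key/value pairs
        key, value = cells
        if (key, value) == ('Field', 'Value'):
            continue  # header row emitted by some cliff/tablib versions
        attrs[key] = value
    return attrs
```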
+1
From: Clayton O'Neill
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Sunday, November 1, 2015 at 5:13 PM
To: "OpenStack Development
During our meeting [1] today we discussed our agenda [2] and action items for
the LDT working session [3] at the summit.
We’ve compiled a list of a few things folks can be working on before getting to
Tokyo to ensure this session is productive:
1. Neutron Segmented/Routed Networks:
*
videoconference tool to us? Absent any opinions, I’ll just
do a Google Hangout.
Thanks!
Mike
From: Kyle Mestery
Date: Tuesday, August 4, 2015 at 8:09 AM
To: Ryan Moats
Cc: Mike Dorman, OpenStack Development Mailing List (not for usage
questions), OpenStack Operators
Subject: Re: [Openstack
I hope we can keep this idea moving forward. I was disappointed to see
the spec abandoned.
Some of us from the large deployers group will be at the Ops Meetup. Will
there be any representation from Neutron there that we could discuss this
with further?
Thanks,
Mike
On 8/3/15, 12:27 PM, Carl
On 7/23/15, 8:54 AM, Carl Baldwin c...@ecbaldwin.net wrote:
On Thu, Jul 23, 2015 at 8:51 AM, Kevin Benton blak...@gmail.com wrote:
Or, migration scheduling would need to respect the constraint that a
port may be confined to a set of hosts. How can we assign a port to a
different network? The
On 7/23/15, 9:42 AM, Carl Baldwin c...@ecbaldwin.net wrote:
On Wed, Jul 22, 2015 at 3:21 PM, Kevin Benton blak...@gmail.com wrote:
The issue with the availability zone solution is that we now force
availability zones in Nova to be constrained to network configuration.
In
the L3 ToR/no
I have been meaning to ask you about this, so thanks for posting.
I like the approach. Definitely a lot cleaner than the somewhat hardcoded
dependencies and subscriptions that are in the modules now.
Do you envision that long term the docker/venv/whatever else implementation
(like you have in
I noticed in Kilo there’s a validation check in the console web socket proxies
to ensure the hostnames from the Origin and Host headers match. This was as a
result of CVE-2015-0259 (https://bugs.launchpad.net/nova/+bug/1409142).
Effectively it disabled cross-site web socket connections.
This
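For illustration only (the real Nova check lives in the console proxy code and differs in detail), a same-origin check of the shape described above, assuming it compares the host portion of the Origin header against the Host header:

```python
from urllib.parse import urlparse

def origin_allowed(headers):
    """Reject cross-site WebSocket connections by requiring the
    Origin header's host to match the Host header's host."""
    origin = headers.get('Origin')
    host = headers.get('Host')
    if not origin or not host:
        return False
    origin_host = urlparse(origin).netloc.split(':')[0]  # drop any port
    expected_host = host.split(':')[0]
    return origin_host == expected_host
```

Under a check like this, a page served from another site cannot open a console WebSocket, which is the "disabled cross-site web socket connections" effect noted above.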
” if they do nothing and don’t update
their composition module.
My vote is for #1.
Let’s plan to close the voting by next week’s IRC meeting, so we can come to a
final conclusion at that time.
Thanks,
Mike
From: Mike Dorman
Reply-To: OpenStack Development Mailing List (not for usage
As a follow up to https://review.openstack.org/#/c/194399/ and the meeting
discussion earlier today, I’ve determined that everybody (RDO, Ubuntu, Debian)
is packaging oslo.messaging 1.8.2 or 1.8.3 with the Kilo build. (This is also
the version we get on our internal Anvil-based build.) This
We’ve had this same problem, too, and I’d agree it should fail the Puppet run
rather than just passing. Would you mind writing up a bug report for this at
https://launchpad.net/puppet-openstacklib ?
I have this on my list of stuff to fix when we go to Kilo (soon), so if
somebody else doesn’t
I vote #2, with a smaller N.
We can always adjust this policy in the future if we find we have to manually
abandon too many old reviews.
From: Colleen Murphy
Reply-To:
puppet-openst...@puppetlabs.com
Date: Tuesday, June 2, 2015 at 12:39 PM
To: OpenStack
+1 Let’s do it.
From: Matt Fischer
Reply-To: OpenStack Development Mailing List (not for usage questions)
Date: Friday, May 29, 2015 at 1:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [puppet] Renaming the IRC channel to
#openstack-puppet
I
+1 I agree we should do this, etc., etc.
I don’t have a strong preference for #1 or #2, either. But I do think #1
is slightly more complicated from a deployer/operator perspective. It’s
another module I have to manage, pull in, etc. Granted this is a trivial
amount of incremental work.
I
We also run all masterless/puppet apply. And we just populate a bare
bones keystone.conf on any box that does not have keystone installed, but
Puppet needs to be able to create keystone resources.
Also agreed on avoiding puppetdb, for the same reasons.
(Something to note for those of us doing
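An illustrative, non-authoritative sketch of the bare-bones keystone.conf described above, with just enough for the Puppet keystone_* resource providers to authenticate from a box that has no Keystone service installed (values are placeholders, and the exact options the providers read may differ by release):

```ini
# /etc/keystone/keystone.conf on a node without Keystone installed;
# both values below are placeholders for illustration.
[DEFAULT]
admin_token = REPLACE_WITH_ADMIN_TOKEN
admin_endpoint = http://keystone.example.com:35357/
```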
][operators] How to specify Keystone v3
credentials?
On Wed, May 6, 2015 at 4:26 PM, Mike Dorman
mdor...@godaddy.com wrote:
We also run all masterless/puppet apply. And we just populate a bare
bones keystone.conf on any box that does not have keystone installed, but
Puppet
I feel somewhat responsible for this whole thing, since I landed the first
review that kicked off all this. We had gone to a Kilo oslo.messaging for
RMQ improvements, which is what spurred me to patch it in order to get rid
of the deprecation warnings. I should have actually validated it
I can report that we do use this option (‘global’ setting.) We have to
enforce name uniqueness for instances’ integration with some external
systems (namely AD and Spacewalk) which require unique naming.
However, we also do some external name validation which I think
effectively enforces
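For reference, the ‘global’ setting being described appears to be Nova’s server-name uniqueness scope; a hedged nova.conf sketch:

```ini
# nova.conf: enforce instance-name uniqueness across the whole
# deployment ("global") rather than only within a project.
[DEFAULT]
osapi_compute_unique_server_name_scope = global
```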