Did you change the nova_metadata_ip option to nova_metadata_host in
metadata_agent.ini? The former was deprecated several releases ago
and no longer functions as of Pike. The metadata service will throw
500 errors if you don't change it.
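For reference, a minimal sketch of the change in metadata_agent.ini (the host value is a placeholder, not taken from this thread):

```ini
[DEFAULT]
# Deprecated, no longer honored as of Pike:
# nova_metadata_ip = 10.0.0.10
# Replacement option:
nova_metadata_host = 10.0.0.10
nova_metadata_port = 8775
```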
On November 12, 2018 19:00:46 Ignazio Cassan
We are using multiple keystone domains - still can't reproduce this.
Do you happen to have a customized keystone policy.json?
Worst case, I would launch a devstack of your targeted release. If you
can't reproduce the issue there, you would at least know it's caused by a
nonstandard config rath
Do you have a liberal/custom policy.json that perhaps is causing unexpected
behavior? Can't seem to reproduce this.
On October 18, 2018 18:13:22 "Moore, Michael Dane (GSFC-720.0)[BUSINESS
INTEGRA, INC.]" wrote:
I have replicated this unexpected behavior in a Pike test environment, in
addit
On Thu, Aug 09, 2018 at 12:14:56PM -0500, Matt Riedemann wrote:
On 8/9/2018 6:03 AM, Chris Apsey wrote:
Exactly. And I agree, it seems like hw_architecture should dictate
which emulator is chosen, but as you mentioned it's currently not. I'm
not sure if this is a bug and it's supposed to work that way, or whether
it would be more of a feature request/suggestion for a later version.
The docs are kind of sparse in this area.
What are your thoughts? I can open a bug if you think the scope is
reasonable.
---
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
On 2018-08-08 06:40 PM, Matt Ried
rrently does not (it always chooses qemu-system-x86_64).
Does that make sense?
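For anyone trying to reproduce the behavior being discussed, the image property in question can be set and inspected like this (image name and architecture value are examples, not taken from this thread):

```shell
# Tag an image with a non-x86 architecture and check what nova does with it:
openstack image set --property hw_architecture=aarch64 my-arm-image
openstack image show my-arm-image -f value -c properties
```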
Chris
---
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
On 2018-08-08 03:07 PM, Matt Riedemann wrote:
On 8/7/2018 8:54 AM, Chris Apsey wrote:
We don't actually have any non-x86 hardware a
ere is some config option I'm missing.
Thanks!
---
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
On 2018-08-07 09:32 AM, Matt Riedemann wrote:
On 8/5/2018 1:43 PM, Chris Apsey wrote:
Trying to enable some alternate (non-x86) architectures on xenial +
queens. I can load
the correct binary, everything works as expected.
Am I missing something here, or is this a bug in nova-compute?
Thanks in advance,
--
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
___
OpenStack-operators mailing list
OpenStack
Ignazio,
Are your horizon instances in separate containers/VMs? If so, I'd highly
recommend completely wiping them and rebuilding from scratch since horizon
itself is stateless. I am not a fan of upgrades for reasons like this.
If that's not possible, a purge of the horizon packages on your
This is great. I would even go so far as to say the install docs should be
updated to capture this as the default; as far as I know there is no
negative impact when running in daemon mode, even on very small
deployments. I would imagine that there are operators out there who have
run into thi
I want to echo the effectiveness of this change - we had vif failures when
launching more than 50 or so cirros instances simultaneously, but moving to
daemon mode made this issue disappear and we've tested 5x that amount.
This has been the single biggest scalability improvement to date. This
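For the archives: "daemon mode" here most likely refers to neutron's rootwrap daemon. A sketch of the relevant settings, assuming the typical package-default paths (not confirmed from this thread):

```ini
# In /etc/neutron/neutron.conf (and the per-agent configs):
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
# Spawns one long-lived privileged helper instead of forking a new
# rootwrap process for every privileged command, which is what removes
# the per-command overhead at scale:
root_helper_daemon = sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
```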
o represent your
real gateway router. This will prevent anyone from being able to attach a
router using the subnet as a reference since the gateway_ip address will
already be in use.
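Kevin's suggestion can be sketched with the CLI; network, subnet, and address names below are placeholders:

```shell
# Reserve the external gateway address so tenants cannot claim it by
# attaching a router with only a subnet reference:
openstack port create --network provider-net \
  --fixed-ip subnet=provider-subnet,ip-address=203.0.113.1 \
  --description "reserved: physical gateway router" \
  reserved-gateway-port
```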
Cheers,
Kevin Benton
On Sat, Mar 17, 2018 at 4:10 PM, Chris Apsey wrote:
All,
Had a strange incident th
bnet). Does
neutron just expect issues like this to be handled by the physical
provider infrastructure (spoofing prevention, etc.)?
Thanks,
---
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
On 2018-02-20 10:27 PM, Chris Apsey wrote:
All,
Currently experiencing a sporadic issue with our keystone endpoints.
Throughout the day, keystone will just stop responding on both the
admin and public endpoints, which will cause all ser
121:35357 check inter 2000 rise 2 fall 5
listen keystone_public_internal_cluster
bind 10.50.10.0:5000 ssl crt /etc/letsencrypt/live/*/master.pem
bind 10.10.5.200:5000
balance roundrobin
option tcpka
option httpchk
option tcplog
server keystone-0 10.10.5.120:5
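The config fragment above is cut off; a minimal complete listen block of the same shape would look roughly like this (addresses, cert path, and check parameters are illustrative, not recovered from the thread):

```
listen keystone_public_internal_cluster
    bind 10.10.5.200:5000
    balance roundrobin
    option tcpka
    option httpchk
    option tcplog
    server keystone-0 10.10.5.120:5000 check inter 2000 rise 2 fall 5
    server keystone-1 10.10.5.121:5000 check inter 2000 rise 2 fall 5
```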
various hosts in
our cluster (the timeline matches). Has anyone else experienced similar
impacts or can suggest anything to try to lessen the impact?
---
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
On 2018-01-31 04:47 PM, Chris Apsey wrote:
That looks promising. I'll report back to confirm the solution.
Thanks!
---
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
On 2018-01-31 04:40 PM, Matt Riedemann wrote:
On 1/31/2018 3:16 PM, Chris Apsey wrote:
All,
Running in to a strange issue I haven't s
symptom, not a cause.
Currently running pike on Xenial.
---
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
tly running Pike.
Thanks in advance,
--
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
James,
Bug report submitted.
Thanks!
Chris
---
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
On 2017-06-26 09:28, James Page wrote:
Tweaking subject line a bit...
On Mon, 26 Jun 2017 at 02:27 Chris Apsey
wrote:
All,
Doing some testing prior to moving to Ocata from
s
--
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
-ready in this
particular role...
Thanks
--
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
that, it looks like that
option is deprecated anyway (at least in heat), although I have not
found any indication about what is supposed to 'replace' those options
going forward.
Ideas?
Thanks so much,
---
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
On 2017-02
one know why my response body is giving the wrong URL? Horizon
works perfectly fine with the https endpoints; it's just the command
line clients that are having issues.
Thanks in advance,
--
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
sure to specify the backend under [cache], as the
default is null.
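For the archives, the [cache] section being referred to looks roughly like this in keystone.conf (server addresses are placeholders):

```ini
[cache]
enabled = true
# The default backend is a no-op (dogpile.cache.null), so caching
# silently does nothing until an explicit backend is set:
backend = dogpile.cache.memcached
memcache_servers = 10.10.5.10:11211,10.10.5.11:11211
```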
Thanks for the assist!
Chris
---
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
On 2017-01-24 11:36, Saverio Proto wrote:
Did you try to restart memcached after changing the configuration to HA?
there are
George,
No dice; I set the config directives in [DEFAULT] or [cache]
individually as well as simultaneously, same behavior. I also restarted
memcached between every change just in case. No changes.
Thank you for the suggestion, though.
Chris
---
v/r
Chris Apsey
bitskr...@bitskrieg.net
have found reliably working. I'm on Ubuntu 16.04 LTS+Newton from UCA.
Ideas?
--
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net