Besides eliminating race conditions, we use host_subset_size in a special
case: when we have hardware of different capacities in a deployment.
Imagine a simple case: two compute hosts (48G vs 16G free RAM), with only
the RAM weigher enabled for nova-scheduler. If we launch
10 instances (1G RAM flavor) one by one, a
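A minimal nova.conf sketch of the option under discussion (values here are
illustrative; if I recall the rename correctly, the option is
scheduler_host_subset_size under [DEFAULT] in Ocata and earlier, and
host_subset_size under [filter_scheduler] from Pike on):

[filter_scheduler]
# choose randomly among the N best-weighed hosts instead of always the top
# one, so near-simultaneous or repeated identical requests spread out
host_subset_size = 2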
On 5/26/2017 12:42 PM, Alfredo Moralejo Alonso wrote:
According to
https://docs.openstack.org/developer/nova/cells.html#first-time-setup
you can specify database connection string with map_cell0 command:
nova-manage cell_v2 map_cell0 --database_connection \
mysql+pymysql://root:secretmysql@d
Dear UC Community,
Monday, May 29th is a holiday in the USA, so we will cancel our UC IRC
meeting next week.
Thank you in advance,
User Committee OpenStack
I've mentioned this elsewhere but writing here for posterity...
Making N to N+1 upgrades seamless and reliable is already challenging
today, which is one of the reasons why people aren't upgrading in the
first place.
Making N to N+1 upgrades work as well as possible already puts a great
strain on
Excerpts from Dan Smith's message of 2017-05-26 07:56:02 -0700:
> > As most of the upgrade issues center around database migrations, we
> > discussed some of the potential pitfalls at length. One approach was to
> > roll up all DB migrations into a single repository and run all upgrades
> > for a g
Folks,
At the Boston summit the oslo.messaging team decided to deprecate the
pika driver. It is planned to be removed from oslo.messaging in Rocky
[1].
If you're asking yourself "what's a pika??", read on.
The pika driver was intended to be a replacement for the default
rabbit driver. It was d
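For anyone checking whether they're affected (my example URLs, not from the
thread): the driver is chosen by the scheme of transport_url in each
service's configuration, so a pika deployment would look like the first
line below, while most deployments use the default rabbit driver:

[DEFAULT]
# deprecated pika driver:
transport_url = pika://openstack:secret@rabbithost:5672/
# default rabbit (kombu) driver:
transport_url = rabbit://openstack:secret@rabbithost:5672/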
According to
https://docs.openstack.org/developer/nova/cells.html#first-time-setup
you can specify database connection string with map_cell0 command:
nova-manage cell_v2 map_cell0 --database_connection \
mysql+pymysql://root:secretmysql@dbserver/nova_cell0?charset=utf8
Regards,
Alfredo
On Fr
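As a follow-up check (my suggestion, not from the thread), nova-manage can
list the cell mappings that were recorded:

nova-manage cell_v2 list_cells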
hi,
as all of you know, we moved all storage out of ceilometer so it handles
only data generation and normalisation. there seems to be very little
contribution to panko, which handles metadata indexing and event storage,
so given how little it's being adopted and how little resources are
being p
[resending to include the operators list]
The host_subset_size configuration option was added to the scheduler to help
eliminate race conditions when two requests for a similar VM would be processed
close together, since the scheduler’s algorithm would select the same host in
both cases, leadin
Just a reminder that the Tuesday after Memorial Day, bright and early (in
the US) is the Ops Meetup Planning session in #openstack-operators on
Freenode IRC at 14:00 UTC, i.e. 10 am Eastern US, 9 am Central US, 8 am
Mountain, and 7 am Pacific. Find your timezone time here:
https://www.timeanddate.com
Thanks
I guess you meant "nova db with cell0 appended" and not "nova_api db with
cell0 appended" as you wrote.
My use case is that I share the same Percona cluster to host the databases
of multiple OpenStack installations. So if it is -cell0
this is fine. If instead it is hard-coded to nova-cell0,
The whole cell thing tripped me up earlier this week. From what I understand,
it's hard-coded in the upgrade scripts to be the same as the nova_api db with
cell0 appended to the db name, but there is a patch to change this behavior
to match what the install docs say. So it looks like if you j
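For reference, my understanding of the default behavior (an assumption based
on reading the Ocata code, not something stated in this thread): when
--database_connection is omitted, the cell0 URL is derived from the main
[database] connection in nova.conf by appending _cell0 to the database name:

[database]
connection = mysql+pymysql://nova:secret@dbserver/nova
# derived cell0 default: mysql+pymysql://nova:secret@dbserver/nova_cell0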
Hi
I am reading the RDO installation guide for Ocata. The nova section [*]
explains how to create the nova_cell0 database, but I can't find how
to set the relevant connection string in the nova configuration file.
Any hints ?
Thanks, Massimo
[*]
https://docs.openstack.org/ocata/install
> As most of the upgrade issues center around database migrations, we
> discussed some of the potential pitfalls at length. One approach was to
> roll up all DB migrations into a single repository and run all upgrades
> for a given project in one step. Another was to simply have multiple
> python v
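If the truncated "python v" above refers to one virtualenv per release (my
assumption; the thread is cut off here), the idea might be sketched like
this, with versions and paths purely illustrative:

# run each release's schema migrations from its own virtualenv, in order
virtualenv /opt/nova-newton
/opt/nova-newton/bin/pip install 'nova>=14,<15'
/opt/nova-newton/bin/nova-manage db sync
virtualenv /opt/nova-ocata
/opt/nova-ocata/bin/pip install 'nova>=15,<16'
/opt/nova-ocata/bin/nova-manage db sync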
> We use provider networks to essentially take neutron-l3 out of the equation.
> Generally they are shared on all compute hosts, but usually there aren't huge
> numbers of computes.
Hello,
we have a datacenter that is completely L3, routed to the host.
To implement the provider networks we are using
hi,
since i've been referencing this to a few people already, i've done some
basic benchmarking on the upcoming gnocchi release which should be
released in next few weeks. here is a short deck highlighting a few updates:
https://www.slideshare.net/GordonChung/gnocchi-v4-preview
if you have tim
On 26/05/17 09:31 AM, Mathieu Gagné wrote:
>
> With Mitaka, I found that you need to run this command to get resource
> types created in Gnocchi:
>
> gnocchi-upgrade --create-legacy-resource-types
>
> With latest version, Ceilometer handles that part.
this ^... thanks Mathieu!
if you are using
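A quick way to verify that the resource types now exist (my suggestion,
assuming the gnocchi client is installed):

gnocchi resource-type list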
On Fri, May 26, 2017 at 9:27 AM, wrote:
> Thank you Gordon !
>
> Now I'm encountering this kind of error:
>
> [...]
>
> 2017-05-26 13:21:28.393 1404925 ERROR ceilometer.dispatcher.gnocchi [-]
> Resource type instance does not exist (HTTP 404)
> 2017-05-26 13:21:28.400 1404925 ERROR ce
Thank you Gordon!
Now I'm encountering this kind of error:
2017-05-26 13:19:27.853 1404925 DEBUG oslo_messaging._drivers.amqpdriver [-]
received message msg_id: None reply to None __call__
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:201
2017-05-26 13:19:28.026 1404
This is a follow up to an email from Melanie Witt [1] calling attention
to a high severity performance regression identified in Newton. That
change is merged and the fix will be in the Ocata 15.0.5 release [2] and
Newton 14.0.7 release [3].
Those releases will also contain a fix for a bug wher
Sorry about the long delay.
Can you dump the OVS flows before and after the outage? This will let us
know if the flows Neutron set up are getting wiped out.
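For example (my commands, not from the thread; br-int is the usual name of
the integration bridge):

ovs-ofctl dump-flows br-int > flows-before.txt
# ...reproduce the outage...
ovs-ofctl dump-flows br-int > flows-after.txt
diff flows-before.txt flows-after.txt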
On Tue, May 2, 2017 at 12:26 PM, Gustavo Randich
wrote:
> Hi Kevin, here is some information about this issue:
>
> - if the network outage l
+1 on not forcing Operators to transition to something new twice, even if
we did go for option 3.
Do we have an agreed non-disruptive upgrade path mapped out yet (for any
of the options)? We spoke about fallback rules you pass in, but with a
warning, to give us a smoother transition. I think that's my
Warning: wall of text incoming :-)
On 26/05/2017 03:55, Carter, Kevin wrote:
> If you've taken on an adventure like this how did you approach
> it? Did it work? Any known issues, gotchas, or things folks should be
> generally aware of?
We're fresh out of a Juno-to-Mitaka upgrade. It worked, but i
Hi,
thanks for bringing this up for discussion on the Operators list.
Options 1 and 2 are not complementary but completely different.
So, considering "Option 2" and the goal to target it for Queens, I would
prefer not going into a migration path in Pike and then again in Queens.
Belmiro
On Fri, May 2