Hi,
with Ocata upgrade we decided to run local placements (one service per
cellV1) because we were nervous about possible scalability issues but
especially the increase of the scheduling time. Fortunately, this has now been
addressed by the placement-req-filter work.
We started slowly to aggregate
Hi Jonathan,
this was introduced in Pike.
Belmiro
On Tue, 16 Jan 2018 at 22:48, Jonathan Proulx wrote:
> On Tue, Jan 16, 2018 at 03:49:25PM -0500, Jonathan Proulx wrote:
> :On Tue, Jan 16, 2018 at 08:42:00PM +, Tim Bell wrote:
> ::If you want to hide the VM signature,
Hi,
can we add this meeting into the official IRC meetings page?
https://wiki.openstack.org/wiki/Meetings
http://eavesdrop.openstack.org/
thanks,
Belmiro
On Tue, 24 Oct 2017 at 15:51, Chris Morgan wrote:
> Next meeting in about 10 minutes from now
>
> Chris
>
> --
>
> Hope this helps some,
>
> Thanks,
> Paul Browne
>
> [1] https://pastebin.com/JshWi6i3
> [2] https://pastebin.com/5b8cAanP
> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1495171
>
> On 9 October 2017 at 04:59, Belmiro Moreira <moreira.belmiro.email.lists@
Hi,
the CPU model that we expose to the guest VMs varies depending on the
compute node use case.
We use "cpu_mode=host-passthrough" for the compute nodes that run batch
processing VMs and "cpu_mode=host-model" for the compute nodes for service
VMs. The reason to have "cpu_mode=host-model" is
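As a rough illustration (using the standard nova.conf libvirt options; adjust to your deployment), the two setups look like:

```ini
# Batch-processing compute nodes: expose the host CPU directly to the guest
[libvirt]
cpu_mode = host-passthrough
```

```ini
# Service-VM compute nodes: a stable named model, friendlier to live migration
[libvirt]
cpu_mode = host-model
```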
In our cloud rebuild is the only way for a user to keep the same IP.
Unfortunately, we don't offer floating IPs, yet.
Also, we use the user_data to bootstrap some actions in new instances
(puppet, ...).
Considering all the use cases for rebuild, it would be great if the
user_data could be updated at
Hi Matt,
thanks for these great summaries.
I didn't find any mention of nested quotas.
Was it discussed at the PTG? And what can we expect for Queens?
thanks,
Belmiro
CERN
On Mon, Sep 18, 2017 at 11:58 PM, Matt Riedemann
wrote:
> There was a whole lot of other stuff
This option is useful in large deployments.
Our scheduler strategy is to "pack"; however, we are not interested in this
strategy per individual compute node but per sets of them. One of the
advantages is that when a user creates consecutive instances in the same
AVZ it's unlikely that they will be
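A toy sketch of what "pack per sets of nodes" could mean (this is not nova's actual weigher; the "aggregate" and "free_ram" fields and the host names are invented for illustration):

```python
# Toy illustration of packing per group of compute nodes rather than per node.
# "aggregate" and "free_ram" are invented fields, not nova's real data model.
def pick_host(hosts):
    groups = {}
    for h in hosts:
        groups.setdefault(h["aggregate"], []).append(h)
    # Pack: prefer the most-used group (least total free RAM)...
    group = min(groups.values(), key=lambda hs: sum(h["free_ram"] for h in hs))
    # ...then the most-used host inside that group, so consecutive
    # instances tend to land in the same set of nodes.
    return min(group, key=lambda h: h["free_ram"])

hosts = [
    {"name": "a1", "aggregate": "A", "free_ram": 10},
    {"name": "a2", "aggregate": "A", "free_ram": 30},
    {"name": "b1", "aggregate": "B", "free_ram": 50},
]
```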
Hi,
thanks for bringing this into discussion in the Operators list.
Options 1 and 2 are not complementary but completely different.
So, considering "Option 2" and the goal of targeting it for Queens, I would
prefer not to go through a migration path in
Pike and then again in Queens.
Belmiro
On Fri, May
Hi David,
AVZs are basically aggregates.
In cells_v2 aggregates are defined in the cell_api, so it will be possible
to have
multiple AVZs per cell and AVZs that spread between different cells.
Belmiro
On Wed, May 24, 2017 at 5:14 AM, David Medberry
wrote:
> Hi Devs and
Hi Joe,
congrats.
Can you also make your script changes for IPv6 available?
The more the better for any site that is still working on the migration,
like us :)
thanks,
Belmiro
On Sat, May 20, 2017 at 6:51 PM, Joe Topjian wrote:
> Hi all,
>
> There probably aren't a lot of
Hi Saverio,
when not using "max_count" the message "Running batches of 50 until complete"
is always printed.
If you are not getting any errors and there is no more output, the migrations should
have finished.
Unfortunately, there is no "All Done" message when the
online_data_migrations
finish.
You
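The behaviour described above can be pictured with a small loop (a simplification for illustration, not the real nova-manage code):

```python
# Simplified picture of "Running batches of 50 until complete":
# migrate(batch) returns how many rows it migrated this pass; once a pass
# migrates nothing, the command simply stops -- there is no final banner.
def run_online_migrations(migrate, batch_size=50):
    total = 0
    while True:
        done = migrate(batch_size)
        if done == 0:
            return total
        total += done

# Fake migration source with 120 rows left, for demonstration only.
remaining = {"rows": 120}
def migrate(batch):
    done = min(batch, remaining["rows"])
    remaining["rows"] -= done
    return done
```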
+1
We use "block-migration" and we needed to disable this timeout.
Belmiro
CERN
On Thu, Feb 9, 2017 at 5:29 PM, Matt Riedemann wrote:
> This is just a heads up to anyone running with this since Liberty, there
> is a patch [1] that will go into Ocata which deprecates the
>
re Cinder with the LVM driver to
> use that pool of space to present volumes to your compute instances.
>
> Thanks,
> Sean
>
> On Thu, Dec 08, 2016 at 07:46:35PM +0100, Belmiro Moreira wrote:
> > Hi,
> >
> > we have a set of disk servers (JBOD) that we would like to int
Hi,
we have a set of disk servers (JBOD) that we would like to integrate into
our cloud to run applications like Hadoop and Spark.
Using file-backed disks for storage and a huge "/var/lib/nova" is not an option
for these use cases so we would like to expose the local drives directly to
the VMs as
How many nova-schedulers are you running?
You can hit this issue when multiple nova-schedulers select the same
compute node for different instances.
Belmiro
On Wed, Nov 30, 2016 at 3:56 PM, Massimo Sgaravatto <
massimo.sgarava...@gmail.com> wrote:
> Hi all
>
> I have a problem with scheduling
Hi,
we wrote this blog post a year ago but it can still be useful depending on
the OpenStack version that you are running.
http://openstack-in-production.blogspot.ch/2015/05/purging-nova-databases-in-cell.html
Belmiro
On Thu, Jun 23, 2016 at 3:32 PM, Nick Jones
Hi Nikhil,
I'm interested in this discussion.
Initially you were proposing Thursday June 9th, 2016 at 2000UTC.
Are you suggesting changing the date as well? Because in the new timeanddate
suggestions it is 6/7 of June.
Belmiro
On Tue, May 31, 2016 at 6:13 PM, Nikhil Komawar
When a service starts with the log level configured to debug, you can see
which options/values it is using.
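For example, in nova.conf (the same knob exists for the other services):

```ini
[DEFAULT]
# With debug enabled, the service dumps every config option and its
# effective value to the log at startup.
debug = True
```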
Belmiro
On Sat, May 7, 2016 at 10:17 PM, Sergio Cuellar Valdes <
scuell...@kionetworks.com> wrote:
> Hi everybody,
>
> How can you display all the values that has nova or other service and
Hi,
thanks Carl for info about the DHCP plans.
Our DHCP concern is because currently the DHCP agent needs to be assigned
to a network and then it creates a port for each subnet.
In our infrastructure we only consider a network with several hundred
subnets.
By default the DHCP agent runs in the
Hi,
for the Ops Meetup in Manchester we have the following etherpad:
https://etherpad.openstack.org/p/MAN-ops-meetup but is there any schedule
already available for the sessions?
thanks,
Belmiro
___
OpenStack-operators mailing list
IMHO it's a great way to fix the URI problem.
+1
Belmiro
On Fri, Jan 8, 2016 at 3:23 PM, Sylvain Bauza wrote:
>
>
> Le 08/01/2016 15:10, Andrew Laski a écrit :
>
>> On 01/08/16 at 12:43pm, John Garbutt wrote:
>>
>>> On 7 January 2016 at 19:59, Matt Riedemann
Hi Mathieu,
thanks for the related bugs.
But I'm observing this on 2015.1.1.
On Sun, Nov 22, 2015 at 12:58 AM, Mathieu Gagné <mga...@internap.com> wrote:
> On 2015-11-21 4:47 PM, Belmiro Moreira wrote:
> > Hi,
> > We are about to upgrade nova to kilo using cells and we n
_specs": {}, "swap": 0, "rxtx_factor": 1.0, "flavorid": "2",
"vcpu_weight": null, "id": 5}, "nova_object.namespace": "nova"}, "cur":
{"nova_object.version": "1.1", "nova_object.c
Hi,
We are about to upgrade nova to kilo using cells and we noticed
the resize/migrate functionality is not working properly.
The instance is correctly resized/migrated but fails to
“confirm resize” with the following trace:
2015-11-21 22:40:49.804 26786 ERROR nova.api.openstack.wsgi
Hi Saverio,
we always upgrade one component at a time.
Cinder was one of the first components that we upgraded to kilo,
meaning that other components (glance, nova, ...) were running Juno.
We didn't have any problem with this setup.
Belmiro
CERN
On Tue, Nov 17, 2015 at 6:01 PM, Saverio Proto
Hi,
we are still running nova Juno and I don't see this performance issue.
(I can comment on Kilo next week).
Per cell, we have a node that runs conductor + other control plane services.
The number of conductor workers varies between 16 and 48.
We try to not have more than 200 compute nodes
+1
Belmiro
On Thursday, 29 October 2015, Kris G. Lindgren
wrote:
> We seem to have enough interest… so meeting time will be at 10am in the
> Prince room (if we get an actual room I will send an update).
>
> Does anyone have any ideas about what they want to talk
Hi,
just added our use-cases/patches to the etherpad.
Belmiro
On Fri, Jun 19, 2015 at 11:09 PM, Kris G. Lindgren klindg...@godaddy.com
wrote:
Mike added our use case to the etherpad [1] today. I talked it over
with Carl Baldwin and he seemed ok with the format. If you guys want to
add
Hi,
I would like to raise your attention for the bug
https://bugs.launchpad.net/nova/+bug/1461777
since it can impact the efficiency of your cloud.
It affects Juno and Kilo deployments.
Belmiro
Hi,
I just posted in our operations blog how CERN is dealing with quotas
synchronization problem.
http://openstack-in-production.blogspot.fr/2015/03/nova-quota-usage-synchronization.html
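The core idea (recompute real usage and compare it with the cached quota counters) can be sketched like this; the data shapes are invented for illustration and are not nova's quota_usages schema:

```python
# Toy sketch of quota-usage resynchronization: recompute actual usage from
# the instances themselves and flag projects whose cached counter drifted.
def stale_projects(instances, cached_usage):
    # instances: list of (project, vcpus); cached_usage: {project: vcpus}
    actual = {}
    for project, vcpus in instances:
        actual[project] = actual.get(project, 0) + vcpus
    return {p for p, cached in cached_usage.items()
            if actual.get(p, 0) != cached}

instances = [("p1", 4), ("p1", 2), ("p2", 8)]
cached = {"p1": 6, "p2": 10}  # p2's counter has drifted
```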
Hope it helps,
cheers,
Belmiro
On Sat, Mar 21, 2015 at 12:55 AM, Sam Morrison sorri...@gmail.com wrote:
I completely agree with Tim and Daniel.
Also, deprecating nova EC2 API without having the community engaged with
the new stackforge “EC2 standalone service” can lead to no EC2 support at
all.
On Thu, Jan 29, 2015 at 4:46 AM, Saju M sajup...@gmail.com wrote:
I think, new EC2 API also uses EC2
Hi,
we had similar issues.
In our case, sometimes (not really a pattern here!) nova-compute didn't
consume messages even if everything was apparently happy.
We started monitoring the queues size and restarting nova-compute.
We are still using python-oslo-messaging-1.3.0.2, however the problem
Hi,
as operators I would like to have your comments/suggestions on:
https://review.openstack.org/#/c/136645/1
With a large number of nodes, several services are disabled for various
reasons (in our case mainly hardware interventions).
To help operations we use the disable reason as fast
Hi,
our nova DBs are growing rapidly and it's time to start pruning them...
I'm trying the "archive deleted rows" feature, however it is not working and
I'm getting the following
warning in the logs: IntegrityError detected when archiving table
Searching for this problem I found the bug
Hi,
I'm observing exactly the same problem.
But in my case it is happening every time a VM is deleted.
I'm using icehouse.
Any idea?
regards,
Belmiro
On Fri, Jul 18, 2014 at 10:22 AM, Alex Leonhardt aleonhardt...@gmail.com
wrote:
Hi All,
I keep seeing this in the logs when deleting an