Thanks Eric for the patch.
This will help keep placement calls under control.
Belmiro
On Sun, Nov 4, 2018 at 1:01 PM Jay Pipes wrote:
> On 11/02/2018 03:22 PM, Eric Fried wrote:
> > All-
> >
> > Based on a (long) discussion yesterday [1] I have put up a patch [2]
> > whereby you can set
Hi,
this looks reasonable to me but I would prefer B.
In this case the operator can configure the hard limit.
I don't think we need more granularity or to expose it using the API.
Belmiro
On Fri, Jun 8, 2018 at 3:46 PM Dan Smith wrote:
> > Some ideas that have been discussed so far include:
>
> FYI,
Hi,
with the Ocata upgrade we decided to run local placements (one service per
cellV1) because we were nervous about possible scalability issues but
especially the increase in scheduling time. Fortunately, this has now been
addressed by the placement-req-filter work.
We started slowly to aggregate
Hi Jonathan,
this was introduced in Pike.
Belmiro
On Tue, 16 Jan 2018 at 22:48, Jonathan Proulx wrote:
> On Tue, Jan 16, 2018 at 03:49:25PM -0500, Jonathan Proulx wrote:
> :On Tue, Jan 16, 2018 at 08:42:00PM +, Tim Bell wrote:
> ::If you want to hide the VM signature,
Hi Ondrej,
the following spec tries to address the issue that you described.
https://review.openstack.org/#/c/508133/
Let me know if you have comments/suggestions.
cheers,
Belmiro
On Fri, Jan 12, 2018 at 2:31 PM, Ondrej Vaško
wrote:
> Hello guys,
>
> I am dealing with
Hi,
just started to look at what's available for Functions as a Service in
OpenStack.
Thanks for the demo.
What's the difference between "qinling" and "picasso"?
https://github.com/openstack/qinling
https://github.com/openstack/picasso
cheers,
Belmiro
On Mon, Nov 13, 2017 at 11:03 AM, Lingxian
Hi,
can we add this meeting into the official IRC meetings page?
https://wiki.openstack.org/wiki/Meetings
http://eavesdrop.openstack.org/
thanks,
Belmiro
On Tue, 24 Oct 2017 at 15:51, Chris Morgan wrote:
> Next meeting in about 10 minutes from now
>
> Chris
>
> --
>
> Hope this helps some,
>
> Thanks,
> Paul Browne
>
> [1] https://pastebin.com/JshWi6i3
> [2] https://pastebin.com/5b8cAanP
> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1495171
>
> On 9 October 2017 at 04:59, Belmiro Moreira <moreira.belmiro.email.lists@
Hi,
the CPU model that we expose to the guest VMs varies depending on the
compute node use case.
We use "cpu_mode=host-passthrough" for the compute nodes that run batch
processing VMs and "cpu_mode=host-model" for the compute nodes for service
VMs. The reason to have "cpu_mode=host-model" is
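For reference, the two setups above would look roughly like this in nova.conf ("cpu_mode" under "[libvirt]" is the real nova option; splitting the values by compute-node role is our deployment choice, not a nova default):

```ini
# nova.conf on compute nodes running batch-processing VMs
[libvirt]
cpu_mode = host-passthrough

# nova.conf on compute nodes running service VMs
[libvirt]
cpu_mode = host-model
```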
In our cloud, rebuild is the only way for a user to keep the same IP.
Unfortunately, we don't offer floating IPs, yet.
Also, we use the user_data to bootstrap some actions in new instances
(puppet, ...).
Considering all the use cases for rebuild, it would be great if the
user_data could be updated at
wrote:
> On 27 September 2017 at 22:40, Belmiro Moreira
> <moreira.belmiro.email.li...@gmail.com> wrote:
> > In the past we used the tabs but latest Horizon versions use the
> visibility
> > column/search instead.
> > The issue is that we would like the old images to cont
Cheers,
Belmiro
On Wed, 27 Sep 2017 at 00:25, Blair Bethwaite <blair.bethwa...@gmail.com>
wrote:
> Hi Belmiro,
>
>
> On 20 Sep. 2017 7:58 pm, "Belmiro Moreira" <
> moreira.belmiro.email.li...@gmail.com> wrote:
> > Discovering the latest image relea
g/abusing of the
visibility "community".
Changing the visibility of old image releases to "community" will hide
them from the default
"image-list", but they will remain discoverable and available.
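As a toy model of that behavior (purely illustrative, not Glance code): images switched to "community" drop out of the default listing, yet a direct lookup by ID still works.

```python
# Toy model (not Glance code) of the "community" visibility behavior
# described above: hidden from the default image-list, but still
# discoverable and retrievable when addressed directly.
def default_image_list(images, project_id):
    # community images are omitted from the default listing
    # unless the caller owns them
    return [img for img in images
            if img["visibility"] != "community" or img["owner"] == project_id]

def get_image(images, image_id):
    # ...but a direct lookup by ID still succeeds
    for img in images:
        if img["id"] == image_id:
            return img
    return None
```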
Belmiro
On Tue, Sep 19, 2017 at 8:24 PM, Brian Rosmaita <ros
Hi Matt,
thanks for these great summaries.
I didn't find any mention of nested quotas.
Was it discussed at the PTG, and what can we expect for Queens?
thanks,
Belmiro
CERN
On Mon, Sep 18, 2017 at 11:58 PM, Matt Riedemann
wrote:
> There was a whole lot of other stuff
Hi Brian,
Thanks for the sessions summaries.
We are really interested in the image lifecycle support.
Can you elaborate on how searchlight would help solve this problem?
thanks,
Belmiro
CERN
On Fri, Sep 15, 2017 at 4:46 PM, Brian Rosmaita
wrote:
> For those who
This option is useful in large deployments.
Our scheduler strategy is to "pack", however we are not interested in this
strategy per individual compute node but per sets of them. One of the
advantages is that when a user creates consecutive instances in the same
AVZ it's unlikely that they will be
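A minimal sketch of that idea (illustrative only, not nova's actual weigher API): pack onto the busiest *set* of hosts instead of the busiest individual host.

```python
# Illustrative sketch (not nova code): "pack" per set of compute nodes
# rather than per individual node. The candidate host whose *group*
# already holds the most instances wins.
def pick_host(hosts, group_of, instances_in_group):
    """hosts: candidate host names; group_of: host -> group name;
    instances_in_group: group name -> instances already placed there."""
    return max(hosts, key=lambda h: instances_in_group.get(group_of[h], 0))
```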
Hi,
thanks for bringing this into discussion in the Operators list.
Options 1 and 2 are not complementary but completely different.
So, considering "Option 2" and the goal of targeting it for Queens, I would
prefer not to go through a migration path in
Pike and then again in Queens.
Belmiro
On Fri, May
Hi David,
AVZs are basically aggregates.
In cells_v2 aggregates are defined in the cell_api, so it will be possible
to have
multiple AVZs per cell and AVZs that spread between different cells.
Belmiro
On Wed, May 24, 2017 at 5:14 AM, David Medberry
wrote:
> Hi Devs and
Hi Matt,
if by "incomplete results" you mean retrieving the instance UUIDs (in the
cell_api) for the cells that failed to answer,
I would prefer incomplete results to a failed operation.
Belmiro
On Mon, May 22, 2017 at 11:39 AM, Matthew Booth wrote:
> On 19 May 2017
Hi Joe,
congrats.
Can you also make your script changes for IPv6 available?
The more the better for any site that is still working on the migration,
like us :)
thanks,
Belmiro
On Sat, May 20, 2017 at 6:51 PM, Joe Topjian wrote:
> Hi all,
>
> There probably aren't a lot of
Hi Georgios,
probably the problem is related to quotas. For some reason they were not
synced when you deleted the instances.
To confirm this, you can increase your quota and then check that you can
create new instances.
If you are using Newton you can use "nova-manage project
quota_usage_refresh" to
Hi Saverio,
when "max_count" is not used, the message "Running batches of 50 until complete"
is always printed.
If you are not getting any errors and there is no more output, the migrations
should have finished.
Unfortunately, there is no "All Done" message when the online_data_migrations
finish.
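The loop behaves roughly like this (an illustrative sketch, not the actual nova-manage implementation):

```python
# Sketch of the "Running batches of 50 until complete" behavior described
# above (illustrative, not the actual nova-manage code).
def run_online_data_migrations(migrate_batch, batch_size=50):
    """migrate_batch(n) migrates up to n rows and returns how many it did."""
    total = 0
    while True:
        done = migrate_batch(batch_size)
        total += done
        if done == 0:
            # a zero-row batch is the only "finished" signal;
            # no explicit "All Done" message is printed
            return total
```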
You
+1
We use "block-migration" and we needed to disable this timeout.
Belmiro
CERN
On Thu, Feb 9, 2017 at 5:29 PM, Matt Riedemann wrote:
> This is just a heads up to anyone running with this since Liberty, there
> is a patch [1] that will go into Ocata which deprecates the
>
re Cinder with the LVM driver to
> use that pool of space to present volumes to your compute instances.
>
> Thanks,
> Sean
>
> On Thu, Dec 08, 2016 at 07:46:35PM +0100, Belmiro Moreira wrote:
> > Hi,
> >
> > we have a set of disk servers (JBOD) that we would like to int
Hi,
we have a set of disk servers (JBOD) that we would like to integrate into
our cloud to run applications like Hadoop and Spark.
Using file disks for storage and a huge "/var/lib/nova" is not an option
for these use cases so we would like to expose the local drives directly to
the VMs as
How many nova-schedulers are you running?
You can hit this issue when multiple nova-schedulers select the same
compute node for different instances.
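The race is easy to see in miniature (toy code, no real scheduler involved): two schedulers working from the same stale view of host resources pick the same node, and the second claim can then fail on the host.

```python
# Toy illustration of the race described above: two nova-schedulers share
# a stale view of free resources, so both can pick the same compute node.
def schedule(free_ram_by_host):
    # each scheduler independently picks the host with the most free RAM
    return max(free_ram_by_host, key=free_ram_by_host.get)

stale_view = {"compute1": 8192, "compute2": 4096}
first_pick = schedule(stale_view)    # scheduler A
second_pick = schedule(stale_view)   # scheduler B, same stale view
# both picked compute1; the losing instance is rescheduled or fails
```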
Belmiro
On Wed, Nov 30, 2016 at 3:56 PM, Massimo Sgaravatto <
massimo.sgarava...@gmail.com> wrote:
> Hi all
>
> I have a problem with scheduling
Hi,
we wrote this blog post a year ago, but it can still be useful depending on
the
OpenStack version that you are running.
http://openstack-in-production.blogspot.ch/2015/05/purging-nova-databases-in-cell.html
Belmiro
On Thu, Jun 23, 2016 at 3:32 PM, Nick Jones
>
>
>
> A vote of +1, 0, -1 on these times would help long way.
>
>
> On 5/31/16 4:35 PM, Belmiro Moreira wrote:
> > Hi Nikhil,
> > I'm interested in this discussion.
> >
> > Initially you were proposing Thursday June 9th, 2016 at
Hi Nikhil,
I'm interested in this discussion.
Initially you were proposing Thursday June 9th, 2016 at 2000UTC.
Are you suggesting changing the date as well? Because in the new timeanddate
suggestions it is 6/7 of June.
Belmiro
On Tue, May 31, 2016 at 6:13 PM, Nikhil Komawar
When a service starts with the log level set to debug, you can see
what options/values it is using.
Belmiro
On Sat, May 7, 2016 at 10:17 PM, Sergio Cuellar Valdes <
scuell...@kionetworks.com> wrote:
> Hi everybody,
>
> How can you display all the values that has nova or other service and
Hi,
thanks Carl for the info about the DHCP plans.
Our DHCP concern is that currently the DHCP agent needs to be assigned
to a network, and then it creates a port for each subnet.
In our infrastructure we only consider a network with several hundred
subnets.
By default the DHCP agent runs in the
Hi,
for the Ops Meetup in Manchester we have the following etherpad:
https://etherpad.openstack.org/p/MAN-ops-meetup but is there any schedule
already available for the sessions?
thanks,
Belmiro
___
OpenStack-operators mailing list
IMHO it's a great way to fix the URI problem.
+1
Belmiro
On Fri, Jan 8, 2016 at 3:23 PM, Sylvain Bauza wrote:
>
>
> Le 08/01/2016 15:10, Andrew Laski a écrit :
>
>> On 01/08/16 at 12:43pm, John Garbutt wrote:
>>
>>> On 7 January 2016 at 19:59, Matt Riedemann
Hi Mathieu,
thanks for the related bugs.
But I'm observing this on 2015.1.1.
On Sun, Nov 22, 2015 at 12:58 AM, Mathieu Gagné <mga...@internap.com> wrote:
> On 2015-11-21 4:47 PM, Belmiro Moreira wrote:
> > Hi,
> > We are about to upgrade nova to kilo using cells and we n
_specs": {}, "swap": 0, "rxtx_factor": 1.0, "flavorid": "2",
"vcpu_weight": null, "id": 5}, "nova_object.namespace": "nova"}, "cur":
{"nova_object.version": "1.1", "nova_object.c
Hi,
We are about to upgrade nova to kilo using cells and we noticed
the resize/migrate functionality is not working properly.
The instance is correctly resized/migrated but fails to
“confirm resize” with the following trace:
2015-11-21 22:40:49.804 26786 ERROR nova.api.openstack.wsgi
With my operator hat on, I think the release notes are the right place
for these changes.
Belmiro
On Wed, Nov 18, 2015 at 4:58 PM, Alexis Lee wrote:
> Sylvain Bauza said on Wed, Nov 18, 2015 at 04:48:50PM +0100:
> > >This is just for the case of "we're going to change the default
Hi Saverio,
we always upgrade one component at a time.
Cinder was one of the first components that we upgraded to kilo,
meaning that other components (glance, nova, ...) were running Juno.
We didn't have any problem with this setup.
Belmiro
CERN
On Tue, Nov 17, 2015 at 6:01 PM, Saverio Proto
Hi,
we are still running nova Juno and I don't see this performance issue.
(I can comment on Kilo next week).
Per cell, we have a node that runs conductor + other control plane services.
The number of conductor workers can change between 16 to 48.
We try to not have more than 200 compute nodes
+1
Belmiro
On Thursday, 29 October 2015, Kris G. Lindgren
wrote:
> We seem to have enough interest… so meeting time will be at 10am in the
> Prince room (if we get an actual room I will send an update).
>
> Does anyone have any ideas about what they want to talk
Hi,
just added our use-cases/patches to the etherpad.
Belmiro
On Fri, Jun 19, 2015 at 11:09 PM, Kris G. Lindgren klindg...@godaddy.com
wrote:
Mike added our use case to the etherpad [1] today. I talked it over
with Carl Baldwin and he seemed ok with the format. If you guys want to
add
Hi,
I would like to raise your attention for the bug
https://bugs.launchpad.net/nova/+bug/1461777
since it can impact the efficiency of your cloud.
It affects Juno and Kilo deployments.
Belmiro
Hi,
I just posted on our operations blog how CERN is dealing with the quota
synchronization problem.
http://openstack-in-production.blogspot.fr/2015/03/nova-quota-usage-synchronization.html
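The core of the approach, heavily simplified (field names and schema here are illustrative, not nova's): recompute the real per-project usage from instance records and use it to correct any stale quota_usages counters.

```python
# Heavily simplified sketch of quota-usage resynchronization: derive the
# actual per-project instance count from instance records and overwrite
# stale counters (field names are illustrative, not nova's schema).
def resync_instance_usage(instances, quota_usages):
    actual = {}
    for inst in instances:
        if not inst["deleted"]:
            pid = inst["project_id"]
            actual[pid] = actual.get(pid, 0) + 1
    # overwrite recorded usage wherever it drifted from reality
    return {project: actual.get(project, 0) for project in quota_usages}
```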
Hope it helps,
cheers,
Belmiro
On Sat, Mar 21, 2015 at 12:55 AM, Sam Morrison sorri...@gmail.com wrote:
Hi,
in nova there are several options that can be defined in the flavor (extra
specs)
and/or as image properties.
This is great; however, to deploy some of these options we will need to
offer the same image with different properties or let the users upload the
same image with the right properties.
It
I completely agree with Tim and Daniel.
Also, deprecating the nova EC2 API without having the community engaged with
the new stackforge "EC2 standalone service" can lead to no EC2 support at
all.
On Thu, Jan 29, 2015 at 4:46 AM, Saju M sajup...@gmail.com wrote:
I think, new EC2 API also uses EC2
Hi,
nova-conductor starts multiple processes as well.
Belmiro
On Wednesday, January 28, 2015, Johannes Erdfelt johan...@erdfelt.com
wrote:
On Wed, Jan 28, 2015, murali reddy muralimmre...@gmail.com
wrote:
On hosts with multi-core processors, it does not seem optimal to run a
Hi,
we had similar issues.
In our case, sometimes (not really a pattern here!) nova-compute didn't
consume messages even if everything was apparently happy.
We started monitoring the queues size and restarting nova-compute.
We are still using python-oslo-messaging-1.3.0.2, however the problem
Hi,
as operators I would like to have your comments/suggestions on:
https://review.openstack.org/#/c/136645/1
With a large number of nodes, several services are disabled for various
reasons (in our case, mainly hardware interventions).
To help operations we use the disable reason as fast
Hi Anita,
I'm available Tuesday and Wednesday (0800-1600 UTC), Friday (0800-1800 UTC).
Belmiro
On Tuesday, December 30, 2014, Oleg Bondarev obonda...@mirantis.com wrote:
On Tue, Dec 30, 2014 at 12:56 AM, Anita Kuno ante...@anteaya.info
Hi Vish,
do you have more info about the libvirt deadlocks that you observed?
Maybe I'm observing the same on SLC6 where I can't even kill libvirtd
process.
Belmiro
On Tue, Dec 16, 2014 at 12:01 AM, Vishvananda Ishaya vishvana...@gmail.com
wrote:
I have seen deadlocks in libvirt that could
Hi,
my experience is that soft delete is important to keep a record of deleted
instances and their characteristics.
In fact, in my organization we are obliged to keep these records for several
months.
However, it would be nice if after a few months we were able to purge the
DB with a nova tool.
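The retention policy could be sketched like this (illustrative only; the real tool would operate on the DB, e.g. via nova-manage, rather than in application code):

```python
from datetime import datetime, timedelta

# Illustrative sketch of the retention policy described above: keep
# soft-deleted records for a fixed window, then purge them.
def rows_to_keep(rows, retention_days=180, now=None):
    """rows: dicts with a 'deleted_at' datetime (None = not deleted)."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    return [r for r in rows
            if r["deleted_at"] is None or r["deleted_at"] > cutoff]
```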
In the
Fine for me.
On Wed, Nov 12, 2014 at 8:51 AM, Sylvain Bauza sba...@redhat.com wrote:
Le 11/11/2014 23:07, Matt Riedemann a écrit :
On 11/11/2014 3:51 PM, Matt Riedemann wrote:
On 11/11/2014 3:50 PM, Matt Riedemann wrote:
On 11/11/2014 3:04 PM, Andrew Laski wrote:
We had a great
Hi,
to help the discussion,
a small compilation about the bugs and previous attempts to fix the
missing functionality in cells.
Aggregates
https://bugs.launchpad.net/nova/+bug/1161208
https://blueprints.launchpad.net/nova/+spec/cells-aggregate-support
https://review.openstack.org/#/c/25813/
Hi Andrew,
great that you have started the “cells” discussion.
Looking forward to seeing cells as the default setup in Kilo.
The feature gap is really painful for current cells users.
We are looking into these features for some time and the main concern is
really where
these concepts should live.
Hi,
our nova DBs are growing rapidly and it's time to start pruning them...
I'm trying to archive deleted rows; however, it is not working and I'm
getting the following
warning in the logs: IntegrityError detected when archiving table
Searching about this problem I found the bug
Hi,
I'm observing exactly the same problem.
But in my case it is happening every time a VM is deleted.
I'm using icehouse.
Any idea?
regards,
Belmiro
On Fri, Jul 18, 2014 at 10:22 AM, Alex Leonhardt aleonhardt...@gmail.com
wrote:
Hi All,
I keep seeing this in the logs when deleting an
Hi Martinx,
currently nova only supports fallocate to preallocate space.
Use the configuration option preallocate_images=space.
preallocation=metadata is mentioned in
https://blueprints.launchpad.net/nova/+spec/preallocated-images
as future work.
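For completeness, the option mentioned above goes into nova.conf ("preallocate_images" is the real option name; "space" requests full preallocation via fallocate):

```ini
# nova.conf: preallocate the full disk space for instance images
# using fallocate ("space"); the default is "none"
[DEFAULT]
preallocate_images = space
```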
Belmiro
---
Belmiro Moreira
Is it available in github?
Belmiro
--
Belmiro Moreira
CERN
Email: belmiro.more...@cern.ch
IRC: belmoreira
On Mon, Jun 30, 2014 at 4:05 PM, Eric Frizziero eric.frizzi...@pd.infn.it
wrote:
Hi All,
we have analyzed the nova-scheduler component (FilterScheduler
I like the current behavior of not changing the VM state if nova-compute
goes down.
The cloud operators can identify the issue in the compute node and try to
fix it without users noticing. Depending on the problem, I can inform users
if instances are affected and change the state if necessary.
I
with the cache) but
we expect that OpenStack will behave in the same way.
Is anyone running OpenStack on MySQL 5.6?
thanks,
Belmiro
--
Belmiro Moreira
CERN
IRC: belmoreira
___
Mailing list: http://lists.openstack.org/cgi-bin
Hi,
if you are interested in this filter see:
https://review.openstack.org/#/c/99476/
Belmiro
--
Belmiro Moreira
CERN
Email: belmiro.more...@cern.ch
IRC: belmoreira
On Tue, Jun 10, 2014 at 10:42 PM, Belmiro Moreira
moreira.belmiro.email.li...@gmail.com wrote
considering the number
of DB queries required. However this can be documented if people intend to
enable the filter.
In the review there was also the discussion about a config option for the
old filter.
cheers,
Belmiro
--
Belmiro Moreira
CERN
Email: belmiro.more
+1 for Phil comments.
I agree that VMs should spread between different default AVZs if the user
doesn't define one at boot time.
There is a blueprint for that feature that unfortunately didn't make it into
icehouse.
https://blueprints.launchpad.net/nova/+spec/schedule-set-availability-zones
Belmiro
billo...@us.ibm.com
GPFS and OpenStack
520-799-4829
Hi,
in our cinder setup we would like to have different volume types associated
with different QoS, for example: standard, high_iops.
If the user doesn't explicitly specify the volume type when creating the
volume, in our use case it should default to one (standard, for example).
I can't find
: [Openstack] nova unique name generator middleware
To: Belmiro Moreira moreira.belmiro.email.li...@gmail.com
Since I am relatively new to the guts of OpenStack this might be an off base
suggestion but why is this even OpenStack's problem vs. something that can be
queried by whatever provisioning
Hi,
in our case we have a network DB where all VMs are registered.
We just check that the name provided by the user doesn't conflict.
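The check amounts to something like this (toy sketch; the `registered_names` set stands in for the lookup against our network DB):

```python
# Toy sketch of the uniqueness check described above; `registered_names`
# stands in for a query against the external network DB.
def name_is_free(requested_name, registered_names):
    # compare case-insensitively so "WEB01" conflicts with "web01"
    return requested_name.lower() not in {n.lower() for n in registered_names}
```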
Belmiro
On Feb 1, 2014, at 20:19 , Craig J craig.jell...@gmail.com wrote:
Hi,
In our OpenStack environment, we have the need to enforce unique names for
each