Please do not use these mailing lists to advertise
closed-source/proprietary software solutions.
Thank you,
-jay
On 10/19/2018 05:42 AM, Adrian Andreias wrote:
Hello,
We've just released Fleio version 1.1.
Fleio is a billing solution and control panel for OpenStack public
clouds and traditi
On 10/17/2018 01:41 AM, Ignazio Cassano wrote:
Hello Jay, when I add a new compute node I run nova-manage cell_v2
discover_hosts.
Is it possible that this command updates the old host UUID in the resource table?
No, not unless you already had a nova-compute installed on a host with
the exact same host
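For context, a minimal sketch of the command being discussed (flags as in
Ocata/Pike-era nova-manage; verify against your release):

  nova-manage cell_v2 discover_hosts --verbose

discover_hosts only maps compute hosts that are not yet associated with a
cell; consistent with the answer above, it does not rewrite the UUID of an
existing host record.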
On 10/16/2018 10:11 AM, Sylvain Bauza wrote:
On Tue, Oct 16, 2018 at 3:28 PM Ignazio Cassano <ignaziocass...@gmail.com> wrote:
Hi everybody,
when on my Ocata installation based on CentOS 7 I update (only update,
not changing the OpenStack version) some KVM compute nodes, I
die
On 09/26/2018 05:48 PM, melanie witt wrote:
On Tue, 25 Sep 2018 12:08:03 -0500, Matt Riedemann wrote:
On 9/25/2018 8:36 AM, John Garbutt wrote:
Another thing is about existing flavors configured for these
capabilities-scoped specs. Are you saying during the deprecation
we'd
cont
Fred,
I had a hard time understanding the articles. I'm not sure if you used
Google Translate to do the translation from Chinese to English, but I
personally found both of them difficult to follow.
There were a couple points that I did manage to decipher, though. One
thing that both articles
On 08/29/2018 04:04 PM, Dan Smith wrote:
- The VMs to be migrated are generally not expensive
configurations, just hardware lifecycles where boxes go out of
warranty or computer centre rack/cooling needs re-organising. For
CERN, this is a 6-12 month frequency of ~10,000 VMs per year (with a
~
On 08/29/2018 02:26 PM, Chris Friesen wrote:
On 08/29/2018 10:02 AM, Jay Pipes wrote:
Also, I'd love to hear from anyone in the real world who has successfully
migrated (live or otherwise) an instance that "owns" expensive hardware
(accelerators, SR-IOV PFs, GPUs or otherwis
On 08/29/2018 12:39 PM, Dan Smith wrote:
If we're going to discuss removing move operations from Nova, we should
do that in another thread. This one is about making existing operations
work :)
OK, understood. :)
The admin only "owns" the instance because we have no ability to
transfer ownersh
I respect your opinion but respectfully disagree that this is something
we need to spend our time on. Comments inline.
On 08/29/2018 10:47 AM, Dan Smith wrote:
* Cells can shard across flavors (and hardware type) so operators
would like to move users off the old flavors/hardware (old cell) to
n
Sorry for the delayed response. Was on PTO when this came out. Comments
inline...
On 08/22/2018 09:23 PM, Matt Riedemann wrote:
Hi everyone,
I have started an etherpad for cells topics at the Stein PTG [1]. The
main issue in there right now is dealing with cross-cell cold migration
in nova.
At
On 08/22/2018 11:05 AM, Brian Rosmaita wrote:
On Tue, Jul 10, 2018 at 8:04 AM Christian Berendt
wrote:
It is possible to add a domain as a member; however, this is not taken into
account. It should be mentioned that you can also add non-existing project IDs
as a member.
Yes, you can add any s
+openstack-dev since I believe this is an issue with the Heat source code.
On 06/18/2018 11:19 AM, Spyros Trigazis wrote:
Hello list,
I'm hitting quite easily this [1] exception with heat. The db server is
configured to have 1000
max_connections and 1000 max_user_connections and in the databa
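For readers following along, an illustrative sketch of the two layers of
knobs usually involved here (values are placeholders, not recommendations):

  # MySQL/MariaDB side (my.cnf)
  [mysqld]
  max_connections = 1000
  max_user_connections = 1000

  # Heat side (heat.conf), oslo.db pool settings per worker process
  [database]
  max_pool_size = 5
  max_overflow = 10

Roughly, the connections Heat can open scale with
workers x (max_pool_size + max_overflow), which is what tends to run past
the server-side limits.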
On 06/13/2018 10:18 AM, Blair Bethwaite wrote:
Hi Jay,
Ha, I'm sure there's some wisdom hidden behind the trolling here?
I wasn't trolling at all. I was trying to be funny. Attempt failed I
guess :)
Best,
-jay
On 06/13/2018 09:58 AM, Blair Bethwaite wrote:
Hi all,
Wondering if anyone can share experience with architecting Nova KVM
boxes for large capacity high-performance storage? We have some
particular use-cases that want both high-IOPs and large capacity local
storage.
In the past we have used
On 06/07/2018 01:56 PM, melanie witt wrote:
Hello Stackers,
Recently, we've received interest about increasing the maximum number of
allowed volumes to attach to a single instance > 26. The limit of 26 is
because of a historical limitation in libvirt (if I remember correctly)
and is no longer
On 05/29/2018 01:06 PM, Matt Riedemann wrote:
I'm wondering if the RequestSpec.project_id is null? Like, I wonder if
you're hitting this bug:
https://bugs.launchpad.net/nova/+bug/1739318
Although if this is a clean Ocata environment with new instances, you
shouldn't have that problem.
Looks
The hosts you are attempting to migrate *to* do not have the
filter_tenant_id property set to the same tenant ID as compute host
2, which originally hosted the instance.
That is why you see this in the scheduler logs when evaluating the
fitness of compute host 1 and compute host 3:
"fails
On 04/03/2018 06:48 AM, Chris Dent wrote:
On Mon, 2 Apr 2018, Alex Schultz wrote:
So this is/was valid. A few years back there were some perf tests done
with various combinations of process/threads and for Keystone it was
determined that threads should be 1 while you should adjust the
process co
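For illustration only, here is roughly where that tuning lands in an Apache
mod_wsgi deployment of Keystone (file path and process count are
deployment-specific assumptions):

  # /etc/httpd/conf.d/wsgi-keystone.conf (illustrative)
  WSGIDaemonProcess keystone-public processes=4 threads=1 \
      user=keystone group=keystone display-name=%{GROUP}
  WSGIProcessGroup keystone-public

i.e. keep threads=1 and scale the processes value to the host.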
On 03/27/2018 10:40 AM, Matt Riedemann wrote:
Sylvain has had a spec up for awhile [1] about solving an old issue
where admins can rename an AZ (via host aggregate metadata changes)
while it has instances in it, which likely results in at least user
confusion, but probably other issues later if
On 02/06/2018 04:26 AM, Flint WALRUS wrote:
Isn’t CellsV2 better suited to what you’re trying to do?
No, cells v2 is not user-facing, nor is there a way to segregate certain
tenants onto certain cells.
Host aggregates are the appropriate way to structure this grouping.
Best,
-jay
Le mar. 6
On 01/29/2018 06:30 PM, Mathieu Gagné wrote:
So let's explore what a placement-centric solution would look like.
(Let me know if I get anything wrong.)
Here are our main concerns/challenges so far, which I will compare to
our current flow:
1. Compute nodes should not be enabled by default
When
On 01/29/2018 06:48 PM, Mathieu Gagné wrote:
On Mon, Jan 29, 2018 at 8:47 AM, Jay Pipes wrote:
What I believe we can do is change the behaviour so that if a 0.0 value is
found in the nova.conf file on the nova-compute worker, then instead of
defaulting to 16.0, the resource tracker would
On 01/29/2018 12:40 PM, Chris Friesen wrote:
On 01/29/2018 07:47 AM, Jay Pipes wrote:
What I believe we can do is change the behaviour so that if a 0.0 value is
found in the nova.conf file on the nova-compute worker, then instead of
defaulting to 16.0, the resource tracker would first look to
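To make the behaviour concrete, a hedged nova.conf sketch of the sentinel
value under discussion (shown under [DEFAULT] as in Ocata-era configs; the
option's location varies by release):

  [DEFAULT]
  # 0.0 is the "unset" sentinel; at the time, the resource tracker fell back
  # to 16.0 for CPU unless some other source supplied a ratio.
  cpu_allocation_ratio = 0.0

The proposal quoted above is that, on seeing 0.0, the resource tracker would
consult another source first rather than hard-coding 16.0.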
Greetings again, Mathieu, response inline...
On 01/18/2018 07:24 PM, Mathieu Gagné wrote:
So far, a couple challenges/issues:
We used to have fine-grained control over the calls a user could make to
the Nova API:
* os_compute_api:os-aggregates:add_host
* os_compute_api:os-aggregates:remove_host
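For readers unfamiliar with those rules, a hedged policy.json sketch of the
kind of per-call control being described (rule targets are placeholders;
admin_api is the usual default):

  {
      "os_compute_api:os-aggregates:add_host": "rule:admin_api",
      "os_compute_api:os-aggregates:remove_host": "rule:admin_api"
  }

Deployments sometimes point these at a custom role instead, which is the
sort of fine-grained control the quoted message refers to.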
The bug in question doesn't have anything to do with that.
I've pushed a fix and a test case up here:
https://review.openstack.org/538310
Best,
-jay
On 01/26/2018 12:16 PM, Blake Covarrubias wrote:
The inconsistency in device naming is documented in
https://docs.openstack.org/nova/pike/user/b
On 01/22/2018 11:36 AM, Maciej Kucia wrote:
Hi!
Is there any noticeable performance penalty when using multiple virtual
functions?
For simplicity I am enabling all available virtual functions in my NICs.
I presume by the above you are referring to setting your
pci_passthrough_whitelist on
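Purely as an illustration of the option being referenced (vendor/product IDs
and the config section are placeholders; the exact option name and section
vary across releases):

  [pci]
  passthrough_whitelist = { "vendor_id": "8086", "product_id": "10ed", "physical_network": "physnet1" }

The quoted question is whether exposing every VF through a whitelist like
this carries a measurable performance cost.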
-reference/compute/schedulers.html
On Wed, Jan 17, 2018 at 7:57 AM, Sylvain Bauza wrote:
On Wed, Jan 17, 2018 at 2:22 PM, Jay Pipes wrote:
On 01/16/2018 08:19 PM, Zhenyu Zheng wrote:
Thanks for the info, so it seems we are not going to implement aggregate
overcommit ratio in placement at le
On 01/16/2018 08:19 PM, Zhenyu Zheng wrote:
Thanks for the info, so it seems we are not going to implement aggregate
overcommit ratio in placement at least in the near future?
As @edleafe alluded to, we will not be adding functionality to the
placement service to associate an overcommit ratio
On 09/20/2017 06:17 AM, Georgios Kaklamanos wrote:
Hello,
Usecase: We have to deploy instances that belong to different domains,
to different compute hosts.
Does anyone else have the same usecase? If so, how did you implement
it?
[The rest of the mail is a more detailed explanation on the ques
SIGs approach). Anyway, if
anyone is interested in investigating further, please reply or reach out
to me: jamemcc at gmail dot com.
On 08/16/2017 09:25 PM, Curtis wrote:
On Wed, Aug 16, 2017 at 12:03 AM, Jay Pipes wrote:
On 08/16/2017 09:25 PM, Curtis wrote:
On Wed, Aug 16, 2017 at 12:03 AM, Jay Pipes wrote:
Hi Curtis, Andrew U, Jamie M,
May I request that if the telco working group merges with the LCOO, that we
get regular updates to the openstack[-operator|-dev] mailing list with
information about the
Hi Curtis, Andrew U, Jamie M,
May I request that if the telco working group merges with the LCOO, that
we get regular updates to the openstack[-operator|-dev] mailing list
with information about the goings-on of LCOO? Would be good to get a
bi-weekly or even monthly summary.
Other working gr
On 08/01/2017 11:14 AM, John Petrini wrote:
Just my two cents here but we started out using mostly Ephemeral storage
in our builds and looking back I wish we hadn't. Note we're using Ceph
as a backend so my response is tailored towards Ceph's behavior.
The major pain point is snapshots. When y
On 07/19/2017 12:35 PM, Chris Friesen wrote:
On 07/12/2017 06:57 PM, Jay Pipes wrote:
On 07/04/2017 05:21 AM, Kekane, Abhishek wrote:
Hi operators,
I want to know how evacuation of resized instances is handled in a real
environment.
For example, if the VM is in a resized state and if the compute
On 07/04/2017 05:21 AM, Kekane, Abhishek wrote:
Hi operators,
I want to know how evacuation of resized instances is handled in a real
environment.
For example, if the VM is in a resized state and the compute host on which the
VM was resized goes down, how will the operator evacuate the VM?
One
On 05/31/2017 05:52 AM, federica fanzago wrote:
Hello operators,
we have a problem with placement after the update of our cloud from the
Mitaka to the Ocata release.
We started from a mitaka cloud and we have followed these steps: updated
the cloud controller from Mitaka to newton, run the dbsync
On 05/23/2017 07:06 PM, Blair Bethwaite wrote:
Thanks Jay,
I wonder whether there is an easy-ish way to collect stats about the
sorts of errors deployers see in that catchall, so that when this
comes back around in a release or two there might be some less
anecdotal data available...?
Don't wo
Hello Dear Operators,
OK, we've heard you loud and (mostly) clear. We won't remove the
automated rescheduling behavior from Nova. While we will be removing the
primary cause of reschedules (resource overconsumption races), we cannot
yet eliminate the catchall exception handling on the compute
Thanks for the feedback, Curtis, appreciated!
On 05/23/2017 04:09 PM, Curtis wrote:
On Tue, May 23, 2017 at 1:20 PM, Edward Leafe wrote:
On May 23, 2017, at 1:27 PM, James Penick wrote:
Perhaps this is a place where the TC and Foundation should step in and foster
the existence of a porce
On 05/23/2017 12:34 PM, Marc Heckmann wrote:
On Tue, 2017-05-23 at 11:44 -0400, Jay Pipes wrote:
On 05/23/2017 09:48 AM, Marc Heckmann wrote:
For the anti-affinity use case, it's really useful for smaller or
medium
size operators who want to provide some form of failure domains to
users
b
On 05/22/2017 03:36 PM, Sean Dague wrote:
On 05/22/2017 02:45 PM, James Penick wrote:
I recognize that large Ironic users expressed their concerns about
IPMI/BMC communication being unreliable and not wanting to have
users manually retry a baremetal instance launch. But, on this
On 05/23/2017 09:48 AM, Marc Heckmann wrote:
For the anti-affinity use case, it's really useful for smaller or medium
size operators who want to provide some form of failure domains to users
but do not have the resources to create AZ's at DC or even at rack or
row scale. Don't forget that as so
nity that has much of a chance of
last-minute violation.
Best,
-jay
On Mon, May 22, 2017 at 03:00:09PM -0400, Jonathan Proulx wrote:
:On Mon, May 22, 2017 at 11:45:33AM -0700, James Penick wrote:
::On Mon, May 22, 2017 at 10:54 AM, Jay Pipes wrote:
::
::> Hi Ops,
::>
::> Hi!
::
ll the time, but haven't
touched nfv.
How often do you see retries due to the last-minute anti-affinity violation?
Thanks for the feedback, Kevin!
-jay
Thanks,
Kevin
From: Jay Pipes [jaypi...@gmail.com]
Sent: Monday, May 22, 2017 10:54 AM
To: openst
Hi Ops,
I need your feedback on a very important direction we would like to
pursue. I realize that there were Forum sessions about this topic at the
summit in Boston and that there were some decisions that were reached.
I'd like to revisit that decision and explain why I'd like your support
On 05/01/2017 03:39 PM, Blair Bethwaite wrote:
Hi all,
Following up to the recent thread "[Openstack-operators] [scientific]
Resource reservation requirements (Blazar) - Forum session" and adding
openstack-dev.
This is now a confirmed forum session
(https://www.openstack.org/summit/boston-2017/
On 04/28/2017 08:22 AM, Shamail Tahir wrote:
Hi everyone,
Most of the proposed/accepted Forum sessions currently have moderators
but there are six sessions that do not have a confirmed moderator yet.
Please look at the list below and let us know if you would be willing to
help moderate any of th
On 04/26/2017 05:22 PM, Sun, Yih Leong wrote:
Hi,
In preparation for the OpenStack Boston Forum, the Product WG recently
discussed how to make continuing discussions from the Forum an easy and
consistent experience for moderators and the community. The team came up
with a few recom
On 04/11/2017 02:08 PM, Pierre Riteau wrote:
On 4 Apr 2017, at 22:23, Jay Pipes <jaypi...@gmail.com> wrote:
On 04/04/2017 02:48 PM, Tim Bell wrote:
Some combination of spot/OPIE
What is OPIE?
Maybe I missed a message: I didn’t see any reply to Jay’s question about
OPIE.
asible?
I'm not sure how the above is different from the constraints I mention
below about having separate sets of resource providers for preemptible
instances than for non-preemptible instances?
Best,
-jay
Tim
On 04.04.17, 19:21, "Jay Pipes" wrote:
On 04/03/2017 06:0
On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
Hi Jay,
On 4 April 2017 at 00:20, Jay Pipes wrote:
However, implementing the above in any useful fashion requires that Blazar
be placed *above* Nova and essentially that the cloud operator turns off
access to Nova's POST /servers API cal
der Nova spec
which is related:
https://review.openstack.org/#/c/389216/
And see the points that Jay Pipes makes in that review. Before
spending a lot of time reviving the project, I'd encourage people to
read and digest
My vote is to deprecate it.
On 03/22/2017 11:16 AM, Matt Riedemann wrote:
This is mostly directed at operators but I'm cross-posting to the ops
and dev lists.
First, does anyone use the os-hosts API and if so, for what use cases?
The os-hosts and os-services APIs are very similar, and they wor
On 03/23/2017 01:01 PM, Jean-Philippe Methot wrote:
Hi,
Lately, on my production OpenStack Newton setup, I've run into a
situation that defies my assumptions regarding memory management on
Openstack compute nodes and I've been looking for explanations.
Basically, we had a VM with a flavor that l
On 03/16/2017 10:48 PM, Masha Atakova wrote:
Hi everyone,
Is there any up-to-date functionality in nova / neutron which allows running
some additional code triggered by instance changes, like creating or deleting
an instance?
I see that nova hooks are deprecated as of Nova 13:
https://github.
On 03/08/2017 04:14 PM, Sean Dague wrote:
On 03/08/2017 07:12 AM, Tim Bell wrote:
On 7 Mar 2017, at 11:52, Sean Dague wrote:
One of the things that came out of the PTG was perhaps a new path
forward on hierarchical limits that involves storing of limits in
keystone doing counting on the proj
infrastructure and therefore give you the benefit of auto-recording and
publishing the meeting minutes.
However, I do understand it can sometimes be difficult to follow IRC
conversations with lots of participants. Definitely has trade-offs.
-----Original Message-----
From: Jay Pipes [mailto:jayp
On 02/03/2017 01:16 PM, Jonathan Proulx wrote:
On Fri, Feb 03, 2017 at 04:34:20PM +0100, lebre.adr...@free.fr wrote:
:Hi,
:
:I don't know whether there is already a concrete/effective way to identify
overlapping between WGs.
:But if not, one way can be to arrange one general session in each summ
e damned, let's pull together and do all that we can to
make OpenStack as great as it can be and make the world a better
place along the way.
Trust me, politics was the last thing I had in mind when I wrote my
questions about the LCOO!
> Here in the USA where I live, I find myself
h.leong@intel.com | +1 503 264 0610
-----Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Thursday, February 2, 2017 5:23 PM
To: Edgar Magana ;
openstack-operators@lists.openstack.org; user-commit...@lists.openstack.org
Cc: MCCABE, JAMEY A ; UKASICK, ANDREW
Subject: R
to rephrase or elaborate on any questions. Happy to do
so. I genuinely want to see alignment with other groups in this effort.
Best,
-jay
Thanks,
Edgar
On 2/2/17, 12:14 PM, "Jay Pipes" wrote:
Hi,
I was told about this group today. I have a few questions. Hopefully
so
Hi,
I was told about this group today. I have a few questions. Hopefully
someone from this team can illuminate me with some answers.
1) What is the purpose of this group? The wiki states that the team
"aims to define the use cases and identify and prioritise the
requirements which are needed
upgrade.html#
5. Cinder,
http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#
On Sat, Nov 12, 2016 at 9:13 AM, Matt Riedemann
wrote:
On 11/11/2016 9:45 AM, Jay Pipes wrote:
On 11/11/2016 08:38 AM, William Josefsson wrote:
Hi everyone, I have been quite concerned about how to mi
On 11/11/2016 08:38 AM, William Josefsson wrote:
Hi everyone, I have been quite concerned about how to migrate 50
projects and a total of 100 instances on Liberty/CentOS 7.2 to Newton.
My storage backend is Ceph. Can anyone advise whether, as a safe
migration path, I can do a fresh install of Newton on addi
On 10/12/2016 10:17 AM, Ulrich Kleber wrote:
Hi,
I didn’t see an official announcement, so I’d like to point you to the new
release of OPNFV.
https://www.opnfv.org/news-faq/press-release/2016/09/open-source-nfv-project-delivers-third-platform-release-introduces-0
OPNFV is an open source project
Timofei, thanks for bringing up the resource providers work; it is
absolutely trying to solve the problem highlighted in this post.
Antonio, comments inline.
On 08/08/2016 05:22 AM, Antonio Messina wrote:
2016-08-08 10:52 GMT+02:00 Timofei Durakov :
Hi,
so for this moment we have 2 options:
6 at 10:23, Steven Dake (stdake) wrote:
On 7/31/16, 7:13 AM, "Jay Pipes" wrote:
On 07/29/2016 11:35 PM, Steven Dake (stdake) wrote:
Hey folks,
In Kolla we have a significant bug in that Horizon can't be used because
it requires a member user. We have a few approaches to fi
On 07/29/2016 11:35 PM, Steven Dake (stdake) wrote:
Hey folks,
In Kolla we have a significant bug in that Horizon can't be used because
it requires a member user. We have a few approaches to fixing this
problem in mind, but want to understand what Operators want. Devstack
itself has switched b
On 04/14/2016 05:14 AM, Juvonen, Tomi (Nokia - FI/Espoo) wrote:
As admin I want to know when a host is ready for actions to be done by the admin
during maintenance, meaning its physical resources are emptied.
You are equating "host maintenance mode" with the end result of a call
to `nova host-evacua
On 04/18/2016 09:24 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
I noticed while reading through Mitaka release notes that
vendordata_driver has been deprecated in Mitaka
(https://review.openstack.org/#/c/288107/) and is slated for removal at
some point. This came as somewhat of a surprise to me - I
On 03/03/2016 08:57 AM, Robert Starmer wrote:
There was work done on enabling much more dynamic scheduling, including
cross project scheduling (e.g. get additional placement hints from
Neutron or Cinder), and I believe the framework is even in place to make
use of this, but I don't believe anyone
On 02/05/2016 08:17 AM, Chris Marino wrote:
Hi Tomas, functionally, that is pretty accurate, but operationally they
are quite different. All the L3 approaches have fundamentally the same
point of view. I'd add OpenContrail and Nuage and what CloudScaling did
to the list of similar approaches as w
On 09/28/2015 12:51 PM, Matt Fischer wrote:
Yes. We have a separate DB cluster for global stuff like Keystone &
Designate, and a regional cluster for things like nova/neutron etc.
Yep, this ^
-jay
On 09/08/2015 04:44 PM, Jonathan Proulx wrote:
Hi All,
I'm pretty close to opening a second region in my cloud at a second
physical location.
The plan so far had been to only share keystone between the regions
(nova, glance, cinder etc would be distinct) and implement this by
using MariaDB with
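For concreteness, a hedged sketch of what the shared-Keystone piece of that
plan usually looks like (hostname and credentials are placeholders; assumes
a replicated MariaDB endpoint reachable from both regions):

  # keystone.conf on the controllers in each region
  [database]
  connection = mysql+pymysql://keystone:<password>@keystone-db-vip/keystone

Each region's other services (nova, glance, cinder) would keep their own
local databases, per the plan described above.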
On 06/19/2015 05:22 AM, Thierry Carrez wrote:
In conclusion, I'd like to suggest that you find an better name to
describe this operational data about projects, because calling them
"tags" or "labels" will be confusing in this two-step picture. My
personal suggestion would be ops-data
+1
-jay
Adding -dev because of the reference to the Neutron "Get me a network
spec". Also adding [nova] and [neutron] subject markers.
Comments inline, Kris.
On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
During the Openstack summit this week I got to talk to a number of other
operators of large Open
Cross-posting to -operators and -dev because this involves *packagers*
of OpenStack, as well as operators who use those packages.
Hello Operators,
First, let me start out by saying if you were offended by my snarky
comments at yesterday's TC meeting [1] regarding the direction of the
Ops Tags
On 06/03/2015 06:52 AM, Tom Fifield wrote:
Hi all,
As agreed at the summit, let's have a monthly meeting for the Ops Tags Team.
On the agenda for this round:
0. Announcements (new repo, wiki page)
1. Discussion of ops:docs:install-guide tag
2. Discussion of Ops:packaged Tag
3. New tags people w
On 06/02/2015 10:29 AM, Tom Fifield wrote:
On 02/06/15 22:18, Jay Pipes wrote:
On 06/01/2015 04:07 AM, Tom Fifield wrote:
Hi all,
Thank you very much for officially kicking off the Ops Tags Team at the
Vancouver summit!
Based on our discussions, I've made a bit of progress. We now h
On 06/01/2015 04:07 AM, Tom Fifield wrote:
Hi all,
Thank you very much for officially kicking off the Ops Tags Team at the
Vancouver summit!
Based on our discussions, I've made a bit of progress. We now have a
* wiki page: https://wiki.openstack.org/wiki/Operations/Tags
* repository: https://g
On 05/15/2015 12:38 PM, George Shuklin wrote:
Just to let everyone know: broken antispoofing is not a 'security
issue' and the fix is not planned to be backported to Juno/Kilo.
https://bugs.launchpad.net/bugs/1274034
What can I say? All hail devstack! Who cares about production?
George, I can
Chris, responded on the bug :)
Thanks!
-jay
On 03/31/2015 02:47 AM, Chris Friesen wrote:
On 03/30/2015 09:53 PM, Jay Pipes wrote:
On 03/30/2015 07:30 PM, Chris Friesen wrote:
On 03/30/2015 04:57 PM, Jay Pipes wrote:
On 03/30/2015 06:42 PM, Chris Friesen wrote:
On 03/30/2015 02:47 PM, Jay
On 03/30/2015 07:30 PM, Chris Friesen wrote:
On 03/30/2015 04:57 PM, Jay Pipes wrote:
On 03/30/2015 06:42 PM, Chris Friesen wrote:
On 03/30/2015 02:47 PM, Jay Pipes wrote:
On 03/30/2015 10:42 AM, Chris Friesen wrote:
On 03/29/2015 09:26 PM, Mike Dorman wrote:
Hi all,
I’m curious about how
On 03/30/2015 06:42 PM, Chris Friesen wrote:
On 03/30/2015 02:47 PM, Jay Pipes wrote:
On 03/30/2015 10:42 AM, Chris Friesen wrote:
On 03/29/2015 09:26 PM, Mike Dorman wrote:
Hi all,
I’m curious about how people deal with failures of compute
nodes, as in total failure when the box is gone for
On 03/30/2015 10:42 AM, Chris Friesen wrote:
On 03/29/2015 09:26 PM, Mike Dorman wrote:
Hi all,
I’m curious about how people deal with failures of compute nodes,
as in total failure when the box is gone for good. (Mainly care
about KVM HV, but also interested in more general cases as well.)
T
Thanks very much for this notification, Matt, much appreciated!
Best,
-jay
On 03/11/2015 08:25 AM, Fischer, Matt wrote:
We were remiss for not mentioning this during our talk on Monday since
apparently it's happening to other folks, as I found out last night at
dinner. During the Juno database mi
On 03/06/2015 10:54 AM, Jesse Keating wrote:
On 3/6/15 10:48 AM, Jay Pipes wrote:
Have you ever done this in practice?
One way of doing this would be to enable the host after adding it to a
host aggregate that only has your administrative tenant allowed. Then
launch an instance specifying
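A hedged sketch of that workflow (names and IDs are placeholders; CLI syntax
roughly of that era, and it assumes the AggregateMultiTenancyIsolation filter
is enabled and admin credentials are in use):

  openstack aggregate create --property filter_tenant_id=<admin_project_id> verification
  openstack aggregate add host verification compute-05
  nova service-enable compute-05 nova-compute
  openstack server create --flavor m1.small --image <image_id> \
      --availability-zone nova:compute-05 smoke-test

The forced zone:host form of --availability-zone is the admin-only way to
land the test instance on the specific host being verified.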
On 03/06/2015 10:43 AM, Jesse Keating wrote:
On 3/6/15 10:27 AM, Jay Pipes wrote:
As for adding another CONF option, I'm -1 on that. I see no valid reason
to schedule workloads to disabled hosts.
There may be a better way to skin this cat, but one scenario is we have
a host that has al
On 03/06/2015 07:19 AM, Sylvain Bauza wrote:
Hi,
First, sorry for cross-posting on both dev and operator MLs but I also
would like to get operators feedback.
So, I was reviewing the scheduler ComputeFilter and I was wondering why
the logic should be in a filter.
We indeed already have a check o
ns for any
database server used in production deployments.
Best,
-jay
On 02/20/15 10:20, Jay Pipes wrote:
On 02/20/2015 10:39 AM, Sean Lynn wrote:
We finished upgrading to Juno about the time you guys did. Just checked
logs across all environments since the time of the Juno upgrade and I'
On 02/20/2015 10:39 AM, Sean Lynn wrote:
We finished upgrading to Juno about the time you guys did. Just checked
logs across all environments since the time of the Juno upgrade and I'm
*not* seeing the same errors.
For comparison here's what we have (mostly out-of-the-box):
api_workers and
On 02/18/2015 02:31 AM, Marc Koderer wrote:
Hello everyone,
We already got good feedback on my sandbox test review. So I would like
to move forward.
With review [1] we will get a stackforge repo called „telcowg-usecases“.
Submitting a usecase will then follow the process of OpenStack developmen
On 02/05/2015 03:19 PM, Kris G. Lindgren wrote:
Is Mirantis going to have someone at the ops mid-cycle?
I believe Sean Collins (at least) is going to be present from Mirantis.
We were talking
about this in the operators channel today and it seemed like pretty much
everyone who was active has
On 02/13/2015 09:20 AM, Daniel P. Berrange wrote:
On Fri, Feb 13, 2015 at 08:49:26AM -0500, Jay Pipes wrote:
On 02/13/2015 07:04 AM, Daniel P. Berrange wrote:
Historically Nova has had a bunch of code which mounted images on the
host OS using qemu-nbd before passing them to libvirt to setup
On 02/13/2015 07:04 AM, Daniel P. Berrange wrote:
Historically Nova has had a bunch of code which mounted images on the
host OS using qemu-nbd before passing them to libvirt to setup the
LXC container. Since 1.0.6, libvirt is able to do this itself and it
would simplify the codepaths in Nova if we
On 02/02/2015 12:46 PM, Matt Riedemann wrote:
This came up in the operators mailing list back in June [1] but given
the subject probably didn't get much attention.
Basically there is a really old bug [2] from Grizzly that is still a
problem and affects multiple projects. A tenant can be deleted
Great topic, Morgan. Comments inline.
On 01/29/2015 11:26 AM, Morgan Fainberg wrote:
From an operator perspective I wanted to get input on the SQL Schema
Downgrades.
Today most projects (all?) provide a way to downgrade the SQL Schemas
after you’ve upgraded. Example would be moving from Juno to
On 01/15/2015 05:20 PM, George Shuklin wrote:
Hello everyone.
One more thing in the context of a small OpenStack deployment.
I really dislike the triple network load caused by current glance snapshot
operations. When the compute node does a snapshot, it plays with files locally,
then it sends them to glance-api, and (if gl
auth_protocol = http
auth_version = v2.0
admin_tenant_name = service
admin_user = nova
admin_password = openstack-compute
signing_dir = /var/cache/nova/api
hash_algorithms = md5
Could you pastebin the output of:
keystone catalog
and also pastebin your nova.conf for the node running the Nova API service?
Thanks!
-jay
On 01/14/2015 02:25 AM, Geo Varghese wrote:
Hi Team,
I need help with attaching a cinder volume to an instance.
I have successfully created cinder v