We only bump the version if something has changed IIRC. I think bumping
when nothing has changed would create a burden for implementers of client
software. So it's not like you get a chance to sneak this in "for free".
Does this information really need to be available in the host OS? It's
trivial to
I think a good start would be a concrete list of the places you felt you
needed to change upstream and the specific reasons for each that it wasn't
done as part of the community.
For example, I look at your nova fork and it has a "don't allow this call
during an upgrade" decorator on many API call
Hi,
further to last week's example of how to add a new privsep'ed call in Nova,
I thought I'd write up how to add privsep to a new OpenStack project. I've
used Cinder in this worked example, but it really applies to any project
which wants to do things with escalated permissions.
The write-up is
I was asked yesterday for a guide on how to write new escalated methods
with oslo privsep, so I wrote up a blog post about it this morning. It
might be useful to others here.
http://www.madebymikal.com/how-to-make-a-privileged-call-with-oslo-privsep/
I intend to write up how to add privsep to a n
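For flavour, the shape of a privsep'ed call is roughly this (a minimal
sketch; the context name, config section and capability set are
illustrative, not any project's real ones):

    from oslo_concurrency import processutils
    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # A project defines one of these contexts once, then decorates each
    # privileged function with its entrypoint.
    sys_admin_pctxt = priv_context.PrivContext(
        'cinder',
        cfg_section='cinder_sys_admin',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )

    @sys_admin_pctxt.entrypoint
    def update_hwclock():
        # The body runs inside the privileged daemon, not the
        # unprivileged service process.
        processutils.execute('hwclock', '-w')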
I'm confused about the design of AE to be honest. Is there a good reason
that this functionality couldn't be provided by cloud-init? I think there's
a lot of cost in deviating from the industry standard, so the reasons to do
so have to be really solid.
I'm also a bit confused by what seems to be s
The more I think about it, the more I dislike how the proposed driver also
"lies" about it using iso9660. That's definitely wrong:
if CONF.config_drive_format in ['iso9660']:
    # cloud-init only supports iso9660 and vfat, but in z/VM
    # implementation, can't link a disk
Heya,
https://review.openstack.org/#/c/527658 is a z/VM patch which introduces
their support for config drive. They do this by attaching a tarball to the
instance, while pretending in the nova code that it is an iso9660. This
worries me.
In the past we've been concerned about adding new filesyste
Hi,
https://review.openstack.org/#/c/523387 proposes adding a z/VM-specific
dependency to nova's requirements.txt. When I objected, the counter-argument
was that we have examples of Windows-specific dependencies (os-win) and
PowerVM-specific dependencies in that file already.
I think perhaps all th
> On Wed, 2018-04-04 at 07:54 +1000, Michael Still wrote:
> > Thanks to jichenjc for fixing the pep8 failures I was seeing on
> > master. I'd decided they were specific to my local dev environment
> > given no one else was seeing them.
> >
> > As
Thanks to jichenjc for fixing the pep8 failures I was seeing on master. I'd
decided they were specific to my local dev environment given no one else
was seeing them.
As I said in the patch that fixed the issue [1], I think it's worth
exploring how these got through the gate in the first place. Ther
> Having an interface for vendordata that gets deletes would be quite nice.
> Right now for novajoin we listen to the nova notifications for updates and
> deletes; if this could be handled natively by vendordata, it would simplify
> our codebase.
>
> BR
>
> On Fri, Mar 16, 2018 at
> best,
> Pino
>
>
> On Thu, Mar 15, 2018 at 3:42 PM, Michael Still wrote:
>
>> Heya,
>>
>> I've just stumbled across Tatu and the design presentation [1], and I am
>> wondering how you handle cleaning up instances when they are deleted given
>
Heya,
I've just stumbled across Tatu and the design presentation [1], and I am
wondering how you handle cleaning up instances when they are deleted given
that nova vendordata doesn't expose a "delete event".
Specifically I'm wondering if we should add support for such an event to
vendordata someh
_dir variable to /var/log/ironic, or setting
>> [default] log_dir to the same in ironic.conf.
>>
>> I'm surprised it's not logging to a file by default.
>>
>> Mark
>>
>> On 4 Mar 2018 8:33 p.m., "Michael Still" wrote:
>>
>>
onic, or setting
> [default] log_dir to the same in ironic.conf.
>
> I'm surprised it's not logging to a file by default.
>
> Mark
>
> On 4 Mar 2018 8:33 p.m., "Michael Still" wrote:
>
>> Ok, so I applied your patch and redeployed. I now get a list of
I was thinking about this the other day... How do you de-register instances
from freeipa when the instance is deleted? Is there a missing feature in
vendordata there that you need?
Michael
On Fri, Nov 11, 2016 at 2:01 AM, Rob Crittenden wrote:
> Wanted to let you know I'm working on a nova meta
management
> and power interfaces were not enabled. The patch should address that but
> please let us know if there are further issues.
> Mark
>
>
> On 4 Mar 2018 7:59 p.m., "Michael Still" wrote:
>
> Replying to a single email because I am lazier than you.
Replying to a single email because I am lazier than you.
I would have included logs, except /var/log/ironic on the bifrost machine
is empty. There are entries in syslog, but nothing that seems related (it's
all periodic task kind of stuff).
However, Mark is right. I had an /etc/ironic/ironic.conf
Heya,
I've been playing with bifrost to help me manage some lab machines. I must
say the install process was well documented and smooth, so that was a
pleasure. Thanks!
That said, I am struggling to get a working node enrolment. I'm resisting
using the JSON file / ansible playbook approach, becau
Sorry for the slow reply, I've spent the last month camping in a tent and
it was wonderful.
The privsep transition isn't complete in Nova, but it was never intended to
be in Queens. We did get further than we envisaged and it's doable to finish
off in Rocky.
That said, I feel like we have a nice e
Do we continue to support the previous two releases as stable branches?
Doesn't that mean we double the amount of time we need to keep older CI
setups around? Isn't that already a pain point for the stable teams?
Michael
On Wed, Dec 13, 2017 at 8:17 AM, Thierry Carrez
wrote:
> Hi everyone,
>
>
Hi,
I'm out of my depth a little here. I've done the following:
- installed kubernetes
- followed the deploy guide for kolla-kubernetes [1]
- except where I didn't because I had to fix it [2]
I can download an openrc and even send a "nova boot" that looks like it
works. However, I have this c
Thanks for this summary. I'd say the cinder-booted IPA is definitely of
interest to the operators I've met. Building new IPAs, especially when
trying to iterate on what drivers are needed, is a pain, so being able to
iterate faster would be very useful. That said, I guess this implies
booting more th
That does work for me, except it means I'll still need to port it to
privsep to hit my goal of no rootwrap in Queens. I can live with that.
Michael
On Wed, Nov 8, 2017 at 4:54 PM, Matt Riedemann wrote:
> On 11/8/2017 12:24 PM, Michael Still wrote:
>
>> Hi,
>>
>>
Hi,
a really really long time ago (think 2011), we added support in Nova for
configuring the mkfs commands that are run for new ephemeral disks using
the virt_mkfs config option. The current implementation is in
nova/virt/disk/api.py for your reading pleasure.
I'm battling a little with how to move thi
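For context, virt_mkfs entries take the form "<os_type>=<mkfs command>";
a simplified sketch of how they get applied (condensed from the shape of
that code, details elided):

    from oslo_concurrency import processutils

    def parse_mkfs_options(entries):
        # e.g. entries = ['default=mkfs.ext4 -L %(fs_label)s -F %(target)s']
        commands = {}
        for entry in entries:
            os_type, command = entry.split('=', 1)
            commands[os_type] = command
        return commands

    def mkfs(commands, os_type, fs_label, target):
        # Fall back to the 'default' template, then substitute and run.
        template = commands.get(os_type) or commands.get('default')
        command = template % {'fs_label': fs_label, 'target': target}
        processutils.execute(*command.split())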
On Mon, Nov 6, 2017 at 1:26 PM, Dan Smith wrote:
> > I hope everyone travelling to the Sydney Summit is enjoying jet lag
> > just as much as I normally do. Revenge is sweet! My big advice is that
> > caffeine is your friend, and to not lick any of the wildlife.
>
> I wasn't planning on licking an
The privsep session doesn't appear to be in that list. Did it get dropped
or something?
Michael
On Wed, Nov 1, 2017 at 12:04 AM, Thierry Carrez
wrote:
> Hi everyone,
>
> Etherpads for the Forum sessions in Sydney can be found here:
>
> https://wiki.openstack.org/wiki/Forum/Sydney2017
>
> If you
Greetings,
I hope everyone travelling to the Sydney Summit is enjoying jet lag just as
much as I normally do. Revenge is sweet! My big advice is that caffeine is
your friend, and to not lick any of the wildlife.
On a more serious note, I want to give a checkpoint for the Nova privsep
transition i
I think new-keypair-on-rebuild makes sense for some forms of key rotation
as well. For example, I've worked with a big data ironic customer who uses
rebuild to deploy new OS images onto their ironic managed machines.
Presumably if they wanted to do a keypair rotation they'd do it in a very
similar
to discuss anything in flight for Queens, what
> we're working on, and have a chance to ask questions of operators/users for
> feedback. For example, we plan to add vGPU support but it will be quite
> simple to start, similar with volume multi-attach.
>
> 4. Michael Still had an i
One thing I'd like to explore is what the functional difference between a
rebuild and a delete / create cycle is. With a rebuild you get to keep your
IP I suppose, but that could also be true of floating IPs for a delete /
create as well.
Operationally, why would I want to inject a new keypair? Th
Hi,
this email is a courtesy message to make sure you're all aware that at the
PTG we decided to try to convert all of nova-compute to privsep for the
Queens release. This will almost certainly have an impact on out-of-tree
drivers, although I am hoping the fallout is minimal.
A change like this
Dims, I'm not sure that's actually possible though. Many of these files
have been through rewrites and have been developed over many years.
Listing all authors isn't practical.
Given the horse has bolted on forking these files, I feel like a comment
acknowledging the original source file is pro
"Matt Riedemann" wrote:
> On 8/20/2017 1:11 AM, Michael Still wrote:
>
>> Specifically we could do something like this:
>> https://review.openstack.org/#/c/495532
>>
>
> Sounds like we're OK with doing this in Queens given the other discussion
> in this
017 at 03:43:22PM +1000, Michael Still wrote:
> > Hi,
> >
> > nova.virt.libvirt.storage.lvm.clear_volume() has a comment that we could
> > use shred to zero out volumes efficiently if we could assume that shred
> > 8.22 was in all our downstream distros [1]. shred 8.22 sh
I'm going to take the general silence on this as permission to remove the
idmapshift binary from nova. You're welcome.
Michael
On Sat, Jul 29, 2017 at 10:09 AM, Michael Still wrote:
> Hi.
>
> I'm working through the process of converting the libvirt driver in No
Specifically we could do something like this:
https://review.openstack.org/#/c/495532
Michael
On Sun, Aug 20, 2017 at 3:43 PM, Michael Still wrote:
> Hi,
>
> nova.virt.libvirt.storage.lvm.clear_volume() has a comment that we could
> use shred to zero out volumes efficiently if we
Hi,
nova.virt.libvirt.storage.lvm.clear_volume() has a comment that we could
use shred to zero out volumes efficiently if we could assume that shred
8.22 was in all our downstream distros [1]. shred 8.22 shipped in 2013 [2].
Can we assume that thing now? xenial appears to ship with 8.25 for examp
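If 8.22+ is assumable, the efficient zeroing would be a single pass,
something like this (a sketch, not the actual clear_volume() code):

    from oslo_concurrency import processutils

    def clear_volume(path):
        # No random passes (-n0) plus one final pass of zeros (-z);
        # the thread says relying on this needs shred >= 8.22.
        processutils.execute('shred', '-n0', '-z', path, run_as_root=True)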
Hi.
I'm working through the process of converting the libvirt driver in Nova to
privsep with the assistance of Tony Breeds. For various reasons, I started
with removing all the calls to the chown binary and am replacing them with
privsep equivalents. You can see this work at:
https://review.o
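The shape of each replacement is roughly this (a sketch assuming the
nova.privsep context the series introduces):

    import os

    import nova.privsep

    @nova.privsep.sys_admin_pctxt.entrypoint
    def chown(path, uid=-1, gid=-1):
        # A direct os.chown() inside the privsep daemon replaces
        # execing /bin/chown via rootwrap.
        os.chown(path, uid, gid)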
Hi,
I'm cc'ing openstack-dev because your email is the same as the comment you
made on the relevant review, and I think getting visibility with the wider
Nova team is a good idea.
Unfortunately this is a risk of having an out-of-tree Nova driver, which
has never been the recommended path for impl
Certainly removing the "--no-binary :all:" results in a build that builds.
I'll test and see if it works todayish.
Michael
On Mon, Jun 12, 2017 at 9:56 PM, Chris Smart wrote:
> On Mon, 12 Jun 2017, at 21:36, Michael Still wrote:
> > The experimental buildroot based iron
> On 06/12/2017 04:29 AM, Michael Still wrote:
> > Hi,
> >
> > I'm trying to explain this behaviour in stable/newton, which specifies
> > Routes==2.3.1 in upper-constraints:
> >
> > $ pip install --no-binary :all: Routes==2.3.1
> > ...
> >
Hi,
I'm trying to explain this behaviour in stable/newton, which specifies
Routes==2.3.1 in upper-constraints:
$ pip install --no-binary :all: Routes==2.3.1
...
Could not find a version that satisfies the requirement Routes==2.3.1
(from versions: 1.5, 1.5.1, 1.5.2, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.7
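For what it's worth, "--no-binary :all:" rules out wheels entirely, so the
install can only succeed if an sdist was uploaded for that version. A quick
way to check what PyPI holds (illustrative, not from the thread):

    import requests

    # List the artifact types PyPI has for Routes 2.3.1; if only
    # 'bdist_wheel' appears, "--no-binary :all:" has nothing to build.
    data = requests.get('https://pypi.org/pypi/Routes/2.3.1/json').json()
    print(sorted(entry['packagetype'] for entry in data['urls']))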
2014 at 08:26:44AM +1000, Michael Still wrote:
> > On Tue, Sep 23, 2014 at 8:58 PM, Daniel P. Berrange
> wrote:
> > > On Tue, Sep 23, 2014 at 02:27:52PM +0400, Roman Bogorodskiy wrote:
> > >> Michael Still wrote:
> > >>
> > >>
This sort of question comes up every six months or so it seems.
The issue is that for config drive users we don't have a way of rebuilding
all of the config drive (for example, the root password is gone). That's
probably an issue for rescue because it's presumably one of the things you
might reset.
It would be interesting for this to be built in a way where other endpoints
could be added to the list that have extra headers added to them.
For example, we could end up with something quite similar to EC2 IAM if we
could add headers on the way through for requests to OpenStack endpoints.
Do yo
Config drive over read-only NFS anyone?
Michael
On Sun, Feb 19, 2017 at 6:12 AM, Steve Gordon wrote:
> - Original Message -
> > From: "Artom Lifshitz"
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> > Sent: Saturday, Febru
We have had this discussion several times in the past for other reasons.
The reality is that some people will never deploy the metadata API, so I
feel like we need a better solution than what we have now.
However, I would consider it probably unsafe for the hypervisor to read the
current config dr
At a previous employer we had a policy that all passwords started with "/"
because of the sheer number of times someone typed the root password into a
public IRC channel.
Michael
On Thu, Feb 9, 2017 at 10:04 AM, Jay Pipes wrote:
> On 02/08/2017 03:36 PM, Kendall Nelson wrote:
>
>> Hello All!
>>
What version of nova is tripleo using here? This won't work quite right if
you're using Mitaka until https://review.openstack.org/#/c/427547/ lands
and is released.
Also, I didn't know novajoin existed and am pleased to have discovered it.
Michael
On Fri, Feb 3, 2017 at 11:27 AM, Juan Antonio O
I think #3 is the right call for now. The person we had working on privsep
has left the company, and I don't have anyone I could get to work on this
right now. Oh, and we're out of time.
Michael
On Thu, Jan 26, 2017 at 3:49 PM, Matt Riedemann wrote:
> The patch to add support for ephemeral stor
t Riedemann" wrote:
>
> On 1/3/2017 8:48 PM, Michael Still wrote:
>
>> So...
>>
>> Our python3 tests hate [1] my exception handling for continued
>> vendordata implementation [2].
>>
>> Basically, it goes a bit like this -- I need to move from usin
So...
Our python3 tests hate [1] my exception handling for continued vendordata
implementation [2].
Basically, it goes a bit like this -- I need to move from using requests to
keystoneauth1 for external vendordata requests. This is because we're
adding support for sending keystone headers with th
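The shape of the change is roughly this (a sketch; [vendordata_dynamic_auth]
is the option group involved, the rest is simplified):

    from keystoneauth1 import loading as ks_loading

    def vendordata_session(conf):
        # Build an authenticated session from the vendordata_dynamic_auth
        # options so outgoing requests carry keystone auth headers.
        auth = ks_loading.load_auth_from_conf_options(
            conf, 'vendordata_dynamic_auth')
        return ks_loading.load_session_from_conf_options(
            conf, 'vendordata_dynamic_auth', auth=auth)

    # response = vendordata_session(CONF).get(url, raise_exc=False)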
I'd be remiss if I didn't point out that the nova LXC driver is much better
supported than the nova-docker driver.
Michael
On Thu, Dec 29, 2016 at 8:01 PM, Esra Celik
wrote:
>
> Hi Sam,
>
> nova-lxc is not recommended in production [1]. And LXD is built on top of
> LXC AFAIK. But I will investi
Hi,
radar was an antique effort to import some outside-OpenStack code that did
CI reliability dashboarding. It was never really a thing, and has been
abandoned over time.
The last commit that wasn't part of a project wide change series was in
January 2015.
Does anyone object to me following the
+1, I'd value him on the team.
Michael
On Sat, Dec 3, 2016 at 2:22 AM, Matt Riedemann
wrote:
> I'm proposing that we add Stephen Finucane to the nova-core team. Stephen
> has been involved with nova for at least around a year now, maybe longer,
> my ability to tell time in nova has gotten fuzzy
On Mon, Nov 28, 2016 at 4:37 PM, Jay Pipes wrote:
[Snip]
>
> I don't see any compelling reason not to work with the Nova and Ironic
> projects and add the functionality you wish to see in those respective
> projects.
>
Jay, I agree and I don't. First off, I think improving our current projects
This is a good summary, thanks. I finally uploaded the spec which describes
the decisions from the summit. It's here:
https://review.openstack.org/395959
Michael
On Thu, Nov 10, 2016 at 7:11 AM, Matt Riedemann
wrote:
> Michael Still led a session on completing the vendordata v2 work t
Heya,
I've been asked to let you all know about a single day OpenStack conference
in Canberra that's coming up in a few weeks. The event is being run by the
OpenStack Foundation along with the various meetup organizers.
The conference is on Monday 14 November and has two tracks -- a management
tr
Doh, and I'd already done one cleanup patch against it.
Michael
On Fri, Oct 28, 2016 at 1:24 PM, Devdatta Kulkarni <
kulkarni.devda...@gmail.com> wrote:
> Hi Steve,
>
> Your observation is correct.
>
> Solum team had created solum-infra-guestagent repository to investigate
> the idea of a build
So, it's good that you're working on third-party CI, but I see that as a
blocker before we can start this conversation -- we need to have a solid
history there before we can do much. You also don't mention (that I can
find) where the code to the driver is. Can I have a pointer to that please?
Micha
There is a bit of a wish list of things people want in metadata for Ocata
at https://etherpad.openstack.org/p/ocata-nova-metadata-wishlist -- it
might be worth adding your requirements and a link to your spec there?
Michael
On Sat, Oct 8, 2016 at 5:04 AM, Bence Romsics
wrote:
> Jay, let me corr
On Thu, Sep 1, 2016 at 11:58 AM, Adam Young wrote:
> On 08/31/2016 07:56 AM, Michael Still wrote:
>
> There is a quick sketch of what a service account might look like at
> https://review.openstack.org/#/c/363606/ -- I need to do some more
> fiddling to get the new option group
Matt Riedemann
wrote:
> On 8/30/2016 4:36 PM, Michael Still wrote:
>
>> Sorry for being slow on this one, I've been pulled into some internal
>> things at work.
>>
>> So... Talking to Matt Riedemann just now, it seems like we should
>> continue to pass through the
Sorry for being slow on this one, I've been pulled into some internal
things at work.
So... Talking to Matt Riedemann just now, it seems like we should continue
to pass the user authentication details through to the plugin when we have
them. The problem is what to do in the case where we do not (w
It's a shame so many are -2'ed. There is a lot there I could have merged
yesterday if it wasn't for that.
Michael
On Mon, Aug 22, 2016 at 9:00 PM, Sean Dague wrote:
> On 08/22/2016 12:10 AM, Michael Still wrote:
>
>> So, if this is about preserving CI time, then it
So, if this is about preserving CI time, then it's cool for me to merge
these on a US Sunday when the gate is otherwise idle, right?
Michael
On Fri, Aug 19, 2016 at 7:02 AM, Sean Dague wrote:
> On 08/18/2016 04:46 PM, Michael Still wrote:
> > We're still ok with merging existi
We're still ok with merging existing ones though?
Michael
On Fri, Aug 19, 2016 at 5:18 AM, Jay Pipes wrote:
> Roger that.
>
> On 08/18/2016 11:48 AM, Matt Riedemann wrote:
>
>> We have a lot of open changes for the centralize / cleanup config option
>> work:
>>
>> https://review.openstack.org/#
On Fri, Aug 19, 2016 at 1:00 AM, Matt Riedemann
wrote:
> It's that time of year again to talk about killing this job, at least from
> the integrated gate (move it to experimental for people that care about
> postgresql, or make it gating on a smaller subset of projects like oslo.db).
>
> The post
On Thu, Aug 11, 2016 at 7:38 AM, Doug Hellmann
wrote:
> Excerpts from Michael Still's message of 2016-08-11 07:27:07 +1000:
> > On Thu, Aug 11, 2016 at 7:12 AM, Doug Hellmann
> > wrote:
> >
> > > Excerpts from Michael Still's message of 2016-08-11 07:01:37 +1000:
> > > > On Thu, Aug 11, 2016 at
On Thu, Aug 11, 2016 at 7:12 AM, Doug Hellmann
wrote:
> Excerpts from Michael Still's message of 2016-08-11 07:01:37 +1000:
> > On Thu, Aug 11, 2016 at 2:24 AM, Doug Hellmann
> > wrote:
> >
> > > It's time to make sure we have all of our active technical contributors
> > > (ATCs) identified for
On Thu, Aug 11, 2016 at 2:24 AM, Doug Hellmann
wrote:
> It's time to make sure we have all of our active technical contributors
> (ATCs) identified for Newton.
>
> Following the Foundation bylaws [1] and TC Charter [2], Project
> teams should identify contributors who have had a significant impac
On Tue, Jul 26, 2016 at 4:44 PM, Fox, Kevin M wrote:
[snip]
The issue is, as I see it, a parallel activity to one that is
> currently accepted into the Big Tent, aka Containerized Deployment
[snip]
This seems to be the crux of the matter as best as I can tell. Is it true
to say that th
On 16 Jul 2016 1:27 PM, "Thomas Herve" wrote:
>
> On Fri, Jul 15, 2016 at 8:36 PM, Fox, Kevin M wrote:
> > Some specific things:
> >
> > Magnum trying to not use Barbican as it adds an additional dependency.
See the discussion on the devel mailing list for details.
> >
> > Horizon discussions at th
So, is now a good time to mention that "Quamby" is the name of a local
prison?
Michael
On Fri, Jul 15, 2016 at 7:50 PM, Eoghan Glynn wrote:
>
>
> > (top posting on purpose)
> >
> > I have re-started the Q poll and am slowly adding all of you fine folks
> > to it. Let's keep our fingers crosse
On Wed, Jun 22, 2016 at 11:13 PM, Sean Dague wrote:
> On 06/22/2016 09:03 AM, Matt Riedemann wrote:
> > On 6/21/2016 12:53 AM, Michael Still wrote:
> >> So, https://review.openstack.org/#/c/317739 is basically done I think.
> >> I'm after people's thoughts on
So, https://review.openstack.org/#/c/317739 is basically done I think. I'm
after people's thoughts on:
- I need to do some more things, as described in the commit message. Are
we ok with them being in later patches to get reviews moving on this?
- I'm unsure what level of tempest testing makes
On Fri, Jun 10, 2016 at 7:18 AM, Tony Breeds
wrote:
> On Wed, Jun 08, 2016 at 08:10:47PM -0500, Matt Riedemann wrote:
>
> > Agreed, but it's the worked example part that we don't have yet,
> > chicken/egg. So we can drop the hammer on all new things until someone
> does
> > it, which sucks, or ho
+Angus
On Thu, Jun 9, 2016 at 7:10 AM, Matt Riedemann
wrote:
> While sitting in Angus' cross-project session on oslo.privsep at the
> Austin summit I believe I had a conversation with myself in my head that
> Nova should stop adding new rootwrap filters and anything new should use
> oslo.privsep
On Tue, Jun 7, 2016 at 7:41 AM, Clif Houck wrote:
> Hello all,
>
> At Rackspace we're running into an interesting problem: Consider a user
> who boots an instance in Nova with an image which only supports SSH
> public-key authentication, but the user doesn't provide a public key in
> the boot req
I've always done it manually by eyeballing the review, but the script is
tempting.
Thanks,
Michael
On 27 May 2016 8:42 PM, "Sean Dague" wrote:
> On 05/27/2016 05:36 AM, Michael Still wrote:
> > Hi,
> >
> > I've spent some time today abandoning old revi
Hi,
I've spent some time today abandoning old reviews from the Nova queue.
Specifically, anything which hadn't been updated before February this year
has been abandoned with a message like this:
"This patch has been idle for a long time, so I am abandoning it to keep
the review clean sane. If you
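Scripted, that cleanup is a small loop over the Gerrit REST API (an
illustrative sketch, not the actual script; credentials are placeholders):

    import json

    import requests

    GERRIT = 'https://review.openstack.org'

    # Open nova changes untouched for more than three months.
    resp = requests.get(GERRIT + '/changes/',
                        params={'q': 'project:openstack/nova status:open '
                                     'age:3mon'})
    changes = json.loads(resp.text[5:])  # drop gerrit's ")]}'" prefix
    for change in changes:
        requests.post(GERRIT + '/a/changes/%s/abandon' % change['id'],
                      auth=('user', 'http-password'),  # placeholders
                      json={'message': 'This patch has been idle for a '
                                       'long time, so I am abandoning it '
                                       'to keep the review queue sane.'})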
On Tue, May 24, 2016 at 11:42 PM, Muneeb Ahmad
wrote:
> If not, can I add it's support? any ideas how can I do that?
>
> On Sat, May 21, 2016 at 10:23 PM, Muneeb Ahmad
> wrote:
>
>> Hey guys,
>>
>> Does OpenStack support Xvisor?
>>
>
So, given I'd never heard of Xvisor, I think we can say that no
On Tue, May 10, 2016 at 2:12 AM, Markus Zoeller wrote:
> We're close to having all options moved to "nova/conf/". At the bottom
> is a list of the remaining options and their open reviews.
>
> The documentation of the options in "nova/conf/" is done for ~ 150
> options. Which means ~ 450 are missin
On Fri, May 6, 2016 at 12:50 AM, Matthew Booth wrote:
> I mentioned in the meeting last Tuesday that there are now 2 of us working
> on the persistent storage metadata patches: myself and Diana Clarke. I've
> also been talking to Paul Carlton today trying to work out how he can get
> moving with
I can't think of a reason. In fact it's a bit warty because we've changed
the way we name the instance directories at least once. It's just how this
code was written back in the day.
Cleaning this up would be a fair bit of work though. Is it really worth the
effort just so people can have different
On Wed, May 4, 2016 at 11:03 AM, Davanum Srinivas wrote:
> Michael,
>
> The stackalytics bots do not have access to gerrit at the moment. We
> noticed it last friday and talked to infra folks:
>
> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-04-29.log.html#t20
The instance of stackalytics run by the openstack-infra team seems to be
gummed up. It alleges that the last time there was a nova code review was
April 17, which seems... unlikely.
Who looks after this thing so I can ping them gently?
Thanks,
Michael
--
Rackspace Australia
On Sun, May 1, 2016 at 10:27 PM, ZhiQiang Fan wrote:
> Hi Nova cores,
>
> There is a spec[1] submitted to Telemetry project for Newton release,
> mentioned that a new feature requires libvirt >= 1.3.4 , I'm not sure if
> this will have bad impact to Nova service, so I open this thread and wait
>
either at
San Antonio, Texas or Hillsboro, Oregon during R-15 (June 20-24) or R-11
(July 18-22) as preferred by the Nova community.
Regards
Malini
Forwarded Message
Subject:Re: [openstack-dev] [nova] Newton midcycle planning
Date: Tue, 12 Apr 2016 08:54:17 +1000
From:
On 12 Apr 2016 12:19 AM, "Sean Dague" wrote:
>
> On 04/11/2016 10:08 AM, Ed Leafe wrote:
> > On 04/11/2016 08:38 AM, Julien Danjou wrote:
> >
> >> There are a lot of assumptions in oslo.log about Nova, such as talking
> >> about "instance" and "context" in a lot of the code by default. There's
> >> e
On Tue, Apr 12, 2016 at 6:49 AM, Matt Riedemann
wrote:
> A few people have been asking about planning for the nova midcycle for
> newton. Looking at the schedule [1] I'm thinking weeks R-15 or R-11 work
> the best. R-14 is close to the US July 4th holiday, R-13 is during the week
> of the US July
On Wed, Apr 6, 2016 at 7:28 AM, Ian Cordasco wrote:
>
>
> -Original Message-
> From: Michael Still
> Reply: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Date: April 5, 2016 at 16:11:05
> To: OpenStack
As a recent newcomer to using our client libraries, my only real objection
to this plan is that our client libraries are a mess [1][2]. The interfaces
we expect users to use are quite different for basic things like initial
auth between the various clients, and by introducing another library we
insi
I normally do this in one big batch, but haven't had a chance yet. I'll do
that later this week.
Michael
On 17 Mar 2016 7:50 AM, "Matt Riedemann" wrote:
> Specs are proposed to the 'approved' subdirectory and when they are
> completely implemented in launchpad (the blueprint status is
> 'Impleme
On Mon, Feb 8, 2016 at 1:51 AM, Monty Taylor wrote:
[snip]
> Fifth - if we do this, the real need for the mid-cycles we currently have
> probably goes away since the summit week can be a legit wall-to-wall work
> week.
>
[snip]
Another reply to a specific point...
I disagree strongly here, a
On Sun, Feb 7, 2016 at 8:07 PM, Jay Pipes wrote:
[snip]
> Many contributors submit talks to speak at the conference part of an
> OpenStack Summit because their company says it's the only way they will pay
> for them to attend the design summit. This is, IMHO, a terrible thing. The
> design summ
I know its late to ask, but what is the parking situation at the office? Is
driving reasonable as a plan or should we walk from the Holiday Inn?
Michael
On 25 Jan 2016 4:58 PM, "Murray, Paul (HP Cloud)" wrote:
> See updated event detail information for the mid-cycle at:
> https://wiki.openstack.
Heya,
I am not aware of anyone working on this. That said, it's also not clear to
me that this is actually a good idea. Why can't you just loop through the
instances and delete them one at a time?
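With novaclient that loop is only a few lines (a sketch; "sess" is an
authenticated keystoneauth1 session, setup elided):

    from novaclient import client

    nova = client.Client('2', session=sess)
    for server in nova.servers.list():
        # One API call per instance; no bulk-delete endpoint required.
        nova.servers.delete(server)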
Michael
On Wed, Jan 20, 2016 at 12:08 AM, vishal yadav
wrote:
> Hey guys,
>
> Would like to know t