On 04/04/2014 03:52 AM, Ricardo Carrillo Cruz wrote:
> I'd like to get your feedback on the platforms you use that you know
> work just fine for devtest, and those that you know do not, be it
> Fedora, OpenSuse or whatever.
>
> Regards
FYI, I'm committed to supporting TripleO, Tuskar, and so
Clint Byrum wrote on 04/04/2014 19:05:04:
> From: Clint Byrum
> To: openstack-dev
> Date: 04/04/2014 19:06
> Subject: Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the cloud
>
> Excerpts from Stan Lagun's message of 2014-04-04 02:54:05 -0700:
> > Hi Steve, Thomas
> >
> > I'm glad th
I just submitted a CL for the Neutron Python API samples
https://review.openstack.org/#/c/85451/
On Fri, Apr 4, 2014 at 9:43 AM, Tom Fifield wrote:
> Feel free to submit doc patches that don't build for review - docs
> reviewers are known to fix markup for you :)
>
>
> On 04/04/14 11:11, Rajd
On Sat, 5 Apr 2014 15:16:33 +1100
Joshua Hesketh wrote:
> I'm moving a conversation that has begun on a review to this mailing
> list as it is perhaps symptomatic of a larger issue regarding API
> compatibility (specifically between neutron and nova-networking).
> Unfortunately these are areas I do
Hi Chris,
Thanks for your input.
On 4/5/14 9:56 PM, Christopher Yeoh wrote:
On Sat, 5 Apr 2014 15:16:33 +1100
Joshua Hesketh wrote:
I'm moving a conversation that has begun on a review to this mailing
list as it is perhaps symptomatic of a larger issue regarding API
compatibility (specifically
On Fri, 2014-04-04 at 11:52 +0100, Julie Pichon wrote:
> On 03/04/14 23:20, Jay Pipes wrote:
> > On Thu, 2014-04-03 at 14:41 -0500, Kevin L. Mitchell wrote:
> >> On Thu, 2014-04-03 at 19:16 +, Cazzolato, Sergio J wrote:
> >>> Jay, thanks for taking ownership on this idea, we are really
> >>> in
On Fri, 2014-04-04 at 13:30 +0800, Jay Lau wrote:
>
>
>
> 2014-04-04 12:46 GMT+08:00 Jay Pipes :
> On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:
> > Thanks Jay and Chris for the comments!
> >
> > @Jay Pipes, I think that we still need to enable "one nova
>
One fairly common failure mode folk run into is registering a node
with a nova-bm/ironic environment that is itself part of that
environment. E.g. if you deploy ironic-conductor using Ironic (scaling
out a cluster say), that conductor can then potentially power itself
off if the node that represent
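The self-enrollment trap described above can be sketched as a pre-registration
guard: before enrolling a node, compare its NICs against the host the conductor
is running on. All names and structure here are hypothetical for illustration -
real enrollment goes through the Ironic API, and this is just the shape of the
check, not an actual Ironic interface:

```python
# Hypothetical guard: refuse to enroll a node whose MAC addresses
# overlap with the machine running this conductor, since powering
# that node off would take the service down with it.

def is_self_enrollment(local_macs, node_macs):
    """True if the node being enrolled shares a NIC with this host."""
    return bool({m.lower() for m in local_macs} &
                {m.lower() for m in node_macs})

def enroll_node(local_macs, node):
    """Register a node unless it looks like the conductor's own hardware."""
    if is_self_enrollment(local_macs, node["macs"]):
        raise ValueError(
            "refusing to enroll %s: it appears to be this conductor's "
            "own hardware" % node["name"])
    return {"registered": node["name"]}
```

A check like this only catches the direct case; a node could still power off
another conductor in the same cluster, which is the harder scaling-out problem.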
On 04/04/2014 08:12 PM, Hopper, Justin wrote:
> Greetings,
>
> I am trying to address an issue from certain perspectives and I think
> some support from Nova may be needed.
>
> _Problem_
> Services like Trove run in Nova Compute Instances. These Services
> try to provide an integrated and st
Hi Mohammad,
Thanks for suggestion. I'll add a proposal for the design session and also
bring this topic to the next ML2 weekly meeting.
Thanks,
Nader.
On Thu, Apr 3, 2014 at 7:49 PM, Mohammad Banikazemi wrote:
> Nader,
>
> During the last ML2 IRC weekly meeting [1] having per-MD extensions
Russell,
Thanks for the quick reply. If I understand what you are suggesting it is
that there would be one Trove-Service Tenant/User that owns all instances
from the perspective of Nova. This was one option proposed during our
discussions. However, what we thought would be best is to continue to
Thanks Jay Pipes.
If we go back to having a single nova-compute managing a single vCenter
cluster, then there might be problems in a large scale vCenter cluster.
There are still problems that we can not handle:
1) The VCDriver can also manage multiple resource pools with a single nova
compute, the re
Stephen
Mike is right, it is mostly (possibly only?) extensions that do double
lookups. Your plan looks sensible, and definitely useful. I guess I'll
see if I can actually break it once the review is up :-) I mostly
wanted to give a heads-up - there are people who are way better at
reviewing this
Mike
Glance metadata gets used for billing tags, among other things, which
we would like to stay attached to a volume as long as possible, as
another example. Windows images use this - which is why cinder copies
all of the glance metadata in the first place, rather than just a
bootable flag.
Apparently prote
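The copy-everything rationale above can be sketched in a few lines: carry every
image property over to the volume, rather than deriving a single flag. The field
names here are invented for the example and are not Cinder's actual
volume_glance_metadata schema:

```python
# Illustrative only: a volume created from an image keeps all image
# properties (billing tags, os_type for Windows licensing, etc.) so
# downstream consumers can read them from the volume itself.

def copy_image_metadata_to_volume(image_meta):
    """Carry every image property over, not just a bootable flag."""
    volume_meta = {"bootable": True}
    volume_meta.update(image_meta)  # billing tags, os_type, etc. survive
    return volume_meta
```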
My advice is two-fold:
- No need to wait for the blueprint to be approved before submitting
a review - put a review up and let people see the details, then
respond to the discussion as necessary
- Drop into the Wednesday (16:00 UTC) IRC meeting for Cinder - most
if not all of the core team are u
I'm not yet sure of the right way to do cleanup on shutdown, but any
driver should do as much checking as possible on startup - the service
might not have gone down cleanly (kill -9, SEGFAULT, etc), or
something might have gone wrong during clean shutdown. The driver
coming up should therefore not
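The check-on-startup pattern described here amounts to a reconciliation pass:
treat the persisted state as suspect and diff it against what the backend
actually has. This is a minimal sketch assuming an invented record shape, not
any actual Cinder driver interface:

```python
# Sketch of startup reconciliation: the previous process may have died
# mid-operation (kill -9, SEGFAULT), so neither the DB nor the backend
# can be trusted alone.

def reconcile_on_startup(db_records, backend_resources):
    """Diff persisted records against the backend's real state."""
    stale = sorted(db_records - backend_resources)      # DB says it exists, backend disagrees
    orphaned = sorted(backend_resources - db_records)   # backend has it, DB lost track
    return stale, orphaned
```

What the driver then does with each list (re-create, clean up, or just log and
refuse to start) is a policy decision per driver.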