On Tue, 2011-12-06 at 14:12 -0800, Duncan McGreggor wrote:
On 06 Dec 2011 - 13:52, Duncan McGreggor wrote:
On 06 Dec 2011 - 21:14, Thierry Carrez wrote:
Tim Bell wrote:
I'm not clear on who will be maintaining the stable/diablo branch.
The people such as EPEL for RedHat systems need
On Tue, 2011-12-06 at 10:11 -0800, Duncan McGreggor wrote:
On 06 Dec 2011 - 14:28, Thierry Carrez wrote:
So the general consensus so far on this discussion seems to be:
(0) The 2011.3 release PPA bears false expectations and should be
removed now. In the future, we should not provide such
On Tue, 2011-12-06 at 19:54 -0800, Vishvananda Ishaya wrote:
Hello Everyone,
The Nova subteams have now been active for a month and a half. Some
things are going very well, and others could use a little improvement.
To keep things moving forward, I'd like to make the following changes:
Vishvananda Ishaya wrote:
2) *Closing down the team mailing lists.* Some of the lists have been a
bit active, but I think the approach that Soren has been using of
sending messages to the regular list with a team [header] is a better
approach. Examples:
[db] Should we use zookeeper?
Hi all,
I am trying to install OpenStack with XenServer. I followed what this
page says, but with no luck.
(http://wiki.openstack.org/XenServerDevelopment) Has anyone
successfully installed this?
Are there any guides available beyond the page above?
What's more, I tried this way:
On 06 Dec 2011 - 13:52, Duncan McGreggor wrote:
Yikes! I forgot an incredibly important one:
* What is the migration path story (diablo to essex, essex to f, etc.)?
I think it was going to be the Upgrades Team?
For orchestration (and now the scheduler improvements) we need to know when an
operation fails ... and specifically, which resource was involved. In the
majority of the cases it's an instance_uuid we're looking for, but it could be
a security group id or a reservation id.
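The idea of a failure event that carries the identity of the resource involved can be sketched as a simple payload builder. This is an illustrative structure only, not Nova's actual notification schema; the function and key names are hypothetical.

```python
# Hypothetical sketch of a failure notification payload. The key for
# the resource id is carried explicitly because it is usually an
# instance_uuid, but could be a security group id or a reservation id.

def make_failure_event(operation, resource_key, resource_id):
    """Build a failure event identifying the resource involved."""
    return {
        "event_type": "%s.failed" % operation,
        resource_key: resource_id,
    }

event = make_failure_event("run_instance", "instance_uuid", "blah")
```

A consumer can then dispatch on `event_type` and look up whichever resource key the event carries.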
With most of the
Can you talk a little more about how you want to apply this failure
notification? That is, what is the case where you are going to use the
information that an operation failed? In my head I have an idea of getting code
simplicity dividends from an "everything succeeds" approach to some of our
Sure, the problem I'm immediately facing is reclaiming resources from the
Capacity table when something fails. (we claim them immediately in the
scheduler when the host is selected to lessen the latency).
The other situation is Orchestration needs it for retries, rescheduling,
rollbacks and
Hey all,
A quick reminder that the QA team has our weekly meeting on
#openstack-meeting in about 30 minutes.
12:00 EST
09:00 PST
17:00 UTC
See you there,
-jay
___
Mailing list: https://launchpad.net/~openstack
Gotcha.
So the way this might work is, for example, when a run_instance fails on a
compute node, it would publish a "run_instance failed for uuid=blah" event.
There would be a subscriber associated with the scheduler listening for such
events--when it receives one it would go check the capacity
Exactly! ... or it could be handled in the notifier itself.
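The publish/subscribe flow discussed above can be illustrated with a toy in-process event bus: compute publishes a failure event, and a handler on the scheduler side reacts to it. All names here are illustrative, not the actual Nova notifier API.

```python
# Toy pub/sub sketch: a subscriber associated with the scheduler
# listens for failure events and frees whatever was claimed.

subscribers = {}

def subscribe(event_type, handler):
    subscribers.setdefault(event_type, []).append(handler)

def publish(event_type, payload):
    for handler in subscribers.get(event_type, []):
        handler(payload)

reclaimed = []

def on_run_instance_failed(payload):
    # Here the scheduler would check the capacity table and release
    # the claim held for this instance.
    reclaimed.append(payload["instance_uuid"])

subscribe("run_instance.failed", on_run_instance_failed)
publish("run_instance.failed", {"instance_uuid": "blah"})
```

As noted in the thread, the same reaction could instead live in the notifier itself rather than in a separate subscriber.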
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Mark Washenberger
Hi Sandy,
I'm wondering if it is possible to change the scheduler's rpc cast to
rpc call. This way the exceptions should be magically propagated back
to the scheduler, right? Naturally the scheduler can find another node
to retry or decide to give up and report failure. If we need to
provision
True ... this idea has come up before (and is still being kicked around). My
biggest concern is what happens if that scheduler dies? We need a mechanism
that can live outside of a single scheduler service.
The more of these long-running processes we leave in a service the greater the
impact
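The cast-versus-call trade-off can be shown with plain functions standing in for the RPC layer (this is not the actual nova.rpc API): a call blocks until the remote side returns, so a remote exception propagates back to the scheduler, while a cast is fire-and-forget.

```python
# Illustrative contrast between rpc.call and rpc.cast semantics.

def remote_run_instance(uuid):
    raise RuntimeError("no capacity on host for %s" % uuid)

def rpc_call(fn, *args):
    # Blocking: the caller sees the result or the exception.
    return fn(*args)

def rpc_cast(fn, *args):
    # Fire-and-forget: errors surface only via later notifications.
    try:
        fn(*args)
    except Exception:
        pass   # the caller is long gone; nothing propagates

error = None
try:
    rpc_call(remote_run_instance, "uuid-1")
except RuntimeError as exc:
    error = str(exc)

rpc_cast(remote_run_instance, "uuid-1")  # returns silently
```

The concern raised above is exactly the blocking in `rpc_call`: the scheduler holds state for the duration of the operation, so if that scheduler dies, the in-flight work is lost.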
*removing our Asynchronous nature.
(heh, such a key point to typo on)
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Sandy Walsh
Hi Jeff,
Can you be more specific about what doesn't work? There are lots of people
using OpenStack with XenServer, including Citrix and Rackspace, so I can
guarantee that it works! The docs are lacking though, that's for certain.
Where did you get stuck?
Thanks,
Ewan.
From:
On Tue, 2011-12-06 at 23:56 +0100, Loic Dachary wrote:
I think there is an opportunity to leverage the momentum that is
growing in each distribution by creating an openstack team for them to
meet. Maybe Stefano Maffulli has an idea about how to go in this
direction. The IRC channel was a great
On Wed, Dec 7, 2011 at 7:26 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:
For orchestration (and now the scheduler improvements) we need to know when
an operation fails ... and specifically, which resource was involved. In the
majority of the cases it's an instance_uuid we're looking for,
On 07 Dec 2011 - 08:22, Mark McLoughlin wrote:
On Tue, 2011-12-06 at 10:11 -0800, Duncan McGreggor wrote:
On 06 Dec 2011 - 14:28, Thierry Carrez wrote:
So the general consensus so far on this discussion seems to be:
(0) The 2011.3 release PPA bears false expectations and should be
On 12/07/2011 10:32 PM, Stefano Maffulli wrote:
On Tue, 2011-12-06 at 23:56 +0100, Loic Dachary wrote:
I think there is an opportunity to leverage the momentum that is
growing in each distribution by creating an openstack team for them to
meet. Maybe Stefano Maffulli has an idea about how to
Hi folks,
I want to make the Delete Server spec clear.
The API doc says:
"When a server is deleted, all images created from that server are also removed"
http://docs.openstack.org/api/openstack-compute/1.1/content/Delete_Server-d1e2883.html
IMO, "all images" means the VM images stored on the compute node.
I would interpret that to include the snapshots - but I'm not sure
that is what I'd expect as a user.
On Wed, Dec 7, 2011 at 5:05 PM, Nachi Ueno
ueno.na...@nttdata-agilenet.com wrote:
Hi folks
I want to make the Delete Server spec clear.
The API doc says:
When a server is deleted, all images
Hi Jesse,
Thanks.
Hmm, there is no implementation that cleans up snapshot images.
IMO, snapshot images should not be deleted, in case the API request was a mistake.
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L540
2011/12/7 Jesse Andrews anotherje...@gmail.com:
I would interpret
I would agree with that. If I delete a server instance, I don't want
to destroy snapshot images I took of that server...
-jay
On Wed, Dec 7, 2011 at 8:50 PM, Nachi Ueno
ueno.na...@nttdata-agilenet.com wrote:
Hi Jesse,
Thanks.
Hmm, there is no implementation that cleans up snapshot images.
IMO,
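The behavior being argued for in this thread can be sketched as follows: deleting a server removes the instance itself but leaves snapshot images untouched. The data structures and function here are hypothetical, not Nova's compute manager.

```python
# Hedged sketch: delete the server, preserve its snapshots, so a
# mistaken API request cannot destroy a user's backups.

instances = {"server-1": {"state": "active"}}
images = {
    "img-1": {"source_instance": "server-1", "kind": "snapshot"},
    "img-2": {"source_instance": "server-2", "kind": "snapshot"},
}

def delete_server(instance_id):
    # Remove only the instance record and its local disks; images
    # registered in the image store are intentionally left alone.
    instances.pop(instance_id, None)

delete_server("server-1")
```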
Excellent ideas. I especially like the standardized list of headers.
Just to be sure, Mondays at 2100 UTC? If so, no conflicts on my end.
On Wed, Dec 7, 2011 at 4:51 AM, Thierry Carrez thie...@openstack.org wrote:
Vishvananda Ishaya wrote:
2) *Closing down the team mailinglists.* Some of the