Re: [Openstack] Stable branch reviews

2011-11-10 Thread Mark McLoughlin
Hi Dave,

On Thu, 2011-11-10 at 17:33 +, Dave Walker wrote:
> On Thu, Nov 10, 2011 at 08:02:23AM -0800, James E. Blair wrote:
> 
> > 
> > > But wait! Vish +2ed a stable branch patch yesterday:
> > >
> > >   https://review.openstack.org/328
> > >
> > > James, help a poor confused soul out here, would you? :)
> > >
> > > Right, that makes sense. Only folks that understand the stable branch
> > > policy[1] should be allowed to +2 on the stable branch.
> > >
> > > Basically, a stable branch reviewer should only +2 if:
> > >
> > >   - It fixes a significant issue, seen, or potentially seen, by someone
> > > during real life use
> > >
> > >   - The fix, or equivalent, must be in master already
> > >
> > >   - The fix was either a fairly trivial cherry-pick that looks 
> > > equally correct for the stable branch, or that the fix has 
> > > sufficient technical review (e.g. a +1 from another stable 
> > > reviewer if it's fairly straightforward, or one or more +1s from 
> > > folks on core if it's really gnarly)
> > >
> > >   - If this reviewer proposed the patch originally, another stable
> > > branch reviewer should have +1ed it 
> > >
> > > All we need is an understanding of the policy and reasonable judgement,
> > > it's not rocket science. I'd encourage folks to apply to the team for
> > > membership after reviewing a few patches.
> > 
> > It sounds like the best way to implement this policy is to give
> > openstack-stable-maint exclusive approval authority on stable branches,
> > and then make sure people understand those rules when adding them to
> > that team.  If that's the consensus, I can make the change.
> 
> Hi,
> 
> Thanks for helping to add clarification to this.  From our
> perspective, I have confidence that ~*-core members know the
> difference between trunk and stable policy.  Therefore for the short
> term, it makes sense to have more eyes - especially those who are
> likely to have good knowledge of the internals.
> 
> Therefore, I am happy for ~*-core to still have +2 access; especially
> if it helps seed the maint team.
> 
> Going forward, it probably will make sense to have a distinction, but
> I feel it might be quite early for that to be a requirement.

I basically said the same thing initially to Thierry on irc, but he
turned me around.

I'm not actually sure all folks on core do grok (or even want to grok)
the subtleties of the stable branch policy and the tradeoffs you need to
make when deciding whether to +2 something on the stable branch. Thierry has
had some similar experience with the milestone-proposed branch, I guess.

Also, I'm not even sure all folks on core always notice that a patch is
being submitted against stable, not master :)

But, of course, if anyone in core wanted to help with +2ing on the
stable branch, we'd add them to stable-maint in a flash.
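
If anyone wants to get a feel for it first, the open stable branch reviews
are easy to pull up with a gerrit query along these lines (same query syntax
as in my other mail today):

  https://review.openstack.org/#q,status:open+project:openstack/nova+branch:stable/diablo,n,z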

Cheers,
Mark.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Stable branch reviews

2011-11-10 Thread Mark McLoughlin
On Thu, 2011-11-10 at 08:02 -0800, James E. Blair wrote:
> Mark McLoughlin  writes:
> > Only folks that understand the stable branch policy[1] should be 
> > allowed to +2 on the stable branch.
> >
> > Basically, a stable branch reviewer should only +2 if:
> >
> >   - It fixes a significant issue, seen, or potentially seen, by someone
> > during real life use
> >
> >   - The fix, or equivalent, must be in master already
> >
> >   - The fix was either a fairly trivial cherry-pick that looks 
> > equally correct for the stable branch, or that the fix has 
> > sufficient technical review (e.g. a +1 from another stable 
> > reviewer if it's fairly straightforward, or one or more +1s from 
> > folks on core if it's really gnarly)
> >
> >   - If this reviewer proposed the patch originally, another stable
> > branch reviewer should have +1ed it 
> >
> > All we need is an understanding of the policy and reasonable judgement,
> > it's not rocket science. I'd encourage folks to apply to the team for
> > membership after reviewing a few patches.
> 
> It sounds like the best way to implement this policy is to give
> openstack-stable-maint exclusive approval authority on stable branches,
> and then make sure people understand those rules when adding them to
> that team.  If that's the consensus, I can make the change.

Yes, that's what Thierry initially suggested and I'm persuaded now
too :)

Cheers,
Mark.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Stable branch reviews

2011-11-10 Thread Mark McLoughlin
On Thu, 2011-11-10 at 09:02 -0800, Vishvananda Ishaya wrote:
> On Nov 10, 2011, at 6:22 AM, Mark McLoughlin wrote:
> 
> > But wait! Vish +2ed a stable branch patch yesterday:
> > 
> >  https://review.openstack.org/328
> 
> 
> I don't mind losing my powers over stable/diablo.
> 
> On a related note, is there a way we can change the color scheme in
> gerrit (to red??) for stable branches?  I think there are a number of
> cases with core members reviewing stable/diablo patches thinking they
> were for trunk.

No doubt gerrit could be improved, but what works for me is to just look
at reviews for the master branch with e.g.

https://review.openstack.org/#q,status:open+project:openstack/nova+branch:master,n,z

Gerrit's query syntax is actually quite useful, e.g.

https://review.openstack.org/#q,status:open+project:openstack/nova+branch:master+owner:vish,n,z

Docs on it here:

http://review.coreboot.org/Documentation/user-search.html
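
The same queries also work over the ssh interface if you prefer the command
line; roughly (USERNAME being a placeholder for whatever your gerrit account
is called):

  # ssh form of the web queries above; plain-text output by default
  ssh -p 29418 USERNAME@review.openstack.org gerrit query \
      status:open project:openstack/nova branch:master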

Cheers,
Mark.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Problem with Gerrit Workflow (probably with ssh public key)

2011-11-10 Thread Dugger, Donald D
Nevermind.  I'm not sure how I did it but by some magic incantation of 
resetting login name/public key it's now working.

Sorry for the noise.
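
For anyone hitting the same thing: the incantation most likely amounted to
making ssh offer the key under my gerrit username rather than my local login
name. An ~/.ssh/config entry along these lines usually does it (the User
value is a placeholder for whatever your review.openstack.org settings page
shows):

  # sketch only -- adjust User and IdentityFile to your own gerrit account/key
  Host review.openstack.org
      Port 29418
      User mygerrituser
      IdentityFile ~/.ssh/id_rsa

After that, `ssh review.openstack.org gerrit ls-projects' should list the
available projects.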

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786


-Original Message-
From: Dugger, Donald D 
Sent: Thursday, November 10, 2011 6:14 PM
To: openstack@lists.launchpad.net
Subject: Problem with Gerrit Workflow (probably with ssh public key)

I'm having problems trying to follow the steps in Gerrit Workflow Quick 
Reference (wiki.openstack.org/GerritWorkflow), in fact I'm failing on the first 
step.  When I try and get the list of available projects I'm getting the 
failure:

Permission denied (publickey).

I've uploaded my public key and this key does work when I `ssh' into other 
machines so I think the key itself is correct but `review.openstack.org' seems 
to be unhappy with it.  Note that the machine I'm using is behind a NAT server 
but that hasn't caused problems with other `ssh' connections.  I've pasted the 
output from a `ssh -v ...' command but it doesn't really show anything.

If anyone has any ideas on what I'm doing wrong I would greatly appreciate it.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

--- cut here for log ---
OpenSSH_5.1p1 Debian-5, OpenSSL 0.9.8g 19 Oct 2007
debug1: Reading configuration data /home/n0ano/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to review.openstack.org [173.203.103.119] port 29418.
debug1: Connection established.
debug1: identity file /home/n0ano/.ssh/identity type -1
debug1: identity file /home/n0ano/.ssh/id_rsa type 1
debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-1024
debug1: Checking blacklist file /etc/ssh/blacklist.RSA-1024
debug1: identity file /home/n0ano/.ssh/id_dsa type 2
debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
debug1: Remote protocol version 2.0, remote software version 
GerritCodeReview_2.2.1-4-g4b9d4ed (SSHD-CORE-0.5.1-R1095809)
debug1: no match: GerritCodeReview_2.2.1-4-g4b9d4ed (SSHD-CORE-0.5.1-R1095809)
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-5
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-cbc hmac-md5 none
debug1: kex: client->server aes128-cbc hmac-md5 none
debug1: sending SSH2_MSG_KEXDH_INIT
debug1: expecting SSH2_MSG_KEXDH_REPLY
debug1: Host '[review.openstack.org]:29418' is known and matches the RSA host 
key.
debug1: Found key in /home/n0ano/.ssh/known_hosts:17
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /home/n0ano/.ssh/identity
debug1: Offering public key: /home/n0ano/.ssh/id_rsa
debug1: Authentications that can continue: publickey
debug1: Offering public key: /home/n0ano/.ssh/id_dsa
debug1: Authentications that can continue: publickey
debug1: No more authentication methods to try.
Permission denied (publickey).



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Problem with Gerrit Workflow (probably with ssh public key)

2011-11-10 Thread Dugger, Donald D
I'm having problems trying to follow the steps in Gerrit Workflow Quick 
Reference (wiki.openstack.org/GerritWorkflow), in fact I'm failing on the first 
step.  When I try and get the list of available projects I'm getting the 
failure:

Permission denied (publickey).

I've uploaded my public key and this key does work when I `ssh' into other 
machines so I think the key itself is correct but `review.openstack.org' seems 
to be unhappy with it.  Note that the machine I'm using is behind a NAT server 
but that hasn't caused problems with other `ssh' connections.  I've pasted the 
output from a `ssh -v ...' command but it doesn't really show anything.

If anyone has any ideas on what I'm doing wrong I would greatly appreciate it.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

--- cut here for log ---
OpenSSH_5.1p1 Debian-5, OpenSSL 0.9.8g 19 Oct 2007
debug1: Reading configuration data /home/n0ano/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to review.openstack.org [173.203.103.119] port 29418.
debug1: Connection established.
debug1: identity file /home/n0ano/.ssh/identity type -1
debug1: identity file /home/n0ano/.ssh/id_rsa type 1
debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-1024
debug1: Checking blacklist file /etc/ssh/blacklist.RSA-1024
debug1: identity file /home/n0ano/.ssh/id_dsa type 2
debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
debug1: Remote protocol version 2.0, remote software version 
GerritCodeReview_2.2.1-4-g4b9d4ed (SSHD-CORE-0.5.1-R1095809)
debug1: no match: GerritCodeReview_2.2.1-4-g4b9d4ed (SSHD-CORE-0.5.1-R1095809)
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-5
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-cbc hmac-md5 none
debug1: kex: client->server aes128-cbc hmac-md5 none
debug1: sending SSH2_MSG_KEXDH_INIT
debug1: expecting SSH2_MSG_KEXDH_REPLY
debug1: Host '[review.openstack.org]:29418' is known and matches the RSA host 
key.
debug1: Found key in /home/n0ano/.ssh/known_hosts:17
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /home/n0ano/.ssh/identity
debug1: Offering public key: /home/n0ano/.ssh/id_rsa
debug1: Authentications that can continue: publickey
debug1: Offering public key: /home/n0ano/.ssh/id_dsa
debug1: Authentications that can continue: publickey
debug1: No more authentication methods to try.
Permission denied (publickey).



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Draft API specifications

2011-11-10 Thread Vishvananda Ishaya
I think we need a straight copy of Nova 1.1 -> Nova 2.0 that simply renames 
everything from 1.1 -> 2.0 and changes the endpoint url v1.1 -> v2

Also a big note saying that 2.0 is exactly the same as 1.1 but was renamed to 
avoid confusion with 1.0 when we pulled out the minor version from the url.
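
Mechanically that copy should be little more than a recursive rename over the
1.1 spec sources; a rough sketch, with the directory names purely
illustrative:

  # illustrative only -- substitute the real spec source directories
  cp -r compute-api-1.1 compute-api-2.0
  # rename the endpoint version in the copied sources first: v1.1 -> v2
  grep -rl 'v1\.1' compute-api-2.0 | xargs -r sed -i 's/v1\.1/v2/g'
  # then the spec version itself: 1.1 -> 2.0
  grep -rl '1\.1' compute-api-2.0 | xargs -r sed -i 's/1\.1/2.0/g'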

Having 2.1 in draft seems great to me.  I think it is fine to have draft docs
available before RFC.  It gives us a place to start working on stuff.

Vish

On Nov 8, 2011, at 2:54 PM, Anne Gentle wrote:

> Hi all - 
> 
> We have three projects that need to have draft API docs (for a new API 
> version) published for feedback and consumption during the Essex timeframe. 
> (Quantum 1.0>1.1, Glance 1.1>2.0, and Nova 1.1>2.1)
> 
> I'd like to get ideas about where those should be published and some of the 
> requirements around their draft status. 
> 
> Is there a need for special treatment for "RFC" vs. "Draft" designations such 
> as RFC for a certain time period, then Draft?
> Do these drafts need to be published to docs.openstack.org/api, or is 
> that site for "final" APIs for end-users? I envision introducing more 
> confusion than is already present if we publish them side-by-side.
> Do these API drafts need their own site for the RFC/Draft period, such as 
> api.openstack.org/drafts?
> What do other projects do with their draft APIs that you like?
> Thanks for your input. At the team meeting I got *crickets* when I asked. :) 
> I'd like to set up a site that's as lights-out and automated as possible, no 
> need for me to be a gatekeeper on this info, but some up-front info 
> architecture will help people find and use the info.
> 
> Thanks,
> Anne
> Anne Gentle 
> a...@openstack.org
> my blog | my book | LinkedIn | Delicious | Twitter
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Hardware HA

2011-11-10 Thread Caitlin Bestler
Ryan Lane Wrote in response to Soren Hansen:

>> That's the whole point. For most interesting applications, "fast"
>> automatic migration isn't anywhere near fast enough. Don't try to 
>> avoid failure. Expect it and design around it.
>>

>This assumes all application designers are doing this. Most web applications do
>this fairly well, but most enterprise applications do this very poorly.

>Hardware HA is useful for more than just poorly designed applications though.
> I have a cloud instance that runs my personal website. I don't want to pay for
>two (or more, realistically) instances just to ensure that if my host dies 
>that my
>site will continue to run. My provider should automatically detect the hardware
>failure and re-launch my instance on another piece of hardware; it should also
>notify me that it happened, but that's a different story ;).

There are techniques to migrate VMs between non-HA hosts, and there are
techniques that allow applications to be written so that any instance of the 
server
can be lost without impairing the application (you just start a new instance of 
the
server, rather than migrating the server).

But neither of those solves the problem as well as hardware High Availability.
Whether Hardware HA is a cost effective solution is something that customers
will ultimately have to determine. 

A successful proposal would need to include identifying when a VM wants/needs
to be hosted on a Hardware-HA enhanced host, a method of identifying the 
Hardware-HA enhanced hosts, and the ability to track when a Hardware-HA
host is in degraded mode (i.e., it currently is one resource failure away from
an absolute failure).

I think those features can be designed in a way that does not impose too strong
of a burden on the core scheduling algorithm, as long as it isn't required to
evaluate a long list of "Hardware HA QoS metrics" to do optimal guest to host
assignments.

This is actually virtually the same issue as Object Storage support for
self-healing mirroring (via ZFS) that we have proposed for Swift. It defines
an enhanced capability for specific servers that can be characterized in a
way that the generic control and management plane algorithms can understand.
The hardest part of that understanding in both cases is the addition of a
"degraded" status for a server.

Without Hardware HA or self-healing mirroring a host/data server is either
"up" or "down". With Hardware HA and self-healing mirroring they can be
"degraded". The Hardware HA host can be down to a single hardware node. The
self-healing mirror could be down to a single working storage device. In
either case the remaining copy is still functional, but you probably want to
begin migrating the VMs/Swift partitions elsewhere (unless your mean time to
repair is really good).
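
To make the tri-state idea concrete, a minimal sketch of how scheduling or
placement logic might treat it (names are hypothetical, not Nova or Swift
code):

# Hypothetical sketch of the up/degraded/down idea -- not Nova or Swift code.
UP, DEGRADED, DOWN = 'up', 'degraded', 'down'

def accepts_new_guests(host_status):
    # Only a fully healthy HA host should take new placements.
    return host_status == UP

def needs_evacuation(host_status):
    # A degraded HA host still runs its guests, but they should be migrated
    # away before the remaining resource fails as well.
    return host_status == DEGRADED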


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Hardware HA

2011-11-10 Thread Ryan Lane
> I know. That's what makes them a poor fit for "the cloud".
>

Meh. Private clouds will still use applications like this. I think
"the cloud" is great for cloud providers, but why limit nova's
usefulness to just cloud providers?

"The cloud" way of doing things pushes the responsibility of keeping
applications alive to the client. There are a lot of clients that don't
have this level of sophistication.

>> Hardware HA is useful for more than just poorly designed applications
>> though. I have a cloud instance that runs my personal website. I don't
>> want to pay for two (or more, realistically) instances just to ensure
>> that if my host dies that my site will continue to run. My provider
>> should automatically detect the hardware failure and re-launch my
>> instance on another piece of hardware; it should also notify me that
>> it happened, but that's a different story ;).
>
> I'm not sure I count that as High Availability. It's more like
> Eventual Availability. :)
>

So, this is one HA mode for VMware. There is also a newer HA mode that
is much more expensive (from the resources perspective) that keeps a
shadow copy of a virtual machine on another piece of hardware, and if
the primary instance's hardware dies, it automatically switches over
to the shadow copy.

Both modes are really useful. There's a huge level of automation
needed for doing things "the cloud way" that is completely
unnecessary. I don't want to have to monitor my instances to see if
one died due to a hardware failure, then start new ones, then pool
them, then depool the dead ones. I want my provider to handle hardware
deaths for me. If I have 200 web server instances, and 40 of them die
because they are on nodes that die, I want them to restart somewhere
else. It removes all the bullshit automation I'd need to do otherwise.

- Ryan Lane

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Hardware HA

2011-11-10 Thread Ryan Lane
> That's the whole point. For most interesting applications, "fast"
> automatic migration isn't anywhere near fast enough. Don't try to
> avoid failure. Expect it and design around it.
>

This assumes all application designers are doing this. Most web
applications do this fairly well, but most enterprise applications do
this very poorly.

Hardware HA is useful for more than just poorly designed applications
though. I have a cloud instance that runs my personal website. I don't
want to pay for two (or more, realistically) instances just to ensure
that if my host dies that my site will continue to run. My provider
should automatically detect the hardware failure and re-launch my
instance on another piece of hardware; it should also notify me that
it happened, but that's a different story ;).

- Ryan Lane

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Draft API specifications

2011-11-10 Thread Jay Pipes
On Thu, 2011-11-10 at 13:22 -0600, Anne Gentle wrote:
> Thanks, Jay. I usually try to be more careful with the API names, so
> thanks for clarifying.

Sorry if I sounded patronizing there.. didn't mean to be.

> I think the landing page containing Draft API docs looks something
> like the attached mockup, let me know your feedback.

Looks great to me.

> Jay, you're the only PTL using Google Docs for feedback, others have
> used the doc comments system, Disqus. I can set up doc comments feeds
> specifically for RFC periods on particular specs, though your Google
> Docs approach works fine also. It would be nice to standardize but I
> also like that Google docs lets you click "Resolved" on a comment. 

Yeah, I like the fact that you can comment on a specific block of the
proposal as well, instead of all in one list of comments at the bottom
of a page. Also, yes, the Resolved comment feature is very nice for this
kind of iterative feedback.

That said, however, if it is your wish to do all draft API proposals
using a single system, so that it is easier for you to maintain, I will
bend to your will :)

> I have the ability to make DRAFT in big red letters on the output. I
> could also put RFC as a watermark on the page during the RFC period in
> addition to the DRAFT in each page. 
> 
> I mostly want to ensure easy access so that the draft APIs get plenty
> of comments and reviews.

Yep, me too. :)

-jay


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Hardware HA

2011-11-10 Thread Soren Hansen
2011/11/10 Ryan Lane :
>> That's the whole point. For most interesting applications, "fast"
>> automatic migration isn't anywhere near fast enough. Don't try to
>> avoid failure. Expect it and design around it.
> This assumes all application designers are doing this. Most web
> applications do this fairly well, but most enterprise applications do
> this very poorly.

I know. That's what makes them a poor fit for "the cloud".

> Hardware HA is useful for more than just poorly designed applications
> though. I have a cloud instance that runs my personal website. I don't
> want to pay for two (or more, realistically) instances just to ensure
> that if my host dies that my site will continue to run. My provider
> should automatically detect the hardware failure and re-launch my
> instance on another piece of hardware; it should also notify me that
> it happened, but that's a different story ;).

I'm not sure I count that as High Availability. It's more like
Eventual Availability. :)

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Bug fixes and test cases submitted against stable/diablo

2011-11-10 Thread Nachi Ueno
Hi folks

Thank you for your help, Mark, Jay and reviewers.
I removed all review requests for diablo/stable from Gerrit.
We will follow the community policy.

Currently our test code and bug fixes are based on stable/diablo.
For each branch, "forward-porting" is needed.

12 bug patch branches are in progress (they are almost done)
34 bug patch branches are on github(*)
30 test code branches are on github.
(*) https://github.com/ntt-pf-lab/nova/branches

From the next work after these branches, we will follow the policy (Essex first).

However, for now, we do not have enough man-power, so please help us.
I wrote a script which shows the bug description, conflicting files and
the merge command (see https://gist.github.com/1355816).

Each branch is linked to a bug report.

If you can help with the forward-porting work, please assign the bug
to yourself.
(Thanks Jay!)

The naming rule in our repository is like this:
https://github.com/ntt-pf-lab/nova/tree/openstack-qa-nova-(bugID)

For now, there are bugs which are not fixed yet, so the test code fails.
I think we should start with the bug fixes.
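
For anyone picking up one of these branches, the flow is basically: re-apply
the patch on master, get it merged there, then cherry-pick it back to
stable/diablo. A rough sketch of the git side (branch names, the "gerrit"
remote and the commit ids are all placeholders for however your checkout is
set up):

# forward-port: re-apply the fix from the qa branch onto master
git checkout -b bugNNNNNN-master origin/master
git cherry-pick -x SHA_FROM_QA_BRANCH    # resolve conflicts, re-run tests
git push gerrit HEAD:refs/for/master
# once it has merged on master, backport it to the stable branch
git checkout -b bugNNNNNN-diablo origin/stable/diablo
git cherry-pick -x SHA_NOW_ON_MASTER
git push gerrit HEAD:refs/for/stable/diablo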

Cheers
Nati

2011/11/10 Jay Pipes :
> On Wed, 2011-11-09 at 14:57 +0100, Thierry Carrez wrote:
>> Soren Hansen wrote:
>> > 2011/11/9 Nachi Ueno :
>> >> I understand your point. Stop QAing stable/diablo and focus on Essex.
>> >
>> > Oh, no no. That's not the point. I'm thrilled to have you work on
>> > QAing Diablo. The only issue is that the fixes you come up with should
>> > be pushed to Essex first. There are two reasons for this:
>> >
>> >  * If we don't push the fixes to Essex, the problems will still be
>> > present in Essex and every release after that.
>> >
>> >  * Having them in Essex lets us try them out, vet them and validate
>> > them more thoroughly before we let them into the stable branch. When a
>> > patch lands in the stable branch it has to be well tested already
>> > (unless of course Essex has deviated too much, in which case we'll
>> > have to accept the risk of getting it into Diablo directly).
>>
>> +1
>>
>> You should submit patches to master and then backport them to
>> stable/diablo, rather than proposing them for stable/diablo directly.
>> That ensures your work benefits both branches: making diablo better
>> without making essex worse than diablo.
>>
>> If that's just too much work, maybe you should raise the issue at the
>> next QA meeting to try to get some outside help ?
>
> At the QA meeting yesterday, I offered my help to Nati. I will handle
> proposing his patches to Essex up to a future date where Nati and his
> team will switch to code against Essex, not Diablo/stable and propose
> first to master, then others will backport to diablo/stable.
>
> Nati and I will decide on that future date for his team to switch their
> focus to Essex trunk and not have to have someone manually
> "forward-port" these patches to trunk.
>
> Cheers,
> -jay
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Essex-1 milestone available for Keystone, Glance, Nova and Horizon

2011-11-10 Thread Thierry Carrez
Hi everyone,

It's my great pleasure to announce the immediate availability for
Keystone, Glance, Nova and Horizon of the first milestone of the Essex
development cycle, called "essex-1".

This milestone picks up all development made on trunk since Diablo
release branches were cut in early September. You can see the full list
of new features and fixed bugs, as well as tarball downloads, at:

https://launchpad.net/keystone/essex/essex-1
https://launchpad.net/glance/essex/essex-1
https://launchpad.net/nova/essex/essex-1
https://launchpad.net/horizon/essex/essex-1

Note that you can also test the Glance & Nova milestones on Ubuntu by
enabling the following PPAs:
 ppa:nova-core/milestone
 ppa:glance-core/milestone

The next milestone, essex-2, is scheduled for release on December 15th.
Enjoy !

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Hardware HA

2011-11-10 Thread Soren Hansen
2011/11/10 Viacheslav Biriukov :
> Hm
> If we are planning VM hosting we work on the other level. So if a hw node fails we
> need fast automatic migration to another node.

That's the whole point. For most interesting applications, "fast"
automatic migration isn't anywhere near fast enough. Don't try to
avoid failure. Expect it and design around it.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Which nova scheduler for different hardware sizes?

2011-11-10 Thread Christian Wittwer
Thanks a lot for your answers. Unfortunately I can't follow the trunk
and I have to use the Diablo release. Is it possible to backport that
new scheduler to Diablo?
Anyway, I gave the least cost scheduler a try; it loads but never
schedules a VM correctly.

--compute_scheduler_driver=nova.scheduler.least_cost.LeastCostScheduler

But then the log shows this message when I try to launch an instance:

2011-11-10 19:31:15,588 DEBUG nova.scheduler.least_cost [-] Weighted
Costs => [] from (pid=1697) weigh_hosts
/usr/lib/pymodules/python2.7/nova/scheduler/least_cost.py:170
2011-11-10 19:31:15,591 ERROR nova.rpc [-] Exception during message handling
(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE:   File
"/usr/lib/pymodules/python2.7/nova/rpc/impl_kombu.py", line 620, in
_process_data
(nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE:   File
"/usr/lib/pymodules/python2.7/nova/scheduler/manager.py", line 103, in
_schedule
(nova.rpc): TRACE: host = real_meth(*args, **kwargs)
(nova.rpc): TRACE:   File
"/usr/lib/pymodules/python2.7/nova/scheduler/abstract_scheduler.py",
line 231, in schedule_run_instance
(nova.rpc): TRACE: raise driver.NoValidHost(_('No hosts were available'))
(nova.rpc): TRACE: NoValidHost: No hosts were available
(nova.rpc): TRACE:

My compute hosts are up and running, and there are no other instances running.

root@unic-dev-os-controller:~# nova-manage service list
Binary           Host                      Zone   Status    State   Updated_At
nova-compute     unic-dev-os-controller    nova   enabled   :-)     2011-11-10 18:35:06
nova-scheduler   unic-dev-os-controller    nova   enabled   :-)     2011-11-10 18:35:06
nova-network     unic-dev-os-controller    nova   enabled   :-)     2011-11-10 18:35:06
nova-compute     unic-dev-os-compute1      nova   enabled   :-)     2011-11-10 18:35:06

Any ideas why the scheduler does not find a valid host?

Christian

2011/11/1 Lorin Hochstein :
> Christian:
>
> Sandy's branch just landed in the repository. You should be able to use the 
> distributed scheduler with the least cost functionality by specifying the 
> following flag in nova.conf for the nova-scheduler service:
>
> --compute_scheduler_driver=nova.scheduler.distributed_scheduler.DistributedScheduler
>
> By default, this uses the 
> nova.scheduler.least_cost.compute_fill_first_cost_fn weighting function.
>
> Note, however, that this function will favor scheduling instances to nodes 
> that have the smallest amount of RAM available that can still fit the 
> instance. If you're looking for the opposite effect (deploy to the node that 
> has the most amount of RAM free), then you'll have to write your own cost 
> function.  One way would be to add the following method to least_cost.py:
>
>
> def compute_least_loaded_cost_fn(host_info):
>    return -compute_fill_first_cost_fn(host_info)
>
>
> Then add the following flag to your nova.conf
>
> --least_cost_functions=nova.scheduler.least_cost.compute_least_loaded_cost_fn
>
>
> Lorin
> --
> Lorin Hochstein, Computer Scientist
> USC Information Sciences Institute
> 703.812.3710
> http://www.east.isi.edu/~lorin
>
>
>
>
> On Nov 1, 2011, at 11:37 AM, Sandy Walsh wrote:
>
>> I'm hoping to land this branch asap.
>> https://review.openstack.org/#change,1192
>>
>> It replaces all the "kind of alike" schedulers with a single 
>> DistributedScheduler.
>>
>> -S
>>
>> 
>> From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
>> [openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf 
>> of Christian Wittwer [wittwe...@gmail.com]
>> Sent: Tuesday, November 01, 2011 5:38 AM
>> To: Lorin Hochstein
>> Cc: openstack@lists.launchpad.net
>> Subject: Re: [Openstack] Which nova scheduler for different hardware sizes?
>>
>> Lorin,
>> Thanks for your reply. Well the least cost scheduler with these cost
>> functions looks interesting.
>> Unfortunately there is not much documenation about it. Can somebody
>> give me an example how to switch to that scheduler using the memory
>> cost function which already exist?
>>
>> Cheers,
>> Christian
>>
>> 2011/10/24 Lorin Hochstein :
>>> Christian:
>>> You could use the least cost scheduler, but I think you'd have to write your
>>> own cost function to take into account the different number of cores.
>>> Looking at the source, the only cost function it comes with only takes into
>>> account the amount of memory that's free, not loading in terms of total
>>> physical cores and allocated virtual cores. (We use a custom scheduler at
>>> our site, so I don't have any firsthand experience with the least-cost
>>> scheduler).
>>> Lorin
>>> --
>>> Lorin Hochstein, Computer Scientist
>>> USC Information Sciences Institute
>>> 703.812.3710
>>> http://www.east.isi.edu/~lorin
>>>
>>>
>>>
>>> On Oct 22, 2011, at 3:17 AM, Christian Wittwer wrote:
>>>
>>> I'm planning to build a ope

Re: [Openstack] Bug fixes and test cases submitted against stable/diablo

2011-11-10 Thread Jay Pipes
On Wed, 2011-11-09 at 14:57 +0100, Thierry Carrez wrote:
> Soren Hansen wrote:
> > 2011/11/9 Nachi Ueno :
> >> I understand your point. Stop QAing stable/diablo and focus on Essex.
> > 
> > Oh, no no. That's not the point. I'm thrilled to have you work on
> > QAing Diablo. The only issue is that the fixes you come up with should
> > be pushed to Essex first. There are two reasons for this:
> > 
> >  * If we don't push the fixes to Essex, the problems will still be
> > present in Essex and every release after that.
> > 
> >  * Having them in Essex lets us try them out, vet them and validate
> > them more thoroughly before we let them into the stable branch. When a
> > patch lands in the stable branch it has to be well tested already
> > (unless of course Essex has deviated too much, in which case we'll
> > have to accept the risk of getting it into Diablo directly).
> 
> +1
> 
> You should submit patches to master and then backport them to
> stable/diablo, rather than proposing them for stable/diablo directly.
> That ensures your work benefits both branches: making diablo better
> without making essex worse than diablo.
> 
> If that's just too much work, maybe you should raise the issue at the
> next QA meeting to try to get some outside help ?

At the QA meeting yesterday, I offered my help to Nati. I will handle
proposing his patches to Essex up to a future date where Nati and his
team will switch to code against Essex, not Diablo/stable and propose
first to master, then others will backport to diablo/stable.

Nati and I will decide on that future date for his team to switch their
focus to Essex trunk and not have to have someone manually
"forward-port" these patches to trunk.

Cheers,
-jay


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Stable branch reviews

2011-11-10 Thread Dave Walker
On Thu, Nov 10, 2011 at 08:02:23AM -0800, James E. Blair wrote:

> 
> > But wait! Vish +2ed a stable branch patch yesterday:
> >
> >   https://review.openstack.org/328
> >
> > James, help a poor confused soul out here, would you? :)
> >
> > Right, that makes sense. Only folks that understand the stable branch
> > policy[1] should be allowed to +2 on the stable branch.
> >
> > Basically, a stable branch reviewer should only +2 if:
> >
> >   - It fixes a significant issue, seen, or potentially seen, by someone
> > during real life use
> >
> >   - The fix, or equivalent, must be in master already
> >
> >   - The fix was either a fairly trivial cherry-pick that looks 
> > equally correct for the stable branch, or that the fix has 
> > sufficient technical review (e.g. a +1 from another stable 
> > reviewer if it's fairly straightforward, or one or more +1s from 
> > folks on core if it's really gnarly)
> >
> >   - If this reviewer proposed the patch originally, another stable
> > branch reviewer should have +1ed it 
> >
> > All we need is an understanding of the policy and reasonable judgement,
> > it's not rocket science. I'd encourage folks to apply to the team for
> > membership after reviewing a few patches.
> 
> It sounds like the best way to implement this policy is to give
> openstack-stable-maint exclusive approval authority on stable branches,
> and then make sure people understand those rules when adding them to
> that team.  If that's the consensus, I can make the change.

Hi,

Thanks for helping to add clarification to this.  From our
perspective, I have confidence that ~*-core members know the
difference between trunk and stable policy.  Therefore for the short
term, it makes sense to have more eyes - especially those who are
likely to have good knowledge of the internals.

Therefore, I am happy for ~*-core to still have +2 access; especially
if it helps seed the maint team.

Going forward, it probably will make sense to have a distinction, but
I feel it might be quite early for that to be a requirement.

Thanks.

Kind Regards,
Dave Walker


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Hardware HA

2011-11-10 Thread Ilya Alekseyev
Hi Armando!

It is a very interesting feature. Do you already have specs for this
blueprint, or maybe an etherpad?

Regards,
Ilya

On 10 November 2011 20:16, Armando Migliaccio <
armando.migliac...@eu.citrix.com> wrote:

> There is a blueprint that touches these aspects:
>
> https://blueprints.launchpad.net/nova/+spec/guest-ha
>
> This is tailored at use cases where you cannot redesign an existing app.
>
> The work is at the early stages, but you are more than welcome to join the
> effort!
>
> Cheers,
> Armando
>
> > -Original Message-
> > From: openstack-bounces+armando.migliaccio=
> eu.citrix@lists.launchpad.net
> > [mailto:openstack-
> > bounces+armando.migliaccio=eu.citrix@lists.launchpad.net] On Behalf
> Of
> > Soren Hansen
> > Sent: 10 November 2011 15:51
> > To: Viacheslav Biriukov
> > Cc: openstack@lists.launchpad.net
> > Subject: Re: [Openstack] Hardware HA
> >
> > 2011/11/10 Viacheslav Biriukov :
> > > Hi all.
> > > What are the best practices for HA of the hardware compute-node, and
> virtual
> > > machines.
> > > After googling I found matahari, pacemaker-cloud, but nothing about
> > > built-in features of openstack.
> > > 1) How do you create such environments?
> > > 2) Is it the right way to use pacemaker-cloud with openstack? Is it
> stable?
> >
> > I'd avoid depending on anything like that altogether. Try to design
> > your application so that it doesn't depend on any one instance being
> > up. It'll work out better in the long run.
> >
> > --
> > Soren Hansen| http://linux2go.dk/
> > Ubuntu Developer| http://www.ubuntu.com/
> > OpenStack Developer | http://www.openstack.org/
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Stable branch reviews

2011-11-10 Thread Vishvananda Ishaya

On Nov 10, 2011, at 6:22 AM, Mark McLoughlin wrote:

> But wait! Vish +2ed a stable branch patch yesterday:
> 
>  https://review.openstack.org/328


I don't mind losing my powers over stable/diablo.

On a related note, is there a way we can change the color scheme in gerrit (to 
red??) for stable branches?  I think there are a number of cases with core 
members reviewing stable/diablo patches thinking they were for trunk.

Vish

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Host Aggregates ...

2011-11-10 Thread Vishvananda Ishaya
The main thing that the idea of host-aggregates provides is the ability to 
specify metadata at a group of hosts level.  If you put 10 hosts into a single 
aggregate, you can specify characteristics for those hosts as a group (i.e. 
which san they are backing files onto, whether they support live migration, 
etc.)  It is also the way that xen (and esx) model things internally, so if we 
don't do it we have to map our custom tags  to resource pools in the hypervisor 
anyway. Finally, it also simplifies some scheduler logic.  For example if you 
are trying to find a valid host to migrate instances onto, you can just ask for 
hosts in the same "group" and it is the cloud administrator's responsibility to 
make sure that all of the systems in that group have the required 
functionality.  These "groups" could be implemented with tags, but I think 
conceptually the idea of a tag implies a more tenuous relationship than 
aggregate group.

I look at it as a grouping of hosts that is smaller than a zone (say a 
cluster).  Large enough for shared metadata but small enough so that splitting 
the db and api would be a pain.
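
In very rough code terms (purely illustrative, not the blueprint's API), the
scheduler-side use is just a metadata lookup shared by a group of hosts:

# Purely illustrative sketch of the aggregate idea -- not the blueprint's API.
aggregates = {
    'pool-1': {
        'hosts': ['compute1', 'compute2', 'compute3'],
        'metadata': {'san': 'san-a', 'live_migration': True},
    },
}

def hosts_in_same_aggregate(host):
    # Candidate targets for e.g. live migration: stay inside the group and
    # trust the admin that every host in it shares the same capabilities.
    for agg in aggregates.values():
        if host in agg['hosts']:
            return [h for h in agg['hosts'] if h != host]
    return []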

Vish

On Nov 10, 2011, at 5:01 AM, Sandy Walsh wrote:

> Ok, that helps ... now I see the abstraction you're going for (a new layer 
> under availability zones).
> 
> Personally I prefer a tagging approach to a modeled hierarchy. It was 
> something we debated at great length with Zones. In this case, the "tag" 
> would be in the capabilities assigned to the host.
> 
> I think both availability zones (and host aggregates) should be modeled using 
> tags/capabilities without having to explicitly model it as a tree or in the 
> db ... which is how I see this evolving. At the scheduler level we should be 
> able to make decisions using simple tag collections.
> 
> "WestCoast, HasGPU, GeneratorBackup, PriorityNetwork"
> 
> Are we saying the same thing?
> 
> Are there use cases that this approach couldn't handle?
> 
> -S
> 
> 
> From: Armando Migliaccio [armando.migliac...@eu.citrix.com]
> Sent: Thursday, November 10, 2011 8:50 AM
> To: Sandy Walsh
> Cc: openstack@lists.launchpad.net
> Subject: RE: Host Aggregates ...
> 
> Hi Sandy,
> 
> Thanks for taking the time to read this.
> 
> My understanding is that a typical Nova deployment would span across multiple 
> zones, that zones may have subzones, and that child zones will have a number 
> of availability zones in them; please do correct me if I am wrong :)
> 
> That stated, it was assumed that an aggregate will be a grouping of servers 
> within an availability zone (hence the introduction of the extra concept), 
> and would be used to manage hypervisor pools when and if required. This 
> introduces benefits like VM live migration, VM HA and zero-downtime host 
> upgrades. The introduction of hypervisor pools is just the easy way to get 
> these benefits in the short term.
> 
> Going back to your point, it is possible to match "host-aggregates" with 
> "single-zone that uses capabilities" on the implementation level (assumed 
> that it is okay to be unable to represent aggregates as children of 
> availability zones). Nevertheless, I still see zones and aggregates as being 
> different on the conceptual level.
> 
> What is your view if we went with the approach of implementing an aggregate 
> as a special "single-zone that uses capabilities"? Would there be a risk of 
> tangling the zone management API a bit?
> 
> Thanks for feedback!
> 
> Cheers,
> Armando
> 
>> -Original Message-
>> From: Sandy Walsh [mailto:sandy.wa...@rackspace.com]
>> Sent: 09 November 2011 21:10
>> To: Armando Migliaccio
>> Cc: openstack@lists.launchpad.net
>> Subject: Host Aggregates ...
>> 
>> Hi Armando,
>> 
>> I finally got around to reading
>> https://blueprints.launchpad.net/nova/+spec/host-aggregates.
>> 
>> Perhaps you could elaborate a little on how this differs from host
>> capabilities (key-value pairs associated with a service) that the scheduler
>> can use when making decisions?
>> 
>> The distributed scheduler doesn't need zones to operate, but will use them if
>> available. Would host-aggregates simply be a single-zone that uses
>> capabilities?
>> 
>> Cheers,
>> Sandy
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Tutorials of how to install openstack swift into centos 6

2011-11-10 Thread Stefano Maffulli
Hello pf

thanks for sending this. Did you write this down on a wiki page
somewhere, too? I fear that in the mailing list archive this won't get
the visibility it deserves.
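
One small addition worth folding in if it does end up on a wiki page: the
*.ring.gz files produced by the rebalance commands in step 21 below have to
be copied to /etc/swift on every storage node, roughly like this (IPs as used
in the ring commands; user and paths to taste):

for node in 10.38.10.109 10.38.10.119 10.38.10.114; do
    scp /etc/swift/*.ring.gz root@$node:/etc/swift/
done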

Cheers,
stef

On Wed, 2011-11-09 at 17:00 +1100, pf shineyear wrote:
> openstack swift install on centos 6
> 
> 1. proxy install
> 
> 1) check your python version must >= 2.6
> 
> 2) yum install libvirt
> 
> 3) yum install memcached
> 
> 4) yum install xfsprogs
> 
> 5) yum install python-setuptools python-devel python-simplejson
> python-config
> 
> 6) easy_install webob
> 
> 7) easy_install eventlet
> 
> 8) install xattr-0.6.2.tar.gz, python setup.py build, python setup.py
> install
> 
> 9) install coverage-3.5.1.tar.gz, python setup.py build, python
> setup.py install
> 
> 10) wget "http://www.openstack.org/projects/storage/latest-release/";
>  python setup.py build
>  python setup.py install
> 
> 11) wget
> "https://github.com/downloads/gholt/swauth/swauth-lucid-build-1.0.2-1.tgz";
> python setup.py build
> python setup.py install
> 
> 12) mkdir /etc/swift
> 
> 13) yum install openssh-server
> 
> 14) yum install git-core
> 
> 15) vi /etc/swift/swift.conf
> 
> [swift-hash]
> # random unique string that can never change (DO NOT LOSE)
> swift_hash_path_suffix = `od -t x8 -N 8 -A n  
> 
> 16) goto /etc/swift/
> 
> 17) openssl req -new -x509 -nodes -out cert.crt -keyout cert.key
> 
> 18) service memcached restart, ps -aux | grep mem
> 
> 495  16954  0.0  0.1 330756   816 ?Ssl  18:19   0:00
> memcached -d -p 11211 -u memcached -m 64 -c 1024
> -P /var/run/memcached/memcached.pid
> 
> 19) easy_install netifaces
> 
> 20) vi /etc/swift/proxy-server.conf
> 
> [DEFAULT]
> cert_file = /etc/swift/cert.crt
> key_file = /etc/swift/cert.key
> bind_port = 8080
> workers = 8
> user = swift
> log_facility = LOG_LOCAL0
> allow_account_management = true
> 
> [pipeline:main]
> pipeline = healthcheck cache swauth proxy-server
> 
> [app:proxy-server]
> use = egg:swift#proxy
> allow_account_management = true
> account_autocreate = true
> log_facility = LOG_LOCAL0
> log_headers = true
> log_level =DEBUG
> 
> [filter:swauth]
> use = egg:swauth#swauth
> #use = egg:swift#swauth
> default_swift_cluster = local#https://10.38.10.127:8080/v1
> # Highly recommended to change this key to something else!
> super_admin_key = swauthkey
> log_facility = LOG_LOCAL1
> log_headers = true
> log_level =DEBUG
> allow_account_management = true
> 
> [filter:healthcheck]
> use = egg:swift#healthcheck
> 
> [filter:cache]
> use = egg:swift#memcache
> memcache_servers = 10.38.10.127:11211
> 
> 
> 21) config /etc/rsyslog.conf
> 
> local0.*        /var/log/swift/proxy.log
> local1.*        /var/log/swift/swauth.log
> 
> 
> 
> 
> 
> 21) build the ring, i have 3 node, 1 proxy
> 
>  swift-ring-builder account.builder create 18 3 1
>  swift-ring-builder account.builder add z1-10.38.10.109:6002/sdb1 1
>  swift-ring-builder account.builder add z2-10.38.10.119:6002/sdb1 1
>  swift-ring-builder account.builder add z3-10.38.10.114:6002/sdb1 1
> 
>  swift-ring-builder account.builder rebalance
>  swift-ring-builder object.builder create 18 3 1
>  swift-ring-builder object.builder add z1-10.38.10.109:6000/sdb1 1
>  swift-ring-builder object.builder add z2-10.38.10.119:6000/sdb1 1
>  swift-ring-builder object.builder add z3-10.38.10.114:6000/sdb1 1
>  swift-ring-builder object.builder rebalance
> 
>  swift-ring-builder container.builder create 18 3 1
>  swift-ring-builder container.builder add z1-10.38.10.109:6001/sdb1 1
>  swift-ring-builder container.builder add z2-10.38.10.119:6001/sdb1 1
>  swift-ring-builder container.builder add z3-10.38.10.114:6001/sdb1 1
>  swift-ring-builder container.builder rebalance
> 
> 
> 22) easy_install configobj
> 
> 23) easy_install nose
> 
> 24) easy_install simplejson
> 
> 25) easy_install xattr
> 
> 26) easy_install eventlet
> 
> 27) easy_install greenlet
> 
> 28) easy_install pastedeploy
> 
> 29) groupadd swift
> 
> 30) useradd -g swift swift
> 
> 31) chown -R swift:swift /etc/swift/
> 
> 32) service rsyslog restart
> 
> 33) swift-init proxy start
> 
> 
> 2. storage node install
> 
> 1) yum install python-setuptools python-devel python-simplejson
> python-configobj python-nose
> 
> 2) yum install openssh-server
> 
> 3) easy_install webob
> 
> 4) yum install curl gcc memcached sqlite xfsprogs
> 
> 
> 
> 5) easy_install eventlet
> 
> 
> 6) wget
> "http://pypi.python.org/packages/source/x/xattr/xattr-0.6.2.tar.gz#md5=5fc899150d03c082558455483fc0f89f";
> 
> 
>  python setup.py build
> 
>  python setup.py install
> 
> 
> 
> 7)  wget
> "http://pypi.python.org/packages/source/c/coverage/coverage-3.5.1.tar.gz#md5=410d4c8155a4dab222f2bc51212d4a24";
> 
> 
>  python setup.py build
> 
>  python setup.py install
> 
> 
> 8) yum install libvirt
> 
> 
> 9) groupadd swift
> 
> 
> 10) useradd -g swift swift
> 
> 
> 11) mkdir -p /etc/swift
> 
> 
> 12) chown

Re: [Openstack] Hardware HA

2011-11-10 Thread Razique Mahroua
Funny you bring that up today; I spent the day working on that. I've
implemented GlusterFS on my running OpenStack installation and written a
script along with it. Here is the implementation:

node1 - 1 instance running
the node 1 crashes (could be anything atm)
the script detects the node is gone (to be defined: heartbeat, monitoring,
etc...) and relaunches the instance on the specified node.

I've tested it and it works successfully. Just need to write the final
touches.

Regards
Razique Mahroua
razique.mahr...@gmail.com
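
In outline the script is just a heartbeat loop plus a relaunch; a minimal
sketch of that shape, with the two helpers left as hypothetical placeholders
since the heartbeat mechanism and the relaunch call are exactly the parts
still to be defined:

# Minimal sketch of the watchdog shape described above -- the two helpers
# are hypothetical placeholders, not an existing API.
import time

def host_is_alive(host):
    # hypothetical heartbeat check (ping, monitoring agent, ...)
    raise NotImplementedError

def relaunch_instances(dead_host, target_host):
    # hypothetical relaunch of dead_host's instances on target_host,
    # reusing their GlusterFS-backed disks
    raise NotImplementedError

def watchdog(hosts, target_host, interval=10):
    while True:
        for host in hosts:
            if not host_is_alive(host):
                relaunch_instances(host, target_host)
        time.sleep(interval)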

On 10 Nov 2011, at 17:15, Viacheslav Biriukov wrote:

Hm
If we are planning VM hosting we work on the other level. So if a hw node
fails we need fast automatic migration to another node.

2011/11/10 Soren Hansen 
2011/11/10 Viacheslav Biriukov :
> Hi all.
> What are the best practices for HA of the hardware compute-node, and virtual
> machines.
> After googling I found matahari, pacemaker-cloud, but nothing about
> built-in features of openstack.
> 1) How do you create such environments?
> 2) Is it the right way to use pacemaker-cloud with openstack? Is it stable?

I'd avoid depending on anything like that altogether. Try to design
your application so that it doesn't depend on any one instance being
up. It'll work out better in the long run.

--
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/
--
Viacheslav Biriukov
BR
http://biriukov.com

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Hardware HA

2011-11-10 Thread Viacheslav Biriukov
Hm
If we are planning VM hosting we work on the other level. So if a hw node fails
we need fast automatic migration to another node.

2011/11/10 Soren Hansen 

> 2011/11/10 Viacheslav Biriukov :
> > Hi all.
> > What are the best practices for HA of the hardware compute-node, and
> virtual
> > machines.
> > After googling I found matahari, pacemaker-cloud, but nothing about
> > built-in features of openstack.
> > 1) How do you create such environments?
> > 2) Is it the right way to use pacemaker-cloud with openstack? Is it
> stable?
>
> I'd avoid depending on anything like that altogether. Try to design
> your application so that it doesn't depend on any one instance being
> up. It'll work out better in the long run.
>
> --
> Soren Hansen| http://linux2go.dk/
> Ubuntu Developer| http://www.ubuntu.com/
> OpenStack Developer | http://www.openstack.org/
>



-- 
Viacheslav Biriukov
BR
http://biriukov.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Stable branch reviews

2011-11-10 Thread James E. Blair
Mark McLoughlin  writes:

>> To mitigate that, we decided that the group doing stable branch
>> maintenance would be a separate group (i.e. *not* core developers), and
>> we decided that whatever ends up in the stable branch must first land in
>> the master branch.
>
> Well, I recall it a little differently - that both the stable branch
> maint group and the core team members would have +2 privileges on the
> stable branch.

If we discussed that explicitly, I'm afraid it didn't make it into my
notes from the summit.  It wouldn't surprise me if we left the answer to
that question in the bike shed.

> Maybe I just misread James here:
>
>   https://lists.launchpad.net/openstack/msg04751.html
>
>   "only members of the maintainers team or core devs can +/-2"

You read correctly; I took your proposal, translated it into slightly
more formal gerrit terms, and implemented that.  So currently gerrit is
configured so that _either_ core devs or the maintainers team can
approve changes to stable/ branches.

That can be changed, and maintainers can be given exclusive approval
authority over the stable/ branches.

> I also seem to have imagined the core teams being members of the stable
> branch maint team:
>
>   https://launchpad.net/~openstack-stable-maint/+members

Right now it doesn't matter because in gerrit core devs have the same
authority as maintainers.  However, if we make maintainers exclusive
approvers, it might be better for individuals with interests in both to
be individually members of both since the stable branch has extra
behavioral rules.

> But wait! Vish +2ed a stable branch patch yesterday:
>
>   https://review.openstack.org/328
>
> James, help a poor confused soul out here, would you? :)
>
> Right, that makes sense. Only folks that understand the stable branch
> policy[1] should be allowed to +2 on the stable branch.
>
> Basically, a stable branch reviewer should only +2 if:
>
>   - It fixes a significant issue, seen, or potentially seen, by someone
> during real life use
>
>   - The fix, or equivalent, must be in master already
>
>   - The fix is either a fairly trivial cherry-pick that looks 
> equally correct for the stable branch, or it has 
> sufficient technical review (e.g. a +1 from another stable 
> reviewer if it's fairly straightforward, or one or more +1s from 
> folks on core if it's really gnarly)
>
>   - If this reviewer proposed the patch originally, another stable
> branch reviewer should have +1ed it 
>
> All we need is an understanding of the policy and reasonable judgement,
> it's not rocket science. I'd encourage folks to apply to the team for
> membership after reviewing a few patches.

It sounds like the best way to implement this policy is to give
openstack-stable-maint exclusive approval authority on stable branches,
and then make sure people understand those rules when adding them to
that team.  If that's the consensus, I can make the change.

-Jim

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Hardware HA

2011-11-10 Thread Armando Migliaccio
There is a blueprint that touches these aspects:

https://blueprints.launchpad.net/nova/+spec/guest-ha

This is tailored to use cases where you cannot redesign an existing app. 

The work is at the early stages, but you are more than welcome to join the 
effort!

Cheers,
Armando

> -Original Message-
> From: openstack-bounces+armando.migliaccio=eu.citrix@lists.launchpad.net
> [mailto:openstack-
> bounces+armando.migliaccio=eu.citrix@lists.launchpad.net] On Behalf Of
> Soren Hansen
> Sent: 10 November 2011 15:51
> To: Viacheslav Biriukov
> Cc: openstack@lists.launchpad.net
> Subject: Re: [Openstack] Hardware HA
> 
> 2011/11/10 Viacheslav Biriukov :
> > Hi all.
> > What are the best practices for HA of the hardware compute node and virtual
> > machines?
> > After googling I found matahari and pacemaker-cloud, but nothing about
> > built-in features of OpenStack.
> > 1) How do you create such environments?
> > 2) Is it the right way to use pacemaker-cloud with OpenStack? Is it stable?
> 
> I'd avoid depending on anything like that altogether. Try to design
> your application so that it doesn't depend on any one instance being
> up. It'll work out better in the long run.
> 
> --
> Soren Hansen        | http://linux2go.dk/
> Ubuntu Developer    | http://www.ubuntu.com/
> OpenStack Developer | http://www.openstack.org/
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Hardware HA

2011-11-10 Thread Soren Hansen
2011/11/10 Viacheslav Biriukov :
> Hi all.
> What are the best practices for HA of the hardware compute node and virtual
> machines?
> After googling I found matahari and pacemaker-cloud, but nothing about
> built-in features of OpenStack.
> 1) How do you create such environments?
> 2) Is it the right way to use pacemaker-cloud with OpenStack? Is it stable?

I'd avoid depending on anything like that altogether. Try to design
your application so that it doesn't depend on any one instance being
up. It'll work out better in the long run.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Hardware HA

2011-11-10 Thread Pádraig Brady
On 11/10/2011 02:38 PM, Viacheslav Biriukov wrote:
> Hi all.
> 
> What are the best practices for HA of the hardware compute node and virtual
> machines?
> After googling I found matahari and pacemaker-cloud, but nothing about
> built-in features of OpenStack.
> 
> 1) How do you create such environments?
> 2) Is it the right way to use pacemaker-cloud with OpenStack? Is it stable?

About pacemaker-cloud, one can currently use it with openstack to some extent:
http://ahsalkeld.wordpress.com/a-mash-up-of-openstack-and-pacemaker-cloud/
However there are lots of manual steps and it's still in development.
It would be cool if you wanted to give that a shot and report any issues or 
thoughts.

cheers,
Pádraig.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Hardware HA

2011-11-10 Thread Viacheslav Biriukov
Hi all.

What are the best practices for HA of the hardware compute node and
virtual machines?
After googling I found matahari and pacemaker-cloud, but nothing about
built-in features of OpenStack.

1) How do you create such environments?
2) Is it the right way to use pacemaker-cloud with OpenStack? Is it stable?

Tnx
-- 
Viacheslav Biriukov
BR
http://biriukov.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Bug fixes and test cases submitted against stable/diablo

2011-11-10 Thread Mark McLoughlin
Hi Nati,

I think it's fair to say there is very strong consensus around the
policy that only changes that have been accepted into master should be
considered for the stable branch.

It's not unusual for people to focus their QA efforts on a stable
branch. However, it is crucial that any fixes for bugs discovered during
this QA process are first applied to the master branch.

Your situation is slightly unusual in that a large part of your QA
effort seems to be writing new unit tests against the stable branch.
Perhaps this is a reasonable way of finding issues whose fixes would be
suitable for the stable branch, but I think these tests would have much
more value if they were added to the master branch. I'd be hesitant to
accept backports of a large number of new test cases to the stable branch.

In any case, I really hope you can figure out how to adjust your
workflow such that fixes and new test cases flow into the master branch.
We really want all your cool work upstream! :-)

In the meantime, I'm going to -2 all the reviews submitted against the
stable branch because too many people are being confused into thinking
that these are patches for master.

Thanks,
Mark.

On Tue, 2011-11-08 at 10:19 -0800, Nachi Ueno wrote:
> Hi Mark
> 
> Thank you for your sharing discussion.
> # hmm, if I could create a new instance of me, the problem would be fixed.
> 
> I understand your point: stop QAing stable/diablo and focus on Essex.
> Ideally, we should focus on the upstream branch, and ideally we could
> start using the code after the release is out.
> 
> However, the current situation is different. IMO the quality of Diablo
> is not ready for real deployment.
> At the Diablo summit, I think we agreed on the policy "Do not decrease
> code coverage on merge".
> But it was not applied throughout the Diablo timeframe, and Diablo has
> low coverage.
> 
> And for Essex, the specs are changing, so it is quite difficult for a
> non-implementer to QA.
> In addition, waiting 6 months is not an option for my team.
> So QAing the stable branch, with its fixed specs, is very important.
> 
> Our contribution is 1000 unit test cases for stable/diablo nova, plus bug patches
> (I'm not sure all the tests could be used for Essex). # Sorry, I sent you the
> wrong number before.
> These test cases found about 60 bugs, and we are also writing a patch for each bug.
> https://bugs.launchpad.net/nova/+bugs?search=Search&field.bug_reporter=nati-ueno
> 
> The test cases don't have a bad effect on the code; rather, they help
> keep up the quality of the code. There is no violation of (1).
> So I think they should be merged to stable/diablo.
> 
> Bug patches should be discussed case by case.
> Some large refactoring has already been done for Essex, and some bugs
> were fixed by that refactoring.
> 
> We are struggling with a very tight schedule. X(
> If our contribution to stable/diablo is rejected, maintaining our own
> branch is the only option for us.
> And I don't really want to do that.
> 
> 
> 2011/11/8 Mark McLoughlin :
> > Hi Nati
> >
> > (Restarting our offline discussion here ...)
> >
> > I see you've proposed a stack of changes to Nova. Nice work! Kudos!
> >
> >  
> > https://review.openstack.org/#q,status:open+project:openstack/nova+branch:stable/diablo+owner:nati,n,z
> >
> > However, they shouldn't be submitted against the stable/diablo branch.
> > If they were just merged there, they would never make it into the Essex
> > and later releases.
> >
> > The policy for what is acceptable in the stable branch is documented
> > here:
> >
> >  http://wiki.openstack.org/StableBranch
> >
> > The policy is pretty standard practice for stable branches and the
> > reasons for it include:
> >
> >  1) We try and reduce the risk of regressions on the stable branch to
> > the absolute minimum. We also try to reduce the size and number of
> > changes so that people using the stable branch can be confident
> > that the risk of the changes is low and they can review the
> > changes themselves.
> >
> >  2) Getting fixes onto the main development branch before applying
> > them to the stable branch means we have a good chance of catching
> > any regressions caused by the fix on master before it has a chance
> > to cause a regression on the stable branch.
> >
> >  3) But most importantly, the policy is there to ensure that people
> > don't focus on stable branches to the detriment of the development
> > branch. If everyone focused their effort on fixing the stable
> > branch and never included those fixes in the development branch,
> > every new release would be in terrible shape and the fixing effort
> > would have to start over again.
> >
> > I think you're in the situation that (3) is trying to prevent.
> >
> > i.e. you and your team are focused on testing and fixing Diablo and
> > don't have the time to submit your fixes against Essex. While it's great
> > to see your fixes, IMHO you really need to think longer term.
> >
> > If you leave it until later to rebase the fixes onto master, you'll
> > probably f

Re: [Openstack] Stable branch reviews

2011-11-10 Thread Mark McLoughlin
Hey,

On Wed, 2011-11-09 at 16:50 +0100, Thierry Carrez wrote:
> Hi everyone,
> 
> Since there seems to be some confusion around master vs. stable/diablo
> vs. core reviewers, I think it warrants a small thread.
> 
> When at the Design Summit we discussed setting up stable branches, I
> warned about the risks that setting them up brings for trunk development:
> 
> 1) Reduce resources affected to trunk development
> 2) Reduce quality of trunk

The "it must be in trunk first" policy is the best way to mitigate
against that.

> To mitigate that, we decided that the group doing stable branch
> maintenance would be a separate group (i.e. *not* core developers), and
> we decided that whatever ends up in the stable branch must first land in
> the master branch.

Well, I recall it a little differently - that both the stable branch
maint group and the core team members would have +2 privileges on the
stable branch.

Maybe I just misread James here:

  https://lists.launchpad.net/openstack/msg04751.html

  "only members of the maintainers team or core devs can +/-2"

I also seem to have imagined the core teams being members of the stable
branch maint team:

  https://launchpad.net/~openstack-stable-maint/+members

But, whatever :)

We have a separate team with me, Chuck and Dave on it. Only one of us
can +2.

But wait! Vish +2ed a stable branch patch yesterday:

  https://review.openstack.org/328

James, help a poor confused soul out here, would you? :)

> So a change goes like this:
> * Change is proposed to trunk
> * Change is reviewed by core (is it appropriate, well-written, etc)
> * Change lands in trunk
> * Change is proposed to stable/diablo
> * Change is reviewed by stable team (is it relevant for a stable update,
> did it land in trunk first)
> * Change lands in stable/diablo
> 
> This avoids the aforementioned risks, avoids duplicating review efforts
> (the two reviews actually check for different things), and keep the
> teams separate (so trunk reviews are not slowed down by stable reviews).
> 
> Note that this does not prevent core developers that have an interest in
> stable/diablo from being in the two teams.
> 
> Apparently people in core can easily mistake master for stable/diablo,
> and can also +2 stable/diablo changes. In order to avoid mistakes, I
> think +2 powers on stable/diablo should be limited to members of the
> stable maintenance team (who know their stable review policy).
> 
> That should help avoid mistakes (like landing a fix in stable/diablo
> that never made it to master), while not preventing individual core devs
> from helping in stable reviews.

Right, that makes sense. Only folks that understand the stable branch
policy[1] should be allowed to +2 on the stable branch.

Basically, a stable branch reviewer should only +2 if:

  - It fixes a significant issue, seen, or potentially seen, by someone
during real life use

  - The fix, or equivalent, must be in master already

  - The fix is either a fairly trivial cherry-pick that looks 
equally correct for the stable branch, or it has 
sufficient technical review (e.g. a +1 from another stable 
reviewer if it's fairly straightforward, or one or more +1s from 
folks on core if it's really gnarly)

  - If this reviewer proposed the patch originally, another stable
branch reviewer should have +1ed it 

All we need is an understanding of the policy and reasonable judgement;
it's not rocket science. I'd encourage folks to apply to the team for
membership after reviewing a few patches.

Cheers,
Mark.

[1] - http://wiki.openstack.org/StableBranch


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Four compute-node, everytime the 1st and 2nd are choosen

2011-11-10 Thread Sateesh Chodapuneedi
The flag (of type list) you can use is "--least_cost_functions".
Multiple algorithms can be specified; the default one is
"compute_fill_first_cost_fn", which gives priority to the compute host
(XenServer/ESX etc.) with more free RAM.
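
If you want to plug in your own policy, here is a standalone sketch of the
general shape (purely illustrative, not actual nova code; it assumes each
cost function receives a host-info object with a free_ram_mb attribute and
that the host with the lowest total cost wins):

    class FakeHostInfo(object):
        def __init__(self, host, free_ram_mb):
            self.host = host
            self.free_ram_mb = free_ram_mb

    def spread_first_cost_fn(host_info):
        # More free RAM -> lower cost, so instances spread across hosts
        # instead of filling one host before moving to the next.
        return -host_info.free_ram_mb

    hosts = [FakeHostInfo('node1', 2048), FakeHostInfo('node2', 8192)]
    print(min(hosts, key=spread_first_cost_fn).host)  # node2, the emptiest host

You would then point "--least_cost_functions" at your function's full module
path, the same way the default points at compute_fill_first_cost_fn.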

Regards,
Sateesh


"This e-mail message is for the sole use of the intended recipient(s) and may 
contain confidential and/or privileged information. Any unauthorized review, 
use, disclosure, or distribution is prohibited. If you are not the intended 
recipient, please contact the sender by reply e-mail and destroy all copies of 
the original message."


From: openstack-bounces+sateesh.chodapuneedi=citrix@lists.launchpad.net 
[mailto:openstack-bounces+sateesh.chodapuneedi=citrix@lists.launchpad.net] 
On Behalf Of Razique Mahroua
Sent: Thursday, November 10, 2011 4:12 PM
To: Jorge Luiz Correa
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Four compute-node, everytime the 1st and 2nd are 
choosen

+1 :)

Razique Mahroua
razique.mahr...@gmail.com


On 10 Nov. 2011 at 11:27, Jorge Luiz Correa wrote:


Is there a flag in nova.conf that lets us configure that? In the documentation
we can see that there are some algorithms used by the scheduler, but I don't
know how to choose the one that best fits our requirements.

Thanks!
:)
On Wed, Nov 9, 2011 at 1:36 PM, Ed Leafe <ed.le...@rackspace.com> wrote:
On Nov 9, 2011, at 7:51 AM, Razique Mahroua wrote:

> I use the default scheduler, in fact, I've never tunned it really.
> The hypervisors all run KVM

   This is where the flag is defined in 
nova.scheduler.least_cost.py:

FLAGS = flags.FLAGS
flags.DEFINE_list('least_cost_functions',
        ['nova.scheduler.least_cost.compute_fill_first_cost_fn'],
        'Which cost functions the LeastCostScheduler should use.')

   Since the default weighting function is 'compute_fill_first_cost_fn', 
which, as its name suggests, chooses hosts so as to fill up one host as much as 
possible before selecting another, the pattern you're seeing is expected. If 
you change that flag to 'nova.scheduler.noop_cost_fn', you should see the hosts 
selected randomly. The idea is that you can create your own weighting functions 
that will select potential hosts in a way that best fits your needs.


-- Ed Leafe


___
Mailing list: https://launchpad.net/~openstack
Post to : 
openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



--
- MSc. Correa, J.L.
___
Mailing list: https://launchpad.net/~openstack
Post to : 
openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Host Aggregates ...

2011-11-10 Thread Sandy Walsh
Ok, that helps ... now I see the abstraction you're going for (a new layer under 
availability zones).

Personally I prefer a tagging approach to a modeled hierarchy. It was something 
we debated at great length with Zones. In this case, the "tag" would be in the 
capabilities assigned to the host.

I think both availability zones and host aggregates should be modeled using 
tags/capabilities without having to explicitly model them as a tree or in the db 
... which is how I see this evolving. At the scheduler level we should be able 
to make decisions using simple tag collections.

"WestCoast, HasGPU, GeneratorBackup, PriorityNetwork"

Are we saying the same thing?

Are there use cases that this approach couldn't handle?

-S


From: Armando Migliaccio [armando.migliac...@eu.citrix.com]
Sent: Thursday, November 10, 2011 8:50 AM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: RE: Host Aggregates ...

Hi Sandy,

Thanks for taking the time to read this.

My understanding is that a typical Nova deployment would span multiple 
zones, that zones may have subzones, and that child zones will have a number of 
availability zones in them; please do correct me if I am wrong :)

That stated, it was assumed that an aggregate will be a grouping of servers 
within an availability zone (hence the introduction of the extra concept), and 
would be used to manage hypervisor pools when and if required. This introduces 
benefits like VM live migration, VM HA and zero-downtime host upgrades. The 
introduction of hypervisor pools is just the easy way to get these benefits in 
the short term.

Going back to your point, it is possible to match "host-aggregates" with 
"single-zone that uses capabilities" on the implementation level (assumed that 
it is okay to be unable to represent aggregates as children of availability 
zones). Nevertheless, I still see zones and aggregates as being different on 
the conceptual level.

What is your view if we went with the approach of implementing an aggregate as 
a special "single-zone that uses capabilities"? Would there be a risk of 
tangling the zone management API a bit?

Thanks for feedback!

Cheers,
Armando

> -Original Message-
> From: Sandy Walsh [mailto:sandy.wa...@rackspace.com]
> Sent: 09 November 2011 21:10
> To: Armando Migliaccio
> Cc: openstack@lists.launchpad.net
> Subject: Host Aggregates ...
>
> Hi Armando,
>
> I finally got around to reading
> https://blueprints.launchpad.net/nova/+spec/host-aggregates.
>
> Perhaps you could elaborate a little on how this differs from host
> capabilities (key-value pairs associated with a service) that the scheduler
> can use when making decisions?
>
> The distributed scheduler doesn't need zones to operate, but will use them if
> available. Would host-aggregates simply be a single-zone that uses
> capabilities?
>
> Cheers,
> Sandy

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Four compute-node, everytime the 1st and 2nd are choosen

2011-11-10 Thread Sandy Walsh
Are you using Diablo or Trunk?

If you're using trunk, the default scheduler is MultiScheduler, which uses the 
Chance scheduler. I think Diablo uses Chance by default?

--scheduler_driver

Unless you've explicitly selected the LeastCostScheduler (which only exists in 
Diablo now), I wouldn't worry about those settings.

Did you explicitly define a scheduler to use?
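
For example (illustrative only — double-check the exact class path in your
release), you could pin it explicitly in nova.conf rather than rely on the
default, with a line like:

    --scheduler_driver=nova.scheduler.simple.SimpleScheduler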


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Jorge Luiz Correa [corre...@gmail.com]
Sent: Thursday, November 10, 2011 6:27 AM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Four compute-node, everytime the 1st and 2nd are 
choosen

Is there a flag in nova.conf that lets us configure that? In the documentation
we can see that there are some algorithms used by the scheduler, but I don't
know how to choose the one that best fits our requirements.

Thanks!
:)

On Wed, Nov 9, 2011 at 1:36 PM, Ed Leafe <ed.le...@rackspace.com> wrote:
On Nov 9, 2011, at 7:51 AM, Razique Mahroua wrote:

> I use the default scheduler, in fact, I've never tunned it really.
> The hypervisors all run KVM

   This is where the flag is defined in 
nova.scheduler.least_cost.py:

FLAGS = flags.FLAGS
flags.DEFINE_list('least_cost_functions',
        ['nova.scheduler.least_cost.compute_fill_first_cost_fn'],
        'Which cost functions the LeastCostScheduler should use.')

   Since the default weighting function is 'compute_fill_first_cost_fn', 
which, as its name suggests, chooses hosts so as to fill up one host as much as 
possible before selecting another, the pattern you're seeing is expected. If 
you change that flag to 'nova.scheduler.noop_cost_fn', you should see the hosts 
selected randomly. The idea is that you can create your own weighting functions 
that will select potential hosts in a way that best fits your needs.


-- Ed Leafe


___
Mailing list: https://launchpad.net/~openstack
Post to : 
openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



--
- MSc. Correa, J.L.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Host Aggregates ...

2011-11-10 Thread Armando Migliaccio
Hi Sandy,

Thanks for taking the time to read this.

My understanding is that a typical Nova deployment would span multiple 
zones, that zones may have subzones, and that child zones will have a number of 
availability zones in them; please do correct me if I am wrong :)

That stated, it was assumed that an aggregate will be a grouping of servers 
within an availability zone (hence the introduction of the extra concept), and 
would be used to manage hypervisor pools when and if required. This introduces 
benefits like VM live migration, VM HA and zero-downtime host upgrades. The 
introduction of hypervisor pools is just the easy way to get these benefits in 
the short term. 

Going back to your point, it is possible to match "host-aggregates" with 
"single-zone that uses capabilities" on the implementation level (assumed that 
it is okay to be unable to represent aggregates as children of availability 
zones). Nevertheless, I still see zones and aggregates as being different on 
the conceptual level. 

What is your view if we went with the approach of implementing an aggregate as 
a special "single-zone that uses capabilities"? Would there be a risk of 
tangling the zone management API a bit?

Thanks for feedback!

Cheers,
Armando

> -Original Message-
> From: Sandy Walsh [mailto:sandy.wa...@rackspace.com]
> Sent: 09 November 2011 21:10
> To: Armando Migliaccio
> Cc: openstack@lists.launchpad.net
> Subject: Host Aggregates ...
> 
> Hi Armando,
> 
> I finally got around to reading
> https://blueprints.launchpad.net/nova/+spec/host-aggregates.
> 
> Perhaps you could elaborate a little on how this differs from host
> capabilities (key-value pairs associated with a service) that the scheduler
> can use when making decisions?
> 
> The distributed scheduler doesn't need zones to operate, but will use them if
> available. Would host-aggregates simply be a single-zone that uses
> capabilities?
> 
> Cheers,
> Sandy

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Proposal to add Kevin Mitchell to nova-core

2011-11-10 Thread Soren Hansen
That brings us to +5. If no one objects by Tuesday next week, I'll make it so.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Proposal to add Johannes Erdfelt to nova-core

2011-11-10 Thread Soren Hansen
We now have a sufficient number of +1s to go ahead with this. If no one
objects before Tuesday next week, I'll make it so.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Four compute-node, everytime the 1st and 2nd are choosen

2011-11-10 Thread Razique Mahroua
+1 :)
Razique Mahroua
razique.mahr...@gmail.com

On 10 Nov. 2011 at 11:27, Jorge Luiz Correa wrote:

Is there a flag in nova.conf that lets us configure that? In the documentation
we can see that there are some algorithms used by the scheduler, but I don't
know how to choose the one that best fits our requirements.

Thanks!
:)

On Wed, Nov 9, 2011 at 1:36 PM, Ed Leafe wrote:
On Nov 9, 2011, at 7:51 AM, Razique Mahroua wrote:

> I use the default scheduler, in fact, I've never tunned it really.
> The hypervisors all run KVM

        This is where the flag is defined in nova.scheduler.least_cost.py:

FLAGS = flags.FLAGS
flags.DEFINE_list('least_cost_functions',
        ['nova.scheduler.least_cost.compute_fill_first_cost_fn'],
        'Which cost functions the LeastCostScheduler should use.')

        Since the default weighting function is 'compute_fill_first_cost_fn',
which, as its name suggests, chooses hosts so as to fill up one host as much as
possible before selecting another, the pattern you're seeing is expected. If
you change that flag to 'nova.scheduler.noop_cost_fn', you should see the hosts
selected randomly. The idea is that you can create your own weighting functions
that will select potential hosts in a way that best fits your needs.



-- Ed Leafe


___
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
-- 
- MSc. Correa, J.L.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Four compute-node, everytime the 1st and 2nd are choosen

2011-11-10 Thread Jorge Luiz Correa
Is there a flag in nova.conf that lets us configure that? In the
documentation we can see that there are some algorithms used by the scheduler,
but I don't know how to choose the one that best fits our requirements.

Thanks!
:)

On Wed, Nov 9, 2011 at 1:36 PM, Ed Leafe  wrote:

> On Nov 9, 2011, at 7:51 AM, Razique Mahroua wrote:
>
> > I use the default scheduler, in fact, I've never tunned it really.
> > The hypervisors all run KVM
>
>This is where the flag is defined in nova.scheduler.least_cost.py:
>
> FLAGS = flags.FLAGS
> flags.DEFINE_list('least_cost_functions',
>         ['nova.scheduler.least_cost.compute_fill_first_cost_fn'],
>         'Which cost functions the LeastCostScheduler should use.')
>
>Since the default weighting function is
> 'compute_fill_first_cost_fn', which, as its name suggests, chooses hosts so
> as to fill up one host as much as possible before selecting another, the
> pattern you're seeing is expected. If you change that flag to
> 'nova.scheduler.noop_cost_fn', you should see the hosts selected randomly.
> The idea is that you can create your own weighting functions that will
> select potential hosts in a way that best fits your needs.
>
>
> -- Ed Leafe
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>



-- 
- MSc. Correa, J.L.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp