Re: [Openstack] the format of disk.local

2011-11-09 Thread Razique Mahroua

--use_cow_images=true in nova.conf will do it?

On Wed, 09 Nov 2011 00:45:18 +0100, Jae Sang Lee hyan...@gmail.com wrote:


Hi,

You should modify nova.virt.libvirt.connection._create_image.

This is the source code that creates the local disk:

    local_gb = inst['local_gb']
    if local_gb and not self._volume_in_mapping(
            self.default_local_device, block_device_info):
        fn = functools.partial(self._create_ephemeral,
                               fs_label='ephemeral0',
                               os_type=inst.os_type)
        self._cache_image(fn=fn,
                          target=basepath('disk.local'),
                          fname='ephemeral_%s_%s_%s' %
                                (0, local_gb, inst.os_type),
                          cow=FLAGS.use_cow_images,
                          local_size=local_gb)

You can change the 'cow' value to False (FLAGS.use_cow_images is True by
default); the _cache_image function will then create the disk in raw format.
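
For example, hard-coding just that one argument does it (a minimal sketch
against the Diablo-era code above; everything else stays as-is):

    self._cache_image(fn=fn,
                      target=basepath('disk.local'),
                      fname='ephemeral_%s_%s_%s' %
                            (0, local_gb, inst.os_type),
                      cow=False,  # was FLAGS.use_cow_images; disk.local is now raw
                      local_size=local_gb)

The root-disk call in _create_image still passes cow=FLAGS.use_cow_images, so
'disk' stays qcow2 while 'disk.local' becomes raw.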


2011/11/8 ljvsss ljv...@gmail.com


hi all

if i create an instance with flavor m1.large (4 vCPU, 8 GB RAM, 80 GB disk), the
folder /var/lib/nova/instances/instance-006 will contain two images: disk and
disk.local.
Their format is the same, qcow2 or raw.
What I want is for disk (the instance OS disk) to be qcow2, and for
disk.local to be raw, because qcow2 is small and easy to make
snapshots from, while raw's speed is better.

what should i do?

thanks :)






[Openstack] Nova in Offline Mac

2011-11-09 Thread Frans Thamura
hi all

we use stackops for our openstack distribution, to explain how openstack
works

but i found that stackops is tied to a fixed IP, while on my notebook we use
a dynamic IP

can anyone help with handling a dynamic IP for an openstack/stackops demo?

thx


[Openstack] Four compute nodes, every time the 1st and 2nd are chosen

2011-11-09 Thread Razique Mahroua
Hi all,
I've four compute-nodes, around 20 instances running.
I've four nodes registered to nova-scheduler (nova-manage shows them),
but every time I spawn a new instance, the 3rd and 4th nodes are never chosen for
the instances.
The resources are the same on the nodes (around 24 GB of RAM); they are idle
and available.
Can I force the scheduler to use them for an instance?
Thanks
Razique - doc team -


Re: [Openstack] Fwd: Four compute nodes, every time the 1st and 2nd are chosen

2011-11-09 Thread Sateesh Chodapuneedi
Hi Razique,

Which scheduler are you using?
And what is the underlying hypervisor?

Regards,
Sateesh



From: openstack-bounces+sateesh.chodapuneedi=citrix@lists.launchpad.net 
[mailto:openstack-bounces+sateesh.chodapuneedi=citrix@lists.launchpad.net] 
On Behalf Of Razique Mahroua
Sent: Wednesday, November 09, 2011 3:22 PM
To: openstack (openstack@lists.launchpad.net)
Subject: [Openstack] Fwd: Four compute nodes, every time the 1st and 2nd are chosen

I confirm that issue:
since I've disabled the nova-compute service on the first 3 nodes, a newly
spawned instance now goes to the last one, so the scheduler would have been
able to choose them all along. I now have an unbalanced farm:

Host 1 : High load (11 instances)
Host 2 : Middle load (9 instances)
Host 3 : null load (0 instances)
Host 4 : null load (1 instance)

I use nova 2011.2 (Cactus stable)

Thanks

Begin forwarded message:


From: Razique Mahroua razique.mahr...@gmail.com
Subject: Four compute nodes, every time the 1st and 2nd are chosen
Date: 9 November 2011 10:39:17 CET
To: openstack (openstack@lists.launchpad.net)

Hi all,
I've four compute-nodes, around 20 instances running.
I've four nodes registered to nova-scheduler (nova-manage shows them),
but every time I spawn a new instance, the 3rd and 4th nodes are never chosen for
the instances.
The resources are the same on the nodes (around 24 GB of RAM); they are idle
and available.
Can I force the scheduler to use them for an instance?
Thanks
Razique - doc team -



Re: [Openstack] Bug fixes and test cases submitted against stable/diablo

2011-11-09 Thread Soren Hansen
2011/11/9 Nachi Ueno ueno.na...@nttdata-agilenet.com:
 I understand your point. Stop QAing stable/diablo and focus on Essex.

Oh, no no. That's not the point. I'm thrilled to have you work on
QAing Diablo. The only issue is that the fixes you come up with should
be pushed to Essex first. There are two reasons for this:

 * If we don't push the fixes to Essex, the problems will still be
present in Essex and every release after that.

 * Having them in Essex lets us try them out, vet them and validate
them more thoroughly before we let them into the stable branch. When a
patch lands in the stable branch it has to be well tested already
(unless of course Essex has deviated too much, in which case we'll
have to accept the risk of getting it into Diablo directly).

 However, the current situation is different. IMO the quality of diablo is
 not ready for real deployment.
 At the diablo summit, I think we agreed on the policy "Do not decrease
 code coverage on merge".
 But it was not applied through the diablo timeframe, and diablo has
 small coverage.

This is true :(

 We are struggling with a very tight schedule. X(
 If our contributions are rejected from stable/diablo, maintaining our
 own branch is the only option for us.
 And I don't really want to do that.

Yes, I would very much like to avoid this as well.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



[Openstack] Essex-1 milestone proposed candidates

2011-11-09 Thread Thierry Carrez
Hi everyone,

Milestone-proposed branches were created for Keystone, Glance, Nova and
Horizon in preparation for the essex-1 milestone delivery on Thursday.
Trunk development continues on essex-2.

Please test proposed deliveries to ensure no critical regression found
its way in. Milestone-critical fixes will be backported to the
milestone-proposed branch until final delivery of the milestone, and
will be tracked using the essex-1 milestone targeting.

Links:
for PROJ in ['keystone', 'nova', 'glance', 'horizon']:
 Milestone-critical bugs: https://launchpad.net/PROJ/+milestone/essex-1
 Branch at: https://github.com/openstack/PROJ/tree/milestone-proposed
 Proposed tarballs at: http://PROJ.openstack.org/tarballs/
  (Look for the most recent PROJ-2012.1~e1*.tar.gz build)
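
Expanded into runnable form, the loop above is just (a convenience sketch):

    for proj in ['keystone', 'nova', 'glance', 'horizon']:
        print('Bugs:     https://launchpad.net/%s/+milestone/essex-1' % proj)
        print('Branch:   https://github.com/openstack/%s/tree/milestone-proposed'
              % proj)
        print('Tarballs: http://%s.openstack.org/tarballs/' % proj)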

You can also test the Glance & Nova candidates on Ubuntu by enabling:
 ppa:nova-core/milestone-proposed
 ppa:glance-core/milestone-proposed

The current plan is to deliver the milestone Thursday morning, US time.
Cheers,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] Bug fixes and test cases submitted against stable/diablo

2011-11-09 Thread Thierry Carrez
Soren Hansen wrote:
 2011/11/9 Nachi Ueno ueno.na...@nttdata-agilenet.com:
 I understand your point. Stop QAing stable/diablo and focus on Essex.
 
 Oh, no no. That's not the point. I'm thrilled to have you work on
 QAing Diablo. The only issue is that the fixes you come up with should
 be pushed to Essex first. There are two reasons for this:
 
  * If we don't push the fixes to Essex, the problems will still be
 present in Essex and every release after that.
 
  * Having them in Essex lets us try them out, vet them and validate
 them more thoroughly before we let them into the stable branch. When a
 patch lands in the stable branch it has to be well tested already
 (unless of course Essex has deviated too much, in which case we'll
 have to accept the risk of getting it into Diablo directly).

+1

You should submit patches to master and then backport them to
stable/diablo, rather than proposing them for stable/diablo directly.
That ensures your work benefits both branches: making diablo better
without making essex worse than diablo.

If that's just too much work, maybe you should raise the issue at the
next QA meeting to try to get some outside help?

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



[Openstack] Proposal to add Kevin Mitchell to nova-core

2011-11-09 Thread Brian Waldon
Vek has absolutely stepped up and started doing quite a few reviews, so I'd like
to nominate him to be added to nova-core.

Waldon




[Openstack] Proposal to add Johannes Erdfelt to nova-core

2011-11-09 Thread Brian Waldon
I'd like to nominate Johannes for nova-core, as he has definitely been doing a 
good number of reviews lately.


Re: [Openstack] Draft API specifications

2011-11-09 Thread Jay Pipes
On Tue, Nov 8, 2011 at 5:54 PM, Anne Gentle a...@openstack.org wrote:
 Hi all -

 We have three projects that need to have draft API docs (for a new API
 version) published for feedback and consumption during the Essex timeframe.
 (Quantum 1.0 -> 1.1, Glance 1.1 -> 2.0, and Nova 1.1 -> 2.1)

Small but important correction: it is not the Quantum API, Glance API, or
Nova API, but the OpenStack Networks API, Images API, and Compute
API :)

I raised this issue at the last design summit and have continued to
raise it on the mailing list in various discussions, but I think it is
important to state that how an API evolves can and should be separated
from a reference implementation of that API.

There's a reason why the word "Glance" doesn't appear anywhere in the
proposed Images API 2.0 drafts.

 I'd like to get ideas about where those should be published and some of the
 requirements around their draft status.

So... are we talking about when/where to publish the proposed draft
spec AFTER it has gone through an RFC period and gotten feedback? Or
are we talking about codifying the way we go about getting feedback
during an RFC period on a proposed API?

I kind of like the way that commenting on Google Docs has worked for
the Images API 2.0 proposed drafts. It's easy enough to comment on a
block of the document and respond -- and emails get sent out notifying
you of new or updated comments. We got feedback from 12 individuals
via comments on the Google Doc, and through an iterative process, have
responded to those comments and/or incorporated the feedback back into
the proposal.

I'm in the process of completing the final requested changes to the
second draft document and was then planning to email the mailing list
with an "OK, here is the final draft. Last chance to comment before we
begin implementing it in Glance" post. I'd like to work with you on
taking the proposed draft Google Doc into the main
http://docs.openstack.org/api/ site.

The current 1.x API is here:

http://docs.openstack.org/api/openstack-image-service/1.0/content/

I'd love it if we could put the final 2.0 proposal here:

http://docs.openstack.org/api/openstack-image-service/2.0/content/

With a link to it from the main API area, noting that the 2.0 API is
in DRAFT mode until X date -- to be determined later?

 Is there a need for special treatment for RFC vs. Draft designations
 such as RFC for a certain time period, then Draft?

I think the RFC period should be run by the respective PTL for the
project, and then a DRAFT mode indicates it is in the period AFTER the
RFC time and before the proposed API is fully implemented by a
project. Does that work for you?

 Do these drafts need to be published to docs.openstack.org/api, or is
 that site for final APIs for end-users?

See above. I think having the DRAFT on the API site would be very
helpful (again, after the RFC period closes).

 I envision introducing more
 confusion than is already present if we publish them side-by-side.
 Do these API drafts need their own site for the RFC/Draft period, such as
 api.openstack.org/drafts?

No, I think just clearly marking the draft API with DRAFT in big red
letters is good :)

Cheers, and thanks for caring about this subject that's close to my heart!
-jay



Re: [Openstack] Proposal to add Johannes Erdfelt to nova-core

2011-11-09 Thread Todd Willey
Plus one
On Nov 9, 2011 9:25 AM, Brian Waldon brian.wal...@rackspace.com wrote:

 I'd like to nominate Johannes for nova-core, as he has definitely been
 doing a good number of reviews lately.


Re: [Openstack] Tutorials of how to install openstack swift into centos 6

2011-11-09 Thread Jay Pipes
Phew! Only 69 steps to install Swift on CentOS. I was worried it might
be easy to do. Silly me :)

-jay

On Wed, Nov 9, 2011 at 1:00 AM, pf shineyear shin...@gmail.com wrote:
 openstack swift install on centos 6

 1. proxy install

 1) check your python version; it must be >= 2.6

 2) yum install libvirt

 3) yum install memcached

 4) yum install xfsprogs

 5) yum install python-setuptools python-devel python-simplejson
 python-config

 6) easy_install webob

 7) easy_install eventlet

 8) install xattr-0.6.2.tar.gz, python setup.py build, python setup.py
 install

 9) install coverage-3.5.1.tar.gz, python setup.py build, python setup.py
 install

 10) wget http://www.openstack.org/projects/storage/latest-release/
  python setup.py build
  python setup.py install

 11) wget
 https://github.com/downloads/gholt/swauth/swauth-lucid-build-1.0.2-1.tgz
 python setup.py build
 python setup.py install

 12) mkdir /etc/swift

 13) yum install openssh-server

 14) yum install git-core

 15) vi /etc/swift/swift.conf

 [swift-hash]
 # random unique string that can never change (DO NOT LOSE)
 swift_hash_path_suffix = `od -t x8 -N 8 -A n /dev/random`


 16) goto /etc/swift/

 17) openssl req -new -x509 -nodes -out cert.crt -keyout cert.key

 18) service memcached restart, ps -aux | grep mem

 495  16954  0.0  0.1 330756   816 ?    Ssl  18:19   0:00 memcached
 -d -p 11211 -u memcached -m 64 -c 1024 -P /var/run/memcached/memcached.pid

 19) easy_install netifaces

 20) vi /etc/swift/proxy-server.conf

 [DEFAULT]
 cert_file = /etc/swift/cert.crt
 key_file = /etc/swift/cert.key
 bind_port = 8080
 workers = 8
 user = swift
 log_facility = LOG_LOCAL0
 allow_account_management = true

 [pipeline:main]
 pipeline = healthcheck cache swauth proxy-server

 [app:proxy-server]
 use = egg:swift#proxy
 allow_account_management = true
 account_autocreate = true
 log_facility = LOG_LOCAL0
 log_headers = true
 log_level =DEBUG

 [filter:swauth]
 use = egg:swauth#swauth
 #use = egg:swift#swauth
 default_swift_cluster = local#https://10.38.10.127:8080/v1
 # Highly recommended to change this key to something else!
 super_admin_key = swauthkey
 log_facility = LOG_LOCAL1
 log_headers = true
 log_level =DEBUG
 allow_account_management = true

 [filter:healthcheck]
 use = egg:swift#healthcheck

 [filter:cache]
 use = egg:swift#memcache
 memcache_servers = 10.38.10.127:11211


 21) config /etc/rsyslog.conf

 local0.*    /var/log/swift/proxy.log
 local1.*    /var/log/swift/swauth.log





 22) build the rings (i have 3 storage nodes, 1 proxy)

  swift-ring-builder account.builder create 18 3 1
  swift-ring-builder account.builder add z1-10.38.10.109:6002/sdb1 1
  swift-ring-builder account.builder add z2-10.38.10.119:6002/sdb1 1
  swift-ring-builder account.builder add z3-10.38.10.114:6002/sdb1 1
  swift-ring-builder account.builder rebalance

  swift-ring-builder object.builder create 18 3 1
  swift-ring-builder object.builder add z1-10.38.10.109:6000/sdb1 1
  swift-ring-builder object.builder add z2-10.38.10.119:6000/sdb1 1
  swift-ring-builder object.builder add z3-10.38.10.114:6000/sdb1 1
  swift-ring-builder object.builder rebalance

  swift-ring-builder container.builder create 18 3 1
  swift-ring-builder container.builder add z1-10.38.10.109:6001/sdb1 1
  swift-ring-builder container.builder add z2-10.38.10.119:6001/sdb1 1
  swift-ring-builder container.builder add z3-10.38.10.114:6001/sdb1 1
  swift-ring-builder container.builder rebalance
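
 (For reference, the three numbers passed to "create" are part_power, replicas
 and min_part_hours; a quick sketch of what they imply:)

     # 'create 18 3 1' => 2**18 partitions, 3 replicas, and at least
     # 1 hour between two moves of the same partition
     part_power, replicas, min_part_hours = 18, 3, 1
     print('partitions: %d' % 2 ** part_power)  # 262144
     print('replicas: %d' % replicas)
     print('min hours between partition moves: %d' % min_part_hours)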


 23) easy_install configobj

 24) easy_install nose

 25) easy_install simplejson

 26) easy_install xattr

 27) easy_install eventlet

 28) easy_install greenlet

 29) easy_install pastedeploy

 30) groupadd swift

 31) useradd -g swift swift

 32) chown -R swift:swift /etc/swift/

 33) service rsyslog restart

 34) swift-init proxy start


 2. storage node install

 1) yum install python-setuptools python-devel python-simplejson
 python-configobj python-nose

 2) yum install openssh-server

 3) easy_install webob

 4) yum install curl gcc memcached sqlite xfsprogs

 5) easy_install eventlet

 6) wget
 http://pypi.python.org/packages/source/x/xattr/xattr-0.6.2.tar.gz#md5=5fc899150d03c082558455483fc0f89f

  python setup.py build

  python setup.py install

 7)  wget
 http://pypi.python.org/packages/source/c/coverage/coverage-3.5.1.tar.gz#md5=410d4c8155a4dab222f2bc51212d4a24

  python setup.py build

  python setup.py install

 8) yum install libvirt

 9) groupadd swift

 10) useradd -g swift swift

 11) mkdir -p /etc/swift

 12) chown -R swift:swift /etc/swift/

 13) cp swift.conf account.ring.gz container.ring.gz object.ring.gz
 /etc/swift/  (scp from proxy server)

 14) yum install xfsprogs

 15) wget http://www.openstack.org/projects/storage/latest-release/

 python setup.py build

 python setup.py install

 16) vi /etc/rsyncd.conf

 # rsyncd.conf

 secrets file = /etc/rsyncd.secrets

 

Re: [Openstack] Four compute nodes, every time the 1st and 2nd are chosen

2011-11-09 Thread Razique Mahroua
Hi Akira,
I did, 4 nodes are here, up and smiling :-)
Like I said, I explicitly disabled the first three nodes in order to force the
scheduler, and it worked; the last node was chosen for the new instance I
spawned, without any issue.



On 9 Nov 2011, at 16:03, Akira Yoshiyama wrote:

 Hi,
 
 Did you try nova-manage --flagfile=/etc/nova/nova.conf service list?
 
 Regards,
 Akira Yoshiyama
 
 2011/11/09 18:52 Razique Mahroua razique.mahr...@gmail.com:
 Hi all,
 I've four compute-nodes, around 20 instances running.
 I've four nodes registered to nova-scheduler (nova-manage shows them),
 but every time I spawn a new instance, the 3rd and 4th nodes are never chosen
 for the instances.
 The resources are the same on the nodes (around 24 GB of RAM); they are idle
 and available.
 Can I force the scheduler to use them for an instance?
 Thanks
 Razique - doc team -


Re: [Openstack] Proposal to add Johannes Erdfelt to nova-core

2011-11-09 Thread Sandy Walsh
9 * 3 - 26



From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Brian Waldon [brian.wal...@rackspace.com]
Sent: Wednesday, November 09, 2011 10:02 AM
To: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
Subject: [Openstack] Proposal to add Johannes Erdfelt to nova-core

I'd like to nominate Johannes for nova-core, as he has definitely been doing a 
good number of reviews lately.


Re: [Openstack] Proposal to add Kevin Mitchell to nova-core

2011-11-09 Thread Sandy Walsh
3746 ^ 0

From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Brian Waldon [brian.wal...@rackspace.com]
Sent: Wednesday, November 09, 2011 9:59 AM
To: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
Subject: [Openstack] Proposal to add Kevin Mitchell to nova-core

Vek has absolutely stepped up and started doing quite a few reviews, so I'd like
to nominate him to be added to nova-core.

Waldon




[Openstack] Stable branch reviews

2011-11-09 Thread Thierry Carrez
Hi everyone,

Since there seems to be some confusion around master vs. stable/diablo
vs. core reviewers, I think it warrants a small thread.

When at the Design Summit we discussed setting up stable branches, I
warned about the risks that setting them up brings for trunk development:

1) Reduce the resources allocated to trunk development
2) Reduce quality of trunk

To mitigate that, we decided that the group doing stable branch
maintenance would be a separate group (i.e. *not* core developers), and
we decided that whatever ends up in the stable branch must first land in
the master branch.

So a change goes like this:
* Change is proposed to trunk
* Change is reviewed by core (is it appropriate, well-written, etc)
* Change lands in trunk
* Change is proposed to stable/diablo
* Change is reviewed by stable team (is it relevant for a stable update,
did it land in trunk first)
* Change lands in stable/diablo

This avoids the aforementioned risks, avoids duplicating review efforts
(the two reviews actually check for different things), and keeps the
teams separate (so trunk reviews are not slowed down by stable reviews).

Note that this does not prevent core developers that have an interest in
stable/diablo from being in the two teams.

Apparently people in core can easily mistake master for stable/diablo,
and can also +2 stable/diablo changes. In order to avoid mistakes, I
think +2 powers on stable/diablo should be limited to members of the
stable maintenance team (who know their stable review policy).

That should help avoid mistakes (like landing a fix in stable/diablo
that never made it to master), while not preventing individual core devs
from helping in stable reviews.

Regards,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] Fwd: Four compute nodes, every time the 1st and 2nd are chosen

2011-11-09 Thread John Garbutt
I have seen this issue myself when I had clock skew between the compute nodes.
The scheduler assumed one of my nodes was dead because it was so long since it
had reported in: the compute clock was behind the scheduler's clock.
I think that was using Cactus; I can't 100% remember now.
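
For reference, the liveness test is roughly this (a paraphrase of the
Cactus-era scheduler check, not the exact code):

    # A node whose clock lags the scheduler's by more than
    # service_down_time looks dead to the scheduler, even though it is up.
    from datetime import datetime, timedelta

    SERVICE_DOWN_TIME = 60  # seconds; the FLAGS.service_down_time default

    def service_is_up(last_heartbeat):
        elapsed = datetime.utcnow() - last_heartbeat
        return elapsed <= timedelta(seconds=SERVICE_DOWN_TIME)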

Cheers,
John

From: openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net 
[mailto:openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net] On 
Behalf Of Sateesh Chodapuneedi
Sent: 09 November 2011 12:03
To: Razique Mahroua; openstack (openstack@lists.launchpad.net)
Subject: Re: [Openstack] Fwd: Four compute nodes, every time the 1st and 2nd are chosen

Hi Razique,

Which scheduler are you using?
And what is the underlying hypervisor?

Regards,
Sateesh




From: openstack-bounces+sateesh.chodapuneedi=citrix@lists.launchpad.net 
[mailto:openstack-bounces+sateesh.chodapuneedi=citrix@lists.launchpad.net] 
On Behalf Of Razique Mahroua
Sent: Wednesday, November 09, 2011 3:22 PM
To: openstack (openstack@lists.launchpad.net)
Subject: [Openstack] Fwd: Four compute nodes, every time the 1st and 2nd are chosen

I confirm that issue:
since I've disabled the nova-compute service on the first 3 nodes, a newly
spawned instance now goes to the last one, so the scheduler would have been
able to choose them all along. I now have an unbalanced farm:

Host 1 : High load (11 instances)
Host 2 : Middle load (9 instances)
Host 3 : null load (0 instances)
Host 4 : null load (1 instance)

I use nova 2011.2 (Cactus stable)

Thanks

Begin forwarded message:

From: Razique Mahroua razique.mahr...@gmail.com
Subject: Four compute nodes, every time the 1st and 2nd are chosen
Date: 9 November 2011 10:39:17 CET
To: openstack (openstack@lists.launchpad.net)

Hi all,
I've four compute-nodes, around 20 instances running.
I've four nodes registered to nova-scheduler (nova-manage shows them),
but every time I spawn a new instance, the 3rd and 4th nodes are never chosen for
the instances.
The resources are the same on the nodes (around 24 GB of RAM); they are idle
and available.
Can I force the scheduler to use them for an instance?
Thanks
Razique - doc team -



Re: [Openstack] Four compute nodes, every time the 1st and 2nd are chosen

2011-11-09 Thread Jorge Luiz Correa
I would like to understand that too. When I was testing, in some cases a
16 GB node was left with no instances while a 2 GB host ran 3 or 4 instances.
And new instances still went to the 2 GB node, even with all the nodes 'smiling'.

Thanks!

On Wed, Nov 9, 2011 at 1:14 PM, Razique Mahroua
razique.mahr...@gmail.com wrote:

 Hi Akira,
 I did, 4 nodes are here, up and smiling :-)
 Like I said, I explicitly disabled the first three nodes in order to force
 the scheduler, and it worked; the last node was chosen for the new
 instance I spawned, without any issue.



 Le 9 nov. 2011 à 16:03, Akira Yoshiyama a écrit :

 Hi,

 Did you try nova-manage --flagfile=/etc/nova/nova.conf service list?

 Regards,
 Akira Yoshiyama
 2011/11/09 18:52 Razique Mahroua razique.mahr...@gmail.com:

 Hi all,
 I've four compute-nodes, around 20 instances running.
 I've four nodes registered to nova-scheduler (nova-manage shows them),
 but every time I spawn a new instance, the 3rd and 4th nodes are never
 chosen for the instances.
 The resources are the same on the nodes (around 24 GB of RAM); they are
 idle and available.
 Can I force the scheduler to use them for an instance?
 Thanks
 Razique - doc team -




-- 
- MSc. Correa, J.L.


Re: [Openstack] Four compute nodes, every time the 1st and 2nd are chosen

2011-11-09 Thread Masanori ITOH
Hi Razique,

Did you synchronize the clocks of all the servers?

I saw a similar issue when the clocks of the nova servers were not well
synchronized. In that case, compute nodes are recognized as down and up
(smiley) in turn, repeatedly.

Setting up an NTP server is a good idea, and please check with the ntpq
command that all nova nodes are synchronized.
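
If you want to script that check, something like this can work (a sketch
only: ntplib is a third-party package, the hostnames are placeholders, and
each node has to answer NTP queries):

    # Report the clock offset of each nova node relative to local time.
    import ntplib

    NODES = ['node1', 'node2', 'node3', 'node4']  # your nova hosts
    MAX_SKEW = 5.0  # seconds; keep well under service_down_time

    client = ntplib.NTPClient()
    for node in NODES:
        offset = client.request(node).offset
        status = 'OK' if abs(offset) <= MAX_SKEW else 'SKEWED'
        print('%s: offset %+.2fs %s' % (node, offset, status))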

Regards,
Masanori

From: Razique Mahroua razique.mahr...@gmail.com
Subject: Re: [Openstack] Four compute nodes, every time the 1st and 2nd are chosen
Date: Wed, 9 Nov 2011 16:14:35 +0100

 Hi Akira,
 I did, 4 nodes are here, up and smiling :-)
 Like I said, I explicitly disabled the first three nodes in order to force the
 scheduler, and it worked; the last node was chosen for the new instance I
 spawned, without any issue.
 
 
 
 On 9 Nov 2011, at 16:03, Akira Yoshiyama wrote:
 
  Hi,
  
  Did you try nova-manage --flagfile=/etc/nova/nova.conf service list?
  
  Regards,
  Akira Yoshiyama
  
  2011/11/09 18:52 Razique Mahroua razique.mahr...@gmail.com:
  Hi all,
  I've four compute-nodes, around 20 instances running.
  I've four nodes registered to nova-scheduler (nova-manage shows them),
  but every time I spawn a new instance, the 3rd and 4th nodes are never chosen
  for the instances.
  The resources are the same on the nodes (around 24 GB of RAM); they are
  idle and available.
  Can I force the scheduler to use them for an instance?
  Thanks
  Razique - doc team -


Re: [Openstack] Proposal to add Kevin Mitchell to nova-core

2011-11-09 Thread Chris Behrens
Yep...  +1


On Nov 9, 2011, at 5:59 AM, Brian Waldon wrote:

 Vek has absolutely stepped up and started doing quite a few reviews, so I'd
 like to nominate him to be added to nova-core.
 
 Waldon
 
 


Re: [Openstack] Proposal to add Johannes Erdfelt to nova-core

2011-11-09 Thread Chris Behrens
+1  --  For a while now, I've been going to review things, finding Johannes's 
name already in the review list quite often.  I also agree with his reviews. :)

- Chris

On Nov 9, 2011, at 6:02 AM, Brian Waldon wrote:

 I'd like to nominate Johannes for nova-core, as he has definitely been doing 
 a good number of reviews lately.


Re: [Openstack] Stable branch reviews

2011-11-09 Thread Jay Pipes
++

On Wed, Nov 9, 2011 at 10:50 AM, Thierry Carrez thie...@openstack.org wrote:
 [full quote of Thierry's message snipped; see above]




Re: [Openstack] Proposal to add Kevin Mitchell to nova-core

2011-11-09 Thread Josh Kearney
+1!


 Vek has absolutely stepped up and started doing quite a few reviews, so I'd
 like to nominate him to be added to nova-core.

 Waldon




Re: [Openstack] Proposal to add Johannes Erdfelt to nova-core

2011-11-09 Thread Josh Kearney
+1!


 I'd like to nominate Johannes for nova-core, as he has definitely been
 doing a good number of reviews lately.


Re: [Openstack] Proposal to add Johannes Erdfelt to nova-core

2011-11-09 Thread Rick Harris
Definite +1

On Nov 9, 2011, at 11:05 AM, Trey Morris wrote:

+1

On Wed, Nov 9, 2011 at 9:35 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:
9 * 3 - 26



From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net]
on behalf of Brian Waldon [brian.wal...@rackspace.com]
Sent: Wednesday, November 09, 2011 10:02 AM
To: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
Subject: [Openstack] Proposal to add Johannes Erdfelt to nova-core

I'd like to nominate Johannes for nova-core, as he has definitely been doing a 
good number of reviews lately.


Re: [Openstack] Proposal to add Kevin Mitchell to nova-core

2011-11-09 Thread Rick Harris
+1 as well.

On Nov 9, 2011, at 11:24 AM, Chris Behrens wrote:

 Yep...  +1
 
 
 On Nov 9, 2011, at 5:59 AM, Brian Waldon wrote:
 
 Vek has absolutely stepped up and started doing quite a few reviews, so I'd
 like to nominate him to be added to nova-core.
 
 Waldon
 
 


Re: [Openstack] Tutorials of how to install openstack swift into centos 6

2011-11-09 Thread John Dickinson
Awesome. Thanks for putting this together. I know a lot of people have been 
interested in getting swift running on non-ubuntu systems. Thanks for sharing 
this with everyone.

--John


On Nov 9, 2011, at 12:00 AM, pf shineyear wrote:

 [pf shineyear's install guide quoted in full; see the original above]

[Openstack] Host Aggregates ...

2011-11-09 Thread Sandy Walsh
Hi Armando,

I finally got around to reading 
https://blueprints.launchpad.net/nova/+spec/host-aggregates.

Perhaps you could elaborate a little on how this differs from host capabilities 
(key-value pairs associated with a service) that the scheduler can use when 
making decisions?
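
By capabilities I mean, roughly, matching on the reported key-value pairs,
something like this (illustrative sketch only, not the actual scheduler code):

    # Keep only the hosts whose reported capabilities satisfy the request.
    def hosts_matching(hosts, required):
        # hosts: {name: {capability: value}}; required: {capability: value}
        return [name for name, caps in hosts.items()
                if all(caps.get(k) == v for k, v in required.items())]

    hosts = {'host1': {'hypervisor': 'xen', 'arch': 'x86_64'},
             'host2': {'hypervisor': 'kvm', 'arch': 'x86_64'}}
    print(hosts_matching(hosts, {'hypervisor': 'kvm'}))  # ['host2']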

The distributed scheduler doesn't need zones to operate, but will use them if 
available. Would host-aggregates simply be a single-zone that uses capabilities?

Cheers,
Sandy



[Openstack] Power Management in the cloud

2011-11-09 Thread Stefano Maffulli
During the last UDS in Orlando, a few of us attended the Cloud Power
Management session, led by Arnaud Quette.

Arnaud presented his project, called NUT (Network UPS Tools), which
provides support for so-called power devices. These devices are
often, if not always, used in datacenters to feed power and provide
protection and runtime in case of power failures and other power events.

NUT is a modular and lightweight set of tools that provides support for
UPS, PDU and IPMI power supplies. Hundreds of manufacturers are supported,
on most OSes (Linux, OS X, Windows, Unix) and architectures (x86, x86_64,
ARM). It offers tools and language bindings to interface with NUT and
administer large-scale setups.

Some of the scenarios painted at the session for integrating NUT with
OpenStack:

- get visibility of power availability before deploying new VMs (i.e.,
which system is protected by which UPS, and should be preferred); see the
sketch after this list
- remotely power on servers, through PDU outlet interaction,
- improve power efficiency and PUE, by selecting the best UPS to put new
loads (i.e. servers) on.
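
As a strawman for the first scenario, a power-aware host filter might look
like this (purely hypothetical: none of these names exist in OpenStack today,
and the NUT status data would have to be fed in by an external agent):

    # Prefer hosts whose UPS reports healthy on-line power ('OL' in NUT
    # terms); fall back to all hosts if none are known to be protected.
    def power_aware_filter(hosts, power_info):
        protected = [h for h in hosts
                     if power_info.get(h, {}).get('ups.status') == 'OL']
        return protected or hosts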

The Cloud Power Management session also covered Andres Rodriguez's PowerWake /
PowerNap system, and its interaction with NUT, to improve general power
efficiency and consolidate power management through a single interface.

Eaton, which is the 2nd-largest power device manufacturer, an open-source
supporter and Arnaud's employer, also offered to provide devices for
development and testing purposes.

I would like to hear your opinion on the idea of adding knowledge of
power management to OpenStack: would you like to see this happening? Is
anybody interested in cooperating with Arnaud on a prototype? Other
ideas, use cases?

thanks,
stef



References:
- NUT: http://www.networkupstools.org/
- UPS: http://en.wikipedia.org/wiki/Uninterruptible_power_supply
- PDU: http://en.wikipedia.org/wiki/Power_distribution_unit
-
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-cloud-power-management




Re: [Openstack] Tutorials of how to install openstack swift into centos 6

2011-11-09 Thread Chmouel Boudjnah
It would be nice to have an alternative version of the Swift All In
One for Red Hat flavored distros, instead of just the commands.

Chmouel.

On Wed, Nov 9, 2011 at 11:51 PM, Anne Gentle a...@openstack.org wrote:
 Would you be willing to document this on the OpenStack wiki? You can link to
 it from this page:

 http://wiki.openstack.org/InstallInstructions/Swift

 I don't want your valuable work to be only on the mailing list archive (and
 truncated).

 Thanks,
 Anne

 Anne Gentle
 a...@openstack.org
 my blog | my book | LinkedIn | Delicious | Twitter
 On Wed, Nov 9, 2011 at 12:00 AM, pf shineyear shin...@gmail.com wrote:

 [pf shineyear's install guide quoted in full; see the original above]