[Openstack] Summit conference session?

2013-03-14 Thread andi abes
Is there a listing of the summit conference (not the design part at [1])
available somewhere? (There used to be the vote-for-speakers list, but that
is now gone.)


[1] http://summit.openstack.org/
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Comparing OpenStack to OpenNebula

2013-02-25 Thread andi abes
On Mon, Feb 25, 2013 at 5:46 PM, Shawn Starr shawn.st...@rogers.com wrote:

 On Monday, February 25, 2013 10:34:11 PM Jeremy Stanley wrote:
  On 2013-02-25 06:20 -0500 (-0500), Shawn Starr wrote:
  [...]
 
   I see no options on how to control what nova-compute nodes can be
   'provisioned' into an OpenStack cloud, I'd consider that a
   security risk (potentially) if any computer could just register to
   become a nova-compute?
 
  [...]
 
  On 2013-02-25 11:42:47 -0500 (-0500), Shawn Starr wrote:
   I was hoping in future we could have a mechanism via mac address
   to restrict which hypervisor/nova-computes are able to join the
   cluster.
 
  [...]
 
  It bears mention that restricting by MAC is fairly pointless as
  security protections go. There are a number of tricks an adversary
  can play to rewrite the system's MAC address or otherwise
  impersonate other systems at layer 2. Even filtering by IP address
  doesn't provide you much protection if there are malicious actors
  within your local broadcast domain, but at least there disabling
  learning on switches or implementing 802.1x can buy some relief.
 
  Extending the use of MAC address references from the local broadcast
  domain where they're intended to be relevant up into the application
  layer (possibly across multiple routed hops well away from their
  original domain of control) makes them even less effective of a
  system identifier from a security perspective.

 Hi Jeremy,

 Of course, one can modify/spoof the MAC address and or assign themselves an
 IP. It is more so that new machines aren't immediately added to the cluster
 and start launching VM instances without explicitly being enabled to do
 so. In
 this case, I am not concerned about impersonators on the network trying to
 join the cluster.

 Thanks,
 Shawn

If you're deploying multiple clusters, are you using different passwords
for each? Different mysql connection strings? Different IP addresses for the
controller and MQ?

Assuming the answer to any of those is yes, then a nova-compute node won't
just connect to the wrong cluster.
If you look at the nova.conf file, you'll see that there are lots of
cluster-specific bits of information in it that should completely assure you
that compute nodes won't just connect to the wrong cluster.
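For illustration, a minimal sketch of the kind of cluster-specific settings
an Essex/Folsom-era nova.conf carries (the host addresses and credentials
below are made up, and exact option names can shift a little between
releases):

[DEFAULT]
# database for this particular cluster
sql_connection = mysql://nova:NOVA_DB_PASS@192.168.1.10/nova
# message queue for this particular cluster
rabbit_host = 192.168.1.10
rabbit_password = RABBIT_PASS
# image service endpoint for this particular cluster
glance_api_servers = 192.168.1.10:9292

A compute node pointed at a different cluster's values never even talks to
this cluster's database or queue.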


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Distributed configuration database

2012-11-03 Thread andi abes
On Sat, Nov 3, 2012 at 9:30 AM, Aniruddha Khadkikar
askhadki...@gmail.com wrote:
 @Rob - excellent points. It would be good to know in real life
 deployments how often are configurations changed so the question of
 network interruptions are handled in a befitting way. It is my
 impression that once the database has synced the information, changes
 would be infrequent. Regarding your second point, the solution lies in
 how the records are maintained. One can design the structure to
 include a 'version' column along with a boolean attribute for logical
 deletion of records. This would allow storing information across
 different versions. Puppet has its use, but the main point is that the
 metadata should lie within openstack and not solely outside in a
 configuration management tool (i.e. Puppet). With so many
 configuration parameters in quantum, nova, swift, cinder, we need an
 easy way to be able to query 'global' values and if required be able
 to specify 'local' values applicable to nodes also, if such a need
 arises. Areas where local values could be required could be location
 of log files, different tuning parameters depending on hardware
 configuration etc.

From my experience, many of the problems encountered in deploying
OpenStack have to do with what you call local parameters, which
depend on the node's configuration: for nova, e.g., the name of the
interface/bridge; for swift, the disks available on a node; and so
on. The values that are common to the whole deployment are the easy
(or at least easier) part ;)


 The data store design can be adapted to have the node level local
 values and the individual daemons on start up would honour the local
 values if defined. Also a common data store that can be queried (my
 main point) within an openstack deployment would be extremely useful
 for troubleshooting, rather than having to dig through each and every
 configuration file (if I'm not using Puppet).

For troubleshooting you'd want to look at the resources available on the
node and compare/match them to the configuration parameters. Typical
configuration management systems (e.g. Puppet, Chef, Juju, etc.)
provide you that information, in a centralized location, with various
querying capabilities. Additionally, once you've found the problem,
you'd want to fix it... which is where CM systems shine.

Other sources of problems that configuration management systems can help
you with are:
- dependencies - other Python modules and OS packages required to make
OpenStack happy;
- disk, network and other local resources' configuration (e.g.
interface/bridge config, disk formatting, etc.).

I think there are good solutions out there, that provide more value
than just a db for parameters...
It might be worth your time to compare those to what would be gained
by just a parameter store.


 @Jon - I am happy that these ideas resonate with you. My moot point is
 that the metadata should be within the openstack implementation and
 not outside. I am not very familiar with Puppet - is there a way to
 query the parameters set in the conf file. I would think that Puppet
 would be given a conf file to deploy. The values within the conf file
 would still remain abstracted and not be readily available. Please
 correct me if I'm wrong in my presumption. Having the parameters with
 their default values in the data store would allow a better
 understanding of the different configuration parameters. Also if its
 in a database then dependency and relationship rules or even
 constraints (permissible values) could be defined.

 On Sat, Nov 3, 2012 at 12:08 PM, Robert Collins
 robe...@robertcollins.net wrote:
 One thing to bear in mind when considering a network API for this -
 beyond the issue of dealing with network interruptions gracefully - is
 dealing with version skew: while deploying a new release of Openstack,
 the definition of truth may be different for each version, so you need
 to either have very high quality accept-old-configurations-code in
 openstack (allowing you to never need differing versions of truth), or
 you need a system (such as Puppet) that can parameterise what it
 delivers based on e.g. the software version in question.

 -Rob

 On Sat, Nov 3, 2012 at 8:17 AM, Jonathan Proulx j...@csail.mit.edu wrote:
 On Sat, Nov 03, 2012 at 12:19:58AM +0530, Aniruddha Khadkikar wrote:
 : However I feel that the parameters that
 :govern the behaviour of openstack components should be in a data store
 :that can be queried from a single data store. Also it would make
 :deployments less error prone.

 On one hand I agree having a single source of truth is appealing in
 many ways.  The simplicity of text configuration files and the shared
 nothing nature of having config local to each system is also very
 appealing.

 In my world my puppet manifest is my single source of truth which
 provides both a single config interface so there is no error prone
 manual duplication and also results in a fully distributed text
 configuration so the truth 

Re: [Openstack] Expanding Storage - Rebalance Extreeemely Slow (or Stalled?)

2012-10-23 Thread andi abes
On Tue, Oct 23, 2012 at 12:16 PM, Emre Sokullu e...@groups-inc.com wrote:
 Folks,

 This is the 3rd day and I see no or very little (kb.s) change with the new
 disks.

 Could it be normal, is there a long computation process that takes time
 first before actually filling newly added disks?

 Or should I just start from scratch with the create command this time. The
 last time I did it, I didn't use the swift-ring-builder create 20 3 1 ..
 command first but just started with swift-ring-builder add ... and used
 existing ring.gz files, thinking otherwise I could be reformatting the whole
 stack. I'm not sure if that's the case.


That is correct - you don't want to recreate the rings, since that is
likely to cause redundant partition movement.

 Please advise. Thanks,


I think your expectations might be misplaced. The ring builder tries
not to move partitions needlessly. In your cluster, you had 3
zones (and I'm assuming 3 replicas). Swift placed the partitions as
efficiently as it could, spread across the 3 zones (servers). As
things stand, there's no real reason for partitions to move across the
servers. I'm guessing that the data growth you've seen is from new
data, not from existing data movement (but there are some calls to
random in the code which might have produced some partition movement).

If you truly want to move things around forcefully, you could (example
commands below):
* decrease the weight of the old devices. They would then hold more
partitions than their new weight calls for, and partitions would be
reassigned away from them.
* delete and re-add devices to the ring. This will cause all the
partitions from the deleted devices to be spread across the new set of
devices.

After you perform your ring manipulation commands, execute the
rebalance command and copy the ring files to all the nodes.
This is likely to cause *lots* of activity in your cluster... which
seems to be the desired outcome. It's also likely to have a negative
impact on service requests hitting the proxy, so it's something you
probably want to be careful about.
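As a rough sketch of the weight-reduction route (device ids and weights are
illustrative, the search-value syntax can differ between Swift versions, and
the same steps apply to the account and container builders):

# drain partitions away from an old device (id 0) by lowering its weight
swift-ring-builder object.builder set_weight d0 50
# or, for the more drastic route, remove the device outright and re-add it
# swift-ring-builder object.builder remove d0
# reassign partitions and write out a new object.ring.gz
swift-ring-builder object.builder rebalance
# then copy the updated *.ring.gz files to every proxy and storage node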

If you leave things alone as they are, new data will be distributed on
the new devices, and as old data gets deleted usage will rebalance
over time.


 --
 Emre

 On Mon, Oct 22, 2012 at 12:09 PM, Emre Sokullu e...@groups-inc.com wrote:

 Hi Samuel,

 Thanks for quick reply.

 They're all 100. And here's the output of swift-ring-builder

 root@proxy1:/etc/swift# swift-ring-builder account.builder
 account.builder, build version 13
 1048576 partitions, 3 replicas, 3 zones, 12 devices, 0.00 balance
 The minimum number of hours before a partition can be reassigned is 1
 Devices:   id  zone  ip address    port  name    weight  partitions  balance  meta
             0     1  192.168.1.3   6002  c0d1p1  100.00      262144     0.00
             1     1  192.168.1.3   6002  c0d2p1  100.00      262144     0.00
             2     1  192.168.1.3   6002  c0d3p1  100.00      262144     0.00
             3     2  192.168.1.4   6002  c0d1p1  100.00      262144     0.00
             4     2  192.168.1.4   6002  c0d2p1  100.00      262144     0.00
             5     2  192.168.1.4   6002  c0d3p1  100.00      262144     0.00
             6     3  192.168.1.5   6002  c0d1p1  100.00      262144     0.00
             7     3  192.168.1.5   6002  c0d2p1  100.00      262144     0.00
             8     3  192.168.1.5   6002  c0d3p1  100.00      262144     0.00
             9     1  192.168.1.3   6002  c0d4p1  100.00      262144     0.00
            10     2  192.168.1.4   6002  c0d4p1  100.00      262144     0.00
            11     3  192.168.1.5   6002  c0d4p1  100.00      262144     0.00

 On Mon, Oct 22, 2012 at 12:03 PM, Samuel Merritt s...@swiftstack.com
 wrote:
  On 10/22/12 9:38 AM, Emre Sokullu wrote:
 
  Hi folks,
 
  At GROU.PS, we've been an OpenStack SWIFT user for more than 1.5 years
  now. Currently, we hold about 18TB of data on 3 storage nodes. Since
  we hit 84% in utilization, we have recently decided to expand the
  storage with more disks.
 
  In order to do that, after creating a new c0d4p1 partition in each of
  the storage nodes, we ran the following commands on our proxy server:
 
  swift-ring-builder account.builder add z1-192.168.1.3:6002/c0d4p1 100
  swift-ring-builder container.builder add z1-192.168.1.3:6002/c0d4p1 100
  swift-ring-builder object.builder add z1-192.168.1.3:6002/c0d4p1 100
  swift-ring-builder account.builder add z2-192.168.1.4:6002/c0d4p1 100
  swift-ring-builder container.builder add z2-192.168.1.4:6002/c0d4p1 100
  swift-ring-builder object.builder add z2-192.168.1.4:6002/c0d4p1 100
  swift-ring-builder account.builder add z3-192.168.1.5:6002/c0d4p1 100
  swift-ring-builder container.builder add z3-192.168.1.5:6002/c0d4p1 100
  swift-ring-builder object.builder add z3-192.168.1.5:6002/c0d4p1 100
 
  [snip]
 
 
  So right now, the problem is;  the disk growth in each of the storage
  nodes seems to have stalled,
 
  So you've added 3 new devices to each ring and assigned a weight of  100
  to
  each one. What are the weights of the 

Re: [Openstack] [OSSA 2012-016] Token authorization for a user in a disabled tenant is allowed (CVE-2012-4457)

2012-09-28 Thread andi abes
Is the plan going forward to announce these on Friday afternoons?

On Fri, Sep 28, 2012 at 4:50 PM, Russell Bryant rbry...@redhat.com wrote:
 OpenStack Security Advisory: 2012-016
 CVE: CVE-2012-4457
 Date: September 28, 2012
 Title: Token authorization for a user in a disabled tenant is allowed
 Impact: High
 Reporter: Rohit Karajgi (NTT Data)
 Affects: Essex (prior to 2012.1.2), Folsom (prior to folsom-3
 development milestone)

 Description:
 Rohit Karajgi reported a vulnerability in Keystone. It was possible to
 get a token that is authorized for a disabled tenant. Once the token is
 established with authorization on the tenant, keystone would respond 200
 OK to token validation requests from other OpenStack services, allowing
 the user to work with the tenant's resources.

 Folsom fix: (Included in 2012.2)
 http://github.com/openstack/keystone/commit/4ebfdfaf23c6da8e3c182bf3ec2cb2b7132ef685

 Essex fix: (Included in 2012.1.2)
 http://github.com/openstack/keystone/commit/5373601bbdda10f879c08af1698852142b75f8d5

 References:
 http://cve.mitre.org/cgi-bin/cvename.cgi?name=2012-4457
 https://bugs.launchpad.net/keystone/+bug/988920

 --
 Russell Bryant
 OpenStack Vulnerability Management Team


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSA 2012-012] Horizon, Open redirect through 'next' parameter (CVE-2012-3540)

2012-09-13 Thread andi abes
Has a fix for this been backported to the essex/stable branch?

On Thu, Aug 30, 2012 at 11:35 AM, Russell Bryant rbry...@redhat.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 This advisory included the wrong CVE.  It was CVE-2012-3540.  Sorry
 about that.

 On 08/30/2012 11:10 AM, Russell Bryant wrote:
 OpenStack Security Advisory: 2012-012 CVE: CVE-2012-3542

 This should have been CVE-2012-3540

 Date: August 30, 2012 Title: Open redirect through 'next'
 parameter Impact: Medium Reporter: Thomas Biege (SUSE) Products:
 Horizon Affects: Essex (2012.1)

 Description: Thomas Biege from SUSE reported a vulnerability in
 Horizon authentication mechanism. By adding a malicious 'next'
 parameter to a Horizon authentication URL and enticing an
 unsuspecting user to follow it, the victim might get redirected
 after authentication to a malicious site where useful information
 could be extracted. Only setups running Essex are affected.

 Fixes: 2012.1:
 https://github.com/openstack/horizon/commit/35eada8a27323c0f83c400177797927aba6bc99b

  References:
 http://cve.mitre.org/cgi-bin/cvename.cgi?name=2012-3542

 This should have been:

 http://cve.mitre.org/cgi-bin/cvename.cgi?name=2012-3540

 https://bugs.launchpad.net/horizon/+bug/1039077

 Notes: This fix will be included in a future Essex (2012.1)
 release.

 - --
 Russell Bryant
 OpenStack Vulnerability Management Team
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.12 (GNU/Linux)
 Comment: Using GnuPG with Mozilla - http://www.enigmail.net/

 iEYEARECAAYFAlA/iDEACgkQFg9ft4s9SAbPBQCgndIk58K5ZF71PCxmWfDjV9MO
 4yoAoJDGBeqC4TbJnyo+AsEeQYeTQEe6
 =zO6p
 -END PGP SIGNATURE-


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Quantum vs. Nova-network in Folsom

2012-09-05 Thread andi abes
late to the party... but I'll dabble.

On Mon, Aug 27, 2012 at 12:21 PM, Chris Wright chr...@sous-sol.org wrote:
 * rob_hirschf...@dell.com (rob_hirschf...@dell.com) wrote:
 We've been discussing using Open vSwitch as the basis for non-Quantum Nova 
 Networking deployments in Folsom.  While not Quantum, it feels like we're 
 bringing Nova Networking a step closer to some of the core technologies that 
 Quantum uses.

 To what end?

OVS provides much more robust monitoring and operational facilities
(e.g. sFlow monitoring, better switch table visibility, etc.).
It also provides a linux-bridge compatibility layer (ovs-brcompatd
[1]), which should work out of the box with the linux bridge. As such,
switching to OVS rather than the linux bridge could be done
without any code changes to nova, just deployment changes (e.g. ensuring
that ovs-brcompatd is running to intercept brctl ioctls - [2]).

For the more adventurous, there could be any number of interesting
scenarios enabled by having access to OVS capabilities (e.g.
tunneling).
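As a small illustration of the extra visibility (br100 is just the typical
nova-network bridge name here; substitute your own):

# read-only commands with no real equivalent on a plain linux bridge
ovs-vsctl show              # bridges, ports and their configuration
ovs-ofctl dump-flows br100  # the flow table of a given bridge
ovs-dpctl show              # datapath and port information/statistics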


 I'm interested in hearing what other's in the community think about this 
 approach.


I'm similarly curious whether any *operators* have experimented with OVS
and sFlow, or with others of its capabilities.



[1] 
http://openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=README;hb=HEAD
[2] http://openvswitch.org/cgi-bin/ovsman.cgi?page=vswitchd%2Fovs-brcompatd.8.in

 I don't think legacy nova networking should get features while working to
 stabilize and improve quantum and nova/quantum integration.

 thanks,
 -chris


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] DHCP and kernel 3.2

2012-09-05 Thread andi abes
I've heard of folks having issues with UDP checksums not being
generated correctly, and having success after running the command below
on the nova-compute nodes.

iptables -A POSTROUTING -t mangle -p udp --dport 68 -j CHECKSUM --checksum-fill


I'm not sure if this affects the versions you're working with.
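To check that the rule is in place and actually matching DHCP replies, the
standard listing works (the packet counters should tick up as guests request
leases):

iptables -t mangle -L POSTROUTING -n -v | grep CHECKSUM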

On Wed, Sep 5, 2012 at 10:41 AM, Anton Haldin ahal...@griddynamics.com wrote:
 I have the same issue  ( kernel 3.5 , guest vm cannot get response from
 dnsmasq )


 On Fri, Aug 10, 2012 at 7:08 AM, Lorin Hochstein lo...@nimbisservices.com
 wrote:


 On Aug 9, 2012, at 3:22 AM, Alessandro Tagliapietra
 tagliapietra.alessan...@gmail.com wrote:

  Hello guys,
 
  i've just installed kernel 3.4 from Ubuntu kernel PPA archive and after
  this upgrade VM aren't able to get the DHCP address but with tcpdump i see
  the request and offer on the network.
  Someone else experienced this? I've tried also with 3.3, same story.
  Rolling back to 3.2 and everything works fine.
 


 When I had a similar problem the issue turned out to be that I needed to
 configure the NIC on the compute host to be in promiscuous mode, otherwise
 the DHCP response wouldn't make it to the VM.

 Lorin

 Sent from my iPad






___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Quantum vs. Nova-network in Folsom

2012-09-05 Thread andi abes
On Wed, Sep 5, 2012 at 1:01 PM, Dan Wendlandt d...@nicira.com wrote:
 On Wed, Sep 5, 2012 at 5:23 AM, andi abes andi.a...@gmail.com wrote:
 late to the party... but I'll dabble.

 On Mon, Aug 27, 2012 at 12:21 PM, Chris Wright chr...@sous-sol.org wrote:
 * rob_hirschf...@dell.com (rob_hirschf...@dell.com) wrote:
 We've been discussing using Open vSwitch as the basis for non-Quantum Nova 
 Networking deployments in Folsom.  While not Quantum, it feels like we're 
 bringing Nova Networking a step closer to some of the core technologies 
 that Quantum uses.

 To what end?

 OVS provides much more robust monitoring and operational facilities
 (e.g sFlow monitoring, better switch table visibility etc).

 You won't find any disagreement from me about OVS having more advanced
 capabilities :)

 It also provides a linux-bridge compatibility layer (ovs-brcompatd
 [1]), which should work out-of-box with the linux-bridge. As such,
 switching to using OVS rather than the linux bridge could be done
 without any code changes to nova, just deployment changes (e.g. ensure
 that ovs-brcompatd is running to intercept brctl ioctl's - [2]).

 Using ovs-brcompatd would be possible, though some distros do not
 package and run it by default and in general it is not the preferred
 way to run things according to email on the OVS mailing list.

Agreed, this provides only minimal exposure to OVS capabilities. The
thought was that it would provide a path to:
* avoid making ANY changes to nova-network while still being able to use OVS;
* allow operators to gain operational experience running and monitoring
OVS when it's deployed in a somewhat degenerate deployment.


 For the more adventurous, there could be any number of interesting
 scenarios enabled by having access to ovs capabilities  (e.g.
 tunneling)

 Tunneling is definitely a huge benefit of OVS, but you still need
 someone to setup the tunnels and direct packets into them correctly.
 That's is exactly what the Quantum OVS plugin does and it is
 completely open source and freely available, so if people want to
 experiment with OVS tunneling, using Quantum would seem like the
 obvious way to do this.


True, but that would require moving on to Quantum, with the associated pains.

 Dan



Sorry if I stirred up a fuss on a relatively dormant thread.

 --
 ~~~
 Dan Wendlandt
 Nicira, Inc: www.nicira.com
 twitter: danwendlandt
 ~~~

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Fwd: nova-compute on VirtualBox with qemu

2012-08-27 Thread andi abes
-- Forwarded message --
From: andi abes andi.a...@gmail.com
Date: Mon, Aug 27, 2012 at 1:54 PM
Subject: nova-compute on VirtualBox with qemu
To: openstack-operat...@lists.openstack.org


I'm using Essex on VirtualBox, and am having some issues getting
nova-compute to not hate me that much.
The error I'm getting is: libvir: QEMU error : internal error Cannot
find suitable emulator for x86_64
Running the same steps as in [1] seems to reproduce the same behavior.

The VB guest is 12.04.

nova-compute.conf has:
[DEFAULT]
libvirt_type=qemu

I guess my question is: where do I supply libvirt/nova with the magical
'disable accel' flags (i.e. '-machine accel=kvm:tcg', which seems to
make qemu happy)?


TIA,
a.


[1] https://lists.fedoraproject.org/pipermail/virt/2012-July/003358.html



(adding openstack, and some more details)

Versions:
qemu-system-x86_64 --version
QEMU emulator version 1.0.50 (qemu-kvm-devel), Copyright (c) 2003-2008
Fabrice Bellard

libvirtd --version
libvirtd (libvirt) 0.9.9

tia,
a

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift performance for very small objects

2012-05-22 Thread andi abes
Remember that when an object is written to Swift, it's not written
just to the object servers; the container and account servers are
updated as well - the container for object listings (and timestamps)
and the account for overall statistics. Also, the proxy ensures a
quorum for the newly written object - 2 of the 3 replicas must be
written before the request is ack'd to the client.
If you're trying to find ways to optimize Swift for performance,
especially for large clusters, I'd probably focus on performance
optimization of the account and container servers.
A few more thoughts:
 * Swift is designed to scale out very well - both across machines and
across disks. You effectively defeat that scaling when you use
loopback devices, since you effectively force all the disk activity
onto the same physical disk.
 * You might want to prime your environment before your
performance tests - things like ARP caches and DNS name resolution.
Also, make sure to prime your accounts and containers, and not
have them be created as part of the test.
 * There are some caches in Swift that run around 128K entries (in the
account/container servers). You might want to run larger tests, to
make sure you get those flushed once in a while.
 * Once you have real disks, you might want to play around with disk
to zone ratios. Replicas are guaranteed to go to different zones, so
the number of disk spindles in a zone will affect the overall
performance of your cluster.

It will be interesting to hear more about your results!

Oh... persistent connections. I believe Python's httplib will
auto-negotiate persistent connections, so no app-level code is
required (good thought though ;)
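If it helps, swift-bench (which ships alongside Swift) is a handy way to run
this kind of sweep once the environment is primed. A rough invocation,
assuming the default tempauth test credentials and a local proxy (flag names
may differ a bit between versions):

swift-bench -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing \
    -c 10 -s 190 -n 5000 -g 0

That would do 5000 PUTs of 190-byte objects at concurrency 10 and skip the
GET pass.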


On Sat, May 19, 2012 at 9:34 PM, Paulo Ricardo Motta Gomes
pauloricard...@gmail.com wrote:
 Hello,

 I'm doing some experiments in a Swift cluster testbed of 9 nodes/devices and
 3 zones (3 nodes on each zone).

 In one of my tests, I noticed that PUTs of very small objects are extremely
 inefficient.

 - 5000 PUTs of objects with an average size of 40K - total of 195MB - took
 67s (avg time per request: 0.0135s)
 - 5000 PUTS of objects with an average size of 190 bytes - total of 930KB -
 took 60s (avg time per request: 0.0123s)

 I plotted object size vs request time and found that there is significant
 difference in request times only after 200KB. When objects are smaller than
 this PUT requests have a minimum execution time of 0.01s, no matter the
 object size.

 I suppose swift is not optimized for such small objects, but I wonder what
 is the main cause for this, if it's the HTTP overhead or disk writing. I
 checked the log of the object servers and requests are taking an average of
 0.006s, whether objects are 40K or 190 bytes, which indicate part of the
 bottleneck could be at the disk. Curently I'm using a loopback device for
 storage.

 I thought that maybe this could be improved a bit if the proxy server
 maintained persistent connections to the storage nodes instead of opening a
 new one for each request?

 It would be great if you could share your thoughts on this and how could the
 performance of this special case be improved.

 Cheers,

 Paulo

 --
 European Master in Distributed Computing
 Royal Institute of Technology - KTH
 Instituto Superior Técnico - IST
 http://paulormg.com



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Boston UG

2012-05-15 Thread andi abes
Quick note about our next meetup, tomorrow 5/16, @ Harvard University.
If you happen to be in the Boston area, come eat some pizza, get to know
some cool folks and have some OpenStack shop-talk.
(If you're commuting, fear not the parking - there are instructions on
getting cheap parking at Harvard facilities for $5.)

Thanks to SUSE for feeding us, and to Dell for herding the cats...

logistics and such here: http://www.meetup.com/Openstack-Boston/events/63106082/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] swift indexation

2012-04-30 Thread andi abes
Swift updates the account and container listings as containers and
objects are created.
So, if what you're looking for is a list of objects or containers,
you can just query the proxy: a GET on the account (tenant) URL returns
the list of containers, and a GET on a container returns the list of
objects in that container.

The full API is here:
http://docs.openstack.org/api/openstack-object-storage/1.0/content/ch_object-storage-dev-api-storage.html
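For example (the endpoint, account and token below are made up; the storage
URL and token come back from your auth request):

# list the containers in the account
curl -H 'X-Auth-Token: AUTH_tk1234' http://swift-proxy:8080/v1/AUTH_test
# list the objects in one container, optionally narrowed with ?prefix=
curl -H 'X-Auth-Token: AUTH_tk1234' \
    'http://swift-proxy:8080/v1/AUTH_test/photos?prefix=vacation/'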



On Mon, Apr 30, 2012 at 11:43 AM, Caitlin Bestler
caitlin.best...@nexenta.com wrote:


 khabou imen asked:

 ➢
➢ can anyone help me in understanding how swift indexation happens,
➢ I am trying to develop a client looking for a specific file stored with
openstack storage

 Your client would have to match what the Swift Proxy server did for a GET, 
 including tracking which servers were down.
 Is there a specific benefit to bypassing the Swift Proxy that you are trying 
 to achieve?


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Code contribution

2012-04-27 Thread andi abes
The full setup is described here: http://wiki.openstack.org/GerritJenkinsGithub
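For the contributor side, the day-to-day flow with that setup is roughly the
following, assuming the git-review tool is installed and set up (this is just
a sketch; the wiki page above is the authoritative walkthrough):

git checkout -b my-topic-branch   # work on a local topic branch
# ...hack...
git commit -a                     # git-review's hook adds the Change-Id
git review                        # push the change to Gerrit for review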


On Fri, Apr 27, 2012 at 2:42 PM, John Postlethwait
john.postlethw...@nebula.com wrote:
 If you are asking how it becomes approved the answer is that two
 core-contributors of the specific project need to manually review the code,
 and +2 it. When that is done, Jenkins, a CI tool, will run the various
 automated acceptability tests against the code and if they all pass, it will
 merge it in to the project.


 John Postlethwait
 Nebula, Inc.
 206-999-4492

 On Friday, April 27, 2012 at 9:08 AM, Victor Rodionov wrote:

 Hello

 How code in gerrit becomes verified?






___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] raw or qcow2

2012-04-25 Thread andi abes
 So the ability to create a snapshot, rollback to a snapshot and to create a 
 new snapshot that references a snapshot as its base are strong
 candidates for abilities to design for volume drivers. Any device that did 
 not support this capability would simply state so. Snapshotting would
 then have to be implemented with generic logic rather than vendor specific 
 logic. Generally the vendor specific logic will be able to perform
 the operation more efficiently than generic logic, otherwise the vendor would 
 not have developed it.

Would this also be applicable to the ephemeral instance storage?

Since I'm guessing not, at least not generically, are there any plans
for some sort of parity between volumes and ephemeral storage?
(But then again, given the discussions about boot-from-volume, maybe
I'm guessing wrong...)




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] raw or qcow2

2012-04-25 Thread andi abes
On Wed, Apr 25, 2012 at 3:39 PM, Caitlin Bestler
caitlin.best...@nexenta.com wrote:
 andi.abes asked:

 Would this also be applicable to the ephemeral instance storage?

 An ephemeral instance is essentially a non-persistent clone of a snapshot 
 image.

That's an interesting perspective. But currently, those can be
snapshotted and made into new images (see [1]).
The approach you're describing appears to depend heavily on the
capabilities of backend storage devices. If snapshotting is only
supported using those capabilities, would it mean deprecating the
existing ones? (Or did you just not consider them?)

[1] 
http://docs.openstack.org/trunk/openstack-compute/admin/content/creating-images-from-running-instances.html

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [quantum] summit slides for clearpath service insertion

2012-04-23 Thread andi abes
During the summit, there was an API proposal for service insertion,
presented by folks from Clearpath.
Does anyone have links?

a.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Question on wsgi rate limiting

2012-03-30 Thread andi abes
Caitlin, I'm curious what the use cases and concerns in DCB were.
If my memory serves me right (from rate limiting at the L2 level), the main
issues are guaranteeing QoS, effective bandwidth usage, and fair
allocation of memory buffer space. All of those goals get damaged pretty
badly if congestion occurs and any of the resources are exhausted
(leading to indiscriminate packet loss, for lack of any other
recourse).

The use cases, at least as I conceive them for OpenStack, are very
different. They're not intended to resolve resource constraints as a
primary goal (though that's definitely a secondary goal). As an
example, issuing lots of Nova API calls is by itself not the problem -
executing the downstream effects of what those requests trigger is what's
being protected (access to the DB, spawning VMs, RabbitMQ message
rates, etc.).

For Swift, where pure bandwidth is a primary concern and the primary
resource being consumed, I imagine you're right: some L2/L3 traffic
shaping (and monitoring) would be advisable. But that's not to say it's
the only resource. E.g., creating and deleting containers
repeatedly will consume relatively little bandwidth, but will exert
quite a lot of resource consumption on the back end. Rate limiting
those API calls is probably prudent at the API layer.





On Fri, Mar 30, 2012 at 1:56 PM, Jay Pipes jaypi...@gmail.com wrote:
 You make some good points about what is the appropriate level in the stack
 to do rate shaping, but I would just like to have a configurable, manageable
 and monitorable ratelimit/quota solution that doesn't seem like a giant hack
 :)

 Baby steps.

 -jay


 On 03/30/2012 01:23 PM, Caitlin Bestler wrote:

 Throughout the discussion on distributed rate limiting I’ve had the
 annoying feeling that I’ve heard this joke before.

 Basically, are we looking for our keys under the street lamp because the
 light is good rather than looking for them
 where they were lost?

 Has anyone studied the effectiveness of rate limitations implemented at
 this layer. From my experience with rate

 shaping discussions in IEEE 802.1 Data Center Bridging group I am
 concerned that the response time working this

 far up the stack will preclude effective rate shaping.

 Of course, if someone has studied this and shown it to be effective then
 this would be great news. The light is

 Better under the street lamp.






___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Question on wsgi rate limiting

2012-03-30 Thread andi abes
Caitlin, alas, you missed my point.

The intent of rate limiting API calls in OpenStack goes well beyond limiting
network traffic. It's intended to ensure that no one tenant/user
consumes unduly high resources throughout the system. These resources
are not just network bandwidth, but the myriad resources involved
in providing the service (DB access, replication activity - both
CPU, memory and bandwidth).
These have little to do with networking or Quantum (though having
Quantum provide an API for bandwidth management would be cool).

You might want to think of this functionality as an API quota, rather
than the traditional bandwidth-only rate limits.


On Fri, Mar 30, 2012 at 2:40 PM, Caitlin Bestler
caitlin.best...@nexenta.com wrote:
 Caitlin Replies inline /Caitlin

 -Original Message-
 From: andi abes [mailto:andi.a...@gmail.com]
 Sent: Friday, March 30, 2012 11:32 AM
 To: Jay Pipes; Caitlin Bestler
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Question on wsgi rate limiting

 Caitlin, I'm curious what were the use cases and concerns in DCB?
 If my memory serves me right (from rate limiting at L2 level) the main issues 
 are guaranteeing QoS, effective bandwidth usage, fair allocation of memory 
 buffer space. All of those goals damaged pretty badly if congestion occurs 
 and any of the resources are exhausted (leading to indiscriminate packet 
 loss, for lack of any other recourse).
 Caitlin
 Correct.  The fundamental goal was to allow storage-oriented classes of 
 service which could be effectively guaranteed to be drop-free within a 
 Datacenter.
 FCoE as a specific application needed this to be a very strong guarantee, but 
 congestion drops triggering TCP back-offs is generally not a good thing for 
 storage traffic.

 And once you cut through the math, this is ultimately about good algorithms
 that can be implemented in a distributed fashion and that allocate the
 buffering capacity of the network elements somewhat intelligently and robustly.
 /Caitlin

 The use cases, at least the way as I conceive them for OS are very different. 
 They're not intended to resolve resource constraints as a primary goal 
 (though, that's definitely a secondary goal).  As an example - issuing lots 
 of Nova API calls is by itself not the problem
 - executing the downstream effects of what those requests trigger is whats 
 being protected (access to DB, spawning VM's, Rabbit MQ message rates etc).

 For Swift, where pure bandwidth is a primary concern, and the primary 
 resource being consumed - I imagine you're right. Some L2/L3 traffic shaping 
 (and monitoring) would be advisable - but that's not to say that's the only 
 resource. e.g. creating and deleting containers repeatedly will consume 
 relatively little bandwidth, but will exert quite a lot of resource 
 consumption on the back end. Rate limiting these API calls is probably 
 prudent at the API layer.
 Caitlin
 Yes, Nova APIs calls are very unlikely to cause network congestion.
 Bulk payload transfers (whether Swift or Nova Volumes) is an issue.
 My concern is that a truly effective solution will have to be at the Quantum 
 level, not wsgi.
 /Caitlin





 On Fri, Mar 30, 2012 at 1:56 PM, Jay Pipes jaypi...@gmail.com wrote:
 You make some good points about what is the appropriate level in the
 stack to do rate shaping, but I would just like to have a
 configurable, manageable and monitorable ratelimit/quota solution that
 doesn't seem like a giant hack
 :)

 Baby steps.

 -jay


 On 03/30/2012 01:23 PM, Caitlin Bestler wrote:

 Throughout the discussion on distributed rate limiting I've had the
 annoying feeling that I've heard this joke before.

 Basically, are we looking for our keys under the street lamp because
 the light is good rather than looking for them where they were lost?

 Has anyone studied the effectiveness of rate limitations implemented
 at this layer. From my experience with rate

 shaping discussions in IEEE 802.1 Data Center Bridging group I am
 concerned that the response time working this

 far up the stack will preclude effective rate shaping.

 Of course, if someone has studied this and shown it to be effective
 then this would be great news. The light is

 Better under the street lamp.






___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Crowbar problem (using Rob's ISO)

2012-03-26 Thread andi abes
Hi Salman,

a) It might be more appropriate to post Crowbar-specific questions on
the Crowbar mailing list... crow...@lists.us.dell.com

b) The errors you're seeing seem to be caused by the node failing to
communicate with the admin node (it did for a while, then something
changed...). That code is being hardened now, but in the version
you're using it was still a bit brittle.
You can reboot the node; it will probably recover and continue the
deployment sequence.

It might be useful to get the details of how you arrived at this situation.




On Fri, Mar 23, 2012 at 1:31 PM, Salman Malik salma...@live.com wrote:
 Hi All,

 I have tried installing OpenStack using the crowbar ISO provided by Rob
 (http://crowbar.zehicle.com/crowbar111219.iso) and successfully installed
 the admin node. But when it comes to PXE boot of the other crowbar nodes,
 the installation gets stuck showing:

 HOSTNAME=h00-0c-29-58-21-fe.crowbar.org
 NODE_STATE=
 /usr/lib/ruby/gems/1.8/gems/json-1.4.6/lib/json/common.rb:146 in ‘parse’:
 705: unexpected token at ‘Host not found’ (JSON::ParserError)
 from /usr/lib/ruby/gems/1.8/gems/json1.4.6/lib/json/common.rb:146 in ‘parse’
 from /updates/parse_node_data:57:in ‘main’
 from /updates/parse_node_data:69
 BMC_ROUTER=
 BMC_ADDRESS=
 BMC_NETMASK=
 /usr/lib/ruby/gems/1.8/gems/json-1.4.6/lib/json/common.rb:146 in ‘parse’:
 705:unexpected token at ‘Host not found’ (JSON::ParserError)
 from /usr/lib/ruby/gems/1.8/gems/json1.4.6/lib/json/common.rb:146 in ‘parse’
 from /updates/parse_node_data:57:in ‘main’
 from /updates/parse_node_data:69
 BMC_ROUTER=
 BMC_ADDRESS=
 BMC_NETMASK=
 HOSTNAME=h00-0c-29-58-21-fe.crowbar.org
 NODE_STATE=

 This problem has already been reported at Rob's blog but I am not sure if it
 is fixed.
 Any ideas?

 Thanks.



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack Installation Woes - Need re-assurance and help.

2012-03-13 Thread andi abes
On Tue, Mar 13, 2012 at 7:43 AM, Kevin Jackson
ke...@linuxservices.co.uk wrote:
 Hi Andi,
 Sure - the methods aren't meant for automated production installs, but to
 get to a world where I can automate using Orchestra or variations on PXE
 booting,

Well... setting up a distributed system where all the pieces connect
is a bit more complex than a sequence of PXE boots, as the remainder of
your post identifies ;)


I first need to get to a world where a manual install works.
 I was getting desperate in having a demonstration available to those that
 pay my wages - the best candidate I had here were Kiall's managedit repo.
  Great work (really), but it wouldn't have been anything to be considered
 for production for obvious reasons.

 I looked at Crowbar a few months back and tried to persevere with it, but it
 was very very clunky and not very friendly to use.  I couldn't customise it
 to my network requirements - which aren't anything out of the ordinary, but
 I needed customisation like VLAN IDs, etc.  The docs pointed to editing
 Barclamps.

You can look at it as added complexity, but the layer that Crowbar adds
on top of the core packages is meant to solve the set of problems that
go beyond installing a single node - like ensuring that the right nodes
end up on the right VLAN and use the right gateway for that logical
network.

I would be interested to hear privately more about what you found
confusing and complex. And I'd admit that we've made strides in
publishing more guides, videos and advice (and heard pretty good
feedback).

An added complexity which I don't believe is necessary in a
 world where PXE booting an OS is simple and package installation is even
 simpler.  The crux of the challenge I need to solve is just OpenStack
 configuration but documentation lags development (naturally - not
 criticising) and comparing like-for-like hasn't worked for me (e.g. devstack
 configs are completely different to what, say, Ubuntu deb packages expect).


Exactly. One thing to note, though, is that your comparison is not quite
fair. Package managers do a great job of installing a single machine.
They're not meant, nor really capable of, deploying clusters of machines
without some layer of orchestration. I think that's actually your
expectation from the top of the email.

From personal experience - to develop Crowbar, I find myself reading
more .py files than .html/.txt files... that's the world of agile
software development. If you want to take on orchestrating an
OpenStack deployment, I can share some good Python resources.


 Given Canonical's backing of OpenStack I thought I was in good company.
  After I've a working setup of installing Ubuntu onto a few nodes the next
 natural step would be to use Orchestra (or Cobbler itself which we currently
 use).


To be fair, Canonical has been working on Juju as a layer on top of
packages, to... orchestrate deployments.

 The issue I have is that all the components are installed without out error.

Package installed is not equal to cluster deployed... sorry, just had
to hammer that nail.

  I come to use it and keystone doesn't want to play ball with the other
 components.
 This leads me to believe it can be two things: misconfiguration or bugs.

 If its misconfiguration - excellent - I can fix that today if someone shares
 a script or steps to configure Keystone Light to work with the rest of the
 environment.

Assuming you're Chef-savvy, this might be useful:
https://github.com/dellcloudedge/barclamp-keystone/tree/release/essex-hack/master/chef/cookbooks/keystone


 In the meantime I'm assuming bugs as I'm not getting anywhere fast with what
 I currently *think* are the correct steps.



 Cheers,

 Kev

 On 13 March 2012 11:27, andi abes andi.a...@gmail.com wrote:

 Hi Kevin, sorry for the hard time you're having.
 However, most of the methods you described, are NOT meant for
 production deployments (not saying all, because I haven't tried
 them all).
 You might want to look at projects which aim to automate production
 deployments.
 I can point you to the one I'm working on (The diablo release is in
 production in many installations,  The essex series is abit nascent,
 but pretty far along). It's here [1]
 You can also download ISO's from [2]
 (the crowbar mailing list is here [3], so you can see what folks have
 said, and check the wiki here [4])

 hope you have a more successful experience.


 [1] http://github.com/dellcloudedge/crowbar
 [2] http://crowbar.zehicle.com
 [3] https://lists.us.dell.com/mailman/listinfo/crowbar
 [4] http://github.com/dellcloudedge/crowbar/wiki


 On Tue, Mar 13, 2012 at 6:27 AM, Kevin Jackson
 ke...@linuxservices.co.uk wrote:
  Cheers Padraig - I'll grab a Fedora install and compare notes.
  I guess if Fedora has an installation candidate, the problem is probably
  Ubuntu packaging - at least I can direct my issues at Ubuntu rather than
  OpenStack as a whole...
 
  Kev
 
 
  2012/3/13 Pádraig Brady p...@draigbrady.com
 
  On 03/13

Re: [Openstack] Enabling data deduplication on Swift

2012-03-11 Thread andi abes
On Sun, Mar 11, 2012 at 3:49 PM, Caitlin Bestler
caitlin.best...@nexenta.com wrote:
 Restricting fingerprinting to blocks would make block level compares 
 possible, but as I noted on an earlier reply
 it would *always* require that the blocks be transferred to perform the 
 calculation. It is a lot harder to double
 Network bandwidth than to double storage.

Hmm? Why is it harder? It might be more expensive to add 10/40G ports,
but with LAGs it's just as possible...

 Deduplication that only saves disk space is leaving the larger problem
 of network bottlenecks unaddressed.

Doesn't that depend on the ratio of reads vs. writes?
In a read-tilted environment (e.g. CDNs, image stores, etc.), being
able to dedup at the block level in the relatively rare write case
seems a boon. The simplification this could allow - performing
localized dedup (i.e. each object server deduping just its local
storage) - seems worthwhile.




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [CHEF] How to structure upstream OpenStack cookbooks?

2012-03-10 Thread andi abes
I like where this discussion is going. So I'd like to throw a couple
more sticks into the fire, around test/SAIO vs. production deployments...

* Swift cookbooks (and cookbooks in general) should not assume control of
system-side resources, but rather use the appropriate cookbook (or, better
yet, definition if it exists). E.g., rsync might be used for a variety
of other purposes by other roles deployed to the same node. The
rsync cookbook (not currently, but hopefully soon) should provide the
appropriate hooks to add your role's extras. Maybe a better example is
the sudoers cookbook, which allows node attributes to describe users and
groups.

* SAIO deployments could probably be kept really simple if they don't
have to deal with repeated application - no need to worry about
idempotency, which tends to make things much harder. A greenfield
deployment + some scripts to operate the test install are probably
just the right thing.

* Configurability - in testing, you'd like things pretty consistent.
One pattern I've been using is having attribute values that are
'eval'ed to retrieve the actual data.
For example, for the IP address/interface to use for storage
communication (i.e. proxy -> account server), a node attribute called
storage_interface is evaluated. A user (or higher-level system) can
assign either node[:ipaddress] (which is controlled by Chef, and
goes slightly bonkers when multiple interfaces are present) or be more
opinionated and use e.g.
node[:crowbar][:interfaces][:storage_network].






2012/3/9 Matt Ray m...@opscode.com:
 I agree with some of the replies from Rafael, but I have a few
 suggestions (inline).

 2012/3/9 Rafael Durán Castañeda rafadurancastan...@gmail.com:
 Hi,

 Bearing in mind I'm not really a Chef expert:

 On 03/09/2012 04:58 AM, Jay Pipes wrote:

 Hi Stackers,

 Specifically, these are the questions I'd like to discuss and get
 consensus on:

 1) Do resources that set up non-production environments such as Swift
 All-in-One belong in the OpenStack Chef upstream cookbooks?

 I think this kind of recipes help a lot new stackers and I can't see any
 reason for not include them.

 If the openstack-chef cookbooks are going to be examples for
 implementations, including SAIO makes sense.

 2) Should the cookbook be called swift instead of swift-aio, with the
 idea that the cookbook should be the top-most container of resources
 involved with a specific project?

 I think so

 If there's not a lot of overlap or shared attributes between the
 multi-node Swift cookbook and the SAIO cookbook, having them separate
 makes more sense so it's easier to maintain them separately.

 3) Is it possible to have a swift cookbook and have resources underneath
 that allow a user to deploy either SAIO *or* into a multi-node production
 environment? If so, would the best practice be to create recipes for SAIO
 and recipes for each of the individual Swift servers (proxy, object, etc)
 that would be used in a production configuration?

 I think if you split your cookbooks on small reusable components you can
 combine them so you get a SAIO, proxy, whatever with little or not extra
 effort

 For the Nova cookbooks I've written (haven't done Swift yet), I had a
 role for 'nova-single-machine' that was just a special case of
 multi-node and reused all the multi-node recipes. I would propose
 including the 'swift-aio' as a separate cookbook for now and have the
 goal of ensuring that there is a 'swift-single-machine' role that
 works with the 'swift' cookbook. This might take some additional work,
 so keep both sets of cookbooks for now in case it doesn't get the
 attention it needs.

 4) Instead of having an SAIO recipe in a swift cookbook, is it more
 appropriate to make a Chef *role* called swift-aio that would have a run
 list that contained a number of recipes in the swift cookbook for all the
 Swift servers plus rsync, loopback, etc?

 I think this is good practice. As I said before having small reusable
 components you can combine them getting all what you need, and probably the
 best place for combining them is roles. You can of course include smaller
 cookbooks into bigger ones and get the same result, but I prefer role based.

 See my previous comment. I too prefer role-based.


 HTH,
 Rafael



 Thanks,
 Matt Ray
 Senior Technical Evangelist | Opscode Inc.
 m...@opscode.com | (512) 731-2218
 Twitter, IRC, GitHub: mattray


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Enabling data deduplication on Swift

2012-03-10 Thread andi abes
Maybe a happy path exists between efficiency and correctness ;) I
think rsync is probably a good comparison for the use case at hand
(it identifies identical blocks between the source and target, and
only sends deltas over the wire).
It combines a quick hash to identify candidates that might be
duplicates, but relies on comparison to ensure that the match is real
and not just a hash collision.

See the source of all knowledge:
http://en.wikipedia.org/wiki/Rsync#Algorithm






On Sat, Mar 10, 2012 at 1:15 PM, Maru Newby mne...@internap.com wrote:
 Hi Joe,

 There's one huge difference between page deduplication and object
 deduplication:  Page size is small and predictable, whereas object size is
 not.  Given this, full compares would not be a good way to implement
 performant object deduplication in swift.

 Thanks,


 Maru


 On 2012-03-10, at 9:57 AM, Joe Gordon wrote:

 Paulo, Caitlin,


 Can SHA-1 collisions be generated?  If so can you point me to the article?

 Also why compare hashes in the first place?  Linux 'Kernel Samepage Merging',
 which does page deduplication for KVM, does a full compare to be safe [1].
  Even if collisions can't be generated, what are the odds of a collision
 (for SHA-1 and SHA-256) happening by chance when using Swift at scale?


 best,
 Joe Gordon




 [1] http://www.linux-kvm.com/sites/default/files/KvmForum2008_KSM.pdf
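
 On the "odds of a collision by chance" part, the usual birthday-bound
 estimate is easy to compute (a rough sketch; the trillion-object count below
 is an arbitrary example, not a measured Swift figure):

 # P(any collision) ~= n^2 / 2^(bits + 1) for n uniformly random digests
 def collision_probability(n_objects, hash_bits):
     return (n_objects ** 2) / 2.0 ** (hash_bits + 1)

 for bits, name in ((160, "SHA-1"), (256, "SHA-256")):
     p = collision_probability(10 ** 12, bits)  # one trillion objects, for scale
     print("%s: ~%.1e chance of an accidental collision" % (name, p))
 # SHA-1:   ~3.4e-25
 # SHA-256: ~4.3e-54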


 On Fri, Mar 9, 2012 at 4:44 PM, Caitlin Bestler
 caitlin.best...@nexenta.com wrote:

 Paulo,



 I believe you’ll find that we’re thinking along the same lines. Please
 review my proposal at http://etherpad.openstack.org/P9MMYSWE6U



 One quick observation is that SHA-1 is totally inadequate for
 fingerprinting objects in a public object store. An attacker could easily

 predict the fingerprint of content likely to be posted, generate alternate
 content that had the same SHA-1 fingerprint and pre-empt

 the signature. For example: an ISO of an open source OS distribution. If I
 get my false content with the same fingerprint into the

 repository first then everyone who downloads that ISO will get my altered
 copy.



 SHA-256 is really needed to make this type of attack infeasible.



 I also think that distributed deduplication works very well with object
 versioning. Your comments on the proposal cited above

 would be great to hear.



 From: openstack-bounces+caitlin.bestler=nexenta@lists.launchpad.net
 [mailto:openstack-bounces+caitlin.bestler=nexenta@lists.launchpad.net]
 On Behalf Of Paulo Ricardo Motta Gomes
 Sent: Thursday, March 08, 2012 1:19 PM
 To: openstack@lists.launchpad.net


 Subject: [Openstack] Enabling data deduplication on Swift



 Hello everyone,



 I'm a student of the European Master in Distributed Computing (EMDC)
 currently working on my master thesis on distributed content-addressable
 storage/deduplication.



 I'm happy to announce I will be contributing the outcome of my thesis work
 to OpenStack by enabling both object-level and block-level deduplication
 functionality on Swift
 (https://answers.launchpad.net/swift/+question/156862).



 I have written a detailed blog post where I describe the initial
 architecture of my
 solution: http://paulormg.com/2012/03/05/enabling-deduplication-in-a-distributed-object-storage/



 Feedback from the OpenStack/Swift community would be very appreciated.



 Cheers,



 Paulo



 --
 European Master in Distributed Computing - www.kth.se/emdc
 Royal Institute of Technology - KTH

 Instituto Superior Técnico - IST

 http://paulormg.com


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Netstack] Interaction between nova and melange : ip fixed not found

2012-02-29 Thread andi abes
2012/2/29 Dan Wendlandt d...@nicira.com:


 2012/2/29 Jérôme Gallard jeronimo...@gmail.com

 Hi Jason,

 Thank you very much for your answer.
 The problem about the wrong ip address is solved now! Perhaps this
 octect should be excluded automatically by nova at the network
 creation time?


 I agree that it seems reasonable to have the default exclude the .0
 address.


Agree that it would be nice if Melange were smart enough to exclude the
network and broadcast addresses... but excluding .0 might not always
be accurate - e.g. when subnetting, the network address isn't always .0, so
melange would still allocate network addresses.
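
To illustrate the subnetting point with a quick sketch (Python stdlib, not
Melange code): which addresses are the network and broadcast depends on the
prefix, and an ordinary host address can legitimately end in .0.

import ipaddress

for cidr in ("172.16.0.0/24", "172.16.0.128/25", "172.16.0.0/23"):
    net = ipaddress.ip_network(cidr)
    print(cidr, "network:", net.network_address,
          "broadcast:", net.broadcast_address)
# 172.16.0.0/24   -> network 172.16.0.0,   broadcast 172.16.0.255
# 172.16.0.128/25 -> network 172.16.0.128, broadcast 172.16.0.255
# 172.16.0.0/23   -> network 172.16.0.0,   broadcast 172.16.1.255
#   (so 172.16.1.0 is an ordinary, assignable host address in the /23)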



 Regarding the other problem about nova/melange, in fact, I create all
 my networks with the nova-manage command:
 nova-manage network create --label=public
 --project_id=def761d251814aa8a10a1e268206f02d
 --fixed_range_v4=172.16.0.0/24 --priority=0 --gateway=172.16.0.1
 But it seems that the nova.fixed_ips table is not being populated.


 When using melange, the nova DB is not used to store IP address allocations.
  They are stored in Melange.  We allow network create using nova-manage
 purely for backward compatibility.  The underlying implementation is totally
 different, with Nova effectively acting as a client to proxy calls to
 Quantum + Melange.  Hope that helps.

 Dan




 Thanks again,
 Jérôme

 On Tue, Feb 28, 2012 at 16:31, Jason Kölker jkoel...@rackspace.com
 wrote:
  On Tue, 2012-02-28 at 11:52 +0100, Jérôme Gallard wrote:
  Hi all,
 
  I use the trunk version of Nova, Quantum (with the OVS plugin) and
  Melange.
  I created networks, everything seems to be right.
 
  I have two questions :
  - the first VM I boot takes always a wrong IP address (for instance
  172.16.0.0). However, when I boot a second VM, this one takes a good
  IP (for instance 172.16.0.2). Do you know why this can happened ?
 
  The default melange policy allows assignment of the network address and
  synthesizes a gateway address (if one is not specified). It will not hand
  out the gateway address. The fix is to create an IP policy that
  restricts octet 0. I think the syntax is something like
 
  `melange policy create -t {tennant} name={block_name}
  desc={policy_name}` (This should return the policy_id for the next
  command)
 
  `melange unusable_ip_octet create -t {tennant} policy_id={policy_id}
  octect=0`
 
  `melange ip_block update -t {tennant} id={block_id}
  policy_id={policy_id}`
 
 
  - I have an error regarding a fixed IP not found. Effectively, when I
  check the nova database, the fixed_ip table is empty, but as I am using
  quantum and melange, their tables seem to be nicely filled. Do you
  have an idea about this issue?
  This is a copy/paste of the error:
  2012-02-28 10:45:53 DEBUG nova.rpc.common [-] received
  {u'_context_roles': [u'admin'], u'_context_request_id':
  u'req-461788a6-3570-4fa9-8620-6705eb69243c', u'_context_read_deleted': u'no',
  u'args': {u'address': u'172.16.0.2'}, u'_context_auth_token': None,
  u'_context_strategy': u'noauth', u'_context_is_admin': True,
  u'_context_project_id': None, u'_context_timestamp':
  u'2012-02-28T09:45:53.484445', u'_context_user_id': None, u'method':
  u'lease_fixed_ip', u'_context_remote_address': None} from (pid=8844)
  _safe_log /usr/local/src/nova/nova/rpc/common.py:144
  2012-02-28 10:45:53 DEBUG nova.rpc.common
  [req-461788a6-3570-4fa9-8620-6705eb69243c None None] unpacked context:
  {'request_id': u'req-461788a6-3570-4fa9-8620-6705eb69243c', 'user_id': None,
  'roles': [u'admin'], 'timestamp': '2012-02-28T09:45:53.484445',
  'is_admin': True, 'auth_token': None, 'project_id': None,
  'remote_address': None, 'read_deleted': u'no', 'strategy': u'noauth'}
  from (pid=8844) unpack_context /usr/local/src/nova/nova/rpc/amqp.py:187
  2012-02-28 10:45:53 DEBUG nova.network.manager
  [req-461788a6-3570-4fa9-8620-6705eb69243c None None] Leased IP |172.16.0.2|
  from (pid=8844) lease_fixed_ip /usr/local/src/nova/nova/network/manager.py:1186
  2012-02-28 10:45:53 ERROR nova.rpc.common [-] Exception during message handling
  (nova.rpc.common): TRACE: Traceback (most recent call last):
  (nova.rpc.common): TRACE:   File /usr/local/src/nova/nova/rpc/amqp.py, line 250, in _process_data
  (nova.rpc.common): TRACE:     rval = node_func(context=ctxt, **node_args)
  (nova.rpc.common): TRACE:   File /usr/local/src/nova/nova/network/manager.py, line 1187, in lease_fixed_ip
  (nova.rpc.common): TRACE:     fixed_ip = self.db.fixed_ip_get_by_address(context, address)
  (nova.rpc.common): TRACE:   File /usr/local/src/nova/nova/db/api.py, line 473, in fixed_ip_get_by_address
  (nova.rpc.common): TRACE:     return IMPL.fixed_ip_get_by_address(context, address)
  (nova.rpc.common): TRACE:   File /usr/local/src/nova/nova/db/sqlalchemy/api.py, line 119, in wrapper
  (nova.rpc.common): TRACE:     return f(*args, **kwargs)
  (nova.rpc.common): TRACE:   File /usr/local/src/nova/nova/db/sqlalchemy/api.py, line 1131, in

Re: [Openstack] [CHEF] Aligning Cookbook Efforts

2012-02-28 Thread andi abes
On Tue, Feb 28, 2012 at 2:44 PM, Jay Pipes jaypi...@gmail.com wrote:
 cc'ing list, since it's a great question and good follow-up conversation to
 have...


 On 02/28/2012 02:32 PM, andi abes wrote:

 Interesting. Would you mind doing a code review on Maru Newby's Swift All in
 One cookbook? Could sure use your experience :)

 https://review.openstack.org/#change,3613

 Might be a good dip into the Gerrit world for ya, too ;)


 I think I might sound like a scratched record here, since the question
 was asked before: is the goal of these cookbooks to support an SAIO
 env, or a more multi-node version?

 An SAIO would be nice for devs and newbies, but it hides some of the
 complexities. A multi-node version would be more complex (and
 probably controversial around tradeoffs) but potentially have more
 value for users...

 (I obviously have my opinion, but curious as to where other folks are
 trying to drive this effort)


 I think that both are incredibly useful. With my as-yet-still-limited
 understanding of Chef, it would be possible to have both an SAIO and a
 multi-node Swift cookbook in the same repo, no? Or some combination of
 cookbooks and roles that would allow a node to install SAIO or a piece of
 the Swift multi-node puzzle?

yes and no

One neat feature of chef is its search capability - being able to
query the server for where other pieces of the puzzle are located, which
makes it very convenient for multi-node operations.
E.g. for swift there are a few cookbooks floating around whereby the
rings are constructed by locating all the servers that are tagged as
storage nodes (i.e. they have the appropriate role(s) assigned to
them).
While search is a neat capability, it does make the recipes more
complex (recipes are the parts of cookbooks that express the operations to
be performed). So if the intent is to have the cookbooks serve as a
newbie exemplar, showcasing openstack - it's probably not a good idea.

Other complexities arise when you start dealing with machine variability,
which can be easily hidden in SAIO. Using swift as an example - the number
and device names of disks. In SAIO, you just create a bunch of
loopback devices... (at least the sample deployment docs do). In a
more (dare I say) production environment, you'd want to discover
what disks are available, and use the appropriate ones.
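
As a rough illustration of that discovery step (a standalone sketch reading
/sys/block on Linux; not taken from any of the cookbooks being discussed):

import os

def candidate_disks(sys_block="/sys/block"):
    # List block devices that look like real disks; skip loop/ram/dm/cdrom devices.
    disks = {}
    for dev in sorted(os.listdir(sys_block)):
        if dev.startswith(("loop", "ram", "dm-", "sr")):
            continue
        with open(os.path.join(sys_block, dev, "size")) as f:
            sectors = int(f.read().strip())
        disks[dev] = sectors * 512  # the size file counts 512-byte sectors
    return disks

if __name__ == "__main__":
    for dev, size_bytes in candidate_disks().items():
        print("/dev/%s  %.1f GB" % (dev, size_bytes / 1e9))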

That said - there could be recipes for both SAIO and multi-node. Users
would then have to combine and apply the right set. But maybe that's
not the full question... maybe a more complete question would be:

is this effort geared towards producing deployments that can be
considered production ready?




 -jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Essex-4 Installfest March 8, 2012

2012-02-22 Thread andi abes
Essex-4 is almost here, and once it comes out, you’d probably want to
install it.


A bunch of folks from across the country and across Dell, Rackspace,
OpsCode, Nicira, Nokia and more, will be getting together on an
IRC/Skype (and in person) to hash out deployment issues to get the
major components rocking.

Having folks with a wide background and reach focusing on deploying
the stable E-4 bits could lower the hurdles we’ll undoubtedly
encounter.


For full disclosure - my team’s focus will be Chef and Crowbar, but
given past experience, many of the problems encountered are not
specific to your favorite deployment method...

Checkout the wiki page here [1] for pointers and contacts that will be
live on March 8, 9AM EST … until we’re done, or starving.

For Boston face-to-face info, see [2] for details (to be
announced soon). For the Austin area, check out [3].

If you're going to be hacking at the same time, let's connect!



[1] https://github.com/dellcloudedge/crowbar/wiki/Install-Fest-Prep
[2] http://www.meetup.com/Openstack-Boston/
[3] http://www.meetup.com/OpenStack-Austin/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Remove Zones code - FFE

2012-02-20 Thread andi abes
On Sun, Feb 19, 2012 at 8:57 PM, Ed Leafe ed.le...@rackspace.com wrote:

 On Feb 19, 2012, at 12:53 PM, Mark Washenberger wrote:

  For this reason, whatever name we choose I would hope we prefix it with
 compute- (i.e. compute-zone or compute-cell) so that we aren't letting
 language trick us out of some of our better implementation options, such as
 allowing deployers to scale compute, volume, network, and api resources
 separately.

 I certainly don't want to preclude implementation options, and
 like the clarification that this is an issue of scaling compute, but I have
 two problems with the use of a prefix. First, it indirectly implies that
 scaling other entities would be done in the same manner: e.g., that an
 api-cell would have its resources independently deployed and with the
 inter-cell communication design as compute-cell. If the desired outcome is
 that we encourage different entities to implement their scaling
 independently and in the best manner for that entity, having a common name
 would seem to encourage the opposite.

Second, it just seems cumbersome. We should ensure that
 documentation about cells is clear that this is a way of scaling compute,
 but for referring to them in code and in discussion, a simple name like
 'zone' or 'cell' is simpler and cleaner.


 I've seen folks confuse swift zones with nova zones (the old version). The
confusion led to assuming that swift zones are optional, and that nova
zones somehow solve HA problems (talk about a mishmash of concepts).

Adding clarity for *users* around the fact that cells are different in the
context of nova compute vs. any other context where the same term is used is
a net plus, imho. So I agree that in the code we could use the short
version.
But I'd really like user/deployer-facing locations (e.g. docs, flag files,
API calls etc.) to be more explicit about what variant of cell is being
referred to, and use compute-cell.




 -- Ed Leafe


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Object Storage Swift on rhel6.0

2012-02-20 Thread andi abes
You can find details on configuring swift here:
http://swift.openstack.org/deployment_guide.html

To get packages for RHEL, it seems the Fedora project packages should work:
http://rpmfind.net/linux/rpm2html/search.php?query=openstack-swiftsubmit=Search+...system=arch=


hth,
a

On Sun, Feb 19, 2012 at 11:23 PM, Sudhakar Maiya sma...@gmail.com wrote:

 Hi,
 i would like to configure object storage swift on rhel6.0..

 can some one provide me the steps to install package/configure on each
 node.

 Regards
 Sudhakar Maiya.P

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [CHEF] Aligning Cookbook Efforts

2012-02-16 Thread andi abes
On Wed, Feb 8, 2012 at 11:42 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 02/08/2012 01:40 AM, Monty Taylor wrote:

 On 02/07/2012 08:32 PM, Jay Pipes wrote:

 Well, in my original email I proposed using the NTT PF Lab branch point
 for the stable/diablo branch of the upstream chef repos. If we can get a
 casual consensus from folks that this is OK, I will go ahead and push
 that to Gerrit. Please +1 if you are cool with that. This will allow us
 to have a branch of the upstream cookbooks that aligns with the core
 projects.


 Ping me when you want to do that... jeblair and I can handle getting the
 branch in once you have it.


 I'd like to do it today, please. Find me on IRC and we'll get this done.

 Best,

 -jay


Did this branch land somewhere?


As a side note - it seems that the NTT cookbooks don't contain swift support.
The swift cookbook in crowbar functions standalone, and has a relatively
heavily annotated default attribute file as a mini howto guide.



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone Swift: swiftauth tenant namespace collisions?

2012-02-10 Thread andi abes


  To summarize the intent:

- we add a string UID to the database schema
- For deployments with the integer ID, we copy that into the UID field
- For deployments where the ID is a string (cactus and pre-Diablo) we
copy that into the UID field
- We use the UID field in the URLs displayed by Keystone

 That will allow migrations into Keystone and you can decide in your data
 import what value to make the ID that shows up as the REST URL.



Did this code land somewhere? Any chance it can be back ported to
diablo/stable?




   From: Judd Maltin openst...@newgoliath.com
 Date: Thu, 1 Dec 2011 16:32:00 -0500
 To: Ziad Sawalha ziad.sawa...@rackspace.com
 Subject: Re: [Openstack] Keystone  Swift: swiftauth tenant namespace
 collisions?

  Hi Ziad,

 The current authentication systems for Swift use a hash as the
 tenant_id.  I saw that keystone is using a sequential integer from the DB
 as the tenant_id.  This doesn't allow Keystone to match an existing Swift
 tenant_id (called account in Swift).  This prevents Keystone from just
 taking over for swauth or tempauth.

 If the definition of tenant_id is changed in Keystone to be configurable
 by the administrator, or at least NOT be a seq from the DB, then migration
 from swauth to keystone is possible, and may even be automated.

 Looking forward to your thoughts,
 -judd

 On Sun, Nov 27, 2011 at 12:51 AM, Ziad Sawalha 
 ziad.sawa...@rackspace.com wrote:

  Hi Judd –

  Account in swift is the same thing as tenant in Keystone.

  Is the problem that you are specifying account 'name' instead of the
 ID?

  I'm asking because we have had a number of users having problems
 migrating into Keystone after we switched to ID/Name for tenants and users
 and we are considering a schema change that would allow for simpler
 migration into Keystone and support tenant ID and name being the same.

  I'm not sure that would help you, but if it would we would like to get
 your input on the design we are considering.

   From: Judd Maltin openst...@newgoliath.com
 Date: Fri, 25 Nov 2011 11:31:50 -0500
 To: Rouault, Jason (Cloud Services) jason.roua...@hp.com
 Cc: John Dickinson m...@not.mn, Ziad Sawalha ziad.sawa...@rackspace.com,
 openstack@lists.launchpad.net openstack@lists.launchpad.net

 Subject: Re: [Openstack] Keystone  Swift: swiftauth tenant namespace
 collisions?

  Thanks Jason,

 I am indeed working off stable/diablo.  It looks like I'm going to have
 to use mod_proxy and mod_rewrite to migrate my users from
 AUTH_account_name to AUTH_tenant_id. Any other ideas for this sort of
 migration?

 -judd




 On Mon, Nov 21, 2011 at 9:42 AM, Rouault, Jason (Cloud Services) 
 jason.roua...@hp.com wrote:

 Yes, I am aware of the new swift code for Keystone, but the question
 came
 from Judd who may be working off of Diablo-stable.

 -Original Message-
 From: John Dickinson [mailto:m...@not.mn]
 Sent: Sunday, November 20, 2011 8:59 AM
 To: Rouault, Jason (Cloud Services)
 Cc: Ziad Sawalha; Judd Maltin; openstack@lists.launchpad.net
 Subject: Re: [Openstack] Keystone  Swift: swiftauth tenant namespace
 collisions?

 I don't think that is exactly right, but my understanding of tenants vs
 accounts vs users may be lacking. Nonetheless, auth v2.0 support was
 added
 to the swift cli tool by Chmouel recently. Have you tried with the code
 in
 swift's trunk (also the 1.4.4 release scheduled for Tuesday)?

 --John


 On Nov 20, 2011, at 8:55 AM, Rouault, Jason (Cloud Services) wrote:

  Ziad,
 
  I think the problem is that the 'swift' command scopes a user to an
 account(tenant) via the concatenation of account:username when providing
 credentials for a valid token.  With Keystone and /v2.0 auth the
 tenantId
 (or tenantName) are passed in the body of the request.
 
  Jason
 
  From: openstack-bounces+jason.rouault=hp@lists.launchpad.net
 [mailto:openstack-bounces+jason.rouault=hp@lists.launchpad.net] On
 Behalf Of Ziad Sawalha
  Sent: Friday, November 18, 2011 2:10 PM
  To: Judd Maltin; openstack@lists.launchpad.net
  Subject: Re: [Openstack] Keystone  Swift: swiftauth tenant namespace
 collisions?
 
  Hi Judd - I'm not sure I understand. Can you give me an example of two
 tenants, their usernames, and the endpoints you would like them to have
 in
 Keystone?
 
 
  From: Judd Maltin j...@newgoliath.com
  Date: Fri, 18 Nov 2011 15:22:09 -0500
  To: openstack@lists.launchpad.net
  Subject: [Openstack] Keystone  Swift: swiftauth tenant namespace
 collisions?
 
  In keystone auth for swift (swiftauth), is there a way to eliminate
 namespace conflicts across tenants?
 
  i.e. in tempauth we use account:username password
 
  curl -k  -v -H 'X-Auth-User: test:tester' -H 'X-Auth-Token: testing'
 http://127.0.0.1:8080/auth/v1.0
 
  in swiftauth we use username password:
  $ swift -A http://127.0.0.1:5000/v1.0 -U joeuser -K secrete stat -v
  StorageURL: http://127.0.0.1:/v1/AUTH_1234
  Auth Token: 74ce1b05-e839-43b7-bd76-85ef178726c3
  Account: AUTH_12
 

Re: [Openstack] mailing list etiquette

2012-02-08 Thread andi abes
On Wed, Feb 8, 2012 at 7:59 AM, Chmouel Boudjnah chmo...@openstack.org wrote:

 Mark McLoughlin mar...@redhat.com writes:

  I wrote this some time ago:
https://fedorahosted.org/rhevm-api/wiki/Email_Guidelines
  If it's helpful, I'm happy to move it across to the OpenStack wiki

 +1



 +1 to that!

 Chmouel.

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [CHEF] Aligning Cookbook Efforts

2012-02-07 Thread andi abes
apologies for possible duplicates - some replies last night were from the
wrong email account (and didn't make it to the list)

On Mon, Feb 6, 2012 at 9:37 PM, Jesse Andrews anotherje...@gmail.com wrote:

 I know that the RCB deploy team works with the Crowbar team on chef
 recipes for that project.

 Right. The results of those efforts are here:
https://github.com/dellcloudedge/crowbar/tree/openstack-os-build/barclamps
(these are git submodules).
There are barclamps for swift, keystone, nova and horizon. In each
barclamp you'll find a chef directory with sub-directories for cookbooks
and databags.

Most are designed to work both within and outside of crowbar (by using
default attributes to replace values otherwise set by crowbar).



 Regarding the github.com/ansolabs & github.com/rcb recipes - I'll have
 to delegate to Vishy who worked on those.




 Jesse

 On Mon, Feb 6, 2012 at 6:07 PM, Jay Pipes jaypi...@gmail.com wrote:
  Hi Stackers,
 
  tl;dr
  -
 
  There are myriad Chef cookbooks out there in the ecosystem and locked
 up
  behind various company firewalls. It would be awesome if we could agree
 to:
 
  * Align to a single origin repository for OpenStack cookbooks
  * Consolidate OpenStack Chef-based deployment experience into a single
  knowledge base
  * Have branches on the origin OpenStack cookbooks repository that align
 with
  core OpenStack projects
  * Automate the validation and testing of these cookbooks on multiple
  supported versions of the OpenStack code base
 
  Details
  ---
 
  Current State of Forks
  ==
 
  Matt Ray and I tried to outline the current state of the various
 OpenStack
  Chef cookbooks this past Thursday, and we came up with the following
 state
  of affairs:
 
  ** The official OpenStack Chef cookbooks **
 
  https://github.com/openstack/openstack-chef
 
  These chef cookbooks are the ones maintained mostly by Dan Prince and
 Brian
  Lamar and these are the cookbooks used by the SmokeStack project. The
  cookbooks contained in the above repo can install all the core OpenStack
  projects with the exception of Swift and Horizon.
 
  This repo is controlled by the Gerrit instance at review.openstack.org just
  like other core OpenStack projects.
 
  However, these cookbooks DO NOT currently have a stable/diablo branch --
  they are updated when the development trunks of any OpenStack project
 merges
  a commit that requires deployment or configuration-related changes to
 their
  associated cookbook.
 
  Important note: it's easy for Dan and Brian to know when updates to these
  cookbooks are necessary -- SmokeStack will bomb out if a
  deployment-affecting configuration change hits a core project trunk :)
 
  These cookbooks are the ONLY cookbooks that contain stuff for deploying
 with
  XenServer, AFAICT.
 
  ** NTT PF Lab Diablo Chef cookbooks **
 
  https://github.com/ntt-pf-lab/openstack-chef/
 
  So, NTT PF Lab forked the upstream Chef cookbooks back in Nov 11, 2011,
  because they needed a set of Chef cookbooks for OpenStack that functioned
  for the Diablo code base.
 
  While Nov 11, 2011, is not the *exact* date of the Diablo release, these
  cookbooks do in fact work for a Diablo install -- Nati Ueno is using them
  for the FreeCloud deployment so we know they work...
 
  ** OpsCode OpenStack Chef Cookbooks **
 
  Matt Ray from OpsCode created a set of cookbooks for OpenStack for the
  Cactus release of OpenStack:
 
  https://github.com/mattray/openstack-cookbooks
  http://wiki.opscode.com/display/chef/Deploying+OpenStack+with+Chef
 
  These cookbooks were forked from the Anso Labs' original OpenStack
 cookbooks
  from the Bexar release and were the basis for the Chef work that Dell did
  for Crowbar. Crowbar was originally based on Cactus, and according to
 Matt,
  the repositories of OpenStack cookbooks that OpsCode houses internally
 and
  uses most often are Cactus-based cookbooks. (Matt, please correct me if
 I am
  wrong here...)
 
  ** Rackspace CloudBuilders OpenStack Chef Cookbooks **
 
  The RCB team also has a repository of OpenStack Chef cookbooks:
 
  https://github.com/cloudbuilders/openstack-cookbooks
 
  Now, GitHub *says* that these cookbooks were forked from the official
  upstream cookbooks, but I do not think that is correct. Looking at this
  repo, I believe that this repo was *actually* forked from the Anso Labs
  OpenStack Chef Cookbooks, as the list of cookbooks is virtually
 identical.
 
  ** Anso Labs OpenStack Chef Cookbooks **
 
  These older cookbooks are in this repo:
 
  https://github.com/ansolabs/openstack-cookbooks/tree/master/cookbooks
 
  Interestingly, this repo DOES contain a cookbook for Swift.
 
  Current State of Documentation
  ==
 
  Documentation for best practices on using Chef for your OpenStack
  deployments is, well, a bit scattered. Matt Ray has some good
 information on
  the README on his cookbook repo and the OpsCode wiki:
 
  

[Openstack] mailing list etiquette

2012-02-07 Thread andi abes
I've seen a few folks apologizing for top-posts and a few pokes in some
threads about folks with less than intelligent email clients.
Which leads me to ask: are there any pointers to best practices on the
mailing list?
(replying to the right message in a thread, ideally inline with the context
-  sounds like motherhood and apple pie. anything more detailed ?)
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] mailing list etiquette

2012-02-07 Thread andi abes
On Tue, Feb 7, 2012 at 1:33 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 02/07/2012 01:08 PM, andi abes wrote:

 I've seen a few folks apologizing for top-posts and a few pokes in
 some threads about folks with less than intelligent email clients.
 Which leads me to ask: are there any pointers to best practices on the
 mailing list?


 Not using HTML email and not using any version of Outlook (or any mail
 client that cannot properly handle inline replies) is a good start IMHO. :)


  (replying to the right message in a thread, ideally inline with the
 context -  sounds like motherhood and apple pie. anything more detailed ?)


 Some good stuff here:

 http://en.wikipedia.org/wiki/Posting_style#Choosing_the_proper_posting_style

 Best,
 -jay


seeing that openstack is getting its own foundation ... should it have its
own style guide for MLs ;)


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift Consistency Guarantees?

2012-01-20 Thread andi abes
I'm finding this thread a bit confusing. You're comparing offered SERVICES
to Software. While some of the details of the software will dictate what's
possible, some are heavily dependent on how you deploy the swift software,
and what kind of deployment decisions you (or your service provider) make.

As an extreme example - if you deploy 1 container server in a highly
available fashion (hardware style), then you probably could get consistent
container listings in the various update-followed-by-read scenarios.
Hosting huge swift installations with such a setup is not realistic - but
that doesn't mean you can't do it.

Similarly, swift offers quite a lot of flexibility in setting the eventual
consistency window sizes (replication frequency, rates and such). So, while
there are theoretical answers to missing replicas, the likelihood of those
occurring depends on your deployment and operational practices employed.
(e.g. how many replicas are made, how quickly are failed nodes/drives fixed
and their content replicated to their replacement etc).
In the Amazon case, much of this is captured in the 11 9's durability
guarantee, or the lower 4 9's guarantee for the reduced redundancy class.

If your approach is from an API perspective, then issues around # of
replicas (which is a deployment parameter) are probably not relevant - if you
trust your provider.

If your approach in this is from a Swift developer / deployer perspective
 - then nvm. keep asking, cause it's much easier to read email than python
;)






On Fri, Jan 20, 2012 at 3:06 PM, Chmouel Boudjnah chmo...@openstack.org wrote:

 As Stephen mentioned, if there is only one replica left Swift would not
 serve it.

 Chmouel.


 On Fri, Jan 20, 2012 at 1:58 PM, Nikolaus Rath nikol...@rath.org wrote:

 Hi,

 Sorry for being so persistent, but I'm still not sure what happens if
 the 2 servers that carry the new replica are down, but the 1 server that
 has the old replica is up. Will GET fail or return the old replica?

 Best,
 Niko

 On 01/20/2012 02:52 PM, Stephen Broeker wrote:
  By default there are 3 replicas.
  A PUT Object will return after 2 replicas are done.
  So if all nodes are up then there are at least 2 replicas.
  If all replica nodes are down, then the GET Object will fail.
 
  On Fri, Jan 20, 2012 at 11:21 AM, Nikolaus Rath nikol...@rath.org
  mailto:nikol...@rath.org wrote:
 
  Hi,
 
  So if an object update has not yet been replicated on all nodes,
 and all
  nodes that have been updated are offline, what will happen? Will
 swift
  recognize this and give me an error, or will it silently return the
  older version?
 
  Thanks,
  Nikolaus
 
 
  On 01/20/2012 02:14 PM, Stephen Broeker wrote:
   If a node is down, then it is ignored.
   That is the whole point about 3 replicas.
  
   On Fri, Jan 20, 2012 at 10:43 AM, Nikolaus Rath 
 nikol...@rath.org
  mailto:nikol...@rath.org
   mailto:nikol...@rath.org mailto:nikol...@rath.org wrote:
  
   Hi,
  
   What happens if one of the nodes is down? Especially if that
  node holds
   the newest copy?
  
   Thanks,
   Nikolaus
  
   On 01/20/2012 12:33 PM, Stephen Broeker wrote:
The X-Newest header can be used by a GET Operation to ensure
  that
   all of the
Storage Nodes (3 by default) are queried for the latest
 copy of
   the Object.
The COPY Object operation already has this functionality.
   
On Fri, Jan 20, 2012 at 9:12 AM, Nikolaus Rath
  nikol...@rath.org mailto:nikol...@rath.org
   mailto:nikol...@rath.org mailto:nikol...@rath.org
mailto:nikol...@rath.org mailto:nikol...@rath.org
  mailto:nikol...@rath.org mailto:nikol...@rath.org wrote:
   
Hi,
   
No one able to further clarify this?
   
Does swift offer there read-after-create consistence
 like
non-us-standard S3? What are the precise syntax and
  semantics of
X-Newest header?
   
Best,
Nikolaus
   
   
On 01/18/2012 10:15 AM, Nikolaus Rath wrote:
 Michael Barton mike-launch...@weirdlooking.com
  mailto:mike-launch...@weirdlooking.com
   mailto:mike-launch...@weirdlooking.com
  mailto:mike-launch...@weirdlooking.com
mailto:mike-launch...@weirdlooking.com
  mailto:mike-launch...@weirdlooking.com
   mailto:mike-launch...@weirdlooking.com
  mailto:mike-launch...@weirdlooking.com writes:
 On Tue, Jan 17, 2012 at 4:55 PM, Nikolaus Rath
   nikol...@rath.org mailto:nikol...@rath.org
  mailto:nikol...@rath.org mailto:nikol...@rath.org
mailto:nikol...@rath.org mailto:nikol...@rath.org
  mailto:nikol...@rath.org mailto:nikol...@rath.org wrote:
 Amazon 

Re: [Openstack] Swift Consistency Guarantees?

2012-01-20 Thread andi abes
nice reply ;)

On Fri, Jan 20, 2012 at 6:35 PM, Pete Zaitcev zait...@redhat.com wrote:

 On Fri, 20 Jan 2012 15:17:32 -0500
 Nikolaus Rath nikol...@rath.org wrote:

  Thanks! So there is no way to reliably get the most-recent version of an
  object under all conditions.

 If you bend the conditions hard enough to hit the CAP theorem, you do.

 -- Pete

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift Consistency Guarantees?

2012-01-20 Thread andi abes
You would need to have the following occur, to make your scenario plausible:

* you write the object, which places it on a majority of the replica nodes
(i.e. 2 out of 3)
* replication is slowly churning away, but doesn't quite catch up
* both the nodes that have the updated data fail simultaneously, before
replication catches up the remaining node.

Swift chooses the A & P from CAP. If the swift proxy were to wait till all
replicas got updated before it returned a reply, it would be choosing the C
but probably dropping the P and maybe the A (depending on how it would
handle a failure).

So yes.. you are hitting CAP on the head...
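
For reference, the majority arithmetic in the scenario above is simply (a
sketch of the idea, not the proxy's actual code):

def write_quorum(replica_count):
    # Minimum successful backend responses before a write is acknowledged.
    return replica_count // 2 + 1

assert write_quorum(3) == 2   # 2 of 3 replicas is enough to acknowledge a PUT
assert write_quorum(5) == 3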



On Fri, Jan 20, 2012 at 6:56 PM, Nikolaus Rath nikol...@rath.org wrote:

 On 01/20/2012 06:35 PM, Pete Zaitcev wrote:
  On Fri, 20 Jan 2012 15:17:32 -0500
  Nikolaus Rath nikol...@rath.org wrote:
 
  Thanks! So there is no way to reliably get the most-recent version of an
  object under all conditions.
 
  If you bend the conditions hard enough to hit the CAP theorem, you do.

 From what I have heard so far, it seems to be sufficient if all servers
 holding the newest replica are down for me to get old data. I don't
 think that this condition is already hitting the CAP theorem, or is it?


 Best,

   -Nikolaus

 --
  »Time flies like an arrow, fruit flies like a Banana.«

  PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Using Swift S3 API with Keystone

2012-01-03 Thread andi abes
See http://wiki.openstack.org/GerritWorkflow


On Jan 3, 2012, at 8:11, Akira Yoshiyama akirayoshiy...@gmail.com wrote:

Hi,

I hope to merge my patches to upstream. What should I do?

Thank you,
Akira Yoshiyama
2012/01/03 21:32 adrian_f_sm...@dell.com:

 Is it possible to use Swift’s S3 API if Keystone is being used for auth?

 Looking back at this thread (
 https://lists.launchpad.net/openstack/msg05203.html) it appears Akira
 Yoshiyama has a patch to allow this functionality but it’s not been merged
 (submitted?) yet. Is there an alternative?

 Thanks

 Adrian

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

 ___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] swift enforcing ssl?

2011-12-27 Thread andi abes
Does the swift proxy enforce SSL connections if it's configured with a
cert/key file? Or is it assumed that there's an external entity performing
that?
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Default User name and Password for Horizon i.e., Openstack-dashboard that i have been installed using [DevStack] script

2011-12-15 Thread andi abes
reading the script..

you either have it in the localrc file as ADMIN_PASSWORD, or you were
prompted during install to type it in.

On Thu, Dec 15, 2011 at 1:38 PM, sn alaya...@gmail.com wrote:

 and also the default username of openstack that has been installed using
 devstack script...


 On Thu, Dec 15, 2011 at 11:58 PM, sn alaya...@gmail.com wrote:


 hi experts...
 i have just now installed openstack successfully, without any
 errors, with the scripts mentioned on devstack.org, but i could not
 log in to my dashboard... does anyone know the default password for
 the openstack dashboard that has been installed using devstack?


 thanks
 --
 I am on Twitter. Follow Me @SanjibNarzary http://www.twitter.com/SanjibNarzary





 --
 I am on Twitter. Follow Me @SanjibNarzary http://www.twitter.com/SanjibNarzary



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] swfit / keystone auth

2011-12-13 Thread andi abes
I'm getting really funny ( :( ) results trying to get swift to work w/
keystone.

A few questions (about keystone 2012.1)
a) does the  swift middleware work with v1.0 or 2.0 auth?
b) are folks using swift-keystone2 or the middleware bundled with keystone
(auth_token + swift_auth).
c) when trying to use auth_token and swift_auth, I see the keystone log
below trying to stat an account. This request fails with unauthorized. What
is a bit weird are the last 2 get operations - one returning 200 and the
other 401 for the same token
( 889783596547 - is the admin token.  c38c23cd-4280-4f32-9c1e-eca483a55c47
is the user's token that I get when authenticating with curl directly).

This was triggered with:
 swift -A http://192.168.124.82:5000/v2.0/ -V 2.0 -U openstack:user -K
password stat
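
For comparison while debugging, this is roughly the v2.0 token request the CLI
issues under the hood (a sketch reusing the endpoint, tenant and user from the
command above; the exact response shape varied a bit between keystone
releases):

import json
import urllib.request

payload = {"auth": {"tenantName": "openstack",
                    "passwordCredentials": {"username": "user",
                                            "password": "password"}}}
req = urllib.request.Request(
    "http://192.168.124.82:5000/v2.0/tokens",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"})
resp = json.load(urllib.request.urlopen(req))
print("token:", resp["access"]["token"]["id"])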


Any thoughts will be highly appreciated.



sqlalchemy.engine.base.Engine.0x...30d0: INFO ('889783596547',)
sqlalchemy.engine.base.Engine.0x...30d0: INFO SELECT users.id AS
users_id, users.name AS users_name, users.password AS users_password,
users.email AS users_email, users.enabled AS users_enabled, users.tenant_id
AS users_tenant_id
FROM users
WHERE users.id = %s
 LIMIT 0, 1
sqlalchemy.engine.base.Engine.0x...30d0: INFO (1L,)
sqlalchemy.engine.base.Engine.0x...30d0: INFO SELECT tenants.id AS
tenants_id, tenants.name AS tenants_name, tenants.`desc` AS tenants_desc,
tenants.enabled AS tenants_enabled
FROM tenants
WHERE tenants.id = %s
 LIMIT 0, 1
sqlalchemy.engine.base.Engine.0x...30d0: INFO (1L,)
sqlalchemy.engine.base.Engine.0x...30d0: INFO SELECT tenants.id AS
tenants_id, tenants.name AS tenants_name, tenants.`desc` AS tenants_desc,
tenants.enabled AS tenants_enabled
FROM tenants
WHERE tenants.id = %s
 LIMIT 0, 1
sqlalchemy.engine.base.Engine.0x...30d0: INFO (1L,)
sqlalchemy.engine.base.Engine.0x...30d0: INFO SELECT roles.id AS
roles_id, roles.name AS roles_name, roles.`desc` AS roles_desc,
roles.service_id AS roles_service_id
FROM roles
WHERE roles.name = %s
 LIMIT 0, 1
sqlalchemy.engine.base.Engine.0x...30d0: INFO ('KeystoneServiceAdmin',)
keystone.logic.service: WARNING  No service admin role is defined.
sqlalchemy.engine.base.Engine.0x...30d0: INFO SELECT user_roles.id AS
user_roles_id, user_roles.user_id AS user_roles_user_id, user_roles.role_id
AS user_roles_role_id, user_roles.tenant_id AS user_roles_tenant_id
FROM user_roles
WHERE user_roles.user_id = %s AND tenant_id is null
sqlalchemy.engine.base.Engine.0x...30d0: INFO (1L,)
sqlalchemy.engine.base.Engine.0x...30d0: INFO SELECT token.id AS
token_id, token.user_id AS token_user_id, token.tenant_id AS
token_tenant_id, token.expires AS token_expires
FROM token
WHERE token.id = %s
 LIMIT 0, 1
sqlalchemy.engine.base.Engine.0x...30d0: INFO
(u'c38c23cd-4280-4f32-9c1e-eca483a55c47',)
sqlalchemy.engine.base.Engine.0x...30d0: INFO SELECT users.id AS
users_id, users.name AS users_name, users.password AS users_password,
users.email AS users_email, users.enabled AS users_enabled, users.tenant_id
AS users_tenant_id
FROM users
WHERE users.id = %s
 LIMIT 0, 1
sqlalchemy.engine.base.Engine.0x...30d0: INFO (3L,)
sqlalchemy.engine.base.Engine.0x...30d0: INFO SELECT user_roles.id AS
user_roles_id, user_roles.user_id AS user_roles_user_id, user_roles.role_id
AS user_roles_role_id, user_roles.tenant_id AS user_roles_tenant_id
FROM user_roles
WHERE user_roles.user_id = %s AND tenant_id is null
sqlalchemy.engine.base.Engine.0x...30d0: INFO (3L,)
eventlet.wsgi.server: DEBUG192.168.124.83 - - [13/Dec/2011 09:07:51]
GET /v2.0/tokens/c38c23cd-4280-4f32-9c1e-eca483a55c47 HTTP/1.1 200 286
0.019196
sqlalchemy.engine.base.Engine.0x...30d0: INFO SELECT token.id AS
token_id, token.user_id AS token_user_id, token.tenant_id AS
token_tenant_id, token.expires AS token_expires
FROM token
WHERE token.id = %s
 LIMIT 0, 1
sqlalchemy.engine.base.Engine.0x...30d0: INFO ('None',)
eventlet.wsgi.server: DEBUG192.168.124.83 - - [13/Dec/2011 09:07:51]
GET /v2.0/tokens/c38c23cd-4280-4f32-9c1e-eca483a55c47 HTTP/1.1 401 213
0.003294
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift - set preferred proxy/datanodes (cross datacenter schema)

2011-12-06 Thread andi abes
You could try to use the container sync added in 1.4.4.

The scheme would be to set up two separate clusters, one in each data center.
Obviously requests will be satisfied locally.
You will also set up your containers identically, and configure them to
sync, to make sure data is available in both DC's.
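
A minimal sketch of setting up that sync relationship from a client (using
python-swiftclient; the container name, key and cluster URLs are made-up
placeholders, and each cluster also has to allow the other in its
allowed_sync_hosts setting):

from swiftclient import client  # python-swiftclient

def make_sync_pair(url_a, token_a, url_b, token_b, container, key):
    # Point the container in each cluster at its twin in the other cluster.
    client.put_container(url_a, token_a, container, headers={
        "X-Container-Sync-To": "%s/%s" % (url_b, container),
        "X-Container-Sync-Key": key})
    client.put_container(url_b, token_b, container, headers={
        "X-Container-Sync-To": "%s/%s" % (url_a, container),
        "X-Container-Sync-Key": key})

# make_sync_pair("https://dc1.example.com/v1/AUTH_acct", token_dc1,
#                "https://dc2.example.com/v1/AUTH_acct", token_dc2,
#                "images", "shared-secret")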

You might want to consider how many replicas you want in each data center,
and how you'd recover from failures, rather than just setting up 2 DC x 3-5
replicas for each object.

a.


On Tue, Dec 6, 2011 at 1:49 PM, Caitlin Bestler caitlin.best...@nexenta.com
 wrote:

  Lendro Reox asked:

  We're replicating our datacenter in another location (something like
 Amazon's east and west coast), thinking about our applications and
 our use of Swift: is there any way that we can set up weights for our
 datanodes so that if a request enters via, for example, DATACENTER 1,
 then we want the main copy of the data being written on a datanode in
 the SAME datacenter, or read from the same datacenter, so that
 when we want to read it and it comes from a proxy node of the same
 datacenter we don't add the delay of the latency between the two datacenters.
 The motto is: if a request to write or read enters via DATACENTER 1, then
 it is served via proxy nodes/datanodes located in DATACENTER 1, and
 then the replicas get copied across zones over both datacenters.

  Routing the request to specific proxy nodes is easy, but I don't know if
 swift has a way to manage this internally too for the datanodes.

 I don’t see how you would accomplish that with the current Swift
 infrastructure.

 An object is hashed to a partition, and the ring determines where replicas
 of that partition are stored.

 What you seem to be suggesting is that when an object is created in region
 X it should be assigned to a partition that is primarily stored in
 region X, while if the same object had been created in region Y it would be
 assigned to a partition that is primarily stored in region Y.

 The problem is that “where this object was first created” is not a
 contributor to the hash algorithm, nor could it be, since there is no way
 for someone trying to get that object to know where it was first created.

 What I think you are looking for is a solution where you have *two*
 rings, DATACENTER-WEST and DATACENTER-EAST. Both of these rings would have
 an adequate number of replicas to function independently, but would
 asynchronously update each other to provide eventual consistency.

 That would use more disk space, but avoids making all updates wait for the
 data to be updated at each site.

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift - set preferred proxy/datanodes (cross datacenter schema)

2011-12-06 Thread andi abes
sorry, should have included the link:
http://swift.openstack.org/overview_container_sync.html


On Tue, Dec 6, 2011 at 2:49 PM, andi abes andi.a...@gmail.com wrote:

 You could try to use the container sync added in 1.4.4.

 The scheme would be to set up two separate clusters, one in each data center.
 Obviously requests will be satisfied locally.
 You will also set up your containers identically, and configure them to
 sync, to make sure data is available in both DC's.

 You might want to consider how many replicas you want in each data center,
 and how you'd recover from failures, rather than just setting up 2 DC x 3-5
 replicas for each object.

 a.


 On Tue, Dec 6, 2011 at 1:49 PM, Caitlin Bestler 
 caitlin.best...@nexenta.com wrote:

  Lendro Reox asked:

  We're replicating our datacenter in another location (something like
 Amazon's east and west coast), thinking about our applications and
 our use of Swift: is there any way that we can set up weights for our
 datanodes so that if a request enters via, for example, DATACENTER 1,
 then we want the main copy of the data being written on a datanode in
 the SAME datacenter, or read from the same datacenter, so that
 when we want to read it and it comes from a proxy node of the same
 datacenter we don't add the delay of the latency between the two datacenters.
 The motto is: if a request to write or read enters via DATACENTER 1, then
 it is served via proxy nodes/datanodes located in DATACENTER 1, and
 then the replicas get copied across zones over both datacenters.

  Routing the request to specific proxy nodes is easy, but I don't know if
 swift has a way to manage this internally too for the datanodes.

 I don’t see how you would accomplish that with the current Swift
 infrastructure.

 An object is hashed to a partition, and the ring determines where replicas
 of that partition are stored.

 What you seem to be suggesting is that when an object is created in region
 X it should be assigned to a partition that is primarily stored in
 region X, while if the same object had been created in region Y it would be
 assigned to a partition that is primarily stored in region Y.

 The problem is that “where this object was first created” is not a
 contributor to the hash algorithm, nor could it be, since there is no way
 for someone trying to get that object to know where it was first created.

 What I think you are looking for is a solution where you have *two*
 rings, DATACENTER-WEST and DATACENTER-EAST. Both of these rings would have
 an adequate number of replicas to function independently, but would
 asynchronously update each other to provide eventual consistency.

 That would use more disk space, but avoids making all updates wait for the
 data to be updated at each site.

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] how to list all account and get disk usage info?

2011-11-30 Thread andi abes
take a look at this: https://github.com/notmyname/slogging

It collects usage information for swift, including storage usage and
traffic statistics.
(it has pretty good documentation - just build from source in doc/)


On Wed, Nov 30, 2011 at 6:45 PM, pf shineyear shin...@gmail.com wrote:

 my question is for swift

 but this python code is not for swift, it's for nova

 can anyone tell me how to do that in swift?

 thanks.


 On Thu, Dec 1, 2011 at 10:29 AM, Tom Fifield fifie...@unimelb.edu.au wrote:

 Hi,

 Is this for Swift?

 Regards,

 Tom


 On 12/01/2011 09:41 AM, pf shineyear wrote:

 hi all :

 does anyone know how to list all accounts and get each account's disk
 usage info?

 because i want to get every account's disk usage info for billing per hour.

 thanks.


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Boston openstack user group meetup

2011-11-16 Thread andi abes
Just a quick shout out to folks in the Boston and NE area - we're putting
the final touches on a meetup on 11/29 in Lexington, MA at 6pm.
Check out the meetup page, and throw in topics for the unconference
discussion here (or just write them on the board that evening):

http://www.meetup.com/Openstack-Boston/

a.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Most current diablo swift pkgs?

2011-10-24 Thread andi abes
I'm a bit confused with the state of affairs for swift diablo.
I've seen notes and checkins for backports to nova from essex, and found
https://launchpad.net/~openstack-release/+archive/2011.3 which seems to be
the repo for the patched packages...

Is that right? Is this the location that will be kept up to date?

thx
a.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] dashaboard+keystone+nova+glance work well?

2011-09-28 Thread andi abes
is this just to avoid the need for the long lived token, or are there other
issues in dash/keystone integration?


2011/9/28 Devin Carlen devin.car...@gmail.com

 We should have a drop of Keystone by the end of the week that fully support
 Diablo.  This will fix the Dashboard issues as well.

 Thanks,

 Devin



 On Sep 28, 2011, at 12:31 AM, Jesse Andrews wrote:

  at various points in time they have worked together.  We
  (cloudbuilders) keep a list of repositories that work well together.
 
  # compute service
  NOVA_REPO=https://github.com/openstack/nova.git
  NOVA_BRANCH=2011.3
 
  # image catalog service
  GLANCE_REPO=https://github.com/cloudbuilders/glance.git
  GLANCE_BRANCH=diablo
 
  # unified auth system (manages accounts/tokens)
  KEYSTONE_REPO=https://github.com/cloudbuilders/keystone.git
  KEYSTONE_BRANCH=diablo
 
  # a websockets/html5 or flash powered VNC console for vm instances
  NOVNC_REPO=https://github.com/cloudbuilders/noVNC.git
  NOVNC_BRANCH=master
 
  # django powered web control panel for openstack
  DASH_REPO=https://github.com/cloudbuilders/openstack-dashboard.git
  DASH_BRANCH=master
 
  # python client library to nova that dashboard (and others) use
  NOVACLIENT_REPO=https://github.com/cloudbuilders/python-novaclient.git
  NOVACLIENT_BRANCH=master
 
  # openstackx is a collection of extensions to openstack.compute & nova
  # that is *deprecated*.  The code is being moved into python-novaclient &
 nova.
  OPENSTACKX_REPO=https://github.com/cloudbuilders/openstackx.git
  OPENSTACKX_BRANCH=diablo
 
  On Tue, Sep 27, 2011 at 7:11 PM, shake chen shake.c...@gmail.com
 wrote:
  No, the Dashboard is not working right now.
 
  https://bugs.launchpad.net/openstack-dashboard/+bug/855142
 
  I think the bug needs to be fixed first.
 
 
 
  On Wed, Sep 28, 2011 at 9:31 AM, l jv ljv...@gmail.com wrote:
 
  hi
  has anybody configured dashboard+keystone+nova+glance successfully so that it
  works well?
  when i follow http://docs.openstack.org/ it does not work
  well; something always goes wrong when using the dashboard (glance and nova
 work well).
  could somebody write a detailed config process doc?
  thanks a lot
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 
 
 
 
  --
  陈沙克
  手机:13661187180
  msn:shake.c...@hotmail.com
 
 
 
 





Re: [Openstack] Is Openstack suitable to my problem?

2011-08-15 Thread andi abes
I think John pointed you to some info on how to achieve the hierarchical
structure requirement.
The other requirement was around syncing remote clusters:
Swift Diablo (1.4) is probably suited for the scenario you're
describing:
See the spec for multi-cluster sync for swift [1] and
the excellent comments in the code [2] implementing it.
(I haven't yet played with it, but am planning to soon.)

[1] http://etherpad.openstack.org/QAoBrOHZxd
[2]
https://github.com/openstack/swift/blob/master/swift/container/sync.py#L72
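
For a rough sense of how it's wired up, here's a minimal (untested) sketch, based on the
comments in [2], of pointing a local container at a remote one via the
X-Container-Sync-To / X-Container-Sync-Key headers. The URLs, token and key below are
placeholders:

    import httplib
    from urlparse import urlparse

    def set_container_sync(storage_url, token, container, sync_to, sync_key):
        # POST to the local container, setting the sync target and the shared key
        parsed = urlparse(storage_url)
        conn = httplib.HTTPConnection(parsed.netloc)
        conn.request('POST', '%s/%s' % (parsed.path, container), '', {
            'X-Auth-Token': token,
            'X-Container-Sync-To': sync_to,    # the peer cluster's container URL
            'X-Container-Sync-Key': sync_key,  # same key must be set on the peer container
        })
        resp = conn.getresponse()
        resp.read()
        return resp.status

    # placeholder values - run the mirror-image call against the other cluster too
    set_container_sync('http://cluster1:8080/v1/AUTH_test', 'AUTH_tk123',
                       'photos', 'http://cluster2:8080/v1/AUTH_test/photos', 's3cr3t')

Per [2], a background sync process on each cluster then does the actual copying.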


hth,
a.



On Mon, Aug 15, 2011 at 8:50 AM, John Dickinson m...@not.mn wrote:

 See
 http://programmerthoughts.com/programming/nested-folders-in-cloud-files/ for
 info on how to use a nested directory structure in swift.

 --John


 On Aug 14, 2011, at 11:07 PM, Thiago Moraes wrote:

  I took a look at some distributed file systems and went a little deeper
 in Hadoop and his HDFS, for instance. I don't really need full POSIX
 compliance, but having a nested structure is important, but as far as I know
 there are way to simulate this on Switf, is that correct?
 
  The problem I see in using something like hadoop is the single point of
 failure, not because I need almost 100% availability, but because the people
 who will access the data does not belong to the same organization. They will
 be researchers from different institutions that may want to deploy a local
 server with a subset of the data to improve their productivity, but the data
 set's size makes impractical to just copy everything.
 
  The plan would be that the interface to the system would show which files
 are stored locally and which are not, so that everyone gets access to
 everything, almost like a peer to peer system where they download from the
 closest source and then store for their own use.
 
  At first, I though of implementing something by hand, but using an
 already mature solution makes a lot more sense.
 
  So, is this plausible or am I trying to use the wrong tools?
 
  thanks, again
 
  Thiago Moraes - EnC 07 - UFSCar
 
 
  2011/8/14 Todd Deshane todd.desh...@xen.org
  On Sun, Aug 14, 2011 at 4:10 AM, Thiago Moraes
  thiago.camposmor...@gmail.com wrote:
   Hey guys,
  
   I'm new on the list and I'm currently considering Openstack to solve a
 data
   distribution problem. Right now, there's a server which contains very
 large
   files (usual files have 30GB or even more). This server is accessed by
 LAN
   and over the internet but, of course, it's difficult to do this without
   local connection.
  
   My idea to solve this problem is to deploy new servers on the places
 which
   access data more often in an such a way that they get a local copy of
 the
   most accessed part of data by then. In my head, I consider that there
 will
   be N different clouds, one at my location and the others spread on
 another
   networks. Then, these new clouds would download and store parts of the
 data
   (entire files) so that they can be accessed through their own LAN.
  
 
  It sounds like you are looking for the functionality that Zones (aim
  to?) provide.
 
  Take a look at:
 
  http://wiki.openstack.org/MultiClusterZones
 
 
   Is Openstack suitable in this environment? Anyone would recommend
 another
   solution?
  
 
  Have you also looked at SheepDog, Hadoop or HC2? All of these seem to
  have some OpenStack integration points as well.
 
  Some links to look into:
  http://wiki.openstack.org/SheepdogSupport
  http://doubleclix.wordpress.com/2011/03/17/hadoop-2-0-openstack-pbj/
 
 http://www.quora.com/What-features-differentiate-HDFS-and-OpenStack-Object-Storage
 
 
  Hope that helps.
 
  Thanks,
  Todd
 
   PS: I know the file size limitations of 5GB. I just need that all parts
 of a
   file to be in the same local area network so that a blazingly fast
 Internet
   connection is not required all the time.
  
   thanks,
  
  
   Thiago Moraes - EnC 07 - UFSCar
  
  
  
 
 
 
  --
  Todd Deshane
  http://www.linkedin.com/in/deshantm
  http://www.xen.org/products/cloudxen.html
  http://runningxen.com/
 





Re: [Openstack] Chef Deployment System for Swift - a proposed design - feedback?

2011-08-01 Thread andi abes
Time for another update

Crowbar is (finally) out. It made some splash at OSCON, but in case you
missed it, you can find it here: https://github.com/dellcloudedge/crowbar

In case you really missed it - crowbar provides a bare-metal-to-fully-deployed
install system for glance, nova and/or swift. It will take care of PXE
booting the systems and walking them through the paces (discovery,
hw install and configure, OS install and configuration). Once the base OS is
ready on the set of machines to be installed, clusters of compute or
storage services are deployed and prepped for use.

You can try crowbar in a virtual environment or on bare-metal hw.
Take it for a spin and drop us a note.

(p.s. it subsumes (by a lot) the previous swift-only recipes from this
thread, and includes some bug fixes)

On Mon, May 23, 2011 at 1:41 PM, andi abes andi.a...@gmail.com wrote:



 It took a while, but finally:
 https://github.com/dellcloudedge/openstack-swift

 Jay, I've added a swift-proxy-acct and swift-proxy (without account
 management).

 This cookbook is an advanced leak of for swift, soon to be followed with
 a leak of a nova cookbook. The full crowbar that was mentioned is on its
 way...

 To use these recipes (with default settings) you just need to pick your
 storage nodes and 1 or more proxies. Then assign the appropriate roles
 (swift-storage swift-proxy or swift-proxy-acct) using the chef ui or a knife
 command. Choose one of the nodes and assign it the swift-node-compute. and
 the swift cluster is built (because of async nature of multi-node
 deployments, it might require a few chef-client runs while the ring files
 are generated and pushed around.

 have a spin. eager to hear comments.








 On Mon, May 2, 2011 at 11:36 AM, andi abes andi.a...@gmail.com wrote:

 Jay,

 hmmm, interesting point about account management in the proxy. Guess
 you're suggesting that you have 2 flavors of a proxy server - one with
 account management enabled and one without?
  Is the main concern here security - you'd have more controls on the
 account management servers? Or is this about something else?

 About ring-compute:
 so there are 2 concerns I was thinking about with rings - a) make sure the
 ring information is consistent across all the nodes in the cluster, and b)
 try not to lose the ring info.

 The main driver to have only 1 ring compute node was a). the main concern
 being guaranteeing consistency of the ring data among all nodes without
 causing too strong coupling to the underlying mechanisms used to build the
 ring.
 For example - if 2 new rings are created independently, then the order in
 which disks are added to the ring should be consistent (assuming that the
 disk/partition allocation algorithm is sensitive to ordering). Which implies
 that the query to chef should always return data in exactly the same order.
 If also would require that the ring building (and mandate that it will
 never be changed) does not use any heuristics that are time or machine
 dependent (I _think_ that right now that is the case, but I would rather not
 depend on it).

 I was thinking that these restrictions can be avoided easily by making
 sure that only 1 node computes the ring. To make sure that b) (don't lose
 the ring) is addressed - the ring is copied around.
 If the ring compute node fails, then any other node can be used to seed a
 new compute ring without any loss.
 Does that make sense?


 Right now I'm using a snapshot deb package built from bzr266. Changing the
 source of the bits is pretty esay... (and installing the deb includes the
 utilities you mentioned)


 Re: load balancers:
 What you're proposing makes perfect sense. Chef is pretty modular. So the
 swift configuration recipe focuses on setting up swift - not the whole
 environment. It would make sense to deploy some load balancer, firewall
 appliance etc in an environment. However, these would be add-ons to the
 basic swift configuration.
 A simple way to achieve this would be to have a recipe that would query
 the chef server for all nodes which have the swift-proxy role, and add them
 as internal addresses for the load balancer of your choice.
 (e.g. :
 http://wiki.opscode.com/display/chef/Search#Search-FindNodeswithaRoleintheExpandedRunList
 )


 a.



 On Sun, May 1, 2011 at 10:14 AM, Jay Payne lett...@gmail.com wrote:

 Andi,

 This looks great.   I do have some thoughts/questions.

 If you are using 1.3, do you have a separate role for the management
 functionality in the proxy?It's not a good idea to have all your
 proxy servers running in management mode (unless you only have one
 proxy).

 Why only 1 ring-compute node?  If that node is lost or unavailable do
 you loose your ring-builder files?

 When I create an environment I always setup utilities like st,
 get-nodes, stats-report, and a simple functional test script on a
 server to help troubleshoot and manage the cluster(s).

 Are you using packages or eggs to deploy the swift

Re: [Openstack] Thinking about Backups/Snapshots in Nova Volume

2011-07-21 Thread andi abes
hmm - they definitely muddy the waters, but provide a really cool feature
set:
Amazon EBS Snapshots

Amazon EBS provides the ability to back up point-in-time snapshots of your
data to Amazon S3 for durable recovery. Amazon EBS snapshots are incremental
backups, meaning that only the blocks on the device that have changed since
your last snapshot will be saved. If you have a device with 100 GBs of data,
but only 5 GBs of data has changed since your last snapshot, only the 5
additional GBs of snapshot data will be stored back to Amazon S3. Even
though the snapshots are saved incrementally, when you delete a snapshot,
only the data not needed for any other snapshot is removed. So regardless of
which prior snapshots have been deleted, all active snapshots will contain
all the information needed to restore the volume. In addition, the time to
restore the volume is the same for all snapshots, offering the restore time
of full backups with the space savings of incremental



That quoted, it's not exactly a low bar to meet in terms of capability.
Chuck - are you proposing that as the target for Diablo?
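
Purely to make the bookkeeping in that quote concrete, here's a toy sketch (nothing to do
with actual EBS or nova-volume internals) of incremental snapshots with shared,
reference-counted blocks:

    # Toy model of the incremental-snapshot bookkeeping described above; purely
    # illustrative, not EBS or nova-volume code.
    class SnapshotStore(object):
        def __init__(self):
            self.snapshots = {}  # snap_id -> {block_index: block_id}
            self.refcount = {}   # block_id -> how many snapshots still need it

        def take(self, snap_id, changed_blocks, parent=None):
            # start from the parent's block map and store only the changed blocks
            blocks = dict(self.snapshots.get(parent, {}))
            blocks.update(changed_blocks)
            for block_id in blocks.values():
                self.refcount[block_id] = self.refcount.get(block_id, 0) + 1
            self.snapshots[snap_id] = blocks

        def delete(self, snap_id):
            # only blocks no other snapshot references become garbage
            freed = []
            for block_id in self.snapshots.pop(snap_id).values():
                self.refcount[block_id] -= 1
                if self.refcount[block_id] == 0:
                    freed.append(block_id)
            return freed

    store = SnapshotStore()
    store.take('snap1', {0: 'b0', 1: 'b1', 2: 'b2'})     # full copy the first time
    store.take('snap2', {1: 'b1-new'}, parent='snap1')   # only the dirty block is new
    assert store.delete('snap1') == ['b1']               # b0/b2 still back snap2

i.e. incremental on write, fully restorable on read, and reference counting decides what a
delete can actually reclaim.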

p.s - typing on a real keyboard is so much easier than an iPad, and leads to
much better grammar...

On Thu, Jul 21, 2011 at 12:19 PM, Chuck Thier cth...@gmail.com wrote:

 Hey Andi,

 Perhaps it would be better to re-frame the question.

 What should the base functionality of the Openstack API for
 backup/snapshot functionality be?

 I'm looking at it from the perspective of initially providing the
 capabilities that EC2/EBS currently provides (which they call
 snapshots).  To me, this is the absolute base of what is needed, and
 is what I am basically proposing as the idea of backups.

 I also see that allowing for true volume snapshot capabilities are
 desirable down the road.  The difficulty with snapshots, is that their
 properties can vary greatly between different storage systems, and
 thus needs some care in defining what a Nova Volume snapshot should
 support.  I would expect that the different storage providers would
 initially provide this support through extensions to the API.  At that
 point it may be easier to find what commonalities there are, and to
 find what types of features are most demanded in the cloud.

 --
 Chuck

 On Thu, Jul 21, 2011 at 5:57 AM, Andiabes andi.a...@gmail.com wrote:
  I think vish pointed out the main differences between the 2 entities, and
 maybe that can lead to name disambiguation...
 
  Backup is a full copy, and usable without the original object being
 available in any state ( original or modified). It's expensive, since it's a
 full copy. Main use cases are dr and recovery.
 
  Snapshot represents a point in time state of the object. It's relatively
 cheap ( with the expectation that some copy-on-write or differencing
 technique is used). Only usable if the reference point of the snapshot is
 available (could be thought of as an incremental backup); what that
 reference point is depends on the underlying implementation technology. Main
 use case is rewinding to so a historic state some time in the future.
 
  That said, with the prereqs met, both can probably be used to mount a new
 volume.
  Reasonable?
 
  On Jul 20, 2011, at 5:27 PM, Chuck Thier cth...@gmail.com wrote:
 
  Yeah, I think you are illustrating how this generates much confusion :)
 
  To try to be more specific, the base functionality should be:
 
  1. Create a point in time backup of a volume
  2. Create a new volume from a backup (I guess it seems reasonable to
  call this a clone)
 
  This emulates the behavior of what EC2/EBS provide with volume
  snapshots.  In this scenario, a restore is create a new volume from
  the backup, and delete the old volume.
 
  In the Storage world, much more can generally be done with snapshots.
  For example in most storage system snapshots are treated just like a
  normal volume and can be mounted directly.  A snapshot is often used
  when creating a backup to ensure that you have a consistent point in
  time backup, which I think most of the confusion comes from.
 
  What we finally call it doesn't matter as much to me, as long as we
  paint a consistent story that isn't confusing, and that we get it in
  the Openstack API.
 
  --
  Chuck
 
  On Wed, Jul 20, 2011 at 3:33 PM, Vishvananda Ishaya
  vishvana...@gmail.com wrote:
  In rereading this i'm noticing that you are actually suggesting
 alternative usage:
 
  backup/clone
 
  snapshot/restore
 
  Correct?
 
  It seems like backup and snapshot are kind of interchangable.  This is
 quite confusing, perhaps we should refer to them as:
 
  partial-snapshot
 
  whole-snapshot
 
  or something along those lines that conveys that one is a differencing
 image and one is a copy of the entire object?
 
  On Jul 20, 2011, at 12:01 PM, Chuck Thier wrote:
 
  At the last developers summit, it was noted by many, that the idea of
  a volume snaphsot in the cloud is highly overloaded.  EBS uses the
  notion of snapshots for 

Re: [Openstack] Keystone tenants vs. Nova projects

2011-07-15 Thread andi abes
Yuriy,

A use-case scenario for keystone would be a service provider servicing
large customers with their own authentication infrastructure (e.g. LDAP/AD
etc). Obviously, different tenants would have different directory instances. To
authenticate a user, the correct authentication back end must be selected.

In your model, how would a service provider be able to allow delegated
authentication to different customers?


On Fri, Jul 15, 2011 at 1:37 AM, Yuriy Taraday yorik@gmail.com wrote:

 I think, there should not be such thing as default tenant.
 If user does not specify tenant in authentication data, ones token should
 not be bound to any tenant, and user should have access to resources based
 on global role assignments.
 If user specify tenant, one should be either explicitly bound to tenant
 (probably through UserRoleAssignment model, but it is not the best way) or
 in some global role. Then one will have access to resources based on global
 role assignments and tenant role assignments.
 I'm not sure whether users should be added to a tenant and then to roles in
 this tenant or we should remove totally direct link between user and tenant,
 so that user is in tenant if and only if one is in any role in this tenant.

 Kind regards, Yuriy.


 On Fri, Jul 15, 2011 at 00:07, Nguyen, Liem Manh liem_m_ngu...@hp.comwrote:

  When one creates a user, should a user always have a tenant associated
 with her?  If that’s the case, then the “default” tenant is the tenant that
 the user is associated with at creation time?  Sorry for responding to the
 question with another question, but it is unclear for me from looking at the
 model (there is no non-null constraint on the tenant_id fk on the user
 table).

 ** **

 Thanks,

 Liem

 ** **

 *From:* openstack-bounces+liem_m_nguyen=hp@lists.launchpad.net[mailto:
 openstack-bounces+liem_m_nguyen=hp@lists.launchpad.net] *On Behalf Of
 *Ziad Sawalha
 *Sent:* Thursday, July 14, 2011 12:22 PM

 *To:* Rouault, Jason (Cloud Services); Yuriy Taraday;
 openstack@lists.launchpad.net
 *Subject:* Re: [Openstack] Keystone tenants vs. Nova projects

  ** **

 In the example I gave below they are not members of any group and have no
 roles assigned to them. Should they still be authenticated?

 ** **

 *From: *Rouault, Jason (Cloud Services) jason.roua...@hp.com
 *Date: *Thu, 14 Jul 2011 16:25:22 +
 *To: *Ziad Sawalha ziad.sawa...@rackspace.com, Yuriy Taraday 
 yorik@gmail.com, openstack@lists.launchpad.net 
 openstack@lists.launchpad.net
 *Subject: *RE: [Openstack] Keystone tenants vs. Nova projects

 ** **

 A user can specify a tenantID at the time of authentication.  If no
 tenantID is specified during authentication, then I would expect the
 ‘default’ tenant for the user would apply.  The capabilities of User1 on
 TenantA (in this case the default tenant for the user) would be determined
 by their role and group assignments within the context of TenantA.  

  

 Jason

  

 *From:* Ziad Sawalha 
 [mailto:ziad.sawa...@rackspace.comziad.sawa...@rackspace.com]

 *Sent:* Wednesday, July 13, 2011 10:35 PM
 *To:* Rouault, Jason (Cloud Services); Yuriy Taraday;
 openstack@lists.launchpad.net
 *Subject:* Re: [Openstack] Keystone tenants vs. Nova projects

  

 What if:

  

 -  User1 has TenantA as her default tenant

  

 Should the service authenticate the user against TenantA? And if so, why?
 What does the 'default tenant' grant User1 on TenantA? It's some nebulous,
  implied role…

  

  

  

 *From: *Rouault, Jason (Cloud Services) jason.roua...@hp.com
 *Date: *Wed, 13 Jul 2011 13:18:44 +
 *To: *Ziad Sawalha ziad.sawa...@rackspace.com, Yuriy Taraday 
 yorik@gmail.com, openstack@lists.launchpad.net 
 openstack@lists.launchpad.net
 *Subject: *RE: [Openstack] Keystone tenants vs. Nova projects

  

 If a user is bound to their default tenant, why wouldn’t any role
 assignments for that user in their default tenant apply?

  

  

 User1 authenticates specifying TenantB, this binds User1 into the context
 of TenantB.  In subsequent web service requests using the token received
 after authentication, the Auth component filter would decorate the headers
 with RoleY.

 If User1 authenticates specifying TenantA, or specifying no Tenant,  this
 binds User1 into the context of TenantA.  The headers would then be
 decorated with RoleX.

  

 Jason

  

 *From:* openstack-bounces+jason.rouault=hp@lists.launchpad.net [
 mailto:openstack-bounces+jason.rouault=hp@lists.launchpad.netopenstack-bounces+jason.rouault=hp@lists.launchpad.net]
 *On Behalf Of *Ziad Sawalha
 *Sent:* Tuesday, July 12, 2011 10:09 PM
 *To:* Yuriy Taraday; openstack@lists.launchpad.net
 *Subject:* Re: [Openstack] Keystone tenants vs. Nova projects

  

 Our goal is to support Nova use cases right now. You can provide access to
 multiple tenants using a role 

Re: [Openstack] Keystone tenants vs. Nova projects

2011-07-15 Thread andi abes
I guess sfdc disagrees with you - they allow e.g. Dell to use single sign-on
to authenticate to their services - as a @dell user, you can log in with
the same email/password to internal resources as well as sfdc ones. (In
case it's not obvious - you also update your password in one location - the
Dell AD directory.)

On Jul 15, 2011, at 14:14, Yuriy Taraday yorik@gmail.com wrote:

Currently there is a basic skeleton for only one backend (identity
store) configuration per Keystone instance. It can be either DB or LDAP (the
latter is almost done).
May be in future we should be somehow able to specify not only tenants but
also an backend for each authentication request.
But I cannot imagine a real use case for that. All identities should be
stored in one place. I doubt that it'll be useful to keep different users
and/or tenants (or roles or whatever) in different stores. There usually is
one single central repository, DB, LDAP or may be some billing system. If we
have two isolated systems, we should consider using two separate auth
services.

Kind regards, Yuriy.


On Fri, Jul 15, 2011 at 21:40, andi abes andi.a...@gmail.com wrote:

 Yuriy,

 a  use-case scenario for keystone would be a service provider servicing
  large customers with  their own  authentication infrastructure (e.g. LDAP/
 AD etc). Obviously, different tenants  have different instances. To
 authenticate a user, the correct authentication back end must be selected.

 In your model, how would a service provide be able to allow delegated
 authentication to different customers?


 On Fri, Jul 15, 2011 at 1:37 AM, Yuriy Taraday yorik@gmail.comwrote:

 I think, there should not be such thing as default tenant.
 If user does not specify tenant in authentication data, ones token should
 not be bound to any tenant, and user should have access to resources based
 on global role assignments.
 If user specify tenant, one should be either explicitly bound to tenant
 (probably through UserRoleAssignment model, but it is not the best way) or
 in some global role. Then one will have access to resources based on global
 role assignments and tenant role assignments.
 I'm not sure whether users should be added to a tenant and then to roles
 in this tenant or we should remove totally direct link between user and
 tenant, so that user is in tenant if and only if one is in any role in this
 tenant.

 Kind regards, Yuriy.


 On Fri, Jul 15, 2011 at 00:07, Nguyen, Liem Manh liem_m_ngu...@hp.comwrote:

  When one creates a user, should a user always have a tenant associated
 with her?  If that’s the case, then the “default” tenant is the tenant that
 the user is associated with at creation time?  Sorry for responding to the
 question with another question, but it is unclear for me from looking at the
 model (there is no non-null constraint on the tenant_id fk on the user
 table).

 ** **

 Thanks,

 Liem

 ** **

 *From:* openstack-bounces+liem_m_nguyen=hp@lists.launchpad.net[mailto:
 openstack-bounces+liem_m_nguyen=hp@lists.launchpad.net] *On Behalf
 Of *Ziad Sawalha
 *Sent:* Thursday, July 14, 2011 12:22 PM

 *To:* Rouault, Jason (Cloud Services); Yuriy Taraday;
 openstack@lists.launchpad.net
 *Subject:* Re: [Openstack] Keystone tenants vs. Nova projects

  ** **

 In the example I gave below they are not members of any group and have no
 roles assigned to them. Should they still be authenticated?

 ** **

 *From: *Rouault, Jason (Cloud Services) jason.roua...@hp.com
 *Date: *Thu, 14 Jul 2011 16:25:22 +
 *To: *Ziad Sawalha ziad.sawa...@rackspace.com, Yuriy Taraday 
 yorik@gmail.com, openstack@lists.launchpad.net 
 openstack@lists.launchpad.net
 *Subject: *RE: [Openstack] Keystone tenants vs. Nova projects

 ** **

 A user can specify a tenantID at the time of authentication.  If no
 tenantID is specified during authentication, then I would expect the
 ‘default’ tenant for the user would apply.  The capabilities of User1 on
 TenantA (in this case the default tenant for the user) would be determined
 by their role and group assignments within the context of TenantA.  

  

 Jason

  

 *From:* Ziad Sawalha 
 [mailto:ziad.sawa...@rackspace.comziad.sawa...@rackspace.comziad.sawa...@rackspace.com]

 *Sent:* Wednesday, July 13, 2011 10:35 PM
 *To:* Rouault, Jason (Cloud Services); Yuriy Taraday;
 openstack@lists.launchpad.net
 *Subject:* Re: [Openstack] Keystone tenants vs. Nova projects

  

 What if:

  

 -  User1 has TenantA as her default tenant

  

 Should the service authenticate the user against TenantA? And if so, why?
 What does the 'default tenant' grant User1 on TenantA? It's some nebulous,
  implied role…

  

  

  

 *From: *Rouault, Jason (Cloud Services) jason.roua...@hp.com
 *Date: *Wed, 13 Jul 2011 13:18:44 +
 *To: *Ziad Sawalha ziad.sawa...@rackspace.com, Yuriy Taraday 
 yorik@gmail.com, openstack@lists.launchpad.net 
 openstack

Re: [Openstack] Keystone tenants vs. Nova projects

2011-07-15 Thread andi abes
Just to clarify - Yuriy, what you're describing is very reasonable for an
enterprise system, where you definitely strive to achieve centralized
authentication. I, however, believe that model is too restrictive for a cloud
service provider. These two worlds are somewhat different.

On Jul 15, 2011, at 15:07, andi abes andi.a...@gmail.com wrote:

I guess sfdc disagrees with you - they allow e.g Dell to use a single sign
on to authenticate to their services - as a @dell user, you can login with
the same email/password to internal resources as well as sfdc ones. ( in
case it's not obvious - you also update your password in one location - the
Dell AD directory)

On Jul 15, 2011, at 14:14, Yuriy Taraday yorik@gmail.com wrote:

Currently there is a basic skeleton for only one backend (identity
store) configuration per Keystone instance. It can be either DB or LDAP (the
latter is almost done).
May be in future we should be somehow able to specify not only tenants but
also an backend for each authentication request.
But I cannot imagine a real use case for that. All identities should be
stored in one place. I doubt that it'll be useful to keep different users
and/or tenants (or roles or whatever) in different stores. There usually is
one single central repository, DB, LDAP or may be some billing system. If we
have two isolated systems, we should consider using two separate auth
services.

Kind regards, Yuriy.


On Fri, Jul 15, 2011 at 21:40, andi abes  andi.a...@gmail.com
andi.a...@gmail.com wrote:

 Yuriy,

 a  use-case scenario for keystone would be a service provider servicing
  large customers with  their own  authentication infrastructure (e.g. LDAP/
 AD etc). Obviously, different tenants  have different instances. To
 authenticate a user, the correct authentication back end must be selected.

 In your model, how would a service provide be able to allow delegated
 authentication to different customers?


 On Fri, Jul 15, 2011 at 1:37 AM, Yuriy Taraday  yorik@gmail.com
 yorik@gmail.com wrote:

 I think, there should not be such thing as default tenant.
 If user does not specify tenant in authentication data, ones token should
 not be bound to any tenant, and user should have access to resources based
 on global role assignments.
 If user specify tenant, one should be either explicitly bound to tenant
 (probably through UserRoleAssignment model, but it is not the best way) or
 in some global role. Then one will have access to resources based on global
 role assignments and tenant role assignments.
 I'm not sure whether users should be added to a tenant and then to roles
 in this tenant or we should remove totally direct link between user and
 tenant, so that user is in tenant if and only if one is in any role in this
 tenant.

 Kind regards, Yuriy.


 On Fri, Jul 15, 2011 at 00:07, Nguyen, Liem Manh  liem_m_ngu...@hp.com
 liem_m_ngu...@hp.com wrote:

  When one creates a user, should a user always have a tenant associated
 with her?  If that’s the case, then the “default” tenant is the tenant that
 the user is associated with at creation time?  Sorry for responding to the
 question with another question, but it is unclear for me from looking at the
 model (there is no non-null constraint on the tenant_id fk on the user
 table).

 ** **

 Thanks,

 Liem

 ** **

 *From:* openstack-bounces+liem_m_nguyen= 
 http://hp.comhp.com@http://lists.launchpad.net
 lists.launchpad.net [mailto:openstack-bounces+liem_m_nguyen=http://hp.com
 hp.com@ http://lists.launchpad.netlists.launchpad.net] *On Behalf Of *Ziad
 Sawalha
 *Sent:* Thursday, July 14, 2011 12:22 PM

 *To:* Rouault, Jason (Cloud Services); Yuriy Taraday;
 openstack@lists.launchpad.netopenstack@lists.launchpad.net
 *Subject:* Re: [Openstack] Keystone tenants vs. Nova projects

  ** **

 In the example I gave below they are not members of any group and have no
 roles assigned to them. Should they still be authenticated?

 ** **

 *From: *Rouault, Jason (Cloud Services)  jason.roua...@hp.com
 jason.roua...@hp.com
 *Date: *Thu, 14 Jul 2011 16:25:22 +
 *To: *Ziad Sawalha  ziad.sawa...@rackspace.com
 ziad.sawa...@rackspace.com, Yuriy Taraday  yorik@gmail.com
 yorik@gmail.com,  openstack@lists.launchpad.net
 openstack@lists.launchpad.net  openstack@lists.launchpad.net
 openstack@lists.launchpad.net
 *Subject: *RE: [Openstack] Keystone tenants vs. Nova projects

 ** **

 A user can specify a tenantID at the time of authentication.  If no
 tenantID is specified during authentication, then I would expect the
 ‘default’ tenant for the user would apply.  The capabilities of User1 on
 TenantA (in this case the default tenant for the user) would be determined
 by their role and group assignments within the context of TenantA.  

  

 Jason

  

 *From:* Ziad Sawalha [ ziad.sawa...@rackspace.com
 mailto:ziad.sawa...@rackspace.com 
 ziad.sawa...@rackspace.comziad.sawa...@rackspace.com]

 *Sent:* Wednesday, July 13

Re: [Openstack] OpenStack Identity: Keystone API Proposal

2011-07-13 Thread andi abes
Dropped off the thread for a while... sorry.

Ziad, I think this sounds very reasonable. I think the only hiccup might be
with the use of the term role, which might connote some bigger meaning to
folks from certain backgrounds.

If I understand your proposal, then a service can decide the
granularity of the thingies it registers with keystone (sorry... no better
term comes to mind - it's what you refer to in #2). These thingies can be
roles or actions or whatever makes sense to the authorizing middleware.

In Jason's model, these thingies can be in the form of
service-namespace:object-type:action. In this case, it could
be convenient to have a keystone API to check if a given thingy is bestowed
on a user, rather than returning the full list of thingies the user possesses.
The rationale here is that if services do end up defining very granular
thingies (please do come up with a better name...) then the list might get
somewhat lengthy. On some systems I've worked on we had hundreds of
action-tokens (more or less the name we used) - returning the full list,
even filtered by the service's name-space, might be a bit much.
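
To make that concrete, here's a purely hypothetical shape for such a check call - neither
this endpoint nor these names exist in keystone, it's just to illustrate asking about one
thingy instead of fetching the whole list:

    import httplib

    # Hypothetical only: keystone has no such endpoint today; it just sketches
    # "is this one grant bestowed on this user" as a single call.
    def has_grant(admin_token, user_token, tenant_id, grant):
        conn = httplib.HTTPConnection('keystone.example.com', 5001)
        path = '/v2.0/tokens/%s/grants/%s?tenantId=%s' % (user_token, grant, tenant_id)
        conn.request('HEAD', path, '', {'X-Auth-Token': admin_token})
        return conn.getresponse().status == 200   # a 404 would mean "not bestowed"

    # e.g. has_grant(service_token, user_token, 'tenantA', 'swift:container:create')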


On Wed, Jul 13, 2011 at 12:45 AM, Ziad Sawalha
ziad.sawa...@rackspace.comwrote:

  Here's a possible use case we can implement to address this:

1. A service 'registers' itself with Keystone and reserves a name (Ex.
Swift, or nova). Keystone will guarantee uniqueness.
2. Registered services can then create roles for the service (Ex.
swift:admin or nova:netadmin) or tuples as suggested below
(nova:delete:volume)
3. On token validation, Keystone returns these roles and a service can
apply it's own policies based on them.

 This is super-simplified and we can expand on it.

  Other benefits:

- Registration would also be handy to allow services to add and manage
endpoints as well.
- We can also tie this with the concept of a ClientID so services can
identify themselves as well with a long-lived token (see
https://github.com/rackspace/keystone/issues/84)
- Common names for services could be implemented as shareable among
different implementations (Ex: compute:admin)

 Thoughts?

  And comments inline ZNS


   From: Rouault, Jason (Cloud Services) jason.roua...@hp.com
 Date: Thu, 16 Jun 2011 19:54:22 +
 To: andi abes andi.a...@gmail.com
 Cc: Ziad Sawalha ziad.sawa...@rackspace.com, 
 openstack@lists.launchpad.net openstack@lists.launchpad.net
 Subject: RE: [Openstack] OpenStack Identity: Keystone API Proposal

   See inline…

 ** **

 Jason

 ** **

 *From:* andi abes [mailto:andi.a...@gmail.com andi.a...@gmail.com]
 *Sent:* Wednesday, June 15, 2011 5:04 PM
 *To:* Rouault, Jason (Cloud Services)
 *Cc:* Ziad Sawalha; openstack@lists.launchpad.net
 *Subject:* Re: [Openstack] OpenStack Identity: Keystone API Proposal

 ** **

 Jason,  

   Sounds like the model you're proposing could be achieved by  something
 like this:

 ** **

 On Keystone:

 - Roles are identified by name and contain tuples in the form of:

  -- the service to which this permission applies (e.g. service nova,
 swift). Including the service is meant to side track attempts to normalize
 actions across very different types of services

  --  the type of target this action applies to - e.g.  volume, network port
 etc.

  -- action this permission allows - e.g. start vm, create volume.

  

 - An authorize API call which accepts: 

  - the service requesting the authorization

  -  user token (from a previous authentication) 

  - tenant ID (to resolve the realm of the user token)

  - a target type

  -  the attempted action.

 ** **

  This API would lookup the token, and if its present combine a set of the
 relevant permissions from all the roles the token is referencing. If the
 requested tuple exists in this combine set, the request is authorized.

 ** **

 A few caveats remain:

 ** **

 a) the above description doesn't include Resource Groups... as Ziad
 mentioned, that is currently differed. When those are introduced, the
 service should probably pass the instance-id of the target, and Keystone
 would have to take that into account.
 JLR  I think there are a number of ways to account for this if we
 leveraged a hierarchical (URI) structure and allowed for wild carding. ***
 *

 ZNS We start to get in the world of policies here (lookup XACML)

b) the current API's in keystone allows a service to perform
 actions on multiple instances across tenants (containers) efficiently - a
 service could obtain a list of accessible tenants and cache it. If only the
 'authorize' API is available, the service would need to perform a check with
 keystone for every instance 

 JLR Please explain the model for Tenant, Accounts, Projects, Groups,
 Roles…. I have not been able to discern how tenant will map to accounts and
 projects.  In any case, there are things that can be done to improve the
 potential

Re: [Openstack] Need to store the Data using open stack storage

2011-06-29 Thread andi abes
I'm assuming the following:
- you installed cactus (the 1.3 release)
- you're playing both the provider role (administering users and such) and
the end-user role (creating files and containers).

If that is the case, then you should also look into swauth.
This section of the guide could help:
http://docs.openstack.org/cactus/openstack-object-storage/admin/content/verify-swift-installation.html
Note that swauth is just one possible authentication scheme, and in diablo
things are changing...

While you're just starting, you might want to just bypass all authentication
in swift. You can do that by editing the proxy configuration (by default:
/etc/swift/proxy-server.conf). Find the line:

pipeline = healthcheck cache swauth proxy-server

and change it to this:

pipeline = healthcheck cache proxy-server


hope this helps.

a.
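
For the Python route Juan points to below, here's a minimal (untested) sketch with the
python-cloudfiles library; the auth URL, credentials and names are placeholders (with
swauth, use whatever account/user you created there):

    import cloudfiles

    # placeholder credentials - with swauth these come from the account you created
    conn = cloudfiles.get_connection('test:tester', 'testing',
                                     authurl='http://127.0.0.1:8080/auth/v1.0')

    container = conn.create_container('xml-data')      # no-op if it already exists
    obj = container.create_object('sample.xml')
    obj.content_type = 'application/xml'
    obj.write('<root><hello>world</hello></root>')      # or obj.load_from_filename(path)

    names = [o.name for o in container.get_objects()]   # list what we stored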




2011/6/29 Juan J. j...@memset.com

 On Wed, 2011-06-29 at 11:38 +0530, bharath pb wrote:
  Hi,
 
  I’m new to the open stack , I have installed openstack storage on
  ubuntu 10.10 .
 
  Now I want store some files/data(XML data) to the openstack , I have
  stuck up at this part .
 
  I went through the developer guide but could not understand.
 
  Pls suggest me how to go with this ..? I want use java/python for the
  interface

 The developer guide explains the API, in case you want to use it
 directly (it's a REST API over HTTP).

 In case you want a high level interface so you don't have to deal with
 HTTP directly, you can use Rackspace Cloud Files Python library:

 https://github.com/rackspace/python-cloudfiles

 In Ubuntu is packaged as python-rackspace-cloudfiles, although I
 recommend you to use the latest version available on github.

 There's a Java library too:

 https://github.com/rackspace/java-cloudfiles

 Regards,

 Juan

 --
 Juan J. Martinez
 Development, MEMSET

 mail: j...@memset.com
  web: http://www.memset.com/

 Memset Ltd., registration number 4504980. 25 Frederick Sanger Road,
 Guildford, Surrey, GU2 7YD, UK.





Re: [Openstack] Opensource Public Cloud Solution .. !!

2011-06-23 Thread andi abes
Not to beat my own drum here... but also check out dell.com/openstack
(I happen to be working on that).


On Thu, Jun 23, 2011 at 12:15 PM, Stephen Spector 
stephen.spec...@openstack.org wrote:

 Fahad:

 Thank you for your interest in OpenStack and considering this technology as
 the Public Cloud platform of choice. The best place to start with the
 technology is at http://docs.openstack.org/ where you can find manuals for
 OpenStack Compute and Object Storage. This can give you some basics for
 running and installing the technology.

 I also encourage you to contact three of our community partners who are in
 the business of getting OpenStack clouds up and running for consumers:

1. StackOps - http://www.stackops.com/
2. CloudScaling - http://cloudscaling.com/
3. Rackspace Cloud Builders - http://www.rackspace.com/cloudbuilders/
4. Citrix Open Cloud and Project Olympus  -

 http://www.citrix.com/English/ps2/products/product.asp?contentID=1681633ntref=hp_cat_cloud
  http://deliver.citrix.com/projectolympus

 There may be other companies in the community that I have not listed so
 take a look at our participating partner list at
 http://www.openstack.org/community/companies/.

 Thanks,

 - - -
 Stephen Spector, Rackspace
 OpenStack Community Manager
 stephen.spec...@openstack.org
 OpenStack Blog http://openstack.org/blog | @opnstk_com_mgr http://twitter.com/opnstk_com_mgr
 *Office*  +1 (512) 539-1162 | *Mobile* +1 (210) 415-0930

 From: Fahad Mahmood fah...@cubexsweatherly.com
 Date: Thu, 23 Jun 2011 16:36:09 +0500

 To: openstack@lists.launchpad.net
 Subject:  [Openstack] Opensource Public Cloud Solution .. !!

  Hi All,

 My name is fahad, working as an assistant network engineer @ CubeXS
 Weatherly Pvt Ltd, Company resides within Karachi, Pakistan.

 We are currently looking forward to build an OpenSource Public Cloud for
 which we require some useful advise as well as the solutions, We will most
 welcome if anyone provides us fully solution based on Public Cloud computing
 or any expert advises would be very much appreciated.

 Looking forward some positive response.

 Thanks  Regards,

 Fahad Mahmood
 Assistant Network Engineer
 CubeXS Weatherly Pvt Ltd





Re: [Openstack] OpenStack Identity: Keystone API Proposal

2011-06-15 Thread andi abes
I would expect that the API of each service would have to interpret the role
assigned to a user in the context of that service - roles for swift, nova,
glance, quantum, etc. would probably carry very different semantics.

So, to my understanding, keystone provides authentication and user
information - what tenants the user has access to, and what roles the user
is assigned. The mapping of these to what the user can do on which instances
in each service is left for the service to determine.


On Wed, Jun 15, 2011 at 10:32 AM, Rouault, Jason (Cloud Services) 
jason.roua...@hp.com wrote:

 Is there a plan to also have Keystone be the centralizing framework around
 authorization?   Right now it looks like policy enforcement is left to the
 API layer.



 Thanks,

 Jason



 *From:* openstack-bounces+jason.rouault=hp@lists.launchpad.net[mailto:
 openstack-bounces+jason.rouault=hp@lists.launchpad.net] *On Behalf Of
 *Ziad Sawalha
 *Sent:* Friday, June 10, 2011 5:24 PM
 *To:* openstack@lists.launchpad.net
 *Subject:* [Openstack] OpenStack Identity: Keystone API Proposal



 Time flies! It's June 10th already. In my last email to this community I
 had proposed today as the day to lock down the Keystone API so we can
 finalize implementation by Diablo-D2 (June 30th).



 We've been working on this feverishly over the past couple of weeks and
 have just pushed out a proposed API here:
 https://github.com/rackspace/keystone/raw/master/keystone/content/identitydevguide.pdf



 For any and all interested, the original source and code is on Github (
 https://github.com/rackspace/keystone),
 along with the current implementation of Keystone, examples, sample data,
 tests, instructions, and all the goodies we could muster to put together.
 The project also lives on Launchpad at http://launchpad.net/keystone.



 The API we just put out there is still a proposal. We're going to be
 focusing on the implementation, but would still love to get community input,
 feedback, and participation.



 Have a great weekend and regards to all,



 Ziad

















Re: [Openstack] OpenStack Identity: Keystone API Proposal

2011-06-15 Thread andi abes
Jason,
  Sounds like the model you're proposing could be achieved by  something
like this:

On Keystone:
- Roles are identified by name and contain tuples in the form of:
 -- the service to which this permission applies (e.g. service nova,
swift). Including the service is meant to sidestep attempts to normalize
actions across very different types of services
 --  the type of target this action applies to - e.g.  volume, network port
etc.
 -- action this permission allows - e.g. start vm, create volume.

- An authorize API call which accepts:
 - the service requesting the authorization
 -  user token (from a previous authentication)
 - tenant ID (to resolve the realm of the user token)
 - a target type
 -  the attempted action.

 This API would look up the token and, if it's present, combine a set of the
relevant permissions from all the roles the token references. If the
requested tuple exists in this combined set, the request is authorized.
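
As a sketch only (none of this is an existing keystone API - the names and data below are
made up purely to illustrate the tuple model above), the check could look roughly like:

    # Illustrative only - not keystone code; it just shows the
    # (service, target_type, action) tuple model described above.
    ROLES = {
        'volume-admin': set([('nova', 'volume', 'create'), ('nova', 'volume', 'delete')]),
        'vm-operator':  set([('nova', 'server', 'start'), ('nova', 'server', 'stop')]),
    }

    # token -> (tenant, role names), i.e. what a previous authentication produced
    TOKENS = {'tok-123': ('tenantA', set(['volume-admin', 'vm-operator']))}

    def authorize(service, token, tenant_id, target_type, action):
        if token not in TOKENS:
            return False                      # unknown or expired token
        tenant, role_names = TOKENS[token]
        if tenant != tenant_id:
            return False                      # token wasn't issued for this tenant
        permissions = set()
        for name in role_names:               # union of tuples from all the user's roles
            permissions |= ROLES.get(name, set())
        return (service, target_type, action) in permissions

    assert authorize('nova', 'tok-123', 'tenantA', 'volume', 'create')
    assert not authorize('nova', 'tok-123', 'tenantA', 'network', 'create')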

A few caveats remain:

a) the above description doesn't include Resource Groups... as Ziad
mentioned, that is currently deferred. When those are introduced, the
service should probably pass the instance-id of the target, and Keystone
would have to take that into account.

b) the current APIs in keystone allow a service to perform actions on
multiple instances across tenants (containers) efficiently - a service could
obtain a list of accessible tenants and cache it. If only the 'authorize'
API is available, the service would need to perform a check with keystone
for every instance.

c) In this model it is required to populate role definitions into keystone,
for all services. Since keystone should be independent of other services,
the set of actions/targets should probably be considered as data for it
- requiring a deployment step of sorts to make keystone aware of these
roles.
This could be avoided if the authorization decision is looked at as three
separate steps:
 1. figure out what roles a user possesses.
 2. expand the set of roles into the set of actions allowed
 3. determine if the action attempted is allowed

It is obviously debatable where keystone ends and the services begin. In the
model above, keystone is responsible for all three steps via the authorize API.
I *think* the current API provides a very similar model, with the line drawn
at 1 - i.e. keystone provides the roles, and there is a separate middleware
piece to perform 2 & 3, executing in the request pipeline of the service.
Where this middleware executes (i.e. what the API boundary to keystone is)
doesn't necessarily change the overall model.

I *think*.









On Wed, Jun 15, 2011 at 11:52 AM, Rouault, Jason (Cloud Services) 
jason.roua...@hp.com wrote:



 In my opinion the services (and their developers) should not need to
 interpret roles thus resulting in varying semantics.  Roles should be
 defined by a set of configurable privileges to perform certain *actions*on 
 specific
 *targets* for particular *services*.   The API should only need to know to
 check with an authorization subsystem whether the incoming request is
 allowed based on the who is making the request and the 3-tuple mentioned
 previously.



 Jason





 *From:* andi abes [mailto:andi.a...@gmail.com]
 *Sent:* Wednesday, June 15, 2011 9:18 AM
 *To:* Rouault, Jason (Cloud Services)
 *Cc:* Ziad Sawalha; openstack@lists.launchpad.net
 *Subject:* Re: [Openstack] OpenStack Identity: Keystone API Proposal



 I would expect that the API of each service would have to interpret the
 role assigned to a user in the context of that service - roles for swift
 nova glance quantum etc would probably carry very different semantics.



 So, to my understanding, key stone provides authentication and user
 information - what tenants the user has access to, and what roles the user
 is assigned. The mapping of these to what the user can do on what instances
 in each service are left for the service to determine.



 On Wed, Jun 15, 2011 at 10:32 AM, Rouault, Jason (Cloud Services) 
 jason.roua...@hp.com wrote:

 Is there a plan to also have Keystone be the centralizing framework around
 authorization?   Right now it looks like policy enforcement is left to the
 API layer.



 Thanks,

 Jason



 *From:* openstack-bounces+jason.rouault=hp@lists.launchpad.net[mailto:
 openstack-bounces+jason.rouault=hp@lists.launchpad.net] *On Behalf Of
 *Ziad Sawalha
 *Sent:* Friday, June 10, 2011 5:24 PM
 *To:* openstack@lists.launchpad.net
 *Subject:* [Openstack] OpenStack Identity: Keystone API Proposal



 Time flies! It's June 10th already. In my last email to this community I
 had proposed today as the day to lock down the Keystone API so we can
 finalize implementation by Diablo-D2 (June 30th).



 We've been working on this feverishly over the past couple of weeks and
 have just pushed out a proposed API here:
 https://github.com/rackspace/keystone/raw/master/keystone/content/identitydevguide.pdf



 For any and all interested, the original source and code

[Openstack] Fwd: Chef Deployment System for Swift - a proposed design - feedback?

2011-05-23 Thread andi abes
It took a while, but finally:
https://github.com/dellcloudedge/openstack-swift

Jay, I've added a swift-proxy-acct and swift-proxy (without account
management).

This is an advance leak of the swift cookbook, soon to be followed with a
leak of a nova cookbook. The full crowbar that was mentioned is on its
way...

To use these recipes (with default settings) you just need to pick your
storage nodes and 1 or more proxies. Then assign the appropriate roles
(swift-storage, swift-proxy or swift-proxy-acct) using the chef UI or a knife
command. Choose one of the nodes and assign it the swift-node-compute role, and
the swift cluster is built (because of the async nature of multi-node
deployments, it might require a few chef-client runs while the ring files
are generated and pushed around).

Have a spin - eager to hear comments.








On Mon, May 2, 2011 at 11:36 AM, andi abes andi.a...@gmail.com wrote:

 Jay,

 hmmm, interesting point about account management in the proxy. Guess you're
 suggesting that you have 2 flavors of a proxy server - one with account
 management enabled and one without?
  Is the main concern here security - you'd have more controls on the
 account management servers? Or is this about something else?

 About ring-compute:
 so there are 2 concerns I was thinking about with rings - a) make sure the
 ring information is consistent across all the nodes in the cluster, and b)
 try not to lose the ring info.

 The main driver to have only 1 ring compute node was a). the main concern
 being guaranteeing consistency of the ring data among all nodes without
 causing too strong coupling to the underlying mechanisms used to build the
 ring.
 For example - if 2 new rings are created independently, then the order in
 which disks are added to the ring should be consistent (assuming that the
 disk/partition allocation algorithm is sensitive to ordering). Which implies
 that the query to chef should always return data in exactly the same order.
 If also would require that the ring building (and mandate that it will
 never be changed) does not use any heuristics that are time or machine
 dependent (I _think_ that right now that is the case, but I would rather not
 depend on it).

 I was thinking that these restrictions can be avoided easily by making sure
 that only 1 node computes the ring. To make sure that b) (don't lose the
 ring) is addressed - the ring is copied around.
 If the ring compute node fails, then any other node can be used to seed a
 new compute ring without any loss.
 Does that make sense?


 Right now I'm using a snapshot deb package built from bzr266. Changing the
 source of the bits is pretty esay... (and installing the deb includes the
 utilities you mentioned)


 Re: load balancers:
 What you're proposing makes perfect sense. Chef is pretty modular. So the
 swift configuration recipe focuses on setting up swift - not the whole
 environment. It would make sense to deploy some load balancer, firewall
 appliance etc in an environment. However, these would be add-ons to the
 basic swift configuration.
 A simple way to achieve this would be to have a recipe that would query the
 chef server for all nodes which have the swift-proxy role, and add them as
 internal addresses for the load balancer of your choice.
 (e.g. :
 http://wiki.opscode.com/display/chef/Search#Search-FindNodeswithaRoleintheExpandedRunList
 )


 a.



 On Sun, May 1, 2011 at 10:14 AM, Jay Payne lett...@gmail.com wrote:

 Andi,

 This looks great.   I do have some thoughts/questions.

 If you are using 1.3, do you have a separate role for the management
 functionality in the proxy?It's not a good idea to have all your
 proxy servers running in management mode (unless you only have one
 proxy).

 Why only 1 ring-compute node?  If that node is lost or unavailable do
 you loose your ring-builder files?

 When I create an environment I always setup utilities like st,
 get-nodes, stats-report, and a simple functional test script on a
 server to help troubleshoot and manage the cluster(s).

 Are you using packages or eggs to deploy the swift code?   If your
 using packages, are you building them yourself or using the ones from
 launchpad?

 If you have more than three proxy servers, do you plan on using load
 balancers?


 Thanks
 --J




 On Sun, May 1, 2011 at 8:37 AM, andi abes andi.a...@gmail.com wrote:
  Judd,
Sorry, today I won't be around. I'd love to hear feedback and
 suggestions
  on what I have so far ( I'm not 100% sure when I can make the fully
  available, but I'm hoping this is very soon). I'm running with swift 1.3
 on
  ubuntu 10.10.
  I'm using  the environment pattern in chef - when nodes search for their
  peers a predicate comparing the node[:swift][:config][:environment] to
 the
  corresponding value on the prospective peer. A default value is
 assigned
  to this by the default recipe's attributes, so if only 1 cluster is
 present,
  all nodes are eligible. For example, when proxy recipe creates

Re: [Openstack] Moving code hosting to GitHub

2011-05-03 Thread andi abes
I'm not sure who wins in git vs. bzr ease of use... guess it depends on how
quickly I get over this error:

$ bzr pull lp:swift/1.3
bzr: ERROR: Cannot lock LockDir(
http://bazaar.launchpad.net/~swift/swift/omega-1.3.0-7/.bzr/branch/lock):
Transport operation not possible: http does not support mkdir()


Any idea?

On Wed, Apr 27, 2011 at 1:37 PM, Soren Hansen so...@linux2go.dk wrote:

 2011/4/27 Thomas Goirand tho...@goirand.fr:
  On 04/27/2011 11:26 PM, Soren Hansen wrote:
  To get working, yes. To be an expert, no.
 
  bzr lp-login
  (bzr init-repo)
  bzr branch
  (bzr add)
  bzr commit
  bzr push
 
  ..are sufficient to just get started.
  No, I don't agree, it's not enough. See below.


  and that's most of the time the issues with using bzr for git users
  tutorials: they tend to think that you're ok with the most basics
  command, and that you wont ever need more. Truth is you do, and
  finding the relevant information for the thing you need takes time (a
  big cost, to use your own words...).  If you find a learning quickly
  advanced bzr commands for git users type of tutorial, I might change
  my mind! :)
 
  If you can explain what sort of stuff you've had a hard time finding, I
  can probably whip up something that will be helpful to others.
  - git reset --hard sha256

 bzr uncommit -r revisionspec

 that leaves the changes in the working directory, though. You can use
 bzr revert to remove the changes from the working directory.

  - git commit -a --amend (to correct the latest commit)

 bzr uncommit ; bzr commit

  - git format-patch sha256

 bzr log -c revisionspec -p

  - or maybe instead: git diff -u -r sha256 -r sha256

 bzr diff -r revisionspec..revisionspec

  - git push --force (you told me, but I forgot... is that bzr push
  --overwrite?)

 bzr push --overwrite, but please don't use it. It's the same for
 git, really. Once you've pushed it somewhere, don't remove stuff from
 it, or rebase it or whatever. If anyone has pulled from it and based
 work on it, it's extremely awkward if they want to sync up with you.

  - git cherry-pick -x

 bzr merge -c revisionspec, but its use is discouraged.

  - git -r branch (does listing branches on the remote side even make
  sense with bzr?)

 No.

  - git tag (to list tags, as bzr tag tagname seems working)

 bzr tags

  There must be more that I can't recall just now, after 5 minutes of deep
  thought.

 I still don't see how any of the above are *required* to start working,
 though.

  Also, one thing I love about git is that I can always do man
  git-command if I want help with a command, and there are more than 100 of
  them. Is this available somehow?

 bzr subcommand -h shows the help for the subcommand.

 bzr help foo is roughly the same, but it provides help for a bunch
 of things other than commands.

 bzr help commands shows you (almost) all the available commands (bzr
 help hidden-commands shows a few extra commands that most people will
 never need)

 bzr help topics shows a bunch of topics that has more extensive
 explanations.


 --
 Soren Hansen| http://linux2go.dk/
 Ubuntu Developer| http://www.ubuntu.com/
 OpenStack Developer | http://www.openstack.org/



Re: [Openstack] Chef Deployment System for Swift - a proposed design - feedback?

2011-05-01 Thread andi abes
Judd,

  Sorry, today I won't be around. I'd love to hear feedback and suggestions
on what I have so far (I'm not 100% sure when I can make this fully
available, but I'm hoping it will be very soon). I'm running with swift 1.3 on
ubuntu 10.10.
I'm using the environment pattern in chef - when nodes search for their
peers, they use a predicate comparing node[:swift][:config][:environment] to
the corresponding value on the prospective peer. A default value is assigned
to this by the default recipe's attributes, so if only one cluster is present,
all nodes are eligible. For example, when the proxy recipe creates the memcached
list of addresses, it searches for all the other nodes with the swift-proxy
role assigned, ANDed with the above predicate.
 Is that what you meant about having a classifier?

To the basic roles you've described (base, storage and proxy) I've added one:
ring-compute. This role should be assigned to only one node, on top of either
the storage or proxy role. This server will recompute the rings whenever the set
of storage servers changes (new disks or machines added, or existing ones
removed). It uses information about the affected machines (ip, disk and zone
assignment) to update the ring. Once the ring is updated, it is rsynced to
all the other nodes in the cluster.
[This is achieved with an LWRP, as I mentioned earlier, which first parses
the current ring info and then compares it to the current set of disks. It
notifies Chef if any changes were made, so that the rsync actions can be
triggered.]
At this point all disks are added to all rings, though it should be easy to
make this conditional on an attribute on the node/disk or on some heuristic.
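
To make that concrete, the idempotence check boils down to something like this
rough Python sketch around the swift-ring-builder CLI. The builder path, the
device list and the string matching below are illustrative assumptions, not the
actual LWRP code:

"""
Sketch of the ring-compute check: parse what swift-ring-builder already
holds, add only the missing devices, and rebalance only if something changed.
"""
import subprocess

BUILDER = '/etc/swift/object.builder'

# Desired devices, e.g. collected from the storage nodes' published attributes.
desired = [
    {'zone': 1, 'ip': '10.0.0.11', 'port': 6000, 'device': 'sdb1', 'weight': 100},
    {'zone': 2, 'ip': '10.0.0.12', 'port': 6000, 'device': 'sdb1', 'weight': 100},
]

def builder_text():
    # swift-ring-builder <builder> prints the devices it currently holds.
    proc = subprocess.Popen(['swift-ring-builder', BUILDER],
                            stdout=subprocess.PIPE, universal_newlines=True)
    return proc.communicate()[0]

def missing(devs, text):
    # Cheap membership test: a device is present if its ip:port/name shows up
    # in the builder listing.  Good enough for an idempotence check.
    return [d for d in devs
            if '%(ip)s:%(port)d/%(device)s' % d not in text]

changed = False
for dev in missing(desired, builder_text()):
    subprocess.check_call(['swift-ring-builder', BUILDER, 'add',
                           'z%(zone)d-%(ip)s:%(port)d/%(device)s' % dev,
                           str(dev['weight'])])
    changed = True

if changed:
    # Rebalancing is the expensive part, so it only happens when the device
    # set actually changed; this is also the point where the LWRP notifies
    # Chef so the ring files get rsynced to the rest of the cluster.
    # (Exit status checking is omitted for brevity.)
    subprocess.call(['swift-ring-builder', BUILDER, 'rebalance'])

Only the rebalance is gated on an actual change, which is what keeps repeated
chef-client runs cheap.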

The 'base role' is also a bit more extensive than your description, but
needs a bit more work. The recipe uses a swift_disk LWRP to ensure the
partition table matches the configuration. The LWRP accepts an array
describing the desired partition table, and allows using a :remaining token
to indicate using what's left (it should only be used on the last partition).
At this point the recipe is pretty hard-coded: it assumes that /dev/sdb
is dedicated to storage and just requires a single hard-coded partition
that uses the whole disk. If that's not present, or different, the LWRP
creates a BSD label, a single partition and an xfs filesystem. Once this is
done, the available disk is 'published' as node attributes. If the current
state of the system matches the desired state, nothing is modified.
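
In rough terms the check-before-change logic looks like this. Again just a
sketch: /dev/sdb, the single xfs partition and an msdos label (standing in for
the BSD label mentioned above) are assumptions for illustration, where the real
LWRP takes the desired table as an argument:

"""
Sketch of the swift_disk-style check: inspect the disk first and only
repartition when the layout differs from what we want.
"""
import subprocess

DISK = '/dev/sdb'

def partitions(disk):
    # 'parted -m' prints machine-readable, colon-separated records; partition
    # lines start with the partition number.  The exact field order can vary
    # slightly between parted versions.
    proc = subprocess.Popen(['parted', '-s', '-m', disk, 'print'],
                            stdout=subprocess.PIPE, universal_newlines=True)
    out = proc.communicate()[0]
    parts = []
    for line in out.splitlines():
        if line and line[0].isdigit():
            fields = line.rstrip(';').split(':')
            parts.append({'number': fields[0], 'fs': fields[4]})
    return parts

current = partitions(DISK)
already_ok = len(current) == 1 and current[0]['fs'] == 'xfs'

if not already_ok:
    # Destructive path, taken only when the disk doesn't already match:
    # label it, create a single partition spanning the disk, make an xfs
    # filesystem, and let the recipe publish the result as node attributes.
    subprocess.check_call(['parted', '-s', DISK, 'mklabel', 'msdos'])
    subprocess.check_call(['parted', '-s', DISK, 'mkpart', 'primary', 'xfs',
                           '0%', '100%'])
    subprocess.check_call(['mkfs.xfs', '-f', DISK + '1'])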

The proxy and storage roles do as you'd expect: install the relevant
packages (including memcached on the proxy, using the Opscode recipe) and
plug the server/cluster-specific info into the relevant config file
templates.

What's totally not yet addressed:
- starting services
- prepping the authentication
- injecting disk configuration.

This cookbook can be used by itself, but it is more powerful when used in
conjunction with the crowbar proposal mechanism. Crowbar basically allows
you to define the qualifications for a system to be assigned a given role,
edit the automatic role assignment and configuration of the cluster, and then
materialize the cluster based on these values by driving Chef. Part of the
planned crowbar capability is performing automatic disk/zone allocation
based on network topology and connectivity. In the meantime, the allocation
is done when the storage node is created.

Currently, a proposal can provide values for the following:
cluster_hash, cluster_admin_pw, replicas, and the user/group to run swift
as. I'm really curious to hear what other configuration parameters might be
useful.

I'm also really curious to hear some general feedback about this.
And sorry about the timing: I'm in the Boston area and it's finally nice
around here... hence other weekend plans.



On Thu, Apr 28, 2011 at 11:25 PM, Judd Maltin j...@newgoliath.com wrote:

 Hey andi,

 I'm psyched to collaborate on this.  I'm a busy guy, but I'm dedicating
 Sunday to it. So if you have time Sunday, that would be the best time to
 catch up via IRC, IM or voice.

 Having a node classifier of some sort is critical.

 -Judd

 Judd Maltin
 +1 917 882 1270
 Happiness is a straight line and a goal. -fn

 On Apr 28, 2011, at 11:59, andi abes andi.a...@gmail.com wrote:

 Judd,

  Ok. Here are some of the thoughts I've had (and have mostly working, but
 am hitting some swift snags...). Maybe we can collaborate on this?

 Since Chef promotes idempotent operations and cookbooks, I put some effort
 into making sure that changes are only made if they're required, particularly
 around destructive or expensive operations.
 The 2 main cases are:
 - partitioning disks, which is obviously destructive
 - building / rebuilding the ring files - rebalancing the ring is relatively
 expensive.

 For both cases I've built an LWRP which reads the current state of affairs
 and decides if and what changes are required. For disks, I'm using
 'parted', which produces machine-friendly output. For ring files I'm using
 the output from ring-builder.

 the LWRP

Re: [Openstack] Chef Deployment System for Swift - a proposed design - feedback?

2011-04-28 Thread andi abes
Judd,

 this is a great idea... actually so great that some folks @Dell and
Opscode, me included, have been working on it.

Have a peek at:
https://github.com/opscode/openstack-cookbooks/tree/master/cookbooks

This effort is also being included in Crowbar (take a peek here:
http://robhirschfeld.com/tag/crowbar/), which adds the steps needed to start
from bare metal (rather than an installed OS) and then uses Chef to get to a
working OpenStack deployment.
(If you're at the design meeting, there are demos scheduled.)

That said - I'm updating the swift cookbook and hope to update github soon.

a.





On Wed, Apr 27, 2011 at 9:55 PM, Jay Payne lett...@gmail.com wrote:

 Judd,

 I'm not that familiar with Chef (I'll do some research) but I have a
 couple of questions and some thoughts:

 1.  Is this for a multi-server environment?
 2.  Are all your proxy nodes going to have allow_account_management =
 true in the configs?   It might be a good idea to have a second proxy
 config for account management only
 3.  Have you looked at using swauth instead of auth?
 4.  Have you thought about an admin or client node that has utilities
 on it like st and stats-report?
 5.  How and where will you do ongoing ring management or changes?
 6.  I would think about including some type of functional test at the
 end of the deployment process to verify everything was created
 properly and that all nodes can communicate.



 --J

 On Wed, Apr 27, 2011 at 6:18 PM, Judd Maltin j...@newgoliath.com wrote:
  Hi Folks,
 
  I've been hacking away at creating an automated deployment system for
 Swift
  using Chef.  I'd like to drop a design idea on you folks (most of which
 I've
  already implemented) and get feedback from this esteemed group.
 
  My end goal is to have a manifest (apologies to Puppet) which will
 define
  an entire swift cluster, deploy it automatically, and allow edits to the
  ingredients to manage the cluster.  In this case, a manifest is a
  combination of a chef databag describing the swift settings, and a
  spiceweasel infrastructure.yaml file describing the OS configuration.
 
  Ingredients:
  - swift cookbook with base, proxy and server recipes.  proxy nodes also
  (provisionally) contain auth services. storage nodes handle object,
  container and account services.
  -- Base recipe handles common package install, OS user creation.  Sets up
  keys.
  -- Proxy recipe handles proxy nodes: network config, package install,
  memcache config, proxy and auth package config, user creation, ring
  management (including builder file backup), user management
  -- Storage recipe handles storage nodes: network config, storage device
  config, package install, ring management.
 
  - chef databag that describes a swift cluster (eg:
 mycluster_databag.json)
  -- proxy config settings
  -- memcached settings
  -- settings for all rings and devices
  -- basic user settings
  -- account management
 
  - chef spiceweasel file that auto-vivifies the infrastructure: (eg:
  mycluster_infra.yaml)
  -- uploads cookbooks
  -- uploads roles
  -- uploads the cluster's databag
  -- kicks off node provisioning by requesting from infrastructure API (ec2
 or
  what have you) the following:
  --- chef roles applied (role[swift:proxy] or role[swift:storage])
  --- server flavor
  --- storage device configs
  --- hostname
  --- proxy and storage network details
 
  By calling this spiceweasel file, the infrastructure can leap into
  existence.
 
  I'm more or less done with all this stuff - and I'd really appreciate
  conceptual feedback before I take out all the nonsense code I have in the
  files and publish.
 
  Many thanks!  Happy spring, northern hemispherians!
  -judd
 
  Judd Maltin
  T: 917-882-1270
  F: 501-694-7809
  A loving heart is never wrong.
 
 
 
 


Re: [Openstack] Chef Deployment System for Swift - a proposed design - feedback?

2011-04-28 Thread andi abes
Judd,

 Ok. Here are some of the thoughts I've had (and have mostly working, but
am hitting some swift snags...). Maybe we can collaborate on this?

Since Chef promotes idempotent operations and cookbooks, I put some effort
into making sure that changes are only made if they're required, particularly
around destructive or expensive operations.
The 2 main cases are:
- partitioning disks, which is obviously destructive
- building / rebuilding the ring files - rebalancing the ring is relatively
expensive.

For both cases I've built an LWRP which reads the current state of affairs
and decides if and what changes are required. For disks, I'm using
'parted', which produces machine-friendly output. For ring files I'm using
the output from ring-builder.

The LWRPs are driven by recipes which inspect the databag - very similar to
your approach. However, they also utilize inventory information about
available resources created by crowbar in Chef during the initial bring-up
of the deployment.

(As a side note - Dell has announced plans to open-source most of crowbar,
but there's legalese involved.)


I'd be more than happy to elaborate and collaborate on this !


a








On Thu, Apr 28, 2011 at 11:35 AM, Judd Maltin j...@newgoliath.com wrote:

 Hi Andi,

 Indeed, the swift recipes hadn't been updated since mid 2010, so I pushed
 forward with my own.

 Thanks!
 -judd


 On Thu, Apr 28, 2011 at 10:03 AM, andi abes andi.a...@gmail.com wrote:

 Judd,

  this is a great idea... actually so great that some folks @Dell and
 Opscode, me included, have been working on it.

 Have a peek at:
 https://github.com/opscode/openstack-cookbooks/tree/master/cookbooks

 This effort is also being included in Crowbar (take a peek here:
 http://robhirschfeld.com/tag/crowbar/), which adds the steps needed to
 start from bare metal (rather than an installed OS) and then uses Chef to
 get to a working OpenStack deployment.
 (If you're at the design meeting, there are demos scheduled.)

 That said - I'm updating the swift cookbook and hope to update github
 soon.

 a.





 On Wed, Apr 27, 2011 at 9:55 PM, Jay Payne lett...@gmail.com wrote:

 Judd,

 I'm not that familiar with Chef (I'll do some research) but I have a
 couple of questions and some thoughts:

 1.  Is this for a multi-server environment?
 2.  Are all your proxy nodes going to have allow_account_management =
 true in the configs?   It might be a good idea to have a second proxy
 config for account management only
 3.  Have you looked at using swauth instead of auth?
 4.  Have you thought about an admin or client node that has utilities
 on it like st and stats-report?
 5.  How and where will you do ongoing ring management or changes?
 6.  I would think about including some type of functional test at the
 end of the deployment process to verify everything was created
 properly and that all nodes can communicate.



 --J

 On Wed, Apr 27, 2011 at 6:18 PM, Judd Maltin j...@newgoliath.com
 wrote:
  Hi Folks,
 
  I've been hacking away at creating an automated deployment system for
 Swift
  using Chef.  I'd like to drop a design idea on you folks (most of which
 I've
  already implemented) and get feedback from this esteemed group.
 
  My end goal is to have a manifest (apologies to Puppet) which will
 define
  an entire swift cluster, deploy it automatically, and allow edits to
 the
  ingredients to manage the cluster.  In this case, a manifest is a
  combination of a chef databag describing the swift settings, and a
  spiceweasel infrastructure.yaml file describing the OS configuration.
 
  Ingredients:
  - swift cookbook with base, proxy and server recipes.  proxy nodes also
  (provisionally) contain auth services. storage nodes handle object,
  container and account services.
  -- Base recipe handles common package install, OS user creation.  Sets
 up
  keys.
  -- Proxy recipe handles proxy nodes: network config, package install,
  memcache config, proxy and auth package config, user creation, ring
  management (including builder file backup), user management
  -- Storage recipe handles storage nodes: network config, storage device
  config, package install, ring management.
 
  - chef databag that describes a swift cluster (eg:
 mycluster_databag.json)
  -- proxy config settings
  -- memcached settings
  -- settings for all rings and devices
  -- basic user settings
  -- account management
 
  - chef spiceweasel file that auto-vivifies the infrastructure: (eg:
  mycluster_infra.yaml)
  -- uploads cookbooks
  -- uploads roles
  -- uploads the cluster's databag
  -- kicks off node provisioning by requesting from infrastructure API
 (ec2 or
  what have you) the following:
  --- chef roles applied (role[swift:proxy] or role[swift:storage])
  --- server flavor
  --- storage device configs
  --- hostname
  --- proxy and storage network details
 
  By calling this spiceweasel file, the infrastructure can leap into
  existence.
 
  I'm more or less done with all this stuff - and I'd really

[Openstack] Swift and logging requests

2011-04-28 Thread andi abes
I was trying to better understand Swift, and to that end I thought it would
be interesting to log the requests coming in and out of the different
servers. Alas, I'm new to Paste (and very rusty on the little python I knew)
- hence I'm having problems achieving this.

I found the following:

Logging configuration in python: http://www.red-dove.com/python_logging.html

Logging WSGI requests:
http://wiki.pylonshq.com/display/pylonscookbook/Request+logging


And based on these I've ended up with a proxy-server.conf that looks something
like the one below. But it doesn't seem to achieve the desired results. Pointers
highly appreciated!





[pipeline:main]
pipeline =  mylogging healthcheck cache swauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true

[filter:mylogging]
use = egg:Paste#translogger
setup_console_handler = False
logger_name = wsgi

[loggers]
keys = root

[handlers]
keys = console

[logger_root]
level = DEBUG
handlers = console

# Handler for printing messages to the console
[handler_console]
class = FileHandler
args = ('/home/openstack/swift.log','a')
level = DEBUG
formatter = generic

[formatter_generic]
format = %(asctime)s %(name)s[%(levelname)s] %(message)s
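
For reference, what I'm essentially after is something like this minimal
request-logging filter sketched in plain WSGI. The module name (wsgilog.py),
the log path and the wiring are just placeholders for illustration, not
anything Swift or Paste ships as-is:

# wsgilog.py -- a minimal request-logging WSGI filter.
import logging
import time

logging.basicConfig(filename='/home/openstack/swift.log', level=logging.DEBUG,
                    format='%(asctime)s %(name)s[%(levelname)s] %(message)s')
log = logging.getLogger('wsgi')


class RequestLogger(object):
    """Log method, path, response status and duration for every request."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        start = time.time()
        status_holder = []

        def capturing_start_response(status, headers, exc_info=None):
            # Remember the status the wrapped app reports, then pass through.
            status_holder.append(status)
            return start_response(status, headers, exc_info)

        try:
            return self.app(environ, capturing_start_response)
        finally:
            status = status_holder[0] if status_holder else '-'
            log.debug('%s %s -> %s (%.3fs)', environ.get('REQUEST_METHOD'),
                      environ.get('PATH_INFO'), status, time.time() - start)


def filter_factory(global_conf, **local_conf):
    # Paste filter factory, so the middleware can sit in the proxy pipeline.
    def factory(app):
        return RequestLogger(app)
    return factory

It could be dropped into the pipeline by pointing the existing
[filter:mylogging] section at it with paste.filter_factory =
wsgilog:filter_factory instead of use = egg:Paste#translogger.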