Re: [Openstack] Improved browser-based access to Swift

2013-07-18 Thread John Dickinson
Those both sound pretty cool.

If you submit a patch (via gerrit) to the 
https://github.com/openstack/swift/blob/master/doc/source/associated_projects.rst
 document, then you can have these projects listed on 
http://docs.openstack.org/developer/swift/associated_projects.html

--John



On Jul 18, 2013, at 5:20 AM, Koert van der Veer ko...@cloudvps.com wrote:

 We've been offering public object store services for roughly half a year now. 
 In the past few months, we received a wide range of responses from our 
 customers. Tech-savvy customers are very happy with this offering, and quite 
 a few are busy migrating their existing storage solutions to our object 
 store. 
 
 However, we find that several customers, especially those coming from shared 
 hosting solutions, struggle to grasp the abstract ideas behind object 
 storage. They struggle with two things: authentication against keystone, 
 which they consider too complicated, and having a user interface that is 
 totally distinct from the actual object store.
 
 To help these users, we've developed two middleware projects: swift_basicauth 
 and better_staticweb. swift_basicauth allows web browsers and general purpose 
 HTTP clients to access object stores, without having to contact keystone. The 
 middleware interprets the authentication, and then fetches a token based on 
 that authentication. With that token, the rest of the request is processed. 
 This enables a wide range of HTTP clients to access the object store. While 
 the primary aim was unlocking the object store for web browsers, we quickly 
 discovered the convenience of using it with curl: curl --user uid:pwd -X PUT 
 "https://static.example.net/the_file" -T the_file.
 
 The second middleware we developed was named better_staticweb (sorry for the 
 pretentious name). It is similar to staticweb; in fact it is mostly 
 compatible. However, web listings are enabled by default, even for 
 authenticated access (useful in combination with basic auth). This lets the 
 user visualize his object store as a less abstract concept. 
 Better_staticweb looks at the HTTP Accept header to determine whether or not 
 to respond with a listing. It still honors the same meta headers, but it 
 assumes different defaults. We've gone through quite a bit of testing to 
 guarantee that it doesn't interfere with regular API usage.
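 
 To illustrate (hypothetical container path, reusing the basic auth from the example 
 above), a browser-style request that sends Accept: text/html would presumably get 
 back a web listing:
 
 curl --user uid:pwd -H "Accept: text/html" "https://static.example.net/photos/"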
 
 Both middleware projects are released under the Apache 2.0 licence, and can 
 be found on our github page:
 https://github.com/CloudVPS/better-staticweb
 https://github.com/CloudVPS/swift-basicauth
 
 --
 
 Koert van der Veer - Senior Developer @ CloudVPS
 CloudVPS - High Availability Cloud Solutions
 w: http://www.cloudvps.com/
 m: ko...@cloudvps.com
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





Re: [Openstack] [Swift] Breakpoint resume with tempurl

2013-07-11 Thread John Dickinson
Swift supports Range requests, so you are able to make a GET request with the 
Range header starting with the first byte you didn't fetch the first time.
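
For example, if the first attempt stopped after 1048576 bytes, resuming could look roughly 
like this with curl (hypothetical host, object, and tempurl query values):

curl -H "Range: bytes=1048576-" \
  "https://swift.example.com/v1/AUTH_test/cont/large.obj?temp_url_sig=<sig>&temp_url_expires=<ts>" >> large.obj

Swift should answer with 206 Partial Content and only the remaining bytes, which you append 
to the partial file.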

--John


On Jul 10, 2013, at 8:10 PM, Jonathan Lu jojokur...@gmail.com wrote:

 Hi, all stackers,
 
 I have implemented downloading large objects with the tempurl middleware and 
 want to support breakpoint resume (resumable downloads). Has anyone got experience 
 implementing breakpoint resume with tempurl in Swift?
 
 Thanks,
 Jonathan Lu
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





Re: [Openstack] Debug level / Swift

2013-07-03 Thread John Dickinson
You can set

log_level = DEBUG

in the [DEFAULT] section of your config files.

https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L19
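
For example, in /etc/swift/proxy-server.conf (the account/container/object server configs 
work the same way):

[DEFAULT]
log_level = DEBUG

Then restart (or reload) the affected services so they pick up the change.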

--John



On Jul 3, 2013, at 7:37 AM, CHABANI Mohamed El Hadi 
chabani.mohamed.h...@gmail.com wrote:

 Hi guys,
 
 I want to activate the DEBUG level on Swift because of a 503 internal error, but I 
 don't know how. Any help please?
 
 Thanks
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





Re: [Openstack] Availability of metrics from SWIFT - Object Storage

2013-06-27 Thread John Dickinson
Swift itself also reports well over 130 unique metrics via StatsD, including the 
bandwidth for each client request (like ceilometer does). You can monitor these 
with any standard StatsD listener, and monitoring this data doesn't require 
integration with any other OpenStack project (eg keystone or ceilometer).

The description of the metrics gathered is at 
http://docs.openstack.org/developer/swift/admin_guide.html#reporting-metrics-to-statsd
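
To emit the metrics, each server's config gets StatsD settings in its [DEFAULT] section, 
roughly like this (the host, port, and prefix here are just example values):

[DEFAULT]
log_statsd_host = 10.0.0.10
log_statsd_port = 8125
log_statsd_default_sample_rate = 1
log_statsd_metric_prefix = proxy01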

--John




On Jun 27, 2013, at 7:55 PM, Fei Long Wang flw...@cn.ibm.com wrote:

 Hi Narayanan,
 
 You can see the Swift metrics at this link: 
 http://docs.openstack.org/developer/ceilometer/measurements.html#object-storage-swift.
 
 Thanks & Best regards,
 Fei Long Wang (王飞龙)
 -
 Scrum Master of Nitrogen (SME team)
 Cloud Solutions and OpenStack Development
 Tel: 8610-82450513 | T/L: 905-0513 
 Email: flw...@cn.ibm.com
 China Systems & Technology Laboratory in Beijing
 -
 
 
 
 From: Narayanan, Krishnaprasad naray...@uni-mainz.de
 To:   openstack@lists.launchpad.net openstack@lists.launchpad.net, 
 Date: 06/26/2013 05:39 PM
 Subject:  [Openstack] Availability of metrics from SWIFT - Object Storage
 Sent by:  Openstack 
 openstack-bounces+flwang=cn.ibm@lists.launchpad.net
 
 
 
 Hallo All,
  
 Based on the documentation from Ceilometer, I see the metrics from all the 
 components except SWIFT. Can I get to know whether Ceilometer offers any 
 metrics from the SWIFT component?
  
 Thanks
 Krishnaprasad
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





Re: [Openstack] [OpenStack] SWIFT Object Store spanning multiple data centers

2013-06-16 Thread John Dickinson
The global clusters feature in Swift is very new and just now being finished 
up. We are finishing the last part of it and will have it completed in our 
next release (tentatively scheduled for June 27). The last part is write 
affinity (ie don't write synchronously to a WAN region).

The regions concept is exactly as you have described it: use it for separate 
DCs. With 3 replicas, you'll have 2 replicas in one DC and one in the other. In 
your case, because of the way you have configured zones, it looks like you'll 
always have 2 replicas in A and one in B. Note that this is not a requirement 
of the system: you should set up your zones to match your failure domains.
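
Once that release lands, the proxy affinity settings should look roughly like this (a 
sketch; check the released proxy-server.conf-sample for the final option names):

[app:proxy-server]
use = egg:swift#proxy
# prefer local reads: region 1 first, then region 2
read_affinity = r1=100, r2=200
# write synchronously only to region 1; the remote replica arrives via replication
write_affinity = r1
write_affinity_node_count = 2 * replicas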

Are you using the separate replication network feature (it's not required, but 
it may allow you some more control over the cross-DC replication)?

What is the latency between your DCs?

--John



On Jun 16, 2013, at 9:58 PM, Balamurugan V G balamuruga...@gmail.com wrote:

 Hi,
 
 I am exploring setting up a SWIFT Object Store across two data
 centers. Let's say I have DC-A and DC-B. I have set up a swift-proxy and
 two swift-storage nodes in DC-A. And I have set up one storage node in
 DC-B. This is just an experimental setup, and if this works well, we will
 have more storage nodes and proxy nodes in each DC. I have added the
 storage nodes in DC-A in Zone1 and Zone2. And the storage node in DC-B is
 in Zone3. The replication count has been set to 3. My goal is to set up
 a multi-site OpenStack and I am exploring using SWIFT to store the
 images such that the images can be shared across the DCs.
 
 Here are my questions:
 
   1. There seems to be a concept of regions. How do I use that with
 SWIFT in this case? I can't find any good documentation on it.
   2. In my current setup explained above, I can see that the
 partitions are getting copied (and synced) fine between the nodes in
 DC-A, as confirmed by the used size returned by 'df -h /srv/node/sdb1'
 (I know it's crude but it's good enough). I see that the node in DC-B
 behaves differently. I see that the partitions are copied to this node
 and then removed again continuously. It never settles. That is, for
 example, if I have 5GB of content stored in the system, the DC-A nodes
 show that 5GB is used. But the DC-B node shows that it increases to
 5GB, then drops again to say 2GB, then increases to 5GB again,
 and then drops again, and so forth. The rsyncd logs show a few errors as
 shown below:
 
 2013/06/17 04:38:54 [6965] receiving file list
 2013/06/17 04:53:52 [6963] rsync: connection unexpectedly closed
 (731405377 bytes received so far) [receiver]
 2013/06/17 04:53:52 [6963] rsync error: error in rsync protocol data
 stream (code 12) at io.c(605) [receiver=3.0.9]
 2013/06/17 04:53:52 [6963] rsync: connection unexpectedly closed (87
 bytes received so far) [generator]
 2013/06/17 04:53:52 [6963] rsync error: error in rsync protocol data
 stream (code 12) at io.c(605) [generator=3.0.9]
 2013/06/17 04:53:54 [6965] rsync: connection unexpectedly closed
 (716563202 bytes received so far) [receiver]
 2013/06/17 04:53:54 [6965] rsync error: error in rsync protocol data
 stream (code 12) at io.c(605) [receiver=3.0.9]
 2013/06/17 04:53:54 [6965] rsync: connection unexpectedly closed (87
 bytes received so far) [generator]
 2013/06/17 04:53:54 [6965] rsync error: error in rsync protocol data
 stream (code 12) at io.c(605) [generator=3.0.9]
 2013/06/16 21:54:24 [6996] name lookup failed for 10.5.64.47:
 Temporary failure in name resolution
 2013/06/16 21:54:24 [6996] connect from UNKNOWN (10.5.64.47)
 2013/06/16 21:54:24 [6997] name lookup failed for 10.5.64.48:
 Temporary failure in name resolution
 2013/06/16 21:54:24 [6997] connect from UNKNOWN (10.5.64.48)
 2013/06/17 04:54:25 [6996] rsync to object/sdb1/objects/189659 from
 UNKNOWN (10.5.64.47)
 2013/06/17 04:54:25 [6996] receiving file list
 2013/06/17 04:54:25 [6997] rsync to object/sdb1/objects/189659 from
 UNKNOWN (10.5.64.48)
 2013/06/17 04:54:25 [6997] receiving file list
 
 I even tried to increase the timeout value of rsync from the default
 30 sec to 600 sec, but I still see the same issue. What could be wrong
 here?
 
 
 Any help will be greatly appreciated. Also, any pointers to good
 documentation on how to set up a multi-site OpenStack deployment will
 be very helpful. I see that there is good documentation for getting up and
 running with a single- or 3-node OpenStack setup, but there is not much on
 how to deploy a large-scale multi-site OpenStack deployment :-(
 
 Regards,
 Balu
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] Load Balancers for Swift with nginx and pound

2013-06-14 Thread John Dickinson
Also, just as general info, don't use nginx in front of Swift. nginx buffers 
request bodies, and that can become very problematic when uploading content 
into the Swift cluster (especially since the body could be up to 5GB--not too 
many of those requests and you'll overload your nginx box).

Second, the inconsistency of the listings shouldn't be affected by your load 
balancer or proxy server. It's probably the result of Swift's eventual 
consistency model. If Swift was not able to update the account listings on 
container create, it will still return success to the client (for the container 
create), but it will queue the listing update to be performed later. There is 
an updater process that runs to keep listings in sync and handle these 
situations. This is most obvious (ie you're most likely to see it) if you have 
had a failure in your cluster (eg network down or hard drive fail) and are 
trying to get the info after the failure has been restored but before the 
updater has done its work. Since there are 3 copies of your account in the 
cluster, each with a listing of the containers, one may be out of sync. Perhaps 
you created a container while one of those drives was unavailable. The 
background replication and updater processes will take care of getting your 
listings back into a consistent state. Make sure they are running, and check the logs to see 
if there are any problems.

--John




On Jun 14, 2013, at 4:27 AM, Christian Schwede i...@cschwede.de wrote:

 Hi,
 
 Because of the nginx problem, I changed to pound, but
 ...
 I cannot execute POST or upload
 
 regarding pound: you have to enable the PUT method to upload objects. Simply 
 add "xHTTP 2" in the section below:
 User root
 Group root
 ListenHTTP
   Address 172.18.56.194
   Port 80
   xHTTP 2
 End
 
 This will enable PUT and DELETE methods (see http://linux.die.net/man/8/pound 
 for further details).
 
 But POST should work out of the box?
 
 Cheers,
 
 Christian
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




Re: [Openstack] Folsom Keystone and Grizzly Swift

2013-06-11 Thread John Dickinson
Yes it will work, but I would not recommend it for production systems. There 
were some changes in Keystone during the Grizzly release that are pretty much 
required for Keystone to be usable with Swift (specifically around Keystone's 
ability to support token caching).

My recommendation would be to upgrade Keystone to its grizzly release or 
consider an alternative auth system for Swift.

--John

On Jun 11, 2013, at 1:23 AM, raydzu rary...@interia.pl wrote:

 Good morning,
 
 I have a really simple and stupid question, but I just want to confirm my 
 assumptions.
 Do you expect any possible problems if I use the Folsom release of Keystone 
 with the Grizzly release of Swift?
 I believe that there shouldn't be any, as Swift in both versions uses the 
 same protocol, but I will feel much safer if you can confirm that.
 
 
 Thank you
 Radek
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




Re: [Openstack] Folsom Keystone and Grizzly Swift

2013-06-11 Thread John Dickinson
Ah, yes. Thanks Chmouel. My comments were specifically for the code in 
python-keystoneclient, not the keystone server. Sorry for the lack of clarity.

--John




On Jun 11, 2013, at 10:37 AM, Chmouel Boudjnah chmo...@enovance.com wrote:

 The latest python-keystoneclient should be release independent and should
 be compatible with Folsom, and it has the fix you need for caching (the
 cache=swift.cache parameter to set in auth_token). You will need a
 recent version of Swift, though, to get everything working well.
 
 On Tue, Jun 11, 2013 at 4:39 PM, John Dickinson m...@not.mn wrote:
 Yes it will work, but I would not recommend it for production systems. 
 There were some changes in Keystone during the Grizzly release that are 
 pretty much required for Keystone to be usable with Swift (specifically 
 around Keystone's ability to support token caching).
 
 My recommendation would be to upgrade Keystone to its grizzly release or 
 consider an alternative auth system for Swift.
 
 --John
 
 On Jun 11, 2013, at 1:23 AM, raydzu rary...@interia.pl wrote:
 
 Good morning,
 
 I have a really simple and stupid question, but I just want to confirm my 
 assumptions.
 Do you expect any possible problems if I use the Folsom release of 
 Keystone with the Grizzly release of Swift?
 I believe that there shouldn't be any, as Swift in both versions uses the 
 same protocol, but I will feel much safer if you can confirm that.
 
 
 Thank you
 Radek
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




Re: [Openstack] [Swift] Upgrade Swift from ver1.4 to ver1.8

2013-06-10 Thread John Dickinson
This is quite possible to do, and in fact one thing that we keep in mind with 
every Swift release: deployers must be able to upgrade without any lapse in 
availability.

The basic steps are:
- stop background processes
- upgrade packages (system and/or swift)
- restart processes
- start background tasks

The above steps could be done for storage servers and then for proxy servers 
(assuming you have those as separate boxes). Overall, you upgrade one server or 
groups of servers at a time and then move on to the next.
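
On a single storage node that might look roughly like this (a sketch assuming Ubuntu-style 
packaging; adjust the package step for your environment):

swift-init rest stop        # stop replicators, auditors, updaters, etc.
apt-get install -y swift swift-object swift-container swift-account
swift-init object-server container-server account-server reload   # graceful restart
swift-init rest start       # resume the background daemons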

Since you are upgrading from a version several releases old, I'd recommend 
paying close attention to Swift's changelog: 
https://github.com/openstack/swift/blob/master/CHANGELOG. There are several 
things described in there that you should pay attention to (eg changing config 
defaults and some format changes). Everything has an upgrade path, so, although 
you may elect to do several upgrades in a row (rather than jumping straight to 
1.8.0), you will be able to complete the upgrade with no loss of availability.

--John



On Jun 9, 2013, at 9:37 PM, Jonathan Lu jojokur...@gmail.com wrote:

 Hi, all,
 
 My group downloaded and installed OpenStack Swift version 1.4 and 
 wants to upgrade it to 1.8. However, we don't want to abandon all the data we 
 have collected during that period. Can anyone tell me how to upgrade 
 Swift without dropping all the data?
 
 Thanks!
 Jonathan Lu
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




Re: [Openstack] curl with swift

2013-06-10 Thread John Dickinson
Sure, easy to do (curl is what I normally use anyway).

To auth (for auth v1; v2 and keystone will be different):

curl -i -H "X-Auth-User: foo" -H "X-Auth-Key: bar" http://swift/auth/v1.0

The 2 headers you need to look for are X-Storage-URL and X-Auth-Token.

After that, use the X-Auth-Token to talk to the X-Storage-URL, and you should 
be good to go:

curl -i -H "X-Auth-Token: baz" http://swift/v1/AUTH_foo


(As a footnote, it looks like your sample curl request is doing a container 
listing and not an object fetch.)

--John




On Jun 10, 2013, at 11:28 AM, Remo Mattei r...@mattei.org wrote:

 Hello everyone, 
 I am looking to do some testing on retrieving a data object from a 
 Swift server on an instance that does not have anything but curl. Any 
 suggestions?
 
 I am using this command now, but I am not authorized to get this object. 
 
 Thanks, 
 Remo 
 
 curl -X GET \
 -H "X-Auth-Token: 813c6eef9f474e7f860ef42dcaeeb53b" \
 http://192.168.235.113:8080/v1/AUTH_9ffeae726f33436b9e0796d31f85f730/remo.pen > Remo.pem
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




Re: [Openstack] [Swift] Swift load balancing

2013-06-07 Thread John Dickinson
The given options (DNS, SW load balancer, and HW load balancer) are all things 
I've seen people use in production Swift clusters.

As mentioned in another reply, DNSRR isn't really load balancing, but it can be 
used if nothing else is available.

One thing to consider when choosing a load balancer is if you want it to also 
terminate your SSL connections. You shouldn't ever terminate SSL within the 
Swift proxy itself, so you either need something local (like stunnel or stud) 
or you can combine the functionality with something like Pound or HAProxy. Both 
Pound and HAProxy can do load balancing and SSL termination, but for SSL they 
both use OpenSSL, so you won't see a big difference in SSL performance. Another 
free option (for smaller clusters) is using LVS.

You could also use commercial load balancers with varying degrees of success.

Swift's healthcheck middleware can be told to return an error (or not) based on 
the presence of a file on disk 
(https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L185),
so when configuring your load balancer, you can manage the 
interaction with the proxy servers more simply by taking advantage of this feature.
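
A minimal sketch of the relevant bit of proxy config (the path is just an example):

[filter:healthcheck]
use = egg:swift#healthcheck
# when this file exists, GET /healthcheck returns "503 DISABLED BY FILE" instead of 200
disable_path = /etc/swift/proxy-server.disabled

Touching or removing that file is then all it takes to drain a proxy out of (or return it 
to) the load balancer pool.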

I would strongly recommend against using nginx as a front-end to a Swift 
cluster. nginx spools request bodies locally, so it is not a good option in 
front of a storage system when the request bodies could be rather large.

--John





On Jun 7, 2013, at 1:24 AM, Heiko Krämer i...@honeybutcher.de wrote:

 Hey Kotwani,
 
 we are using a SW load balancer, but at L3 (keepalived).
 DNS round robin is not a load balancer :) if one node is down, some 
 connections will still arrive at the down host; that's not the right way, I think.
 
 An HTTP proxy is an option, but you make a bottleneck of your WAN connection 
 because all traffic will pass through your proxy server.
 
 You can use Keepalived as a Layer 3 load balancer, so all your incoming 
 requests will be distributed to the swift proxy servers and delivered by them. 
 You don't have a bottleneck because you are using the WAN connection of each 
 swift proxy server, and you get automated failover of keepalived with 
 another hot-standby LB (keepalived uses pacemaker + 
 corosync out of the box for LB failover).
 
 
 Greetings
 Heiko
 
 On 07.06.2013 06:40, Chu Duc Minh wrote:
 If you choose to use DNS round robin, you can set a small TTL and use a 
 script/tool to continuously check the proxy nodes and reconfigure the DNS record if one 
 proxy node goes down, and vice versa.
 
 If you choose to use SW load-balancer, I suggest HAProxy for performance 
 (many high-traffic websites use it) and NGinx for features (if you really 
 need features provided by Nginx). 
 IMHO, I like Nginx more than Haproxy. It's stable, modern, high performance, 
 and full-featured.
 
 
 On Fri, Jun 7, 2013 at 6:28 AM, Kotwani, Mukul mukul.g.kotw...@hp.com 
 wrote:
 Hello folks,
 
 I wanted to check and see what others are using in the case of a Swift 
 installation with multiple proxy servers for load balancing/distribution. 
 Based on my reading, the approaches used are DNS round robin, or SW load 
 balancers such as Pound, or HW load balancers. I am really interested in 
 finding out what others have been using in their installations. Also, if 
 there are issues that you have seen related to the approach you are using, 
 and any other information you think would help would be greatly appreciated.
 
  
 As I understand it, DNS round robin does not check the state of the service 
 behind it, so if a service goes down, DNS will still send the record and the 
 record requires manual removal(?). Also, I am not sure how well it scales or 
 if there are any other issues. About Pound, I am not sure what kind of 
 resources it expects and what kind of scalability it has, and yet again, 
 what other issues have been seen.
 
  
 Real world examples and problems seen by you guys would definitely help in 
 understanding the options better.
 
  
 Thanks!
 
 Mukul
 
  
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 
 
 ___
 Mailing list: 
 https://launchpad.net/~openstack
 
 Post to : 
 openstack@lists.launchpad.net
 
 Unsubscribe : 
 https://launchpad.net/~openstack
 
 More help   : 
 https://help.launchpad.net/ListHelp
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




Re: [Openstack] [Swift] Storage Server Redirection

2013-06-03 Thread John Dickinson

On May 31, 2013, at 4:53 PM, Luse, Paul E paul.e.l...@intel.com wrote:

 I’m looking at tackling this item:
  
 https://blueprints.launchpad.net/swift/+spec/support-storage-server-redirects
  
 and wanted to get some feedback on the following observations/thoughts:
  
 1) This is a capability that would be checked in independently of other 
 blueprints that might use it (2 are mentioned in the link above), and unit 
 test code would be the only way to initially exercise it; it essentially 
 enables other activities at this point

correct, IMO, but see below.

  
 2) The basic idea is that an object server (via middleware or otherwise) will 
 be given the ability to respond to a request to indicate ‘not me but I know 
 who should handle this’.  I’m thinking this makes more sense as a 5xx 
 response with additional information (partition, nodes) about the route 
 included in the response body (as opposed to a 3xx code)  

There are already some specific checks around a 5xx response from object 
servers that relate to failure handling. I'd assume a 3xx response would be 
used for redirects, with any additional info given in headers. Can you share 
more about why a 5xx response is more appropriate?


  
 3) The proxy server will be modified to process the response accordingly but 
 using the partition, nodes info from the response as opposed to 
 object_ring.get_nodes() to determine which nodes to use

yes

  
 4) Protection will be required to avoid endless redirection loops

Yes, but since this is handled by a single proxy worker on a per-request basis, 
protection should be fairly simple.

  
 5) This applies only to GET operations

Supporting this on writes can allow for less of a replication storm later if 
the object server knows the correct location before proxy is updated (eg during 
a cluster upgrade or a ring deploy).

  
 Appreciate any thoughts/feedback.,  In addition to the two usages of this 
 capability referenced in the blueprint I think there’s applicable to another 
 Tiering blueprint which interests me as well.

The fun part will be figuring out what to do when the cluster is not in an 
internally consistent state (eg cluster upgrades or ring deployment times). You 
want to redirect if the object server knows best, but you don't if the proxy 
server knows best. It may make sense (from a usefulness perspective) to 
implement https://blueprints.launchpad.net/swift/+spec/send-ring-version first.

  
 Thanks
 Paul
  
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




Re: [Openstack] Understanding Failure Recovery in Swift

2013-05-09 Thread John Dickinson
Replication will be logged, and you can look for an "Object replication complete" 
log message.
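
For example (the log location is an assumption; it depends on your syslog setup), on the 
node you purged:

grep "Object replication complete" /var/log/syslog
swift-recon --replication    # cluster-wide view of when replication last completed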

For a more detailed look at how swift handles failures, you can watch this 
video: 
http://mirror.linux.org.au/linux.conf.au/2013/mp4/Playing_with_OpenStack_Swift.mp4

(I need to post my blog post that has all that info...)

--John



On May 9, 2013, at 1:55 PM, mark.abraham...@hgst.com wrote:

 
 Hi-
 
 I am trying to get a better understanding of how Swift recovers from failure. 
   I've read the documentation, but the process is still somewhat unclear.
 
 I have a simple 5-node Swift cluster deployed with a replica count of 3 (3 
 zones and 2 handoff zones for a total of 5).I load the cluster with some 
 test data, and then on one of the nodes I shutdown swift and rsync.   I then 
 purge the account, container, and object data from the node I shutdown.   
 When I start swift back up again on the purged node, how can I detect that 
 replication has completed and the missing data has been restored to the node 
 that I purged?
 
 Thanks in advance for your 
 assistance.
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




Re: [Openstack] Configuring tempurl in Openstack Swift

2013-04-29 Thread John Dickinson
There are no plans to change the current implementation of tempurl.

In order to generate a tempurl that can be validated without relying on a 
network call to a centralized authority service, the owner and the service must 
have a shared secret. In the tempurl feature, this shared secret is set as a 
metadata key on the account. This allows the content owner to generate as many 
keys as necessary (without having to call an external service) and allows Swift 
to validate each one (without calling an external service).

Note that an auth token is only needed in order to save the shared secret in 
the account metadata. Once the shared secret is set, neither the owner of the 
content nor the user of the tempurl (ie someone otherwise without access to the 
swift cluster) need a valid auth token to generate or use a tempurl.
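
A minimal sketch of generating a tempurl in Python 2 (the key, path, and host here are 
hypothetical):

import hmac
from hashlib import sha1
from time import time

key = 'mysecretkey'                          # the shared secret set on the account
method = 'GET'
expires = int(time() + 3600)                 # link valid for one hour
path = '/v1/AUTH_account/container/object'

sig = hmac.new(key, '%s\n%s\n%s' % (method, expires, path), sha1).hexdigest()
url = 'https://swift.example.com%s?temp_url_sig=%s&temp_url_expires=%d' % (path, sig, expires)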

The only official docs for tempurl are at 
http://docs.openstack.org/developer/swift/misc.html#module-swift.common.middleware.tempurl
 (We should probably have some docs written for 
http://docs.openstack.org/api/openstack-object-storage/1.0/content/ on this 
feature.)

--John



On Apr 29, 2013, at 5:26 PM, Shrinand Javadekar shrin...@maginatics.com wrote:

 Hi,
 
 Configuring tempurl in openstack Swift requires two steps:
 
 1) Adding tempurl to the pipeline in proxy-server.conf
 2) Setting the tempurl key on the account.
 
 Step #1 above is fairly straight forward. However, step #2 is slightly 
 complicated; at least as per [1]. It requires first authenticating the user 
 and getting an auth token. Then make an http request with the 
 X-Meta-Tempurl-key header to set the tempurl key.
 
 Are there easier ways to do this? If not, is it on the roadmap to do this 
 easily?
 
 Thanks in advance.
 -Shri
 
 [1] http://failverse.com/using-temporary-urls-on-rackspace-cloud-files/
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




Re: [Openstack] [DevStack] Does a Swift/Keystone only install require AMQP?

2013-02-19 Thread John Dickinson
nothing in swift requires rabbit, qpid, or zeromq

--john


On Feb 19, 2013, at 4:53 PM, Everett Toews everett.to...@rackspace.com wrote:

 Hi All,
 
 When I was doing a Swift/Keystone only install with DevStack I used the 
 following in my localrc
 
   disable_all_services
   enable_service key swift mysql
 
 Then stack.sh paused with the error message
 
   ERROR: at least one rpc backend must be enabled,
 set one of 'rabbit', 'qpid', 'zeromq'
 via ENABLED_SERVICES.
 
 So I blindly added rabbit to enable_service and the error went away. Then I 
 did the PR https://review.openstack.org/#/c/22333/ to change the DevStack 
 README.md file.
 
 But then I got the question, is rabbit really necessary?
 
 Does anyone know if a Swift/Keystone only install requires AMQP?
 
 Thanks,
 Everett
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




Re: [Openstack] [SWIFT] code reading

2013-02-13 Thread John Dickinson
This part of the code gets the account info, but if the account isn't found and 
account autocreation is set (eg for tempauth and keystone), then we need to 
create the account first before returning the info.

However, since there are multiple replicas of the account in the cluster, we 
can't simply auto create after getting a 404. Swift is eventually consistent 
and the 404 may be stale data. We don't want to overwrite the account with a 
create if it already exists. So this loop is making sure that the account 
doesn't exist on any of the replica-count nodes, and if there is a combination of 
404s and errors returned, we don't send another account create.

That's a little complicated, so here's some bullets:

- Got any successful response (2xx) from a server: return the account info
- Got 404 from all the servers: auto create the account and return the info (if 
the create was successful, else error)
- Got any errors other than 404 or 507 but no success: return an error (ie return 
None, None, None)
- Got 507 from a server: skip that server for this and subsequent requests (ie 
error_limit) and carry on

--John


On Feb 12, 2013, at 4:11 AM, Kun Huang academicgar...@gmail.com wrote:

 Hi, Chmouel and Darrell
 
 I know you're working on /swift/proxy/controllers/base.py for this bug: 
 https://review.openstack.org/#/c/21563/
 I don't understand 
 https://github.com/openstack/swift/blob/master/swift/proxy/controllers/base.py#L371
  to #372. Could you give me a simple explanation?
 
 
 
     elif resp.status == HTTP_NOT_FOUND:
         if result_code == 0:
             result_code = HTTP_NOT_FOUND
         elif result_code != HTTP_NOT_FOUND:
             result_code = -1
 
 
 
 In this part, given a 404 response, we reset the variable result_code: 
 if result_code is 0, set it to 404; if it is already 404, do nothing; otherwise set it to -1.
 
 What is the -1 case for?
 Furthermore, I think resp.status alone is a sufficient condition to set 
 result_code, so why does this judgement need the result_code value from the previous 
 iteration (of the while attempts_left loop)?
 
 
 Gareth
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




Re: [Openstack] [swift]

2013-02-11 Thread John Dickinson
This is the result of `python ./setup.py develop`. That command sets up the 
scripts to reference your local source so that active dev work is immediately 
reflected.

`python ./setup.py install` would actually copy the scripts to the right 
locations.
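
Roughly, the two commands behave like this (paths taken from the script you quoted):

cd /home/swift/bin/python-swiftclient
python setup.py develop    # writes the EASY-INSTALL-DEV-SCRIPT wrapper that execs your checkout
python setup.py install    # copies the real bin/swift script into /usr/local/bin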

--John




On Feb 11, 2013, at 9:02 AM, Kun Huang academicgar...@gmail.com wrote:

 Hi, swift developers
 
 I found the script /usr/local/bin/swift is:
 
 #!/usr/bin/python  -u
 # EASY-INSTALL-DEV-SCRIPT: 'python-swiftclient==1.3.0.4.gb5f222b','swift'
 __requires__ = 'python-swiftclient==1.3.0.4.gb5f222b'
 from pkg_resources import require; 
 require('python-swiftclient==1.3.0.4.gb5f222b')
 del require
 __file__ = '/home/swift/bin/python-swiftclient/bin/swift'
 execfile(__file__)
 
 It seems generated by easy_install, but I didn't find which step did this. 
 (I used SAIO to build the swift environment.)
 
 Could someone do me a favor? Which command in SAIO generated the swift scripts in 
 /usr/local/?
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





[Openstack] [Swift] Swift 1.7.6 and python-swiftclient 1.3.0 released

2013-01-24 Thread John Dickinson
I'm happy to announce that Swift 1.7.6 has been released. This release
is the work of twenty five contributors. As always, you can upgrade to
this release with no downtime for your clients.

You can find download links at https://launchpad.net/swift and the
Launchpad release tracking at
https://launchpad.net/swift/+milestone/1.7.6.

The full changelog for this release is at
https://github.com/openstack/swift/blob/master/CHANGELOG.

I'd like to highlight a few of the more prominent changes in this release.

We've added a feature to the healthcheck middleware that allows you to
cause an error response to be returned based on the existence of a
file on disk. This allows you to turn off a proxy server and
automatically remove it from your load balancer pool simply by
touching a file. Since this is a file that can persist through a
reboot, this new feature can also protect you from using a proxy
server if an upgrade went awry. This new feature allows you to
simplify your operations and more simply control how your cluster
behaves.

We have added some new options to swift-recon. Using swift-recon, you
can now find the top full disks in your cluster and get more detailed
information about replication times. Also, Swift's dispersion report
tool has some new options to control what output is given. These
changes give you more insight into what is happening in your cluster
by filtering the results to container and object reports.

This release also includes many bug fixes, including some related to
object names and one to prevent the auditors from consuming too many
unix sockets.

I'd encourage everyone to upgrade to take advantage of the new
functionality and bug fixes in this release. If you have any
questions, ask on the OpenStack mailing list or hop in #openstack on
IRC (freenode).

I have also tagged python-swiftclient at 1.3.0 for release.

Thanks again to all of the people who have contributed to this
release. Swift's high quality and success are a direct result of your
hard work and dedication.

--John






[Openstack] [Swift] Candidate Swift 1.7.6 cut

2013-01-18 Thread John Dickinson
We've cut the milestone-proposed branch for Swift 1.7.6. It's scheduled to be 
released next Thursday January 24. Please take a look and let us know of any 
issues you find ASAP.

Candidate tarball: 
http://tarballs.openstack.org/swift/swift-milestone-proposed.tar.gz

The full proposed changelog for this release is available at 
https://github.com/openstack/swift/blob/master/CHANGELOG.

--John







Re: [Openstack] [SWIFT] Change the partition power to recreate the RING

2013-01-14 Thread John Dickinson
Yes, I think it would be a great topic for the summit.

--John


On Jan 14, 2013, at 7:54 AM, Tong Li liton...@us.ibm.com wrote:

 John and swifters,
 I see this as a big problem, and I think that the scenario described 
 by Alejandro is a very common one. I am wondering if it is possible to 
 have two rings (one with the newer, extended partition power, one with the existing 
 ring power). When significant changes are made to the hardware or partitioning, a new 
 ring gets started with a command, and new data into Swift uses the new 
 ring, while existing data on the existing ring stays available and slowly (without 
 impacting normal use) but automatically moves to the new ring; once the 
 existing ring shrinks to size zero, that ring can be removed. The 
 idea is to sort of have two virtual Swift systems working side by side, with the 
 migration from the existing ring to the new ring done without interrupting the 
 service. Can we put this topic/feature up for discussion at the next 
 summit and consider it a high-priority feature to work on for coming 
 releases?
 
 Thanks.
 
 Tong Li
 Emerging Technologies  Standards
 Building 501/B205
 liton...@us.ibm.com
 
 
 From: John Dickinson m...@not.mn
 To:   Alejandro Comisario alejandro.comisa...@mercadolibre.com, 
 Cc:   openstack-operat...@lists.openstack.org 
 openstack-operat...@lists.openstack.org, openstack 
 openstack@lists.launchpad.net
 Date: 01/11/2013 04:28 PM
 Subject:  Re: [Openstack] [SWIFT] Change the partition power to recreate 
 the  RING
 Sent by:  openstack-bounces+litong01=us.ibm@lists.launchpad.net
 
 
 
 In effect, this would be a complete replacement of your rings, and that is 
 essentially a whole new cluster. All of the existing data would need to be 
 rehashed into the new ring before it is available.
 
 There is no process that rehashes the data to ensure that it is still in the 
 correct partition. Replication only ensures that the partitions are on the 
 right drives.
 
 To change the number of partitions, you will need to GET all of the data from 
 the old ring and PUT it to the new ring. A more complicated (but perhaps more 
 efficient) solution may include something like walking each drive and 
 rehashing+moving the data to the right partition and then letting replication 
 settle it down.
 
 Either way, 100% of your existing data will need to at least be rehashed (and 
 probably moved). Your CPU (hashing), disks (read+write), RAM (directory 
 walking), and network (replication) may all be limiting factors in how long 
 it will take to do this. Your per-disk free space may also determine what 
 method you choose.
 
 I would not expect any data loss while doing this, but you will probably have 
 availability issues, depending on the data access patterns.
 
 I'd like to eventually see something in swift that allows for changing the 
 partition power in existing rings, but that will be hard/tricky/non-trivial.
 
 Good luck.
 
 --John
 
 
 On Jan 11, 2013, at 1:17 PM, Alejandro Comisario 
 alejandro.comisa...@mercadolibre.com wrote:
 
  Hi guys.
  We created a swift cluster several months ago; the thing is that right 
  now we can't add hardware, and we configured lots of partitions thinking 
  about the final picture of the cluster.
  
  Today each data node has 2500+ partitions per device, and even after tuning 
  the background processes (replicator, auditor & updater) we really want 
  to try to lower the partition power.
  
  Since it's not possible to do that without recreating the ring, we do have 
  the luxury of recreating it with a much lower partition power, and rebalancing 
  / deploying the new ring.
  
  The question is: having a working cluster with *existing data*, is it 
  possible to do this and wait for the data to move around *without data 
  loss*?
  If so, is it true that we can expect an improvement in the overall cluster 
  performance?
  
  We have no problem having a non-working cluster (while moving the data) 
  even for an entire weekend.
  
  Cheers.
  
  
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 


Re: [Openstack] [SWIFT] Change the partition power to recreate the RING

2013-01-11 Thread John Dickinson
In effect, this would be a complete replacement of your rings, and that is 
essentially a whole new cluster. All of the existing data would need to be 
rehashed into the new ring before it is available.

There is no process that rehashes the data to ensure that it is still in the 
correct partition. Replication only ensures that the partitions are on the 
right drives.

To change the number of partitions, you will need to GET all of the data from 
the old ring and PUT it to the new ring. A more complicated (but perhaps more 
efficient) solution may include something like walking each drive and 
rehashing+moving the data to the right partition and then letting replication 
settle it down.

Either way, 100% of your existing data will need to at least be rehashed (and 
probably moved). Your CPU (hashing), disks (read+write), RAM (directory 
walking), and network (replication) may all be limiting factors in how long it 
will take to do this. Your per-disk free space may also determine what method 
you choose.
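
To give a sense of what "rehash" means here, the partition for an object comes from an md5 
of its path and the ring's partition power, roughly like this (a sketch; the real code also 
mixes a per-cluster hash path suffix into the md5):

from hashlib import md5
import struct

part_power = 18
part_shift = 32 - part_power
key = '/account/container/object'
part = struct.unpack_from('>I', md5(key).digest())[0] >> part_shift

Change part_power and nearly every object hashes to a different partition, which is why all 
of the existing data has to be rehashed (and usually moved).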

I would not expect any data loss while doing this, but you will probably have 
availability issues, depending on the data access patterns.

I'd like to eventually see something in swift that allows for changing the 
partition power in existing rings, but that will be hard/tricky/non-trivial.

Good luck.

--John


On Jan 11, 2013, at 1:17 PM, Alejandro Comisario 
alejandro.comisa...@mercadolibre.com wrote:

 Hi guys.
 We created a swift cluster several months ago; the thing is that right now 
 we can't add hardware, and we configured lots of partitions thinking about the 
 final picture of the cluster.
 
 Today each data node has 2500+ partitions per device, and even after tuning 
 the background processes (replicator, auditor & updater) we really want to 
 try to lower the partition power.
 
 Since it's not possible to do that without recreating the ring, we do have 
 the luxury of recreating it with a much lower partition power, and rebalancing / 
 deploying the new ring.
 
 The question is: having a working cluster with *existing data*, is it possible 
 to do this and wait for the data to move around *without data loss*?
 If so, is it true that we can expect an improvement in the overall cluster 
 performance?
 
 We have no problem having a non-working cluster (while moving the data) even 
 for an entire weekend.
 
 Cheers.
 
 





[Openstack] [Swift] new regular meeting for Swift

2013-01-08 Thread John Dickinson
I've scheduled a bi-weekly Swift team meeting for Swift contributors. Starting 
tomorrow, we will meet in #openstack-meeting every other Wednesday at 11am 
Pacific, 1pm Central, 1900 UTC.

For tomorrow's meeting, we will be discussing Swift's next release and the 
current outstanding patches 
(https://review.openstack.org/#/q/status:open+swift,n,z).

--John




smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] swift -- object layout on storage

2013-01-04 Thread John Dickinson
It's pretty simple. Swift uses the underlying filesystem to store the data on 
disk, and so you can use normal FS tools to find and inspect your data.

For the object server, the magic happens here: 
https://github.com/openstack/swift/blob/master/swift/obj/server.py#L117

The end result is that the data is stored here:

/path/to/mount/points/device/objects/partition/hash_suffix/hash/

That directory is the object. Inside the directory, there is normally just one 
file (named timestamp.data). The object's data is stored in the file, and the 
object's metadata is stored in the xattrs of the file.

In some cases (mostly around failure handling), there may be more than one file 
in that directory, but for the general case, all the .data files are sorted (by 
filename) and the last is chosen (ie the most recent). As I said, there is 
normally just the one file in there.

If you delete the object, the .data file is deleted and a timestamp.ts (ts 
for tombstone) file is created as a zero-byte file. This is a delete marker 
that will be eventually reaped, but it exists to ensure that the delete 
properly propagates to all replicas in the cluster.
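
For example (assuming the common /srv/node mount point; device and paths will differ on 
your cluster), you can poke at an object with normal filesystem tools; the object's 
metadata lives in the user.swift.metadata xattr of the .data file:

find /srv/node/sdb1/objects -name '*.data' | head -n 1
getfattr -d "$(find /srv/node/sdb1/objects -name '*.data' | head -n 1)"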

--John

 

On Jan 4, 2013, at 10:14 AM, Snider, Tim tim.sni...@netapp.com wrote:

 I’d like to understand more about how Swift lays out objects on the underlying 
 storage. I can’t seem to find much about this in the openstack / swift 
 documentation itself or in associated web searches.
 Thanks for pointers / links.
 Tim
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





Re: [Openstack] Error while enabling s3 support in swift

2012-11-28 Thread John Dickinson
swob is a part of recent versions of Swift. It's not a separately installable 
module. You need to either use a later version of Swift (1.7.5 is current) or 
use an older version of swift3.

--John


On Nov 28, 2012, at 3:49 AM, Gui Maluf guimal...@gmail.com wrote:

 John, I installed the fujita/swift3 but I'm stuck at this point:
 
 Traceback (most recent call last):
   File "/usr/bin/swift-proxy-server", line 22, in <module>
 run_wsgi(conf_file, 'proxy-server', default_port=8080, **options)
   File /usr/lib/python2.7/dist-packages/swift/common/wsgi.py, line 138, in 
 run_wsgi
 loadapp('config:%s' % conf_file, global_conf={'log_name': log_name})
   File /usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 247, 
 in loadapp
 return loadobj(APP, uri, name=name, **kw)
   File /usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 271, 
 in loadobj
 global_conf=global_conf)
   File /usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 296, 
 in loadcontext
 global_conf=global_conf)
   File /usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 320, 
 in _loadconfig
 return loader.get_context(object_type, name, global_conf)
   File /usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 450, 
 in get_context
 global_additions=global_additions)
   File /usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 562, 
 in _pipeline_app_context
 for name in pipeline[:-1]]
   File /usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 454, 
 in get_context
 section)
   File /usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 476, 
 in _context_from_use
 object_type, name=use, global_conf=global_conf)
   File /usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 406, 
 in get_context
 global_conf=global_conf)
   File /usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 296, 
 in loadcontext
 global_conf=global_conf)
   File /usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 328, 
 in _loadegg
 return loader.get_context(object_type, name, global_conf)
   File /usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 620, 
 in get_context
 object_type, name=name)
   File /usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 646, 
 in find_egg_entry_point
 possible.append((entry.load(), protocol, entry.name))
   File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 1989, in load
 entry = __import__(self.module_name, globals(),globals(), ['__name__'])
   File "/usr/local/lib/python2.7/dist-packages/swift3-1.0.0-py2.7.egg/swift3/middleware.py",
  line 67, in <module>
 from swift.common.swob import Request, Response
 ImportError: No module named swob
 
 
 I couldn't find a way to install this swob module.
 Any hint?
 
 
 On Tue, Nov 20, 2012 at 5:46 AM, Shashank Sahni shredde...@gmail.com wrote:
 I haven't tested but the proxy server has started fine. Thanks.
 
 --
 Shashank Sahni
 
 
 
 
 On Tue, Nov 20, 2012 at 12:55 PM, John Dickinson m...@not.mn wrote:
 check out the README at https://github.com/fujita/swift3 for the correct 
 proxy server config section.
 
 --john
 
 
 On Nov 19, 2012, at 11:22 PM, Shashank Sahni shredde...@gmail.com wrote:
 
  Hi,
 
  I'm trying to install swift 1.7.4 on Ubuntu 12.04. In order to enable the 
  s3 support, I added the appropriate parameters in the 
  /etc/swift/proxy-server.conf file
 
  [filter:swift3]
  use=egg:swift#swift3
 
  and installed the package swift-plugin-s3. But when I try to start the 
  proxy-server I get the following error.
 
  LookupError: Entry point 'swift3' not found in egg 'swift' (dir: 
  /usr/lib/python2.7/dist-packages; protocols: paste.filter_factory, 
  paste.filter_app_factory; entry_points: )
 
  Note that, proxy server runs fine without s3 support. Suggestions?
 
  --
  Shashank Sahni
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 
 -- 
 guilherme \n
 \t maluf
 


Re: [Openstack] PoC Swift

2012-11-19 Thread John Dickinson
You can run your swift cluster with 2 replicas. This is set when the ring is 
first created. You can use `swift-ring-builder` (ie with no arguments) to get 
usage help. Note that there is not (yet) an easy way to change your replica 
count once the ring has been created.
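
A minimal sketch of creating 2-replica rings (the partition power of 16 and min_part_hours 
of 1 here are just example values); the arguments are <part_power> <replicas> <min_part_hours>:

swift-ring-builder account.builder create 16 2 1
swift-ring-builder container.builder create 16 2 1
swift-ring-builder object.builder create 16 2 1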

If you have more than one drive for swift in each of your two machines, you can 
make three replicas. Swift will ensure that the two replicas on one machine are 
on different drives.

--John
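
A rough sketch of what that looks like with swift-ring-builder -- the part
power, IPs, device names and weights below are only placeholders for a small
proof of concept, not recommendations:

    swift-ring-builder object.builder create 18 2 1
    swift-ring-builder object.builder add z1-192.168.0.1:6000/sdb1 100
    swift-ring-builder object.builder add z2-192.168.0.2:6000/sdb1 100
    swift-ring-builder object.builder rebalance

Here 18 is the partition power, 2 is the replica count, and 1 is the minimum
number of hours between moves of a partition. The account and container
builders are created the same way, just with their own ports.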





On Nov 19, 2012, at 10:19 AM, Emilio García emilio.gar...@cloudreach.co.uk 
wrote:

 Good day everyone,
 
 I want to set up a PoC for Swift. But we are willing to make it a little bit 
 more complicated than just trivial. So we have two spare machines to dedicate 
 to it. I was thinking we could just set up proxy and data node in the same 
 machines. And have two data replicas (one per machine). Is this possible or 
 do I need to have at least 3 data copies? If that is the case, or if we want to 
 make things a little bit more realistic without adding extra servers, I 
 guess I can have duplicate copies by having more than just one data disk per 
 server (so two data copies per machine). Is that assumption OK too?
 
 Thanks.




Re: [Openstack] Error while enabling s3 support in swift

2012-11-19 Thread John Dickinson
check out the README at https://github.com/fujita/swift3 for the correct proxy 
server config section.

--john
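
For reference, the failing config quoted below points use= at the swift egg,
which has no swift3 entry point. The swift3 README describes a filter section
along these lines (treat this as a sketch and check the README for the
authoritative version):

    [filter:swift3]
    use = egg:swift3#swift3

and the swift3 filter also has to be added to the proxy pipeline (before the
auth middleware) so the entry point is looked up from the right package.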


On Nov 19, 2012, at 11:22 PM, Shashank Sahni shredde...@gmail.com wrote:

 Hi,
 
 I'm trying to install swift 1.7.4 on Ubuntu 12.04. In order to enable the s3 
 support, I added the appropriate parameters in the 
 /etc/swift/proxy-server.conf file
 
 [filter:swift3]
 use=egg:swift#swift3
 
 and installed the package swift-plugin-s3. But when I try to start the 
 proxy-server I get the following error.
 
 LookupError: Entry point 'swift3' not found in egg 'swift' (dir: 
 /usr/lib/python2.7/dist-packages; protocols: paste.filter_factory, 
 paste.filter_app_factory; entry_points: )
 
 Note that, proxy server runs fine without s3 support. Suggestions? 
 
 --
 Shashank Sahni
 





Re: [Openstack] [Swift] account-level and container-level usage information

2012-11-13 Thread John Dickinson
You could use a project like slogging (http://github.com/notmyname/slogging) to 
run inside the cluster and aggregate that information from account dbs and logs.

--John


On Nov 13, 2012, at 12:15 PM, Pete Zaitcev zait...@redhat.com wrote:

 On Mon, 12 Nov 2012 20:48:35 -0800
 Ning Zhang n...@zmanda.com wrote:
 
 Is there any Swift (GUI or command line) tool that can
 retrieve the account-level and
 container-level usage information (e.g. how much space
 has been used under an account, how much space has been
 used under a tenant) and also works with keystone?
 
 If you're content with 1st party view, Alex Yang's mail gives the
 answer. But if you want a 3rd party view (authenticate as administrator
 or bypassing the authentication), then I don't think there is a tool
 for that. I tried to find one before I started on swift-report, but
 found none.
 
 Swiftly can be used to ease the problem of formulating the correct
 URLs when accessing back-ends instead of the proxy, but it's not
 a complete turnkey solution.
 
 Actually I was thinking about pilfering Greg's code from Swiftly
 and grafting it onto swift-report, but that hasn't happened.
 
 So the only way you can do it today is to extract the URLs by
 authenticating with Keystone as 1st party (with curl), then issue
 curl -X HEAD to account, container, and object servers. Basically
 you still authenticate in the end as Alex suggested.
 
 -- Pete
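 
 To make the 1st-party route concrete: a HEAD on the account through the proxy
 already reports aggregate usage in its response headers
 (X-Account-Container-Count, X-Account-Object-Count, X-Account-Bytes-Used), and
 the per-container equivalents come back on a container HEAD. With a keystone
 token, roughly (endpoint and tenant id are placeholders from your keystone
 catalog):
 
     curl -I -H 'X-Auth-Token: TOKEN' http://swift.example.com/v1/AUTH_tenant_id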
 





Re: [Openstack] Running other services on swift nodes

2012-11-04 Thread John Dickinson
From the Swift perspective, there isn't any reason other services can't be run 
on the Swift boxes. I'd check for a few things, though.

1) Make sure dependencies aren't in conflict. Thanks to the work of the CI 
team, this should be mostly sane.

2) Obviously, monitor your systems and don't overload something. Some parts of 
Swift will use CPU (eg the proxy servers) and others will use IOPS (eg container 
and object servers). Try to balance the other things you run to complement each 
other.

3) If you have enough hardware to have separate proxy and storage nodes, Swift 
assumes that your storage nodes will be on a private network (ie Swift doesn't 
provide any additional security to the messages within the cluster). Although 
there are some proposed ideas about making this better, I wouldn't put other 
public services on boxes running the backend Swift storage processes, at least 
without some additional limitations on what can connect to the Swift ports.

4) Make sure the ports you are running the services on don't conflict. All of 
the ports used by Swift are configurable.

So while it's not a terrible, terrible idea, I certainly wouldn't call it 
best practice. You can make it work, just pay attention to what you're doing.

--John




On Nov 4, 2012, at 5:18 PM, Tom Fifield fifie...@unimelb.edu.au wrote:

 Hi,
 
 This came in as a doc bug, but I thought I'd throw it to the list:
 
 https://bugs.launchpad.net/openstack-manuals/+bug/988053
 
 I think it would be worthwhile to talk about what services can be
 co-located on the same servers as Swift. For example Can I run Keystone
 in combination with Swift Proxy and Swift Storage?
 
 example other potential combo that might not break: glance / swift proxy
 
 I think this is most relevant to small-scale deployments, where there
 isn't quite enough hardware to go around, rather than anywhere
 approaching best practice ;)
 
 
 Realistically expecting this is a terrible, terrible idea replies to
 this, but perhaps reasons, and the odd outside-the-box idea will be
 presented ...
 
 
 Regards,
 
 Tom
 





Re: [Openstack] [SWIFT] Rack-awareness

2012-11-01 Thread John Dickinson
This is already supported in Swift with the concept of availability zones. 
Swift will place each replica in different availability zones, if possible. If 
you only have one zone, Swift will place the replicas on different machines. If 
you only have one machine, Swift will place the replicas on different drives.

There are active discussions right now about how Swift can support a tier above 
these availability zones: regions. A region would be defined by a higher 
latency link and can provide additional data durability, and, depending on your 
deployment details, better availability. 
http://swiftstack.com/blog/2012/09/16/globally-distributed-openstack-swift-cluster/
 has more info on the ideas we're talking about.

--John




On Nov 1, 2012, at 8:45 AM, Leandro Reox leandro.r...@gmail.com wrote:

 Hi guys, 
 
 Any plans to implement something like hadoop rack-awareness where we can 
 define rack spaces to guarantee that a copy of an object is stored for 
 example on another datacenter, on another coast. Or should this be managed by 
 container sync to the other datacenter
 
 I think that this can be a nice-to-have feature, i dont know if its on the 
 dev roadmap
 
 Best
 Lean





Re: [Openstack] [SWIFT] Rack-awareness

2012-11-01 Thread John Dickinson
That's absolutely what is planned. The idea for global replicas involves three 
basic pieces:

1) A separate replication network
2) Exposing a new tier of uniqueness (the region) in the ring management tools
3) Implementing proxy affinity so that a proxy server chooses closer replicas 
rather than more distant ones

The first two are simple and already have much of the infrastructure in place. 
The third is a little more tricky.

We will probably also need the ability to increase or decrease the cluster's 
replica count to make this useful to more deployers.

--John




On Nov 1, 2012, at 9:49 AM, Alejandro Comisario 
alejandro.comisa...@mercadolibre.com wrote:

 John, what I think would be terrific ( I hope it is not already implemented; if it is, I'm 
 gonna feel a dunce ) is this: for latency matters, suppose you have 4 zones, 2 in 
 each datacenter, and in each datacenter you have 2 proxies, for example.
 
 The idea would be that there were some kind of mechanism to tell the ring 
 what nodes are under what proxies, to give some kind of preference regarding 
 latency, and for example not have a proxy in DC1 cross the country to 
 get an object from a datanode in DC2.
 
 Have you thought about anything like it?
 If not, I'm wondering how, for example, rackspace handles this kind of issue ( 
 ignoring all the CDN thing )
 
 Cheers.
 
 ---
 Alejandrito
 
 On Thu, Nov 1, 2012 at 12:55 PM, John Dickinson m...@not.mn wrote:
 This is already supported in Swift with the concept of availability zones. 
 Swift will place each replica in different availability zones, if possible. 
 If you only have one zone, Swift will place the replicas on different 
 machines. If you only have one machine, Swift will place the replicas on 
 different drives.
 
 There are active discussions right now about how Swift can support a tier 
 above these availability zones: regions. A region would be defined by a 
 higher latency link and can provide additional data durability, and, 
 depending on your deployment details, better availability. 
 http://swiftstack.com/blog/2012/09/16/globally-distributed-openstack-swift-cluster/
  has more info on the ideas we're talking about.
 
 --John
 
 
 
 
 On Nov 1, 2012, at 8:45 AM, Leandro Reox leandro.r...@gmail.com wrote:
 
  Hi guys,
 
  Any plans to implement something like hadoop rack-awareness where we can 
  define rack spaces to guarantee that a copy of an object is stored for 
  example on another datacenter, on another coast. Or should this be managed 
  by container sync to the other datacenter
 
  I think that this can be a nice-to-have feature, i dont know if its on the 
  dev roadmap
 
  Best
  Lean
 
 
 
 





Re: [Openstack] Nova middleware for enabling CORS?

2012-10-30 Thread John Dickinson
Since the CORS support in Swift allows the preflight OPTIONS response to be 
different on a per-container basis (which is correct in a multi-tenant system), 
the CORS support was added directly into Swift's proxy server rather than as 
middleware. In order to fulfill the OPTIONS request, container information 
needs to be read from the system, and the proxy server already has this 
information (probably in cache). Implementing the CORS support as middleware 
would require duplicating much of the code and framework that already exists in 
Swift's proxy server. CORS support in Swift was correctly implemented.

--John
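
For anyone wondering how a container opts in: the allowed origins are stored as
container metadata, which is why the proxy needs the container info to answer
the preflight. A hedged example, with placeholder token, endpoint and origin:

    curl -X POST -H 'X-Auth-Token: TOKEN' \
         -H 'X-Container-Meta-Access-Control-Allow-Origin: http://example.com' \
         http://swift.example.com/v1/AUTH_test/mycontainer

After that, an OPTIONS preflight for an object in that container should be
answered using the container's settings.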




On Oct 30, 2012, at 10:57 AM, Renier Morales reni...@us.ibm.com wrote:

 On Oct 30, 2012, at 1:08 PM, David Kranz wrote:
 
 On 10/30/2012 12:43 PM, Renier Morales wrote:
 Hello,
 
 I'm wondering if someone has already created a nova paste filter/middleware 
 for enabling Cross-Origin Resource Sharing (CORS), allowing a web page to 
 access the openstack api from another domain. Any pointers out there?
 
 Thanks,
 
-Renier
 
 
 This https://review.openstack.org/#/c/6909/ was an attempt to add such 
 middleware to swift. It is
 generic CORS support but seems
 to have been rejected in favor of putting CORS support in swift directly and 
 checked in last week:
 https://github.com/openstack/swift/commit/74b27d504d310c70533175759923c21df158daf9
 
 Question for the list: this supports CORS in Swift. Should other services 
 (nova, keystone, glance) do the same kind of intrinsic CORS enablement?
 It's surprising that something like CORS, if done, would not be done in a 
 more generic, plug-in-friendly way that you could use across all services.
 
   -Renier
 





Re: [Openstack] Troubleshooting Swift 1.7.4 on mini servers

2012-10-29 Thread John Dickinson
Also check the number of inodes used: `df -i`

--John



On Oct 29, 2012, at 8:31 AM, Nathan Trueblood nat...@truebloodllc.com wrote:

 Yeah, I read about the 507 error. However, when the error occurs I 
 can see with 'df' that the drive is only 1% full and is definitely not 
 unmounted.   I can write files to the mounted filesystem directly before, 
 during, and after the Swift error occurs.   So the problem must be some kind 
 of timeout that is causing the object server to think that something is wrong 
 with the disk.
 
 I'll keep digging... 
 
 On Fri, Oct 26, 2012 at 11:21 PM, John Dickinson m...@not.mn wrote:
 A 507 is returned by the object servers in 2 situations: 1) the drives are 
 full or 2) the drives have been unmounted because of disk error.
 
 It's highly likely that you simply have full drives. Remember that the usable 
 space in your cluster is 1/N where N = replica count. As an example, with 3 
 replicas and 5 nodes with a single 1TB drive each, you only have about 1.6TB 
 available for data.
 
 As Pete suggested in his response, how big are your drives, and what does 
 `df` tell you?
 
 --John
 
 
 On Oct 26, 2012, at 5:26 PM, Nathan Trueblood nat...@truebloodllc.com wrote:
 
  Hey folks-
 
  I'm trying to figure out what's going wrong with my Swift deployment on a 
  small cluster of mini servers.   I have a small test cluster (5 storage 
  nodes, 1 proxy) of mini-servers that are ARM-based.   The proxy is a 
  regular, Intel-based server with plenty of RAM.   The 
  object/account/container servers are relatively small, with 2GB of RAM per 
  node.
 
  Everything starts up fine, but now I'm trying to troubleshoot a strange 
  problem.   After I successfully upload a few test files, it seems like the 
  storage system stops responding and the proxy gives me a 503 error.
 
  Here's the test sequence I run on my proxy:
 
  lab@proxy01:~/bin$ ./swiftcl.sh stat
  swift -A http://proxy01:8080/auth/v1.0 -U system:root -K testpass stat
 Account: AUTH_system
  Containers: 5
 Objects: 4
   Bytes: 47804968
  Accept-Ranges: bytes
  X-Timestamp: 1351294912.72119
  lab@proxy01:~/bin$ ./swiftcl.sh upload myfiles1 /home/lab/bigfile1
  swift -A http://proxy01:8080/auth/v1.0 -U system:root -K testpass upload 
  myfiles1 /home/lab/bigfile1
  home/lab/bigfile1
  lab@proxy01:~/bin$ ./swiftcl.sh upload myfiles2 /home/lab/bigfile1
  swift -A http://proxy01:8080/auth/v1.0 -U system:root -K testpass upload 
  myfiles2 /home/lab/bigfile1
  home/lab/bigfile1
  lab@proxy01:~/bin$ ./swiftcl.sh upload myfiles3 /home/lab/bigfile1
  swift -A http://proxy01:8080/auth/v1.0 -U system:root -K testpass upload 
  myfiles3 /home/lab/bigfile1
  home/lab/bigfile1
  lab@proxy01:~/bin$ ./swiftcl.sh upload myfiles4 /home/lab/bigfile1
  swift -A http://proxy01:8080/auth/v1.0 -U system:root -K testpass upload 
  myfiles4 /home/lab/bigfile1
  home/lab/bigfile1
  lab@proxy01:~/bin$ ./swiftcl.sh upload myfiles5 /home/lab/bigfile1
  swift -A http://proxy01:8080/auth/v1.0 -U system:root -K testpass upload 
  myfiles5 /home/lab/bigfile1
  Object PUT failed: 
  http://172.16.1.111:8080/v1/AUTH_system/myfiles5/home/lab/bigfile1 503 
  Service Unavailable  [first 60 chars of response] 503 Service Unavailable
 
  The server is currently unavailable
  lab@proxy01:~/bin$ ./swiftcl.sh stat
  swift -A http://proxy01:8080/auth/v1.0 -U system:root -K testpass stat
 Account: AUTH_system
  Containers: 6
 Objects: 5
   Bytes: 59756210
  Accept-Ranges: bytes
  X-Timestamp: 1351294912.72119
 
  Here's the corresponding log on the Proxy:
 
  Oct 26 17:06:52 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/06/52 GET 
  /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0010
  Oct 26 17:07:13 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/13 GET 
  /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0017
  Oct 26 17:07:13 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/13 GET 
  /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0016
  Oct 26 17:07:22 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/22 GET 
  /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0010
  Oct 26 17:07:22 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/22 GET 
  /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0016
  Oct 26 17:07:27 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/27 GET 
  /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0010
  Oct 26 17:07:27 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/27 GET 
  /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0016
  Oct 26 17:07:27 proxy01 proxy-server Handoff requested (1) (txn: 
  tx6946419daba54efe9c2878f8a2a78f88) (client_ip: 172.16.1.111)
  Oct 26 17:07:27 proxy01 proxy-server Handoff requested (2) (txn: 
  tx6946419daba54efe9c2878f8a2a78f88) (client_ip: 172.16.1.111)
  Oct 26 17:07:33 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/33 GET 
  /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0010
  Oct 26 17:07:33 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/33 GET 
  /auth/v1.0/ HTTP/1.0 200

Re: [Openstack] Troubleshooting Swift 1.7.4 on mini servers

2012-10-27 Thread John Dickinson
A 507 is returned by the object servers in 2 situations: 1) the drives are full 
or 2) the drives have been unmounted because of disk error.

It's highly likely that you simply have full drives. Remember that the usable 
space in your cluster is 1/N where N = replica count. As an example, with 3 
replicas and 5 nodes with a single 1TB drive each, you only have about 1.6TB 
available for data.

As Pete suggested in his response, how big are your drives, and what does `df` 
tell you?

--John


On Oct 26, 2012, at 5:26 PM, Nathan Trueblood nat...@truebloodllc.com wrote:

 Hey folks-
 
 I'm trying to figure out what's going wrong with my Swift deployment on a 
 small cluster of mini servers.   I have a small test cluster (5 storage 
 nodes, 1 proxy) of mini-servers that are ARM-based.   The proxy is a regular, 
 Intel-based server with plenty of RAM.   The object/account/container servers 
 are relatively small, with 2GB of RAM per node.
 
 Everything starts up fine, but now I'm trying to troubleshoot a strange 
 problem.   After I successfully upload a few test files, it seems like the 
 storage system stops responding and the proxy gives me a 503 error.
 
 Here's the test sequence I run on my proxy:
 
 lab@proxy01:~/bin$ ./swiftcl.sh stat
 swift -A http://proxy01:8080/auth/v1.0 -U system:root -K testpass stat
Account: AUTH_system
 Containers: 5
Objects: 4
  Bytes: 47804968
 Accept-Ranges: bytes
 X-Timestamp: 1351294912.72119
 lab@proxy01:~/bin$ ./swiftcl.sh upload myfiles1 /home/lab/bigfile1 
 swift -A http://proxy01:8080/auth/v1.0 -U system:root -K testpass upload 
 myfiles1 /home/lab/bigfile1
 home/lab/bigfile1
 lab@proxy01:~/bin$ ./swiftcl.sh upload myfiles2 /home/lab/bigfile1 
 swift -A http://proxy01:8080/auth/v1.0 -U system:root -K testpass upload 
 myfiles2 /home/lab/bigfile1
 home/lab/bigfile1
 lab@proxy01:~/bin$ ./swiftcl.sh upload myfiles3 /home/lab/bigfile1 
 swift -A http://proxy01:8080/auth/v1.0 -U system:root -K testpass upload 
 myfiles3 /home/lab/bigfile1
 home/lab/bigfile1
 lab@proxy01:~/bin$ ./swiftcl.sh upload myfiles4 /home/lab/bigfile1 
 swift -A http://proxy01:8080/auth/v1.0 -U system:root -K testpass upload 
 myfiles4 /home/lab/bigfile1
 home/lab/bigfile1
 lab@proxy01:~/bin$ ./swiftcl.sh upload myfiles5 /home/lab/bigfile1 
 swift -A http://proxy01:8080/auth/v1.0 -U system:root -K testpass upload 
 myfiles5 /home/lab/bigfile1
 Object PUT failed: 
 http://172.16.1.111:8080/v1/AUTH_system/myfiles5/home/lab/bigfile1 503 
 Service Unavailable  [first 60 chars of response] 503 Service Unavailable
 
 The server is currently unavailable
 lab@proxy01:~/bin$ ./swiftcl.sh stat
 swift -A http://proxy01:8080/auth/v1.0 -U system:root -K testpass stat
Account: AUTH_system
 Containers: 6
Objects: 5
  Bytes: 59756210
 Accept-Ranges: bytes
 X-Timestamp: 1351294912.72119
 
 Here's the corresponding log on the Proxy:
 
 Oct 26 17:06:52 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/06/52 GET 
 /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0010
 Oct 26 17:07:13 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/13 GET 
 /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0017
 Oct 26 17:07:13 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/13 GET 
 /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0016
 Oct 26 17:07:22 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/22 GET 
 /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0010
 Oct 26 17:07:22 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/22 GET 
 /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0016
 Oct 26 17:07:27 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/27 GET 
 /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0010
 Oct 26 17:07:27 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/27 GET 
 /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0016
 Oct 26 17:07:27 proxy01 proxy-server Handoff requested (1) (txn: 
 tx6946419daba54efe9c2878f8a2a78f88) (client_ip: 172.16.1.111)
 Oct 26 17:07:27 proxy01 proxy-server Handoff requested (2) (txn: 
 tx6946419daba54efe9c2878f8a2a78f88) (client_ip: 172.16.1.111)
 Oct 26 17:07:33 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/33 GET 
 /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0010
 Oct 26 17:07:33 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/33 GET 
 /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0016
 Oct 26 17:07:33 proxy01 proxy-server Handoff requested (1) (txn: 
 tx5f9659f74cb2491f9a63cbb84f680c5c) (client_ip: 172.16.1.111)
 Oct 26 17:07:33 proxy01 proxy-server Handoff requested (2) (txn: 
 tx5f9659f74cb2491f9a63cbb84f680c5c) (client_ip: 172.16.1.111)
 Oct 26 17:07:39 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/39 GET 
 /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0009
 Oct 26 17:07:39 proxy01 proxy-server - 127.0.0.1 27/Oct/2012/00/07/39 GET 
 /auth/v1.0/ HTTP/1.0 200 - - - - - - - - 0.0009
 Oct 26 17:07:39 proxy01 proxy-server Handoff requested (1) (txn: 
 tx8dc917a4a8c84c40a4429b7bab0323c6) (client_ip: 172.16.1.111)
 Oct 26 17:07:39 proxy01 proxy-server Handoff requested (2) (txn: 
 

[Openstack] Swift 1.7.5 release plan

2012-10-25 Thread John Dickinson
Fast on the heels of a productive summit in San Diego, we are getting ready to 
release Swift 1.7.5. Our current schedule is to cut the QA release on November 
5 and, assuming it passes all QA tests, prepare the final release on November 8.

This is quite a solid release with a ton of bug fixes and a few new features. 
I'll keep you up to date as we get closer to the release date. Note that if you 
have patches that you would like to see included in this release, they will 
need to be merged before the QA release is cut on Nov 5.

Thanks to everyone who has contributed to Swift since the last release. You are 
helping Swift become something used by everyone, every day.

--John







Re: [Openstack] [SWIFT] Proxies Sizing for 90.000 / 200.000 RPM

2012-10-24 Thread John Dickinson
Sorry for the delay. You've got an interesting problem, and we were all quite 
busy last week with the summit.

First, the standard caveat: Your performance is going to be highly dependent on 
your particular workload and your particular hardware deployment. 3500 req/sec 
in two different deployments may be very different based on the size of the 
requests, the spread of the data requested, and the type of requests. Your 
experience may vary, etc, etc.

However, for an attempt to answer your question...

6 proxies for 3500 req/sec doesn't sound unreasonable. It's in line with other 
numbers I've seen from people and what I've seen from other large scale 
deployments. You are basically looking at about 600 req/sec/proxy.

My first concern is not the swift workload, but how keystone handles the 
authentication of the tokens. A quick glance at the keystone source seems to 
indicate that keystone's auth_token middleware is using a standard memcached 
module that may not play well with concurrent connections in eventlet. 
Specifically, sockets cannot be reused concurrently by different greenthreads. 
You may find that the token validation in the auth_token middleware fails under 
any sort of load. This would need to be verified by your testing or an 
examination of the memcache module being used. An alternative would be to look 
at the way swift implements its memcache connections in an eventlet-friendly 
way (see swift/common/memcache.py:_get_conns() in the swift codebase).

--John
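
To illustrate the concern: if one shared memcached socket is used by many
greenthreads at once, their requests and responses can get interleaved. The
eventlet-friendly approach is, roughly, to keep a small pool of connections per
memcached server and have each greenthread check one out for the duration of a
call. A very simplified sketch of that pattern (not swift's actual code, and
the pool size and buffer size are arbitrary):

    import socket
    from eventlet.queue import Queue

    class ConnPool(object):
        # Tiny per-server connection pool: one socket per concurrent caller.
        def __init__(self, server, size=2):
            self.pool = Queue()
            for _ in range(size):
                self.pool.put(socket.create_connection(server))

        def command(self, data):
            sock = self.pool.get()   # yields until a free socket is available
            try:
                sock.sendall(data)
                return sock.recv(4096)
            finally:
                self.pool.put(sock)  # hand the socket to the next greenthread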



On Oct 11, 2012, at 4:28 PM, Alejandro Comisario 
alejandro.comisa...@mercadolibre.com wrote:

 Hi Stackers !
 This is the thing: today we have 24 datanodes (3 copies, 90TB usable); each 
 datanode has 2 intel hexacore CPUs with HT and 96GB of RAM, and 6 Proxies 
 with the same hardware configuration, using swift 1.4.8 with keystone.
 Regarding the networking, each proxy / datanode has a dual 1Gb nic, bonded 
 in LACP mode 4, and each of the proxies is behind an F5 BigIP Load Balancer ( 
 so, no worries over there ).
 
 Today, we are receiving 5000 RPM ( Requests per Minute ) with 660 RPM per 
 proxy. I know it's low, but now ... with a new product migration, soon ( 
 really soon ) we are expecting to receive a total of about 90.000 RPM average 
 ( 1500 req / s ) with weekly peaks of 200.000 RPM ( 3500 req / s ) to the 
 swift api, which will be 90% public GETs ( no keystone auth ) and 10% 
 authorized PUTs ( keystone in the middle; worth knowing that we have a pool of 10 
 keystone vms connected to a 5-node galera mysql cluster, so no worries 
 there either ) 
 
 So, 3500 req/s divided by 6 proxy nodes doesn't sound like too much, but well, it's 
 a number that we can't ignore.
 What do you think about these numbers? Do these 6 proxies sound good, or should we 
 double or triple the proxies ? Does anyone have this volume of requests 
 and can share their configs ?
 
 Thanks a lot, hoping to hear from you guys !
 
 -
 alejandrito





Re: [Openstack] [SWIFT] Proxies Sizing for 90.000 / 200.000 RPM

2012-10-24 Thread John Dickinson
Smaller requests, of course, will have a higher percentage overhead for each 
request, so you will need more proxies for many small requests than the same 
number of larger requests (all other factors being equal).

If most of the requests are reads, then you probably won't have to worry about 
keystone keeping up.

You may want to look at tuning the object server config variable 
keep_cache_size. This variable is the maximum size of an object to keep in 
the buffer cache for publicly requested objects. So if you tuned it to be 20K 
(20971520)--by default it is 5424880--you should be able to serve most of your 
requests without needing to do a disk seek, assuming you have enough RAM on the 
object servers. Note that background processes on the object servers end up 
using the cache for storing the filesystem inodes, so lots of RAM will be a 
very good thing in your use case. Of course, the usefulness of this caching is 
dependent on how frequently a given object is accessed. You may consider an 
external caching system (anything from varnish or squid to a CDN provider) if 
the direct public access becomes too expensive.

One other factor to consider is that since swift stores 3 replicas of the data, 
there are 3 servers that can serve a request for a given object, regardless of 
how many storage nodes you have. This means that if all 3500 req/sec are to the 
same object, only 3 object servers are handling that. However, if the 3500 
req/sec are spread over many objects, the full cluster will be utilized. Some 
of us have talked about how to improve swift's performance for concurrent 
access to a single object, but those improvements have not been coded yet.

--John
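
A hedged example of that tuning -- the section name matches the stock
object-server.conf, and the value here is only illustrative, not a
recommendation:

    [app:object-server]
    use = egg:swift#object
    keep_cache_size = 20971520

There is also a keep_cache_private option if authenticated GETs should use the
buffer cache as well; check the deployment docs for the exact semantics.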



On Oct 24, 2012, at 1:20 PM, Alejandro Comisario 
alejandro.comisa...@mercadolibre.com wrote:

 Thanks Josh, and Thanks John.
 I know it was an exciting Summit! Congrats to everyone !
 
 John, let me give you some extra data and repeat something I've already said, which 
 might be wrong.
 
 First, the request sizes that will make up the 90.000 RPM - 200.000 RPM will be 
 90% 20K objects and 10% 150/200K objects.
 Second, all the GET requests are going to be public, configured through 
 ACLs. So, if the GET requests are public (no X-Auth-Token is passed), why 
 should I be worried about the keystone middleware ?
 
 Just to clarify, because I really want to understand what my real metrics are 
 so I can know where to tune in case I need to.
 Thanks !
 
 ---
 Alejandrito
 
 
 On Wed, Oct 24, 2012 at 3:28 PM, John Dickinson m...@not.mn wrote:
 Sorry for the delay. You've got an interesting problem, and we were all quite 
 busy last week with the summit.
 
 First, the standard caveat: Your performance is going to be highly dependent 
 on your particular workload and your particular hardware deployment. 3500 
 req/sec in two different deployments may be very different based on the size 
 of the requests, the spread of the data requested, and the type of requests. 
 Your experience may vary, etc, etc.
 
 However, for an attempt to answer your question...
 
 6 proxies for 3500 req/sec doesn't sound unreasonable. It's in line with 
 other numbers I've seen from people and what I've seen from other large scale 
 deployments. You are basically looking at about 600 req/sec/proxy.
 
 My first concern is not the swift workload, but how keystone handles the 
 authentication of the tokens. A quick glance at the keystone source seems to 
 indicate that keystone's auth_token middleware is using a standard memcached 
 module that may not play well with concurrent connections in eventlet. 
 Specifically, sockets cannot be reused concurrently by different 
 greenthreads. You may find that the token validation in the auth_token 
 middleware fails under any sort of load. This would need to be verified by 
 your testing or an examination of the memcache module being used. An 
 alternative would be to look at the way swift implements its memcache 
 connections in an eventlet-friendly way (see 
 swift/common/memcache.py:_get_conns() in the swift codebase).
 
 --John
 
 
 
 On Oct 11, 2012, at 4:28 PM, Alejandro Comisario 
 alejandro.comisa...@mercadolibre.com wrote:
 
  Hi Stackers !
  This is the thing: today we have 24 datanodes (3 copies, 90TB usable); 
  each datanode has 2 intel hexacore CPUs with HT and 96GB of RAM, and 6 
  Proxies with the same hardware configuration, using swift 1.4.8 with 
  keystone.
  Regarding the networking, each proxy / datanode has a dual 1Gb nic, bonded 
  in LACP mode 4, and each of the proxies is behind an F5 BigIP Load Balancer ( 
  so, no worries over there ).
 
  Today, we are receiving 5000 RPM ( Requests per Minute ) with 660 RPM per 
  proxy. I know it's low, but now ... with a new product migration, soon ( 
  really soon ) we are expecting to receive a total of about 90.000 RPM 
  average ( 1500 req / s ) with weekly peaks of 200.000 RPM ( 3500 req / s ) 
  to the swift api, which will be 90% public GETs ( no keystone auth

Re: [Openstack] [openstack-dev] [Swift] community meeting Oct 1

2012-10-01 Thread John Dickinson
Reminder for today's meeting.


On Sep 24, 2012, at 10:59 PM, John Dickinson m...@not.mn wrote:

 As we finish up Folsom and head into the Grizzly summit, I'd like to have a 
 Swift community meeting.
 
 Who: The Swift community (users, deployers, contributors, core devs)
 When:  October 1, 2012 at 8pm (UTC), 3pm (Central), 1pm (Pacific)
 Where: #openstack-meeting on freenode (IRC)
 
 Agenda: http://wiki.openstack.org/SwiftOct1Meeting
 
 The goal of this meeting is to prepare for the summit, and therefore also the 
 next six months of Swift's development. We will do this by reviewing feature 
 ideas (gathered from the community) and ensuring that the most important 
 topics are addressed at the summit. Even if you are not able to attend the 
 summit, please try to attend this meeting. It's an opportunity for you to 
 share what's important to you as we continue to move Swift forward.
 
 
 --John
 
 
 





Re: [Openstack] Swift reliability

2012-09-26 Thread John Dickinson
The 404s on object PUTs are probably related to the timeout errors you are 
seeing on the container servers. This may be because of IO contention on your 
hardware (eg overtaxed drives). How does the disk IO look on your physical 
hardware?

The disk full errors may be because you are running out of inodes on the 
filesystem. You can check this with `df -i`. This is possible if you are using 
many small files.

--John


On Sep 26, 2012, at 3:39 AM, Phil Holden phil.hol...@cognitomobile.com wrote:

 Hello,
 
 I have been continuing to run the Swift reliability test described at 
https://answers.launchpad.net/swift/+question/201627
 This is now using ext4 filesystems but continues to have some issues.  
 The test has been resized a little and now consists of 40 threads doing 
 a PUT with an object, then a GET on it some time later. Each thread will 
 eventually PUT 15,000 objects in 1 container per thread.  The object 
 number then wraps around and it should thereafter be over-writing 
 objects which already exist.  The data objects are very small, e.g.,
Content of object 11234 in container 15-1 \n
 The test is rate limited.  It has been run at up to 2,100 HTTP requests 
 (GET or PUT) per minute which is the expected traffic rate we want it to 
 support.  
 
 The Swift cluster consists of a load balancer in front of 2 x Swift 
 proxies, in turn connected to 6 Swift data nodes. All these systems are 
 VM's in a managed cluster of physical servers and so may compete for 
 physical resources, but we think they are provisioned adequately for 
 this phase of testing.  Other tests have achieved over 3,500 HTTP 
 requests/minute using this cluster.  The rings are configured for 3 
 replicas of the data.  The Swift version is Essex (2012.1).  
 
 A number of problems continue to be encountered with the test.  These 
 have been as follows:
 
 The problems described in question 201627 (above) continued to occur 
 when XFS filesystems were used.  This problem is not seen if ext4 
 filesystems are used.  
 
 The remaining problems have only been seen using ext4 filesystems.  They 
 occur after the test has been running for some time, several days.  
 Using xfs filesystems, the test gets stuck as in question 201627 before 
 encountering any of these.  
 
 After the test has wrapped around on the object number that it is 
 writing, space usage continues to grow, eventually filling all the data 
 nodes.  If an object is over-written, replacing its contents, is the 
 old data freed immediately or is it left around, waiting to be tidied 
 away later by some clean-up process?  The object-expirer is being run on 
 one of the proxy nodes, but all objects should be over-written well 
 before their expiry time.  
 
 On one occasion half the data nodes were completely filled at 100% and 
 the cluster overall became unresponsive.  This situation was solved by a 
 rolling restart where each of the data nodes is restarted, one-by-one.  
 
 HTTP 404 : Not Found is repeatedly reported on a PUT to an object in an 
 existing container.  The test gets stuck on this until it is resolved.  
 This can often be resolved by a rolling restart where each of the data 
 nodes is restarted, one-by-one.  
 
 One of the Swift proxy server processes became unresponsive.  This meant 
 that only half the requests succeeded, the ones which went through the 
 other proxy.  There was nothing evident in the logfiles.  The proxy 
 process did not respond to an ordinary kill (SIGTERM).  A SIGKILL was 
 needed to remove it.  The object-expirer which was running at the same 
 time on the same host did respond to SIGTERM and stopped.  Everything 
 continued normally after the proxy server and object-expirer were 
 restarted.  
 
 
 Further testing is being performed at a reduced rate of 525 HTTP 
 requests per minute (25% of the target rate) to see if this Swift 
 cluster will perform more reliably at this reduced rate. 
 
 Can anyone shed any light on the problems described above and suggest 
 ways they could be prevented from happening.  
 
 
 The overall purpose of the test is to determine if Swift can be reliably 
 used for storage of mission-critical data.  Obviously open source 
 software such as this comes with no warranty, but, in a similar manner 
 to making a judgement about use of the Linux kernel and filesystems and 
 related software for mission-critical activities, a judgement about the 
 use of Swift needs to be made.  This test is intended to support the 
 ability to make this decision.  
 
 
 Regards
   - Phil -
 
 
 

[Openstack] [Swift] community meeting Oct 1

2012-09-25 Thread John Dickinson
As we finish up Folsom and head into the Grizzly summit, I'd like to have a 
Swift community meeting.

Who: The Swift community (users, deployers, contributors, core devs)
When:  October 1, 2012 at 8pm (UTC), 3pm (Central), 1pm (Pacific)
Where: #openstack-meeting on freenode (IRC)

Agenda: http://wiki.openstack.org/SwiftOct1Meeting

The goal of this meeting is to prepare for the summit, and therefore also the 
next six months of Swift's development. We will do this by reviewing feature 
ideas (gathered from the community) and ensuring that the most important topics 
are addressed at the summit. Even if you are not able to attend the summit, 
please try to attend this meeting. It's an opportunity for you to share what's 
important to you as we continue to move Swift forward.


--John







Re: [Openstack] [swift] make swift.common.utils.streq_const_time more efficient

2012-09-13 Thread John Dickinson
The intended purpose of this string comparison is to explicitly compare every 
character. Doing it this way guards against timing attacks 
(http://en.wikipedia.org/wiki/Timing_attack).

--John
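
To spell out why the early return proposed above is a problem: the response
time would then reveal how many leading bytes of the two strings matched,
letting an attacker recover a secret one byte at a time. As a side note, newer
Python versions (2.7.7+ / 3.3+) ship a constant-time comparison in the standard
library that can be used where available:

    import hmac
    hmac.compare_digest(s1, s2)  # constant-time equality check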


On Sep 13, 2012, at 12:06 AM, Mike Green iasy...@gmail.com wrote:

 def streq_const_time(s1, s2):
     if len(s1) != len(s2):
         return False
     result = 0
     for (a, b) in zip(s1, s2):
         result |= ord(a) ^ ord(b)
     return result == 0
 
 +
 
 If s1 and s2 are of the same length, then the function will compare every 
 character in them.  I think it may be more efficient as follows:
 
 def streq_const_time(s1, s2):
     if len(s1) != len(s2):
         return False
     result = 0
     for (a, b) in zip(s1, s2):
         if ord(a) ^ ord(b):
             return False
     return True





[Openstack] [swift] 1.7.0-final pushed

2012-09-12 Thread John Dickinson
I just pushed the final versioning change to Swift 1.7.0. This is our part of 
OpenStack Folsom. Good work everyone, and thanks for the time and effort you have 
put into keeping Swift world-class software.

I'll follow up later with more details about what is in Swift 1.7.0 and the 
changes since the Essex release, but I wanted to publicly thank everyone who 
contributed to Swift during the Folsom release cycle. The following people have 
code contributions in Swift during the Folsom release cycle. Thanks again!

Greg Holt
John Dickinson
Darrell Bishop
Samuel Merritt
Florian Hines
David Goetz
Greg Lange
Victor Rodionov
Michael Barton
Ionuț Arțăriși
Vincent Untz
Pete Zaitcev
Chmouel Boudjnah
Alex Yang
Morita Kazutaka
Iryoung Jeong
Julien Danjou
Dan Prince
Adrian Smith
ning_zhang
Anne Gentle
Brent Roskos
Clark Boylan
Constantine Peresypkin
Dan Dillinger
François Charlier
Josh Kearney
Kota Tsuyuzaki
Li Riqiang
Marcelo Martins
Monty Taylor
Paul McMillan
Ray Chen
Scott Simpson
Thierry Carrez
Tom Fifield
Tong Li



Re: [Openstack] [OpenStack][Swift][Replicator] Does object replicator push exist object to handoff node while a node/disk/network fails ?

2012-09-06 Thread John Dickinson
you can force a replicator to push to a handoff node by unmounting the drive 
one of the primary replicas is on.

--John
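
A sketch of that test on a lab cluster (device name and paths are placeholders,
and this assumes mount_check is enabled so the unmounted drive is reported as
bad):

    umount /srv/node/DISK1               # on the node holding one primary replica
    swift-init object-replicator once    # on a node holding another replica

With the primary's drive unmounted, its object server should answer replication
requests with 507, and the replicators on the other nodes should then push that
partition's data to the first handoff node.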


On Sep 6, 2012, at 9:00 AM, Kuo Hugo tonyt...@gmail.com wrote:

 Hi folks , John and Chmouel ,
 
 I posted a question about this a long time ago, and my test result matches 
 Chmouel's answer. 
 
 https://answers.launchpad.net/swift/+question/191924
 The object replicator will push an object to a handoff node if another 
 primary node returns that the drive the object is supposed to go on is bad. 
 We don't push to handoff nodes on general errors, otherwise things like 
 network partitions or rebooting machines would cause storms of unneeded 
 handoff traffic.
 
 But I read something different from John (or it is just my misunderstanding), so I 
 want to clarify it.
 
 Assumption : 
 Storage Nodes :  5 (each for one zone)
 Zones :   5
 Replica :  3
 Disks :   2*5   ( 1 disk/per node )
 
 Account   AUTH_test
 ContainerCon_1
 Object  Obj1
 
 
 Partition   3430
 Hash6b342ac122448ef16bf1655d652bfe1e
 
 Server:Port Device  192.168.1.101:36000 DISK1
 Server:Port Device  192.168.1.102:36000 DISK1
 Server:Port Device  192.168.1.103:36000 DISK1
 Server:Port Device  192.168.1.104:36000 DISK1[Handoff]
 Server:Port Device  192.168.1.105:36000 DISK1[Handoff]
 
 
 curl -I -XHEAD http://192.168.1.101:36000/DISK1/3430/AUTH_test/Con_1/Obj1;
 curl -I -XHEAD http://192.168.1.102:36000/DISK1/3430/AUTH_test/Con_1/Obj1;
 curl -I -XHEAD http://192.168.1.103:36000/DISK1/3430/AUTH_test/Con_1/Obj1; 
 curl -I -XHEAD http://192.168.1.104:36000/DISK1/3430/AUTH_test/Con_1/Obj1; # 
 [Handoff]
 curl -I -XHEAD http://192.168.1.105:36000/DISK1/3430/AUTH_test/Con_1/Obj1; # 
 [Handoff]
 
 
 ssh 192.168.1.101 ls -lah 
 /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/
 ssh 192.168.1.102 ls -lah 
 /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/
 ssh 192.168.1.103 ls -lah 
 /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/ 
 ssh 192.168.1.104 ls -lah 
 /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/ # 
 [Handoff]
 ssh 192.168.1.105 ls -lah 
 /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/ # 
 [Handoff]
 
 Case : 
 Obj1 has already been uploaded to the 3 primary devices properly. What kind of 
 failure on 192.168.1.101:36000 DISK1 will trigger the replicator to push a copy to 
 the 192.168.1.104:36000 DISK1 [handoff] device ?
 
 In my past tests, the replicator did not push a copy to a handoff node for an 
 existing object. Whether it was a network failure, a rebooted machine, or an unmounted disk, I 
 think these are the general errors Chmouel mentioned before. But I'm not 
 that sure about the meaning of the replicator will push an object to a handoff 
 node if another primary node returns that the drive the object is supposed to 
 go on is bad. How does the object-replicator know that the drive the object is 
 supposed to go on is bad? (I think the replicator can never know it by itself. Should it 
 work together with the object-auditor ?) 
 
 How can I produce a failure that triggers the replicator to push an object to a handoff node ?
 
 In my understanding, for the replicator to push an object to a handoff node, the 
 condition is that the primary device does not have the object and a copy also cannot 
 be pushed onto that device (192.168.1.101:36000 DISK1). The object might have been moved to 
 quarantine because the object-auditor found it was broken. 
 
 So even though the disk (192.168.1.101:36000 DISK1) is still mounted, the 
 target partition 3430 does not have Obj1. Another node's object-replicator 
 tries to push its Obj1 to 192.168.1.101:36000 DISK1, but unluckily the 
 192.168.1.101:36000 DISK1 is bad. So the object-replicator will push the object 
 to 192.168.1.104:36000 DISK1 [handoff] now. 
 
 That's my inference, please feel free to correct it. I'm really confused about 
 how to produce the kind of failure that makes the replicator push an object to a 
 handoff node. 
 Any idea would be great .
 
 
 Cheers 
 -- 
 +Hugo Kuo+
 tonyt...@gmail.com
 +886 935004793
 





[Openstack] Swift PTL candidacy

2012-09-04 Thread John Dickinson
I am running to continue my position as Swift PTL.

I have been involved with swift since the project started. I am an active 
contributor, reviewer, and community participant. I have led meetups about 
swift, given conference presentations on swift, and am active in IRC helping 
those who have questions about swift. I'm tremendously excited about what swift 
can do and what the future holds for it.

In the next six months, my priorities for swift will be on growing the user 
community, solving the needs of production use cases, and working towards Swift 
2.0.

Slightly more details:

1) Growing the user community
- Swift needs to be easier to install so more people can try it out
- Swift needs more intro and getting started documentation
- Swift needs to encourage 3rd party developer support (client apps)

2) Solving the needs of production use cases
- What's running in production matters so much more than purity of code 
design
- Those running production swift clusters have the loudest voice in how 
swift works
- Swift must always work, have seamless migration paths, and allow for 
upgrades to running clusters
- (Swift does all these things now. We simply must keep doing them.)

3) Swift 2.0
- This is not (necessarily) an API change
- But what is it? Collection of features? What features?
- It will probably include things like geographically distributed 
clusters and improved replication
- We will be talking about this at the summit.


Company affiliation: I was an employee of Rackspace for about 3 years. In June 
of this year I left Rackspace and joined SwiftStack.

--John






[Openstack] [swift] important upgrade note for swift deployers

2012-08-28 Thread John Dickinson
Swift 1.7
=

The next release of Swift will be version 1.7. This will be our
release for OpenStack Folsom, and is scheduled to land mid-September.
There is an important change for deployers in this release. This
email has the details so you can begin planning your upgrade path.

What's the change
=

The version bump is based in part on a recent patch that changed the
on-disk format of the ring files
(https://github.com/openstack/swift/commit/f8ce43a21891ae2cc00d0770895b556eea9c7845
 ).
This was a necessary change that addresses a major performance issue
introduced by a change in Python between Py2.6 and Py2.7. See
https://bugs.launchpad.net/swift/+bug/1031954 for more detail.

This patch essentially changes a default in a backwards incompatible
way. Swift 1.7 can read the old format but only write the new format.
Therefore deployers can easily upgrade but not easily downgrade or
roll back this part of the system.

This information is included in the official docs at
http://docs.openstack.org/developer/swift/admin_guide.html#managing-the-rings


Safe Upgrade Path
=

This is how deployers can safely upgrade their existing swift cluster:

1) Upgrade the proxy, account, container, and object nodes as normal.
   Cluster operations will continue to work and you can still upgrade
   with no downtime, as always.

2) Once your entire cluster is upgraded, only then upgrade the version
   of swift on the box that builds your ring files (ie where you run
   swift-ring-builder). Upgrading this piece will change the on-disk
   format of your generated ring files. Deploy the new ring files to the
   swift cluster.

Notes:

 - Swift 1.7 can read both old and new format ring files.

 - If you upgrade the swift-ring-builder to the new format and
   generate new ring files with it, you cannot downgrade your cluster
   and use the new rings.


Oh No! I really, really have to downgrade my cluster


1) Downgrade your box where you run swift-ring-builder

2) Rebalance and write out rings (to put them in the old format) and
   deploy them to your cluster

3) Downgrade the rest of the swift cluster





Re: [Openstack] Question about undo delete in Swift with Object Version

2012-08-22 Thread John Dickinson
With the current implementation of versioning in Swift, this isn't possible. 
It's better to think of the feature as versioned writes.

--John
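
For readers unfamiliar with the feature: versioned writes are enabled per
container by pointing it at an archive container, roughly like this (token,
endpoint and names are placeholders):

    curl -X PUT -H 'X-Auth-Token: TOKEN' \
         -H 'X-Versions-Location: my_versions' \
         http://swift.example.com/v1/AUTH_test/my_container

Overwrites of an object in my_container then copy the previous version into
my_versions, and a DELETE pops the most recent archived copy back into place --
but, as noted above, the newest version that the DELETE removed is gone.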


On Aug 22, 2012, at 12:13 AM, ZHOU Yuan dunk...@gmail.com wrote:

 Hi stackers,
 
 I'm trying to understand the object version feature in Swift.
 In the current implementation we can store multiple versions of the same
 object in Swift. However when you delete it from the container, the
 latest version is deleted and this is not recoverable, right?
 
 Is there any magic to restore the deleted version? As I know, some
 users want to keep the history versions, like svn.
 
 --yuan
 





[Openstack] [Swift] Where do we go from here?

2012-08-14 Thread John Dickinson
Swift has many exciting features coming in the OpenStack Folsom
Release this fall, but where do we go from here? What's next for Swift
in grizzly?

I've got some ideas. I'd like to mention them and see where you the
community will take them. I've written up most of them into quick one-
line blueprints in Launchpad. If you'd like to contribute, grab the
blueprint and jump in.

- Optimize the many small writes workload. Swift actually handles
  many small concurrent writes very well. However, many small writes
  generally also imply that the cardinality of a single container
  gets very large. There are two ways this use case can be improved:

- Implement transparent container sharding
  https://blueprints.launchpad.net/swift/+spec/container-sharding

- Provide better listing traversal abstractions. Listing a few
  billion objects ten thousand at a time is somewhat impractical.

- Solve globally distributed clusters. How can I have servers in
  London and servers in San Jose in the same logical swift cluster
  with three replicas total, but guaranteed to have at least one
  replica in each cluster?
  https://blueprints.launchpad.net/swift/+spec/multi-region

- Support a single logical swift cluster with tiers of storage (eg
  cheap spinning disks and expensive high IOPS SSD arrays). Can, for
  example, a user choose to have a container and its objects be served
  from a particular tier of storage?
  https://blueprints.launchpad.net/swift/+spec/storage-tiers

- Some deployers have implemented metadata searching by intercepting
  write requests and sending the metadata to another system. Can
  metadata searching be implemented in swift itself? One possible
  implementation would be to dynamically generate indexes on the
  container DB.
  https://blueprints.launchpad.net/swift/+spec/searchable-metadata

- Support PUTs with unlimited size. Implement server-side large object
  splitting.
  https://blueprints.launchpad.net/swift/+spec/large-single-uploads

- Support the full HTTP spec for range requests
  https://blueprints.launchpad.net/swift/+spec/multi-range-support

- There are a few things that could be done to simplify installation

- Create or refactor existing code into a single swift binary or
  startup script. Would it be possible, for example, to install
  swift and run one command with the data drives listed and swift
  just works?

- Build a ring server that automatically discovers devices
  https://blueprints.launchpad.net/swift/+spec/ring-builder-server

- Provide a simple, intuitive way to test a deployment after install
  https://blueprints.launchpad.net/swift/+spec/post-deploy-test

- Support concurrent reads to objects to support a read-heavy workload
  https://blueprints.launchpad.net/swift/+spec/concurrent-reads

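To make the listing-traversal item above concrete, here is a minimal sketch
(Python 2, stdlib only; the host, account path, and token are placeholders,
not real values) of how a large container is walked today with marker-based
paging:

    import httplib, urllib

    def iter_objects(host, account_path, container, token, page=10000):
        marker = ''
        while True:
            conn = httplib.HTTPConnection(host)
            qs = urllib.urlencode({'limit': page, 'marker': marker})
            conn.request('GET', '%s/%s?%s' % (account_path, container, qs),
                         headers={'X-Auth-Token': token})
            names = conn.getresponse().read().splitlines()
            conn.close()
            if not names:
                return
            for name in names:
                yield name
            # next round trip starts after the last name we saw
            marker = names[-1]

    # e.g. iter_objects('swift.example.com', '/v1/AUTH_test', 'big', 'AUTH_tk...')
    # A few billion objects at 10,000 names per round trip means hundreds of
    # thousands of GETs, which is why better traversal abstractions are wanted.
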
If you are in the San Francisco area, we will have a swift meetup on
August 30 at Citizen Space SF at 6:30 pm.
http://www.meetup.com/openstack/events/77706042/

We will have a swift team meeting on Monday October 1 in
#openstack-meeting at 8pm UTC to discuss the plans for swift over the
next six months and the sessions for the design summit. If you are
interested in participating in swift development, please attend.

If you are a new contributor to swift, please read
http://wiki.openstack.org/HowToContribute.

--John



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Swift] community update and what's coming in Folsom

2012-08-13 Thread John Dickinson
We just released Swift 1.6.0 last Monday
( https://lists.launchpad.net/openstack/msg15505.html ). We've got a lot
of great features and improvements in it, and I wanted to take some
time to update the wider community about where Swift is.

Swift 1.4.8 was included with the last OpenStack release (Essex).
Since then, all of the OpenStack projects have been working towards
OpenStack's Folsom release. It is scheduled for the end of September.
This summer, Swift has made two major releases (1.5.0 and 1.6.0). We
will most likely have one more release of Swift before Folsom is cut.
This next release will be included in OpenStack Folsom.

So what can you expect from swift in the Folsom release? Looking at
the CHANGELOG, there are some exciting changes coming.

First, swift now has deep integration with statsd. This allows for
simple integration into existing statsd monitoring systems and
provides real-time monitoring of nearly every aspect of a swift
cluster. This feature is documented at
http://docs.openstack.org/developer/swift/admin_guide.html#reporting-metrics-to-statsd.
We have also expanded swift-recon to support all
types of servers in the cluster and to report on many of the
background processes used by swift. These features together allow
swift deployers to know exactly what is going on in their clusters.

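The statsd settings are plain config options on each swift server. As a rough
sketch (option names are from the admin guide linked above; the values here
are just placeholders for a local statsd daemon), e.g. in object-server.conf:

    [DEFAULT]
    log_statsd_host = localhost
    log_statsd_port = 8125
    log_statsd_default_sample_rate = 1
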
Also, swift now supports versioned writes. With this feature enabled,
PUTs to an existing object will not overwrite that object but instead
move the current contents into a new location. A complete overview for
versioning is at 
http://docs.openstack.org/developer/swift/overview_object_versioning.html.

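As a quick illustration, enabling versioning is just a matter of container
headers. A minimal sketch (Python 2, stdlib only; the host, account path,
token, and container names are placeholders) looks like this:

    import httplib

    STORAGE_HOST = 'swift.example.com'     # placeholder proxy host
    STORAGE_PATH = '/v1/AUTH_test'         # placeholder account path
    TOKEN = 'AUTH_tk...'                   # token from your auth system

    conn = httplib.HTTPConnection(STORAGE_HOST)

    # Container that will hold the old versions of overwritten objects.
    conn.request('PUT', STORAGE_PATH + '/my_versions',
                 headers={'X-Auth-Token': TOKEN})
    conn.getresponse().read()

    # Point the working container at it; subsequent overwrites in
    # 'my_container' move the previous copy into 'my_versions' instead of
    # discarding it.
    conn.request('PUT', STORAGE_PATH + '/my_container',
                 headers={'X-Auth-Token': TOKEN,
                          'X-Versions-Location': 'my_versions'})
    conn.getresponse().read()
    conn.close()
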
Swift has greatly improved its support for SSD-based account and
container storage. A new db_preallocation config flag can be set to
enable or disable preallocation of swift's sqlite databases. Enabling
preallocation minimizes disk fragmentation (good for spinning drives),
and disabling it maximizes usable space on the drive (good for SSDs).

We have also separated the client tools from swift and moved them into
python-swiftclient. This change benefits other projects that want to
integrate with swift. They can now install supported client tools
without needing to install all of swift.

We have also separated the swift3 middleware from swift. The code is
now managed apart from swift and is found at
https://github.com/fujita/swift3.

Finally, the swift-keystone middleware has moved from the keystone
project into the swift project. This allows those who know swift best
to support the code that ties the two projects together.

Swift's developer community has continued to grow. Since the Essex
release, Swift has had 30 contributors, 13 of whom are new. This
brings us to a total of 71 contributors.

I'm excited about delivering these features in Folsom. Thanks to all
of the contributors for your hard and thoughtful work on swift.

I'll be sending another email shortly about where swift is going in
grizzly and beyond. Stay tuned for more.

--John




smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift + keystone integration

2012-08-11 Thread John Dickinson
Make sure that the endpoint stored in keystone is returning the right 
hostname/domain name and port (8080 based on your config).

--John


On Aug 11, 2012, at 12:58 PM, Miguel Alejandro González maggo...@gmail.com 
wrote:

 Hello
 
 I have 3 nodes with ubuntu 12.04 server and installed openstack with packages 
 from the ubuntu repos
   • controller (where keystone is installed)
   • compute
   • swift
 I'm trying to configure Swift with Keystone but I'm having some problems, 
 here's my proxy-server.conf
 
 [DEFAULT]
 bind_port = 8080
 user = swift
 swift_dir = /etc/swift
 [pipeline:main]
 # Order of execution of modules defined below
 pipeline = catch_errors healthcheck cache authtoken keystone proxy-server
 [app:proxy-server]
 use = egg:swift#proxy
 allow_account_management = true
 account_autocreate = true
 set log_name = swift-proxy
 set log_facility = LOG_LOCAL0
 set log_level = INFO
 et access_log_name = swift-proxy
 set access_log_facility = SYSLOG
 set access_log_level = INFO
 set log_headers = True
 account_autocreate = True
 [filter:healthcheck]
 use = egg:swift#healthcheck
 [filter:catch_errors]
 use = egg:swift#catch_errors
 [filter:cache]
 use = egg:swift#memcache
 set log_name = cache
 [filter:authtoken]
 paste.filter_factory = keystone.middleware.auth_token:filter_factory
 auth_protocol = http
 auth_host = 10.17.12.163
 auth_port = 35357
 auth_token = admin
 service_protocol = http
 service_host = 10.17.12.163
 service_port = 5000
 admin_token = admin
 admin_tenant_name = admin
 admin_user = admin
 admin_password = admin
 delay_auth_decision = 0
 [filter:keystone]
 paste.filter_factory = keystone.middleware.swift_auth:filter_factory
 operator_roles = admin, swiftoperator
 is_admin = true
 
 On Horizon I get a Django error page and says [Errno 111] ECONNREFUSED
 
 From the Swift server I try this command:
 
 swift -v -V 2.0 -A http://10.17.12.163:5000/v2.0/ -U admin:admin -K admin stat
 
 And I also get [Errno 111] ECONNREFUSED
 
 
 Is there any way to debug this??? Is there any conf or packages that I'm 
 missing for this to work on a multi-node deployment? Can you help me?
 
 Regards!
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [swift] Operational knowledge sharing

2012-08-10 Thread John Dickinson
In a standard swift deployment, the proxy server is running behind a load 
balancer and/or an SSL terminator. At SwiftStack, we discovered an issue that 
may arise from some config parameters in this layer, and we'd like to share it 
with other swift deployers.

Symptom:

Users updating metadata (ie POST) on larger objects get 503 error responses. 
However, there are no error responses logged by swift.

Cause:

Since POSTs are implemented, by default, as a server-side copy in swift and 
there is no traffic between the user and swift during the server-side copy, the 
LB or SSL terminator times out before the operation is done.

Solution:

Two options:

1) Raise the timeout in the LB/SSL terminator config. For example, with pound 
change the TimeOut for the swift backend. pound defaults to 15 seconds. The 
appropriate value is however long it takes to do a server-side copy of your 
largest object. If you have a 1gbps network, it will take about 160 seconds to 
copy a 5GB object ((8*5*2**30)/((2**30)/4) -- the divide by 4 is because the 
1gbps link is used to read one stream (the original) and write the new copy (3 
replicas)). A small sketch of this calculation follows the two options below.

2) Change the behavior of POSTs to not do a server-side copy. This will make 
POSTs faster, but it will prevent all metadata values from being updated 
(notably, Content-Type will not be able to be modified with a POST). Also, this 
will not make the issue go away with user-initiated server-side copies.

I would recommend the first solution, unless your workload makes heavy use of 
POSTs.

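Here is that rough sizing sketch for option 1 (illustrative Python, not part
of swift), restating the arithmetic above:

    def copy_seconds(object_bytes, link_gbps=1.0, replicas=3):
        # treat 1 gbps as roughly 2**30 bits/sec to keep the numbers round
        link_bits_per_sec = link_gbps * (2 ** 30)
        # the proxy link carries one read stream plus one write stream per replica
        effective_bits_per_sec = link_bits_per_sec / (1 + replicas)
        return (object_bytes * 8) / effective_bits_per_sec

    # 5GB largest object, 1gbps link, 3 replicas -> 160 seconds, so a
    # 15 second pound TimeOut is far too small.
    print copy_seconds(5 * 2 ** 30)
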
Hope this helps.

--John




smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Swift 1.6.0 released

2012-08-06 Thread John Dickinson
I'm happy to announce that Swift 1.6.0 has been released. You can get the 
tarball at https://launchpad.net/swift/folsom/1.6.0. As always, you can upgrade 
your production Swift clusters to this new version with no downtime to your 
clients.

The complete changelog for this release is at 
https://github.com/openstack/swift/blob/master/CHANGELOG, but I'd like to 
highlight a few of the more significant changes.

First, the bin/swift CLI client and swift/common/client.py have been moved to 
the new python-swiftclient OpenStack project. This change allows other projects 
to use an officially supported client without having to install all of Swift. 
Most immediately, Glance and Horizon will be able to use this. The 
python-swiftclient project is also helpful to non-OpenStack projects wanting to 
integrate with Swift. Note that Swift now depends on the new python-swiftclient 
project.

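For code that used the old in-tree client, the change is mostly an import
path. A small sketch (assuming python-swiftclient is installed; the auth
endpoint and credentials below are placeholders):

    from swiftclient.client import Connection  # was: from swift.common.client import Connection

    conn = Connection(authurl='http://127.0.0.1:8080/auth/v1.0',  # placeholder auth endpoint
                      user='test:tester', key='testing')
    headers, containers = conn.get_account()   # list containers in the account
    print [c['name'] for c in containers]
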
Secondly, Swift now includes the Keystone middleware keystoneauth. This now 
matches the pattern set by other OpenStack projects and is the logical place to 
support this part of the Swift-Keystone integration.

Lastly, the swift-dispersion-report now works with a replica count other than 
three. While this allows the tool to be more useful, it does necessitate a 
format change to the JSON returned. Therefore existing tools using the output 
of swift-dispersion-report will need to be updated.

There are many other updates and bugfixes in this release. I encourage you to 
read the entire changelog.

On the contributor side, this Swift release is the result of the work of 16 
contributors (`git shortlog --no-merges -nes 1.5.0..1.6.0 | wc -l`), 5 of whom 
are new to Swift. This brings the total contributor count for Swift to 71. The 
5 new contributors to swift are:

 - François Charlier (francois.charl...@enovance.com)
 - Iryoung Jeong (iryo...@gmail.com)
 - Tsuyuzaki Kota (tsuyuzaki.k...@lab.ntt.co.jp)
 - Dan Prince (dpri...@redhat.com)
 - Vincent Untz (vu...@suse.com)

Thank you to everyone who contributed for your hard work and commitment to 
making Swift the most reliable, open, and production-ready object storage 
system in the world.

--John




smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] next swift release just around the corner

2012-07-23 Thread John Dickinson
The next swift release is scheduled for public release next Monday (July 30). 
That means we've got a little bit of work to do this week to get it ready.

In order to allow Cloud Files QA time to check it, we need to have packages 
built by the middle of the day Wednesday. This means all outstanding reviews 
that should get in to the next release should be merged by the end of the day 
Tuesday (or very early on Wednesday). I think we have a few outstanding reviews 
that could and should make it in.

Overall, this looks like a pretty good set of features to release. Here is my WIP 
changelog for the release: 
https://github.com/notmyname/swift/blob/1.5.1-changelog/CHANGELOG


--John




smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] LFS patch (Ia32c9c34)

2012-07-18 Thread John Dickinson
Nexenta's LFS patch (https://review.openstack.org/#/c/7524/) has languished for 
a while, and I'd like to address that.

First, thank you for your patch submission. This patch adds new functionality 
that potentially can allow swift to be deployed in more places. The original 
version of the patch, which you referenced, was quite a bit more complex. 
Thanks for listening to the feedback from the reviewers and refactoring out 
most of the complexity. The current state of the patch appears to be much 
improved. I do hope that the patch can be approved and merged into swift.

However, there are two things which make it more difficult to approve this 
patch: review time and ability to test.

This patch touches the ring builder, which is a rather complex part of swift. 
To properly review it, it will take two of the core devs at least a day each. 
Unfortunately, putting other normal job duties on hold for a day or more is 
very hard to do. This isn't a problem with Nexenta or the patch itself; it 
actually points to a problem with swift. We shouldn't have a part of the code 
so integral to the system that requires a full dev day every time it's touched.

The other issue with approving the patch is testing. Any new feature that is 
merged into swift becomes something that all swift contributors must now 
support and maintain. The maintenance burden is lessened (but not eliminated) 
by any sort of testing that can be provided. The LFS patch adds functionality 
that cannot be well tested. At best, we can only test that the patch doesn't 
break any existing functionality. But we have no way to ensure that later 
patches won't break the functionality that this patch provides. Since this 
patch is currently only really useful with Nexenta's separate lfs middleware 
for ZFS, and since there is no testing infrastructure set up to test swift on 
Solaris/ZFS, we cannot offer any sort of support or maintenance for the feature 
this patch provides.

If Nexenta would like to provide and run some hardware for testing purposes, it 
would go a long way to helping ensure that this feature and others like it can 
be properly added to and maintained in swift. If this LFS patch is indeed 
accepted, it will be Nexenta's responsibility to ensure that all future 
patches in swift do not break the LFS functionality. (This applies to the 
previously merged patch for Solaris compatibility, too.)



--John





On Jul 16, 2012, at 3:45 PM, Victor Rodionov wrote:

 Hello
  
 I've submit patch (https://review.openstack.org/#/c/7101/), that help Swift 
 use special features of file system on that it working.
  
 One of the  changes in this patch is for reduce number of network replicas of 
 partition if user use self-repairing mirrored device. For this user should 
 add mirror_copies parameter to each device. By default mirror_copies for all 
 devices is 1, so changes of code don't take any effect for current Swift 
 deployments.  For almost all systems three singleton replicas can be replaced 
 by two mirrored replicas. So if all user devices is mirrored (mirror_copies 
 = 2), then number of network copies of most partition will be reduced, and 
 then for operation like PUT and POST we will make less request. The 
 definition of mirroring specifically requires the local file system detect 
 the bad replica on its own, such as by calculating checksums of the content, 
 and automatically repairing data defects when discovered.  So if one of 
 devices fail recovery will be done by file system without coping data from 
 other device. This changes was made in ring builder and take effect if 
 mirror_copies  1, so this code is not danger for current Swift users, but 
 for other users can provide new possibility.
  
 Also this patch add hooks, that can be used for manipulation with file 
 system, when Swift operate with account, container or object files. This 
 hooks used by middleware that is separate project, so if user don't install 
 it this changes will not take effect.
  
 This feature only enabled by customers that have chosen to install  the 
 enabling software and turn it on and it is easy to test that this patches 
 have no impact on the generic deployments.
  
 Most of patch code was restructured, most of logic was moved to middleware 
 level and use hooks in Swift code. I create separate project (LFS middleware 
 https://github.com/nexenta/lfs) for now there are only 2 supported file 
 system types (XFS and ZFS) there. Also this middleware provide API for 
 getting file system status information (for example, for ZFS it's current 
 pool status, etc).
  
 Further the Nexenta side-project is not the only local file system that could 
 provide this form of local replication and data protection.Trading off 
 between network replication and local replication is a valid performance 
 decision. Insisting on a fixed amount of network replication without regard 
 to the degree of local protection provided against data loss would 
 

Re: [Openstack] Statsd on SWIFT 1.4.8

2012-07-05 Thread John Dickinson
statsd integration was added in swift 1.5.0


On Jul 5, 2012, at 9:07 AM, Leandro Reox wrote:

 On swift essex stable 1.4.8, is the capacity to send statistics to a statsd 
 server available? Its not in the docs
 
 Regards
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] [Storage node] Lots of timeouts in load test after several hours around 1, 000, 0000 operations

2012-07-01 Thread John Dickinson
I hope you are able to get an answer. I'm traveling this week, so I won't have 
a chance to look in to it. I hope some of the other core devs will have a 
chance to help you find an answer.

--John


On Jul 1, 2012, at 2:03 PM, Kuo Hugo tonyt...@gmail.com wrote:

 Hi all , 
 
 I did several loading tests for swift in recent days. 
 
 I'm facing an issue ... Hope you can share you consideration to me ... 
 
 My environment:
 Swift-proxy with Tempauth in one server : 4 cores/32G rams 
 
 Swift-object + Swift-account + Swift-container in storage node * 3 , each for 
 : 8 cores/32G rams   2TB SATA HDD * 7 
 =
 bench.conf :
 
 [bench]
 auth = http://172.168.1.1:8082/auth/v1.0
 user = admin:admin
 key = admin
 concurrency = 200
 object_size = 4048
 num_objects = 10
 num_gets = 10
 delete = yes
 =
 
 After 70 rounds .
 
 PUT operations get lots of failures , but GET still works properly
 ERROR log:
 Jul  1 04:35:03 proxy-server ERROR with Object server 
 192.168.100.103:36000/DISK6 re: Trying to get final status of PUT to 
 /v1/AUTH_admin/af5862e653054f7b803d8cf1728412d2_6/24fc2f997bcc4986a86ac5ff992c4370:
  Timeout (10s) (txn: txd60a2a729bae46be9b667d10063a319f) (client_ip: 
 172.168.1.2)
 Jul  1 04:34:32 proxy-server ERROR with Object server 
 192.168.100.103:36000/DISK2 re: Expect: 100-continue on 
 /AUTH_admin/af5862e653054f7b803d8cf1728412d2_19/35993faa53b849a89f96efd732652e31:
  Timeout (10s)
 
 
 And kernel starts to report failed message as below
 kernel failed log:
 7 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.020736] w83795 
 0-002f: Failed to read from register 0x03c, err -6
76667 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.052654] w83795 
 0-002f: Failed to read from register 0x015, err -6
76668 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.080613] w83795 
 0-002f: Failed to read from register 0x03c, err -6
76669 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.112583] w83795 
 0-002f: Failed to read from register 0x016, err -6
76670 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.144517] w83795 
 0-002f: Failed to read from register 0x03c, err -6
76671 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.176468] w83795 
 0-002f: Failed to read from register 0x017, err -6
76672 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.208455] w83795 
 0-002f: Failed to read from register 0x03c, err -6
76673 Jul  1 16:37:51 angryman-storage-03 kernel: [350840.240410] w83795 
 0-002f: Failed to read from register 0x01b, err -6
76674 Jul  1 16:37:51 angryman-storage-03 kernel: [350840.272Jul  1 
 17:05:28 angryman-storage-03 kernel: imklog 6.2.0, log source  = 
 /proc/kmsg started.
 
 PUTs become slower and slower , from 1,200/s to 200/s ...
 
 I'm not sure if this is a bug or that's the limitation of XFS. If it's an 
 limit of XFS . How to improve it ?
 
 An additional question is XFS seems consume lots of memory , does anyone know 
 about the reason of this behavior?
 
 
 Appreciate ...
   
 
 -- 
 +Hugo Kuo+
 tonyt...@gmail.com
 +886 935004793
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift Probetests

2012-06-26 Thread John Dickinson
The probe tests are internal, whole-system tests that exercise functionality 
never exposed through normal integration testing. They exist at the level 
between unit tests and functional tests. For example, one of the probetests 
makes sure that asynchronous container updates actually happen. Unit tests 
aren't concerned with this sort of internal system integration test, and 
functional tests only test external functionality. The probe tests ensure that 
the interaction between the different swift processes are still functioning as 
expected.

At least that's the idea. If they are broken in master right now, then that 
shows how little they are checked. Unless the other swift core devs feel 
differently, I think they should probably be fixed up.

--John


On Jun 26, 2012, at 4:21 PM, Jay Pipes wrote:

 Not that I know of.
 
 Best,
 -jay
 
 On 06/26/2012 04:54 PM, Maru Newby wrote:
 Have I missed a response in the past week?
 
 
 On 2012-06-19, at 12:14 PM, Jay Pipes wrote:
 
 On 06/19/2012 11:10 AM, Maru Newby wrote:
 The swift probetests are broken:
 
 https://bugs.launchpad.net/swift/+bug/1014931
 
 Does the swift team intend to maintain probetests going forward?  Given 
 how broken they are at present (bad imports, failures even when imports 
 are fixed), it would appear that probetests are not gating commits.  That 
 should probably change if the tests are to be maintainable.
 
 Hi Maru, cc'ing Jose from the Swift QA team at Rackspace...
 
 I don't know what the status is on these probetests or whether they are 
 being maintained. Jose or John, any ideas? If they are useful, we could 
 bring them into the module initialization of the Tempest Swift tests.
 
 Best,
 jay
 
 



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] S3 like ACL for Swift

2012-06-20 Thread John Dickinson
Yes, this could be good for swift.

ACLs in swift do need to be stored in swift (for scale reasons), but their 
implementation is dependent on the particular auth system that you are using. 
The auth middleware is responsible for determining if a request is granted 
access to a particular swift entity. How does your implementation work with the 
current ACL support provided by tempauth and swauth? Are your ACLs compatible 
with the RBAC work being done in keystone?

I would suggest that general, full-featured ACL support should be done in 
conjunction with the work done in keystone and the swift-keystone middleware. 
If your implementation is simply more full-featured S3 compatibility, I'd 
suggest patching the 3rd party swift3 middleware.

--John


On Jun 20, 2012, at 9:38 AM, Victor Rodionov wrote:

 Hello
 
 I have working implementation of S3 like ACL API for Swift, for this changes 
 I need to store ACL on object and container server, then I need to change 
 container and object servers code.
 
 So my question, if this changes will be interesting for Swift community or no?
 
 Thanks,
 Victor
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] S3 like ACL for Swift

2012-06-20 Thread John Dickinson

On Jun 20, 2012, at 11:02 AM, Victor Rodionov wrote:
 
 Also, I want ask do you think it's good idea to store object ACL in object 
 metadata?


I'd suggest looking at container-level ACLs rather than object-level. But 
either way, the data does need to be stored in the metadata in swift itself. 
Storing the ACL information for tens of millions of containers or a hundred 
billion objects can't really be done well in the auth system. This is why the 
information needs to be stored in swift itself. The auth middleware then 
queries the auth system with the auth token and URL and gets back the allowed 
groups. The middleware then compares the groups returned from the auth system 
to the groups stored in the metadata. This is essentially the design of ACLs in 
tempauth and swauth.

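For reference, here is a minimal sketch (Python 2, stdlib only; the host,
paths, token, and account:user names are placeholders) of what setting
tempauth/swauth-style container ACLs looks like on the wire:

    import httplib

    conn = httplib.HTTPConnection('swift.example.com')   # placeholder proxy host
    conn.request('POST', '/v1/AUTH_test/shared_container',
                 headers={'X-Auth-Token': 'AUTH_tk...',
                          # users/groups granted read access (and listings)
                          'X-Container-Read': 'otheraccount:otheruser',
                          # users/groups granted write access
                          'X-Container-Write': 'otheraccount:otheruser'})
    print conn.getresponse().status   # expect 204 on success
    conn.close()
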
--John




smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Interested in implementing swift ring builder server

2012-06-19 Thread John Dickinson
It looks like Florian (at Rackspace) is working on that blueprint. He just 
assigned it to himself.

I'm happy to hear that you have some extra devs for swift work. I'd love to 
help coordinate some swift goals with you.

Off the top of my head, here are a few things that could be worked on:

1) Handoff logic should allow every node in the cluster to be used (rather than 
just every zone).

2) Compatibility with Ubuntu 12.04 needs to be solid

3) Make installation trivially easy.

--John



On Jun 18, 2012, at 2:23 PM, Mark Gius wrote:

 Hello Swifters,
 
 I've got some interns working with me this summer and I had a notion that 
 they might take a stab at the swift ring builder server blueprint that's been 
 sitting around for a while 
 (https://blueprints.launchpad.net/swift/+spec/ring-builder-server).  As a 
 first step I figured that the ring-builder-server would be purely an 
 alternative for the swift-ring-builder CLI, with a future iteration adding 
 support for deploying the rings to all servers in the cluster.  I'm currently 
 planning on making the ring-builder server be written and deployed like the 
 account/container/etc servers, although I imagine the implementation will be 
 a lot simpler.
 
 Is anybody else already working on this and forgot to update the blueprint?  
 If not can I get the blueprint assigned to me on launchpad?  Username 
 'markgius'.  Or if there's some other process I need to go through please let 
 me know.
 
 Mark
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Opening up bug triaging rights

2012-05-14 Thread John Dickinson

On May 14, 2012, at 10:06 AM, Thierry Carrez wrote:

 Hello everyone,
 
 Currently the bug triaging rights for a given PROJECT (ability to set
 status and importance of bugs, but also ability to nominate a bug for a
 past series) is restricted to the corresponding PROJECT-bugs team, which
 is generally a moderated team that nobody really monitors new members
 applications for. This restricts the number of people who can help with
 bugs, whereas we should probably encourage more people to do that.
 
 During the bug triaging session at the OpenStack Design Summit we
 proposed to open membership to the core PROJECT-bugs teams. This means
 that anybody could join the team(s) and start helping with bug triaging.
 If we get the documentation right first, the benefit (more triagers,
 empowered community) should outweigh the drawbacks (potentially insane
 triaging that needs to be reverted).
 
 If all projects are in agreement with this plan, we would create a
 single, open, openstack-bugs team. People joining that team would be
 able to help with bug triaging in all OpenStack core projects. This
 would certainly be clearer than having multiple teams with different
 membership rules.

Doesn't this come with the downside that now everyone will see every bug? That 
could lead to a lot of noise that a contributor will need to filter before 
seeing the bugs that are important to that person.


--John




smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Swift] swift news and plans

2012-05-04 Thread John Dickinson
TL;DR: removing code from swift, associated projects doc, swift 1.5.0

I want to let the openstack community know of some recent changes within swift 
and how those changes will affect the next version of swift. Swift has a 
growing developer community and a rapidly expanding deployed base. While this 
growth is fantastic, it does come with new challenges, especially for the 
swift core developers. As more and varied use cases are presented to swift, 
patches are submitted that enhance swift's functionality either by offering 
optional features or alternative APIs.

The challenge with this growth is that the core developers become responsible 
for understanding and maintaining an ever-increasing codebase. This 
responsibility becomes a timesink, both for reviews and for fixing regression 
bugs as new core features are added. For non-core developers, the review 
process for new code becomes slower, and changes that don't affect swift's 
core functionality often fall to the bottom of the pile--sometimes even to the 
point of expiring due to inactivity.

Our solution for these problems is to limit the scope of swift. Swift's core 
functionality is to provide cheap, durable, and scalable object storage 
exposed through its own API. Other functionality and alternative APIs should 
be maintained separately from the swift codebase.

As a result of this focus in scope, we have begun removing some of the 
optional parts of swift. Initially, this will include the tempurl, formpost, 
staticweb, rate limiting, swift3, domain remap, and cname lookup middleware 
modules. Proposed patches that offer alternative APIs (like CDMI) or include 
optional functionality that can be implemented external to swift will be 
encouraged to be developed separately from swift.

We have already begun the process of removing many of these pieces of 
middleware from swift and moving them into their own respective repos.

However, all of this functionality is quite valuable and beneficial to swift. 
There is a real need for most of these modules. Separating them from swift 
introduces the problem of discoverability. As a result, we have added a new 
page to our swift docs that lists associated projects and added links to that 
page on swift.openstack.org.

http://swift.openstack.org/associated_projects.html

This page is fairly limited right now, but the basic structure is there. As 
things are removed from swift and as new associated projects are created, they 
will be added to the list. This doc page is maintained in the swift codebase, 
so updating it is subject to the same requirements of any other patch to swift.

An important note is that this list offers no distinction or references to 
official or approved associated projects. This list is independent of any 
openstack CI integration that may or may not happen in the future.

Once we finish the process of migrating the optional pieces of swift away from 
the swift codebase, we will cut our next release: swift 1.5.0. There is no 
date set  for this yet, but I hope the migration process can be finished in 
the next several weeks. Swift 1.5.0, therefore, will be somewhat larger than 
most of our other swift releases. Existing deployers will need to be careful 
about upgrading to ensure that new dependencies are met.

If you have any questions, please feel free to email me. This whole effort is 
a work-in-progress. I know that there are several similar discussions going on 
within the openstack community, and swift's solution is not necessarily 
intended to replace any more general solution that may eventually arise. If 
there is a better solution at some point, we will do what we can to integrate 
with it.

--John




smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack-poc] 3rd Party APIs

2012-05-03 Thread John Dickinson

On May 3, 2012, at 1:16 PM, Jay Pipes wrote:
 
 The term recommended comes with a lot of baggage :) I don't want plugins to 
 be recommended or suggested -- at least by the community; companies should 
 feel free to recommend or suggest whatever they feel is best for their distro 
 or deployment. I just want a category called OpenStack Extensions (or 
 Plugins, depending on what the semantics-du-jour happen to be.

I agree with this, which is why I support option b

--John

smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack-poc
Post to : openstack-poc@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-poc
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Openstack-operators] Question Regarding Swift Distribution Statistics

2012-05-02 Thread John Dickinson
On your proxy server, use swift-get-nodes to see which servers your object is 
on. With no arguments or --help you will get usage info.

--John


On May 2, 2012, at 3:48 PM, Duncan McGreggor wrote:

 cc'ing openstack list
 
 On Wed, May 2, 2012 at 4:45 PM, Richard Raseley rich...@raseley.com wrote:
 I am trying to figure out the best way to see view the distribution of a
 file or files across my test swift setup. I want to basically upload a file
 or files to containers and then be able to run a command or script that will
 tell me See, these files *are* actually distributed over these particular
 zones / nodes.
 
 Is there anything built into Swift like this? Thank you in advance for your
 answers.
 
 ___
 Openstack-operators mailing list
 openstack-operat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HTTP status value naming normalization

2012-04-21 Thread John Dickinson
I like what you are trying to do here. Can you please submit this as a patch 
through gerrit so we can get the rest of the core devs to look at it?

--John


On Apr 20, 2012, at 12:14 PM, Victor Rodionov wrote:

 There are many place in Swift code where used hard coded values, such
 as response statuses (200, 201, 404, ...) which can replaced with
 constants HTTP_OK, HTTP_CREATED, HTTP_NOT_FOUND. Also there is widely
 used idiom 200 <= status < 300, that can be replaced as well with
 something like this is_success(status). I want add modules for
 defining all required constants (Swift, HTTP).
 
 So I think this changes will improve Swift code readability.
 
 PS: this is an initial changes in github
 https://github.com/vitoordaz/swift/commit/7163d5df13ceaf8fc7b53ba812fe16bd7dd31131
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Does there any exist blueprint or sub-project of user's storage space quota or counting method for Swift ?

2012-04-12 Thread John Dickinson
Swift keeps total bytes, container, and object count (eventually) up-to-date in 
the account metadata. There are also log processing tools (like slogging - 
http://github.com/notmyname/slogging) that can provide usage information 
(including bandwidth) based on swift logs.

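As an aside, those counters are exposed as plain headers on an account HEAD; a
small sketch (Python 2, stdlib only; host, account path, and token are
placeholders):

    import httplib

    conn = httplib.HTTPConnection('swift.example.com')   # placeholder proxy host
    conn.request('HEAD', '/v1/AUTH_test',
                 headers={'X-Auth-Token': 'AUTH_tk...'})
    resp = conn.getresponse()
    print resp.getheader('X-Account-Bytes-Used')        # total bytes stored
    print resp.getheader('X-Account-Container-Count')   # number of containers
    print resp.getheader('X-Account-Object-Count')      # number of objects
    conn.close()
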
While I think that it's appropriate for swift to generate the usage information 
(via internal processes or log processing), the appropriate place for quotas is 
in whatever system handles the concept of a user (normally the auth system). 
This way quotas are enforced by revoking or limiting access of the auth token.

--John


On Apr 12, 2012, at 11:53 AM, Frederik Van Hecke wrote:

 Hi Kuo,
 
 One option would be to keep the usage information (num files, num bytes, etc) 
 per container / account in an sqlite DB, just like it is done for account and 
 container info.
 
 To avoid having to loop through all data at regular intervals (to update the 
 info), additional logic could be added to the api methods to update the 
 sqlite DB's when new files are added, files are deleted, etc. Such approach 
 will require more lines of code, but will be far less stressful on 
 performance.
 
 (the brute-force approach to loop through it at regular intervals will be 
 hell on performance on large deployments..)
 
 
 For data transfer billing based on download / upload amounts, a similar 
 approach could be used.
 
 If no one else is looking into this, I would certainly be willing to help to 
 help get this started.
 
 
 Kind regards,
 Frederik Van Hecke
 
 T:  +32487733713
 E:  frede...@cluttr.be
 W: www.cluttr.be
 
 
 
 
 
 This e-mail and any attachments thereto may contain information which is 
 confidential and/or protected by intellectual property rights and are 
 intended for the sole use of the recipient(s)named above. Any use of the 
 information contained herein (including, but not limited to, total or partial 
 reproduction, communication or distribution in any form) by persons other 
 than the designated recipient(s) is prohibited. If you have received this 
 e-mail in error, please notify the sender either by telephone or by e-mail 
 and delete the material from any computer. Thank you for your cooperation.
 
 
 
 On Thu, Apr 12, 2012 at 17:45, Kuo Hugo tonyt...@gmail.com wrote:
 Hi folks , 
 
 I'm thinking about the better approach to manage an user or an account 
 space usage quota in swift.
 Is  there any related blueprint or sub-project even an idea around ?
 Any suggestion of benefits to be an external service or to be a middle-ware 
 in swift-proxy ?
 
 I'm concerning about such feature will reduce the performance of entire Swift 
 environment. 
 
 Appreciate :
 
   
 
 -- 
 +Hugo Kuo+
 tonyt...@gmail.com
 +886 935004793
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Does there any exist blueprint or sub-project of user's storage space quota or counting method for Swift ?

2012-04-12 Thread John Dickinson
I should also mention the summit session talking about this very topic led by 
Everett Toews. It's (currently) scheduled for 9am on wednesday.

http://summit.openstack.org/sessions/view/81

--John



On Apr 12, 2012, at 8:51 PM, John Dickinson wrote:

 Swift keeps total bytes, container, and object count (eventually) up-to-date 
 in the account metadata. There are also log processing tools (like slogging - 
 http://github.com/notmyname/slogging) that can provide usage information 
 (including bandwidth) based on swift logs.
 
 While I think that it's appropriate for swift to generate the usage 
 information (via internal processes or log processing), the appropriate place 
 for quotas is in whatever system handles the concept of a user (normally the 
 auth system). This way quotas are enforced by revoking or limiting access of 
 the auth token.
 
 --John
 
 
 On Apr 12, 2012, at 11:53 AM, Frederik Van Hecke wrote:
 
 Hi Kuo,
 
 One option would be to keep the usage information (num files, num bytes, 
 etc) per container / account in an sqlite DB, just like it is done for 
 account and container info.
 
 To avoid having to loop through all data at regular intervals (to update the 
 info), additional logic could be added to the api methods to update the 
 sqlite DB's when new files are added, files are deleted, etc. Such approach 
 will require more lines of code, but will be far less stressful on 
 performance.
 
 (the brute-force approach to loop through it at regular intervals will be 
 hell on performance on large deployments..)
 
 
 For data transfer billing based on download / upload amounts, a similar 
 approach could be used.
 
 If no one else is looking into this, I would certainly be willing to help to 
 help get this started.
 
 
 Kind regards,
 Frederik Van Hecke
 
 T:  +32487733713
 E:  frede...@cluttr.be
 W: www.cluttr.be
 
 
 
 
 
 This e-mail and any attachments thereto may contain information which is 
 confidential and/or protected by intellectual property rights and are 
 intended for the sole use of the recipient(s)named above. Any use of the 
 information contained herein (including, but not limited to, total or 
 partial reproduction, communication or distribution in any form) by persons 
 other than the designated recipient(s) is prohibited. If you have received 
 this e-mail in error, please notify the sender either by telephone or by 
 e-mail and delete the material from any computer. Thank you for your 
 cooperation.
 
 
 
 On Thu, Apr 12, 2012 at 17:45, Kuo Hugo tonyt...@gmail.com wrote:
 Hi folks , 
 
 I'm thinking about the better approach to manage an user or an account 
 space usage quota in swift.
 Is  there any related blueprint or sub-project even an idea around ?
 Any suggestion of benefits to be an external service or to be a middle-ware 
 in swift-proxy ?
 
 I'm concerning about such feature will reduce the performance of entire 
 Swift environment. 
 
 Appreciate :
 
 
 
 -- 
 +Hugo Kuo+
 tonyt...@gmail.com
 +886 935004793
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift: NAS or DAS?

2012-03-16 Thread John Dickinson
Generally, you would introduce latency in the storage system by using a NAS 
attached to a storage node. Also, at scale, your costs will be dominated by 
drives, so you will want to optimize the storage nodes for dense, cheap storage.

--John


On Mar 16, 2012, at 8:32 AM, Michaël Van de Borne wrote:

 Hi all,
 
 on the very useful www.referencearchitecture.org website, and in every piece 
 of documentation on Swift, I never found anything like a NAS attached to a 
 storage node. It was all about DAS solution.
 Is there a specific reason why a NAS wouldn't be a good choice to build a 
 swift infrastructure?
 
 thank you
 
 
 -- 
 Michaël Van de Borne
 RD Engineer, SOA team, CETIC
 Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli
 www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] swift and two data-centres - hierarchical zones?

2012-03-12 Thread John Dickinson

On Mar 10, 2012, at 5:58 AM, John Leach wrote:
 
 I think what I need here is hierarchical zones - I'd define one parent
 zone per data-centre, and then multiple child zones within each
 (representing racks or whatever).
 
 Swift would be configured to write 3 replicas in 3 child zones, aiming
 for at least 1 one replica per parent zone (handing off if the parent
 zone is unavailable).

This would be a great feature for swift, and it's very much in line with some 
things we've brainstormed about.

 
 I saw an oscon 2011 swift talk slide that mentioned layered zones as
 future dev work - mentioning cabinets, not zones.  Extrapolating from
 these 5 words, this is exactly what I need, when will it be ready? ;)
 
 Any thoughts on this?  Can the existing Ring implementation be extended
 to do this kind of thing? Is the code modular enough to be able to make
 the Ring implementation pluggable?

Beyond gholt's brimring (which you looked at earlier), I don't know of any work 
that has been done on this. I certainly think that the existing ring can be 
modified to handle these use cases, but it's not really pluggable (beyond a 
replacement that supports the same methods). I'd like to see the existing ring 
implementation expanded for these use cases (and remain compatible with 
existing deployments) rather than move to a plugin/extensions/whatever model.

If you and others would like to work on these features, I think a large part of 
the swift community would gratefully accept it.

--John




smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift concept architecture

2012-03-12 Thread John Dickinson
On Mar 11, 2012, at 12:16 AM, Dmitry Ukov wrote:

 Hi all,
 I want to introduce some ideas about Swift.
  
 Let’s assume we have huge amount of data stored in Swift (e.g. 10Pb). This 
 data are dynamically changed by users. So we need to reduce network load 
 caused by replication and intensive data uploading/downloading. 
 My proposal is to create so called “Ring of rings”. For example we have 2 
 data centers with deployed Swift. We can distinguish some nodes for serving 
 “Ring of rings” (actually we need only Proxy Servers).

snip

 So we can use Ring to determine data center to send http request to.
 
 What do you think about this scheme?
 Feedback from the OpenStack/Swift community would be very appreciated.

I think these are great ideas to explore. Solving a multi-DC deployment is a 
goal we have talked about for a long time.

There is another thread on this mailing list from John Leach that is talking 
about solving this same problem, but it uses tiered zones in a single ring 
instead of a ring of rings. While conceptually similar (indeed it may just be 
semantic differences), I think the tiered zones approach is a better path to 
explore, and I'd love to hear your thoughts on that thread.

--John





smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Swift patches for the essex release

2012-03-07 Thread John Dickinson
The final swift release for openstack essex is coming quickly. Swift 1.4.8 is 
tentatively scheduled for March 22nd. To get your patches into this release, 
please submit them for review by next week so we have enough time to review and 
QA them.

--John




smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Cannot create container.

2012-02-24 Thread John Dickinson
a 507 response means that the drive was unmounted. If you are running this in a 
VM (like the SAIO), then you need to disable the mount check. Docs for this are 
in the SAIO docs.

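For an SAIO-style VM, that usually means something like the following in the
[DEFAULT] section of each account-, container-, and object-server config (a
sketch; check the sample configs shipped with your version):

    [DEFAULT]
    mount_check = false
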
--John


On Feb 24, 2012, at 9:13 AM, Leander Bessa wrote:

 Hello,
 
 I'm trying to set up a custom swift server in a virtual machine. I have based 
 my install on the SAIO configurations but i'm using repository packages 
 instead of the cloning from git. 
 
 I'm using three partitions to simulate 3 different zones. Everything appears 
 to be running.  However, i am unable to create a new container (keep getting 
 404 response) and the only reference i can find in the logs is this:
 Feb 24 15:09:41 ubserver account-server 172.16.225.162 - - 
 [24/Feb/2012:15:09:41 +] HEAD /sdb1/99702/AUTH_admin 507 - 
 tx51735d3079074bfab3900df92d74466e - - 0.0059 
 Feb 24 15:09:41 ubserver account-server 172.16.225.162 - - 
 [24/Feb/2012:15:09:41 +] HEAD /sdb3/99702/AUTH_admin 507 - 
 tx51735d3079074bfab3900df92d74466e - - 0.0059 
 Feb 24 15:09:42 ubserver account-server 172.16.225.162 - - 
 [24/Feb/2012:15:09:42 +] HEAD /sdb2/99702/AUTH_admin 507 - 
 tx51735d3079074bfab3900df92d74466e - - 0.0059 
 Feb 24 15:09:42 ubserver proxy-server 172.16.225.1 172.16.225.1 
 24/Feb/2012/15/09/42 PUT /v1/AUTH_admin/default HTTP/1.0 404 - 
 curl/7.21.4%20%28universal-apple-darwin11.0%29%20libcurl/7.21.4%20OpenSSL/0.9.8r%20zlib/1.2.5
  admin%2CAUTH_tk9c942206e5d743fab3f7b45abb7d8741 - - - 
 tx51735d3079074bfab3900df92d74466e - 0.0459
 
  Any ideas?
 
 Regards,
 
 Leander
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] swift authors list

2012-02-24 Thread John Dickinson
Stefano has asked that swift start keeping email addresses in our AUTHORS file. 
I'll be adding these soon, but some contributors have more than one email. If 
you have a particular email address you'd like to have me add to your name, 
please let me know. Otherwise, I will use one from git log.

No need to respond to the list for this; just respond to me privately. Please 
let me know ASAP so that we don't hold up any election plans.

--John

smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift container ACLs and container visibility question

2012-02-23 Thread John Dickinson
It all depends on the auth system you are using.

Below is for swauth and tempauth:

Are the users using the same shared storage? If so, set them up as .admin users 
with the same storage endpoint. If they are not using the same shared storage, 
then you may be stuck. The ACL support in swauth and tempauth is only on a 
container level (so you can't give permissions to do an account listing to see 
the containers in it). Of course, if this is something you need, then patches 
can be added to support this functionality.

--John


On Feb 23, 2012, at 3:55 PM, Lillie Ross-CDSR11 wrote:

 I'm setting up Swift storage for an internal project.  For the project's use 
 of Swift, I want all members of the project to be able to see what's stored 
 in Swift.  Applying suitable ACLs, it's possible for user's to see the 
 contents of the projects container.  However, is there any way to allow users 
 to see a list of containers used by the project?  Or must I create an 
 additional container to store this type of project meta data?  May be a 
 dumb question and more of a architecture convention issue, but I'm just 
 getting started with Swift and OpenStack in general and was wondering what 
 other's have done.
 
 Thanks and regards,
 Ross
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] swprobe: swift middleware for sending metrics to graphite using statsd

2012-02-21 Thread John Dickinson
That's great. Have you by any chance seen 
https://github.com/pandemicsyn/swift-informant? It's something similar that 
we've been playing with at Rackspace.

--John


On Feb 21, 2012, at 10:36 AM, Jasper Capel wrote:

 Hi all,
 
 I'm announcing a piece of Swift middleware, swprobe [1], designed to gather 
 run-time metrics and ship them off to Graphite [2] for near real-time 
 monitoring. Currently it sends out bytes up- and downloaded per account, http 
 methods and response codes and timings in miliseconds on each call.
 
 To be able to use this you need Graphite [2]. You also need statsd running, 
 preferably on the local machine since there potentially many small UDP 
 packets are being sent out. Please also note that we have not yet tested this 
 with production workloads.
 
 [1] - https://github.com/spilgames/swprobe
 [2] - http://graphite.wikidot.com/
 [3] - https://github.com/etsy/statsd
 
 Best regards,
 
 -- 
 Jasper Capel
 Lead Infrastructure Engineer
 
 W http://www.spilgames.com | S jwcapel-spil
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] python-swiftclient?

2012-02-13 Thread John Dickinson

On Feb 13, 2012, at 8:29 AM, Chmouel Boudjnah wrote:
 
 
 What do you think if we :
 
 - split swift.common.client to its own.
 - have bin/swift import that package and shipped with it.
 - have a comprehensive test suite covering the CLI and the library.
 - have some proper PIP release for all the projects to depend on it.


+1

Also, you may want to look at https://github.com/gholt/swiftly

--John

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Doubts about virtual CPUs and Swift storage capacity.

2012-02-11 Thread John Dickinson

On Feb 11, 2012, at 7:29 AM, Jorge Luiz Corrêa wrote:
 
 2) About Swift, how do I determine the total usable storage capacity of the 
 system? For example: I have 3 nodes with 5 HDs of 1 TB each one. Straightly I 
 have 15 TB of space. If I use raid 1 I can say that I'm going to have 7,5 TB 
 of usable space. And with Swift, is it possible to determine this usable 
 space? 

Here's how to find the usable space for swift

marketing size of the drive (eg 2TB) * .92 (to account for formatted size) * .8 
(for 80% fullness) / replicas

If you have 15 TB of formatted space with 3 replicas, that gives you 5TB of 
usable space. If you have 15TB of unformatted space with three replicas, that 
gives you a little less than 4TB of usable space.

The reason I calculate it at 80% fullness is so that you have some headroom to 
expand as your cluster grows. You don't want to have your hard drives fill up 
completely before you decide it's time to buy some new ones.
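
For anyone who wants to plug in their own numbers, a quick sketch of that
arithmetic (same figures as the example above):

    def usable_tb(raw_tb, replicas=3, formatted_factor=0.92, fullness=0.8):
        """Rough usable capacity from the marketing (unformatted) drive size."""
        return raw_tb * formatted_factor * fullness / replicas

    # 15 TB of unformatted space, three replicas -> a little less than 4 TB usable
    print(round(usable_tb(15), 2))   # 3.68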

--John

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Swift 1.4.6 release

2012-02-10 Thread John Dickinson
I'm happy to announce that swift 1.4.6 has been released today. We've added
some great new features in this release and fixed several outstanding bugs. The
full changelog is below, but I'd like to highlight a few key points.

Swift 1.4.6 includes new middleware that adds the ability to upload objects to
a swift cluster using an HTML form POST. Now you can create a form on a
webpage that will directly upload content into a swift cluster without the need
to proxy the traffic through your webserver. You can find documentation for
this middleware at 
http://swift.openstack.org/misc.html#module-swift.common.middleware.formpost.

In conjunction with the FormPOST middleware, we also now have a TempURL
middleware that allows you to use URLs with temporary access to objects. Read
the docs at 
http://swift.openstack.org/misc.html#module-swift.common.middleware.tempurl.
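
To give a feel for how TempURL fits together, here is a short sketch of building a
signed URL (the host, account, object path, and key are made up; the key must first
be set on the account as X-Account-Meta-Temp-URL-Key -- see the middleware docs
above for the authoritative details):

    import hmac
    import time
    from hashlib import sha1

    key = b"mysecretkey"                      # the account's temp URL key
    method = "GET"
    expires = int(time.time() + 300)          # URL is valid for five minutes
    path = "/v1/AUTH_demo/container/object"   # the object the URL grants access to

    # The signature is an HMAC-SHA1 over "METHOD\nexpires\npath".
    body = "%s\n%s\n%s" % (method, expires, path)
    sig = hmac.new(key, body.encode("utf-8"), sha1).hexdigest()

    print("https://swift.example.com%s?temp_url_sig=%s&temp_url_expires=%s"
          % (path, sig, expires))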

For operational simplicity, we have added an option to use a memcache.conf
file instead of duplicating an option value in several config files; see
https://github.com/openstack/swift/blob/master/etc/memcache.conf-sample

As always, existing swift clusters can be upgraded to swift 1.4.6 in-place with
no client downtime.

Swift docs: http://swift.openstack.org
Swift code: http://github.com/openstack/swift
Openstack swift PPAs: https://launchpad.net/~swift-core/+archive/release


Changelog:
https://github.com/openstack/swift/blob/1021989b60082d5e402a01d32e97fe87737f283f/CHANGELOG

  * TempURL and FormPost middleware added
  
  * Added memcache.conf option
  
  * Dropped eval-based json parser fallback
  
  * Properly lose all groups when dropping privileges
  
  * Fix permissions when creating files
  
  * Fixed bug regarding negative Content-Length in requests
  
  * Consistent formatting on Last-Modified response header
  
  * Added timeout option to swift-recon
  
  * Allow arguments to be passed to nosetest
  
  * Removed tools/rfc.sh
  
  * Other minor bug fixes


--John



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift Container Name Restrictions

2012-01-24 Thread John Dickinson

On Jan 24, 2012, at 5:22 PM, Matthew Wodrich wrote:

 Hi Folks,
 
 I'm trying to write some scripts to work with Swift containers, but I don't 
 actually know what the restrictions on container names are.  Does anyone know 
 what the specification is, or where I can read up on it?
 
 For example:
 What are the length requirements for container names? (Maximum/Minimum 
 numbers of characters?)

min is one character, max is 256

 Starting character requirements?

none

 Allowable character sets?

utf8

 Disallowed characters?

non-utf8

 Disallowed patterns? (things like .., .-, -., --, ip address-like things, 
 etc.)

none

 Anything else?

the container name can't contain a / since that would be the delimiter 
between the container and object name (eg /account/container/object)
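
If you are generating names in scripts, those rules are small enough to check up
front; a rough sketch:

    def valid_container_name(name):
        """Rough check: 1-256 characters, UTF-8 encodable, no '/'."""
        if not isinstance(name, bytes):
            name = name.encode("utf-8")
        return 0 < len(name) <= 256 and b"/" not in name

    print(valid_container_name("photos-2012"))   # True
    print(valid_container_name("a" * 300))       # False: too long
    print(valid_container_name("bad/name"))      # False: contains '/'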


--John

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift supported file system

2012-01-23 Thread John Dickinson
That functionality is left up to a client. For example, you could use FUSE to 
spoof swift as a filesystem or you could use a client like Cyberduck or even 
write your own. Last week someone on this mailing list talked about adding 
webDAV support to swift.

All of these work in that they present swift to the user as a filesystem. 
However, swift doesn't follow POSIX semantics and therefore any attempt to 
force swift into such a model will have drawbacks (mostly with regards to 
performance).

--John


On Jan 23, 2012, at 4:47 AM, Khaled Ben Bahri wrote:

 Thanks for your answer,
 
 For the first question, I meant that I wish to mount a shared folder that 
 uses swift for storage
 
 Best Regards
 Khaled
 
 Date: Fri, 20 Jan 2012 13:59:56 -0600
 From: florian.hi...@gmail.com
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Swift supported file system
 
 On Friday, January 20, 2012 at 12:47 PM, Khaled Ben Bahri wrote:
 Hi,
 
 Can any one please tell me if we can mount and use for openstack swift a 
 mounted shared folder as a storage device
 
 Do you mean use shared storage as the storage backend for swift?  Or do you 
 wish to mount a shared folder that uses swift for storage (ala 
 dropbox/jungledisk/fuse) ?
 
 
  
 Is it necessary that storage devices have to be mounted on /srv/node??
 
 You can change where you mount devices with the devices config option in 
 the default section of your config.
 
 --
 Florian Hines | @pandemicsyn
 http://about.me/pandemicsyn
  
 
 
 ___ Mailing list: 
 https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net 
 Unsubscribe : https://launchpad.net/~openstack More help : 
 https://help.launchpad.net/ListHelp
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift supported file system

2012-01-23 Thread John Dickinson
The storage volumes referenced in the ring are identified by an IP, port, and 
mount point. So, it is possible to use network attached storage for swift (as 
long as it still supports xattrs). However, I don't know if this has ever 
really been tried (especially in production), and I'd be surprised if you get 
any benefits doing this (rather than using local hard drives).
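
One quick sanity check before trying it: make sure the mounted volume really does
support user xattrs. A sketch using the same xattr module swift itself depends on
(the mount point is an example):

    import xattr

    def supports_xattrs(path):
        """Return True if a user xattr can be set and read back at this path."""
        try:
            xattr.setxattr(path, "user.swift.test", b"1")
            return xattr.getxattr(path, "user.swift.test") == b"1"
        except IOError:
            return False

    print(supports_xattrs("/srv/node/sdb1"))   # example mount point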

--John


On Jan 23, 2012, at 8:04 AM, Khaled Ben Bahri wrote:

 Thanks a lot
 
 and for using a shared folder mounted on nfs as the storage backend for 
 swift? is it possible??
 
 best regards
 Khaled
 
  Subject: Re: [Openstack] Swift supported file system
  From: m...@not.mn
  Date: Mon, 23 Jan 2012 07:47:44 -0600
  CC: florian.hi...@gmail.com; openstack@lists.launchpad.net
  To: khaled-...@hotmail.com
  
  That functionality is left up to a client. For example, you could use FUSE 
  to spoof swift as a filesystem or you could use a client like Cyberduck or 
  even write your own. Last week someone on this mailing list talked about 
  adding webDAV support to swift.
  
  All of these work in that they present swift to the user as a filesystem. 
  However, swift doesn't follow POSIX semantics and therefore any attempt to 
  force swift into such a model will have drawbacks (mostly with regards to 
  performance).
  
  --John
  
  
  On Jan 23, 2012, at 4:47 AM, Khaled Ben Bahri wrote:
  
   Thanks for your answer,
   
   For the first question I meant, tha i wish to mount a shared forlder that 
   uses swift for storage
   
   Best Regards
   Khaled
   
   Date: Fri, 20 Jan 2012 13:59:56 -0600
   From: florian.hi...@gmail.com
   To: openstack@lists.launchpad.net
   Subject: Re: [Openstack] Swift supported file system
   
   On Friday, January 20, 2012 at 12:47 PM, Khaled Ben Bahri wrote:
   Hi,
   
   Can any one please tell me if we can mount and use for openstack swift a 
   mounted shared folder as a storage device
   
   Do you mean use shared storage as the storage backend for swift? Or do 
   you wish to mount a shared folder that uses swift for storage (ala 
   dropbox/jungledisk/fuse) ?
   
   
   
   Is it necessary that storage devices have to be mounted on /srv/node??
   
   You can change where you mount devices with the devices config option 
   in the default section of your config.
   
   --
   Florian Hines | @pandemicsyn
   http://about.me/pandemicsyn
   
   
   
   ___ Mailing list: 
   https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net 
   Unsubscribe : https://launchpad.net/~openstack More help : 
   https://help.launchpad.net/ListHelp
   ___
   Mailing list: https://launchpad.net/~openstack
   Post to : openstack@lists.launchpad.net
   Unsubscribe : https://launchpad.net/~openstack
   More help : https://help.launchpad.net/ListHelp
 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] (no subject)

2012-01-19 Thread John Dickinson
look in syslog on your proxy server to see what caused the error.

--John


On Jan 19, 2012, at 6:28 PM, Khaled Ben Bahri wrote:

 Hi all,
 
 I tried to install OpenStack swift,
 
 after creating and configuring all nodes, when i want to check that swift 
 works,
 I execute this command :
 swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K 
 testpass stat
 
 but I have an error :
 Account HEAD failed: https://x.x.x.x:8080/v1/AUTH_system 503 Internal Server 
 Error
 Can any one please help me
 
 Thanks in advance for any help
 
 Best regards
 Khaled
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] swift essex status update

2012-01-11 Thread John Dickinson
A quick note to talk about what's been going on with swift since the diablo 
release.

Swift 1.4.2 was the openstack diablo release. Since then, we've had three 
releases. We've done quite a bit of small bug fixes and general polish, and I'd 
like to highlight some of the bigger improvements we've made.

First, we've included a new tool in swift called swift-recon. This is a 
combination of scripts and middleware for the object-server, and it allows 
the swift cluster to report on its own health. For example, using swift-recon, 
you can find out the disk utilization in the cluster, socket utilization, load 
stats, async pending stats, replication stats, and unmounted disks info. It's a 
great tool that gives you good insight into important metrics in your swift 
cluster. Florian Hines designed and wrote this tool.

On the bug-fixing front, we saw a memory leak error under high load at large 
scale. In short, the Python garbage collector was not always freeing memory 
associated with a socket when a client would disconnect early. This would cause 
the proxy servers to run out of memory after a few days of use. Greg Holt spent 
quite a bit of time finding and fixing this error.

We've also included two new tools for managing production clusters 
(swift-orphans and swift-oldies). These tools are used to find potential issues 
with long-running swift processes. These tools were written by Greg Holt.

That brings us to our current release. All of the above-mentioned changes are 
available in swift 1.4.5 (released earlier this week). I'd also like to 
highlight another exciting update that was just merged into swift today and 
will be included in the swift 1.4.6 release: temp urls and form uploading.

With this new feature, you will be able to craft a temporary URL that grants a 
user limited access to your swift account. For example, you can craft a URL to 
your swift cluster that grants PUT access to a particular container for the 
next 30 minutes. You can use this in conjunction with HTML forms to directly 
upload content from a browser into swift (without having to proxy the data on 
your web servers). This feature has been requested by many and was written 
primarily by Greg Holt with input from David Goetz and Greg Lange.

We're halfway through the openstack essex release cycle. I'm excited about the 
improvements we've made to swift, and I expect some more exciting things to 
come before our final essex release is made. As always, patches welcome!

John Dickinson
Swift Project Technical Lead
notmyname on IRC




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Using Gerrit to verify the CLA

2012-01-09 Thread John Dickinson
https://rackspace.echosign.com/verifier


On Jan 9, 2012, at 8:08 AM, Mark McLoughlin wrote:

 Hey,
 
 On Thu, 2012-01-05 at 10:02 -0800, James E. Blair wrote:
 This change is in place; membership in openstack-cla is required in
 order to submit changes to Gerrit.
 
 All of the -core groups have been made administrators of that group.  If
 core members could watch for new membership requests, validate that the
 user has added their information to the wiki page, and approve them,
 that would be swell.
 
 I've done a few of these now.
 
 Is there any way for -core members to check the EchoSign Transaction
 Number isn't just random gibberish before approving?
 
 Cheers,
 Mark.
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Several questions about HOW SWIFT WORKS

2012-01-06 Thread John Dickinson
The best, technical description of the ring was written by the person who had 
the biggest role in writing it for swift: 
http://www.tlohg.com/p/building-consistent-hashing-ring.html

--John




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] swift release 1.4.5

2012-01-04 Thread John Dickinson
It's time again for a swift release. We have cut the swift 1.4.5 release and 
it's headed to QA. We expect it to be validated by the end of this week and it 
should land for public use early next week.

Below is the changelog for this release. The highlights are the swift-orphans 
and swift-oldies tools and the fix to swift-init to support 
swift-object-expirer.

As always, you will be able to upgrade to this release with no downtime for 
your users.


Full changelog for swift (1.4.5)

* New swift-orphans and swift-oldies command line tools to detect
  orphaned Swift processes and long running processes.

* Command line tool swift now supports marker queries.

* StaticWeb middleware improved to save an extra request when
  possible.

* Updated swift-init to support swift-object-expirer.

* Fixed object replicator timeout handling [bug 814263].

* Fixed accept header 503 vs. 400 [bug 891247].

* More exception handling for auditors.

* Doc updates for PPA [bug 905608].

* Doc updates to explain replication more clearly [bug 906976].

* Updated SAIO instructions to no longer mention ~/swift/trunk.

* Fixed docstrings in the ring code.

* PEP8 Updates.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Several questions about HOW SWIFT WORKS

2012-01-03 Thread John Dickinson
Answers inline.

On Jan 3, 2012, at 11:32 AM, Alejandro Comisario wrote:

 
 So, lets get down to business.
 
 # 1 we have memcache service running on each proxy, so as far as we know, 
 memcache actually caches keystone tokens and object paths as the request ( 
 PUT , GET) enters the proxy, but for example, if we restart one proxy server, 
 so the memcached service is empty, is the restarted proxy node going to the 
 neighbor memcache on next request, look up what it needs, and cache the 
 answer on itself so the next query is solved locally ?

Memcache works as a distributed lookup. So the keys that were stored on the 
server that was restarted are no longer cached. The proxies share a memcache 
pool (at least in the example proxy config), so requests are fetched from that 
pool. Since the keys are balanced across the entire memcache pool, roughly 1/N 
memcache requests will be local (where N == the number of proxy servers).
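
In miniature, the sharing works something like this (a simplification -- real
memcache clients use consistent hashing so that adding or removing a server doesn't
remap every key):

    from hashlib import md5

    servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]  # one per proxy

    def server_for(key):
        # A stable hash of the key picks the same server from the shared pool,
        # no matter which proxy performs the lookup.
        return servers[int(md5(key.encode("utf-8")).hexdigest(), 16) % len(servers)]

    print(server_for("auth/AUTH_tk1234567890"))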

 
 # 2 the documentation says regarding "For each request, it will look up the 
 location of the account, container, or object in the ring (see below) and 
 route the request accordingly" in what way the proxy actually does the 
 look-up regarding WHERE is an object / container in the cluster ? does it 
 connect to any datanode asking for an object location ? does the proxy have 
 any locally stored data ??

The proxy does not store any data locally (not even to buffer reads or writes). 
The proxy uses the ring to determine how to handle the read or write. The ring 
is a mapping of the storage volumes that, given an account, container, and 
object, provides the final location of where the data is to be stored. The 
proxy then uses this information to either read or write the object.

 
 # 3 Maybe it has to do with the previous question but, every dataNode knows 
 everything that is stored on the cluster (container service) or only knows 
 the object that has itself, and the replicas of its objects?

Things are stored in swift deterministically, so data nodes don't know where 
everything is stored, but they know how to find where it should be stored (ie 
the ring).
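
The idea, in miniature (this is a simplification, not swift's actual ring code):
hash the object path, take a few bits of the hash as a partition number, and look
that partition up in a precomputed partition-to-devices table that every node has
a copy of.

    from hashlib import md5
    from struct import unpack_from

    PART_POWER = 18                      # e.g. 2**18 partitions

    def partition_for(account, container, obj, suffix=b"changeme"):
        """Map an object path to a partition number, roughly how the ring does it."""
        path = ("/%s/%s/%s" % (account, container, obj)).encode("utf-8")
        digest = md5(path + suffix).digest()
        return unpack_from(">I", digest)[0] >> (32 - PART_POWER)

    # The ring file is then just a lookup: partition -> the devices holding the replicas.
    print(partition_for("AUTH_demo", "photos", "cat.jpg"))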

 
 # 4 We are building a production cluster of 24 datanodes, having 6 drives 
 each (144 immediate drives) we know, that a good default number of partitions 
 per drive is 100, so the math for creating the ring will be (24 nodes * 6 
 drives * 100 partitions) but we know the at the end of the year, the amount 
 of datanodes (and drives also) could be 2x or 3x more. So, for the initial 
 setup, can we build the RING with our 144 drives and 100 partitions per drive 
 so we can modify the ring / partitions later and rebalance? or is safer to 
 think about future infrastructure increase, and build the ring with those 
 numbers in mind ?

Your partition power should take into account the largest size your cluster can 
be. You cannot change the partition power after you deploy the ring unless you 
migrate everything in your cluster (a manual process of GET from the old ring 
and PUT to the new ring), so it is important to select the proper partition 
power up front.
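
A back-of-the-envelope way to size it, as a sketch (using the ~100 partitions per
drive rule of thumb from the question):

    import math

    def suggest_part_power(max_drives, parts_per_drive=100):
        """Smallest power of two giving at least ~100 partitions per drive
        at the cluster's largest expected size."""
        return int(math.ceil(math.log(max_drives * parts_per_drive, 2)))

    print(suggest_part_power(144))        # 14: today's 144 drives
    print(suggest_part_power(144 * 3))    # 16: planning for 3x growth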

 
 # 5 We put a new object into the cluster, the proxy decides where to write 
 the object (is it in a round-robin manner ?) is the proxy server giving a 
 Created response when the 1st replica is actually writen and put into the 
 account and container SQLite databases ? or there is and ok just when the 
 OBJECT service actually wrote the data on disc ?

The proxy sends the write to 3 object servers. The object servers write to disk 
and then send a request to the container servers to update the container 
listing. The object servers then return success to the proxy. After 2 object 
servers have returned success, the proxy can return success to the client.

 
 Hope, we can shed some lights regarding this doubts.

There are obviously some details I've glossed over in the short answers above. 
Much of the complexity in swift comes from failure scenarios. Please ask if you 
need more detail.


--John



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [RFC] Common config options module

2011-12-06 Thread John Dickinson
Overall, I think it's a great thing to have commonality between the projects on 
option names and environment variables. I think it's worthwhile to push for 
that in the swift cli tool in the essex timeframe.

On the topic of common config libraries, though, I think the differences are 
less important. Mark's wiki page describing a common config library sounds 
interesting, but I agree with Monty that it may be better to be a separate (ie 
non-openstack) module.

--John


On Dec 5, 2011, at 3:36 PM, Vishvananda Ishaya wrote:

 Just read through the description and the code.  I don't have any issues with 
 the way it is implemented, although others may have some suggestions/tweaks.  
 I think it is most important to get the common code established, so I'm up 
 for implementing your changes in Nova.  I think it is important to get buy in 
 from Jay and the Glance team ASAP as well.
 
 It would also be great if the Swift team could do a quick review and at least 
 give us a heads up on whether there are any blockers to moving to this 
 eventually.  They have a huge install base, so changing their config files 
 could be significantly more difficult, but it doesn't look too different 
 from what they are doing.  John, thoughts?
 
 Vish
 
 On Nov 28, 2011, at 7:09 AM, Mark McLoughlin wrote:
 
 Hey,
 
 I've just posted this blueprint:
 
 https://blueprints.launchpad.net/openstack-common/+spec/common-config
 http://wiki.openstack.org/CommonConfigModule
 
 The idea is to unify option handling across projects with this new API.
 The module would eventually (soon?) live in openstack-common.
 
 Code and unit tests here:
 
 https://github.com/markmc/nova/blob/common-config/nova/common/cfg.py
 https://github.com/markmc/nova/blob/common-config/nova/tests/test_cfg.py
 
 And patches to make both Glance and Nova use it are on the
 'common-config' branches of my github forks:
 
 https://github.com/markmc/nova/commits/common-config
 https://github.com/markmc/glance/commits/common-config
 
 Glance and (especially) Nova still need a bunch of work to be fully
 switched over to the new model, but both trees do actually appear to
 work fine and could be merged now.
 
 Lots of detail in there, but all comments are welcome :)
 
 Thanks,
 Mark.
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] rpms for rhel5.x to install Open stack Object Storage

2011-11-21 Thread John Dickinson
I suspect there is a communication gap somewhere, but this is certainly not the 
case. Openstack Object Storage (swift) is not deprecated. Glance provides a 
bridge between nova and swift, but all three are important, active projects.


Sudhakar,

I know that rpms exist for swift, but I don't know where they live. (I should 
find out--anyone know?)

--John



On Nov 21, 2011, at 6:18 AM, David Busby wrote:

 Also, as I recall Object Store is deprecated in favour of glance, at least 
 this was the case in October during the training course.
 
 Added cc to openstack@lists.launchpad.net as I forgot in last email.
 
 On 21 Nov 2011, at 12:15, David Busby wrote:
 
 HI Sudhakar,
 
 I do not believe there are any RPM packages being built or maintained for 
 5.x due to the large list of dependencies, one of which being the libvirt 
 version required (The exact version escapes me for the moment).
 
 There are EPEL packages for 6.x in the works (and we would always welcome 
 another tester), and there are GridDynamics RPMS already available for 6.x I 
 believe.
 
 
 Cheers
 
 David
 
 
 On 21 Nov 2011, at 11:25, Sudhakar Maiya wrote:
 
 Hi,
 Can some one help for prerequisites to install Openstack Object Storage 
 in RHEL system.
 
 Thanks & Regards
 Sudhakar
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Update Container Metadata.

2011-11-14 Thread John Dickinson
Updating existing container metadata with a POST is entirely supported  and 
intended behavior. (The same also applies to adding/updating metadata on an 
account.)
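
For example (the endpoint, container, token, and metadata key are placeholders;
the requests library is used just to keep the example short):

    import requests

    url = "https://swift.example.com/v1/AUTH_demo/reports"   # a container URL
    token = "AUTH_tk_replace_me"

    # POST to the container adds or updates metadata without touching its objects.
    resp = requests.post(url, headers={
        "X-Auth-Token": token,
        "X-Container-Meta-Owner": "analytics-team",
    })
    print(resp.status_code)   # expect 204 No Content on success

A subsequent HEAD on the same URL returns the value back as X-Container-Meta-Owner.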

--John


On Nov 14, 2011, at 8:10 PM, easco wrote:

 Anne,
 
 Thank you for your help, and you're correct, that is the case I am pursuing. 
 
 In both sets of documents it is clear to me that one can add metadata to a 
 Container when it is created.  
 
 However, the documentation does not seem to mention anything about changing 
 the metadata on a Container after it has been created.
 
 Empirically it appears that I can do it, an experiment confirms that I can 
 update the metadata on a container using a POST request to the container's 
 URL, but that behavior is not documented so I wanted to know if I should rely 
 on it working in the future. :-)
 
 Thanks for looking though!
 
 Scott
 
 On Nov 14, 2011, at 01:56 PM, Anne Gentle a...@openstack.org wrote:
 
 Woops, the chapter isn't missing, nor is the information. Scott, you'll find 
 it in 
 http://docs.openstack.org/api/openstack-object-storage/1.0/content/create-container.html
  - you can assign metadata when you create the container. 
 
 I'll let one of the Swift devs answer about the ability to update container 
 metadata (which is I think the use case you're pursuing). 
  
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Tutorials of how to install openstack swift into centos 6

2011-11-09 Thread John Dickinson
Awesome. Thanks for putting this together. I know a lot of people have been 
interested in getting swift running on non-ubuntu systems. Thanks for sharing 
this with everyone.

--John


On Nov 9, 2011, at 12:00 AM, pf shineyear wrote:

 openstack swift install on centos 6
 
 1. proxy install
 
  1) check your python version, it must be >= 2.6
 
   2) yum install libvirt
 
   3) yum install memcached
 
   4) yum install xfsprogs
 
   5) yum install python-setuptools python-devel python-simplejson 
 python-config
 
   6) easy_install webob
 
   7) easy_install eventlet
 
   8) install xattr-0.6.2.tar.gz, python setup.py build, python setup.py 
 install
 
   9) install coverage-3.5.1.tar.gz, python setup.py build, python 
 setup.py install
 
   10) wget http://www.openstack.org/projects/storage/latest-release/;
   python setup.py build
   python setup.py install
 
   11) wget 
 https://github.com/downloads/gholt/swauth/swauth-lucid-build-1.0.2-1.tgz;
   python setup.py build
   python setup.py install
 
   12) mkdir /etc/swift
 
   13) yum install openssh-server
 
   14) yum install git-core
   
   15) vi /etc/swift/swift.conf
 
 [swift-hash]
 # random unique string that can never change (DO NOT LOSE)
 swift_hash_path_suffix = `od -t x8 -N 8 -A n /dev/random`
 
 
   16) goto /etc/swift/
 
   17) openssl req -new -x509 -nodes -out cert.crt -keyout cert.key
 
   18) service memcached restart, ps -aux | grep mem
 
 495  16954  0.0  0.1 330756   816 ?Ssl  18:19   0:00 memcached -d 
 -p 11211 -u memcached -m 64 -c 1024 -P /var/run/memcached/memcached.pid
 
   19) easy_install netifaces
 
   20) vi /etc/swift/proxy-server.conf
 
 [DEFAULT]
 cert_file = /etc/swift/cert.crt
 key_file = /etc/swift/cert.key
 bind_port = 8080
 workers = 8
 user = swift
 log_facility = LOG_LOCAL0
 allow_account_management = true
 
 [pipeline:main]
 pipeline = healthcheck cache swauth proxy-server
 
 [app:proxy-server]
 use = egg:swift#proxy
 allow_account_management = true
 account_autocreate = true
 log_facility = LOG_LOCAL0
 log_headers = true
 log_level =DEBUG
 
 [filter:swauth]
 use = egg:swauth#swauth
 #use = egg:swift#swauth
 default_swift_cluster = local#https://10.38.10.127:8080/v1
 # Highly recommended to change this key to something else!
 super_admin_key = swauthkey
 log_facility = LOG_LOCAL1
 log_headers = true
 log_level =DEBUG
 allow_account_management = true
 
 [filter:healthcheck]
 use = egg:swift#healthcheck
 
 [filter:cache]
 use = egg:swift#memcache
 memcache_servers = 10.38.10.127:11211
 
 
   21) config /etc/rsyslog.conf
 
 local0.*    /var/log/swift/proxy.log
 local1.*    /var/log/swift/swauth.log
 
 
 
 
 
   21) build the ring, i have 3 node, 1 proxy
 
   swift-ring-builder account.builder create 18 3 1
   swift-ring-builder account.builder add z1-10.38.10.109:6002/sdb1 1
   swift-ring-builder account.builder add z2-10.38.10.119:6002/sdb1 1
   swift-ring-builder account.builder add z3-10.38.10.114:6002/sdb1 1
  swift-ring-builder account.builder rebalance

   swift-ring-builder object.builder create 18 3 1
   swift-ring-builder object.builder add z1-10.38.10.109:6000/sdb1 1
   swift-ring-builder object.builder add z2-10.38.10.119:6000/sdb1 1
   swift-ring-builder object.builder add z3-10.38.10.114:6000/sdb1 1
   swift-ring-builder object.builder rebalance
 
   swift-ring-builder container.builder create 18 3 1
   swift-ring-builder container.builder add z1-10.38.10.109:6001/sdb1 1
   swift-ring-builder container.builder add z2-10.38.10.119:6001/sdb1 1
   swift-ring-builder container.builder add z3-10.38.10.114:6001/sdb1 1
   swift-ring-builder container.builder rebalance
 
 
   22) easy_install configobj
 
   23) easy_install nose
 
   24) easy_install simplejson
 
   25) easy_install xattr
 
   26) easy_install eventlet
 
   27) easy_install greenlet
 
   28) easy_install pastedeploy
 
   29) groupadd swift
 
   30) useradd -g swift swift
 
   31) chown -R swift:swift /etc/swift/
 
   32) service rsyslog restart
 
   33) swift-init proxy start
 
 
 2. storage node install
 
   1) yum install python-setuptools python-devel python-simplejson 
 python-configobj python-nose
 
   2) yum install openssh-server
 
   3) easy_install webob
 
   4) yum install curl gcc memcached sqlite xfsprogs
 
 5) easy_install eventlet
 
 6) wget 
 http://pypi.python.org/packages/source/x/xattr/xattr-0.6.2.tar.gz#md5=5fc899150d03c082558455483fc0f89f;
 
   python setup.py build
   python setup.py install
 
 
 7)  wget 
 http://pypi.python.org/packages/source/c/coverage/coverage-3.5.1.tar.gz#md5=410d4c8155a4dab222f2bc51212d4a24;
 
   python setup.py build
   python setup.py install
 
 8) yum install libvirt
 
 9) groupadd swift
 
 10) 

Re: [Openstack] [Openstack-poc] proposal for policy around and management of client libraries

2011-11-08 Thread John Dickinson

On Nov 8, 2011, at 10:54 AM, Thierry Carrez wrote:
 
 With solution (2), if you look at the issue from Gerrit, GitHub,
 Launchpad or Jenkins, those will be separate projects though. The fact
 that they share the same PTL is not enough to make them one. For
 example, they will have separate bug pages and release pages on
 Launchpad, separate jobs on Jenkins...
 
 Solution (1) actually allows you to have two tarballs as release
 deliverables on the release page of a single core project. Solution (2)
 doesn't.
 
 In all cases you bring new client code projects into the realm of
 OpenStack core projects (be it as an extension of scope of an existing
 one or as a separate project), which I think warrants a discussion from
 the PPB.

I'm not ready to commit to one solution as better than the other yet. However, 
I don't see why a separate client library project managed by the same group as 
the core project needs to be seen as a proliferation of openstack core 
projects. Nova is still just one core project even if it has two components: 
the server-side engine and a client-facing library.

--John

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] proposal for policy around and management of client libraries

2011-11-07 Thread John Dickinson
In general, I support the idea of good, supported client libraries. I have a 
few questions about this particular proposal.

1) Would a distinct python client (and the associated project) be required for 
each core openstack project that exposes an API?

2) Why does the PPB need to vote? Actually, what would the PPB be voting on 
(assuming the answer to #1 is no)?

3) Why the requirement to have the same release schedule as the paired project? 
I would expect a client binding to change much less often than the underlying 
system (as I have seen with the Rackspace-specific swift bindings).

4) Will these libraries also be included in whatever set of packages are built 
for the openstack projects (eg as part of a *-dev package)?

--John


On Nov 7, 2011, at 12:53 PM, Monty Taylor wrote:

 Dealing with the client libraries has become a little bit of a tricky
 subject, which is both lacking consistency and direction - but is kind
 of essential to the project. (a service without a client library isn't
 nearly as useful) At UDS this past week, Joe Heck, Jim Blair and I sat
 down for a while and worked through a bunch of the issues and would like
 to propose the following:
 
 - Each project that exposes an API should have a separate client library
 project. For instance, python-novaclient, python-glanceclient, etc.
 
 - Each of these projects will have its own top-level git repo and be
 managed by gerrit just like a core project.
 
 - The python-*client project will be under the purview of the PTL for the
 main project (mainly so that we don't have an explosion of PTLs all of a
 sudden)
 
 - Each client library project will release milestones and final releases
 on the same schedule as the rest of the core projects.
 
 - The client libraries will release directly to PyPI at final release
 time. If we do this, the need to release main core projects to
 PyPI is obviated (which is good, as we do not expect anyone to actually
 install a running OpenStack from PyPI - but it is reasonable to expect
 people to want to use client libraries from PyPI)
 
 - OpenStack projects that need to depend on these will reference the git
 repo of the project in their tools/pip-requires file. This should take
 care of depends for developers. Normal installation depends can be taken
 care of by distro packagers as usual.
 
 As best we can tell, this should handle the development case and allow
 for better pip installing of code into virtualenv for the developer
 workflow without doing screwy things that imply deployment
 infrastructure. Other solutions discussed involved multiple modules per
 repo (which actually breaks pip -e) and creating our own PyPI that we
 upload trunk eggs of all of OpenStack software from and then
 reconfiguring install_venv.py to look at that repo. Those are both
 kludgy, whereas this actually serves final distro needs as well as
 developer needs.
 
 It also helps out with a versioning issue, which was that we were trying
 to find a computer-workable approach for dealing with pre-release
 versions of nova/glance/keystone that worked for both Ubuntu and for
 PyPI - and it turns out that there isn't a good answer. With this
 approach, the problem goes away.
 
 Finally, as we're on the cusp of rolling out some integration-test
 gating of trunk, it's important that we can also gate all of the
 components that are used as a part of that gating. (would suck if the
 client lib being used to test broke all of a sudden)
 
 We'd love to get a PPB vote on this approach, and if people consent
 begin to implement it. Glance needs to split its client lib out, and
 keystone and nova client libs need to get moved to gerrit and the
 openstack org.
 
 Thoughts or feedback?
 
 Monty
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] proposal for policy around and management of client libraries

2011-11-07 Thread John Dickinson

On Nov 7, 2011, at 4:23 PM, Monty Taylor wrote:
 
 2) Why does the PPB need to vote? Actually, what would the PPB be
 voting on (assuming the answer to #1 is no)?
 
 Well, it would be effectively promoting several existing projects which
 are managed in a few different places (rackspace, 4P, etc) to being more
 official things that live in the OpenStack gerrit and are replicated
 to repos in the openstack github org. I have the physical ability to
 just do that - but it kind of felt like something that should get buy in
 from someone officially.
 
 3) Why the requirement to have the same release schedule as the
 paired project? I would expect a client binding to change much less
 often than the underlying system (as I have seen with the
 Rackspace-specific swift bindings).
 
 I guess I'm arguing that proper client libraries are an essential part
 of a release. If there isn't much to be done in them, then release day
 will be really easy. :) (also, the other projects are adding API
 features like gangbusters at the moment, so I think the client libs will
 be under active dev at least until those settle down.
 
 (speaking of - should we grab the rackspace swift bindings and make a
 python-swiftclient out of it?)
 
 On the other hand, we could keep it as it is for the other projects and
 allow the PTL to decide. I'd be surprised if folks chose different
 cycles for the client libs - but of the things I care about personally,
 this is certainly one of the least.

Rackspace doesn't have any swift bindings because Rackspace doesn't sell swift. 
We have cloud files bindings (in many languages--which gets into another issue 
altogether), and they should work with most swift installations, but they also 
have Rackspace product-specific stuff in them.

I would expect any other company that offers a full or partial openstack 
product and offers value-add features to it to offer and maintain their own 
bindings, too.

I am a fan of official binding support as a separate project. My first instinct 
is that these would need to be managed by the same PTL as the associated 
project. I would be against one, unified client (although I would support 
packaging and distributing them together).

Overall, I think official bindings are important. Of course they won't fit the 
needs of everybody, but most people will use them to interact with Openstack 
projects. I see no reason that each PTL can't make the decision to support one 
or more language bindings for their own project. Keeping the bindings separate 
from the core code seems like a good idea too.

Seems like this is generally a good idea. I like it, and I think this should 
stay at the PTL level and doesn't need to involve the PPB. Thanks Monty, Joe, 
and Jim.

--John



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] +1, All services should have WADLs

2011-10-28 Thread John Dickinson
I am concerned about some of the implications that are being discussed.

1) A WADL is part of documentation of an API. Nobody is going to object to more 
documentation.

2) Being an open-source project, if somebody wants to commit to creating and 
maintaining a WADL for a particular part of Openstack, they are free to. 
Alternately, persuade somebody else to do it. However, having a WADL to 
describe a particular component of openstack is not something that can be 
forced onto that component. Phrases like "All services should have WADLs" are 
either meaningless (unenforceable or not really all services) or oppressive 
(mandating requirements on a project).

3) A WADL is not a replacement for any sort of dev documentation, and in fact, 
still requires there to be human-readable dev docs.

Specifically for swift, not one of the current developers is going to either 
write or maintain a WADL for the swift API. However, we'll be happy to assist 
anyone who wants to write and maintain docs for swift, including WADLs.

The important thing is that code talks. If you want WADLs (or your flavor of 
WADLs), make them! Stop trying to architect systems for architects. These 
things are meant to be used. Let's focus on what is necessary for getting a 
reliable system into the hands of those who will be using it.

(Just about all of the above goes for things like API versioning, too. And 
packaging vs tarballs vs python libraries. And polling vs pushing. And the true 
meaning of what a ReST interface is.)

--John

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] +1, All services should have WADLs

2011-10-28 Thread John Dickinson

On Oct 28, 2011, at 10:04 AM, Ed Leafe wrote:
   Swift had the advantage of starting out as a closed source project that 
 only had to serve a single master, and thus didn't need external 
 orchestration to keep it on track. Nova, OTOH, as a community development 
 effort, essentially had to be all things to all people, which is unworkable; 
 hence the need for some up-front design to keep some sort of focus to the 
 development. The problem is that this inevitably descends into bikeshedding, 
 which has been prominently on display in this thread.

I absolutely do not want to compare different openstack projects. That all too 
often is perceived as an us vs them, and I want to avoid that altogether. 
Yes, nova and swift and glance and keystone and horizon are different. My point 
from earlier is that because the projects are different (in scope, users, and 
dev lifecycle), statements like all openstack projects need to do X are 
either meaningless or unmanageable.

Openstack is a collection of different parts that should work together, but 
that doesn't mean that there are one size fits all solutions to issues that 
come up. These discussions around the One True Way to do things are a 
distraction at best. If you have 2 people arguing about the best way for an 
aspect of a particular project should work, have them both code it up (or write 
the docs or design the UI or whatever) and then compare and choose the best 
implementation. Bikeshedding (along with complaining about bikeshedding 
[meta!]) feels satisfying, but it's a hollow pursuit that distracts from 
getting things done.

--John

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] New git-review tool ready for people to try

2011-10-14 Thread John Dickinson
I'd much prefer storing data in git config rather than in a dot file.

On Oct 14, 2011, at 7:33 AM, Julien Danjou wrote:

 On Fri, Oct 14 2011, James E. Blair wrote:
 
 Another idea though is to add a small dotfile into each repository
 indicating the canonical location of that repo's gerrit.  Unlike storing
 an entire tool like rfc.sh in the repo, it seems that just adding that
 bit of static data to the repo seems appropriate.  It shouldn't get in
 anyone's way, and it shouldn't need to be updated.  So we could look for
 a file called, say '.git-review' with the URL, and if it's not found,
 prompt the user.
 
 That sounds like a really good idea. :)
 
 -- 
 Julien Danjou
 // eNovance  http://enovance.com
 // ✉ julien.dan...@enovance.com  ☎ +33 1 49 70 99 81
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] New git-review tool ready for people to try

2011-10-14 Thread John Dickinson
But it's metadata about the code (a particular review pattern with a particular 
vcs). VCS info does not belong in the code. I'll admit that I like a dotfile in 
the repo much better than I like rfc.sh in the repo, but I'd prefer to keep 
info about remotes, review processes, and other repo metadata out of the repo. 
If this is something for a particular VCS (as the proposed git-review is), it 
should use the established locations for that particular VCS. In this case, 
git-review should pull info from the .git directory (more specifically, the git 
config data).

--John


On Oct 14, 2011, at 8:04 AM, Julien Danjou wrote:

 On Fri, Oct 14 2011, John Dickinson wrote:
 
 I'd much prefer storing data in git config rather than in a dot file.
 
 A dot file is commit-able. Git config is not.
 So in this case, providing the dot file wins. Think about like a
 .gitignore file.
 
 -- 
 Julien Danjou
 // eNovance  http://enovance.com
 // ✉ julien.dan...@enovance.com  ☎ +33 1 49 70 99 81



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Creating openstack-dev mailing list

2011-10-07 Thread John Dickinson
I would prefer to keep the list together. I do not think the volume of either 
dev or user conversations is too oppressive to the other, and I would like to 
avoid either group being neglected by the other.

Sent from my iPhone

On Oct 7, 2011, at 2:16 AM, Wayne A. Walls wa...@openstack.org wrote:

 I remember having this discussion at the Cactus design summit, and the 
 reoccurring theme was dev vs sysads/deployment.  I was in favor of a split 
 list then, but over the past year my stance has changed a bit.
 
 I don't know if I feel that OpenStack has reached the level where deployment 
 knowledge is absent of dev knowhow.  Deployers of OS are typically in 
 irc/ml/forums pasting stack traces, working through problems with dev help.  
 Once we start seeing more public deployments, reference architectures, etc, 
 then maybe? 
 
 Do devs think there is too much noise on the main ml?  What has the impact 
 been on splitting the irc channels out?  To me, I just idle in two places 
 now, but did that really have a big impact?  I tend to agree with John's 
 points, I don't want the presumably easiest ml to consume to be neglected by 
 some of the brightest minds in the project. 
 
 Thanks,
 
 
 Wayne
 
 
 
 Sent from my iPhone
 
 On Oct 6, 2011, at 10:11 PM, John Purrier j...@openstack.org wrote:
 
 My 2 cents...
 
 The traffic on the list is less than 100 messages per day, of that about 35%
 is bug notifications. I wonder if we redirect the developer oriented email
 on the list whether we will have 10 messages a day on the original openstack
 mailing list.
 
 Stefano, can you elaborate on why the developers feel a split is necessary
 at this point? Most (if not all) of the traffic is developer oriented, what
 is the problem we want to solve?
 
 Thanks,
 
 John 
 
 -Original Message-
 From: openstack-bounces+john=openstack@lists.launchpad.net
 [mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf
 Of Stefano Maffulli
 Sent: Thursday, October 06, 2011 1:35 PM
 To: openstack
 Subject: [Openstack] Creating openstack-dev mailing list
 
 Jesse Andrews (anotherjesse)(Rackspace)
 Jonathan Bryce (jbryce)(Rackspace)
 Devin Carlen (devcamcar)(Nebula)
 Thierry Carrez (ttx)(Rackspace)
 John Dickinson (notmyname)(Rackspace))
 Vish Ishaya (vishy)(Rackspace)
 Josh Kearney (jk0)(Rackspace)
 Joshua McKenty (jmckenty)(Piston)
 Ewan Mellor (ewanmellor)(Citrix)
 Jay Pipes (jaypipes)(Rackspace)
 John Purrier (johnpur)(HP)
 Monty Taylor (mordred)(Rackspace)
 Paul Voccio (pvo)(Rackspace)
 Ziad Sawalha (zns)(Rackspace)
 
 Hello folks,
 
 I've been talking to quite a few developers participating to the Design
 Summit and a recurring request I got is to create a new mailing list for
 the OpenStack developers to meet and discuss.  
 
 There is also a concern that putting developers in another list will
 decrease their attention to the bigger part of the community. I believe
 this is a serious concern and that it's going to be our role as leaders
 of this community to prevent this from happening.
 
 I'd suggest to dedicate the existing mailing list for discussions about
 usage of OpenStack (deployment and development of applications on top of
 OpenStack API) and create a new one only for developers of OpenStack.
 Developers in this context should be developers of openstack projects
 (nova, swift, quantum, etc).
 
 If there is no opposition to this proposal in the next days, I'll
 proceed and create openstack-dev and invite developers to subscribe to
 it.
 
 cheers,
 stef
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift git differences between launchpad vs github milestone-proposed branch history

2011-09-29 Thread John Dickinson
The code that was on github was an unofficial mirror that I was maintaining. 
With each release, I added the tag to the github repo. It looks like I didn't tag 
at the same commit as the (at the time) official repo. This is because the 
milestone-proposed branch did not exist in the unofficial github mirror.

When we moved to github, I think the code that was already there was used (why 
reimport if everything is already there?). This is what led to the tag 
discrepancy. There is no significant difference in the eg 1.4.1 tags between 
what's tagged now and what was tagged in bzr. The only difference should be the 
final versioning. Functionally, the code at each of the two tags should be 
identical.
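
A quick way to verify that claim (a minimal sketch; the two commit ids are the ones quoted in Darragh's message below, and it assumes both commits have been fetched into a single local clone):

# Resolve the tree object behind each 1.4.1 tag; identical hashes would mean
# the tagged content is byte-for-byte the same
$ git rev-parse '9ab33970b58b8219245bfd89e2ad9442c0e94f17^{tree}'
$ git rev-parse 'd8f39dbaaeab646117b8267a33c606a21dbef29b^{tree}'

# Or diff the two tagged commits directly; per the above, the only change
# expected is the final versioning
$ git diff 9ab33970b58b8219245bfd89e2ad9442c0e94f17 d8f39dbaaeab646117b8267a33c606a21dbef29b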

I apologize for the discrepancy. I think the risk of future problems is much 
lower now that the github repo is officially managed.

--John


On Sep 29, 2011, at 8:09 AM, Bailey, Darragh wrote:

 Hi,
 
 
 I originally imported the swift milestone-proposed branch  
 (lp:~hudson-openstack/swift/milestone-proposed) from launchpad into a git 
 repository locally, sometime after the 1.4.1 release, and synced again after 
 the 1.4.2 release. This was before it was clear that the swift project was definitely 
 moving to use git and the repo on github would definitely be the public git 
 repo.
 
 
 In looking to now switch to pulling from the project repo on github, I've 
 noticed a few discrepancies.
 
 Tag 1.4.1 within my local repo, which was imported from bzr, points to a 
 different commit than the one the github repository points to. I'm wondering 
 how that came to be.
 
 github points to the following commit as being 1.4.1
 $ git ls-remote openstack | grep refs/tags/1.4.1
 9ab33970b58b8219245bfd89e2ad9442c0e94f17	refs/tags/1.4.1
 
 (https://github.com/openstack/swift/commit/9ab33970b58b8219245bfd89e2ad9442c0e94f17)
 
 $ git log -n1 9ab33970b58b8219245bfd89e2ad9442c0e94f17
 commit 9ab33970b58b8219245bfd89e2ad9442c0e94f17
 Merge: 8526098 c4f0b55
 Author: John Dickinson john.dickin...@rackspace.com
 Date:   Tue Jun 14 16:37:02 2011 +
 
updated changelog for 1.4.1
 
 
 Bzr points to the following change as being tagged for 1.4.1:
 http://bazaar.launchpad.net/~hudson-openstack/swift/milestone-proposed/revision/305
 
 *   Committer: Tarmac
 *   Author(s): Thierry Carrez
 *   Date: 2011-06-20 14:42:27
 *   mfrom: (304.1.1 milestone-proposed) http://bazaar.launchpad.net/%7Ehudson-openstack/swift/milestone-proposed/revision/304.1.1
 *   Revision ID: tarmac-20110620144227-n6ko7ns5s83aceh9
 
 Tags: 1.4.1
 Final 1.4.1 versioning for immediate release.
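 
 To locate that bzr revision inside a git import (a sketch; it assumes the bzr-to-git importer recorded the bzr revision id in the commit message, which not every importer does):
 
 # Search every ref for the bzr revision id shown above
 $ git log --all --oneline --grep='tarmac-20110620144227-n6ko7ns5s83aceh9'
 
 # Failing that, narrow it down by committer and date taken from the bzr record
 $ git log --all --oneline --committer='Tarmac' --since='2011-06-20 00:00' --until='2011-06-21 00:00'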
 
 
 Looking at the imported repo in git, I can see the following differences 
 between what is tagged in github and what was tagged in launchpad.
 
 (openstack is the remote name I have for 
 https://github.com/openstack/swift.git)
 $ git log --graph --decorate --right-only $(git ls-remote openstack | grep 
 refs/tags/1.4.1 | awk '{print $1}')...1.4.1
 *   commit d8f39dbaaeab646117b8267a33c606a21dbef29b (tag: 1.4.1, 
 origin/upstream/milestone-proposed)
 |\  Merge: 3ad61bd 1dec5d4
 | | Author: Thierry Carrez thie...@openstack.org
 | | Date:   Mon Jun 20 14:42:27 2011 +
 | |
 | | Final 1.4.1 versioning for immediate release.
 | |
 | * commit 1dec5d45c82a92fde49d7d7ba478cebce52fc162
 |/  Author: Thierry Carrez thie...@openstack.org
 |   Date:   Mon Jun 20 14:37:17 2011 +0200
 |
 |   Final 1.4.1 versioning
 |
 *   commit 3ad61bd15cdf90b91c3834c6444e56a4d4436bd6
 |\  Merge: 60f9cbf d41490c
 | | Author: David Goetz david.go...@rackspace.com
 | | Date:   Wed Jun 15 14:57:34 2011 +
 | |
 | | Merge 1.4.1 development from trunk (rev312)
 | |
 | * commit d41490c38477205c6619a8c79857973474497bad
 |/  Merge: 60f9cbf 9ab3397
 |   Author: Thierry Carrez thie...@openstack.org
 |   Date:   Wed Jun 15 11:11:51 2011 +0200
 |
 |   Merge 1.4.1 development from trunk (rev312)
 |
 snip 2 additional commits
 
 Commit d41490c38477205c6 merges the commit marked as 1.4.1 in github 
 (9ab33970b58b82192) onto the milestone-proposed branch from launchpad.
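 
 To confirm that containment from the git side (a sketch, using the github commit id above):
 
 # Lists every local tag whose history contains the github-tagged commit;
 # the bzr-imported 1.4.1 tag should appear in the output
 $ git tag --contains 9ab33970b58b8219245bfd89e2ad9442c0e94f17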
 
 
 I've looked at the tags from bzr on launchpad and confirmed that they point 
 to the same changes as my local tree for 1.4.1, so I'm wondering whether the 
 tag was moved on launchpad sometime after the import into the github repository.
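 
 One way to check all the tags in one pass (a sketch; 'openstack' is the remote name used above, and the local tags are whatever the bzr import created):
 
 # Where each tag points on github (annotated tags also get a peeled '^{}' line)
 $ git ls-remote --tags openstack
 
 # Where the locally imported tags point, dereferenced the same way
 $ git show-ref --tags -d
 
 # Normalise both listings to "ref<TAB>sha" and diff them; a tag that appears on
 # only one side, or with a different id, has been moved or imported differently
 $ diff <(git ls-remote --tags openstack | awk '{print $2 "\t" $1}' | sort) \
        <(git show-ref --tags -d | awk '{print $2 "\t" $1}' | sort)
 
 If one side's tags are annotated and the other's are lightweight, the peeled '^{}' entries are the ones to compare, since those are the underlying commits.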
 
 
 I see a similar thing happening with the 1.4.2 tag. What is tagged in github 
 is not quite the same as the tag imported from launchpad. I've also confirmed 
 that the import of the tag from bzr to my local git repo points to the same 
 data, so it looks like the launchpad swift tags for 1.4.1 and 1.4.2 don't 
 match what is tagged in github.
 
 $ git log --graph --decorate --right-only $(git ls-remote openstack | grep 
 refs/tags/1.4.2 | awk '{print $1}')...1.4.2
 *   commit c4f718ff7c565d7b40e10c881506be009e6cbbd7 (tag

Re: [Openstack] 55PB storage cloud hosted using Swift

2011-09-29 Thread John Dickinson
clarification: 5.5PB


On Sep 29, 2011, at 3:00 PM, Brian Schott wrote:

 In case you missed it, SDSC is hosting a commercial 55 petabyte storage cloud 
 using Swift:
 
 http://arstechnica.com/business/news/2011/09/supercomputing-center-targets-55-petabyte-storage-at-academics-students.ars
 https://cloud.sdsc.edu/hp/index.php
 
 Congrats to the Swift team!
 Brian
 
 -
 Brian Schott, CTO
 Nimbis Services, Inc.
 brian.sch...@nimbisservices.com
 ph: 443-274-6064  fx: 443-274-6060
 
 
 
 
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

