Re: [openstack-dev] [swift] re-implementation in golang - hummingbird details

2015-11-06 Thread Brian Cline
There was a talk at the Summit in Tokyo last week which you can find here:
https://youtu.be/Jfat_FReZIE

Here is a blog post that was pushed about a week before:
http://blog.rackspace.com/making-openstack-powered-rackspace-cloud-files-buzz-with-hummingbird/

--
Brian
Fat-fingered from a Victrola

 Original Message 
Subject: [openstack-dev] [swift] re-implementation in golang - hummingbird details
From: Rahul Nair 
To: openstack-dev@lists.openstack.org
CC:
Date: Thu, October 29, 2015 12:23 PM



Hi All,

I was reading about the "hummingbird" re-implementation of some parts of Swift 
in golang. Can someone kindly point me to documentation/blogs on the changes made, 
so that I can understand the new implementation before going into the code?

Thanks,
Rahul U Nair
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Brian Cline

On 03/04/2014 05:01 AM, Thierry Carrez wrote:

James E. Blair wrote:

Freenode has been having a rough time lately due to a series of DDoS
attacks which have been increasingly disruptive to collaboration.
Fortunately there's an alternative.

OFTC (http://www.oftc.net/) is a robust and established alternative
to Freenode.  It is a smaller network whose mission statement makes it a
less attractive target.  It's significantly more stable than Freenode
and has friendly and responsive operators.  The infrastructure team has
been exploring this area and we think OpenStack should move to using
OFTC.

There is quite a bit of literature out there pointing to Freenode, like
presentation slides from old conferences. We should expect people to
continue to join Freenode's channels forever. I don't think staying a
few weeks on those channels to redirect misled people will be nearly
enough. Could we have a longer plan? Like advertisement bots that would
advise people every n hours to join the right servers?


[...]
1) Create an irc.openstack.org CNAME record that points to
chat.freenode.net.  Update instructions to suggest users configure their
clients to use that alias.

I'm not sure that helps. The people who would get (and react to) the DNS
announcement are likely using proxies anyway, which you'll have to
unplug manually from Freenode on switch day. The vast majority of users
will just miss the announcement. So I'd rather just make a lot of noise
on switch day :)

Finally, I second Sean's question on OFTC's stability. As hard as
Freenode is being hit by DoS, they have experience handling it, mitigation
procedures in place, and sponsors lined up to help, so the damage ends up
*relatively* limited. If OFTC raises its profile and becomes a target, are
we confident they would mitigate DoS as well as Freenode does? Or would
they just disappear from the map completely? I fear that we are trading
a known evil for some unknown here.

In all cases I would target post-release for the transition, maybe even
post-Summit.



Indeed, I can't help but feel like the large amount of effort involved 
in changing networks is a bit of a riverboat gamble. DDoS has been an 
unfortunate reality for every well-known/trusted/stable IRC network for 
the last 15-20 years, and running from it rather than planning for it is 
usually a futile effort. It feels like we'd be chasing our tails trying 
to find a place where DDoS couldn't cause serious disruption; even then 
it's still not a sure thing. I would hate for everyone's efforts to 
have been in vain once the same problem occurs there.


--
Brian Cline
br...@linux.vnet.ibm.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-18 Thread Brian Cline
Honestly, with that reasoning, this approach strikes me as a technical solution 
to a political problem, a band-aid on a sprained ankle, and so on and so forth 
in that pattern.

There was no shortage of talk about cross-project coordination challenges in HK 
and Portland, so it shouldn't be news to anyone -- but keeping all project 
lists consolidated into one doesn't seem like a good solution if we're already 
doing that today and still have just as many cross-project coordination 
problems. That coordination should be fostered separately, through process driven 
by OpenStack leadership, rather than through mailing list structure.

For what it's worth, I much prefer Caitlin's and Stefano's approach: separate 
lists for established projects, with a single list for incubator projects. The 
tagging here isn't always consistent (or present at all sometimes -- we've all 
made that mistake before), so things often slip by the filters. I have 14 rules 
set up to catch most of the core projects, and I'm still getting tons more 
general dev discussion than I can keep up with (something I really *want* to be 
able to do, as both a developer and implementor).

Brian


-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com] 
Sent: Saturday, November 16, 2013 1:48 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Split of the openstack-dev list



On 11/14/2013 07:54 PM, Caitlin Bestler wrote:
 On 11/14/2013 5:12 AM, Thierry Carrez wrote:
 Hi everyone,

 I think that we have recently reached critical mass for the
 openstack-dev mailing-list, with 2267 messages posted in October, and
 November well on its way to passing 2000 again. Some of those are just
 off-topic (and I've been regularly fighting against them) but most of
 them are just about us covering an ever-increasing scope, stretching the
 definition of what we include in openstack development.

 Therefore I'd like to propose a split between two lists:

 *openstack-dev*: Discussions on future development for OpenStack
 official projects

 *stackforge-dev*: Discussions on development for stackforge-hosted
 projects

 
 I would suggest that each *established* project (core or incubator) have
 its own mailing list, and that openstack-dev be reserved for
 topics of potential interest across multiple projects (which new
 projects would qualify as).

We've actually explicitly avoided this model for quite some time, on
purpose. The main reason is that one of the hardest challenges we
have is cross-project collaboration. Hacking on just one project? Not
so hard. Producing the output of 18 in a coordinated fashion? Hard.

Everyone has done a great job so far of prefixing things.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Swift] Metadata Search API

2013-11-13 Thread Brian Cline
At the Icehouse design summit, Lincoln Thomas from HP presented a REST API spec 
for searching metadata in Swift. This API would allow folks to search both 
system and user metadata for accounts, containers, and objects. Today, about 
the best one can do is iterate through everything and inspect metadata along 
the way - obviously an infinitely expensive (and hilariously insane) operation.

You can find all the details on the API here (use the PDF link for full spec 
info for now):
https://wiki.openstack.org/wiki/MetadataSearch
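
As a rough illustration of the general shape of such an API from a client's
point of view, here is a purely hypothetical sketch -- the search query
parameter and metadata key below are made up for illustration and are not
taken from the spec, so please refer to the linked PDF for the actual proposal:

# Hypothetical sketch only: the 'query' parameter and metadata key are
# illustrative, not the proposed spec. It just shows an authenticated
# metadata search expressed as a GET against a Swift-style REST endpoint.
import requests

SWIFT_ENDPOINT = 'https://swift.example.com/v1/AUTH_myaccount'  # assumed
AUTH_TOKEN = 'AUTH_tk_example'                                   # assumed

# Ask for all objects in a container whose user metadata matches a filter.
resp = requests.get(
    SWIFT_ENDPOINT + '/my-container',
    headers={'X-Auth-Token': AUTH_TOKEN, 'Accept': 'application/json'},
    params={'query': 'x-object-meta-project=apollo'},            # illustrative
)
resp.raise_for_status()
for obj in resp.json():
    print(obj['name'])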

Currently HP is working on a POC implementation of the linked spec. SoftLayer 
has had an existing implementation with custom middlewares for a while, but it is 
not as rich as the proposed API (a link to SoftLayer's existing search API info 
is available on the above wiki page). If there are any others who have 
implemented search in Swift, please speak up and help shape this. We both want 
to get community consensus on a standard search API, then get a pluggable 
reference implementation into Swift.

This is all work-in-progress stuff, but we'd welcome any feedback, concerns, 
literal jumps for joy, etc. in this thread, both on the API and on a reference 
architecture.


Brian Cline
Software Engineer III, Product Innovation

SoftLayer, an IBM Company
4849 Alpha Rd, Dallas, TX 75244
214.782.7876 direct  |  469.892.8880 batphone  |  
bcl...@softlayer.com


Lincoln Thomas (IRC lincolnt)
System/Software Engineer, HP Storage R&D
Hewlett-Packard Company
Portland, OR, USA,  +1 (503) 757-6274
lincoln.tho...@hp.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] The three API server multi-worker process patches.

2013-09-04 Thread Brian Cline
Was any consensus on this ever reached? It appears both reviews are still open. 
I'm partial to review 37131, as it attacks the problem more concisely and, as 
mentioned, combines the efforts of the two more effective patches. I would echo 
Carl's sentiment that it's an easy review, minus the few minor behaviors 
discussed on the review thread today.

We feel very strongly about these making it into Havana -- being confined to a 
single neutron-server instance per cluster or region is a huge bottleneck; it's 
essentially the only controller process with massive CPU churn in environments 
with constant instance churn or sudden large batches of new instance requests.

In Grizzly, this behavior caused addresses not to be issued to some instances 
during boot, because quantum-server thought the DHCP agents had timed out and 
were no longer available, when in reality they were just backlogged (waiting on 
quantum-server, it seemed).

Is it realistically looking like this patch will land in time for h3?

--
Brian Cline
Software Engineer III, Product Innovation

SoftLayer, an IBM Company
4849 Alpha Rd, Dallas, TX 75244
214.782.7876 direct  |  bcl...@softlayer.com
 

-Original Message-
From: Baldwin, Carl (HPCS Neutron) [mailto:carl.bald...@hp.com] 
Sent: Wednesday, August 28, 2013 3:04 PM
To: Mark McClain
Cc: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron] The three API server multi-worker process patches.

All,

We've known for a while now that some duplication of work happened with
respect to adding multiple worker processes to the neutron-server.  There
were a few mistakes made which led to three patches being done
independently of each other.

Can we settle on one and accept it?

I have changed my patch at the suggestion of one of the other 2 authors,
Peter Feiner, in an attempt to find common ground.  It now uses openstack
common code and is therefore more concise than any of the original three,
and it should be pretty easy to review.  I'll admit to some bias toward
my own implementation, but most importantly, I would like one of these
implementations to land and start seeing broad usage in the community
sooner rather than later.

Carl Baldwin

PS Here are the two remaining patches.  The third has been abandoned.

https://review.openstack.org/#/c/37131/
https://review.openstack.org/#/c/36487/
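
For anyone who hasn't looked at the patches, here is a minimal stand-alone
sketch of the underlying idea -- the pre-fork, shared-listening-socket
pattern. This is not the patch itself and does not use the openstack common
service code; it is plain stdlib Python (Unix-only), and the port and worker
count are just examples:

# Minimal sketch of the pre-fork pattern (illustration only, not the patch):
# bind the API socket once in the parent, fork N workers, and let every
# worker accept connections on the inherited listening socket.
import os
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Trivial WSGI app standing in for the real API service.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok\n']

def main(workers=4):
    server = make_server('0.0.0.0', 9696, app)   # bind/listen once in the parent
    children = []
    for _ in range(workers):
        pid = os.fork()
        if pid == 0:
            # Child process: serve requests on the inherited socket.
            server.serve_forever()
            os._exit(0)
        children.append(pid)
    for pid in children:
        os.waitpid(pid, 0)                        # parent just waits on the workers

if __name__ == '__main__':
    main()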


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
