Thanks Hugo, Clay and Pete. I asked this question based on some limited
knowledge I had about Swift and while trying to understand more based on
the source code. I'll run some more experiments and follow up with
questions if any. The general system tuning guidelines definitely seem
helpful.
On Fri, 10 Jan 2014 15:25:02 -0800
Shrinand Javadekar wrote:
> I see that the proxy-server already has a "workers" config option. However,
> looks like that is the # of threads in one proxy-server process.
Not so. Workers are separate Linux processes. Look at os.fork() in
run_server().
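To make the distinction concrete, here is a stripped-down sketch of the pre-fork pattern being described. This is a hypothetical simplification, not Swift's actual run_server() (which also sets up the shared listen socket, signal handling, and re-forks workers that die); it only shows that each worker is a forked process, not a thread.

```python
import os

def run_server(worker_count):
    """Toy pre-fork sketch: the parent forks worker_count child
    processes, then reaps them and returns their exit statuses."""
    pids = []
    for i in range(worker_count):
        pid = os.fork()
        if pid == 0:
            # Child: a separate Linux process with its own memory and
            # its own GIL.  In Swift this is where the worker would run
            # its WSGI accept loop against the shared listen socket.
            os._exit(0)
        pids.append(pid)
    # Parent: wait for every worker and collect its exit status.
    return [os.waitpid(p, 0)[1] for p in pids]

statuses = run_server(2)
```

Because workers are processes, raising the `workers` count scales across CPU cores in a way that threads under the GIL cannot.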
It's not synchronous: each request/eventlet coroutine will yield/trampoline
back to the reactor/hub on every socket operation that raises EWOULDBLOCK.
In cases where there's a tight long-running read/write loop, you'll
normally find a call to eventlet.sleep (or, in at least one case, a queue) to
avoid starving the other coroutines.
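The cooperative-yield behavior described above can be illustrated without eventlet itself. The sketch below is a toy analogy built on plain generators (all names are invented): the bare `yield` plays the role of the implicit trampoline on EWOULDBLOCK, or of an explicit `eventlet.sleep(0)` inside a tight loop.

```python
from collections import deque

def copy_chunks(name, chunks, log):
    # Stand-in for a tight read/write loop in a worker.  The bare
    # `yield` plays the role of eventlet.sleep(0): without it, one
    # long transfer would monopolize the hub and starve the others.
    for chunk in chunks:
        log.append((name, chunk))
        yield

def hub(coroutines):
    # Toy round-robin "hub": resume each coroutine until its next
    # yield, dropping it once it finishes.
    ready = deque(coroutines)
    while ready:
        coro = ready.popleft()
        try:
            next(coro)
            ready.append(coro)
        except StopIteration:
            pass

log = []
hub([copy_chunks("req-a", [1, 2, 3], log),
     copy_chunks("req-b", [4, 5], log)])
# log now interleaves chunks from both transfers
```

The point is that concurrency within one worker is cooperative: a coroutine that never reaches a yield point blocks every other request in that process.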
Hi Shrinand,
The concurrency bottleneck of a Swift cluster can come from several places.
Here's a list:
- Settings of each worker: workers count, max_clients,
threads_per_disk.
- Proxy CPU bound
- Storage nodes CPU bound
- Total Disk IO capacity (includes available memory for xfs caching)
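For the first item, the knobs live in the server config files. The values below are illustrative assumptions only, not tuning recommendations; sensible settings depend on your hardware and workload.

```ini
# proxy-server.conf -- each worker is a forked process running its own
# eventlet hub; max_clients caps concurrent greenthreads per worker.
[DEFAULT]
workers = 8
max_clients = 1024
```

```ini
# object-server.conf -- threads_per_disk moves blocking disk I/O into a
# per-disk thread pool so it doesn't stall the worker's hub.
[DEFAULT]
workers = 4
threads_per_disk = 2
```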
Hi,
This question is specific to OpenStack Swift. I am trying to understand
how much of a bottleneck the proxy server is when multiple clients are
concurrently trying to write to a Swift cluster. Has anyone done
experiments to measure this? It'll be great to see some results.
I see that the proxy-server already has a "workers" config option. However,
looks like that is the # of threads in one proxy-server process.