On Thu, May 31, 2007 at 01:38:46PM -0700, David Brown wrote:
> Okay, this is kinda weird, but here you go.

Heh, yeah, I'm having a bit of trouble making sense of the data.

> 1) Set up an equal number (N) of data servers and clients.
> 2) From each client, choose a server to write to and determine the
> path of least resistance to that server, i.e. make the data path from
> this client to this server as fast as possible. For PVFS this means
> running the client and server on the same box.
> 3) Simultaneously write a large chunk of data from all the clients to
> their individually selected servers from above.

Running the clients and servers on the same machine might actually
hurt your performance a bit.  On the other hand, since PVFS doesn't
have a native Quadrics method, maybe you save a lot of overhead by
skipping the TCP-over-Quadrics stuff.

> The objective is to increase N to large numbers to see what performance
> hits are taken by the file system.

It might be easier to see patterns if you held the number of servers
constant and increased the number of clients, or held the clients
constant while varying the number of servers.  To visualize both at
once you'd end up with a 3-d plot, something like the sketch below...
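
Rough (untested) matplotlib sketch of what I mean; the results.csv
filename and its column names are made up, so adjust to however you're
logging the runs:

    # surface plot of throughput over (servers, clients); assumes a
    # results.csv with columns servers,clients,mb_per_sec (made-up names)
    import csv
    import matplotlib.pyplot as plt

    servers, clients, rate = [], [], []
    with open("results.csv") as f:
        for row in csv.DictReader(f):
            servers.append(int(row["servers"]))
            clients.append(int(row["clients"]))
            rate.append(float(row["mb_per_sec"]))

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.plot_trisurf(servers, clients, rate, cmap="viridis")
    ax.set_xlabel("servers")
    ax.set_ylabel("clients")
    ax.set_zlabel("MB/s")
    plt.savefig("scaling-surface.png")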

> Okay, question:
> The data shows some interesting things. As it jumps to the higher
> numbers, there's a step increase in the time it takes to dd a 10 GB
> file: from 45-50s it jumps up to 65-70s on some nodes, and those
> nodes are grouped across the node set as well. Any clues as to what
> could cause this?

I'm looking at 255-test.csv.  That's 256 nodes (acting as servers and
clients), each client running dd to write 10 GB to a single server?
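
If I've got that right, each node's run looks roughly like this
(untested sketch; the mount point and block size are my guesses, not
taken from your setup):

    # time one client's 10 GB dd into the locally mounted PVFS volume
    # (the /mnt/pvfs2 mount point and 1 MiB block size are assumptions)
    import subprocess
    import time

    mount = "/mnt/pvfs2"
    cmd = ["dd", "if=/dev/zero", "of=%s/testfile" % mount,
           "bs=1M", "count=10240"]   # 10 GiB in 1 MiB blocks

    start = time.time()
    subprocess.run(cmd, check=True)
    elapsed = time.time() - start
    print("%.1f s, %.1f MB/s" % (elapsed, 10240.0 / elapsed))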

I don't know why that workload would take about a minute for up to 64
clients, speed up for 65-141 clients, and then go back to being slower
for the rest of the runs, except for a cluster of fast runs at 173-183
clients.   

Since you've got things set up so each client talks to a single server
locally, we shouldn't be seeing network contention or switch
weirdness.  And with a single client talking to a single server, the
access pattern from each client should look pretty regular to
pvfs2-server.  The tight bimodal distribution of results suggests...
I don't know... fortunate placement of files on the storage device for
some runs?  http://www.coker.com.au/bonnie++/zcav
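
If you want to check that theory without pulling in bonnie++, here's
the rough idea behind zcav as an (untested) sketch: time a fixed-size
read at a series of offsets across the raw device and watch the MB/s
fall off toward the inner zones.  The device path is a guess, and
you'd want a cold cache (or O_DIRECT) for real numbers:

    # zcav-style probe: time a fixed-size read at increasing offsets
    # across the raw device (device path is an assumption; read-only,
    # but the page cache will inflate numbers unless the cache is cold)
    import os
    import time

    dev = "/dev/sda"
    chunk = 64 * 1024 * 1024           # 64 MiB per sample
    step = 4 * 1024 * 1024 * 1024      # sample every 4 GiB

    fd = os.open(dev, os.O_RDONLY)
    offset = 0
    while True:
        os.lseek(fd, offset, os.SEEK_SET)
        start = time.time()
        data = os.read(fd, chunk)
        if len(data) < chunk:
            break                      # ran off the end of the device
        mbps = (chunk / (1024.0 * 1024)) / (time.time() - start)
        print("%4d GiB  %7.1f MB/s" % (offset // (1024 ** 3), mbps))
        offset += step
    os.close(fd)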

==rob

-- 
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Lab, IL USA                 B29D F333 664A 4280 315B
