It was set to 1. This happened on a 2048-PE job reading in a few
hundred GB of input data from a single file.
paul
Nicholas Henke wrote:
pauln wrote:
On our DDNs I've noticed that during large reads the back-end
bandwidth of the DDN is pegged (700-800 MB/s) while the total
bandwidth delivered through the FC interfaces is in the low
hundreds of MB/s. This leads me to believe that the DDN cache is
being thrashed heavily due to overly aggressive read-ahead by many
OSTs and by the DDN itself.
I haven't re-run these tests with the 2.6 kernel (Cray 1.4.xx), so I'm
not sure whether this phenomenon still exists, but the theory may still
have some validity since the DDN cache is shared among all the OSTs
connected to it.
paul
Paul -- what is the "prefetch" value for the LUNs? The "cache" command
is where this can be found.
Historical testing on S2A 8500s shows that for Lustre, prefetch=1 or
prefetch=0 is desired. When run with prefetch=8, we saw the same
behavior you are describing, namely the DDN "stomping" on itself
trying to prefetch too much data.
This testing has been done twice on the XT3, both times with the same
result -- prefetch=1.
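If it helps, from memory the check and the change are both done from the
S2A command shell along these lines (exact syntax differs between firmware
revisions, so take this as a sketch rather than the literal commands):

    cache               # display current cache settings, including per-LUN prefetch
    cache prefetch=1    # restrict prefetch to 1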
Nic
_______________________________________________
Lustre-devel mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-devel