On Thu, Jan 14, 2021 at 10:36 AM Vicker, Darby J. (JSC-EG111)[Jacobs
Technology, Inc.] <darby.vicke...@nasa.gov> wrote:
>
>
> By a "single OSS", do you mean the same OSS for all files?  Or just 1 OSS for 
> each individual file (but not necessarily the same OSS for all files).  I 
> think you mean the latter.  All the lustre results I've sent so far are 
> effectively using a single OSS (but not the same OSS) for all/almost all the 
> files.  Our default PFL uses a single OST up to 32 MB, 4 OST's up to 1GB and 
> 8 OST's beyond that.  In the git repo I've been using for this test, there 
> are only 6 files bigger than 32 MB.  And there was the test where I 
> explicitly set the stripe count to 1 for all files (ephemeral1s).

I mean locating all the files of the git repository on a single OSS,
maybe even a single OST, rather than mapping one file per OSS or
striping files across OSSs.
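
Something like the following ought to force that: set a default layout
of one stripe on a fixed OST index on an empty directory, then clone
into it, so the whole working tree and the .git objects land on that one
OST.  A rough Python sketch (the path, OST index, and repo URL are just
placeholders, and this overrides whatever PFL default the filesystem
has):

#!/usr/bin/env python3
# Pin an entire git working tree (and its .git objects) to a single OST by
# setting a default layout of one stripe on a fixed OST index, then cloning.
import os
import subprocess
import sys

TEST_DIR = "/lustre/scratch/git-single-ost"   # placeholder path
OST_INDEX = "0"                               # placeholder OST index
REPO_URL = sys.argv[1] if len(sys.argv) > 1 else "ssh://git@example.org/repo.git"

os.makedirs(TEST_DIR, exist_ok=True)
# One stripe per file, always starting (and therefore landing) on OST_INDEX;
# new files created under TEST_DIR inherit this default layout.
subprocess.run(["lfs", "setstripe", "-c", "1", "-i", OST_INDEX, TEST_DIR],
               check=True)
subprocess.run(["git", "clone", REPO_URL, TEST_DIR + "/repo"], check=True)

Running lfs getstripe on a few of the resulting files would confirm
where they actually landed.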

The theory, at least, is that having all the files on a single OSS/OST
might reduce the number of RPCs, and/or add a cache effect on the
disks, and/or pick up some readahead.  I don't know, though; it's just
a stab in the dark.

My hope is that by using a single client, a single MDT, and a single
OSS/OST you can push closer to the performance of NFS.  I suspect the
added overhead of the MDT reaching across to the OSS is going to
prevent this, but it would be interesting nonetheless if we could move
the needle at all.

If so, it might just be that the added RPC overhead of the MDS/OSS,
which NFS doesn't have to contend with, means the performance is what
it is.  There's probably a way to measure the RPC latency at various
points from client to disk through Lustre, but I don't know how
offhand.
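
The crude thing I'd try first is just counting what changes: snapshot
the client-side mdc/osc/llite stats counters before and after a git
operation and diff them.  That gives RPC/operation counts rather than
latency, and I'm assuming the usual "<name> <count> samples [unit] ..."
stats format here, which may vary a bit between Lustre versions.
Something along these lines:

#!/usr/bin/env python3
# Crude RPC accounting on a Lustre client: snapshot the mdc/osc/llite stats
# counters, run a git command, snapshot again, and print the deltas.
import subprocess
import sys
from collections import defaultdict

PARAMS = ["mdc.*.stats", "osc.*.stats", "llite.*.stats"]

def snapshot():
    counts = defaultdict(int)
    out = subprocess.run(["lctl", "get_param", "-n"] + PARAMS,
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # stats lines look like "<name> <count> samples [unit] ...";
        # counters are summed across all mdc/osc devices.
        if len(fields) >= 3 and fields[2] == "samples":
            counts[fields[0]] += int(fields[1])
    return counts

before = snapshot()
subprocess.run(sys.argv[1:] or ["git", "status"], check=True)  # command under test
after = snapshot()

for name in sorted(after):
    delta = after[name] - before[name]
    if delta:
        print(f"{name:30s} {delta}")

For actual latency, the req_waittime lines in those same files and
osc.*.rpc_stats would probably be the place to dig.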

I'll give you this: it's an interesting research experiment.  If you
could come up with a way to replicate your git repo with fluff data, I
could try to recreate the experiment and see how our results differ.
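
If all that really matters for the I/O pattern is the file layout and
sizes, a script like this could walk your real checkout and rebuild a
same-shaped repo filled with random bytes, so nothing sensitive has to
leave your site.  A sketch (paths are arguments; assumes a single
synthetic commit is close enough):

#!/usr/bin/env python3
# Rebuild the "shape" of an existing git repo (paths and file sizes) with
# random fluff data, so the I/O pattern can be reproduced elsewhere without
# sharing the real content.
import os
import subprocess
import sys

src, dst = sys.argv[1], sys.argv[2]   # real checkout, fluff copy to create

# List the tracked files in the source repo (NUL-separated to be safe).
tracked = subprocess.run(["git", "-C", src, "ls-files", "-z"],
                         capture_output=True, check=True).stdout.split(b"\0")

os.makedirs(dst, exist_ok=True)
subprocess.run(["git", "-C", dst, "init"], check=True)

for name in filter(None, tracked):
    rel = name.decode()
    size = os.path.getsize(os.path.join(src, rel))
    out_path = os.path.join(dst, rel)
    os.makedirs(os.path.dirname(out_path), exist_ok=True)
    with open(out_path, "wb") as f:
        remaining = size
        while remaining > 0:            # same size as the original, random bytes
            chunk = min(remaining, 1 << 20)
            f.write(os.urandom(chunk))
            remaining -= chunk

subprocess.run(["git", "-C", dst, "add", "-A"], check=True)
subprocess.run(["git", "-C", dst, "commit", "-m", "fluff data"], check=True)

It only produces a single commit, so anything history-dependent won't
be representative, and random data won't compress the way source code
does, but for clone/checkout/status style comparisons it might be
close enough.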