On 03/06/2013 01:21 PM, Greg Farnum wrote:
>>> Also, this issue of stat on files created on other clients seems
>>> like it's going to be problematic for many interactions our users
>>> will have with the files created by their parallel compute jobs -
>>> any suggestion on how to avoid or fix it?
>>>
> Brief background: stat is required to provide file size information,
> and so when you do a stat Ceph needs to find out the actual file
> size. If the file is currently in use by somebody, that requires
> gathering up the latest metadata from them. Separately, while Ceph
> allows a client and the MDS to proceed with a bunch of operations
> (i.e., mknod) without having them go to disk first, it requires that
> anything visible to a third party (another client) be durable on
> disk for consistency reasons.
> 
> These combine to mean that if you do a stat on a file for which a
> client currently has buffered writes, that buffer must be flushed
> out to disk before the stat can return. This is the usual cause of
> the slow stats you're seeing. You should be able to adjust the dirty
> data thresholds to encourage faster writeouts, do an fsync once a
> client is done with a file, etc., to minimize the likelihood of
> running into this. Also, I'd have to check, but I believe opening a
> file with LAZY_IO will weaken those requirements; it's probably not
> the solution you'd like here, but it's an option, and if this turns
> out to be a serious issue then config options to reduce consistency
> on certain operations are likely to make their way into the
> roadmap. :)
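
(To make that concrete: below is a minimal sketch of the
fsync-when-done pattern Greg describes, using plain POSIX calls;
the path is a hypothetical CephFS mount point.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical output file on a CephFS mount. */
        int fd = open("/mnt/ceph/job-output.dat",
                      O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        const char buf[] = "results\n";
        if (write(fd, buf, sizeof buf - 1) < 0)
            perror("write");

        /* Flush buffered data out before other clients stat this
         * file, so their stat doesn't block on a writeout. */
        if (fsync(fd) < 0)
            perror("fsync");

        close(fd);
        return 0;
    }

(And, hedging the same way Greg does: if the kernel client's lazy-I/O
ioctl is what's meant, using it would look roughly like the sketch
below. The CEPH_IOCTL_MAGIC / CEPH_IOC_LAZYIO values are copied from
my reading of the kernel client's include/linux/ceph/ioctl.h and
should be checked against your kernel source before use.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Assumed values; verify against include/linux/ceph/ioctl.h. */
    #define CEPH_IOCTL_MAGIC 0x97
    #define CEPH_IOC_LAZYIO  _IO(CEPH_IOCTL_MAGIC, 4)

    int main(void)
    {
        int fd = open("/mnt/ceph/job-output.dat", O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Relax the usual coherency rules on this fd: the client may
         * keep caching even while other clients have the file open,
         * trading consistency for fewer synchronous flushes. */
        if (ioctl(fd, CEPH_IOC_LAZYIO) < 0)
            perror("ioctl(CEPH_IOC_LAZYIO)");

        close(fd);
        return 0;
    }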

That all makes sense.

But it turns out the files in question were written yesterday,
and I did the stat operations today.

So the dirty-buffer issue shouldn't be in play here, should it?

Is there anything else that might be going on?

Thanks -- Jim

> -Greg