On Wed, Sep 05, 2018 at 06:55:15AM -0400, Jeff Layton wrote:
> There is no requirement for a filesystem to flush data on close().

And you can't start doing things like that. In some weird cases, you
might have an application that open-write-closes files at a much higher
rate than a hard disk can handle. And this has worked for years because
the kernel caches inodes and data blocks. If close() suddenly forced
everything to disk, at ~10ms for each seek between the inode area and
the data area, that's two seeks (about 20ms) per file: you end up
limited to about 50 of these open-write-close cycles per second.

My home system is now able to make/write/close about 100000 files per
second.

assurancetourix:~/testfiles> time ../a.out 100000 000
0.103u 0.999s 0:01.10 99.0%     0+0k 0+800000io 0pf+0w

(The test program was accessing arguments beyond the end of its
argument list; passing an extra argument to this one-time program was
easier than open/fix/recompile.)

        Roger. 

-- 
** r.e.wo...@bitwizard.nl ** http://www.BitWizard.nl/ ** +31-15-2600998 **
**    Delftechpark 26 2628 XH  Delft, The Netherlands. KVK: 27239233    **
*-- BitWizard writes Linux device drivers for any device you may have! --*
The plan was simple, like my brother-in-law Phil. But unlike
Phil, this plan just might work.
