We are using Solaris 10 8/07. We found that a file stored contiguously on ufs 
achieves the best read performance (both response time and throughput) when 
read with aio and directio. But it seems that such a file can only be created 
when a single thread is writing. If multiple files are being created 
simultaneously, file contiguity is badly affected, and read performance on 
such files drops dramatically. So we are unsure how to create a contiguously 
stored file while multiple files are being created at the same time. We have 
tried "newfs -T", "tunefs -a maxcontig", and raising maxphys; all of these 
only improve the situation indirectly rather than guarantee contiguity. We 
have even thought about rewriting bmap_write(), but we lack the knowledge to 
make that happen. Please share your insight with us. Thanks a lot.

BTW, we found a useful add-on utility, filestat, that can report how the 
blocks of a file are distributed on disk. But that utility is available only 
for SPARC. Does anyone know where to find an x86/x64 version of filestat, or 
even its source code, so we can compile it on our own? Many thanks.

-- Hunter
 
 
This message posted from opensolaris.org
_______________________________________________
ufs-discuss mailing list
[email protected]
