On Tue, 27 Nov 2007, Nitin Agrawal wrote:

> I am a relatively new user of the Linux NTFS/FUSE file system,

Welcome!

> so apologies if this question has already been answered elsewhere.
> 
> I need to create several large NTFS partitions (40-200GB) for 
> experimentation, which I discard afterwards and start the process again.

Why? What do you want to do?
 
> Since I don't care about the actual file content but only about time 
> taken to create the FS, I want to avoid issuing actual disk writes for 
> data blocks. The way I currently do this is by writing a magic number in 
> the data blocks at the application level and checking for it in the NTFS 
> driver (ntfs_pwrite()). If a block has the magic number, I simply return 
> with the number of bytes written, without actually issuing the pwrite().
> 
> I verify using iostat that indeed only a few blocks (corresponding to 
> metadata) are being written to the device when I write a large file 
> (1GB). But in terms of time, I notice very little speed up. (17 secs as 
> compared to 19 secs)
> 
> Am I doing something wrong, or is the performance limited by some other 
> factor? 

What's your hardware? CPU, RAM? How large are the blocks you write? What's 
the output of 'vmstat 1' during the test?

        Szaka

_______________________________________________
ntfs-3g-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/ntfs-3g-devel