On Fri, 25 Jan 2008, Diomidis Spinellis wrote:
> Szabolcs Szakacsits wrote:
> > On Thu, 24 Jan 2008, Diomidis Spinellis wrote:
> > > When I read large files using ntfs-3g 1.1120 the read is truncated.  The
> > > problem does not occur with shorter files.
> > Thank you for the bug report. This looks to be a FreeBSD specific LFS (large
> > file support) issue. I can't reproduce it on Linux (1.2125-RC with
> > fuse-lite):
> > 
> >   % dd of=file seek=60011642880 bs=1 < /dev/null
> >   % time dd if=file bs=1M >/dev/null
> >   57231+1 records in
> >   57231+1 records out
> >   60011642880 bytes (60 GB) copied, 296.022 seconds, 203 MB/s
> 
> Just to make sure: are you sure you're not creating a sparse file with the
> first dd command?  

You caught me ;) 

The driver is tested with LFS files on Linux up to 3 TB (2 TB is also a 
common "magical" limit, since the most common sector size, 512 bytes, 
times 2^32 is 2 TiB), and I knew simple writes must work too. The above 
example with a sparse file was a "trick" to demonstrate that there is no 
fundamental LFS problem on Linux, and to make it possible to test the most 
common code paths on block devices without 60 GB of real free space.
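
Just to spell out the arithmetic behind that 2 TB figure (an illustrative 
snippet only, nothing ntfs-3g specific): with 512-byte sectors and a 
sector number kept in a 32-bit field, the addressable size tops out at 
512 * 2^32 bytes:

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          /* 512-byte sectors, 32-bit sector numbers */
          uint64_t limit = (uint64_t)512 << 32;   /* 512 * 2^32 */
          printf("%llu bytes = 2 TiB\n", (unsigned long long)limit);
          return 0;
  }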

But indeed, the example doesn't prove that non-sparse files also work.

Could you please test the above? If it passes, then the problem is probably 
in FreeBSD's ntfs_pread() or in UBLIO. It could be that config.h isn't 
included somewhere, so off_t doesn't get defined as 64-bit. 
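
To illustrate what I mean (a minimal, hypothetical example, not the actual 
driver code): config.h is where _FILE_OFFSET_BITS=64 is expected to be 
defined, and a compilation unit that misses it on a 32-bit system gets a 
32-bit off_t, so offsets beyond 2 GB can't even be represented:

  /* #include "config.h"  -- if this is missing ... */
  #include <sys/types.h>
  #include <stdio.h>

  int main(void)
  {
          /* On a 32-bit build without _FILE_OFFSET_BITS=64 this
           * prints 4, and lseek()/pread() offsets above 2 GB cannot
           * be represented. */
          printf("sizeof(off_t) = %zu\n", sizeof(off_t));
          return 0;
  }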

> I presume, reading an NTFS sparse file will follow a different code path 
> from that of a fully populated one.

Yes. If the file is not sparse, then the implementation of the last phase, 
reading the data from disk, is slightly different; additionally, FreeBSD 
has the UBLIO cache.
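
Roughly (a simplified sketch with a hypothetical helper, not the real 
libntfs-3g code): a sparse run never touches the block device at all, so 
it cannot exercise a broken low-level read path, while an allocated run 
ends up in pread() with a large device offset, which is exactly where a 
32-bit off_t, ntfs_pread() or UBLIO could go wrong:

  #include <string.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* hypothetical helper: read one mapped run of a file */
  static ssize_t read_run(int dev_fd, int is_hole, off_t dev_offset,
                          void *buf, size_t count)
  {
          if (is_hole) {
                  /* sparse run: return zeroes, no device access */
                  memset(buf, 0, count);
                  return (ssize_t)count;
          }
          /* allocated run: the large dev_offset goes through the
           * low-level read path (and the UBLIO cache on FreeBSD) */
          return pread(dev_fd, buf, count, dev_offset);
  }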
 
> To create a non sparse file, you could use
> dd of=file count=60000 bs=1M </dev/zero

I have now tested this with a large file too, and it passed on Linux:

 dd of=file bs=1M < /dev/zero
 dd: writing `file': No space left on device
 5621+0 records in
 5620+0 records out
 5893095424 bytes (5.9 GB) copied, 137.515 seconds, 42.9 MB/s

 dd if=file bs=1M > /dev/null
 5620+1 records in
 5620+1 records out
 5893095424 bytes (5.9 GB) copied, 122.912 seconds, 47.9 MB/s
 
> I would expect an LFS problem to truncate the files to the same size. Is it
> worth trying this out with 1.2125-RC, instead of the stable version?

I don't think so. No changes affected any code path relevant to FreeBSD. It 
seems you have found a previously unknown FreeBSD bug.

Thank you,
            Szaka
