Re: ntfs with big files

2013-12-02 Thread Joel Sing
On Sat, 19 Oct 2013, David Vasek wrote:
 On Thu, 17 Oct 2013, David Vasek wrote:
  On Fri, 11 Oct 2013, Joel Sing wrote:
  On Thu, 10 Oct 2013, Manuel Giraud wrote:
  Hi,
 
  I have an ntfs partition with rather large (about 3GB) files on it. When
  I copy these files onto a ffs partition, they are corrupted. When I try
  to checksum them directly from the ntfs partition, the checksum is not
  correct (compared to the same file on a fat32 partition copied with
  Windows).
 
  I tried this (with the same behaviour) on the i386 5.3 release and on
  last week's i386 -current. I'm willing to do some testing to fix this
  issue but don't really know where to start.
 
  See if you can isolate the smallest possible reproducible test case. If
  you create a 3GB file with known content (e.g. the same byte repeated),
  does the same issue occur? If so, how small do you need to go before the
  problem goes away? Also, what operating system (and version) was used to
  write the files to the NTFS volume?
 
  Hello, I encountered the same issue. Anything over the 2 GB limit is
  wrong. I mean, exactly the first 2 GB of the file are read correctly;
  after that I get wrong data until the end of the file. It is reproducible
  with any file over 2 GB in size so far. Smells like an int somewhere... I
  get the same wrong data with any release since at least 5.0; I didn't
  test anything older, but I bet it is the same.
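
A minimal sketch (not code from the thread) of why "exactly 2 GB" points at
a signed 32-bit int: 2 GB is 2^31 bytes, the first offset that no longer
fits in a signed 32-bit integer, so a 64-bit off_t narrowed to int goes
negative at exactly that byte:

#include <stdio.h>
#include <sys/types.h>

int
main(void)
{
	off_t boundary = 2147483648LL;	/* 2 GiB == 2^31 bytes */

	/*
	 * Narrowing a value above INT_MAX is implementation-defined;
	 * on two's-complement i386 it wraps to INT_MIN.
	 */
	printf("%lld -> %d\n", (long long)boundary, (int)boundary);
	/* prints: 2147483648 -> -2147483648 */
	return 0;
}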
 
  The filesystem is a Windows XP NTFS system disk, 32-bit; the files were
  copied there with explorer.exe.

 Some additional notes and findings:

 (1)
 The data I receive after the first 2 GB is not part of the file; it comes
 from another file (from the same directory, if that fact could be
 important). The data is taken in an uninterrupted sequence, and the
 starting offset of that sequence in the other file, where the data
 belongs, is well below 2 GB.

 (2)
 While reading past 2 GB in larger blocks gives me just wrong data, reading
 in smaller blocks (2 kB and less) gives me a kernel panic in a KASSERT
 immediately when I read past the 2 GB limit. It is 100% reproducible with
 any file larger than 2 GB so far.
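
One plausible mechanism for that assertion failure, sketched under stated
assumptions: ntfs_subr.c turns byte offsets into cluster numbers and
in-cluster offsets with shift/mask arithmetic (modelled below by the
hypothetical BTOCN/BTOCNOFF macros with 4 kB clusters; the kernel's real
macros are ntfs_btocn()/ntfs_btocnoff()). If the byte offset has already
gone negative through int truncation, the shift produces a nonsense
negative cluster number, and the "cl == 1 && tocopy <= ntfs_cntob(1)"
invariant can no longer hold once a read crosses the 2 GB mark:

#include <stdio.h>
#include <stdint.h>

#define CLUSTER_SHIFT	12			/* assume 4 kB clusters */
#define BTOCN(x)	((x) >> CLUSTER_SHIFT)	/* bytes -> cluster number */
#define BTOCNOFF(x)	((x) & ((1 << CLUSTER_SHIFT) - 1)) /* offset in cluster */

int
main(void)
{
	/* 2 GiB + 2 kB, as it looks after truncation to 32 bits */
	int32_t off = (int32_t)(2147483648LL + 2048);

	printf("cluster %d, in-cluster offset %d\n",
	    (int)BTOCN(off), (int)BTOCNOFF(off));
	/* prints a negative cluster number on two's-complement targets */
	return 0;
}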

Thanks for taking the time to dig into this further and provide some 
reproducible test cases.

There were two problems. The first was an off_t (64-bit integer) to int 
conversion, which meant that an offset past 2GB became negative. The second 
was an unsigned 64-bit to unsigned 32-bit truncation, which effectively 
wrapped the attribute data length at 4GB.

I've just committed fixes for both of these and I can now successfully 
read/checksum a 6.5GB file on NTFS.
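
For reference, here is a small userland sketch of the two truncations
described above, using hypothetical variable names (the actual fix is in
the kernel NTFS code and is not reproduced here):

#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>

int
main(void)
{
	off_t	 offset = 3054813184LL;		/* the 3 GB test file */
	int	 narrowed = (int)offset;	/* bug 1: goes negative */

	uint64_t len = 6979321856ULL;		/* a 6.5 GB attribute */
	uint32_t wrapped = (uint32_t)len;	/* bug 2: wraps modulo 2^32 */

	printf("offset %lld narrows to %d\n", (long long)offset, narrowed);
	printf("length %llu wraps to %u\n", (unsigned long long)len, wrapped);
	return 0;
}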

 # mount -r /dev/wd0i /mnt

 # ls -lo /mnt/DATA/ntfs_2gb_test.bin
 -rwxr-xr-x  1 root  wheel  - 3054813184 Oct 17 22:11 /mnt/DATA/ntfs_2gb_test.bin

 # cat /mnt/DATA//ntfs_2gb_test.bin > /dev/null

 # dd if=/mnt/DATA/ntfs_2gb_test.bin bs=4k of=/dev/null
 745804+0 records in
 745804+0 records out
 3054813184 bytes transferred in 108.518 secs (28150083 bytes/sec)

 # dd if=/mnt/DATA/ntfs_2gb_test.bin bs=2k count=1m of=/dev/null
 1048576+0 records in
 1048576+0 records out
 2147483648 bytes transferred in 78.783 secs (27258052 bytes/sec)

 # dd if=/mnt/DATA/ntfs_2gb_test.bin bs=1k count=2m of=/dev/null
 2097152+0 records in
 2097152+0 records out
 2147483648 bytes transferred in 81.210 secs (26443280 bytes/sec)

 # dd if=/mnt/DATA/ntfs_2gb_test.bin bs=4k skip=512k of=/dev/null
 221516+0 records in
 221516+0 records out
 907329536 bytes transferred in 32.314 secs (28077667 bytes/sec)

 # dd if=/mnt/DATA/ntfs_2gb_test.bin bs=2k skip=1m of=/dev/null
 panic: kernel diagnostic assertion cl == 1 && tocopy <= ntfs_cntob(1)
 failed: file ../../../../ntfs/ntfs_subr.c, line 1556
 Stopped at      Debugger+0x4:   popl    %ebp
 RUN AT LEAST 'trace' AND 'ps' AND INCLUDE OUTPUT WHEN REPORTING THIS PANIC!
 DO NOT EVEN BOTHER REPORTING THIS WITHOUT INCLUDING THAT INFORMATION!
 ddb> trace
 Debugger(d08fdcbc,f544fb88,d08dc500,f544fb88,200) at Debugger+0x4
 panic(d08dc500,d085fc0e,d08dfe60,d08e00b0,614) at panic+0x5d
 __assert(d085fc0e,d08e00b0,614,d08dfe60,8) at __assert+0x2e
 ntfs_readntvattr_plain(d1a2d200,d1a36200,d1a5bc00,8800,0) at
 ntfs_readntvattr_plain+0x2e6
 ntfs_readattr_plain(d1a2d200,d1a36200,80,0,8800) at
 ntfs_readattr_plain+0x141
 ntfs_readattr(d1a2d200,d1a36200,80,0,8800) at ntfs_readattr+0x156
 ntfs_read(f544fddc,d64e5140,d6522a60,f544fea0,0) at ntfs_read+0xa8
 VOP_READ(d6522a60,f544fea0,0,d6599000,d64e5140) at VOP_READ+0x35
 vn_read(d65290a8,d65290c4,f544fea0,d6599000,0) at vn_read+0xb5
 dofilereadv(d65365d4,3,d65290a8,f544ff08,1) at dofilereadv+0x13a
 sys_read(d65365d4,f544ff64,f544ff84,106,d653f100) at sys_read+0x89
 syscall() at syscall+0x227
 --- syscall (number 0) ---
 0x2:
 ddb> ps
    PID   PPID   PGRP    UID  S       FLAGS  WAIT      COMMAND
 *19967   9961  19967     0  7           0            dd
   9961      1   9961     0  3        0x88  pause     sh
     14      0      0     0  3

Re: ntfs with big files

2013-10-18 Thread Paolo Aglialoro
Just a thought: now that fuse support is enabled, what about ntfs-3g?

On 17 Oct 2013 at 23:36, David Vasek va...@fido.cz wrote:

 On Fri, 11 Oct 2013, Joel Sing wrote:

 On Thu, 10 Oct 2013, Manuel Giraud wrote:

 Hi,

 I have an ntfs partition with rather large (about 3GB) files on it. When
 I copy these files onto a ffs partition, they are corrupted. When I try
 to checksum them directly from the ntfs partition, the checksum is not
 correct (compared to the same file on a fat32 partition copied with
 Windows).

 I tried this (with the same behaviour) on the i386 5.3 release and on
 last week's i386 -current. I'm willing to do some testing to fix this
 issue but don't really know where to start.


 See if you can isolate the smallest possible reproducible test case. If
 you create a 3GB file with known content (e.g. the same byte repeated),
 does the same issue occur? If so, how small do you need to go before the
 problem goes away? Also, what operating system (and version) was used to
 write the files to the NTFS volume?


 Hello, I encountered the same issue. Anything over the 2 GB limit is
 wrong. I mean, exactly the first 2 GB of the file are read correctly;
 after that I get wrong data until the end of the file. It is reproducible
 with any file over 2 GB in size so far. Smells like an int somewhere... I
 get the same wrong data with any release since at least 5.0; I didn't
 test anything older, but I bet it is the same.

 The filesystem is a Windows XP NTFS system disk, 32-bit; the files were
 copied there with explorer.exe.

 Regards,
 David



Re: ntfs with big files

2013-10-18 Thread David Coppa
On Fri, Oct 18, 2013 at 1:32 PM, Paolo Aglialoro paol...@gmail.com wrote:
 Just a thought: now that fuse support is enabled, what about ntfs-3g?

ntfs-3g is in ports (sysutils/ntfs-3g).
It's fuse support that, as of now, is not enabled.

Ciao,
David



Re: ntfs with big files

2013-10-18 Thread David Vasek

On Thu, 17 Oct 2013, David Vasek wrote:


On Fri, 11 Oct 2013, Joel Sing wrote:


On Thu, 10 Oct 2013, Manuel Giraud wrote:

Hi,

I have an ntfs partition with rather large (about 3GB) files on it. When
I copy these files onto a ffs partition, they are corrupted. When I try
to checksum them directly from the ntfs partition, the checksum is not
correct (compared to the same file on a fat32 partition copied with
Windows).

I tried this (with the same behaviour) on the i386 5.3 release and on
last week's i386 -current. I'm willing to do some testing to fix this
issue but don't really know where to start.


See if you can isolate the smallest possible reproducible test case. If you
create a 3GB file with known content (e.g. the same byte repeated), does the
same issue occur? If so, how small do you need to go before the problem goes
away? Also, what operating system (and version) was used to write the files
to the NTFS volume?


Hello, I encountered the same issue. Anything over the 2 GB limit is wrong.
I mean, exactly the first 2 GB of the file are read correctly; after that I
get wrong data until the end of the file. It is reproducible with any file
over 2 GB in size so far. Smells like an int somewhere... I get the same
wrong data with any release since at least 5.0; I didn't test anything
older, but I bet it is the same.


The filesystem is a Windows XP NTFS system disk, 32-bit; the files were
copied there with explorer.exe.


Some additional notes and findings:

(1)
The data I receive after the first 2 GB is not part of the file; it comes
from another file (from the same directory, if that fact could be
important). The data is taken in an uninterrupted sequence, and the
starting offset of that sequence in the other file, where the data belongs,
is well below 2 GB.


(2)
While reading past 2 GB in larger blocks gives me just wrong data, reading
in smaller blocks (2 kB and less) gives me a kernel panic in a KASSERT
immediately when I read past the 2 GB limit. It is 100% reproducible with
any file larger than 2 GB so far.


# mount -r /dev/wd0i /mnt

# ls -lo /mnt/DATA/ntfs_2gb_test.bin
-rwxr-xr-x  1 root  wheel  - 3054813184 Oct 17 22:11 /mnt/DATA/ntfs_2gb_test.bin

# cat /mnt/DATA//ntfs_2gb_test.bin > /dev/null

# dd if=/mnt/DATA/ntfs_2gb_test.bin bs=4k of=/dev/null
745804+0 records in
745804+0 records out
3054813184 bytes transferred in 108.518 secs (28150083 bytes/sec)

# dd if=/mnt/DATA/ntfs_2gb_test.bin bs=2k count=1m of=/dev/null
1048576+0 records in
1048576+0 records out
2147483648 bytes transferred in 78.783 secs (27258052 bytes/sec)

# dd if=/mnt/DATA/ntfs_2gb_test.bin bs=1k count=2m of=/dev/null
2097152+0 records in
2097152+0 records out
2147483648 bytes transferred in 81.210 secs (26443280 bytes/sec)

# dd if=/mnt/DATA/ntfs_2gb_test.bin bs=4k skip=512k of=/dev/null
221516+0 records in
221516+0 records out
907329536 bytes transferred in 32.314 secs (28077667 bytes/sec)

# dd if=/mnt/DATA/ntfs_2gb_test.bin bs=2k skip=1m of=/dev/null
panic: kernel diagnostic assertion cl == 1 && tocopy <= ntfs_cntob(1) failed: file
../../../../ntfs/ntfs_subr.c, line 1556
Stopped at      Debugger+0x4:   popl    %ebp
RUN AT LEAST 'trace' AND 'ps' AND INCLUDE OUTPUT WHEN REPORTING THIS PANIC!
DO NOT EVEN BOTHER REPORTING THIS WITHOUT INCLUDING THAT INFORMATION!
ddb> trace
Debugger(d08fdcbc,f544fb88,d08dc500,f544fb88,200) at Debugger+0x4
panic(d08dc500,d085fc0e,d08dfe60,d08e00b0,614) at panic+0x5d
__assert(d085fc0e,d08e00b0,614,d08dfe60,8) at __assert+0x2e
ntfs_readntvattr_plain(d1a2d200,d1a36200,d1a5bc00,8800,0) at
ntfs_readntvattr_plain+0x2e6
ntfs_readattr_plain(d1a2d200,d1a36200,80,0,8800) at
ntfs_readattr_plain+0x141
ntfs_readattr(d1a2d200,d1a36200,80,0,8800) at ntfs_readattr+0x156
ntfs_read(f544fddc,d64e5140,d6522a60,f544fea0,0) at ntfs_read+0xa8
VOP_READ(d6522a60,f544fea0,0,d6599000,d64e5140) at VOP_READ+0x35
vn_read(d65290a8,d65290c4,f544fea0,d6599000,0) at vn_read+0xb5
dofilereadv(d65365d4,3,d65290a8,f544ff08,1) at dofilereadv+0x13a
sys_read(d65365d4,f544ff64,f544ff84,106,d653f100) at sys_read+0x89
syscall() at syscall+0x227
--- syscall (number 0) ---
0x2:
ddb> ps
   PID   PPID   PGRP    UID  S       FLAGS  WAIT      COMMAND
*19967   9961  19967      0  7           0            dd
  9961      1   9961      0  3        0x88  pause     sh
    14      0      0      0  3    0x100200  aiodoned  aiodoned
    13      0      0      0  3    0x100200  syncer    update
    12      0      0      0  3    0x100200  cleaner   cleaner
    11      0      0      0  3    0x100200  reaper    reaper
    10      0      0      0  3    0x100200  pgdaemon  pagedaemon
     9      0      0      0  3    0x100200  bored     crypto
     8      0      0      0  3    0x100200  pftm      pfpurge
     7      0      0      0  3    0x100200  usbtsk    usbtask
     6      0      0      0  3    0x100200  usbatsk   usbatsk
     5      0      0      0  3    0x100200  acpi0     acpi0
     4      0 

Re: ntfs with big files

2013-10-17 Thread David Vasek

On Fri, 11 Oct 2013, Joel Sing wrote:


On Thu, 10 Oct 2013, Manuel Giraud wrote:

Hi,

I have an ntfs partition with rather large (about 3GB) files on it. When
I copy these files onto a ffs partition, they are corrupted. When I try
to checksum them directly from the ntfs partition, the checksum is not
correct (compared to the same file on a fat32 partition copied with
Windows).

I tried this (with the same behaviour) on the i386 5.3 release and on
last week's i386 -current. I'm willing to do some testing to fix this
issue but don't really know where to start.


See if you can isolate the smallest possible reproducible test case. If you
create a 3GB file with known content (e.g. the same byte repeated), does the
same issue occur? If so, how small do you need to go before the problem goes
away? Also, what operating system (and version) was used to write the files
to the NTFS volume?


Hello, I encountered the same issue. Anything over the 2 GB limit is
wrong. I mean, exactly the first 2 GB of the file are read correctly;
after that I get wrong data until the end of the file. It is reproducible
with any file over 2 GB in size so far. Smells like an int somewhere... I
get the same wrong data with any release since at least 5.0; I didn't
test anything older, but I bet it is the same.


The filesystem is a Windows XP NTFS system disk, 32-bit; the files were
copied there with explorer.exe.


Regards,
David



Re: ntfs with big files

2013-10-10 Thread Joel Sing
On Thu, 10 Oct 2013, Manuel Giraud wrote:
 Hi,

 I have an ntfs partition with rather large (about 3GB) files on it. When
 I copy these files onto a ffs partition, they are corrupted. When I try
 to checksum them directly from the ntfs partition, the checksum is not
 correct (compared to the same file on a fat32 partition copied with
 Windows).

 I tried this (with the same behaviour) on the i386 5.3 release and on
 last week's i386 -current. I'm willing to do some testing to fix this
 issue but don't really know where to start.

See if you can isolate the smallest possible reproducible test case. If you
create a 3GB file with known content (e.g. the same byte repeated), does the 
same issue occur? If so, how small do you need to go before the problem goes 
away? Also, what operating system (and version) was used to write the files 
to the NTFS volume?
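
A minimal generator for such a test case (a sketch; the file name, fill
byte and size are arbitrary): it writes 3GB of a single repeated byte, so
corruption past any power-of-two boundary shows up immediately with cmp(1)
or a checksum:

#include <sys/types.h>
#include <err.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	char	buf[64 * 1024];
	off_t	total = (off_t)3 * 1024 * 1024 * 1024;	/* 3 GiB */
	off_t	done;
	int	fd;

	memset(buf, 0xa5, sizeof(buf));		/* known repeated byte */
	fd = open("ntfs_test.bin", O_WRONLY|O_CREAT|O_TRUNC, 0644);
	if (fd == -1)
		err(1, "open");
	for (done = 0; done < total; done += sizeof(buf))
		if (write(fd, buf, sizeof(buf)) != sizeof(buf))
			err(1, "write");
	close(fd);
	return 0;
}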
-- 

Action without study is fatal. Study without action is futile.
-- Mary Ritter Beard