On 06/03/2010 08:05 PM, Dave Kleikamp wrote:
> On Thu, 2010-06-03 at 19:59 -0700, Sandon Van Ness wrote:
>   
>> Did you do a commit fixing this today or maybe its working from other
>> patches you made to mkfs?
>>     
> Yes I did.  I ran out of time before sending out an email and was just
> getting ready to reply to you when I saw this.  If everything holds up
> for a few more days I'll create a 1.1.15 release.
>
> Thanks,
> Shaggy

Thanks a ton for this! I formatted my 32+ TiB partition without any
issues, and the fsck ran OK as well. Just to test, I went ahead and
created and fsck'd a 511 TiB sparse file (so it was just under 512
TiB); the mkfs seemed to run OK, but the fsck failed:

r...@sabayonx86-64: 10:52 AM :/data# dd bs=1M count=0 seek=511M
of=./jfs_512tb.sparse
0+0 records in
0+0 records out
0 bytes (0 B) copied, 1.2906e-05 s, 0.0 kB/s
r...@sabayonx86-64: 10:52 AM :/data# ls -lsah jfs_512tb.sparse
0 -rw-r--r-- 1 root root 511T 2010-06-04 10:52 jfs_512tb.sparse
r...@sabayonx86-64: 10:52 AM :/data# mkfs.jfs -s 128 -L sparse
jfs_512tb.sparse
mkfs.jfs version 1.1.14, Jun 3 2010
Warning! All data on device jfs_512tb.sparse will be lost!

Continue? (Y/N) y
|

Format completed successfully.

548682072064 kilobytes total disk space.
r...@sabayonx86-64: 10:57 AM :/data# ls -lsah jfs_512tb.sparse
65G -rw-r--r-- 1 root root 511T 2010-06-04 10:57 jfs_512tb.sparse
r...@sabayonx86-64: 10:57 AM :/data# mount -o loop jfs_512tb.sparse /sparse
r...@sabayonx86-64: 10:58 AM :/data# df -H /sparse
Filesystem Size Used Avail Use% Mounted on
/data/jfs_512tb.sparse
562T 69G 562T 1% /sparse
r...@sabayonx86-64: 10:58 AM :/data# df -h /sparse
Filesystem Size Used Avail Use% Mounted on
/data/jfs_512tb.sparse
511T 64G 511T 1% /sparse
r...@sabayonx86-64: 10:58 AM :/data# cd /sparse/
r...@sabayonx86-64: 10:58 AM :/sparse# cp -avf
/usr/src/linux-2.6.33.tar.gz ./
`/usr/src/linux-2.6.33.tar.gz' -> `./linux-2.6.33.tar.gz'
r...@sabayonx86-64: 10:58 AM :/sparse# tar xzvf linux-2.6.33.tar.gz | wc -l
33508
r...@sabayonx86-64: 10:58 AM :/sparse# df -h /sparse
Filesystem Size Used Avail Use% Mounted on
/data/jfs_512tb.sparse
511T 65G 511T 1% /sparse
r...@sabayonx86-64: 10:58 AM :/sparse# ls -lsah
total 81M
0 drwxr-xr-x 3 root root 16 2010-06-04 10:58 .
8.0K drwxr-xr-x 51 root root 4.0K 2010-06-04 10:58 ..
8.0K drwxr-xr-x 23 root root 4.0K 2010-02-24 10:52 linux-2.6.33
81M -rw-r--r-- 1 root root 81M 2010-02-24 11:14 linux-2.6.33.tar.gz
r...@sabayonx86-64: 10:58 AM :/sparse# cd ..
r...@sabayonx86-64: 10:58 AM :/# umount /sparse
r...@sabayonx86-64: 10:58 AM :/# cd /data
r...@sabayonx86-64: 10:59 AM :/data# fsck -f -v /data/jfs_512tb.sparse
fsck 1.39 (29-May-2006)
fsck.jfs version 1.1.14, Jun 3 2010
processing started: 6/4/2010 10.59.28
The current device is: /data/jfs_512tb.sparse
Open(...READ/WRITE EXCLUSIVE...) returned rc = 0
Invalid fwsp length detected in the superblock (P).
Invalid fwsp address detected in the superblock (P).
Invalid fwsp length detected in the superblock (S).
Invalid fwsp address detected in the superblock (S).
Superblock is corrupt and cannot be repaired
since both primary and secondary copies are corrupt.

CANNOT CONTINUE.
r...@sabayonx86-64: 10:59 AM :/data#
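As a sanity check on the df numbers above (562T from df -H vs 511T from
df -h): mkfs reports the size in binary kilobytes, while df -H converts
to SI units (powers of 1000) and df -h to binary units (powers of 1024).
A quick arithmetic sketch (my own check, not tool output):

```shell
# 511 TiB expressed in binary kilobytes, as mkfs.jfs reports it:
kib=$((511 * 1024 * 1024 * 1024))
echo "511 TiB = $kib KiB"    # 548682072064, matching the mkfs output

# The same size in SI terabytes, as df -H displays it:
bytes=$((kib * 1024))
echo "= $((bytes / 1000000000000)) TB"    # 561, which df -H rounds to 562T
```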

Testing quite a few different sparse file sizes (halving the size each
time) narrowed it down: the failure starts at 128 TiB. A 127 TiB image
will fsck cleanly, while a 128 TiB image will not.
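For anyone who wants to reproduce the bisection, here is a rough sketch
of the loop. The image path, the -q flag to skip the confirmation
prompt, and the exact pair of sizes tested are my illustrative choices;
the real runs were done by hand with jfsutils 1.1.14 plus the fix:

```shell
#!/bin/sh
# Sketch: create a sparse image of N TiB, mkfs it, then fsck it,
# and report whether fsck succeeds. Assumes mkfs.jfs and fsck.jfs
# from jfsutils are on PATH; the image path is arbitrary.
img=/data/jfs_test.sparse

test_size() {
    tib=$1
    rm -f "$img"
    # dd's seek is counted in 1 MiB blocks here, so TiB * 1024 * 1024
    dd bs=1M count=0 seek=$((tib * 1024 * 1024)) of="$img" 2>/dev/null
    mkfs.jfs -q -s 128 "$img" >/dev/null || { echo "$tib TiB: mkfs failed"; return; }
    if fsck.jfs -f "$img" >/dev/null 2>&1; then
        echo "$tib TiB: fsck OK"
    else
        echo "$tib TiB: fsck FAILED"
    fi
    rm -f "$img"
}

if command -v mkfs.jfs >/dev/null 2>&1; then
    for size in 127 128; do
        test_size "$size"
    done
else
    echo "jfsutils not installed; skipping"
fi
```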

I think being able to do up to 128 TiB is a great step in the right
direction and will cover my needs for quite a long time, but it would be
good if it could be fixed to handle at least 512 TiB, which is where you
start running into the limitations of a lot of RAID controllers using
LBA64.

_______________________________________________
Jfs-discussion mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/jfs-discussion
