I am planning on having a file-system that is over 32 TB in the very near
future, and I heard from someone that JFS can't handle file-systems over
32 TB correctly (can't expand, fsck, or anything). I believe this is
indeed true: I just tried formatting a 31 TB file-system backed by a
sparse file, and it took a while to format, ending up as a file with
about 5 GB actually allocated:
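The transcript below doesn't show how this first 31 TB sparse file was
created, but the later tests in this message use dd with count=0 and a
large seek, so presumably it was the same pattern. A scaled-down sketch
(100 MiB so it is safe to run anywhere; the file name is illustrative):

```shell
# Create a sparse file: count=0 writes no data, seek sets the apparent
# size, so almost no disk blocks are actually allocated.
dd if=/dev/zero of=sparse_demo bs=1M count=0 seek=100
ls -lsah sparse_demo   # apparent size 100M, allocated size near zero
```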

r...@sabayonx86-64: 05:31 PM :/data# mkfs.jfs -s 1024 /data/sparse_jfs
mkfs.jfs version 1.1.14, 06-Apr-2009
Warning!  All data on device /data/sparse_jfs will be lost!

Continue? (Y/N) y
   -

Format completed successfully.

33285996544 kilobytes total disk space.


r...@sabayonx86-64: 05:32 PM :~# ls -lsah /data/sparse_jfs
4.9G -rw-r--r-- 1 root root 31T 2010-04-20 17:32 /data/sparse_jfs

However, when I made the file-system 2 TB bigger (33 TB instead of 31),
the format went much quicker and the end result was a sparse file with
only 1.1 GB actually allocated:

r...@sabayonx86-64: 05:32 PM :/data# mkfs.jfs -s 1024 /data/sparse_jfs
mkfs.jfs version 1.1.14, 06-Apr-2009
Warning!  All data on device /data/sparse_jfs will be lost!

Continue? (Y/N) y
   -

Format completed successfully.

35433480192 kilobytes total disk space.
r...@sabayonx86-64: 05:32 PM :/data# ls -lsah /data/sparse_jfs
1.1G -rw-r--r-- 1 root root 33T 2010-04-20 17:33 /data/sparse_jfs
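In the listing above, the first column (1.1G) is the space actually
allocated and the 33T is the apparent size; the same distinction can be
read with stat or GNU du. A minimal sketch using a small throwaway file
(name and size are illustrative, not from the transcript):

```shell
# Make a small sparse file, then compare allocated vs apparent size.
dd if=/dev/zero of=sparse_stat_demo bs=1M count=0 seek=50
stat -c 'apparent=%s bytes, allocated=%b blocks of %B bytes' sparse_stat_demo
du -h sparse_stat_demo                   # allocated (real) usage, near zero
du -h --apparent-size sparse_stat_demo   # nominal size, 50M
```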


So obviously there is some bug here; if anything, the larger file-system
should be writing more data, not less. The used-space figure reported
after mounting is also about what I would expect from the sparse file's
allocation minus the log size.

The used size roughly doubles again with a 64 TB file-system, but the
sparse file is still much smaller than with the 31 TB one:

r...@sabayonx86-64: 05:37 PM :/data# dd if=/dev/zero of=sparse_jfs bs=1M count=0 seek=64M
0+0 records in
0+0 records out
0 bytes (0 B) copied, 1.1877e-05 s, 0.0 kB/s
r...@sabayonx86-64: 05:37 PM :/data# mkfs.jfs -s 1024 /data/sparse_jfs
mkfs.jfs version 1.1.14, 06-Apr-2009
Warning!  All data on device /data/sparse_jfs will be lost!

Continue? (Y/N) y
   -

Format completed successfully.

68719476736 kilobytes total disk space.
r...@sabayonx86-64: 05:38 PM :/data# mount -o loop -t jfs /data/sparse_jfs /mnt/sparse/
r...@sabayonx86-64: 05:38 PM :/data# df -h /mnt/sparse/
Filesystem            Size  Used Avail Use% Mounted on
/data/sparse_jfs       65T  8.1G   64T   1% /mnt/sparse

However, trying to copy anything to it immediately errors out:

r...@sabayonx86-64: 05:39 PM :/mnt/sparse# cp -avf /opt ./
cp: cannot create directory `./opt': Input/output error
r...@sabayonx86-64: 05:39 PM :/mnt/sparse#            

ERROR: (device loop1): dbAllocNext: Corrupt dmap page
ERROR: (device loop1): remounting filesystem as read-only

ialloc: diAlloc returned -5!


With the 31 TB sparse file, by contrast, it works as expected and the cp
runs without any problem:

r...@sabayonx86-64: 05:40 PM :/data# dd if=/dev/zero of=sparse_jfs bs=1M count=0 seek=31M
0+0 records in
0+0 records out
0 bytes (0 B) copied, 1.2172e-05 s, 0.0 kB/s
r...@sabayonx86-64: 05:40 PM :/data# mkfs.jfs -s 1024 /data/sparse_jfs
mkfs.jfs version 1.1.14, 06-Apr-2009
Warning!  All data on device /data/sparse_jfs will be lost!

Continue? (Y/N) y
   -

Format completed successfully.

33285996544 kilobytes total disk space.
r...@sabayonx86-64: 05:41 PM :/data# ls -lsah /data/sparse_jfs
4.9G -rw-r--r-- 1 root root 31T 2010-04-20 17:41 /data/sparse_jfs

r...@sabayonx86-64: 05:42 PM :/mnt/sparse# df -h /mnt/sparse/
Filesystem            Size  Used Avail Use% Mounted on
/data/sparse_jfs       31T  3.9G   31T   1% /mnt/sparse
r...@sabayonx86-64: 05:42 PM :/mnt/sparse# cp -avf /opt/ ./ | wc -l
17426
r...@sabayonx86-64: 05:43 PM :/mnt/sparse# df -h /mnt/sparse/
Filesystem            Size  Used Avail Use% Mounted on
/data/sparse_jfs       31T  5.5G   31T   1% /mnt/sparse
r...@sabayonx86-64: 05:43 PM :/mnt/sparse# ls -lsah /data/sparse_jfs
6.0G -rw-r--r-- 1 root root 31T 2010-04-20 17:41 /data/sparse_jfs
r...@sabayonx86-64: 05:43 PM :/mnt/sparse#

Are there any plans to fix this? I've got a feeling that it's something
simple, since only the log seems to be written during the mkfs.
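Incidentally, the mkfs-reported totals line up exactly with a 32 TiB
boundary: the working format is at or under 32 * 2^30 KB and the failing
ones are over it. A quick arithmetic check (totals copied from the
transcripts above; treating 32 TiB as the breaking point is my guess,
not something confirmed from the JFS code):

```shell
# 32 TiB expressed in 1 KB units: 32 * 2^30
limit_kb=$((32 * 1024 * 1024 * 1024))
# mkfs.jfs totals from the transcripts: 31T (works), 33T (fails), 64T (fails)
for kb in 33285996544 35433480192 68719476736; do
    if [ "$kb" -le "$limit_kb" ]; then
        echo "$kb KB: within 32 TiB"
    else
        echo "$kb KB: over 32 TiB"
    fi
done
```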


------------------------------------------------------------------------------
_______________________________________________
Jfs-discussion mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/jfs-discussion
