Re: [Jfs-discussion] Partition corruption, help!!

2004-12-06 Thread Dave Kleikamp
On Mon, 2004-12-06 at 07:26 -0800, Sang Nguyen Van wrote:

 ... # jfs_fsck /dev/hdb1
 jfs_fsck version 1.1.7, 22-Jul-2004
 processing started: 12/6/2004 16.17.41
 Using default parameter: -p
 The current device is:  /dev/hdb1
 Block size in bytes:  4096
 Filesystem size in blocks:  40019915
 **Phase 0 - Replay Journal Log
 logredo failed (rc=-268).  fsck continuing.
 **Phase 1 - Check Blocks, Files/Directories, and Directory Entries
 Filesystem is clean.

 I could not get to phase 2 (or beyond) of jfs_fsck after many
 tries, and I could not mount the 2nd hard disk even after
 trying various options of the mount command (-o
 errors=continue, ...); I still get the message:
 mount: wrong fs type, bad option, bad superblock on
 /dev/hdb1,
 or too many mounted file systems

Okay, I've seen this before and this patch to jfsutils should get fsck
past phase 1.  Could you please give it a try?

Thanks,
Shaggy
-- 
David Kleikamp
IBM Linux Technology Center
Index: jfsutils/fsck/fsckmeta.c
===================================================================
RCS file: /usr/cvs/jfs/jfsutils/fsck/fsckmeta.c,v
retrieving revision 1.18
retrieving revision 1.19
diff -u -p -r1.18 -r1.19
--- jfsutils/fsck/fsckmeta.c	17 Dec 2003 20:28:47 -	1.18
+++ jfsutils/fsck/fsckmeta.c	24 Sep 2004 14:43:53 -	1.19
@@ -3322,6 +3322,7 @@ int verify_fs_super_ext(struct dinode *i
 			if (inorecptr->ignore_alloc_blks
 			|| (vfse_rc != FSCK_OK)) {
 				inode_invalid = -1;
+				vfse_rc = FSCK_OK;
 			}
 		}
 	}
Index: jfsutils/fsck/xchkdsk.c
===================================================================
RCS file: /usr/cvs/jfs/jfsutils/fsck/xchkdsk.c,v
retrieving revision 1.48
retrieving revision 1.49
diff -u -p -r1.48 -r1.49
--- jfsutils/fsck/xchkdsk.c	22 Jul 2004 14:29:37 -	1.48
+++ jfsutils/fsck/xchkdsk.c	24 Sep 2004 14:43:53 -	1.49
@@ -2103,10 +2103,6 @@ int phase1_processing()
 	if (p1_rc != FSCK_OK) {
 		agg_recptr->fsck_is_done = 1;
 		exit_value = FSCK_OP_ERROR;
-		if (p1_rc < 0) {
-			/* this isn't a fsck failure */
-			p1_rc = 0;
-		}
 	}
 	return (p1_rc);
 }
Index: jfsutils/libfs/logredo.c
===================================================================
RCS file: /usr/cvs/jfs/jfsutils/libfs/logredo.c,v
retrieving revision 1.26
retrieving revision 1.27
diff -u -p -r1.26 -r1.27
--- jfsutils/libfs/logredo.c	29 Jun 2004 19:36:53 -	1.26
+++ jfsutils/libfs/logredo.c	24 Sep 2004 13:33:58 -	1.27
@@ -791,14 +791,9 @@ int jfs_logredo(caddr_t pathname, int32_
 		if (vopen[k].state != VOPEN_OPEN)
 			continue;
 
-		/* don't update the maps if the aggregate/lv is
-		 * FM_DIRTY since fsck will rebuild maps anyway
-		 */
-		if (!vopen[k].is_fsdirty) {
-			if ((rc = updateMaps(k)) != 0) {
-				fsck_send_msg(lrdo_ERRORCANTUPDMAPS);
-				goto error_out;
-			}
+		if ((rc = updateMaps(k)) != 0) {
+			fsck_send_msg(lrdo_ERRORCANTUPDMAPS);
+			goto error_out;
 		}
 
 		/* Make sure all changes are committed to disk before we


Re: [Jfs-discussion] performance probs - 2.4.28, jsf117, raid5

2004-12-06 Thread Sonny Rao
On Sun, Dec 05, 2004 at 08:41:35PM +0100, Per Jessen wrote:
 On Sun, 05 Dec 2004 18:40:58 +0100, Per Jessen wrote:
 
 I do a find in a directory that contains 500-600,000 files - which just
 about makes the box grind to a halt.  The machine is not otherwise heavily
 loaded, but it does write 2 new files/sec to the same filesystem.  Or tries
 to.
 
 I need to add - at the same time kswapd is very, very busy, despite only
 about 1GB of the 2GB of main memory being used/active.
 
 
 /Per

Yes, this is a consequence of the way memory is partitioned on IA32
machines (which I'm assuming you're using).  If you look at the amount
of memory being used by the kernel slab cache, I'd bet it's using much
of that 1GB for kernel data structures (inodes, dentries, etc.), and
whenever the kernel needs to allocate more memory it has to evict
some of those structures, which is a very expensive process.

Look at /proc/slabinfo and add up the total number of slabs.
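
Something along these lines will do the adding for you (a rough sketch of
mine, not part of jfsutils; it assumes the 2.4-style "slabinfo - version: 1.1"
layout, where each data line is the cache name followed by active objects,
total objects, object size, active slabs, total slabs and pages per slab):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	FILE *f = fopen("/proc/slabinfo", "r");
	char line[512], name[64];
	unsigned long active_objs, num_objs, objsize;
	unsigned long active_slabs, num_slabs, pages_per_slab;
	unsigned long total_pages = 0;
	long page_size = sysconf(_SC_PAGESIZE);

	if (!f) {
		perror("/proc/slabinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* data lines are a name plus six counters; the version
		 * header doesn't match, so it is skipped */
		if (sscanf(line, "%63s %lu %lu %lu %lu %lu %lu",
			   name, &active_objs, &num_objs, &objsize,
			   &active_slabs, &num_slabs, &pages_per_slab) != 7)
			continue;
		total_pages += num_slabs * pages_per_slab;
	}
	fclose(f);
	printf("slab caches: %lu pages (~%lu MB)\n",
	       total_pages, total_pages * page_size / (1024 * 1024));
	return 0;
}

If that total accounts for much of the ~1GB that shows up as used, then the
slab cache is what kswapd is fighting against.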

Sonny


Re: [Jfs-discussion] Filesystem performance with Linux 2.4 vs. 2.6

2004-12-06 Thread Sonny Rao
On Sun, Dec 05, 2004 at 11:40:21AM +0100, Michael Müller wrote:
 Hi all,
 
 I read an article in the German 'Linux Magazin' 11/04 about a
 comparison of the different filesystems. They tested Ext2, Ext3, JFS, XFS,
 ReiserFS, Reiser4 and Veritas. Detailed results can be found on
 http://www.linux-magazin.de/Service/Listings/2004/11/fs_bench.
 
 My question is: why is the read performance of every FS (available
 for both Linux versions) under 2.6 so bad compared to 2.4? 2.6 loses
 nearly 50%!
 
 The write performance is, depending on the file size, sometimes slightly
 higher and sometimes slightly lower.
 
 Can you tell me in short words what changed from 2.4 to 2.6 that
 explains the difference?
 
 I thought that every major kernel release makes things better. So what
 is now so much better that it is worth losing 50% read performance?
 

Well, it's fairly clear they messed something up here.  

My guess is that they didn't set the readahead high enough for
whatever type of device they were testing on 2.6 (it looks like a RAID
array, since on 2.4 it gets about 100MB/sec, which I don't think very
many single disks can do).  The readahead implementation in 2.6 is
certainly different from the one in 2.4.  IO performance on 2.6 is
much, much better across the board.
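
(For what it's worth, readahead can be checked and raised per block device
with the BLKRAGET/BLKRASET ioctls - the same knobs "blockdev --getra" and
"blockdev --setra" use, and on 2.6 the same setting is also exposed as
/sys/block/<dev>/queue/read_ahead_kb.  The sketch below is mine, not from
the article; the device name and the 8192-sector value are only
illustrative.)

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
	const char *dev = (argc > 1) ? argv[1] : "/dev/md0";	/* assumed device */
	long ra = 0;
	int fd = open(dev, O_RDONLY);

	if (fd < 0) {
		perror(dev);
		return 1;
	}
	/* current readahead, in 512-byte sectors */
	if (ioctl(fd, BLKRAGET, &ra) == 0)
		printf("%s: readahead = %ld sectors\n", dev, ra);

	/* raise it to 8192 sectors (4MB); needs root, and the value should
	 * really be matched to the RAID stripe geometry */
	if (ioctl(fd, BLKRASET, 8192UL) < 0)
		perror("BLKRASET");

	close(fd);
	return 0;
}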

My German isn't great, so I'm not going to try and read the article,
but I'd also like to know what kind of array they are using for this
test.  Before we can make any conclusions, we should know what the
hardware is capable of doing.

Sonny

