Re: OT: Swapfile to RAM relation with 2.4.2X+
Thank you for your reply, Russell!

On 02/07/2003 12:09 PM, Russell Coker wrote:
> On Fri, 7 Feb 2003 02:47, Manuel Krause wrote:
>> In the beginning of 2.4.0+ a relation of swapfile-to-RAM of 2-to-1 was
>> recommended. Due to my several system changes to come in those times I
>
> Such recommendations are only generalisations. Ignore them and look at
> what your system is doing. If your swap space never runs out and you
> don't expect

So far I have followed these thoughts, as by that interpretation of swap
usage I always seemed to have enough swap space. It never ran out. (Except
for buggy applications, e.g. Netscape 6 betas, that sometimes first filled
RAM to the maximum and then the swap... finally stalling the system.)

> your usage patterns to require more (including cron jobs and periods of
> unexpected load) then you have enough. If you run out of swap space then
> you need more, also you should have some swap even if you have a lot of
> memory. There's always data that isn't used much and can be paged out to
> make room for more disk cache.

Rarely seen things happened here this afternoon, when I tried an
"unexpected" load: me editing an image with Gimp, opening a large chart in
OpenOffice, a VMware session doing disk defragmentation, running KDE2 and
some other programs... and, for some hours, Netscape 7.01 with several
plugins, also run via CrossOver in separate plugin servers. In the end ~75%
of the new, huge swapfile was filled. I had reached this _rate_ before, but
never more. Today, closing the applications step by step revealed that
Netscape and its plugins had about 256 MB of swap in use! (Of course I know
that NS6++ have always been "memory hogs".) It was an acoustic and visual
experience, listening to the two disks' activity and watching KSysguard
display the action.

I just want to report back, and now find it quite funny: so far I had had
an at most 75% filled swap, then I repartitioned to double the Linux swap,
and now I have applications that use it up that excessively... and still
end up with an at most 75% filled swap.
;-))

Best regards,
Manuel

> BTW Anything that is worth saying in a .sig can be said in 4 lines.

Yes. It should. But it was not intended as a .sig.
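Russell's "look at what your system is doing" can be reduced to a line or
two of shell. A minimal sketch (the `swap_pct` helper name is mine, not a
standard tool; SwapTotal/SwapFree are the fields a 2.4 kernel's
/proc/meminfo exposes):

```shell
#!/bin/sh
# Percentage of swap in use, given SwapTotal and SwapFree in kB.
# swap_pct is a hypothetical helper, not a standard utility.
swap_pct() {
    total=$1
    free=$2
    echo $(( (total - free) * 100 / total ))
}

# On a live system, feed it the values from /proc/meminfo:
#   swap_pct "$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)" \
#            "$(awk '/^SwapFree:/  {print $2}' /proc/meminfo)"

# A 1 GiB swap with 256 MiB free is 75% used -- the figure from the post.
swap_pct 1048576 262144
```

Watching this number under peak load (cron runs, application mixes like the
one above) is the practical way to size swap, per Russell's advice.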
reiserfsck --blame-it-on-the-hardware-yeah-yeah
Hi there,

I have done as requested and used your latest pre-release of reiserfsck to
check my filesystem. However, it reports what looks like a spurious error:

(none):~# reiserfsck --no-journal-available /dev/hda1
<-reiserfsck, 2002->
reiserfsprogs 3.6.5-pre1

*************************************************************
** If you are using the latest reiserfsprogs and it fails  **
** please email bug reports to [EMAIL PROTECTED],          **
** providing as much information as possible -- your       **
** hardware, kernel, patches, settings, all reiserfsck     **
** messages (including version), the reiserfsck logfile,   **
** check the syslog file for any related information.      **
** If you would like advice on using this program, support **
** is available for $25 at www.namesys.com/support.html.   **
*************************************************************

Will read-only check consistency of the filesystem on /dev/hda1
Will put log info to 'stdout'
Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
Filesystem with standard journal found, --no-journal-available is ignored
### reiserfsck --check started at Sun Feb 9 14:43:07 2003 ###
The problem has occurred looks like a hardware problem. Check your hard
drive for badblocks.
bread: Cannot read the block (524111).
Aborted

(none):~# dd if=/dev/hda of=/tmp/foo skip=524100 count=100
100+0 records in
100+0 records out
(none):~# od -x /tmp/foo
0000000 6974 6e6f 7720 6c69 206c 6562 6920 636e
0000020 756c 6564 2064 6e69 7420 6568 6e20 7865
[... lots of very valid looking data snipped ...]
(none):~# time badblocks -c 2048 -n -s -v /dev/hda1
Initializing random test data
Checking for bad blocks in non-destructive read-write mode
From block 0 to 261
Checking for bad blocks (non-destructive read-write test): 261/ 261
Pass completed, 0 bad blocks found.

real    15m44.304s
user    0m6.420s
sys     0m33.250s
(none):~#

I used `-c 2048' as the disk's buffer is only 128KiB, to avoid having
badblocks merely test the integrity of the disk cache :-).

Here's what's happening through the eyes of `strace':
[...]
read(0, "Yes\n", 4096) = 4
open("/dev/hda1", O_RDONLY|O_LARGEFILE) = 3
brk(0x808d000) = 0x808d000
brk(0x808f000) = 0x808f000
brk(0x8091000) = 0x8091000
brk(0x8093000) = 0x8093000
brk(0x8095000) = 0x8095000
brk(0x8097000) = 0x8097000
_llseek(3, 8192, [8192], SEEK_SET) = 0
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4096
_llseek(3, 65536, [65536], SEEK_SET) = 0
read(3, "P\377\7\0005\356\0\0\35\223\0\0\22\0\0\0\0\0\0\0\0 \0\0"..., 4096) = 4096
open("/dev/hda1", O_RDONLY|O_LARGEFILE) = 4
_llseek(4, 33628160, [33628160], SEEK_SET) = 0
read(4, "\357M\3\0\324\27\0\0e\0\0\0\22\0\0\0\0\0\0\0\0 \0\0\0\4"..., 4096) = 4096
fstat64(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(3, 1), ...}) = 0
open("/dev/null", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = -1 ENOTDIR (Not a directory)
open("/dev/", O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY) = 5
fstat64(5, {st_mode=S_IFDIR|0755, st_size=24576, ...}) = 0
fcntl64(5, F_SETFD, FD_CLOEXEC) = 0
getdents64(0x5, 0x8095128, 0x1000, 0) = 4088
stat64("/dev/kmem", {st_mode=S_IFCHR|0640, st_rdev=makedev(1, 2), ...}) = 0
stat64("/dev/mem", {st_mode=S_IFCHR|0640, st_rdev=makedev(1, 1), ...}) = 0
stat64("/dev/core", 0xbcec) = -1 ENOENT (No such file or directory)
close(5) = 0
time([1044755218]) = 1044755218
open("/etc/localtime", O_RDONLY) = 5
fstat64(5, {st_mode=S_IFREG|0644, st_size=870, ...}) = 0
old_mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40015000
read(5, "TZif\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\5\0\0\0\5\0"..., 4096) = 870
close(5) = 0
munmap(0x40015000, 4096) = 0
write(2, "###\nreiserfsck --check s"..., 79) = 79
brk(0x80a8000) = 0x80a8000
_llseek(3, 2146758656, 0xbc84, SEEK_SET) = -1 EINVAL (Invalid argument)
write(2, "\nThe problem has occurred looks "..., 94) = 94
write(2, "\nbread: Cannot read the block (5"..., 41) = 41
rt_sigprocmask(SIG_UNBLOCK, [ABRT], NULL, 8) = 0
getpid() = 71
kill(71, SIGABRT) = 0
--- SIGABRT (Aborted) ---
+++ killed by SIGABRT +++

The parameters to that _llseek() call look quite off -- the filesystem is
less than 2GiB!

It would be quite a loss of time for me to have to rebuild this filesystem
from scratch. It's my primary workstation's root filesystem (and I had only
just purchased a backup device; I was in the process of backing up my data
when this happened, Murphy's Law proving itself). I'd really appreciate it
if you could help me zap that data j
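Two bits of arithmetic are worth noting about the post above (a
back-of-envelope check, not a diagnosis). First, the `-c 2048` choice:
badblocks uses 1024-byte test blocks by default, so each pass moves 2 MiB,
sixteen times the 128 KiB drive cache. Second, the failing offset: block
524111 times the 4096-byte blocksize is exactly the offset in the
`_llseek()` call, and it still fits below the signed 32-bit limit, so this
does not look like a plain 2 GiB overflow:

```shell
# badblocks -c 2048 with the default 1024-byte test block:
chunk=$((2048 * 1024))            # bytes per badblocks pass
cache=$((128 * 1024))             # the drive's on-board cache
echo $((chunk / cache))           # 16: each pass is 16x the cache

# The failing bread/_llseek offset:
block=524111                      # the block reiserfsck could not read
bs=4096                           # blocksize from the superblock
offset=$((block * bs))
echo "$offset"                    # 2146758656, matching the strace output
echo $((2147483647 - offset))     # 724991: still below 2^31 - 1
```

So the seek target is self-consistent with the reported block number;
whether /dev/hda1 is actually large enough to contain it is the open
question.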
Re: reiserfsck --no-journal-available --do-as-i-fucking-say-bitch
[EMAIL PROTECTED] wrote:
> Hi there,
>
> I have a reiserfs filesystem that has a corrupted journal; whenever I try
> to mount it, fsck it (including with the --no-journal-available option),
> etc., reiserfs tries to seek beyond the end of the block device. Is there
> an easy way to blank the journal so that I can run reiserfsck on it?
>
> (none):/mnt/backups# reiserfsck /dev/hda1
> <-reiserfsck, 2002->
> reiserfsprogs 3.x.1b
> Will read-only check consistency of the filesystem on /dev/hda1
> Will put log info to 'stdout'
> Do you want to run this program?[N/Yes] (note need to type Yes):Yes
> ### reiserfsck --check started at Fri Feb 7 10:43:10 2003 ###
> Replaying journal..
> trans replayed: mountid 101, transid 216560, desc 6118, len 72, commit 6191, next trans offset 6174
> bwrite: lseek to position 2048503808 (block=500123, dev=4): Invalid argument
> (none):/mnt/backups# reiserfsck --no-journal-available /dev/hda1
> <-reiserfsck, 2002->
> reiserfsprogs 3.x.1b
> Will read-only check consistency of the filesystem on /dev/hda1
> Will put log info to 'stdout'
> Do you want to run this program?[N/Yes] (note need to type Yes):Yes
> Filesystem with standard journal found, --no-journal-availabel is ignored
> ### reiserfsck --check started at Fri Feb 7 10:43:29 2003 ###
> Replaying journal..
> trans replayed: mountid 101, transid 216560, desc 6118, len 72, commit 6191, next trans offset 6174
> bwrite: lseek to position 2048503808 (block=500123, dev=4): Invalid argument
> (none):/mnt/backups#
>
> Here is what I get on the console when I try to mount:
>
> mount: wrong fs type, bad option, bad superblock on /dev/hda1,
>        or too many mounted file systems
>
> Replay Failure, unable to mount
> reiserfs_read_super: unable to initialize journal space
> reiserfs: checking transaction log (device 03:01) ...
> attempt to access beyond end of device
> 03:01: rw=1, want=2000496, limit=261
> attempt to access beyond end of device
> 03:01: rw=1, want=2000500, limit=261
> journal-1226: REPLAY FAILURE, fsck required!
> buffer write failed
> Replay Failure, unable to mount
> reiserfs_read_super: unable to initialize journal space
>
> Here's the output of debugreiserfs -j (I think - the journal header
> option):
>
> Filesystem state: consistency is not checked after last mounting
> Reiserfs super block in block 16 on 0x301 of format 3.6 with standard journal
> Count of blocks on the device: 524112
> Number of bitmaps: 16
> Blocksize: 4096
> Free blocks (count of blocks - used [journal, bitmaps, data, reserved] blocks): 60981
> Root block: 37661
> Filesystem is NOT cleanly umounted
> Tree height: 5
> Hash function used to sort names: "r5"
> Objectid map size 2, max 972
> Journal parameters:
>         Device [0x0]
>         Magic [0x2d5b5922]
>         Size 8193 blocks (including 1 for journal header) (first block 18)
>         Max transaction length 1024 blocks
>         Max batch size 900 blocks
>         Max commit age 30
> Blocks reserved by journal: 0
> Fs state field: 0x0
> sb_version: 2
> inode generation number: 214154
> UUID: cbf04f8b-8d1c-4f07-813e-e449c393c294
> LABEL:
> Set flags in SB: ATTRIBUTES CLEAN
> Journal header (block #8210 of /dev/hda1):
>         j_last_flush_trans_id 216559
>         j_first_unflushed_offset 6100
>         j_mount_id 101
>         Device [0x0]
>         Magic [0x2d5b5922]
>         Size 8193 blocks (including 1 for journal header) (first block 18)
>         Max transaction length 1024 blocks
>         Max batch size 900 blocks
>         Max commit age 30
>
> Is there an easy override for this? Why is the option there, if it is
> ignored in the standard case?
>
> Cheers, Sam.

Please download the recent version of reiserfsprogs at
ftp://ftp.namesys.com/pub/reiserfsprogs/pre/reiserfsprogs-3.6.5-pre1.tar.gz
and try one more time. Let us know what it says.
reiserfsck --no-journal-available --do-as-i-fucking-say-bitch
Hi there,

I have a reiserfs filesystem that has a corrupted journal; whenever I try
to mount it, fsck it (including with the --no-journal-available option),
etc., reiserfs tries to seek beyond the end of the block device. Is there
an easy way to blank the journal so that I can run reiserfsck on it?

(none):/mnt/backups# reiserfsck /dev/hda1
<-reiserfsck, 2002->
reiserfsprogs 3.x.1b
Will read-only check consistency of the filesystem on /dev/hda1
Will put log info to 'stdout'
Do you want to run this program?[N/Yes] (note need to type Yes):Yes
### reiserfsck --check started at Fri Feb 7 10:43:10 2003 ###
Replaying journal..
trans replayed: mountid 101, transid 216560, desc 6118, len 72, commit 6191, next trans offset 6174
bwrite: lseek to position 2048503808 (block=500123, dev=4): Invalid argument
(none):/mnt/backups# reiserfsck --no-journal-available /dev/hda1
<-reiserfsck, 2002->
reiserfsprogs 3.x.1b
Will read-only check consistency of the filesystem on /dev/hda1
Will put log info to 'stdout'
Do you want to run this program?[N/Yes] (note need to type Yes):Yes
Filesystem with standard journal found, --no-journal-availabel is ignored
### reiserfsck --check started at Fri Feb 7 10:43:29 2003 ###
Replaying journal..
trans replayed: mountid 101, transid 216560, desc 6118, len 72, commit 6191, next trans offset 6174
bwrite: lseek to position 2048503808 (block=500123, dev=4): Invalid argument
(none):/mnt/backups#

Here is what I get on the console when I try to mount:

mount: wrong fs type, bad option, bad superblock on /dev/hda1,
       or too many mounted file systems

Replay Failure, unable to mount
reiserfs_read_super: unable to initialize journal space
reiserfs: checking transaction log (device 03:01) ...
attempt to access beyond end of device
03:01: rw=1, want=2000496, limit=261
attempt to access beyond end of device
03:01: rw=1, want=2000500, limit=261
journal-1226: REPLAY FAILURE, fsck required!
buffer write failed
Replay Failure, unable to mount
reiserfs_read_super: unable to initialize journal space

Here's the output of debugreiserfs -j (I think - the journal header
option):

Filesystem state: consistency is not checked after last mounting
Reiserfs super block in block 16 on 0x301 of format 3.6 with standard journal
Count of blocks on the device: 524112
Number of bitmaps: 16
Blocksize: 4096
Free blocks (count of blocks - used [journal, bitmaps, data, reserved] blocks): 60981
Root block: 37661
Filesystem is NOT cleanly umounted
Tree height: 5
Hash function used to sort names: "r5"
Objectid map size 2, max 972
Journal parameters:
        Device [0x0]
        Magic [0x2d5b5922]
        Size 8193 blocks (including 1 for journal header) (first block 18)
        Max transaction length 1024 blocks
        Max batch size 900 blocks
        Max commit age 30
Blocks reserved by journal: 0
Fs state field: 0x0
sb_version: 2
inode generation number: 214154
UUID: cbf04f8b-8d1c-4f07-813e-e449c393c294
LABEL:
Set flags in SB: ATTRIBUTES CLEAN
Journal header (block #8210 of /dev/hda1):
        j_last_flush_trans_id 216559
        j_first_unflushed_offset 6100
        j_mount_id 101
        Device [0x0]
        Magic [0x2d5b5922]
        Size 8193 blocks (including 1 for journal header) (first block 18)
        Max transaction length 1024 blocks
        Max batch size 900 blocks
        Max commit age 30

Is there an easy override for this? Why is the option there, if it is
ignored in the standard case?

Cheers, Sam.
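For what it's worth, the two failure messages are mutually consistent: the
bwrite offset is exactly block 500123 times the 4 KiB blocksize, and the
kernel's `want` values, which I take to be in 1 KiB units on 2.4 (an
assumption about the 2.4 block layer, worth double-checking), point at the
same region of the disk, nominally inside the 524112-block filesystem:

```shell
bs=4096                            # filesystem blocksize per debugreiserfs
echo $((500123 * bs))              # 2048503808: the bwrite lseek position
echo $((2000496 * 1024 / bs))      # 500124: kernel 'want' as a 4 KiB block
blocks=524112                      # block count per debugreiserfs
echo $((500124 < blocks))          # 1: nominally inside the filesystem
```

That makes the "attempt to access beyond end of device" messages the
interesting part: the filesystem's idea of its own size and the kernel's
idea of the device's size apparently disagree.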
Re: OT: Swapfile to RAM relation with 2.4.2X+
On Fri, 7 Feb 2003 02:47, Manuel Krause wrote:
> In the beginning of 2.4.0+ a relation of swapfile-to-RAM of 2-to-1 was
> recommended. Due to my several system changes to come in those times I

Such recommendations are only generalisations. Ignore them and look at
what your system is doing. If your swap space never runs out and you don't
expect your usage patterns to require more (including cron jobs and periods
of unexpected load) then you have enough. If you run out of swap space then
you need more. Also, you should have some swap even if you have a lot of
memory: there's always data that isn't used much and can be paged out to
make room for more disk cache.

BTW Anything that is worth saying in a .sig can be said in 4 lines.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page