Re: [reiserfs-list] Reserved Blocks
[EMAIL PROTECTED] wrote:

>> many Linux distributions out there). As a sysadmin, you've just taken
>> over one of those systems. How, exactly, do you go about splitting the
>> filesystems up?
>
> This of course depends on how much unallocated disk space you have
> handy. The method of attack will differ based on whether you have a free
> partition or not. You may be able to mount a new filesystem and copy it,
> or you may be forced to dump to auxiliary storage and reload.

I know that, and you've just agreed that it's not straightforward, and not possible if you don't have any unpartitioned space available to you.

>> to reason. If reiserfs lets me shrink the reserved blocks percentage to
>> 5% or 3% rather than 10%, fantastic.
>
> If in this day and age you're quibbling over the difference between 10%
> and 5%, I wonder if you've already tuned the 'NBPI' value to match [...]

I'm just pointing out that the limit is still there, it's just a little smaller. 'Fantastic' was perhaps too strong a word.

> Maxtor just announced 320G drives. If you need that last 32G *that*
> badly, it's time to buy another one - and then think about the backup
> issues. ;)
>           ^^

Heh. Definitely. Large disks are dangerous :-). I prefer to buy more smaller ones and mirror 'em.

>> If you've not administered enough systems to appreciate why you'd want
>> this, then fine.
>
> Hmm.. let's just say that I'd already been doing Unix for a while when I
> had Sun change a purchase order out from under me from Sun-2's to
> Sun-3's instead.

Oh, you got that too? I was a bit pissed off when they insisted we replaced our 490s with SPARC 5s and fucked up our nice tidy X.25 cabling.

> I remember decommissioning the last shoebox QIC-150. Oh, and those nice
> Sun vacuum-driven 9-tracks made loading reel-to-reel a pleasure.

OK, I'll admit you may have a few years on me, but it's not the length of time, it's the love gone into the networks you've looked after, how much you learned from them, and how much love you were able to return to the networks.
Which is unquantifiable - so let's leave it at that.

> No, splitting up filesystems isn't perfect - especially if you don't
> have an underlying system like LVM so you can grow partitions as needed.
> However, I still say that it's the best of a number of non-perfect
> solutions.

Well, I say implement all that don't introduce undue complexity and see which one yields the best results after 5 to 10 years. The skilled system administrator will appreciate them all. So there :-P

--
Sam Vilain, [EMAIL PROTECTED]
WWW: http://sam.vilain.net/
GPG: http://sam.vilain.net/sam.asc
7D74 2A09 B2D3 C30F F78E 278A A425 30A9 05B5 2F13

Real Programmers don't write in RPG. RPG is for gum-chewing dimwits who maintain ancient payroll programs.
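To make the "mount a new filesystem and copy it" route discussed above concrete, here is a minimal sketch of the copy step, using temporary directories in place of the real mount points; the device names and paths in the comments are hypothetical and not from the original thread:

```shell
# A sketch of splitting a tree (say /var) onto a spare partition.
# Temp dirs stand in for the real mount points so this can be run safely.
OLD=$(mktemp -d)    # stands in for the tree being split out, e.g. /var
NEW=$(mktemp -d)    # stands in for the new fs, e.g. after a hypothetical
                    #   mkreiserfs /dev/hdb1 && mount /dev/hdb1 "$NEW"

mkdir -p "$OLD/log" && echo "demo" > "$OLD/log/messages"

# A tar pipe preserves permissions, ownership, and timestamps
# while copying the whole tree across filesystems.
(cd "$OLD" && tar cf - .) | (cd "$NEW" && tar xpf -)

# In the real procedure you would then move the old tree aside,
# mount the new partition at its path, and add an /etc/fstab entry.
ls -l "$NEW/log"
```

If no spare partition exists, the same tar pipe can write to a tape or remote host instead, which is the "dump to auxiliary storage and reload" case.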
Re: [reiserfs-list] Strange Syslog messages
Originally to: Guilherme Salgado

Guilherme,

> Try running badblocks -o badblocks.hdxx /dev/hdxx (if u have ide
> harddisks). i received some messages like yours and discovered that my
> ide hd has some bad blocks.

Running it at the moment. I hope not, as they are fairly new; both HDs seem to be displaying similar problems. It is an old P166, so maybe the motherboard is playing up.

Sean

... COFFEE.CUP empty - Operator shelled out to KITCHEN

--
Message from TCOB1, Ireland's best BBS, +353-95-43868, sometimes via tcob1.staticky.com - Gateway Information: This message originated from a Fidonet System (http://www.fidonet.org) and was gated at TCOB1 (http://www.tcob1.net). Please do not respond direct to this message but via the list.
Re: [reiserfs-list] Strange Syslog messages
Originally to: Oleg Drokin

Oleg,

>> I am using a stock 2.4.19 kernel, on a system with reiserfs only. I
>> have started getting a lot of:
>>
>> vs-13070: reiserfs_read_inode2: i/o failure occurred trying to find
>>   stat data of [21003 21048 0x0 SD]
>> is_leaf: wrong item type for item *3.5*[21003 21047 0x1 IND],
>>   item_len 8, item_location 2214, free_space(entry_count) 0
>> vs-5150: search_by_key: invalid format found in block 16616. Fsck?
>> vs-13070: reiserfs_read_inode2: i/o failure occurred trying to find
>>   stat data of [21003 21048 0x0 SD]
>
> Do you see any other messages from the kernel at this same time?

Nothing related to the kernel, usually cron etc.

>> if I boot into a rescue disk and run reiserfsck, I get no errors at all
>> to be fixed, running check only
>
> What reiserfsprogs version do you have?

The latest version, I made sure I was updated.

>> Is this serious, a problem with the HD or what??
>
> Hard to tell right now.

I am wondering if it is my motherboard, as I have just run badblocks on both HDs with nothing being reported.

Sean
Re: [reiserfs-list] reiserfsprogs-3.6.4-pre2
On 09/17/2002 03:23 PM, Vitaly Fertman wrote:
> Hi all,
>
> A new reiserfsprogs pre release is available at
> ftp.namesys.com/pub/reiserfsprogs/pre/reiserfsprogs-3.6.4-pre2.tar.gz
>
> Changes went into 3.6.4-pre2:
> - fix-fixable sets correct item formats in item headers if needed.
> - rebuild got some extra checks for invalid tails on pass0.
> - fsck check does not complain on wrong file sizes if a safelink exists.
> - check dma mode/speed of the hard drive and warn the user if it
>   decreased -- it could happen due to some hardware problem.
>
> Bugs:
> - during conversion of tails to indirect items on pass2 and back
>   conversion on the semantic pass.
> - improper cleaning of flags in item headers.
> - during relocation of shared objects.
> - new block allocation on pass2 (very rare case).

Hi Vitaly!

Does this mean these bugs are
[a] only in 3.6.4-pre2, so: newly created
[b] in 3.6.4-pre2 and the previous -pres
[c] or even in the 3.6.3 release, too???

No, I'm not in doubt of reiserfsck's development efforts in general, I just want to know for sure.

Thanks, Manuel

> Changes went into 3.6.4-pre1:
> - Correction of nlinks on fix-fixable was disabled, because fix-fixable
>   zeroes nlinks on the first pass and wants to increment them on the
>   semantic pass. But the semantic pass is skipped if there are fatal
>   corruptions.
> - Exit codes were fixed.
> - Warning/error messages were changed to a more user-friendly form.
>
> Changes which got into 3.6.3-pre1, but were not included into the release:
> - Great speedups for pass2 of reiserfsck.
>
> Thanks, Vitaly Fertman
Re: [reiserfs-list] reiserfsprogs-3.6.4-pre2
On Wednesday 18 September 2002 03:55, Manuel Krause wrote:
> On 09/17/2002 03:23 PM, Vitaly Fertman wrote:
> > Hi all,
> >
> > A new reiserfsprogs pre release is available at
> > ftp.namesys.com/pub/reiserfsprogs/pre/reiserfsprogs-3.6.4-pre2.tar.gz
> >
> > Changes went into 3.6.4-pre2: [...]
> >
> > Bugs: [...]
>
> Hi Vitaly!
>
> Does this mean these bugs are
> [a] only in 3.6.4-pre2, so: newly created
> [b] in 3.6.4-pre2 and the previous -pres
> [c] or even in the 3.6.3 release, too???

Manuel, go to bed, or read it properly... ;-)

It should read (I think):
* New stuff --- Changes...
* Bugs fixed with -pre2...

Good night!
Dieter

--
Dieter Nützel
Graduate Student, Computer Science
University of Hamburg, Department of Computer Science
home: Dieter.Nuetzel at hamburg.de (replace at with @)
Re: [reiserfs-list] Copy time comparison 2.4.20-pre6 - 2.4.19+data-logging (was:Compatibility of current 2.4.19.pending ...)
Hello!

On Tue, Sep 17, 2002 at 07:39:39PM +0200, Manuel Krause wrote:
>> Copy the same amount of data from RAM/nowhere to the FS. E.g. make a
>> file with file names and sizes and write a script that writes this
>> amount of data from /dev/zero with these same names and needed sizes
>> into the FS. (Or just use ramfs as your source if you have not much
>> data and huge RAM.)
>
> To be honest, this already exceeds my linux knowledge...

I meant something to this extent: you run a script that walks over your filesystem and creates a shell script that first creates the whole directory structure of the source dir, and then for each file emits the command needed to recreate a file of the same size. E.g., for this directory contents:

green@angband:~/z$ ls -lR
.:
total 1
drwxr-xr-x  2 green  green    114 Sep 18 09:08 t

./t:
total 148
-rw-rw-r--  1 green  green  69570 Aug 10 16:34 inode.c
-rw-rw-r--  1 green  green  66478 Aug 10 16:33 stree.c
-rw-rw-r--  1 green  green  10256 Aug 10 16:32 tail_conversion.c

the result of the work of the script would be:

mkdir t
dd if=/dev/zero of=t/inode.c bs=69570 count=1
dd if=/dev/zero of=t/stree.c bs=66478 count=1
dd if=/dev/zero of=t/tail_conversion.c bs=10256 count=1

And you can run the resulting script in the target dir.

> I was fiddling with some test directories containing 195.8MB I copied to
> and from /dev/shm with swap turned off.
>
> # time cp -a /dev/shm/. /mnt/beta/z.Backup.3/
>
>   kernel 2.4.20-pre7  |  kernel 2.4.20-pre6
>   real  0m9.006s      |  real  0m6.740s
>   user  0m0.190s      |  user  0m0.230s
>   sys   0m5.250s      |  sys   0m4.780s
>
> # rm -r /dev/shm/*
> # time cp -a /mnt/beta/z.Backup.3/. /dev/shm/
>
>   kernel 2.4.20-pre7  |  kernel 2.4.20-pre6
>   real  0m6.349s      |  real  0m6.180s
>   user  0m0.210s      |  user  0m0.220s
>   sys   0m2.450s      |  sys   0m2.510s

This dataset is way too small and entirely fits into your RAM, I presume. So to avoid any distortion of results you'd better have all periodic stuff disabled (though kupdated is still there), and it's better to run it several times.
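As a back-of-the-envelope check on the `time cp` figures in the message, write throughput can be derived directly from the wall-clock times (a sketch; the `mbps` helper is illustrative, not from the original mail):

```shell
# Throughput in MB/s = size / elapsed wall-clock seconds.
# awk does the floating-point arithmetic for us.
mbps() { awk -v mb="$1" -v s="$2" 'BEGIN { printf "%.1f MB/s\n", mb / s }'; }

# 195.8 MB copied from /dev/shm to /mnt/beta in the runs quoted above:
mbps 195.8 9.006   # 2.4.20-pre7: prints "21.7 MB/s"
mbps 195.8 6.740   # 2.4.20-pre6: prints "29.1 MB/s"
```

The same arithmetic applied to the read direction (real 6.349s vs 6.180s) shows the two kernels nearly identical on reads, which matches the point below that the dataset is too small to be conclusive.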
Also, since it fits into RAM, it must be flushed out, so I usually do this using a command like:

time sh -c "cp -a /testfs0/linux-2.4.18 /mnt/ ; umount /mnt"

> # time dd if=/dev/zero bs=1M count=1000 of=/mnt/beta/testfile.zero
>
>   kernel 2.4.20-pre7  |  kernel 2.4.20-pre6
>   real  1m11.390s     |  real  1m42.011s
>   sys   0m11.230s     |  sys   0m5.620s

Hm. While system time is less, as expected, real time increased; that's strange.

> # time dd of=/dev/null bs=1M if=/mnt/beta/testfile.zero
>
>   kernel 2.4.20-pre7  |  kernel 2.4.20-pre6
>   real  1m16.738s     |  real  1m39.094s
>   sys   0m5.460s      |  sys   0m5.930s

And real time is bigger for reads too, so it seems the data layout is different. That's really strange. If you can reproduce this behaviour, I am interested in getting debugreiserfs -d output for each case after you umount this volume (I assume that the /mnt/beta/ filesystem contains nothing but this testfile.zero file). Compare 2.4.20-pre[67] if you see any difference.

Ah, also copy your data from the original disk location to /dev/null and measure the time of that operation, to know how much of the total time is occupied by reads. You can also calculate read and write throughput separately this way.

> And if reads are slower than writes - ... I'm definitely not sure if my
> lines above are something you meant.

Yes, kind of, though you have omitted the timings of copying the original data to /dev/shm/, which would give us the read speed from the original media.

In fact, instead of turning off swap you can do

mount none /mnt/ramfs -t ramfs

(if you have ramfs compiled in, of course) and /mnt/ramfs is now a kind of RAM filesystem with very low overhead. It also cannot be swapped out, so if you fill all of your RAM, your box will OOM ;)

But the test itself is very small. Probably you need to run something like

time find /source/that/needs/to/be/backed/up -type f -exec cat {} \; > /dev/null

to get read performance, and implement a script like I mentioned in the beginning to measure writes. This way you do not need tons of RAM.

Bye, Oleg
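The generator script Oleg describes above (walk a source tree, emit `mkdir` and `dd` commands that recreate same-named, same-sized files from /dev/zero) can be sketched in a few lines of shell. This is a minimal sketch assuming GNU find's -printf; the script name and the default of "." are illustrative:

```shell
# gen-recreate.sh: print a script that rebuilds SRC's directory tree
# with same-named, same-sized files filled from /dev/zero.
SRC=${1:-.}   # source directory; defaults to "." for this sketch

# Directories first (paths relative to SRC via %P), then non-empty
# files as dd commands, then empty files (dd cannot write bs=0).
find "$SRC" -mindepth 1 -type d -printf 'mkdir -p "%P"\n'
find "$SRC" -type f ! -size 0 -printf 'dd if=/dev/zero of="%P" bs=%s count=1\n'
find "$SRC" -type f -size 0 -printf 'touch "%P"\n'
```

Usage would be along the lines of `sh gen-recreate.sh ~/z > recreate.sh`, then `cd` into an empty directory on the target filesystem and `time sh recreate.sh` to measure write performance without needing the original data in RAM.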