Re: reiserfsck core dumps

2004-01-12 Thread B. J. Zolp
BTW, this is on a Mandrake 9.0 machine using the stock reiserfsck 3.6.3 
version.



B. J. Zolp wrote:

I have an LVM partition that I cannot seem to mount anymore.  Mount 
tells me the standard "Too many mounted file systems" message.  
So I decided to run reiserfsck on it; when I do that it soon 
crashes with this output:

[EMAIL PROTECTED] root]# reiserfsck /dev/vg0/logical0
reiserfsck, 2002 - reiserfsprogs 3.6.3
Will read-only check consistency of the filesystem on /dev/vg0/logical0
Will put log info to 'stdout'

Do you want to run this program?[N/Yes] (note need to type Yes):Yes
###
reiserfsck --check started at Sun Jan 11 15:19:58 2004
###
bread: Cannot read a block # 119472128.



Aborted (core dumped)



Now, I am pretty sure the LVM is intact; all the drives are added 
correctly and all that is OK.  I ran badblocks on the partition and it 
gave me a list of ~1000 bad blocks.  Is there a way I can use 
this list of blocks to my advantage to recover this partition? I would 
like to get the data from it.

Thanks






4.0 stable ?

2004-01-12 Thread Thomas Graham
when will it be stable, anyway?!


-- 
HK Celtic Orchestra leader and coordinator: Thomas Graham Lau
Phone number: 852-93239670 (24 hours a day, 7 days a week non-stop phone)
Web site: http://sml.dyndns.org
Email: [EMAIL PROTECTED]


Re: 4.0 stable ?

2004-01-12 Thread Nikita Danilov
Thomas Graham writes:
  when will it be stable, anyway?!
  

It depends on your definition of stable. It is stable for ordinary use,
and as far as I know people have been using it for home directories for some
time already. It is still possible to crash it under stress testing if one
is inventive enough and has enough time to waste.

Nikita.


Re: 4.0 stable ?

2004-01-12 Thread Viktors Rotanovs
Nikita Danilov wrote:

Thomas Graham writes:
 when will it be stable, anyway?!
It depends on your definition of stable. It is stable for ordinary use,
and as far as I know people have been using it for home directories for some
time already. It is still possible to crash it under stress testing if one
is inventive enough and has enough time to waste.
 

How many bugs were fixed since the last snapshot?
Isn't it time for a new snapshot for 2.6.1 (or 2.6.1-mm)?
I have been using Reiser4 for everything on my desktop PC for several months, 
without any problems.
(currently 2.6.0-test11, Gentoo Linux).
What's the status of compression support?

Nikita.
 





Re: reiserfsck core dumps

2004-01-12 Thread Vitaly Fertman
On Monday 12 January 2004 01:39, B. J. Zolp wrote:
 I have an LVM partition that I cannot seem to mount anymore.  Mount tells
 me the standard "Too many mounted file systems" message.  So I
 decided to run reiserfsck on it; when I do that it soon crashes
 with this output:

 [EMAIL PROTECTED] root]# reiserfsck /dev/vg0/logical0
 reiserfsck, 2002 - reiserfsprogs 3.6.3

Please update the version of reiserfsprogs.

 Will read-only check consistency
 of the filesystem on /dev/vg0/logical0
 Will put log info to 'stdout'

 Do you want to run this program?[N/Yes] (note need to type Yes):Yes
 ###
 reiserfsck --check started at Sun Jan 11 15:19:58 2004
 ###

 bread: Cannot read a block # 119472128.

Do you see anything related in the syslog? It looks like a bad block
or an access beyond the end of the device.

 Aborted (core dumped)



 Now, I am pretty sure the LVM is intact, all the drives are added
 correctly and all that is ok.  I ran badblocks on the partition and it
 gave me a list of about ~1000 badblocks.  Is there a way I can use this
 list of blocks to my advantage to recover this partition, I would like
 to get data from it.


 Thanks

-- 
Thanks,
Vitaly Fertman




Re: --rebuild-tree, out of disk

2004-01-12 Thread Vitaly Fertman
On Monday 12 January 2004 07:16, David D. Huff Jr. wrote:
 Vitaly Fertman wrote:
 The system was using an abnormal amount of space, going through 3.8 GB
 of space in a week that could not be accounted for. I checked directory
 sizes and file sizes; nothing added up to the space being used. It was
 only a webserver, and disk growth shouldn't have been more than 35 MB a
 month. I ran reiserfsck --check and it said it was OK, but I knew it
 wasn't, so when I ran --rebuild-tree, it ran out of space.
 
  what reiserfsprogs version do you use?

 3.6.11

  did you specify any option to reiserfsck?

 Yes, --rebuild-tree

 I'd allocated 2 GB for swap, so I used cfdisk to reduce it to 1600 MB
 (/dev/hda2) and deleted and re-allocated the reiserfs partition
 (/dev/hda1) to include the full size. When I run resize_reiserfs -f it
 says to run reiserfsck --rebuild-tree. Well, I've done that and received
 the same out-of-disk condition -- that, and I have used every other switch
 I can think of. I also used gpart, but it only recognized /dev/hda2 and
 that there was a 27 GB partition in front of it; then gpart would abort.
 
  so you increased the 25 GB fs up to 27 GB and got out of space again,
  correct?

 No, it was already 27 GB; it should have been 400 MB larger, but since
 resize_reiserfs won't run it still reports the original size. cfdisk
 reports the additional size correctly.

Oops, I wanted to ask about the partition size, not the fs size.
Did you run reiserfsck --rebuild-sb ? 
If you increased the size of the partition without running the resizer,
the fs size is still the same. Since you cannot run the resizer -- the fs
is not in a consistent state -- you should run reiserfsck --rebuild-sb to
fix the fs size and then reiserfsck --rebuild-tree. Before rebuilding 
the tree it would be better to zero the increased part of the 
partition (dd if=/dev/zero ...).
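As a minimal sketch of that sequence -- the device name and the old fs
size in 4k blocks (OLD_FS_BLOCKS) are placeholders, not your real values:

reiserfsck --rebuild-sb /dev/hda1       # fix the superblock's fs size
# zero only the newly added tail: seek= skips the first OLD_FS_BLOCKS
# 4k blocks (the existing fs data) and writes zeros from there to the end
dd if=/dev/zero of=/dev/hda1 bs=4k seek=OLD_FS_BLOCKS
reiserfsck --rebuild-tree /dev/hda1     # then rebuild the tree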

Could you also run 
debugreiserfs -p /dev/xxx | bzip2 -c > xxx.bz2
_before_ running --rebuild-sb, and make it available for download somewhere? 
It is probably still possible to find out whether there was any problem with 
disk usage initially, when reiserfsck reported no problem.

 At this point I'm thinking it is de-allocating bad sectors as fast as
 the drive can write.
 
  What do you mean by 'bad sectors'? Are there bad blocks on
  your drive or what? You should fix hardware problems first; please
  also read http://www.namesys.com/bad-block-handling.html

 Thanks, I only had time to try once today and it didn't report any bad
 blocks. Work continues as time allows.

 For my next step I intend to dd the partition to another disk with a
 larger formatted partition and try to run --rebuild-tree there.
 
  I would advise zeroing the rest of the 'larger partition' beyond the
  source partition size, to avoid any possible problem of mixing your
  valid data with old reiserfs data that existed on the 'larger partition'.

 Good idea, I was concerned about old data showing up. I'll have to read
 up on how to zero the data; I've done that for the MBR in the past but
 not for data.
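A sketch of that zero-then-copy step, with example device names only
(/dev/hda1 as the failing source, /dev/hdc1 as the larger target):

# zero the entire target partition first, so no old data survives
dd if=/dev/zero of=/dev/hdc1 bs=1M
# copy the source over it; conv=noerror,sync continues past read errors
# and pads unreadable blocks with zeros (a smaller bs loses less per error)
dd if=/dev/hda1 of=/dev/hdc1 bs=4k conv=noerror,sync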

 For your edutainment: that bad disk is one of those IBM Deskstars
 mentioned in the Namesys FAQ. I did not know that there was a class action
 suit pending in the matter; this is the 4th (Deskstar) drive to die at less
 than three years old and on short notice (all purchased from Jan 2000
 through June 2001, before their evils were known).

-- 
Thanks,
Vitaly Fertman




Re: 4.0 stable ?

2004-01-12 Thread Redeeman
On Mon, 2004-01-12 at 10:16, Nikita Danilov wrote:
 Thomas Graham writes:
   when will it be stable, anyway?!
   
 
 It depends on your definition of stable. It is stable for ordinary use,
 and as far as I know people have been using it for home directories for some
 time already. It is still possible to crash it under stress testing if one
 is inventive enough and has enough time to waste.
 
my definition of stable is when namesys.com doesn't say "reiser4 final
testing, and will ship soon!" :)
you know, when it's marked stable :)

do you have a rough idea of when? i have a 300gb hd waiting for
partitioning just because i want reiser4 ;)

 Nikita.

-- 
Regards, Redeeman
()  ascii ribbon campaign - against html e-mail 
/\- against microsoft attachments




bk pull problem (reiser4)

2004-01-12 Thread Sergey S. Kostyliov
Hello all,

I cannot pull from the reiser4 tree;
bk pull just waits indefinitely:
[EMAIL PROTECTED] reiser4 $ bk parent
Parent repository is bk://bk.namesys.com/bk/reiser4
[EMAIL PROTECTED] reiser4 $ bk pull
Pull bk://bk.namesys.com/bk/reiser4
  - file://usr/local/src/bk/reiser4
(nothing more)

but bk clone is still possible.

traceroute to thebsh.namesys.com (212.16.7.65), 30 hops max, 40 byte packets
 1  breuss-10.ws.ehouse.ru (192.168.114.10)  1.251 ms  2.773 ms  28.617 ms
 2  hobbit-gw.ehouse.ru (193.111.92.33)  11.445 ms  9.780 ms  9.920 ms
 3  express-ehouse.express.ru (212.24.42.9)  9.884 ms  9.800 ms  9.914 ms
 4  LYNX-M9.ATM6-0.30.M9-R1.msu.net (193.232.127.230)  4.985 ms  12.488 ms  4.978 ms
 5  LYNX-M9.ATM6-0.30.M9-R1.msu.net (193.232.127.230)  6.080 ms  5.403 ms *
 6  thebsh.namesys.com (212.16.7.65)  5.847 ms  5.469 ms  5.147 ms

-- 
   Best regards,
   Sergey S. Kostyliov [EMAIL PROTECTED]
   Public PGP key: http://sysadminday.org.ru/rathamahata.asc



Re: v3 logging speedups for 2.6

2004-01-12 Thread Chris Mason
On Mon, 2004-01-12 at 02:07, Jens Benecke wrote:
 Chris Mason wrote:
 
  Hello everyone,
  
  This is part one of the data logging port to 2.6, it includes all the
  cleanups and journal performance fixes.  Basically, it's everything
  except the data=journal and data=ordered changes.
  ftp.suse.com/pub/people/mason/patches/data-logging/experimental/2.6.0-test11
 
 Hi,
 
 Does it make sense to apply those to 2.6.1-mm2? 
 
Not those at least, since I managed to screw up the diff.  I've got a
2.6.1 directory under experimental now with better patches.

I'm checking now to see if they apply to -mm2.

 Does "except the data=ordered changes" mean that data journalling is _not_
 in there, or that data journalling is there but hasn't been updated to
 what is there for 2.4.x yet?

Correct, but I'm almost there.  Things got off track a lot during the xmas
break.

-chris



Re: 4.0 stable ?

2004-01-12 Thread Viktors Rotanovs
Thomas Graham wrote:

are you fully using reiser4 now (using it as your root partition)? gentoo
does not support reiser4 by default; is it stable enough?
I use it as root, but not as boot. I installed gentoo on reiserfs (v. 
3.6), then compiled a new kernel with reiser4 support in it, rebooted, 
created a reiser4 partition, copied everything there and added an entry for 
it in grub.conf (with the corresponding root= flag). One more reboot and 
things are twice as fast (at least copying /usr/src with kernel sources 
is 2x faster on reiser4 than on reiserfs).
I never had a problem with reiser4, except the test7 kernel, where there were 
slight interactivity problems when copying large sets of files.
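That migration, as a rough sketch -- the partition, mount point and kernel
path below are examples, not necessarily the actual layout; mkfs.reiser4
comes from reiser4progs:

mkfs.reiser4 /dev/hda5            # create the new reiser4 filesystem
mkdir -p /mnt/newroot
mount /dev/hda5 /mnt/newroot
cp -ax / /mnt/newroot             # copy the running root, staying on one fs

# grub.conf entry pointing the kernel at the new root
title Gentoo (reiser4 root)
  root (hd0,0)
  kernel /boot/bzImage root=/dev/hda5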

Nikita Danilov wrote:

Thomas Graham writes:

when will it be stable, anyway?!

It depends on your definition of stable. It is stable for ordinary use,
and as far as I know people have been using it for home directories for some
time already. It is still possible to crash it under stress testing if one
is inventive enough and has enough time to waste.


How many bugs were fixed since the last snapshot?
Isn't it time for a new snapshot for 2.6.1 (or 2.6.1-mm)?
I have been using Reiser4 for everything on my desktop PC for several months,
without any problems.
(currently 2.6.0-test11, Gentoo Linux).
What's the status of compression support?

Nikita.
Best Wishes,
Viktors



Re: reiserfsck core dumps

2004-01-12 Thread B. J. Zolp


Vitaly Fertman wrote:

On Monday 12 January 2004 01:39, B. J. Zolp wrote:
 

I have an LVM partition that I cannot seem to mount anymore.  Mount tells
me the standard "Too many mounted file systems" message.  So I
decided to run reiserfsck on it; when I do that it soon crashes
with this output:
[EMAIL PROTECTED] root]# reiserfsck /dev/vg0/logical0
reiserfsck, 2002 - reiserfsprogs 3.6.3
   

Please update the version of reiserfsprogs.
 

OK, I am running 3.6.11 now.  I am still getting the same crash, but with 
new info:

The problem has occurred looks like a hardware problem.
If you have bad blocks, we advise you to get a new hard
drive, because once you get one bad block that the disk
drive internals cannot hide from your sight, the chances
of getting more are generally said to become much higher
(precise statistics are unknown to us), and this disk drive
is probably not expensive enough for you to risk your time
and data on it. If you don't want to follow that advice,
then if you have just a few bad blocks, try writing to the
bad blocks and see if the drive remaps the bad blocks (that
means it takes a block it has in reserve and allocates it
for use for requests of that block number).  If it cannot
remap the block, this could be quite bad, as it may mean
that so many blocks have gone bad that none remain in
reserve to allocate.
bread: Cannot read the block (119472128): (Input/output error).

Aborted (core dumped)

 

Will read-only check consistency
of the filesystem on /dev/vg0/logical0
Will put log info to 'stdout'
Do you want to run this program?[N/Yes] (note need to type Yes):Yes
###
reiserfsck --check started at Sun Jan 11 15:19:58 2004
###
bread: Cannot read a block # 119472128.
   

Do you see anything related in the syslog? It looks like a bad block
or an access beyond the end of the device.
 

Syslog is giving me:

Jan 12 11:07:45 orion kernel: hdb: dma_intr: status=0x51 { DriveReady 
SeekComplete Error }
Jan 12 11:07:45 orion kernel: hdb: dma_intr: error=0x40 { 
UncorrectableError }, LBAsect=10158456, high=0, low=10158456, 
sector=10158456
Jan 12 11:07:45 orion kernel: end_request: I/O error, dev 03:40 (hdb), 
sector 10158456
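The remapping advice printed by reiserfsck above translates to roughly the
following sketch; the sector number is taken from the syslog lines, and
writing to it of course destroys whatever data that sector held:

# confirm the sector is unreadable
dd if=/dev/hdb of=/dev/null bs=512 skip=10158456 count=1
# write zeros over it to give the drive a chance to remap the sector
dd if=/dev/zero of=/dev/hdb bs=512 seek=10158456 count=1
# re-read: if it still fails, the drive likely has no spare sectors left
dd if=/dev/hdb of=/dev/null bs=512 skip=10158456 count=1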

I did smartctl -a /dev/hdb and found this in the output:

ID# ATTRIBUTE_NAME        FLAG    VALUE WORST THRESH TYPE     WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct 0x0033  008   001   063    Pre-fail FAILING_NOW 616

If I were to dd this drive to a new one, would reiserfsck be more 
cooperative?  The odd thing is that my /dev/hda gave me the same error 
a few weeks back; I used dd to copy it to a new drive and things worked 
OK again.  I find it odd that two drives would have this error within 
such a short span of time.

Well thanks for your help


 

Aborted (core dumped)



Now, I am pretty sure the LVM is intact; all the drives are added
correctly and all that is OK.  I ran badblocks on the partition and it
gave me a list of ~1000 bad blocks.  Is there a way I can use this
list of blocks to my advantage to recover this partition? I would like
to get the data from it.
Thanks
   

 




3.6.25 - Journal replayed back to 3 weeks ago

2004-01-12 Thread Neil Robinson
Hi,

this morning when I started up my notebook (running Windows XP) with a
VMware session running Gentoo, the boot sequence claimed that the
reiserfs drives had not been cleanly umounted (not true, I powered down
the usual way on Friday evening -- su to root and then issued the
poweroff command). It then replayed the journals of the two partitions
using reiserfs. When it finished and booted, it was as if my entire
machine had stepped back in time by 3 weeks or so (to around the 23rd of
December). Since then I had installed and built openoffice, emacs, and
numerous other bits and pieces. I also lost all of the email that was
living in the courier-imap server.

I am *very* concerned about this behaviour. I have successfully
restarted, booted, etc. literally dozens of times since mid-December. I
have now just installed a software RAID using RAID 5 on Gentoo and using
reiserfs for a fairly large system (250GB on 8 SCSI U160 drives) with an
available hot spare and a tape backup unit. Losing a few weeks of
relatively insignificant changes is nothing compared with possibly
losing the contents of my company's master file server. Can anyone tell
me why reiserfs rolled back all the way to mid-December in spite of
numerous reboots, and how I can avoid a rerun of this scenario *ever*
again? Is there some way to tell it to commit its changes that I am not
doing and should be aware of?

Ciao, Neil



Re: 3.6.25 - Journal replayed back to 3 weeks ago

2004-01-12 Thread Chris Mason
On Mon, 2004-01-12 at 13:47, Neil Robinson wrote:
 Hi,
 
 this morning when I started up my notebook (running Windows XP) with a
 VMware session running Gentoo, the boot sequence claimed that the
 reiserfs drives had not been cleanly umounted (not true, I powered down
 the usual way on Friday evening -- su to root and then issued the
 poweroff command). It then replayed the journals of the two partitions
 using reiserfs. When it finished and booted, it was as if my entire
 machine had stepped back in time by 3 weeks or so (to around the 23rd of
 December). Since then I had installed and built openoffice, emacs, and
 numerous other bits and pieces. I also lost all of the email that was
 living in the courier-imap server.
 
 I am *very* concerned about this behaviour. I have successfully
 restarted, booted, etc. literally dozens of times since mid-December. I
 have now just installed a software RAID using RAID 5 on Gentoo and using
 reiserfs for a fairly large system (250GB on 8 SCSI U160 drives) with an
 available hot spare and a tape backup unit. Losing a few weeks of
 relatively insignificant changes is nothing compared with possibly
 losing the contents of my company's master file server. Can anyone tell
 me why reiserfs rolled back all the way to mid-December in spite of
 numerous reboots, and how I can avoid a rerun of this scenario *ever*
 again? Is there some way to tell it to commit its changes that I am not
 doing and should be aware of?

That's not supposed to happen.  Let's start with details about which
version of the kernel you were using.

-chris




Re: 3.6.25 - Journal replayed back to 3 weeks ago

2004-01-12 Thread Neil Robinson
On Mon, 2004-01-12 at 19:04, Chris Mason wrote:
 That's not supposed to happen.  Let's start with details about which
 version of the kernel you were using.

Kernel 2.4.20-gentoo-r9.

Ciao, Neil



Re: v3 logging speedups for 2.6

2004-01-12 Thread Dieter Nützel
On Thursday, 11 December 2003 19:42, Chris Mason wrote:
 On Thu, 2003-12-11 at 13:30, Dieter Nützel wrote:
  On Thursday, 11 December 2003 19:10, Chris Mason wrote:
   Hello everyone,
  
   This is part one of the data logging port to 2.6, it includes all the
   cleanups and journal performance fixes.  Basically, it's everything
   except the data=journal and data=ordered changes.
  
   The 2.6 merge has a few new things as well, I've changed things around
   so that metadata and log blocks will go onto the system dirty lists.
   This should make it easier to improve log performance, since most of
   the work will be done outside the journal locks.
  
   The code works for me, but should be considered highly experimental. 
   In general, it is significantly faster than vanilla 2.6.0-test11, I've
   done tests with dbench, iozone, synctest and a few others.  streaming
   writes didn't see much improvement (they were already at disk speeds),
   but most other tests did.
  
   Anyway, for the truly daring among you:
  
   ftp.suse.com/pub/people/mason/patches/data-logging/experimental/2.6.0-test11
  
   The more bug reports I get now, the faster I'll be able to stabilize
   things.  Get the latest reiserfsck and check your disks after each use.
 
  Chris,
 
  with which kernel should I start on my SuSE 9.0?
  A special SuSE 2.6.0-test11 + data logging?
  Or plain vanilla? --- There are so many patches in SuSE kernels...

 For the moment you can only try it on vanilla 2.6.0-test11.  The suse
 2.6 rpms have acls/xattrs and the new logging stuff won't apply.

 Jeff and I will fix that when the logging merge is really complete.  At
 the rate I'm going, that should be by the end of next week; this part of
 the merge contained the really tricky bits.

Chris,

can we have something against Gerd Knorr's [EMAIL PROTECTED] SuSE 2.6.1 kernel 
version, please?

reiserfs-journal-writer
Works fine (applies), still compiling...;-)


reiserfs-logging
Shows some rejects:

SunWave1 src/linux# patch -p1 -E -N < ../patches/reiserfs-logging
patching file fs/reiserfs/journal.c
Hunk #38 FAILED at 2217.
Hunk #39 succeeded at 2256 (offset 3 lines).
Hunk #40 FAILED at 2294.
Hunk #41 succeeded at 2423 with fuzz 1 (offset 40 lines).
Hunk #42 succeeded at 2438 (offset 40 lines).
Hunk #43 succeeded at 2456 (offset 40 lines).
Hunk #44 succeeded at 2480 (offset 40 lines).
Hunk #45 succeeded at 2519 (offset 40 lines).
Hunk #46 succeeded at 2581 (offset 56 lines).
Hunk #47 succeeded at 2606 (offset 56 lines).
Hunk #48 succeeded at 2657 (offset 60 lines).
Hunk #49 succeeded at 2727 (offset 60 lines).
Hunk #50 succeeded at 2744 (offset 60 lines).
Hunk #51 succeeded at 2754 (offset 60 lines).
Hunk #52 succeeded at 2792 (offset 60 lines).
Hunk #53 succeeded at 2832 (offset 60 lines).
Hunk #54 succeeded at 2856 (offset 60 lines).
Hunk #55 succeeded at 2888 (offset 60 lines).
Hunk #56 succeeded at 2897 (offset 60 lines).
Hunk #57 succeeded at 2985 (offset 60 lines).
Hunk #58 FAILED at 3036.
Hunk #59 succeeded at 3062 (offset 64 lines).
Hunk #60 succeeded at 3096 with fuzz 1 (offset 67 lines).
Hunk #61 succeeded at 3113 (offset 67 lines).
Hunk #62 succeeded at 3147 (offset 67 lines).
Hunk #63 succeeded at 3163 (offset 67 lines).
Hunk #64 succeeded at 3176 (offset 67 lines).
Hunk #65 succeeded at 3183 (offset 67 lines).
Hunk #66 succeeded at 3219 (offset 67 lines).
Hunk #67 succeeded at 3241 (offset 67 lines).
3 out of 67 hunks FAILED -- saving rejects to file fs/reiserfs/journal.c.rej
patching file fs/reiserfs/objectid.c
patching file fs/reiserfs/super.c
Hunk #1 succeeded at 61 (offset 2 lines).
Hunk #2 succeeded at 90 (offset 2 lines).
Hunk #3 succeeded at 844 with fuzz 1 (offset 35 lines).
Hunk #4 succeeded at 862 with fuzz 2 (offset 37 lines).
Hunk #5 succeeded at 1442 with fuzz 1 (offset 47 lines).
patching file fs/reiserfs/ibalance.c
patching file fs/reiserfs/procfs.c
patching file fs/reiserfs/fix_node.c
patching file fs/reiserfs/inode.c
Hunk #1 FAILED at 960.
Hunk #2 succeeded at 1629 (offset 12 lines).
1 out of 2 hunks FAILED -- saving rejects to file fs/reiserfs/inode.c.rej
patching file fs/reiserfs/do_balan.c
patching file mm/page-writeback.c
patching file include/linux/reiserfs_fs_i.h
Hunk #2 FAILED at 50.
1 out of 2 hunks FAILED -- saving rejects to file include/linux/reiserfs_fs_i.h.rej
patching file include/linux/reiserfs_fs_sb.h
Hunk #1 succeeded at 107 (offset 1 line).
Hunk #2 succeeded at 121 (offset 1 line).
Hunk #3 FAILED at 155.
Hunk #4 succeeded at 166 (offset 5 lines).
Hunk #5 succeeded at 207 (offset 5 lines).
Hunk #6 succeeded at 228 (offset 5 lines).
Hunk #7 succeeded at 421 (offset 9 lines).
Hunk #8 succeeded at 491 (offset 24 lines).
Hunk #9 succeeded at 500 (offset 24 lines).
1 out of 9 hunks FAILED -- saving rejects to file include/linux/reiserfs_fs_sb.h.rej
patching file include/linux/reiserfs_fs.h

I don't have the time to do it myself today...

-- 
Dieter Nützel
@home: Dieter.Nuetzel () hamburg ! de


Re: v3 logging speedups for 2.6

2004-01-12 Thread Dieter Nützel
On Monday, 12 January 2004 21:08, Dieter Nützel wrote:
 On Thursday, 11 December 2003 19:42, Chris Mason wrote:
  On Thu, 2003-12-11 at 13:30, Dieter Nützel wrote:
   On Thursday, 11 December 2003 19:10, Chris Mason wrote:
Hello everyone,
   
This is part one of the data logging port to 2.6, it includes all the
cleanups and journal performance fixes.  Basically, it's everything
except the data=journal and data=ordered changes.
   
The 2.6 merge has a few new things as well, I've changed things
around so that metadata and log blocks will go onto the system dirty
lists. This should make it easier to improve log performance, since
most of the work will be done outside the journal locks.
   
The code works for me, but should be considered highly experimental.
In general, it is significantly faster than vanilla 2.6.0-test11,
I've done tests with dbench, iozone, synctest and a few others. 
streaming writes didn't see much improvement (they were already at
disk speeds), but most other tests did.
   
Anyway, for the truly daring among you:
   
ftp.suse.com/pub/people/mason/patches/data-logging/experimental/2.6.0-test11
   
The more bug reports I get now, the faster I'll be able to stabilize
things.  Get the latest reiserfsck and check your disks after each
use.
  
   Chris,
  
   with which kernel should I start on my SuSE 9.0?
   A special SuSE 2.6.0-test11 + data logging?
   Or plain vanilla? --- There are so many patches in SuSE kernels...
 
  For the moment you can only try it on vanilla 2.6.0-test11.  The suse
  2.6 rpms have acls/xattrs and the new logging stuff won't apply.
 
  Jeff and I will fix that when the logging merge is really complete.  At
  the rate I'm going, that should be by the end of next week; this part of
  the merge contained the really tricky bits.

 Chris,

 can we have something against Gerd Knorr's [EMAIL PROTECTED] SuSE 2.6.1
 kernel version, please?

 reiserfs-journal-writer
 Works fine (applies), still compiling...;-)

Works fine!

Greetings,
Dieter


filesystem - database

2004-01-12 Thread Viktors Rotanovs
Hi,

I recently converted a filesystem (reiser3.6) containing lots of small 
files (40 files, about 10 bytes each, Cyrus IMAP quota files) to the CDB 
database format (http://cr.yp.to/cdb.html plus some patching to make it 
read-write), thus gaining a significant performance improvement (load avg 
was 5, became 3).
What is the best way to do the same for other similar small files, using 
Reiser4? As far as I can understand, I could:
1) just put everything on Reiser4, with no changes
2) write some plugin for Reiser4
Is it possible to reduce the file size on disk by not saving file ownership, 
modification time, etc.?
How much do the kernel's VFS interface, switching to the kernel and back, 
directory caching, etc. slow down these operations?

Best Wishes,
Viktors




[PATCHES] 2.6.1 Cleanups

2004-01-12 Thread Jeff Mahoney
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hey all -

Now that the 2.6.0 hurdle has been cleared and Linus seems to be more
open to cleanup-type patches, I have these to submit for comment.
I have 4 patches, descriptions follow:
* cleanup-01-basic-cleanup
-- This cleans up journal.c such that the ugly 50+ character
macro/derefs that are used repeatedly are evaluated once and
then accessed using a temporary variable. The resulting code
should be identical, and it is quite a bit nicer to read.
* cleanup-01-sb-opts
-- This eliminates individual #defines for superblock/mount
options and instead uses an enum. It's not like the actual
values of the mount options matter, and this just makes a list
of them. Accordingly, since the values aren't apparent in the
include, I added a BUG_ON to bail out if the value passed is beyond
the size of the mount_opts variable.
* cleanup-02-bh-bits
-- This patch makes all the accesses to bh->b_state use the
appropriate macros, rather than accessing them directly.
* cleanup-03-bh-cleanup
-- This patch eliminates the local macro implementation for the
bh->b_state accessors/mutators and uses the FNS_BUFFER
implementation in fs.h, which automatically creates the macros
with one line of code.
* cleanup-04-sb-journal-elimination
-- This patch is similar to the basic-cleanup, except that it
focuses on the use of SB_JOURNAL(super) everywhere, and replaces
it with a local journal variable. Again, this makes the code
much easier to look at.
Opinions? Comments?

Patches can be found at
ftp://ftp.suse.com/pub/people/jeffm/reiserfs/kernel-v2.6/2.6.1/
- -Jeff

I apologize if this was posted twice. The first message had the patches
attached and crossed the post size limit. I haven't received a bounce yet.
- --
Jeff Mahoney
SuSE Labs
[EMAIL PROTECTED]
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.2 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org
iD8DBQFAAxdqLPWxlyuTD7IRAuedAKCETcQI8uv3l2+KV2dZuuTsO3jVmQCcCaEZ
9TZxgECJvfcr/9CwHwrRLWw=
=i605
-END PGP SIGNATURE-


Re: reiserfsck core dumps

2004-01-12 Thread Vitaly Fertman
 Jan 12 11:07:45 orion kernel: hdb: dma_intr: status=0x51 { DriveReady
 SeekComplete Error }
 Jan 12 11:07:45 orion kernel: hdb: dma_intr: error=0x40 {
 UncorrectableError }, LBAsect=10158456, high=0, low=10158456,
 sector=10158456
 Jan 12 11:07:45 orion kernel: end_request: I/O error, dev 03:40 (hdb),
 sector 10158456

So your hard drive has bad blocks; have a look at 
http://www.namesys.com/bad-block-handling.html
for info on how to handle them. 
If you need our further assistance, please visit our support 
page, which covers the terms under which we deal with hardware problems:
http://www.namesys.com/support.html

-- 
Thanks,
Vitaly Fertman


Re: reiserfsck core dumps

2004-01-12 Thread B. J. Zolp
The site suggests doing this:

reiserfsck --badblocks file device

but as far as I can tell --badblocks is no longer supported.
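
For reference, the sequence the Namesys page describes looks roughly like
this -- a sketch assuming a reiserfsprogs version that does accept
--badblocks; the block size and list file name are examples:

# list bad blocks using the filesystem's block size (4k here)
badblocks -b 4096 -o /tmp/badblocks.list /dev/vg0/logical0
# have reiserfsck treat those blocks as unusable while rebuilding
reiserfsck --rebuild-tree --badblocks /tmp/badblocks.list /dev/vg0/logical0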



Vitaly Fertman wrote:

Jan 12 11:07:45 orion kernel: hdb: dma_intr: status=0x51 { DriveReady
SeekComplete Error }
Jan 12 11:07:45 orion kernel: hdb: dma_intr: error=0x40 {
UncorrectableError }, LBAsect=10158456, high=0, low=10158456,
sector=10158456
Jan 12 11:07:45 orion kernel: end_request: I/O error, dev 03:40 (hdb),
sector 10158456
   

So your hard drive has bad blocks; have a look at 
	http://www.namesys.com/bad-block-handling.html
for info on how to handle them. 
If you need our further assistance, please visit our support 
page, which covers the terms under which we deal with hardware problems:
	http://www.namesys.com/support.html

 




another quota problem

2004-01-12 Thread Jakub Neumann
I compiled the 2.4.23 kernel with ReiserFS, quota patches for
it from somewhere on the SuSE mirror, and grsecurity.
But when I booted it, I got an "unknown option usrquota" error on
my /home (which is the only fs with quota enabled)...
I used 2.4.20 with ReiserFS quota before, and there was no such
problem.
my line from /etc/fstab:
/dev/sda7   /home   reiserfsnosuid,nodev,usrquota  0  2
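
An "unknown option" error at mount time usually means the running kernel
itself was built without the reiserfs quota support, so a first sanity
check -- the config path is an example; use wherever your 2.4.23 kernel
was built -- would be:

# confirm the quota and reiserfs options actually made it into the build
grep -E 'CONFIG_QUOTA|CONFIG_REISERFS' /usr/src/linux/.config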

I'd be happy if anybody could help me :)



raid0 and rfs4

2004-01-12 Thread Bret Towe
i recently tried reiser4 on a raid0 setup, and well, it didn't go so well
:\

what i got from dmesg was the following.
dmesg was full of raid0_make_request lines when i finally looked;
all of those lines looked about the same except the last 2 numbers.

raid0_make_request bug: can't convert block across chunks or bigger than 512k 356200 124
raid0_make_request bug: can't convert block across chunks or bigger than 512k 357352 124
raid0_make_request bug: can't convert block across chunks or bigger than 512k 358344 60
Unable to handle kernel NULL pointer dereference at virtual address 0004
 printing eip:
c021706f
*pde = 
Oops:  [#1]
CPU:0
EIP:0060:[c021706f]Not tainted
EFLAGS: 00010246
EIP is at reiser4_status_write+0xef/0x250
eax:    ebx: d3f4e000   ecx: c0425640   edx: 13fc5000
esi:    edi:    ebp: d3f4fe04   esp: d3f4fdd8
ds: 007b   es: 007b   ss: 0068
Process ktxnmgrd:run (pid: 993, threadinfo=d3f4e000 task=d4761310)
Stack: d1aa7ed8 d1aa7ba4 d5dd9580 d3f5ce00   0002 
   d3f5ce00 d3f5de80  d3f4e000 c0211034 0002  
    c03dd57e d42e24c0 d42e24c0 c020a965 d42e24c0 d3f4fe50 d3f5de80
Call Trace:
 [c0211034] reiser4_handle_error+0x44/0xe0
 [c020a965] finish_all_fq+0xa5/0xb0
 [c020a993] current_atom_finish_all_fq+0x23/0x70
 [c01fc74e] current_atom_complete_writes+0xe/0x30
 [c01fc885] commit_current_atom+0x115/0x250
 [c010b8ba] apic_timer_interrupt+0x1a/0x20
 [c01fd33a] try_commit_txnh+0x12a/0x1c0
 [c01fd408] commit_txnh+0x38/0xd0
 [c01fbbdf] txn_end+0x3f/0x50
 [c01fcdd4] commit_some_atoms+0x184/0x210
 [c020b736] scan_mgr+0x36/0x57
 [c020b388] ktxnmgrd+0x1a8/0x290
 [c020b1e0] ktxnmgrd+0x0/0x290
 [c0108c29] kernel_thread_helper+0x5/0xc

Code: 8b 48 04 31 c0 89 82 3c 00 00 c0 89 8a 38 00 00 c0 8b 45 00
<6>note: ktxnmgrd:run[993] exited with preempt_count 1


the system is a dual pentium pro 200mhz (feel the speed! ;)
the devices md is using are 2 network block devices, which are 2 hard drive
partitions on another computer.
i have tested the same setup before; it works on a solo system with reiserfs3,
and the ppro also works fine with reiser3.
i'm using kernel 2.6.1; the only extra item it has is the 12-23 'all' patch.
if you need more info or something, do tell.
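
For anyone trying to reproduce this, the setup reads roughly like the
following sketch; the host, ports and device names are examples only --
the report above does not give the exact ones:

# import the two partitions from the other computer as network block devices
nbd-client server.example.com 2000 /dev/nbd0
nbd-client server.example.com 2001 /dev/nbd1
# stripe them into a raid0 array and put reiser4 on it
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nbd0 /dev/nbd1
mkfs.reiser4 /dev/md0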