Re: Congratulations! we have got hash function screwed up

2005-01-20 Thread Edward Shishkin
Hans Reiser wrote:
Grzegorz Jaśkiewicz wrote:
On Thu, 30 Dec 2004 08:40:51 -0800, Hans Reiser [EMAIL PROTECTED] 
wrote:
 

Fixing hash collisions in V3 to do them the way V4 does them would
create more bugs and user disruption than the current bug we have all
lived with for 5 years until now.  If someone thinks it is a small
change to fix it, send me a patch.  Better by far to fix bugs in V4,
which is pretty stable these days.
  

As I understand, the tea hash is based on tea (the tiny encryption algorithm),
which was the cause of the xbox-linux success, and a few others.
Please consider updating it to use the xxtea algorithm. I know it won't be
backward compatible, but well.
As for all the others, I don't use them, and for me tea is the
only reasonable hash to use on systems where I have a very large
number of files per directory (namely, Maildirs).
I've never had such a problem myself; every hash function has a weakness.
Nothing new. But providing another, much stronger hash, or correcting the tea
hash to use xxtea, would be something good indeed.
 

Edward, please look into whether we should use xxtea in Reiser4, and 
make a recommendation to me.  We aren't changing V3, it is stable and 
I want to leave it that way.

Hans

I found that:
1. xxtea is a correction to the Block TEA algorithm against an attack
that does not apply to the original tea or xtea.
2. xtea is an upgrade of the tea algorithm which eliminates two minor weaknesses
of the latter related to key attacks,
and not related to the collisions of the tea hash (for each name the tea hash
uses ciphering with a key constructed
from that name).
So imho it doesn't make sense to upgrade the core rounds used in the tea
hash. Any objections?

Edward.
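For context, here is the standard TEA encryption cycle, as a generic sketch only;
this is not the reiserfs keyed_hash code, and the claim that the key k[] is derived
from the name being hashed follows Edward's description above and should be treated
as an assumption. It illustrates why the xtea/xxtea changes are beside the point:
they harden the cipher against (related-)key attacks on k[], which does nothing
about collisions once the result is truncated to a 32-bit directory hash.

#include <stdint.h>

/*
 * Standard TEA encryption, shown only to illustrate the round structure
 * being discussed; NOT the reiserfs keyed_hash implementation.
 */
static void tea_encrypt(uint32_t v[2], const uint32_t k[4])
{
	uint32_t v0 = v[0], v1 = v[1], sum = 0;
	const uint32_t delta = 0x9e3779b9;	/* TEA magic constant */
	int i;

	for (i = 0; i < 32; i++) {		/* 32 cycles = 64 Feistel rounds */
		sum += delta;
		v0 += ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
		v1 += ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
	}
	v[0] = v0;
	v[1] = v1;
}

Whatever cipher strength is added to these rounds, a directory hash built on them
still keeps only 32 bits of output, so the birthday-bound collision behaviour
discussed later in the thread is unchanged.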


Re: Congratulations! we have got hash function screwed up

2005-01-20 Thread Grzegorz Jaśkiewicz
All I know is that xxtea is a fixed tea algorithm. If it fixes a weakness in
the crypto algorithm, then it should make the hashing better too.
No doubt there is no ideal hash algorithm, but if the base algorithm has a
weakness, using the fixed one can only be better, right?

-- 
GJ


Re: Congratulations! we have got hash function screwed up

2005-01-19 Thread Hans Reiser
Grzegorz Jaśkiewicz wrote:
On Thu, 30 Dec 2004 08:40:51 -0800, Hans Reiser [EMAIL PROTECTED] wrote:
 

Fixing hash collisions in V3 to do them the way V4 does them would
create more bugs and user disruption than the current bug we have all
lived with for 5 years until now.  If someone thinks it is a small
change to fix it, send me a patch.  Better by far to fix bugs in V4,
which is pretty stable these days.
   

As I understand, the tea hash is based on tea (the tiny encryption algorithm),
which was the cause of the xbox-linux success, and a few others.
Please consider updating it to use the xxtea algorithm. I know it won't be
backward compatible, but well.
As for all the others, I don't use them, and for me tea is the
only reasonable hash to use on systems where I have a very large
number of files per directory (namely, Maildirs).
I've never had such a problem myself; every hash function has a weakness.
Nothing new. But providing another, much stronger hash, or correcting the tea
hash to use xxtea, would be something good indeed.
 

Edward, please look into whether we should use xxtea in Reiser4, and 
make a recommendation to me.  We aren't changing V3, it is stable and I 
want to leave it that way.

Hans


Re: Congratulations! we have got hash function screwed up

2005-01-19 Thread David Masover
Hans Reiser wrote:
| Grzegorz Jaśkiewicz wrote:
|
| On Thu, 30 Dec 2004 08:40:51 -0800, Hans Reiser [EMAIL PROTECTED]
| wrote:
|
|
| Fixing hash collisions in V3 to do them the way V4 does them would
| create more bugs and user disruption than the current bug we have all
| lived with for 5 years until now.  If someone thinks it is a small
| change to fix it, send me a patch.  Better by far to fix bugs in V4,
| which is pretty stable these days.
|
|
|
| As I understand, the tea hash is based on tea (the tiny encryption algorithm),
| which was the cause of the xbox-linux success, and a few others.
| Please consider updating it to use the xxtea algorithm. I know it won't be
| backward compatible, but well.
| As for all the others, I don't use them, and for me tea is the
| only reasonable hash to use on systems where I have a very large
| number of files per directory (namely, Maildirs).
| I've never had such a problem myself; every hash function has a weakness.
| Nothing new. But providing another, much stronger hash, or correcting the tea
| hash to use xxtea, would be something good indeed.
|
|
|
| Edward, please look into whether we should use xxtea in Reiser4, and
| make a recommendation to me.  We aren't changing V3, it is stable and I
| want to leave it that way.
You don't change, it gets forked.  You have a strange definition of
stable.


Re: Congratulations! we have got hash function screwed up

2005-01-18 Thread Grzegorz Jaśkiewicz
On Thu, 30 Dec 2004 08:40:51 -0800, Hans Reiser [EMAIL PROTECTED] wrote:
 Fixing hash collisions in V3 to do them the way V4 does them would
 create more bugs and user disruption than the current bug we have all
 lived with for 5 years until now.  If someone thinks it is a small
 change to fix it, send me a patch.  Better by far to fix bugs in V4,
 which is pretty stable these days.

As I understand, the tea hash is based on tea (the tiny encryption algorithm),
which was the cause of the xbox-linux success, and a few others.
Please consider updating it to use the xxtea algorithm. I know it won't be
backward compatible, but well.
As for all the others, I don't use them, and for me tea is the
only reasonable hash to use on systems where I have a very large
number of files per directory (namely, Maildirs).
I've never had such a problem myself; every hash function has a weakness.
Nothing new. But providing another, much stronger hash, or correcting the tea
hash to use xxtea, would be something good indeed.

-- 
GJ


Re: flush earlier? (was Re: Congratulations! we have got hash function screwed up)

2005-01-09 Thread David Masover
Hans Reiser wrote:
[...]
| I'm not sure I understand that.  Is the idea of that to build up a write
| buffer which insists on flushing bytes off the front as they are added
| onto the back, without flushing huge chunks at once?
|
|
| Yes.
|
|
| Would that be as efficient at packing (no fragmentation)?
|
|
| No.
Really?  I'd think that it would waste more CPU, but you could actually
end up packing better, if you allocate for the end of the buffer as it's
flushed, because you can look at the whole buffer and what's already on
disk.  What would be a good idea with one flush could be inefficient by
the next, and this system can react faster to that change.
But, there's still laptops, and there's CPU usage.
Will it be hard to implement both approaches?  One example would be
someone trying to create a clean fs image, or someone who's installing
software, so they don't care how long it takes, but they do want it to
be tightly packed.  Someone who can't afford the repacker :(
| But the main problem was it would basically lock the fs entirely -- no
| access at all.
|
|
| Which is why we need to allow fusing a little bit.
What do you mean by fusing here?
| This especially hurts with read access.
|
|
| Read access?  Do you have one CPU?  Maybe the problem isn't what I
| thought it was.
Yes, I have one CPU.  But I only have one reiser4 drive per machine.  I
think that reiser4 locks all fs access until the flush is done, or at
least it used to.  But even if it didn't, reads take _forever_ when the
disk is in use, and reads are usually more urgent than writes (hence
asynchronous writes).  And the disk will be in use trying to write a
huge chunk of data for some time, and will take longer the more RAM you
have.
| For now, how about a quick fix.  Can we force a flush when we get to
| 80%?
|
|
| VM already has an asynchronous flush mechanism.  We just need to clean
| up our interaction with it a bit to smooth things out.
What I want to avoid is doing away with lazy writes entirely (laptops)
and doing something like ext3's flush every five seconds concept.  But
I also want to avoid truly _massive_ spikes in disk activity.
That's what the 80% is about.  You still get lazy writes, but they can
be at a lower priority than reads, hopefully minimizing seeks, too --
thrashing between reading and writing.  But when the lazy write happens,
you don't slow down the rest of the system, because it still has some
room to write.


Re: flush earlier? (was Re: Congratulations! we have got hash function screwed up)

2005-01-08 Thread Hans Reiser
David Masover wrote:

Hans Reiser wrote:
| David Masover wrote:
|
| Hans Reiser wrote:
| | Chris Dukes wrote:
| |
| |
| |
| | All filesystems will fail or suffer degraded performance under
| | certain conditions, you need to determine what conditions are
| acceptable
| | for your data.
| |
| |
| |
| | and each generation of software reduces the extent of such 
conditions.
| | Reiser4 fixes this problem cleanly.
|
| I think Reiser4's degraded performance condition is when it gets 
lots of
| RAM.  First, a disclaimer -- I don't have the latest reiser4 
patch.  But
| in all versions of the FS, I've found that if I'm ever trying to do
| anything when reiser finally decides to flush to disk, basically my
| whole system is locked up.  I haven't tested, but I think this would
| actually be worse with more RAM, because it would be longer until the
| flush was forced, so each flush would take longer.
|
| What is needed is some sort of estimator or estimate.  An estimator
| would be something that would flush when, based on recent fs load, it
| was reasonable to expect that RAM would fill up just as the flush was
| completing.  An estimate would be to flush if a certain percentage of
| RAM was full, and to go to synchronous mode if memory usage didn't go
| back below that percentage.
|
|
| We need to throttle rather than flush, so as to ensure that for every
| page added to an atom, at least X pages must reach disk, until close to
| the end of the atom when we just flush it out.

I'm not sure I understand that.  Is the idea of that to build up a write
buffer which insists on flushing bytes off the front as they are added
onto the back, without flushing huge chunks at once?
Yes.
Would that be as efficient at packing (no fragmentation)?
No.
But the main problem was it would basically lock the fs entirely -- no
access at all.
Which is why we need to allow fusing a little bit.
This especially hurts with read access.
Read access?  Do you have one CPU?  Maybe the problem isn't what I 
thought it was.

When fs is
under heavy load, I can wait several minutes to start a browser,
especially when a lot of writing is happening.
For now, how about a quick fix.  Can we force a flush when we get to
80%?
VM already has an asynchronous flush mechanism.  We just need to clean 
up our interaction with it a bit to smooth things out.

Is there a synchronous mode, and can we use that after 80%, or do
we have to fake it by trying to flush every few seconds?


Re: Congratulations! we have got hash function screwed up

2005-01-07 Thread Hans Reiser
Chris Dukes wrote:

All filesystems will fail or suffer degraded performance under
certain conditions, you need to determine what conditions are acceptable
for your data.
 

and each generation of software reduces the extent of such conditions.  
Reiser4 fixes this problem cleanly.


Re: Congratulations! we have got hash function screwed up

2005-01-07 Thread pcg
On Thu, Jan 06, 2005 at 09:55:20PM +0300, Edward Shishkin [EMAIL PROTECTED] 
wrote:
 On Thu, Jan 06, 2005 at 03:45:06PM +0300, Alex Zarochentsev 
 [EMAIL PROTECTED] wrote:
 
 Tea hash is designed to be more resistant.  

 
 
 Actually this can not be more resistant as it use the same 32-bit output 
 size.

Sure it can, filenames are not randomly distributed, so your argument doesn't
suffice to show that tea cannot be more resistant, as it could be more
resistant for other reasons.

That's why I originally wrote nicely-looking, which (if it wasn't clear)
was meant to say that somewhat similar filenames do collide
even with tea, which supposedly was chosen to avoid this case.

 So to find a collision you just need to find hashes of 2^16 = 65536
 random documents.

True. It's even worse if these collisions happen to filenames occurring in
practice.

(I also agree to the rest of your mail)

-- 
The choice of a
  -==- _GNU_
  ==-- _   generation Marc Lehmann
  ---==---(_)__  __   __  [EMAIL PROTECTED]
  --==---/ / _ \/ // /\ \/ /  http://schmorp.de/
  -=/_/_//_/\_,_/ /_/\_\  XX11-RIPE


Re: Congratulations! we have got hash function screwed up

2005-01-07 Thread Chris Dukes
On Fri, Jan 07, 2005 at 09:22:02AM -0800, Hans Reiser wrote:
 Chris Dukes wrote:
 
 
 
 All filesystems will fail or suffer degraded performance under
 certain conditions, you need to determine what conditions are acceptable
 for your data.
 
  
 
 and each generation of software reduces the extent of such conditions.  
 Reiser4 fixes this problem cleanly.

Should reduce.  I currently have the misfortune of supporting
some software that seems to be of the "for every bug fixed, at least
one bug with an equal level of impact is introduced" variety.

-- 
Chris Dukes
hello stacklimit my old friend, I've come to visit you again


flush earlier? (was Re: Congratulations! we have got hash function screwed up)

2005-01-07 Thread David Masover
Hans Reiser wrote:
| Chris Dukes wrote:
|
|
|
| All filesystems will fail or suffer degraded performance under
| certain conditions, you need to determine what conditions are acceptable
| for your data.
|
|
|
| and each generation of software reduces the extent of such conditions.
| Reiser4 fixes this problem cleanly.
I think Reiser4's degraded performance condition is when it gets lots of
RAM.  First, a disclaimer -- I don't have the latest reiser4 patch.  But
in all versions of the FS, I've found that if I'm ever trying to do
anything when reiser finally decides to flush to disk, basically my
whole system is locked up.  I haven't tested, but I think this would
actually be worse with more RAM, because it would be longer until the
flush was forced, so each flush would take longer.
What is needed is some sort of estimator or estimate.  An estimator
would be something that would flush when, based on recent fs load, it
was reasonable to expect that RAM would fill up just as the flush was
completing.  An estimate would be to flush if a certain percentage of
RAM was full, and to go to synchronous mode if memory usage didn't go
back below that percentage.
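To make the estimator/estimate distinction concrete, here is a rough sketch.
It is purely illustrative: none of these structures, names or thresholds exist
in reiser4 or the VM; they are invented for the sketch.

#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative only: decide when to start flushing dirty pages, either
 * from a fixed threshold ("estimate") or from recently observed rates
 * ("estimator").  All names and numbers are made up.
 */
struct dirty_state {
	uint64_t dirty_pages;     /* pages dirtied but not yet on disk */
	uint64_t total_pages;     /* pages of RAM */
	uint64_t dirty_rate;      /* recently observed pages dirtied per second */
	uint64_t writeback_rate;  /* recently observed pages written per second */
};

/* "Estimate": flush once a fixed fraction of RAM is dirty (e.g. 80%). */
static bool should_flush_estimate(const struct dirty_state *s)
{
	return s->dirty_pages * 100 >= s->total_pages * 80;
}

/*
 * "Estimator": start flushing when, at the current dirtying rate, RAM
 * would fill up in roughly the time a full writeback would take, so the
 * flush completes just as memory would otherwise run out.
 */
static bool should_flush_estimator(const struct dirty_state *s)
{
	uint64_t free_pages = s->total_pages - s->dirty_pages;
	uint64_t secs_until_full = s->dirty_rate ?
		free_pages / s->dirty_rate : UINT64_MAX;
	uint64_t secs_to_write_back = s->writeback_rate ?
		s->dirty_pages / s->writeback_rate : UINT64_MAX;

	return secs_until_full <= secs_to_write_back;
}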


Re: flush earlier? (was Re: Congratulations! we have got hash function screwed up)

2005-01-07 Thread Hans Reiser
David Masover wrote:
Hans Reiser wrote:
| Chris Dukes wrote:
|
|
|
| All filesystems will fail or suffer degraded performance under
| certain conditions, you need to determine what conditions are 
acceptable
| for your data.
|
|
|
| and each generation of software reduces the extent of such conditions.
| Reiser4 fixes this problem cleanly.

I think Reiser4's degraded performance condition is when it gets lots of
RAM.  First, a disclaimer -- I don't have the latest reiser4 patch.  But
in all versions of the FS, I've found that if I'm ever trying to do
anything when reiser finally decides to flush to disk, basically my
whole system is locked up.  I haven't tested, but I think this would
actually be worse with more RAM, because it would be longer until the
flush was forced, so each flush would take longer.
What is needed is some sort of estimator or estimate.  An estimator
would be something that would flush when, based on recent fs load, it
was reasonable to expect that RAM would fill up just as the flush was
completing.  An estimate would be to flush if a certain percentage of
RAM was full, and to go to synchronous mode if memory usage didn't go
back below that percentage.
We need to throttle rather than flush, so as to ensure that for every 
page added to an atom, at least X pages must reach disk, until close to 
the end of the atom when we just flush it out.

Another missing and needed feature
Hans
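To make the throttle-rather-than-flush idea concrete, here is a rough sketch.
It is illustrative only: the atom structure, function names and numbers below
are invented for this sketch and are not the reiser4 API.

/*
 * Sketch of throttling instead of one big flush: once an atom grows past
 * some size, every page added to it forces at least X pages of that atom
 * out to disk, so dirty memory grows at a bounded rate instead of being
 * dumped all at once at commit time.  All names here are invented.
 */
#define THROTTLE_START_PAGES  4096  /* start throttling past this size */
#define PAGES_PER_ADD            2  /* "X": pages forced to disk per page added */

struct atom {
	unsigned long dirty_pages;   /* pages captured but not yet written */
};

/* pretend low-level writeback; returns the number of pages actually written */
extern unsigned long write_out_pages(struct atom *atom, unsigned long nr);

void atom_add_page(struct atom *atom)
{
	atom->dirty_pages++;

	/* Throttle: pay for this page with at least X pages of writeback. */
	if (atom->dirty_pages > THROTTLE_START_PAGES) {
		unsigned long written = write_out_pages(atom, PAGES_PER_ADD);

		atom->dirty_pages -= (written < atom->dirty_pages) ?
					written : atom->dirty_pages;
	}
}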


Re: flush earlier? (was Re: Congratulations! we have got hash function screwed up)

2005-01-07 Thread David Masover

Hans Reiser wrote:
| David Masover wrote:
|
| Hans Reiser wrote:
| | Chris Dukes wrote:
| |
| |
| |
| | All filesystems will fail or suffer degraded performance under
| | certain conditions, you need to determine what conditions are
| acceptable
| | for your data.
| |
| |
| |
| | and each generation of software reduces the extent of such conditions.
| | Reiser4 fixes this problem cleanly.
|
| I think Reiser4's degraded performance condition is when it gets lots of
| RAM.  First, a disclaimer -- I don't have the latest reiser4 patch.  But
| in all versions of the FS, I've found that if I'm ever trying to do
| anything when reiser finally decides to flush to disk, basically my
| whole system is locked up.  I haven't tested, but I think this would
| actually be worse with more RAM, because it would be longer until the
| flush was forced, so each flush would take longer.
|
| What is needed is some sort of estimator or estimate.  An estimator
| would be something that would flush when, based on recent fs load, it
| was reasonable to expect that RAM would fill up just as the flush was
| completing.  An estimate would be to flush if a certain percentage of
| RAM was full, and to go to synchronous mode if memory usage didn't go
| back below that percentage.
|
|
| We need to throttle rather than flush, so as to ensure that for every
| page added to an atom, at least X pages must reach disk, until close to
| the end of the atom when we just flush it out.
I'm not sure I understand that.  Is the idea of that to build up a write
buffer which insists on flushing bytes off the front as they are added
onto the back, without flushing huge chunks at once?
Would that be as efficient at packing (no fragmentation)?
But the main problem was it would basically lock the fs entirely -- no
access at all.  This especially hurts with read access.  When fs is
under heavy load, I can wait several minutes to start a browser,
especially when a lot of writing is happening.
For now, how about a quick fix.  Can we force a flush when we get to
80%?  Is there a synchronous mode, and can we use that after 80%, or do
we have to fake it by trying to flush every few seconds?


Re: Congratulations! we have got hash function screwed up

2005-01-06 Thread Alex Zarochentsev
Hello,

On Tue, Dec 28, 2004 at 11:12:18PM +0100,  Marc A. Lehmann  wrote:
 Hi!
 
 When trying to upgrade or reinstall the xfonts-75dpi,
 xfonts-75dpi-transcoded or 100dpi  transcoded debian packages on my
 2.6.10-rc3 amd64 reiserfsv3 host, I get the following errors:
 
dpkg: error processing
/var/cache/apt/archives/xfonts-75dpi_4.3.0.dfsg.1-10_all.deb (--unpack):
 unable to make backup link of
 `./usr/X11R6/lib/X11/fonts/75dpi/lutBS19-ISO8859-1.pcf.gz' before
 installing new version: Device or resource busy
 dpkg-deb: subprocess paste killed by signal (Broken pipe)
Preparing to replace xfonts-75dpi-transcoded 4.3.0.dfsg.1-10 (using
  .../xfonts-75dpi-transcoded_4.3.0.dfsg.1-10_all.deb) ...
dpkg: error processing 
 /var/cache/apt/archives/xfonts-75dpi-transcoded_4.3.0.dfsg.1-10_all.deb 
 (--unpack):
 unable to make backup link of 
 `./usr/X11R6/lib/X11/fonts/75dpi/lutBS19-ISO8859-10.pcf.gz' before installing 
 new version: Device or resource busy
dpkg-deb: subprocess paste killed by signal (Broken pipe)
Errors were encountered while processing:
 /var/cache/apt/archives/xfonts-75dpi_4.3.0.dfsg.1-10_all.deb
 /var/cache/apt/archives/xfonts-75dpi-transcoded_4.3.0.dfsg.1-10_all.deb
 
 And at the same time, I get this in my kernel log:
 
ReiserFS: hdg2: warning: reiserfs_add_entry: Congratulations! we have got 
 hash function screwed up
 
 Sure sounds like a filesystem bug to me. Is this 2.6.10-rc3-specific or a
 generic bug in handling hash collisions?

Tea hash is designed to be more resistant.  

There is a generic problem with overflowing the generation counter, but the
tea hash should mix file names better and have less chance of 'screwing the hash
function up'.
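For readers new to the thread, roughly what overflowing the generation counter
means, as I understand it. The sketch below is the idea only, not the real
reiserfs macros, and the 7-bit width is my recollection of v3 and should be
treated as an assumption.

#include <stdint.h>

/*
 * Sketch of the idea only -- not the real reiserfs code.  A directory
 * entry's key offset is the 32-bit name hash with its low bits used as a
 * "generation" counter to disambiguate names whose hashes collide.
 */
#define GENERATION_BITS   7
#define MAX_GENERATION    ((1u << GENERATION_BITS) - 1)

static inline uint32_t make_entry_offset(uint32_t hash, uint32_t generation)
{
	return (hash & ~MAX_GENERATION) | (generation & MAX_GENERATION);
}

/*
 * On insertion, the first unused generation for this hash is taken.  If
 * every generation slot for that hash value is already occupied in the
 * directory, the create fails -- which is the "hash function screwed up"
 * condition in the kernel log above.
 */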

Does Debian install all X font files into one dir? Maybe you have your own
font files installed in the same dir? I suggest splitting the dir into several
ones.

 Deleting the fonts and installing the package works, but the next upgrade
 makes the error appear again.
 
 -- 
 The choice of a
   -==- _GNU_
   ==-- _   generation Marc Lehmann
   ---==---(_)__  __   __  [EMAIL PROTECTED]
   --==---/ / _ \/ // /\ \/ /  http://schmorp.de/
   -=/_/_//_/\_,_/ /_/\_\  XX11-RIPE

-- 
Alex.


Re: Congratulations! we have got hash function screwed up

2005-01-06 Thread pcg
On Thu, Jan 06, 2005 at 03:45:06PM +0300, Alex Zarochentsev [EMAIL PROTECTED] 
wrote:
  generic bug in handling hash collisions?
 
 Tea hash is designed to be more resistant.  

As the example posted shows, tea doesn't look better, it generates
nicely-looking collisions, too.

 Does the debian install all X font files into one dir?

No, but xfree nowadays comes with a lot of fonts because it stupidly makes
a copy of about each and every font in each and every encoding, leading to
many font files in the bitmapped category (75dpi and 100dpi).

 May be you have your own font files installed in the same dir?

I also have some other debian packages that install their fonts there, but
it should be less than 10 extra files.

 I suggest to split the dir into several ones.

I'd suggest getting rid of reiserfs on anything important. I can't have it
when my filesystem randomly returns errors when it should be working.

I wonder whether this doesn't have some security relevance, as it allows attackers
to easily create filename holes in the filesystem that even root cannot
override.

Thanks for the suggestion, though! However, the workaround I currently use
(delete the dir, reinstall) works better, as it doesn't destroy debian's
idea of the filesystem layout.

-- 
The choice of a
  -==- _GNU_
  ==-- _   generation Marc Lehmann
  ---==---(_)__  __   __  [EMAIL PROTECTED]
  --==---/ / _ \/ // /\ \/ /  http://schmorp.de/
  -=/_/_//_/\_,_/ /_/\_\  XX11-RIPE


Re: Congratulations! we have got hash function screwed up

2005-01-06 Thread Hans Reiser
pcg( Marc)@goof(A.).(Lehmann )com wrote:
On Thu, Jan 06, 2005 at 03:45:06PM +0300, Alex Zarochentsev [EMAIL PROTECTED] wrote:
 

generic bug in handling hash collisions?
 

Tea hash is designed to be more resistant.  
   

As the example posted shows, tea doesn't look better, it generates
nicely-looking collisions, too.
 

You mean, in practice you hit them, or with an artificially generated 
set of filenames intended to cause collisions you get those collisions?



Re: Congratulations! we have got hash function screwed up

2005-01-06 Thread Spam
generic bug in handling hash collisions?
  

Tea hash is designed to be more resistant.  



As the example posted shows, tea doesn't look better, it generates
nicely-looking collisions, too.
  

 You mean, in practice you hit them, or with an artificially generated
 set of filenames intended to cause collisions you get those collisions?

 Excuse me, but do you mean that there are undocumented limits on what
 files can be named, and on how many files with similar or random
 names can exist in a ReiserFS volume?

 This sounds bad...



Re: Congratulations! we have got hash function screwed up

2005-01-06 Thread Chris Dukes
On Thu, Jan 06, 2005 at 05:13:23PM +0100, Spam wrote:
 generic bug in handling hash collisions?
   
 
 Tea hash is designed to be more resistant.  
 
 
 
 As the example posted shows, tea doesn't look better, it generates
 nicely-looking collisions, too.
   
 
  You mean, in practice you hit them, or with an artificially generated
  set of filenames intended to cause collisions you get those collisions?
 
  Excuse me, but do you mean that there are undocumented limits on what
  files can be named, and on how many files with similar or random
  names can exist in a ReiserFS volume?

No, I'd say it's pretty well documented that reiserfs fails under
certain hash collision conditions instead of continuing to work
(albeit more slowly).

The nature of the hash collisions must be pretty obvious if a shell
script can be written to demonstrate the problem.
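For the curious, a birthday-style collision search really is only a screenful of
code. The sketch below uses FNV-1a as a stand-in 32-bit hash purely for
illustration; it is not the reiserfs r5 or tea hash, but the same birthday bound
applies to any 32-bit hash.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Birthday-bound demonstration with a stand-in 32-bit hash (NOT the
 * reiserfs hashes).  With random-ish names, collisions in a 32-bit hash
 * are expected after roughly 2^16 = 65536 names.
 */
static uint32_t toy_hash(const char *s)
{
	uint32_t h = 2166136261u;          /* FNV-1a, used only as an example */
	while (*s) {
		h ^= (unsigned char)*s++;
		h *= 16777619u;
	}
	return h;
}

struct entry { uint32_t hash; unsigned idx; };

static int cmp(const void *a, const void *b)
{
	uint32_t ha = ((const struct entry *)a)->hash;
	uint32_t hb = ((const struct entry *)b)->hash;
	return (ha > hb) - (ha < hb);
}

int main(void)
{
	enum { N = 200000 };               /* a few times 2^16 names */
	struct entry *e = malloc(N * sizeof(*e));
	char name[32];
	unsigned i;

	if (!e)
		return 1;
	for (i = 0; i < N; i++) {
		snprintf(name, sizeof(name), "file-%u", i);
		e[i].hash = toy_hash(name);
		e[i].idx = i;
	}
	qsort(e, N, sizeof(*e), cmp);      /* colliding hashes end up adjacent */
	for (i = 1; i < N; i++)
		if (e[i].hash == e[i - 1].hash)
			printf("collision: file-%u and file-%u -> %08x\n",
			       e[i - 1].idx, e[i].idx, e[i].hash);
	free(e);
	return 0;
}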
 
  This sounds bad...

It's a risk assessment.  What are the odds of your normal data sets
hitting the bug or of someone with malicious intent introducing
a demonstration program vs the performance hit of a filesystem
without the problem.

All filesystems will fail or suffer degraded performance under
certain conditions, you need to determine what conditions are acceptable
for your data.

-- 
Chris Dukes
Warning: Do not use the reflow toaster oven to prepare foods after
it has been used for solder paste reflow. 
http://www.stencilsunlimited.com/stencil_article_page5.htm


Re: Congratulations! we have got hash function screwed up

2005-01-06 Thread Spam

  

 On Thu, Jan 06, 2005 at 05:13:23PM +0100, Spam wrote:
 generic bug in handling hash collisions?
   
 
 Tea hash is designed to be more resistant.  
 
 
 
 As the example posted shows, tea doesn't look better, it generates
 nicely-looking collisions, too.
   
 
  You mean, in practice you hit them, or with an artificially generated
  set of filenames intended to cause collisions you get those collisions?
 
  Excuse me, but do you mean that there are undocumented limits on what
  files can be named, and on how many files with similar or random
  names can exist in a ReiserFS volume?

 No, I'd say it's pretty well documented that reiserfs fails under
 certain hash collision conditions instead of continuing to work
 (albeit more slowly).

 The nature of the hash collisions must be pretty obvious if a shell
 script can be written to demonstrate the problem.
 
  This sounds bad...

 It's a risk assessment.  What are the odds of your normal data sets
 hitting the bug or of someone with malicious intent introducing
 a demonstration program vs the performance hit of a filesystem
 without the problem.

  How can I assess the risk, if I do not know how to produce the bugs?
  You say "certain conditions". But from what I read earlier in the
  thread, a directory with fonts in it?

 All filesystems will fail or suffer degraded performance under
 certain conditions, you need to determine what conditions are acceptable
 for your data.

  Slow can be acceptable. But failing? No, a filesystem should not
  fail.

-- 



Re: Congratulations! we have got hash function screwed up

2005-01-06 Thread Chris Dukes
On Thu, Jan 06, 2005 at 05:29:39PM +0100, Spam wrote:
 
  It's a risk assessment.  What are the odds of your normal data sets
  hitting the bug or of someone with malicious intent introducing
  a demonstration program vs the performance hit of a filesystem
  without the problem.
 
   How can I assess the risk, if I do not know how to produce the bugs?
   You say "certain conditions". But from what I read earlier in the
   thread, a directory with fonts in it?

Since the concepts of simulators, hash function analysis, and dataset
modelling seem to escape you, perhaps you need to go for the black
and white "Is any risk acceptable?", given anecdotal data of one
unexpected failing condition and one script that can regularly create
the failing condition.
 
  All filesystems will fail or suffer degraded performance under
  certain conditions, you need to determine what conditions are acceptable
  for your data.
 
   Slow can be acceptable. But failing? No, a filesystem should not
   fail.

It should not fail 
1) When media fails
2) When transport hardware is not compliant with specs (permanently on
write caching anyone?)
3) Media has a limited lifetime
...

One thing I don't think I ever saw in this thread was:
1) How old was the drive that saw the problem?
2) What was the drive lifetime used to calculate its MTBF?
-- 
Chris Dukes
Warning: Do not use the reflow toaster oven to prepare foods after
it has been used for solder paste reflow. 
http://www.stencilsunlimited.com/stencil_article_page5.htm


Re: Congratulations! we have got hash function screwed up

2005-01-06 Thread Edward Shishkin
pcg( Marc)@goof(A.).(Lehmann )com wrote:
On Thu, Jan 06, 2005 at 03:45:06PM +0300, Alex Zarochentsev [EMAIL PROTECTED] wrote:
 

generic bug in handling hash collisions?
 

Tea hash is designed to be more resistant.  
   

Actually it cannot be more resistant, as it uses the same 32-bit output
size. So to find
a collision you just need to find the hashes of 2^16 = 65536 random documents.
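For reference, the standard birthday-bound approximation behind that figure
(nothing reiserfs-specific):

    P(\mathrm{collision}) \approx 1 - e^{-k(k-1)/2^{n+1}}

For an ideal n = 32 bit hash and k = 2^{16} = 65536 random names,
k(k-1)/2^{33} \approx 2^{32}/2^{33} = 1/2, so P \approx 1 - e^{-1/2} \approx 0.39;
a 50% chance of a collision needs only about 1.18 \cdot 2^{16} \approx 77000 names.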

As the example posted shows, tea doesn't look better, it generates
nicely-looking collisions, too.
 


I'd suggest getting rid of reiserfs on anything important. I can't have it
when my filesystem randomly returns errors when it should be working.
I wonder wether this hasn't any security relevance, as it allows attackers
easily to create filename holes in the filesystem that even root cannot
override.
 

There should be a weighty reason to use a strong hash function for creating
entries, because
a stable hash means bad performance and more occupied space in stat-data:
I am not
sure that even 160 bits would guarantee the absence of collisions for a long time..

Edward.
Thanks for the suggestion, though! However, the workaround I currently use
(delete the dir, reinstall) works better, as it doesn't destroy debian's
idea of the filesystem layout.
 




Re: Congratulations! we have got hash function screwed up

2004-12-31 Thread Matthias Andree
On Thu, 30 Dec 2004, Hans Reiser wrote:

 A working undelete can either hog disk space or die the moment some
 large write comes in. And if you're at that point, make it a versioning
 file system
 
 Well, yes, it should be one.
 
 darpa is paying for views, add in a little versioning and.

If the view is something between a transactional view in a SQL
database and a device-mapper snapshot, then yes, it might be close
enough. There's however always the problem of capacity conflicts, and
there may need to be a switch that prefers "keep older versions" over
"discard older versions" so that the admin with - by your leave - idiot
users has a chance to save his users' a**e*.

 - but then don't complain about space efficiency.

 This is an area where Apple was smarter than Unix.  Having a trash can
 is what real users need, more than they need performance.

Does Apple's trash can help against files that get overwritten in situ?
If not, it's insufficient to fix another common failure. My Mom is
prone (is that word applicable to human behavior?) to open a file
(say, a half year schedule of the local community), edit it, without
saving it under a new name - by the time she's completed her edit, she
has forgotten to rename it and boom, old file dead. Next week she wants
it back...

 I would however auto-empty the trash can when space got low

That isn't desired. See above.

 Well, it hasn't been coded solely because we haven't gotten around to it 
 what with all else that needs doing and still needs doing.  Remind me 
 about this in a year.:)

Save this mail to a file and have atd mail it to you. Or use a calendar :)

-- 
Matthias Andree


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Matthias Andree
Hans Reiser [EMAIL PROTECTED] writes:

Again, this is a lame excuse for a bug. First you declare some features on
your filesystem, later, when it turns out that it isn't being delivered,
you act as if this were a known condition.

 Well this is true, you are right.  Reiser4 is the fix though.

No, it isn't. Reiser4 is an alternative beast. Or will it transparently
fix the collision problem in a 3.5 or 3.6 file system, in a way that
is backwards compatible with 3.6 drivers? If not, please fix reiser3.6.

Given that Reiser4 isn't proven yet in the field (for that, it would
have to be used as the default file system by at least one major
distributor for at least a year), it is certainly not an option for
servers _yet_.

A file system that intransparently (i.e., not because of inode count or block
count) refuses to create a new file doesn't belong on _my_ production
machines, which shall migrate away from reiserfs on the next suitable
occasion (such as upgrades). There's ext3fs, jfs, xfs, and in 2006 or
2007, we'll talk about reiser4 again. Yes, I am conservative WRT file
systems and storage.

-- 
Matthias Andree


RE: Congratulations! we have got hash function screwed up

2004-12-30 Thread Yiannis Mavroukakis
 
Hello Matthias,

Your "proven" reasoning sounds a bit strange to me.. Microsoft (aka a major
distributor, at least in my books) had her filesystems in the field for
ages; does this prove any of them good (or bad for that matter)?
I don't think I'd wait for a distributor to shove reiser4 down my
throat just because the distributor seems to trust it, so the logical
course would be for me to try it out. I'll grant you that I am not using
it on the mission-critical server, because our hosting provider will not
support it (ext3 addicts.. oh well), but I do have it on my development
server, which does house critical code and receives all kinds of
hammering from yours truly; and I use it at home.
I suppose my point is, filesystem testing and adoption belongs to the
masses, be they your average Joe Linux user or a sysadmin who feels
confident enough in the filesystem's abilities to take the plunge. I run
reiser4, I'm happy with it, and it is stable enough to carry out my *own*
activities.

Happy holidays,

Yiannis.

-Original Message-
From: Matthias Andree [mailto:[EMAIL PROTECTED] 
Sent: 30 December 2004 10:23
To: Hans Reiser
Cc: reiserfs-list@namesys.com; Stefan Traby
Subject: Re: Congratulations! we have got hash function screwed up

Hans Reiser [EMAIL PROTECTED] writes:

Again, this is a lame excuse for a bug. First you declare some 
features on your filesystem, later, when it turns out that it isn't 
being delivered, you act as if this were a known condition.

 Well this is true, you are right.  Reiser4 is the fix though.

No, it isn't. Reiser4 is an alternative beast. Or will it transparently
fix the collision problem in a 3.5 or 3.6 file system, in a way that
is backwards compatible with 3.6 drivers? If not, please fix reiser3.6.

Given that Reiser4 isn't proven yet in the field (for that, it would
have to be used as the default file system by at least one major
distributor for at least a year), it is certainly not an option for
servers _yet_.

A file system that intransparently (i. e. not inode count or block
count) refuses to create a new file doesn't belong on _my_ production
machines, which shall migrate away from reiserfs on the next suitable
occasion (such as upgrades). There's ext3fs, jfs, xfs, and in 2006 or
2007, we'll talk about reiser4 again. Yes, I am conservative WRT file
systems and storage.

--
Matthias Andree




Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Matthias Andree
Yiannis Mavroukakis [EMAIL PROTECTED] writes:

 Your proven reasoning sounds a bit strange to me..Microsoft (aka
 major distributor at least in my books) had her filesystems in the
 field for ages, does this prove any of them good (or bad for that
 matter)?

My reasoning mentioned a /required/, but not a /sufficient/ criterion.

In other words: not before it is proven in the field will I consider it
for production use.

Remember the Linux 2.2 reiserfs 3.5 NFS woes?
Remember the early XFS-NFS woes?

These are all reasons to avoid a shiny new file system for serious work.

 I don't think I'd wait for a distributor to shove reiser4 down my
 throat, just because the distributor seems to trust it, so the logical
 course would be for me to try it out. I'll grant you that I am not
 using it on the mission critical server, because our hosting provider
 will not support it (ext3 addicts..oh well)

For practical recovery reasons (error on root FS after a crash), ext3fs
is easier to handle. You can fsck the (R/O) root partition just fine
(e2fsck then asks you to reboot right away); for reiserfs, you'll have
to boot into some emergency or rescue system...

 but I do have it on my development server, that does house critical
 code and receives all kinds of hammering from yours truly; And I use
 it at home.

I reformatted my last at-home reiserfs an hour ago and unloaded the
reiserfs kernel module, as the way Hans has responded to the error
report is unacceptable.

Anyone is free to choose the file system, and as the simple
demonstration code posted earlier shows a serious flaw in reiserfs,
Hans's response was boldfaced, I ditched reiserfs3. End of story.

-- 
Matthias Andree


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Cal
--
and then at Thu, 30 Dec 2004 13:40:53 +0100, it was written ...
 ... 
 Anyone is free to choose the file system, and as the simple
 demonstration code posted earlier shows a serious flaw in reiserfs,
 Hans's response was boldfaced, I ditched reiserfs3. End of story.
 

Your policy and philosophy on file system selection are yours to enjoy as
you see fit, but the anger and angst ... ?   Phew!! 

cheers, Cal



RE: Congratulations! we have got hash function screwed up

2004-12-30 Thread Yiannis Mavroukakis
My reasoning mentioned a /required/, but not a /sufficient/ criterion.
In other words: not before it is proven in the field will I consider it
for production use.
Remember the Linux 2.2 reiserfs 3.5 NFS woes?
Remember the early XFS-NFS woes?
These are all reasons to avoid a shiny new file system for serious
work.

I agree, but you're generalising, this is not xfs and reiser4 is not 3.5
;)
If you don't try out the shiny new filesystem yourself, how can you
possibly dismiss it based on the past failures
of other filesystems? 


For practical recovery reasons (error on root FS after a crash), ext3fs
is easier to handle. You can fsck the (R/O) root partition just fine
(e2fsck then asks you to reboot right away); for reiserfs, you'll have
to boot into some emergency or rescue system...

No biggie for me, just have a removable media of some sort with your
running kernel and some basic tools.


Y.



Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Matthias Andree
Yiannis Mavroukakis [EMAIL PROTECTED] writes:

 I agree, but you're generalising, this is not xfs and reiser4 is not 3.5
 ;)
 If you don't try out the shiny new filesystem yourself, how can you
 possibly dismiss it based on the past failures
 of other filesystems? 

I doubt new software is bug-free. I don't expect NFS problems with
reiser4 though, these should be in the regression tests. :-)

-- 
Matthias Andree


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Matthias Andree
Cal [EMAIL PROTECTED] writes:

 --
 and then at Thu, 30 Dec 2004 13:40:53 +0100, it was written ...
  ... 
  Anyone is free to choose the file system, and as the simple
  demonstration code posted earlier shows a serious flaw in reiserfs,
  Hans's response was boldfaced, I ditched reiserfs3. End of story.
  

 Your policy and philosophy on file system selection are yours to enjoy as
 you see fit, but the anger and angst ... ?   Phew!! 

I have no interest in dealing with systems that have known and reproducible
cases of failure that are nondeterministic in practical use.

And Marc's documentation showed this is a real-world problem, not an
ivory tower problem.

The reiserfs story is over for me. All private machines I deal with are
reiserfs-free as of a few hours ago.

It was just one bug too many, and it was handled unprofessionally,
unlike many bugs before, which had usually been dealt with on short
notice, or at least accepted for looking into.

I'll phase reiser3 out on my work machines as I see fit.

I have seen too many bugs in reiserfs3.

I do believe reiserfs4 fixes some design flaws of reiser3, and when the
implementation issues are all shaken out in one or two years' time, it
may be a good file system and I will look at it - I trust the reiserfs
team can learn from their mistakes.

I hope they learn that THIS handling of the error was wrong.

"Who cares, not us, for the past five years" is not a proper response to
a real-world problem.

-- 
Matthias Andree


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Matthias Andree
On Thu, 30 Dec 2004, Hans Reiser wrote:

 Fixing hash collisions in V3 to do them the way V4 does them would 
 create more bugs and user disruption than the current bug we have all 
 lived with for 5 years until now.  If someone thinks it is a small 
 change to fix it, send me a patch.  Better by far to fix bugs in V4, 
 which is pretty stable these days.

Better to fix a known bug than create a file system vacuum before V4 is
really stable.

Anyways, I don't care any more, I'm phasing out ReiserFS v3 and have no
plans to try V4 before 2006.

-- 
Matthias Andree


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread pcg
On Wed, Dec 29, 2004 at 06:05:59PM -0800, Hans Reiser [EMAIL PROTECTED] wrote:
 
 Again, this is a lame excuse for a bug. First you declare some features on
 your filesystem, later, when it turns out that it isn't being delivered,
 you act as if this were a known condition.
  
 
 Well this is true, you are right.  Reiser4 is the fix though.

So that's what happens to the filesystems you develop once you have a new toy. Good
to know when planning my next server :)

 (Even if it were ok to fail file creation, the error generated is still
 wrong. It is a bug, no matter how you try to twist it).
  
 Blame Alan Cox for that, he changed it from -EHASHCOLLISION (or some 
 such error I invented, I forget) over my objections.

Blaming Cox for trying to fix your code and not getting it completely
right is not nice. After all, Cox also found that the error code is
inadequate. The point is that EBUSY is still bad, for open. ENOSPC is a
much better code, as it is a documented error code for open, whereas EBUSY
is not.

-- 
The choice of a
  -==- _GNU_
  ==-- _   generation Marc Lehmann
  ---==---(_)__  __   __  [EMAIL PROTECTED]
  --==---/ / _ \/ // /\ \/ /  http://schmorp.de/
  -=/_/_//_/\_,_/ /_/\_\  XX11-RIPE


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Esben Stien
Matthias Andree [EMAIL PROTECTED] writes:

 All private machines I deal with are reiserfs-free as of a few hours
 ago.

What do you use instead?

I really don't like that there is no undelete feature in reiserfs -
it's not planned for reiserfs-4 either. I see desperate users all the
time trying to get back what they mistakenly removed. It shouldn't be
hard with reiserfs either. There should be a selection of what files
to restore, so we can avoid the file merging problem with rebuilding
the tree from leaf nodes.

 I have seen too many bugs in reiserfs3.

Is there a list of these current issues?

 "Who cares, not us, for the past five years" is not a proper response
 to a real-world problem.

I also reacted to this response. 

A flaw in the filesystem, in my opinion, is equivalent to the space
ship crashing and all crew members dying.

-- 
Esben Stien is [EMAIL PROTECTED]
http://www.esben-stien.name
irc://irc.esben-stien.name/%23contact
[sip|iax]:[EMAIL PROTECTED]


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread pcg
On Thu, Dec 30, 2004 at 11:52:58AM -, Yiannis Mavroukakis [EMAIL 
PROTECTED] wrote:
 Your proven reasoning sounds a bit strange to me..Microsoft (aka major
 distributor at least in my books) had her filesystems in the field for
 ages, does this prove any of them good (or bad for that matter)?

You state that "proven" is the same as "good", but why you do so escapes
me. In general, you can easily prove that black == white (etc.) by such
illogical reasoning.

The reasoning in fact is that file systems which have been out in the field for
years have _known_ properties, and are proven to work with some amount of
software, possibly including software that specifically has workarounds for
filesystem shortcomings (i.e. squid with its multi-dir hierarchy, which is
just a workaround for basically all non-reiserfs filesystems, and now also
for reiserfs3 filesystems, which also don't cope with millions of files
in a dir, as opposed to the original claims by their developers).

This is certainly true for Microsoft's filesystems: you know what to expect
from vfat or ntfs, for example.

For new filesystems, the issue is completely different. Look at the "file
is a directory" approach that formerly was the default in reiserfs. This
breaks a number of programs, despite the expectation of the reiserfs
authors that this is not the case (in my experience, stat'ing path/. to
quickly check whether something is a file or a directory is pretty common). Unless
the filesystem is in the field for some time these bugs will not be found.
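For illustration, the idiom Marc mentions looks roughly like this (a minimal
sketch; the example paths are made up):

#include <stdio.h>
#include <sys/stat.h>

/*
 * The "stat path/." idiom: appending "/." makes stat() fail with ENOTDIR
 * when path is a regular file, so a successful call means path resolves
 * to a directory.  Programs relying on this broke when reiserfs
 * experimented with files that also behave as directories.
 */
static int is_directory(const char *path)
{
	char buf[4096];
	struct stat st;

	snprintf(buf, sizeof(buf), "%s/.", path);
	return stat(buf, &st) == 0;
}

int main(void)
{
	/* example paths only */
	printf("/etc: %d\n", is_directory("/etc"));
	printf("/etc/hostname: %d\n", is_directory("/etc/hostname"));
	return 0;
}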

-- 
The choice of a
  -==- _GNU_
  ==-- _   generation Marc Lehmann
  ---==---(_)__  __   __  [EMAIL PROTECTED]
  --==---/ / _ \/ // /\ \/ /  http://schmorp.de/
  -=/_/_//_/\_,_/ /_/\_\  XX11-RIPE


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Christian Iversen
On Thursday 30 December 2004 18:07, Esben Stien wrote:
 Matthias Andree [EMAIL PROTECTED] writes:
  All private machines I deal with are reiserfs-free as of a few hours
  ago.

 What do you use instead?

 I really don't like that there is no undelete feature in reiserfs -
 it's not planned for reiserfs-4 either. I see desperate users all the
 time trying to get back what they mistakenly removed. It shouldn't be
 hard with reiserfs either. There should be a selection of what files
 to restore, so we can avoid the file merging problem with rebuilding
 the tree from leaf nodes.

  I have seen too many bugs in reiserfs3.

 Is there a list of these current issues?

  Who cares, not us for the past five years is not a proper response
  to a real-world problem.

 I also reacted to this response.

 A flaw in the filesystem, in my opinion, is equivalent to the space
 ship crashing and all crew members die.

To be more exact, it's equal to a Mars probe getting stranded and making many
scientists very nervous :-)

-- 
Regards,
Christian Iversen


RE: Congratulations! we have got hash function screwed up

2004-12-30 Thread Yiannis Mavroukakis
You state that proven is the same as good, but why you do so
escapes me. In general, you can easily prove that black == white (etc.)
by such illogical reasoning.

No I don't :) I merely say that proven does not equal good OR bad if a
distributor chooses to bundle the filesystem with a distribution. It can
be proven good or proven bad :) Clear?:) 



[..]Unless the filesystem is in the field for some time these bugs will
not be found.

I agree, but only if the filesystem is used in the real world, and its
adoption is not driven primarily by distributors.

BTW red|purple|brown might as well be black if you are colour blind ;)))
It's all a matter of perception.




Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Sander
Esben Stien wrote (ao):
 I really don't like that there is no undelete feature in reiserfs -
 it's not planned for reiserfs-4 either. I see desperate users all the
 time trying to get back what they mistakenly removed.

If you 'see desperate users all the time' you might be among the wrong
people ;-)

Maybe you can clue them in the wonderful world of making backups?

Next time it is not their mistake, but instead a broken hard disk.
Undelete won't save them then.

Or they just edited it instead of rm.


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Esben Stien
Sander [EMAIL PROTECTED] writes:

 If you 'see desperate users all the time' you might be amoung the
 wrong people ;-)

;), on the lists and on user groups.

 Maybe you can clue them in the wonderful world of making backups?

I always promote bacula, but you will always have those who don't do
the dance.

 Next time it is not their mistake, but instead a broken harddisk.
 Undelete wont save them then.

We got pretty good tools to restore from a hd with bad blocks. 

dd it, loop it, fsck it. 

 Or they just edited it instead of rm.

This is handled in user space.

-- 
Esben Stien is [EMAIL PROTECTED]
http://www.esben-stien.name
irc://irc.esben-stien.name/%23contact
[sip|iax]:[EMAIL PROTECTED]


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Sander
Esben Stien wrote (ao):
 Sander [EMAIL PROTECTED] writes:
  Next time it is not their mistake, but instead a broken harddisk.
  Undelete wont save them then.
 
 We got pretty good tools to restore from a hd with bad blocks. 
 
 dd it, loop it, fsck it. 
 
I'm sure a friend of mine disagrees with you, after paying big bucks to a
Norway-based disk recovery company after a disk crash and zero backups.
A Dutch recovery company couldn't recover the disk.

And restore from backup is always quicker and less stressful on the
nerves.

  Or they just edited it instead of rm.
 
 This is handled in user space.

Maybe in your situation. But in general I advise everybody to make
backups. Might also be the reason nobody has written a reiser4 undelete plugin
yet.


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Esben Stien
Sander [EMAIL PROTECTED] writes:

 Maybe you can clue them in the wonderful world of making backups?

This is not always the case, btw

This might be caused by a faulty application as well if the system is
not secure enough.

Tools to scan the partition and give a list of possible restores
should not be hard to implement. 

-- 
Esben Stien is [EMAIL PROTECTED]
http://www.esben-stien.name
irc://irc.esben-stien.name/%23contact
[sip|iax]:[EMAIL PROTECTED]


RE: Congratulations! we have got hash function screwed up

2004-12-30 Thread Burnes, James
 We got pretty good tools to restore from a hd with bad blocks.
 
 dd it, loop it, fsck it.

Heh, heh.  That won't help you if a circuit board, spindle, read head or
other mechanism fails.  Then you better hope the data wasn't *that*
valuable or you know a good platter recovery shop.

Backups are cheap.  Recovery is very expensive.  Ask the CIA ;-)

It's Hans and friend's responsibility to patch whatever looks like a
serious problem.  It's the user's responsibility to protect valuable
data.
File system reliability after that is really a question of operational
downtime expenses etc.

If people don't trouble themselves to perform backups:

1. They have accepted that risk or
2. They are too junior to have ever lost a couple months worth of work
because they were too lazy or inexperienced to perform backups.
3. They don't have any work that can't be rebuilt in a matter of hours
(see #1).

(BTW: If Hans is a little tired of working on Reiser3 it's probably
because he is currently stressed out making last minute tweaks on
Reiser4 and managing his team.

Cut him some slack.  Email conversations don't show a number of things
we take for granted, like the fact that the person we're talking to
looks really tired etc.  Unlike ext3, XFS and JFS, Reiser isn't funded
by someone with huge pockets.)

Jim Burnes





Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Spam

  

 Esben Stien wrote (ao):
 I really don't like that there is no undelete feature in reiserfs -
 it's not planned for reiserfs-4 either. I see desperate users all the
 time trying to get back what they mistakenly removed.

 If you 'see desperate users all the time' you might be among the wrong
 people ;-)

 Maybe you can clue them in on the wonderful world of making backups?

  This is the standard Linux comment... But in reality, it is just a way
  to say you do not know, or that Linux lacks an (in this user's
  opinion) important feature.

  In any case. Undelete has been around for ages on many platforms. It IS a
  useful feature. Accidents CAN happen for many reasons and in some
  cases you may need to recover data.

  Besides, a deletion does not fully remove the data, but just unlinks
  it. In Reiser where there is tailing etc for small files this can be
  a problem. Either the little file might not be able to be recovered
  (shouldn't the data still exist, even if it is tailed), or the user
  needs to use a non-tailing policy?

 Next time it is not their mistake, but instead a broken harddisk.
 Undelete won't save them then.

  Indeed. Backups are important. But the backup does not recover the
  last minute changes. They may be a day or a week old at best - even
  at larger sites!

 Or they just edited it instead of rm.
  Well, overwritten data is not so easy to get back. But from what I
  understand, in Linux many applications actually write
  another file and then unlink the old file? If that is the case then
  it may even be possible to get back some overwritten files!
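
For reference, the save pattern being described is usually a write to a
temporary file followed by a rename over the old name; a minimal sketch,
where generate_new_config stands in for whatever produces the new contents:

  # write the new contents to a temporary file first, then rename it over the
  # old name; rename() swaps the directory entry, and the old file's blocks
  # are only freed afterwards, which is what leaves them around for a while
  tmp=$(mktemp config.XXXXXX) &&
  generate_new_config > "$tmp" &&
  mv "$tmp" config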
  
  ~S

-- 



Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Esben Stien
Burnes, James [EMAIL PROTECTED] writes:

 dd it, loop it, fsck it.
 That won't help you if a circuit board, spindle, read head or other
 mechanism fails.

Sure, but that is not a simple case of bad blocks;)

 Then you'd better hope the data wasn't *that* valuable or you know
 a good platter recovery shop.

We got IBAS in Norway; the price for an evaluation is close to a
handful of gold.

 Backups are cheap.  Recovery is very expensive.

Sure, but you're not factoring in Murphy's law here. A tool to undelete
would come in handy for many people who even have proper backup solutions.

Besides, a recovery is not the same as a feature to undelete. Such a
feature would maybe save a day's work, which is a lot in some circles.

-- 
Esben Stien is [EMAIL PROTECTED]
http://www.esben-stien.name
irc://irc.esben-stien.name/%23contact
[sip|iax]:[EMAIL PROTECTED]


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Esben Stien
Sander [EMAIL PROTECTED] writes:

 dd it, loop it, fsck it. 
  
 I'm sure a friend of mine disagrees with you after paying big bucks
 to a Norway based disk recovery company after a disk crash and zero
 backups.  A Dutch recovery company couldn't recover the disk.

Probably IBAS;). What was the problem with the hd then? Why couldn't
the dutchies do the job?

 And restore from backup is always quicker and less stressful on the
 nerves.

Like I said to James, this might not only be the fault of the user. 

 Maybe in your situation. But in general I advise everybody to make
 backups. Might also be the reason nobody wrote a reiser4 undelete
 plugin yet.

It's no use talking about backups when we talk about undelete
features; backups are for recovery after nuclear explosions, faulty
hardware which makes it impossible to even retrieve data from the
disk, or something similar.

It would be too expensive to do all this backing up in userspace. When
a file gets deleted, a proper procedure to retrieve it would be to
umount the filesystem, scan it for the data which was removed and then
put it back in the tree again. It's not like reiserfs overwrites this
data, it's still there, so why should there be an artificial barrier
to getting this data back?

-- 
Esben Stien is [EMAIL PROTECTED]
http://www.esben-stien.name
irc://irc.esben-stien.name/%23contact
[sip|iax]:[EMAIL PROTECTED]


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Chris Dukes
On Thu, Dec 30, 2004 at 07:46:09PM +0100, Esben Stien wrote:
 
 It would be too expensive to do all this backing up in userspace. When
 a file gets deleted, a proper procedure to retrieve it would be to
 umount the filesystem, scan it for the data which was removed and then
 put it back in the tree again. It's not like reiserfs overwrites this
 data, it's still there, so why should there be an artificial barrier
 to getting this data back?

You're using the wrong tool for the job.
For what you're asking for you want Plan 9 and its
wormfs.

Thanks for playing.  

-- 
Chris Dukes
Warning: Do not use the reflow toaster oven to prepare foods after
it has been used for solder paste reflow. 
http://www.stencilsunlimited.com/stencil_article_page5.htm


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Sander
Esben Stien wrote (ao):
 Sander [EMAIL PROTECTED] writes:
  I'm sure a friend of mine disagrees with you after paying big bucks
  to a Norway based disk recovery company after a disk crash and zero
  backups. A Dutch recovery company couldn't recover the disk.
 
 Probably IBAS;). What was the problem with the hd then? Why couldn't
 the dutchies do the job?

Maybe the Dutch company is not that good :-)
It could be IBAS, but they didn't mention a company name. It has to be a
well-known one. And your mention of gold in your other mail is very true
of course :-)

 When a file gets deleted, a proper procedure to retrieve it would be
 to umount the filesystem, scan it for the data which was removed and
 then put it back in the tree again. It's not like reiserfs overwrites
 this data, it's still there, so why should there be an artificial
 barrier to getting this data back?

I did recover data that way several times back when I didn't do backups.
The problem is though that one file does not occupy one spot on the
harddisk. It might be spread all over the place. While the data might
still be there (on an idle disk), you miss the much needed pointers to
the data.
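
The brute-force version of that, for the record, is something like the
following; the device and the search string are made up, and the device
should be idle or mounted read-only while you do it:

  # search the raw device for a known fragment of the lost file and dump the
  # surrounding context; this only finds pieces, and only while the blocks
  # have not been reused
  grep -a -b -B2 -A40 'a string you remember from the file' /dev/hdb1 > /tmp/fragments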


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Matthias Andree
On Thu, 30 Dec 2004, Burnes, James wrote:

 (BTW: If Hans is a little tired of working on Reiser3 it's probably
 because he is currently stressed out making last minute tweaks on
 Reiser4 and managing his team.
 
 Cut him some slack.  Email conversations don't show a number of things
 we take for granted, like the fact that the person we're talking to
 looks really tired etc.  Unlike ext3, XFS and JFS, Reiser isn't funded
 by someone with huge pockets.)

I'm willing to grant ANY time-out. If Hans had written 'I have a pile of
deadline reiser4 contract work before I can deal with that', fine; he
didn't, but said 'use reiser4' instead. And that's inadequate.

And I say this without any emotions, red head, swelling veins and such.

-- 
Matthias Andree


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Matthias Andree
On Thu, 30 Dec 2004, Esben Stien wrote:

 Sure, but you're not factoring in Murphy's law here. A tool to undelete
 would come in handy for many people who even have proper backup solutions.

You're asking for a versioned file system. If reiserfs v4 doesn't offer
such properties, find something else that does.

 Besides, a recovery is not the same as a feature to undelete. Such a
 feature would maybe save a day's work, which is a lot in some circles.

Backup more often. Staged backup schemes (hourly, daily, weekly,
monthly) with varying levels of differential or complete backups, plus
off-site archives, are probably a good idea for these circles then.
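
A minimal sketch of such a staged scheme, assuming rsync and cron and a
second disk mounted at /backup; every path, time and retention choice here
is illustrative only, and the hourly.N rotation itself is left out:

  # /etc/cron.d/backup -- staged rsync backups (illustrative only)
  # hourly snapshot, hard-linking unchanged files against the previous run
  0 * * * *   root  rsync -a --delete --link-dest=/backup/hourly.1 /home/ /backup/hourly.0/
  # daily and weekly copies; off-site rotation would be handled separately
  30 3 * * *  root  rsync -a --delete /home/ /backup/daily.0/
  45 4 * * 0  root  rsync -a --delete /backup/daily.0/ /backup/weekly.0/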

-- 
Matthias Andree


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Esben Stien
Sander [EMAIL PROTECTED] writes:

 I did recover data that way several times back when I didn't do backups.
 The problem is though that one file does not occupy one spot on the
 harddisk. It might be spread all over the place. While the data might
 still be there (on an idle disk), you miss the much needed pointers to
 the data.

I'm not talking about grep here. I'm talking about a reiser tool to
scan the partition for metadata and use this to recover the file by
putting it back in the tree.

Methods like grep are an absolute last resort if, like in this case,
there is no tool that does the job.

-- 
Esben Stien is [EMAIL PROTECTED]
http://www.esben-stien.name
irc://irc.esben-stien.name/%23contact
[sip|iax]:[EMAIL PROTECTED]


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Hans Reiser
Esben Stien wrote:
I really don't like that there is no undelete feature in reiserfs -
it's not planned for reiserfs-4 either.
Blame Linus for that.  I would put it in, but he thinks it belongs in 
userspace.  I may still put it in someday and just not tell him.;-)  
I'll just let the users know about it and not him.;-)

I see desperate users all the
time trying to get back what they mistakenly removed. It shouldn't be
hard with reiserfs either. There should be a selection of what files
to restore, so we can avoid the file merging problem with rebuilding
the tree from leaf nodes.
 




Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Hans Reiser
pcg( Marc)@goof(A.).(Lehmann )com wrote:
which also doesn't cope with millions of files
in a dir, as opposed to the original claims by its developers).
 

The generation number thing just isn't as good as teaching the tree to 
cope with duplicate keys.  Kudos to Nikita on that one.

 




Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Hans Reiser
Burnes, James wrote:
We got pretty good tools to restore from a hd with bad blocks.
dd it, loop it, fsck it.
   

Heh, heh.  That won't help you if a circuit board, spindle, read head or
other mechanism fails.  Then you'd better hope the data wasn't *that*
valuable or you know a good platter recovery shop.
Backups are cheap.  Recovery is very expensive.  Ask the CIA ;-)
It's Hans and friends' responsibility to patch whatever looks like a
serious problem.  It's the user's responsibility to protect valuable
data.
File system reliability after that is really a question of operational
downtime expenses etc.
If people don't trouble themselves to perform backups:
1. They have accepted that risk or
2. They are too junior to have ever lost a couple months worth of work
because they were too lazy or inexperienced to perform backups.
3. They don't have any work that can't be rebuilt in a matter of hours
(see #1).
(BTW: If Hans is a little tired of working on Reiser3 it's probably
because he is currently stressed out making last minute tweaks on
Reiser4 and managing his team.
Cut him some slack.  Email conversations don't show a number of things
we take for granted, like the fact that the person we're talking to
looks really tired etc.  Unlike ext3, XFS and JFS, Reiser isn't funded
by someone with huge pockets.)
Jim Burnes


 

It seems a bit more shows than I would like.
My wife just got some 75-year-old female judge, filling in during the 
Christmas holidays for the regular judge, to take my kids away from me, 
without me getting a chance to respond to the allegations until a 
hearing 25 days from now, because I show my son movies about World 
War II, and my exposing him to violent computer games is alleged to 
induce traumatic stress disorder which somehow has never been reproduced 
in front of me.  Only in California.

I shall be alleging that little boys take to violent computer games like 
monkeys take to trees, and challenging anyone to reproduce in front of 
me evidence to the contrary.


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Adrian Ulrich

 A flaw in the filesystem, in my opinion, is equivalent to the space
 ship crashing and all crew members die.

No, it isn't..

A dying filesystem is a bad thing.. But it's just a filesystem..




Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Stefan Traby
On Thu, Dec 30, 2004 at 09:57:29PM +0100, Adrian Ulrich wrote:
 
  A flaw in the filesystem, in my opinion, is equivalent to the space
  ship crashing and all crew members die.
 
 No, it isn't..
 
 A dying filesystem is a bad thing.. But it's just a filesystem..

... that causes the space ship to crash.

Please think before you post.

-- 

  ciao - 
Stefan

  GNU's Not Unix  --IIS Isn't Secure  


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread brianmas
Quoting Stefan Traby [EMAIL PROTECTED]:

 On Thu, Dec 30, 2004 at 09:57:29PM +0100, Adrian Ulrich wrote:
 
   A flaw in the filesystem, in my opinion, is equivalent to the space
   ship crashing and all crew members die.
 
  No, it isn't..
 
  A dying filesystem is a bad thing.. But it's just a filesystem..

 ... that causes the space ship to crash.

 Please think before you post.

He's talking about the value of human life vs. some data.

--




Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Esben Stien
Hans Reiser [EMAIL PROTECTED] writes:

 I may still put it in someday and just not tell him.;-)
 I'll just let the users know about it and not him.;-)

Hehe, nice;)

-- 
Esben Stien is [EMAIL PROTECTED]
http://www.esben-stien.name
irc://irc.esben-stien.name/%23contact
[sip|iax]:[EMAIL PROTECTED]


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Matthias Andree
Spam [EMAIL PROTECTED] writes:

   In any case. Undelete has been around for ages on many platforms. It IS a
   useful feature. Accidents CAN happen for many reasons and in some
   cases you may need to recover data.

   Besides, a deletion does not fully remove the data, but just unlinks
   it. In Reiser where there is tailing etc for small files this can be
   a problem. Either the little file might not be able to be recovered
   (shouldn't the data still exist, even if it is tailed), or the user
   needs to use a non-tailing policy?

A working undelete can either hog disk space or die the moment some
large write comes in. And if you're at that point, make it a versioning
file system - but then don't complain about space efficiency.

   Well, overwritten data is not so easy to get back. But from what I
   understand, in Linux many applications actually write
   another file and then unlink the old file? If that is the case then
   it may even be possible to get back some overwritten files!

I see enough applications that just overwrite an output file. 

This whole discussion doesn't belong here until someone talks about
implementing a whole versioning system for reiser4.

-- 
Matthias Andree


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Spam


 Spam [EMAIL PROTECTED] writes:

   In any case. Undelete has been around for ages on many platforms. It IS a
   useful feature. Accidents CAN happen for many reasons and in some
   cases you may need to recover data.

   Besides, a deletion does not fully remove the data, but just unlinks
   it. In Reiser where there is tailing etc for small files this can be
   a problem. Either the little file might not be able to be recovered
   (shouldn't the data still exist, even if it is tailed), or the user
   needs to use a non-tailing policy?

 A working undelete can either hog disk space or die the moment some
 large write comes in. And if you're at that point, make it a versioning
 file system - but then don't complain about space efficiency.

  Yes, when data is overwritten then it is overwritten. The longer the
  user waits to try to recover data, the more risk of this happening.
  Undelete has existed for a long time and people know it is not a
  foolproof thing. I do not think anyone asked for automatic backup
  features, but just tools that can try to recover accidental deletions.

   Well, overwritten data is not so easy to get back. But from what I
   understand, in Linux many applications actually write
   another file and then unlink the old file? If that is the case then
   it may even be possible to get back some overwritten files!

 I see enough applications that just overwrite an output file. 

  Yes, this was an example only.

  In any case, if there were tools that could scan and recover, even
  partly, deleted files, then I would welcome them. I am sure lots of
  other people would too.

  It is very easy to say you need backups of your data, that you need
  versioning filesystems etc. But not all of this is possible for
  everyone. Just take a laptop as an example. Making backups is not so
  easy to do frequently - especially not when traveling.

  Sure, if you run in a corporate environment you can do shadow
  copying or use other versioning systems and mount that over the
  network. But for the normal home or small business users this is
  not really what you can expect...

 This whole discussion doesn't belong here until someone talks about
 implementing a whole versioning system for reiser4.

  I think someone said they wanted undelete recovery features in
  reiser4 - which was what started this discussion?


-- 



Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread David Masover
Hans Reiser wrote:
Esben Stien wrote:
I really don't like that there is no undelete feature in reiserfs -
it's not planned for reiserfs-4 either.
Blame Linus for that.  I would put it in, but he thinks it belongs in 
userspace.  I may still put it in someday and just not tell him.;-)  
I'll just let the users know about it and not him.;-)
He has some good points, though.  If you're going to have a kernel, you 
want to keep it small, put stuff in only if it needs to be there.  And 
how much speed do we lose by putting stuff in userspace, if we do it 
right?  C'mon, if Doom 3 can run in userspace, surely some sort of trash 
can / recycle bin can, right?

Oh wait... Gnome/KDE already do that.


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread Hans Reiser
David Masover wrote:
Hans Reiser wrote:
Esben Stien wrote:
I really don't like that there is no undelete feature in reiserfs -
it's not planned for reiserfs-4 either.
Blame Linus for that.  I would put it in, but he thinks it belongs in 
userspace.  I may still put it in someday and just not tell him.;-)  
I'll just let the users know about it and not him.;-)

He has some good points, though.  If you're going to have a kernel, 
you want to keep it small, put stuff in only if it needs to be there.  
And how much speed do we lose by putting stuff in userspace, if we do 
it right?  C'mon, if Doom 3 can run in userspace, surely some sort of 
trash can / recycle bin can, right?

Oh wait... Gnome/KDE already do that.

the problem being that everyone uses rm, not the gnome/kde trashcan;
literal copying of Apple doesn't work for unix because we use shells
most of the time.
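
Which is also why a userspace undelete more or less has to sit at the shell
level; a crude sketch, where the trash location and function name are made
up, and anything that calls /bin/rm directly still bypasses it:

  # move things into a per-user trash directory instead of unlinking them;
  # put this in the shell startup file of the users you want to protect
  trash() {
      mkdir -p "$HOME/.trash" && mv -- "$@" "$HOME/.trash/"
  }
  alias rm=trash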


Re: Congratulations! we have got hash function screwed up

2004-12-30 Thread David Masover
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hans Reiser wrote:
| David Masover wrote:
|
| Hans Reiser wrote:
|
| Esben Stien wrote:
|
|
| I really don't like that there is no undelete feature in reiserfs -
| it's not planned for reiserfs-4 either.
|
| Blame Linus for that.  I would put it in, but he thinks it belongs in
| userspace.  I may still put it in someday and just not tell him.;-)
| I'll just let the users know about it and not him.;-)
|
|
|
| He has some good points, though.  If you're going to have a kernel,
| you want to keep it small, put stuff in only if it needs to be there.
| And how much speed do we lose by putting stuff in userspace, if we do
| it right?  C'mon, if Doom 3 can run in userspace, surely some sort of
| trash can / recycle bin can, right?
|
| Oh wait... Gnome/KDE already do that.
|
|
|
| the problem being that everyone uses rm, not the gnome/kde trashcan;
| literal copying of Apple doesn't work for unix because we use shells
| most of the time.
"we" being people who should be able to back things up and avoid really
stupid mistakes, and people who care about performance.
But you are right, this has to be lower down -- maybe glibc, maybe vfs,
maybe fs views -- because there's no standardization once you get to
things like gnome.
My point about Doom 3 is that I like the idea of a microkernel, if it
could be done fast enough.  Games talk to kernel space and hardware very
quickly, but it seems kernel-userspace communication isn't fast enough
for things like filesystems.
Maybe someone can engineer a way around that?


Re: Congratulations! we have got hash function screwed up

2004-12-29 Thread Stefan Traby
On Tue, Dec 28, 2004 at 11:12:18PM +0100,  Marc A. Lehmann  wrote:
 
ReiserFS: hdg2: warning: reiserfs_add_entry: Congratulations! we have got 
 hash function screwed up
 
 Sure sounds like a filesystem bug to me. Is this 2.6.10-rc3-specific or a
 generic bug in handling hash collisions?

I can confirm that with 2.6.10.
It is independent of hash function (r5, rupasov, tea) used.

Here a script that works independent of hash (feel free to forward it to
bugtraq - it's a showstopper bug):

#! /bin/sh
# reiserfs v3 denial of  creation attack (hash collisions)
# insider: EHASHCOLLISION EAGAIN!
#
ATTACK=
R5=
03435823 22067556 40799289 47672563 79051844 97783577 
000119162858 000125037032 000137894590 000156516313 000169273871 000175148046 
000193879879 000209384885 000228006608 000246738340 000252611615 000259495899 
000278117621 000296849354 000305480087 000311354361 000318228635 000342833642 
000361465375 000374212923 000392944656 000401576389 000414323937 000439929944 
000445803118 000464434950 000470309125 000483066683 000504545964 000523177697 
000560530152 000567404427 000573288700 000598893708 000600641166 000607515440 
000626147173 000632020447 000657626454 000676258187 000689005735 000729116749 
000747848481 000753721756 000779227762 000797959495 000806590218 000819338776 
000843943783 000862575506 000888080512 000921308252 000934065800 000946913259 
000952797533 000971419266 000984176814 001008625581 001027257304 001033130588 
001051862310 001058736595 001070494043 001077368318 001117479331 001136100064 
001148958612 001154831897 001160706070 001186211078 001213574633 001226322091 
001257801372 001263685647 001289190653 001295064928 001303796660 001316544109 
001322418393 001335175941 001366655122 001385286955 001406766136 001425397969 
001431271143 001462750424 001481382157 001502861438 001509735712 001521493170 
001528367445 001534240719 001552972451 001559846726 001578478459 001611705198 
001618589472 001661816201 001687321209 001714684774 00172743 001758911503 
001764795788 001777543236 001783417510 001823528524 001849033530 001867765263 
001873639538 001892270270 001907876277 001932381284 001945129832 001951003007 
001963860565 001976609013 001982492298 002012815329 002019699603 002031447061 
002038320336 002050078894 002062926342 002081558075

RUPASOV=
 16777216 33554432 50331648 67108864 83886080 
000100663296 000117440512 000134217728 000150994944 000167772160 000184549376 
000201326592 000218103808 000234881024 000251658240 000268435456 000285212672 
000301989888 000318767104 000335544320 000352321536 000369098752 000385875968 
000402653184 000419430400 000436207616 000452984832 000469762048 000486539264 
000503316480 000520093696 000536870912 000553648128 000570425344 000587202560 
000603979776 000620756992 000637534208 000654311424 000671088640 000687865856 
000704643072 000721420288 000738197504 000754974720 000771751936 000788529152 
000805306368 000822083584 000838860800 000855638016 000872415232 000889192448 
000905969664 000922746880 000939524096 000956301312 000973078528 000989855744 
001006632960 001023410176 001040187392 001056964608 001073741824 001090519040 
001107296256 001124073472 001140850688 001157627904 001174405120 001191182336 
001207959552 001224736768 001241513984 001258291200 001275068416 001291845632 
001308622848 001325400064 001342177280 001358954496 001375731712 001392508928 
001409286144 001426063360 001442840576 001459617792 001476395008 001493172224 
001509949440 001526726656 001543503872 001560281088 001577058304 001593835520 
001610612736 001627389952 001644167168 001660944384 001677721600 001694498816 
001711276032 001728053248 001744830464 001761607680 001778384896 001795162112 
001811939328 001828716544 001845493760 001862270976 001879048192 001895825408 
001912602624 001929379840 001946157056 001962934272 001979711488 001996488704 
002013265920 002030043136 002046820352 002063597568 002080374784 002097152000 
002113929216 002130706432 002147483648 002164260864

TEA=
04464160 41804440 80240100 91329029 000104181015 000113725885 
000126527488 000140392446 000158910938 000228997445 000230956744 000265118409 
000278488948 000294393023 000295253722 000300066283 000302103786 000330187358 
000345002932 000351581026 000363320013 000366148241 000398298703 000411084407 
000430270876 000450889104 000457353842 000459620112 000464658163 000465039241 
000472966466 000479773493 000485638992 000490029225 000519300138 000523222490 
000543871739 000550161091 000614863063 000628859470 000658101403 000705881242 
000707428465 000709541412 000710835913 000712765852 000747815906 000751391777 
000758206682 000759473821 000761493018 000807141251 000819925766 000822342439 
000844968698 000846939644 000856679997 000862332598 000897273990 000903164600 
000959685453 000966591643 000975714799 001026819859 001030872126 001052008464 
001101513177 00878931 001114914486 001126417564 

Re: Congratulations! we have got hash function screwed up

2004-12-29 Thread pcg
On Wed, Dec 29, 2004 at 07:55:29PM +0100, Stefan Traby [EMAIL PROTECTED] 
wrote:
 On Tue, Dec 28, 2004 at 11:12:18PM +0100,  Marc A. Lehmann  wrote:
  
 ReiserFS: hdg2: warning: reiserfs_add_entry: Congratulations! we have 
  got hash function screwed up
  
  Sure sounds like a filesystem bug to me. Is this 2.6.10-rc3-specific or a
  generic bug in handling hash collisions?
 
 I can confirm that with 2.6.10.
 It is independent of hash function (r5, rupasov, tea) used.

Interesting, I would have hoped it's not so easy to generate
collisions. Now that a debian package creates collisions, this issue has
become very real.

Another note: It seems that the error returned is wrong. I would expect
ENOSPC if reiserfs runs out of (key-)space, not EBUSY or whatever it
returns.
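
A quick, purely hypothetical way to see which errno actually comes back: in
a reiserfs v3 directory that is about to hit the collision limit, create
names from the R5 list in the script quoted above and watch the first
failure:

  # keep creating colliding names until a create fails, then show what the
  # failing create actually reported
  for n in $R5; do
      touch "$n" 2>/tmp/create.err || { echo "create failed at $n:"; cat /tmp/create.err; break; }
  done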

-- 
The choice of a
  -==- _GNU_
  ==-- _   generation Marc Lehmann
  ---==---(_)__  __   __  [EMAIL PROTECTED]
  --==---/ / _ \/ // /\ \/ /  http://schmorp.de/
  -=/_/_//_/\_,_/ /_/\_\  XX11-RIPE


Re: Congratulations! we have got hash function screwed up

2004-12-29 Thread Hans Reiser
Stefan Traby wrote:

Here a script that works independent of hash (feel free to forward it to
bugtraq - it's a showstopper bug):
 

It is not independent of hash, it is hardcoded to be hash specific.  It 
is not a showstopper bug --- almost nobody cared about it for the last 5 
years.  If you don't accept that quality of service condition on 
filename creation, use reiser4 or ext3. 


Re: Congratulations! we have got hash function screwed up

2004-12-29 Thread pcg
On Wed, Dec 29, 2004 at 01:05:38PM -0800, Hans Reiser [EMAIL PROTECTED] wrote:
 Stefan Traby wrote:
 
 
 
 Here a script that works independent of hash (feel free to forward it to
 bugtraq - it's a showstopper bug):
  
 
 is not a showstopper bug

If it keeps debian from being usable on reiserfs (mind you, xfonts-75 and
xfonts-100 are not unimportant packages), I'd call this a showstopper
indeed *g*.

 --- almost nobody cared about it for the last 5 
 years.

This is a lame excuse for a bug - after all, you promoted reiserfs as being
capable of storing many files in one directory instead of having to rely on
directory hierarchies for e.g. squid and other apps. But exactly that is not
possible with reiserfs, as too many files in one directory == collisions.

Also, it's a lie that nobody cared about this, after all, there had been
earlier reports.

And last but not least, most apps do not create many files in one directory by
default, for compatibility with other filesystems, where this is too slow.

 If you don't accept that quality of service condition on filename
 creation,

Again, this is a lame excuse for a bug. First you declare some features of
your filesystem; later, when it turns out they aren't being delivered,
you act as if this were a known condition.

(Even if it were ok to fail file creation, the error generated is still
wrong. It is a bug, no matter how you try to twist it).

 use reiser4 or ext3. 

reiser4 is, of course, still far from being stable enough for such uses.

-- 
The choice of a
  -==- _GNU_
  ==-- _   generation Marc Lehmann
  ---==---(_)__  __   __  [EMAIL PROTECTED]
  --==---/ / _ \/ // /\ \/ /  http://schmorp.de/
  -=/_/_//_/\_,_/ /_/\_\  XX11-RIPE


Re: Congratulations! we have got hash function screwed up

2004-12-29 Thread Christian Iversen
On Wednesday 29 December 2004 22:43, [EMAIL PROTECTED] ( Marc) (A.) (Lehmann ) 
wrote:
 On Wed, Dec 29, 2004 at 01:05:38PM -0800, Hans Reiser [EMAIL PROTECTED] 
wrote:
  Stefan Traby wrote:
  Here a script that works independent of hash (feel free to forward it to
  bugtraq - it's a showstopper bug):
 
  is not a showstopper bug

 If it keeps debian from being usable on reiserfs (mind you, xfonts-75 and
 xfonts-100 are not unimportant packages), I'd call this a showstopper
 indeed *g*.

Under what conditions does this occur? I have 5 installs of debian linux on 
reiserfs here at home, and I have never had such problems. Is it only in 
directories with thousands of other files?

-- 
Regards,
Christian Iversen


Re: Congratulations! we have got hash function screwed up

2004-12-29 Thread pcg
On Wed, Dec 29, 2004 at 10:46:46PM +0100, Christian Iversen [EMAIL PROTECTED] 
wrote:
  On Wed, Dec 29, 2004 at 01:05:38PM -0800, Hans Reiser [EMAIL PROTECTED] 
 wrote:
   Stefan Traby wrote:
   Here a script that works independent of hash (feel free to forward it to
   bugtraq - it's a showstopper bug):
  
   is not a showstopper bug
 
  If it keeps debian from being usable on reiserfs (mind you, xfonts-75 and
  xfonts-100 are not unimportant packages), I'd call this a showstopper
  indeed *g*.
 
 Under what conditions does this occur?

Under the conditions that I already wrote about: when upgrading an
existing xfonts-75dpi package. Installing works fine, upgrading does not,
presumably because dpkg creates a backup copy of every file upgraded
first, which then exceeds some internal reiserfs limit.

 I have 5 installs of debian linux on 
 reiserfs here at home, and I have never had such problems. Is it only in 
 directories with thousands of other files?

As I understand the bug, it happens when too many filenames in the same
directory happen to hash to the same value - reiserfs requires the hashes of
filenames to be unique enough, otherwise it will not be able to create
more files with the same hashed name.

As the examples show, getting collisions is pretty straightforward and
easy, even with the tea hash (I can faintly remember Hans Reiser claiming
that this bug had been solved some years ago, and indeed this is the first
time it really bites me, albeit with a very real example).

As such, reiserfs v3 is not suitable for server operations, where such
irregular behaviour simply must not occur - consider this happening when
installing a kernel package, leaving your system in a non-bootable state
or so.

My specific case might depend on other packages - as I maintain some
i18n'ed software I have lots of extra font packages installed, although
I doubt many of them will end up in the 75dpi directory, but it might
be that my 75dpi and 100dpi dirs are somewhat crowded - they both
contain 1888 files with similar names (and the standard hash, r5, is very
susceptible to similar names leading to similar hashes).
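
For the curious, r5 boils down to a couple of shifts and a multiply by 11
per character; the sketch below is a from-memory rendering of the v3 hash,
done in shell purely for illustration, with made-up font names, and it shows
why families of similar names produce clustered hash values:

  # approximate r5: for each character c, a = (a + (c << 4) + (c >> 4)) * 11,
  # truncated to 32 bits; closely related names end up with close hash values
  r5() {
      name=$1 a=0
      while [ -n "$name" ]; do
          first=${name%"${name#?}"}        # first character of the remainder
          c=$(printf '%d' "'$first")       # its character code
          a=$(( ((a + (c << 4) + (c >> 4)) * 11) & 0xffffffff ))
          name=${name#?}
      done
      printf '%-10s  %s\n' "$a" "$1"
  }
  r5 courB08-ISO8859-1.pcf.gz
  r5 courB08-ISO8859-2.pcf.gz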

-- 
The choice of a
  -==- _GNU_
  ==-- _   generation Marc Lehmann
  ---==---(_)__  __   __  [EMAIL PROTECTED]
  --==---/ / _ \/ // /\ \/ /  http://schmorp.de/
  -=/_/_//_/\_,_/ /_/\_\  XX11-RIPE