Re: 13000Gig partition badblock check is the same -- do a reiserfsck again ? -> ran badblocks

2005-06-02 Thread Matthias Barremaecker

Hi,

I've run
  badblocks -v -b 4096 -o badblocks2 /dev/hyperspace/hyperspace1

It said:
  Checking blocks 0 to 322256896
  Checking for bad blocks (read-only test): done
  Pass completed, 312 bad blocks found.

The file contains:
  36434712
  36434756
  36434757
  36448896
  36448916
  38853584
  38853616
  38854632
  38854635
  38854636

Those aren't 312 numbers... or am I seeing it wrong?

thanx!!


Dan Oglesby wrote:

[EMAIL PROTECTED] wrote:


On Thu, 02 Jun 2005 09:28:50 CDT, Dan Oglesby said:


latest versions.  Took two days to run, but it completed, and I ended 
up only losing 2 files out of over 1.1 million files on a 1TB RAID-5 
array.  That's not too bad, considering how many times the machine 
went up and down due to bad power in the building.




Buy a UPS. Now.  Even if it's just a big battery that will only keep you
running for 10 mins - at least that will give you enough time to do a clean
shutdown -h rather than get stuff trashed.

If you can't get money for it, just point at the lost-productivity costs
the *next* time the terabyte takes 2 days to recover.. and remind the boss that
you could be down for 2 days every time the lights flicker.. ;)



That's just it...  Bad power in the building was due to the building's 
UPS (big sucker, attached to a Cummins diesel generator) failing. That's 
been fixed, so the power in the building is OK again.


Previous to the UPS going down, this building hadn't lost power in many 
years.  We don't have to deal with this kind of problem very often.  ;-)


I've been using the array, and it's going great.  ReiserFS proves to be 
robust, as long as the hardware, power, and software on the system are 
in good working order.


--Dan




--
  Matthias Barremaecker, MH.BE - Arta nv
  0495 30 31 72

  http://mh.be/

  SERVER HOUSING per 1U € 50 per month
20Gig traffic, 100Mbit network
Data center in Antwerp.



Re: File as a directory - VFS Changes

2005-06-02 Thread Faraz Ahmed
Hi;
  Why is this discussion revolving around relational
databases? The attributes of the files, and the files themselves, if they were
to be modelled for querying in a relational database, would really s**k.  The
attribute info is neither structured nor unstructured; it is
SEMI-STRUCTURED. Executing a Structured Query Language (SQL) over semi-structured
data would result in
-> harder modelling (almost a waste of effort),
-> complex querying (an elegant system of no use because of the amount of joins
that would result from querying, if you somehow model semi-structured data in
some structured data model).
The best option to start with would be the best COT. I feel
we should look at Lore, a Stanford project, for hints about modelling our
"whatever".

Regards
Faraz :)


- Original Message - 
From: "Nikita Danilov" <[EMAIL PROTECTED]>
To: "Jonathan Briggs" <[EMAIL PROTECTED]>
Cc: "Hans Reiser" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>;
"Alexander G. M. Smith" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>;
; <[EMAIL PROTECTED]>; "Nate Diller"
<[EMAIL PROTECTED]>
Sent: Thursday, June 02, 2005 4:54 PM
Subject: Re: File as a directory - VFS Changes


> Jonathan Briggs writes:
>  > On Thu, 2005-06-02 at 14:38 +0400, Nikita Danilov wrote:
>  > > Jonathan Briggs writes:
>  > >  > On Wed, 2005-06-01 at 21:27 +0400, Nikita Danilov wrote:
>  > >  > [snip]
>  > >  > > Frankly speaking, I suspect that name-as-attribute is going to limit
>  > >  > > usability of file system significantly.
>  >
>  > Usability as in features?  Or usability as in performance?
>
> Usability as in ease of use.
>
> [...]
>
>  >
>  > An index is an arrangement of information about the indexed items.  The
>  > index contents *belong* to the items.  An index by name?  That name
>  > belongs to the item.  An index by date?  Those dates are properties of
>
> In the flat world of relational databases, maybe. But almost nowhere else
> is an improper name an attribute of its signified: a variable is not an
> attribute of the object it points to, a URL is not an attribute of the web
> page, a block number is not an attribute of the data stored in that block
> on the disk, etc.
>
> [...]
>
>  >
>  > In the same way that you can descend a directory tree and copy the names
>  > found into each item, you can check each item and copy the names found
>  > into a directory tree.
>
> Except that, as was already discussed, the resulting directory tree is _bound_
> to be inconsistent with "real names".
>
>  >
>  > >
>  > > Indices cannot be reduced to real names (as rename is impossible to
>  > > implement efficiently), but real names can very well be reduced to
>  > > indices as exemplified by each and every UNIX file system out there.
>  > >
>  > > So, the question is: what do real names buy one that indices do not?
>  >
>  > By storing the names in the items, cycles become solvable because you
>  > can always look at the current directory's name(s) to see where you
>  > really are.  Every name becomes absolutely connected to the top of the
>  > namespace instead of depending on a parent pointer that may not ever
>  > connect to the top.
>
> But cycles are "solvable" in current file systems too: they simply do
> not exist there.
>
>  >
>  > If speeding up rename were very important, you could replace every pathname
>  > component with an indirect reference instead of using simple strings.
>  > Changing directory levels is still difficult.
>
> It is not only speed that will be extremely hard to achieve in that
> design; atomicity (in the face of possible crash during rename), and
> concurrency control look problematic too.
>
>  >
>  > -- 
>  > Jonathan Briggs <[EMAIL PROTECTED]>
>  > eSoft, Inc.
>
> Nikita.
>



Performance Impacts of Graph Cycles due to Multiple Parents

2005-06-02 Thread Alexander G. M. Smith
Nikita Danilov wrote on Thu, 2 Jun 2005 14:03:54 +0400 in the
"Re: File as a directory - VFS Changes" thread:
> This is a typical operation for desktop usage, I agree. But the desktop is
> not interesting. It poses no technical difficulty to implement
> whatever indexing structure when your dataset is but a few dozen
> thousand objects [1].

Getting people to use something different does pose an interesting
social engineering problem :-).  I wonder how tough it was to move from
block records (descended from 80-byte punched cards) to streams of bytes.
I'd expect people complained of the inefficiencies of reading things
byte by byte, the uncertainty of where a record boundary was, and
possibly other limitations.  Did the complexity of having to put things
into directories, rather than just unassociated card decks, worry them?

> What _is_ interesting is to make the file system scalable. A solution
> that fails to move a directory simply because the sub-tree rooted at it
> is large is not scalable.

But that scalability isn't all that important; I don't see large systems
making use of a chaotic collection of cross links between items.  Since
they're so large, they're usually uniform and thus simple collections of
items, such as a series of scientific experiment observations over time.
Well, unless someone tries to do an AI knowledge representation as linked
files or something similarly weird.  Then it gets challenging.

There is some scalability in that operations going on in one subgraph
of the file system don't depend on things happening in the rest of it.

> That is, how will the atomicity guarantees of rename be preserved? Note that
> many applications, like some mail servers, crucially depend on rename
> atomicity to implement their transaction mini-engines.

Same as before.  Grab locks on all the children affected before doing
any work.  If there's a deadlock, or memory shortage, just report an
error back to the caller and don't change anything.  For the typical
mail server use, don't they just rename one file at a time?  If they
are moving whole large directories around, then it would be a problem.
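
For illustration, here is a minimal sketch of the grab-all-locks-or-abort
approach described above; the types and the trylock protocol are invented
(plain pthreads standing in for whatever locking a real filesystem would use):

    #include <errno.h>
    #include <pthread.h>
    #include <stddef.h>

    struct fs_object {
        pthread_mutex_t lock;
        /* ...names, parents, data... */
    };

    /* Lock every affected child up front; if any lock would block,
     * back out completely and report an error to the caller, leaving
     * the namespace unchanged -- no deadlock, no partial rename. */
    int move_subtree(struct fs_object **children, size_t n)
    {
        size_t i;

        for (i = 0; i < n; i++) {
            if (pthread_mutex_trylock(&children[i]->lock) != 0) {
                while (i-- > 0)
                    pthread_mutex_unlock(&children[i]->lock);
                return -EDEADLK;   /* caller reports the error */
            }
        }

        /* ...perform the actual move here... */

        for (i = 0; i < n; i++)
            pthread_mutex_unlock(&children[i]->lock);
        return 0;
    }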

Or require that the user move all the child files individually to avoid
dealing with deadlocks and long operations.  Simplistic, you say?  As
simple as the kludge of doing "rm -r DirectoryName" to delete a
directory and all its contents just because a classical file system
can't handle the locking of large things.

> It happens all the time on my workstation, when I move Linux source
> trees around.

Good point.  Traversing all the children when you're moving the directory
would be expensive, and isn't needed if the children don't cause cycles.
A simple optimization for ordinary files (not cross-linked) is to have a
flag saying "I am a tree" for every file system object.  Then when doing
a move or delete, the children of an object marked with that flag don't
need to be examined.  So for ordinary files, we can go back to getting
classical performance.  Maintaining the flag doesn't cost much if it
isn't changing, even less if directories (which means everything in a
directory-is-file system) keep a count of the number of tree children they
have.  But when something does acquire or lose an extra parent, all its
parent directories have to be updated, possibly bubbling the change up
to the root.
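
As a rough sketch of that bookkeeping (all types and field names here are
hypothetical, not from any directory-is-file implementation):

    #include <stdbool.h>

    struct fso {
        struct fso *primary_parent;  /* NULL at the root               */
        unsigned nparents;           /* how many names this object has */
        unsigned nchildren;          /* total children                 */
        unsigned tree_children;      /* children whose flag is set     */
        bool is_tree;                /* no descendant is multi-parent  */
    };

    /* After an object gains or loses a parent, recompute its flag and
     * bubble the change toward the root only while it keeps flipping. */
    static void update_tree_flag(struct fso *n)
    {
        while (n) {
            bool now = n->nparents <= 1 && n->tree_children == n->nchildren;
            if (now == n->is_tree)
                break;               /* nothing changes further up */
            n->is_tree = now;
            if (n->primary_parent) {
                if (now)
                    n->primary_parent->tree_children++;
                else
                    n->primary_parent->tree_children--;
            }
            n = n->primary_parent;
        }
    }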

- Alex


Re: File as a directory - VFS Changes

2005-06-02 Thread Nikita Danilov
Jonathan Briggs writes:
 > On Thu, 2005-06-02 at 14:38 +0400, Nikita Danilov wrote:
 > > Jonathan Briggs writes:
 > >  > On Wed, 2005-06-01 at 21:27 +0400, Nikita Danilov wrote:
 > >  > [snip]
 > >  > > Frankly speaking, I suspect that name-as-attribute is going to limit
 > >  > > usability of file system significantly.
 > 
 > Usability as in features?  Or usability as in performance?

Usability as in ease of use.

[...]

 > 
 > An index is an arrangement of information about the indexed items.  The
 > index contents *belong* to the items.  An index by name?  That name
 > belongs to the item.  An index by date?  Those dates are properties of

In the flat world of relational databases, maybe. But almost nowhere else
is an improper name an attribute of its signified: a variable is not an
attribute of the object it points to, a URL is not an attribute of the web
page, a block number is not an attribute of the data stored in that block
on the disk, etc.

[...]

 > 
 > In the same way that you can descend a directory tree and copy the names
 > found into each item, you can check each item and copy the names found
 > into a directory tree.

Except that, as was already discussed, the resulting directory tree is _bound_
to be inconsistent with "real names".

 > 
 > > 
 > > Indices cannot be reduced to real names (as rename is impossible to
 > > implement efficiently), but real names can very well be reduced to
 > > indices as exemplified by each and every UNIX file system out there.
 > > 
 > > So, the question is: what do real names buy one that indices do not?
 > 
 > By storing the names in the items, cycles become solvable because you
 > can always look at the current directory's name(s) to see where you
 > really are.  Every name becomes absolutely connected to the top of the
 > namespace instead of depending on a parent pointer that may not ever
 > connect to the top.

But cycles are "solvable" in current file systems too: they simply do
not exist there.

 > 
 > If speeding up rename were very important, you could replace every pathname
 > component with an indirect reference instead of using simple strings.
 > Changing directory levels is still difficult.

It is not only speed that will be extremely hard to achieve in that
design; atomicity (in the face of possible crash during rename), and
concurrency control look problematic too.

 > 
 > -- 
 > Jonathan Briggs <[EMAIL PROTECTED]>
 > eSoft, Inc.

Nikita.


Re: 13000Gig partition badblock check is the same -- do a reiserfsck again ?

2005-06-02 Thread Dan Oglesby

[EMAIL PROTECTED] wrote:

On Thu, 02 Jun 2005 09:28:50 CDT, Dan Oglesby said:


latest versions.  Took two days to run, but it completed, and I ended up 
only losing 2 files out of over 1.1 million files on a 1TB RAID-5 
array.  That's not too bad, considering how many times the machine went 
up and down due to bad power in the building.



Buy a UPS. Now.  Even if it's just a big battery that will only keep you
running for 10 mins - at least that will give you enough time to do a clean
shutdown -h rather than get stuff trashed.

If you can't get money for it, just point at the lost-productivity costs
the *next* time the terabyte takes 2 days to recover.. and remind the boss that
you could be down for 2 days every time the lights flicker.. ;)


That's just it...  Bad power in the building was due to the building's 
UPS (big sucker, attached to a Cummins diesel generator) failing. 
That's been fixed, so the power in the building is OK again.


Previous to the UPS going down, this building hadn't lost power in many 
years.  We don't have to deal with this kind of problem very often.  ;-)


I've been using the array, and it's going great.  ReiserFS proves to be 
robust, as long as the hardware, power, and software on the system are 
in good working order.


--Dan






Re: 13000Gig partition badblock check is the same -- do a reiserfsck again ?

2005-06-02 Thread Valdis . Kletnieks
On Thu, 02 Jun 2005 09:28:50 CDT, Dan Oglesby said:

> latest versions.  Took two days to run, but it completed, and I ended up 
> only losing 2 files out of over 1.1 million files on a 1TB RAID-5 
> array.  That's not too bad, considering how many times the machine went 
> up and down due to bad power in the building.

Buy a UPS. Now.  Even if it's just a big battery that will only keep you
running for 10 mins - at least that will give you enough time to do a clean
shutdown -h rather than get stuff trashed.

If you can't get money for it, just point at the lost-productivity costs
the *next* time the terabyte takes 2 days to recover.. and remind the boss that
you could be down for 2 days every time the lights flicker.. ;)


pgp0bq8czXfZ1.pgp
Description: PGP signature


Re: File as a directory - VFS Changes

2005-06-02 Thread Jonathan Briggs
On Thu, 2005-06-02 at 14:38 +0400, Nikita Danilov wrote:
> Jonathan Briggs writes:
>  > On Wed, 2005-06-01 at 21:27 +0400, Nikita Danilov wrote:
>  > [snip]
>  > > Frankly speaking, I suspect that name-as-attribute is going to limit
>  > > usability of file system significantly.

Usability as in features?  Or usability as in performance?

>  > > 
>  > > Note, that in the "real world", only names from a quite limited class are
>  > > attributes of objects, viz. /proper names/ like "France", or "Jonathan
>  > > Briggs". Communication wouldn't get any far if only proper names were
>  > > allowed.
>  > > 
>  > > Nikita.
>  > 
>  > Bringing up /proper names/ from the real world agrees with my idea
>  > though! :-)
> 
> I don't understand why, if you are at liberty to design a new namespace model
> from scratch (it seems POSIX semantics are not binding in our case), you
> would faithfully replicate the deficiencies of natural languages.
> 
> It is a common trait in both science and engineering that when two flavors
> of the same functionality (real names vs. indices) arise, an attempt is
> made to reduce one of them to the other, simplifying the system as a
> result.

An index is an arrangement of information about the indexed items.  The
index contents *belong* to the items.  An index by name?  That name
belongs to the item.  An index by date?  Those dates are properties of
the item.  Anything that can be indexed about an item can be described
as a property of the item.

Only for efficiency reasons are index data not included with the item
data.

> 
> In our case, motivation to reduce one type of names to another is even
> more pressing, as these types are incompatible: in the presence of
> cycles or dynamic queries, namespace visible through the directory
> hierarchy is different from the namespace of real names.

Queries create indexes based on properties of the items.  This is no
different from directories, which are indexes based on names of the
items.

In the same way that you can descend a directory tree and copy the names
found into each item, you can check each item and copy the names found
into a directory tree.

> 
> Indices cannot be reduced to real names (as rename is impossible to
> implement efficiently), but real names can very well be reduced to
> indices as exemplified by each and every UNIX file system out there.
> 
> So, the question is: what do real names buy one that indices do not?

By storing the names in the items, cycles become solvable because you
can always look at the current directory's name(s) to see where you
really are.  Every name becomes absolutely connected to the top of the
namespace instead of depending on a parent pointer that may not ever
connect to the top.
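
As an illustrative sketch of this (every type and name below is invented),
an upward walk over the names stored in the items can rebuild the absolute
path, and notices when a cycle has cut the path to the root:

    #include <stdio.h>
    #include <string.h>

    struct item {
        const char *name;     /* name stored in the item itself */
        struct item *parent;  /* NULL at the namespace root     */
    };

    /* Print the absolute path of it, or report an orphaned cycle. */
    void where_am_i(struct item *it)
    {
        struct item *slow = it, *fast = it;
        char path[1024] = "";

        /* Floyd's cycle check on the parent chain. */
        while (fast && fast->parent) {
            slow = slow->parent;
            fast = fast->parent->parent;
            if (slow == fast) {
                puts("(cycle: no path to the top)");
                return;
            }
        }
        for (; it; it = it->parent) {
            char tmp[1024];
            snprintf(tmp, sizeof tmp, "/%s%s", it->name, path);
            strcpy(path, tmp);
        }
        puts(path);
    }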

If speeding up rename were very important, you could replace every pathname
component with an indirect reference instead of using simple strings.
Changing directory levels is still difficult.

-- 
Jonathan Briggs <[EMAIL PROTECTED]>
eSoft, Inc.


signature.asc
Description: This is a digitally signed message part


Re: File as a directory - VFS Changes

2005-06-02 Thread Hubert Chan
On Thu, 2 Jun 2005 13:11:05 +0400, Nikita Danilov <[EMAIL PROTECTED]> said:

> Hans Reiser writes:
>> What about if we have it that only the first name a directory is
>> created with counts towards its reference count, and that if the
>> directory is moved from its first name, the new name
>> becomes the one that counts towards the reference count?  A bit of a
>> hack, but would work.

> This means that a list of names has to be kept together with every
> object (to find out where the "true" reference has to be moved). And this
> makes rename of a directory problematic, as the lists of names of all
> directory children have to be updated.

Don't you just need to keep a pointer (inode number) to the parent
directory?  When you move a file, check if the parent inode number is
equal to the file's 'true parent' inode number, and if so, update the
'true parent' pointer.  And do a similar thing when you delete.
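
As a minimal sketch (the struct and field names are assumptions for
illustration, not reiser4 code), that bookkeeping is one field and one
comparison per rename:

    #include <sys/types.h>

    struct object {
        ino_t true_parent;  /* the one link that owns the refcount */
        unsigned nlink;     /* total number of names               */
    };

    /* One of obj's names moves from from_dir to to_dir: the "true
     * parent" pointer follows only if the moved name was the counting
     * one; unlink from a non-true parent would likewise leave it alone. */
    void object_renamed(struct object *obj, ino_t from_dir, ino_t to_dir)
    {
        if (obj->true_parent == from_dir)
            obj->true_parent = to_dir;
    }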

-- 
Hubert Chan <[EMAIL PROTECTED]> - http://www.uhoreg.ca/
PGP/GnuPG key: 1024D/124B61FA
Fingerprint: 96C5 012F 5F74 A5F7 1FF7  5291 AF29 C719 124B 61FA
Key available at wwwkeys.pgp.net.   Encrypted e-mail preferred.








Re: 13000Gig partition badblock check is the same -- do a reiserfsck again ?

2005-06-02 Thread Matthias Barremaecker

Hi Dan,

I'm using the Gentoo kernel:

Linux hyperspace 2.6.11-gentoo-r6 #6 SMP Tue Apr 19 23:40:58 CEST 2005 
i686 Pentium III (Coppermine) GenuineIntel GNU/Linux


Kind regards,

Matthias.


Dan Oglesby wrote:
What kind of kernel are you running?  I had to dump my RedHat kernel and 
use the latest kernel from http://www.kernel.org (2.4.30) for my system.


The problem I was having was due to the kernel being a bit old, and the 
hardware not handling bad blocks properly on an array.  Updating the 
software, drivers, firmware and BIOS on everything in my system allowed 
reiserfsck to work properly, and fix my filesystem.


--Dan

Matthias Barremaecker wrote:


Hi Dan,

I'm glad you have your data back and just lost 2 files. It gives me a 
bit of ... hope :))


The error I got was:

2 directory entries were hashed with not set hash.
 731305 directory entries were hashed with "r5" hash.
  "r5" hash is selected
 Flushing..finished
  Read blocks (but not data blocks) 313507947
  Leaves among those 439626
  - corrected leaves 46
  - leaves all contents of which could not be
 saved and deleted 56
  pointers in indirect items to wrong area 3997 (zeroed)
  Objectids found 731997

 Pass 1 (will try to insert 439570 leaves):
 ### Pass 1 ###
 Looking for allocable blocks .. finished
 0%20%40%pass1.c 212 balance_condition_2_fails
 balance_condition_2_fails: block 1288402, pointer 168: The left
 delimiting key [1412453424 1412453424 0x10001000 ??? (15)] of the block
 (170046457) is wrong,the item cannot be found


Which just doesn't make any sense to me.

I'm using version 3.6.19.

If you have any ideas ...

Thanx.

kind regards,


Matthias.




Dan Oglesby wrote:
 > Matthias Barremaecker wrote:
 >
 >> Hi,
 >>
 >> The first time, I did a bad block check, and fed that list to the
 >> reiserfsck --rebuild-tree.
 >>
 >> The reiserfsck FAILED in phase 2.
 >>
 >> Now I have done a bad block check again and the list is the same, so
 >> no new bad blocks have occurred.
 >>
 >> Is it sane to do the reiserfsck --rebuild-tree again, or will it fail
 >> again?
 >>
 >> thanx.
 >>
 >> kind regards,
 >>
 >> Matthias.
 >
 >
 >
 > What error did you get?  Same as before?
 >
 > FWIW, I was able to successfully recover my corrupted array!  Had to
 > update all hardware to latest BIOS/Firmware, all software and 
drivers to
 > latest versions.  Took two days to run, but it completed, and I ended up
 > only losing 2 files out of over 1.1 million files on a 1TB RAID-5
 > array.  That's not too bad, considering how many times the machine went
 > up and down due to bad power in the building.
 >
 > --Dan
 >
 >







--
  Matthias Barremaecker, MH.BE - Arta nv
  0495 30 31 72

  http://mh.be/

  SERVER HOUSING per 1U € 50 per month
20Gig traffic, 100Mbit network
Data center in Antwerp.



Re: 13000Gig partition badblock check is the same -- do a reiserfsck again ?

2005-06-02 Thread Dan Oglesby
What kind of kernel are you running?  I had to dump my RedHat kernel and 
use the latest kernel from http://www.kernel.org (2.4.30) for my system.


The problem I was having was due to the kernel being a bit old, and the 
hardware not handling bad blocks properly on an array.  Updating the 
software, drivers, firmware and BIOS on everything in my system allowed 
reiserfsck to work properly, and fix my filesystem.


--Dan

Matthias Barremaecker wrote:

Hi Dan,

I'm glad you have your data back and just lost 2 files. It gives me a 
bit of ... hope :))


The error I got was:

2 directory entries were hashed with not set hash.
 731305 directory entries were hashed with "r5" hash.
  "r5" hash is selected
 Flushing..finished
  Read blocks (but not data blocks) 313507947
  Leaves among those 439626
  - corrected leaves 46
  - leaves all contents of which could not be
 saved and deleted 56
  pointers in indirect items to wrong area 3997 (zeroed)
  Objectids found 731997

 Pass 1 (will try to insert 439570 leaves):
 ### Pass 1 ###
 Looking for allocable blocks .. finished
 0%20%40%pass1.c 212 balance_condition_2_fails
 balance_condition_2_fails: block 1288402, pointer 168: The left
 delimiting key [1412453424 1412453424 0x10001000 ??? (15)] of the block
 (170046457) is wrong,the item cannot be found


Which just doesn't make any sense to me.

I'm using version 3.6.19.

If you have any ideas ...

Thanx.

kind regards,


Matthias.




Dan Oglesby wrote:
 > Matthias Barremaecker wrote:
 >
 >> Hi,
 >>
 >> The first time, I did a bad block check, and fed that list to the
 >> reiserfsck --rebuild-tree.
 >>
 >> The reiserfsck FAILED in phase 2.
 >>
 >> Now I have done a bad block check again and the list is the same, so
 >> no new bad blocks have occurred.
 >>
 >> Is it sane to do the reiserfsck --rebuild-tree again, or will it fail
 >> again?
 >>
 >> thanx.
 >>
 >> kind regards,
 >>
 >> Matthias.
 >
 >
 >
 > What error did you get?  Same as before?
 >
 > FWIW, I was able to successfully recover my corrupted array!  Had to
 > update all hardware to latest BIOS/Firmware, all software and drivers to
 > latest versions.  Took two days to run, but it completed, and I ended up
 > only losing 2 files out of over 1.1 million files on a 1TB RAID-5
 > array.  That's not too bad, considering how many times the machine went
 > up and down due to bad power in the building.
 >
 > --Dan
 >
 >





Re: 13000Gig partition badblock check is the same -- do a reiserfsck again ?

2005-06-02 Thread Matthias Barremaecker

Hi Dan,

I'm glad you have your data back and just lost 2 files. It gives me a 
bit of ... hope :))


The error I got was:

2 directory entries were hashed with not set hash.
 731305 directory entries were hashed with "r5" hash.
  "r5" hash is selected
 Flushing..finished
  Read blocks (but not data blocks) 313507947
  Leaves among those 439626
  - corrected leaves 46
  - leaves all contents of which could not be
 saved and deleted 56
  pointers in indirect items to wrong area 3997 (zeroed)
  Objectids found 731997

 Pass 1 (will try to insert 439570 leaves):
 ### Pass 1 ###
 Looking for allocable blocks .. finished
 0%20%40%pass1.c 212 balance_condition_2_fails
 balance_condition_2_fails: block 1288402, pointer 168: The left
 delimiting key [1412453424 1412453424 0x10001000 ??? (15)] of the block
 (170046457) is wrong,the item cannot be found


Which just doesn't make any sense to me.

I'm using version 3.6.19.

If you have any ideas ...

Thanx.

kind regards,


Matthias.




Dan Oglesby wrote:
> Matthias Barremaecker wrote:
>
>> Hi,
>>
>> The first time, I did a bad block check, and fed that list to the
>> reiserfsck --rebuild-tree.
>>
>> The reiserfsck FAILED in phase 2.
>>
>> Now I have done a bad block check again and the list is the same, so
>> no new bad blocks have occurred.
>>
>> Is it sane to do the reiserfsck --rebuild-tree again, or will it fail
>> again?
>>
>> thanx.
>>
>> kind regards,
>>
>> Matthias.
>
>
>
> What error did you get?  Same as before?
>
> FWIW, I was able to successfully recover my corrupted array!  Had to
> update all hardware to latest BIOS/Firmware, all software and drivers to
> latest versions.  Took two days to run, but it completed, and I ended up
> only losing 2 files out of over 1.1 million files on a 1TB RAID-5
> array.  That's not too bad, considering how many times the machine went
> up and down due to bad power in the building.
>
> --Dan
>
>

--
  Matthias Barremaecker, MH.BE - Arta nv
  0495 30 31 72

  http://mh.be/

  SERVER HOUSING per 1U € 50 per month
20Gig traffic, 100Mbit network
Data center in Antwerp.





Re: 13000Gig partition badblock check is the same -- do a reiserfsck again ?

2005-06-02 Thread Dan Oglesby

Matthias Barremaecker wrote:


Hi,

The first time, I did a bad block check, and fed that list to the
reiserfsck --rebuild-tree.


The reiserfsck FAILED in phase 2.

Now I have done a bad block check again and the list is the same, so 
no new bad blocks have occurred.


Is it sane to do the reiserfsck --rebuild-tree again, or will it fail 
again?


thanx.

kind regards,

Matthias.



What error did you get?  Same as before?

FWIW, I was able to successfully recover my corrupted array!  Had to 
update all hardware to latest BIOS/Firmware, all software and drivers to 
latest versions.  Took two days to run, but it completed, and I ended up 
only losing 2 files out of over 1.1 million files on a 1TB RAID-5 
array.  That's not too bad, considering how many times the machine went 
up and down due to bad power in the building.


--Dan


13000Gig partition badblock check is the same -- do a reiserfsck again ?

2005-06-02 Thread Matthias Barremaecker

Hi,

The first time, I did a bad block check, and fed that list to the
reiserfsck --rebuild-tree.


The reiserfsck FAILED in phase 2.

Now I have done a bad block check again and the list is the same, so no 
new bad blocks have occurred.


Is it sane to do the reiserfsck --rebuild-tree again, or will it fail again?

thanx.

kind regards,

Matthias.
--
  Matthias Barremaecker, MH.BE - Arta nv
  0495 30 31 72

  http://mh.be/

  SERVER HOUSING per 1U € 50 per month
20Gig traffic, 100Mbit network
Data center in Antwerp.



RE: File as a directory - VFS Changes

2005-06-02 Thread Faraz Ahmed
 Hi Nikita;


 The problem of files not fitting the query of the smart folder is a
 serious one. We had implemented this same thing for our semantic
filesystem.

For example, if we create an MP3 file in a JPEG folder, it won't ever get
listed. This will fundamentally change the way users see your filesystem; users
expect to see the files in the folder they created them in. This itself should
be a default search criterion.
We almost solved this by having the "parentdirectory" as an attribute of the
file. All the smart folders have their query transparently modified to
"where type=jpg OR parentdirectory=thisdirectory". This makes the virtual
folder stuff work as an EXTENSION to the standard file/directory relationship
rather than as a REPLACEMENT.
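
A tiny invented illustration of that transparent rewrite (the query syntax
and the folder identifier are made up for the example):

    #include <stdio.h>

    int main(void)
    {
        const char *user_query = "type=jpg";       /* what the user wrote */
        const char *folder_id  = "thisdirectory";  /* hypothetical id     */
        char full[256];

        /* OR in the parent test so files created here always show up. */
        snprintf(full, sizeof full,
                 "where (%s) OR parentdirectory=%s", user_query, folder_id);
        puts(full);  /* where (type=jpg) OR parentdirectory=thisdirectory */
        return 0;
    }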

Personal experience says that users don't digest any change to the UNIX
filesystem model. Anything extra is OK, but replacements are BAD. Think of it:
you create a C file in a virtual folder for "h" files; the file won't get
listed (although it will exist). THEN WHAT??? The user has to search for it.
BAD; your whole fancy virtual directory USE CASE itself is lost and eventually
we end up solving nothing.



Other issues include this display-name stuff, etc. They are bad: what if
two files with the same display name get listed in the same virtual directory?
No point in creating a problem and then solving it. Good work though; we don't
want to get bogged down once WinFS is released.
Regards
Faraz.









Re: File as a directory - VFS Changes

2005-06-02 Thread Nikita Danilov
Jonathan Briggs writes:
 > On Wed, 2005-06-01 at 21:27 +0400, Nikita Danilov wrote:
 > [snip]
 > > Frankly speaking, I suspect that name-as-attribute is going to limit
 > > usability of file system significantly.
 > > 
 > > Note, that in the "real world", only names from a quite limited class are
 > > attributes of objects, viz. /proper names/ like "France", or "Jonathan
 > > Briggs". Communication wouldn't get any far if only proper names were
 > > allowed.
 > > 
 > > Nikita.
 > 
 > Bringing up /proper names/ from the real world agrees with my idea
 > though! :-)

I don't understand why, if you are at liberty to design a new namespace model
from scratch (it seems POSIX semantics are not binding in our case), you
would faithfully replicate the deficiencies of natural languages.

It is a common trait in both science and engineering that when two flavors
of the same functionality (real names vs. indices) arise, an attempt is
made to reduce one of them to the other, simplifying the system as a
result.

In our case, motivation to reduce one type of names to another is even
more pressing, as these types are incompatible: in the presence of
cycles or dynamic queries, namespace visible through the directory
hierarchy is different from the namespace of real names.

Indices cannot be reduced to real names (as rename is impossible to
implement efficiently), but real names can very well be reduced to
indices as exemplified by each and every UNIX file system out there.

So, the question is: what do real names buy one that indices do not?

[...]

 > -- 
 > Jonathan Briggs <[EMAIL PROTECTED]>
 > eSoft, Inc.

Nikita.


Re: File as a directory - VFS Changes

2005-06-02 Thread Nikita Danilov
Alexander G. M. Smith writes:

[...]

 > 
 > The typical worst case operation will be deleting a link to your photo
 > from a directory you decided didn't classify it properly.  The photo may
 > be in several directories, such as Cottage, Aunt and Bottles if it is
 > a picture of a champagne bottle you polished off at your aunt's cottage.
 > You decide that it shouldn't really be in the Aunt folder, so you delete
 > it (or rather the link) from there.

This is a typical operation for desktop usage, I agree. But the desktop is
not interesting. It poses no technical difficulty to implement
whatever indexing structure when your dataset is but a few dozen
thousand objects [1]. What _is_ interesting is to make the file system
scalable. A solution that fails to move a directory simply because the
sub-tree rooted at it is large is not scalable.

 > 
 > The traversal starts with recursively finding all the children of the
 > deleted object, which will include the photo and all attributish
 > subobjects (thumbnail, description, ...).  Not too bad, maybe a
 > dozen objects.  Then reconnect those children to objects which have
 > a known good path to the root, reached through whatever parents remain.

And at that moment the user hits ^C...

That is, how will the atomicity guarantees of rename be preserved? Note that
many applications, like some mail servers, crucially depend on rename
atomicity to implement their transaction mini-engines.

And concurrency issues also don't look bright: what if, while

mv /d0/d1/d2/d2 /b0/b1/b2

is performed and a thread is in the middle of scanning descendants of
/d0/d1/d2/d2 recursively, another thread does

mv /d0/d1 /c0/c1/c2

? Obviously the scanning cannot take locks on individual files as it sees
them (because, the namespace being an arbitrary graph, this will
deadlock). The only remaining solution is to take a whole-fs lock during
every rename/link/unlink operation. Which is another nail in the
scalability coffin.
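
A toy sketch of that whole-fs lock, in plain pthreads purely for
illustration (no claim that any real filesystem structures it this way):

    #include <pthread.h>

    static pthread_rwlock_t ns_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Every rename/link/unlink serializes against all others, so a
     * recursive scan can never race with a concurrent move... */
    void ns_mutate(void (*op)(void))
    {
        pthread_rwlock_wrlock(&ns_lock);
        op();
        pthread_rwlock_unlock(&ns_lock);
    }

    /* ...while plain lookups still proceed concurrently. But every
     * writer stalls the whole namespace: the scalability problem. */
    void ns_lookup(void (*op)(void))
    {
        pthread_rwlock_rdlock(&ns_lock);
        op();
        pthread_rwlock_unlock(&ns_lock);
    }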

[...]

 > 
 > Now if you move the directory containing millions of files, then it's
 > going to take a while.  And if it has a hard link down to another
 > directory, that gets traversed too.  But that won't happen too often,
 > only around spring time when you're reorganizing your mail archives.

It happens all the time on my workstation, when I move Linux source
trees around.

 > 
 > - Alex

Nikita.

Footnotes: 
[1]  Implementing things like Spotlight does not require
any innovation at the file system layer (and not coincidentally,
Spotlight is based on almost 20-year-old BSDLite kernel code).



Re: File as a directory - VFS Changes

2005-06-02 Thread Nikita Danilov
Hans Reiser writes:
 > What about if we have it that only the first name a directory is created
 > with counts towards its reference count, and that if the directory is
 > moved from its first name, the new name becomes the one
 > that counts towards the reference count?   A bit of a hack, but would work.

This means that a list of names has to be kept together with every object
(to find out where the "true" reference has to be moved). And this makes
rename of a directory problematic, as the lists of names of all directory
children have to be updated.

 > 
 > Hans

Nikita.


Re: File as a directory - VFS Changes

2005-06-02 Thread Hans Reiser
Alexander G. M. Smith wrote:

>Hans Reiser wrote on Tue, 31 May 2005 11:32:04 -0700:
>  
>
>>What about if we have it that only the first name a directory is created
>>with counts towards its reference count, and that if the directory is
>>moved from its first name, the new name becomes the one
>>that counts towards the reference count?   A bit of a hack, but would work.
>>
>>
>
>Sounds a lot like what I did earlier.  Files got really deleted when the
>true name was the only name for a file (only one parent in other words).
>But I also had a large cycle-finding pause when any file movement happened.
>I'm not sure if it would still be needed.
>
>Nikita Danilov wrote:
>  
>
>>- if garbage collection is implemented through the reference counting
>>(which is the only known way tractable for a file system), then cycles
>>are never collected.
>>[...]
>>But the garbage collection problem is still there. You are more than
>>welcome to solve it by implementing generational mark-and-sweep GC on a file
>>system scale. :-)
>>
>>
>
>There are at least two choices:
>
>Bite the bullet and have a file system that is occasionally slow due to
>cycle checking, but only when the user somehow makes a huge cycle.  Keep
>in mind that this only happens when you use the new functionality; if you
>only create files with one parent, it should be as fast as regular file
>systems.  I see its features being useful for desktop use, not servers,
>so the occasional speed hit is less annoyance than the lack of features
>(the ability to file your files in several places).
>  
>
I prefer the above to the below.

>Another way is to not delete the files when they get unlinked.  Similar
>to some other allocation management systems, have a background thread
>doing the garbage collection and cycle tracing.  The drawback is that
>you might run out of disc space if you're creating files faster than
>the collector is cleaning up.
>
>I wonder if you can combine a wandering journal (or whatever it is called,
>where the journalled data blocks become the file's current contents) with
>the copy-type garbage collection (is that the same as a two-generation
>mark-and-sweep?).  Copy-type collection copies all known reachable objects to
>an empty half of the disk.  When that's done, the original half is marked
>empty and the next pass copies in the other direction.  Could work nicely
>if you have two disk drives.  Yet another PhD topic on garbage collection
>for someone to research :-)
>
>There are lots of other garbage collection schemes that might be
>applicable to file systems with cycles.  It could work, maybe with
>decent speed too!
>
>- Alex
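
To make the reference-counting point above concrete, here is a minimal
invented example of a two-directory cycle that stays allocated after its
last external name is dropped:

    #include <stdio.h>

    struct dir { int refs; struct dir *link; };

    int main(void)
    {
        struct dir a = { 0, NULL }, b = { 0, NULL };

        a.refs = 1;               /* a's name in the root directory */
        a.link = &b; b.refs++;    /* a/b                            */
        b.link = &a; a.refs++;    /* b/a -- the cycle               */

        a.refs--;                 /* root unlinks its name for a    */

        /* Both counts are still 1, so neither is ever freed, yet the
         * pair is unreachable: only a tracing collector (mark-and-sweep
         * or the copying scheme above) can reclaim it. */
        printf("a.refs=%d b.refs=%d\n", a.refs, b.refs);
        return 0;
    }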