Re: File as a directory - VFS Changes

2005-05-31 Thread Nikita Danilov
Alexander G. M. Smith writes:
  Nikita Danilov wrote on Mon, 30 May 2005 15:00:52 +0400:
   Nothing in VFS prevents files from supporting both read(2) and
   readdir(3). The problem is with link(2): VFS assumes that directories
   form a _tree_, that is, every directory has a well-defined parent.
  
  At least that's one problem that's solvable.  Just define one of
  the parents as the master parent directory, with a guaranteed path
  up to the root, and have the others as auxiliary parents.  That
  also gives you a good path name to each and every file-thing.
  
  The VFS or the file system (depending on where the designers want
  to split the work) will still have to handle cycles in the graph
  to recompute the new master parents, when an old one gets deleted
  or moved.

A cycle may consist of more graph nodes than fit into memory. Cycle
detection is crucial for rename semantics, and if the
cycle-just-about-to-be-formed doesn't fit into memory it's not clear how
to detect it, because the tree has to be locked while it is checked for
cycles, and one definitely doesn't want to hold such a lock over I/O.
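
To make the contrast concrete, here is a minimal sketch (not actual VFS
code; the struct and function names are invented for illustration) of why
the check is cheap for a strict tree and expensive for a graph:

#include <stdbool.h>
#include <stddef.h>

/* In a strict tree every directory has exactly one parent, so checking
 * whether moving "victim" under "new_parent" would create a cycle is a
 * single walk towards the root, touching only O(depth) nodes. */
struct dentry {
        struct dentry *parent;  /* unique parent: the tree assumption */
};

static bool rename_creates_cycle(struct dentry *victim,
                                 struct dentry *new_parent)
{
        struct dentry *d;

        for (d = new_parent; d != NULL; d = d->parent)
                if (d == victim)
                        return true;    /* victim is its own ancestor */
        return false;
}

/* With multiple parents per directory there is no single chain to walk:
 * every parent of every ancestor must be visited, so the check can touch
 * an arbitrarily large part of the graph -- exactly the part that may
 * not fit into memory while the namespace lock is held. */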

  

Nikita.


Re: Reiserfs 1300G partition on lvm problem ...

2005-05-31 Thread Dan Oglesby

Matthias Barremaecker wrote:

Hi,

Thanks for your reply.

The data is not THAT important; all our important data is backed up 4
times, including the original (well, 3 times now, since the 1300 GB
machine is broken).

I did a bit of further research, and maybe this is something to think
about if you use reiserfs:

I did a bad block check and I have 10 bad blocks of 4096 bytes on 1300 GB,
and ... that is the reason reiserfs will not work anymore.

I guess this sucks. I would rather have the data on the bad blocks be
corrupted but the rest accessible.

I'm doing a --rebuild-tree with the bad block list. Hope this works.

Aren't there any tools to extract data from a broken reiserfs partition?

kind regards,

Matthias.

I'm currently in a similar situation:  1TB (RAID-5) ReiserFS filesystem 
running on RedHat 7.2 with a Promise SX6000 controller.  Everything was 
running stock version/firmware/BIOS, and it suddenly developed 39 bad
blocks after a power outage.


reiserfsck --rebuild-tree (version 3.6.4, the latest for RH 7.2) failed 
to complete after the controller kept hard locking/crashing.


So, I've compiled a plain 2.4.30 kernel (from kernel.org, not RedHat), 
updated the BIOS/Firmware on the SX6000, and compiled and installed the 
latest reiserfsprogs (3.6.19, I believe?), and have been running 
badblocks (non-destructive) for the last four days.  Nothing has turned 
up so far.


I'm hoping the next time I run a --rebuild-tree, it will be able to 
complete.  I really do need to get the data off of this filesystem.


Good luck with your recovery.

--Dan





[EMAIL PROTECTED] wrote:


On Sun, 29 May 2005 21:25:54 +0200, Matthias Barremaecker said:



but that says it is a physical drive error




Physical drive errors.  Your hardware is broken.  There isn't much that
Reiserfs can do about it.

What can I do?




1) Call whoever you get hardware support from.

2) Be ready to restore from backups.

3) If you didn't have RAID-5 (or similar) set up, or a good backup,
consider it a learning experience.

If your data is important enough that you'll care if you lose it, you
should take steps to make sure you won't lose it... It's that simple.

(Just for the record, if we have important info, it gets at least RAID5,
a backup to tape or other device, *and* a *second* backup off-site.  And
my shop is far from the most paranoid about such things.)







Re: Reiserfs 1300G partition on lvm problem ...

2005-05-31 Thread Matthias Barremaecker

Hi Dan,

My recovery goes rather well...

Does badblocks show a count or anything?  With me it counted.

You have to write the bad block numbers to a file and feed that to
reiserfsck.


I didn't complete a full badblocks check because I knew the bad blocks
could only be at the beginning of the LVM 'array', but I expect it to
take as long as a reiserfsck --rebuild-tree.


If you don't see anything counting -- start worrying.

Good luck to you too.

I still have 18 hours to go... I really hope I can mount the damn thing
after that, and then I can start looking for the bad disk.


And I'm not using any type of RAID for that data.

kind regards,

Matthias.

Dan Oglesby wrote:

[snip: full quote of previous messages]



--
  Matthias Barremaecker, MH.BE - Arta nv
  0495 30 31 72

  http://mh.be/

  SERVER HOUSING per 1U  50 per month
20 GB traffic, 100 Mbit network
Data center in Antwerp.



Re: File as a directory - VFS Changes

2005-05-31 Thread Hans Reiser
Nikita Danilov wrote:

Alexander G. M. Smith writes:
  Nikita Danilov wrote on Mon, 30 May 2005 15:00:52 +0400:
   Nothing in VFS prevents files from supporting both read(2) and
   readdir(3). The problem is with link(2): VFS assumes that directories
   form a _tree_, that is, every directory has a well-defined parent.
  
  At least that's one problem that's solvable.  Just define one of
  the parents as the master parent directory, with a guaranteed path
  up to the root, and have the others as auxiliary parents.  That
  also gives you a good path name to each and every file-thing.
  
  The VFS or the file system (depending on where the designers want
  to split the work) will still have to handle cycles in the graph
  to recompute the new master parents, when an old one gets deleted
  or moved.

A cycle may consist of more graph nodes than fit into memory.

There are pathname length restrictions already in the kernel that should
prevent that, yes?

Cycle
detection is crucial for rename semantics, and if the
cycle-just-about-to-be-formed doesn't fit into memory it's not clear how
to detect it, because the tree has to be locked while it is checked for
cycles, and one definitely doesn't want to hold such a lock over I/O.

  

Nikita.


  




Re: Reiserfs 1300G partition on lvm problem ...

2005-05-31 Thread Dan Oglesby

Matthias Barremaecker wrote:

Hi Dan,

My recovery goes rather well...

Does badblocks show a count or anything?  With me it counted.



So far, I haven't seen any bad blocks written to my output file.  I'm 
not in front of the machine (remote location), so I can't see what's on 
the console, where I'm actually running the command.


You have to write the bad block numbers to a file and feed that to
reiserfsck.




Yeah, that's what I'm doing.

I didn't complete a full badblocks check because I knew the bad blocks
could only be at the beginning of the LVM 'array', but I expect it to
take as long as a reiserfsck --rebuild-tree.


If you don't see anything counting -- start worrying.



I have no idea if there are even real bad blocks on my array.  I think 
the controller's BIOS/firmware was so old, it didn't know how to deal 
with the power outage in a sane manner.  The last reiserfsck failed 
after a day of running, so I'm thinking if I have a problem, it's 
towards the end of my array.  Time will tell.



Good luck to you too.

I still have 18 hours to go... I really hope I can mount the damn thing
after that, and then I can start looking for the bad disk.




Thanks, and same here...  :-)


And I'm not using any type of RAID for that data.



JBOD?

--Dan


kind regards,

Matthias.

Dan Oglesby wrote:

[snip: full quote of earlier messages]






Re: File as a directory - VFS Changes

2005-05-31 Thread Nikita Danilov
Hello Hans,

Hans Reiser writes:
  Nikita Danilov wrote:
  
  Alexander G. M. Smith writes:
Nikita Danilov wrote on Mon, 30 May 2005 15:00:52 +0400:
 Nothing in VFS prevents files from supporting both read(2) and
 readdir(3). The problem is with link(2): VFS assumes that directories
 form a _tree_, that is, every directory has a well-defined parent.

At least that's one problem that's solvable.  Just define one of
the parents as the master parent directory, with a guaranteed path
up to the root, and have the others as auxiliary parents.  That
also gives you a good path name to each and every file-thing.

The VFS or the file system (depending on where the designers want
to split the work) will still have to handle cycles in the graph
to recompute the new master parents, when an old one gets deleted
or moved.
  
  A cycle may consist of more graph nodes than fit into memory.
  
  There are pathname length restrictions already in the kernel that should
  prevent that, yes?

UNIX namespaces are not _that_ retarded. :-)

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        int i;

        for (i = 0; ; ++i) {
                mkdir("foo", 0777);
                chdir("foo");
                if ((i % 1000) == 0)
                        printf("%i\n", i);
        }
        return 0;
}

run it for a while, interrupt, and do

$ find foo
$ rm -frv foo

  
  Cycle
  detection is crucial for rename semantics, and if the
  cycle-just-about-to-be-formed doesn't fit into memory it's not clear how
  to detect it, because the tree has to be locked while it is checked for
  cycles, and one definitely doesn't want to hold such a lock over I/O.
  

  

Nikita.

  
  

  


Re: File as a directory - VFS Changes

2005-05-31 Thread Valdis . Kletnieks
On Tue, 31 May 2005 08:04:42 PDT, Hans Reiser said:

 A cycle may consist of more graph nodes than fit into memory.
 
 There are pathname length restrictions already in the kernel that should
 prevent that, yes?

The problem is that although a *single* pathname can't be longer than some
length, you can still create a cycle.  Consider for instance a pathname
restriction of 1024 chars.  Filenames A, B, and C are all 400 characters
long.  A points at B, B points at C - and C points back to A.

Also, although the set of inodes *in the cycle* fits in memory, the set of
inodes *in the entire graph* that has to be searched to verify the presence
of a cycle may not (in general, you have to be ready to examine *all* the
inodes unless you can do some pruning: unallocated, provably un-cycleable,
and so on).  This is the sort of thing that you can afford to do in
userspace during an fsck, but certainly can't do in the kernel on every
syscall that might create a cycle...





Re: File as a directory - Ordered Relations

2005-05-31 Thread Jonathan Briggs
On Mon, 2005-05-30 at 01:19 -0700, Hans Reiser wrote:
 [EMAIL PROTECTED] wrote:
 
 On Fri, 27 May 2005 23:56:35 CDT, David Masover said:
 
   
 
 Hans, comment please?  Is this approaching v5 / v6 / Future Vision?  It
 does seem more than a little clunky when applied to v4...
 
 
 Well, if you read our whitepaper, we consider relational algebra to be a
 functional subset of what we will implement (which implies we think
 relational algebra should be possible in the filesystem naming.)
 
 
 I'm not Hans, but I *will* ask "How much of this is *rationally* doable
 without some help from the VFS?"
 
 Think of VFS as a standards committee.  That means that 5-15 years after
 we implement it, they will copy it, break it, and then demand that we
 conform to their breakage. 
 
 Anytime someone says "it should go into VFS", what they really mean is
 that nobody should get ahead of them, because it would increase their
 workload. ;-)
 
 VFS is a baseline.  Once you support VFS, and your performance is good,
 you can start to innovate.  Next year we finally start to seriously
 innovate, after 10 years of groundwork.  The storage layer was never the
 interesting part of our plans, not to me.

Why innovate in the filesystem though, when it would work just as well
or better in the VFS layer?  Files as directories and meta-files would
work for all filesystems.  Ext3 with extended attributes could support
the same file structures as Reiser4.  Reiser4 would then be the most
efficient implementation of the general case.

From the last LKML discussion, it didn't look to me as if the kernel
maintainers are going to accept Reiser4's stranger features into the
mainline kernel, so if you're going to be implementing and maintaining
them separately anyway, why not do it in the implementation of all
namespaces, in the VFS code?
-- 
Jonathan Briggs [EMAIL PROTECTED]
eSoft, Inc.




Re: File as a directory - VFS Changes

2005-05-31 Thread Jonathan Briggs
On Tue, 2005-05-31 at 12:30 -0400, [EMAIL PROTECTED] wrote:
 On Tue, 31 May 2005 08:04:42 PDT, Hans Reiser said:
 
  A cycle may consist of more graph nodes than fit into memory.
  
  There are pathname length restrictions already in the kernel that should
  prevent that, yes?
 
 The problem is that although a *single* pathname can't be longer than some
 length, you can still create a cycle.  Consider for instance a pathname
 restriction of 1024 chars.  Filenames A, B, and C are all 400 characters
 long.  A points at B, B points at C - and C points back to A.
 
 Also, although the set of inodes *in the cycle* fits in memory, the set of
 inodes *in the entire graph* that has to be searched to verify the presence
 of a cycle may not (in general, you have to be ready to examine *all* the
 inodes unless you can do some pruning: unallocated, provably un-cycleable,
 and so on).  This is the sort of thing that you can afford to do in
 userspace during an fsck, but certainly can't do in the kernel on every
 syscall that might create a cycle...

You can avoid cycles by redefining the problem.

Every file or data object has one single "True Name", which is its
inode or OID.  Each data object then has one or more names as
properties.  Names are either single strings with slash separators for
directories, or each directory element is a unique object in an object
list.  Directories then become queries that return the set of objects
holding that directory name.  The query results are of course cached and
updated whenever a name property changes.

Now there are no cycles, although a naive Unix find program could get
stuck in a loop.
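
A minimal sketch of the scheme, assuming a toy in-memory object table
(the structures, names, and prefix-match rule are illustrative only, not
anything Reiser4 implements):

#include <stdio.h>
#include <string.h>

/* Toy model: every object carries its full pathnames as properties;
 * a "directory" is just a prefix query over the object table. */
struct object {
        int oid;                /* the object's "True Name" */
        const char *names[4];   /* name properties, NULL-terminated */
};

static struct object table[] = {
        { 1001, { "/tmp/A/file1", "/tmp/A/B/file1", NULL } },
        { 1002, { "/tmp/A/file2", NULL } },
};

/* readdir(dir) becomes: list every object with a name under "dir/". */
static void list_dir(const char *dir)
{
        size_t len = strlen(dir);
        size_t i;
        const char **n;

        for (i = 0; i < sizeof table / sizeof table[0]; i++)
                for (n = table[i].names; *n != NULL; n++)
                        if (strncmp(*n, dir, len) == 0 && (*n)[len] == '/')
                                printf("%d %s\n", table[i].oid, *n);
}

int main(void)
{
        list_dir("/tmp/A");     /* both objects appear */
        list_dir("/tmp/A/B");   /* only object 1001 appears */
        return 0;
}

(Note this query matches every name under the prefix; the cached query
results described above would keep only the direct children.)
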
-- 
Jonathan Briggs [EMAIL PROTECTED]
eSoft, Inc.




Re: File as a directory - VFS Changes

2005-05-31 Thread Hans Reiser
What happens when you unlink the True Name?

Hans

Jonathan Briggs wrote:


You can avoid cycles by redefining the problem.

Every file or data object has one single "True Name", which is its
inode or OID.  Each data object then has one or more names as
properties.  Names are either single strings with slash separators for
directories, or each directory element is a unique object in an object
list.  Directories then become queries that return the set of objects
holding that directory name.  The query results are of course cached and
updated whenever a name property changes.

Now there are no cycles, although a naive Unix find program could get
stuck in a loop.
  




Re: File as a directory - Ordered Relations

2005-05-31 Thread Hans Reiser
Jonathan Briggs wrote:


Why innovate in the filesystem though, when it would work just as well
or better in the VFS layer?

Why don't we just have one filesystem, think of the advantages.
;-)

I don't try to get other people to follow my lead anymore, I just ship
code that works.  Putting it into VFS requires getting others to follow
my lead.  Ain't gonna happen.  Getting them to leave me alone to
innovate in my corner of the kernel?  Might happen if I fight for it,
but it will be a real struggle.

Hans


Re: File as a directory - VFS Changes

2005-05-31 Thread Jonathan Briggs
Either that isn't allowed, or it immediately vanishes from all
directories.

If deleting by OID isn't allowed, then every name property must be
removed in order to delete the file.

Personally, I would allow deleting the OID.  It would be a convenient
way to be sure every instance of a file was deleted.

On Tue, 2005-05-31 at 09:59 -0700, Hans Reiser wrote:
 What happens when you unlink the True Name?
 
 Hans
 
 Jonathan Briggs wrote:
 
 
 You can avoid cycles by redefining the problem.
 
 Every file or data object has one single "True Name", which is its
 inode or OID.  Each data object then has one or more names as
 properties.  Names are either single strings with slash separators for
 directories, or each directory element is a unique object in an object
 list.  Directories then become queries that return the set of objects
 holding that directory name.  The query results are of course cached and
 updated whenever a name property changes.
 
 Now there are no cycles, although a naive Unix find program could get
 stuck in a loop.
   
 
 
-- 
Jonathan Briggs [EMAIL PROTECTED]
eSoft, Inc.


signature.asc
Description: This is a digitally signed message part


Re: Reiserfs 1300G partition on lvm problem ...

2005-05-31 Thread Christian
On Tue, May 31, 2005 17:09, Dan Oglesby said:

 So far, I haven't seen any bad blocks written to my output file.  I'm
 not in front of the machine (remote location), so I can't see what's on
 the console, where I'm actually running the command.

use screendump(1) to see what's on the console. it comes in a package
called console-tools here.

 with the power outage in a sane manner.  The last reiserfsck failed
 after a day of running, so I'm thinking if I have a problem, it's
 towards the end of my array.  Time will tell.

sorry if i missed the rest of the thread: please make sure you're using
the latest version of reiserfsprogs.

...my 2 cents,
Christian.
-- 
make bzImage, not war



Reiser4/Encryption plugin stability?

2005-05-31 Thread ADT
Hi everyone,

I'm looking into using reiser4 and its encryption plugin on a number
of new CentOS4 servers I will be building.  I've been doing various
searches via google and the list archives, and I've seen a few emails
from last year which indicated that the encryption plugin wasn't yet
ready for primetime but it would be soon.

Any chance I could get a status update?  

Thanks,
Aaron

-- 
http://synfin.net/


Re: File as a directory - VFS Changes

2005-05-31 Thread Nikita Danilov
Jonathan Briggs writes:
  On Tue, 2005-05-31 at 12:30 -0400, [EMAIL PROTECTED] wrote:
   On Tue, 31 May 2005 08:04:42 PDT, Hans Reiser said:
   
A cycle may consist of more graph nodes than fit into memory.

There are pathname length restrictions already in the kernel that should
prevent that, yes?
   
   The problem is that although a *single* pathname can't be longer than some
   length, you can still create a cycle.  Consider for instance a pathname
   restriction of 1024 chars.  Filenames A, B, and C are all 400 characters
   long.  A points at B, B points at C - and C points back to A.
   
   Also, although the set of inodes *in the cycle* fits in memory, the set of
   inodes *in the entire graph* that has to be searched to verify the presence
   of a cycle may not (in general, you have to be ready to examine *all* the
   inodes unless you can do some pruning: unallocated, provably un-cycleable,
   and so on).  This is the sort of thing that you can afford to do in
   userspace during an fsck, but certainly can't do in the kernel on every
   syscall that might create a cycle...
  
  You can avoid cycles by redefining the problem.
  
  Every file or data object has one single "True Name", which is its
  inode or OID.  Each data object then has one or more names as
  properties.  Names are either single strings with slash separators for
  directories, or each directory element is a unique object in an object
  list.  Directories then become queries that return the set of objects
  holding that directory name.  The query results are of course cached and
  updated whenever a name property changes.
  
  Now there are no cycles, although a naive Unix find program could get
  stuck in a loop.

Huh? Cycles are still here.

Query D0 returns D1, query D1 returns D2, ... query DN returns D0. The
problem is not in the mechanism used to encode tree/graph structure. The
problem is in the limitations imposed by required semantics:

   (R) every object except some selected root is Reachable. (No leaks.)

   (G) unused objects are sooner or later discarded. (Garbage
   collection.)

Neither requirement is compatible with cycles in the directory
structure:

 - from (R) it follows that an object can be discarded only if it is empty
 (as a directory). All nodes in a cycle are non-empty (because each of them
 contains at least a reference to the next one), and hence none of them can
 ever be removed;

 - if garbage collection is implemented through reference counting (which
 is the only known tractable way for a file system), then cycles are never
 collected (a sketch of this follows below).
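
A toy illustration of the reference-counting point (plain C, nothing
file-system specific; the node structure is invented): once the last
external link to a two-node cycle is dropped, both counts settle at one
and neither node is ever freed.

#include <stdio.h>

struct node {
        int refs;
        struct node *child;     /* the link held inside the directory */
};

static void put(struct node *n)
{
        if (--n->refs == 0)
                printf("freed\n");      /* never reached below */
}

int main(void)
{
        struct node a = { 2, NULL }, b = { 2, NULL };

        a.child = &b;           /* a -> b holds one ref on b */
        b.child = &a;           /* b -> a holds one ref on a: a cycle */
        put(&a);                /* drop the external link to a */
        put(&b);                /* drop the external link to b */
        printf("a.refs=%d b.refs=%d\n", a.refs, b.refs);  /* prints 1 1 */
        return 0;
}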

Unless you are talking about a two-level naming scheme, where "True
Names" are visible to the user. In that case the reachability problem
evaporates, because manipulations with the normal directory structure
never make a node unreachable---it is always accessible through its True
Name.

But the garbage collection problem is still there. You are more than
welcome to solve it by implementing generational mark-and-sweep GC on a
file-system scale. :-)

Nikita.


Re: File as a directory - VFS Changes

2005-05-31 Thread Hans Reiser
Well, if you allow multiple true names, then you start to resemble
something I suggested a few years ago, in which I outlined a taxonomy of
links, and suggested that some links would count towards the reference
count and some would not.

Of course, that does nothing for the cycle problem...

How are cycles handled for symlinks currently?

Hans

Jonathan Briggs wrote:

Either that isn't allowed, or it immediately vanishes from all
directories.

If deleting by OID isn't allowed, then every name property must be
removed in order to delete the file.

Personally, I would allow deleting the OID.  It would be a convenient
way to be sure every instance of a file was deleted.

On Tue, 2005-05-31 at 09:59 -0700, Hans Reiser wrote:
  

What happens when you unlink the True Name?

Hans

Jonathan Briggs wrote:

[snip: earlier text quoted in full]

  




Re: Reiser4/Encryption plugin stability?

2005-05-31 Thread Edward Shishkin

ADT wrote:


Hi everyone,

I'm looking into using reiser4 and its encryption plugin on a number
of new CentOS4 servers I will be building.  I've been doing various
searches via google and the list archives, and I've seen a few emails
from last year which indicated that the encryption plugin wasn't yet
ready for primetime but it would be soon.

Any chance I could get a status update?  


Thanks,
Aaron

 



Hello.
It will be available in reiser4.1 via the pseudo interface, but only for
the compression transform for a while; wait for the beta release.

Thanks,
Edward.



Re: File as a directory - VFS Changes

2005-05-31 Thread Hans Reiser
What if we have it that only the first name a directory is created with
counts towards its reference count, and that if the directory is moved
away from its first name, the new name becomes the one that counts
towards the reference count?  A bit of a hack, but it would work.
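
A minimal sketch of that accounting rule (the helper names are invented,
nothing here is existing VFS or Reiser4 code): only the primary name
contributes to the count, and renaming the primary name just reassigns
which name counts.

#include <stdio.h>
#include <string.h>

struct dir {
        char primary[64];       /* the one name that counts */
        int refs;               /* stays 1 however many links exist */
};

static void dir_create(struct dir *d, const char *name)
{
        snprintf(d->primary, sizeof d->primary, "%s", name);
        d->refs = 1;            /* only the creation name counts */
}

static void dir_rename(struct dir *d, const char *from, const char *to)
{
        /* if the primary name moves, the new name inherits its status */
        if (strcmp(d->primary, from) == 0)
                snprintf(d->primary, sizeof d->primary, "%s", to);
        /* refs deliberately unchanged */
}

int main(void)
{
        struct dir d;

        dir_create(&d, "/a/dir");       /* refs = 1 */
        /* auxiliary links would be recorded elsewhere; none touch refs */
        dir_rename(&d, "/a/dir", "/c/dir");
        printf("primary=%s refs=%d\n", d.primary, d.refs);  /* still 1 */
        return 0;
}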

Hans

Nikita Danilov wrote:

[snip: full quote of Nikita's message]


  




reiser4 on large block devices

2005-05-31 Thread Aaron Porter

I'm trying to create a reiser4 filesystem on a ~4 TB block device,
but I'm getting the error "Fatal: The partition size is too big."  The FAQ
seems to list a max filesystem size of 16 TB.  Am I missing something?

diablo:~# mkreiser4 /dev/sda3
mkreiser4 1.0.4
Copyright (C) 2001, 2002, 2003, 2004 by Hans Reiser, licensing governed by 
reiser4progs/COPYING. 

Block size 4096 will be used. 
Linux 2.6.11.8 is detected.   
Uuid f22c4f94-4fcd-4655-b4e0-6665048db8ce will be used.   
Fatal: The partition size is too big. 
Reiser4 is going to be created on /dev/sda3.  
(Yes/No): no
diablo:~#



Re: File as a directory - VFS Changes

2005-05-31 Thread Jonathan Briggs
On Tue, 2005-05-31 at 15:01 -0600, Jonathan Briggs wrote:
 I should create an example.
 
 Wherever I used "True Name" previously, use OID instead.  "True Name" was
 simply another term for a unique object identifier.
 
 Three files with OIDs of 1001, 1002, and 1003.
 Object 1001:
 name: /tmp/A/file1
 name: /tmp/A/B/file1
 name: /tmp/A/B/C/file1
 
 Object 1002:
 name: /tmp/A/file2
 
 Object 1003:
 name: /tmp/A/B/file3
 
 Three query objects (directories) with OIDs of 1, 2, and 3.
 Object 1:
 name: /tmp/A
 name: /tmp/A/B/C/A
 query: name begins with /tmp/A/
 query result cache: B->2, file1->1001, file2->1002
 
 Object 2:
 name: /tmp/A/B
 query: name begins with /tmp/A/B/
 query result cache: C->3, file1->1001, file3->1003
 
 Object 3:
 name: /tmp/A/B/C
 query: name begins with /tmp/A/B/C/
 query result cache: A->1, file1->1001
 
 Now there is an A -> B -> C -> A directory loop.  But removing
 name: /tmp/A/B/C/A from Object 1 fixes the loop.  Deleting Object 1 also
 fixes the loop.  Deleting any of Object 1, 2 or 3 does not affect any
 other object, because in this scheme, directory objects do not need to
 actually exist: they are just queries that return objects with certain
 names.

I forgot to address Nikita's point about reclaiming lost cycles.  In
this case, let me create Object 4 for /tmp:
Object 4:
name: /tmp
query: name begins with /tmp/
query result cache: A->1

Now, if we delete Object 4, are Objects 1, 2, 3 lost?  I would say not,
because they still have names.  When the shell calls chdir("/tmp") a new
query object (directory) must be created dynamically, and Objects
1001, 1002, 1003 still have their names that start with /tmp, and so they
immediately appear again.  Their names still start with /, so the top
level query will still find them, and /tmp as well.

Therefore, the cycle is never detached and lost.
-- 
Jonathan Briggs [EMAIL PROTECTED]
eSoft, Inc.




Re: Reiser4/Encryption plugin stability?

2005-05-31 Thread David Masover

Edward Shishkin wrote:
 ADT wrote:
 
 Hi everyone,

 I'm looking into using reiser4 and its encryption plugin on a number
 of new CentOS4 servers I will be building.  I've been doing various
 searches via google and the list archives, and I've seen a few emails
 from last year which indicated that the encryption plugin wasn't yet
 ready for primetime but it would be soon.

 Any chance I could get a status update? 
 Thanks,
 Aaron

  

 
 Hello.
 It will be available in reiser4.1 via the pseudo interface, but only for
 the compression transform for a while; wait for the beta release.

Weird.  I would have thought compression would be harder than encryption...



Re: File as a directory - VFS Changes

2005-05-31 Thread Nikita Danilov
Jonathan Briggs writes:
  On Tue, 2005-05-31 at 15:01 -0600, Jonathan Briggs wrote:
   I should create an example.
   
   Wherever I used "True Name" previously, use OID instead.  "True Name" was
   simply another term for a unique object identifier.
   
   Three files with OIDs of 1001, 1002, and 1003.
   Object 1001:
   name: /tmp/A/file1
   name: /tmp/A/B/file1
   name: /tmp/A/B/C/file1
   
   Object 1002:
   name: /tmp/A/file2
   
   Object 1003:
   name: /tmp/A/B/file3
   
   Three query objects (directories) with OIDs of 1, 2, and 3.
   Object 1:
   name: /tmp/A
   name: /tmp/A/B/C/A
   query: name begins with /tmp/A/
   query result cache: B->2, file1->1001, file2->1002
   
   Object 2:
   name: /tmp/A/B
   query: name begins with /tmp/A/B/
   query result cache: C->3, file1->1001, file3->1003
   
   Object 3:
   name: /tmp/A/B/C
   query: name begins with /tmp/A/B/C/
   query result cache: A->1, file1->1001
   
   Now there is an A -> B -> C -> A directory loop.  But removing
   name: /tmp/A/B/C/A from Object 1 fixes the loop.  Deleting Object 1 also
   fixes the loop.  Deleting any of Object 1, 2 or 3 does not affect any
   other object, because in this scheme, directory objects do not need to
   actually exist: they are just queries that return objects with certain
   names.

One problem with the above is that the directory structure is inconsistent
with the lists of names associated with objects. For example, file1 is a
child of /tmp/A/B/C/A, but Object 1001 doesn't list /tmp/A/B/C/A/file1
among its names.

  
  I forgot to address Nikita's point about reclaiming lost cycles.  In
  this case, let me create Object 4 for /tmp
  Object 4:
  name: /tmp
  query: name begins with /tmp/
  query result cache: A->1
  
  Now, if we delete Object 4, are Objects 1, 2, 3 lost?  I would say not,
  because they still have names.  When the shell calls chdir("/tmp") a new
  query object (directory) must be created dynamically, and Objects
  1001, 1002, 1003 still have their names that start with /tmp, and so they
  immediately appear again.  Their names still start with /, so the top
  level query will still find them, and /tmp as well.

Object 4 is /tmp. Once it is removed, what does it _mean_ for, say,
Object 1003 to have the name /tmp/A/B/file3? What is the /tmp bit there?
Just a string? If so, and your directories are but queries, what does it
mean for a directory to be removed? How is mv /tmp/A /tmp/A1 implemented?
By scanning the whole file system and updating leaf name-lists?

It seems that what you are proposing is a radical departure from the file
system namespace as we know it. :-) In your scheme all structural
information is encoded in leaves _only_, and directories just do some
kind of pattern matching. This is closer to a relational database than
to the current file systems, where directories are the only source of
the structural information.


Re: File as a directory - VFS Changes

2005-05-31 Thread Jonathan Briggs
On Wed, 2005-06-01 at 02:36 +0400, Nikita Danilov wrote:
 Jonathan Briggs writes:
  [snip: Jonathan's example, quoted in full]
 
 One problem with the above is that the directory structure is inconsistent
 with the lists of names associated with objects. For example, file1 is a
 child of /tmp/A/B/C/A, but Object 1001 doesn't list /tmp/A/B/C/A/file1
 among its names.

file1 *appears* to be a child because it is actually returned as the
query result for its name of /tmp/A/file1, because A is a query
for /tmp/A/.  If the shell were smart enough to normalize its path by
asking the directory for its name, it would know that /tmp/A/B/C/A
was /tmp/A.  But yes, a stupid program could be confused by the
difference between names.

 
   
   [snip: earlier text quoted in full]
 
 Object 4 is /tmp. Once it is removed, what does it _mean_ for, say,
 Object 1003 to have the name /tmp/A/B/file3? What is the /tmp bit there?
 Just a string? If so, and your directories are but queries, what does it
 mean for a directory to be removed? How is mv /tmp/A /tmp/A1 implemented?
 By scanning the whole file system and updating leaf name-lists?

Well, the name doesn't mean anything. :-)  It is just convenient
metadata for describing where to find the file in a hierarchy, and for
Unix compatibility.

If a directory were removed by a standard rm -rf, it would work as
expected, because it would descend the tree removing names (unlink) from
each object it found.

Moving an object with mv would change its name.  Moving a top-level
directory like /usr would require visiting every object starting
with /usr and doing an edit.  A compression scheme could be used where
the most-used top-level directory names were replaced with lookup
tables, then /usr could be renamed just once in the table.
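
A minimal sketch of that lookup-table idea (the "#index" encoding is
invented purely for illustration): stored names carry a table index
instead of the literal top-level component, so renaming /usr is a single
table edit.

#include <stdio.h>
#include <string.h>

/* A stored name "#0/bin/ls" means prefix_table[0] followed by "/bin/ls". */
static char prefix_table[][16] = { "/usr", "/tmp", "/home" };

static void print_name(const char *stored)
{
        if (stored[0] == '#')
                printf("%s%s\n", prefix_table[stored[1] - '0'], stored + 2);
        else
                printf("%s\n", stored);
}

int main(void)
{
        print_name("#0/bin/ls");                /* /usr/bin/ls */
        strcpy(prefix_table[0], "/usr2");       /* one edit "renames" /usr */
        print_name("#0/bin/ls");                /* /usr2/bin/ls */
        return 0;
}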

 It seems that what you are proposing is a radical departure from the file
 system namespace as we know it. :-) In your scheme all structural
 information is encoded in leaves _only_, and directories just do some
 kind of pattern matching. This is closer to a relational database than
 to the current file systems, where directories are the only source of
 the structural information.

Yes. :-)  It is radical, and the idea is taken from databases.  I
thought that seemed to be the direction Reiser filesystems were moving.
In this scheme a name is just another bit of metadata, and not
first-class important information.  The name-query directories would be
there for traditional filesystem users and Unix compatibility.  They
would probably be virtual and dynamic, only being created when needed
and only being persistent if assigned metadata (extra names (links),
non-default permission bits, etc.) or for performance reasons (faster to
load from cache than searching every file).
-- 
Jonathan Briggs [EMAIL PROTECTED]
eSoft, Inc.




Re: File as a directory - VFS Changes

2005-05-31 Thread Alexander G. M. Smith
Nikita Danilov wrote on Tue, 31 May 2005 13:34:55 +0400:
 A cycle may consist of more graph nodes than fit into memory. Cycle
 detection is crucial for rename semantics, and if the
 cycle-just-about-to-be-formed doesn't fit into memory it's not clear how
 to detect it, because the tree has to be locked while it is checked for
 cycles, and one definitely doesn't want to hold such a lock over I/O.

Sometimes you'll just have to return an error code if the rename operation
is too complex to be done.  The user will then have to delete individual
leaf files to make the situation simpler.  I hope this won't happen very
often.

On the plus side, the detection of all the files that may be affected
means you can now delete a directory directly, contents and all, if all
the related inodes fit into memory.

- Alex