Re: /usr (was Re: Big filesystems.)

2005-02-16 Thread Javier Kohen
So you can mount it read-only, either locally or from a network-exported 
drive. You need the root filesystem for booting, but /usr shouldn't be 
required for the first part of the process (before all local and network 
filesystems have been mounted); if it is, that indicates a problem with the 
boot scripts.
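
As a concrete sketch (the device name and NFS export path below are
illustrative assumptions, not details from this thread), the corresponding
/etc/fstab entries might look like:

```
# Local /usr mounted read-only (ext3 assumed):
/dev/sda5                /usr   ext3   ro             0  2

# Or the same tree served read-only from an NFS export:
#fileserver:/export/usr  /usr   nfs    ro,hard,intr   0  0
```

A temporary `mount -o remount,rw /usr` still allows package upgrades.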

Ron Johnson wrote:
On Wed, 2005-02-16 at 12:00 -0500, Adam Skutt wrote:
David Wood wrote:
[snip]
these days, I use ext3 for /, /usr, /boot and XFS for everything else.

On modern (everything since 1995) drives, presuming that /home and any 
"data" directories have their own partitions, why segregate 
/usr out from /?


--
Javier Kohen <[EMAIL PROTECTED]>
ICQ: blashyrkh #2361802
Jabber: [EMAIL PROTECTED]
--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]


/usr (was Re: Big filesystems.)

2005-02-16 Thread Ron Johnson
On Wed, 2005-02-16 at 12:00 -0500, Adam Skutt wrote:
> David Wood wrote:
[snip]
> these days, I use ext3 for /, /usr, /boot and XFS for everything else.

On modern (everything since 1995) drives, presuming that /home and any 
"data" directories have their own partitions, why segregate 
/usr out from /?

-- 
-
Ron Johnson, Jr.
Jefferson, LA USA
PGP Key ID 8834C06B I prefer encrypted mail.

PETA - People Eating Tasty Animals





Re: Big filesystems.

2005-02-16 Thread Adam Skutt
David Wood wrote:
Just speaking in terms of Linux, did you really have as many problems 
with the more popular filesystems as the less popular ones?
Yes, I have had them.  To be fair, these days, the only times I hear of 
ext2/ext3 crashing are on older kernels or when someone was doing 
something they weren't supposed to (like running the HDD full tilt and 
cutting the power).

So ext2/ext3 seem pretty stable, and when a major bug appears, it seems 
to be fixed within 24 hours typically (usually by an immediate kernel 
release).

So yes, the more widely-used ones seem less likely to crash and cause 
data corruption under "normal" circumstances, these days.  Same wasn't 
true of 2.0/2.2 though ;)  I personally haven't had an ext2/ext3 crash in 
years, but my ext2/ext3 systems also tend not to take as much abuse; 
these days, I use ext3 for /, /usr, /boot and XFS for everything else.

That being said, I don't consider anything but ext2/3 to be widespread 
enough to apply this to.

YMMV in all this of course; I just wanted to point out that every filesystem 
on every platform I've ever used (production or otherwise) has failed me 
at one point or another, so taking frequent backups is critical ;)

Adam


Re: Big filesystems.

2005-02-16 Thread David Wood
On Tue, 15 Feb 2005, Adam Skutt wrote:
I've had filesystem corruption on every filesystem I've used that's caused 
data loss:
FAT16, FAT32, NTFSv4, NTFSv5, ext2, ext3, ReiserV3, XFS, VMS' ODS-11, HFS.
It's funny, you got me thinking. Have I ever seen an "independent" 
corruption problem with the older Windows filesystems? I have no idea how 
I could tell. Like trying to measure the longevity of a car driven by a 
16-year-old alcoholic. Remember the Windows 95 bug where it would crash if 
just left running idle for something like 49 days? It took them years to 
discover it.  :)

Just speaking in terms of Linux, did you really have as many problems with 
the more popular filesystems as the less popular ones?



Re: Big filesystems.

2005-02-15 Thread Per Bojsen
*** Regarding Re: Big filesystems.; Adam Skutt <[EMAIL PROTECTED]> adds:

Adam> I don't know what it's doing, but I'm assuming it's journal
Adam> replay + perhaps pre-caching of the b-tree.

The journal replay is actually a separate phase.  It has already
happened at the time of mount when we are experiencing the seconds
long delays.  I have to say, though, that I would much rather wait the
8-10 seconds it takes to replay the journal and mount the partition
than wait through fsck on a corrupt ext2 partition of that size :-)

Per

-- 
Per Bojsen  <[EMAIL PROTECTED]>
7 Francis Road
Billerica, MA 01821-3618
USA





Re: Big filesystems.

2005-02-15 Thread Adam Skutt
David Wood wrote:
OK, the moral of the story is to use what everybody else uses. AND get a 
big backup tape.  ;)
Just the latter, not the former.
I've had filesystem corruption on every filesystem I've used that's 
caused data loss:
FAT16, FAT32, NTFSv4, NTFSv5, ext2, ext3, ReiserV3, XFS, VMS' ODS-11, HFS.

Good backups are the most important part.  All FS' will fail you eventually.
Adam



Re: Big filesystems.

2005-02-15 Thread Adam Skutt
Jeffrey W. Baker wrote:
Maybe the moral is you should use what everybody else uses.  ext2/3 and
to some extent Reiser are very well tested because practically everybody
uses them.  Major bugs in ext3 are readily apparent because it has
millions of users.  Major bugs in XFS are not found until one of sgi's
17 customers happens to trip over one.
Way more than 17 users use XFS.  I know way more people who use XFS than 
reiser, because of all the reiser corruption bugs circa 2.4.8.  Though 
now it seems it's XFS' turn.

By this logic, no one should use GFS, right, because only RHEL customers 
use it?

It doesn't follow per se, but it is a reasonable rule of thumb in general.
We switched to VxFS on EMC.  6 months later VxFS destroyed an extremely
large filesystem.  We were out of the mainstream.  Never been tested.
I dunno what platform you were doing this on, but VxFS on EMC hardware 
using Solaris is a mainstream platform.

Adam


Re: Big filesystems.

2005-02-15 Thread David Wood
On Tue, 15 Feb 2005, Jeffrey W. Baker wrote:
Maybe the moral is you should use what everybody else uses.  ext2/3 and
to some extent Reiser are very well tested because practically everybody
uses them.  Major bugs in ext3 are readily apparent because it has
millions of users.  Major bugs in XFS are not found until one of sgi's
17 customers happens to trip over one.  Worse, XFS is the sole user of
lots of in-kernel code.  ext3 uses lots of underlying code that is also
used by other kernel pieces.  The segmentation helps in rooting out
problems.
OK, the moral of the story is to use what everybody else uses. AND get a 
big backup tape.  ;)



Re: Big filesystems.

2005-02-15 Thread Jeffrey W. Baker
On Tue, 2005-02-15 at 16:50 -0500, David Wood wrote:
> On Tue, 15 Feb 2005, Jeffrey W. Baker wrote:
> > That said, XFS is still your best choice if you've hit the hard limits
> > in ext3.
> 
> Ahh... _that_ said, it looks like (until they fix it) XFS is the best 
> choice for punishing your enemies with.  :o
> 
> I hate to say it, but this is not the only place I have heard Linux/XFS 
> horror stories. Of course I actually love experimenting, and there's 
> nothing wrong with a work in progress, just so long as it's labeled.
> 
> I guess the moral of the story is that if you've got a big partition, I 
> hope you've got an even bigger backup tape.  ;)

Maybe the moral is you should use what everybody else uses.  ext2/3 and
to some extent Reiser are very well tested because practically everybody
uses them.  Major bugs in ext3 are readily apparent because it has
millions of users.  Major bugs in XFS are not found until one of sgi's
17 customers happens to trip over one.  Worse, XFS is the sole user of
lots of in-kernel code.  ext3 uses lots of underlying code that is also
used by other kernel pieces.  The segmentation helps in rooting out
problems.

It's a many-eyes problem.  There's a "Safe Operating Area" for most
software.  Probably if you went off and made a 4TB ext3 volume with the
journal on an iSCSI device and then scattered it with 150 million files,
you would run into a problem.  That's unknown territory.  But staying
within a normal < 1TB volume with < 10 million files will almost
certainly work without a problem.

Once I worked at a company and we bought NetApp filers.  NetApp sales
engineers told us there was no such thing as fsck for WAFL, their
filesystem.  6 months later we encountered something uniquely horrible
known as a "wack".  "Oh," they suddenly remembered, "there's no fsck
unless..."  We were not a mainstream user.  They had never tested our
workload.

We switched to VxFS on EMC.  6 months later VxFS destroyed an extremely
large filesystem.  We were out of the mainstream.  Never been tested.

We switched to many small Linux servers with medium-sized ext2
partitions.  No problems ever.

End of fable :)

-jwb





Re: Big filesystems.

2005-02-15 Thread Adam Skutt
Jeffrey W. Baker wrote:
And the only common thread to all of these is XFS.  You can say that
maybe NFS is the problem, maybe this or that is the problem but the fact
is that all the other filesystems survive in the environment and not
XFS.
I never implied NFS was the problem.  What I did say was that the claim 
"XFS is unsafe over MD or LVM because NFS against an XFS filesystem causes 
corruption" doesn't follow.

Both of the problems I mentioned were XFS-specific bugs.  I never 
implied they were NFS-related.

Note that CIFS causes the same corruption that NFS does.  Is Samba also
broken?
No, I never implied that.
Note that XFS will corrupt a loop device on any kernel.  I guess the
loop device isn't "below" XFS in your world view?
Proof?
XFS is broken because it uses its own buffer management layer, imported
to the tune of 1000s of lines of code from Irix.  It may be in the
kernel but it doesn't use kernel facilities and/or it uses them
incorrectly.
The pagecache layer has more or less been fully integrated as of 2.6. 
That layer was the reason XFS wasn't initially merged into 2.4, but lots 
of work went on to make it a complete member of the Linux kernel.

If you want to have a very large filesystem with lots of files *and* you
are only going to use it in the "normal" fashion - directly attached
storage not exported via NFS or CIFS - *and* you are only going to use
RHEL or SuSE kernels (not kernel.org) then you will probably be fine.
But you should realize what you are getting into.
I run a debian kernel exporting data via CIFS every day to myself and 
others, frequently pushing > 10Mb/s against several files with no data 
corruption whatsoever.

Without more details, there's no way to know whether what you were hit 
by are *known* bugs and therefore expected behavior.

While unfortunate, they are a reality of using any filesystem but 
ext2/ext3 on Linux (and an occasional reality of even them).

Adam


Re: Big filesystems.

2005-02-15 Thread David Wood
On Tue, 15 Feb 2005, Jeffrey W. Baker wrote:
Surely you aren't implying that Reiser uses anything as pedestrian as a
b-tree!  Why, Reiser's tree format is so novel, so utterly perfect, that
no human could have ever thought of it.  I understand their patent
applications are sailing through the approval process, greeted by nothing
but disbelief and Hosannas.
LOL.
I actually like ReiserFS. I do take Hans with a grain of salt, since quite 
a while after he declared his FS "ready for production" I found out his 
consistency check/recovery tools were still "Beta," and it wasn't uncommon 
for them to just choke and dump core. Maybe this is arguable, but I always 
thought a working "*fsck" was part of the whole production package.

This was all some time back. The problems were eventually addressed (from 
what I gather) years ago, and I haven't had any non-hardware-related 
"incidents" with the FS in years of pretty constant abuse.

Unfortunately XFS also repeatedly swallowed a number of my volumes.  I
found it to be more unstable than any filesystem I have used (save
VxFS).  When using XFS, one must not read from the underlying device, or
one risks corruption.  This leads one to believe that using XFS on LVM,
md, or enbd would be somewhat risky.  fsck.xfs is sometimes at a loss to
recover anything at all in these situations, even after running for
days.
That said, XFS is still your best choice if you've hit the hard limits
in ext3.
Ahh... _that_ said, it looks like (until they fix it) XFS is the best 
choice for punishing your enemies with.  :o

I hate to say it, but this is not the only place I have heard Linux/XFS 
horror stories. Of course I actually love experimenting, and there's 
nothing wrong with a work in progress, just so long as it's labeled.

I guess the moral of the story is that if you've got a big partition, I 
hope you've got an even bigger backup tape.  ;)



Re: Big filesystems.

2005-02-15 Thread Jeffrey W. Baker
On Tue, 2005-02-15 at 16:18 -0500, Adam Skutt wrote:

> What kernel?  What situation?  Unfortunately, especially as of late, 
> there have been several known "gotcha" situations that will cause data 
> corruption, especially under 2.6 (4K stacks being one, very full 
> filesystems another).

And the only common thread to all of these is XFS.  You can say that
maybe NFS is the problem, maybe this or that is the problem but the fact
is that all the other filesystems survive in the environment and not
XFS.

Note that CIFS causes the same corruption that NFS does.  Is Samba also
broken?

Note that XFS will corrupt a loop device on any kernel.  I guess the
loop device isn't "below" XFS in your world view?

XFS is broken because it uses its own buffer management layer, imported
to the tune of 1000s of lines of code from Irix.  It maybe be in the
kernel but it doesn't use kernel facilities and/or it uses them
incorrectly.

If you want to have a very large filesystem with lots of files *and* you
are only going to use it in the "normal" fashion - directly attached
storage not exported via NFS or CIFS - *and* you are only going to use
RHEL or SuSE kernels (not kernel.org) then you will probably be fine.
But you should realize what you are getting into.

-jwb





Re: Big filesystems.

2005-02-15 Thread Adam Skutt
Jeffrey W. Baker wrote:
> The latter part of this sentence is not supported by the former,
You are correct; rather, googling reveals that the locking model in the 
kernel doesn't support it, and there may be race conditions triggered 
only by having a block device mounted and doing raw I/O on it at the 
same time.

The kernel developers are certainly not interested in supporting such 
behavior, so it still isn't safe.

No, actually, it isn't.  Look at the XFS mailing list.  There's a number
of reports of boinked volumes just in the last month.
What kernel?  What distro?  Ubuntu has some patch that causes XFS 
corruption under seemingly random circumstances (I'm working on tracking 
it down).

2.6 has had a number of XFS-related bugs that could cause data 
corruption under a number of situations.

This does not explain why XFS is frequently reported to become corrupted
when exported over NFS.
Because the NFS code isn't locking properly?  Because the XFS code isn't 
locking properly?

Just because XFS over NFS causes corruption doesn't mean that XFS on LVM 
and MD shouldn't be trusted.  NFS is above XFS; LVM and MD are below it.

It doesn't really follow to not trust XFS on MD and LVM due to 
corruption using NFS.

I have *zero* trust in Linux's NFS implementation anyway (even though 
it's purportedly become better) since it bit me so badly back in the 2.2 days.


I came across this conclusion by losing numerous large filesystems in
the course of only 6 months before abandoning XFS.
What kernel?  What situation?  Unfortunately, especially as of late, 
there have been several known "gotcha" situations that will cause data 
corruption, especially under 2.6 (4K stacks being one, very full 
filesystems another).

Adam


Re: Big filesystems.

2005-02-15 Thread Kyle Rose
> Surely you aren't implying that Reiser uses anything as pedestrian as a
> b-tree!  Why, Reiser's tree format is so novel, so utterly perfect, that
> no human could have ever thought of it.  I understand their patent
> applications are sailing through the approval process, greeted by nothing
> but disbelief and Hosannas.

Had words with Hans recently? :)

Seriously, I've been very happy with ReiserFS *in most instances* in
the 5 years I've been using it.  Unfortunately, this one problem is
particularly irritating.  I'm hoping they fixed it in R4.

Cheers,
Kyle





Re: Big filesystems.

2005-02-15 Thread Adam Skutt
Jeffrey W. Baker wrote:
Unfortunately XFS also repeatedly swallowed a number of my volumes.  I
found it to be more unstable than any filesystem I have used (save
VxFS).  When using XFS, one must not read from the underlying device, or
one risks corruption. 
In Linux, doing this on *any* filesystem will potentially cause corruption.
This is why e2dump/e2restore are *unsafe* and not to be used.
Raw I/O operations bypass the buffer cache, so of course it'd be corrupted.
If you're dicking with device access while a partition is mounted and 
you lose data, you deserve what you get.  This behavior has been a no-no 
forever.

 This leads one to believe that using XFS on LVM,
md, or enbd would be somewhat risky. 
I dunno what 'enbd' is, but XFS on LVM or md is perfectly safe.  They're 
kernel drivers for starters, so they can coordinate block I/O 
operations.  They also sit below XFS in the I/O layer, XFS just sees 
them as another partition.

I dunno how you came across this conclusion at all.
Adam


Re: Big filesystems.

2005-02-15 Thread Jeffrey W. Baker
On Tue, 2005-02-15 at 15:43 -0500, Adam Skutt wrote:
> Jeffrey W. Baker wrote:
> > Unfortunately XFS also repeatedly swallowed a number of my volumes.  I
> > found it to be more unstable than any filesystem I have used (save
> > VxFS).  When using XFS, one must not read from the underlying device, or
> > one risks corruption. 
> In Linux, doing this on *any* filesystem will potentially cause corruption.
> 
> This is why e2dump/e2restore are *unsafe* and not to be used.
> 
> Raw I/O operations bypass the buffer cache, so of course it'd be corrupted.

The latter part of this sentence is not supported by the former,

> If you're dicking with device access while a partition is mounted and 
> you lose data, you deserve what you get.  This behavior has been a no-no 
> forever.

Reading from the device never causes corruption of any other filesystem.
You can dump an ext3 filesystem all day long.  You'll get a corrupt
dump, but you won't get a corrupt volume.

>   This leads one to believe that using XFS on LVM,
> > md, or enbd would be somewhat risky. 
> I dunno what 'enbd'

enbd is the enhanced network block device.

>  is, but XFS on LVM or md is perfectly safe.

No, actually, it isn't.  Look at the XFS mailing list.  There's a number
of reports of boinked volumes just in the last month.

>   They're 
> kernel drivers for starters, so they can coordinate block I/O 
> operations.  They also sit below XFS in the I/O layer, XFS just sees 
> them as another partition.

This does not explain why XFS is frequently reported to become corrupted
when exported over NFS.

> I dunno how you came across this conclusion at all.

I came across this conclusion by losing numerous large filesystems in
the course of only 6 months before abandoning XFS.

-jwb





Re: Big filesystems.

2005-02-15 Thread Jeffrey W. Baker
On Tue, 2005-02-15 at 14:59 -0500, Adam Skutt wrote:
> Kyle Rose wrote:
> > 
> > What bothers me is file deletion time.  Anyone have any clue why
> > ReiserFS takes so long to delete files, and why the delete operation
> > evidently blocks all other FS operations?  It seems that ReiserFS
> > should log the delete, and then have a kernel thread handling cleanup
> > in the background in such a way that it doesn't cause other operations
> > to block.
> It's because ReiserV3 has to rebalance two b-trees: one for the 
> metadata, and one for the actual data itself.  This is slow, especially 
> on large directories, I'd imagine.

Surely you aren't implying that Reiser uses anything as pedestrian as a
b-tree!  Why, Reiser's tree format is so novel, so utterly perfect, that
no human could have ever thought of it.  I understand their patent
applications are sailing through the approval process, greeted by nothing
but disbelief and Hosannas.

Right, so in my experience XFS destroys all other filesystems on
metadata operations.  XFS also formats and mounts the volume much more
quickly than Reiser or ext2/3.  XFS also supports very large volumes,
very large files, and more directory entries than any of the others.  I
used to have a giant test tar archive that only XFS could have extracted
in my expected lifetime.

Unfortunately XFS also repeatedly swallowed a number of my volumes.  I
found it to be more unstable than any filesystem I have used (save
VxFS).  When using XFS, one must not read from the underlying device, or
one risks corruption.  This leads one to believe that using XFS on LVM,
md, or enbd would be somewhat risky.  fsck.xfs is sometimes at a loss to
recover anything at all in these situations, even after running for
days.

That said, XFS is still your best choice if you've hit the hard limits
in ext3.

-jwb





Re: Big filesystems.

2005-02-15 Thread Adam Skutt
Kyle Rose wrote:
What bothers me is file deletion time.  Anyone have any clue why
ReiserFS takes so long to delete files, and why the delete operation
evidently blocks all other FS operations?  It seems that ReiserFS
should log the delete, and then have a kernel thread handling cleanup
in the background in such a way that it doesn't cause other operations
to block.
It's because ReiserV3 has to rebalance two b-trees: one for the 
metadata, and one for the actual data itself.  This is slow, especially 
on large directories, I'd imagine.

Adam


Re: Big filesystems.

2005-02-15 Thread Adam Skutt
Per Bojsen wrote:
I'm seeing several second mount times for big ReiserFS partitions as
well at every boot.  I was assuming it is doing some sort of sanity
check that depends on the size of the partition as part of the mount.
I'm seeing this on LVM.  Are you using LVM?
I haven't used reiserfs in ages, but I did use it in 2.4 sans LVM, and 
the mount delay is indeed dependent on the size of the volume/partition.

I don't know what it's doing, but I'm assuming it's journal replay + 
perhaps pre-caching of the b-tree.

Adam



Re: Big filesystems.

2005-02-15 Thread David Wood
On Tue, 15 Feb 2005, Kyle Rose wrote:
Same here, but that doesn't bother me so much.
What bothers me is file deletion time.  Anyone have any clue why
ReiserFS takes so long to delete files, and why the delete operation
evidently blocks all other FS operations?  It seems that ReiserFS
should log the delete, and then have a kernel thread handling cleanup
in the background in such a way that it doesn't cause other operations
to block.
Yeah, I know this is offtopic. :)
The MythTV guys did a bunch of comparisons on filesystems, because they 
run into this issue a lot: using big partitions, holding big files, 
deleting them frequently, etc. I recall they highlighted this problem 
with reiser, although I honestly haven't noticed it myself. I gather XFS 
gets high marks, but then there are rumors of trouble with it.



Re: Big filesystems.

2005-02-15 Thread David Wood
On Tue, 15 Feb 2005, Per Bojsen wrote:
*** Regarding Re: Big filesystems.; David Wood <[EMAIL PROTECTED]> adds:
David> Yes, that's every time.
I'm seeing several second mount times for big ReiserFS partitions as
well at every boot.  I was assuming it is doing some sort of sanity
check that depends on the size of the partition as part of the mount.
I'm seeing this on LVM.  Are you using LVM?
Yes. I hadn't thought it was specific to LVM, though, since I have much 
smaller filesystems in the same volume group that mount more or less 
instantly. Then again, I haven't tried without it.

It's too bad. I want the machine to boot faster, but I don't want to 
switch filesystems.  :)



Re: Big filesystems.

2005-02-15 Thread Kyle Rose
David Wood <[EMAIL PROTECTED]> writes:

> Yes, that's every time.

Same here, but that doesn't bother me so much.

What bothers me is file deletion time.  Anyone have any clue why
ReiserFS takes so long to delete files, and why the delete operation
evidently blocks all other FS operations?  It seems that ReiserFS
should log the delete, and then have a kernel thread handling cleanup
in the background in such a way that it doesn't cause other operations
to block.

Yeah, I know this is offtopic. :)

Kyle





Re: Big filesystems.

2005-02-15 Thread Per Bojsen
*** Regarding Re: Big filesystems.; David Wood <[EMAIL PROTECTED]> adds:

David> Yes, that's every time.

I'm seeing several second mount times for big ReiserFS partitions as
well at every boot.  I was assuming it is doing some sort of sanity
check that depends on the size of the partition as part of the mount.
I'm seeing this on LVM.  Are you using LVM?

Per

-- 
Per Bojsen  <[EMAIL PROTECTED]>
7 Francis Road
Billerica, MA 01821-3618
USA





Re: Big filesystems.

2005-02-15 Thread David Wood
Yes, that's every time.
On Tue, 15 Feb 2005, Tom Vier wrote:
On Tue, Feb 15, 2005 at 09:24:00AM -0500, David Wood wrote:
FWIW, I had expected ReiserFS to perform well at large partition sizes.
For the most part it does, but I was surprised to see there is already a
noticeable (~7-8 sec) delay mounting a ~400GB LVM part.
Did you mount it more than once? The first time you mount after mkreiserfs,
it takes a few seconds.
--
Tom Vier <[EMAIL PROTECTED]>
DSA Key ID 0x15741ECE



Re: Big filesystems.

2005-02-15 Thread Goswin von Brederlow
Tobias Prousa <[EMAIL PROTECTED]> writes:

> On Monday 14 February 2005 23:14, Juergen Kreileder wrote:
>
>> See 'Maximum On-Disk Sizes of the Filesystems' on
>> http://www.suse.de/~aj/linux_lfs.html
>
> Man, that's strange. Could anyone please tell me why a single file in reiserfs 
> can be bigger than the whole reiserfs filesystem? I don't get it.
>
> just curious,
> Tobi

Sparse files?

MfG
Goswin
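
Goswin's point is easy to demonstrate: a sparse file's apparent size is just
metadata, so it can legitimately exceed the space the filesystem could ever
allocate. A quick illustration (GNU dd assumed; the filename is arbitrary):

```shell
# Create a file whose *apparent* size is 10 GiB without writing a single
# data block; the filesystem records only the length, not 10 GiB of zeroes.
dd if=/dev/zero of=sparse.img bs=1 count=0 seek=10G

ls -l sparse.img   # apparent size: 10 GiB
du -k sparse.img   # blocks actually allocated: ~0

rm -f sparse.img
```

`du` reports near-zero usage because no data blocks were ever allocated;
only reads of the "holes" return zeroes on the fly.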





Re: Big filesystems.

2005-02-15 Thread Tobias Prousa
On Monday 14 February 2005 23:14, Juergen Kreileder wrote:

> See 'Maximum On-Disk Sizes of the Filesystems' on
> http://www.suse.de/~aj/linux_lfs.html

Man, that's strange. Could anyone please tell me why a single file in reiserfs 
can be bigger than the whole reiserfs filesystem? I don't get it.

just curious,
Tobi




Re: Big filesystems.

2005-02-15 Thread Tom Vier
On Tue, Feb 15, 2005 at 09:24:00AM -0500, David Wood wrote:
> FWIW, I had expected ReiserFS to perform well at large partition sizes. 
> For the most part it does, but I was surprised to see there is already a 
> noticeable (~7-8 sec) delay mounting a ~400GB LVM part.

Did you mount it more than once? The first time you mount after mkreiserfs,
it takes a few seconds.

-- 
Tom Vier <[EMAIL PROTECTED]>
DSA Key ID 0x15741ECE





Re: Big filesystems.

2005-02-15 Thread David Wood
FWIW, I had expected ReiserFS to perform well at large partition sizes. 
For the most part it does, but I was surprised to see there is already a 
noticeable (~7-8 sec) delay mounting a ~400GB LVM part.

On Mon, 14 Feb 2005, Ron Johnson wrote:
On Mon, 2005-02-14 at 23:14 +0100, Juergen Kreileder wrote:
Ben Russo <[EMAIL PROTECTED]> writes:
[snip]
,
| Kernel 2.6: For both 32-bit systems with option CONFIG_LBD set and for
| 64-bit systems: The size of a file system is limited to 2^73 (far too
| much for today). On 32-bit systems the size of a file is limited to 2
 ^
For clarity, I think that this should be added here:
"without CONFIG_LBD,"
| TiB. Note that not all filesystems and hardware drivers might handle
--
-
Ron Johnson, Jr.
Jefferson, LA USA
PGP Key ID 8834C06B I prefer encrypted mail.
"Microkernels have won."
Andrew Tanenbaum, January 1992



Re: Big filesystems.

2005-02-15 Thread Goswin von Brederlow
"Jeffrey W. Baker" <[EMAIL PROTECTED]> writes:

> On Mon, 2005-02-14 at 14:49 -0500, Ben Russo wrote:
>> I have a Big disk array with 3 1.6TB RAID 5 LUNs and a global hot spare.
>> I want to bind them together into a single 4.8TB filesystem with LVM.
>> 
>> I have an AMD Opteron Processor in the server to attach to the array.
>> I assume that I need a 64bit 2.6 series kernel yes?
>> Are there any special build options necessary, or do the regular 64-bit 
>> 2.6 kernels allow such large filesystems?
>
> ext3 with the standard 4KB block will allow a 4TB filesystem.  XFS is
> practically unlimited.

Does 8-64K blocksize now work for ext2/3?

MfG
Goswin





Re: Big filesystems.

2005-02-14 Thread Ron Johnson
On Mon, 2005-02-14 at 23:14 +0100, Juergen Kreileder wrote:
> Ben Russo <[EMAIL PROTECTED]> writes:
[snip]
> ,
> | Kernel 2.6: For both 32-bit systems with option CONFIG_LBD set and for
> | 64-bit systems: The size of a file system is limited to 2^73 (far too
> | much for today). On 32-bit systems the size of a file is limited to 2
  ^
For clarity, I think that this should be added here:
"without CONFIG_LBD,"

> | TiB. Note that not all filesystems and hardware drivers might handle

-- 
-
Ron Johnson, Jr.
Jefferson, LA USA
PGP Key ID 8834C06B I prefer encrypted mail.

"Microkernels have won."
Andrew Tanenbaum, January 1992





Re: Big filesystems.

2005-02-14 Thread Juergen Kreileder
Ben Russo <[EMAIL PROTECTED]> writes:

> I have a Big disk array with 3 1.6TB RAID 5 LUNs and a global hot
> spare.  I want to bind them together into a single 4.8TB filesystem
> with LVM.
>
> I have an AMD Opteron Processor in the server to attach to the
> array.  I assume that I need a 64bit 2.6 series kernel yes?  Are
> there any special build options necessary, or do the regular 64-bit
> 2.6 kernels allow such large filesystems?

See 'Maximum On-Disk Sizes of the Filesystems' on
http://www.suse.de/~aj/linux_lfs.html

,
| Kernel 2.6: For both 32-bit systems with option CONFIG_LBD set and for
| 64-bit systems: The size of a file system is limited to 2^73 (far too
| much for today). On 32-bit systems the size of a file is limited to 2
| TiB. Note that not all filesystems and hardware drivers might handle
| such large filesystems.
`
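
For reference, the option that quote hinges on is a one-line kernel config
switch. A sketch of the resulting .config fragment for a 32-bit 2.6 kernel
that must address block devices larger than 2 TiB (the menu location varies
by kernel version, so check your own tree):

```
# Enable large block device support (>2 TiB) on 32-bit 2.6 kernels:
CONFIG_LBD=y
```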

Juergen

-- 
Juergen Kreileder, Blackdown Java-Linux Team
http://www.blackdown.org/java-linux/java2-status/





Re: Big filesystems.

2005-02-14 Thread Jeffrey W. Baker
On Mon, 2005-02-14 at 14:49 -0500, Ben Russo wrote:
> I have a Big disk array with 3 1.6TB RAID 5 LUNs and a global hot spare.
> I want to bind them together into a single 4.8TB filesystem with LVM.
> 
> I have an AMD Opteron Processor in the server to attach to the array.
> I assume that I need a 64bit 2.6 series kernel yes?
> Are there any special build options necessary, or do the regular 64-bit 
> 2.6 kernels allow such large filesystems?

ext3 with the standard 4KB block will allow a 4TB filesystem.  XFS is
practically unlimited.
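
For what it's worth, the 4 TB figure is a kernel-of-the-day ceiling rather
than an on-disk-format one: ext2/ext3 address blocks with 32-bit numbers,
so with 4 KiB blocks the format itself tops out around 2^32 blocks. A quick
sanity check of that arithmetic (16 TiB is the format limit, not what a
2005-era kernel would actually mount):

```shell
# 2^32 addressable blocks x 4096-byte blocks, expressed in TiB (2^40 bytes):
echo $(( (1 << 32) * 4096 / (1 << 40) ))   # prints 16
```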

2.6 would clearly be better.  2.4 barely works on AMD64.

-jwb

