Re: Filesystem overhead

2003-08-04 Thread michael . odonnell
> we'd know how to get the best performance when writing absolutely
> synchronously, i.e. the data had to actually end up on a disk platter
> and not just get parked in some cache, like in the buffer cache or in
> the drive controller's cache.

I should have mentioned why we instrumented rewrite vers…
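The "actually end up on a disk platter" requirement above maps to synchronous I/O flags and fsync. A minimal sketch in Python (the path is hypothetical; note that even fsync flushes only the kernel's buffer cache — a drive controller's own write cache can still hold the data, which is exactly the caveat raised in the thread):

```python
import os

def write_synchronously(path, data):
    # O_SYNC asks the kernel to complete each write through to the device
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush file data and metadata out of the buffer cache
    finally:
        os.close(fd)

write_synchronously("/tmp/sync_demo.bin", b"payload")
```

Benchmarking with and without O_SYNC/fsync is one way to measure the rewrite cost the poster was instrumenting.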

Re: Filesystem overhead

2003-08-04 Thread michael . odonnell
>> Very cool, that was revealing. Perhaps this discussion can evolve
>> into how journalling (e.g. ext3, etc.) works and why it is good/bad.
>> Anybody?
>
> I would like to see some real metrics on:
>   ext2
>   ext3
>   JFS
>   XFS
>   ReiserFS

FWIW (which may not be much in the context of the current di…

Re: Filesystem overhead

2003-08-04 Thread Andrew W. Gaunt
Ben, et al., Your explanation has corrected some misconceptions I had regarding journaling filesystems. Thanks. I think I've gleaned that the journal is an "add before you subtract" kind of system, meaning you never put at risk information you don't have a copy of squirreled away somewhere els…

Re: Filesystem overhead

2003-08-02 Thread bscott
On Wed, 30 Jul 2003, at 8:35am, [EMAIL PROTECTED] wrote:
> Very cool, that was revealing. Perhaps this discussion can evolve into
> how journalling (e.g. ext3, etc.) works and why it is good/bad. Anybody?

If a system crashes (software, hardware, power, whatever) in the middle of a write transact…
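The "add before you subtract" idea from the thread — record the intended change durably before touching the real data, so a crash mid-update can be replayed — can be sketched as a toy write-ahead journal. This is a conceptual illustration with hypothetical file paths, not how ext3's block-level journal is actually laid out:

```python
import json
import os

JOURNAL = "/tmp/demo_journal.log"   # hypothetical paths for illustration
DATA = "/tmp/demo_data.json"

for p in (JOURNAL, DATA):           # start from a clean slate for the demo
    if os.path.exists(p):
        os.remove(p)

def load():
    if os.path.exists(DATA):
        with open(DATA) as f:
            return json.load(f)
    return {}

def commit(update):
    # 1. Record the intended change in the journal and force it to disk
    with open(JOURNAL, "w") as j:
        json.dump(update, j)
        j.flush()
        os.fsync(j.fileno())
    # 2. Apply the change to the real data
    state = load()
    state.update(update)
    with open(DATA, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())
    # 3. Only now is the journal entry no longer needed
    os.remove(JOURNAL)

def recover():
    # After a crash, a surviving journal entry is simply replayed;
    # the update is idempotent, so applying it twice is harmless.
    if os.path.exists(JOURNAL):
        with open(JOURNAL) as j:
            commit(json.load(j))

commit({"a": 1})
```

A crash between steps 1 and 3 leaves the journal on disk, and `recover()` finishes the interrupted transaction — which is why a journaled filesystem comes back up without a full fsck.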

Re: Filesystem overhead

2003-07-31 Thread Bob Bell
On Wed, Jul 30, 2003 at 05:48:06PM -0400, Bill Freeman <[EMAIL PROTECTED]> wrote:
> Also, thinking about it later, I'm likely wrong that an inode was a
> whole block, even with 512 byte blocks. More likely is that original
> UFS inodes were 64 bytes, with 8 fitting in a 512 byte block.

I'm sorry…

Re: Filesystem overhead

2003-07-31 Thread Andrew W. Gaunt
We've got some boxen here, each with a 3ware RAID controller and 7x200GB disks in a RAID5 configuration. They run Red Hat 8 with LVM splitting the array into logical volumes. The logical volumes have ext3 filesystems on them. I think it's kind of neat that we can use tools to manage them that apparentl…

Re: Filesystem overhead

2003-07-30 Thread bscott
On Wed, 30 Jul 2003, at 7:12pm, [EMAIL PROTECTED] wrote:
> Probably, the best thing that ext3 has going for it is its compatibility
> with ext2.

Yah. I would also go so far as to say that EXT3 is the most robust (in terms of implementation) journaling filesystem available on Linux. Not because…

Re: Filesystem overhead

2003-07-30 Thread Jerry Feldman
On Wed, 30 Jul 2003 17:23:56 -0400 Tom Buskey <[EMAIL PROTECTED]> wrote:
> I favor running a journalling FS. You're much less likely to get
> corruption from a crash. If you do crash, your system will come up
> faster. The performance hit is negligible.
>
> I've used ReiserFS in the past on my…

Re: Filesystem overhead

2003-07-30 Thread Bill Freeman
I noticed in the text that I sent to Ben that he quoted, that I typed "...washing machine sided drives...". I really meant, should anyone be in doubt, "...washing machine SIZED drives...". Also, thinking about it later, I'm likely wrong that an inode was a whole block, even with 5…

Re: Filesystem overhead

2003-07-30 Thread Tom Buskey
Jerry Feldman wrote:
> On Wed, 30 Jul 2003 08:35:16 -0400 "Andrew W. Gaunt" <[EMAIL PROTECTED]> wrote:
>> Very cool, that was revealing. Perhaps this discussion can evolve into
>> how journalling (e.g. ext3, etc.) works and why it is good/bad.
…

Re: Filesystem overhead

2003-07-30 Thread Jerry Feldman
On Wed, 30 Jul 2003 08:35:16 -0400 "Andrew W. Gaunt" <[EMAIL PROTECTED]> wrote:
> Very cool, that was revealing. Perhaps this discussion can evolve into
> how journalling (e.g. ext3, etc.) works and why it is good/bad. Anybody?

I would like to see some real metrics on:
  ext2
  ext3
  JFS
  XFS
  Reiser…
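One simple way to get the kind of metrics Jerry asks for is to time a synchronous write of a fixed amount of data on each filesystem under test. A minimal sketch (the path is hypothetical — point it at a file on each of the ext2/ext3/JFS/XFS/ReiserFS mount points being compared; a real benchmark would also vary block sizes, use larger files, and repeat runs):

```python
import os
import time

def timed_sync_write(path, size_mb=4):
    """Time writing size_mb megabytes, with an fsync at the end so the
    kernel's buffer cache cannot hide the filesystem's real write cost."""
    data = b"\0" * (1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed

elapsed = timed_sync_write("/tmp/fs_bench.bin")
```

Running the same call on each volume gives directly comparable numbers; without the fsync, you would mostly be measuring memory speed.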

Re: Filesystem overhead

2003-07-30 Thread Andrew W. Gaunt
Very cool, that was revealing. Perhaps this discussion can evolve into how journalling (e.g. ext3, etc.) works and why it is good/bad. Anybody?

[EMAIL PROTECTED] wrote:
> Hello world! Okay, I have satisfied my curiosity in this matter. Bill
> Freeman <[EMAIL PROTECTED]>, who replied to me off-li…

Re: Filesystem overhead

2003-07-29 Thread bscott
…al(?) Unix File System), but most of the concepts (if not the numbers) apply to EXT2/3 as well.

-- Begin forwarded message --
Date: Mon, 28 Jul 2003 13:24:16 -0400
From: Bill Freeman <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: Filesystem overhead

Ben, I…

Re: Filesystem overhead

2003-07-28 Thread bscott
On Mon, 28 Jul 2003, at 7:50pm, [EMAIL PROTECTED] wrote:
> if I remember correctly, it's that du gets the size in K and then if you
> ask for bytes, converts the K to bytes... but it's been awhile.

What "du" actually does is use the "stat" family of system calls to find information about the fil…
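The stat-based behavior described above is easy to see directly: `stat` reports both the logical size (`st_size`, what `ls -l` shows) and the allocated space in 512-byte units (`st_blocks`, what `du` is built on). A small sketch with a hypothetical demo file:

```python
import os

def du_bytes(path):
    # st_blocks counts 512-byte units actually allocated on disk,
    # which is what du reports, independent of the logical file size
    return os.stat(path).st_blocks * 512

def logical_bytes(path):
    # st_size is the logical length, which is what ls -l reports
    return os.stat(path).st_size

path = "/tmp/du_demo.bin"
with open(path, "wb") as f:
    f.write(b"x")  # a single logical byte...
# ...but du_bytes(path) reports at least one whole filesystem block
```

On a typical 4 KiB-block filesystem, the 1-byte file above shows a logical size of 1 but an allocated size of 4096 — the same effect, at larger scale, behind the thread's `du` vs `ls` discrepancy.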

Re: Filesystem overhead

2003-07-28 Thread Ben Boulanger
On Mon, 28 Jul 2003 [EMAIL PROTECTED] wrote:
> For example, I have an image of a data CD in a single file. The actual
> size of the logical file (as reported by "stat", "ls", and other tools) is
> 526,397,440 bytes. However, the "du" utility says it uses 526,917,632
> bytes.

That is a differen…
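The 520,192-byte gap between those two figures happens to match exactly what ext2/ext3 indirect blocks would add for a file of this size — a back-of-envelope check assuming 4 KiB blocks, 4-byte block pointers, and the classic 12-direct-pointer inode layout, not a claim about the actual volume in question:

```python
BLOCK = 4096
PTRS = BLOCK // 4          # 1024 block pointers fit in one indirect block
DIRECT = 12                # direct pointers in an ext2/ext3 inode

size = 526_397_440                   # logical size reported by ls/stat
data_blocks = -(-size // BLOCK)      # ceiling division -> 128515 data blocks

# Blocks addressed beyond the 12 direct pointers need indirection:
beyond = data_blocks - DIRECT
single = 1                           # one single-indirect block (next 1024)
via_double = beyond - PTRS           # the rest go through double indirection
double = 1 + -(-via_double // PTRS)  # the double-indirect block + children

overhead = (single + double) * BLOCK       # 127 blocks = 520,192 bytes
total = (data_blocks + single + double) * BLOCK
```

With these assumptions `total` comes out to 526,917,632 — precisely the `du` figure quoted above, suggesting the difference is the file's own block-pointer metadata rather than journal or superblock overhead.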

Re: Filesystem overhead

2003-07-28 Thread bscott
On Mon, 28 Jul 2003, at 2:11pm, [EMAIL PROTECTED] wrote:
> That is most likely the metadata kept by most logging file systems. Or
> it's the duplicate superblocks. (You see these when mkfs is run.)

I would buy that if I was comparing the size of the raw device (partition) to the available spac…

Re: Filesystem overhead

2003-07-28 Thread Bruce Dawson
That is most likely the metadata kept by most logging file systems. Or it's the duplicate superblocks. (You see these when mkfs is run.)

___
gnhlug-discuss mailing list
[EMAIL PROTECTED]
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss

Filesystem overhead

2003-07-28 Thread bscott
Hello list,

  Red Hat Linux 7.3
  EXT3 filesystem
  Linux kernel 2.4.20-18.7
  EXT3 filesystem driver 2.4-0.9.19
  glibc 2.2.5-43
  GNU fileutils 4.1-10.1

I'm aware of the fact that there is overhead in any filesystem, due to things like metadata, internal fragmentation, and such. Up until n…
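One of the overhead sources named here, internal fragmentation, is simple to demonstrate: a file's last (or only) partial block still occupies a whole allocation unit. A small sketch using a temporary file:

```python
import os
import tempfile

def allocation_overhead(path):
    """Return (logical_size, allocated_size) for one file: st_size vs
    st_blocks, the latter counted in 512-byte units as du sees it."""
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512

# A 100-byte file still occupies at least one whole filesystem block,
# which is one reason du can report more than ls -l.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 100)
os.close(fd)
logical, allocated = allocation_overhead(path)
```

Summing `allocated - logical` over a whole tree gives a rough measure of a filesystem's internal fragmentation for a given workload.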