"Happy Kamote Foundation" <[EMAIL PROTECTED]> writes:
> Yumyum.
>
> I'll cough up something.
>
> Linux may be ahead in performance, but not in filesystems. What
> advanced filesystem feature is Linux ahead on? Having more filesystems
> that all do the same thing with varying degrees of data loss and
> corruption? That's not advancement to me.
>
> And as for speed, Linux isn't ahead in all cases. I've tested this on both
> NetBSD (without softupdates) and Gentoo Linux (ext3) on the same machine:
>
> $ time tar xzf
> $ time cp -R my-large-collection my-large-collection-2
>
> Results:
> NetBSD: 11.22s - 29.41s
> Linux:  18.32s - 38.42s
This is such a microbenchmark.
* You've only given absolute times. How much is spent in userspace? In
  kernelspace? Is the time due solely to the filesystem, or to the
  layers between the filesystem code and the hardware (e.g. the PCI
  drivers, and the I/O scheduler in Linux or its equivalent in NetBSD)?
  A sketch of how to start separating these follows below.
* The two filesystems lay out data in different ways, and they have
  different strategies for dealing with corruption etc. (which is, I
  guess, the point of this whole thing). However:
  - ext3 as a filesystem has several journalling options. Which one
    was used? 'data=writeback' would give the greatest throughput;
    'data=journal' would be the safest, yet slowest.
* Again, the I/O scheduler has something to do with performance too.
  There are three scheduler policies in the 2.6 series: cfq, deadline,
  and as (anticipatory). There's also a noop scheduler, which is
  basically a "don't really do any scheduling" policy.
In fact, it may all lie in the I/O scheduling, and not the filesystem!
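As a starting point, here's a minimal sketch of how you'd begin
separating these variables on the Linux side (the device and mount
point names are placeholders; adjust to your setup):

    # 1. Split the absolute time into user vs. kernel components;
    #    time(1) reports real/user/sys separately:
    $ time cp -R my-large-collection my-large-collection-2

    # 2. Check which I/O scheduler the disk is using (2.6 sysfs;
    #    the active policy is shown in brackets):
    $ cat /sys/block/hda/queue/scheduler
    # ...and switch it at runtime, as root, to test another one:
    $ echo deadline > /sys/block/hda/queue/scheduler

    # 3. Make the ext3 journalling mode explicit when mounting
    #    (the default is data=ordered):
    $ mount -t ext3 -o data=writeback /dev/hda1 /mnt/test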
The only way to really investigate this in any sane and statistically
significant manner is one of the following:
- Install or port the filesystem to Linux (or NetBSD), without
changing major parts. This way, the essential middle layers are the
same, and you're really testing filesystem performance (and not
  inadvertently the middle layers such as the I/O scheduler).
- Rewrite major parts of the kernel of each, or at least reconfigure
them to have the middle layers perform the same way -- which, IMHO,
is crazy and very difficult in the "you might as well write another
kernel" way.
- Run a variety of other tests that exercise the middle layers
  between filesystem and hardware, which you can use as a guide to
  figure out how much of the performance is due to those layers. Not
  as accurate as the first; a rough sketch follows this list.
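To give a rough idea of that third option: read straight from the
block device, bypassing the filesystem entirely, so you're timing only
the layers below it (again, device names are placeholders; this needs
root, and you should drop caches between runs so repeats aren't served
from RAM):

    # Raw sequential read -- no filesystem code involved at all:
    $ time dd if=/dev/hda of=/dev/null bs=1M count=512

    # Drop the page cache before the next run (Linux 2.6.16+):
    $ echo 3 > /proc/sys/vm/drop_caches

    # Then do the same against NetBSD's raw device (e.g. /dev/rwd0d).
    # If the raw numbers differ as much as the tar/cp numbers did,
    # the filesystems aren't what you're measuring.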
> you are welcome to test it too.
>
> Linux has many filesystems, none of which actually appear to be stable,
> especially under very heavy loads. BSD doesn't want many filesystems
> that all do the same thing, only with different sets of problems.
Umm... Linux does have UFS, although only the older version, and
read-only (IIRC).
And it has JFS -- proven; XFS -- also proven; ext2 -- not journaled, but
stable and proven...
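For what it's worth, a quick sketch of mounting these on Linux (device
names hypothetical):

    # A 4.4BSD-style UFS partition, read-only:
    $ mount -t ufs -o ro,ufstype=44bsd /dev/hdb1 /mnt/bsd

    # JFS and XFS mount like any other native filesystem:
    $ mount -t jfs /dev/hdb2 /mnt/jfs
    $ mount -t xfs /dev/hdb3 /mnt/xfs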
I could name the filesystems for you, and I could tell you that they
have gone through a lot of engineering man-hours and real-world use.
And they have been proven stable.
BTW, what do you mean *exactly* by "stable...during very heavy loads"?
Do you mean that the filesystems crash (make the kernel panic)? Or that
they cope with the I/O load suboptimally? What exactly?
> and looking at the changelog,
> http://www.kernel.org/pub/linux/kernel/v2.6/testing/ChangeLog-2.6.17-rc4
>
> [XFS] Fix a possible metadata buffer (AGFL) refcount leak when fixing
> an AG freelist
> [XFS] Fix a project quota space accounting leak on rename.
> [XFS] Fix a possible forced shutdown due to mishandling write barriers
> with remount,ro
>
> http://www.kernel.org/pub/linux/kernel/v2.6/testing/ChangeLog-2.6.17-rc6
>
> [PATCH] ext3 resize: fix double unlock_super()
>
> So this is what you called stable?
Mind you, 2.6 right now is almost the equivalent of an odd-series
kernel (i.e. experimental), albeit more stable, so fixes like these
are to be expected.
However, a few things to point out:
- The rate of change in the Linux kernel is IIRC greater than in
NetBSD (I can't grep the stats source at the moment though; Google
yourself ;)
- Note you quoted a *possible* metadata buffer refcount leak -- it's
  been spotted that it *may* occur. You also pointed to a *possible*
  forced shutdown due to mishandling write barriers with
  remount,ro -- remounting a filesystem from rw to ro is an edge case
  and is not typical. In the same manner, do such features even exist
  in the NetBSD filesystem?
- Note the patch in ext3 -- that's for the ext3 resizing code. How
  many times will you be resizing your filesystem? Yet another edge
  case. (A sketch of both operations follows this list.)
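For concreteness, here's roughly what triggering those two code paths
looks like (device and mount point names hypothetical; note how far
both are from everyday use):

    # The write-barrier case: flip a mounted filesystem to
    # read-only and back:
    $ mount -o remount,ro /mnt/data
    $ mount -o remount,rw /mnt/data

    # The ext3 resize case: grow the filesystem after enlarging
    # the underlying volume:
    $ resize2fs /dev/hdb1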
Edge cases do not a bad filesystem make. The fact that they're being
found and fixed only proves that the code is being widely used. That's
it.
> I don't believe in benchmarks either; I want to prove it for myself.
> Fefe's benchmark is one example of such tests. Heck, he even asked the
> OpenBSD mailing list "how to fine-tune OpenBSD" AFTER publishing the
> benchmark. He benchmarked a system he really doesn't know.
Benchmarks prove nothing. See above.
--
JM Ibanez
Senior Software Engineer
Orange & Bronze Software Labs, Ltd. Co.
[EMAIL PROTECTED]
http://software.orangeandbronze.com/