When things didn't match up, that was a clue that either

    - the benchmark was broken
    - the code was broken
[...]

I would draw an object-oriented dualism here:


[1] methods (kernel module) ---- [2] objects (formatted partition)
       |                                |
       |                                |
[3] benchmarks ----------------- [4] user-space utilities (fsck)


User-space utilities investigate "object corruptions", whereas
benchmarks investigate "software corruptions" (bugs in the source
code, broken design, and so on).

It is clear that "software" can be "corrupted" in many more ways
than "objects" can. Indeed, it is known that the dual space V*
(of all linear functionals on V) is, in general, a much more
complex object than V itself.
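
For a finite-dimensional V the two spaces are isomorphic, so the
asymmetry really only shows up in infinite dimensions; in standard
notation, taking V to be the space of finitely supported real
sequences:

    \dim V < \infty \;\Rightarrow\; V^* \cong V,
    \qquad
    V = \mathbb{R}^{(\mathbb{N})} \;\Rightarrow\;
    V^* \cong \mathbb{R}^{\mathbb{N}},
    \quad \dim V^* > \dim V.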

So a benchmark is a process which takes a set of methods (we
consider only "software" benchmarks) and assigns them numerical
values drawn from a set extended with a special (the worst)
value, CRASH.
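
As a rough sketch, with made-up names and types (this is not any
real benchmark API), such a benchmark can be modelled as a map
from methods to scores over a value set extended with CRASH:

struct score {
	int    crashed;   /* non-zero: the method hit the worst value, CRASH */
	double value;     /* meaningful only when crashed == 0 */
};

struct method {
	const char *name;
	int (*run)(double *result);  /* 0 on success, non-zero on crash */
};

/* Map one method to a score, folding any failure into CRASH. */
static struct score benchmark(const struct method *m)
{
	struct score s = { .crashed = 0, .value = 0.0 };

	if (m->run(&s.value) != 0)
		s.crashed = 1;
	return s;
}

Applying this map to a whole set of methods gives the raw scores;
the categories below differ only in who compares those scores, and
why.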

There are three main categories of benchmark usage:

1) Internal testing

An engineer optimizes a file system (e.g. for a customer) by
choosing functions or plugins as winners in a set of internal
(local) "nominations" (see the sketch after this list).

2) Business plans

A system administrator chooses a "winner" in some (global)
"nomination" of file systems in accordance with internal
business plans.

3) Flame and politics

Someone presents a "nomination" (usually with the "winner" chosen
from a restricted number of nominated members) to the public,
although nobody asked them to do it.
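
Coming back to category 1: picking a "winner" in such an internal
nomination is just a comparison of scores in which CRASH
disqualifies a nominee. A minimal sketch, again with made-up
names:

struct nominee {
	const char *plugin;
	int         crashed;  /* CRASH: disqualified */
	double      score;    /* valid only when crashed == 0 */
};

/* Return the index of the winning plugin, or -1 if every nominee crashed. */
static int pick_winner(const struct nominee *n, int count)
{
	int i, best = -1;

	for (i = 0; i < count; i++) {
		if (n[i].crashed)
			continue;
		if (best < 0 || n[i].score > n[best].score)
			best = i;
	}
	return best;
}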