There seems to be something like an inverse-square law to Moore's Law at
work: the amount of available processor power and the amount actually
used by end-user applications on a typical day appear to be inversely
proportional, and the ratio is apparently a constant over time.
I have always wondered why UNIX, and FreeBSD in particular (vs. Windows
or even the old non-BSD-based MacOS), has a constant that sits much
closer to the end of the scale that prefers to idle the processor
rather than have it chase down some low-priority daemon that never gets
called.
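
One way to actually put a number on that idle preference: a minimal C
sketch that samples FreeBSD's kern.cp_time sysctl (the per-state CPU
tick counters) twice and reports how much of the interval the processor
spent idle. The five-second window is an arbitrary choice of mine:

    /* idle.c -- how idle is the box?  Samples kern.cp_time twice
     * and reports the idle share of the interval. */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <sys/resource.h>   /* CPUSTATES, CP_IDLE */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void sample(long cp[CPUSTATES])
    {
        size_t len = sizeof(long) * CPUSTATES;

        if (sysctlbyname("kern.cp_time", cp, &len, NULL, 0) == -1) {
            perror("sysctlbyname");
            exit(1);
        }
    }

    int main(void)
    {
        long a[CPUSTATES], b[CPUSTATES], total = 0;
        int i;

        sample(a);
        sleep(5);                       /* arbitrary sampling window */
        sample(b);

        for (i = 0; i < CPUSTATES; i++)
            total += b[i] - a[i];
        printf("idle: %.1f%% of the last 5 seconds\n",
            100.0 * (b[CP_IDLE] - a[CP_IDLE]) / (total ? total : 1));
        return 0;
    }

On a typical lightly loaded box I'd expect this to print a number in
the high nineties, which is rather the point.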

One thing that occurred to me in the process is that there is most
definitely a growth curve over time in the ratio of meta-data/glue code
to the actual engine (local or remote). This forces tool developers to
err on the side of caution when writing any middleware and/or build
utilities, which pushes that ratio further in the unfavorable
direction; the OS and applications then have to compensate for all that
caution with a more complex API, which just starts the cycle all over
again... I have always marveled at how UNIX, and BSD in particular,
avoided the feature-heavy, handle-every-possible-case style of OS, so
that the upper layers have to conform to a much simpler API (thus
dampening the cycle)...
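
To make the simpler-API point concrete, here's a small sketch of my own
(plain POSIX, nothing FreeBSD-specific): because UNIX reduces files,
pipes, sockets, and devices to a descriptor you read(2) and write(2), a
filter needs no per-case handling at all:

    /* copyfd.c -- one loop, any descriptor.  stdin/stdout may be a
     * file, a pipe, a socket, or a tty; the code neither knows nor
     * cares. */
    #include <unistd.h>

    static int copy_fd(int from, int to)
    {
        char buf[4096];
        ssize_t n;

        while ((n = read(from, buf, sizeof buf)) > 0) {
            ssize_t off = 0;

            while (off < n) {           /* cope with short writes */
                ssize_t w = write(to, buf + off, n - off);

                if (w == -1)
                    return -1;
                off += w;
            }
        }
        return n == 0 ? 0 : -1;         /* 0 on EOF, -1 on error */
    }

    int main(void)
    {
        return copy_fd(0, 1) == 0 ? 0 : 1;
    }

A feature-heavy API would hand middleware a different entry point for
each of those cases, and the caution/complexity cycle starts turning.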
comments?