Steve Fairhead wrote:

<snip>

Second, you mentioned embedded work, which is my main work area. Yes,
embedded stuff needs to be stable long-term - but the Internet isn't:
threats change, and OpenBSD evolves. A classic solution to that (which I've
used) is to simply accept that the legacy embedded stuff should not be
directly connected to the Internet, and to use a current (or at least
regularly maintained) OpenBSD machine as a gateway. Or, to put it another
way: use the right tools for the job.

Hey Steve, long time no chat.... I've not been reading c.a.e. for a while.
I finally got Novell NFS 3.0 working, thanks to a melange of code from
patches (and thanks for your initial participation).

I agree that online threats change; my argument is for a stable core o/s,
with patches for threat mitigation and a stable API, ABI, and configuration
within a major release number, to make life easier for small shops that
can't afford to shoot at moving targets all the time. I need to run on
old hardware, and reading the commits and changes makes me fear that
performance regressions would cripple my systems if I continually 'upgraded'.

Managing threats requires resources, and it should be up to the user to
understand and choose threat-management solutions within the scope of the
hardware resources available to him. Performance data is often lacking, so
I take a conservative approach: backport what I need, then test for
stability and performance on my own hardware. That approach isn't much in
evidence within OpenBSD development; as Theo stated, it doesn't 'excite'
the developers, and of course mature hardware is often no longer available
to developers, so support gets dropped.

I had argued for a 'tiered' release structure: major releases that are
expected to run well on a certain class of hardware over the long term, and
minor releases that address bugs and online threats. No one expects
MS Windows XP to run at all on a 486/33 with 16MB RAM, but they do expect
Win98SE to do so, and indeed that o/s is still a viable product for many
people. Telling them they can only have 'Vista' benefits only MS, which
increasingly relies on forced migration as a business model. Telling folks
'hardware is cheap, buy something newer' doesn't address users of dedicated
systems built around particular architectural constraints; it speaks mainly
to that vast set of commodity computer users, and in the dedicated spaces it
amounts to suggesting costly upgrades.

Some time ago I posed performance questions on the openbsd-sparc lists in
the hope of getting performance and resource data that could direct my
decisions regarding 'upgrades' on older sparc architectures; the replies
were essentially along the lines of 'try it', which I guess is a fair
expectation in an open-source environment, but on a rapid release cycle I
just cannot manage that.

Having profiling data on system calls, library functions, facilities like
'pf', etc. for various architectures, updated on each release, would go a
long way towards permitting an objective analysis for upgrade decisions.
Certainly, when a release drops support for my hardware, that is a show
stopper right there and everything else is moot.
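
To make the idea concrete, here is a minimal sketch of the kind of
per-release measurement I have in mind, using nothing beyond the standard
C library. The iteration count and the choice of getpid() as the call under
test are arbitrary placeholders (on some systems getpid() is cached in libc,
so a real suite would cover a broader set of syscalls, library functions and
facilities like pf); the point is only that numbers like these, published
per release and per architecture, would let people compare before upgrading.

    /* Rough syscall microbenchmark sketch: times N getpid() calls.
     * Assumes only the standard C library; the iteration count and the
     * choice of getpid() are illustrative, not a real test suite. */
    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void)
    {
        struct timeval start, end;
        const long iterations = 1000000L;
        long i;
        double elapsed;

        gettimeofday(&start, NULL);
        for (i = 0; i < iterations; i++)
            (void)getpid();                 /* cheap syscall under test */
        gettimeofday(&end, NULL);

        elapsed = (end.tv_sec - start.tv_sec)
                + (end.tv_usec - start.tv_usec) / 1e6;
        printf("%ld calls in %.3f s (%.2f us/call)\n",
               iterations, elapsed, elapsed * 1e6 / iterations);
        return 0;
    }

Run on two releases on the same box, even a crude loop like that would tell
me more than 'try it'.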

I recently ported ucos-ii to a twenty-year-old MCU because, for me, it was
the right tool for the job, and the advantages of the architecture outweighed
the pressure to use a newer part. Layering comm stacks, interpreters, and
mini-GUIs on top of it produced a framework for a large number of projects,
one that leveraged our investment in ICE and development systems and was the
only cost-effective solution for several of those projects.
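
For flavor, here is a minimal sketch of the sort of startup code involved,
assuming a working ucos-ii port; the BspCommInit()/BspCommPoll() hooks, the
stack size and the task priority are hypothetical placeholders for whatever
a real board and comm stack would supply.

    /* Minimal ucos-ii startup sketch (hypothetical board support assumed).
     * Stack size and priority are illustrative values, not tuned numbers. */
    #include "ucos_ii.h"

    /* Hypothetical board-support hooks; a real port supplies its own. */
    extern void BspCommInit(void);
    extern void BspCommPoll(void);

    #define COMM_TASK_PRIO      4u
    #define COMM_TASK_STK_SIZE  128u

    static OS_STK CommTaskStk[COMM_TASK_STK_SIZE];

    static void CommTask(void *pdata)
    {
        (void)pdata;
        BspCommInit();                  /* bring up the comm hardware */
        for (;;) {
            BspCommPoll();              /* service the comm stack */
            OSTimeDly(1);               /* yield for one tick */
        }
    }

    int main(void)
    {
        OSInit();                       /* initialise the kernel */
        OSTaskCreate(CommTask,
                     (void *)0,
                     &CommTaskStk[COMM_TASK_STK_SIZE - 1u], /* top of stack */
                     COMM_TASK_PRIO);
        OSStart();                      /* hand control to the scheduler */
        return 0;
    }

The rest of the framework (interpreter, mini-GUI) hangs off further tasks in
the same way, which is exactly why the old part plus a small RTOS kept paying
for itself.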

Newer isn't always better; in tough economic times, and even for 'green'
reasons, I would argue for more attention to optimizing for mature hardware.

Regards,

Michael
