On Fri, Jan 17, 2014 at 11:32:41PM +0000, Miod Vallat wrote:
> >                And it's not a full emulator if it doesn't emulate the
> > bugs.
> 
> It's almost bedtime in Europe. Do you mind if I tell you a bedtime
> story?
> 
> Years ago, a (back then) successful company selling high-end Unix-based
> workstations, having been designing its own systems and core components
> for years, started designing a new generation of workstations.
<snip>
> Assuming someone would write an emulator for that particular system:
> - if the ``unreliable read'' behaviour is not emulated, according to
>   your logic, it's a bug in the emulator, which has to be fixed.
> - if the behaviour is emulated, how can we know it is correctly
>   emulated, since even the designers of the chip did not spend enough
>   time tracking down the exact conditions leading to the misbehaviour
>   (and which bogus value would be put on the data bus).
> 
> You may argue that, since the kernel has a workaround for this issue,
> this is a moot point. But if some developer has a better idea for the
> kernel heuristic, how can the new code be tested, if not on the real
> hardware?
> 

The problem with this story is that the purported reason for supporting old
architectures is to shake out bugs. How do the bugs get shaken out? By
exercising shared, core functionality in distinctive ways.

Idiosyncrasies such as the one above are not the kind of thing that shakes
out core bugs.
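
To be concrete about what emulating such an idiosyncrasy even looks like,
here is a sketch, assuming a hypothetical emulator with a per-device read
hook. The register offset, the trigger odds, and the bogus value are all
invented here, since (per the story) nobody ever characterized the real
ones:

    /* Hypothetical read hook for the flaky chip.  Every constant in
     * the fault path is a guess, which is exactly the problem. */
    #include <stdint.h>
    #include <stdlib.h>

    #define FLAKY_REG 0x48          /* made-up register offset */

    uint32_t
    chip_read(const uint32_t *regs, uint32_t off)
    {
            if (off == FLAKY_REG && (rand() % 997) == 0)
                    return 0xdeadbeef;  /* guess at the bogus value */
            return regs[off / 4];
    }

Note that nothing in that fault path touches shared, core functionality; it
exercises exactly one driver's workaround and nothing else.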

So there are two ways to resolve this discrepancy. Either it simply makes
more sense to shift to emulated environments for older hardware, or one of
the primary reasons is actually running on creaky, old hardware: the
coolness factor.

I suspect the coolness factor looms large. And there's nothing wrong with
that. OTOH, there's a strong case to be made for simply inventing crazy
architectures out of whole cloth and writing an emulator for them.
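
To illustrate how cheap that is, here is a complete emulator for a made-up
four-instruction machine; everything in it, opcodes included, is invented
for the example:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_HALT, OP_LOADI, OP_ADD, OP_PRINT };

    int
    main(void)
    {
            /* program: r0 = 2; r1 = 3; r0 += r1; print r0; halt */
            uint8_t prog[] = {
                    OP_LOADI, 0, 2,
                    OP_LOADI, 1, 3,
                    OP_ADD,   0, 1,
                    OP_PRINT, 0,
                    OP_HALT,
            };
            uint32_t r[4] = { 0 };

            for (size_t pc = 0; ; ) {
                    switch (prog[pc]) {
                    case OP_HALT:
                            return 0;
                    case OP_LOADI:
                            r[prog[pc+1]] = prog[pc+2];
                            pc += 3;
                            break;
                    case OP_ADD:
                            r[prog[pc+1]] += r[prog[pc+2]];
                            pc += 3;
                            break;
                    case OP_PRINT:
                            printf("%" PRIu32 "\n", r[prog[pc+1]]);
                            pc += 2;
                            break;
                    }
            }
    }

And since such an architecture exists only on paper, the emulator is correct
by definition: there is no real chip whose undocumented misbehaviour it
could fail to reproduce.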
