I'm confused about what you're asking. If you apply an optimizer to an
algorithm, it absolutely shouldn't affect the output, and debugging or
error reporting should always be done in reference to the original source
code.

Or do you mean some other form of 'optimized'? If I rephrase your
question in terms of 'levels of service' and graceful degradation (e.g.
gracefully switching from video conferencing to audio-only
teleconferencing if the video turns out to use too much bandwidth), then
there's a lot of research there. One course I took - survivable networks
and systems - explored that subject heavily, along with resilience.
Resilience involves quickly recovering to a better level of service once
the cause of the fault is removed (e.g. restoring the video once the
bandwidth is available again).
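
To make the degradation idea concrete, here is a tiny Python sketch
(entirely illustrative; the threshold and the names are my own invention,
not taken from any real system):

    # Degrade from video+audio to audio-only when measured bandwidth drops,
    # and recover the richer level of service once it comes back.
    VIDEO_THRESHOLD_KBPS = 1500  # assumed cutoff, purely for illustration

    def choose_level(bandwidth_kbps):
        if bandwidth_kbps >= VIDEO_THRESHOLD_KBPS:
            return "video+audio"   # full level of service
        return "audio-only"        # degraded, but still useful

    current = "video+audio"
    for sample in [2000, 800, 600, 1800]:  # simulated bandwidth measurements
        desired = choose_level(sample)
        if desired != current:
            print("switching from %s to %s" % (current, desired))
            current = desired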

Achieving the ability to "fall back" gracefully can be a challenge.
Things can go wrong in many more ways than they can go right; things can
break in many more ways than they can be whole. A major issue is 'partial
failure', because partial failure also means partial success: often some
state has already been changed by the time the failure occurs, and it can
be difficult to undo those changes.
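
Here is a rough Python sketch of the fallback idea (fast_sort and
simple_sort are stand-in names I made up; the hard part in a real system
is the rollback step):

    import copy

    def simple_sort(xs):
        return sorted(xs)             # the conceptually simple, known-good path

    def fast_sort(xs):
        raise RuntimeError("crash")   # stand-in for the optimized, buggier path

    def robust_sort(xs):
        snapshot = copy.deepcopy(xs)  # capture state so partial changes can be undone
        try:
            return fast_sort(xs)
        except Exception:
            xs[:] = snapshot          # undo whatever the fast path half-did
            return simple_sort(xs)    # fall back to the simpler implementation

    print(robust_sort([3, 1, 2]))     # -> [1, 2, 3], via the fallback path

In practice the snapshot/rollback step is rarely that cheap or that
complete, which is exactly where partial failure bites.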

On Tue, Jul 30, 2013 at 1:22 PM, Casey Ransberger
<casey.obrie...@gmail.com> wrote:

> Thought I had: when a program hits an unhandled exception, we crash, often
> there's a hook to log the crash somewhere.
>
> I was thinking: if a system happens to be running an optimized version of
> some algorithm, and hits a crash bug, what if it could fall back to the
> suboptimal but conceptually simpler "Occam's explanation?"
>
> All other things being equal, the simple implementation is usually more
> stable than the faster/less-RAM solution.
>
> Is anyone aware of research in this direction?
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
