On 10/31/2013 2:24 PM, eles wrote:
On Thursday, 31 October 2013 at 18:46:07 UTC, Walter Bright wrote:
On 10/31/2013 9:00 AM, eles wrote:
What if the hardware fails? Such as a bad memory bit that flips a bit in the
perfect software, and now it decides to launch nuclear missiles?

If that happens, any software verification could become useless. On the latest
project I'm working on, we simply went with two identical (not
independently developed, just identical) hardware boards, with the embedded software on both.

A comparator compares the two outputs. Any difference triggers an emergency
procedure: either a hardware reboot through a watchdog or a controlled
shutdown (the latter to avoid an infinite reboot loop).
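In rough C, the comparator logic looks something like the sketch below. This is only an illustration of the scheme, not our actual code; every name in it (read_channel_a, read_channel_b, hardware_reboot, controlled_shutdown, MAX_REBOOTS) is an invented placeholder.

#include <stdint.h>

#define MAX_REBOOTS 3  /* assumed cap so a persistent fault cannot reboot forever */

extern uint32_t read_channel_a(void);       /* output of the first board  */
extern uint32_t read_channel_b(void);       /* output of the second board */
extern void     hardware_reboot(void);      /* reset via the watchdog     */
extern void     controlled_shutdown(void);  /* drive outputs to a safe state */

/* In a real system this counter would have to survive the reboot,
   e.g. be held by the comparator/watchdog hardware itself. */
static unsigned reboot_count;

void compare_and_act(void)
{
    if (read_channel_a() == read_channel_b())
        return;                          /* outputs agree, nothing to do */

    if (reboot_count < MAX_REBOOTS) {
        reboot_count++;
        hardware_reboot();               /* first try a watchdog reboot */
    } else {
        controlled_shutdown();           /* persistent disagreement: shut down safely */
    }
}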

What I posted on HN:

------------------

All I know in detail is the 757 system, which uses triply-redundant hydraulic systems. Any computer control of the flight control systems (such as the autopilot) can be quickly locked out by the pilot who then reverts to manual control. The computer control systems were dual, meaning two independent computer boards. The boards were designed independently, had different CPU architectures on board, were programmed in different languages, were developed by different teams, the algorithms used were different, and a third group would check that there was no inadvertent similarity.

An electronic comparator compared the results of the boards, and if they differed, automatically locked out both and alerted the pilot. And oh yeah, there were dual comparators, and either one could lock them out.

This was pretty much standard practice at the time.

Note the complete lack of "we can write software that won't fail!" nonsense. This attitude permeates everything in airframe design, which is why air travel is so incredibly safe despite its inherent danger.

https://news.ycombinator.com/item?id=6639097
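To make the lockout idea concrete, here is a purely illustrative C sketch of one comparator; it is not the actual 757 implementation, and the names (board_result, lock_out_computers, alert_pilot) are invented for the example:

#include <stdint.h>

extern uint32_t board_result(int board);    /* result computed by board 0 or 1 */
extern void     lock_out_computers(void);   /* disengage both boards, revert to manual control */
extern void     alert_pilot(void);

/* One comparator instance. Two of these run independently,
   and either one is allowed to lock out both computer boards. */
void comparator_step(void)
{
    if (board_result(0) != board_result(1)) {
        lock_out_computers();
        alert_pilot();
    }
}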
