On Wed, Jan 9, 2019 at 10:07 AM Eric S. Raymond <e...@thyrsus.com> wrote:

> > Reasonable people can disagree, but I favor rewriting over
> > transpilation, for draining that swamp.
>
> The problem is that in general nobody *does* rewrite old
> infrastructure code.  It tends to work just well enough to fester in
> place for a long time, and then you get shocks like the Heartbleed
> bug.  It is my aim to head off that sort of thing in the future.
>

I'm curious why transpilation would have significantly mitigated the
Heartbleed bug.

It sounds as though the idea is a mass transpilation of old libraries,
but that's going to take more than transpiling the library itself.
Suppose OpenSSL had been transpiled into Go well in advance of the
attack. That isn't going to stop old C code from linking against the
old library, all of which will stay right where it is. So it seems to
me you'd need to transpile all the programs which use the library too,
release them into all the distros, and get them all to agree. That's a
rather high hurdle. You could, of course, just release a transpiled
version of a bunch of libraries, but my guess is there are now two
things to maintain instead of one, and nothing ever makes the old one
go away.

But suppose that hurdle is cleared, and think about the specific
Heartbleed bug. The bug was a mistake in unmarshalling a particular
type of SSL packet: the code trusted a length field in the inbound
packet. The fix was to check that length against the packet actually
received and discard the packet if the claimed length was bogus.

The specific problem was that if the length field was bogusly big, the
code would build the echo reply out of the next N bytes of memory
following the incoming packet - that is, out of whatever happened to
be sitting there.
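
To make that concrete, here is a simplified sketch of the shape of
that code, transliterated into Go by way of the unsafe package so that
the bug survives intact. The record layout (one type byte, a two-byte
big-endian length, then the payload) is the real heartbeat layout, but
the function and the surrounding details are my invention, not
OpenSSL's actual code:

    package main

    import (
        "encoding/binary"
        "fmt"
        "unsafe"
    )

    // handleHeartbeat walks a raw pointer the way the C code walks p,
    // with no idea how big the incoming packet actually was.
    func handleHeartbeat(p *byte) []byte {
        q := unsafe.Pointer(p)
        hbtype := *(*byte)(q) // the Go spelling of hbtype = *p++
        q = unsafe.Add(q, 1)
        length := binary.BigEndian.Uint16(unsafe.Slice((*byte)(q), 2))
        q = unsafe.Add(q, 2)
        _ = hbtype
        // The bug, faithfully preserved: length is trusted, so a
        // bogus value copies whatever lies after the packet in
        // memory into the echo reply.
        reply := make([]byte, length)
        copy(reply, unsafe.Slice((*byte)(q), int(length)))
        return reply
    }

    func main() {
        // A five-byte packet whose length field claims 64 bytes of
        // payload. The reply comes back padded with adjacent heap
        // memory: the leak in miniature.
        pkt := []byte{1, 0, 64, 'h', 'i'}
        fmt.Printf("%q\n", handleHeartbeat(&pkt[0]))
    }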

The transpiler is going to have to be amazingly clever even to get
this code right. It's code which bounces char * pointers around,
manually unmarshalling a piece of structured data. Consider this:
hbtype = *p++.

What's that going to become? Is it going to be turned into pointer
arithmetic using the unsafe package, as in the sketch above? Then you
just get the same bug again. Obviously, hand-written Go code would
stick the packet in a []byte and use the facilities of the
encoding/binary package to read the fields, and any attempt to read
past the end of the slice would fail. But how is a transpiler going to
take the C code of OpenSSL and do that automatically?
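
For comparison, here is a sketch of what I mean by the idiomatic
version; the names and error messages are mine, not from any real
library:

    package main

    import (
        "encoding/binary"
        "errors"
        "fmt"
    )

    // parseHeartbeat reads the record out of a []byte. The claimed
    // length gets checked against the bytes that actually arrived -
    // essentially the post-Heartbleed fix, falling out of the
    // natural way to write the code.
    func parseHeartbeat(pkt []byte) ([]byte, error) {
        if len(pkt) < 3 {
            return nil, errors.New("short heartbeat record")
        }
        length := int(binary.BigEndian.Uint16(pkt[1:3]))
        if length > len(pkt)-3 {
            return nil, errors.New("bogus heartbeat length")
        }
        return pkt[3 : 3+length], nil
    }

    func main() {
        // The same malicious packet as before: five bytes claiming 64.
        _, err := parseHeartbeat([]byte{1, 0, 64, 'h', 'i'})
        fmt.Println(err) // bogus heartbeat length
    }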

Suppose it has a way, however. Now you have Go code which produces a
bounds fault instead of a data leak. That's better, I suppose - the
resulting bug is now "the server crashes" instead of "the server maybe
leaks a key". It's an improvement, but a packet of death in a widely
used library still puts the world in a not-dissimilar position in
terms of the level of panic and rapid response everybody needs.
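
Concretely - and this is a toy, not real server code - drop the
explicit length check, as a naive transpile would, and the slice
expression itself becomes the packet of death: the out-of-range
access panics, and an unrecovered panic takes the whole process down.

    package main

    import "encoding/binary"

    func main() {
        // Ten real bytes; a length field claiming 16384.
        pkt := make([]byte, 10)
        pkt[0] = 1
        binary.BigEndian.PutUint16(pkt[1:3], 16384)
        length := int(binary.BigEndian.Uint16(pkt[1:3]))
        _ = pkt[3 : 3+length] // panic: slice bounds out of range
    }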

So I'm not quite seeing it. It seems like a great idea from the
outside ("hey, we could turn all these programs into memory-safe Go
programs, automatically, what a win!"), but in practice I'm not sure
such a transpiler would actually work in a way that achieves the
result - and even where it does, it preserves a profound
denial-of-service attack.

Thomas

-- 

memegen delenda est
