On 11-08-09 07:28 PM, David Herman wrote:
> I'll try to stick up for tail calls, at least down the road (I'd say having them in the first release doesn't really matter).
I don't think they'll be very easy to retrofit if we don't keep 'em alive. Though they're not really alive now. And for various reasons this is seeming more and more dubious to me.
> - As you guys have said, this issue seems pretty tied in with the memory management semantics, particularly with refcounting. But I'm still not convinced that refcounting has proved itself. We'll need to have a GC before long, and once we do, I think it would still be worthwhile to consider the possibility of decreasing or eliminating our use of refcounting. If we were to move away from refcounting, the "callee-drops" semantics goes away.
This may well be -- we'll probably do a branch that does gc-on-all-shared-nodes just to see how it performs -- but I don't want to get into imaginary accounting on the refcounting costs. The issues are more the practical ones that follow.
> - Separate from the take/drop issue, I'm probably too far away from the compiler to know what the stack movement is that's hurting the non-tail-call cases. What's the issue there?
The caller of a non-tail-call has to adjust the stack back to its "proper" position when a callee returns, because the ABI that supports tail calls in the callee (at the caller's choice) means that arg-passing is ownership-passing: the caller effectively gave up control of the portion of the stack it put the outgoing args into when it passed them to the callee. This convention is called "callee-pop" arguments, and it costs callers an extra sp adjustment (sp -= args) after each call.
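To make that cost concrete, here's a toy model of callee-pop in present-day Rust syntax: a Vec stands in for the stack, "sp" is its length, and none of the names reflect the real ABI. The point is just the extra per-call restore in the caller.

```rust
// Callee-pop, toy version: the callee takes ownership of its argument
// slots and frees them before returning (which is what would let it
// reuse them for a tail call).
fn callee_pop_call(stack: &mut Vec<i64>, nargs: usize) -> i64 {
    let len = stack.len();
    let args = stack.split_off(len - nargs); // callee frees the arg slots
    args.iter().sum()
}

fn main() {
    let mut stack: Vec<i64> = Vec::new();
    let mut sp_adjustments = 0;

    // Prologue: the caller reserves a 2-slot outgoing-args area once,
    // intending to reuse it across calls.
    stack.resize(2, 0);
    let frame_top = stack.len();

    for i in 0..3i64 {
        // Write args into the (supposedly) reserved area.
        let base = stack.len() - 2;
        stack[base] = i;
        stack[base + 1] = i + 1;

        let _sum = callee_pop_call(&mut stack, 2);

        // The callee destroyed the arg area, so the caller must restore
        // its stack pointer after *every* call -- the extra adjustment.
        stack.resize(frame_top, 0);
        sp_adjustments += 1;
    }

    assert_eq!(sp_adjustments, 3);
    assert_eq!(stack.len(), frame_top);
}
```

If the caller kept ownership of the arg slots (caller-pop), that per-call restore would be unnecessary; the area would just be reused.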
Beyond that, though, there's the basic fact that in a few cases they won't work at all. Cross-crate calls probably won't; move-in arguments on polymorphic values won't; possibly/probably self calls in wrapped objects; some of the things marijn and patrick have been discussing for guaranteeing alias lifetimes; possibly a bunch of others. We keep running into cases where the constraint is "that's nice, but it breaks tail calls". So it's more a question of figuring out whether it's worth bending all these other things into contortions to preserve them. We're hardly using them now.
> - I find the state machine use case pretty compelling. It's a pretty natural style, which is easy to code up; it might not be as efficient as loops, depending on how we implemented it, but if it's more understandable/maintainable it can be worth the trade-off.
It can definitely read quite naturally, yes. Not debating that, just weighing costs vs. benefits.
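To make the trade-off concrete, here's the same tiny state machine both ways, sketched in present-day Rust syntax. The names and the trivial countdown protocol are made up for illustration; without tail-call elimination, the function version grows the stack by one frame per transition.

```rust
// Tail-call style: each state is a function whose last act is to call
// the next state. Reads naturally, but each transition is a real call.
fn state_a(n: u32) -> &'static str {
    if n == 0 { "done in A" } else { state_b(n - 1) }
}
fn state_b(n: u32) -> &'static str {
    if n == 0 { "done in B" } else { state_a(n - 1) }
}

// Loop style: encode the state in an enum and transition in place.
// Constant stack space, at the cost of some indirection.
enum State { A(u32), B(u32) }

fn run(mut s: State) -> &'static str {
    loop {
        s = match s {
            State::A(0) => return "done in A",
            State::A(n) => State::B(n - 1),
            State::B(0) => return "done in B",
            State::B(n) => State::A(n - 1),
        };
    }
}

fn main() {
    assert_eq!(state_a(4), "done in A");
    assert_eq!(state_a(5), "done in B");
    assert_eq!(run(State::A(4)), "done in A");
}
```

The loop form is the mechanical workaround when the language doesn't guarantee tail calls; the cost is exactly the readability hit being weighed here.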
> - A variation of state machines is jump tables, e.g. in a bytecode interpreter.
I think the optimization case there is actually computed gotos, not tail calls. You don't want to throw out the interpreter frame; you want to return to it for the next opcode. We don't support computed gotos (though I guess we could; it'd probably be 'unsafe' :)
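For the interpreter case, the shape is a dispatch loop: control returns to the same frame after every opcode, and the thing worth optimizing is the dispatch itself (ideally a jump table / computed goto), not a tail call that would discard that frame. A minimal sketch, with a made-up three-opcode set:

```rust
// Toy stack-machine bytecode; the opcode set is illustrative only.
enum Op { Push(i64), Add, Halt }

fn run(code: &[Op]) -> i64 {
    let mut stack: Vec<i64> = Vec::new();
    let mut pc = 0;
    loop {
        match &code[pc] {            // the dispatch point
            Op::Push(v) => stack.push(*v),
            Op::Add => {
                let b = stack.pop().unwrap();
                let a = stack.pop().unwrap();
                stack.push(a + b);
            }
            Op::Halt => return stack.pop().unwrap(),
        }
        pc += 1;                     // back to the top: same frame
    }
}

fn main() {
    let prog = [Op::Push(2), Op::Push(3), Op::Add, Op::Halt];
    assert_eq!(run(&prog), 5);
}
```

A computed goto would replace the match's branch with an indirect jump per opcode; the interpreter frame (stack, pc) stays put either way.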
> - Full disclosure: I agree that tail calls are less important for Rust than for, say, Erlang, because of loops and because of iteration abstractions (either iterators or blocks, depending on which way we go). But I still think they make for a nice tool that's hard to replicate when it's not provided by the language.
Agreed it's hard to replicate. What I'm getting at is that at present, it seems they're rather hard to *support* everywhere, period. How useful are tail calls if they only work in "very restricted" circumstances? It's a bit of an offense against language orthogonality and compositionality, y'know?
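On loops covering the common cases: a self tail call rewrites mechanically into a loop. A stock example (gcd, chosen purely for illustration), in present-day Rust syntax:

```rust
// Tail-recursive form: the recursive call is in tail position.
fn gcd_rec(a: u64, b: u64) -> u64 {
    if b == 0 { a } else { gcd_rec(b, a % b) }
}

// Loop form: same computation, constant stack space. This rewrite is
// what a compiler with tail-call elimination would do for you.
fn gcd_loop(mut a: u64, mut b: u64) -> u64 {
    while b != 0 {
        let t = a % b;
        a = b;
        b = t;
    }
    a
}

fn main() {
    assert_eq!(gcd_rec(48, 18), 6);
    assert_eq!(gcd_loop(48, 18), 6);
}
```

What the loop rewrite can't easily express is the *mutual* tail call across several functions (the state-machine case above), which is where the "hard to replicate" point bites.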
-Graydon

_______________________________________________
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev