On 12-06-14 10:57 AM, Niko Matsakis wrote:
> One thing: right now, x += [5] is extra optimized in that it does not create the RHS vector.
I know, I mentioned that in the last email. It's gruesome and has to go.
> What are the shortcomings of overloaded operators? Here are some things I can think of:
>
> - No way to do x += 5
> - x[5] = 3 only works for built-in types
> - x[5] += 3 only works for built-in types
> - &x[5] only works for built-in types
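(For what it's worth, the trait-based design Rust eventually shipped covers all four of these patterns; a minimal sketch in modern Rust, where the `Buf` wrapper type is invented for illustration:)

```rust
use std::ops::{AddAssign, Index, IndexMut};

// Toy wrapper type, invented here to show user-defined operator impls.
struct Buf(Vec<i32>);

impl AddAssign<i32> for Buf {
    // `x += 5` desugars to this call; no temporary RHS vector is built.
    fn add_assign(&mut self, rhs: i32) {
        self.0.push(rhs);
    }
}

impl Index<usize> for Buf {
    type Output = i32;
    fn index(&self, i: usize) -> &i32 {
        &self.0[i]
    }
}

impl IndexMut<usize> for Buf {
    // Enables `x[5] = 3`, `x[5] += 3`, and `&mut x[5]` on user types.
    fn index_mut(&mut self, i: usize) -> &mut i32 {
        &mut self.0[i]
    }
}

fn main() {
    let mut x = Buf(vec![1, 2]);
    x += 5;        // AddAssign: x.0 is now [1, 2, 5]
    x[0] = 3;      // IndexMut
    x[1] += 3;     // Index + IndexMut
    let r = &x[2]; // Index
    println!("{} {} {}", x.0[0], x.0[1], *r); // prints "3 5 5"
}
```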
Yeah. This gets gnarly fast. I would prefer our solution not wind up looking like Blitz++, though. Maybe I was wrong to dismiss the decomposed-into-calls version, but doesn't it require that we support both a move-based version and a copy-based version? Or reify the region used for expression temporaries, or something?
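(A sketch of the two desugarings in question, in modern Rust; the free functions `add` and `add_assign` are illustrative names, not anything the compiler defines. The copy-based form builds a fresh result vector, while the move/in-place form mutates the LHS directly:)

```rust
// Copy-based desugaring of `x += v`: x = add(x, v).
// Builds a brand-new result vector every time.
fn add(lhs: &[i32], rhs: &[i32]) -> Vec<i32> {
    let mut out = lhs.to_vec(); // copies the LHS
    out.extend_from_slice(rhs);
    out
}

// Move/in-place desugaring of `x += v`: add_assign(&mut x, v).
// Appends in place; no temporary and no new result allocation.
fn add_assign(lhs: &mut Vec<i32>, rhs: &[i32]) {
    lhs.extend_from_slice(rhs);
}

fn main() {
    let mut x = vec![1, 2, 3];
    x = add(&x, &[4]);        // copy-based form
    add_assign(&mut x, &[5]); // in-place form
    println!("{:?}", x);      // prints "[1, 2, 3, 4, 5]"
}
```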
> The questions, then, are:
>
> - Which of these patterns should we support at all?
> - Which can be implemented directly, and which are desugared?
> - For desugaring, what fallbacks are in place?
> - Finally, how can we possibly implement this in our compiler in a nice way?
Fair.
> I think the current implementation of overloading is fragile, in that every downstream client must consider "two ways" of interpreting a random set of expressions (expr_unary, expr_binary, expr_index, maybe others). This accounts for several bugs (eholk just found one the other day). I was thinking of refactoring things so that all operators ALWAYS have an entry in the "overload" table, with one of those entries being the "intrinsic" definition.
Maybe. Not sure how this will play out.
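(A hypothetical sketch of the proposed table, where built-in operators get an explicit intrinsic entry rather than being a special case in every downstream pass; all type and method names here are invented for illustration:)

```rust
use std::collections::HashMap;

// Every (operator, operand type) pair gets exactly one entry;
// built-ins are marked Intrinsic instead of being absent.
#[derive(Debug, Clone, PartialEq)]
enum OpImpl {
    Intrinsic,      // compiler-known definition, e.g. int + int
    Method(String), // user-defined overload, resolved to a method
}

struct OverloadTable {
    entries: HashMap<(String, String), OpImpl>, // (op, type) -> impl
}

impl OverloadTable {
    // Downstream passes ask one question with one answer shape,
    // instead of branching on "builtin vs. overloaded".
    fn lookup(&self, op: &str, ty: &str) -> Option<&OpImpl> {
        self.entries.get(&(op.to_string(), ty.to_string()))
    }
}

fn main() {
    let mut t = OverloadTable { entries: HashMap::new() };
    t.entries.insert(("+".into(), "int".into()), OpImpl::Intrinsic);
    t.entries.insert(
        ("+".into(), "BigInt".into()),
        OpImpl::Method("BigInt::add".into()),
    );
    println!("{:?}", t.lookup("+", "int"));    // Some(Intrinsic)
    println!("{:?}", t.lookup("+", "BigInt")); // Some(Method("BigInt::add"))
}
```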
> [That way, operators] are overloaded operators all the time in the later analysis passes, and only trans has to special-case things (and maybe not even that, if we use actual rusti intrinsics?)
No, we need to be able to constant-fold manually in the front- or middle- passes, not just punt the whole task to LLVM. Sad but true.
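(A toy illustration of why the front end needs this: a folder can evaluate an intrinsic operator applied to constants, but an overloaded operator is an opaque call it must leave alone. The `Expr` type here is invented for the sketch, not the compiler's actual AST:)

```rust
// Tiny expression language: one intrinsic operator, one overloaded one.
enum Expr {
    Const(i64),
    Add(Box<Expr>, Box<Expr>),           // intrinsic integer +
    OverloadedAdd(Box<Expr>, Box<Expr>), // user-defined +, opaque call
}

// Front-end constant folding: only intrinsic ops can be evaluated here.
fn fold(e: Expr) -> Expr {
    match e {
        Expr::Add(a, b) => match (fold(*a), fold(*b)) {
            (Expr::Const(x), Expr::Const(y)) => Expr::Const(x + y),
            (a, b) => Expr::Add(Box::new(a), Box::new(b)),
        },
        // An overloaded + might do anything, so fold only its operands.
        Expr::OverloadedAdd(a, b) => {
            Expr::OverloadedAdd(Box::new(fold(*a)), Box::new(fold(*b)))
        }
        c => c,
    }
}

fn main() {
    let intrinsic = Expr::Add(Box::new(Expr::Const(2)), Box::new(Expr::Const(3)));
    match fold(intrinsic) {
        Expr::Const(n) => println!("folded to {}", n), // prints "folded to 5"
        _ => println!("not folded"),
    }

    let overloaded =
        Expr::OverloadedAdd(Box::new(Expr::Const(2)), Box::new(Expr::Const(3)));
    match fold(overloaded) {
        Expr::Const(n) => println!("folded to {}", n),
        _ => println!("left as a call"), // prints "left as a call"
    }
}
```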
-Graydon

_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
