I'm not sure I follow.
Assuming that the trait `T` has no method that uses `Self`, then any `impl`
requiring `T` should happily accept a `&T` / `@T`. Why penalize
non-self-referential traits (like `Writer`), just because some traits (like
`AdderIncr`) are self-referential?
On Sun, Oct 20, 2013
I run into the following problem (the code below is a toy example).
```
use std::io::Writer; // Makes no difference if added/removed.

trait PrintWithSpice {
    fn print(&self, writer: &Writer, spice: bool);
}

struct Bar {
    bar: ~PrintWithSpice,
}

impl Bar {
    pub fn print(&self, writer:
```
Ugh, I was too optimistic. Yes, I can write my code using `MyWriter`, but I
can't cast any `@Writer` (such as `io::stdout()`) to it. I guess I should
just use `@Writer` everywhere for now :-(
This raises the question of how come the compiler is smart enough to figure
out a `@Writer` has the trait
If `T` is a trait, its trait objects `~T`, `@T` and `&T` do not implement `T`.
There is an implementation of `Writer` for `@Writer`, but not for `~Writer`
or `&Writer`, which is why you're seeing that error.
Steven Fackler
On Fri, Oct 18, 2013 at 11:27 PM, Oren Ben-Kiki o...@ben-kiki.org wrote:
Hmmm. That sounds strange. Shouldn't `obj: &T` allow me to invoke
`obj.method_of_T()`?
For example, how did I manage to invoke the `data.print(...)` method via
the borrowed `data: &PrintWithSpice` pointer? Automatic dereference? And if
so, why didn't it work for `Writer` as well?
On Sat, Oct
Consider this program:
```
trait AdderIncr {
    fn add(&self, x: Self) -> Self;
    fn incr(&mut self);
}

impl AdderIncr for int {
    fn add(&self, x: int) -> int { *self + x }
    fn incr(&mut self) { *self += 1; }
}

fn incrAdd(x: &mut AdderIncr, y: &mut AdderIncr) {
    x.incr();
    x.add(y);
}

fn main() {}
```
It fails to compile: since `add` uses the `Self` type, `AdderIncr` cannot
be used as a trait object.