Steven Schveighoffer:

> That's not what your example showed. It showed comparing two allocated
> pointers *outside* a pure function, and expected them to be equal. I see
> that as disallowing all pointer comparisons.
My first examples were not so good, I am sorry :-) Thank you for not ignoring my posts despite that. The idea is that if you allocate memory inside a pure function, then the result of this memory allocation is a @transparent pointer/reference, and a @transparent pointer/reference can't be read as a value (but you are allowed to dereference it, overwrite it, etc.). So this is OK, because the transparency of the pointer is respected:

pure @transparent(int*) foo() {
    return new int; // allocates just one int on the heap
}
void main() {
    @transparent int* i = foo(); // OK
    *i = 10;                     // OK
}

> I just wonder if it would be worth it.

Probably not, but we need to understand what the holes are before accepting to keep them in the language :-)

> It also has some unpleasant effects. For example, the object equality
> operator does this:
>
> bool opEquals(Object o1, Object o2)
> {
>     if(o1 is o2)
>         return true;
>     ...
> }
>
> So this optimization would be unavailable inside pure functions, no? Or
> require a dangerous cast? Ick.

You can't give a @transparent Object to a function that expects a plain Object, because transparent references disallow something that you are allowed to do on a non-transparent reference/pointer:

pure @transparent Object foo() {
    return new Object();
}
void opEquals(Object o1, Object o2) {}
void main() {
    @transparent Object o = foo();
    opEquals(o, o); // not allowed
}

> Would it be enough to just require this type of restriction in pure @safe
> functions?

I don't know. Currently @safe is a wrong name; it really means @memorySafe. So I think that currently "@memorySafe" and "pure" are almost orthogonal concepts.

Bye,
bearophile