On Tuesday, February 27, 2018 11:33:04 Simen Kjærås via Digitalmars-d wrote:
> And trust me, the compiler complains about both of these.
> Possibly rightfully in the first example, but the latter never
> does anything scary with the given pointers.
As I understand it, the way that @safety checks generally work is that they check whether a particular operation is @safe or not. They don't usually care about what is then done with the result. So, if you do something like take the address of something, that's immediately @system regardless of what you do with the result. That changes on some level with DIP 1000 and scope, because then the compiler uses scope to ensure that the lifetime of stuff like pointers doesn't exceed the lifetime of what they point to, so that it can then know that taking the address is @safe. But without DIP 1000, it takes very little for something to become @system. e.g. this compiles with -dip1000 but otherwise doesn't:

void main() @safe
{
    int i;
    assert(&i !is null);
}

Now, the compiler does seem to be a bit smarter with dynamic arrays and ptr, given that this compiles without -dip1000:

void main() @safe
{
    int[] i;
    assert(i.ptr !is null);
}

However, this doesn't compile with -dip1000:

void main() @safe
{
    int[] i;
    auto j = i.ptr;
    assert(j !is null);
}

and not even this compiles with -dip1000:

void main() @safe
{
    int[] i;
    scope j = i.ptr;
    assert(j !is null);
}

though I'm inclined to think that that's a bug, from what I understand of -dip1000.

In any case, @safety checks tend to be fairly primitive, so once you start mucking around with pointers, it's not hard to write code that gets treated as @system because of a single expression that is clearly @safe within the context of the function, but that the compiler can't see is @safe. And for better or worse, accessing a dynamic array's ptr member is now @system, because it's not @safe in all circumstances. If the compiler were smarter, then a number of uses of ptr would probably be @safe, but its analysis for stuff like that is usually pretty primitive, in part because making it sophisticated requires stuff like code flow analysis, which the compiler doesn't do a lot of, precisely because it is complicated and easy to get wrong.
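To spell out why ptr is unsafe in the general case (my own sketch, not from the original post): the pointer you get back carries no length, so nothing stops an out-of-bounds access once the compiler lets it through.

```d
void main() @safe
{
    int[] a = [1, 2, 3];

    // Accessing a.ptr is @system: the raw pointer has no length
    // attached, so the compiler can't bounds-check uses of it.
    // auto p = a.ptr;
    // int x = p[10];  // out-of-bounds read, silently accepted

    // The @safe alternative: indexing the slice is bounds-checked.
    int x = a[1];
    assert(x == 2);
}
```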
Walter is particularly leery about making it so that stuff is an error or not based on code flow analysis, and @safe falls into that camp. Clearly, some of that is going on with DIP 1000, but that seems to be largely by using the type system to solve the problem rather than doing much in the way of code flow analysis.

- Jonathan M Davis
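To give a flavor of what "using the type system" means here (again, my own sketch, not from the post): with -dip1000, a scope parameter is simply not allowed to escape, and the return scope annotation is how you tell the type system that the result's lifetime is tied to the argument's.

```d
// Compile with -dip1000 (in later compilers, -preview=dip1000).

int* leak(scope int* p) @safe
{
    return p; // Error: scope variable p may not be returned
}

int* identity(return scope int* p) @safe
{
    return p; // OK: the result's lifetime is tied to p's by the type
}
```

No flow analysis is needed for this: the error falls out of the storage classes on the parameters alone.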