On Saturday, 1 June 2013 at 21:02:44 UTC, Jonathan M Davis wrote:
On Saturday, June 01, 2013 21:59:18 monarch_dodra wrote:
The way I understood it, @safe defines a list of things that are or aren't legal inside the implementation of a function. It also changes the scheme of bounds checking in release code.
What bothers me, though, is that from an interface point of view, it doesn't really mean anything (or at least, I haven't really understood anything). AFAIK, if I call something "@safe", the chances of a core dump are relatively "lower", but it can still happen:
* A function that accepts a pointer as an argument can be marked safe, so all bets are off there, no, since the pointer can be dereferenced?
* Member functions for structs that have pointers, too, can be marked safe...
Or does it only mean "if you give me valid pointers, I can't core dump*"?
(*ignoring current flaws, such as escaping slices from static arrays)
The main reason for this question is that now I'm confused about @trusted: what conditions does a developer need to take into account before marking a function "@trusted"?
Ditto for member functions, when they operate on pointer members. Can those be @safe?
Yeah, overall, I'm confused as to what "@safe" means from an interface point of view :(
@safe is for memory safety, meaning that @safe code cannot corrupt memory. You can get segfaults due to null pointers and the like, but you can't have code which writes past the end of a buffer, or which uses freed memory, or does anything else which involves writing to or reading from memory which variables aren't supposed to have access to.
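To make that concrete, here is a minimal, hypothetical sketch of the kind of operations the compiler accepts and rejects inside an @safe function (the commented-out lines would be compile errors):

```d
@safe void example(int* p, int[] arr)
{
    int x = *p;        // OK: merely dereferencing a pointer is allowed in @safe
    int y = arr[0];    // OK: array indexing is bounds-checked

    // int* q = p + 1;                 // Error: pointer arithmetic is not @safe
    // int* r = cast(int*) 0xDEADBEEF; // Error: casting an integer to a pointer is not @safe
}
```

So a null or dangling pointer passed in can still segfault you, but @safe code itself cannot manufacture an out-of-bounds access.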
Assuming that there are no bugs in @safe, the one thing that can invalidate it is @trusted. With @trusted code, it is the _programmer_ who is then guaranteeing that the code is actually @safe. The code is doing something which is potentially not safe (and therefore is considered @system by the compiler) but which _could_ be safe if the code is correct, and by marking the code as @trusted, the programmer is telling the compiler that they've verified that the code isn't doing anything which could corrupt memory. As long as the programmer doesn't screw that up, then any @safe code calling that @trusted function is indeed @safe, but if the programmer screwed it up, then you could still get memory corruption. However, there's really no way to get around that problem in a systems language, since most code eventually needs to call something that's @system (e.g. all I/O needs @system stuff internally). But by limiting how much code is @system or @trusted, most code is @safe, with only a minimal amount of code having to be verified as @trusted by an appropriately competent programmer.
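A minimal sketch of that pattern (the function name and logic here are hypothetical, not from the thread): the body does @system things, but the author has convinced themselves it is memory-safe, so @safe code may call it.

```d
import core.stdc.stdlib : malloc;

// @trusted: the *programmer* guarantees this body is memory-safe,
// even though its contents (raw allocation, pointer slicing) are @system.
@trusted int[] makeBuffer(size_t n)
{
    auto p = cast(int*) malloc(n * int.sizeof); // @system operation
    return p is null ? null : p[0 .. n];        // slicing a raw pointer is @system
}

@safe void caller()
{
    auto buf = makeBuffer(4); // legal: @safe code may call @trusted functions
    if (buf.length)
        buf[0] = 42;          // back in bounds-checked, @safe territory
}
```

If the slice bounds in makeBuffer were wrong, the compiler could not catch it; the @trusted annotation is precisely the programmer taking on that burden.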
- Jonathan M Davis
OK. But by that standard, can't (mostly) anything be trusted? What about something that writes garbage to a memory location it was *asked* to write to? Or if wrong usage of the function can lead to an inconsistent memory state, but without any out-of-bounds accesses?
For instance, "emplace!T(T* p)": this function takes the address of a T and writes T.init over it. It does a memcpy, so it can't be @safe, but I can 100% guarantee I'm not doing anything wrong, so I'm marking it as @trusted. This should be fine, right? Or is raw memory copying always unsafe?
Now, technically, emplace can't be called in @safe code, since it requires a pointer to begin with.
But still, if I were to give emplace an "already constructed object", it will happily clobber that object for me, skipping its destructor and possibly putting the program in an invalid memory state.
Now, it was *my* fault for calling emplace with an already built object, but it was the (@trusted) emplace that clobbered it.
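The misuse being described looks roughly like this (a minimal sketch; the struct and its resource are made up for illustration):

```d
import std.conv : emplace;

struct S
{
    int* owned;                 // imagine this points at a resource
    ~this() { /* would release `owned` here */ }
}

void demo()
{
    S s;              // a fully constructed, live object
    emplace(&s);      // blindly overwrites s with S.init:
                      // the destructor never runs, so whatever
                      // `owned` referred to is silently leaked
}
```

Every individual write emplace performs is in bounds, yet the program's memory state is no longer consistent, which is exactly the grey zone the question is about.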
--------
Long story short, I'm having trouble drawing the line between system and trusted functions, especially in regard to these low-level operations. The same question also applies to, say, "move" or "uninitializedArray": both 100% guarantee bounded memory access, but both can leave you with garbage in your memory...
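For uninitializedArray, the situation can be sketched like this (hypothetical usage, not from the thread):

```d
import std.array : uninitializedArray;

void demo()
{
    // The returned slice has correct length, so every access is
    // bounds-checked; only the *contents* are unspecified.
    auto a = uninitializedArray!(int[])(8);

    int x = a[3]; // in bounds, yet the value read is garbage
}
```

So "bounded access" and "meaningful contents" are separate guarantees, and @safe/@trusted only ever speak to the former.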