On 7/4/11 4:40 PM, bearophile wrote:
Andrei:

unsafe(overflows) { // code here }

This approach has a number of issues.

This approach is the one used by Delphi, Ada, and C# (C# even lets you
specify it within a single expression), so it is evidently doable.

I didn't say it wasn't doable. There, you're _quoting_ my answer: "This approach has a number of issues."
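
For concreteness, here is roughly what expression-level checking amounts
to when done in library code rather than syntax; a minimal sketch,
assuming a druntime that ships core.checkedint (checkedAdd and its
throwing policy are purely illustrative, not a concrete proposal):

import core.checkedint : adds;

int checkedAdd(int a, int b)
{
    bool overflow;
    immutable r = adds(a, b, overflow); // raises the flag instead of wrapping
    if (overflow)
        throw new Exception("integer overflow in checkedAdd");
    return r;
}

void main()
{
    int m = int.max;
    assert(m + 1 == int.min);        // built-in + wraps silently
    assert(checkedAdd(1, 2) == 3);   // checked form, same result here
    // checkedAdd(m, 1);             // would throw instead of wrapping
}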

Second, programmers are notoriously bad at choosing which code affects
bottom-line performance, yet this feature explicitly puts the burden on
the coder. So code will be littered with amendments, yet still be slower
overall. This feature has very poor scalability.<

You are looking at it from the wrong point of view.

There we go with "wrong" again.

The overflow-safety of the code is not about performance; it's about
safety, that is, about the level of safety you accept in a given part of
the code.

Isn't it about the way you define safety, too?

It's about trust. If you can't accept (trust) a piece of code
to be overflow-unsafe, then you can't accept it, regardless of the
amount of performance you desire.

This makes no sense. How about trusting or not a piece of code that has possible bugs in it? That's unverifiable and unfalsifiable. That's why the notion of "memory safety" carries weight: no matter what it does, a memory-safe module cannot compromise the integrity of the type system. That _is_ about trust, not integral overflow.

Of course they're not the same thing. Commonalities and
differences.

I meant that safety is not the same thing as shifting the definition
of a range. Overflow tests are not going to produce isomorphic code.

And I meant that either choice could go either way, that reasonable people may disagree about what the choice should be, and that D will not adopt either in the foreseeable future.

Well, they are also a solid way to slow down all code.

The slowdown doesn't touch floating-point numbers, some Phobos
libraries that contain trusted code, memory accesses, or disk and net
accesses; it doesn't influence GUI code much, and in practice it's
usually acceptable to me, especially while I am developing/debugging
code.

I think you are underestimating the impact of the change. Enumerating parts that won't be affected doesn't make the affected parts less frequent.
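
To make the cost concrete: in a hot loop every addition gains a flag
test and a branch, which also tends to block vectorization. A sketch,
again assuming druntime's core.checkedint; checkedSum is a hypothetical
helper:

import core.checkedint : adds;

int wrappingSum(const(int)[] a)
{
    int s = 0;
    foreach (x; a)
        s += x;                        // one add per element; easily vectorized
    return s;
}

int checkedSum(const(int)[] a)
{
    int s = 0;
    bool overflow;
    foreach (x; a)
    {
        s = adds(s, x, overflow);      // add plus a flag write
        if (overflow)
            assert(0, "sum overflowed"); // plus a branch per element
    }
    return s;
}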

You are using a different version of safety than D does. D defines
safety very precisely as memory safety. Your definition is larger,
less precise, and more ad hoc.<

D has @safe, which is about memory safety. But there is more to D than
just @safe; the D Zen is about the whole language, and D has many
features that help make it safer beyond memory safety.

I'm sure you know more about the D Zen than most people, but it's not impossible that automatic overflow checking is not a part of it.
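
For reference, here is exactly where @safe draws its line; a small
sketch (the function is illustrative):

// @safe rejects operations that can corrupt memory; integral overflow
// is outside its mandate.
@safe int twice(int x)
{
    // int* p = cast(int*) 0xDEAD_BEEF; // error: cast not allowed in @safe code
    return x + x;                       // accepted, though it may overflow
}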

Probably one good thing to get past is the attitude that in such a
discussion the other is "wrong".<

In a recent answer I tried to explain why you can't implement
compile-time overflow checks in library code:
http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=139950

Here I am either right or wrong; it's not a matter of POV.

I'm afraid it is, as your argument is rife with debatable statements, and is generally surprisingly weak compared with the zeal with which you're quoting it. Perhaps I fail to get the gist of your statements, so allow me:

"There are routines for run-time overflow tests in C and C++, but I am not seeing them used." -> subjective. It all says you're frequenting other circles than other people.

"While in Delphi I use overflow tests all the time and I see code written by other people that have runtime overflow tests switched on." -> subjective, relates what you do and see.

"I think that to catch integral overflow bugs in programs you can't just add a SafeInt struct, you need a compiler-wide switch. Otherwise most people will not use it." -> subjective and unsubstantiated. Also, if people don't use SafeInts that frequently may actually be because it's not a problem at the top of their list. I mean how can you draw that conclusion from that hypothesis?

"Array bound tests are able to catch bugs in normal D code written by everybody because you don't need to use a SafeArray instead of the built in arrays and because array bound tests are active on default, you need a switch to disable them. A bit more syntax is needed to disable tests locally, where needed." -> array bounds checking is a _vital_ feature for safe code, not an option.

I don't see how all of this makes you feel you're in possession of a bulletproof case, and the remaining issue is to make sure the others understand and internalize it.
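
For reference, the SafeInt under discussion is roughly the following
kind of wrapper; a minimal sketch, assuming core.checkedint, with the
name and the halt-on-overflow policy purely illustrative:

import core.checkedint : adds, muls, subs;

struct SafeInt
{
    int value;

    SafeInt opBinary(string op)(SafeInt rhs) const
        if (op == "+" || op == "-" || op == "*")
    {
        bool overflow;
        static if (op == "+")      immutable r = adds(value, rhs.value, overflow);
        else static if (op == "-") immutable r = subs(value, rhs.value, overflow);
        else                       immutable r = muls(value, rhs.value, overflow);
        if (overflow)
            assert(0, "integer overflow");
        return SafeInt(r);
    }
}

unittest
{
    auto a = SafeInt(2), b = SafeInt(3);
    assert((a + b).value == 5);
    // SafeInt(int.max) + SafeInt(1) would halt with "integer overflow".
}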

I have also explained that taking arbitrary D code written by you
and replacing all instances of int and uint with safeInt and safeUint
is not going to happen. It's like array bounds in D code: if you needed
to add a safeArray type to spot all array bound overflows, it would be
a lost cause; people are not going to do it (well, the case of a
safeArray is better, because I am using a safe array in C++).

I think there is a bit of confusion between "explaining" and "exposing one's viewpoint".
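
On the array-bounds comparison itself, the mechanics in D, for
reference; a sketch (the catch is purely illustrative):

// Bounds checks are on by default and need no special array type.
// (Globally, dmd's -boundscheck=off switch disables them.)
void main()
{
    import core.exception : RangeError;

    auto a = new int[4];
    bool caught;
    try
        a[10] = 1;               // the check fires at run time
    catch (RangeError e)         // catching the Error here only to demonstrate
        caught = true;
    assert(caught);

    a.ptr[2] = 7;                // local @system escape hatch: unchecked access
    assert(a[2] == 7);
}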

These are simple matters, the understanding of which does not require
any special talent or competence.<

Right, but several people don't have experience using languages
with integral overflow checks, so while their intelligence is plenty to
understand what those tests do, they don't have experience of the bugs
they avoid and of the slowdown they cause in practice across tens of
practical situations. Some practical experience with a feature is
quite important when you want to judge it. I have seen this many times,
with dynamic typing, good tuples, array bound tests, pure functions,
and so on, so I suspect the same happens with integral overflow checks.


So the first step is to understand that some may actually value a
different choice with different consequences than yours because
they find the costs unacceptable.<

I am here to explain the basis of my values :-) I have tried to use
rational arguments where possible.

I think there's a fair amount of contradiction in said values. People on this forum have noticed that you sometimes directly contradict your own posts. Which is fine as long as there's understanding that over-insistence in any particular direction may hurt good language design.

As someone who makes numerous posts and bug reports regarding speed
of D code, you should definitely have more appreciation for that
view.<

I'm not a one-trick pony, I am a bit more flexible than you think :-)
In this case I think that fast but wrong/buggy programs are useless.
Correctness comes first, speed second. I think D's design goals agree
with this view of mine :-)

Then it should come as a surprise to you that Walter disagrees. There's a bit of cognitive dissonance there, isn't there? How do you assuage it?


Andrei
