On Saturday, 14 April 2012 at 18:07:41 UTC, Jerome BENOIT wrote:
On 14/04/12 18:38, F i L wrote:
On Saturday, 14 April 2012 at 15:44:46 UTC, Jerome BENOIT wrote:
On 14/04/12 16:47, F i L wrote:
Jerome BENOIT wrote:
Why would a compiler set `real` to 0.0 rather than 1.0, Pi, ...?

Because 0.0 is the "lowest" (smallest, starting point, etc.)

quid -infinity ?

The concept of -infinity is less meaningful than zero. Zero is the logical starting place because zero represents nothing (mathematically)

Zero is not nothing in mathematics; on the contrary!

0 + x = x  // neutral element for addition
0 * x = 0  // absorbing element for multiplication
0 / x = 0  if (x <> 0)  // idem
| x / 0 | = infinity  if (x <> 0)

Just because mathematical equations behave differently with zero doesn't change the fact that zero _conceptually_ represents "nothing".

It's the default for practical reasons. Not for mathematics' sake, but for the sake of convenience. We don't all study higher mathematics, but we're all taught to count from the time we're toddlers. Zero makes sense as the default, and that's compounded by the fact that int *must* default to zero.
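
For what it's worth, D's actual defaults are easy to check (a minimal sketch; nothing here beyond the language's documented behaviour):

import std.stdio;
import std.math : isNaN;

void main()
{
    int i;      // integers default-initialise to 0 in D
    double d;   // floating-point values default-initialise to NaN
    writeln(i);       // prints: 0
    assert(d.isNaN);  // d is double.nan, not 0.0
}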

0 / 0 = NaN // undefined

Great! Yet another reason to default to zero. That way, "0 / 0" bugs have a very distinct fingerprint.
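
Concretely (a small sketch of plain IEEE 754 behaviour, nothing D-specific):

import std.stdio;

void main()
{
    double zero = 0.0;
    double q = zero / zero;  // 0.0 / 0.0 yields NaN under IEEE 754
    writeln(q);              // prints: nan
    writeln(q + 1.0);        // still nan: NaN propagates through arithmetic
    assert(q != q);          // NaN is the only value not equal to itself
}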


, which is in line with how pointers behave (only applicable to memory, not scale).

pointer values are also bounded.

I don't see how that's relevant.


Considering the NaN blow-up behaviour, for numerical folk the expected behaviour is certainly setting NaN as the default for real. Real numbers are not meant here for coders, but for numerical folks:

Of course FP numbers are meant for coders... they're in a programming language. They are used by coders, and not every coder who uses FP math *has* to be well trained in the finer points of mathematics simply to use a number that can represent fractions in a conceptually practical way.


D here applies a rule gained from the experience of numerical people.

I'm sorry, I can't hear you over the sound of how popular Java and C# are. Convenience is about productivity, and that's largely influenced by how much prior knowledge someone needs before being able to understand a feature's behavior.

(ps. if you're going to use Argumentum ad Verecundiam, I get to use Argumentum ad Populum).


For numerical work, because 0 behaves nicely most of the time, improperly initialized variables may go undetected because the output data can look reasonable; on the other hand, because NaN blows up, such detection is straightforward: the output will be NaN, which will jump out at you very quickly.
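
For example (a minimal sketch; the sum function is invented for illustration):

import std.stdio;

double sum(const(double)[] xs)
{
    double total;        // bug: never initialised, so it defaults to NaN in D
    foreach (x; xs)
        total += x;      // NaN + anything stays NaN
    return total;
}

void main()
{
    // The bug announces itself immediately instead of producing a
    // plausible-looking but wrong number:
    writeln(sum([1.0, 2.0, 3.0]));  // prints: nan
}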

I gave examples which address this. This behavior is only [debatably] beneficial in corner cases of FP numbers specifically. I don't think that's sufficient justification in light of the reasons I gave above.


This is a numerical issue, not a coding language issue.

No, it's both. We're not theoretical physicists; we're software engineers writing a very broad scope of different programs.


Personally, in my C code I have gotten into the habit of initialising real numbers (doubles) with NaN: in the GSL library there is a ready-to-use macro, GSL_NAN. (Concerning integers, I use extreme values such as INT_MIN, INT_MAX, SIZE_MAX, ...)
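
A rough D analogue of that habit for integers might look like this (INT_POISON and findIndex are hypothetical names, not library facilities):

import std.stdio;

// Integers have no NaN, so use an extreme value as a poison default,
// in the spirit of INT_MIN / INT_MAX / SIZE_MAX in C.
enum INT_POISON = int.min;

int findIndex(const(int)[] xs, int needle)
{
    int idx = INT_POISON;   // sentinel meaning "never assigned"
    foreach (i, x; xs)
        if (x == needle)
            idx = cast(int) i;
    return idx;             // int.min flags a value that was never set
}

void main()
{
    writeln(findIndex([3, 5, 7], 5));  // prints: 1
    writeln(findIndex([3, 5, 7], 9));  // prints: -2147483648
}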

Only useful because C defaults to garbage.


I would even say that D could go further by providing a kind of NaN for integers (and for chars).

You may get your wish if ARM64 takes over.
