"But I agree, we need to do something about it."

This still bothers me: since the compiler already has a constant selecting either 
32- or 64-bit ints, couldn't that width be user-defined for ints and floats?

Here are the reasons it bothers me:

  * All C/C++ libraries expect 32 bits
  * All GPUs expect 32 bits
  * Many functions and loops are auto-vectorised (SIMD) by GCC and other C 
compilers for 32-bit types, but not for 64-bit ones.
  * The standard library is written with float or int instead of auto. (Perhaps 
this could be fixed?)

I still write only in Nim; that is exactly why I don't want a 1.0 that ships with 
mistakes that are impossible to fix later. Otherwise I could not be happier with 
the language.
