f32 (and even f16) is critical for fast computation in deep learning, 
especially on GPUs, and likewise in games. f32 is more than enough precision 
for the majority of data (images, sounds, text, financial data).

However, from a library writer's point of view, declaring a type, data 
structure, or function signature as T, float, or float32 is not worth 
changing the Nim default: it is written only once.
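
For example, a library written once against a generic float type is largely 
unaffected by whatever the default is. A minimal sketch (the proc scale is 
illustrative, not from any existing library):

    # Written once; works for whichever float width the caller picks.
    proc scale[T: SomeFloat](xs: seq[T]; k: T): seq[T] =
      result = newSeq[T](xs.len)
      for i, x in xs:
        result[i] = x * k

    let a = scale(@[1'f32, 2, 3], 2'f32)  # seq[float32]
    let b = scale(@[1.0, 2, 3], 2.0)      # seq[float64]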

Besides, I see the following arguments in favor of changing the default to 
float32:

  * Speed/memory in microbenchmarks. This probably influences people looking 
for "as-fast-as-C or as-efficient-as-C" languages and might help Nim's 
popularity.
  * Consumer GPUs have fast f32 and slow/emulated f64.
  * Usability: currently, declaring a seq[seq[float32]] requires the 'f32 
suffix in each subsequence: let s = @[@[1'f32,2,3],@[4'f32,5,6]] (see the 
sketch below).
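
A minimal sketch of today's spelling next to what a float32 default would 
allow (the second form is hypothetical, since float currently means float64):

    # Today: the 'f32 suffix is needed once per inner seq, because each
    # subsequence is typed independently.
    let s = @[@[1'f32, 2, 3], @[4'f32, 5, 6]]  # seq[seq[float32]]

    # With a float32 default, plain literals would suffice:
    # let s = @[@[1.0, 2, 3], @[4.0, 5, 6]]    # would infer seq[seq[float32]]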

Arguments against are:

  * It's better to have accurate results slowly than wrong results fast, 
especially when debugging (see the sketch after this list).
  * It's a breaking change
  * Compatibility with existing C and C++ libraries.
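
To make the accuracy argument concrete, here is a minimal sketch of float32 
silently losing integer precision past 2^24, where float64 still copes:

    let big = 16_777_216'f32    # 2^24: beyond this, float32 skips integers
    echo big + 1'f32 == big     # true: the +1 is silently rounded away
    echo 16_777_216'f64 + 1'f64 == 16_777_216'f64  # false: float64 still exact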

My take would be to have an easy-to-understand default, either:

  * "We default to 32-bit int and floats everywhere"

or

  * "We default to the platform pointer size"

And document it in the manual.
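
For reference, current Nim already mixes the two rules: int follows the 
platform pointer size, while float is always mapped to 64-bit float64. A 
quick check:

    # Current defaults: int is pointer-sized, float is always 64-bit.
    echo sizeof(int)      # 8 on a 64-bit platform, 4 on a 32-bit one
    echo sizeof(float)    # always 8: float is mapped to float64
    echo sizeof(float32)  # always 4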
