On Wednesday, 16 September 2015 at 08:38:25 UTC, deadalnix wrote:

> Also, predictable sizes mean you can split your dataset and process it in parallel, which is impossible if the sizes are random.

I don't recall how he would deal with something akin to cache misses when you have to promote or demote a unum. However, my recollection of the book is that there was quite a bit of focus on a unum representation with the same size as a double. If you only did the computations in that format, I would expect the sizes to stay more or less fixed; promotion would be pretty rare, though still possible, I would think.
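To make that concrete, here is a rough sketch (in D, using std.bitmanip.bitfields) of what such a double-sized unum might look like: sign, exponent, and fraction fields plus the ubit and the self-descriptive size fields (the utag). The particular field widths below are my own guess for illustration only, not the layout from the book.

```d
import std.bitmanip : bitfields;
import std.stdio : writeln;

// Illustrative fixed-width "unum that fits in 64 bits".
// Field widths are assumptions, chosen so the whole thing
// occupies exactly the storage of a double.
struct Unum64
{
    mixin(bitfields!(
        ulong, "fraction", 46,   // fraction bits (width chosen for illustration)
        ulong, "exponent", 11,   // exponent bits, double-like
        bool,  "sign",      1,
        bool,  "ubit",      1,   // inexact flag: value lies in an open interval
        ubyte, "esizem1",   2,   // utag: exponent size in use, minus 1
        ubyte, "fsizem1",   3)); // utag: fraction size in use, minus 1
}

void main()
{
    Unum64 u;
    u.sign = false;
    u.exponent = 1023;  // double-style biased exponent for 1.0
    u.fraction = 0;
    u.ubit = false;     // exact value
    writeln(Unum64.sizeof); // 8 bytes: same storage footprint as a double
}
```

As long as every operand and result is kept in this one container, the per-value size never changes, which is the case where the parallel-splitting objection above largely goes away.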

Compared to calculations with doubles, there might not be a strong case for energy efficiency (though I don't know for sure). My understanding was that the energy-efficiency benefit only shows up when you use a smaller-sized unum in place of a float. I also don't recall how he would resolve your point about cache misses.

Anyway, while I can see benefits to using unums (accuracy, avoiding overflow, etc.) rather than floating-point numbers, I think performance and energy efficiency would have to be within range of floating point for unums to see any meaningful adoption.
