On 2022-11-15 21:36, Phillip Susi wrote:

> Jakob Bohm via openssl-users <openssl-users@openssl.org> writes:
>
>> Performance-wise, using a newer compiler that implements int64_t etc. via
>> frequent library calls, while technically correct, is going to run
>> unnecessarily slowly compared to algorithms that actually use the
>> optimal integer sizes for the hardware/compiler combination.
>
> Why would you think that?  If you can rewrite the code to break things
> up into 32-bit chunks and handle overflows etc., the compiler certainly
> can do so at least as well, and probably faster than you ever could.

When a compiler breaks up such operations, it does so separately for
each operator (+, -, *, /, %, <<, >>).  In doing so, compilers
generally use expansions that must be valid for all possible operand
values, whereas manually split code can often skip cases that cannot
occur in the algorithm in question, for example by taking advantage
of some values always being less than SIZE_T_MAX.
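
To make that concrete, here is a minimal sketch (not from the original
exchange; the function names and the assumption that both operands fit
in 32 bits are mine, purely for illustration) contrasting a generic
expansion of a 64-bit multiply on a 32-bit target with hand-split code
that exploits a known value range:

#include <stdint.h>

/* Generic expansion a compiler might emit for (a * b) mod 2^64 on a
 * 32-bit machine: three 32x32->64 partial products plus shifts/adds. */
static uint64_t mul64_generic(uint64_t a, uint64_t b)
{
    uint32_t a_lo = (uint32_t)a, a_hi = (uint32_t)(a >> 32);
    uint32_t b_lo = (uint32_t)b, b_hi = (uint32_t)(b >> 32);

    uint64_t lo  = (uint64_t)a_lo * b_lo;
    uint64_t mid = (uint64_t)a_lo * b_hi + (uint64_t)a_hi * b_lo;

    /* a_hi * b_hi only affects bits above 63, so it is dropped. */
    return lo + (mid << 32);
}

/* If the algorithm guarantees both values are below 2^32 (e.g. always
 * less than SIZE_T_MAX on a 32-bit host), the hand-split code needs
 * just one widening multiply. */
static uint64_t mul64_small_operands(uint32_t a, uint32_t b)
{
    return (uint64_t)a * b;
}

The compiler has to assume the worst case for every individual
multiply, while the programmer can apply the cheaper form wherever the
range guarantee holds.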

Also, as I already mentioned, some compilers perform this splitting
incorrectly, producing code that computes wrong results.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
