On Friday, 15 April 2016 at 08:03:53 UTC, Nordlöw wrote:
On Wednesday, 13 April 2016 at 22:25:12 UTC, Nick B wrote:
What is absolute time-determinism in CPU architectures?

Take the expression "absolute time-determinism" with a grain of salt. I'm saying that even though the machine code doesn't contain any branches and the caches have been invalidated prior to the start of each frame, the execution time of the program may still vary depending on the *values* fed into the calculation.

The reason for this is that, on some CPU architectures, certain instructions, such as the trigonometric ones, are implemented in microcode that takes a different number of clock cycles depending on the parameter(s) passed to them. At my work, these variations have actually been measured and documented, and they are used to calculate worst-case variation bounds for WCET (worst-case execution time). A compiler backend, such as DMD's, could be enhanced to take these variations into account automatically.
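
To make the effect tangible, here is a minimal D sketch of my own (not from any WCET toolchain): on many x86 CPUs, subnormal floating-point operands trigger a microcode assist, so the same branch-free loop runs measurably slower on a subnormal input than on a normal one. This assumes a CPU that doesn't flush subnormals to zero, and it should be compiled without -O so the loop survives:

import std.datetime.stopwatch : StopWatch, AutoStart;
import std.stdio : writefln;

// Branch-free loop whose per-iteration latency depends only on the
// *value* of x: the fixed point of (acc + x) * 0.5 is x itself, so
// acc stays at x's magnitude for the whole loop.
double burn(double x)
{
    double acc = x;
    foreach (_; 0 .. 10_000_000)
        acc = (acc + x) * 0.5;
    return acc;
}

void main()
{
    // 1.0 is a normal double; 1e-310 is subnormal.
    foreach (seed; [1.0, 1e-310])
    {
        auto sw = StopWatch(AutoStart.yes);
        immutable r = burn(seed);
        sw.stop();
        writefln("input=%g  result=%g  elapsed=%s", seed, r, sw.peek);
    }
}

The instruction stream is identical for both inputs; any timing difference comes purely from the operand values, which is exactly the kind of variation a WCET bound has to absorb.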

It seems to me that you're also a slave to many details of the
compiler back-end, notably exactly what instructions are output.
That will likely change under different optimization levels, and
can also change in unexpected ways when nearby code changes and
instructions get re-ordered by a peephole optimizer that decides
it now has a chance to kick in and modify surrounding code.  Not
to mention that you're subject to optimizer changes over time in
successive versions of the compiler.  I'm curious:  how often do
you consider it necessary to re-validate all the assumptions that
were made in a particular code review?
