https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98577
kargl at gcc dot gnu.org changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |kargl at gcc dot gnu.org

--- Comment #3 from kargl at gcc dot gnu.org ---
(In reply to Chinoune from comment #2)

> program main
>   use iso_fortran_env
>   implicit none
>   !
>   integer(int32) :: count_rate_i32
>   integer(int64) :: t1, t2, t3, t4, t5, t6
>   real(real32)   :: count_rate_r32
>   real(real64)   :: count_rate_r64
(brevity)
> $ gfortran-10 -O3 bug_gcc_98577_2.f90 -o test.x
> $ ./test.x
>  count_rate_r32:   568052096.
>  count_rate_i32:   568763156.84800005
>  count_rate_r64:   0.71658250000000001
>
> Can you explain these results?!

Harald just explained it to you.

% gfcg -o z -Wall a.f90 && ./z
 integer kind:   1     2       4           8
        count:  -127  -32767  996821773   996821773517757
         rate:   0     0      1000        1000000000
          max:   0     0      2147483647  9223372036854775807
    real kind:   4       8             10            16
         rate:   1000.0  1000000000.0  1000000000.0  1000000000.0

For integer(4) and real(4), the number of ticks per second is 1000,
i.e., count_rate.  For integer(4) and real(4), the count is done in
units of 1/1000, i.e., 1/count_rate.

For integer(8) (and integer(16) if supported) and real(8), real(10),
and real(16), the number of ticks per second is 1000000000.  For
integer(8) (and integer(16) if supported), the count is done in units
of 1/1000000000, i.e., 1/count_rate.

When your program does its scaling, it mixes units:

  print*, "count_rate_r32:", (t2-t1)/count_rate_r32

t2-t1 is on the nanosecond time scale; count_rate_r32 is on the
millisecond time scale.

  print*, "count_rate_i32:", (t4-t3)/real(count_rate_i32,real64)

t4-t3 is on the nanosecond time scale; count_rate_i32 is on the
millisecond time scale.

  print*, "count_rate_r64:", (t6-t5)/count_rate_r64

t6-t5 is on the nanosecond time scale, and count_rate_r64 is also on
the nanosecond time scale, so the units match and only this result is
a correct elapsed time.
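A minimal sketch of the fix implied above (file and program names are hypothetical): query count_rate with the same kind as the count arguments, so both numbers are on one time scale and the division yields seconds regardless of which resolution the implementation picks.

```fortran
program consistent_clock
   use iso_fortran_env, only : int64, real64
   implicit none
   integer(int64) :: t1, t2, rate
   real(real64)   :: elapsed

   ! Query the rate with the SAME integer kind as the counts below.
   ! With gfortran, int64 arguments select the nanosecond clock, so
   ! rate is 1000000000 and t1/t2 are nanosecond ticks.
   call system_clock(count_rate=rate)

   call system_clock(t1)
   ! ... work to be timed ...
   call system_clock(t2)

   ! Same units in numerator and denominator -> seconds.
   elapsed = real(t2 - t1, real64) / real(rate, real64)
   print *, "elapsed seconds:", elapsed
end program consistent_clock
```

Mixing kinds, e.g. an int64 count difference divided by a rate queried through an int32 or real32 argument, reproduces the inflated numbers in the bug report, because each kind may select a different underlying clock resolution.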