https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98577
--- Comment #21 from kargl at gcc dot gnu.org ---
(In reply to Chinoune from comment #20)
> won't fix.

This is hilarious!  Now, I know why you are so confused.

From your code in comment #2:

  call system_clock( t1, count_rate_r32 )
  c = matmul( a, b )
  call system_clock( t2 )

Here, t1 and t2 are integer(8) and count_rate_r32 is integer(4).  In the
first call to system_clock(), the clock rate is set to 1000 and t1 counts
ticks on the millisecond time scale.  In the second call to system_clock(),
t2 counts ticks on the nanosecond time scale.

gfortran uses the smallest kind type parameter among the actual arguments
to determine which time scale to use.  So, for the first call to
system_clock(), min(kind(t1), kind(count_rate_r32)) = 4, and you get
milliseconds.  For the second, kind(t2) = 8, and you get nanoseconds.
Intel, which appears to be your gold standard, chooses to use the kind
type parameter of the first argument.  Both implementations are correct,
because these are processor-dependent values.  To paraphrase Steve Lionel
(a former Intel Fortran compiler engineer): why would one mix types?
See comp.lang.fortran for his comment.

Note that the sequence

  call system_clock(count_rate_r32)
  call system_clock(t1)
  c = matmul( a, b )
  call system_clock(t2)

will give you the wrong timing with both gfortran and Intel, or will at
best give a correct relative timing.

So, to summarize: you are seeing processor-dependent behavior.  There is
no bug here.  There is nothing to fix.
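The portable way to avoid the ambiguity described above is to pass arguments of one kind to every system_clock() call, so count and count_rate always refer to the same tick resolution regardless of which processor-dependent rule the compiler applies.  A minimal sketch (not taken from the report; the array sizes and variable names are illustrative):

```fortran
! Hypothetical timing sketch: all system_clock arguments share one
! integer kind, so both compilers pick the same time scale.
program timing_demo
   implicit none
   integer, parameter :: i8 = selected_int_kind(18)
   integer(i8) :: t1, t2, rate
   real :: a(500,500), b(500,500), c(500,500)

   call random_number(a)
   call random_number(b)

   ! count and count_rate are both integer(i8), so t1, t2, and rate
   ! are all measured on the same processor-dependent scale.
   call system_clock(t1, rate)
   c = matmul(a, b)
   call system_clock(t2)

   print *, 'elapsed seconds:', real(t2 - t1) / real(rate)
end program timing_demo
```

With matching kinds, both gfortran and Intel report a consistent elapsed time; the mixed-kind version in the report is where the two implementations legitimately diverge.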