Rprof() has an argument `interval = 0.02` that controls how frequently
sampling takes place. On Linux the maximum sampling frequency is once
every 10 ms, and on other platforms it's once every 1 ms, per
help("Rprof"):

"What is feasible is machine-dependent. On Linux, R requires the
interval to be at least 10ms, on all other platforms at least 1ms.
Shorter intervals will be rounded up with a warning."

This is implemented in
<https://github.com/r-devel/r-svn/blob/eb498f735e6b592c3db53d6824be3a7c30d4c4d5/src/main/eval.c#L897-L907>:

#if defined(linux) || defined(__linux__)
    if (dinterval < 0.01) {
        dinterval = 0.01;
        warning(_("interval too short for this platform, using '%f'"), dinterval);
    }
#else
    if (dinterval < 0.001) {
        dinterval = 0.001;
        warning(_("interval too short, using '%f'"), dinterval);
    }
#endif
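
For anyone who wants to reproduce this, a minimal session along these
lines (the file name "prof.out" is just a placeholder) trips the limit;
on Linux the first call warns and rounds the interval up to 10 ms:

Rprof("prof.out", interval = 0.001)  # below the 10 ms Linux minimum; warns and rounds up
for (i in 1:1e6) sqrt(i)             # some work to sample
Rprof(NULL)                          # stop profiling
summaryRprof("prof.out")             # timings reflect the effective interval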

Q. These limits were introduced on 2022-11-18 (r83369) by Tomas K.
How were these limits chosen? Does the Linux limit of 10 ms apply to
all Linux distributions, kernels, and hardware, or was it picked to
work on most systems? Do these limits need to be revisited over time?
I would imagine that the feasible limit depends on the hardware and
the speed of the file system that Rprof() writes to, so I find it a
bit odd that it is hardcoded to an absolute wall-time period.

FWIW, I just recompiled R-devel on my Ubuntu Linux laptop to allow for
1 ms, and the collected data look like what I'd expect at this
resolution too. Without that tweak, a lot of profiled calls clock in
at 10 ms.
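
One way to check the effective resolution, rather than eyeballing the
timings, is to read the sampling interval back from the profile file
header; as far as I can tell its first line has the form
"sample.interval=20000", in microseconds (a sketch, again with
"prof.out" as a placeholder):

hdr <- readLines("prof.out", n = 1L)
as.numeric(sub(".*sample.interval=", "", hdr)) / 1e6  # effective interval in seconds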

Thanks,

Henrik
