Thierry: thanks much for your feedback, and apologies for this tardy response.
You pointed me in the right direction. I did not appreciate how, even if the
algorithm ultimately has O(n^2) behavior, it can take a big n to overcome large
coefficients on lower-order terms (e.g. the O(1) and O(n) parts).
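To make that concrete, here is a toy cost model; the coefficients are invented for illustration, not measured from any of the code in this thread. With a large constant and linear coefficient, doubling n only starts to show the roughly 4x quadratic signature once n gets big.

```r
# Toy cost model with invented coefficients (illustration only):
# a = fixed overhead, b = per-row work, c = quadratic (copying) term.
cost <- function(n, a = 1e5, b = 1e3, c = 1) a + b * n + c * n^2

cost(200) / cost(100)    # ~1.6x: lower-order terms still dominate at small n
cost(8000) / cost(4000)  # ~3.6x: approaching the 4x an O(n^2) term implies
```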
Ideally, you would use a more functional programming approach:
minimal <- function(rows, cols){
  x <- matrix(NA_integer_, ncol = cols, nrow = 0)
  for (i in seq_len(rows)){
    # rbind() copies every existing row on each iteration
    x <- rbind(x, rep(i, cols))
  }
  x
}
minimaly <- function(rows, cols){
  x <- matrix(NA_integer_, ncol = cols, nrow = 0)
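The "more functional programming approach" mentioned above is cut off in these excerpts. A sketch of one common pattern, my reconstruction rather than the original poster's code (the name `minimal_fn` is hypothetical), builds every row first and binds once, avoiding the repeated copying:

```r
# Sketch of a functional alternative (reconstruction, not the original code):
# build all rows up front, then bind them in a single rbind() call.
minimal_fn <- function(rows, cols) {
  do.call(rbind, lapply(seq_len(rows), function(i) rep(i, cols)))
}

minimal_fn(3, 4)  # 3x4 matrix; row i is filled with the value i
```

One allocation at the end replaces the `rows` incremental copies of the loop version.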
Dear Brent,
I can confirm your timings with
library(microbenchmark)
microbenchmark(
  mkFrameForLoop(100, 10),
  mkFrameForLoop(200, 10),
  mkFrameForLoop(400, 10)
)
but profiling your code shows that rbind only uses a small fraction of the
CPU time used by the function.
profvis::profvis({mkFrameForLoop(…)})
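The definition of mkFrameForLoop itself is not shown in these excerpts. A plausible definition consistent with its name and the discussion, which is my assumption and not the original code, grows a data.frame one row at a time with rbind:

```r
# Assumed definition of mkFrameForLoop (not shown in the thread excerpts):
# grow a data.frame row by row with rbind(), the pattern under discussion.
mkFrameForLoop <- function(rows, cols) {
  d <- as.data.frame(matrix(NA_integer_, nrow = 0, ncol = cols))
  for (i in seq_len(rows)) {
    # each rbind() copies all rows accumulated so far
    d <- rbind(d, as.data.frame(matrix(i, nrow = 1, ncol = cols)))
  }
  d
}
```

Called as in the benchmark above, `mkFrameForLoop(100, 10)` would return a 100x10 data.frame.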
Subtitle: or, more likely, am I benchmarking wrong?
I am new to R, but I have read that R is a hive of performance pitfalls. A
very common one is trying to accumulate results into a typical immutable R
data structure.
Exhibit A for this confusion is this StackOverflow question on an algorithm …
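The pitfall being referred to: R's copy-on-modify semantics mean that growing an object inside a loop copies it on every iteration, while preallocating and assigning writes in place. A minimal sketch (my example; the function names are hypothetical):

```r
# Growing a vector copies it each iteration: O(n^2) total work.
grow <- function(n) {
  x <- integer(0)
  for (i in seq_len(n)) x <- c(x, i)
  x
}

# Preallocating and assigning in place is O(n).
prealloc <- function(n) {
  x <- integer(n)
  for (i in seq_len(n)) x[i] <- i
  x
}

identical(grow(1000), prealloc(1000))  # same result, very different cost
```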