Yes, that is how it works, but a single run does not provide sufficient precision unless your sample size is enormous. When you partition into tenths again, the partitions will be different, so yes, there is some randomness. Repeating the cross-validation 100 times and averaging the results averages out that randomness. Or just use the bootstrap with B=300 (depending on sample size).
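A sketch of both options with the rms package (the dataset here is simulated purely for illustration; fold counts and repetition numbers follow the discussion above):

```r
library(rms)

# Simulated example data (hypothetical, for illustration only)
set.seed(1)
n  <- 200
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- ifelse(runif(n) < plogis(x1 + x2), 1, 0)

# validate() needs the design matrix and response stored in the fit
f <- lrm(y ~ x1 + x2, x = TRUE, y = TRUE)

# Option 1: bootstrap validation with B = 300 resamples
v_boot <- validate(f, method = "boot", B = 300)

# Option 2: 10-fold cross-validation (B = 10 groups),
# repeated 100 times and averaged to smooth out the random partitioning
reps <- lapply(1:100, function(i)
  validate(f, method = "crossvalidation", B = 10))
v_cv <- Reduce(`+`, reps) / length(reps)
```

The averaging step works because `validate` returns a numeric matrix of indexes, so element-wise addition and division combine the repetitions directly.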
Frank

viostorm wrote:
> Thanks so much for the reply it was exceptionally helpful! A couple of
> questions:
>
> 1. I was under the impression that k-fold with B=10 would train on 9/10,
> validate on 1/10, and repeat 10 times for each different 1/10th. Is this
> how the procedure works in R?
>
> 2. Is the reason you recommend repeating k-fold 100 times because the
> partitioning is random, i.e. not 1st 10th, 2nd 10th, et cetera, so you might
> obtain slightly different results?

-----
Frank Harrell
Department of Biostatistics, Vanderbilt University

--
View this message in context: http://r.789695.n4.nabble.com/recommendation-on-B-for-validate-lrm-tp3486200p3508187.html
Sent from the R help mailing list archive at Nabble.com.

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.