I have found that ".compressTipLabel" can take quite a while for larger tree
distributions. By vectorizing the for loop, I've got substantial speed
improvements. For really big distributions, multithreading definitely helps
(but does come with a memory cost).
Code is below. HTH.
Joseph.
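A minimal sketch of the vectorization idea (my own reconstruction, not the code from the original post; `compressTipLabelFast` is a hypothetical name): the per-tip loop is replaced by a single `match()` call, and the per-tree work can optionally be fanned out with `parallel::mclapply()`.

```r
# Sketch of a vectorized compressTipLabel-style function (hypothetical
# name, not the original posting's code). Mirrors ape's behaviour:
# renumber the tips of every tree to a common reference ordering and
# store the labels once in a "TipLabel" attribute on the multiPhylo.
compressTipLabelFast <- function(trees, mc.cores = 1L) {
  ref <- trees[[1]]$tip.label
  n <- length(ref)
  relabel <- function(tr) {
    # One vectorized lookup replaces the element-by-element loop:
    idx <- match(tr$tip.label, ref)
    if (anyNA(idx)) stop("trees do not share the same set of tip labels")
    tips <- tr$edge[, 2] <= n            # edge rows whose child is a tip
    tr$edge[tips, 2] <- idx[tr$edge[tips, 2]]
    tr$tip.label <- NULL                 # labels now live in the attribute
    tr
  }
  res <- if (mc.cores > 1L)
    parallel::mclapply(trees, relabel, mc.cores = mc.cores)
  else
    lapply(trees, relabel)
  attr(res, "TipLabel") <- ref
  class(res) <- "multiPhylo"
  res
}
```

The multithreaded path forks the session, so as noted above it trades memory for speed; `mc.cores = 1L` keeps it purely serial.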
Thank you very much for the suggestions,
I will take a look at these tools and packages, especially Xgrid and
mclapply(). They will surely be a great help in reducing the time the
analysis takes. So in this case, I think the best way is to divide the
analysis across cores, making
I agree with Daniel that going in parallel is probably overkill in this
case. However, if you do want to get into parallelization with R, a good
place to start is the CRAN task view on high performance and parallel
computing: http://cran.r-project.org/web/views/HighPerformanceComputing.html.
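For concreteness, a minimal sketch of the forking approach covered in that task view, using the base `parallel` package (assumption: a Unix-alike such as macOS or Linux; on Windows `mclapply()` runs serially and you would use a `parLapply()` cluster instead):

```r
library(parallel)

# Toy workload: square each element on several cores. mclapply() forks
# the running R session, so it needs no setup beyond a core count.
# (Forking is Unix-only; on Windows keep mc.cores = 1.)
ncores <- max(1L, detectCores() - 1L)
res <- mclapply(1:8, function(i) i^2, mc.cores = ncores)
unlist(res)
```

The payload here is trivial on purpose; in practice you would replace `function(i) i^2` with the expensive per-element step of your analysis.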
Like a
Dear Jose,
Is this a problem in practice? The calculation of branch lengths Brian
describes does not sound very time-consuming to run. If you're dealing with
an enormous tree or enormous sample, they would take some time - but
presumably, the greater bottleneck would be obtaining the sample of trees
Dear all,
I want to create a consensus tree with branch lengths (Brian O'Meara's
function from the post "[R-sig-phylo] Why no branch lengths on consensus
trees?") using
a Mac workstation. However, if I simply call the function, R will not
use all cores for the analysis. I would like to know
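R itself will not spread a single function call across cores; the tree sample has to be split up explicitly. A minimal cluster-based sketch that also works on Windows (`per.tree.analysis` and the toy `trees` list are stand-ins for whatever is actually run on each tree):

```r
library(parallel)

# Hypothetical stand-in for the real per-tree computation:
per.tree.analysis <- function(tree) sum(tree$edge.length)

# Toy "sample of trees": 100 list elements with random branch lengths.
trees <- replicate(100, list(edge.length = runif(5)), simplify = FALSE)

cl <- makeCluster(max(1L, detectCores() - 1L))   # one worker per core
res <- parLapply(cl, trees, per.tree.analysis)   # trees split over workers
stopCluster(cl)                                  # always release workers
```

Cluster workers are separate R processes, so unlike forking this avoids the copy-on-write memory behaviour but requires any needed packages to be loaded on each worker (e.g. via `clusterEvalQ(cl, library(ape))`).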