Thank you all for your suggestions! I must say I am amazed by the number of
people who consider helping out another! Feels like it was a good idea to
start using R - back when I was still using Perl for such tasks, I would
have been happy to have this kind of support!
@ Gheorghe Postelnicu: Unfortunately,
Dear all,
for the past two weeks, I've been working on a script to retrieve word pairs
and calculate some of their statistics using R. Everything seemed to work
fine until I switched from a small test dataset to the 'real thing' and
noticed what a runtime monster I had devised!
I could reduce
The data.table package might be of use to you, but lacking a reproducible
example [1] I think I will leave figuring out just how to you.
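To make the data.table suggestion concrete, here is a minimal sketch. It assumes the pairs live in two character vectors `word1`/`word2` (those names appear in the code quoted further down the thread; the toy values here are made up):

```r
library(data.table)

## Hypothetical stand-in data; the real word1/word2 vectors come from
## the poster's corpus.
word1 <- c("the", "the", "a", "the")
word2 <- c("cat", "dog", "cat", "cat")

dt <- data.table(word1 = word1, word2 = word2)
## One grouped pass: number of distinct right-hand words per left-hand
## word, instead of re-scanning the whole vector for every i.
res <- dt[, .(typefreq.after1 = uniqueN(word2)), by = word1]
```

`uniqueN()` counts distinct values within each group, so the whole statistic is computed in a single grouped scan.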
Being on Nabble, you may not be able to see the footer appended to every
message on this mailing list. For your benefit, here it is:
* R-help@r-project.org
On 8 Dec 2014, at 21:21, apeshifter ch_k...@gmx.de wrote:
The last relic of the aforementioned for-loop that goes through all the
word pairs and tries to calculate some statistics on them is the following
line of code:
typefreq.after1[i] <- length(unique(word2[which(word1 == word1[i])]))
(where
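For comparison, the per-`i` lookup above rescans `word1` on every iteration, which is quadratic in the number of pairs. A vectorised sketch of the same statistic, assuming `word1`/`word2` are character vectors (the toy values below are hypothetical):

```r
## Hypothetical stand-in for the real word-pair vectors:
word1 <- c("the", "the", "a", "the")
word2 <- c("cat", "dog", "cat", "cat")

## Compute the statistic once per distinct word1 value ...
per.type <- tapply(word2, word1, function(w) length(unique(w)))
## ... then expand it back to one value per pair by name lookup,
## which is what the loop fills into typefreq.after1 element by element.
typefreq.after1 <- as.vector(per.type[word1])
```

This replaces the O(n^2) loop with one grouped pass plus a single indexed lookup.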
Two ideas (I haven't tried them):
1. If your data is in a data frame, did you try using the by function?
It seems it would do the grouping for you.
2. Since you mention the CPU cores, you could use packages like foreach
with %dopar%, or parallel::mclapply.
I would try 1. and see if it provides a sufficient
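A quick sketch of both ideas, assuming the pairs sit in a data frame with columns `word1`/`word2` (hypothetical names and values, matching the code quoted above):

```r
library(parallel)

## Hypothetical data frame holding the word pairs:
df <- data.frame(word1 = c("the", "the", "a", "the"),
                 word2 = c("cat", "dog", "cat", "cat"))

## Idea 1: by() does the grouping in one call.
counts <- by(df$word2, df$word1, function(w) length(unique(w)))

## Idea 2: the same per-group statistic fanned out over cores with
## parallel::mclapply (mc.cores > 1 needs a Unix-alike; use 1 on Windows).
counts.par <- mclapply(split(df$word2, df$word1),
                       function(w) length(unique(w)),
                       mc.cores = 2)
```

Note that forking with mclapply only pays off when the per-group work is heavy; for a cheap statistic like this, the grouped single-core version is usually faster.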