no, I'm not. mostly conventional use afaik. if this should not be
happening, I can trace it down to a small reproducible example to
figure it out.
--
Ivo Welch (ivo.we...@ucla.edu)
http://www.ivo-welch.info/
J. Fred Weston Distinguished Professor of Finance
ugghhh---apologies. although in 2020, it would be nice if the mailing
list had an automatic html filter (or even bouncer!)
I am using macos. alas, my experiments suggest that `mclapply()` on a
32-core intel system with 64GB of RAM, where the input data frame is
8GB and the output is about 500MB
if I understand correctly, R makes a copy of the full environment for each
process. thus, even if I have 32 processors, if I only have 64GB of RAM
and my R process holds about 10GB, I should probably not spawn 32 processes.
has anyone written a function that sets the number of cores for use (in
m
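(A minimal sketch of such a helper, since none seems to exist in base R; the name choose.cores and its arguments are made up. It simply caps mc.cores by an estimate of per-worker memory.)
library(parallel)
choose.cores <- function(mem.per.worker.gb, total.ram.gb,
                         max.cores = detectCores()) {
  by.memory <- floor(total.ram.gb / mem.per.worker.gb)
  max(1L, min(max.cores, by.memory))
}
## e.g., roughly 10GB per forked worker on a 64GB machine:
## mclapply(work, f, mc.cores = choose.cores(10, 64))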
I would like to put together a set of my collected utility functions and
share them internally. (I don't think they are of any broader interest.)
To do this, I still want to follow good practice. I am particularly
confused about writing docs.
* for documentation, how do I refer to '@'-type docum
dear R wizards: `optimize()` requires the user to provide the
brackets. I can write a bracketing routine, given a function and a
starting point, but I was wondering whether there was already a
"standard" user-exposed implementation. (Presumably, this is used in
nlm, too; alas, nlm is in C, not n
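(A sketch of a simple bracketing routine, in case it helps; as far as I know there is no user-exposed one in base R, so bracket.minimum below is a made-up name. It expands an interval geometrically around the starting point until the function is higher on both sides.)
bracket.minimum <- function(f, x0, step = 1, grow = 2, maxit = 50) {
  lo <- x0 - step; hi <- x0 + step
  for (i in seq_len(maxit)) {
    if (f(lo) > f(x0) && f(hi) > f(x0)) return(c(lo, hi))
    if (f(lo) <= f(x0)) lo <- x0 - (x0 - lo) * grow
    if (f(hi) <= f(x0)) hi <- x0 + (hi - x0) * grow
  }
  stop("no bracket found")
}
## optimize(f, interval = bracket.minimum(f, x0 = 0))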
e probably looking at adjacent points,
pretending that they are linear, marking where a line between them
intercepts the level, and then hoping that some sanity check prevents me from
connecting disconnected levels. not my plan...]
Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info/
n interactive mode
regards, /iaw
Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info/
J. Fred Weston Professor of Finance
Anderson School at UCLA, C519
Director, UCLA Anderson Fink Center for Finance and Investments
Free Finance Textbook, http://book.ivo-welch.info/
Editor, Cr
within(d, z <- sin(x+y))
quartz()
contourplot( z ~ x * y, data = d)
am I committing an error, or is there something more robust or at
least verbose, perhaps?
help appreciated. /iaw
Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info/
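(A note on the snippet above: within() returns a modified copy rather than changing d in place, so a minimal fix, assuming the intent is to add z to d before plotting, is to assign the result back.)
library(lattice)                    # contourplot() lives in lattice
d <- within(d, z <- sin(x + y))     # assign the copy back to d
contourplot(z ~ x * y, data = d)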
regards, /iaw
----
Ivo Welch (ivo.we...@gmail.com)
does anyone have a kaveri based system with R recompiled to use its GPU?
is this even possible today?
regards, /iaw
PS: I am trying to collect benchmarks
http://r.ivo-welch.info/
Ivo Welch (ivo.we...@gmail.com)
I am personally happy to live with R for its power despite
its drawbacks; but IMHO it is just too much to ask from a set of bewildered
novice master students.
I hope the R team will at some point in the future pick up on making the
core language less mysterious upon setting an option, at least in "
easier.)
sincerely, /iaw
Ivo Welch (ivo.we...@gmail.com)
I am struggling with a contour plot. I want to place a cross over the
minimum. alas, I don't seem to be able to map axes appropriately.
here is what I mean:
N <- 1000
rm <- rnorm(N, mean=0.0, sd=0.4)
rx <- rnorm(N, mean=0.0, sd=0.4)
rt <- rnorm(N, mean=0.0, sd=0.4)
exploss <- function(hdgM,hdg
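(The exploss() definition is cut off above, so here is a generic sketch of marking the minimum on a base-graphics contour plot; the grid names xg, yg, zm are made up.)
xg <- seq(-1, 1, length.out = 101)
yg <- seq(-1, 1, length.out = 101)
zm <- outer(xg, yg, function(a, b) (a - 0.2)^2 + (b + 0.3)^2)
contour(xg, yg, zm)
imin <- arrayInd(which.min(zm), dim(zm))        # grid cell of the minimum
points(xg[imin[1]], yg[imin[2]], pch = 3, cex = 2, col = "red")  # the cross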
ive of z(), function x is not an enclosing environment.
do I write a while loop to look back, or is there a standard R function
that searches all calling environments until it finds one that works?
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
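(A sketch of such a search, walking the call stack from the innermost caller outward; find.in.callers is a made-up helper, and if I recall correctly base R's dynGet() does much the same in R >= 3.2.0.)
find.in.callers <- function(name) {
  for (i in rev(seq_len(sys.nframe() - 1))) {
    env <- sys.frame(i)
    if (exists(name, envir = env, inherits = FALSE)) return(get(name, envir = env))
  }
  stop("'", name, "' not found in any calling frame")
}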
duh!
Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info/
J. Fred Weston Professor of Finance
Anderson School at UCLA, C519
Director, UCLA Anderson Fink Center for Finance and Investments
Free Finance Textbook, http://book.ivo-welch.info/
Editor, Critical Finance Review, http
py the globalenv, run my code, see what objects have been changed (how?),
move the changed and new functions into my a environment, and then restore
globalenv. or is this already done somewhere else?
/iaw
----
Ivo Welch (ivo.we...@gmail.com)
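(A sketch of the "see what objects have been changed" step using snapshots of ls(); it only catches newly created objects, not modified ones, and the file name mycode.R is made up.)
before <- ls(globalenv())
source("mycode.R")                        # run the code in globalenv
new.objs <- setdiff(ls(globalenv()), before)
a <- new.env()
for (nm in new.objs) assign(nm, get(nm, envir = globalenv()), envir = a)
rm(list = new.objs, envir = globalenv())  # restore globalenv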
for (nm in c("x")) print(fama.macbeth( y ~ d[[nm]], din=d ))
or whatever.
the output in both cases should be the same, preferably even knowing
that the name of the variable is really "x" and not nm. is there a
standard common way to do this?
regards,
/iaw
Ivo W
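(One standard idiom is to build the formula from the variable's name with reformulate(), so the fitted object reports "x" rather than "d[[nm]]"; fama.macbeth() and its din= argument are the poster's own code and are assumed to accept a formula.)
for (nm in c("x")) {
  f <- reformulate(nm, response = "y")    # y ~ x, carrying the real name
  print(fama.macbeth(f, din = d))
}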
avoid losing the column name. I wonder why such vectors don't
keep a name attribute of some sort.
there is probably an "R way" of doing this. is there?
/iaw
Ivo Welch (ivo.we...@gmail.com)
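(Assuming the cut-off context is extracting a single column from a data frame, two common idioms that keep the name around:)
d <- data.frame(x = 1:3, y = 4:6)
d[, "x", drop = FALSE]   # stays a one-column data frame; the name "x" survives
unlist(d["x"])           # a plain vector whose names are "x1", "x2", "x3"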
below. best to be ignored.
/iaw
Ivo Welch (ivo.we...@gmail.com)
--- the as75.R file, at least as of july 2013
### see end of file for documentation
debug <- 0
if (debug) cat(rep("\n",10)) ## visual separation
dyn.load("Ras75.so") # created with R CMD SHLIB
thx, jim. makes perfect sense now.
I guess a logical in R has a few million possible values ;-).
(Joke. I realize that 4 bytes is to keep the code convenient and faster.)
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
On Mon, Jul 15, 2013 at 4:26 PM, jim holtman wrote:
> I can g
reference?
I may be blanking here---maybe the answer is obvious.)
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
large data sets. I saw some
specific solutions on stackoverflow, a couple requiring even less
parsimonious user code. is everyone using bigmemory? or SQL? or ...
? I am leaning towards the SSD solution. am I overlooking some
simpler recommended solution?
/iaw
Ivo Welch (ivo.we...@gmai
Dear R group:
I just bought a Haswell i7-4770k machine, so I went through the
trouble of creating a small website with some comparative benchmarks.
I also made it easy for others to contribute benchmarks. if you are
interested, it's all at http://R.ivo-welch.info/ . enjoy.
/iaw
----
Ivo
thanks, mark. these are excellent starting pointers. I will get to them
asap. I hope I won't need to bother you more.
regards,
/iaw
ved a gazillion
times.
could someone please point me to some simple textbook or how-to treatments of
this problem and/or R packages that implement this? feel free to point out
your own work...this way I can cite it.
regards,
/iaw
----
Ivo Welch (ivo.we...@gmail.com)
] ~ d[,2]) ) ## calculate two coefficients and return
them
}
results <- mclapply( 1:1, run )
stumped over something that should be easy...pointer appreciated.
/iaw
Ivo Welch (ivo.we...@gmail.com)
ly has no decompression
penalty and becomes close to native fast reading of .gz data. chances
are this is because it has .gz support baked in. gzfile does not help
with read.csv, however.
/iaw
Ivo Welch (ivo.we...@gmail.com)
On Mon, Jun 10, 2013 at 10:09 AM, ivo welch wrote:
>> Sur
useful only for big files anyway.
is it possible to block write.csv across multiple threads in mclapply? or
hook a single-thread function into the mclapply collector?
/iaw
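(A common workaround, sketched here rather than a built-in mclapply feature: let the forked workers only compute, and do all writing in the single parent process once the results come back; chunks and compute.chunk are made-up names.)
library(parallel)
res <- mclapply(chunks, compute.chunk, mc.cores = 8)
for (r in res)
  write.table(r, "out.csv", append = TRUE, sep = ",",
              col.names = FALSE, row.names = FALSE)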
On Wed, Jun 5, 2013 at 5:06 AM, Duncan Murdoch wrote:
> On 13-06-05 12:08 AM, ivo welch wrote:
>
>> thx, greg.
that a
built-in R filter function could provide.
Ivo Welch (ivo.we...@gmail.com)
On Tue, Jun 4, 2013 at 2:56 PM, Greg Snow <538...@gmail.com> wrote:
> Some possibilities using existing tools.
>
> If you create a file connection and open it before reading from it (or
> wr
ss, and
make sure that the processes don't step on one another. (why did R
not use a dot after "mc" for parallel lapply?) presumably, to keep it
simple, mcfilter.csv would keep a counter of read chunks and block
write chunks until the next sequential chunk arrives in order.
just a sug
I read Hadley's excellent Rcpp tutorial at
https://github.com/hadley/devtools/wiki/Rcpp. Alas, there is one part
that is missing---how do I maintain a memory region between calls?
Here is a stylized example of what I mean:
extern "C" {
#include
#include
}
#include
class silly {
double *v; i
Gentleman's AS 274 algorithm allows weights, so adding an obs with a weight
of -1 would do the trick of removing obs, too.
This may be a good job for Hadley Wickham's C code interface.
On May 27, 2013 12:47 PM, "Berend Hasselman" wrote:
>
> On 27-05-2013, at 17:12, ivo welch w
o updating
models, not observations. even if it did, given the speed of lm(), I
don't think it will be that useful.
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
On Mon, May 27, 2013 at 9:26 AM, Bert Gunter wrote:
> Ivo:
>
> 1. You should not be fitting linear models as you des
rors of coefficients.
before I get started on this, I just wanted to inquire whether someone
has already written such a function.
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
an do to drastically speed up
R on intel i7 by going to FP32?
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
gs, or does it handle this line by line as it goes?
(if it does the former, I can write a perl pre-filter)
/iaw
Ivo Welch (ivo.we...@gmail.com)
norm(N)*10))
print(head(pairs)) ## works
p <- polr( y ~ x , method="probit", data=pairs)
print(summary(p))
pairs is saved as a name. eventually, summary.polr thinks it is the
pairs() function, not the pairs data frame.
/iaw
Ivo Welch (ivo.we...@gmail.com)
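(Since the data frame shares its name with the base pairs() function, a minimal workaround, sketched with made-up data because the original setup is cut off, is simply to use a non-colliding name.)
library(MASS)
mydat <- data.frame(x = rnorm(100),
                    y = factor(sample(1:3, 100, replace = TRUE), ordered = TRUE))
p <- polr(y ~ x, method = "probit", data = mydat)
summary(p)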
On Mon, May 13, 2013
dear R experts---how do I determine what summary(polr( y ~ x )) calls?
it is not summary.lm(polr(y~x)) or summary.mlm or summary.glm, or
stats:::summary.lm or ... in fact, none of the summary methods
seem to invoke what summary invokes.
advice, as always, appreciated.
regards,
/iaw
Ivo
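(A generic way to find the method actually dispatched; for a polr fit it is MASS's summary.polr, which is registered in the namespace rather than exported, which is why the usual guesses do not find it.)
class(p)                         # "polr"
methods(class = "polr")          # lists summary.polr among the available methods
getS3method("summary", "polr")   # the function that summary() dispatches to
## or: MASS:::summary.polr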
you get what
you pay for.
more generally, I am still wondering why we have an lm and a
summary.lm function, rather than just one function and one resulting
object for parsimony, given that the summary.lm function is fast and
does not increase the object size.
Ivo Welch (ivo.we...@gmail.com)
I ended up wrapping my own new "ols()" function in the end. it is my
replacement for lm() and summary.lm. this way, I don't need to alter
internals. in case someone else needs it, it is included. of course,
feel free to ignore.
docs[["ols"]] <- c(Rd= '
@TITLE ols.R
@AUTHOR ivo.we...@gmail.com
# the standard errors
}
if (!is.null(lm.hook)) lm.hook() ## has access to everything that lm() has already computed
Ivo Welch (ivo.we...@gmail.com)
ooops...never mind. I mixed up "title" and "main" as options.
Ivo Welch (ivo.we...@gmail.com)
On Wed, May 8, 2013 at 1:54 PM, Steve Lianoglou
wrote:
> Hi Ivo,
>
> On Wed, May 8, 2013 at 1:37 PM, ivo welch wrote:
>> dear R-experts---first, a sugges
my own title"
plot(ee)
alas, I cannot figure out how to get rid of the title altogether.
attr(ee,"call") <- NULL gives me two quotation marks ("") . is it
possible to remove the title altogether?
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
ess=TRUE, data=d, method="probit" ))
print(pp)
cat(" (main) now we do the pseudo R^2\n")
print(pR2(pp))
cat(" ok\n")
summarize(d)
Error in eval(expr, envir, enclos) : object 'p' not found.
Ivo Welch (ivo.we...@gmail.
ipt in
perl and self-parse the .Rd files that are strewn around my hard drive, or
whether there is an idiomatic R way...
best,
/iaw
Ivo Welch (ivo.we...@gmail.com)
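(For what it is worth, base R's tools package can parse and render .Rd files directly, which may be the idiomatic route; the path below is made up.)
library(tools)
rd <- parse_Rd("man/ols.Rd")   # returns a parsed Rd object
Rd2txt(rd)                     # render as text; Rd2HTML() and Rd2latex() also exist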
es seems like a
good idea. git and version control is overkill here.
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info/
el related" for lm; and
another categorization for importance (e.g., like "common" for lm and
"obscure" for ..). Such categorizations require intelligence.
if I am going to do this for myself, I think a csv spreadsheet may be a
good idea to make it easy to re-sort by keys.
actually, I had it right all along. that is,
m <- runif(); s <- runif(); df <- runif()*10 + 1  # get some parameters... any parameters
x <- rt( 10, df )*s + m # create random draws
library(MASS)
fitdistr(x, "t") # confirm properties
will work. (josh suggested working with the skewness parameter,
there a better way to do this? there
is a non-centrality parameter ncp in rt, but one parameter ncp cannot
subsume two (m and s), of course. my first attempt was to draw
rt(..., df=2.63)*s+m, but this was obviously not it.
advice appreciated.
/iaw
Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info/
curiosity question: I was wondering whether the R binaries and BLAS
libraries for ubuntu linux are compiled using SSE4 and AVX support.
this probably can go a long way towards a unified memory bus GPGPU
substitute.
Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info
as only one change---the
optional argument. this is not hard to do, but I now run the risk that the
R team could change by(). I wish I could at least test whether the by()
function changes from release to release to alert me, but functions are not
atomic and therefore cannot be compared.
what
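(Functions can at least be compared by their deparsed source, so one sketch of an alert: save a snapshot once, then compare after upgrading R; the file name is made up.)
saveRDS(deparse(by), "by_snapshot.rds")        # run once, against today's R
## ... later, in .Rprofile or at package load:
if (!identical(deparse(by), readRDS("by_snapshot.rds")))
  warning("base::by() has changed since the wrapper was written")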
m greatly indebted to you, Uwe, and a
couple of others to have helped me out many times to solve problems I have
run into. without r-help, I would have given up on it.
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info/
On Tue, Feb 26, 2013 at 6:35 AM, Uwe Ligges wrot
* lagseries:52:: Need more observations than 1*
## [b] evaluate every {{ }} construct and insert output into the string
## [c] abort
"%or%" <- function (e1, e2) {
if (!e1) { if (is.character(e2)) abort.estring(e2) else eval(e2) }
}
I do not know whether it is possible to bu
atex and R?
I could live with rolling my own IF ONLY my ide/editor would
understand my new format. I do not have the skills to tell emacs how
to switch syntax-highlighting in the middle of files.
this collective path-dependence will plague us for decades, having
created a cost
"column a1 is not numeric"
}
I also find it useful to have an optional length argument for
is.numeric or is.character, which requires the argument to be of
length x, but this is another story.
/iaw
On Fri, Feb 8, 2013 at 4:01 PM, ivo welch wrote:
> dear R experts---I am try
a", class(d), "with
names", names(d)), but I also want to be define %or% so that I can
write
(is.data.frame(d)) %or% "d is a ::class(d):: with names ::names(d)::" ;
operators don't take variable arguments afaik. :-(.
advice
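(A sketch of the ::expr:: interpolation step itself; interp is a made-up helper that evaluates each ::...:: chunk in the caller's frame and splices the result into the message.)
interp <- function(msg, env = parent.frame()) {
  m <- gregexpr("::[^:]+::", msg)
  for (tok in regmatches(msg, m)[[1]]) {
    expr <- substr(tok, 3, nchar(tok) - 2)
    val  <- paste(eval(parse(text = expr), envir = env), collapse = ", ")
    msg  <- sub(tok, val, msg, fixed = TRUE)
  }
  msg
}
## interp("d is a ::class(d):: with names ::names(d)::")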
ev.off; and/or if pdfFonts could take an optional
.pfbdir that embedFonts would search. but this is pretty painless
already as is, especially compared to how it was 5 years ago. thanks to all the folks
who put in the hard work to make this possible.
[PS: The abov
is it possible to throw a stop() that is so hard that it will escape
even tryCatch?
/iaw
Ivo Welch (ivo.we...@gmail.com)
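(Not with stop() itself, as far as I know, since tryCatch(error = ...) sees every error condition; the only genuinely uncatchable exit is to end the R process, sketched below. Obviously, running it terminates the session.)
tryCatch({
  ## ... work ...
  quit(save = "no", status = 1)   # ends the R process; the handler never runs
}, error = function(e) message("reached for stop(), but never for quit()"))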
eight than .Rd and more heavyweight than
just '#' comments.
/iaw
Ivo Welch (ivo.we...@gmail.com)
actually, I may have found what I was looking for in
https://github.com/hadley/devtools/wiki/Philosophy
---
Ivo Welch (ivo.we...@gmail.com)
contemplating the core logic of
lm() extensions is distracting at the first baby steps. similarly,
rstudio is a really nice IDE, but I don't think that roxygen2 and
devtools need rstudio.
could someone point me to a simpler "starting" document for roxygen2
and devtools, if such exis
n all default autovivification off in my
program, but that's not possible.)
/iaw
Ivo Welch (ivo.we...@gmail.com)
e a way to add a TaskCallback only when a
user resizes the window?
help appreciated...
/iaw
Ivo Welch (ivo.we...@gmail.com)
t 8:00 AM, David Winsemius wrote:
>
> On Jan 7, 2013, at 6:58 PM, ivo welch wrote:
>
>> hi david---can you give just a little more of an example? the
>> function should work with call by order, call by name, and data frame
>> whose columns are the names. /iaw
>>
hi david---can you give just a little more of an example? the
function should work with call by order, call by name, and data frame
whose columns are the names. /iaw
Ivo Welch (ivo.we...@gmail.com)
On Mon, Jan 7, 2013 at 4:25 PM, David Winsemius wrote:
>
> On Jan 7, 2013, at 3
t BS has to parse an '...' argument, but there could be a
couple of magical R functions that might make this easier than I would
do it with my planned clunky version. what's the elegant version?
/iaw
Ivo Welch (ivo.we...@gmail.com)
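(For reference, the usual idioms for taking '...' apart, sketched generically since the BS() function itself is cut off above:)
f <- function(...) {
  args  <- list(...)                                # evaluated values
  nms   <- names(args)                              # "" for unnamed arguments
  exprs <- match.call(expand.dots = FALSE)[["..."]] # unevaluated expressions
  list(n = length(args), names = nms, exprs = exprs)
}
f(a = 1, 2, b = "three")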
ded to the docs for mclapply?
/iaw
Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info/
's version of "by" is mclapply( split(
ds, factor ), FUN )
I don't know the poor man's version of "ave".
sincerely, /iaw
Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info/
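(A sketch of the matching poor man's "ave": apply FUN within groups and put the results back in the original order with unsplit(); pm.ave is a made-up name.)
library(parallel)
pm.ave <- function(x, f, FUN) unsplit(mclapply(split(x, f), FUN), f)
## e.g., demean within groups:
## pm.ave(ds$y, ds$firm, function(v) v - mean(v))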
e non-vectorized version.
/iaw
On Thu, Nov 8, 2012 at 9:53 AM, R. Michael Weylandt
wrote:
> On Thu, Nov 8, 2012 at 3:05 PM, ivo welch wrote:
>> dear R experts--- I have (many) unidimensional root problems. think
>>
>> loc.of.root <- uniroot( f= function(x,a) log( exp(
iles. I could put each column
into its own vector and then combine into a data frame, but this seems
ugly. is there a better way to embed data frames? I searched for the
answer via google, but could not find it. it wasn't obvious in the
data import/export guide.
regards,
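(Assuming the cut-off question is about embedding a small data frame directly in a script, two common idioms: paste the output of dput() into the file, or keep a readable text block and read it back.)
## 1. the output of dput(mydf), pasted verbatim into the script:
mydf <- structure(list(x = 1:3, y = c(2.5, 3.1, 4.7)),
                  class = "data.frame", row.names = c(NA, -3L))
## 2. or a small text table, still human-readable:
mydf <- read.table(text = "
x  y
1  2.5
2  3.1
3  4.7", header = TRUE)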
great. thanks. exactly what I wanted. /iaw
Ivo Welch (ivo.we...@gmail.com)
On Thu, May 31, 2012 at 2:53 PM, David L Carlson wrote:
> a <- data.frame(x=runif(4), y=runif(4), z=runif(4))
> b <- capture.output(a)
> c <- paste(b, "\n", sep="")
thanks, jeff. no, not capture.output(), but thanks for pointing me to it
(I did not know it). capture.output flattens the data frame. I want the
print.data.frame output, so that I can feed it to cat, and get reasonable
newlines, too.
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
J. Fred
dear R experts---is there a function that prints a data frame to a string?
cat() cannot handle lists, so I cannot write cat("your data frame is:\n",
df, "\n").
regards, /iaw
Ivo Welch (ivo.we...@gmail.com)
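(The idiom that came out of this thread, wrapped up: capture the printed representation and join it with newlines so cat() can emit it in one go; df.as.string is a made-up name.)
df.as.string <- function(d) paste(capture.output(print(d)), collapse = "\n")
cat("your data frame is:\n", df.as.string(df), "\n")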
fixed effects). could someone please point me to packages, if
any, that would help me estimate such models? (can these problems be split
over many different cores?)
advice appreciated.
/iaw
Ivo Welch (ivo.we...@brown.edu, ivo.we...@gmail.com)
CV Starr Professor of Economics (Finance), Brown
I cleaned up my old benchmarking code and added checks for missing
data to compare various ways of finding OLS regression coefficients.
I thought I would share this for others. the long and short of it is
that I would recommend
ols.crossprod = function (y, x) {
x <- as.ma
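(The function is cut off above; a sketch of the crossprod approach being recommended, in the textbook normal-equations form, which is fast but less numerically stable than lm()'s QR.)
ols.crossprod <- function(y, x) {
  x <- as.matrix(cbind(Intercept = 1, x))
  ok <- complete.cases(y, x)               # the check for missing data
  x <- x[ok, , drop = FALSE]; y <- y[ok]
  solve(crossprod(x), crossprod(x, y))     # (X'X)^{-1} X'y
}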
thx, guys, almost there. This is good fodder for the vignette or ?parallel.
Steps:
(1) install package "snow" on all machines that you want to be part of a cluster.
(2) run under R
library(parallel)
cl <- makeCluster(c("localhost", "calc.localdomain"), "SOCK")
result <- parLapply(cl=cl, X=1:10
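(A completed version of that call, with a made-up worker function, plus the cleanup step:)
result <- parLapply(cl = cl, X = 1:10, fun = function(i) i^2)
stopCluster(cl)    # release the workers when done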
Indices, stopCluster
bash: /Library/Frameworks/R.framework/Resources/bin/Rscript: No such
file or directory
but if I use two linux machines, it works.
now, how do I use parallel's mclapply with it?
best,
/iaw
Ivo Welch (ivo.we...@gmail.com)
J. Fred Weston Professor of Finance
Anderson S
Sys.sleep(1); x } ) # on both, please
iaw
On Wed, Apr 18, 2012 at 1:01 PM, ivo welch wrote:
> Dear R experts:
>
> could someone please point me to a page that explains how to set up
> more than 1 machine for library parallel (which is quickly becoming my
> favorite!)
...
listener processes on each of my slaves by hand. R would start slave
processes automatically on each slave that has a listener running.
I don't have the time/ability to set up full clustering
quasi-supercomputer solutions.
/iaw
----
Ivo Welch (ivo.we...@gmai
where I got it right.
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
On Sat, Mar 31, 2012 at 8:35 AM, ivo welch wrote:
> "what is the problem you are trying to solve?"
>
> elegance, ease, and readability in my programs.
>
> R has morphed from a data manipulation, gr
like me,
plus the google archives here, are angels. without your help, I could
not use R.]
Ivo Welch (ivo.we...@gmail.com)
nt. #4 is exactly what
I wanted. can list[] be added into the standard core R as a feature?
it would seem like a natural part of the syntax for functions
returning multiple values.
justin---mea culpa.
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
On Fri, Mar 30, 2012 at 5:08 PM, Justin Ha
d <- x[[2]]
rm(x)
which seems awful. is there a nicer syntax?
regards, /iaw
Ivo Welch (ivo.we...@brown.edu, ivo.we...@gmail.com)
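(One base-R sketch of a slightly nicer unpacking; unpack is a made-up helper that just assigns each list element into the caller's frame under the given names.)
unpack <- function(x, names, envir = parent.frame()) {
  for (i in seq_along(names)) assign(names[i], x[[i]], envir = envir)
}
x <- list(data.frame(a = 1:3), data.frame(b = 4:6))
unpack(x, c("d1", "d"))   # creates d1 and d, no manual x[[2]] / rm(x) dance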
ound
# this is where I had given up, but the following works:
> new.data=old.data
> new.data[recalc.please]= old.data[recalc.please]^2
> new.data
[1] 11 144 13 196 15 256 17 324 19 400
sorry, guys.
/iaw
Ivo Welch (ivo.we...@gmail.com)
On Tue, Mar 27, 2012 at 7:27 PM, ilai wro
ide the
FUN.ON.ROWS, but this is costly in terms of execution time. are there
obvious solutions? advice appreciated.
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
fter any R
program print/abort sequences have played out.
besides, "sink=TRUE, split=TRUE" could be a nice additional option to
"source".
sincerely,
/iaw
Ivo Welch (ivo.we...@gmail.com)
Dear R experts---I think I need to figure out how to stop in my error
function without triggering an error again. so, I think I need the
equivalent of C's exit(0) call. Here is what I mean:
$ R CMD BATCH die.R
and die.R is
# in my .Rprofile, but for now in die.R
options(error=function(e) print
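(A sketch of the combination usually suggested for batch runs: have the error option print the traceback and then end the process, which is R's equivalent of exit().)
## in .Rprofile or at the top of die.R
options(error = function() {
  traceback(2)                     # show the call stack of the failure
  quit(save = "no", status = 1)    # leave without signalling another error
})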
Dear R experts---I may have asked this in the past, but I don't think
I figured out how to do this. I would like to execute traceback()
automatically if my R program dies---every R program I ever invoke. I
guessed that I could have wrapped my entire R code into
tryCatch(
... oodles of R code
,
gh it does have good
package documentation. it does have some unexpected behavior:
mymatrix[1:2,] is a matrix, but mymatrix[1:1,] is a numeric. huh?
data.table is necessary for reasonably fast data manipulation, but
data.table giveth and taketh. it has some really strange unexpected
behavior---mydatatab
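(For the matrix case, the culprit is the drop = TRUE default of single-row indexing; keeping the dimension is a one-argument fix.)
m <- matrix(1:6, nrow = 3)
m[1:2, ]               # still a matrix
m[1, ]                 # drops to a plain numeric vector
m[1, , drop = FALSE]   # stays a 1-row matrix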
ve unified memory space, so the data
copy problem is hopefully long gone.
/iaw
Ivo Welch (ivo.we...@gmail.com)
dear R readers---I thought I would post the following snippet of R
code that makes by() like operations easier and faster on multicore
machines for R novices and amateurs. I hope it helps some. YMMV.
feel free to ignore.
PS: I wish R had a POD-like documentation system for end users that
are not
how to get coef standard errors faster in
this case. summary.lm() is really slow.
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info/
J. Fred Weston Professor of Finance
Anderson School at UCLA, C519
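(A sketch of the usual speedup: take the standard errors straight from the QR decomposition that lm() already computed, mirroring what summary.lm() does internally but skipping everything else; fast.se is a made-up name and ignores weights.)
fast.se <- function(fit) {
  r       <- fit$residuals
  p       <- fit$rank
  sigma2  <- sum(r^2) / (length(r) - p)
  p1      <- seq_len(p)
  XtX.inv <- chol2inv(fit$qr$qr[p1, p1, drop = FALSE])   # (X'X)^{-1}
  se <- sqrt(diag(XtX.inv) * sigma2)
  names(se) <- names(fit$coefficients)[fit$qr$pivot[p1]]
  se
}
## fit <- lm(y ~ x, data = d);  fast.se(fit)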
On Mon, Oct 10, 2011 at 11:30 PM, Joshua Wiley wrote:
>
course, knowing how to do this myself fast
now by hand, this is not so important for me. but it may help some
other novices.
thanks again everybody.
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
On Mon, Oct 10, 2011 at 9:31 PM, William Dunlap wrote:
> The following avoids the
odes further.
am I doing something wrong? is there an alternative to split()?
sincerely,
/iaw
Ivo Welch (ivo.we...@gmail.com)
that makes all splits.
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
On Mon, Oct 10, 2011 at 11:07 AM, Joshua Wiley wrote:
> Hi Ivo,
>
> My suggestion would be to only pass lapply (or mclapply) the indices.
> That should be fast, subsetting with data table should also be fast
dear r experts---Is there a multicore equivalent of by(), just like
mclapply() is the multicore equivalent of lapply()?
if not, is there a fast way to convert a data.table into a list based
on a column that lapply and mclapply can consume?
advice appreciated...as always.
regards,
/iaw
Ivo
which I
couldn't do this AT ALL.
regards,
/iaw
Ivo Welch (ivo.we...@gmail.com)
On Sun, Oct 9, 2011 at 10:42 AM, Patrick Burns wrote:
> I think you are looking for the 'data.table'
> package.
>
> On 09/10/2011 17:31, ivo welch wrote:
>>
>> D
then loop over the main data set to supplement
it.
is there a recommended way of doing such tasks in R, either super-fast
(so that I merge many many times) or space efficient (so that I merge
once and store the results)?
sincerely,
/iaw
Ivo Welch (ivo.we...@gmail.com)
TPATH=", absolute.path.to.font.files, sep="")
stopifnot(system( paste(commandline, paste(fname, ".PDF", sep=""),
paste(fname, ".pdf", sep="") ) ) ==0 )
options( pdf.current= NULL )
}
# and a test
pdf.start("test-berasans")
plot( 1:10