Re: [R] Get list of ODBC data sources?
Jim Porzak wrote: Hello R Helpers, Before setting up a connection with RODBC, I would like to present my users with a pick list of ODBC data sources available in their environment. I may be missing something, but don't see anything in RODBC itself to return list of sources for use in select.list(). Any hints? I'm running 2.3.0 on Win XP SP2. Simply type odbcConnect() Uwe Ligges __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] Environment problems
Dear list readers, Can someone explain this behavior? Here's a toy example. Start by constructing a function tmp:

fix(tmp)

In the default editor enter this one-liner: hist(rnorm(10)), close the editor, and run environment() on the function:

environment(tmp)
<environment: R_GlobalEnv>

Open the editor and remove the last parenthesis; this will make the editor choke:

fix(tmp)
Error in edit(name, file, title, editor) : an error occurred on line 4
 use a command like
 x <- edit()
 to recover

Put the parenthesis back:

tmp <- edit()
environment(tmp)
<environment: base>
tmp()
Error in tmp() : could not find function "hist"

And as you can see, the function doesn't work anymore... Yes, I know I can manually change the environment back to .GlobalEnv, but is this the way it is supposed to work? This example is done in R.Version()$version.string [1] "Version 2.3.0 Patched (2006-04-25 r37924)" on Windows XP.

Cheers, Hans
--
* Hans Gardfjell, Ecology and Environmental Science, Umeå University, 90187 Umeå, Sweden
email: [EMAIL PROTECTED] phone: +46 907865267 mobile: +46 705984464
Re: [R] Get list of ODBC data sources?
On Mon, 22 May 2006, Jim Porzak wrote: Hello R Helpers, Before setting up a connection with RODBC, I would like to present my users with a pick list of ODBC data sources available in their environment. I may be missing something, but don't see anything in RODBC itself to return a list of sources for use in select.list(). Any hints?

No, nor is there anything in ODBC per se. You can use (in a GUI environment) the completion facilities of odbcDriverConnect.

I'm running 2.3.0 on Win XP SP2.

But the ODBC data provider could be on a remote machine running a different OS. It may be possible to ask Windows which User/System DSNs it knows about, but that would not be part of ODBC nor portable.

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self), +44 1865 272866 (PA)
1 South Parks Road, Oxford OX1 3TG, UK, Fax: +44 1865 272595
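For readers with a newer RODBC than this 2006 thread: later versions gained `odbcDataSources()`, whose names can feed `select.list()` directly. A sketch, assuming RODBC is installed and an ODBC driver manager is present:

```r
# List the registered DSNs and let the user pick one (odbcDataSources()
# postdates this thread; its return value is a named character vector,
# DSN name -> driver description)
library(RODBC)
src <- names(odbcDataSources(type = "all"))
if (interactive() && length(src) > 0) {
  dsn <- select.list(src, title = "Choose an ODBC data source")
  con <- odbcConnect(dsn)
}
```

Calling `odbcConnect()` with no arguments, as suggested above, instead pops up the Windows DSN dialog.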
Re: [R] Environment problems
Your R-patched is nearly a month old. For 2.3.1 beta:

    o edit() would default the environment of a function to .BaseEnv,
      instead of to .GlobalEnv.

so this is already changed in the current test versions of the next release of R.

On Tue, 23 May 2006, Hans Gardfjell wrote: Dear list readers, Can someone explain this behavior? [toy example snipped]

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self), +44 1865 272866 (PA)
1 South Parks Road, Oxford OX1 3TG, UK, Fax: +44 1865 272595
Re: [R] Environment problems
Hans Gardfjell wrote: Dear list readers, Can someone explain this behavior? [toy example snipped] Yes, I know I can manually change the environment back to .GlobalEnv, but is this the way it is supposed to work?

No, but it has already been fixed. Please try the beta version of R-2.3.1.

Uwe Ligges
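The manual workaround Hans mentions amounts to resetting the function's environment. A minimal sketch of the symptom and the fix, using a global variable instead of hist() so it runs without a graphics device:

```r
f <- function() myvar + 1
myvar <- 41
environment(f) <- baseenv()     # mimic the bug: environment becomes base
try(f())                        # fails: 'myvar' is not visible from base
environment(f) <- globalenv()   # the manual fix Hans describes
f()                             # 42
```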
Re: [R] nls: 'data' error? (was formula error?)
From ?nls:

data: an optional data frame in which to evaluate the variables in 'formula'.

From the printout of 'Temp' it appears 'Temp' is a matrix. Assuming 'temp' is the same as 'Temp', it is not a data frame as required, and the message is consistent with feeding eval() a numeric matrix where a data frame is expected.

On Mon, 22 May 2006, H. Paul Benton wrote: So thanks for the help. I have a matrix (AB) which in the first column has my bin numbers, -4 to +4 in 0.1 bin units, and in the second column the frequency from some data. I have plotted them and they look roughly Gaussian. So I want to fit/optimize mu, sigma, and A. So I call the nls function:

nls_AB <- nls(x ~ (A/sig*sqrt(2*pi)) * exp(-1*((x-mu)^2/(2*sig^2))),
              data=temp, start=list(A=0.1, mu=0.01, sig=0.5), trace=TRUE)
Error in eval(expr, envir, enclos) : numeric 'envir' arg not of length one

Temp looks like this:

      bin    x
 [1,] -4.0   0
 [2,] -3.9   0
 [3,] -3.8   0
 ...etc
[41,]  0.0 241
[42,]  0.1 229
[43,]  0.2 258
[44,]  0.3 305
[45,]  0.4 370
[46,]  0.5 388

So I don't get my error message. I looked at doing

fo <- x ~ (A/sig*sqrt(2*pi)) * exp(-1*((x-mu)^2/(2*sig^2)))
class(fo)
terms(fo)

and that seems to work. So if anyone has any ideas I would welcome them.

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self), +44 1865 272866 (PA)
1 South Parks Road, Oxford OX1 3TG, UK, Fax: +44 1865 272595
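A possible fix, sketched with simulated counts: put the bins in a data frame and make the response the frequency column. Note that the original call had x on both sides of the formula; the response should be the frequency and the predictor the bin. Column names, true parameter values, and start values here are illustrative:

```r
# Build a data.frame (not a matrix) with the bin centres and simulated
# frequencies from a Gaussian shape (A = 300, mu = 0, sig = 0.5)
set.seed(1)
Temp <- data.frame(bin = seq(-4, 4, by = 0.1))
Temp$x <- 300/(0.5*sqrt(2*pi)) * exp(-Temp$bin^2/(2*0.5^2)) +
          rnorm(nrow(Temp), sd = 2)

fit <- nls(x ~ (A/(sig*sqrt(2*pi))) * exp(-(bin - mu)^2/(2*sig^2)),
           data  = Temp,                       # data.frame, as ?nls requires
           start = list(A = 250, mu = 0.1, sig = 1))
coef(fit)    # estimates of A, mu, sig
```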
[R] package installation problem
(this after asking the package author) Hi, I cannot install the mvtnorm package under R-2.3.0 or R-2.3.1 beta. It installs fine under R-2.2.1. The transcript for installation under R-2.3.0 follows.

Robin-Hankins-Computer:~/scratch% R --version
R version 2.3.0 (2006-04-24)
Copyright (C) 2006 R Development Core Team
R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under the terms of the GNU General Public License. For more information about these matters, see http://www.gnu.org/copyleft/gpl.html.

Robin-Hankins-Computer:~/scratch% sudo R CMD INSTALL mvtnorm_0.7-2.tar.gz
* Installing *source* package 'mvtnorm' ...
** libs
gfortran -fPIC -fno-common -g -O2 -c mvt.f -o mvt.o
gcc -no-cpp-precomp -I/Library/Frameworks/R.framework/Resources/include -I/Library/Frameworks/R.framework/Resources/include -I/sw/include -I/usr/local/include -fPIC -fno-common -Wall -pedantic -O2 -std=gnu99 -c randomF77.c -o randomF77.o
gcc -flat_namespace -bundle -undefined suppress -L/sw/lib -L/usr/local/lib -o mvtnorm.so mvt.o randomF77.o -L/usr/local/gfortran/lib/gcc/powerpc-apple-darwin8.2.0/4.1.0 -L/usr/local/gfortran/lib -lgfortran -lgcc_s -lSystemStubs -lSystem -F/Library/Frameworks/R.framework/.. -framework R
** arch - i386
gfortran-4.0 -arch i386 -fPIC -fno-common -g -O2 -march=pentium-m -mtune=prescott -c mvt.f -o mvt.o
make: gfortran-4.0: Command not found
make: *** [mvt.o] Error 127
chmod: /Library/Frameworks/R.framework/Versions/2.3/Resources/library/mvtnorm/libs/i386/*: No such file or directory
** arch - ppc
gfortran-4.0 -arch ppc -fPIC -fno-common -g -O2 -c mvt.f -o mvt.o
make: gfortran-4.0: Command not found
make: *** [mvt.o] Error 127
chmod: /Library/Frameworks/R.framework/Versions/2.3/Resources/library/mvtnorm/libs/ppc/*: No such file or directory
ERROR: compilation failed for package 'mvtnorm'
** Removing '/Library/Frameworks/R.framework/Versions/2.3/Resources/library/mvtnorm'
** Restoring previous '/Library/Frameworks/R.framework/Versions/2.3/Resources/library/mvtnorm'
Robin-Hankins-Computer:~/scratch%

anyone?

--
Robin Hankin, Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
tel 023-8059-7743
Re: [R] Ordinal Independent Variables
On Mon, 22 May 2006, Frank E Harrell Jr wrote: Rick Bilonick wrote: When I run lrm from the Design package, I get a warning about contrasts when I include an ordinal variable:

Warning message: Variable ordfac is an ordered factor. You should set options(contrasts=c("contr.treatment","contr.treatment")) or Design will not work properly. in: Design(eval(m, sys.parent()))

I don't get this message if I use glm with family=binomial. It produces linear and quadratic contrasts. If it's improper to do this for an ordinal variable, why does glm not balk? Rick B.

Standard regression methods don't make good use of ordinal predictors and just have to treat them as categorical. Design is a bit picky about this. If the predictor has numeric scores for the categories, you can get a test of adequacy of the scores (with k-2 d.f. for k categories) by using scored(predictor) in the formula. Or just create a factor() variable to hand to Design.

Contrasts in S/R are used to set the coding of factors, and model.matrix() does IMO 'make good use of ordinal predictors'. I don't know what is meant by 'Standard regression methods': the charitable interpretation is that these are the overly restrictive methods used by certain statistical packages. (I first learnt of the use of polynomial codings for ordinal factors in the late 1970s, when I first learnt anything about ANOVA, so to me they are 'standard'.) So are you saying this is a design deficiency in package Design, or that the authors of S ca 1991 were wrong to allow arbitrary contrasts?

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self), +44 1865 272866 (PA)
1 South Parks Road, Oxford OX1 3TG, UK, Fax: +44 1865 272595
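The coding difference the warning is about can be seen directly in the model matrix. A small sketch: R's default for ordered factors is contr.poly, which produces the linear/quadratic columns glm() reported, while Design's requested setting switches to dummy (treatment) coding:

```r
f <- ordered(rep(c("low", "mid", "high"), 2),
             levels = c("low", "mid", "high"))
colnames(model.matrix(~ f))   # "(Intercept)" "f.L" "f.Q" - polynomial coding

old <- options(contrasts = c("contr.treatment", "contr.treatment"))
colnames(model.matrix(~ f))   # dummy coding, as the Design warning requests
options(old)                  # restore the defaults
```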
Re: [R] checking package dependencies
On Saturday 20 May 2006 20:14, Uwe Ligges wrote: Richard M. Heiberger wrote: [...] 1. cygwin is not supported. 2. "Access is denied" suggests this is not an R problem but a problem of your (OS/cygwin?) setup. Uwe Ligges

I also thank you for the answer. The problem seems to have vanished (and I haven't done anything in particular). When it didn't work, I remember I checked the permissions and everything was OK. I really have no idea what went wrong, but now it works. Best, Adrian
--
Adrian DUSA, Romanian Social Data Archive
1, Schitu Magureanu Bd, 050025 Bucharest sector 5, Romania
Tel./Fax: +40 21 3126618 / +40 21 3120210 int.101
[R] a question about gradient in optim
Hi, I am using optim to estimate the parameters of a smooth transition autoregressive model. I wrote a function to return the gradient for each observation. So, for k parameters and n observations my function returns a (n x k) matrix. The gr argument of optim expects a (k x 1) vector. Now, what is the correct way of passing my gradient matrix to optim? Mehmet
Re: [R] a question about gradient in optim
On Tue, 23 May 2006, Mehmet Balcilar wrote: Hi, I am using optim to estimate the parameters of a smooth transition autoregressive model. I wrote a function to return the gradient for each observation. So, for k parameters and n observations my function returns a (n x k) matrix. The gr argument of optim expects a (k x 1) vector. Now, what is the correct way of passing my gradient matrix to optim?

gr is documented to be a function, not a vector! Since you are optimizing a numeric function such as the log-likelihood, the derivative is a vector, not a matrix, and your gr function should return a vector. We have far too few details, but for example did you forget to sum over observations?

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self), +44 1865 272866 (PA)
1 South Parks Road, Oxford OX1 3TG, UK, Fax: +44 1865 272595
Re: [R] package installation problem
Hello everyone. After some very welcome offline advice, I now have the mvtnorm package working on R-2.3.0; here is my solution for Mac OS X. There were two problems. First, make could not find gfortran-4.0. To solve this, add the relevant directory to PATH; for me this is

PATH=$PATH:/usr/local/gcc4.0/bin/

Second, the file for -lgfortran can't be found. To solve this, add

FLIBS=-L/usr/local/gcc4.0/lib

to the Makevars file.

best wishes
rksh

On 23 May 2006, at 08:25, Robin Hankin wrote: (this after asking the package author) Hi, I cannot install the mvtnorm package under R-2.3.0 or R-2.3.1 beta. It installs fine under R-2.2.1. [installation transcript snipped]

--
Robin Hankin, Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
tel 023-8059-7743
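The two fixes above, as shell commands. The gcc 4.0 paths are the poster's; adjust them to wherever your gfortran lives. Note that a per-user Makevars can live in ~/.R/Makevars:

```shell
# Make gfortran-4.0 findable (poster's install location; adjust as needed)
export PATH=$PATH:/usr/local/gcc4.0/bin

# Tell R where libgfortran is; ~/.R/Makevars is the per-user location
mkdir -p ~/.R
echo 'FLIBS=-L/usr/local/gcc4.0/lib' >> ~/.R/Makevars
```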
Re: [R] a question about gradient in optim
I guess I just need to sum over observations. Here are my functions:

# function to compute the gradient of the transition-function parameters
# of the two-regime LSTAR model
dlogistic <- function(theta, x, z, g.scale) {
  n <- nrow(x)
  k <- length(theta)
  k1 <- (k-2)/2 + 2
  G <- logistic(z, g.scale, exp(theta[1]), theta[2])
  dgam <- exp(theta[1]) * (x %*% theta[3:k1] - x %*% theta[(k1+1):k]) *
          (G*(1-G)) * (z - theta[2])/g.scale
  dc <- (x %*% theta[(k1+1):k] - x %*% theta[3:k1]) * (G*(1-G)) *
        (exp(theta[1])/g.scale) * rep(1, n)
  cbind(dgam, dc)
}

# function to compute the value of the logistic transition function
logistic <- function(z, g.scale, gam, c) {
  1/(1 + exp(-(gam/g.scale) * (z - c)))
}

# gradlstar: function to compute the gradient matrix of parameter
# estimates for the LSTAR model
gradlstar <- function(theta, x, z, g.scale) {
  dgamdc <- dlogistic(theta, x, z, g.scale)
  G <- logistic(z, g.scale, exp(theta[1]), theta[2])
  dphi_1 <- x * (1-G)
  dphi_2 <- x * G
  cbind(dgamdc, dphi_1, dphi_2)
}

I guess apply(grad, 2, sum) will do what I want. Thanks.

Prof Brian Ripley wrote: [...] gr is documented to be a function, not a vector! Since you are optimizing a numeric function such as the log-likelihood, the derivative is a vector, not a matrix, and your gr function should return a vector. We have far too few details, but for example did you forget to sum over observations?
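colSums() does the same reduction as apply(grad, 2, sum). A toy sketch (ordinary least squares, not the LSTAR model above) of an n x k per-observation gradient matrix collapsed to the length-k vector optim() expects:

```r
# Fit y = a + b*x by minimizing the residual sum of squares with optim(),
# supplying an analytic gradient summed over observations
set.seed(1)
x <- runif(50)
y <- 2 + 3 * x + rnorm(50, sd = 0.1)

fn <- function(theta) sum((y - theta[1] - theta[2] * x)^2)
gr <- function(theta) {
  r <- y - theta[1] - theta[2] * x
  g <- cbind(-2 * r, -2 * r * x)  # n x k: one gradient row per observation
  colSums(g)                      # optim() wants a length-k vector
}

fit <- optim(c(0, 0), fn, gr, method = "BFGS")
fit$par   # close to the true c(2, 3)
```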
Re: [R] How can you buy R?
G'day Duncan, DM == Duncan Murdoch [EMAIL PROTECTED] writes:

DM On 5/22/2006 3:55 AM, Berwin A Turlach wrote: I agree with you on this. Probably I was too terse in my writing and produced misunderstandings. I never intended to say anything about the rights that the user has with regard to P alone. My comments were directed towards the linked product P+Q. In particular, it is not clear to me whether one can execute such a product without violating copyright laws.

DM The GPL is quite explicit on this: as Deepayan said, it DM confers rights to copy, modify and redistribute P. [..]

Yes, so he did. But I refer above to copyright laws, not the GPL. It is not clear to me under which rules/licence/laws the combined product falls and whether you have the right to execute it. And in some places, including Australia, copyright laws are getting really strange and restrictive, so it seems.

DM Now, I suppose you might argue that executing P+Q makes a copy DM of it in memory, [...]

Yes, I read arguments along these lines on gnu.misc.discuss. Personally, I wouldn't use them because I know too little about these processes (and what happens under static linking, dynamic linking and so on) to make any statements on such issues. From following such discussions, I only got the impression that these points seem to be relevant in defining when (under copyright law) a derivative work is produced.

DM but I think countries that have modernized their copyright DM laws recognize that this is something you have a right to do DM with a legally acquired copy.

I doubt it, I don't think that the lawyers understand these technicalities either. ;-)) Cheers, Berwin
[R] File scanning begining and mid row
Hi, I am trying to read and restructure some ASCII data. Each row of data consists of 543 ASCII characters, the values of which are integers. I would like to read only the following data into a data frame: (a) the first 8 digits, which represent one variable; and (b) the last 9 variables, whose characters begin at character position 508 in each row. Is this possible in R? If so, how? Many thanks, S.
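One way is read.fwf(), where negative widths skip characters. This sketch assumes the nine trailing variables are 4 characters each (543 - 507 = 36 = 9 x 4); the file name and field sizes are illustrative, so adjust the widths to the real layout:

```r
# First 8 characters -> variable 1; skip 499 characters (positions 9-507);
# then nine 4-character fields (positions 508-543)
df <- read.fwf("mydata.txt",
               widths = c(8, -499, rep(4, 9)))
```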
Re: [R] How can you buy R?
G'day Deepayan, DS == Deepayan Sarkar [EMAIL PROTECTED] writes: DS On 5/22/06, Berwin A Turlach [EMAIL PROTECTED] wrote: DS [...] [...] Should perhaps better be formulated as: My understanding was that in that moment a product was created that would have to be wholly under the GPL, so the person who did the linking was violating the GPL and it is not clear whether anyone is allowed to use the linked product. DS I think you are still missing the point. [...] Quite possible; as I said early on, IANAL. And these discussions really start to remind me too much of those that I read in gnu.misc.discuss. Since I never participated in them, I don't see why I should here. And that group is probably a better forum to discuss all these issues. If some of the guys who always tried to argue that they found a way to circumvent the GPL are still hanging around, I am sure they are happy if you come along and confirm that according to your understanding of the GPL everything they are doing is o.k. :) DS The act of creating a derivative work is NOT governed by the DS GPL, Yes, as a part of the GPL that I quoted earlier states. It is the (local?) copyright law which defines when a derivative work is created. The GPL just stipulates under which licence this derivative work has to be. DS so it cannot possibly by itself violate the GPL. Fair enough. There are probably several people in gnu.misc.discuss who would be happy to hear this. :) DS The question of violation only applies when the creator of DS this derivative work wishes to _distribute_ it. This is like DS me writing a book that no one else ever reads; it doesn't DS matter if I have plagiarized huge parts of it. This point is DS not as academic as you might think. Hey, I work in an academic environment, so it is hard to imagine that I would view any point as being too academic.
:) DS It is well known that Google uses a customized version of DS Linux for their servers; however, they do not distribute this DS customized version, and hence are under no obligation to DS provide the changes (and they do not, in fact). This is NOT a DS violation of the GPL. I agree and would have never claimed anything different. You are stating the obvious here. [...] If one scenario is not on, I don't see how the other one could be acceptable either. Except that in the first scenario there is a clear intent of circumventing the GPL. [...] DS That's your choice, but the situations are not symmetric, and DS quite deliberately so. That's why I studied mathematics and not law. I readily accept that there is some logic in law, it is just that I never got it. For me, if I make someone else link a GPL product P with a non-GPL product Q, then this is the same, whether I was the provider of P or Q. DS The FSF's plan was not to produce a completely independent and DS fully functional 'GNU system' at once (which would be DS unrealistic), but rather produce replacements of UNIX tools DS one by one. It was entirely necessary to allow these new DS versions to operate within the older, proprietary system. Wasn't your argument above, in response to the scenario that I was describing, that it is not necessary to explicitly allow this because a user can never violate the GPL? As long as you operate on a proprietary system and are not distributing anything, why would there all of a sudden be a problem? DS In fact, GCC was not the first piece of software released DS under the GPL, My guess is that the first piece of software released under the GPL was Emacs, but it is quite likely that I will be corrected on this point. DS and until then the only way to use GPL software was to compile DS them using a non-free compiler. I know, I have compiled a lot of GPL software with non-GCC compilers; and I have been using GCC when it was still standing for GNU C Compiler.
But thanks for the history lesson anyhow. :) Cheers, Berwin
[R] Regression through the origin
Dear R-users: Sorry for the naivety of my question, but I have been trying the R-help archives and the CRAN website without any success. I am trying to perform a regression through the origin (without intercept) and my main concern is its evaluative statistics. It is clear to me that R squared does not make sense if you do not have an intercept in your model, and how big an assumption it is that the response is zero when all the predictors are zero. If I still want to perform a regression under those conditions, does R have any options to evaluate the model adequacy correctly? Thanks in advance, Leonardo Trujillo.
[R] multiple plots with par mfg
Hi, I'm trying to add points to 2 plots on the fly, using par(mfg=vector) to switch between them. However, the appropriate scales aren't switched when changing from one plot to another, e.g.

par(mfcol=c(2,1))
plot(1, 1, col="blue")     # blue plot
plot(1.2, 1.2, col="red")  # red plot
points(1.1, 1.1)           # appears to bottom left of red point
par(mfg=c(1,1))            # switch plots
points(1.1, 1.1)           # should appear at top of blue point, but appears as on red plot

(version: R 2.2.1 Patched, 2006-03-02, svn rev 37488, powerpc-apple-darwin7.9.0)

Is this a bug? If not, can anyone suggest a way of appending to 2 separate plots on the fly. Thanks, Yan Wong, Leeds
Re: [R] multiple plots with par mfg
On Tue, 23 May 2006, Yan Wong wrote: Hi, I'm trying to add points to 2 plots on the fly, using par(mfg=vector) to switch between them. However, the appropriate scales aren't switched when changing from one plot to another. [example snipped] Is this a bug? If not, can anyone suggest a way of appending to 2 separate plots on the fly.

No, it is user error. par(mfg=) specifies where the next figure will be drawn, and points() does not draw a figure but adds to one. As the help page says:

'mfg' A numerical vector of the form 'c(i, j)' where 'i' and 'j' indicate which figure in an array of figures is to be drawn next (if setting) or is being drawn (if enquiring).

You need to use screen() or layout() to switch back to an existing plot.

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self), +44 1865 272866 (PA)
1 South Parks Road, Oxford OX1 3TG, UK, Fax: +44 1865 272595
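As suggested, split.screen() keeps each plot's coordinate system, so switching back works the way par(mfg=) does not. A sketch of the two-plot example redone with screens; screen(n, new = FALSE) switches without erasing:

```r
# Two stacked plots, appending points to each in turn
split.screen(c(2, 1))
screen(1); plot(1, 1, col = "blue")       # blue plot
screen(2); plot(1.2, 1.2, col = "red")    # red plot
points(1.1, 1.1)                          # lands on the red plot
screen(1, new = FALSE)                    # back to the blue plot, keep it
points(1.1, 1.1)                          # now uses the blue plot's scale
close.screen(all.screens = TRUE)
```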
Re: [R] Regression through the origin
Hi, a first step to answer your question: regression through the origin (= without intercept) can be done by explicitly stating '-1' in the 'formula' argument. If you check the help page of 'lm', for example with help(lm), it says in the 'Details' section:

A formula has an implied intercept term. To remove this use either 'y ~ x - 1' or 'y ~ 0 + x'. See 'formula' for more details of allowed formulae.

An example is given at the end of the help page:

## Annette Dobson (1990) An Introduction to Generalized Linear Models.
## Page 9: Plant Weight Data.
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- gl(2, 10, 20, labels=c("Ctl","Trt"))
weight <- c(ctl, trt)
anova(lm.D9 <- lm(weight ~ group))
summary(lm.D90 <- lm(weight ~ group - 1))  # omitting intercept

I hope this helps a bit. Best, Roland

-----Original Message-----
From: Trujillo L.
Sent: Tuesday, May 23, 2006 12:54 PM
To: r-help@stat.math.ethz.ch
Subject: [R] Regression through the origin
[original message snipped]
Re: [R] Regression through the origin
Trujillo L. wrote: Sorry for the naivety of my question, but I have been trying the R-help archives and the CRAN website without any success. I am trying to perform a regression through the origin (without intercept) and my main concern is about its evaluative statistics. It is clear to me that R squared does not make sense if you do not have an intercept in your model There are many different definitions of R². Most of them are equivalent *only* for the simple linear regression model, y = a + b · x + ε. R (the program) uses a different formula for calculating R² when you fit a regression through the origin than for a simple linear regression, and the definition used *does* make sense. (Some other statistics software uses the same definition in the two cases, which makes the resulting statistic meaningless in the case of regression through the origin.) The (IMHO) most sensible way to interpret R² is as the proportional reduction in variation when fitting a more complex model over fitting a simpler model. See this excellent paper for a thorough discussion: Anderson-Sprecher R. (1994). ‘Model comparisons and R²’. The American Statistician, volume 48, number 2, pages 113–117. And for a discussion of the different definitions of R², especially when using transformed variables, see: Kvålseth T.O. (1985). ‘Cautionary note about R²’. The American Statistician, volume 39, number 4, pages 279–285. If I still want to perform a regression under those conditions, does R have any options to evaluate the model adequacy correctly? Try some diagnostic plots: x = rnorm(100) y = 1 + x + rnorm(100) l = lm(y ~ x - 1) # Note: wrong model (we need an intercept)! summary(l) par(mfrow=c(2,2)) plot(l) As you can see, the fit is not very good. You might also want to try: plot(y ~ x) abline(l, col="red") -- Karl Ove Hufthammer __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
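Karl's point about R using a different total sum of squares when there is no intercept can be checked numerically. The sketch below reproduces both R² values by hand, following the formula documented in ?summary.lm (centred about mean(y) with an intercept, about zero without one); the data are simulated as in his example.

```r
# Verifying how summary.lm() computes R^2 with and without an intercept.
set.seed(1)
x <- rnorm(100)
y <- 1 + x + rnorm(100)

fit1 <- lm(y ~ x)       # with intercept
fit0 <- lm(y ~ x - 1)   # through the origin

# With an intercept the total sum of squares is about mean(y);
# without one it is the raw sum of squares about zero.
r2_with    <- 1 - sum(residuals(fit1)^2) / sum((y - mean(y))^2)
r2_without <- 1 - sum(residuals(fit0)^2) / sum(y^2)

all.equal(r2_with,    summary(fit1)$r.squared)   # TRUE
all.equal(r2_without, summary(fit0)$r.squared)   # TRUE
```

This is why a no-intercept fit can report a deceptively large R²: the baseline model it is compared against predicts zero everywhere, not the mean of y.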
Re: [R] Ordinal Independent Variables
Prof Brian Ripley wrote: On Mon, 22 May 2006, Frank E Harrell Jr wrote: Rick Bilonick wrote: When I run lrm from the Design package, I get a warning about contrasts when I include an ordinal variable: Warning message: Variable ordfac is an ordered factor. You should set options(contrasts=c("contr.treatment","contr.treatment")) or Design will not work properly. in: Design(eval(m, sys.parent())) I don't get this message if I use glm with family=binomial. It produces linear and quadratic contrasts. If it's improper to do this for an ordinal variable, why does glm not balk? Rick B. Standard regression methods don't make good use of ordinal predictors and just have to treat them as categorical. Design is a bit picky about this. If the predictor has numeric scores for the categories, you can get a test of adequacy of the scores (with k-2 d.f. for k categories) by using scored(predictor) in the formula. Or just create a factor() variable to hand to Design. Contrasts in S/R are used to set the coding of factors, and model.matrix() does IMO 'make good use of ordinal predictors'. I don't know what is meant by 'Standard regression methods': the charitable interpretation is that these are the overly restrictive methods used by certain statistical packages. (I first learnt of the use of polynomial codings for ordinal factors in the late 1970s, when I first learnt anything about ANOVA, so to me they are 'standard'.) So are you saying this is a design deficiency in package Design, or that the authors of S ca 1991 were wrong to allow arbitrary contrasts? Brian, What I meant was that, unlike the case of ordinal response variables, where multiple intercepts in logistic models do not cost degrees of freedom because the ordering constraint is fully utilized, ordinal predictors require k-1 degrees of freedom for k levels using any standard contrast. Special methods (e.g.
pool adjacent violators to impose a monotonicity constraint) would have to be used to get a lot out of the ordinal nature of the predictor. There's nothing wrong with allowing arbitrary contrasts; more progress has been made in statistics for ordinal responses than ordinal predictors. Frank -- Frank E Harrell Jr Professor and Chair School of Medicine Department of Biostatistics Vanderbilt University __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
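As background to the contrasts under discussion: the polynomial coding R applies to an ordered factor by default can be inspected directly. A small illustrative sketch (the factor here is made up):

```r
# R's default coding for ordered factors is contr.poly: orthogonal
# polynomial contrasts, using k - 1 columns for k levels
# (linear .L and quadratic .Q when k = 3).
f <- ordered(c("low", "mid", "high"), levels = c("low", "mid", "high"))
contrasts(f)        # 3 x 2 matrix with columns .L and .Q
model.matrix(~ f)   # how those columns enter a model fit
```

This shows Frank's point concretely: even the polynomial coding spends k - 1 degrees of freedom on a k-level predictor, exactly as an unordered factor would.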
Re: [R] multiple plots with par mfg
On 23 May 2006, at 12:48, Prof Brian Ripley wrote: On Tue, 23 May 2006, Yan Wong wrote: if not, can anyone suggest a way of appending to 2 separate plots on the fly. No, it is user error. par(mfg=) specifies where the next figure will be drawn, and points() does not draw a figure but adds to one. As the help page says: 'mfg' A numerical vector of the form 'c(i, j)' where 'i' and 'j' indicate which figure in an array of figures is to be drawn next (if setting) or is being drawn (if enquiring). OK. I didn't appreciate the distinction between drawing and adding. You need to use screen() or layout() to switch back to an existing plot. Thanks, but the help page for screen says: The behavior associated with returning to a screen to add to an existing plot is unpredictable and may result in problems that are not readily visible. I assume this to mean that I shouldn't do it using screen(). I can't find any description of how to add points to several different plots generated after a layout() call. Is there a way? Cheers Yan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] error message
Hello, we work with nlme, in R version 2.2.1. We ran the following statements for a growth function: formula(L.gd) L.gd ~ Largo | ID Rich <- function(Largo, Linf, K, t0, m) Linf * (1 - exp(-K * (edad - t0)))^(1/(1 - m)) Rich.nlme <- nlme(Largo ~ Rich(edad, Linf, K, t0, m), + data = L.gd, + fixed = Linf + K + t0 + m ~ 1, + start = list(fixed = c(800, 0.3, -0.5, 0.3))) And the program gave us this error message: Error: Singularity in backsolve at level 0, block 1 In addition: Warning messages: 1: NaNs produced in: log(x) 2: NaNs produced in: log(x) Does anyone know what this means? We would appreciate it if you could point us to a reference. Lic. Gabriela Escati Peñaloza Biología y Manejo de Recursos Acuáticos Centro Nacional Patagónico (CENPAT). CONICET Bvd. Brown s/nº. (U9120ACV) Pto. Madryn Chubut Argentina Tel: 54-2965/451301/451024/451375/45401 (Int:277) Fax: 54-29657451543 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] flatten a list to a true list
Hi, I want to flatten a list: flatten(list(a=1,b=2), list(c=3,d=4)) should give L = list(a=1,b=2,c=3,d=4), which is L $a [1] 1 $b [1] 2 $c [1] 3 $d [1] 4 L[1] $a [1] 1 L[[1]] [1] 1 What I have used so far is M <- unlist(c(list(a=1,b=2), list(c=3,d=4))), but this gives M a b c d 1 2 3 4 M[1] a 1 M[[1]] [1] 1 This shows that L has named components, while M is a named vector. What would functions L <- F1(M) and M <- F2(L) look like? More generally: flatten a tree down to level n, where level 0 is the root. I searched the net for "flatten", but it turned up only unlist. Thanks for any help Christian -- Dr. Christian W. Hoffmann, Swiss Federal Research Institute WSL Mathematics + Statistical Computing Zuercherstrasse 111 CH-8903 Birmensdorf, Switzerland Tel +41-44-7392-277 (office) -111 (exchange) Fax +41-44-7392-215 (fax) [EMAIL PROTECTED] http://www.wsl.ch/staff/christian.hoffmann International Conference 5.-7.6.2006 Ekaterinburg Russia Climate changes and their impact on boreal and temperate forests http://ecoinf.uran.ru/conference/ __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] About stringdot (ksvm)
Hello R Helpers, I want to use a string kernel with ksvm. Is there an error in my operation? 1) dataset test data aaa 1 abb -1 bbb 1 2) operation library(kernlab) x <- c("aaa","abb","bbb") x [1] "aaa" "abb" "bbb" class(x) [1] "character" xl <- list(x) y <- c(1,-1,1) y [1] 1 -1 1 z <- list(x,y) z [[1]] [1] "aaa" "abb" "bbb" [[2]] [1] 1 -1 1 s.svm <- ksvm(xl, kernel="stringdot", kpar=list(lambda=0.5)) Error in as.double.default(t(x)) : (list) object cannot be coerced to 'double' s.svm <- ksvm(z, kernel="stringdot", kpar=list(lambda=0.5)) Error in as.double.default(t(x)) : (list) object cannot be coerced to 'double' --- Masanori Higashihara, JAIST: Japan Advanced Institute of Science and Technology, JAPAN. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] flatten a list to a true list
How about using c() ? c(list(a=1,b=2), list(c=3,d=4)) -> L L $a [1] 1 $b [1] 2 $c [1] 3 $d [1] 4 is.list(L) [1] TRUE How true of a list do you want? (-; Hth, ingmar From: Christian Hoffmann [EMAIL PROTECTED] Date: Tue, 23 May 2006 14:11:01 +0200 To: r-help@stat.math.ethz.ch Subject: [R] flatten a list to a true list flatten(list(a=1,b=2), list(c=3,d=4)) -> L __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
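For the more general "flatten a tree down to level n" part of the question: unlist(recursive = FALSE) removes exactly one level of nesting, so a depth-limited flatten can be sketched as below. Note that flatten_to is a made-up helper name, not a base R function:

```r
# Sketch: flatten a nested list by a given number of levels.
# Each pass of unlist(recursive = FALSE) strips exactly one level
# of nesting while keeping the result a list and preserving names.
flatten_to <- function(x, depth = 1) {
  if (depth <= 0 || !is.list(x)) return(x)
  flatten_to(unlist(x, recursive = FALSE), depth - 1)
}

L <- flatten_to(list(list(a = 1, b = 2), list(c = 3, d = 4)), 1)
str(L)   # a true list with components $a, $b, $c, $d
```

With depth = 2 the same call would collapse all the way down to a named numeric vector, i.e. the behaviour of plain unlist().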
[R] after identify labels dissapear XP
Greetings: Using 'identify' to label points on a plot works just fine. However, when saving as a 'metafile' or using the clipboard, the labels disappear. I believe it's an SDI issue. I am running the latest R with the latest Tinn-R under XP, up to date. Anything I can do besides going back to MDI :-)? Thanks, Mihai Nica, ABD Jackson State University ITT Tech Instructor 170 East Griffith Street G5 Jackson, MS 39201 601 914 0361 The least of learning is done in the classrooms. - Thomas Merton __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] after identify labels dissapear XP
On 5/23/2006 8:49 AM, Mihai Nica wrote: Greetings: Using 'identify' to label points on a plot works just fine. However, when saving as a 'metafile' or using the clipboard, the labels disappear. I believe it's an SDI issue. I am running the latest R with the latest Tinn-R under XP, up to date. Anything I can do besides going back to MDI :-)? I see this inconsistently in the current R-devel and R-patched. It's easiest to see with history turned on. For example, x <- rnorm(100) y <- rnorm(100) plot(x,y) # make sure history recording is turned on identify(x,y,1:100) # Mark a few points, then stop plot(y,x) identify(y,x,1:100) # Mark some more points # Now hit PgUp to go to the previous page: sometimes the identifiers show up, sometimes not. Hitting PgDn sometimes loses the latest ones. I haven't spotted what the pattern is in what causes the losses yet. Mihai, can you put together a recipe that *always* loses the identifiers? Duncan Murdoch Thanks, Mihai Nica, ABD Jackson State University ITT Tech Instructor 170 East Griffith Street G5 Jackson, MS 39201 601 914 0361 The least of learning is done in the classrooms. - Thomas Merton __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] iraq statistics - OT
On 5/19/06, Gabor Grothendieck [EMAIL PROTECTED] wrote: I came across this one: http://www.nysun.com/article/32787 which says that the violent death rate in Iraq (which presumably includes violent deaths from the war) is lower than the violent death rate in major American cities. Does anyone have any insights from statistics on how to interpret this? Since I posted this a number of people have responded online and offline and have included a number of links. I am providing the links from the offline responders. I was hoping to summarize all this in an objective fashion focused on statistics but there is enough information here that I was concerned I might not do a thorough job so I am simply providing the links plus some short comments to summarize the links that did not already appear on the list. The original idea of making this comparison was apparently due to US Rep. Steve King and the first link gives his rebuttal to critics who made similar comments to those shown on the list so far. His main points are that the data does not come from him, he used published figures on icasualty.com (and for US the sources cited in the link) and that his original comments were in the context of civilian safety and so it would not be appropriate to include police which is why he excluded them (I had originally thought it included all violent deaths but that is not the case). Since my original post was about vacationing in Iraq I would think that excluding police would also apply to that too. A number of people on the list did point out that defining violent death was key. 
http://www.opinionjournal.com/best/?id=110008402 Part of the previous link is in response to the following link: http://www.opinionjournal.com/best/?id=110008392#iraq Original source of the iraq data: http://icasualties.org/oif/ Story of one person who tried to pursue the numbers: http://zenbeatnik.blogspot.com/2006/05/where-numbers-lead.html __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] iraq statistics - OT
Note that the reference to icasualty.com should be icasualty.org. On 5/23/06, Gabor Grothendieck [EMAIL PROTECTED] wrote: On 5/19/06, Gabor Grothendieck [EMAIL PROTECTED] wrote: I came across this one: http://www.nysun.com/article/32787 which says that the violent death rate in Iraq (which presumably includes violent deaths from the war) is lower than the violent death rate in major American cities. Does anyone have any insights from statistics on how to interpret this? Since I posted this a number of people have responded online and offline and have included a number of links. I am providing the links from the offline responders. I was hoping to summarize all this in an objective fashion focused on statistics but there is enough information here that I was concerned I might not do a thorough job so I am simply providing the links plus some short comments to summarize the links that did not already appear on the list. The original idea of making this comparison was apparently due to US Rep. Steve King and the first link gives his rebuttal to critics who made similar comments to those shown on the list so far. His main points are that the data does not come from him, he used published figures on icasualty.com (and for US the sources cited in the link) and that his original comments were in the context of civilian safety and so it would not be appropriate to include police which is why he excluded them (I had originally thought it included all violent deaths but that is not the case). Since my original post was about vacationing in Iraq I would think that excluding police would also apply to that too. A number of people on the list did point out that defining violent death was key. 
http://www.opinionjournal.com/best/?id=110008402 Part of the previous link is in response to the following link: http://www.opinionjournal.com/best/?id=110008392#iraq Original source of the iraq data: http://icasualties.org/oif/ Story of one person who tried to pursue the numbers: http://zenbeatnik.blogspot.com/2006/05/where-numbers-lead.html __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] lattice package - question on plotting lines
Thanks Tony. That solved a huge part of what I was trying to do. I was going through the trellis documentation (Bell Labs) too. Is there any other documentation/manual that you would recommend? many thanks! Tim Tony [EMAIL PROTECTED] wrote: On Mon, 2006-05-22 at 18:36 -0700, Tim Smith wrote: Hi all, I was trying to plot a graph using the following data: method percent accuracy group A1 4 0.8529 cns A1 10 0.8412 cns A1 15 0.8235 cns A2 4 0.9353 cns A2 10 0.9412 cns A2 15 0.9471 cns A1 4 0.8323 col A1 10 0.8452 col A1 15 0.8484 col A2 4 0.8839 col A2 10 0.8677 col A2 15 0.8678 col # The code I'm using to generate the graphs is: ### code : xyplot(accuracy ~ percent|group, data = k, groups = method, allow.multiple = TRUE, scales = "same", type = "o", xlab = "% of genes", ylab = "Accuracy", main = "Accuracy", aspect = 2/3, panel = function(...) { panel.superpose(...) } ) I have tried to use the 'get' and 'set' functions of par(), but can't figure it out. I have two questions: i) How can I set it so that the line plotted for A1 in all the plots is 'solid', and the one for A2 is 'dotted' for both the groups (cns and col)? ii) How can I set the abline feature to show a horizontal line at different points on the y-axis for each of the two graphs? Any help would be highly appreciated. many thanks. - Sneak preview the all-new Yahoo.com. It's not radically different. Just radically better. [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html Does this get you closer? xyplot(accuracy ~ percent|group, data = k, groups = method, allow.multiple = TRUE, scales = "same", type = "o", par.settings = list(superpose.line = list(lty=1:2)), # added xlab = "% of genes", ylab = "Accuracy", main = "Accuracy", aspect = 2/3, panel = function(...) { panel.superpose(...)
} ) -- Tony - [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
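For question (ii) (a horizontal reference line at a different height in each panel), one approach is to index a vector of cutoffs by the panel currently being drawn. A sketch, with made-up cutoff values in hvals and the data rebuilt from the post:

```r
library(lattice)

# Rebuild a small version of the posted data frame
k <- data.frame(
  method   = rep(c("A1", "A2"), each = 3, times = 2),
  percent  = rep(c(4, 10, 15), 4),
  accuracy = c(0.8529, 0.8412, 0.8235, 0.9353, 0.9412, 0.9471,
               0.8323, 0.8452, 0.8484, 0.8839, 0.8677, 0.8678),
  group    = rep(c("cns", "col"), each = 6)
)

hvals <- c(0.85, 0.90)  # hypothetical per-panel cutoffs

p <- xyplot(accuracy ~ percent | group, data = k, groups = method,
            type = "o",
            par.settings = list(superpose.line = list(lty = 1:2)),
            panel = function(...) {
              panel.superpose(...)
              # packet.number() identifies which panel's data we are drawing
              panel.abline(h = hvals[packet.number()], lty = 3)
            })
print(p)
```

packet.number() is provided by current versions of lattice; with older versions a similar effect can be had by matching on the panel's subscripts instead.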
[R] Distribution Identification/Significance testing
Hi, What are the methods for identifying the right distribution for a dataset? As far as I know, a significance test (p < alpha) or minimizing the squared error are two criteria for deciding. What are the other alternatives - confidence intervals? If any, how can I accomplish them in R? Thanks in advance. Sachin - [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
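One concrete route in R is maximum-likelihood fitting of candidate distributions with MASS::fitdistr(), followed by a goodness-of-fit check. A minimal sketch (the gamma data and the candidate family are made up for illustration; note the ks.test caveat in the comments):

```r
# Sketch: fit a candidate distribution by maximum likelihood with
# MASS::fitdistr(), then screen the fit with a goodness-of-fit test.
library(MASS)

set.seed(42)
x <- rgamma(200, shape = 3, rate = 2)   # made-up data for illustration

fit <- fitdistr(x, "gamma")
fit$estimate    # ML estimates of shape and rate

# Caveat: ks.test() p-values are optimistic when the parameters were
# estimated from the same data, so treat this only as a rough screen.
ks.test(x, "pgamma",
        shape = fit$estimate["shape"], rate = fit$estimate["rate"])
```

Repeating the fit for several candidate families and comparing log-likelihoods (fit$loglik) is one simple way to rank them; Q-Q plots against each candidate are a useful visual complement.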
Re: [R] package installation problem
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Robin, Thanks for the tip. I already had gcc in my path, but also had to add the -lgfortran line in Makevars. Worked as you said. Cheers Robert On May 23, 2006, at 1:57 AM, Robin Hankin wrote: Hello everyone. After some very welcome offline advice, I now have the mvtnorm package working on R-2.3.0; here is my solution for MacOSX. There were two problems: first, make could not find gcc-4.0. To solve this, add the relevant directory to PATH. For me, this is PATH=$PATH:/usr/local/gcc4.0/bin/ Now the second problem is that the file for -lgfortran can't be found. To solve this, add FLIBS=-L/usr/local/gcc4.0/lib to the Makevars file. best wishes rksh On 23 May 2006, at 08:25, Robin Hankin wrote: (this after asking the package author) Hi I cannot install the rmvnorm package under R-2.3.0, or R-2.3.1 beta. It installs fine under R-2.2.1. transcript for installation under R-2.3.0 follows. Robin-Hankins-Computer:~/scratch% R --version R version 2.3.0 (2006-04-24) Copyright (C) 2006 R Development Core Team R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under the terms of the GNU General Public License. For more information about these matters, see http://www.gnu.org/copyleft/gpl.html. Robin-Hankins-Computer:~/scratch% sudo R CMD INSTALL mvtnorm_0.7-2.tar.gz * Installing *source* package 'mvtnorm' ... ** libs gfortran -fPIC -fno-common -g -O2 -c mvt.f -o mvt.o gcc -no-cpp-precomp -I/Library/Frameworks/R.framework/Resources/ include -I/Library/Frameworks/R.framework/Resources/include -I/sw/ include -I/usr/local/include -fPIC -fno-common -Wall -pedantic -O2 -std=gnu99 -c randomF77.c -o randomF77.o gcc -flat_namespace -bundle -undefined suppress -L/sw/lib -L/usr/ local/lib -o mvtnorm.so mvt.o randomF77.o -L/usr/local/gfortran/lib/ gcc/powerpc-apple-darwin8.2.0/4.1.0 -L/usr/local/gfortran/lib - lgfortran -lgcc_s -lSystemStubs -lSystem -F/Library/Frameworks/ R.framework/.. 
-framework R ** arch - i386 gfortran-4.0 -arch i386 -fPIC -fno-common -g -O2 -march=pentium- m - mtune=prescott -c mvt.f -o mvt.o make: gfortran-4.0: Command not found make: *** [mvt.o] Error 127 chmod: /Library/Frameworks/R.framework/Versions/2.3/Resources/ library/ mvtnorm/libs/i386/*: No such file or directory ** arch - ppc gfortran-4.0 -arch ppc -fPIC -fno-common -g -O2 -c mvt.f -o mvt.o make: gfortran-4.0: Command not found make: *** [mvt.o] Error 127 chmod: /Library/Frameworks/R.framework/Versions/2.3/Resources/ library/ mvtnorm/libs/ppc/*: No such file or directory ERROR: compilation failed for package 'mvtnorm' ** Removing '/Library/Frameworks/R.framework/Versions/2.3/Resources/ library/mvtnorm' ** Restoring previous '/Library/Frameworks/R.framework/Versions/2.3/ Resources/library/mvtnorm' Robin-Hankins-Computer:~/scratch% anyone? -- Robin Hankin Uncertainty Analyst National Oceanography Centre, Southampton European Way, Southampton SO14 3ZH, UK tel 023-8059-7743 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting- guide.html -- Robin Hankin Uncertainty Analyst National Oceanography Centre, Southampton European Way, Southampton SO14 3ZH, UK tel 023-8059-7743 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting- guide.html - -- Robert M. 
Ullrey Special Reportage for Crises Intervention and Mediation Reportage Spécial pour des Crises Interposition et Médiation 133 Antelope Street Woodland, CA 95695 USA mobile (téléphone cellulaire): +01-916-600-5816 Skype (téléphone international): robert.ullrey email: [EMAIL PROTECTED] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
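For reference, the two fixes described above amount to a PATH addition and a Makevars entry. A sketch; the paths follow Robin's gcc 4.0 install and will differ on other machines, and the ~/.R/Makevars location is an assumption (he only says "the Makevars file"):

```
## shell: make the gcc 4.0 tools visible to R CMD INSTALL
export PATH=$PATH:/usr/local/gcc4.0/bin/

## ~/.R/Makevars: tell the linker where libgfortran lives
FLIBS=-L/usr/local/gcc4.0/lib
```

Robert's note above adds that he also needed -lgfortran on that FLIBS line in his setup.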
[R] Avoiding a memory copy by [[
Hi, n = 1000 L = list(a=integer(n), b=integer(n)) L[[2]][1:10] gives me the first 10 items of the 2nd vector in the list L. It works fine. However, it appears to copy the entire L[[2]] vector in memory first, before subsetting it. It seems reasonable that [[ can't know that all that will be done with the result is [1:10], and therefore a copy in memory of the entire vector L[[2]] is made, even though only a new vector of length 10 needs to be created. I see why [[ needs to make a copy in general. L[[c(2,1)]] gives me the 1st item of the 2nd vector in the list L. It works fine, and does not appear to copy L[[2]] in memory first. It's much faster as n grows large. But I need more than 1 element of the vector: L[[c(2,1:10)]] fails with Error: recursive indexing failed at level 2 Is there a way I can obtain the first 10 items of L[[2]] without a memory copy of L[[2]]? Thanks! Matthew R 2.1.1 [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] multiple plots with par mfg
The following works for my quick tests, but being undocumented it is not guaranteed to work in all situations. The best thing to do is to create the first plot, add everything to the first plot that you need to, then go on to the 2nd plot, etc. If you really need to go back to the first plot to add things after plotting the 2nd plot, then here are a couple of ideas: Look at the examples for the cnvrt.coords function in the TeachingDemos package (my quick test showed they work with layout as well as par(mfrow=...)). The other option is that when you use par(mfg) to go back to a previous plot, you also need to reset the usr coordinates, for example: par(mfrow=c(2,2)) plot(rnorm(10), rnorm(10)) tmp <- par('usr') hist(rgamma(1000,3)) # changes coordinate system par(mfg=c(1,1)) # go back to first plot points(0,0, col='red') # wrong place, based on hist coordinate system par(usr=tmp) # reset coordinates to correct values points(0,0, col='blue') # now it is in the right place Hope this helps, -- Gregory (Greg) L. Snow Ph.D. Statistical Data Center Intermountain Healthcare [EMAIL PROTECTED] (801) 408-8111 -----Original Message----- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Yan Wong Sent: Tuesday, May 23, 2006 6:05 AM To: R-help Subject: Re: [R] multiple plots with par mfg On 23 May 2006, at 12:48, Prof Brian Ripley wrote: On Tue, 23 May 2006, Yan Wong wrote: if not, can anyone suggest a way of appending to 2 separate plots on the fly. No, it is user error. par(mfg=) specifies where the next figure will be drawn, and points() does not draw a figure but adds to one. As the help page says: 'mfg' A numerical vector of the form 'c(i, j)' where 'i' and 'j' indicate which figure in an array of figures is to be drawn next (if setting) or is being drawn (if enquiring). OK. I didn't appreciate the distinction between drawing and adding. You need to use screen() or layout() to switch back to an existing plot.
Thanks, but the help page for screen says The behavior associated with returning to a screen to add to an existing plot is unpredictable and may result in problems that are not readily visible. I assume this to mean that I shouldn't do it using screen(). I can't find any description of how to add points to several different plots generated after a layout() call. Is there a way? Cheers Yan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] How can you buy R?
On 5/23/06, Berwin A Turlach [EMAIL PROTECTED] wrote: G'day Deepayan, DS == Deepayan Sarkar [EMAIL PROTECTED] writes: DS I think you are still missing the point. [...] Quite possible, as I said early on IANAL. And these discussions really start to remind me too much of those that I read in gnu.misc.discuss. Since I never participated in them, I don't see why I should here. And that group is probably a better forum to discuss all these issues. If some of the guys who always tried to argue that they found a way to circumvent the GPL are still hanging around, I am sure they would be happy if you came along and confirmed that according to your understanding of the GPL everything they are doing is o.k. :) [...] DS The FSF's plan was not to produce a completely independent and DS fully functional 'GNU system' at once (which would be DS unrealistic), but rather produce replacements of UNIX tools DS one by one. It was entirely necessary to allow these new DS versions to operate within the older, proprietary system. Wasn't your argument above, in response to the scenario that I was describing, that it is not necessary to explicitly allow this because a user can never violate the GPL? As long as you operate on a proprietary system and are not distributing anything, why would there all of a sudden be a problem? It's not necessary as long as you are only a user. It is necessary when you want to be more than a user and copy, modify and distribute. That's the difference between free software and ``Free software'' [1] [1] http://www.gnu.org/philosophy/free-sw.html In any case, I'm not interested in a technical discussion about the GPL either; I was only trying to respond to (what in my opinion was) some misinformation about the GPL. I have no interest in convincing you personally, and I wouldn't have bothered if this wasn't a public list. I think I have stated my point of view clearly enough, so I'm going to stop here.
-Deepayan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] Avoiding a memory copy by [[
On Tue, 23 May 2006, Matthew Dowle wrote: Hi, n = 1000 L = list(a=integer(n), b=integer(n)) L[[2]][1:10] gives me the first 10 items of the 2nd vector in the list L. It works fine. However it appears to copy the entire L[[2]] vector in memory first, before subsetting it. It seems reasonable that [[ can't know that all that is to be done is to do [1:10] on the result and therefore a copy in memory of the entire vector L[[2]] is not required. Only a new vector length 10 need be created. I see why [[ needs to make a copy in general. L[[c(2,1)]] gives me the 1st item of the 2nd vector in the list L. It works fine, and does not appear to copy L[[2]] in memory first. Its much faster as n grows large. But I need more than 1 element of the vector L[[c(2,1:10)]] fails with Error: recursive indexing failed at level 2 Note that [[ ]] is documented to only ever return one element, so this is invalid. Is there a way I can obtain the first 10 items of L[[2]] without a memory copy of L[[2]] ? Use .Call -- Brian D. Ripley, [EMAIL PROTECTED] Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ University of Oxford, Tel: +44 1865 272861 (self) 1 South Parks Road, +44 1865 272866 (PA) Oxford OX1 3TG, UKFax: +44 1865 272595 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] Accessing The Output of NLS
Hi, Probably a trivial question: if you type something like out <- nls( here goes the model ), then you can type out or summary(out) to see the fitted parameters, the quality of the fit, etc. But what if you want to get the fitted parameters as a vector, to re-use them straight away in the following computations? Kind Regards Lorenzo __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] Accessing The Output of NLS
LI == Lorenzo Isella [EMAIL PROTECTED] writes: LI Hi, Probably a trivial question: if you do type something LI like: out-nls( here goes the model ), then you can type out LI or summary(out) to see the fitted parameters, the quality of LI the fit etc... but what if you want to get the fitted LI parameters as a vector to re-use them straight away in the LI following computations? ?coef ?? Cheers, Berwin == Full address Berwin A Turlach Tel.: +61 (8) 6488 3338 (secr) School of Mathematics and Statistics+61 (8) 6488 3383 (self) The University of Western Australia FAX : +61 (8) 6488 1028 35 Stirling Highway Crawley WA 6009e-mail: [EMAIL PROTECTED] Australiahttp://www.maths.uwa.edu.au/~berwin __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
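To make the ?coef pointer concrete, a short sketch using the DNase example from ?nls:

```r
# Extracting fitted parameters from an nls() fit as a named vector.
DNase1 <- subset(DNase, Run == 1)   # DNase ships with R
fit <- nls(density ~ SSlogis(log(conc), Asym, xmid, scal), data = DNase1)

p <- coef(fit)   # named numeric vector: Asym, xmid, scal
p["Asym"]        # a single parameter, ready for further computation
```

Other extractors follow the same pattern: fitted(fit), residuals(fit), predict(fit) and vcov(fit) all return objects you can compute with directly.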
Re: [R] multiple plots with par mfg
On 23 May 2006, at 15:57, Greg Snow wrote: The best thing to do is to create the first plot, add everything to the first plot that you need to, then go on to the 2nd plot, etc. Yes, I realise that. The problem is that the data are being simulated on the fly, and I wish to display multiple plots which are updated as the simulation progresses. So I do need to return to each plot on every generation of the simulation. If you really need to go back to the first plot to add things after plotting the 2nd plot then here are a couple of ideas: Look at the examples for the cnvrt.coords function in the TeachingDemos package (my quick test showed they work with layout as well as par(mfrow=...)). The other option is when you use par(mfg) to go back to a previous plot you also need to reset the usr coordinates, for example: Aha. I didn't realise that the usr coordinates could be stored and reset using par. Hope this helps, I think that's exactly what I need. Thank you very much. Yan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] Statistical Power
How can I compute a power analysis on a multi-factor within-subjects design? __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] Avoiding a memory copy by [[
Thanks. I looked some more and found that L$b[1:10] doesn't seem to copy L$b. If that's correct, why does L[[2]][1:10] copy L[[2]] ? -Original Message- From: Prof Brian Ripley [mailto:[EMAIL PROTECTED] Sent: 23 May 2006 16:23 To: Matthew Dowle Cc: 'r-help@stat.math.ethz.ch' Subject: Re: [R] Avoiding a memory copy by [[ On Tue, 23 May 2006, Matthew Dowle wrote: Hi, n = 1000 L = list(a=integer(n), b=integer(n)) L[[2]][1:10] gives me the first 10 items of the 2nd vector in the list L. It works fine. However it appears to copy the entire L[[2]] vector in memory first, before subsetting it. It seems reasonable that [[ can't know that all that is to be done is to do [1:10] on the result and therefore a copy in memory of the entire vector L[[2]] is not required. Only a new vector of length 10 need be created. I see why [[ needs to make a copy in general. L[[c(2,1)]] gives me the 1st item of the 2nd vector in the list L. It works fine, and does not appear to copy L[[2]] in memory first. It's much faster as n grows large. But I need more than 1 element of the vector L[[c(2,1:10)]] fails with Error: recursive indexing failed at level 2 Note that [[ ]] is documented to only ever return one element, so this is invalid. Is there a way I can obtain the first 10 items of L[[2]] without a memory copy of L[[2]] ? Use .Call -- Brian D. Ripley, [EMAIL PROTECTED] Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ University of Oxford, Tel: +44 1865 272861 (self) 1 South Parks Road, +44 1865 272866 (PA) Oxford OX1 3TG, UK Fax: +44 1865 272595 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
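One way to probe this yourself is `tracemem()`, which reports each duplication of a traced object — but only in builds of R compiled with memory profiling (the call is wrapped in `try()` for builds where it is unavailable). Whether a message actually appears for `L[[2]][1:10]` depends on the internals of your R version, so treat this as a diagnostic, not a guaranteed output:

```r
n <- 1000L
L <- list(a = integer(n), b = integer(n))

try({
  tracemem(L$b)            # start reporting duplications of this vector
  invisible(L[[2]][1:10])  # a "tracemem[...]" line here means L[[2]] was copied
  untracemem(L$b)
})
```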
[R] glmmADMB and the GPL -- formerly-- How to buy R.
Dear List, Some of you have been following the discussion of the GPL and its inclusion in the glmmADMB package we created for R users. I would like to provide a bit of background and include an email we received from Prof. Ripley so that everyone can be aware of how some might use the GPL to try to force access to proprietary software. I think this is interesting because many have voiced the opinion about the benign nature of the GPL and that commercial enterprises who avoid it do so mainly out of ignorance. I have noticed two things: first, users of the R-help list appear to rely largely on the advice of a rather small number of statistical experts. Second, the R users regard R as being more cutting edge and up to date than lists devoted to commercial statistical packages like SAS. For these reasons I was surprised to see the following post on the web in reply to a question on negative binomial mixed models. https://stat.ethz.ch/pipermail/r-help/2005-February/066146.html I thought that this was bad advice as certainly our ADMB-RE software could handle this problem easily. However one never knows exactly what sort of data people might use in a particular example that could lead to difficulties, so I decided to code up a program that R users could test for this problem. However R users are used to a different approach for model formulation, so it was difficult for the average R user to access the program. I approached Anders Nielsen, who is both an experienced ADMB user and R user, and asked him to write an interface in R which would make the program more accessible to R users. He created a package and the whole thing seems to have had some success, with at least one PhD thesis based on calculations using it. The R code that Anders wrote is simply an interface which takes the R specification for the model and outputs a data file in the format that an ADMB program expects. The ADMB program is a stand-alone exe.
The R script then reads the ADMB output files and presents the results to the user in a more familiar R format. Now it appears that at some revision someone put a GPL notice on this package, although Anders states that he did not do so, and he is certain that it was not originally included by him. In any event the R script is easily extracted from the package by those who know how to do so, and we have no problem with making the ADMB-RE source to the exe (TPL file) available. In fact the original was on our web site but was modified as we made the program more robust to deal with difficult data sets. The compiled TPL file links with our proprietary libraries and we have no intention of providing the source for these, but that is exactly what Prof. Ripley seems to be demanding, since he claims that he wants the program to run on his computer, which it apparently does not do at present. Prof. Ripley seems to feel that he is a qualified spokesman for the open source community. I have no idea what the community at large feels about this. What follows is Hans Skaug's post with Prof. Ripley's reply. On Mon, 22 May 2006, H. Skaug wrote: About glmmADMB and GPL: We were not very cautious when we put in the GPL statement. What we wanted to say was that the use of glmmADMB is free, and does not require a license for AD Model Builder. But that is not what you said, and you are legally and morally bound to fulfill the promise you made. Am I correct in interpreting this discussion so that all we have to do is to remove the License: GPL statement from the DESCRIPTION file (and everywhere else it may occur), and there will be no conflict between glmmADMB and the rules of the R community? I have made a request under the GPL. `All' you have to do is to fulfill it. We have temporarily withdrawn glmmADMB until this question has been settled.
You can withdraw the package, but it has already been distributed under GPL, and those who received it under GPL have the right to redistribute it under GPL, including the sources you are obliged to give them. That's part of the `freedom' that GPL gives. hans Brian Ripley wrote: The issue in the glmmADMB example is not whether they were required to release it under GPL (my reading from the GPL FAQ is that they probably were not, given that communication is between processes and the R code is interpreted). Rather, it is stated to be under GPL _but_ there is no source code offer for the executables (and the GPL FAQ says that for anonymous FTP it should be downloadable via the same site, and the principles apply equally to HTTP sites). As the executables are not for my normal OS and I would like to exercise my freedom to try the GPLed code, I have requested the sources from the package maintainer. -- Brian D. Ripley, [EMAIL PROTECTED] Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ University of
[R] Manipulating code?
Dear expeRts, I am currently struggling with the problem of finding cut points for a set of stimulus variables. I would like to obtain cut points iteratively for each variable by re-applying a dichotomised variable in the model and then recalculate it. I planned to have fixed names for the dichotomised variables so I could use the same syntax for every recalculation of the whole model. I furthermore want to reiterate the process until no cut point changes any more. My problem is in accomplishing this syntactically. How can I pass a variable name to a function without getting lost in as.symbol and eval and parse mayhem? I am feeling I am thinking too much in macro expansion à la SAS when trying to tackle this. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] Avoiding a memory copy by [[
On 5/23/06, Matthew Dowle [EMAIL PROTECTED] wrote: Hi, n = 1000 L = list(a=integer(n), b=integer(n)) L[[2]][1:10] gives me the first 10 items of the 2nd vector in the list L. It works fine. However it appears to copy the entire L[[2]] vector in memory first, before subsetting it. It seems reasonable that [[ can't know that all that is to be done is to do [1:10] on the result and therefore a copy in memory of the entire vector L[[2]] is not required. Only a new vector of length 10 need be created. I see why [[ needs to make a copy in general. L[[c(2,1)]] gives me the 1st item of the 2nd vector in the list L. It works fine, and does not appear to copy L[[2]] in memory first. It's much faster as n grows large. But I need more than 1 element of the vector L[[c(2,1:10)]] fails with Error: recursive indexing failed at level 2 Is there a way I can obtain the first 10 items of L[[2]] without a memory copy of L[[2]] ? I think environments will help you out here: n <- 1000 env <- new.env() env$a <- integer(n) env$b <- integer(n) env$a[1:10] /Henrik Thanks! Matthew R 2.1.1 [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] Statistical Power
For other than the basic situations I generally use simulation to estimate power. Follow these basic steps: Write a function that takes as input the things that you may want to change in estimating power (sample size, effect size, standard deviations, ...). Inside the function, generate random data based on the inputs and your study design, compute the p-value that you are interested in, and return that p-value. Then use the function replicate or sapply to run this function a bunch of times (I usually do about 1,000) and save the p-values in a vector. The estimated power is then mean(outvec < 0.05) (or whatever your alpha level is). The website: http://maven.smith.edu/~nhorton/R/ has an example of simulating power for a mixed effects model (though it uses a loop rather than replicate). Hope this helps, -- Gregory (Greg) L. Snow Ph.D. Statistical Data Center Intermountain Healthcare [EMAIL PROTECTED] (801) 408-8111 -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Christopher Brown Sent: Tuesday, May 23, 2006 9:54 AM To: R-help@stat.math.ethz.ch Subject: [R] Statistical Power How can I compute a power analysis on a multi-factor within-subjects design? __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
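Greg's recipe can be sketched for a simple two-sample comparison (the sample size, effect size, and sd below are placeholders to replace with values from your own design; a full within-subjects design would need a correspondingly richer data-generation step):

```r
# One simulated experiment: generate data, return the p-value of interest
sim.pval <- function(n = 20, effect = 0.5, sd = 1) {
  x <- rnorm(n, mean = 0,      sd = sd)
  y <- rnorm(n, mean = effect, sd = sd)
  t.test(x, y)$p.value
}

set.seed(42)
outvec <- replicate(1000, sim.pval(n = 30))
power  <- mean(outvec < 0.05)   # estimated power at alpha = 0.05
```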
Re: [R] Avoiding a memory copy by [[
On 5/23/06, Matthew Dowle [EMAIL PROTECTED] wrote: Thanks. I looked some more and found that L$b[1:10] doesn't seem to copy L$b. If that's correct why does L[[2]][1:10] copy L[[2]] ? I forgot, this is probably what I was told in discussion about UseMethod("$") the other day: The $ operator is very special. Its second argument (the one after the operator) is not evaluated. For [[ it is. This is probably also why the solution with environment works. I think someone with more knowledge of the R core will have to give you the details on this, and especially why $ is special in the first place (maybe because of the example you're giving). /Henrik -Original Message- From: Prof Brian Ripley [mailto:[EMAIL PROTECTED] Sent: 23 May 2006 16:23 To: Matthew Dowle Cc: 'r-help@stat.math.ethz.ch' Subject: Re: [R] Avoiding a memory copy by [[ On Tue, 23 May 2006, Matthew Dowle wrote: Hi, n = 1000 L = list(a=integer(n), b=integer(n)) L[[2]][1:10] gives me the first 10 items of the 2nd vector in the list L. It works fine. However it appears to copy the entire L[[2]] vector in memory first, before subsetting it. It seems reasonable that [[ can't know that all that is to be done is to do [1:10] on the result and therefore a copy in memory of the entire vector L[[2]] is not required. Only a new vector of length 10 need be created. I see why [[ needs to make a copy in general. L[[c(2,1)]] gives me the 1st item of the 2nd vector in the list L. It works fine, and does not appear to copy L[[2]] in memory first. It's much faster as n grows large.
Ripley, [EMAIL PROTECTED] Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ University of Oxford, Tel: +44 1865 272861 (self) 1 South Parks Road, +44 1865 272866 (PA) Oxford OX1 3TG, UKFax: +44 1865 272595 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] Avoiding a memory copy by [[
On Tue, 23 May 2006, Henrik Bengtsson wrote: On 5/23/06, Matthew Dowle [EMAIL PROTECTED] wrote: Thanks. I looked some more and found that L$b[1:10] doesn't seem to copy L$b. If that's correct why does L[[2]][1:10] copy L[[2]] ? I forgot, this is probably what I was told in discussion about UseMethod($) the other day: The $ operator is very special. Its second argument (the one after the operator) is not evaluated. For [[ it is. This is probably also why the solution with environment works. I think some with the more knowledge about the R core has to give you the details on this, and especially why $ is special in the first place (maybe because of the example you're giving). That's not the reason here: the internal code for [[ duplicates for vector lists but not pairlists. That could be replaced by a NAMED optimization, although we would not do that until 2.4.0 (for which Thomas Lumley has written profiling code for memory use and duplication). /Henrik -Original Message- From: Prof Brian Ripley [mailto:[EMAIL PROTECTED] Sent: 23 May 2006 16:23 To: Matthew Dowle Cc: 'r-help@stat.math.ethz.ch' Subject: Re: [R] Avoiding a memory copy by [[ On Tue, 23 May 2006, Matthew Dowle wrote: Hi, n = 1000 L = list(a=integer(n), b=integer(n)) L[[2]][1:10] gives me the first 10 items of the 2nd vector in the list L. It works fine. However it appears to copy the entire L[[2]] vector in memory first, before subsetting it. It seems reasonable that [[ can't know that all that is to be done is to do [1:10] on the result and therefore a copy in memory of the entire vector L[[2]] is not required. Only a new vector length 10 need be created. I see why [[ needs to make a copy in general. L[[c(2,1)]] gives me the 1st item of the 2nd vector in the list L. It works fine, and does not appear to copy L[[2]] in memory first. Its much faster as n grows large. 
But I need more than 1 element of the vector L[[c(2,1:10)]] fails with Error: recursive indexing failed at level 2 Note that [[ ]] is documented to only ever return one element, so this is invalid. Is there a way I can obtain the first 10 items of L[[2]] without a memory copy of L[[2]] ? Use .Call -- Brian D. Ripley, [EMAIL PROTECTED] Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ University of Oxford, Tel: +44 1865 272861 (self) 1 South Parks Road, +44 1865 272866 (PA) Oxford OX1 3TG, UKFax: +44 1865 272595 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html -- Brian D. Ripley, [EMAIL PROTECTED] Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ University of Oxford, Tel: +44 1865 272861 (self) 1 South Parks Road, +44 1865 272866 (PA) Oxford OX1 3TG, UKFax: +44 1865 272595 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] Avoiding a memory copy by [[
That development sounds excellent. I'm happy to help test it, just let me know. Until 2.4.0 then I'll do something like the following, because I need to deal with list integer locations rather than names: eval(parse(text=paste("L$", names(L)[2], "[1:10]", sep=""))) This works well, but if there is an easier way until 2.4.0, please let me know. Thank you and Henrik for your replies. -Original Message- From: Prof Brian Ripley [mailto:[EMAIL PROTECTED] Sent: 23 May 2006 17:47 To: Henrik Bengtsson Cc: Matthew Dowle; r-help@stat.math.ethz.ch Subject: Re: [R] Avoiding a memory copy by [[ On Tue, 23 May 2006, Henrik Bengtsson wrote: On 5/23/06, Matthew Dowle [EMAIL PROTECTED] wrote: Thanks. I looked some more and found that L$b[1:10] doesn't seem to copy L$b. If that's correct why does L[[2]][1:10] copy L[[2]] ? I forgot, this is probably what I was told in discussion about UseMethod("$") the other day: The $ operator is very special. Its second argument (the one after the operator) is not evaluated. For [[ it is. This is probably also why the solution with environment works. I think someone with more knowledge of the R core will have to give you the details on this, and especially why $ is special in the first place (maybe because of the example you're giving). That's not the reason here: the internal code for [[ duplicates for vector lists but not pairlists. That could be replaced by a NAMED optimization, although we would not do that until 2.4.0 (for which Thomas Lumley has written profiling code for memory use and duplication). /Henrik -Original Message- From: Prof Brian Ripley [mailto:[EMAIL PROTECTED] Sent: 23 May 2006 16:23 To: Matthew Dowle Cc: 'r-help@stat.math.ethz.ch' Subject: Re: [R] Avoiding a memory copy by [[ On Tue, 23 May 2006, Matthew Dowle wrote: Hi, n = 1000 L = list(a=integer(n), b=integer(n)) L[[2]][1:10] gives me the first 10 items of the 2nd vector in the list L. It works fine.
However it appears to copy the entire L[[2]] vector in memory first, before subsetting it. It seems reasonable that [[ can't know that all that is to be done is to do [1:10] on the result and therefore a copy in memory of the entire vector L[[2]] is not required. Only a new vector length 10 need be created. I see why [[ needs to make a copy in general. L[[c(2,1)]] gives me the 1st item of the 2nd vector in the list L. It works fine, and does not appear to copy L[[2]] in memory first. Its much faster as n grows large. But I need more than 1 element of the vector L[[c(2,1:10)]] fails with Error: recursive indexing failed at level 2 Note that [[ ]] is documented to only ever return one element, so this is invalid. Is there a way I can obtain the first 10 items of L[[2]] without a memory copy of L[[2]] ? Use .Call -- Brian D. Ripley, [EMAIL PROTECTED] Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ University of Oxford, Tel: +44 1865 272861 (self) 1 South Parks Road, +44 1865 272866 (PA) Oxford OX1 3TG, UKFax: +44 1865 272595 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html -- Brian D. Ripley, [EMAIL PROTECTED] Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ University of Oxford, Tel: +44 1865 272861 (self) 1 South Parks Road, +44 1865 272866 (PA) Oxford OX1 3TG, UKFax: +44 1865 272595 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] functions for overall mean and its se
Hi, I am interested in obtaining an estimate of the population mean density of a certain area by using kriging. I can get the predicted values at individual locations. I suppose that I can obtain an estimate of population density by averaging those individual predicted values, but how about the se of the estimate? Any help is appreciated. Nancy __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] Manipulating code?
Macro stuff à la SAS is something that should be avoided whenever possible - it's messy, limited, and limiting. (I've done it occasionally and it works, but I think it's best not to go there.) Read the documentation on lists (in particular named lists), and keep everything in one or more lists. For example: lst <- list() for (v in c("var1","var2","var3")) lst[[v]] <- runif(sample(c(50,100),1)) for (v in c("var1","var2","var3")) print(sd(lst[[v]])) -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Johannes Hüsing Sent: Tuesday, May 23, 2006 12:26 PM To: r-help@stat.math.ethz.ch Subject: [R] Manipulating code? Dear expeRts, I am currently struggling with the problem of finding cut points for a set of stimulus variables. I would like to obtain cut points iteratively for each variable by re-applying a dichotomised variable in the model and then recalculate it. I planned to have fixed names for the dichotomised variables so I could use the same syntax for every recalculation of the whole model. I furthermore want to reiterate the process until no cut point changes any more. My problem is in accomplishing this syntactically. How can I pass a variable name to a function without getting lost in as.symbol and eval and parse mayhem? I am feeling I am thinking too much in macro expansion à la SAS when trying to tackle this. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] Manipulating code?
Johannes Hüsing [EMAIL PROTECTED] writes: Dear expeRts, I am currently struggling with the problem of finding cut points for a set of stimulus variables. I would like to obtain cut points iteratively for each variable by re-applying a dichotomised variable in the model and then recalculate it. I planned to have fixed names for the dichotomised variables so I could use the same syntax for every recalculation of the whole model. I furthermore want to reiterate the process until no cut point changes any more. My problem is in accomplishing this syntactically. How can I pass a variable name to a function without getting lost in as.symbol and eval and parse mayhem? I am feeling I am thinking too much in macro expansion à la SAS when trying to tackle this. I think a simple example of what you are trying to do might be needed. But take a look at the help pages for assign() and get(). These functions make it easy to go from a string containing the name of a variable to the actual variable, etc. + seth __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
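A small illustration of what `assign()` and `get()` buy you — moving between a variable's name held as a string and the variable itself (all the names here are invented):

```r
nm <- "cutoff.age"     # the name is just a character string
assign(nm, 42)         # create a variable with that name
get(nm)                # fetch its value: 42

# The same idea applied over a set of variables
for (v in c("v1", "v2", "v3")) assign(v, rnorm(10))
sapply(c("v1", "v2", "v3"), function(v) mean(get(v)))
```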
Re: [R] Manipulating code?
It's not entirely clear to me what you want to do, but these will do the indicated regression on the subset of the data for which Species is setosa and then do it again but for the subset for which Species is virginica: lm(Sepal.Length ~ Sepal.Width, iris, subset = Species == "setosa") lm(Sepal.Length ~ Sepal.Width, iris, subset = Species == "virginica") Does that answer your question? On 5/23/06, Johannes Hüsing [EMAIL PROTECTED] wrote: Dear expeRts, I am currently struggling with the problem of finding cut points for a set of stimulus variables. I would like to obtain cut points iteratively for each variable by re-applying a dichotomised variable in the model and then recalculate it. I planned to have fixed names for the dichotomised variables so I could use the same syntax for every recalculation of the whole model. I furthermore want to reiterate the process until no cut point changes any more. My problem is in accomplishing this syntactically. How can I pass a variable name to a function without getting lost in as.symbol and eval and parse mayhem? I am feeling I am thinking too much in macro expansion à la SAS when trying to tackle this. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] my first R program
I have attached a graph image of what i am trying to plot. The x-axis in the image (rs*) would correspond to the position column in my first file. The ticks are not equidistant but are placed based on their proportionate distance from each other. So, the horizontal line i need to draw would be for example: from file 2: col1 = 1 corresponds to position 120 (in file1), col2 = 2 (position 134), p-value = 0.45 (p-val on y-axis, actually i will be plotting the -log10 of p-val) so i am trying to draw a line from positions 120 to 134 on x-axis at their corresponding p-value (0.45) on y-axis What i have done so far: 1) read the files into a data frame object: mpos = read.table("file 1", header=TRUE) snpem = read.table("file 2", header=TRUE) 2) snpem[4] = -log10(snpem[3]) 3) i tried some of the suggestions that i got from people here... so trying to map the position col (first file) to the col1 and col2 didn't work position = mpos[2] col1 = snpem[1] so, now when i did position[col1], it was giving an error Thanks for the help.. -kiran Quoting Albyn Jones [EMAIL PROTECTED]: I'm not sure I understand what you are trying to do exactly, but to associate the col1 and col2 with their corresponding position values looks like (position[col1], position[col2]) so you _might_ want something like plot(position[col1],pval,xlim=c(110,175),ylim=c(0,1),type="n",xlab="Position") segments(position[col1],pval, position[col2],pval) abline(h=.5) regards albyn Quoting [EMAIL PROTECTED]: Hello, This is my first attempt at using R. Still trying to figure out and understand how to work with data frames. I am trying to plot the following data (example). Some experimental data i am trying to plot here. 1) i have 2 files 2) First File: Number Position 1 120 2 134 3 156 4 169 5 203 3) Second File: Col1 Col2 p-val 1 2 0.45 1 2 0.56 2 3 0.56 2 3 0.68 2 3 0.88 3 4 0.76 3 5 0.79 3 5 0.92 I am trying to plot this with position as x-axis and p-val as the y-axis.
The col1 and col2 in the second file correspond to the number column in the first file. I am having trouble figuring out how to associate the col1 and col2 with their corresponding position values The x-axis should start with 120 as that is the min value, and the next values should be spaced proportionally away from the first. I tried using the percentage method to place them... but couldn't completely get it correct. so it would look like: | | | | | 120 134 156 169 203 Hopefully i explained it correctly. i would like to plot the p-value as horizontal lines drawn between the col1 and col2 values (ie: positions) So, the plot will have as many horizontal lines as the rows in the second file. And ONE reference horizontal line passing thru the plot at p-val=0.5, to see what values lie below that and what above it. I have made some progress in plotting the horizontal axis, but having trouble bringing all the data together. Not sure yet how to manipulate them using the data frames :( Any suggestions and tips will be greatly appreciated. Thank you -Kiran - This email is intended only for the use of the individual or...{{dropped}} __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
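Pulling the thread together: the indexing failed because `mpos[2]` and `snpem[1]` are one-column data frames, not vectors; extract the columns with `$` (or `[[ ]]`) and `position[col1]` works as Albyn suggested. A self-contained sketch using the data from the post (the file reads are replaced by inline data frames, since the real file names are not known):

```r
# Stand-ins for read.table("file 1", header=TRUE) / read.table("file 2", header=TRUE)
mpos  <- data.frame(Number = 1:5, Position = c(120, 134, 156, 169, 203))
snpem <- data.frame(Col1 = c(1, 1, 2, 2, 2, 3, 3, 3),
                    Col2 = c(2, 2, 3, 3, 3, 4, 5, 5),
                    pval = c(0.45, 0.56, 0.56, 0.68, 0.88, 0.76, 0.79, 0.92))

position <- mpos$Position          # a plain vector, so position[col] works
x0 <- position[snpem$Col1]         # left end of each horizontal line
x1 <- position[snpem$Col2]         # right end

plot(range(position), c(0, 1), type = "n",
     xlab = "Position", ylab = "p-value", xaxt = "n")
axis(1, at = position)             # ticks at the true, unequally spaced positions
segments(x0, snpem$pval, x1, snpem$pval)
abline(h = 0.5, lty = 2)           # reference line at p = 0.5
```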
[R] problem with ad.test
dear experts, i am a novice and have been trying to use the anderson-darling test on a simple text file with one column of data. i have followed the example in the manual to read from a file into a vector (mm). i am able to see the summary stats with summary(mm) however, when i try to use the ad.test function, it keeps coming up with the following error messages, ad.test(mm) Error in `[.data.frame`(x, complete.cases(x)) : undefined columns selected ad.test(EQ) Error in inherits(x, "factor") : object "EQ" not found EQ is the name in the top row of the column, imported with read.table("file", header=TRUE). i am really sorry if this is very basic, but i have not been able to locate anything specific on how to avoid this error message in the archives. any help is highly appreciated, bhisma -- ___ Bhismadev Chakrabarti Department of Psychiatry University of Cambridge Cambridge, UK __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
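The likely cause, judging from the error messages: `read.table()` returns a data frame, not a vector, and `ad.test()` (assumed here to be the Anderson-Darling normality test from the `nortest` package) wants a numeric vector; the column name `EQ` is also not a visible object until it is extracted from the data frame. A hedged sketch that recreates the situation with made-up data:

```r
library(nortest)   # assumed source of ad.test()

# Recreate the situation: a one-column file with header "EQ"
tmp <- tempfile()
write.table(data.frame(EQ = rnorm(50)), tmp, row.names = FALSE)

mm <- read.table(tmp, header = TRUE)   # a data.frame, not a vector
# ad.test(mm) fails because mm is a data frame;
# extract the column as a numeric vector instead:
ad.test(mm$EQ)     # with(mm, ad.test(EQ)) or ad.test(mm[["EQ"]]) also work
```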
[R] shapes in rgl
Does anyone have a way of producing solid shapes other than spheres in rgl? I am using rgl to produce a simple visualisation of forest model results using lollipops. It's just a bit of fun, but as many of the trees are pines I would like to depict their crowns as cones. If there is a solution I need it to work under windows. Here is the example. library(rgl) library(misc3d) Trees3d <- function(x,y,z,rad,cols="lightgreen"){ rgl.bg(color="white") rgl.spheres(x,(z-rad),-y,rad,col=cols,alpha=1) x <- rep(x,each=3) y <- rep(y,each=3) z <- rep(z-rad*2,each=3) a <- seq(3,length(x),by=3) y[a] <- NA x[a] <- NA z[a] <- NA a <- seq(1,length(x),by=3) z[a] <- 0 lines3d(x,y,z,col="brown",size=5,add=T) rgl.bbox(color="black", emission="lightgreen", specular="#FF", shininess=5, alpha=0.8 ) } x <- runif(100,0,100) y <- runif(100,0,100) z <- runif(100,10,30) rad <- z/5 Trees3d(x,y,z,rad) Thanks, Duncan Golicher -- Dr Duncan Golicher Ecologia y Sistematica Terrestre Conservación de la Biodiversidad El Colegio de la Frontera Sur San Cristobal de Las Casas, Chiapas, Mexico Email: [EMAIL PROTECTED] Tel: 967 674 9000 ext 1310 Fax: 967 678 2322 Celular: 044 9671041021 United Kingdom Skypein: 020 7870 6251 Skype name: duncangolicher Download Skype from http://www.skype.com __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] conditional replacement
Hi How can I do this in R. df: 48 1 35 32 80 If df < 30 then replace it with 30, else if df > 60 replace it with 60. I have a large dataset so I can't afford to identify indexes and then replace. Desired o/p: 48 30 35 32 60 Thanx in advance. Sachin __ [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] conditional replacement
x <- 10*1:10 pmin(pmax(x, 30), 60) # 30 30 30 40 50 60 60 60 60 60 On 5/23/06, Sachin J [EMAIL PROTECTED] wrote: Hi How can I do this in R. df: 48 1 35 32 80 If df < 30 then replace it with 30, else if df > 60 replace it with 60. I have a large dataset so I can't afford to identify indexes and then replace. Desired o/p: 48 30 35 32 60 Thanx in advance. Sachin __ [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] conditional replacement
On Tue, 2006-05-23 at 11:40 -0700, Sachin J wrote: [question about clipping df to the range 30-60 snipped] One approach is to combine two ifelse() calls:

ifelse(df < 30, 30, ifelse(df > 60, 60, df))
[1] 48 30 35 32 60

Recall that if the condition (1st argument) is TRUE, the second argument is evaluated and returned. If the condition is FALSE, the third argument is evaluated, which in this case is another ifelse(). The same logic follows within that function. See ?ifelse. HTH, Marc Schwartz
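For reference, the two suggested approaches (parallel min/max and nested ifelse) give the same answer on the numbers from the question; a quick sanity check, with variable names made up here:

```r
x <- c(48, 1, 35, 32, 80)          # the example data from the question
clipped1 <- pmin(pmax(x, 30), 60)  # clip via parallel min/max
clipped2 <- ifelse(x < 30, 30, ifelse(x > 60, 60, x))  # clip via nested ifelse
clipped1                           # 48 30 35 32 60
identical(clipped1, clipped2)      # TRUE
```

pmin/pmax is fully vectorized and typically faster than nested ifelse on large data, which matters given the poster's concern about dataset size.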
Re: [R] shapes in rgl
Try this function (and modify it to your heart's content):

rgl.cones <- function(x, y, z, h = 1, r = 0.25, n = 36, ...) {
  r <- rep(r, length.out = length(x))
  h <- rep(h, length.out = length(x))
  step <- 2*pi/n
  for (i in seq(along = x)) {
    for (j in seq(0, 2*pi - step, length = n)) {
      tmp.x <- x[i] + c(0, cos(j)*r[i], cos(j + step)*r[i])
      tmp.z <- z[i] + c(0, sin(j)*r[i], sin(j + step)*r[i])
      tmp.y <- y[i] + h[i]/2 * c(1, -1, -1)
      rgl.triangles(tmp.x, tmp.y, tmp.z, ...)
    }
  }
}

Hope this helps, -- Gregory (Greg) L. Snow Ph.D., Statistical Data Center, Intermountain Healthcare, [EMAIL PROTECTED], (801) 408-8111
-----Original Message----- From: Duncan Golicher. Sent: Tuesday, May 23, 2006 12:29 PM. To: r-help@stat.math.ethz.ch. Subject: [R] shapes in rgl
[Duncan's example code snipped; see the original post above.]
Re: [R] conditional replacement
You could try something like:

ifelse(df < 30, 30, ifelse(df > 60, 60, df))

I hope it helps. Best, Dimitris. Dimitris Rizopoulos, Ph.D. Student, Biostatistical Centre, School of Public Health, Catholic University of Leuven. Address: Kapucijnenvoer 35, Leuven, Belgium. Tel: +32/(0)16/336899 Fax: +32/(0)16/337015 Web: http://med.kuleuven.be/biostat/ http://www.student.kuleuven.be/~m0390867/dimitris.htm Quoting Sachin J: [question snipped] Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm
Re: [R] conditional replacement
Sachin J wrote: [question about clipping df to the range 30-60 snipped] Try:

pmax(pmin(df, 60), 30)

assuming df is numeric (and not a data.frame). ifelse is also an option. --sundar
Re: [R] conditional replacement
Thank you Gabor, Marc, Dimitris and Sundar. Sachin. Gabor Grothendieck wrote: x <- 10*1:10; pmin(pmax(x, 30), 60) # 30 30 30 40 50 60 60 60 60 60 [rest of thread snipped]
Re: [R] Manipulating code?
Johannes Hüsing [EMAIL PROTECTED] writes: [...] "I think a simple example of what you are trying to do might be needed." I don't think so, as ... "But take a look at the help pages for assign() and get()." ... this seems to be what I was looking for. Many thanks! Greetings, Johannes
Re: [R] conditional replacement
Sachin, there's another, slower but more flexible, way than Gabor's solution:

ifelse(x < 30, 30, ifelse(x > 60, 60, x))

HTH, Rogerio. ----- Original Message ----- From: Sachin J. To: R-help@stat.math.ethz.ch. Sent: Tuesday, May 23, 2006 3:40 PM. Subject: [R] conditional replacement [question snipped]
[R] transpose dataset to PC-ORD?
Hello: I need to take a species-sample matrix and transpose it to the format used by PC-ORD for analysis. Unfortunately, the number of species is very large (>5000), so this operation cannot be performed simply in an application like Excel, which has a 255-column limit. So I wrote relatively simple code in R that I hoped would do this (appended below). But there are glitches. The format needed for PC-ORD (where NA shows an empty cell):

NA,3,sites,NA
NA,3,species,NA
NA,Q,Q,Q
NA,sp1,sp2,sp3
site1,1,0,0
site2,0,1,2
site3,0,3,0

The second cell in the first row gives the number of samples (rows), the second cell in the second row gives the number of species (columns), the third row gives the variable type (Q = quantitative), and the fourth row holds the column headers (species names). So one can create a transposable matrix in a spreadsheet where the 5000+ species are the rows:

NA,NA,NA,NA,site1,site2,site3
3,3,Q,sp1,1,0,0
sites,species,Q,sp2,0,1,3
NA,NA,Q,sp3,0,2,0

It is important that the data file written out is totally clean and ready to go for PC-ORD, because I cannot open and edit it in a spreadsheet. The code performs the transpose operation and writes the file, but the former row IDs become the first row in the new file (NA,1,2,3), and the 4 leading cells are X, X.1, X.2, X.3. I'd like to delete the first row and delete the first 4 values of column 1, without deleting the column.

NA,1,2,3
X,3,islands,NA
X.1,3,species,NA
X.2,Q,Q,Q
X.3,sp1,sp2,sp3
site1,1,0,0
site2,0,1,2
site3,0,3,0

I have tried various tricks that I will not list/belabor here (various col.names, row.names, header, Extract, etc. commands). Any further hints on code that will either stop R from adding these, or strip them at the end?
(PS: yes, I could learn how to do my multivariate analyses in R and skip PC-ORD, but I am time-limited on this one, and it seems this code could be very useful in numerous ways.) Many thanks for the help, Dan Gruner (Windows XP, R version 2.2)

## transpose datasets to convert to PC-ORD format
data <- read.csv("data.csv", header = TRUE, as.is = TRUE,
                 strip.white = TRUE, na.strings = "NA")
data <- as.matrix(data)
data.trans <- t(data)
write.csv(data.trans, file = "datatransp.csv", quote = FALSE, na = "")

*** Daniel S. Gruner, Postdoctoral Scholar, Bodega Marine Lab, University of California -- Davis, PO Box 247, 2099 Westside Rd, Bodega Bay, CA 94923-0247 (o) 707.875.2022 (f) 707.875.2009 (m) 707.338.5722 email: dsgruner_at_ucdavis.edu http://www.bml.ucdavis.edu/facresearch/gruner.html http://www.hawaii.edu/ant/
[R] how to multiply a constant to a matrix?
This is very strange: I want to compute the following in R:

g = B/D * solve(A)

where B and D are quadratic forms, so they are just scalar numbers, e.g. B = t(a) %*% F %*% a. I want to multiply B/D by A^(-1), but R just does not allow me to do that and keeps complaining about non-conformable arrays, etc. I tried the following two tricks and they worked:

as.numeric(B/D) * solve(A)
diag(as.numeric(B/D), 5, 5) %*% solve(A)

But if R cannot intelligently do scalar-matrix multiplication, it is really problematic. In complicated matrix algebra you have to keep track of what is a scalar, and scalars obtained from quadratic forms cannot be used directly to multiply another matrix. It is going to be a huge mess... Any thoughts?
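The underlying issue is that a quadratic form computed with %*% is a 1x1 matrix, not a scalar, so the conformability check kicks in; drop() strips the dim attribute. A minimal sketch (variable names made up for illustration; Fm is used instead of F to avoid masking the FALSE alias):

```r
a <- c(1, 2)
Fm <- diag(2)
A <- diag(2)
B <- t(a) %*% Fm %*% a       # quadratic form: a 1x1 matrix, dim(B) is c(1, 1)
D <- 1
g <- drop(B / D) * solve(A)  # drop() turns the 1x1 matrix into a plain scalar
# as.numeric(B / D) * solve(A) is equivalent
```

Here B works out to 5, so g is 5 times the 2x2 identity; without drop() or as.numeric(), `(B/D) %*% solve(A)` fails because a 1x1 matrix does not conform with a 2x2 one.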
Re: [R] transpose dataset to PC-ORD?
I do not know exactly what you are looking for, but it seems that you are writing the column names (which become row names) when transposing the data. To fix this, try using

write.table(..., sep = ",", row.names = FALSE)

Jean. Daniel Gruner wrote: [original message quoted above; snipped]
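To see the effect of controlling row and column names on a toy version of the site-by-species matrix (data and file name made up here): suppressing col.names keeps write.table from emitting the NA,1,2,3 header line, while row.names = TRUE keeps the species labels as the first field of each row:

```r
m <- matrix(c(1, 0, 0,
              0, 1, 2,
              0, 3, 0), nrow = 3, byrow = TRUE,
            dimnames = list(paste0("site", 1:3), paste0("sp", 1:3)))
f <- tempfile(fileext = ".csv")          # stand-in for "datatransp.csv"
write.table(t(m), file = f, sep = ",", quote = FALSE, na = "",
            row.names = TRUE, col.names = FALSE)
readLines(f)   # "sp1,1,0,0" "sp2,0,1,3" "sp3,0,2,0" -- no extra header row
```

The remaining PC-ORD header rows would then have to be written separately (e.g. with cat() before appending the data), since they are not part of the matrix itself.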
Re: [R] multiple plots with par mfg
Hi, another possibility might be to use two devices and dev.set to go from one to the other:

x11() # the first device (may be windows() or quartz() depending on your OS)
plot(1, 1, col = "blue")    # blue plot
x11() # the second
plot(1.2, 1.2, col = "red") # red plot
points(1.1, 1.1)            # appears to bottom left of the red point
dev.set(dev.prev())         # switch plots
points(1.1, 1.1)

On 23.05.2006 17:54, Yan Wong wrote: On 23 May 2006, at 15:57, Greg Snow wrote: "The best thing to do is to create the first plot, add everything to the first plot that you need to, then go on to the 2nd plot, etc." Yes, I realise that. The problem is that the data are being simulated on the fly, and I wish to display multiple plots which are updated as the simulation progresses. So I do need to return to each plot on every generation of the simulation. "If you really need to go back to the first plot to add things after plotting the 2nd plot then here are a couple of ideas: look at the examples for the cnvrt.coords function in the TeachingDemos package (my quick test showed they work with layout as well as par(mfrow=...)). The other option is, when you use par(mfg) to go back to a previous plot, you also need to reset the usr coordinates." Aha. I didn't realise that the usr coordinates could be stored and reset using par. I think that's exactly what I need. Thank you very much. Yan -- visit the R Graph Gallery: http://addictedtor.free.fr/graphiques mixmod 1.7 is released: http://www-math.univ-fcomte.fr/mixmod/index.php | Romain FRANCOIS - http://francoisromain.free.fr | Doctorant INRIA Futurs / EDF |
Re: [R] transpose dataset to PC-ORD?
Daniel, I can help somewhat, I think. PC-ORD also allows data input in what it calls "database" format, where each row is: sample, taxon, abundance. There are as many rows per sample as there are non-zero species, and only three columns. To get your taxon data.frame (currently samples as rows, species as columns, called "data" in your example) into that format, try

dematrify(data, filename = 'whatever.csv')

with the function pasted below (watch out for email-altered line breaks). That will create a CSV file you can import into PC-ORD. Just to encourage you a little, you really should try the ecology packages in R. See packages vegan, ade-4, and labdsv, for example, and take a look at http://ecology.msu.montana.edu/labdsv/R Dave R. *

dematrify <- function(df, filename = NULL, sep = ",") {
  tmp <- which(df > 0, arr.ind = TRUE)
  stack <- NULL
  samples <- row.names(tmp)
  taxon <- names(df)[tmp[, 2]]
  abund <- rep(NA, nrow(tmp))
  for (i in 1:nrow(tmp)) {
    abund[i] <- df[samples[i], taxon[i]]
    stack <- rbind(stack,
      paste(samples[i], sep, taxon[i], sep, abund[i], "\n", sep = ""))
  }
  if (is.null(filename)) {
    tmp2 <- cbind(samples, taxon, abund)
    tmp2 <- data.frame(tmp2[order(tmp2[, 1]), ])
    return(tmp2)
  } else {
    stack <- sort(stack)
    sink(file = filename)
    cat(stack)
    sink()
  }
}

Daniel Gruner wrote: [original message quoted above; snipped]
-- David W. Roberts, Professor and Head, Department of Ecology, Montana State University, Bozeman, MT 59717-3460; office 406-994-4548; FAX 406-994-3190; email [EMAIL PROTECTED]
Re: [R] shapes in rgl
Thanks so much, Greg. I was thinking along similar lines but just couldn't see how to do it. Great trick, just what I needed. They didn't have to be solid; in fact these are potentially more pine-like. Now I wonder what other shapes can be made this way. Duncan. Greg Snow wrote: Try this function (and modify it to your heart's content): [rgl.cones function snipped; see Greg's message above] -- Dr Duncan Golicher, Ecologia y Sistematica Terrestre, Conservación de la Biodiversidad, El Colegio de la Frontera Sur, San Cristobal de Las Casas, Chiapas, Mexico
Re: [R] Can lmer() fit a multilevel model embedded in a regression?
I've thought about this a bit and am just short of simulating data, for which I do not have time. But if there are some available data I would be happy to experiment. Based on my understanding of the model and data structure, I do think it is possible to estimate using lmer, but I think it may push some limits, especially as the structure of the random effects is very large, with many covariances. This can be controlled by assuming independence of the group-level errors, making the model more parsimonious and simpler for lmer to estimate (see the Bates article on lmer in R News). There is a large number of variables, so using as.formula would be wise, but for illustration here is what I think the lmer syntax would look like:

fm1 <- lmer(outcome ~ food_1*folic_1 + food_2*folic_2 + ... + food_82*folic_82 +
              sex + age + (food_1 + food_2 + ... + food_82 | id),
            data, family = binomial(link = 'logit'),
            method = "Laplace", control = list(usePQL = FALSE))

One can assess the reasonableness of the model using the MCMCsamp() function. This returns an object of class mcmc, so all diagnostics can be performed using the various functions in the coda package. I might suggest experimenting with this code on a much smaller set of columns in the X matrix for foods. I must admit that I think of the model notation slightly differently than written in this exchange. My inclination is to think of the model as a linear model with a covariance structure that accounts for correlations in the data by incorporating random effects. HTH, Harold -----Original Message----- From: Andrew Gelman. Sent: Mon 5/22/2006 11:12 AM. To: Doran, Harold. Cc: r-help@stat.math.ethz.ch. Subject: Re: [R] Can lmer() fit a multilevel model embedded in a regression? Harold, I think we use slightly different notation (I like to use variance parameters rather than covariance matrices).
Let me try to write it in model form:

Data points y_i, i = 1, ..., 800.
800 x 84 matrix of predictors, X: for columns j = 1, ..., 82, X_{i,j} is the amount of food j consumed by person i; X_{i,83} is an indicator (1 if male, 0 if female); X_{i,84} is the age of person i.
Data-level model: Pr(y_i = 1) = inverse.logit(X_i * beta), for i = 1, ..., 800, with independent outcomes. beta is a (column) vector of length 84.
Group-level model: for j = 1, ..., 82: beta_j ~ Normal(gamma_0 + gamma_1 * u_j, sigma^2_beta). u is a vector of length 82, where u_j is the folate concentration in food j; gamma_0 and gamma_1 are scalar coefficients (for the group-level model), and sigma_beta is the sd of the group-level errors.

It would be hopeless to estimate all the betas using maximum likelihood: that's 800 data points and 84 predictors; the results will just be too noisy. But it should be OK using the 2-level model above. The question is: can I fit it in lmer()? Thanks again. Andrew

Doran, Harold wrote: So, in hierarchical notation, does the model look like this (for the linear predictor)?

DV = constant + food_1(B_1) + food_2(B_2) + ... + food_82(B_82) + sex(B_83) + age(B_84)
food_1 = gamma_00 + gamma_01(folic) + r_01
food_2 = gamma_10 + gamma_11(folic) + r_02
...
food_82 = gamma_20 + gamma_21(folic) + r_82

where r_qq ~ N(0, Psi) and Psi is an 82-dimensional covariance matrix. I usually need to see this in model form as it helps me translate it into lmer syntax, if it can be estimated. From what I see, this would be estimating 82(82+1)/2 = 3403 parameters in the covariance matrix. What I'm stuck on is that below you say it would be hopeless to estimate the 82 predictors using ML. But if I understand the model correctly, the multilevel regression still resolves the predictors (fixed effects) using ML once estimates of the variances are obtained. So I feel I might still be missing something.
-Original Message- From: Andrew Gelman [mailto:[EMAIL PROTECTED] Sent: Sun 5/21/2006 7:35 PM To: Doran, Harold Cc: r-help@stat.math.ethz.ch; [EMAIL PROTECTED] Subject:Re: [R] Can lmer() fit a multilevel model embedded in a regression? Harold, I get confused by the terms fixed and random. Our first-level model (in the simplified version we're discussing here) has 800 data points (the persons in the study) and 84 predictors: sex, age, and 82 coefficients for foods. The second-level model has 82 data points (the foods) and two predictors: a constant term and folic acid concentration. It would be hopeless to estimate the 82 food coefficients via maximum likelihood, so the idea is to do a multilevel model, with a regression of these coefficients on the constant term and folic acid. The group-level model has a residual variance. If the group-level residual variance is 0, it's equivalent to ignoring food, and just using total folic acid as an individual predictor. If the group-level residual variance is infinity, it's
[R] Survey proportions... Can I use population as denominator?
Just giving the survey package a spin... I'm accustomed to Stata, and it seems very similar in many respects. One thing is throwing me, however. I've gotten my data in and specified the design. It looks like the weighting is right (based on published population estimates from these data), but now I'd like to check my marginal means for proportions against those that have been published. I'd think that svyratio would do the trick, but one needs to specify both the numerator and the denominator... I'm looking for a simple ratio of males to females, and this is eluding me. I can't just use a 1 as the denominator, and I want the whole population described as two values that add to one. The help screens have not been fruitful so far. I'm suffering from interference, I'm sure, where knowledge of another package is getting in the way of seeing what I need to do. Anyone able to nudge me in the right direction? Much obliged... -- David L. Van Brunt, Ph.D. mailto:[EMAIL PROTECTED]
Re: [R] shapes in rgl
On 5/23/2006 5:00 PM, Duncan Golicher wrote: [thanks, and the question of what other shapes can be made this way, snipped] Take a look at the qmesh3d man page, and demo(shapes3d), for more ideas. The nice thing about the qmesh functions is that you can define a shape once, then transform it to display it in different locations, or at different sizes, etc. Duncan Murdoch
[R] exporting long character vectors to dbf
Hi - I need to export data to OpenOffice Base, where one of the elements is a long character vector (>255 characters). write.dbf exports it as varchar, truncating the data. Any idea how to do this? Thanks, -eduardo
Re: [R] multiple plots with par mfg
saveSubplot <- function() {
  if (!exists("subplotPars", mode = "list"))
    subplotPars <<- list()
  p <- par(no.readonly = TRUE)
  mfg <- p$mfg
  key <- mfg[1]*(mfg[3]-1) + mfg[2]
  subplotPars[[key]] <<- p
  invisible(key)
}

restoreSubplot <- function(mfg) {
  opar <- par()
  if (length(mfg) == 2)
    mfg <- c(mfg, par("mfg")[3:4])
  key <- mfg[1]*(mfg[3]-1) + mfg[2]
  p <- subplotPars[[key]]
  # Move 'mfg' last
  mfg <- p$mfg
  p$mfg <- NULL
  p$mfg <- mfg
  par(p)
  invisible(opar)
}

par(mfrow = c(2, 2))
par(lwd = 2, pch = 19)
plot(rnorm(10), rnorm(10))
saveSubplot()
par(lwd = 1, pch = 0)
hist(rgamma(1000, 3))
saveSubplot()
restoreSubplot(c(1, 1))
points(0, 0, col = "red")

/Henrik

On 5/23/06, Romain Francois wrote: [dev.set suggestion and earlier thread snipped; see the preceding message]
Re: [R] exporting long character vectors to dbf
I assume this is (or was) a specification issue. I think write.dbf uses the shapefile library (C, not an R library), so it applies to the use of shapefiles and just happens to have been included in the foreign package because it has a generic usefulness. (Is that a word?)

Since I very rarely care about the elegance of my solutions, just that they work, I would try saving the file in another format that you can get OpenOffice to read, and let it do the conversion rather than trying to get R to do it. I'm sure OpenOffice can deal with straightforward text files if that's a last resort.

Tom

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Eduardo Leoni
Sent: Wednesday, 24 May 2006 7:13 AM
To: r-help@stat.math.ethz.ch
Subject: [R] exporting long character vectors to dbf

Hi - I need to export data to OpenOffice Base, where one of the elements is a long character vector (255 characters). write.dbf exports it as varchar, truncating the data. Any idea how to do this?

thanks,

-eduardo

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] dendrogram plotting problem
Dear List

RGui Version: 2.3.0
User: 1 month

I am having the dendrogram plotting problem. The code I tried:

library(cluster)
DD <- DataSetS01022          # 575 x 2 matrix
VC <- hclust(dist(DD), "ave")

Warning message:
NAs introduced by coercion

(What does it mean? Is that the problem?)

plot(VC, hang=-2)

Output: http://roughjade.blogspot.com

Can anyone guide me?

Thanks in advance,
JJ

--
Lecturer J. Joshua Thomas
KDU College Penang Campus
Research Student, University Sains Malaysia

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
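The "NAs introduced by coercion" warning means dist() was handed non-numeric data (e.g. a character or factor column) and coerced it, producing NAs, which then break the clustering. A sketch of the usual check, with a made-up stand-in for the poster's DataSetS01022 object (the mixed-type matrix below is hypothetical, constructed only to reproduce the symptom):

```r
library(cluster)   # as loaded in the original post; dist/hclust are in stats

# Stand-in for the poster's 575 x 2 object: one column is accidentally character
DD <- cbind(x = rnorm(20), y = c("1.5", rnorm(19)))

mode(DD)                        # "character" -- dist() would coerce this and warn
DD <- apply(DD, 2, as.numeric)  # convert explicitly instead
stopifnot(all(is.finite(DD)))   # any NA surviving here is the real culprit

VC <- hclust(dist(DD), "ave")
plot(VC, hang = -2)
```

If the stopifnot fails on real data, inspect the offending rows (e.g. which(!is.finite(DD), arr.ind=TRUE)) before clustering.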
Re: [R] exporting long character vectors to dbf
On Wed, 24 May 2006, Mulholland, Tom wrote:

I assume this is (or was) a specification issue. I think write.dbf uses the shapefile library (C, not an R library), so it applies to the use of shapefiles and just happens to have been included in the foreign package because it has a generic usefulness. (Is that a word?)

Actually, the width of the dbf field comes from these lines in write.dbf:

else if (is.character(x)) {
    mf <- max(nchar(x[!is.na(x)]))
    precision[i] <- min(max(nlen, mf), 254)
    scale[i] <- 0

so that's the limit (254, not 255). This is stated as a limitation of the dbf format at http://www.clicketyclick.dk/databases/xbase/format/data_types.html so I don't think you can do what you want with .dbf.

Since I very rarely care about the elegance of my solutions, just that they work, I would try saving the file in another format that you can get OpenOffice to read, and let it do the conversion rather than trying to get R to do it. I'm sure OpenOffice can deal with straightforward text files if that's a last resort.

Indeed. Using an ODBC driver (and RODBC) to write to the database might be a good option.

Tom

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Eduardo Leoni
Sent: Wednesday, 24 May 2006 7:13 AM
To: r-help@stat.math.ethz.ch
Subject: [R] exporting long character vectors to dbf

Hi - I need to export data to OpenOffice Base, where one of the elements is a long character vector (255 characters). write.dbf exports it as varchar, truncating the data. Any idea how to do this?

thanks,

-eduardo

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel: +44 1865 272861 (self)
1 South Parks Road,                    +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax: +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
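The 254-character cap and the plain-text workaround can both be seen in a few lines. This is a sketch with made-up data (the `dat` data frame is mine, not from the thread): it reproduces the width computation quoted above from write.dbf, then round-trips the same data through CSV, which OpenOffice reads with no length cap.

```r
# Hypothetical data frame with an over-long character field
long_note <- paste(rep("a", 300), collapse = "")
dat <- data.frame(id = 1:2,
                  note = c(long_note, "short"),
                  stringsAsFactors = FALSE)

# Field width write.dbf would assign, per the lines quoted above
# (0 stands in for nlen, the minimum width):
mf <- max(nchar(dat$note[!is.na(dat$note)]))
width <- min(max(0, mf), 254)
width                        # 254 -- the 300-character value would be truncated

# Plain-text route instead: no such cap
f <- tempfile(fileext = ".csv")
write.csv(dat, f, row.names = FALSE)
nchar(read.csv(f, stringsAsFactors = FALSE)$note[1])   # 300, nothing lost
```

Ripley's other suggestion, writing through RODBC straight into the database, avoids the intermediate file entirely but needs a configured DSN.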
Re: [R] glmmADMB and the GPL -- formerly-- How to buy R.
G'day Dave,

I have read your e-mail now several times and can't make up my mind whether you want a genuine discussion or are just trying to do some flame-baiting. But here are my 2 cents. And, in case you don't read through the whole reply, let me make it clear to you that this is my personal opinion, that probably few people (if any) on this list might agree with me, and that I definitely do not speak for the list.

DF == dave fournier [EMAIL PROTECTED] writes:

DF Some of you have been following the discussion of the GPL and
DF its inclusion in the glmmADMB package we created for R users.

True in my case.

DF I would like to provide a bit of background and include an
DF email we received from Prof. Ripley [...]

It is usually considered bad form to forward privately sent e-mails to a public forum. Some people even go so far as to argue that e-mails, like other communications, are copyright-protected material, and that by posting private e-mails to public forums without the permission of the sender, the poster is breaching copyright law. So I hope you asked Brian for his permission to post his private e-mail, because I don't remember seeing it posted to any of the mailing lists related to R. In any case, if you wish to engage positively with a community, I would advise you to learn the rules by which that community plays.

DF so that everyone can be aware of how some might use the GPL to
DF try to force access to proprietary software.

Well, that is only possible if the software was released under the GPL. So what is the problem?

DF I think this is interesting because many have voiced the
DF opinion about the benign nature of the GPL and that commercial
DF enterprises who avoid it do so mainly out of ignorance.

I must have missed these opinions being expressed in this particular thread, but I have a vague idea what you are talking about.
Though I have the impression that you are a bit confused, as there are two issues:

1) Commercial enterprises who release their software under the GPL. Since these enterprises released their software under the GPL, they should not be ignorant about it and what it implies. If they are, they should sack their lawyers and get better advice.

2) Commercial enterprises who say that they don't want to port their product to Linux (or other GPL-based operating systems) with the argument that this would force them to release the source code of their software. To those enterprises it is usually pointed out that they are misinterpreting (or, if you wish, are ignorant of) the GPL, and that by providing their software on a GPL'd platform they are not forced to supply source code; they can release their software under other licences if they wish. (And it seems that several commercial enterprises got this message, as there is quite a bit of commercial software available under Linux these days: S-PLUS, Matlab, Mathematica, Maple, ...)

The case of glmmADMB seems to fall under the first category: it was released under the GPL, and you should have been aware of what this means because you decided to release it under the GPL.

DF I have noticed two things: Users of the R-help list appear to
DF rely largely on the advice of a rather small number of
DF statistical experts.

How did you notice this? A lot of readers of mailing lists choose to reply in private e-mails instead of replies to the list. My default is reply-to-sender and not reply-to-all; other people's mail tools have other defaults. The R mailing lists are (as far as I know) configured so that reply-to-sender goes only to the sender of the e-mail, not the whole list. Thus, you should be aware that looking at what gets posted on R-help will give you a biased sample.

DF Second, the R users regard R as being more cutting edge and up
DF to date than lists devoted to commercial statistical packages
DF like SAS.
Sorry, I can't parse this sentence. Do you mean that R users regard commercial statistical packages like SAS as being less cutting edge than R? Or that people on lists devoted to commercial statistical packages like SAS have a different opinion about R than R users? Or that R users regard R as being more cutting edge than some other mailing lists? Was there any purpose in this statement other than flame-baiting?

DF For these reasons I was surprised to see the following post on
DF the web in reply to a question on negative binomial mixed
DF models.

DF https://stat.ethz.ch/pipermail/r-help/2005-February/066146.html

DF I thought that this was bad advice as certainly our ADMB-RE
DF software could handle this problem easily.

Fair enough. More flame-baiting, or would you kindly let us know whether ADMB-RE was available in February 2005? Was glmmADMB (readily) available at that time? I note that the quoted e-mail
Re: [R] lattice package - question on plotting lines
On 5/23/06, Tim Smith [EMAIL PROTECTED] wrote:

Thanks Tony. That solved a huge part of what I was trying to do. I was going through the trellis documentation (Bell Labs) too. Is there any other documentation/manual that you would recommend?

Taking a look at the Changes file, e.g. via

file.show(system.file("Changes", package = "lattice"))

may (or may not) be helpful, to the extent that (1) it should list features unique to lattice (and hence not documented in the Bell Labs docs), and (2) it may suggest reasonably specific places in the lattice help pages where details of such features might be found. Your mileage may vary.

-Deepayan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html