[R] Depth vs Temp graph for different transects
Hi All,

Have the following code. The graph plots the 15 transects well for me, however the legend shows a total of 22 transects. The original data has 22 transects numbered from 1 to 22; the new data set has only 15. How can I get the legend to show only the transects plotted?

# Create line chart
TAll <- read.csv("TAll Data.csv")
# convert factor to numeric for convenience
TAll$Tran <- as.numeric(TAll$Trans)
nTrans <- max(TAll$Trans)
# get the range for the x and y axis
xrange <- range(TAll$Temp)
yrange <- range(TAll$Depth)
# set up the plot
plot(xrange, yrange, ylim = rev(yrange), type = "n",
     xlab = "Temp (deg C)", ylab = "Depth (m)")
colors <- rainbow(nTrans)
linetype <- c(1:nTrans)
plotchar <- seq(1, 1 + nTrans, 1)
# add lines
for (i in 1:nTrans) {
  tree <- subset(TAll, Trans == i)
  lines(tree$Temp, tree$Depth, type = "b", lwd = 1.5,
        lty = linetype[i], col = colors[i], pch = plotchar[i])
}
# add a legend
legend(xrange[-2], yrange[-2], 1:nTrans, cex = 0.8,
       col = colors, pch = plotchar, lty = linetype, title = "Transect")

Thanks for the help,
Tinus

--
M.J. Sonnekus
PhD Candidate (The Phytoplankton of the southern Agulhas Current Large Marine Ecosystem (ACLME))
Department of Botany, South Campus
Nelson Mandela Metropolitan University
PO Box 77000, Port Elizabeth, South Africa 6031
Cell: 082 080 9638
E-mail: tsonne...@gmail.com

__
R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
Re: [R] Issues with fa() function in psych
Hi William, Recently I noticed that if the requested rotation is not available, the principal function also defaults to rotate="none" without any warning. You had earlier fixed the same issue with fa in version 1.4.4; kindly include the same fix for principal as well. Also, as I had pointed out earlier in my trailing mails, is there any update on the following suggestion: the fa() function doesn't account for 'Heywood cases' (communality greater than 1) and never throws any error related to that, which other software packages do. This is a serious and common issue in iterative factor analysis and hence should be accounted for. Awaiting your reply, Thanks and Regards, Sagnik

On Fri, May 16, 2014 at 10:54 AM, sagnik chakravarty sagnik.st...@gmail.com wrote: Hi William, Thanks for the update. I see this package has so many capabilities! I will suggest more for its development if anything else comes to my mind. Regards, Sagnik

On Thu, May 15, 2014 at 6:34 AM, William Revelle li...@revelle.net wrote: Sagnik, I did some more checking and in fact you can do equamax through GPA rotation. (Gunter Nickel pointed this out in a post to R-help.) I will implement this in version 1.4.6 (1.4.5 is now working its way through the various CRAN mirrors). You might like 1.4.5 in that I have added various ways of displaying confidence intervals (cats-eye plots) as well as upper and lower confidence limits for correlations (cor.plot.upperLowerCi). Bill

On Apr 10, 2014, at 1:22 AM, sagnik chakravarty sagnik.st...@gmail.com wrote: Thanks a lot Bill and Revelle for your helpful response. It would be great if I could know when we can expect the release of the edited version 1.4.4. Sagnik

On Wed, Apr 9, 2014 at 8:05 PM, William Revelle li...@revelle.net wrote: Sagnik raises the question as to why the psych package does not offer the 'equamax' rotation. It is because all rotations are handled through the GPArotation package, which does not offer equamax.
Sagnik also points out that if the requested rotation is not available, fa defaults to rotate="none" without any warning. I have fixed that for the next release (1.4.4). (1.4.4 will also fix a bug in corr.test introduced in 1.4.3.) The question about why printing just the loadings matrix leaves blank cells? That is because the loadings matrix is of class "loadings", which the default print function prints with cut = .3. Using the example from Sagnik, print(efa_pa$loadings, cut = 0) will match the output of efa_pa. The fm="pa" option runs conventional principal axis factor analysis (a la SPSS). As documented, this iterates max.iter times. Not all factor programs that do principal axes do iterative solutions. The example from the SAS manual (Chapter 26) is such a case. To achieve that solution, it is necessary to specify max.iter = 1. Comparing that solution to an iterated one (the default) shows that iterations improve the solution. In addition, fm="minres" or fm="mle" produces even better solutions for this example. The "com" column is factor complexity, using the index developed by Hofmann (1978). It is a row-wise measure of item complexity. I have added more documentation on this in 1.4.4. Bill

On Apr 8, 2014, at 2:28 AM, Pascal Oettli kri...@ymail.com wrote: Hello, And what about submitting your suggestions directly to the package author/maintainer? And please don't post in HTML. Regards, Pascal

On Tue, Apr 8, 2014 at 3:13 PM, sagnik chakravarty sagnik.st...@gmail.com wrote: Hi Team, I was using your psych package for factor analysis and was also comparing the results with SAS results. I have some suggestions and/or confusions regarding the fa() function in the package: - The fa() function *doesn't account for Heywood cases* (communality greater than 1) and never throws any error related to that, which other software packages do. This is a serious and common issue in iterative factor analysis and hence should be accounted for.
- The fa() function doesn't provide equamax rotation in its rotation list, and still, if you specify *rotation="equamax"*, it will run without throwing any error, even mentioning in the result that equamax has been applied. But I have thoroughly compared results from the *rotation="none"* and *rotation="equamax"* options and they are exactly the same. *That means fa() is not doing the rotation at all and yet reporting that it is!* I have even tried the *rotation="crap"* option just to check, and surprisingly it ran (without any error) with the result showing: *Factor Analysis using method = gls* *Call: fa(r = cor_mat, nfactors = 4, n.obs = 69576, rotate = "crap", fm = gls)* I hope you understand the severity of this bug and hence request you
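The Heywood cases raised above can be screened for by hand from a fitted fa object; a minimal sketch, assuming the psych package is installed (the Harman74.cor correlation matrix ships with base R's datasets, so the example is self-contained):

```r
library(psych)
# Fit an iterated principal-axis solution on a classic correlation matrix
efa_pa <- fa(r = Harman74.cor$cov, nfactors = 4, fm = "pa")
# A Heywood case is a communality at or above 1
heywood <- efa_pa$communality >= 1
if (any(heywood))
  warning("Heywood case(s): ", paste(names(which(heywood)), collapse = ", "))
```

The same check works after principal(), which also returns a communality vector.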
Re: [R] Building R for better performance
Could you tell us whether the same/similar performance benefits should be expected when the GNU compiler suite and MKL are teamed up, and how to configure such a compilation? Many thanks.

On 04/03/14 21:44, Anspach, Jonathan P wrote: Greetings, I'm a software engineer with Intel. Recently I've been investigating R performance on Intel Xeon and Xeon Phi processors and RH Linux. I've also compared the performance of R built with the Intel compilers and Intel Math Kernel Library (MKL) to a default build (no config options) that uses the GNU compilers. To my dismay, I've found that the GNU build always runs on a single CPU core, even during matrix operations. The Intel build runs matrix operations on multiple cores, so it is much faster on those operations. Running benchmark-2.5 on a 24-core Xeon system, the Intel build is 13x faster than the GNU build (21 seconds vs 275 seconds). Unfortunately, this advantage is not documented anywhere that I can see. Building with the Intel tools is very easy. Assuming the tools are installed in /opt/intel/composerxe, the process is simply (in a bash shell):

$ . /opt/intel/composerxe/bin/compilervars.sh intel64
$ ./configure --with-blas="-L/opt/intel/composerxe/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm" --with-lapack CC=icc CFLAGS=-O2 CXX=icpc CXXFLAGS=-O2 F77=ifort FFLAGS=-O2 FC=ifort FCFLAGS=-O2
$ make
$ make check

My questions are: 1) Do most system admins and/or R installers know about this performance difference, and use the Intel tools to build R? 2) Can we add information on the advantage of building with the Intel tools, and how to do it, to the installation instructions and FAQ? I can post my data if anyone is interested. Thanks, Jonathan Anspach Sr. Software Engineer Intel Corp.
jonathan.p.ansp...@intel.com 713-751-9460
Re: [R] r convert current date format from y-m-d to m/d/y
Hi, Use ?format:

format(d, "%m/%d/%Y")
#[1] "09/01/2014"

A.K.

On Monday, September 1, 2014 5:26 AM, Velappan Periasamy veepsi...@gmail.com wrote:

d = Sys.Date()
# 2014-09-01

How to convert this 2014-09-01 to 09/01/2014 format? (i.e., y-m-d to m/d/y format) thanks veepsirtt
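For reference, format() and as.Date() are inverses here; a small self-contained sketch:

```r
d <- as.Date("2014-09-01")
format(d, "%m/%d/%Y")                       # "09/01/2014"
as.Date("09/01/2014", format = "%m/%d/%Y")  # back to a Date: "2014-09-01"
```

Note that format() returns a character vector; keep the Date object around for any further date arithmetic.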
Re: [R] Unexpected behavior when giving a value to a new variable based on the value of another variable
Thank you John, Jim, Jeff and both Davids for your answers. After trying different combinations of values for the variable samplem, it looks like if age is greater than 65, R applies the correct code 1 whatever the value of samplem, but if age is less than 65, it just copies the values of samplem to sample. I do not understand why it does so. In any case, Jim's syntax works very well, although I do not understand why either. Answering Jim: I just wanted a variable that could identify individuals with some characteristics (not only age, as in this example, which has been oversimplified). Best regards, Angel Rodriguez-Laso

-Mensaje original- De: John McKown [mailto:john.archie.mck...@gmail.com] Enviado el: vie 29/08/2014 14:46 Para: Angel Rodriguez CC: r-help Asunto: Re: [R] Unexpected behavior when giving a value to a new variable based on the value of another variable

On Fri, Aug 29, 2014 at 3:53 AM, Angel Rodriguez angel.rodrig...@matiainstituto.net wrote: Dear subscribers, I've found that if there is a variable in the dataframe with a name very similar to a new variable, R does not give the correct values to this latter variable based on the values of a third variable: [snip] Any clue for this behavior? [snip] Thank you very much. Angel Rodriguez-Laso, Research project manager, Matia Instituto Gerontologico

That is unusual, but appears to be documented in the "Character indices" section of ?`[`: Character indices can in some circumstances be partially matched (see pmatch) to the names or dimnames of the object being subsetted (but never for subassignment). Unlike S (Becker et al p. 358), R never uses partial matching when extracting by [, and partial matching is not by default used by [[ (see argument exact). Thus the default behaviour is to use partial matching only when extracting from recursive objects (except environments) by $. Even in that case, warnings can be switched on by options(warnPartialMatchDollar = TRUE).
Neither empty ("") nor NA indices match any names, not even empty nor missing names. If any object has no names or appropriate dimnames, they are taken as all "" and so match nothing.

Note the comment about partial matching in the middle paragraph of the quote above. -- There is nothing more pleasant than traveling and meeting new people! Genghis Khan Maranatha! John McKown
[R] Correlation Matrix with a Covariate
R Help - I'm trying to run a correlation matrix with a covariate of age and will at some point will also want to covary other variables concurrently. I'm using the psych package and have tried other methods such as writing a loop to extract semi-partial correlations, but it does not seem to be working. How can I accomplish this? library(psych) set.cor(y = (66:76), x = c(1:64), z = 65, data = dat5) Error in solve.default(x.matrix, xy.matrix) : Lapack routine dgesv: system is exactly singular: U[54,54] = 0 structure(list(cope8_StriatumMask_7_betas_mean = c(11.47, -0.6002, -11.59, -52.51, 36.63, -36.99, -26.89, 21.68, 3.776, 19.35, -56.44, -11.41, -1.825, 5.327, -2.886, 11.91, 43.99, 42.17, 21.19, -1.14, -9.156, -3.53, -12.79, 33.88, -35.92, 7.613, -61.59, -6.754, 2.672, -25.09, 19.87, -21.09, -37.97, -11.07, -7.276, 21.94, -18.94, -16.83, 19.96, 7.533, -44.57, -23.17, 22.54), cope7_StriatumMask_7_betas_mean = c(-11.47, 0.6002, 11.59, 52.51, -36.63, 36.99, 26.89, -21.68, -3.776, -19.35, 56.44, 11.41, 1.825, -5.327, 2.886, -11.91, -43.99, -42.17, -21.19, 1.14, 9.156, 3.53, 12.79, -33.88, 35.92, -7.613, 61.59, 6.754, -2.672, 25.09, -19.87, 21.09, 37.97, 11.07, 7.276, -21.94, 18.94, 16.83, -19.96, -7.533, 44.57, 23.17, -22.54), cope6_StriatumMask_7_betas_mean = c(-15.11, -20.31, -24.63, -17.44, -33.64, -38.77, -37.67, -27.93, -13.28, -7.351, -9.147, -2.561, -28.92, -26.39, 65.4, -30.5, -6.315, 6.017, -59.84, -12.24, 14.86, -36.17, -14.39, -11.74, -19.17, 41.23, 4.33, 19.48, 12.88, -29.51, 32.7, -35.65, -49.27, -4.2, 16.27, -9.706, -19.26, -22.49, -54.12, -48.03, 82.09, 1.946, -68.66), cope5_StriatumMask_7_betas_mean = c(15.11, 20.31, 24.63, 17.44, 33.64, 38.77, 37.67, 27.93, 13.28, 7.351, 9.147, 2.561, 28.92, 26.39, -65.4, 30.5, 6.315, -6.017, 59.84, 12.24, -14.86, 36.17, 14.39, 11.74, 19.17, -41.23, -4.33, -19.48, -12.88, 29.51, -32.7, 35.65, 49.27, 4.2, -16.27, 9.706, 19.26, 22.49, 54.12, 48.03, -82.09, -1.946, 68.66), cope4_StriatumMask_7_betas_mean = c(14.75, 15.24, 
4.935, 4.835, 10.32, -19.23, -14.4, 39.01, 7.088, 16.77, 10.78, 4.741, 15.2, 6.144, -3.572, 9.995, 42.44, 36.24, 23.44, -0.4025, 10.57, -8.036, 0.4127, -5.74, 7.335, -5.735, -32.63, -3.122, 16.36, -6.741, 9.36, -2.567, -5.515, 22.81, 9.99, -2.034, 10.38, -16.34, 37.27, 12.11, -2.593, 5.341, 11.64), cope3_StriatumMask_7_betas_mean = c(3.8, 8.024, 6.609, 56.04, -17.79, 27.05, 10.92, 19.12, 4.651, -1.325, 66.15, 16.36, 17.79, 2.177, -4.663, 4.534, 7.788, 3.822, 6.74, -3.433, 20.96, 1.451, 16.81, -31.09, 26.59, -8.875, 27.25, -1.37, 8.634, 22.93, -10.99, 18.74, 34.3, 34.26, 6.989, -20.35, 38.61, 14.78, 14.24, 8.544, 42.79, 27.42, -12.53), cope2_StriatumMask_7_betas_mean = c(-22.45, -18.01, -28.1, -10.91, -33.2, -24.75, -9.261, 4.794, -11.8, -4.1, -20.08, 17.07, -22.33, -12.02, 21.62, -10.31, 10, 3.088, -45.09, -32.44, 5.73, -21.31, -12.11, -1.46, -21.26, 36.54, 7.501, 11.95, 11.9, -6.681, 22.08, -24.31, -22.79, -28.37, 23.36, -1.101, -3.984, -6.315, -28.91, -35.64, 43.7, 8.317, -67.12), cope1_StriatumMask_7_betas_mean = c(-3.331, -0.6516, -3.038, 23.44, 2.235, 20.09, 16.99, 18.96, 1.348, 4.113, -14.4, -2.449, 4.893, 12, -18.31, 15.32, 16.84, 1.516, 15.34, -1.8, -11.98, 9.774, 2.055, 8.772, -50.38, -5.62, 2.985, 4.993, 7.956, 19.07, -10.01, 12.46, 13.24, -15.76, 9.281, 8.894, 12.9, 14.74, 23.37, 10.72, -16.88, 6.961, -8.737), cope8_StriatumMask_6_betas_mean = c(24.61, 16.14, -9.034, -59.09, 23.45, -6.531, -15.25, 44.85, -0.97, 19.71, -34.17, -13.74, 16.35, 22.46, -9.196, -1.073, 48.33, 52.74, 29.12, 24.44, -4.246, 3.402, -26.58, 35.38, -32.03, 10.15, -65.41, -12.14, 3.905, -2.271, 4.286, -1.484, -32.3, 0.3132, -25.83, 6.807, -14.56, -6.214, 16.98, 27.18, -55.46, -38.73, 35.93), cope7_StriatumMask_6_betas_mean = c(-24.61, -16.14, 9.034, 59.09, -23.45, 6.531, 15.25, -44.85, 0.97, -19.71, 34.17, 13.74, -16.35, -22.46, 9.196, 1.073, -48.33, -52.74, -29.12, -24.44, 4.246, -3.402, 26.58, -35.38, 32.03, -10.15, 65.41, 12.14, -3.905, 2.271, -4.286, 1.484, 32.3, -0.3132, 25.83, 
-6.807, 14.56, 6.214, -16.98, -27.18, 55.46, 38.73, -35.93), cope6_StriatumMask_6_betas_mean = c(-11.24, -17.26, -17.59, -27.92, -42.3, -39.53, -49.99, -11.12, -23.51, 4.573, -2.713, 18.23, -13.94, -31.14, 72.81, -23.42, 18.89, 9.695, -30.2, 2.776, 12.57, -15.8, -11.16, -19.92, -34.84, 5.129, -8.743, 1.578, 31.23, -23.09, 24.02, -50.99, -52.38, -15.81, 14.61, -26.57, -23.76, -24.59, -41.38, -42.29, 48.82, -6.491, -57.52), cope5_StriatumMask_6_betas_mean = c(11.24, 17.26, 17.59, 27.92, 42.3, 39.53, 49.99, 11.12, 23.51, -4.573, 2.713, -18.23, 13.94, 31.14, -72.81, 23.42, -18.89, -9.695, 30.2, -2.776, -12.57, 15.8, 11.16, 19.92, 34.84, -5.129, 8.743, -1.578, -31.23, 23.09, -24.02, 50.99, 52.38, 15.81, -14.61, 26.57, 23.76, 24.59, 41.38, 42.29, -48.82, 6.491, 57.52), cope4_StriatumMask_6_betas_mean = c(16.28, 22.74, 1.5, -10.25, 0.712, 0.6925, -13.95, 43.29, -0.8349, 6.348, 13.2, 16.73, 13.11, 20.33, -18.84, 13.71, 47.84, 41.51, 32.87, 7.421, 18.35, -7.158, -9.818, -5.952, 13.16,
[R] SpectrumBackground
Hello there ... Using package Peaks to run the function SNIP on a csv file with 19 spectra. While trying to run:

### doing SNIP for every spectrum
require(Peaks)
for (i in 1:NROW(Q)) {
  Q.t[i,] <- Q[i,] - SpectrumBackground(as.numeric(as.vector(Q[i,])))
  print(i)
}

I got the following error:

Error in .Call("R_SpectrumBackground", as.vector(y), as.integer(iterations), :
  "R_SpectrumBackground" not available for .Call() for package "Peaks"

Any suggestions on how to correct this? Thanks for the help, Edmir, Fiber & Polymer Science, NCSU
[R] simulation data with mixed variables
Dear all members, I am trying to simulate data with mixed ordered categorical and dichotomous variables: 200 observations and 10 variables, 5 ordered categorical and 5 dichotomous, and I want to impose a high correlation between variables. So I must find the correlation between dichotomous variables, the correlation between ordered categorical variables, and the correlation between mixed ordered categorical and dichotomous data. I know all types of correlation except the mixed case. I hope someone can help me solve this problem. Many thanks in advance, Thanoon
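One common approach (not the only one) is a latent-Gaussian thresholding scheme: draw correlated normals, then cut them into categories. A minimal sketch with assumed cut points; the latent correlation (0.6 here) governs the association among the observed variables and would be tuned to taste:

```r
set.seed(1)
n <- 200; p <- 10
Sigma <- matrix(0.6, p, p); diag(Sigma) <- 1   # assumed common latent correlation
Z <- matrix(rnorm(n * p), n, p) %*% chol(Sigma)  # correlated N(0,1) draws
X <- data.frame(
  # first 5: ordered categorical with 4 levels (cut at standard normal quartiles)
  lapply(1:5, function(j) cut(Z[, j], breaks = qnorm(c(0, .25, .5, .75, 1)),
                              labels = FALSE, include.lowest = TRUE)),
  # last 5: dichotomous (split the latent normal at 0, i.e. the median)
  lapply(6:10, function(j) as.integer(Z[, j] > 0))
)
names(X) <- c(paste0("ord", 1:5), paste0("bin", 1:5))
```

The observed (Pearson) correlations will be attenuated relative to the latent 0.6; polychoric/polyserial estimates recover the latent values more closely.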
Re: [R] Correlation Matrix with a Covariate
Thanks for including your data with dput(). I'm not familiar with set correlation, but altogether you are working with 76 variables (columns) and only 46 observations. Since the error message says the system is exactly singular, it is likely that you have too many variables for the number of observations, or one of your columns is a linear combination of (can be predicted exactly from) the others.

-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352

-Original Message- From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On Behalf Of Patzelt, Edward Sent: Monday, September 1, 2014 7:47 AM To: R-help@r-project.org Subject: [R] Correlation Matrix with a Covariate

R Help - I'm trying to run a correlation matrix with a covariate of age, and at some point will also want to covary other variables concurrently. I'm using the psych package and have tried other methods such as writing a loop to extract semi-partial correlations, but it does not seem to be working. How can I accomplish this?
library(psych)
set.cor(y = (66:76), x = c(1:64), z = 65, data = dat5)
Error in solve.default(x.matrix, xy.matrix) :
  Lapack routine dgesv: system is exactly singular: U[54,54] = 0
[quoted dput() data snipped -- identical to the original post above]
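set.cor() aside, a residual-based route to correlations controlling for a covariate (regress each variable on the covariate, then correlate the residuals) sidesteps inverting the full 76-variable matrix. A sketch on toy data, since dat5 itself is not available here; partial_cor_given is a hypothetical helper name:

```r
partial_cor_given <- function(data, xcols, zcol) {
  # residualize each x on the covariate z, then correlate the residuals
  resid_mat <- apply(data[, xcols], 2, function(v) resid(lm(v ~ data[, zcol])))
  cor(resid_mat)
}
set.seed(1)
toy <- data.frame(a = rnorm(40), b = rnorm(40), age = rnorm(40))
toy$a <- toy$a + 0.5 * toy$age   # make both variables depend on age
toy$b <- toy$b + 0.5 * toy$age
partial_cor_given(toy, c("a", "b"), "age")  # age-free correlation of a and b
```

This gives full partial correlations; for semi-partials, residualize only one side before calling cor().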
Re: [R] Depth vs Temp graph for different transects
On Sep 1, 2014, at 3:52 AM, Tinus Sonnekus wrote:

Hi All, Have the following code. The graph works well plotting the 15 transects for me, however the legend shows a total of 22 transects. The original data has 22 transects numbered from 1 to 22. The new data set has only 15. How can I get the legend to show only the transects plotted?

# Create Line Chart
TAll <- read.csv("TAll Data.csv")

You have set up a situation where we can only guess.

# convert factor to numeric for convenience
TAll$Tran <- as.numeric(TAll$Trans)
nTrans <- max(TAll$Trans)
# get the range for the x and y axis
xrange <- range(TAll$Temp)
yrange <- range(TAll$Depth)
# set up the plot
plot(xrange, yrange, ylim = rev(yrange), type = "n",
     xlab = "Temp (deg C)", ylab = "Depth (m)")
colors <- rainbow(nTrans)
linetype <- c(1:nTrans)
plotchar <- seq(1, 1 + nTrans, 1)
# add lines
for (i in 1:nTrans) {
  tree <- subset(TAll, Trans == i)
  lines(tree$Temp, tree$Depth, type = "b", lwd = 1.5,
        lty = linetype[i], col = colors[i], pch = plotchar[i])
}
# add a legend
legend(xrange[-2], yrange[-2], 1:nTrans, cex = 0.8,
       col = colors, pch = plotchar, lty = linetype, title = "Transect")

If nTrans is 22 then you are getting what you asked for. If you subsetted a dataset where TAll$Trans was a factor then it's perfectly possible that the legend would have more items than the subset. Perhaps you should use `length` or `length(unique(.))` rather than `max`.

-- David.
David Winsemius, MD
Alameda, CA, USA
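To make David's suggestion concrete, a sketch that drives both the loop and the legend from the transect numbers actually present (column names Trans/Temp/Depth assumed from the original post; TAll must already be loaded and the empty plot set up as before):

```r
trans_ids <- sort(unique(TAll$Trans))   # e.g. 15 values even if max() is 22
colors   <- rainbow(length(trans_ids))
linetype <- seq_along(trans_ids)
plotchar <- seq_along(trans_ids)
for (k in seq_along(trans_ids)) {
  tree <- subset(TAll, Trans == trans_ids[k])
  lines(tree$Temp, tree$Depth, type = "b", lwd = 1.5,
        lty = linetype[k], col = colors[k], pch = plotchar[k])
}
# legend entries now match exactly the transects that were drawn
legend("topright", legend = trans_ids, cex = 0.8,
       col = colors, pch = plotchar, lty = linetype, title = "Transect")
```

Indexing colors, line types and plot characters by position k (rather than by transect number) keeps them aligned with the legend even when transect numbering has gaps.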
Re: [R] SpectrumBackground
I think you can get yourself going by calling

Peaks:::.First.lib(dirname(find.package("Peaks")), "Peaks")

to get Peaks' DLL loaded; .First.lib is not getting called. You should ask the package's maintainer, maintainer("Peaks"), to fix up the startup procedures. If the help files had examples, or the package had any tests, I think this would have been caught by CRAN. It does have two demos, which do not work for the same reason.

Bill Dunlap
TIBCO Software wdunlap tibco.com

On Mon, Sep 1, 2014 at 7:25 AM, Edmir Silva edmirsi...@live.com wrote: [original message quoted above snipped]
Re: [R] Building R for better performance
On Mon, Sep 01, 2014 at 12:25:01PM +0100, lejeczek wrote: Could you tell us whether the same/similar performance benefits should be expected when the GNU compiler suite and MKL are teamed up, and how to configure such a compilation? Many thanks.

Jonathan, thanks for these very interesting results. I'm curious how the single-core performance of the different compilers compares. Sometimes it's desirable to keep code to a single core, e.g., if you have assigned n jobs to n cores you don't want each job trying to grab more cores. Ross Boylan
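For the GNU-compilers-plus-MKL question, an untested sketch of one way to configure it. The library names follow MKL's GNU-threading interface layer, but the path and layer names are assumptions to verify against Intel's MKL link-line advisor for your MKL version:

```shell
# Sketch only: build R with gcc/gfortran but link BLAS/LAPACK against MKL.
# MKL install path is an assumption -- adjust to your system.
MKL=/opt/intel/mkl/lib/intel64
./configure \
  --with-blas="-L${MKL} -lmkl_gf_lp64 -lmkl_gnu_thread -lmkl_core -lgomp -lpthread -lm" \
  --with-lapack
make && make check
```

The -lmkl_gf_lp64/-lmkl_gnu_thread pair replaces the Intel-compiler layers (-lmkl_intel_lp64/-lmkl_intel_thread) from Jonathan's recipe, and -lgomp replaces -liomp5, since GNU OpenMP threading is used instead of Intel's.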
Re: [R] Unexpected behavior when giving a value to a new variable based on the value of another variable
On 01 Sep 2014, at 13:08 , Angel Rodriguez angel.rodrig...@matiainstituto.net wrote: Thank you John, Jim, Jeff and both Davids for your answers. After trying different combinations of values for the variable samplem, it looks like if age is greater than 65, R applies the correct code 1 whatever the value of samplem, but if age is less than 65, it just copies the values of samplem to sample. I do not understand why it does so.

It's because indexed assignment is really (white lie alert: it's actually worse)

N$sample <- `[<-`(`$`(N, `sample`), index, value)

and since N$sample isn't there from the outset, partial matching kicks in for the `$` bit and makes the right-hand side equivalent to the same thing with `samplem`. The result still gets assigned to N$sample, but the value is the same that N$samplem would get from

N$samplem[N$age >= 65] <- 1

Notice the difference if you do

N$sample <- NA
N$sample[N$age >= 65] <- 1
N
  age samplem sample
1  67      NA      1
2  62       1     NA
3  74       1      1
4  61       1     NA
5  60       1     NA
6  55       1     NA
7  60       1     NA
8  59       1     NA
9  58      NA     NA

-pd

In any case, Jim's syntax works very well, although I do not understand why either. Answering Jim: I just wanted a variable that could identify individuals with some characteristics (not only age, as in this example, which has been oversimplified). Best regards, Angel Rodriguez-Laso

-Mensaje original- De: John McKown [mailto:john.archie.mck...@gmail.com] Enviado el: vie 29/08/2014 14:46 Para: Angel Rodriguez CC: r-help Asunto: Re: [R] Unexpected behavior when giving a value to a new variable based on the value of another variable

On Fri, Aug 29, 2014 at 3:53 AM, Angel Rodriguez angel.rodrig...@matiainstituto.net wrote: Dear subscribers, I've found that if there is a variable in the dataframe with a name very similar to a new variable, R does not give the correct values to this latter variable based on the values of a third variable: [snip] Any clue for this behavior? [snip] Thank you very much.
Angel Rodriguez-Laso, Research project manager, Matia Instituto Gerontologico

[remainder of John McKown's reply, quoted in full earlier in this thread, snipped]

--
Peter Dalgaard, Professor, Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd@cbs.dk  Priv: pda...@gmail.com
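Peter's explanation can be reproduced in a few lines; a minimal sketch (values chosen here for illustration, not taken from Angel's real data):

```r
N <- data.frame(age = c(67, 62, 74), samplem = c(NA, 1, 1))
# The read of N$sample on the implicit right-hand side partially matches
# samplem, so rows with age < 65 inherit samplem's values instead of NA:
N$sample[N$age >= 65] <- 1
# Enabling this option makes R warn whenever $ partially matches a name:
options(warnPartialMatchDollar = TRUE)
```

Initializing the column first (N$sample <- NA) removes the ambiguity, because the right-hand-side read then finds an exact match.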
Re: [R] help.start() has a faulty link
On 31 Aug 2014, at 18:19 , Prof Brian Ripley rip...@stats.ox.ac.uk wrote: On 31/08/2014 12:31, Rui Barradas wrote: Hello, With 'help.start()' an HTML browser interface to help pops up. In the section 'Miscellaneous Material' if you click on 'User Manuals' an error occurs:

    Error in vignettes[i, "PDF"] : subscript out of bounds

Is this a bug? A missing link? We have very little idea: you have not told us the 'at a minimum' information required in the posting guide. But it works for me in R 3.1.1: most likely there is a bug in the way you installed R (whatever that was ...). For example, maybe you did not install the vignette PDFs.

I see it with a default (I think) install of 3.0.2 on OSX (haven't gotten around to upgrading the laptop). Oddly, the Vignettes entry on the Help menu works fine, but the User Manuals entry in the R Help window produces the error. Vignettes installed or not, it doesn't sound like the right message to get. Poking around: following debug(tools:::makeVignetteTable) I see

    debug: Outfile <- vignettes[i, "PDF"]
    Browse[2]> i
    [1] 1
    Browse[2]> vignettes
          Package
    File  "utils" "Sweave.Rnw"
    Title "utils" "Sweave User Manual"
    PDF   "utils" "Sweave.pdf"
    R     "utils" "Sweave.R"
    Browse[2]> str(vignettes)
     chr [1:4, 1:2] "utils" "utils" "utils" "utils" "Sweave.Rnw" ...
     - attr(*, "dimnames")=List of 2
      ..$ : chr [1:4] "File" "Title" "PDF" "R"
      ..$ : chr [1:2] "Package" ...

which looks like something is up with the construction of the vignettes table. Presumably, it wants to be N x 5 with column names Package File Title PDF R, right? (Followups should likely go to r-devel rather than r-help.) -pd

Rui Barradas -- Brian D.
Ripley, rip...@stats.ox.ac.uk Emeritus Professor of Applied Statistics, University of Oxford 1 South Parks Road, Oxford OX1 3TG, UK

--
Peter Dalgaard, Professor, Center for Statistics, Copenhagen Business School Solbjerg Plads 3, 2000 Frederiksberg, Denmark Phone: (+45)38153501 Email: pd@cbs.dk Priv: pda...@gmail.com
Re: [R] Linear regression of 0/1 response ElemStatLearn (Fig. 2.1 the elements of statistical learning)
This is a list for R questions, not statistics or algebra, but if you set g = .5 and solve the linear model for x2 (ignoring e), you will have your answer, e.g.:

    .5 = B1 + B2*X1 + B3*X2

where B1, B2, B3 are the three coefficients of the linear model, coef()[1], [2], [3].

-
David L Carlson Department of Anthropology Texas A&M University College Station, TX 77840-4352

-----Original Message----- From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On Behalf Of Denis Kazakiewicz Sent: Monday, September 1, 2014 5:27 AM To: r-help@r-project.org Subject: [R] Linear regression of 0/1 response ElemStatLearn (Fig. 2.1 the elements of statistical learning)

Hello In chapter 2 the ESL book authors write: "Let's look at an example of the linear model in a classification context." They fit a simple linear model g = 0.3290614 - 0.0226360*x1 + 0.2495983*x2 + e, where g is given with values 0 or 1. Then they made a decision boundary from yhat: if yhat > 0.5 then yellow. Question: There is a separation line on the x1-x2 plot. Where did the intercept and slope for this line come from? In the ElemStatLearn R package, they simply put

    abline((0.5 - coef(x.mod)[1]) / coef(x.mod)[3], -coef(x.mod)[2] / coef(x.mod)[3])

where the first term is the intercept, and the second term is the slope for this line. Regards Denis
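To spell out the algebra behind that abline() call: setting yhat = 0.5 in 0.5 = B1 + B2*x1 + B3*x2 and solving for x2 gives x2 = (0.5 - B1)/B3 + (-B2/B3)*x1, i.e. an intercept and a slope. A sketch using the coefficients quoted above:

```r
# Decision boundary of the linear 0/1 fit: solve 0.5 = b1 + b2*x1 + b3*x2 for x2.
b <- c(0.3290614, -0.0226360, 0.2495983)  # coefficients as quoted from the ESL example
intercept <- (0.5 - b[1]) / b[3]          # (0.5 - B1)/B3
slope     <- -b[2] / b[3]                 # -B2/B3
c(intercept, slope)

# With a fitted model x.mod this is exactly the package's
# abline((0.5 - coef(x.mod)[1]) / coef(x.mod)[3],
#        -coef(x.mod)[2] / coef(x.mod)[3])
```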
[R] Adjusted R2 for Multivariate Regression Trees (MRT)
Dear fellows, I am using the MVPARTwrap package to build an MRT of 25 pollen samples collected from 5 different ecosystems; in my analysis I will include the adjusted R2. Based on the MVPARTwrap package, I want to get the adjusted R2 for my MRT, and for this I am using the code below.

    #step 1 - Building MRT.
    Pre_euro.mvpart <- mvpart(data.matrix(mydata.2) ~ ., 5Ecosystems, margin=0.02, cp=0,
                              xv="pick", xval=nrow(mydata.1), xvmult=100, which=4)

    #step 2 - Adjusted R2
    R2aGDF(MRT.mite.tree, T=40, tau_const=0.6, 5Ecosystems)

However, if I run the adjusted R2 code (step 2) 100 times, I get 100 different results. Which one is correct? Can anyone help me? Any help is very welcome. Cheers. Jackson Rodrigues
[R] Adjusted R2 for Multivariate Regression Trees (MRT) (ignore the previous message)
Dear fellows, I am using the MVPARTwrap package to build an MRT of 25 pollen samples collected from 5 different ecosystems; in my analysis I will include the adjusted R2. Based on the MVPARTwrap package, I want to get the adjusted R2 for my MRT, and for this I am using the code below.

    #step 1 - Building MRT.
    Pre_euro.mvpart <- mvpart(data.matrix(mydata.2) ~ ., 5Ecosystems, margin=0.02, cp=0,
                              xv="pick", xval=nrow(mydata.1), xvmult=100, which=4)
    MRT.mite.tree <- MRT(Pre_euro.mvpart, 10, LABELS=LABELS)

    #step 2 - Adjusted R2
    R2aGDF(MRT.mite.tree, T=40, tau_const=0.6, 5Ecosystems)

However, if I run the adjusted R2 code (step 2) 100 times, I get 100 different results. Which one is correct? Can anyone help me? Any help is very welcome. Cheers. Jackson Rodrigues
[R] Plot Lines instead of colour bands in R
Hi, I have a plotting issue which I am trying to resolve in R. Please load my attached sample data (I used dput(lapply(sim.summary,head,1)) but the data are too large) into R, install the Rglimclim package and run this code, which shows an example plot I would like to change. My main function, myplot, is found in the attached R object:

    #--
    require(Rglimclim)  # http://www.ucl.ac.uk/~ucakarc/work/rain_glm.html
    myplot(sim.summary, plot.titles="", which.stats="Mean", quantiles=c(0,0.025,0.5,0.975,1),
           imputation=obs.summary, which.sites=NULL, which.timescales="daily",
           colours.sim=c("magenta","darkorchid1","deeppink3","yellow"),
           cex.lab=1.4, cex.axis=1.5, ylabs="Precipitation (mm)")
    mtext(text=expression(paste(italic(Mean[C]))), font=3, side=3, line=1, cex=1.3, col="black")
    #---

I would like to remove the colours completely (EXCEPT THE BOLD BLACK COLOUR BAND) and replace them with line types in R. That is, I want to specify various inbuilt R line types/line colours/line widths for quantiles=c(0,0.025,0.5,0.975,1). The colours can be removed by setting the colours in colours.sim=c("magenta","darkorchid1","deeppink3","yellow") to "white". Practically, I would like my final code, after modifying the myplot function, to look like:

    #-
    myplot(sim.summary, plot.titles="", which.stats="Mean", quantiles=c(0,0.025,0.5,0.975,1),
           imputation=obs.summary, which.sites=NULL, which.timescales="daily",
           plot.type=c("l","l","l","l","l"),
           line.type=c(2,3,4,5,6), linecol.type=c('green4','red','blue','darkorchid1','deeppink3'),
           line.width=c(2,2,2,2,2),
           cex.lab=1.4, cex.axis=1.5, ylabs="Precipitation (mm)")
    mtext(text=expression(paste(italic(Mean[C]))), font=3, side=3, line=1, cex=1.3, col="black")
    #-

I have 20 such graphs to develop. Thanks for any inputs. AT.
[R] URLdecode problems
Hey all, So, I'm attempting to decode some (and I don't know why anyone did this) URL-encoded user agents. Running URLdecode over them generates the error:

    Error in rawToChar(out) : embedded nul in string

Okay, so there's an embedded nul - fair enough. Presumably decoding the URL is exposing it in a format R doesn't like. Except when I try to dig down and work out what an encoded nul looks like, in order to simply remove them with something like gsub(), I end up with several different strings, all of which apparently resolve to an embedded nul:

    > URLdecode("0;%20@%gIL")
    Error in rawToChar(out) : embedded nul in string: '0; @\0L'
    In addition: Warning message:
    In URLdecode("0;%20@%gIL") : out-of-range values treated as 0 in coercion to raw

    > URLdecode("%20%use")
    Error in rawToChar(out) : embedded nul in string: ' \0e'
    In addition: Warning message:
    In URLdecode("%20%use") : out-of-range values treated as 0 in coercion to raw

I'm a relative newb to encodings, so maybe the fault is simply in my understanding of how this should work, but - why are both strings being read as including nuls, despite having different values? And how would I go about removing said nuls?

--
Oliver Keyes Research Analyst Wikimedia Foundation
Re: [R] Depth vs Temp graph for different transects
On Sep 1, 2014, at 12:32 PM, David Winsemius dwinsem...@comcast.net wrote: On Sep 1, 2014, at 3:52 AM, Tinus Sonnekus wrote: Hi All, Have the following code. The graph works well, plotting the 15 transects for me; however, the legend shows a total of 22 transects. The original data has 22 transects numbered from 1 to 22. The new data set has only 15. How can I get the legend to show only the transects plotted?

    # Create Line Chart
    TAll <- read.csv("TAll Data.csv")

You have set up a situation where we can only guess. If this data is generated by a CTD or similar instrument then I highly recommend you use Dan Kelley's oce package. It will help you manage, analyze and display the cast data. http://cran.r-project.org/web/packages/oce/index.html Cheers, Ben

    # convert factor to numeric for convenience
    TAll$Tran <- as.numeric(TAll$Trans)
    nTrans <- max(TAll$Trans)
    # get the range for the x and y axis
    xrange <- range(TAll$Temp)
    yrange <- range(TAll$Depth)
    # set up the plot
    plot(xrange, yrange, ylim = rev(yrange), type="n", xlab="Temp (deg C)", ylab="Depth (m)")
    colors <- rainbow(nTrans)
    linetype <- c(1:nTrans)
    plotchar <- seq(1, 1+nTrans, 1)
    # add lines
    for (i in 1:nTrans) {
      tree <- subset(TAll, Trans==i)
      lines(tree$Temp, tree$Depth, type="b", lwd=1.5, lty=linetype[i], col=colors[i], pch=plotchar[i])
    }
    # add a legend
    legend(xrange[-2], yrange[-2], 1:nTrans, cex=0.8, col=colors, pch=plotchar, lty=linetype, title="Transect")

If nTrans is 22 then you are getting what you ask for. If you subsetted a dataset where TAll$Trans was a factor then it's perfectly possible that the legend would have more items than the subset. Perhaps you should use `length` or `length(unique(.))` rather than `max`. -- David.

Thanks for the help, Tinus -- M.J.
Sonnekus PhD Candidate (The Phytoplankton of the southern Agulhas Current Large Marine Ecosystem (ACLME)) Department of Botany South Campus Nelson Mandela Metropolitan University PO Box 77000 Port Elizabeth South Africa 6031 Cell: 082 080 9638 E-mail: tsonne...@gmail.com

David Winsemius, MD Alameda, CA, USA
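Following David's suggestion, a sketch of the fix (assuming TAll, xrange and yrange are set up as in the quoted code): drive both the loop and the legend from the transect numbers actually present, not from max():

```r
# Build colours, line types, symbols and the legend only from the
# transects that actually occur in the subsetted data.
trans.present <- sort(unique(TAll$Trans))     # e.g. 15 of the original 22 numbers
colors   <- rainbow(length(trans.present))
linetype <- seq_along(trans.present)
plotchar <- seq_along(trans.present)

for (i in seq_along(trans.present)) {
  tree <- subset(TAll, Trans == trans.present[i])
  lines(tree$Temp, tree$Depth, type = "b", lwd = 1.5,
        lty = linetype[i], col = colors[i], pch = plotchar[i])
}

legend(xrange[1], yrange[1], legend = trans.present, cex = 0.8,
       col = colors, pch = plotchar, lty = linetype, title = "Transect")
```

The legend labels are now the surviving transect numbers themselves, so a 15-transect subset gives a 15-entry legend.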
Re: [R] rgl zooming to an arbitrary location
On 31/08/2014, 11:12 PM, Gareth Davies wrote: I have been using rgl to view xyz point clouds containing topographic data ( with around 10^5 - 10^6 points). It's working well aside from one thing: I would like to be able to zoom into an arbitrary part of the plot. However so far I could only figure out how to zoom into the centre. See the example below -- in this case, I cannot 'zoom-in' to anything other than the central hill in the topography, whereas I would like to be able to zoom to an arbitrary location. Is there a way to get around this? Yes, you can manually set the userMatrix transformation. See ?par3d for a discussion about how rgl figures out what to display. See the example in ?rgl.setMouseCallbacks for some code that does something like what you want. There are plans to make this a little simpler in the next major release, but no definite release date. Duncan Murdoch ## # EXAMPLE CODE ## library(rgl) # Make up some topography x=runif(1e+06, min=0,max=1000) y=runif(1e+06, min=0,max=1000) # Elevation with 'hill' in the centre z=sin(x/50.)+cos(y/50.) + 30*exp(-((x-500)^2+(y-500)^2)*0.001) plot3d(x,y,z,col=z+3,aspect=FALSE) # Now try zooming [right mouse button]. I can only zoom into the central hill, not elsewhere. 
Below are the details of my R Install sessionInfo() R version 3.1.1 (2014-07-10) Platform: x86_64-unknown-linux-gnu (64-bit) locale: [1] LC_CTYPE=en_AU.UTF-8 LC_NUMERIC=C [3] LC_TIME=en_AU.UTF-8LC_COLLATE=en_AU.UTF-8 [5] LC_MONETARY=en_AU.UTF-8LC_MESSAGES=en_AU.UTF-8 [7] LC_PAPER=en_AU.UTF-8 LC_NAME=C [9] LC_ADDRESS=C LC_TELEPHONE=C [11] LC_MEASUREMENT=en_AU.UTF-8 LC_IDENTIFICATION=C attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] rgl_0.93.1098unstructInterp_0.0-1 rgeos_0.3-6 [4] rgdal_0.8-16 sp_1.0-15geometry_0.3-4 [7] magic_1.5-6 abind_1.4-0 SearchTrees_0.5.2 [10] roxygen2_4.0.1 loaded via a namespace (and not attached): [1] digest_0.6.4grid_3.1.1 lattice_0.20-29 Rcpp_0.11.2 [5] stringr_0.6.2 tools_3.1.1 __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] help.start() has a faulty link
On 01 Sep 2014, at 20:52 , peter dalgaard pda...@gmail.com wrote: I see it with a default (I think) install of 3.0.2 on OSX (haven't gotten around to upgrading the laptop). Oddly, the Vignettes entry on the Help menu works fine, but the User Manuals entry in the R Help window produces the error. Vignettes installed or not, it doesn't sound like the right message to get. Poking around: following debug(tools:::makeVignetteTable) I see

A bit of further poking reveals that it comes from a missing drop=FALSE, which has already been fixed in r-patched, specifically by

    $ svn log -c 66144 ./src/library/tools/R/dynamicHelp.R
    r66144 | ripley | 2014-07-14 12:09:56 +0200 (Mon, 14 Jul 2014) | 1 line

    port tweaks from trunk

    $ svn diff -x -w -c 66144 ./src/library/tools/R/dynamicHelp.R
    Index: src/library/tools/R/dynamicHelp.R
    ===================================================================
    --- src/library/tools/R/dynamicHelp.R (revision 66143)
    +++ src/library/tools/R/dynamicHelp.R (revision 66144)
    @@ -1,7 +1,7 @@
     #  File src/library/tools/R/dynamicHelp.R
     #  Part of the R package, http://www.R-project.org
     #
    -#  Copyright (C) 1995-2013 The R Core Team
    +#  Copyright (C) 1995-2014 The R Core Team
     #
     #  This program is free software; you can redistribute it and/or modify
     #  it under the terms of the GNU General Public License as published by
    @@ -54,7 +54,7 @@
         vinfo <- getVignetteInfo(pkg)
         if (nrow(vinfo)) out <- c(out, paste0('<h2>Manuals in package ', sQuote(pkg), '</h2>'),
    -        makeVignetteTable(cbind(Package=pkg, vinfo[, c("File", "Title", "PDF", "R")])))
    +        makeVignetteTable(cbind(Package=pkg, vinfo[, c("File", "Title", "PDF", "R"), drop = FALSE])))
         }
         out <- c(out, "<hr>\n</body></html>")
         list(payload = paste(out, collapse="\n"))
    --

Notice that 3.1.1 was on July 10, so this won't be in it.
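The transposed 4 x 2 matrix Peter saw falls straight out of R's dimension-dropping rules; a small self-contained illustration (my example, not from the thread): when a package has exactly one vignette, single-row indexing without drop = FALSE collapses the matrix to a vector, and cbind() then recycles the package name down a column instead of across a row.

```r
# One-vignette case: vinfo is a 1 x 4 character matrix.
vinfo <- matrix(c("Sweave.Rnw", "Sweave User Manual", "Sweave.pdf", "Sweave.R"),
                nrow = 1, dimnames = list(NULL, c("File", "Title", "PDF", "R")))

# Without drop = FALSE the row drops to a named vector, and cbind()
# stacks it into a 4 x 2 matrix -- the transposed shape from the debug session:
dim(cbind(Package = "utils", vinfo[, c("File", "Title", "PDF", "R")]))

# With drop = FALSE the 1 x 4 shape is kept, giving the intended 1 x 5 table:
dim(cbind(Package = "utils", vinfo[, c("File", "Title", "PDF", "R"), drop = FALSE]))
```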
-- Peter Dalgaard, Professor, Center for Statistics, Copenhagen Business School Solbjerg Plads 3, 2000 Frederiksberg, Denmark Phone: (+45)38153501 Email: pd@cbs.dk Priv: pda...@gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] URLdecode problems
I would guess that the original URLs were encoded somehow (non-ASCII), and the person who received them didn't understand how to deal with them either and url-encoded them with the thought that they would not lose information that way. Unfortunately, they probably lost the meta information as to how they were originally encoded, and without that this turns into a detective job that will likely need C's ability (perhaps via Rcpp) to ignore type information to put things back. If you are lucky, all strings were originally encoded the same way... if really lucky they were all UTF8 or UTF16 (which would have nuls and other odd bytes). Proceeding with the broken strings you have now will almost certainly not work. The fragments shown are not even vaguely recognizable as URLs, so I don't see how we can do anything meaningful with them. Please read the Posting Guide. One point made there to note is that if C becomes part of the question then R-devel becomes the more appropriate list. The other is that for all of these lists plain text email is expected (not HTML).

---------------------------------------------------------------------------
Jeff Newmiller  DCN: jdnew...@dcn.davis.ca.us
Research Engineer (Solar/Batteries, /Software/Embedded Controllers)
---------------------------------------------------------------------------
Sent from my phone. Please excuse my brevity.

On September 1, 2014 9:02:33 AM PDT, Oliver Keyes oke...@wikimedia.org wrote: Hey all, So, I'm attempting to decode some (and I don't know why anyone did this) URL-encoded user agents. Running URLdecode over them generates the error: Error in rawToChar(out) : embedded nul in string. Okay, so there's an embedded nul - fair enough. Presumably decoding the URL is exposing it in a format R doesn't like.
Except when I try to dig down and work out what an encoded nul looks like, in order to simply remove them with something like gsub(), I end up with several different strings, all of which apparently resolve to an embedded nul:

    > URLdecode("0;%20@%gIL")
    Error in rawToChar(out) : embedded nul in string: '0; @\0L'
    In addition: Warning message:
    In URLdecode("0;%20@%gIL") : out-of-range values treated as 0 in coercion to raw

    > URLdecode("%20%use")
    Error in rawToChar(out) : embedded nul in string: ' \0e'
    In addition: Warning message:
    In URLdecode("%20%use") : out-of-range values treated as 0 in coercion to raw

I'm a relative newb to encodings, so maybe the fault is simply in my understanding of how this should work, but - why are both strings being read as including nuls, despite having different values? And how would I go about removing said nuls?
Re: [R] Plot Lines instead of colour bands in R
We have no idea: this email list strips most attachments. I'm almost certain that you don't need all of your too-large data to provide a small reproducible example, either. Please see http://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example for some ideas on how to do that without requiring attachments. If this is something specific to the package you mention, it might be worthwhile to start by asking the package maintainer for help. Sarah

On Mon, Sep 1, 2014 at 3:29 PM, Zilefac Elvis zilefacel...@yahoo.com wrote: Hi, I have a plotting issue which I am trying to resolve in R. Please load my attached sample data (I used dput(lapply(sim.summary,head,1)) but the data are too large) into R, install the Rglimclim package and run this code, which shows an example plot I would like to change. My main function, myplot, is found in the attached R object:

    #--
    require(Rglimclim)  # http://www.ucl.ac.uk/~ucakarc/work/rain_glm.html
    myplot(sim.summary, plot.titles="", which.stats="Mean", quantiles=c(0,0.025,0.5,0.975,1),
           imputation=obs.summary, which.sites=NULL, which.timescales="daily",
           colours.sim=c("magenta","darkorchid1","deeppink3","yellow"),
           cex.lab=1.4, cex.axis=1.5, ylabs="Precipitation (mm)")
    mtext(text=expression(paste(italic(Mean[C]))), font=3, side=3, line=1, cex=1.3, col="black")
    #---

I would like to remove the colours completely (EXCEPT THE BOLD BLACK COLOUR BAND) and replace them with line types in R. That is, I want to specify various inbuilt R line types/line colours/line widths for quantiles=c(0,0.025,0.5,0.975,1). The colours can be removed by setting the colours in colours.sim=c("magenta","darkorchid1","deeppink3","yellow") to "white".
Practically, I would like my final code, after modifying the myplot function, to look like:

    #-
    myplot(sim.summary, plot.titles="", which.stats="Mean", quantiles=c(0,0.025,0.5,0.975,1),
           imputation=obs.summary, which.sites=NULL, which.timescales="daily",
           plot.type=c("l","l","l","l","l"),
           line.type=c(2,3,4,5,6), linecol.type=c('green4','red','blue','darkorchid1','deeppink3'),
           line.width=c(2,2,2,2,2),
           cex.lab=1.4, cex.axis=1.5, ylabs="Precipitation (mm)")
    mtext(text=expression(paste(italic(Mean[C]))), font=3, side=3, line=1, cex=1.3, col="black")
    #-

I have 20 such graphs to develop. Thanks for any inputs. AT.

--
Sarah Goslee http://www.functionaldiversity.org
Re: [R] rgl zooming to an arbitrary location
Fantastic -- the pan3d function in the example for rgl.setMouseCallbacks solves the problem. For illustration: ## library(rgl) # Get the pan3d function by running this rgl example [ignore the error caused by not having an open rgl device] example(rgl.setMouseCallbacks) #Make up some topography x=runif(1e+06, min=0,max=1000) y=runif(1e+06, min=0,max=1000) # Elevation with 'hill' in the centre z=sin(x/50.)+cos(y/50.) + 30*exp(-((x-500)^2+(y-500)^2)*0.001) plot3d(x,y,z,col=z+3,aspect=FALSE) pan3d(3) # This makes my 'middle' mouse button function like pan On 02/09/14 06:25, Duncan Murdoch wrote: On 31/08/2014, 11:12 PM, Gareth Davies wrote: I have been using rgl to view xyz point clouds containing topographic data ( with around 10^5 - 10^6 points). It's working well aside from one thing: I would like to be able to zoom into an arbitrary part of the plot. However so far I could only figure out how to zoom into the centre. See the example below -- in this case, I cannot 'zoom-in' to anything other than the central hill in the topography, whereas I would like to be able to zoom to an arbitrary location. Is there a way to get around this? Yes, you can manually set the userMatrix transformation. See ?par3d for a discussion about how rgl figures out what to display. See the example in ?rgl.setMouseCallbacks for some code that does something like what you want. There are plans to make this a little simpler in the next major release, but no definite release date. Duncan Murdoch ## # EXAMPLE CODE ## library(rgl) # Make up some topography x=runif(1e+06, min=0,max=1000) y=runif(1e+06, min=0,max=1000) # Elevation with 'hill' in the centre z=sin(x/50.)+cos(y/50.) + 30*exp(-((x-500)^2+(y-500)^2)*0.001) plot3d(x,y,z,col=z+3,aspect=FALSE) # Now try zooming [right mouse button]. I can only zoom into the central hill, not elsewhere. 
Below are the details of my R Install sessionInfo() R version 3.1.1 (2014-07-10) Platform: x86_64-unknown-linux-gnu (64-bit) locale: [1] LC_CTYPE=en_AU.UTF-8 LC_NUMERIC=C [3] LC_TIME=en_AU.UTF-8LC_COLLATE=en_AU.UTF-8 [5] LC_MONETARY=en_AU.UTF-8LC_MESSAGES=en_AU.UTF-8 [7] LC_PAPER=en_AU.UTF-8 LC_NAME=C [9] LC_ADDRESS=C LC_TELEPHONE=C [11] LC_MEASUREMENT=en_AU.UTF-8 LC_IDENTIFICATION=C attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] rgl_0.93.1098unstructInterp_0.0-1 rgeos_0.3-6 [4] rgdal_0.8-16 sp_1.0-15geometry_0.3-4 [7] magic_1.5-6 abind_1.4-0 SearchTrees_0.5.2 [10] roxygen2_4.0.1 loaded via a namespace (and not attached): [1] digest_0.6.4grid_3.1.1 lattice_0.20-29 Rcpp_0.11.2 [5] stringr_0.6.2 tools_3.1.1 __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] depth of labels of axis
Hi there, With the following code,

    plot(1:5, xaxt = "n")
    axis(1, at = 1:5, labels = c(expression(E[g]), "E", expression(E[j]), "E", expression(E[t])))

you may notice that the E's within the labels of axis(1) are not at the same depth, so the axis(1) labels look something like a wave. Is there a possible way to typeset the labels so that they have the same depth? Any suggestions will be really appreciated. Thanks in advance. Best regards, Jinsong
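One common workaround (a sketch, not from the thread): give the plain labels an invisible subscript with plotmath's phantom(), so that every label is typeset with the same depth below the baseline:

```r
# Each plain "E" gets a phantom subscript, reserving the same vertical
# space as the real subscripted labels, so all labels sit at one depth.
plot(1:5, xaxt = "n")
axis(1, at = 1:5,
     labels = c(expression(E[g]),
                expression(E[phantom(g)]),  # looks like a bare E
                expression(E[j]),
                expression(E[phantom(j)]),
                expression(E[t])))
```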
Re: [R] Building R for better performance
could you tell us whether we should expect the same/similar performance benefits when the GNU compiler suite + MKL are teamed up? and how to configure such a compilation? many thanks

On 04/03/14 21:44, Anspach, Jonathan P wrote: Greetings, I'm a software engineer with Intel. Recently I've been investigating R performance on Intel Xeon and Xeon Phi processors and RH Linux. I've also compared the performance of R built with the Intel compilers and Intel Math Kernel Library to a default build (no config options) that uses the GNU compilers. To my dismay, I've found that the GNU build always runs on a single CPU core, even during matrix operations. The Intel build runs matrix operations on multiple cores, so it is much faster on those operations. Running benchmark-2.5 on a 24 core Xeon system, the Intel build is 13x faster than the GNU build (21 seconds vs 275 seconds). Unfortunately, this advantage is not documented anywhere that I can see. Building with the Intel tools is very easy. Assuming the tools are installed in /opt/intel/composerxe, the process is simply (in bash shell):

    $ . /opt/intel/composerxe/bin/compilervars.sh intel64
    $ ./configure --with-blas="-L/opt/intel/composerxe/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm" --with-lapack CC=icc CFLAGS="-O2" CXX=icpc CXXFLAGS="-O2" F77=ifort FFLAGS="-O2" FC=ifort FCFLAGS="-O2"
    $ make
    $ make check

My questions are: 1) Do most system admins and/or R installers know about this performance difference, and use the Intel tools to build R? 2) Can we add information on the advantage of building with the Intel tools, and how to do it, to the installation instructions and FAQ? I can post my data if anyone is interested. Thanks, Jonathan Anspach Sr. Software Engineer Intel Corp.
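On the GNU + MKL question: the "R Installation and Administration" manual documents linking an external BLAS/LAPACK at configure time, and MKL ships GNU-compatible interface and threading layers (mkl_gf_lp64, mkl_gnu_thread). A hedged sketch of a GCC build against MKL - the install path below is an assumption based on the composerxe layout in the post, so adjust MKLROOT to your system:

```shell
# Sketch only: GCC/gfortran toolchain, external MKL BLAS/LAPACK.
export MKLROOT=/opt/intel/composerxe/mkl           # hypothetical path
MKL="-L${MKLROOT}/lib/intel64 -lmkl_gf_lp64 -lmkl_gnu_thread -lmkl_core -lgomp -lpthread -lm"
./configure --with-blas="$MKL" --with-lapack
make && make check
```

The gf/gnu libraries replace the Intel-compiler-specific mkl_intel_lp64/mkl_intel_thread/iomp5 trio in Jonathan's line; whether the multicore speedup matches the icc build is something you would need to benchmark.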
jonathan.p.ansp...@intel.com 713-751-9460
Re: [R] Building R for better performance
Is MKL open source software? If not, that could be the sticking point. Simon. On 02/09/14 07:24, lejeczek wrote: could you tell us if the same/similar performance benefits we should expect when gnu complier suite + MKL are teamed up? and how to configure such a compilation? many thanks On 04/03/14 21:44, Anspach, Jonathan P wrote: Greetings, I'm a software engineer with Intel. Recently I've been investigating R performance on Intel Xeon and Xeon Phi processors and RH Linux. I've also compared the performance of R built with the Intel compilers and Intel Math Kernel Library to a default build (no config options) that uses the GNU compilers. To my dismay, I've found that the GNU build always runs on a single CPU core, even during matrix operations. The Intel build runs matrix operations on multiple cores, so it is much faster on those operations. Running the benchmark-2.5 on a 24 core Xeon system, the Intel build is 13x faster than the GNU build (21 seconds vs 275 seconds). Unfortunately, this advantage is not documented anywhere that I can see. Building with the Intel tools is very easy. Assuming the tools are installed in /opt/intel/composerxe, the process is simply (in bash shell): $ . /opt/intel/composerxe/bin/compilervars.sh intel64 $ ./configure --with-blas=-L/opt/intel/composerxe/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm --with-lapack CC=icc CFLAGS=-O2 CXX=icpc CXXFLAGS=-O2 F77=ifort FFLAGS=-O2 FC=ifort FCFLAGS=-O2 $ make $ make check My questions are: 1) Do most system admins and/or R installers know about this performance difference, and use the Intel tools to build R? 2) Can we add information on the advantage of building with the Intel tools, and how to do it, to the installation instructions and FAQ? I can post my data if anyone is interested. Thanks, Jonathan Anspach Sr. Software Engineer Intel Corp. 
jonathan.p.ansp...@intel.com 713-751-9460

--
Simon Blomberg, BSc (Hons), PhD, MAppStat, AStat. Senior Lecturer and Consultant Statistician School of Biological Sciences The University of Queensland St. Lucia Queensland 4072 Australia T: +61 7 3365 2506 email: S.Blomberg1_at_uq.edu.au http://www.evolutionarystatistics.org Policies: 1. I will NOT analyse your data for you. 2. Your deadline is your problem. Statistics is the grammar of science - Karl Pearson.
Re: [R] URLdecode problems
Hi Oliver,

I think you're being misled by the default behaviour of warnings: they all get displayed at once, before control returns to the console. If you make them immediate (e.g. with options(warn = 1)), you get a slightly more informative error:

URLdecode("0;%20@%gIL")
Warning in URLdecode("0;%20@%gIL") :
  out-of-range values treated as 0 in coercion to raw
Error in rawToChar(out) : embedded nul in string: '0; @\0L'

So the out-of-range value (%g...) is getting converted to a raw 0, aka a nul, and then rawToChar() chokes. The code for URLdecode() is simple enough that I'd recommend rewriting it yourself to better handle bad inputs.

Hadley

On Mon, Sep 1, 2014 at 11:02 AM, Oliver Keyes oke...@wikimedia.org wrote:

Hey all,

So, I'm attempting to decode some (and I don't know why anyone did this) URL-encoded user agents. Running URLdecode() over them generates the error:

Error in rawToChar(out) : embedded nul in string

Okay, so there's an embedded nul - fair enough. Presumably decoding the URL is exposing it in a format R doesn't like. Except when I try to dig down and work out what an encoded nul looks like, in order to simply remove them with something like gsub(), I end up with several different strings, all of which apparently resolve to an embedded nul:

URLdecode("0;%20@%gIL")
Error in rawToChar(out) : embedded nul in string: '0; @\0L'
In addition: Warning message:
In URLdecode("0;%20@%gIL") : out-of-range values treated as 0 in coercion to raw

URLdecode("%20%use")
Error in rawToChar(out) : embedded nul in string: ' \0e'
In addition: Warning message:
In URLdecode("%20%use") : out-of-range values treated as 0 in coercion to raw

I'm a relative newb to encodings, so maybe the fault is simply in my understanding of how this should work, but - why are both strings being read as including nuls, despite having different values? And how would I go about removing said nuls?
--
Oliver Keyes
Research Analyst
Wikimedia Foundation

--
http://had.co.nz/
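[Editor's note, not part of the thread: the root cause above is a policy choice for malformed percent-escapes such as "%gI" - R's URLdecode() coerces them to a 0 byte, producing the embedded nul. Python's standard library takes the other defensible policy and leaves malformed escapes as literal text, which is one model for the hand-rolled replacement Hadley suggests. A minimal sketch:]

```python
from urllib.parse import unquote

# urllib.parse.unquote decodes valid escapes ("%20" -> space) but leaves
# malformed ones ("%gI", "%us") untouched instead of coercing them to a
# 0x00 byte, so no embedded nul ever appears.
print(unquote("0;%20@%gIL"))  # -> 0; @%gIL
print(unquote("%20%use"))     # -> " %use" (leading space preserved)
```

With this policy the two strings from the thread decode to visibly different results, which also answers why R reported them both as "embedded nul": both contain an escape that R coerced to 0.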
Re: [R] depth of labels of axis
On Sep 1, 2014, at 4:40 PM, Jinsong Zhao wrote:

Hi there,

With the following code,

plot(1:5, xaxt = "n")
axis(1, at = 1:5, labels = c(expression(E[g]), "E", expression(E[j]), "E", expression(E[t])))

you may notice that the "E"s within the axis(1) labels are not at the same depth, so the axis(1) labels look something like a wave. Is there a way to typeset the labels so that they all have the same depth?

I'm not sure that we share an interpretation of the term "depth" in this context. I'm interpreting your request to mean vertical alignment.

Any suggestions will be really appreciated.

Read the help page, especially the paragraph about padj. I will admit that the description of the actions of padj=0 and padj=1 was not what I experienced when I tried alternate versions: it did not seem to me that padj=0 produced top alignment.

--
David Winsemius, MD
Alameda, CA, USA