Re: [R] Three most useful R package

2010-03-04 Thread Karl Ove Hufthammer
On Wed, 3 Mar 2010 11:52:48 -0700 Greg Snow greg.s...@imail.org wrote:
 I also want a package that, when people misuse certain functions/techniques,
 will cause a small door on the side of their monitor/computer to
 open and a mechanical hand to come out and slap them upside the
 head. But that package will not be useful until the hardware support
 is available.

Such a package, even without the hardware support, would certainly 
be useful. (For LaTeX we have the 'nag' package, with similar 
functionality.)

-- 
Karl Ove Hufthammer

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Newb question re. read.table...

2010-03-04 Thread Karl Ove Hufthammer
On Thu, 4 Mar 2010 11:34:49 +1300 Rolf Turner r.tur...@auckland.ac.nz 
wrote:
 There are various ways to access components of a data frame:
 
   * plot(con$rel,con$len)
   * plot(con[["rel"]],con[["len"]])
   * plot(con[,"rel"],con[,"len"])
   * with(con, plot(rel,len))

And: plot(len~rel, data=con)
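
For example (the data frame 'con' below is made up, just to illustrate; the real
one comes from read.table):

con <- data.frame(rel = rep(1:3, each = 4), len = rnorm(12))
plot(len ~ rel, data = con)   # same plot as plot(con$rel, con$len)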

-- 
Karl Ove Hufthammer

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Three most useful R package

2010-03-04 Thread Jim Lemon

On Wed, 3 Mar 2010 11:52:48 -0700 Greg Snow greg.s...@imail.org wrote:
 I also want a package that, when people misuse certain
 functions/techniques, will cause a small door on the
 side of their monitor/computer to open and a mechanical
 hand to come out and slap them upside the head. But
 that package will not be useful until the hardware
 support is available.

Hi Greg,
Had my grandfather and a workmate not worked out, over 60 years ago, a 
way to do something that no one else had been able to do by running a 
machine in a way it was not supposed to be run, I might agree with you.


Jim

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] two questions for R beginners

2010-03-04 Thread David A.G

As a biologist recycled into biostatistics, I had always worked with Excel and 
then SPSS, and moving to R was difficult (and still is, since I am still 
learning).

Being a self-taught person, I learn R by looking for examples on Google, which 
often takes me to the R wiki or other sites. I sometimes post questions and most of 
the answers were helpful, but I have found that sometimes the answers were 
too short or didn't give enough hints as to how to proceed, and that has stopped 
me from asking again so as not to annoy the experts. I have not answered too 
many questions from newbies, but I have tried to explain as much as I could. 
Sometimes I find it better not to answer at all rather than give a short, 
vague answer. Please, examples, examples, examples!

What I found most difficult were the different data types, since I understood Excel as a 
data frame with columns and rows, and that's it. Then, as someone has already 
commented, the class, mode and str functions helped a lot. But I think that, for 
me, examples are the way to let people learn. 
From that, I moved on to using loops, and I am still nervous when people suggest 
using *apply functions; I can't get the hang of them! I find loops more 
logical, and can't see how to translate them into *apply calls.
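
To give a toy example of the kind of translation I mean (just a sketch, probably not idiomatic):

res <- numeric(10)
for (i in 1:10) res[i] <- mean(rnorm(100))           # the loop I would naturally write
res2 <- sapply(1:10, function(i) mean(rnorm(100)))   # the *apply version I struggle to reach for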

Finally, I am not a Linux expert, and I cannot get around to installing and 
organising a proper R directory and keeping it updated. I once tried to use a 
package that needed the development version of R and was only prepared for Linux, 
but I couldn't keep the R-devel versions updated. Some more step-by-step instructions 
would help sometimes.

Thanks for a great tool!




 Date: Tue, 2 Mar 2010 12:44:23 -0600
 From: keo.orms...@gmail.com
 To: landronim...@gmail.com
 CC: r-help@r-project.org; pbu...@pburns.seanet.com
 Subject: Re: [R] two questions for R beginners
 
 Liviu Andronic wrote:
  On Mon, Mar 1, 2010 at 11:49 PM, Liviu Andronic landronim...@gmail.com 
  wrote:

  On 3/1/10, Keo Ormsby keo.orms...@gmail.com wrote:
  
   Perhaps my biggest problem was that I couldn't (and still haven't) seen
  *absolute beginners* documents.
 

  there was once a link posted on r-sig-teaching that would probably fit
  your needs, but I cannot find it now.
 
  
 
  OK, I found it. Below is an excerpt of that r-sig-teaching e-mail.
  Liviu
 
  On Thu, Jul 2, 2009 at 2:19 PM, Robert W. Hayden hay...@mv.mv.com wrote:

  I think such a website would be a real asset.  It would be most useful
  if it either were restricted to intro. stats. OR organized so that
  materials for real beginners were easy to extract from all the
  materials for programmers and Ph.D. statisticians.  As a relative
  beginner myself, I find the usual resources useless.  In self defense,
  I created materials for my own beginning students:
 
   http://courses.statistics.com/software/R/Rhome.htm
  
 Hi Liviu,
 This is indeed the best introductory site I have seen, although it 
 still assumes some things that at first might seem unintuitive to the 
 absolute beginner I am talking about. For instance, on the first page it 
 shows that you can do sqrt(x), where x can be a vector, and get back a 
 vector of the square roots of each number. Although this is high-school 
 matrix algebra, most users expect the input to the square root function 
 to be a single number, not a matrix, as in Excel or on a calculator. Other 
 concepts that are not explicitly introduced are the R workspace, the use 
 of arguments in functions (with or without the =), etc. Others are 
 things like diff(range(rainfall)), where the output of one 
 function is used as the input to another, all on the same command line. All 
 these things seem very basic, but can be difficult if you are trying to 
 learn on your own with no prior experience in programming.
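
 To illustrate with a toy example (nothing more than a sketch):

 x <- c(4, 9, 16)
 sqrt(x)                  # returns a whole vector of square roots, not a single number
 rainfall <- c(1.2, 0.4, 3.5, 2.0)
 diff(range(rainfall))    # the output of range() fed straight into diff()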
 I hope I am not sounding too difficult or contrarian; I am just trying 
 to share my experience of starting with R, and of trying to convey 
 this learning to my colleagues and students. In the end, I did find 
 everything I needed to learn, and now I feel at ease with R, and I 
 believe that almost anybody who can use Excel or something like it 
 could learn R.
 
 Thank you for the information,
 Best wishes,
 Keo.
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Subset using partial values

2010-03-04 Thread Newbie19_02

Thanks very much David and Henrique for your help.  It has made my life much
simpler.
-- 
View this message in context: 
http://n4.nabble.com/Subset-using-partial-values-tp1576614p1577792.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] IMPORTANT! How work constrOptim? Why error in this routine???

2010-03-04 Thread Barbara . Rogo

I have to calculate the values of a set of parameters that minimize a function 
(scarti) subject to constraints. I know that scarti is
right.
Then why do I get an error? I don't understand! Help, thanks, it's very
important!
This is the routine:
-
# Estimation of the CIR model from swap rates


swap=c(1.311,1.878,2.248,2.556,2.81,3.031,3.216,3.3525,3.491,3.583,3.786,3.963,4.062,4.022,3.944)
scadswap=c(1,2,3,4,5,6,7,8,9,10,12,15,20,25,30)

swapint=approx(scadswap,swap,xout=1:30,method="linear")$y

flussi=mat.or.vec(nr=30,nc=30)

for (k in 1:30){
  flussi[k,]=c(rep(swapint[k],k-1),100+swapint[k],rep(0,30-k))

}

A=rbind(flussi)

PMerc=rep(100,30)

scarti=function(r,d,fi,ni){
  vs=mat.or.vec(nr=30,nc=1)
  for (s in 1:30){
a=(d*exp(fi*s)/(fi*(exp(d*s)-1)+d))^(ni)
b=((exp(d*s)-1)/(fi*(exp(d*s)-1)+d))
vs[s]=a*exp(-r*b)
  }
  PMod=A%*%vs
  return(sum((PMerc-PMod)^2))
}

parCIR=constrOptim(c(0.0088,0.3339,0.3092,1.7530),scarti,NULL,ui=rbind(c(1,0,0,0),c(0,1,-1,0),c(0,1,0,0),c(0,0,0,1)),ci=c(0,0,0,1))$par
---

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] adonis(), design

2010-03-04 Thread Kay Cichini

Inclusion of repeated measures should of course gain power, but here I
guess one would have to restrict the permutations, and that is what may reduce
power drastically if the sample size is small; at least that's how I understood
it.
I have also dug out a thread where someone asked for random factors in
adonis(), and that's quite the same topic, I think. Gavin answered that the
function permuted.Index2() would serve, but I still don't know how to do it
practically.
There is one of my postings in the R-Forge vegan forum that deals with the same
problem of sampling design for beta.disper(), and if it turns out that in
both cases the key is permuted.Index2(), this thread may be redundant. 

Greetings,
Kay
-- 
View this message in context: 
http://n4.nabble.com/adonis-design-tp1568989p1577801.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] IMPORTANT! How work constrOptim? Why error in this routine???

2010-03-04 Thread Ingmar Visser
The objective function, scarti, needs a vector as input, not 4 separate
arguments.
constrOptim will call

pp <- c(0.0088,0.3339,0.3092,1.7530)
scarti(pp)

which produces the error
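
A minimal sketch of the fix (untested): wrap the four parameters into one vector, e.g.

scarti2 <- function(p) scarti(r = p[1], d = p[2], fi = p[3], ni = p[4])

parCIR <- constrOptim(c(0.0088, 0.3339, 0.3092, 1.7530), scarti2, NULL,
                      ui = rbind(c(1,0,0,0), c(0,1,-1,0), c(0,1,0,0), c(0,0,0,1)),
                      ci = c(0,0,0,1))$par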

hth, Ingmar

On Thu, Mar 4, 2010 at 10:27 AM, barbara.r...@uniroma1.it wrote:


 I have to calculate the values of a set of parameters that minimize a
 function (scarti) subject to constraints. I know that scarti is
 right.
 Then why do I get an error? I don't understand! Help, thanks, it's very
 important!
 This is the routine:
 -
 # Estimation of the CIR model from swap rates



 swap=c(1.311,1.878,2.248,2.556,2.81,3.031,3.216,3.3525,3.491,3.583,3.786,3.963,4.062,4.022,3.944)
 scadswap=c(1,2,3,4,5,6,7,8,9,10,12,15,20,25,30)

 swapint=approx(scadswap,swap,xout=1:30,method="linear")$y

 flussi=mat.or.vec(nr=30,nc=30)

 for (k in 1:30){
  flussi[k,]=c(rep(swapint[k],k-1),100+swapint[k],rep(0,30-k))

 }

 A=rbind(flussi)

 PMerc=rep(100,30)

 scarti=function(r,d,fi,ni){
  vs=mat.or.vec(nr=30,nc=1)
  for (s in 1:30){
a=(d*exp(fi*s)/(fi*(exp(d*s)-1)+d))^(ni)
b=((exp(d*s)-1)/(fi*(exp(d*s)-1)+d))
vs[s]=a*exp(-r*b)
  }
  PMod=A%*%vs
  return(sum((PMerc-PMod)^2))
 }


 parCIR=constrOptim(c(0.0088,0.3339,0.3092,1.7530),scarti,NULL,ui=rbind(c(1,0,0,0),c(0,1,-1,0),c(0,1,0,0),c(0,0,0,1)),ci=c(0,0,0,1))$par
 ---

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] precision issue?

2010-03-04 Thread Alexander Nervedi

Hi R Gurus,

I am trying to figure out what is going on here.

> a <- 68.08
> b <- a-1.55
> a-b
[1] 1.55
> a-b == 1.55
[1] FALSE
> round(a-b,2) == 1.55
[1] TRUE
> round(a-b,15) == 1.55
[1] FALSE

Why should (a - b) == 1.55 fail when in fact b has been defined to be a - 1.55? 
Is this a precision issue? How do I correct this?

Alex
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] running R from Notepad++ in Windows 7

2010-03-04 Thread Robert Kinley
Greetings

Running R 2.10.1 with Notepad++ and NpptoR.exe, when I submit a script to 
run :-

1. On my work XP machine , it runs fine in an already-open Rgui window

2. On my home Windows 7 machine, it ignores the existing Rgui window and 
fires up 
a fresh one, which doesn't use the same .RData, so it fails.

Anyone else hit this and, ideally, solved it ... ?

cheers  Bob

   Robert Kinley 
  b...@lilly.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to create a SVG plot

2010-03-04 Thread Lesong Tsai

Hi!

I want to know how to create an SVG plot with R.

savePlot() can't do it.

Waiting for your answer. Thank you.


-- 
View this message in context: 
http://n4.nabble.com/How-to-create-a-SVG-plot-tp1577685p1577685.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to create a SVG plot

2010-03-04 Thread Mario Valle

library(cairoDevice)
Cairo_svg()
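
Or, if your R build has cairo support, the svg() device in grDevices should also
work; a minimal sketch (the file name is just an example):

svg("myplot.svg", width = 6, height = 4)
plot(1:10)
dev.off()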


On 04-Mar-10 8:13, Lesong Tsai wrote:


Hi!

I want to know how to create an SVG plot with R.

savePlot() can't do it.

Waiting for your answer. Thank you.




--
Ing. Mario Valle
Data Analysis and Visualization Group| 
http://www.cscs.ch/~mvalle

Swiss National Supercomputing Centre (CSCS)  | Tel:  +41 (91) 610.82.60
v. Cantonale Galleria 2, 6928 Manno, Switzerland | Fax:  +41 (91) 610.82.82

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] only actual variable names in all.names()

2010-03-04 Thread Vito Muggeo (UniPa)

Dear all,
When I use all.vars(), I am interested in extracting only the actual variable names.
Here is a simple example:

all.vars(as.formula(y~poly(x,k)+z))

returns
[1] "y" "x" "k" "z"

and I would like to obtain
"y" "x" "z"

Where is the trick?

many thanks
vito


--

Vito M.R. Muggeo
Dip.to Sc Statist e Matem `Vianelli'
Università di Palermo
viale delle Scienze, edificio 13
90128 Palermo - ITALY
tel: 091 23895240
fax: 091 485726/485612
http://dssm.unipa.it/vmuggeo

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] [R-pkgs] KmL 1.1.1

2010-03-04 Thread Christophe Genolini
‘kml’ is an implementation of k-means for longitudinal data (or 
trajectories). The algorithm is able to deal with missing values
and provides an easy way to re-run the algorithm several times, varying 
the starting conditions and/or the number of clusters looked for.


KmL 1.1.1 additions:
- 7 imputation methods for longitudinal data
- Calculation of three quality criteria (Calinski-Harabasz, Ray-Turi, 
Davies-Bouldin)

- Implementation of the Frechet distance between two trajectories
- Calculation of the Frechet path
- Optimization of the function 'dist' for trajectories
- Possibility to use three different ways to define the starting conditions
- Correction of minor bugs


Christophe Genolini

___
R-packages mailing list
r-packa...@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-packages

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Odp: precision issue?

2010-03-04 Thread Petr PIKAL
Hi

r-help-boun...@r-project.org wrote on 04.03.2010 10:36:43:

 
 Hi R Gurus,
 
 I am trying to figure out what is going on here.
 
 > a <- 68.08
 > b <- a-1.55
 > a-b
 [1] 1.55
 > a-b == 1.55
 [1] FALSE
 > round(a-b,2) == 1.55
 [1] TRUE
 > round(a-b,15) == 1.55
 [1] FALSE
 
 Why should (a - b) == 1.55 fail when in fact b has been defined to be
 a - 1.55? Is this a precision issue? How do I correct this?

In the real world those definitions of b are the same, but not in the computer 
world. See FAQ 7.31.

Use either rounding or all.equal:

> all.equal(a-b, 1.55)
[1] TRUE

To all: this is quite a common question and it is documented in the FAQ. 
However, there is no such issue in Excel or some other spreadsheet 
programs, hence perhaps the confusion among novices. 

I wonder if there could be some type of global option which would get rid 
of these user mistakes or misunderstandings by setting some threshold 
for equality testing with ==.

Regards
Petr




 
 Alex
 
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] precision issue?

2010-03-04 Thread Berend Hasselman


Alexander Nervedi wrote:
 
 I am trying to figure out what is going on here.
 
 > a <- 68.08
 > b <- a-1.55
 > a-b
 [1] 1.55
 > a-b == 1.55
 [1] FALSE
 > round(a-b,2) == 1.55
 [1] TRUE
 > round(a-b,15) == 1.55
 [1] FALSE
 
 Why should (a - b) == 1.55 fail when in fact b has been defined to be a -
 1.55?  Is this a precision issue? How do i correct this?
 

Read the R FAQ.
See 7.31, "Why doesn't R think these numbers are equal?"

Berend
-- 
View this message in context: 
http://n4.nabble.com/precision-issue-tp1577815p1577903.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Emacs for R

2010-03-04 Thread Gustave Lefou
Thank you Uwe, Peter and David !

2010/3/2 David Cross d.cr...@tcu.edu

 There exist a number of resources on the web for ESS, which one can find
 easily with a Google search on Emacs Speaks Statistics.  Three articles
 that did not turn up in my search just now are:

 A. J. Rossini et al. Emacs Speaks Statistics ..., Unpublished, 2001,
 University of Washington (ross...@u.washington.edu)
 A. J. Rossini et al. Emacs Speaks Statistics ..., Journal of Computational
 and Graphical Statistics, 2004, 13(1), 247-261.
 R. M. Heiberger. Emacs Speaks Statistics ..., DSC 2001 Proceedings of the
 2nd International Workshop on Distributed Statistical Computing, March
 15-17, Vienna, Austria

 Cheers

 David Cross
 d.cr...@tcu.edu
 www.davidcross.us






 On Mar 1, 2010, at 10:41 AM, Gustave Lefou wrote:

   Dear all,

 From the recent discussion, I have wondered where I could find some quick

 step documentation on Emacs for R (especially on Windows).

 All I have found is that 80-page PDF: http://ess.r-project.org/ess.pdf

 Maybe I am asking for too much?

 Best,
 Gustave


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] counting the number of ones in a vector

2010-03-04 Thread Gavin Simpson
On Thu, 2010-03-04 at 00:03 +0100, Randall Wrong wrote:
 Thanks to all of you !
 
 (Benjamin Nutter, Henrique Dallazuanna, Tobias Verbeke, Jorge Ivan
 Velez, David Reinke and Gavin Simpson)
  
  
 x <- c(1, 1, 1, NA, NA, 2, 1, NA)
  
 > table(x)[1]
 1 
 4 
  
 Why do I get two numbers ?

It is printing a named vector. The 1 is the name (the group / factor level),
the 4 is the count. Try:

unname(table(x)[1])

and

str(table(x)[1])

etc to see what is going on.

HTH

G

  
 Thanks,
 Randall
  
 
  
 2010/2/26 Nutter, Benjamin nutt...@ccf.org
 But if x has any missing values:
 
  x <- c(1, 1, 1, NA, NA, 2, 1, NA)
 
  sum( x == 1)
 [1] NA
 
  sum(x==1, na.rm=TRUE)
 [1] 4
 
 
 
 
 
 -Original Message-
 From: r-help-boun...@r-project.org
 [mailto:r-help-boun...@r-project.org] On Behalf Of Henrique
 Dallazuanna
 Sent: Friday, February 26, 2010 9:47 AM
 To: Randall Wrong
 Cc: r-help@r-project.org
 Subject: Re: [R] counting the number of ones in a vector
 
 Try:
 
 sum(x == 1)
 
 On Fri, Feb 26, 2010 at 11:40 AM, Randall Wrong
 randall.wr...@gmail.com wrote:
  Dear R users,
 
  I want to count the number of ones in a vector x.
 
  That's what I did : length( x[x==1] )
 
  Is that a good solution ?
  Thank you very much,
  Randall
 
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
  http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible
 code.
 
 
 
 
 --
 Henrique Dallazuanna
 Curitiba-Paraná-Brasil
 25° 25' 40 S 49° 16' 22 O
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible
 code.
 
 
 
 
 
 

-- 
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
 Dr. Gavin Simpson [t] +44 (0)20 7679 0522
 ECRC, UCL Geography,  [f] +44 (0)20 7679 0565
 Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
 Gower Street, London  [w] http://www.ucl.ac.uk/~ucfagls/
 UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] logistic regression by group?

2010-03-04 Thread Corey Sparks

Hi, first, you should always provide some repeatable code for us to have a
look at, that shows what you have tried so far.  
That being said,  you can use the subset= option  in glm to subdivide your
data and run separate models like that, e.g.

fit.1 <- glm(y~x1+x2, data=yourdat, family=binomial, subset=group==1)
fit.2 <- glm(y~x1+x2, data=yourdat, family=binomial, subset=group==2)

where group is your grouping variable,
which should give you that kind of stratified model.
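
If there are many groups, something along these lines (an untested sketch) fits one
model per level of group in a single call:

fits <- lapply(split(yourdat, yourdat$group), function(d)
    glm(y ~ x1 + x2, data = d, family = binomial))
lapply(fits, summary)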
Hope this helps,
Corey

-
Corey Sparks, PhD
Assistant Professor
Department of Demography and Organization Studies
University of Texas at San Antonio
501 West Durango Blvd
Monterey Building 2.270C
San Antonio, TX 78207
210-458-3166
corey.sparks 'at' utsa.edu
https://rowdyspace.utsa.edu/users/ozd504/www/index.htm
-- 
View this message in context: 
http://n4.nabble.com/logistic-regression-by-group-tp1577655p1577971.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Odp: precision issue?

2010-03-04 Thread Ted Harding
On 04-Mar-10 10:50:56, Petr PIKAL wrote:
 Hi
 
 r-help-boun...@r-project.org wrote on 04.03.2010 10:36:43:
 Hi R Gurus,
 
 I am trying to figure out what is going on here.
 
  > a <- 68.08
  > b <- a-1.55
  > a-b
  [1] 1.55
  > a-b == 1.55
  [1] FALSE
  > round(a-b,2) == 1.55
  [1] TRUE
  > round(a-b,15) == 1.55
  [1] FALSE
 
 Why should (a - b) == 1.55 fail when in fact b has been defined
 to be a - 1.55?  Is this a precision issue? How do i correct this?
 
 In real world those definitions of b are the same but not in computer 
 world. See FAQ 7.31
 
 Use either rounding or all.equal.
 
 all.equal(a-b, 1.55)
 [1] TRUE
 
 To all, this is quite a common question and it is documented in the FAQ. 
 However there is no such issue in Excel or some other spreadsheet 
 programs, therefore maybe a confusion from novices. 
 
 I wonder if there could be some type of global option which will
 get rid of these users mistakes or misunderstandings by setting
 some threshold option for equality testing by use ==.
 
 Regards
 Petr
 
 Alex

Interesting suggestion, but in my view it would probably give
rise to more problems than it would avoid!

The fundamental issue is that many inexperienced users are not
aware that once 68.08 has got inside the computer (as used by
R and other programs which do fixed-length binary arithmetic)
it is no longer 68.08 (though 1.55 is still 1.55).

Since == tests for equality of stored binary representations,
it is inevitable that it will often return FALSE when the user
would logically expect TRUE (as in Alex's query above). When
a naive user encounters this, and is led to raise a query on
the list, a successful reply will have the effect that yet one
more user has learned something. This is useful.

As the help page ?Comparison (also accessible from ?== etc.)
states:

  Do not use '==' and '!=' for tests, such as in 'if' expressions,
  where you must get a single 'TRUE' or 'FALSE'.  Unless you are
  absolutely sure that nothing unusual can happen, you should use
  the 'identical' function instead.

  For numerical and complex values, remember '==' and '!=' do not
  allow for the finite representation of fractions, nor for rounding
  error.  Using 'all.equal' with 'identical' is almost always
  preferable.  See the examples.

It can on occasion be useful to be able to test for exact equality
of internal binary representations. Also, what is really going on
when == appears to fail can be ascertained by evaluating the
within-computer difference between allegedly equivalent expressions.
Thus, for instance:

  a <- 68.08
  b <- a-1.55
  a-b == 1.55
  # [1] FALSE
  (a-b) - 1.55
  # [1] -2.88658e-15

  all.equal((a-b), 1.55)
  # [1] TRUE

I think that introducing a default tolerance option to ==,
thus making it masquerade as all.equal(), would both suppress
the capability of == to test exact equality of internal
representations, and contribute to persistence of misconceptions
by naive users. At least the current situation contributes to
removing the latter.

The result of the latter could well be that a complicated
program goes astray because a deep-lying use of == gives
the "equal within tolerance" result rather than the "not
exactly equal" result, leading to some really difficult
queries to the list!

Of course, the use of all.equal((a-b), 1.55) is a lot longer
than the use of (a-b) == 1.55, and there is an understandable
argument for something as snappy as == for testing approximate
equality. But I think that subverting == is the wrong way to
go about it. Leave == alone and, according to taste, introduce
a new operator (which the user can define for himself anyway),
say %==%:

  "%==%" <- function(x,y){ all.equal(x,y) }
  (a-b) %==% 1.55
  # [1] TRUE

Or perhaps

  "%==%" <- function(x,y){ identical(all.equal(x,y), TRUE) }

as in the Example in ?Comparison. Then you have a snappy shorthand,
and it operates exactly as all.equal() does and with the same
tolerance, and it won't break anything else as a result of
subverting ==.

Ted.


E-Mail: (Ted Harding) ted.hard...@manchester.ac.uk
Fax-to-email: +44 (0)870 094 0861
Date: 04-Mar-10   Time: 12:35:08
-- XFMail --

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Rows classification problem

2010-03-04 Thread Carlos Guerra
Dear all,

I have a table like this:

> a <- read.csv("test.csv", header = TRUE, sep = ";")
> a

      UTM       pUrb  pUrb_class
1  NF1885  20,160307          NA
2  NF1886  51,965649          NA
3  NF1893  26,009581          NA
4  NF1894   3,141484          NA
5  NF1895  64,296826          NA
6  NF1896  14,174068          NA
7  NF1897  40,985589          NA
8  NF1898  34,054325          NA
9  NF1899  20,657632          NA
10 NF1982  54,712737          NA
11 NF1983  56,016067          NA
12 NF1984   5,977961          NA

What I wanted to do is to obtain classified values for pUrb_class in relation 
to pUrb, as in:

pUrb < 20   ---  pUrb_class = 1
pUrb 20-40  ---  pUrb_class = 2
pUrb 40-60  ---  pUrb_class = 3


Can anyone help me?

Thanks in advance,
Carlos
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Histogram color

2010-03-04 Thread Ashta
In a histogram, is it possible to have different colors?
Example: I generated

x <- rnorm(100)
hist(x)

I want the histogram to have different colors based on the following condition:
bars > mean(x)+sd(x) with red color and < mean(x) - sd(x) with red color as
well. The middle ones with blue color.
Is it possible to do that in R?
Thanks

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] End of line marker?

2010-03-04 Thread David Winsemius


On Mar 3, 2010, at 2:22 PM, jonas garcia wrote:


Dear R users,

I am trying to read a huge file in R. For some reason, only a part of the
file is read. When I further investigated, I found that in one of my
non-numeric columns there is one odd character responsible for this, which
I reproduce below:

In case you cannot see it, it looks like a right arrow, but it is not the
one you get from Microsoft Word via the Insert Symbol menu.

I think my dat file is broken and that funny character is an EOL marker that
makes R not read the rest of the file. I am sure the character is there by
chance, but I fear that it might be present in some other big files I have to
work with as well. So, is there any clever way to remove this inconvenient
character in R, avoiding having to edit the file in Notepad and remove it
manually?

Code I am using:

read.csv("new3.dat", header=F)

Warning message:
In read.table(file = file, header = header, sep = sep, quote = quote,  :
  incomplete final line found by readTableHeader on 'new3.dat'


I think you should identify the offending line by using the  
count.fields function and fix it with an editor.
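
Something along these lines (untested, and assuming the file is comma-separated)
should locate the offending line:

nf <- count.fields("new3.dat", sep = ",")
which(nf != nf[1])   # lines whose field count differs from the first line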



--
David


I am working with R 2.10.1 in windows XP.

Thanks in advance

Jonas


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Odp: precision issue?

2010-03-04 Thread Duncan Murdoch

On 04/03/2010 7:35 AM, (Ted Harding) wrote:

On 04-Mar-10 10:50:56, Petr PIKAL wrote:
 Hi
 
 r-help-boun...@r-project.org wrote on 04.03.2010 10:36:43:

 Hi R Gurus,
 
 I am trying to figure out what is going on here.
 
  > a <- 68.08
  > b <- a-1.55
  > a-b
  [1] 1.55
  > a-b == 1.55
  [1] FALSE
  > round(a-b,2) == 1.55
  [1] TRUE
  > round(a-b,15) == 1.55
  [1] FALSE
 
 Why should (a - b) == 1.55 fail when in fact b has been defined

 to be a - 1.55?  Is this a precision issue? How do i correct this?
 
 In real world those definitions of b are the same but not in computer 
 world. See FAQ 7.31
 
 Use either rounding or all.equal.
 
 all.equal(a-b, 1.55)

 [1] TRUE
 
 To all, this is quite common question and it is documented in FAQs. 
 programs, therefore maybe a confusion from novices. 
 
 I wonder if there could be some type of global option which will

 get rid of these users mistakes or misunderstandings by setting
 some threshold option for equality testing by use ==.
 
 Regards

 Petr
 
 Alex


Interesting suggestion, but in my view it would probably give
rise to more problems than it would avoid!

The fundamental issue is that many inexperienced users are not
aware that once 68.08 has got inside the computer (as used by
R and other programs which do fixed-length binary arithmetic)
it is no longer 68.08 (though 1.55 is still 1.55).
  


I think most of what you write above and below is true, but your 
parenthetical remark is not.  1.55 can't be represented exactly in the 
double precision floating point format used in R.  It doesn't have a 
terminating binary expansion, so it will be rounded to a binary 
fraction.  I believe you can see the binary expansion using this code:


x <- 1.55
for (i in 1:70) {
  cat(floor(x))
  if (i == 1) cat(".")
  x <- 2*(x - floor(x))
}

which gives

1.10001100110011001100110011001100110011001100110011010

Notice how it becomes a repeating expansion, with "0011" repeated 12 
times, but then it finishes with "010", because we've run 
out of bits.

So 1.55 is actually stored as a number which is ever so slightly bigger.

Duncan Murdoch


Re: [R] Odp: precision issue?

2010-03-04 Thread Marc Schwartz
On Mar 4, 2010, at 4:50 AM, Petr PIKAL wrote:

 Hi
 
 r-help-boun...@r-project.org wrote on 04.03.2010 10:36:43:
 
 
 Hi R Gurus,
 
 I am trying to figure out what is going on here.
 
 > a <- 68.08
 > b <- a-1.55
 > a-b
 [1] 1.55
 > a-b == 1.55
 [1] FALSE
 > round(a-b,2) == 1.55
 [1] TRUE
 > round(a-b,15) == 1.55
 [1] FALSE

 Why should (a - b) == 1.55 fail when in fact b has been defined to be
 a - 1.55? Is this a precision issue? How do I correct this?
 
 In real world those definitions of b are the same but not in computer 
 world. See FAQ 7.31
 
 Use either rounding or all.equal.
 
 all.equal(a-b, 1.55)
 [1] TRUE
 
 To all, this is quite common question and it is documented in FAQs. 
 However there is no such issue in Excel or some other spreadsheet 
 programs, therefore maybe a confusion from novices. 
 

snip

Actually, there are floating point issues in Excel. Search the archives using 
"Excel rounding" and you will see discussions dating back to circa 2003 on 
this, which also reference the differences in OO.org's Calc and Gnumeric's 
handling of floats.

Also:

  http://support.microsoft.com/default.aspx?scid=kb;[LN];214118

  http://support.microsoft.com/default.aspx?scid=kb;EN-US;78113

HTH,

Marc Schwartz

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Odp: precision issue?

2010-03-04 Thread Petr PIKAL
OK, I understand. Your suggestion is far better than mine. Maybe it could 
be incorporated in base R and mentioned on the help page for logical operators 
as one standard way of testing equality of fractional numbers.

Regards
Petr


__
R-help@r-project.org mailing list

Re: [R] only actual variable names in all.names()

2010-03-04 Thread Henrique Dallazuanna
Try this:

all.vars(substitute(y ~ poly(x, k) + z, list(k = NA)))

Or, if you don't want to substitute the var k:

gsub(".*\\((.*),.*", "\\1", rownames(attr(terms(y ~ poly(x, k) + z),
'factors')))


On Thu, Mar 4, 2010 at 6:57 AM, Vito Muggeo (UniPa)
vito.mug...@unipa.it wrote:
 dear all,
 When I use all.vars(), I am interest in extracting only the variable names..
 Here a simple example

 all.vars(as.formula(y~poly(x,k)+z))

 returns
 [1] y x k z

 and I would like to obtain
 y x z

 Where is the trick?

 many thanks
 vito


 --
 
 Vito M.R. Muggeo
 Dip.to Sc Statist e Matem `Vianelli'
 Università di Palermo
 viale delle Scienze, edificio 13
 90128 Palermo - ITALY
 tel: 091 23895240
 fax: 091 485726/485612
 http://dssm.unipa.it/vmuggeo

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40 S 49° 16' 22 O

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Type-I v/s Type-III Sum-Of-Squares in ANOVA

2010-03-04 Thread S Ellison


 John Fox j...@mcmaster.ca 02/03/2010 02:19 
There's also a serious question about whether one would
be interested in main effects defined as averages over the levels of
the other factor when interactions are present.

My personal take on this particular chestnut is that I often want to
ask something about the relative size of the effects. If the so-called
main effect(s) is/are very much larger than the interactions, one may
be able to make generalisations which have practical use. If the effects
are much of a size, there's nothing much to be gained by asking about
main effects.  

Mind you, that's probably a criticism of significance testing as the be-all
and end-all, rather than a problem with Type I-III. Asking 'how big is
it?' is a step beyond 'is it there?'.

Steve E


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Histogram color

2010-03-04 Thread David Winsemius


On Mar 4, 2010, at 7:41 AM, Ashta wrote:


In a  histogram , is it possible to have different colors?
Example. I generated

x <- rnorm(100)
hist(x)

I want the histogram to have different colors based on the following
condition:

bars > mean(x)+sd(x) with red color and < mean(x) - sd(x) with red color as
well. The middle ones with blue color.


bkrs <- hist(x); hist(x, col = ifelse(bkrs$mids < 0-1, "red",
                                      ifelse(bkrs$mids > 0+1, "blue", "black")))


This has the side effect of first plotting a histogram with bars
that have the default fill, but then gathers the values in an object bkrs
that is used to assign colors based on midpoints. Following your lead,
I used default values for the mean and SD, so filling them in is left
as an exercise for the poster.
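
If you want the actual mean/SD cut-offs, something like this sketch (untested)
should do it:

x <- rnorm(100)
bkrs <- hist(x, plot = FALSE)
cols <- ifelse(bkrs$mids < mean(x) - sd(x) | bkrs$mids > mean(x) + sd(x),
               "red", "blue")
plot(bkrs, col = cols)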




Is it possible to do that in R?
Thanks

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Rows classification problem

2010-03-04 Thread Henrique Dallazuanna
Try this:

a$pUrb_class <- cut(a$pUrb, c(-Inf, 20, 40, 60, Inf), labels = 1:4)
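
Note that pUrb is written with decimal commas; if it comes in as character,
reading the file with read.csv2() (which assumes sep = ";" and dec = ",") should
give a numeric column first. A sketch:

a <- read.csv2("test.csv")
a$pUrb_class <- cut(a$pUrb, c(-Inf, 20, 40, 60, Inf), labels = 1:4)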

On Thu, Mar 4, 2010 at 11:11 AM, Carlos Guerra
carlosguerra@gmail.com wrote:
 Dear all,

 I have a table like this:

 a - read.csv(test.csv, header = TRUE, sep = ;)
 a

        UTM         pUrb                    pUrb_class
 1      NF1885    20,160307         NA
 2      NF1886    51,965649         NA
 3      NF1893    26,009581         NA
 4      NF1894      3,141484         NA
 5      NF1895    64,296826         NA
 6      NF1896    14,174068         NA
 7      NF1897    40,985589         NA
 8      NF1898    34,054325         NA
 9      NF1899    20,657632         NA
 10    NF1982    54,712737         NA
 11    NF1983    56,016067         NA
 12    NF1984      5,977961         NA

 What I wanted to do is to obtain classified values for pUrb_class in 
 relation to pUrb as if:

 pUrb 20      ---  pUrb_class = 1
 pUrb 20-40  ---  pUrb_class = 2
 pUrb 40-60  ---  pUrb_class = 3
 

 Can anyone help me?

 Thanks in advance,
 Carlos
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40 S 49° 16' 22 O

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Rows classification problem

2010-03-04 Thread Jorge Ivan Velez
Hi Carlos,

Take a look at ?cut, ?ifelse and ?transform for some ideas. Also, the
function recode() in the car package might help.

HTH,
Jorge


On Thu, Mar 4, 2010 at 7:35 AM, Carlos Guerra  wrote:

 Dear all,

 I have a table like this:

  a - read.csv(test.csv, header = TRUE, sep = ;)
  a

 UTM pUrbpUrb_class
 1  NF188520,160307 NA
 2  NF188651,965649 NA
 3  NF189326,009581 NA
 4  NF1894  3,141484 NA
 5  NF189564,296826 NA
 6  NF189614,174068 NA
 7  NF189740,985589 NA
 8  NF189834,054325 NA
 9  NF189920,657632 NA
 10NF198254,712737 NA
 11NF198356,016067 NA
 12NF1984  5,977961 NA

 What I wanted to do is to obtain classified values for pUrb_class in
 relation to pUrb as if:

 pUrb 20  ---  pUrb_class = 1
 pUrb 20-40  ---  pUrb_class = 2
 pUrb 40-60  ---  pUrb_class = 3
 

 Can anyone help me?

 Thanks in advance,
 Carlos
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nu-SVM crashes in e1071

2010-03-04 Thread David Meyer

For the record: this was a bug in e1071 (which is already fixed; a new
version is on CRAN).

Best
David

Steve Lianoglou wrote:

Hi,

On Wed, Mar 3, 2010 at 4:08 AM, Häring, Tim (LWF)
tim.haer...@lwf.bayern.de wrote:

(...)


While you're sending your bug report to David, perhaps you can try the
SVM from kernlab.

It relies on code from libsvm, too, but ... you never know. It can't
hurt to try.

Hi Steve,

thanks for that hint.
I tried the ksvm() function but get an error message:

model <- ksvm(soil_unit ~ ., train, type = "nu-svc")
Using automatic sigma estimation (sigest) for RBF or laplace kernel
Error in votematrix[i, ret < 0] <- votematrix[i, ret < 0] + 1 :
 NAs are not allowed in subscripted assignments

But there are no NAs in my dataset. I checked it with
summary(is.na(train))


All the same, it seems there might be something funky with your data?

Not sure how to help debug further. If you like, you can send an *.rda file
of your soil_unit data offline and I can try, but otherwise I guess
you're on your own?

What if you remove some of the columns of your matrix? Will this
eventually work?
-steve



--
Priv.-Doz. Dr. David Meyer
Department of Information Systems and Operations

WU
Wirtschaftsuniversität Wien
Vienna University of Economics and Business
Augasse 2-6, 1090 Vienna, Austria
Tel: +43-1-313-36-4393
Fax: +43-1-313-36-90-4393
HP:  http://ec.wu.ac.at/~meyer

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] help

2010-03-04 Thread mahalakshmi sivamani
Hi all ,

I have one query.

I have a list of some .cel files. In my program I have to mention the path of
these .cel files.

part of my program is,

rna.data <- exprs(justRMA(filenames=file.names, celfile.path=datadir,
sampleNames=sample.names, phenoData=pheno.data,
cdfname=cleancdfname("hg18_Affymetrix U133A")))


In the place of datadir I have to mention the character string of the
directory of these .cel files. I don't know how to give the path for all
these files.

I set the path as given below,


rna.data <- exprs(justRMA(filenames=file.names, celfile.path="D:/MONO1.CEL
D:/MONO2.CEL D:/MONO3.CEL D:/MACRO1.CEL D:/MACRO2.CEL
D:/MACRO3.CEL", sampleNames=sample.names, phenoData=pheno.data,
cdfname=cleancdfname("hg18_Affymetrix U133A")))


it shows this error,

Error: unexpected string constant in
rna.data-exprs(justRMA(filenames=file.names, celfile.path=D:/MONO1
D:/MONO2


Could you please help me in this case?
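
I am guessing that celfile.path should be just the directory and that the file
names should go in 'filenames', something like the following (I am not sure):

file.names <- c("MONO1.CEL", "MONO2.CEL", "MONO3.CEL",
                "MACRO1.CEL", "MACRO2.CEL", "MACRO3.CEL")
rna.data <- exprs(justRMA(filenames = file.names, celfile.path = "D:/",
                          sampleNames = sample.names, phenoData = pheno.data,
                          cdfname = cleancdfname("hg18_Affymetrix U133A")))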


Thanks in advance.


with regards,

S.Mahalakshmi


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Please help me how to make input files in Extremes Toolkit model

2010-03-04 Thread Huyen Quan
Dear sir/madam
 
My name is Quan, and I am a PhD student in Korea. My major is Hydrology in Water 
Resources Engineering. I am interested in the Extremes Toolkit and I learned 
about you from information on the internet. 
I installed this toolkit successfully, but I do not know how to make the kind of 
input files it needs, such as Flood.dat, Ozone4H.dat, Flood.R and 
HEAT.R (these example files are a free download and are shown in the attached files).
I want to learn how to make these files from the sample 
files: http://www.isse.ucar.edu/extremevalues/evtk.html 

but I cannot download those files because this link gives an error. 

Please help me to get these files and to make the input files; if you have some 
sample files, please send them to me so I can learn from them and make my own.

Thank you so much for your help, and I hope to hear from you as soon as 
possible.
 
Mr. Ngo Quan


  __
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] cluster with mahalanobis distance

2010-03-04 Thread Friedrich Leisch


Model-based clustering, e.g. using package mclust will do what you
want: it uses normal densities to calculate similarities of objects to
clusters, which is a monotone transformation of Mahalanobis distance
(basically what's inside the exp() of the multivariate Gaussian
density).

If you believe that Mahalanobis distance is the right one for your
data, then believing in a multivariate Gaussian model isn't too
far-fetched ...
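
A minimal sketch of that route (assuming the mclust package is installed; the
data set is only an illustration):

library(mclust)
fit <- Mclust(iris[, 1:4], G = 3)          # Gaussian mixture with 3 components
table(fit$classification, iris$Species)    # how the model-based clusters line up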

Best,
Fritz



 On Wed, 3 Mar 2010 16:55:19 -0800 (PST),
 Phil Spector (PS) wrote:

   Albyn -
   That's a very important fact that I overlooked in my 
   original response.  Thanks for pointing it out.
 - Phil


   On Wed, 3 Mar 2010, Albyn Jones wrote:

   Note: this procedure assumes that all clusters have the same covariance 
matrix.
   
   albyn
   
   On Wed, Mar 03, 2010 at 01:23:37PM -0800, Phil Spector wrote:
   The manhattan distance and the Mahalanobis distances are quite different.
   One of the main differences is that a covariance matrix is necessary to
   calculate the Mahalanobis
   distance, so it's not easily accommodated by dist.  There is a function in
   base R which does calculate the Mahalanobis
   distance -- mahalanobis().  So if you pass a distance matrix
   calculated by mahalanobis() to the clustering function, you'll
   get what you want.
   - Phil Spector
   Statistical Computing Facility
   Department of Statistics
   UC Berkeley
   spec...@stat.berkeley.edu
   
   
   On Wed, 3 Mar 2010, Tal Galili wrote:
   
   when you create the distance function to put into the hclust, use:
   
   dist(x, method = "manhattan")
   
   
   Tal
   
   
   
   Contact
   Details:---
   Contact me: tal.gal...@gmail.com |  972-52-7275845
   Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
   www.r-statistics.com (English)
   
--
   
   
   
   
   On Wed, Mar 3, 2010 at 9:14 PM, naama nw...@technion.ac.il wrote:
   
   
   How can I perform cluster analysis using the mahalanobis distance 
instead
   of
   the euclidean distance?
   thank you
   Naama Wolf
   
   --
   View this message in context:
   
http://n4.nabble.com/cluster-with-mahalanobis-distance-tp1577038p1577038.html
   Sent from the R help mailing list archive at Nabble.com.
   
   __
   R-help@r-project.org mailing list
   https://stat.ethz.ch/mailman/listinfo/r-help
   PLEASE do read the posting guide
   http://www.R-project.org/posting-guide.html
   and provide commented, minimal, self-contained, reproducible code.
   
   
   [[alternative HTML version deleted]]
   
   __
   R-help@r-project.org mailing list
   https://stat.ethz.ch/mailman/listinfo/r-help
   PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
   and provide commented, minimal, self-contained, reproducible code.
   
   
   __
   R-help@r-project.org mailing list
   https://stat.ethz.ch/mailman/listinfo/r-help
   PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
   and provide commented, minimal, self-contained, reproducible code.
   
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] What is assign attribute?

2010-03-04 Thread rkevinburton

I am sorry I still don't understand.

In the example you give, the 'assign' vector is a vector of length 6 and there 
are indeed 6 columns in the data frame. But the formula only has two variables, 
namely 'Month' and 'Wind'. Where do the values of the 'assign' vector come 
from? I see '0 1 1 1 1 2'. What is '1' an index to? '2'? Maybe I am getting 
confused with what the term 'factors' does to the formula.

Thanks again for your help.

Kevin

 Peter Dalgaard p.dalga...@biostat.ku.dk wrote: 
 rkevinbur...@charter.net wrote:
  I am just curious. Every once and a while I see an attribute attached to an 
  object called assign. What meaning does this have? For example:
  
  dist ~ speed, data=cars
  
  forms a matrix like:
  
   num [1:50, 1:2] 1 1 1 1 1 1 1 1 1 1 ...
   - attr(*, dimnames)=List of 2
..$ : chr [1:50] 1 2 3 4 ...
..$ : chr [1:2] (Intercept) speed
   - attr(*, assign)= int [1:2] 0 1
  
  The dimnames attribute is fairly self-explanatory. I just am not sure 
  what the assign attribute means.
 
 It has to do with the mapping between terms of the formula and columns 
 of the design matrix:
 
   str(model.matrix(Ozone~factor(Month)+Wind,data=airquality))
   num [1:116, 1:6] 1 1 1 1 1 1 1 1 1 1 ...
   - attr(*, dimnames)=List of 2
..$ : chr [1:116] 1 2 3 4 ...
..$ : chr [1:6] (Intercept) factor(Month)6 factor(Month)7 
 factor(Month)8 ...
   - attr(*, assign)= int [1:6] 0 1 1 1 1 2
   - attr(*, contrasts)=List of 1
..$ factor(Month): chr contr.treatment
 
 I.e. columns 2:5 belong to the first non-intercept term, factor(Month). 
 (Notice that it is implicitly assumed that you have the corresponding 
 terms() output to hand, likewise for the contrasts attribute.)
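 
 A small illustration of that mapping, reusing the airquality example above:
 
 mm <- model.matrix(Ozone ~ factor(Month) + Wind, data = airquality)
 tt <- terms(Ozone ~ factor(Month) + Wind)
 attr(tt, "term.labels")                # "factor(Month)" "Wind": the terms indexed by 1 and 2
 attr(mm, "assign")                     # 0 1 1 1 1 2; the 0 marks the intercept column
 colnames(mm)[attr(mm, "assign") == 1]  # the four dummy columns that belong to factor(Month)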
 -- 
 O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
   (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
 ~~ - (p.dalga...@biostat.ku.dk)  FAX: (+45) 35327907

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help

2010-03-04 Thread Alberto Goldoni
Dear Mahalakshmi,
the simplest way to do this and to avoid your error is to start R
directly from the folder that contains your .CEL files and then:

data <- ReadAffy()     # read all the .CEL files in the working directory

data.rma <- rma(data)  # in order to run rma on your .CEL files

hope helps
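
For the original question, a minimal sketch of the justRMA() route is below; the
folder path is a hypothetical example, and the sampleNames/phenoData/cdfname
arguments from the original call can be added back unchanged:

library(affy)
datadir <- "D:/cel_files"        # hypothetical folder that holds all the .CEL files
file.names <- c("MONO1.CEL", "MONO2.CEL", "MONO3.CEL",
                "MACRO1.CEL", "MACRO2.CEL", "MACRO3.CEL")
rna.data <- exprs(justRMA(filenames = file.names,
                          celfile.path = datadir))   # one directory string, not a list of files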

2010/3/4 mahalakshmi sivamani mahasiva1...@gmail.com:
 Hi all ,

 I have one query.

 i have list of some  .cel files. in my program i have to mention the path of
 these .cel files

 part of my program is,

 rna.data-exprs(justRMA(filenames=file.names, celfile.path=*datadir*,
 sampleNames=sample.names, phenoData=pheno.data,
 cdfname=cleancdfname(hg18_Affymetrix U133A)))


 in the place of datadir i have to mention the character string of the
 directory of these .cel files. I don't know how to give the path for all
 these files.

 i set the path as given below,


 rna.data-exprs(justRMA(filenames=file.names, celfile.path=*D:/MONO1.CEL
 D:/MONO2.CEL D:/MONO3.CEL D:/MACRO1.CEL D:/MACRO2.CEL
 D:/MACRO3.CEL*,sampleNames=sample.names, phenoData=pheno.data,
 cdfname=cleancdfname(hg18_Affymetrix U133A)))


 it shows this error,

 Error: unexpected string constant in
 rna.data-exprs(justRMA(filenames=file.names, celfile.path=D:/MONO1
 D:/MONO2


 could u please help me in this case.


 Thanks in advance.


 with regards,

 S.Mahalakshmi

        [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
-
Dr. Alberto Goldoni
Bologna, Italy

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] filtering signals per day

2010-03-04 Thread anna

ok this is simpler, thanks for helping guys :)

-
Anna Lippel
-- 
View this message in context: 
http://n4.nabble.com/filtering-signals-per-day-tp1577044p1578176.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] sum of list elements

2010-03-04 Thread Eleni Christodoulou
Dear list,

I have some difficulty in manipulating list elements. More specifically, I
am performing svm regression and have a list of lists, called pred.svm. The
elements of the second list are 3D arrays. Thus I have pred.svm[[i]][[j]],
with 1 <= i <= 5 and 1 <= j <= 20.
I want to take the sum of the elements a specific array dimension across all
j, for one i. Mathematically speaking, I want to calculate *W* as:

  *W = pred.svm[[i]][[1]][1,2,5] + pred.svm[[i]][[2]][1,2,5]+
pred.svm[[i]][[3]][1,2,5]+...+ pred.svm[[i]][[20]][1,2,5]*

I have tried to apply the *lapply() *function but it seems that its
arguments can only be vector elements of a list...Do I need to convert the
array data to vector data?

Any advice would be very welcome!

Thanks a lot,
Eleni

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Passing rq models to Fstats

2010-03-04 Thread Jonathan P Daily
I have seen literature on using a combination of the 'strucchange' and 
'segmented' packages to find and fit piecewise linear models. I have been 
trying to apply the same methods to quantile regression from the quantreg 
package, but am having issues using a rq object where the function 
assumes a lm. Has anyone had this issue?
--
Jonathan P. Daily
Technician - USGS Leetown Science Center
11649 Leetown Road
Kearneysville WV, 25430
(304) 724-4480
Is the room still a room when its empty? Does the room,
 the thing itself have purpose? Or do we, what's the word... imbue it.
 - Jubal Early, Firefly
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Type-I v/s Type-III Sum-Of-Squares in ANOVA

2010-03-04 Thread Ista Zahn
On Thu, Mar 4, 2010 at 9:03 AM, S Ellison s.elli...@lgc.co.uk wrote:


 John Fox j...@mcmaster.ca 02/03/2010 02:19 
There's also a serious question about whether one would
be interested in main effects defined as averages over the level of
 the
other factor when interactions are present.

 My personal take on this particular chestnut is that I often want to
 ask something about the relative size of the effects. If the so-called
 main effect(s) is/are very much larger than the interactions, one may
 be able to make generalisations which have practical use.

Sure. But if there is an interaction, main effect generalizations are
going to be less precise than generalizations based on simple effects.
I agree that there are some situations in which it makes sense to
ignore a small but significant interaction, but I think this should be
a rare exception to the rule don't interpret main effects in the
presence of an interaction.

-Ista

If the effects
 are much of a size, there's nothing much to be gained by asking about
 main effects.

 Mind you, that's probably a crit of significance testing as the be-all
 and end-all, rather than a problem with type I-III. Asking 'how big is
 it?' is a step beyond 'is it there?'.

 Steve E

 ***
 This email and any attachments are confidential. Any u...{{dropped:18}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] sum of list elements

2010-03-04 Thread Dimitris Rizopoulos
do these lists contain 3D arrays of the same dimensions? If yes, then 
you could use


Reduce("+", pred.svm[[i]])[1,2,5]

otherwise a for-loop will also be clear and efficient, e.g.,

W <- pred.svm[[i]][[1]][1,2,5]
for (j in 2:20) {
    W <- W + pred.svm[[i]][[j]][1,2,5]
}
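
A quick toy check of the same idea (the objects and dimensions here are made up,
not from the original post):

set.seed(1)
pred.svm <- list(list(array(rnorm(30), dim = c(2, 3, 5)),
                      array(rnorm(30), dim = c(2, 3, 5))))
i <- 1
Reduce("+", pred.svm[[i]])[1, 2, 5]
sum(sapply(pred.svm[[i]], function(a) a[1, 2, 5]))   # same value, via sapply()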


I hope it helps.

Best,
Dimitris


On 3/4/2010 4:02 PM, Eleni Christodoulou wrote:

Dear list,

I have some difficulty in manipulating list elements. More specifically, I
am performing svm regression and have a list of lists, called pred.svm. The
elements of the second list are 3D arrays. Thus I have pred.svm[[i]][[j]],
with 1=i=5 and 1=j=20.
I want to take the sum of the elements a specific array dimension across all
j, for one i. Mathematically speaking, I want to calculate *W* as:

   *W = pred.svm[[i]][[1]][1,2,5] + pred.svm[[i]][[2]][1,2,5]+
pred.svm[[i]][[3]][1,2,5]+...+ pred.svm[[i]][[20]][1,2,5]*

I have tried to apply the *lapply() *function but it seems that its
arguments can only be vector elements of a list...Do I need to convert the
array data to vector data?

Any advice would be very welcome!

Thanks a lot,
Eleni

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Dimitris Rizopoulos
Assistant Professor
Department of Biostatistics
Erasmus University Medical Center

Address: PO Box 2040, 3000 CA Rotterdam, the Netherlands
Tel: +31/(0)10/7043478
Fax: +31/(0)10/7043014

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] mysqlWriteTable . error in your SQL syntax?

2010-03-04 Thread Vladimir Morozov
Hi,

Can somebody advice on weird mysqlWriteTable bug.

 mysqlWriteTable(conn, 'comparison',design2, row.names = F, overwrite=T)

Error in mysqlExecStatement(conn, statement, ...) :

RS-DBI driver: (could not run statement: You have an error in your SQL syntax; 
check the manual that corresponds to your MySQL server version for the right 
syntax to use near 'condition text,

treatment text,

condition2 text,

conditionNum double,

impo' at line 12)

[1] FALSE

Warning message:

In mysqlWriteTable(conn, comparison, design2, row.names = F, overwrite = T) :

could not create table: aborting mysqlWriteTable



The problem seems to be in the double quotes:



syntax to use near 'condition text,



I have tracked it down to a problem with a specific column in my data frame.

If the "condition" column is excluded, it works fine:

 mysqlWriteTable(conn, 'comparison',design2[,-10], row.names = F, overwrite=T)

[1] TRUE



 class(design2[,10])

[1] character

 packageDescription('RMySQL')

Package: RMySQL

Version: 0.7-4

Date: 2009-04-07

Title: R interface to the MySQL database

Author: David A. James and Saikat DebRoy

Maintainer: Jeffrey Horner jeff.hor...@vanderbilt.edu

Description: Database interface and MySQL driver for R. This version

complies with the database interface definition as implemented

in the package DBI 0.2-2.

LazyLoad: true

Depends: R (= 2.8.0), methods, DBI (= 0.2-2), utils

License: GPL-2

URL: http://biostat.mc.vanderbilt.edu/RMySQL

Collate: S4R.R zzz.R MySQLSupport.R dbObjectId.R MySQL.R

Packaged: Tue Apr 7 15:19:44 2009; hornerj

Repository: CRAN

Date/Publication: 2009-04-14 17:26:56

Built: R 2.10.1; x86_64-unknown-linux-gnu; 2010-01-16 19:42:24 UTC;

unix







Vladimir Morozov
Sr. Computational Biologist
ALS Therapy Development Institute
215 First Street, Cambridge MA, 02142
Phone: 617-441-7242
www.als.nethttp://www.als.net/
Want to help stop ALS? Become an ALS Ambassador and take action. Learn more 
online at www.als.net/ambassadorhttp://www.als.net/ambassador



***
The information contained in this electronic message is ...{{dropped:21}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] ifthen() question

2010-03-04 Thread AC Del Re
Hi All,

I am using a specialized aggregation function to reduce a dataset with
multiple rows per id down to 1 row per id. My function works perfectly when
there are > 1 rows per id but alters the 'var.g' in undesirable ways when this
condition is not met. Therefore, I have been trying ifthen() statements to
keep the original value when there is only 1 row for an id, but I cannot get it to
work. e.g.:

#function to aggregate effect sizes:
aggs <- function(g, n.1, n.2, cor = .50) {
  n.1 <- mean(n.1)
  n.2 <- mean(n.2)
  N_ES <- length(g)
  corr.mat <- matrix(rep(cor, N_ES^2), nrow=N_ES)
  diag(corr.mat) <- 1
  g1g2 <- cbind(g) %*% g
  PSI <- (8*corr.mat + g1g2*corr.mat^2)/(2*(n.1+n.2))
  PSI.inv <- solve(PSI)
  a <- rowSums(PSI.inv)/sum(PSI.inv)
  var.g <- 1/sum(PSI.inv)
  g <- sum(g*a)
  out <- cbind(g, var.g, n.1, n.2)
  return(out)
  }


# automating this procedure for all rows of df. This format works perfectly
when there is > 1 row per id only:

agg_g <- function(id, g, n.1, n.2, cor = .50) {
  st <- unique(id)
  out <- data.frame(id=rep(NA,length(st)))
  for(i in 1:length(st))   {
    out$id[i] <- st[i]
    out$g[i] <- aggs(g=g[id==st[i]], n.1= n.1[id==st[i]],
                     n.2 = n.2[id==st[i]], cor)[1]
    out$var.g[i] <- aggs(g=g[id==st[i]], n.1= n.1[id==st[i]],
                         n.2 = n.2[id==st[i]], cor)[2]
    out$n.1[i] <- round(mean(n.1[id==st[i]]),0)
    out$n.2[i] <- round(mean(n.2[id==st[i]]),0)
  }
  return(out)
}


# The attempted solution using ifthen() and minor changes to function but
it's not working properly:
agg_g <- function(df, var.g, id, g, n.1, n.2, cor = .50) {
  df$var.g <- var.g
  st <- unique(id)
  out <- data.frame(id=rep(NA,length(st)))
  for(i in 1:length(st))   {
    out$id[i] <- st[i]
    out$g[i] <- aggs(g=g[id==st[i]], n.1= n.1[id==st[i]],
                     n.2 = n.2[id==st[i]], cor)[1]
    out$var.g[i] <- ifelse(length(st[i])==1, df$var.g[id==st[i]],
                           aggs(g=g[id==st[i]],
                                n.1= n.1[id==st[i]],
                                n.2 = n.2[id==st[i]], cor)[2])
    out$n.1[i] <- round(mean(n.1[id==st[i]]),0)
    out$n.2[i] <- round(mean(n.2[id==st[i]]),0)
  }
  return(out)
}

# sample data:
id <- c(1, rep(1:19))
n.1 <- c(10,20,13,22,28,12,12,36,19,12,36,75,33,121,37,14,40,16,14,20)
n.2 <- c(11,22,10,20,25,12,12,36,19,11,34,75,33,120,37,14,40,16,10,21)
g <- c(.68,.56,.23,.64,.49,-.04,1.49,1.33,.58,1.18,-.11,1.27,.26,.40,.49,
       .51,.40,.34,.42,1.16)
var.g <-
  c(.08,.06,.03,.04,.09,.04,.009,.033,.0058,.018,.011,.027,.026,.0040,
    .049,.0051,.040,.034,.0042,.016)
df <- data.frame(id, n.1, n.2, g, var.g)
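
A sketch of one possible fix (an assumption, not something confirmed in this
thread): length(st[i]) is always 1 because st[i] is a single value, so the
ifelse() test never distinguishes anything; counting the rows that belong to
the id seems closer to the intent:

agg_g2 <- function(df, cor = .50) {
  st <- unique(df$id)
  out <- data.frame(id = st, g = NA, var.g = NA, n.1 = NA, n.2 = NA)
  for (i in seq_along(st)) {
    rows <- df$id == st[i]
    a <- aggs(g = df$g[rows], n.1 = df$n.1[rows], n.2 = df$n.2[rows], cor)
    out$g[i] <- a[1]
    out$var.g[i] <- if (sum(rows) == 1) df$var.g[rows] else a[2]   # keep original var.g for single-row ids
    out$n.1[i] <- round(mean(df$n.1[rows]), 0)
    out$n.2[i] <- round(mean(df$n.2[rows]), 0)
  }
  out
}
# agg_g2(df) leaves var.g untouched for ids that occur only once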

Any help is much appreciated,

AC

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Three most useful R package

2010-03-04 Thread Ivan Calandra

Hi all,

1) I mostly use the base packages. But if I should decide for three 
others, it would be
- plyr: I've just started to use it in some specific cases, but it seems 
really powerful and practical
- doBy is also quite good but I use only one function from it 
(summaryBy). For now I have what I need, but it might prove useful later

- xlsReadWrite: it's fast and good.

2) I've searched a lot recently for ways to read/write xls files. As you can guess, 
I've ended up using xlsReadWrite which is really intuitive to use. The 
problem is that you cannot append (neither append lines on the same 
sheet nor append other sheets on the same files). RODBC can append 
sheets on a file, but it is not as flexible.
Since I often want to export results from tests that are stored as lists 
of differing size (different number of columns AND rows), I'm still 
stuck with write.csv().


So I would like to have both appending options (i.e. lines + sheets) 
in a package as easy to use as xlsReadWrite. Basically xlsReadWritePro, 
but for free (am I asking too much?).


Thanks for asking!
Regards,
Ivan


On 2 March 2010 21:13, Ralf Bralf.bie...@gmail.com  wrote:
   

Hi R-fans,

I would like put out a question to all R users on this list and hope
it will create some feedback and discussion.

1) What are your 3 most useful R package? and

 
2) What R package do you still miss and why do you think it would make

a useful addition?

Pulling answers together for these questions will serve as a guide for
new users and help people who just want to get a hint where to look
first. Happy replying!

Best,
Ralf

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

 






__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Analogue to SPSS regression commands ENTER and REMOVE in R?

2010-03-04 Thread Dimitri Liakhovitski
I am not sure if this question has been asked before - but is there a
procedure in R (in lm or glm?) that is equivalent to ENTER and REMOVE
regression commands in SPSS?
Thanks a lot!

-- 
Dimitri Liakhovitski
Ninah.com
dimitri.liakhovit...@ninah.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Type-I v/s Type-III Sum-Of-Squares in ANOVA

2010-03-04 Thread Ravi Kulkarni

Hello,
  Since I initiated this discussion some days ago, I discovered a paper that
may be of interest:

 ANOVA for unbalanced data: Use Type II instead of Type III sums
of squares
  by ØYVIND LANGSRUD
 Statistics and Computing 13: 163–167, 2003

  Ravi Kulkarni
-- 
View this message in context: 
http://n4.nabble.com/Type-I-v-s-Type-III-Sum-Of-Squares-in-ANOVA-tp1573657p1578273.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] sum of list elements

2010-03-04 Thread Eleni Christodoulou
Thank you Dimitris!
I have 3D arrays of the same dimensions, so Reduce worked...

Best,
Eleni


On Thu, Mar 4, 2010 at 5:13 PM, Dimitris Rizopoulos 
d.rizopou...@erasmusmc.nl wrote:

 do these lists contain 3D arrays of the same dimensions? If yes, then you
 could use

 Reduce(+,  pred.svm[[i]])[1,2,5]

 otherwise a for-loop will also be clear and efficient, e.g.,


 W - pred.svm[[i]][[1]][1,2,5]
 for (j in 2:20) {
W - W + pred.svm[[i]][[j]][1,2,5]
 }


 I hope it helps.

 Best,
 Dimitris



 On 3/4/2010 4:02 PM, Eleni Christodoulou wrote:

 Dear list,

 I have some difficulty in manipulating list elements. More specifically, I
 am performing svm regression and have a list of lists, called pred.svm.
 The
 elements of the second list are 3D arrays. Thus I have pred.svm[[i]][[j]],
 with 1=i=5 and 1=j=20.
 I want to take the sum of the elements a specific array dimension across
 all
 j, for one i. Mathematically speaking, I want to calculate *W* as:

   *W = pred.svm[[i]][[1]][1,2,5] + pred.svm[[i]][[2]][1,2,5]+
 pred.svm[[i]][[3]][1,2,5]+...+ pred.svm[[i]][[20]][1,2,5]*

 I have tried to apply the *lapply() *function but it seems that its
 arguments can only be vector elements of a list...Do I need to convert the
 array data to vector data?

 Any advice would be very welcome!

 Thanks a lot,
 Eleni

[[alternative HTML version deleted]]


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


 --
 Dimitris Rizopoulos
 Assistant Professor
 Department of Biostatistics
 Erasmus University Medical Center

 Address: PO Box 2040, 3000 CA Rotterdam, the Netherlands
 Tel: +31/(0)10/7043478
 Fax: +31/(0)10/7043014


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] mysqlWriteTable . error in your SQL syntax?

2010-03-04 Thread Vladimir Morozov

 I have partially figured out the problem: condition might be some internal 
MySQL function/variable and can't be used as a column name directly.
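
Indeed, CONDITION is a reserved word in MySQL (5.0 and later), so one workaround
sketch (untested against that particular server; the new column name is just an
example) is to rename the offending column before writing:

names(design2)[names(design2) == "condition"] <- "condition_txt"   # hypothetical replacement name
dbWriteTable(conn, "comparison", design2, row.names = FALSE, overwrite = TRUE)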

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
Behalf Of Vladimir Morozov
Sent: Thursday, March 04, 2010 10:25 AM
To: 'r-help@r-project.org'
Subject: [R] mysqlWriteTable . error in your SQL syntax?

Hi,

Can somebody advice on weird mysqlWriteTable bug.

 mysqlWriteTable(conn, 'comparison',design2, row.names = F,
 overwrite=T)

Error in mysqlExecStatement(conn, statement, ...) :

RS-DBI driver: (could not run statement: You have an error in your SQL syntax; 
check the manual that corresponds to your MySQL server version for the right 
syntax to use near 'condition text,

treatment text,

condition2 text,

conditionNum double,

impo' at line 12)

[1] FALSE

Warning message:

In mysqlWriteTable(conn, comparison, design2, row.names = F, overwrite = T) :

could not create table: aborting mysqlWriteTable



The problem seems to be in double qoutes:



syntax to use near 'condition text,



I have tracked down it to the problem with specific column in my frame

If  condition clumn is exluded, it works fine

 mysqlWriteTable(conn, 'comparison',design2[,-10], row.names = F,
 overwrite=T)

[1] TRUE



 class(design2[,10])

[1] character

 packageDescription('RMySQL')

Package: RMySQL

Version: 0.7-4

Date: 2009-04-07

Title: R interface to the MySQL database

Author: David A. James and Saikat DebRoy

Maintainer: Jeffrey Horner jeff.hor...@vanderbilt.edu

Description: Database interface and MySQL driver for R. This version

complies with the database interface definition as implemented

in the package DBI 0.2-2.

LazyLoad: true

Depends: R (= 2.8.0), methods, DBI (= 0.2-2), utils

License: GPL-2

URL: http://biostat.mc.vanderbilt.edu/RMySQL

Collate: S4R.R zzz.R MySQLSupport.R dbObjectId.R MySQL.R

Packaged: Tue Apr 7 15:19:44 2009; hornerj

Repository: CRAN

Date/Publication: 2009-04-14 17:26:56

Built: R 2.10.1; x86_64-unknown-linux-gnu; 2010-01-16 19:42:24 UTC;

unix







Vladimir Morozov
Sr. Computational Biologist
ALS Therapy Development Institute
215 First Street, Cambridge MA, 02142
Phone: 617-441-7242
www.als.nethttp://www.als.net/
Want to help stop ALS? Become an ALS Ambassador and take action. Learn more 
online at www.als.net/ambassadorhttp://www.als.net/ambassador



***
The information contained in this electronic message is ...{{dropped:30}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Three most useful R package

2010-03-04 Thread Ivan Calandra

Hi Matthew,

Sorry, I read your email a bit too late and had already answered the 
thread.


You're right about Crantastic, it's a really good thing, and I actually 
voted for the packages I really use (even though a little). It was 
suggested to me by a r-helper some time ago.
I was also surprised to see that there are some 19 users (and it 
increased recently, didn't it?) of plyr (which is from the replies 
obviously one of the most used packages). Looks like few people vote on 
Crantastic and I find it sad.
When I'm now looking for a package I go directly there and can find all 
the basic information on all packages, though sometimes the general aim 
of the package is not clearly stated.
Of course, the real potential would be revealed only if all users vote 
for, comment on and write reviews on packages.


Crantastic rules! Thanks for pointing it out!

Ivan

PS: I'm gonna write some comments on a package or 2!


Le 3/3/2010 18:48, Matthew Dowle a écrit :

Dieter,

One way to check if a package is active, is by looking on r-forge. If you
are referring to data.table you would have found it is actually very active
at the moment and is far from abandoned.

What you may be referring to is a warning, not an error, with v1.2 on
R2.10+.  That was fixed many moons ago. The r-forge version is where it's at.

Rather than commenting in public about a warning on a package, and making a
conclusion about its abandonment, and doing this without copying the
maintainer, perhaps you could have contacted the maintainer to let him know
you had found a problem.  That would have been a more community spirited
action to take.  Doing that at the time you found out would have been
helpful too rather than saving it up for now.  Or you can always check the
svn logs yourself,  as the r-forge guys even made that trivial to do.

All,

Can we please now stop this thread ?  The crantastic people worked hard to
provide a better solution.  If the community refuses to use crantastic,
thats up to the community, but to start now filling up r-help with votes on
packages when so much effort was put in to a much much better solution ages
ago?  It's as quick to put your votes into crantastic as it is to write to
r-help.  What's your problem, folks, with crantastic?  The second reply
mentioned crantastic but you all chose to ignore it, it seems.  If you want
to vote, use crantastic.  If you don't want to vote, don't vote.  But using
r-help to vote ?!  The better solution is right there:
http://crantastic.org/

Matthew


Dieter Mennedieter.me...@menne-biomed.de  wrote in message
news:1267626882999-1576618.p...@n4.nabble.com...
   


Rob Forler wrote:
 

And data.table because it does aggregation about 50x times faster than
plyr
(which I used to use a lot).


   

This is correct; from the error message it spits out one has to conclude
that it was abandoned at R-version 2.4.x

Dieter




--
View this message in context:
http://n4.nabble.com/Three-most-useful-R-package-tp1575671p1576618.html
Sent from the R help mailing list archive at Nabble.com.

 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Date conversion problem

2010-03-04 Thread Newbie19_02

Hi All,

I have a character data.frame that contains character columns and date
columns.  I've managed to convert some of my character columns to a date
format using as.Date(x, format="%m/%d/%y").  

An example of one of my dates is 
     PROCHI    DtDeath icdcucd date_admission1 date_admission_2
 CAO0004563         NA      NA      2005-09-01               NA
 CAO0073505         NA      NA      1998-03-05               NA
 CAO0079987         NA      NA      2002-04-14               NA
 CAO0182089         NA      NA      2007-06-10         11/06/07
 CAO0194809 17/02/2005     I64      2004-09-04         14/02/05
 CAO0204000         NA      NA      1999-05-31               NA
 CAO027             NA      NA      1999-07-29               NA
 CAO0330844 29/11/2001     I64              NA               NA
 CAO0395045         NA      NA      2007-02-13         14/02/07
 CAO0507333         NA      NA      2005-10-08               NA


I have converted date_admission1 from a character to a date.  I used the
same script to convert DtDeath but it returns the dates in this format:

 NA   NA   NA   NA   0017-02-20
 [6] NA   NA   0029-11-20 NA   NA  
[11] NA   NA   0013-10-20 NA   NA  
[16] NA   0007-12-20 NA   NA   NA  
[21] NA   NA   NA   NA   NA  
[26] NA   NA   NA   NA   NA  
[31] NA   NA   NA   NA   NA  
[36] NA   NA   NA   NA   NA  
[41] NA   NA   NA   NA   NA  
[46] NA   0029-01-20 0018-05-20 NA   NA  
[51] NA   NA   NA   NA   NA  
[56] NA   0013-07-20 NA   NA   NA  
[61] NA   0026-07-20 NA   NA   NA  
[66] 0029-04-20 NA   NA   NA   0012-12-20
[71] NA   NA   NA   NA   NA  
[76] NA   NA   NA   NA   NA  
[81] NA   NA   0022-01-20 NA   0029-05-20
[86] NA   NA   NA   NA   0022-02-20
[91] NA  

I've tried as.Date(as.character(DtDeath, "%d/%m/%y")) just in case and have
used different versions of the format ("%m/%d/%Y", and the reverse) but still
get the incorrect format.  I'm not sure what the problem is?
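
One guess at the culprit (an assumption on my part, not confirmed here): the
format string sits inside as.character(), where it is ignored, so as.Date()
falls back on its default "%Y/%m/%d" and reads the day as the year; with
4-digit years %Y, not %y, is needed. A minimal sketch:

as.Date("17/02/2005")                        # "0017-02-20": the default format misreads it
as.Date("17/02/2005", format = "%d/%m/%Y")   # "2005-02-17"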

Thanks,
natalie
-- 
View this message in context: 
http://n4.nabble.com/Date-conversion-problem-tp1578296p1578296.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Analogue to SPSS regression commands ENTER and REMOVE in R?

2010-03-04 Thread Michael Conklin
I bet you stirred the pot here because you are asking about stepwise
procedures.  Look at step, or stepAIC in the MASS library.
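
A minimal sketch of that route, on a built-in data set (the model is only an
illustration):

library(MASS)
full <- lm(mpg ~ wt + hp + qsec + drat, data = mtcars)
sel <- stepAIC(full, direction = "both", trace = FALSE)   # add/drop terms by AIC
summary(sel)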

\Mike


On Thu, 4 Mar 2010 07:47:34 -0800
Dimitri Liakhovitski ld7...@gmail.com wrote:

 I am not sure if this question has been asked before - but is there a
 procedure in R (in lm or glm?) that is equivalent to ENTER and REMOVE
 regression commands in SPSS?
 Thanks a lot!


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] fisher.test gives p > 1

2010-03-04 Thread Jacob Wegelin


The purpose of this email is to

(1) report an example where fisher.test returns p > 1

(2) ask if there is a reliable way to avoid p > 1 with fisher.test.

If one has designed one's code to return an error when it finds a nonsensical 
probability, of course a value of p > 1 can cause havoc.

Example:


junk <- data.frame(score=c(rep(0,14), rep(1,29), rep(2, 16)))
junk <- rbind(junk, junk)
junk$group <- c(rep("DOG", nrow(junk)/2), rep("kitty", nrow(junk)/2))
table(junk$score, junk$group)


DOG kitty
  0  1414
  1  2929
  2  1616

dput(fisher.test(junk$score, junk$group)$p.value)

1.12


dput(fisher.test(junk$score, junk$group, simulate.p.value=TRUE)$p.value)

1

In this particular case, specifying a simulated p value solved the problem. But is 
there a reliable way to avoid p > 1 in general?


sessionInfo()
R version 2.10.1 (2009-12-14) 
x86_64-apple-darwin9.8.0


locale:
[1] en_US.UTF-8/en_US.UTF-8/C/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

loaded via a namespace (and not attached):
[1] tools_2.10.1




Thanks for any insight

Jacob A. Wegelin
Assistant Professor
Department of Biostatistics
Virginia Commonwealth University
730 East Broad Street Room 3006
P. O. Box 980032
Richmond VA 23298-0032
U.S.A.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Analogue to SPSS regression commands ENTER and REMOVE in R?

2010-03-04 Thread Ista Zahn
Hi Dimitri,
It works a bit differently:

## The SPSS way:
compute dum1 = 0.
compute dum2 = 0.
if(grp = "b") dum1 = 1.
if(grp = "c") dum2 = 1.
exe.

regression
  /var = y x1 x2 z1 z2 grp
  /des = def
  /sta = def zpp cha tol f
  /dep = y
  /met = enter x1 x2
  /met = enter z1 z2
  /met = enter dum1 dum2.

## The R way:
contrasts(Dat$grp) <- contr.treatment(n=3, base=1)

m.x <- lm(y ~ x1 + x2, data=Dat)
m.xz <- update(m.x, . ~ . + z1 + z2)
m.xzg <- update(m.xz, . ~ . + grp)
anova(m.x, m.xz, m.xzg)

Hope it helps,
Ista

On Thu, Mar 4, 2010 at 10:47 AM, Dimitri Liakhovitski ld7...@gmail.com wrote:
 I am not sure if this question has been asked before - but is there a
 procedure in R (in lm or glm?) that is equivalent to ENTER and REMOVE
 regression commands in SPSS?
 Thanks a lot!

 --
 Dimitri Liakhovitski
 Ninah.com
 dimitri.liakhovit...@ninah.com

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Analogue to SPSS regression commands ENTER and REMOVE in R?

2010-03-04 Thread Ista Zahn
Hi Michael,
I don't think Dimitry was asking for stepwise procedures, but rather
how to add sets of variables to a model, for example to see if set B
predicts over and above some standard set A.

-Ista

On Thu, Mar 4, 2010 at 11:14 AM, Michael Conklin
michael.conk...@markettools.com wrote:
 I bet you stirred the pot here because you arre asking  about stepwise
 procedures.  Look at step, or stepAIC in the MASS library.

 \Mike


 On Thu, 4 Mar 2010 07:47:34 -0800
 Dimitri Liakhovitski ld7...@gmail.com wrote:

 I am not sure if this question has been asked before - but is there a
 procedure in R (in lm or glm?) that is equivalent to ENTER and REMOVE
 regression commands in SPSS?
 Thanks a lot!


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Analogue to SPSS regression commands ENTER and REMOVE in R?

2010-03-04 Thread Ista Zahn
Oops, forgot the example data used in previous post:

set.seed(133)
Dat <- data.frame(y=rnorm(10),
                  x1=rnorm(10),
                  x2=rnorm(10),
                  z1=rnorm(10),
                  z2=rnorm(10),
                  grp = factor(c(rep("a", 3), rep("b", 4), rep("c", 3)))
                  )

On Thu, Mar 4, 2010 at 11:14 AM, Ista Zahn istaz...@gmail.com wrote:
 Hi Dimitri,
 It works a bit differently:

 ## The SPSS way:
 compute dum1 = 0.
 compute dum2 = 0.
 if(grp = b) dum1 = 1.
 if(grp = c) dum2 = 1.
 exe.

 regression
  /var = y x1 x2 z1 z2 grp
  /des = def
  /sta = def zpp cha tol f
  /dep = y
  /met = enter x1 x2
  /met = enter z1 z2
  /met = enter dum1 dum2.

 ## The R way:
 contrasts(Dat$grp) - contr.treatment(n=3, base=1)

 m.x - lm(y ~ x1 + x2, data=Dat)
 m.xz - update(m.x, . ~ . + z1 + z2)
 m.xzg - update(m.xz, . ~ . + grp)
 anova(m.x, m.xz, m.xzg)

 Hope it helps,
 Ista

 On Thu, Mar 4, 2010 at 10:47 AM, Dimitri Liakhovitski ld7...@gmail.com 
 wrote:
 I am not sure if this question has been asked before - but is there a
 procedure in R (in lm or glm?) that is equivalent to ENTER and REMOVE
 regression commands in SPSS?
 Thanks a lot!

 --
 Dimitri Liakhovitski
 Ninah.com
 dimitri.liakhovit...@ninah.com

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




 --
 Ista Zahn
 Graduate student
 University of Rochester
 Department of Clinical and Social Psychology
 http://yourpsyche.org




-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Analogue to SPSS regression commands ENTER and REMOVE in R?

2010-03-04 Thread David Winsemius


On Mar 4, 2010, at 10:47 AM, Dimitri Liakhovitski wrote:


I am not sure if this question has been asked before - but is there a
procedure in R (in lm or glm?) that is equivalent to ENTER and REMOVE
regression commands in SPSS?
Thanks a lot!


I haven't used SPSS for 25 years (excluding a brief session where I  
talked an inexperienced SPSS user through creating a multi-way table  
of mortality ratios a couple of years ago) but my memory is that those  
operations drop or add terms to a model formula and produced  
voluminous paper output as a result. Looking at the lm function help  
page, I guessed that I might find more information at the anova.lm  
help page and sure enough, there at the bottom of the page was  
drop1. My vague memory that there was a matching add1 was rewarded  
with confirmation when I followed the drop1 link.


I generally just make another call to lm().
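
A minimal sketch of those two helpers on a built-in data set (the variables are  
only an illustration):

fit <- lm(mpg ~ wt, data = mtcars)
add1(fit, scope = ~ wt + hp + qsec, test = "F")   # candidate terms to ENTER
fit2 <- update(fit, . ~ . + hp)
drop1(fit2, test = "F")                           # terms one might REMOVE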




--
Dimitri Liakhovitski
Ninah.com
dimitri.liakhovit...@ninah.com


--

David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Please help me how to make input files in Extremes Toolkit model

2010-03-04 Thread David Winsemius


On Mar 4, 2010, at 5:47 AM, Huyen Quan wrote:


Dear sir/madam

My name is Quan, I am a PhD student in Korea. my major is  
Hydrological in Water Resources Engineering. I am interested in  
Extremes Toolkit model and I known you from information in internet.
I installed successfully this model but I didn't know how to make  
the type of  files input for this model such as: Flood.dat;  
Ozone4H.dat; Flood.R; HEAT.R. (free download these files of example  
and show in attachment files)


No files were attached. Did you read the Posting Guide section that  
describes what sorts of files can be sent through the r-help server?




I want to learn the way make these files from sample files. 
http://www.isse.ucar.edu/extremevalues/evtk.html

but i can not download those files because this link was errored.


Sorry to hear of your difficulties, but don't you think this is an  
issue to be resolved by your local administrator? That link works  
perfectly well for me.




Please help me how to get this file and to make files input, if you  
have some sample files, please send to me to learn and make them.


thank you so much for your help me and hope to see your infomation  
as soon as possible.


Mr. Ngo Quan




David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Unsigned Posts; Was Setting graphical parameters

2010-03-04 Thread Bert Gunter
Folks:

Rolf's (appropriate, in my view) response below seems symptomatic of an
increasing tendency of posters to hide their identities with pseudonyms and
fake headers. While some of this may be due to identity paranoia (which I
think is overblown for this list), I suspect that a good chunk of it is lazy
students trying to beat the system by having us do their homework. The
etiquette of this list has traditionally been, like the software, open: we
sign our names. I would urge helpeRs to adhere to this etiquette and ignore
unsigned posts.

Contrary views welcome, either via public or private responses.

-- Bert

Bert Gunter
Genentech Nonclinical Statistics

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Rolf Turner
Sent: Wednesday, March 03, 2010 6:42 PM
To: Pitmaster
Cc: r-help@r-project.org
Subject: Re: [R] Setting graphical parameters


Do your own expletive deleted homework!

It looks pretty trivial; what's your problem?

cheers,

Rolf Turner

On 4/03/2010, at 2:39 PM, Pitmaster wrote:

 
 Hi guys... I have a problem with this exercise... 
 
 Consider the pressure data frame again.
 
 (a) Plot pressure against temperature, and use the following
 command to pass a curve through these data:
 
 curve((0.168 + 0.007*x)^(20/3), from=0, to=400, add=TRUE)
 
 (b) Now, apply the power transformation y^(3/20) to the pressure data values.
 Plot these transformed values against temperature. Is a linear
 or nonlinear relationship evident now? Use the abline() function
 to pass a straight line through the points. (You need an intercept
 and slope for this – see the previous part of this question to obtain
 appropriate values.)
 
 (c) Add a suitable title to the graph.
 
 (d) Re-do the above plots, but use the mfrow() function to display
 them in a 2 × 1 layout on the graphics page. Repeat once again
 using a 1 × 2 layout.
 
 DATA:
 pressure
   temperature pressure
  1    0   0.0002
 2   20   0.0012
 3   40   0.0060
 4   60   0.0300
 5   80   0.0900
 6  100   0.2700
 7  120   0.7500
 8  140   1.8500
 9  160   4.2000
 10 180   8.8000
 11 200  17.3000
 12 220  32.1000
  13 240  57.0000
  14 260  96.0000
  15 280 157.0000
  16 300 247.0000
  17 320 376.0000
  18 340 558.0000
  19 360 806.0000
 
 
  Does anyone know the solution?

##
Attention: 
This e-mail message is privileged and confidential. If you are not the 
intended recipient please delete the message and notify the sender. 
Any views or opinions presented are solely those of the author.

This e-mail has been scanned and cleared by MailMarshal 
www.marshalsoftware.com
##

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Analogue to SPSS regression commands ENTER and REMOVE in R?

2010-03-04 Thread Dimitri Liakhovitski
Yes, I was indeed asking about stepwise procedures!

On Thu, Mar 4, 2010 at 11:29 AM, David Winsemius dwinsem...@comcast.net wrote:

 On Mar 4, 2010, at 10:47 AM, Dimitri Liakhovitski wrote:

 I am not sure if this question has been asked before - but is there a
 procedure in R (in lm or glm?) that is equivalent to ENTER and REMOVE
 regression commands in SPSS?
 Thanks a lot!

 I haven't used SPSS for 25 years (excluding a brief session where I talked
 an inexperienced SPSS user through creating a multi-way table of mortality
 ratios a couple of years ago) but my memory is that those operations drop or
 add terms to a model formula and produced voluminous paper output as a
 result. Looking at the lm function help page, I guessed that I might find
 more information at the anova.lm help page and sure enough, there at the
 bottom of the page was drop1. My vague memory that there was a matching
 add1 was rewarded with confirmation when I followed the drop1 link.

 I generally just make another call to lm().



 --
 Dimitri Liakhovitski
 Ninah.com
 dimitri.liakhovit...@ninah.com

 --

 David Winsemius, MD
 Heritage Laboratories
 West Hartford, CT





-- 
Dimitri Liakhovitski
Ninah.com
dimitri.liakhovit...@ninah.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Analogue to SPSS regression commands ENTER and REMOVE in R?

2010-03-04 Thread Dimitri Liakhovitski
Yes, David is absolutely right - that's it!

On Thu, Mar 4, 2010 at 12:21 PM, Dimitri Liakhovitski ld7...@gmail.com wrote:
 Yes, I was indeed asking about stepwise procedures!

 On Thu, Mar 4, 2010 at 11:29 AM, David Winsemius dwinsem...@comcast.net 
 wrote:

 On Mar 4, 2010, at 10:47 AM, Dimitri Liakhovitski wrote:

 I am not sure if this question has been asked before - but is there a
 procedure in R (in lm or glm?) that is equivalent to ENTER and REMOVE
 regression commands in SPSS?
 Thanks a lot!

 I haven't used SPSS for 25 years (excluding a brief session where I talked
 an inexperienced SPSS user through creating a multi-way table of mortality
 ratios a couple of years ago) but my memory is that those operations drop or
 add terms to a model formula and produced voluminous paper output as a
 result. Looking at the lm function help page, I guessed that I might find
 more information at the anova.lm help page and sure enough, there at the
 bottom of the page was drop1. My vague memory that there was a matching
 add1 was rewarded with confirmation when I followed the drop1 link.

 I generally just make another call to lm().



 --
 Dimitri Liakhovitski
 Ninah.com
 dimitri.liakhovit...@ninah.com

 --

 David Winsemius, MD
 Heritage Laboratories
 West Hartford, CT





 --
 Dimitri Liakhovitski
 Ninah.com
 dimitri.liakhovit...@ninah.com




-- 
Dimitri Liakhovitski
Ninah.com
dimitri.liakhovit...@ninah.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] ascending permutation within treatment

2010-03-04 Thread eugen pircalabelu
Hi R-users,
I have a question related to permutations in R. 
I started learning something about the permutation/randomization tests using 
Edgington and Onghena (2007) and i tried replicating some of the examples in 
the book (i have seen also the packages from R that concern permutation ), but 
at some point i got stuck.

If I have 3 treatments and 7 subjects in an experiment, and 2 of them take 
treatment A, 2 take treatment B and 3 take treatment C, then there are 7!/(2!2!3!) 
= 210 ways in which they can be assigned. So how can I create sequences in R 
that are of the form


  A BC
1 2 | 3 4 | 5 6 7
1 2 | 3 5 | 4 6 7
1 2 | 3 6 | 4 5 7
1 2 | 3 7 | 4 5 6
1 2 | 4 5 | 3 6 7
.
6 7 | 3 4 | 1 2 5
6 7 | 3 5 | 1 2 4
6 7 | 4 5 | 1 2 3



The list is generated taking into account that within each treatment the sequence 
is always ascending, and in this way one avoids listing all 7! = 5040 permutations; 
only 7!/(2!2!3!) = 210 sequences need to be listed.

Can anyone please offer some guidance or advice on how to generate these 
permutations with the constraint that the order within each treatment is 
ascending?
My hope is that using only this relevant set can greatly improve the time 
needed for the computation when I have a larger sample.

I have also seen the recommendation to sample a large number of permutations, and I 
have no problem doing that (everything goes smoothly and fast), but I would 
also like to be able to do this.
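
One sketch of a way to enumerate exactly those 210 assignments (my own suggestion,
not something from the thread), using combn() so that each treatment group comes
out in ascending order:

subjects <- 1:7
res <- list()
for (A in combn(subjects, 2, simplify = FALSE)) {        # 2 subjects for treatment A
  rest <- setdiff(subjects, A)
  for (B in combn(rest, 2, simplify = FALSE)) {          # 2 of the remaining 5 for B
    res[[length(res) + 1]] <- c(A, B, setdiff(rest, B))  # the last 3 go to C
  }
}
assignments <- do.call(rbind, res)
nrow(assignments)   # 210 rows, each an A|B|C assignment, ascending within each treatment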

Thank you very much and have a great day ahead!

 




Eugen Pircalabelu
(0032)471 842 140
(0040)727 839 293

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] fisher.test gives p > 1

2010-03-04 Thread Martin Maechler
 Jacob Wegelin jacobwege...@fastmail.fm
 on Thu, 4 Mar 2010 11:15:51 -0500 (EST) writes:

 The purpose of this email is to

 (1) report an example where fisher.test returns p  1

 (2) ask if there is a reliable way to avoid p1 with
 fisher.test.

 If one has designed one's code to return an error when it
 finds a nonsensical probability, of course a value of
 p1 can cause havoc.

 Example:

 junk-data.frame(score=c(rep(0,14), rep(1,29), rep(2,
 16))) junk-rbind(junk, junk) junk$group-c(rep(DOG,
 nrow(junk)/2), rep(kitty, nrow(junk)/2))
 table(junk$score, junk$group)

  DOG kitty 0 14 14 1 29 29 2 16 16
 dput(fisher.test(junk$score, junk$group)$p.value)
 1.12
 
 dput(fisher.test(junk$score, junk$group,
 simulate.p.value=TRUE)$p.value)
 1

 In this particular case, specifying a simulated p value
 solved the problem. But is there a reliable way to avoid
 p1 in general?

Yes, using the very latest version of R-devel 
(svn revision = 51204)  

:-)

Of course, the above is just the result of rounding error
propagation in an extreme case.

I've now simply replaced PVAL  by  max(0, min(1, PVAL))
in the one place it seems sensible.
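
Until that fix lands in a released version, a one-line clamp on the user side is
a sketch of the same idea (junk as defined in the original post):

p <- fisher.test(junk$score, junk$group)$p.value
p <- max(0, min(1, p))   # guard against tiny floating-point excursions outside [0, 1]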

Martin Maechler, ETH Zurich 

 sessionInfo()
 R version 2.10.1 (2009-12-14) x86_64-apple-darwin9.8.0

 locale: [1]
 en_US.UTF-8/en_US.UTF-8/C/C/en_US.UTF-8/en_US.UTF-8

 attached base packages: [1] stats graphics grDevices utils
 datasets methods base

 loaded via a namespace (and not attached): [1]
 tools_2.10.1
 

 Thanks for any insight

 Jacob A. Wegelin Assistant Professor Department of
 Biostatistics Virginia Commonwealth University 730 East
 Broad Street Room 3006 P. O. Box 980032 Richmond VA
 23298-0032 U.S.A.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unrealistic dispersion parameter for quasibinomial

2010-03-04 Thread Etienne Bellemare Racine

Ben Bolker wrote :

The dispersion parameter depends on the Pearson residuals,
not the deviance residuals (i.e., scaled by expected variance).
I haven't checked into this in great detail, but the Pearson
residual of your first data set is huge, probably because
the fitted value is tiny (and hence the expected variance is
tiny) and the observed value is 0.2.


dfr <- df.residual(model2)
deviance(model2)/dfr
d2 <- sum(residuals(model2, "pearson")^2)
(disp2 <- d2/dfr)
fitted(model2)

residuals(model2, "pearson")
Sorry to dig this one up from a year ago, but it seems it is still 
interesting (at least to me).
From summary.glm, it looks like the dispersion is obtained from the "working"-type 
residuals, which gave results close to the "pearson" type (when the working 
residuals are multiplied by the weights). I couldn't find a lot of 
information on the "working" type.  How is it computed (from ?glm, I know 
that these are «the residuals in the final iteration of the IWLS fit») ?


sum(model2$weights * residuals(model2, type="working")^2)
sum(residuals(model2, type="pearson")^2)

Maybe I'm wrong, but could someone clarify that (which type is used and 
what difference it makes) ?
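
For what it's worth, a small check suggesting the two quantities are identical for
a glm fit, not merely close (a sketch on simulated data, so an assumption about
what was asked): the working residual is (y - mu)/mu.eta(eta) and the IWLS weight
is w0 * mu.eta(eta)^2 / V(mu), so weight * working^2 reduces to the squared
Pearson residual.

set.seed(42)
n <- 30
x <- rnorm(50)
y <- rbinom(50, n, plogis(0.5 * x)) / n                      # proportions
fit <- glm(y ~ x, family = quasibinomial, weights = rep(n, 50))
all.equal(sum(fit$weights * residuals(fit, "working")^2),
          sum(residuals(fit, "pearson")^2))                   # TRUE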


Thank you in advance,
Etienne

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] End of line marker?

2010-03-04 Thread David Winsemius

On Mar 4, 2010, at 12:50 PM, jonas garcia wrote:

 Thank you so much for your reply.

 I can identify the characters very easily in a couple of files. The  
 reason I am worried is that I have thousands of files to read in.  
 The files were produced in a very old MS-DOS software that records  
 information on oceanographic data and geographic position during a  
 survey.

 My main goal is read all these files into R for further analysis.  
 Most of the files are cleared of these EOL markers but some are not.  
 I only noticed the problem by chance when I was looking and  
 comparing one of them. I wonder if I can solve this problem using R,  
 without having to go for text editors separately.

I could not  see the character you pasted, so maybe others were  
similarly in the dark or under water as might be a more  
appropriate analogy here. Can you look at the character with a hex- 
editor or translate it to a character code so that better minds than  
mine might be able to comment on what sort of weirdness might have  
been introduced and how it could be scanned?
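
If it does turn out to be the old DOS end-of-file byte (Ctrl-Z, 0x1A, which some  
code pages display as a right arrow; only a guess), one sketch of an R-only  
cleanup is to read the file as raw bytes and drop that byte before parsing:

raw <- readBin("new3.dat", what = "raw", n = file.info("new3.dat")$size)
raw <- raw[raw != as.raw(0x1a)]                      # drop any stray Ctrl-Z bytes
dat <- read.csv(textConnection(rawToChar(raw)), header = FALSE)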

-- 
David.

 Help on this would be much appreciated.
 Thanks again

 J


 On 3/4/10, David Winsemius dwinsem...@comcast.net wrote:

 On Mar 3, 2010, at 2:22 PM, jonas garcia wrote:

 Dear R users,

 I am trying to read a huge file in R. For some reason, only a part  
 of the
 file is read. When I further investigated, I found that in one of my
 non-numeric columns, there is one odd character responsible for  
 this, which
 I reproduce bellow:
 In case you cannot see it, it looks like a right arrow, but it is  
 not the
 one you get from microsoft word in menu insert symbol.

 I think my dat file is broken and that funny character is an EOL  
 marker that
 makes R not read the rest of the file. I am sure the character is  
 there by
 chance but I fear that it might be present in some other big files I  
 have to
 work with as well. So, is there any clever way to remove this  
 inconvenient
 character in R avoiding having to edit the file in notepad and  
 remove it
 manually?

 Code I am using:

 read.csv(new3.dat, header=F)

 Warning message:
 In read.table(file = file, header = header, sep = sep, quote =  
 quote,  :
  incomplete final line found by readTableHeader on 'new3.dat'

 I think you should identify the offending line by using the  
 count.fields function and fix it with an editor.


 -- 
 David

 I am working with R 2.10.1 in windows XP.

 Thanks in advance

 Jonas

[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

 David Winsemius, MD
 Heritage Laboratories
 West Hartford, CT



David Winsemius, MD
Heritage Laboratories
West Hartford, CT


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Modified R/S statistic

2010-03-04 Thread Alexandra Almeida
Hi R users,

Does anyone know if Lo's modified R/S statistic is implemented in R?

Thank you very much,
Alexandra Almeida

-- 
Alexandra R M de Almeida

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] two questions for R beginners

2010-03-04 Thread Kevin Wright
Patrick,

1.  Implicit intercepts.  Implicit intercepts are not too bad for the main
model, but they creep in occasionally in strange places where they might not
be expected.  For example, in some of the variance structures specified in
lme, (~x) automatically expands to (~1+x). Venables said in the Exegeses
paper: For teaching purposes it would be useful to have a switch that
required users to include the intercept term in formulae if it is needed.
This would definitely help more students than it would hinder. In other words
it should be possible to override the automatic intercept term.

2.  Working with colors.  There are a number of functions in R for working
with colors and since colors can be specified by palette number, name,
hexadecimal string, values between 0 and 1, or values between 0 and 256,
things can be confusing.  One problem is that not all functions accept the
same type of arguments or produce the same type of return values.  For
example, the awkward need of t and conversion to [0,255] in adding alpha
levels to a color:
rgb(t(col2rgb(c("navy","maroon"))), alpha=120, max=255)

3. Factors. R  tries to convert everything that it possibly can into a
factor.  Except, occasionally, it doesn't try.  Further, after sub-setting
data so that some factor levels have no data, too many functions fail.  I
shouldn't need to use drop.levels from gdata package all over the place to
keep automated scripts running smoothly.  Let's not forget:
> as.numeric(factor(c(NA,0,1)))
[1] NA  1  2

4.

is.list(list(1)[1])
[1] TRUE

is.matrix(matrix(1)[1,])
[1] FALSE

Ouch. Ouch. Ouch.
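
A small aside: the matrix surprise goes away if drop = FALSE is used, e.g.

is.matrix(matrix(1)[1, , drop = FALSE])
[1] TRUE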

5. Most useful: apropos and Rseek.

Best,

Kevin


On Thu, Feb 25, 2010 at 11:31 AM, Patrick Burns pbu...@pburns.seanet.comwrote:

 * What were your biggest misconceptions or
 stumbling blocks to getting up and running
 with R?

 * What documents helped you the most in this
 initial phase?

 I especially want to hear from people who are
 lazy and impatient.

 Feel free to write to me off-list.  Definitely
 write off-list if you are just confirming what
 has been said on-list.

 --
 Patrick Burns
 pbu...@pburns.seanet.com
 http://www.burns-stat.com
 (home of 'The R Inferno' and 'A Guide for the Unwilling S User')

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Kevin Wright

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Screen settings for point of view in lattice and misc3d

2010-03-04 Thread Greg Snow
If I remember correctly, the order in which you specify x, y, and z matters for 
wireframe, so you may want to try rotating by x first, then z.

You may also find the rotate.wireframe function in the TeachingDemos package 
(make sure that you have also loaded the tcltk package) useful in finding the 
rotation, it lets you adjust the rotation by using sliders to see the effect, 
or the tkexamp function in the same package could be used for a different 
interface.

Hope this helps,

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
 project.org] On Behalf Of Waichler, Scott R
 Sent: Wednesday, March 03, 2010 2:22 PM
 To: r-h...@stat.math.ethz.ch
 Subject: [R] Screen settings for point of view in lattice and misc3d
 
 I'm making some 3D plots with contour3d from misc3d and wireframe from
 lattice.  I want to view them from below; i.e. the negative z-axis.  I
 can't figure out how to do so.  I would like my point of view looking
 up from below, with the z, y, and x axes positive going away.  Can
 anyone tell me the correct settings for screen to achieve this?  Here
 is what I've found so far:
 
  screen=list(z=-40, x=-60, y=0), # looking down and away in negative x
 direction
  screen=list(z=40, x=60, y=0),  # domain turned upside down, looking up
 and away in neg. x direction
  screen=list(z=-40, x=60, y=0),  # domain turned upside down, looking
 up and away in pos. x direction
  screen=list(z=40, x=-60, y=0), # looking down and away in positive
 x direction
 
 
 Scott Waichler
 Pacific Northwest National Laboratory
 P.O. Box 999, Richland, WA  99352
 scott.waich...@pnl.gov
 509-372-4423, 509-341-4051 (cell)
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] counting the number of ones in a vector

2010-03-04 Thread sjaffe

I got tired of writing length(which()) so I define a useful function which I
source in my .Rprofile:

count <- function( x ) length(which(x))

Then:

count( x == 1 )


-- 
View this message in context: 
http://n4.nabble.com/counting-the-number-of-ones-in-a-vector-tp1570700p1578549.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] estimagint MLE of covariance matrix subject to constraints

2010-03-04 Thread Jason S

Hi there,

I need a MLE of a covariance matrix under the constraint that particular 
elements of the inverse covariance matrix are zero. I can't find any 
function/package that'd do that. Any suggestions?

Jason



  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] End of line marker?

2010-03-04 Thread jonas garcia
Thank you so much for your reply.



I can identify the characters very easily in a couple of files. The reason I
am worried is that I have thousands of files to read in. The files were
produced in a very old MS-DOS software that records information on
oceanographic data and geographic position during a survey.



My main goal is read all these files into R for further analysis. Most of
the files are cleared of these EOL markers but some are not. I only noticed
the problem by chance when I was looking and comparing one of them. I wonder
if I can solve this problem using R, without having to go for text editors
separately.



Help on this would be much appreciated.

Thanks again



J


On 3/4/10, David Winsemius dwinsem...@comcast.net wrote:


 On Mar 3, 2010, at 2:22 PM, jonas garcia wrote:

 Dear R users,

 I am trying to read a huge file in R. For some reason, only a part of the
 file is read. When I further investigated, I found that in one of my
 non-numeric columns, there is one odd character responsible for this,
 which
 I reproduce below:
 In case you cannot see it, it looks like a right arrow, but it is not the
 one you get from microsoft word in menu insert symbol.

 I think my dat file is broken and that funny character is an EOL marker
 that
 makes R not read the rest of the file. I am sure the character is there by
 chance but I fear that it might be present in some other big files I have
 to
 work with as well. So, is there any clever way to remove this inconvenient
 character in R avoiding having to edit the file in notepad and remove it
 manually?

 Code I am using:

 read.csv(new3.dat, header=F)

 Warning message:
 In read.table(file = file, header = header, sep = sep, quote = quote,  :
  incomplete final line found by readTableHeader on 'new3.dat'


 I think you should identify the offending line by using the count.fields
 function and fix it with an editor.


 --
 David


 I am working with R 2.10.1 in windows XP.

 Thanks in advance

 Jonas

[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.htmlhttp://www.r-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


 David Winsemius, MD
 Heritage Laboratories
 West Hartford, CT



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Removing colon from numerical data

2010-03-04 Thread LCOG1

Basic question: I looked through the forum and documentation but didn't see a
solution.

So consider 

O <- c(1:20)
D <- c("1:","2:","3:","4:","5:","6:","7:","8:","9:","10:","11:","12:","13:","14:","15:","16:",
"17:","18:","19:","20:")
Time <- c(51:70)

AveTT <- data.frame(O,D,Time)


I would like to remove the colon from the D column's data.  This is how
the data is being given to me, and it's too big to put into Excel to remove
the colons.  I tried the below but neither returns what I want.

AveTT$D <- as.numeric(AveTT$D)

AveTT$D <- substr(AveTT$D,1,nchar(AveTT$D)-1)

so i want 
  O   D Time
1   1  1:   51
2   2  2:   52
3   3  3:   53
4   4  4:   54
5   5  5:   55
6   6  6:   56
7   7  7:   57
8   8  8:   58
9   9  9:   59
10 10 10:   60

to become

  O   D Time
1   1  1   51
2   2  2   52
3   3  3   53
4   4  4   54
5   5  5   55
6   6  6   56
7   7  7   57
8   8  8   58
9   9  9   59
10 10 10   60

while maintaining the data's integrity.  Thanks 

JR

-- 
View this message in context: 
http://n4.nabble.com/Removing-colon-from-numerical-data-tp1578397p1578397.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Plot help

2010-03-04 Thread ManInMoon

Is there an easy way to do two things:

I have a dataframe with headers and 18 columns

I want to plot all the columns on the same y axis; plot(df) does this.

BUT

1. There is no legend, the legend function seems pedantic - surely there
must be an easy way to just pick up my headers?

2. How do I vary the colours? By default they repeat quite frequently and I
have lots of red lines etc.
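
For what it's worth, a minimal sketch along these lines (assuming the data
frame df holds only numeric columns; the colour choice is arbitrary) covers
both points:

cols <- rainbow(ncol(df))
matplot(df, type = "l", lty = 1, col = cols, ylab = "value")
legend("topright", legend = names(df), col = cols, lty = 1, cex = 0.7)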

Moon
-- 
View this message in context: 
http://n4.nabble.com/Plot-help-tp1578510p1578510.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Removing colon from numerical data

2010-03-04 Thread David Winsemius


On Mar 4, 2010, at 12:12 PM, LCOG1 wrote:



Basic question, looked through the forum and documentation but didnt  
see a

solution.

So consider

O <- c(1:20)
D <- c("1:","2:","3:","4:","5:","6:","7:","8:","9:","10:","11:","12:","13:",
"14:","15:","16:","17:","18:","19:","20:")
Time <- c(51:70)

AveTT <- data.frame(O,D,Time)


 AveTT$D <- gsub(":", "", AveTT$D)




I would like to remove the colon from the D column's data.  This  
is how
the data is being given to me and its too big to put into excel to  
remove

the colons.  I tried the below but neither returns what i want.

AveTT$D <- as.numeric(AveTT$D)

AveTT$D <- substr(AveTT$D,1,nchar(AveTT$D)-1)

so i want
 O   D Time
1   1  1:   51
2   2  2:   52
3   3  3:   53
4   4  4:   54
5   5  5:   55
6   6  6:   56
7   7  7:   57
8   8  8:   58
9   9  9:   59
10 10 10:   60

to become

 O   D Time
1   1  1   51
2   2  2   52
3   3  3   53
4   4  4   54
5   5  5   55
6   6  6   56
7   7  7   57
8   8  8   58
9   9  9   59
10 10 10   60

while maintaining the data's integrity.  Thanks

JR

--
View this message in context: 
http://n4.nabble.com/Removing-colon-from-numerical-data-tp1578397p1578397.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Removing colon from numerical data

2010-03-04 Thread Henrique Dallazuanna
Try this:

D <- as.numeric(gsub("[[:punct:]]", "", D))

On Thu, Mar 4, 2010 at 2:12 PM, LCOG1 jr...@lcog.org wrote:

 Basic question, looked through the forum and documentation but didnt see a
 solution.

 So consider

 O <- c(1:20)
 D <- c("1:","2:","3:","4:","5:","6:","7:","8:","9:","10:","11:","12:","13:","14:","15:","16:",
 "17:","18:","19:","20:")
 Time <- c(51:70)

 AveTT <- data.frame(O,D,Time)


 I would like to remove the colon from the D column's data.  This is how
 the data is being given to me and its too big to put into excel to remove
 the colons.  I tried the below but neither returns what i want.

 AveTT$D <- as.numeric(AveTT$D)

 AveTT$D <- substr(AveTT$D,1,nchar(AveTT$D)-1)

 so i want
  O   D Time
 1   1  1:   51
 2   2  2:   52
 3   3  3:   53
 4   4  4:   54
 5   5  5:   55
 6   6  6:   56
 7   7  7:   57
 8   8  8:   58
 9   9  9:   59
 10 10 10:   60

 to become

  O   D Time
 1   1  1   51
 2   2  2   52
 3   3  3   53
 4   4  4   54
 5   5  5   55
 6   6  6   56
 7   7  7   57
 8   8  8   58
 9   9  9   59
 10 10 10   60

 while maintaining the data's integrity.  Thanks

 JR

 --
 View this message in context: 
 http://n4.nabble.com/Removing-colon-from-numerical-data-tp1578397p1578397.html
 Sent from the R help mailing list archive at Nabble.com.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40 S 49° 16' 22 O

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] counting the number of ones in a vector

2010-03-04 Thread Nordlund, Dan (DSHS/RDA)
 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
 Behalf Of sjaffe
 Sent: Thursday, March 04, 2010 10:59 AM
 To: r-help@r-project.org
 Subject: Re: [R] counting the number of ones in a vector
 
 
 I got tired of writing length(which()) so I define a useful function which I
 source in my .Rprofile:
 
 count <- function( x ) length(which(x))
 
 Then:
 
 count( x == 1 )
 

How about  sum(x==1) ?  No need to write a new function, and it is even 2 
characters less to type.
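
One small caveat: if x can contain NAs, sum(x == 1) propagates them, so
something like

sum(x == 1, na.rm = TRUE)

may be what is wanted in that case.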

Hope this is helpful,

Dan

Daniel J. Nordlund
Washington State Department of Social and Health Services
Planning, Performance, and Accountability
Research and Data Analysis Division
Olympia, WA  98504-5204

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Is it possible to recursively update a function?

2010-03-04 Thread Seeker
Here is the test code.

foo <- function(x) exp(-x)
for (i in 1:5)
{
foo <- function(x) foo(x)*x
foo(2)
}

The error is evalution nested too deeply. I tried Recall() but it
didn't work either. Thanks a lot for your input.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] logistic regression by group?

2010-03-04 Thread Noah Silverman
Corey,

Thanks for the quick reply.

I can't give any sample code as I don't know how to code this in R.
That's why I tried to pass along some pseudo code.

I'm looking for the single best beta that maximizes the likelihood over all the
groups.  So, while your suggestion is close, it isn't quite what I need.

I've seen the formula written as:

L = product( exp(xb) / sum(exp(xb)) )

Where sum(exp(xb))  represents the sum of all the items in the group.

Does that make sense?
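
For what it's worth, that product is the conditional (grouped) logit
likelihood; survival::clogit fits this kind of model via strata(group), or it
can be maximised directly. A rough sketch, assuming a model matrix X, a 0/1
response y with exactly one 1 per group, and a group vector (all hypothetical
names, not a vetted implementation):

negll <- function(beta, X, y, group) {
  eta   <- as.vector(X %*% beta)
  denom <- tapply(exp(eta), group, sum)      # sum(exp(xb)) within each group
  -(sum(eta[y == 1]) - sum(log(denom)))      # minus the log of the product
}
fit <- optim(rep(0, ncol(X)), negll, X = X, y = y, group = group, method = "BFGS")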

-N


On 3/4/10 4:04 AM, Corey Sparks wrote:
 Hi, first, you should always provide some repeatable code for us to have a
 look at, that shows what you have tried so far.  
 That being said,  you can use the subset= option  in glm to subdivide your
 data and run separate models like that, e.g.

 fit.1-glm(y~x1+x2, data=yourdat, family=binomial, subset=group==1)
 fit.2-glm(y~x1+x2, data=yourdat, family=binomial, subset=group==2)

 where group is your grouping variable.
 Which should give you that kind of stratified model.
 Hope this helps,
 Corey

 -
 Corey Sparks, PhD
 Assistant Professor
 Department of Demography and Organization Studies
 University of Texas at San Antonio
 501 West Durango Blvd
 Monterey Building 2.270C
 San Antonio, TX 78207
 210-458-3166
 corey.sparks 'at' utsa.edu
 https://rowdyspace.utsa.edu/users/ozd504/www/index.htm


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] fisher.test gives p > 1

2010-03-04 Thread Bernardo Rangel Tura
On Thu, 2010-03-04 at 11:15 -0500, Jacob Wegelin wrote:
 The purpose of this email is to
 
 (1) report an example where fisher.test returns p > 1
 
 (2) ask if there is a reliable way to avoid p > 1 with fisher.test.
 
 If one has designed one's code to return an error when it finds a 
 nonsensical probability, of course a value of p > 1 can cause havoc.
 
 Example:
 
  junk <- data.frame(score=c(rep(0,14), rep(1,29), rep(2, 16)))
  junk <- rbind(junk, junk)
  junk$group <- c(rep("DOG", nrow(junk)/2), rep("kitty", nrow(junk)/2))
  table(junk$score, junk$group)
 
      DOG kitty
    0  14    14
    1  29    29
    2  16    16
  dput(fisher.test(junk$score, junk$group)$p.value)
 1.12

Hi jacob,

I think this is cover in R FAQ 7.31, but look this command
all.equal(dput(fisher.test(matrix(c(14,14,29,29,16,16),byrow=T,ncol=2))$p.value),1)
1.12
[1] TRUE

P.S
R FAQ 7.31 -
http://cran.r-project.org/doc/FAQ/R-FAQ.html#Why-doesn_0027t-R-think-these-numbers-are-equal_003f

-- 
Bernardo Rangel Tura, M.D,MPH,Ph.D
National Institute of Cardiology
Brazil

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is it possible to recursively update a function?

2010-03-04 Thread jim holtman
What exactly are you trying to do?  'foo' calls 'foo' calls 'foo' 
 How did you expect it to stop the recursive calls?

On Thu, Mar 4, 2010 at 2:08 PM, Seeker zhongm...@gmail.com wrote:
 Here is the test code.

 foo <- function(x) exp(-x)
 for (i in 1:5)
 {
 foo <- function(x) foo(x)*x
 foo(2)
 }

 The error is evalution nested too deeply. I tried Recall() but it
 didn't work either. Thanks a lot for your input.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is it possible to recursively update a function?

2010-03-04 Thread Uwe Ligges



On 04.03.2010 20:08, Seeker wrote:

Here is the test code.

foo <- function(x) exp(-x)
for (i in 1:5)
{
foo <- function(x) foo(x)*x
foo(2)



Hmmm, when do you think the evaluation stops? Your recursion has 
infinite depth.
If you cannot get the recursion right (and even if you can): try to get 
along without recursion; it is in most cases a bad idea in R: you are 
wasting memory and it is rather slow compared to iterative approaches.


Uwe Ligges





}

The error is evalution nested too deeply. I tried Recall() but it
didn't work either. Thanks a lot for your input.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Odp: precision issue?

2010-03-04 Thread Ted Harding
On 04-Mar-10 13:35:42, Duncan Murdoch wrote:
 On 04/03/2010 7:35 AM, (Ted Harding) wrote:
 On 04-Mar-10 10:50:56, Petr PIKAL wrote:
  Hi
  
  r-help-boun...@r-project.org napsal dne 04.03.2010 10:36:43:
  Hi R Gurus,
  
  I am trying to figure out what is going on here.
  
   a <- 68.08
   b <- a-1.55
   a-b
  [1] 1.55
   a-b == 1.55
  [1] FALSE
   round(a-b,2) == 1.55
  [1] TRUE
   round(a-b,15) == 1.55
  [1] FALSE
  
  Why should (a - b) == 1.55 fail when in fact b has been defined
  to be a - 1.55?  Is this a precision issue? How do i correct this?
  
  In real world those definitions of b are the same but not in
  computer 
  world. See FAQ 7.31
  
  Use either rounding or all.equal.
  
  all.equal(a-b, 1.55)
  [1] TRUE
  
  To all, this is quite a common question and it is documented in the FAQ,
  but R behaves differently here from many other programs, hence perhaps
  the confusion among novices.
  
  I wonder if there could be some type of global option which would
  get rid of these users' mistakes or misunderstandings by setting
  some threshold option for equality testing with ==.
  
  Regards
  Petr
  
  Alex

 Interesting suggestion, but in my view it would probably give
 rise to more problems than it would avoid!

 The fundamental issue is that many inexperienced users are not
 aware that once 68.08 has got inside the computer (as used by
 R and other programs which do fixed-length binary arithmetic)
 it is no longer 68.08 (though 1.55 is still 1.55).
   
 
 I think most of what you write above and below is true, but your 
 parenthetical remark is not.  1.55 can't be represented exactly in the 
 double precision floating point format used in R.  It doesn't have a 
 terminating binary expansion, so it will be rounded to a binary 
 fraction.  I believe you can see the binary expansion using this code:
 
  x <- 1.55
  for (i in 1:70) {
    cat(floor(x))
    if (i == 1) cat(".")
    x <- 2*(x - floor(x))
  }
 
 which gives
 
 1.10001100110011001100110011001100110011001100110011010
 
 Notice how it becomes a repeating expansion, with 0011 repeated 12 
 times, but then it finishes with 010, because we've run
 out of bits.
 So 1.55 is actually stored as a number which is ever so slightly
 bigger.
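
Another quick way to see the stored value (a side note, not part of the
expansion trick above):

sprintf("%.20f", 1.55)
[1] "1.55000000000000004441"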
 
 Duncan Murdoch

Of course you are quite right! My parenthetical remark was
an oversight and a blunder -- not so much a typo as a braino,
due to crossing wires between dividing by 2 (1.5 = 1 + 1/2)
and dividing by 10 (1.55 != 1 + 1/2 + (1/2)/2)!

However, your detailed explanation will certainly be useful
to some. And your method of producing the binary expansion
is very neat indeed.

 [the rest of my stuff snipped]

Ted.


E-Mail: (Ted Harding) ted.hard...@manchester.ac.uk
Fax-to-email: +44 (0)870 094 0861
Date: 04-Mar-10   Time: 20:11:39
-- XFMail --

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Histogram color

2010-03-04 Thread Greg Snow
Here is another approach:

x <- rnorm(100)

tmp <- hist(x, plot=FALSE)
plot(tmp, col='blue')
tu <- par('usr')
par(xpd=FALSE)
clip( tu[1], mean(x) - sd(x), tu[3], tu[4] )
plot(tmp, col='red', add=TRUE)
clip( mean(x) + sd(x), tu[2], tu[3], tu[4] )
plot(tmp, col='red', add=TRUE)



-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
 project.org] On Behalf Of Ashta
 Sent: Thursday, March 04, 2010 5:42 AM
 To: R help
 Subject: [R] Histogram color
 
 In a  histogram , is it possible to have different colors?
 Example. I generated
 
 x - rnorm(100)
 hist(x)
 
  I want the histogram to have different colors based on the following
  condition:
  values > mean(x)+sd(x) with red color and < mean(x) - sd(x) with red color as
  well. The middle one with blue color.
 Is it possible to do that in R?
 Thanks
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Three most useful R package

2010-03-04 Thread Greg Snow
Well, the HeadSlap package would of course require the esp package so that it 
could tell the difference between someone doing something clever and someone 
doing something because everyone else does.

For example, user 1 calls the pie function, HeadSlap using esp finds out that 
user 1 will also be creating a bar chart and dot plot of same data to use in a 
presentation comparing the types of plots and showing why you should not use 
pie charts.  HeadSlap allows pie chart to be created.

User 2 calls the pie function because they think pie charts are pretty and have 
never learned better, HeadSlap first issues a warning/error with references to 
Cleveland and others.  Further attempts to use pie by same user for same reason 
result in activating the hardware.

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
 project.org] On Behalf Of Jim Lemon
 Sent: Thursday, March 04, 2010 1:49 AM
 To: r-h...@stat.math.ethz.ch
 Subject: Re: [R] Three most useful R package
 
 On Wed, 3 Mar 2010 11:52:48 -0700 Greg Snow greg.s...@imail.org
 wrote:
   I also want a package that when people misuse certain
   functions/techniques it will cause a small door on the
   side of their monitor/computer to open and a mechanical
   hand will come out and slap them upside the head. But
   that package will not be useful until the hardware
   support is available.
 
 Hi Greg,
 Had not my grandfather and a workmate, over 60 years ago, worked out a
 way to do something that no one else had been able to do by running a
 machine in a way that it was not supposed to be run, I might agree with
 you.
 
 Jim
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] fisher.test gives p > 1

2010-03-04 Thread Ted Harding
On 04-Mar-10 19:27:16, Bernardo Rangel Tura wrote:
 On Thu, 2010-03-04 at 11:15 -0500, Jacob Wegelin wrote:
 The purpose of this email is to
 
 (1) report an example where fisher.test returns p > 1
 (2) ask if there is a reliable way to avoid p > 1 with fisher.test.
 
 If one has designed one's code to return an error when it finds a
 nonsensical probability, of course a value of p > 1 can cause havoc.
 
 Example:
 
  junk <- data.frame(score=c(rep(0,14), rep(1,29), rep(2, 16)))
  junk <- rbind(junk, junk)
  junk$group <- c(rep("DOG", nrow(junk)/2), rep("kitty", nrow(junk)/2))
  table(junk$score, junk$group)
 
      DOG kitty
    0  14    14
    1  29    29
    2  16    16
  dput(fisher.test(junk$score, junk$group)$p.value)
 1.12
 
 Hi jacob,
 
 I think this is cover in R FAQ 7.31, but look this command
 all.equal(dput(fisher.test(matrix(c(14,14,29,29,16,16),byrow=T,ncol=2))$
 p.value),1)
 1.12
 [1] TRUE
 
 P.S
 R FAQ 7.31 -
 http://cran.r-project.org/doc/FAQ/R-FAQ.html#Why-doesn_0027t-R-think-the
 se-numbers-are-equal_003f
 
 -- 
 Bernardo Rangel Tura, M.D,MPH,Ph.D
 National Institute of Cardiology
 Brazil

This is yet another example where the advice (in my posting
in the precision thread), that if an anomaly occurs it is
useful to check the magnitude of the anomaly, applies.

Then:
  a <- 68.08 ; b <- a-1.55 ; a-b == 1.55
  # [1] FALSE
  (a-b) - 1.55
  # [1] -2.88658e-15

Here (as Jakob found for himself):
  p = 1.12

Or, more precisely (using Jakob's data):

  fisher.test(junk$score, junk$group)$p.value -1
  # [1] 5.728751e-14

(which is not quite the same, but makes the same point).

The presence of a discrepancy of the order of 1e-14 or 1e-15
(or smaller) is always a strong clue that errors due to the
accumulation of imprecise binary representations may have
occurred.

Martin's replacement with max(0, min(1, PVAL)) will remove
such anomalies (which still will not guarantee that a result
of 0.9974 is exact, but that is probably not
important; what is important is that it does not break the
logic of probability).

Ted.


E-Mail: (Ted Harding) ted.hard...@manchester.ac.uk
Fax-to-email: +44 (0)870 094 0861
Date: 04-Mar-10   Time: 20:33:03
-- XFMail --

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Hi

2010-03-04 Thread hussain abu-saaq

How can I write this Matlab code in R:


options=optimset('TolFun',1e-9,'TolX',1e-9,'MaxIter',1e8,'MaxFunEvals',1e8);
c=c/2;
[alpha, delta, epsilon, nofcup] = ustrs(set_date,mat_date);
y = fminsearch('pbond',.15,options,p,c,nofcup,delta/epsilon);
y = 200*y;



Note 
pbond is a function in Matlab  I already wrote in R


ustrs is a function in Matlab I already convert into r


Thank you

HI

  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Hi

2010-03-04 Thread stephen sefick
I would help, but I don't know matlab.

Stephen

On Thu, Mar 4, 2010 at 2:50 PM, hussain abu-saaq hussain...@hotmail.com wrote:

 How Can I write this this matlab code in R:


 options=optimset('TolFun',1e-9,'TolX',1e-9,'MaxIter',1e8,'MaxFunEvals',1e8);
 c=c/2;
 [alpha, delta, epsilon, nofcup] = ustrs(set_date,mat_date);
 y = fminsearch('pbond',.15,options,p,c,nofcup,delta/epsilon);
 y = 200*y;



 Note
 pbond is a function in Matlab  I already wrote in R


 ustrs is a function in Matlab I already convert into r


 Thank you

 HI


        [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Stephen Sefick

Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods.  We are mammals, and have not exhausted the
annoying little problems of being mammals.

-K. Mullis

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] End of line marker?

2010-03-04 Thread jim holtman
Have you considered reading the file in as binary/raw, finding the
offending character and replacing it with a blank (or whatever) and
then writing the file back out?  You can then probably process it
using read.table.
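
A minimal sketch of that idea (this assumes the stray byte is 0x1a, the old
MS-DOS EOF marker; adjust once the actual byte is known):

clean_file <- function(f) {
  b <- readBin(f, what = "raw", n = file.info(f)$size)
  writeBin(b[b != as.raw(0x1a)], f)   # drop the offending byte, rewrite in place
}
## e.g. invisible(lapply(list.files(pattern = "\\.dat$"), clean_file))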

On Thu, Mar 4, 2010 at 12:50 PM, jonas garcia
garcia.jona...@googlemail.com wrote:
 Thank you so much for your reply.



 I can identify the characters very easily in a couple of files. The reason I
 am worried is that I have thousands of files to read in. The files were
 produced in a very old MS-DOS software that records information on
 oceanographic data and geographic position during a survey.



 My main goal is read all these files into R for further analysis. Most of
 the files are cleared of these EOL markers but some are not. I only noticed
 the problem by chance when I was looking and comparing one of them. I wonder
 if I can solve this problem using R, without having to go for text editors
 separately.



 Help on this would be much appreciated.

 Thanks again



 J


 On 3/4/10, David Winsemius dwinsem...@comcast.net wrote:


 On Mar 3, 2010, at 2:22 PM, jonas garcia wrote:

 Dear R users,

 I am trying to read a huge file in R. For some reason, only a part of the
 file is read. When I further investigated, I found that in one of my
 non-numeric columns, there is one odd character responsible for this,
 which
 I reproduce below:
 In case you cannot see it, it looks like a right arrow, but it is not the
 one you get from microsoft word in menu insert symbol.

 I think my dat file is broken and that funny character is an EOL marker
 that
 makes R not read the rest of the file. I am sure the character is there by
 chance but I fear that it might be present in some other big files I have
 to
 work with as well. So, is there any clever way to remove this inconvenient
 character in R avoiding having to edit the file in notepad and remove it
 manually?

 Code I am using:

 read.csv(new3.dat, header=F)

 Warning message:
 In read.table(file = file, header = header, sep = sep, quote = quote,  :
  incomplete final line found by readTableHeader on 'new3.dat'


 I think you should identify the offending line by using the count.fields
 function and fix it with an editor.


 --
 David


 I am working with R 2.10.1 in windows XP.

 Thanks in advance

 Jonas

        [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.htmlhttp://www.r-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


 David Winsemius, MD
 Heritage Laboratories
 West Hartford, CT



        [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Equation for model generated by auto.arima

2010-03-04 Thread testuser

I would like to know how to use the coefficients generated by the ARIMA model
to predict future values. What formula should be used with the coeffcients
to determine the future values.

Thanks
-- 
View this message in context: 
http://n4.nabble.com/Equation-for-model-generated-by-auto-arima-tp1578756p1578756.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is it possible to recursively update a function?

2010-03-04 Thread Seeker
I need to update the posterior distribution function as new results come in
and find the posterior mean each time.
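
For what it's worth, the infinite recursion can be avoided by snapshotting the
previous definition before redefining the function; a minimal sketch using the
original foo example (not the posterior itself):

foo <- function(x) exp(-x)
for (i in 1:5) {
  foo <- local({
    old <- foo                 # capture the previous definition
    function(x) old(x) * x     # the new foo calls the old one, not itself
  })
}
foo(2)   # exp(-2) * 2^5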

On Mar 4, 1:31 pm, jim holtman jholt...@gmail.com wrote:
 What exactly are you trying to do?  'foo' calls 'foo' calls 'foo' 
  How did you expect it to stop the recursive calls?





 On Thu, Mar 4, 2010 at 2:08 PM, Seeker zhongm...@gmail.com wrote:
  Here is the test code.

  foo-function(x) exp(-x)
  for (i in 1:5)
  {
  foo-function(x) foo(x)*x
  foo(2)
  }

  The error is evalution nested too deeply. I tried Recall() but it
  didn't work either. Thanks a lot for your input.

  __
  r-h...@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guidehttp://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.

 --
 Jim Holtman
 Cincinnati, OH
 +1 513 646 9390

 What is the problem that you are trying to solve?

 __
 r-h...@r-project.org mailing listhttps://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guidehttp://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.- Hide 
 quoted text -

 - Show quoted text -

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] partial differentiation of a dynamic expression

2010-03-04 Thread Melody Ghahramani
Hello,
 
Is it possible to use 'deriv' when the expression itself is dynamic? I have 
data where the conditional mean is time-varying (like a GARCH(1,1) model) as
 
mu_{t} = omega1 + alpha1*N_{t-1} + beta1*mu_{t-1}.
 
The parameter vector is c(omega1, alpha1, beta1) and N_t is the 
observation at each time.
 
I want two things:
1. the derivative of mu_t with respect to each parameter. So for example, the 
first derivative with respect to omega1 is 1 + beta1* d mu_{t-1}/ d omega1.
2. Once I have the expression in 1, I want its first partial derivative.
 
Ultimately, I want to find the root of an estimating function using 
Newton-Raphson, where
 
param <- param - solve(mat) %*% param, 
 
and where param is an estimating function which has first partial derivatives 
in it and 'mat' is a matrix which involves the first partial derivatives of 
elements in 'param'.
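 
For what it's worth, since deriv() needs a fixed expression, one common
workaround is to propagate the recursive derivatives numerically in the same
loop that builds mu_t; a minimal sketch with hypothetical names (N is the
observation vector, and the initialisation is just one possible choice):
 
n  <- length(N)
mu <- dmu.domega1 <- numeric(n)
mu[1] <- mean(N); dmu.domega1[1] <- 0              # some initialisation
for (t in 2:n) {
  mu[t]          <- omega1 + alpha1 * N[t-1] + beta1 * mu[t-1]
  dmu.domega1[t] <- 1 + beta1 * dmu.domega1[t-1]   # d mu_t / d omega1
}
 
The derivatives with respect to alpha1 and beta1 follow the same pattern.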
 
Thanks,
Melody
 
 

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Equation for model generated by auto.arima

2010-03-04 Thread Stephan Kolassa

Hi,

the help page for arima() suggests looking at predict.Arima(), so take a 
look at ?predict.Arima(). You will probably not use the coefficients, 
but just feed it the output from arima(). And take a look at 
auto.arima() in the forecast package.
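
Something along these lines, for example (only a sketch, assuming a ts object
y):

library(forecast)
fit <- auto.arima(y)
forecast(fit, h = 12)        # point forecasts plus prediction intervals

For a plain arima() fit, predict(fit, n.ahead = 12) plays the same role.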


HTH
Stephan


testuser schrieb:

I would like to know how to use the coefficients generated by the ARIMA model
to predict future values. What formula should be used with the coeffcients
to determine the future values.

Thanks


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] missing date and time intervals in data frame

2010-03-04 Thread Kindra Martinenko
I posted a similar question, but feel it needs a bit more elaboration.

I have a data frame (read from a csv file) that may have missing rows.  Each
day has 7 time intervals associated with it, with a range from 17:00 hrs to
18:00 hrs in 10 minute bins.

What I am looking for is a script that will run through the data frame and
insert NAin the Volume column for any dates that are missing a time
interval.  For example:

          Date        Time            Camera  Volume
57  2009-10-09  5:00:00 PM  MANBRIN_RIVER_NB     210
58  2009-10-09  5:10:00 PM  MANBRIN_RIVER_NB     207
59  2009-10-09  5:20:00 PM  MANBRIN_RIVER_NB     250
60  2009-10-09  5:30:00 PM  MANBRIN_RIVER_NB     193
61  2009-10-09  5:40:00 PM  MANBRIN_RIVER_NB     205
62  2009-10-09  6:00:00 PM  MANBRIN_RIVER_NB     185

Note that between row 61 and row 62, there is a missing time interval (5:50
PM).  I want the data frame to look like this:

          Date        Time            Camera  Volume
57  2009-10-09  5:00:00 PM  MANBRIN_RIVER_NB     210
58  2009-10-09  5:10:00 PM  MANBRIN_RIVER_NB     207
59  2009-10-09  5:20:00 PM  MANBRIN_RIVER_NB     250
60  2009-10-09  5:30:00 PM  MANBRIN_RIVER_NB     193
61  2009-10-09  5:40:00 PM  MANBRIN_RIVER_NB     205
*62 2009-10-09  5:50:00 PM  MANBRIN_RIVER_NB      NA*
62  2009-10-09  6:00:00 PM  MANBRIN_RIVER_NB     185


Thanks in advance,
Kindra

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] missing date and time intervals in data frame

2010-03-04 Thread David Winsemius


On Mar 4, 2010, at 4:45 PM, Kindra Martinenko wrote:


I posted a similar question, but feel it needs a bit more elaboration.

I have a data frame (read from a csv file) that may have missing  
rows.  Each
day has 7 time intervals associated with it, with a range from 17:00  
hrs to

18:00 hrs in 10 minute bins.

What I am looking for is a script that will run through the data  
frame and

insert NAin the Volume column for any dates that are missing a time
interval.  For example:

 DateTime Camera
Volume
57  2009-10-09 5:00:00 PM MANBRIN_RIVER_NB210
58  2009-10-09 5:10:00 PM MANBRIN_RIVER_NB207
59  2009-10-09 5:20:00 PM MANBRIN_RIVER_NB250
60  2009-10-09 5:30:00 PM MANBRIN_RIVER_NB193
61  2009-10-09 5:40:00 PM MANBRIN_RIVER_NB205
62  2009-10-09 6:00:00 PM MANBRIN_RIVER_NB185


Here is one method of generating a series of time points at 10 minute  
intervals:


 as.POSIXlt("5:00:00 PM", format="%I:%M:%S %p") + (1:20)*60*10
 [1] 2010-03-04 05:10:00 EST 2010-03-04 05:20:00 EST 2010-03-04  
05:30:00 EST
 [4] 2010-03-04 05:40:00 EST 2010-03-04 05:50:00 EST 2010-03-04  
06:00:00 EST
 [7] 2010-03-04 06:10:00 EST 2010-03-04 06:20:00 EST 2010-03-04  
06:30:00 EST
[10] 2010-03-04 06:40:00 EST 2010-03-04 06:50:00 EST 2010-03-04  
07:00:00 EST
[13] 2010-03-04 07:10:00 EST 2010-03-04 07:20:00 EST 2010-03-04  
07:30:00 EST
[16] 2010-03-04 07:40:00 EST 2010-03-04 07:50:00 EST 2010-03-04  
08:00:00 EST

[19] 2010-03-04 08:10:00 EST 2010-03-04 08:20:00 EST

Applying that to the solution you referenced should finish the job.
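
For instance, a minimal sketch of the merge step (assuming the data frame is
called dat and its DateTime column has already been converted to POSIXct):

full <- data.frame(DateTime = seq(as.POSIXct("2009-10-09 17:00:00"),
                                  as.POSIXct("2009-10-09 18:00:00"),
                                  by = "10 min"))
filled <- merge(full, dat, by = "DateTime", all.x = TRUE)  # absent rows get NA Volume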

--
David.


Note that between row 61 and row 62, there is a missing time  
interval (5:50

PM).  I want the data frame to look like this:

Date Time   Camera
Volume
57  2009-10-09 5:00:00 PM MANBRIN_RIVER_NB210
58  2009-10-09 5:10:00 PM MANBRIN_RIVER_NB207
59  2009-10-09 5:20:00 PM MANBRIN_RIVER_NB250
60  2009-10-09 5:30:00 PM MANBRIN_RIVER_NB193
61  2009-10-09 5:40:00 PM MANBRIN_RIVER_NB205
*62  2009-10-09 5:50:00 PM MANBRIN_RIVER_NB  NA*
62  2009-10-09 6:00:00 PM MANBRIN_RIVER_NB185


Thanks in advance,
Kindra

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help

2010-03-04 Thread Jim Lemon

On 03/04/2010 10:30 PM, mahalakshmi sivamani wrote:

Hi all ,

I have one query.

i have list of some  .cel files. in my program i have to mention the path of
these .cel files

part of my program is,

rna.data <- exprs(justRMA(filenames=file.names, celfile.path=datadir,
sampleNames=sample.names, phenoData=pheno.data,
cdfname=cleancdfname("hg18_Affymetrix U133A")))


in the place of datadir i have to mention the character string of the
directory of these .cel files. I don't know how to give the path for all
these files.

i set the path as given below,



rna.data <- exprs(justRMA(filenames=file.names, celfile.path=D:/MONO1.CEL
D:/MONO2.CEL D:/MONO3.CEL D:/MACRO1.CEL D:/MACRO2.CEL
D:/MACRO3.CEL, sampleNames=sample.names, phenoData=pheno.data,
cdfname=cleancdfname("hg18_Affymetrix U133A")))


it shows this error,

Error: unexpected string constant in
rna.data-exprs(justRMA(filenames=file.names, celfile.path=D:/MONO1
D:/MONO2



Hi Mahalakshmi,
If you want to pass more than one string as an argument, you must pass 
them as a character vector.


celfile.path=c("D:/MONO1.CEL","D:/MONO2.CEL","D:/MONO3.CEL",
 "D:/MACRO1.CEL","D:/MACRO2.CEL","D:/MACRO3.CEL")

Although this is not guaranteed to make your function work correctly.

Jim

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Running script with double-click

2010-03-04 Thread Matt Asher

Hi,

I need to be able to run an R script by double-clicking the file name in 
Windows. I've tried associating the .r extension with the different R 
.exe's in /bin but none seems to work. Some open R then close right 
away, and Rgui.exe gives the message ARGUMENT /my/file.r __ignored__ 
before opening a new, blank session.


I've tried Google and looking in the R for Windows FAQ but didn't see 
anything.


Thanks.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Hi

2010-03-04 Thread Rob Forler
a quick google of fminsearch in R

resulted in this
http://www.google.com/search?q=fminsearch+in+Rie=utf-8oe=utf-8aq=trls=org.mozilla:en-US:officialclient=firefox-a

take a look. there appears to be a function called optim that you can look
at

http://sekhon.berkeley.edu/stats/html/optim.html

before you ask a question on r-help I would highly suggest doing at least a
small amount of googling.
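
As a rough, untested sketch of the translation (pbond and ustrs being the
already-converted R functions, and assuming the R version of ustrs returns a
named list):

res <- ustrs(set_date, mat_date)
c   <- c/2                                # mirrors c=c/2 in the Matlab code
fit <- optim(par = 0.15,
             fn  = function(r) pbond(r, p, c, res$nofcup, res$delta/res$epsilon),
             method = "Nelder-Mead",
             control = list(reltol = 1e-9, maxit = 1e8))
y <- 200 * fit$par

For a single parameter, optimize() over an interval may be the better analogue.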

On Thu, Mar 4, 2010 at 3:01 PM, stephen sefick ssef...@gmail.com wrote:

 I would help, but I don't know matlab.

 Stephen

 On Thu, Mar 4, 2010 at 2:50 PM, hussain abu-saaq hussain...@hotmail.com
 wrote:
 
  How Can I write this this matlab code in R:
 
 
 
 options=optimset('TolFun',1e-9,'TolX',1e-9,'MaxIter',1e8,'MaxFunEvals',1e8);
  c=c/2;
  [alpha, delta, epsilon, nofcup] = ustrs(set_date,mat_date);
  y = fminsearch('pbond',.15,options,p,c,nofcup,delta/epsilon);
  y = 200*y;
 
 
 
  Note
  pbond is a function in Matlab  I already wrote in R
 
 
  ustrs is a function in Matlab I already convert into r
 
 
  Thank you
 
  HI
 
 
 [[alternative HTML version deleted]]
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 



 --
 Stephen Sefick

 Let's not spend our time and resources thinking about things that are
 so little or so large that all they really do for us is puff us up and
 make us feel like gods.  We are mammals, and have not exhausted the
 annoying little problems of being mammals.

-K. Mullis

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unsigned Posts; Was Setting graphical parameters

2010-03-04 Thread Jim Lemon

On 03/05/2010 04:11 AM, Bert Gunter wrote:

Folks:

Rolf's (appropriate, in my view) response below seems symptomatic of an
increasing tendency of posters to hide their identities with pseudonyms and
fake headers. While some of this may be due to identity paranoia (which I
think is overblown for this list), I suspect that a good chunk of it is lazy
students trying to beat the system by having us do their homework. The
etiquette of this list has traditionally been, like the software, open: we
sign our names. I would urge helpeRs to adhere to this etiquette and ignore
unsigned posts.

Contrary views welcome, either via public or private responses.


Hi Bert (and anyone else who wishes to read this),

I had a working hypothesis about this, but decided to check the data 
before replying. Looking at the ten most recent obvious pseudonyms, all 
were from free email accounts like gmail or yahoo. A few of these 
included all or part of the name of the user anyway. As such accounts 
tend to be used for lots of things, the users may well be concerned 
about identification, even if everyone on the R help list is of the 
highest moral standing. I think Rolf's guess about the provenance of the 
request was based upon the carefully copied format, which Blind Freddie 
could see was homework (and the requester actually admitted it!). 
Personally, I never answer requests that have the due date for the 
assignment at the bottom.


Jim

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Running script with double-click

2010-03-04 Thread Peter Alspach
Tena koe Matt

I tend to create a .bat file with one line:

R\R-Current\bin\R CMD BATCH yourScript.R

where you replace R\R-Current\bin\R with the path to your R, and the
.bat file is in the same folder as yourScript.R.

There may be better ways, and doubtless someone will enlighten us both
if there are.

HTH 

Peter Alspach

 -Original Message-
 From: r-help-boun...@r-project.org 
 [mailto:r-help-boun...@r-project.org] On Behalf Of Matt Asher
 Sent: Friday, 5 March 2010 12:43 p.m.
 To: r-help@r-project.org
 Subject: [R] Running script with double-click
 
 Hi,
 
 I need to be able to run an R script by double-clicking the 
 file name in Windows. I've tried associating the .r extension 
 with the different R .exe's in /bin but none seems to work. 
 Some open R then close right away, and Rgui.exe gives the 
 message ARGUMENT /my/file.r __ignored__ before opening a 
 new, blank session.
 
 I've tried Google and looking in the R for Windows FAQ but 
 didn't see anything.
 
 Thanks.
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

