[R] [R-pkgs] smacof package for multidimensional scaling

2008-06-08 Thread Patrick Mair

Dear useRs,

The smacof package (see also our PsychoR repository at 
http://r-forge.r-project.org/projects/psychor/) has been uploaded to CRAN.


This package provides the following approaches to multidimensional 
scaling (MDS) based on stress minimization by means of majorization 
(smacof):

- Simple smacof on symmetric dissimilarity matrices
- smacof for rectangular matrices (unfolding models)
- smacof with constraints on the configuration (linear, unique, 
diagonal, or user-specified constraints; fitting simplex or circumplex)
- 3-way smacof for individual differences (including constraints for 
idioscal, indscal, and identity)
- Sphere projections (spherical smacof, primal and dual algorithms)

Each of these approaches is implemented in both metric and nonmetric 
variants, including primary, secondary, and tertiary approaches to tie 
handling. Various 2D and 3D plots are provided, and a package vignette is 
included.


Patrick

___
R-packages mailing list
[EMAIL PROTECTED]
https://stat.ethz.ch/mailman/listinfo/r-packages

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lsmeans

2008-06-08 Thread Dieter Menne
John Fox  mcmaster.ca> writes:

> Actually, the effects package does exactly what you suggest for continuous
> predictors.

But not for lme.

Dieter



[R] histogram tick labels

2008-06-08 Thread Lawrence Hanser
Dear Friends,

I am doing a rather simple histogram on a vector of data, MR.  I set
breaks for the intervals:

hist(MR,breaks=c(0, 2.9, 5.9, 8.9, 11.9,14.9, 17.9, 20.9))

My question is, how do I change the labels on the tick marks?  I have
looked at ?hist and can't find a clue...
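One common approach (a sketch; MR is replaced here by simulated stand-in data) is to suppress hist()'s default axes and then draw them yourself with axis(), which takes `at` and `labels` arguments:

```r
# sketch: custom tick labels on a histogram via axes = FALSE + axis()
MR <- runif(100, min = 0, max = 20.9)        # stand-in data (assumption)
brks <- c(0, 2.9, 5.9, 8.9, 11.9, 14.9, 17.9, 20.9)
hist(MR, breaks = brks, axes = FALSE)        # draw bars only, no axes
axis(1, at = brks,                           # ticks at the break points
     labels = c("0", "3", "6", "9", "12", "15", "18", "21"))
axis(2)                                      # default y axis
```

The labels shown are arbitrary; any character vector the same length as `at` works.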

Thanks in advance.

Larry



Re: [R] exponential distribution

2008-06-08 Thread Prof Brian Ripley
What do you mean by 'correct' here?  If you do manually what glm() does, 
you would expect to get the same answer, but that is not independent 
verification.


As far as I recall, neither glm nor survreg calculates the exact 
variance in this case -- they both make use of asymptotic theory. 
glm makes use of the expected (Fisher) information evaluated at the MLE, 
survreg of the observed information -- these are asymptotically equivalent, 
but there is some evidence preferring the observed information.
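The two variance estimates can be put side by side with a small simulation (a sketch; the data and variable names here are my own, not from the exercise). For an exponential model with log link, minus the log-likelihood is sum(eta + y*exp(-eta)) with eta the linear predictor; the Hessian of that at the MLE is the observed information, while glm's weights give the expected information:

```r
# sketch: observed vs. expected information for an exponential log-link fit
set.seed(1)
x <- runif(50)
y <- rexp(50, rate = exp(-(1 + 2 * x)))      # mean exp(1 + 2x)

# negative log-likelihood of the exponential model, eta = b1 + b2*x
nll <- function(b) { eta <- b[1] + b[2] * x; sum(eta + y * exp(-eta)) }
opt <- optim(c(0, 0), nll, hessian = TRUE)
se.obs <- sqrt(diag(solve(opt$hessian)))     # observed information (survreg-style)

fit <- glm(y ~ x, family = Gamma(link = "log"))
se.exp <- sqrt(diag(summary(fit, dispersion = 1)$cov.scaled))  # expected information
```

With the log link the expected weights are constant, so the expected-information SEs coincide with the X'X calculation in the code below, while the observed information weights by y/mu.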


As I hinted, I've not checked this for several years and I will leave it 
to you to study the documentation and code.


On Sun, 8 Jun 2008, Antonio Gasparrini wrote:


Dear all,

I've tried to solve Exercise 12, Chapter 4 of "Introduction to GLM" by 
Annette Dobson. It's about the relationship between survival time of 
leukemia patients and blood cell count. I tried to fit a model with an 
exponential distribution, first by glm (Gamma family with the dispersion 
parameter fixed at 1) and then with survreg. They gave me the same point 
estimates, but the standard errors are slightly different. I checked the 
results by building the routine manually, and it seems that the glm 
results are the correct ones.


###
data <- data.frame(y=c(65,156,100,134,16,108,121,4,39,143,56,26,22,1,1,5,65),
x=c(3.36,2.88,3.63,3.41,3.78,4.02,4.00,4.23,3.73,3.85,3.97,4.51,
4.54,5.00,5.00,4.72,5.00))

model1 <- glm(y~x,family=Gamma(link="log"),data)
summary(model1,dispersion=1)
model2 <- survreg(Surv(y) ~ x, data, dist="exponential")
summary(model2)

X <- model.matrix(model1)
y <- as.vector(data$y)
b <- as.matrix(c(1,1))  # starting values
iter <- matrix(0,7,2)
iter[1,] <- b
for (i in 2:7) {
  W <- diag(rep(1,length(y)))              # working weights (identity here)
  z <- X%*%b + (y-exp(X%*%b))/exp(X%*%b)   # working response
  b <- solve(t(X)%*%W%*%X) %*% (t(X)%*%W%*%z)
  iter[i,] <- b
}
summary(model1,dispersion=1)$coef
summary(model2)
iter[nrow(iter),]
sqrt(diag(solve(t(X)%*%W%*%X)))
###

Can you explain whether this difference is due to an error?
Thanks in advance.


Antonio Gasparrini
Public and Environmental Health Research Unit (PEHRU)
London School of Hygiene & Tropical Medicine
Keppel Street, London WC1E 7HT, UK
Office: 0044 (0)20 79272406 - Mobile: 0044 (0)79 64925523
http://www.lshtm.ac.uk/people/gasparrini.antonio ( 
http://www.lshtm.ac.uk/pehru/ )


--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595



[R] Default Argument Passing in Script

2008-06-08 Thread Gundala Viswanath
Hi all,

Currently I run an R script with arguments in the following way:

$ R --vanilla < myscript.R  ARGUMENT1

And in my script it is encoded as:

__BEGIN__
args<-commandArgs()
do_sth(args[3])


My question is: is there a way to set a default
argument inside the R script?

The Perl analogue would be:

my $param = $ARGV[0] || "default_argument";

I am wondering how this can be done in R.
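A minimal base-R sketch of such a default (do_sth is the hypothetical function above; the index 3 matches the `R --vanilla < myscript.R` invocation, and with Rscript you would use commandArgs(trailingOnly = TRUE) instead):

```r
# sketch: fall back to a default when no argument is supplied
args <- commandArgs()
param <- if (length(args) >= 3 && nzchar(args[3])) args[3] else "default_argument"

do_sth <- function(p) cat("running with:", p, "\n")  # hypothetical stand-in
do_sth(param)
```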

-- 
Gundala Viswanath



[R] R and C/C++ for loop

2008-06-08 Thread juiceorange


I want to call C/C++ from R to do the loops. Is that doable?
If my whole program is one big loop, is it better to do the inverse, i.e.
call R functions from C/C++?
Whichever is better, is there any online material that guides me through
the procedure?
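Calling C from R is doable; below is a minimal sketch of the .C interface (assuming a standard R installation with a working compiler, so that `R CMD SHLIB` is available; file and function names are my own). The canonical reference is the "Writing R Extensions" manual, which also covers the richer .Call interface:

```r
# sketch: compile a C loop and call it from R via .C()
ccode <- '
#include <R.h>
void sumsq(double *x, int *n, double *out)
{
    int i;
    double s = 0.0;
    for (i = 0; i < *n; i++)       /* the hot loop lives in C */
        s += x[i] * x[i];
    *out = s;
}
'
writeLines(ccode, "sumsq.c")
system("R CMD SHLIB sumsq.c")                   # builds sumsq.so / sumsq.dll
dyn.load(paste0("sumsq", .Platform$dynlib.ext))

x <- as.double(1:10)
.C("sumsq", x, as.integer(length(x)), out = double(1))$out   # 385
```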

Thank you!
-- 
View this message in context: 
http://www.nabble.com/R-and-C-C%2B%2B-for-loop-tp17724633p17724633.html
Sent from the R help mailing list archive at Nabble.com.



[R] Multivariate adaptive regression spline (MARS)

2008-06-08 Thread Handayani Situmorang
Hi, all!!!
I don't know if this is the right place to ask about MARS. I haven't 
studied statistics, but I have to work with it. I'm confused about what the 
MARS algorithm does (forward and backward stepwise). When I run the 'mars' 
function from the 'mda' library, it produces some output, but I don't 
understand well enough what it's all about. Can anyone suggest an easy way 
to learn about MARS?
   
   
Thanks a lot,

Handa

   



Re: [R] sample size in bootstrap(boot)

2008-06-08 Thread Tim Hesterberg
bootstrap() and samp.bootstrap() are part of the S+Resample package,
see http://www.insightful.com/downloads/libraries

You could modify boot() to allow sampling with size other than n.

Use caution when bootstrapping with a sample size other than n.
The usual reason for bootstrapping is inference (standard errors,
confidence intervals) using the actual data, including the actual
sample size, not some other data that you don't have.

However, there are reasons to sample with other sample sizes, e.g.:
* Planning for future work, e.g. planning for a clinical trial with
  large n based on current sample data with small n.  You may want to
  try different n, to see how that would affect standard errors or
  normality of sampling distributions.
* Better accuracy.  Bootstrap standard errors are biased downward,
  corresponding to computing the usual sample standard deviation using
  a divisor of n instead of (n-1).  Bootstrap distributions tend to
  be too narrow.  One remedy is to sample with size (n-1).  For others
  see:
Hesterberg, Tim C. (2004), Unbiasing the Bootstrap-Bootknife Sampling
vs. Smoothing, Proceedings of the Section on Statistics and the
Environment, American Statistical Association, 2924-2930.
http://home.comcast.net/~timhesterberg/articles/JSM04-bootknife.pdf
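The size-(n-1) idea can be sketched in base R without modifying boot() (a sketch; the data and function names here are my own):

```r
# sketch: bootstrap standard errors with resample size n versus n - 1
set.seed(1)
x <- rnorm(30)

# SE of the mean from R resamples of size m drawn with replacement
boot_se <- function(x, m, R = 2000)
  sd(replicate(R, mean(sample(x, m, replace = TRUE))))

boot_se(x, length(x))       # usual bootstrap: biased slightly downward
boot_se(x, length(x) - 1)   # resamples of size n - 1, per the remedy above
sd(x) / sqrt(length(x))     # formula standard error, for comparison
```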

Tim Hesterberg
(formerly of Insightful, now Google, and only now catching up on R-help)

>   Hi Dan,
>
>   Thanks for the response. Yes, I do know that bootstrap samples generated by
>   the function boot are of the same size as the original dataset, but somewhere
>   in the R-help threads I saw a suggestion that one can control the sample
>   size (n) using the following command (please see below). My problem is that
>   it doesn't work; it gives the error (error in: n * nboot : non-numeric
>   argument to binary operator)
>
>   bootstrap(data,statistic,sampler=samp.bootstrap(size=20))
>
>   This is what somebody on R-help suggested... can we fix that error somehow?
>
>   On Wed, 26 Mar 2008 08:26:22 -0700 "Nordlund, Dan (DSHS/RDA)" wrote:
>   > > -Original Message-
>   > > From: [EMAIL PROTECTED]
>   > > [mailto:[EMAIL PROTECTED] On Behalf Of Zaihra T
>   > > Sent: Wednesday, March 26, 2008 7:57 AM
>   > > To: Jan T. Kim; R-help@r-project.org
>   > > Subject: [R] sample size in bootstrap(boot)
>   > >
>   > >
>   > > Hi,
>   > >
>   > > Can someone tell me how to control sample size (n) in
>   > > bootstrap function
>   > > boot in R. Can we give some option like we give for #
>   > > of repeated
>   > > samples(R=say 100).
>   > >
>   > > Will appreciate any help.
>   > >
>   > > thanks
>   >
>   > I don't believe so. Isn't one of the differences between the bootstrap and
>   other kinds of
>   > resampling that the bootstrap samples with replacement a sample of the
>   same size as the
>   > original data? You could use the function sample() to select your subsets
>   and compute your
>   > statistics of interest.
>   >
>   > Hope this is helpful,
>   >
>   > Dan
>   >
>   > Daniel J. Nordlund
>   > Research and Data Analysis
>   > Washington State Department of Social and Health Services
>   > Olympia, WA 98504-5204



Re: [R] lsmeans

2008-06-08 Thread John Fox
Dear Hadley,

Actually, the effects package does exactly what you suggest for continuous
predictors.

Regards,
 John

--
John Fox, Professor
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
web: socserv.mcmaster.ca/jfox


> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
> Behalf Of hadley wickham
> Sent: June-08-08 3:48 PM
> To: Frank E Harrell Jr
> Cc: John Fox; Douglas Bates; [EMAIL PROTECTED]; Dieter Menne
> Subject: Re: [R] lsmeans
> 
> > Well put Doug.  I would add another condition, which I don't know how to
> > state precisely.  The settings for the other terms, which are usually
> > marginal medians, modes, or means, must make sense when considered jointly.
> >  Frequently when all adjustment covariates are set to overall marginal means
> > the resulting "subject" is very atypical.
> >
> > To me much of the problem is solved once one develops a liking for predicted
> > values and differences in them.
> 
> Maybe I'm still misunderstanding, but isn't that exactly what effects
> displays are?  They're just some way to allow you to say, I'm
> interested in variables x, y and z, and I don't really care about the
> other variables in the model - what are some typical predictions?
> 
> The effects package implements this idea for categorical x, y, and z,
> but the basic idea remains the same for continuous variables - except
> instead of using all the levels of the factor, you'd use a grid within
> the range of the data.
> 
> Hadley
> 
> 
> --
> http://had.co.nz/
> 



Re: [R] optim, constrOptim: setting some parameters equal to each other

2008-06-08 Thread Katharine Mullen
the example I just mailed had an error; it should have been:

## objective function that depends on all parameters, here a, b, c
obfun <- function(a,b,c,dd)
sum((dd - (exp(a * 1:100) + exp(b * 1:50) + exp(c * 1:25) ))^2)

fr <- function(x, eqspec, dd, obfun) {
  ## assign variables for parameter values given in x
  for(i in 1:length(x))
assign(names(x)[i], x[[i]])
  ## use eqspec to assign parameter values determined w/equality constr.
  for(i in 1:length(eqspec))
assign( names(eqspec)[i], x[[ eqspec[[i]] ]])
  ## now have all parameter values, call objective function
  obfun(a,b,c,dd)
}

dd <- exp(-.05 * 1:100) + exp(-.05 * 1:50) + exp(.005 * 1:25)

## let par contain all parameters that are not constrained and
## all parameters on right side of constraints of form a=b

## let eqspec give all dependent parameters on left side of
## the constraints of form a=b
## e.g.
## for constraint a=b
xx <- optim(par=list(b=-4, c=-.1), fn=fr, eqspec=list(a="b"),
dd=dd, obfun=obfun)

## for constraint b=a
x1 <- optim(par=list(a=-3, c=-.1), fn=fr, eqspec=list(b="a"),
dd=dd, obfun=obfun)

On Sun, 8 Jun 2008, Katharine Mullen wrote:

> Here is an example w/optim where you have an objective function
> F(x) where you have (possibly a few) constraints of form x_i=x_j, and you
> can specify the constraints flexibly, which is what I _think_ you want.
> An example from you would have been nice.
>
> ## objective function that depends on all parameters, here a, b, c
> obfun <- function(a,b,c,dd)
> sum((dd - (exp(a * 1:100) + exp(b * 1:50) + exp(c * 1:25) ))^2)
>
> ## fn for optim
> fr <- function(x, eqspec, dd, obfun) {
>   ## assign variables for parameter values given in x
>   for(i in 1:length(x))
> assign(names(x)[i], x[[i]])
>   ## use eqspec to assign parameter values determined w/equ. constr.
>   for(i in 1:length(eqspec))
> assign( names(eqspec)[i], x[[ eqspec[[i]] ]])
>   ## now have all parameter values, call objective function
>   obfun(a,b,c,d, dd)
> }
>
> dd <- exp(-.05 * 1:100) + exp(-.05 * 1:50) + exp(.005 * 1:25)
>
> ## let par contain all parameters that are not eq. constrained and
> ## all parameters on right side of constraints of form a=b
>
> ## let eqspec give all dependent parameters on left side of
> ## the constraints of form a=b
>
> ## e.g.
> ## for constraint a=b
> xx <- optim(par=list(b=-4, c=-.1), fn=fr, eqspec=list(a="b"),
> dd=dd, obfun=obfun)
>
> ## for constraint b=a
> x1 <- optim(par=list(a=-4, c=-.1), fn=fr, eqspec=list(b="a"),
> dd=dd, obfun=obfun)
>
> On Sun, 8 Jun 2008, Alex F. Bokov wrote:
>
> > Hello, and apologies for the upcoming naive questions. I am a biologist
> > who is trying to teach himself the appropriate areas of math and stats.
> > I welcome pointers to suggested background reading just as much as I do
> > direct answers to my question.
> >
> > Let's say I have a function F() that takes variables (a,b,c,a1,b1,c1)
> > and returns x, and I want to find the values of these variables that
> > result in a minimum value of x. That's a straightforward application of
> > optim(). However, for the same function I also need to obtain values
> > that return the minimum value of x subject to the following constraints:
> > a=a1, b=b1, c=c1, a=a1 && b=b1, a=a1 && c=c1, ... and so on, for any
> > combination of these constraints including a=a1, b=b1, c=c1. The brute
> > force way to do this with optim() would be to write into F() an immense
> > switch statement anticipating every possible combination of constrained
> > variables. Obviously this is inefficient and unmaintainable. Therefore,
> > my question is:
> >
> > Is the purpose of constrOptim() this exact type of problem? If so, how
> > does one express the constraint I described above in terms of the ui,
> > ci, and theta parameters? Are there any introductory texts I should have
> > read for this to be obvious to me from constrOptim's documentation?
> >
> > If constrOptim() is not the answer to this problem, can anybody suggest
> > any more elegant alterntives to the switch-statement-from-hell approach?
> >
> > Thank you very, very much in advance. I thought I understood R
> > reasonably well until I started banging my head against this problem!
> >


Re: [R] Installation of R bindings on Windows

2008-06-08 Thread Gabor Grothendieck
From within R try this:

file.path(R.home(), "bin", "R.dll", fsep = "\\")


On Sun, Jun 8, 2008 at 2:19 PM, Axel Etzold <[EMAIL PROTECTED]> wrote:
> Dear all,
>
> I am trying to install a package of R bindings for the Ruby language on
> Windows Vista; this involves some compilation work with MinGW. The analogous
> process on Linux (Ubuntu) went fine, but for the Windows installation I need
> to provide the location of the file analogous to the /usr/lib/R/lib/libR.so
> library. I've been searching for libR.dll, Rlib.dll, everywhere, but can't
> find it.
>
> What is the analogue of libR.so on Windows?
>
> Thank you very much.
>
> Best regards,
>
> Axel



Re: [R] optim, constrOptim: setting some parameters equal to each other

2008-06-08 Thread Katharine Mullen
Here is an example w/optim where you have an objective function
F(x) where you have (possibly a few) constraints of form x_i=x_j, and you
can specify the constraints flexibly, which is what I _think_ you want.
An example from you would have been nice.

## objective function that depends on all parameters, here a, b, c
obfun <- function(a,b,c,dd)
sum((dd - (exp(a * 1:100) + exp(b * 1:50) + exp(c * 1:25) ))^2)

## fn for optim
fr <- function(x, eqspec, dd, obfun) {
  ## assign variables for parameter values given in x
  for(i in 1:length(x))
assign(names(x)[i], x[[i]])
  ## use eqspec to assign parameter values determined w/equ. constr.
  for(i in 1:length(eqspec))
assign( names(eqspec)[i], x[[ eqspec[[i]] ]])
  ## now have all parameter values, call objective function
  obfun(a,b,c,d, dd)
}

dd <- exp(-.05 * 1:100) + exp(-.05 * 1:50) + exp(.005 * 1:25)

## let par contain all parameters that are not eq. constrained and
## all parameters on right side of constraints of form a=b

## let eqspec give all dependent parameters on left side of
## the constraints of form a=b

## e.g.
## for constraint a=b
xx <- optim(par=list(b=-4, c=-.1), fn=fr, eqspec=list(a="b"),
dd=dd, obfun=obfun)

## for constraint b=a
x1 <- optim(par=list(a=-4, c=-.1), fn=fr, eqspec=list(b="a"),
dd=dd, obfun=obfun)

On Sun, 8 Jun 2008, Alex F. Bokov wrote:

> Hello, and apologies for the upcoming naive questions. I am a biologist
> who is trying to teach himself the appropriate areas of math and stats.
> I welcome pointers to suggested background reading just as much as I do
> direct answers to my question.
>
> Let's say I have a function F() that takes variables (a,b,c,a1,b1,c1)
> and returns x, and I want to find the values of these variables that
> result in a minimum value of x. That's a straightforward application of
> optim(). However, for the same function I also need to obtain values
> that return the minimum value of x subject to the following constraints:
> a=a1, b=b1, c=c1, a=a1 && b=b1, a=a1 && c=c1, ... and so on, for any
> combination of these constraints including a=a1, b=b1, c=c1. The brute
> force way to do this with optim() would be to write into F() an immense
> switch statement anticipating every possible combination of constrained
> variables. Obviously this is inefficient and unmaintainable. Therefore,
> my question is:
>
> Is the purpose of constrOptim() this exact type of problem? If so, how
> does one express the constraint I described above in terms of the ui,
> ci, and theta parameters? Are there any introductory texts I should have
> read for this to be obvious to me from constrOptim's documentation?
>
> If constrOptim() is not the answer to this problem, can anybody suggest
> any more elegant alterntives to the switch-statement-from-hell approach?
>
> Thank you very, very much in advance. I thought I understood R
> reasonably well until I started banging my head against this problem!
>



Re: [R] optim, constrOptim: setting some parameters equal to each other

2008-06-08 Thread Spencer Graves
 If 'F' is twice differentiable, this could be done fairly easily 
by writing 'F' as a function to be maximized with a1 = a + da, etc., and 
using the 'activePars' argument in maxNR{maxLik} to specify the 
constraints you want. 

 More specifically, consider the following: 


 Fmax <- function(x){
  -F(x[1], x[2], x[3], x[1]+x[4], x[2]+x[5], x[3]+x[6])
 }

 You may know that if your 'F' is the negative of a 
log(likelihood), then you can test statistical hypotheses about whether 
a=a1, etc., using the fact that under commonly met regularity 
conditions, 2*log(likelihood ratio) is approximately distributed as 
chi-square.  Moreover, this approximation is often quite good -- and can 
be evaluated with a simple Monte Carlo, permuting the order of your 
response variable if you have one. 

 Hope this helps. 
 Spencer Graves


Alex F. Bokov wrote:

Hello, and apologies for the upcoming naive questions. I am a biologist who is 
trying to teach himself the appropriate areas of math and stats. I welcome 
pointers to suggested background reading just as much as I do direct answers to 
my question.

Let's say I have a function F() that takes variables (a,b,c,a1,b1,c1) and returns x, and I want 
to find the values of these variables that result in a minimum value of x. That's a 
straightforward application of optim(). However, for the same function I also need to obtain 
values that return the minimum value of x subject to the following constraints: a=a1, b=b1, 
c=c1, a=a1 && b=b1, a=a1 && c=c1, ... and so on, for any combination of these 
constraints including a=a1, b=b1, c=c1. The brute force way to do this with optim() would be to 
write into F() an immense switch statement anticipating every possible combination of 
constrained variables. Obviously this is inefficient and unmaintainable. Therefore, my question 
is:

Is the purpose of constrOptim() this exact type of problem? If so, how does one 
express the constraint I described above in terms of the ui, ci, and theta 
parameters? Are there any introductory texts I should have read for this to be 
obvious to me from constrOptim's documentation?

If constrOptim() is not the answer to this problem, can anybody suggest any 
more elegant alterntives to the switch-statement-from-hell approach?

Thank you very, very much in advance. I thought I understood R reasonably well 
until I started banging my head against this problem!



Re: [R] lsmeans

2008-06-08 Thread hadley wickham
> Well put Doug.  I would add another condition, which I don't know how to
> state precisely.  The settings for the other terms, which are usually
> marginal medians, modes, or means, must make sense when considered jointly.
>  Frequently when all adjustment covariates are set to overall marginal means
> the resulting "subject" is very atypical.
>
> To me much of the problem is solved once one develops a liking for predicted
> values and differences in them.

Maybe I'm still misunderstanding, but isn't that exactly what effects
displays are?  They're just some way to allow you to say, I'm
interested in variables x, y and z, and I don't really care about the
other variables in the model - what are some typical predictions?

The effects package implements this idea for categorical x, y, and z,
but the basic idea remains the same for continuous variables - except
instead of using all the levels of the factor, you'd use a grid within
the range of the data.
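That idea can be sketched in base R with predict() alone (the dataset and model below are my own illustration, not from this thread): hold the nuisance covariate at a typical value and predict over a grid of the predictor of interest.

```r
# sketch: an "effect display" by hand for a continuous predictor
fit <- lm(mpg ~ wt + hp, data = mtcars)

# grid over wt within the range of the data; hp fixed at its mean
grid <- data.frame(wt = seq(min(mtcars$wt), max(mtcars$wt), length.out = 5),
                   hp = mean(mtcars$hp))

cbind(grid, fit = predict(fit, newdata = grid))
```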

Hadley


-- 
http://had.co.nz/



Re: [R] lsmeans

2008-06-08 Thread John Fox
Dear Frank,

This point is also correct, but whether this is a serious issue depends upon
the structure of the model. With a linear predictor and, in a generalized
linear model, expressing fitted values on the scale of the linear predictor,
differences are preserved regardless of the "typical" values chosen. Often,
interest is primarily in the "shape" of an interaction.

Regards,
 John

--
John Fox, Professor
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
web: socserv.mcmaster.ca/jfox


> -Original Message-
> From: Frank E Harrell Jr [mailto:[EMAIL PROTECTED]
> Sent: June-08-08 3:13 PM
> To: Douglas Bates
> Cc: John Fox; Dieter Menne; [EMAIL PROTECTED]
> Subject: Re: [R] lsmeans
> 
> Douglas Bates wrote:
> > On 6/7/08, John Fox <[EMAIL PROTECTED]> wrote:
> >> Dear Dieter,
> >>
> >>  I don't know whether I qualify as a "master," but here's my brief take on
> >>  the subject: First, I dislike the term "least-squares means," which seems
> >>  to me like nonsense. Second, what I prefer to call "effect displays" are
> >>  just judiciously chosen regions of the response surface of a model, meant
> >>  to clarify effects in complex models. For example, a two-way interaction
> >>  is displayed by absorbing the constant and main-effect terms in the
> >>  interaction (more generally, absorbing terms marginal to a particular
> >>  term) and setting other terms to typical values. A table or graph of the
> >>  resulting fitted values is, I would argue, easier to grasp than the
> >>  coefficients, the interpretation of which can entail complicated mental
> >>  arithmetic.
> >
> > I like that explanation, John.
> >
> > As I'm sure you are aware, the key phrase in what you wrote is
> > "setting other terms to typical values".  That is, these are
> > conditional cell means, yet they are almost universally misunderstood
> > - even by statisticians who should know better - to be marginal cell
> > means.  A more subtle aspect of that phrase is the interpretation of
> > "typical".  The user is not required to specify these typical values -
> > they are calculated from the observed data.
> >
> > If there are no interactions with the "other terms" and if the values
> > chosen for those other terms based on the observed data are indeed
> > typical of the values for which we wish to make inferences with the
> > model then these conditional cell means may tell us something about
> > the marginal cell means.  But if either of those conditions fails then
> > these conditional means can be very different from the marginal means.
> 
> Well put Doug.  I would add another condition, which I don't know how to
> state precisely.  The settings for the other terms, which are usually
> marginal medians, modes, or means, must make sense when considered
> jointly.  Frequently when all adjustment covariates are set to overall
> marginal means the resulting "subject" is very atypical.
> 
> To me much of the problem is solved once one develops a liking for
> predicted values and differences in them.
> 
> Frank
> 
> >
> > I wouldn't have any problem at all with providing conditional cell
> > means, especially if the user were required to specify the values at
> > which to fix the other terms in the model, but that is not what people
> > think they are getting.  I don't want to encourage them in their
> > delusions by letting them think I can evaluate marginal cell means as
> > a single, conditional evaluation.
> >
> >>  > -Original Message-
> >>  > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> project.org]
> >>  On
> >>  > Behalf Of Dieter Menne
> >>  > Sent: June-07-08 4:36 AM
> >>  > To: [EMAIL PROTECTED]
> >>  > Subject: Re: [R] lsmeans
> >>  >
> >>  > John Fox  mcmaster.ca> writes:
> >>  >
> >>  > > I intend at some point to extend the effects package to linear and
> >>  > > generalized linear mixed-effects models, probably using lmer() rather
> >>  > > than lme(), but as you discovered, it doesn't handle these models now.
> >>  > >
> >>  > > It wouldn't be hard, however, to do the computations yourself, using
> >>  > > the coefficient vector for the fixed effects and a suitably constructed
> >>  > > model-matrix to compute the effects; you could also get standard errors
> >>  > > by using the covariance matrix for the fixed effects.
> >>  > >
> >>  >
> >>  > >> Douglas Bates:
> >>  > https://stat.ethz.ch/pipermail/r-sig-mixed-models/2007q2/000222.html
> >>  > >>
> >>  > My big problem with lsmeans is
> >>  > that I have never been able to understand how they should be
> >>  > calculated and, more importantly, why one should want to calculate
> >>  > them.  In other words, what do lsmeans represent and why should I be
> >>  > interested in these particular values?
> >>  > >>
> >>  >
> >>  > Truly Confused, torn apart by the Masters
> >>  >
> >>  > Dieter
> >>  >

Re: [R] lsmeans

2008-06-08 Thread John Fox
Dear Hadley,

Unfortunately, the term "marginal" gets used in two quite different ways,
and Searle's "population marginal means" would, I believe, be more clearly
called "population conditional means" or "population partial means." This is
more or less alternative terminology for "least-squares means" (to which
Searle rightly objects).

Regards,
 John

--
John Fox, Professor
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
web: socserv.mcmaster.ca/jfox


> -Original Message-
> From: hadley wickham [mailto:[EMAIL PROTECTED]
> Sent: June-08-08 2:52 PM
> To: Douglas Bates
> Cc: John Fox; Dieter Menne; [EMAIL PROTECTED]
> Subject: Re: [R] lsmeans
> 
> On Sun, Jun 8, 2008 at 12:58 PM, Douglas Bates <[EMAIL PROTECTED]>
wrote:
> > On 6/7/08, John Fox <[EMAIL PROTECTED]> wrote:
> >> Dear Dieter,
> >>
> >>  I don't know whether I qualify as a "master," but here's my brief take
on
> >>  the subject: First, I dislike the term "least-squares means," which
seems
> to
> >>  me like nonsense. Second, what I prefer to call "effect displays" are
> just
> >>  judiciously chosen regions of the response surface of a model, meant
to
> >>  clarify effects in complex models. For example, a two-way interaction
is
> >>  displayed by absorbing the constant and main-effect terms in the
> interaction
> >>  (more generally, absorbing terms marginal to a particular term) and
> setting
> >>  other terms to typical values. A table or graph of the resulting
fitted
> >>  values is, I would argue, easier to grasp than the coefficients, the
> >>  interpretation of which can entail complicated mental arithmetic.
> >
> > I like that explanation, John.
> >
> > As I'm sure you are aware, the key phrase in what you wrote is
> > "setting other terms to typical values".  That is, these are
> > conditional cell means, yet they are almost universally misunderstood
> > - even by statisticians who should know better - to be marginal cell
> > means.  A more subtle aspect of that phrase is the interpretation of
> > "typical".  The user is not required to specify these typical values -
> > they are calculated from the observed data.
> >
> 
> How does Searle's "population marginal means" fit in to this?  The
> paper describes a PMM as "expected value of an observed marginal mean
> as if there were one observation in every cell." - which was what I
> thought happened in the effects display.  Is this a subtlety in the
> definition of "typical", or is it that PMMs are only described for pure
> ANOVAs (i.e. no continuous variables in the model)?
> 
> Hadley
> 
> --
> http://had.co.nz/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lsmeans

2008-06-08 Thread John Fox
Dear Doug,

Your point is correct, of course, but if people are interested in computing
marginal means (or marginal cell means), then they can do so simply and
don't need a statistical model. I think that when such a model is fit,
interest is typically in conditioning on the other explanatory variables.

(Also see my responses to Hadley and Frank's points.)

Regards,
 John

--
John Fox, Professor
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
web: socserv.mcmaster.ca/jfox

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On
> Behalf Of Douglas Bates
> Sent: June-08-08 1:58 PM
> To: John Fox
> Cc: Dieter Menne; [EMAIL PROTECTED]
> Subject: Re: [R] lsmeans
> 
> On 6/7/08, John Fox <[EMAIL PROTECTED]> wrote:
> > Dear Dieter,
> >
> >  I don't know whether I qualify as a "master," but here's my brief take
on
> >  the subject: First, I dislike the term "least-squares means," which
seems
> to
> >  me like nonsense. Second, what I prefer to call "effect displays" are
just
> >  judiciously chosen regions of the response surface of a model, meant to
> >  clarify effects in complex models. For example, a two-way interaction
is
> >  displayed by absorbing the constant and main-effect terms in the
> interaction
> >  (more generally, absorbing terms marginal to a particular term) and
> setting
> >  other terms to typical values. A table or graph of the resulting fitted
> >  values is, I would argue, easier to grasp than the coefficients, the
> >  interpretation of which can entail complicated mental arithmetic.
> 
> I like that explanation, John.
> 
> As I'm sure you are aware, the key phrase in what you wrote is
> "setting other terms to typical values".  That is, these are
> conditional cell means, yet they are almost universally misunderstood
> - even by statisticians who should know better - to be marginal cell
> means.  A more subtle aspect of that phrase is the interpretation of
> "typical".  The user is not required to specify these typical values -
> they are calculated from the observed data.
> 
> If there are no interactions with the "other terms" and if the values
> chosen for those other terms based on the observed data are indeed
> typical of the values for which we wish to make inferences with the
> model then these conditional cell means may tell us something about
> the marginal cell means.  But if either of those conditions fails then
> these conditional means can be very different from the marginal means.
> 
> I wouldn't have any problem at all with providing conditional cell
> means, especially if the user were required to specify the values at
> which to fix the other terms in the model, but that is not what people
> think they are getting.  I don't want to encourage them in their
> delusions by letting them think i can evaluate marginal cell means as
> a single, conditional evaluation.
> 
> >  > -Original Message-
> >  > From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]
> >  On
> >  > Behalf Of Dieter Menne
> >  > Sent: June-07-08 4:36 AM
> >  > To: [EMAIL PROTECTED]
> >  > Subject: Re: [R] lsmeans
> >  >
> >  > John Fox  mcmaster.ca> writes:
> >  >
> >  > > I intend at some point to extend the effects package to linear and
> >  > > generalized linear mixed-effects models, probably using lmer()
rather
> >  > > than lme(), but as you discovered, it doesn't handle these models
now.
> >  > >
> >  > > It wouldn't be hard, however, to do the computations yourself,
using
> >  > > the coefficient vector for the fixed effects and a suitably
> constructed
> >  > > model-matrix to compute the effects; you could also get standard
> errors
> >  > > by using the covariance matrix for the fixed effects.
> >  > >
> >  >
> >  > >> Douglas Bates:
> >  > https://stat.ethz.ch/pipermail/r-sig-mixed-models/2007q2/000222.html
> >  > >>
> >  > My big problem with lsmeans is
> >  > that I have never been able to understand how they should be
> >  > calculated and, more importantly, why one should want to calculate
> >  > them.  In other words, what do lsmeans represent and why should I be
> >  > interested in these particular values?
> >  > >>
> >  >
> >  > Truly Confused, torn apart by the Masters
> >  >
> >  > Dieter
> >  >
> >
> 

Re: [R] how to import a data file into R

2008-06-08 Thread Carl Witthoft

> Hi:
> I have a data file in the following format. The first three digits
stand for the ID of a respondent, such as 402 or 403. Different respondents
may have the same ID. Following the ID are 298 single-digit numbers
ranging from 1 to 5. My question is how to read this data file into R. I
tried "scan" and "read" but they do not work because the numbers in the
file are not separated. Any suggestions?

> Thank you!
>

The answers provided to date (read.fwf()) look just fine.  I thought
I'd mention that you could always pre-process the data file in any text
editor to insert commas or tabs between every number (other than the
leading 3-digit number) and then use scan() or read.csv().
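A rough sketch of the same idea done inside R rather than a text editor (the two sample lines and the splitting scheme are assumptions based on the question, standing in for `readLines()` on the real file):

```r
## Hedged sketch: split each record into the 3-digit ID and the
## single-digit responses without any external editing.
lines <- c("4021221", "4031534")                   # stand-in for readLines("responses.dat")
lines <- gsub(" ", "", lines)                      # drop stray spaces, if any
ids   <- substr(lines, 1, 3)                       # leading 3-digit respondent ID
resp  <- lapply(strsplit(substring(lines, 4), ""), as.integer)
ids                                                # "402" "403"
resp[[1]]                                          # 1 2 2 1
```

The same split generalizes to the 298-item records in the question; only the stand-in input changes.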


Carl



[R] optim, constrOptim: setting some parameters equal to each other

2008-06-08 Thread Alex F. Bokov
Hello, and apologies for the upcoming naive questions. I am a biologist who is 
trying to teach himself the appropriate areas of math and stats. I welcome 
pointers to suggested background reading just as much as I do direct answers to 
my question.

Let's say I have a function F() that takes variables (a,b,c,a1,b1,c1) and 
returns x, and I want to find the values of these variables that result in a 
minimum value of x. That's a straightforward application of optim(). However, 
for the same function I also need to obtain values that return the minimum 
value of x subject to the following constraints: a=a1, b=b1, c=c1, a=a1 && 
b=b1, a=a1 && c=c1, ... and so on, for any combination of these constraints 
including a=a1, b=b1, c=c1. The brute force way to do this with optim() would 
be to write into F() an immense switch statement anticipating every possible 
combination of constrained variables. Obviously this is inefficient and 
unmaintainable. Therefore, my question is:

Is the purpose of constrOptim() this exact type of problem? If so, how does one 
express the constraint I described above in terms of the ui, ci, and theta 
parameters? Are there any introductory texts I should have read for this to be 
obvious to me from constrOptim's documentation?

If constrOptim() is not the answer to this problem, can anybody suggest any
more elegant alternatives to the switch-statement-from-hell approach?

Thank you very, very much in advance. I thought I understood R reasonably well 
until I started banging my head against this problem!
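One note the question invites: constrOptim() handles linear *inequality* constraints (ui %*% theta >= ci, from a strictly feasible start), so exact equalities such as a = a1 are usually imposed by reparameterization instead — optimize over the free parameters only and expand them to the full argument inside a wrapper. A minimal sketch with a made-up quadratic objective (all names here are illustrative, not from the original post):

```r
## Toy objective in the 6 slots (a, b, c, a1, b1, c1); minimum at (1,2,3,1,2,3).
F <- function(p) sum((p - c(1, 2, 3, 1, 2, 3))^2)

## 'ties' maps each of the 6 slots to a free-parameter index; e.g.
## ties = c(1,2,3,1,2,3) enforces a=a1, b=b1, c=c1 with 3 free parameters.
## Any other tying pattern is just a different integer vector.
fit_tied <- function(fn, ties, start) {
  wrapper <- function(free) fn(free[ties])   # expand free params to full vector
  opt <- optim(start, wrapper)               # unconstrained optim on the reduced space
  list(value = opt$value, full = opt$par[ties])
}

out <- fit_tied(F, ties = c(1, 2, 3, 1, 2, 3), start = c(0, 0, 0))
```

This avoids any switch statement: each constraint combination is encoded as one `ties` vector, so all fits can be run in a loop over a list of such vectors.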



Re: [R] lsmeans

2008-06-08 Thread Frank E Harrell Jr

Douglas Bates wrote:

On 6/7/08, John Fox <[EMAIL PROTECTED]> wrote:

Dear Dieter,

 I don't know whether I qualify as a "master," but here's my brief take on
 the subject: First, I dislike the term "least-squares means," which seems to
 me like nonsense. Second, what I prefer to call "effect displays" are just
 judiciously chosen regions of the response surface of a model, meant to
 clarify effects in complex models. For example, a two-way interaction is
 displayed by absorbing the constant and main-effect terms in the interaction
 (more generally, absorbing terms marginal to a particular term) and setting
 other terms to typical values. A table or graph of the resulting fitted
 values is, I would argue, easier to grasp than the coefficients, the
 interpretation of which can entail complicated mental arithmetic.


I like that explanation, John.

As I'm sure you are aware, the key phrase in what you wrote is
"setting other terms to typical values".  That is, these are
conditional cell means, yet they are almost universally misunderstood
- even by statisticians who should know better - to be marginal cell
means.  A more subtle aspect of that phrase is the interpretation of
"typical".  The user is not required to specify these typical values -
they are calculated from the observed data.

If there are no interactions with the "other terms" and if the values
chosen for those other terms based on the observed data are indeed
typical of the values for which we wish to make inferences with the
model then these conditional cell means may tell us something about
the marginal cell means.  But if either of those conditions fails then
these conditional means can be very different from the marginal means.


Well put Doug.  I would add another condition, which I don't know how to 
state precisely.  The settings for the other terms, which are usually 
marginal medians, modes, or means, must make sense when considered 
jointly.  Frequently when all adjustment covariates are set to overall 
marginal means the resulting "subject" is very atypical.


To me, much of the problem is solved once one develops a liking for
predicted values and differences in them.


Frank



I wouldn't have any problem at all with providing conditional cell
means, especially if the user were required to specify the values at
which to fix the other terms in the model, but that is not what people
think they are getting.  I don't want to encourage them in their
delusions by letting them think i can evaluate marginal cell means as
a single, conditional evaluation.


 > -Original Message-
 > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 On
 > Behalf Of Dieter Menne
 > Sent: June-07-08 4:36 AM
 > To: [EMAIL PROTECTED]
 > Subject: Re: [R] lsmeans
 >
 > John Fox  mcmaster.ca> writes:
 >
 > > I intend at some point to extend the effects package to linear and
 > > generalized linear mixed-effects models, probably using lmer() rather
 > > than lme(), but as you discovered, it doesn't handle these models now.
 > >
 > > It wouldn't be hard, however, to do the computations yourself, using
 > > the coefficient vector for the fixed effects and a suitably constructed
 > > model-matrix to compute the effects; you could also get standard errors
 > > by using the covariance matrix for the fixed effects.
 > >
 >
 > >> Douglas Bates:
 > https://stat.ethz.ch/pipermail/r-sig-mixed-models/2007q2/000222.html
 > >>
 > My big problem with lsmeans is
 > that I have never been able to understand how they should be
 > calculated and, more importantly, why one should want to calculate
 > them.  In other words, what do lsmeans represent and why should I be
 > interested in these particular values?
 > >>
 >
 > Truly Confused, torn apart by the Masters
 >
 > Dieter
 >







--
Frank E Harrell Jr   Professor and Chair   School of Medicine
 Department of Biostatistics   Vanderbilt University


Re: [R] lsmeans

2008-06-08 Thread hadley wickham
On Sun, Jun 8, 2008 at 12:58 PM, Douglas Bates <[EMAIL PROTECTED]> wrote:
> On 6/7/08, John Fox <[EMAIL PROTECTED]> wrote:
>> Dear Dieter,
>>
>>  I don't know whether I qualify as a "master," but here's my brief take on
>>  the subject: First, I dislike the term "least-squares means," which seems to
>>  me like nonsense. Second, what I prefer to call "effect displays" are just
>>  judiciously chosen regions of the response surface of a model, meant to
>>  clarify effects in complex models. For example, a two-way interaction is
>>  displayed by absorbing the constant and main-effect terms in the interaction
>>  (more generally, absorbing terms marginal to a particular term) and setting
>>  other terms to typical values. A table or graph of the resulting fitted
>>  values is, I would argue, easier to grasp than the coefficients, the
>>  interpretation of which can entail complicated mental arithmetic.
>
> I like that explanation, John.
>
> As I'm sure you are aware, the key phrase in what you wrote is
> "setting other terms to typical values".  That is, these are
> conditional cell means, yet they are almost universally misunderstood
> - even by statisticians who should know better - to be marginal cell
> means.  A more subtle aspect of that phrase is the interpretation of
> "typical".  The user is not required to specify these typical values -
> they are calculated from the observed data.
>

How does Searle's "population marginal means" fit into this?  The
paper describes a PMM as the "expected value of an observed marginal mean
as if there were one observation in every cell" - which is what I
thought happened in the effects display.  Is this a subtlety in the
definition of "typical", or is it that PMMs are only described for pure
ANOVAs (i.e. no continuous variables in the model)?

Hadley

-- 
http://had.co.nz/



[R] Installation of R bindings on Windows

2008-06-08 Thread Axel Etzold
Dear all,

I am trying to install a package of R bindings for the Ruby language on Windows
Vista; this involves some compilation work with MinGW. The analogous process on
Linux Ubuntu went fine, but for the Windows installation I need to provide the
location of the file analogous to the /usr/lib/R/lib/libR.so library. I've been
searching everywhere for libR.dll and Rlib.dll, but can't find them.

What is the analogue of libR.so on Windows?

Thank you very much.

Best regards,

Axel 
-- 




Re: [R] lsmeans

2008-06-08 Thread Douglas Bates
On 6/7/08, John Fox <[EMAIL PROTECTED]> wrote:
> Dear Dieter,
>
>  I don't know whether I qualify as a "master," but here's my brief take on
>  the subject: First, I dislike the term "least-squares means," which seems to
>  me like nonsense. Second, what I prefer to call "effect displays" are just
>  judiciously chosen regions of the response surface of a model, meant to
>  clarify effects in complex models. For example, a two-way interaction is
>  displayed by absorbing the constant and main-effect terms in the interaction
>  (more generally, absorbing terms marginal to a particular term) and setting
>  other terms to typical values. A table or graph of the resulting fitted
>  values is, I would argue, easier to grasp than the coefficients, the
>  interpretation of which can entail complicated mental arithmetic.

I like that explanation, John.

As I'm sure you are aware, the key phrase in what you wrote is
"setting other terms to typical values".  That is, these are
conditional cell means, yet they are almost universally misunderstood
- even by statisticians who should know better - to be marginal cell
means.  A more subtle aspect of that phrase is the interpretation of
"typical".  The user is not required to specify these typical values -
they are calculated from the observed data.

If there are no interactions with the "other terms" and if the values
chosen for those other terms based on the observed data are indeed
typical of the values for which we wish to make inferences with the
model then these conditional cell means may tell us something about
the marginal cell means.  But if either of those conditions fails then
these conditional means can be very different from the marginal means.

I wouldn't have any problem at all with providing conditional cell
means, especially if the user were required to specify the values at
which to fix the other terms in the model, but that is not what people
think they are getting.  I don't want to encourage them in their
delusions by letting them think I can evaluate marginal cell means as
a single, conditional evaluation.

>  > -Original Message-
>  > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
>  On
>  > Behalf Of Dieter Menne
>  > Sent: June-07-08 4:36 AM
>  > To: [EMAIL PROTECTED]
>  > Subject: Re: [R] lsmeans
>  >
>  > John Fox  mcmaster.ca> writes:
>  >
>  > > I intend at some point to extend the effects package to linear and
>  > > generalized linear mixed-effects models, probably using lmer() rather
>  > > than lme(), but as you discovered, it doesn't handle these models now.
>  > >
>  > > It wouldn't be hard, however, to do the computations yourself, using
>  > > the coefficient vector for the fixed effects and a suitably constructed
>  > > model-matrix to compute the effects; you could also get standard errors
>  > > by using the covariance matrix for the fixed effects.
>  > >
>  >
>  > >> Douglas Bates:
>  > https://stat.ethz.ch/pipermail/r-sig-mixed-models/2007q2/000222.html
>  > >>
>  > My big problem with lsmeans is
>  > that I have never been able to understand how they should be
>  > calculated and, more importantly, why one should want to calculate
>  > them.  In other words, what do lsmeans represent and why should I be
>  > interested in these particular values?
>  > >>
>  >
>  > Truly Confused, torn apart by the Masters
>  >
>  > Dieter
>  >
>  > __
>  > R-help@r-project.org mailing list
>  > https://stat.ethz.ch/mailman/listinfo/r-help
>  > PLEASE do read the posting guide
>  http://www.R-project.org/posting-guide.html
>  > and provide commented, minimal, self-contained, reproducible code.
>
>  __
>  R-help@r-project.org mailing list
>  https://stat.ethz.ch/mailman/listinfo/r-help
>  PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>  and provide commented, minimal, self-contained, reproducible code.
>



[R] label points on a graph

2008-06-08 Thread Paul Adams
Hello everyone,
I have a plot and I want to label the points as 1 through 20.
I have the following code for the plot:

dat <- read.table(file = "C:\\Documents and Settings\\Owner\\My Documents\\colon cancer1.txt",
                  header = TRUE, row.names = 1)
file.show(file = "C:\\Documents and Settings\\Owner\\My Documents\\colon cancer1.txt")
plot(dat[1:20, 1:2], type = 'p', xlab = 'normal1', ylab = 'normal2',
     main = 'Two normal samples--first 20 genes', pch = 15, col = 'blue')
grid(nx = 5, ny = 5, col = "black", lty = "dotted", lwd = 2)

Do I use the label function text(x, y, names)? If so, do I specify to put 1-20
above the points as text(5, 5, 1:20, pos = 3)?
Any help would be much appreciated.
Paul
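A hedged sketch of the labelling step (with simulated data standing in for the colon-cancer file, which isn't available here): text() accepts the same x/y vectors as the plotted points, so passing both columns plus labels = 1:20 and pos = 3 places each index just above its own point, rather than stacking all 20 labels at (5, 5).

```r
## Simulated stand-in for dat[1:20, 1:2] from the question.
set.seed(42)
dat <- data.frame(normal1 = rnorm(20, mean = 5), normal2 = rnorm(20, mean = 5))

plot(dat[1:20, 1:2], type = "p", pch = 15, col = "blue",
     xlab = "normal1", ylab = "normal2",
     main = "Two normal samples--first 20 genes")
grid(nx = 5, ny = 5, col = "black", lty = "dotted", lwd = 2)
## One label per point, drawn above it (pos = 3).
text(dat[1:20, 1], dat[1:20, 2], labels = 1:20, pos = 3)
```

So the key change from the question's guess is to pass the point coordinates themselves, not the single position (5, 5).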


  



Re: [R] R + Linux

2008-06-08 Thread ronggui
I have used Debian, Ubuntu, Mandrake, etc., and now use FreeBSD. I like
FreeBSD a bit more than Linux, and R runs smoothly on FreeBSD as well.


On Sat, Jun 7, 2008 at 2:13 AM, steven wilson <[EMAIL PROTECTED]> wrote:
> Dear all;
>
> I'm planning to install Linux on my computer to run R (I'm bored of
> W..XP). However, I haven't used Linux before and I would appreciate,
> if possible, suggestions/comments about what could be the best option
> install, say Fedora, Ubuntu or OpenSuse which to my impression are the
> most popular ones (at least on the R-help lists). The computer is a PC
> desktop with 4GB RAM and  Intel Quad-Core Xeon processor and will be
> used only to run R.
>
> Thanks
> Steven
>
>



-- 
HUANG Ronggui, Wincent http://ronggui.huang.googlepages.com/
Bachelor of Social Work, Fudan University, China
Master of sociology, Fudan University, China
Ph.D. Candidate, CityU of HK.



Re: [R] multinormality

2008-06-08 Thread Jorge Ivan Velez
Dear Hanen,
Try also mshapiro.test in the mvnormtest package.

HTH,

Jorge


On Sun, Jun 8, 2008 at 6:01 AM, hanen <[EMAIL PROTECTED]> wrote:

>
> is there any function in R that allows me to test the normality of my 92
> samples?
> --
> View this message in context:
> http://www.nabble.com/multinormality-tp17717230p17717230.html
> Sent from the R help mailing list archive at Nabble.com.
>




Re: [R] how to import a data file into R

2008-06-08 Thread Gabor Grothendieck
Try this.  From your printout it seems that there are some
extraneous spaces in the file so first we read it in and remove
the spaces and just in case remove any completely blank lines.
Then we re-read it using read.fwf.  Note that the widths= argument
in read.fwf can be a list, where we specify the widths of the 5
successive lines as 5 vectors:

Lines <- readLines("mydata.dat")
Lines <- gsub(" ", "", Lines)
Lines <- subset(Lines, Lines != "")

ones <- function(n) rep(1, n)
mydata <- read.fwf(textConnection(Lines),
   list(c(3, ones(8)), ones(80), ones(80), ones(80), ones(50)))


On Sun, Jun 8, 2008 at 12:09 AM, yyan liu <[EMAIL PROTECTED]> wrote:
> Hi:
>   I have a data file in the following format.  The first three digits stand 
> for the ID of a respondent such as 402, 403. Different respondents may have 
> the same ID. Followed the ID are 298 single digit number ranging from 1 to 5. 
> My question is how to read this data file into R. I tried "scan" and "read" 
> but they do not work because the numbers in the file are not separated.  Any 
> suggestions?
> Thank you!
>
> zhong
> ---
> 40212221211
> 11212323114345531221314222132311445542524542111412113212124145315324113113112411
> 14153314211125142144131411153115432414111312411541141535124251213111
> 515122424114113114133521115122213321142331141413151433212221211454124214
> 1153311211331512415451231113131512
>  4021211
> 31122312125321111311444321432343211313223213222134323223114122313322
> 1334311311242122241332422322121343132132221233222322211522241443114351312113
> 5222123231232321331322211142312112411323122131143223251332112123231333121222
> 11422211214312332312324351213121121311233412212512
>  40312191211
> 211123111215344311512341121332452221121411221211231231112541341231311221
> 242331131244421212112134422332112213211125321223221344241211424252112314
> 111212242114141124211141332231511345133211131132431122111311121443131114
> 3254132223211512414542145121131541
>  40322191211
> 3112431112141133314314211131444331311315212111221131221115333113115111411312
> 1541411211554414421132532421321112115134521115241214121535153531514152111425
> 32141123112335112251241243511135125221114411232214131114
> 354144431312413531134131141511
>
>
>
>
>



Re: [R] R + Linux

2008-06-08 Thread Hank Stevens
I typically use a Mac (I love it) with Aquamacs, LaTeX, and R, and
recently started using Linux Ubuntu as well. Ubuntu is the only
distribution I have ever tried, and I really like it. I like it so
much that I would have made my switch complete, but I cannot find a
replacement for the PDF cut-and-paste functionality of the Mac.

Ubuntu has amazing built-in upgrade GUIs and other bells and whistles
that a Mac user like me has come to depend on.


Hank

On Jun 6, 2008, at 3:18 PM, Douglas Bates wrote:

On Fri, Jun 6, 2008 at 1:13 PM, steven wilson <[EMAIL PROTECTED]>  
wrote:

Dear all;



I'm planning to install Linux on my computer to run R (I'm bored of
W..XP). However, I haven't used Linux before and I would appreciate,
if possible, suggestions/comments about what could be the best option
install, say Fedora, Ubuntu or OpenSuse which to my impression are  
the
most popular ones (at least on the R-help lists). The computer is a  
PC

desktop with 4GB RAM and  Intel Quad-Core Xeon processor and will be
used only to run R.


Ah yes, we haven't had a good flame war for a long time.  Let's start
discussing the relative merits of various Linux distributions.  That
should heat things up a bit.

I can only speak about Ubuntu.  I have used it exclusively for several
years now and find it to be superb.  In my opinion it is easy to
install and maintain and has very good support for R (take a bow,
Dirk).





Dr. Hank Stevens, Associate Professor
338 Pearson Hall
Botany Department
Miami University
Oxford, OH 45056

Office: (513) 529-4206
Lab: (513) 529-4262
FAX: (513) 529-4243
http://www.cas.muohio.edu/~stevenmh/
http://www.cas.muohio.edu/ecology
http://www.muohio.edu/botany/

"If the stars should appear one night in a thousand years, how would men
believe and adore." -Ralph Waldo Emerson, writer and philosopher  
(1803-1882)




Re: [R] txt file, 14000+ rows, only last 8000 appear

2008-06-08 Thread Philipp Pagel
On Fri, Jun 06, 2008 at 02:30:56PM -0700, RobertsLRRI wrote:
> when I load my data file in txt format into the R workstation I lose about
> 6000 rows, this is a problem. Is there a limit to the display capabilities
> for the workstation?  is all the information there and I just can't see the
> first couple thousand rows?

There are many possibilities. Yes, there can be a limit on what R will
display to you. E.g. when I load a large data frame and have R print it
to the screen I get:

> foo <- read.table('some-large-table')
> foo
[... lots of row here ...]
178011019 7519  -0.242   -0.158   -50 0 
   0
178113729 7371  -0.255   -0.296   -50 0 
   0
1782   11720868035   8.0003.027 0 0 
   0
17831064612121  -0.8920.008   -50 0 
   0
17841310012342  -0.560   -0.051   -50 0 
   0
17851377211613   0.0860.742 0 0 
   0
 [ reached getOption("max.print") -- omitted 4936 rows ]

but the data is all there:

> dim(foo)
[1] 6721   56

So the first thing you need to do is figure out if R is reading all data
or not.  In case you find that you really end up with too few rows, I'd
have a look at the 'quote' option of read.table (assuming that's how you
read the file).
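For example, a quick sketch of that check (the filename here is a placeholder for your own file):

```r
## Hypothetical check -- 'mydata.txt' stands in for your file
foo <- read.table("mydata.txt", quote = "")        # quoting disabled as a test
nrow(foo)                                          # rows parsed by read.table
length(count.fields("mydata.txt", quote = ""))     # physical data lines in the file
```

If the two counts disagree, an unbalanced quote character in the file is a likely culprit.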

In order to give better advice, we'd need to know what you did and why
you think not all data is there.

cu
Philipp

-- 
Dr. Philipp Pagel
Lehrstuhl für Genomorientierte Bioinformatik
Technische Universität München
Wissenschaftszentrum Weihenstephan
85350 Freising, Germany
http://mips.gsf.de/staff/pagel



Re: [R] multinormality

2008-06-08 Thread Maria Rizzo
For a test of multivariate normality, there is mvnorm.etest in the
energy package.
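A minimal sketch of the call (assuming the energy package is installed; the data here are simulated stand-ins, and R is the number of bootstrap replicates):

```r
library(energy)                       # provides mvnorm.etest()
set.seed(42)
x <- matrix(rnorm(92 * 3), ncol = 3)  # stand-in for 92 multivariate observations
mvnorm.etest(x, R = 199)              # energy test of multivariate normality
```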

 Maria Rizzo

On Sun, Jun 8, 2008 at 6:01 AM, hanen <[EMAIL PROTECTED]> wrote:
>
> is there any function under R that allows me to test the normality of my 92
> samples?
> --
> View this message in context: 
> http://www.nabble.com/multinormality-tp17717230p17717230.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



Re: [R] Adding a Rotated Density Plot to an Existing Plot

2008-06-08 Thread Rory Winston

Thanks Jim! That's just what I want.

Cheers
Rory

jim holtman wrote:

I forgot to make sure the axis ranges were the same:
 
x <- rnorm(1000)

x <- x + exp(-x/2)
layout(matrix(rep(c(1,1,2), 2), 2, 3, byrow=TRUE))
boxplot(x, ylim=c(0.5,4))
rug(jitter(x), side=2)
y <- density(x)
plot(y$y, y$x, type='l',ylim=c(0.5,4))


On Sun, Jun 8, 2008 at 7:07 AM, Rory Winston <[EMAIL PROTECTED] 
> wrote:


Hi

Consider the following graph:

x <- rnorm(1000)
x <- x + exp(-x/2)
layout(matrix(rep(c(1,1,2), 2), 2, 3, byrow=TRUE))
boxplot(x)
rug(jitter(x), side=2)
plot(density(x))


What I would really like to do is to have the density plot rotated
by 90 degrees so that I can see it line up with the rug plot on
the side axis. I can't seem to do this. I have seen references to
the grid package and tried some examples, but I can't seem to get
it to play well with the boxplot. Does anyone know how I can do this?

Cheers
Rory





--
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?




[R] AUC steady state calculations

2008-06-08 Thread Bart Joosen
Hi,

a colleague has sent me an Excel sheet with some plasma values, and now he wants
to know the AUC at steady state.
I took a look at the CRAN task views, and came up with PK, PKtools, ...
The AUC calculation is no problem, but how do I determine the steady
state?
One idea was to use multiple t-tests to find where we can no longer
detect a difference between successive measuring points, and take that
as steady state, but I'm not sure that this is the right way.

As I'm not at home in the pharmacokinetic world, I was hoping someone with some
more experience could shed some light on this (although, for me, dark) material.

Kind Regards

Bart

PS: Here some example data:
dat<- structure(list(Sample = c(1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 
2L, 2L, 2L), Time = c(12L, 36L, 60L, 84L, 108L, 132L, 12L, 36L, 
60L, 84L, 108L, 132L), conc = c(0.621518061431366, 0.87531564366726, 
0.916311538568891, 0.880947260781843, 0.852202744098934, 0.218909173985895, 
1.22305496551836, 1.30075841227452, 0.995918019674464, 1.33214099618361, 
1.42613784296527, 0.290928921761672)), .Names = c("Sample", "Time", 
"conc"), row.names = c(NA, -12L), class = "data.frame")
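For the plain AUC part, a trapezoidal-rule sketch on the example data `dat` above (note this only integrates the concentration curve; it does not by itself establish steady state):

```r
## Trapezoidal AUC per subject -- a sketch, not a steady-state analysis
trap.auc <- function(time, conc) {
  sum(diff(time) * (head(conc, -1) + tail(conc, -1)) / 2)
}
sapply(split(dat, dat$Sample), function(d) trap.auc(d$Time, d$conc))
```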


[R] exponential distribution

2008-06-08 Thread Antonio Gasparrini
Dear all,
 
I've tried to solve Ex. 12, Chapter 4 of "Introduction to GLM" by Annette 
Dobson.
It's about the relationship between the survival time of leukemia patients and 
blood cell count.
I tried to fit a model with an exponential distribution, first with glm (Gamma 
family with the dispersion parameter fixed to 1) and 
then with survreg.
They gave me the same point estimates, but the standard errors are slightly 
different.
I checked the results by building the routine manually, and it seems that the glm 
results are the correct ones.
 
###
data <- data.frame(y=c(65,156,100,134,16,108,121,4,39,143,56,26,22,1,1,5,65),
 x=c(3.36,2.88,3.63,3.41,3.78,4.02,4.00,4.23,3.73,3.85,3.97,4.51,
 4.54,5.00,5.00,4.72,5.00))

model1 <- glm(y~x,family=Gamma(link="log"),data)
summary(model1,dispersion=1)
library(survival)  # survreg() and Surv() live here
model2 <- survreg(Surv(y) ~ x, data, dist="exponential")
summary(model2)

X <- model.matrix(model1)
y <- as.vector(data$y)
b <- as.matrix(c(1,1))  # STARTING VALUES
iter <- matrix(0,7,2)
iter[1,] <- b
for (i in 2:7) {
 W <- diag(rep(1,length(y)))
 z <- X%*%b + (y-exp(X%*%b))*1/exp(X%*%b)
 b <- solve(t(X)%*%W%*%X) %*% (t(X)%*%W%*%z)
 iter[i,] <- b
}
summary(model1,dispersion=1)$coef
summary(model2)
iter[nrow(iter),]
sqrt(diag(solve(t(X)%*%W%*%X)))
###

Can you explain if this difference is due to an error?
Thanks in advance


Antonio Gasparrini
Public and Environmental Health Research Unit (PEHRU)
London School of Hygiene & Tropical Medicine
Keppel Street, London WC1E 7HT, UK
Office: 0044 (0)20 79272406 - Mobile: 0044 (0)79 64925523
http://www.lshtm.ac.uk/people/gasparrini.antonio ( 
http://www.lshtm.ac.uk/pehru/ )



[R] multinormality

2008-06-08 Thread hanen

is there any function under R that allows me to test the normality of my 92
samples?
-- 
View this message in context: 
http://www.nabble.com/multinormality-tp17717230p17717230.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] Adding a Rotated Density Plot to an Existing Plot

2008-06-08 Thread jim holtman
I forgot to make sure the axis ranges were the same:

x <- rnorm(1000)
x <- x + exp(-x/2)
layout(matrix(rep(c(1,1,2), 2), 2, 3, byrow=TRUE))
boxplot(x, ylim=c(0.5,4))
rug(jitter(x), side=2)
y <- density(x)
plot(y$y, y$x, type='l',ylim=c(0.5,4))


On Sun, Jun 8, 2008 at 7:07 AM, Rory Winston <[EMAIL PROTECTED]> wrote:

> Hi
>
> Consider the following graph:
>
> x <- rnorm(1000)
> x <- x + exp(-x/2)
> layout(matrix(rep(c(1,1,2), 2), 2, 3, byrow=TRUE))
> boxplot(x)
> rug(jitter(x), side=2)
> plot(density(x))
>
>
> What I would really like to do is to have the density plot rotated by 90
> degrees so that I can see it line up with the rug plot on the side axis. I
> can't seem to do this. I have seen references to the grid package and tried
> some examples, but I can't seem to get it to play well with the boxplot. Does
> anyone know how I can do this?
>
> Cheers
> Rory
>
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?



Re: [R] Adding a Rotated Density Plot to an Existing Plot

2008-06-08 Thread jim holtman
This gets you close to the solution that you want:

x <- rnorm(1000)
x <- x + exp(-x/2)
layout(matrix(rep(c(1,1,2), 2), 2, 3, byrow=TRUE))
boxplot(x)
rug(jitter(x), side=2)
y <- density(x)
plot(y$y, y$x, type='l')




On Sun, Jun 8, 2008 at 7:07 AM, Rory Winston <[EMAIL PROTECTED]> wrote:

> Hi
>
> Consider the following graph:
>
> x <- rnorm(1000)
> x <- x + exp(-x/2)
> layout(matrix(rep(c(1,1,2), 2), 2, 3, byrow=TRUE))
> boxplot(x)
> rug(jitter(x), side=2)
> plot(density(x))
>
>
> What I would really like to do is to have the density plot rotated by 90
> degrees so that I can see it line up with the rug plot on the side axis. I
> can't seem to do this. I have seen references to the grid package and tried
> some examples, but I can't seem to get it to play well with the boxplot. Does
> anyone know how I can do this?
>
> Cheers
> Rory
>
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?



Re: [R] eliminating and relabeling the first column

2008-06-08 Thread jim holtman
I am not sure exactly what you were trying to do.  read.table will read your
data into an object within R, and you can then change the row names:

> dat <- read.table(textConnection("H1 H2 H3
+ 1 2 3
+ 4 54 6
+ 6 7 8
+ 3 2 1"), header=TRUE)
> closeAllConnections()
> dat
  H1 H2 H3
1  1  2  3
2  4 54  6
3  6  7  8
4  3  2  1
> row.names(dat) <- paste('g', seq(nrow(dat)), sep="")
> dat
   H1 H2 H3
g1  1  2  3
g2  4 54  6
g3  6  7  8
g4  3  2  1
>
It does not change the file on the disk and therefore file.show will display
the original file.  If you want to rewrite the file, then look at
write.table.
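For instance (the output filename below is made up):

```r
## Write the relabelled data back out -- 'colon_cancer_relabelled.txt' is a
## made-up name; col.names = NA keeps a blank header slot above the row names
write.table(dat, "colon_cancer_relabelled.txt",
            quote = FALSE, sep = "\t", col.names = NA)
```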

On Sun, Jun 8, 2008 at 1:48 AM, Paul Adams <[EMAIL PROTECTED]> wrote:

> Hello everyone,
> I have a data frame in which I am wanting to eliminate the row labels and
> then relabel the rows with
> g1-g2000. I have used the following code:
> dat<-read.table(file="C:\\Documents and Settings\\Owner\\My
> Documents\\colon cancer1.txt",header=T,row.names=1)
> file.show(file="C:\\Documents and Settings\\Owner\\My Documents\\colon
> cancer1.txt")
> I thought that this would eliminate the row labels (first column) because
> of the header=T and row.names=1 argument.
> Then I tried to add the following to relabel the rows (first column)
> g1-g2000
> dat<-read.table(file="C:\\Documents and Settings\\Owner\\My
> Documents\\colon cancer1.txt",header=T,row.names=1)
> row.names(dat)<-paste("g",c(1:nrow(dat)),sep="")
> file.show(file="C:\\Documents and Settings\\Owner\\My Documents\\colon
> cancer1.txt")
> However I still get the same row labels that I have to begin with.
> Any help or explanation would be much appreciated.
> Paul
>
>
>
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?



[R] Adding a Rotated Density Plot to an Existing Plot

2008-06-08 Thread Rory Winston

Hi

Consider the following graph:

x <- rnorm(1000)
x <- x + exp(-x/2)
layout(matrix(rep(c(1,1,2), 2), 2, 3, byrow=TRUE))
boxplot(x)
rug(jitter(x), side=2)
plot(density(x))


What I would really like to do is to have the density plot rotated by 90 
degrees so that I can see it line up with the rug plot on the side axis. 
I can't seem to do this. I have seen references to the grid package and 
tried some examples, but I can't seem to get it to play well with the 
boxplot. Does anyone know how I can do this?


Cheers
Rory



Re: [R] favorite useful tools?

2008-06-08 Thread Michael Dewey

At 15:16 07/06/2008, Carl Witthoft wrote:

Hi,
I'm relatively new to R, so I don't know the full list of base (or 
popular add-on packages)  functions and tools available.  For 
example, I tripped across mention of rle() in a message about some 
other problem. rle() turned out to be a handy shortcut to splitting 
some of my data by magnitude (vaguely like a sequence-based histogram).
So I thought I'd ask: what small, or obscure, tools and functions in 
R do you find handy or 'cool' to use in your work?


In a similar vein to your comment about rle (which I found in the 
same way as you and for which I would never have thought of looking) 
perhaps I could suggest str()? I picked this up by reading R-help.


When R's behaviour is unexpected (to you) using
str(your_dataframe)
often reveals why.

And for a bonus
1 - if the function has a data= parameter then always use it
2 - use with() rather than attaching and detaching things

These have saved me hours.
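Two tiny examples of the kind of thing meant here (toy data, obviously):

```r
d <- data.frame(x = 1:5, y = (1:5)^2)
str(d)                # compact overview: 5 obs. of 2 variables, with types
with(d, mean(y / x))  # 3 -- no attach()/detach() needed
```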


Thanks
Carl




Michael Dewey
http://www.aghmed.fsnet.co.uk



[R] SARIMA Model Problem

2008-06-08 Thread ruangkij s
Hi..
   
   I get the data below from software. The SARIMA equation should be 
   
  (1 - 0.991B^{12})z_t + 43.557 = (1 + 0.37B)(1 - 0.915B^{12})a_t  as somebody 
advised me.
   
  Then I used it to confirm the forecasting output, but it looks like this 
equation is not correct,
   
  because it is very different from the software output.
   
  Would you please confirm whether the SARIMA equation above is correct or not? If not, 
what is the correct one?
   
  Remark: This model is used for daily demand.
   
  ARIMA Model (0,0,1)(1,0,1)
   
  No Transformation
  Constant   >> 43.557 , t = 10.09 
MA >> -0.37  , t = -4.806, Lag 1
AR,Seasonal>> 0.991  , t = 48.098, Lag 1
MA,Seasonal>> 0.915  , t = 9.487 , Lag 1
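For what it's worth, a model of this form can be fitted in R with arima(); the sketch below uses a simulated period-12 series (z is a placeholder, not your demand data):

```r
set.seed(1)
z <- ts(rnorm(120, mean = 43.5), frequency = 12)    # placeholder series
fit <- arima(z, order = c(0, 0, 1),                 # ARIMA(0,0,1)(1,0,1)[12]
             seasonal = list(order = c(1, 0, 1), period = 12))
fit                                                 # compare the coefficients
predict(fit, n.ahead = 12)$pred                     # forecasts from the fit
```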
   
  Forcasting Output
  42.06226126
37.74729393
37.59225405
32.2574074
42.15854259
47.98722125
59.33493345
44.09434137
37.79761267
37.64391567
32.35527663
42.17065344
47.94884716
59.19827147
44.08968535
37.84749558
37.69512982
32.45229818
42.18265938
47.91080545
59.06279319
44.08506965

   


Re: [R] how to important a date file into R

2008-06-08 Thread Dr. S. B. Nguah

Hi,

 ?read.fwf

Try,
read.fwf("filename.ext", widths=c(3,rep(1,298)), header=FALSE)

Blay


yyan liu wrote:
> 
> Hi: 
>I have a data file in the following format.  The first three digits
> stand for the ID of a respondent such as 402, 403. Different respondents
> may have the same ID. Followed the ID are 298 single digit number ranging
> from 1 to 5. My question is how to read this data file into R. I tried
> "scan" and "read" but they do not work because the numbers in the file are
> not separated.  Any suggestions? 
> Thank you!
> 
> zhong
> ---
> 40212221211   
>  
> 11212323114345531221314222132311445542524542111412113212124145315324113113112411
> 14153314211125142144131411153115432414111312411541141535124251213111
> 515122424114113114133521115122213321142331141413151433212221211454124214
> 1153311211331512415451231113131512
>   
>  4021211  
>   
> 31122312125321111311444321432343211313223213222134323223114122313322
> 1334311311242122241332422322121343132132221233222322211522241443114351312113
> 5222123231232321331322211142312112411323122131143223251332112123231333121222
> 11422211214312332312324351213121121311233412212512
>   
>  40312191211  
>   
> 211123111215344311512341121332452221121411221211231231112541341231311221
> 242331131244421212112134422332112213211125321223221344241211424252112314
> 111212242114141124211141332231511345133211131132431122111311121443131114
> 3254132223211512414542145121131541
>   
>  40322191211  
>   
> 3112431112141133314314211131444331311315212111221131221115333113115111411312
> 1541411211554414421132532421321112115134521115241214121535153531514152111425
> 32141123112335112251241243511135125221114411232214131114
> 354144431312413531134131141511
>   
> 
> 
>   
> 
> 


-
Blay S
KATH
Kumasi, Ghana.
-- 
View this message in context: 
http://www.nabble.com/how-to-important-a-date-file-into-R-tp17715519p17716831.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] Need to run R on Fedora 9.

2008-06-08 Thread Paul Smith
On Sun, Jun 8, 2008 at 9:33 AM, Peter Dalgaard <[EMAIL PROTECTED]> wrote:
>>> I'm in need of a version of R that will run on fedora 9 and haven't been
>>> able
>>> to find one.  I need this in order to run Bioconductor.  Any advice?
>>
>> Why not say (as root)
>> yum install R
>> ?
>>
>> Worked for me.  The point is that there is now a Fedora rpm for R.
>
> I'm still on F8, but I think you might want to make that "yum install R
> R-devel", so that you get the build tools. Otherwise, (some?) source
> packages will get you in trouble.

Here, on F9, I simply did:

yum install R R-devel

and it worked successfully.

Paul



Re: [R] Need to run R on Fedora 9.

2008-06-08 Thread Peter Dalgaard

Jonathan Baron wrote:

On 06/05/08 09:28, RobertsLRRI wrote:
  

I'm in need of a version of R that will run on fedora 9 and haven't been able
to find one.  I need this in order to run Bioconductor.  Any advice?



Why not say (as root)
yum install R
?

Worked for me.  The point is that there is now a Fedora rpm for R.
  
I'm still on F8, but I think you might want to make that "yum install R 
R-devel", so that you get the build tools. Otherwise, (some?) source 
packages will get you in trouble.




--
  O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
 c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
(*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907
