[R] Advice on Spatial Statistics

2005-10-05 Thread José Eugenio Lozano Alonso

Hello,

I have point-referenced spatial data (information linked to points on a
map), and I'd like to make inferences over the full extent of my study area.
I know the kriging method, but I understand that it may be considered dated
and that newer approaches have since been proposed.

I'd appreciate any advice on reading material about these spatial statistics
methods: books, papers or articles.

Thanks in advance,
Jose Lozano

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Fisher's discriminant functions

2005-10-05 Thread Leonardo Lami
Hi,
I have the same problem.
I found a solution, but I suspect there is something simpler and more correct.

I take one group, set the values of the other groups to 0, and then use the
function "glm" to obtain Fisher's discriminant function for that group.
I repeat this for each of the groups.

Some time ago I tried to ask the same question, but got no replies.

Best wishes
Leonardo

At 22:22 on Thursday, 29 September 2005, Kjetil Holuerson wrote:
> These are in various packages; you could have a look at
> ade4 (on CRAN).
>
> Kjetil
>
> C NL wrote:
> > Hi everyone,
> >
> >   I'm trying to solve a problem about how to get the
> > Fisher's discriminant functions of a "lda" (linear
> > discriminant analysis) object, I mean, the object
> > obtained from doing "lda(formula, data)" function of
> > the package MASS in R-project. This object gives me
> > the canonical linear functions (n-1 coefficients
> > matrix of n groups at least), and only with this
> > information I could predict the group of an
> > observation data using the "predict" function. But
> > what I need is the Fisher's discriminant functions (n
> > coefficients matrix of n groups) in order to classify
> > my future data.
> >
> >The object "predict" gives me only the following
> > attributes "x", "posterior" and "class", but none of
> > them are the coefficients matrix of the Fisher's
> > discriminant functions, and the reason why I'm not
> > using the "predict" function for my predictions is
> > because the time spent is very high for what I'm
> > expecting, about 0.5 seconds while I can obtain this
> > prediction with the Fisher's discriminant functions
> > faster.
> >
> >So, I don't know if there's a package which I can
> > use to obtain the mentioned coefficients matrix of the
> > Fisher's discriminant functions.
> >
> > >If anyone can help, I would appreciate it greatly.
> >
> > Thank you and regards.
> >
> >Carlos Niharra López
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide!
> > http://www.R-project.org/posting-guide.html
>
> --
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html

-- 
Leonardo Lami
[EMAIL PROTECTED]   www.faunalia.it
Via Colombo 3 - 51010 Massa e Cozzile (PT), Italy   Tel: (+39)349-1310164
GPG key @: hkp://wwwkeys.pgp.net http://www.pgp.net/wwwkeys.html
https://www.biglumber.com

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] problem accumulating array within a function over loops

2005-10-05 Thread Jonathan Williams
Dear R helpers,

I am having trouble with an array inside a loop.

I wish to accumulate the results of a function in an array within
the function, over several loops of a program outside the function.

The problem is that the array seems to re-set at every entry to the
function. Here is an example, so you can see what I mean.


#First, I declare the array and loop control variables
maxrun=3; a=array(NA, c(3,5)); run=0

# Then I define the function "testf"
testf=function(x,y){
print(paste('Run:',run)) #check that the function knows about "run"
a[run,1:3]=runif(3); a[run,4]=x; a[run,5]=y #collect numbers into array "a"
print(paste("Row", run, "of a:")); print(a[run,]) #check what row 'run' of
"a" contains
print("Whole of a:"); print(a) # check what all of "a" contains
}

#Finally, I loop through "testf" maxrun occasions
for (run in 1:maxrun) testf(run,run*10)
#

Here is the output:-

[1] "Run: 1"
[1] "Row 1 of a:"
[1]  0.4637560  0.8872455  0.6421500  1.000 10.000
[1] "Whole of a:"
          [,1]      [,2]      [,3] [,4] [,5]
[1,] 0.4637560 0.8872455 0.6421500    1   10
[2,]        NA        NA        NA   NA   NA
[3,]        NA        NA        NA   NA   NA
[1] "Run: 2"
[1] "Row 2 of a:"
[1]  0.4841261  0.8835118  0.9862266  2.000 20.000
[1] "Whole of a:"
          [,1]      [,2]      [,3] [,4] [,5]
[1,]        NA        NA        NA   NA   NA
[2,] 0.4841261 0.8835118 0.9862266    2   20
[3,]        NA        NA        NA   NA   NA
[1] "Run: 3"
[1] "Row 3 of a:"
[1]  0.7638856  0.6667588  0.6102928  3.000 30.000
[1] "Whole of a:"
          [,1]      [,2]      [,3] [,4] [,5]
[1,]        NA        NA        NA   NA   NA
[2,]        NA        NA        NA   NA   NA
[3,] 0.7638856 0.6667588 0.6102928    3   30

You can see that array a correctly collects the numbers at each loop.
But, each successive loop loses the contents of previous loops. I am
hoping to keep the results of each successive loop in "a", so that
after maxrun runs, "a" looks like:-

          [,1]      [,2]      [,3] [,4] [,5]
[1,] 0.4637560 0.8872455 0.6421500    1   10
[2,] 0.4841261 0.8835118 0.9862266    2   20
[3,] 0.7638856 0.6667588 0.6102928    3   30

(I have made this by cutting and pasting from the above output, to show
what I hoped testf would produce).

I am sure I must be making a simple but fundamental error. Would
someone be so kind as to show me what this is and how to correct it?
I have struggled with it for over an hour, without success!
I am running R 2.1.1 on a Windows 2000 machine.

Thanks, in advance, for your help,

Jonathan Williams

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] problem accumulating array within a function over loops

2005-10-05 Thread Dimitris Rizopoulos
try it this way:

a <- array(NA, c(3, 5))
for (i in 1:nrow(a))
    a[i, ] <- c(runif(3), i, i * 10)
a


I hope it helps.

Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://www.med.kuleuven.be/biostat/
 http://www.student.kuleuven.be/~m0390867/dimitris.htm


- Original Message - 
From: "Jonathan Williams" 
<[EMAIL PROTECTED]>
To: "Ethz. Ch" 
Sent: Wednesday, October 05, 2005 11:46 AM
Subject: [R] problem accumulating array within a function over loops


> Dear R helpers,
>
> I am having trouble with an array inside a loop.
>
> I wish to accumulate the results of a function in an array within
> the function, over several loops of a program outside the function.
>
> The problem is that the array seems to re-set at every entry to the
> function. Here is an example, so you can see what I mean.
>
> 
> #First, I declare the array and loop control variables
> maxrun=3; a=array(NA, c(3,5)); run=0
>
> # Then I define the function "testf"
> testf=function(x,y){
> print(paste('Run:',run)) #check that the function knows about "run"
> a[run,1:3]=runif(3); a[run,4]=x; a[run,5]=y #collect numbers into 
> array "a"
> print(paste("Row", run, "of a:")); print(a[run,]) #check what row 
> 'run' of
> "a" contains
> print("Whole of a:"); print(a) # check what all of "a" contains
> }
>
> #Finally, I loop through "testf" maxrun occasions
> for (run in 1:maxrun) testf(run,run*10)
> #
>
> Here is the output:-
>
> [1] "Run: 1"
> [1] "Row 1 of a:"
> [1]  0.4637560  0.8872455  0.6421500  1.000 10.000
> [1] "Whole of a:"
>  [,1]  [,2][,3] [,4] [,5]
> [1,] 0.4637560 0.8872455 0.642151   10
> [2,]NANA  NA   NA   NA
> [3,]NANA  NA   NA   NA
> [1] "Run: 2"
> [1] "Row 2 of a:"
> [1]  0.4841261  0.8835118  0.9862266  2.000 20.000
> [1] "Whole of a:"
>  [,1]  [,2]  [,3] [,4] [,5]
> [1,]NANANA   NA   NA
> [2,] 0.4841261 0.8835118 0.98622662   20
> [3,]NANANA   NA   NA
> [1] "Run: 3"
> [1] "Row 3 of a:"
> [1]  0.7638856  0.6667588  0.6102928  3.000 30.000
> [1] "Whole of a:"
>  [,1]  [,2]  [,3] [,4] [,5]
> [1,]NANANA   NA   NA
> [2,]NANANA   NA   NA
> [3,] 0.7638856 0.6667588 0.61029283   30
>
> You can see that array a correctly collects the numbers at each 
> loop.
> But, each successive loop loses the contents of previous loops. I am
> hoping to keep the results of each successive loop in "a", so that
> after maxrun runs, "a" looks like:-
>
>  [,1]  [,2][,3]   [,4] [,5]
> [1,] 0.4637560 0.8872455 0.64215  1   10
> [2,] 0.4841261 0.8835118 0.98622662   20
> [3,] 0.7638856 0.6667588 0.61029283   30
>
> (I have made this by cutting and pasting from the above output, to 
> show
> what I hoped testf would produce).
>
> I am sure I must be making a simple but fundamental error. Would
> someone be so kind as to show me what this is and how to correct it?
> I have struggled with it for over an hour, without success!
> I am running R 2.1.1 on a Windows 2000 machine.
>
> Thanks, in advance, for your help,
>
> Jonathan Williams
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
> 


Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Fisher's discriminant functions

2005-10-05 Thread Prof Brian Ripley

On Wed, 5 Oct 2005, Leonardo Lami wrote:


Hi,
I have the same problem.
I found a solution, but I suspect there is something simpler and more correct.

I take one group, set the values of the other groups to 0, and then use the
function "glm" to obtain Fisher's discriminant function for that group.
I repeat this for each of the groups.


Which family in glm?  There is a way to do Fisher's LDF by least-squares 
regression, but some further computations are required.  The details are 
in section 3.2 of my PRNN book.


lda() does do Fisher's discriminant function, but note Fisher only did 
this for 2 groups and 1 function.  The predict method evaluates the LDF, 
so just look at the code to see how it does it (you need to be careful 
about a lot of details to maintain accuracy).
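For readers who want a concrete starting point: the group-wise classification
functions (one linear function per group, as asked for above) can be built from
the group means and the pooled within-group covariance. A minimal sketch on the
standard iris data; this is not the code predict.lda() actually uses, and it
ignores the numerical safeguards mentioned above:

library(MASS)
fit <- lda(Species ~ ., data = iris)

# pooled within-group covariance of the predictors
X <- iris[, 1:4]
g <- iris$Species
W <- matrix(0, ncol(X), ncol(X))
for (d in split(X, g)) W <- W + (nrow(d) - 1) * cov(d)
W <- W / (nrow(X) - nlevels(g))

# classification score for group k:
#   score_k(x) = x' W^{-1} mu_k - 0.5 * mu_k' W^{-1} mu_k + log(prior_k)
Winv   <- solve(W)
coefs  <- Winv %*% t(fit$means)                  # one column of coefficients per group
consts <- -0.5 * colSums(t(fit$means) * coefs) + log(fit$prior)

scores <- as.matrix(X) %*% coefs + rep(consts, each = nrow(X))
pred   <- levels(g)[max.col(scores)]
table(pred, predict(fit)$class)                  # should agree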




Some time ago I tried to ask the same question, but got no replies.

Best wishes
Leonardo

At 22:22 on Thursday, 29 September 2005, Kjetil Holuerson wrote:

These are in various packages; you could have a look at
ade4 (on CRAN).

Kjetil

C NL wrote:

Hi everyone,

  I'm trying to solve a problem about how to get the
Fisher's discriminant functions of a "lda" (linear
discriminant analysis) object, I mean, the object
obtained from doing "lda(formula, data)" function of
the package MASS in R-project. This object gives me
the canonical linear functions (n-1 coefficients
matrix of n groups at least), and only with this
information I could predict the group of an
observation data using the "predict" function. But
what I need is the Fisher's discriminant functions (n
coefficients matrix of n groups) in order to classify
my future data.

   The object "predict" gives me only the following
attributes "x", "posterior" and "class", but none of
them are the coefficients matrix of the Fisher's
discriminant functions, and the reason why I'm not
using the "predict" function for my predictions is
because the time spent is very high for what I'm
expecting, about 0.5 seconds while I can obtain this
prediction with the Fisher's discriminant functions
faster.

   So, I don't know if there's a package which I can
use to obtain the mentioned coefficients matrix of the
Fisher's discriminant functions.

   If anyone can help, I would appreciate it greatly.

Thank you and regards.

   Carlos Niharra López

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html


--

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html


--
Leonardo Lami
[EMAIL PROTECTED]   www.faunalia.it
Via Colombo 3 - 51010 Massa e Cozzile (PT), Italy   Tel: (+39)349-1310164
GPG key @: hkp://wwwkeys.pgp.net http://www.pgp.net/wwwkeys.html
https://www.biglumber.com

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html



--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] problem accumulating array within a function over loops

2005-10-05 Thread vincent
Jonathan Williams wrote:

> maxrun=3; a=array(NA, c(3,5)); run=0
> 
> testf=function(x,y){
> print(paste('Run:',run)) #check that the function knows about "run"
> a[run,1:3]=runif(3); a[run,4]=x; a[run,5]=y #collect numbers into array "a"
> }

a outside testf is a "global variable".
a inside testf is a "local variable", ie a variable local to testf().
They are not the same.

As far as I remember, to assign to a global variable from inside a
function you have to use the <<- operator.

(By the way, for 2dim arrays, there is also matrix().)
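For completeness, two ways the original example could be made to keep its
results, sketched against the code quoted above: the superassignment fix
described here, and the usually preferred alternative of returning the row
and filling the array in the caller.

# (a) superassignment: write into the global "a" from inside testf
testf <- function(x, y) {
    a[run, ] <<- c(runif(3), x, y)   # note <<-, not <-
}

# (b) usually cleaner: return the row and let the caller store it
testf2 <- function(x, y) c(runif(3), x, y)

maxrun <- 3
a <- array(NA, c(maxrun, 5))
for (run in 1:maxrun) a[run, ] <- testf2(run, run * 10)
a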

hih
Vincent

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] mathgraph - inputs allowed?

2005-10-05 Thread Sara Mouro
Dear All,

I am trying to use library(mathgraph), but I do not understand (or find
in any help file) the type of input needed.

I have prepared the adjacency matrix for the data, which consists of
380x26 cells containing 0/1 (1 if there is a link between those vertices, 0
otherwise).

Can I use that as an input? Could you please explain how?
Or should I just use a vector for each line (vertex)?

In fact, I could not understand how to fill in the formula referred to in the
help file:
mathgraph(formula, directed=T/F, data=.)

I would be very grateful if anyone could send me some examples on that.
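One package-independent step that may help: an adjacency matrix is easy to
reduce to a two-column edge list in base R, and graph constructors (the
mathgraph() formula interface appears to want vectors of edge endpoints;
check ?mathgraph) generally work from such pairs. A minimal sketch with a
small made-up matrix:

# hypothetical 0/1 adjacency matrix with vertex names on rows and columns
adj <- matrix(c(0, 1, 1,
                1, 0, 0,
                1, 0, 0), 3, 3, byrow = TRUE,
              dimnames = list(c("a", "b", "c"), c("a", "b", "c")))

# two-column edge list: one row per 1 in the matrix
idx   <- which(adj == 1, arr.ind = TRUE)
edges <- data.frame(from = rownames(adj)[idx[, "row"]],
                    to   = colnames(adj)[idx[, "col"]])
edges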

Best regards,
Sara Maltez Mouro


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] eliminate t() and %*% using crossprod() and solve(A,b)

2005-10-05 Thread Robin Hankin
Hi

I have a square matrix Ainv of size N-by-N where N ~  1000
I have a rectangular matrix H of size N by n where n ~ 4.
I have a vector d of length N.

I need   X  = solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*% d

and

H %*% X.

It is possible to rewrite X in the recommended crossprod way:

X <-  solve(quad.form(Ainv, H), crossprod(crossprod(Ainv, H), d))

where quad.form() is a little function that evaluates a quadratic form:

  quad.form <- function (M, x){
 jj <- crossprod(M, x)
 return(drop(crossprod(jj, jj)))
}


QUESTION:

how  to calculate

H %*% X

in the recommended crossprod way?  (I don't want to take a transpose
because t() is expensive, and I know that %*% is slow).





--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Labeling the theme of shp file in R

2005-10-05 Thread JIRUŠE Marek

Dear R-users,

I am using the package maptools to draw a shp file in R with the function
Map2poly(), without problems. I would appreciate knowing how I can add labels
for the objects in the map.
Thank you for any advice.

library(maptools)
try <- read.shape(system.file("data shp/okresy.shp", package="maptools"))
mappolys <- Map2poly(try,quiet=FALSE)
plot(mappolys)
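A common low-tech way to add labels is to place text() at one representative
coordinate per polygon. A sketch, under the assumptions that each element of
the polylist is a two-column coordinate matrix and that the labels sit in the
Map object's attribute data (the slot name att.data and the column name NAZEV
are assumptions here; inspect str(try) for the real names):

# one crude "centre" per polygon: the mean of its vertices, NAs removed
ctr <- t(sapply(mappolys, function(p) colMeans(p, na.rm = TRUE)))

lab <- as.character(try$att.data$NAZEV)   # hypothetical label column

plot(mappolys)
text(ctr[, 1], ctr[, 2], labels = lab, cex = 0.6)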

Marek Jiruse
Data Production Specialist
GfK Praha, s.r.o.
Geologická 2
CZ - 152 00 Praha 5
tel.: +420 296 555  694
fax: +420 251 815 800
e-mail: [EMAIL PROTECTED]
http://www.gfk.cz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Joining Dataframes

2005-10-05 Thread Florence Combes
Matt,

you may want to set the option all to TRUE in :

merge(species1.effort, species2.effort, by='date', all=TRUE)

in order to also keep, in the result, the rows whose 'date' is not common
to both data frames.
See ?merge for details.
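A tiny illustration of the difference (the two data frames here are made up,
reusing the names from your call):

species1.effort <- data.frame(date = c("2005-01-01", "2005-01-02"), n1 = c(4, 7))
species2.effort <- data.frame(date = c("2005-01-02", "2005-01-03"), n2 = c(2, 9))

merge(species1.effort, species2.effort, by = "date")              # only the shared date
merge(species1.effort, species2.effort, by = "date", all = TRUE)  # all three dates, NAs filled in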
hope this helps,

Florence.

---
Transcriptome platform of the ENS, Paris, France
http://transcriptome.ens.fr

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] eliminate t() and %*% using crossprod() and solve(A,b)

2005-10-05 Thread Prof Brian Ripley
On Wed, 5 Oct 2005, Robin Hankin wrote:

> I have a square matrix Ainv of size N-by-N where N ~  1000
> I have a rectangular matrix H of size N by n where n ~ 4.
> I have a vector d of length N.
>
> I need   X  = solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*% d
>
> and
>
> H %*% X.
>
> It is possible to rewrite X in the recommended crossprod way:
>
> X <-  solve(quad.form(Ainv, H), crossprod(crossprod(Ainv, H), d))
>
> where quad.form() is a little function that evaluates a quadratic form:
>
>  quad.form <- function (M, x){
> jj <- crossprod(M, x)
> return(drop(crossprod(jj, jj)))
> }

That is not the same thing: it gives t(H) %*% Ainv %*% t(Ainv) %*% H .

> QUESTION:
>
> how  to calculate
>
> H %*% X
>
> in the recommended crossprod way?  (I don't want to take a transpose
> because t() is expensive, and I know that %*% is slow).

Have you some data to support your claims?  Here I find (for random 
matrices of the dimensions given on a machine with a fast BLAS)

> system.time(for(i in 1:100) t(H) %*% Ainv)
[1] 2.19 0.01 2.21 0.00 0.00
> system.time(for(i in 1:100) crossprod(H, Ainv))
[1] 1.33 0.00 1.33 0.00 0.00

so each is quite fast and the difference is not great.  However, that is 
not comparing %*% with crossprod, but t & %*% with crossprod.

I get

> system.time(for(i in 1:1000) H %*% X)
[1] 0.05 0.01 0.06 0.00 0.00

which is hardly 'slow' (60 us for %*%), especially compared to forming X 
in

> system.time({X  = solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*% d})
[1] 0.04 0.00 0.04 0.00 0.00

I would probably have written

> system.time({X <- solve(crossprod(H, Ainv %*% H), crossprod(crossprod(Ainv, 
> H), d))})
1] 0.03 0.00 0.03 0.00 0.00

which is faster and does give the same answer.

[BTW, I used 2.2.0-beta which defaults to gcFirst=TRUE.]

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] multiple line plots

2005-10-05 Thread sosman
I have some data in a CSV file:

time,pos,t,tl
15:23:44:350,M1_01,4511,1127
15:23:44:350,M1_02,4514,1128
15:23:44:350,M1_03,4503,1125
...
15:23:44:491,M2_01,4500,1125
15:23:44:491,M2_02,4496,1124
15:23:44:491,M2_03,4516,1129
...
15:23:44:710,M3_01,4504,1126
15:23:44:710,M3_02,4516,1129
15:23:44:710,M3_03,4498,1124
...

Each pos (e.g. M1_01) is an independent time series.  I would like to plot
each time series as a line on a single plot, and I wondered if there was
something more straightforward than what I was attempting.

I got as far as:

fname = 't100.csv'
t = read.csv(fname)
tpos = split(t, t$pos)
plot(tpos[["M1_01"]]$t, type='l')
for (p in names(tpos)) {
 lines(tpos[[p]]$t)
}

which seems to work, but then I got stuck on how to make each line a
different colour, and figured that there might be a one-liner R command
to do what I want.

Any tips would be appreciated.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Analyses of covariation with lme() or lm()

2005-10-05 Thread CG Pettersson
Hello all!

I have a problem that calls for a better understanding than mine of
how lme() uses the random part of the call.

The dataset consists of eleven field trials (Trial) with three 
replicates (Block) and four fertiliser treatments (Treat). Analysing for 
  example yield with lme() is easy:

m1 <- lme(Yield ~ Treat, data=data,
   random =~1| Trial/Block)

giving estimates of Treat effects with good significances. If I compare 
m1 with the model without any random structure:

m2 <- lm(Yield ~ Treat, data=data),
m1 is, naturally, much better than m2. So far so good.

Now I have one (1) measurement from each Trial, of soil factors, weather and
such, that I want to evaluate. Remember: only one value of the covariate
for each Trial. The suggestion I have got from my local guru is to base
this on m1, like:

m3 <- lme(Yield ~ Treat + Cov1 + Treat:Cov1, data=data,
   random =~1| Trial/Block)

thus giving a model where the major random factor (Trial) is represented
both as a single (1) measurement of Cov1 in the fixed part and by itself in the
random part. Trying the simpler call:

m4 <- lm(Yield ~ Treat + Cov1 + Treat:Cov1, data=data)

gives back basically the same fixed effects as m3, but with better
significance for Cov1. Testing with anova(m3,m4) naturally gives the
answer that m3 is better than m4. OK, what about dealing with Trial in
the fixed call?:

m5 <- lm(Yield ~ Trial + Treat + Cov1 + Treat:Cov1, data=data)

lm() swallows this, but silently drops Cov1 from the analysis, an
action that feels very logical to me.

My guru says that using the random call protects you from overestimating
the p-values of the covariate. I fear that the risk of underestimating them
with the same approach is just as big. Working on a paper, I naturally
want to be able to include some sort of discussion of the impact of the
covariates... ;-)

What is the wise solution? Or, if this is trying to make other people do
my homework, could anyone tell me where the homework is? (I've got both
Pinheiro & Bates and MASS, as well as some others, on the bookshelf.)

Cheers
/CG

-- 
CG Pettersson MSci. PhD.Stud.
Swedish University of Agricultural Sciences (SLU)
Dep. of Crop Production Ecology (VPE).
http://www.slu.se/
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] multiple line plots

2005-10-05 Thread Marc Schwartz
On Wed, 2005-10-05 at 22:19 +1000, sosman wrote:
> I have some data in a CSV file:
> 
> time,pos,t,tl
> 15:23:44:350,M1_01,4511,1127
> 15:23:44:350,M1_02,4514,1128
> 15:23:44:350,M1_03,4503,1125
> ...
> 15:23:44:491,M2_01,4500,1125
> 15:23:44:491,M2_02,4496,1124
> 15:23:44:491,M2_03,4516,1129
> ...
> 15:23:44:710,M3_01,4504,1126
> 15:23:44:710,M3_02,4516,1129
> 15:23:44:710,M3_03,4498,1124
> ...
> 
> Each pos (eg M1_01) is an independent time series.  I would like to plot 
> each time series as lines on a single plot and I wondered if there was 
> something more straight forward than I was attempting.
> 
> I got as far as:
> 
> fname = 't100.csv'
> t = read.csv(fname)
> tpos = split(t, t$pos)
> plot(tpos[["M1_01"]]$t, type='l')
> for (p in names(tpos)) {
>  lines(tpos[[p]]$t)
> }
> 
> which seems to work but then I got stuck on how to make each line a 
> different colour and figured that there might a be a one liner R command 
> to do what I want.
> 
> Any tips would be appreciated.


See the examples in ?plot.ts for some approaches.

You will need to review ?ts to create time series objects from your data
to be used in plot.ts().

Another approach, which is not specific to time series, is ?matplot.
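A matplot() sketch along those lines, reusing the tpos list from the original
post (and assuming all series have the same length, so that they can be bound
into a matrix):

m <- sapply(tpos, function(d) d$t)   # one column per series
matplot(m, type = "l", lty = 1, col = rainbow(ncol(m)),
        xlab = "index", ylab = "t")
legend("topright", legend = colnames(m), col = rainbow(ncol(m)),
       lty = 1, cex = 0.6)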

HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] eliminate t() and %*% using crossprod() and solve(A,b)

2005-10-05 Thread Robin Hankin

On 5 Oct 2005, at 12:15, Dimitris Rizopoulos wrote:

> Hi Robin,
>
> I've been playing with your question, but I think these two lines  
> are not equivalent:
>
> N <- 1000
> n <- 4
> Ainv <- matrix(rnorm(N * N), N, N)
> H <- matrix(rnorm(N * n), N, n)
> d <- rnorm(N)
> quad.form <- function (M, x){
> jj <- crossprod(M, x)
> return(drop(crossprod(jj, jj)))
> }
>
>
> X0 <- solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*% d
> X1 <- solve(quad.form(Ainv, H), crossprod(crossprod(Ainv, H), d))
> all.equal(X0, X1) # not TRUE
>
>
> which is the one you want to compute?
>


These are not exactly equivalent, but:


Ainv <- matrix(rnorm(1e6),1e3,1e3)
H <- matrix(rnorm(4000),ncol=4)
d <- rnorm(1000)

X0 <- solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*% d
X1 <- solve(quad.form(Ainv, H), crossprod(crossprod(Ainv, H), d))
X0 - X1
   [,1]
[1,]  4.884981e-15
[2,]  3.663736e-15
[3,] -5.107026e-15
[4,]  5.717649e-15

which is pretty close.



On 5 Oct 2005, at 12:50, Prof Brian Ripley wrote:
>
>> QUESTION:
>>
>> how  to calculate
>>
>> H %*% X
>>
>> in the recommended crossprod way?  (I don't want to take a transpose
>> because t() is expensive, and I know that %*% is slow).
>>
>
> Have you some data to support your claims?  Here I find (for random
> matrices of the dimensions given on a machine with a fast BLAS)
>
>

I couldn't supply any performance data because I couldn't figure out the
correct R commands to calculate H %*% X  without using %*% or t()!

I was just wondering if there were a way to calculate

H %*% solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*% d

without using t() or %*%.  And there doesn't seem to be (my original
question didn't make it clear that I don't have X precalculated).

My take-home lesson from Brian Ripley is that H %*% X is fast
--but this is only useful to me if one has X precalculated, and in
general I don't.   But this discussion suggests to me that it might be
a good idea to change my routines and calculate X anyway.

thanks again Prof Ripley and Dimitris Rizopoulos


very best wishes

Robin



>> system.time(for(i in 1:100) t(H) %*% Ainv)
>>
> [1] 2.19 0.01 2.21 0.00 0.00
>
>> system.time(for(i in 1:100) crossprod(H, Ainv))
>>
> [1] 1.33 0.00 1.33 0.00 0.00
>
> so each is quite fast and the difference is not great.  However,  
> that is
> not comparing %*% with crossprod, but t & %*% with crossprod.
>
> I get
>
>
>> system.time(for(i in 1:1000) H %*% X)
>>
> [1] 0.05 0.01 0.06 0.00 0.00
>
> which is hardly 'slow' (60 us for %*%), especially compared to  
> forming X
> in
>
>
>> system.time({X  = solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*%  
>> d})
>>
> [1] 0.04 0.00 0.04 0.00 0.00
>
> I would probably have written
>
>
>> system.time({X <- solve(crossprod(H, Ainv %*% H), crossprod 
>> (crossprod(Ainv, H), d))})
>>
> 1] 0.03 0.00 0.03 0.00 0.00
>
> which is faster and does give the same answer.
>
> [BTW, I used 2.2.0-beta which defaults to gcFirst=TRUE.]
>
> -- 





--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Rcmdr and scatter3d

2005-10-05 Thread John Fox
Dear Ted,

I assumed that since Naiara was using scatter3d(), he wants a 3D dynamic
scatterplot. He could add points (actually, spheres) to the rgl graph
produced by scatter3d() -- the analog of plot() followed by points() for a
2D graph -- but doing so would be much more work than plotting by groups.
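A minimal sketch of the plot-by-groups approach (the data here are made up;
scatter3d()'s groups argument is the only detail taken from the thread, so
check ?scatter3d in your Rcmdr version for the exact argument list):

d1 <- data.frame(x = rnorm(20), y = rnorm(20),     z = rnorm(20), set = "A")
d2 <- data.frame(x = rnorm(20), y = rnorm(20) + 2, z = rnorm(20), set = "B")
both <- rbind(d1, d2)

library(Rcmdr)   # scatter3d() is provided by the Rcmdr package
scatter3d(both$x, both$y, both$z, groups = factor(both$set))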

Regards,
 John


John Fox
Department of Sociology
McMaster University
Hamilton, Ontario
Canada L8S 4M4
905-525-9140x23604
http://socserv.mcmaster.ca/jfox 
 

> -Original Message-
> From: ecatchpole [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, October 04, 2005 8:55 PM
> To: John Fox
> Cc: 'Naiara S. Pinto'; r-help@stat.math.ethz.ch
> Subject: Re: [R] Rcmdr and scatter3d
> 
> Niara,
> 
> Alternatively, instead of scatter3d, the analogy to "hold on" 
> in Matlab is to use plot() for the first set of data, then 
> points() for the remainder. See
> 
> ?plot
> ?points
> 
> Ted.
> 
> On 05/10/05 11:18,  John Fox wrote,:
> > Dear Naiara,
> > 
> > Combine the data sets and differentiate among them with a 
> factor. Then 
> > use the groups argument to scatter3d (see ?scatter3d). If 
> you're using 
> > the R Commander to make the plot, the 3D scatterplot dialog 
> box has a 
> > plot by groups button. You can also fit colour-coded 
> regression surfaces by group.
> > 
> > I've appended a new version of the scatter3d function, not 
> yet in the 
> > Rcmdr package, which will also plot data ellipsoids (for the whole 
> > data set or by groups).
> > 
> > I hope this helps,
> >  John
> > 
> > --- snip --
> 
> > 
> > John Fox
> > Department of Sociology
> > McMaster University
> > Hamilton, Ontario
> > Canada L8S 4M4
> > 905-525-9140x23604
> > http://socserv.mcmaster.ca/jfox
> > 
> > 
> >>-Original Message-
> >>From: [EMAIL PROTECTED] 
> >>[mailto:[EMAIL PROTECTED] On Behalf Of 
> Naiara S. Pinto
> >>Sent: Tuesday, October 04, 2005 6:13 PM
> >>To: r-help@stat.math.ethz.ch
> >>Subject: [R] Rcmdr and scatter3d
> >>
> >>Hi folks,
> >>
> >>I'd like to use scatter3d (which is in R commander) to plot 
> more than 
> >>one dataset in the same graph, each dataset with a different color. 
> >>The kind of stuff you would do with "holdon"
> >>in Matlab.
> >>
> >>I read a recent message that was posted to this list with a similar 
> >>problem, but I couldn't understand the reply. Could someone give me 
> >>one example? How do you plot subgroups using scatter3d?
> >>
> >>Thanks a lot!
> >>
> >>Naiara.
> >>
> >>
> >>
> >>Naiara S. Pinto
> >>Ecology, Evolution and Behavior
> >>1 University Station A6700
> >>Austin, TX, 78712
> >>
> >>__
> >>R-help@stat.math.ethz.ch mailing list
> >>https://stat.ethz.ch/mailman/listinfo/r-help
> >>PLEASE do read the posting guide! 
> >>http://www.R-project.org/posting-guide.html
> > 
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide! 
> > http://www.R-project.org/posting-guide.html
> 
> 
> --
> Dr E.A. Catchpole
> Visiting Fellow
> Univ of New South Wales at ADFA, Canberra, Australia and 
> University of Kent, Canterbury, England
> - www.ma.adfa.edu.au/~eac
> - fax: +61 2 6268 8786
> - ph:  +61 2 6268 8895

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] multiple line plots

2005-10-05 Thread vincent
sosman wrote:

lines() has a color argument

from the online help :
?lines
lines(x, y = NULL, type = "l", col = par("col"),...
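Applied to the loop from the original post, that would look something like
this (assuming the tpos list built there; the y-range is widened so every
series fits):

cols <- rainbow(length(tpos))
plot(tpos[["M1_01"]]$t, type = "n",
     ylim = range(sapply(tpos, function(d) range(d$t))))
for (i in seq(along = tpos))
    lines(tpos[[i]]$t, col = cols[i])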

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] testing non-linear component in mgcv:gam

2005-10-05 Thread Denis Chabot
Hi,

I need further help with my GAMs. Most models I test are very  
obviously non-linear. Yet, to be on the safe side, I report the  
significance of the smooth (default output of mgcv's summary.gam) and  
confirm it deviates significantly from linearity.

I do the latter by fitting a second model where the same predictor is  
entered without the s(), and then use anova.gam to compare the two. I  
thought this was the equivalent of the default output of anova.gam  
using package gam instead of mgcv.

I wonder if this procedure is correct, because one of my models
appears to be linear. In fact mgcv estimates the df to be exactly 1.0, so
I could have stopped there. However, I inadvertently repeated the
procedure outlined above. I would have thought that in this case the
anova.gam comparing the smooth and the linear fit would surely have
been non-significant. To my surprise, P was 6.18e-09!

Am I doing something wrong when I attempt to confirm that the
non-parametric part of a smoother is significant? Here is my example case,
where the relationship does appear to be linear:

library(mgcv)
> This is mgcv 1.3-7
Temp <- c(-1.38, -1.12, -0.88, -0.62, -0.38, -0.12, 0.12, 0.38, 0.62,  
0.88, 1.12,
1.38, 1.62, 1.88, 2.12, 2.38, 2.62, 2.88, 3.12, 3.38,  
3.62, 3.88,
4.12, 4.38, 4.62, 4.88, 5.12, 5.38, 5.62, 5.88, 6.12,  
6.38, 6.62, 6.88,
7.12, 8.38, 13.62)
N.sets <- c(2, 6, 3, 9, 26, 15, 34, 21, 30, 18, 28, 27, 27, 29, 31,  
22, 26, 24, 23,
 15, 25, 24, 27, 19, 26, 24, 22, 13, 10, 2, 5, 3, 1, 1,  
1, 1, 1)
wm.sed <- c(0.0, 0.016129032, 0.0, 0.062046512,  
0.396459596, 0.189082949,
 0.054757925, 0.142810440, 0.168005168, 0.180804428,  
0.111439628, 0.128799505,
 0.193707937, 0.105921610, 0.103497845, 0.028591837,  
0.217894389, 0.020535469,
 0.080389068, 0.105234450, 0.070213450, 0.050771363,  
0.042074434, 0.102348837,
 0.049748344, 0.019100478, 0.005203125, 0.101711864,  
0.0, 0.0,
 0.014808824, 0.0, 0.22200, 0.16700,  
0.0, 0.0,
 0.0)

sed.gam <- gam(wm.sed~s(Temp),weight=N.sets)
summary.gam(sed.gam)
> Family: gaussian
> Link function: identity
>
> Formula:
> wm.sed ~ s(Temp)
>
> Parametric coefficients:
> Estimate Std. Error t value Pr(>|t|)
> (Intercept)  0.08403    0.01347   6.241 3.73e-07 ***
> ---
> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>
> Approximate significance of smooth terms:
>             edf Est.rank     F  p-value
> s(Temp)       1        1 13.95 0.000666 ***
> ---
> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>
> R-sq.(adj) =  0.554   Deviance explained = 28.5%
> GCV score = 0.09904   Scale est. = 0.093686  n = 37

# testing non-linear contribution
sed.lin <- gam(wm.sed~Temp,weight=N.sets)
summary.gam(sed.lin)
> Family: gaussian
> Link function: identity
>
> Formula:
> wm.sed ~ Temp
>
> Parametric coefficients:
>  Estimate Std. Error t value Pr(>|t|)
> (Intercept)  0.162879   0.019847   8.207 1.14e-09 ***
> Temp        -0.023792   0.006369  -3.736 0.000666 ***
> ---
> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>
>
> R-sq.(adj) =  0.554   Deviance explained = 28.5%
> GCV score = 0.09904   Scale est. = 0.093686  n = 37
anova.gam(sed.lin, sed.gam, test="F")
> Analysis of Deviance Table
>
> Model 1: wm.sed ~ Temp
> Model 2: wm.sed ~ s(Temp)
>Resid. Df Resid. Dev Df  Deviance  F   Pr(>F)
> 1 3.5000e+01  3.279
> 2 3.5000e+01  3.279 5.5554e-10 2.353e-11 0.4521 6.18e-09 ***


Thanks in advance,


Denis Chabot

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] (no subject)

2005-10-05 Thread allan_sta_staff_sci_main_uct
Hi all,

Why does the following not work?

This was someone else's code and I couldn't explain why it doesn't work.

m=matrix(c(0,0),2,1)
v=matrix(c(1,0,0,1),2,2)

Y=function(X1,X2,mu=m,V=v)
{
X=matrix(c(X1,X2),2,1)
a=(1/((2*pi)*sqrt(det(V))))*exp((-0.5)*(t(X-mu)%*%solve(V)%*%(X-mu)))
a[1]
}

x1=seq(-1,1)
x2=x1

Z=outer(x1,x2,FUN="Y",mu=m,V=v)

persp(x1,x2,Z)
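A likely culprit: outer() calls FUN once with whole vectors for X1 and X2,
while Y() builds a single 2x1 matrix from them and returns one number, which
outer() then cannot reshape into a grid. A sketch of a workaround, using the
same m, v and Y defined above (plus a finer grid than seq(-1,1), which has
only three points):

# wrap Y so it is applied element-wise to the X1/X2 vectors
Yv <- function(X1, X2, ...) mapply(Y, X1, X2, MoreArgs = list(...))

x1 <- seq(-1, 1, length = 30)
x2 <- x1
Z  <- outer(x1, x2, FUN = Yv, mu = m, V = v)
persp(x1, x2, Z)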




my code:

BINORMAL<-function(varx=1,vary=1,covxy=0,meanx=0,meany=0)
{
# the following function plots the density of a bivariate normal distribution

covXY<-matrix(c(varx,covxy,covxy,vary),2,2)
A<-solve(covXY)


#up<-max(meanx+4*varx^.5,meanx-4*varx^.5,meany+4*vary^.5,meany-4*vary^.5)
#x <- seq(-up,up,length=50)
#y <- x

x <- seq(meanx-3*varx^.5,meanx+3*varx^.5,length=50)
y <- seq(meany-3*vary^.5,meany+3*vary^.5,length=50)

f <- function(x,y,...)
{
detA<-det(A)

quadForm<-A[1,1]*(x-meanx)^2+2*A[1,2]*(x-meanx)*(y-meany)+A[2,2]*(y-meany)^2
K<-sqrt(detA)/(2*pi)
exp(-0.5*quadForm)*K
}

z <- outer(x, y, f)

par(mfrow=c(1,2))
persp(x, y, z, theta = 30, phi = 30, col = "white",
      main = "BI-VARIATE NORMAL DISTRIBUTION")
contour(x,y,z,main=paste("xy plot, corr(X,Y)= ",(covxy/(varx*vary)^.5)))

print("NOTE -sqrt(varx*vary)<=covxy<=sqrt(varx*vary)")
#print(A)
}
BINORMAL(varx=1,vary=1,covxy=0,meanx=0,meany=0)

thanx

/allan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] transparent surface in rgl

2005-10-05 Thread Prof. Paul R. Fisher
Hi all
I am a complete newbie to this list (just subscribed) and a newcomer to 
R (an S user from olden times). I have been using scatter3d to create a 
3d scatter plot with surface. The graphic is created within the rgl 
package and I have used rgl.postscript to export it so I can generate a 
publication quality image. My problem is that the plotted surface is no 
longer transparent in the postscript output ie. the rgl.spheres that are 
behind the surface disappear in the postscript image. Can't seem to find 
any info on this anywhere. Am I doing something wrong? Is there an easy fix?

Anyway, thanks.
Hope I've not broken some netiquette rule sending this.

Cheers,
Paul Fisher.
-- 
Prof. Paul R. Fisher,
Chair in Microbiology,
La Trobe University,
VIC 3086,
AUSTRALIA.

Tel. + 61 3 9479 2229
Fax. + 61 3 9479 1222
Email. [EMAIL PROTECTED]
Web. http://www.latrobe.edu.au/mcbg/my.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] eliminate t() and %*% using crossprod() and solve(A,b)

2005-10-05 Thread Dimitris Rizopoulos
an alternative is to use X2 below, which seems to be a little bit 
faster:

N <- 1000
n <- 4
Ainv <- matrix(rnorm(N * N), N, N)
H <- matrix(rnorm(N * n), N, n)
d <- rnorm(N)
quad.form <- function (M, x){
 jj <- crossprod(M, x)
 return(drop(crossprod(jj, jj)))
}

###
###

invisible({gc(); gc(); gc()})
system.time(for(i in 1:200){
X0 <- solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*% d
}, gcFirst = TRUE)


invisible({gc(); gc(); gc()})
system.time(for(i in 1:200){
X1 <- solve(quad.form(Ainv, H), crossprod(crossprod(Ainv, H), d))
}, gcFirst = TRUE)


invisible({gc(); gc(); gc()})
system.time(for(i in 1:200){
tH.Ainv <- crossprod(H, Ainv)
X2 <-  solve(tH.Ainv %*% H) %*% colSums(t(tH.Ainv) * d)
}, gcFirst = TRUE)


all.equal(X0, X1)
all.equal(X0, X2)


I hope this helps.

Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://www.med.kuleuven.be/biostat/
 http://www.student.kuleuven.be/~m0390867/dimitris.htm


- Original Message - 
From: "Robin Hankin" <[EMAIL PROTECTED]>
To: "Prof Brian Ripley" <[EMAIL PROTECTED]>
Cc: "RHelp" ; "Robin Hankin" 
<[EMAIL PROTECTED]>
Sent: Wednesday, October 05, 2005 3:08 PM
Subject: Re: [R] eliminate t() and %*% using crossprod() and 
solve(A,b)


>
> On 5 Oct 2005, at 12:15, Dimitris Rizopoulos wrote:
>
>> Hi Robin,
>>
>> I've been playing with your question, but I think these two lines
>> are not equivalent:
>>
>> N <- 1000
>> n <- 4
>> Ainv <- matrix(rnorm(N * N), N, N)
>> H <- matrix(rnorm(N * n), N, n)
>> d <- rnorm(N)
>> quad.form <- function (M, x){
>> jj <- crossprod(M, x)
>> return(drop(crossprod(jj, jj)))
>> }
>>
>>
>> X0 <- solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*% d
>> X1 <- solve(quad.form(Ainv, H), crossprod(crossprod(Ainv, H), d))
>> all.equal(X0, X1) # not TRUE
>>
>>
>> which is the one you want to compute?
>>
>
>
> These are not exactly equivalent, but:
>
>
> Ainv <- matrix(rnorm(1e6),1e3,1e3)
> H <- matrix(rnorm(4000),ncol=4)
> d <- rnorm(1000)
>
> X0 <- solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*% d
> X1 <- solve(quad.form(Ainv, H), crossprod(crossprod(Ainv, H), d))
> X0 - X1
>   [,1]
> [1,]  4.884981e-15
> [2,]  3.663736e-15
> [3,] -5.107026e-15
> [4,]  5.717649e-15
>
> which is pretty close.
>
>
>
> On 5 Oct 2005, at 12:50, Prof Brian Ripley wrote:
>>
>>> QUESTION:
>>>
>>> how  to calculate
>>>
>>> H %*% X
>>>
>>> in the recommended crossprod way?  (I don't want to take a 
>>> transpose
>>> because t() is expensive, and I know that %*% is slow).
>>>
>>
>> Have you some data to support your claims?  Here I find (for random
>> matrices of the dimensions given on a machine with a fast BLAS)
>>
>>
>
> I couldn't supply any performance data because I couldn't figure out 
> the
> correct R commands to calculate H %*% X  without using %*% or t()!
>
> I was just wondering if there were a way to calculate
>
> H %*% solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*% d
>
> without using t() or %*%.  And there doesn't seem to be (my original
> question didn't make it clear that I don't have X precalculated).
>
> My take-home lesson from Brian Ripley is that H %*% X is fast
> --but this is only useful to me if one has X precalculated, and in
> general I don't.   But this discussion suggests to me that it might 
> be
> a good idea to change my routines and calculate X anyway.
>
> thanks again Prof Ripley and Dimitris Rizopoulos
>
>
> very best wishes
>
> Robin
>
>
>
>>> system.time(for(i in 1:100) t(H) %*% Ainv)
>>>
>> [1] 2.19 0.01 2.21 0.00 0.00
>>
>>> system.time(for(i in 1:100) crossprod(H, Ainv))
>>>
>> [1] 1.33 0.00 1.33 0.00 0.00
>>
>> so each is quite fast and the difference is not great.  However,
>> that is
>> not comparing %*% with crossprod, but t & %*% with crossprod.
>>
>> I get
>>
>>
>>> system.time(for(i in 1:1000) H %*% X)
>>>
>> [1] 0.05 0.01 0.06 0.00 0.00
>>
>> which is hardly 'slow' (60 us for %*%), especially compared to
>> forming X
>> in
>>
>>
>>> system.time({X  = solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*%
>>> d})
>>>
>> [1] 0.04 0.00 0.04 0.00 0.00
>>
>> I would probably have written
>>
>>
>>> system.time({X <- solve(crossprod(H, Ainv %*% H), crossprod
>>> (crossprod(Ainv, H), d))})
>>>
>> 1] 0.03 0.00 0.03 0.00 0.00
>>
>> which is faster and does give the same answer.
>>
>> [BTW, I used 2.2.0-beta which defaults to gcFirst=TRUE.]
>>
>> -- 
>
>
>
>
>
> --
> Robin Hankin
> Uncertainty Analyst
> National Oceanography Centre, Southampton
> European Way, Southampton SO14 3ZH, UK
>  tel  023-8059-7743
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
> 


Disclaimer: http://www.kuleuve

Re: [R] testing non-linear component in mgcv:gam

2005-10-05 Thread Liaw, Andy
I think you probably should state more clearly the goal of your
analysis.  In such situations, estimation and hypothesis testing are
quite different.  The procedure that gives you the `best' estimate is
not necessarily the `best' for testing linearity of components.  If your
goal is estimation/prediction, why test linearity of components?  If the
goal is testing linearity, then you probably need to find the procedure
that gives you a good test, rather than relying on what gam() gives you.

Just my $0.02...

Andy

> From: Denis Chabot
> 
> Hi,
> 
> I need further help with my GAMs. Most models I test are very  
> obviously non-linear. Yet, to be on the safe side, I report the  
> significance of the smooth (default output of mgcv's 
> summary.gam) and  
> confirm it deviates significantly from linearity.
> 
> I do the latter by fitting a second model where the same 
> predictor is  
> entered without the s(), and then use anova.gam to compare 
> the two. I  
> thought this was the equivalent of the default output of anova.gam  
> using package gam instead of mgcv.
> 
> I wonder if this procedure is correct because one of my models  
> appears to be linear. In fact mgcv estimates df to be exactly 1.0 so  
> I could have stopped there. However I inadvertently repeated the  
> procedure outlined above. I would have thought in this case the  
> anova.gam comparing the smooth and the linear fit would for 
> sure have  
> been not significant. To my surprise, P was 6.18e-09!
> 
> Am I doing something wrong when I attempt to confirm the non- 
> parametric part a smoother is significant? Here is my example case  
> where the relationship does appear to be linear:
> 
> library(mgcv)
> > This is mgcv 1.3-7
> Temp <- c(-1.38, -1.12, -0.88, -0.62, -0.38, -0.12, 0.12, 
> 0.38, 0.62,  
> 0.88, 1.12,
> 1.38, 1.62, 1.88, 2.12, 2.38, 2.62, 2.88, 3.12, 3.38,  
> 3.62, 3.88,
> 4.12, 4.38, 4.62, 4.88, 5.12, 5.38, 5.62, 5.88, 6.12,  
> 6.38, 6.62, 6.88,
> 7.12, 8.38, 13.62)
> N.sets <- c(2, 6, 3, 9, 26, 15, 34, 21, 30, 18, 28, 27, 27, 29, 31,  
> 22, 26, 24, 23,
>  15, 25, 24, 27, 19, 26, 24, 22, 13, 10, 2, 5, 3, 1, 1,  
> 1, 1, 1)
> wm.sed <- c(0.0, 0.016129032, 0.0, 0.062046512,  
> 0.396459596, 0.189082949,
>  0.054757925, 0.142810440, 0.168005168, 0.180804428,  
> 0.111439628, 0.128799505,
>  0.193707937, 0.105921610, 0.103497845, 0.028591837,  
> 0.217894389, 0.020535469,
>  0.080389068, 0.105234450, 0.070213450, 0.050771363,  
> 0.042074434, 0.102348837,
>  0.049748344, 0.019100478, 0.005203125, 0.101711864,  
> 0.0, 0.0,
>  0.014808824, 0.0, 0.22200, 0.16700,  
> 0.0, 0.0,
>  0.0)
> 
> sed.gam <- gam(wm.sed~s(Temp),weight=N.sets)
> summary.gam(sed.gam)
> > Family: gaussian
> > Link function: identity
> >
> > Formula:
> > wm.sed ~ s(Temp)
> >
> > Parametric coefficients:
> > Estimate Std. Error t value Pr(>|t|)
> > (Intercept)  0.084030.01347   6.241 3.73e-07 ***
> > ---
> > Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> >
> > Approximate significance of smooth terms:
> > edf Est.rank F  p-value
> > s(Temp)   11 13.95 0.000666 ***
> > ---
> > Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> >
> > R-sq.(adj) =  0.554   Deviance explained = 28.5%
> > GCV score = 0.09904   Scale est. = 0.093686  n = 37
> 
> # testing non-linear contribution
> sed.lin <- gam(wm.sed~Temp,weight=N.sets)
> summary.gam(sed.lin)
> > Family: gaussian
> > Link function: identity
> >
> > Formula:
> > wm.sed ~ Temp
> >
> > Parametric coefficients:
> >  Estimate Std. Error t value Pr(>|t|)
> > (Intercept)  0.162879   0.019847   8.207 1.14e-09 ***
> > Temp-0.023792   0.006369  -3.736 0.000666 ***
> > ---
> > Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> >
> >
> > R-sq.(adj) =  0.554   Deviance explained = 28.5%
> > GCV score = 0.09904   Scale est. = 0.093686  n = 37
> anova.gam(sed.lin, sed.gam, test="F")
> > Analysis of Deviance Table
> >
> > Model 1: wm.sed ~ Temp
> > Model 2: wm.sed ~ s(Temp)
> >Resid. Df Resid. Dev Df  Deviance  F   Pr(>F)
> > 1 3.5000e+01  3.279
> > 2 3.5000e+01  3.279 5.5554e-10 2.353e-11 0.4521 6.18e-09 ***
> 
> 
> Thanks in advance,
> 
> 
> Denis Chabot
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
> 
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] testing non-linear component in mgcv:gam

2005-10-05 Thread John Fox
Dear Denis,

Take a closer look at the anova table: The models provide identical fits to
the data. The differences in degrees of freedom and deviance between the two
models are essentially zero, 5.5554e-10 and 2.353e-11 respectively.

I hope this helps,
 John


John Fox
Department of Sociology
McMaster University
Hamilton, Ontario
Canada L8S 4M4
905-525-9140x23604
http://socserv.mcmaster.ca/jfox 
 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Denis Chabot
> Sent: Wednesday, October 05, 2005 8:22 AM
> To: r-help@stat.math.ethz.ch
> Subject: [R] testing non-linear component in mgcv:gam
> 
> Hi,
> 
> I need further help with my GAMs. Most models I test are very 
> obviously non-linear. Yet, to be on the safe side, I report 
> the significance of the smooth (default output of mgcv's 
> summary.gam) and confirm it deviates significantly from linearity.
> 
> I do the latter by fitting a second model where the same 
> predictor is entered without the s(), and then use anova.gam 
> to compare the two. I thought this was the equivalent of the 
> default output of anova.gam using package gam instead of mgcv.
> 
> I wonder if this procedure is correct because one of my 
> models appears to be linear. In fact mgcv estimates df to be 
> exactly 1.0 so I could have stopped there. However I 
> inadvertently repeated the procedure outlined above. I would 
> have thought in this case the anova.gam comparing the smooth 
> and the linear fit would for sure have been not significant. 
> To my surprise, P was 6.18e-09!
> 
> Am I doing something wrong when I attempt to confirm the non- 
> parametric part a smoother is significant? Here is my example 
> case where the relationship does appear to be linear:
> 
> library(mgcv)
> > This is mgcv 1.3-7
> Temp <- c(-1.38, -1.12, -0.88, -0.62, -0.38, -0.12, 0.12, 
> 0.38, 0.62, 0.88, 1.12,
> 1.38, 1.62, 1.88, 2.12, 2.38, 2.62, 2.88, 3.12, 
> 3.38, 3.62, 3.88,
> 4.12, 4.38, 4.62, 4.88, 5.12, 5.38, 5.62, 5.88, 
> 6.12, 6.38, 6.62, 6.88,
> 7.12, 8.38, 13.62)
> N.sets <- c(2, 6, 3, 9, 26, 15, 34, 21, 30, 18, 28, 27, 27, 
> 29, 31, 22, 26, 24, 23,
>  15, 25, 24, 27, 19, 26, 24, 22, 13, 10, 2, 5, 3, 
> 1, 1, 1, 1, 1) wm.sed <- c(0.0, 0.016129032, 
> 0.0, 0.062046512, 0.396459596, 0.189082949,
>  0.054757925, 0.142810440, 0.168005168, 
> 0.180804428, 0.111439628, 0.128799505,
>  0.193707937, 0.105921610, 0.103497845, 
> 0.028591837, 0.217894389, 0.020535469,
>  0.080389068, 0.105234450, 0.070213450, 
> 0.050771363, 0.042074434, 0.102348837,
>  0.049748344, 0.019100478, 0.005203125, 
> 0.101711864, 0.0, 0.0,
>  0.014808824, 0.0, 0.22200, 
> 0.16700, 0.0, 0.0,
>  0.0)
> 
> sed.gam <- gam(wm.sed~s(Temp),weight=N.sets)
> summary.gam(sed.gam)
> > Family: gaussian
> > Link function: identity
> >
> > Formula:
> > wm.sed ~ s(Temp)
> >
> > Parametric coefficients:
> > Estimate Std. Error t value Pr(>|t|)
> > (Intercept)  0.084030.01347   6.241 3.73e-07 ***
> > ---
> > Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> >
> > Approximate significance of smooth terms:
> > edf Est.rank F  p-value
> > s(Temp)   11 13.95 0.000666 ***
> > ---
> > Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> >
> > R-sq.(adj) =  0.554   Deviance explained = 28.5%
> > GCV score = 0.09904   Scale est. = 0.093686  n = 37
> 
> # testing non-linear contribution
> sed.lin <- gam(wm.sed~Temp,weight=N.sets)
> summary.gam(sed.lin)
> > Family: gaussian
> > Link function: identity
> >
> > Formula:
> > wm.sed ~ Temp
> >
> > Parametric coefficients:
> >  Estimate Std. Error t value Pr(>|t|)
> > (Intercept)  0.162879   0.019847   8.207 1.14e-09 ***
> > Temp-0.023792   0.006369  -3.736 0.000666 ***
> > ---
> > Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> >
> >
> > R-sq.(adj) =  0.554   Deviance explained = 28.5%
> > GCV score = 0.09904   Scale est. = 0.093686  n = 37
> anova.gam(sed.lin, sed.gam, test="F")
> > Analysis of Deviance Table
> >
> > Model 1: wm.sed ~ Temp
> > Model 2: wm.sed ~ s(Temp)
> >Resid. Df Resid. Dev Df  Deviance  F   Pr(>F)
> > 1 3.5000e+01  3.279
> > 2 3.5000e+01  3.279 5.5554e-10 2.353e-11 0.4521 6.18e-09 ***
> 
> 
> Thanks in advance,
> 
> 
> Denis Chabot
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] transparent surface in rgl

2005-10-05 Thread John Fox
Dear Paul,

I don't have experience with rgl.postscript(), which is relatively new, but
find that the png graphs produced by rgl.snapshot() are of reasonably good
quality and preserve transparency. Perhaps the developers of the rgl package
can shed more light on the matter.

I hope this helps,
 John


John Fox
Department of Sociology
McMaster University
Hamilton, Ontario
Canada L8S 4M4
905-525-9140x23604
http://socserv.mcmaster.ca/jfox 
 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Prof. 
> Paul R. Fisher
> Sent: Wednesday, October 05, 2005 8:32 AM
> To: r-help@stat.math.ethz.ch
> Subject: [R] transparent surface in rgl
> 
> Hi all
> I am a complete newbie to this list (just subscribed) and a 
> newcomer to R (an S user from olden times). I have been using 
> scatter3d to create a 3d scatter plot with surface. The 
> graphic is created within the rgl package and I have used 
> rgl.postscript to export it so I can generate a publication 
> quality image. My problem is that the plotted surface is no 
> longer transparent in the postscript output ie. the 
> rgl.spheres that are behind the surface disappear in the 
> postscript image. Can't seem to find any info on this 
> anywhere. Am I doing something wrong? Is there an easy fix?
> 
> Anyway, thanks.
> Hope I've not broken some netiquette rule sending this.
> 
> Cheers,
> Paul Fisher.
> --
> Prof. Paul R. Fisher,
> Chair in Microbiology,
> La Trobe University,
> VIC 3086,
> AUSTRALIA.
> 
> Tel. + 61 3 9479 2229
> Fax. + 61 3 9479 1222
> Email. [EMAIL PROTECTED]
> Web. http://www.latrobe.edu.au/mcbg/my.html
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] multiple line plots

2005-10-05 Thread Gabor Grothendieck
Assuming, as in your example, that you want to plot your
series against 1, 2, 3, ...  first form a ts series, my.series,
and then plot it using the col= argument:

my.series <- do.call("cbind", lapply(tpos, ts))
m <- ncol(my.series)
ts.plot(my.series, col = 1:m)

Another possibility is to use col = rainbow(m) in the last line.


On 10/5/05, sosman <[EMAIL PROTECTED]> wrote:
> I have some data in a CSV file:
>
> time,pos,t,tl
> 15:23:44:350,M1_01,4511,1127
> 15:23:44:350,M1_02,4514,1128
> 15:23:44:350,M1_03,4503,1125
> ...
> 15:23:44:491,M2_01,4500,1125
> 15:23:44:491,M2_02,4496,1124
> 15:23:44:491,M2_03,4516,1129
> ...
> 15:23:44:710,M3_01,4504,1126
> 15:23:44:710,M3_02,4516,1129
> 15:23:44:710,M3_03,4498,1124
> ...
>
> Each pos (eg M1_01) is an independent time series.  I would like to plot
> each time series as lines on a single plot and I wondered if there was
> something more straight forward than I was attempting.
>
> I got as far as:
>
> fname = 't100.csv'
> t = read.csv(fname)
> tpos = split(t, t$pos)
> plot(tpos[["M1_01"]]$t, type='l')
> for (p in names(tpos)) {
> lines(tpos[[p]]$t)
> }
>
> which seems to work but then I got stuck on how to make each line a
> different colour and figured that there might a be a one liner R command
> to do what I want.
>
> Any tips would be appreciated.
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] transparent surface in rgl

2005-10-05 Thread Duncan Murdoch
On 10/5/2005 9:31 AM, Prof. Paul R. Fisher wrote:
> Hi all
> I am a complete newbie to this list (just subscribed) and a newcomer to 
> R (an S user from olden times). I have been using scatter3d to create a 
> 3d scatter plot with surface. The graphic is created within the rgl 
> package and I have used rgl.postscript to export it so I can generate a 
> publication quality image. My problem is that the plotted surface is no 
> longer transparent in the postscript output ie. the rgl.spheres that are 
> behind the surface disappear in the postscript image. Can't seem to find 
> any info on this anywhere. Am I doing something wrong? Is there an easy fix?

I think Postscript doesn't support transparency (or at least the version 
of Postscript that the rgl.postscript function targets doesn't support 
it).  You may have to export a bitmapped format using the rgl.snapshot() 
function.  If your original window is very large this may give you good 
enough quality.
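
Something along these lines should do it (an untested sketch, assuming the rgl
window created by scatter3d is still the current rgl device):

rgl.snapshot("surface.png", fmt = "png")   # writes a PNG of the current rgl scene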

Duncan Murdoch

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] transparent surface in rgl

2005-10-05 Thread vincent
Prof. Paul R. Fisher wrote:

> ... My problem is that the plotted surface is no 
> longer transparent in the postscript output ie. the rgl.spheres that are 
> behind the surface disappear in the postscript image. Can't seem to find 
> any info on this anywhere. Am I doing something wrong? Is there an easy fix?

Hi,
for many graphical functions, e.g. bmp(),
there is a bg argument (background)
which you can set to "transparent".
Or you can directly use:
par(bg="transparent")
Have a look and see whether this works.
hih
Vincent

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] testing non-linear component in mgcv:gam

2005-10-05 Thread Denis Chabot
Fair enough, Andy. I thought I was getting both predictive ability  
and confirmation that the phenomenon I was studying was not linear. I  
have two projects, in one prediction is the goal and I don't really  
need to test linearity. In the second I needed to confirm a cycle was  
taking place and I thought my procedure did this. There was no  
theoretical reason to anticipate the shape of the cycle, so GAM was  
an appealing methodology.

Denis

On 05-10-05, at 09:38, Liaw, Andy wrote:

> I think you probably should state more clearly the goal of your
> analysis.  In such situation, estimation and hypothesis testing are
> quite different.  The procedure that gives you the `best' estimate is
> not necessarily the `best' for testing linearity of components.  If  
> your
> goal is estimation/prediction, why test linearity of components?   
> If the
> goal is testing linearity, then you probably need to find the  
> procedure
> that gives you a good test, rather than relying on what gam() gives  
> you.
>
> Just my $0.02...
>
> Andy
>
>
>> From: Denis Chabot
>>
>> Hi,
>>
>> I need further help with my GAMs. Most models I test are very
>> obviously non-linear. Yet, to be on the safe side, I report the
>> significance of the smooth (default output of mgcv's
>> summary.gam) and
>> confirm it deviates significantly from linearity.
>>
>> I do the latter by fitting a second model where the same
>> predictor is
>> entered without the s(), and then use anova.gam to compare
>> the two. I
>> thought this was the equivalent of the default output of anova.gam
>> using package gam instead of mgcv.
>>
>> I wonder if this procedure is correct because one of my models
>> appears to be linear. In fact mgcv estimates df to be exactly 1.0 so
>> I could have stopped there. However I inadvertently repeated the
>> procedure outlined above. I would have thought in this case the
>> anova.gam comparing the smooth and the linear fit would for
>> sure have
>> been not significant. To my surprise, P was 6.18e-09!
>>
>> Am I doing something wrong when I attempt to confirm the non-
>> parametric part a smoother is significant? Here is my example case
>> where the relationship does appear to be linear:
>>
>> library(mgcv)
>>
>>> This is mgcv 1.3-7
>>>
>> Temp <- c(-1.38, -1.12, -0.88, -0.62, -0.38, -0.12, 0.12,
>> 0.38, 0.62,
>> 0.88, 1.12,
>> 1.38, 1.62, 1.88, 2.12, 2.38, 2.62, 2.88, 3.12, 3.38,
>> 3.62, 3.88,
>> 4.12, 4.38, 4.62, 4.88, 5.12, 5.38, 5.62, 5.88, 6.12,
>> 6.38, 6.62, 6.88,
>> 7.12, 8.38, 13.62)
>> N.sets <- c(2, 6, 3, 9, 26, 15, 34, 21, 30, 18, 28, 27, 27, 29, 31,
>> 22, 26, 24, 23,
>>  15, 25, 24, 27, 19, 26, 24, 22, 13, 10, 2, 5, 3, 1, 1,
>> 1, 1, 1)
>> wm.sed <- c(0.0, 0.016129032, 0.0, 0.062046512,
>> 0.396459596, 0.189082949,
>>  0.054757925, 0.142810440, 0.168005168, 0.180804428,
>> 0.111439628, 0.128799505,
>>  0.193707937, 0.105921610, 0.103497845, 0.028591837,
>> 0.217894389, 0.020535469,
>>  0.080389068, 0.105234450, 0.070213450, 0.050771363,
>> 0.042074434, 0.102348837,
>>  0.049748344, 0.019100478, 0.005203125, 0.101711864,
>> 0.0, 0.0,
>>  0.014808824, 0.0, 0.22200, 0.16700,
>> 0.0, 0.0,
>>  0.0)
>>
>> sed.gam <- gam(wm.sed~s(Temp),weight=N.sets)
>> summary.gam(sed.gam)
>>
>>> Family: gaussian
>>> Link function: identity
>>>
>>> Formula:
>>> wm.sed ~ s(Temp)
>>>
>>> Parametric coefficients:
>>>             Estimate Std. Error t value Pr(>|t|)
>>> (Intercept)  0.08403    0.01347   6.241 3.73e-07 ***
>>> ---
>>> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>>>
>>> Approximate significance of smooth terms:
>>>              edf Est.rank     F  p-value
>>> s(Temp)        1        1 13.95 0.000666 ***
>>> ---
>>> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>>>
>>> R-sq.(adj) =  0.554   Deviance explained = 28.5%
>>> GCV score = 0.09904   Scale est. = 0.093686  n = 37
>>>
>>
>> # testing non-linear contribution
>> sed.lin <- gam(wm.sed~Temp,weight=N.sets)
>> summary.gam(sed.lin)
>>
>>> Family: gaussian
>>> Link function: identity
>>>
>>> Formula:
>>> wm.sed ~ Temp
>>>
>>> Parametric coefficients:
>>>  Estimate Std. Error t value Pr(>|t|)
>>> (Intercept)  0.162879   0.019847   8.207 1.14e-09 ***
>>> Temp-0.023792   0.006369  -3.736 0.000666 ***
>>> ---
>>> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>>>
>>>
>>> R-sq.(adj) =  0.554   Deviance explained = 28.5%
>>> GCV score = 0.09904   Scale est. = 0.093686  n = 37
>>>
>> anova.gam(sed.lin, sed.gam, test="F")
>>
>>> Analysis of Deviance Table
>>>
>>> Model 1: wm.sed ~ Temp
>>> Model 2: wm.sed ~ s(Temp)
>>>Resid. Df Resid. Dev Df  Deviance  F   Pr(>F)
>>> 1 3.5000e+01  3.279
>>> 2 3.5000e+01  3.279 5.5554e-10 2.353e-11 0.4521 6.18e-09 ***
>>>
>>
>>
>> Thanks in advance,
>>
>>
>> Denis Chabot
>

Re: [R] testing non-linear component in mgcv:gam

2005-10-05 Thread Denis Chabot
Hi John,

On 05-10-05, at 09:45, John Fox wrote:

> Dear Denis,
>
> Take a closer look at the anova table: The models provide identical  
> fits to
> the data. The differences in degrees of freedom and deviance  
> between the two
> models are essentially zero, 5.5554e-10 and 2.353e-11 respectively.
>
> I hope this helps,
>  John
This is one of my difficulties. In some examples I found on the web,  
the difference in deviance is compared directly against the chi- 
squared distribution. But my y variable has a very small range  
(between 0 and 0.5, most of the time) so the difference in deviance  
is always very small and if I compared it against the chi-squared  
distribution as I have seen done in examples, the non-linear  
component would always be not significant. Yet it is (with one  
exception), tested with both mgcv:gam and gam:gam. I think the  
examples I have read were wrong in this regard, the "scale" factor  
seen in mgcv output seems to intervene. But exactly how is still  
mysterious to me and I hesitate to judge the size of the deviance  
difference myself.

I agree it is near zero in my example. I guess I need to have more  
experience with these models to better interpret the output...

Denis
>
>
>> -Original Message-
>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] On Behalf Of Denis Chabot
>> Sent: Wednesday, October 05, 2005 8:22 AM
>> To: r-help@stat.math.ethz.ch
>> Subject: [R] testing non-linear component in mgcv:gam
>>
>> Hi,
>>
>> I need further help with my GAMs. Most models I test are very
>> obviously non-linear. Yet, to be on the safe side, I report
>> the significance of the smooth (default output of mgcv's
>> summary.gam) and confirm it deviates significantly from linearity.
>>
>> I do the latter by fitting a second model where the same
>> predictor is entered without the s(), and then use anova.gam
>> to compare the two. I thought this was the equivalent of the
>> default output of anova.gam using package gam instead of mgcv.
>>
>> I wonder if this procedure is correct because one of my
>> models appears to be linear. In fact mgcv estimates df to be
>> exactly 1.0 so I could have stopped there. However I
>> inadvertently repeated the procedure outlined above. I would
>> have thought in this case the anova.gam comparing the smooth
>> and the linear fit would for sure have been not significant.
>> To my surprise, P was 6.18e-09!
>>
>> Am I doing something wrong when I attempt to confirm the non-
>> parametric part a smoother is significant? Here is my example
>> case where the relationship does appear to be linear:
>>
>> library(mgcv)
>>
>>> This is mgcv 1.3-7
>>>
>> Temp <- c(-1.38, -1.12, -0.88, -0.62, -0.38, -0.12, 0.12,
>> 0.38, 0.62, 0.88, 1.12,
>> 1.38, 1.62, 1.88, 2.12, 2.38, 2.62, 2.88, 3.12,
>> 3.38, 3.62, 3.88,
>> 4.12, 4.38, 4.62, 4.88, 5.12, 5.38, 5.62, 5.88,
>> 6.12, 6.38, 6.62, 6.88,
>> 7.12, 8.38, 13.62)
>> N.sets <- c(2, 6, 3, 9, 26, 15, 34, 21, 30, 18, 28, 27, 27,
>> 29, 31, 22, 26, 24, 23,
>>  15, 25, 24, 27, 19, 26, 24, 22, 13, 10, 2, 5, 3,
>> 1, 1, 1, 1, 1) wm.sed <- c(0.0, 0.016129032,
>> 0.0, 0.062046512, 0.396459596, 0.189082949,
>>  0.054757925, 0.142810440, 0.168005168,
>> 0.180804428, 0.111439628, 0.128799505,
>>  0.193707937, 0.105921610, 0.103497845,
>> 0.028591837, 0.217894389, 0.020535469,
>>  0.080389068, 0.105234450, 0.070213450,
>> 0.050771363, 0.042074434, 0.102348837,
>>  0.049748344, 0.019100478, 0.005203125,
>> 0.101711864, 0.0, 0.0,
>>  0.014808824, 0.0, 0.22200,
>> 0.16700, 0.0, 0.0,
>>  0.0)
>>
>> sed.gam <- gam(wm.sed~s(Temp),weight=N.sets)
>> summary.gam(sed.gam)
>>
>>> Family: gaussian
>>> Link function: identity
>>>
>>> Formula:
>>> wm.sed ~ s(Temp)
>>>
>>> Parametric coefficients:
>>>             Estimate Std. Error t value Pr(>|t|)
>>> (Intercept)  0.08403    0.01347   6.241 3.73e-07 ***
>>> ---
>>> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>>>
>>> Approximate significance of smooth terms:
>>>              edf Est.rank     F  p-value
>>> s(Temp)        1        1 13.95 0.000666 ***
>>> ---
>>> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>>>
>>> R-sq.(adj) =  0.554   Deviance explained = 28.5%
>>> GCV score = 0.09904   Scale est. = 0.093686  n = 37
>>>
>>
>> # testing non-linear contribution
>> sed.lin <- gam(wm.sed~Temp,weight=N.sets)
>> summary.gam(sed.lin)
>>
>>> Family: gaussian
>>> Link function: identity
>>>
>>> Formula:
>>> wm.sed ~ Temp
>>>
>>> Parametric coefficients:
>>>  Estimate Std. Error t value Pr(>|t|)
>>> (Intercept)  0.162879   0.019847   8.207 1.14e-09 ***
>>> Temp-0.023792   0.006369  -3.736 0.000666 ***
>>> ---
>>> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>>>
>>>
>>> R-sq.(adj) =  0.554   Deviance explained = 28.5%
>>> GCV score = 0.09904

Re: [R] (no subject)

2005-10-05 Thread Thomas Lumley
On Wed, 5 Oct 2005, [EMAIL PROTECTED] wrote:

> hi all
>
> why does the following not work???
>
> this was someone else's code and I couldn't explain why it doesn't work.

I think this is a case of FAQ 7.17: Why does outer() behave strangely with 
my function?
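
One workaround along the lines of that FAQ entry is to wrap Y in a vectorised
function before handing it to outer() -- a sketch, using the m, v, Y, x1 and x2
defined in the quoted code below:

Yv <- function(X1, X2, ...) mapply(Y, X1, X2, MoreArgs = list(...))
Z <- outer(x1, x2, FUN = Yv, mu = m, V = v)
persp(x1, x2, Z)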

-thomas

> m=matrix(c(0,0),2,1)
> v=matrix(c(1,0,0,1),2,2)
>
> Y=function(X1,X2,mu=m,V=v)
> {
>   X=matrix(c(X1,X2),2,1)
>   a=(1/((2*pi)*sqrt(det(V))))*exp((-0.5)*(t(X-mu)%*%solve(V)%*%(X-mu)))
>   a[1]
> }
>
> x1=seq(-1,1)
> x2=x1
>
> Z=outer(x1,x2,FUN="Y",mu=m,V=v)
>
> persp(x1,x2,Z)
>
>
>
>
> my code:
>
> BINORMAL<-function(varx=1,vary=1,covxy=0,meanx=0,meany=0)
> {
>   #the following function plots the density of a bi variate normal 
> distribution
>
>   covXY<-matrix(c(varx,covxy,covxy,vary),2,2)
>   A<-solve(covXY)
>
>   
> #up<-max(meanx+4*varx^.5,meanx-4*varx^.5,meany+4*vary^.5,meany-4*vary^.5)
>   #x <- seq(-up,up,length=50)
>   #y <- x
>
>   x <- seq(meanx-3*varx^.5,meanx+3*varx^.5,length=50)
>   y <- seq(meany-3*vary^.5,meany+3*vary^.5,length=50)
>
>   f <- function(x,y,...)
>   {
>   detA<-det(A)
>   
> quadForm<-A[1,1]*(x-meanx)^2+2*A[1,2]*(x-meanx)*(y-meany)+A[2,2]*(y-meany)^2
>   K<-sqrt(detA)/(2*pi)
>   exp(-0.5*quadForm)*K
>   }
>
>   z <- outer(x, y, f)
>
>   par(mfrow=c(1,2))
>   persp(x, y, z,theta = 30, phi = 30,col="white",main="BI-VARIATE NORMAL
> DISTRIBUTION")
>   contour(x,y,z,main=paste("xy plot, corr(X,Y)= ",(covxy/(varx*vary)^.5)))
>
>   print("NOTE -sqrt(varx*vary)<=covxy<=sqrt(varx*vary)")
>   #print(A)
> }
> BINORMAL(varx=1,vary=1,covxy=0,meanx=0,meany=0)
>
> thanx
>
> /allan
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

Thomas Lumley   Assoc. Professor, Biostatistics
[EMAIL PROTECTED]   University of Washington, Seattle

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] "Survey" package and NAMCS data... unsure of specification

2005-10-05 Thread David L. Van Brunt, Ph.D.
Thanks! That's what I had come up with, but was unsure about it. I'm
checking the marginals against the published manuals now. Nice to have fresh
eyes on the problem! Thanks, also, for the helpful link.

On 10/4/05, Thomas Lumley <[EMAIL PROTECTED]> wrote:
>
> On Tue, 4 Oct 2005, David L. Van Brunt, Ph.D. wrote:
>
> > Hello, all.
> >
> > I wanted to use the "survey" package to analyze data from the National
> > Ambulatory Medical Care Survey, and am having some difficulty
> translating
> > the analysis keywords from one package (Stata) to the other (R). The
> data
> > were collected using a multistage probability sampling, and there are
> > variables included to identify the sampling units and weights.
> Documentation
> > from the NAMCS describes this for Stata as follows (note the variable
> names
> > in the data are in caps):
> >
> > The pweight (PATWT), strata (CSTRATM), and PSU (CPSUM) are set with the
> > svyset command as
> > follows:
> > svyset pweight PATWT
> > svyset strata CSTRATM
> > svyset psu CPSUM
> >
>
> Supposing your data frame is called 'namcs'
>
> dnamcs <- svydesign(id=~CPSUM, strata=~CSTRATM, weight=~PATWT, data=namcs)
>
> or perhaps
>
> dnamcs <- svydesign(id=~CPSUM, strata=~CSTRATM, weight=~PATWT,
> data=namcs, nest=TRUE)
>
> (nest=TRUE is needed if CPSUM repeats the same values in different
> strata).
>
> Also, if you have access to design variables for the multistage design you
> can use them (but it probably won't make much difference). There's a very
> brief example using the National Health Interview Study at
> http://faculty.washington.edu/tlumley/survey/example-twostage.html
>
>
> -thomas
>



--
---
David L. Van Brunt, Ph.D.
mailto:[EMAIL PROTECTED]

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] transparent surface in rgl

2005-10-05 Thread Prof Brian Ripley
On Wed, 5 Oct 2005, Duncan Murdoch wrote:

> On 10/5/2005 9:31 AM, Prof. Paul R. Fisher wrote:
>> Hi all
>> I am a complete newbie to this list (just subscribed) and a newcomer to
>> R (an S user from olden times). I have been using scatter3d to create a
>> 3d scatter plot with surface. The graphic is created within the rgl
>> package and I have used rgl.postscript to export it so I can generate a
>> publication quality image. My problem is that the plotted surface is no
>> longer transparent in the postscript output ie. the rgl.spheres that are
>> behind the surface disappear in the postscript image. Can't seem to find
>> any info on this anywhere. Am I doing something wrong? Is there an easy fix?
>
> I think Postscript doesn't support transparency (or at least the version
> of Postscript that the rgl.postscript function targets doesn't support
> it).  You may have to export a bitmapped format using the rgl.snapshot()
> function.  If your original window is very large this may give you good
> enough quality.

Common PostScript (level 2) does not support either full or partial 
transparency (and I guess partial transparency is meant here or the 
surface could just not be plotted).  It would be good to have a rgl.pdf 
which did.  These days PDF is the `portable PostScript' and since version 
1.4 has had alpha-channel support.

Ref:

http://en.wikipedia.org/wiki/Transparent_pixels#Transparency_in_PostScript


-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] transparent surface in rgl

2005-10-05 Thread Duncan Murdoch
On 10/5/2005 11:10 AM, Prof Brian Ripley wrote:
> On Wed, 5 Oct 2005, Duncan Murdoch wrote:
> 
>> On 10/5/2005 9:31 AM, Prof. Paul R. Fisher wrote:
>>> Hi all
>>> I am a complete newbie to this list (just subscribed) and a newcomer to
>>> R (an S user from olden times). I have been using scatter3d to create a
>>> 3d scatter plot with surface. The graphic is created within the rgl
>>> package and I have used rgl.postscript to export it so I can generate a
>>> publication quality image. My problem is that the plotted surface is no
>>> longer transparent in the postscript output ie. the rgl.spheres that are
>>> behind the surface disappear in the postscript image. Can't seem to find
>>> any info on this anywhere. Am I doing something wrong? Is there an easy fix?
>>
>> I think Postscript doesn't support transparency (or at least the version
>> of Postscript that the rgl.postscript function targets doesn't support
>> it).  You may have to export a bitmapped format using the rgl.snapshot()
>> function.  If your original window is very large this may give you good
>> enough quality.
> 
> Common PostScript (level 2) does not support either full or partial 
> transparency (and I guess partial transparency is meant here or the 
> surface could just not be plotted).  It would be good to have a rgl.pdf 
> which did.  These days PDF is the `portable PostScript' and since version 
> 1.4 has had alpha-channel supoort.
> 
> Ref:
> 
> http://en.wikipedia.org/wiki/Transparent_pixels#Transparency_in_PostScript
> 
> 

The library we use (GL2PS) apparently supports PDF output, and that's 
one of the format options for rgl.postscript(), so maybe we already do 
support that.  I haven't tried it.
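
For anyone who wants to try it, it should just be a matter of something like
(untested):

rgl.postscript("surface.pdf", fmt = "pdf")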

Duncan Murdoch

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] output a sequence of plots

2005-10-05 Thread Paul E. Green
I can output two histograms of variables
AXFILTERED and AZPTOP as follows:

win.metafile(filename="C:/AXFILTERED.emf",pointsize=12)
hist(AXFILTERED,breaks=40)
dev.off()

win.metafile(filename="C:/AZPTOP.emf",pointsize=12)
hist(AZPTOP,breaks=40)
dev.off()

But, I actually have a dataframe of 120 variables that I
would like histograms of. Any solutions that would
save me from repeating this code 120 times? Can I
pass arguments inside quotes? Can I write a function
to do this?

Paul Green
[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Problem reading in external data and assigning data.frames within R

2005-10-05 Thread Spencer Graves
xnew <-- edit(data.frame())

is parsed as

xnew <- (- edit(data.frame()))

i.e. the second "-" is a unary minus, so whatever you enter is assigned with
its sign flipped.
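
A quick illustration (made-up value):

x <-- 5
x
# [1] -5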

make sense?
spencer graves

Nathan Dieckmann wrote:

>   Hey there,
> 
> I apologize if this is an irritatingly simple question ... I'm a
> new user.  I can't understand why R flips the sign of all data values
> when reading in external text files (tab delimited or csv) with the
> read.delim or read.csv functions.  The signs of data values also seem
> to be flipped after assigning a new data.frame from within R (xnew <--
> edit(data.frame()).  What am I doing wrong?
> 
>Any help would be greatly appreciated.  Thanks in advance.
> 
>  -- Nate
> 
> -
> Nathan Dieckmann
> Department of Psychology
> University of Oregon
> Eugene, OR 97403
> (541) 346-4963
> [EMAIL PROTECTED]
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

-- 
Spencer Graves, PhD
Senior Development Engineer
PDF Solutions, Inc.
333 West San Carlos Street Suite 700
San Jose, CA 95110, USA

[EMAIL PROTECTED]
www.pdf.com 
Tel:  408-938-4420
Fax: 408-280-7915

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] testing non-linear component in mgcv:gam

2005-10-05 Thread John Fox
Dear Denis,

The chi-square test is formed in analogy to what's done for a GLM: The
difference in residual deviance for the nested models is divided by the
estimated scale parameter -- i.e., the estimated error variance for a model
with normal errors. Otherwise, as you point out, the test would be dependent
upon the scale of the response.
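
For example, the arithmetic is roughly (a sketch reusing the sed.lin and sed.gam
fits from earlier in the thread, and assuming the scale estimate is stored in the
sig2 component of the mgcv fit):

dev.diff <- deviance(sed.lin) - deviance(sed.gam)  # difference in residual deviance
dev.diff / sed.gam$sig2   # refer this to chi-squared on the difference in effective df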

John


John Fox
Department of Sociology
McMaster University
Hamilton, Ontario
Canada L8S 4M4
905-525-9140x23604
http://socserv.mcmaster.ca/jfox 
 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Denis Chabot
> Sent: Wednesday, October 05, 2005 9:04 AM
> To: John Fox
> Cc: R list
> Subject: Re: [R] testing non-linear component in mgcv:gam
> 
> Hi John,
> 
> On 05-10-05, at 09:45, John Fox wrote:
> 
> > Dear Denis,
> >
> > Take a closer look at the anova table: The models provide identical 
> > fits to the data. The differences in degrees of freedom and 
> deviance 
> > between the two models are essentially zero, 5.5554e-10 and 
> 2.353e-11 
> > respectively.
> >
> > I hope this helps,
> >  John
> This is one of my difficulties. In some examples I found on 
> the web, the difference in deviance is compared directly 
> against the chi- squared distribution. But my y variable has 
> a very small range (between 0 and 0.5, most of the time) so 
> the difference in deviance is always very small and if I 
> compared it against the chi-squared distribution as I have 
> seen done in examples, the non-linear component would always 
> be not significant. Yet it is (with one exception), tested 
> with both mgcv:gam and gam:gam. I think the examples I have 
> read were wrong in this regard, the "scale" factor seen in 
> mgcv output seems to intervene. But exactly how is still 
> mysterious to me and I hesitate to judge the size of the 
> deviance difference myself.
> 
> I agree it is near zero in my example. I guess I need to have 
> more experience with these models to better interpret the output...
> 
> Denis
> >
> >
> >> -Original Message-
> >> From: [EMAIL PROTECTED] 
> >> [mailto:[EMAIL PROTECTED] On Behalf Of Denis Chabot
> >> Sent: Wednesday, October 05, 2005 8:22 AM
> >> To: r-help@stat.math.ethz.ch
> >> Subject: [R] testing non-linear component in mgcv:gam
> >>
> >> Hi,
> >>
> >> I need further help with my GAMs. Most models I test are very 
> >> obviously non-linear. Yet, to be on the safe side, I report the 
> >> significance of the smooth (default output of mgcv's
> >> summary.gam) and confirm it deviates significantly from linearity.
> >>
> >> I do the latter by fitting a second model where the same 
> predictor is 
> >> entered without the s(), and then use anova.gam to compare 
> the two. I 
> >> thought this was the equivalent of the default output of anova.gam 
> >> using package gam instead of mgcv.
> >>
> >> I wonder if this procedure is correct because one of my models 
> >> appears to be linear. In fact mgcv estimates df to be 
> exactly 1.0 so 
> >> I could have stopped there. However I inadvertently repeated the 
> >> procedure outlined above. I would have thought in this case the 
> >> anova.gam comparing the smooth and the linear fit would 
> for sure have 
> >> been not significant.
> >> To my surprise, P was 6.18e-09!
> >>
> >> Am I doing something wrong when I attempt to confirm the non- 
> >> parametric part a smoother is significant? Here is my example case 
> >> where the relationship does appear to be linear:
> >>
> >> library(mgcv)
> >>
> >>> This is mgcv 1.3-7
> >>>
> >> Temp <- c(-1.38, -1.12, -0.88, -0.62, -0.38, -0.12, 0.12, 
> 0.38, 0.62, 
> >> 0.88, 1.12,
> >> 1.38, 1.62, 1.88, 2.12, 2.38, 2.62, 2.88, 3.12, 3.38, 
> >> 3.62, 3.88,
> >> 4.12, 4.38, 4.62, 4.88, 5.12, 5.38, 5.62, 5.88, 6.12, 
> >> 6.38, 6.62, 6.88,
> >> 7.12, 8.38, 13.62)
> >> N.sets <- c(2, 6, 3, 9, 26, 15, 34, 21, 30, 18, 28, 27, 
> 27, 29, 31, 
> >> 22, 26, 24, 23,
> >>  15, 25, 24, 27, 19, 26, 24, 22, 13, 10, 2, 5, 
> 3, 1, 1, 
> >> 1, 1, 1) wm.sed <- c(0.0, 0.016129032, 0.0, 
> >> 0.062046512, 0.396459596, 0.189082949,
> >>  0.054757925, 0.142810440, 0.168005168, 0.180804428, 
> >> 0.111439628, 0.128799505,
> >>  0.193707937, 0.105921610, 0.103497845, 0.028591837, 
> >> 0.217894389, 0.020535469,
> >>  0.080389068, 0.105234450, 0.070213450, 0.050771363, 
> >> 0.042074434, 0.102348837,
> >>  0.049748344, 0.019100478, 0.005203125, 0.101711864, 
> >> 0.0, 0.0,
> >>  0.014808824, 0.0, 0.22200, 0.16700, 
> >> 0.0, 0.0,
> >>  0.0)
> >>
> >> sed.gam <- gam(wm.sed~s(Temp),weight=N.sets)
> >> summary.gam(sed.gam)
> >>
> >>> Family: gaussian
> >>> Link function: identity
> >>>
> >>> Formula:
> >>> wm.sed ~ s(Temp)
> >>>
> >>> Parametric coefficients:
> >>> Estimate S

Re: [R] output a sequence of plots

2005-10-05 Thread vincent
Paul E. Green wrote:

> ... Any solutions that would
> save me from repeating this code 120 times? Can I
> pass arguments inside quotes? Can I write a function
> to do this?

as a toy example, with some of the usable functions :
(see ?dir)

fct00 = function()
{
rep0 = "data";  
fichier = dir(rep0, pattern="\\.emf$");   # pattern is a regular expression, not a glob
nbfichiers = length(fichier);

for (i in 1:nbfichiers)
{
#   do the job
}
}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] spline.des

2005-10-05 Thread Marc Schwartz (via MN)
On Mon, 2005-10-03 at 15:28 -0500, lforzani wrote:
> Hello, I am using library fda and I can not run a lot of functions because
> I receive the error:
> 
> Error in bsplineS(evalarg, breaks, norder, nderiv) : 
> couldn't find function "spline.des"
> 
> 
> do you know how I can fix that? Thanks. Liliana


spline.des() is in the 'splines' package, installed with the basic R
distribution. It appears that bsplineS() has a dependency on this not
otherwise referenced in the fda package documentation. Looks like fda
has not been updated in some time as well.

I have cc'd Jim Ramsay on this reply as an FYI.

You probably need a:

  library(splines)

before calling your code above.


HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] transparent surface in rgl

2005-10-05 Thread Duncan Murdoch
On 10/5/2005 11:33 AM, Duncan Murdoch wrote:
> On 10/5/2005 11:10 AM, Prof Brian Ripley wrote:
>> On Wed, 5 Oct 2005, Duncan Murdoch wrote:
>> 
>>> On 10/5/2005 9:31 AM, Prof. Paul R. Fisher wrote:
 Hi all
 I am a complete newbie to this list (just subscribed) and a newcomer to
 R (an S user from olden times). I have been using scatter3d to create a
 3d scatter plot with surface. The graphic is created within the rgl
 package and I have used rgl.postscript to export it so I can generate a
 publication quality image. My problem is that the plotted surface is no
 longer transparent in the postscript output ie. the rgl.spheres that are
 behind the surface disappear in the postscript image. Can't seem to find
 any info on this anywhere. Am I doing something wrong? Is there an easy 
 fix?
>>>
>>> I think Postscript doesn't support transparency (or at least the version
>>> of Postscript that the rgl.postscript function targets doesn't support
>>> it).  You may have to export a bitmapped format using the rgl.snapshot()
>>> function.  If your original window is very large this may give you good
>>> enough quality.
>> 
>> Common PostScript (level 2) does not support either full or partial 
>> transparency (and I guess partial transparency is meant here or the 
>> surface could just not be plotted).  It would be good to have a rgl.pdf 
>> which did.  These days PDF is the `portable PostScript' and since version 
>> 1.4 has had alpha-channel support.
>> 
>> Ref:
>> 
>> http://en.wikipedia.org/wiki/Transparent_pixels#Transparency_in_PostScript
>> 
>> 
> 
> The library we use (GL2PS) apparently supports PDF output, and that's 
> one of the format options for rgl.postscript(), so maybe we already do 
> support that.  I haven't tried it.
> 

I've just checked, and currently transparency isn't supported even with 
PDF output.  I tried updating the version of GL2PS and turning on 
transparency support, but so far no luck at all.

If anyone wants to follow up on this I think it would be a nice 
addition, but otherwise, I think the PNG output is the best we can do.

Duncan Murdoch

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] pca in dimension reduction

2005-10-05 Thread Weiwei Shi
Hi, there:
I am wondering if anyone here can provide an example of using PCA to do
dimension reduction for a dataset.
The dataset can be n*q (n>=q or n<=q).

As to dimension reduction, are there other implementations available, e.g. ICA,
Isomap, Locally Linear Embedding...?

Thanks,

weiwei

--
Weiwei Shi, Ph.D

"Did you always know?"
"No, I did not. But I believed..."
---Matrix III

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] testing non-linear component in mgcv:gam

2005-10-05 Thread Denis Chabot
Thank you everyone for your help, but my introduction to GAM is  
turning my brain to mush. I thought the one part of the output I  
understood the best was r-sq (adj), but now even this is becoming foggy.

In my original message I mentioned a gam fit that turns out to be a  
linear fit. By curiosity I analysed it with a linear predictor only  
with mgcv package, and then as a linear model. The output was  
identical in both, but the r-sq (adj) was 0.55 in mgcv and 0.26 in  
lm. In lm I hope that my interpretation that 26% of the variance in y  
is explained by the linear relationship with x is valid. Then what  
does r2 mean in mgcv?

Denis
 > summary.gam(lin)

Family: gaussian
Link function: identity

Formula:
wm.sed ~ Temp

Parametric coefficients:
  Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.162879   0.019847   8.207 1.14e-09 ***
Temp-0.023792   0.006369  -3.736 0.000666 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1


R-sq.(adj) =  0.554   Deviance explained = 28.5%
GCV score = 0.09904   Scale est. = 0.093686  n = 37


 > summary(sed.true.lin)

Call:
lm(formula = wm.sed ~ Temp, weights = N.sets)

Residuals:
 Min  1Q  Median  3Q Max
-0.6138 -0.1312 -0.0325  0.1089  1.1449

Coefficients:
  Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.162879   0.019847   8.207 1.14e-09 ***
Temp-0.023792   0.006369  -3.736 0.000666 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.3061 on 35 degrees of freedom
Multiple R-Squared: 0.285,Adjusted R-squared: 0.2646
F-statistic: 13.95 on 1 and 35 DF,  p-value: 0.000666


On 05-10-05, at 09:45, John Fox wrote:

> Dear Denis,
>
> Take a closer look at the anova table: The models provide identical  
> fits to
> the data. The differences in degrees of freedom and deviance  
> between the two
> models are essentially zero, 5.5554e-10 and 2.353e-11 respectively.
>
> I hope this helps,
>  John
>
> 
> John Fox
> Department of Sociology
> McMaster University
> Hamilton, Ontario
> Canada L8S 4M4
> 905-525-9140x23604
> http://socserv.mcmaster.ca/jfox
> 
>
>
>> -Original Message-
>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] On Behalf Of Denis Chabot
>> Sent: Wednesday, October 05, 2005 8:22 AM
>> To: r-help@stat.math.ethz.ch
>> Subject: [R] testing non-linear component in mgcv:gam
>>
>> Hi,
>>
>> I need further help with my GAMs. Most models I test are very
>> obviously non-linear. Yet, to be on the safe side, I report
>> the significance of the smooth (default output of mgcv's
>> summary.gam) and confirm it deviates significantly from linearity.
>>
>> I do the latter by fitting a second model where the same
>> predictor is entered without the s(), and then use anova.gam
>> to compare the two. I thought this was the equivalent of the
>> default output of anova.gam using package gam instead of mgcv.
>>
>> I wonder if this procedure is correct because one of my
>> models appears to be linear. In fact mgcv estimates df to be
>> exactly 1.0 so I could have stopped there. However I
>> inadvertently repeated the procedure outlined above. I would
>> have thought in this case the anova.gam comparing the smooth
>> and the linear fit would for sure have been not significant.
>> To my surprise, P was 6.18e-09!
>>
>> Am I doing something wrong when I attempt to confirm the non-
>> parametric part a smoother is significant? Here is my example
>> case where the relationship does appear to be linear:
>>
>> library(mgcv)
>>
>>> This is mgcv 1.3-7
>>>
>> Temp <- c(-1.38, -1.12, -0.88, -0.62, -0.38, -0.12, 0.12,
>> 0.38, 0.62, 0.88, 1.12,
>> 1.38, 1.62, 1.88, 2.12, 2.38, 2.62, 2.88, 3.12,
>> 3.38, 3.62, 3.88,
>> 4.12, 4.38, 4.62, 4.88, 5.12, 5.38, 5.62, 5.88,
>> 6.12, 6.38, 6.62, 6.88,
>> 7.12, 8.38, 13.62)
>> N.sets <- c(2, 6, 3, 9, 26, 15, 34, 21, 30, 18, 28, 27, 27,
>> 29, 31, 22, 26, 24, 23,
>>  15, 25, 24, 27, 19, 26, 24, 22, 13, 10, 2, 5, 3,
>> 1, 1, 1, 1, 1) wm.sed <- c(0.0, 0.016129032,
>> 0.0, 0.062046512, 0.396459596, 0.189082949,
>>  0.054757925, 0.142810440, 0.168005168,
>> 0.180804428, 0.111439628, 0.128799505,
>>  0.193707937, 0.105921610, 0.103497845,
>> 0.028591837, 0.217894389, 0.020535469,
>>  0.080389068, 0.105234450, 0.070213450,
>> 0.050771363, 0.042074434, 0.102348837,
>>  0.049748344, 0.019100478, 0.005203125,
>> 0.101711864, 0.0, 0.0,
>>  0.014808824, 0.0, 0.22200,
>> 0.16700, 0.0, 0.0,
>>  0.0)
>>
>> sed.gam <- gam(wm.sed~s(Temp),weight=N.sets)
>> summary.gam(sed.gam)
>>
>>> Family: gaussian
>>> Link function: identity
>>>
>>> Formula:
>>> wm.sed ~ s(Temp)
>>>
>>> Parametric coefficients:
>>> Estimate Std. Error t value Pr(>|t|)
>>> (Intercept)  0.0840

Re: [R] pca in dimension reduction

2005-10-05 Thread Berton Gunter
?princomp  ?prcomp give examples

-- Bert Gunter
Genentech Non-Clinical Statistics
South San Francisco, CA
 
"The business of the statistician is to catalyze the scientific learning
process."  - George E. P. Box
 
 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Weiwei Shi
> Sent: Wednesday, October 05, 2005 12:27 PM
> To: r-help
> Subject: [R] pca in dimension reduction
> 
> Hi, there:
> I am wondering if anyone here can provide an example using pca doing
> dimension reduction for a dataset.
> The dataset can be n*q (n>=q or n<=q).
> 
> As to dimension reduction, are there other implementations 
> for like ICA,
> Isomap, Locally Linear Embedding...
> 
> Thanks,
> 
> weiwei
> 
> --
> Weiwei Shi, Ph.D
> 
> "Did you always know?"
> "No, I did not. But I believed..."
> ---Matrix III
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] output a sequence of plots

2005-10-05 Thread Don MacQueen
Something similar to this
(but I haven't tested)

mydf <- data.frame(xx=rnorm(100), yy=rnorm(100), zz=rnorm(100))

for (nm in names(mydf)) {

   fnm <- file.path('c:',paste(nm,'.emf',sep=''))
   cat('Creating histogram in file',fnm,'\n')
   win.metafile( filename=fnm)
   hist(mydf[[nm]])   # or hist(mydf[,nm])
   dev.off()

}

-Don

At 11:54 AM -0400 10/5/05, Paul E. Green wrote:
>I can output two histograms of variables
>AXFILTERED and AZPTOP as follows:
>
>win.metafile(filename="C:/AXFILTERED.emf",pointsize=12)
>hist(AXFILTERED,breaks=40)
>dev.off()
>
>win.metafile(filename="C:/AZPTOP.emf",pointsize=12)
>hist(AZPTOP,breaks=40)
>dev.off()
>
>But, I actually have a dataframe of 120 variables that I
>would like histograms of. Any solutions that would
>save me from repeating this code 120 times? Can I
>pass arguments inside quotes? Can I write a function
>to do this?
>
>Paul Green
>   [[alternative HTML version deleted]]
>
>__
>R-help@stat.math.ethz.ch mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


-- 
--
Don MacQueen
Environmental Protection Department
Lawrence Livermore National Laboratory
Livermore, CA, USA

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] testing non-linear component in mgcv:gam

2005-10-05 Thread John Fox
Dear Denis,

You got me: I would have thought from ?summary.gam that this would be the
same as the adjusted R^2 for a linear model. Note, however, that the
percentage of deviance explained checks with the R^2 from the linear model,
as expected.

Maybe you should address this question to the package author.

Regards,
 John


John Fox
Department of Sociology
McMaster University
Hamilton, Ontario
Canada L8S 4M4
905-525-9140x23604
http://socserv.mcmaster.ca/jfox 
 

> -Original Message-
> From: Denis Chabot [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, October 05, 2005 3:33 PM
> To: John Fox
> Cc: R list
> Subject: Re: [R] testing non-linear component in mgcv:gam
> 
> Thank you everyone for your help, but my introduction to GAM 
> is turning my brain to mush. I thought the one part of the 
> output I understood the best was r-sq (adj), but now even 
> this is becoming foggy.
> 
> In my original message I mentioned a gam fit that turns out 
> to be a linear fit. By curiosity I analysed it with a linear 
> predictor only with mgcv package, and then as a linear model. 
> The output was identical in both, but the r-sq (adj) was 0.55 
> in mgcv and 0.26 in lm. In lm I hope that my interpretation 
> that 26% of the variance in y is explained by the linear 
> relationship with x is valid. Then what does r2 mean in mgcv?
> 
> Denis
>  > summary.gam(lin)
> 
> Family: gaussian
> Link function: identity
> 
> Formula:
> wm.sed ~ Temp
> 
> Parametric coefficients:
>   Estimate Std. Error t value Pr(>|t|)
> (Intercept)  0.162879   0.019847   8.207 1.14e-09 ***
> Temp-0.023792   0.006369  -3.736 0.000666 ***
> ---
> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> 
> 
> R-sq.(adj) =  0.554   Deviance explained = 28.5%
> GCV score = 0.09904   Scale est. = 0.093686  n = 37
> 
> 
>  > summary(sed.true.lin)
> 
> Call:
> lm(formula = wm.sed ~ Temp, weights = N.sets)
> 
> Residuals:
>  Min  1Q  Median  3Q Max
> -0.6138 -0.1312 -0.0325  0.1089  1.1449
> 
> Coefficients:
>   Estimate Std. Error t value Pr(>|t|)
> (Intercept)  0.162879   0.019847   8.207 1.14e-09 ***
> Temp-0.023792   0.006369  -3.736 0.000666 ***
> ---
> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> 
> Residual standard error: 0.3061 on 35 degrees of freedom
> Multiple R-Squared: 0.285,Adjusted R-squared: 0.2646
> F-statistic: 13.95 on 1 and 35 DF,  p-value: 0.000666
> 
> 
> On 05-10-05, at 09:45, John Fox wrote:
> 
> > Dear Denis,
> >
> > Take a closer look at the anova table: The models provide identical 
> > fits to the data. The differences in degrees of freedom and 
> deviance 
> > between the two models are essentially zero, 5.5554e-10 and 
> 2.353e-11 
> > respectively.
> >
> > I hope this helps,
> >  John
> >
> > 
> > John Fox
> > Department of Sociology
> > McMaster University
> > Hamilton, Ontario
> > Canada L8S 4M4
> > 905-525-9140x23604
> > http://socserv.mcmaster.ca/jfox
> > 
> >
> >
> >> -Original Message-
> >> From: [EMAIL PROTECTED] 
> >> [mailto:[EMAIL PROTECTED] On Behalf Of Denis Chabot
> >> Sent: Wednesday, October 05, 2005 8:22 AM
> >> To: r-help@stat.math.ethz.ch
> >> Subject: [R] testing non-linear component in mgcv:gam
> >>
> >> Hi,
> >>
> >> I need further help with my GAMs. Most models I test are very 
> >> obviously non-linear. Yet, to be on the safe side, I report the 
> >> significance of the smooth (default output of mgcv's
> >> summary.gam) and confirm it deviates significantly from linearity.
> >>
> >> I do the latter by fitting a second model where the same 
> predictor is 
> >> entered without the s(), and then use anova.gam to compare 
> the two. I 
> >> thought this was the equivalent of the default output of anova.gam 
> >> using package gam instead of mgcv.
> >>
> >> I wonder if this procedure is correct because one of my models 
> >> appears to be linear. In fact mgcv estimates df to be 
> exactly 1.0 so 
> >> I could have stopped there. However I inadvertently repeated the 
> >> procedure outlined above. I would have thought in this case the 
> >> anova.gam comparing the smooth and the linear fit would 
> for sure have 
> >> been not significant.
> >> To my surprise, P was 6.18e-09!
> >>
> >> Am I doing something wrong when I attempt to confirm the non- 
> >> parametric part a smoother is significant? Here is my example case 
> >> where the relationship does appear to be linear:
> >>
> >> library(mgcv)
> >>
> >>> This is mgcv 1.3-7
> >>>
> >> Temp <- c(-1.38, -1.12, -0.88, -0.62, -0.38, -0.12, 0.12, 
> 0.38, 0.62, 
> >> 0.88, 1.12,
> >> 1.38, 1.62, 1.88, 2.12, 2.38, 2.62, 2.88, 3.12, 3.38, 
> >> 3.62, 3.88,
> >> 4.12, 4.38, 4.62, 4.88, 5.12, 5.38, 5.62, 5.88, 6.12, 
> >> 6.38, 6.62, 6.88,
> >> 7.12, 8.38, 13.62)
> >> N.sets <- c(2, 6, 3, 9, 26, 15, 34, 21,

Re: [R] pca in dimension reduction

2005-10-05 Thread Weiwei Shi
Thanks. I got what I needed.
For example:
> USA.pca<-prcomp(USArrests, scale = TRUE)
> predict(USA.pca)
                   PC1         PC2         PC3          PC4
Alabama    -0.97566045  1.12200121 -0.43980366  0.154696581
Alaska     -1.93053788  1.06242692  2.01950027 -0.434175454
Arizona    -1.74544285 -0.73845954  0.05423025 -0.826264240
Arkansas    0.13999894  1.10854226  0.11342217 -0.180973554
California -2.49861285 -1.52742672  0.59254100 -0.338559240
...
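
To actually reduce the dimension, one then keeps just the first few score
columns, e.g.

USA.2d <- predict(USA.pca)[, 1:2]   # n x 2 matrix of scores on PC1 and PC2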





On 10/5/05, Berton Gunter <[EMAIL PROTECTED]> wrote:
>
> ?princomp ?prcomp give examples
>
> -- Bert Gunter
> Genentech Non-Clinical Statistics
> South San Francisco, CA
>
> "The business of the statistician is to catalyze the scientific learning
> process." - George E. P. Box
>
>
>
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of Weiwei Shi
> > Sent: Wednesday, October 05, 2005 12:27 PM
> > To: r-help
> > Subject: [R] pca in dimension reduction
> >
> > Hi, there:
> > I am wondering if anyone here can provide an example using pca doing
> > dimension reduction for a dataset.
> > The dataset can be n*q (n>=q or n<=q).
> >
> > As to dimension reduction, are there other implementations
> > for like ICA,
> > Isomap, Locally Linear Embedding...
> >
> > Thanks,
> >
> > weiwei
> >
> > --
> > Weiwei Shi, Ph.D
> >
> > "Did you always know?"
> > "No, I did not. But I believed..."
> > ---Matrix III
> >
> > [[alternative HTML version deleted]]
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide!
> > http://www.R-project.org/posting-guide.html
> >
>
>
>


--
Weiwei Shi, Ph.D

"Did you always know?"
"No, I did not. But I believed..."
---Matrix III

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] mapply for matrices

2005-10-05 Thread Tamas K Papp
Hi,

I have a matrix A and a vector b, and would like to apply a function
f(a,b) to the rows of A and the elements of b.  Eg

A <- matrix(1:4,2,2)
b <- c(5,7)
f <- function(a,b) {sum(a)*b}

myapply(f,A=A,b=b)

would give

(1+3)*5 = 20
(2+4)*7 = 42

I found mapply, but it does not work for matrices.  How could I do
this without loops?  The above is just a toy example, the problem I am
using this for has larger matrices, and f is a computation that does
not handle vectors.

One thing I thought of is

sapply(seq(along=b),function(i,A,b){f(A[i,],b[i])},A=A,b=b)

but this is not very elegant.  I checked the archives and found nothing.

Thank you,

Tamas

-- 
Bayesian statistics is difficult in the sense that thinking is difficult.
--Donald A. Berry, American Statistician 51:242 (1997)

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] problem in installing a package

2005-10-05 Thread Claire Lee
I'm using R in Windows XP. I created a package myself.
I've used R CMD check to check it. Everything seems OK
except the latex. I get the error message:
* checking bbHist-manual.tex ... ERROR
LaTeX errors when creating DVI version.
This typically indicates Rd problems.

I ignored it because I didn't want to submit it to
CRAN.

Then I tried to use R CMD INSTALL to install it. First
I get: 
"mv: cannot move `c:/PROGRA~1/R/rw2011/library/bbHist'
to `c:/PROGRA~1/R/rw2011/library/00LOCK/bbHist
': Permission denied" 

and a bunch of making DLL errors.  Then when I tried a
second time, I get:

open(c:/progra~1/r/rw2011/library/bbHist/DESCRIPTION):
No such file or directory

I can see a 00LOCK directory is created in the
c:/PROGRA~1/R/rw2011/library directory. Any idea why
this is happening?

Thanks.

Claire

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] mapply for matrices

2005-10-05 Thread Berton Gunter
At the risk of being dense or R-ically incorrect, why do it without loops
when it is natural and easy to do it with them? More to the point:

1. vectorization speeds things up

2. apply commands are basically looping, not vectorization. Their advantage
is coding transparency, not speed

Flog me if you will ...

(of course constructs like sapply(index,function(index, A,b)...,A=A,b=b )
always work -- but why bother? )

-- Bert Gunter
Genentech Non-Clinical Statistics
South San Francisco, CA
 
"The business of the statistician is to catalyze the scientific learning
process."  - George E. P. Box
 
 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Tamas K Papp
> Sent: Wednesday, October 05, 2005 2:37 PM
> To: R-help mailing list
> Subject: [R] mapply for matrices
> 
> Hi,
> 
> I have a matrix A and a vector b, and would like to apply a function
> f(a,b) to the rows of A and the elements of b.  Eg
> 
> A <- matrix(1:4,2,2)
> b <- c(5,7)
> f <- function(a,b) {sum(a)*b}
> 
> myapply(f,A=A,b=b)
> 
> would give
> 
> (1+3)*5 = 20
> (2+4)*7 = 42
> 
> I found mapply, but it does not work for matrices.  How could I do
> this without loops?  The above is just a toy example, the problem I am
> using this for has larger matrices, and f is a computation that does
> not handle vectors.
> 
> One thing I thought of is
> 
> sapply(seq(along=b),function(i,A,b){f(A[i,],b[i])},A=A,b=b)
> 
> but this is not very elegant.  I checked the archives and 
> found nothing.
> 
> Thank you,
> 
> Tamas
> 
> -- 
> Bayesian statistics is difficult in the sense that thinking 
> is difficult.
> --Donald A. Berry, American Statistician 51:242 (1997)
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] mapply for matrices

2005-10-05 Thread Gabor Grothendieck
Try this:

   mapply(f, split(A, 1:nrow(A)), b)
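
With the toy A, b and f from your post this gives the expected result:

A <- matrix(1:4, 2, 2); b <- c(5, 7); f <- function(a, b) sum(a) * b
mapply(f, split(A, 1:nrow(A)), b)
#  1  2
# 20 42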


On 10/5/05, Tamas K Papp <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have a matrix A and a vector b, and would like to apply a function
> f(a,b) to the rows of A and the elements of b.  Eg
>
> A <- matrix(1:4,2,2)
> b <- c(5,7)
> f <- function(a,b) {sum(a)*b}
>
> myapply(f,A=A,b=b)
>
> would give
>
> (1+3)*5 = 20
> (2+4)*7 = 42
>
> I found mapply, but it does not work for matrices.  How could I do
> this without loops?  The above is just a toy example, the problem I am
> using this for has larger matrices, and f is a computation that does
> not handle vectors.
>
> One thing I thought of is
>
> sapply(seq(along=b),function(i,A,b){f(A[i,],b[i])},A=A,b=b)
>
> but this is not very elegant.  I checked the archives and found nothing.
>
> Thank you,
>
> Tamas
>
> --
> Bayesian statistics is difficult in the sense that thinking is difficult.
> --Donald A. Berry, American Statistician 51:242 (1997)
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] playing with R: make a animated GIF file...

2005-10-05 Thread klebyn

Hello all


I am playing with R to make an animated GIF.

Any suggestions or improvements are welcome :-)

If somebody could help me, thanks!


Cleber N. Borges ( klebyn )




my objective:

(steps TODO)

---
1) to save PNG files;

->  I don't know the best way to do this;


2) transform the PNG files into GIF files (easy! no problem! ... I think ...)


3) reload the GIF files in R and use the caTools package to make an
animated GIF.

--

   the code

  reverse the STRING

strReverse <- function(x) sapply(lapply(strsplit(x, NULL), rev), paste, 
collapse="")

  logotype to animate

yourLogo ="Is Nice to play with R-package   "

logoWidth = 1.5
logoHeight = 2.5

L = nchar(yourLogo)

TrigSplit = 360 / L

yourLogo = strReverse(yourLogo)

posx = numeric(L)
posy = numeric(L)

for( i in 0:L){
posx[i] = logoHeight * sin(i * TrigSplit * pi / 180)
posy[i] = logoWidth *  cos(i * TrigSplit * pi / 180)
}

max_x = max(posx)*1.1
max_y = max(posy)*3

min_x = min(posx)*1.1
min_y = min(posy)*3


cex = 2/(posy + 2)

idx = 1:L


for(j in 1:L-1) {

###file = paste("CQM_",j,".png",sep="")

###png(filename=file, bg="transparent")

plot(0,t='n', xlim=c(min_x,max_x), ylim=c(min_y,max_y), axes=FALSE, 
ann=FALSE, font=3  )

for( i in 1:L){text(x=posx[i], y=posy[i], 
labels=substr(yourLogo,idx[i],idx[i]), col='blue', cex=cex[i] ) }

idx = (append(idx[L],idx))[1:L]

Sys.sleep(0.2)

###dev.off()
}

##  final code
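
For step 1, something like this should work -- essentially the loop above with
the ### lines switched on, and zero-padded file names so the frames sort in the
right order for steps 2 and 3 (a sketch only):

for (j in 1:L) {
png(sprintf("CQM_%03d.png", j), bg="transparent")
plot(0, t='n', xlim=c(min_x,max_x), ylim=c(min_y,max_y), axes=FALSE, ann=FALSE, font=3)
for (i in 1:L) text(x=posx[i], y=posy[i], labels=substr(yourLogo,idx[i],idx[i]), col='blue', cex=cex[i])
idx = (append(idx[L],idx))[1:L]
dev.off()
}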

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Animation of Mandelbrot Set

2005-10-05 Thread Gabor Grothendieck
This probably has nothing to do with your software but on my Windows
XP system I just get a static image on Internet Explorer with the
animated GIF but with Firefox and the same GIF the animation comes
out as expected.


On 10/4/05, Tuszynski, Jaroslaw W. <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I was playing with Mandelbrot sets and come up with the following code, I
> thought I would share:
>
> library(fields)  # for tim.colors
> library(caTools) # for write.gif
> m = 400  # grid size
> C = complex( real=rep(seq(-1.8,0.6, length.out=m), each=m ),
> imag=rep(seq(-1.2,1.2, length.out=m),  m ) )
> C = matrix(C,m,m)
> Z = 0
> X = array(0, c(m,m,20))
> for (k in 1:20) {
>  Z = Z^2+C
>  X[,,k] = exp(-abs(Z))
> }
> image(X[,,k], col=tim.colors(256)) # show final image in R
> write.gif(X, "Mandelbrot.gif", col=tim.colors(256), delay=100)
> # drop "Mandelbrot.gif" file from current directory on any web brouser to
> see the animation
>
>  Jarek
> \
>  Jarek Tuszynski, PhD.   o / \
>  Science Applications International Corporation  <\__,|
>  (703) 676-4192   ">  \
>  [EMAIL PROTECTED] `   \
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] R/S-Plus equivalent to Genstat "predict": predictions over "averages" of covariates

2005-10-05 Thread Peter Dunn
Hi all

I'm doing some things with a colleague comparing different
sorts of models.  My colleague has fitted a number of glms in
Genstat (which I have never used), while the glm I have
been using is only available for R.

He has a spreadsheet of fitted means from each of his models
obtained from using the Genstat "predict" function.  For
example, suppose we fit the model of the type
glm.out <- glm( y ~ factor(F1) + factor(F2) + X1 + poly(X2,2) +
   poly(X3,2), family=...)

Then he produces a table like this (made up, but similar):

F1(level1)  12.2
F1(level2)  14.2
F1(level3)  15.3
F2(level1)  10.3
F2(level2)  9.1
X1=0    10.2
X1=0.5  10.4
X1=1    10.4
X1=1.5  10.5
X1=2    10.9
X1=2.5  11.9
X1=3    11.8
X2=0    12.0
X2=0.5  12.2
X2=1    12.5
X2=1.5  12.9
X2=2    13.0
X2=2.5  13.1
X2=3    13.5

Each of the numbers is a predicted mean.  So when X1=0, on average
we predict an outcome of 10.2.

To obtain these figures in Genstat, he uses the Genstat "predict"
function.  When I asked for an explanation of how it was done (ie to
make the "predictions", what values of the other covariates were used) I
was told:

> So, for a one-dimensional table of fitted means for any factor (or
> variate), all other variates are set to their average values; and the
> factor constants (including the first, at zero) are given a weighted
> average depending on their respective numbers of observations.

So for quantitative variables (such as pH), one uses the mean pH in the
data set when making the predictions.  Reasonable and easy.

But for categorical variables (like Month), he implies we use a weighted
average of the fitted coefficients for all the months, depending on the
proportion of times those factor levels appear in the data.

(I hope I explained that OK...)

Is there an equivalent way of doing this in R or S-Plus?  I have to do
it for a number of sites and species, so an automated way would be
useful.  I have tried searching to no avail (though I may not be searching
on the correct terms), and I have tried hard-coding something myself,
so far unsuccessfully: the poly terms and the weighted
averaging over the factor levels are proving a bit too much for my
limited skills.
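A rough sketch of one possible approach (a guess at the procedure, not
Genstat's exact algorithm): take the column means of the model matrix, which
puts quantitative terms at their sample means and factor dummy columns at
their observed level proportions (i.e. the weighted averaging of factor
constants described above), then move only the columns of the covariate of
interest before back-transforming. The function below assumes a fitted glm
such as glm.out and a focal term that enters the model linearly; a poly()
term as the focal variable would need its basis re-evaluated, which this
sketch does not attempt.

mean.design.predict <- function(fit, var, values) {
    X    <- model.matrix(fit)
    xbar <- colMeans(X)                       # the "average" design point
    asgn <- attr(X, "assign")                 # which model term each column belongs to
    labs <- attr(terms(fit), "term.labels")
    j    <- which(asgn == match(var, labs))   # column(s) of the focal term
    linkinv <- family(fit)$linkinv
    sapply(values, function(v) {
        x    <- xbar
        x[j] <- v                             # move only the focal covariate
        linkinv(sum(x * coef(fit)))           # back-transform the linear predictor
    })
}

# e.g.  mean.design.predict(glm.out, "X1", seq(0, 3, by = 0.5))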

Any assistance appreciated.  (Any clarification of what I mean can be
provided if I have not been clear.)

Thanks, as always.

P.

 > version
          _
platform  i386-pc-linux-gnu
arch      i386
os        linux-gnu
system    i386, linux-gnu
status
major     2
minor     1.0
year      2005
month     04
day       18
language  R
 >



-- 
Dr Peter Dunn  |  Senior Lecturer in Statistics
Faculty of Sciences, University of Southern Queensland
   Web:http://www.sci.usq.edu.au/staff/dunn
   Email:  dunn  usq.edu.au
CRICOS:  QLD 00244B |  NSW 02225M |  VIC 02387D |  WA 02521C

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] newbie questions - looping through hierarchial datafille

2005-10-05 Thread Simon Blomberg
Well I haven't seen any replies to this, so I have had a stab at the 
problem of getting the data into a data frame.

The approach I took was to break the data up into a list and then fill in
a matrix row by row, "filling down" spreadsheet-style when necessary and
taking advantage of the ordering of the data, before coercing to a
data.frame. Maybe not a very portable or general solution, but it appears to work.

list.to.data.frame <- function () {
filecon <- file(file.choose()) # open a data file
dat <- strsplit(readLines(filecon, n=-1), split=" ") # read all the data into a list:
                                                     # one line per element, each element a
                                                     # character vector of data (variable length)
resultvec <- matrix(rep(NA, 16), nrow=1) # results will be stored here

filldown <- function (x) {
# cluge to simulate fill-down of a vector, spreadsheet style
 if(all(is.na(x)) || all(!is.na(x))) x else {
 last <- min(which(is.na(x)))
 x[last:length(x)] <- x[last-1]
 x
 }
}

#loop through the data
for (vec in dat) {
 f <- switch(vec[1], # what kind of field are we dealing with?
 "A" = c(vec[-1], rep(NA, 15)),
 "X" = c(NA, vec[-1], rep(NA, 12)),
 "P" = c(rep(NA,4), vec[-1], rep(NA, 8)),
 "T" = c(rep(NA, 8), vec[-1], rep(NA, 6)),
 "L" = c(rep(NA, 10), vec[-1], rep(NA, 3)),
 "F" = c(rep(NA, 13), vec[-1]))
        if (any(is.na(resultvec[nrow(resultvec), which(!is.na(f))]))) {
            # slot the data into the appropriate columns of the current row
            resultvec[nrow(resultvec), ] <- ifelse(is.na(resultvec[nrow(resultvec), ]),
                                                   f, resultvec[nrow(resultvec), ])
        } else {
            # if the row is already full, start a new one
            resultvec <- rbind(resultvec, f)
        }
        # at the end of a record (an "F" line), fill down and start a new row
        if (vec[1] == "F") resultvec <- rbind(apply(resultvec, 2, filldown),
                                              rep(NA, 16))
}

# coerce to a data frame, and get rid of the last empty row
res <- as.data.frame(resultvec[-nrow(resultvec),], row.names=NULL)
# set column names
names(res) <- c("Inventory", "Stratum_no", "Total", "Ye", "Plot_no", "age", 
"slope",
"species", "tree_no", "frequency", "leader",  "diameter", "height", 
"start_height",
"finish_height", "feature")
#return the result
res
}
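A possible usage sketch (my addition, untested): once the data frame is built,
aggregate() gives the per-plot summaries that the SAS proc summary step in the
quoted message computes, e.g. the mean diameter:

res <- list.to.data.frame()    # pick the data file interactively

# mean diameter per inventory / stratum / plot / tree
aggregate(as.numeric(as.character(res$diameter)),
          by = list(Inventory = res$Inventory, Stratum = res$Stratum_no,
                    Plot = res$Plot_no, Tree = res$tree_no),
          FUN = mean, na.rm = TRUE)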

Cheers,

Simon.


At 10:36 AM 4/10/2005, you wrote:
>Dear List,
>
>I'm new to R, making a transition from SAS. I have a space-delimited file
>with the following structure. Each line in the datafile is identified by
>the first letter.
>
>A = Inventory (Inventory)
>X = Stratum (Stratum_no Total Ye=year established)
>P = Plot (Plot_no age slope= species)
>T = Tree (tree_no frequency)
>L = Leader (leader diameter height)
>F = Feature (start_height finish_height feature)
>
>On each of these lines there are some 'line specific' variables (in
>brackets). The data is hierarchical in nature - A feature belongs to a
>leader, a leader belongs to a tree, a tree belongs to a plot, a plot
>belongs to a stratum, a stratum belongs to inventory. There are many
>features in a tree. Many trees in a plot etc.
>
>In SAS I would read in the data in a procedural way using first. and last.
>variables to work out where inventories/stratums/plots/trees  finished and
>started so I could create summary statistics for each of them. For
>example, how many plots in a stratum? How many trees in a plot? An example
>of the sas code I would (not checked for errors!!!). If anybody could give
>me some idea on what the right approach in R would be for a similar
>analysis it would be greatly appreciated.
>
>regards Andrew
>
>
>Data datafile;
>infile 'test.txt';
>input @1 tag $1. @@;
>retain inventory stratum plot tree leader;
>if tag = 'A' then input @3 inventory $.;
>if tag = 'X' then input @3 stratum_no $. total $. yearest $. ;
>if tag = 'P' then input @3 plot_no $. age $. slope $. species $;
>if tag = 'T' then input @3 tree_no $. frequency  ;
>if tag = 'L' then input @3 leader_no $ diameter  height  ;
>if tag = 'F' then input @3 start $ finish $ feature $;
>if tag = 'F' then output;
>run;
>proc sort data = datafile;
>by inventory stratum_no  plot_no  tree_no  leader_no;
>
>* calculate mean dbh in each plot
>data dbh
>set datafile;
>by inventory stratum_no  plot_no  tree_no leader_no
>if first.leader_no then output;
>
>proc summary data = diameter;
>by inventory stratum plot tree;
>var diameter;
>output out = mean mean=;
>run;
>
>A BENALLA_1
>X 1 10 YE=1985
>P 1 20.25 slope=14 SPP:P.RAD
>T 1 25
>L 0 28.5 21.3528
>F 0 21.3528 SFNSW_DIC:P
>F 21.3528 100 SFNSW_DIC:P
>T 2 25
>L 0 32 23.1
>F 0 6.5 SFNSW_DIC:A
>F 6.5 23.1 SFNSW_DIC:C
>F 23.1 100 SFNSW_DIC:C
>T 3 25
>L 0 39.5 22.2407
>F 0 4.7 SFNSW_DIC:A
>F 4.7 6.7 SFNSW_DIC:C
>P 2 20.25 slope=13 SPP:P.RAD
>T 1 25
>L 0 38 22.1474
>F 0 1 SFNSW_DIC:G
>F 1 2.3 SFNSW_DIC:A
>T 1001 25
>L 0 38 22.1474
>F 0 1 SFNSW_DIC:G
>F 1 2.3 SFNSW_DIC:A
>T 2 25

[R] Changing the value of variables passed to functions as arguments

2005-10-05 Thread Paul Baer
Is it possible to write functions in such a way that, rather than 
having to write "a=function(a)", one can just write "function(a)" and 
have the variable passed as the argument be modified?

My real interest here is being able to invoke the editor by writing 
"ed(filename)" rather than filename=edit(filename).

Thanks,

--Paul

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Changing the value of variables passed to functions as arguments

2005-10-05 Thread Gabor Grothendieck
Check out:

http://finzi.psych.upenn.edu/R/Rhelp02a/archive/38536.html
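The gist of that post, as a minimal sketch (not general call-by-reference,
just enough for the ed(filename) case): capture the argument's name with
substitute() and assign the edited value back into the caller's environment.

ed <- function(x) {
    name <- deparse(substitute(x))                 # the variable name as supplied
    assign(name, edit(x), envir = parent.frame())  # write the edited value back
    invisible(get(name, envir = parent.frame()))
}

# usage:  f <- function(a) a + 1;  ed(f)   # edits f in place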

On 10/5/05, Paul Baer <[EMAIL PROTECTED]> wrote:
> Is it possible to write functions in such a way that, rather than
> having to write "a=function(a)", one can just write "function(a)" and
> have the variable passed as the argument be modified?
>
> My real interest here is being able to invoke the editor by writing
> "ed(filename)" rather than filename=edit(filename).
>
> Thanks,
>
> --Paul
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Animation of Mandelbrot Set

2005-10-05 Thread Charles Annis, P.E.
Works well with both IE and Firefox on my 2 year old DELL WinXP machine.

Charles Annis, P.E.

[EMAIL PROTECTED]
phone: 561-352-9699
eFax:  614-455-3265
http://www.StatisticalEngineering.com
 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Gabor Grothendieck
Sent: Wednesday, October 05, 2005 9:38 PM
To: Tuszynski, Jaroslaw W.
Cc: ([EMAIL PROTECTED])
Subject: Re: [R] Animation of Mandelbrot Set

This probably has nothing to do with your software but on my Windows
XP system I just get a static image on Internet Explorer with the
animated GIF but with Firefox and the same GIF the animation comes
out as expected.


On 10/4/05, Tuszynski, Jaroslaw W. <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I was playing with Mandelbrot sets and came up with the following code, I
> thought I would share:
>
> library(fields)  # for tim.colors
> library(caTools) # for write.gif
> m = 400  # grid size
> C = complex( real=rep(seq(-1.8,0.6, length.out=m), each=m ),
> imag=rep(seq(-1.2,1.2, length.out=m),  m ) )
> C = matrix(C,m,m)
> Z = 0
> X = array(0, c(m,m,20))
> for (k in 1:20) {
>  Z = Z^2+C
>  X[,,k] = exp(-abs(Z))
> }
> image(X[,,k], col=tim.colors(256)) # show final image in R
> write.gif(X, "Mandelbrot.gif", col=tim.colors(256), delay=100)
> # drop "Mandelbrot.gif" file from current directory on any web browser to
> see the animation
>
>  Jarek
> \
>  Jarek Tuszynski, PhD.   o / \
>  Science Applications International Corporation  <\__,|
>  (703) 676-4192   ">  \
>  [EMAIL PROTECTED] `   \
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] how to handle missing values in the data?

2005-10-05 Thread uttam . phulwale

Hello everybody,
I am referring to David Meyer's "Benchmarking Support Vector Machines", 
Report No. 78 (Nov. 2002). I am new to working with R and I am not sure how 
it handles missing values in the benchmark datasets. I would be very 
grateful if you could let me know how to handle the missing numerical and 
categorical variables in the data (e.g. BreastCancer).

The reason I ask is that for the SVM I get fewer predictions from the 
trained model than there are test observations, so I cannot calculate a 
confusion matrix. At the same time, lda(), fda() and rpart() did give the 
expected number of predictions. So I am confused about how these functions 
handled the missing values: are they imputed with the mean, the median, or 
a new category?

I have another problem with the generalized linear model (glm) function. I 
might have committed some error, but I am not sure where.

The script I have tried for the glm function is:

trdata<-data.frame(train,row.names=NULL)
attach(trdata)

glmmod <- glm(Class~., family= binomial(link = 
"logit"),data=trdata,maxit=50)

tstdata<-data.frame(test,row.names=NULL)
attach(tstdata)

xtst <- subset(tstdata, select = -Class)
ytst <- Class

pred<-predict(glmmod,xtst)
library(mda)
confusion(pred,ytst)

Can you help me sort out these problems?
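Two guesses at what may be going on (a sketch only, not checked against the
report): predict() methods that default to na.omit silently drop rows with
missing values, which would explain getting fewer SVM predictions than test
cases, and predict() on a glm returns the linear predictor unless
type = "response" is requested, so its output cannot be fed to confusion()
as class labels. Assuming trdata, tstdata and ytst as in the script above,
and that the first level of Class is the "failure" level:

# 1) make the missing-value handling explicit, e.g. a crude imputation
#    before fitting: medians for numeric columns, the modal level for factors
impute <- function(d) {
    for (j in seq_along(d)) {
        if (is.numeric(d[[j]])) {
            d[[j]][is.na(d[[j]])] <- median(d[[j]], na.rm = TRUE)
        } else {
            tab <- table(d[[j]])
            d[[j]][is.na(d[[j]])] <- names(tab)[which.max(tab)]
        }
    }
    d
}
trdata  <- impute(trdata)
tstdata <- impute(tstdata)

# 2) turn the glm output into class labels before building the confusion matrix
p         <- predict(glmmod, newdata = tstdata, type = "response")
predclass <- factor(ifelse(p > 0.5, levels(ytst)[2], levels(ytst)[1]),
                    levels = levels(ytst))
library(mda)
confusion(predclass, ytst)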

Uttam Phulwale
Tata Consultancy Services Limited
Mailto: [EMAIL PROTECTED]
Website: http://www.tcs.com


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] R for teaching multivariate statistics (Summary)

2005-10-05 Thread Murray Jorgensen
Greetings all

I promised a summary of the responses that I got to my question:

"Next year I will be teaching a third year course in applied statistics 
about 1/3 of which is multivariate statistics. I would be interested in 
hearing experiences from those who have taught multivariate statistics 
using R. Especially I am interested in the textbook that you used or 
recommended."

There were not many replies, so my task is easy!

Peter Dunn mentioned the new book by Brian Everitt, "An R and S-Plus 
Companion to Multivariate Analysis" which he had yet to see. Someone 
else has it out on loan here so I have not seen it either.

Peter has been using Bryan Manly's book but finds that it is expensive 
and describes different algorithms to those used in R.

Brian Ripley drew attention to Chapters 11 and 12 of MASS.

Pierre Bady pointed out the material on the website "Enseignements de 
Statistique en Biologie" http://pbil.univ-lyon1.fr/R/enseignement.html
by A.B. Dufour, D. Chessel & J.R. Lobry. I hope to explore this some 
more when I get back to higher bandwidth.

Pat Altham has a wealth of material on her web site, especially
http://www.statslab.cam.ac.uk/~pat/misc.ps
and
http://www.statslab.cam.ac.uk/~pat/AppMultNotes.ps.gz

Many thanks to these respondents for their help.

Murray Jorgensen
-- 
Dr Murray Jorgensen  http://www.stats.waikato.ac.nz/Staff/maj.html
Department of Statistics, University of Waikato, Hamilton, New Zealand
Email: [EMAIL PROTECTED]Fax 7 838 4155
Phone  +64 7 838 4773 wkHome +64 7 825 0441Mobile 021 1395 862

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] playing with R: make a animated GIF file...

2005-10-05 Thread vincent
klebyn wrote:

> my objective:
> 1) to save PNG files;
> ->  I don't know the best way to do this;

?png
(bmp() and jpeg() devices are also available)
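For the loop over frames, a minimal sketch of the usual pattern (draw.frame()
is a hypothetical stand-in for whatever plots frame j, e.g. the body of the
rotating-logo loop earlier in this thread):

nframes <- 25   # hypothetical number of frames
for (j in seq_len(nframes)) {
    png(filename = sprintf("frame_%03d.png", j), width = 400, height = 400)
    draw.frame(j)   # hypothetical plotting function for frame j
    dev.off()
}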
hope it helps

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] problem in installing a package

2005-10-05 Thread Uwe Ligges
Claire Lee wrote:

> I'm using R in Windows XP. I created a package myself.
> I've used R CMD check to check it. Everything seems OK
> except the LaTeX step. I get the error message:
> * checking bbHist-manual.tex ... ERROR
> LaTeX errors when creating DVI version.
> This typically indicates Rd problems.
> 
> I ignored it because I didn't want to submit it to
> CRAN.
> 
> Then I tried to use R CMD INSTALL to install it. First
> I get: 
> "mv: cannot move `c:/PROGRA~1/R/rw2011/library/bbHist'
> to `c:/PROGRA~1/R/rw2011/library/00LOCK/bbHist
> ': Permission denied" 
>
> and a bunch of making DLL errors.  Then when I tried a
> second time, I get:
> 
> open(c:/progra~1/r/rw2011/library/bbHist/DESCRIPTION):
> No such file or directory
> 
> I can see a 00LOCK directory is created in the
> c:/PROGRA~1/R/rw2011/library directory. Any idea why
> this is happening?

No, information is still too sparse, unfortunately.
Do you have full write access?
The 00LOCK directory is used to save the older package so that it can be 
restored if a new installation fails.
Something went wrong and you have to remove it manually now, I guess.
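For instance, from within R (a sketch; .libPaths()[1] is a guess at the
affected library location and should be checked first):

unlink(file.path(.libPaths()[1], "00LOCK"), recursive = TRUE)   # remove the stale lock directory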

Uwe Ligges




> Thanks.
> 
> Claire
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html