[R] Is it possible to model with Laplace's error distribution?

2006-03-21 Thread Petar Milin
Hello!
My question is stated in the Subject: is it possible to model with
Laplace's error distribution? For example, the lmer() function supports
a few families, such as binomial, but not Laplace. Is there another
package that would allow for Laplace, or is there a way to supply a
"user-defined" family?

Sincerely,
P. Milin
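
A possible workaround (my sketch, not from the thread): for a fixed-effects
regression, maximum likelihood under Laplace (double-exponential) errors is
equivalent to minimizing the sum of absolute residuals, i.e. median
regression, which the quantreg package provides. Note this does not cover
the random-effects structure that lmer() fits.

```r
## Sketch: Laplace-error regression via least absolute deviations.
## The ML estimate under Laplace errors minimizes sum(|residuals|),
## which is exactly what rq() with tau = 0.5 computes.
library(quantreg)

set.seed(1)
x <- rnorm(100)
y <- 1 + 2 * x + (rexp(100) - rexp(100))  # Laplace noise: difference of two exponentials
fit <- rq(y ~ x, tau = 0.5)
coef(fit)  # estimates of intercept and slope
```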

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to join "specific Special Interest Group (=: SIG) mailing lists"

2006-03-21 Thread Roger Bivand
On Tue, 21 Mar 2006, Gabor Grothendieck wrote:

> Check out:
> 
> https://stat.ethz.ch/mailman/listinfo

Yes, R-sig-Geo, like the other R-sig-* on this page, is a mailing list. In 
this case, and like many others, you can also find out more through the 
CRAN "Spatial" Task View (you find the Task Views in the navigation bar on 
the left). These mailing lists are very often a good place to ask for 
specific help, often related to packages grouped in Task Views. It is 
worth repeating our gratitude to Martin Maechler for all his work with the 
mailing lists!

> 
> On 3/21/06, zhijie zhang <[EMAIL PROTECTED]> wrote:
> > there are several specific Special Interest Group (=: SIG) mailing
> > lists,and i'm interested in the"R-sig-Geo :R Special Interest Group on
> > using Geographical data and Mapping ",but can't find how to join it,
> > could anybody tell me how to do that? thank u very much!
> >
> >
> >
> >
> > --
> > Kind Regards,
> > Zhi Jie, Zhang, PhD
> > Department of Epidemiology
> > School of Public Health
> > Fudan University
> > Tel: 86-21-54237149
> >
> >
> 
> 

-- 
Roger Bivand
Economic Geography Section, Department of Economics, Norwegian School of
Economics and Business Administration, Helleveien 30, N-5045 Bergen,
Norway. voice: +47 55 95 93 55; fax +47 55 95 95 43
e-mail: [EMAIL PROTECTED]



Re: [R] converting a string to a vector without using strsplit and unlink?

2006-03-21 Thread Tancred Frickey
Many thanks. That does it

Gabor Grothendieck wrote:

>Try this:
>
>x <- "c(1,2)"
>eval(parse(text = x))
>
>or
>
>x <- sub("\\{(.*)\\}", "c(\\1)", "{1,2,3,4,5}")
>eval(parse(text = x))
>
>or
>
>scan(textConnection(gsub("[{}]", "", "{1,2,3,4,5}")), sep = ",")
>
>On 3/21/06, Tancred Frickey <[EMAIL PROTECTED]> wrote:
>  
>
>>Hi
>>
>>I am importing some data from a postgres database and one of the fields
>>contains an array of values. This field is in the form "{1,2,3,4,5}" and
>>is imported as a character string into "R".  I am able to convert the
>>string into a vector of numerics using the strsplit and unlist
>>functions, but I find that a very inelegant (and slow) solution.
>>As an alternative I have coaxed the database into exporting the data in
>>the form "c(1,2,3,4,5)" and importing that into R. Unfortunately this is
>>also interpreted as one long character string instead of a vector
>>containing values. If I edit the corresponding fields in the table with
>>the "edit" command and remove the quotes at the beginning and end of the
>>line, the data is recognized as a vector of numerics.
>>
>>Question: Is there any direct or easy way of converting a string
>>"c(1,2,3,4,5)" into a vector of values without going through
>>strsplit&unlist and/or is there a way to import arrays from postgres
>>directly as arrays or vectors and not strings (and thereby avoid all the
>>trouble in the first place).
>>
>>
>>Best wishes
>>
>>Tancred
>>
>>
>>
>>



[R] Removing rows with missing values

2006-03-21 Thread Ashraf Chaudhary
I got answer to my question from a previous posting on this forum. 
Thank you,
Ashraf
 



Re: [R] Removing rows with missing values

2006-03-21 Thread Gabor Grothendieck
Check out

?na.omit
?complete.cases
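
Applied to the poster's example (a minimal sketch; NA stands in for the dots):

```r
## Sketch: dropping incomplete rows with na.omit() or complete.cases().
d <- data.frame(a = c(1, 3, 2, 6),
                b = c(2, NA, 7, 7),
                c = c(4, 5, NA, 8))
na.omit(d)              # keeps rows 1 and 4 only
d[complete.cases(d), ]  # same result, via a logical row index
```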


On 3/21/06, Ashraf Chaudhary <[EMAIL PROTECTED]> wrote:
> Dear Colleagues:
>
> I have a question that is probably too simple but I could not get it.
>
> How to remove all the rows of a data table with missing values?
>
> Let us say my data table is
>
> 1  2  4
> 3  .   5
> 2  7  .
> 6  7  8
>
> The result should be,
>
> 1 2 4
> 6 7 8
>
> Thank you,
>
> Ashraf
>
>
> 
>
> _
> M. Ashraf Chaudhary, Ph.D.
> Department of International Health
> Johns Hopkins Bloomberg School of Public Health
> 615 N. Wolfe St.  Room W5506
> Baltimore MD 21205-2179
>
> (410) 502-0741/fax: (410) 502-6733
>   [EMAIL PROTECTED]
>
>
>
>
>



Re: [R] How to join "specific Special Interest Group (=: SIG) mailing lists"

2006-03-21 Thread Gabor Grothendieck
Check out:

https://stat.ethz.ch/mailman/listinfo

On 3/21/06, zhijie zhang <[EMAIL PROTECTED]> wrote:
> there are several specific Special Interest Group (=: SIG) mailing
> lists,and i'm interested in the"R-sig-Geo :R Special Interest Group on
> using Geographical data and Mapping ",but can't find how to join it,
> could anybody tell me how to do that? thank u very much!
>
>
>
>
> --
> Kind Regards,
> Zhi Jie, Zhang, PhD
> Department of Epidemiology
> School of Public Health
> Fudan University
> Tel: 86-21-54237149
>
>



Re: [R] converting a string to a vector without using strsplit and unlink?

2006-03-21 Thread Gabor Grothendieck
Try this:

x <- "c(1,2)"
eval(parse(text = x))

or

x <- sub("\\{(.*)\\}", "c(\\1)", "{1,2,3,4,5}")
eval(parse(text = x))

or

scan(textConnection(gsub("[{}]", "", "{1,2,3,4,5}")), sep = ",")
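
The scan() idea above can be wrapped in a small helper (my sketch; the
function name pg_array is hypothetical):

```r
## Sketch: convert a Postgres array literal like "{1,2,3,4,5}"
## into a numeric vector without strsplit/unlist.
pg_array <- function(s) {
  con <- textConnection(gsub("[{}]", "", s))
  on.exit(close(con))  # tidy up the connection
  scan(con, sep = ",", quiet = TRUE)
}
pg_array("{1,2,3,4,5}")  # 1 2 3 4 5
```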

On 3/21/06, Tancred Frickey <[EMAIL PROTECTED]> wrote:
> Hi
>
> I am importing some data from a postgres database and one of the fields
> contains an array of values. This field is in the form "{1,2,3,4,5}" and
> is imported as a character string into "R".  I am able to convert the
> string into a vector of numerics using the strsplit and unlist
> functions, but I find that a very inelegant (and slow) solution.
> As an alternative I have coaxed the database into exporting the data in
> the form "c(1,2,3,4,5)" and importing that into R. Unfortunately this is
> also interpreted as one long character string instead of a vector
> containing values. If I edit the corresponding fields in the table with
> the "edit" command and remove the quotes at the beginning and end of the
> line, the data is recognized as a vector of numerics.
>
> Question: Is there any direct or easy way of converting a string
> "c(1,2,3,4,5)" into a vector of values without going through
> strsplit&unlist and/or is there a way to import arrays from postgres
> directly as arrays or vectors and not strings (and thereby avoid all the
> trouble in the first place).
>
>
> Best wishes
>
> Tancred
>
>



[R] How to join "specific Special Interest Group (=: SIG) mailing lists"

2006-03-21 Thread zhijie zhang
There are several Special Interest Group (SIG) mailing lists, and I'm
interested in "R-sig-Geo: R Special Interest Group on using Geographical
data and Mapping", but I can't find out how to join it. Could anybody
tell me how to do that? Thank you very much!




--
Kind Regards,
Zhi Jie, Zhang, PhD
Department of Epidemiology
School of Public Health
Fudan University
Tel: 86-21-54237149



[R] converting a string to a vector without using strsplit and unlink?

2006-03-21 Thread Tancred Frickey
Hi

I am importing some data from a Postgres database, and one of the fields
contains an array of values. This field is in the form "{1,2,3,4,5}" and
is imported into R as a character string. I am able to convert the
string into a vector of numerics using the strsplit and unlist
functions, but I find that a very inelegant (and slow) solution.
As an alternative, I have coaxed the database into exporting the data in
the form "c(1,2,3,4,5)" and imported that into R. Unfortunately, this is
also interpreted as one long character string instead of a vector of
values. If I edit the corresponding fields in the table with the "edit"
command and remove the quotes at the beginning and end of the line, the
data are recognized as a vector of numerics.

Question: is there any direct or easy way of converting a string
"c(1,2,3,4,5)" into a vector of values without going through strsplit
and unlist, and/or is there a way to import arrays from Postgres
directly as arrays or vectors rather than strings (and thereby avoid the
trouble in the first place)?


Best wishes

Tancred



[R] Removing rows with missing values

2006-03-21 Thread Ashraf Chaudhary
Dear Colleagues: 
 
I have a question that is probably very simple, but I could not work it out.

How do I remove all rows of a data table that contain missing values?
 
Let us say my data table is 
 
1  2  4
3  .   5
2  7  .
6  7  8
 
The result should be,
 
1 2 4
6 7 8
 
Thank you,
 
Ashraf
  



_
M. Ashraf Chaudhary, Ph.D.
Department of International Health
Johns Hopkins Bloomberg School of Public Health
615 N. Wolfe St.  Room W5506
Baltimore MD 21205-2179

(410) 502-0741/fax: (410) 502-6733
  [EMAIL PROTECTED]


 



[R] Help with "step()" function(2)

2006-03-21 Thread Kenneth Cabrera


Hi R users:

Sorry if the attached file didn't arrive.
I'm sending it in zip format; I hope it arrives.

The data set is not too large, so I copy it here in ASCII format:

Thank you for your help

Kenneth

testData.dat

"MCHNV" "IA" "IM" "altura" "Region" "Offset1"
215 26.9863956381346 1.07456622007468 2 "URABA" -0.185125484126689
325 21.443816698054 1.05477685097601 4 "URABA" 0.118671529717499
435 24.3172115328382 1.08970363580813 8 "URABA" 0.363948427905231
327 22.9861973271014 1.09309079792113 200 "URABA" 0.241376319575269
150 18.1768608561301 0.983952398124775 30 "BAJO CAUCA" -0.683196849706777
949 10.6563785744498 0.97821952889272 25 "URABA" 1.30128098679293
417 15.0910419778194 1.02464131763495 28 "URABA" 0.314810739840034
528 13.6340158642317 0.913249100021176 50 "BAJO CAUCA" 0.986562784859487
572 15.2055383362367 0.999321512574634 34 "URABA" 0.714908672341458
1255 16.9860295444068 1.00064160401003 2 "URABA" 1.54158724552593
517 17.9909267660402 1.00311566921978 50 "BAJO CAUCA" 0.67141268841394
221 26.714962883463 1.03642655495856 100 "BAJO CAUCA" -0.122167633974208
306 20.1507989217486 1.05350982751705 50 "BAJO CAUCA" 0.0411419433311752
274 27.4221304422647 1.03894736842105 125 "BAJO CAUCA" 0.0344014267173323
307 11.8064337044690 1.02694423605901 650 "NORDESTE" 0.242161557149972
203 25.7458563535912 1.02557953637090 75 "URABA" -0.608806032126194
258 27.2816742408892 1.08604552865165 1550 "NORTE" 0.340748793388473
126 26.0150722854506 0.9960092095165 1165 "NORTE" -0.551647618286246
88 21.6368286445013 1.10034379028792 1200 "NORTE" -0.901402119380404
85 25.1843952723718 1.05835010060362 1700 "NORTE" -0.763569644856491
75 18.6270125223614 1.09069658719028 1850 "NORTE" -1.03282454813011
64 29.9515311742675 1.08689655172414 1200 "OCCIDENTE" -1.03845836584836
77 26.2046204620462 1.11247966534976 650 "OCCIDENTE" -1.10563690360507
49 12.6096949435855 1.06872703695699 1475 "NORTE" -1.06421086195078
133 18.7464907355418 1.09098914000587 1535 "NORDESTE" -0.616186139423817
192 10.5534246575342 0.947705442902882 2300 "NORTE" 0.508021696433256
32 17.2654236774291 1.0462158808933 1875 "NORTE" -1.18417017702976
73 17.0988006853227 1.06583294006607 850 "OCCIDENTE" -1.04128722204884
139 15.3530911090010 1.03178310316816 1550 "NORDESTE" -0.245900538436826
52 19.3008705114255 1.02785822645152 1675 "NORTE" -0.473208760194684
17 6.27750073335289 0.95582329317269 2550 "NORTE" -1.69826912614072
307 26.0375981018434 1.03257159815996 450 "OCCIDENTE" 0.131028262406404
127 17.2681438238026 1.05732223283745 980 "NORDESTE" -0.520875959619492
182 15.4432793136320 1.05083088954057 700 "NORDESTE" -0.16960278438618
24 7.7125328659071 1.0375 1800 "NORTE" -1.76609172247948
51 30.1453193646502 1.18054532056006 25 "URABA" -2.21640739675299
141 14.8701064405557 1.08226897069872 1300 "OCCIDENTE" -0.138113302129634
40 24.7907034861927 1.08357198646186 1625 "OCCIDENTE" -1.18090753139494
76 17.3296398891967 1.11061739943873 1250 "NORDESTE" -1.11474167059799
40 10.7364199781261 1.09309423884014 700 "OCCIDENTE" -0.711311151187616
100 19.3800588668138 1.14100039385585 75 "MAGDALENA MEDIO" -0.86988435906
42 10.3085483055134 1.00222807372899 1800 "NORTE" -0.898942093539542
18 8.80185302168878 1.02085106382979 1925 "OCCIDENTE" -1.48722027970985
20 16.6951761888471 1.11658218682114 500 "OCCIDENTE" -1.99510039324608
18 4.50797141286421 1.08840413318025 2300 "NORTE" -1.22417551164346
75 6.84546862896979 0.940623825629463 1050 "NORDESTE" -0.648173814917214
26 8.21845174973489 1.07633027522936 2550 "NORTE" -1.48722027970985
21 6.19588744588745 1.24954351795496 1920 "OCCIDENTE" -1.9241486572738
216 16.9889711436773 1.06553284443751 1350 "OCCIDENTE" 0.188137942115395
81 5.59826387611726 1.02449921811006 2550 "NORTE" 0.162968828278140
90 8.68694955964772 1.03136617100372 1450 "NORDESTE" -0.233193887167711
69 3.65186966980101 0.890711135611907 2200 "NORTE" -0.415515443961666
138 16.1123827476325 1.01692767267131 550 "OCCIDENTE" -0.139262067333508
79 9.67841682127396 1.05692478931468 750 "OCCIDENTE" -0.638658995275876
69 13.5423542354235 1.00617965129111 950 "MAGDALENA MEDIO" -0.97021907389971
52 5.6497484139138 1.01988510826337 2475 "NORTE" -0.136965855073157
59 8.7686432740111 1.02820400409177 1975 "NORDESTE" -0.512493680866688
69 11.9602529358627 1.00325732899023 780 "OCCIDENTE" -0.807436326962073
173 6.86294126054023 0.969987048820317 1300 "VALLE DE ABURRA" 0.436963775167535
29 19.2194992008524 1.0319350473613 1800 "SUROESTE" -1.26584820804402
118 4.64579055441478 0.948 1425 "VALLE DE ABURRA" 0.392717535285662
23 9.50905993495431 0.976430976430976 1875 "ORIENTE" -1.21739582465808
1456 2.79913286437223 0.912491945614184 1450 "VALLE DE ABURRA" 2.50894864426824
156 10.2515243902439 1.03805825242718 1475 "NORDESTE" -0.0693500781347932
26 8.66738894907909 1.01016333938294 1650 "ORIENTE" -1.53247687129797
208 3.6576768917803 0.934502240405221 1425 "VALLE DE ABURRA" 0.718814927308523
294 12.4953778763831 0.936918076139056 125 "MAGDALENA MEDIO" 0.3

[R] error message with stats package fx cor R aqua v1.14

2006-03-21 Thread Betty Gilbert
Hello,
Sorry if this is somewhere in the archives, but I couldn't find it. I am
using the function cor() and inputting a large matrix with a column of
zeros, for which I get NAs. I'm looking for a way to overcome that
divide-by-zero problem, but my real issue is this: I specify the method
as "pearson", yet the warning messages always say "kendall". For example:
>  read.csv("ncatest.csv", header = FALSE, sep = ",", dec=".")->ncatest
>  ncatest<- ncatest[,-1]
>   rownames(ncatest2)<- ncatest[,1]
>  ncatest2<-as.matrix(ncatest2)
>  > dim(ncatest2)
[1] 1048211
>  cornca<-cor(ncatest2, method="pearson")
Warning message:
the standard deviation is zero in: cor(x, y, na.method, method == "kendall")


I would like to confirm that I am using the method I specified and that
it is not defaulting to kendall for some strange reason.
Thanks for any assistance.
Sincerely,
betty
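
A side note (my reading, not from the thread): the warning text shows
cor()'s internal call, whose last argument is the logical flag
`method == "kendall"`; the word "kendall" appearing there does not mean the
Kendall method was used. A small sketch reproducing the NA behaviour:

```r
## Sketch: a constant (zero-variance) column makes its correlations NA
## and triggers the zero-sd warning, regardless of the method chosen.
set.seed(3)
m <- cbind(a = rnorm(10), b = rnorm(10), zero = rep(0, 10))
cor(m, method = "pearson")  # the "zero" row and column are NA
```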


Re: [R] Using of LME function in non-replicate data

2006-03-21 Thread Cleber N.Borges


Hello Spencer and all R-users


Many thanks for your reply, Spencer!
I didn't include the data before because I thought it would be
inconvenient.

The data follow below.


The model that I am using is the following one:

( Scheffe model in Z variable ) * ( Scheffe model in X
variable )

Note: as a check, I fit this model in SAS and obtained the same results.
Thanks
Regards


Cleber
Chemistry student




### Model


r.lme <- lme( NP ~ -1 +
  x3:z2 + x1:x2:z2 + x3:z1 + x2:x3:z3 + x1:z1 + x1:x2:x3:z1:z3 +
  x1:x3:z2 + x2:x3:z2 + x3:z1:z2:z3 + x1:z3 + x3:z1:z2 +
  x1:x3:z1 + x2:z1 + x1:x2:z1:z3 + x2:z3 + x2:x3:z2:z3 +
  x1:x2:x3:z3 + x2:x3:z1:z3 + x1:x2:x3:z1 + x1:x2:x3:z2 +
  x1:x2:x3:z1:z2:z3 + x2:z2 + x1:z2 + x3:z3 + x1:x3:z1:z2 +
  x1:x2:z1:z2:z3 + x1:x2:x3:z2:z3 + x1:x3:z1:z2:z3 +
  x1:x3:z2:z3 + x2:z1:z2:z3,
  data = ret, random = ~1 | wp )



### data

  z1   z2   z3   x1   x2   x3  wp     y NP
1.00 0.00 0.00 1.00 0.00 0.00   1  11.6  6
1.00 0.00 0.00 0.00 1.00 0.00   1   5.8  3
1.00 0.00 0.00 0.00 0.00 1.00   1   5.9 12
1.00 0.00 0.00 0.50 0.50 0.00   1  12.4  5
1.00 0.00 0.00 0.50 0.00 0.50   1   4.6  5
1.00 0.00 0.00 0.00 0.50 0.50   1   7.8  7
1.00 0.00 0.00 0.33 0.33 0.33   1   8.5  8
0.00 1.00 0.00 1.00 0.00 0.00   2  11.6  2
0.00 1.00 0.00 0.00 1.00 0.00   2  14.9  2
0.00 1.00 0.00 0.00 0.00 1.00   2 114.0 17
0.00 1.00 0.00 0.50 0.50 0.00   2  19.2 17
0.00 1.00 0.00 0.50 0.00 0.50   2  10.2  3
0.00 1.00 0.00 0.00 0.50 0.50   2  11.2 16
0.00 1.00 0.00 0.33 0.33 0.33   2  18.5 11
0.00 0.00 1.00 1.00 0.00 0.00   3   5.1  4
0.00 0.00 1.00 0.00 1.00 0.00   3   4.7  3
0.00 0.00 1.00 0.00 0.00 1.00   3   8.5  2
0.00 0.00 1.00 0.50 0.50 0.00   3  12.8  3
0.00 0.00 1.00 0.50 0.00 0.50   3   5.4  2
0.00 0.00 1.00 0.00 0.50 0.50   3  20.8 10
0.00 0.00 1.00 0.33 0.33 0.33   3   7.0  9
0.50 0.50 0.00 1.00 0.00 0.00   4  11.6  5
0.50 0.50 0.00 0.00 1.00 0.00   4  13.3  3
0.50 0.50 0.00 0.00 0.00 1.00   4 114.0 19
0.50 0.50 0.00 0.50 0.50 0.00   4  19.8 11
0.50 0.50 0.00 0.50 0.00 0.50   4  10.2  4
0.50 0.50 0.00 0.00 0.50 0.50   4  10.3 15
0.50 0.50 0.00 0.33 0.33 0.33   4   8.3 11
0.50 0.00 0.50 1.00 0.00 0.00   5  11.6  5
0.50 0.00 0.50 0.00 1.00 0.00   5  13.8  4
0.50 0.00 0.50 0.00 0.00 1.00   5  19.4  8
0.50 0.00 0.50 0.50 0.50 0.00   5  11.3  9
0.50 0.00 0.50 0.50 0.00 0.50   5  10.2  4
0.50 0.00 0.50 0.00 0.50 0.50   5  11.2  6
0.50 0.00 0.50 0.33 0.33 0.33   5  14.7 18
0.00 0.50 0.50 1.00 0.00 0.00   6   3.1  2
0.00 0.50 0.50 0.00 1.00 0.00   6  13.7  3
0.00 0.50 0.50 0.00 0.00 1.00   6  19.5  9
0.00 0.50 0.50 0.50 0.50 0.00   6  24.0 10
0.00 0.50 0.50 0.50 0.00 0.50   6  10.2  4
0.00 0.50 0.50 0.00 0.50 0.50   6  22.2 17
0.00 0.50 0.50 0.33 0.33 0.33   6   7.0 10
0.33 0.33 0.33 1.00 0.00 0.00   7  11.5  5
0.33 0.33 0.33 0.00 1.00 0.00   7  13.7  2
0.33 0.33 0.33 0.00 0.00 1.00   7 114.0 18
0.33 0.33 0.33 0.50 0.50 0.00   7  19.8 13
0.33 0.33 0.33 0.50 0.00 0.50   7  10.2  5
0.33 0.33 0.33 0.00 0.50 0.50   7  11.0 16
0.33 0.33 0.33 0.33 0.33 0.33   7  18.3 13




Spencer Graves wrote:

>   You did not provide a simple, self-contained
replicable example, so I can not say for sure.  You
say you have 49 observations in 7 blocks. The issue
described in the post you cite would be a problem if
you had 49 blocks of size 1.  I've tried to fit models
like that, only to find after getting strange results
that 'lme' did not bother to tell me how stupid I was:
 It just tried to do what I asked.
>
>   If you've got 7 blocks with 7 treatments and
10-20 fixed effects, I do not believe that the issue
described in that post would be a problem for you. 
You may have other problems, like too much noise to
find the signals you most care about.  However, those
are different from the issue you cited.
>
>   hope this helps.
>   spence

Re: [R] Modified KS test to handle ties.

2006-03-21 Thread Francisco J. Zagmutt
RSiteSearch("KS ties") turned up this:

http://finzi.psych.upenn.edu/R/library/Matching/html/ks.boot.html

Does this help?

Best

Francisco

>From: "Steve Su" <[EMAIL PROTECTED]>
>To: 
>Subject: [R] Modified KS test to handle ties.
>Date: Wed, 22 Mar 2006 11:28:06 +1100
>
>Dear All,
>
>
>
>I wonder if there is now an implementation of a modified KS test that
>can handle ties?
>
>
>
>Steve.
>
>
>
>
>
>



[R] Modified KS test to handle ties.

2006-03-21 Thread Steve Su
Dear All,

 

I wonder if there is now an implementation of a modified KS test that
can handle ties? 

 

Steve. 

 

 




[R] Help with "step()" function

2006-03-21 Thread Kenneth Cabrera

Hi R users:

When I use these commands:
-
testData<-read.table("testData.dat",header=T)
model1J<-glm(MCHNV~offset(Offset1),data=testData,family="poisson")
step(model1J, direction="forward",
  scope=list(upper=~IA+IM+altura+Region+Region:IA+Region:IM+Region:altura+offset(Offset1)))
-
(the "testData" (testData.dat) is attached to the e-mail to reproduce the
problem):
I got this error:
-
.
.
.
 MCHNV ~ Region + IA + altura + offset(Offset1)

Df Deviance AIC
+ IA:Region  8   540.24 1377.73
+ IM 1   586.84 1410.32
+ altura:Region  8   574.21 1411.70
<none>              600.74 1422.23

Step:  AIC= 1377.73
 MCHNV ~ Region + IA + altura + Region:IA + offset(Offset1)

Error in factor.scope(ffac, list(add = fadd, drop = fdrop)) :
upper scope does not include model
-

What am I doing wrong?

I also tried:

step(model1J, direction="forward",
  scope=list(upper=~IA+IM+altura+Region+IA:Region+Region:IM+Region:altura+offset(Offset1)))

and

step(model1J, direction="forward",
  scope=list(upper=~IA+IM+altura+Region+IA:Region+Region:IA+Region:IM+Region:altura+offset(Offset1)))

and got the same error message.

Thank you for your help.

Kenneth
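
For reference (a sketch on assumed toy data, not a diagnosis of the error
above), a forward step() call runs cleanly when the upper scope formula
strictly contains every term of the starting model:

```r
## Sketch: minimal forward selection with step(); the upper scope
## must include all terms of the starting model.
set.seed(2)
d  <- data.frame(y = rpois(50, 5), a = rnorm(50), b = rnorm(50))
m0 <- glm(y ~ 1, data = d, family = poisson)
step(m0, direction = "forward", scope = list(upper = ~ a + b + a:b))
```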

Re: [R] data import

2006-03-21 Thread Gabor Grothendieck
It's possible the email system changed it, but in what I received there
were no tabs in your sample data, so this should presumably work:

read.table(filename, fill = TRUE)
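
The suggestion can be sketched on the sample lines themselves
(textConnection stands in for the hypothetical file):

```r
## Sketch: fill = TRUE pads short rows with NA, so ragged
## whitespace-separated lines read into a 3 x 9 data frame.
txt <- c("12 23 34 56 78 19",
         "12 34 35 93 89 20 10 40",
         "12 44 202 39 39 59 49 39 439")
d <- read.table(textConnection(txt), fill = TRUE)
dim(d)  # 3 rows, 9 columns
```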


On 3/21/06, dai connie <[EMAIL PROTECTED]> wrote:
> Hi, all
>
> nice to be in such a nice mailing list.
>
> I have a questions about importing a data with postfix xls. The data, each
> of which has different length, looks like
>
> 12 23 34 56 78 19
> 12 34 35 93 89 20 10 40
> 12 44 202 39 39 59 49 39 439
> ..
>
> I tried to save the data as txt file first and still confused how to read
> data with unequal
> lines. Seems each lines are separated by "\n" and each number by "\t". I
> tried the code like
>
> a<-read.table("b.txt",sep="\t",strip.white =TRUE, fill=T)
>
> and it turned out to be a disaster. I really need your help.
>
> Thanks a lot!
>
>
>
>
>



Re: [R] build R on windows

2006-03-21 Thread Duncan Murdoch
On 3/21/2006 6:14 PM, Jennifer Lai wrote:
> Hi,
> I'm not sure if this question has been answered before, but when I 
> execute command  "Rcmd INSTALL --build nws" to build an R package on 
> Windows,
> the build process got stucked on the save image step.
> 
> Here is the snapshot of the build process,
> --- Making package nws 
>adding build stamp to DESCRIPTION
>installing NAMESPACE file and metadata
>installing R files
>save images
> 
> The build process never returns unless I Ctrl-C out of it.
> 
> I also tried with removing SaveImage option from the DESCRIPTION file. 
> This time, the build process got stucked at lazy loading step.
> I then set LazyLoad option to no in the DESCRIPTION file, this allows 
> the build process to generate a zip file. However, when I load the 
> library in R command console
> by typing "library(nws)", the command just hung trying to load the library.
> 
> Here is the content of the description file,
> Package: nws
> Title: R functions for NetWorkSpaces and Sleigh
> Version: 1.3.0
> License:  GPL Version 2 or later
> Depends: R (>=2.1), methods
> SaveImage: true
> URL: http://nws-r.sourceforge.net
> 
> Is there any subtlety between building R packages in Linux and Windows? 
> I can build and load this package under Linux. But can't figure out 
> what's causing the hang on Windows and how to debug the problem.  Has 
> anyone ran into similar problem before, and steps you took to debug the 
> problem?
> I very much appreciate any help you can provide. Thanks!


The main subtlety is that on Windows you need to install most of the 
tools yourself.  Read the instructions in the R Installation and 
Administration manual, and follow them exactly.  A common error is not 
to put the R tools first in the PATH; then Windows finds the wrong 
commands, and things go wrong.

I don't know what debugging tools are available, other than editing the 
scripts to print things out as they go along.  The scripts are normally 
installed in the RHOME/bin directory.

Duncan Murdoch




[R] build R on windows

2006-03-21 Thread Jennifer Lai
Hi,
I'm not sure if this question has been answered before, but when I
execute the command "Rcmd INSTALL --build nws" to build an R package on
Windows, the build process gets stuck at the "save images" step.

Here is the snapshot of the build process,
--- Making package nws 
   adding build stamp to DESCRIPTION
   installing NAMESPACE file and metadata
   installing R files
   save images

The build process never returns unless I Ctrl-C out of it.

I also tried removing the SaveImage option from the DESCRIPTION file.
This time the build process got stuck at the lazy-loading step.
I then set the LazyLoad option to "no" in the DESCRIPTION file, which
allowed the build process to generate a zip file. However, when I load
the library in the R console by typing "library(nws)", the command just
hangs trying to load the library.

Here is the content of the description file,
Package: nws
Title: R functions for NetWorkSpaces and Sleigh
Version: 1.3.0
License:  GPL Version 2 or later
Depends: R (>=2.1), methods
SaveImage: true
URL: http://nws-r.sourceforge.net

Is there any subtlety between building R packages on Linux and on
Windows? I can build and load this package under Linux, but I can't
figure out what is causing the hang on Windows or how to debug the
problem. Has anyone run into a similar problem before, and what steps
did you take to debug it?
I very much appreciate any help you can provide. Thanks!


Regards,
Jennifer



[R] (no subject)

2006-03-21 Thread Linda Lei
Hi there,

 

I use cluster.stats to test which cluster is better, for example:

 

library(fpc) 

data(xclara) 

a3 <- clara(xclara, 3) 

a4 <- clara(xclara, 4) 

d <- dist(xclara) 

val1 <- cluster.stats(d, a3$clust) 

val2 <- cluster.stats(d, a4$clust) 

barplot(c(val1$avg, val2$avg))

 

Which value in the output of cluster.stats can tell us that 3 clusters
are better than 4?

 

I also tried comparing the "kmeans" method with the "clara" method using
cluster.stats, and the same problem shows up.

 

 

Thank you!
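
A hedged suggestion (mine, not from the thread): cluster.stats() returns an
average silhouette width component, avg.silwidth, and larger values
indicate better-separated clusterings, so it is a common criterion for
comparing numbers of clusters:

```r
## Sketch: compare k = 3 vs k = 4 on xclara by average silhouette width.
library(cluster)  # clara() and the xclara data
library(fpc)      # cluster.stats()

data(xclara)
d  <- dist(xclara)
s3 <- cluster.stats(d, clara(xclara, 3)$clustering)$avg.silwidth
s4 <- cluster.stats(d, clara(xclara, 4)$clustering)$avg.silwidth
c(k3 = s3, k4 = s4)  # the larger value suggests the better partition
```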




[R] data import

2006-03-21 Thread dai connie

Hi, all

nice to be in such a nice mailing list.

I have a question about importing data from a file with an .xls
extension. The data, in which each line has a different length, look
like this:

12 23 34 56 78 19
12 34 35 93 89 20 10 40
12 44 202 39 39 59 49 39 439
..

I tried saving the data as a txt file first, but I am still confused
about how to read data with unequal line lengths. It seems the lines are
separated by "\n" and the numbers by "\t". I tried code like

a <- read.table("b.txt", sep="\t", strip.white=TRUE, fill=TRUE)

and it turned out to be a disaster. I really need your help.


Thanks a lot!


[R] (no subject)

2006-03-21 Thread Linda Lei
Hi there,

 

I'm using the function "cluster.stats" to validate clusterings. But
which of the values returned by "cluster.stats" can I use to judge
whether a clustering method is good or not? For example, how can I
evaluate the "kmeans" method with "cluster.stats"?

 

Thank you!




Re: [R] Getting polygons from contiguous grid cells having same value in sp

2006-03-21 Thread Roger Bivand
On Tue, 21 Mar 2006, Waichler, Scott R wrote:

> 
> I have been using the sp package to deal with gridded data.  I would
> like to know how to extract the vertices (x,y) of polygons that outline
> areas of like-valued cells in a grid.  Here is a simple 3x3 grid:
> 
> 2  2  1
> 1  2  1
> 1  1  3
> 
> x <- c(1,1,1,2,2,2,3,3,3) # define a 3 x 3 array of points
> y <- c(1,2,3,1,2,3,1,2,3)
> h <- SpatialPoints(cbind(x,y))  # make these an sp object of the points
> class
> h$z<- c(1,1,2,1,2,2,1,1,3) # add field values
> gridded(h) <- T  # make it a grid
> 
> How could I get the vertices for the region of value=2 cells?  Here is
> what they would be, in clockwise order from the upper left corner:
> (0,3) (2,3) (2,1) (1,1) (1,2) (0,2)

Perhaps R-sig-geo would be a better place to ask (it would):

x <- c(1,1,1,2,2,2,3,3,3)
y <- c(1,2,3,1,2,3,1,2,3)
h <- SpatialPoints(cbind(x,y))
z <- c(1,1,2,1,2,2,1,1,3)
h1 <- SpatialPixelsDataFrame(h, data=list(z=z))
h2 <- as.SpatialPolygons.SpatialPixels(h1)
plot(h2)
h3 <- h2[h1$z == 2]
plot(h3, add=TRUE, col="grey")
library(spgpc) # install from the sourceforge repository, CRAN Spatial 
   # task view link
h4 <- unionSpatialPolygons(h3, rep("z2", sum(z == 2)))
plot(h4, add=TRUE, border="red", lwd=2)

but note that you didn't allow for the coordinates being in the cell 
centres, so you are off by 0.5:

> slot(slot(slot(h4, "polygons")[[1]], "Polygons")[[1]], "coords")
 [,1] [,2]
[1,]  2.5  2.5
[2,]  2.5  1.5
[3,]  1.5  1.5
[4,]  1.5  2.5
[5,]  0.5  2.5
[6,]  0.5  3.5
[7,]  2.5  3.5
[8,]  2.5  2.5

and because gpclib isn't checking to see if line segments are straight 
lines, there remain vertices at (2.5,2.5) and (1.5,3.5) that are strictly 
unneeded.

Next time R-sig-geo?

Roger


> 
> Thanks,
> Scott Waichler
> Pacific Northwest National Laboratory
> scott.waichler _at_ pnl.gov
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
> 

-- 
Roger Bivand
Economic Geography Section, Department of Economics, Norwegian School of
Economics and Business Administration, Helleveien 30, N-5045 Bergen,
Norway. voice: +47 55 95 93 55; fax +47 55 95 95 43
e-mail: [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Getting polygons from contiguous grid cells having same value in sp

2006-03-21 Thread Waichler, Scott R

I have been using the sp package to deal with gridded data.  I would
like to know how to extract the vertices (x,y) of polygons that outline
areas of like-valued cells in a grid.  Here is a simple 3x3 grid:

2  2  1
1  2  1
1  1  3

x <- c(1,1,1,2,2,2,3,3,3) # define a 3 x 3 array of points
y <- c(1,2,3,1,2,3,1,2,3)
h <- SpatialPoints(cbind(x,y))  # make these an sp object of the points
class
h$z<- c(1,1,2,1,2,2,1,1,3) # add field values
gridded(h) <- T  # make it a grid

How could I get the vertices for the region of value=2 cells?  Here is
what they would be, in clockwise order from the upper left corner:
(0,3) (2,3) (2,1) (1,1) (1,2) (0,2)

Thanks,
Scott Waichler
Pacific Northwest National Laboratory
scott.waichler _at_ pnl.gov

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] ROWNAMES error message

2006-03-21 Thread Sean Davis



On 3/21/06 2:26 PM, "mark salsburg" <[EMAIL PROTECTED]> wrote:

> I am getting an error message, which I do not know the source to.
> 
> I have a matrix SAMPLES that has preexisting rownames that I would like to
> change.
> GENE_NAMES contains these rownames.
> 
> 
>> rownames(SAMPLES) = GENE_NAMES
> Error in "dimnames<-.data.frame"(`*tmp*`, value = list(list(V1 = c(3843,  :
> invalid 'dimnames' given for data frame
>> dim(SAMPLES)
> [1] 12626    20
>> dim(GENE_NAMES)
> [1] 12626 1
>> is.data.frame(SAMPLES)
> [1] TRUE
>> is.data.frame(GENE_NAMES)
> [1] TRUE
> 
> I have tried converting GENE_NAMES to a factor, R will not allow me because
> its says "x must be atomic"
> 
> ANY IDEAS??

 rownames() is looking for a vector.  You are asking it to assign a
data.frame.  Try 

 rownames(SAMPLES) <- GENE_NAMES[,1]

Sean
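A small self-contained illustration of the point (toy stand-ins for SAMPLES and GENE_NAMES):

```r
SAMPLES    <- data.frame(s1 = 1:3, s2 = 4:6)        # stand-in for the real data
GENE_NAMES <- data.frame(V1 = c("g1", "g2", "g3"))  # one-column data frame

## rownames(SAMPLES) <- GENE_NAMES    # fails: the RHS is a data frame
rownames(SAMPLES) <- GENE_NAMES[, 1]  # extracting the column gives a vector
rownames(SAMPLES)                     # "g1" "g2" "g3"
```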

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] ROWNAMES error message

2006-03-21 Thread Marc Schwartz (via MN)
On Tue, 2006-03-21 at 14:26 -0500, mark salsburg wrote:
> I am getting an error message, which I do not know the source to.
> 
> I have a matrix SAMPLES that has preexisting rownames that I would like to
> change.

SAMPLES is not a matrix, it is a data frame, as your output shows below.

> GENE_NAMES contains these rownames.
> 
> 
> > rownames(SAMPLES) = GENE_NAMES
> Error in "dimnames<-.data.frame"(`*tmp*`, value = list(list(V1 = c(3843,  :
> invalid 'dimnames' given for data frame
> > dim(SAMPLES)
> [1] 12626    20
> > dim(GENE_NAMES)
> [1] 12626 1
> > is.data.frame(SAMPLES)
> [1] TRUE
> > is.data.frame(GENE_NAMES)
> [1] TRUE
> 
> I have tried converting GENE_NAMES to a factor, R will not allow me because
> its says "x must be atomic"
> 
> ANY IDEAS??

GENE_NAMES is presumably a data frame with a single column. You need to
properly access the single column by name or index and use that as the
RHS of the assignment. So something like one of the following should work:

  rownames(SAMPLES) <- GENE_NAMES$V1

or 

  rownames(SAMPLES) <- GENE_NAMES[, 1]

Use:

  str(GENE_NAMES)

which will display the structure of GENE_NAMES.

'atomic' means that the data in question is one of the primary data
types defined for R. See ?is.atomic for more information.
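The distinction is easy to check directly:

```r
df <- data.frame(V1 = 1:3)
is.atomic(df)     # FALSE: a data frame is a list of columns, not atomic
is.atomic(df$V1)  # TRUE: an extracted column is an atomic vector
```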

HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] ROWNAMES error message

2006-03-21 Thread Liaw, Andy
The object to be assigned as rownames needs to be a vector, not a data frame.
Try something like:

rownames(SAMPLES) <- GENE_NAMES[[1]]

Also, don't confuse a data frame with a matrix:  they are very different.

Andy

From: mark salsburg
> 
> I am getting an error message, which I do not know the source to.
> 
> I have a matrix SAMPLES that has preexisting rownames that I 
> would like to change. GENE_NAMES contains these rownames.
> 
> 
> > rownames(SAMPLES) = GENE_NAMES
> Error in "dimnames<-.data.frame"(`*tmp*`, value = 
> list(list(V1 = c(3843,  :
> invalid 'dimnames' given for data frame
> > dim(SAMPLES)
> [1] 12626    20
> > dim(GENE_NAMES)
> [1] 12626 1
> > is.data.frame(SAMPLES)
> [1] TRUE
> > is.data.frame(GENE_NAMES)
> [1] TRUE
> 
> I have tried converting GENE_NAMES to a factor, R will not 
> allow me because its says "x must be atomic"
> 
> ANY IDEAS??
> 
> Thank you
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@stat.math.ethz.ch mailing list 
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
> 
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] ROWNAMES error message

2006-03-21 Thread mark salsburg
I am getting an error message, and I do not know its source.

I have a matrix SAMPLES that has preexisting rownames that I would like to
change.
GENE_NAMES contains these rownames.


> rownames(SAMPLES) = GENE_NAMES
Error in "dimnames<-.data.frame"(`*tmp*`, value = list(list(V1 = c(3843,  :
invalid 'dimnames' given for data frame
> dim(SAMPLES)
[1] 12626    20
> dim(GENE_NAMES)
[1] 12626 1
> is.data.frame(SAMPLES)
[1] TRUE
> is.data.frame(GENE_NAMES)
[1] TRUE

I have tried converting GENE_NAMES to a factor; R will not allow me because
it says "x must be atomic"

ANY IDEAS??

Thank you

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Classifying time series by shape over time

2006-03-21 Thread Kjetil Brinchmann Halvorsen
Andreas Neumann wrote:
> Dear all,
> 
> I have hundreds of thousands of univariate time series of the form:
> character "seriesid", vector of Date, vector of integer
> (some exemplary data is at the end of the mail)
> 
> I am trying to find the ones which somehow "have a shape" over time that
> looks like the histogramm of a (skewed) normal distribution:
>>  hist(rnorm(200,10,2))
> The "mean" is not interesting, i.e. it does not matter if the first
> nonzero observation happens in the 2. or the 40. month of observation.
> So all that matters is: They should start sometime, the hits per month
> increase, at some point they decrease and then they more or less
> disappear.
> 
> Short Example (hits at consecutive months (Dates omitted)):
> 1. series: 0 0 0 2 5 8 20 42 30 19 6 1 0 0 0-> Good
> 2. series: 0 3 8 9 20 6 0 3 25 67 7 1 0 4 60 20 10 0 4  -> Bad
> 
> Series 1 would be an ideal case of what I am looking for.
> 
> Graphical inspection would be easy but is not an option due to the huge
> amount of series.
> 

Does the function turnpoints() in package pastecs help?

Kjetil

> Questions:
> 
> 1. Which (if at all) of the many packages that handle time series is
> appropriate for my problem?
> 
> 2. Which general approach seems to be the most straightforward and best
> supported by R?
> - Is there a way to test the time series directly (preferably)?
> - Or do I need to "type-cast" them as some kind of histogram
>   data and then test against the pdf of e.g. a normal distribution (but
>   how)?
> - Or something totally different?
> 
> 
> Thank you for your time,
> 
>  Andreas Neumann
> 
> 
> 
> 
> Data Examples (id1 is good, id2 is bad):
> 
>> id1
> dates   hits
> 1  2004-12-01 3
> 2  2005-01-01 4
> 3  2005-02-0110
> 4  2005-03-01 6
> 5  2005-04-0135
> 6  2005-05-0114
> 7  2005-06-0133
> 8  2005-07-0113
> 9  2005-08-01 3
> 10 2005-09-01 9
> 11 2005-10-01 8
> 12 2005-11-01 4
> 13 2005-12-01 3
> 
> 
>> id2
> dates   hits
> 1  2001-01-01 6
> 2  2001-02-01 5
> 3  2001-03-01 5
> 4  2001-04-01 6
> 5  2001-05-01 2
> 6  2001-06-01 5
> 7  2001-07-01 1
> 8  2001-08-01 6
> 9  2001-09-01 4
> 10 2001-10-0110
> 11 2001-11-01 0
> 12 2001-12-01 3
> 13 2002-01-01 6
> 14 2002-02-01 5
> 15 2002-03-01 1
> 16 2002-04-01 2
> 17 2002-05-01 4
> 18 2002-06-01 4
> 19 2002-07-01 0
> 20 2002-08-01 1
> 21 2002-09-01 0
> 22 2002-10-01 2
> 23 2002-11-01 2
> 24 2002-12-01 2
> 25 2003-01-01 2
> 26 2003-02-01 3
> 27 2003-03-01 7
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] transform shapefiles in UTM to lat/long

2006-03-21 Thread Dylan Beaudette
On Tuesday 21 March 2006 08:13 am, Fredy O. Mejia wrote:
> Dear all:
>
> I have a shapefile in UTM coordinate system and I would like to transform
> it to Lat/Log coordinates (WSG84).  The package PBSmapping contains
> function convUL to transform between the two coordinate systems when data
> is in the form of a data frame with attributes specifying the coordinate
> system. However, when shapefiles are imported using function read.shape
> (package maptools), a list instead of a data frame is generated.  I could
> extract all the polygons in the list and transform them, individually, but
> I hope there is an easier way of doing that.  Besides, the shapefile i'm
> trying to convert has two different UTM zones (is from Guatemala, and it
> covers zones 15 and 16).  Any sugestions would be greatly appreciated.
>
> Thanks,
>
> Fredy
>
>   [[alternative HTML version deleted]]
>

Fredy,

I would recommend performing the inverse projection with the GDAL/OGR library 
first. 


-- 
Dylan Beaudette
Soils and Biogeochemistry Graduate Group
University of California at Davis
530.754.7341

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] transform shapefiles in UTM to lat/long

2006-03-21 Thread Roger Bivand
On Tue, 21 Mar 2006, Fredy O. Mejia wrote:

> Dear all:
> 
> I have a shapefile in UTM coordinate system and I would like to transform it
> to Lat/Log coordinates (WSG84).  The package PBSmapping contains function
> convUL to transform between the two coordinate systems when data is in the
> form of a data frame with attributes specifying the coordinate system.
> However, when shapefiles are imported using function read.shape (package
> maptools), a list instead of a data frame is generated.  I could extract all
> the polygons in the list and transform them, individually, but I hope there
> is an easier way of doing that.  Besides, the shapefile i'm trying to
> convert has two different UTM zones (is from Guatemala, and it covers zones
> 15 and 16).  Any sugestions would be greatly appreciated.

My suggestion would be to read the shapefiles using the readOGR() function 
in the rgdal package, and then use the transform() method in that package 
to change from the appropriate UTM to geographical coordinates. 

Because I think your question involves more than that (I don't understand
how a single shapefile can use two UTM zones), I suggest moving this
thread to the R-sig-geo mailing list, and have cc-ed this reply there.

Please also say which platform and version of R you are using.
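A sketch of that workflow (the directory and layer names below are hypothetical; in rgdal the coordinate transformation method is spelled spTransform(), in case transform() above was shorthand for it):

```r
## Sketch only -- requires rgdal and an actual shapefile on disk:
## library(rgdal)
## shp <- readOGR("data", "guatemala")   # reads data/guatemala.shp
## ll  <- spTransform(shp, CRS("+proj=longlat +datum=WGS84"))
```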

> 
> Thanks,
> 
> Fredy
> 

-- 
Roger Bivand
Economic Geography Section, Department of Economics, Norwegian School of
Economics and Business Administration, Helleveien 30, N-5045 Bergen,
Norway. voice: +47 55 95 93 55; fax +47 55 95 95 43
e-mail: [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Hessian from optim()

2006-03-21 Thread Thomas Lumley
On Tue, 21 Mar 2006, Ingmar Visser wrote:
> The optim help page says:
>
> hessian Logical. Should a numerically differentiated Hessian matrix be
> returned?
>
> I interpret this as providing a finite differences approximation of the
> Hessian (possibly based on exact gradients?). Is that the case or is it a
> Hessian that results from the optimization process?
>

Yes, when it says "numerically differentiated" it means numerically 
differentiated.  It uses finite differences of the gradient (the analytic 
gradient if that was supplied, otherwise a finite difference 
approximation).

-thomas

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Hessian from optim()

2006-03-21 Thread Dimitris Rizopoulos
I think it should be the former, since for BFGS and L-BFGS-B (the only 
optim() methods for which an approximation to the Hessian is maintained) 
it is known that the Hessian update at convergence of the parameters 
might not yet be a good approximation of the true Hessian.

Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://www.med.kuleuven.be/biostat/
 http://www.student.kuleuven.be/~m0390867/dimitris.htm


- Original Message - 
From: "Ingmar Visser" <[EMAIL PROTECTED]>
To: "Thomas Lumley" <[EMAIL PROTECTED]>; "Gregor Gorjanc" 
<[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, March 21, 2006 5:41 PM
Subject: Re: [R] Hessian from optim()


>
>>> Hello!
>>>
>>> Looking on how people use optim to get MLE I also noticed that one 
>>> can
>>> use returned Hessian to get corresponding standard errors i.e. 
>>> something
>>> like
>>>
>>> result <- optim(<< snip >>, hessian=T)
>>> result$par  # point estimates
>>> vc <- solve(result$hessian) # var-cov matrix
>>> se <- sqrt(diag(vc))# standard errors
>>>
>>> What is actually Hessian representing here? I appologize for lack 
>>> of
>>> knowledge, but ... Attached PDF can show problem I am facing with 
>>> this
>>> issue.
>>>
>>
>> The Hessian is the second derivative of the objective function, so 
>> if the
>> objective function is minus a loglikelihood the hessian is the 
>> observed
>> Fisher information.   The inverse of the hessian is thus an 
>> estimate of
>> the variance-covariance matrix of the parameters.
>>
>> For some models this is exactly I/n in your notation, for others it 
>> is
>> just close (and there are in fact theoretical reasons to prefer the
>> observed information).  I don't remember whether the two-parameter 
>> gamma
>> family is one where the observed and expected information are 
>> identical.
>
>
>
> The optim help page says:
>
> hessian Logical. Should a numerically differentiated Hessian 
> matrix be
> returned?
>
> I interpret this as providing a finite differences approximation of 
> the
> Hessian (possibly based on exact gradients?). Is that the case or is 
> it a
> Hessian that results from the optimization process?
>
> Best, Ingmar
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
> 


Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Classifying time series by shape over time

2006-03-21 Thread Gabor Grothendieck
If it's good enough just to examine the number of strictly positive runs, then

sum(rle(sign(id1$hits))$values == 1)

will give 1 in the good case (one run) and > 1 in the bad case (multiple runs).
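Checked against the two example series from the original post:

```r
hits1 <- c(0,0,0,2,5,8,20,42,30,19,6,1,0,0,0)             # "good" series
hits2 <- c(0,3,8,9,20,6,0,3,25,67,7,1,0,4,60,20,10,0,4)   # "bad" series
npos  <- function(x) sum(rle(sign(x))$values == 1)        # count positive runs
npos(hits1)   # 1
npos(hits2)   # 4
```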

On 3/21/06, Andreas Neumann <[EMAIL PROTECTED]> wrote:
> Dear all,
>
> I have hundreds of thousands of univariate time series of the form:
> character "seriesid", vector of Date, vector of integer
> (some exemplary data is at the end of the mail)
>
> I am trying to find the ones which somehow "have a shape" over time that
> looks like the histogramm of a (skewed) normal distribution:
> >  hist(rnorm(200,10,2))
> The "mean" is not interesting, i.e. it does not matter if the first
> nonzero observation happens in the 2. or the 40. month of observation.
> So all that matters is: They should start sometime, the hits per month
> increase, at some point they decrease and then they more or less
> disappear.
>
> Short Example (hits at consecutive months (Dates omitted)):
> 1. series: 0 0 0 2 5 8 20 42 30 19 6 1 0 0 0-> Good
> 2. series: 0 3 8 9 20 6 0 3 25 67 7 1 0 4 60 20 10 0 4  -> Bad
>
> Series 1 would be an ideal case of what I am looking for.
>
> Graphical inspection would be easy but is not an option due to the huge
> amount of series.
>
> Questions:
>
> 1. Which (if at all) of the many packages that handle time series is
> appropriate for my problem?
>
> 2. Which general approach seems to be the most straightforward and best
> supported by R?
> - Is there a way to test the time series directly (preferably)?
> - Or do I need to "type-cast" them as some kind of histogram
>  data and then test against the pdf of e.g. a normal distribution (but
>  how)?
> - Or something totally different?
>
>
> Thank you for your time,
>
> Andreas Neumann
>
>
>
>
> Data Examples (id1 is good, id2 is bad):
>
> > id1
>dates   hits
> 1  2004-12-01    3
> 2  2005-01-01    4
> 3  2005-02-01   10
> 4  2005-03-01    6
> 5  2005-04-01   35
> 6  2005-05-01   14
> 7  2005-06-01   33
> 8  2005-07-01   13
> 9  2005-08-01    3
> 10 2005-09-01    9
> 11 2005-10-01    8
> 12 2005-11-01    4
> 13 2005-12-01    3
>
>
> > id2
>dates   hits
> 1  2001-01-01    6
> 2  2001-02-01    5
> 3  2001-03-01    5
> 4  2001-04-01    6
> 5  2001-05-01    2
> 6  2001-06-01    5
> 7  2001-07-01    1
> 8  2001-08-01    6
> 9  2001-09-01    4
> 10 2001-10-01   10
> 11 2001-11-01    0
> 12 2001-12-01    3
> 13 2002-01-01    6
> 14 2002-02-01    5
> 15 2002-03-01    1
> 16 2002-04-01    2
> 17 2002-05-01    4
> 18 2002-06-01    4
> 19 2002-07-01    0
> 20 2002-08-01    1
> 21 2002-09-01    0
> 22 2002-10-01    2
> 23 2002-11-01    2
> 24 2002-12-01    2
> 25 2003-01-01    2
> 26 2003-02-01    3
> 27 2003-03-01    7
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Hessian from optim()

2006-03-21 Thread Ingmar Visser

>> Hello!
>> 
>> Looking on how people use optim to get MLE I also noticed that one can
>> use returned Hessian to get corresponding standard errors i.e. something
>> like
>> 
>> result <- optim(<< snip >>, hessian=T)
>> result$par  # point estimates
>> vc <- solve(result$hessian) # var-cov matrix
>> se <- sqrt(diag(vc))# standard errors
>> 
>> What is actually Hessian representing here? I appologize for lack of
>> knowledge, but ... Attached PDF can show problem I am facing with this
>> issue.
>> 
> 
> The Hessian is the second derivative of the objective function, so if the
> objective function is minus a loglikelihood the hessian is the observed
> Fisher information.   The inverse of the hessian is thus an estimate of
> the variance-covariance matrix of the parameters.
> 
> For some models this is exactly I/n in your notation, for others it is
> just close (and there are in fact theoretical reasons to prefer the
> observed information).  I don't remember whether the two-parameter gamma
> family is one where the observed and expected information are identical.



The optim help page says:

hessian Logical. Should a numerically differentiated Hessian matrix be
returned?

I interpret this as providing a finite differences approximation of the
Hessian (possibly based on exact gradients?). Is that the case or is it a
Hessian that results from the optimization process?

Best, Ingmar

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to use: library lattice: barchart

2006-03-21 Thread paladini
Thank you both very much. It has to be "abarley =
data.frame(cbind(as.numeric(ayield),avariety,ayear,asite))".
Then the plot looks really fine!

Best regards

Claudia


Zitat von Robert Baer <[EMAIL PROTECTED]>:

> > it looks totaly different and I get the error message:
> > "x should be numeric in: bwplot.formula(x = ayield ~ avariety | asite,
> > data = list(ayield = c(2,"
> 
> ---
> 
> > cbind(some numeric and not numeric columns)
> >
> > gives you all columns to be character and when you make data.frame
> > from it it is converted to factors.
> >
> > so
> >
> > > abarley = data.frame(ayield,avariety,ayear,asite)
> >
> > brings you close but than you need ayear to be factor. Either convert
> > it in data frame or on fly
> Actually, as the warning suggests, you have all factors in the dataframe but
> you need to convert ayield to a numeric.  Something like,
> 
> abarley = data.frame(cbind(as.numeric(ayield),avariety,ayear,asite))
> 
> >
> > barchart(ayield ~ avariety | asite, data = abarley, groups =
> > factor(ayear), layout = c(1,5) )
> >
> > HTH
> > Petr
> 
> 
> 
> Robert W. Baer, Ph.D.
> Associate Professor
> Department of Physiology
> A. T. Still University of Health Science
> 800 W. Jefferson St.
> Kirksville, MO 63501-1497 USA
> 
> 
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] transform shapefiles in UTM to lat/long

2006-03-21 Thread Fredy O. Mejia
Dear all:

I have a shapefile in the UTM coordinate system and I would like to transform it
to lat/long coordinates (WGS84).  The package PBSmapping contains the function
convUL to transform between the two coordinate systems when the data are in the
form of a data frame with attributes specifying the coordinate system.
However, when shapefiles are imported using the function read.shape (package
maptools), a list instead of a data frame is generated.  I could extract all
the polygons in the list and transform them individually, but I hope there
is an easier way of doing that.  Besides, the shapefile I'm trying to
convert has two different UTM zones (it is from Guatemala, and it covers zones
15 and 16).  Any suggestions would be greatly appreciated.

Thanks,

Fredy

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Hessian from optim()

2006-03-21 Thread Thomas Lumley
On Tue, 21 Mar 2006, Gregor Gorjanc wrote:

> Hello!
>
> Looking on how people use optim to get MLE I also noticed that one can
> use returned Hessian to get corresponding standard errors i.e. something
> like
>
> result <- optim(<< snip >>, hessian=T)
> result$par  # point estimates
> vc <- solve(result$hessian) # var-cov matrix
> se <- sqrt(diag(vc))# standard errors
>
> What is actually Hessian representing here? I appologize for lack of
> knowledge, but ... Attached PDF can show problem I am facing with this
> issue.
>

The Hessian is the second derivative of the objective function, so if the 
objective function is minus a loglikelihood the hessian is the observed 
Fisher information.   The inverse of the hessian is thus an estimate of 
the variance-covariance matrix of the parameters.

For some models this is exactly I/n in your notation, for others it is 
just close (and there are in fact theoretical reasons to prefer the 
observed information).  I don't remember whether the two-parameter gamma 
family is one where the observed and expected information are identical.

-thomas

PS:  \stackrel{d}{\to}
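A self-contained check of the standard-error recipe quoted above, on a model where the answer is known (a normal mean; estimating log(sd) so the optimizer is unconstrained is my choice here, not something from the thread):

```r
set.seed(1)
x   <- rnorm(500, mean = 2, sd = 3)
nll <- function(p) -sum(dnorm(x, mean = p[1], sd = exp(p[2]), log = TRUE))
fit <- optim(c(0, 0), nll, hessian = TRUE)  # Hessian: finite differences at the optimum
se  <- sqrt(diag(solve(fit$hessian)))       # inverse observed information
se[1]              # should be close to the textbook se of a mean:
sd(x) / sqrt(500)
```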

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Classifying time series by shape over time

2006-03-21 Thread Andreas Neumann
Dear all,

I have hundreds of thousands of univariate time series of the form:
character "seriesid", vector of Date, vector of integer
(some exemplary data is at the end of the mail)

I am trying to find the ones which somehow "have a shape" over time that
looks like the histogram of a (skewed) normal distribution:
>  hist(rnorm(200,10,2))
The "mean" is not interesting, i.e. it does not matter if the first
nonzero observation happens in the 2nd or the 40th month of observation.
So all that matters is: They should start sometime, the hits per month
increase, at some point they decrease and then they more or less
disappear.

Short Example (hits at consecutive months (Dates omitted)):
1. series: 0 0 0 2 5 8 20 42 30 19 6 1 0 0 0-> Good
2. series: 0 3 8 9 20 6 0 3 25 67 7 1 0 4 60 20 10 0 4  -> Bad

Series 1 would be an ideal case of what I am looking for.

Graphical inspection would be easy but is not an option due to the huge
amount of series.

Questions:

1. Which (if at all) of the many packages that handle time series is
appropriate for my problem?

2. Which general approach seems to be the most straightforward and best
supported by R?
- Is there a way to test the time series directly (preferably)?
- Or do I need to "type-cast" them as some kind of histogram
  data and then test against the pdf of e.g. a normal distribution (but
  how)?
- Or something totally different?


Thank you for your time,

 Andreas Neumann




Data Examples (id1 is good, id2 is bad):

> id1
dates   hits
1  2004-12-01    3
2  2005-01-01    4
3  2005-02-01   10
4  2005-03-01    6
5  2005-04-01   35
6  2005-05-01   14
7  2005-06-01   33
8  2005-07-01   13
9  2005-08-01    3
10 2005-09-01    9
11 2005-10-01    8
12 2005-11-01    4
13 2005-12-01    3


> id2
dates   hits
1  2001-01-01    6
2  2001-02-01    5
3  2001-03-01    5
4  2001-04-01    6
5  2001-05-01    2
6  2001-06-01    5
7  2001-07-01    1
8  2001-08-01    6
9  2001-09-01    4
10 2001-10-01   10
11 2001-11-01    0
12 2001-12-01    3
13 2002-01-01    6
14 2002-02-01    5
15 2002-03-01    1
16 2002-04-01    2
17 2002-05-01    4
18 2002-06-01    4
19 2002-07-01    0
20 2002-08-01    1
21 2002-09-01    0
22 2002-10-01    2
23 2002-11-01    2
24 2002-12-01    2
25 2003-01-01    2
26 2003-02-01    3
27 2003-03-01    7

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Scaling behavior ov bVar from lmer models

2006-03-21 Thread Alan Olav Bergland
Hi Harold,

Ahh, yes, that makes things scale appropriately.

> lmer(distance~age+(age|Subject), data=OrthoFem)@bVar$Subject[2,2,] *
(attr(VarCorr(lmer(distance~age+(age|Subject), data=OrthoFem)), "sc")^2)
[1] 0.01020546 0.01020546 0.01020546 0.01020546 0.01020546 0.01020546
0.01020546 0.01020546 0.01020546 0.01020546 0.01020546

OrthoFem$distance2 <- OrthoFem$distance*100
> lmer(distance2~age+(age|Subject), data=OrthoFem)@bVar$Subject[2,2,] *
(attr(VarCorr(lmer(distance2~age+(age|Subject), data=OrthoFem)), "sc")^2)
[1] 102.0546 102.0546 102.0546 102.0546 102.0546 102.0546 102.0546
102.0546 102.0546 102.0546 102.0546

Thanks,
Alan



On Mar 21, 2006, at 8:14 AM, Doran, Harold wrote:

> Alan:
>
> I think you need to multiply the values in the bVar slot by the residual
> variance to get the correct estimates.
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Alan Bergland
> Sent: Tuesday, March 21, 2006 6:31 AM
> To: r-help@stat.math.ethz.ch
> Subject: [R] Scaling behavior ov bVar from lmer models
>
> Hi all,
>
> To follow up on an older thread, it was suggested that the  
> following
> would produce confidence intervals for the estimated BLUPs from a  
> linear
> mixed effect model:
>
>
> OrthoFem<-Orthodont[Orthodont$Sex=="Female",]
> fm1OrthF. <- lmer(distance~age+(age|Subject), data=OrthoFem)
>
> fm1.s <- coef(fm1OrthF.)$Subject
> fm1.s.var <- fm1OrthF.@bVar$Subject
> fm1.s0.s <- sqrt(fm1.s.var[1,1,])
> fm1.s0.a <- sqrt(fm1.s.var[2,2,])
> fm1.s[,1]+outer(fm1.s0.s, c(-2,0,2))
> fm1.s[,2]+outer(fm1.s0.a, c(-2,0,2))
>
> [,1] [,2] [,3]
>  [1,] 11.08578 14.48469 17.88359
>  [2,] 13.86631 17.26521 20.66412
>  [3,] 13.37444 16.77334 20.17224
>  [4,] 13.55727 16.95617 20.35508
>  [5,] 14.96331 18.36221 21.76112
>  [6,] 13.88490 17.28381 20.68271
>  [7,] 12.65522 16.05412 19.45303
>  [8,] 16.00368 19.40258 22.80149
>  [9,] 12.95778 16.35669 19.75559
> [10,] 15.62506 19.02396 22.42287
> [11,] 15.73831 19.13721 22.53612
>
>> fm1.s[,2]+outer(fm1.s0.a, c(-2,0,2))
> [,1]  [,2]  [,3]
>  [1,] 0.07354181 0.3758789 0.6782160
>  [2,] 0.05062219 0.3529593 0.6552964
>  [3,] 0.09632606 0.3986632 0.7010003
>  [4,] 0.10176055 0.4040977 0.7064348
>  [5,] 0.08322913 0.3855662 0.6879033
>  [6,] 0.21706649 0.5194036 0.8217407
>  [7,] 0.33132614 0.6336632 0.9360004
>  [8,] 0.05382874 0.3561658 0.6585029
>  [9,] 0.37048154 0.6728186 0.9751558
> [10,] 0.22354726 0.5258844 0.8282215
> [11,] 0.34756193 0.6498990 0.9522361
>>
>
> These matrices contain in the middle column the BLUPs for each female
> individual in the study, and on the left and right columns the  
> upper and
> lower confidence intervals.
>
>
> However, when the response variable is scaled up by a factor of  
> 100, the
> CIs do not scale accordingly.  Recall that the variance, SE, and CI
> scale with the magnitude of the variable at hand.  I.e.,
>
> x<-c(1,2,3,4,5)
> var(x)
>> 2.5
>
> x2<-x*10
> var(x2)
>> 250
>
>
> So, to rerun the above example with "distance" scaled up by a  
> factor of
> 100:
>
> OrthoFem$distance2 <- OrthoFem$distance*100
> fm1OrthF. <- lmer(distance2~age+(age|Subject), data=OrthoFem)
>
> fm1.s <- coef(fm1OrthF.)$Subject
> fm1.s.var <- fm1OrthF.@bVar$Subject
> fm1.s0.s <- sqrt(fm1.s.var[1,1,])
> fm1.s0.a <- sqrt(fm1.s.var[2,2,])
> fm1.s[,1]+outer(fm1.s0.s, c(-2,0,2))
> fm1.s[,2]+outer(fm1.s0.a, c(-2,0,2))
>
>> fm1.s[,1]+outer(fm1.s0.s, c(-2,0,2))
>   [,1] [,2] [,3]
>  [1,] 1445.070 1448.469 1451.868
>  [2,] 1723.122 1726.521 1729.920
>  [3,] 1673.935 1677.334 1680.733
>  [4,] 1692.218 1695.617 1699.016
>  [5,] 1832.822 1836.221 1839.620
>  [6,] 1724.982 1728.381 1731.780
>  [7,] 1602.014 1605.412 1608.811
>  [8,] 1936.859 1940.258 1943.657
>  [9,] 1632.270 1635.669 1639.068
> [10,] 1898.997 1902.396 1905.795
> [11,] 1910.322 1913.721 1917.120
>> fm1.s[,2]+outer(fm1.s0.a, c(-2,0,2))
>   [,1] [,2] [,3]
>  [1,] 37.28555 37.58789 37.89023
>  [2,] 34.99359 35.29593 35.59827
>  [3,] 39.56398 39.86632 40.16865
>  [4,] 40.10743 40.40977 40.71210
>  [5,] 38.25429 38.55662 38.85896
>  [6,] 51.63802 51.94036 52.24270
>  [7,] 63.06399 63.36632 63.66866
>  [8,] 35.31425 35.61658 35.91892
>  [9,] 66.97953 67.28186 67.58420
> [10,] 52.28610 52.58844 52.89077
> [11,] 64.68757 64.98990 65.29224
>
> Note that the BLUPs scaled accordingly, but the CIs do not.  For
> example, take fm1.s[,2][11,] from both examples:
>
> ##unscaled
> [11,] 0.34756193 0.6498990 0.9522361
>
>
> ##scaled
> [11,] 64.68757 64.98990 65.29224
>
> In both cases the upper CI is ~0.3 greater than the BLUP.
>
>
> This seems very strange to me.  Am I totally misusing the bVar  
> slot?  If
> so, is there a way to obtain variances, or SE's associated with each
> BLUP?
>
> Thanks,
> Alan
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html

Re: [R] How to use: library lattice: barchart

2006-03-21 Thread Robert Baer
> it looks totally different and I get the error message:
> "x should be numeric in: bwplot.formula(x = ayield ~ avariety | asite,
> data = list(ayield = c(2,"

---

> cbind(some numeric and not numeric columns)
>
> makes all the columns character, and when you build a data.frame
> from it, they are converted to factors.
>
> so
>
> > abarley = data.frame(ayield,avariety,ayear,asite)
>
> brings you close, but then you need ayear to be a factor. Either
> convert it in the data frame or on the fly
Actually, as the error message suggests, you have all factors in the data
frame, but you need ayield to be numeric.  Something like,

abarley = data.frame(ayield = as.numeric(ayield), avariety, ayear, asite)

(keeping the cbind() would coerce ayield straight back to character).

>
> barchart(ayield ~ avariety | asite, data = abarley, groups =
> factor(ayear), layout = c(1,5) )
>
> HTH
> Petr



Robert W. Baer, Ph.D.
Associate Professor
Department of Physiology
A. T. Still University of Health Science
800 W. Jefferson St.
Kirksville, MO 63501-1497 USA

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to use: library lattice: barchart

2006-03-21 Thread Petr Pikal
Hi

cbind(some numeric and not numeric columns)

makes all the columns character, and when you build a data.frame 
from it, they are converted to factors.

so

> abarley = data.frame(ayield,avariety,ayear,asite)

brings you close, but then you need ayear to be a factor. Either 
convert it in the data frame or on the fly

barchart(ayield ~ avariety | asite, data = abarley, groups = 
factor(ayear), layout = c(1,5) )
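
The coercion at issue can be seen in a minimal sketch (variable names
shortened from the example):

```r
## Minimal sketch of the coercion described above: cbind() on mixed
## types yields a character matrix, so every column of the resulting
## data.frame is character -- and becomes a factor under the
## stringsAsFactors = TRUE default of R versions of this era.
ayield <- c(2, 3, 5, 6)
asite  <- c("iu", "gt", "jt", "tr")

bad  <- data.frame(cbind(ayield, asite))  # ayield is no longer numeric
good <- data.frame(ayield, asite)         # ayield stays numeric

str(bad)
str(good)
```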

HTH
Petr

BTW, if you encounter any error other than

"Error: syntax error in: ..."

it's time to look at your data with

?str, class, typeof, ...

and any other structure- and type-revealing tools.

Cheers.


On 21 Mar 2006 at 12:05, [EMAIL PROTECTED] wrote:

Date sent:  Tue, 21 Mar 2006 12:05:27 +0100
From:   [EMAIL PROTECTED]
To: r-help@stat.math.ethz.ch
Subject:[R] How to use: library lattice: barchart

> Dear ladies and gentlemen!
> 
> In the help text for the xyplot (library(lattice), help(xyplot)) is an
> example given how one can use barchart:
> 
> barchart(yield ~ variety | site, data = barley,
>   groups = year, layout = c(1,6),
>   ylab = "Barley Yield (bushels/acre)",
>   scales = list(x = list(abbreviate = TRUE,
> minlength = 5)))
> 
> I want my data to be represented just in the same way. But when I try
> it like this:
> 
> 
> ayield = c(2,3,5,6,3,4,7,8,9,2,3,5,6,1,2,3,4,2,6,8)
> avariety = c(rep("A",5),rep("B",5),rep("C",5),rep("D",5))
> ayear = (c(rep(1931,10),rep(1932,10)))
> asite = c(rep(c("iu","gt","jt","jhzt","tr"),4))
> abarley = data.frame(cbind(ayield,avariety,ayear,asite))
> 
> barchart(ayield ~ avariety | asite, data = abarley,groups = ayear,
> layout = c(1,5) )
> 
> it looks totally different and I get the error message:
> "x should be numeric in: bwplot.formula(x = ayield ~ avariety | asite,
> data = list(ayield = c(2,"
> 
> What did I do wrong?
> Can anybody help me?
> 
> Best regards, thank you very much
> 
> 
> Claudia Paladini
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html

Petr Pikal
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Fixed legend in vcd/mosaicplot (package vcd)

2006-03-21 Thread Uwe Ligges
Dieter Menne wrote:

> Uwe Ligges  statistik.uni-dortmund.de> writes:
> 
> 
>>>I want to attach a fixed legend to mosaicplot, e.g. -3..3 independently of
>>>the data in the graphics. This must be hidden somehow in shading/legend, but
>>>I could not get the right handle to it.
>>
>>Example:
>>
>>   plot(1:10)
>>   legend(par("usr")[2], par("usr")[4],
>>  legend="Example", pch=1, xjust=1)
> 
> 
> :> I would not dare to ask this here. mosaicplot is part of package vcd; it's
> now grid based, and has a very flexible but somewhat awkward decoration
> interface. "legend" is the interface to the shading legend.
> 
> Dieter
> 


Oh sorry, was not aware of that. I'd have to look into the "grid" topic 
myself (and I left Paul's book at home ;-)), hence let the real experts 
comment on this issue...

Uwe

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to use: library lattice: barchart

2006-03-21 Thread P Ehlers
I think your problem is the definition of abarley. You're making
ayield into a factor. Have a look at str(abarley).

Peter Ehlers

[EMAIL PROTECTED] wrote:
> Dear ladies and gentlemen!
> 
> In the help text for the xyplot (library(lattice), help(xyplot)) is an example
> given how one can use barchart:
> 
> barchart(yield ~ variety | site, data = barley,
>   groups = year, layout = c(1,6),
>   ylab = "Barley Yield (bushels/acre)",
>   scales = list(x = list(abbreviate = TRUE,
> minlength = 5)))
>   
> I want my data to be represented just in the same way. But when I try it like
> this:
> 
> 
> ayield = c(2,3,5,6,3,4,7,8,9,2,3,5,6,1,2,3,4,2,6,8)
> avariety = c(rep("A",5),rep("B",5),rep("C",5),rep("D",5))
> ayear = (c(rep(1931,10),rep(1932,10)))
> asite = c(rep(c("iu","gt","jt","jhzt","tr"),4))
> abarley = data.frame(cbind(ayield,avariety,ayear,asite))
> 
> barchart(ayield ~ avariety | asite, data = abarley,groups = ayear, layout =
> c(1,5) )
> 
> it looks totally different and I get the error message:
> "x should be numeric in: bwplot.formula(x = ayield ~ avariety | asite, data =
> list(ayield = c(2,"
> 
> What did I do wrong?
> Can anybody help me?
> 
> Best regards, thank you very much
> 
> 
> Claudia Paladini
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] computing the average response by nlme

2006-03-21 Thread Marc Bernard
Dear All,
   
  let E(Y | t, b, theta) be the conditional  expectation of Y where b is a 
vector of random effects, theta is a vector of fixed effects, and t is the
time of observation (t = 1, 2, ..., n).
   
  I wonder if nlme has a function to compute E(Y | t, theta). I know that it
can be done by approximating the integral over b of E(Y | t, b, theta), but I
am wondering whether nlme has a function to do that directly.
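
The quantity sought is the marginal (population-averaged) mean, obtained by
integrating the conditional mean over the random-effects distribution
(writing p(b | theta) for its density, typically N(0, Psi)):

```latex
E(Y \mid t, \theta) \;=\; \int E(Y \mid t, b, \theta)\, p(b \mid \theta)\, db
```

For a nonlinear conditional mean this integral is not the same as the
conditional mean evaluated at b = 0, which is presumably why a direct
function is being asked for.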
   
  Thank you,
   
  Bernard
   
   
   



[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] nested anova diagnostics

2006-03-21 Thread Leonardo D Bacigalupe
Hello all,

I have two questions regarding nested anova.
I've attached a simple example, where a male is mated with 4 females, 
and the intensity of the colour of the eyes is measured in two 
offspring from each female.

1) As I understand it, residuals in a nested anova are the difference 
between an observed value and the predicted value, which is equal to 
the sample mean for the cell that contains that observed value (e.g. 
residual for ind 1 = 58.5 - 59 = -0.5). Therefore for the dataset below 
there should be 24 residuals and 24 fitted values, to be used for 
diagnostics, etc.

As I read in Venables & Ripley (4th Edition) pp 279-286, there should 
be 4 strata in this experiment 
(model<-aov(eye~male/female+Error(male/female)): (i) one corresponding 
to the total of all observations, (ii) one corresponding to contrasts 
between male totals, (iii) one corresponding to contrasts between 
females and (iv) the residual stratum. Residual and fitted values for 
this model can be obtained from resid(model[[4]]) and 
fitted(model[[4]]) (pp 284 V&R). Given this, could someone please 
explain to me how the fitted values are calculated in this case? It seems 
that Heiberger 1989 (cited in V&R) offers some explanation, but unfortunately 
the library does not have the book, and there is no one around to ask.


2) In the present case, it seems that the values of fitted(model[[4]]) are 
all 0.  Is there any particular reason for this result?
     [,1]
13      0
14      0
15      0
16      0
17      0
18      0
19      0
20      0
21      0
22      0
23      0
24      0


eye   male  female
58.5  1     1
59.5  1     1
77.8  1     2
80.9  1     2
84.0  1     3
83.6  1     3
70.1  1     4
68.3  1     4
69.8  2     1
69.8  2     1
56.0  2     2
54.5  2     2
50.7  2     3
49.3  2     3
63.8  2     4
65.8  2     4
56.6  3     1
57.5  3     1
77.8  3     2
79.2  3     2
69.9  3     3
69.2  3     3
62.1  3     4
64.5  3     4
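
For reference, the example above can be entered and fitted in R like this
(a sketch: male and female are coded as factors, which the nesting requires):

```r
## The eye-colour data from the table above, with male and female as
## factors; the model follows the aov() call quoted from V&R.
dat <- data.frame(
  eye = c(58.5, 59.5, 77.8, 80.9, 84.0, 83.6, 70.1, 68.3,
          69.8, 69.8, 56.0, 54.5, 50.7, 49.3, 63.8, 65.8,
          56.6, 57.5, 77.8, 79.2, 69.9, 69.2, 62.1, 64.5),
  male   = factor(rep(1:3, each = 8)),            # 3 males, 8 obs each
  female = factor(rep(rep(1:4, each = 2), 3))     # 4 females per male, 2 offspring each
)
model <- aov(eye ~ male/female + Error(male/female), data = dat)
summary(model)
```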


Thanks

Leonardo



Leonardo D. Bacigalupe

Department of Animal and Plant Sciences
University of Sheffield
Sheffield S10 2TN
United Kingdom

[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Replies on this list [was: removing NA from a data frame]

2006-03-21 Thread Uwe Ligges
François Pinard wrote:

> [Berton Gunter]
> 
>>[Sam Steingold]
> 
> 
>>>PPS. how do I figure out the number of rows in a data.frame?
>>> is length(attr(X,"row.names")) the right way?
> 
> 
>>help.search("number of rows") immediately gets you your answer!
> 
> 
> Hi, people.  Here, I get:
> 
>   Help files with alias or concept or title matching ‘number of rows’
>   using fuzzy matching:
> 
>   nrow(base)  The Number of Rows/Columns of an Array
> 
> and '?nrow' says that it is meant for arrays: nothing about data.frame, and 

It works very well with data.frames! ?nrow says in its argument description:

"x: a vector, array or data frame"

Uwe Ligges



> not a generic method either.  Even if it was a class method, we should 
> not expect a new user to be very familiar with R (both!) class systems 
> from the start.
> 
> What might a new user think, reading the documentation?  Sam Steingold 
> is surely an experienced and competent computer guy.  He might guess, 
> who knows, that some automatic array-to-data.frame conversion occurs 
> (however inefficient that might be).  Yet this would not match other 
> knowledge or experimentation, as a data.frame is hardly an array:
> 
>   > x = data.frame(a=1:3, b=c(TRUE, TRUE, FALSE), c=letters[1:3])
>   > as.array(x)
>   Error in "dimnames<-.data.frame"(`*tmp*`, value = list(c("a", "b", "c" :
>   invalid 'dimnames' for this data frame
> 
> Although help.search("number of rows") provides an answer that happens to 
> be right, it might not be recognised as such by an intelligent reader, 
> and so it is not really satisfactory.  The documentation for "nrow" 
> could be improved by saying that it applies to any kind of structure for 
> which dim() is meaningful.  And even then, ?dim is silent about data 
> frames.  One clue (yet a pretty weak one) that nrow may be applied to 
> a data.frame comes from the fact that ?dim.data.frame lists the same 
> documentation as ?dim.
> 
> 
> Why do I say all this?  Because it happens, not necessarily in this 
> case, a bit too often nevertheless, that answers given to users are 
> uselessly harsh or haughty.  Especially when they imply that the 
> documentation is perfect.  One problem is that some people enjoy reading 
> such replies.  As example of this strange kind of pleasure, here is 
> a excerpt from R Archives, which I find especially enlightening on the 
> mentality of few members:
> 
>   From: [EMAIL PROTECTED] (Steve Wisdom)
>   Date: 2003-12-26 17:04
>   Subject: [R] re| Dr Ward on List protocol 
> 
>   "Andrew C. Ward" <[EMAIL PROTECTED]> :
> 
>   >With respect to 'tone' and 'friendliness', perhaps all that is meant or
>   >needed is that people be polite and respectful.
> 
>   >I shake my head as often at rude answers
> 
>   Oh, by gosh, by golly.
> 
>   I don't think an occasional dose of 'real life', via a jab from the
>   Professor, will cause any lasting harm to the cosseted & emolumated students
>   and academics on the List.
> 
>   On a Wall St trading desk, for example, every day one is kicked in the head
>   more brutally by clients, superiors, counterparts, the markets & etc, than
>   ever one would be by the Professor.
> 
>   Plus, the Professor's jabs are good Schadenfreudic fun for the rest of us.
> 
>   Regards,
> 
>   Steve Wisdom
>   Westport CT US
> 
> The truth is that not everybody around here is a "cosseted & emolumated 
> student or academic".  Moreover, behaviour at trading desks is wholly 
> irrelevant, and for most of us, this is not the kind of life we chose to 
> live.  Wrong behaviour elsewhere is hardly an excuse for not behaving 
> properly here.
> 
> Moreover, what is mere "good fun" for some may be perceived as highly 
> inelegant by others.  While some competent members may inspire 
> admiration and charisma by their knowledge and dedication, they sometimes
> damage beyond repair what they inspire when showing poor humanity.
> 
> I'm aware of the constant fear some have of seeing this list abused.  
> There are ways for not being abused, which do not require becoming 
> abusive ourselves.  We should deepen such ways in our own habits.
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

[R] Quadratic Programming where D is indefinite

2006-03-21 Thread Felix Distel
Hi,

Is there an R package capable of solving quadratic programming
problems where 1/2 x^T D x - d^T x is to be minimized and D is
not necessarily positive definite?

Apparently quadprog requires D to be positive definite.

All constraints are linear inequalities.
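
In symbols (the constraint matrix A and bound vector b below are generic
names, not taken from the message):

```latex
\min_{x} \; \tfrac{1}{2}\, x^{\mathsf{T}} D x - d^{\mathsf{T}} x
\qquad \text{subject to} \qquad A^{\mathsf{T}} x \ge b,
```

with D symmetric but possibly indefinite, in which case the problem is
non-convex and the assumptions of quadprog's Goldfarb-Idnani solver do
not hold.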

Thanks in advance,
Felix

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Fixed legend in vcd/mosaicplot (package vcd)

2006-03-21 Thread Dieter Menne
Uwe Ligges  statistik.uni-dortmund.de> writes:

> > 
> > I want to attach a fixed legend to mosaicplot, e.g. -3..3 independently of
> > the data in the graphics. This must be hidden somehow in shading/legend, but
> > I could not get the right handle to it.
> 
> Example:
> 
>plot(1:10)
>legend(par("usr")[2], par("usr")[4],
>   legend="Example", pch=1, xjust=1)

:> I would not dare to ask this here. mosaicplot is part of package vcd; it's
now grid based, and has a very flexible but somewhat awkward decoration
interface. "legend" is the interface to the shading legend.

Dieter

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Scaling behavior ov bVar from lmer models

2006-03-21 Thread Alan Bergland
Hi all,

To follow up on an older thread, it was suggested that the following 
would produce confidence intervals for the estimated BLUPs from a linear 
mixed effect model:


OrthoFem<-Orthodont[Orthodont$Sex=="Female",]
fm1OrthF. <- lmer(distance~age+(age|Subject), data=OrthoFem)

fm1.s <- coef(fm1OrthF.)$Subject
fm1.s.var <- fm1OrthF.@bVar$Subject
fm1.s0.s <- sqrt(fm1.s.var[1,1,])
fm1.s0.a <- sqrt(fm1.s.var[2,2,])
fm1.s[,1]+outer(fm1.s0.s, c(-2,0,2))
fm1.s[,2]+outer(fm1.s0.a, c(-2,0,2))

[,1] [,2] [,3]
 [1,] 11.08578 14.48469 17.88359
 [2,] 13.86631 17.26521 20.66412
 [3,] 13.37444 16.77334 20.17224
 [4,] 13.55727 16.95617 20.35508
 [5,] 14.96331 18.36221 21.76112
 [6,] 13.88490 17.28381 20.68271
 [7,] 12.65522 16.05412 19.45303
 [8,] 16.00368 19.40258 22.80149
 [9,] 12.95778 16.35669 19.75559
[10,] 15.62506 19.02396 22.42287
[11,] 15.73831 19.13721 22.53612

 > fm1.s[,2]+outer(fm1.s0.a, c(-2,0,2))
[,1]  [,2]  [,3]
 [1,] 0.07354181 0.3758789 0.6782160
 [2,] 0.05062219 0.3529593 0.6552964
 [3,] 0.09632606 0.3986632 0.7010003
 [4,] 0.10176055 0.4040977 0.7064348
 [5,] 0.08322913 0.3855662 0.6879033
 [6,] 0.21706649 0.5194036 0.8217407
 [7,] 0.33132614 0.6336632 0.9360004
 [8,] 0.05382874 0.3561658 0.6585029
 [9,] 0.37048154 0.6728186 0.9751558
[10,] 0.22354726 0.5258844 0.8282215
[11,] 0.34756193 0.6498990 0.9522361
 >

These matrices contain in the middle column the BLUPs for each female 
individual in the study, and in the left and right columns the lower and 
upper confidence limits.


However, when the response variable is scaled up by a factor of 100, the 
CIs do not scale accordingly.  Recall that the variance, SE, and CI 
scale with the magnitude of the variable at hand.  I.e.,

x<-c(1,2,3,4,5)
var(x)
 >2.5

x2<-x*10
var(x2)
 >250


So, to rerun the above example with "distance" scaled up by a factor of 100:

OrthoFem$distance2<-OrthoFem$distance*100
fm1OrthF. <- lmer(distance2~age+(age|Subject), data=OrthoFem)

fm1.s <- coef(fm1OrthF.)$Subject
fm1.s.var <- fm1OrthF.@bVar$Subject
fm1.s0.s <- sqrt(fm1.s.var[1,1,])
fm1.s0.a <- sqrt(fm1.s.var[2,2,])
fm1.s[,1]+outer(fm1.s0.s, c(-2,0,2))
fm1.s[,2]+outer(fm1.s0.a, c(-2,0,2))

 > fm1.s[,1]+outer(fm1.s0.s, c(-2,0,2))
  [,1] [,2] [,3]
 [1,] 1445.070 1448.469 1451.868
 [2,] 1723.122 1726.521 1729.920
 [3,] 1673.935 1677.334 1680.733
 [4,] 1692.218 1695.617 1699.016
 [5,] 1832.822 1836.221 1839.620
 [6,] 1724.982 1728.381 1731.780
 [7,] 1602.014 1605.412 1608.811
 [8,] 1936.859 1940.258 1943.657
 [9,] 1632.270 1635.669 1639.068
[10,] 1898.997 1902.396 1905.795
[11,] 1910.322 1913.721 1917.120
 > fm1.s[,2]+outer(fm1.s0.a, c(-2,0,2))
  [,1] [,2] [,3]
 [1,] 37.28555 37.58789 37.89023
 [2,] 34.99359 35.29593 35.59827
 [3,] 39.56398 39.86632 40.16865
 [4,] 40.10743 40.40977 40.71210
 [5,] 38.25429 38.55662 38.85896
 [6,] 51.63802 51.94036 52.24270
 [7,] 63.06399 63.36632 63.66866
 [8,] 35.31425 35.61658 35.91892
 [9,] 66.97953 67.28186 67.58420
[10,] 52.28610 52.58844 52.89077
[11,] 64.68757 64.98990 65.29224

Note that the BLUPs scaled accordingly, but the CIs do not.  For 
example, take fm1.s[,2][11,] from both examples:

##unscaled
[11,] 0.34756193 0.6498990 0.9522361


##scaled
[11,] 64.68757 64.98990 65.29224

In both cases the upper CI is ~0.3 greater than the BLUP.


This seems very strange to me.  Am I totally misusing the bVar slot?  If 
so, is there a way to obtain variances, or SE's associated with each BLUP?

Thanks,
Alan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to use: library lattice: barchart

2006-03-21 Thread Carlos Ortega
Hello,

You could get something similar to the example by adding "horizontal=FALSE".

##
barchart(ayield ~ avariety | asite,
 data = abarley,
 groups = ayear,
 horizontal=FALSE,
 layout = c(1,5)
)
###

With respect to the error message you got: check the "barley" data carefully
and compare it with yours. In your case, "avariety" has four levels (A, B, C
and D), but only A and B appear in 1931 and only C and D in 1932, while in
the barley data all the varieties appear in all the years. That lack of data
in some years is the cause of the error.

Regards,
Carlos Ortega.




On 3/21/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Dear ladies and gentlemen!
>
> In the help text for the xyplot (library(lattice), help(xyplot)) is an
> example
> given how one can use barchart:
>
> barchart(yield ~ variety | site, data = barley,
>  groups = year, layout = c(1,6),
>  ylab = "Barley Yield (bushels/acre)",
>  scales = list(x = list(abbreviate = TRUE,
>minlength = 5)))
>
> I want my data to be represented just in the same way. But when I try it
> like
> this:
>
>
> ayield = c(2,3,5,6,3,4,7,8,9,2,3,5,6,1,2,3,4,2,6,8)
> avariety = c(rep("A",5),rep("B",5),rep("C",5),rep("D",5))
> ayear = (c(rep(1931,10),rep(1932,10)))
> asite = c(rep(c("iu","gt","jt","jhzt","tr"),4))
> abarley = data.frame(cbind(ayield,avariety,ayear,asite))
>
> barchart(ayield ~ avariety | asite, data = abarley,groups = ayear, layout
> =
> c(1,5) )
>
> it looks totally different and I get the error message:
> "x should be numeric in: bwplot.formula(x = ayield ~ avariety | asite,
> data =
> list(ayield = c(2,"
>
> What did I do wrong?
> Can anybody help me?
>
> Best regards, thank you very much
>
>
> Claudia Paladini
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html
>

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Scaling behavior ov bVar from lmer models

2006-03-21 Thread Doran, Harold
Alan:

I think you need to multiply the values in the bVar slot by the residual
variance to get the correct estimates.
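
A sketch of that suggestion, continuing the code from the quoted message.
This is hypothetical and era-specific (it assumes the bVar slot layout and
the "sc" attribute of VarCorr() from the lme4 version of this thread, which
differ in later versions), so treat it as an illustration rather than a
tested recipe:

```r
## Hypothetical sketch: rescale the bVar slot by the residual
## variance before taking square roots.
library(lme4)                                   # 2006-era lme4 assumed
library(nlme)                                   # for the Orthodont data
OrthoFem  <- Orthodont[Orthodont$Sex == "Female", ]
fm1OrthF. <- lmer(distance ~ age + (age | Subject), data = OrthoFem)

s2        <- attr(VarCorr(fm1OrthF.), "sc")^2   # residual variance (assumed attribute)
fm1.s.var <- fm1OrthF.@bVar$Subject * s2        # rescaled posterior variances
fm1.s0.s  <- sqrt(fm1.s.var[1, 1, ])            # SEs for the intercept BLUPs
fm1.s0.a  <- sqrt(fm1.s.var[2, 2, ])            # SEs for the age-slope BLUPs
```

Because s2 carries the square of any scaling of the response, the resulting
intervals would scale with the data, which is the behaviour Alan expected.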

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alan Bergland
Sent: Tuesday, March 21, 2006 6:31 AM
To: r-help@stat.math.ethz.ch
Subject: [R] Scaling behavior ov bVar from lmer models

Hi all,

To follow up on an older thread, it was suggested that the following
would produce confidence intervals for the estimated BLUPs from a linear
mixed effect model:


OrthoFem<-Orthodont[Orthodont$Sex=="Female",]
fm1OrthF. <- lmer(distance~age+(age|Subject), data=OrthoFem)

fm1.s <- coef(fm1OrthF.)$Subject
fm1.s.var <- fm1OrthF.@bVar$Subject
fm1.s0.s <- sqrt(fm1.s.var[1,1,])
fm1.s0.a <- sqrt(fm1.s.var[2,2,])
fm1.s[,1]+outer(fm1.s0.s, c(-2,0,2))
fm1.s[,2]+outer(fm1.s0.a, c(-2,0,2))

[,1] [,2] [,3]
 [1,] 11.08578 14.48469 17.88359
 [2,] 13.86631 17.26521 20.66412
 [3,] 13.37444 16.77334 20.17224
 [4,] 13.55727 16.95617 20.35508
 [5,] 14.96331 18.36221 21.76112
 [6,] 13.88490 17.28381 20.68271
 [7,] 12.65522 16.05412 19.45303
 [8,] 16.00368 19.40258 22.80149
 [9,] 12.95778 16.35669 19.75559
[10,] 15.62506 19.02396 22.42287
[11,] 15.73831 19.13721 22.53612

 > fm1.s[,2]+outer(fm1.s0.a, c(-2,0,2))
[,1]  [,2]  [,3]
 [1,] 0.07354181 0.3758789 0.6782160
 [2,] 0.05062219 0.3529593 0.6552964
 [3,] 0.09632606 0.3986632 0.7010003
 [4,] 0.10176055 0.4040977 0.7064348
 [5,] 0.08322913 0.3855662 0.6879033
 [6,] 0.21706649 0.5194036 0.8217407
 [7,] 0.33132614 0.6336632 0.9360004
 [8,] 0.05382874 0.3561658 0.6585029
 [9,] 0.37048154 0.6728186 0.9751558
[10,] 0.22354726 0.5258844 0.8282215
[11,] 0.34756193 0.6498990 0.9522361
 >

These matrices contain in the middle column the BLUPs for each female
individual in the study, and in the left and right columns the lower and
upper confidence limits.


However, when the response variable is scaled up by a factor of 100, the
CIs do not scale accordingly.  Recall that the variance, SE, and CI
scale with the magnitude of the variable at hand.  I.e.,

x<-c(1,2,3,4,5)
var(x)
 >2.5

x2<-x*10
var(x2)
 >250


So, to rerun the above example with "distance" scaled up by a factor of
100:

OrthoFem$distance2<-OrthoFem$distance*100
fm1OrthF. <- lmer(distance2~age+(age|Subject), data=OrthoFem)

fm1.s <- coef(fm1OrthF.)$Subject
fm1.s.var <- fm1OrthF.@bVar$Subject
fm1.s0.s <- sqrt(fm1.s.var[1,1,])
fm1.s0.a <- sqrt(fm1.s.var[2,2,])
fm1.s[,1]+outer(fm1.s0.s, c(-2,0,2))
fm1.s[,2]+outer(fm1.s0.a, c(-2,0,2))

 > fm1.s[,1]+outer(fm1.s0.s, c(-2,0,2))
  [,1] [,2] [,3]
 [1,] 1445.070 1448.469 1451.868
 [2,] 1723.122 1726.521 1729.920
 [3,] 1673.935 1677.334 1680.733
 [4,] 1692.218 1695.617 1699.016
 [5,] 1832.822 1836.221 1839.620
 [6,] 1724.982 1728.381 1731.780
 [7,] 1602.014 1605.412 1608.811
 [8,] 1936.859 1940.258 1943.657
 [9,] 1632.270 1635.669 1639.068
[10,] 1898.997 1902.396 1905.795
[11,] 1910.322 1913.721 1917.120
 > fm1.s[,2]+outer(fm1.s0.a, c(-2,0,2))
  [,1] [,2] [,3]
 [1,] 37.28555 37.58789 37.89023
 [2,] 34.99359 35.29593 35.59827
 [3,] 39.56398 39.86632 40.16865
 [4,] 40.10743 40.40977 40.71210
 [5,] 38.25429 38.55662 38.85896
 [6,] 51.63802 51.94036 52.24270
 [7,] 63.06399 63.36632 63.66866
 [8,] 35.31425 35.61658 35.91892
 [9,] 66.97953 67.28186 67.58420
[10,] 52.28610 52.58844 52.89077
[11,] 64.68757 64.98990 65.29224

Note that the BLUPs scaled accordingly, but the CIs do not.  For
example, take fm1.s[,2][11,] from both examples:

##unscaled
[11,] 0.34756193 0.6498990 0.9522361


##scaled
[11,] 64.68757 64.98990 65.29224

In both cases the upper CI is ~0.3 greater than the BLUP.


This seems very strange to me.  Am I totally misusing the bVar slot?  If
so, is there a way to obtain variances, or SE's associated with each
BLUP?

Thanks,
Alan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Fixed legend in vcd/mosaicplot

2006-03-21 Thread Uwe Ligges
Dieter Menne wrote:

> Hello, friends of mosaicplot,
> 
> I want to attach a fixed legend to mosaicplot, e.g. -3..3 independently of
> the data in the graphics. This must be hidden somehow in shading/legend, but
> I could not get the right handle to it.
> 
> Dieter
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Example:

   plot(1:10)
   legend(par("usr")[2], par("usr")[4],
  legend="Example", pch=1, xjust=1)

Uwe Ligges

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] finite mixture model, using flexmix

2006-03-21 Thread Jan Wijffels
Dear R-users,
I would like to use the package flexmix to fit latent classes to a
regression model. My data are repeated measurements of Bernoulli
variables, so I can use the binomial family link to the glm function. The
design is not balanced, meaning that for some individuals in my data set
I have 10 measurements or more, while for others I have only 5 or even fewer.
My question is the following: can flexmix handle this imbalance in
the dataset? (I looked in the jstatsoft.org paper "FlexMix: A General
Framework for Finite Mixture Models and Latent Class Regression in R"
and it only mentioned a dataset with balanced repeated measurements - 50
persons with 4 measurements each.) When I look at the formula for the
log-likelihood of the data, it seems to overweight individuals with more
measurements in an unbalanced design. Am I correct in this, and are there
ways to correct for this?
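
The concern can be made concrete. For individual i with n_i repeated
measurements, a latent class regression log-likelihood of the usual form is
(the notation here is generic, not taken from the FlexMix paper):

```latex
\log L(\theta) \;=\; \sum_{i=1}^{N} \log \sum_{k=1}^{K} \pi_k
\prod_{j=1}^{n_i} f\!\left(y_{ij} \mid x_{ij}, \theta_k\right)
```

Each individual contributes a product of n_i component densities, so
individuals with more measurements influence both the class assignments and
the component estimates more strongly.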
 
Thanks for the help,
Jan
 
Jan Wijffels
University Center for Statistics 
W. de Croylaan 54
3001 Heverlee
Belgium
tel: +32 (0)16 322784
fax: +32 (0)16 322831
  http://www.kuleuven.be/ucs
 


Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] rownames, colnames, and date and time

2006-03-21 Thread Erich Neuwirth
I noticed something surprising (in R 2.2.1 on WinXP)
According to the documentation, rownames and colnames are character vectors.
Assigning a vector of class POSIXct or POSIXlt as rownames or colnames
therefore is not strictly according to the rules.
In some cases, R performs a reasonable typecast, but in some other cases
where the same typecast also would be possible, it does not.

Assigning a vector of class POSIXct to the rownames or names of a
dataframe creates a reasonable string representation of the dates (and
possibly times).
Assigning such a vector to the rownames or colnames of a matrix produces
rownames or colnames consisting of the integer representation of the
date-time value.
Trying to assign a vector of class POSIXlt in all cases
(dataframes and matrices, rownames, colnames, names)
produces an error.

Demonstration code is given below.

This is somewhat inconsistent.
Perhaps a reasonable solution would be to apply the typecast
used for POSIXct and data frames in all the other cases as well.

Code:

mymat<-matrix(1:4,nrow=2,ncol=2)
mydf<-data.frame(mymat)
mydates<-as.POSIXct(c("2001-1-24","2005-12-25"))

rownames(mydf)<-mydates
names(mydf)<-mydates
rownames(mymat)<-mydates
colnames(mymat)<-mydates

print(deparse(mydates))
print(deparse(rownames(mydf)))
print(deparse(names(mydf)))
print(deparse(rownames(mymat)))
print(deparse(colnames(mymat)))

mydates1<-as.POSIXlt(mydates)

# the following lines will not work and
# produce errors

rownames(mydf)<-mydates1
names(mydf)<-mydates1
rownames(mymat)<-mydates1
colnames(mymat)<-mydates1


-- 
Erich Neuwirth
Institute for Scientific Computing and
Didactic Center for Computer Science
University of Vienna
phone: +43-1-4277-39464  fax: +43-1-4277-39459

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] How to use: library lattice: barchart

2006-03-21 Thread paladini
Dear ladies and gentlemen!

In the help text for the xyplot (library(lattice), help(xyplot)) is an example
given how one can use barchart:

barchart(yield ~ variety | site, data = barley,
  groups = year, layout = c(1,6),
  ylab = "Barley Yield (bushels/acre)",
  scales = list(x = list(abbreviate = TRUE,
minlength = 5)))

I want my data to be represented just in the same way. But when I try it like
this:


ayield = c(2,3,5,6,3,4,7,8,9,2,3,5,6,1,2,3,4,2,6,8)
avariety = c(rep("A",5),rep("B",5),rep("C",5),rep("D",5))
ayear = (c(rep(1931,10),rep(1932,10)))
asite = c(rep(c("iu","gt","jt","jhzt","tr"),4))
abarley = data.frame(cbind(ayield,avariety,ayear,asite))

barchart(ayield ~ avariety | asite, data = abarley,groups = ayear, layout =
c(1,5) )

it looks totally different and I get the error message:
"x should be numeric in: bwplot.formula(x = ayield ~ avariety | asite, data =
list(ayield = c(2,"

What did I do wrong?
Can anybody help me?

Best regards, thank you very much


Claudia Paladini

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Special characters: plus/minus - a method that works

2006-03-21 Thread Martin Maechler
> "Peter" == Peter Alspach <[EMAIL PROTECTED]>
> on Tue, 21 Mar 2006 14:35:43 +1200 writes:

Peter> Gabor Grothendieck wrote:

>> Another way that works on my Windows XP system is to use "\261" .

Peter> Please note that Windows escape codes are not portable and thus not 
recommended, as Martin Maechler pointed out in a response to a suggestion of 
mine:

Peter> On Tue, 14 Jun 2005, Martin Maechler wrote:


>>> "Peter" == Peter Alspach 
>>> on Tue, 14 Jun 2005 14:11:47 +1200 writes:
>> 
Peter> Ben
>> 
Peter> Others have pointed out plotmath. However, for some
Peter> superscripts (including 2) it may be easier to use
Peter> the appropriate escape sequence (at least in Windows):
>> 
Peter> ylab = 'BA (m\262/ha)'
>> 
>> but please refrain from doing that way.
>> You should write R code that is portable, and ``plotmath''
>> has been designed to be portable. Windows escape codes are not,
>> and may not even work in future (or current?) versions of
>> Windows with `unusual' locale settings {fonts, etc}.

Peter> Peter Alspach

Indeed, thank you, Peter.

For the present case, 
if you really want something better than the ASCII "+/-" that
Duncan recommended (and I agree),
you can do what I had recommended above:  

use 'plotmath' --> ?plotmath
and see that

the  %+-%  symbol works -- platform independently! --
e.g.,

  plot(1, main = expression(3 %+-% 0.1))

or a bit more realistically:

  m <- pi
  sd <- 1/7
  plot(1, main = substitute(mu %+-% sig, list(mu = m, sig = sd)))
## or
  plot(1, main = substitute(mu %+-% sig,
                            list(mu = round(m, 3), sig = round(sd, 3))))

--
Martin Maechler, ETH



[R] [R-pkgs] New version of 'systemfit'

2006-03-21 Thread Arne Henningsen
Dear R users,

The authors of the systemfit package have released a new version of this 
package with substantial enhancements. 
The systemfit package contains functions for fitting simultaneous systems of 
linear equations using Ordinary Least Squares (OLS), Weighted Least Squares 
(WLS), Seemingly Unrelated Regressions (SUR), Two-Stage Least Squares (2SLS), 
Weighted Two-Stage Least Squares (W2SLS), and Three-Stage Least Squares 
(3SLS). 
The new features include the ability to test linear restrictions using an 
F, Wald, or likelihood-ratio test, and the ability to test model 
specification using the Hausman test. Further major enhancements include 
improved estimation with large data sets as well as many minor 
improvements, e.g. the ability to define how the residual covariance 
matrix and the degrees of freedom are computed. The new version also 
includes a wrapper function, systemfitClassic, that can be applied to 
(classical) panel-like data in long format. Furthermore, version 0.8-0 
includes additional documentation (inst/doc/vignette_systemfit.pdf).

Jeff D. Hamann
Arne Henningsen


-- 
Jeff D. Hamann
Forest Informatics, Inc.
PO Box 1421
Corvallis, Oregon 97339-1421
phone 541-754-1428
[EMAIL PROTECTED]
www.forestinformatics.com

and

Arne Henningsen
Department of Agricultural Economics
University of Kiel
Olshausenstr. 40
D-24098 Kiel (Germany)
Tel: +49-431-880 4445
Fax: +49-431-880 1397
[EMAIL PROTECTED]
http://www.uni-kiel.de/agrarpol/ahenningsen/

___
R-packages mailing list
[EMAIL PROTECTED]
https://stat.ethz.ch/mailman/listinfo/r-packages



Re: [R] type in daisy

2006-03-21 Thread Martin Maechler
> "Jeanne" == Jeanne Vallet <[EMAIL PROTECTED]>
> on Mon, 20 Mar 2006 18:33:08 +0100 writes:

Jeanne> Hi, I'm a PhD student and I want to use the function
Jeanne> 'daisy' from the package 'cluster' to compute
Jeanne> dissimilarities.  My variables are of mixed types so
Jeanne> I use the argument 'stand' in daisy to define the
Jeanne> type of my variables. 
you mean 'type' not 'stand';  yes, that's good
 
Jeanne> I have the following error message : 

Jeanne> Warning message: 
no, it's *NOT* an error message.  
A warning is a warning is a warning is a warning is a warning!
Sorry to stress this to the extreme; but it happens too often
that people confuse warnings with errors.

Jeanne> binary variable(s) 13, 16, 17, 22, 23, 24, 25, 26, 27, 28, 29, 30,
Jeanne> 31, 32, 33, 34, 35, 36, 37, 38, 39 treated as interval scaled in:
Jeanne> daisy(basetraits, stand = FALSE, type = list(numeric = c(1, 7),  


Jeanne> I don't understand why R
Jeanne> doesn't accept that I declare these binary variables
Jeanne> in symm or assym (depending on the variable).
 
Well, below you only declare 12 & 14 in 'symm' and
15 & 40 in 'asymm'.  Hence all the above are treated as interval scaled.

daisy() gives a warning {which I had added a while ago}, because
Kaufman & Rousseeuw in their book make a point -- on which I
agree -- that people should carefully consider how
(dis)similarities between binary variables should be defined.

But after all it's just a warning, not an error!

Jeanne> Thanking you in anticipation, Jeanne Vallet
 
Jeanne> PS : To help you to understand my question, my script is : 

Jeanne> aa<-file.choose()
Jeanne> basetraits<-read.csv2(aa, header = TRUE,row.names=1)
 
Jeanne> #variables declaration
 
The following is completely useless and unnecessary (and horrible to
read, with "=" and no spaces):
you don't change basetraits at all with these statements.

Jeanne> attach(basetraits)
Jeanne> pdias=as.numeric(pdias)
Jeanne> longindex=as.numeric(longindex)
Jeanne> light=as.numeric(light)
Jeanne> humid=as.numeric(humid)
Jeanne> basic=as.numeric(basic)
Jeanne> azot=as.numeric(azot)
Jeanne> durflow=as.numeric(durflow)
Jeanne> height=as.ordered(height)
Jeanne> spread=as.ordered(spread)
Jeanne> begflow=as.ordered(begflow)
Jeanne> mycor=as.ordered(mycor)
Jeanne> piq=as.factor(piq)
Jeanne> lign=as.factor(lign)
Jeanne> grain=as.factor(grain)
Jeanne> ros=as.factor(ros)
Jeanne> semiros=as.factor(semiros)
Jeanne> leafy=as.factor(leafy)
Jeanne> autopoll=as.factor(autopoll)
Jeanne> geito=as.factor(geito)
Jeanne> insect=as.factor(insect)
Jeanne> wind=as.factor(wind)
Jeanne> suman=as.factor(suman)
Jeanne> winan=as.factor(winan)
Jeanne> monocarp=as.factor(monocarp)
Jeanne> polycarp=as.factor(polycarp)
Jeanne> phan=as.factor(phan)
Jeanne> cham=as.factor(cham)
Jeanne> hemi=as.factor(hemi)
Jeanne> geo=as.factor(geo)
Jeanne> thero=as.factor(thero)
Jeanne> seasaes=as.factor(seasaes)
Jeanne> seashiv=as.factor(seashiv)
Jeanne> seasver=as.factor(seasver)
Jeanne> everalw=as.factor(everalw)
Jeanne> everparti=as.factor(everparti)
Jeanne> endozoo=as.factor(endozoo)
Jeanne> epizoo=as.factor(epizoo)
Jeanne> aquat=as.factor(aquat)
Jeanne> wind=as.factor(wind)
Jeanne> detach(basetraits)
 
Jeanne> #dissimilarities calculation
 
Jeanne> library(cluster)
Jeanne> daisybase <- daisy(basetraits, stand = FALSE,
Jeanne>     type = list(numeric = c(1, 7), ordratio = c(8, 11),
Jeanne>                 symm = c(12, 14), asymm = c(15, 40)))



Re: [R] Simulate Mixture Model data

2006-03-21 Thread Martin Maechler
> "Goeland" == Goeland  <[EMAIL PROTECTED]>
> on Tue, 21 Mar 2006 00:21:12 -0500 writes:

Goeland> Dear R Users

Re: [R] nlme predict with se?

2006-03-21 Thread Martin Maechler
> "Emilio" == Emilio A Laca <[EMAIL PROTECTED]>
> on Sat, 18 Mar 2006 22:01:05 -0800 writes:

Emilio> Berton, What you say makes sense and helps. I could
Emilio> do a parametric bootstrapping. However, I cannot
Emilio> make predict.nlme work at all, even without the
Emilio> se's. It would save me a lot of time to be able to
Emilio> use the predict as the statistic in boot.

Emilio> Does "predict.nlme" work at all? 
Emilio> It is a listed method in MEMSS. Is there an example?  


  help(predict.nlme)   or even more directly
  example(predict.nlme)

show that it does work (at least in some cases).
If you read the help page, you can also deduce that an
argument 'se' is not made use of.

Please do give us a *reproducible* (for us!) example as you are
asked to do by the posting guide. 
For this you must use an R or nlme example data set; or
artificially generated data for which you show the exact
generative statements {including a set.seed(*) one if you use
random numbers}.
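The kind of self-generated example requested above can be sketched as follows (shown with lme() for brevity; the invented data and the same set.seed() pattern apply equally to an nlme() fit):

```r
## Minimal reproducible sketch of the kind requested: artificial data
## with the exact generative statements, including set.seed().
library(nlme)

set.seed(42)
x <- rep(1:10, times = 3)
g <- factor(rep(1:3, each = 10))
y <- 2 + 0.5 * x + rep(rnorm(3, sd = 0.5), each = 10) + rnorm(30, sd = 0.2)
dat <- data.frame(x, y, g)

fit  <- lme(y ~ x, random = ~ 1 | g, data = dat)
pred <- predict(fit,
                newdata = data.frame(x = 5,
                                     g = factor("2", levels = levels(dat$g))))
```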

Martin Maechler, ETH Zurich
