Re: [R] analyzing results from Tuesday's US elections

2020-11-09 Thread Alexandra Thorn
This thread strikes me as pretty far off-topic for a forum dedicated to
software support for R.

https://www.r-project.org/mail.html#instructions
"The ‘main’ R mailing list, for discussion about problems and solutions
using R, announcements (not covered by ‘R-announce’ or ‘R-packages’,
see above), about the availability of new functionality for R and
documentation of R, comparison and compatibility with S-plus, and for
the posting of nice examples and benchmarks. Do read the posting guide
before sending anything!"

https://www.r-project.org/posting-guide.html
"The R mailing lists are primarily intended for questions and
discussion about the R software. However, questions about statistical
methodology are sometimes posted. If the question is well-asked and of
interest to someone on the list, it may elicit an informative
up-to-date answer. See also the Usenet groups sci.stat.consult (applied
statistics and consulting) and sci.stat.math (mathematical stat and
probability)."

On Mon, 9 Nov 2020 00:53:46 -0500
Matthew McCormack  wrote:

> You can try here: https://decisiondeskhq.com/
> 
> I think they have what you are looking for. From their website:
> 
> "Create a FREE account to access up to the minute election results
> and insights on all U.S. Federal elections. Decision Desk HQ &
> Øptimus provide live election night coverage, race-specific results
> including county-level returns, and exclusive race probabilities for
> key battleground races."
> 
>     Also, this article provides a little (emphasis on little) 
> statistical analysis of election results, but it may be a place to
> start.
> 
> https://www.theepochtimes.com/statistical-anomalies-in-biden-votes-analyses-indicate_3570518.html?utm_source=newsnoe_medium=email_campaign=breaking-2020-11-08-5
> 
> Matthew
> 
> On 11/8/20 11:25 PM, Bert Gunter wrote:
> >
> > NYT  had interactive maps that reported  votes by county. So try
> > contacting them.
> >
> >
> > Bert
> >
> > On Sun, Nov 8, 2020, 8:10 PM Abby Spurdle  wrote:
> >>> such a repository already exists -- the NY Times, AP, CNN, etc.
> >>> etc. already have interactive web pages that did this
> >>
> >> I've been looking for presidential election results, by
> >> ***county***. I've found historic results, including results for
> >> 2016.
> >>
> >> However, I can't find such a dataset, for 2020.
> >> (Even though this seems like an obvious thing to publish).
> >>
> >> I suspect that the NY Times has the data, but I haven't been able
> >> to work out where the data is on their website, or how to access it.
> >>
> >> More ***specific*** suggestions would be appreciated...?
> >>  



Re: [R] Mac/PC differences in lmer results

2019-06-05 Thread Alexandra Thorn
To check whether the data are being read in appropriately, what happens
when you plot the distribution of each of the independent variables on
the respective systems?
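
For instance, something along these lines on each machine (a sketch;
"mydata.csv" is a placeholder for the actual file):

## Per-variable summaries and histograms, to compare across systems
dat <- read.csv("mydata.csv")
num <- Filter(is.numeric, dat)           # numeric variables only
lapply(num, summary)                     # these should agree across OSes
par(mfrow = c(2, ceiling(length(num) / 2)))
for (v in names(num)) hist(num[[v]], main = v, xlab = v)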

-A

On Wed, 5 Jun 2019 12:32:28 +0200
Olivier Crouzet  wrote:

> Hi,
> 
> 32bit vs. 64bit systems? 
> 
> Another thing I would look at would be how the windows machine will
> read the data file. Though issues should probably only arise with
> respect to text data, I've often experienced problems with reading
> unicode csv files on windows computers compared with unix-based
> computers. No guarantee though, just suggestions...
> 
> Olivier.
> 
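
A quick way to test Olivier's encoding hypothesis (a sketch;
"mydata.csv" is again a placeholder for the actual file):

## Compare locales, then force the encoding instead of relying on the
## OS default (Windows often defaults to a non-UTF-8 code page)
Sys.getlocale("LC_CTYPE")                # compare this across machines
dat <- read.csv("mydata.csv", fileEncoding = "UTF-8")
str(dat)                                 # column types should match exactly
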
> On Wed, 5 Jun 2019 12:15:53 +0200
> Nicolas Schuck  wrote:
> 
> > bert: you are right, sorry for not cc-ing the list. thanks also for
> > the hint. 
> > 
> > I wanted to bring this up here again, emphasising that we do find in
> > at least one case *a very large difference* in the p value, using
> > the same scripts and data on a windows versus mac machine (see
> > reproducible example in the gitlab link posted below). I have now
> > come across several instances in which results of (g)lmer models
> > don’t agree on windows vs unix-based machines, which I find a bit
> > disturbing. any ideas where non-negligible differences could come
> > from? 
> > 
> > thanks, 
> > nico 
> > 
> >   
> > > On 30. May 2019, at 16:58, Bert Gunter 
> > > wrote:
> > > 
> > > 
> > > Unless there is good reason not to, always cc the list. I have
> > > done so here.
> > > 
> > > The R Installation manual has some info on how to use different
> > > BLASes I believe, but someone with expertise (I have none) needs
> > > to respond to your queries.
> > > 
> > > On Thu, May 30, 2019 at 7:50 AM Nicolas Schuck  wrote:
> > >
> > > I know that it is in use on the Mac, see sessionInfo below. I have
> > > to check on the Win system. Why would that make such a difference,
> > > and how could I make the Win machine get the same results as the
> > > Unix systems?
> > > 
> > > R version 3.6.0 (2019-04-26)
> > > Platform: x86_64-apple-darwin15.6.0 (64-bit)
> > > Running under: macOS Mojave 10.14.5
> > >
> > > Matrix products: default
> > > BLAS:   /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib
> > > LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib
> > >
> > > Random number generation:
> > > RNG:     Mersenne-Twister
> > > Normal:  Inversion
> > > Sample:  Rounding
> > >
> > > locale:
> > > [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
> > >
> > > attached base packages:
> > > [1] stats     graphics  grDevices utils     datasets  methods   base
> > >
> > > Thanks, Nico
> > >
> > > On 30. May 2019, at 16:34, Bert Gunter  wrote:
> > >> The BLAS in use on each?
> > >> 
> > >> Bert Gunter
> > >> 
> > >> "The trouble with having an open mind is that people keep coming
> > >> along and sticking things into it."
> > >> -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip
> > >> )
> > >> 
> > >> 
> > >> On Thu, May 30, 2019 at 5:27 AM Nicolas Schuck  wrote:
> > >> Dear fellow R coders, 
> > >> 
> > >> I am observing differences in results obtained using glmer when
> > >> using a Mac or Linux computer versus a PC. Specifically, I am
> > >> talking about a relatively complex glmer model with a nested
> > >> random effects structure. The model is set up in the following
> > >> way:
> > >> 
> > >> gcctrl = glmerControl(optimizer = c('nloptwrap'),
> > >>                       optCtrl = list(maxfun = 50),
> > >>                       calc.derivs = FALSE)
> > >> 
> > >> glmer_pre_instr1 = glmer(
> > >>   formula = cbind(FREQ, NSAMP - FREQ) ~ FDIST_minz + poly(RFREQ, 2) +
> > >>     ROI + (1 + FDIST_minz + RFREQ + ROI | ID/COL),
> > >>   data = cdf_pre_instr, family = binomial, 
> > >>   control = gcctrl)
> > >> 
> > >> Code and data of an example for which I find reproducible,
> > >> non-negligible differences between Mac/Win can be found here:
> > >> https://gitlab.com/nschuck/glmer_sandbox/tree/master
> > >> 
> > >> The
> > >> differences between the fitted models seem to be most pronounced
> > >> regarding the estimated correlation structure of the random
> > >> effects terms. Mac and Linux yield very similar results, but
> > >> Windows deviates quite a bit in some cases. This has a large
> > >> impact on p values obtained when performing model comparisons. I
> > >> have tried this on Mac OS 10.14, Windows 10 and Ubuntu and
> > >> Debian. All systems I have tried are using lme 1.1.21 and R
> > >> 3.5+. 
> > >> 
> > >> Does anyone have an idea what the underlying cause might be? 
> > >> 
> > >> Thanks, 
> > >> Nico 
> > >> 
> > >> 
> > >> 
> > >> 
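A compact way to answer Bert's question ("The BLAS in use on each?") is
to run the checks below on every machine involved; all of these are
available in a default R session:

## Compare the numerical backends and RNG settings across machines
sessionInfo()     # reports the BLAS/LAPACK library paths (R >= 3.4)
La_version()      # LAPACK version actually in use
extSoftVersion()  # versions of system libraries R is linked against
RNGkind()         # RNG settings; the default sample() method changed in R 3.6.0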

[R] Help reinstalling rgdal (Ubuntu 16.04)

2018-08-09 Thread Alexandra Thorn
Hi all,

Following some updates to R that I received via Synaptic Package
Manager on Ubuntu 16.04 (looks like I now have R 3.4.4-1xenial0), I
have been unable to reinstall rgdal, and I need help.

Initially I was getting error messages about dependencies on GDAL
1.11.4, but after following instructions to install GDAL from source
(into my /usr/local directory) I'm now getting a different set of error
messages (see the output posted below my signature).

I can't tell if this means that I've made a mistake installing GDAL or
if there's some other problem with my setup/configuration.  I really
need rgdal for the analyses I do in R and could use some help figuring
this out.

Thanks,
Alex

> install.packages("rgdal")
Installing package into ‘/home/athorn/R/x86_64-pc-linux-gnu-library/3.4’
(as ‘lib’ is unspecified)
trying URL 'https://cloud.r-project.org/src/contrib/rgdal_1.3-4.tar.gz'
Content type 'application/x-gzip' length 1664774 bytes (1.6 MB)
==
downloaded 1.6 MB

* installing *source* package ‘rgdal’ ...
** package ‘rgdal’ successfully unpacked and MD5 sums checked
configure: R_HOME: /usr/lib/R
configure: CC: gcc -std=gnu99
configure: CXX: g++
configure: C++11 support available
configure: rgdal: 1.3-4
checking for /usr/bin/svnversion... no
configure: svn revision: 766
checking for gdal-config... /usr/local/bin/gdal-config
checking gdal-config usability... yes
configure: GDAL: 2.3.1
checking C++11 support for GDAL >= 2.3.0... yes
checking GDAL version >= 1.11.4... yes
checking gdal: linking with --libs only... no
checking gdal: linking with --libs and --dep-libs... no
In file included from /usr/local/include/gdal.h:45:0,
                 from gdal_test.cc:1:
/usr/local/include/cpl_port.h:187:6: error: #error Must have C++11 or newer.
 #error Must have C++11 or newer.
      ^
In file included from /usr/local/include/gdal.h:49:0,
                 from gdal_test.cc:1:
/usr/local/include/cpl_minixml.h:202:47: error: expected template-name before '<' token
 class CPLXMLTreeCloser: public std::unique_ptr
                                               ^
/usr/local/include/cpl_minixml.h:202:47: error: expected '{' before '<' token
/usr/local/include/cpl_minixml.h:202:47: error: expected unqualified-id before '<' token
In file included from /usr/local/include/ogr_api.h:45:0,
                 from /usr/local/include/gdal.h:50,
                 from gdal_test.cc:1:
/usr/local/include/ogr_core.h:79:28: error: expected '}' before end of line
/usr/local/include/ogr_core.h:79:28: error: expected declaration before end of line
In file included from /usr/local/include/gdal.h:45:0,
                 from gdal_test.cc:1:
/usr/local/include/cpl_port.h:187:6: error: #error Must have C++11 or newer.
 #error Must have C++11 or newer.
      ^
In file included from /usr/local/include/gdal.h:49:0,
                 from gdal_test.cc:1:
/usr/local/include/cpl_minixml.h:202:47: error: expected template-name before '<' token
 class CPLXMLTreeCloser: public std::unique_ptr
                                               ^
/usr/local/include/cpl_minixml.h:202:47: error: expected '{' before '<' token
/usr/local/include/cpl_minixml.h:202:47: error: expected unqualified-id before '<' token
In file included from /usr/local/include/ogr_api.h:45:0,
                 from /usr/local/include/gdal.h:50,
                 from gdal_test.cc:1:
/usr/local/include/ogr_core.h:79:28: error: expected '}' before end of line
/usr/local/include/ogr_core.h:79:28: error: expected declaration before end of line
configure: Install failure: compilation and/or linkage problems.
configure: error: GDALAllRegister not found in libgdal.
ERROR: configuration failed for package ‘rgdal’
* removing ‘/home/athorn/R/x86_64-pc-linux-gnu-library/3.4/rgdal’

The downloaded source packages are in
‘/tmp/RtmpeuSDnj/downloaded_packages’
Warning message:
In install.packages("rgdal") :
  installation of package ‘rgdal’ had non-zero exit status
> 
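
The output above suggests the configure test program is being compiled
without C++11, which GDAL >= 2.3 requires.  A possible workaround,
assuming that diagnosis holds (the Makevars line and configure argument
below are suggestions, not a verified fix):

## First add this line to ~/.R/Makevars (a shell-side edit, not R code):
##   CXXFLAGS = -std=c++11
## Then reinstall, pointing configure at the GDAL 2.3.1 in /usr/local:
install.packages("rgdal",
                 configure.args = "--with-gdal-config=/usr/local/bin/gdal-config")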



[R] working with ordinal predictor variables?

2017-10-05 Thread Alexandra Thorn
I'm trying to develop a linear model for crop productivity based on
variables published as part of the SSURGO database released by the
USDA.  My default is to just run lm() with continuous predictor
variables as numeric and discrete predictor variables as factors.
Some of the discrete variables are ordinal, however (e.g. drainage
class, which ranges from excessively drained to excessively poorly
drained), and treating them as unordered factors doesn't make use of
the fact that their levels have a known order.

How do I correctly set up a regression model (with lm or similar) to
detect the influence of ordinal variables?

How will the output differ from the dummy-variable output for
unordered categorical variables?
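
For concreteness, a toy version of the setup (made-up data; lm()
applies orthogonal polynomial contrasts to ordered factors by default):

## Ordered factors get polynomial contrasts (.L, .Q, ...) rather than
## the treatment-coded dummies an unordered factor would get
set.seed(1)
drainage <- factor(sample(c("poor", "moderate", "well"), 100, replace = TRUE),
                   levels = c("poor", "moderate", "well"), ordered = TRUE)
yield <- 2 + 0.5 * as.numeric(drainage) + rnorm(100)
summary(lm(yield ~ drainage))  # coefficients named drainage.L and drainage.Q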

Thanks,
Alex



Re: [R] AIC() vs. mle.aic() vs. step()?

2011-06-23 Thread Alexandra Thorn
The package is wle.

I'll put together some code that shows the behavior I'm talking about,
and send it to the list.

Alexandra

On Thu, 2011-06-23 at 13:51 +0200, Rubén Roa wrote: 
 I don't find the mle.aic function. Thus it does not ship with R and it's in 
 some contributed package.
 What package is that?
 If you had asked for help providing minimal, self-contained, reproducible 
 code, you'd have realized that you need to tell people what package you are 
 using.
 
 ___
  
 
 Dr. Rubén Roa-Ureta
 AZTI - Tecnalia / Marine Research Unit
 Txatxarramendi Ugartea z/g
 48395 Sukarrieta (Bizkaia)
 SPAIN
 
  
 
  -Original Message-
  From: r-help-boun...@r-project.org 
  [mailto:r-help-boun...@r-project.org] On behalf of Alexandra Thorn
  Sent: Wednesday, June 22, 2011 22:38
  To: r-help@r-project.org
  Subject: [R] AIC() vs. mle.aic() vs. step()?
  
  I know this is a newbie question, but I've only just started 
  using AIC for model comparison and after a bunch of different 
  keyword searches I've failed to find a page laying out what 
  the differences are between the AIC scores assigned by AIC() 
  and mle.aic() using default settings.  
  
  I started by using mle.aic() to find the best submodels, but 
  then I wanted to also be able to make comparisons with a 
  couple of submodels that were nowhere near the top, so I 
  started calculating AIC values using AIC().  What I found was 
  that not only the scores, but also the ranking of the models 
  was different.  I'm not sure if this has to do with the fact 
  that mle.aic() scores are based on the full model, or some 
  sort of difference in penalties, or something else.
  
  Could anybody enlighten me as to the differences between 
  these functions, or how I can use the same scoring system to 
  find the best models and also compare to far inferior models?
  
  Failing that, could someone point me to an appropriate 
  resource that might help me understand?
  
  Thanks in advance,
  Alexandra
  



[R] [example code] RE: AIC() vs. mle.aic() vs. step()?

2011-06-23 Thread Alexandra Thorn
 0.20755404
[85] 0.10898273 0.10303441 0.19145080 0.38541988 0.29372153 0.19337137
[91] 0.06810569 0.06357357 0.15778877 0.21364239 0.33999760 0.13670444
[97] 0.11900238 0.01315180 0.30599263 0.05201595 0.30131938 0.22017956
[103] 0.23811364
R summary(mle.aic(lm(y1~xP+x5+x15)),max.num=30) # mle.aic output

Call:
mle.aic(formula = lm(y1 ~ xP + x5 + x15))


Akaike Information Criterion (AIC):
      (Intercept) xPNA xPRing x5 x15     aic
 [1,]           1    1      1  0   0 -113.60
 [2,]           1    1      1  0   1 -112.80
 [3,]           1    1      1  1   0 -112.20
 [4,]           1    0      1  0   0 -112.10
 [5,]           1    1      1  1   1 -111.30
 [6,]           1    0      1  0   1 -111.20
 [7,]           1    0      1  1   0 -110.60
 [8,]           1    0      1  1   1 -109.60
 [9,]           1    1      0  0   0  -98.05
[10,]           1    1      0  0   1  -96.66
[11,]           1    1      0  1   0  -96.28
[12,]           1    1      0  1   1  -94.86
[13,]           1    0      0  0   0  -90.92
[14,]           1    0      0  0   1  -89.32
[15,]           1    0      0  1   0  -89.06
[16,]           1    0      0  1   1  -87.45
[17,]           0    0      1  1   1  -59.09
[18,]           0    0      1  1   0  -57.98
[19,]           0    1      1  1   1  -57.34
[20,]           0    1      1  1   0  -56.35

Printed the first  20  best models 
R AIC(lm(y1~xA)) # Model 1 above
[1] -120.3801
R AIC(lm(y1~xA+x15)) # Model 2 above
[1] -110.8642
R AIC(lm(y1~xA+x5)) # Model 3 above
[1] -118.9906


On Thu, 2011-06-23 at 09:05 -0400, Alexandra Thorn wrote: 
 The package is wle.
 
 I'll put together some code that shows the behavior I'm talking about,
 and send it to the list.
 
 Alexandra
 
 On Thu, 2011-06-23 at 13:51 +0200, Rubén Roa wrote: 
  I don't find the mle.aic function. Thus it does not ship with R and it's in 
  some contributed package.
  What package is that?
  If you had asked for help providing minimal, self-contained, reproducible 
  code, you'd have realized that you need to tell people what package you are 
  using.
  
  ___
   
  
  Dr. Rubén Roa-Ureta
  AZTI - Tecnalia / Marine Research Unit
  Txatxarramendi Ugartea z/g
  48395 Sukarrieta (Bizkaia)
  SPAIN
  
   
  
   -Original Message-
   From: r-help-boun...@r-project.org 
   [mailto:r-help-boun...@r-project.org] On behalf of Alexandra Thorn
   Sent: Wednesday, June 22, 2011 22:38
   To: r-help@r-project.org
   Subject: [R] AIC() vs. mle.aic() vs. step()?
   
   I know this is a newbie question, but I've only just started 
   using AIC for model comparison and after a bunch of different 
   keyword searches I've failed to find a page laying out what 
   the differences are between the AIC scores assigned by AIC() 
   and mle.aic() using default settings.  
   
   I started by using mle.aic() to find the best submodels, but 
   then I wanted to also be able to make comparisons with a 
   couple of submodels that were nowhere near the top, so I 
   started calculating AIC values using AIC().  What I found was 
   that not only the scores, but also the ranking of the models 
   was different.  I'm not sure if this has to do with the fact 
   that mle.aic() scores are based on the full model, or some 
   sort of difference in penalties, or something else.
   
   Could anybody enlighten me as to the differences between 
   these functions, or how I can use the same scoring system to 
   find the best models and also compare to far inferior models?
   
   Failing that, could someone point me to an appropriate 
   resource that might help me understand?
   
   Thanks in advance,
   Alexandra
   
 



Re: [R] [cleaned code] RE: AIC() vs. mle.aic() vs. step()?

2011-06-23 Thread Alexandra Thorn

On Thu, 2011-06-23 at 09:29 -0400, Alexandra Thorn wrote:
 Ok, here's some example code showing how I get different output for
 AIC vs. mle.aic().  Now that I've taken another look at the
 independent variables, I'm wondering whether missing values in one of
 the variables might be what is messing me up.  I'm going to see if
 the behavior changes when I remove those...
 

Okay, here's the code with dput() used to present the various objects.
Thanks to the list for being so patient with me... it's been quite
educational.

Thanks in advance,
Alexandra

Code:
R require(wle)
R dput(xA)
structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 
1L, 3L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
3L, 3L, 1L, 1L, 3L, 3L, 3L, 1L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 
2L, 3L, 3L, 3L, 3L, 3L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 
1L, 1L, 1L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 3L, 
3L, 3L, 3L, 1L, 1L, 1L, 1L, 3L, 3L, 3L, 3L, 3L, 1L, 2L, 2L, 3L, 
3L, 3L, 2L, 3L, 2L, 1L, 1L, 1L, 3L, 3L), .Label = c("Diffuse", 
"Other", "Ring"), class = "factor")
R dput(x5)
c(35.1890162754395, 22.8565556431035, 15.296994438186,
9.60022407772812, 
25.0393843292946, 21.179788208, 9.26776603463063, 14.5228279661883, 
6.6982273554755, 5.7889657235105, 21.4854296564891, 20.5942435860781, 
20.2180106449331, 0.44420165807875, 5.041499147412, 26.984947359545, 
14.7613969969327, 10.304583446995, 13.4192477726851, 13.90740846636, 
6.721998863216, 13.25694036483, 18.1492698335532, 8.9814627576195, 
14.2575003028425, 21.8982502817969, 8.5661573887, 15.343499557995, 
7.4060631990625, 10.2824613451941, 23.4777018427811, 35.3389594363836, 
51.5448185920973, 6.9571800684925, 23.3166747093435, 35.2280399322705, 
53.3812645912466, 44.7933630466069, 25.5658796310335, 9.6980968165235, 
2.90031387090862, 4.80738140821225, 6.927406749722, 8.61786424398488, 
43.957850260725, 0, 44.1995269203482, 14.68783550262, 5.63854620095413, 
0, 21.1687123966326, 20.566941833529, 0, 0, 28.4924849319605, 
8.7184162712155, 18.8744437360889, 20.9748315239075, 21.3849539280062, 
163.143692522173, 10.85655822755, 9.92978608605625, 0, 0,
41.9369100379775, 
121.762594814280, 13.570939755438, 20.1040411710892, 14.1449650049318, 
8.2172523975435, 10.16499876975, 19.598117628078, 20.3028116584013, 
17.0104638219038, 12.612999143628, 8.20519315482388, 6.42935872078125, 
22.1598563909594, 13.9703385210014, 23.0206302023242, 15.25902295115, 
14.4778823661717, 2.4819054257875, 21.8293459510672, 25.151516683063, 
32.105084991422, 12.5154914474453, 11.6927538156488, 9.40486317871687, 
38.4559898615062, 53.195916748074, 14.4917169976215, 10.2548528385015, 
8.82271943808338, 12.8573514676201, 10.0589964580665, 12.886892914765, 
9.6626724052155, 5.98260608673, 3.25811900139, 13.446737566015, 
8.80658397675, 17.773449287436)
R dput(x15)
c(1.69924629406401, -1.63414400065288, 0.714151689343318,
4.17480342154949, 
1.52512663197893, 1.73541067946363, -5.47498002151169,
0.956812825760668, 
-1.48092554972038, 1.51101949018443, -2.25838766176389,
2.12958862888441, 
1.43795702627435, -4.48003372542488, -3.65963008576897,
-0.763463882139697, 
-2.44019862561235, 1.32552846648453, 1.89863804289907,
1.80655970149808, 
-0.741756823200407, 1.30112633095768, -1.06424642846912,
-1.47852202054490, 
0.090359152072348, NA, NA, 1.82385291704612, -0.153087078076393, 
1.0468532207338, 2.45599032439301, 1.36474092834838, -2.39863477181754, 
-0.212204468662908, -2.50255033079852, -1.92296430369566,
-0.245775784395867, 
-1.96756216156693, 0.433499968438238, 0.884598593578297,
-0.127559050278120, 
2.31771322353091, -1.21846730709075, 1.7508299240518,
-3.02346893141966, 
-4.15582444612729, 1.09946459784029, 4.30008521664531,
4.37542383384967, 
NA, -1.93641861765076, -0.019194921394532, -2.39609317657158, 
-3.12228102462318, 0.488046064498046, -1.42886436846636,
-3.52078266098328, 
3.22115286286252, 0.879425403143162, -0.293853650273392,
0.400308672754849, 
0.843826073923569, -0.144454076182464, -0.619035270434771, NA, 
1.53158893613932, -1.01595045420127, 0.188573746980020,
-1.24703875463314, 
-0.53766035430668, -0.433050941330375, 1.30035413662748,
0.0825664730349873, 
-0.0100815443036547, -1.89151834308193, 0.601611806130933,
1.38339048228375, 
1.70782208107344, 0.489955991643127, NA, 0.717743402714073, NA, 
0.355783083720979, -1.30038021268004, 0.181709422709264,
-0.769997723552683, 
-0.528601269320360, -0.587139047162164, 2.45770817832288,
-3.79345760049497, 
-0.737003476707607, 1.85916858045961, 0.485234889001515,
-2.24404921428853, 
-3.71691740913278, -0.805258199659559, 0.207685613867357,
-0.0558821002122282, 
NA, -0.503328331764907, 0.704074652205563, -0.573911596976014, 
-1.11740646296423)
R dput(y1)
c(0.117364072525805, 0.127930151301644, 0.066273900401,
0.0338529181312498, 
0.0511158613502366, 0.128968673883822, 0.210301133239691,
0.10661115427526, 
0.0232107944450872, 0.0603516951698553, 0.179660748593193,
0.221208092790247, 
0.163670330934813, 0.0706269859311667

[R] Ranking submodels by AIC (more general question)

2011-06-23 Thread Alexandra Thorn
Here's a more general question following up on the specific question I
asked earlier:

Can anybody recommend an R command other than mle.aic() (from the wle
package) that will give back a ranked list of submodels?  It seems like
a pretty basic piece of functionality, but the closest I've been able to
find is stepAIC(), which as far as I can tell only gives back the best
submodel, not a ranking of all submodels.
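
For example (a sketch; dat stands in for my actual data frame):

## stepAIC() steps to a single best model rather than ranking submodels
library(MASS)
full <- lm(y1 ~ xA + x5 + x15, data = dat)
best <- stepAIC(full, direction = "both", trace = FALSE)
summary(best)  # one final model; no table of all submodels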

Thanks in advance,
Alexandra



Re: [R] Ranking submodels by AIC (more general question)

2011-06-23 Thread Alexandra Thorn
Thanks for the suggestion.  Those functions only provide part of the
functionality I want.  

After a great deal more trawling of the internet, I've concluded that the
correct answer to my question is dredge() from the package MuMIn.  It
seems to use the same AIC algorithm as AIC(), which is perfect for making
comparisons!
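
For example (a sketch; dredge() insists on na.action = na.fail, so
incomplete cases have to be handled first):

## Rank all submodels of the full model on a common AIC scale
library(MuMIn)
dat <- na.omit(data.frame(y1, xA, x5, x15))  # dredge requires no NAs
full <- lm(y1 ~ xA + x5 + x15, data = dat, na.action = na.fail)
ranked <- dredge(full, rank = "AIC")         # the default rank is AICc
head(ranked)                                 # ranked table of every submodel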

Thanks again to everybody who's tried to help me out on this!
Alexandra

On Thu, 2011-06-23 at 21:29 +0200, Jan van der Laan wrote: 
 Alexandra,
 
 Have a look at add1 and drop1.
 
 Regards,
 Jan
 
 
 On 06/23/2011 07:32 PM, Alexandra Thorn wrote:
  Here's a more general question following up on the specific question I
  asked earlier:
 
  Can anybody recommend an R command other than mle.aic() (from the wle
  package) that will give back a ranked list of submodels?  It seems like
  a pretty basic piece of functionality, but the closest I've been able to
  find is stepAIC(), which as far as I can tell only gives back the best
  submodel, not a ranking of all submodels.
 
  Thanks in advance,
  Alexandra
 




[R] AIC() vs. mle.aic() vs. step()?

2011-06-22 Thread Alexandra Thorn
I know this is a newbie question, but I've only just started using AIC for
model comparison and after a bunch of different keyword searches I've
failed to find a page laying out what the differences are between the
AIC scores assigned by AIC() and mle.aic() using default settings.  

I started by using mle.aic() to find the best submodels, but then I
wanted to also be able to make comparisons with a couple of submodels
that were nowhere near the top, so I started calculating AIC values
using AIC().  What I found was that not only the scores, but also the
ranking of the models was different.  I'm not sure if this has to do
with the fact that mle.aic() scores are based on the full model, or some
sort of difference in penalties, or something else.
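
For example, the kind of side-by-side comparison I'm attempting with
AIC() (a sketch; y1, xA, and x15 are variables from my data):

## stats::AIC returns -2*logLik + k*npar (k = 2 by default), so models
## fitted by maximum likelihood to the same response are comparable
fit1 <- lm(y1 ~ xA)
fit2 <- lm(y1 ~ xA + x15)
AIC(fit1, fit2)  # lower value = better fit/complexity trade-off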

Could anybody enlighten me as to the differences between these
functions, or how I can use the same scoring system to find the best
models and also compare to far inferior models?

Failing that, could someone point me to an appropriate resource that
might help me understand?

Thanks in advance,
Alexandra
