Hi Paul,
here's an lm model to illustrate this:
> summary(lm(y~x.1+x.2))
Call:
lm(formula = y ~ x.1 + x.2)
Residuals:
       Min         1Q     Median         3Q        Max
-0.0561359 -0.0054020 0.0004553 0.0056516 0.0515817
Coefficients:
Estimate Std. Error t value Pr(>|t|)
Dear Cristina,
On Sat, Mar 24, 2007 at 01:51:34PM +0100, Cristina Gomes wrote:
> Dear R-users,
> I was wondering if anybody knows if it's possible to obtain a p value
> for the full model of a GLMM with the lme4 package.
I do not believe that it is possible to do so.
> I was told that I
> sho
Dear R-users,
I was wondering if anybody knows if it's possible to obtain a p value
for the full model of a GLMM with the lme4 package. I was told that I
should check whether the full model including all the predictor
variables is significant before doing stepwise regression or further
analysis
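For what it's worth, the usual route to a single p-value for the whole model is a likelihood-ratio test of the full fit against an intercept-only null via anova(). A minimal sketch with made-up data and hypothetical variable names (y, x1, x2, grouping factor g), assuming a current lme4:

```r
library(lme4)

## Hypothetical data: binary outcome, two predictors, 12 groups of 10
set.seed(1)
d <- data.frame(x1 = rnorm(120),
                x2 = rnorm(120),
                g  = factor(rep(1:12, each = 10)))
d$y <- rbinom(120, 1, plogis(0.8 * d$x1))

## Full model and intercept-only null with the same random effects
full <- glmer(y ~ x1 + x2 + (1 | g), data = d, family = binomial)
null <- glmer(y ~ 1 + (1 | g), data = d, family = binomial)

## Likelihood-ratio test: one chi-square p-value for the fixed effects jointly
anova(null, full)
```

The chi-square here is asymptotic, so treat it as a guide rather than an exact test.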
Hi there,
I have a question about the GLMM that I'm doing, that a statistician
friend suggested I should have for my analysis. I would like to know if
there's any way of obtaining a p value and R square for the full model
(and not each variable separately) so as to assess whether this model is
some
[Oops! Written 6 hours ago, the following was accidentally not sent.]
> "Celso" == Celso Barros <[EMAIL PROTECTED]>
> on Wed, 5 Jul 2006 04:09:17 -0300 writes:
Celso> When I run rlm to obtain robust standard errors, my output does not
include
Celso> p-values. Is there any r
Dear All,
When I run rlm to obtain robust standard errors, my output does not include
p-values. Is there any reason p-values should not be used in this case? Is
there an argument I could use in rlm so that the output does
include p-values?
Thanks in advance,
Celso
On Tue, 11 Apr 2006, Prof Brian Ripley wrote:
> On Tue, 11 Apr 2006, Thomas Lumley wrote:
>>
>> He has a linear model with the same number of observations for each person
>
> Not so: some have 3 and some have 2, and the two levels of T are not quite
> balanced (29/28).
>
>> and no covariates tha
On Tue, 11 Apr 2006, Thomas Lumley wrote:
> On Tue, 11 Apr 2006, Renaud Lancelot wrote:
>
>> 2006/4/10, Tarca, Adi <[EMAIL PROTECTED]>:
>>> Hi all,
>>>
>>> I have a dataset in which the output Y is observed on two groups of
>>> patients (treatment factor T with 2 levels).
>>>
>>> Every subject in
Hi all,
I have a dataset in which the output Y is observed on two groups of
patients (treatment factor T with 2 levels).
Every subject in each group is observed three times (not time points but
just technical replication).
I am interested in estimating the treatment effect and take into account
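One standard way to handle technical replicates like these is a per-subject random intercept, so the treatment effect is tested at the subject level rather than the replicate level. A hedged sketch with entirely made-up data (subject counts and effect sizes are hypothetical), assuming lme4:

```r
library(lme4)

## Hypothetical layout: 40 subjects (20 per level of T),
## 3 technical replicates per subject
set.seed(2)
d <- expand.grid(rep = 1:3, subject = factor(1:40))
d$T <- factor(ifelse(as.integer(d$subject) <= 20, "A", "B"))
subj.eff <- rnorm(40, sd = 0.3)
d$y <- 1 + 0.5 * (d$T == "B") + subj.eff[d$subject] + rnorm(nrow(d), sd = 0.1)

## The (1 | subject) term absorbs the replicate correlation
fit <- lmer(y ~ T + (1 | subject), data = d)
summary(fit)
```

Averaging the three replicates per subject and running a t-test would give essentially the same comparison here, since the replication is purely technical.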
- Original Message -
From: "Wilson, Andrew" <[EMAIL PROTECTED]>
To:
Sent: Monday, March 13, 2006 10:53 AM
Subject: [R] P-values i
When fitting a simple linear or polynomial regression using lm, R
provides a p-value for the whole model as well as for the individual
coefficients. When fitting the same models using gls (in order to
correct for autocorrelation), there doesn't seem to be a p-value
provided for the whole model, al
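One route to a whole-model p-value with gls is a likelihood-ratio test against an intercept-only fit; both models must be fitted by ML (method = "ML"), not the default REML, for the comparison to be valid. A sketch on simulated AR(1) data:

```r
library(nlme)

set.seed(3)
x <- 1:100
y <- 0.05 * x + as.numeric(arima.sim(list(ar = 0.6), 100))
d <- data.frame(x = x, y = y)

## Same correlation structure in both fits; ML so likelihoods are comparable
full <- gls(y ~ x, data = d, correlation = corAR1(), method = "ML")
null <- gls(y ~ 1, data = d, correlation = corAR1(), method = "ML")

anova(null, full)  # likelihood-ratio test for the whole model
```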
"Ladelund, Steen" <[EMAIL PROTECTED]> writes:
> Hi all.
>
> Below is the example from the cluster-help page with the output.
>
> I simply cannot figure out how to relate the estimate and robust Std. Err to
> the p-value. I am aware this is a marginal model applying the sandwich
> estimator using (
Hi all.
Below is the example from the cluster-help page with the output.
I simply cannot figure out how to relate the estimate and robust Std. Err to
the p-value. I am aware this is a marginal model applying the sandwich
estimator using (here I guess) an empirical (unstructured/exchangeable?)
ICC.
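For the record, the relation is simple: the reported p-value is the two-sided normal tail of the Wald statistic built from the estimate and its robust standard error. With hypothetical numbers:

```r
## Hypothetical estimate and robust (sandwich) standard error
est <- 0.8
robust.se <- 0.25

## Wald z statistic and its two-sided p-value
z <- est / robust.se
p <- 2 * pnorm(-abs(z))
c(z = z, p = p)
```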
On Thu, 4 Aug 2005, Peter Ho wrote:
> Hi R-users,
>
> I am trying to repeat an example from Rayner and Best, "A Contingency
> Table Approach to Nonparametric Testing" (Chapter 7, ice cream example).
>
> In their book they calculate Durbin's statistic, D1, a dispersion
> statistic, D2, and a residu
Spencer,
Here is an example from Rayner and Best 2001 and the script sent by
Felipe. This can be done as follows using the function durbin.grupos()
in the attached file
> ###Ice cream example from Rayner and Best 2001 . Chapter 7
> judge <- rep(c(1:7),rep(3,7))
> variety <- c(1,2,4,2,3,5,3,4
pperm seems reasonable, though I have not looked at the details.
We should be careful about terminology, however. So-called "exact
p-values" are generally p-values computed assuming a distribution over a
finite set of possible outcomes assuming some constraints to make the
Spencer,
Thank you for referring me to your other email on Exact goodness-of-fit
test. However, I'm not entirely sure if what you mentioned is the same
for my case. I'm not a statistician and it would help me if you could
explain what you meant in a little more detail. Perhaps I need to
expla
Hi, Peter:
Please see my reply of a few minutes ago subject: exact
goodness-of-fit test. I don't know Rayner and Best, but the same
method, I think, should apply. spencer graves
Peter Ho wrote:
> Hi R-users,
>
> I am trying to repeat an example from Rayner and Best, "A Contingency
Hi R-users,
I am trying to repeat an example from Rayner and Best, "A Contingency
Table Approach to Nonparametric Testing" (Chapter 7, ice cream example).
In their book they calculate Durbin's statistic, D1, a dispersion
statistic, D2, and a residual. P-values for each statistic are
calculated f
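If only the statistics themselves are in hand, the asymptotic p-values come from chi-square tail areas; for Durbin's statistic the reference distribution has t - 1 degrees of freedom, where t is the number of treatments. With hypothetical values:

```r
## Hypothetical Durbin statistic and treatment count
D1 <- 8.5
t.treat <- 5

## Asymptotic p-value: upper chi-square tail on t - 1 df
p1 <- pchisq(D1, df = t.treat - 1, lower.tail = FALSE)
p1
```

For small designs the chi-square approximation can be poor, which is why Rayner and Best also discuss exact and resampling-based p-values.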
Not really an R question.
Most classifiers will produce predicted probabilities, and you can check
their accuracy. There are lots of details in my PRNN book, and some
examples in MASS4.
I suggest you adjust your training and test sets to be more nearly equal,
or use cross-validation.
I don't
Dear All,
I'm classifying some data with various methods (binary classification). I'm
interpreting the results via a confusion matrix from which I calculate the
sensitivity and the FDR. The classifiers are trained on 575 data points and my
test set has 50 data points.
I'd like to calculate p-v
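One simple option, if the aim is to test a classifier against chance, is an exact binomial test of the test-set accuracy against the no-information rate. Hypothetical counts (42 of 50 correct, balanced classes assumed, so the chance rate is 0.5):

```r
## Exact binomial test: is 42/50 correct better than guessing at 0.5?
bt <- binom.test(42, 50, p = 0.5, alternative = "greater")
bt$p.value
```

Comparing two classifiers on the same test set is a different question; McNemar's test (mcnemar.test) on the paired right/wrong outcomes is the usual tool there.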
On Sat, 2005-03-26 at 20:36 -0500, John Sorkin wrote:
> R 2.0.1
> Linux
>
> I am using rlm() to fit a model, e.g. fit1<-rlm(y~x). My model is more
> complex than the one shown.
>
> When I enter summary(fit1)
> I get estimates for the model's coefficients along with their SEs, and
> t values, b
R 2.0.1
Linux
I am using rlm() to fit a model, e.g. fit1<-rlm(y~x). My model is more
complex than the one shown.
When I enter summary(fit1)
I get estimates for the model's coefficients along with their SEs, and
t values, but no p values. The p value column is blank.
Similarly, when I enter ano
Hmm, this is rather about reading the (Hartigan)^2 paper ...
> "Kylie" == Kylie Lange <[EMAIL PROTECTED]>
> on Fri, 22 Oct 2004 11:17:34 +0930 writes:
Kylie> Hi all,
Kylie> I am using Hartigan & Hartigan's [1] "dip test" of
Kylie> unimodality via the diptest package in R.
Hi all,
I am using Hartigan & Hartigan's [1] "dip test" of unimodality via the
diptest package in R. The function dip() returns the value of the test
statistic but I am having problems calculating the p-value associated with
that value. I'm hoping someone here is familiar with this process and c
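One way to get a p-value without the published tables is Monte Carlo simulation of dip() under the uniform null distribution that Hartigan & Hartigan tabulate. A sketch on a deliberately bimodal sample, assuming the diptest package provides only dip():

```r
library(diptest)

## Clearly bimodal sample, so the dip statistic should be large
set.seed(4)
x <- c(rnorm(50, -2), rnorm(50, 2))
d.obs <- dip(x)

## Null distribution: dip statistics of uniform samples of the same size
B <- 1000
d.null <- replicate(B, dip(runif(length(x))))

## Monte Carlo p-value: fraction of null dips at least as large as observed
p <- mean(d.null >= d.obs)
p
```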
Frank -
Thanks for your reply, on which I really have no comment.
There are contexts where a Bayesian approach is necessary,
natural, and easy to handle, and can be used to broaden
the inferential vision of students. Examples from HIV
testing and mammography screening in
Gigerenzer's book, sold in the
On Thu, 29 Apr 2004 14:49:26 +1000
John Maindonald <[EMAIL PROTECTED]> wrote:
> This is, of course, not strictly about R. But if there should be
> a decision to pursue such matters on this list, then we'd need
> another list to which such discussion might be diverted.
>
> I've pulled Frank's "Re
Sent: Thursday, April 29, 2004 10:39 AM
To: Thomas Lumley
Cc: [EMAIL PROTECTED]; John Maindonald
Subject: Re:[R] p-values
On 29-Apr-04 Thomas Lumley wrote:
> On Thu, 29 Apr 2004, John Maindonald wrote:
>
>> This is, of course, not strictly about R. But if there should be
>> a decision to
On 29-Apr-04 Thomas Lumley wrote:
> On Thu, 29 Apr 2004, John Maindonald wrote:
>
>> This is, of course, not strictly about R. But if there should be
>> a decision to pursue such matters on this list, then we'd need
>> another list to which such discussion might be diverted.
>>
>
> Ted Harding s
On Thu, 29 Apr 2004, John Maindonald wrote:
> This is, of course, not strictly about R. But if there should be
> a decision to pursue such matters on this list, then we'd need
> another list to which such discussion might be diverted.
>
Ted Harding started such a list (stats-discuss) quite some
ill have a section in his book in the future?
John Maindonald <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
04/29/2004 12:49 AM
To: [EMAIL PROTECTED]
cc: [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: Re:[R] p-values
This is, of course, not strict
On 29-Apr-04 John Maindonald wrote:
> This is, of course, not strictly about R. But if there should be
> a decision to pursue such matters on this list, then we'd need
> another list to which such discussion might be diverted.
A few of us have taken further discussion on this topic off-list,
betw
This is, of course, not strictly about R. But if there should be
a decision to pursue such matters on this list, then we'd need
another list to which such discussion might be diverted.
I've pulled Frank's "Regression Modeling Strategies" down
from my shelf and looked to see what he says about
inf
On Tue, 27 Apr 2004 22:25:22 +0100 (BST)
(Ted Harding) <[EMAIL PROTECTED]> wrote:
> On 27-Apr-04 Greg Tarpinian wrote:
> > I apologize if this question is not completely
> > appropriate for this list.
>
> Never mind! (I'm only hoping that my response is ... )
>
> > [...]
> > This week I have be
vity of results to the choice
of prior?
John Maindonald.
From: Greg Tarpinian <[EMAIL PROTECTED]>
Date: 28 April 2004 6:32:06 AM
To: [EMAIL PROTECTED]
Subject: [R] p-values
I apologize if this question is not completely
appropriate for this list.
I have been using SAS for a while and am now in th
Group,
I'm currently trying to find out how the function correlog in the ncf
package may be useful to me for calculating cross-correlograms. The
function's output includes P-values for all distance classes, but it seems
that only positive values can become significant. Is this true and correct?
If