As John noted, there are different kinds of weights, and
different terminology:
* inverse-variance weights (accuracy weights)
* case weights (frequencies, counts)
* sampling weights (selection probability weights)
I'll add:
* inverse-variance weights, where var(y for observation) = 1/weight
(as
>>> Adaikalavan Ramasamy <[EMAIL PROTECTED]> 09/05/2007 01:37:31 >>>
>..the variance of means of each row in table above is ZERO because
>the individual elements that comprise each row are identical.
>... Then is it valid then to use lm( y ~ x, weights=freq ) ?
ermmm... probably not, because if
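Whether weights=freq is valid depends on what you want from the fit. A quick check (made-up data echoing the table elsewhere in this thread, not code from the original posts) shows the frequency-weighted fit reproduces the coefficients of the fully expanded data exactly, but not the residual degrees of freedom, so the standard errors differ:

```r
# Aggregated data with frequency counts vs. the same data expanded row by row
agg <- data.frame(y = c(0, 0, 1), x = c(1.1, 2.2, 3.3), freq = c(1, 3, 1))
expanded <- agg[rep(seq_len(nrow(agg)), agg$freq), c("y", "x")]

fit_w <- lm(y ~ x, data = agg, weights = freq)
fit_e <- lm(y ~ x, data = expanded)

all.equal(coef(fit_w), coef(fit_e))        # TRUE: same point estimates
c(df.residual(fit_w), df.residual(fit_e))  # 1 vs 3: different df, so different SEs
```

The weighted objective sum(freq * resid^2) has the same minimizer as the expanded OLS objective, which is why the coefficients agree; only the df (and hence sigma-hat and the SEs) disagree.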
Dear Hadley,
> -Original Message-
> From: hadley wickham [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, May 09, 2007 2:21 AM
> To: John Fox
> Cc: R-help@stat.math.ethz.ch
> Subject: Re: [R] Weighted least squares
>
> Thanks John,
>
> That's just the e
Dear Adai,
> -Original Message-
> From: Adaikalavan Ramasamy [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, May 08, 2007 8:38 PM
> To: S Ellison
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; R-help@stat.math.ethz.ch
> Subject: Re: [R] Weighted least squares
>
> http
On 5/9/07, Adaikalavan Ramasamy <[EMAIL PROTECTED]> wrote:
> http://en.wikipedia.org/wiki/Weighted_least_squares gives a formulaic
> description of what you have said.
Except it doesn't describe what I think is important in my case - how
do you calculate the degrees of freedom/n for weighted linear regression?
http://en.wikipedia.org/wiki/Weighted_least_squares gives a formulaic
description of what you have said.
I believe the original poster has converted something like this
 y    x
 0  1.1
 0  2.2
 0  2.2
 0  2.2
 1  3.3
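That conversion can be sketched in code (using the five rows above):

```r
raw <- data.frame(y = c(0, 0, 0, 0, 1), x = c(1.1, 2.2, 2.2, 2.2, 3.3))
# Collapse duplicated (y, x) rows into unique rows plus a frequency column
agg <- aggregate(freq ~ y + x, data = transform(raw, freq = 1), FUN = sum)
agg  # three unique rows, with frequencies 1, 3 and 1
```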
Hadley,
You asked
> .. what is the usual way to do a linear
> regression when you have aggregated data?
Least squares generally uses inverse variance weighting. For aggregated data
fitted as mean values, you just need the variances for the _means_.
So if you have individual means x_i and sd's
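Concretely (a sketch with made-up group summaries, not data from the thread): if each row holds a group mean, its within-group sd and the group size, then var(mean) = s^2/n, so the inverse-variance weight for each mean is n/s^2:

```r
grp <- data.frame(
  x    = c(1, 2, 3, 4),
  ybar = c(1.2, 1.9, 3.1, 4.2),   # group means of y (made up)
  s    = c(0.5, 0.4, 0.6, 0.5),   # within-group sd of the raw observations
  n    = c(10, 20, 15, 12)        # group sizes
)
# var(ybar_i) = s_i^2 / n_i, so weight each mean by its inverse variance
fit <- lm(ybar ~ x, data = grp, weights = n / s^2)
summary(fit)$coefficients
```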
Doubling the length of the data doubles the apparent number of observations.
You would expect the standard error to reduce by a factor of sqrt(2) (which it
just about does, though I'm not clear on why it's not exact here)
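The small discrepancy is the residual degrees of freedom: duplicating every row moves the df from n - 2 to 2n - 2, so the SE ratio is sqrt(2(n-1)/(n-2)) rather than exactly sqrt(2). A check, with data simulated to match the setup described in this thread:

```r
set.seed(1)
n <- 100
df1 <- data.frame(x = runif(n, 0, 100))
df1$y <- df1$x + 1 + rnorm(n, sd = 15)

se <- function(fit) summary(fit)$coefficients["x", "Std. Error"]
ratio <- se(lm(y ~ x, data = df1)) / se(lm(y ~ x, data = rbind(df1, df1)))

ratio                        # about 1.4214, not sqrt(2) = 1.4142
sqrt(2 * (n - 1) / (n - 2))  # matches the ratio exactly, whatever the data
```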
Weights are not as simple as they look. You have given all your data the same
weight,
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of hadley wickham
> Sent: Tuesday, May 08, 2007 5:09 AM
> To: R Help
> Subject: [R] Weighted least squares
>
> Dear all,
>
> I'm struggling with weighted least squares, where something
> that I
Sorry, you did not explain that your weights correspond to your
frequency in the original post. I assumed they were repeated
measurements with within group variation.
I was merely responding to your query why the following differed.
summary(lm(y ~ x, data=df, weights=rep(2, 100)))
summar
On 5/8/07, Adaikalavan Ramasamy <[EMAIL PROTECTED]> wrote:
> See below.
>
> hadley wickham wrote:
> > Dear all,
> >
> > I'm struggling with weighted least squares, where something that I had
> > assumed to be true appears not to be the case. Take the following
> > data set as an example:
> >
> > d
See below.
hadley wickham wrote:
> Dear all,
>
> I'm struggling with weighted least squares, where something that I had
> assumed to be true appears not to be the case. Take the following
> data set as an example:
>
> df <- data.frame(x = runif(100, 0, 100))
> df$y <- df$x + 1 + rnorm(100, sd=15)
Dear all,
I'm struggling with weighted least squares, where something that I had
assumed to be true appears not to be the case. Take the following
data set as an example:
df <- data.frame(x = runif(100, 0, 100))
df$y <- df$x + 1 + rnorm(100, sd=15)
I had expected that:
summary(lm(y ~ x, data=d
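The comparison is cut off above; a plausible reconstruction of the puzzle (the exact truncated lines are lost) is that lm treats weights as relative precisions, so a constant weight of 2 changes nothing, whereas physically duplicating the rows does shrink the standard errors:

```r
set.seed(1)
df <- data.frame(x = runif(100, 0, 100))
df$y <- df$x + 1 + rnorm(100, sd = 15)

fit_plain   <- lm(y ~ x, data = df)
fit_weight2 <- lm(y ~ x, data = df, weights = rep(2, 100))
fit_doubled <- lm(y ~ x, data = rbind(df, df))

# A constant weight cancels out of the WLS algebra: identical coefficient tables
all.equal(coef(summary(fit_plain)), coef(summary(fit_weight2)))  # TRUE
# Duplicating rows keeps the estimates but shrinks the standard errors
coef(summary(fit_doubled))["x", "Std. Error"] <
  coef(summary(fit_plain))["x", "Std. Error"]                    # TRUE
```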
Hello all,
I have a question concerning the weights used in the glm function.
I need to build a linear model (family=gaussian) with only one regressor. Sadly
I have only 6 different sets:
y_i = alpha + beta*x_i , i = 1,...,6.
Each of i = 1,...,5 has been observed 60 times, while i = 6 has only been observed
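One way to handle that design (a sketch with invented numbers, assuming i = 6 was observed just once) is to regress the six group means on x with the replication counts as case weights. Note the residual df is then 6 - 2, not the total observation count minus 2:

```r
grp <- data.frame(
  x    = 1:6,
  ybar = c(2.1, 3.9, 6.2, 8.1, 9.8, 12.5),  # mean response at each x (made up)
  n    = c(60, 60, 60, 60, 60, 1)           # replications per x value
)
fit <- glm(ybar ~ x, family = gaussian, data = grp, weights = n)
coef(fit)
df.residual(fit)  # 4
```

This treats var(ybar_i) as proportional to 1/n_i, which is right if the individual observations all share the same variance.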
On Tue, 18 Jan 2005, Prof Brian Ripley wrote:
On Mon, 17 Jan 2005, Ming Hsu wrote:
I would like to run a weighted least squares with the weighting matrix
W.
This is generalized not weighted least squares if W really is a matrix and
not a vector of case-by-case weights.
I ran the following two regressions,
On Mon, 17 Jan 2005, Ming Hsu wrote:
I would like to run a weighted least squares with the weighting
matrix W.
This is generalized not weighted least squares if W really is a matrix and
not a vector of case-by-case weights.
I ran the following two regressions,
(W^-1)Y = Xb + e
Y = WXb + We
If
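To illustrate Prof. Ripley's distinction (a sketch with an invented error covariance matrix, not code from the thread): with a full covariance V, generalized least squares amounts to whitening both sides with a Cholesky factor of V and then running ordinary lm:

```r
set.seed(1)
n <- 10
X <- cbind(1, rnorm(n))
V <- 0.5^abs(outer(1:n, 1:n, "-"))  # AR(1)-style error covariance (made up)
R <- chol(V)                        # V = t(R) %*% R, with R upper triangular
y <- X %*% c(1, 2) + t(R) %*% rnorm(n)  # errors with covariance V

# Whitening: premultiply by solve(t(R)) so the errors become iid, then OLS
Tw <- solve(t(R))
fit_gls <- lm(Tw %*% y ~ Tw %*% X - 1)
coef(fit_gls)  # GLS estimates of the true coefficients (1, 2)
```

A case-by-case `weights` vector in lm() is the special case where V is diagonal.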
Hi,
I would like to run a weighted least squares with the weighting matrix
W. I ran the following two regressions,
(W^-1)Y = Xb + e
Y = WXb + We
In both cases, E[bhat] = b.
I used the following commands in R
lm1 <- lm(Y/W ~ X)
lm2 <- lm(Y ~ W:X, weights = W)
where
Y <- rnorm(10,1)
X <-
I apologize, in advance, for cross-posting this to
the R listserv. I have submitted this query twice
to the S listserv (yesterday and this morning)and
neither post has "made it", not sure why.
When I run the code
gls.1 <-
gls(y ~ x, data = foo.frame,
weights = varPower(form = ~ fitted(.)|g
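For reference, a self-contained nlme::gls call of the same shape (simulated data; the grouping factor after the `|` is lost to truncation, so it is omitted here):

```r
library(nlme)
set.seed(1)
foo.frame <- data.frame(x = runif(50, 1, 10))
foo.frame$y <- 2 + 3 * foo.frame$x + rnorm(50, sd = 0.5 * foo.frame$x)

# Residual variance modelled as a power of the fitted values
gls.1 <- gls(y ~ x, data = foo.frame,
             weights = varPower(form = ~ fitted(.)))
summary(gls.1)$tTable
```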