On Tue, 2005-04-05 at 22:54 -0400, John Sorkin wrote:
> Please forgive a straight stats question, and the informal notation.
>
> let us say we wish to perform a linear regression:
> y=b0 + b1*x + b2*z
>
> There are two ways this can be done, the usual way, as a single
> regression,
> fit1<-lm(y
> "Michael" == Michael A Miller <[EMAIL PROTECTED]>
> on Tue, 05 Apr 2005 10:28:21 -0500 writes:
> "dream" == dream home <[EMAIL PROTECTED]> writes:
>> Does it sound like spline work would do the job? It would be
>> hard to persuade him to use some modern math technique
>> bu
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://ww
I have posted an update to the GAM package. Note that this package
implements gam() as described
in the "White" S book (Statistical models in S). In particular, you can
fit models with lo() terms (local regression)
and/or s() terms (smoothing splines), mixed in, of course, with any
terms appropr
This is possible if x and z are orthogonal, but in general it doesn't
work as you have noted. (If it did work it would almost amount to a way
of inverting general square matrices by working one row at a time, no
going back...)
It is possible to fit a bivariate regression using simple linear
regre
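That truncated reply is describing the Frisch-Waugh-Lovell idea: sweep x out of both y and z by simple regressions, then regress residuals on residuals. A minimal sketch with simulated data (variable values are made up):

```r
set.seed(42)
x <- rnorm(100); z <- rnorm(100)
y <- 1 + 2 * x + 3 * z + rnorm(100)

## The usual single regression
fit1 <- lm(y ~ x + z)

## Two-stage route: residualize y and z on x, then regress
## residuals on residuals
ry <- resid(lm(y ~ x))
rz <- resid(lm(z ~ x))
fit2 <- lm(ry ~ rz)

coef(fit1)["z"]    # coefficient of z from the joint fit
coef(fit2)["rz"]   # identical, by Frisch-Waugh-Lovell
```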
Please forgive a straight stats question, and the informal notation.
let us say we wish to perform a linear regression:
y=b0 + b1*x + b2*z
There are two ways this can be done, the usual way, as a single
regression,
fit1<-lm(y~x+z)
or by doing two regressions. In the first regression we could ha
Dear all,
Are there any functions which can solve a system of
nonlinear equations. My system is R 2.0.1+Debian.
Thank you in advance.
--
Yours Sincerely
Shusong Jin
===
Add:
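Base R has no dedicated multi-equation solver, but one standard workaround is to recast root-finding as least-squares minimization with optim(); a sketch on a made-up two-equation system:

```r
## System: x^2 + y^2 = 2 and x = y; one root is (1, 1)
f <- function(p) {
  x <- p[1]; y <- p[2]
  c(x^2 + y^2 - 2,
    x - y)
}

## Minimize the sum of squared residuals of the system
sol <- optim(c(0.5, 0.5), function(p) sum(f(p)^2))
sol$par   # approximately c(1, 1)
```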
On Apr 5, 2005 6:59 PM, Itay Furman <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I have a data set, the structure of which is something like this:
>
> > a <- rep(c("a", "b"), c(6,6))
> > x <- rep(c("x", "y", "z"), c(4,4,4))
> > df <- data.frame(a=a, x=x, r=rnorm(12))
>
> The true data set has >1 mill
On Tuesday 05 April 2005 18:40, Murray Jorgensen wrote:
> I assigned a class the first problem in Pinheiro & Bates, which uses
> the data set PBIB from the SASmixed package. I have recently
> downloaded 2.0.1 and its associated packages. On trying
>
> library(SASmixed)
> data(PBIB)
> library(nlme)
Hi Itay,
Not sure if by() can do it directly, but this does it from first
principles, using lapply() and tapply() (which aggregate uses
internally). It would be reasonably straightforward to wrap this up
in a function.
a <- rep(c("a", "b"), c(6,6))
x <- rep(c("x", "y", "z"), c(4,4,4))
df <- data
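To finish the truncated example, here is the first-principles version with tapply() on the two factors together (cells that never occur come back as NA):

```r
a <- rep(c("a", "b"), c(6, 6))
x <- rep(c("x", "y", "z"), c(4, 4, 4))
df <- data.frame(a = a, x = x, r = rnorm(12))

## Mean of r within every a-by-x cell
means <- tapply(df$r, list(df$a, df$x), mean)
means   # 2 x 3 matrix; empty cells such as ("a","z") are NA
```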
Dr. Frank E. Harrell, Jr., Professor and Chair of the Department of
Biostatistics at Vanderbilt University is giving a one-day workshop on
Regression Modeling Strategies on Friday, April 29, 2005. Analyses of the
example datasets use R/S-Plus and make extensive use of the Hmisc library
written
I assigned a class the first problem in Pinheiro & Bates, which uses the
data set PBIB from the SASmixed package. I have recently downloaded
2.0.1 and its associated packages. On trying
library(SASmixed)
data(PBIB)
library(nlme)
plot(PBIB)
I get a warning message
Warning message:
replacing previ
Hi,
I have a data set, the structure of which is something like this:
a <- rep(c("a", "b"), c(6,6))
x <- rep(c("x", "y", "z"), c(4,4,4))
df <- data.frame(a=a, x=x, r=rnorm(12))
The true data set has >1 million rows. The factors "a" and "x"
have about 70 levels each; combined together they subset 'd
Gidday,
Perhaps try something along these lines:
## Establish which 4-letter group each row belongs to
prefix <- substr(names(d), 1, 4)
gp <- match(prefix, unique(prefix))
gp[regexpr("\\.total$", names(d)) > -1] <- NA # Exclude `*.total' rows
## Sum up each of the groups
d.sums <- lapply(split(s
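The tail of that snippet was cut off; a complete version of the same idea on a toy data frame (column names invented to match the original post):

```r
d <- data.frame(tosk.fai = 1:3, tosk.isd = 4:6, tosk.total = 0,
                hysa.fai = 7:9, hysa.isd = 10:12)

## Establish which 4-letter group each column belongs to
prefix <- substr(names(d), 1, 4)
gp <- match(prefix, unique(prefix))
gp[regexpr("\\.total$", names(d)) > -1] <- NA  # exclude `*.total' columns

## Sum up each of the groups, row-wise
keep <- !is.na(gp)
d.sums <- sapply(split(names(d)[keep], gp[keep]),
                 function(cols) rowSums(d[, cols, drop = FALSE]))
colnames(d.sums) <- unique(prefix)[as.integer(colnames(d.sums))]
d.sums   # one column per prefix ("tosk", "hysa"), totals excluded
```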
On 5 Apr 2005, at 9:39 pm, Minyu Chen wrote:
Sorry for bothering again, but it doesn't work yet. Now it shows "x11"
when I type getOption("device"), but when I do the plot, the terminal
simply tells me x11 is not available.
This is why I asked you whether you have X11 before compiling R. It'
Hi,
I guess by now you realize that we have been trying to
promote our NBGLMM in order to show people some of the
capabilities of our random effects software ADMB-RE.
If you want I can help you to analyze your data with
the package we talked about on the R list. All I ask is that
if it works for yo
On 5 Apr 2005, at 8:45 pm, Minyu Chen wrote:
No, the only output is postscript. As I just installed X11, I did not
have it before compiling R.
You can try to set the device to x11 by issuing the following command,
options(device = 'x11')
and hope now it works.
What to do now except for getting and comp
On Apr 5, 2005 1:36 PM, Paul Johnson <[EMAIL PROTECTED]> wrote:
> I'm writing R code to calculate Hierarchical Social Entropy, a diversity
> index that Tucker Balch proposed. One article on this was published in
> Autonomous Robots in 2000. You can find that and others through his web
> page at Ge
Alexis J. Diamond wrote:
hi,
thanks for the reply to my query about exclusion rules for propensity
score matching.
Exclusion can be based on the non-overlap regions from the propensity.
It should not be done in the individual covariate space.
i want a rule inspired by non-overlap in propensity sc
To get the neighborhoods of radius r of each point in your data set, given
distances calculated already in the matrix d, you could do (but note below)
A <- (d <= r)
then rows (or columns) of A are indicator vectors for the neighborhoods.
"Unique" will work on these vectors, as "unique.array", t
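Spelled out on simulated points (the radius r and the data are made up):

```r
set.seed(1)
pts <- matrix(rnorm(20), ncol = 2)   # 10 points in the plane
d <- as.matrix(dist(pts))            # pairwise distance matrix
r <- 1

A <- d <= r          # row i is the indicator vector of point i's neighborhood
nbhds <- unique(A)   # distinct neighborhoods only (unique rows)
nrow(nbhds)          # <= nrow(A); duplicate neighborhoods collapsed
```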
On 5 Apr 2005, at 19:12, Minyu Chen wrote:
Dear all:
I am a newbie in Mac. Just installed R and found R did not react on my
command plot (I use the command line in a terminal). It did not give me any
error message, either. All it did was give out a new command
prompt--no reaction to the plot co
> From: Paul Johnson
>
> I'm writing R code to calculate Hierarchical Social Entropy,
> a diversity
> index that Tucker Balch proposed. One article on this was
> published in
> Autonomous Robots in 2000. You can find that and others
> through his web
> page at Georgia Tech.
>
> http://www.
Check out these recent postings to the R list:
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/48429.html
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/48646.html
Cheers, Pierre
[EMAIL PROTECTED] wrote:
Greetings R Users!
I have a data set of count responses for which I have made repeated obser
Hi,
Also consider using the function supplied in the post:
https://stat.ethz.ch/pipermail/r-help/2005-March/066752.html
for fitting negative binomial mixed effects models.
Cheers,
Anders.
On Tue, 5 Apr 2005, Achim Zeileis wrote:
> On Tue, 5 Apr 2005 11:20:37 -0600 [EMAIL PROTECTED] wrote:
The following strategy may or may not work, depending on whether the numbers
in your lists are integers or could be the result of floating point
computations (so that 2 might be 1.99... etc.).
As I understand it, you wish to reduce an arbitrary list to one with unique
members, where each membe
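A small illustration of the floating point caveat (numbers are made up): exact comparison keeps near-duplicates, rounding to a tolerance first collapses them.

```r
xs <- c(1, 1 + 1e-10, 2, 1.999999999, 3)

unique(xs)             # length 5: the near-duplicates survive
unique(round(xs, 6))   # length 3: collapsed to 1, 2, 3
```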
Dear all:
I am a newbie in Mac. Just installed R and found R did not react on my
command plot (I use the command line in a terminal). It did not give me any
error message, either. All it did was give out a new command
prompt--no reaction to the plot command. I suppose whenever I give out
a co
I have a dataset of the form
Year tosk.fai tosk.isd tosk.gr ... tosk.total hysa.fai
hysa.isd ...
and so on. I want to sum all the columns using the first four letters in
the column labels (e.g. 'tosk', 'hysa' etc.). How can you do that? Also,
the sums should be without the '.to
On Tue, 5 Apr 2005 11:20:37 -0600 [EMAIL PROTECTED] wrote:
> Greetings R Users!
>
> I have a data set of count responses for which I have made repeated
> observations on the experimental units (stream reaches) over two air
> photo dates, hence the mixed effect. I have been using Dr. Jim
> Linds
I'm writing R code to calculate Hierarchical Social Entropy, a diversity
index that Tucker Balch proposed. One article on this was published in
Autonomous Robots in 2000. You can find that and others through his web
page at Georgia Tech.
http://www.cc.gatech.edu/~tucker/index2.html
While I wor
Greetings R Users!
I have a data set of count responses for which I have made repeated observations
on the experimental units (stream reaches) over two air photo dates, hence the
mixed effect. I have been using Dr. Jim Lindsey's GLMM function found in his
"repeated" measures package with the "poi
It is seldom a good idea to use the _significance_ of a difference as a
surrogate for its magnitude. The reason is that the significance varies
with many irrelevant aspects of the analysis, such as the model and the
sample size. If you have other measures of group similarity, then there
are many
Hi-
I have about 20 groups for which I know the mean, variance, and number of
points per group. Is here an R function where I can plot (3 group example)
something like
[hand-drawn sketch: a dendrogram-style plot in which groups 2 and 1 join
first and then merge with group 3, with the scale running down to 0]
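One way to draw that kind of tree in R (a sketch with made-up group means; hclust() on the distances between the group means produces exactly this sort of diagram):

```r
## Three groups summarised by their means (made-up numbers)
means <- c(grp1 = 1.0, grp2 = 1.2, grp3 = 3.5)

hc <- hclust(dist(means))   # groups 1 and 2 merge first, then group 3
plot(hc, ylab = "distance")
```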
Hi R users,
I need some help in the followings:
I'm doing factor analysis and I need to extract the loading values and
the Proportion Var and Cumulative Var values one by one.
Here is what I am doing:
> fact <- factanal(na.omit(gnome_freq_r2),factors=5);
> fact$loadings
Loadings:
Factor1
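The printed "Proportion Var" and "Cumulative Var" rows can be recomputed directly from the loadings matrix; a sketch on a built-in data set (gnome_freq_r2 is not available here):

```r
fact <- factanal(mtcars[, c("mpg", "disp", "hp", "drat", "wt", "qsec")],
                 factors = 2)

L <- unclass(fact$loadings)   # loadings as a plain numeric matrix
ssl <- colSums(L^2)           # the "SS loadings" row of the printout
propVar <- ssl / nrow(L)      # "Proportion Var"
cumVar <- cumsum(propVar)     # "Cumulative Var"
```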
Dear Michael,
For unbalanced data, you might want to take a look at the Anova()
function in the car package.
As well, it probably makes sense to read something about how linear
models are expressed in R. ?lm and ?formula both have some information
about model formulas; the Introduction to R manua
On Tue, 2005-04-05 at 15:51 +0100, michael watson (IAH-C) wrote:
> So, what I want to know is:
>
> 1) Given my unbalanced experimental design, is it valid to use aov?
I'd say no. Use lm() instead, save your analysis in an object and then
possibly use drop1() to check the analysis
> 2) Have I us
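A minimal sketch of that lm() + drop1() route, on made-up unbalanced data:

```r
set.seed(7)
dat <- data.frame(f = factor(rep(c("a", "b"), c(4, 7))),   # unbalanced
                  g = factor(rep(c("u", "v"), length.out = 11)))
dat$y <- rnorm(11) + as.integer(dat$f)

fit <- lm(y ~ f + g, data = dat)
drop1(fit, test = "F")   # marginal F test for dropping each term
```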
See ?try.
Andy
> From: Federico Calboli
>
> Dear All,
>
> I am trying to calculate the Hardy-Weinberg Equilibrium p-value for 42
> SNPs. I am using the function HWE.exact from the package "genetics".
>
> In order not to do a lot of coding "by hand", I have a for loop that
> goes through each
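The ?try pattern looks like this; risky() is a stand-in here for a call such as HWE.exact() that may fail on some columns:

```r
## Stand-in for a function that errors on some inputs
risky <- function(x) { if (any(is.na(x))) stop("bad column"); mean(x) }

cols <- list(a = 1:5, b = c(1, NA, 3), c = 6:10)
res <- sapply(cols, function(col) {
  out <- try(risky(col), silent = TRUE)
  if (inherits(out, "try-error")) NA else out
})
res   # the failing column yields NA instead of aborting the loop
```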
Federico Calboli wrote:
Dear All,
I am trying to calculate the Hardy-Weinberg Equilibrium p-value for 42
SNPs. I am using the function HWE.exact from the package "genetics".
In order not to do a lot of coding "by hand", I have a for loop that
goes through each column (each column is one SNP) and gi
Hi all,
I'm using the function "fitdistr" (library MASS) to fit a distribution to
given data.
What I have to do further, is getting the log-Likelihood-Value from this
estimation.
Is there any simple possibility to realize it?
Regards, Carsten
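For fitdistr() specifically, the value is already stored on the fitted object; a quick sketch with simulated data:

```r
library(MASS)

set.seed(1)
fit <- fitdistr(rexp(200, rate = 2), "exponential")

fit$loglik    # log-likelihood at the maximum, kept in the fit object
logLik(fit)   # the same value via the logLik() method
```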
> "dream" == dream home <[EMAIL PROTECTED]> writes:
> Does it sound like spline work would do the job? It would be hard
> to persuade him to use some modern math technique but he
> did ask me to help him implement the French Curve so he can
> do his work in Excel, rather than PAPER.
Dear All,
I am trying to calculate the Hardy-Weinberg Equilibrium p-value for 42
SNPs. I am using the function HWE.exact from the package "genetics".
In order not to do a lot of coding "by hand", I have a for loop that
goes through each column (each column is one SNP) and gives me the
p.value for
hello,
is it possible to convert a 3D plot I made with rgl package to a
quicktime VR?
thanks,
simone
Mike,
I used R for exactly that purpose, to test a new
method for sampling coarse woody debris in silico
against existing alternatives. The results are
published in:
Bebber, D.P. & Thomas, S.C., 2003. Prism sweeps for
coarse woody debris. Canadian Journal of Forest
Research 33, 1737-1743.
I will p
hi,
thanks for the reply to my query about exclusion rules for propensity
score matching.
> Exclusion can be based on the non-overlap regions from the propensity.
> It should not be done in the individual covariate space.
i want a rule inspired by non-overlap in propensity score space, but that
Hi
I have data from 12 subjects. The measurement is log(expression) of a
particular gene and can be assumed to be normally distributed. The 12
subjects are divided into the following groups:
Infected, Vaccinated, Lesions - 3 measurements
Infected, Vaccinated, No Lesions - 2 measurements
Infecte
[EMAIL PROTECTED] wrote:
Dear R-list,
i have 6 different sets of samples. Each sample has about 5000 observations,
with each observation comprised of 150 baseline covariates (X), 125 of which
are dichotomous. Roughly 20% of the observations in each sample are "treatment"
and the rest are "control"
On Tue, 5 Apr 2005, Simon Blomberg wrote:
The questioner clearly wants generalized linear mixed models. lmer in package
lme4 may be more appropriate. (Prof. Bates is a co-author.) glmmPQL should
do the same job, though with less accuracy.
Actually, I think the questioner wants GEE, from gee
Good catch. You can also get access to both colnames and rownames via dimnames().
On Apr 5, 2005 9:57 AM, Jabez Wilson <[EMAIL PROTECTED]> wrote:
> thank you for your replies
>
> what I really wanted was the individual names, but I think I can get them with
>
> colnames(myTable)[x]
>
> where x i
On Tue, 5 Apr 2005 10:07:00 -0400, "Mike Saunders"
<[EMAIL PROTECTED]> wrote :
>Is there a package, or does anyone have code they are willing to share, that
>would allow me to simulate sampling of dead wood pieces across an area? I am
>specifically looking for code to simulate the dead wood dis
Is there a package, or does anyone have code they are willing to share, that
would allow me to simulate sampling of dead wood pieces across an area? I am
specifically looking for code to simulate the dead wood distribution as small
line segments across an extent, and then I will "sample" the de
thank you for your replies
what I really wanted was the individual names, but I think I can get them with
colnames(myTable)[x]
where x is 1:7
Look at "?names()", "?colnames()", e.g. check
names(myTable)
I hope it helps.
Best,
Dimitris
Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven
Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/16/336899
Fax: +32/16/337015
Web: http
Ted,
TH> Did you test the function cubic.distance?
Yes, I did.
TH> As written, I think it
TH> will always return a single value,
Yes, here was the misunderstanding.
Subset required a vector, and I gave it a scalar.
Prof. Ripley has already shown my mistake.
--
Best regards
Wladimir Eremeev
Dear list, I have read an excel table into R using read.table() and headers=T.
The table is 7 columns of figures. Is there any way to gain access to the
header information? Say the table headers are Mon, Tue etc I can get to the
data with myTable$Tue, but how do I get the headers themselves.
T
On 05-Apr-05 Wladimir Eremeev wrote:
> Dear r-help,
>
> I have the following function defined:
>
> cubic.distance<-function(x1,y1,z1,x2,y2,z2) {
> max(c(abs(x1-x2),abs(y1-y2),abs(z1-z2)))
> }
>
> I have a data frame from which I make subsets.
>
> When I call
> subset(dataframe,cubic.distanc
On Tue, 5 Apr 2005, Wladimir Eremeev wrote:
Dear r-help,
I have the following function defined:
cubic.distance<-function(x1,y1,z1,x2,y2,z2) {
max(c(abs(x1-x2),abs(y1-y2),abs(z1-z2)))
}
I have a data frame from which I make subsets.
When I call
subset(dataframe,cubic.distance(tb19h,tb37v,tb19v,190
Dear r-help,
I have the following function defined:
cubic.distance<-function(x1,y1,z1,x2,y2,z2) {
max(c(abs(x1-x2),abs(y1-y2),abs(z1-z2)))
}
I have a data frame from which I make subsets.
When I call
subset(dataframe,cubic.distance(tb19h,tb37v,tb19v,190,210,227)<=2)
I have the result with 0
I just started using gmail and one thing that I thought would
be annoying but sometimes is actually interesting are the
ads at the right hand side. They are keyed off the content
of the email and in the case of your post produced:
http://www.visibone.com/regular-expressions/?via=google120
http:/
On Tue, 5 Apr 2005, Petr Pikal wrote:
Dear Prof.Ripley
Thank you for your answer. After some trial and error I finished
with a suitable extraction function which gives me a substantial
increase in positive answers.
Nevertheless I definitely need to gain more practice in regular
expressions, but from t
Dear Prof.Ripley
Thank you for your answer. After some trial and error I finished
with a suitable extraction function which gives me a substantial
increase in positive answers.
Nevertheless I definitely need to gain more practice in regular
expressions, but from the help page I can grasp only ea
On 05-Apr-05 Ross Clement wrote:
> Hi. I have a question that I have asked in other stat forums
> but do not yet have an answer for. I would like to know if
> there is some way in R or otherwise of performing the following
> hypothesis test.
>
> I have a single data item x. The null hypothesis is
> From: Liaw, Andy
>
> > From: Liaw, Andy
> >
> > Here's one possibility, assuming muhat and sigmahat are
> > estimates of mu and sigma from N iid draws of N(mu, sigma^2):
> >
> > tStat <- abs(x - muhat) / sigmahat
> > pValue <- pt(tStat, df=N, lower=TRUE)
>
> Oops... That should be:
>
> pV
> From: Liaw, Andy
>
> Here's one possibility, assuming muhat and sigmahat are
> estimates of mu and sigma from N iid draws of N(mu, sigma^2):
>
> tStat <- abs(x - muhat) / sigmahat
> pValue <- pt(tStat, df=N, lower=TRUE)
Oops... That should be:
pValue <- pt(tStat, df=N, lower=TRUE) / 2
Andy
Here's one possibility, assuming muhat and sigmahat are estimates of mu and
sigma from N iid draws of N(mu, sigma^2):
tStat <- abs(x - muhat) / sigmahat
pValue <- pt(tStat, df=N, lower=TRUE)
I'm not quite sure what df tStat should have (exercise for math stat), but
given fairly large N, that shoul
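For comparison, one common textbook convention for the two-sided p-value doubles the upper tail (made-up data; as noted above, the right df choice is debatable):

```r
set.seed(1)
draws <- rnorm(30, mean = 5, sd = 2)   # the N reference draws
x0 <- 9                                # the single new observation

muhat <- mean(draws)
sigmahat <- sd(draws)
tStat <- abs(x0 - muhat) / sigmahat
pValue <- 2 * pt(tStat, df = length(draws) - 1, lower.tail = FALSE)
pValue   # small when x0 sits far out in the tails
```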
Dear Professor Ripley
The good news is that I got PVM up and running on two
Windows nodes now. I had to connect them with each
other manually, for now not using rsh or ssh.
Now building RPVM for Windows might not be so easy as
it sounds. Did anyone try this out before
successfully?
Also th
Hi. I have a question that I have asked in other stat forums but do not
yet have an answer for. I would like to know if there is some way in R
or otherwise of performing the following hypothesis test.
I have a single data item x. The null hypothesis is that x was selected
from a normal distributio
On Tue, 5 Apr 2005, Petr Pikal wrote:
Dear all,
please, is there any possibility how to extract a date from data
which are like this:
Yes, if you delimit all the possibilities.
"Date: Sat, 21 Feb 04 10:25:43 GMT"
"Date: 13 Feb 2004 13:54:22 -0600"
"Date: Fri, 20 Feb 2004 17:00:48 +"
"Date:
Dear all,
please, is there any possibility how to extract a date from data
which are like this:
"Date: Sat, 21 Feb 04 10:25:43 GMT"
"Date: 13 Feb 2004 13:54:22 -0600"
"Date: Fri, 20 Feb 2004 17:00:48 +"
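One possible extraction for those three shapes (the pattern assumes an optional weekday prefix; adjust if other layouts appear):

```r
dates <- c("Date: Sat, 21 Feb 04 10:25:43 GMT",
           "Date: 13 Feb 2004 13:54:22 -0600",
           "Date: Fri, 20 Feb 2004 17:00:48 +")

## Drop "Date: " and the optional weekday, keep day-month-year
sub("^Date: (\\w+, )?(\\d{1,2} \\w+ \\d{2,4}).*$", "\\2", dates)
```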
Calling gc() before starting a memory-intensive task is normally a good
idea, as it helps avoid memory fragmentation (which is possibly a problem
in a 32-bit OS, but you did not say). R 2.1.0 beta has some dodges to
help, so you may find it helpful to try that out.
On Mon, 4 Apr 2005, Mike Hic
On Mon, 4 Apr 2005, bogdan romocea wrote:
You need another OS. Standard/32-bit Windows (XP, 2000 etc) can't use
more than 4 GB of RAM. Anyway, if you try to buy a box with 16 GB of
RAM, the seller will probably warn you about Windows and recommend a
suitable OS.
There _are_ versions of Windows 2000
I think the function that does the printing of the loadings assumes
that eigenvectors are normalized to the corresponding eigenvalue, rather
than unity, and the output it produces is incorrect.
ft.
--
Fernando TUSELL                          e-mail:
Departamento de Econometría y Estadística