R-users & helpers:
I am using Amelia, mitools and cmprsk to fit cumulative incidence curves
to multiply imputed datasets. The error message that I get,
"Error in eval(expr, envir, enclos) : invalid 'envir' argument",
occurs when I try to fit models to the 50 imputed datasets using the
"with.imp
R-helpers:
I am using R 2.5 on Windows XP, packages all up to date. I have run
into an issue with the MIcombine function of the mitools package that I
hoped some of you might be able to help with. I will work through a
reproducible example to demonstrate the issue.
First, make a dataset from t
Thanks Adai, I got it to work. You were right, I had called the wrong
pool function.
Brant
-Original Message-
From: Adaikalavan Ramasamy [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 17, 2007 1:56 PM
To: Inman, Brant A. M.D.
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] MICE for Cox
From: Adaikalavan Ramasamy [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 17, 2007 4:56 AM
To: Inman, Brant A. M.D.
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] MICE for Cox model
I encountered this problem about 18 months ago. I contacted Prof. Fox
and Dr. Malewski (the R package maintain
R-helpers:
I have a dataset that has 168 subjects and 12 variables. Some of the
variables have missing data and I want to use the multiple imputation
capabilities of the "mice" package to address the missing data. Given
that mice only supports linear models and generalized linear models (via
the
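For later readers of this thread, a minimal sketch of one workaround using the mitools package: fit a Cox model separately in each completed dataset and pool with Rubin's rules via MIcombine. The simulated data below are hypothetical stand-ins; in practice the completed datasets would come from the imputation package's extraction function.

```r
library(survival)
library(mitools)

# Simulated stand-ins for several completed (imputed) datasets; in a real
# analysis these would be the filled-in copies of the same subjects.
set.seed(1)
make.dat <- function() {
  x <- rnorm(100)
  data.frame(time   = rexp(100, rate = exp(0.5 * x)),
             status = rbinom(100, 1, 0.8),
             x      = x)
}
imps <- imputationList(list(make.dat(), make.dat(), make.dat()))

# Fit a Cox model in each completed dataset, then pool coefficients and
# variances with Rubin's rules.
fits <- with(imps, coxph(Surv(time, status) ~ x))
summary(MIcombine(fits))
```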
This email is intended to highlight 2 problems that I encountered
running R 2.5.0 alpha on a Windows XP machine.
#1 - Open script error
If I click the "Open folder" icon on the toolbar, R opens my script
files perfectly. However, when I select "File > Open Script >
MyFileLocation", I get a fat
Thank you very much, that was indeed the problem. (And now that I read
more carefully the help page, it did in fact say that the input was a
data matrix and not a data frame.)
Brant
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Prof Brian Ripley
Sent: W
Attention R users, especially those who are experienced enough to be
opinionated: I need your input.
Consider the following simple plot:
x <- rnorm(100)
y <- rnorm(100)
plot(x, y, bty='n')
A colleague (and dreaded SAS user) commented that she thought that my
plots could be "cleaned up" by conn
Chuck Cleland and Steve Weigand both pointed out my mistake in the
loop...trying to assign a list (i.e. the output from power.t.test) to a
cell in a data.frame.
Thanks guys.
Brant
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman
R-Helpers:
I would like to perform sample size calculations for an experiment. As
part of this process, I would like to know how various assumptions
affect the sample size calculation. For instance, one thing that I
would like to know is how the calculated sample size changes as I vary
the diff
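One way to see the sensitivity directly is to sweep the assumed difference in means through a grid and solve for n each time; the sd, power, and alpha values below are illustrative assumptions.

```r
# Sample size per group for a two-sample t-test as the assumed difference
# in means (delta) varies; sd, power and alpha are fixed assumptions here.
deltas <- seq(0.25, 1.5, by = 0.25)
n <- sapply(deltas, function(d)
  power.t.test(delta = d, sd = 1, power = 0.80, sig.level = 0.05)$n)
round(data.frame(delta = deltas, n.per.group = n), 1)
```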
R plotting experts:
I have a bivariate dataset composed of 300 (x,y) continuous datapoints.
297 of these points are located within the y range of [0,10], while 2
are located at 20 and one at 55. No coding errors, real outliers.
When plotting these data with a scatterplot, I obviously have a pro
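One common base-graphics workaround is to clip the y-axis at the bulk of the data and flag the outliers at the border with their values; the data below are a synthetic stand-in for the real 300 points.

```r
# Synthetic stand-in: 297 points in [0, 10] plus outliers at 20, 20 and 55.
set.seed(1)
x <- rnorm(300)
y <- c(runif(297, 0, 10), 20, 20, 55)

# Clip the axis just above 10 and mark the clipped outliers as triangles
# at the top edge, labelled with their true values.
plot(x, y, ylim = c(0, 11), yaxt = "n", bty = "n")
axis(2, at = seq(0, 10, by = 2))
out <- y > 10
points(x[out], rep(11, sum(out)), pch = 17)
text(x[out], rep(11, sum(out)), labels = round(y[out]), pos = 4)
```

An axis-break plot (e.g. plotrix's gap.plot) is another option if discontinuous axes are acceptable to the audience.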
: Peter Dalgaard [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 13, 2007 3:36 PM
To: Bos, Roger
Cc: Inman, Brant A. M.D.; r-help@stat.math.ethz.ch
Subject: Re: [R] Freeman-Tukey arcsine transformation
Peter Dalgaard wrote:
> Bos, Roger wrote:
>
>> I'm curious what this transform
R-Experts:
Does anyone know if there are R functions to perform the Freeman-Tukey
double arcsine transformation and then backtransform it?
Thanks,
Brant Inman
Mayo Clinic
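For reference, the double arcsine itself is simple enough to code directly. The forward transform below is the standard Freeman-Tukey formula; the back-transform shown is only the crude sin^2 approximation, not Miller's (1978) exact inverse, which also depends on n.

```r
# Freeman-Tukey double arcsine of x events in n trials; its variance is
# approximately 1 / (n + 0.5).
ft <- function(x, n) asin(sqrt(x / (n + 1))) + asin(sqrt((x + 1) / (n + 1)))

# Crude approximate back-transform (Miller 1978 gives an exact inverse).
ft.inv <- function(t) sin(t / 2)^2

ft(83, 86)          # transformed proportion for 83 events in 86 trials
ft.inv(ft(83, 86))  # close to, but not exactly, 83/86
```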
R-Experts:
I just realized that the example I used in my previous posting today is
incorrect because it is a binary response, not a multilevel response
(small, medium, large) such as my real life problem has. I apologize
for the confusion. The example is incorrect, but the multinomial
prob
R Experts:
I am conducting a meta-analysis where the effect measures to be pooled
are simple proportions. For example, consider this data from
Fleiss/Levin/Paik's Statistical methods for rates and proportions (2003,
p189) on smokers:
Study N Event P(Event)
1 86 830.
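A sketch of one standard approach, fixed-effect inverse-variance pooling on the Freeman-Tukey double arcsine scale. The numbers below are made up (the Fleiss table above is truncated), and the pooled estimate stays on the transformed scale until back-transformed.

```r
# Hypothetical event counts x and sample sizes n from three studies.
x <- c(83, 90, 129)
n <- c(86, 93, 136)

# Freeman-Tukey double arcsine per study, with approximate variances.
ft <- asin(sqrt(x / (n + 1))) + asin(sqrt((x + 1) / (n + 1)))
v  <- 1 / (n + 0.5)
w  <- 1 / v

pooled    <- sum(w * ft) / sum(w)   # fixed-effect pooled transform
pooled.se <- sqrt(1 / sum(w))
c(estimate = pooled,
  lower = pooled - 1.96 * pooled.se,
  upper = pooled + 1.96 * pooled.se)  # still on the transformed scale
```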
R-helpers:
In the construction of control charts for statistical quality control
objectives, one might choose to estimate the control limits for the mean
using the mean range of the samples. This requires multiplying the mean
range by a correction factor, often called "d2", that is tabulated in
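Since d2 is just the expected range of n independent standard normal variates, it can be computed by numerical integration rather than looked up in a table:

```r
# Expected range of n iid N(0,1) variates:
#   E(W_n) = integral of [1 - pnorm(x)^n - (1 - pnorm(x))^n] dx
d2 <- function(n)
  integrate(function(x) 1 - pnorm(x)^n - (1 - pnorm(x))^n,
            lower = -Inf, upper = Inf)$value

sapply(2:6, d2)   # compare with tabulated values, e.g. d2 = 2.326 for n = 5
```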
Several R helpers pointed out that I forgot to include the dev.off()
statement in my code. This solved my problem, with one caveat: the
output file path cannot contain any spaces (as pointed out by
Chuck Cleland). For instance:
# This file location works great
bitmap(file='C:\
Thank you Peter Dalgaard.
When I open a DOS box and type gswin32c, I do indeed get an error message
saying that it can't find the program. I edited the Windows system
environmental variable "Path" and the user environmental variable "PATH"
(wasn't sure which to edit), to contain the following
Thanks to all the responders. Here are some replies to the comments:
1) Concerning the term TIFF "format".
It may be that the journals are misusing the term TIFF, but it would
also appear that Wikipedia is as well. The first sentence in the wiki
link sent below states:
"Tagged Image File FORMA
Many medical journals and publishers require that images, whether
photographs or line art, be submitted as high resolution .TIFF images.
One option for R users is to produce an image in one format and to
convert it to a .TIFF file using a second software program. My
experience has been that this o
Just a follow-up note on my last posting. I still have not had any
replies from the R experts out there who use partial proportional odds
regression (and I have to hope that there are some of you!), but I do
think that I have figured out how to perform the unconstrained partial
proportional odds
R-experts:
I would like to explore the partial proportional odds models of Peterson
and Harrell (Applied Statistics 1990, 39(2): 205-217) for a dataset that
I am analyzing. I have not been able to locate a R package that
implements these models. Is anyone aware of existing R functions,
packages
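For later readers: VGAM's vglm can relax the proportionality assumption for selected covariates through the parallel argument of the cumulative() family, which accepts a formula. The data below are simulated stand-ins, and the exact semantics of the parallel= formula should be checked against the VGAM documentation.

```r
library(VGAM)

# Simulated stand-in data: an ordered 3-level response and two covariates.
set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rbinom(200, 1, 0.5))
d$y <- cut(d$x1 + d$x2 + rlogis(200), breaks = c(-Inf, -1, 1, Inf),
           labels = c("small", "medium", "large"), ordered_result = TRUE)

# parallel = FALSE ~ x2 requests separate (non-parallel) coefficients for
# x2 across the cumulative logits, i.e. a partial proportional odds model.
fit <- vglm(y ~ x1 + x2,
            family = cumulative(parallel = FALSE ~ x2, reverse = TRUE),
            data = d)
summary(fit)
```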
Sent: Sunday, January 07, 2007 2:52 AM
To: Inman, Brant A. M.D.
Cc: r-help@stat.math.ethz.ch; [EMAIL PROTECTED]
Subject: Re: [R] Using VGAM's vglm function for ordinal logistic
regression
On Sat, 6 Jan 2007, Inman, Brant A. M.D. wrote:
>
> R-Experts:
>
> I am using the vglm fun
R-Experts:
I am using the vglm function of the VGAM library to perform proportional
odds ordinal logistic regression. The issue that I would like help with
concerns the format in which the response variable must be provided for
this function to work correctly. Consider the following example:
--
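(The example above is cut off in the archive.) As the follow-up in this thread notes, the resolution was that the response had to be a data matrix rather than a data frame; a hedged sketch of that format, with made-up counts, is:

```r
library(VGAM)

# Aggregated response as a matrix of counts per ordered category, one row
# per covariate pattern; all numbers here are hypothetical.
counts <- cbind(small = c(10, 7), medium = c(6, 9), large = c(4, 14))
x <- factor(c("a", "b"))

fit <- vglm(counts ~ x, family = cumulative(parallel = TRUE))
coef(summary(fit))
```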
On July 12, 2004 Spencer Graves wrote an email describing essentially
the same issue that I would like help on: calling the confint function
from within another homemade function. Because he provided many good
examples of the problem, I will not reproduce them here but will instead
refer readers
Thank you to Prof Ripley and Henric Nilsson for their observation that I
was using "anova" inappropriately for the question that I was trying to
answer. Note that the Wald statistics and confidence interval were
calculable in the previous email, but that I preferred using the more
accurate deviance
System: R 2.3.1 on Windows XP machine.
I am building a logistic regression model for a sample of 100 cases in
dataframe "d", in which there are 3 binary covariates: x1, x2 and x3.
> summary(d)
 y       x1      x2      x3
 0:54    0:50    0:64    0:78
 1:46    1:50    1:36    1:22
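A minimal reconstruction of the setup (the real dataframe d is not shown, so the data below are simulated to roughly match the marginal frequencies above); the model itself is a plain glm:

```r
# Simulated stand-in for the 100-case dataframe d with binary y, x1, x2, x3.
set.seed(1)
d <- data.frame(y  = factor(rbinom(100, 1, 0.46)),
                x1 = factor(rbinom(100, 1, 0.50)),
                x2 = factor(rbinom(100, 1, 0.36)),
                x3 = factor(rbinom(100, 1, 0.22)))

# Logistic regression on the three binary covariates.
fit <- glm(y ~ x1 + x2 + x3, family = binomial, data = d)
summary(fit)
exp(coef(fit))   # odds ratios
```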
I am running R 2.3.1 on a Windows XP machine. I have a large dataset of
over 13 000 cases of a disease for which I am attempting to build a
prognostic model using Cox proportional hazards regression. Some of the
continuous covariates are skewed and therefore require transformation
for use in the
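Two standard options for skewed covariates in coxph, sketched on the packaged pbc data as a stand-in for the real dataset (note that status == 2 codes death in the current pbc coding; this is an assumption about the reader's data, not the original 13 000-case dataset):

```r
library(survival)
data(pbc)

# Option 1: log-transform a right-skewed covariate such as bilirubin.
fit1 <- coxph(Surv(time, status == 2) ~ log(bili) + age, data = pbc)

# Option 2: let a penalized spline choose the functional form.
fit2 <- coxph(Surv(time, status == 2) ~ pspline(bili) + age, data = pbc)

fit1
fit2
```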
because it will NOT contain your dataframe, only the name of the file
that contained your dataframe:
>test <- data.restore('C:\\temp\\ddump.sdd')
>test
[1] "C:\\temp\\ddump.sdd"
Brant Inman
-Original Message-
From: Richard M. Heiberger [mailto:
As recommended, I have tried the following code to solve my problem
importing data to R from S-Plus 7.0. Unfortunately, I have not had
success.
In S-Plus:
> data.dump('data', file='C:\\temp\\ddump.sdd', oldStyle=T)
This resulted in the production of a file called "ddump.sdd" that I can
import i
I am running R 2.3.1 on a Windows XP machine. I am trying to import
some data into R that is stored as an S-Plus 7.0 .sdd file.
When I run the following command, I get this error:
> library(foreign)
> d <- read.S(file='H:\\Research\\data.sdd')
Error in read.S(file = "H:\\Research\\data.sdd") :
I am using the ipred library to calculate the censored Brier score for a
Cox proportional hazards model. I would like to know if anyone has
developed a method of calculating confidence intervals for the various
forms of the Brier score that are used in the analysis of
survival/censored data. If
I am looking for a book that discusses the theory of multiple imputation
(and other methods of dealing with missing data) and, just as
importantly, how to implement these methods in R or S-Plus. Ideally,
the book would have a structure similar to Faraway (Regression),
Pinheiro&Bates (Mixed Effect
I would like to determine the probability of an event at a specific
timepoint given the linear predictor of a given Cox model. For
instance, assume that I fit the following model:
data(pbc)
fit <- coxph(Surv(time, status)~ age, data=pbc)
To extract the value of the linear predictor for
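A sketch of two routes to that probability, continuing the model above (with the assumption that status == 2 codes death in the current pbc coding; the original post passed status directly):

```r
library(survival)
data(pbc)
fit <- coxph(Surv(time, status == 2) ~ age, data = pbc)

# Route 1: survfit() on new data, then read off S(t) at 5 years (1825 days).
sf <- survfit(fit, newdata = data.frame(age = 50))
summary(sf, times = 1825)$surv

# Route 2: by hand via S(t | x) = exp(-H0(t) * exp(lp)), where lp is the
# centered linear predictor and H0 the matching baseline cumulative hazard.
lp <- predict(fit, type = "lp")
H0 <- basehaz(fit, centered = TRUE)
H  <- H0$hazard[max(which(H0$time <= 1825))]
head(exp(-H * exp(lp)))   # 5-year survival probability per subject
```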
System: R 2.3.1 on a Windows XP computer.
I am validating several cancer prognostic models that have been
published with a large independent dataset. Some of the models report a
probability of survival at a specified timepoint, usually at 5 and 10
years. Others report only the linear predictor