What do you mean by 'standard time'? If you mean UTC, set TZ=UTC.
For anything else, see ?Sys.timezone (which explains the use of TZ).
I am guessing that you are storing times in "POSIXct" objects (you do
not say): they are in UTC. Have you any evidence that 'datetimes are
getting converted t
Hi
I want to install some versions of R simultaneously from source on a
computer (running Linux). Some programs have an option to specify a
suffix for the executable (e.g., R would become R-2.7.2 when the suffix
is specified as "-2.7.2"). I did not find this option for R - did I
overlook it?
If it i
actually, I just realised you also want a line in the plot. I am not
super-sure how to do this.
On Feb 27, 5:20 pm, andrew wrote:
> the following should work
>
> library(lattice)
> x <- seq(1,100)
> y <- seq(1,100)
> gr <- expand.grid(x = x, y = y)
> gr$z <- gr$x + gr$y + rnorm(nrow(gr), 0, 10)
> cloud(z ~ x + y,
Hi, I am trying to do an analysis of variance for an unbalanced design.
As a toy example, I use a dataset presented by K. Hinkelmann and O.
Kempthorne in "Design and Analysis of Experiments" (pp. 353-356).
This example is very similar to my own dataset, with one difference: it
is balanced.
Thus it is
Hi all,
I've been having some trouble with times with regard to daylight saving time in R
version 2.8.1. I have an Oracle database that R is importing data from;
for the 2am and 2:30am time intervals on the dates that daylight saving
starts, the times are getting read as NA values i
the following should work
library(lattice)
x <- seq(1, 100)
y <- seq(1, 100)
gr <- expand.grid(x = x, y = y)  # name the columns so the formula below finds them
gr$z <- gr$x + gr$y + rnorm(nrow(gr), 0, 10)  # one noise draw per point
cloud(z ~ x + y, data = gr)
also, look at the rgl package, which does something similar but with more
possibilities.
On Feb 27, 4:28 pm, Dipankar Basu wrote:
> Hi R Users,
>
Hi R Users,
I have produced a simulated scatter plot of y versus x tightly clustered
around the 45 degree line through the origin with the following code:
x <- seq(1,100)
y <- x+rnorm(100,0,10)
plot(x,y,col="blue")
abline(0,1)
Is there some way to generate a 3-dimensional analogue of this? C
How would this agency be convinced to adopt R code, and
how would these things work?
Regards,
Ajay
www.decisionstats.com
On Fri, Feb 27, 2009 at 4:27 AM, Frank E Harrell Jr <
f.harr...@vanderbilt.edu> wrote:
> If anyone wants to see a prime example of how inefficient it is to program
>
Sometimes, for the sake of simplicity, SAS code is written like that. One
can use the concatenate function and drag-and-drop in a simple Excel sheet
to create elaborate SAS code like the one mentioned, in almost no time
at all.
There are multiple ways to do this in SAS, much better and simi
Thanks for pointing me to the SAS code, Dr Harrell
After reading the code, I have to say that the inefficiency is not
related to the SAS language itself but to the SAS programmer. An experienced
SAS programmer won't use much hard-coding, which is ad hoc and difficult
to maintain.
I agree with you that in the S
res <- lapply(1:length(L),do.one)
Actually, I do
res <- lapply(1:length(L), function(x) do.one(L[x]))
-- this is the price of needing the element's name, so I have to both
make do.one extract the name and the meat separately inside, and
lapply becomes ugly. Yet the obvious alternatives -- ex
Hi Stanley.
CHD850 wrote:
Hi everyone,
I have to fetch about 300 to 500 zipped archives from a remote ftp server.
Each of the archives is about 1 MB. I know I can get it done using
download.file() in R, but I am curious whether there is a faster way to do this
using RCurl. For example, are there
Jeff Evans-5 wrote:
>
>
> lme4 does have a leg up on GLIMMIX in other areas, though.
> The latest SAS release (9.2) is now able to compute the Laplace
> approximation of the likelihood, but it will only fit an overdispersion
> parameter using pseudo-likelihoods which can't be used for model
>
Sometimes I'm iterating over a list where names are keys into another
data structure, e.g. a related list. Then I can't use lapply as it
does [[]] and loses the name. Then I do something like this:
do.one <- function(ldf) { # list-dataframe item
key <- names(ldf)
meat <- ldf[[1]]
mydf
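A possible way around the name-losing lapply: Map() walks names(L) and L in parallel, so a two-argument worker can receive the key and the meat separately. This is a sketch on invented toy data (do.one2 and L here are stand-ins, not the poster's objects):

```r
# Toy named list standing in for the poster's L.
L <- list(a = data.frame(v = 1:3), b = data.frame(v = 4:6))

# An invented two-argument worker: it gets the key and the element
# separately, so no single-bracket indexing tricks are needed.
do.one2 <- function(key, meat) {
  data.frame(key = key, total = sum(meat$v))
}

# Map() pairs each name with its element and keeps the names on the result.
res <- Map(do.one2, names(L), L)
res$a$total  # 6
```

The result is a named list of data frames, so downstream code can still index by key.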
It just so happens that I created a vectorized SD function the other
day. See the column and row versions below. If there are any
rows/columns with only one non-NA value, it will return NaN.
col_sd = function(x){
dimx = dim(x)
x.mean = .Internal(colMeans(x,dimx[1],dimx[2],na.rm=TRU
You can fit this kind of model (and negative binomial) and more
difficult mixed models with AD Model Builder's random effects module
which is now freely available at
http://admb-project.org/
--
David A. Fournier
P.O. Box 2040,
Sidney, B.C. V8l 3S3
Canada
Phone/FAX 250-655-3364
http://ott
Perhaps "coat" and "jacket" are more ambiguous in the United States than
the United Kingdom. If it's cold enough to warrant it, I wear a jacket
in the morning. If it isn't, I don't want to have to carry it around all
day. Checking the daily weather forecast is too much work, so I just go
by the cur
Along similar lines, I wrote a toy script to apply any function you want
in a windowed sense. Be warned that it's slower than loess().
# my own boxcar tool, just because.
# use bfunc to specify what function to apply to the windowed
# region.
# basically must be valid func
On 26 Feb 2009 at 23:47, Barry Rowlingson wrote:
> 2009/2/26 Frank E Harrell Jr :
> > If anyone wants to see a prime example of how inefficient it is to program
> > in SAS, take a look at the SAS programs provided by the US Agency for
> > Healthcare Research and Quality for risk adjusting and repo
Barry Rowlingson wrote:
2009/2/26 Frank E Harrell Jr :
If anyone wants to see a prime example of how inefficient it is to program
in SAS, take a look at the SAS programs provided by the US Agency for
Healthcare Research and Quality for risk adjusting and reporting for
hospital outcomes at http:/
Jason Rupert wrote:
I have a tightly coupled collection of variables with different lengths and types (some characters and others numerics). I looked at the documentation for data.frame, but indicates that it expects all variables to have the same length, i.e. number of elements.
I think you
?list
On Thu, Feb 26, 2009 at 6:15 PM, Jason Rupert wrote:
> I have a tightly coupled collection of variables with different lengths and
> types (some characters and others numerics). I looked at the documentation
> for data.frame, but indicates that it expects all variables to have the same
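Following the ?list pointer, a minimal sketch (component names invented for illustration) of bundling variables of different lengths and types, which a data.frame cannot do:

```r
# A list, unlike a data.frame, places no equal-length constraint on
# its components, and each component keeps its own type.
bundle <- list(
  ids     = c("a", "b", "c"),  # three character values
  weights = c(1.2, 3.4),       # two numeric values
  note    = "toy example"      # a single string
)

length(bundle$ids)      # 3
length(bundle$weights)  # 2
```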
2009/2/26 Frank E Harrell Jr :
> If anyone wants to see a prime example of how inefficient it is to program
> in SAS, take a look at the SAS programs provided by the US Agency for
> Healthcare Research and Quality for risk adjusting and reporting for
> hospital outcomes at http://www.qualityindicat
Frank,
I couldn't locate the program you mentioned. Do you mind being more
specific? Could you please point me to the file? I am just curious.
Thanks.
On Thu, Feb 26, 2009 at 5:57 PM, Frank E Harrell Jr
wrote:
> If anyone wants to see a prime example of how inefficient it is to program
> in SAS, t
Perhaps you could be more specific in your request? The gam function
in the gam package has 4 pages describing the syntax and multiple
examples, so saying you "can not quite seem to get the syntax correct
for my data format as simple as it is." is impossibly non-specific.
--
David Winsemius
Elaine Jones wrote:
I am running R version 2.8.1 on Windows XP OS.
I generate and write a .csv file from my R script. Then the following
command works to upload it to a remote server using a windows batch file
that carries out the ftp (among other things).
> system("C:/upload_data/upload
Dear All,
I am interested in using R to summarize data so that I can THEN do some
additional analyses using behavioral data.
I study birds, mostly, and often behavioral data that includes timing. So,
for example, each line may be the occurrence of some behavior, and it would
include identifiers
I have a tightly coupled collection of variables with different lengths and
types (some characters and others numerics). I looked at the documentation for
data.frame, but it indicates that it expects all variables to have the same
length, i.e. number of elements.
I was hoping the "See Also" do
Hi all,
Tnx for the number of suggestion to buy one of the books.
Unfortunately, being a non-profit conservation project now without funding
(all Central American projects were shut down two years ago due to
lack of donor funding), I have nothing at all now for books or even my
support, to cover e
See also http://umbrellatoday.com/
Hadley
On Thu, Feb 26, 2009 at 2:47 PM, Thomas Levine wrote:
> I'm writing a program that will tell me whether I should wear a coat,
> so I'd like to be able to download daily weather forecasts and daily
> reports of recent past weather conditions.
>
> The NOAA
On Thu, Feb 26, 2009 at 7:02 AM, wrote:
>
> Hello,
> I d like to run a survival analysis with "left truncated data". Could you
> recommend me a package to do this please ?
The 'eha' package if you want parametric or discrete time models.
Göran
> Thanks
> Philippe Guardiola
>
>
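Beyond eha, a sketch of the counting-process route in the recommended survival package (the data and variable names here are invented): Surv(enter, exit, event) encodes delayed entry, i.e. left truncation, for a Cox model.

```r
library(survival)

# Invented toy data: subjects come under observation at 'enter',
# not at time 0, which is exactly the left-truncation situation.
d <- data.frame(
  enter = c(0, 2, 1, 3, 0, 2),
  exit  = c(5, 6, 4, 8, 7, 9),
  event = c(1, 0, 1, 1, 0, 1),
  x     = c(0.1, 1.3, 0.7, 2.0, 0.4, 1.1)
)

# The two-time Surv() form handles delayed entry automatically:
# each subject is only in the risk set between enter and exit.
fit <- coxph(Surv(enter, exit, event) ~ x, data = d)
```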
If anyone wants to see a prime example of how inefficient it is to
program in SAS, take a look at the SAS programs provided by the US
Agency for Healthcare Research and Quality for risk adjusting and
reporting for hospital outcomes at
http://www.qualityindicators.ahrq.gov/software.htm . The PS
2009/2/26 Thomas Levine :
> I'm writing a program that will tell me whether I should wear a coat,
> so I'd like to be able to download daily weather forecasts and daily
> reports of recent past weather conditions.
>
> The NOAA has very promising tabular forecasts
> (http://forecast.weather.gov/MapC
-- or perhaps even the section on "Additive Models" in the latest edition of
V&R's MASS (still a useful book to have in one's library, IMHO, although,
like me, it's getting grayer)
Bert Gunter
Genentech Nonclinical Biostatistics
650-467-7374
-Original Message-
From: r-help-boun...@r-pro
James Muller wrote:
> Yes, as a general thing go to regular expressions if you don't have an
> existing library available to do the same thing (or you're lazy like
> me:).
>
>
many things are simply *much* easier with XPath than with regexes, and
with the XML package you get it for free.
vQ
Yes, as a general thing go to regular expressions if you don't have an
existing library available to do the same thing (or you're lazy like
me:).
Jame
On Thu, Feb 26, 2009 at 5:16 PM, Wacek Kusnierczyk
wrote:
> Scillieri, John wrote:
>> Looks like you can sign up to get XML feed data from Weathe
Scillieri, John wrote:
> Looks like you can sign up to get XML feed data from Weather.com
>
> http://www.weather.com/services/xmloap.html
>
... and use the excellent R package XML by Duncan Temple Lang to parse
the document and easily access the data with, e.g., XPath rather than
regular expre
On Feb 26, 2009, at 4:54 PM, David Winsemius wrote:
On Feb 26, 2009, at 4:44 PM, Neotropical bat risk assessments wrote:
Hi all,
I would like to run the gam package, but can not quite seem to get
the syntax correct for my data format as simple as it is.
The gamlss package has a user man
Hi everyone,
In the situation that the remote file does not exist, download.file fails,
yet it still creates the file named by the "destfile" argument. I tried to
delete this bad file but got the message that it is still being used by
"other programs", which I assume means R.
Does anyone know ho
Looks like you can sign up to get XML feed data from Weather.com
http://www.weather.com/services/xmloap.html
Hope it works out!
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of James Muller
Sent: Thursday, February 26, 2009 3:57 PM
On Feb 26, 2009, at 4:44 PM, Neotropical bat risk assessments wrote:
Hi all,
I would like to run the gam package, but can not quite seem to get
the syntax correct for my data format as simple as it is.
The gamlss package has a user manual but other than the help with
gam package I can no
Hi all,
I would like to run the gam package, but can not quite seem to get
the syntax correct for my data format as simple as it is.
The gamlss package has a user manual but other than the help with gam
package I can not find a user manual per se.
I did install the gamair that has loads of
I am running R version 2.8.1 on Windows XP OS.
I generate and write a .csv file from my R script. Then the following
command works to upload it to a remote server using a windows batch file
that carries out the ftp (among other things).
> system("C:/upload_data/uploadq8.bat
C:/upload_data
Thanks for responding Doug. I'm sure SAS just hasn't gotten around to
releasing their code yet.
lme4 does have a leg up on GLIMMIX in other areas, though.
The latest SAS release (9.2) is now able to compute the Laplace
approximation of the likelihood, but it will only fit an overdispersion
paramet
Thanks very much Rolf, Dimitris, & Greg!
Bill
On Thu, Feb 26, 2009 at 8:56 PM, Rolf Turner wrote:
>
> On 27/02/2009, at 9:46 AM, William Simpson wrote:
>
>> I would like to do as follows
>> plot(a,b)
>> points(c,d,pch=19)
>>
>> Now join with a line segment point a[1], b[1] to c[1], d[1]; a[2],
>
Thomas,
Have a look at the source code for the webpage (Ctrl-U in Firefox;
I don't know the shortcut in Internet Explorer, etc.). That is what you'd have to
parse in order to get the forecast from this page. Typically when I
parse webpages such as this I use regular expressions to do so (and I
would never downpl
On 27/02/2009, at 9:46 AM, William Simpson wrote:
I would like to do as follows
plot(a,b)
points(c,d,pch=19)
Now join with a line segment point a[1], b[1] to c[1], d[1]; a[2],
b[2] to c[2], d[2] ... a[n], b[n] to c[n], d[n]
All corresponding points from the two data sets are joined by line
have a look at segments(), e.g.,
x <- rnorm(5)
y <- rnorm(5)
z <- rnorm(5)
w <- rnorm(5)
r1 <- range(x, z)
r2 <- range(y, w)
plot(r1, r2, type = "n")
points(x, y)
points(z, w, pch = 19)
segments(x, y, z, w)
I hope it helps.
Best,
Dimitris
William Simpson wrote:
I would like to do as follow
Try:
> segments(a,b,c,d)
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
> project.org] On Behalf Of William Simpson
> Sent: Thursday, Fe
On Thu, Feb 26, 2009 at 12:04 PM, Jeff Evans wrote:
> Has there been any follow up to this question? I have found myself wondering
> the same thing: How then does SAS fit a beta distributed GLMM? It also fits
> the negative binomial distribution.
When SAS decides to open-source their code we'll b
I'm writing a program that will tell me whether I should wear a coat,
so I'd like to be able to download daily weather forecasts and daily
reports of recent past weather conditions.
The NOAA has very promising tabular forecasts
(http://forecast.weather.gov/MapClick.php?CityName=Ithaca&state=NY&sit
On Thu, Feb 26, 2009 at 10:58 AM, Tanja Srebotnjak
wrote:
> I'm resending this message because I did not include a subject line in my
> first posting.
Also, it is generally more effective to send questions about
lmer/glmer to the R-SIG-Mixed-Models list, which I am cc:ing on this
reply.
>> Hell
I would like to do as follows
plot(a,b)
points(c,d,pch=19)
Now join with a line segment point a[1], b[1] to c[1], d[1]; a[2],
b[2] to c[2], d[2] ... a[n], b[n] to c[n], d[n]
All corresponding points from the two data sets are joined by line segments.
Thanks very much for any tips on how to do th
Rolf Turner writes:
> Despite the knowledge, wisdom, insight, skill, good looks, and other
> admirable characteristics of the members of the R-help list, few of
> us are skilled in telepathy or clairvoyance.
Oh, yeah? Then how did I know you were going to say that, huh?
--
Jeff
__
You need to set "Class" as factor before you call naiveBayes(); i.e.,
mixture.train$Class <- factor(mixture.train$Class)
Then you can just do:
pred.bayes <-predict(Bayes.res, mixture.test, type="class")
Andy
> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:
Has anyone had success producing legends for a qplot graph such that the
legend is placed at the bottom, under the abscissa, rather than at the right-hand
side?
The following doesn't move the legend:
library(ggplot2)
qplot(mpg, wt, data=mtcars, colour=cyl, gpar(legend.position=
on 02/26/2009 11:52 AM Vadlamani, Subrahmanyam {FLNA} wrote:
> Hi:
> I am a new R user. I have the following question and would appreciate your
> input
>
> Data1 (data frame 1)
> p1,d1,d2 (p1 is text and d1 and d2 are numeric)
> xyz,10,25
>
> Data2 (data frame 2)
> p1,d1,d2
> xyz,11,15
>
> Now
Dear all,
I sent a mail last week already. I've been trying to get this predict
function to work, but somehow I keep getting the same error. I've read the help
files and searched the internet, but I don't see what I'm doing wrong.
Does anybody have experience with this function? It's contained i
Hi there,
something like this?
Data1<-read.table(stdin(), head=T, sep=",")
p1,d1,d2
xyz,10,25
kmz,100,250
Data2<-read.table(stdin(), head=T, sep=",")
p1,d1,d2
xyz,11,15
kmz,110,150
Data1
Data2
Data3<-data.frame(rbind(Data1,Data2))
Data3
Data3.sum<-aggregate(Data3[,c("d1","d2")], list(Data3$p1), sum)
For the first question you can use %in% rather than ==, for example:
ifelse(dataframe$vector_o_number %in% c('00','01'), 'red', 'black')
for the reference line, the abline function will draw a line the full
width/height of a graph for a general reference, if you want separate lines for
each gr
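A runnable sketch of the %in% recolouring described above (the data frame and its column name are the poster's invented examples, filled with made-up values):

```r
# Invented data: codes stored as two-character strings.
df <- data.frame(vector_o_number = c("00", "01", "07", "12"))

# %in% tests membership element-wise, where == would only compare
# against a single value (or recycle confusingly over a vector).
cols <- ifelse(df$vector_o_number %in% c("00", "01"), "red", "black")
cols  # "red" "red" "black" "black"
```

The resulting vector can be passed straight to the col argument of plot().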
On Thu, 26 Feb 2009, Paul Gilbert wrote:
Gabor Grothendieck wrote:
Try
if (.Platform$OS.type == "windows") ... else ...
Gabor has suggested what I think is the best way to do this check, but in my
experience, if you are doing this check then you are almost certainly missing
some featu
Hi all,
I have to run a logit regresion over a large dataset and I am not sure
about the best option to do it. The dataset is about 20x2000 and R
runs out of memory when creating it.
After going over help archives and the mailing lists, I think there are
two main options, though I am not
Has there been any follow up to this question? I have found myself wondering
the same thing: How then does SAS fit a beta distributed GLMM? It also fits
the negative binomial distribution.
Both of these would be useful in glmer/lmer if they aren't 'illegal' as
Brian suggested. Especially as SAS i
Hi:
I am a new R user. I have the following question and would appreciate your input
Data1 (data frame 1)
p1,d1,d2 (p1 is text and d1 and d2 are numeric)
xyz,10,25
Data2 (data frame 2)
p1,d1,d2
xyz,11,15
Now I want to create a new data frame that looks like the one below. The fields d1
and d2 are su
Are you behind a firewall?
See the help for download.file for details on setting proxy information if this
is the case.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-proje
All San Francisco Bay Area useRs,
On March 11th, Spencer Graves & Sundar Dorai-Raj will talk about
Creating R Packages, and related issues.
This is the first of our (now) regular monthly meetings, details here
http://www.meetup.com/R-Users/calendar/9718957/
Last week Wednesday, we had over 60 pe
It looks like what you did was something like:
> tmp <- sim(100)
> table(tmp)/100
To get your results (it is easier for us to help if you tell us what you
actually did; also, setting a seed and telling us that seed helps us reproduce
what you did exactly).
The reason that you did not see 9 and
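The message is cut off, but a common reason simulated categories go missing from output is that table() silently drops values that never occur; a small sketch on invented data of how declaring factor levels restores the zero counts:

```r
x <- c(1, 2, 2, 5, 5, 5)        # invented draws; 3 and 4 never occur
table(x)                         # absent values simply don't appear
table(factor(x, levels = 1:5))   # zero counts for 3 and 4 are kept
```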
randomForest output is based on predict(iris.rf) whereas the
code shown below uses predict(iris.rf, iris). See ?predict.randomForest
for an explanation.
On Thu, Feb 26, 2009 at 11:10 AM, Li GUO wrote:
> Dear R users,
>
> I have a question on the confusion matrix generated by function randomFores
I'm resending this message because I did not include a subject line in my first
posting.
Apologies for the inconvenience!
Tanja
> Hello,
>
> I'm trying to fit a generalized linear mixed model to estimate diabetes
> prevalence at US county level. To do this I'm using the glmer() function
Hi everyone,
I have to fetch about 300 to 500 zipped archives from a remote ftp server.
Each of the archives is about 1 MB. I know I can get it done using
download.file() in R, but I am curious whether there is a faster way to do this
using RCurl. For example, are there some parameters that I can s
This could depend somewhat on which OS you have on the computers as to which
packages will work or work best for you. A couple of packages to look at
include Rmpi, nws, and snow (no relation), and the other packages in the
suggests field for snow.
--
Gregory (Greg) L. Snow Ph.D.
Statistical
Gabor Grothendieck wrote:
Try
if (.Platform$OS.type == "windows") ... else ...
Gabor has suggested what I think is the best way to do this check, but
in my experience, if you are doing this check then you are almost
certainly missing some feature of R that will let you avoid doing it.
Thanks, Duncan. The BACCO package worked.
On Thu, Feb 26, 2009 at 11:22 AM, Duncan Murdoch wrote:
> On 2/26/2009 11:08 AM, eric lee wrote:
>>
>> Hi. I'm trying to run the elliptic package on my computer (windows
>> platform, version 2.7.2). I downloaded the elliptic package zip file
>> from
>>
Reading this article, announcing the elliptic package (which was a
distinct pleasure) and also looking at the R-FAQ, my guess is that
you need the BACCO bundle.
http://www.jstatsoft.org/v15/i07/paper
--
David Winsemius
On Feb 26, 2009, at 11:08 AM, eric lee wrote:
Hi. I'm trying to run
Hi, Duncan. I chose 'Packages' then 'Install packages' and tried 6
different U.S. mirrors. The message I always get is:
Warning: unable to access index for repository
http://lib.stat.cmu.edu/R/CRAN/bin/windows/contrib/2.7
Warning: unable to access index for repository
http://www.stats.ox.ac.uk/p
On 2/26/2009 11:08 AM, eric lee wrote:
Hi. I'm trying to run the elliptic package on my computer (windows
platform, version 2.7.2). I downloaded the elliptic package zip file
from
http://lib.stat.cmu.edu/R/CRAN/
and installed it, but it says that it needs the "emulator" package.
Can you tell
Thanks very much, Bernhard and Terry.
It clarifies my confusion and really helps a lot.
Regards
Jeff Xu
Terry Therneau wrote:
>
>> plot(survfit(fit)) should plot the survival-function for x=0 or
>> equivalently beta'=0. This curve is independent of any covariates.
>
> This is not correct. It
Dear R-Listers,
I am very confused with what seems to be a misuse of the faceting options with
gplot function and I hope you might help me on this.
z contains various simulation results from simulations with different set of
parameters.
I melt my data to have the following data.frame structure
Dear R users,
I have a question on the confusion matrix generated by function randomForest.
I used the entire data
set to generate the forest, for example:
> print(iris.rf)
Call:
randomForest(formula = Species ~ ., data = iris, importance = TRUE,
keep.forest = TRUE)
confusion
Hi. I'm trying to run the elliptic package on my computer (windows
platform, version 2.7.2). I downloaded the elliptic package zip file
from
http://lib.stat.cmu.edu/R/CRAN/
and installed it, but it says that it needs the "emulator" package.
Can you tell me where to download this? The only simi
Shukai,
layout.drl supports edge weights, so you could try that. An
alternative is doing MDS, see ?cmdscale and maybe help.search("MDS").
But both these methods are approximate, obviously, most often you
cannot embed an n-dimensional (n>>2) graph into the 2-dimensional
plane and keep all the "dis
Thanks Gabor's fast reply.
In my research, every node has its own vector of scores. So I can compute
correlation between every pair of nodes. I used the width and color of the
edge for this purpose. But visualizing the correlations by distance may be
clearer.
Best,
Shukai
Gábor Csárdi-2 wro
Corrado,
Package bigmemory has undergone a major re-engineering and will be available
soon (available now in Beta version upon request). The version
currently on CRAN
is probably of limited use unless you're on Linux.
bigmemory may be useful to you for data management, at the very least, where
Shukai,
the force based layout algorithms (layout.drl,
layout.fruchterman.reingold, layout.graphopt, layout.kamada.kawai) are
likely to do this; although they are not explicitly required to place
hubs in the center, usually they do.
I am not sure what is the "correlation between two nodes". You m
I saw Ted's reply and it is certainly sensible. I would wonder whether
the model ought to be recast so that the scientific question is more
clear. You are obviously studying the effect of different
substitutions (F, Cl, Br, I, Me) and different positions around an
aromatic ring (meta, para).
Dear R users,
I am trying to draw a network using igraph package. I intend to place the
hub nodes (the ones with the relatively more connection with other nodes) in
the center of the graph. Also, the graph need to be in the fashion that the
higher the correlation between two nodes is , the clos
It looks like your data does not have enough information to estimate the
parameter for metaCl. Maybe that is because metaCl is identical to one of the
other variables, or a constant.
HTH,
Thierry
ir. Thierry Onkelinx
Instituut voor
On Feb 26, 2009, at 9:54 AM, (Ted Harding) wrote:
On 26-Feb-09 13:54:51, David Winsemius wrote:
I saw Gabor's reply but have a clarification to request. You say you
want to remove low frequency components but then you request
smoothing
functions. The term "smoothing" implies removal of high
On 26-Feb-09 12:58:49, Bob Gotwals wrote:
> R friends,
>
> In a matrix of 1s and 0s, I'm getting a singularity error. Any helpful
> ideas?
From the degrees of freedom in your output, it seems you are fitting
10 binary variables to a total of 23 observations. In such circumstances,
it is not un
I wrote a little code using Fourier filtering if you would like to
take a look at this:
library(StreamMetabolism)
library(mFilter)
x <- read.production(file.choose())
#contiguous.zoo(data.frame(x[,"RM202DO.Conc"], coredata(x[,"RM202DO.Conc"])))
#contiguous.zoo(data.frame(x[,"RM61DO.Conc"], coredat
On 26-Feb-09 13:54:51, David Winsemius wrote:
> I saw Gabor's reply but have a clarification to request. You say you
> want to remove low frequency components but then you request smoothing
> functions. The term "smoothing" implies removal of high-frequency
> components of a series.
If you pr
R friends,
In a matrix of 1s and 0s, I'm getting a singularity error. Any helpful ideas?
lm(formula = activity ~ metaF + metaCl + metaBr + metaI + metaMe +
paraF + paraCl + paraBr + paraI + paraMe)
Residuals:
Min 1Q Median 3QMax
-4.573e-01 -7.884e-02 3.4
I apologize for my messy post, which stems from my own confusion ... and
depression as well. In fact I thought I was done with a big chunk of a project
and to my dismay I found out there is more to do.
I am trying to adapt an algorithm, based on advanced wavelet analysis, to my
respiration signals
> plot(survfit(fit)) should plot the survival-function for x=0 or
> equivalently beta'=0. This curve is independent of any covariates.
This is not correct. It plots the curve for a hypothetical subject with x=
mean of each covariate.
This is NOT the "average survival" of the data set. Im
On 26 Feb 2009, at 14:14, Max Kuhn wrote:
Do you know about any good reference that discusses kappa for
classification and maybe CI for kappa???
You might also want to take a look at this survey article on kappa and
its alternatives:
Artstein, Ron and Poesio, Massimo (2008). Survey arti
For a fitted Cox model, one can either produce the predicted survival curve
for
a particular "hypothetical" subject (survfit), or the predicted curve for a
particular cohort of subjects (survexp). See chapter 10 of Therneau and
Grambsch for a long discussion of the differences between these,
You are mostly correct.
Because of the censoring issue, there is no good estimate of the mean survival
time. The survival curve either does not go to zero, or gets very noisy near
the right hand tail (large standard error); a smooth parametric estimate is
what
is really needed to deal with thi
One of my colleagues has written a technical report on how to do this, but I
have not yet implemented it in the survival package.
Terry Therneau
http://mayoresearch.mayo.edu/mayo/research/biostat/techreports.cfm
#80
Concordance for Survival Time Data: Fixed and Time-Depende
I saw Gabor's reply but have a clarification to request. You say you
want to remove low frequency components but then you request smoothing
functions. The term "smoothing" implies removal of high-frequency
components of a series.
If smoothing really is your goal then additional R resource w
> Do you know about any good reference that discusses kappa for classification
> and maybe CI for kappa???
I don't, but googling on kappa and confusion matrix etc should get you
there. Kappa works very well when the true classes are skewed. For
example, if 10% of your samples are class A and 90% c