Dear R-help,
Is there any wrapper available to use IMSL C library functions within R?
Thanks.
Manoj
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/pos
Arun Kumar Saha wrote:
> Dear all R-users,
>
> I have a GARCH related query. Suppose I fit a GARCH(1,1) model on a
> dataframe dat
>
>
>>garch1 = garch(dat)
>>summary(garch1)
>
> Call:
> garch(x = dat)
>
> Model:
> GARCH(1,1)
>
> Residuals:
> Min 1Q Median 3Q Max
> -4.7278
I think Paul's suggestion works if you use:
panel.points(x, y, col = "white", cex = 1.5, pch = 16, ...)
instead of the default background color. For me
trellis.par.get("background")$col returns "transparent".
HTH,
--sundar
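For reference, a minimal self-contained sketch of that trick (the data and the exact cex value are made up): draw the lines first, then overplot each point with an enlarged white disc before the real point, which punches a visible gap into the line, unlike type = "b"/"o".

```r
library(lattice)

dat <- data.frame(x = 1:10, y = (1:10)^2)

# Lines first; then a slightly enlarged white disc under each point
# erases a ring of the line, leaving a gap around the real point.
p <- xyplot(y ~ x, data = dat,
            panel = function(x, y, ...) {
              panel.lines(x, y, ...)
              panel.points(x, y, col = "white", cex = 1.5, pch = 16)
              panel.points(x, y, ...)
            })
print(p)  # trellis objects only draw when printed
```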
Benjamin Tyner wrote:
> I can imagine the intended effect of this, bu
I can imagine the intended effect of this, but for some reason it does
not work as expected--the 'gaps' do not show up (I'm using 2.3.0). Also,
I think it would have to be tweaked to prevent overlapping for points
very close together. Incidentally, the benefit I seek is most pronounced
with pch="."
Hi
Benjamin Tyner wrote:
> Thanks, but this proposal has the same effect as type="b" in
> panel.xyplot, which as noted in the documentation is the same as
> type="o". To clarify, I don't want type="o" at all; I want there to be
> gaps between the lines and points. Have a look at
>
> plot(y~x,dat
On 6/20/2006 6:24 PM, Gerald Jansen wrote:
> I would like to store and manipulate large sets of marker genotypes
> compactly using "raw" data arrays. This works fine for vectors or
> matrices, but I run into the error shown in the example below as soon
> as I try to use 3 dimensional arrays (eg. a
Thanks, but this proposal has the same effect as type="b" in
panel.xyplot, which as noted in the documentation is the same as
type="o". To clarify, I don't want type="o" at all; I want there to be
gaps between the lines and points. Have a look at
plot(y~x,data=dat,type="b")
to see what I mean.
B
apropos("^panel")
will show you what panel functions exist. It seems that panel.points
plus panel.lines are what you want.
dat <- data.frame(x = 1:10, y = 1:10, z = sample(letters[1:3], 10, TRUE))
xyplot(y ~ x | z, data = dat, panel = function(x, y, ...) {
  panel.points(x, y, ...); panel.lines(x, y, ...)
})
2006/6/
Is there any way to have xyplot produce the "points connected by lines"
analogous to the effect of type="b" under standard graphics? I seem to
recall doing this in xyplot at one point.
Thanks,
Ben
Dear friends,
A question related with rescale(). I have a dataset *a* with three
variables *x, y, id*.
I want to do two different things:
1. rescale the combination of x and y into the new range (0,1), that is,
keep the shape of the original plot;
2. rescale x and y into the new range (0,1) respectively,
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- gl(2,10,20, labels=c("Ctl","Trt"))
weight <- c(ctl, trt)
lm.D9 <- lm(weight ~ group)
summary(lm.D9)$coef
Estimate Std. Error t value P
Hi Everyone,
I just don't know how to extract the information I
want from the summary of a linear regression model
fitting.
For example, I fit the following simple linear
regression model:
results = lm(y_var ~ x_var)
summary(results) gives me:
Call:
lm(formula = y_var ~ x_var)
Residuals:
On 6/20/2006 6:24 PM, Gerald Jansen wrote:
> I would like to store and manipulate large sets of marker genotypes
> compactly using "raw" data arrays. This works fine for vectors or
> matrices, but I run into the error shown in the example below as soon
> as I try to use 3 dimensional arrays (eg. a
Here is an example of adding up 2 consecutive rows, iterating through all
the rows:
> x <- matrix(1:100,10)
> x
      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
 [1,]    1   11   21   31   41   51   61   71   81    91
 [2,]    2   12   22   32   42   52   62   72   82    92
 [3,]    3   13
I would like to store and manipulate large sets of marker genotypes
compactly using "raw" data arrays. This works fine for vectors or
matrices, but I run into the error shown in the example below as soon
as I try to use 3 dimensional arrays (eg. animal x marker x allele).
> a <- array(as.raw(1:6)
uugh : i promise that this will be my last question of the day.
i hate to constantly bother this group but it takes me time to get familiar with
all of these functions, tricks and manipulations.
i appreciate everyone's patience. i used S-PLUS a lot
but i've gotten rusty.
i have a matrix of
Look at ?class and perhaps ?is, i.e.
x=c("1","2","3")
class(x)
[1] "character"
x=c(1,2,3)
class(x)
[1] "numeric"
I hope this helps
Francisco
Dr. Francisco J. Zagmutt
College of Veterinary Medicine and Biomedical Sciences
Colorado State University
>From: "Alex Restrepo" <[EMAIL PROTECTED]>
Dear Lalitha
Take a look at ?cut and ?table. Cut will create categories of your
continuous "Age" variable and table will create a contingency table of the
categorized values against "Type" i.e.
#Creates practice data frame
dat=data.frame(Type=sample(c("TypeA","TypeB","TypeC"),100,replace=T),Ag
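Francisco's snippet is cut off above; a complete, runnable version of the same cut()/table() idea (the data is simulated and the break points are made up):

```r
set.seed(1)
# Simulated stand-in for the Age/Type data in the question
dat <- data.frame(Type = sample(c("TypeA", "TypeB", "TypeC"), 100, replace = TRUE),
                  Age  = runif(100, 0, 0.8))

# cut() bins the continuous Age into intervals;
# table() then cross-tabulates the bins against Type
dat$AgeGrp <- cut(dat$Age, breaks = seq(0, 0.8, by = 0.2))
tab <- table(dat$Type, dat$AgeGrp)
print(tab)
```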
This is probably one case where you want to use the 'difftime' function
directly:
> x1 <- as.POSIXct("2006-02-08 17:12:55")
> x2 <- as.POSIXct("2006-02-08 17:15:26")
> x2-x1
Time difference of 2.516667 mins
> x3 <- x2-x1
> str(x3)
Class 'difftime' atomic [1:1] 2.52
..- attr(*, "tzone")= chr ""
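Building on that, calling difftime() directly also lets you pin the units, instead of accepting whatever `x2 - x1` picks:

```r
x1 <- as.POSIXct("2006-02-08 17:12:55", tz = "UTC")
x2 <- as.POSIXct("2006-02-08 17:15:26", tz = "UTC")

# Bare subtraction chooses units for you ("mins" here);
# difftime() lets you fix them explicitly.
d_secs <- difftime(x2, x1, units = "secs")
d_mins <- difftime(x2, x1, units = "mins")
```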
Hi
I am working with a dataset of age and class of
proteins
#Age
0
0.0
0.677
#Class
Type A
Type B
.
.
.
Type K
I wish to get a table that reads as follows
        0-0.02  0.02-0.04  0.04-0.06  .  0.78-0.8
Type A      15         20          5           8
Type B 86
.
.
.
Hello All:
How can I determine the "types" of args passed to an R function? For
example, given the following:
calculate <- function(...)
{
args <- list(...)
argName = names(args)
if (arg1 == character)
cat("arg1 is a character")
else
Try these functions (modify to suit your needs):
tri1 <- function(x){
  n <- dim(x)[2]
  for(i in n:1){
    for( j in 1:(n-i+1) ){
      cat(sprintf(' %5.2f', x[j, j+i-1]))
    }
    cat("\n")
  }
}
tri2 <- function(x){
Hi
Could you use something along the lines of the following:
transposedat<-t(dat)
transposedat$oddrow<-(1:ncol(dat))%%2
oddcolumns<-t(transposedat[transposedat$oddrow==1,])
Now you should have the odd columns in a single dataframe.
gzero<-matrix(oddcolumns>0,nrow=nrow(oddcolumns),ncol=ncol(odd
sapply() is not the right tool. It operates on a list, and a matrix is not
a list (at least not treated as you'd expect it to be). It would have
sort of worked if tempMatrix were a data frame instead of a matrix.
Try something like:
colSums(tempMatrix[, seq(1, ncol(tempMatrix), by=2)] > 0)
A
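A small reproducible version of that one-liner, checked against the long way (the matrix here is made up; the real one in the thread is 400 x 200):

```r
set.seed(42)
tempMatrix <- matrix(rnorm(20), nrow = 4, ncol = 5)

# For each odd-numbered column, count the rows with a value > 0
counts <- colSums(tempMatrix[, seq(1, ncol(tempMatrix), by = 2)] > 0)

# Same result, computed column by column with apply() for comparison
counts2 <- apply(tempMatrix[, c(1, 3, 5)], 2, function(col) sum(col > 0))
stopifnot(all(counts == counts2))
```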
A couple of suggestions:
#First solution
mydatexpanded<-mydat[rep(1:5,mydat[,1]),]
sampledat<-mydatexpanded[sample(1:85,7),-1]
#Second solution
sampledat<-mydat[sample(1:5,size=7,prob=mydat[,1]/85,replace=TRUE),-1]
Regards
Per Jensen
On 6/20/06, Muhammad Subianto <[EMAIL PROTECTED]> wrote:
>
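Both of Per's suggestions, made concrete on toy data (the counts and values are invented; as in the original post, column 1 holds the frequencies and they sum to 85):

```r
# Toy frequency table: column 1 holds counts summing to 85
mydat <- data.frame(n = c(20, 30, 10, 15, 10), a = 1:5, b = letters[1:5])

set.seed(7)
# First solution: expand each row by its count, then sample 7 rows
mydatexpanded <- mydat[rep(1:5, mydat[, 1]), ]
sampledat1 <- mydatexpanded[sample(1:85, 7), -1]

# Second solution: sample row indices directly, weighted by the counts
sampledat2 <- mydat[sample(1:5, size = 7, prob = mydat[, 1]/85,
                           replace = TRUE), -1]
```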
I've tried and given up. I have a matrix of, say, 200 columns and 400 rows.
For each odd ( or even i suppose if i wanted to )column,
I want to know the number of rows in which the value is greater than zero.
So, I did sapply(tempMatrix,2,function(x) sum( x > 0 ))
this almost works but i don't know
Compare
d<-data.frame(a=c(1,2),b=c(2,3))
> class(d[,1])
[1] "numeric"
> class(d[,1,drop=FALSE])
[1] "data.frame"
Regards
On 20 Jun 2006 15:52:36 -0400, Allen S. Rout <[EMAIL PROTECTED]> wrote:
>
> I've observed something I don't understand, and I was hoping someone
> could point me to the r
I've observed something I don't understand, and I was hoping someone
could point me to the right section of docs.
There's a portion in one of my analyses in which I am wont to sort a
data.frame so:
seriesS <- seriesS[order(as.Date(row.names(seriesS),format="%m/%d/%Y")),]
So, I've got row.nam
I had a hard time building 1.9.1 with the Sun One Studio
on Solaris (SPARC) almost 2 years ago. I filed a bug report,
but I can't seem to find it right now. For 2.3.1, I discovered
that the same problems remain. Here are the tricks that
resolved them for me if anyone is interested.
#1. /usr/li
>From: Brian Dolan <[EMAIL PROTECTED]>
>Date: Tue Jun 20 12:54:14 CDT 2006
>To: r-help@stat.math.ethz.ch
>Subject: [R] arima fails when called from command line
hi : i had a similar problem a few weeks ago when i was trying to use R CMD
BATCH (but when i sourced the code at the r prompt everyth
Hello all, I was wondering if anyone is aware of formal approaches and tools
for comparing partial response curves produced in GAM? My interest is in
determining if two partial response curves are "statistically" different. I
recognize that point-wise standard error estimates can be produced using
Never mind. I RTFM'ed more carefully and found that the 'R' directory
can also have a 'unix' or 'windows' subdirectory.
[EMAIL PROTECTED] wrote:
jh> I have a few functions, such as screenWidth() and screenHeight(), which
jh> I have been able to implement for a Unix/Linux environment, but not
From: Gavin Simpson
>
> On Tue, 2006-06-20 at 08:12 -0700, Scott Rollins wrote:
> > Glenn De'ath published a paper in 'Ecology' several years ago and
> > included S-Plus functions in the archives. I haven't looked at the
> > files, so I'm not sure what modifications would be necessary for R.
> >
On 6/19/06, Rick Bilonick <[EMAIL PROTECTED]> wrote:
> On Sun, 2006-06-18 at 13:58 +0200, Douglas Bates wrote:
> > If I understand correctly, Rick is trying to fit a model with random
> > effects on a binary response when there are either 1 or 2 observations
> > per group.
If you look at Rick's exa
Peter Dalgaard biostat.ku.dk> writes:
> > ?ave
>
> Finally, someone remembered it!
I feel ashamed. May I suggest to add a link under "by" and/or "tapply" ?
Dieter
Davis, Jacob B. txfb-ins.com> writes:
>
> In summary.glm I'm trying to get a better feel for the z output. The
> following lines can be found in the function
>
[snip]
digging through the function is good: debugging your way through
the function is sometimes even better.
examples(glm,loc
I have a few functions, such as screenWidth() and screenHeight(), which
I have been able to implement for a Unix/Linux environment, but not for
Windows. (Does anyone know how to find the screen dimensions in
Windows?)
The Writing R Extensions manual tells me how to include
platform-specific secti
>
> - the open source compiler FreePascal potentially allows compiling for
> other platforms
> (http://www.freepascal.org/wiki/index.php/Platform_list).
> [while I am interested in this for reasons of completeness, it's
> unlikely that I will
> look into it soon without somebody fund
I'm sure there is no consistent way to reproduce this, but I'm hoping
someone has some information.
I have a time series we'll call y. The data gets updated every day, so
I run a cron job that fits and predicts from an arima(0,0,1) X (1,1,1)_7
model.
When I open R and run the script, it processe
R has many samples included with the distribution; just look in the library
subfolder of your distribution. On Windows, examine the base library:
c:\Program Files\R\R-2.3.1\library\base\R-ex. You might find they are
archived (rex.zip); extract them and use Tinn-R (or any other editor) to play
around w
On Tue, 2006-06-20 at 08:12 -0700, Scott Rollins wrote:
> Glenn De'ath published a paper in 'Ecology' several years ago and
> included S-Plus functions in the archives. I haven't looked at the
> files, so I'm not sure what modifications would be necessary for R.
>
> De'ath, G. 2002. Multivariate r
Thanks, Gabor. Works like a charm.
--sundar
Gabor Grothendieck wrote:
> Try this:
>
>> fo <- ~ x | y
>> fo[[3]] <- fo[[2]]
>> fo[[2]] <- as.name("z")
>> fo
>
> z ~ x | y
>
> On 6/20/06, Sundar Dorai-Raj <[EMAIL PROTECTED]> wrote:
>
>> Hi, all,
>>
>> I just recently noticed a change in terms.f
On Tue, 2006-06-20 at 12:03 -0500, Davis, Jacob B. wrote:
> Thanks for humbling yourself to my level and not answering my question.
>
> If object is user defined is:
> object$df.residual
> the same thing as
> df.residual(object)
Hi,
In this case, yes they are the same, but don't think that i
Davis, Jacob B. txfb-ins.com> writes:
>
> Thanks for humbling yourself to my level and not answering my question.
>
> If object is user defined is:
> object$df.residual
> the same thing as
> df.residual(object)
>
df.residual() is an extractor function.
stats:::df.residual.default give
On Tue, 20 Jun 2006, Smith, Phil (CDC/NIP/DMD) wrote:
> Hi R people:
>
> I'm trying to set the dimnames of a data frame called "ests" and am
> having trouble!
>
> First, I check to see if "ests" is a data.frame...
>
> > is.data.frame( ests )
> [1] TRUE
>
> ... and it is a data frame!
>
> Nex
Hi R people:
I'm trying to set the dimnames of a data frame called "ests" and am
having trouble!
First, I check to see if "ests" is a data.frame...
> is.data.frame( ests )
[1] TRUE
... and it is a data frame!
Next, I try to assign dimnames to that data frame
> dimnames( ests )[[ 1 ]] <-
Thanks for humbling yourself to my level and not answering my question.
If object is user defined is:
object$df.residual
the same thing as
df.residual(object)
You know, I have taught myself Visual Basic, PL/SQL, C++ and Matlab,
and I'm not a programmer.
In these programs I help beginners
Try this:
> fo <- ~ x | y
> fo[[3]] <- fo[[2]]
> fo[[2]] <- as.name("z")
> fo
z ~ x | y
On 6/20/06, Sundar Dorai-Raj <[EMAIL PROTECTED]> wrote:
> Hi, all,
>
> I just recently noticed a change in terms.formula from 2.2.1 to 2.3.1
> (possibly 2.3.0, but I didn't check). Here's the problem:
>
> ## 2
-Original Message-
From: Vayssières, Marc
Sent: Tuesday, June 20, 2006 9:35 AM
To: '[EMAIL PROTECTED]'
Subject: RE: [R] multivariate splits
Glen De'ath's package for R is on cran! It is called mvpart, see:
http://cran.cnr.berkeley.edu/doc/packages/mvpart.pdf
Cheers,
Marc Vayssières
Hi, all,
I just recently noticed a change in terms.formula from 2.2.1 to 2.3.1
(possibly 2.3.0, but I didn't check). Here's the problem:
## 2.2.1
> update(~ x | y, z ~ .)
z ~ x | y
## 2.3.1
> update(~ x | y, z ~ .)
z ~ (x | y)
and in the NEWS for 2.3.1
o terms.formula needed to add p
Davis, Jacob B. wrote:
> If object is user defined is:
>
> object$df.residual
>
> the same thing as
>
> df.residual(object)
>
> This is my first time to encounter the $ sign in R, I'm new. I'm
> reviewing "summary.glm" and in most cases it looks as though the $ is
In summary.glm I'm trying to get a better feel for the z output. The
following lines can be found in the function
if (p > 0) {
    p1 <- 1:p
    Qr <- object$qr
    coef.p <- object$coefficients[Qr$pivot[p1]]
    covmat.unscaled <- chol2inv(Qr$qr[p1, p1, drop = FA
hadley wickham wrote:
>> what I really would love to see would be an improved help.search():
>> on r-devel I found a reference to the /concept tag in .Rd files and the
>> fact that it is rarely used (again: I was not aware of this :-( ...),
>> which might serve as keyword container suitable for imp
> what I really would love to see would be an improved help.search():
> on r-devel I found a reference to the /concept tag in .Rd files and the
> fact that it is rarely used (again: I was not aware of this :-( ...),
> which might serve as keyword container suitable for improving
> help.search() res
Glenn De'ath published a paper in 'Ecology' several years ago and included
S-Plus functions in the archives. I haven't looked at the files, so I'm not
sure what modifications would be necessary for R.
De'ath, G. 2002. Multivariate regression trees: a new technique for modeling
species--environm
Hi Kristine,
"Kristine Kleivi" <[EMAIL PROTECTED]> writes:
> I've been trying to install Bioconductor into R using the script on the
> bioconductor home page. However, I get this error message: >
> source("http://www.bioconductor.org/biocLite.R") Error in file(file,
> "r", encoding = encoding) : una
If object is user defined is:
object$df.residual
the same thing as
df.residual(object)
This is my first time to encounter the $ sign in R, I'm new. I'm
reviewing "summary.glm" and in most cases it looks as though the $ is
used to extract some characteristic/property of the object,
Martin Morgan wrote:
> Here's another way:
>
> makeSolver <- function() {
> f1 <- function(x, K) K - x
> f2 <- function(x, r, K) r * x * f1(x, K)
> function() f1(3,4) + f2(1,2,3)
> }
>
> solverB <- makeSolver()
>
> solverB()
>
> makeSolver (implicitly) creates an environment, installs f1
These guys advertise R workshops on the list fairly regularly:
http://www.xlsolutions-corp.com/
Best,
Jim
zubin wrote:
> Hello - need advice on a company or individual who will offer custom
> training in R in Atlanta Georgia. Specifically need training on data
> manipulations (equivalent to
Hello - need advice on a company or individual who will offer custom
training in R in Atlanta Georgia. Specifically need training on data
manipulations (equivalent to SAS data steps in R) and R Graphics. Does
anyone know folks who train on R or any Firms? Google searches are
coming up empty.
Hi all,
I have a table with values that I rounded with:
mytable = round(mytable, digits=2)
and when I use write.table:
write.table(mytable, file = "/home/user/mytable.txt", sep = " ",
row.names=TRUE, col.names=TRUE, quote=FALSE)
the values are printed like "1" instead of "1.00" (which would ma
Here's another way:
makeSolver <- function() {
f1 <- function(x, K) K - x
f2 <- function(x, r, K) r * x * f1(x, K)
function() f1(3,4) + f2(1,2,3)
}
solverB <- makeSolver()
solverB()
makeSolver (implicitly) creates an environment, installs f1 and f2
into it, and then returns a function tha
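Martin's makeSolver is repeated below with a worked check, so the point about the closure environment is concrete (the expected value follows from f1(3,4) = 1 and f2(1,2,3) = 2 * 1 * (3 - 1) = 4):

```r
makeSolver <- function() {
  f1 <- function(x, K) K - x
  f2 <- function(x, r, K) r * x * f1(x, K)
  function() f1(3, 4) + f2(1, 2, 3)
}

solverB <- makeSolver()

# f1(3,4) = 1 and f2(1,2,3) = 4, so the returned function yields 5
stopifnot(solverB() == 5)

# f1 and f2 live only in the closure's environment, not the global
# workspace, so different solver sets cannot clash
stopifnot(!exists("f1", inherits = FALSE))
```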
Hi all.
Are there any R functions around that do quick logistic regression with
a Gaussian prior distribution on the coefficients? I just want
posterior mode, not MCMC. (I'm using it as a step within an iterative
imputation algorithm.) This isn't hard to do: each step of a glm
iteration sim
Hello all
I haven't got the npmc package to work yet, despite several attempts. I'm
new to R, so this might be a stupid question, but it's taking me hours
already, so I'm asking for help.
To be exact, this is what I typed, and what R told me:
++
> colsmall<-read.table("DBtest2")
>
If you can quote an actual instance of where it's mentioned, perhaps that
will make it more clear. I'd interpret that as simply rescaling all
variables to have min=0 and max=1, one variable at a time.
Andy
From: zhijie zhang
>
> Dear Rusers,
> Recently, i saw the sentence "rescale the data in
Hello,
I discussed the following problem on the great useR conference with
several people and wonder if someone of you knows a more elegant (or
more common ?) solution than the one below.
The problem:
I have several sets of interrelated functions which should be compared.
The functi
you can try something like the following:
mat <- matrix(rnorm(1000), 100, 10, dimnames = list(NULL,
LETTERS[1:10]))
rcts <- rcor.test(mat)
rcts
pvals <- as.data.frame(rcts$p.values)
pvals$corr <- rcts$cor.mat[lower.tri(rcts$cor.mat)]
pvals[1:2] <- sapply(pvals[1:2], factor, leve
Thank you, Yan,
> "Yan" == Yan Wong <[EMAIL PROTECTED]>
> on Tue, 13 Jun 2006 12:27:48 +0100 writes:
Yan> Just a quick point which may be easy to correct. Whilst
Yan> typing the wrong thing into R 2.2.1, I noticed the
Yan> following error messages, which seem to have some
Hi
you can also do
as.POSIXlt(today)$mon
but I do not know if it is more efficient
HTH
Petr
On 20 Jun 2006 at 5:20, Maciej Radziejewski wrote:
Date sent: Tue, 20 Jun 2006 05:20:38 -0700 (PDT)
From: Maciej Radziejewski <[EMAIL PROTECTED]>
To:
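The two approaches from this thread side by side; note that POSIXlt's $mon is zero-based (January is 0), hence the + 1:

```r
today <- as.Date("2006-06-20")

# Via format(): renders "06", then coerce back to integer
m1 <- as.integer(format(today, "%m"))

# Via POSIXlt: the $mon component is 0-based, so add 1
m2 <- as.POSIXlt(today)$mon + 1

stopifnot(m1 == 6, m2 == 6)
```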
I have been using the function rcor.test in the ltm package and have
been trying to use it for p-values of pairwise correlations using the
Pearson correlation. However, when I call the p-values from the output
it gives me back a number index instead of the row names (I transposed
the columns and r
Hi
If you do not insist on loops you can either use merge
merge(tpframe, pqframe)
or
df.new <- data.frame(date = tpframe$date,
  discharge = pqframe$discharge[match(tpframe$`gage heights`,
                                      pqframe$`gage heights`)])
I hope I did not make a typo :-)
HTH
Petr
On 20 Jun 2006 at 14:20, René Capell wrote:
D
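A runnable sketch of the match() lookup (the column name `height` and the rating values are invented; the original data apparently uses a column literally named `gage heights`):

```r
# Hypothetical rating table: gage height -> discharge
pqframe <- data.frame(height = c(1.0, 1.5, 2.0, 2.5),
                      discharge = c(10, 25, 60, 130))
# Hypothetical gage time series
tpframe <- data.frame(date = as.Date("2006-06-01") + 0:3,
                      height = c(2.0, 1.0, 2.5, 1.5))

# match() returns, for each observed height, its row in the rating
# table, so the whole lookup is one vectorised indexing step -- no loops
df.new <- data.frame(date = tpframe$date,
                     discharge = pqframe$discharge[match(tpframe$height,
                                                         pqframe$height)])
```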
Maciej Radziejewski <[EMAIL PROTECTED]> writes:
> Hello,
>
> I can't figure out a "proper" way to extract the month number from a Date
> class variable. I can do it like this:
>
> > today <- as.Date(Sys.time())
> > as.integer (format.Date (today, "%m"))
>
> but it must be inefficient, since I
Dear all R-users,
I have a GARCH related query. Suppose I fit a GARCH(1,1) model on a
dataframe dat
>garch1 = garch(dat)
>summary(garch1)
Call:
garch(x = dat)
Model:
GARCH(1,1)
Residuals:
Min 1Q Median 3Q Max
-4.7278 -0.3240 0. 0.3107 12.3981
Coefficient(s):
Estima
>From: "john hickey" <[EMAIL PROTECTED]>
>To: [EMAIL PROTECTED]
>Subject: R genetic parameters
>Date: Tue, 20 Jun 2006 11:20:45 +
>X-OriginalArrivalTime: 20 Jun 2006 11:20:48.0892 (UTC)
>FILETIME=[94F903C0:01C6945B]
>X-Virus-Scanned: by amavisd-new at fcav.unesp.br
>
>
>Luiz,
>
>How large a g
Dear Rusers,
Recently, I saw the sentence "rescale the data into unit square" several
times. Could anybody tell me what it means, and give an example?
Thanks very much!
--
Kind Regards,
Zhi Jie,Zhang ,
Hello,
is there a restriction on the number of loops in a nested for-loop in R?
I wrote a small function to replace water gage heights by discharge values,
where the outer loop walks through the levels of a gage time series and the
inner loop looks up the corresponding discharge value in a vector
Hello,
I can't figure out a "proper" way to extract the month number from a Date class
variable. I can do it like this:
> today <- as.Date(Sys.time())
> as.integer (format.Date (today, "%m"))
but it must be inefficient, since I convert a date to text and back to a
number. I tried:
> months(t
Jonathan Baron wrote:
> On 06/19/06 13:13, Duncan Murdoch wrote:
>>> `help.search' does not allow full text search in the manpages (I can
>>> imagine why (1000 hits...), but without such a thing google, for
>>> instance, would probably not be half as useful as it is, right?) and
>>> there is no "so
On Tue, 20 Jun 2006 12:38:11 +0200 COMTE Guillaume wrote:
> Hello,
>
> I've got 2 dates like these :
>
> "2006-02-08 17:12:55"
> "2006-02-08 17:15:26"
>
> I wish to get the middle of these two dates :
There is a mean method for "POSIXct" objects:
x <- as
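The reply above is cut off; here is a complete version of the mean-method approach (the timezone is pinned to UTC only to make the sketch reproducible):

```r
x1 <- as.POSIXct("2006-02-08 17:12:55", tz = "UTC")
x2 <- as.POSIXct("2006-02-08 17:15:26", tz = "UTC")

# mean() has a method for POSIXct, so the midpoint is just:
mid <- mean(c(x1, x2))

# The midpoint sits 75.5 s after x1 (half of the 151 s gap)
stopifnot(as.numeric(mid) - as.numeric(x1) == 75.5)
format(mid, "%Y-%m-%d %H:%M:%S", tz = "UTC")
```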
Dear all R-users,
(My apologies if this subject is wrong)
I have dataset:
mydat <- as.data.frame(
matrix(c(14,0,1,0,1,1,
25,1,1,0,1,1,
5,0,0,1,1,0,
31,1,1,1,1,1,
10,0,0,0,0,1),
nrow=5,ncol=6,byrow=TRUE))
dimna
"COMTE Guillaume" <[EMAIL PROTECTED]> writes:
> I've got 2 dates like these :
> "2006-02-08 17:12:55"
> "2006-02-08 17:15:26"
> I wish to get the middle of these two dates:
> "2006-02-08 17:14:10"
> Is there a function in R to do that?
Well,
> x1 <- as.POSIXct("2006-02-08 17:12:55")
> x2
Hello,
I've got 2 dates like these :
"2006-02-08 17:12:55"
"2006-02-08 17:15:26"
I wish to get the middle of these two dates:
"2006-02-08 17:14:10"
Is there a function in R to do that?
Thks all
guillaume
hi greg
If you are using windows, set up a plot window and click the "Record"
option in the menu. Then run the command. Now you can scroll back
through previous pages by hitting Page Up.
Beware that if you save your workspace without clearing the history,
you may have a lot of bloat from the grap
"jim holtman" <[EMAIL PROTECTED]> writes:
> ?ave
Finally, someone remembered it!
> > cbind(x, ave(x$h.age, x$hhid))
>        hhid h.age ave(x$h.age, x$hhid)
> 1  10010020    23                 23.0
> 2  10010020    23                 23.0
Or, get rid of the ugly colname using
cbind(x, hhavg=ave(x$h.age, x$hhid))
?ave
> x
       hhid h.age
1  10010020    23
2  10010020    23
3  10010126    42
4  10010126    60
5  10010142    20
6  10010142    49
7  10010142    52
8  10010150    18
9  10010150    51
10 10010150    28
> cbind(x, ave(x$h.age, x$hhid))
       hhid h.age ave(x$h.age, x$hhid)
1  10010020    23
Hi
This is the 2nd time I am posting this question, as I never got a reply the
1st time round - apologies to anybody who might take offense at this, but I
don't know what else to do.
I am struggling to split up the plots of the grouped objects in nlme in a
usable way. The standard plot command gen
Hi,
I've been trying to install Bioconductor into R using the script on the
bioconductor home page. However, I get this error message:
> source("http://www.bioconductor.org/biocLite.R")
Error in file(file, "r", encoding = encoding) :
unable to open connection
In addition: Warning message:
I worked this out over the weekend. I appreciate that using temporary
variables would be simpler but I think this makes for quite readable
code:
# in RProfile.site
inplace <- function (f, arg=1)
eval.parent(call("<-",substitute(f)[[arg+1]], f),2)
# examples in code
inplace(foo[b
Stephan Lindner umich.edu> writes:
> The problem is to create new variables from a data frame which
> contains both individual and group variables, such as mean age for an
> household. My data frame:
>
> df
>
>        hhid h.age
> 1  10010020    23
> 2  10010020    23
...
> where hhid is the
you can use something like:
dat <- data.frame(hhid = rep(c(10010020, 10010126, 10010142,
10010150), c(2, 2, 3, 3)), h.age = sample(18:50, 10, TRUE))
###
dat$mean.age <- rep(tapply(dat$h.age, dat$hhid, mean),
tapply(dat$h.age, dat$hhid, length))
dat
I hope it helps.
Best,
Dimitris
---
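Dimitris' rep(tapply(...)) line and the ave() function mentioned elsewhere in this digest compute the same per-household mean; a quick check on simulated ages:

```r
set.seed(3)
dat <- data.frame(hhid = rep(c(10010020, 10010126, 10010142, 10010150),
                             c(2, 2, 3, 3)),
                  h.age = sample(18:50, 10, TRUE))

# rep(tapply(...)) expands each group mean back to the group's rows;
# this relies on the rows being sorted by hhid, as they are here
m1 <- rep(tapply(dat$h.age, dat$hhid, mean),
          tapply(dat$h.age, dat$hhid, length))

# ave() does the expand-back-to-rows step itself, in any row order
m2 <- ave(dat$h.age, dat$hhid)

stopifnot(all(m1 == m2))
```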
Stephan Lindner wrote:
> Dear all,
>
> sorry, this is for sure really basic, but I searched a lot in the
> internet, and just couldn't find a solution.
>
> The problem is to create new variables from a data frame which
> contains both individual and group variables, such as mean age for an
> hou
Hi Deepayan,
> You will need to do it manually, e.g.:
>
> xyplot(value ~ date, data = x,
>panel = function(x, y, subscripts, ...) {
>panel.grid(h = -1, v = 0, col = "grey", lwd = 1, lty = 1)
>panel.abline(v = as.Date(c("2005/01/01", "2006/01/01")),
>
Dear list members
I am pleased to announce that I have just finished a native Excel
reader/writer. It's wrapped up in two packages: either "xlsReadWrite" (open
source) or the slightly beefed-up "xlsReadWritePro" (shareware). Working
with Excel data is now as easy as writing read.xls and write.xls.
Dear all,
sorry, this is for sure really basic, but I searched a lot in the
internet, and just couldn't find a solution.
The problem is to create new variables from a data frame which
contains both individual and group variables, such as mean age for a
household. My data frame:
df
h
On Tue, Jun 20, 2006 at 09:09:16AM +0800, zhijie zhang wrote:
> suppose i want to do the following calculation 100 times, how to put the
> results of x, y and z into the same dataframe/dataset?
> x<-runif(1)
> y<-x+1
> z<-x+y
Several possibilities:
1) Use rbind
Before loop:
d = NULL
And in
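A runnable version of the rbind approach, plus the usual preallocation alternative (n = 100 draws as in the question; the speed remark is the standard growing-object caveat):

```r
n <- 100

# 1) Grow with rbind: simple, but copies the frame on every iteration
set.seed(123)
d <- NULL
for (i in 1:n) {
  x <- runif(1)
  y <- x + 1
  z <- x + y
  d <- rbind(d, data.frame(x = x, y = y, z = z))
}

# 2) Preallocate and fill: same result, far less copying
set.seed(123)
d2 <- data.frame(x = numeric(n), y = numeric(n), z = numeric(n))
for (i in 1:n) {
  x <- runif(1)
  d2[i, ] <- c(x, x + 1, x + (x + 1))
}

stopifnot(nrow(d) == n, all(d == d2))
```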
Hi,
I have fitted a cox model with time-varying covariates (counting process style)
using the cph function of the Design package.
Now I want to know the survival probability of a single individual at every
time of his history.
I know the survest function, but I am not sure how to interpret it
Hello!
I just noticed a new link on the R wiki to R galleries and wanted to share
this info with YOU!
- R graphical manuals (this is awesome page as there are all help pages
of all packages on CRAN and probably even more and all graphics examples
are displayed! - more than 8000 images!)
http://bg9.ims