On Dec 5, 2007 12:49 AM, Patrick Connolly <[EMAIL PROTECTED]> wrote:
> On Tue, 04-Dec-2007 at 05:32PM -0800, Ben Bolker wrote:
>
> |>
> |>
> |>
> |> S Ellison wrote:
> |> >
> |> > Package review is a nice idea. But you raise a worrying point.
> |> > Are any of the 'downright dangerous' packages on
On Wed, 5 Dec 2007, [EMAIL PROTECTED] wrote:
> Consider the following:
>> A <- 1:10
>> A
> [1] 1 2 3 4 5 6 7 8 9 10
>> dim(A)
> NULL
>> dim(A) <- c(2,5)
>> A
>      [,1] [,2] [,3] [,4] [,5]
> [1,]    1    3    5    7    9
> [2,]    2    4    6    8   10
>> dim(A)
> [1] 2 5
>> dim(A) <- 10
Dear listers,
I am doing a time-series analysis of the relationship between daily air
pollutants and mortality using generalized additive models. I have two
questions, as follows:
Question 1:
How can I get the relative risk estimates from the smoothing splines? I have
found a paper which focused on
On Tue, 04-Dec-2007 at 05:32PM -0800, Ben Bolker wrote:
|>
|>
|>
|> S Ellison wrote:
|> >
|> > Package review is a nice idea. But you raise a worrying point.
|> > Are any of the 'downright dangerous' packages on CRAN?
|> > If so, er... why?
|> >
|> >
|> <[EMAIL PROTECTED]> 12/01/07 7:21
On Wed, 5 Dec 2007, Tim Calkins wrote:
> Hi all -
>
> I'm trying to find a way to create dummy variables from factors in a
> regression. I have been using biglm along the lines of
>
> ff <- log(Price) ~ factor(Colour):factor(Store) +
> factor(DummyVar):factor(Colour):factor(Store)
>
> lm1 <- bigl
Vadim Kutsyy wrote:
>
>
> Does anyone know a way to connect from R on a Linux box to a Teradata
> server? I can use an ODBC connection on a Windows box, but with the amount
> of data I have I need (and prefer) to use a large Linux box.
>
> Thanks,
> Vadim
>
>
You indicate that the ODBC connection to Teradata
I can install hdf5 smoothly. I am using R 2.5.0 on a WinXP 32-bit machine.
>> configure: error: Can't find HDF5
It looks like you are using an x86_64 machine. Maybe you need to install hdf5 from source.
I have installed hdf5 on both CentOS and Fedora with no error messages.
ZHUANSHI
On 12/4/07, Jahrul Alam
Hi all -
I'm trying to find a way to create dummy variables from factors in a
regression. I have been using biglm along the lines of
ff <- log(Price) ~ factor(Colour):factor(Store) +
factor(DummyVar):factor(Colour):factor(Store)
lm1 <- biglm(ff, data=my.dataset)
but because there are lots of c
On Wed, 5 Dec 2007, Gad Abraham wrote:
> Hi,
>
> The following error looks like a bug to me but perhaps someone can shed
> light on it:
>
> > library(splines)
> > library(survival)
> > s <- survreg(Surv(futime, fustat) ~ ns(age, knots=c(50, 60)),
> data=ovarian)
> > n <- data.frame(age=rep(mean(ov
I agree that it is better to use your way.
However, in my defense, I was simply thinking about how to recover the
specific numbers that Kevin wants to get.
> I don't understand why R doesn't output a value for F and Pr for the
> Error (Block) dimension, as my textbook shows 12.807 and 0.0015
> respectively.
Hello,
I want to install hdf5 libraries for R and I get the same error as posted
below whether I try install.packages("hdf5") from R command prompt or the
command line installation below. I would appreciate if any one could help me
on this.
[EMAIL PROTECTED] hdf5]$ ./configure --with-hdf5=/bluej
John Sorkin wrote:
>
> I believe we need to know the following about packages:
> (1) Does the package do what it purports to do, i.e. are the results
> valid?
> (2) Have the results generated by the package been validate against some
> other statistical package, or hand-worked example?
> (3) Ar
S Ellison wrote:
>
> Package review is a nice idea. But you raise a worrying point.
> Are any of the 'downright dangerous' packages on CRAN?
> If so, er... why?
>
>
<[EMAIL PROTECTED]> 12/01/07 7:21 AM >>>
>>I think the need for this is rather urgent, in fact. Most packages are
>>very g
I apologise for not including a reproducible example with this query but I
hope that I can make things clear without one.
I am fitting some finite mixture models to data. Each mixture component
has p parameters (p=29 in my application) and there are q components to
the mixture. The number of data
Bert Gunter wrote:
>
> Let's be careful here. aov() treats block as a **random** error component
> of
> variance. lm() treats block as a **fixed effect**. That's a different
> kettle of fish. Perhaps both Kevin and the authors of his textbook need to
> read up on fixed versus random effects an
On Tue, 4 Dec 2007, Charles C. Berry wrote:
> On Tue, 4 Dec 2007, Charles C. Berry wrote:
>
>>
>>
>> I think coxph() leaves safe prediction off by default. You need to turn it
>> on.
>
>
> Sorry for the misdirection, I see now that you are describing survreg() not
> coxph().
>
> But even coxph
On Tue, 4 Dec 2007, Charles C. Berry wrote:
>
>
> I think coxph() leaves safe prediction off by default. You need to turn it
> on.
Sorry for the misdirection, I see now that you are describing survreg()
not coxph().
But even coxph() seems to barf on this.
A possible workaround:
> n <- data.
In R, vectors don't have dimensions, arrays do.
> x <- c(1, 4, 5)
> class(x)
[1] "numeric"
> y <- array(x)
> class(y)
[1] "array"
> dim(x)
NULL
> dim(y)
[1] 3
On Dec 4, 2007 7:35 PM, <[EMAIL PROTECTED]> wrote:
> Consider the following:
> > A <- 1:10
> > A
> [1] 1 2 3 4 5 6 7 8 9 10
> >
I think coxph() leaves safe prediction off by default. You need to turn it
on.
See
?predict.coxph
Chuck
On Wed, 5 Dec 2007, Gad Abraham wrote:
> Hi,
>
> The following error looks like a bug to me but perhaps someone can shed
> light on it:
>
> > library(splines)
> > library(survival
Consider the following:
> A <- 1:10
> A
[1] 1 2 3 4 5 6 7 8 9 10
> dim(A)
NULL
> dim(A) <- c(2,5)
> A
     [,1] [,2] [,3] [,4] [,5]
[1,]    1    3    5    7    9
[2,]    2    4    6    8   10
> dim(A)
[1] 2 5
> dim(A) <- 10
> A
[1] 1 2 3 4 5 6 7 8 9 10
> dim(A)
[1] 10
Would it
Hi Gad,
The problem is with ns:
> x <- ns(rnorm(100), knots=c(50, 60))
Error in qr.default(t(const)) :
NA/NaN/Inf in foreign function call (arg 1)
but the following is OK:
> x <- ns(rnorm(100))
> dim(x)
[1] 100 1
Regards,
Moshe.
--- Gad Abraham <[EMAIL PROTECTED]> wrote:
> Hi,
>
> The
Hi,
The following error looks like a bug to me but perhaps someone can shed
light on it:
> library(splines)
> library(survival)
> s <- survreg(Surv(futime, fustat) ~ ns(age, knots=c(50, 60)),
data=ovarian)
> n <- data.frame(age=rep(mean(ovarian$age), 10))
> predict(s, newdata=n)
Error in q
Tom Backer Johnsen psych.uib.no> writes:
>
> I am also informed that it is possible to run LaTeX in this manner.
>
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/107419.html refers to
http://at-aka.blogspot.com/2006/06/portable-emacs-22050-on-usb.html
which can give you a portable emacs +
I found that you can do the same thing with 'aov' as well.
Sorry for any confusion. :)
> model.aov <- aov(Score.changes ~ Therapy + Block, data=table1)
> summary(model.aov)
        Df Sum Sq Mean Sq F value   Pr(>F)
Therapy  2 260.93  130.47  15.259 0.001861 **
Block    4 438.00   109.5
Let's be careful here. aov() treats block as a **random** error component of
variance. lm() treats block as a **fixed effect**. That's a different
kettle of fish. Perhaps both Kevin and the authors of his textbook need to
read up on fixed versus random effects and what they mean -- and what sorts
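Bert's distinction can be sketched with a toy example (hypothetical data and variable names, not Kevin's actual textbook case): with `Error(Block)`, `aov()` treats block as a random error stratum and prints no F test for block itself, whereas `lm()` treats it as a fixed effect and `anova()` does report an F and p-value for it.

```r
# Hypothetical randomized block data: 3 therapies observed in 5 blocks
set.seed(1)
d <- data.frame(
  Score   = rnorm(15),
  Therapy = factor(rep(c("A", "B", "C"), each = 5)),
  Block   = factor(rep(1:5, times = 3))
)

# Block as a random error stratum: Therapy is tested against the
# within-block error; no F test is printed for Block itself
fit.aov <- aov(Score ~ Therapy + Error(Block), data = d)
summary(fit.aov)

# Block as a fixed effect: anova() reports an F and Pr(>F) for Block
fit.lm <- lm(Score ~ Therapy + Block, data = d)
anova(fit.lm)
```

The two fits give the same Therapy sums of squares here; the difference is in what is (and is not) tested, which is exactly why the textbook's table and R's `aov()` output disagree.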
On 04/12/2007 5:25 PM, Tom Backer Johnsen wrote:
> Prof Brian Ripley wrote:
>> On Tue, 4 Dec 2007, John Kane wrote:
>>
>>> I simply installed R onto a USB stick, downloaded my
>>> normal packages to it and it works fine under Windows.
>> Yes, on Windows, but
>>
>> 1) There are other OSes,
>>
>> 2)
This seems to work.
The trick is to use 'lm' instead of 'aov'.
> model.aov <- lm(Score.changes ~ factor(Therapy) + factor(Block),
data=table)
> anova(model.aov)
Analysis of Variance Table
Response: Score.changes
                Df Sum Sq Mean Sq F value Pr(>F)
factor(Therapy)  2 260.93  130.47
On Tue, 4 Dec 2007, Scott Bartell wrote:
> I'm getting unexpected results from the coxph function when using
> weights from counter-matching. For example, the following code
> produces a parameter estimate of -1.59 where I expect 0.63:
You can get the answer you want with
coxph(Surv(pseudotime,
Prof Brian Ripley wrote:
> On Tue, 4 Dec 2007, John Kane wrote:
>
>> I simply installed R onto a USB stick, downloaded my
>> normal packages to it and it works fine under Windows.
>
> Yes, on Windows, but
>
> 1) There are other OSes,
>
> 2) This didn't just happen: it needed some careful design
hadley wickham gmail.com> writes:
<<>>
>For example, I have this alpha function in
> ggplot:
> alpha <- function(colour, alpha) {
> col <- col2rgb(colour, TRUE) / 255
> col[4, ] <- rep(alpha, length(colour))
> new_col <- rgb(col[1,], col[2,], col[3,], col[4,])
> new_col[is.na(colour)] <- N
> - kriging in package fields, which also requires irregularly spaced data
That kriging requires irregularly spaced data sounds new to me ;) It
cannot be, you misread something (I feel free to say that even if I
never used that package).
It can be tricky doing kriging, though, if you're not comfort
Well, yesterday I put a Linux version of R 2.6.0 on a 2 GB USB stick and it
runs very well...
Bernardo Rangel Tura, MD,MPH,Phd
National Cardiology Institute
-- Original Message ---
From: John Kane <[EMAIL PROTECTED]>
To: Roland Rau <[EMAIL PROTECTED]>, Tom Backer Johnsen
<[EM
We just studied randomized block design analysis in my statistics class,
and I'm trying to learn how to do them in R. I'm trying to duplicate a
case study example from my textbook [1]:
> # Case Study 13.2.1, page 778
> cd <- c(8, 11, 9, 16, 24)
> dp <- c(2, 1, 12, 11, 19)
> lm <- c(-2, 0, 6, 2, 11
Daniel Stepputtis wrote:
>
> Dear Ben,
> I was searching for the same problem. Thank you very much, it helped me a
> lot and I will use it quite often!
>
> In addition to the problem given by tintin_et_milou, I have to compare
> two pairs of vectors.
>
> I.e. I have two datasets each with l
Hi Roy:
I don't know where Simon found the "/usr/local/lib 1". I checked
everywhere but could not find it. There is a softlink from "/usr/bin/" to
"/private/var/automount/usr/local", but no softlink from "/usr/bin/local"
to anywhere else.
###
lrwxr-xr-x 1 root wheel 27 Dec
Look to see if you have a link in /usr/local that links /usr/local/lib
to "/usr/local/lib 1". It is that which is causing the problems.
In the link below, Simon gives 2 commands that remove the link and
renames the library location. Then the install works.
-Roy
On Dec 4, 2007, at 10:21 A
My apologies, I was wrong with z$p.value. That is not what you wanted. I
should not write email at 3:30 :) But I think z$statistic as suggested by
Peter is not it either. You said you want the rho. The code for it is
z$estimate, assuming that you used method="spearman" to get a rho. Please
look be
Hi Roy:
Thank you for your quick response! I checked the softlink and I did
see: /usr/local/ -> /private/var/automount/usr/local
From the post I read: "I'm not quite sure what to do with this".
What does it mean? Does it mean there is no way to install the Tcl/Tk
libraries on my PowerPCs? I wa
Dear Thibaut,
Thank you very much. I will post the message in that website.
Cheers,
Núria
Thibaut Jombart wrote:
>
> Núria Roura wrote:
>
>>Dear all,
>>I'm using the package adehabitat in R to import several .asc
files(=matrix), and also create a kasc object (=dataframe) with all of them.
> Hmm, I would have recommended
>
> colorRampPalette(c('dark red','white','dark blue'),
>space = "Lab")
>
> where the 'space = "Lab"' part also makes sure that a
> "perceptually-based" space rather than RGB is used.
>
> I think the functions colorRamp() and (even more)
> colo
On Dec 4, 2007 10:34 AM, Stéphane CRUVEILLER <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I tried this method but it seems that there is something wrong with my
> data frame:
>
>
> when I type in:
>
> > qplot(x=as.factor(Categorie),y=Total,data=mydata)
>
> It displays a graph with 2 points in each categor
On Dec 4, 2007 11:19 AM, John Kane <[EMAIL PROTECTED]> wrote:
>
> --- Domenico Vistocco <[EMAIL PROTECTED]> wrote:
>
> > Perhaps this could be useful:
> > > x=scan()
> > 11.81 10.51 1.95 2.08 2.51 2.05 1.98 0.63
> > 0.17 0.20
> > 12.49 13.56 2.81 3.13 4.58 0.70 0.85 0.22 0.06
> > 0.0
Sorry, I copied the wrong link. Look at:
https://stat.ethz.ch/pipermail/r-devel/2007-December/047605.html
-Roy M.
On Dec 4, 2007, at 9:57 AM, Roy Mendelssohn wrote:
> Hi Shelley:
>
> Look at:
>
> https://stat.ethz.ch/pipermail/r-devel/2007-December/047601.html
>
> -Roy M.
>
>
> On Dec 4, 2007
Hi Shelley:
Look at:
https://stat.ethz.ch/pipermail/r-devel/2007-December/047601.html
-Roy M.
On Dec 4, 2007, at 9:42 AM, shelley wrote:
> Hello:
>
> I tried to install the latest R2.6.1 on my Mac OS X machines. I have
> no problems installing it on my Mac OS X Tiger machine. But when I
> trie
On Tue, 4 Dec 2007, John Kane wrote:
> I simply installed R onto a USB stick, downloaded my
> normal packages to it and it works fine under Windows.
Yes, on Windows, but
1) There are other OSes,
2) This didn't just happen: it needed some careful design, including some
caching to make it run fa
You're right! It was kmail under KDE that distorted your message.
It works.
Thanks a lot
Vittorio
Il Monday 03 December 2007 22:34:48 Gabor Grothendieck ha scritto:
> Maybe your email software corrupted it somehow. Often
> email software will cause weird line wrappings, for example.
> Or maybe you hav
Roland Rau wrote:
> Hi Tom,
>
> did you check the R for Windows FAQ?
>
> http://cran.r-project.org/bin/windows/base/rw-FAQ.html#Can-I-run-R-from-a-CD-or-USB-drive_003f
>
Puh. My apologies. I should have done so before I asked the
question. Sorry.
Tom
> Hope this helps,
> Roland
>
>
> T
Hello:
I tried to install the latest R 2.6.1 on my Mac OS X machines. I have
no problems installing it on my Mac OS X Tiger machine. But when I
tried to install it on other Mac OS X 10.4.9 PowerPCs, I could not
get the GNU Fortran and/or Tcl/Tk libraries installed. I always got
the error me
ONLINE VENDOR NEUTRAL INTRO TO DATA MINING FOR ABSOLUTE BEGINNERS
(no charge)
A non-technical data mining introduction for beginners
December 13, 2007
US and European Timezones:
To register: http://salford.webex.com
This one-hour webinar is a perfect place to start if you are new to data mining
Núria Roura wrote:
>Dear all,
>I'm using the package adehabitat in R to import several .asc files(=matrix),
>and also create a kasc object (=dataframe) with all of them.
>The main idea is to use this kasc object to map the predicted values
>of a climate-matching model for an overall area. However,
I simply installed R onto a USB stick, downloaded my
normal packages to it and it works fine under Windows.
--- Roland Rau <[EMAIL PROTECTED]> wrote:
> Hi Tom,
>
> did you check the R for Windows FAQ?
>
http://cran.r-project.org/bin/windows/base/rw-FAQ.html#Can-I-run-R-from-a-CD-or-USB-drive_
--- Domenico Vistocco <[EMAIL PROTECTED]> wrote:
> Perhaps this could be useful:
> > x=scan()
> 11.81 10.51 1.95 2.08 2.51 2.05 1.98 0.63
> 0.17 0.20
> 12.49 13.56 2.81 3.13 4.58 0.70 0.85 0.22 0.06
> 0.03
>
> > x=matrix(x,5,4,byrow=T)
> > rownames(x)=paste("comp",1:5,sep="")
>
I am trying to work with a time series of one observation every fifteen
minutes over a length of two years. My question really lies in how to code
the time series so that each observation represents a fifteen-minute interval.
Below is the code that I used; any help would be appreciated.
> x<-read.tabl
Yes, it is indeed true for other systems as well, although
some configuration problems might arise, at least on Linux.
It is also true that there are several small Linux distributions
which easily fit into a flash drive, and then you can boot from the
flash drive. I used to use SLAX, this is mo
Dear all,
I'm using the package adehabitat in R to import several .asc files
(=matrix),
and also create a kasc object (=dataframe) with all of them.
The main idea is to use this kasc object to map the predicted values of
a climate-matching model for an overall area. However, I don't know how to
p
Hi Tom,
did you check the R for Windows FAQ?
http://cran.r-project.org/bin/windows/base/rw-FAQ.html#Can-I-run-R-from-a-CD-or-USB-drive_003f
Hope this helps,
Roland
Tom Backer Johnsen wrote:
> Recently I came across an interesting web site:
> http://portableapps.com/. The idea is simple, this
On Windows anyway, R can be located in any directory (including one on a
flash drive). R can also be set up to make no use of the registry (again --
Windows only), so AFAICS the answer is yes, and it's trivial to do. I would
be surprised if this were not true in other OS's, too. R is just an
executa
library(R.utils)
pos=which(diff(x)==1)+1
insert(x,ats=pos,rep(list(rep(0,3)),length(pos)))
domenico vistocco
Serguei Kaniovski wrote:
> Hallo,
>
> suppose I have a vector:
>
> x <- c(1,1,1,2,2,3,3,3,3,3,4)
>
> How can I generate a vector/sequence in which a fixed number of zeroes (say
> 3) is ins
Hi,
I tried this method but it seems that there is something wrong with my
data frame:
when I type in:
> qplot(x=as.factor(Categorie),y=Total,data=mydata)
It displays a graph with 2 points in each category...
but if I add the parameter geom="histogram"
> qplot(x=as.factor(Categorie),y=Tot
Take the rle, fix up the result and take the inverse.rle. Our formula
adds 0's to the end too, so remove those with head:
head(inverse.rle(with(rle(x), list(lengths = c(rbind(lengths, 3)),
     values = c(rbind(values, 0))))), -3)
On Dec 4, 2007 10:49 AM, Serguei Kaniovski <[EMAIL PROTECTED]> wrote:
>
On 12/4/2007 11:06 AM, Mohammad Ehsanul Karim wrote:
> Dear list,
>
> After running for a while, it crashes and gives the following error message:
> can anybody suggest how to deal with this?
>
> Error in if (ratio0[i] < log(runif(1))) { :
> missing value where TRUE/FALSE needed
Use options(
You could use a combination of rle, cumsum and append.
> x <- c(1,1,1,2,2,3,3,3,3,3,4)
> y<-rle(x)$lengths
> y
[1] 3 2 5 1
> z<-cumsum(y)[y>1]
> z
[1] 3 5 10
>
> for(i in rev(z)) x <- append(x, c(0,0,0), after = i)
> x
[1] 1 1 1 0 0 0 2 2 0 0 0 3 3 3 3 3 0 0 0 4
Chris
Serguei Kaniovski-3
My original reply was intended for the original version of 'df', in
which both columns were factors. In your example you have added a
numeric column, so it is not exactly the case I replied to. For your
example you can use the following:
testdata <- as.factor(c("1.1",NA,"2.3","5.5"))
testdata2 <- as.
Dear list,
After running for a while, it crashes and gives the following error message:
can anybody suggest how to deal with this?
Error in if (ratio0[i] < log(runif(1))) { :
missing value where TRUE/FALSE needed
### original program
p2 <- function (Nsim=1000){
x<
Hello R users,
I have numerical data sampled on two grids, each one shifted by 0.5
from the other.
For example:
grid1 = expand.grid(x=0:3, y=0.5:2.5)
grid2 = expand.grid(x=0.5:2.5, y=0:3)
gridFinal = expand.grid(x=0.5:2.5, y=0.5:2.5)
plot(gridFinal, xlim=c(0,3), ylim=c(0,3), col="black", pch=
It depends on the nature of your data. Have you used the stl function
to decompose your time series data?
plot(stl(time series, s.window="periodic"))
Are you looking at the ACF and PCF to see how strong the
autocorrelations are? You may need to use a differencing operator to
make your series s
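As a sketch of that workflow, using the built-in `co2` series purely for illustration (your own series, and the differencing lags it needs, will differ):

```r
# Decompose a monthly series into trend, seasonal and remainder components
dec <- stl(co2, s.window = "periodic")
plot(dec)

# Inspect the autocorrelation structure
acf(co2)
pacf(co2)

# Difference to move toward stationarity before fitting an ARIMA-type model
co2.d  <- diff(co2)               # first difference removes the trend
co2.sd <- diff(co2.d, lag = 12)   # seasonal difference for monthly data
```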
On Tue, 4 Dec 2007, Martin Maechler wrote:
> Package 'vcd' (and others) use package 'colorspace',
> and I have wondered in the past if these color space computations
> should not be merged into to standard R (package 'grDevices').
> But that's really a topic for another thread, on R-devel, not R-he
Recently I came across an interesting web site:
http://portableapps.com/. The idea is simple: this is software that
can be installed and run on some kind of USB memory, a stick or
one of these hard disks. I can think of a number of situations where
this could be handy. In addition memo
Also see www.nabble.com for a very nice interface to current and archived
posts.
vince-28 wrote:
>
> I made a google group archive of current and future R-help posts at
> http://groups.google.com/group/r-help-archive
>
> If you are signed-up for the R-help mailing list with a gmail account
>
"Dimitris Rizopoulos" <[EMAIL PROTECTED]> wrote in
news:[EMAIL PROTECTED]:
> try this (also look at R-FAQ 7.10):
>
> sapply(df, function (x) as.numeric(levels(x))[as.integer(x)])
That looks rather dangerous. By the time I saw your suggestion, I had
already added an extra variable with:
df$tes
Hallo,
suppose I have a vector:
x <- c(1,1,1,2,2,3,3,3,3,3,4)
How can I generate a vector/sequence in which a fixed number of zeroes (say
3) is inserted between the consecutive values, so I get
1,1,1,0,0,0,2,2,0,0,0,3,3,3,3,3,0,0,0,4
thanks a lot,
Serguei
[[alternative HTML version deleted]]
Perhaps this could be useful:
> x=scan()
11.81 10.51 1.95 2.08 2.51 2.05 1.98 0.63 0.17 0.20
12.49 13.56 2.81 3.13 4.58 0.70 0.85 0.22 0.06 0.03
> x=matrix(x,5,4,byrow=T)
> rownames(x)=paste("comp",1:5,sep="")
> colnames(x)=paste("c",1:4,sep="")
> library(ggplot2)
> dfm=melt(
Stéphane CRUVEILLER genoscope.cns.fr> writes:
> I would like to know whether it is possible to draw several
> stacked barplots (i.e. side by side on the same sheet)...
>
> my data look like :
>
> Cond1 Cond1' Cond2 Cond2'
> Compartment 1  11,8  12,05  12,49
Hi all,
you can consult these links:
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/43008.html
https://stat.ethz.ch/pipermail/r-help/2004-October/058703.html
Hope this helps.
pierre
Selon Florencio González <[EMAIL PROTECTED]>:
>
> Hi, I´m trying to plot a nonlinear regresion with the con
Peter Dalgaard wrote:
> Daniel Malter wrote:
>
>> x=c(1,2,3,4,5,6,7,8,9)
>> y=c(3,5,4,6,7,8,8,7,10)
>>
>> z=cor.test(x,y)
>>
>> othervariable=z$p.value
>>
>>
> z$statistic, more likely.
>
>
z$estimate, even more likely (D'oh!!!)
>> Cheers,
>> Daniel
>>
>>
See R-FAQ # 7-11 for the solution.
Have a look at
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/98227.html
for a discussion of this type of problem and ways to
get around the issue.
--- Antje <[EMAIL PROTECTED]> wrote:
> Hello,
>
> can anybody help me with this problem?
> I have a datafram
Hi
[EMAIL PROTECTED] wrote on 04.12.2007 02:39:39:
> > I recently picked up R for econometrics modeling, and I am confronted with a
> > problem. I use cor.test() for the spearman test, and want to get the "rho"
> > and "P-value" in the summary. Would you please tell me how to get them? Thank
>
> x=c(1,2,NA,3,4,5)*10
> y=array(rep(x,15),c(5,3,2))
> dimnames(y)=list(1:5,letters[1:3],NULL)
So we have in the workspace:
> y
, , 1

   a  b  c
1 10 50 40
2 20 10 50
3 NA 20 10
4 30 NA 20
5 40 30 NA

, , 2

   a  b  c
1 30 NA 20
2 40 30 NA
3 50 40 30
4 10 50 40
5 20 10 50

Then to set the miss
The following will work:
apply(my.array, c(1,2), mean)
Ravi.
---
Ravi Varadhan, Ph.D.
Assistant Professor, The Center on Aging and Health
Division of Geriatric Medicine and Gerontology
Johns Hopkins University
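For instance, on a small made-up array (not the poster's 13 x 14 x 3 data), keeping margins `c(1,2)` averages over the third dimension:

```r
# A 2 x 3 x 4 array filled column-major with 1:24
a <- array(1:24, dim = c(2, 3, 4))

# Mean over the third dimension: the result is a 2 x 3 matrix
m <- apply(a, c(1, 2), mean)
m
#      [,1] [,2] [,3]
# [1,]   10   12   14
# [2,]   11   13   15
```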
Hi all,
I am a graduate student doing some simulation study using R about publication
bias in Meta Analysis. One method I need to implement is the Trim and Fill
method. I know that there is a Trim and Fill "package" in S-Plus and Stata, so
I am wondering whether there is something in R that I c
> I recently picked up R for econometrics modeling, and I am confronted with a
> problem. I use cor.test() for the spearman test, and want to get the "rho" and
> "P-value" in the summary. Would you please tell me how to get them? Thank
> you very much!
>
>
>
> Here is the cor.test() summary:
>
> Spearman
try this (also look at R-FAQ 7.10):
sapply(df, function (x) as.numeric(levels(x))[as.integer(x)])
I hope it helps.
Best,
Dimitris
Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven
Address: Kapucijnenvoer 35, Leuven, Belgium
Tel:
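To see why the detour through `levels()` in R-FAQ 7.10 is needed, here is a small demonstration (toy factor, not the poster's data): a naive `as.numeric()` on a factor returns its internal integer codes, not the numbers the labels represent.

```r
f <- factor(c("1.1", "2.3", "5.5"))

# Naive conversion returns the internal integer codes
as.numeric(f)                          # 1 2 3

# Correct: map the codes back through the levels (R-FAQ 7.10)
as.numeric(levels(f))[as.integer(f)]   # 1.1 2.3 5.5

# Equivalent, slightly less efficient alternative
as.numeric(as.character(f))            # 1.1 2.3 5.5
```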
Dear R-Users,
I would like to know whether it is possible to draw several
stacked barplots (i.e. side by side on the same sheet)...
my data look like :
              Cond1 Cond1' Cond2 Cond2'
Compartment 1  11,8  12,05 12,49   0,70
Compartment 2  10,5  11,98
Duccio - wrote:
> I used the lm function for a univariate regression model.
> Then I plotted it within the x/y scatterplot by abline(lm).
> I would like to plot standard error (or confidence intervals but SE should
> be better)...
> Any suggestion?
>
help(predict.lm) has an example of this.
--
I used the lm function for a univariate regression model.
Then I plotted it within the x/y scatterplot by abline(lm).
I would like to plot standard error (or confidence intervals but SE should
be better)...
Any suggestion?
Cheers
Duccio
[[alternative HTML version deleted]]
Hello,
can anybody help me with this problem?
I have a dataframe whose columns are factors even though the values are
numbers; they were read in as factors with "scan". Now I would like to convert
these columns (multiple) to a numeric format.
# this example creates a similar situation
testdata <-
I'm getting unexpected results from the coxph function when using
weights from counter-matching. For example, the following code
produces a parameter estimate of -1.59 where I expect 0.63:
d2 = structure(list(x = c(1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1,
1, 0, 0, 1, 0, 1, 0, 1, 0, 1), wt = c(5,
Hello,
In order to do a future forecast based on my past Time Series data sets
(salespricesproduct1, salespricesproduct2, etc..), I used arima() functions
with different parameter combinations which give the smallest AIC. I also
used auto.arima() which finds the parameters with the smallest AICs.
On Tue, 4 Dec 2007, Florencio González wrote:
Hi, I'm trying to plot a nonlinear regression with the confidence bands for
the curve obtained, similar to what the nlintool or nlpredci functions in
Matlab do, but I cannot figure out how. In nls the option is there but not
implemented yet.
Is there a pla
Dear all
I have a deaths and atrisk dataset. I would like to calculate IRR of the
dataset but before I want to choose the smoothing parameter and I have one
error message. One example of dataset is:
  class   age Deaths AtRisk
1     1 20-34      1    100
2     2
qplot(data=dataset, x, y, colour=z)
ONKELINX, Thierry wrote:
> Dear useRs,
>
> I'm trying to specify the colour of a factor with ggplot2. The example
> below gets me close to what I want, but it's missing a legend.
>
> Any ideas?
>
> Thanks,
>
> Thierry
>
> library(ggplot2)
> dataset <- data.frame
R-help,
I have a 3-way array:
> dim(bugvinP)
[1] 13 14 3
The array looks something like this (object trimmed for readability)
, , slag = 1
ar
1994 1995 1996 1997 1998
1 NA 0.000 0.000 0.000 0.000
2 0.036 0.059 0.027 0.000 0.000
3 0.276 0.475 0.491 0.510 0.559
10 1.0
On 04/12/2007 12:51 AM, Thomas L Jones, PhD wrote:
> From: Thomas Jones
>
> I have several user-defined functions. As is standard practice, I am
> defining a logical vector named idebug in order to control debugging
> printouts. For example, if idebug [1] has the value TRUE, such-and-such
> deb
Hi, I'm trying to plot a nonlinear regression with the confidence bands for
the curve obtained, similar to what the nlintool or nlpredci functions in
Matlab do, but I cannot figure out how. In nls the option is there but not
implemented yet.
Is there a plan to implement it in the relatively near future?
Th
Linda Smith wrote:
> Hi All,
>
> I am looking for a color palette like this:
> http://www.ncl.ucar.edu/Applications/Images/h_long_5_lg.png
>
> I think I found out how some time ago (something like Colors[1:n]), but when
> I now want to use it, I cannot remember how I did it.
>
> Does anyone kno
Dear Ben,
I was searching for the same problem. Thank you very much, it helped me a lot
and I will use it quite often!
In addition to the problem given by tintin_et_milou, I have to compare two
pairs of vectors.
I.e. I have two datasets each with latitude and longitude (which defines the
geo
Biscarini, Filippo wrote:
> Good evening,
>
> I am trying to add labels to the point of a simple plot, using the
> text() function; the problem is that sometimes, if two points are too
> close to each other, labels overlap and are no longer readable.
> I was wondering whether there are options th
> "AZ" == Achim Zeileis <[EMAIL PROTECTED]>
> on Tue, 4 Dec 2007 05:08:51 +0100 (CET) writes:
AZ> On Mon, 3 Dec 2007, jim holtman wrote:
>> see if this is what you need:
>>
>> require(lattice) x <- matrix(1:100,10)
>> levelplot(x,col.regions=colorRampPalette(c('da
On Sun, 02-Dec-2007 at 11:20PM +, S Ellison wrote:
|> Package review is a nice idea. But you raise a worrying point.
|> Are any of the 'downright dangerous' packages on CRAN?
Don't know about "dangerous", but I would like the opportunity to
provide feedback or to have seen feedback from other
Daniel Malter wrote:
> x=c(1,2,3,4,5,6,7,8,9)
> y=c(3,5,4,6,7,8,8,7,10)
>
> z=cor.test(x,y)
>
> othervariable=z$p.value
>
z$statistic, more likely.
> Cheers,
> Daniel
>
> -
> cuncta stricte discussurus
> -
>
> -Ursprüngliche Nachricht-
>