What statistical measure(s) tend to answer ALL(?) questions of practical
interest?
--
View this message in context:
http://r.789695.n4.nabble.com/Re-Question-regarding-significance-of-a-covariate-in-a-coxme-survival-tp2399386p2399577.html
Sent from the R help mailing list archive at
On Aug 30, 2010, at 1:27 AM, Nam Lethanh wrote:
Dear Guys,
I am converting code from Fortran into R and got stuck in solving a
LOOPING
procedure with R. In FORTRAN, looping is written with (DO and END DO).
In R, it is (for with { }).
Looking at the results (namely the 1's from
When j==1 the loop runs from i down to zero; 5:0 is valid in R and means c(5,4,3,2,1,0)
Hope it helps
mario
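To make the Fortran-to-R translation concrete, here is a minimal sketch (values illustrative) of handling Fortran's down-counting `DO k = i, 0, -1`, which executes zero times when i < 0, whereas R's `i:0` would count upwards in that case:

```r
# Fortran's "DO k = i, 0, -1" executes zero times when i < 0,
# but R's i:0 counts *upwards* then. A while loop mirrors the
# Fortran semantics exactly:
i <- 5
total <- 0
k <- i
while (k >= 0) {
  total <- total + k   # ... loop body ...
  k <- k - 1
}

# Equivalently, guard a seq()-based for loop so it is skipped when i < 0:
if (i >= 0) for (k in seq(i, 0, by = -1)) { }
```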
Nam Lethanh wrote:
Dear Guys,
I am converting code from Fortran into R and got stuck in solving a LOOPING
procedure with R. In FORTRAN, it is (DO and END DO) for looping
On 30/08/2010 1:58 p.m., Derek M Jones wrote:
All,
I have been trying to get calls to hist(...) to be plotted
with the y-axis having a log scale.
I have tried: par(ylog=TRUE)
I have also looked at the histogram package.
Suggestions welcome.
You appear to be looking for a log-histogram
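A minimal sketch of one way to get a log-count display: hist() itself has no log y-axis, so compute the bins without drawing and plot the counts manually (zero-count bins must be dropped, since log(0) is undefined):

```r
x <- rlnorm(1000)                       # skewed example data
h <- hist(x, plot = FALSE)              # compute bins without drawing
keep <- h$counts > 0                    # a log scale cannot show zeros
plot(h$mids[keep], h$counts[keep], type = "h", log = "y",
     xlab = "x", ylab = "count (log scale)")
```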
Hi,
I would also like to point out that you flipped the first two values
of theta in the R vs. FORTRAN versions. This fixes part of
your differences (and makes it easy to see that the differences occur
when j > i, as David and Mario point out).
Josh
On Sun, Aug 29, 2010 at 10:27 PM, Nam
Hi Martin!
Which version of maxLik do you use?
(You can figure this out, e.g. by typing 'help( package = maxLik )'
at the R prompt.)
If you are not using version 0.8-0, please upgrade to the latest version.
You can get more information on the optimization process by adding
print.level=4 as
Dear R helpers,
Thanks a lot for your earlier guidance esp. Mr David Winsemius Sir. However,
there seems to be mis-communication from my end corresponding to my
requirement. As I had mentioned in my earlier mail, I am dealing with a very
large database of borrowers and I had given a part of
You made a mistake with theta:
theta <- c(0.08,0.06,0.09,0)
This should be (see the Fortran):
theta <- c(0.06,0.08,0.09,0)
The innermost loop (for( k in ... )) is better written as a while loop to take
into account how Fortran handles loops (see the previous replies):
k <- i
while( k >=
On Aug 29, 2010, at 10:24 PM, David Winsemius wrote:
On Aug 29, 2010, at 3:13 PM, moleps wrote:
glm(A~B+C+D+E+F,family = binomial(link = logit),data=tre,na.action=na.omit)
Error in `contrasts-`(`*tmp*`, value = contr.treatment) :
contrasts can be applied only to factors with 2 or more
Dear all,
I have a question regarding using JAGS and R. My problem is that every
single time I want to call JAGS from R the latter crashes (i.e. it
turns pale and Windows tells me R has stopped working).
Strangely, if there is a mistake in my jags code, this will be
reported without a crash.
Hi,
I've three values. What is the best method to choose the lowest value
with an if function?
example:
a = 3
b = 1
c = 5
if (lowest(a,b,c) is a) {}
if (lowest(a,b,c) is b) {}
if (lowest(a,b,c) is c) {}
Thanks,
Alfredo
Hi again,
I was asked to provide some code. Well, in my case it doesn't really
matter which example I use, so I just write down a very basic and
canned example:
First I create some data:
N <- 1000
x <- rnorm(N, 0, 5)
Then I specify a model in JAGS, storing it in the directory with the
extension
Have a look at switch() and which.min()
x <- c(a = 3, b = 1, c = 5)
switch(which.min(x), "a is lowest", "b is lowest", "c is lowest")
HTH,
Thierry
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek
team Biometrie
Hi,
sure, maxNR uses 'iterlim' (or was it 'maxIter'?) where you can specify
the max # of iterations. The default was 100 iterations, AFAIK.
There was a bug in one of the previous versions which led to infinite
loops of constrained maximization. Do you use constrained or
unconstrained?
and
Hi everybody,
I want an x-axis which has xlim=c(max, min) rather than xlim=c(min, max)
in order to reflect the type of the process (cooling):
library(lattice)
myprepanel <- function(x,y,...) list(xlim=rev(range(x)))
x <- rep(1:10, 100)
z <- factor(sample(10, 1000, T))
y <- rnorm(1000, x,
On Mon, Aug 30, 2010 at 3:57 PM, Thaler, Thorn, LAUSANNE, Applied
Mathematics thorn.tha...@rdls.nestle.com wrote:
Hi everybody,
I want an x-axis which has xlim=c(max, min) rather than xlim=c(min, max)
in order to reflect the type of the process (cooling):
library(lattice)
myprepanel <-
No (in fact that wouldn't work anyway), you can simply do
xyplot(y~x|z, xlim = rev(extendrange(x)))
The point is that in this case you have to explicitly specify a
pre-computed limit, and cannot rely on the prepanel function to give
you a nice default.
Thanks that does the trick. Because
Hi!
I might be a bit late, but maybe you meant:
x[x >= -0.50 | x <= -1.0]
See ?|
HTH,
Ivan
Le 8/29/2010 04:54, Joshua Wiley a écrit :
Dear Erin,
-0.50 is greater than -1.00, which means that your logical test returns
all FALSE answers. Now consider the results of:
x[FALSE]
you are
It's not just that counts might be zero, but also that the base of
each bar starts at zero. I really don't see how logging the y-axis of
a histogram makes sense.
Hadley
On Sunday, August 29, 2010, Joshua Wiley jwiley.ps...@gmail.com wrote:
Hi Derek,
Here is an option using the package
On Mon, Aug 30, 2010 at 3:56 AM, Thaler,Thorn,LAUSANNE,Applied
Mathematics thorn.tha...@rdls.nestle.com wrote:
No (in fact that wouldn't work anyway), you can simply do
xyplot(y~x|z, xlim = rev(extendrange(x)))
The point is that in this case you have to explicitly specify a
pre-computed
The underlying problem is your expectations.
R (unlike S) was set up many years ago to use na.omit as the default,
and when fitting both lm() and loess() silently omit cases with
missing values. So why should prediction from 'newdata' be different
unless documented to be so (which it is
Well, it's actually lattice:::extend.limits(range(x)), but
extendrange() does basically the same thing, and is available to the
user (i.e., exported), albeit from a different package.
Thanks again Deepayan for the insight.
A followup question though-- in another setting I'd like to have
Hadley,
It's not just that counts might be zero, but also that the base of
each bar starts at zero. I really don't see how logging the y-axis of
a histogram makes sense.
I have counts ranging over 4-6 orders of magnitude with peaks
occurring at various 'magic' values. Using a log scale for
I no longer work at NFRDI. Please send your mail to me at
sukgeun.j...@gmail.com.
__
R-help@r-project.org
dear R experts:
has someone written a function that returns the results of by() as a
data frame? of course, this can work only if the output of the
function that is an argument to by() is a numerical vector.
presumably, what is now names(byobject) would become a column in the
data frame, and
felix,
thanks a lot for the hint!
i actually found another way by setting up a panel function by which i
can control every single panel with panel.number(). maybe there is
more efficient coding - i don't know. i also alternated tickmarks and
tick-labeling by panel-rows, which is nicer, but
I have counts ranging over 4-6 orders of magnitude with peaks
occurring at various 'magic' values. Using a log scale for the
y-axis enables the smaller peaks, which would otherwise
be almost invisible bumps along the x-axis, to be seen
That doesn't justify the use of a _histogram_ - and
Hi R Helpers,
I'm still new to R and I experience many difficulties. I'm using the vegan
package (R version 2.11) trying to calculate checkerboard units for each
species pair of a matrix. I've prepared the function:
pair.checker=function (dataset) {designdist (dataset,
method = "(A-J)*(B-J)", terms
Try this:
as.data.frame(by( indf, indf$charid, function(x) c(m=mean(x), s=sd(x)) ))
On Mon, Aug 30, 2010 at 10:19 AM, ivo welch ivo.we...@gmail.com wrote:
dear R experts:
has someone written a function that returns the results of by() as a
data frame? of course, this can work only if the
serious?
key <- c(1,1,1,2,2,2)
val1 <- rnorm(6)
indf <- data.frame( key, val1)
outdf <- by(indf, indf$key, function(x) c(m=mean(x), s=sd(x)) )
outdf
indf$key: 1
m.key m.val1 s.key s.val1
1. 0.6005 0. 1.0191
On Aug 30, 2010, at 4:05 AM, Vincy Pyne wrote:
Dear R helpers,
Thanks a lot for your earlier guidance esp. Mr David Winsemius Sir.
However, there seems to be mis-communication from my end
corresponding to my requirement. As I had mentioned in my earlier
mail, I am dealing with a very
FYI, since R version 2.11.0, aggregate() can return a vector of summary
results, rather than just a scalar:
aggregate(iris$Sepal.Length, list(Species = iris$Species),
function(x) c(Mean = mean(x), SD = sd(x)))
  Species x.Mean      x.SD
1 setosa 5.006 0.3524897
2
perfect. this is the R way to do it quick and easy. thank you, marc.
(PS, in my earlier example, what I wanted was aggregate( . ~ key,
data=indf, FUN = function(x) c(m=mean(x), s=sd(x))) )
Ivo Welch (ivo.we...@brown.edu, ivo.we...@gmail.com)
On Mon, Aug 30, 2010 at 10:47 AM, Marc
I can definitely recommend the plyr package for these sorts of
operations.
http://had.co.nz/plyr/
ivo welch wrote:
dear R experts:
has someone written a function that returns the results of by() as a
data frame? of course, this can work only if the output of the
function that is an argument
Have you tried aggregate or plyr's ddply?
by() is meant for functions that return such
complicated return values that automatically combining
them is not feasible (e.g., lm()). aggregate()
works for functions that return scalars or
simple vectors and returns a data.frame.
ddply is part of a
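To make the by()/aggregate() contrast concrete, a small sketch using the toy data from earlier in the thread:

```r
key  <- c(1, 1, 1, 2, 2, 2)
val1 <- rnorm(6)
indf <- data.frame(key, val1)

# by() returns a list-like "by" object, one element per group ...
res_by <- by(indf$val1, indf$key, function(x) c(m = mean(x), s = sd(x)))

# ... while aggregate() with a formula returns a data frame directly:
res_ag <- aggregate(val1 ~ key, data = indf,
                    FUN = function(x) c(m = mean(x), s = sd(x)))
```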
Hi, all
how to get all filenames in a directory and its all subdirectories?
something like
filenames <- c(Sys.glob('/path/to/directory/*'),
Sys.glob('/path/to/directory/*/*'), Sys.glob('/path/to/directory/*/*/*'),
...)
Thanks in advance,
Hyunchul
Hi R experts,
I am trying to remove autocorrelation from Simple Moving Average time series. I
know that this can be done by using seasonal ARIMA like,
library(TTR)
data <- rnorm(252)
n=21
sma_data=SMA(data,n)
Hi,
This should do it:
list.files(path = "/your/path", recursive = TRUE)
Cheers,
Josh
On Mon, Aug 30, 2010 at 8:01 AM, Hyunchul Kim
hyunchul.kim@gmail.com wrote:
Hi, all
how to get all filenames in a directory and its all subdirectories?
something like
filenames <-
Hyunchul -
You don't have to rely on the operating system to
provide information about the file system. Look at
?list.files
For example,
list.files('/path/to/directory',recursive=TRUE)
- Phil Spector
Dear R-help list,
I'm using the mgcv package to plot predictions based on the gam function.
I predict the chance of being a (frequent) participant at theater plays vs.
not being a participant by age.
Because my outcome variable is dichotomous, I use the binomial family with
logit link function.
On Aug 30, 2010, at 9:19 AM, ivo welch wrote:
dear R experts:
has someone written a function that returns the results of by() as a
data frame? of course, this can work only if the output of the
function that is an argument to by() is a numerical vector.
presumably, what is now
Hi,
Why is the following returning a node set of length 0?
library(XML)
test <- xmlTreeParse(
  "http://www.unimod.org/xml/unimod_tables.xml", useInternalNodes=TRUE)
getNodeSet(test, "//modifications_row")
Thanks for any hint.
Joh
Hadley,
I have counts ranging over 4-6 orders of magnitude with peaks
occurring at various 'magic' values. Using a log scale for the
y-axis enables the smaller peaks, which would otherwise
be almost invisible bumps along the x-axis, to be seen
That doesn't justify the use of a _histogram_ -
Dear R-Users,
I would like to use the spatstat package in R for simulation and
analysis of point processes in 1d = 1 dimension, i.e. on the line. I
tried to use the owin class to obtain a 1-dimensional window, e.g.
[1,..L], but it doesn't work.
Does anyone know if the spatstat package
I run R on a remote UNIX server where the data are stored that I ssh into
through Emacs, while I store my R scripts on local Windows network drives.
So far this arrangement hasn't been a problem, though now I'd like to use
source() or a similar function to include other R scripts to get a better
On Mon, Aug 30, 2010 at 5:29 PM, Erik Shilts erik.shi...@opower.com wrote:
I run R on a remote UNIX server where the data are stored that I ssh into
through Emacs, while I store my R scripts on local Windows network drives.
So far this arrangement hasn't been a problem, though now I'd like to
On Sun, Aug 29, 2010 at 20:00, Worik R wor...@gmail.com wrote:
Is there a simple way to put a legend outside the plot area for a simple
plot?
I found... (at http://www.harding.edu/fmccown/R/)
# Expand right side of clipping rect to make room for the legend
par(xpd=T,
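A self-contained sketch of the usual idiom (the margin width and inset values are illustrative): widen the right margin, turn off clipping, then push the legend outside with a negative inset:

```r
old <- par(mar = c(5, 4, 4, 8), xpd = TRUE)  # widen right margin, disable clipping
plot(1:10, rnorm(10), pch = 16)
legend("topright", inset = c(-0.35, 0),      # negative inset pushes it outside
       legend = "series A", pch = 16)
par(old)                                     # restore previous settings
```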
many thanks! I'll try going down the environment variables path as I kind of
know how to do it.
abhisek
On Sun, Aug 29, 2010 at 9:25 PM, Gabor Grothendieck ggrothendi...@gmail.com
wrote:
On Sun, Aug 29, 2010 at 1:13 PM, Abhisek shi...@gmail.com wrote:
Hi there,
I've tried trawling the
Since you are already on emacs, try using ESS. I am pretty sure we do that.
http://ess.r-project.org
Please follow up on the ESS mailing list: ess-h...@stat.math.ethz.ch
Rich
On Mon, Aug 30, 2010 at 12:48 PM, Barry Rowlingson
b.rowling...@lancaster.ac.uk wrote:
On Mon, Aug 30, 2010 at 5:29 PM, Erik
Stephen,
You may have missed an important point in my post (or you may have understood
it, but not stating it here could mislead future readers of the exchange).
Below you have:
detach()
attach(singer[1:15,])
The following object(s) are masked from women :
height
That doesn't justify the use of a _histogram_ - and regardless of
The usage highlights meaningful characteristics of the data.
What better justification for any method of analysis and display is
there?
That you're displaying something that is mathematically well founded
and meaningful - but
What you can do is patch the code to add the NAs back after the
prediction step (which many predict() methods do).
Thanks Andy for your hints and especially for digging into the problem
like this! I have, in the meantime, written a simple wrapper around
predict.loess that fills in the NAs,
Thanks for the response. I'm currently running ESS. I sent a message to the
ESS listserv to see if anyone can help with that since I don't see how to do
it in the Manual or Readme.
Erik
On Mon, Aug 30, 2010 at 2:04 PM, RICHARD M. HEIBERGER r...@temple.edu wrote:
Since you are already on emacs,
On Mon, Aug 30, 2010 at 5:51 AM, Thaler,Thorn,LAUSANNE,Applied
Mathematics thorn.tha...@rdls.nestle.com wrote:
Well, it's actually lattice:::extend.limits(range(x)), but
extendrange() does basically the same thing, and is available to the
user (i.e., exported), albeit from a different package.
On Aug 30, 2010, at 11:44 AM, Jef Vlegels wrote:
There was a problem with the data in attachment, here is a better
version.
Use something like:
data <- read.table("pas_r.txt", header=TRUE, sep=";")
to read it.
I did, but I hate dealing with attached datasets and also am
uncomfortable dealing
Hi Johannes
This is a common issue. The document has a default XML namespace, e.g.
the root node is defined as
<unimod xmlns="http://www.unimod.org/xmlns/schema/unimod_tables_1"> ...
So you need to specify which namespace to match in the XPath expression
in getNodeSet(). The XML
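A sketch of the fix, assuming the namespace URI shown above (the prefix `u` is arbitrary; it just has to match between the mapping and the XPath):

```r
library(XML)
test <- xmlTreeParse("http://www.unimod.org/xml/unimod_tables.xml",
                     useInternalNodes = TRUE)
# Bind the document's default namespace to a prefix of our choosing
# and use that prefix in the XPath expression:
ns <- c(u = "http://www.unimod.org/xmlns/schema/unimod_tables_1")
getNodeSet(test, "//u:modifications_row", namespaces = ns)
```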
On Mon, Aug 30, 2010 at 01:50:03PM +0100, Prof Brian Ripley wrote:
The underlying problem is your expectations.
R (unlike S) was set up many years ago to use na.omit as the
default, and when fitting both lm() and loess() silently omit cases
with missing values. So why should prediction from
Greetings:
I recently installed R 2.11.1 for windows. It seems that there is only
online help now. Is there any way to get the local docs? I don't have
always-on high-speed internet.
thanks,
Bob
Hadley,
That you're displaying something that is mathematically well founded
and meaningful - but my emphasis there was on histogram. I don't
think a histogram makes sense, but there are other ways of displaying
the same data that would (e.g. a frequency polygon, or maybe a density
plot)
The
Hi,
On Mon, Aug 30, 2010 at 3:53 PM, Bob McCall rcm...@gmail.com wrote:
Greetings:
I recently installed R 2.11.1 for windows. It seems that there is only
online help now. Is there any way to get the local docs? I don't have
always-on high-speed internet.
You don't have to be connected to
Dear friends,
two years ago (as I found on the web) Paul sent the following message but I was
not able to find if he got an answer. Today I have the same question and it
would be great if I could find out that this test has been implemented
(somehow) in R. Please do not confuse it with the
try help.start()
that starts a local help process (within R) and open your browser to
that local location.
-c
On 08/30/2010 03:53 PM, Bob McCall wrote:
Greetings:
I recently installed R 2.11.1 for windows. It seems that there is only
online help now. Is there any way to get the local
I'm relatively new to R, and not particularly adept yet, but I was wondering
if there was a simply way to simulate missing data that are MAR, MNAR and
MCAR. I've got a good work-around for the MCAR data, but it's sort of hard
to work with.
Josh
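One common approach can be sketched as follows (all names and probabilities are illustrative): MCAR deletes values with a constant probability, while MAR ties the missingness probability to an observed covariate.

```r
set.seed(1)
n   <- 200
dat <- data.frame(x = rnorm(n), y = rnorm(n))

# MCAR: every y value has the same 20% chance of being missing
dat$y_mcar <- ifelse(runif(n) < 0.2, NA, dat$y)

# MAR: the chance that y is missing depends on the *observed* x
p_mar     <- plogis(-1 + dat$x)        # larger x -> more likely missing
dat$y_mar <- ifelse(runif(n) < p_mar, NA, dat$y)
```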
[[alternative HTML version deleted]]
Thanks for the replies. I tried help.start() and ?foo but my browser opened
and was blank. It just said "can't open site" as I was offline. Must be my
system. I'll try again. Maybe I can get it sorted out. Maybe the install was
bad.
Thanks again,
Bob
Inline below.
-- Bert
On Mon, Aug 30, 2010 at 1:05 PM, Iasonas Lamprianou lampria...@yahoo.com wrote:
Dear friends,
two years ago (as I found on the web) Paul sent the following message but I
was not able to find if he got an answer. Today I have the same question and
it would be great if I
On 30 Aug 2010 at 13:25, Bert Gunter wrote:
Inline below.
-- Bert
Wrong. There *is* a Brown-Forsythe test of equality of means given
heterogeneity of variance.
[Kirk's experimental design tst, 3rd Ed. p. 155 describes the test.]
---JRG
John R. Gleason
On Mon, Aug 30, 2010 at 1:05
Hi:
You've already gotten some good replies re aggregate() and plyr; here are
two more choices, from packages doBy and data.table, plus the others for
a contained summary:
key <- c(1,1,1,2,2,2)
val1 <- rnorm(6)
indf <- data.frame( key, val1)
outdf <- by(indf, indf$key, function(x) c(m=mean(x),
On 31/08/10 03:37, Derek M Jones wrote:
Hadley,
I have counts ranging over 4-6 orders of magnitude with peaks
occurring at various 'magic' values. Using a log scale for the
y-axis enables the smaller peaks, which would otherwise
be almost invisible bumps along the x-axis, to be seen
That
Hi,
I have a small doubt regarding naive Bayes. I am able to classify the
data properly but I am stuck on how to get the probability
values for naive Bayes. In the case of SVM we have the attr function that
helps in displaying the probability values. Is there any function similar to
Thanks. I stand corrected, then.
-- Bert
On Mon, Aug 30, 2010 at 1:45 PM, JRG loesl...@verizon.net wrote:
On 30 Aug 2010 at 13:25, Bert Gunter wrote:
Inline below.
-- Bert
Wrong. There *is* a Brown-Forsythe test of equality of means given
heterogeneity of variance.
[Kirk's
Hi,
I have analyzed my data using log-linear model as seen below:
yes.no <- c("Yes","No")
tk <- c("On","Off")
ats <- c("S","V","M")
L <- gl(2,1,12,yes.no)
T <- gl(2,2,12,tk)
A <- gl(3,4,12,ats)
n <- c(1056,4774,22,283,326,2916,27,360,274,1770,15,226)
library(MASS)
l.loglm <- data.frame(A,T,L,n)
l.loglm
I am trying to do post-hoc tests associated with a repeated measures
analysis with one factor nested within respondents.
The factor (SOI) has 17 levels. The overall testing is working fine, but I
can't seem to get the multiple comparisons to work.
The first step is to stack the data.
Then I
On Mon, Aug 30, 2010 at 3:54 PM, Dennis Murphy djmu...@gmail.com wrote:
Hi:
You've already gotten some good replies re aggregate() and plyr; here are
two more choices, from packages doBy and data.table, plus the others for
a contained summary:
key <- c(1,1,1,2,2,2)
val1 <- rnorm(6)
indf
Please consider the following dataset:
I want to reorder the levels by year but get the following error:
Error in tapply(v, x, FUN, ...) : arguments must have same length
I suspect that I need to add the levels before I melt the dataset
but either way I have only used 'reorder' once before and
Thank you for replying. But there is another test with the same name which
tests for equality of means. It is a robust version of ANOVA, like the Welch
(ANOVA) test. They are both available at SPSS. the Welch test is available
through the oneway.test in R but the Brown-Forsythe test for the
On Aug 30, 2010, at 5:25 PM, Felipe Carrillo wrote:
Please consider the following dataset:
I want to reorder the levels by year but get the following error:
Error in tapply(v, x, FUN, ...) : arguments must have same length
I suspect that I need to add the levels before I melt the dataset
Several of us locally are puzzling over the following problem:
We have a central repository of R packages on our linux system.
Occasionally, we'd like to install a different version of the same
package (such as making updates to the survival package before it is
released to the
Sorry about the structure thing, I basically want my levels in this order:
w_melt <-
reorder(w_melt$year, c("BY2005","BY2009","BY2006","BY2008","BY2007","BY2010"))
Here's the new dataset , please discard the reverse year, I was just trying it
but it didn't do what I wanted
and forgot to delete it.
With
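For a fixed, hand-specified order, `factor(levels = ...)` is often the simpler tool, since `reorder()` expects a numeric variable to sort by; a sketch with the level names from the post:

```r
yr <- c("BY2007", "BY2005", "BY2009")
f  <- factor(yr, levels = c("BY2005", "BY2009", "BY2006",
                            "BY2008", "BY2007", "BY2010"))
levels(f)   # levels now appear in the specified order
```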
On Aug 30, 2010, at 6:09 PM, Felipe Carrillo wrote:
Sorry about the structure thing, I basically want my levels on this
order:
w_melt <-
reorder(w_melt$year,
  c("BY2005","BY2009","BY2006","BY2008","BY2007","BY2010"))
Here's the new dataset , please discard the reverse year, I was
just trying it
but
Hi, is there any way I can retrieve the user coordinates for the region of the
heatmap (only the heatmap, not including the dendrogram or x-, y-axis
annotation)? I found that par("usr") didn't give the user coordinates that I
want. I want those
those
user coordinates to add some additional information to the
mercy!!! ;-)
thanks, everyone. sure beats me trying to reinvent a slower version of the
wheel. came in very handy.
I think it would be nice to see some of these pointers in the ?by manual
page. not sure who to ask to do this, but maybe this person reads r-help.
/iaw
Ivo Welch
Dear all,
I was asked to send the following question:
We have some (raw) observations and would like to get the first and second
derivatives in R. Any comment would be appreciated.
Thanks,
Mahmoud
On Aug 30, 2010, at 6:40 PM, mtor...@math.carleton.ca wrote:
Dear all,
I was asked to send the following question:
We have some (raw) observations and would like to get the first and
second
derivatives in R. Any comment would be appreciated.
From the time of Newton, the quick and dirty
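The quick-and-dirty route can be sketched with `diff()`; note that noise in the data is amplified at each differentiation, which is why the smoothing approaches below are usually preferred:

```r
x  <- seq(0, 2 * pi, length.out = 100)
y  <- sin(x)
d1 <- diff(y) / diff(x)        # first derivative, at interval midpoints
d2 <- diff(d1) / diff(x)[-1]   # second derivative, one point shorter again
```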
On 30/08/2010 3:53 PM, Bob McCall wrote:
Greetings:
I recently installed R 2.11.1 for windows. It seems that there is only
online help now. Is there any way to get the local docs? I don't have
always-on high-speed internet.
The local docs are generated on demand. You use your browser, but
Hi:
Try this:
library(sos) # install from CRAN if you don't have it
findFn('imputation')
I got 285 hits. That should be enough to get you started.
Here's a recent paper about how to use sos from the R Journal (Dec. 2009):
On 30/08/2010 4:16 PM, Bob McCall wrote:
Thanks for the replies. I tried help.start() and ?foo but my browser opened
and was blank. It just said can't open site as I was offline. Must be my
system. I'll try again. Maybe I can get it sorted out. Maybe the install was
bad.
I'd guess you're set
On 30/08/2010 6:00 PM, Atkinson, Elizabeth J. (Beth) wrote:
Several of us locally are puzzling over the following problem:
We have a central repository of R packages on our linux system.
Occasionally, we'd like to install a different version of the same
package (such as making
On 30/08/2010 6:40 PM, mtor...@math.carleton.ca wrote:
Dear all,
I was asked to send the following question:
We have some (raw) observations and would like to get the first and second
derivatives in R. Any comment would be appreciated.
Fit a model, and take derivatives of the fit. Which
There is a strong argument for fitting something like splines
and then differentiating the spline fit.
I trust you won't object to my immodest recommendation of the
fda package and book by Ramsay, Hooker and Graves (2009) Functional
Data Analysis with R and Matlab (Springer).
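A minimal sketch of the spline route using only base R's `smooth.spline()` (the fda package offers much finer control over basis and smoothing choices):

```r
x   <- seq(0, 2 * pi, length.out = 100)
y   <- sin(x) + rnorm(100, sd = 0.05)    # noisy "raw observations"
fit <- smooth.spline(x, y)
d1  <- predict(fit, x, deriv = 1)$y      # estimated first derivative
d2  <- predict(fit, x, deriv = 2)$y      # estimated second derivative
```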
Hi:
See below.
On Mon, Aug 30, 2010 at 2:21 PM, Bruce Johnson
bruce.ejohn...@verizon.net wrote:
I am trying to do post-hoc tests associated with a repeated measures
analysis with on factor nested within respondents.
The factor (SOI) has 17 levels. The overall testing is working fine, but I
Florian Weiler fweiler08 at johnshopkins.it writes:
First I create some data:
N <- 1000
x <- rnorm(N, 0, 5)
Then I specify a model in JAGS, storing it in the directory with the
extension .bug:
model {
for (i in 1:N) {
x[i] ~ dnorm(mu, tau) ## the model
}
p.s. My suggestion is a special case of Duncan Murdoch's suggestion:
If you have a model that should fit the data, use it. If you don't --
or if you have only something rather general -- then the more general
tools of functional data analysis may be useful.
##
Hi:
I don't know if this is exactly what you wanted, but here goes. I made
a few adjustments in the data frame before calling ggplot():
# library(ggplot2)
# Reorient the order in which variables appear
winter <- winter[, c(1, 7, 3, 6, 4, 5, 2)]
# Get rid of second week 26 at the end
winter2 <-
Thanks Dennis:
I always seem to have a hard time defining levels. That's exactly what I
needed.
From: Dennis Murphy djmu...@gmail.com
To: Felipe Carrillo mazatlanmex...@yahoo.com
Cc: r-h...@stat.math.ethz.ch
Sent: Mon, August 30, 2010 5:41:02 PM
Subject: Re: [R] reordering levels error
Thanks for this. I was thinking the spaces rule only applied within the
\alias{} statements. I'm not sure why I thought this.
Original message
Date: Mon, 30 Aug 2010 18:43:35 +0100 (BST)
From: Prof Brian Ripley rip...@stats.ox.ac.uk
Subject: Re: [Rd] S4 Method Rd Warning
To: Duncan
Dear All,
I am trying to use the function polr() to analyse ordinal categorical data
responses. Instead of using polr() directly, I have altered the script slightly
(imposed a constraint to make all the parameters greater than or equal to zero)
(see below),
fit <- list(coefficients =
Hi, All
I have a problem of R memory space.
I am getting Error: cannot allocate vector of size 198.4 Mb
--
I've tried with:
memory.limit(size=2047);
[1] 2047
memory.size(max=TRUE);
[1] 12.75
library('RODBC');
Dear David and Dennis Sir,
Thanks a lot for your guidance.
As guided by Mr Dennis Murphy Sir in his reply
Replace table in the tapply call with sum. While you're at it, typing
?tapply to find out what the function does wouldn't hurt...
I had really tried earlier to understand the
Hello!
I want to use the Icens package for analyzing interval-censored data. This code
from the manual gives me what I want.
library(Icens)
data(cosmesis)
csub1 <- subset(cosmesis, subset=Trt==0, select=c(L,R))
e1 <- VEM(csub1)
plot(e1)
However, I would like to change the color of the shading
Hi,
On Mon, Aug 30, 2010 at 9:17 PM, 나여나 dllm...@hanmail.net wrote:
Hi, All
I have a problem of R memory space.
I am getting Error: cannot allocate vector of size 198.4 Mb
It's a RAM thing: you don't have enough.
The OS said "nice try" when R asked for that last 198.4 MB of