On Fri, 15 Oct 2010, Erin Hodgess wrote:
Dear R People:
I'm trying to install R-2.12.0 from source on a Netbook with Windows XP.
I have installed the Rtools.exe (version 2.12)
However, when I enter "tar xvfz R-2.12.0.tar.gz"
I keep getting the message "cannot change ownership to uid 501, gid
Dear R-helpers,
Considering that a substantial part of analysis is related to data
manipulation, I'm just wondering if I should do the basic data part in a
database server (currently I have the data in a .txt file).
For this purpose, I am planning to use MySQL. Is MySQL a good way to go
about it? Are ther
I don't know whether I am missing something or not:
> head(read.zoo(file="f:/dat1.txt", header=T, sep=",", format = "%m/%d/%Y
> %H:%M:%S"), tz="GMT")
data.open data.high data.low data.close
2010-10-15 73.7 73.7 73.7 73.7
2010-10-15 73.8 73.8 73.8
However I have noticed a strange thing: the placement of 'tz = ""' matters here:
> head(read.zoo("f:/dat1.txt", sep = ",", header = TRUE, format = "%m/%d/%Y
> %H:%M:%S"), tz = "")
data.open data.high data.low data.close
2010-10-15 73.7 73.7 73.7 73.7
2010-10-15 73.8
Hi Rob:
Are you thinking of the digitize package?
HTH,
Dennis
On Fri, Oct 15, 2010 at 1:46 PM, Rob James wrote:
> Do I recall correctly that there is an R package that can take an image,
> and help one estimate the x/y coordinates? I can't find the package,
> thought it was an R-tool, but wou
Hi:
On Fri, Oct 15, 2010 at 4:29 AM, Anh Nguyen wrote:
> Thank you for the very helpful tips. Just one last question:
> - In the lattice method, how can I plot TIME vs OBSconcentration and TIME
> vs PREDconcentration in one graph (per individual)? You said "in lattice
> you would replace 'smooth
On Fri, Oct 15, 2010 at 9:56 PM, Megh Dal wrote:
> However I have noticed a strange thing: the placement of 'tz = ""' matters here:
>
>> head(read.zoo("f:/dat1.txt", sep = ",", header = TRUE, format = "%m/%d/%Y
>> %H:%M:%S"), tz = "")
Your tz argument has been passed as an argument of head. You want
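The general pitfall can be sketched with base R alone (no zoo needed; the vector and the round() call are hypothetical stand-ins): an argument placed on the outer call is silently absorbed by head()'s `...` instead of reaching the inner function.

```r
# 'digits' is meant for round(), not head(). head() silently absorbs
# unknown named arguments into '...', so the misplaced version rounds
# to whole numbers instead of two decimal places.
x <- c(3.14159, 2.71828, 1.41421)
right <- head(round(x, digits = 2))   # digits reaches round()
wrong <- head(round(x), digits = 2)   # digits swallowed by head()
right   # 3.14 2.72 1.41
wrong   # 3 3 1
```

The same thing happens with read.zoo: a tz placed outside the read.zoo() parentheses goes to head() and is ignored.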
Dear R People:
I'm trying to install R-2.12.0 from source on a Netbook with Windows XP.
I have installed the Rtools.exe (version 2.12)
However, when I enter "tar xvfz R-2.12.0.tar.gz"
I keep getting the message "cannot change ownership to uid 501, gid 20
invalid argument"
Has anyone else run ac
On Fri, 15-Oct-2010 at 07:41PM -0300, Kjetil Halvorsen wrote:
|> I downloaded the tarball for R-2-12.0, made
|> ./configure
|> make
|>
|> without problems.
|>
|> Then
|> make test
|> ...which have now been running for more than an hour, and seems to
|> have stalled at:
|> comparing 'reg-plot-lat
You could have posted an example of your data. You can use 'sub' to
substitute one set of characters for another in your data. There are
other ways of doing it if we had an example of your data.
On Fri, Oct 15, 2010 at 5:55 PM, Clint Bowman wrote:
> A data set I obtained has the hours running f
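A minimal sketch of the sub() idea, on hypothetical timestamps, assuming only hour 24 needs remapping (to 00:00 of the following day):

```r
# Hypothetical data in the 01-24 hour convention:
stamps <- c("2010-10-15 01:00", "2010-10-15 24:00")
# Rewrite hour 24 as 23, parse, then add the hour back:
is24   <- grepl(" 24:", stamps, fixed = TRUE)
fixed  <- sub(" 24:", " 23:", stamps, fixed = TRUE)
parsed <- as.POSIXct(strptime(fixed, "%Y-%m-%d %H:%M", tz = "GMT")) +
  ifelse(is24, 3600, 0)
format(parsed, "%Y-%m-%d %H:%M", tz = "GMT")
# "2010-10-15 01:00" "2010-10-16 00:00"
```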
Awesome! It worked. Thank you both for your help.
-joe
--
View this message in context:
http://r.789695.n4.nabble.com/Data-Parameter-extract-tp2996369p2997761.html
Sent from the R help mailing list archive at Nabble.com.
__
R-help@r-project.org mail
I downloaded the tarball for R-2-12.0, made
./configure
make
without problems.
Then
make test
...which has now been running for more than an hour, and seems to
have stalled at:
comparing 'reg-plot-latin1.ps' to './reg-plot-latin1.ps.save' ... OK
make[3]: Leaving directory `/home/kjetil/R/R-2.12.
On Fri, Oct 15, 2010 at 9:46 PM, Rob James wrote:
> Do I recall correctly that there is an R package that can take an image, and
> help one estimate the x/y coordinates? I can't find the package, thought it
> was an R-tool, but would appreciate any leads.
>
> Thanks,
I don't know an R package fo
A data set I obtained has the hours running from 01 through 24
rather than the conventional 00 through 23. My favorite, strptime,
balks at hour 24.
I thought it would be easy to correct but it must be too late on
Friday for my brain and caffeine isn't helping.
TIA for a hint,
Clint
--
Cli
On Fri, Oct 15, 2010 at 6:46 AM, David A. wrote:
>
> Thanks Dennis,
>
> I don't think it was a problem of not feeding in a function for rollapply(),
> because I was using mean() and my co.var() function in the FUN argument.
> The key part seems to be the transformation that zoo() does to the matr
Hi,
I would like to write a function that finds parameters of a log-normal
distribution with a 1-alpha CI of (x_lcl, x_ucl):
However, I don't know how to optimize for the two unknown parameters.
Here is my unsuccessful attempt to find a lognormal distribution with
a 90%CI of 1,20:
prior <- func
Andrei -
Looking inside the code for cut, it looks like you could retrieve
the breaks as follows:
getbreaks = function(x, nbreaks) {
    nb = nbreaks + 1
    dx = diff(rx <- range(x, na.rm = TRUE))
    seq.int(rx[1] - dx/1000, rx[2] + dx/1000, length.out = nb)
}
The dx/1000 is what makes cut()'s br
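A quick self-contained check of the helper (restated here) against cut() itself, on a made-up vector; the breaks recovered this way should bin the data the same way as cut(x, breaks = 4), though exact interior break positions can differ slightly across R versions.

```r
getbreaks <- function(x, nbreaks) {
  nb <- nbreaks + 1
  dx <- diff(rx <- range(x, na.rm = TRUE))
  seq.int(rx[1] - dx/1000, rx[2] + dx/1000, length.out = nb)
}
x <- c(1, 4, 6, 9)
brks <- getbreaks(x, 4)          # 5 break points -> 4 bins
table(cut(x, breaks = brks))     # same counts as cut(x, breaks = 4)
```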
On Fri, Oct 15, 2010 at 4:27 PM, Megh wrote:
>
> Thanks Gabor for pointing to my old version. However I got one more question
> why the argument tz="" is sitting there? As you are not passing any explicit
It would otherwise assume "Date" class.
> str(read.zoo(file="dal1.csv", header=TRUE, sep=",
Michael,
Let c_1 and c_2 be vectors representing contrasts. Then c_1 and c_2
are orthogonal if and only if the inner product is 0. In your example,
you have vectors (1,0,-1) and (0,1,-1). The inner product is 1, so
they are not orthogonal. It's impossible to have more orthogonal
contrasts than you
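The check is a one-liner in R; the second pair below applies it to contr.sum(3) directly to show the same inner product:

```r
c1 <- c(1, 0, -1)
c2 <- c(0, 1, -1)
sum(c1 * c2)          # 1 -> not orthogonal
# The columns produced by contr.sum are exactly these vectors:
s <- contr.sum(3)
sum(s[, 1] * s[, 2])  # also 1 -> contr.sum contrasts are not orthogonal
```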
Do I recall correctly that there is an R package that can take an image,
and help one estimate the x/y coordinates? I can't find the package,
thought it was an R-tool, but would appreciate any leads.
Thanks,
Rob
Tena koe Steven
cutData <- rbind(summary(Acut), summary(Bcut))
barplot(cutData, beside=TRUE)
should get you started. The challenge, as you identify, is to get the data
into the appropriate form and the simple approach I have used may not work for
your real data.
HTH
Peter Alspach
> --
Thanks Gabor for pointing to my old version. However I got one more question
why the argument tz="" is sitting there? As you are not passing any explicit
value for that, I am assuming it is redundant. Without any tz argument, I
got following:
head(read.zoo(file="f:/dat1.txt", header=T, sep=",",
On Fri, Oct 15, 2010 at 10:22 AM, Andrei Zorine wrote:
> Hello,
> My question is assuming I have cut()'ed my sample and look at the
> table() of it, how can I compute probabilities for the bins?
I actually don't know what you mean by this (my own ignorance probably).
> Do I have
> to parse table'
On Fri, Oct 15, 2010 at 3:22 PM, Megh Dal wrote:
> Hi Gabor, please see the attached files which is in text format. I have
> opened them on excel then, used clipboard to load them into R. Still really
> unclear what to do.
>
> Also can you please elaborate this term "index = list(1, 2), FUN =
>
On Fri, Oct 15, 2010 at 7:29 AM, Anh Nguyen wrote:
> Thank you for the very helpful tips. Just one last question:
> - In the lattice method, how can I plot TIME vs OBSconcentration and TIME vs
> PREDconcentration in one graph (per individual)? You said "in lattice you
> would replace 'smooth' by '
I've read a number of examples on doing a multiple bar plot, but can't seem
to grasp
how they work or how to get my data into the proper form.
I have two variable holding the same factor
The variables were created using a cut command, The following simulates that
A <- 1:100
B <- 1:100
A[30:60]
On Fri, 15 Oct 2010, Megh Dal wrote:
Hi Gabor, please see the attached files which is in text format. I have
opened them on excel then, used clipboard to load them into R. Still
really unclear what to do.
I've read both files using read.zoo():
R> z1 <- read.zoo("dat1.txt", sep = ",", header
I have compared "dat11" and "x" using str() function, however did not find
drastic difference:
> str(dat11)
‘zoo’ series from 2010-10-15 13:43:54 to 2010-10-15 13:49:51
Data: num [1:7, 1:4] 73.8 73.8 73.8 73.8 73.8 73.8 73.7 73.8 73.8 73.8 ...
- attr(*, "dimnames")=List of 2
..$ : chr [1:7]
Hello,
My question is assuming I have cut()'ed my sample and look at the
table() of it, how can I compute probabilities for the bins? Do I have
to parse table's names() to fetch bin endpoints to pass them to
p[distr-name] functions? i really don't want to input arguments to PDF
functions by hand (n
Thank you for the very helpful tips. Just one last question:
- In the lattice method, how can I plot TIME vs OBSconcentration and TIME vs
PREDconcentration in one graph (per individual)? You said "in lattice you
would replace 'smooth' by 'l' in the type = argument of xyplot()" that just
means now t
Hi R users,
I am trying to call openbugs from R. And I got the following error message:
~
model is syntactically correct
expected the collection operator c error pos 8 (error on line 1)
variable ww is not defined
Thanks Dennis,
I don't think it was a problem of not feeding in a function for rollapply(),
because I was using mean() and my co.var() function in the FUN argument.
The key part seems to be the transformation that zoo() does to the matrix. If I
do the same transformation to my original matrix,
Try this:
split(as.data.frame(DF), is.na(DF$x))
On Fri, Oct 15, 2010 at 9:45 AM, Jumlong Vongprasert wrote:
> Dear all
> I have data like this:
> x y
> [1,] 59.74889 3.1317081
> [2,] 38.77629 1.7102589
> [3,] NA 2.2312962
> [4,] 32.35268 1.3889621
> [5,] 74
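A self-contained sketch of the suggestion, with a small hypothetical data frame standing in for the posted data:

```r
# Split rows on whether x is missing; names of the result are "FALSE"/"TRUE".
DF <- data.frame(x = c(59.7, 38.8, NA, 32.4),
                 y = c(3.13, 1.71, 2.23, 1.39))
parts <- split(DF, is.na(DF$x))
parts[["FALSE"]]   # rows with x present
parts[["TRUE"]]    # rows with x missing
```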
I tried that too, it doesn't work because of the way I wrote the code.
Listing y as free or not giving it a limit makes the scale go from -0.5 to
0.5, which is useless. This is what my code looks like now (it's S-Plus
code, btw)-- I'll try reading up on lattices in R to see if I can come up
with so
public class my_convolve
{
public static void main(String[] args)
{
}
public static void convolve()
{
System.out.println("Hello");
}
}
library(rJava)
.jinit(classpath="C:/Documents and Settings/GV/workspace/Test/bin",
pa
Hi David,
More info
Thanks a lot
Christophe
##
library(Hmisc)
library(lattice)
library(fields)
library(gregmisc)
library(quantreg)
> str(sasdata03_a)
'data.frame': 109971 obs. of 6 variables:
$ jaar : Factor w/ 3 levels "2006","2007",..: 1 1 1 1 1 1 1 1 1 1 ...
$ Cat_F
Hi,
I am using R and tried to normalize the data within each sample group using
RMA. When I tried to import the all the normalized expression data as a
single text file and make a boxplot, it showed discrepancy among the sample
groups. I tried to scale them or re-normalize them again, so that it
Hi:
You need to give a function for rollapply() to apply :)
Here's my toy example:
d <- as.data.frame(matrix(rpois(30, 5), nrow = 10))
library(zoo)
d1 <- zoo(d) # uses row numbers as index
# rolling means of 3 in each subseries (columns)
> rollmean(d1, 3)
V1 V2 V3
2 3.
Hi Gabor, please see the attached files which is in text format. I have opened
them on excel then, used clipboard to load them into R. Still really unclear
what to do.
Also can you please elaborate this term "index = list(1, 2), FUN = function(d,
t) as.POSIXct(paste(d, t))" in your previous fil
On Fri, Oct 15, 2010 at 2:20 PM, Megh Dal wrote:
> Dear all, I have following 2 zoo objects. However when I try to merge those 2
> objects into one, nothing is coming as intended. Please see below the objects
> as well as the merged object:
>
>
>> dat11
> V2 V3 V4 V5
>
On Fri, 15 Oct 2010, Megh Dal wrote:
Dear all, I have following 2 zoo objects. However when I try to merge those 2
objects into one, nothing is coming as intended. Please see below the objects
as well as the merged object:
dat11
V2 V3 V4 V5
2010-10-15 13:43:54
Megh wrote:
>
> Dear all, I have following 2 zoo objects. However when I try to merge
> those 2 objects into one, nothing is coming as intended. Please see below
> the objects as well as the merged object:
>
>
>> merge(dat11, dat22)
> V2.dat11 V3.dat11 V4.dat11 V5.dat11
I have a program that creates a Png file using Rgooglemap with an extent
(lonmin,lonmax,latmin,latmax)
I also have a contour plot of the same location, same extent, same sized
(height/width) png file.
I'm looking for a way to make the contour semi transparent and overlay it on
the google map ( hyb
Dear all, I have following 2 zoo objects. However when I try to merge those 2
objects into one, nothing is coming as intended. Please see below the objects
as well as the merged object:
> dat11
V2 V3 V4 V5
2010-10-15 13:43:54 73.8 73.8 73.8 73.8
2010-10-15 13:44:15 7
?matplot
e.g.,
copy your data to the clipboard then
library(psych)
my.data <- read.clipboard()
my.data
Tenth Fifth Third
GG 112 152 168
EC 100 120 140
SQ 160 184 NA
SK 120 100 180
matplot(t(my.data),type="b")
Bill
At 10:27 AM -0700 10/15/10, barnhillec wrote:
barnhillec wrote:
>
> I'm trying to graph some simple music psychology data. Columns are musical
> intervals, rows are the initials of the subjects. Numbers are in beats per
> minute (this is the value at which they hear the melodic interval split
> into two streams). So here's my table:
>
>
Hi,
I am relatively new to R but not to graphing, which I used to do in Excel
and a few other environments on the job. I'm going back to school for a PhD
and am teaching myself R beforehand. So I hope this question is not
unacceptably ignorant but I have perused every entry level document I can
f
> -----Original Message-----
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Behalf Of Joshua Wiley
> Sent: Friday, October 15, 2010 12:23 AM
> To: Gregor
> Cc: r-help@r-project.org
> Subject: Re: [R] fast rowCumsums wanted for calculating the cdf
>
> Hi,
>
> You
On 15 Oct 2010, at 13:55, Berwin A Turlach wrote:
> G'day Michael,
>
Hi Berwin
Thanks for the reply
> On Fri, 15 Oct 2010 12:09:07 +0100
> Michael Hopkins wrote:
>
>> OK, my last question didn't get any replies so I am going to try and
>> ask a different way.
>>
>> When I generate contrast
Hi Dennis,
The first thing I did with my data was to explore it with 6 graphs
(wet-high, med, and solo-; dry-high, med, and solo-) and gave me very
interesting patterns: seed size in wet treatments is either negatively
correlated (high and medium densities) or flat (solo). But dry treatments
are a
Is there a way to estimate a nominal response model?
To be more specific let's say I want to calibrate:
\pi_{v}(\theta_j)=\frac{e^{\xi_{v}+\lambda_{v}\theta_j}}{\sum_{h=1}^m
e^{\xi_{h}+\lambda_{h}\theta_j}}
Where $\theta_j$ is a the dependent variable and I need to estimate
$\xi_{h}$ and $
Also look at the get function, it may be a bit more straight forward (and safer
if there is any risk of someone specifying 'rm(ls())' as a data frame name).
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message
Hi,
I've had to do something like that before. It seems to be a "feature" of nls
(in R, but not as I recall in Splus) that it accepts a list with vector
components as 'start' values, but flattens the result values to a single
vector.
I can't spend much time explaining, but here's a fragment of
Should you need to do it again, you may want to look at the relevel
function. I suppose that would meet the definition of some versions of
"on the fly" but once I have a model, rerunning with a different
factor leveling is generally pretty painless.
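A minimal illustration of relevel() on a throwaway factor:

```r
f <- factor(c("low", "mid", "high"))
levels(f)                      # alphabetical: "high" "low" "mid"
f2 <- relevel(f, ref = "mid")  # make "mid" the reference level
levels(f2)                     # "mid" "high" "low"
```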
--
David.
On Oct 15, 2010, at 9:09 AM,
And thank YOU for taking the time to express your gratitude. I'm sure
all those who regularly take the time to contribute to the list
appreciate the appreciation.
Andrew Miles
On Oct 15, 2010, at 9:49 AM, Jumlong Vongprasert wrote:
Dear R-help mailing list and software development team.
A
On Oct 15, 2010, at 9:21 AM, Öhagen Patrik wrote:
Dear List,
In each iteration of a simulation study, I would like to save the
p-value generated by "coxph". I fail to see how to address the p-value.
Do I have to calculate it myself from the Wald Test statistic?
No. And the most important
Thank you Henrique!! It works.
Thu
On 15/10/2010 16:53, Henrique Dallazuanna wrote:
coef(fm$modelStruct$varStruct, uncons = FALSE)
Try this:
coef(fm$modelStruct$varStruct, uncons = FALSE)
On Fri, Oct 15, 2010 at 11:42 AM, Hoai Thu Thai wrote:
> Dear R-Users,
>
> I have a question concerning extraction of parameter estimates of variance
> function from lme fit.
>
> To fit my simulated data, we use varConstPower ( constant pl
A) you hijacked another thread.
On Oct 15, 2010, at 9:50 AM, Jonas Josefsson wrote:
Hi!
I am trying to produce a graph which shows overlap in latitude for a
number of species.
I have a dataframe which looks as follows
species1,species2,species3,species4.
minlat
Dear R-Users,
I have a question concerning extraction of parameter estimates of
variance function from lme fit.
To fit my simulated data, we use varConstPower ( constant plus power
variance function).
fm<-lme(UPDRS~time,data=data.simula,random=~time,method="ML",weights=varConstPower(fixed=li
Hi,
I'm new to this mailing list so apologies if this is too basic. I have
confocal images 512x512 from which I have extracted x,y positions of the
coordiates of labelled cells exported from ImageJ as a.csv file. I also
have images that define an underlying pattern in the tissue defined as
ar
I am having hard time properly setting NetBeans to work with JRI libs
(http://rosuda.org/JRI/). Most of the instructions I have found so far
are written for Eclipse or Windows (or both).
I have set java.library.path variable in config: customize:VM
arguments field, by specifying
"-Djava.library.p
Hi!
I am trying to produce a graph which shows overlap in latitude for a
number of species.
I have a dataframe which looks as follows
species1,species2,species3,species4.
minlat 6147947,612352,627241,6112791
maxlat 7542842,723423,745329,7634921
I wan
Dear R-help mailing list and software development team.
After I have used R a few weeks, I was exposed to the best of the program.
In addition, the R-help mailing list is a great assistance to new users.
I do my job as I want and get great support from the R-help mailing list.
Thanks R-help mailing list.
T
Hi Gerrit,
Almost it, but I need to insert M[,i] as well as matrix(-1, nrow(M),
CN[i]) when CN[i] == 0.
I know this is not correct, but can something like the following be done?
HH <- c(0.88, 0.72, 0.89, 0.93, 1.23, 0.86, 0.98, 0.85, 1.23)
TT <- c(7.14, 7.14, 7.49, 8.14, 7.14, 7.32, 7.14, 7.14,
Henrik, there is an easily adaptable example in this thread:
http://r.789695.n4.nabble.com/coloring-leaves-in-a-hclust-or-dendrogram-plot
-tt795496.html#a795497
HTH. Bryan
*
Bryan Hanson
Professor of Chemistry & Biochemistry
DePauw University, Greencastle IN USA
On 10/15/10 9:05 AM,
Dear List,
In each iteration of a simulation study, I would like to save the p-value
generated by "coxph". I fail to see how to address the p-value. Do I have to
calculate it myself from the Wald Test statistic?
Cheers, Paddy
On 10/15/2010 06:17 AM, Ying Ye wrote:
> Hi!
>
> I am a new R user and have no clue of this error (see below) while using
> edgeR package:
edgeR is a Bioconductor package, so please subscribe to the Bioconductor
list and ask there.
http://bioconductor.org/help/mailing-list/
include the output of
Hello again John,
I was going to suggest that you just use qbinom to generate the
expected number of extinctions. For example, for the family with 80
spp the central 95% expectation is:
qbinom(c(0.025, 0.975), 80, 0.0748)
which gives 2 - 11 spp.
If you wanted to do look across a large number of
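The interval quoted above can be cross-checked by simulation, using the numbers from the thread (80 species, p = 0.0748):

```r
# Exact central 95% interval for the number of species at risk:
qbinom(c(0.025, 0.975), size = 80, prob = 0.0748)   # 2 and 11 species
# Simulation cross-check:
set.seed(1)
sims <- rbinom(1e5, size = 80, prob = 0.0748)
quantile(sims, c(0.025, 0.975))
```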
Hi!
I am a new R user and have no clue of this error (see below) while
using edgeR package:
> Y <- clade_reads
> y <- Y[,c(g1,g2)]
> grouping <- c( rep(1,length(g1)), rep(2,length(g2)) )
> size <- apply(y, 2, sum)
> d <- DGEList(data = y, group = grouping, lib.size = size)
Error in DGEList(da
..by some (extensive) trial and error reordering the contrast matrix and the
reference level
i figured it out myself -
for anyone who might find this helpful searching for a similar contrast in
the future:
this should be the right one:
c2<-rbind("fac2-effect in A"=c(0,1,0,0,0,0,0,0),
Hi,
This would probably deserve some abstraction, we had C++ versions of
apply in our TODO list for some time, but here is a shot:
require( Rcpp )
require( inline )
f.Rcpp <- cxxfunction( signature( x = "matrix" ), '
NumericMatrix input( x ) ;
NumericMatrix output = clone( input )
Hi
r-help-boun...@r-project.org wrote on 15.10.2010 15:00:46:
> you can do the following:
>
> mat <- cbind(x = runif(15, 50, 70), y = rnorm(15, 2))
> mat[sample(15, 2), "x"] <- NA
>
> na.x <- is.na(mat[, 1])
> mat[na.x, ]
> mat[!na.x, ]
Or if you have missing data in several columns and you
Hi,
I have performed a clustering of a matrix and plotted the result with
pltree. See code below. I want to color the labels of the leafs
individually. For example I want the label name "Node 2" to be plotted in
red. How do I do this?
Sincerely
Henrik
library(cluster)
D <- matrix(nr=4,
Try this:
> a <- read.table(textConnection(" x y
+ 59.74889 3.1317081
+ 38.77629 1.7102589
+       NA 2.2312962
+ 32.35268 1.3889621
+ 74.01394 1.5361227
+ 34.82584 1.1665412
+ 42.72262 2.7870875
+ 70.54999 3.3917257
+ 59.37573 2.6763249
+ 68.87422 1.96977
you can do the following:
mat <- cbind(x = runif(15, 50, 70), y = rnorm(15, 2))
mat[sample(15, 2), "x"] <- NA
na.x <- is.na(mat[, 1])
mat[na.x, ]
mat[!na.x, ]
I hope it helps.
Best,
Dimitris
On 10/15/2010 2:45 PM, Jumlong Vongprasert wrote:
Dear all
I have data like this:
x
G'day Michael,
On Fri, 15 Oct 2010 12:09:07 +0100
Michael Hopkins wrote:
> OK, my last question didn't get any replies so I am going to try and
> ask a different way.
>
> When I generate contrasts with contr.sum() for a 3 level categorical
> variable I get the 2 orthogonal contrasts:
>
> > con
Dear all
I have data like this:
x y
[1,] 59.74889 3.1317081
[2,] 38.77629 1.7102589
[3,] NA 2.2312962
[4,] 32.35268 1.3889621
[5,] 74.01394 1.5361227
[6,] 34.82584 1.1665412
[7,] 42.72262 2.7870875
[8,] 70.54999 3.3917257
[9,] 59.37573 2.67632
Although I know there is another message in this thread I am replying
to this message to be able to include the whole discussion to this
point.
Gregor mentioned the possibility of extending the compiled code for
cumsum so that it would handle the matrix case. The work by Dirk
Eddelbuettel and Rom
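For reference, a plain-R baseline of the operation under discussion (row-wise cumulative sums), handy for validating any compiled version; the function name rowCumsums here is just illustrative:

```r
# Pure-R row-wise cumulative sum: apply() walks the rows and returns
# them as columns, so the result needs a final t().
rowCumsums <- function(m) t(apply(m, 1, cumsum))
m <- matrix(1:6, nrow = 2)
rowCumsums(m)
#      [,1] [,2] [,3]
# [1,]    1    4    9
# [2,]    2    6   12
```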
Hi John,
I haven't read that particular paper but in answer to your question...
> So if i do this for all the families it will be the same as doing the
> simulation experiment
> outline in the method above?
Yes :)
Michael
On 15 October 2010 23:18, John Haart wrote:
> Hi Michael,
>
> Thanks
On Fri, Oct 15, 2010 at 6:14 AM, Chris Howden
wrote:
> Thanks for the advice Gabor,
>
> I was indeed not starting and finishing with sqldf(). Which was why it was
> not working for me. Please forgive a blatantly obvious mistake.
>
>
> I have tried what you suggested and unfortunately R is still havi
Hi,
I a trying to compute scores for a new observation based on previously
computed PCA by PCAgrid() function in the pcaPP package. My data has
more variables than observations.
Here is an imaginary data set to show the case:
> n.samples<-30
> n.bins<-1000
> x.sim<-rep(0,n.bins)
> V.sim<-diag(n.bi
Looking at the source for nlrob, it looks like it saves the coefficients
from the results of running an nls and then passes those coefficients back
into the next nls request. The issue that it's running into is that nls
returns the coefficients as upper, LOGEC501, LOGEC502, and LOGEC503, rather
tha
Hi Michael,
Thanks for this - the reason i am following this approach is that it appeared
in a paper i was reading, and i thought it was a interesting angle to take
The paper is
Vamosi & Wilson, 2008. Nonrandom extinction leads to elevated loss of
angiosperm evolutionary history. Ecology Let
Hi, Doug,
maybe
HH <- c(0.88, 0.72, 0.89, 0.93, 1.23, 0.86, 0.98, 0.85, 1.23)
TT <- c(7.14, 7.14, 7.49, 8.14, 7.14, 7.32, 7.14, 7.14, 7.14)
columnnumbers <- c(0, 0, 0, 3, 0, 0, 0, 2, 0)
TMP <- lapply( seq( columnnumbers),
function( i, CN, M) {
if( CN[i] == 0) as.
Hi John,
The word "species" attracted my attention :)
Like Dennis, I'm not sure I understand your idea properly. In
particular, I don't see what you need the simulation for.
If family F has Fn species, your random expectation is that p * Fn of
them will be at risk (p = 0.0748). The variance on t
Hi Denis and list
Thanks for this , and sorry for not providing enough information
First let me put the study into a bit more context : -
I know the number of species at risk in each family, what i am asking is "Is
risk random according to family or do certain families have a disproportionate
OK, my last question didn't get any replies so I am going to try and ask a
different way.
When I generate contrasts with contr.sum() for a 3 level categorical variable I
get the 2 orthogonal contrasts:
> contr.sum( c(1,2,3) )
  [,1] [,2]
1    1    0
2    0    1
3   -1   -1
This provides the
Sometimes such a message appears when you try to open an .RData file in an
environment where the packages used when the file was created are not
installed. Then it is possible just to install the necessary packages.
Without the whole story it is impossible to say what the real cause of that
error is.
Regards
Petr
r-
On Oct 15, 2010, at 12:37 , Philipp Pagel wrote:
> On Fri, Oct 15, 2010 at 09:57:21AM +0200, Muteba Mwamba, John wrote:
>
>> "FATAL ERROR: unable to restore saved data in .RDATA"
>
> Without more information it's hard to know what exactly went wrong.
>
> Anyway, the message most likely means t
On Fri, Oct 15, 2010 at 09:57:21AM +0200, Muteba Mwamba, John wrote:
> "FATAL ERROR: unable to restore saved data in .RDATA"
Without more information it's hard to know what exactly went wrong.
Anyway, the message most likely means that the .RData file got
corrupted. Deleting it should solve the
Hi:
I don't believe you've provided quite enough information just yet...
On Fri, Oct 15, 2010 at 2:22 AM, John Haart wrote:
> Dear List,
>
> I am doing some simulation in R and need basic help!
>
> I have a list of animal families for which i know the number of species in
> each family.
>
> I a
Hi
r-help-boun...@r-project.org wrote on 14.10.2010 10:34:12:
>
> Thanks Dennis.
>
>
>
> One more thing if you don't mind. How do I abstract the individual H and T
> "arrays" from f(m,o,l) so I can combine them with a date/time array and
> write to a file?
>
Try to look at ?merge fu
Thanks for the advice Gabor,
I was indeed not starting and finishing with sqldf(). Which was why it was
not working for me. Please forgive a blatantly obvious mistake.
I have tried what you suggested and unfortunately R is still having problems
doing the join. The problem seems to be one of memory
I've rolled up R-2.12.0.tar.gz a short while ago. This is a development
release which contains a number of new features.
Also, a number of mostly minor bugs have been fixed. See the full list
of changes below.
You can get it from
http://cran.r-project.org/src/base/R-2/R-2.12.0.tar.gz
or wait fo
Barry, Gerrit,
That was what I am after but unfortunately only the starting point. I am
now trying to amend a function that inserts the R matrices into a dataset in
the correct places:
i.e.
H <- c(0.88, 0.72, 0.89, 0.93, 1.23, 0.86, 0.98, 0.85, 1.23)
T <- c(7.14, 7.14, 7.49, 8.14, 7.14, 7.32,
Hi:
To get the plots precisely as you have given them in your png file, you're
most likely going to have to use base graphics, especially if you want a
separate legend in each panel. Packages ggplot2 and lattice have more
structured ways of constructing such graphs, so you give up a bit of freedom
hello,
i was shortly asking the list for help with some interaction contrasts (see
below) for which
i had to change the reference level of the model "on the fly" (i read a post
that this is possible in
multcomp).
if someone has a clue how this is coded in multcomp; glht() - please point
me ther
Have a look at the package smoothmest.
Christian
On Fri, 15 Oct 2010, Ondrej Vozar wrote:
Dear colleagues,
I would like to ask you how to estimate
the biweight M-estimator of Tukey with known
scale, for example.
I know how to estimate biweight M-estimator
if estimated scale is used using function
rml