Hi:
Here are two possibilities:
df1 <- data.frame(rows=c("A","B","C", "B", "C", "A"),
columns=c("21_2", "22_2", "23_2", "21_2", "22_2", "23_2"),
values=c(3.3, 2.5, 67.2, 44.3, 53, 66))
with(df1, xtabs(values ~ rows + columns))
    columns
rows   21_2 22_2 23_2
   A    3.3  0.0 66.0
   B   44.3  2.5  0.0
   C    0.0 53.0 67.2
Hi:
If you have R 2.11.x or later, you can use the formula interface of aggregate():
aggregate(Correct ~ Subject + Group, data = ALLDATA, FUN = function(x)
sum(x == 'C'))
A variety of contributed packages (plyr, data.table, doBy, sqldf and
remix, among others) have similar capabilities.
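A minimal reproducible sketch of the aggregate() call above (ALLDATA and its columns are not shown in the thread, so the data here is a made-up stand-in):

```r
# Hypothetical data resembling the ALLDATA described in the question
ALLDATA <- data.frame(
  Subject = rep(c("S1", "S2"), each = 4),
  Group   = rep(c("G1", "G2"), times = 4),
  Correct = c("C", "I", "C", "C", "I", "C", "I", "C")
)
# Count "C" responses per Subject/Group combination (requires R >= 2.11.0)
res <- aggregate(Correct ~ Subject + Group, data = ALLDATA,
                 FUN = function(x) sum(x == "C"))
res
```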
If you wa
I have the following script for generating a dataset. It works like a champ
except for a couple of things.
1. I need the variables "itbs" and "map" to be negatively correlated with the
binomial variable "lunch" (around -0.21 and -0.24, respectively). The binomial
variable "lunch" needs to
Alice Wines wrote:
>
> Hello all,
>
> I have a quandry I have been scratching my head about for a
> while. I've searched the manual and the web and have not been able to
> find an acceptable result, so I am hoping for some help.
>
> I have two data frames and I want to index into the
Dear All,
For the function rmvnorm() in the mvtnorm package (not splus2R): if I
generate 2 bivariate normal samples as follows:
> rmvnorm(2,mean=rep(0,2),sigma=diag(2))
[,1] [,2]
[1,] 2.0749459 1.4932752
[2,] -0.9886333 0.3832266
Where is the first sample? Is it stored in the
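For reference, rmvnorm() returns an n x d matrix in which each row is one sample; a quick check (sketch):

```r
library(mvtnorm)   # provides rmvnorm()
set.seed(1)
x <- rmvnorm(2, mean = rep(0, 2), sigma = diag(2))
dim(x)   # 2 rows (samples) by 2 columns (dimensions)
x[1, ]   # the first bivariate sample is the first row
```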
Oh, I did not see this post and I just saw your message in my blog.
Anyway, here is a solution for other people's future reference:
http://yihui.name/en/2011/04/produce-authentic-math-formulas-in-r-graphics/
Regards,
Yihui
--
Yihui Xie
Phone: 515-294-2465 Web: http://yihui.name
Department of Stat
On Apr 30, 2011, at 9:36 PM, Jeroen Ooms wrote:
I have a dependent variable which is very peaked and has heavy tails,
something I haven't encountered before. (histogram:
http://postimage.org/image/2sw9bn8pw/). What could be an appropriate
family or transformation to regress on this?
Hi, all
I would like to convert xls files to xlsx files with R commands in R console
instead of saving xls files as xlsx files after opening xls files.
Please show me how.
Thanks.
Wonjae
Hello all,
I have a quandry I have been scratching my head about for a
while. I've searched the manual and the web and have not been able to
find an acceptable result, so I am hoping for some help.
I have two data frames and I want to index into the first using
the second, and replace t
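A common pattern for this kind of task uses match() to line up rows; the frames and the replacement below are illustrative only, since the message is truncated before the details:

```r
# Hypothetical frames: update values in df1 using a lookup table df2
df1 <- data.frame(id = c("a", "b", "c", "d"), value = c(1, 2, 3, 4))
df2 <- data.frame(id = c("b", "d"), value = c(20, 40))
# Find where df2's ids occur in df1 and replace the matching values
idx <- match(df2$id, df1$id)
df1$value[idx] <- df2$value
df1
```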
I have a dependent variable which is very peaked and has heavy tails,
something I haven't encountered before. (histogram:
http://postimage.org/image/2sw9bn8pw/). What could be an appropriate family
or transformation to regress on this?
On 05/01/2011 05:28 AM, Kevin Burnham wrote:
Hi all,
I have a long data file generated from a minimal pair test that I gave to
learners of Arabic before and after a phonetic training regime. For each of
thirty some subjects there are 800 rows of data, from each of 400 items at
pre and posttest.
Hi all,
plotrix 3.2 has arrived. The reason for this announcement is that there
have been a couple of major rewrites.
The barNest family of functions has had an overhaul that began as a fix
for an apparently trivial bug that caused problems with empty
subcategories. So far, testing has not fo
On Sun, May 1, 2011 at 4:49 AM, David Winsemius wrote:
>
> On Apr 30, 2011, at 10:44 AM, Jabba wrote:
>
>> Dear useRs,
>>
>> I was asked to produce a survival curve like this:
>>
>> http://www.palug.net/Members/jabba/immaginetta.png/view
>>
>> with the cardinality of the riskset at the bottom.
>
>
To the first two lines of your solution:
df<-data.frame(id=c(1:20),name=c('a','b','b','c','a','d','b','e',
'd','d','c','a','b','a','a','b','f','b','c','g'))
freq <- ave(rep(1, times=nrow(df)), df$name, FUN=sum)
I would add:
df[ sort.list(freq), ]
Since you did not provide a description of your data (e.g., at least
'str(ALLDATA)') so that we know its structure, I will take a guess:
tapply(ALLDATA$Correct, list(ALLDATA$Subject, ALLDATA$Time),
function(x)sum(x=="C"))
On Sat, Apr 30, 2011 at 3:28 PM, Kevin Burnham wrote:
> Hi all,
>
> I have a l
Hi all,
I have a long data file generated from a minimal pair test that I gave to
learners of Arabic before and after a phonetic training regime. For each of
thirty some subjects there are 800 rows of data, from each of 400 items at
pre and posttest. For each item the subject got correct, there
Hello,
when using Sys.getenv() during the startup phase (.First or .Rprofile)
to get the environment variables COLUMNS and HOST, I get empty strings.
After the startup is done, when asking via Sys.getenv()
by hand, COLUMNS is set (but HOST is not, even "hostname" on the shell gives me
a correct answe
On Sat, 30 Apr 2011 10:53:00 -0700 (PDT)
Berend Hasselman wrote:
>
> Stephen P Molnar wrote:
> >
> > Is there an R package for the pseudoinversion of non-square matrices,
> > similar to the pinv MATLAB function?
> >
>
> ginv in package MASS
> pseudoinverse in package corpcor
>
> Berend
Hi,
qqnorm basically plots your actual sample values against what the
values would be (approximately) if they were from a normal
distribution. qqline() adds a line through the 1st and 3rd quartiles.
So roughly speaking, if your QQ plot forms a straight line
(particularly the one drawn by qqline)
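As a quick illustration of the idea (made-up data; plot.it = FALSE is used here only so the quantile pairs can be inspected numerically):

```r
set.seed(42)
w <- rnorm(200, mean = 10, sd = 2)   # synthetic "water content" values
qq <- qqnorm(w, plot.it = FALSE)     # theoretical vs. sample quantiles
# For data that really is normal, the points lie close to a straight
# line, so the correlation between the two sets of quantiles is near 1
cor(qq$x, qq$y)
```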
Stephen P Molnar wrote:
>
> Is there an R package for the pseudoinversion of non-square matrices,
> similar to the pinv MATLAB function?
>
ginv in package MASS
pseudoinverse in package corpcor
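For reference, a small sketch with MASS::ginv() (analogous to MATLAB's pinv), checking the defining Moore-Penrose property:

```r
library(MASS)                      # ships with R as a recommended package
A <- matrix(1:6, nrow = 2)         # a 2 x 3 (non-square) matrix
Aplus <- ginv(A)                   # Moore-Penrose pseudoinverse, 3 x 2
# Defining property: A %*% Aplus %*% A recovers A (up to rounding)
max(abs(A %*% Aplus %*% A - A))    # numerically ~0
```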
Berend
Hi all,
I am trying to test whether the distribution of my samples is normal with a QQ
plot.
I have values of water content in clays from a few hundred samples. Is the
code :
qqnorm(w) #w being water content
qqline(w)
sufficient?
How do I know when I get the plo
df<-data.frame(id=c(1:20),name=c('a','b','b','c','a','d','b','e','d','d','c','a','b','a','a','b','f','b','c','g'))
freq <- ave(rep(1, times=nrow(df)), df$name, FUN=sum)
rowSums(table(df$name,freq))
Is there an R package for the pseudoinversion of non-square matrices, similar
to the pinv MATLAB function?
Thanks in advance.
--
Stephen P. Molnar, Ph.D.            Life is a fuzzy set
Foundation for Chemistry            Stochastic and multivariate
http://www.FoundationFo
I have a logistic regression model I'm trying to do k-fold cross validation
on.
The number of observations is approximately 550 and an event rate of about
30%
Does anyone have a recommendation for a B value to use for this data set?
On Apr 30, 2011, at 10:44 AM, Jabba wrote:
Dear useRs,
I was asked to produce a survival curve like this:
http://www.palug.net/Members/jabba/immaginetta.png/view
with the cardinality of the riskset at the bottom.
The 2nd sentence of the help page for survival::survplot says that it is an
inbuil
Bare test code: my simple Java test class source and R test code follow:

public class RJavTest {
    public static void main(String[] args) {
        RJavTest rJavTest = new RJavTest();
    }
    public final static String conStg = "testString";
    public final static double con0dbl = 1001;
    public final static d
Dear list,
I made a logistic regression model (MyModel) using lrm and penalization
by pentrace for data of 104 patients, which consists of 5 explanatory
variables and one binary outcome (poor/good). Then, I found bootcov and
robcov function in rms package for calculation of confidence range of
coe
Dear useRs,
I was asked to produce a survival curve like this:
http://www.palug.net/Members/jabba/immaginetta.png/view
with the cardinality of the riskset at the bottom.
I do not like doing it, because it doesn't add any valuable information
and because it doesn't discriminate between died and
Hi all,
I am a C++/C# programmer who is new to R. I would like to use something like
"namespace" to organize my functions without creating a package. How can I do
this? Thanks!
xiagao1982
2011-04-30
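One workaround (an assumption on my part, not from the message above) is to keep related functions in an environment and call them with $, which gives namespace-like qualification without building a package:

```r
# A hypothetical "namespace" holding string utilities
strutils <- new.env()
strutils$trim  <- function(x) gsub("^[[:space:]]+|[[:space:]]+$", "", x)
strutils$shout <- function(x) toupper(x)
# Qualified calls, similar in spirit to C++'s ns::fun()
strutils$trim("  hello  ")
strutils$shout("hello")
```

Functions defined this way do not pollute the global workspace, and ls(strutils) lists the "namespace" contents.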
On Apr 30, 2011, at 9:48 AM, Jun Shen wrote:
> Dear David/Dennis,
>
> Thanks. I have 'mvtnorm' and 'multicomp' installed on R 2.10.1.
> After I installed R 2.13.0, I copied the whole library from R 2.10.1
> to R 2.13.0. That should do it? Then when I tried to load 'nparcomp'
> in 2.13.0, I
Dear David/Dennis,
Thanks. I have 'mvtnorm' and 'multicomp' installed on R 2.10.1. After I
installed R 2.13.0, I copied the whole library from R 2.10.1 to R 2.13.0.
That should do it? Then when I tried to load 'nparcomp' in 2.13.0, I got the
error saying "Error: package 'mvtnorm' is not installed
On Apr 30, 2011, at 7:06 AM, Patrick Hausmann wrote:
Dear list,
I would like to do some calculation using different grouping
variables. My 'df' looks like this:
# Some data
set.seed(345)
id <- seq(200,400, by=10)
ids <- sample(substr(id,1,1))
group1 <- rep(1:3, each=7)
group2 <- rep(1:2, c
I am using the np package for kernel modal regression. I am interested in
getting conditional mode estimates. The fitted() function extracts
conditional density estimates, but I am not interested in that.
conmode gives a vector of type factor (or ordered factor) containing
the conditional mode at eac
Dear user,
I have a monthly time series data (length 144), as ts (time series)
object in R. I would like to
1-extract vector of let say January data (a vector of length 12),
2-extract the monthly means, sums (a vector of length 12).
Many thanks in advance
Yours,
Hamid
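One way to do both tasks is with cycle(), which returns the month index (1-12) of each observation; this sketch uses simulated data in place of the poster's series:

```r
# A hypothetical monthly series of length 144 (12 years)
set.seed(1)
x <- ts(rnorm(144), start = c(2000, 1), frequency = 12)
jan <- x[cycle(x) == 1]                      # all January values: length 12
monthly_means <- tapply(x, cycle(x), mean)   # 12 per-month means
monthly_sums  <- tapply(x, cycle(x), sum)    # 12 per-month sums
```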
On Apr 29, 2011, at 7:17 PM, Jun Shen wrote:
Hi, Dennis,
Thanks for the reply. I tried to upgrade to R 2.13.0. Then when I
tried to
load the library(nparcomp), I got an error
Error: package 'mvtnorm' is not installed for 'arch=i386'
What does that mean? Thanks.
You can see from Dennis'
Dear list,
I would like to do some calculation using different grouping variables.
My 'df' looks like this:
# Some data
set.seed(345)
id <- seq(200,400, by=10)
ids <- sample(substr(id,1,1))
group1 <- rep(1:3, each=7)
group2 <- rep(1:2, c(10,11))
group3 <- rep(1:4, c(5,5,5,6))
df <- data.frame(
On Fri, Apr 29, 2011 at 11:17:58PM -0700, adigs wrote:
> Apologies for what's probably quite simple, but I'm having some problems with
> sorting a data frame by the number of occurences of each level of a factor.
>
> df<-data.frame(id=c(1:20),name=c('a','b','b','c','a','d','b','e','d','d','c','a',
Dear Duncan - many thanks for your reply/solution, it works beautifully now.
Best wishes,
Tal
Contact Details:
---------------------------------------
Contact me: tal.gal...@gmail.com | 972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (
Apologies for what's probably quite simple, but I'm having some problems with
sorting a data frame by the number of occurrences of each level of a factor.
df<-data.frame(id=c(1:20),name=c('a','b','b','c','a','d','b','e','d','d','c','a','b','a','a','b','f','b','c','g'))
I want to sort the dataframe
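One possible approach, along the lines used in the replies elsewhere in this thread: compute each row's name frequency with ave() and order on it:

```r
df <- data.frame(id = 1:20,
                 name = c('a','b','b','c','a','d','b','e','d','d',
                          'c','a','b','a','a','b','f','b','c','g'))
# Per-row count of how often that row's name occurs in the whole frame
freq <- ave(rep(1, nrow(df)), df$name, FUN = sum)
# Most frequent names first; use order(freq) instead for ascending order
df_sorted <- df[order(-freq), ]
head(df_sorted)
```

Here 'b' occurs 6 times and 'a' 5 times, so the 'b' rows come first.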
Hi,
Anybody knows how I can go about solving a linear least square problem with
bounds? In Matlab I can use lsqlin but I haven't been able to get anything
in R.
Appreciate your help!
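One option in base R (a suggestion of mine, not from the thread) is to minimize the residual sum of squares with optim()'s L-BFGS-B method, which supports box constraints; contributed packages such as nnls and bvls address the same problem more directly:

```r
set.seed(1)
A <- matrix(rnorm(20), nrow = 10)          # 10 x 2 design matrix
b <- A %*% c(0.5, 2) + rnorm(10, sd = 0.1) # true coefs 0.5 and 2
rss <- function(x) sum((A %*% x - b)^2)    # objective: ||Ax - b||^2
fit <- optim(par = c(0, 0), fn = rss, method = "L-BFGS-B",
             lower = c(0, 0), upper = c(1, 1))  # box constraints
fit$par   # estimates, forced into [0, 1]^2
```

Because the second true coefficient (2) lies outside the upper bound, its estimate ends up pinned near 1.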
Great, thanks! Still need to figure out all these functions... ;)
R-help@r-project.org