Hi,
Input format: Excel file (XLS)
Column 1: Gene ID (alphanumeric)
Columns 2-10: numeric data.
inData <- read.xls(fileName)        # read.xls() is from the gdata package
geneLabel <- inData[, 1]            # column 1 stored in geneLabel
tempData <- inData[, 2:10]
expValues <- data.matrix(tempData)  # convert the data frame into a matrix
Hi,
It works for me too. When I had reshape and reshape2 both loaded, I had the
same problem.
library(reshape2)
tmp <- melt(smiths,
    id.vars = 1:2,
    measure.vars = c("age", "weight", "height"),
    variable.name = "myvars",
    value.name = "myvals"
)
names(tmp)
[1] "subject" "time"    "myvars"  "myvals"
Hello,
Try this:
You can use read.table with sep="," if the file is comma separated.
test1 <- read.table(text="
0.141 0.242 0.342
0.224 0.342 0.334
0.652 0.682 0.182
", sep="", header=FALSE)
#Read data
test1
#     V1    V2    V3
#1 0.141 0.242 0.342
#2 0.224 0.342 0.334
#3 0.652 0.682 0.182
Hello,
Try using the vector geneLabel. You don't need to convert it to a list.
rownames(expValues) <- geneLabel   # if this doesn't work, try
rownames(expValues) <- as.vector(geneLabel)
But I really don't see a problem with the first. Have you tried it?
Hope this helps,
Rui Barradas
On 22-07-2012 03:04,
How to adapt this piece of code but for: - gamma distribution - 3 parameter
log normal
More specifically, where can I find the specification of the parameter
(lmom) for pelgam() and pelln3()?
The lmom package help just gives: pelgam(lmom), pelln3(lmom), where lmom is a
numeric vector containing the
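If it helps, in the lmom package the pel* functions all take the vector of sample L-moments as returned by samlmu(); the sketch below follows the lmom documentation, but the simulated data and parameter values are illustrative only:

```r
library(lmom)

set.seed(1)
x <- rgamma(200, shape = 2, scale = 3)

# samlmu() returns the sample L-moments: l_1, l_2, t_3, t_4
lmoments <- samlmu(x)

# pelgam() uses l_1 and l_2 to estimate the gamma parameters (alpha, beta)
pelgam(lmoments)

# pelln3() additionally uses t_3 for the 3-parameter lognormal (zeta, mu, sigma)
pelln3(lmoments)
```

So "lmom" in the usage line is just the output of samlmu() (or any numeric vector of L-moments in that order).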
Hi,
Try this:
test1 <- read.table(text="
Fkh2 0.141 0.242 0.342
Swi5 0.224 0.342 0.334
Sic1 0.652 0.682 0.182
", sep="", header=FALSE)
test1
#    V1    V2    V3    V4
#1 Fkh2 0.141 0.242 0.342
#2 Swi5 0.224 0.342 0.334
#3 Sic1 0.652 0.682 0.182
geneLabel <- test1[, 1]
hi Greg, David, and Tal,
Thank you very much for the information.
I found this in SPSS 17.0 missing value manual:
EM Method
This method assumes a distribution for the partially missing data and bases
inferences
on the likelihood under that distribution. Each iteration consists of an E step
Hi,
I need a little help! I must create a dummy variable to insert as an external
regressor in the variance equation of a GARCH model; this dummy refers
to the sign of the returns of an asset, so it has to be 1 when the returns
are negative and 0 when they are positive, and in my model the
Hello,
See if this is it.
returns <- rnorm(10)
dummy <- ifelse(returns < 0, -1, 0)
Hope this helps
Rui Barradas
On 22-07-2012 08:53, saraberta wrote:
Hi,
I need a little help! I must create a dummy variable to insert as an external
regressor in the variance equation of a GARCH model; this
Check and see if you have reshape loaded as well. I had a somewhat similar
problem (R2.13 ?) and realised that reshape was masking reshape2
John Kane
Kingston ON Canada
-Original Message-
From: dwarnol...@suddenlink.net
Sent: Sat, 21 Jul 2012 16:06:11 -0700 (PDT)
To:
Sara:
Are you sure?? I am wholly unfamiliar with garch, but in general, R
does not need dummy variables at all. You make your covariate a factor
with appropriate contrasts and then write an appropriate model
formula, in this case, with an interaction with your series.
I could be wrong in this
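As an illustration of Bert's point (a sketch only; the variable names here are made up, not Sara's), a sign indicator can enter a model formula as a factor, and R expands it into the 0/1 column itself:

```r
set.seed(1)
returns <- rnorm(10)

# hypothetical covariate: a factor marking negative returns
neg <- factor(returns < 0, levels = c(FALSE, TRUE))

# model.matrix() shows the 0/1 dummy column R would build from the factor
head(model.matrix(~ neg))
```

Whether the GARCH fitting function in question accepts a factor directly is another matter; some external-regressor interfaces do require a plain numeric column, in which case as.numeric(returns < 0) is the way to go.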
Dear Henrik,
As you discovered, entering the covariate age additively into the
between-subject model doesn't prevent Anova() from reporting tests for the
interactions between age and the within-subjects factors. I'm not sure why you
would want to do so, but you could simply ignore these tests.
Hi,
I was using Gviz package to create a boxplot. I understand that Gviz uses
panel.bwplot to create the boxplot.
Is there any way that I can remove the dashed line surrounding each pair of
boxplots?
Here is some sample code:
#
library(Gviz)
thisdata <-
On Jul 22, 2012, at 14:45 , Rui Barradas wrote:
Hello,
See if this is it.
returns <- rnorm(10)
dummy <- ifelse(returns < 0, -1, 0)
Sara had 1 if results are negative, so lose the minus. It is easier just to say
dummy <- as.numeric(returns < 0)
-pd
Hope this helps
Rui Barradas
Dear friends,
Many thanks to Jim (Holtman) and David (Carlson) for their quick
responses: Q1 is now solved. There are two almost equivalent ways for
doing this. They follow:
library(lattice)
z <- rbind(cbind(z, 0), cbind(z, 20), cbind(z, 40))
z <- cbind(z, rnorm(n = nrow(z)))
z <- as.data.frame(z)
reorder() is probably the best way to order the levels in a vector
without manually specifying the order. But reorder() orders by default
in an increasing order: The levels are ordered such that the values
returned by ‘FUN’ are in increasing order.
Is there a way to do what reorder() does, but
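One common trick (the same minus-sign idea suggested downthread) is to negate the value inside FUN, so the levels come out in decreasing order; a minimal sketch with a built-in dataset:

```r
# order spray levels by decreasing mean count by negating inside FUN
sprays <- with(InsectSprays, reorder(spray, count, function(x) -mean(x)))

# levels are now sorted from highest to lowest mean count
levels(sprays)
```

The underlying data are untouched; only the ordering of the factor levels changes, which is what plotting functions pick up.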
Hello!
I am interested in creating contingency tables, namely one that would
let me find the frequency and proportion of patients with specific
risk factors (dyslipidemia, diabetes, obesity, smoking, hypertension).
There are 3 dimensions I would like to divide the population into:
sex, family
Sverre,
have you tried putting a minus sign (-) in front of the variable by which you
order the other?
weidong
On Sun, Jul 22, 2012 at 12:27 PM, Sverre Stausland
john...@fas.harvard.edu wrote:
reorder() is probably the best way to order the levels in a vector
without manually specifying the order. But
Best to ask questions about Bioconductor packages on the bioconductor mailing
list.
---
Jeff Newmiller
I have a value
a=300
observation (x) = sample(1:50)
How to find a p-value from this? I need to show that a is different from
mean(x).
Thanks
--
-
Mary Kindall
Yorktown Heights, NY
USA
Dear John,
thanks for your response. But if I simply ignore the unwanted effects, the
estimates of the main effects for the within-subjects factors are distorted
(rationale see below). Or doesn't this hold for between-within interactions?
Or put another way: Do you think this approach is the
Hi Mary,
I think the good old t-test is what you want:
x <- sample(1:50)
t.test(x, mu = 300)
gives:
One Sample t-test
data: x
t = -133.2, df = 49, p-value < 2.2e-16
alternative hypothesis: true mean is not equal to 300
95 percent confidence interval:
21.36 29.64
sample
On 2012-07-17 05:13, R. Michael Weylandt wrote:
On Mon, Jul 16, 2012 at 3:39 PM, Oxenstierna david.chert...@gmail.com wrote:
lapply(thing, function(x) x[['p.value']]) --works very well, thank you.
Not to be a chore, but I'm interested in comparing the results of
wilcox.test--and the
Hi everybody,
I am currently quite inexperienced with R.
I am trying to create a function that simply takes a value in a data frame,
looks for this value in another data frame, and copies all the rows that have
this value. This example is a simplified version of what I am doing, but it's
enough to help me:
listA
Not quite. It still orders the values in an increasing order, you've
just changed the values here. I'm using reorder() to prepare for
plotting the values, so I can't change the values.
On Sun, Jul 22, 2012 at 6:51 PM, arun smartpink...@yahoo.com wrote:
Hi,
I hope this is what you are looking
By looking at your output, it didn't change the order of the levels.
(This is symptomatic of how difficult it is to change levels in R in
any automatic way.)
On Sun, Jul 22, 2012 at 7:31 PM, arun smartpink...@yahoo.com wrote:
Hi,
Not sure if this helps or not.
with(InsectSprays,
HI,
Probably ?pnorm
x1 <- mean(x)
x1
[1] 25.5
pnorm(25.5, mean = 300)
[1] 0
A.K.
- Original Message -
From: Mary Kindall mary.kind...@gmail.com
To: r-help@r-project.org
Cc:
Sent: Sunday, July 22, 2012 3:37 PM
Subject: [R] pvalue calculate
I have a value
a=300
observation (x) =
Hi,
Try this:
dat1 <- read.table(text="
NACE aaa bbb ccc
1 a a c
1 a a c
1 a a c
2 a a c
2 a a c
3 a a c
4 a a c
4 a a c
4 a a c
", sep="", header=TRUE)
dat2 <- read.table(text="
Name NACE
a 1
b 2
c 3
", sep="", header=TRUE)
dat3 <- merge(dat1, dat2)
dat3 <- dat3[, 1:4]
dat3
  NACE aaa bbb ccc
1    1   a   a   c
2
On 12-07-22 12:27 PM, Sverre Stausland wrote:
reorder() is probably the best way to order the levels in a vector
without manually specifying the order. But reorder() orders by default
in an increasing order: The levels are ordered such that the values
returned by ‘FUN’ are in increasing order.
On 12-07-22 3:37 PM, Mary Kindall wrote:
I have a value
a=300
observation (x) = sample(1:50)
How to find a p-value from this. I need to show that a is different from
mean(x).
Thanks
This question doesn't really make sense. sample(1:50) gives you the
same sample as 1:50 does, just in a
Dear Henrik,
The within-subjects contrasts are constructed by Anova() to be orthogonal in
the row-basis of the design, so you should be able to safely ignore the effects
in which (for some reason that escapes me) you are uninterested. This would
also be true (except for the estimated error)
On 2012-07-22 09:02, Ranjan Maitra wrote:
Dear friends,
Many thanks to Jim (Holtman) and David (Carlson) for their quick
responses: Q1 is now solved. There are two almost equivalent ways for
doing this. They follow:
library(lattice)
z - rbind(cbind(z, 0), cbind(z, 20), cbind(z, 40))
z -
On 2012-07-22 13:09, Henrik Singmann wrote:
Hi Mary,
I think the good old t-test is what you want:
Maybe, but calculating p-values with absolutely no consideration
of assumptions is pure folly. It may well be that Mary has some
assumptions in mind, but the way the question was posed does not
Dear John,
indeed, you are very right. Including the covariate as is doesn't make any
sense. The only correct way would be to center it on the mean beforehand. So
actually the examples in my first and second mails are bogus (I add a corrected
example at the end) and the reported tests do not
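For what it's worth, centering a covariate on its mean is a one-liner; a sketch with made-up data (the names `dat` and `age` are illustrative, not from Henrik's model):

```r
# toy data frame with a numeric covariate
dat <- data.frame(age = c(21, 35, 44, 52, 63))

# center the covariate on its mean before entering it into the model
dat$age.c <- dat$age - mean(dat$age)

mean(dat$age.c)  # effectively zero
```

scale(dat$age, scale = FALSE) does the same thing, returning a one-column matrix instead of a vector.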
On Sun, 22 Jul 2012 15:04:36 -0700 Peter Ehlers ehl...@ucalgary.ca
wrote:
On 2012-07-22 09:02, Ranjan Maitra wrote:
Dear friends,
Many thanks to Jim (Holtman) and David (Carlson) for their quick
responses: Q1 is now solved. There are two almost equivalent ways for
doing this. They
On 2012-07-22 15:58, Ranjan Maitra wrote:
On Sun, 22 Jul 2012 15:04:36 -0700 Peter Ehlers ehl...@ucalgary.ca
wrote:
On 2012-07-22 09:02, Ranjan Maitra wrote:
Dear friends,
Many thanks to Jim (Holtman) and David (Carlson) for their quick
responses: Q1 is now solved. There are two almost
Dear Henrik,
On Mon, 23 Jul 2012 00:56:16 +0200
Henrik Singmann henrik.singm...@psychologie.uni-freiburg.de wrote:
Dear John,
indeed, you are very right. Including the covariate as is, doesn't make any
sense. The only correct way would be to center it on the mean beforehands. So
actually
[I had to dig back to see what your Q2 was. It's good to keep context.]
Try this:
p <- bwplot(Error ~ Method | sigma + INU, data = z,
    scales = list(rot = 90), horiz = FALSE,
    layout = c(5, 3), col = "red")
require(latticeExtra)
useOuterStrips(p,
Dear R help,
Does no one have an idea of where I might find information that could help
me with this problem? I apologize for re-posting - I have half a suspicion
that my original message did not make it through.
I hope you all had a good weekend and look forward to your reply,
MO
On Fri, Jul
On 2012-07-22 18:03, Ranjan Maitra wrote:
[I had to dig back to see what your Q2 was. It's good to keep context.]
Try this:
p <- bwplot(Error ~ Method | sigma + INU, data = z,
    scales = list(rot = 90), horiz = FALSE,
    layout = c(5, 3), col = "red")
require(latticeExtra)
On Sun, 22 Jul 2012 18:58:39 -0700 Peter Ehlers ehl...@ucalgary.ca
wrote:
On 2012-07-22 18:03, Ranjan Maitra wrote:
[I had to dig back to see what your Q2 was. It's good to keep context.]
Try this:
p <- bwplot(Error ~ Method | sigma + INU, data = z,
scales =
Dear all,
I have a question regarding changing the xlim and ylim in the function
image().
For example, in the following code, how can I have a heatmap with
xlim=ylim=c(0, 100)
instead of (0,1).
Thank you very much.
x <- matrix(rnorm(10000, 0, 1), 100, 100)
image(x)
Hannah
Hello,
image(1:100, 1:100, x)
Regards,
Pascal
On 23/07/12 11:28, li li wrote:
Dear all,
I have a question regarding changing the xlim and ylim in the function
image().
For example, in the following code, how can I have a heatmap with
xlim=ylim=c(0, 100)
instead of (0,1).
Thank
On 2012-07-22 19:09, Ranjan Maitra wrote:
On Sun, 22 Jul 2012 18:58:39 -0700 Peter Ehlers ehl...@ucalgary.ca
wrote:
On 2012-07-22 18:03, Ranjan Maitra wrote:
[I had to dig back to see what your Q2 was. It's good to keep context.]
Try this:
p - bwplot(Error~Method | sigma + INU, data
Just reset the levels of z$sigma (and also redefine sigmaExpr):
z$sigma <- factor(z$sigma,
    levels = c(5, 10, 20, 30, 50))   # new levels order
sigmaExprList <- lapply(as.numeric(levels(z$sigma)),
    function(s) bquote(sigma == .(s)))
Can someone verify for me whether the for loop below really calculates the
nonzero min for each row of a matrix? I have a bug somewhere in this section
of code. My first guess is how I am finding the nonzero min of each
row of my matrix. The overall idea is to make sure I am investing all of my
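For comparison, a vectorized version can serve as a check against the loop (a sketch assuming "nonzero" means exactly != 0; min() on an empty vector returns Inf with a warning, so all-zero rows come back as Inf):

```r
# small test matrix with one zero in each row
m <- rbind(c(0, 3, 5),
           c(2, 0, 4),
           c(7, 1, 0))

# nonzero minimum of each row
rowNonzeroMin <- apply(m, 1, function(r) min(r[r != 0]))
rowNonzeroMin  # 3 2 1
```

Comparing the loop's output against this on a small matrix is often faster than tracing the loop by hand.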
!!! Well, that strikes me as a fair bit of chutzpah. Some generous
soul may well respond, but why not do your own debugging using R's
debugging tools. It will serve you well in the long run to put in the
effort now to learn them. Debugging is a major part of any
programming.
?trace
?debug
There's a typo below. It's Deepayan Sarkar.
-- Bert
On Sun, Jul 22, 2012 at 9:55 PM, Bert Gunter bgun...@gene.com wrote:
inline.
-- Bert
On Sun, Jul 22, 2012 at 8:26 PM, Ranjan Maitra
maitra.mbox.igno...@inbox.com wrote:
Just reset the levels of z$sigma (and also redefine sigmaExpr):
On Sun, 22 Jul 2012 22:05:14 -0700 Bert Gunter gunter.ber...@gene.com
wrote:
On Sun, Jul 22, 2012 at 8:26 PM, Ranjan Maitra
maitra.mbox.igno...@inbox.com wrote:
Just reset the levels of z$sigma (and also redefine sigmaExpr):
z$sigma <- factor(z$sigma,
levels =