I have built a few R servers on Amazon's EC2 infrastructure and rolled a
pre-packaged instance of Ubuntu 64 with R ready to go. These instances are
available with various levels of technical support (for the server, not R itself).
For those of you who want to get work done with R and don't want t
Dear R users,
I have a dataset that I want to subset according to certain conditions.
However, the script is very messy as there are about 30 distinct conditions.
(i.e. same script but with different conditions)
I would like to make a user defined function so that I can input the desired
conditi
On 3 September 2009 at 20:41, lehe wrote:
| Hi,
| I have some results generated in my C++ program. I'd like to call some R
| functions that can test whether two sample sets are from different
| distributions, like Kolmogorov-Smirnov and other more sophisticated tests.
| I'd also like to draw the histogra
Hi,
I have some results generated in my C++ program. I'd like to call some R
functions that can test whether two sample sets are from different
distributions, like Kolmogorov-Smirnov and other more sophisticated tests.
I'd also like to draw the histograms of the two sample sets in the same plot
using R.
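A minimal sketch of the R side (x and y stand in for the two sample vectors once they reach R; the 0.3 shift is made up for illustration):
# two-sample Kolmogorov-Smirnov test plus overlaid histograms
x <- rnorm(1000)
y <- rnorm(1000, mean = 0.3)
ks.test(x, y)                 # tests whether x and y come from the same distribution
hist(x, freq = FALSE, col = rgb(0, 0, 1, 0.4), main = "Two samples", xlab = "value")
hist(y, freq = FALSE, col = rgb(1, 0, 0, 0.4), add = TRUE)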
Most of the time was being spent within the 'sapply' call. Here is a
list from Rprof:
0 18.4 root
1. 18.4 system.time
2. . 12.9 A1
3. . . 12.9 sapply
4. . . . 12.9 lapply
5. . . . | 12.9 FUN
6. . . . | . 12.9 sapply
7. . . . | . . 12.9 lapply
8. . . . | . . . 12
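For reference, a generic sketch of how such a profile is produced (the sapply call below is only a stand-in for the poster's A1 function):
Rprof("prof.out")
x <- sapply(1:2000, function(i) sum(runif(1000)))   # code being profiled
Rprof(NULL)
summaryRprof("prof.out")$by.self                    # time spent in each function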
I did a speed test with a colleague. We basically have identical Lenovo
ThinkCentres. He has 8 GB vs my 4 GB of RAM, but I don't think that's the
issue.
code:
length <- 2*10^6
a <- runif(length)
b <- runif(length)
print(summary(lm(a~b)))
Running Arch x86_64 this takes about 16 to 17 seconds. I *think
Bill,
Thanks for the great tips for speeding up functions in your response. Those
are really useful for me. Even with the improvements the recursion is still
many times slower than I need it to be in order to make it useful in R. I may
just have to suck it up and call a compiled language.
B
I'm having difficulties with plot.locfit.3d, at least I think that is
the problem. I have a large dataframe (about 4 MM cases) and was
hoping to see a non-parametric estimate of the hazard plotted against
two variables:
> fit <- locfit(~surv.yr+ ur_protein + ur_creatinine, data=TRdta,
cen
it is speedier to use sort than a combination of [] and order:
N<- 100
x <- runif(N)
> system.time(x[order(x)[c(N-1,N)]])
   user  system elapsed
   1.03    0.00    1.03
> system.time(sort(x)[c(N-1,N)])
   user  system elapsed
   0.28    0.00    0.28
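If only the two largest values are needed, a partial sort can be faster still (a sketch, reusing the same x and N):
sort(x, partial = c(N - 1, N))[c(N - 1, N)]   # the two largest values, without a full sort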
On Sep 4, 11:17 am, Noah Silverman wrot
Phil,
That's perfect. (For my application, I've never seen a tie. While
possible, the likelihood is almost none.)
Thanks!
--
Noah
On 9/3/09 4:29 PM, Phil Spector wrote:
> Noah -
> max(x[-which.max(x)]) will give you the second largest value,
> but it doesn't handle ties.
>x[order(x,de
Hello
I'm having trouble figuring out how to use the output of "segmented()"
with a new set of predictor values.
Using the example of the help file:
set.seed(12)
xx<-1:100
zz<-runif(100)
yy<-2+1.5*pmax(xx-35,0)-1.5*pmax(xx-70,0)+15*pmax(zz-.5,0)+rnorm(100,0,2)
dati<-data.fra
Try this:
# input data
ID <- scan()
1 1 1 1 2 2 2 2 2
V2 <- scan()
23.9 NA 22.0 23.9 0.4 NA NA 3.0 2.4
# combine into a two-column matrix
DF <- cbind(ID, V2); DF
# find unique pairs
unique(DF)
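If the logical mask is needed as well, duplicated() exposes what unique() uses (a small sketch on the same DF):
dup <- duplicated(DF)   # TRUE for rows whose (ID, V2) pair appeared earlier
DF[!dup, ]              # same result as unique(DF)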
###
# Chi Yau
# http://r-tutor.com
###
Barbara-43 wrote:
>
> Hello R-helpers,
>
Hello!
This is not a topic I am well versed in, but I am required to become well
versed in it... I welcome any assistance!
Using R, I want to create an optimal design for an experiment. I'll be
analyzing the results with logistic regression or some generalized linear
model. I am thinking that the algdesi
Hello R-helpers,
I am having a difficult time figuring out the following: I have a data
frame with 24 variables. I need to remove redundant observations where,
within the same values of ID, V2 is equal to another observation in V2.
> ID   V2
>  1 23.9
>  1   NA
>  1 22.0
>  1 23.9
>  2  0.4
On Thu, Sep 3, 2009 at 1:25 PM, Jim Porzak wrote:
> Tim,
>
> I've had success (& user acceptance) simply plotting to a .pdf &
> passing zoom functionality to Acrobat, or whatever.
>
> Worked especially well with large US map with a lot of fine print annotation.
>
> Of course, it will not replot axes
Hi Noah,
Next time try, please, send a short reproducible code/example.
Maybe this can be (not so elegant, but) helpful:
myDF<-data.frame(cbind(a=runif(5),b=runif(5)))
myDF
N=2
a.order<-rev(order(myDF$a))[1:N]
b.order<-rev(order(myDF$b))[1:N]
myDF.max2a<-myDF[a.order,]
myDF.max2a
myDF.max2b<-m
Bill Dunlap
TIBCO Software Inc - Spotfire Division
wdunlap tibco.com
> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Behalf Of Bryan Keller
> Sent: Thursday, September 03, 2009 2:11 PM
> To: r-help@r-project.org
> Subject: [R] Recur
Hi Richard,
If not for your condition (d), which imposes nonlinear constraints, you could
have used `constrOptim'. I have written a function called `constrOptim.nl'
that can handle nonlinear inequality constraints, and it also improves upon
`constrOptim' in a couple of aspects. Fortunately, y
There is also playwith, if you're willing to go down the GTK+ route...
2009/9/3 Tim Shephard :
> Hi folks,
>
> I was wondering if anyone could confirm/deny whether there exists any
> kind of package to facilitate zoomable graphs with multiple plots (eg,
> plot(..) and then points(..)). I've t
Hi,
I use the max function often to find the top value from a matrix or
column of a data.frame.
Now I'm looking to find the top 2 (or three) values from my data.
I know that I could sort the list and then access the first two items,
but that seems like the "long way". Is there some way to a
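One short way (a sketch, not necessarily the fastest for huge vectors):
x <- runif(100)
N <- 3
head(sort(x, decreasing = TRUE), N)   # the top N values
order(x, decreasing = TRUE)[1:N]      # their positions in x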
Hi,
There are good books by O'Reilly for both sed and awk.
That said, neither is what I would call a "complete" scripting language.
Without knowing the details of your requirements, I STRONGLY suggest
that you learn Perl. It allows you to do almost anything with data, fetch
web pages, access a
Hello,
Thank you very much for the tip. I think that the irt.ability() function is
what I'm looking for but I'm not sure.
To be more clear, I have obtained item parameters from the LLTM function in
the eRm package which allows for repeated measurements. I have 10 items
measured at 2 time point
Thank you for all your suggestions. I will start with the chapter.
Annie
On Thu, Sep 3, 2009 at 1:50 PM, Don McKenzie wrote:
> Frank may be too modest to suggest it, but a great place to start that
> reading is in his book "Regression Modeling Strategies" chapter 4.
> On Sep 3, 2009, at 1:45
Bryan Keller wrote:
>
> The following recursion is about 120 times faster in C#. I know R is not
> known for its speed with recursions but I'm wondering if anyone has a tip
> about how to speed things up in R.
>
> #"T" is a vector and "m" is a number between 1 and sum(T)
>
> A <- function(T,
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
> Behalf Of AnnieE
> Sent: Thursday, September 03, 2009 12:57 PM
> To: r-help@r-project.org
> Subject: Re: [R] Party plots
>
>
>
>
> Achim Zeileis wrote:
> >
> >
> >>>You can easily plot i
The following recursion is about 120 times faster in C#. I know R is not known
for its speed with recursions but I'm wondering if anyone has a tip about how
to speed things up in R.
#"T" is a vector and "m" is a number between 1 and sum(T)
A <- function(T,m) {
lt <- length(T)
if (lt == 1) {
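The body is cut off above, but when such a recursion revisits the same (T, m) combinations, caching results in an environment is a common R speed-up; a generic memoization sketch (not the poster's function):
# wrap any function so repeated calls with the same arguments hit a cache
memoize <- function(f) {
  cache <- new.env(hash = TRUE, parent = emptyenv())
  function(...) {
    key <- paste(deparse(list(...)), collapse = " ")
    if (!exists(key, envir = cache, inherits = FALSE))
      assign(key, f(...), envir = cache)
    get(key, envir = cache, inherits = FALSE)
  }
}
fib  <- function(n) if (n < 2) n else fib(n - 1) + fib(n - 2)
fibm <- memoize(function(n) if (n < 2) n else fibm(n - 1) + fibm(n - 2))
system.time(fib(25)); system.time(fibm(25))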
Frank may be too modest to suggest it, but a great place to start
that reading is in his book "Regression Modeling Strategies" chapter 4.
On Sep 3, 2009, at 1:45 PM, Frank E Harrell Jr wrote:
> You'll need to do a huge amount of background reading first. These
> stepwise options do not inco
Are you looking for reshape()?
HTH,
Stephan
Edward Chen wrote:
Hi all,
I have a mxn matrix that consists of 28077 rows of features and 30 columns
of samples. I want to normalize each row for the samples for each feature.
I have tried normalize and scale functions but they don't seem to work
On Sep 3, 2009, at 4:41 PM, Edward Chen wrote:
> Hi all,
> I have a mxn matrix that consists of 28077 rows of features and 30 columns
> of samples. I want to normalize each row for the samples for each feature.
> I have tried normalize and scale functions
How?
> but they don't seem to work out
You'll need to do a huge amount of background reading first. These
stepwise options do not incorporate penalization.
Frank
annie Zhang wrote:
Hi, Frank,
If I want to do prediction as well as to select important predictors,
which may be the best function to use when I have 35 samples and 35
Hi all,
I have an m x n matrix that consists of 28077 rows of features and 30 columns
of samples. I want to normalize each row for the samples for each feature.
I have tried the normalize and scale functions but they don't seem to work out
the way I want them to.
Thank you
--
Edward Chen
Email: edche...@g
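One approach that usually works for row-wise standardisation (a sketch on a small stand-in matrix):
m <- matrix(rnorm(10 * 30), nrow = 10)   # small stand-in for the 28077 x 30 matrix
m.scaled <- t(scale(t(m)))               # scale() works column-wise, so transpose twice
rowMeans(m.scaled)                       # ~ 0 for every row
apply(m.scaled, 1, sd)                   # 1 for every row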
Hi Annie,
What kind of data (response and explanatory) do you have?
As an ecological modeller, I always think first about what
my (our) mental models are. After that I analyze a set of
concurrent - *but with ecological meaning* - models.
It helps me to test hypotheses as well as to discuss
the results. If you
Tim,
I've had success (& user acceptance) simply plotting to a .pdf &
passing zoom functionality to Acrobat, or whatever.
Worked especially well with a large US map with a lot of fine-print annotation.
Of course, it will not replot axes more appropriate for the zoom level.
HTH,
Jim Porzak
Ancestry.com
(I haven't seen a response to this - might have missed it.)
see inline
Lars Bishop wrote:
> Dear experts,
> I have a few quick questions related to GLMs:
> 1) Suppose my response is of the type Yes/No, How can I control which
> response I'm modelling?
By selecting the appropriate reference level. As
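A minimal sketch of that idea (relevel() sets the reference level; glm() then models the probability of the other level):
y <- factor(c("No", "Yes", "Yes", "No", "Yes"))
levels(y)                      # "No" "Yes": P(Yes) would be modelled
y2 <- relevel(y, ref = "Yes")  # now "Yes" is the reference and P(No) is modelled
levels(y2)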
Hi, Frank,
If I want to do prediction as well as select important predictors, which
function may be the best to use when I have 35 samples and 35 predictors
(penalized logistic with variable selection)? I saw there is a 'fastbw'
function in the Design package. And there is a 'step.plr' function
On Sep 3, 2009, at 4:03 PM, diana buitrago wrote:
Hi, R users,
How can I export an R object as a .txt file? As an example I have
the result
from a regression and I need to save this object in a .txt file
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.
Dear list members.
I am trying to make a sensitivity analysis (as derived by Zurada 1994,
Engelbrecht 1995) of input parameters (gene expression data) when
applying a neural network to classify different cancer subtypes. Since
I am no expert in the field (rather a newbie), I wonder if ther
On Thu, Sep 3, 2009 at 4:34 PM, Henrique Dallazuanna wrote:
> Try this:
>
> #1
> lapply(my.array, '[', , 3)
>
>
this works! thank you a lot!
> #2
> newThirdColumn <- sample(3)
> lapply(my.array, replace, list = 7:9, values = newThirdColumn)
>
>
I did not understand this last line; so far I could
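For what it's worth, a sketch of what that replace() call does, assuming 3 x 3 matrices (which the indices 7:9 imply):
m <- matrix(1:9, nrow = 3)
newThirdColumn <- sample(3)
replace(m, 7:9, newThirdColumn)   # positions 7:9 in column-major order = the third column
# equivalent to: m[, 3] <- newThirdColumn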
Hi, R users,
How can I export an R object as a .txt file? As an example I have the result
from a regression and I need to save this object in a .txt file
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- gl(2,10,20, l
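A sketch using capture.output(), assuming the regression is the standard example from ?lm that the snippet starts to set up:
# write the printed summary of a fitted model to a plain text file
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group  <- gl(2, 10, 20, labels = c("Ctl", "Trt"))
weight <- c(ctl, trt)
fit <- lm(weight ~ group)
capture.output(summary(fit), file = "lm_summary.txt")
# alternatively: sink("lm_summary.txt"); print(summary(fit)); sink()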
Achim Zeileis wrote:
>
>
>>>You can easily plot into a large PDF, e.g., something like this
>
>>> pdf(file = "foo.pdf", height = 15, width = 20)
>>> plot(foo)
>>> dev.off()
>
>>>and then view the PDF in an external viewer, zooming into parts of a tree
>>>etc. Depending on the size of
I'm having trouble understanding the output from as.windrose(). For one
thing, data on a boundary between sectors seem to be left out of the
counts. I assume that explains the missing point in the output below
(angle 45). Shouldn't one side of each sector interval be open, to
include values such
Ravi & list,
Here is a simplified example of the type of problem I need to solve.
It's a constrained allocation problem of a finite population sample:
Decision vars: n[h] , h=1:H, i.e an H-vector of stratum sample sizes
Objective: Min the sum over h=1:H of ( W[h]^2 * S[h]^2 / n[h] )
w
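Just restating that objective as R code (W, S, and n as described above; the constraints would still have to be supplied to the optimiser):
obj <- function(n, W, S) sum(W^2 * S^2 / n)
obj(n = c(10, 20, 30), W = c(0.2, 0.3, 0.5), S = c(1, 2, 3))   # made-up example values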
Hi Richard,
Others have written to me about the non-availability of the Rdonlp2 package on
CRAN and on the package author's website. Some of these emails expressed the
hardships that they are experiencing because a lot of their code is dependent
upon Rdonlp2. They had also expressed their f
On Thu, 3 Sep 2009, AnnieE wrote:
I'm pretty new to R, and not much of a progammer (yet). I'm having trouble
navigating the graphical output for the party algorithm. Essentially, my
tree is too large for the default page size so the nodes overlap and obscure
one another. Anybody know how to c
On 09/03/2009 05:42 PM, j.delashe...@ed.ac.uk wrote:
Hello list,
I use R for microarray analysis.
One procedure I use takes a large matrix, and loops through it looking
for specific rows, does an operation with them, and outputs a result
(single row) as a row of another matrix. The loop goes o
What operating system are you running under? You should take a look
at the R process and see how much time it is using to see if there is
a difference in the CPU time. Are you paging? Exactly how are you
invoking the R script? Why are you using the GUI instead of Rterm?
You might try to run Rpr
You may find the multtest package helpful. It implements methods from
Westfall and Young (Resampling based multiple testing).
On Mon, Aug 31, 2009 at 5:37 AM, Yonatan
Nissenbaum wrote:
> Hi,
>
> My query is regarding permutation test and reshuffling of genotype/phenotype
> data
> I have been usi
My original problem was an inability to generate R plots using the
following scenario:
1. HTML page sends data to Ruby CGI
2. Ruby CGI passes on data to R plotting script, which generates a plot
3. Ruby CGI passes plot back to user
With help from Scott Sherrill-Mix on this list and from
Robert A. LaBudde wrote:
I have Vista Home with R-2.9.0, and installed and tried to test the
package 'roxygen':
> utils:::menuInstallPkgs()
trying URL
'http://lib.stat.cmu.edu/R/CRAN/bin/windows/contrib/2.9/roxygen_0.1.zip'
Content type 'application/zip' length 699474 bytes (683 Kb)
opened
Hi,
I would like to create wireframe plots conditional on 2 variables and use
different
limits for the 3-axes in each plot. I thought I could do this with subscripts
and
the panel.wireframe but I haven't been successful. I am getting this
error "...multiple actual arguments..." so I definitel
I assume you mean you wish to convert it from text
to the numeric representation that POSIXct uses:
> options(digits = 20)
> as.numeric(as.POSIXct("2009-06-16 09:28:17.746"))
[1] 1245158897.746
If you just want convert it to POSIXct then omit the
as.numeric part.
On Thu, Sep 3, 2009 at 1:17 P
Try this:
?numericDeriv
On Thu, Sep 3, 2009 at 1:18 PM, FMH wrote:
> Dear All,
>
> I was trying to compute the first and second order differentiation on a
> number of mathematical equations, but never found any command in R to do this
> operation. At present, i have almost 3000 different functi
I need to convert a date-style string: "2009-06-16 09:28:17.746"
To its POSIX representation: 1245137297746
The function below converts my POSIX date to a string ... now I need to go
backwards!
render.t32 <- function(t32, tz = "CET")
{
timez <- ISOdatetime(1970,1,1,0,0,0, tz ="UTC")+t32/
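Going backwards, a sketch using the same CET timezone that render.t32 assumes (the target value quoted above looks like milliseconds since the epoch):
t <- as.POSIXct("2009-06-16 09:28:17.746", tz = "CET")
sprintf("%.0f", as.numeric(t) * 1000)   # "1245137297746"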
Dear All,
I was trying to compute first- and second-order derivatives of a number of
mathematical equations, but never found any command in R to do this
operation. At present, I have almost 3000 different functions, consisting of
polynomial, harmonic and several other functions.
Could some
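For symbolic (rather than numeric) derivatives, base R's D() handles polynomial and harmonic terms; a minimal sketch (the expression below is only an illustration):
f  <- expression(x^3 + 2 * x^2 + sin(x))
d1 <- D(f, "x")           # first derivative
d2 <- D(d1, "x")          # second derivative
d1; d2
eval(d1, list(x = 1.5))   # evaluate the first derivative at x = 1.5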
annie Zhang wrote:
Thank you for all your reply.
Actually, as Bert said, besides prediction, I also need variable selection
(I need to know which variables are important). As far as the sample
size and number of variables, both of them are small around 35. How can
I get accurate prediction as lo
I have Vista Home with R-2.9.0, and installed and tried to test the
package 'roxygen':
> utils:::menuInstallPkgs()
trying URL
'http://lib.stat.cmu.edu/R/CRAN/bin/windows/contrib/2.9/roxygen_0.1.zip'
Content type 'application/zip' length 699474 bytes (683 Kb)
opened URL
downloaded 683 Kb
packa
Thank you for all your reply.
Actually, as Bert said, besides prediction, I also need variable selection (I
need to know which variables are important). As for the sample size and
number of variables, both are small, around 35. How can I get
accurate predictions as well as good predictors?
A
Dear Ottorino-Luca,
Here is a suggestion using ave():
df.mydata$PERCENTAGE <- with(df.mydata, ave(CONC, list(SAMPLE), FUN =
function(x) x / max(x) ))
df.mydata[1:5,]
#   CONC TIME SAMPLE PERCENTAGE
# 1  1.0    1      A        1.0
# 2  0.9    2      A        0.9
# 3  0.8    3      A        0.8
# 4
On Sep 3, 2009, at 12:17 PM, Ottorino-Luca Pantani wrote:
Dear R users, today I've got the following problem.
Here you are a dataframe as example.
There are some SAMPLES for which a CONCentration was recorded
through TIME.
The time during which the concentration was recorded is not always
You could try plotting them with a parallel plot. See ?parallel
in lattice or ?ggpcp in ggplot2
On Thu, Sep 3, 2009 at 4:18 AM, Sannr wrote:
>
> I am looking for an alternative way to inspect my data, other than doing an
> correspondance analysis.
>
> What I have is a list with 5 different measure
Look at the zoomplot function in the TeachingDemos package.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
> project.org] On Behalf Of T
Hi Ajay,
To install new packages try:
install.packages("package_name", dependencies=T)
But stats is part of base R. :-)
bests
miltinho
On Thu, Sep 3, 2009 at 6:01 AM, Ajay Singh wrote:
> Dear All,
> I have to install 'stats' package in my windows machine, could you let me
> know how to insta
It should not require installation. It's part of the base
distribution. What makes you think you do not have it already? What
does this report:
installed.packages(priority="base")
?
--
David
On Sep 3, 2009, at 6:01 AM, Ajay Singh wrote:
Dear All,
I have to install 'stats' package in
Dear R users, today I've got the following problem.
Here is a dataframe as an example.
There are some SAMPLES for which a CONCentration was recorded through TIME.
The time during which the concentration was recorded is not always the same,
10 points for Sample A, 7 points for Sample B and 11 fo
Hello, I read a lot about ordination, but I am still confused... I have data
on species presence/absence for 8 different sites and I would like to
represent my species and the sites on an ordination plot to see if some
species are associated with specific sites. I used metaMDS function, which
disp
Dear list,
I have a microarray data in which gene expression is the response
(dependent) variable. Can anybody tell me which package/function in R
should I use to model the gene expression with an ordinal predictor
(Independent Variable)
Thanks,
Shirley
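One common starting point (a sketch with made-up data, not a specific package recommendation) is to code the predictor as an ordered factor, which lm()/glm() then handle through polynomial contrasts:
dose <- ordered(rep(c("low", "medium", "high"), length.out = 30),
                levels = c("low", "medium", "high"))
expr <- rnorm(30, mean = as.integer(dose))   # hypothetical expression values
fit  <- lm(expr ~ dose)   # ordered factors get polynomial contrasts (.L, .Q terms)
summary(fit)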
Hello,
I am not sure whether this is a bug or lack of R experience.
However, I am using your Sweavelistingutil package, which is very
nice. Obviously I use it to create LaTeX files. These are encoded in
utf8.
However, when I use Sweavelistingutil, it uses some funky
characters for "`" and "'".
I'm posting answers to my own Q's here - as far as I have answers - first so
that people don't spend time on them, and second in case the solutions are
helpful to anyone else in future.
1) My first question is: is there a simple way of getting both dates along
the x-axis and the "*100" calculatio
I am looking for an alternative way to inspect my data, other than doing a
correspondence analysis.
What I have is a list with 5 different measurements of a person and ratings
that that person gave to a number of objects, so:
person1.name, person1.score1, person1.score2, person1.score3,
person1
Dear All,
I have to install the 'stats' package on my Windows machine; could you let me
know how to install the package?
Look forward to your assistance,
Ajay.
-
Ajay Singh, Ph.D. |Tel: +91-22 25764785
Research Scien
When I perform a two-way anova on my dataframe "pin", I can't get any
indication of the interaction between the two factors "gen" and "con", while
the statistics for the significance of the two factors alone are
correct.
I wrote this:
pvalues_genotipo <- sapply(pin[, 1:(length(pin)-2)], FUN
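For the interaction itself, a self-contained sketch using a made-up stand-in for pin (the response column and factor codings here are assumptions):
pin <- data.frame(y   = rnorm(40),
                  gen = gl(2, 20, labels = c("wt", "mut")),
                  con = gl(4, 5, 40))
fit <- aov(y ~ gen * con, data = pin)   # gen * con expands to gen + con + gen:con
summary(fit)                            # the gen:con row tests the interaction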
I'm pretty new to R, and not much of a progammer (yet). I'm having trouble
navigating the graphical output for the party algorithm. Essentially, my
tree is too large for the default page size so the nodes overlap and obscure
one another. Anybody know how to change the plot parameters to either:
Hi,
You may find some inspiration here:
http://wiki.r-project.org/rwiki/doku.php?id=tips:graphics-misc:ggplot2theme_inbase
HTH,
baptiste
2009/9/3 RINNER Heinrich
> Dear R-community,
>
> using R.2.9.1, how do you put reference lines or grids into the
> _background_ of a plot?
> For example:
Hello list,
I use R for microarray analysis.
One procedure I use takes a large matrix, and loops through it looking
for specific rows, does an operation with them, and outputs a result
(single row) as a row of another matrix. The loop goes on about 25000
times.
When I run the loop direct
But let's be clear here folks:
Ben's comment is apropos: ""As many variables as samples" is particularly
scary."
(Aside -- how much scarier then are -omics analyses in which the number of
variables is thousands of times the number of samples?)
Sensible penalization (it's usually not too sensitiv
The my.symbols and ms.polygon functions in the TeachingDemos package may help.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
> project.
Dear R-community,
using R.2.9.1, how do you put reference lines or grids into the _background_ of
a plot?
For example:
barplot(3:1)
abline(h = seq(0.5, 2.5 ,0.5), col = "red", lty = "dashed")
-> The lines are drawn in front of the bars (and the line parts going through the bars
might be considered as "ch
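One common workaround (a sketch, not from the thread): set up the plot region with invisible bars, draw the grid, then add the real bars on top.
barplot(3:1, col = NA, border = NA)                       # sets up the coordinates only
abline(h = seq(0.5, 2.5, 0.5), col = "red", lty = "dashed")
barplot(3:1, add = TRUE, axes = FALSE)                    # bars drawn over the grid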
Previous versions of this question have partially bounced.
I apologize if parts of this are showing up multiple times on the
list.
Another try ...
There was at one time an R package called Rdonlp2 for solving
constrained nonlinear programming problems. Both the objective
function
and the const
Uwe,
That is brilliant - I was not aware of that command. The solution is
now apparent: as you suggest, I can access that PID, write it to a
file, then search for that file, read its contents, etc.
thanks.
dennis
Dennis Fisher MD
P < (The "P Less Than" Company)
Phone: 1-866-PLessThan (1-86
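Assuming the command referred to above is Sys.getpid(), the write-a-PID-file idea might look like this sketch (the file name and location are arbitrary):
pidfile <- "rterm.pid"                          # hypothetical fixed location
writeLines(as.character(Sys.getpid()), pidfile)
readLines(pidfile)                              # a later run can read and check this PID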
Dennis Fisher wrote:
Colleagues,
I have encountered the following situation in R (2.9.0) with Windows XP.
I have an application that calls Rterm.exe. In certain situations, the
application terminates but fails to close R. Then, the next time that
the application runs, there are replicated
Try this:
#1
lapply(my.array, '[', , 3)
#2
newThirdColumn <- sample(3)
lapply(my.array, replace, list = 7:9, values = newThirdColumn)
On Thu, Sep 3, 2009 at 11:16 AM, Carlos Hernandez wrote:
> Dear All,
> I created a list (of length Z) in the following way:
>
> my.array <- vector("list", Z)
>
Colleagues,
I have encountered the following situation in R (2.9.0) with Windows XP.
I have an application that calls Rterm.exe. In certain situations,
the application terminates but fails to close R. Then, the next time
that the application runs, there are replicated copies of R running -
Dear All,
I created a list (of length Z) in the following way:
my.array <- vector("list", Z)
then I assigned a matrix (of T rows by N columns) to each of the elements of
the list my.array in the following way:
my.array[[i]] <- matrix.data ##( matrix.data has dimensions TxN, and i
repeated this
You can convert back to UTF-8:
value <- unlist(xpathApply(doc,"//MESSUNG/BEZEICHNUNG", xmlValue))
Encoding(value) <- "UTF-8"
On Thu, Sep 3, 2009 at 7:56 AM, Dominik Bänninger wrote:
> Dear list
> I tried to read an xml file using the xml package. Unfortunately, some
> encoding problems occure.
There are many ways to measure prediction quality, and what you choose
depends on the data and your goals. A common measure for a
quantitative response is mean squared error (i.e. 1/n * sum((observed
- predicted)^2)) which incorporates bias and variance. Common terms
for what you are looking for
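As a tiny worked example of that formula (made-up numbers):
observed  <- c(3.1, 2.8, 4.0, 3.6)
predicted <- c(3.0, 3.1, 3.7, 3.5)
mean((observed - predicted)^2)   # mean squared error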
This is what I've done. Just capture two identified points and then replot.
If the two points are right to left, I zoom out. Works quite well.
Still, can't wait for iplot xtreme.
Cheers,
Tim.
On Thu, Sep 3, 2009 at 5:03 AM, Jim Lemon wrote:
> Tim Shephard wrote:
>>
>> Hi folks,
>>
>> I was wondering
Peter,
Thank you for the dput values.
Kenn points out that
result <- cbind(data1[,,1], data2[,,1])
dim(result) <- c(3,3,1)
gets what you want.
We wrote abind to bind atomic arrays and/or data.frames. The list
feature, which is interfering with your usage,
was designed to simplify calling se
>> You mean the backward and forward stepwise selection is bad? You also
>> suggest the penalized logistic regression is the best choice? Is there any
>> function to do it as well as selecting the best penalty?
>> Annie
>
> All variable selection is bad unless its in the context of penalization.
Tim Shephard wrote:
Hi folks,
I was wondering if anyone could confirm/deny whether there exists any
kind of package to facilitate zoomable graphs with multiple plots (eg,
plot(..) and then points(..)). I've tried zoom from IDPmisc, and
iplot from the iplot and iplot extreme packages, but as
Hi
use any of the suitable selection methods available in R.
E.g.
data[data$gender==1, ]
selects only female values
data$wage[(data$gender==1) & (data$race==1)] selects black female wages.
and see also ?subset
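For example, the same selections written with subset() (a sketch, using the column names from the lines above):
subset(data, gender == 1)                              # all rows for females
subset(data, gender == 1 & race == 1, select = wage)   # black female wages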
Regards
Petr
r-help-boun...@r-project.org wrote on 03.09.2009 10:51:59:
> Dear all,
>
Dear list
I tried to read an XML file using the XML package. Unfortunately, some encoding
problems occur. E.g. German umlauts are not read correctly. I assume that this
occurs due to (internal?) conversion to UTF-8. To illustrate the problem, I
have written two XML files.
File Test 1
---
Dear experts,
I have a few quick questions related to GLMs:
1) Suppose my response is of the type Yes/No, How can I control which
response I'm modelling?
2) How can I perform Type III tests? Is it with -> drop1(mymodel,
test="Chisq") ?
3) I have a numerical variable which I converted to an ord
Dear R Users,
Is the "iterated cumulative sums of squares algorithm" implemented in any
package in R ? A simple search yields no results but perhaps it is named
something else.
Thanks in advance,
Tolga
Dear all,
I have eleven datasets, one for each year from 1980 to 1990;
each year has three variables:
wage
gender(1=female, 2=male)
race(1=black, 2=white)
My original command is:
fig2b<-reldist(y=mu1990$wage,yo=mu1980$wage,...)
I have three questions:
1. If I want to appoint y=women's wage in 1990
Thanks Andris, Michael and Petr for your prompt and kind feedback.
I will try generating my own biplot from low-level graph commands... I hope it
will work.
Best regards,
Marco
--
Marco Manca, MD
University of Maastricht
Faculty of Health, Medicine and Life Sciences (FHML)
Cardiovascular Re
On Thu, Sep 3, 2009 at 5:50 AM, Peter Meilstrup
wrote:
> I'm trying to massage some data from Matlab into R. The matlab file has a
> "struct array" which when imported into R using the R.matlab package,
> becomes an R list with 3+ dimensions, the first of which corresponds to the
> structure fiel
Is this what you want?
> dat <- read.table("http://dpaste.com/88988/plain/",
comment.char="", header = TRUE)
> names(dat)
[1] "X.ID" "VALUE" "FREQUENCY"
> subset(dat, X.ID %in% 0:2 & !duplicated(X.ID))
   X.ID VALUE FREQUENCY
1     0  0.00         3
32    1  0.67         1
65
Your subset problem has been solved already, but I'd like to add a
comment on this:
> I want all rows where TTE is equal to 0.024657534
Comparing floating point numbers for equality with '==' is problematic
so a simple df[df$TTE == 0.024657534, ] can easily fail. Have a look
at help("=="), especi
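A small illustration of why a tolerance-based comparison is safer (values made up):
x  <- 0.024657534
df <- data.frame(TTE = c(x, 0.1, x + 1e-12))
df[df$TTE == x, ]              # the third row is silently dropped
df[abs(df$TTE - x) < 1e-9, ]   # a tolerance-based test catches both matching rows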