test?
>
> Thanks, Mark
>
>
>
> On Mon, Apr 23, 2012 at 3:10 PM, Greg Snow <538...@gmail.com> wrote:
>>
>> One option is to subtract the continuous variable from y before doing
>> the regression (this works with any regression package/function). The
>> probably better way in R is to use the 'offset' function.
>>
>> data <- read.csv("http://doylesdartden.com/Monthly-pH-example.csv",
>> sep=",")
>>
>> attach(data)
>>
>>
>> plot( Year, MC.pH, ylim=range(MC.pH,MV.pH) , col='blue')
>>
>> points( Year, MV.pH, col='green' )
>>
>> lines( loess.smooth(Year, MC.pH) )
Assuming that you want event as the x-axis (horizontal) you can do
something like (untested without reproducible data):
par(mfrow=c(2,1))
scatter.smooth( event, pH1 )
scatter.smooth( event, pH2 )
or
plot( event, pH1, ylim=range(pH1,pH2) , col='blue')
points( event, pH2, col='green' )
lines( loess.smooth(event, pH1) )
One option is to subtract the continuous variable from y before doing
the regression (this works with any regression package/function). The
probably better way in R is to use the 'offset' function:
formula = I(log(data$AB.obs + 1, 10) - log(data$SIZE, 10)) ~
log(data$SIZE, 10) + data$Y
formula = log(data$AB.obs + 1, 10) ~ log(data$SIZE, 10) + data$Y +
offset(log(data$SIZE, 10))
Here is a method that uses negative look behind:
> tmp <- c('mutation','nonmutated','unmutated','verymutated','other')
> grep("(?<!non)(?<!un)mutat", tmp, perl=TRUE, value=TRUE)
> Hello All,
>
> Started out awhile ago trying to select columns in a dataframe whose names
> contain some variation of the word "mutant" using code like:
>
> n
This is really a job for a database, and Excel is not a database (even
though many think it is). I have some clients that I have convinced
to create an Access database rather than use Excel (still MS product
so it can't be that scary, right?). They were often a little
reluctant at first because t
R works on the idea that factor level ordering is a property of the
data rather than a property of the graph. So if you have the factor
levels ordered properly in the data, then the graph will take care of
itself. To order the levels see functions like: factor, relevel, and
reorder.
On Sat, Apr
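A small self-contained sketch of that advice (the data here are made up):

```r
# Set the level order in the data; every plot/table then follows it
f <- factor(c("low", "high", "med", "low"),
            levels = c("low", "med", "high"))
levels(f)                      # "low" "med" "high"
f2 <- relevel(f, ref = "med")  # move "med" to the front (reference level)
levels(f2)                     # "med" "low" "high"
```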
The triplot function in the TeachingDemos package uses base graphics,
the subplot function (also in TeachingDemos) is another way to place a
plot within a plot (and triplot and subplot do work together).
If you want to stick to grid graphics then you can use viewports in
grid to insert one plot in
And another way is to remember properties of matrix multiplication:
y %*% diag(x)
On Fri, Apr 20, 2012 at 8:35 AM, David Winsemius wrote:
>
> On Apr 20, 2012, at 4:57 AM, Dimitris Rizopoulos wrote:
>
>> try this:
>>
>> x <- 1:3
>> y <- matrix(1:12, ncol = 3, nrow = 4)
>>
>> y * rep(x, each =
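For concreteness, the two approaches in this thread give the same column scaling (a quick check with the same toy data):

```r
x <- 1:3
y <- matrix(1:12, ncol = 3, nrow = 4)
a <- y %*% diag(x)               # matrix multiplication: column j scaled by x[j]
b <- y * rep(x, each = nrow(y))  # element-wise recycling does the same
all.equal(a, b, check.attributes = FALSE)  # TRUE
```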
Would using the 'sink' function with type='message' and split=TRUE do
what you want?
On Thu, Apr 19, 2012 at 2:00 AM, Alexander wrote:
> Hello,
> I am working under R2.11.0 Windows and I would like to ask you if you know a
> way to save all warning messages obtained by the R function "warning" in
Almost always when people ask this question (it and its answer are FAQ
7.21) it is because they want to do things the wrong way (just don't
know there is a better way).
The better way is to put the variables that you want to access in this
way into a list, then you can easily access the objects in
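A minimal illustration of the list approach (object names are made up):

```r
# Instead of separate objects x1, x2, ..., keep them in one list
xs <- list(x1 = 1:3, x2 = 4:6, x3 = 7:9)
xs[["x2"]]        # fetch one element by name
sapply(xs, mean)  # apply a function to every element at once
```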
Determining what is an outlier is complicated regardless of the tools
used (this is a philosophical issue rather than an R issue). You need
to make some assumptions and definitions based on the science that
produces the data rather than the data itself before even approaching
the question of outli
's
> an item in the examples which may be exactly what I'm after.
>
> DAV
>
>
> -Original Message-
> From: Greg Snow [mailto:538...@gmail.com]
> Sent: Monday, April 16, 2012 11:54 AM
> To: David A Vavra
> Cc: r-help@r-project.org
> Subject: Re: [R]
Look at the Reduce function.
On Mon, Apr 16, 2012 at 8:28 AM, David A Vavra wrote:
> I have a large number of 3d tables that I wish to sum
> Is there an efficient way to do this? Or perhaps a function I can call?
>
> I tried using do.call("sum",listoftables) but that returns a single value.
>
> S
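A sketch of the Reduce idea with made-up tables:

```r
tabs <- lapply(1:3, function(i) array(i, dim = c(2, 2, 2)))
total <- Reduce(`+`, tabs)  # element-wise sum; do.call("sum", ...) collapses to one number
dim(total)                  # 2 2 2 -- still a 3-d array
total[1, 1, 1]              # 1 + 2 + 3 = 6
```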
This sounds like possibly using logsplines may be what you want. See
the 'oldlogspline' function in the 'logspline' package.
On Thu, Apr 12, 2012 at 7:45 AM, Michael Haenlein
wrote:
> Dear all,
>
> This is probably more related to statistics than to [R] but I hope someone
> can give me an idea h
Here is one approach:
tmp <- rbinom(10, 100, 0.78)
mp <- barplot(tmp, space=0, ylim=c(0,100))
tmpfun <- colorRamp( c('green','yellow',rep('red',8)) )
mat <- 1-row(matrix( nrow=100, ncol=10 ))/100
tmp2 <- tmpfun(mat)
mat2 <- as.raster( matrix( rgb(tmp2, maxColorValue=255), ncol=10) )
for(i in 1:10) rasterImage(mat2[, i, drop=FALSE], mp[i]-0.5, 0, mp[i]+0.5, tmp[i])
The "emp.hpd" function in the TeachingDemos package will do this
(assumes a single interval result, either unimodal or multimodes but
the valleys between don't drop far enough to split the interval). I
am sure there are similar functions in other packages as well.
On Fri, Apr 6, 2012 at 12:39 PM,
Peter showed how to get the minimums from a list or data frame using
sapply, here is a way to copy your 1440 vectors into a single list
(doing this and keeping your data in a list instead of separate
vectors will make your life easier in general):
my.list <- lapply( 1:1440, function(x) get( sprintf("vec%d", x) ) )  # adjust the sprintf pattern to match your vectors' names
Run the examples for the "loess.demo" function in the TeachingDemos
package to get a better understanding of what goes into the loess
predictions.
On Tue, Apr 3, 2012 at 2:12 PM, Recher She wrote:
> Dear R community,
>
> I am trying to understand how the predict function, specifically, the
> pred
If you look at the code for summary.lm the line for the value of sigma is:
ans$sigma <- sqrt(resvar)
and above that we can see that resvar is defined as:
resvar <- rss/rdf
If that is not sufficient you can find how rss and rdf are computed in
the code as well.
On Tue, Apr 3, 2012 at 8:56 AM,
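The same computation can be reproduced by hand on a built-in data set:

```r
# Reproduce summary.lm's sigma: sqrt(rss/rdf)
fit <- lm(dist ~ speed, data = cars)
rss <- sum(residuals(fit)^2)  # residual sum of squares
rdf <- df.residual(fit)       # residual degrees of freedom
all.equal(sqrt(rss / rdf), summary(fit)$sigma)  # TRUE
```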
I tried your code, first I removed the reference to the global
variable data$Line, then it works if I finish identifying by either
right clicking (I am in windows) and choosing stop, or using the stop
menu. It does as you say if I press escape or use the stop sign
button (both stop the whole evalu
You might want to look at the lattice or ggplot2 packages, both of
which can create a graph for each of the classes.
On Tue, Apr 3, 2012 at 6:20 AM, arunkumar wrote:
> Hi
> I have a data class wise. I want to create a histogram class wise without
> using for loop as it takes a long time
> my
How are you calculating the correlations? That may be part of the
problem, when you categorize a continuous variable you get a factor
whose internal representation is a set of integers. If you try to get
a correlation with that variable it will not be the polychoric
correlation.
Also do you need
Your explanation below has me more confused than before. Now it is
possible that it is just me, but it seems that if others understood it
then someone else would have given a better answer by now. Are you
restricting your categorical and binary variables to be binned
versions of underlying normal
Partly this depends on what you mean by a covariance between
categorical variables (and binary) and what is a covariance between a
categorical and a continuous variable?
On Thu, Mar 29, 2012 at 12:31 PM, Burak Aydin wrote:
> Hi,
> I d like to simulate 9 variables; 3 binary, 3 categorical and 3 co
I nominate the following paragraph for the fortunes package:
"The basic issue appears to be that glht is not smart enough to deal
with degrees of freedom so it uses an asymptotic z-test instead of a
t-test. Infinite df, basically, and since 4 is a pretty poor
approximation of infinity, you get you
I would use the my.symbols function from the TeachingDemos package
(but then I might be a little bit biased), here is a simple example:
library(TeachingDemos)
x <- runif(25)
y <- runif(25)
z <- sample(1:4, 25, TRUE)
ms.halfcirc2 <- function(col, adj=pi/2, ...) {
theta <- seq(0, 2*pi, len
You should use mixed effects modeling to analyze data of this sort.
This is not a topic that has generally been covered by introductory
classes, so you should consult with a professional statistician on
your problem, or educate yourself well beyond the novice level (this
takes more than just readin
Running findFn('linear programming') from the sos package brings up
several possibilities that look promising.
On Sun, Mar 25, 2012 at 5:48 AM, agent dunham wrote:
> Dear Community,
>
> I've a Work -Shift Scheduling Problem I'd like to solve via constraint
> linear programming.
>
> Maybe somethin
As others have said, you pretty much need to do the plot 2 times, but
if it takes more than one command to create the plot you can use the
dev.copy function to copy what you have just plotted into another
graphics device rather than reissuing all the commands again.
On Sat, Mar 24, 2012 at 9:43 AM
In addition to Michael's answers, there are packages that allow you to
use SQL syntax on R data objects, so you could probably just use what
you are familiar with.
On Sat, Mar 24, 2012 at 9:32 AM, reeyarn wrote:
> Hi, I want to run something like
> SELECT firm_id, count(*), mean(value), sd(value
If you are trying to see if both vectors could be random samples from
the same population then I would look at a qqplot (see ?qqplot) which
will compare them visually (and if they are not the same length then
the qqplot function will use interpolation to compare them). For a
more formal test you can use ks.test.
You could put this data into a 3 dimensional array and then use the
apply function to apply a function (such as mean) over which ever
variables you choose.
Or you could put the data into a data frame in long format where you
have your 3 variable indices in 3 columns, then the data in a 4th
column.
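A small sketch of the array approach (made-up data):

```r
a <- array(1:24, dim = c(2, 3, 4))
apply(a, 3, mean)        # one mean per slice of the 3rd index
apply(a, c(1, 2), mean)  # 2x3 matrix of means, averaging over the 3rd index
```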
See the 'petals' function in the TeachingDemos package for one example
of hiding source from casual inspection (intermediate level R users
will still easily be able to figure out what the key code is, but will
not be able to claim that they stumbled across it by accident).
This post gives another
ter chance of actually helping you.
>>
>> On Fri, Mar 9, 2012 at 9:46 AM, aoife doherty
>> wrote:
>> >
>> > Thank you. Can the chi-squared test compare two matrices that are not
>> > the
>> > same size, eg if matrix 1 is a 2 X 4 table, and ma
of actually helping you.
On Fri, Mar 9, 2012 at 9:46 AM, aoife doherty wrote:
>
> Thank you. Can the chi-squared test compare two matrices that are not the
> same size, eg if matrix 1 is a 2 X 4 table, and matrix 2 is a 3 X 5 matrix?
>
>
>
> On Fri, Mar 9, 2012 at 4:
R tends to see the ordering of factor levels as a property of the data
rather than a property of the table/graph. So it is generally best to
modify the data object (factor) to represent what you want rather than
look for an option in the table/plot function (this will also be more
efficient in the
Why do you want to do this? Lattice was not really designed to put
just part of the graph up, but rather to create the entire graph using
one command.
If you want to show a process, putting up part of a graph at a time,
it may be better to create the whole graph as a vector graphics file
(pdf, po
The chi-squared test is one option (and seems reasonable to me if it
is the proportions/patterns that you want to test). One way to do
the test is to combine your 2 matrices into a 3 dimensional array (the
abind package may help here) and test using the loglin function.
On Thu, Mar 8, 2012 at 5:
The issue here is the difference between what is contained in a string
and what R displays to you.
The string produced with the code:
> tmp <- "C:\\"
only has 3 characters (as David pointed out), the third of which is a
single backslash, since the 1st \ escapes the 2nd and the R string
parsing r
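The distinction can be seen directly:

```r
tmp <- "C:\\"
nchar(tmp)      # 3 -- 'C', ':', and a single backslash
cat(tmp, "\n")  # cat shows the actual characters: C:\
print(tmp)      # print shows the escaped, re-parseable form: "C:\\"
```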
To further explain. If you want contours of a bivariate normal, then
you want ellipses. The density for a bivariate normal (with 0
correlation to keep things simple, but the theory will extend to
correlated cases) is proportional to exp( -1/2 * ( x1^2/v1 + x2^2/v2 ) ),
so a contour of the distributi
The key part of the ellipse function is:
matrix(c(t * scale[1] * cos(a + d/2) + centre[1], t * scale[2] *
cos(a - d/2) + centre[2]), npoints, 2, dimnames = list(NULL,
names))
Where (if I did not miss anything) the variable 't' is derived from a
chisquare distribution and the c
A general solution if you always want 2 columns and the pattern is
always every other column (but the number of total columns could
change) would be:
cbind( c(Dat[,c(TRUE,FALSE)]), c(Dat[,c(FALSE,TRUE)]) )
On Sat, Mar 3, 2012 at 11:40 AM, David Winsemius wrote:
>
> On Mar 3, 2012, at 11:02 AM
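Here is the same trick on a small made-up matrix:

```r
Dat <- matrix(1:12, ncol = 4)  # toy stand-in for the poster's data
res <- cbind(c(Dat[, c(TRUE, FALSE)]), c(Dat[, c(FALSE, TRUE)]))
res  # odd-numbered columns stacked in column 1, even-numbered in column 2
```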
apt to your
> environment and make the best of it. So you just have to learn that
> Excel can be your friend (or at least not your enemy) and can serve a
> very useful purpose in getting your ideas across to other people.
>
> On Fri, Mar 2, 2012 at 6:41 PM, Greg Snow <538...@gma
Look at the ellipse package (and the ellipse function in the package)
for a simple way of showing a confidence region for bivariate data on
a plot (a 68% confidence interval is about 1 SD if you just want to
show 1 SD).
On Sat, Mar 3, 2012 at 7:54 AM, drflxms wrote:
> Dear all,
>
> I created a bi
Using the readLines function on your dat string gives the error
because it is looking for a file named "2 3 ..." which it is not
finding. More likely what you want is to create a text connection
(see ?textConnection) to your string, then use scan or read.table on
that connection.
On Sat, Mar 3, 2
I would use the regular text function instead of mtext (remembering to
set par(xpd=...)), then use the grconvertX and grconvertY functions to
find the location to plot at (possibly adding in the results from
strwidth or strheight).
On Thu, Mar 1, 2012 at 4:52 PM, Frank Harrell wrote:
> Rich's poin
Others explained why it happens, but you might want to look at the
zapsmall function for one way to deal with it.
On Thu, Mar 1, 2012 at 2:49 PM, Mark A. Albins wrote:
> Hi!
>
> I'm running R version 2.13.0 (2011-04-13)
> Platform: i386-pc-mingw32/i386 (32-bit)
>
> When i type in the command:
>
>
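A quick illustration of zapsmall for this kind of display problem:

```r
x <- c(1, 1.5e-17, 0.5)
x            # prints with a distracting near-zero value
zapsmall(x)  # 1.0 0.0 0.5 -- values that are ~zero at the current digits become 0
```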
?xspline
On Thu, Mar 1, 2012 at 8:15 AM, hendersi wrote:
>
> Hello,
>
> I have a spreadsheet of pairs of coordinates and I would like to plot a line
> along which curves/arcs connect each pair of coordinates. The aim is to
> visualise the pattern of point connections.
>
> Thanks! Ian
>
> --
> Vie
If you know that your first date is a Friday then you can use seq with
by="7 day", then you don't need to post filter the vector.
On Thu, Mar 1, 2012 at 1:40 PM, Ben quant wrote:
> Great thanks!
>
> ben
>
> On Thu, Mar 1, 2012 at 1:30 PM, Marc Schwartz wrote:
>
>> On Mar 1, 2012, at 2:02 PM, Ben
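A sketch with an arbitrary start date that happens to be a Friday:

```r
# Every Friday in a range, given the first date is known to be a Friday
fridays <- seq(as.Date("2012-03-02"), as.Date("2012-04-27"), by = "7 days")
format(fridays[1:3], "%u")  # "5" "5" "5" (ISO weekday number; 5 = Friday)
```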
Or
lapply(LIST, cat, file='outtext.txt', append=TRUE)
On Thu, Mar 1, 2012 at 6:20 AM, R. Michael Weylandt
wrote:
> Perhaps something like
>
> sink("outtext.txt")
> lapply(LIST, print)
> sink()
>
> You could replace print with cat and friends if you wanted more
> detailed control over the look of
Try sending your clients a data set (data frame, table, etc) as an MS
Access data table instead. They can still view the data as a table,
but will have to go to much more effort to mess up the data, more
likely they will do proper edits without messing anything up (mixing
characters in with number
?ks.test
?qqplot
also look at permutation tests and possibly the vis.test function in
the TeachingDemos package.
Note that with all of these large samples may give you power to detect
meaningless differences and small samples may not have enough power to
detect potentially important differences.
The validate function in the rms package can do cross validation of
ols objects (ols is similar to lm, but with additional information),
the default is to do bootstrap validation, but you can specify
crossvalidation instead.
On Thu, Feb 16, 2012 at 10:44 AM, samuel-rosa
wrote:
> Dear R users
>
>
Also look at the zapsmall function. A useful but often overlooked tool.
On Thu, Feb 16, 2012 at 2:54 AM, Petr Savicky wrote:
> On Thu, Feb 16, 2012 at 10:17:09AM +0100, Gian Maria Niccolò Benucci wrote:
>> Dear List,
>>
>> I will appreciate any advice regarding how to convert the following numbe
This post https://stat.ethz.ch/pipermail/r-sig-mixed-models/2009q1/001819.html
may help you understand why the standard p-values in some cases are
not the right thing to do and what one alternative is.
On Tue, Feb 14, 2012 at 3:36 PM, Xiang Gao wrote:
> Hi
>
> I am working on a Nested one-way ANO
Note that you can also do logical comparisons with the results of grepl like:
grepl('^as', a) | grepl('^df',a)
For the given example it is probably simplest to do it in the regular
expression as shown, but for some more complex cases (or including
other variables) the logic with the output may be
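For example (made-up strings):

```r
a <- c("asdf", "dfgh", "asxx", "qwer")
hits <- grepl("^as", a) | grepl("^df", a)  # combine patterns with vectorised logic
a[hits]  # "asdf" "dfgh" "asxx"
```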
All the distribution tests are rule out tests, i.e. they can tell you
if your data does not match a given distribution, but they can never
tell you that the data does come from a specific distribution.
Note also that the results of any of these studies may not be that
useful, for small sample size
Assuming this is the hexplom function from the hexbin package (it is
best to be specific in case there are multiple versions of the
function you ask about), you can specify "lower.panel=function(...){}"
for a, and "as.matrix=TRUE" for c, for b I am not sure what exactly
you want to do, but look at
If you are willing to use base graphics instead of ggplot2 graphs, then look at
the subplot function in the TeachingDemos package. One of the examples there
shows adding multiple small bar graphs to a map.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s.
If you use logspline estimation (logspline package) instead of kernel density
estimation then this is simple as there are cumulative area functions for
logspline fits.
If you need to do this with kernel density estimates then you can just find the
area over your region for the kernel centered a
A different approach is to use the etxtStart function in the TeachingDemos
package. You need to run this before you start, then it will save everything
(commands and output and plots if you tell it to) to a file that can then be
post processed to give a file that shows basic coloring (or with o
The locator() function can help you find coordinates of interest on an existing
plot.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
>
What variables to consider adding and when to stop adding them depends greatly
upon what question(s) you are trying to answer and the science behind your data.
Are you trying to create a model to predict your outcome for future predictors?
How precise of predictions are needed?
Are you trying
Have you read ?"[[" ?
The short answer is that you can use both [] and [[]] on lists, the []
construct will return a subset of the list (which will be a list) while [[]]
will return a single element of the list (which could be a list or a vector or
whatever that element may be): compare:
> t
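The comparison in miniature:

```r
lst <- list(a = 1:3, b = "x")
lst["a"]                            # single bracket: a sub-list (still a list)
lst[["a"]]                          # double bracket: the element itself
class(lst["a"]); class(lst[["a"]])  # "list" then "integer"
```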
mon 0.05 to reviewers).
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: Chris Wallace [mailto:chris.wall...@cimr.cam.ac.uk]
> Sent: Thursday, January 26, 2012 9:36 AM
> To: Greg Snow
>
I believe that what you are seeing is due to the discrete nature of the
binomial test. When I run your code below I see the bar between 0.9 and 1.0 is
about twice as tall as the bar between 0.0 and 0.1, but the bar between 0.8 and
0.9 is not there (height 0), if you average the top 2 bars (0.8-
I nominate this response for the fortunes package.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of David Wins
You might consider using the state.vbm map that is now part of the maptools
package.
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Jeffrey Joh
Sent: Tuesday, January 17, 2012 3:37 AM
To: r-help@r-project.org
Subject: [R] Display
See the rasterImage function to do the plotting. If you need to read the image
in then I would start with the EBImage package from bioconductor (though there
are others as well).
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Al
In addition to the recommendations to use the grid function, you could just do:
par(tck=1)
before calling the plotting functions.
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Erin Hodgess
Sent: Wednesday, January 18, 2012 6:19
You could use the saveHistory command to save the history of commands that you
have written to a file, then read that into a variable using the scan function,
then do the save or save.image to save everything.
A different approach would be to save transcripts of your session that would
show the
If you need the animation in a file outside of R (or possibly in R) then look
at the animation package. This allows you quite a few options on how to save
an animation, some of which depend on outside programs, but options mean that
if you don't have one of those programs there are other ways t
I have had clients who also wanted to make little changes to the graphs (mostly
changing colors or line widths). Most after doing this a couple of times have
been happy to give me better descriptions of what they want so I can just do it
correctly the first time.
I mostly give them the graph
The scan function can be used to read a single row. If your file has multiple
rows you can use the skip and nlines arguments to determine which row to read.
With the what argument sent to a single item (a number or string depending on
which you want) it will read each element on that row into
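A self-contained sketch (writing a throwaway file so the example runs anywhere):

```r
# Read only the 3rd row of a whitespace-separated numeric file
tf <- tempfile()
writeLines(c("1 2 3", "4 5 6", "7 8 9"), tf)
scan(tf, what = numeric(), skip = 2, nlines = 1)  # 7 8 9
```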
Does "vnew <- vold[,,ks]" accomplish what you want?
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Asher Mei
This takes me back to listening to a professor lament about the researchers
that would spend years collecting their data, then negate all that effort
because they insist on using tools that are quick rather than correct.
So, before dismissing the use of pvals.fnc you might ask how long it takes
The error is because you are trying to assign to the result of a get call, and
nobody has programmed that (hence "could not find function") because it is
mostly (if not completely) meaningless to do so.
It is not completely clear what you want to accomplish, but there is probably a
better way t
This looks like a hierarchical Bayes type problem. There are a few packages
that do Bayes estimation or link to external tools (like openbugs) to do this.
You would just set up each of the relationships like you define below, y is a
function of a(k), b(k), x and e where e comes from a normal d
Look at the layout function, it may do what you want.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of annek
>
This really depends on more than just the optimizer, a lot can depend on what
the data looks like and what question is being asked. In bootstrapping it is
possible to get bootstrap samples for which there is no unique correct answer
to converge to, for example if there is a category where there
[mailto:r...@temple.edu]
Sent: Wednesday, December 14, 2011 11:20 AM
To: Greg Snow
Cc: Duncan Murdoch; Tal Galili; r-help
Subject: Re: [R] nice report generator?
Greg,
Please look at the SWord package. This package integrates MS Word with R
in a manner similar to the SWeave integration of La
If you have a recent download of RExcel from the RAndFriends installer, then
you will already have SWord on your machine.
Rich
On Wed, Dec 14, 2011 at 12:39 PM, Greg Snow
mailto:greg.s...@imail.org>> wrote:
Duncan,
If you are taking suggestions for expanding the tables package (looks great)
Colors probably are not the best for so many levels and combinations. Look at
the symbols function (or the my.symbols and subplot functions in the
TeachingDemos package) for ways to add symbols to a map showing multiple
variables.
-Original Message-
From: r-help-boun...@r-project.org [
Duncan,
If you are taking suggestions for expanding the tables package (looks great)
then I would suggest some way to get the tables into MS products. If I create
a full output/report myself then I am happy to work in LaTeX, but much of what
I do is to produce tables and graphs to clients tha
Often when someone wants lines (axes) in R plots to be thicker or thinner it is
because they are producing the plots at the wrong size, then changing the size
of the plot in some other program (like MSword) and the lines do not look as
nice. If this is your case, then the better approach is to
The zoomplot function in the TeachingDemos package can be used for this (it
actually redoes the entire plot, but with new limits). This will generally
work for a quick exploration, but for quality plots it is suggested to create
the 1st plot with the correct range to begin with.
--
Gregory (G
If you don't want to go with the simple method mentioned by David and Ted, or
you just want some more theory, you can check out:
http://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle and implement that.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s.
Your question is a bit too general to give a useful answer. One possible
answer to your question is:
> mrv <- matrix( runif(1000), ncol=10 )
Which generates multivariate random observations, but is unlikely to be what
you are really trying to accomplish. There are many tools for generating
m
Those formulas are the standard way to convert from polar coordinates to
Euclidean coordinates. The polar coordinates are 'r' which is the radius or
distance from the center point and 'theta' which is the angle (0 is pointing in
the positive x direction).
If r is constant and theta covers a f
Look at the txtStart function in the TeachingDemos package. It works like sink
but also includes commands as well as output. Though I have never tried it
with browser() (and it does not always include the results of errors).
Another option is to use some type of editor that links with R such a
I see the problem, I fixed this bug for version 2.8 of TeachingDemos, but have
not submitted the new version to CRAN yet (I thought that I had fixed this
earlier, but apparently it is still only in the development version). An easy
fix is to install version 2.8 from R-forge (install.packages("T
You could use the rgl package and plot a sprite at each of your points with the
color based on the concentration:
plume$col <- cut(plume$conc, c(-1,0.01,0.02,0.3,0.7,1),
labels=c('blue','green','yellow','orange','red'))
plume2 <- plume
theta <- atan2(plume2$y-mean(plume2$y), plume2$x-m
Unless your audience is mainly interested in Texas and California and is
completely content to ignore Rhode Island, then I would suggest that you look
at the state.vbm map in the TeachingDemos package that works with the maptools
package. The example there shows coloring based on a variable.
-
The chisq.test function is expecting a contingency table, basically one column
should have the count of respondents and the other column should have the count
of non-respondents (yours looks like it is the total instead of the
non-respondents), so your data is wrong to begin with. A significant
One approach would be to code dummy variables for your factor levels, have d1
equal to 0 for 'low' and 1 for 'med' and 'high', then have d2 equal to 1 for
'high' and 0 otherwise. For linear regression there are functions that will
fit a model with all non-negative coefficients, but I don't know
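The dummy coding described above, on made-up data:

```r
# d1 = 1 for 'med' or above; d2 = 1 only for 'high' (successive thresholds)
f <- factor(c("low", "med", "high", "med"), levels = c("low", "med", "high"))
d1 <- as.numeric(f %in% c("med", "high"))
d2 <- as.numeric(f == "high")
cbind(d1, d2)
```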
Replace "stop()" with "break" to see if that does what you want. (You may also
want to include "cat()" or "warning()" to indicate the early stopping.)
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> F
As an interesting extension to David's post, try:
M4.e <- matrix(rexp(4,1), ncol=4)
Instead of the uniform and rerun the rest of the code (note the limits on the
x-axis).
With 3 dimensions and the restriction we can plot in 2 dimensions to compare:
library(TeachingDemos)
m3.unif <- matrix
You probably want to generate data from a Dirichlet distribution. There are
some functions in packages that will do this and give you more background, or
you can just generate 4 numbers from an exponential (or gamma) distribution and
divide them by their sum.
--
Gregory (Greg) L. Snow Ph.D.
S
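The exponential trick in full (this draws from the flat Dirichlet(1,1,1,1)):

```r
# Four positive numbers summing to 1
e <- rexp(4)
w <- e / sum(e)
sum(w)  # 1, up to floating point
```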
When I copy and paste your code I get what is expected, the 2 subplots line up
on the same y-value. What version of R are you using, which version of
subplot? What platform?
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> --