k on the "Tools" menu, then the "Folder Options..." choice.
>
> Choose the View tab.
>
> About 10 choices down within Files and Folders, you'll see "Hide
> Extensions for Known File Types". Make sure this is *not* checked.
>
on=f,margins=c(2,3))
Or, even more generally, like:
marg.apply(X,Y,Z,fun1=f,fun2=sum,margins=c(2,3))
(Such a question must have been asked before, but I haven't
located it).
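An untested sketch of the sort of thing I have in mind (for a single
array; 'marg.apply' is of course hypothetical):
marg.apply <- function(X, fun1, fun2, margins){
  ## Apply fun1 over the given margins, then combine the results with fun2
  fun2(apply(X, margins, fun1))
}
## e.g. marg.apply(array(1:24,dim=c(2,3,4)), fun1=max, fun2=sum, margins=c(2,3))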
With thanks,
Ted.
----
[OOPS!!I accidentally reproduced my second example below
as my third example. Now corrected. See below.]
On 11-Nov-10 20:02:29, Ted Harding wrote:
On 11-Nov-10 18:39:34, Roslina Zakaria wrote:
> Hi,
> Does anybody encounter the same problem when we overlap histogram
> and density
set.seed(54321)
N <- 1000
X <- exp(rnorm(N,sd=0.4))
dd <- density(X)
## A coarse histogram
H <- hist(X,prob=TRUE,
xlim=c(-0.5,4),ylim=c(0,max(dd$y)),breaks=0.5*(0:8))
dx <- unique(diff(H$breaks))
lines(dd$x,dd$y)
## A finer histogram
H <- hist(X,prob=TRUE,
xl
'dnorm', 'pnorm' and the like, see the help at:
?dnorm
or
?pnorm
(both lead to the same page). Granted, for a newcomer to R the
documentation (which often relies heavily on cross-referencing,
and sometimes the cross-references can be difficult to identify)
can be difficult to get to
multinecker.pdf
What you *see* in treacherous (or any) images is marks on paper,
or on a computer screen, ...
What you *perceive* is different. Always. (Well, almost always:
you can make a deliberate effort to study the marks on the paper
as marks on paper).
Ted.
(1)
If you want to see all computed results displayed to more than
7 digits of accuracy, you can set the global "digits" option:
options(digits=17)
exp(1)
# [1] 2.7182818284590451
pi
# [1] 3.1415926535897931
See the entry for "digits" in ?options.
Hoping this helps
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] tools_2.11.0
---
= 1/12 = 0.0833
> var(Y)
[1] 0.08318914 # theory: var = 1/12 = 0.0833
> cov(X,Y)
[1] 0.04953733 # theory: cov = p/12 = 0.6/12 = 0.05
> cor(X,Y)
[1] 0.5947063 # theory: cor = p= 0.6
It would be interesting to see a solution which did not involve
having cases with X=Y
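One such solution (a sketch): push a pair of correlated normals
through pnorm(), i.e. a Gaussian copula, which never yields ties:
library(MASS)
set.seed(1)
rho <- 0.6  ## the correlation on the normal scale (an assumption)
Z <- mvrnorm(10000, mu=c(0,0), Sigma=matrix(c(1,rho,rho,1),2,2))
X <- pnorm(Z[,1]) ; Y <- pnorm(Z[,2])
cor(X,Y)  ## a little below rho, and never X == Y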
ty you want is that
the sampled value is between 2.3 and 2.5, whose probability is
pnorm(2.5, 5, 2) - pnorm(2.3, 5, 2) = 0.01714178
Since the question you are really interested in cannot be
identified from what you have asked (see examples above),
you should try to ma
> first.
>
> Regards -- Gerrit
This indicates that the sentence can be mis-read. It should be
cured by a small change in punctuation (hence I copy to R-devel):
The logical operators are <, <=, >, >=; == for exact equality;
and != for inequality
Hopin
different forms for the joint distribution of (X,Y).
So "correlated counts from a multivariate Poisson distribution"
does not lead to a definite target!
So it would be useful if you could specify precisely what you
want that phrase to mean.
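For instance, one common meaning is the "common shock" construction,
on the lines of (the rates here are arbitrary):
lam0 <- 2 ; lam1 <- 3 ; lam2 <- 4
Z0 <- rpois(10000, lam0)
X  <- rpois(10000, lam1) + Z0  ## X ~ Poisson(lam0+lam1)
Y  <- rpois(10000, lam2) + Z0  ## Y ~ Poisson(lam0+lam2)
cor(X,Y)  ## about lam0/sqrt((lam0+lam1)*(lam0+lam2)) = 0.365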
Ted.
------
ct = exact) : subscript out of bounds
> [2] is it possible to turn off recycling for vector operations? (I
> may have asked this at some point already, but I can't find the
> answer.)
>
>> a=c(2,3)
>> b=c(4,
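As to [2]: there is no global switch to turn recycling off, but a
strict variant is easy to define, e.g. (a sketch):
"%+%" <- function(a,b){
  if(length(a) != length(b)) stop("lengths differ: no recycling")
  a + b
}
c(2,3) %+% c(4,5)  # [1] 6 8
## c(2,3) %+% c(4,5,6) now stops with an error instead of recycling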
fundamental issue is interpolation.
There are many methods for this, with different properties!
An R Site Search on "interpolation" yields a lot of hits.
One (which is fairly basic, but may suit your purposes) is
the int
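One such basic method is linear interpolation with approx(), e.g.:
x <- c(1,2,4,7) ; y <- c(10,20,15,5)  ## illustrative data
approx(x, y, xout=c(1.5,3,5))  ## interpolated y at the new x-values
## See also spline() and splinefun() for smooth interpolation.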
to be incorporated into the model,
somehow. How best to do it will depend on what is being
modelled, and on how you expect it to be related to wind
direction.
Ted.
and you should be OK.
It may simply be that, by default, your Excel has saved the
whole spreadsheet, so you get all the columns. The same
solution should apply.
Hoping that this helps.
Ted.
up with last time, you should be able to
do this by executing
load("session01.wsp")
as the first command of the new session.
Finally, if you want to "clear" (by which I guess you mean
"remove") variables (say X, Y, Z1, Z2) or any other R objects
you have create
's already one out there that matches them.
> Duncan Murdoch
As Duncan and Clint suggest, writing a function is straightforward:
for the problem as you have stated it, on the lines of
function(x,k){floor(signif(x,k-as.integer(log(x,10)-1))) + 10^k}
However, w
same site in the other data frame is called
>> "Frozen Niagara Entrance". It seems to me the easiest thing
>> to do would be to remove the numbers from the first data
>> frame so the two will match. How do I go about removing those
rt
Try something based on:
X <- "001a Frozen Niagara Entrance"
sub("[[:alnum:]]* ","",X)
# [1] "Frozen Niagara Entrance"
Hoping this helps!
Ted.
es to
rounding of the fractional part: the integer part is always
displayed in full:
1234567891/10
# [1] 123456789
print(1234567891/10,10)
# [1] 123456789.1
print(1234567891/10,4)
# [1] 123456789
The internally stored value is always stored to the full available
precision.
Hoping this hel
ength.
Note that R's pf offers additional functionality, such as a
non-centrality parameter for the non-central F distribution.
Enter '?pf' for more detailed information.
Ted.
--------
t knowing where you
should be looking, it could take you several tries in
different places before you find what you want!
Hoping this helps,
Ted.
How about:
sum(unlist(strsplit(b,NULL))==";")
# [1] 5
(More transparent, at least to me ... ). See '?strsplit',
and note what is said under "Value".
Ted.
On 11-Oct-10 04:35:43, Michael Sumner wrote:
> Literally:
>
> length( gregexpr(";"
ESSAGES=en_GB.UTF-8
[7] LC_PAPER=en_GB.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] s
"errors in both variables"
yielded nothing relevant. It may be that using different search
terms would find appropriate methods (such as considered by Gillard,
or the Lindley approach for [A]), but I'm having difficulty
On 30-Sep-10 09:02:18, Jim Lemon wrote:
> On 09/30/2010 03:27 AM, Kevin E. Thorpe wrote:
>> Ted is well aware of how to change his list email. He was advising
>> people on the list who have his old email address in their address
>> books to remove it.
>>
> How c
have kept a record of my current (manchester.ac.uk)
address, please modify this to the new address.
Messages sent to the current address will continue to arrive,
for another week or two.
With thanks, and best wishes to all,
Ted
There was a typo error in my code below. See the inserted correction.
On 23-Sep-10 17:05:45, Ted Harding wrote:
> On 23-Sep-10 16:52:09, Duncan Murdoch wrote:
>> On 23/09/2010 11:42 AM, wangguojie2006 wrote:
>>> b<-runif(1000,0,1)
>>> f<-density(b)
>>
[i1]
u1 <- f$x[i1] ; v1 <- f$y[i1]
y0 <- v0 + (v1-v0)*(x0-u0)/(u1-u0) ## Linear interpolation
points(x0, y0, pch="+", col="red") ## Add interpolated point
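Equivalently, in one line (assuming f and x0 as above):
y0 <- approx(f$x, f$y, xout=x0)$y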
Ted.
---
GLM <- glm(Y ~ X, family=binomial)
prGLM <- predict(GLM,type="response",se=TRUE)
plot(X,prGLM$fit,type="l",ylim=c(0,1),col="blue")
lines(X,prGLM$fit+1.96*prGLM$se,col="red")
lines(X,prGLM$fit-1.96*prGLM$se,col="red")
prGLM <- predict(G
=0.1*(0:10))
all the bin-values were similar.
So what happens here is that the Welch/Satterthwaite approximation
does not produce uniformly distributed P-values when the Null
Hypothesis is true (at any rate for sample sizes as small as the
6 you are using).
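The sort of simulation that shows this (the group SDs are arbitrary):
set.seed(1)
P <- replicate(10000, t.test(rnorm(6), rnorm(6,sd=4))$p.value)
hist(P, breaks=0.1*(0:10))  ## visibly non-uniform under the Null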
Hoping
ghtly relaxed recently so that these
now pose a problem less often), and it may also be that access
via Gmane could trigger it.
If you would state what reason you were given, it may help to
identify the problem. If it was "Posting by non-member ... "
on any occa
rame(a=a,b=b)))
B <- as.integer(rep(rownames(T),each=ncol(T)))
A <- as.integer(rep(colnames(T),nrow(T)))
cbind(A,B,L)[L>0,]
# A B L
# [1,] 59 32 7
# [2,] 60 32 1
# [3,] 60 33 9
# [4,] 61 33 10
# [5,] 62 34 1
Ted.
-
of the form Y[t+1] = Y[t] + A + B^t, whereas
I was led to suggest this as an approach on the basis of looking
at *all* the values you supplied.
Just thoughts! Probably others can suggest ways of taking this
further, or approaching it differently.
Ted.
-
4 5 4 5 4 5 6 8 6 8 6 8
(By the way, you have 3 repetitions but wrote "twice" -- I assume
you meant "thrice" but the above generalises to 2 repetitions ... :)
Ted.
----
gument, since the above calculation has assigned
equal prior probability to the tie-breaks!
One could also, I suppose, consider the question of what
distribution of P-values might arise if the/an alternative
hypothesis were true, and where in this does the P-value that
we actually got lie? But the
be able to do. This could be a useful extension to
the plot() function and friends.
You can of course define an auxiliary function, say mycross(),
on the lines of
mycross <- function(x,y,L,U,R,D){
lines(c(x,x-L),c(y,y))
lines(c(x,x),c(y,y+U))
lines(c(x,x+R),c(y,y))
lines(c(x,x),c(y,y-D))
}
# Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Residual standard error: 0.2589 on 38 degrees of freedom
# Multiple R-squared: 0.965, Adjusted R-squared: 0.9631
# F-statistic: 523.4 on 2 and 38 DF, p-value: < 2.2e-16
The re
On 08-Sep-10 19:16:15, Bert Gunter wrote:
> Ted:
> ?layout
>
> Is this close to what you want?
>
> layout(matrix(1:2, nrow=2),wid=1,heigh=c(1,1), resp= TRUE)
> set.seed(54321)
> X0 <- rnorm(50) ; Y0 <- rnorm(50)
> plot(X0,Y0,pch="+",col="blue
despite
the xlim=c(-3,3); however, the "ylim=c(-3,3)" has been
respected, as has "asp=1".
What I would like to see, independently of the shape of
the graphics window, is a pair of square plots, each with
X and Y ranging from -3 to 3, even if this leaves empty
space in the graph
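One way that may achieve this is to force square plotting regions
with par(pty="s") (a sketch, re-using the X0,Y0 above):
par(mfrow=c(2,1), pty="s")  ## "s" = square plotting region
plot(X0, Y0, pch="+", col="blue", xlim=c(-3,3), ylim=c(-3,3))
plot(X0, Y0, pch="+", col="red",  xlim=c(-3,3), ylim=c(-3,3))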
aiming at 500 it would on average only take
about 3000 tries to hit it. After that it rapidly becomes less likely.
Ted.
On 04-Sep-10 19:27:54, Yi wrote:
> Enh, I see.
> It totally makes sense.
> Thank you for your perfect explanation.
> Enjoy the long weekend~
> Yi
-
itional distribution:
X1, X2, ... , Xn uniformly distributed on (17:23) conditional on
X1 + X2 + ... + Xn = 20*n.
This can be done, but before working out how to do it one would
need to be assured that this really is what you mean!
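If it is, a brute-force rejection sketch (assuming the discrete
uniform on the integers 17:23, with n=5 for illustration):
n <- 5
repeat {
  X <- sample(17:23, n, replace=TRUE)
  if(sum(X) == 20*n) break
}
X  ## one draw from the conditional distribution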
Ted.
On 04-Sep-10 07:07:41, Yi wrote:
> Sorry I forgot to talk ab
x<-sample(((-3):4),100,replace=TRUE)
x ### Have a look at the values in x
barplot(x) ### A barplot with bar heights given by x
barplot(table(x)) ### a barplot of the counts of the values of x
Hoping this helps,
Ted.
------
Hs that are implied by a large value of a certain test
statistic T are those AHs that give such values of T greater
probability than they would get under NH. Thus we are now getting
into the domain of the Power of the test to detect discrepancy.
Ted.
ether the coin
is fair or not. It is not possible for such data to discriminate
between a fair and an unfair coin.
And, as explained above, a P-value of 1 cannot prove that the
null hypothesis is true. All that is possible with a significance
test is that a small P-value can be taken as evidence
R commands (e.g. "project1.R") with a PDF
file generated by R using the same name (e.g. "project1.pdf")
because Windows (by default) does not show you the extension
(respectively ".R" and ".pdf").
Please clarify!
Ted.
---
0 0 0 14 8 12 16
i.e. (in R):
(diag(nrow(A)) + c(0,0,2,0, 0,0,0,0, 0,0,0,0, 0,0,0,0))%*%A
#      [,1] [,2] [,3] [,4]
# [1,]    1    5    9   13
# [2,]    2    6   10   14
# [3,]    5   17   29   41
# [4,]    4    8   12   16
A[row(A)==3]
# [1] 3 7 11 15
Then 'replace(A, row(A) == 3, 2 * A[1,] + A[3,])' replaces that
part of the vector A as indexed by TRUE with the given expression
2 * A[1,] + A[3,] = 5 17 29 41
Hence
replace(A, row(A) == 3, 2 * A[1,] + A[3,])
# [,1] [,2] [,3] [,4]
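For reference, the complete display (assuming A <- matrix(1:16,nrow=4),
which matches all the values above):
A <- matrix(1:16, nrow=4)
replace(A, row(A) == 3, 2*A[1,] + A[3,])
#      [,1] [,2] [,3] [,4]
# [1,]    1    5    9   13
# [2,]    2    6   10   14
# [3,]    5   17   29   41
# [4,]    4    8   12   16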
d as vectors,
or can be referred to separately as in a dataframe (say D), like
D$B[(2:n)] <- D$A[(2:n)] + 0.5*D$B[1:(n-1)]
Ted.
----
X0 <- unlist(strsplit(X,split="")) ## Nasty but necessary!
Y0 <- unlist(strsplit(Y,split="")) ## ...
ix <- which(X0 != Y0)
cbind(ix,X0[ix],Y0[ix])
# ix
# [1,] "2" "b" "B"
# [2,] "4" "d" "D"
# [3,] "5" &quo
On 21-Aug-10 08:33:50, Gavin Simpson wrote:
> [...]
> If that is too much trouble then I'm sure SAS will welcome you
> with open arms (and then have one of those arms in down payment ;-)
>
> HTH
> G
... And also leave you with only one leg to stand on!
; header whether you have onefile = TRUE or FALSE.
> Back in the day, when I was using word and R, R's EPS files were
> imported without the preview (as R doesn't generate one). Later on,
> a low resolution bitmap was being displayed. I presumed this was
> because I was using a ne
an EPS file with no such EPSI inclusion.
There are PostScript-handling program suites, such as ghostscript,
which include a facility to convert from EPS to EPSI: in particular,
ghostscript has the command ps2epsi.
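E.g., from within R (assuming ghostscript is installed and on the
path; the filenames are hypothetical):
system("ps2epsi myplot.eps myplot.epsi")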
Ted.
the sake of the desired
visual effect, you would want to use an aspect ratio different
from 1. The basic point is that it is a tool to help you get
the vertical and horizontal dimensions of the graph in the
proportions that help to achieve the visual effect you seek.
Ted.
On 19-Aug-10 21:50:12, Spencer
es # = 0.0227
## Fisher's F-ratio statistic = meanSSeff/meanSSres:
F <- meanSSeff/meanSSres
F # = 45.23889
## P-value for F as test of difference between group means
## relative to within-group residuals (upper tail):
Pval <- pf(F, df.groups, df.res, lower.tail=FALSE)
imply "See also ?RNG" in every case might be enough!
Ted.
----
aving an explicit indication of pairing,
> eg paired=variable.name, or even better, paired=~variable.name.
> Relying on data frame ordering seems a really bad idea.
>
> -thomas
Thanks, Thomas, for elucidating the mechanisms of what I had suspected.
Following th
perhaps possible that 'na.action="na.pass"' and
'na.action="na.exclude"' result in different pairings in the
case "paired=TRUE". However, it seems to me that the differences
he observed are, shall we say, obscure!
Ted.
On 13-Aug-10 22:31:45, Thoma
d to dwindle
into near-irrelevance, since locally the response is close to linear
and whatever you achieve on one scale will be (close to) achieved
on the other scale.
Hoping this helps,
Ted.
> On 7 August 2010 18:37, Peter Dalgaard wrote:
>>
>> Probably, neither is optimal,
(real versus integer). However, once you start to use them, one
is likely to be coerced into the same type as the other if they
occur together in an expression. Hence the "confusing" results
from Bill Venables!
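A minimal illustration:
x <- 2L ; y <- 2.0
typeof(x) ; typeof(y)  ## "integer", "double"
typeof(x + y)          ## "double" -- the integer is coerced
x == y                 ## TRUE -- comparison coerces too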
Ted.
On 02-Aug-10 04:09:35, bill.venab...@csiro.au wrote:
> Just to add t
the sample mean
equal to 1.03E-6 (0.0103) and max/min=100. However this would
be really inadequate information to determine what the parameters
of the log-normal distribution might be. At the very least one needs
also the sample size, and even then the determination wo
ou should
use data which do not have this property that some of the independent
variables (x1,x2,x3) are linear functions of the others.
However, in trying it out as you have, you have already found out
something very important about linear regression! (And about R).
Hoping this helps,
Ted.
-
to separate the original elements). For a true
catenation, with nothing separating the chained elements, see:
cat("A B C","D E F","G H I",sep="")
# A B CD E FG H I
Hoping this helps.
Ted.
--
On 29-Jul-10 09:25:37, Ted Harding wrote:
> On 29-Jul-10 09:08:22, Nicola Sturaro Sommacal wrote:
>> Hi!
>> I have a ftable object in which some row contains integers and
>> some other contains a percentage that I would like to show with
>> two digits after the dot.
>
is with sprintf(), using different 'fmt' specifications
for the "integer" rows and the "percentage" rows, and using
cat() to output the results. However, I don't have time right
now to explore the details of how it might be done. Sorry.
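Roughly, though, something on these lines (an untested sketch, with
illustrative formats):
int.row <- c(12L, 345L, 6L)
pct.row <- c(12.345, 6.789, 50)
cat(sprintf("%8d",   int.row), "\n")
cat(sprintf("%8.2f", pct.row), "\n")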
Ted.
-
# dev 4 is active
dev.set(devN["plotB"]) # dev 3 is active
dev.set(devN["plotA"]) # dev 2 is active
dev.set(devN["plotC"]) # dev 4 is active
Hoping this helps,
Ted.
---
hard?!
>
> Thank you in advance,
> Katrin
(y2 - y1)/(x2-x1)
or, if X=c(x1,x2) and Y=c(y1,y2),
diff(Y)/diff(X)
either of which is shorter than
lm(Y ~ X)$coeff[2] ## !! :)
Ted.
----
0,1)
# [1] 10
Ted.
On 22-Jul-10 20:51:07, Jonathan wrote:
> I see.. Thanks!
>
> On Thu, Jul 22, 2010 at 4:39 PM, Hadley Wickham
> wrote:
>> Did you look at the examples in sample?
>>
>> # sample()'s surprise -- example
>> x <- 1:10
>> sample(x[x > 8]) # length 2
this leads to the error I show above. But I can't confirm that yet,
because I don't yet know how to get rid of rows that have a row name but
only NULL as the value.
I haven't seen this dealt with in the references I have read so far.
I think I may be able to deal with it by creating d
nd the resultset is empty. So assigning the
value returned by dbGetQuery to moreinfo works ONLY if the resultset is not
empty. It fails with a fatal error if the resultset is empty. So, the
question is, how can I revise that statement so that the assignment happens
only if the resultset is NOT emp
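One guard on these lines may do (a sketch; 'con' and 'sql' stand for
your connection and query):
res <- try(dbGetQuery(con, sql), silent=TRUE)
if(!inherits(res, "try-error") && is.data.frame(res) && nrow(res) > 0){
  moreinfo <- res
}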
DE Germany
# 2 IT Italy
# 3 NA Namibia
# 4 FR France
which(is.na(X))
# integer(0)
So that works.
There ought to be an option in read.csv() and friends which suppresses
the conversion of a string "NA" found in input into an NA value.
Maybe there is -
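There is: the na.strings argument of read.csv(). E.g. (a sketch;
the filename is hypothetical):
X <- read.csv("countries.csv", na.strings="", colClasses="character")
## Only empty fields become NA; the string "NA" survives as a string.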
either get dbWriteTable working (ideally
in a way that works around the limitations I mention above) or to do a bulk
insert into my MySQL table (yes, I already have a table in the relevant
schema with all the right data types for each field, and I load RMySQL at
the start of my program.) In a wors
o know this way back in
the stone age) to compute the confidence intervals for each of these
integrals.
So I don't bother anyone with similar elementary questions, what web
resource exists that defines confidence intervals for such integrals for
arbitrary distributions? or does such a resource
"32", "33", "34", "35",
"36", "37", "38", "39", "40"), class = "data.frame")
>
The full dataset has almost 200,000 observations! That is why I hadn't
posted the raw data. And m_id_default_res
the columns in m_id_default_res that I don't
need, or I need to copy only those columns I need to a new data.frame. How
do I do this? Obviously, doing an element-wise copy, at least as I tried to
do it, doesn't work.
Thanks,
Ted
05981
> 6 1 0.7930274 -1.0530558
> 7 2 0.5908323 -1.3543282
> 8 3 2.5079242 -0.4657274
> 9 4 1.6294046 -1.4094830
> 10 5 0.5183756 1.3084776
>
> what is the simplest way to do that?
>
> Thanks a lot in advance!
> Ralf
Something on the lines of
data.frame(id = c(mydat
l containers. I am looking for the R equivalent for objects, and
the R equivalent of the C++ STL algorithm std::copy (passed the begin and
end iterators of the source list and a back inserter for the recipient
container), for appending a source list to a master list.
Thanks
Ted
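For what it's worth, the closest R idiom is probably plain c()
(a sketch):
master <- list(a=1, b=2)
extra  <- list(c=3, d=4)
master <- c(master, extra)  ## like std::copy with a back_inserter
## or, to insert at a given position: append(master, extra, after=1)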
On Thu, Jul 1
en
calling rbind after the loop on the list of such data.frames?
Thanks again,
Ted
On Thu, Jul 15, 2010 at 3:27 PM, Marc Schwartz wrote:
> On Jul 15, 2010, at 2:18 PM, Ted Byers wrote:
>
> > The data.frame is constructed by one of the following functions:
> >
> > funweek
le length, and I am not certain how either might be used
inside the IDs loop.
So, what is the best way to combine all lists assigned to z into a single
data.frame?
Thanks
Ted
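The usual idiom for that last step (assuming the loop collects its
data.frames into a list, say 'dfs') is:
result <- do.call(rbind, dfs)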
do it, modulo some
head-scratching about how to test for convergence (an appropriate
test would depend on what kind of series was being summed).
Ted.
On 15-Jul-10 08:21:59, Allan Engelhardt wrote:
> Not 100% sure if this is what you are looking for, but maybe Reduce("+", x)
> will do i
erm (n-1), and n,
for n>1?
E.g. for the exponential series,
fun1 <- function(x) 1
fun <- function(x,n,tn) tn*x/n
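A sketch of a driver for such term functions (the convergence test
and tolerance are assumptions):
sum.series <- function(x, fun1, fun, tol=1e-12, nmax=1000){
  tn <- fun1(x)  ## term 0
  s  <- tn
  for(n in 1:nmax){
    tn <- fun(x, n, tn)  ## term n from term n-1
    s  <- s + tn
    if(abs(tn) < tol) break
  }
  s
}
sum.series(1, fun1, fun)
# [1] 2.718282  ## = exp(1)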
Ted.
----
irregular.
It may look better if you plot "Snow" as points, and then add
lines drawn to a regular sequence. Hence something like
S0 <- ((max(Snow) - min(Snow))/100)*(-0.05+(0:101))
plot(Snow,dnorm(Snow,mean=26.61,sd=14.179),pch="+",col="blue")
lines(S0,dnorm(S0,mean=26.61,sd=14.179))
ple is greater than
> 1000?
>
Thanks
Ted
On Mon, Jul 12, 2010 at 4:02 PM, jim holtman wrote:
> try 'drop=TRUE' on the split function call. This will prevent the
> NULL set from being sent to the function.
>
> On Mon, Jul 12, 2010 at 3:10 PM, Ted Byers wrote:
>
ple size of a given subsample is greater than, say,
100?
Even better, is there a way to make the split more dynamic, so that it
groups a given m_id's data by month if the average weekly subsample size is
less than 100, or by day if the average weekly subsample is greater than
1000?
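In code, I imagine something on these lines (a sketch; the thresholds
are illustrative, and sale_date is assumed to be of class Date):
wk <- format(df$sale_date, "%Y-%U")  ## year-week labels
sz <- mean(table(wk))                ## average weekly subsample size
unit <- if(sz < 100) "%Y-%m" else if(sz > 1000) "%Y-%j" else "%Y-%U"
groups <- split(df, list(df$m_id, format(df$sale_date, unit)), drop=TRUE)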
Thanks
Ted
per as it now seems trivial to just break it out into a loop (followed
by a lapply/split combo using only sale year and sale month).
While I am asking, is there a better way to split such temporally ordered
data into weekly samples that respect the year in
identical
Result: The loop took (11.184 - 1.728) = 9.456 seconds,
Vectorised, it took (11.348 - 11.184) = 0.164 seconds.
Loop/Vector = (11.184 - 1.728)/(11.348 - 11.184) = 57.65854
i.e. nearly 60 times as long.
Ted.
---
ot;,"green","blue","yellow")
M<-max(D,na.rm=TRUE)
ix.NA <- which(!is.na(D[1,]))
plot(posns[ix.NA],D[1,ix.NA],pch="+",xlim=c(0,16),ylim=c(0,M))
lines(posns[ix.NA],D[1,ix.NA])
for(i in (2:nrow(D))){
ix.NA <- which(!is.na(D[i,]))
po
lity, g)*). How would that example
be changed if there were two or more columns in the data.frame that are
needed to define the groups? I.E. in my example, I'd need to group by m_id,
and the year and week values that can be computed from sale_date.
Thanks
Ted
See at end ...
On 08-Jul-10 12:41:03, Ted Harding wrote:
> On 08-Jul-10 10:33:56, Gina Liao wrote:
>> Dear all,
>> Hi, I have the problems about converting the matrix to adjacency
>> matrix.Here's my example,
>> a b c
ina
If B is your original matrix of numbers, and A is to be the adjacency
matrix which you want, then:
A <- 1*(B > 0.4763)
Ted.
H
you will find that the components
$intensities
$density
have not been properly set.
Nevertheless, the above is a relatively painless way of getting
a standard histogram plot from such data.
Ted.
--
ing in between that level and the
level comprised of the maze of documentation for the plethora of relevant
packages is needed here (there is such an embarrassment of riches, I find
myself getting confused as to how to proceed).
Thanks
Ted
the regression line, but displaced downwards
in such a way as to exclude a ceretain percantage of ths points
(i.e. such that that percentage lie below the line); but this does
not match my own usual interpretation of "envelope" (which is such
that it is a boundary for the positions of th
uch appreciated.
>
> So the sequence starts with 'A' and then has 10 from all possible
> values of those 17 letters? Have you computed how many sequences that
> is? If so, have you comprehended how big a number that is?
>
> It's 17^10 -
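Indeed, in R:
17^10
# [1] 2.015994e+12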
to:
x <- c(1,2,3,4) ; y <- c(1,2,3,4)
(i.e. locations of grid *lines*, not points)
with z being the matrix
12 12 5 32
23 1 45 21
23 1 65 23
42 32 76 43
i.e.
z <- matrix(c(12,23,23,42,12,1,1,32,5,45,65,76,32,21,23,43),ncol=4)
Then
contour(x,y,z)
should do
visit the R-help
info page at:
https://stat.ethz.ch/mailman/listinfo/r-help
and follow the instructions under "Subscribing to R-help".
Welcome!
Ted.
----
That's neat, Greg! (As code, anyway). There was I, thinking about
how best to build it up by construction, then your "slash-and-burn"
technique does it in one line.
But was this the right problem, or the alternative that Bert Gunter
suggested?
Ted.
On 24-Jun-10 21:06:06, Greg Sno
combinatorial problem it is one
where you can quite easily drop stitches; so if there isn't
one I'll wait for confirmation before thinking about how to
implement it in R!
There may be some mileage in the 'partitions' package, see e.g.
http://finzi.
he fixed number (24) of "Y=1" cases
between Grp1 and Grp2, holding the sizes of Grp1 and Grp2 fixed.
(Hence X cannot exceed 5, and can be as low as 0, so the possible
re-assignments are X=0,1,2,3,4,5).
Ted.
-----