Quite fascinating, if annoying. Nice example Petr!
Turns out my expected values are causing even more trouble because of this!
I've even gotten negative chi square values (calculated using Cressie and
Read's formula)!
So instead of kludging the error measurement code, I think I'm going to have
t
Aaah, it truly is wonderful, this technology!
I guess I'm going to have to override it a bit though..
Along the lines of
tae <- if (isTRUE(all.equal(obs, exp))) 0 else sum(abs(obs - exp))
Do I like doing this? No. But short of reading the vast literature that
exists on calculation precision - wh
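For the record, the tolerance-based override can be sketched without the all.equal shortcut: drop the differences below a threshold before summing, so floating-point noise never reaches the total. (The matrices below are stand-ins for the real obs/exp tables; sqrt(.Machine$double.eps) is just R's conventional all.equal tolerance, not a recommendation from the thread.)

```r
# A sketch, not the final code: zero out sub-tolerance differences
obs <- matrix(runif(54), 6, 9)     # stand-in for a real 6x9 observed table
exp <- obs + 1e-12                 # "equal" up to floating-point noise
tol <- sqrt(.Machine$double.eps)   # ~1.5e-8, R's usual all.equal tolerance
d   <- abs(obs - exp)
tae <- sum(d[d > tol])             # noise-only differences contribute nothing
```

With genuinely different tables the large differences pass the threshold unchanged, so only the epsilon-scale artifacts are suppressed.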
Thanks Josh and Dan!
I did figure it had something to do with the machine epsilon...
But so what do I do now? I'm calculating the total absolute error over
thousands of tables e.g.:
tae <- sum(abs(obs - exp))
Is there an easy way to keep these ignorable errors from showing up?
And furthermore, wh
Hi there!
I'm not sure I can create a minimal example of my problem, so I'm linking to
a minimal .RData file that has only two objects: obs and exp, each is a 6x9
matrix. http://dl.dropbox.com/u/10364753/test.RData
(I hope this is acceptable mailing list etiquette!)
Here's
OK, for the record, this is not my homework, thanks for asking!
Also, I am sure I could have phrased my question more eloquently, but (hence
the newbie qualifier) I didn't. The code I posted was for the plot I want,
only smoothed i.e. not based on random sampling from the distribution.
Dennis:
Thanks, but that wasn't what I was going for. Like I said, I know how to do a
simple chi-square density plot with dchisq().
What I'm trying to do is chi-square / degrees of freedom. Hence
rchisq(10, i)/i.
How do I do that with dchisq?
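For the archive, the smooth version follows from a change of variables: if X ~ chi-square(k), then Y = X/k has density f_Y(y) = k * dchisq(k*y, df = k). A sketch (the df values are picked only for illustration):

```r
# Density of Y = X/k via change of variables: f_Y(y) = k * dchisq(k*y, k)
curve(dchisq(x, df = 1), from = 0, to = 3, ylim = c(0, 1.2),
      xlab = "chi-square / df", ylab = "density")
for (k in c(5, 10, 50))
  curve(k * dchisq(k * x, df = k), add = TRUE)
```

No random sampling or density() smoothing involved; each curve is the exact density of chi-square divided by its degrees of freedom.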
Hi! This is going to be a real newbie question, but I can't figure it out.
I'm trying to plot densities of various functions of chi-square. A simple
chi-square plot I can do with dchisq(). But e.g. chi.sq/degrees of freedom I
only know how to do using density(rchisq()/df). For example:
plot(1,
ze for not being more precise.
Thanks guys!
Maja.
Ben Bolker wrote:
>
> David Winsemius comcast.net> writes:
>
>>
>>
>> On Jan 17, 2010, at 8:17 PM, maiya wrote:
>>
>> >
>> > There must be a very basic thing I am not getting...
>
There must be a very basic thing I am not getting...
I'm working with some entropy functions and the convention is to use
log(0)=0.
So I wrote a function:
llog <- function(x) {
  if (x == 0) 0 else log(x)
}
which seems to work fine for individual numbers e.g.
> llog(0/2)
[1] 0
but if I try whole ve
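(The scalar if() only tests the first element of a vector, which is presumably where this was going. A vectorized sketch uses ifelse() instead; note that ifelse() still evaluates log() on the zeros, producing -Inf values that are then discarded.)

```r
# Vectorized variant: ifelse() selects elementwise, unlike if ()
llog <- function(x) ifelse(x == 0, 0, log(x))
llog(c(0, 4))   # 0 and log(4)
```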
Hi everyone!
This is a ridiculously simple problem, I just can't seem to find the
solution!
All I need is something equivalent to
sum(is.na(x))
but instead of counting missing values, to count empty cells (with a value
of 0).
A naive attempt with is.empty didn't work :)
Thanks!
Maja
Oh, a
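(For the archive: the zero-cell analogue of sum(is.na(x)) is just a logical comparison, with na.rm = TRUE in case the data also contain NAs. The toy vector below is illustrative only.)

```r
x <- c(0, 1, NA, 0, 3)        # toy data, for illustration
sum(x == 0, na.rm = TRUE)     # counts the zero cells: 2
```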
Cool! Thanks for the sampling and ff tips! I think I've figured it out now
using sampling...
I'm getting a quad-core, 4GB RAM computer next week, will try it again using
a 64 bit version :)
Thanks for your time!!!
Maja
tlumley wrote:
>
> On Tue, 10 Nov 2009, maiya wrote:
&
64-bit version of R you will probably not be able
> to have the whole file in memory at one time.
>
> On Tue, Nov 10, 2009 at 7:10 AM, maiya wrote:
>>
>> I'm trying to import a table into R the file is about 700MB. Here's my
>> first
>> try:
>>
>&g
I'm trying to import a table into R the file is about 700MB. Here's my first
try:
> DD<-read.table("01uklicsam-20070301.dat",header=TRUE)
Error: cannot allocate vector of size 15.6 Mb
In addition: Warning messages:
1: In scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, :
Reac
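A sketch of the usual memory-saving arguments to read.table(); the tiny temp file stands in for the real 700MB one, and the all-integer colClasses is an assumption about the data, not something stated in the thread:

```r
tf <- tempfile(fileext = ".dat")
writeLines(c("a b c", "1 2 3", "4 5 6"), tf)   # stand-in for the big file
DD <- read.table(tf, header = TRUE,
                 colClasses = "integer",  # skip per-column type guessing
                 nrows = 2,               # lets R pre-allocate the result
                 comment.char = "")       # disable comment scanning
```

Even so, a 700MB file in 32-bit R may simply not fit; sampling rows or the ff package (both suggested later in this thread) are the fallback.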
tions=test[,1:2],
> + col.segments=c("gray90", "gray"),draw.segments=TRUE, scale=FALSE)
>
> --
> Gregory (Greg) L. Snow Ph.D.
> Statistical Data Center
> Intermountain Healthcare
> greg.s...@imail.org
> 801.408.8111
>
>
>> -Original Message
Wow! Thank you for that Ted, a wonderfully comprehensive explanation and now
everything makes perfect sense!!
Regarding your last point, I would love to hear other people's experience. I
myself, as a complete newbie in both R and LaTeX, am perhaps not the best
judge... But there are several graph
Solution!!
Peter, that seems to do the trick!
dev.copy2eps(file="test.eps", useKerning=FALSE)
correctly places the labels without splitting them!
the same also works with postscript() of course.
I also found another thread where this was solved
http://www.nabble.com/postscript-printer-breakin
OK, this is really weird!
here's an example code:
t1<-c(1,2,3,4)
t2<-c(4,2,4,2)
plot(t1~t2, xlab="exp1", ylab="exp2")
dev.copy2eps(file="test.eps")
that all seems fine...
until you look at the eps file created, where for some weird reason, if you
scroll down to the end, the code reads:
/Font1
Hi!
I have a dataset with three columns -the first two refer to x and y
coordinates, the last one are odds ratios.
I'd like to plot the data with x and y coordinates and the odds ratio shown
as a fourfold plot, which I prefer to do using the stars function.
Unfortunately the stars option in sym
I realise that in the case of loglin the parameters are calculated post
festum from the cell frequencies,
however other programmes that use Newton-Raphson as opposed to IPF work the
other way round, right?
In which case one would expect the output of parameters to be limited to the
particular cont
I am fairly new to log-linear modelling, so as opposed to trying to fit
models, I am still trying to figure out how it actually works - hence I am
looking at the interpretation of parameters. Now it seems most people skip
this part and go directly to measuring model fit, so I am finding very few
Marc, it's the second "expansion" type transformation I was after, although
your expand.dft looks quite complicated? here's what I finally came up with -
the bold lines correspond to what expand.dft does?
> orig <- matrix(c(40, 5, 30, 25), c(2, 2))
> orig
     [,1] [,2]
[1,]   40   30
[2,]    5   25
>
ion)
hope this is clearer now!
maja
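(The expansion itself can be written compactly by replicating each cell's index pair Freq times, which is essentially what expand.dft boils down to; variable names below are illustrative.)

```r
orig  <- matrix(c(40, 5, 30, 25), nrow = 2)
tab   <- as.data.frame(as.table(orig))         # cell indices plus Freq
cases <- tab[rep(seq_len(nrow(tab)), tab$Freq),
             c("Var1", "Var2")]                # one row per observation
nrow(cases)                                    # 100 = sum(orig)
```

Tabulating cases back with table(cases$Var1, cases$Var2) recovers the original frequency matrix, which is a quick sanity check on the expansion.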
jholtman wrote:
>
> Not exactly clear what you are asking for. Your data.frame.table does not
> seem related to the original 'orig'. What exactly are you expecting as
> output?
>
> On Wed, May 21, 2008 at 10:16 PM,
I apologise for the trivialness of this post - but I've been searching the
forum without luck - probably simply because it's late and my brain is
starting to go..
i have a frequency table as a matrix:
orig<-matrix(c(40,5,30,25), c(2,2))
orig
     [,1] [,2]
[1,]   40   30
[2,]    5   25
i basic
Hi!
(a complete newbie, but will not give up easily!)
I was wondering if there is any way to decouple the axis and tick mark
widths? As I understand they are both controlled by the lwd setting, and
cannot be controlled independently? For example I might want to create major
and minor ticks, which
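One sketch along those lines (the widths and tick positions are illustrative): suppress the default axes and rebuild them from separate axis() calls, since axis() takes lwd for the axis line and lwd.ticks for the ticks independently.

```r
plot(1:10, axes = FALSE, xlab = "x", ylab = "y")
axis(1, lwd = 2, lwd.ticks = 0)              # thick axis line, no ticks
axis(1, lwd = 0, lwd.ticks = 1)              # thin major ticks on top of it
axis(1, at = seq(1, 10, by = 0.5), labels = FALSE,
     lwd = 0, lwd.ticks = 0.5, tcl = -0.25)  # shorter, thinner minor ticks
axis(2)
box()
```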