Re: [R] open source and R

2005-11-14 Thread Ted Harding
On 14-Nov-05 Prof Brian Ripley wrote:
> On Sun, 13 Nov 2005 [EMAIL PROTECTED] wrote:
> 
> [...]
> 
>> There is one aspect though where R users are in the cold when
>> it comes to C and FORTRAN. If you want to understand the function
>> 'eigen', say, then you can "?eigen" to learn about its usage.
>> You can enter "eigen" to see the R code, and indeed that is
>> not too incomprehensible. But then you find
>>
>>  .Fortran("ch", n, n, xr, xi, values = dbl.n,
>>   !only.values, vectors = xr, ivectors = xi, dbl.n,
>>   dbl.n, double(2 * n), ierr = integer(1),
>>   PACKAGE = "base")
>>
>> and similar for "rs", "cg" and "rg". Where's the help for
>> these? Nowhere obvious! In fact you have to go to the source
>> code, locate the FORTRAN routines, and study these, hoping
>> that enough helpful comments have been included to steer
>> your study. So it is a much more formidable task, especially
>> if you are having to learn the language at the same time.
> 
> That is an unfair comment.  The help page for eigen explains what 
> routines are used and gives you references to books describing them.
> So the help _is_ in the most obvious place.

Apologies for misleading wording. This was not meant as a criticism
of R in any way, but as an illustration of the theme that, sooner
or later, you "drop through the floor" of what R can provide in
the way of explicit explanation. So, while R's help is indeed helpful
in this case in indicating an orientation for your travels in that
outside world, "out in the cold" as it were, users are then on
their own as far as R is concerned. This is of course inevitable,
and the comment was intended as part of the general response to
Robert that if you want to study how R does things you are led
to study how other software, on which R depends, does things.

Best wishes,
Ted.



E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
Fax-to-email: +44 (0)870 094 0861
Date: 14-Nov-05   Time: 08:55:35
-- XFMail --

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] correlating irregular time series

2005-11-14 Thread Christophe Pouzat
Hi Paul,

Here is how an amateur statistician deals with this problem when 
analyzing spike trains from simultaneously recorded neurons.

Start by estimating the "hazard function" h(t) of your several point 
processes (if you have a copy of MASS, check out chapter 13; if you 
have a copy of Jim Lindsey, "The Statistical Analysis of Stochastic 
Processes in Time", check out chap 3 & 4; the hazard function is also 
called the "conditional intensity" or the "stochastic intensity").

In practice, if you have a renewal process, meaning that the successive 
intervals between your event times are independent, you can first 
estimate the "Inter Event Interval" pdf, f(t), and its cumulative 
distribution function F(t). h(t) is then given by:

h(t) = f(t) / (1-F(t)),

where the quantity S(t) = 1-F(t) is often called the survivor function.

Fine, now if your processes are well approximated by renewal processes, 
you can look for the distribution of "time to next event" (TTN) and 
"time to former event" (TTF). By that I mean that for each of the black 
events of your figure, you must get the interval separating it from the 
last red event preceding it (the time to former) and the next red event 
following it (the time to next). Under the null hypothesis of no 
correlation these two random variables have the same pdf, given by:

TTN(i) = S(i) / <IEI>,

where S(i) in that case is the survivor function of the red (test) 
process and <IEI> is its inter-event-interval expected value.
Using this approach I typically estimate the TTN and TTF pdfs with 
histograms and compare these histograms to their expected values under 
the null hypothesis. A warning though: most of the time I have many more 
events than you seem to have in your figure.
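As a minimal R sketch of that recipe (simulated intervals stand in for real spike data, and the variable names are mine; the estimate is only reliable where the survivor function is not too close to 0):

```r
## Estimate f(t), F(t) and the hazard h(t) = f(t) / (1 - F(t))
## from inter-event intervals; simulated data stand in for real spikes.
set.seed(1)
iei <- rexp(1000, rate = 2)        # intervals of a Poisson process, hazard = 2

f    <- density(iei, from = 0)     # kernel estimate of the pdf f(t)
Fhat <- ecdf(iei)                  # empirical cdf F(t)
S    <- 1 - Fhat(f$x)              # survivor function on the density grid
h    <- f$y / S                    # hazard estimate

## Inspect it where the estimate is stable (S not too small):
plot(f$x[S > 0.05], h[S > 0.05], type = "l", xlab = "t", ylab = "h(t)")
```

For a homogeneous Poisson process the curve should hover around the rate (here 2), apart from the usual kernel boundary bias near t = 0.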

Let me know if any of this makes sense.

Christophe.

paul sorenson wrote:

>I have some time stamped events that are supposed to be unrelated.
>
>I have plotted them and that assumption does not appear to be valid. 
>http://metrak.com/tmp/sevents.png is a plot showing three sets of events 
>over time.  For the purpose of this exercise, the Y value is irrelevant. 
>  The series are not sampled at the same time and are not equispaced 
>(just events in a log file).
>
>The plot is already pretty convincing but requires a human-in-the-loop 
>to zoom in on "hot" areas and then visually interpret the result.  I 
>want to calculate some index of the events' temporal relationship.
>
>I think the question I am trying to ask is something like: "If event B 
>occurs, how likely is it that an event A occurred at almost the same time?".
>
>Can anyone suggest an established approach that could provide some 
>further insight into this relationship?  I can think of a fairly basic 
>approach where I start out with the ecdf of the time differences but I 
>am guessing I would be reinventing some wheel.
>
>Any tips would be most appreciated.
>
>cheers
>


-- 
A Master Carpenter has many tools and is expert with most of them. If you
only know how to use a hammer, every problem starts to look like a nail.
Stay away from that trap.
Richard B Johnson.
--

Christophe Pouzat
Laboratoire de Physiologie Cerebrale
CNRS UMR 8118
UFR biomedicale de l'Universite Paris V
45, rue des Saints Peres
75006 PARIS
France

tel: +33 (0)1 42 86 38 28
fax: +33 (0)1 42 86 38 30
web: www.biomedicale.univ-paris5.fr/physcerv/C_Pouzat.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] matrix subset

2005-11-14 Thread vincent
Marc Schwartz wrote:

> In R version 2.1.0, a matrix method was added to the subset() function,
> so I am guessing that you are several versions out of date. Please
> upgrade to the latest version, which is 2.2.0, where you will get:

I upgraded and it works fine.
Thanks for the hint.

Many thanks also to the authors of the function, Peter Dalgaard
and Prof Ripley (I suspect Prof Ripley is responsible for
the matrix improvement?).

Many thanks also to all the contributors to the last
version of this wonderful software.

Vincent

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] correlating irregular time series

2005-11-14 Thread paul sorenson
I don't have the texts you mention but I get the general idea.  The 
diagram I posted shows only a small fraction of the events I have.

Thank you

Christophe Pouzat wrote:
> Hi Paul,
> 
> Here is how an amateur statistician deals with this problem when 
> analyzing spike trains from simultaneously recorded neurons.
> 
> Start by estimating the "hazard function" h(t) of your several point 
> processes (if you have a copy of MASS, check out chapter 13; if you 
> have a copy of Jim Lindsey, "The Statistical Analysis of Stochastic 
> Processes in Time", check out chap 3 & 4; the hazard function is also 
> called the "conditional intensity" or the "stochastic intensity").
> 
> In practice, if you have a renewal process, meaning that the successive 
> intervals between your event times are independent, you can first 
> estimate the "Inter Event Interval" pdf, f(t), and its cumulative 
> distribution function F(t). h(t) is then given by:
> 
> h(t) = f(t) / (1-F(t)),
> 
> where the quantity S(t) = 1-F(t) is often called the survivor function.
> 
> Fine, now if your processes are well approximated by renewal processes, 
> you can look for the distribution of "time to next event" (TTN) and 
> "time to former event" (TTF). By that I mean that for each of the black 
> events of your figure, you must get the interval separating it from the 
> last red event preceding it (the time to former) and the next red event 
> following it (the time to next). Under the null hypothesis of no 
> correlation these two random variables have the same pdf, given by:
> 
> TTN(i) = S(i) / <IEI>,
> 
> where S(i) in that case is the survivor function of the red (test) 
> process and <IEI> is its inter-event-interval expected value.
> Using this approach I typically estimate the TTN and TTF pdfs with 
> histograms and compare these histograms to their expected values under 
> the null hypothesis. A warning though: most of the time I have many more 
> events than you seem to have in your figure.
> 
> Let me know if any of this makes sense.
> 
> Christophe.
> 
> paul sorenson wrote:
> 
>> I have some time stamped events that are supposed to be unrelated.
>>
>> I have plotted them and that assumption does not appear to be valid. 
>> http://metrak.com/tmp/sevents.png is a plot showing three sets of 
>> events over time.  For the purpose of this exercise, the Y value is 
>> irrelevant.  The series are not sampled at the same time and are not 
>> equispaced (just events in a log file).
>>
>> The plot is already pretty convincing but requires a human-in-the-loop 
>> to zoom in on "hot" areas and then visually interpret the result.  I 
>> want to calculate some index of the events' temporal relationship.
>>
>> I think the question I am trying to ask is something like: "If event B 
>> occurs, how likely is it that an event A occurred at almost the same 
>> time?".
>>
>> Can anyone suggest an established approach that could provide some 
>> further insight into this relationship?  I can think of a fairly basic 
>> approach where I start out with the ecdf of the time differences but I 
>> am guessing I would be reinventing some wheel.
>>
>> Any tips would be most appreciated.
>>
>> cheers
>>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] www.krankenversicherung.ch News - Krankenkassen - Information Newsletter

2005-11-14 Thread krankenversicherung
Hello
 
More health-insurance savings tips for you!


CONTENTS OF THIS NEWSLETTER
1) Trend 2006 - the trend is towards the family-doctor model
2) Assura offer - do you know Assura's low premiums?
3) Premium comparison
4) Voting - you can now rate your insurer
5) Competition - have you won yet?
6) The cancellation deadline is 30.11.2005


We are pleased to inform you about the current savings options for the year 
2006.


>>> CURRENT TREND FOR 2006
A few days ago we analysed the trend for the coming year. The trend is towards 
family-doctor models. Further details and statistics can be found at the 
following link:
http://www.krankenversicherung.ch/helpart2.cfm?art=trend2006



>>> ASSURA PREMIUMS
Assura has a philosophy of its own, which has kept premiums low for 
Assura members for years.
Get your personal, no-obligation quote here:
http://www.assura.krankenversicherung.ch



>>> COMPARE NOW
At http://vergleich.krankenversicherung.ch you can compare your premiums 
for the year 2006.



>>> VOTING - RATE YOUR HEALTH INSURER NOW
Rate your health insurer now at http://voting.krankenversicherung.ch


>>> COMPETITION - WIN A TRIP - HAVE YOU ALREADY WON A DAILY PRIZE?
100 instant prizes and a flight are up for grabs at 
http://wettbewerb.krankenversicherung.ch . Good luck!


>>> CANCELLATION DEADLINE
Don't forget! The cancellation deadline for basic insurance is 
30.11.2005.


>>> NEXT INFORMATION
We will remind you of the final cancellation deadline once more, in good time. 
A free service 
from Krankenversicherung.ch


>>> INFORMATION - not spam
You are receiving this newsletter because of an order or registration on 
www.krankenversicherung.ch or www.help.ch. You can unsubscribe from this 
newsletter at any time 
using the link below.
 
 
>>> UNSUBSCRIBE
http://www.krankenversicherung.ch/unsubscribe.cfm



Kind regards, and good luck in the competition
www.krankenversicherung.ch and www.help.ch
 
Your newsletter team

___

The Swiss company search engine
HELP Searchengines AG - Badenerstrasse 75 - 8004 Zuerich
www.help.ch  -  www.firmenscout.ch  -  www.produktesuche.ch
mailto:[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] correlating irregular time series

2005-11-14 Thread Christophe Pouzat
Paul,

You can get a version of Lindsey's course at the following address:

http://popgen.unimaas.nl/~jlindsey/manuscripts.html

At the bottom of the page under heading "Courses".

Christophe.

paul sorenson wrote:

> I don't have the texts you mention but I get the general idea.  The 
> diagram I posted shows only a small fraction of the events I have.
>
> Thank you



-- 
A Master Carpenter has many tools and is expert with most of them. If you
only know how to use a hammer, every problem starts to look like a nail.
Stay away from that trap.
Richard B Johnson.
--

Christophe Pouzat
Laboratoire de Physiologie Cerebrale
CNRS UMR 8118
UFR biomedicale de l'Universite Paris V
45, rue des Saints Peres
75006 PARIS
France

tel: +33 (0)1 42 86 38 28
fax: +33 (0)1 42 86 38 30
web: www.biomedicale.univ-paris5.fr/physcerv/C_Pouzat.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] (no subject)

2005-11-14 Thread Soetaert, Karline
Hi,

 

I am trying to solve a model that consists of rather stiff ODEs in R. 

 

I use the package ODEsolve (lsoda) to solve these ODEs.  

 

To speed up the integration, the jacobian is also specified. 

 

Basically, the model is a one-dimensional advection-diffusion problem,
and thus the jacobian is a tridiagonal matrix. 

The size of this jacobian is 100*100.

 

In the original package LSODA it is possible to specify that the
jacobian is banded, which makes its inversion very efficient. 

However, this feature seems to have been removed in the R version. 

 

 

Is there a way to overcome this limitation?

 

Thanks 

 

 

dr. Karline Soetaert

NIOO - CEME

PO box 140

4400 AC  Yerseke

the Netherlands

Phone: ++ 31 113 577487

fax: ++ 31 113 573616

e-mail: [EMAIL PROTECTED]

 

 


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] poker package -- comments?

2005-11-14 Thread Dan Bolser
Duncan Murdoch wrote:
> Over the weekend I wrote a small package to evaluate poker hands and to 
> do some small simulations with them.  If anyone is interested in looking 
> at it, I'd appreciate comments and/or contributions.

How do I install this package?

A README or a hint on the webpage below would be great.


> The package is available at 
> http://www.stats.uwo.ca/faculty/murdoch/software.  (Look at the bottom 
> of the list.)
> 
> So far only the Texas Hold'em variation has been programmed.  There's 
> support for wild cards and fairly general schemes of putting together
> hands for evaluation, so it wouldn't be too hard to add other games. 
> There's no support for betting or simulating different strategies, but 
> again, if you want to write that, it should be possible.
> 
> Here's a quick example, where I've asked it to simulate hands until it 
> came up with one I won.  In the first case I start with a pair of aces 
> and won on the first hand; in the second another player started with 
> aces, and it took 7 hands to find me a winner.
> 
> poker> select.hand(pocket = card("Ah As"), players = 4)
> Showing: 4H 3S 2D 6S 4C
>Rank Name Cards  Value
> 11 Self AH AS   Two pair
> 221 8S 3C   Two pair
> 332 QD KH Pair of 4s
> 443 8H 9D Pair of 4s
> Would win  4  person game
> Required 1 hand.
> 
> poker> select.hand(players = list(card("Ah As"), NULL, NULL))
> Showing: AD 4H 7D 2C 8S
>Rank Name Cards   Value
> 11 Self 6H 5HStraight
> 221 AH AS 3 of a kind
> 332 AC 3C  Pair of As
> 443 9D 6D  A high
> Would win  4  person game
> Required 7 hands.
> 
> Duncan Murdoch
> 

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] name of object

2005-11-14 Thread Claus Atzenbeck
Hi,

I have the following function:

test <- function(x)
{
print(shapiro.test(x))
...
}

The output for "test(sample1$sec)" is:

Shapiro-Wilk normality test

data:  x
W = 0.9447, p-value = 0.5767
...

I would like to see "data: sample1$sec" instead of "data: x", as it
would be when directly called "shapiro.test(sample1$sec)".

How can I do that? I browsed the documentation and other literature, but
did not find any solution.

Thanks.
Claus

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] name of object

2005-11-14 Thread Prof Brian Ripley
On Mon, 14 Nov 2005, Claus Atzenbeck wrote:

> Hi,
>
> I have the following function:
>
>test <- function(x)
>{
>print(shapiro.test(x))
>...
>}
>
> The output for "test(sample1$sec)" is:
>
>Shapiro-Wilk normality test
>
>data:  x
>W = 0.9447, p-value = 0.5767
>...
>
> I would like to see "data: sample1$sec" instead of "data: x", as it
> would be when directly called "shapiro.test(sample1$sec)".
>
> How can I do that? I browsed the documentation and other literature, but
> did not find any solution.

Use substitute().  Something like

test <- function(x)
{
xlab <- substitute(x)
print(eval.parent(substitute(shapiro.test(x), list(x = xlab))))
}

See S Programming section 3.5.
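For this particular case there is also a simpler variant, relying on the fact that shapiro.test() returns an "htest" object whose data.name component is what print() displays (a sketch, not the only way to do it):

```r
test <- function(x)
{
    res <- shapiro.test(x)
    res$data.name <- deparse(substitute(x))  # label with the caller's expression
    print(res)
}

sec <- rnorm(20)
test(sec)   # now shows "data:  sec" rather than "data:  x"
```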

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] how to plot matrix in graphs

2005-11-14 Thread Liaw, Andy
See ?image.

Andy
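As a minimal sketch of what ?image offers, with a made-up 9x9 matrix standing in for Peter's three-by-three blocks (all names here are my own):

```r
## Display a matrix as a grid of coloured boxes.
set.seed(7)
m <- matrix(sample(0:9, 81, replace = TRUE), nrow = 9)

## image() draws z[i, j] with i running along x, so transpose and
## reverse the rows to get the usual "matrix on paper" orientation.
image(1:9, 1:9, t(m[nrow(m):1, ]), col = heat.colors(10),
      axes = FALSE, xlab = "", ylab = "")
box()
```

For circles instead of boxes, symbols(..., circles = ...) can scale circle radii by the magnitudes.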

> From: peter eric
> 
> hello,
>  
> how to plot a matrix (I have multiple matrices) in graphs in 
> terms of colored boxes or circles.
> my matrix looks like
>  
>         A        B        C
> 
>    6 2 3    4 3 2    2 1 7
> A  4 3 1    4 6 8    2 1 6
>    2 7 8    7 8 0    2 3 5
> 
>    5 2 3    4 7 2    2 1 7
> B  4 3 1    4 8 8    3 1 6
>    9 7 8    7 8 0    6 3 5
> 
>    1 2 3    4 3 2    2 1 7
> C  4 3 1    4 6 8    2 1 6
>    2 7 8    7 8 0    2 3 5
>  
> And my graph should look like (in terms of colored boxes or 
> circles according to the magnitude of the nos)
>  
>  
>         A          B          C
> 
>    O O O      O O O      O O O
> A  O O O      O O O      O O O
>    O O O      O O O      O O O
> 
>    O O O      O O O      O O O
> B  O O O      O O O      O O O
>    O O O      O O O      O O O
> 
>    O O O      O O O      O O O
> C  O O O      O O O      O O O
>    O O O      O O O      O O O
>  
> Can you suggest me some ways of doing this..
>  
> thank you
>  
> best regards,
> peter.
>  
> Research student,
> Fraunhofer IPT,
> Germany.
> 
> 
> 
> 
>   [[alternative HTML version deleted]]
> 

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] poker package -- comments?

2005-11-14 Thread Duncan Murdoch
On 11/14/2005 7:39 AM, Dan Bolser wrote:
> Duncan Murdoch wrote:
>> Over the weekend I wrote a small package to evaluate poker hands and to 
>> do some small simulations with them.  If anyone is interested in looking 
>> at it, I'd appreciate comments and/or contributions.
> 
> How do I install this package?

It depends on which version of R you're running, but on windows, you
just download the zip file, and then in R, choose "Install package from
local zip file".

By the way, I've rearranged the files on my web page, so if you need to
download it again you'll need to follow a few different links.  It still
starts at http://www.stats.uwo.ca/faculty/murdoch/software.

Duncan
> 
> A README or a hint on the webpage below would be great.
> 
> 
>> The package is available at 
>> http://www.stats.uwo.ca/faculty/murdoch/software.  (Look at the bottom 
>> of the list.)
>> 
>> So far only the Texas Hold'em variation has been programmed.  There's 
>> support for wild cards and fairly general schemes of putting together
>> hands for evaluation, so it wouldn't be too hard to add other games. 
>> There's no support for betting or simulating different strategies, but 
>> again, if you want to write that, it should be possible.
>> 
>> Here's a quick example, where I've asked it to simulate hands until it 
>> came up with one I won.  In the first case I start with a pair of aces 
>> and won on the first hand; in the second another player started with 
>> aces, and it took 7 hands to find me a winner.
>> 
>> poker> select.hand(pocket = card("Ah As"), players = 4)
>> Showing: 4H 3S 2D 6S 4C
>>Rank Name Cards  Value
>> 11 Self AH AS   Two pair
>> 221 8S 3C   Two pair
>> 332 QD KH Pair of 4s
>> 443 8H 9D Pair of 4s
>> Would win  4  person game
>> Required 1 hand.
>> 
>> poker> select.hand(players = list(card("Ah As"), NULL, NULL))
>> Showing: AD 4H 7D 2C 8S
>>Rank Name Cards   Value
>> 11 Self 6H 5HStraight
>> 221 AH AS 3 of a kind
>> 332 AC 3C  Pair of As
>> 443 9D 6D  A high
>> Would win  4  person game
>> Required 7 hands.
>> 
>> Duncan Murdoch
>> 

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] odesolve with banded Jacobian [was "no subject"]

2005-11-14 Thread Martin Maechler
> "KSoet" == Soetaert, Karline <[EMAIL PROTECTED]>
> on Mon, 14 Nov 2005 13:20:24 +0100 writes:

KSoet> Hi, I am trying to solve a model that consists of
KSoet> rather stiff ODEs in R.

KSoet> I use the package ODEsolve (lsoda) to solve these
KSoet> ODEs.
 
KSoet> To speed up the integration, the jacobian is also
KSoet> specified.
 
KSoet> Basically, the model is a one-dimensional
KSoet> advection-diffusion problem, and thus the jacobian is
KSoet> a tridiagonal matrix.

KSoet> The size of this jacobian is 100*100.

KSoet> In the original package LSODA it is possible to
KSoet> specify that the jacobian is banded, which makes its
KSoet> inversion very efficient.

KSoet> However, this feature seems to have been removed in
KSoet> the R version.
 
KSoet> Is there a way to overcome this limitation?

Yes.  But probably not a very easy one; maybe even a very
cumbersome one... ;-)

Note however that questions like these should typically be
addressed at the package author - which you can always quickly
find out via

  > packageDescription("odesolve")
  Package: odesolve
  Version: 0.5-12
  Date: 2004/10/25
  Title: Solvers for Ordinary Differential Equations
  Author: R. Woodrow Setzer <[EMAIL PROTECTED]>
  Maintainer: R. Woodrow Setzer <[EMAIL PROTECTED]>
  Depends: R (>= 1.4.0)
  Description: This package provides an interface for the ODE solver
  lsoda. ODEs are expressed as R functions or as compiled code.
  ...

 
I've CC'ed this e-mail to Woodrow to help you for once


 <..>

KSoet>  [[alternative HTML version deleted]]

KSoet> __
KSoet> .
KSoet> PLEASE do read the posting guide!
KSoet> http://www.R-project.org/posting-guide.html

if you do read that guide, it will tell you 

- why you should always use a 'Subject' for your e-mails
- why HTML-ified e-mails are not much liked and what you can do
   about it.

Regards,
Martin Maechler, ETH Zurich

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] bug/feature with barplot?

2005-11-14 Thread Karin Lagesen

I have found a bug/feature with barplot that at least to me shows
undesirable behaviour. When using barplot and plotting fewer
groups/levels/factors(I am unsure what they are called) than the number
of colors stated in a col statement, the colors wrap around such that
the colors are not fixed to one group. This is mostly problematic when
I make R figures using scripts, since I sometimes have empty input
groups. I have in these cases experienced labeling the empty group as
red, and then seeing a bar being red when that bar is actually from a
different group.

Reproducible example (I hope):

barplot(VADeaths, beside=TRUE, col=c("red", "green", "blue", "yellow", "black"))
barplot(VADeaths[1:4,], beside=TRUE, col=c("red", "green", "blue", "yellow", 
"black"))

Now, I don't know if this is a bug or a feature, but it sure bugged me...:)

Karin
-- 
Karin Lagesen, PhD student
[EMAIL PROTECTED]
http://www.cmbn.no/rognes/

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Curve fitting tutorial / clue stick?

2005-11-14 Thread Allen S. Rout


Working through the R archives and webspace, I've mostly proved to myself that
I don't know enough about what statisticians call "Curve Fitting" to even
begin translating the basics.


I'm a sysadmin, and have collected a variety of measurements of my systems,
and I can draw pretty pictures in R showing what has happened.  People are
happy, customers feel empowered.  Whee!


Now, I want to take my corpus of data and make a prediction based on it; In
statistics-moron speak, I want to draw a line or a simple curve across my
extant graph, and figure out where the predictive curve passes threshold 'T',
and then graph that too.


I thought I'd be telling R something like:  

- I think this is exponential.  Here's the data.  Give me the best function
  you can come up with, and tell me "how good" the fit is.

- I think this is quadratic.  Here's the data.  Give me the best function
  you can come up with, and tell me "how good" the fit is.



Can someone point me at a spot in the docs which might be suitable for my
level of ignorance?  
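The two wishes above map roughly onto lm() for anything polynomial and nls() for general nonlinear forms such as exponentials. A sketch with simulated data, since I don't have the real measurements (the variables, threshold, and starting values are all made up):

```r
set.seed(42)
x <- 1:50
y <- 5 * exp(0.08 * x) + rnorm(50, sd = 2)   # stand-in for the measurements

## "I think this is quadratic": a linear model in powers of x
quad <- lm(y ~ poly(x, 2, raw = TRUE))
summary(quad)$r.squared                      # one notion of "how good"

## "I think this is exponential": nonlinear least squares, needs start values
expo <- nls(y ~ a * exp(b * x), start = list(a = 1, b = 0.05))

## Predict beyond the data and find where the curve crosses a threshold
thresh <- 500
xf <- 1:100
pred <- predict(expo, newdata = list(x = xf))
xf[which(pred > thresh)[1]]                  # first x where the fit exceeds thresh
```

summary() of either fit reports residual error as well; comparing the fits' residuals is one way to decide which functional form describes the data better.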


- Allen S. Rout

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] [<- and indexing for zoo objects

2005-11-14 Thread Brandt, T. (Tobias)
Hi
 
I've been greatly enjoying the functionality the zoo package offers.
However I've hit a snag with the following code
 
> a <- zoo(matrix(1:10,5,2), 2001:2005)
> a
  
2001  1  6
2002  2  7
2003  3  8
2004  4  9
2005  5 10
> a[I(2003), 2]
  
2003 8
> a[I(2003), 2] <- NA
Error: subscript out of bounds
> 

I've also tried
 
> coredata(a[I(2003), 2]) <- NA
Error: subscript out of bounds
> 
 
but that doesn't work either.  I can of course do
 
> ix <- which(index(a)==2003)
> a[ix, 2] <- NA
> a
  
2001  1  6
2002  2  7
2003  3 NA
2004  4  9
2005  5 10
> 

which gives me the desired result but I feel that a timeseries class should
really be able to handle the first syntax since with the workaround I'm back
to the way of doing things before I had timeseries objects.
 
Am I missing something or any comments?
 

Regards

Tobias Brandt
 


Nedbank Limited Reg No 1951/09/06
Directors: WAM Clewlow (Chairman)  Prof MM Katz (Vice-chairman)  ML Ndlovu 
(Vice-chairman)  TA Boardman (Chief Executive)
CJW Ball  MWT Brown  RG Cottrell  BE Davison  N Dennis (British)  MA Enus-Brey  
Prof B de L Figaji  RM Head (British)
RJ Khoza  JB Magwaza  ME Mkwanazi  JVF Roberts (British)  CML Savage  GT Serobe 
 JH Sutcliffe (British)
Company Secretary: GS Nienaber 16.08.2005

This email and any accompanying attachments may contain confidential and 
proprietary information.  This information is private and protected by law and, 
accordingly, if you are not the intended recipient, you are requested to delete 
this entire communication immediately and are notified that any disclosure, 
copying or distribution of or taking any action based on this information is 
prohibited.

Emails cannot be guaranteed to be secure or free of errors or viruses.  The 
sender does not accept any liability or responsibility for any interception, 
corruption, destruction, loss, late arrival or incompleteness of or tampering 
or interference with any of the information contained in this email or for its 
incorrect delivery or non-delivery for whatsoever reason or for its effect on 
any electronic device of the recipient.

If verification of this email or any attachment is required, please request a 
hard-copy version.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Coercion of percentages by as.numeric

2005-11-14 Thread Brandt, T. (Tobias)
Hi

Given that things like the following work
 
 > a <- c("-.1"," 2.7 ","B")
> a
[1] "-.1"   " 2.7 " "B"
> as.numeric(a)
[1] -0.1  2.7   NA
Warning message:
NAs introduced by coercion 
> 

I naively expected that the following would behave differently.
 
 > b <- c('10%', '-20%', '30.0%', '.40%')
> b
[1] "10%"   "-20%"  "30.0%" ".40%" 
> as.numeric(b)
[1] NA NA NA NA
Warning message:
NAs introduced by coercion 
> 

> version
 _  
platform i386-pc-mingw32
arch i386   
os   mingw32
system   i386, mingw32  
status  
major2  
minor2.0
year 2005   
month10 
day  06 
svn rev  35749  
language R  
> 

Various RSiteSearches with terms like "percentage" and "coercion" yielded
nothing.
 
Does anyone know how to do this elegantly?
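There is no built-in coercion for "%" as far as I know; one common sketch is to strip the sign and rescale:

```r
b <- c("10%", "-20%", "30.0%", ".40%")
pct <- as.numeric(sub("%", "", b)) / 100
pct   # 0.100 -0.200 0.300 0.004
```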
 
 
Thanks
 
Tobias Brandt
 
 
P.S. Apologies if this appears on the list twice but I suspect an earlier
post was blocked since it was in html format.




__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] bug/feature with barplot?

2005-11-14 Thread Marc Schwartz (via MN)
On Mon, 2005-11-14 at 15:55 +0100, Karin Lagesen wrote:
> I have found a bug/feature with barplot that at least to me shows
> undesirable behaviour. When using barplot and plotting fewer
> groups/levels/factors (I am unsure what they are called) than the number
> of colors stated in a col statement, the colors wrap around such that
> the colors are not fixed to one group. This is mostly problematic when
> I make R figures using scripts, since I sometimes have empty input
> groups. I have in these cases experienced labeling the empty group as
> red, and then seeing a bar being red when that bar is actually from a
> different group.
> 
> Reproducible example (I hope):
> 
> barplot(VADeaths, beside=TRUE, col=c("red", "green", "blue", "yellow", 
> "black"))
> barplot(VADeaths[1:4,], beside=TRUE, col=c("red", "green", "blue", "yellow", 
> "black"))
> 
> Now, I don't know if this is a bug or a feature, but it sure bugged me...:)
> 
> Karin

Most definitely not a bug.

As with many vectorized function arguments, the 'col' vector is
recycled as required to match the length of the data being plotted.

In this case, the number of colors (5) does not match the number of
groups (4). Thus, the colors fall "out of sync" with the groups and you
get the result you observed.

Not unexpected behavior.

You should adjust your code and the function call so that the number of
groups matches the number of colors. Something along the lines of the
following:

col <- c("red", "green", "blue", "yellow", "black")
no.groups <- 4
barplot(VADeaths[1:no.groups, ], beside = TRUE, col = col[1:no.groups])


Now try:

 no.groups <- 5
 barplot(VADeaths[1:no.groups, ], beside = TRUE, col = col[1:no.groups])

 no.groups <- 3
 barplot(VADeaths[1:no.groups, ], beside = TRUE, col = col[1:no.groups])


HTH,

Marc Schwartz



Re: [R] [<- and indexing for zoo objects

2005-11-14 Thread Achim Zeileis
Tobias,

thanks for the report:

> > a[I(2003), 2] <- NA
> Error: subscript out of bounds

Yes, we would have to write a [<-.zoo method for that, currently we
rely on the corresponding methods for matrices and vectors. I'll add it
to the WISHLIST and try to add this functionality for the next zoo
release.

All the indexing can also be done via window<-:

window(a, start = 2003, end = 2003)[,2] <- NA

which would currently be the preferred solution.

thx,
Z



Re: [R] [<- and indexing for zoo objects

2005-11-14 Thread Gabor Grothendieck
On 11/14/05, Brandt, T. (Tobias) <[EMAIL PROTECTED]> wrote:
>
>
> Hi
>
> I've been greatly enjoying the functionality the zoo package offers.
> However I've hit a snag with the following code
>
> > a <- zoo(matrix(1:10,5,2), 2001:2005)
> > a
>
> 2001  1  6
> 2002  2  7
> 2003  3  8
> 2004  4  9
> 2005  5 10
> > a[I(2003), 2]
>
> 2003 8
> > a[I(2003), 2] <- NA
> Error: subscript out of bounds
> >
>
> I've also tried
>
> > coredata(a[I(2003), 2]) <- NA
> Error: subscript out of bounds
> >
>
> but that doesn't work either.  I can of course do


Try this:

   window(a, 2003)[,2] <- NA

See

   ?"window<-.zoo"

for more info.



[R] effect sizes for Wilcoxon tests

2005-11-14 Thread Claus Atzenbeck
Hello,

I use t.test for normal distributed and wilcox.test for non-normal
distributed samples.

It is easy to write a function for t.test that calculates the effect
size, because all parts of the formula are available from the t.test
result: r = sqrt(t*t / (t*t + df))

However, for Wilcoxon tests, the formula for effect sizes is:
r = Z / sqrt(N)

I wonder how I can calculate the Z-score in R for a Wilcoxon test.
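
One commonly used workaround (my own suggestion, not something stated in
?wilcox.test) is to recover an approximate Z from the two-sided p-value
when the normal approximation is in force:

```r
# Sketch: back out an approximate Z-score from wilcox.test()'s p-value.
# This assumes the normal approximation is used (exact = FALSE); the
# sign of Z is lost in the transformation and must be judged from the
# data. The function name is made up for illustration.
wilcox_effect <- function(x, y) {
  wt <- wilcox.test(x, y, exact = FALSE, correct = FALSE)
  z  <- qnorm(wt$p.value / 2, lower.tail = FALSE)  # |Z|
  r  <- z / sqrt(length(x) + length(y))            # r = Z / sqrt(N)
  list(z = z, r = r)
}
```

Dropping the continuity correction here is also an assumption, made so
that the back-transformation from p to Z is exact.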

BTW, would it be correct to call "wilcox.test(..., paired=F)" a
"Mann-Whitney test" in a report?  If I understand the documentation
(?wilcox.test) correctly, R does not actually use the Mann-Whitney
form, but the equivalent Wilcoxon test.

Thanks for clarification.
Claus



Re: [R] Coercion of percentages by as.numeric

2005-11-14 Thread Gabor Grothendieck
On 11/14/05, Brandt, T. (Tobias) <[EMAIL PROTECTED]> wrote:
> Hi
>
> Given that things like the following work
>
>  > a <- c("-.1"," 2.7 ","B")
> > a
> [1] "-.1"   " 2.7 " "B"
> > as.numeric(a)
> [1] -0.1  2.7   NA
> Warning message:
> NAs introduced by coercion
> >
>
> I naively expected that the following would behave differently.
>
>  > b <- c('10%', '-20%', '30.0%', '.40%')
> > b
> [1] "10%"   "-20%"  "30.0%" ".40%"
> > as.numeric(b)
> [1] NA NA NA NA
> Warning message:
> NAs introduced by coercion

Try this:

as.numeric(sub("%", "e-2", b))



Re: [R] Curve fitting tutorial / clue stick?

2005-11-14 Thread Jean-Luc Fontaine

Allen S. Rout wrote:

>
> Working through the R archives and webspace, I've mostly proved to
> myself that I don't know enough about what statisticians call
> "Curve Fitting" to even begin translating the basics.
>
>
> I'm a sysadmin,

I have just the thing for you (citing myself):
(http://moodss.sourceforge.net/ )
The major new feature planned and being worked on is... predicting the
future. With the help of the R project statistical engine (a
remarkable piece of software), the user will be able to receive emails
such as: "the disk on server S... is likely to become full in 3
weeks". The statistical model will be automatically determined by
moodss in the new predictor viewer, from traditional models (ARIMA,
...) and neural networks... Expect the new release by the end of 2005.

Unfortunately, that'll take a few months...

--
Jean-Luc Fontaine  http://jfontain.free.fr/



[R] Little's Chi Square test for MCAR?

2005-11-14 Thread Rohit Vishal Kumar
Hi.

Can anyone point me to a package in R which implements "Little's Chi
Square test" for MCAR?
The problem is that I have around 60 behavioural variables on a 6-point
categorical scale which I need to test for MCAR and MAR. What I can make
out from preliminary analysis is that moderate (0.30 to 0.60)
correlations may be present in several variable pairs, leading me to
suspect that the data may not be MCAR or MAR. However, I need some more
"concrete" proof.

Any help - onlist or offlist - would be greatly appreciated.

Thanks in Advance

Rohit Vishal Kumar
Ph.D. Student (Calcutta) India



Re: [R] Little's Chi Square test for MCAR?

2005-11-14 Thread Dimitris Rizopoulos
This depends on the analysis you want to do; Maximum Likelihood will 
give you unbiased results even under MAR. In that case the more 
relevant question is whether the missing-data mechanism is MNAR, in 
which case ML might give you biased results. Unfortunately you cannot 
test for MNAR without making assumptions that are sometimes very 
strong.

Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://www.med.kuleuven.be/biostat/
 http://www.student.kuleuven.be/~m0390867/dimitris.htm


- Original Message - 
From: "Rohit Vishal Kumar" <[EMAIL PROTECTED]>
To: 
Sent: Monday, November 14, 2005 5:40 PM
Subject: [R] Little's Chi Square test for MCAR?


> Hi.
>
> Can anyone point me to any module in R which implements "Little's 
> Chi
> Square test" for MCAR.
> The problem is that i have around 60 behavioural variables on a 6 
> point
> categorical scale which i need to test for MCAR and MAR. What i can 
> make
> out from preliminary analysis is that moderate (0.30 to 0.60)
> correlations  may be present in several variable pairs leading me to
> suspect that the data may not be MCAR or MAR. However i need some 
> more
> "concrete" proof.
>
> Any help - onlist or offlist - would be greatly appreciated.
>
> Thanks in Advance
>
> Rohit Vishal Kumar
> Ph.D. Student (Calcutta) India
>


Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm



Re: [R] Coercion of percentages by as.numeric

2005-11-14 Thread Brandt, T. (Tobias)
 

>-Original Message-
>From: Gabor Grothendieck [mailto:[EMAIL PROTECTED] 
>Sent: 14 November 2005 06:21 PM
>
>On 11/14/05, Brandt, T. (Tobias) <[EMAIL PROTECTED]> wrote:
>> Hi
>>
>> Given that things like the following work
>>
>>  > a <- c("-.1"," 2.7 ","B")
>> > a
>> [1] "-.1"   " 2.7 " "B"
>> > as.numeric(a)
>> [1] -0.1  2.7   NA
>> Warning message:
>> NAs introduced by coercion
>> >
>>
>> I naively expected that the following would behave differently.
>>
>>  > b <- c('10%', '-20%', '30.0%', '.40%')
>> > b
>> [1] "10%"   "-20%"  "30.0%" ".40%"
>> > as.numeric(b)
>> [1] NA NA NA NA
>> Warning message:
>> NAs introduced by coercion
>
>Try this:
>
>as.numeric(sub("%", "e-2", b))
>

Thank you, that accomplishes what I had intended.

I would have thought though that the expression "53%" would be a fairly
standard representation of the number 0.53 and might be handled as such.  Is
there a specific reason for avoiding this behaviour?  

I can imagine that it might add unnecessary overhead to routines like
"as.numeric" which one would like to keep as fast as possible.

Perhaps there are other areas though where it might be desirable?  For
example, I'm thinking of the read.table function for reading in csv
files, since I have many of these that have been saved from Excel and
now contain numbers in the "%" format.






Re: [R] Coercion of percentages by as.numeric

2005-11-14 Thread Gabor Grothendieck
On 11/14/05, Brandt, T. (Tobias) <[EMAIL PROTECTED]> wrote:
>
>
>
>
> >-Original Message-
> >From: Gabor Grothendieck [mailto:[EMAIL PROTECTED]
> >Sent: 14 November 2005 06:21 PM
> >
> >On 11/14/05, Brandt, T. (Tobias) <[EMAIL PROTECTED]> wrote:
> >> Hi
> >>
> >> Given that things like the following work
> >>
> >>  > a <- c("-.1"," 2.7 ","B")
> >> > a
> >> [1] "-.1"   " 2.7 " "B"
> >> > as.numeric(a)
> >> [1] -0.1  2.7   NA
> >> Warning message:
> >> NAs introduced by coercion
> >> >
> >>
> >> I naively expected that the following would behave differently.
> >>
> >>  > b <- c('10%', '-20%', '30.0%', '.40%')
> >> > b
> >> [1] "10%"   "-20%"  "30.0%" ".40%"
> >> > as.numeric(b)
> >> [1] NA NA NA NA
> >> Warning message:
> >> NAs introduced by coercion
> >
> >Try this:
> >
> >as.numeric(sub("%", "e-2", b))
> >
>
> Thank you, that accomplishes what I had intended.
>
> I would have thought though that the expression "53%" would be a fairly
> standard representation of the number 0.53 and might be handled as such.  Is
> there a specific reason for avoiding this behaviour?
>
> I can imagine that it might add unnecessary overhead to routines like
> "as.numeric" which one would like to keep as fast as possible.
>
> Perhaps there are other areas though where it might be desirable?  For
> example I'm thinking of the read.table function for reading in csv files
> since I have many of these that have been saved from excel and now contain
> numbers in the "%" format.

Assuming a .csv file with trailing percents after some numbers
you could try this:

Lines <- readLines(myfile)
Lines <- gsub("%", "e-2", Lines)
mydata <- read.csv(textConnection(Lines))



[R] point pattern interactions (Gcross and Kcross)

2005-11-14 Thread Charlotte Reemts
Dear R-users,

I am exploring disease spread in trees.  I have locations for diseased trees
in 2004 and 2005 and want to know whether the patterns are independent.  I
would like to use both the 'Gcross' and 'Kcross' functions (in spatstat).

I imported the data from two text files and created point objects using
as.ppp.  I then created a marked planar point pattern using ppp.  I am
fairly sure that everything worked, because when I ask for information about
the data file, I get
marked planar point pattern: 628 points
multitype, with levels = w2004  w2005
window: rectangle = [ 607200 , 634800 ] x [ 3438400 , 3460400 ]
(Note: the window values are so large because I am using UTM coordinates for
my x and y locations)

However, when I run gcross, I get the following error:
code: g.0405<-Gcross(oakwilt, i="w2004", j="w2005")
error: Error in hist.default(nnd, breaks = brks, plot = FALSE) :
some 'x' not counted; maybe 'breaks' do not span range of 'x'

When running kcross, I get this:
k.0405<-Kcross(oakwilt, i="w2004", j="w2005")
Error in "[<-"(`*tmp*`, index, value = NULL) :
incompatible types (1000) in subassignment type fix

I suspect that these two errors are somehow related.  Any suggestions for
what might be wrong?

Thanks!
Charlotte Reemts
Vegetation Ecologist
The Nature Conservancy--Fort Hood (TX) Project



Re: [R] Coercion of percentages by as.numeric

2005-11-14 Thread Marc Schwartz (via MN)
On Mon, 2005-11-14 at 19:07 +0200, Brandt, T. (Tobias) wrote:
>  
> >-Original Message-
> >From: Gabor Grothendieck [mailto:[EMAIL PROTECTED] 
> >Sent: 14 November 2005 06:21 PM
> >
> >On 11/14/05, Brandt, T. (Tobias) <[EMAIL PROTECTED]> wrote:
> >> Hi
> >>
> >> Given that things like the following work
> >>
> >>  > a <- c("-.1"," 2.7 ","B")
> >> > a
> >> [1] "-.1"   " 2.7 " "B"
> >> > as.numeric(a)
> >> [1] -0.1  2.7   NA
> >> Warning message:
> >> NAs introduced by coercion
> >> >
> >>
> >> I naively expected that the following would behave differently.
> >>
> >>  > b <- c('10%', '-20%', '30.0%', '.40%')
> >> > b
> >> [1] "10%"   "-20%"  "30.0%" ".40%"
> >> > as.numeric(b)
> >> [1] NA NA NA NA
> >> Warning message:
> >> NAs introduced by coercion
> >
> >Try this:
> >
> >as.numeric(sub("%", "e-2", b))
> >
> 
> Thank you, that accomplishes what I had intended.
> 
> I would have thought though that the expression "53%" would be a fairly
> standard representation of the number 0.53 and might be handled as such.  Is
> there a specific reason for avoiding this behaviour?  

"53%" is a 'shorthand' character representation of a mathematical
concept. To wit, the specific representation of a fraction using 100 as
the denominator (ie. 53 / 100). The symbol '%' can be replaced by the
word "percent", such as "53 percent", which is also a character
representation.

0.53, in context, is a numeric representation of a proportion in the
range of 0 - 1.0.

> I can imagine that it might add unnecessary overhead to routines like
> "as.numeric" which one would like to keep as fast as possible.
> 
> Perhaps there are other areas though where it might be desirable?  For
> example I'm thinking of the read.table function for reading in csv files
> since I have many of these that have been saved from excel and now contain
> numbers in the "%" format.

In Excel, numbers displayed with a '%' are what you see visually.
However, the internal representation (how the value is actually stored
in the program) is still as a floating point value, without the '%'. 

For example:

> a <- 53
> a
[1] 53

> sprintf("%.0f%%", a)
[1] "53%"

> is.numeric(a)
[1] TRUE

> is.numeric(sprintf("%.0f%%", a))
[1] FALSE


Unfortunately (depending upon your perspective), Excel, and other
similar programs, tend to export the visually displayed values and not
the internal representations of them. Thus, as Gabor pointed out, you
will need to do some 'editing' of the values before using them in R. You
can either do this in Excel, by removing the "%" formatting, or
post-import in R as Gabor has described.

You need to keep separate the internal representation of a value and its
printed or displayed representation for human readable consumption.

as.numeric() does basically one thing and it does it well and properly.
It is up to the user to ensure that it is passed the proper values. When
that is not the case, it issues an appropriate warning message and
returns NA.

Of course, using Gabor's hint, you can also write your own variation of
as.numeric(), creating a function that takes percent formatted values
and converts them as you require. One of the many strengths of R, is
that you can extend it to meet your own specific requirements when the
base functions do not.
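
Along the lines of that last paragraph, here is a minimal sketch of
such a variation (the function name is made up; only base R string
functions are used):

```r
# Hypothetical helper, not part of base R: coerce character values that
# may carry a trailing '%' to numeric, dividing those entries by 100.
percent_to_numeric <- function(x) {
  pct <- substring(x, nchar(x)) == "%"   # which entries end in '%'
  out <- as.numeric(sub("%$", "", x))    # strip the symbol, then coerce
  out[pct] <- out[pct] / 100
  out
}
percent_to_numeric(c("10%", "-20%", "0.5"))  # 0.10 -0.20 0.50
```

Entries without a '%' pass through as.numeric() unchanged, so mixed
columns read from a csv file are handled in one pass.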

HTH,

Marc Schwartz



Re: [R] point pattern interactions (Gcross and Kcross)

2005-11-14 Thread Barry Rowlingson
Charlotte Reemts wrote:


> marked planar point pattern: 628 points
> multitype, with levels = w2004  w2005
> window: rectangle = [ 607200 , 634800 ] x [ 3438400 , 3460400 ]

  I just created something as close as possible to that using random 
Poisson points:

  > oakfake
  marked planar point pattern: 628 points
  multitype, with levels = w2004  w2005
  window: rectangle = [ 607200 , 634800 ] x [ 3438400 , 3460400 ]

but Gcross works fine:

  > g.0405<-Gcross(oakfake, i="w2004", j="w2005")
  > plot(g.0405)

as does Kcross - so it must be something wrong with your data beyond 
what you've told us.

  Do you see it okay if you do plot(oakwilt)?

Baz



Re: [R] Robust Non-linear Regression

2005-11-14 Thread Martin Maechler
Package 'sfsmisc' has had a function  'rnls()' for a while 
which does robust non-linear regression via M-estimation.

[The name of the function is probably *really* a misnomer,
 since the 'ls' part stands for "least squares"!]

Two weeks ago there was a small workshop,
"Robustness and R", in Treviso (Italy):
http://www.dst.unive.it/rsr/

where we talked about available and missing robustness
functionality ``in R''.  One consequence of the workshop is the
new mailing list "R-SIG-Robust" {to which I CC this message},
and another planned, hopefully even more significant, outcome
will be collaboration on producing more widely available
robustness functionality for R.  Do subscribe to the list if you
are interested.

> "Vermeiren" == Vermeiren, Hans [VRCBE] 
> on Sun, 13 Nov 2005 22:47:31 +0100 writes:

Vermeiren> Hi, I'm trying to use Robust non-linear
Vermeiren> regression to fit dose response curves.  Maybe I
Vermeiren> didnt look good enough, but I dind't find robust
Vermeiren> methods for NON linear regression implemented in
Vermeiren> R. A method that looked good to me but is
Vermeiren> unfortunately not (yet) implemented in R is
Vermeiren> described in
Vermeiren> 
http://www.graphpad.com/articles/RobustNonlinearRegression_files/frame.htm

 
Vermeiren> in short: instead of using the premise that the
Vermeiren> residuals are gaussian they propose a Lorentzian
Vermeiren> distribution, in stead of minimizing the squared
Vermeiren> residus SUM (Y-Yhat)^2, the objective function is
Vermeiren> now SUM log(1+(Y-Yhat)^2/ RobustSD)
 
Vermeiren> where RobustSD is the 68th percentile of the
Vermeiren> absolute value of the residues

 
Vermeiren> my question is: is there a smart and elegant way
Vermeiren> to change to objective function from squared
Vermeiren> Distance to log(1+D^2/Rsd^2) ?

no; not easily.
 
Vermeiren> or alternatively to write this as a weighted
Vermeiren> non-linear regression where the weights are
Vermeiren> recalculated during the iterations in nlme it is
Vermeiren> possible to specify weights, possibly that is the
Vermeiren> way to do it, but I didn't manage to get it
Vermeiren> working the weights should then be something
Vermeiren> like:
 
Vermeiren> SUM (log(1+(resid(.)/quantile(all_residuals,0.68))^2))
Vermeiren>   / SUM (resid(.))
 
rnls() mentioned does use robust weights and IRLS (iteratively
reweighted LS) making use of  nls() and rlm(),
similarly to your suggestion.
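
For readers curious what "IRLS making use of nls()" can look like, here
is a minimal sketch (an illustration only, not the actual rnls() code;
it assumes a current nls() that accepts a 'weights' argument, and the
function name and Huber tuning constant are choices made here):

```r
# Iteratively reweighted nonlinear least squares with Huber-type
# weights: alternate nls() fits with weights computed from the current
# residuals, so large residuals are progressively downweighted.
irls_nls <- function(formula, data, start, iters = 10, k = 1.345) {
  w <- rep(1, nrow(data))
  fit <- NULL
  for (i in seq_len(iters)) {
    fit <- nls(formula, data = data, start = start, weights = w)
    r <- residuals(fit)
    s <- mad(r)                       # robust residual scale
    u <- abs(r / s)
    w <- ifelse(u <= k, 1, k / u)     # Huber weights
    start <- as.list(coef(fit))       # warm-start the next fit
  }
  fit
}
```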

Vermeiren> the test data I use :

Vermeiren> x<-seq(-5,-2,length=50)
Vermeiren> x<-rep(x,4)
Vermeiren> y<-SSfpl(x,0,100,-3.5,1)
Vermeiren> y<-y+rnorm(length(y),sd=5)
Vermeiren> y[sample(1:length(y),floor(length(y)/50))]<-200 # add 2% 
outliers at 200

Since you have only outliers in 'y' and none in 'x',
you could use the 'nlrq' (nonlinear regression quantiles)
package that Roger Koenker mentioned.

To really robustify such self starting models as the 4-parameter
logistic 'SSfpl' above, you would also need to provide a robust
initial estimator; 
maybe that could be done pretty easily by using 'rlm()' instead of
'lm()' and 'rnls()' instead of 'nls()' also for the "initial" part in
something like

SSfpl.rob <-
selfStart(~ A + (B - A)/(1 + exp((xmid - input)/scal)), 
  initial = function( ...) { .. },
  parameters= c("A","B","xmid","scal"))
 
{look at 'SSfpl() for the initial estimator}.

However, BTW, currently the "plinear" version fails for our robust
nonlinear procedure 'rnls()'.

Martin Maechler, ETH Zurich



Re: [R] Robust Non-linear Regression

2005-11-14 Thread Ruben Roa
> -Original Message-
> From: [EMAIL PROTECTED] [SMTP:[EMAIL PROTECTED] On Behalf Of Vermeiren, Hans 
> [VRCBE]
> Sent: Sunday, November 13, 2005 7:48 PM
> To:   'r-help@stat.math.ethz.ch'
> Subject:  [R] Robust Non-linear Regression
> 
> Hi,
>  
> I'm trying to use Robust non-linear regression to fit dose response curves.
> Maybe I didnt look good enough, but I dind't find robust methods for NON
> linear regression implemented in R. A method that looked good to me but is
> unfortunately not (yet) implemented in R is described in 
> http://www.graphpad.com/articles/RobustNonlinearRegression_files/frame.htm
> 
> 
> 
> in short: instead of using the premise that the residuals are gaussian they
> propose a Lorentzian distribution,
> in stead of minimizing the squared residus SUM (Y-Yhat)^2, the objective
> function is now
> SUM log(1+(Y-Yhat)^2/ RobustSD)
>  
> where RobustSD is the 68th percentile of the absolute value of the residues
>  
> my question is: is there a smart and elegant way to change to objective
> function from squared Distance to log(1+D^2/Rsd^2) ?
>  
---
I do not know about in-built robustness options in R but I have found that 
Dave Fournier's robust likelihood for nonlinear regression in ADMB does
a pretty good job in detecting and counter-acting the influence of outliers
(in my applications this has been used to counter-act the effect of reading 
errors in determination of the age of fish based on rings in bones). 
It relies on a likelihood function based on a mixture of a normal and another
distribution with fatter tails. You can find the documentation in the ADMB 
manual 
at the ADMB website: http://otter-rsch.com/admodel.htm
Ruben



[R] how to plot matrix in graphs

2005-11-14 Thread peter eric
Hello,
 
How can I plot a matrix (I have multiple matrices) as a graph of 
colored boxes or circles? My matrix looks like:
 
       A        B        C

   6 2 3    4 3 2    2 1 7
A  4 3 1    4 6 8    2 1 6
   2 7 8    7 8 0    2 3 5

   5 2 3    4 7 2    2 1 7
B  4 3 1    4 8 8    3 1 6
   9 7 8    7 8 0    6 3 5

   1 2 3    4 3 2    2 1 7
C  4 3 1    4 6 8    2 1 6
   2 7 8    7 8 0    2 3 5
 
And my graph should look like this (colored boxes or circles shaded 
according to the magnitude of the numbers):
 
 
       A        B        C

   O O O    O O O    O O O
A  O O O    O O O    O O O
   O O O    O O O    O O O

   O O O    O O O    O O O
B  O O O    O O O    O O O
   O O O    O O O    O O O

   O O O    O O O    O O O
C  O O O    O O O    O O O
   O O O    O O O    O O O
 
Can you suggest some ways of doing this?
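
Two base-graphics starting points for this kind of display (the matrix
below is made up; image() and symbols() are standard R graphics
functions, and the scaling constants are arbitrary):

```r
# Sketch: show a numeric matrix as colored boxes with image(), or as
# circles whose radius tracks the magnitude with symbols().
m <- matrix(sample(0:9, 81, replace = TRUE), 9, 9)  # example matrix

# colored boxes; image() draws row 1 at the bottom, so flip the rows
image(t(m[nrow(m):1, ]), col = heat.colors(10), axes = FALSE)

# circles scaled by magnitude, one per matrix cell
xy <- expand.grid(x = 1:ncol(m), y = nrow(m):1)
symbols(xy$x, xy$y, circles = as.vector(t(m)) / 25,
        inches = FALSE, bg = "steelblue", xlab = "", ylab = "")
```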
 
thank you
 
best regards,
peter.
 
Research student,
Fraunhofer IPT,
Germany.






[R] roots of a function

2005-11-14 Thread Alejandro Veen
For finding the root of the following function I have been using 'uniroot':

f(p) = log(p-1) - log(p) + 1/(p-1) - log(A) - B = 0

where 'p' is a scalar.  However, I will have to find this root repeatedly,
so I would like to suggest a starting value, which is not possible with
'uniroot'.  'nlm' allows the use of starting values, so I have been thinking
of applying 'nlm' to abs(f(p)).  Is that the way to go or is there a better
way I don't know about?
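
One way to sketch this (A and B below are hypothetical constants, and
the reparameterization is a suggestion to keep the search inside the
domain p > 1): minimize f(p)^2 with nlm(), which does accept a starting
value. Squaring is preferable to abs() because the objective stays
differentiable at the root.

```r
A <- 2; B <- 0.5                     # hypothetical constants
f <- function(p) log(p - 1) - log(p) + 1/(p - 1) - log(A) - B

# reparameterize p = 1 + exp(q) so every q maps to a valid p > 1
g <- function(q) f(1 + exp(q))^2
res <- nlm(g, p = log(1.5 - 1))      # start corresponds to p = 1.5
p.hat <- 1 + exp(res$estimate)       # back-transform to the root in p
```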

Thanks for your help,

Alejandro



[R] library MASS fitdistr() function question

2005-11-14 Thread David Zhao
Hi there,
 I've been trying to use fitdistr fuction to fit our data onto a gamma
distribution, sometimes it works and sometimes it doesn't. I realized that
our data 10% of time is normal distributed instead of gamma. Since this is
included in a processing pipeline, I'd like to test the fitting to see which
fitting is better. Is there any way of doing this?
Thanks very much!
 David
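
One way to sketch the comparison (assuming strictly positive data,
which the gamma fit requires): fit both candidates and compare the
maximized log-likelihoods that fitdistr() returns; since both models
have two parameters, this is equivalent to comparing AIC.

```r
# Sketch: choose between gamma and normal fits by log-likelihood.
library(MASS)
x <- rgamma(500, shape = 3, rate = 2)   # example data
fit.g <- fitdistr(x, "gamma")
fit.n <- fitdistr(x, "normal")
# pick whichever model attains the higher maximized log-likelihood
if (fit.g$loglik > fit.n$loglik) "gamma" else "normal"
```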



[R] as.integer with base other than ten.

2005-11-14 Thread William Astle
Is there an R function analogous to the C function strtol? 
I would like to convert a binary string into an integer.

Cheers for advice

Will


-- 
__
William Astle
Statistical Genetics,
David Balding's Group.
Imperial College,
St Mary's Hospital Campus,
147 Norfolk Place,
Paddington.
London.
W2 1PG



Re: [R] point pattern interactions (Gcross and Kcross)

2005-11-14 Thread Charlotte Reemts
In going over the creation of the point pattern (again), I discovered a typo
that switched x and y data.  Once I fixed that, the code worked just fine.
Thanks for your help!







Re: [R] as.integer with base other than ten.

2005-11-14 Thread Marc Schwartz (via MN)
On Mon, 2005-11-14 at 19:01 +, William Astle wrote:
> Is there an R function analogous to the C function strtol? 
> I would like to convert a binary string into an integer.
> 
> Cheers for advice
> 
> Will


There was some discussion in the past and you might want to search the
archive for a more generic solution for any base to any base, but for
binary to decimal specifically, something like the following will work:

bin2dec <- function(x)
{
  b <- as.numeric(unlist(strsplit(x, "")))
  pow <- 2 ^ ((length(b) - 1):0)
  sum(pow[b == 1])
}


The function splits the binary string into its individual digits ('b').
It then creates a vector of powers of 2, with exponents running from
length(b) - 1 down to 0 ('pow'). Finally, it sums the elements of 'pow'
at the positions where 'b == 1'.


> bin2dec("101")
[1] 5

> bin2dec("1111")
[1] 15

> bin2dec("1011111")
[1] 95
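
A companion sketch going the other way, from a decimal integer to a
binary string (the function name is made up; only base R arithmetic is
used):

```r
# Convert a non-negative integer to its binary-string representation
# by repeatedly prepending the low-order bit.
dec2bin <- function(n) {
  if (n == 0) return("0")
  bits <- character(0)
  while (n > 0) {
    bits <- c(n %% 2, bits)   # prepend the next low-order bit
    n <- n %/% 2
  }
  paste(bits, collapse = "")
}
dec2bin(95)   # "1011111"
```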


HTH,

Marc Schwartz



Re: [R] open source and R

2005-11-14 Thread Liaw, Andy
Here comes a not-so-nice one:  Sorry to be blunt, but I think the current
reality is that one's effectiveness in scientific computing is not likely to
be high if s/he can't read C or Fortran code.

The mode of development for new methods, I believe, should be:

- Write it in R (or S-PLUS or Matlab or ...) because one can usually do that
quite quickly.

- Check and make sure the code produces correct results.

- See if the code can be improved for efficiency.  Use the profiling
facility in R to see where the bottlenecks really are, and try to improve
those parts.

- If no significant improvement is possible in R, move only the
time-consuming part of the computation to C/Fortran/C++.
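A toy illustration of the profile-then-vectorize step (all object names are mine), showing how far pure R can go before any C is written:

```r
# two implementations of a sum of squares: explicit loop vs. vectorized
slow <- function(x) {
  s <- 0
  for (xi in x) s <- s + xi^2
  s
}
fast <- function(x) sum(x^2)

x <- rnorm(1e6)
system.time(slow(x))   # interpreted loop
system.time(fast(x))   # vectorized: usually far faster
stopifnot(all.equal(slow(x), fast(x)))

# Rprof() points at the real bottleneck before reaching for C/Fortran:
# Rprof("prof.out"); slow(x); Rprof(NULL); summaryRprof("prof.out")
```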

The above mode is not always followed, because many of the packages on CRAN
are simply R interfaces to _existing_ C/Fortran code.  One would be happy to
be able to use them at the R level, but to rewrite the whole thing in R, one
better have _very_ good reason!

For some algorithms, efficient code can be written in pure R, but the
resulting code can be less readable than one written more legibly in C or
Fortran.

Just my $0.02...

Andy

> From: Robert
> 
> Thanks for all the nice discussions. 
>   Though different users have various needs from R, It's 
> always good to stand on the shoulders of giants (as roger 
> said). How far we will see depends our ability to understand 
> what have been done by other languages. 
>   The package written in pure R might be good for education 
> in starting OOP in research but not effective in scientific 
> computing as suggested.
>   
> 
> [EMAIL PROTECTED] wrote:
>   On 13-Nov-05 Roger Bivand wrote:
> > On Sun, 13 Nov 2005, Robert wrote:
> > 
> >> If I do not know C or FORTRAN, how can I fully understand 
> the package
> >> or possibly improve it?
> > 
> > By learning enough to see whether that makes a difference for your 
> > purposes. Life is hard, but that's what makes life interesting ...
> > 
> >> Robert.
> >> 
> >> Roger Bivand wrote:
> >> On Sun, 13 Nov 2005, Robert wrote:
> >> 
> >> > Roger Bivand wrote: 
> >> > On Sun, 13 Nov 2005, Robert wrote:
> >> > 
> >> > > It uses FORTRAN code and not in pure R.
> >> > 
> >> > The same applies to deldir - it also includes Fortran. So the
> >> > answer seems to be no, there is no voronoi function only
> >> > written in R.
> >> > 
> >> 
> >> Robert wrote:
> >> 
> >> > 
> >> > I am curious about one thing: since the reason for using r
> >> > is r is a easy-to-learn language and it is good for getting
> >> > more people involved.
> >> >
> >> > Why most of the packages written in r use other languages
> >> > such as FORTRAN's code? I understand some functions have
> >> > already been written in other language or it is faster to
> >> > be implemented in other language.
> >> >
> >> > But my understanding is if the user does not know that
> >> > language (for example, FORTRAN), the package is still a
> >> > black box to him because he can not improve the package and
> >> > can not be involved in the development. 
> >> >
> >> > When I searched the packages of R, I saw many packages with
> >> > duplicated or similar functions. the main difference among
> >> > them are the different functions implemented using other
> >> >languages, which are always a black box to the users. So it
> >> > is very hard for users to believe the package will run
> >> > something they need, let alone getting involved in the
> >> > development. My comments are not to disregard these efforts.
> >> > But it is good to see the packages written in pure R.
> >> > 
> >> 
> >> Although surprisingly much of R is written in R, quite a lot is
> >> written in Fortran and C. One very good reason, apart from
> >> efficiency, is code
> >> re-use
> >> - BLAS and LAPACK among many others are excellent implementations
> >> of what we need for numerical linear algebra. R is very typical
> >> of good scientific software, it tries to avoid re-implementing
> >> functions that are used by the community, are well-supported by
> >> the community, and work. Packages by and large do the same - if
> >> existing software does the required job, package authors attempt
> >> to port that software to R, providing interfaces to underlying
> >> C or Fortran libraries. 
> >> 
> >> It's about standing on the shoulders of giants.
> 
> Those are very strong points. Some comments:
> 
> It would be possible to implement in "pure R" a matrix inversion
> or eigenvalue/vector function, for instance, and I'm sure it would
> be done (if it were done) to very high quality. However, it would
> run like an elephant in quicksands. BLAS and LAPACK have, over the
> years, become highly optimised not just for accuracy and robustness,
> but for speed and efficiency.
> 
> Also, you will hit the "other language" problem sooner or
> later. Robert's complaint is that he does not like black
> boxes. But R itself is a black box. You cannot write R in R,
> all the way down to the bottom. At the bottom is machine
> code, and languages

[R] change some levels of a factor column in data frame according to a condition

2005-11-14 Thread Gesmann, Markus
Dear R-users,

I am looking for an elegant way to change some levels of a factor column
in data frame according to a condition.
Lets look at the following data frame:

> data.frame(crit1=gl(2,5), crit2=factor(letters[1:10]), x=rnorm(10))
   crit1 crit2           x
1      1     a -1.06957692
2      1     b  0.24368402
3      1     c -0.24958322
4      1     d -1.37577955
5      1     e -0.01713288
6      2     f -1.25203573
7      2     g -1.94348533
8      2     h -0.16041719
9      2     i -1.91572616
10     2     j -0.20256478

Now I would like to find for each level in crit1 the two smallest values
of x and change the levels of crit2 to "small", so the result would look
like this:

   crit1 crit2           x
1      1 small -1.06957692
2      1     b  0.24368402
3      1     c -0.24958322
4      1 small -1.37577955
5      1     e -0.01713288
6      2     f -1.25203573
7      2 small -1.94348533
8      2     h -0.16041719
9      2 small -1.91572616
10     2     j -0.20256478

Thank you for advice!

Markus Gesmann

LNSCNTMCS01***
The information in this E-Mail and in any attachments is CON...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] roots of a function

2005-11-14 Thread Martin Maechler
> "Alejandro" == Alejandro Veen <[EMAIL PROTECTED]>
> on Mon, 14 Nov 2005 10:23:02 -0800 writes:

Alejandro> For finding the root of the following function I
Alejandro> have been using 'uniroot': f(p) = log(p-1) -
Alejandro> log(p) + 1/(p-1) - log(A) - B = 0

Alejandro> where 'p' is a scalar.  However, I will have to
Alejandro> find this root repeatedly, so I would like to
Alejandro> suggest a starting value, which is not possible
Alejandro> with 'uniroot'.  'nlm' allows the use of starting
Alejandro> values, so I have been thinking of applying 'nlm'
Alejandro> to abs(f(p)).  Is that the way to go or is there
Alejandro> a better way I don't know about?

No, using minimization instead of root finding is typically not
as efficient (particularly in one dimension).

But in some ways you *have* been using  starting values for
uniroot() contrary to what you said:

uniroot even needs a starting *interval* in which to search.
If you know a lot about your function and its zero, just make
that initial interval appropriately small.
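Concretely, for f as the poster defines it (the values A = 2, B = 1 below are invented for illustration): f falls from +Inf as p approaches 1 from above towards -log(A) - B, so a root exists whenever log(A) + B > 0, and a modest bracketing interval plays the role of a starting value:

```r
f <- function(p, A, B) log(p - 1) - log(p) + 1/(p - 1) - log(A) - B

# bracket the root: f > 0 just above p = 1, f < 0 for large p here
r <- uniroot(f, interval = c(1 + 1e-6, 10), A = 2, B = 1, tol = 1e-9)
r$root                        # the solution p
abs(f(r$root, A = 2, B = 1))  # effectively zero
```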

Alejandro> Thanks for your help,

you're welcome!
Martin

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Tidiest way of modifying S4 classes?

2005-11-14 Thread Patrick Connolly
I wish to make modifications to the plot.pedigree function in the
kinship package.  My attempts to contact the maintainer have been
unsuccessful, but my question is general, so specifics of the kinship
package might not be an issue.

My first attempt was to make a new function Plot.pedigree in the
.GlobalEnv which mostly achieved what I wanted to.  However, I'm sure
that's not the tidiest way to do it.  We don't have the green book,
but there's lots of interesting information I found here:

http://www.stat.auckland.ac.nz/S-Workshop/Gentleman/S4Objects

However, there's something I'm missing in connecting that information
into knowledge of how I go about making a new method or slot or
whatever is sensible in this case.  What does one make of this:


> getClass(class(kinship:::plot.pedigree))

No Slots, prototype of class "function"

Extends: "OptionalFunction", "PossibleMethod"

Known Subclasses: 
Class "MethodDefinition", from data part
Class "genericFunction", from data part
Class "functionWithTrace", from data part
Class "derivedDefaultMethod", by class "MethodDefinition"
Class "MethodWithNext", by class "MethodDefinition"
Class "SealedMethodDefinition", by class "MethodDefinition"
Class "standardGeneric", by class "genericFunction"
Class "nonstandardGenericFunction", by class "genericFunction"
Class "groupGenericFunction", by class "genericFunction"
> 


If I want a new plot.pedigree function, do I make a slot, or what is
the approach to take?

Suggestions most welcome.
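No reply appears in this digest, so the following is only a sketch of one common idiom, not kinship's documented API: since plot.pedigree is really an ordinary S3 function (the class report above comes from the methods package describing functions generally), you can copy it, edit the copy, and re-point its environment at the package namespace so internal helpers still resolve:

```r
# guarded: the (old) kinship package may not be installed
if (require(kinship, quietly = TRUE)) {
  my.plot.pedigree <- kinship:::plot.pedigree
  # ... edit the body of my.plot.pedigree as required ...
  # keep the package's internal helper functions visible to the copy:
  environment(my.plot.pedigree) <- asNamespace("kinship")
}
```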

Thanks

-- 
Patrick Connolly
HortResearch
Mt Albert
Auckland
New Zealand 
Ph: +64-9 815 4200 x 7188
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~
I have the world`s largest collection of seashells. I keep it on all
the beaches of the world ... Perhaps you`ve seen it.  ---Steven Wright 
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] change some levels of a factor column in data frame according to a condition

2005-11-14 Thread jim holtman
try this:

# create data

x.by <- data.frame(crit1 = rep(c(1, 2), c(10, 10)),
                   crit2 = sample(letters[1:4], 20, TRUE),
                   val = runif(20))
levels(x.by$crit2) <- c(levels(x.by$crit2), 'small')  # add 'small' to the levels
y <- by(x.by, x.by$crit1, function(.grp) {
    .small <- order(.grp$val)  # find the smallest values
    # make sure we don't exceed the vector length
    .grp$crit2[.small[1:min(2, length(.small))]] <- 'small'
    .grp
})
do.call('rbind', y)  # put it back together


 On 11/14/05, Gesmann, Markus <[EMAIL PROTECTED]> wrote:
>
> Dear R-users,
>
> I am looking for an elegant way to change some levels of a factor column
> in data frame according to a condition.
> Lets look at the following data frame:
>
> > data.frame(crit1=gl(2,5), crit2=factor(letters[1:10]), x=rnorm(10))
> crit1 crit2 x
> 1 1 a -1.06957692
> 2 1 b 0.24368402
> 3 1 c -0.24958322
> 4 1 d -1.37577955
> 5 1 e -0.01713288
> 6 2 f -1.25203573
> 7 2 g -1.94348533
> 8 2 h -0.16041719
> 9 2 i -1.91572616
> 10 2 j -0.20256478
>
> Now I would like to find for each level in crit1 the two smallest values
> of x and change the levels of crit2 to "small", so the result would look
> like this:
>
> crit1 crit2 x
> 1 1 small -1.06957692
> 2 1 b 0.24368402
> 3 1 c -0.24958322
> 4 1 small -1.37577955
> 5 1 e -0.01713288
> 6 2 f -1.25203573
> 7 2 small -1.94348533
> 8 2 h -0.16041719
> 9 2 small -1.91572616
> 10 2 j -0.20256478
>
> Thank you for advice!
>
> Markus Gesmann
>
> LNSCNTMCS01***
> The information in this E-Mail and in any attachments is CON...{{dropped}}
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html
>



--
Jim Holtman
Cincinnati, OH
+1 513 247 0281

What is the problem you are trying to solve?

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] November Course In San Francisco***R/Splus Fundamentals and Programming Techniques

2005-11-14 Thread elvis
XLSolutions Corporation (www.xlsolutions-corp.com) is proud to
announce a 2-day "R/S-plus Fundamentals and Programming
Techniques" in San Francisco: www.xlsolutions-corp.com/Rfund.htm

 San Francisco,   November 17 - 18, 2005

Reserve your seat now at the early bird rates! Payment due AFTER
the class

Course Description:

This two-day beginner to intermediate R/S-plus course focuses on a
broad spectrum of topics, from reading raw data to a comparison of R
and S. We will learn the essentials of data manipulation, graphical
visualization and R/S-plus programming, and explore statistical
data analysis tools, including graphics with data sets: how to enhance
your plots, build your own packages (libraries), connect via ODBC, etc.
We will perform some statistical modeling and fit linear regression
models. Participants are encouraged to bring data for interactive
sessions.

With the following outline:

- An Overview of R and S
- Data Manipulation and Graphics
- Using Lattice Graphics
- A Comparison of R and S-Plus
- How can R Complement SAS?
- Writing Functions
- Avoiding Loops
- Vectorization
- Statistical Modeling
- Project Management
- Techniques for Effective use of R and S
- Enhancing Plots
- Using High-level Plotting Functions
- Building and Distributing Packages (libraries)
- Connecting; ODBC, Rweb, Orca via sockets and via Rjava


Email us for group discounts.
Email Sue Turner: [EMAIL PROTECTED]
Phone: 206-686-1578
Visit us: www.xlsolutions-corp.com/training.htm
Please let us know if you and your colleagues are interested in this
class to take advantage of the group discount. Register now to secure
your seat!

Interested in R/Splus Advanced course? email us.


Cheers,
Elvis Miller, PhD
Manager Training.
XLSolutions Corporation
206 686 1578
www.xlsolutions-corp.com
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] change some levels of a factor column in data frame according to a condi

2005-11-14 Thread Francisco J. Zagmutt
Hi Gesmann

There may be more elegant ways to do this but here is one option:

d <- data.frame(crit1 = gl(2, 5), crit2 = factor(letters[1:10]),
                x = rnorm(10))  # creates data

levels(d$crit2) <- c(levels(d$crit2), "Small")  # adds the level "Small" to
the factor crit2

d2 <- d[order(d$crit1, d$x), ]  # sorts x ascending, by crit1

idx <- do.call("rbind", by(d2, d2$crit1, head, 2))  # selects the 2 smallest
by crit1 and merges the results by row

d2[d2$x %in% idx$x, "crit2"] <- "Small"  # changes the desired crit2 to "Small"


Cheers

Francisco


>From: "Gesmann, Markus" <[EMAIL PROTECTED]>
>To: r-help@stat.math.ethz.ch
>Subject: [R] change some levels of a factor column in data frame according 
>to a condition
>Date: Mon, 14 Nov 2005 20:05:38 +
>
>Dear R-users,
>
>I am looking for an elegant way to change some levels of a factor column
>in data frame according to a condition.
>Lets look at the following data frame:
>
> > data.frame(crit1=gl(2,5), crit2=factor(letters[1:10]), x=rnorm(10))
>crit1 crit2   x
>1  1 a -1.06957692
>2  1 b  0.24368402
>3  1 c -0.24958322
>4  1 d -1.37577955
>5  1 e -0.01713288
>6  2 f -1.25203573
>7  2 g -1.94348533
>8  2 h -0.16041719
>9  2 i -1.91572616
>10 2 j -0.20256478
>
>Now I would like to find for each level in crit1 the two smallest values
>of x and change the levels of crit2 to "small", so the result would look
>like this:
>
>crit1 crit2   x
>1  1 small -1.06957692
>2  1 b 0.24368402
>3  1 c  -0.24958322
>4  1 small -1.37577955
>5  1 e -0.01713288
>6  2 f  -1.25203573
>7  2 small -1.94348533
>8  2 h  -0.16041719
>9  2 small -1.91572616
>10 2 j -0.20256478
>
>Thank you for advice!
>
>Markus Gesmann
>
>LNSCNTMCS01***
>The information in this E-Mail and in any attachments is CON...{{dropped}}
>
>__
>R-help@stat.math.ethz.ch mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide! 
>http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] open source and R

2005-11-14 Thread Liaw, Andy
However, code readability cannot be over-emphasized.  I must admit to having
written R code in such a supposedly `clever' way that I couldn't figure out
what I was trying to do (or how I did it) a week later...

Andy

> From: Ernesto Jardim 
> 
> Hi,
> 
> One single comment about the subject of this message. Open source is 
> about making the code _available_ for all, not making the code 
> _understandable_ for all.
> 
> Regards
> 
> EJ
> 
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Using pakage foreign and to import SAS file

2005-11-14 Thread Walter R. Paczkowski
Hi,

I'm struggling with foreign to import a SAS file.  The file, for lack of 
imagination, is d.sas7bdat and is in my root directory (c:\) under Windows XP.  
When I type

read.ssd("c:\\", "d")

which I think I'm supposed to enter, I get

SAS failed.  SAS program at 
C:\DOCUME~1\Owner\LOCALS~1\Temp\Rtmp32758\file19621.sas 
The log file will be file19621.log in the current directory
NULL
Warning messages:
1: "sas" not found 
2: SAS return code was -1 in: read.ssd("c:\\", "d") 

I have SAS 9.1 running on my computer so SAS is there.  What am I doing wrong?

Thanks,

Walt





Walter R. Paczkowski, Ph.D.
Data Analytics Corp.
44 Hamilton Lane
Plainsboro, NJ  08536
(V) 609-936-8999
(F) 609-936-3733
www.dataanalyticscorp.com
[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Trouble with aovlist and Tukey test

2005-11-14 Thread Jonathan Dushoff
I am having what I think is a strange problem with applying TukeyHSD to
an aov fit with error strata.

TukeyHSD is supposed to take "A fitted model object, usually an 'aov'
fit."  aov (with error strata) is supposed to generate an object of type
aovlist, which is a list of objects of type aov.  But I can't seem to
feed components of my aovlist to TukeyHSD.  I guess I wouldn't expect to
be able to use the error strata, but I did expect to be able to use the
final stratum.

I have posted a complete example, which I hope explains why I am
confused, below.  Any help will be appreciated.

Jonathan Dushoff

--


> morley$Expt = factor(morley$Expt)
> morley$Run = factor(morley$Run)
>
> mod =  aov(Speed~Expt+Run, data=morley)
> class(mod)
[1] "aov" "lm"
>
> TukeyHSD(mod)$Expt
         diff        lwr        upr
2-1 -53.0 -117.91627  11.916268
3-1 -64.0 -128.91627   0.916268
4-1 -88.5 -153.41627 -23.583732
5-1 -77.5 -142.41627 -12.583732
3-2 -11.0  -75.91627  53.916268
4-2 -35.5 -100.41627  29.416268
5-2 -24.5  -89.41627  40.416268
4-3 -24.5  -89.41627  40.416268
5-3 -13.5  -78.41627  51.416268
5-4  11.0  -53.91627  75.916268
>
> errmod =  aov(Speed~Expt+Error(Run), data=morley)
> names(errmod)
[1] "(Intercept)" "Run" "Within"
> basemod = errmod$W
>
> class(basemod)
[1] "aov" "lm"
> TukeyHSD(basemod)
Error in sort(unique.default(x), na.last = TRUE) :
'x' must be atomic
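Until that is resolved, a hedged workaround sketch (my own reconstruction, not a documented interface): for this balanced design the Tukey half-width for the Expt comparisons can be recomputed by hand with qtukey() from the residual mean square, reproducing the TukeyHSD(mod) table above:

```r
morley$Expt <- factor(morley$Expt)
morley$Run  <- factor(morley$Run)
mod <- aov(Speed ~ Expt + Run, data = morley)

means <- tapply(morley$Speed, morley$Expt, mean)
n     <- table(morley$Expt)[1]          # 20 runs per experiment
df    <- df.residual(mod)               # 76
mse   <- deviance(mod) / df             # residual mean square
# studentized-range half-width, as TukeyHSD computes it internally
hw    <- qtukey(0.95, nmeans = nlevels(morley$Expt), df = df) * sqrt(mse / n)

means["2"] - means["1"]                 # -53, matching the table above
c(lwr = -53 - hw, upr = -53 + hw)       # reproduces row "2-1"
```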

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] open source and R

2005-11-14 Thread Berton Gunter
Andy:

Ah, don't feel bad, Andy; this is a universal problem in programming that
despite all kinds of efforts in "lucid programming", OOP, etc. no one has
figured out. So while "code readability cannot be overemphasized," what this
actually means also apparently cannot be defined.

From: http://www.jeffgainer.com/lucid_code/lc_cover.html

"If you are a software professional, you know how software is created.
Surely you recognize it. Chances are you live it: chaos."

;-)

-- Bert
 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Liaw, Andy
> Sent: Monday, November 14, 2005 2:46 PM
> To: 'Ernesto Jardim'
> Cc: [EMAIL PROTECTED]; r-help@stat.math.ethz.ch
> Subject: Re: [R] open source and R
> 
> However code readability can not be over-emphasized.  I must 
> admit to have
> written R code in such a supposedly `clever' way that I can't 
> figure out
> what I was trying to do (or how I did it) a week later...
> 
> Andy
> 
> > From: Ernesto Jardim 
> > 
> > Hi,
> > 
> > One single comment about the subject of this message. Open 
> source is 
> > about making the code _available_ for all, not making the code 
> > _understandable_ for all.
> > 
> > Regards
> > 
> > EJ
> > 
> >
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] (no subject)

2005-11-14 Thread [EMAIL PROTECTED]
Dear all,

just a little problem report for R 2.2.0 on OpenSuse 10.0-64. Gcc version is 
4.0.2
Installing fortran packages runs into: 'cc1' command not found.
I apparently got away with:
sudo ln -s /usr/bin/cc /usr/bin/cc1
which causes other warnings, but the packages seem to function well. Obviously
cc1 no longer exists in gcc 4.0.2.

Stefan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] error in NORM lib

2005-11-14 Thread Ted Harding
Folks,

Leo Gürtler and I have been privately discussing the
problems with imputation using NORM which he originally
described on 9 November. Essentially, he observed that
many of the imputed missing values were totally absurd,
being well outside any range compatible with the observed
values of the variables.

After following a few false trails, we have discovered
the reason. People interested in using NORM (and CAT and
MIX and maybe PAN) may well be interested in this reason!

The dataset, which can be downloaded from his URL

  http://www.anicca-vijja.de/lg/dframe.Rdata

consists of a matrix with 74 columns and 200 rows.
There are 553 missing values out of the 14800 (less
than 4%), and the distributions of the observed values
of the variables are well-behaved. So this should not
be a problematic dataset.

61 of the 74 columns have missing values (NAs) in them,
and this is the reason why NORM fails.

Specifically, the first few lines of the code of the
function prelim.norm() are as follows:

if (is.vector(x)) 
x <- matrix(x, length(x), 1)
n <- nrow(x)
p <- ncol(x)
storage.mode(x) <- "double"
r <- 1 * is.na(x)
nmis <- as.integer(apply(r, 2, sum))
names(nmis) <- dimnames(x)[[2]]
mdp <- as.integer((r %*% (2^((1:ncol(x)) - 1))) + 1)

and, as can be seen from the last line, if there are
missing values in a column with index > 31 then

  (r %*% (2^((1:ncol(x)) - 1))) + 1 >= 2^31

and then applying as.integer() to this value returns NA
since as.integer only works for numbers no greater than
.Machine$integer.max, normally 2^31 - 1. (Is the situation
different for R on say 64-bit machines?)
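The overflow is easy to demonstrate in isolation (a minimal sketch of the failing line, using a dummy one-row pattern matrix with 35 columns):

```r
p <- 35                       # more than 31 columns can carry NAs
r <- matrix(0, nrow = 1, ncol = p)
r[1, 35] <- 1                 # an NA indicator in column 35
# the packed code 2^34 + 1 exceeds .Machine$integer.max = 2^31 - 1,
# so as.integer() returns NA (with a coercion warning)
mdp <- suppressWarnings(as.integer((r %*% (2 ^ ((1:p) - 1))) + 1))
mdp                           # NA
```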

The value of mdp[i] is a "packed" binary encoding of the
column positions of any NAs in row i: if bit j-1 (counting
from 0) in the binary representation of mdp[i] is 1, then
there is an NA in column j of row i.

The vector mdp is used at various places in the NORM routines,
and the effect on the imputations of having NAs in it, when
the functioning of the routines depends on unpacking the
encoding, is catastrophic. (Experiment had shown, indeed,
that imputing with a subset of fewer than 32 columns always
gave acceptable results).

The upshot of this is that NORM cannot be used for multiple
imputations if there are more than 31 columns in the data
which have NAs in them.

You could have more than 31 columns of data -- indeed Leo's
74 would have worked then -- if the columns are re-ordered
so that all the columns with NAs are at the left, provided
there are fewer than 32 with NAs. Unfortunately Leo has 61.

There is in principle no necessity to represent NA positions
in this way, but that is how Shafer did it and it was carried
over into R. An alternative method would simply be to have
a 0/1 matrix of NA indicators, but the code for the NORM
functions would have to be picked through to replace the
unpacking of mdp -- and this includes FORTRAN routines
(Oh dear, echoes of the "open source and R" discussion)!

So removing this limitation would not be trivial.
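The indicator-matrix alternative mentioned above can at least be sketched in a few lines (the dimensions mirror Leo's dataset, but the data here are simulated):

```r
set.seed(1)
x <- matrix(rnorm(200 * 74), nrow = 200, ncol = 74)
x[sample(length(x), 553)] <- NA   # 553 NAs, as in the real dataset

r    <- 1 * is.na(x)              # plain 0/1 indicators: no 2^31 limit
nmis <- colSums(r)                # per-column NA counts, as in prelim.norm
# distinct missingness patterns, without packing them into one integer:
patterns <- unique(as.data.frame(r))
nrow(patterns)                    # number of distinct patterns
```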

I have not noticed mention of the limitation in the documentation
of the NORM functions.

Exactly the same construction of mdp, and therefore exactly the
same problem, occurs in prelim.cat in CAT, for which I'm joint
maintainer with Fernando Tusell, so we had better try to look
into that! Any lessons we learn will be broadcast, so should be
useful for NORM as well.

And, for good measure, in MIX it occurs twice over in prelim.mix:
once in constructing mdpz for the continuous variables, and
once in mdpw for the categorical variables. This is perhaps
less likely in practice to cause the problem in MIX, since it
would arise only if either there were more than 31 columns of
continuous variables with NAs, or more than 31 of categorical
variables; so MIXers can spread their bets.

Again, I have not noticed that the limitation is mentioned
in the documentation of MIX; and I'm pretty sure it is not
in the documentation of CAT!

Any suggestions or guidance from people who are familiar with
NORM and MIX will be most welcome.

I should add that I have not looked into PAN, but would not
be surprised if it were there as well.

I've written this explanation in consultation with Leo Gürtler,
and he has proposed that I should publish it to the R List;
but please consider that it is a joint effort.

Best wishes to all,
Ted.



E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
Fax-to-email: +44 (0)870 094 0861
Date: 15-Nov-05   Time: 00:45:46
-- XFMail --

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Linear model mixed with an ARIMA model

2005-11-14 Thread Z ZX

   Dear all,

   I'm looking for how I can combine a linear model with an ARMA model, like

   log(y_t) = 8.95756 + 0.0346414*t - 0.1*t^2 + u_t
   u_t = -0.296*u_{t-1} + a_t - 0.68*a_{t-1}

   where log(y) is a quadratic function of t, capturing the time series trend.

   I then get the residuals from the first equation,

   "observed value - fitted value = u_t"

   and fit an ARIMA(1,1,1) model for u_t.

   Anyway, how can I combine these two models together as a group?

   My purpose is to use this combined model to forecast 'y'.

   Can you help me?  I would very much appreciate it.
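No reply appears in this digest; one standard way in R to fit the trend and the ARMA errors jointly is arima() with an xreg matrix of trend regressors (the series below is simulated for illustration, and the quadratic coefficient is shrunk from the poster's -0.1 so it stays well-behaved over t = 1..120):

```r
set.seed(42)
tt <- 1:120
u  <- arima.sim(n = 120, model = list(ar = -0.296, ma = -0.68))
logy <- 8.95756 + 0.0346414 * tt - 0.001 * tt^2 + u

# quadratic trend as external regressors, ARMA(1,1) disturbances
X   <- cbind(t = tt, t2 = tt^2)
fit <- arima(logy, order = c(1, 0, 1), xreg = X)

# forecast the next 12 periods and back-transform to the 'y' scale
new_t <- 121:132
fc    <- predict(fit, n.ahead = 12, newxreg = cbind(t = new_t, t2 = new_t^2))
y_hat <- exp(fc$pred)
```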


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html



Re: [R] (no subject)

2005-11-14 Thread Prof Brian Ripley
In what sense is this a problem report for R?  R does not know about
cc1, unless some user told it to use it.

cc1 is an internal part of gcc (the C front-end), usually found in

/usr/libexec/gcc/i686-pc-linux-gnu/4.0.2

or some such path. As my path shows, it is part of gcc 4.0.2, so this 
looks like an error in your compiler installation.  It is not to do 
with Fortran, whose front-end is f951 in the same directory.

On Tue, 15 Nov 2005, [EMAIL PROTECTED] wrote:

> just a little problem report for R 2.2.0 on OpenSuse 10.0-64. Gcc version is 
> 4.0.2
> Installing fortran packages runs into: 'cc1' command not found.
> I apparently got away with:
> sudo ln -s /usr/bin/cc /usr/bin/cc1
> which causes other warnings but the packages seem to function well. Obviously 
> cc1 does no longer exist in gcc 4.0.2.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Using pakage foreign and to import SAS file

2005-11-14 Thread Rick Bilonick
On Mon, 2005-11-14 at 22:55 +, Walter R. Paczkowski wrote:
> Hi,
> 
> I'm struggling with foreign to import a SAS file.  The file, for lack of 
> imagination, is d.sas7bdat and is in my root directory (c:\) under Windows 
> XP.  When I type
> 
> read.ssd("c:\\", "d")
> 
> which I think I'm suppose to enter, I get
> 
> SAS failed.  SAS program at 
> C:\DOCUME~1\Owner\LOCALS~1\Temp\Rtmp32758\file19621.sas 
> The log file will be file19621.log in the current directory
> NULL
> Warning messages:
> 1: "sas" not found 
> 2: SAS return code was -1 in: read.ssd("c:\\", "d") 
> 
> I have SAS 9.1 running on my computer so SAS is there.  What am I doing wrong?
> 
> Thanks,
> 
> Walt
> 
> 
I've not used read.ssd but I've had good results with sas.get in Hmisc.

Rick B.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Using pakage foreign and to import SAS file

2005-11-14 Thread Austin, Matt
If sas isn't in the path, then you might have trouble with sas.get or
read.ssd.

Assuming you are using windows, go to the Start menu, select run and type
"sas".  If sas fires up it's in your path, if not then that is the reason.

--Matt

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Rick Bilonick
> Sent: Monday, November 14, 2005 8:22 PM
> To: Walter R. Paczkowski
> Cc: r-help@stat.math.ethz.ch
> Subject: Re: [R] Using pakage foreign and to import SAS file
> 
> 
> On Mon, 2005-11-14 at 22:55 +, Walter R. Paczkowski wrote:
> > Hi,
> > 
> > I'm struggling with foreign to import a SAS file.  The 
> file, for lack of imagination, is d.sas7bdat and is in my 
> root directory (c:\) under Windows XP.  When I type
> > 
> > read.ssd("c:\\", "d")
> > 
> > which I think I'm suppose to enter, I get
> > 
> > SAS failed.  SAS program at 
> C:\DOCUME~1\Owner\LOCALS~1\Temp\Rtmp32758\file19621.sas 
> > The log file will be file19621.log in the current directory
> > NULL
> > Warning messages:
> > 1: "sas" not found 
> > 2: SAS return code was -1 in: read.ssd("c:\\", "d") 
> > 
> > I have SAS 9.1 running on my computer so SAS is there.  
> What am I doing wrong?
> > 
> > Thanks,
> > 
> > Walt
> > 
> > 
> I've not used read.ssd but I've had good results with sas.get 
> in Hmisc.
> 
> Rick B.
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Using pakage foreign and to import SAS file

2005-11-14 Thread Prof Brian Ripley
It is highly unlikely that SAS is on the path, as it does not put itself 
there.

read.ssd() has a 'sascmd' argument to give the path to SAS.  This is 
explained, *with a functioning Windows example*, on the help page for 
read.ssd.
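A sketch of that call; the SAS path below is only a guess at a typical SAS 9.1 install location, so adjust it to your machine, and the file.exists() guard keeps the sketch harmless where SAS is absent:

```r
library(foreign)

# typical (but not guaranteed) SAS 9.1 location -- adjust to your install
sascmd <- "C:/Program Files/SAS/SAS 9.1/sas.exe"

if (file.exists(sascmd)) {
  # read member "d" from library directory c:\ using the named SAS binary
  d <- read.ssd(libname = "c:\\", sectionnames = "d", sascmd = sascmd)
}
```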

On Mon, 14 Nov 2005, Austin, Matt wrote:

> If sas isn't in the path, then you might have trouble with sas.get or
> read.ssd.
>
> Assuming you are using windows, go to the Start menu, select run and type
> "sas".  If sas fires up it's in your path, if not then that is the reason.
>
> --Matt
>
>> -Original Message-
>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] Behalf Of Rick Bilonick
>> Sent: Monday, November 14, 2005 8:22 PM
>> To: Walter R. Paczkowski
>> Cc: r-help@stat.math.ethz.ch
>> Subject: Re: [R] Using pakage foreign and to import SAS file
>>
>>
>> On Mon, 2005-11-14 at 22:55 +, Walter R. Paczkowski wrote:
>>> Hi,
>>>
>>> I'm struggling with foreign to import a SAS file.  The
>> file, for lack of imagination, is d.sas7bdat and is in my
>> root directory (c:\) under Windows XP.  When I type
>>>
>>> read.ssd("c:\\", "d")
>>>
>>> which I think I'm suppose to enter, I get
>>>
>>> SAS failed.  SAS program at
>> C:\DOCUME~1\Owner\LOCALS~1\Temp\Rtmp32758\file19621.sas
>>> The log file will be file19621.log in the current directory
>>> NULL
>>> Warning messages:
>>> 1: "sas" not found
>>> 2: SAS return code was -1 in: read.ssd("c:\\", "d")
>>>
>>> I have SAS 9.1 running on my computer so SAS is there.
>> What am I doing wrong?

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html