[R] [R-pkgs] new packages psyphy and MLDS

2007-05-29 Thread ken knoblauch
New packages psyphy and MLDS are available on CRAN:

psyphy includes an assortment of functions useful in analyzing data from
psychophysical experiments. It includes functions for calculating d' from
several different experimental designs, links for mafc to be used with the
binomial family in glm (and possibly other contexts), and selfStart functions
for estimating gamma values for CRT (and possibly other RGB) screen
calibration data.

MLDS implements analyses for Maximum Likelihood Difference Scaling.
Difference scaling is a method for scaling perceived suprathreshold
differences. The package contains functions that allow the user to fit
the resulting data by maximum likelihood and to test the internal validity
of the estimated scale. There are also example functions that might
be used to design and run a difference scaling experiment.
Any suggestions, criticisms, bug reports, etc. are always welcome.

Best,

Ken Knoblauch

-- 
Ken Knoblauch
Inserm U846
Institut Cellule Souche et Cerveau
Département Neurosciences Intégratives
18 avenue du Doyen Lépine
69500 Bron
France
tel: +33 (0)4 72 91 34 77
fax: +33 (0)4 72 91 34 61
portable: +33 (0)6 84 10 64 10
http://www.pizzerialesgemeaux.com/u846/

___
R-packages mailing list
[EMAIL PROTECTED]
https://stat.ethz.ch/mailman/listinfo/r-packages

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] http proxies: setting and unsetting

2007-05-29 Thread Gabor Grothendieck
One other point.  If you find you need to set a system or user environment
variable, then Microsoft has a free tool called setx.exe that you can find here:

http://support.microsoft.com/kb/927229

You can do this from within R using system().
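For example, a sketch (the proxy address below is just a made-up placeholder):

## set a user-level http_proxy from within R; requires setx.exe on the PATH
system('setx http_proxy "http://proxy.example.com:8080"')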

On 5/30/07, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> On 5/29/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > OK, I think I get that... do you know which namespace the Sys.setenv() 
> > function affects?  Do you know if there are functions in R that can alter 
> > the user/system/process environment variables?
> >
>
> Use the R Sys.getenv() command to get the process environment variables.
> To get user and system environment variables, from the Desktop right click on
> My Computer and choose Properties.  Then choose the Advanced tab
> and click on the Environment Variables button near the bottom of the
> window that appears.
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] http proxies: setting and unsetting

2007-05-29 Thread Gabor Grothendieck
On 5/29/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> OK, I think I get that... do you know which namespace the Sys.setenv() 
> function affects?  Do you know if there are functions in R that can alter the 
> user/system/process environment variables?
>

Use the R Sys.getenv() command to get the process environment variables.
To get user and system environment variables, from the Desktop right click on
My Computer and choose Properties.  Then choose the Advanced tab
and click on the Environment Variables button near the bottom of the
window that appears.
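For example, to check what the R process itself currently sees:

## process-level value of http_proxy
Sys.getenv("http_proxy")
## list every variable whose name mentions "proxy"
grep("proxy", names(Sys.getenv()), ignore.case = TRUE, value = TRUE)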

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] http proxies: setting and unsetting

2007-05-29 Thread Prof Brian Ripley
That was misleading advice.  R is a C program and accesses the environment 
via the C calls getenv and (on Windows) putenv.  This is not Windows 
scripting (Grothendieck's earlier reference): the C runtime maintains only 
one environment block.

I've re-checked, and suspect the problem is in the following comment in
?download.file

  These environment variables must be set before the download code
  is first used: they cannot be altered later by calling
  'Sys.setenv'.

If I have http_proxy set in the environment, the proxy is not used in any 
of the following cases:

- Calling R from a shortcut with http_proxy="" on the target (see the 
rw-FAQ).

- Calling R from a shortcut with no_proxy="*" on the target.

- Using Sys.unsetenv("http_proxy") right at the start of the R session.

- Using Sys.setenv(no_proxy="*") right at the start of the R session.

If you set options(internet.info=0) you will see exactly what is tried.
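For example, a minimal sketch of the session-level approaches (run right at the
start of the session, before any download):

Sys.unsetenv("http_proxy")          # or: Sys.setenv(no_proxy = "*")
options(internet.info = 0)          # trace exactly what the download code tries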

Another possibility is that R is being used with --internet2, in which 
case none of this applies, and the simple answer is not to use --internet2 
at home.



On Tue, 29 May 2007, Gabor Grothendieck wrote:

> You can have 4 different http_proxy environment variables and if you
> set one type but try to unset a different type then that will have no
> effect on the one you originally set.  For example, if you originally
> set it as a system or user environment variable and then try to
> unset the process environment variable of the same name then
> that will have no effect on the system or user environment variable.
>
> On 5/29/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> Hi Gabor,
>>
>> Thanks for the reply and link.
>>
>> I took a look at the link -- one thing I don't understand is why if I delete 
>> the 'http_proxy' variable via the cmd shell (or equivalent OS dialog box), 
>> why I can get R to ignore the proxy, but using Sys.setenv("http_proxy"="") 
>> won't do that for me (at least for the scope of the session).  If there were 
>> other variables affecting it, I would think my deleting 'http_proxy' in the 
>> OS would also have no effect -- yet it does.
>>
>> Any ideas?
>>
>> Thanks again,
>> Matt
>>
>>
>> -Original Message-
>> From: Gabor Grothendieck [mailto:[EMAIL PROTECTED]
>> Sent: Tue 5/29/2007 9:49 PM
>> To: Pettis, Matthew (Thomson)
>> Cc: r-help@stat.math.ethz.ch
>> Subject: Re: [R] http proxies: setting and unsetting
>>
>> Note that Windows XP has 4 types of environment variables and I suspect
>> that the problem stems from not taking that into account:
>>
>> http://www.microsoft.com/technet/scriptcenter/guide/sas_wsh_kmmj.mspx?mfr=true
>>
>> On 5/29/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>>> Hi,
>>>
>>> I am trying to use R at work and at home on the same computer.  At work, I 
>>> have a proxy, and at home, I do not.  I have, for work, a User environment 
>>> variable "http_proxy" which I set in the OS (Windows XP Pro).  When I am at 
>>> work, and I try to retrieve data from the web with 'read.csv', things work 
>>> just fine.  I assume it knows how to use the proxy.
>>>
>>> The trouble is when I am at home and have no proxy, R still tries to use my 
>>> work proxy.  I have tried the following:
>>>
>>> Sys.setenv("http_proxy"="")
>>> Sys.setenv("no_proxy"=TRUE)
>>> Sys.setenv("no_proxy"=1)
>>>
>>> none of which seems to work.  Whenever I try to use read.csv, it tells me 
>>> that it cannot find my work proxy, which I am trying to tell R to ignore.
>>>
>>> I can solve this problem by removing the http_proxy environment variable 
>>> binding in the OS when at home, but that is a pain, because then I have to 
>>> reset it when I go back into work.
>>>
>>> Is there a way to tell R within a session to ignore the proxy?  If so, what 
>>> am I doing wrong?
>>>
>>> thanks,
>>> matt
>>>
>>> __
>>> R-help@stat.math.ethz.ch mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>
>>
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Antiguo Mueble Con Tocadiscos Y Radio!

2007-05-29 Thread Pat
WEDNESDAY, MAY 30: THE RALLY STARTS!

Company: TALKTECH TELEMEDIA (OYQ.F)
Ticker: WKN: 278-104 (OYQ.F)
ISIN: US8742781045

Price: 0.81 (+50% in 1 day)
3-day forecast: 3

Overall I believe this to be a good work and worth the money if you wish
to be introduced to Windows Workflow Foundation.
Singer A Pedal - Impecable ! Lote De Dos Frascos Antiguos De Vidrio.
This is not your average computer text which gives a chunk of code and
then explains what it does. Antiguo Tractor De Chapa .
Explain that all of this material will make sense once they start to do
the task on the job. Antiguo Autito De Chapa Argo ,usa Juguete, Chapa.
However, this enhancement comes with a price, a steep learning curve.
Instead, you can spend a few hours each week practicing skills and
learning vocabulary.
Even if the customer doesn't get all the people required to make the
possible profit, the money will be paid out every month.
There really is nothing average about this writer and book. Antiguo 
Juguete Elefante De Goma Piel Roce Ind.
And then check out any MLM opportunitiy and see how it looks.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] http proxies: setting and unsetting

2007-05-29 Thread matt.pettis
OK, I think I get that... do you know which namespace the Sys.setenv() function 
affects?  Do you know if there are functions in R that can alter the 
user/system/process environment variables?

Thanks,
Matt


-Original Message-
From: Gabor Grothendieck [mailto:[EMAIL PROTECTED]
Sent: Tue 5/29/2007 10:20 PM
To: Pettis, Matthew (Thomson)
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] http proxies: setting and unsetting
 
You can have 4 different http_proxy environment variables and if you
set one type but try to unset a different type then that will have no
effect on the one you originally set.  For example, if you originally
set it as a system or user environment variable and then try to
unset the process environment variable of the same name then
that will have no effect on the system or user environment variable.

On 5/29/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Hi Gabor,
>
> Thanks for the reply and link.
>
> I took a look at the link -- one thing I don't understand is why if I delete 
> the 'http_proxy' variable via the cmd shell (or equivalent OS dialog box), 
> why I can get R to ignore the proxy, but using Sys.setenv("http_proxy"="") 
> won't do that for me (at least for the scope of the session).  If there were 
> other variables affecting it, I would think my deleting 'http_proxy' in the 
> OS would also have no effect -- yet it does.
>
> Any ideas?
>
> Thanks again,
> Matt
>
>
> -Original Message-
> From: Gabor Grothendieck [mailto:[EMAIL PROTECTED]
> Sent: Tue 5/29/2007 9:49 PM
> To: Pettis, Matthew (Thomson)
> Cc: r-help@stat.math.ethz.ch
> Subject: Re: [R] http proxies: setting and unsetting
>
> Note that Windows XP has 4 types of environment variables and I suspect
> that the problem stems from not taking that into account:
>
> http://www.microsoft.com/technet/scriptcenter/guide/sas_wsh_kmmj.mspx?mfr=true
>
> On 5/29/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > I am trying to use R at work and at home on the same computer.  At work, I 
> > have a proxy, and at home, I do not.  I have, for work, a User environment 
> > variable "http_proxy" which I set in the OS (Windows XP Pro).  When I am at 
> > work, and I try to retrieve data from the web with 'read.csv', things work 
> > just fine.  I assume it knows how to use the proxy.
> >
> > The trouble is when I am at home and have no proxy, R still tries to use my 
> > work proxy.  I have tried the following:
> >
> > Sys.setenv("http_proxy"="")
> > Sys.setenv("no_proxy"=TRUE)
> > Sys.setenv("no_proxy"=1)
> >
> > none of which seems to work.  Whenever I try to use read.csv, it tells me 
> > that it cannot find my work proxy, which I am trying to tell R to ignore.
> >
> > I can solve this problem by removing the http_proxy environment variable 
> > binding in the OS when at home, but that is a pain, because then I have to 
> > reset it when I go back into work.
> >
> > Is there a way to tell R within a session to ignore the proxy?  If so, what 
> > am I doing wrong?
> >
> > thanks,
> > matt
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] http proxies: setting and unsetting

2007-05-29 Thread Gabor Grothendieck
You can have 4 different http_proxy environment variables and if you
set one type but try to unset a different type then that will have no
effect on the one you originally set.  For example, if you originally
set it as a system or user environment variable and then try to
unset the process environment variable of the same name then
that will have no effect on the system or user environment variable.

On 5/29/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Hi Gabor,
>
> Thanks for the reply and link.
>
> I took a look at the link -- one thing I don't understand is why if I delete 
> the 'http_proxy' variable via the cmd shell (or equivalent OS dialog box), 
> why I can get R to ignore the proxy, but using Sys.setenv("http_proxy"="") 
> won't do that for me (at least for the scope of the session).  If there were 
> other variables affecting it, I would think my deleting 'http_proxy' in the 
> OS would also have no effect -- yet it does.
>
> Any ideas?
>
> Thanks again,
> Matt
>
>
> -Original Message-
> From: Gabor Grothendieck [mailto:[EMAIL PROTECTED]
> Sent: Tue 5/29/2007 9:49 PM
> To: Pettis, Matthew (Thomson)
> Cc: r-help@stat.math.ethz.ch
> Subject: Re: [R] http proxies: setting and unsetting
>
> Note that Windows XP has 4 types of environment variables and I suspect
> that the problem stems from not taking that into account:
>
> http://www.microsoft.com/technet/scriptcenter/guide/sas_wsh_kmmj.mspx?mfr=true
>
> On 5/29/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > I am trying to use R at work and at home on the same computer.  At work, I 
> > have a proxy, and at home, I do not.  I have, for work, a User environment 
> > variable "http_proxy" which I set in the OS (Windows XP Pro).  When I am at 
> > work, and I try to retrieve data from the web with 'read.csv', things work 
> > just fine.  I assume it knows how to use the proxy.
> >
> > The trouble is when I am at home and have no proxy, R still tries to use my 
> > work proxy.  I have tried the following:
> >
> > Sys.setenv("http_proxy"="")
> > Sys.setenv("no_proxy"=TRUE)
> > Sys.setenv("no_proxy"=1)
> >
> > none of which seems to work.  Whenever I try to use read.csv, it tells me 
> > that it cannot find my work proxy, which I am trying to tell R to ignore.
> >
> > I can solve this problem by removing the http_proxy environment variable 
> > binding in the OS when at home, but that is a pain, because then I have to 
> > reset it when I go back into work.
> >
> > Is there a way to tell R within a session to ignore the proxy?  If so, what 
> > am I doing wrong?
> >
> > thanks,
> > matt
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Generating Data using Formulas

2007-05-29 Thread Christian Falde
Hello, 
 
My name is Christian Falde.  I am new to R. 
 
My problem is this.  I am attempting to learn R on my own, using some problems 
from Davidson and MacKinnon's Econometric Theory and Methods.  This is because I 
can already do some of the problems in SAS, so I am attempting to rework them in 
R. That seemed logical to me, but now I am stuck and it's really bugging me. 
 
 
The problem is this: 
 
Generate a data set of sample size 25 with the formula y(t) = 1 + 0.8*y(t-1) + u(t), 
where y is the dependent variable, y(t-1) is the dependent variable lagged one period, 
and u is the classical error term.  Assume y(0) = 0 and that u is NID(0,1). Use this 
sample to compute the OLS estimates B1 (1) and B2 (0.8).  Repeat at least 100 
times and find the average of the B's.  Use these averages to estimate the bias 
of the OLS estimators. 
 
To start, I did the following non-lagged program: 
 
final <- function(i, j) {
    x <- function(i) 10 * i
    y <- function(i, j) 1 + .8 * 10 * i + 100 * rnorm(j)
    datathreeone <- data.frame(replicate(100, coef(lm(y(i, j) ~ x(i)))))
    rowMeans(datathreeone)
}
final(1:25, 25)
final(1:50, 50)
final(1:100, 100)
final(1:200, 200)
final(1:1, 1)
 
 
Now the "only" thing I need to to is change ".8*10*i"  which is exogenous to 
".8* y(t-1) ".   
 
There are two reasons why I did it this way: I needed the rnorm() call to generate 
a new set of u's for each replication, and I wanted to be able to use the function 
as I did to make the results more concise. 
 
For the lag in SAS we used if-then-else logic relating to the observation number. 
In R this would have to be linked to the invisible row number.  I think I need an 
index variable for the row.  (Sorry, thinking while typing.) 
 
Another reason why I am stuck: the lag function seemed straightforward. 
 
 
lag(x, k = 1)
 
yet x has to be a matrix, so when I tried to do it as above with y as a 
function, R complained.  
 
I have been working on this for a couple of days now, so everything is beginning 
to not make sense.  It just seems to me that to get the matrix to work out I would 
need two matrices: 
 
dependent   and   explanatory
y1 = sum(1 + .8*0 + 100*rnorm(i))
y2 = sum(1 + .8*(dependent row 1) + 100*rnorm(i))
etc.  
 
I just am not sure how to do that. 
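 
A minimal sketch of the data-generating recursion described above (one 
replication only, with hypothetical object names): 
 
## y_t = 1 + 0.8 * y_(t-1) + u_t, with y_0 = 0 and u ~ NID(0, 1)
n <- 25
u <- rnorm(n)
y <- numeric(n)
ylag <- 0
for (t in 1:n) {
    y[t] <- 1 + 0.8 * ylag + u[t]
    ylag <- y[t]
}
coef(lm(y ~ c(0, y[-n])))   # OLS estimates of B1 and B2 from one sample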
 
Please help and thank you for your time, 
 
christian falde
 
 
 
 
 
 
 
 
 
 
[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] http proxies: setting and unsetting

2007-05-29 Thread matt.pettis
Hi Gabor,

Thanks for the reply and link.

I took a look at the link -- one thing I don't understand is why if I delete 
the 'http_proxy' variable via the cmd shell (or equivalent OS dialog box), why 
I can get R to ignore the proxy, but using Sys.setenv("http_proxy"="") won't do 
that for me (at least for the scope of the session).  If there were other 
variables affecting it, I would think my deleting 'http_proxy' in the OS would 
also have no effect -- yet it does.

Any ideas?

Thanks again,
Matt


-Original Message-
From: Gabor Grothendieck [mailto:[EMAIL PROTECTED]
Sent: Tue 5/29/2007 9:49 PM
To: Pettis, Matthew (Thomson)
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] http proxies: setting and unsetting
 
Note that Windows XP has 4 types of environment variables and I suspect
that the problem stems from not taking that into account:

http://www.microsoft.com/technet/scriptcenter/guide/sas_wsh_kmmj.mspx?mfr=true

On 5/29/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I am trying to use R at work and at home on the same computer.  At work, I 
> have a proxy, and at home, I do not.  I have, for work, a User environment 
> variable "http_proxy" which I set in the OS (Windows XP Pro).  When I am at 
> work, and I try to retrieve data from the web with 'read.csv', things work 
> just fine.  I assume it knows how to use the proxy.
>
> The trouble is when I am at home and have no proxy, R still tries to use my 
> work proxy.  I have tried the following:
>
> Sys.setenv("http_proxy"="")
> Sys.setenv("no_proxy"=TRUE)
> Sys.setenv("no_proxy"=1)
>
> none of which seems to work.  Whenever I try to use read.csv, it tells me 
> that it cannot find my work proxy, which I am trying to tell R to ignore.
>
> I can solve this problem by removing the http_proxy environment variable 
> binding in the OS when at home, but that is a pain, because then I have to 
> reset it when I go back into work.
>
> Is there a way to tell R within a session to ignore the proxy?  If so, what 
> am I doing wrong?
>
> thanks,
> matt
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] http proxies: setting and unsetting

2007-05-29 Thread Gabor Grothendieck
Note that Windows XP has 4 types of environment variables and I suspect
that the problem stems from not taking that into account:

http://www.microsoft.com/technet/scriptcenter/guide/sas_wsh_kmmj.mspx?mfr=true

On 5/29/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I am trying to use R at work and at home on the same computer.  At work, I 
> have a proxy, and at home, I do not.  I have, for work, a User environment 
> variable "http_proxy" which I set in the OS (Windows XP Pro).  When I am at 
> work, and I try to retrieve data from the web with 'read.csv', things work 
> just fine.  I assume it knows how to use the proxy.
>
> The trouble is when I am at home and have no proxy, R still tries to use my 
> work proxy.  I have tried the following:
>
> Sys.setenv("http_proxy"="")
> Sys.setenv("no_proxy"=TRUE)
> Sys.setenv("no_proxy"=1)
>
> none of which seems to work.  Whenever I try to use read.csv, it tells me 
> that it cannot find my work proxy, which I am trying to tell R to ignore.
>
> I can solve this problem by removing the http_proxy environment variable 
> binding in the OS when at home, but that is a pain, because then I have to 
> reset it when I go back into work.
>
> Is there a way to tell R within a session to ignore the proxy?  If so, what 
> am I doing wrong?
>
> thanks,
> matt
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] control axis

2007-05-29 Thread Murray Pung
I have an example below:


example <-
structure(list(
patient = c(1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6),
group = structure(c("active", "active", "active", "active", "active",
"active", "active",
"active", "active", "active", "active", "active", "active", "active",
"active", "active", "active", "active", "active", "active", "active",
"active", "active",
"active", "active", "active", "active", "active", "active",
"active","placebo", "placebo", "placebo", "placebo", "placebo", "placebo",
"placebo",
"placebo", "placebo", "placebo", "placebo", "placebo", "placebo", "placebo",
"placebo", "placebo", "placebo", "placebo", "placebo", "placebo", "placebo",
"placebo", "placebo",
"placebo", "placebo", "placebo", "placebo", "placebo", "placebo",
"placebo"),
.label = c("active", "placebo")),
visit = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
var = c(1, 0.8, 0.5, 0.45, 0.3,
0.34, 0.26, 0.25, 0.2, 0.19, 1, 0.6, 0.5, 0.4, 0.35, 0.3,
0.23, 0.2, 0.19, 0.1, 1, 1.2, 0.8, 0.6, 0.56, 0.45, 0.54,
0.34, 0.32, 0.2, 1, 1.2, 1.3, 1.1, 4.2, 1.3, 1.2, 0.9, 0.89,
0.88, 1, 1.3, 1.2, 1.2, 0.9, 0.87, 0.76, 0.8, 0.98, 1.2,
1, 1.2, 1.3, 1.2, 1.15, 1.2, 1.234, 1.4, 1.1, 1)),
.Names = c("patient", "group", "visit", "var"), class = "data.frame",
row.names = c(NA,
-60))


xyplot(example$var ~ example$visit | example$group,
       groups = example$patient,
       col = "black",
       type = "b",
       ylab = "Variable",
       xlab = "Visit",
       bty = "n",
       pch = c(16, 16),
       las = 1,
       #ylim = c(0,5),
       scales = list(
           x = list(at = 1:10,
                    labels = c("Baseline", "2", "3", "4", "5", "6", "7",
                               "8", "9", "End of Study"),
                    rot = 40,
                    alternating = 1))
)


On 30/05/07, Anup Nandialath <[EMAIL PROTECTED]> wrote:
>
> did you try the xlim and ylim options on xyplot? you can change the axis
> using that
>
> HTH
>
> Anup
>
>
>


-- 
Murray Pung
Statistician, Datapharm Australia Pty Ltd
0404 273 283

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] http proxies: setting and unsetting

2007-05-29 Thread matt.pettis
Hi,

I am trying to use R at work and at home on the same computer.  At work, I have 
a proxy, and at home, I do not.  I have, for work, a User environment variable 
"http_proxy" which I set in the OS (Windows XP Pro).  When I am at work, and I 
try to retrieve data from the web with 'read.csv', things work just fine.  I 
assume it knows how to use the proxy.

The trouble is when I am at home and have no proxy, R still tries to use my 
work proxy.  I have tried the following:

Sys.setenv("http_proxy"="")
Sys.setenv("no_proxy"=TRUE)
Sys.setenv("no_proxy"=1)

none of which seems to work.  Whenever I try to use read.csv, it tells me that 
it cannot find my work proxy, which I am trying to tell R to ignore.

I can solve this problem by removing the http_proxy environment variable 
binding in the OS when at home, but that is a pain, because then I have to 
reset it when I go back into work.

Is there a way to tell R within a session to ignore the proxy?  If so, what am 
I doing wrong?

thanks,
matt

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Partially reading a file (particularly)

2007-05-29 Thread Gabor Grothendieck
On 5/29/07, Charles C. Berry <[EMAIL PROTECTED]> wrote:
> On Tue, 29 May 2007, Tobin, Jared wrote:
>
> > Hello,
> >
> > I am trying to figure out if there exists some R command that allows one
> > to be
> > particularly selective when reading a file.  I'm dealing with large
> > fixed-width data
> > sets that look like
> >
> > 539001..
> > 639001..
> > 639001..
> > ...
> > 539002..
> > 639002..
> > ...
> >
> > Presently, I am using read.fwf to read an entire file, but I am
> > interested only in
> > reading those records beginning with 5.  I have been unable to find help
> > in any of
> > the suggested resources.
>
> Assuming you have 'grep' in your path,
>
>res <- read.fwf( pipe( "grep '^5' my.file" ) ,  )
>
> will do it.
>
> grep will usually be found on linux/unix systems and Mac OS X. The
> 'Rtools' toolkit for windows has grep, I believe.

On windows XP we can also use findstr which comes with Windows:

 res <- read.fwf( pipe( "findstr /b 5 my.file" ) ,  )
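 
A fuller sketch of either approach (the file name and field widths below are 
hypothetical placeholders; use the real record layout): 
 
 res <- read.fwf(pipe("findstr /b 5 my.file"),   # or: pipe("grep '^5' my.file")
                 widths = c(1, 5))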

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] control axis

2007-05-29 Thread Deepayan Sarkar
On 5/29/07, Murray Pung <[EMAIL PROTECTED]> wrote:
> I have an outlier that I would still like to display, but would prefer to
> shorten the axis. For example, display 0% - 40%, and 90% - 100%. Is this
> possible? I am using an xyplot.

Could you give an example? If you mean xyplot from the lattice
package, using a shingle can be a reasonable approach, perhaps with
relation="sliced".

-Deepayan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Help with optim

2007-05-29 Thread Bojanowski, M.J. \(Michal\)
Hi,

Unfortunately I don't think it is possible to do exactly what you want, but:

If the numbers reported by 'optim' to the console are enough for you, then
consider using 'capture.output'. Below I used the example from 'optim' help
page, because I could not use yours directly.

hth,

Michal


# -b-e-g-i-n---R---c-o-d-e-

# this is from the 'optim' example

fr <- function(x) {   ## Rosenbrock Banana function
x1 <- x[1]
x2 <- x[2]
100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}

# and now optim-ize capturing the output to 'out' and the results to 'o'
out <- capture.output(
o <- optim( c(-1.2, 1), fr, method="BFGS",
control=c(REPORT=1, trace=1))
)

# 'out' is a character vector storing every line as a separate element
out

# 'o' is returned by optim
o

# to get a grip on the values you could use, for example, 'strsplit' and then
# extract the necessary info
optimout <- function(out)
{
# split by spaces
l <- strsplit(out, " ")
# just return the numbers
rval <- sapply(l[-length(l)], function(x) x[length(x)] )
as.numeric(rval)
}

x <- optimout(out)
x
plot(x)


# -e-n-d---R---c-o-d-e-


-Original Message-
From: [EMAIL PROTECTED] on behalf of Anup Nandialath
Sent: Tue 2007-05-29 08:33
To: r-help@stat.math.ethz.ch
Subject: [R] Help with optim
 
Dear Friends,

I'm using the optim command to maximize a likelihood function. My optim command 
is as follows

estim.out <- optim(beta, loglike, X=Xmain, Y=Y, hessian=T, method="BFGS", 
control=c(fnscale=-1, trace=1, REPORT=1))

Setting the report=1, gives me the likelihood function value (if i'm correct) 
at each step. The output from running this is as follows

initial  value 3501.558347 
iter   2 value 3247.277071
iter   3 value 3180.679307
iter   4 value 3157.201356
iter   5 value 3156.579887
iter   6 value 3017.715292
iter   7 value 2993.349538
iter   8 value 2987.181782
iter   9 value 2986.672719
iter  10 value 2986.658620
iter  11 value 2986.658266
iter  12 value 2986.658219
iter  13 value 2986.658156
iter  13 value 2986.658156
iter  13 value 2986.658135
final  value 2986.658135 
converged

I just wanted to know if there was any way I could get the value of each 
iteration into an object. At present it is dumped on the screen. But is there a 
way to get hold of these values through an object??

Thanks in advance

sincerely

Anup




 

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] control axis

2007-05-29 Thread Murray Pung
I have an outlier that I would still like to display, but would prefer to
shorten the axis. For example, display 0% - 40%, and 90% - 100%. Is this
possible? I am using an xyplot.

Thanks
Murray

-- 
Murray Pung
Statistician, Datapharm Australia Pty Ltd
0404 273 283

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] aggregation of a zoo object

2007-05-29 Thread jim holtman
Here is one way of doing it:

> time<-c("2000-10-03 14:00:00","2000-10-03 14:10:00","2000-10-03 14:20:00",
+ "2000-10-03 15:30:00","2000-10-03 16:40:00","2000-10-03 16:50:00",
+ "2000-10-03 17:00:00","2000-10-03 17:10:00","2000-10-03 17:20:00",
+ "2000-10-03 18:30:00","2000-10-04 14:00:00","2000-10-04 14:10:00",
+ "2000-10-04 14:20:00","2000-10-04 15:30:00","2000-10-04  16:40:00",
+ "2000-10-04 16:50:00","2000-10-04 17:00:00","2000-10-04 18:30:00",
+ "2000-10-04 18:30:00","2000-10-04 18:30:00")
> # remark the last date is occuring 3 times
>
> precipitation<-c(NA,0.1,0,0,NA,0,0.2,0.3,0.5,6,7,8,9,1,0,0,NA,0,1,0)
>
> my.df <- data.frame(time=as.POSIXct(time), precip=precipitation)
> # get only good data
> my.df <- my.df[complete.cases(my.df),]
> tapply(my.df$precip, as.POSIXct(trunc(my.df$time, 'day')), sum)
2000-10-03 2000-10-04
   7.1   26.0
>



On 5/29/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Dear all,
>
> I am trying to execute the following example:
>
> time<-c("2000-10-03 14:00:00","2000-10-03 14:10:00","2000-10-03
> 14:20:00","2000-10-03 15:30:00","2000-10-03 16:40:00","2000-10-03
> 16:50:00","2000-10-03 17:00:00","2000-10-03 17:10:00","2000-10-03
> 17:20:00","2000-10-03 18:30:00","2000-10-04 14:00:00","2000-10-04
> 14:10:00","2000-10-04 14:20:00","2000-10-04 15:30:00","2000-10-04
> 16:40:00","2000-10-04 16:50:00","2000-10-04 17:00:00","2000-10-04
> 18:30:00","2000-10-04 18:30:00","2000-10-04 18:30:00")
> # remark the last date is occuring 3 times
>
> precipitation<-c(NA,0.1,0,0,NA,0,0.2,0.3,0.5,6,7,8,9,1,0,0,NA,0,1,0)
>
> library(zoo)
>
> z <- zoo(precipitation, as.POSIXct(time, tz = "GMT"))
> Warning message:
> some methods for "zoo" objects do not work if the index entries in
> 'order.by' are not unique in: zoo(precipitation, as.POSIXct(time, tz =
> "GMT"))
>
> # then I want to do the sum per hour
>
> z_sum_per_hour <- aggregate(na.omit(z), function(x) as.POSIXct(trunc(x,
> "hour")),sum)
> Warning message:
> some methods for "zoo" objects do not work if the index entries in
> 'order.by' are not unique in: zoo(rval[i], x.index[i])
>
>
>
> Do anyone has an idea how to avoid that ?
>
>
>
> Thanks in advance
>
>
> Jessica
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] summing up colum values for unique IDs when multiple ID's exist in data frame

2007-05-29 Thread jim holtman
try this:

> x <- " IDval
+ 1  A  0.100
+ 2  B  0.001
+ 3  C -0.100
+ 4  A  0.200
+ "
> x <- read.table(textConnection(x), header=TRUE)
> (z <- tapply(x$val, x$ID, sum))
     A      B      C
 0.300  0.001 -0.100
> data.frame(ID=names(z), val=z)
  ID    val
A  A  0.300
B  B  0.001
C  C -0.100
>



On 5/29/07, Young Cho <[EMAIL PROTECTED]> wrote:
>
> I have data.frame's with IDs and multiple columns. B/c some of IDs showed
> up
> more than once, I need sum up colum values to creat a new dataframe with
> unique ids.
>
> I hope there are some cheaper ways of doing it...  Because the dataframe
> is
> huge, it takes almost an hour to do the task.  Thanks so much in advance!
>
> Young
>
> # -  examples are here and sum.dup.r is at the
> bottom.
>
> > x = data.frame(ID = c('A','B','C','A'), val=c(0.1,0.001,-0.1,0.2))
> > x
> IDval
> 1  A  0.100
> 2  B  0.001
> 3  C -0.100
> 4  A  0.200
> > sum.dup(x)
> IDval
> 1  A  0.300
> 2  B  0.001
> 3  C -0.100
>
>
>
> sum.dup <- function( x ){
>
>d.row = which(duplicated(x$ID))
>if( length(d.row) > 0){
>id = x$ID[d.row]
>com.val = x[-d.row,]
>for(i in 1:length(id)){
>s = sum(x$val[ x$ID == id[i] ])
>com.val$val[ com.val$ID == id[i] ] = s
>}
>ix = sort(as.character(com.val[,1]),index.return=T)
>return(com.val[ix$ix,])
>}else{
>ix = sort(as.character(x[,1]),index.return=T)
>return(x[ix$ix,])
>}
>
> }
>
>[[alternative HTML version deleted]]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] trouble understanding why ...=="NaN" isn't true

2007-05-29 Thread Sundar Dorai-Raj
Hi, Andrew,

Looks like you're reading the data incorrectly. If you are using ?read.table 
or the like, try adding an na.strings = c("NA", "NaN") argument. Second, per 
Bert's comment, use ?is.nan rather than "==".
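
A minimal sketch of both suggestions (the file name, separator, and column name 
are hypothetical):

## hypothetical tab-delimited file; strip.white drops the leading blanks
## that made "   NaN" compare unequal to "NaN"
dat <- read.table("mydata.txt", header = TRUE, sep = "\t",
                  na.strings = c("NA", "NaN"), strip.white = TRUE)

## the cell then comes in as missing; test it with is.na(), never with ==
is.na(dat[2, "Sample.227"])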

--sundar

Andrew Yee said the following on 5/29/2007 3:39 PM:
> Okay, it turns out that there were leading spaces, so that in the data, it
> was represented as "   NaN", hence the expression =="NaN" was coming back as
> false.
> 
> Is there a way to find out preemptively if there are leading spaces?
> 
> Thanks,
> Andrew
> 
> 
> On 5/29/07, Andrew Yee <[EMAIL PROTECTED]> wrote:
>> I have the following data:
>>
>>> dataset[2,"Sample.227"]
>> [1]NaN
>> 1558 Levels: -0.000 -0.001 -0.002 -0.003 -0.004 -0.005 -0.006 -0.007 -
>> 0.008 -0.009 ...  2.000
>>
>>
>> However, I'm not sure why this expression is coming back as FALSE:
>>
>>> dataset[2,"Sample.227"]=="NaN"
>> [1] FALSE
>>
>> Similarly:
>>
>>> dataset[2,"Sample.227"]==NaN
>> [1] NA
>>
>>
>> It seems that since "NaN" is represented as a character, this expression
>> =="NaN" should be TRUE, but it's returning as FALSE.
>>
>> Thanks,
>> Andrew
>>
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] trouble understanding why ...=="NaN" isn't true

2007-05-29 Thread Bert Gunter
1. "NaN" is a character string, **not** NaN; hence is.nan("NaN") yields
FALSE.

2. Please read the docs!  ?NaN explicitly says:

"Do not test equality to NaN, or even use identical, since systems typically
have many different NaN values."
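
For instance:

is.nan(NaN)     # TRUE:  the numeric value NaN
is.nan("NaN")   # FALSE: a character string, not the value NaN
NaN == NaN      # NA:    why equality tests against NaN are unreliable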


Bert Gunter
Genentech Nonclinical Statistics


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andrew Yee
Sent: Tuesday, May 29, 2007 3:33 PM
To: r-help@stat.math.ethz.ch
Subject: [R] trouble understanding why ...=="NaN" isn't true

I have the following data:

> dataset[2,"Sample.227"]
[1] NaN
1558 Levels: -0.000 -0.001 -0.002 -0.003 -0.004 -0.005 -0.006 -0.007 -0.008 -0.009 ...  2.000


However, I'm not sure why this expression is coming back as FALSE:

> dataset[2,"Sample.227"]=="NaN"
[1] FALSE

Similarly:

> dataset[2,"Sample.227"]==NaN
[1] NA


It seems that since "NaN" is represented as a character, this expression
=="NaN" should be TRUE, but it's returning as FALSE.

Thanks,
Andrew

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] trouble understanding why ...=="NaN" isn't true

2007-05-29 Thread Andrew Yee
Okay, it turns out that there were leading spaces, so that in the data, it
was represented as "   NaN", hence the expression =="NaN" was coming back as
false.

Is there a way to find out preemptively if there are leading spaces?

Thanks,
Andrew


On 5/29/07, Andrew Yee <[EMAIL PROTECTED]> wrote:
>
> I have the following data:
>
> > dataset[2,"Sample.227"]
> [1]NaN
> 1558 Levels: -0.000 -0.001 -0.002 -0.003 -0.004 -0.005 -0.006 -0.007 -
> 0.008 -0.009 ...  2.000
>
>
> However, I'm not sure why this expression is coming back as FALSE:
>
> > dataset[2,"Sample.227"]=="NaN"
> [1] FALSE
>
> Similarly:
>
> > dataset[2,"Sample.227"]==NaN
> [1] NA
>
>
> It seems that since "NaN" is represented as a character, this expression
> =="NaN" should be TRUE, but it's returning as FALSE.
>
> Thanks,
> Andrew
>

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] trouble understanding why ...=="NaN" isn't true

2007-05-29 Thread Andrew Yee
I have the following data:

> dataset[2,"Sample.227"]
[1] NaN
1558 Levels: -0.000 -0.001 -0.002 -0.003 -0.004 -0.005 -0.006 -0.007 -0.008 -0.009 ...  2.000


However, I'm not sure why this expression is coming back as FALSE:

> dataset[2,"Sample.227"]=="NaN"
[1] FALSE

Similarly:

> dataset[2,"Sample.227"]==NaN
[1] NA


It seems that since "NaN" is represented as a character, this expression
=="NaN" should be TRUE, but it's returning as FALSE.

Thanks,
Andrew

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Partially reading a file (particularly)

2007-05-29 Thread Charles C. Berry
On Tue, 29 May 2007, Tobin, Jared wrote:

> Hello,
>
> I am trying to figure out if there exists some R command that allows one
> to be
> particularly selective when reading a file.  I'm dealing with large
> fixed-width data
> sets that look like
>
> 539001..
> 639001..
> 639001..
> ...
> 539002..
> 639002..
> ...
>
> Presently, I am using read.fwf to read an entire file, but I am
> interested only in
> reading those records beginning with 5.  I have been unable to find help
> in any of
> the suggested resources.

Assuming you have 'grep' in your path,

res <- read.fwf( pipe( "grep '^5' my.file" ) ,  )

will do it.

grep will usually be found on linux/unix systems and Mac OS X. The 
'Rtools' toolkit for windows has grep, I believe.


>
> I understand this is a SAS example that replicates what I'm looking to
> do, if it's of
> any help to anyone.
>
> street type    name                            am traffic   pm traffic
>
> freeway         408                             3684         3459
> surface         Martin Luther King Jr. Blvd.    1590         1234
> freeway         608                             4583         3860
> freeway         808                             2386         2518
> surface         Lake Shore Dr.                  1590         1234
>
> INPUT type $ @;
> IF type = 'surface' THEN DELETE;
> INPUT name $ 9-38 amtraff pmtraff;
>
> Any answers, suggestions, or points-in-the-right-direction would be much
> appreciated.
>
> --
>
> Jared Tobin, Student Research Assistant
> Dept. of Fisheries and Oceans
> [EMAIL PROTECTED]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

Charles C. Berry(858) 534-2098
  Dept of Family/Preventive Medicine
E mailto:[EMAIL PROTECTED]   UC San Diego
http://biostat.ucsd.edu/~cberry/ La Jolla, San Diego 92093-0901

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] summing up colum values for unique IDs when multiple ID's exist in data frame

2007-05-29 Thread Thomas Lumley
On Tue, 29 May 2007, Seth Falcon wrote:

> "Young Cho" <[EMAIL PROTECTED]> writes:
>
>> I have data.frame's with IDs and multiple columns. B/c some of IDs
>> showed up more than once, I need sum up colum values to creat a new
>> dataframe with unique ids.
>>
>> I hope there are some cheaper ways of doing it...  Because the
>> dataframe is huge, it takes almost an hour to do the task.  Thanks
>> so much in advance!
>
> Does this do what you want in a faster way?
>


rowsum() should probably be faster (but perhaps not much).
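
For example, with the data frame from the original post:

rowsum(x$val, x$ID)   # sums val within each ID in a single call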

   -thomas

Thomas Lumley   Assoc. Professor, Biostatistics
[EMAIL PROTECTED]   University of Washington, Seattle

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] summing up colum values for unique IDs when multiple ID's exist in data frame

2007-05-29 Thread Seth Falcon
"Young Cho" <[EMAIL PROTECTED]> writes:

> I have data.frame's with IDs and multiple columns. B/c some of IDs
> showed up more than once, I need sum up colum values to creat a new
> dataframe with unique ids.
>
> I hope there are some cheaper ways of doing it...  Because the
> dataframe is huge, it takes almost an hour to do the task.  Thanks
> so much in advance!

Does this do what you want in a faster way?

sum_dup <- function(df) {
idIdx <- split(1:nrow(df), as.character(df$ID))
whID <- match("ID", names(df))
colNms <- names(df)[-whID]
ans <- lapply(colNms, function(cn) {
unlist(lapply(idIdx,
  function(x) sum(df[[cn]][x])),
   use.names=FALSE)
})
attributes(ans) <- list(names=colNms,
row.names=names(idIdx),
class="data.frame")
ans
}


-- 
Seth Falcon | Computational Biology | Fred Hutchinson Cancer Research Center
http://bioconductor.org

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] [R-pkgs] Rcmdr 1.3-0 and RcmdrPlugins.TeachingDemos

2007-05-29 Thread John Fox
I'd like to announce a new version, 1.3-0, of the Rcmdr package. The Rcmdr
package provides a basic-statistics graphical user interface (GUI) to R. 

Beyond small changes and additions, this new version of the package makes
provision for "plug-ins" that permit extension of the Rcmdr GUI without
altering and rebuilding the Rcmdr source package or modifying the installed
package. An R Commander plug-in is an ordinary R package that (1) provides
extensions to the R Commander menus in a file named menus.txt located in the
package's etc directory; (2) provides call-back functions required by these
menus; and (3) in optional Log-Exceptions: and Models: fields in the
package's DESCRIPTION file, augments respectively the list of functions for
which printed output is suppressed and the list of model objects recognized
by the R Commander. The menus provided by a plug-in package are merged with
the standard Commander menus. 

Plug-in packages given in the R Commander plugins option (see ?Commander)
are automatically loaded when the Commander starts up. Plug-in packages may
also be loaded via the Commander "Tools -> Load Rcmdr plug-in(s)" menu; a
restart of the Commander is required to install the new menus. Finally,
loading a plug-in package when the Rcmdr is not loaded will load the Rcmdr
and activate the plug-in. 
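
A minimal sketch of the first route (the exact form of the option is documented
in ?Commander; the list element name below is an assumption):

## load the Rcmdr with a plug-in activated at startup
options(Rcmdr = list(plugins = c("RcmdrPlugin.TeachingDemos")))
library(Rcmdr)   # plug-in menus are merged at startup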

An illustrative R Commander plug-in package, RcmdrPlugin.TeachingDemos
(providing a GUI to some of Greg Snow's TeachingDemos package), is now
available on CRAN. (I suggest using this naming convention -- RcmdrPlugin.*
-- so that plug-in packages will sort immediately below the Rcmdr package on
CRAN. This assumes, of course, that other people will be interested in
creating Rcmdr plugins!)

Because this is a new feature of the Rcmdr, feedback and suggestions would
be appreciated.

I'd like to acknowledge Richard Heiberger's suggestions for the design of
this plug-in facility.

John


John Fox, Professor
Department of Sociology
McMaster University
Hamilton, Ontario
Canada L8S 4M4
905-525-9140x23604
http://socserv.mcmaster.ca/jfox

___
R-packages mailing list
[EMAIL PROTECTED]
https://stat.ethz.ch/mailman/listinfo/r-packages

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] summing up colum values for unique IDs when multiple ID's exist in data frame

2007-05-29 Thread Young Cho
I have data.frames with IDs and multiple columns. Because some of the IDs showed
up more than once, I need to sum up column values to create a new data frame with
unique IDs.

I hope there are some cheaper ways of doing it...  Because the data frame is
huge, it takes almost an hour to do the task.  Thanks so much in advance!

Young

# -  examples are here and sum.dup.r is at the
bottom.

> x = data.frame(ID = c('A','B','C','A'), val=c(0.1,0.001,-0.1,0.2))
> x
  ID    val
1  A  0.100
2  B  0.001
3  C -0.100
4  A  0.200
> sum.dup(x)
  ID    val
1  A  0.300
2  B  0.001
3  C -0.100



sum.dup <- function(x) {

    d.row = which(duplicated(x$ID))
    if (length(d.row) > 0) {
        id = x$ID[d.row]
        com.val = x[-d.row, ]
        for (i in 1:length(id)) {
            s = sum(x$val[x$ID == id[i]])
            com.val$val[com.val$ID == id[i]] = s
        }
        ix = sort(as.character(com.val[, 1]), index.return = T)
        return(com.val[ix$ix, ])
    } else {
        ix = sort(as.character(x[, 1]), index.return = T)
        return(x[ix$ix, ])
    }

}

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Partially reading a file (particularly)

2007-05-29 Thread Tobin, Jared
Hello,

I am trying to figure out if there exists some R command that allows one
to be
particularly selective when reading a file.  I'm dealing with large
fixed-width data 
sets that look like

539001..
639001..
639001..
...
539002..
639002..
...

Presently, I am using read.fwf to read an entire file, but I am
interested only in 
reading those records beginning with 5.  I have been unable to find help
in any of 
the suggested resources.

I understand this is a SAS example that replicates what I'm looking to
do, if it's of
any help to anyone.

street type    name                            am traffic   pm traffic

freeway         408                             3684         3459
surface         Martin Luther King Jr. Blvd.    1590         1234
freeway         608                             4583         3860
freeway         808                             2386         2518
surface         Lake Shore Dr.                  1590         1234

INPUT type $ @;
IF type = 'surface' THEN DELETE;
INPUT name $ 9-38 amtraff pmtraff;

Any answers, suggestions, or points-in-the-right-direction would be much
appreciated.

--

Jared Tobin, Student Research Assistant
Dept. of Fisheries and Oceans
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] aggregation of a zoo object

2007-05-29 Thread Achim Zeileis
On Tue, 29 May 2007 [EMAIL PROTECTED] wrote:

> # then I want to do the sum per hour
>
> z_sum_per_hour <- aggregate(na.omit(z), function(x) as.POSIXct(trunc(x,
> "hour")),sum)
> Warning message:
> some methods for “zoo” objects do not work if the index entries in
> ‘order.by’ are not unique in: zoo(rval[i], x.index[i])
>
> Do anyone has an idea how to avoid that ?

The warning does not come from the aggregate() call, but from the
na.omit() call. After omitting the NAs, you have still duplicated time
stamps, hence the warning is issued again. After that, aggregating works
fine and produces no warnings.
Z

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] off-topic: affine transformation matrix

2007-05-29 Thread Dylan Beaudette
Thanks for the prompt and clear reply! The simplicity of the solution may have 
been why I initially overlooked this approach...


The results look convincing (http://169.237.35.250/~dylan/temp/affine.png), 
now I just need to verify that the output from coef() is in the format that I 
need it in.


l <- lm(cbind(nx,ny) ~ x + y, data=g)
coef(l)
                     nx           ny
(Intercept)  6.87938629  5.515261158
x            1.01158806 -0.005449152
y           -0.04481893  0.996895878


## convert to format needed for affine() function in postGIS?
t(coef(l))

   (Intercept)            x           y
nx    6.879386  1.011588063 -0.04481893
ny    5.515261 -0.005449152  0.99689588


note that the format that I am looking for looks something like the matrix 
defined on this page:
http://www.geom.uiuc.edu/docs/reference/CRC-formulas/node15.html
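
One way to sanity-check the fitted map (a sketch; 'newpts' is a hypothetical set
of points in the old coordinates):

newpts <- data.frame(x = c(0, 1), y = c(0, 1))
cbind(1, as.matrix(newpts)) %*% coef(l)   # predicted (nx, ny)
## equivalently: predict(l, newdata = newpts)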

cheers,

dylan



On Monday 28 May 2007 15:18, Prof Brian Ripley wrote:
> Isn't this just a regression (hopefully with a near-zero error).
>
> coef(lm(cbind(xnew, ynew) ~ xold + yold))
>
> should do what I think you are asking for.  (I am not clear which
> direction you want the transformation, so choose 'old' and 'new'
> accordingly.)
>
> On Mon, 28 May 2007, Dylan Beaudette wrote:
> > This may sound like a very naive question, but...
> >
> > give two lists of coordinate pairs (x,y - Cartesian space) is there any
> > simple way to compute the affine transformation matrix in R.
> >
> > I have a set of data which is offset from where i know it should be. I
> > have coordinates of the current data, and matching coordinates of where
> > the data should be. I need to compute the composition of the affine
> > transformation matrix, so that I can apply an affine transform the entire
> > dataset.
> >
> > any ideas?
> >
> > thanks in advance!

-- 
Dylan Beaudette
Soils and Biogeochemistry Graduate Group
University of California at Davis
530.754.7341

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] aggregation of a zoo object

2007-05-29 Thread jessica . gervais
Dear all,

I am trying to execute the following example:

time<-c("2000-10-03 14:00:00","2000-10-03 14:10:00","2000-10-03
14:20:00","2000-10-03 15:30:00","2000-10-03 16:40:00","2000-10-03
16:50:00","2000-10-03 17:00:00","2000-10-03 17:10:00","2000-10-03
17:20:00","2000-10-03 18:30:00","2000-10-04 14:00:00","2000-10-04
14:10:00","2000-10-04 14:20:00","2000-10-04 15:30:00","2000-10-04
16:40:00","2000-10-04 16:50:00","2000-10-04 17:00:00","2000-10-04
18:30:00","2000-10-04 18:30:00","2000-10-04 18:30:00")
# remark the last date is occuring 3 times

precipitation<-c(NA,0.1,0,0,NA,0,0.2,0.3,0.5,6,7,8,9,1,0,0,NA,0,1,0)

library(zoo)

z <- zoo(precipitation, as.POSIXct(time, tz = "GMT"))
Warning message:
some methods for “zoo” objects do not work if the index entries in
‘order.by’ are not unique in: zoo(precipitation, as.POSIXct(time, tz =
"GMT"))

# then I want to do the sum per hour

z_sum_per_hour <- aggregate(na.omit(z), function(x) as.POSIXct(trunc(x,
"hour")),sum)
Warning message:
some methods for “zoo” objects do not work if the index entries in
‘order.by’ are not unique in: zoo(rval[i], x.index[i])



Does anyone have an idea how to avoid that?



Thanks in advance


Jessica

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] #include

2007-05-29 Thread statmobile
Hey Everyone,

I'm running R 2.4.0 on Debian etch 4.0, and I'm trying to call some
Lapack functions from my C code.  Actually, to be honest I'm not
really having trouble calling the commands such as La_dgesv from
within my C code, but I do get warnings when compiling the package
saying:

GAUSSlkhd.c: In function 'GAUSSlkhd':
GAUSSlkhd.c:37: warning: implicit declaration of function 'La_dgesv'
GAUSSlkhd.c:37: warning: assignment makes pointer from integer without
a cast

I tried using:

#include <Rmodules/Rlapack.h>

but it won't compile the package at all with that included,
complaining that

bjl.h:5:30: error: Rmodules/Rlapack.h: No such file or directory

Can someone explain to me how I should include the AWESOME wrapper
code to the Lapack libraries?  Am I not following the proper protocol
by using these La_* commands in my package source code?

TIA,
Brian

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] rgl.postscript

2007-05-29 Thread Duncan Murdoch
On 5/29/2007 1:53 PM, coar wrote:
> Hi, 
> I am having an issue when creating a postscript file from RGL window.  It 
> seems to cut off some of the axis labels.  Here is the code I am using.
> 
> I created a 3D plot using RGL_0.71 with R 2.5 on Windows XP.
> 
> z1<-c(5,4,1,4.5,2,3,2,1,1)
> z2<-c(6,8,7,7.5,5,3.5,4,1,1)
> z3<-c(3,2,4,7,3,4.5,6,2,3)
> x1<-seq(1,9)
> x2<-seq(1,9)
> x3<-seq(10,18)
> 
> y1<-seq(8,0)
> y2<--1*y1
> y3<-rep(0,9)
> m1<-cbind(x1,y1,z1)
> m2<-cbind(x2,y2,z2)
> m3<-cbind(x3,y3,z3)
> m3<-rbind(m2[9,],m3)
> 
> up1<-m1[,-2]
> up2<-m2[,-2]
> 
> lp<-m3[,-2]
> p1<-rbind(up1, lp[-1,])
> p2<-rbind(up2, lp[-1,])
> sp1<-spline(p1)
> sp2<-spline(p2)
> 
> sp1m<-cbind(sp1$x,sp1$y)
> sp2m<-cbind(sp2$x,sp2$y)
> 
> ge9<-(sp1$x>=9)
> ge9recs<-seq(1,length(ge9))[ge9]
> 
> b1<-sp1m[ge9recs,]
> b2<-sp2m[ge9recs,]
> 
> b1b2<-cbind(b1[,2],b2[,2])
> 
> bavg<-apply(b1b2,1,mean)
> blow<-cbind(sp1m[ge9recs,1],bavg)
> 
> path.one<-rbind(sp1m[-ge9recs,],blow)
> path.two<-rbind(sp2m[-ge9recs,],blow)
> 
> uy1<-9-path.one[-ge9recs,1]
> ly1<-rep(0,length(ge9recs))
> y1<-c(uy1,ly1)
> 
> uy2<--1*(9-path.two[-ge9recs,1])
> ly2<-rep(0,length(ge9recs))
> y2<-c(uy2,ly2)
> 
> m1<-cbind(path.one,y1)
> m2<-cbind(path.two,y2)
> d.mat<-rbind(m1,m2)
> 
> open3d()
> 
> points3d(x=d.mat[,1],y=d.mat[,3],z=d.mat[,2],size=3)
> lines3d(x=m1[,1],y=m1[,3],z=m1[,2],size=3)
> lines3d(x=m2[,1],y=m2[,3],z=m2[,2],size=3)
> 
> I then added axes using
> 
> box3d()
> axes3d(c('x--'),tick=TRUE,nticks=5) 
> axes3d(c('z--'),tick=TRUE,nticks=5) 
> axes3d(c('z++'),tick=TRUE,nticks=5)
> 
> title3d(main = "Test 3-D plot", sub = NULL, xlab ="Lag", ylab = NULL, zlab = 
> "Dissolved O2", line = NA)
> 
> 
> I did some rotation to determine a nicer view of the plot.  I now wanted to 
> create a snapshot of the plot (using rgl.postscript since I will be using in 
> LATEX).  However, it cuts off some of the axis labels.  Is there a way to 
> adjust the area that gets captured to the postscript file?  or some other way 
> to fix this?

You could try resizing, or using Latex to put the labels on the plot, 
but there is no parameter to control what gets cut off.  You should also 
be aware that the Postscript support is somewhat limited, and you might 
be better off using a bitmap copy with rgl.snapshot.
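
For example (the file names here are just placeholders, assuming the rotated
view is in the current rgl device):

rgl.snapshot("plot3d.png", fmt = "png")     # bitmap copy of what is on screen
rgl.postscript("plot3d.eps", fmt = "eps")   # vector copy, with the limitations noted above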

Duncan Murdoch

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] search path question

2007-05-29 Thread Prof Brian Ripley
Actually, it does

     if (is.character(file))
         if (file == "")
             file <- stdin()
         else {
             file <- file(file, "r")
             on.exit(close(file))
         }

so all the searching is done in the file() connection.

You could do this via a search_file() connection wrapper, but there is a 
problem with ensuring connections get closed (which on.exit does here).
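
A rough sketch of that idea (not a real connection class; the 'scanpath' option
is invented for illustration, and the caller still has to close the connection):

search_file <- function(name, path = options()$scanpath, mode = "r") {
    for (p in path) {
        fp <- file.path(p, name)
        if (file.exists(fp)) return(file(fp, mode))   # hand back an ordinary connection
    }
    stop("file ", sQuote(name), " not found on the search path")
}
## e.g.  con <- search_file("foo.data"); on.exit(close(con)); scan(con)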

On Tue, 29 May 2007, Barry Rowlingson wrote:

> Zhiliang Ma wrote:
>> Thanks, Barry.
>> In fact, I have a function just like yours, and I'm looking for a simple
>> alternative function, which is like "path" in Matlab.
>
>  Dont think it can be done - if you look at the code for 'scan', it
> disappears off into internal() calls to do the business of finding and
> reading a file, so you're going to have trouble changing its behaviour
> in R. You'd have to patch R's C source to implement a search path.
>
> Barry
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] LAPACK and BLAS libraries

2007-05-29 Thread Tommy Ouellet
Hi,

I don't know if I'm sending this to the right place, but I've looked through
tens and tens of topics on http://tolstoy.newcastle.edu.au/ and finally
found that email address where I can maybe find some help.

Well my main goal is to get to use the lapack library within my R package
(which can be done using calls from C). But in order to do this I have to
create a file src/Makevars with the following line : PKG_LIBS=$(LAPACK_LIBS)
$(BLAS_LIBS) $(FLIBS)

However when I create this file, my package won't build anymore. Actually
the checking results in the following :

mingw32\bin\ld.exe: cannot find -lg2c
collect2: ld returned 1 exit status
make[3]: *** [PACKAGE.dll] Error 1
make[2]: *** [srcDynlib] Error 2
make[1]: *** [all] Error 2
make: *** [pkg-PACKAGE] Error 2
*** Installation of PACKAGE failed ***

I've installed all the following tools :
 mingw-runtime-3.12.tar.gz
 w32api-3.9.tar.gz
 binutils-2.17.50-20060824-1.tar.gz
 gcc-core-3.4.5-20060117-1.tar.gz
 gcc-g++-3.4.5-20060117-1.tar.gz
 gcc-g77-3.4.5-20060117-1.tar.gz
So I don't know what to do next for the package to build... Any help would
be greatly appreciated.

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] search path question

2007-05-29 Thread Barry Rowlingson
Zhiliang Ma wrote:
> Thanks, Barry.
> In fact, I have a function just like yours, and I'm looking for a simple
> alternative function, which is like "path" in Matlab.

  Don't think it can be done - if you look at the code for 'scan', it 
disappears off into .Internal() calls to do the business of finding and 
reading a file, so you're going to have trouble changing its behaviour 
in R. You'd have to patch R's C source to implement a search path.

Barry

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] rgl.postscript

2007-05-29 Thread coar
Hi, 
I am having an issue when creating a postscript file from an RGL window.  It 
seems to cut off some of the axis labels.  Here is the code I am using.

I created a 3D plot using RGL_0.71 with R 2.5 on Windows XP.

z1<-c(5,4,1,4.5,2,3,2,1,1)
z2<-c(6,8,7,7.5,5,3.5,4,1,1)
z3<-c(3,2,4,7,3,4.5,6,2,3)
x1<-seq(1,9)
x2<-seq(1,9)
x3<-seq(10,18)

y1<-seq(8,0)
y2<--1*y1
y3<-rep(0,9)
m1<-cbind(x1,y1,z1)
m2<-cbind(x2,y2,z2)
m3<-cbind(x3,y3,z3)
m3<-rbind(m2[9,],m3)

up1<-m1[,-2]
up2<-m2[,-2]

lp<-m3[,-2]
p1<-rbind(up1, lp[-1,])
p2<-rbind(up2, lp[-1,])
sp1<-spline(p1)
sp2<-spline(p2)

sp1m<-cbind(sp1$x,sp1$y)
sp2m<-cbind(sp2$x,sp2$y)

ge9<-(sp1$x>=9)
ge9recs<-seq(1,length(ge9))[ge9]

b1<-sp1m[ge9recs,]
b2<-sp2m[ge9recs,]

b1b2<-cbind(b1[,2],b2[,2])

bavg<-apply(b1b2,1,mean)
blow<-cbind(sp1m[ge9recs,1],bavg)

path.one<-rbind(sp1m[-ge9recs,],blow)
path.two<-rbind(sp2m[-ge9recs,],blow)

uy1<-9-path.one[-ge9recs,1]
ly1<-rep(0,length(ge9recs))
y1<-c(uy1,ly1)

uy2<--1*(9-path.two[-ge9recs,1])
ly2<-rep(0,length(ge9recs))
y2<-c(uy2,ly2)

m1<-cbind(path.one,y1)
m2<-cbind(path.two,y2)
d.mat<-rbind(m1,m2)

open3d()

points3d(x=d.mat[,1],y=d.mat[,3],z=d.mat[,2],size=3)
lines3d(x=m1[,1],y=m1[,3],z=m1[,2],size=3)
lines3d(x=m2[,1],y=m2[,3],z=m2[,2],size=3)

I then added axes using

box3d()
axes3d(c('x--'),tick=TRUE,nticks=5) 
axes3d(c('z--'),tick=TRUE,nticks=5) 
axes3d(c('z++'),tick=TRUE,nticks=5)

title3d(main = "Test 3-D plot", sub = NULL, xlab ="Lag", ylab = NULL, zlab = 
"Dissolved O2", line = NA)


I did some rotation to determine a nicer view of the plot.  I now wanted to 
create a snapshot of the plot (using rgl.postscript since I will be using it in 
LaTeX).  However, it cuts off some of the axis labels.  Is there a way to 
adjust the area that gets captured to the postscript file?  or some other way 
to fix this?

Thanks,
Bill

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] parallel processing an lme model

2007-05-29 Thread Douglas Bates
On 5/28/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Hi All

> Has any one of you seen whether it is possible to split a large lme() job to be
> processed by multiple CPUs/computers?

> I am just at the very beginning of understanding related things, but does
> lme() use solution-finding functions like nlm(), mle(), and I-don't-know-what-else
> from the standard R package, or does lme come with its own? If the former, has
> anyone seen how to split an mle() function call to be processed by multiple CPUs?

First, if you want speed and your model can be fit by lmer I would
recommend using lmer or lmer2 from the lme4 package.  These functions
can fit models with crossed or partially crossed random effects which
is often the case for models in very large data sets.  However, they
do not provide the facility for specifying correlation structures or
variance functions in addition to those implied by the random effects.

Both lme and lmer end up calling nlminb to do the optimization of the
log-likelihood or the REML criterion.  The lmer2 function does not
call nlminb explicitly but does use the underlying code from nlminb.

None of these operations are easily parallelizable.  The only hope for
getting a speed boost from multiple CPU cores or multiple processors
is by using a multithreaded accelerated BLAS (Basic linear algebra
subroutines) library (see the R Installation and Administration manual
for details).  However, in some cases we have observed that
multithreaded BLAS actually slow down the computation.  To check if
this is the case for you try the following both with and without
multithreaded BLAS.

library(lme4)
data(star, package = "mlmRev")
system.time(fm1 <- lmer(math ~ sx*eth+ses+gr+cltype
                        +(yrs|id)+(1|tch)+(yrs|sch),
                        star, control = list(grad = 0, nit = 0, msV = 1)))
system.time(m1 <- lmer2(math ~ sx*eth+ses+gr+cltype
                        +(yrs|id)+(1|tch)+(yrs|sch),
                        star, control = list(msV = 1)))

> In that case, I would very much appreciate a hint or a pointer to a source
> where I can read about it.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] search path question

2007-05-29 Thread Zhiliang Ma
Thanks, Barry.
In fact, I have a function just like yours, and I'm looking for a simple
alternative function, which is like "path" in Matlab.

On 5/29/07, Barry Rowlingson <[EMAIL PROTECTED]> wrote:
>
> Zhiliang Ma wrote:
> >  I want to find a function that can simply add
> > "C:\inFiles\" into R's search path, so that we I scan a file R will go
> to
> > all the search paths to find it. In matlab, path(path,"C:\inFiles") will
> do
> > this job, I'm just wondering if there is a similar function in R can do
> this
> > job.
>
> Something like this (not extensively tested):
>
> `sscan` <-
>function(name, path=options()$scanpath,...){
>
>  for(p in path){
>file=file.path(p,name)
>if(file.exists(file)){
>  return(scan(file,...))
>}
>## last resort..
>return(scan(name,...))
>  }
>}
>
> Then do:
>
>   options(scanpath="/tmp")
>
>   and then:
>
>   sscan("foo.data")
>
>   will look for /tmp/foo.data first, then if that fails it will do the
> 'last resort' which is to look in the current directory.
>
>   My worry is that this will bite you one day - if you have two files
> with the same name, it will get the first one in your scanpath - one day
> this will not be the one you think it is
>
>   Note this only works with 'scan' - you'll have to do the same thing
> for read.table, source, etc etc if you want them to behave with a search
> path too. Unless there's a lower-level approach. But that really will
> bite you!
>
> Barry
>
>
> Barry
>

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] search path question

2007-05-29 Thread David Forrest
On Tue, 29 May 2007, Zhiliang Ma wrote:

> Hi R users,
>
> Is there a simple function that can add a folder into current R search path?

This works for adding libraries to your search path, but I don't think it 
would work for finding data files outside of your getwd() quite as you'd 
like:

.libPaths(c("/home/foo/R/library",.libPaths()))

> For example, suppose my current work directory is "D:\work", but my input
> files are stored in folder "C:\inFiles\",  I know I can change work
> directory or add "C:\inFiles\" before files name when I scan them, but I
> don't want to do that. I want to find a function that can simply add
> "C:\inFiles\" into R's search path, so that we I scan a file R will go to
> all the search paths to find it. In matlab, path(path,"C:\inFiles") will do
> this job, I'm just wondering if there is a similar function in R can do this
> job.
>
> Thanks,
> zhiliang
>
>   [[alternative HTML version deleted]]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

-- 
  Dr. David Forrest
  [EMAIL PROTECTED](804)684-7900w
  [EMAIL PROTECTED] (804)642-0662h
http://maplepark.com/~drf5n/

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] search path question

2007-05-29 Thread Barry Rowlingson
Zhiliang Ma wrote:
>  I want to find a function that can simply add
> "C:\inFiles\" into R's search path, so that we I scan a file R will go to
> all the search paths to find it. In matlab, path(path,"C:\inFiles") will do
> this job, I'm just wondering if there is a similar function in R can do this
> job.

Something like this (not extensively tested):

`sscan` <-
    function(name, path=options()$scanpath, ...){
      for(p in path){
        file <- file.path(p, name)
        if(file.exists(file)){
          return(scan(file, ...))
        }
      }
      ## last resort..
      return(scan(name, ...))
    }

Then do:

  options(scanpath="/tmp")

  and then:

  sscan("foo.data")

  will look for /tmp/foo.data first, then if that fails it will do the 
'last resort' which is to look in the current directory.

  My worry is that this will bite you one day - if you have two files 
with the same name, it will get the first one in your scanpath - one day 
this will not be the one you think it is

  Note this only works with 'scan' - you'll have to do the same thing 
for read.table, source, etc etc if you want them to behave with a search 
path too. Unless there's a lower-level approach. But that really will 
bite you!

Barry


Barry

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] search path question

2007-05-29 Thread Zhiliang Ma
Hi R users,

Is there a simple function that can add a folder into current R search path?
For example, suppose my current work directory is "D:\work", but my input
files are stored in folder "C:\inFiles\",  I know I can change work
directory or add "C:\inFiles\" before file names when I scan them, but I
don't want to do that. I want to find a function that can simply add
"C:\inFiles\" into R's search path, so that when I scan a file R will go to
all the search paths to find it. In matlab, path(path,"C:\inFiles") will do
this job, I'm just wondering if there is a similar function in R can do this
job.

Thanks,
zhiliang

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] AIC for lrm(Hmisc/Design) model.

2007-05-29 Thread Frank E Harrell Jr
Milton Cezar Ribeiro wrote:
> Dear all,
> 
> I am adjusting a Logistic Regression Model using lmr() function of 
> Hmisc/Design package. Now I would like to compute AIC for this model. How can 
> I do that?
> 
> Kind regards,
> 
> miltinho
> Brazil

I like to change AIC to have it on the chi-square scale.  For that you 
can do

aic <- function(fit)
   round(unname(fit$stats['Model L.R.'] - 2*fit$stats['d.f.']),2)

f <- lrm( )
aic(f)

If unname doesn't exist in S-Plus as it does in R, you can remove that part.
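
For instance (a small sketch with simulated data; this assumes the Design 
package is installed):

library(Design)
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(-1 + 0.8 * x))
f <- lrm(y ~ x)
aic(f)    # model likelihood-ratio chi-square minus 2 * d.f.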

-- 
Frank E Harrell Jr   Professor and Chair   School of Medicine
  Department of Biostatistics   Vanderbilt University

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Estimate Fisher Information by Hessian from OPTIM

2007-05-29 Thread ChenYen
Dear All, 
I am trying to find the MLE by using the "optim" function.

Since it is difficult to differentiate some parameters in my objective function, I
would like to use the returned Hessian matrix to yield an estimate of
Fisher's information matrix.

My question: since the Hessian is calculated by numerical differentiation, is
it a reliable estimate? Otherwise I would have to do a lot of work to write
the second derivatives on my own.

 

Thank you very much in advance


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] AIC for lrm(Hmisc/Design) model.

2007-05-29 Thread Milton Cezar Ribeiro
Dear all,

I am fitting a logistic regression model using the lrm() function of the Hmisc/Design 
package. Now I would like to compute AIC for this model. How can I do that?

Kind regards,

miltinho
Brazil

__


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Fw: hierarhical cluster analysis of groups of vectors

2007-05-29 Thread Rafael Duarte
Yes, of course.
But since the initial question referred to the use of dist() (I suppose 
with Euclidean distances) and hclust on a matrix with a priori known 
groups, discriminant analysis seemed adequate to me.
It was only a suggestion, without having much detail about the problem.
Thanks,
Rafael


Ron Michael wrote:

> Hi Rafael,
>
> What about multivariate logistic regression?
>
> - Forwarded Message 
> From: Rafael Duarte <[EMAIL PROTECTED]>
> To: Anders Malmendal <[EMAIL PROTECTED]>
> Cc: r-help@stat.math.ethz.ch
> Sent: Tuesday, May 29, 2007 3:21:11 PM
> Subject: Re: [R] hierarhical cluster analysis of groups of vectors
>
> It seems that you have already groups defined.
> Discriminant analysis would probably be more appropriate for what you 
> want.
> Best regards,
> Rafael Duarte
>
>
>
> Anders Malmendal wrote:
>
> >I want to do hierarchical cluster analysis to compare 10 groups of
> >vectors with five vectors in each group (i.e. I want to make a dendogram
> >showing the clustering of the different groups). I've looked into using
> >dist and hclust, but cannot see how to compare the different groups
> >instead of the individual vectors. I am thankful for any help.
> >Anders
> >
> >__
> >R-help@stat.math.ethz.ch mailing list
> >https://stat.ethz.ch/mailman/listinfo/r-help
> >PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> >and provide commented, minimal, self-contained, reproducible code.
> >  
> >
>
>
> -- 
> Rafael Duarte
> Marine Resources Department - DRM
> IPIMAR -  National Research Institute for Agriculture and Fisheries
> Av. Brasília, 1449-006 Lisbon  -  Portugal
> Tel:+351 21 302 7000  Fax:+351 21 301 5948
> e-mail: [EMAIL PROTECTED]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>



-- 
Rafael Duarte
Marine Resources Department - DRM
IPIMAR -  National Research Institute for Agriculture and Fisheries
Av. Brasília, 1449-006 Lisbon  -  Portugal
Tel:+351 21 302 7000  Fax:+351 21 301 5948
e-mail: [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] exemples, tutorial on lmer

2007-05-29 Thread Martin Henry H. Stevens
Hi Oliver,
You could start with R News 2005, no. 1. Also the PDF associated with  
lme4, "Implementation.pdf."
Hank
On May 29, 2007, at 10:35 AM, Olivier MARTIN wrote:

> Hi all,
>
> I have some difficulties to work with the function lmer from lme4
> Does somebody have a tutorial or different examples to use this  
> function?
>
> Thanks,
> Oliver.
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting- 
> guide.html
> and provide commented, minimal, self-contained, reproducible code.



Dr. Hank Stevens, Assistant Professor
338 Pearson Hall
Botany Department
Miami University
Oxford, OH 45056

Office: (513) 529-4206
Lab: (513) 529-4262
FAX: (513) 529-4243
http://www.cas.muohio.edu/~stevenmh/
http://www.muohio.edu/ecology/
http://www.muohio.edu/botany/

"E Pluribus Unum"

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] search path question

2007-05-29 Thread Vladimir Eremeev

Yes, it is.
The original is here
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/92829.html

However, it requires some modifications.
Here they are. Sorry, I can test it only in Windows.

search.source <- function(file, path=Sys.getenv("PATH"), ...)
{
    for(p in strsplit(path, .Platform$path.sep)[[1]]) {
        fp <- file.path(p, file)
        if(file.exists(fp)) return(source(fp, ...))
    }
    stop("file ", sQuote(file), " not found")
}
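
Usage might then look like this (the file name and folders are only placeholders):

search.source("myscript.R",
              path = paste("C:/inFiles", "D:/work", sep = .Platform$path.sep))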

Try also looking here.
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/92821.html


Zhiliang Ma wrote:
> 
> Is there a simple function that can add a folder into current R search
> path?
> For example, suppose my current work directory is "D:\work", but my input
> files are stored in folder "C:\inFiles\",  I know I can change work
> directory or add "C:\inFiles\" before files name when I scan them, but I
> don't want to do that. I want to find a function that can simply add
> "C:\inFiles\" into R's search path, so that we I scan a file R will go to
> all the search paths to find it. In matlab, path(path,"C:\inFiles") will
> do
> this job, I'm just wondering if there is a similar function in R can do
> this
> job.
> 

-- 
View this message in context: 
http://www.nabble.com/search-path-question-tf3833821.html#a10855885
Sent from the R help mailing list archive at Nabble.com.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] normality tests [Broadcast]

2007-05-29 Thread Bert Gunter
False. Box proved ~ca 1952 that standard inferences in the linear regression
model are robust to nonnormality, at least for (nearly) balanced designs.
The **crucial** assumption is independence, which I suspect partially
motivated his time series work on arima modeling. More recently, work on
hierarchical models (e.g. repeated measures/mixed effect models) has also
dealt with lack of independence.


Bert Gunter
Genentech Nonclinical Statistics


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of wssecn
Sent: Friday, May 25, 2007 2:59 PM
To: r-help
Subject: Re: [R] normality tests [Broadcast]

 The normality of the residuals is important in the inference procedures for
the classical linear regression model, and normality is very important in
correlation analysis (second moment)...

Washington S. Silva

> Thank you all for your replies, they have been most useful... well
> in my case I have chosen to do some parametric tests (more precisely
> correlation and linear regressions among some variables)... so it
> would be nice if I had an extra bit of support for my decisions... If I
> understood all your replies well... I shouldn't pay so much
> attention to the normality tests, so it wouldn't matter which one/ones
> I use to report... but rather focus on issues such as the power of the
> test...
> 
> Thanks again.
> 
> On 25/05/07, Lucke, Joseph F <[EMAIL PROTECTED]> wrote:
> >  Most standard tests, such as t-tests and ANOVA, are fairly resistant to
> > non-normalilty for significance testing. It's the sample means that have
> > to be normal, not the data.  The CLT kicks in fairly quickly.  Testing
> > for normality prior to choosing a test statistic is generally not a good
> > idea.
> >
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of Liaw, Andy
> > Sent: Friday, May 25, 2007 12:04 PM
> > To: [EMAIL PROTECTED]; Frank E Harrell Jr
> > Cc: r-help
> > Subject: Re: [R] normality tests [Broadcast]
> >
> > From: [EMAIL PROTECTED]
> > >
> > > On 25/05/07, Frank E Harrell Jr <[EMAIL PROTECTED]> wrote:
> > > > [EMAIL PROTECTED] wrote:
> > > > > Hi all,
> > > > >
> > > > > apologies for seeking advice on a general stats question. I ve run
> >
> > > > > normality tests using 8 different methods:
> > > > > - Lilliefors
> > > > > - Shapiro-Wilk
> > > > > - Robust Jarque Bera
> > > > > - Jarque Bera
> > > > > - Anderson-Darling
> > > > > - Pearson chi-square
> > > > > - Cramer-von Mises
> > > > > - Shapiro-Francia
> > > > >
> > > > > All show that the null hypothesis that the data come from a normal
> >
> > > > > distro cannot be rejected. Great. However, I don't think
> > > it looks nice
> > > > > to report the values of 8 different tests on a report. One note is
> >
> > > > > that my sample size is really tiny (less than 20
> > > independent cases).
> > > > > Without wanting to start a flame war, are there any
> > > advices of which
> > > > > one/ones would be more appropriate and should be reported
> > > (along with
> > > > > a Q-Q plot). Thank you.
> > > > >
> > > > > Regards,
> > > > >
> > > >
> > > > Wow - I have so many concerns with that approach that it's
> > > hard to know
> > > > where to begin.  But first of all, why care about
> > > normality?  Why not
> > > > use distribution-free methods?
> > > >
> > > > You should examine the power of the tests for n=20.  You'll probably
> >
> > > > find it's not good enough to reach a reliable conclusion.
> > >
> > > And wouldn't it be even worse if I used non-parametric tests?
> >
> > I believe what Frank meant was that it's probably better to use a
> > distribution-free procedure to do the real test of interest (if there is
> > one) instead of testing for normality, and then use a test that assumes
> > normality.
> >
> > I guess the question is, what exactly do you want to do with the outcome
> > of the normality tests?  If those are going to be used as basis for
> > deciding which test(s) to do next, then I concur with Frank's
> > reservation.
> >
> > Generally speaking, I do not find goodness-of-fit for distributions very
> > useful, mostly for the reason that failure to reject the null is no
> > evidence in favor of the null.  It's difficult for me to imagine why
> > "there's insufficient evidence to show that the data did not come from a
> > normal distribution" would be interesting.
> >
> > Andy
> >
> >
> > > >
> > > > Frank
> > > >
> > > >
> > > > --
> > > > Frank E Harrell Jr   Professor and Chair   School
> > > of Medicine
> > > >   Department of Biostatistics
> > > Vanderbilt University
> > > >
> > >
> > >
> > > --
> > > yianni
> > >
> > > __
> > > R-help@stat.math.ethz.ch mailing list
> > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > PLEASE do read the posting guide
> > > http://www.R-project.org/posting-guide.html
> > > and provide commented, minimal, self-contained, reproducible code.

Re: [R] look for packages

2007-05-29 Thread Henrique Dallazuanna
Look at this:
RSiteSearch("PCA Analysis", restrict="functions")


-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O

On 29/05/07, De-Jian,ZHAO <[EMAIL PROTECTED]> wrote:
>
> Dear list members,
>
> I am analysing some microarray data. I have got the differentially
> expressed genes and now want to carry out PCA analysis to get the main
> components that contribute to the variance.I have browsered the CRAN and
> BioConductor and did not find an appropriate package.
>
> Have anybody ever carried out PCA analysis? Is there any package about PCA
> in R?
>
> Thanks for your advice.
>
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] exemples, tutorial on lmer

2007-05-29 Thread Olivier MARTIN
Hi all,

I have some difficulties working with the function lmer from lme4.
Does somebody have a tutorial or some examples of how to use this function?

Thanks,
Oliver.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Fw: hierarhical cluster analysis of groups of vectors

2007-05-29 Thread Ron Michael
Hi Rafael,

What about multivariate logistic regression?

- Forwarded Message 
From: Rafael Duarte <[EMAIL PROTECTED]>
To: Anders Malmendal <[EMAIL PROTECTED]>
Cc: r-help@stat.math.ethz.ch
Sent: Tuesday, May 29, 2007 3:21:11 PM
Subject: Re: [R] hierarhical cluster analysis of groups of vectors

It seems that you have already groups defined.
Discriminant analysis would probably be more appropriate for what you want.
Best regards,
Rafael Duarte



Anders Malmendal wrote:

>I want to do hierarchical cluster analysis to compare 10 groups of 
>vectors with five vectors in each group (i.e. I want to make a dendogram 
>showing the clustering of the different groups). I've looked into using 
>dist and hclust, but cannot see how to compare the different groups 
>instead of the individual vectors. I am thankful for any help.
>Anders
>
>__
>R-help@stat.math.ethz.ch mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>  
>


-- 
Rafael Duarte
Marine Resources Department - DRM
IPIMAR -  National Research Institute for Agriculture and Fisheries
Av. Brasília, 1449-006 Lisbon  -  Portugal
Tel:+351 21 302 7000  Fax:+351 21 301 5948
e-mail: [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.






[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] look for packages

2007-05-29 Thread De-Jian,ZHAO
Dear list members,

I am analysing some microarray data. I have got the differentially
expressed genes and now want to carry out a PCA analysis to get the main
components that contribute to the variance. I have browsed CRAN and
BioConductor and did not find an appropriate package.

Has anybody ever carried out a PCA analysis? Is there any package for PCA
in R?

Thanks for your advice.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ratio distribution - missing attachment

2007-05-29 Thread Ravi Varadhan
Dear Martin and Vitto,

Please find attached the R function to compute the density of the ratio of 2
dependent normal variates.

Best,
Ravi.


---

Ravi Varadhan, Ph.D.

Assistant Professor, The Center on Aging and Health

Division of Geriatric Medicine and Gerontology 

Johns Hopkins University

Ph: (410) 502-2619

Fax: (410) 614-9625

Email: [EMAIL PROTECTED]

Webpage:  http://www.jhsph.edu/agingandhealth/People/Faculty/Varadhan.html

 




-Original Message-
From: Martin Maechler [mailto:[EMAIL PROTECTED] 
Sent: Monday, May 28, 2007 5:37 AM
To: Ravi Varadhan
Cc: R-help@stat.math.ethz.ch
Subject: Re: [R] ratio distribution - missing attachment

Thank you, Ravi.
You probably will have noticed, that the attachment didn't make
it to the mailing list.

The reason is that the we let the mailing list software strip 
"binary" attachments which can easily be misused to spread
viruses; see --> http://www.r-project.org/mail.html (search "attachment")
or the posting-guide.

OTOH, the software allows attachments with MIME type text/plain.
If you use an e-mail software for sophisticated users, the
software allows you to specify the MIME type of your
attachments;
otherwise (as with most "user friendly", "modern" e-mail
software), attach a *.txt file (Doug Bates uses  _R.txt) 
and it should make it to the lists;
as a third alternative, just "cut & paste" the corresponding
text into your e-mal.

I think your R function should make it to R-help (and its
archives), so I'd be thankful for a repost.

Martin


> "Ravi" == Ravi Varadhan <[EMAIL PROTECTED]>
> on Fri, 25 May 2007 14:24:20 -0400 writes:

Ravi> Mike, Attached is an R function to do this, along with
Ravi> an example that will reproduce the MathCad plot shown
Ravi> in your attached paper. I haven't checked it
Ravi> thoroughly, but it seems to reproduce the MathCad
Ravi> example well.

Ravi> Ravi.

Ravi>

Ravi> ---

Ravi> Ravi Varadhan, Ph.D.

Ravi> Assistant Professor, The Center on Aging and Health

Ravi> Division of Geriatric Medicine and Gerontology

Ravi> Johns Hopkins University

Ravi> Ph: (410) 502-2619

Ravi> Fax: (410) 614-9625

Ravi> Email: [EMAIL PROTECTED]

Ravi> Webpage:
Ravi> http://www.jhsph.edu/agingandhealth/People/Faculty/Varadhan.html

 

Ravi>

Ravi> 


Ravi> -Original Message- From:
Ravi> [EMAIL PROTECTED]
Ravi> [mailto:[EMAIL PROTECTED] On Behalf Of
Ravi> Mike Lawrence Sent: Friday, May 25, 2007 1:55 PM To:
Ravi> Lucke, Joseph F Cc: Rhelp Subject: Re: [R] Calculation
Ravi> of ratio distribution properties

Ravi> According to the paper I cited, there is controversy
Ravi> over the sufficiency of Hinkley's solution, hence
Ravi> their proposed more complete solution.

Ravi> On 25-May-07, at 2:45 PM, Lucke, Joseph F wrote:

>> The exact ratio is given in
>> 
>> On the Ratio of Two Correlated Normal Random Variables,
>> D. V.  Hinkley, Biometrika, Vol. 56, No. 3. (Dec., 1969),
>> pp. 635-639.
>> 

Ravi> -- Mike Lawrence Graduate Student, Department of
Ravi> Psychology, Dalhousie University

Ravi> Website: http://myweb.dal.ca/mc973993 Public calendar:
Ravi> http://icalx.com/public/informavore/Public

Ravi> "The road to wisdom? Well, it's plain and simple to
Ravi> express: Err and err and err again, but less and less
Ravi> and less."  - Piet Hein
ratio2normals <- function(x, mean1,mean2,sd1,sd2,rho){
# A function to compute ratio of 2 normals
# R code written by Ravi Varadhan 
# May 25, 2007
# Based on the paper by Pham Gia et al., Comm in Stats (2006)
A <- 1 / (2*pi*sd1*sd2*sqrt(1-rho^2))
exponent.num <- -sd2^2*mean1^2 - sd1^2*mean2^2 + 2*rho*sd1*sd2*mean1*mean2
exponent.denom <- 2*(1-rho^2)*sd1^2*sd2^2
K <- A * exp(exponent.num/exponent.denom)
t2x.num <- -sd2^2*mean1*x - sd1^2*mean2 + rho*sd1*sd2*(mean2*x + mean1)
t2x.denom <- sd1*sd2*sqrt(2*(1-rho^2)*(sd2^2*x^2 - 2*rho*x*sd1*sd2 + sd1^2))
t2x <- t2x.num / t2x.denom
erf.term <- 2 * pnorm(sqrt(2) * t2x) - 1
Ft2x <- sqrt(pi) * t2x * exp(t2x^2) * erf.term + 1
fx <- K * Ft2x * 2 * (1 - rho^2) * sd1^2 * sd2^2 / (sd2^2 * x^2 + sd1^2 - 
2*x*rho*sd1*sd2)
return(fx)
}


mean1 <- 75.25
mean2 <- 71.58 
sd1 <- 6.25
sd2 <- 5.45
rho <- 0.76

x <- seq(0.5,1.5, length=100)
y <- ratio2normals(x,mean1,mean2, sd1,sd2,rho)
plot(x,y, type="l")


# compute the mean and variance via quadrature
m1 <- function(x, mean1, mean2, sd1, sd2, rho){
x * ratio2normals(x, mean1, mean2, sd1, sd2, rho)
}

m2 <- function(x, mean1, mean2, sd1, sd2, rho){
x^2 * ratio2normals(x, mean1, mean2, sd1, sd2, rho)
}

m.1 <- integrat

Re: [R] Odp: Odp: pie initial angle

2007-05-29 Thread Adrian Dusa
Thanks Petr and Gabor,

On Tuesday 29 May 2007, Petr PIKAL wrote:
> >From simple geometry
>
> pie(c(x, y), init.angle=(300+y/2*360/100)-360)
>
> shall do what you request. Although I am not sure if it is wise.

Yes, this is what I want to do.
I agree with all your points re the initial angle, I just needed the slices 
positioned the way they are. My geometry seems to be somewhere between poor 
and nonexistent :)

All the best,
Adrian

-- 
Adrian Dusa
Romanian Social Data Archive
1, Schitu Magureanu Bd
050025 Bucharest sector 5
Romania
Tel./Fax: +40 21 3126618 \
  +40 21 3120210 / int.101

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] hierarhical cluster analysis of groups of vectors

2007-05-29 Thread S Ellison
Anders;

If you want to _test_ for differences, ANOVA applied to the (typically) 
first principal component scores for each object would give a fairly quick 
indication of whether there was a case to answer (though scaling is an issue to 
be aware of; a low-variance variable might differ strongly between groups yet 
be masked by a larger-variance variable with no group association unless you 
get the scaling right for the circumstances).

If you just want to cluster the 10 groups, I suspect it might be simplest to 
"average" (where "average" implies some consistent summary statistic for each 
variable) your starting vectors, _before_ playing about with your distance 
matrix; after all, it is the inter-"mean" distances you are after, so why not 
get the "means" in the first place?. Of course, scaling is again an issue if 
the variables differ in variance...
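
A minimal sketch of that "average first" route, with simulated stand-in data
(a matrix X holding the vectors as rows and a grouping factor grp):

set.seed(1)
X   <- matrix(rnorm(50 * 4), ncol = 4)              # 50 vectors, 4 variables
grp <- factor(rep(paste("group", 1:10), each = 5))  # 10 groups of 5
Xs  <- scale(X)                                     # the scaling choice matters, as noted
means <- aggregate(as.data.frame(Xs), by = list(group = grp), FUN = mean)
plot(hclust(dist(means[, -1])), labels = means$group)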


Steve E


>>> Anders Malmendal <[EMAIL PROTECTED]> 29/05/2007 10:15:23 >>>
I want to do hierarchical cluster analysis to compare 10 groups of 
vectors with five vectors in each group (i.e. I want to make a dendogram 
showing the clustering of the different groups). I've looked into using 
dist and hclust, but cannot see how to compare the different groups 
instead of the individual vectors. I am thankful for any help.
Anders

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help 
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html 
and provide commented, minimal, self-contained, reproducible code.

***
This email and any attachments are confidential. Any use, co...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Using ksmooth to produce the resulting plots

2007-05-29 Thread JIZ JIZ
Hi,

Consider the Nottingham temperature and the Sunspot data:
>data(nottem)
>data(sunspot)

Assume that we may model each data set by a nonparametric autoregressive 
model of the form
Y_t = m(Y_{t-1}) + e_t,

What are the values of x and y for using the function 
ksmooth(x,y,"normal",bandwidth=0.01)?

Thanks a lot!
Owen

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] about the unscaled covariances from a summary.lm object

2007-05-29 Thread Dimitris Rizopoulos
try the following:

x1 <- rnorm(100)
x2 <- rep(0:1, each = 50)
x3 <- runif(100)
y <- drop(cbind(1, x1, x2, x3) %*% c(1, 2, -1, -3)) + rnorm(100, sd = 2)
dat <- data.frame(y, x1, x2, x3)

##

fit.lm <- lm(y ~ x1 + x2 + x3, dat)
summ.fit.lm <- summary(fit.lm)
X <- model.matrix(fit.lm)

all.equal(solve(crossprod(X)), summ.fit.lm$cov.unscaled)

Sigma <- summ.fit.lm$sigma^2 * solve(crossprod(X))
all.equal(sqrt(diag(Sigma)), summ.fit.lm$coefficients[, "Std. Error"])


I hope it helps.

Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://med.kuleuven.be/biostat/
 http://www.student.kuleuven.be/~m0390867/dimitris.htm


- Original Message - 
From: "Martin Ivanov" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, May 29, 2007 3:12 PM
Subject: [R] about the unscaled covariances from a summary.lm object


> Hello!
> I want to clarify something about the unscaled covarinces component 
> of a summary.lm object. So we have the regressor matrix X. If the 
> fitted lm object is lmobj, the inverse of the matrix t(X)%*%X is xx, 
> and the residual variance is sigma^2_e, the variance-covariance 
> matrix of the OLS estimate of the coefficients is given by:
> xx*sigma^2_e
> I saw that what the function vcov actually does is simply:
> vcov=summary(lmobj)$sigma^2 * summary(lmobj)$cov.unscaled
> So the cov.unscaled component should give the matrix xx. I am right?
> I tried inverting the matrix t(X)%*%X with solve by issuing:
> solve(t(X)%*%X), but I get a matrix quite different from the matrix 
> given by cov.unscaled. Is it just computational instability, or I am 
> missing something important?
>
> Regards,
> Martin
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 


Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] RODBC

2007-05-29 Thread Bill Szkotnicki
I have now read the README file which I should have done before. :-[   
Sorry.
To summarize:
- Install the odbc connector driver (3.51)
- Set up the dsn in the file   .odbc.ini
- It works beautifully and RODBC is super!


Prof Brian Ripley wrote:
> yOn Mon, 28 May 2007, Bill Szkotnicki wrote:
>
>> Hello,
>>
>> I have installed R2.5.0 from sources ( x86_64 )
>> and added the package RODBC
>> and now I am trying to connect to a mysql database
>> In windows R after installing the 3.51 driver
>> and creating the dsn by specifying server, user, and password
>> it is easy to connect with
>> channel <- odbcConnect("dsn")
>>
>> Does anyone know what needs to be done to make this work from linux?
>
> Did you not read the RODBC README file?  It is described in some detail
> with reference to tutorials.
>

-- 
Bill Szkotnicki
Department of Animal and Poultry Science
University of Guelph
[EMAIL PROTECTED]
(519)824-4120 Ext 52253

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] about the unscaled covariances from a summary.lm object

2007-05-29 Thread Martin Ivanov
Hello!
I want to clarify something about the unscaled covariances component of a 
summary.lm object. So we have the regressor matrix X. If the fitted lm object 
is lmobj, the inverse of the matrix t(X)%*%X is xx, and the residual variance 
is sigma^2_e, the variance-covariance matrix of the OLS estimate of the 
coefficients is given by:
xx*sigma^2_e
I saw that what the function vcov actually does is simply:
vcov=summary(lmobj)$sigma^2 * summary(lmobj)$cov.unscaled
So the cov.unscaled component should give the matrix xx. Am I right?
I tried inverting the matrix t(X)%*%X with solve by issuing:
solve(t(X)%*%X), but I get a matrix quite different from the matrix given by 
cov.unscaled. Is it just computational instability, or am I missing something 
important?

Regards,
Martin

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] search path question

2007-05-29 Thread Zhiliang Ma
Hi R users,

Is there a simple function that can add a folder into current R search path?
For example, suppose my current work directory is "D:\work", but my input
files are stored in folder "C:\inFiles\",  I know I can change work
directory or add "C:\inFiles\" before file names when I scan them, but I
don't want to do that. I want to find a function that can simply add
"C:\inFiles\" into R's search path, so that when I scan a file R will go to
all the search paths to find it. In matlab, path(path,"C:\inFiles") will do
this job, I'm just wondering if there is a similar function in R can do this
job.

Thanks,
zhiliang

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] hierarhical cluster analysis of groups of vectors

2007-05-29 Thread Prof Brian Ripley
On Tue, 29 May 2007, Anders Malmendal wrote:

> Thanks.
> The vectors are produced by PLS-discriminant analysis between groups and
> the vectors within a group are simply different measurements of the same
> thing. What I need is a measure of how the different groups cluster
> (relative to each other). (I assume that I can do some averaging after
> applying dist, but I can not  find the information on how to do it.)

I don't think anyone can tell you that: it is a matter of judgement. 
What you need is a dissimilarity on your groups.

Assuming your vectors are numeric (you didn't say) you could use 
Mahalanobis distance between the centroids, with within-group covariance 
as the variance matrix.  Often that works well, but not always, and you 
might prefer Euclidean distance between centroids, or minimum Euclidean or 
Mahalanobis distance 
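
For instance, a rough sketch of the Mahalanobis option, with simulated stand-in
data (a numeric matrix X, one vector per row, and a factor grp giving the 10 groups):

set.seed(1)
X    <- matrix(rnorm(50 * 4), ncol = 4)
grp  <- factor(rep(1:10, each = 5))
cent <- apply(X, 2, tapply, grp, mean)       # 10 x 4 matrix of group centroids
W    <- var(X - cent[as.integer(grp), ])     # crude pooled within-group covariance
U    <- chol(solve(W))
plot(hclust(dist(cent %*% t(U))))            # Euclidean after whitening = Mahalanobis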


> Best regards
> Anders
>
>
> Rafael Duarte wrote:
>> It seems that you have already groups defined.
>> Discriminant analysis would probably be more appropriate for what you
>> want.
>> Best regards,
>> Rafael Duarte
>>
>>
>>
>> Anders Malmendal wrote:
>>
>>> I want to do hierarchical cluster analysis to compare 10 groups of
>>> vectors with five vectors in each group (i.e. I want to make a
>>> dendogram showing the clustering of the different groups). I've
>>> looked into using dist and hclust, but cannot see how to compare the
>>> different groups instead of the individual vectors. I am thankful for
>>> any help.
>>> Anders
>>>
>>> __
>>> R-help@stat.math.ethz.ch mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>>
>>
>>
>
>
>

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Odp: Odp: pie initial angle

2007-05-29 Thread Petr PIKAL
From simple geometry

pie(c(x, y), init.angle=(300+y/2*360/100)-360)

shall do what you request. Although I am not sure if it is wise.
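
For instance, with the two pies from the original post (angle() is just a helper
wrapping the expression above):

angle <- function(y) (300 + y/2 * 360/100) - 360
pie(c(60, 40), init.angle = angle(40))
pie(c(80, 20), init.angle = angle(20))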

Regards
Petr
[EMAIL PROTECTED]

[EMAIL PROTECTED] wrote on 29.05.2007 13:30:06:

> Hi
> 
> [EMAIL PROTECTED] wrote on 29.05.2007 12:53:14:
> 
> > 
> > Dear all,
> > 
> > I'd like to produce a simple pie chart for a customer (I know it's bad 

> but 
> > they insist), and I have some difficulties setting the initial angle.
> > For example:
> > 
> > pie(c(60, 40), init.angle=14)
> > 
> > and 
> > 
> > pie(c(80, 20), init.angle=338)
> > 
> > both present the slices in the same direction, where:
> 
> I presume you misunderstand init angle. Above statements points an arrow 

> of both slices to the similar direction but slices starts at different 
> initial angles.
> 
> > 
> > pie(c(60, 40))
> > pie(c(80, 20))
> > 
> > present the slices in different directions.
> 
> The arrow slices point to different direction **but** they both 
**start** 
> at the same initial angle 0 deg. 
> 
> > 
> > I read everything I could about init.angle argument, I even played 
with 
> > various formulas to compute it, but I just can't figure it out.
> > How can I preserve the desired *direction* of the slices?
> 
> You probably need to compute initial angle based on proportions in your 
> pie chart (If you really want each pie chart starting at different 
> position). 
> 
> Regards
> Petr
> 
> > 
> > Many thanks in advance,
> > Adrian
> > 
> > 
> > -- 
> > Adrian Dusa
> > Romanian Social Data Archive
> > 1, Schitu Magureanu Bd
> > 050025 Bucharest sector 5
> > Romania
> > Tel./Fax: +40 21 3126618 \
> >   +40 21 3120210 / int.101
> > 
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] hierarhical cluster analysis of groups of vectors

2007-05-29 Thread Anders Malmendal
Thanks.
The vectors are produced by PLS-discriminant analysis between groups and 
the vectors within a group are simply different measurements of the same 
thing. What I need is a measure of how the different groups cluster 
(relative to each other). (I assume that I can do some averaging after 
applying dist, but I can not  find the information on how to do it.)
Best regards
Anders


Rafael Duarte wrote:
> It seems that you have already groups defined.
> Discriminant analysis would probably be more appropriate for what you 
> want.
> Best regards,
> Rafael Duarte
>
>
>
> Anders Malmendal wrote:
>
>> I want to do hierarchical cluster analysis to compare 10 groups of 
>> vectors with five vectors in each group (i.e. I want to make a 
>> dendogram showing the clustering of the different groups). I've 
>> looked into using dist and hclust, but cannot see how to compare the 
>> different groups instead of the individual vectors. I am thankful for 
>> any help.
>> Anders
>>
>> __
>> R-help@stat.math.ethz.ch mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide 
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>  
>>
>
>


-- 
Anders Malmendal, mailto:[EMAIL PROTECTED]
Center for Insoluble Protein Structures (inSPIN)
and Interdisciplinary Nanoscience Center (iNANO),
Department of Chemistry, University of Aarhus,
Langelandsgade 140, DK-8000 Aarhus C, Denmark.
tel: +45 8942 3866 fax: +45 8619 6199

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Odp: pie initial angle

2007-05-29 Thread Petr PIKAL
Hi

[EMAIL PROTECTED] napsal dne 29.05.2007 12:53:14:

> 
> Dear all,
> 
> I'd like to produce a simple pie chart for a customer (I know it's bad 
but 
> they insist), and I have some difficulties setting the initial angle.
> For example:
> 
> pie(c(60, 40), init.angle=14)
> 
> and 
> 
> pie(c(80, 20), init.angle=338)
> 
> both present the slices in the same direction, whereas:

I presume you misunderstand init.angle. The two statements above point the 
slices in a similar direction, but the slices start at different 
initial angles.

> 
> pie(c(60, 40))
> pie(c(80, 20))
> 
> present the slices in different directions.

The slices point in different directions, **but** they both **start** 
at the same initial angle of 0 degrees.

> 
> I read everything I could about init.angle argument, I even played with 
> various formulas to compute it, but I just can't figure it out.
> How can I preserve the desired *direction* of the slices?

You probably need to compute the initial angle from the proportions in your 
pie chart (if you really want each pie chart to start at a different 
position).
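
For example (a sketch, not part of the original reply; it assumes the default 
clockwise = FALSE and that you want the *midpoint* of the first slice to point 
in a fixed direction):

pie_fixed <- function(x, target = 90, ...) {
  ## aim the midpoint of the first slice at `target` degrees
  ## (measured counter-clockwise from 3 o'clock, so 90 = straight up)
  first <- 360 * x[1] / sum(x)            # angular width of the first slice
  pie(x, init.angle = target - first / 2, ...)
}
pie_fixed(c(60, 40))   # the 60% slice points straight up
pie_fixed(c(80, 20))   # so does the 80% slice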

Regards
Petr

> 
> Many thanks in advance,
> Adrian
> 
> 
> -- 
> Adrian Dusa
> Romanian Social Data Archive
> 1, Schitu Magureanu Bd
> 050025 Bucharest sector 5
> Romania
> Tel./Fax: +40 21 3126618 \
>   +40 21 3120210 / int.101
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Legend outside plotting area

2007-05-29 Thread S Ellison
Judith,

Haven't tried it in anger myself, but two things suggest themselves. The first 
is to use the lattice package, which seems to draw keys (auto.key option) 
outside the plot region by default. Look at the last couple of examples in 
?xyplot. May save a lot of hassle...

In classical R graphics, have you tried plotting everything explicitly inside a 
plot region with margins at zero? 

For example:
plot.new()
par(mar=c(0,0,0,0))                        # no figure margins at all
plot.window(xlim=c(-2,11), ylim=c(-3,13))  # user coordinates wide enough for labels and legend
points(1:10, 1:10, pch=1)
points(1:10, 10:1, pch=19)
par(srt=90)                                # rotate text for the y-axis label
text(x=-2, y=5, "y-axis", pos=1, offset=0.5)
par(srt=0)
text(c(5,5), c(13,-1), labels=c("Title","x-axis"), pos=1, offset=0.7, cex=c(1.5,1))
rect(-0.2, -0.2, 11.2, 11.2)               # box around the data region
axis(side=1, at=0:10, pos=-0.2)            # axes drawn inside the plot region
axis(side=2, at=0:10, pos=-0.2)
legend(x=5, y=-2, xjust=0.5, pch=c(1,19), legend=c("Type 1", "Type 19"), ncol=2)

All very tedious, but it works. Also, fiddling around with things like pretty() 
on the data can automate most of the above positional choices if you're so 
inclined. And legend(..., plot=F) returns the legend size and coordinates if 
you want to fine-tune the location.
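
For instance (a sketch, to be run after the plotting code above, with the same 
made-up coordinates):

lg <- legend(x = 5, y = -2, xjust = 0.5, pch = c(1, 19),
             legend = c("Type 1", "Type 19"), ncol = 2, plot = FALSE)
lg$rect   # the legend's width, height and top-left corner, for fine-tuning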

Steve E

>>> <[EMAIL PROTECTED]> 23/05/2007 13:14:54 >>>
Quoting Judith Flores <[EMAIL PROTECTED]>:

> Hi,
>
> I have been trying many of the suggested options
> to place a legend outside plotting area, including
> something like this:
>
> par(xpd=T,
> oma=par()$oma+c(4.5,0,1.5,0), mar=par()$mar+c(1,0,1,0))
>
>
> But the aspect of the four plots gets compromised
> when I change the margin settings. I cannot use mtext
> because I need to use colors for the text. I tried
> layout, but wouldn't let me include the legend, only
> plots.
>
>I would appreciate very much some more help.
>
> Regards,
>
> J



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] pie initial angle

2007-05-29 Thread Gabor Grothendieck
Not sure I understand what you want, but the clockwise= argument
of pie determines whether the slices are drawn clockwise or
counter-clockwise.
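
For example (a minimal sketch):

pie(c(60, 40), clockwise = FALSE)   # the default: slices drawn counter-clockwise
pie(c(60, 40), clockwise = TRUE)    # same data, drawn clockwise from 12 o'clock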

On 5/29/07, Adrian Dusa <[EMAIL PROTECTED]> wrote:
>
> Dear all,
>
> I'd like to produce a simple pie chart for a customer (I know it's bad but
> they insist), and I have some difficulties setting the initial angle.
> For example:
>
> pie(c(60, 40), init.angle=14)
>
> and
>
> pie(c(80, 20), init.angle=338)
>
> both present the slices in the same direction, whereas:
>
> pie(c(60, 40))
> pie(c(80, 20))
>
> present the slices in different directions.
>
> I read everything I could about init.angle argument, I even played with
> various formulas to compute it, but I just can't figure it out.
> How can I preserve the desired *direction* of the slices?
>
> Many thanks in advance,
> Adrian
>
>
> --
> Adrian Dusa
> Romanian Social Data Archive
> 1, Schitu Magureanu Bd
> 050025 Bucharest sector 5
> Romania
> Tel./Fax: +40 21 3126618 \
>  +40 21 3120210 / int.101
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] pie initial angle

2007-05-29 Thread Adrian Dusa

Dear all,

I'd like to produce a simple pie chart for a customer (I know it's bad but 
they insist), and I have some difficulties setting the initial angle.
For example:

pie(c(60, 40), init.angle=14)

and 

pie(c(80, 20), init.angle=338)

both present the slices in the same direction, whereas:

pie(c(60, 40))
pie(c(80, 20))

present the slices in different directions.

I read everything I could about the init.angle argument; I even played with 
various formulas to compute it, but I just can't figure it out.
How can I preserve the desired *direction* of the slices?

Many thanks in advance,
Adrian


-- 
Adrian Dusa
Romanian Social Data Archive
1, Schitu Magureanu Bd
050025 Bucharest sector 5
Romania
Tel./Fax: +40 21 3126618 \
  +40 21 3120210 / int.101

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Function tsmooth

2007-05-29 Thread JIZ JIZ
Hi,

Assume that we model the Nottingham temperature data (nottem) or the sunspot 
data (sunspot) with a nonparametric autoregressive model of the form
   Y_t = m(Y_{t-1}) + e_t.

Using the kernel estimation method, produce the resulting plots. We may use 
the function
tsmooth(x, y, "normal", bandwidth = 0.01).

How can I define x and y using the nottem and sunspot data?
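
One way to build those inputs (a sketch; tsmooth() is not part of base R, so 
only the construction of x and y as the lagged pairs (Y_{t-1}, Y_t) is shown 
here):

y.all <- as.numeric(nottem)   # the series; do the same for the sunspot data
x <- y.all[-length(y.all)]    # Y_{t-1}
y <- y.all[-1]                # Y_t
## then, as in the post above (assuming tsmooth is available in your session):
## tsmooth(x, y, "normal", bandwidth = 0.01)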

Thanks a lot!
Owen

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] JGR

2007-05-29 Thread Ronaldo Reis Junior
Hi,

I can't find any site for making suggestions about JGR.

Is it possible to bind a keyboard shortcut to the assignment sign <-, the way 
Emacs uses Shift and the minus key?

Thanks
Ronaldo
-- 
The most exciting phrase to hear in science, the one that heralds new
discoveries, is not "Eureka!" (I found it!) but "That's funny ..."
-- Isaac Asimov
--
> Prof. Ronaldo Reis Júnior
|  .''`. UNIMONTES/Depto. Biologia Geral/Lab. de Ecologia
| : :'  : Campus Universitário Prof. Darcy Ribeiro, Vila Mauricéia
| `. `'` CP: 126, CEP: 39401-089, Montes Claros - MG - Brasil
|   `- Fone: (38) 3229-8187 | [EMAIL PROTECTED] | [EMAIL PROTECTED]
| http://www.ppgcb.unimontes.br/ | ICQ#: 5692561 | LinuxUser#: 205366

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] hierarchical cluster analysis of groups of vectors

2007-05-29 Thread Rafael Duarte
It seems that you already have groups defined.
Discriminant analysis would probably be more appropriate for what you want.
Best regards,
Rafael Duarte



Anders Malmendal wrote:

>I want to do hierarchical cluster analysis to compare 10 groups of 
>vectors with five vectors in each group (i.e. I want to make a dendrogram 
>showing the clustering of the different groups). I've looked into using 
>dist and hclust, but cannot see how to compare the different groups 
>instead of the individual vectors. I am thankful for any help.
>Anders
>
>__
>R-help@stat.math.ethz.ch mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>  
>


-- 
Rafael Duarte
Marine Resources Department - DRM
IPIMAR -  National Research Institute for Agriculture and Fisheries
Av. Brasília, 1449-006 Lisbon  -  Portugal
Tel:+351 21 302 7000  Fax:+351 21 301 5948
e-mail: [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] hierarchical cluster analysis of groups of vectors

2007-05-29 Thread Anders Malmendal
I want to do hierarchical cluster analysis to compare 10 groups of 
vectors with five vectors in each group (i.e. I want to make a dendrogram 
showing the clustering of the different groups). I've looked into using 
dist and hclust, but cannot see how to compare the different groups 
instead of the individual vectors. I am thankful for any help.
Anders

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Sum per hour

2007-05-29 Thread jessica . gervais
Thank you,

I have tried your suggestion.

It seems to be the right way... but I still get an error message.

Here is the code I have been executing:

time<-c("2000-10-03 14:00:00","2000-10-03 14:10:00","2000-10-03
14:20:00","2000-10-03 15:30:00","2000-10-03 16:40:00","2000-10-03
16:50:00","2000-10-03 17:00:00","2000-10-03 17:10:00","2000-10-03
17:20:00","2000-10-03 18:30:00","2000-10-04 14:00:00","2000-10-04
14:10:00","2000-10-04 14:20:00","2000-10-04 15:30:00","2000-10-04
16:40:00","2000-10-04 16:50:00","2000-10-04 17:00:00","2000-10-04
17:10:00","2000-10-04 17:20:00","2000-10-04 18:30:00")

precipitation<-c(0,0.1,0,0,0,0,0.2,0.3,0.5,6,7,8,9,1,0,0,0,0,1,0)

library(zoo)

z <- zoo(precipitation, as.POSIXct(time, tz = "GMT"))
aggregate(z, function(x) as.POSIXct(trunc(x, "hour")), sum(na.rm=TRUE))
Error in FUN(X[[1L]], ...) : argument "INDEX" is missing, with no default

...
I saw that you can index a zoo object. I have tried with a vector, but it
doesn't work.

Otherwise, I don't know what this INDEX argument is...

Does anyone have an idea about it?
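
A sketch of the call that was probably intended (the third argument must be a
function; sum(na.rm = TRUE) is evaluated to a plain number before aggregate
ever sees it):

hourly <- aggregate(z,
                    function(tt) as.POSIXct(trunc(tt, "hours")),  # hour of each time stamp
                    function(x) sum(x, na.rm = TRUE))             # pass a function, not its result
hourly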

Thanks in advance,

Jessica

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R-About PLSR

2007-05-29 Thread Bjørn-Helge Mevik
Nitish Kumar Mishra wrote:

> I have installed PLS package in R and use it for princomp & prcomp
> commands for calculating PCA using its example file(USArrests example).

Uhm.  These functions and data sets are not in the pls package; they
are in the stats and datasets packages that come with R.

> But how can I use pls for partial least squares, R-squared, and mvrCv? One 
> more thing: how can I import an external file into R? When I use plsr, R2, or 
> RMSEP, it shows the error "could not find function" for plsr, RMSEP, etc.
> How can I calculate PLS, R2, RMSEP, PCR, and MVR using the pls package in R?

There is an Rnews article describing the package¹, and a paper in
Journal of Statistical Software².

¹Mevik, B.-H. (2006); The pls package; R News  6(3), 12-17.


²Mevik, B.-H., Wehrens, R. (2007); The pls Package: Principal
Component and Partial Least Squares Regression in R; Journal of
Statistical Software  18(2), 1--24.
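
As a quick orientation (a sketch using the yarn data that ships with the pls 
package; your own data would first be read in with something like read.table()):

library(pls)
data(yarn)
fit <- plsr(density ~ NIR, ncomp = 6, data = yarn, validation = "CV")
summary(fit)   # cross-validated RMSEP for 1..6 components
RMSEP(fit)
R2(fit)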


-- 
Bjørn-Helge Mevik

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Where to find "nprq"?

2007-05-29 Thread Rainer M. Krug
roger koenker wrote:
> It has been folded into my quantreg package.
Thanks a lot - it is working now
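
For the record, a sketch of the resulting install (nprq is no longer needed as 
a separate package):

install.packages(c("quantreg", "pheno"))
library(pheno)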

Rainer


> 
> url: www.econ.uiuc.edu/~roger          Roger Koenker
> email: [EMAIL PROTECTED]               Department of Economics
> vox: 217-333-4558                      University of Illinois
> fax: 217-244-6678                      Champaign, IL 61820
> 
> 
> On May 28, 2007, at 4:32 AM, Rainer M. Krug wrote:
> 
>> Hi
>>
>> I am trying to install the package "pheno", but it needs the package
>> "nprq" by Roger Koenker et al. which I can I find this package? It does
>> not seem to be on CRAN and googling also doesn't give me an URL - is it
>> still somewhere available?
>>
>> Thanks,
>>
>> Rainer
>>
>>
>> -- 
>> NEW EMAIL ADDRESS AND ADDRESS:
>>
>> [EMAIL PROTECTED]
>>
>> [EMAIL PROTECTED] WILL BE DISCONTINUED END OF MARCH
>>
>> Rainer M. Krug, Dipl. Phys. (Germany), MSc Conservation
>> Biology (UCT)
>>
>> Leslie Hill Institute for Plant Conservation
>> University of Cape Town
>> Rondebosch 7701
>> South Africa
>>
>> Fax:+27 - (0)86 516 2782
>> Fax:+27 - (0)21 650 2440 (w)
>> Cell:+27 - (0)83 9479 042
>>
>> Skype:RMkrug
>>
>> email:[EMAIL PROTECTED]
>>[EMAIL PROTECTED]
>>
>> __
>> R-help@stat.math.ethz.ch mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide 
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
> 


-- 
NEW EMAIL ADDRESS AND ADDRESS:

[EMAIL PROTECTED]

[EMAIL PROTECTED] WILL BE DISCONTINUED END OF MARCH

Rainer M. Krug, Dipl. Phys. (Germany), MSc Conservation
Biology (UCT)

Leslie Hill Institute for Plant Conservation
University of Cape Town
Rondebosch 7701
South Africa

Fax:+27 - (0)86 516 2782
Fax:+27 - (0)21 650 2440 (w)
Cell:   +27 - (0)83 9479 042

Skype:  RMkrug

email:  [EMAIL PROTECTED]
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] SARIMA in R

2007-05-29 Thread Gad Abraham
Hi,

Is R's implementation of Seasonal ARIMA in the arima() function a 
multiplicative or an additive model?

e.g., is an ARIMA(0,1,1)(0,1,1)[12] from arima() the same as Box et al's 
ARIMA(0,1,1)x(0,1,1)[12] (from Time Series Analysis 1994, p.333).

From another post (http://tolstoy.newcastle.edu.au/R/help/04/07/0117.html)
I suspect it's additive, but I'm not sure.
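
For concreteness, a sketch of fitting such a model with arima(), using the 
built-in AirPassengers series purely as an illustration:

fit <- arima(log(AirPassengers), order = c(0, 1, 1),
             seasonal = list(order = c(0, 1, 1), period = 12))
fit   # an ARIMA(0,1,1)(0,1,1)[12] specification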

Thanks,
Gad

-- 
Gad Abraham
Department of Mathematics and Statistics
The University of Melbourne
Parkville 3010, Victoria, Australia
email: [EMAIL PROTECTED]
web: http://www.ms.unimelb.edu.au/~gabraham

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.