Re: [R] on.exit() inside local()

2004-08-10 Thread Gabor Grothendieck
Vadim Ogranovich vograno at evafunds.com writes:

: 
: Hi,
: 
: Since I routinely open files in a loop I've developed a habit of using
: on.exit() to close them. Since on.exit() needs to be called within a
: function I use eval() as a surrogate. For example:
: 
: for (fileName in c("a", "b")) eval({
:   con <- file(fileName);
:   on.exit(close(con))
:   }) 
: 
: and con will be closed no matter what.
: 
: However it stopped working once I wrapped the loop in local():
:  > local(
: +   for (foo in seq(2)) eval({
: + on.exit(cat(foo, "\n"))
: +   })
: + )
: Error in cat(foo, "\n") : Object "foo" not found
: 
: 
: W/o local() it works just fine:
: > for (foo in seq(2)) eval({
: + on.exit(cat(foo, "\n"))
: +   })
: 1 
: 2 
: 
: The reason I wanted the local() is to keep 'foo' from interfering with
: the existing environments, but somehow this breaks the thing.
: At this point I am stuck. Could someone please tell what's going on?

The on.exit code is executing in an environment whose parent is
namespace:base and so cannot access the environment created by
local.  Use evalq, instead of eval, which has the effect of
running the on.exit code in the environment created by
local:

local({
   for (i in c("a", "b")) evalq(
     on.exit(cat(i, "\n"))
   )
})

or use an inner local, which has the effect of creating
a new environment for each iteration of the loop in which
the on.exit code runs:

local({
   for (i in c("a", "b")) local(
     on.exit(cat(i, "\n"))
   )
})
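
Alternatively, since on.exit is really meant to be called inside a
function, a minimal sketch (with hypothetical file names) that avoids
the issue altogether:

readOne <- function(fileName) {
    con <- file(fileName, "r")
    on.exit(close(con))     # runs when readOne returns, even after an error
    readLines(con)
}
for (fileName in c("a", "b")) readOne(fileName)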

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to import specific column(s) using read.table?

2004-08-10 Thread Gabor Grothendieck
Gabor Grothendieck ggrothendieck at myway.com writes:

: 
: F Duan f.duan at yale.edu writes:
: 
:  I have a very big tab-delim txt file with header and I only want to import 
:  several columns into R. I checked the options for read.table and only 
: 
: Try using scan with the what=list(...) and flush=TRUE arguments.  
: For example, if your data looks like this:
: 
: 1 2 3 4 
: 5 6 7 8 
: 9 10 11 12
: 13 14 15 16
: 
: then you could read columns 2 and 4 into a list with:
: 

oops. That should be 1 and 3.

:scan("myfile", what = list(0, NULL, 0), flush = TRUE)
: 
: or read in and convert to a data frame via:
: 
:do.call("cbind", scan("myfile", what = list(0, NULL, 0), flush = TRUE))
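
For completeness, a self-contained illustration of the above (using a
temporary file for the sample data):

tf <- tempfile()
writeLines(c("1 2 3 4", "5 6 7 8", "9 10 11 12", "13 14 15 16"), tf)
scan(tf, what = list(0, NULL, 0), flush = TRUE)   # columns 1 and 3, as a list
do.call("cbind", scan(tf, what = list(0, NULL, 0), flush = TRUE))   # bound together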

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] date axes and formats in levelplot

2004-08-10 Thread Toby.Patterson
Hi all (and particularly Deepayan), 

A while back Deepayan helped me with the query in the text below (thanks
again). Specifically it was about changing the way that dates plotted on
the axes of lattice plots. 

While this works using xyplot, everything comes apart when I use
levelplot. The axis labels on the date axis are shown as the integer
representation of the date (number of seconds since the origin I
assume). I guess that the POSIX dates are getting coerced into numeric
objects somewhere along the way and that there is no easy fix for this. 

I would be really grateful if there is a work around that would allow me
to plot recognizable dates. Any suggestions?

Cheers 
Toby  

-Original Message-
From: Deepayan Sarkar [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 05, 2004 12:16 PM
To: [EMAIL PROTECTED]
Cc: Patterson, Toby (Marine, Hobart)
Subject: Re: [R] date Axes and formats in lattice plots

On Sunday 04 July 2004 21:02, [EMAIL PROTECTED] wrote:
 All,

 I have some data of animal movements that I'm plotting using xyplot()
 from lattice. I want to have the date (class POSIXct object) on the
 Y-axis and the animals longitude on X-axis. Eg.

 xyplot(date ~ longitude, groups = animal, data = my.data)

 with data like:

      animal   ptt year month day    lon                date
 125 03P0014 13273 2003 7  10 150.38 2003-07-10 14:03:48
 126 03P0192 20890 2003 7  10 151.13 2003-07-10 14:00:47
 127 03P0197 30466 2003 7  10 150.74 2003-07-10 14:02:21
 ...etc

 It all works fine except for the format of the dates that gets
 displayed.

 I am not sure what I need to change within the lattice frame work to
 get a specific date format (e.g. "%Y-%m-%d"). Does anyone have any
 tips or, even better, some example code that they could pass on?

For R 1.9.0 and above, you should be able to do this with 

xyplot(date ~ longitude, groups = animal, data = my.data,
   scales = list(y = list(format = "%Y-%m-%d")))

Deepayan

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


AW: AW: [R] built-in Sweave-like documentation in R-2.x

2004-08-10 Thread Khamenia, Valery
  Is selecting and 'C-c C-r'-ing the 3 chunks separately that bad?
 
 Yes.  The UI should take care of it for him.

right.
 
  Others may have better suggestions.
 
 A bit more work on the chunk evaluation approach within Emacs is one;
 it almost does what is needed, but not quite.  

why "almost, but not quite"?

...without these "almost, but not quite" caveats I would rather
confirm your statement :)

--
Valery

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Using R boxplot function in Excel

2004-08-10 Thread Vito Ricci
Hi,

I tried to create a boxplot in Excel using Rcom (ver.
1.0) and it works correctly. Could you explain A, B, C
better: are they three samples? groups? Do you want a
boxplot using A, B, C for grouping?
Maybe there are corrupted files in your Rcom installation;
try re-installing Rcom.

Best
Vito




Hi, I have downloaded R-Com and I was able to run
Interactive Graphics Demo 2 in Excel. However, I
couldn't create my own boxplot. Whenever I tried to
run any code, it always says "Error in loading DLL",
even for =rput(A1,A2:A20). Any idea about what's going
wrong? A detailed explanation about how to use the
R-Excel tool would be greatly appreciated.
Thanks a lot in advance!

PS: I would like you to use the following data as an
example.
A           B           C
12.5186182  7.394714354 6.58360308
11.37597453 16.66820087 3.900166247
7.059103407 9.696804606 3.738396698
13.80587153 21.95622475 5.365668029
7.933769009 9.572635842 4.195704277
14.80409653 12.39208079 6.883236109
8.974253685 12.02387754 5.842696863
7.6083609   18.08369863 4.75223318
10.01654143 10.61151753 4.940416728
10.22753966 7.59634933  5.150066626
9.638591817 17.68393592 5.427933173
12.9405328  17.35731932 5.079704705
7.758718564 14.28801913 5.319497531
9.873025445 16.89445473 5.044402668
8.023517946 16.28102329 5.637006679
7.214663381 24.19544618 5.083052782
11.82039457 5.482319845 5.26250973
8.432808752 14.50188112 7.040906111
10.41255589 8.92899781  3.335806595
14.0030136  18.31841647 3.26446583
9.75501396  18.97398026 6.075650289
11.25837687 16.9443803  5.077193363
13.51650669 22.33716661 2.850945874

=
Become builders of solutions

Visit the portal http://www.modugno.it/
and in particular the section on Palese: http://www.modugno.it/archivio/cat_palese.shtml

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Using R boxplot function in Excel

2004-08-10 Thread Vito Ricci
Hi,

there is a mailing list about R-Com; if you still have
problems, write to that list.

See:

http://mailman.csd.univie.ac.at/mailman/listinfo/rcom-l

Best
Vito




Hi, I have downloaded R-Com and I was able to run
Interactive Graphics Demo 2 in Excel. However, I
couldn't create my own boxplot. Whenever I tried to
run any code, it always says "Error in loading DLL",
even for =rput(A1,A2:A20). Any idea about what's going
wrong? A detailed explanation about how to use the
R-Excel tool would be greatly appreciated.
Thanks a lot in advance!

PS: I would like you to use the following data as an
example.
A           B           C
12.5186182  7.394714354 6.58360308
11.37597453 16.66820087 3.900166247
7.059103407 9.696804606 3.738396698
13.80587153 21.95622475 5.365668029
7.933769009 9.572635842 4.195704277
14.80409653 12.39208079 6.883236109
8.974253685 12.02387754 5.842696863
7.6083609   18.08369863 4.75223318
10.01654143 10.61151753 4.940416728
10.22753966 7.59634933  5.150066626
9.638591817 17.68393592 5.427933173
12.9405328  17.35731932 5.079704705
7.758718564 14.28801913 5.319497531
9.873025445 16.89445473 5.044402668
8.023517946 16.28102329 5.637006679
7.214663381 24.19544618 5.083052782
11.82039457 5.482319845 5.26250973
8.432808752 14.50188112 7.040906111
10.41255589 8.92899781  3.335806595
14.0030136  18.31841647 3.26446583
9.75501396  18.97398026 6.075650289
11.25837687 16.9443803  5.077193363
13.51650669 22.33716661 2.850945874

=
Become builders of solutions

Visit the portal http://www.modugno.it/
and in particular the section on Palese: http://www.modugno.it/archivio/cat_palese.shtml

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


AW: [R] built-in Sweave-like documentation in R-2.x

2004-08-10 Thread Khamenia, Valery
hi tony,

 What exactly do you mean by this?
 1. generation of Sweave-style docs from R programs or interaction?  

neither (if I interpret your question correctly).

 2. tools for doing docs and analysis at the same time?  Emacs Speaks
 Statistics has supported this with R since last century (1997 or so).

as you have seen, I use Emacs, and have done so since last century :)

 3. the vignettes of Bioconductor?

not sure.

 4. a text book in line with the above?

nope.

I think just a smarter C-c C-r would be a kind of trade-off here.

hm, maybe there are some other voices here similar to mine?
It would make the subject easier to discuss. 

--
Valery.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] RSPerl on Redhat 9 (i386 box) and R-1.9.1

2004-08-10 Thread Manoj - Hachibushu Capital
Hi,
Sorry for the open-ended nature of the question, but was *anyone*
able to successfully install RSPerl (version 0.5-7) on a Redhat 9, i386 box
for the latest R version (1.9.1)?

I tried far too many things to make it work but am unable to get
it working in either direction (R within Perl or Perl within R... my main
interest is R within Perl).

If anyone is interested, I can document all my (unsuccessful)
attempts to make it work. 

Cheers

Manoj

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] linear constraint optim with bounds/reparametrization

2004-08-10 Thread Ingmar Visser
On 8/9/04 4:52 PM, Thomas Lumley [EMAIL PROTECTED] wrote:

 On Mon, 9 Aug 2004, Kahra Hannu wrote:
 
 1) constrOptim does not work in this case because it only fits inequality
 constraints, i.e. A%*%theta >= c
   --- I was struggling with the same problem a
 few weeks ago in the portfolio optimization context. You can impose
 equality constraints by using inequality constraints >= and <=
 simultaneously. See the example below.
 
 
 Ick. You do not want to use constrOptim for equality constraints.
 constrOptim is a log-barrier interior-point method, meaning that it adds
 a multiple of log(A%*%theta-c) to the objective function. This is a really
 bad idea as a way of faking equality constraints.
 
 Use Lagrange multipliers and optim.

Is there a package that does all that for me? Or is there example code that
does something similar?

ingmar

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] two-way ANOVA

2004-08-10 Thread Luis Rideau Cruz
R-help

This is more a statistic thing than an R question.

I have length measurements of organisms which I want to use for a two-way ANOVA
(fixed factors).
The problem is that I have different numbers of replicates for each combination of
factors.

What are the strengths and weaknesses of this approach when I apply the aov function in R?
The help file states  : 
 'aov' is designed for balanced designs, and the results can be
 hard to interpret without balance.

Thank you 


Luis Ridao Cruz
Fiskirannsóknarstovan
Nóatún 1
P.O. Box 3051
FR-110 Tórshavn
Faroe Islands
Phone: +298 353900
Phone(direct): +298 353912
Mobile: +298 580800
Fax: +298 353901
E-mail:  [EMAIL PROTECTED]
Web:www.frs.fo

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] linear constraint optim with bounds/reparametrization

2004-08-10 Thread Spencer Graves
	  If A%*%theta > c, then log(c-A%*%theta) returns NA.  If A%*%theta < c, log(A%*%theta-c) returns NA.  Only when A%*%theta == c do you get a number from log(A%*%theta-c), and that's (-Inf).  

	  However, for an equality constraint, I've had good luck with an objective function that adds something like the following penalty term:  

	  constraintViolationPenalty*(A%*%theta-c)^2,  

where constraintViolationPenalty is passed via ... in a call to optim.  If I want only (A%*%theta <= c), then I might write this as follows:  

	  constraintViolationPenalty*(A%*%theta > c)*(A%*%theta-c)^2  

  This term is everywhere differentiable and is 0 when the 
constraint is satisfied. 

   I may first run optim with a modest value for 
constraintViolationPenalty then restart it with the output of the 
initial run as starting values and with a larger value for 
constraintViolationPenalty. 
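
	  A minimal, self-contained sketch of this penalty idea (toy 
objective and constraint; the constant is called b here to avoid 
masking c()):

f <- function(theta) sum((theta - c(1, 2))^2)      # toy objective
A <- matrix(c(1, 1), nrow = 1); b <- 1             # constraint: theta1 + theta2 == 1
penalized <- function(theta, constraintViolationPenalty)
    f(theta) + constraintViolationPenalty * sum((A %*% theta - b)^2)
fit1 <- optim(c(0, 0), penalized, constraintViolationPenalty = 10)
fit2 <- optim(fit1$par, penalized, constraintViolationPenalty = 1e4)
fit2$par                                           # approaches the constrained optimum (0, 1)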

 hope this helps.  spencer graves
Ingmar Visser wrote:
On 8/9/04 4:52 PM, Thomas Lumley [EMAIL PROTECTED] wrote:
 

On Mon, 9 Aug 2004, Kahra Hannu wrote:
   

1) constrOptim does not work in this case because it only fits inequality
constraints, i.e. A%*%theta >= c
   

 --- I was struggling with the same problem a
few weeks ago in the portfolio optimization context. You can impose
equality constraints by using inequality constraints >= and <=
simultaneously. See the example below.
 

Ick. You do not want to use constrOptim for equality constraints.
constrOptim is a log-barrier interior-point method, meaning that it adds
a multiple of log(A%*%theta-c) to the objective function. This is a really
bad idea as a way of faking equality constraints.
Use Lagrange multipliers and optim.
   

Is there a package that does all that for me? Or is there example code that
does something similar?
ingmar
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Help with Normal Range Estimation for repated measures

2004-08-10 Thread Crabb, David
I would be grateful if members of the list could point me in the
direction of any code (preferably in R) that will allow me to estimate
95th percentiles from a set of repeated measurements. For example, we
are interested in a clinical measurement where we have 3 measures for 14
subjects, 2 measurements on 24 subjects, and a single measurement on 36
subjects. We want to combine these to form a Normal range by using
something that takes account that some of the measures are repeats.
Something non-parametric would be ideal like a weighted empirical
distribution function. In other words we don't simply want to use 84
single values from the 84 subjects but use all the data (but we are
aware this needs to be corrected for).

Any help, however small, with this problem will be gratefully received.


---
Dr. David Crabb
School of Science,
The Nottingham Trent University,
Clifton Campus, Nottingham. NG11 8NS
Tel: 0115 848 3275   Fax: 0115 848 6690

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Easy acf and pacf for irregular time series in R

2004-08-10 Thread Adrian Trapletti

R:
Is there an easy way to get the acf and pacf for an irregular time 
series?  That is, the acf and pacf with lag lengths that are in units of 
time, not observation number.
 

There are several solutions available depending on the particular 
problem, some of them statistically cleaner than others:
For example, eliminate non-business days (NA's) from the series and 
compute the acf and pacf (e.g. with na.remove from tseries).
For example, interpolate to get a regular series and compute acf and pacf 
(e.g. with approx.irts from tseries).
For example, use a methodology which can treat NA's (e.g. Kalman 
filtering, available in the ts package as of R 1.8.1 or so) and compute 
the acf and pacf from the estimated model...
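
A minimal sketch of the first (and statistically crudest) option, 
assuming an NA-coded regular series and the tseries function 
mentioned above:

library(tseries)
x <- ts(rnorm(250))           # hypothetical daily series
x[c(5, 6, 40, 41)] <- NA      # non-business days recorded as NA
y <- na.remove(x)             # drop the NAs
acf(y)                        # lags now count observations, not calendar time
pacf(y)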

best
Adrian
Thanks,
Jason Higbee
Research Associate
Federal Reserve Bank of St. Louis
The views expressed in this email are the author's and not necessarily 
those of the Federal Reserve Bank of St. Louis or the Federal Reserve 
System
	[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] R packages install problems linux - X not found (WhiteBoxEL 3)

2004-08-10 Thread Dr Mike Waters

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Marc Schwartz
 Sent: 09 August 2004 15:13
 To: Dr Mike Waters
 Cc: R-Help
 Subject: RE: [R] R packages install problems linux - X not 
 found (WhiteBoxEL 3)
 
 
 On Mon, 2004-08-09 at 08:13, Dr Mike Waters wrote:
 
 snip
 
  Marc,
  
  Sorry for the confusion yesterday - in my defence, it was 
 very hot and 
  humid here in Hampshire (31 Celsius at 15:00hrs and still 25 at 
  20:00hrs).
  
  What had happened was that I had done a clean install of WB Linux, 
  including the XFree86 and other developer packages. However, the 
  on-line updating system updated the XFree86 packages to a newer sub 
  version. It seems that it didn't do this correctly for the XFree86 
  developer package, which was missing vital files. However 
 it showed up 
  in the rpm database as being installed (i.e. rpm -qa | grep XFree 
  showed it thus). I downloaded another rpm for this manually 
 and I only 
  forced the upgrade because it was the same version as already 
  'installed' (as far as the rpm database was concerned). I 
 assumed that 
  all dependencies were sorted out through the install in the first 
  place.
 
 OK, that helps. I still have a lingering concern that, given 
 the facts above, there may be other integrity issues in the 
 RPM database, if not elsewhere.
 
 From reading the WB web site FAQ's
 (http://www.whiteboxlinux.org/faq.html) , it appears that 
 they are using up2date/yum for system updates. Depending upon 
 the version in use, there have been issues especially with 
 up2date (hangs, incomplete updates,
 etc.) which could result in other problems. I use yum via the 
 console here (under FC2), though I note that a GUI version of 
 yum has been created, including replacing the RHN/up2date 
 system tray alert icon.
 
 A thought relative to this specifically:
 
 If there is or may be an integrity problem related to the rpm 
 database, you should review the information here:
 
 http://www.rpm.org/hintskinks/repairdb/
 
 which provides instructions on repairing the database. Note 
 the important caveats regarding backups, etc.
 
 The two key steps there are to remove any residual lock files 
 using (as
 root):
 
 rm -f /var/lib/rpm/__*
 
 and then rebuilding the rpm database using (also as root):
 
 rpm -vv --rebuilddb
 
 I think that there needs to be some level of comfort that 
 this basic foundation for the system is intact and correct.
 
  I only mentioned RH9 to show that I had some familiarity with the 
  RedHat policy of separating out the 'includes' etc into a separate 
  developer package.
  
  Once all this had been sorted out, I was then left with a 
 compilation 
  error which pointed to a missing dependency or similar, 
 which was not 
  due to missing developer packages, but, as you and Prof Ripley 
  correctly point out, from the R installation itself. Having 
 grown fat 
  and lazy on using R under the MS Windows environment, I was 
 struggling 
  to identify the precise nature of this remaining problem.
  
  As regards the R installation, I did this from the RH9 binary for 
  version 1.9.1, as I did not think that the Fedora Core 2 
 binary would 
  be appropriate here. Perhaps I should now compile from the source 
  instead?
 
 I would not use the FC2 RPM, since FC2 has many underlying 
 changes not the least of which includes the use of the 2.6 
 kernel series and the change from XFree86 to x.org. Both 
 changes resulted in significant havoc during the FC2 testing 
 phases and there was at least one issue here with R due to 
 the change in X.
 
 According to the WB FAQs:
 
 If you cannot find a package built specifically for RHEL3 or 
 WBEL3 you can try a package for RH9 since many of the 
 packages in RHEL3 are the exact same packages as appeared in RH9.
 
 Thus, it would seem reasonable to use the RH9 RPM that Martyn 
 has created. An alternative would certainly be to compile R 
 from the source tarball.
 
 In either case, I would remove the current installation of R 
 and after achieving a level of comfort that your RPM database 
 is OK, reinstall R using one of the above methods. Pay close 
 attention to any output during the installation process, 
 noting any error or warning messages that may occur.
 
 If you go the RPM route, be sure that the MD5SUM of the RPM 
 file matches the value that Martyn has listed on CRAN to 
 ensure that the file has been downloaded in an intact fashion.
 
 These are my thoughts at this point. You need to get to a 
 point where the underlying system is stable and intact, then 
 get R to the same state before attempting to install new packages.
 
 HTH,
 
 Marc
 
From unpacking the tarball and running ./configure in the R source
directory, I obtain the fact that crti.o is needed by ld.so and was not
found. This file is not present on the system. This file, along with crtn.o
is usually installed by the gnu libc packages, I believe. However, I know
that not all *nix 

Re: AW: [R] built-in Sweave-like documentation in R-2.x

2004-08-10 Thread A.J. Rossini
Khamenia, Valery [EMAIL PROTECTED] writes:


 I think just smarter C-c C-r would be kind of trade-off here.

 hm, maybe there are some other voices here similar to mine?
 It would be easier to discuss the subj. 

Within ESS, you've got ess-thread-eval (similar to
ess-chunk-eval), so the guts for cross-chunk evals are there; the
next part would be, as you say, making C-c C-r Sweave-aware.

A simpler alternative to code would be to allow one to
eval-chunk-and-step, stepping through chunks, similar to C-c C-n for
stepping through lines.  Would this solve the basic problem?  3 x (2
or 3 keystrokes) for 3 chunks.

I can't imagine an evaluation which would cross chunks but use only
part of chunks (this suggests bad programming design to me), but
perhaps you (or others) have an example of when this functionality
would be useful?  (i.e. actual regions to eval which cross code-chunk
boundaries but contain 1 or 2 incomplete code-chunks?).

best,
-tony





-- 
Anthony Rossini Research Associate Professor
[EMAIL PROTECTED]http://www.analytics.washington.edu/ 
Biomedical and Health Informatics   University of Washington
Biostatistics, SCHARP/HVTN  Fred Hutchinson Cancer Research Center
UW (Tu/Th/F): 206-616-7630 FAX=206-543-3461 | Voicemail is unreliable
FHCRC  (M/W): 206-667-7025 FAX=206-667-4812 | use Email

CONFIDENTIALITY NOTICE: This e-mail message and any attachme...{{dropped}}

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Re: date axes and formats in levelplot

2004-08-10 Thread Deepayan Sarkar
On Tuesday 10 August 2004 01:34, [EMAIL PROTECTED] wrote:
 Hi all (and particularly Deepayan),

 A while back Deepayan helped me with the query in the text below
 (thanks again). Specifically it was about changing the way that dates
 plotted on the axes of lattice plots.

 While this works using xyplot, everything comes apart when I use
 levelplot. The axis labels on the date axis are shown as the integer
 representation of the date (number of seconds since the origin I
 assume). I guess that the POSIX dates are getting coerced into
 numeric objects somewhere along the way and that there is no easy fix
 for this.

You are right. At first glance, it appears that I have been negligent in 
properly updating the default prepanel function for levelplot to handle 
DateTime objects. For now, you could use xyplot's default instead: 

levelplot(z ~ x * y, 
  prepanel = lattice:::prepanel.default.xyplot)

Add a 'scales = list(axs = "i")' to get a better-looking result (with 
the bordering rectangles partially clipped).
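
An untested sketch putting the two together, with made-up data (dates 
along x):

library(lattice)
d <- seq(as.POSIXct("2003-07-10"), by = "day", length.out = 30)
g <- expand.grid(x = d, y = 1:5)
g$z <- rnorm(nrow(g))
levelplot(z ~ x * y, data = g,
          prepanel = lattice:::prepanel.default.xyplot,
          scales = list(x = list(format = "%Y-%m-%d"), axs = "i"))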

Deepayan

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] R packages install problems linux - X not found (WhiteBoxEL 3)

2004-08-10 Thread Marc Schwartz
On Tue, 2004-08-10 at 08:15, Dr Mike Waters wrote:

snip 

 From unpacking the tarball and running ./configure in the R source
 directory, I obtain the fact that crti.o is needed by ld.so and was not
 found. This file is not present on the system. This file, along with crtn.o
 is usually installed by the gnu libc packages, I believe. However, I know
 that not all *nix distributions include these files among their packages.
 From a web search, I have not been able to ascertain whether this lack of a
 crti.o is due to there not being one in the distribution, or to another
 incomplete package install.
 
 So, I did a completely fresh installation of WhiteBox, followed by R built
 from source, checked that it ran and then installed the R packages. Only
 then did I run up2date. At least crti.o and crtn.o are still there this
 time, along with the XFree86 includes.
 
 A bit of a cautionary tale, all in all. 
 
 Thanks for all the help and support.
 
 Regards
 
 M


Mike,

From my FC2 system:

$ rpm -qf /usr/lib/crti.o
glibc-devel-2.3.3-27

$ rpm -qf /usr/lib/crtn.o
glibc-devel-2.3.3-27

So, you are correct relative to the source of these two files. A follow
up question might be, did you include the devel packages during your
initial install? If not, that would explain the lack of these files. if
you did, then it would add another data point to support the notion that
your system was, to some level, compromised and a clean install was
probably needed, rather than just trying to re-create the RPM database.

Glad that you are up and running at this point. Given Martyn's follow up
messages, it looks like there may be an issue with the RH9 RPM, so for
the time being using the source tarball would be appropriate.

Best regards,

Marc

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Approaches to using RUnit

2004-08-10 Thread Klaus Juenemann
Hi Seth,
first of all note that it was a deliberate decision to leave it up to 
the RUnit user to load all the functions and packages to be tested 
because loading and sourcing is always very site-specific. RUnit just 
assumes that all functionality to be tested is already present in the R 
session.

If you don't organize your code into packages but source individual R 
files, your approach of sourcing the code at the beginning of a test file 
looks like the right thing to do.

We mainly use packages and the code we use to test packages A and B, 
say, looks like this:

library(A)
library(B)
testsuite.A <- defineTestSuite("A", "location_of_package_A/tests")
testsuite.B <- defineTestSuite("B", "location_of_package_B/tests")
testresult <- runTestSuite(list(testsuite.A, testsuite.B))
printHTMLProtocol(testresult, "location_of_testProtocol")

We use the tests subdirectory of a package to store our RUnit tests even 
though this is not really according to R conventions.

The nice thing is that this code can be executed in batch mode from a 
shell script. This script is executed nightly (and before starting R 
checks out and installs the packages from CVS). In this way, we know the 
test status of our code every morning.
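
A hedged sketch of such a driver script (the package name, paths, and 
the use of RUnit's getErrors() summary are illustrative):

## run_tests.R -- run nightly, e.g. via  R CMD BATCH run_tests.R
library(RUnit)
library(A)
testsuite.A <- defineTestSuite("A", "location_of_package_A/tests")
testresult  <- runTestSuite(testsuite.A)
printHTMLProtocol(testresult, "location_of_testProtocol.html")
err <- getErrors(testresult)                          # summary of failures/errors
if (err$nFail > 0 || err$nErr > 0) quit(status = 1)   # let the calling shell script notice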

Hope this helps,
Klaus




--
Klaus Juenemann   Software Engineer/ Biostatistician
Epigenomics AGKleine Praesidentenstr. 110178 Berlin, Germany
phone:+49-30-24345-393  fax:+49-30-24345-555
http://www.epigenomics.com   [EMAIL PROTECTED]
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Re: Enduring LME confusion or Psychologists and Mixed-Effects

2004-08-10 Thread Christophe Pallier
Hello,
Suppose I have a typical psychological experiment that is a 
within-subjects design with multiple crossed variables and a 
continuous response variable. Subjects are considered a random 
effect. So I could model
 aov1 <- aov(resp ~ fact1*fact2 + Error(subj/(fact1*fact2)))

However, this only holds for orthogonal designs with equal numbers of 
observation and no missing values. These assumptions are easily 
violated so I seek refuge in fitting a mixed-effects model with the 
nlme library.

I suppose that you have, for each subject, enough observations to 
compute his/her average response for each combination of factor1 and 
factor2, no?
If this is the case, you can perform the analysis with the above formula 
on the data obtained by 'aggregate(resp,list(subj,fact1,fact2),mean)'.
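
A hedged sketch of that step (the data frame 'dat' and its columns 
subj, fact1, fact2 and resp are hypothetical; subj and the factors are 
assumed to be coded as factors):

cellmeans <- aggregate(dat$resp,
                       list(subj = dat$subj, fact1 = dat$fact1,
                            fact2 = dat$fact2), mean)
names(cellmeans)[4] <- "resp"
summary(aov(resp ~ fact1 * fact2 + Error(subj/(fact1 * fact2)),
            data = cellmeans))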

This is an analysis with only *within-subject* factors, and there 
*cannot* be a problem of unequal numbers of observations when you have 
only within-subject factors (supposing you have at least one 
observation for each subject in each condition).

I believe the problem with unequal numbers of observations only occurs 
when you have at least two crossed *between-subject* (group) variables.

Let's imagine you have two binary group factors (A and B) yielding four 
subgroups of subjects, and for some reason, you do not have the same number 
of observations in each subgroup.
Then there are several ways of defining the main effects of A and B.

In many cases, the most reasonable definition of the main effect of A is 
to take the average of A in B1 and in B2 (thus ignoring the number of 
observations, or weighting the four subgroups equally).
To test the null hypothesis of no difference in A when all groups are 
equally weighted, one common approach in psychology is to pretend that 
the number of observations in each group is equal to the harmonic mean of 
the number of observations in the subgroups. The sums of squares thus 
obtained can be compared with the error sum of squares in the standard 
anova to form an F-test.
This is called the "unweighted means" approach.

This can easily be done 'by hand' in R, but there is another approach:
you get statistics equivalent to the unweighted anova when you use 
so-called 'type III' sums of squares (I read this in Howell, 1987, 
'Statistical Methods in Psychology',
and in John Fox's book 'An R and S-Plus Companion to Applied Regression', 
p. 140).

It is possible to get type III sums of squares using John Fox's 'car' library.
library(car)
contrasts(A)=contr.sum
contrasts(B)=contr.sum
Anova(aov(resp~A*B),type='III')

You can compute the equally weighted cell means defining the effect of A 
with, say:

with(aggregate(resp,list(a=a,b=b),mean),tapply(x,a,mean))
I have seen some people advise against using 'type III' sums of squares 
but I do not know their rationale. The important thing, it seems to me, 
is to know which null hypothesis is tested in a given test. If indeed the 
type III sums of squares test the effect on equally weighted means, they 
seem okay to me (when this is indeed the hypothesis I want to test). 

Sorry for not answering any of your questions about the use of 'lme' (I 
hope others will), but I feel that 'lme' is not needed in the context 
of unequal cell frequencies.
(I am happy to be corrected if I am wrong.) It seems to me that 'lme' is 
useful when some assumptions of standard anova are violated (e.g. with 
repeated measurements when the assumption of sphericity is false), or 
when you have several random factors.

Christophe Pallier
http://www.pallier.org
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] persp, array and colors

2004-08-10 Thread Camarda, Carlo Giovanni
Dear R-users,
I'd like to plot a three-dimensional surface from an array. I would like
the heights of the plot to come from the values of my first matrix and
the colors of the single facets to be determined by the second matrix.
I hope that the following code helps you to understand my issue better.
Thanks in advance, Giancarlo

## creating my array
m1 <- runif(30, min=1, max=10)
m2 <- c(rep(0,12), rep(1,5), rep(0,3), rep(1,30-12-5-3))
mm <- array(c(m1,m2), dim=c(6,5,2))

## colors
colo <- c("red", "blue")

## axes
x  <- 1:6
y  <- 1:5
z  <- mm[,,1]
z1 <- mm[,,2]

## surface with heights and colors 
## related to the first matrix (no good)
persp(x, y, z, theta = 30, phi = 30, expand = 0.5, col = colo,
      ltheta = 120, ticktype = "detailed",
      xlab = "X", ylab = "Y", zlab = "values")

## surface with heights and colors 
## related to the second matrix (no good as well)
persp(x, y, z1, theta = 30, phi = 30, expand = 0.5, col = colo,
      ltheta = 120, ticktype = "detailed",
      xlab = "X", ylab = "Y", zlab = "values")




+
This mail has been sent through the MPI for Demographic Rese...{{dropped}}

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Question about mle function

2004-08-10 Thread Victoria Landsman
Dear all, 
I'd like to find the mle estimates using the mle function  
mle(negloglik, start = list(), fixed=list(), method=...). 
I am using the L-BFGS-B method and I don't supply the gradient function. Is there a 
way to print the gradients found at the solution value? 

I am using R-1.9.1 on Windows and on Unix. 
Thank you in advance, 
Victoria Landsman. 

 



[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] summary output for inverse Gaussian GLM

2004-08-10 Thread Edward Dick
I'm getting inconsistent output about the link function from the
summary() command when fitting an inverse Gaussian GLM:
> summary(glm(ig.formula, family=inverse.gaussian(link = log),
+ data=mydata, start=start.vals))$call
glm(formula = ig.formula, family = inverse.gaussian(link = log),
    data = mydata, start = start.vals)
> summary(glm(ig.formula, family=inverse.gaussian(link = log),
+ data=mydata, start=start.vals))$family
Family: inverse.gaussian
Link function: 1/mu^2
Has anyone else run into this problem?
I'm running v1.9.1 in Windows.
   E.J.
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] summary output for inverse Gaussian GLM

2004-08-10 Thread Thomas Lumley
On Tue, 10 Aug 2004, Edward Dick wrote:

 I'm getting inconsistent output about the link function from the
 summary() command when fitting an inverse Gaussian GLM:


Yes, it's a bug in the name, not in the result, fortunately.  The
inverse.gaussian family object always has the name of the link set to
1/mu^2, regardless of the actual link function.

-thomas

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to import specific column(s) using read.table?

2004-08-10 Thread F Duan
Thanks a lot. 

Your way works perfectly. And one more tiny question related to your code:

My data file has many columns to be omitted (suppose the first 20), but I 
found that scan("myfile", what=list(rep(NULL, 20), rep(0, 5))) doesn't work. I had 
to type NULL 20 times and 0 five times in the list(...). 

But anyway, it works and saves a lot of memory for me. Thank you again.

Frank 


Quoting Gabor Grothendieck [EMAIL PROTECTED]:

 Gabor Grothendieck ggrothendieck at myway.com writes:
 
 : 
 : F Duan f.duan at yale.edu writes:
 : 
 :  I have a very big tab-delim txt file with header and I only want to
 import 
 :  several columns into R. I checked the options for “read.table” and only
 
 : 
 : Try using scan with the what=list(...) and flush=TRUE arguments.  
 : For example, if your data looks like this:
 : 
 : 1 2 3 4 
 : 5 6 7 8 
 : 9 10 11 12
 : 13 14 15 16
 : 
 : then you could read columns 2 and 4 into a list with:
 : 
 
 oops. That should be 1 and 3.
 
 :scan(myfile, what = list(0, NULL, 0), flush = TRUE)
 : 
 : or read in and convert to a data frame via:
 : 
 :do.call(cbind, scan(myfile, what = list(0, NULL, 0), flush = TRUE))
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html
 


__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] How to import specific column(s) using read.table?

2004-08-10 Thread Liaw, Andy
Use as.list.

Andy

 From: F Duan
 
 Thanks a lot. 
 
 Your way works perfect. And one more tiny question related to 
 your codes:
 
 My data file has many columns to be omitted (suppose the 
 first 20 ones), but I 
 found scan(myfile, what=list(rep(NULL, 20), rep(0, 5)) 
 doesn't work. I had to 
 to type NULL 20 times and 0 five times in the list(...). 
 
 But anyway, it works and saves a lot of memory for me. Thank 
 you again.
 
 Frank 
 
 
 Quoting Gabor Grothendieck [EMAIL PROTECTED]:
 
  Gabor Grothendieck ggrothendieck at myway.com writes:
  
  : 
  : F Duan f.duan at yale.edu writes:
  : 
  :  I have a very big tab-delim txt file with header and I 
 only want to
  import 
  :  several columns into R. I checked the options for 
  "read.table" and only
  
  : 
  : Try using scan with the what=list(...) and flush=TRUE arguments.  
  : For example, if your data looks like this:
  : 
  : 1 2 3 4 
  : 5 6 7 8 
  : 9 10 11 12
  : 13 14 15 16
  : 
  : then you could read columns 2 and 4 into a list with:
  : 
  
  oops. That should be 1 and 3.
  
  :scan(myfile, what = list(0, NULL, 0), flush = TRUE)
  : 
  : or read in and convert to a data frame via:
  : 
  :do.call(cbind, scan(myfile, what = list(0, NULL, 0), 
 flush = TRUE))
  
  __
  [EMAIL PROTECTED] mailing list
  https://www.stat.math.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide!
  http://www.R-project.org/posting-guide.html
  
 
 
 __
 [EMAIL PROTECTED] mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 


__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Error message in function mars() in package mda

2004-08-10 Thread Jude Ryan
Hi,
I am using function mars() in package mda to find knots in a whole bunch 
of predictor variables. I hope to be able to replicate all or some of 
the basis functions that the MARS software from Salford Systems creates. 
When I ran mars() on a small dataset, I was able to get the knots. 
However, when I tried running mars() on a larger dataset (145 predictor 
variables), for a different project, I get the following error message:

 fit1 - mars(disney2[,-146], disney2[,146])
Error in mars(disney2[, -146], disney2[, 146]) :
   NA/NaN/Inf in foreign function call (arg 5)
In addition: Warning messages:
1: NAs introduced by coercion
2: NAs introduced by coercion

Does arg 5 refer to the 5th column in my dataset? This seems to be a 
data problem, is this correct?

Are there any other functions in R that will give me the knots for a set 
of predictor variables?

Any help is greatly appreciated.
Thanks,
Jude
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to import specific column(s) using read.table?

2004-08-10 Thread Tony Plate
At Tuesday 01:55 PM 8/10/2004, F Duan wrote:
Thanks a lot.
Your way works perfect. And one more tiny question related to your codes:
My data file has many columns to be omitted (suppose the first 20 ones), 
but I
found scan(myfile, what=list(rep(NULL, 20), rep(0, 5)) doesn't work. I 
had to
to type NULL 20 times and 0 five times in the list(...).
That's because rep(NULL, 20) returns a single NULL -- it's not obvious what 
else it could sensibly return.  What you need to do is replicate 20 times a 
list containing NULL (and a list containing NULL is quite a different 
object to NULL).  E.g.:

> rep(NULL, 20)
NULL
> c(rep(list(NULL), 3), rep(list(0), 2))
[[1]]:
NULL
[[2]]:
NULL
[[3]]:
NULL
[[4]]:
[1] 0
[[5]]:
[1] 0
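
So, for the original case of skipping 20 columns and keeping 5, 
something along these lines should work (untested sketch):

scan("myfile", what = c(rep(list(NULL), 20), rep(list(0), 5)), flush = TRUE)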

Tony Plate

But anyway, it works and saves a lot of memory for me. Thank you again.
Frank
Quoting Gabor Grothendieck [EMAIL PROTECTED]:
 Gabor Grothendieck ggrothendieck at myway.com writes:

 :
 : F Duan f.duan at yale.edu writes:
 :
 :  I have a very big tab-delim txt file with header and I only want to
 import
 :  several columns into R. I checked the options for “read.table” 
and only

 :
 : Try using scan with the what=list(...) and flush=TRUE arguments.
 : For example, if your data looks like this:
 :
 : 1 2 3 4
 : 5 6 7 8
 : 9 10 11 12
 : 13 14 15 16
 :
 : then you could read columns 2 and 4 into a list with:
 :

 oops. That should be 1 and 3.

 :scan(myfile, what = list(0, NULL, 0), flush = TRUE)
 :
 : or read in and convert to a data frame via:
 :
 :do.call(cbind, scan(myfile, what = list(0, NULL, 0), flush = TRUE))

 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html



__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Question about mle function

2004-08-10 Thread Peter Dalgaard
Victoria Landsman [EMAIL PROTECTED] writes:

 Dear all, 
 I'd like to find the mle esttimates using the mle function  
 mle(negloglik, start = list(), fixed=list(), method=...). 
 I am using the L-BGFS-B method and I don't supply the gradient function. Is there a 
 way to print the gradients found at the solution value? 

No. The details slot in an mle object is simply the return value
from optim(), and that doesn't provide the gradient. Might be an idea
to change that, though, since the gradient is obviously not zero where
the box constraints are active.

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Question about mle function

2004-08-10 Thread Prof Brian Ripley
On 10 Aug 2004, Peter Dalgaard wrote:

 Victoria Landsman [EMAIL PROTECTED] writes:
 
  Dear all, 
  I'd like to find the mle esttimates using the mle function  
  mle(negloglik, start = list(), fixed=list(), method=...). 
  I am using the L-BGFS-B method and I don't supply the gradient
  function. Is there a way to print the gradients found at the solution
  value?
 
 No. The details slot in an mle object is simply the return value
 from optim(), and that doesn't provide the gradient. Might be an idea
 to change that, though, since the gradient is obviously not zero where
 the box constraints are active.

Not so easy, as optim does not return it and indeed the gradient is not 
known at R level (it is computed in the C code, for L-BFGS-B rather deep 
inside at that).  It may not even have been computed at the final 
solution.  We would need something similar to do_optimHess, or even to 
make use of that code as it does evaluate the gradient at the solution.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] hist and normal curve

2004-08-10 Thread Laura Holt
Hi R People:
I have a data set and I want to use the hist command and produce a 
histogram.
Then I want to superimpose a normal curve over the histogram.

Is there a simple way to do this, please?
R version 1.9.1 Windows.
thanks in advance,
Sincerely,
Laura Holt
mailto: [EMAIL PROTECTED]
hthttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Polar decomposition of a rectangular matrix

2004-08-10 Thread simon gatehouse
Dear R users,
Is anyone aware of an R implementation of a matrix polar decomposition?
X = US.  
I can get it from SVD but I understand that this is inefficient when
dealing with large matrices.
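For reference, the SVD route mentioned above can be written in a few
lines (a small sketch; it does not address the efficiency concern):

X  <- matrix(rnorm(20), 5, 4)            # example rectangular matrix
sv <- svd(X)
U  <- sv$u %*% t(sv$v)                   # orthogonal (polar) factor
S  <- sv$v %*% diag(sv$d) %*% t(sv$v)    # symmetric positive semi-definite factor
max(abs(X - U %*% S))                    # essentially zero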
Cheers 
Simon Gatehouse
 
---
School of Biological, Earth  Environmental Sciences
University of New South Wales
Kensington
Sydney NSW
 
ph  61 2 9385 8720
mb  61 0407 130 635
email  [EMAIL PROTECTED] 
 
(Hellman  Schofield Pty.Ltd.
[EMAIL PROTECTED] mailto:[EMAIL PROTECTED] 
tel  61 2 9858 3863)
-
 

[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] hist and normal curve

2004-08-10 Thread Sundar Dorai-Raj

Laura Holt wrote:
Hi R People:
I have a data set and I want to use the hist command and produce a 
histogram.
Then I want to superimpose a normal curve over the histogram.

Is there a simple way to do this, please?
R version 1.9.1 Windows.
thanks in advance,
Sincerely,
Laura Holt
mailto: [EMAIL PROTECTED]

This has been answered numerous times. See, for example,
http://finzi.psych.upenn.edu/R/Rhelp01/archive/5029.html
I found this using "superimpose histogram normal density" on the R Site 
Search page.
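
The usual recipe (a minimal sketch, not the archived answer verbatim) is 
to draw the histogram on the density scale and overlay dnorm():

dat <- rnorm(100, mean = 10, sd = 2)                        # example data
hist(dat, freq = FALSE, col = "grey")
curve(dnorm(x, mean = mean(dat), sd = sd(dat)), add = TRUE)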

--sundar
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] intersect two files

2004-08-10 Thread Christian Mora
Hi all;
I'm working with two datasets in R, say data1 and data2. Both datasets
are composed of several rows and columns (data frames) and some of the
rows are identical in both datasets. I'm wondering if there is any way to
remove from one set, say data1, the rows that also appear in the other
set, say data2, using R?
Thanks in advance for any hint.
Christian

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] intersect two files

2004-08-10 Thread Liaw, Andy
You have not given enough info.  Do the data sets have the same columns?  If
not, you need to tell us more about how you can tell whether one row of a
data frame is `identical' to some row of another.

Assuming the columns are the same between the two, the basic idea is to
combine all columns into a single vector for each, then check which elements
of one are in the other.  Something like (code untested!):

id1 <- do.call("paste", c(data1, sep=":"))
id2 <- do.call("paste", c(data2, sep=":"))
## Rows of data1 that are in data2:
r1 <- which(id1 %in% id2)

## Remove:
data1.reduced <- data1[-r1,]
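
A toy illustration of the same idea (made-up data frames), with a guard
for the case of no matching rows:

data1 <- data.frame(x = c(1, 2, 3), y = c("a", "b", "c"))
data2 <- data.frame(x = c(2, 4), y = c("b", "d"))
id1 <- do.call("paste", c(data1, sep = ":"))
id2 <- do.call("paste", c(data2, sep = ":"))
r1 <- which(id1 %in% id2)
if (length(r1)) data1[-r1, ] else data1   # data1[-integer(0), ] would drop all rows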

Andy


 From: Christian Mora
 
 Hi all;
 Im working with two datasets in R, say data1 and data2. Both datasets
 are composed of several rows and columns (dataframe) and some of the
 rows are identical in both datasets. Im wondering if there is 
 any way to
 remove from one set, say data1, the rows that are identical 
 in the other
 set, say data2, using R?
 Thanks for any hint in advance
 Christian
 
 __
 [EMAIL PROTECTED] mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 


__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] intersect two files

2004-08-10 Thread Adaikalavan Ramasamy
In short, merge with all=FALSE followed by removal of redundant columns might do the 
trick. 
If rownames serve as common key, use the argument by=0.

See http://tolstoy.newcastle.edu.au/R/help/04/07/1250.html and many
other hits on http://maths.newcastle.edu.au/~rking/R/


On Tue, 2004-08-10 at 23:44, Christian Mora wrote:
 Hi all;
 Im working with two datasets in R, say data1 and data2. Both datasets
 are composed of several rows and columns (dataframe) and some of the
 rows are identical in both datasets. Im wondering if there is any way to
 remove from one set, say data1, the rows that are identical in the other
 set, say data2, using R?
 Thanks for any hint in advance
 Christian
 
 __
 [EMAIL PROTECTED] mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] str and Surv objects

2004-08-10 Thread Laura Holt
Dear R People:
I used the Surv function to produce the following object:
> a <- Surv(1:4,2:5,c(0,1,1,0))
> a
[1] (1,2+] (2,3 ] (3,4 ] (4,5+]
> str(a)
Error in "[.Surv"(object, 1:ile) : subscript out of bounds

Why does str(a) give an error, please?  Or did I do something wrong?
Thanks in advance.
R Version 1.9.1 Windows
Sincerely,
Laura Holt
mailto: [EMAIL PROTECTED]
hthttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] str and Surv objects

2004-08-10 Thread Liaw, Andy
Buried in str.default is a line:

iv.len <- round(2.5 * v.len)

which seems to be the culprit.  v.len is 4, which seems correct for the data
given.  Why multiply by 2.5?

Andy

 From: Laura Holt
 
 Dear R People:
 
 I used the Surv function to produce the following object:
 a - Surv(1:4,2:5,c(0,1,1,0)) a
 [1] (1,2+] (2,3 ] (3,4 ] (4,5+]
 str(a)
 Error in [.Surv(object, 1:ile) : subscript out of bounds
 
 
 Why does str(a) give an error, please?  Or did I do something wrong?
 
 Thanks in advance.
 R Version 1.9.1 Windows
 Sincerely,
 Laura Holt
 mailto: [EMAIL PROTECTED]
 
 
 hthttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/
 
 __
 [EMAIL PROTECTED] mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 


__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Observation error in arima

2004-08-10 Thread Guiming Wang
Hi, 

Does anyone know how to include observation errors in arima() in R?  I read the 
manual and tried the example code, but did not find a solution.  From the outputs 
of the components model, it seems to me that the default setting of arima() does 
not include the observation error in the fitting. Am I right about this?  Thanks in 
advance.

Sincerely,

Guiming Wang
Natural Resource Ecology Lab
Colorado State University
Fort Collins, CO 80523
 
[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] R packages install problems linux - X not found(WhiteBoxEL 3)

2004-08-10 Thread Dr Mike Waters
Marc,

Yes - the glibc-devel package was shown to be installed using rpm -qa. It is
also one of the packages upgraded by up2date from the original version
supplied with the WhiteBox distribution. I concluded that there were
probably more such improperly/incompletely upgraded packages and cut my
losses. Everything seems to be fine second time around. I must have been
unlucky.

Regards

Mike

 -Original Message-
 From: Marc Schwartz [mailto:[EMAIL PROTECTED] 
 Sent: 10 August 2004 15:30
 To: Dr Mike Waters
 Cc: R-Help
 Subject: RE: [R] R packages install problems linux - X not 
 found(WhiteBoxEL 3)
 
 
 On Tue, 2004-08-10 at 08:15, Dr Mike Waters wrote:
 
 snip 
 
  From unpacking the tarball and running ./configure in the R source
  directory, I obtain the fact that crti.o is needed by ld.so and was 
  not found. This file is not present on the system. This file, along 
  with crtn.o is usually installed by the gnu libc packages, 
 I believe. 
  However, I know that not all *nix distributions include these files 
  among their packages.
  From a web search, I have not been able to ascertain whether this 
  lack of a
  crti.o is due to there not being one in the distribution, or to 
  another incomplete package install.
  
  So, I did a completely fresh installation of WhiteBox, 
 followed by R 
  built from source, checked that it ran and then installed the R 
  packages. Only then did I run up2date. At least crti.o and 
 crtn.o are 
  still there this time, along with the XFree86 includes.
  
  A bit of a cautionary tale, all in all.
  
  Thanks for all the help and support.
  
  Regards
  
  M
 
 
 Mike,
 
 From my FC2 system:
 
 $ rpm -qf /usr/lib/crti.o
 glibc-devel-2.3.3-27
 
 $ rpm -qf /usr/lib/crtn.o
 glibc-devel-2.3.3-27
 
 So, you are correct relative to the source of these two 
 files. A follow up question might be, did you include the 
 devel packages during your initial install? If not, that 
 would explain the lack of these files. if you did, then it 
 would add another data point to support the notion that your 
 system was, to some level, compromised and a clean install 
 was probably needed, rather than just trying to re-create the 
 RPM database.
 
 Glad that you are up and running at this point. Given 
 Martyn's follow up messages, it looks like there may be an 
 issue with the RH9 RPM, so for the time being using the 
 source tarball would be appropriate.
 
 Best regards,
 
 Marc
 
 


__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html