[R] [R-pkgs] New package - fastTS

2024-04-10 Thread Peterson, Ryan
Hi R enthusiasts,

I am happy to announce a new package available on CRAN: fastTS 
(https://cran.r-project.org/web/packages/fastTS/). fastTS is especially useful 
for large time series with exogenous features and/or complex seasonality (i.e. 
with multiple modes), allowing for possibly high-dimensional feature sets. The 
method can also facilitate inference on exogenous features, conditional on a 
series' autoregressive structure. The regularization-based method is 
considerably faster than competitors, while often producing more accurate 
predictions. See our open-access publication for more information: 
https://doi.org/10.1177/1471082X231225307

The package has several vignettes, one of which is a detailed walkthrough of 
an application to an (included) data set consisting of an hourly series of 
arrivals into the University of Iowa Emergency Department with concurrent local 
temperature.
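
To install the package and browse those vignettes from R (standard commands, not 
part of the announcement):

install.packages("fastTS")
library(fastTS)
vignette(package = "fastTS")   # list the available vignettes
browseVignettes("fastTS")      # open them in a browser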

If you encounter any issues or would like to make contributions, please do so 
via the package's GitHub page: https://github.com/petersonR/fastTS

Best,
Ryan

Ryan Peterson
Assistant Professor
Department of Biostatistics and Informatics
University of Colorado - Anschutz Medical Campus



Re: [R] Question regarding reservoir volume and water level

2024-04-10 Thread javad bayat
Dear all;
Thank you for your reply.
David has explained an interesting method.
David, I have a DEM file of the region and I have extracted the xyz data from
it. I can also extract the bathymetry data as an xyz file.
I have calculated the storage (volume) of the reservoir at the current
elevation, but the method I used to calculate the volume is different from
yours: I cropped the DEM by the reservoir boundary and then calculated the
volume.
I would be more than happy if you could please explain more, or write code
showing how to get the volume at different elevations, and also explain the
following function, especially f(Storage):

lm(Elevation ~ f(Storage))
Sincerely

On Tue, 9 Apr 2024, 21:26 David Stevens via R-help wrote:

> Water engineer here. The standard approach is to 1) get the storage vs.
> elevation data from the designers of the reservoir or, barring that, 2)
> get the bathymetry data from USBR or state DWR, or, if available, get
> the DEM data from USGS if the survey was done before the reservoir was
> built, or 3) get a boat + sonar with GPS, plus lots of time, and survey the
> bottom elevation yourself. Put the xyz data into ArcGIS and have it
> create the bottom surface, then, with several elevations, integrate the
> xyz data from Z to the bottom to find the storage. Plot the storage at
> each water surface to get an idea of the shape and then use
> lm(Elevation ~ f(Storage)), where f(Storage) may be a cubic or quartic
> polynomial. Then double the Storage and calculate Elevation. This type
> of thing is done every day by hydrologists.
>
> Good luck
>
> David K Stevens, PhD, PE, Professor
> Civil and Environmental Engineering
> Utah Water Research Laboratory
> Utah State University
> 8200 Old Main Hill
> Logan, UT 84322-8200
> david.stev...@usu.edu
> (435) 797-3229 (office)
>
> On 4/9/2024 8:01 AM, peter dalgaard wrote:
> > So, you know how to get volume for given water level.
> >
> > For the reverse problem, you get in trouble because of the nonlinearity
> > inherent in the dependence of surface area on the level.
> >
> > I don't think there is a simple solution to this, save for mapping out
> > the volume as a function of water level and solving equations for the
> > water level using (say) uniroot(). Which may actually suffice for
> > practical purposes.
> >
> > For small changes, finding the derivative of the relation is easy:
> > d(volume) = Area * d(level), and this can be used as an approximate
> > relation as long as the Area remains nearly constant.
> >
> > However, generic questions like doubling the volume are impossible to
> > answer without knowledge of the reservoir shape. E.g. in a cylindrical
> > reservoir halving the water level also halves the volume, but in a
> > conical reservoir, halving the level leaves only 1/8 of the volume.
> >
> > -pd
> >
> >
> >
> >> On 8 Apr 2024, at 05:55, javad bayat wrote:
> >>
> >> Dear all;
> >> Many thanks for your replies. This was not homework. I apologize.
> >> Let me explain more.
> >> There is a dam constructed in a valley with the highest elevation of 1255
> >> m. The area of its reservoir can be calculated by drawing a polygon around
> >> the water and it is known.
> >> I have the Digital Elevation Model (DEM) of the region (reservoir and its
> >> surrounding area). I have calculated the volume of the current reservoir
> >> (7e6 m3) using the following codes.
> >> library(raster)
> >> library(terra)
> >> library(exactextractr)
> >> library(dplyr)
> >> library(sf)
> >> # Calculate volume for polygon
> >> # Read the DEM raster file
> >> r <- rast("E:/...DEM.tif")
> >> # Read the polygon shapefile
> >> p <- st_read("E:/...Dam.shp")
> >>
> >> r <- crop(r, extent(p))
> >> r <- mask(r, p)
> >>
> >> # Extract the cells in each polygon and calculate the area of each cell
> >> x <- exact_extract(r, p, coverage_area = TRUE)
> >> # Extract polygon values as a dataframe
> >> x1 = as.data.frame(x[1])
> >> head(x1)
> >> x1 = na.omit(x1)
> >> # Calculate the water depth below the maximum (surface) elevation in the polygon
> >> x1$Height = max(x1[,1]) - x1[,1]
> >> # Calculate the volume of each cell
> >> x1$Vol = x1[,2] * x1[,3]
> >> sum(x1$Vol)
> >> x2 = x1[,c(1,2,4)]
> >> x2 = x2[order(x2$value), ]  # order rows by elevation
> >> head(x2)
> >> x3 <- aggregate(Vol ~ value, data = x2, FUN = sum)
> >> x4 <- aggregate(coverage_area ~ value, data = x2, FUN = sum)
> >> x5 = cbind(x3, Area = x4[,2])
> >> library(dplyr)
> >> x6 <- x5 %>%
> >>   mutate(V_sum = cumsum(Vol)) %>%
> >>   mutate(A_sum = cumsum(Area))
> >> plot(x6$value~x6$V_sum)
> >>
> >> And I thought that it is possible to get the elevation for a specific
> >> volume from a linear model between elevation and volume, as follows:
> >>
> >> # Get a linear model between elevation and the volume
> >> lm1 <- lm(value ~ V_sum, data = x6)
> >> d <- data.frame(V_sum = 14e6)  #
> >> predict(lm1, newdata = d)
> >>
> >> But it is not possible through the LM.
> >> Now I want to know what would be the water level in the reservoir if the
> >> reservoir volume doubled or we adding a known volu
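
A minimal R sketch of the level-from-volume lookup described above (added for
illustration, not code from the thread). It assumes the data frame x6 built in the
quoted code, with value = elevation (m) and V_sum = cumulative storage (m3), both
increasing.

# Storage as a function of water level, by interpolating the table
vol_at <- approxfun(x6$value, x6$V_sum, rule = 2)

# Invert numerically: find the level giving a target volume inside the surveyed range
target <- 0.5 * max(x6$V_sum)
level  <- uniroot(function(z) vol_at(z) - target,
                  interval = range(x6$value))$root
level

# Alternative along the lines David suggests: a polynomial fit of elevation on storage
fit <- lm(value ~ poly(V_sum, 3), data = x6)
predict(fit, newdata = data.frame(V_sum = target))

# Note: a doubled volume (14e6 m3) lies above the current water surface, so the DEM
# of the surrounding terrain (not just the cropped reservoir) is needed to extend
# the elevation-storage table before either approach can answer that question.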

Re: [R] Exceptional slowness with read.csv

2024-04-10 Thread avi.e.gross
Dave,

Your method works for you and seems to be a one-time fix of a corrupted data 
file so please accept what I write not as a criticism but explaining my 
alternate reasoning which I suspect may work faster in some situations.

Here is my understanding of what you are doing:

You have a file in CSV format containing N rows, with commas separating M columns. 
A few rows have a glitch: a stray double quote character at the beginning or end of 
a field (that is, adjacent to a comma, or perhaps at the very beginning or end of 
the line of text) that messes things up. This may be in a specific known column or 
in several.

So your algorithm is to read the entire file in, or alternately to read one line at 
a time. Note the types of the columns may not be apparent to you when you start, 
since you are not allowing read.csv() to see what it needs to, or to perform its 
usual processing such as handling comments.
You then call functions such as read.csv() millions of times (N). Argh!

You do that by setting up an error-catching environment N times. Of course, most 
lines are fine and raise no error.

Only on the error lines do you apply a regular expression that checks for quotes not 
immediately adjacent to a comma. I am not sure exactly what pattern you used, though 
I imagine spaces could sometimes intervene. You fix any such lines and re-evaluate.

It seems your goal was to write out a corrected file, so you are doing that by 
appending to it a row/line at a time.

My strategy was a bit different.

- Call read.csv() just once with no error checking, but with quoting disabled so 
that a quote character is not treated specially. Note that if a quoted region may 
contain commas, this is a bad strategy; if all it contains is spaces or other 
non-comma text, it may be fine.

If that works, there is now a data.frame (or similar structure) in memory with N 
rows and M columns.

- Pick only the columns that may have this issue, meaning the ones containing, say, 
text as opposed to numbers or logical values.
- Using those columns, perhaps one at a time, test every entry at once with a 
regular expression for the presence of exactly one quote, either at the start or at 
the end (the commas you used as anchors are not needed in this version). So you are 
looking for something like:

"words perhaps including, commas
Or
words perhaps including, commas"

but not for:

words perhaps including, commas
"words perhaps including, commas"

One method is to save the result as a Boolean (TRUE/FALSE) vector marking which 
rows need fixing. Or you might use ifelse() or the equivalent to selectively apply 
a fix to those rows. Another is to use something like sub() to match all the text 
except an initial or terminal quote and replace it with a quote, the match, and a 
closing quote, whenever any stray quote was found.

Whatever you choose can be done in a vectorized manner that may be more efficient. 
You do not need to check for failures, let alone N times, and you only need to 
process the columns that need it.
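
A small sketch of that vectorized check and fix (illustrative only; "comments" is a
hypothetical character column, and df1 is assumed to come from read.csv(file, quote = "")):

col <- df1$comments

# flag entries with a quote at the start or the end, but not both (a stray quote)
stray <- !is.na(col) & xor(startsWith(col, "\""), endsWith(col, "\""))

# one possible fix: drop the stray quote (doubling it to escape it is another option)
col[stray] <- gsub("^\"|\"$", "", col[stray])
df1$comments <- col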

When done, you may want to make sure all the columns are of the type you want 
as who knows if read.csv() made a bad choice on those columns, or others.

Note again, this is only a suggestion and it fails if commas can be part of the 
quoted parts or even misquoted parts.

-Original Message-
From: R-help  On Behalf Of Dave Dixon
Sent: Wednesday, April 10, 2024 12:20 PM
To: Rui Barradas ; r-help@r-project.org
Subject: Re: [R] Exceptional slowness with read.csv

That's basically what I did:

1. Get text lines using readLines().
2. Use tryCatch() to parse each line using read.csv(text = ...).
3. In the catch, use gregexpr() to find any quotes not adjacent to a comma
   (gregexpr("[^,]\"[^,]", ...)).
4. Escape any quotes found by adding a second quote (using str_sub() from
   stringr).
5. Parse the patched text using read.csv(text = ...).
6. Write out the parsed fields as I go along using write.table(...,
   append = TRUE) so I'm not keeping too much in memory.

I went directly to tryCatch because there were 3.5 million records, and 
I only expected a few to have errors.

I found only 6 bad records, but it had to be done to make the datafile 
usable with read.csv(), for the benefit of other researchers using these 
data.


On 4/10/24 07:46, Rui Barradas wrote:
> At 06:47 on 08/04/2024, Dave Dixon wrote:
>> Greetings,
>>
>> I have a csv file of 76 fields and about 4 million records. I know 
>> that some of the records have errors - unmatched quotes, 
>> specifically. Reading the file with readLines and parsing the lines 
>> with read.csv(text = ...) is really slow. I know that the first 
>> 2459465 records are good. So I try this:
>>
>>  > startTime <- Sys.time()
>>  > first_records <- read.csv(file_name, nrows = 2459465)
>>  > endTime <- Sys.time()
>>  > cat("elapsed time = ", endTime - startTime, "\n")
>>
>> elapsed time =   24.12598
>>
>>  > startTime <- Sys.time()
>>  > second_records <- read.csv(file_name, skip = 2459465, nrows = 5)
>>  > endTime <- Sys.time()
>>  > cat("elapsed 

Re: [R] Exceptional slowness with read.csv

2024-04-10 Thread Dave Dixon

That's basically what I did:

1. Get text lines using readLines().
2. Use tryCatch() to parse each line using read.csv(text = ...).
3. In the catch, use gregexpr() to find any quotes not adjacent to a comma
   (gregexpr("[^,]\"[^,]", ...)).
4. Escape any quotes found by adding a second quote (using str_sub() from
   stringr).
5. Parse the patched text using read.csv(text = ...).
6. Write out the parsed fields as I go along using write.table(...,
   append = TRUE) so I'm not keeping too much in memory.

I went directly to tryCatch because there were 3.5 million records, and 
I only expected a few to have errors.

I found only 6 bad records, but it had to be done to make the datafile 
usable with read.csv(), for the benefit of other researchers using these 
data.
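
A rough sketch of that loop (hypothetical file names; the regex and the escaping
follow steps 3-4 above, but this is not the poster's actual code):

library(stringr)   # for str_sub()

in_file  <- "data.csv"
out_file <- "data_fixed.csv"

con   <- file(in_file, "r")
hdr   <- readLines(con, n = 1)     # header row, reused so read.csv knows the columns
first <- TRUE

while (length(line <- readLines(con, n = 1)) > 0) {
  row <- tryCatch(
    read.csv(text = paste(hdr, line, sep = "\n")),
    error = function(e) {
      # find quotes not adjacent to a comma; the quote is the 2nd char of each match
      m <- gregexpr('[^,]"[^,]', line)[[1]]
      if (m[1] != -1) {
        for (pos in rev(m + 1)) {          # right to left, so earlier positions stay valid
          str_sub(line, pos, pos) <- '""'  # escape by doubling the quote
        }
      }
      read.csv(text = paste(hdr, line, sep = "\n"))
    }
  )
  write.table(row, out_file, sep = ",", append = !first,
              col.names = first, row.names = FALSE)
  first <- FALSE
}
close(con)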



On 4/10/24 07:46, Rui Barradas wrote:

At 06:47 on 08/04/2024, Dave Dixon wrote:

Greetings,

I have a csv file of 76 fields and about 4 million records. I know 
that some of the records have errors - unmatched quotes, 
specifically. Reading the file with readLines and parsing the lines 
with read.csv(text = ...) is really slow. I know that the first 
2459465 records are good. So I try this:


 > startTime <- Sys.time()
 > first_records <- read.csv(file_name, nrows = 2459465)
 > endTime <- Sys.time()
 > cat("elapsed time = ", endTime - startTime, "\n")

elapsed time =   24.12598

 > startTime <- Sys.time()
 > second_records <- read.csv(file_name, skip = 2459465, nrows = 5)
 > endTime <- Sys.time()
 > cat("elapsed time = ", endTime - startTime, "\n")

This appears to never finish. I have been waiting over 20 minutes.

So why would (skip = 2459465, nrows = 5) take orders of magnitude 
longer than (nrows = 2459465) ?


Thanks!

-dave

PS: readLines(n=2459470) takes 10.42731 seconds.


Hello,

Can the following function be of help?
After reading the data with the argument quote = "", call a function that 
applies gregexpr to the character columns and then transforms the output 
into a two-column data.frame with columns


 Col - the column processed;
 Unbalanced - the rows with unbalanced double quotes.

I am assuming the quotes are double quotes. It shouldn't be difficult 
to adapt it to other cases: single quotes, or both.





unbalanced_dquotes <- function(x) {
  # indices of the character columns
  char_cols <- sapply(x, is.character) |> which()
  lapply(char_cols, \(i) {
    y <- x[[i]]
    # count double quotes in each string (0 if none); odd counts are unbalanced
    Unbalanced <- gregexpr('"', y) |>
      sapply(\(m) if (m[1] == -1L) 0L else length(m)) |>
      {\(n) (n %% 2L) == 1L}() |>
      which()
    data.frame(Col = i, Unbalanced = Unbalanced)
  }) |>
    do.call(rbind, args = _)
}

# read the data, disregarding quoted strings
df1 <- read.csv(fl, quote = "")
# determine which strings have unbalanced quotes and
# where
unbalanced_dquotes(df1)


Hope this helps,

Rui Barradas






Re: [R] Exceptional slowness with read.csv

2024-04-10 Thread avi.e.gross
It sounds like the discussion is now on how to clean your data, with a twist. 
You want to clean it before you can properly read it in using standard methods.

Some of those standard methods already do quite a bit as they parse the data 
such as looking ahead to determine the data type for a column.

The specific problem being discussed seems to be a lack of balance in the double 
quotes on individual lines of a CSV file, which then messes up that row and the 
following rows for a while. I am not clear on what the quotes mean to the user, but 
I wonder if they can simply not be viewed as quotes. Functions like read.csv() or 
the tidyverse variant read_csv() allow you to specify the quote character or to 
disable quoting.

So what would happen to the damaged line/row in your case, or to any row with both 
quotes intact, if you tried reading it in with an argument disabling the processing 
of quoted regions? It may cause problems, but in your case, maybe it won't.

If so, after reading in the file, you can march through it and make fixes, such as 
those discussed. The other alternative seems to be to read the lines in the 
old-fashioned way, do some surgery on whole lines rather than on individual 
row/column entries, and perhaps feed the (large) result back to read.csv() via its 
text= argument, or write it out to another file and read that in again.

And, of course, if there is just one bad line, you might just open the file with a 
program such as Excel, or anything that lets you edit it once, ...
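
A sketch of that whole-line surgery (hypothetical file name; it assumes commas never
occur inside the quoted or mis-quoted text, as discussed):

lines <- readLines("data.csv")

# number of double quotes on each line (0 when there is no match)
n_quotes <- vapply(gregexpr('"', lines, fixed = TRUE),
                   function(m) if (m[1] == -1L) 0L else length(m),
                   integer(1))
bad <- n_quotes %% 2L == 1L          # lines with an odd, i.e. unbalanced, count

# simple-minded fix: strip all quotes from the unbalanced lines
lines[bad] <- gsub('"', "", lines[bad], fixed = TRUE)

# feed the repaired text back to read.csv() via its text= argument
df <- read.csv(text = lines)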




-Original Message-
From: R-help  On Behalf Of Rui Barradas
Sent: Wednesday, April 10, 2024 9:46 AM
To: Dave Dixon ; r-help@r-project.org
Subject: Re: [R] Exceptional slowness with read.csv

At 06:47 on 08/04/2024, Dave Dixon wrote:
> Greetings,
> 
> I have a csv file of 76 fields and about 4 million records. I know that 
> some of the records have errors - unmatched quotes, specifically. 
> Reading the file with readLines and parsing the lines with read.csv(text 
> = ...) is really slow. I know that the first 2459465 records are good. 
> So I try this:
> 
>  > startTime <- Sys.time()
>  > first_records <- read.csv(file_name, nrows = 2459465)
>  > endTime <- Sys.time()
>  > cat("elapsed time = ", endTime - startTime, "\n")
> 
> elapsed time =   24.12598
> 
>  > startTime <- Sys.time()
>  > second_records <- read.csv(file_name, skip = 2459465, nrows = 5)
>  > endTime <- Sys.time()
>  > cat("elapsed time = ", endTime - startTime, "\n")
> 
> This appears to never finish. I have been waiting over 20 minutes.
> 
> So why would (skip = 2459465, nrows = 5) take orders of magnitude longer 
> than (nrows = 2459465) ?
> 
> Thanks!
> 
> -dave
> 
> PS: readLines(n=2459470) takes 10.42731 seconds.
> 
Hello,

Can the following function be of help?
After reading the data with the argument quote = "", call a function that 
applies gregexpr to the character columns and then transforms the output 
into a two-column data.frame with columns

  Col - the column processed;
  Unbalanced - the rows with unbalanced double quotes.

I am assuming the quotes are double quotes. It shouldn't be difficult to 
adapt it to other cases: single quotes, or both.




unbalanced_dquotes <- function(x) {
  # indices of the character columns
  char_cols <- sapply(x, is.character) |> which()
  lapply(char_cols, \(i) {
    y <- x[[i]]
    # count double quotes in each string (0 if none); odd counts are unbalanced
    Unbalanced <- gregexpr('"', y) |>
      sapply(\(m) if (m[1] == -1L) 0L else length(m)) |>
      {\(n) (n %% 2L) == 1L}() |>
      which()
    data.frame(Col = i, Unbalanced = Unbalanced)
  }) |>
    do.call(rbind, args = _)
}

# read the data, disregarding quoted strings
df1 <- read.csv(fl, quote = "")
# determine which strings have unbalanced quotes and
# where
unbalanced_dquotes(df1)


Hope this helps,

Rui Barradas




Re: [R] Exceptional slowness with read.csv

2024-04-10 Thread Rui Barradas

At 06:47 on 08/04/2024, Dave Dixon wrote:

Greetings,

I have a csv file of 76 fields and about 4 million records. I know that 
some of the records have errors - unmatched quotes, specifically. 
Reading the file with readLines and parsing the lines with read.csv(text 
= ...) is really slow. I know that the first 2459465 records are good. 
So I try this:


 > startTime <- Sys.time()
 > first_records <- read.csv(file_name, nrows = 2459465)
 > endTime <- Sys.time()
 > cat("elapsed time = ", endTime - startTime, "\n")

elapsed time =   24.12598

 > startTime <- Sys.time()
 > second_records <- read.csv(file_name, skip = 2459465, nrows = 5)
 > endTime <- Sys.time()
 > cat("elapsed time = ", endTime - startTime, "\n")

This appears to never finish. I have been waiting over 20 minutes.

So why would (skip = 2459465, nrows = 5) take orders of magnitude longer 
than (nrows = 2459465) ?


Thanks!

-dave

PS: readLines(n=2459470) takes 10.42731 seconds.


Hello,

Can the following function be of help?
After reading the data with the argument quote = "", call a function that 
applies gregexpr to the character columns and then transforms the output 
into a two-column data.frame with columns


 Col - the column processed;
 Unbalanced - the rows with unbalanced double quotes.

I am assuming the quotes are double quotes. It shouldn't be difficult to 
adapt it to other cases: single quotes, or both.





unbalanced_dquotes <- function(x) {
  # indices of the character columns
  char_cols <- sapply(x, is.character) |> which()
  lapply(char_cols, \(i) {
    y <- x[[i]]
    # count double quotes in each string (0 if none); odd counts are unbalanced
    Unbalanced <- gregexpr('"', y) |>
      sapply(\(m) if (m[1] == -1L) 0L else length(m)) |>
      {\(n) (n %% 2L) == 1L}() |>
      which()
    data.frame(Col = i, Unbalanced = Unbalanced)
  }) |>
    do.call(rbind, args = _)
}

# read the data, disregarding quoted strings
df1 <- read.csv(fl, quote = "")
# determine which strings have unbalanced quotes and
# where
unbalanced_dquotes(df1)
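
A possible follow-up, using the function's output to repair the flagged cells
(illustrative only, assuming df1 from above; doubling the quote is just one fix):

bad <- unbalanced_dquotes(df1)
if (!is.null(bad)) {
  for (k in seq_len(nrow(bad))) {
    i <- bad$Unbalanced[k]   # row holding the unbalanced quote
    j <- bad$Col[k]          # character column index
    df1[i, j] <- gsub('"', '""', df1[i, j], fixed = TRUE)   # escape by doubling
  }
}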


Hope this helps,

Rui Barradas




Re: [R] Problem with base::order

2024-04-10 Thread Sigbert Klinke

Hi,

you are unfortunately right. Executing

x <- sample(c(1,2,NA), 26, replace=TRUE)
y <- sample(c(1,2,NA), 26, replace=TRUE)
o <- order(x, y, decreasing = c(T,F), na.last=c(F,T))
cbind(x[o], y[o])

shows that the second entry of na.last is ignored without warning.

Thanks Sigbert

On 10.04.24 at 10:29, Ivan Krylov wrote:

On Wed, 10 Apr 2024 09:33:19 +0200,
Sigbert Klinke wrote:


decreasing=c(F,F,F)


This is only documented to work with method = 'radix':


For the ‘"radix"’ method, this can be a vector of length equal to
the number of arguments in ‘...’ and the elements are recycled as
necessary.  For the other methods, it must be length one.



na.last=c(T,T,T),


I think this is supposed to be a scalar, no matter the sort method. At
the very least, I don't see it documented to accept a logical vector,
and the C code in both src/main/sort.c and src/main/radixsort.c treats
the argument as a scalar (using asLogical(...), not LOGICAL(...) on the
R value).



--
https://hu.berlin/sk
https://hu.berlin/mmstat
https://hu.berlin/mmstat-ar



Re: [R] Problem with base::order

2024-04-10 Thread Ivan Krylov via R-help
On Wed, 10 Apr 2024 09:33:19 +0200,
Sigbert Klinke wrote:

> decreasing=c(F,F,F)

This is only documented to work with method = 'radix':

>> For the ‘"radix"’ method, this can be a vector of length equal to
>> the number of arguments in ‘...’ and the elements are recycled as
>> necessary.  For the other methods, it must be length one.

> na.last=c(T,T,T), 

I think this is supposed to be a scalar, no matter the sort method. At
the very least, I don't see it documented to accept a logical vector,
and the C code in both src/main/sort.c and src/main/radixsort.c treats
the argument as a scalar (using asLogical(...), not LOGICAL(...) on the
R value).
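
For illustration (this example is not from the original message), a call that does
work under the documented rules: a scalar na.last plus method = "radix", so that the
vector-valued decreasing is honoured.

order(letters, LETTERS, 1:26,
      decreasing = c(FALSE, FALSE, FALSE),
      na.last = TRUE,
      method = "radix")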

-- 
Best regards,
Ivan



[R] Problem with base::order

2024-04-10 Thread Sigbert Klinke

Hi,

when I execute

order(letters, LETTERS, 1:26)

then everything is fine. But if I execute

order(letters, LETTERS, 1:26, na.last=c(T,T,T), decreasing=c(F,F,F))

I get the error message

Error in method != "radix" && !is.na(na.last) :
  'length = 3' in coercion to 'logical(1)'

Shouldn't both give the same result?

Sigbert
