[R] different results between cor and ccf

2024-01-16 Thread Patrick Giraudoux
Dear listers,

I am working on a time series and find that, for a given non-zero time 
lag, the correlations obtained by ccf and cor are different.

x <- c(0.85472102802704641, 1.6008990694641689, 2.5019632258894835, 
2.514654801253164, 3.3359198688206368, 3.5401357138398208, 
2.6304117871193538, 3.6694074965420009, 3.9125153101706776, 
4.4006592535478566, 3.0208991912866829, 2.959090589344433, 
3.8434635568566056, 2.1683644330520457, 2.3060571563512973, 
1.4680350663043942, 2.0346918622459054, 2.3674524446877538)

y <- c(2.3085729270534765, 2.0809088217491416, 1.6249456563631131, 
1.513338933177, 0.66754156827555422, 0.3080839731181978, 
0.5265304299394, 0.89070463020837132, 0.71600791432232669, 
0.82152341002975027, 0.22200290782700527, 0.6608410635137173, 
0.90715232876618945, 0.45624062770725898, 0.35074487486980244, 
1.1681750562971052, 1.6976462236079737, 0.88950230250556417)

cc<-ccf(x,y)

> cc

Autocorrelations of series ‘X’, by lag

    -9     -8     -7     -6     -5     -4     -3     -2     -1      0      1      2
 0.098  0.139  0.127 -0.043 -0.049  0.069 -0.237 -0.471 -0.668 -0.595 -0.269 -0.076
     3      4      5      6      7      8      9
-0.004  0.123  0.272  0.283  0.401  0.435  0.454

> cor(x,y)
[1] -0.5948694

So far so good. But when I lag one of the series by hand, I cannot 
reproduce the correlation reported by ccf:

> cor(x[1:(length(x)-1)],y[2:length(y)])
[1] -0.7903428

... whereas I expected -0.668, the value reported by ccf at lag -1.

Can anyone explain why?
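
For reference, a minimal numerical check (assuming the definition given in 
?acf: the lag-k sum is divided by the full length n and scaled by the 
whole-series variances, not by the statistics of the truncated subseries 
that cor() uses) that should reproduce the ccf value at lag -1:

n <- length(x)
mx <- mean(x); my <- mean(y)
# lag -1 pairs x[t-1] with y[t], as in the cor() call above
ccv <- sum((x[1:(n - 1)] - mx) * (y[2:n] - my)) / n     # divisor n, not the number of pairs
ccv / sqrt(mean((x - mx)^2) * mean((y - my)^2))         # should come out close to -0.668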

Best,

Patrick



Re: [R] glm.nb and Error in x[good, , drop = FALSE] * w : non-conformable arrays

2023-04-21 Thread Patrick Giraudoux
Many thanks Ivan! This is fairly clear to me now... When I dumped the 
data.frame, I found it strange to see a "table" declaration for deg, but 
was not able to judge whether it was a problem or not (I would have 
expected something like "numeric").
Your workaround is fine for me (I do not need to keep the table 
attributes).
Best,
Patrick



Le 21/04/2023 à 10:08, Ivan Krylov a écrit :
> On Fri, 21 Apr 2023 09:02:37 +0200
> Patrick Giraudoux  wrote:
>
>> I meet an error with glm.nb that I cannot explain the origin (and
>> find a fix). The model I want to fit is the following:
>>
>> library(MASS)
>>
>> glm.nb(deg~offset(log(durobs))+zone,data=db)
>>
>> and the data.frame is dumped below.
> Thank you for providing both the code and a small piece of data that
> reproduces the error!
>
> (It almost worked. Your mailer automatically generated a plain text
> version of the e-mail and put Unicode non-breaking spaces in there. R
> considers it a syntax error to encounter any of the various Unicode
> space-like characters outside string literals.)
>
>> deg = structure(c(0, 1, 0, 3, 0, 1, 0, 2, 1, 0, 3, 0, 0, 0, 4, 1, 0,
>> 0, 0, 0, 4, 0, 0, 0, 4, 3, 2, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0,
>> 0, 3, 2, 3, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 2,
>> 0, 0, 0, 2, 1, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 2,
>> 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 2, 1, 0, 0, 1, 3, 2,
>> 1, 2, 0, 0, 0, 0, 0, 1, 0, 0, 2, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 2,
>> 1, 1, 0), dim = 135L, class = "table")
> The problem is that `deg` is a table, which ends up making the
> effective weights a table too. Tables are arrays, and element-wise
> product rules are stricter for them than for plain matrices. The code
> makes use of the ability to take an element-wise product between a
> matrix and a vector of the same length as the number of rows in the
> matrix:
>
> matrix(1:12, 4) * 1:4 # works
> matrix(1:12, 4) * as.array(1:4) # results in the same error
>
> # the right way to take products with an array is to make sure that the
> # shapes match exactly
> matrix(1:12, 4) * as.array(cbind(1:4, 1:4, 1:4))
>
> One possible solution is to remove all attributes from db$deg:
>
> db$deg <- as.vector(db$deg)
>
> This way the values of the expressions involved in glm.fit end up being
> of the expected type, and the function completes successfully.
>
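
In context, the workaround amounts to (a minimal sketch, assuming db is the 
data.frame dumped in the original post below):

library(MASS)
db$deg <- as.vector(db$deg)   # drop the "table" class and dim attribute
fit <- glm.nb(deg ~ offset(log(durobs)) + zone, data = db)
summary(fit)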



[R] glm.nb and Error in x[good, , drop = FALSE] * w : non-conformable arrays

2023-04-21 Thread Patrick Giraudoux
Dear Listers,

I am getting an error from glm.nb whose origin I cannot explain (nor can I 
find a fix). The model I want to fit is the following:

library(MASS)

glm.nb(deg~offset(log(durobs))+zone,data=db)

and the data.frame is dumped below.

Does anyone have an idea where the trouble comes from? (Beyond the fact 
that the computation leads to a non-conformable array somewhere... the 
question is why; the fit goes through without any problem with, e.g., a 
Poisson family.)
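
For reference, the culprit is visible in the dump below: deg carries a 
"table" class instead of being a plain numeric vector (as the reply above 
points out). A quick check:

class(db$deg)   # "table", not "numeric"
str(db$deg)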

Best,

Patrick


db <-
structure(list(deg = structure(c(0, 1, 0, 3, 0, 1, 0, 2, 1, 0,
3, 0, 0, 0, 4, 1, 0, 0, 0, 0, 4, 0, 0, 0, 4, 3, 2, 0, 0, 0, 0,
0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 3, 2, 3, 0, 1, 1, 0, 0, 0, 0, 1,
0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 2, 0, 0, 0, 2, 1, 0, 2, 0, 1, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 2, 1, 0, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 0, 0, 2, 1, 0, 0, 1, 3, 2, 1, 2, 0, 0, 0, 0,
0, 1, 0, 0, 2, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 2, 1, 1, 0), dim = 135L, 
class = "table"),
     durobs = c(371, 371, 371, 371, 371, 371, 239, 266, 234, 71,
     436, 407, 407, 414, 415, 418, 415, 329, 414, 414, 415, 330,
     435, 436, 210, 436, 214, 436, 436, 210, 434, 438, 438, 402,
     402, 289, 264, 264, 434, 435, 434, 434, 434, 434, 434, 427,
     427, 427, 328, 422, 291, 412, 221, 417, 416, 416, 79, 322,
     213, 440, 434, 462, 397, 457, 419, 406, 316, 392, 392, 392,
     392, 392, 452, 386, 399, 305, 240, 404, 226, 226, 381, 385,
     392, 388, 388, 391, 396, 392, 385, 385, 385, 237, 378, 378,
     378, 381, 126, 315, 379, 314, 185, 313, 312, 301, 312, 312,
     310, 310, 307, 306, 304, 455, 472, 472, 466, 467, 334, 565,
     429, 429, 425, 422, 421, 419, 417, 417, 410, 405, 195, 422,
     419, 419, 426, 426, 442), zone = c("MO1", "MO1", "MO1", "MO1",
     "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1",
     "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1",
     "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1",
     "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1",
     "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1",
     "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1",
     "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1", "MO1",
     "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2",
     "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2",
     "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2",
     "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2",
     "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2",
     "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2",
     "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2", "MO2",
     "MO2", "MO2", "MO2", "MO2", "MO2")), row.names = c(NA, 135L
), class = "data.frame")



Re: [R] readxl, read_excel: how colon (:) is read ?

2022-04-01 Thread Patrick Giraudoux
This can be done with the TEXT function of Excel (TEXTE in the French 
version), hence:

TEXT(M2;"HH:MM")

This changes the time into text, which can then be imported into R as wanted.
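
Alternatively, the conversion can be done on the R side: the values imported 
with col_types = "text" are Excel's fraction-of-a-day numbers, so they can be 
turned back into clock times (a minimal sketch, using the two values shown 
later in this thread):

frac <- as.numeric(c("0.568750009", "0.577083328"))
secs <- round(frac * 24 * 3600)                       # fraction of a day -> seconds
format(as.POSIXct(secs, origin = "1970-01-01", tz = "UTC"), "%H:%M")
# [1] "13:39" "13:51"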



Le 01/04/2022 à 08:34, Patrick Giraudoux a écrit :
> Absolutely correct! I checked in Excel, and when I change the format 
> to "text" I get in Excel the same fractional numbers as those 
> obtained when importing as text from R... Hence the issue comes from Excel 
> itself. I will find a way to change this format to text in Excel while 
> avoiding such a conversion...
> Thanks Andrew!
>
> Le 01/04/2022 à 08:26, Andrew Simmons a écrit :
>> Probably (but not entirely sure), Excel is storing your text as a 
>> number of days, so 13:38 is a little more than half a day. Open your 
>> spreadsheet in excel and save those columns as text instead of times, 
>> that (should) fix your issue.
>>
>> On Fri, Apr 1, 2022, 02:12 Patrick Giraudoux 
>>  wrote:
>>
>> I have an unexpected behaviour reading times with a colon from an Excel
>> file, using the package readxl.
>>
>> In an Excel sheet, I have a column with times in hours:minutes, e.g:
>>
>> Arrival_time
>> 13:39
>> 13:51
>>
>> When read from R with readxl::read_excel, this gives a tibble column
>> with a full date, which by default is the last day of 1899. OK, why not:
>> I know that POSIX variables start in 1900 according to the R doc (however
>> I wonder why the default here is one day before January 1, 1900).
>>
>> > tmp$Arrival_time
>> [1] "1899-12-31 13:39:00 UTC" "1899-12-31 13:51:00 UTC"
>>
>> Well, this is not exactly what I want. I do not care about the year
>> and the day... Therefore I decided to import this column as "text"
>> explicitly (in order to manage it within R afterwards). And this is
>> what I get now:
>>
>> > read_excel("saisie_data_durban_rapaces_LPO.xlsx",sheet=2,col_types="text")
>> > tmp$Arrival_time
>> [1] "0.568750009" "0.577083328"
>>
>> Can someone tell me what is happening?
>>
>> I would really appreciate understanding the trick...
>>
>>
>>
>



Re: [R] readxl, read_excel: how colon (:) is read ?

2022-04-01 Thread Patrick Giraudoux
Le 01/04/2022 à 08:40, Jeff Newmiller a écrit :
> Both R and Excel assume a date is associated with every time object. In 
> Excel, when you show a date it is an integer number of days since 1899-12-31 
> (due to a mistake made early in programming it). Whenever you show a time, it 
> it merely displaying the time portion (fraction of a day) of a date/time. The 
> date part of that value may or may not be 1899-12-31.
>
> With this in mind, you are tilting at windmills hoping to import a "pure 
> time" because no such thing exists in either program. You can choose to 
> render a `POSIXct` as showing only the time portion when you convert it to 
> character if you so choose.

Thanks for the info. Yes, this is exactly what I did yesterday, going from 
POSIXct to POSIXlt to move ahead. However, I wanted to understand fully what 
happened, hence the call to the list. Jeff and Andrew, everything is now 
clear to me thanks to you...
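
In practice, rendering only the time portion is just a format() call on the 
POSIXct values (a sketch, using the column as imported in the original post):

format(tmp$Arrival_time, "%H:%M")
# [1] "13:39" "13:51"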




Re: [R] strptime, date and conversion of week number into POSIX

2021-02-22 Thread Patrick Giraudoux
Thanks Uwe and Bert,
I have got the essentials now and can manage. Date handling remains quite a 
challenge with a variable number of weeks in a year, but I can understand 
why. It means that visual checking (or NA detection) of the strptime 
conversion remains necessary...
Best,
Patrick
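
To make Uwe's point concrete (a minimal check): under %Y-%W-%u counting there 
simply is no Monday of a week 53 in 2020, hence the NA for that one label. If 
the labels actually follow the ISO 8601 week convention (in which 2020 does 
have a week 53), a dedicated converter such as ISOweek::ISOweek2date() could 
be an option (an assumption about the data source, not something established 
in this thread).

strptime("2020-52-1", format = "%Y-%W-%u")   # "2020-12-28", the last countable Monday of 2020
strptime("2020-53-1", format = "%Y-%W-%u")   # NA: (0-based) yday 369 does not exist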


Le 22/02/2021 à 17:09, Uwe Ligges a écrit :
> That Monday does not exist. For the week before:
>
> strptime(paste0("2020-52","-1"),format="%Y-%W-%u")
> [1] "2020-12-28"
>
> One week later is no longer in 2020, so there is no 53rd week.
>
> Best,
> Uwe Ligges
>
>
>
>
>
> On 22.02.2021 16:15, Patrick Giraudoux wrote:
>> Sorry to answer to myself, but the format was clearly incorrect in the
>> previous post. It should read, refering to the 1th day of the week:
>>
>> strptime(paste0(mydate,"-1"),format="%Y-%W-%u")
>>
>> It converts better, but with a NA on week 53
>>
>>> strptime(paste0(pays$year_week,"-1"),format="%Y-%W-%u")
>>    [1] "2020-01-06 CET"  "2020-01-13 CET"  "2020-01-20 CET" 
>> "2020-01-27 CET"
>>    [5] "2020-02-03 CET"  "2020-02-10 CET"  "2020-02-17 CET" 
>> "2020-02-24 CET"
>>    [9] "2020-03-02 CET"  "2020-03-09 CET"  "2020-03-16 CET" 
>> "2020-03-23 CET"
>> [13] "2020-03-30 CEST" "2020-04-06 CEST" "2020-04-13 CEST" 
>> "2020-04-20 CEST"
>> [17] "2020-04-27 CEST" "2020-05-04 CEST" "2020-05-11 CEST" 
>> "2020-05-18 CEST"
>> [21] "2020-05-25 CEST" "2020-06-01 CEST" "2020-06-08 CEST" 
>> "2020-06-15 CEST"
>> [25] "2020-06-22 CEST" "2020-06-29 CEST" "2020-07-06 CEST" 
>> "2020-07-13 CEST"
>> [29] "2020-07-20 CEST" "2020-07-27 CEST" "2020-08-03 CEST" 
>> "2020-08-10 CEST"
>> [33] "2020-08-17 CEST" "2020-08-24 CEST" "2020-08-31 CEST" 
>> "2020-09-07 CEST"
>> [37] "2020-09-14 CEST" "2020-09-21 CEST" "2020-09-28 CEST" 
>> "2020-10-05 CEST"
>> [41] "2020-10-12 CEST" "2020-10-19 CEST" "2020-10-26 CET" "2020-11-02 
>> CET"
>> [45] "2020-11-09 CET"  "2020-11-16 CET"  "2020-11-23 CET" "2020-11-30 
>> CET"
>> [49] "2020-12-07 CET"  "2020-12-14 CET"  "2020-12-21 CET" "2020-12-28 
>> CET"
>> [53] NA    "2021-01-04 CET"  "2021-01-11 CET" "2021-01-18 
>> CET"
>> [57] "2021-01-25 CET"  "2021-02-01 CET"  "2021-02-08 CET"
>> Warning message:
>> In strptime(paste0(pays$year_week, "-1"), format = "%Y-%W-%u") :
>>     (0-based) yday 369 in year 2020 is invalid
>>
>>
>> Any idea on how to handle this ?
>>
>>
>>
>>
>> Le 22/02/2021 à 15:26, Patrick Giraudoux a écrit :
>>>
>>> Dear all,
>>>
>>> I have a trouble trying to convert dates  given in character to POSIX.
>>> The date is expressed as a year then the week number e.g. "2020-01"
>>> (first week of 2020). I thought is can be converted as following:
>>>
>>> strptime(mydate,format="%Y-%W")
>>>
>>> %W refering to the week of the year as decimal number (00–53) using
>>> Monday as the first day of week (and typically with the first Monday
>>> of the year as day 1 of week 1), as indicated in the doc.
>>>
>>> However, I got this result, with the month fixed to 02 (february) and
>>> day 22 (only the year is  converted correctly):
>>>
>>> strptime(mydate,format="%Y-%W")
>>>  [1] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET"
>>> [...truncated; the full output appears in the original post further down]

Re: [R] strptime, date and conversion of week number into POSIX

2021-02-22 Thread Patrick Giraudoux
Sorry to answer my own post, but the format was clearly incorrect in the 
previous message. It should read, referring to the 1st day of the week:

strptime(paste0(mydate,"-1"),format="%Y-%W-%u")

It converts better, but with a NA on week 53

> strptime(paste0(pays$year_week,"-1"),format="%Y-%W-%u")
  [1] "2020-01-06 CET"  "2020-01-13 CET"  "2020-01-20 CET"  "2020-01-27 CET"
  [5] "2020-02-03 CET"  "2020-02-10 CET"  "2020-02-17 CET"  "2020-02-24 CET"
  [9] "2020-03-02 CET"  "2020-03-09 CET"  "2020-03-16 CET"  "2020-03-23 CET"
[13] "2020-03-30 CEST" "2020-04-06 CEST" "2020-04-13 CEST" "2020-04-20 CEST"
[17] "2020-04-27 CEST" "2020-05-04 CEST" "2020-05-11 CEST" "2020-05-18 CEST"
[21] "2020-05-25 CEST" "2020-06-01 CEST" "2020-06-08 CEST" "2020-06-15 CEST"
[25] "2020-06-22 CEST" "2020-06-29 CEST" "2020-07-06 CEST" "2020-07-13 CEST"
[29] "2020-07-20 CEST" "2020-07-27 CEST" "2020-08-03 CEST" "2020-08-10 CEST"
[33] "2020-08-17 CEST" "2020-08-24 CEST" "2020-08-31 CEST" "2020-09-07 CEST"
[37] "2020-09-14 CEST" "2020-09-21 CEST" "2020-09-28 CEST" "2020-10-05 CEST"
[41] "2020-10-12 CEST" "2020-10-19 CEST" "2020-10-26 CET"  "2020-11-02 CET"
[45] "2020-11-09 CET"  "2020-11-16 CET"  "2020-11-23 CET"  "2020-11-30 CET"
[49] "2020-12-07 CET"  "2020-12-14 CET"  "2020-12-21 CET"  "2020-12-28 CET"
[53] NA                "2021-01-04 CET"  "2021-01-11 CET"  "2021-01-18 CET"
[57] "2021-01-25 CET"  "2021-02-01 CET"  "2021-02-08 CET"
Warning message:
In strptime(paste0(pays$year_week, "-1"), format = "%Y-%W-%u") :
   (0-based) yday 369 in year 2020 is invalid


Any idea on how to handle this ?




Le 22/02/2021 à 15:26, Patrick Giraudoux a écrit :
>
> Dear all,
>
> I have a trouble trying to convert dates  given in character to POSIX. 
> The date is expressed as a year then the week number e.g. "2020-01" 
> (first week of 2020). I thought is can be converted as following:
>
> strptime(mydate,format="%Y-%W")
>
> %W refering to the week of the year as decimal number (00–53) using 
> Monday as the first day of week (and typically with the first Monday 
> of the year as day 1 of week 1), as indicated in the doc.
>
> However, I got this result, with the month fixed to 02 (february) and 
> day 22 (only the year is  converted correctly):
>
> strptime(mydate,format="%Y-%W") [1] "2020-02-22 CET" "2020-02-22 CET" 
> "2020-02-22 CET" "2020-02-22 CET" [5] "2020-02-22 CET" "2020-02-22 
> CET" "2020-02-22 CET" "2020-02-22 CET" [9] "2020-02-22 CET" 
> "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" [13] "2020-02-22 
> CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" [17] 
> "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" 
> [21] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 
> CET" [25] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" 
> "2020-02-22 CET" [29] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 
> CET" "2020-02-22 CET" [33] "2020-02-22 CET" "2020-02-22 CET" 
> "2020-02-22 CET" "2020-02-22 CET" [37] "2020-02-22 CET" "2020-02-22 
> CET" "2020-02-22 CET" "2020-02-22 CET" [41] "2020-02-22 CET" 
> "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" [45] "2020-02-22 
> CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" [49] 
> "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" 
> [53] "2020-02-22 CET" "2021-02-22 CET" "2021-02-22 CET" "2021-02-22 
> CET" [57] "2021-02-22 CET" "2021-02-22 CET" "2021-02-22 CET"
>
> You'll find below a dump of "mydate" you can copy and paster if you 
> need a try
>
> Any hint welcome...
>
> Best,
>
> Patrick
>
> mydate <-
> c("2020-01", "2020-02", "2020-03", "2020-04", "2020-05", "2020-06",
> "2020-07", "2020-08", "2020-09", "2020-10", "2020-11", "2020-12",
> "2020-13", "2020-14", "2020-15", "2020-16", "2020-17", "2020-18",
> "2020-19", "2020-20", "2020-21", "2020-22", "2020-23", "2020-24",
> "2020-25", "2020-26", "2020-27", "2020-28", "2020-29", "2020-30",
> "2020-31", "2020-32", "2020-33", "2020-34", "2020-35", "2020-36",
> "2020-37", "2020-38", "2020-39", "2020-40", "2020-41", "2020-42",
> "2020-43", "2020-44", "2020-45", "2020-46", "2020-47", "2020-48",
> "2020-49", "2020-50", "2020-51", "2020-52", "2020-53", "2021-01",
> "2021-02", "2021-03", "2021-04", "2021-05", "2021-06")
>




[R] strptime, date and conversion of week number into POSIX

2021-02-22 Thread Patrick Giraudoux
Dear all,

I am having trouble converting dates given as character into POSIX. 
Each date is expressed as the year followed by the week number, e.g. "2020-01" 
(first week of 2020). I thought it could be converted as follows:

strptime(mydate,format="%Y-%W")

%W refers to the week of the year as a decimal number (00–53), using 
Monday as the first day of the week (and typically with the first Monday of 
the year as day 1 of week 1), as indicated in the doc.

However, I get the following result, with the month fixed to 02 (February) 
and the day to 22 (only the year is converted correctly):

strptime(mydate,format="%Y-%W")
 [1] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET"
 [5] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET"
 [9] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET"
[13] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET"
[17] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET"
[21] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET"
[25] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET"
[29] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET"
[33] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET"
[37] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET"
[41] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET"
[45] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET"
[49] "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET" "2020-02-22 CET"
[53] "2020-02-22 CET" "2021-02-22 CET" "2021-02-22 CET" "2021-02-22 CET"
[57] "2021-02-22 CET" "2021-02-22 CET" "2021-02-22 CET"

You will find below a dump of "mydate" that you can copy and paste if you 
want to give it a try.

Any hint welcome...

Best,

Patrick

mydate <-
c("2020-01", "2020-02", "2020-03", "2020-04", "2020-05", "2020-06",
"2020-07", "2020-08", "2020-09", "2020-10", "2020-11", "2020-12",
"2020-13", "2020-14", "2020-15", "2020-16", "2020-17", "2020-18",
"2020-19", "2020-20", "2020-21", "2020-22", "2020-23", "2020-24",
"2020-25", "2020-26", "2020-27", "2020-28", "2020-29", "2020-30",
"2020-31", "2020-32", "2020-33", "2020-34", "2020-35", "2020-36",
"2020-37", "2020-38", "2020-39", "2020-40", "2020-41", "2020-42",
"2020-43", "2020-44", "2020-45", "2020-46", "2020-47", "2020-48",
"2020-49", "2020-50", "2020-51", "2020-52", "2020-53", "2021-01",
"2021-02", "2021-03", "2021-04", "2021-05", "2021-06")




Re: [R] dir and pattern = ".r"

2020-02-28 Thread Patrick Giraudoux
Oops... Thank you both. Indeed I must revise my lessons on regular 
expressions... obviously forgotten...



Le 28/02/2020 à 18:32, Jeremie Juste a écrit :

Hello,

you need

dir(pattern="\\.r$",ignore.case=TRUE)

Remember that the pattern is a regular expression,
so ".r" means [any single character followed by] "r". Basically it will give
you any file name that contains an "r" (other than as its first character).


So you got what you expected with

  dir(pattern=".txt")

just by chance: any name containing "txt" preceded by some character would
have been matched as well.
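
In other words, the dot needs escaping and the end anchoring. A 
regular-expression-free spelling of the same idea (sketch) uses glob2rx() 
from utils:

dir(pattern = "\\.r$", ignore.case = TRUE)
dir(pattern = glob2rx("*.r"), ignore.case = TRUE)   # glob2rx turns the wildcard into an anchored regexp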

HTH,

Jeremie



I have this directory's contents listed at the bottom of the message.

Can somebody tell me why with

dir(pattern = ".txt")

dir(pattern = ".dbf")

etc.

I get exactly what I want (a vector with the file names correctly
suffixed), but with

dir(pattern = ".r")

I get this:


dir(pattern=".r") [1] "Article Predation" "BD Carto" [3] "broma1.txt"

"broma3.txt" [5] "BromaBusesMilanMicha.xlsx" "Bromadiolone" [7]
"BufferRenard.dbf" "BufferRenard.prj" [9] "BufferRenard.shp"
"BufferRenard.shx" [11] "clpboard" "DatesDiurnes.txt" [13]
"DatesDiurnes_plus.txt" "DatesDiurnes_plus.xlsx" [15]
"DatesDiurnesFREDON" "DatesDiurnesFREDON_plus.txt" [17]
"DatesDiurnesFREDON_plus.xlsx" "DatesDiurnesRegis.txt" [19]
"DatesDiurnesRegis_plus.txt" "DatesDiurnesRegis_plus.xlsx" [21]
"DatesNocturnes.txt" "DatesNocturnes_plus.txt" [23]
"DatesNocturnes_plus.xlsx" "DatesNocturnesFREDON" [25]
"DatesNocturnesFREDON_plus.txt" "DatesNocturnesFREDON_plus.xlsx" [27]
"DatesNocturnesRegis.txt" "DatesNocturnesRegis_plus.txt" [29]
"DatesNocturnesRegis_plus.xlsx" "Figures" [31] "ParcBuf300n.dbf"
"ParcBuf300n.prj" [33] "ParcBuf300n.shp" "ParcBuf300n.shx" [35]
"ParcBuf350n.dbf" "ParcBuf350n.prj" [37] "ParcBuf350n.shp"
"ParcBuf350n.shx" [39] "Script_190517_preparation.r"
"Script_190518_scores_AT.r" [41] "Script_190519_1430_cinetiques_IKA.r"
"Script_190519_1622_transects_camp.r" [43]
"Script_190530_1700_preparation2.r"
"Script_190530_1903_cinetiques_IKA2.r" [45]
"Script_190531_0922_scores_AT2.r" "Script_190531_1729_prey_resource.r"
[47] "Script_190601_1509_graphIKAd.r"
"Script_190601_1509_graphIKAd_old.r" [49]
"Script_190601_1509_graphIKAn.r" "Script_190601_1509_graphIKAn_old.r"
[51] "Script_190601_1955_spatial.r" "Script_190708_0930_distance.r" [53]
"Script_190709_0930_impact renard.r"
"Script_200117_cinetiques_article.r" [55]
"Script_200119_source_stats_explore_diurne.r" "Script_200119_stats.r"
[57] "Script_200122_spatial_distribution.r" "Script_200124_distance.r"
[59] "Script_200124_distance_source_d.r"
"Script_200124_distance_source_n.r" [61]
"Script_200201_impacts_on_prey.R" "ScriptCompteLignes.r" [63]
"Scripts_avant_200112.zip" "shinyPred" [65] "StudyArea.dbf"
"StudyArea.prj" [67] "StudyArea.shp" "StudyArea.shx" [69]
"SurfaceZoneEtude.dbf" "SurfaceZoneEtude.prj" [71]
"SurfaceZoneEtude.shp" "SurfaceZoneEtude.shx" [73] "transects_camp.Rdata"

How can I get all the files, and only those files, suffixed with ".r"?

Thanks in advance,




dir() [1] "Analyse_190523_baseline_190523_1506.docx"

"Analyse_190523_baseline_190531_2110.docx" [3]
"Analyse_190523_baseline_190531_2110.Rmd" "Analyse_190531_baseline.docx"
[5] "Analyse_190531_baseline.Rmd" "Analyse_190531_baseline_cache" [7]
"Analyse_190531_baseline_files" "Analyse_190603_spatial.docx" [9]
"Analyse_190603_spatial.Rmd" "Analyse_190603_spatial_cache" [11]
"Analyse_190603_spatial_files" "Article Predation" [13] "BD Carto"
"Biblio" [15] "broma1.txt" "broma3.txt" [17] "BromaBusesMilanMicha.xlsx"
"Bromadiolone" [19] "BufferRenard.dbf" "BufferRenard.prj" [21]
"BufferRenard.shp" "BufferRenard.shx" [23] "clpboard" "DatesDiurnes.txt"
[25] "DatesDiurnes_plus.txt" "DatesDiurnes_plus.xlsx" [27]
"DatesDiurnesFREDON" "DatesDiurnesFREDON_plus.txt" [29]
"DatesDiurnesFREDON_plus.xlsx" "DatesDiurnesRegis.txt" [31]
"DatesDiurnesRegis_plus.txt" "DatesDiurnesRegis_plus.xlsx" [33]
"DatesNocturnes.txt" "DatesNocturnes_plus.txt" [35]
"DatesNocturnes_plus.xlsx" "DatesNocturnesFREDON" [37]
"DatesNocturnesFREDON_plus.txt" "DatesNocturnesFREDON_plus.xlsx" [39]
"DatesNocturnesRegis.txt" "DatesNocturnesRegis_plus.txt" [41]
"DatesNocturnesRegis_plus.xlsx" "Figures" [43] "IKAZ_old.zip"
"IKAZ1999.txt" [45] "IKAZ2000.txt" "IKAZ2007.txt" [47] "IKAZ2008.txt"
"IKAZ2009.txt" [49] "IKAZ2010.txt" "IKAZ2011.txt" [51] "IKAZ2012.txt"
"IKAZ2013.txt" [53] "IKAZ2014.txt" "IKAZ2015.txt" [55] "IKAZ2016.txt"
"IKAZ2017.txt" [57] "IKAZ2018.txt" "ParcBuf300n.dbf" [59]
"ParcBuf300n.prj" "ParcBuf300n.shp" [61] "ParcBuf300n.shx"
"ParcBuf350n.dbf" [63] "ParcBuf350n.prj" "ParcBuf350n.shp" [65]
"ParcBuf350n.shx" "Photos ZELAC" [67] "plot.ds.R" "plot.dsmodel.R" [69]
"RData" "Script_190517_preparation.r" [71] "Script_190518_scores_AT.r"
"Script_190519_1430_cinetiques_IKA.r" [73]
"Script_190519_1622_transects_camp.r"
"Script_190530_1700_preparation2.r" [75]
"Script_190530_1903_cinetiques_IKA2.r" "Script_190531_0922_scores_AT2.r"
[77] "Script_190531_1729_prey_resource.r"

[R] dir and pattern = ".r"

2020-02-28 Thread Patrick Giraudoux
I have this directory's contents listed at the bottom of the message.

Can somebody tell me why with

dir(pattern = ".txt")

dir(pattern = ".dbf")

etc.

I get exactly what I want (a vector with the file names correctly 
suffixed), but with

dir(pattern = ".r")

I get this:

> dir(pattern=".r") [1] "Article Predation" "BD Carto" [3] "broma1.txt" 
"broma3.txt" [5] "BromaBusesMilanMicha.xlsx" "Bromadiolone" [7] 
"BufferRenard.dbf" "BufferRenard.prj" [9] "BufferRenard.shp" 
"BufferRenard.shx" [11] "clpboard" "DatesDiurnes.txt" [13] 
"DatesDiurnes_plus.txt" "DatesDiurnes_plus.xlsx" [15] 
"DatesDiurnesFREDON" "DatesDiurnesFREDON_plus.txt" [17] 
"DatesDiurnesFREDON_plus.xlsx" "DatesDiurnesRegis.txt" [19] 
"DatesDiurnesRegis_plus.txt" "DatesDiurnesRegis_plus.xlsx" [21] 
"DatesNocturnes.txt" "DatesNocturnes_plus.txt" [23] 
"DatesNocturnes_plus.xlsx" "DatesNocturnesFREDON" [25] 
"DatesNocturnesFREDON_plus.txt" "DatesNocturnesFREDON_plus.xlsx" [27] 
"DatesNocturnesRegis.txt" "DatesNocturnesRegis_plus.txt" [29] 
"DatesNocturnesRegis_plus.xlsx" "Figures" [31] "ParcBuf300n.dbf" 
"ParcBuf300n.prj" [33] "ParcBuf300n.shp" "ParcBuf300n.shx" [35] 
"ParcBuf350n.dbf" "ParcBuf350n.prj" [37] "ParcBuf350n.shp" 
"ParcBuf350n.shx" [39] "Script_190517_preparation.r" 
"Script_190518_scores_AT.r" [41] "Script_190519_1430_cinetiques_IKA.r" 
"Script_190519_1622_transects_camp.r" [43] 
"Script_190530_1700_preparation2.r" 
"Script_190530_1903_cinetiques_IKA2.r" [45] 
"Script_190531_0922_scores_AT2.r" "Script_190531_1729_prey_resource.r" 
[47] "Script_190601_1509_graphIKAd.r" 
"Script_190601_1509_graphIKAd_old.r" [49] 
"Script_190601_1509_graphIKAn.r" "Script_190601_1509_graphIKAn_old.r" 
[51] "Script_190601_1955_spatial.r" "Script_190708_0930_distance.r" [53] 
"Script_190709_0930_impact renard.r" 
"Script_200117_cinetiques_article.r" [55] 
"Script_200119_source_stats_explore_diurne.r" "Script_200119_stats.r" 
[57] "Script_200122_spatial_distribution.r" "Script_200124_distance.r" 
[59] "Script_200124_distance_source_d.r" 
"Script_200124_distance_source_n.r" [61] 
"Script_200201_impacts_on_prey.R" "ScriptCompteLignes.r" [63] 
"Scripts_avant_200112.zip" "shinyPred" [65] "StudyArea.dbf" 
"StudyArea.prj" [67] "StudyArea.shp" "StudyArea.shx" [69] 
"SurfaceZoneEtude.dbf" "SurfaceZoneEtude.prj" [71] 
"SurfaceZoneEtude.shp" "SurfaceZoneEtude.shx" [73] "transects_camp.Rdata"


How can I get all the files, and only those files, suffixed with ".r"?

Thanks in advance,




> dir() [1] "Analyse_190523_baseline_190523_1506.docx" 
"Analyse_190523_baseline_190531_2110.docx" [3] 
"Analyse_190523_baseline_190531_2110.Rmd" "Analyse_190531_baseline.docx" 
[5] "Analyse_190531_baseline.Rmd" "Analyse_190531_baseline_cache" [7] 
"Analyse_190531_baseline_files" "Analyse_190603_spatial.docx" [9] 
"Analyse_190603_spatial.Rmd" "Analyse_190603_spatial_cache" [11] 
"Analyse_190603_spatial_files" "Article Predation" [13] "BD Carto" 
"Biblio" [15] "broma1.txt" "broma3.txt" [17] "BromaBusesMilanMicha.xlsx" 
"Bromadiolone" [19] "BufferRenard.dbf" "BufferRenard.prj" [21] 
"BufferRenard.shp" "BufferRenard.shx" [23] "clpboard" "DatesDiurnes.txt" 
[25] "DatesDiurnes_plus.txt" "DatesDiurnes_plus.xlsx" [27] 
"DatesDiurnesFREDON" "DatesDiurnesFREDON_plus.txt" [29] 
"DatesDiurnesFREDON_plus.xlsx" "DatesDiurnesRegis.txt" [31] 
"DatesDiurnesRegis_plus.txt" "DatesDiurnesRegis_plus.xlsx" [33] 
"DatesNocturnes.txt" "DatesNocturnes_plus.txt" [35] 
"DatesNocturnes_plus.xlsx" "DatesNocturnesFREDON" [37] 
"DatesNocturnesFREDON_plus.txt" "DatesNocturnesFREDON_plus.xlsx" [39] 
"DatesNocturnesRegis.txt" "DatesNocturnesRegis_plus.txt" [41] 
"DatesNocturnesRegis_plus.xlsx" "Figures" [43] "IKAZ_old.zip" 
"IKAZ1999.txt" [45] "IKAZ2000.txt" "IKAZ2007.txt" [47] "IKAZ2008.txt" 
"IKAZ2009.txt" [49] "IKAZ2010.txt" "IKAZ2011.txt" [51] "IKAZ2012.txt" 
"IKAZ2013.txt" [53] "IKAZ2014.txt" "IKAZ2015.txt" [55] "IKAZ2016.txt" 
"IKAZ2017.txt" [57] "IKAZ2018.txt" "ParcBuf300n.dbf" [59] 
"ParcBuf300n.prj" "ParcBuf300n.shp" [61] "ParcBuf300n.shx" 
"ParcBuf350n.dbf" [63] "ParcBuf350n.prj" "ParcBuf350n.shp" [65] 
"ParcBuf350n.shx" "Photos ZELAC" [67] "plot.ds.R" "plot.dsmodel.R" [69] 
"RData" "Script_190517_preparation.r" [71] "Script_190518_scores_AT.r" 
"Script_190519_1430_cinetiques_IKA.r" [73] 
"Script_190519_1622_transects_camp.r" 
"Script_190530_1700_preparation2.r" [75] 
"Script_190530_1903_cinetiques_IKA2.r" "Script_190531_0922_scores_AT2.r" 
[77] "Script_190531_1729_prey_resource.r" 
"Script_190601_1509_graphIKAd.r" [79] 
"Script_190601_1509_graphIKAd_old.r" "Script_190601_1509_graphIKAn.r" 
[81] "Script_190601_1509_graphIKAn_old.r" "Script_190601_1955_spatial.r" 
[83] "Script_190708_0930_distance.r" "Script_190709_0930_impact 
renard.r" [85] "Script_200117_cinetiques_article.r" 
"Script_200119_source_stats_explore_diurne.r" [87] 
"Script_200119_stats.r" "Script_200122_spatial_distribution.r" [89] 
"Script_200124_distance.r" 

Re: [R] using a variable and a superscript in a legend

2019-10-20 Thread Patrick Giraudoux


It would be nice to put those two worked examples in the documentation of 
the 'expression' and 'bquote' functions in a next R version (they are in 
base) for other users ;-) I am sure many would enjoy them.
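
For the record, the construction from Peter Dalgaard's reply that yields the 
wanted "1.25 ind./km^2" label (2 in superscript, no parentheses), densren 
being the density value to display:

densren <- 1.25
plot(1:100, 1:100, type = "n")
legend(list(x = 0, y = 100),
       legend = as.expression(list(
         "Sans renard",
         bquote(.(densren) * " ind."/"km"^2)
       )),
       lty = c(1, 2), col = c("black", "red"), bty = "n")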



Le 20/10/2019 à 19:15, Patrick Giraudoux a écrit :
> Great !  You have helped to solve a problem on which I was sweating 
> (sporadically, however) since months...
>
> Thanks,
>
> Best,
>
>
> Le 20/10/2019 à 18:29, Bert Gunter a écrit :
>> The legend must be "an expression vector."
>> c("Sans renard",bquote(.(densren) (ind./km)^2))   is not because the 
>> first element is a character string.
>>
>> This works:
>>
>> plot(1:100,1:100,type="n")
>>    legend(list(x=0,y=100),legend=c(expression("Sans 
>> renard"),bquote(.(densren) 
>> (ind./km)^2)),lty=c(1,2),col=c("black","red"),bty="n")
>>
>> Cheers,
>> Bert
>>
>>
>> Bert Gunter
>>
>> "The trouble with having an open mind is that people keep coming 
>> along and sticking things into it."
>> -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
>>
>>
>> On Sun, Oct 20, 2019 at 9:02 AM Patrick Giraudoux 
>> > <mailto:patrick.giraud...@univ-fcomte.fr>> wrote:
>>
>> Thanks Bert and Peter,
>>
>> Yes Bert, I was aware of the legend() function syntax, and just
>> quoting the legend argument within the function.
>>
>> However, Bert and Peter, I do not understand why it works with
>> your absolutely reproducible examples and not in the slightly
>> (not so slightly apparently) different context where I used it...
>>
>> densren=1.25
>> plot(1:100,1:100,type="n")
>> legend(list(x=0,y=100),legend=c("Sans renard",bquote(.(densren)
>> (ind./km)^2)),lty=c(1,2),col=c("black","red"),bty="n")
>>
>> densren=1.25
>> plot(1:100,1:100,type="n")
>> legend(list(x=0,y=100),legend=c("Sans renard",bquote(.(densren) *
>> " ind."/"km"^2)),lty=c(1,2),col=c("black","red"),bty="n"
>>
>> Probably because the result of bquote() is concatenated in a
>> character vector, but how to deal with this ?
>>
>> Best,
>>
>> Patrick
>>
>>
>>
>> Le 20/10/2019 à 16:42, Bert Gunter a écrit :
>>> Assuming you are using base graphics, your syntax for adding the
>>> legend appears to be wrong.
>>> legend() is a separate function, not a parameter of plot.default
>>> afaics.
>>>
>>> The following works for me:
>>>
>>> > densren <- 1.25
>>> > plot(1:10)
>>> > legend (x="center", legend =bquote(.(densren) (ind./km)^2))
>>>
>>> See ?legend
>>>
>>> Bert Gunter
>>>
>>> "The trouble with having an open mind is that people keep coming
>>> along and sticking things into it."
>>> -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
>>>
>>>
>>> On Sun, Oct 20, 2019 at 5:30 AM Patrick Giraudoux
>>> >> <mailto:patrick.giraud...@univ-fcomte.fr>> wrote:
>>>
>>> Dear listers,
>>>
>>> I am trying to pass an expression inlcuding a variable and a
>>> superpscript to a legend. What I want to obtain is e.g. with
>>> densren = 1.25
>>>
>>> 1.25 ind./km^2
>>>
>>> I have tried many variants of the following:
>>>
>>> legend=bquote(.(densren) (ind./km)^2)
>>>
>>> but if not errors, do obtain
>>>
>>> 1.25 (ind./km^2)
>>>
>>> hence not what I want (no parenthesis, 2 in superscript...)
>>>
>>> Any idea about a correct syntax to get what I need ?
>>>
>>> Best,
>>>
>>> Patrick
>>>
>>>
>>>
>>
>




Re: [R] using a variable and a superscript in a legend

2019-10-20 Thread Patrick Giraudoux
Now we have two working solutions. This is great, since I did not find 
any example of this kind when searching the r-help archives and Google...

Thanks !

Le 20/10/2019 à 19:31, Peter Dalgaard a écrit :

It's tricky, but I think what you want is

legend(list(x=0,y=100),
legend=as.expression(list(
  "Sans renard",
  bquote(.(densren) * " ind."/"km"^2)
)),
lty=c(1,2),col=c("black","red"),bty="n")

Generally, if you want a vector of unevaluated expressions, you need an object of mode 
"expression", but you cannot create it directly with expression() because then 
the bquote() is left unevaluated:


expression("Sans renard",bquote(.(densren) * " ind."/"km"^2))

expression("Sans renard", bquote(.(densren) * " ind."/"km"^2))

Putting the bquote on the outside _looks_ like it might work:


bquote(expression("Sans renard",.(densren) * " ind."/"km"^2))

expression("Sans renard", 1.25 * " ind."/"km"^2)

but that is not an "expression" object, but a call to expression() (!). Try it 
and see.

Evaluating the call does actually work (notice that the printed value is 
exactly the same, but the object is not):


> eval(bquote(expression("Sans renard",.(densren) * " ind."/"km"^2)))
expression("Sans renard", 1.25 * " ind."/"km"^2)

but I think I prefer the as.expression(list()) construction.
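
A quick "try it and see" check of that distinction (sketch):

densren <- 1.25
e1 <- bquote(expression("Sans renard", .(densren) * " ind."/"km"^2))
is.expression(e1)         # FALSE: e1 is an unevaluated call to expression()
is.expression(eval(e1))   # TRUE: evaluating the call yields the expression object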

An alternative tack is this:


e <- expression(0,0)
e[[1]] <- "sans renard"
e[[2]] <- bquote(.(densren) * " ind."/"km"^2)
plot(1:100,1:100,type="n")
legend(list(x=0,y=100),legend=e, lty=c(1,2),col=c("black","red"),bty="n")




On 20 Oct 2019, at 18:02 , Patrick Giraudoux  
wrote:

Thanks Bert and Peter,

Yes Bert, I was aware of the legend() function syntax, and just quoting the 
legend argument within the function.

However, Bert and Peter, I do not understand why it works with your absolutely 
reproducible examples and not in the slightly (not so slightly apparently) 
different context where I used it...

densren=1.25
plot(1:100,1:100,type="n")
legend(list(x=0,y=100),legend=c("Sans renard",bquote(.(densren) 
(ind./km)^2)),lty=c(1,2),col=c("black","red"),bty="n")

densren=1.25
plot(1:100,1:100,type="n")
legend(list(x=0,y=100),legend=c("Sans renard",bquote(.(densren) * " 
ind."/"km"^2)),lty=c(1,2),col=c("black","red"),bty="n"

Probably because the result of bquote() is concatenated in a character vector, 
but how to deal with this ?

Best,

Patrick



Le 20/10/2019 à 16:42, Bert Gunter a écrit :

Assuming you are using base graphics, your syntax for adding the legend appears 
to be wrong.
legend() is a separate function, not a parameter of plot.default afaics.

The following works for me:


densren <- 1.25
plot(1:10)
legend (x="center", legend =bquote(.(densren) (ind./km)^2))

See ?legend

Bert Gunter

"The trouble with having an open mind is that people keep coming along and sticking 
things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )


On Sun, Oct 20, 2019 at 5:30 AM Patrick Giraudoux 
 wrote:
Dear listers,

I am trying to pass an expression inlcuding a variable and a
superpscript to a legend. What I want to obtain is e.g. with densren = 1.25

1.25 ind./km^2

I have tried many variants of the following:

legend=bquote(.(densren) (ind./km)^2)

but if not errors, do obtain

1.25 (ind./km^2)

hence not what I want (no parenthesis, 2 in superscript...)

Any idea about a correct syntax to get what I need ?

Best,

Patrick








Re: [R] using a variable and a superscript in a legend

2019-10-20 Thread Patrick Giraudoux
Great! You have helped to solve a problem I had been sweating over 
(sporadically, however) for months...

Thanks,

Best,


Le 20/10/2019 à 18:29, Bert Gunter a écrit :
> The legend must be "an expression vector."
> c("Sans renard",bquote(.(densren) (ind./km)^2))   is not because the 
> first element is a character string.
>
> This works:
>
> plot(1:100,1:100,type="n")
>    legend(list(x=0,y=100),legend=c(expression("Sans 
> renard"),bquote(.(densren) 
> (ind./km)^2)),lty=c(1,2),col=c("black","red"),bty="n")
>
> Cheers,
> Bert
>
>
> Bert Gunter
>
> "The trouble with having an open mind is that people keep coming along 
> and sticking things into it."
> -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
>
>
> On Sun, Oct 20, 2019 at 9:02 AM Patrick Giraudoux 
>  <mailto:patrick.giraud...@univ-fcomte.fr>> wrote:
>
> Thanks Bert and Peter,
>
> Yes Bert, I was aware of the legend() function syntax, and just
> quoting the legend argument within the function.
>
> However, Bert and Peter, I do not understand why it works with
> your absolutely reproducible examples and not in the slightly (not
> so slightly apparently) different context where I used it...
>
> densren=1.25
> plot(1:100,1:100,type="n")
> legend(list(x=0,y=100),legend=c("Sans renard",bquote(.(densren)
> (ind./km)^2)),lty=c(1,2),col=c("black","red"),bty="n")
>
> densren=1.25
> plot(1:100,1:100,type="n")
> legend(list(x=0,y=100),legend=c("Sans renard",bquote(.(densren) *
> " ind."/"km"^2)),lty=c(1,2),col=c("black","red"),bty="n"
>
> Probably because the result of bquote() is concatenated in a
> character vector, but how to deal with this ?
>
> Best,
>
> Patrick
>
>
>
> Le 20/10/2019 à 16:42, Bert Gunter a écrit :
>> Assuming you are using base graphics, your syntax for adding the
>> legend appears to be wrong.
>> legend() is a separate function, not a parameter of plot.default
>> afaics.
>>
>> The following works for me:
>>
>> > densren <- 1.25
>> > plot(1:10)
>> > legend (x="center", legend =bquote(.(densren) (ind./km)^2))
>>
>> See ?legend
>>
>> Bert Gunter
>>
>> "The trouble with having an open mind is that people keep coming
>> along and sticking things into it."
>> -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
>>
>>
>> On Sun, Oct 20, 2019 at 5:30 AM Patrick Giraudoux
>> > <mailto:patrick.giraud...@univ-fcomte.fr>> wrote:
>>
>> Dear listers,
>>
>> I am trying to pass an expression inlcuding a variable and a
>> superpscript to a legend. What I want to obtain is e.g. with
>> densren = 1.25
>>
>> 1.25 ind./km^2
>>
>> I have tried many variants of the following:
>>
>> legend=bquote(.(densren) (ind./km)^2)
>>
>> but if not errors, do obtain
>>
>> 1.25 (ind./km^2)
>>
>> hence not what I want (no parenthesis, 2 in superscript...)
>>
>> Any idea about a correct syntax to get what I need ?
>>
>> Best,
>>
>> Patrick
>>
>>
>>
>




Re: [R] using a variable and a superscript in a legend

2019-10-20 Thread Patrick Giraudoux
Thanks Bert and Peter,

Yes Bert, I was aware of the legend() function syntax; I was just quoting 
the legend argument of the function.

However, Bert and Peter, I do not understand why it works with your 
absolutely reproducible examples and not in the slightly (not so 
slightly, apparently) different context where I used it...

densren=1.25
plot(1:100,1:100,type="n")
legend(list(x=0,y=100),legend=c("Sans renard",bquote(.(densren) 
(ind./km)^2)),lty=c(1,2),col=c("black","red"),bty="n")

densren=1.25
plot(1:100,1:100,type="n")
legend(list(x=0,y=100),legend=c("Sans renard",bquote(.(densren) * " 
ind."/"km"^2)),lty=c(1,2),col=c("black","red"),bty="n"

Probably because the result of bquote() gets concatenated into a character 
vector, but how do I deal with this?

Best,

Patrick



Le 20/10/2019 à 16:42, Bert Gunter a écrit :
> Assuming you are using base graphics, your syntax for adding the 
> legend appears to be wrong.
> legend() is a separate function, not a parameter of plot.default afaics.
>
> The following works for me:
>
> > densren <- 1.25
> > plot(1:10)
> > legend (x="center", legend =bquote(.(densren) (ind./km)^2))
>
> See ?legend
>
> Bert Gunter
>
> "The trouble with having an open mind is that people keep coming along 
> and sticking things into it."
> -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
>
>
> On Sun, Oct 20, 2019 at 5:30 AM Patrick Giraudoux 
>  <mailto:patrick.giraud...@univ-fcomte.fr>> wrote:
>
> Dear listers,
>
> I am trying to pass an expression inlcuding a variable and a
> superpscript to a legend. What I want to obtain is e.g. with
> densren = 1.25
>
> 1.25 ind./km^2
>
> I have tried many variants of the following:
>
> legend=bquote(.(densren) (ind./km)^2)
>
> but if not errors, do obtain
>
> 1.25 (ind./km^2)
>
> hence not what I want (no parenthesis, 2 in superscript...)
>
> Any idea about a correct syntax to get what I need ?
>
> Best,
>
> Patrick
>
>
>




Re: [R] using a variable and a superscript in a legend

2019-10-20 Thread Patrick Giraudoux
Thanks Eric. I had already found that one too (and already tried some 
variations based on it), but to my understanding it does not include a 
variable whose contents are used in the expression, as in the case I 
submitted...


Le 20/10/2019 à 14:56, Eric Berger a écrit :
> I did a Google search on
>
> R plot superscript in legend
>
> and the first search result was
> https://stackoverflow.com/questions/20453408/superscript-r-squared-for-legend 
>
>  which looks like it might address your question.
>
> On Sun, Oct 20, 2019 at 3:30 PM Patrick Giraudoux 
>  <mailto:patrick.giraud...@univ-fcomte.fr>> wrote:
>
> Dear listers,
>
> I am trying to pass an expression inlcuding a variable and a
> superpscript to a legend. What I want to obtain is e.g. with
> densren = 1.25
>
> 1.25 ind./km^2
>
> I have tried many variants of the following:
>
> legend=bquote(.(densren) (ind./km)^2)
>
> but if not errors, do obtain
>
> 1.25 (ind./km^2)
>
> hence not what I want (no parenthesis, 2 in superscript...)
>
> Any idea about a correct syntax to get what I need ?
>
> Best,
>
> Patrick
>
>
>




[R] using a variable and a superscript in a legend

2019-10-20 Thread Patrick Giraudoux
Dear listers,

I am trying to pass an expression inlcuding a variable and a 
superpscript to a legend. What I want to obtain is e.g. with densren = 1.25

1.25 ind./km^2

I have tried many variants of the following:

legend=bquote(.(densren) (ind./km)^2)

but if not errors, do obtain

1.25 (ind./km^2)

hence not what I want (no parenthesis, 2 in superscript...)

Any idea about a correct syntax to get what I need ?

Best,

Patrick




Re: [R] troubles with foreign:read.dbf

2019-04-20 Thread Patrick Giraudoux
Ashes on my head and all that sort of thing...
If I were a total newbie in R I could claim some sort of excuse, but that 
is definitely not the case.
Thanks !
Patrick
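
So, once pointed at the .dbf files, the loop runs; a minimal sketch of the 
corrected version (note also that names(dbf) needs an explicit print() to be 
displayed from inside a for loop):

library(foreign)
dbf_files <- sub("\\.shp$", ".dbf", files)   # derive the .dbf names from the .shp ones
for (i in dbf_files) {
  dbf <- read.dbf(i)
  print(names(dbf))
}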


Le 20/04/2019 à 19:13, Eric Berger a écrit :
> You seem to have a typo.
> In the case that works your filename is "Mailles_2011a.dbf"
> but in the case that fails your filename is "Mailles_2011a.shp"
> (different extensions)
>
> HTH,
> Eric
>
>
> On Sat, Apr 20, 2019 at 8:00 PM Patrick Giraudoux 
>  <mailto:patrick.giraud...@univ-fcomte.fr>> wrote:
>
> Dear listers,
>
> I am using the package foreign function read.dbf and meet the
> following
> issue:
>
> i<-"Mailles_2011a.dbf"
>
> dbf<-read.dbf(i)
>
> works well BUT
>
> if I have a vector such as
>
> files <- c("Mailles_2011a.shp", "Mailles_2011p.shp",
> "Mailles_2012a.shp", "Mailles_2012p.shp", "Mailles_2013a.shp",
> "Mailles_2013p.shp", "Mailles_2014p.shp", "Mailles_2015a.shp",
> "Mailles_2015p.shp", "Mailles_2016p.shp")
>
> for(i in files) {
> dbf<-read.dbf(i)
> names(dbf)
> }
>
> gives the following error message:
>
> Error in read.dbf(i) : unable to open DBF file
>
> Same error with e.g.
>
> dbf<-read.dbf(files[1])
>
> Any idea about what's happening?
>
> Patrick
>
>
>




[R] troubles with foreign:read.dbf

2019-04-20 Thread Patrick Giraudoux
Dear listers,

I am using the read.dbf function from the foreign package and am meeting 
the following issue:

i<-"Mailles_2011a.dbf"

dbf<-read.dbf(i)

works well BUT

if I have a vector such as

files <- c("Mailles_2011a.shp", "Mailles_2011p.shp", 
"Mailles_2012a.shp", "Mailles_2012p.shp", "Mailles_2013a.shp", 
"Mailles_2013p.shp", "Mailles_2014p.shp", "Mailles_2015a.shp", 
"Mailles_2015p.shp", "Mailles_2016p.shp")

for(i in files) {
dbf<-read.dbf(i)
names(dbf)
}

gives the following error message:

Error in read.dbf(i) : unable to open DBF file

Same error with e.g.

dbf<-read.dbf(files[1])

Any idea about what's happening?

Patrick




[R] Read text files with Chinese characters

2018-12-15 Thread Patrick Giraudoux
Dear listers,

There are a number of requests about reading Chinese characters from Excel 
or text files. I had to cope with the issue and wrote a small manual 
about it. It might not be the optimal solution, but at least it works :-)

One can download the pdf at: 
https://chrono-environnement.univ-fcomte.fr/personnes/annuaire/article/giraudoux-patrick?lang=en#chinese

Cheers,

Patrick





[R] Getting a 404 error reading CRAN mirror repository

2017-06-21 Thread Patrick Giraudoux
Dear all,

I am trying to get a CRAN mirror repository working on my Ubuntu trusty 
platform, including e.g.:

deb https://mirror.ibcp.fr/pub/CRAN/bin/linux/ubuntu trusty universe

in /etc/apt/source.lst

However, on every mirror tried I get:

Err https://mirror.ibcp.fr trusty/universe amd64 Packages
   HttpError404
Err https://mirror.ibcp.fr trusty/universe i386 Packages
   HttpError404

Can someone figure out what is going wrong ?
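
One thing that may be worth checking (an assumption on my side, not something 
confirmed in this thread): the CRAN Ubuntu repositories are usually declared 
as a flat suite, i.e. the release name followed by a slash and no "universe" 
component, along the lines of:

deb https://mirror.ibcp.fr/pub/CRAN/bin/linux/ubuntu trusty/

With "trusty universe", apt looks for dists/trusty/universe on the mirror, 
which could explain the 404.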

Best,

Patrick





[R] AIC models are not all fitted to the same number of observation

2012-03-21 Thread Patrick Giraudoux

Hi,

Using lme from package nlme 3.1-103, I get a strange warning. I am 
trying to compare two models with:


library(nlme)
lmez6=lme(lepus~vulpes,random=~1|troncon/an,data=ika_z6_test)
lmez60=lme(lepus~1,random=~1|troncon/an,data=ika_z6_test)

Both have the same number of observations and groups:

lmez6
Linear mixed-effects model fit by REML
  Data: ika_z6_test
  Log-restricted-likelihood: -2267.756
  Fixed: lepus ~ vulpes
(Intercept)  vulpes
 1.35017117  0.04722338

Random effects:
 Formula: ~1 | troncon
(Intercept)
StdDev:   0.8080261

 Formula: ~1 | an %in% troncon
(Intercept)  Residual
StdDev:1.086611 0.4440076

Number of Observations: 1350
Number of Groups:
        troncon an %in% troncon 
            169            1350 


 lmez60
Linear mixed-effects model fit by REML
  Data: ika_z6_test
  Log-restricted-likelihood: -2266.869
  Fixed: lepus ~ 1
(Intercept)
   1.435569

Random effects:
 Formula: ~1 | troncon
(Intercept)
StdDev:   0.8139646

 Formula: ~1 | an %in% troncon
(Intercept)  Residual
StdDev:1.086843 0.4445815

Number of Observations: 1350
Number of Groups:
        troncon an %in% troncon 
            169            1350 

...but when I want to compare their AIC, I get:

AIC(lmez6,lmez60)
   df  AIC
lmez6   5 4545.511
lmez60  4 4541.737
Warning message:
In AIC.default(lmez6, lmez60) :
  models are not all fitted to the same number of observations


Does anybody have an explanation for this strange warning? To what extent 
might this warning limit the conclusions that can be drawn from the AIC 
comparison?


Thanks in advance,

Patrick



Re: [R] AIC models are not all fitted to the same number of observation

2012-03-21 Thread Patrick Giraudoux

Le 21/03/2012 10:56, Patrick Giraudoux a écrit :

Hi,

Using lme from package nlme 3.1-103, I get a strange warning. I 
am trying to compare two models with:


library(nlme)
lmez6=lme(lepus~vulpes,random=~1|troncon/an,data=ika_z6_test)
lmez60=lme(lepus~1,random=~1|troncon/an,data=ika_z6_test)

Both have the same number of observations and groups:

lmez6
Linear mixed-effects model fit by REML
  Data: ika_z6_test
  Log-restricted-likelihood: -2267.756
  Fixed: lepus ~ vulpes
(Intercept)  vulpes
 1.35017117  0.04722338

Random effects:
 Formula: ~1 | troncon
(Intercept)
StdDev:   0.8080261

 Formula: ~1 | an %in% troncon
(Intercept)  Residual
StdDev:1.086611 0.4440076

Number of Observations: 1350
Number of Groups:
troncon an %in% troncon
1691350


 lmez60
Linear mixed-effects model fit by REML
  Data: ika_z6_test
  Log-restricted-likelihood: -2266.869
  Fixed: lepus ~ 1
(Intercept)
   1.435569

Random effects:
 Formula: ~1 | troncon
(Intercept)
StdDev:   0.8139646

 Formula: ~1 | an %in% troncon
(Intercept)  Residual
StdDev:1.086843 0.4445815

Number of Observations: 1350
Number of Groups:
troncon an %in% troncon
1691350

...but when I want to compare their AIC, I get:

AIC(lmez6,lmez60)
   df  AIC
lmez6   5 4545.511
lmez60  4 4541.737
Warning message:
In AIC.default(lmez6, lmez60) :
  models are not all fitted to the same number of observations


Has anybody an explanation about this strange warning ? To what extent 
this warning may limit the conclusions that could be drawn from AIC 
comparison ?


Thanks in advance,

Patrick





Sorry to go on with the thread I have created, but the trouble I meet is 
above my level in stats... Actually, using an anova approach rather than 
AIC, I get a more informative message:


anova(lmez6, lmez60)
   Model df  AIC  BIClogLik   Test  L.Ratio p-value
lmez6  1  5 4545.511 4571.543 -2267.756
lmez60 2  4 4541.737 4562.566 -2266.869 1 vs 2 1.774036  0.1829
Warning message:
In anova.lme(lmez6, lmez60) :
  Fitted objects with different fixed effects. REML comparisons are not 
meaningful.


And fumbling a bit more, I found that this was an effect of fitting 
the model using REML. If fitted using ML, things go (apparently) 
smoothly:


lmez6=lme(lepus~vulpes,random=~1|troncon/an,data=ika_z6_test,method="ML")
 lmez60=lme(lepus~1,random=~1|troncon/an,data=ika_z6_test,method="ML")
 anova(lmez6, lmez60)
   Model df  AIC  BIClogLik   Test  L.Ratio p-value
lmez6  1  5 4536.406 4562.445 -2263.203
lmez60 2  4 4538.262 4559.093 -2265.131 1 vs 2 3.856102  0.0496

 AIC(lmez6,lmez60)
   df  AIC
lmez6   5 4536.406
lmez60  4 4538.262

Now I have the following problem. What I understood from Pinheiro and 
Bates's book and some forums is that ML estimations are biased to some 
extent, tending to underestimate variance parameters. So this is probably 
not to be recommended, although the results look consistent here.
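
For what it is worth, a common two-step workaround (a sketch, not a 
definitive recommendation): do the selection on the ML fits as above, then 
refit the retained model with REML so that the variance components are 
reported with the less biased estimator:

lmez6.reml <- update(lmez6, method = "REML")
summary(lmez6.reml)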


Thus, I am lost. The two models look clearly nested to me (one is just 
a null model with only the intercept to estimate, and the other has the 
intercept + one numeric independent variable; both have the same 
random effects, the same response variable and the same number of 
observations). From this point of view the warnings sound inconsistent. 
They are probably not, but this is beyond my understanding...


Any idea ?

Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] some CRAN mirrors not accessible

2011-12-11 Thread Patrick Giraudoux

Dear all,

For some weeks, it looks like the following CRAN mirrors are no longer 
accessible for package update:


http://cran.univ-lyon1.fr
http://mirror.ibcp.fr/pub/CRAN/

 update.packages(ask='graphics',checkBuilt=TRUE)
Warning: unable to access index for repository 
http://cran.univ-lyon1.fr/bin/windows/contrib/2.14


update.packages(ask='graphics',checkBuilt=TRUE)
Warning: unable to access index for repository 
http://mirror.ibcp.fr/pub/CRAN/bin/windows/contrib/2.14


Furthermore, attempting to connect via a web browser  gives

However, I don't know how to get in touch with the webmasters in charge.

Any idea about how to signal the trouble ?

PG

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] some CRAN mirrors not accessible

2011-12-11 Thread Patrick Giraudoux

On 11/12/2011 18:57, Prof Brian Ripley wrote:

First, see the status links in the first para of
http://cran.r-project.org/mirrors.html .  So it seems that the issue 
is not that the mirror is not accessible, but the part you are looking 
for is not current/available.


Yes indeed. For one of them, the R for Windows 'contrib' directory denies 
access, and the other has no 2.14/ folder.




Second, the mirror list is
http://cran.r-project.org/CRAN_mirrors.csv
and that lists the maintainers.  It is also in your R distribution, in 
directory 'doc' (but that is as old as your distribution, and mirrors 
do change).


Got it ! Thanks. Also, fumbling in the chooseCRANmirror() doc, I 
discovered that getCRANmirrors() returns a data.frame with the maintainer 
address; looking into the function, it either reads 
http://cran.r-project.org/CRAN_mirrors.csv directly or, if there is no 
connection, reads the csv file stored in the local directory 'doc'.
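
For the record, a quick way to look up who maintains a given mirror (a 
minimal sketch; it assumes the Maintainer column of CRAN_mirrors.csv, and 
the pattern "ibcp" is only for this example):

m <- getCRANmirrors()
m[grep("ibcp", m$URL), c("Name", "URL", "Maintainer")]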


Now the info about accessibility/availability has been conveyed to the 
maintainers.


Thanks again,




On 11/12/2011 17:43, Patrick Giraudoux wrote:

Dear all,

Since some weeks, look like the following CRAN mirrors are no longer
accessible for package update:

http://cran.univ-lyon1.fr
http://mirror.ibcp.fr/pub/CRAN/

 update.packages(ask='graphics',checkBuilt=TRUE)
Warning: unable to access index for repository
http://cran.univ-lyon1.fr/bin/windows/contrib/2.14

update.packages(ask='graphics',checkBuilt=TRUE)
Warning: unable to access index for repository
http://mirror.ibcp.fr/pub/CRAN/bin/windows/contrib/2.14

Furthermore, attempting to connect via a web browser gives

However, I don't know how to get in touch with the webmasters in charge.

Any idea about hos to signal the trouble ?

PG

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] symbols and legend: how to harmonize point size ?

2011-11-11 Thread Patrick Giraudoux

Hi,

I was wondering if it is possible to harmonize the output of symbols() 
and legend(), both from the graphics package.


Let us take this example:

x<-runif(10)
y<-runif(10)
z<-runif(10)

leg<-round(seq(min(z),max(z),l=4),2) # 4 values rounded up to 2 decimals 
for the legend


symbols(x,y,circles=z,inches=0.2)

legend("topright",legend=leg,pch=1,pt.cex=leg/max(leg)*2) # multiplied 
by 2 arbitrarily just to make it visible


Actually, what I want to do is to pass to pt.cex a value which would 
make the biggest circle in the legend (leg/max(leg) = 1) exactly the 
same size as the one specified in symbols (here 0.2 inches). I suppose 
this is possible using par("cin") but I cannot figure out how to do it 
properly.
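
A rough sketch of one possible conversion, under two approximations that 
would need checking: that inches=0.2 sets the size (taken here as the 
diameter) of the largest circle, and that a pch=1 circle at cex=1 is about 
0.75 * par("cin")[2] inches across:

sym.in <- 0.2                    # size given to symbols(..., inches = 0.2)
pch.in <- 0.75 * par("cin")[2]   # approximate size (inches) of pch = 1 at cex = 1
legend("topright", legend = leg, pch = 1,
       pt.cex = leg / max(leg) * sym.in / pch.in)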


Any hint appreciated,

Best,

Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to write a shapefile with projection

2011-11-05 Thread Patrick Giraudoux

Hi,

Sorry I have put such a detailed question to the list about writing a shapefile 
with projection. I realized that if I use writeOGR from rgdal and not the other 
write-shapefile functions, I can get a shapefile with projection recognized by 
ArcGIS. The command is (in case anybody wonders):

writeOGR(crest.sp, "I:\\LA_levee\\Shape", "llev_crest_pts6", driver = "ESRI 
Shapefile")

where crest.sp is a spatial point data frame with projection.

Thanks,

Monica


Indeed.

writePointsShape() does not write the projection file, but using the 
function showWKT from rgdal, you can also create one like this:

writePointsShape(crest.sp,"crest")
cat(showWKT(proj4string(crest.sp)),file="crest.prj")

Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] logistic regression where the independant variable is a ratio

2011-06-12 Thread Patrick Giraudoux

Dear Lister,

I have collected data in 6 geographical areas on the prevalence of a 
parasite in humans and in foxes. The results are expressed as numbers 
of positive and negative cases in humans and foxes in the following 
data.frame:


Pvtab <-
structure(list(posHum = c(3, 5, 3, 17, 0, 4), negHum = c(32631,
16293, 27988, 231282, 53215, 51046), posFox = c(18, 23, 18, 191,
12, 55), negFox = c(14, 24, 62, 105, 55, 43)), .Names = c("posHum",
"negHum", "posFox", "negFox"), row.names = c("zone 1", "zone 2",
"zone 3", "zone 4", "zone 5", "zone 6"), class = "data.frame")

I want to check a possible link between prevalence in humans (the 
response variable) and prevalence in foxes (the independent variable). I 
thought about a logistic regression of the form:


pvFox<-Pvtab$posFox/(Pvtab$posFox+Pvtab$negFox) # computes the 
prevalence in foxes for each area


mod0<-glm(cbind(Pvtab$posHum,Pvtab$negHum)~pvFox,family=binomial)

But in this case, the number of foxes used to compute the prevalence 
estimate in foxes (pvFox) is deliberately not taken into account in 
the model. I can hardly figure out how to do it (weighting the model 
with the square root of the number of foxes in each area ?).


Any advice about how best to model a prevalence as a response to 
another prevalence would be appreciated.


Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to use # in a rd doc in url address

2009-11-11 Thread Patrick Giraudoux
I am writing an Rd doc, and need to use # in a url address. This would make:

\url{http://www..org/myfolder/#myanchor}

Of course, I suppose this will not work because # is a special character 
starting a comment line in the Rd dialect. I did not find a similar 
example in Writing R Extensions. I am not sure about using \dQuote{a 
quotation}, nor whether I use \sQuote and \dQuote correctly. Does anyone 
know how to get the thing right ?

Patrick

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to use # in a rd doc in url address

2009-11-11 Thread Patrick Giraudoux

Daniel Malter wrote:

x=\url{http://www..org/myfolder/#myanchor};
print(x,quote=F)

Does this work for you? 
Daniel



  


I am not working in console mode (which would make your suggestion 
directly applicable), but writing an Rd documentation file (the 
documentation that comes out with the command ?myfunction). The Rd file 
has a LaTeX-style syntax and I just want to insert the url within this 
documentation. E.g.


\details{
You may want to connect to \url{http://www..org/myfolder/#myanchor}
}

I am not sure one can define a variable and print it in such a context...

Best

Patrick







-
cuncta stricte discussurus
-

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Patrick Giraudoux
Sent: Wednesday, November 11, 2009 12:15 PM
To: r-help@r-project.org
Subject: [R] how to use # in a rd doc in url address

I am writing a rd doc, and need to use # in a url adress. This would make:

\url{http://www..org/myfolder/#myanchor}

Of course, I suppose this will not work because # is a special character
starting a comment line in the rd dialect.  I did not found a similar
example in Writing R exentions. I am not sure bout using \dQuote{a
quotation}), and use \sQuote and \dQuote correctly. Does anyone know how to
get the thing right ?

Patrick

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to use # in a rd doc in url address

2009-11-11 Thread Patrick Giraudoux

Duncan Murdoch wrote:

On 11/11/2009 12:15 PM, Patrick Giraudoux wrote:
I am writing a rd doc, and need to use # in a url adress. This 
would make:


\url{http://www..org/myfolder/#myanchor}


That should work.

Of course, I suppose this will not work because # is a special 
character starting a comment line in the rd dialect. 


That's not correct.  # is only special in R code, and with \url{} the 
text is considered as verbatim text, i.e. only \, %, { and } are special.


 I did not found a similar
example in Writing R exentions. I am not sure bout using \dQuote{a 
quotation}), and use \sQuote and \dQuote correctly. Does anyone know 
how to get the thing right ?


I don't understand this question.



You answered it above... There is no reason to use special quotation, 
considering your reminder: with \url{} the text is considered as 
verbatim text.


Thanks for the focus,

Best,

Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to use # in a rd doc in url address

2009-11-11 Thread Patrick Giraudoux

Patrick Giraudoux wrote:

Duncan Murdoch wrote:

On 11/11/2009 12:15 PM, Patrick Giraudoux wrote:
I am writing a rd doc, and need to use # in a url adress. This 
would make:


\url{http://www..org/myfolder/#myanchor}


That should work.

Of course, I suppose this will not work because # is a special 
character starting a comment line in the rd dialect. 


That's not correct.  # is only special in R code, and with \url{} the 
text is considered as verbatim text, i.e. only \, %, { and } are 
special.


 I did not found a similar
example in Writing R exentions. I am not sure bout using \dQuote{a 
quotation}), and use \sQuote and \dQuote correctly. Does anyone know 
how to get the thing right ?


I don't understand this question.



You answered it above... There is no reason for using special 
quotation considering your reminder: with \url{} the text is 
considered as verbatim text


Thanks for the focus,

Best,

Patrick



Yes, I can confirm it works perfectly, without any complication... Good 
lesson. Being used to preparing oneself for the worst, one over-anticipates 
it, but occasionally it does not happen.


Cheers,

Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] rd doc truncated with R 2.10.0

2009-11-08 Thread Patrick Giraudoux

Hi,

I am routinely compiling a package and, since I have moved to R 2.10.0, 
it truncates some section texts in the doc:

With the following section in the rd file:

\details{
The function calls gpsbabel via the system. The gpsbabel program must 
be present and on the user's PATH for the function to work see 
http://www.gpsbabel.org/. The function has been tested on the 
following Garmin GPS devices: Etrex Summit, Etrex Vista Cx and GPSmap 60CSx.

}

...compiling under R 2.9.2 (rcmd build --binary --auto-zip pgirmess) I 
get this


Details:

The function calls gpsbabel via the system. The gpsbabel program
must be present and on the user's PATH for the function to work,
see http://www.gpsbabel.org/. The function has been tested on
the following Garmin GPS devices: Etrex Summit, Etrex Vista Cx and
GPSmap 60CSx.

and compiling now under R 2.10.0

Details:

The function has been tested on the following Garmin GPS devices:
Etrex Summit, Etrex Vista Cx and GPSmap 60CSx. The function calls
gpsbabel via the system. The gpsbabel program must be presen


Has anyone an explanation and a workaround ?

Best,

Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] rd doc truncated with R 2.10.0

2009-11-08 Thread Patrick Giraudoux

Duncan Murdoch wrote:

On 08/11/2009 12:07 PM, Patrick Giraudoux wrote:

Hi,

I am routinely compiling a package and since I have moved to R 
2.10.0, it troncates some section texts in the doc:


With the following section in the rd file:

\details{
 The function calls gpsbabel via the system. The gpsbabel program 
must be present and on the user's PATH for the function to work see 
http://www.gpsbabel.org/. The function has been tested on the 
following Garmin GPS devices: Etrex Summit, Etrex Vista Cx and GPSmap 
60CSx.

}

...compiling under R 2.9.2 (rcmd build --binary --auto-zip pgirmess) 
I get this


Details:

 The function calls gpsbabel via the system. The gpsbabel program
 must be present and on the user's PATH for the function to work,
 see http://www.gpsbabel.org/. The function has been tested on
 the following Garmin GPS devices: Etrex Summit, Etrex Vista Cx and
 GPSmap 60CSx.

and compiling now under R 2.10.0

Details:

 The function has been tested on the following Garmin GPS devices:
 Etrex Summit, Etrex Vista Cx and GPSmap 60CSx. The function calls
 gpsbabel via the system. The gpsbabel program must be presen


Has anyone an explanation and a workaround ?


You will need to make the complete file available to us to diagnose 
this.  Is it in pgirmess 1.4.0?  Which topic?


Duncan Murdoch


OK. Will send it offlist.

Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] What happen for Negative binomial link in Lmer

2009-11-04 Thread Patrick Giraudoux
It seems the message below and the thread have received no attention or 
answer. The output presented is quite tricky. It looks like lmer (lme4 
0.9975-10) accepted a negative binomial link with reasonable estimates, 
although it was not designed for it...

What can one think about the validity of these results ?

Best

Patrick


Message: 34
Date: Thu, 29 Oct 2009 06:51:24 -0700 (PDT)
From: E. Robardet e.robar...@gmail.com
Subject: Re: [R] What happen for Negative binomial link in Lmer
fonction?
To: r-help@r-project.org
Message-ID: 26113408.p...@talk.nabble.com
Content-Type: text/plain; charset=us-ascii


Thank you for your answers,

I have an exemple of that i was using:

m1a<-lmer(atpos~ninter+saison+milieu*zone+(1|code),family=neg.bin(0.429),method="Laplace",data=manu)
summary(m1a)
Generalized linear mixed model fit using Laplace 
Formula: atpos ~ ninter + saison + milieu * zone + (1 | code) 
   Data: manu 
 Family: Negative Binomial(log link)
   AIC   BIC logLik deviance
 125.1 147.6 -54.57    109.1

I think it was version lme4 0.9975-10.
Unfortunately, this version is no longer available on my computer...
I wonder if these old results are still valid...


Ben Bolker wrote:
  
  
  
  ROBARDET Emmanuelle wrote:
   
  
  Dear R users,
  I'm performing some GLMMs analysis with a negative binomial link.
  I already performed such analysis some months ago with the lmer()
  function but when I tried it today I encountered this problem:
  Erreur dans famType(glmFit$family) : unknown GLM family: 'Negative
  Binomial'
  
  Does anyone know if the negative binomial family has been removed from
  this function?
  I really appreciate any response.
  Emmanuelle
  
  
 
  
  I would be extremely surprised if this worked in the past; to
  the best of my knowledge the negative binomial family has
  never been implemented in lmer.  One could in principle
  do a glmmPQL fit with the negative binomial family
  (with a fixed value of the overdispersion parameter).
  glmmADMB is another option.
  Can you say which version etc. you were using???
  
  Follow-ups should probably be sent to r-sig-mixed-mod...@r-project.org
  
  
   
-- View this message in context: 
http://www.nabble.com/What-happen-for-Negative-binomial-link-in-Lmer-fonction--tp26013041p26113408.html
 
Sent from the R help mailing list archive at Nabble.com.




[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lmer and negative binomial family

2009-10-29 Thread Patrick Giraudoux
Dear listers,

One of my former students is trying to fit a model of the negative 
binomial family with lmer. In the past (two years ago), the following 
call was working well:

m1a<-lmer(mapos~ninter+saison+milieu*zone+(1|code),family=neg.bin(0.451),REML=TRUE,data=manu)

But now (R version 2.9.2 and lme4 version  0.999375-32), that gives 
(even with the library MASS loaded):

 
m1a<-lmer(mapos~ninter+saison+milieu*zone+(1|code),family=neg.bin(0.451),REML=TRUE,data=manu)
Error in famType(glmFit$family) : unknown GLM family: 'Negative Binomial'

Any idea about what happens ?

Patrick



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lmer and negative binomial family

2009-10-29 Thread Patrick Giraudoux
Patrick Giraudoux wrote:
 Dear listers,

 One of my former students is trying to fit a model of the negative 
 binomial family with lmer. In the past (two years ago), the following 
 call was working well:

 m1a-lmer(mapos~ninter+saison+milieu*zone+(1|code),family=neg.bin(0.451),REML=TRUE,data=manu)

 But now (R version 2.9.2 and lme4 version  0.999375-32), that gives 
 (even with the library MASS loaded):

  
 m1a-lmer(mapos~ninter+saison+milieu*zone+(1|code),family=neg.bin(0.451),REML=TRUE,data=manu)
 Error in famType(glmFit$family) : unknown GLM family: 'Negative Binomial'

 Any idea about what happens ?

 Patrick





Oops. Sorry to reply to myself, but the answer was here:

http://www.nabble.com/What-happen-for-Negative-binomial-link-in-Lmer-fonction--td26013041.html
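
For the record, a minimal sketch of the glmmPQL alternative suggested in 
that thread (negative binomial family with the dispersion parameter held 
fixed), reusing the formula, data and theta from the call above:

library(MASS)   # provides glmmPQL() and negative.binomial()
m1a.pql <- glmmPQL(mapos ~ ninter + saison + milieu * zone,
                   random = ~ 1 | code,
                   family = negative.binomial(theta = 0.451),
                   data = manu)
summary(m1a.pql)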



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] decimal troubles ?

2009-05-12 Thread Patrick Giraudoux

Dear all,

I have some trouble with the number of decimals in R (currently R 
2.9.0). For instance:


 options()$digits
[1] 3

lets me hope that I will get three digits where useful when a number is 
printed. BUT:


 44.25+31.1+50
[1] 125

No way to get the right result 125.35

Can anybody tell me what happens ?

Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] decimal troubles ?

2009-05-12 Thread Patrick Giraudoux
Shame on me... I confused digits and decimals.

Thanks anyway for bringing me back to the English basics...

Patrick
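
For the record, a minimal illustration of the point (only printing is 
affected, not the stored value):

options(digits = 3)
x <- 44.25 + 31.1 + 50
x                     # prints 125
print(x, digits = 6)  # prints 125.35: the full value is still stored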


Peter Alspach wrote:
 Tena koe Patrick

 If you want more than three digits, change the options:

 options(digits=7)
 44.25+31.1+50
 [1] 125.35

 HTH 

 Peter Alspach 

   
 -Original Message-
 From: r-help-boun...@r-project.org 
 [mailto:r-help-boun...@r-project.org] On Behalf Of Patrick Giraudoux
 Sent: Tuesday, 12 May 2009 8:08 p.m.
 To: r-help@r-project.org
 Subject: [R] decimal troubles ?

 Dear all,

 I have some trouble with the number of decimals in R 
 (currently R 2.9.0). For instance:

   options()$digits
 [1] 3

 let me hope that I will get three digits where useful when a 
 number is printed. BUT:

   44.25+31.1+50
 [1] 125

 No way to get the right result 125.35

 Can anybody tell me what's happens ?

 Patrick

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

 



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nls, convergence and starting values

2009-03-28 Thread Patrick Giraudoux

Patrick Burns wrote:

Patrick Giraudoux wrote:

Bert Gunter wrote:
Based on a simple scatterplot of pourcma vs  transat, a 4 parameter 
logistic
looks like wild overfitting, and that may be the source of your 
problems.

Given the huge scatter, a straight line is about as much as would seem
sensible. I think this falls into the Why ever would you want to do 
such a

thing? category.

-- Bert
  


Right, well, the general idea was just to show that the straight 
line was the best model indeed (in the other data sets, with model 
comparison, the logistic one was clearly shown to be the best... ). 
Can the fact that convergence cannot be obtained be an acceptable and 
sufficient reason to select the null model (the straight line) ?


It is my experience that convergence problems are
often encountered when the model makes little sense.
I'm not so sure that non-convergence on its own is
a good reason to reject  the model.  That is, to answer
your specific question, I think it is acceptable but not
sufficient.

Patrick Burns
patr...@burns-stat.com
+44 (0)20 8525 0696
http://www.burns-stat.com
(home of The R Inferno and A Guide for the Unwilling S User) 



OK. Thanks for this opinion. Actually I shared it intuitively, but facing 
such a situation for the first time, I was quite uncomfortable making a 
decision (and still am). We are touching on epistemology... and maybe 
getting a bit far from purely technical matters, thus from R-list issues.

Thanks again, anyway,

Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] nls, convergence and starting values

2009-03-27 Thread Patrick Giraudoux

"In non-linear modelling, finding appropriate starting values is
something like an art" (maybe from somewhere in Crawley, 2007). Here
a colleague and I just want to compare different response models to a
null model. This has worked OK for almost all the other data sets except
this one (dumped below). Whatever our trials and algorithms, even when
subsetting the data (to check whether some singular point was the cause of
the mess), we do not reach convergence... or fail with singular
gradients (?) etc...

eg:

nls(pourcma~SSlogis(transat, Asym, xmid, scal), start=c(Asym=30,
xmid=0.07, scal=0.02),data=bdd, weights=sqrt(nbfeces),trace=T,alg="plinear")

Has anyone a hint about an alternative approach to fit a model ? Or an idea
of how to get evidence that such a model cannot be fitted to these data ?
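
One possible diagnostic sketch (assuming the bdd data frame dumped below):
ask the self-starting routine what it would propose, and try the bounded
"port" algorithm; both calls are wrapped in try() since they may well fail
too:

# what would the self-start routine propose on these data ?
init <- try(getInitial(pourcma ~ SSlogis(transat, Asym, xmid, scal), data = bdd))
init
# bounded Gauss-Newton variant, with deliberately wide bounds
fit <- try(nls(pourcma ~ SSlogis(transat, Asym, xmid, scal),
               data = bdd, weights = sqrt(nbfeces),
               start = c(Asym = 30, xmid = 0.07, scal = 0.02),
               algorithm = "port",
               lower = c(Asym = 0, xmid = -1, scal = 1e-4),
               upper = c(Asym = 100, xmid = 1, scal = 1)))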


bdd <-
structure(list(transat = c(0.0697, 0.13079, 0.314265, 0.241613,
0.039319, 0, 0, 0, 0, 0, 0.0805, 0.41, 0.30585, 0.27465, 0.06085,
0.09114, 0.05766, 0.036983, 0.093186, 0.046624, 0, 0, 0, 0, 0.000616,
0, 0.0025, 0.0325, 0.03125, 0.04599, 0.38398, 0.524505, 0.450337,
0.061831, 0.133926, 0.091806, 0.00928, 0.25114, 0.3074, 0.431056,
0.026158), transma = c(0.04141, 0.01599, 0.101803, 0.002378,
0.039319, 0.00472459016393443, 0.0031016393442623, 0.000178524590163934,
0.00255704918032787, 0.000346229508196721, 0.0665, 0.012, 0.0553,
0.0045, 0.0056, 0.00155, 0.00124, 0.011966, 0.001736, 0.004712,
3.62903225806452e-05, 9.79838709677419e-05, 2.20161290322581e-05,
0.00462, 0.01006444, 0.00213, 0.046, 0.005,
0.01195, 0.07154, 0.08468, 0.141182, 0.086578, 0.027959, 0.003159,
0.003081, 0.13862, 0.00754, 0.078648, 0.068324, 0.025288), nbfeces = c(22L,
26L, 43L, 30L, 35L, 25L, 21L, 36L, 34L, 37L, 23L, 32L, 40L, 35L,
30L, 16L, 25L, 37L, 37L, 34L, 31L, 35L, 41L, 31L, 34L, 39L, 5L,
14L, 31L, 13L, 21L, 34L, 32L, 36L, 36L, 40L, 31L, 35L, 39L, 29L,
32L), pourcma = c(50, 34.6153846153846, 27.9069767441860, 43.3,
65.7142857142857, 32, 28.5714285714286, 22.2, 50,
10.8108108108108, 26.0869565217391, 40.625, 12.5, 22.8571428571429,
43.3, 6.25, 4, 10.8108108108108, 16.2162162162162,
23.5294117647059, 25.8064516129032, 45.7142857142857, 39.0243902439024,
25.8064516129032, 41.7, 27.5, 20, 14.2857142857143,
22.5806451612903, 15.3846153846154, 38.0952380952381, 17.6470588235294,
78.125, 61.1, 25, 37.5, 22.5806451612903, 40, 17.9487179487179,
41.3793103448276, 50), pourcat = c(22.7272727272727, 30.7692307692308,
41.8604651162791, 56.7, 5.71428571428571, 0, 0, 0,
0, 0, 30.4347826086957, 15.625, 45, 74.2857142857143, 13.3,
50, 12, 18.9189189189189, 27.0270270270270, 20.5882352941176,
0, 0, 0, 0, 0, 5, 40, 0, 0, 7.69230769230769, 9.52380952380952,
38.2352941176471, 59.375, 5.56, 41.7,
42.5, 9.67741935483871, 14.2857142857143, 51.2820512820513,
79.3103448275862,
6.25)), .Names = c("transat", "transma", "nbfeces", "pourcma",
"pourcat"), class = "data.frame", row.names = c(NA, -41L))

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nls, convergence and starting values

2009-03-27 Thread Patrick Giraudoux

Bert Gunter wrote:

Based on a simple scatterplot of pourcma vs  transat, a 4 parameter logistic
looks like wild overfitting, and that may be the source of your problems.
Given the huge scatter, a straight line is about as much as would seem
sensible. I think this falls into the Why ever would you want to do such a
thing? category.

-- Bert
  


Right, well, the general idea was just to show that the straight line 
was indeed the best model (in the other data sets, with model 
comparison, the logistic one was clearly shown to be the best...). Can 
the fact that convergence cannot be obtained be an acceptable and 
sufficient reason to select the null model (the straight line) ?


Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Comparison of age categories using contrasts

2009-02-16 Thread Patrick Giraudoux

Dear listers,

I would like to compare the levels of a factor with 8 age categories
(0,10] (10,20] (20,30] (30,40] (40,50] (50,60] (60,70] (70,90] (however,
the factor has not been ordered yet). The default in glm is
contr.treatment (for unordered factors), and that leads to comparing each
level to the first one. I would rather prefer to compare the 2nd to the
1st, the 3rd to the 2nd, the 4th to the 3rd, etc... My understanding is
that contr.poly may do the trick, e.g. specified like this:

mod3<-glm(AE~agecat, family=binomial,data=qinghai2,
contrasts=list(agecat=contr.poly))

but I am not sure this is right.

I would be grateful if a true statistician could confirm or fire me... and,
before definitively firing me, tell me how to manage this...

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Comparison of age categories using contrasts

2009-02-16 Thread Patrick Giraudoux
Greg Snow wrote:
 One approach is to create your own contrasts matrix:

   
 mycmat <- diag(8)
 mycmat[ row(mycmat) == col(mycmat) + 1 ] <- -1
 mycmati <- solve(mycmat)
 contrasts(agefactor) <- mycmati[,-1]
 

 Now when you use agefactor, the intercept will be the first age group and the 
 slopes will be the differences between the pairs of groups (make sure that 
 the order of the levels of agefactor is correct).

 The difference between this method and the contr.sdif function in MASS is how 
 the intercept will end up being interpreted (and the dimnames).

 Hope this helps,

   

Actually, this is exactly what I needed, including the reference to contr.sdif 
in MASS, which I had not spotted before (although I am a faithful reader of 
the yellow book... so many things still escape me). Again, thanks a lot.
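
For the record, a minimal sketch of the contr.sdif alternative mentioned 
above (successive-difference contrasts: each coefficient is level k+1 
minus level k), reusing the call from the original post:

library(MASS)
mod3 <- glm(AE ~ agecat, family = binomial, data = qinghai2,
            contrasts = list(agecat = contr.sdif))
summary(mod3)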

Patrick

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Comparison of age categories using contrasts

2009-02-15 Thread Patrick Giraudoux

Dear listers,

I would like to compare the levels of a factor with 8 age categories 
(0,10] (10,20] (20,30] (30,40] (40,50] (50,60] (60,70] (70,90] (however, 
the factor has not been ordered yet). The default in glm is 
contr.treatment (for unordered factors), and that leads to comparing each 
level to the first one. I would rather prefer to compare the 2nd to the 
1st, the 3rd to the 2nd, the 4th to the 3rd, etc... My understanding is 
that contr.poly may do the trick, e.g. specified like this:


mod3<-glm(AE~agecat, family=binomial,data=qinghai2, 
contrasts=list(agecat=contr.poly))


but I am not sure this is right.

I would be grateful if a true statistician could confirm or fire me... and, 
before definitively firing me, tell me how to manage this...


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Coordinate systems for geostatistics in R (imicola)

2008-08-23 Thread Patrick Giraudoux
 If you use the spatial objects provided by the 
sp-package (http://cran.r-project.org/web/packages/sp/vignettes/sp.pdf) 
you transform your data to other projections using the spTransform package.


Thus you will need the rgdal package as a complement (it actually includes 
spTransform). This function is extremely convenient: you can manage 
coordinate transformations extremely easily for common systems (WGS84, 
UTM) within the R environment.
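
A minimal sketch (the object pts and the target UTM zone are hypothetical, 
just for illustration):

library(rgdal)   # loads sp and provides spTransform()
# pts: a SpatialPointsDataFrame with longitude/latitude coordinates
proj4string(pts) <- CRS("+proj=longlat +datum=WGS84")
pts.utm <- spTransform(pts, CRS("+proj=utm +zone=31 +datum=WGS84"))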




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] svIDE and Tinn-R

2008-05-19 Thread Patrick Giraudoux
Probably an old issue, since it was already raised one year ago in this link: 
http://tolstoy.newcastle.edu.au/R/e2/help/07/04/15738.html


but I have recently re-installed Tinn-R with R 2.7.0 and forgot to insert

options(warn=-1)
library(svIDE)
...
options(warn=0)

in Rprofile.site... and could see that we still get the same warnings when 
launching R:


Warning messages:
1: '\A' is an unrecognized escape in a character string
2: unrecognized escape removed from ";for Options\AutoIndent: 0=Off, 
1=follow language scoping and 2=copy from previous line\n"
3: In grep(paste("[{]TclEval ", topic, "[}]", sep = ""), 
tclvalue(.Tcl("dde services TclEval {}")),  :

 argument 'useBytes = TRUE' will be ignored

I wonder to what extent this may be a problem ?

Patrick


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] AIC and anova, lme

2008-02-26 Thread Patrick Giraudoux

Dear listers,

Here we have a strange result we can hardly cope with. We want to 
compare a null mixed model with a mixed model with one independent 
variable.


 lmmedt1<-lme(mediane~1, random=~1|site, na.action=na.omit, data=bdd2)
 lmmedt9<-lme(mediane~log(0.0001+transat), random=~1|site, 
na.action=na.omit, data=bdd2)


Using the Akaike Criterion and selMod of the package pgirmess gives the 
following output:


 selMod(list(lmmedt1,lmmedt9))
                 model       LL K  N2K       AIC  deltAIC  w_i      AICc deltAICc w_ic
2 log(1e-04 + transat) 44.63758 4  7.5 -81.27516     0.00 0.65 -79.67516     0.00 0.57
1                    1 43.02205 3 10.0 -80.04410 1.231069 0.35 -79.12102 0.554146 0.43


The usual conclusion would be that the two models are equivalent and to 
keep the null model for parsimony (!).


However, an anova shows that the variable 'log(1e-04 + transat)' is 
significantly different from 0 in model 2 (lmmedt9)


 anova(lmmedt9)
                     numDF denDF   F-value p-value
(Intercept)              1    20 289.43109  <.0001
log(1e-04 + transat)     1    20  31.18446  <.0001

Has anyone an opinion about what looks like a paradox here ?

Patrick



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] AIC and anova, lme

2008-02-26 Thread Patrick Giraudoux

ian white wrote:

Patrick,

The likelihoods of two models fitted using REML cannot be compared
unless the fixed effects are the same in the two models.  
  
Many thanks for this reminder. Shame on me: it reminds me that this 
subject may already have been largely discussed on this list. Now I can 
search the archives specifically for the REML issue...
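
For the record, a minimal sketch of the usual way around this: refit both 
models by maximum likelihood before comparing them, e.g.

lmmedt1.ml <- update(lmmedt1, method = "ML")
lmmedt9.ml <- update(lmmedt9, method = "ML")
anova(lmmedt1.ml, lmmedt9.ml)  # AIC and the likelihood ratio now refer to comparable fits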


All the best,

Patrick


On Tue, 2008-02-26 at 14:38 +0100, Patrick Giraudoux wrote:
  

Dear listers,

Here we have a strange result we can hardly cope with. We want to 
compare a null mixed model with a mixed model with one independent 
variable.


  lmmedt1-lme(mediane~1, random=~1|site, na.action=na.omit, data=bdd2)
  lmmedt9-lme(mediane~log(0.0001+transat), random=~1|site, 
na.action=na.omit, data=bdd2)


Using the Akaike Criterion and selMod of the package pgirmess gives the 
following output:


  selMod(list(lmmedt1,lmmedt9))
 model   LL K  N2K   AIC  deltAIC  w_i  AICc 
deltAICc w_ic
2 log(1e-04 + transat) 44.63758 4  7.5 -81.27516 0.00 0.65 -79.67516 
0.00 0.57
11 43.02205 3 10.0 -80.04410 1.231069 0.35 -79.12102 
0.554146 0.43


The usual conclusion would be that the two models are equivalent and to 
keep the null model for parsimony (!).


However, an anova shows that the variable 'log(1e-04 + transat)' is 
significantly different from 0 in model 2 (lmmedt9)


  anova(lmmedt9)
 numDF denDF   F-value p-value
(Intercept)  120 289.43109  .0001
log(1e-04 + transat) 120  31.18446  .0001

Has anyone an opinion about what looks like a paradox here ?

Patrick



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.





  


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] random location in polygons sp spsample splancs csr

2008-02-17 Thread Patrick Giraudoux
Dear all,

I had to place points at random, one in each of a larger number of 
polygons (actually in objects of class 'SpatialPolygonsDataFrame', see the 
sp library), and first tried to do it using spsample (from sp). 
Surprisingly, every 5-15 trials, the output was a NULL value. The doc 
says that 'this may occur when trying to hit a small and awkwardly 
shaped polygon in a large bounding box with a small number of points', 
but in my case the shapes were not really awkward, the bounding box was 
just the smallest rectangle including the shape, and the number of 
points was only 1 in each polygon.

Thus I tried csr (from splancs) after having extracted the polygon 
coordinates of each shape from the Spatial object, and everything went 
smoothly, with a successful hit on every trial.

Has anybody (probably Edzer and/or Roger...) an idea why splancs seems 
to outperform spsample here ?

Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] random location in polygons sp spsample splancs csr

2008-02-17 Thread Patrick Giraudoux
Thanks for those detailed explanations and the time taken to write them.
  The spsample methods for polygons have an iter= argument that can be 
 used to make then try harder, did you try it (with what values - the 
 help page senctence you quote is from the iter= description)?
Yes sure, I went up to 10, but no success.

 Could you provide an example with a set.seed() value that does what 
 you say, or at least the code you used?
The easiest way is to send the data and the script off list. I will do it.

 Did you try asking for multiple points and then choosing a single 
 point at random? This would be equivalent to increasing iter while 
 asking for a single point.
I did not try this one
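
(For the record, a rough sketch of that suggestion, with arbitrary 
function name and number of candidate points; spd stands for a 
hypothetical SpatialPolygonsDataFrame:)

library(sp)
one.random.point <- function(poly, n.cand = 50) {
    # oversample within one Polygons object, then keep a single point
    pts <- spsample(poly, n = n.cand, type = "random", iter = 10)
    if (is.null(pts)) return(NULL)            # may still fail on some shapes
    pts[sample(nrow(coordinates(pts)), 1), ]
}
rand.pts <- lapply(spd@polygons, one.random.point)  # one point per polygon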

Actually, I found an easy way out with csr() in splancs, and did not 
fight too much with spsample. My question on the list was just for 
general information.
 PS. Perhaps R-sig-geo is a more appropriate list?
I was wondering too... and chose r-help because I thought the question 
was of enough 'general' interest. This is debatable indeed...

Thank you anyway for your answer, and see you in a few minutes off list...

Cheers,

Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] change in I(x) ? in R 2.6.0

2007-10-25 Thread Patrick Giraudoux
Dear listers,

I am trying to use an old script which was working well in the previous 
R version. It looks like it no longer works in R 2.6.0. I have a model 
of the form:

glm(nath2$Positif ~ n + yearday + x + y + I(x^2) + I(y^2) + yearday:x + 
yearday:y, family = poisson, data = nath2)

and want to get predicts from a data.frame whose column names are:

  names(data1)
[1] "x"       "y"       "yearday" "n"

x and y are geographical coordinates, yearday is equal to 120 and n 
equals 100; they are all numeric:

  sapply(data1,is.numeric)
x y yearday n
TRUE TRUE TRUE TRUE

when I use the function predict:

  z1<-predict(mod1b,newdata=data1,type="response")
Error: variables ‘I(x^2)’, ‘I(y^2)’ were specified with different types 
from the fit

Any idea about what goes wrong ?
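
One thing worth checking (a sketch only): predict() compares the classes 
of the variables in newdata with those recorded at fitting time, so the 
message usually means that x or y is not a plain numeric vector in one of 
the two data frames, e.g.

sapply(nath2[, c("x", "y")], class)   # classes used when fitting mod1b
sapply(data1[, c("x", "y")], class)   # classes in the prediction grid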

Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R-2.6.0 and RWinEdt

2007-10-08 Thread Patrick Giraudoux
Installed and tested right now. Works fine, no problem.

Thanks,

Patrick

Uwe Ligges wrote:


 Patrick Giraudoux wrote:
 Dear Listers,

 I have just installed R-2.6.0 and the RWinEdt package 1.7-6 under
 Windows XP.

 wait for 1.7-7 which should appear on CRAN real soon now.

 Uwe


 The R-WinEdt menu well appears at launching (the command
 library(RWinEdt) is in .Rprofile), but  WinEdt is NOT started
 automatically (this was not the case in the earlier versions of R). When
 WinEdt is started by hand (eg double-click on a RWinEdt alias after R
 launching), syntax highlighting and connexion to R works well.

 Any idea about how to fix this and get WinEdt automatically started when
 library(RWinEdt) is called?

 Patrick







__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R-2.6.0 and RWinEdt

2007-10-06 Thread Patrick Giraudoux
Dear Listers,

I have just installed R-2.6.0 and the RWinEdt package 1.7-6 under
Windows XP.

The R-WinEdt menu appears correctly at launch (the command
library(RWinEdt) is in .Rprofile), but WinEdt is NOT started
automatically (which was not the case with earlier versions of R). When
WinEdt is started by hand (e.g. by double-clicking on a RWinEdt alias after R
is launched), syntax highlighting and the connection to R work well.

Any idea about how to fix this and get WinEdt automatically started when
library(RWinEdt) is called?

Patrick

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.