[R] Error while installing 'netmodels'

2010-02-16 Thread anupam sinha
Dear all,
  I am trying to install a package named 'netmodels' but keep
getting the following error:


> install.packages("netmodels")
Warning in install.packages("netmodels") :
  argument 'lib' is missing: using '/usr/lib64/R/library'
also installing the dependency ‘VGAM’

trying URL 'http://cran.stat.ucla.edu/src/contrib/VGAM_0.7-10.tar.gz'
Content type 'application/x-tar' length 1608488 bytes (1.5 Mb)
opened URL
==
downloaded 1.5 Mb

trying URL 'http://cran.stat.ucla.edu/src/contrib/netmodels_0.2.tar.gz'
Content type 'application/x-tar' length 20049 bytes (19 Kb)
opened URL
==
downloaded 19 Kb

* Installing *source* package ‘VGAM’ ...
** libs
gfortran   -fpic  -O2 -g -c cqof.f -o cqof.o
gfortran: error trying to exec 'f951': execvp: No such file or directory
make: *** [cqof.o] Error 1
ERROR: compilation failed for package ‘VGAM’
* Removing ‘/usr/lib64/R/library/VGAM’
* Installing *source* package ‘netmodels’ ...
** R
** data
** preparing package for lazy loading
Error : package 'VGAM' required by 'netmodels' could not be found
ERROR: lazy loading failed for package ‘netmodels’
* Removing ‘/usr/lib64/R/library/netmodels’

The downloaded packages are in
‘/tmp/RtmpnUr2jb/downloaded_packages’
Updating HTML index of packages in '.Library'
Warning messages:
1: In install.packages("netmodels") :
  installation of package 'VGAM' had non-zero exit status
2: In install.packages("netmodels") :
  installation of package 'netmodels' had non-zero exit status

Here's my sessionInfo(). Can anyone help me out with this?


> sessionInfo()
R version 2.9.0 (2009-04-17)
x86_64-redhat-linux-gnu

locale:
LC_CTYPE=en_US.UTF-8;LC_NUMERIC=C;LC_TIME=en_US.UTF-8;LC_COLLATE=en_US.UTF-8;LC_MONETARY=C;LC_MESSAGES=en_US.UTF-8;LC_PAPER=en_US.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US.UTF-8;LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

loaded via a namespace (and not attached):
[1] tcltk_2.9.0 tools_2.9.0
> Detecting remotely related proteins by their
Error: unexpected symbol in "Detecting remotely"
> interactions and sequence similarity
Error: unexpected symbol in "interactions and"



Regards,

Anupam Sinha

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] sql query variable

2010-02-16 Thread Dieter Menne


RagingJim wrote:
> 
> This is the very last thing I need to make everything work properly. My
> query:
> sqlQuery(conn, "select to_char(lsd,'-mm') as yr,ttl_mo_prcp from
> mo_rains where stn_num=023000")
> 
> Is there a way to make the stn_num in the query a variable, i.e. make it so
> that whenever my script is run, the user must choose and input the station
> number?
> 


You could use tcltk to input the stn_num, and generate the query string with
paste().
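
For instance, something along these lines (a sketch only -- 'conn' is an open
RODBC connection and the query text is copied from the post; the station
number would come from the tcltk dialog):

sn  <- 23090                      # e.g. the value typed into the dialog
qry <- paste("select to_char(lsd,'-mm') as yr,ttl_mo_prcp from",
             "mo_rains where stn_num =", sn)
x   <- sqlQuery(conn, qry)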

Dieter

-- 
View this message in context: 
http://n4.nabble.com/sql-query-variable-tp1558189p1558308.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to compute heteroskedasticity-robust LM statistic?

2010-02-16 Thread roach

Is anyone using Wooldridge's Introductory Econometrics: A Modern Approach?
For example 8.3 in the 3rd edition, I use the following method to compute the
heteroskedasticity-robust LM statistic:

library(car)
linear.hypothesis(model,c("avgsen=0","I(avgsen^2)=0")
,test="Chisq",vcov=hccm(model,type="hc0")) 
 
The result is different from the one given in the text. The p-value also
differs greatly from the one obtained using the heteroskedasticity-robust F
statistic. Any thoughts on this?
-- 
View this message in context: 
http://n4.nabble.com/How-to-compute-heteroskedasticity-robust-LM-statistic-tp1558305p1558305.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] error message when downloading packages using the OS X shell

2010-02-16 Thread Jessica Joganic
Thank you all for your help thus far. I have taken care of the destdir
issue (you do need to specify a destdir, something I never had to do
with OS X 10.4). However, a delightful new message has popped up. Has
anyone seen this one before? I've tried both an R CMD INSTALL command
from the shell and the download.packages() command within the R
environment but neither works. In the terminal I get this error:

$ R CMD INSTALL genetics_1.3.4.tgz
Warning: invalid package ‘genetics_1.3.4.tgz’

while in R I get this error:

> download.packages(genetics,destdir=.libPaths())
Error in unique(pkgs) : object 'genetics' not found

The errors are not worded the same but they appear to be suggesting
the same issue. I've tried the names of 6 or so different packages
just to see what would happen but it's always the same message. And,
in answer to Ista, I am using download.packages() instead of
install.packages() because they are both giving me the same error
message but at least download.packages() gives me some response by
loading a Tcl/Tk interface to select my CRAN mirror so I at least know
it's working.
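
For reference, here is a minimal sketch of the calling convention both
functions expect from within R -- the package name must be a quoted character
string, and download.packages() needs a writable destdir (the path below is
only an assumption):

download.packages("genetics", destdir = "~/Downloads")
install.packages("genetics")       # the more usual route: downloads and installs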

Thank you all!
Jessica

On Mon, Feb 15, 2010 at 11:11 PM, Ista Zahn  wrote:
>
> Hi Jessica,
> As far as I know that's the expected behavior (at least it's not OS
> X-specific: I get the same on OpenSuse 11.2). What happens if you do
>
> download.packages("ape", destdir = "~/Desktop/")
>
> ?
>
> Also, out of curiosity, why are you using download.packages() instead
> of install.packages()?
>
> Best,
> Ista
>
> On Mon, Feb 15, 2010 at 10:51 PM, Jessica Joganic  wrote:
> > I apologize for not including my entire script. What I typed into the shell
> > was:
> >
> > download.packages(ape)
> >
> > to which R responded with a Tcl/Tk interface allowing me to set my CRAN.
> > After I did so it proceeded to spit out the following error:
> >
> > Loading Tcl/Tk interface ... done
> > Error in file.info(x) : argument "destdir" is
> > missing, with no default
> >
> > Let me know if that doesn't clarify my query. If this ends up being a
> > product of my preferring the terminal over the R.app then I'll post to the
> > Mac people and see what they have to say.
> >
> > Jessica
> >
> >
> > On Mon, Feb 15, 2010 at 9:06 PM, David Winsemius 
> > wrote:
> >
> >>
> >> On Feb 15, 2010, at 8:58 PM, Jessica Joganic wrote:
> >>
> >>  Hi Fellow R Users,
> >>> I recently upgraded to Mac OS X 10.5 (Snow Leopard) and had some issues
> >>> downloading and running R 2.10.1. I fixed the tcl/tk problem I was
> >>> originally having but it was replaced with another. I run R out of the
> >>> shell
> >>> (terminal) and when I ask it to download.packages() it gives me the
> >>> following message:
> >>>
> >>> Loading Tcl/Tk interface ... done
> >>> Error in file.info(x) : argument "destdir" is missing, with no default
> >>>
> >>
> >> I am a bit puzzled by that function call. I would have expected there to be
> >> some arguments in the call. What were you expecting to be the results?
> >>  After I run that function, I get a window of CRAN sites to choose and 
> >> after
> >> choosing the CMU site I get:
> >>
> >> > download.packages()
> >> --- Please select a CRAN mirror for use in this session ---
> >>
> >> Loading Tcl/Tk interface ... done
> >> Error in file.info(x) : argument "destdir" is missing, with no default
> >>
> >>  I searched around on the help archives and various online message boards
> >>> and
> >>> the most I could discern was that the directory where the library is
> >>> located
> >>> isn't being recognized by R (it should recognize it by default). I tried
> >>> setting the "destdir" argument manually to no avail. I am able to
> >>> successfully download and install packages with no problem if I run the
> >>> actual R program out of the Applications folder,
> >>>
> >>
> >> Heh, the thing you are calling the "actual R program" is probably the
> >> R-GUI, typically named R.app or R64.app, and it is a front-end to the R
> >> executable.
> >>
> >>
> >>  however I prefer to use the
> >>> shell. I did find one mention of a recognized issue with OS X 10.5 and R
> >>> 2.10.1 conflicting when trying to download packages, which results in the
> >>> library pathway being broken. The only problem is the fix for this problem
> >>>
> >>
> >>  given in the R manual doesn't work. Has anyone had a similar problem or
> >>> have
> >>> any ideas as to how to solve this?
> >>>
> >>
> >> It happens for me as well, but I guess I don't see it as a problem, since I
> >> did not offer a sensible set of arguments to the function. (Plus, I always
> >> use the GUI despite Rolf Turner's efforts to shame me into being a terminal
> >> dude.) You may want to post follow-ups to the Mac-SIG mailing list where
> >> this would be a more appropriate question:
> >>
> >> https://stat.ethz.ch/mailman/listinfo/r-sig-mac
> >>
> >>>
> >>>  --
> >>
> >> David Winsemius, MD
> >> Heritage Laboratories
> >> West Hartford, CT
> >>
> >>
> >
> 

Re: [R] Bivariate Uniform distribution

2010-02-16 Thread Daniel Nordlund
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> On Behalf Of li li
> Sent: Tuesday, February 16, 2010 6:46 PM
> To: r-help
> Subject: [R] Bivariate Uniform distribution
> 
> Hi all,
>Is  there a function in R to calculate the probability, or quantile, or
> generate random numbers,  and so on for
> bivariate uniform distribution, like for the bivariate normal
> distribution?
>Thanks!
> Hannah
> 

Hannah,

It might help to know what you are trying to do, so that better advice can be 
given.  A bivariate uniform distribution is simply pairs of uniform random 
numbers. The density is constant, so it is easy to get to the other quantities 
that you want based on that.  So what is it that you are actually wanting to do?
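
For the independent case described above, everything can be done with base R;
a small sketch (the function name is just for illustration):

n  <- 1000
xy <- cbind(runif(n), runif(n))     # random pairs on the unit square
# joint CDF P(X <= x, Y <= y) when the two U(0,1) margins are independent:
pbiunif <- function(x, y) pmin(pmax(x, 0), 1) * pmin(pmax(y, 0), 1)
pbiunif(0.3, 0.5)                   # 0.15; the density is 1 on the unit square

Correlated uniform margins are a different matter -- that is copula territory,
for which a dedicated package would be needed.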

Dan

Daniel Nordlund
Bothell, WA USA
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Using text put into a dialog box

2010-02-16 Thread RagingJim

Thanks mate, awesome :)

Everything is now working as ordered. Thanks to everyone who has chipped in
and helped out this past week, much appreciated!!!
-- 
View this message in context: 
http://n4.nabble.com/Using-text-put-into-a-dialog-box-tp1555761p1558278.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Using text put into a dialog box

2010-02-16 Thread Greg Snow
This function will pop up the window, get the number, and return it:

library(tcltk)   # the tk* functions below come from the tcltk package

tmpfun <- function() {
    tt <- tktoplevel()
    Name <- tclVar("")
    entry.Name <- tkentry(tt, width="20", textvariable=Name)
    tkgrid(tklabel(tt, text="Please enter site number."))
    tkgrid(entry.Name)

    OnOK <- function() {
        NameVal <- tclvalue(Name)
        use.this <- NameVal
        tkdestroy(tt)
    }

    OK.but <- tkbutton(tt, text="   OK   ", command=OnOK)
    tkbind(entry.Name, "<Return>", OnOK)   # pressing Enter also triggers OnOK
    tkgrid(OK.but)

    tkwait.window(tt)   # wait until the window is closed

    return(as.numeric(tclvalue(Name)))
}

Run it like:

> tmp <- tmpfun()

And after clicking on OK tmp will have the number in it.

The database experts will need to tell you how to include that into your query 
(either using the above function, or putting the query in place of the return 
and returning the result).
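
One way of wiring the two together might look like this (a sketch only --
'conn' and the query text are taken from the earlier posts in this thread):

stn <- tmpfun()                     # the number typed into the dialog
qry <- paste("select to_char(lsd,'-mm') as yr,ttl_mo_prcp from",
             "mo_rains where stn_num =", stn)
x   <- sqlQuery(conn, qry)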

Hope this helps,




-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
> project.org] On Behalf Of RagingJim
> Sent: Tuesday, February 16, 2010 5:31 PM
> To: r-help@r-project.org
> Subject: Re: [R] Using text put into a dialog box
> 
> 
> Thanks Greg, the problem is I have no idea how to return and use what I
> have
> typed into the pop up. Add to that the complication that with this
> query
> 
> x<-sqlQuery(conn, "select to_char(lsd,'-mm') as yr,ttl_mo_prcp from
> mo_rains where stn_num=23090")
> 
> if I put anything other than a number into stn_num=... then it does not
> run.
> eg if I just do sn=23090 and then use that in the query
> 
> sqlQuery(conn, "select to_char(lsd,'-mm') as yr,ttl_mo_prcp from
> mo_rains where stn_num=sn")
> 
> I get the following error.
> 
> [1] "[RODBC] ERROR: Could not SQLExecDirect"
> [2] "42S22 904 [Oracle][ODBC][Ora]ORA-00904: \"SN\": invalid
> identifier\n"
> 
> So my question is in two parts. Firstly, how do I return the number
> written
> in the popup, and then secondly how do I actually put it into the
> query?
> 
> Thanks again
> --
> View this message in context: http://n4.nabble.com/Using-text-put-into-
> a-dialog-box-tp1555761p1558124.html
> Sent from the R help mailing list archive at Nabble.com.
> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] converting character vector "hh:mm" to chron or strptime 24 clock time vectors

2010-02-16 Thread Gabor Grothendieck
Try this (and note that times must be less than 24 hours):

> Lines <- "LogData   date  time
+ 177.16 2008/04/24 02:00
+ 261.78 2008/04/24 04:00
+ 375.44 2008/04/24 06:00
+ 489.43 2008/04/24 08:00
+ 595.83 2008/04/24 10:00
+ 696.88 2008/04/24 24:00"
>
> DF <- read.table(textConnection(Lines))
>
> library(chron)
> DF2 <- transform(DF,
+ chron = as.chron(paste(date, time)),
+ POSIXct = as.POSIXct(paste(date, time)))
> DF2
  LogData   date  time   chron POSIXct
1   77.16 2008/04/24 02:00 (04/24/08 02:00:00) 2008-04-24 02:00:00
2   61.78 2008/04/24 04:00 (04/24/08 04:00:00) 2008-04-24 04:00:00
3   75.44 2008/04/24 06:00 (04/24/08 06:00:00) 2008-04-24 06:00:00
4   89.43 2008/04/24 08:00 (04/24/08 08:00:00) 2008-04-24 08:00:00
5   95.83 2008/04/24 10:00 (04/24/08 10:00:00) 2008-04-24 10:00:00
6   96.88 2008/04/24 24:00 (NA NA)



On Tue, Feb 16, 2010 at 5:47 AM, Alex Anderson
 wrote:
> Hi All,
> I am attempting to work with some data from loggers. I have read in a .csv
> exported from MS Access that already has my dates and times (in 24 clock
> format), (with StringsAsFactors=FALSE).
>
>> head(tdata)
>
>  LogData       date              time
> 1    77.16     2008/04/24     02:00
> 2    61.78     2008/04/24     04:00
> 3    75.44     2008/04/24     06:00
> 4    89.43     2008/04/24     08:00
> 5    95.83     2008/04/24     10:00
> 6    96.88     2008/04/24     24:00
>
> I wish to be able to summarise the data using the character vectors $data
> and $time (daily, monthly averages, maxima of my $LogData for example) so I
> am trying to get R to recognise the $date and $time columns as valid dates
> and times. Using...
>
>> tdata$date2 = as.Date(as.character(tdata$date))
>
> I can get a new column of valid dates, but neither:
>
>> tdata$time2= strptime(tdata$time,"%k")
>
> Error in `$<-.data.frame`(`*tmp*`, "time2", value = list(sec = c(0, 0,  :
>  replacement has 9 rows, data has 10
>
> nor trying:
>
>>  tdata$time2=chron(times=as.character(tdata$time, format= "hh:mm"))
>
> In addition: Warning messages:
> 1: In unpaste(times, sep = fmt$sep, fnames = fmt$periods, nfields = 3) :
>  wrong number of fields in entry(ies) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
> 2: In convert.times(times., fmt) : NAs introduced by coercion
> 3: In convert.times(times., fmt) : NAs introduced by coercion
> 4: In convert.times(times., fmt) : NAs introduced by coercion
>
> gives me any valid times from my time vector.  the Chron documentation
> doesn't mention 24 clocks, strptime neither, and the Rnews issue 1/4 with an
> article about time is no help... Any thoughts would be much appreciated.
> regards
>
> Alex Anderson
> James Cook University
> Townsville, Australia
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Keyboard

2010-02-16 Thread Erik Iverson

Steven Martin wrote:

All,

I installed R-2.10.1 with Readline=no.  Now for some reason R does not 
recognize some key strokes like the directional arrows.
I am not sure if Readline is the problem or not.

  
What particular OS are you using?  In many cases, there is a 
preconfigured package available to save you from compiling it yourself.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] [Reminder] R/Finance 2010: Applied Finance with R

2010-02-16 Thread Dirk Eddelbuettel

   [ Registration for R/Finance 2010 is going strong: after only ten days 
 of registrations one tutorial is already at 65% of capacity, and two 
  others are approaching the 50% mark. Tutorials are capped at forty
 participants each, the conference itself may be capped at three 
 hundred registrations.  Conference details are provided below. ]



   R/Finance 2010: Applied Finance with R
   April 16 & 17, Chicago, IL, USA
 
   The second annual R/Finance conference for applied finance using R, the
   premier free software system for statistical computation and graphics,
   will be held this spring in Chicago, IL, USA on Friday April 16 and
   Saturday April 17, 2010.
  
   The two-day conference will cover portfolio management, time series
   analysis, advanced risk tools, high-performance computing, econometrics
   and more. All will be discussed within the context of using R as a primary
   tool for financial risk management, analysis and trading.
  
   The 2010 conference will build upon the success of last year's event. It
   will include traditional keynotes from leading names in the R and finance
   community, presentations of contributed papers, short "lightning-style"
    presentations as well as the chance to meet and discuss collaboratively
   the future of the R in Finance community.
  
   R/Finance 2010 is organized by a local group of R package authors and
   community contributors, hosted by the International Center for Futures and
   Derivatives [ICFD] at the University of Illinois at Chicago and made
   possible via sponsorship support from ICFD, REvolution Computing,
   OneMarketData and Insight Algorithmics.

   The conference will feature invited keynote lectures by:
  
 * Bernhard Pfaff, Author, Analysis of Integrated and Co-integrated Time
   Series with R

 * Ralph Vince, Author, Leverage Space Portfolio Model

 * Marc Wildi, Author, Signal Extraction. ZHAW, Zurich, Switzerland

 * Achim Zeileis, Author, Applied Econometrics with R. Universitaet
   Innsbruck, Austria

   Plus additional talks over two days from:

  Maria Belianina, Kris Boudt, Josh Buckner, Peter Carl, Jon Cornelissen,
  Dirk Eddelbuettel, Robert Grossman, Saptarshi Guha, Mike Kane, Ruud
  Koning, Bryan Lewis, Wei-han Liu, James "JD" Long, Brian Peterson,
  Soren MacBeth, Khanh Nguyen, Michael North, Stefan Theussl, Josh
  Ulrich, Tony Plate, Jeff Ryan, Mark Seligman, David Smith, and Eric
  Zivot.

   Also offered are four optional pre-conference tutorials:   

  Josh Buckner & Mark Seligman 
  GPU Programming with R - An Introduction To GPU Programming with R

  Peter Carl & Brian Peterson
  Complex Portfolio Optimization with Generalized Business Objectives

  Dirk Eddelbuettel
  Rcpp / RInside - Extending and Embedding R with C++ for Fun and Profit

  Jeff Ryan 
  Trading with R - Idea to Execution in 50 Minutes with IBrokers and R

   More details and registration information can be found at the website at  

   http://www.RinFinance.com

   For the program committee:

Gib Bassett, Peter Carl, Dirk Eddelbuettel, John Miller,
Brian Peterson, Dale Rosenthal, Jeffrey Ryan


-- 
  Registration is open for the 2nd International conference R / Finance 2010
  See http://www.RinFinance.com for details, and see you in Chicago in April!

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Interactive plot (histogram) in R..........possible?

2010-02-16 Thread Greg Snow
This can be done fairly simply using the tkexamp function from the 
TeachingDemos package.

Here are some examples:

library(TeachingDemos)

histfunc <- function(n, mu=0, sigma=1) {
x <- rnorm(n, mu, sigma)
hist(x)
}

hist.list1 <- list( n=list('slider', from=10, to=10000, resolution=10),
mu=list('numentry', init=0), sigma=list('numentry', init=1) )

tkexamp( histfunc, hist.list1 )

### maybe better

hist.list2 <- list( n=list('spinbox', values=c(10,50,100,500,1000,5000,10000),
init=10, from=10, to=10000))

tkexamp( histfunc, hist.list2 )


# another variation

x <- rnorm(10000)
histfunc2 <- function(n) {
hist(x[seq_len(n)])
}
tkexamp( histfunc2, 
list( n=list('slider', from=10, to=10000, resolution=10) ) )


You may want to set some of the other arguments as well.

Hope this helps,


-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
> project.org] On Behalf Of Megh
> Sent: Tuesday, February 16, 2010 2:44 PM
> To: r-help@r-project.org
> Subject: [R] Interactive plot (histogram) in R..possible?
> 
> 
> Dear all,
> 
> I am looking for some kind of interactive plot to draw a histogram for
> a
> normal distribution with different sample size. In that plot there
> would be
> some sort of "scroll-bar" which will take min value 10 and maximum
> value of
> 10,000 as sample size (n) from a standard normal distribution.
> 
> In that case user will scroll in that scroll bar and histogram with
> different sample numbers will be drawn in the same plot.
> 
> Is there any function/package in R to do that? Your help will be highly
> appreciated.
> 
> Thanks,
> --
> View this message in context: http://n4.nabble.com/Interactive-plot-
> histogram-in-R-possible-tp1557984p1557984.html
> Sent from the R help mailing list archive at Nabble.com.
> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] converting character vector "hh:mm" to chron or strptime 24 clock time vectors

2010-02-16 Thread Jim Lemon

...
If it gets too confusing, just coerce your POSIXlt objects to POSIXct
objects which don't have issues with odd lengths.

Thanks to Mark and Charlie, your messages have enlightened me and 
hopefully provided a solution for Alex, who was the one with the problem.


tdata$datetime<-as.POSIXct(strptime(paste(tdata$date,tdata$time,sep="-"),
 "%Y/%m/%d-%H:%M"))

Jim

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] converting character vector "hh:mm" to chron or strptime 24 clock time vectors

2010-02-16 Thread Sharpie


Jim Lemon wrote:
> 
> On 02/16/2010 09:47 PM, Alex Anderson wrote:
>> ...
> This is the problem
>> 6 96.88 2008/04/24 24:00
>>
>> Error in `$<-.data.frame`(`*tmp*`, "time2", value = list(sec = c(0, 0, :
>> replacement has 9 rows, data has 10
> 
> Hi Alex,
> You have a problem with an invalid time. The line should read:
> 
> 6 96.88 2008/05/24 00:00
> 
> However, there is something else going on here that puzzles me. When I 
> tried this with your sample data:
> 
> tdata$datetime<-strptime(paste(tdata$date,tdata$time,sep="-"),"%Y/%m/%d-%H:%M")
> Error in `$<-.data.frame`(`*tmp*`, "datetime", value = list(sec = c(0,  :
>replacement has 9 rows, data has 6
> 
> Yet:
> 
> datetime<-strptime(paste(tdata$date,tdata$time,sep="-"),
>   "%Y/%m/%d-%H:%M")
> datetime
> [1] "2008-04-24 02:00:00" "2008-04-24 04:00:00" "2008-04-24 06:00:00"
> [4] "2008-04-24 08:00:00" "2008-04-24 10:00:00" "2008-05-24 00:00:00"
> length(datetime)
> [1] 9
> 
> This is the first time I have encountered a discrepancy between "length" 
> and the printed extent of an object, and I can't work out what is going
> on.
> 
> Jim
> 

This is because you are working with a POSIXlt object which has 9 components
and therefore always has a length of 9.  That second part has always struck
me as an odd design decision-- but one which we will have to live with.  The
important detail is that each component of POSIXlt, such as "min", are the
length you would expect them to be.

I.E. if you have a POSIXlt object storing 15 timestamps:

  length( object )  = 9   <- not what you expect
  length( object$min ) = 15  <- what you expect

If it gets too confusing, just coerce your POSIXlt objects to POSIXct
objects which don't have issues with odd lengths.
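
A quick illustration in a fresh session (not from the original post):

lt <- as.POSIXlt(Sys.time() + 0:14)   # 15 timestamps
length(lt$min)                        # 15 -- one entry per timestamp
length(unclass(lt))                   # the number of components (9 on the R versions in this thread)
length(as.POSIXct(lt))                # 15 -- POSIXct behaves as you would expect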

Hope this helps!

-Charlie
-- 
View this message in context: 
http://n4.nabble.com/converting-character-vector-hh-mm-to-chron-or-strptime-24-clock-time-vectors-tp1557308p1558213.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] converting character vector "hh:mm" to chron or strptime 24 clock time vectors

2010-02-16 Thread Jim Lemon

On 02/16/2010 09:47 PM, Alex Anderson wrote:

...

This is the problem

6 96.88 2008/04/24 24:00

Error in `$<-.data.frame`(`*tmp*`, "time2", value = list(sec = c(0, 0, :
replacement has 9 rows, data has 10


Hi Alex,
You have a problem with an invalid time. The line should read:

6 96.88 2008/05/24 00:00

However, there is something else going on here that puzzles me. When I 
tried this with your sample data:


tdata$datetime<-strptime(paste(tdata$date,tdata$time,sep="-"),"%Y/%m/%d-%H:%M")
Error in `$<-.data.frame`(`*tmp*`, "datetime", value = list(sec = c(0,  :
  replacement has 9 rows, data has 6

Yet:

datetime<-strptime(paste(tdata$date,tdata$time,sep="-"),
 "%Y/%m/%d-%H:%M")
datetime
[1] "2008-04-24 02:00:00" "2008-04-24 04:00:00" "2008-04-24 06:00:00"
[4] "2008-04-24 08:00:00" "2008-04-24 10:00:00" "2008-05-24 00:00:00"
length(datetime)
[1] 9

This is the first time I have encountered a discrepancy between "length" 
and the printed extent of an object, and I can't work out what is going on.


Jim

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] extract the data that match

2010-02-16 Thread Sharpie


Roslina Zakaria wrote:
> 
> Hi r-users,
>  
> I would like to extract the data that match.
> I'm interested in matching the value in column 'intg' with value in column
> 'rand_no'
> 

Match how? Rows where intg equals rand_no at the same position?  The rows of
intg that are present somewhere in rand_no regardless of position?  You need
to be more specific.


Roslina Zakaria wrote:
> 
> Attached is my data:
> 
>> cbind(z=z,intg=dd,rand_no = rr)
>     z  intg rand_no
>    [1,]  0.00 0.000   0.001
>    [2,]  0.01 0.000   0.002
> 
> 
> Thank you for your help.
> 

This isn't a good way to provide tabular data-- people that want to help you
can't just paste it into R and start working.  They will have to reprocess
the columns so that they can be read in by a function like read.table or
read.csv.  This will take some time and most won't bother-- they will just
ignore your question.

The best way to provide tabulated R data is to run your data frame through
the dput() function and post that output.  It won't look as nice, but any
list user will be able to paste the code into their console and regenerate
your table.
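
For example, with the objects from your post:

dput(head(cbind(z = z, intg = dd, rand_no = rr), 20))   # paste the printed output into your message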

-Charlie
-- 
View this message in context: 
http://n4.nabble.com/extract-the-data-that-match-tp1558193p1558200.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] extract the data that match

2010-02-16 Thread Peter Alspach
Tena koe Roslina 

It is not entirely clear to me what you want to do, but have you looked at 
match or %in%?  Or do you simply want yourData[yourData[,2]==yourData[,3],]?

Also, you might need to bear in mind issues with floating point arithmetic 
which is a frequent question on this list.
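
In code, the two readings might look like this (a sketch using the objects
from your post; the tolerance is an arbitrary choice):

m <- cbind(z = z, intg = dd, rand_no = rr)
tol <- 1e-8
m[abs(m[, "intg"] - m[, "rand_no"]) < tol, ]   # rows where the two columns agree (within tol)
m[m[, "intg"] %in% m[, "rand_no"], ]           # rows whose intg value occurs anywhere in rand_no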

HTH .

Peter Alspach

> -Original Message-
> From: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] On Behalf Of Roslina Zakaria
> Sent: Wednesday, 17 February 2010 3:39 p.m.
> To: r-help@r-project.org
> Subject: [R] extract the data that match
> 
> Hi r-users,
>  
> I would like to extract the data that match.  Attached is my data:
> I'm interested in matching the value in column 'intg' with 
> value in column 'rand_no'
> > cbind(z=z,intg=dd,rand_no = rr)
>     z  intg rand_no
>    [1,]  0.00 0.000   0.001
>    [2,]  0.01 0.000   0.002
>    [3,]  0.02 0.000   0.002
>    [4,]  0.03 0.000   0.003
>    [5,]  0.04 0.000   0.003
>    [6,]  0.05 0.000   0.004
>    [7,]  0.06 0.000   0.004
>    [8,]  0.07 0.000   0.004
>    [9,]  0.08 0.000   0.004
>   [10,]  0.09 0.000   0.008
>   [11,]  0.10 0.000   0.008
>   [12,]  0.11 0.000   0.008
>   [13,]  0.12 0.000   0.009
>   [14,]  0.13 0.000   0.009
>   [15,]  0.14 0.000   0.010
>   [16,]  0.15 0.000   0.010
>   [17,]  0.16 0.001   0.010
>   [18,]  0.17 0.001   0.011
>   [19,]  0.18 0.001   0.012
>   [20,]  0.19 0.001   0.012
>   [21,]  0.20 0.001   0.012
>   [22,]  0.21 0.001   0.012
>   [23,]  0.22 0.002   0.013
>   [24,]  0.23 0.002   0.014
>   [25,]  0.24 0.002   0.014
>   [26,]  0.25 0.002   0.014
>   [27,]  0.26 0.003   0.015
>   [28,]  0.27 0.003   0.016
>   [29,]  0.28 0.003   0.016
>   [30,]  0.29 0.004   0.017
>   [31,]  0.30 0.004   0.017
>   [32,]  0.31 0.005   0.018
>   [33,]  0.32 0.005   0.019
>   [34,]  0.33 0.005   0.020
>   [35,]  0.34 0.006   0.020
>   [36,]  0.35 0.006   0.020
>   [37,]  0.36 0.007   0.021
>   [38,]  0.37 0.007   0.024
>   [39,]  0.38 0.008   0.025
>   [40,]  0.39 0.008   0.025
>   [41,]  0.40 0.009   0.026
>   [42,]  0.41 0.010   0.027
>   [43,]  0.42 0.010   0.028
>   [44,]  0.43 0.011   0.030
>   [45,]  0.44 0.011   0.030
>   [46,]  0.45 0.012   0.031
>   [47,]  0.46 0.013   0.031
>   [48,]  0.47 0.014   0.032
>   [49,]  0.48 0.014   0.033
>   [50,]  0.49 0.015   0.033
>   [51,]  0.50 0.016   0.033
>   [52,]  0.51 0.017   0.034
>   [53,]  0.52 0.017   0.037
>   [54,]  0.53 0.018   0.039
>   [55,]  0.54 0.019   0.039
>   [56,]  0.55 0.020   0.040
>   [57,]  0.56 0.021   0.040
>   [58,]  0.57 0.022   0.040
>   [59,]  0.58 0.022   0.041
>   [60,]  0.59 0.023   0.042
>   [61,]  0.60 0.024   0.042
>   [62,]  0.61 0.025   0.043
>   [63,]  0.62 0.026   0.045
>   [64,]  0.63 0.027   0.046
>   [65,]  0.64 0.028   0.046
>   [66,]  0.65 0.029   0.047
>   [67,]  0.66 0.030   0.047
>   [68,]  0.67 0.031   0.047
>   [69,]  0.68 0.032   0.048
>   [70,]  0.69 0.033   0.051
>   [71,]  0.70 0.034   0.051
>   [72,]  0.71 0.036   0.051
>   [73,]  0.72 0.037   0.052
>   [74,]  0.73 0.038   0.052
>   [75,]  0.74 0.039   0.052
>   [76,]  0.75 0.040   0.052
>   [77,]  0.76 0.041   0.052
>   [78,]  0.77 0.042   0.053
>   [79,]  0.78 0.044   0.053
>   [80,]  0.79 0.045   0.053
>   [81,]  0.80 0.046   0.054
>   [82,]  0.81 0.047   0.054
>   [83,]  0.82 0.048   0.055
>   [84,]  0.83 0.050   0.055
>   [85,]  0.84 0.051   0.055
>   [86,]  0.85 0.052   0.057
>   [87,]  0.86 0.054   0.060
>   [88,]  0.87 0.055   0.062
>   [89,]  0.88 0.056   0.063
>   [90,]  0.89 0.057   0.064
>   [91,]  0.90 0.059   0.064
>   [92,]  0.91 0.060   0.064
>   [93,]  0.92 0.062   0.066
>   [94,]  0.93 0.063   0.067
>   [95,]  0.94 0.064   0.067
>   [96,]  0.95 0.066   0.068
>   [97,]  0.96 0.067   0.068
>   [98,]  0.97 0.068   0.069
>   [99,]  0.98 0.070   0.071
>  [100,]  0.99 0.071   0.071
>  [101,]  1.00 0.073   0.071
>  [102,]  1.01 0.074   0.071
>  [103,]  1.02 0.076   0.072
>  [104,]  1.03 0.077   0.072
>  [105,]  1.04 0.079   0.075
>  [106,]  1.05 0.080   0.075
>  [107,]  1.06 0.082   0.076
>  [108,]  1.07 0.083   0.076
>  [109,]  1.08 0.085   0.078
>  [110,]  1.09 0.086   0.078
>  [111,]  1.10 0.088   0.078
>  [112,]  1.11 0.089   0.079
>  [113,]  1.12 0.091   0.080
>  [114,]  1.13 0.092   0.080
>  [115,]  1.14 0.094   0.081
>  [116,]  1.15 0.095   0.081
>  [117,]  1.16 0.097   0.081
>  [118,]  1.17 0.099   0.084
>  [119,]  1.18 0.100   0.086
>  [120,]  1.19 0.102   0.086
>  [121,]  1.20 0.103   0.086
>  [122,]  1.21 0.105   0.087
>  [123,]  1.22 0.107   0.087
>  [124,]  1.23 0.108   0.088
>  [125,]  1.24 0.110   0.088
>  [126,]  1.25 0.111   0.089
>  [127,]  1.26 0.113   0.090
>  [128,]  1.27 0.115   0.090
>  [129,]  1.28 0.116   0.093
>  [130,]  1.29 0.118   0.094
>  [131,]  1.30 0.120   0.094
>  [132,]  1.31 0.121   0.096
>  [133,]  1.32 0.123   0.097
>  [134,]  1.33 0.125   0.097
>  [135,]  1.34 0.127   0.100
>  [136,]  1.35 0.128   0.100
>  [137,]  1.36 0.130   0.100
>  [138,]  1.37 0.132   0.100
>  [139,]  1.38 0.133   0.101
>  [140,]  1

[R] Building R from source

2010-02-16 Thread rkevinburton
I found the problem but not a solution. It turns out if I add the following 
lines to dqrdc2.f I get the error:

      write(*,300) ldx,n,p
  300 format(3i4)

I don't get a compile error but I get the seemingly unrelated error in linking 
R.DLL
I guess the question now is, "How do I add a simple print statement?". Or, what 
is wrong with the above print statement?

Thank you.

Kevin

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Bivariate Uniform distribution

2010-02-16 Thread li li
Hi all,
   Is  there a function in R to calculate the probability, or quantile, or
generate random numbers,  and so on for
bivariate uniform distribution, like for the bivariate normal distribution?
   Thanks!
Hannah

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] extract the data that match

2010-02-16 Thread Roslina Zakaria
Hi r-users,
 
I would like to extract the data that match.  Attached is my data:
I'm interested in matching the value in column 'intg' with value in column 
'rand_no'
> cbind(z=z,intg=dd,rand_no = rr)
    z  intg rand_no
   [1,]  0.00 0.000   0.001
   [2,]  0.01 0.000   0.002
   [3,]  0.02 0.000   0.002
   [4,]  0.03 0.000   0.003
   [5,]  0.04 0.000   0.003
   [6,]  0.05 0.000   0.004
   [7,]  0.06 0.000   0.004
   [8,]  0.07 0.000   0.004
   [9,]  0.08 0.000   0.004
  [10,]  0.09 0.000   0.008
  [11,]  0.10 0.000   0.008
  [12,]  0.11 0.000   0.008
  [13,]  0.12 0.000   0.009
  [14,]  0.13 0.000   0.009
  [15,]  0.14 0.000   0.010
  [16,]  0.15 0.000   0.010
  [17,]  0.16 0.001   0.010
  [18,]  0.17 0.001   0.011
  [19,]  0.18 0.001   0.012
  [20,]  0.19 0.001   0.012
  [21,]  0.20 0.001   0.012
  [22,]  0.21 0.001   0.012
  [23,]  0.22 0.002   0.013
  [24,]  0.23 0.002   0.014
  [25,]  0.24 0.002   0.014
  [26,]  0.25 0.002   0.014
  [27,]  0.26 0.003   0.015
  [28,]  0.27 0.003   0.016
  [29,]  0.28 0.003   0.016
  [30,]  0.29 0.004   0.017
  [31,]  0.30 0.004   0.017
  [32,]  0.31 0.005   0.018
  [33,]  0.32 0.005   0.019
  [34,]  0.33 0.005   0.020
  [35,]  0.34 0.006   0.020
  [36,]  0.35 0.006   0.020
  [37,]  0.36 0.007   0.021
  [38,]  0.37 0.007   0.024
  [39,]  0.38 0.008   0.025
  [40,]  0.39 0.008   0.025
  [41,]  0.40 0.009   0.026
  [42,]  0.41 0.010   0.027
  [43,]  0.42 0.010   0.028
  [44,]  0.43 0.011   0.030
  [45,]  0.44 0.011   0.030
  [46,]  0.45 0.012   0.031
  [47,]  0.46 0.013   0.031
  [48,]  0.47 0.014   0.032
  [49,]  0.48 0.014   0.033
  [50,]  0.49 0.015   0.033
  [51,]  0.50 0.016   0.033
  [52,]  0.51 0.017   0.034
  [53,]  0.52 0.017   0.037
  [54,]  0.53 0.018   0.039
  [55,]  0.54 0.019   0.039
  [56,]  0.55 0.020   0.040
  [57,]  0.56 0.021   0.040
  [58,]  0.57 0.022   0.040
  [59,]  0.58 0.022   0.041
  [60,]  0.59 0.023   0.042
  [61,]  0.60 0.024   0.042
  [62,]  0.61 0.025   0.043
  [63,]  0.62 0.026   0.045
  [64,]  0.63 0.027   0.046
  [65,]  0.64 0.028   0.046
  [66,]  0.65 0.029   0.047
  [67,]  0.66 0.030   0.047
  [68,]  0.67 0.031   0.047
  [69,]  0.68 0.032   0.048
  [70,]  0.69 0.033   0.051
  [71,]  0.70 0.034   0.051
  [72,]  0.71 0.036   0.051
  [73,]  0.72 0.037   0.052
  [74,]  0.73 0.038   0.052
  [75,]  0.74 0.039   0.052
  [76,]  0.75 0.040   0.052
  [77,]  0.76 0.041   0.052
  [78,]  0.77 0.042   0.053
  [79,]  0.78 0.044   0.053
  [80,]  0.79 0.045   0.053
  [81,]  0.80 0.046   0.054
  [82,]  0.81 0.047   0.054
  [83,]  0.82 0.048   0.055
  [84,]  0.83 0.050   0.055
  [85,]  0.84 0.051   0.055
  [86,]  0.85 0.052   0.057
  [87,]  0.86 0.054   0.060
  [88,]  0.87 0.055   0.062
  [89,]  0.88 0.056   0.063
  [90,]  0.89 0.057   0.064
  [91,]  0.90 0.059   0.064
  [92,]  0.91 0.060   0.064
  [93,]  0.92 0.062   0.066
  [94,]  0.93 0.063   0.067
  [95,]  0.94 0.064   0.067
  [96,]  0.95 0.066   0.068
  [97,]  0.96 0.067   0.068
  [98,]  0.97 0.068   0.069
  [99,]  0.98 0.070   0.071
 [100,]  0.99 0.071   0.071
 [101,]  1.00 0.073   0.071
 [102,]  1.01 0.074   0.071
 [103,]  1.02 0.076   0.072
 [104,]  1.03 0.077   0.072
 [105,]  1.04 0.079   0.075
 [106,]  1.05 0.080   0.075
 [107,]  1.06 0.082   0.076
 [108,]  1.07 0.083   0.076
 [109,]  1.08 0.085   0.078
 [110,]  1.09 0.086   0.078
 [111,]  1.10 0.088   0.078
 [112,]  1.11 0.089   0.079
 [113,]  1.12 0.091   0.080
 [114,]  1.13 0.092   0.080
 [115,]  1.14 0.094   0.081
 [116,]  1.15 0.095   0.081
 [117,]  1.16 0.097   0.081
 [118,]  1.17 0.099   0.084
 [119,]  1.18 0.100   0.086
 [120,]  1.19 0.102   0.086
 [121,]  1.20 0.103   0.086
 [122,]  1.21 0.105   0.087
 [123,]  1.22 0.107   0.087
 [124,]  1.23 0.108   0.088
 [125,]  1.24 0.110   0.088
 [126,]  1.25 0.111   0.089
 [127,]  1.26 0.113   0.090
 [128,]  1.27 0.115   0.090
 [129,]  1.28 0.116   0.093
 [130,]  1.29 0.118   0.094
 [131,]  1.30 0.120   0.094
 [132,]  1.31 0.121   0.096
 [133,]  1.32 0.123   0.097
 [134,]  1.33 0.125   0.097
 [135,]  1.34 0.127   0.100
 [136,]  1.35 0.128   0.100
 [137,]  1.36 0.130   0.100
 [138,]  1.37 0.132   0.100
 [139,]  1.38 0.133   0.101
 [140,]  1.39 0.135   0.102
 [141,]  1.40 0.137   0.102
 [142,]  1.41 0.139   0.103
 [143,]  1.42 0.140   0.103
 [144,]  1.43 0.142   0.105
 [145,]  1.44 0.144   0.105
 [146,]  1.45 0.146   0.107
 [147,]  1.46 0.147   0.107
 [148,]  1.47 0.149   0.107
 [149,]  1.48 0.151   0.109
 [150,]  1.49 0.153   0.110
 [151,]  1.50 0.154   0.111
 [152,]  1.51 0.156   0.111
 [153,]  1.52 0.158   0.111
 [154,]  1.53 0.160   0.112
 [155,]  1.54 0.161   0.112
 [156,]  1.55 0.163   0.114
 [157,]  1.56 0.165   0.114
 [158,]  1.57 0.167   0.114
 [159,]  1.58 0.169   0.114
 [160,]  1.59 0.170   0.115
 [161,]  1.60 0.172   0.115
 [162,]  1.61 0.174   0.116
 [163,]  1.62 0.176   0.116
 [164,]  1.63 0.178   0.117
 [165,]  1.64 0.180   0.117
 [166,]  1.65 0.181   0.119
 [167,]  1.66 0.183   0.120
 [168,]  1.67 0.185   0.121
 [169,]  1.68 0.187   0.121
 [170,]  1.69 0.189   0.122
 [171,]  1

[R] sql query variable

2010-02-16 Thread RagingJim

This is the very last thing I need to make everything work properly. My
query:

sqlQuery(conn, "select to_char(lsd,'-mm') as yr,ttl_mo_prcp from
mo_rains where stn_num=023000")

Is there a way to make the stn_num in the query a variable, i.e. make it so that
whenever my script is run, the user must choose and input the station
number?

Cheers.
-- 
View this message in context: 
http://n4.nabble.com/sql-query-variable-tp1558189p1558189.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Keyboard

2010-02-16 Thread Sharpie


Steven Martin wrote:
> 
> All,
> 
> I installed R-2.10.1 with Readline=no.  Now for some reason R does not
> recognize some key strokes like the directional arrows.
> I am not sure if Readline is the problem or not.
> 
Yes -- readline supplies line editing (including the arrow keys) and command history.


Steven Martin wrote:
> 
> I have tried ./configure with Readline = yes but it doesn't fix the problem
> nor do I really know if readline is the problem to start with.
> 
> Has anybody else run into similar problems?
> 
> Thanks,
> Steve
> 

Did you try:

  make distclean
  ./configure 
  make
  make install

?

If that doesn't work, try:

  which R

To see if the version of R being run is the one that you just
compiled/installed.

Good luck!

-Charlie
-- 
View this message in context: 
http://n4.nabble.com/Keyboard-tp1558126p1558185.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] side-effects in functions that replace object values andattributes

2010-02-16 Thread William Dunlap
> -Original Message-
> From: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] On Behalf Of Stephen Tucker
> Sent: Tuesday, February 16, 2010 5:32 PM
> To: r-help@r-project.org
> Subject: [R] side-effects in functions that replace object 
> values andattributes
> 
> Hello list,
> 
> I encountered some surprising behavior in R and wanted to 
> check to see if I was imagining things. Previously, I thought 
> that replacement/setter operators in prefix notation (e.g., 
> `[<-`(x,1,y) rather than x[1] <- y) did not produce side 
> effects (i.e., left 'x' unchanged), but just realized that 
> this is not the case anymore (or has it always been this way? 
> - I don't have access to old R distributions at the moment, 
> but using R 2.10.1 on Ubuntu and OS X)? I went through the 
> NEWS file but did not see a change documented if there was 
> one, but just going by my recollection...
> 
> In any case, my understanding was that when modifying a value 
> or attribute of an object, R reassigned the modified object 
> to the original symbol. For instance, the help file for 
> `names<-`() says that 
> 
> > names(z)[3] <- "c2"
> 
> is evaluated as
> 
> > z <- "names<-"(z, "[<-"(names(z), 3, "c2"))
> 
> But the final (re)assignment (`<-`) seems redundant as 
> 
> > invisible("names<-"(z, "[<-"(names(z), 3, "c2")))
> 
> does the same thing.
> 
> In this case, I wonder if there is a preferred way to use the 
> replacement/setter operators in prefix notation without 
> producing such side effects

I would never recommend using replacement operators
this way.  In part this is because S+ doesn't like it:
  S+> x<-1:10
  S+> `[<-`(x, 1, value=66.6)
   [1] 66.6  2.0  3.0  4.0  5.0  6.0  7.0  8.0  9.0 10.0
  Warning messages:
looks like the internal reference count was not updated in: x[1]  <-
66.6
  S+> x
   [1]  1  2  3  4  5  6  7  8  9 10
in part because it is ugly, but mostly it is because
forcing the program to accept such usage closes off,
or at least restricts, an avenue for saving memory when
doing replacement operations.

If you have a replacement function that you are tempted
to use in this way, I recommend that you write a wrapper
function for it.  E.g., instead of using
   namedX <- `names<-`(x, value=as.character(x))
write
   setNames <- function(x, names) {
   names(x) <- names
   x
   }
and use it as
   namedX <- setNames(x, as.character(x))


Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com 

> (replace() exists for `[<-`() and 
> `[[<-`(), but perhaps something more general). For instance, 
> if I did not desire the following modification in x:
> 
> > x <- c("1","2")
> > y <- `names<-`(x,c("a","b"))
> > y
>   a   b 
> "1" "2" 
> > x
>   a   b 
> "1" "2" 
> > 
> 
> I might take advantage of R's lazy evaluation and create 
> a copy in the local function environment and allow that copy 
> to be modified:
> 
> > x <- c("1","2")
> > y <- `names<-`(`<-`(x,x),c("a","b"))
> > y
>   a   b 
> "1" "2" 
> > x
> [1] "1" "2"
> 
> The interesting thing is that `mode<-`(), while also a 
> "setter" function in that it sets an object attribute, does 
> not behave as `names<-`():
> 
> > x <- c("1","2")
> > y <- `mode<-`(x,"integer")
> > y
> [1] 1 2
> > x
> [1] "1" "2"
> > mode(x)
> [1] "character"
> 
> So another question that naturally arises is whether there is a 
> general rule by which we can predict the behavior of these 
> types of operators (whether they produce side effects or not)?
> 
> Thanks,
> 
> Stephen
> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] strangeness in Predict() {rms}

2010-02-16 Thread William Dunlap
Both plyr and rms contain an object called ".".
In plyr it is a "closure" (the common kind of
function) and in rms it is NA.  If plyr is attached
in front of rms then you get your problem with
Predict().
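
A quick way to check for this kind of masking in your own session (assuming
both packages are attached):

find(".")    # lists the attached packages that define an object named "."
search()     # the attach order decides which "." Predict() picks up first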

Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com  

> -Original Message-
> From: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] On Behalf Of 
> bill.venab...@csiro.au
> Sent: Tuesday, February 16, 2010 4:21 PM
> To: dylan.beaude...@gmail.com; r-help@r-project.org
> Subject: Re: [R] strangeness in Predict() {rms}
> 
> This works without a glitch on my linux system (info below).  
> You might try upgrading your R to 2.10.1, perhaps.
> 
> > sessionInfo()
> R version 2.10.1 (2009-12-14) 
> x86_64-unknown-linux-gnu 
> 
> locale:
>  [1] LC_CTYPE=en_AU.UTF-8   LC_NUMERIC=C  
>  [3] LC_TIME=en_AU.UTF-8LC_COLLATE=en_AU.UTF-8
>  [5] LC_MONETARY=C  LC_MESSAGES=en_AU.UTF-8   
>  [7] LC_PAPER=en_AU.UTF-8   LC_NAME=C 
>  [9] LC_ADDRESS=C   LC_TELEPHONE=C
> [11] LC_MEASUREMENT=en_AU.UTF-8 LC_IDENTIFICATION=C   
> 
> attached base packages:
> [1] splines   grid  stats graphics  grDevices utils   
>   datasets 
> [8] methods   base 
> 
> other attached packages:
> [1] rms_2.1-0   plyr_0.1.9  Design_2.3-0Hmisc_3.7-0
> [5] survival_2.35-9
> 
> loaded via a namespace (and not attached):
> [1] ASOR_0.1   cluster_1.12.1 lattice_0.18-3 tcltk_2.10.1 
>   tools_2.10.1  
> >  
> 
> 
> Bill Venables
> CSIRO/CMIS Cleveland Laboratories
> 
> 
> -Original Message-
> From: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] On Behalf Of Dylan Beaudette
> Sent: Wednesday, 17 February 2010 10:05 AM
> To: r-help@r-project.org
> Subject: [R] strangeness in Predict() {rms}
> 
> Hi,
> 
> Running the following example from ?Predict() throws an error 
> I have never 
> seen before:
> 
> set.seed(1)
> x1 <- runif(300)
> x2 <- runif(300)
> ddist <- datadist(x1,x2); options(datadist='ddist')
> y  <- exp(x1+ x2 - 1 + rnorm(300))
> f  <- ols(log(y) ~ pol(x1,2) + x2)
> p1 <- Predict(f, x1=., conf.type='mean')
> 
> Error in paste(nmc[i], "=", if (is.numeric(x)) format(x) else 
> x, sep = "") : 
>   cannot coerce type 'closure' to vector of type 'character'
> In addition: Warning message:
> In is.na(v) : is.na() applied to non-(list or vector) of type 
> 'closure'
> 
> Here is the output from sessionInfo()
> 
> R version 2.9.0 (2009-04-17) 
> i686-pc-linux-gnu 
> 
> locale:
> LC_CTYPE=en_US.UTF-8;LC_NUMERIC=C;LC_TIME=en_US.UTF-8;LC_COLLA
> TE=en_US.UTF-8;LC_MONETARY=C;LC_MESSAGES=en_US.UTF-8;LC_PAPER=
> en_US.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREME
> NT=en_US.UTF-8;LC_IDENTIFICATION=C
> 
> attached base packages:
> [1] grid  splines   stats graphics  grDevices utils   
>   datasets 
> [8] methods   base 
> 
> other attached packages:
> [1] plyr_0.1.9 mgcv_1.5-5 RColorBrewer_1.0-2 
> nlme_3.1-94   
> [5] rms_2.1-0  Hmisc_3.7-0survival_2.35-6
> lattice_0.17-25   
> 
> loaded via a namespace (and not attached):
> [1] cluster_1.12.0
> 
> 
> Any ideas?
> Thanks!
> 
> Dylan
> 
> 
> -- 
> Dylan Beaudette
> Soil Resource Laboratory
> http://casoilresource.lawr.ucdavis.edu/
> University of California at Davis
> 530.754.7341
> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] side-effects in functions that replace object values and attributes

2010-02-16 Thread Stephen Tucker
Hello list,

I encountered some surprising behavior in R and wanted to check to see if I was 
imagining things. Previously, I thought that replacement/setter operators in 
prefix notation (e.g., `[<-`(x,1,y) rather than x[1] <- y) did not produce side 
effects (i.e., left 'x' unchanged), but just realized that this is not the case 
anymore (or has it always been this way? - I don't have access to old R 
distributions at the moment, but using R 2.10.1 on Ubuntu and OS X)? I went 
through the NEWS file but did not see a change documented if there was one, but 
just going by my recollection...

In any case, my understanding was that when modifying a value or attribute of 
an object, R reassigned the modified object to the original symbol. For 
instance, the help file for `names<-`() says that 

> names(z)[3] <- "c2"

is evaluated as

> z <- "names<-"(z, "[<-"(names(z), 3, "c2"))

But the final (re)assignment (`<-`) seems redundant as 

> invisible("names<-"(z, "[<-"(names(z), 3, "c2")))

does the same thing.

In this case, I wonder if there is a preferred way to use the 
replacement/setter operators in prefix notation without producing such side 
effects (replace() exists for `[<-`() and `[[<-`(), but perhaps something more 
general). For instance, if I did not desire the following modification in x:

> x <- c("1","2")
> y <- `names<-`(x,c("a","b"))
> y
  a   b 
"1" "2" 
> x
  a   b 
"1" "2" 
> 

I might take advantage of R's lazy evaluation and create a copy in the 
local function environment and allow that copy to be modified:

> x <- c("1","2")
> y <- `names<-`(`<-`(x,x),c("a","b"))
> y
  a   b 
"1" "2" 
> x
[1] "1" "2"

The interesting thing is that `mode<-`(), while also a "setter" function in 
that it sets an object attribute, does not behave as `names<-`():

> x <- c("1","2")
> y <- `mode<-`(x,"integer")
> y
[1] 1 2
> x
[1] "1" "2"
> mode(x)
[1] "character"

So another question that naturally arises is whether there is a general rule by 
which we can predict the behavior of these types of operators (whether they 
produce side effects or not)?

Thanks,

Stephen

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Keyboard

2010-02-16 Thread Steven Martin
All,

I installed R-2.10.1 with Readline=no.  Now for some reason R does not 
recognize some key strokes like the directional arrows.
I am not sure if Readline is the problem or not.

I have tried ./configure with Readline = yes but it doesn't fix the problem nor 
do I really know if readline is the problem to start with.

Has anybody else run into similar problems?

Thanks,
Steve
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Using text put into a dialog box

2010-02-16 Thread RagingJim

Thanks Greg, the problem is I have no idea how to return and use what I have
typed into the pop up. Add to that the complication that with this query 

x<-sqlQuery(conn, "select to_char(lsd,'-mm') as yr,ttl_mo_prcp from
mo_rains where stn_num=23090")

if I put anything other than a number into stn_num=... then it does not run.
eg if I just do sn=23090 and then use that in the query

sqlQuery(conn, "select to_char(lsd,'-mm') as yr,ttl_mo_prcp from
mo_rains where stn_num=sn")

I get the following error.

[1] "[RODBC] ERROR: Could not SQLExecDirect"  
[2] "42S22 904 [Oracle][ODBC][Ora]ORA-00904: \"SN\": invalid identifier\n"

So my question is in two parts. Firstly, how do I return the number written
in the popup, and then secondly how do I actually put it into the query? 

Thanks again
-- 
View this message in context: 
http://n4.nabble.com/Using-text-put-into-a-dialog-box-tp1555761p1558124.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Separating columns, and sorting by rows

2010-02-16 Thread RagingJim

Thanks for the help guys. This worked:

x<-sqlQuery(conn, "select to_char(lsd,'-mm') as yr,ttl_mo_prcp from
mo_rains where stn_num=23090")


myDF=x
myDF[,1]=as.yearmon(myDF[,1])
myDF$R=myDF$TTL_MO_PRCP
myDF[,-2]
myDF$yyyy<-substr(myDF$YR,5,8)
myDF$mm<-substr(myDF$YR,1,4)
myDF<-subset(myDF, select=c(yyyy,mm,R))
myDF.reshape<-reshape(myDF,v.names="R",idvar="yyyy",timevar="mm",direction="wide")
myDF.reshape

kent=myDF.reshape


library(zoo)

rows=nrow(myDF.reshape)
kent[,1]=as.numeric(as.character(kent[,1]))
ann=c(kent1[,2]+kent1[,3]+kent1[,4]+kent1[,5]+kent1[,6]+kent1[,7]+kent1[,8]+kent1[,9]+kent1[,10]+kent1[,11]+kent1[,12]+kent1[,13])

kent1=cbind(kent$yyyy,kent$R.Jan,kent$R.Feb,kent$R.Mar,kent$R.Apr,kent$R.May,kent$R.Jun,kent$R.Jul,kent$R.Aug,kent$R.Sep,kent$R.Oct,kent$R.Nov,kent$R.Dec,ann)


Still some cleaning up to do, but it now works as promised :)

All I have to figure out now is how to have a pop up for people to write
in a station number, and that station number then gets input into the query.

x<-sqlQuery(conn, "select to_char(lsd,'-mm') as yr,ttl_mo_prcp from
mo_rains where stn_num=23090")

Any ideas at all?

Cheers again!
-- 
View this message in context: 
http://n4.nabble.com/Separating-columns-and-sorting-by-rows-tp1555806p1558118.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] strangeness in Predict() {rms}

2010-02-16 Thread Bill.Venables
This works without a glitch on my linux system (info below).  You might try 
upgrading your R to 2.10.1, perhaps.

> sessionInfo()
R version 2.10.1 (2009-12-14) 
x86_64-unknown-linux-gnu 

locale:
 [1] LC_CTYPE=en_AU.UTF-8   LC_NUMERIC=C  
 [3] LC_TIME=en_AU.UTF-8LC_COLLATE=en_AU.UTF-8
 [5] LC_MONETARY=C  LC_MESSAGES=en_AU.UTF-8   
 [7] LC_PAPER=en_AU.UTF-8   LC_NAME=C 
 [9] LC_ADDRESS=C   LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_AU.UTF-8 LC_IDENTIFICATION=C   

attached base packages:
[1] splines   grid  stats graphics  grDevices utils datasets 
[8] methods   base 

other attached packages:
[1] rms_2.1-0   plyr_0.1.9  Design_2.3-0Hmisc_3.7-0
[5] survival_2.35-9

loaded via a namespace (and not attached):
[1] ASOR_0.1   cluster_1.12.1 lattice_0.18-3 tcltk_2.10.1   tools_2.10.1  
>  
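
If upgrading is not immediately possible, a workaround sketch that avoids the
x1=. shorthand by supplying the values explicitly (same fit f and datadist as
in the example):

p1 <- Predict(f, x1 = seq(0, 1, length = 100), conf.type = 'mean')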


Bill Venables
CSIRO/CMIS Cleveland Laboratories


-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
Behalf Of Dylan Beaudette
Sent: Wednesday, 17 February 2010 10:05 AM
To: r-help@r-project.org
Subject: [R] strangeness in Predict() {rms}

Hi,

Running the following example from ?Predict() throws an error I have never 
seen before:

set.seed(1)
x1 <- runif(300)
x2 <- runif(300)
ddist <- datadist(x1,x2); options(datadist='ddist')
y  <- exp(x1+ x2 - 1 + rnorm(300))
f  <- ols(log(y) ~ pol(x1,2) + x2)
p1 <- Predict(f, x1=., conf.type='mean')

Error in paste(nmc[i], "=", if (is.numeric(x)) format(x) else x, sep = "") : 
  cannot coerce type 'closure' to vector of type 'character'
In addition: Warning message:
In is.na(v) : is.na() applied to non-(list or vector) of type 'closure'

Here is the output from sessionInfo()

R version 2.9.0 (2009-04-17) 
i686-pc-linux-gnu 

locale:
LC_CTYPE=en_US.UTF-8;LC_NUMERIC=C;LC_TIME=en_US.UTF-8;LC_COLLATE=en_US.UTF-8;LC_MONETARY=C;LC_MESSAGES=en_US.UTF-8;LC_PAPER=en_US.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US.UTF-8;LC_IDENTIFICATION=C

attached base packages:
[1] grid  splines   stats graphics  grDevices utils datasets 
[8] methods   base 

other attached packages:
[1] plyr_0.1.9 mgcv_1.5-5 RColorBrewer_1.0-2 nlme_3.1-94   
[5] rms_2.1-0  Hmisc_3.7-0survival_2.35-6lattice_0.17-25   

loaded via a namespace (and not attached):
[1] cluster_1.12.0


Any ideas?
Thanks!

Dylan


-- 
Dylan Beaudette
Soil Resource Laboratory
http://casoilresource.lawr.ucdavis.edu/
University of California at Davis
530.754.7341

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] strangeness in Predict() {rms}

2010-02-16 Thread Dylan Beaudette
Hi,

Running the following example from ?Predict() throws an error I have never 
seen before:

set.seed(1)
x1 <- runif(300)
x2 <- runif(300)
ddist <- datadist(x1,x2); options(datadist='ddist')
y  <- exp(x1+ x2 - 1 + rnorm(300))
f  <- ols(log(y) ~ pol(x1,2) + x2)
p1 <- Predict(f, x1=., conf.type='mean')

Error in paste(nmc[i], "=", if (is.numeric(x)) format(x) else x, sep = "") : 
  cannot coerce type 'closure' to vector of type 'character'
In addition: Warning message:
In is.na(v) : is.na() applied to non-(list or vector) of type 'closure'

Here is the output from sessionInfo()

R version 2.9.0 (2009-04-17) 
i686-pc-linux-gnu 

locale:
LC_CTYPE=en_US.UTF-8;LC_NUMERIC=C;LC_TIME=en_US.UTF-8;LC_COLLATE=en_US.UTF-8;LC_MONETARY=C;LC_MESSAGES=en_US.UTF-8;LC_PAPER=en_US.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US.UTF-8;LC_IDENTIFICATION=C

attached base packages:
[1] grid  splines   stats graphics  grDevices utils datasets 
[8] methods   base 

other attached packages:
[1] plyr_0.1.9 mgcv_1.5-5 RColorBrewer_1.0-2 nlme_3.1-94   
[5] rms_2.1-0  Hmisc_3.7-0survival_2.35-6lattice_0.17-25   

loaded via a namespace (and not attached):
[1] cluster_1.12.0


Any ideas?
Thanks!

Dylan


-- 
Dylan Beaudette
Soil Resource Laboratory
http://casoilresource.lawr.ucdavis.edu/
University of California at Davis
530.754.7341

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] RODBC missing values in integer columns

2010-02-16 Thread Rob Forler
It turns out that in the sqlQuery I must set rows_at_time =0 to get rid of
this problem.

Does anyone have any idea why this might be?
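
A sketch of where the argument goes (assumptions: the connection and sqlString
from the example below; the exact value is something to experiment with -- the
usual workarounds for flaky Sybase/ASE ODBC drivers are rows_at_time = 1 and
believeNRows = FALSE at connection time):

connection <- odbcConnect("IQDatabase", believeNRows = FALSE)
sqlData <- sqlQuery(connection, sqlString, rows_at_time = 1)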

On Tue, Feb 16, 2010 at 12:52 PM, Rob Forler  wrote:

> some more info
> > t(t(odbcGetInfo(connection)))
>  [,1]
> DBMS_Name"Adaptive Server Anywhere"
> DBMS_Ver "12.70."
> Driver_ODBC_Ver  "03.51"
> Data_Source_Name "dbname"
> Driver_Name  "Adaptive Server Anywhere"
> Driver_Ver   "09.00.0001"
> ODBC_Ver "03.52."
> Server_Name  "dbname"
>
>
>
>
> On Tue, Feb 16, 2010 at 11:39 AM, Rob Forler  wrote:
>
>> Hello,
>>
>> We are having some strange issues with RODBC related to integer columns.
>> Whenever we do a SQL query the data in an integer column is 150 actual data
>> points then 150 0's then 150 actual data points then 150 0's. However, our
>> database actually has numbers where the 0's are filled in. Furthermore,
>> other datatypes do not have this problem: double and varchar are correct and
>> do not alternate to null. Also, if we increase the rows_at_time to 1024
>> there are larger gaps between the 0's and actual data. The server is a
>> sybase IQ database. We have tested it on a different database sybase ASE and
>> we still get this issue.
>>
>> For example :
>>
>> We have the following query
>>
>> sqlString = "Select ActionID, Velocity from ActionDataTable"
>>
>> #where ActionID is of integer type and Velocity is of double type.
>> connection = odbcConnect("IQDatabase"); #this database is a sybase IQ
>> database
>> sqlData = sqlQuery(connection, sqlString);
>>
>>
>> sqlData$ActionID might be 1,2,3,4,5,6,150, 0,0,0,0,0,0,0,,0,0,0,
>> 301,302,303,304,.448,449,500,0,0,0...,0,0
>>
>> and Velocity will have data values along the whole column without these
>> big areas of 0's.
>>
>> Thanks for the help,
>> Robert Forler
>>
>
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nls.lm & AIC

2010-02-16 Thread Katharine Mullen
I will consider putting methods for AIC and logLik into the next version
of minpack.lm (contributions welcome).

For now, the following should work for logLik, where 'object' is the
return value of nls.lm.

logLik.nls.lm <- function(object, REML = FALSE, ...)
{
res <- object$fvec
N <- length(res)
val <-  -N * (log(2 * pi) + 1 - log(N) + log(sum(res^2)))/2
## the formula here corresponds to estimating sigma^2.
attr(val, "df") <- 1L + length(coef(object))
attr(val, "nobs") <- attr(val, "nall") <- N
class(val) <- "logLik"
val
}
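
A minimal, made-up usage sketch (the model, data and starting values below are
purely illustrative):

library(minpack.lm)
set.seed(1)
x <- seq(0, 5, length = 100)
y <- 2 * exp(-1.3 * x) + rnorm(100, sd = 0.05)
residFun <- function(p, x, y) y - p$A * exp(-p$k * x)
fit <- nls.lm(par = list(A = 1, k = 1), fn = residFun, x = x, y = y)
logLik(fit)        # dispatches to logLik.nls.lm defined above
AIC(logLik(fit))   # -2*logLik + 2*df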

On Tue, 16 Feb 2010, Baudron, Alan Ronan wrote:

> Hi there,
>
> I'm a PhD student investigating growth patterns in fish. I've been using
> the minpack.lm package to fit extended von Bertalanffy growth models
> that include explanatory covariates (temperature and density). I found
> the nls.lm command a powerful tool to fit models with a lot of
> parameters. However, in order to select the best model over the possible
> candidates (without covariates, with both covariates, or with only one
> of them) I'd like to compare them based on their AIC criterion. However,
> it seems that the nls.lm command doesn't return an AIC, or a log
> likelihood. Does someone have any idea of how I could proceed to get
> such information about my models?
>
> Thanks for your help. Best regards,
>
> Alan Baudron
>
>
> The University of Aberdeen is a charity registered in Scotland, No SC013683.
>
>   [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Problems building from sources

2010-02-16 Thread rkevinburton
I am trying to build R-2.9.2 from source on a Windows 7 machine. I have 
installed all of the requisite software and followed the instructions. I also 
could have sworn that I had a successful build. But now I get the following 
error.

gcc -std=gnu99 -shared -s -mwindows -o R.dll R.def console.o dataentry.o dynload.o
  edit.o editor.o embeddedR.o extra.o opt.o pager.o preferences.o psignal.o rhome.o
  rt_complete.o rui.o run.o shext.o sys-win32.o system.o dos_wglob.o e_pow.o malloc.o
  ../main/libmain.a ../appl/libappl.a ../nmath/libnmath.a getline/gl.a
  ../extra/xdr/libxdr.a ../extra/pcre/libpcre.a ../extra/bzip2/libbz2.a
  ../extra/intl/libintl.a ../extra/trio/libtrio.a ../extra/tzone/libtz.a dllversion.o
  -L. -lgfortran -lRblas -L../../bin -lRzlib -lRgraphapp -lRiconv -lcomctl32 -lversion
c:/rtools/mingw/bin/../lib/gcc/mingw32/4.2.1-sjlj/../../../libmingwex.a(mingw_snprintf.o):mingw_snprintf.c:(.text+0x1970): multiple definition of `snprintf'
../extra/trio/libtrio.a(compat.o):compat.c:(.text+0x30): first defined here
c:/rtools/mingw/bin/../lib/gcc/mingw32/4.2.1-sjlj/../../../libmingwex.a(mingw_snprintf.o):mingw_snprintf.c:(.text+0x170): multiple definition of `vsnprintf'
../extra/trio/libtrio.a(compat.o):compat.c:(.text+0x20): first defined here
collect2: ld returned 1 exit status
make[3]: *** [R.dll] Error 1
make[2]: *** [../../bin/R.dll] Error 2
make[1]: *** [rbuild] Error 2
make: *** [all] Error 2

It seems like vsnprintf and snprintf are multiply defined. Any ideas?

Kevin

 Dimitri Shvorob  wrote: 
> 
> Now that we have a reproducible example... ;)
> -- 
> View this message in context: 
> http://n4.nabble.com/Problems-with-boxplot-in-ggplot2-qplot-tp1555338p1557994.html
> Sent from the R help mailing list archive at Nabble.com.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problems with boxplot in ggplot2:qplot

2010-02-16 Thread Dimitri Shvorob

Now that we have a reproducible example... ;)
-- 
View this message in context: 
http://n4.nabble.com/Problems-with-boxplot-in-ggplot2-qplot-tp1555338p1557994.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Interactive plot (histogram) in R..........possible?

2010-02-16 Thread David Winsemius


On Feb 16, 2010, at 4:44 PM, Megh wrote:



Dear all,

I am looking for some kind of interactive plot to draw a histogram for a
normal distribution with different sample sizes. In that plot there would be
some sort of "scroll-bar" which takes a minimum value of 10 and a maximum
value of 10,000 as the sample size (n) from a standard normal distribution.

As the user moves the scroll bar, the histogram for the chosen sample size
will be redrawn in the same plot.

Is there any function/package in R to do that? Your help will be highly
appreciated.



Have you looked at TeachingDemos? It has a bunch of "tk" interactive stuff
like that.
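
A bare-bones sketch using only the tcltk package (assumptions: Tcl/Tk is
available and a standard normal sample is what is wanted; TeachingDemos wraps
this kind of control more conveniently):

library(tcltk)
nVal <- tclVar(10)                       # current sample size
redraw <- function(...) {
  n <- as.numeric(tclvalue(nVal))
  hist(rnorm(n), main = paste("n =", n), xlab = "x")
}
tt <- tktoplevel()
tkpack(tkscale(tt, from = 10, to = 10000, resolution = 10,
               variable = nVal, orient = "horizontal",
               label = "sample size", command = redraw))
redraw()                                 # draw the initial histogram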



Thanks,
--
View this message in context: 
http://n4.nabble.com/Interactive-plot-histogram-in-R-possible-tp1557984p1557984.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Math.factor error message

2010-02-16 Thread David Winsemius


On Feb 16, 2010, at 4:40 PM, David Winsemius wrote:



On Feb 16, 2010, at 2:25 PM, Hichem Ben Khedhiri wrote:


Dear R-helpers,

I am using a vrtest on time series data. My commands are as follows;

read.table("B.txt",sep="\t",fill=TRUE, na.strings = "NA")

require(vrtest)

rm(list=ls(all=TRUE))

datamat <- read.table("B.txt",sep="\t",fill=TRUE, na.strings = "NA")

column <- 1

nob <- nrow(datamat)

y <- log(datamat[2:nob,column])-log(datamat[1:(nob-1),column])

After, the use of last command, I get the following message;

in Math.factor(c(37L, 36L, 42L, 41L, 44L, 38L, 31L, 61L, 66L, 91L,  :

log not meaningful for factors


My data is composed of one column.


It appears that you may need to apply as.numeric(as.character( )) to that
column after input. (See the FAQ if you don't know why you need both
functions.) You seem to have created a factor in datamat. You can prevent
factor formation with the stringsAsFactors or colClasses arguments to
read.table.


Oops. I did not notice that you had attached the file. It's not a
tab-separated file, so both of your columns get read in as one character
variable (or maybe those are commas used as decimal points, in which case see
below). You also did not tell read.table() that you had a header, and, to
complicate things further, you enclosed the two columns within paired
double quotes.

Get rid of the enclosing quotes and read the file back in with the correct
arguments, perhaps sep="," unless you are in a locale that uses commas as
decimal points, in which case you only need read.csv2(). The fill=TRUE
appears superfluous.
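
A minimal sketch of that re-import (assumptions: B.txt has a header line and
uses "," as the decimal separator; adjust sep to whatever actually delimits
the columns):

datamat <- read.table("B.txt", header = TRUE, dec = ",", na.strings = "NA")
## or, to repair a column that has already come in as a factor:
y <- as.numeric(sub(",", ".", as.character(datamat[, 1])))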






I would appreciate, if any one could provide me with some hints to  
get

around the problem.



Best regards,




David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problems with boxplot in ggplot2:qplot

2010-02-16 Thread Brian Diggs
On 2/15/2010 2:41 PM, Dimitri Shvorob wrote:
> library(sqldf)
> library(ggplot2)
> 
> t = data.frame(t = seq.Date(as.Date("2009-01-01"), to =
> as.Date("2009-12-01"), by = "month"))
> x = data.frame(x = rnorm(5))
> df = sqldf("select * from t, x")

A simpler way to get random data that doesn't involve the sqldf package, and 
gets different x values for each date:

df <- data.frame(t = seq.Date(as.Date("2009-01-01"), to=as.Date("2009-12-01"), 
by="month"), x=rnorm(60))

> qplot(factor(df$t), df$x, geom = "boxplot") + theme_bw()

You are converting your dates to a factor, so they are no longer dates.  I'm 
guessing you did this to get a separate boxplot for each date, but that is not 
the right way to do that.  Use the "group" aesthetic to make different groups.

qplot(df$t, df$x, geom = "boxplot", group=df$t) + theme_bw()

> qplot(factor(df$t), df$x, geom = "boxplot") + theme_bw() +
> scale_x_date(major = "months",  minor = "weeks", format = "%b") 

qplot(df$t, df$x, geom = "boxplot", group=df$t) + theme_bw() +
scale_x_date(major = "months",  minor = "weeks", format = "%b")

> qplot(factor(df$t), df$x, geom = "boxplot") + theme_bw() +
> scale_x_date(format = "%b") 

qplot(df$t, df$x, geom = "boxplot", group=df$t) + theme_bw() +
scale_x_date(format = "%b")

--
Brian Diggs, Ph.D.
Senior Research Associate, Department of Surgery, Oregon Health & Science 
University

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Interactive plot (histogram) in R..........possible?

2010-02-16 Thread Megh

Dear all,

I am looking for some kind of interactive plot to draw a histogram for a
normal distribution with different sample sizes. In that plot there would be
some sort of "scroll-bar" which takes a minimum value of 10 and a maximum
value of 10,000 as the sample size (n) from a standard normal distribution.

As the user moves the scroll bar, the histogram for the chosen sample size
will be redrawn in the same plot.

Is there any function/package in R to do that? Your help will be highly
appreciated.

Thanks,
-- 
View this message in context: 
http://n4.nabble.com/Interactive-plot-histogram-in-R-possible-tp1557984p1557984.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Math.factor error message

2010-02-16 Thread David Winsemius


On Feb 16, 2010, at 2:25 PM, Hichem Ben Khedhiri wrote:


Dear R-helpers,

I am using a vrtest on time series data. My commands are as follows;

read.table("B.txt",sep="\t",fill=TRUE, na.strings = "NA")

require(vrtest)

rm(list=ls(all=TRUE))

datamat <- read.table("B.txt",sep="\t",fill=TRUE, na.strings = "NA")

column <- 1

nob <- nrow(datamat)

y <- log(datamat[2:nob,column])-log(datamat[1:(nob-1),column])

After, the use of last command, I get the following message;

in Math.factor(c(37L, 36L, 42L, 41L, 44L, 38L, 31L, 61L, 66L, 91L,  :

 log not meaningful for factors


My data is composed of one column.


It appears that you may need to apply as.numeric(as.character( )) to that
column after input. (See the FAQ if you don't know why you need both
functions.) You seem to have created a factor in datamat. You can prevent
factor formation with the stringsAsFactors or colClasses arguments to
read.table.




I would appreciate, if any one could provide me with some hints to get
around the problem.



Best regards,




David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] igraph

2010-02-16 Thread Paul Murrell

Hi


gabrielap wrote:

Hello...

 I am a systems engineering student and I am using R for the first time for
graph theory. I would like to know if there is any way to plot a graph
(igraph library) and obtain a graph whose vertices don't appear identified
with numbers; instead I would like the vertices to be identified with letters
or words, names... is there any way I can do this?



You can set the "label" attribute for the graph vertices, for example, ...

g <- graph.ring(10)
plot(g)
g <- set.vertex.attribute(g, "label", value=letters[1:10])
plot(g)
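
An equivalent shorthand (assuming a reasonably recent igraph) is to assign the
attribute directly, or to pass the labels at plotting time:

V(g)$label <- letters[1:10]
plot(g)
plot(g, vertex.label = letters[1:10])   # without storing the attribute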

Paul



 Thanks in Advance for your Help.

Sincerly.

Gabriela Prasad

P.D. Sorry for my not at all perfect English. I am from a Spanish-speaking country.


--
Dr Paul Murrell
Department of Statistics
The University of Auckland
Private Bag 92019
Auckland
New Zealand
64 9 3737599 x85392
p...@stat.auckland.ac.nz
http://www.stat.auckland.ac.nz/~paul/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R on MAC OS X

2010-02-16 Thread David Winsemius


On Feb 16, 2010, at 2:23 PM, Mestat wrote:



Hi listers,
I just got a Mac, so I am trying to use the command READ.TABLE but I am
getting an error that is probably caused by the wrong path that I am using...

The command is the following...

file<-read.table("/Users/Márcio/UdeM/Travail Dirigé/Data/MU284
Population.txt",header=T,skip=24)

And I am getting the following error...

Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
  cannot open file
'/User/Márcio/UdeM/Travail_Dirigé/Data/MU284_Population.txt': No such file
or directory

I checked the messages already but I didn't find what my mistake could be...

Any suggestions???


1) Post on the correct list:
https://stat.ethz.ch/mailman/listinfo/r-sig-mac

2a) Include sessionInfo()
2b) if not, then at least tell us whether you are running as a terminal
session or with the GUI.

3a) use file.choose() as your file argument ... or
3b) click-hold-drag the file icon onto the console ... this should create a
correct folder/file designation suitable for enclosure within quotes.


4) do not spell R functions with all caps.
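
A one-line sketch of option 3a, keeping the rest of the original call:

file <- read.table(file.choose(), header = TRUE, skip = 24)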

--
David.




--

David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] odfTable: table width and alignment

2010-02-16 Thread Max Kuhn
Aleksey,

Go ahead and send me a test file.

Also: what version of OO are you using (or are you using something else)?

Thanks,

Max

On Tue, Feb 16, 2010 at 3:40 PM, Aleksey Naumov  wrote:
> Max,
>
> Thank you for your help. Please see my responses below.
>
>
> On Tue, Feb 9, 2010 at 8:20 PM, Max Kuhn  wrote:
>>
>> > I am trying to figure out how to control table width and alignment on
>> > the
>> > page for a table generated by odfTable. Based on reading odfWeave
>> > documentation (including formattingOut.odt), here is how I manipulate
>> > the
>> > styles:
>> >
>> st = getStyleDefs()
>>  # modify the table style
>>  tab = getStyles()$table
>> st[[tab]]$align = "center"            # seems to have no effect
>> st[[tab]]$marginLeft = "2.0 in"     # seems to have no effect
>> setStyleDefs(st)
>> >
>> > My table always ends up fully justified (taking all page width). When I
>> > check Table Format in the output .odf, Alignment is always "Automatic".
>> > When
>> > doing Column/Optimal Width on the table, the table shrinks but becomes
>> > left
>> > aligned, not centered.
>>
>> It's hard to tell without an actual file to test with (and what
>> versions of R and odfWeave, and what platform etc)
>
> I am working with R 2.10.1 on Windows XP Professional 2002, SP 3. My package
> versions are listed below:
>> pkgVersions(type = "matrix")
>      [,1]                [,2]                 [,3]                [,4]
> [1,] "base (2.10.1)"     "grDevices (2.10.1)" "MASS (7.3-4)"      "stats (2.10.1)"
> [2,] "datasets (2.10.1)" "grid (2.10.1)"      "methods (2.10.1)"  "utils (2.10.1)"
> [3,] "graphics (2.10.1)" "lattice (0.17-26)"  "odfWeave (0.7.10)" "XML (2.6-0)"
>
> Would it help if I send you a small ODT file which concisely shows what I am
> trying to do?
>
>>
>> Did you read the example files in the examples directory of the
>> package? There are examples of what you are doing there and they seem
>> to work (last time I checked at least).
>>
>> Can you you reproduce those example tables?
>
> Yes, I looked into these examples, they are nice, thank you. I am able to
> run (odfWeave) 3 out of 4: "examples.odt", "simple.odt" and "testCases.odt",
> but they do not make use of the table style "align" or "margin" properties,
> so all tables are fully justified per default (except for the case where a
> code chunk lives in a small 1x1 table, which effectively restricts the
> output width -- in fact, this is a partial alternative to manipulating table
> margins).
>
> I cannot run odfWeave on "formatting.odt". The initial error message has to
> do with the fact that I do not have the "RTable2" style, which is referenced
> in chunk:
>
>    13 : echo term verbatim(label=showTableStyles)
>
> Error:  chunk 13 (label=showTableStyles)
> Error in names(x) <- value :
>   'names' attribute [1] must be the same length as the vector [0]
>
> However, once I comment out styleDetails("RTable2") from this chunk I get a
> different error, here is the full output:
>
>> odfWeave("formatting.odt", "[AN]_formatting_out.odt")
>   Copying  formatting.odt
>   Setting wd to
> C:\DOCUME~1\anaumov\LOCALS~1\Temp\RtmplqVqs5/odfWeave16142147610
>   Unzipping ODF file using unzip -o "formatting.odt"
> Archive:  formatting.odt
>  extracting: mimetype
>    creating: Configurations2/statusbar/
>   inflating: Configurations2/accelerator/current.xml
>    creating: Configurations2/floater/
>    creating: Configurations2/popupmenu/
>    creating: Configurations2/progressbar/
>    creating: Configurations2/menubar/
>    creating: Configurations2/toolbar/
>    creating: Configurations2/images/Bitmaps/
>  extracting: Pictures/121301899B642BA5.png
>  extracting: Pictures/10EA01757E0E322B.png
>  extracting: Pictures/1211018638600D3B.png
>   inflating: layout-cache
>   inflating: content.xml
>   inflating: styles.xml
>  extracting: meta.xml
>   inflating: Thumbnails/thumbnail.png
>   inflating: settings.xml
>   inflating: META-INF/manifest.xml
>
>   Removing  formatting.odt
>
>   Pre-processing the contents
> Error: cc$parentId == parentId is not TRUE
> In addition: Warning message:
> In function (name, .state)  : found start of code chunk in a code chunk
>
> Surprisingly, even if I remove the comment I still get the same message as
> above ("found start of code chunk ...") which is very puzzling. Is
> formatting.odt working for you with the latest R and package versions?
>
>
>> The table style is understood by odfWeave -- here is the output I get
>> after
>> sourcing my style definition file:
>>
>>> getStyles()$table
>> [1] "RTable1"
>>> getStyleDefs()[[getStyles()$table]]
>> $type
>> [1] "Table"
>>
>> $marginLeft
>> [1] "2.0 in"
>>
>> $marginRight
>> [1] "0.05in"
>>
>> $marginTop
>> [1] "0.05in"
>>
>> $marginBottom
>> [1] "0.05in"
>>
>> $align
>> [1] "center"
>>
>> I am not sure why these style options do not seem to have effect on the
>> output. My specific questions are:
>>
>> (1) How do I get table alignment to work?
>>
>>

Re: [R] READ.TABLE for Mac

2010-02-16 Thread Sharpie


Mestat wrote:
> 
> Hi listers,
> I just got a MAC, so I am trying to use the command READ.TABLE but I am
> getting a error that is probably caused by the wrong path that I am
> using...
> The command is the following...
> 
> file<-read.table("/Users/Márcio/UdeM/Travail Dirigé/Data/MU284
> Population.txt",header=T,skip=24)
> 
> And I am getting the following error...
> 
> Error in file(file, "rt") : cannot open the connection
> In addition: Warning message:
> In file(file, "rt") :
>   cannot open file
> '/User/Márcio/UdeM/Travail_Dirigé/Data/MU284_Population.txt': No such file
> or directory
> 
> I checked already the messages but I didn't find what could be my
> mistake... Any suggestions???
> Thanks in advance,
> Marcio 
> 



Well there are a couple inconsistencies between the example command you give
and the error message you are reporting:

  1. The root path '/Users'  is '/User' in the error message.

  2. The file name 'MU284 Population.txt'  is 'MU284_Population.txt' in the
error message.  The same thing has happened to the directory name "Travail
Dirigé"

I also use R on OS X and have never observed it alter or sanitize my file
names-- I suspect you may have mistyped the path.  If this is the case, use
the TAB key to help you autofill the path rather than trying to type it all
out correctly yourself.  Just type the first few letters of a folder or file
name and push TAB to have the rest autocompleted.

Good luck!

-Charlie
-- 
View this message in context: 
http://n4.nabble.com/READ-TABLE-for-Mac-tp1557879p1557965.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R on MAC OS X

2010-02-16 Thread Mestat

Hi listers,
I just got a Mac, so I am trying to use the command READ.TABLE but I am
getting an error that is probably caused by the wrong path that I am using...
The command is the following...

file<-read.table("/Users/Márcio/UdeM/Travail Dirigé/Data/MU284
Population.txt",header=T,skip=24)

And I am getting the following error...

Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
  cannot open file
'/User/Márcio/UdeM/Travail_Dirigé/Data/MU284_Population.txt': No such file
or directory

I checked already the messages but I didn't find what could be my mistake...
Any suggestions???
Thanks in advance,
Marcio
-- 
View this message in context: 
http://n4.nabble.com/R-on-MAC-OS-X-tp802729p1557818.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Analyzing event times with densityplot

2010-02-16 Thread David Lindelöf
Dear useRs,

I have a file with a sequence of event timestamps, for instance the
times at which someone visits a website:

02.02.2010 09:00:00
02.02.2010 09:00:00
02.02.2010 09:00:00
02.02.2010 09:00:01
02.02.2010 09:00:03
02.02.2010 09:00:05
02.02.2010 09:00:06
02.02.2010 09:00:06
02.02.2010 09:00:09
02.02.2010 09:00:11
02.02.2010 09:00:11
02.02.2010 09:00:11
etc, for several thousand rows.

I'd like to get an idea how the web hits are distributed over time,
over the week etc. I extract the data to a dataframe and I tried
plotting densityplots:

library(lattice)
data <- as.POSIXct(scan("data.txt",
what=character(0),
sep="\n"),
   format="%d.%m.%Y %T")
data.lt <- as.POSIXlt(data)
data.df <- data.frame(time=data,
  sec=jitter(data.lt$sec, amount=.5),
  min=data.lt$min,
  hour=data.lt$hour,
  wday=weekdays(data))

densityplot(~(sec+60*min+3600*hour)|wday,
data.df,
plot.points=FALSE)


1) Is a densityplot the most appropriate way to analyze this kind of data?

2) The densityplot yields a pdf, but I'd rather see the number of
visits per second on the y-axis. How can I do that?

3) I've found that the shape of the plot depends heavily on the chosen
bandwidth. Ideally I'd like to identify spikes when several visitors
come to the site at the "same" time (say, within 5 seconds of each
other). How should I choose the bandwidth (and kernel for that
matter)?
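
On (2), one way to put counts per interval on the y-axis is a lattice
histogram rather than a density estimate -- a sketch using the data.df built
above, with roughly one-minute bins (the bin width is an assumption to tune):

histogram(~ (sec + 60*min + 3600*hour) | wday, data = data.df,
          type = "count", nint = 24 * 60)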

Your help would be much, much appreciated.


-- 

David Lindelöf, Ph.D.
+41 (0)79 415 66 41 or skype:david.lindelof
Better Software for Tomorrow's Cities:
  http://computersandbuildings.com
Follow me on Twitter:
  http://twitter.com/dlindelof

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Math.factor error message

2010-02-16 Thread Hichem Ben Khedhiri
Dear R-helpers,



I am using a vrtest on time series data. My commands are as follows;



read.table("B.txt",sep="\t",fill=TRUE, na.strings = "NA")

require(vrtest)

rm(list=ls(all=TRUE))

datamat <- read.table("B.txt",sep="\t",fill=TRUE, na.strings = "NA")

column <- 1

nob <- nrow(datamat)

y <- log(datamat[2:nob,column])-log(datamat[1:(nob-1),column])



After, the use of last command, I get the following message;



in Math.factor(c(37L, 36L, 42L, 41L, 44L, 38L, 31L, 61L, 66L, 91L,  :

  log not meaningful for factors



My data is composed of one column.



I would appreciate, if any one could provide me with some hints to get
around the problem.



Best regards,

Ben
BNS CER
"11,14"
"11,11"
"11,25"
"11,22"
"11,31"
"11,15"
"11,00"
"11,88"
"11,94"
"12,44"
"12,90"
"12,94"
"12,97"
"12,80"
"12,78"
"13,14"
"13,72"
"13,12"
"12,93"
"12,62"
"12,62"
"12,22"
"12,30"
"12,12"
"12,16"
"11,89"
"12,30"
"12,37"
"12,50"
"12,50"
"12,41"
"12,72"
"12,54"
"12,44"
"12,73"
"12,94"
"13,04"
"13,16"
"13,60"
"13,58"
"13,65"
"13,67"
"13,80"
"13,87"
"13,33"
"13,32"
"13,42"
"13,51"
"13,63"
"13,90"
"13,68"
"13,57"
"13,43"
"13,61"
"13,84"
"13,48"
"13,39"
"12,83"
"12,54"
"12,59"
"12,57"
"12,15"
"11,93"
"11,98"
"12,09"
"11,91"
"12,27"
"12,12"
"12,00"
"12,35"
"12,69"
"12,83"
"12,59"
"12,64"
"12,88"
"12,90"
"12,37"
"12,97"
"13,37"
"13,41"
"13,56"
"13,59"
"13,52"
"13,53"
"13,33"
"13,22"
"13,36"
"13,43"
"13,40"
"13,45"
"13,43"
"13,36"
"13,38"
"13,27"
"13,10"
"13,02"
"12,81"
"12,70"
"12,69"
"12,66"
"12,60"
"12,64"
"12,67"
"12,67"
"12,69"
"12,81"
"12,67"
"12,36"
"12,40"
"12,39"
"12,64"
"12,87"
"12,85"
"12,85"
"12,87"
"12,92"
"12,90"
"12,88"
"13,01"
"12,70"
"12,32"
"12,53"
"12,19"
"11,98"
"11,94"
"12,00"
"11,80"
"11,92"
"11,88"
"12,00"
"11,80"
"12,09"
"11,80"
"11,82"
"11,65"
"11,42"
"11,16"
"11,40"
"11,03"
"11,04"
"11,10"
"10,61"
"11,10"
"11,50"
"11,45"
"11,35"
"11,50"
"12,03"
"12,20"
"12,13"
"12,65"
"12,79"
"12,52"
"12,79"
"12,79"
"12,56"
"12,58"
"12,33"
"12,10"
"12,55"
"12,00"
"11,74"
"11,76"
"11,85"
"12,03"
"12,27"
"12,81"
"12,47"
"12,06"
"11,92"
"11,59"
"11,80"
"11,47"
"11,59"
"11,50"
"11,61"
"11,63"
"11,35"
"11,14"
"11,00"
"10,88"
"11,00"
"11,01"
"10,90"
"10,89"
"10,74"
"10,45"
"10,30"
"10,56"
"10,85"
"10,60"
"10,83"
"10,45"
"10,29"
"10,30"
"9,75"
"9,55"
"10,23"
"10,72"
"10,22"
"10,77"
"11,25"
"11,74"
"11,43"
"11,20"
"10,83"
"10,78"
"11,00"
"10,63"
"10,15"
"10,80"
"10,78"
"10,16"
"9,45"
"9,50"
"9,91"
"9,25"
"8,68"
"8,94"
"9,30"
"9,75"
"9,10"
"8,28"
"7,95"
"8,26"
"7,60"
"7,96"
"8,78"
"9,64"
"9,60"
"9,61"
"9,76"
"10,00"
"10,34"
"10,60"
"10,54"
"10,61"
"10,53"
"10,73"
"10,43"
"10,63"
"10,51"
"10,39"
"10,46"
"11,30"
"12,21"
"12,06"
"12,33"
"12,58"
"12,68"
"12,80"
"13,53"
"13,55"
"13,30"
"13,25"
"13,53"
"13,40"
"13,06"
"13,01"
"12,83"
"12,91"
"13,07"
"13,09"
"13,90"
"13,43"
"13,61"
"13,22"
"13,36"
"13,23"
"13,38"
"13,28"
"13,18"
"13,90"
"14,31"
"14,34"
"14,58"
"14,08"
"14,45"
"14,18"
"14,15"
"14,35"
"13,90"
"13,66"
"15,13"
"15,15"
"15,55"
"15,89"
"16,08"
"15,74"
"15,63"
"15,69"
"15,66"
"15,28"
"15,22"
"15,44"
"15,08"
"15,02"
"15,23"
"15,50"
"15,25"
"15,17"
"16,20"
"16,95"
"16,65"
"18,05"
"18,65"
"18,93"
"18,98"
"19,85"
"20,26"
"19,65"
"19,08"
"19,78"
"18,83"
"18,55"
"18,38"
"18,80"
"19,57"
"18,88"
"18,65"
"18,88"
"19,83"
"19,43"
"19,56"
"20,15"
"20,18"
"20,18"
"19,25"
"18,98"
"18,90"
"19,10"
"19,59"
"18,83"
"18,85"
"19,13"
"20,05"
"20,31"
"20,65"
"20,25"
"20,45"
"20,80"
"20,90"
"20,63"
"20,65"
"20,30"
"20,10"
"20,05"
"20,25"
"20,00"
"19,98"
"19,90"
"19,60"
"19,74"
"19,83"
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Random center effect

2010-02-16 Thread David Hajage
Hello R users,

I'm trying to take into account the center effect in a clinical study
comparing 3 treatments (data from the book Applied Mixed Models in Medicine
Statistics by Helen Brown and Robin Prescott):
- dbp: diastolic blood pressure at 8 weeks
- dbp0: diastolic blood pressure at inclusion
- treat: three treatments A, B and C
- centre: centres of the study

First, I have fitted a marginal model with gls (nlme package), with a
compound symmetry structure:

require(nlme)
gls1 <- gls(dbp ~ factor(centre) + treat + dbp0, data = tension1,
correlation = corCompSymm(form = ~ 1 | centre))
summary(gls1)
anova(gls1)

Denom. DF: 224
               numDF  F-value p-value
(Intercept)        1 33349.45  <.0001
factor(centre)    28     2.38  0.0003
treat              2     1.79  0.1700
dbp0               1     2.61  0.1075

Second, I tried to fit a random effect model with lme:

lme1 <- lme(dbp ~ factor(centre) + treat + dbp0, random = ~ 1 | centre, data
= tension1)
anova(lme1)
               numDF denDF  F-value p-value
(Intercept)        1   224 3753.228  <.0001
factor(centre)    28     0    0.335     NaN
treat              2   224    1.786  0.1700
dbp0               1   224    2.612  0.1075
Warning message:
In pf(q, df1, df2, lower.tail, log.p) : NaNs produced

I thought these two models were equivalent, and I don't understand why the fixed
center effect produced NaN in the lme model.

I also tried with lmer (from lme4 package):

require(lme4)
lmer1 <- lmer(dbp ~ factor(centre) + treat + dbp0 + (1 | centre), data =
tension1)
anova(lmer1)

Analysis of Variance Table
               Df  Sum Sq Mean Sq F value
factor(centre) 28 1214.42  43.372  0.7026
treat           2  220.52 110.262  1.7862
dbp0            1  161.23 161.231  2.6118

F values for centre effect are very different in the 3 models.

It is surely a naive question, but could someone explain to me what
happens here?

Thank you very much in advance.

Here the data :

tension1 <-
structure(list(patient = c(1L, 3L, 4L, 5L, 7L, 8L, 9L, 10L, 11L,
13L, 14L, 15L, 18L, 19L, 23L, 24L, 25L, 28L, 30L, 31L, 32L, 34L,
35L, 36L, 37L, 38L, 43L, 44L, 45L, 46L, 47L, 48L, 49L, 50L, 52L,
53L, 54L, 55L, 56L, 57L, 58L, 60L, 63L, 64L, 70L, 71L, 72L, 73L,
74L, 80L, 81L, 82L, 84L, 92L, 93L, 94L, 95L, 96L, 97L, 98L, 99L,
100L, 101L, 102L, 103L, 104L, 105L, 106L, 107L, 108L, 109L, 110L,
111L, 112L, 113L, 114L, 116L, 117L, 118L, 119L, 120L, 122L, 124L,
125L, 127L, 128L, 129L, 130L, 131L, 132L, 133L, 134L, 135L, 136L,
137L, 139L, 140L, 141L, 142L, 143L, 144L, 145L, 146L, 147L, 150L,
157L, 158L, 160L, 161L, 162L, 165L, 166L, 167L, 169L, 170L, 171L,
172L, 173L, 174L, 175L, 176L, 177L, 178L, 180L, 181L, 182L, 183L,
185L, 186L, 187L, 189L, 190L, 191L, 192L, 199L, 200L, 201L, 202L,
203L, 204L, 205L, 206L, 207L, 208L, 212L, 213L, 214L, 216L, 217L,
218L, 219L, 220L, 221L, 223L, 224L, 225L, 226L, 227L, 228L, 229L,
230L, 237L, 241L, 242L, 243L, 244L, 245L, 246L, 247L, 248L, 249L,
250L, 251L, 252L, 255L, 256L, 258L, 260L, 261L, 262L, 263L, 264L,
265L, 266L, 267L, 268L, 269L, 270L, 271L, 272L, 273L, 274L, 275L,
279L, 280L, 281L, 282L, 283L, 284L, 285L, 286L, 287L, 290L, 291L,
292L, 293L, 294L, 301L, 302L, 303L, 304L, 305L, 306L, 307L, 308L,
309L, 310L, 311L, 312L, 319L, 320L, 322L, 323L, 324L, 325L, 326L,
327L, 328L, 329L, 330L, 332L, 333L, 334L, 335L, 337L, 338L, 339L,
340L, 341L, 342L, 343L, 344L, 345L, 346L, 347L, 348L, 350L, 352L,
355L, 357L, 361L, 362L, 363L, 364L, 365L, 366L), centre = c(29L,
5L, 5L, 29L, 3L, 3L, 3L, 3L, 3L, 36L, 36L, 36L, 36L, 5L, 5L,
5L, 15L, 15L, 15L, 4L, 4L, 4L, 4L, 4L, 23L, 23L, 7L, 7L, 7L,
7L, 7L, 7L, 30L, 30L, 30L, 30L, 30L, 18L, 18L, 18L, 18L, 18L,
24L, 23L, 35L, 35L, 35L, 27L, 27L, 1L, 1L, 1L, 1L, 26L, 26L,
26L, 26L, 26L, 14L, 14L, 14L, 14L, 14L, 14L, 11L, 11L, 11L, 11L,
11L, 11L, 7L, 7L, 7L, 7L, 7L, 7L, 6L, 6L, 6L, 6L, 6L, 32L, 32L,
32L, 25L, 25L, 25L, 25L, 25L, 25L, 31L, 31L, 31L, 31L, 31L, 14L,
14L, 14L, 14L, 14L, 14L, 37L, 37L, 37L, 37L, 31L, 31L, 31L, 31L,
31L, 13L, 13L, 13L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 9L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 31L, 31L, 31L, 31L, 31L,
31L, 8L, 8L, 8L, 8L, 11L, 11L, 11L, 11L, 12L, 12L, 12L, 12L,
12L, 36L, 36L, 36L, 36L, 36L, 36L, 40L, 40L, 41L, 14L, 14L, 14L,
14L, 14L, 14L, 4L, 4L, 4L, 4L, 4L, 4L, 36L, 36L, 36L, 1L, 1L,
1L, 1L, 1L, 7L, 7L, 7L, 7L, 7L, 7L, 1L, 1L, 1L, 1L, 1L, 5L, 5L,
5L, 5L, 25L, 26L, 26L, 26L, 26L, 12L, 12L, 12L, 12L, 12L, 15L,
15L, 15L, 15L, 15L, 15L, 1L, 1L, 1L, 1L, 1L, 1L, 36L, 36L, 36L,
36L, 36L, 31L, 31L, 31L, 31L, 31L, 31L, 1L, 1L, 1L, 1L, 14L,
14L, 14L, 14L, 14L, 14L, 31L, 31L, 31L, 31L, 31L, 31L, 1L, 1L,
36L, 36L, 31L, 31L, 31L, 31L, 31L, 31L), treat = structure(c(3L,
2L, 1L, 1L, 1L, 2L, 2L, 1L, 3L, 2L, 1L, 3L, 1L, 2L, 1L, 3L, 3L,
2L, 2L, 2L, 3L, 2L, 1L, 3L, 3L, 3L, 2L, 1L, 1L, 3L, 3L, 2L, 2L,
3L, 3L, 2L, 1L, 2L, 1L, 3L, 3L, 2L, 3L, 1L, 3L, 2L, 1L, 2L, 3L,
2L, 3L, 1L, 3L, 3L, 1L, 1L, 2L, 3L, 2L, 1L, 1L, 3L, 2L, 3L, 3L,
2L, 3L, 2L, 1L, 1L, 3L, 2L, 1L, 3L, 1L, 2L, 3L, 1L, 3L, 2L, 1L,
1L,

Re: [R] odfTable: table width and alignment

2010-02-16 Thread Aleksey Naumov
Max,

Thank you for your help. Please see my responses below.


On Tue, Feb 9, 2010 at 8:20 PM, Max Kuhn  wrote:

> > I am trying to figure out how to control table width and alignment on the
> > page for a table generated by odfTable. Based on reading odfWeave
> > documentation (including formattingOut.odt), here is how I manipulate the
> > styles:
> >
> st = getStyleDefs()
>  # modify the table style
>  tab = getStyles()$table
> st[[tab]]$align = "center"# seems to have no effect
> st[[tab]]$marginLeft = "2.0 in" # seems to have no effect
> setStyleDefs(st)
> >
> > My table always ends up fully justified (taking all page width). When I
> > check Table Format in the output .odf, Alignment is always "Automatic".
> When
> > doing Column/Optimal Width on the table, the table shrinks but becomes
> left
> > aligned, not centered.
>
> It's hard to tell without an actual file to test with (and what
> versions of R and odfWeave, and what platform etc)
>

I am working with R 2.10.1 on Windows XP Professional 2002, SP 3. My package
versions are listed below:
> pkgVersions(type = "matrix")
     [,1]                [,2]                 [,3]                [,4]
[1,] "base (2.10.1)"     "grDevices (2.10.1)" "MASS (7.3-4)"      "stats (2.10.1)"
[2,] "datasets (2.10.1)" "grid (2.10.1)"      "methods (2.10.1)"  "utils (2.10.1)"
[3,] "graphics (2.10.1)" "lattice (0.17-26)"  "odfWeave (0.7.10)" "XML (2.6-0)"

Would it help if I send you a small ODT file which concisely shows what I am
trying to do?


> Did you read the example files in the examples directory of the
> package? There are examples of what you are doing there and they seem
> to work (last time I checked at least).
>
> Can you you reproduce those example tables?
>

Yes, I looked into these examples, they are nice, thank you. I am able to
run (odfWeave) 3 out of 4: "examples.odt", "simple.odt" and "testCases.odt",
but they do not make use of the table style "align" or "margin" properties,
so all tables are fully justified per default (except for the case where a
code chunk lives in a small 1x1 table, which effectively restricts the
output width -- in fact, this is a partial alternative to manipulating table
margins).

I cannot run odfWeave on "formatting.odt". The initial error message has to
do with the fact that I do not have the "RTable2" style, which is referenced
in chunk:

   13 : echo term verbatim(label=showTableStyles)

Error:  chunk 13 (label=showTableStyles)
Error in names(x) <- value :
  'names' attribute [1] must be the same length as the vector [0]

However, once I comment out styleDetails("RTable2") from this chunk I get a
different error, here is the full output:

> odfWeave("formatting.odt", "[AN]_formatting_out.odt")
  Copying  formatting.odt
  Setting wd to
C:\DOCUME~1\anaumov\LOCALS~1\Temp\RtmplqVqs5/odfWeave16142147610
  Unzipping ODF file using unzip -o "formatting.odt"
Archive:  formatting.odt
 extracting: mimetype
   creating: Configurations2/statusbar/
  inflating: Configurations2/accelerator/current.xml
   creating: Configurations2/floater/
   creating: Configurations2/popupmenu/
   creating: Configurations2/progressbar/
   creating: Configurations2/menubar/
   creating: Configurations2/toolbar/
   creating: Configurations2/images/Bitmaps/
 extracting: Pictures/121301899B642BA5.png
 extracting: Pictures/10EA01757E0E322B.png
 extracting: Pictures/1211018638600D3B.png
  inflating: layout-cache
  inflating: content.xml
  inflating: styles.xml
 extracting: meta.xml
  inflating: Thumbnails/thumbnail.png
  inflating: settings.xml
  inflating: META-INF/manifest.xml

  Removing  formatting.odt

  Pre-processing the contents
Error: cc$parentId == parentId is not TRUE
In addition: Warning message:
In function (name, .state)  : found start of code chunk in a code chunk

Surprisingly, even if I remove the comment I still get the same message as
above ("found start of code chunk ...") which is very puzzling. Is
formatting.odt working for you with the latest R and package versions?


> The table style is understood by odfWeave -- here is the output I get
after
> sourcing my style definition file:
>
>> getStyles()$table
> [1] "RTable1"
>> getStyleDefs()[[getStyles()$table]]
> $type
> [1] "Table"
>
> $marginLeft
> [1] "2.0 in"
>
> $marginRight
> [1] "0.05in"
>
> $marginTop
> [1] "0.05in"
>
> $marginBottom
> [1] "0.05in"
>
> $align
> [1] "center"
>
> I am not sure why these style options do not seem to have effect on the
> output. My specific questions are:
>
> (1) How do I get table alignment to work?
>
> (2) Is the only way to control table width via setting the margins in the
> $table style? What am I doing wrong in the above style code?

 At present, yes.
>

Ok, thank you. As I found out by looking at testCases.odt, another way which
may work for simple tables is to embed a chunk into a 1x1 table with
specified dimensions, even though this approach is not flexible (no way to
control table dimensions at

[R] General JRI questions

2010-02-16 Thread Ralf B
Hi all,

I recently started to work with JRI/rJava and have already run R
statements from Java. Since I have larger units encapsulated in R
scripts, I would like to run those from Java directly. I was not able
to answer the following questions and would appreciate your help on
this:

How do I run entire R scripts from Java using JRI? (scripts are
located in the same folder)
How do I pass data from java to these scripts?
How can I return results from these scripts back to Java?

Ralf

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] False convergence of a glmer model

2010-02-16 Thread Douglas Bates
On Tue, Feb 16, 2010 at 9:38 AM, Shige Song  wrote:
> Hi Doug,
>
> Thanks. Next time I will post it to the R-SIG-mixed-models mailing
> list, as you suggested.

I have added R-SIG-mixed-models to the cc: list.  I suggest we drop
the cc: to R-help after this message.

> With respect to your question, the answer is no, these parameters do
> not make sense. Here is the Stata output from "exactly" the same
> model:

> . xi:xtlogit inftmort i.cohort, i(code)
> i.cohort          _Icohort_1-3        (naturally coded; _Icohort_1 omitted)
>
> Fitting comparison model:
>
> Iteration 0:   log likelihood = -1754.4476
> Iteration 1:   log likelihood = -1749.3366
> Iteration 2:   log likelihood = -1749.2491
> Iteration 3:   log likelihood = -1749.2491
>
> Fitting full model:
>
> tau =  0.0     log likelihood = -1749.2491
> tau =  0.1     log likelihood = -1743.8418
> tau =  0.2     log likelihood = -1739.0769
> tau =  0.3     log likelihood = -1736.4914
> tau =  0.4     log likelihood = -1739.5415
>
> Iteration 0:   log likelihood = -1736.4914
> Iteration 1:   log likelihood = -1722.6629
> Iteration 2:   log likelihood = -1694.9114
> Iteration 3:   log likelihood = -1694.6509
> Iteration 4:   log likelihood =  -1694.649
> Iteration 5:   log likelihood =  -1694.649
>
> Random-effects logistic regression              Number of obs      =     21694
> Group variable: code                            Number of groups   =     10789
>
> Random effects u_i ~ Gaussian                   Obs per group: min =         1
>                                                               avg =       2.0
>                                                               max =         9
>
>                                                Wald chi2(2)       =      8.05
> Log likelihood  =  -1694.649                    Prob > chi2        =    0.0178

Well, the quantities being displayed in the iteration output from
glmer are the deviance and the parameter values.  This stata
log-likelihood corresponds to a deviance of 3389.3.  It is possible
that glmer and stata don't measure the log-likelihood on the same
scale but, if not, then the estimates where glmer gets stalled are
producing a lower deviance of 2837.49.

The reason that glmer is getting stalled is because of the coefficient
of -7.45883 for the intercept.  This corresponds to a mean success
probability of 0.0005 for cohort 1.  The stata coefficient estimate of
-5.214642 corresponds to a mean success probability of  0.0055 for
cohort 1, which is still very, very small.  The success probabilities
for the other cohorts are going to be even smaller.  What is the
overall proportion of zeros in the response?
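
(As a quick check of the two conversions above:)

-2 * (-1694.649)                # Stata's log-likelihood on the deviance scale: 3389.298
plogis(c(-7.45883, -5.214642))  # intercepts as mean success probabilities: ~0.00058 and ~0.0054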

The optimization to determine the maximum likelihood estimates of the
coefficients is being done on the scale of the linear predictor.  When
the values of the linear predictor become very small or very large,
the fitted values become insensitive to the coefficients.  The fact
that one program converges and another one doesn't may have more to do
with the convergence criterion than with the quality of the fit.


> --
>    inftmort |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
> -+
>  _Icohort_2 |  -.5246846   .1850328    -2.84   0.005    -.8873422   -.1620269
>  _Icohort_3 |  -.1424331    .140369    -1.01   0.310    -.4175513     .132685
>       _cons |  -5.214642   .1839703   -28.35   0.000    -5.575217   -4.854067
> -+
>    /lnsig2u |   .9232684   .1416214                      .6456956    1.200841
> -+
>     sigma_u |   1.586665   .1123528                      1.381055    1.822885
>         rho |   .4335015   .0347791                      .3669899    .5024984
> --
> Likelihood-ratio test of rho=0: chibar2(01) =   109.20 Prob >= chibar2 = 0.000
>
> The difference is quite huge, and Stata did not have any difficulties
> estimating this model, which makes me feel that I might have got some very
> basic specification wrong in my R model...
>
> Best,
> Shige
>
> On Tue, Feb 16, 2010 at 10:29 AM, Douglas Bates  wrote:
>> On Tue, Feb 16, 2010 at 9:05 AM, Shige Song  wrote:
>>> Dear All,
>>
>>> I am trying to fit a 2-level random intercept logistic regression on a
>>> data set of 20,000 cases.  The model is specified as the following:
>>
>>>  m1 <- glmer(inftmort ~ as.factor(cohort) + (1|code), family=binomial, 
>>> data=d)
>>
>>> I got "Warning message: In mer_finalize(ans) : false convergence (8)"
>>
>> That message means that the optimizer function, nlminb, got stalled.
>> It has converged but the point at which is has converged is not
>> clearly the optimum.  In many cases this just indicates that the
>> optim

Re: [R] Legend Text Font Size

2010-02-16 Thread Uwe Ligges



On 16.02.2010 19:59, Rob Helpert wrote:

Hi.

I have a plot containing a large number of lines.  I have placed a
legend in the plot, but with so many lines, the legend takes up a lot
of space.  I have tried to reduce the spacing between the lines using
the legend parameter x.intersp=0.7, but this does not compress the
legend enough.  Is there some way to make the text font size smaller
in "legend"?



Use argument cex, as usual in plot functions:

plot(1)
legend(1,1,legend="a very long text - some very small font size", pch=1, 
cex=0.5)


Uwe Ligges


Thanks,
RH.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lmer - error asMethod(object) : matrix is not symmetric

2010-02-16 Thread Douglas Bates
On Tue, Feb 16, 2010 at 10:54 AM, Luisa Carvalheiro
 wrote:
> Dear Douglas,
>
> Thank you for your reply.
> Just some extra info on the dataset: In my case Number of obs is 33,
> and number of groups of factor(Farm_code) is 12.
> This is the information on iterations I get:
>
> summary(lmer(round(SR_SUN)~Dist_NV + (1|factor(Farm_code)) ,
> family=poisson, verbose =TRUE))
>  0:     60.054531:  1.06363  2.14672 -0.000683051
>  1:     60.054531:  1.06363  2.14672 -0.000683051
> Error in asMethod(object) : matrix is not symmetric [1,2]
> In addition: Warning message:
> In mer_finalize(ans) : singular convergence (7)

> When I run a similar model (exp variable Dist_hives) the number of
> iterations is 11:

>  summary(lmer(round(SR_SUN)~Dist_hives + (1|factor(Farm_code)) ,
> family=poisson, verbose =TRUE))
>  0:     61.745238: 0.984732  1.63769 0.000126484
>  1:     61.648229: 0.984731  1.63769 -2.08637e-05
>  2:     61.498777: 0.984598  1.63769 4.11867e-05
>  3:     47.960908: 0.381062  1.63585 6.77029e-05
>  4:     46.223789: 0.250732  1.66727 8.31854e-05
>  5:     46.23: 0.250732  1.66727 6.97790e-05
>  6:     46.216710: 0.250730  1.66727 7.60560e-05
>  7:     46.168835: 0.230386  1.64883 9.16430e-05
>  8:     46.165955: 0.228062  1.65658 8.70694e-05
>  9:     46.165883: 0.228815  1.65737 8.63400e-05
>  10:     46.165883: 0.228772  1.65734 8.63698e-05
>  11:     46.165883: 0.228772  1.65734 8.63701e-05

> I am very confused with the fact that it runs with Dist_hives and not
> with Dist_NV. Both variables are distance values, the first having no
> obvious relation with the response variable and the second (Dist_NV)
> seems to have a negative effect on SR_SUN.

As you say, Dist_hives has very little relationship to the response
variable.  The two fixed-effects coefficients are the last two
parameters in the iteration output (the first parameter is the
standard deviation of the random effects).  So the slope with respect
to Dist_hives for the linear predictor is about 8.6e-05.  Either you have
very large magnitudes of Dist_hives or that variable does not have
much predictive power.

For the second (Dist_NV) variable, the optimization algorithm is not
able to make progress from the starting estimates.  This may be an
indication that the problem is badly scaled.  Are the values of
Dist_NV very large?  If so, you may want to change the unit (say from
meters to kilometers) so the values are much smaller.

It may also help to use a starting estimate for the standard deviation
of the random effects derived from the other model.  That is, include
start = 0.22 in the call to lmer.
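
A sketch of those two suggestions combined (the kilometre conversion assumes
Dist_NV is currently recorded in metres):

Dist_NV_km <- Dist_NV / 1000
summary(lmer(round(SR_SUN) ~ Dist_NV_km + (1 | factor(Farm_code)),
             family = poisson, verbose = TRUE, start = 0.22))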

> Does this information helps identifying the problem with my data/analysis?
>
> Thank you,
>
> Luisa
>
>
>
>
> On Tue, Feb 16, 2010 at 5:35 PM, Douglas Bates  wrote:
>> This is similar to another question on the list today.
>>
>> On Tue, Feb 16, 2010 at 4:39 AM, Luisa Carvalheiro
>>  wrote:
>>> Dear R users,
>>>
>>> I  am having problems using package lme4.
>>>
>>> I am trying to analyse the effect of a continuous variable (Dist_NV)
>>> on a count data response variable (SR_SUN) using Poisson error
>>> distribution. However, when I run the model:
>>>
>>> summary(lmer((SR_SUN)~Dist_NV + (1|factor(Farm_code)) ,
>>> family=poisson, REML=FALSE))
>>>
>>> 1 error message and 1 warning message show up:
>>>
>>> in asMethod(object) : matrix is not symmetric [1,2]
>>> In addition: Warning message:
>>> In mer_finalize(ans) : singular convergence (7)
>>
>> So the first thing to do is to include the optional argument verbose =
>> TRUE in the call to lmer.  (Also, REML = FALSE is ignored for
>> Generalized Linear Mixed Models and can be omitted. although there is
>> no harm in including it.)
>>
>> You need to know where the optimizer is taking the parameter values
>> before you can decide why.
>>
>> P.S. Questions like this will probably be more readily answered on the
>> R-SIG-Mixed-Models mailing list.
>>
>>> A model including  Dist_NV together with other variables runs with no 
>>> problems.
>>> What am I doing wrong?
>>>
>>> Thank you,
>>>
>>> Luisa
>>>
>>>
>>> --
>>> Luisa Carvalheiro, PhD
>>> Southern African Biodiversity Institute, Kirstenbosch Research Center, 
>>> Claremont
>>> & University of Pretoria
>>> Postal address - SAWC Pbag X3015 Hoedspruit 1380, South Africa
>>> telephone - +27 (0) 790250944
>>> carvalhe...@sanbi.org
>>> lgcarvalhe...@gmail.com
>>>
>>> __
>>> R-help@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>
>
>
>
> --
> Luisa Carvalheiro, PhD
> Southern African Biodiversity Institute, Kirstenbosch Research Center, 
> Claremont
> & University of Pretoria
> Postal address - SAWC Pbag X3015 Hoedspruit 1380, South Africa
> telephone - +27 (0) 790250944
> carvalhe...@sanbi.org
> lgcarvalhe...@gmail.com
>

___

Re: [R] Reading sas7bdat files directly

2010-02-16 Thread Annoyia Mouse

If you don't have SAS and still need to read or write sas7bdat files: there
is the "World Programming System" (WPS) (commercial software). 
http://www.teamwpc.co.uk/home/
-- 
View this message in context: 
http://n4.nabble.com/Reading-sas7bdat-files-directly-tp1469515p1557807.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] delete repeated values - not unique...

2010-02-16 Thread Alex Yuan

rle() is cool.

-
PhD Candidate in Statistics
Dept. of Mathematics & Statistics
University of New Hampshire
Durham, NH 03824 USA
-- 
View this message in context: 
http://n4.nabble.com/delete-repeated-values-not-unique-tp1557625p1557728.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Legend Text Font Size

2010-02-16 Thread Rob Helpert
Hi.

I have a plot containing a large number of lines.  I have placed a
legend in the plot, but with so many lines, the legend takes up a lot
of space.  I have tried to reduce the spacing between the lines using
the legend parameter x.intersp=0.7, but this does not compress the
legend enough.  Is there some way to make the text font size smaller
in "legend"?

Thanks,
RH.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Double Integral Minimization Problem

2010-02-16 Thread MVika

Hi,
I am using v1.0-4 of adapt and v2.10.1 of R.

Thank you,
M.
-- 
View this message in context: 
http://n4.nabble.com/Double-Integral-Minimization-Problem-tp1475150p155.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] SVM e1071

2010-02-16 Thread Anderson de Rezende Rocha
Dear R-users, 

Does anyone know how to get margin information from an SVM fitted with package e1071?

I have a two-class classification problem and I'd like to have, for each input 
example, the distance of this example to the margin just like it's possible to 
obtain using C-based SVM-light, for instance. 
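
A minimal sketch of one way to get at this with e1071, on simulated data; the
rescaling step assumes a linear kernel, where dividing the decision value by
the norm of the weight vector gives the geometric distance to the hyperplane:

  library(e1071)
  set.seed(1)
  x <- matrix(rnorm(80), ncol = 2)
  y <- factor(rep(c("a", "b"), each = 20))
  fit  <- svm(x, y, kernel = "linear", scale = FALSE)
  pred <- predict(fit, x, decision.values = TRUE)
  dv   <- attr(pred, "decision.values")   # signed decision values f(x)
  w    <- t(fit$coefs) %*% fit$SV         # weight vector (linear kernel only)
  dist <- dv / sqrt(sum(w^2))             # distance of each example to the hyperplane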

Thanks in advance, 

-- Anderson


  
[[elided Yahoo spam]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] RODBC missing values in integer columns

2010-02-16 Thread Rob Forler
some more info
> t(t(odbcGetInfo(connection)))
 [,1]
DBMS_Name"Adaptive Server Anywhere"
DBMS_Ver "12.70."
Driver_ODBC_Ver  "03.51"
Data_Source_Name "dbname"
Driver_Name  "Adaptive Server Anywhere"
Driver_Ver   "09.00.0001"
ODBC_Ver "03.52."
Server_Name  "dbname"



On Tue, Feb 16, 2010 at 11:39 AM, Rob Forler  wrote:

> Hello,
>
> We are having some strange issues with RODBC related to integer columns.
> Whenever we do a sql query the data in a integer column is 150 actual data
> points then 150 0's then 150 actual data points then 150 0's. However, our
> database actually has numbers where the 0's are filled in. Furthermore,
> other datatypes do not have this problem: double and varchar are correct and
> do not alternate to null. Also, if we increase the rows_at_time to 1024
> there are larger gaps between the 0's and actual data. The server is a
> sybase IQ database. We have tested it on a different database sybase ASE and
> we still get this issue.
>
> For example :
>
> We have the following query
>
> sqlString = "Select ActionID, Velocity from ActionDataTable"
>
> #where ActionID is of integer type and Velocity is of double type.
> connection = odbcConnect("IQDatabase"); #this database is a sybase IQ
> database
> sqlData = sqlQuery(connection, sqlString);
>
>
> sqlData$ActionID might be 1,2,3,4,5,6,150, 0,0,0,0,0,0,0,,0,0,0,
> 301,302,303,304,.448,449,500,0,0,0...,0,0
>
> and Velocity will have data values along the whole column without these big
> areas of 0's.
>
> Thanks for the help,
> Robert Forler
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] replicating aov results with lmer

2010-02-16 Thread Elizabeth Purdom
I am trying to replicate the results of an aov command with lmer, to understand 
the syntax, but I can't quite figure it out. I have a dataset from Montgomery 
p. 520 with a nested and factorial layout. There are 3 fixtures, 2 layouts (the 
treatments) in a factorial design, but the operators who perform the runs are 
nest in layouts (4 operators per layout=8 different operators). Thus, each 
operator tries each fixture twice, but only 1 layout. Everything is balanced so 
I know what the results should be.  I give a dump of the data at the bottom of 
the email.

If I use the aov command, I get exactly what I would expect:
> ass.aov<-aov(Time~Fixture*Layout+Error(Operator/Layout),data=ass)
> summary(ass.aov)

Error: Operator
  Df Sum Sq Mean Sq F value Pr(>F)
Layout 1  4.083  4.0833  0.3407 0.5807
Residuals  6 71.917 11.9861   

Error: Within
   Df  Sum Sq Mean Sq F valuePr(>F)
Fixture 2  82.792  41.396 12.2319 8.842e-05 ***
Fixture:Layout  2  19.042   9.521  2.8133   0.07325 .  
Residuals  36 121.833   3.384 

If I use the lmer command, the layout SS is NOT what I would expect from the 
aov (and my calculations). However, the estimates of variance components (not 
shown here) are exactly the same. Note, I have coded Operator as 1-8 (not 1-4), 
so if I understand lmer correctly I can just add a (1|Operator) term into the 
model and the nesting is taken care of. 

If I try to add an Operator:Fixture interaction (as done in Montgomery) I also 
do not get agreement with my manual calculations (which are the same as in the 
Montgomery book). Can I recreate the standard anova table broken into the 
correct strata using lmer?

Thanks,
Elizabeth

> ass.lmer2<-lmer(Time~Layout*Fixture+(1|Operator),data=ass)
> anova(ass.lmer2)
Analysis of Variance Table
   Df Sum Sq Mean Sq F value
Layout  1  1.153   1.153  0.3406
Fixture 2 82.792  41.396 12.2319
Layout:Fixture  2 19.042   9.521  2.8133
> summary(ass.lmer2)
Linear mixed model fit by REML 
Formula: Time ~ Layout + (1 | Operator) + Fixture + Layout:Fixture 
   Data: ass 
   AIC   BIC  logLik  deviance  REMLdev
   215   230   -99.5     198.4      199
Random effects:
 Groups   NameVariance Std.Dev.
 Operator (Intercept) 1.4336   1.1973  
 Residual 3.3843   1.8396  
Number of obs: 48, groups: Operator, 8

Fixed effects:
                 Estimate Std. Error t value
(Intercept)       26.0833     0.4997   52.20
Layout1           -0.2917     0.4997   -0.58
Fixture1          -0.8333     0.3755   -2.22
Fixture2           1.8542     0.3755    4.94
Layout1:Fixture1  -0.2083     0.3755   -0.55
Layout1:Fixture2   0.8542     0.3755    2.27

Correlation of Fixed Effects:
(Intr) Layot1 Fixtr1 Fixtr2 Ly1:F1
Layout1  0.000
Fixture1 0.000  0.000 
Fixture2 0.000  0.000 -0.500  
Layt1:Fxtr1  0.000  0.000  0.000  0.000   
Layt1:Fxtr2  0.000  0.000  0.000  0.000 -0.500


ass <-
structure(list(Time = c(22, 24, 30, 27, 25, 21, 23, 24, 29, 28, 
24, 22, 28, 29, 30, 32, 27, 25, 25, 23, 27, 25, 26, 23, 26, 28, 
29, 28, 27, 25, 27, 25, 30, 27, 26, 24, 28, 25, 24, 23, 24, 27, 
24, 23, 28, 30, 28, 27), Operator = structure(c(1L, 1L, 1L, 1L, 
1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 
4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 5L, 5L, 6L, 6L, 6L, 6L, 6L, 6L, 
7L, 7L, 7L, 7L, 7L, 7L, 8L, 8L, 8L, 8L, 8L, 8L), .Label = c("1", 
"2", "3", "4", "5", "6", "7", "8"), class = "factor"), Layout = structure(c(1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 
2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L), .Label = c("1", 
"2"), class = "factor"), Fixture = structure(c(1L, 1L, 2L, 2L, 
3L, 3L, 1L, 1L, 2L, 2L, 3L, 3L, 1L, 1L, 2L, 2L, 3L, 3L, 1L, 1L, 
2L, 2L, 3L, 3L, 1L, 1L, 2L, 2L, 3L, 3L, 1L, 1L, 2L, 2L, 3L, 3L, 
1L, 1L, 2L, 2L, 3L, 3L, 1L, 1L, 2L, 2L, 3L, 3L), .Label = c("1", 
"2", "3"), class = "factor"), opF = structure(c(1L, 1L, 9L, 9L, 
17L, 17L, 2L, 2L, 10L, 10L, 18L, 18L, 3L, 3L, 11L, 11L, 19L, 
19L, 4L, 4L, 12L, 12L, 20L, 20L, 5L, 5L, 13L, 13L, 21L, 21L, 
6L, 6L, 14L, 14L, 22L, 22L, 7L, 7L, 15L, 15L, 23L, 23L, 8L, 8L, 
16L, 16L, 24L, 24L), .Label = c("1:1", "1:2", "1:3", "1:4", "1:5", 
"1:6", "1:7", "1:8", "2:1", "2:2", "2:3", "2:4", "2:5", "2:6", 
"2:7", "2:8", "3:1", "3:2", "3:3", "3:4", "3:5", "3:6", "3:7", 
"3:8"), class = "factor"), LF = structure(c(1L, 1L, 3L, 3L, 5L, 
5L, 1L, 1L, 3L, 3L, 5L, 5L, 1L, 1L, 3L, 3L, 5L, 5L, 1L, 1L, 3L, 
3L, 5L, 5L, 2L, 2L, 4L, 4L, 6L, 6L, 2L, 2L, 4L, 4L, 6L, 6L, 2L, 
2L, 4L, 4L, 6L, 6L, 2L, 2L, 4L, 4L, 6L, 6L), .Label = c("1:1", 
"1:2", "2:1", "2:2", "3:1", "3:2"), class = "factor")), .Names = c("Time", 
"Operator", "Layout", "Fixture", "opF", "LF"), row.names = c(NA, 
-48L), class = "data.frame")

__
R-help@r-project.org mailing list
https://sta

Re: [R] suppress printing within a function

2010-02-16 Thread David Winsemius


On Feb 16, 2010, at 1:31 PM, Jarrett Byrnes wrote:


I'm working with a few functions (e.g. do.base.descriptions in the
netstat package) that, in addition to returning an object with
variables I want to extract, also print output.  There is no way to
turn this default printing behavior off in many of the functions.

Is there a blanket way to suppress such printing, say, within a loop
or a ddply statement?


?capture.output
?sink
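
A minimal sketch of the capture.output() idiom; noisy.fn is just a stand-in
for any function that prints as a side effect:

  noisy.fn <- function(x) { print("progress..."); x^2 }
  invisible(capture.output(res <- noisy.fn(3)))  # printed text is captured, res keeps the value
  res
  # or divert all printing for a block of code with sink()
  sink(tempfile()); res2 <- noisy.fn(4); sink()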




Thanks!

-Jarrett




David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Random Forest

2010-02-16 Thread Liaw, Andy
From: Dror
> 
> Hi,
> i'm using randomForest package and i have 2 questions:
> 1. Can i drop one tree from an RF object?

Yes.

> 2. i have a 300 trees forest, but when i use the predict 
> function on new
> data (with predict.all=TRUE) i get only 270 votes. did i do 
> something wrong?

Try to follow the posting guide (link in the footer of the message) and
you may just get the help you're looking for.  Please help us to help
you!

Andy

> Thanks
> 
> -- 
> View this message in context: 
> http://n4.nabble.com/Random-Forest-tp1557464p1557464.html
> Sent from the R help mailing list archive at Nabble.com.
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
Notice:  This e-mail message, together with any attachme...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] suppress printing within a function

2010-02-16 Thread Jarrett Byrnes
I'm working with a few functions (e.g. do.base.descriptions in the  
netstat package) that, in addition to returning an object with  
variables I want to extract, also print output.  There is no way to  
turn this default printing behavior off in many of the functions.

Is there a blanket way to suppress such printing, say, within a loop  
or a ddply statement?

Thanks!

-Jarrett





Jarrett Byrnes
Postdoctoral Associate, Santa Barbara Coastal LTER
Marine Science Institute
University of California Santa Barbara
Santa Barbara, CA 93106-6150
http://www.lifesci.ucsb.edu/eemb/labs/cardinale/people/byrnes/index.html


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Triangular filled contour plot

2010-02-16 Thread Walmes Zeviani

Johannes,

Some months ago a list of packages that handle ternary plots was posted on
R-help, although none of them can draw surface plots, just scatterplots, on a
triangular area. The packages were

plot.acomp in compositions
tri in cwhmisc.cwhtool
triax in plotrix
ternary in StatDA
ternaryplot in vcd
ternaryplot in Zelig 

See http://n4.nabble.com/triangular-plot-td893434.html#a893435

Due to my needs, I implemented something a little bit useful (at least to me) to
make a prediction surface triangle plot using Lattice package. My
reproducible code is the following:

paint <- data.frame(mono=c(17.5, 10, 15, 25, 5, 5, 11.25, 5, 18.13, 8.13,
25, 15, 10, 5),
cross=c(32.5, 40, 25, 25, 25, 32.5, 32.5, 40, 28.75,
28.75, 25, 25, 40, 25),
resin=c(50, 50, 60, 50, 70, 62.5, 56.25, 55, 53.13,
63.13, 50, 60, 50, 70),
hardness=c(29, 26, 17, 28, 35, 31, 21, 20, 29, 25, 19,
14, 30, 23))

pseudo <- with(paint, data.frame(mono=(mono-5)/(25-5),
resin=(resin-50)/(70-50)))
pseudo$cross <- with(pseudo, 1-mono-resin)
pseudo$hardness <- paint$hardness

m1 <- lm(hardness~(mono+cross+resin)^2-mono, data=pseudo)

trian <- expand.grid(base=seq(0,1,l=100*2), high=seq(0,sin(pi/3),l=87*2))
trian <- subset(trian, (base*sin(pi/3)*2)>high)
trian <- subset(trian, ((1-base)*sin(pi/3)*2)>high)

new2 <- data.frame(cross=trian$high*2/sqrt(3))
new2$resin <- trian$base-trian$high/sqrt(3)
new2$mono <- 1-new2$resin-new2$cross

trian$yhat <- predict(m1, newdata=new2)

grade.trellis <- function(from=0.2, to=0.8, step=0.2, col=1, lty=2,
lwd=0.5){
  x1 <- seq(from, to, step)
  x2 <- x1/2
  y2 <- x1*sqrt(3)/2
  x3 <- (1-x1)*0.5+x1
  y3 <- sqrt(3)/2-x1*sqrt(3)/2
  panel.segments(x1, 0, x2, y2, col=col, lty=lty, lwd=lwd)
  panel.text(x1, 0, label=x1, pos=1)
  panel.segments(x1, 0, x3, y3, col=col, lty=lty, lwd=lwd)
  panel.text(x2, y2, label=rev(x1), pos=2)
  panel.segments(x2, y2, 1-x2, y2, col=col, lty=lty, lwd=lwd)
  panel.text(x3, y3, label=rev(x1), pos=4)
}

levelplot(yhat~base*high, trian, aspect="iso", xlim=c(-0.1,1.1),
ylim=c(-0.1,0.96),
  xlab=NULL, ylab=NULL, contour=TRUE,
  par.settings=list(axis.line=list(col=NA), axis.text=list(col=NA)),
  panel=function(..., at, contour=TRUE, labels=NULL){
panel.levelplot(..., at=at, contour=contour, #labels=labels,
lty=3, lwd=0.5, col=1)
  })
trellis.focus("panel", 1, 1, highlight=FALSE)
panel.segments(c(0,0,0.5), c(0,0,sqrt(3)/2), c(1,1/2,1), c(0,sqrt(3)/2,0))
grade.trellis()
panel.text(0, 0, label="mono", pos=2)
panel.text(1/2, sqrt(3)/2, label="cross", pos=3)
panel.text(1, 0, label="resin", pos=4)
trellis.unfocus()

http://n4.nabble.com/file/n1557735/one.png 

I think (and hope :-) ) Deepayan Sarkar will implement this on Lattice soon
because R doesn't have this kind of graphic yet. My code isn't perfect so I
expect suggestions.

Walmes Zeviani, Brasil.

-
Walmes Zeviani
Master in Statistics and Agricultural Experimentation
walmeszevi...@hotmail.com, Lavras - MG, Brasil
-- 
View this message in context: 
http://n4.nabble.com/Triangular-filled-contour-plot-tp1557386p1557735.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] delete repeated values - not unique...

2010-02-16 Thread Erik Iverson

Ah, the request was 'hidden' in the subject of the message, apologies!

Erik Iverson wrote:
Well, can you algorithmically describe what you are trying to do? Your 
example is not sufficient to determine it.  For instance, are you trying 
to:


1) remove repeated elements of a vector and concatenate the first 
element at the end?


2) remove repeated elements of a vector and concatenate the minimum 
element at the end?


3) always return the vector c(4, 5, 6, 4) ?

4) something else?



jorgusch wrote:

Hello,

I must be blind not to see it, but I have the following vector:

4
4
5
6
6
4

What I would like to have as a result is:

4
5
6
4

All repeated values are gone. I cannot use unique for this, as the 
second 4

would disappear. Is there another fast function for this problem?

Thanks in advance!



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] delete repeated values - not unique...

2010-02-16 Thread David Winsemius


On Feb 16, 2010, at 12:01 PM, jorgusch wrote:



Hello,

I must be blind not to see it, but I have the following vector:

4
4
5
6
6
4

What I would like to have as a result is:

4
5
6
4



?diff

> vec <- c(4,4,5,6,6,4)
> vec[ c(1, diff(vec)) != 0 ]
[1] 4 5 6 4


All repeated values are gone. I cannot use unique for this, as the  
second 4

would disappear. Is there another fast function for this problem?

Thanks in advance!

--
View this message in context: 
http://n4.nabble.com/delete-repeated-values-not-unique-tp1557625p1557625.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] delete repeated values - not unique...

2010-02-16 Thread William Dunlap
> -Original Message-
> From: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] On Behalf Of jorgusch
> Sent: Tuesday, February 16, 2010 9:01 AM
> To: r-help@r-project.org
> Subject: [R] delete repeated values - not unique...
> 
> 
> Hello,
> 
> I must be blind not to see it, but I have the following vector:
> 
> 4
> 4
> 5
> 6
> 6
> 4
> 
> What I would like to have as a result is:
> 
> 4
> 5
> 6
> 4
> 
> All repeated values are gone. I cannot use unique for this, 
> as the second 4
> would disappear. Is there another fast function for this problem?

Is this what you want?  It keeps only those values
which are the not the same as the previous value in the
vector.
  > isFirstInRun <- function(x) c(TRUE, x[-1]!=x[-length(x)])
  > data <- c(4, 4, 5, 6, 6, 4)
  > data[isFirstInRun(data)]
  [1] 4 5 6 4

Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com  

> 
> Thanks in advance!
> 
> -- 
> View this message in context: 
> http://n4.nabble.com/delete-repeated-values-not-unique-tp15576
25p1557625.html
> Sent from the R help mailing list archive at Nabble.com.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] delete repeated values - not unique...

2010-02-16 Thread Erik Iverson
Well, can you algorithmically describe what you are trying to do? Your 
example is not sufficient to determine it.  For instance, are you trying to:


1) remove repeated elements of a vector and concatenate the first 
element at the end?


2) remove repeated elements of a vector and concatenate the minimum 
element at the end?


3) always return the vector c(4, 5, 6, 4) ?

4) something else?



jorgusch wrote:

Hello,

I must be blind not to see it, but I have the following vector:

4
4
5
6
6
4

What I would like to have as a result is:

4
5
6
4

All repeated values are gone. I cannot use unique for this, as the second 4
would disappear. Is there another fast function for this problem?

Thanks in advance!



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] delete repeated values - not unique...

2010-02-16 Thread Marc Schwartz
On Feb 16, 2010, at 11:01 AM, jorgusch wrote:

> 
> Hello,
> 
> I must be blind not to see it, but I have the following vector:
> 
> 4
> 4
> 5
> 6
> 6
> 4
> 
> What I would like to have as a result is:
> 
> 4
> 5
> 6
> 4
> 
> All repeated values are gone. I cannot use unique for this, as the second 4
> would disappear. Is there another fast function for this problem?
> 
> Thanks in advance!


See ?rle

x <- c(4, 4, 5, 6, 6, 4)

> rle(x)$values
[1] 4 5 6 4


HTH,

Marc Schwartz

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lint for R? and debugging

2010-02-16 Thread Esmail

On 16-Feb-10 09:03, Karl Ove Hufthammer wrote:

On Tue, 16 Feb 2010 08:00:09 -0500 Esmail  wrote:

And along the same lines, any type of interactive debugging
utility for R?


See this article in R News:

'Debugging Without (Too Many) Tears'
http://cran.r-project.org/doc/Rnews/Rnews_2003-3.pdf#page=29



Thanks for the pointer, that looks very interesting.

Any lint-like utilities out there? I miss a lot of the development
tools I have available for Python or Java with R, esp once the code
starts to grow beyond a few hundred lines.

Esmail

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] delete repeated values - not unique...

2010-02-16 Thread jorgusch

Hello,

I must be blind not to see it, but I have the following vector:

4
4
5
6
6
4

What I would like to have as a result is:

4
5
6
4

All repeated values are gone. I cannot use unique for this, as the second 4
would disappear. Is there another fast function for this problem?

Thanks in advance!

-- 
View this message in context: 
http://n4.nabble.com/delete-repeated-values-not-unique-tp1557625p1557625.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] combining dataframes with different row lenght

2010-02-16 Thread fishman.R

If you could specify the question in more detail, that would be more helpful.
-- 
View this message in context: 
http://n4.nabble.com/combining-dataframes-with-different-row-lenght-tp1557569p1557610.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] margin text warning message NAs coercion

2010-02-16 Thread fishman.R

I have a question for you: what do you want the "additional text" in the
margin to do?  In your code, the first argument, expression(A[1]~B[2]),
supplies the text. If you want "additional text" together with
expression(A[1]~B[2]), you could combine them with paste. If you clarify what
kind of margin label you are after, it will be easier to clear up the
confusion. By the way, you forgot a closing ")" after the ylab= statement.
-- 
View this message in context: 
http://n4.nabble.com/margin-text-warning-message-NAs-coercion-tp1557567p1557591.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] argh .. if/else .. why?

2010-02-16 Thread David Winsemius


On Feb 16, 2010, at 12:15 PM, William Dunlap wrote:


-Original Message-
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of Peter Dalgaard
Sent: Tuesday, February 16, 2010 1:33 AM
To: Gabor Grothendieck
Cc: r-help@r-project.org
Subject: Re: [R] argh .. if/else .. why?

Gabor Grothendieck wrote:

On Mon, Feb 15, 2010 at 11:24 AM, David Winsemius
 wrote:

On Feb 15, 2010, at 11:01 AM, hadley wickham wrote:


I, personally, utilize the

ifelse(test,statement,statement) function when

possible over the methodology outlined.

if + else and ifelse perform quite different tasks, and

in general can

not (and should not) be exchanged.  In particular, note that for
ifelse, "the class attribute of the result is taken from

'test' and

may be inappropriate for the values selected from 'yes'

and  'no'".

I have always been puzzled by that bit of advice/knowledge

on the help page.

"test" will of necessity be of class "logical", and yet I

regularly succeed

in producing numeric and character vectors with ifelse. In

fact ifelse would

be rather limited in utility if it only returned logical vectors.



I think it had intended to refer to oldClass rather than class.


oldClass(TRUE)

NULL

oldClass(ifelse(TRUE, 1, 2))

NULL


Well, it does date back to S v.3, but the docs do say class
_attribute_
and "logical" & friends aren't. It happens with all attributes, and I
suspect that the original intention was for things like this to work


ifelse(matrix(c(T,T,F,F),2),1,2)

     [,1] [,2]
[1,]    1    2
[2,]    1    2

I can't think of a situation where it is actually useful to copy the
class attribute from the condition to the result.


The container-related attributes of ifelse's first argument
can be profitably copied to the output.  E.g., for classes
"matrix" and "ts", which can contain a variety of primitive
data types, we get:

m<-matrix(1:12,3,4)
ifelse(m>5, TRUE, FALSE) # return a matrix the shape of m

   [,1]  [,2] [,3] [,4]
 [1,] FALSE FALSE TRUE TRUE
 [2,] FALSE FALSE TRUE TRUE
 [3,] FALSE  TRUE TRUE TRUE

t <- ts(1:5, start=2010, freq=12)
ifelse(t>3, TRUE, FALSE) # return a ts like t

Jan   Feb   Mar   Apr   May
 2010 FALSE FALSE FALSE  TRUE  TRUE



Thanks, Bill that is very helpful. It's possible that the same notion  
was expressed earlier but I didn't "get it" until your version.  The  
message I take away is that as long as the functional evaluation (of  
for example ">") produces a "vectorish" arrangement of logicals having  
some additional structure, that structure is being  preserved by the  
indexing within ifelse. Looking at the code of ifelse is helpful in  
this regard, and it also answers my question about the evaluation/ 
performance issues. Whenever the antecedents are a mixture of T and F,  
then _all_ of the positive and negative consequents will be evaluated  
before they are then selectively transferred to the result.


--
David

Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com


Presumably, the idea
is that of the three possibilities, only the condition attributes are
unambiguous (think if(cond, A, B) vs. if(!cond, B, A)), so if
any set of
attributes should be copied, those are the ones.

--
  O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
 c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K






David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Triangular filled contour plot

2010-02-16 Thread Cleber Borges


hello,
maybe this code can be useful for you.

cleber
---


trimage <- function(f){
x = y = seq( 1, 0, l=181 )
t1 = length(x)
im = aux = numeric(0)
for( i in seq( 1, t1, by = 2 ) ){

#idx = seq( t1**2, i*t1, by = -t1 ) - ((t1 - i):0)

idx = seq( i*t1, t1**2, by = t1 ) - (i-1)
im = c(im, aux, idx, aux )
aux = c(aux, NA)
}
z =  outer(X=x, Y=y, FUN=f)
return( matrix(z[im],nr=t1) )
}


### for chemical mixtures
### restriction:   sum( x[i]==1 ) and  0 < x[i] < 1
### naive example
f <- function(x1,x2) { x3=1-x2-x1;  -100*x1 + 0*x2 + 100*x3 }

windows(w=4.5, h=4.5, restoreConsole = TRUE )
par(mar=c(0,0,0,0), pty='s', xaxt='n', yaxt='n', bty='n' )

trimat <- trimage( f )
image( trimat )
contour( trimat, add=T)









On 16/2/2010 11:25, kajo wrote:

Hi all,

I am working on a filled contour plot which shows a triangular matrix data
set (as shown below). Is there a possibility to draw a triangular filled
contour in an equilateral triangle (like a ternary plot)?

Thanks in advance
Johannes

http://n4.nabble.com/file/n1557386/Bild3.png
   



--
Good sense is the best distributed thing in the world:
we all think ourselves so well supplied with it that even those who are
the hardest to please in everything else do not usually desire more of it
than they already have.
[René Descartes]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] RODBC missing values in integer columns

2010-02-16 Thread Rob Forler
Hello,

We are having some strange issues with RODBC related to integer columns.
Whenever we do a sql query the data in a integer column is 150 actual data
points then 150 0's then 150 actual data points then 150 0's. However, our
database actually has numbers where the 0's are filled in. Furthermore,
other datatypes do not have this problem: double and varchar are correct and
do not alternate to null. Also, if we increase the rows_at_time to 1024
there are larger gaps between the 0's and actual data. The server is a
sybase IQ database. We have tested it on a different database sybase ASE and
we still get this issue.

For example :

We have the following query

sqlString = "Select ActionID, Velocity from ActionDataTable"

#where ActionID is of integer type and Velocity is of double type.
connection = odbcConnect("IQDatabase"); #this database is a sybase IQ
database
sqlData = sqlQuery(connection, sqlString);


sqlData$ActionID might be 1,2,3,4,5,6,150, 0,0,0,0,0,0,0,,0,0,0,
301,302,303,304,.448,449,500,0,0,0...,0,0

and Velocity will have data values along the whole column without these big
areas of 0's.
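
One diagnostic that may be worth trying (an assumption, not a confirmed fix):
ask RODBC to return the columns as character, so you can see whether the zeros
come from the driver or from the integer conversion on the R side (as.is is
passed through to sqlGetResults):

  sqlData2 <- sqlQuery(connection, sqlString, as.is = TRUE)
  head(sqlData2$ActionID)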

Thanks for the help,
Robert Forler

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] margin text warning message NAs coercion

2010-02-16 Thread David Winsemius


On Feb 16, 2010, at 12:18 PM, e-letter wrote:


On 16/02/2010, Peter Ehlers  wrote:

On 2010-02-16 9:21, e-letter wrote:

Readers,

I tried the following commands:

plot(y~x,ylab=expression(A[1]~B[2],xlab=expression(C~D))
mtext(expression(A[1]~B[2]),"additional text",side=3,line=1)


Your plot() call is not reproducible.

Anyway, try

mtext(expression(A[1]~B[2]~~"additional text"),side=3,line=1)


Thank you. I had seen the command (~~) in the help guide for the
package plotmath(grdevices), but there is no statement that this
command can be used to join the expression to literal text.


It's not a "command", it's one of the available separators (or  
connectors depending on how you think about them) for expressions. If  
you wanted no separation between adjacent terms, you would use a  
"*" (asterisk). You cannot use "," because it creates a separate  
element in the expression vector:


plot(1,1)
mtext(expression(A[1]~B[2],"additional text", "more  
text"),side=3,line=1:3)


(Which results in the reverse order of what this newbie expected.)


--


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] margin text warning message NAs coercion

2010-02-16 Thread Peter Ehlers

On 2010-02-16 10:18, e-letter wrote:

On 16/02/2010, Peter Ehlers  wrote:

On 2010-02-16 9:21, e-letter wrote:

Readers,

I tried the following commands:

plot(y~x,ylab=expression(A[1]~B[2],xlab=expression(C~D))
mtext(expression(A[1]~B[2]),"additional text",side=3,line=1)


Your plot() call is not reproducible.

Anyway, try

mtext(expression(A[1]~B[2]~~"additional text"),side=3,line=1)


Thank you. I had seen the command (~~) in the help guide for the
package plotmath(grdevices), but there is no statement that this
command can be used to join the expression to literal text.



It's a hard space, borrowed from TeX.

?plotmath has this

 x ~~ y  put extra space between x and y

Try also x ~ y.

 -Peter Ehlers

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] argh .. if/else .. why?

2010-02-16 Thread William Dunlap
> -Original Message-
> From: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] On Behalf Of Peter Dalgaard
> Sent: Tuesday, February 16, 2010 1:33 AM
> To: Gabor Grothendieck
> Cc: r-help@r-project.org
> Subject: Re: [R] argh .. if/else .. why?
> 
> Gabor Grothendieck wrote:
> > On Mon, Feb 15, 2010 at 11:24 AM, David Winsemius
> >  wrote:
> >> On Feb 15, 2010, at 11:01 AM, hadley wickham wrote:
> >>
>  I, personally, utilize the 
> ifelse(test,statement,statement) function when
>  possible over the methodology outlined.
> >>> if + else and ifelse perform quite different tasks, and 
> in general can
> >>> not (and should not) be exchanged.  In particular, note that for
> >>> ifelse, "the class attribute of the result is taken from 
> 'test' and
> >>> may be inappropriate for the values selected from 'yes' 
> and  'no'".
> >> I have always been puzzled by that bit of advice/knowledge 
> on the help page.
> >> "test" will of necessity be of class "logical", and yet I 
> regularly succeed
> >> in producing numeric and character vectors with ifelse. In 
> fact ifelse would
> >> be rather limited in utility if it only returned logical vectors.
> >>
> > 
> > I think it had intended to refer to oldClass rather than class.
> > 
> >> oldClass(TRUE)
> > NULL
> >> oldClass(ifelse(TRUE, 1, 2))
> > NULL
> 
> Well, it does date back to S v.3, but the docs do say class 
> _attribute_
> and "logical" & friends aren't. It happens with all attributes, and I
> suspect that the original intention was for things like this to work
> 
> > ifelse(matrix(c(T,T,F,F),2),1,2)
>      [,1] [,2]
> [1,]    1    2
> [2,]    1    2
> 
> I can't think of a situation where it is actually useful to copy the
> class attribute from the condition to the result. 

The container-related attributes of ifelse's first argument
can be profitably copied to the output.  E.g., for classes
"matrix" and "ts", which can contain a variety of primitive
data types, we get:
  > m<-matrix(1:12,3,4)
  > ifelse(m>5, TRUE, FALSE) # return a matrix the shape of m
[,1]  [,2] [,3] [,4]
  [1,] FALSE FALSE TRUE TRUE
  [2,] FALSE FALSE TRUE TRUE
  [3,] FALSE  TRUE TRUE TRUE
  > t <- ts(1:5, start=2010, freq=12)
  > ifelse(t>3, TRUE, FALSE) # return a ts like t
 Jan   Feb   Mar   Apr   May
  2010 FALSE FALSE FALSE  TRUE  TRUE

Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com 

> Presumably, the idea
> is that of the three possibilities, only the condition attributes are
> unambiguous (think if(cond, A, B) vs. if(!cond, B, A)), so if 
> any set of
> attributes should be copied, those are the ones.
> 
> -- 
>O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
>   c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
>  (*) \(*) -- University of Copenhagen   Denmark  Ph:  
> (+45) 35327918
> ~~ - (p.dalga...@biostat.ku.dk)  FAX: 
> (+45) 35327907
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] margin text warning message NAs coercion

2010-02-16 Thread e-letter
On 16/02/2010, Peter Ehlers  wrote:
> On 2010-02-16 9:21, e-letter wrote:
>> Readers,
>>
>> I tried the following commands:
>>
>> plot(y~x,ylab=expression(A[1]~B[2],xlab=expression(C~D))
>> mtext(expression(A[1]~B[2]),"additional text",side=3,line=1)
>
> Your plot() call is not reproducible.
>
> Anyway, try
>
> mtext(expression(A[1]~B[2]~~"additional text"),side=3,line=1)
>
Thank you. I had seen the command (~~) in the help guide for the
package plotmath(grdevices), but there is no statement that this
command can be used to join the expression to literal text.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] popbio and stochastic lambda calculation

2010-02-16 Thread Shawn Morrison

Hello R users,

I am trying to calculate the stochastic lambda for a published matrix 
population model using the popbio package.


Unfortunately, I have been unable to match the published results. Can 
anyone tell me whether this is due to slightly different methods being 
used, or have I gone wrong somewhere in my code?


Could the answer be as simple as comparing deterministic lambdas to 
stochastic lambdas? My stochastic lambda is lower (as expected); however, the
CI is substantially larger than published. The authors of the published
example used a different method to calculate the CIs around lambda, but
I would have expected the results of vitalsim to be closer. (The
authors used the 'delta' method from Caswell (2001)).


My main questions are:
1. Have I correctly calculated stochastic lambda and its 95% CIs?
2. Why am I unable to use any distributions other than beta? What can I 
do about that?


Thank you,
Shawn Morrison

#

#My example:

rm(list = ls())
objects()

library(popbio)


# Vital rate means and variances, and 'types' for the vrtypes argument
# in vitalsim

# 'names' is not used, but indicates what the vital
# rates represent: Sad = adult survival, Scub = cub survival
# Syrl = yearling survival, Ssub - subadult survival
# mx = number of female offspring per year

names = c("Sad", "Scub", "Syrl", "Ssub", "mx") # vital rate names, not used
mean = c(0.835, 0.640, 0.670, 0.765, 0.467) # vital rate means
var = c(0.213, 0.252, 0.241, 0.133, 0.0405) #variances of means
var.se = c(0.106, 0.107, 0.142, 0.149, 0.09) #standard errors of means
types = c(1,1,1,1,1) #for vrtypes argument


ex.vrs = data.frame(cbind(mean, var, var.se, types))
attach(ex.vrs)

## Define the matrix
ex.mxdef <- function(vrs) {
  matrix(c(0,      0,      0,      0,      vrs[5]*vrs[1],
           vrs[2], 0,      0,      0,      0,
           0,      vrs[3], 0,      0,      0,
           0,      0,      vrs[4], 0,      0,
           0,      0,      0,      vrs[4], vrs[1]),
         nrow = 5, byrow = TRUE)
}



ex.mxdef(ex.vrs$mean)

# Matrix in published article is:
# 0,      0,     0,      0,      0.390,
# 0.640,  0,     0,      0,      0,
# 0,      0.67,  0,      0,      0,
# 0,      0,     0.765,  0,      0,
# 0,      0,     0,      0.765,  0.835
# "My" result matches this.

###  TRIAL 1
###  Using variances estimated from the published paper (ex.vrs$var) returns
###  an error, perhaps because the variances are too large?


## run no.corr model
no.corr<-vitalsim(ex.vrs$mean, ex.vrs$var, diag(5),
diag(5), ex.mxdef, vrtypes=ex.vrs$types,
n0=c(200,130,90,80,490), yrspan=20 , runs=200)

###  TRIAL 2
###  Using standard errors instead of variances (ex.vrs$var.se)
###  This also produces an error when 'types' is set to anything but 1
###  (betas).


## run no.corr model
no.corr<-vitalsim(ex.vrs$mean, ex.vrs$var.se, diag(5),
diag(5), ex.mxdef, vrtypes=ex.vrs$types,
n0=c(200,130,90,80,490), yrspan=20 , runs=200)

## deterministic lambda
no.corr[1:2]  # The published deterministic lambda is 0.953, so this
              # appears to be working correctly



?popbio

# stochastic lambda
no.corr[2]
# The paper reports a 95% CI of 0.79 - 1.10
# "My" reproduced result for the CIs is much larger, especially on the 
upper end. Why would this be?
# The authors report using the 'delta' method (Caswell, 2001) to 
calculate the CI in which the

# sensitivities were used to weight the variances.

## log stochastic lambda
log(no.corr$stochLambda)
sd(no.corr$logLambdas)

se=function(x) sqrt(var(x)/length(x))
se(no.corr$logLambdas)  # The published article reports deterministic 
lambda ± SE to be 0.953 ± 0.079


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] margin text warning message NAs coercion

2010-02-16 Thread Peter Ehlers

On 2010-02-16 9:21, e-letter wrote:

Readers,

I tried the following commands:

plot(y~x,ylab=expression(A[1]~B[2],xlab=expression(C~D))
mtext(expression(A[1]~B[2]),"additional text",side=3,line=1)


Your plot() call is not reproducible.

Anyway, try

mtext(expression(A[1]~B[2]~~"additional text"),side=3,line=1)

 -Peter Ehlers



I receive the text that I want, but the command terminal shows the
following response:

Warning message:
NAs introduced by coercion in: mtext(expression(A[1]~B[2]),"additional
text", side = 3,

What is my mistake please?

Yours,

r251
rhelpatconference.jabber.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




--
Peter Ehlers
University of Calgary
403.202.3921

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] OT: computing percentage changes with negative and zero values?

2010-02-16 Thread Liviu Andronic
Dear all
I need to compute percentage changes of my data, but unfortunately
they contain both negative and zero values, and I am quite confused on
how to proceed. Searching the internet I found that many people ran
into similar issues, with no obvious solution available.

The last couple of weeks I've been playing with all the data
transformations that I could think of. Below I illustrate the issues
encountered on a dummy example:
> x$var
 [1]  0.43 -0.79  0.69  0.76  0.00 -1.51 -0.71  0.80  1.17  1.58  1.48
-1.83 -0.88  1.44 -0.72 -0.22  1.89 -1.27 -0.76
[20]  1.33

- raw data: percentage variations of the original data---containing
negative and zero values---get messed up when passing from a negative
to a positive value, and around the value 0.
> x[, "raw"] <- c(NA, diff(x$var) / x[1:19,"var"])

- raw data with abs denominator: compared to the above improves the
handling of the signs, but still fails around zero, and in some cases
gives unexpected results (see [1]).
> x[, "raw abs"] <- c(NA, diff(x$var) / abs(x[1:19,"var"]))

- raw data + constant: add a constant to the data to transform them to
strictly positive, then compute the deltas. This solves the negative
and zero value problems, but I am not sure if this introduces some
bias along the way.
> x[, "raw +cst"] <- c(NA, diff((2 + x$var)) / (2 + x[1:19,"var"]))

- log, car::box.cox.powers: both transformations involve adding a
constant to the original data.
> x[, "log"] <- c(NA, diff(log(2 + x$var)) / log(2 + x[1:19,"var"]))
> require(car)
> x1 <- box.cox.powers(2 + x$var); x1$lambda
> x[, "box cox"] <- c(NA, diff(box.cox(2 + x$var, x1$lambda)) / box.cox(2 + 
> x[1:19,"var"], x1$lambda))

- sqrt: very similar to the above, but the results are a bit different
(and apparently better).
> x[, "sqrt"] <- c(NA, diff(sqrt(2 + x$var)) / sqrt(2 + x[1:19,"var"]))

- exp: the exponential transformation introduces too much, and
unevenly distributed variability (my actual data contain values bigger
than "5"), and the variations can quickly get to astronomical levels.
> x[, "exp"] <- c(NA, diff(exp(x$var)) / exp(x[1:19,"var"]))

- atan transformation: this is an in-house solution, which ensures
that values from -Inf to +Inf are mapped between 0 and pi.
Again, not sure what bias this might introduce.
> mytan <- function(x) .5*pi + atan(x)
> x[, "mytan"] <- c(NA, diff(mytan(x$var)) / mytan(x[1:19,"var"]))

The resulting data frame:
> round(x, 3)
     var    raw raw abs raw +cst    log   sqrt box cox    exp  mytan
1   0.43     NA      NA       NA     NA     NA      NA     NA     NA
2  -0.79 -2.837  -2.837   -0.502 -0.785 -0.294  -0.840 -0.705 -0.544
3   0.69 -1.873   1.873    1.223  4.191  0.491   6.289  3.393  1.411
4   0.76  0.101   0.101    0.026  0.026  0.013   0.038  0.073  0.021
5   0.00 -1.000  -1.000   -0.275 -0.317 -0.149  -0.407 -0.532 -0.293
6  -1.51   -Inf    -Inf   -0.755 -2.029 -0.505  -1.591 -0.779 -0.628
7  -0.71 -0.530   0.530    1.633 -1.357  0.623  -1.517  1.226  0.630
8   0.80 -2.127   2.127    1.171  3.043  0.473   4.631  3.527  1.355
9   1.17  0.462   0.462    0.132  0.121  0.064   0.185  0.448  0.084
10  1.58  0.350   0.350    0.129  0.105  0.063   0.169  0.507  0.059
11  1.48 -0.063  -0.063   -0.028 -0.022 -0.014  -0.035 -0.095 -0.012
12 -1.83 -2.236  -2.236   -0.951 -2.421 -0.779  -1.450 -0.963 -0.804
13 -0.88 -0.519   0.519    5.588 -1.064  1.567  -1.124  1.586  0.698
14  1.44 -2.636   2.636    2.071  9.902  0.753  16.643  9.176  1.985
15 -0.72 -1.500  -1.500   -0.628 -0.800 -0.390  -0.870 -0.885 -0.626
16 -0.22 -0.694   0.694    0.391  1.336  0.179   1.679  0.649  0.430
17  1.89 -9.591   9.591    1.185  1.356  0.478   2.333  7.248  0.960
18 -1.27 -1.672  -1.672   -0.812 -1.232 -0.567  -1.115 -0.958 -0.749
19 -0.76 -0.402   0.402    0.699 -1.684  0.303  -1.841  0.665  0.381
20  1.33 -2.750   2.750    1.685  4.592  0.639   7.559  7.085  1.711


As you have noticed, I'm quite unsure on how to proceed. My actual
data represents financial EPS (earnings per share) forecasts, ranging
from -1 to 5. So, it has a "natural zero point"  (see David Winsemius'
comments in [2]). However, I need to compute percentage variations
since I am primarily interested in the evolution of the forecasts (for
a given company), while EPS data between two companies are not
necessarily comparable. The percentage data would subsequently be used
in performing statistical analyses (regression, etc.).

Please advise
Liviu

[1] http://sci.tech-archive.net/Archive/sci.stat.math/2006-04/msg00544.html
[2] http://sci.tech-archive.net/Archive/sci.stat.math/2006-04/msg00548.html

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] reading quattro pro spreadsheet .qpw into R

2010-02-16 Thread Marc Schwartz
On Feb 16, 2010, at 10:35 AM, Barry Rowlingson wrote:

> On Tue, Feb 16, 2010 at 4:12 PM, stephen sefick  wrote:
>> I have many quattro pro spreadsheets and no quattro pro.  Is there a
>> way to access the data using R, or any other solution that anyone can
>> think of?
> 
> OpenOffice claims it can read Quattro Pro 6.0 'wb2' files, but maybe
> they are different to .qpw files. MS Excel claims some Quattro Pro
> readability - I've just read something about wb1 files. Maybe Gnumeric
> can read them? What have you tried?
> 
> Perhaps if you put a representative file somewhere we can download and try?
> 
> Barry


Gnumeric comes with the ssconvert utility, but that does not appear to support 
.QPW files:

  http://projects.gnome.org/gnumeric/doc/file-format-qpro.shtml

I don't see any indication that Excel or OO.org's Calc support the .QPW format 
either.

It would appear that you can download a trial of Corel's WordPerfect Office 
suite, within which Quattro is bundled. You should be able to open the files in 
that application and then save them to .XLS formats:

  http://www.corel.com/servlet/Satellite/ca/en/Content/1152796555406

HTH,

Marc Schwartz

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Sampling from Bivariate Uniform Distribution

2010-02-16 Thread Greg Snow
The correlation will not be exactly 0, but will represent a draw from an 
independent population.

There may be something in the copula package to allow for more than
independence (but that about exhausts my knowledge of that package).
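
A minimal sketch of that point (the sample size is arbitrary): independent
uniform draws arranged as a two-column matrix have a sample correlation near,
but not exactly, zero.

  set.seed(42)
  u <- matrix(runif(2 * 1000), ncol = 2)  # two independent U(0,1) columns
  cor(u[, 1], u[, 2])                     # close to 0, but not exactly 0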

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
> project.org] On Behalf Of Haneef_An
> Sent: Monday, February 15, 2010 11:53 AM
> To: r-help@r-project.org
> Subject: Re: [R] Sampling from Bivariate Uniform Distribution
> 
> 
> When I wrap those values into a matrix, will they still be independent
> (or will there be non-zero correlation)?
> 
> Can I do this for any multivariate distribution which has the
> univariate
> form?
> 
> Thank you for the response.
> 
> Haneef
> --
> View this message in context: http://n4.nabble.com/Sampling-from-
> Bivariate-Uniform-Distribution-tp1476485p1556481.html
> Sent from the R help mailing list archive at Nabble.com.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lmer - error asMethod(object) : matrix is not symmetric

2010-02-16 Thread Luisa Carvalheiro
Dear Douglas,

Thank you for your reply.
Just some extra info on the dataset: In my case Number of obs is 33,
and number of groups of factor(Farm_code) is 12.
This is the information on iterations I get:

summary(lmer(round(SR_SUN)~Dist_NV + (1|factor(Farm_code)) ,
family=poisson, verbose =TRUE))
  0: 60.054531:  1.06363  2.14672 -0.000683051
  1: 60.054531:  1.06363  2.14672 -0.000683051
Error in asMethod(object) : matrix is not symmetric [1,2]
In addition: Warning message:
In mer_finalize(ans) : singular convergence (7)


When I run a similar model (exp variable Dist_hives) the number of
iterations is 11:

 summary(lmer(round(SR_SUN)~Dist_hives + (1|factor(Farm_code)) ,
family=poisson, verbose =TRUE))
  0: 61.745238: 0.984732  1.63769 0.000126484
  1: 61.648229: 0.984731  1.63769 -2.08637e-05
  2: 61.498777: 0.984598  1.63769 4.11867e-05
  3: 47.960908: 0.381062  1.63585 6.77029e-05
  4: 46.223789: 0.250732  1.66727 8.31854e-05
  5: 46.23: 0.250732  1.66727 6.97790e-05
  6: 46.216710: 0.250730  1.66727 7.60560e-05
  7: 46.168835: 0.230386  1.64883 9.16430e-05
  8: 46.165955: 0.228062  1.65658 8.70694e-05
  9: 46.165883: 0.228815  1.65737 8.63400e-05
 10: 46.165883: 0.228772  1.65734 8.63698e-05
 11: 46.165883: 0.228772  1.65734 8.63701e-05



I am very confused with the fact that it runs with Dist_hives and not
with Dist_NV. Both variables are distance values, the first having no
obvious relation with the response variable and the second (Dist_NV)
seems to have a negative effect on SR_SUN.

Does this information help identify the problem with my data/analysis?

Thank you,

Luisa




On Tue, Feb 16, 2010 at 5:35 PM, Douglas Bates  wrote:
> This is similar to another question on the list today.
>
> On Tue, Feb 16, 2010 at 4:39 AM, Luisa Carvalheiro
>  wrote:
>> Dear R users,
>>
>> I  am having problems using package lme4.
>>
>> I am trying to analyse the effect of a continuous variable (Dist_NV)
>> on a count data response variable (SR_SUN) using Poisson error
>> distribution. However, when I run the model:
>>
>> summary(lmer((SR_SUN)~Dist_NV + (1|factor(Farm_code)) ,
>> family=poisson, REML=FALSE))
>>
>> 1 error message and 1 warning message show up:
>>
>> in asMethod(object) : matrix is not symmetric [1,2]
>> In addition: Warning message:
>> In mer_finalize(ans) : singular convergence (7)
>
> So the first thing to do is to include the optional argument verbose =
> TRUE in the call to lmer.  (Also, REML = FALSE is ignored for
> Generalized Linear Mixed Models and can be omitted. although there is
> no harm in including it.)
>
> You need to know where the optimizer is taking the parameter values
> before you can decide why.
>
> P.S. Questions like this will probably be more readily answered on the
> R-SIG-Mixed-Models mailing list.
>
>> A model including  Dist_NV together with other variables runs with no 
>> problems.
>> What am I doing wrong?
>>
>> Thank you,
>>
>> Luisa
>>
>>
>> --
>> Luisa Carvalheiro, PhD
>> Southern African Biodiversity Institute, Kirstenbosch Research Center, 
>> Claremont
>> & University of Pretoria
>> Postal address - SAWC Pbag X3015 Hoedspruit 1380, South Africa
>> telephone - +27 (0) 790250944
>> carvalhe...@sanbi.org
>> lgcarvalhe...@gmail.com
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>



-- 
Luisa Carvalheiro, PhD
Southern African Biodiversity Institute, Kirstenbosch Research Center, Claremont
& University of Pretoria
Postal address - SAWC Pbag X3015 Hoedspruit 1380, South Africa
telephone - +27 (0) 790250944
carvalhe...@sanbi.org
lgcarvalhe...@gmail.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] multiple-test correlation

2010-02-16 Thread Greg Snow
?p.adjust
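
A minimal sketch of how p.adjust() combines with Kendall correlations for this
situation (V1..V5 are the poster's variable names, assumed here to live in a
hypothetical data frame dat; Holm's method is one standard choice that is less
conservative than Bonferroni):

  pvals <- sapply(c("V2", "V3", "V4", "V5"), function(v)
      cor.test(dat$V1, dat[[v]], method = "kendall")$p.value)
  p.adjust(pvals, method = "holm")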

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
> project.org] On Behalf Of Manuel Jesús López Rodríguez
> Sent: Sunday, February 14, 2010 5:35 AM
> To: r-help@r-project.org
> Subject: [R] multiple-test correlation
> 
> Dear all,
> I am trying to study the correlation between one "independent" variable
> ("V1") and several other variables that are dependent among themselves ("V2",
> "V3", "V4" and "V5"). To do so, I would like to analyze my data with a
> multiple-testing correction (Bonferroni's or something similar), but I cannot
> find the proper command in R. What I want to do is calculate Kendall's
> correlation between "V1" and each of the other variables (i.e. "V1" vs "V2",
> "V1" vs "V3", etc.) and correct the p values by Bonferroni or another method.
> I have found "outlier.test", but I do not know if this is what I need (also, I
> would prefer a less conservative method than Bonferroni's, if possible).
> Thank you very much in advance!
> 
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Does the R statistical language include modules/packages to carry out nonlinear optimization similar to the SAS NLIN and NLP procedures?

2010-02-16 Thread Bert Gunter
Randall:

Are you familiar with R's Search facilities? If not, don't you think you
should be? If so, why don't you try using them BEFORE posting on this list?

?help
?help.search

help.search("optimization") ##gives several alternatives

Bert Gunter
Genentech Nonclinical Biostatistics
 
 
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of David Winsemius
Sent: Tuesday, February 16, 2010 8:31 AM
To: Powers, Randall - BLS
Cc: r-help@r-project.org
Subject: Re: [R] Does the R statistical language include modules/packages
to carry out nonlinear optimization similar to the SAS NLIN and NLP
procedures?


On Feb 16, 2010, at 11:09 AM, Powers, Randall - BLS wrote:

> Hello R folks,
>
> I'm hoping the answer to the question in the subject line is yes.
>
> I have in the past used SAS PROC NLIN and PROC NLP to carry out
> nonlinear optimizations. I'm wondering if there are analogous ways for
> doing this using R. If so, could someone please point me to some
> literature that would help me examine this further?

The CRAN Task Views are a good place to start any such search:

http://cran.r-project.org/web/views/

There is one for Optimization:

http://cran.r-project.org/web/views/Optimization.html

-- 
David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] reading quattro pro spreadsheet .qpw into R

2010-02-16 Thread Tom Backer Johnsen

stephen sefick wrote:

I have many quattro pro spreadsheets and no quattro pro.  Is there a
way to access the data using R, or any other solution that anyone can
think of?
thanks,

  
One possibility is to download the trial version of Corel Office and use 
that to convert the files to something more common, or even a simple 
.csv file, which is easy to read into R.
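
For example, once a spreadsheet has been exported (file name purely hypothetical):

dat <- read.csv("converted.csv")   # hypothetical name of the exported file
str(dat)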


Tom

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] reading quattro pro spreadsheet .qpw into R

2010-02-16 Thread Barry Rowlingson
On Tue, Feb 16, 2010 at 4:12 PM, stephen sefick  wrote:
> I have many quattro pro spreadsheets and no quattro pro.  Is there a
> way to access the data using R, or any other solution that anyone can
> think of?

 OpenOffice claims it can read Quattro Pro 6.0 'wb2' files, but maybe
they are different to .qpw files. MS Excel claims some Quattro Pro
readability - I've just read something about wb1 files. Maybe Gnumeric
can read them? What have you tried?

 Perhaps if you put a representative file somewhere we can download and try?

Barry

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Reminder: ASA Stat Comp/Graph Chambers Award Competition Deadline 2/22

2010-02-16 Thread Fei Chen

Just a gentle reminder that the deadline for submission to the Chambers Award 
Competition is fast approaching. All application materials must be received by 
5:00pm EST, Monday,
February 22, 2010.

For submission guidelines, please visit
http://stat-computing.org/awards/jmc/announcement.html

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Does the R statistical language include modules/packages to carry out nonlinear optimization similar to the SAS NLIN and NLP procedures?

2010-02-16 Thread David Winsemius


On Feb 16, 2010, at 11:09 AM, Powers, Randall - BLS wrote:


Hello R folks,

I'm hoping the answer to the question in the subject line is yes.

I have in the past used SAS PROC NLIN and PROC NLP to carry out
nonlinear optimizations. I'm wondering if there are analogous ways for
doing this using R. If so, could someone please point me to some
literature that would help me examine this further?


The CRAN Task Views are a good place to start any such search:

http://cran.r-project.org/web/views/

There is one for Optimization:

http://cran.r-project.org/web/views/Optimization.html
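
As a small illustration of two base-R entry points listed there (made-up data):
nls() does nonlinear least squares in the spirit of PROC NLIN, while optim() is
a general-purpose optimizer closer to PROC NLP.

set.seed(1)
x <- 1:20
y <- 5 * exp(-0.3 * x) + rnorm(20, sd = 0.2)   # simulated exponential decay

## nonlinear least squares, roughly analogous to PROC NLIN
fit <- nls(y ~ a * exp(b * x), start = list(a = 4, b = -0.2))
coef(fit)

## general-purpose minimization, closer in spirit to PROC NLP
sse <- function(p) sum((y - p[1] * exp(p[2] * x))^2)
optim(c(4, -0.2), sse)$par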

--
David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] for loop Vs apply function Vs foreach (REvolution enhancement)

2010-02-16 Thread Steve Lianoglou
Hi,

> 2. foreach (REvolution enhancement)
>
>seems the rationale of this function is to facilitate the use of 
>multithreading to enhance the for loop speed. Given a moderate time 
>sensitivity (process must run fast but a gain of 10-20% speed seen as probably 
>not justifying the additional learning + dependence from yet another package), 
>is it really worth going down that route?
>
> Has anyone extensive experience with this matter (using foreach to boost for 
> loop running time)? any feedback welcome.

I'm not sure what you mean by the "moderate time sensitivity" notion, but
you should definitely use foreach if you have a block of code that you
are iterating over that (i) takes a moderately long time to execute and
(ii) is independent of the code that runs before/after it in the loop;
and (iii, somewhat tangentially) you are running a Linux/OS X machine,
so you can use the multicore package.

There isn't much learning involved, since parallelizing over the CPUs
of a single machine is pretty much painless as long as you satisfy
(iii) above. That condition is only there because, last I heard, the
"multicore" package (which foreach/doMC depends on) doesn't really work
on Windows. For instance, instead of something like:

results <- lapply(1:100, function(x) doSomethingWith(x))

or:

results <- list()
for (x in 1:100) {
  results[[x]] <- doSomethingWith(x)
}

You do:

results <- foreach(x=1:100) %dopar% {
  doSomethingWith(x)
}

That having been said, I wouldn't use foreach all the time as a
"default" replacement for the normal/sequential "for" loop, because
there is some rigging involved in using it, and it might not be worth
it if the code you are iterating over isn't too heavy.

Another nice thing is that the foreach process "degrades" gracefully.
For instance, if you are running on a machine that doesn't have any
foreach backend packages installed/enabled (the backend package
determines the "parallelization strategy", eg: "doMC" is a foreach
backend that parallelizes over the cpus/cores of 1 machine, others
parallelize over different machines in a cluster), then it will just
run the code in the %dopar% block sequentially.
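
To make that concrete, a minimal sketch of enabling the doMC backend on a
multi-core Linux/OS X box (the core count is only an example, and
doSomethingWith() is the placeholder from the examples above):

library(foreach)
library(doMC)            # one possible backend (Linux/OS X)
registerDoMC(cores = 2)  # register the multicore backend; 2 is just an example

## same loop as above, now actually running in parallel
results <- foreach(x = 1:100) %dopar% {
  doSomethingWith(x)
}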

Hope that helps,
-steve

-- 
Steve Lianoglou
Graduate Student: Computational Systems Biology
 | Memorial Sloan-Kettering Cancer Center
 | Weill Medical College of Cornell University
Contact Info: http://cbio.mskcc.org/~lianos/contact

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Build failure on Solaris 10 (SPARC)

2010-02-16 Thread Dr. David Kirkby
I'm trying to build R 2.10.1 on a Sun Blade 1000 running Solaris 10 (03/05 
release). I've installed iconv 1.13.1 and used:


CPPFLAGS="-I /export/home/drkirkby/sage-4.3.3.alpha0/local/include"
(which is where iconv is)

LDFLAGS="-R/export/home/drkirkby/sage-4.3.3.alpha0/local/lib -L/export/home/drkirkby/sage-4.3.3.alpha0/local/lib"



The build of R fails as below.


gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2 
-I../../src/extra/pcre -I../../src/extra  -I../../src/extra/xz/api -I. 
-I../../src/include -I../../src/include -I 
/export/home/drkirkby/sage-4.3.3.alpha0/local/include -DHAVE_CONFIG_H -g -O2 
-c unique.c -o unique.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2 
-I../../src/extra/pcre -I../../src/extra  -I../../src/extra/xz/api -I. 
-I../../src/include -I../../src/include -I 
/export/home/drkirkby/sage-4.3.3.alpha0/local/include -DHAVE_CONFIG_H -g -O2 
-c util.c -o util.o

util.c: In function 'Rf_Scollate':
util.c:1679: warning: implicit declaration of function 'uiter_setUTF8'
util.c:1681: warning: implicit declaration of function 'ucol_strcollIter'
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2 
-I../../src/extra/pcre -I../../src/extra  -I../../src/extra/xz/api -I. 
-I../../src/include -I../../src/include -I 
/export/home/drkirkby/sage-4.3.3.alpha0/local/include -DHAVE_CONFIG_H -g -O2 
-c version.c -o version.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2 
-I../../src/extra/pcre -I../../src/extra  -I../../src/extra/xz/api -I. 
-I../../src/include -I../../src/include -I 
/export/home/drkirkby/sage-4.3.3.alpha0/local/include -DHAVE_CONFIG_H -g -O2 
-c vfonts.c -o vfonts.o

gfortran   -g -O2 -c xxxpr.f -o xxxpr.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2 
-I../../src/extra/pcre -I../../src/extra  -I../../src/extra/xz/api -I. 
-I../../src/include -I../../src/include -I 
/export/home/drkirkby/sage-4.3.3.alpha0/local/include -DHAVE_CONFIG_H -g -O2 
-c mkdtemp.c -o mkdtemp.o
ar cr libR.a CConverters.o CommandLineArgs.o Rdynload.o Renviron.o RNG.o agrep.o 
apply.o arithmetic.o array.o attrib.o base.o bind.o builtin.o character.o 
coerce.o colors.o complex.o connections.o context.o cov.o cum.o dcf.o datetime.o 
debug.o deparse.o deriv.o devices.o dotcode.o dounzip.o dstruct.o duplicate.o 
engine.o envir.o errors.o eval.o format.o fourier.o gevents.o gram.o gram-ex.o 
gramRd.o graphics.o grep.o identical.o inlined.o inspect.o internet.o 
iosupport.o lapack.o list.o localecharset.o logic.o main.o mapply.o match.o 
memory.o model.o names.o objects.o optim.o optimize.o options.o par.o paste.o 
pcre.o platform.o plot.o plot3d.o plotmath.o print.o printarray.o printvector.o 
printutils.o qsort.o random.o raw.o registration.o relop.o rlocale.o saveload.o 
scan.o seq.o serialize.o size.o sort.o source.o split.o sprintf.o startup.o 
subassign.o subscript.o subset.o summary.o sysutils.o unique.o util.o version.o 
vfonts.o xxxpr.o  mkdtemp.o  libs/*o

ranlib libR.a
gcc -std=gnu99  -R/export/home/drkirkby/sage-4.3.3.alpha0/local/lib 
-L/export/home/drkirkby/sage-4.3.3.alpha0/local/lib -o R.bin Rmain.o libR.a 
-L../../lib -lRblas -R/export/home/drkirkby/sage-4.3.3.alpha0/local/lib 
-lgfortran -lm -lreadline -ltermcap -lnsl -lsocket -ldl -lm -liconv -licuuc 
-licui18n

Undefined   first referenced
 symbol in file
uiter_setUTF8   libR.a(util.o)
ucol_strcollIterlibR.a(util.o)
ld: fatal: Symbol referencing errors. No output written to R.bin
collect2: ld returned 1 exit status


note the two earlier lines:
util.c:1679: warning: implicit declaration of function 'uiter_setUTF8'
util.c:1681: warning: implicit declaration of function 'ucol_strcollIter'

Any ideas how I might solve this? I'm using gcc 4.4.3.

Dave

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] line options on read.spss

2010-02-16 Thread Never Read

Hi,

I am a newbie here. I like the ability to read SPSS files, since they come with 
other info.
My problem with it is that it seems I have to read the whole file into memory.

For a csv file, I can read part of it and dump it into a database, so that even 
though I don't have a powerful system, I can work with a big data file as long 
as I only extract the part I need from the database at a time.
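
For what it's worth, here is a rough sketch of that CSV workflow; the file name
"big.csv", the table name "mydata" and the use of the DBI/RSQLite packages are
purely illustrative assumptions:

library(DBI)
library(RSQLite)

con <- dbConnect(SQLite(), dbname = "big.db")   # hypothetical database file
cols <- names(read.csv("big.csv", nrows = 1))   # read the header once
skip <- 1                                       # lines already consumed (the header)
chunk_size <- 10000

repeat {
  chunk <- tryCatch(
    read.csv("big.csv", skip = skip, nrows = chunk_size,
             header = FALSE, col.names = cols),
    error = function(e) NULL)                   # read.csv errors at end of file
  if (is.null(chunk) || nrow(chunk) == 0) break
  dbWriteTable(con, "mydata", chunk, append = TRUE)
  skip <- skip + nrow(chunk)
}

dbDisconnect(con)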

I think this could be a useful addition to the project.

Thanks.

Duncan

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Does the R statistical language include modules/packages to carry out nonlinear optimization similar to the SAS NLIN and NLP procedures?

2010-02-16 Thread Powers, Randall - BLS
Hello R folks,

I'm hoping the answer to the question in the subject line is yes.

I have in the past used SAS PROC NLIN and PROC NLP to carry out
nonlinear optimizations. I'm wondering if there are analogous ways for
doing this using R. If so, could someone please point me to some
literature that would help me examine this further?

Thanks very much.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] combining dataframes with different row length

2010-02-16 Thread Juan Pablo Fededa
Hello list,

I want to combine data frames from tsv files which have different row lengths.
I tried cbind, but it doesn't work.
Is there an easy way to fill with NAs up to a common number of rows for all
the data frames and then merge them?
Any other ideas?
Thanks a lot,


Juan
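
One minimal sketch of the NA-padding idea asked about above, using two made-up
data frames (real ones would come from read.delim()):

df1 <- data.frame(a = 1:5)
df2 <- data.frame(b = 1:3)

n <- max(nrow(df1), nrow(df2))
pad <- function(df, n) {
  out <- df[seq_len(n), , drop = FALSE]   # indexing past the end gives NA rows
  rownames(out) <- NULL
  out
}

combined <- cbind(pad(df1, n), pad(df2, n))
combined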

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] margin text warning message NAs coercion

2010-02-16 Thread e-letter
Readers,

I tried the following commands:

plot(y~x,ylab=expression(A[1]~B[2]),xlab=expression(C~D))
mtext(expression(A[1]~B[2]),"additional text",side=3,line=1)

I receive the text that I want, but the command terminal shows the
following response:

Warning message:
NAs introduced by coercion in: mtext(expression(A[1]~B[2]),"additional
text", side = 3,

What is my mistake please?

Yours,

r251
rhelpatconference.jabber.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] reading quattro pro spreadsheet .qpw into R

2010-02-16 Thread stephen sefick
I have many quattro pro spreadsheets and no quattro pro.  Is there a
way to access the data using R, or any other solution that anyone can
think of?
thanks,

-- 
Stephen Sefick

Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods.  We are mammals, and have not exhausted the
annoying little problems of being mammals.

-K. Mullis

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Odp: error : unused argument(s) when boxplot

2010-02-16 Thread Petr PIKAL
Hi

r-help-boun...@r-project.org wrote on 16.02.2010 15:18:21:

> Dear all,
> 
> I am a total beginner in R, so sorry if this is the wrong place. I am using R
> 2.10.1 on a Mac (Mac OS 10.6.2). 
> I have this small dataset :
> growth   sugar
> 75  C
> 72  C
> 73  C
> 61  F
> 67  F
> 64  F
> 62  S
> 63  S
> I have no problem reading the table, or getting the summary, but if I try
> boxplot(growth~sugar, ylab="growth", xlab="sugar", data=Dataset), I have the
> following error : ERROR:  unused argument(s) (sugar).
> 
> Any suggestions ?

What does str(Dataset) tell you about your data?

Petr
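
For comparison, a hypothetical reconstruction of the small data set in a form
that str() and boxplot() are happy with:

Dataset <- data.frame(growth = c(75, 72, 73, 61, 67, 64, 62, 63),
                      sugar  = factor(c("C", "C", "C", "F", "F", "F", "S", "S")))
str(Dataset)
# 'data.frame': 8 obs. of 2 variables:
#  $ growth: num  75 72 73 61 67 64 62 63
#  $ sugar : Factor w/ 3 levels "C","F","S": 1 1 1 2 2 2 3 3
boxplot(growth ~ sugar, data = Dataset, xlab = "sugar", ylab = "growth")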


> 
> Thanks a lot,
> 
> PH
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Random Forest

2010-02-16 Thread Dror

Hi,
I'm using the randomForest package and I have 2 questions:
1. Can I drop one tree from an RF object?
2. I have a 300-tree forest, but when I use the predict function on new
data (with predict.all=TRUE) I get only 270 votes. Did I do something wrong?
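
For reference, a small sketch (with made-up data) of what
predict(..., predict.all = TRUE) returns; the per-tree votes sit in the
$individual component, one column per tree:

library(randomForest)
set.seed(1)
dat <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
dat$y <- factor(ifelse(dat$x1 + dat$x2 > 0, "A", "B"))

rf <- randomForest(y ~ ., data = dat, ntree = 300)
pr <- predict(rf, newdata = dat[1:5, ], predict.all = TRUE)
str(pr, max.level = 1)      # list with $aggregate and $individual
ncol(pr$individual)         # one column per tree; should match rf$ntree (300)
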
Thanks

-- 
View this message in context: 
http://n4.nabble.com/Random-Forest-tp1557464p1557464.html
Sent from the R help mailing list archive at Nabble.com.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Total and heading of portfoilo table

2010-02-16 Thread Petr PIKAL
Hi

r-help-boun...@r-project.org wrote on 16.02.2010 08:05:08:

> Hi!
>  
> I am not an expert in R, but perhaps you can try the following -
>  
> X = as.numeric(read.csv('quantity.csv'))
> Y = read.csv('equity_price.csv')
> Y = Y[, -1]
>  
> Z = X*Y
>  
> port_val = NULL
>  
> for(i in 1 : nrow(Z))
> {
>  
> port_val[i] = sum(Z[i,])
>  
> }

If I am not mistaken, and if you ensure that the column positions in both data 
frames are the same,

 rowSums(mapply("*",  Y, X))

shall do the trick.
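
A small worked check, using the toy numbers from the message quoted below:

X <- c(GOOG = 1000, YHOO = 100)                        # quantities
Y <- data.frame(GOOG_price = c(15.22, 15.07, 15.19),   # first three days of prices
                YHOO_price = c(536.40, 532.97, 534.05))
Z <- mapply("*", Y, X)          # multiply each price column by its quantity
cbind(Z, Total = rowSums(Z))    # per-day values plus the portfolio total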

Regards
Petr



>  
> write.csv(data.frame(Z, port_val = port_val), 'PORTFOLIO.csv', row.names = FALSE)
> 
> 
> I am sure the experts will have a much simpler way to address this problem.
>  
> Regards
>  
> Madhavi
> 
> --- On Mon, 15/2/10, Sarah Sanchez  wrote:
> 
> 
> From: Sarah Sanchez 
> Subject: [R] Total and heading of portfoilo table
> To: r-help@r-project.org
> Date: Monday, 15 February, 2010, 10:08 PM
> 
> 
> Dear R helpers,
> 
> I have two input files as 'quantity.csv' and 'equity_price.csv' as (for 
> example) given below.
> 
> 'quantity.csv'
> GOOG YHOO
> 1000 100
> 
> 
> 'equity_price.csv'
> sr_no   GOOG_price   YHOO_price
> 1       15.22        536.40
> 2       15.07        532.97
> 3       15.19        534.05
> 4       15.16        531.86
> 5       15.11        532.11
> 
> My problem is to calculate the portfolio value for each of these 5 days 
> (actually my portfolio 
> consists of 47 companies and the prices taken are for the last 1 year).
> 
> I had defined 
> 
> X = read.csv('quantity.csv')
> Y = read.csv('equity_price.csv')
> 
> I have tried the loop 
> 
> Z = array()
> 
> for (i in 1:2)
> {
> Z[i] = (X[[i]]*Y[i])
> }
> 
> # When I write this dataframe as
> 
> write.csv(data.frame(Z), 'Z.csv', row.names = FALSE)
> 
> When I open 'Z.csv' file, I get
> 
> c.2500L..3300L..4500L..1000L..4400L.   c.14000L..45000L..48000L..26000L..15000L.
> 2500   14000
> 3300   45000
> 4500   48000
> 1000   26000
> 4400   15000
> 
> My requirement is to have the column heads and the portfolio total as
> GOOG   YHOO  Total
> 2500   14000 16500
> 3300   45000 48300
> 4500   48000 52500
> 1000   26000 27000
> 4400   15000 19400
> 
> 
> Please guide
> 
> Regards
> 
> Sarah
> 
> 
> 
> 
>   
> [[alternative HTML version deleted]]
> 
> 
> -Inline Attachment Follows-
> 
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 
> 
> 
>[[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


  1   2   >