Re: [R] New installation

2016-06-09 Thread Leonardo Ferreira Fontenelle
I have tried many Linux distributions before, and never looked back
after switching to Arch Linux. It is one of the best distributions for
having an up-to-date but still reasonably stable system. Other
options are Fedora Rawhide (there's a Fedora SIG mailing list) or
Debian Sid (as others mentioned, there's a Debian SIG mailing list), but
I don't know how dependable those versions are.

Leonardo Ferreira Fontenelle
Former GNOME translator

On Thu, Jun 9, 2016, at 20:08, Ista Zahn wrote:
> Perhaps r-sig-debian is more appropriate, though it is not clear to me
> that a Debian-based Linux is in fact the best for running R. Of course
> "best" is not clearly defined here, but I highly recommend Arch Linux.
> 
> Best,
> Ista
> On Jun 9, 2016 6:47 PM, "Bert Gunter"  wrote:
> 
> > I suggest that you post to the r-sig-debian list instead of here. I
> > think you are more likely to get good answers to your query there.
> >
> > Cheers,
> > Bert
> > Bert Gunter
> >
> > "The trouble with having an open mind is that people keep coming along
> > and sticking things into it."
> > -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
> >
> >
> > On Thu, Jun 9, 2016 at 1:44 PM, jax200  wrote:
> > > Hi
> > >
> > > I'm starting off with both R and Linux Mint.  During a recent R course, I
> > > had multiple difficulties with installing updates needed for the course.
> > >
> > > As such, I'd like to hit the restart button with fresh installs of Linux
> > > and R.  I would appreciate your help with which Linux platform works best
> > > with R, and how to go about getting all the updates installed for both
> > > programs.
> > >
> > > Many thanks,  Jack
> > >
> > >
> > > __
> > > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > PLEASE do read the posting guide
> > > http://www.R-project.org/posting-guide.html
> > > and provide commented, minimal, self-contained, reproducible code.
> >
> >
> 
> 


Re: [R] Power Calculation:2-sided exact equivalence test for Binomial Proportions

2016-06-07 Thread Leonardo Ferreira Fontenelle
On Tue, Jun 7, 2016, at 13:26, Munjal Patel wrote:
> Dear R-Users,
> I am an intermediate level R user.
> 
> I am performing power calculations for binomial proportions
> (two-sided).
> I want to find the power using the exact test for the equivalence of
> binomial proportions.
> I have the SAS code which generates the power for me, but I am unable
> to find similar code in R.
> I need help finding the equivalent computation in R.
> 

I'm not a statistician, and I have never heard of power calculation
using an exact test. On the other hand, with hundreds of observations in
each group, does it matter? Maybe you could simply use
power.prop.test(), from the loaded-by-default package "stats".

Of course, you could also look for packages about proportions or
binomial tests on CRAN, or set up a simulation.
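To sketch what that might look like (the proportions and group size
below are made up, since the original post doesn't give them),
power.prop.test() works like this:

```r
# Power of a two-sided, two-sample test of proportions, with
# hypothetical proportions and 200 observations per group
res <- power.prop.test(n = 200, p1 = 0.30, p2 = 0.40)
res$power  # approximate power of the two-sided test
```

A simulation-based alternative would repeatedly draw samples with
rbinom() and count how often the chosen exact test rejects.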

Hope that helps,

Leonardo Ferreira Fontenelle, MD, MPH

PhD candidate, Federal University of Pelotas
Professor of Medicine, Vila Velha University
Legislative consultant, Municipal Chamber of Vitória


Re: [R] Power Calculation: Binomial Proportions (2 sided exact test for equivalence)

2016-06-07 Thread Leonardo Ferreira Fontenelle
On Tue, Jun 7, 2016, at 13:08, Munjal Patel wrote:
> Dear R-Sig-teaching users,
> I am an intermediate level R user.

You posted both emails to the same mailing list.

Please remember that "cross-posting is considered to be impolite" and
that "you should configure your e-mail software in such a way as to send
only plain text": https://www.r-project.org/mail.html

Best regards, 

Leonardo Ferreira Fontenelle, MD, MPH

PhD candidate, Federal University of Pelotas
Professor of Medicine, Vila Velha University
Legislative consultant, Municipal Chamber of Vitória


Re: [R] sandwich package: HAC estimators

2016-05-30 Thread Leonardo Ferreira Fontenelle
On Sat, May 28, 2016, at 15:50, Achim Zeileis wrote:
> On Sat, 28 May 2016, T.Riedle wrote:
> > I thought it would be useful to incorporate the HAC consistent 
> > covariance matrix into the logistic regression directly and generate an 
> > output of coefficients and the corresponding standard errors. Is there 
> > such a function in R?
> 
> Not with HAC standard errors, I think.
> 

Don't glmrob() and summary.glmrob(), from robustbase, do that?
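For what it's worth, the usual way to get a coefficient table with HAC
standard errors for a fitted glm is coeftest() from lmtest combined with
vcovHAC() from sandwich. A sketch with the built-in mtcars data (not the
original poster's model):

```r
library(sandwich)
library(lmtest)

# Logistic regression on a built-in data set, purely illustrative
fit <- glm(am ~ mpg, data = mtcars, family = binomial)

# Coefficient table using a HAC-consistent covariance matrix
coeftest(fit, vcov. = vcovHAC(fit))
```

Note that glmrob() fits a regression that is robust to outliers, which
is a different notion of robustness from HAC (autocorrelation- and
heteroskedasticity-consistent) standard errors.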


Leonardo Ferreira Fontenelle, MD, MPH


Re: [R] How to replace all commas with semicolon in a string

2016-05-30 Thread Leonardo Ferreira Fontenelle
On Fri, May 27, 2016, at 12:10, Jun Shen wrote:
> Dear list,
> 
> Say I have a data frame
> 
> test <- data.frame(C1=c('a,b,c,d'),C2=c('g,h,f'))
> 
> I want to replace the commas with semicolons
> 
> sub(',',';',test$C1) -> test$C1 will only replace the first comma of a
> string.
> 
> How do I replace them all in one run? Thanks.

If it's a CSV file, you may read it in and then write it out again
(under a second file name, for safety) with write.csv2(), which uses
semicolons as field separators.
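Alternatively, the direct answer to the question is gsub(), which
replaces every match rather than only the first (which is where sub()
stops):

```r
test <- data.frame(C1 = c('a,b,c,d'), C2 = c('g,h,f'),
                   stringsAsFactors = FALSE)

# gsub() replaces *all* commas; fixed = TRUE treats ',' literally
test[] <- lapply(test, function(x) gsub(',', ';', x, fixed = TRUE))
test$C1  # "a;b;c;d"
```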

HTH,

Leonardo Ferreira Fontenelle, MD, MPH
PhD candidate in epidemiology, Federal University of Pelotas
Professor of medicine, Vila Velha University
Legislative consultant in health, Municipal Chamber of Vitória


Re: [R] svyciprop object

2016-05-06 Thread Leonardo Ferreira Fontenelle
On Fri, May 6, 2016, at 06:20, kende jan via R-help wrote:
> Hi, I'd like to access the different elements in a svyciprop object
> (the confidence intervals in particular). But none of the functions I
> know works. Thank you for your help!

I don't know what data set you are using, so for reproducibility I'm
using the data set from the example in the function documentation.

=
library(survey)
data(api)
dclus1 <- svydesign(ids = ~ dnum, fpc = ~ fpc, data = apiclus1)
grr <- svyciprop(~ I(ell == 0), dclus1, method = "likelihood")
attr(grr, "ci")
# 2.5%    97.5% 
# 0.0006639212 0.1077784084
=

HTH,

Leonardo Ferreira Fontenelle
PhD candidate in epidemiology, Federal University of Pelotas
Professor of medicine, Vila Velha University
Legislative analyst in health, Municipal Chamber of Vitória


Re: [R] Unexpected scores from weighted PCA with svyprcomp()

2016-05-03 Thread Leonardo Ferreira Fontenelle
Thanks for reminding me to cc him!

Thomas Lumley is the package maintainer, and he frequently answers
questions in this list, but it is obviously hard for anyone to keep up
with so many emails.

Regards,

Leonardo Ferreira Fontenelle
http://lattes.cnpq.br/9234772336296638


On Tue, May 3, 2016, at 21:16, Jeff Newmiller wrote:
> Your question is a mixture of statistical and implementation (package) 
> issues. This isn't really the right forum for "what is the 
> statistically-correct answer" questions, and as to whether the package is 
> correct or you are using it right would require someone familiar with that 
> particular CONTRIBUTED package to be reading this list. (While this is 
> probably one of the more widely used contributed packages,  there are over 
> 8000 contributed packages so far and I for one don't use it...)
>  
>  You could ask on stats.stackexchange.com where theory is more on topic, or 
> you could try to get the maintainer to chime in (use the maintainer() 
> function to find out who to cc), or you could just be patient. 
>  -- 
>  Sent from my phone. Please excuse my brevity.
> 
> On May 3, 2016 3:34:15 PM PDT, Leonardo Ferreira Fontenelle 
>  wrote:
>> Is there something I could do to improve my chances of getting an
>> answer?
>> 
>> Leonardo Ferreira Fontenelle
>> http://lattes.cnpq.br/9234772336296638
>> 
>> On Fri, Apr 29, 2016, at 23:40, Leonardo Ferreira Fontenelle wrote:
>>>  Hello!
>>>  
>>>  I'd like to create an assets-based economic indicator using data from a
>>>  national household survey. The economic indicator is to be the first
>>>  principal component from a principal components analysis, which (given
>>>  the source of the data) I believe should take into consideration the
>>>  sampling weights of the observations. After running the PCA with
>>>  svyprcomp(), from the survey package, I wanted to list the loading (with
>>>  regard to the first principal component) and the scale of the variables,
>>>  so that I can tell people how to "reconstitute" the economic indicator
>>>  from the variables without any knowledge of PCA. This reconstituted
>>>  indicator wouldn't be centered, but that's OK because the important
>>>  thing for the application is the relative position of the observations.
>>>  The unexpected (at least for me) behavior was that the principal
>>>  component returned by svyprcomp() was very different from the
>>>  reconstituted indicator as well as from the indicator returned by
>>>  predict(). "Different" here means weak correlation and different
>>>  distributions.
>>>  
>>>  I hope the following code illustrates what I mean:
>>>  
>>>  =
>>>  
>>>  svycor <- function(formula, design) {
>>># https://stat.ethz.ch/pipermail/r-help/2003-July/036645.html
>>>stopifnot(require(survey))
>>>covariance.matrix <- svyvar(formula, design)
>>>    variables <- diag(covariance.matrix)
>>>correlation.matrix <- covariance.matrix / sqrt(variables %*%
>>>t(variables))
>>>return(correlation.matrix)
>>>  }
>>>  
>>>  library(survey)
>>>  data(api)
>>>  dclus2 <- svydesign(ids = ~ dnum + snum, fpc = ~ fpc1 + fpc2, data =
>>>  apiclus2)
>>>  pc <- svyprcomp( ~ api99 + api00, design = dclus2, scale = TRUE, scores
>>>  = TRUE)
>>>  dclus2$variables$pc1 <- pc$x[, "PC1"]
>>>  dclus2$variables$pc2 <- predict(pc, apiclus2)[, "PC1"]
>>>  mycoef <- pc$rotation[, "PC1"] / pc$scale
>>>  dclus2$variables$pc3 <- with(apiclus2, api99 * mycoef["api99"] + api00 *
>>>  mycoef["api00"])
>>>  svycor(~ pc1 + pc2 + pc3, dclus2)[, ]
>>>  #   pc1   pc2   pc3
>>>  # pc1 1.000 0.2078789 0.2078789
>>>  # pc2 0.2078789 1.000 1.000
>>>  # pc3 0.2078789 1.000 1.000
>>>  plot(svysmooth(~ pc1, dclus2), xlim = c(-2.5, 5), ylim = 0:1)
>>>  lines(svysmooth(~ pc2, dclus2), col = 2)
>>>  lines(svysmooth(~ pc3, dclus2), col = 3)
>>>  legend("topright", legend = c('pc$x[, "PC1"]', 'predict(pc, apiclus2)[,
>>>  "PC1"]', 'Reconstituted indicator'), col = 1:3, lty = 1)
>>>  
>>>  sessionInfo()
>>>  # R version 3.2.4 Revised (2016-03-16 r70336)
>>>  # Platform: x86_64-pc-linux-gnu (64-bit)
>>>  # Running under: Arch Linux
>>>  

Re: [R] Unexpected scores from weighted PCA with svyprcomp()

2016-05-03 Thread Leonardo Ferreira Fontenelle
Is there something I could do to improve my chances of getting an
answer?

Leonardo Ferreira Fontenelle
http://lattes.cnpq.br/9234772336296638

On Fri, Apr 29, 2016, at 23:40, Leonardo Ferreira Fontenelle wrote:
> Hello!
> 
> I'd like to create an assets-based economic indicator using data from a
> national household survey. The economic indicator is to be the first
> principal component from a principal components analysis, which (given
> the source of the data) I believe should take into consideration the
> sampling weights of the observations. After running the PCA with
> svyprcomp(), from the survey package, I wanted to list the loading (with
> regard to the first principal component) and the scale of the variables,
> so that I can tell people how to "reconstitute" the economic indicator
> from the variables without any knowledge of PCA. This reconstituted
> indicator wouldn't be centered, but that's OK because the important
> thing for the application is the relative position of the observations.
> The unexpected (at least for me) behavior was that the principal
> component returned by svyprcomp() was very different from the
> reconstituted indicator as well as from the indicator returned by
> predict(). "Different" here means weak correlation and different
> distributions.
> 
> I hope the following code illustrates what I mean:
> 
> =
> 
> svycor <- function(formula, design) {
>   # https://stat.ethz.ch/pipermail/r-help/2003-July/036645.html
>   stopifnot(require(survey))
>   covariance.matrix <- svyvar(formula, design)
>   variables <- diag(covariance.matrix)
>   correlation.matrix <- covariance.matrix / sqrt(variables %*%
>   t(variables))
>   return(correlation.matrix)
> }
> 
> library(survey)
> data(api)
> dclus2 <- svydesign(ids = ~ dnum + snum, fpc = ~ fpc1 + fpc2, data =
> apiclus2)
> pc <- svyprcomp( ~ api99 + api00, design = dclus2, scale = TRUE, scores
> = TRUE)
> dclus2$variables$pc1 <- pc$x[, "PC1"]
> dclus2$variables$pc2 <- predict(pc, apiclus2)[, "PC1"]
> mycoef <- pc$rotation[, "PC1"] / pc$scale
> dclus2$variables$pc3 <- with(apiclus2, api99 * mycoef["api99"] + api00 *
> mycoef["api00"])
> svycor(~ pc1 + pc2 + pc3, dclus2)[, ]
> #   pc1   pc2   pc3
> # pc1 1.000 0.2078789 0.2078789
> # pc2 0.2078789 1.000 1.000
> # pc3 0.2078789 1.000 1.000
> plot(svysmooth(~ pc1, dclus2), xlim = c(-2.5, 5), ylim = 0:1)
> lines(svysmooth(~ pc2, dclus2), col = 2)
> lines(svysmooth(~ pc3, dclus2), col = 3)
> legend("topright", legend = c('pc$x[, "PC1"]', 'predict(pc, apiclus2)[,
> "PC1"]', 'Reconstituted indicator'), col = 1:3, lty = 1)
> 
> sessionInfo()
> # R version 3.2.4 Revised (2016-03-16 r70336)
> # Platform: x86_64-pc-linux-gnu (64-bit)
> # Running under: Arch Linux
> # 
> #  locale:
> #  [1] LC_CTYPE=pt_BR.UTF-8   LC_NUMERIC=C  
> #  [3] LC_TIME=pt_BR.UTF-8LC_COLLATE=pt_BR.UTF-8
> #  [5] LC_MONETARY=pt_BR.UTF-8LC_MESSAGES=pt_BR.UTF-8   
> #  [7] LC_PAPER=pt_BR.UTF-8   LC_NAME=C 
> #  [9] LC_ADDRESS=C   LC_TELEPHONE=C
> # [11] LC_MEASUREMENT=pt_BR.UTF-8 LC_IDENTIFICATION=C   
> # 
> # attached base packages:
> # [1] grid  stats graphics  utils datasets  grDevices
> # [7] methods   base 
> # 
> # other attached packages:
> # [1] KernSmooth_2.23-15 survey_3.30-3 
> # 
> # loaded via a namespace (and not attached):
> # [1] tools_3.2.4
> 
> =
> 
> This lack of correlation doesn't happen if the survey design object has
> uniform sampling weights or if the data is analyzed with prcomp().
> 
> Why is the returned principal component so different from the
> predicted and the reconstituted ones? Are predict() and my
> "reconstitution" missing something? Are the three methods equally valid
> but with different interpretations? Is there a bug in svyprcomp()?
> 
> Thanks in advance,
> 
> Leonardo Ferreira Fontenelle
> http://lattes.cnpq.br/9234772336296638
> 


Re: [R] Declaring All Variables as Factors in GLM()

2016-04-30 Thread Leonardo Ferreira Fontenelle
This should do the trick:

history2 <- as.data.frame(lapply(history, as.factor))

Mind you that read.csv() by default reads string columns as factors, so
declaring the variables as factors should only be necessary for the
numeric ones, like income. Using as.factor() on factor variables may
drop unused levels, but in your case I believe that won't be a problem.
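An equivalent idiom (sketched below with a made-up stand-in for the
poster's data, since the real history.csv isn't available) assigns into
the existing data frame with `history[] <-`, which keeps the data frame
class and its row/column names intact:

```r
# Hypothetical stand-in for the poster's 'history' data set
history <- data.frame(income = c(100, 200, 300, 100),
                      job = c("private", "government", "student", "private"),
                      y = c(0, 1, 1, 0))

# Convert every column to a factor, in place
history[] <- lapply(history, as.factor)
str(history)  # all columns are now factors
```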

HTH,

Leonardo Ferreira Fontenelle
http://lattes.cnpq.br/9234772336296638

On Sat, Apr 30, 2016, at 04:25, Preetam Pal wrote:
> Hi guys,
> 
> I am running glm(y ~ ., data = history, family = binomial) - essentially,
> logistic regression for credit scoring (y = 0 or 1). The dataset 'history'
> has 14 variables, a few examples:
> history <- read.csv("history.csv", header = TRUE)
> 1> 'income' = 100, 200, 300 (these are numbers in my dataset; however the
> interpretation is that they are just tags or labels; every observation's
> income gets assigned one of these tags)
> 2> 'job' = 'private', 'government', 'unemployed', 'student'
> 
> I want to declare all the regressors and y variables *as factors*
> programmatically. Would be great if anyone can help me with this (idea is
> to loop over variable names and use as.factor - but not sure how to do
> this). Thanks
> 
> Regards,
> Preetam
> -- 
> Preetam Pal
> (+91)-9432212774
> M-Stat 2nd Year, Room No. N-114
> Statistics Division, C.V. Raman Hall
> Indian Statistical Institute, B.H.O.S.
> Kolkata.
> 
> 


[R] Unexpected scores from weighted PCA with svyprcomp()

2016-04-30 Thread Leonardo Ferreira Fontenelle
Hello!

I'd like to create an assets-based economic indicator using data from a
national household survey. The economic indicator is to be the first
principal component from a principal components analysis, which (given
the source of the data) I believe should take into consideration the
sampling weights of the observations. After running the PCA with
svyprcomp(), from the survey package, I wanted to list the loading (with
regard to the first principal component) and the scale of the variables,
so that I can tell people how to "reconstitute" the economic indicator
from the variables without any knowledge of PCA. This reconstituted
indicator wouldn't be centered, but that's OK because the important
thing for the application is the relative position of the observations.
The unexpected (at least for me) behavior was that the principal
component returned by svyprcomp() was very different from the
reconstituted indicator as well as from the indicator returned by
predict(). "Different" here means weak correlation and different
distributions.

I hope the following code illustrates what I mean:

=

svycor <- function(formula, design) {
  # https://stat.ethz.ch/pipermail/r-help/2003-July/036645.html
  stopifnot(require(survey))
  covariance.matrix <- svyvar(formula, design)
  variables <- diag(covariance.matrix)
  correlation.matrix <- covariance.matrix / sqrt(variables %*%
  t(variables))
  return(correlation.matrix)
}

library(survey)
data(api)
dclus2 <- svydesign(ids = ~ dnum + snum, fpc = ~ fpc1 + fpc2, data =
apiclus2)
pc <- svyprcomp( ~ api99 + api00, design = dclus2, scale = TRUE, scores
= TRUE)
dclus2$variables$pc1 <- pc$x[, "PC1"]
dclus2$variables$pc2 <- predict(pc, apiclus2)[, "PC1"]
mycoef <- pc$rotation[, "PC1"] / pc$scale
dclus2$variables$pc3 <- with(apiclus2, api99 * mycoef["api99"] + api00 *
mycoef["api00"])
svycor(~ pc1 + pc2 + pc3, dclus2)[, ]
#   pc1   pc2   pc3
# pc1 1.000 0.2078789 0.2078789
# pc2 0.2078789 1.000 1.000
# pc3 0.2078789 1.000 1.000
plot(svysmooth(~ pc1, dclus2), xlim = c(-2.5, 5), ylim = 0:1)
lines(svysmooth(~ pc2, dclus2), col = 2)
lines(svysmooth(~ pc3, dclus2), col = 3)
legend("topright", legend = c('pc$x[, "PC1"]', 'predict(pc, apiclus2)[,
"PC1"]', 'Reconstituted indicator'), col = 1:3, lty = 1)

sessionInfo()
# R version 3.2.4 Revised (2016-03-16 r70336)
# Platform: x86_64-pc-linux-gnu (64-bit)
# Running under: Arch Linux
# 
#  locale:
#  [1] LC_CTYPE=pt_BR.UTF-8   LC_NUMERIC=C  
#  [3] LC_TIME=pt_BR.UTF-8LC_COLLATE=pt_BR.UTF-8
#  [5] LC_MONETARY=pt_BR.UTF-8LC_MESSAGES=pt_BR.UTF-8   
#  [7] LC_PAPER=pt_BR.UTF-8   LC_NAME=C 
#  [9] LC_ADDRESS=C   LC_TELEPHONE=C
# [11] LC_MEASUREMENT=pt_BR.UTF-8 LC_IDENTIFICATION=C   
# 
# attached base packages:
# [1] grid  stats graphics  utils datasets  grDevices
# [7] methods   base 
# 
# other attached packages:
# [1] KernSmooth_2.23-15 survey_3.30-3 
# 
# loaded via a namespace (and not attached):
# [1] tools_3.2.4

=

This lack of correlation doesn't happen if the survey design object has
uniform sampling weights or if the data is analyzed with prcomp().

Why is the returned principal component so different from the
predicted and the reconstituted ones? Are predict() and my
"reconstitution" missing something? Are the three methods equally valid
but with different interpretations? Is there a bug in svyprcomp()?

Thanks in advance,

Leonardo Ferreira Fontenelle
http://lattes.cnpq.br/9234772336296638



Re: [R] Could not find function "pointsToRaster"

2016-04-30 Thread Leonardo Ferreira Fontenelle
Dear Ogbos Okike,

I don't know how your script depends on pointsToRaster(), but googling
around I found that the function seems to have been marked as obsolete:

http://www.inside-r.org/packages/cran/raster/docs/linesToRaste
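If rasterize() is indeed the replacement the raster package now intends
(an assumption on my part), a minimal sketch with random points in place
of your lightning coordinates would be:

```r
library(raster)

# Template raster covering the globe in lon/lat (36 x 18 cells)
r <- raster(ncols = 36, nrows = 18,
            xmn = -180, xmx = 180, ymn = -90, ymx = 90)

# Hypothetical lightning strike coordinates
set.seed(1)
pts <- cbind(lon = runif(100, -180, 180), lat = runif(100, -90, 90))

# Count points per cell, as pointsToRaster() used to do
counts <- rasterize(pts, r, fun = "count")
```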

Hope that helps,

Leonardo Ferreira Fontenelle
http://lattes.cnpq.br/9234772336296638

On Sat, Apr 30, 2016, at 05:28, Ogbos Okike wrote:
> Dear All,
> I have a script that draws the longitude and latitude of lightning
> occurrences. This script was running fine before, but when I changed my
> system and did a fresh install on another laptop, this error persists:
>  source("script")
> Error in eval(expr, envir, enclos) :
>   could not find function "pointsToRaster"
> 
> I have tried to see if there is any other package I need to install
> to take care of the problem, but I could not find one. When I installed
> raster, it already installed its dependencies such as sp.
> Can anybody please bail me out?
> 
> Thanks
> Ogbos
> 
