Re: [R-sig-Geo] How to interpret the "residual" values in plot(quadrat.test(some_data_set))

2023-12-20 Thread Rolf Turner


On Wed, 20 Dec 2023 05:56:19 +
Xiang Ye via R-sig-Geo  wrote:



> According to what I learnt about the quadrat test (Rogerson and
> Yamada 2005, p. 48), and the help document from quadrat.test, I
> assume the "residual" should be (observed-expected)^2/expected.



Just to reinforce what Roger Bivand has already told you (with
admirable clarity) --- the expression that you give above is for the
*square* of the (Pearson) residual, not the actual residual.
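To make the distinction concrete, here is a minimal sketch (assuming the spatstat package and equal-area quadrats, so that the expected counts are all equal under CSR):

```r
library(spatstat)

# Quadrat counts for the built-in 'cells' pattern on a 3 x 3 grid.
qc  <- quadratcount(cells, nx = 3, ny = 3)
obs <- as.vector(qc)

# Under CSR with equal-area quadrats, every quadrat has the same
# expected count.
expctd <- npoints(cells) / length(obs)

# The Pearson residual is (observed - expected)/sqrt(expected);
# (observed - expected)^2/expected is its *square*, i.e. the
# contribution of that quadrat to the chi-squared statistic.
pearson <- (obs - expctd) / sqrt(expctd)
chisq   <- sum(pearson^2)
```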

cheers,

Rolf Turner

-- 
Honorary Research Fellow
Department of Statistics
University of Auckland
Stats. Dep't. (secretaries) phone:
 +64-9-373-7599 ext. 89622
Home phone: +64-9-480-4619

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] differences between risk() in sparr package, and relrisk() in spatstat

2023-04-27 Thread Rolf Turner


On Mon, 24 Apr 2023 12:37:39 -0400
Christopher Ryan via R-sig-Geo  wrote:

> Hello. I have a multi-type point pattern, a ppp object in spatstat,
> with two types, cases and controls. About 2600 point altogether.
> Overall, about 2% are cases. All in a polygonal window.  I'm
> interested in the spatially varying relative risk of being a case
> rather than a control. I'm aware of the relrisk() command in
> spatstat, and also the risk() command in the sparr package, both of
> which take a ppp object as an argument. I've done it with both. They
> yield essentially similar pixel images, except that the range of the
> relative risk, as shown in the colored legend, is very different in
> the two plots.
> 
> plot(relrisk(thin.ems.ppp, at = "pixels", weights = NULL, varcov =
> NULL, relative=TRUE, adjust=1, edge=TRUE, diggle=TRUE, se=FALSE,
> casecontrol=TRUE, case = "case"))
> 
> yields relative risks ranging from 0 to about 0.16
> 
> In contrast,
> 
> risk(thin.ems.ppp, log = FALSE, verbose = TRUE, doplot = TRUE)
> 
> yields an image with relative risks ranging from 0 to about 40.
> 
> I've read the documentation for the packages, but perhaps I am still
> misunderstanding what each package means by "relative risk."  Can
> anyone comment?

I have asked the maintainer of the sparr package (Tilman Davies) about
this issue, and after a bit of to-ing and fro-ing this is what has
turned up.

The basic reason for the discrepancy is the somewhat perplexing fact
that ‘relrisk’ computes the ratio of intensities, but ‘risk’ computes
the ratio of densities.  I'm afraid that I can give you no insight in
respect of the intuitive interpretation of the two forms of "relative
risk".

Another, less fundamental, problem is that the default bandwidths
differ between the two functions.  To get consistent results, these
bandwidths need to be set explicitly.  (It is not entirely clear from
the help how to go about doing so.)

The following example, provided by Tilman, will provide you with some
guidance:

library(sparr)
data(pbc)
cas <- split(pbc)[[1]]
con <- split(pbc)[[2]]

# Using risk() with the "bandwidth" (the standard deviation of the
# kernel density estimator) set equal to 3 (h0 = 3).
r1  <- risk(cas,con,h0=3,log=FALSE)$rr

# "Raw" calculation, imitating what risk() does, i.e.
# using a ratio of densities.
fhat <- density.ppp(cas,sigma=3)
ghat <- density.ppp(con,sigma=3)
r2   <- (fhat/integral(fhat))/(ghat/integral(ghat))

# Using relrisk() with the "bandwidth" set equal to 3 (sigma = 3).
# Note the difference in argument names, the fact that we have to
# specify "relative=TRUE" (see the help for relrisk), and the need to
# specify which mark corresponds to the "control".
r3 <- relrisk(pbc,sigma=3,relative=TRUE,control=2)

# "Raw" calculation, imitating what relrisk() does, i.e.
# using a ratio of intensities.
r4 <- fhat/ghat

# Now view the results.
plot(anylist(risk=r1,risk.raw=r2,relrisk=r3,
relrisk.raw=r4),nrows=2,main="")

All is in harmony!  OMM!
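For the record, the two answers are linked by simple algebra: since r2 = (fhat/integral(fhat))/(ghat/integral(ghat)) and r4 = fhat/ghat, the two images differ only by a constant factor. A quick check, sketched under the assumption that the objects computed above are still in the workspace:

```r
# The ratio of the two "relative risks" is the constant
# integral(fhat)/integral(ghat) (roughly n_cases/n_controls):
konst <- integral(fhat) / integral(ghat)
ratio <- r4 / r2   # a pixel image; constant wherever it is defined
range(ratio)       # both ends should equal konst, up to rounding
```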

Hope this helps.

cheers,

Rolf Turner

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Stats. Dep't. (secretaries) phone:
 +64-9-373-7599 ext. 89622
Home phone: +64-9-480-4619

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] Unsubscribe request

2022-05-21 Thread Rolf Turner
On Sat, 21 May 2022 12:29:37 +0200
Ahoura Jafarimanesh  wrote:

> Dear Sir or Madam,
> 
> Could you please unsubscribe my email from your mailing
> list? I tried to do it via the link provided, but it asked for a
> password, so I was not successful.

Read what the link says: "If you just want to unsubscribe from this
list, click on the Unsubscribe button and a confirmation message will
be sent to you."

cheers,

Rolf Turner

-- 
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] Shiny app for Spatial Econometric classes

2021-07-07 Thread Rolf Turner


On Wed, 7 Jul 2021 18:09:50 -0300
Raphael Saldanha  wrote:

> Hello all!
> 
> Me and Professor Eduardo Almeida (UFJF, Brazil) are creating an R
> Shiny app for Spatial Econometric classes. 



I hope that you won't find this comment offensive, but I could not
resist sending it to you, since I am rather fanatical (pedantic?) about
correct English usage.

The expression "Me and Professor Eduardo Almeida" is both (a) bad
grammar and (b) bad form.  (a) Instead of the pronoun "Me" (accusative or
objective case) you should use "I" (nominative or subjective case), since
this expression is the subject of the sentence.  (b) It is "bad form"
(it sounds vaguely self-promoting)  to put a first person pronoun before the
names of others to which it is linked.  You should say "Professor
Eduardo Almeida and I are creating ".

I realise that English is surely a second language for you, and your
English is infinitely better than my non-existent Portuguese.  Your
English is actually quite good, probably better than that of many
native English speakers.  Nevertheless the expression that you use is
not correct and you should strive to avoid such usage.

Again let me express the hope that you won't take offence at this
comment.

cheers,

Rolf Turner

-- 
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] Species distribution modeling with presence-only data using log-Gaussian Cox process

2021-02-10 Thread Rolf Turner


On Wed, 10 Feb 2021 16:40:50 +
Barry Rowlingson  wrote:

> Before messing with lgcp I'd look at spatstat - lgcp is more intended
> for relative risk calculations where you have "cases" and "controls",
> and also for
> space-time data.
> 
> spatstat seems to have functions for fitting (and simulating) LGCPs to
> point patterns so I think that might be what you are after...


Seems, Barry?  Nay, 'tis so! :-)

Thanks for pointing the OP at spatstat.

cheers,

Rolf

-- 
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] Could you please unsubscribe me?

2020-08-31 Thread Rolf Turner


On Mon, 31 Aug 2020 10:07:15 -0400
Scott Haag  wrote:

>   [[alternative HTML version deleted]]
> 
> ___
> R-sig-Geo mailing list
> R-sig-Geo@r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

You have to unsubscribe yourself.  (As in the popular epithet "Go
unsubscribe yourself!" :-) )  No-one can do it for you! :-)

See (click on) the link above and then scroll to the bottom of the page
and click on the "Unsubscribe or edit options" button.

cheers,

Rolf Turner

-- 
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] Distance rule for filtering a set of points in "ppp" class by minimum or maximum distance

2020-08-28 Thread Rolf Turner


On Thu, 27 Aug 2020 21:17:58 -0400
ASANTOS via R-sig-Geo  wrote:

> Dear R-Sig_Geo Members,
> 
> I'd like to find a simpler way to filter a set of points in a 
> "ppp" object by minimum or maximum distance.
> 
> In my example using a ants ("ppp" object) in spatstat package:
> 
> #Packages
> library(spatstat)
> library(sp)
> 
> #Point process example
> data(ants)
> str(ants)
> #- attr(*, "class")= chr "ppp"
> 
> #Compute the matrix of distances between all pairs of ant nests and
> #select those separated by more than 20 m
> ants.df<-as.data.frame(ants)
> coordinates(ants.df) <- ~x+y
> ants.dmat <- spDists(ants.df) #Calculate a distance matrix using spDists
> min.dist <- 20 #distance range
> ants.dmat[ants.dmat <= min.dist] <- NA # Na for less than 20 meters
> 
> Here I still need a relatively long stretch of code to identify columns
> with NA in the ants.dmat object, create a new data frame, and then create
> a new ppp object based on the coordinates and marks, etc.
> 
> Is there any easier way to create a new "ppp" object, just by
> specifying a distance rule that filters the points of an
> original "ppp" object by minimum or maximum distance?
> 
> Thanks in advance,
> 
> Alexandre
> 

It's very easy! :-)

ddd <- nndist(ants)
farAnts <- ants[ddd >= 20]
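For a quick sanity check of this subsetting (a sketch; note that nndist() is computed in the *original* pattern, so a nest is kept only if no other nest at all lies within 20 units):

```r
library(spatstat)
data(ants)

ddd     <- nndist(ants)      # nearest-neighbour distance for each nest
farAnts <- ants[ddd >= 20]   # keep nests with no neighbour within 20 units

# Removing points can only increase nearest-neighbour distances, so
# (provided at least two nests survive the filter) every remaining nest
# is still at least 20 units from its nearest remaining neighbour:
min(nndist(farAnts))
```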

cheers,

Rolf

-- 
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


[R-sig-Geo] Testing.

2020-08-14 Thread Rolf Turner


My apologies for the noise.  Please ignore this message.
I am just trying to test out message filters in a new mail client that
I am learning to use.

Again, sorry for the noise.

cheers,

Rolf Turner

-- 
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] How to set up spatstat::densityHeat()?

2020-07-17 Thread Rolf Turner



On 17/07/20 9:04 pm, Michal Kvasnička wrote:


Dear Rolf.

Thanks a lot for the suggestions! I'll check it out.

Best wishes to you and all spatstat team,
*Ing. Michal Kvasnička, Ph.D.*



Good luck with your endeavours.  I'd like to point out a slight glitch 
in the advice that I proffered, specifically my suggestion that you use


remotes::install_github("spatstat/spatstat",lib=.Rlib)

to install the latest version of spatstat.  The code above was copied 
and pasted from my .Rhistory and I neglected to  change the "lib=.Rlib", 
which is peculiar to my personal R set-up.


You should replace ".Rlib" with a text string specifying the library 
where you keep the packages that you install.
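For concreteness, a corrected form of the call (the explicit path below is purely hypothetical; omit the lib argument altogether to install into your default library):

```r
# Install the development version of spatstat from GitHub, using the
# default library (the first entry of .libPaths()):
remotes::install_github("spatstat/spatstat")

# Or name a library explicitly (this path is a made-up example):
remotes::install_github("spatstat/spatstat", lib = "~/R/library")
```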


Apologies for any confusion that may have resulted.

cheers,

Rolf



--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] How to set up spatstat::densityHeat()?

2020-07-16 Thread Rolf Turner



On 14/07/20 1:39 am, Michal Kvasnička wrote:


Hello.

I have to find hotspots of road accidents. It seems that
spatstat::densityHeat() implements the McSwiggan et al. (2017) algorithm I
want to use. I plan to follow the path of the DrHotNet package:

1. estimate the density function
2. lixelize the network
3. calculate the density in the middle point of each lixel from (2) using
function estimated in (1)
4. find clusters of lixels from (3) with sufficiently high density

However, I don't know how to set the function's parameters.

I cannot use the default setting (finespacing=TRUE) because it implies
tens of billions of iterations. (I use OSM road maps of Czech districts
converted to Krovak projection. My testing district has about 2,700 km of
roads with an average length of road segments of about 30 meters due to
road curvatures. This is not the largest district I have to handle---Prague
is much larger.)

I can set finespacing = FALSE and then finetune the parameters myself.
However, I can't find any documentation. Having skimmed the source code,
I'm trying something like this:

density_function <- densityHeat(accident_lpp, sigma, finespacing = FALSE,
eps = 10, fun = TRUE)

This way I get the function I need. However, I'm not certain what I'm
doing. It seems that the densityHeat() uses some grid, based on lixelized
network. My questions are:

* Is it sufficient to set the eps parameter or do I need to set other
parameters too? Which ones?
* Is the parameter in meters (my map is projected with meters as its units)?
* If eps = 10 means grid of 10x10 meters, what precisely does it mean? How
is the network lixelized (and why)? How does it affect the estimated
function (not the linim object)?
* What is the (economically/substantially) reasonable value of eps (and
other necessary parameters)?
* Is there any documentation to the function? (I've read chapter 17 of the
spatstat book but found nothing there. The function help page doesn't cover
this either.)
* Does it make any difference if I lixelize the network myself before running
densityHeat()?

Many thanks for your advice. (And an apology for such a long question.)

Best wishes,
*Ing. Michal Kvasnička, Ph.D.*


Dear Michal,

I am replying on behalf of the spatstat authors (and not claiming any 
personal competence in the area of point processes on networks).


My co-authors suggest that you:

   * upgrade to the latest development version of spatstat.  You
 could use

 > remotes::install_github("spatstat/spatstat",lib=.Rlib)

 to do this,

   * look at help(densityHeat), in particular the
 section "Discretisation and Error Messages" which explains
 how the discretisation is performed,

   * use densityfun.lpp instead of density.lpp if you want a
 function rather than a pixel image.

Note that if finespacing=FALSE is selected, the discretisation is 
determined by the default pixel resolution of the window containing the 
network. The arguments eps and dimyx would be passed to as.mask to 
determine pixel resolution in the usual way (their interpretation is 
explained in the help for as.mask()).
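To illustrate how eps is interpreted by as.mask() (a sketch using a hypothetical window; eps is in the units of the window's coordinates, metres in your case):

```r
library(spatstat)

# Hypothetical 1 km x 1 km window with coordinates in metres.
W <- owin(c(0, 1000), c(0, 1000))

# eps is the pixel side length, so eps = 10 requests 10 m x 10 m pixels.
M <- as.mask(W, eps = 10)
M$dim   # c(100, 100): a 100 x 100 pixel grid
```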


Note also that density.lpp() calls densityHeat() which calls the 
internal algorithm. Similarly densityfun.lpp() calls the same internal 
algorithm. Consequently they all handle discretisation arguments the 
same way.


The spatstat authors know nothing about the package DrHotNet and do not 
necessarily endorse whatever they are doing. Definitely *do not* 
'lixellate' (discretise) the network before applying density(), because 
this will screw up the discretisation procedure.


We are very sceptical about the validity of applying clustering to the 
elements of the discretisation.


HTH

cheers,

Rolf Turner

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] spatstat installation

2020-04-14 Thread Rolf Turner



On 15/04/20 5:23 am, Mauro Pedone wrote:


Hello ,
I am trying the spatstat package installation
on R 3.3.0

the package is spatstat_1.63-3.tar.gz
,the OS is oracle enterprise linux 7

at the end I receive the error
Error : .onAttach failed in attachNamespace() for 'spatstat', details:
   call: if (today - as.Date(rdate) > 365) {
   error: missing value where TRUE/FALSE needed

I ask the community whether someone has encountered this problem.

thanks and regards


Dear Mauro,

I would guess that this has really nothing to do with spatstat as such, 
but rather something to do with how your system is processing dates.


The code involved is found in the file versions.R.  One can run the 
relevant code from the command line:


today <- Sys.Date()
rv <- R.Version()
rdate <- with(rv, ISOdate(year, month, day))
chk <- today - as.Date(rdate)
if(chk > 365) cat("\nWhoops\n!") else cat("\nAlright!\n")

When I run this code, "Alright!" is duly echoed.

Printing "chk" gives "Time difference of 46 days" and 'chk > 365" 
produces FALSE.


What happens when you run this code?  Look at today, rv, rdate, and chk, 
and try to figure out why a missing value is appearing.


cheers,

Rolf Turner

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] Re: Introducing new local spatial statistic, ELSA, and Entrogram (a variogram-like graph)

2020-04-07 Thread Rolf Turner



On 8/04/20 10:49 am, Babak Naimi wrote:


Hi Chris,

Thanks for your email. Unfortunately, the article is not open but you can
find the PDF from the following link:

https://www.dropbox.com/s/mpjra60b5yyzwjg/Naimi_2019_ELSA.pdf?dl=0


If the paper is not open source, are you not violating copyright by 
making it available in this way?


Could get you into trouble.

cheers,

Rolf Turner

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] Request for help about running a loop and reading .NC files

2020-03-23 Thread Rolf Turner



On 24/03/20 10:52 am, Bhaskar Mitra wrote:


Hello Everyone,

I have written a loop which reads hundreds of .nc files
and extract information from each .nc file and
exports that corresponding information as a csv file.

The loop works fine until it encounters a .nc file which it cannot read
and the loop stops. I would appreciate if anyone can suggest how can I
modify
the loop, whereby the loop will continue to run by bypassing those
particular
files which it cannot read.

In the end I was also hoping to modify the loop such that it
will generate a report which will inform me which
files were not read by the loop.  The codes are given below


Perhaps use try()???
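A sketch of the idea (the directory name and the reading/extraction function read_one_nc() are hypothetical placeholders for your own code):

```r
nc_files <- list.files("data", pattern = "\\.nc$", full.names = TRUE)
failed   <- character(0)

for (f in nc_files) {
  res <- try(read_one_nc(f), silent = TRUE)  # read_one_nc(): your reader
  if (inherits(res, "try-error")) {
    failed <- c(failed, f)                   # note the bad file, carry on
    next
  }
  write.csv(res, sub("\\.nc$", ".csv", f), row.names = FALSE)
}

# A simple report of the files that could not be read:
writeLines(failed, "unreadable_nc_files.txt")
```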

cheers,

Rolf Turner

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


[R-sig-Geo] Testing --- please ignore.

2020-01-20 Thread Rolf Turner



Sorry for the noise. I'm just trying to test a message filter on my mailer.

cheers,

Rolf Turner

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] specifying axis intervals in maptools?

2019-09-25 Thread Rolf Turner



On 26/09/19 8:52 AM, Rodrigo Plei wrote:


Dear colleagues,

How do I specify latitude and longitude intervals on the x and y axes in a map
using the maptools package?

Suppose that I plotted a map and using the argument "axes = TRUE", I got
the y (latitude) axis plotted to show "-23.6 -23.4 -23.2 -23 -22.8 -22.6"
(i.e. 0.2 intervals), but I was actually wanting to see "-23.6 -23.2 -22.8
" (0.4 intervals). How can I customize the intervals?

Any help will be greatly appreciated.


Given that I am not misunderstanding you (always a distinct possibility 
for me!) I believe that all you need to do is set axes=FALSE in your 
initial call, then add the axes using something like:


axis(side=1)
axis(side=2, at=c(-23.6, -23.2, -22.8))

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] spatstat - user-supplied list of point patterns in envelope() error

2019-08-11 Thread Rolf Turner



Dear Joe,

Further to my previous response to your post:  I have now been in touch 
with Adrian Baddeley and he has confirmed that the problem is as I 
conjectured:   The patterns in "pp_list" need to be compatible with 
"xx".  The error message was a bit misleading.  (Adrian has now adjusted 
the error message so as to make it more clear what the problem actually is.)


However there was also a genuine bug (that was revealed by your 
question; thank you!!!) in the software, so even if pp_list had been a 
list of patterns of the appropriate type, you would still have got an 
error.  (A spurious error, caused by the bug; different message of course.)


The bug has been been fixed in the development version of spatstat 
(1.60-1.022).  You will need to install that from github:


library(remotes)
install_github('spatstat/spatstat')

or wait for the next "official release" of spatstat on CRAN.  Such 
releases happen about every three months; the last one was on

23/June/2019.

cheers,

Rolf


On 12/08/19 8:51 AM, Joe Lewis wrote:


Dear list,

I'm trying to use a user-supplied list of point patterns in envelope()
rather than test against the CSR. Page 400 of Spatial Point Patterns:
Methodology and Applications in R states that "the argument simulate
can be a list of point patterns". However, I get the following error
when I try  to supply a list of ppp:

  ekls <- envelope.lpp(Y = xx, fun = linearK, nsim=5, simulate = pp_list)
Error in envelopeEngine(X = X, fun = fun, simul = simrecipe, nsim = nsim,  :

   ‘simulate’ should be an expression, or a list of point patterns

for reference,

xx
Point pattern on linear network
680 points
Linear network with 22 vertices and 21 lines
Enclosing window: rectangle = [284086.69, 309740] x [709900, 726547.7] 
metres


class(xx)
[1] "lpp" "ppx"

pp_list
List of point patterns

Component 1:
Planar point pattern: 513 points
window: rectangle = [284086.69, 309740] x [709900, 726547.7] metres

Component 2:
Planar point pattern: 422 points
window: rectangle = [284086.69, 309740] x [709900, 726547.7] metres

Component 3:
Planar point pattern: 495 points
window: rectangle = [284086.69, 309740] x [709900, 726547.7] metres

Component 4:
Planar point pattern: 557 points
window: rectangle = [284086.69, 309740] x [709900, 726547.7] metres

Component 5:
Planar point pattern: 576 points
window: rectangle = [284086.69, 309740] x [709900, 726547.7] metres

class(pp_list)
[1] "ppplist" "solist"  "anylist" "listof"  "list"

Any ideas why the error is occurring? Thanks.



___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] spatstat - user-supplied list of point patterns in envelope() error

2019-08-11 Thread Rolf Turner



On 12/08/19 8:51 AM, Joe Lewis wrote:


Dear list,

I'm trying to use a user-supplied list of point patterns in envelope()
rather than test against the CSR. Page 400 of Spatial Point Patterns:
Methodology and Applications in R states that "the argument simulate
can be a list of point patterns". However, I get the following error
when I try  to supply a list of ppp:

  ekls <- envelope.lpp(Y =  xx, fun = linearK, nsim=5, simulate = pp_list)
Error in envelopeEngine(X = X, fun = fun, simul = simrecipe, nsim = nsim,  :
   ‘simulate’ should be an expression, or a list of point patterns

for reference,

xx
Point pattern on linear network
680 points
Linear network with 22 vertices and 21 lines
Enclosing window: rectangle = [284086.69, 309740] x [709900, 726547.7] metres

class(xx)
[1] "lpp" "ppx"

pp_list
List of point patterns

Component 1:
Planar point pattern: 513 points
window: rectangle = [284086.69, 309740] x [709900, 726547.7] metres

Component 2:
Planar point pattern: 422 points
window: rectangle = [284086.69, 309740] x [709900, 726547.7] metres

Component 3:
Planar point pattern: 495 points
window: rectangle = [284086.69, 309740] x [709900, 726547.7] metres

Component 4:
Planar point pattern: 557 points
window: rectangle = [284086.69, 309740] x [709900, 726547.7] metres

Component 5:
Planar point pattern: 576 points
window: rectangle = [284086.69, 309740] x [709900, 726547.7] metres

class(pp_list)
[1] "ppplist" "solist"  "anylist" "listof"  "list"

Any ideas why the error is occurring? Thanks.


It's hard to say without having a *reproducible* example.  Since we 
don't have access to "xx" or to "pp.list" we cannot experiment to see 
what's going on.


One problem that leaps out at me --- although it doesn't seem that this 
should trigger the error message that you received --- is that the 
entries of pp_list appear to be *planar point patterns* (of class "ppp") 
whereas "xx" is a pattern on a linear network (of class "lpp"). 
Consequently there is a fundamental incompatibility here.


However I don't see why you would get the error message that you did.  I 
am CC-ing this email to Adrian Baddeley who has more insight than I, and 
may be able to point you in the right direction.  Adrian is kind of 
overwhelmed with work at the moment, so it may be a while till you hear 
from him.


If you provide a reproducible example I *may* be able to help.

cheers,

Rolf Turner

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] Simple Ripley's CSR test for marked point patterns --- addendum.

2019-07-26 Thread Rolf Turner



I have realised that there were a couple of oversights in my previous 
posting on this issue.  One is a bit subtle; the other is a bit of a 
blunder on my part.


First the "subtle" one. The test that I proposed for CSRI is a test done 
using the estimated parameters of the proposed model to generate 
realisations of data sets under the null hypothesis.  Such tests tend to 
be conservative.  (See section 10.6.3, p. 388 ff., in [1].)


In the current instance (testing for CSRI) the conservatism can be 
overcome by simulating data conditional on the numbers of points of each 
type in the "real" data.  This can be done here via:


foo <- function(W){
s <- runifpoint(10,win=W)
m <- runifpoint(9,win=W)
l <- runifpoint(27,win=W)
superimpose(s=s,m=m,l=l)
}
simex <- expression(foo(W))

and then

set.seed(42)
E <- envelope(syn.ppp,simulate=simex,savefuns=TRUE)
dtst <- dclf.test(E)
mtst <- mad.test(E)

This gives p-values of 0.06 from the dclf test and 0.09 from the mad 
test.  Thus there appears to be some slight evidence against the null 
hypothesis.  (Evidence at the 0.10 significance level.)


That this should be so is *OBVIOUS* (!!!) if we turn to the unsubtle 
point that I overlooked.  It is clear that the pattern of ants' nests 
cannot be truly a realisation of a Poisson process since there must be a 
bit of a "hard core" effect.  Two ants' nests cannot overlap.  Thus if 
we approximate the shape of each nest by a disc, points i and j must be 
a distance of at least r_i + r_j from each other, where r_i = 
sqrt(area_i/pi), and similarly for r_j.


However I note that the data provided seem to violate this principle in 
several instances.  E.g. points 41 and 42 are a distance of only 0.2460 
metres apart but areas 41 and 42 are 12.9 and 15.2 square metres, 
yielding putative radii of 2.0264 and 2.1996 metres, whence the closest
these points could possibly be (under the "disc-shaped assumption") is
4.2260 metres, far larger than 0.2460.  So something is a bit out of 
whack here.  Perhaps these are made-up ("synthetic") data and the 
process of making up the data did not take account of the minimum 
distance constraint.


How to incorporate the "hard core" aspect of your (real?) data into the 
modelling exercise, and what the impact of it is upon your research 
question(s), is unclear to me and is likely to be complicated.


cheers,

Rolf

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

[1] Spatial Point Patterns: Methodology and Applications with R
1st Edition, Adrian Baddeley, Ege Rubak, Rolf Turner
Chapman and Hall/CRC, 2015

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] Simple Ripley's CSR test for marked point patterns

2019-07-25 Thread Rolf Turner



On 25/07/19 12:50 PM, ASANTOS wrote:

Thanks for your help Marcelino, and for the careful explanation Rolf
Turner; and so sorry about my post's formatting. I hope that I have 
solved that in my new post.


First, my variable area is a mark (attribute or auxiliary 
information about my point process, page 7) and not a spatial covariate 
(an effect on the outcome over my experimental area, page 50). Based on 
this information, my hypothesis is that ant nests of different *sizes*, 
as a result of ecological intraspecific competition for resources (such 
as food and territory), *have different patterns of spatial 
distribution*. For this:


#Packages
require(spatstat)
require(sp)

# Create some points that represents ant nests

xp<-c(371278.588,371250.722,371272.618,371328.421,371349.974,
371311.95,371296.265,371406.46,371411.551,371329.041,371338.081,
371334.182,371333.756,371299.818,371254.374,371193.673,371172.836,
371173.803,371153.73,371165.051,371140.417,371168.279,371166.367,
371180.575,371132.664,371129.791,371232.919,371208.502,371289.462,
371207.595,371219.008,371139.921,371133.215,371061.467,371053.69,
371099.897,371108.782,371112.52,371114.241,371176.236,371159.185,
371159.291,371158.552,370978.252,371120.03,371116.993)

yp<-c(8246507.94,8246493.176,8246465.974,8246464.413,8246403.465,
8246386.098,8246432.144,8246394.827,8246366.201,8246337.626,
8246311.125,8246300.039,8246299.594,8246298.072,8246379.351,
8246431.998,8246423.913,8246423.476,8246431.658,8246418.226,
8246400.161,8246396.891,8246394.225,8246400.391,8246370.244,
8246367.019,8246311.075,8246255.174,8246255.085,8246226.514,
8246215.847,8246337.316,8246330.197,8246311.197,8246304.183,
8246239.282,8246239.887,8246241.678,8246240.361,8246167.364,
8246171.581,8246171.803,8246169.807,8246293.57,8246183.194,8246189.926)

#Now I have the size of each nest (marked process)

area<-c(117,30,4,341,15,160,35,280,108,168,63,143,2,48,182,42,
88,56,27,156,288,45,49,234,72,270,91,40,304,56,35,4,56.7,9,4.6,
105,133,135,23.92,190,12.9,15.2,192.78,104,255,24)

# Make a countour for the window creation
W <- convexhull.xy(xp,yp)


# Class of nests size - large, medium and small
acat <- cut(area,breaks=c(-Inf,25,55,Inf),right=FALSE, 
labels=c("s","m","l"))


#Create a ppp object

syn.ppp <- ppp(x=xp,y=yp,window=W, marks=acat)

# Test intensity hypothesis

f1 <- ppm(syn.ppp) # Same (constant) intensity for each area category.
f2 <- ppm(syn.ppp ~ marks) # Allows different (constant) intensity for 
each area category.

anova(f1,f2,test="Chi")

#0.002015 ** OK, the hypothesis that the intensities are the same was 
rejected, but intensity is not the question.


Based on my initial hypothesis, I'd like to show envelopes and observed 
values of some function for the three point patterns (large, 
medium and small ant nest sizes) under CSR. And for this I try to:


kS<-envelope(syn.ppp[syn.ppp$acat=="s"], nsim=99,fun=Kest)

plot(kS,lwd=list(3,1,1,1), main="")

kM<-envelope(syn.ppp[syn.ppp$acat=="m"], nsim=99,fun=Kest)

plot(kM,lwd=list(3,1,1,1), main="")

kL<-envelope(syn.ppp[syn.ppp$acat=="l"], nsim=99,fun=Kest)

plot(kL,lwd=list(3,1,1,1), main="")

But it doesn't work yet. Does my approach sound correct now?

Thanks in advance,
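(Aside: the envelope() calls above fail because a ppp object has no
"acat" component --- the category labels are stored as marks of the
pattern. A sketch of a working version, assuming spatstat is attached:)

```r
library(spatstat)

# Split the marked pattern by its mark levels: one ppp per category
pats <- split(syn.ppp)     # list with components $s, $m and $l

kS <- envelope(pats$s, Kest, nsim = 99)
plot(kS, lwd = list(3, 1, 1, 1), main = "")
# ... and likewise for pats$m and pats$l
```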


The formatting of your latest email was a great improvement and is now 
admirably legible.


I am still not completely clear what your research question is, or what 
hypotheses you wish to test.


Let me preface my remarks by saying that I have so far gone along with 
your inclination to convert the numeric marks (area) of your pattern to 
categorical (small, medium, large) marks.  I believe that this may be a
suboptimal approach.  (As someone said in an old post to R-help, on an 
entirely  different topic, 'There is a reason that the speedometer in 
your car doesn't just read "slow" and "fast".' :-) )


On the other hand there is not a lot on offer in the way of techniques 
for handling numeric marks, so the "discretisation" approach may be the 
best that one can do.


Now to proceed with trying to answer your question(s).  You say:


my hypothesis is that the sizes of ant nests, as a cause of ecological
intraspecific competition for resources (such as food and territory),
have different patterns of spatial distribution


That's a (rather vague) "*alternative*" hypothesis.  The corresponding 
null  hypothesis is that the three categories have the *same* spatial 
distribution.   We reject that null hypothesis, since we reject the 
hypothesis that they have the same intensities.
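One standard route from here (a sketch only --- it assumes spatstat is
attached and that "random labelling" is the relevant null hypothesis,
i.e. nest locations fixed with size categories assigned at random) is to
compare a summary function for one category against envelopes generated
by permuting the marks:

```r
library(spatstat)

# K function of the large nests, with envelopes from 99 random
# relabellings of the same locations (rlabel() permutes the marks)
E <- envelope(syn.ppp, Kcross, i = "l", j = "l",
              simulate = expression(rlabel(syn.ppp)),
              nsim = 99)
plot(E, main = "")
```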


Now what?

You say:


Based in my initial hypothesis, I've like to show envelopes and observed
values of the use of some function for the three point patters (large,
medium and small ant nest size) under CSR.


This is a bit incoherent and demonstrates somewhat c

Re: [R-sig-Geo] [FORGED] Simple Ripley's CRS test for market point patters

2019-07-23 Thread Rolf Turner



Thanks to Marcelino for chiming in on this.  I should have responded 
earlier, but was "busy on a personal matter".


To add to what Marcelino said:

(1) Your post in HTML was horribly garbled and a struggle to read. 
*PLEASE* set your email client so that it does *NOT* post in HTML when 
posting to this list.


(2) A very minor point:  Your construction of "syn.ppp" was 
unnecessarily complicated and convoluted.  You could simply do:


syn.ppp  <- ppp(x=xp,y=yp,window=W,marks=area)

(3) You appear to be confused as to the distinction between "marks" and 
"covariates".   These are superficially similar but conceptually 
completely different.  What you are dealing with is *marks*.  There are

no covariates involved.  See [1], p. 147.

(4) Your calls to envelope() are mal-formed; the expression "area >= 0" 
and "area < 25" will be taken as the values of "nrank" and ... who knows 
what?  Did you not notice the warning messages you got about there being 
something wrong with "nrank"?


You are being hopelessly naïve in expecting envelope() to interpret 
"area >= 0" and "area < 25" in the way you want it to interpret them.

The spatstat package does not read minds.

Marcelino has shown you how to make a proper call.

(5) Since you are interested in categorising "area" into groups, rather 
than being interested in the *numerical* value of area, you should do 
the categorisation explicitly:


acat <- cut(area,breaks=c(-Inf,25,55,Inf),right=FALSE,
            labels=c("s","m","l"))
# right=FALSE since you want "area < 25" rather than
# "area <= 25" --- although this makes no difference for the
# area values actually used.

syn.ppp <- ppp(x=xp,y=yp,window=W,marks=acat)

(6) It is not clear to me what your "research question" is.  Do you want 
to test whether the intensities differ between the area categories? 
Unless my thinking is hopelessly confused, this has nothing to do with 
(or at least does not require the use of) the envelope() function:


f1 <- ppm(syn.ppp) # Same (constant) intensity for each area category.
f2 <- ppm(syn.ppp ~ marks) # Allows different (constant) intensity
   # for each area category.
anova(f1,f2,test="Chi")

This test (not surprisingly) rejects the hypothesis that the intensities 
are the same; p-value = 0.0020.


If this is not the hypothesis that you are interested in, please clarify 
your thinking and your question, and get back to us.


cheers,

Rolf Turner

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

[1] Spatial Point Patterns: Methodology and Applications with R
1st Edition, Adrian Baddeley, Ege Rubak, Rolf Turner
Chapman and Hall/CRC, 2015

P. S.  It is of course (!!!) highly recommended that you spend some time 
reading the book cited above, if you are going to work in this area.


R. T.

On 23/07/19 9:36 AM, Alexandre Santos via R-sig-Geo wrote:

Dear R-Sig-Geo Members,

I've like to find any simple way for apply CSR test for marked point
patterns, for this I try to create a script below:

#Packages
require(spatstat)
require(sp)

# Create some points that represents ant nests in UTM
xp<-c(371278.588,371250.722,371272.618,371328.421,371349.974,371311.95,
371296.265,371406.46,371411.551,371329.041,371338.081,371334.182,
371333.756,371299.818,371254.374,371193.673,371172.836,371173.803,
371153.73,371165.051,371140.417,371168.279,371166.367,371180.575,
371132.664,371129.791,371232.919,371208.502,371289.462,371207.595,
371219.008,371139.921,371133.215,371061.467,371053.69,371099.897,
371108.782,371112.52,371114.241,371176.236,371159.185,371159.291,
371158.552,370978.252,371120.03,371116.993)

yp<-c(8246507.94,8246493.176,8246465.974,8246464.413,8246403.465,
8246386.098,8246432.144,8246394.827,8246366.201,8246337.626,
8246311.125,8246300.039,8246299.594,8246298.072,8246379.351,
8246431.998,8246423.913,8246423.476,8246431.658,8246418.226,
8246400.161,8246396.891,8246394.225,8246400.391,8246370.244,
8246367.019,8246311.075,8246255.174,8246255.085,8246226.514,
8246215.847,8246337.316,8246330.197,8246311.197,8246304.183,
8246239.282,8246239.887,8246241.678,8246240.361,8246167.364,
8246171.581,8246171.803,8246169.807,8246293.57,8246183.194,8246189.926)

# Then I create the size of each nest - my covariate used as marked process
area<-c(117,30,4,341,15,160,35,280,108,168,63,143,2,48,182,42,88,56,27,
156,288,45,49,234,72,270,91,40,304,56,35,4,56.7,9,4.6,105,133,135,23.92,
190,12.9,15.2,192.78,104,255,24)

# Make a countour - only as exercise
W <- convexhull.xy(xp,yp)

#Create a ppp object
p_xy<-cbind(xp,yp)
syn.ppp<-ppp(x=coordinates(p_xy)[,1],y=coordinates(p_xy)[,2],window=W,
 marks=area)
syn.ppp <- as.ppp(syn.ppp)
plot(syn.ppp, main=" ")

First, I've like to study CSR of marke

Re: [R-sig-Geo] DELDIR

2019-07-23 Thread Rolf Turner



This question should have been, in first instance, directed to me (the 
maintainer of the deldir package), rather than to the r-sig-geo list.


On 24/07/19 1:47 AM, PONS Frederic - CEREMA/DTerMed/DREC/SRILH wrote:


Dear all

Sometimes we are using the deldir function for squeletisation.


Had to Google this word. According to what I see it should be spelled
"squelettisation" (with a double "t").  The English word is 
"skeletonisation", and this list is (for better or for worse) an English 
language list.



Since the last review, deldir stops on some shapefiles because polygons
are too narrow. In the version before, there were no problems.

We have read the "Notes on error messages" and the problem of
anticlockwise ordering of triangles is listed.

In the trifnd R function , the code is
# Check that the vertices of the triangle listed in tau are
# in anticlockwise order.  (If they aren't then alles upgefucken
# ist; throw an error.)
call acchk(tau(1),tau(2),tau(3),anticl,x,y,ntot,eps)
if(!anticl) {
  call acchk(tau(3),tau(2),tau(1),anticl,x,y,ntot,eps)
  if(!anticl) {
  call fexit("Both vertex orderings are clockwise. See help for
deldir.")
  } else {
  ivtmp  = tau(3)
  tau(3) = tau(1)
  tau(1) = ivtmp
  }
}

We don't understand why it does not reorder the bad triangles into the
correct order. Perhaps, if this problem appears, the beginning of the
deldir function is not good.

If someone can explain this to us.


The error message you quote indicates that *both* orderings are bad so 
something is toadally out of whack.  It would be rash (and arrogant) for 
the software to fool around with the structure.  In such situations the 
user should diagnose what is going wrong.


If you cannot diagnose the problem, please send me a reprex and I will 
look into it.


cheers,

Rolf Turner

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] Create a Spatial Weight Matrix based on road distance

2019-06-23 Thread Rolf Turner



On 23/06/19 6:01 PM, Rolando Valdez wrote:


I apologize for the lack of clarity.

Let me try again:

The SWM captures the spatial structure among territories. In the case of 
a matrix based on distance, you define a distance-threshold, say 50 km, 
and every territory under that distance is considered as neighbor, in 
the matrix, those territories considered neighbors take the value 1, and 
0 otherwise (territories beyond 50 km). This is what 'dnearneigh' 
function does.


Then, I want to define a distance-threshold, say 50 km by road (not 
euclidean) and every territory under that distance (by road) be 
considered as neighbor.


You still have not defined what you mean by *distance* between 
territories (regions, counties). Distance between *points* is well 
defined; distance between territories is not.  You have to specify what 
you mean by such a distance.  This could be the minimum distance between 
points in the regions (which is not of course a metric), distance 
between centroids of the territories, Hausdorff distance, or something 
else.  This applies whether you are talking about the distance between 
points being Euclidean distance or road distance or some other metric. 
Thresholding that distance (e.g. at 50 km.) is then a trivial matter.
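For instance (a sketch only, assuming spdep is attached and that "dMat"
already holds the chosen pairwise distances, in metres):

```r
library(spdep)

# Binary spatial weight matrix: neighbours are pairs within 50 km
W <- (dMat <= 50000) * 1
diag(W) <- 0                       # no territory is its own neighbour
lw <- mat2listw(W, style = "B")    # convert to a listw object for spdep
```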


I have tried my best to get you to clarify what you mean, and my efforts 
seem to be in vain.  Since Juan Pablo thinks that I am "bullying you" 
(which mystifies me completely) I guess I'll give up.  And to respond to 
Juan Pablo's hope, nothing whatever is "bothering" me.


cheers,

Rolf



El sáb., 22 de jun. de 2019 a la(s) 21:15, Rolf Turner 
(r.tur...@auckland.ac.nz <mailto:r.tur...@auckland.ac.nz>) escribió:


On 23/06/19 3:30 PM, Rolando Valdez wrote:
 > Sorry again.
 >
 > A Spatial Weight Matrix (swm) is an object used in spatial
econometrics
 > to characterize the spatial structure among territories. It is an
 > element nxn where n is the number of territorial units (counties,
 > districts, states, cities, regions) in the sample and it could be
based
 > on contiguity or distance. Usually, you can create a swm based on
 > distance using 'dnearneigh' from spdep and then convert to a listw
 > through 'nb2listw'. The problem is that the matrix that you generate
 > trough 'dnearneigh'computes the euclidean distance among
centroids of
 > polygons. This is where I spot my issue, I need to compute the
swm using
 > the road distance instead of euclidean distance computed through
 > 'dnearneigh'. I do have a shapefile with poligons (counties) and
another
 > shapefile with lines (roads).

OK.  It's getting a *bit* clearer   You are interested in "road
distances" between counties.  I'm still not entirely sure what this
means.  Is it the *minimum* distance by road from one county to
another?
In which case, if two counties are contiguous (adjacent) and there is a
road crossing the border between the two, is the distance between the
counties equal to zero?  (This doesn't seem like it would be
satisfactory )

Yes, actually it is possible that two counties were connected by more 
than one road, however it's not a big deal. If I define a distance of 50 
km, it doesn't matter how many times two counties are connected, I just 
need that they are within 50 km through, at least, one road.


If this is not the case, then what *is* the case?  Perhaps you want
distances between the *centroids* of the counties.  What then do you
mean by road distance when the centroids do not lie on a road?


This is a big challenge, I'm still working on it.


You apparently need to deal with counties in which there are no
roads at
all.  To handle this you have to define what *you* mean by the distance
by road from county A to county B when there are no roads at all in
county B.  Perhaps infinity would be the appropriate distance, but *I*
don't know; you have to make the call.


If two counties are not connected through a road, they could not be 
neighbors. In this case, it would correspond to a value 0 in the matrix.



Previously you indicated that you needed to know (pairwise) road
distances between specified points in a given set, and I showed you how
to obtain those using pairdist(), from spatstat.  Now it seems that you
want something rather different, and it's still not clear what.

In a sense it is the same, but as you said so properly, we have 
different research fields.


You need to get *your* thoughts clear; make some definitions and
specifications, and decide what you really want or need.


I got it.


It seems that you are expecting R to magically do your thinking for
you;
it won't!

No, I'm not expecting that

Re: [R-sig-Geo] [FORGED] Create a Spatial Weight Matrix based on road distance

2019-06-22 Thread Rolf Turner

On 23/06/19 3:30 PM, Rolando Valdez wrote:

Sorry again.

A Spatial Weight Matrix (swm) is an object used in spatial econometrics 
to characterize the spatial structure among territories. It is an 
element nxn where n is the number of territorial units (counties, 
districts, states, cities, regions) in the sample and it could be based 
on contiguity or distance. Usually, you can create a swm based on 
distance using 'dnearneigh' from spdep and then convert to a listw 
through 'nb2listw'. The problem is that the matrix that you generate 
trough 'dnearneigh'computes the euclidean distance among centroids of 
polygons. This is where I spot my issue, I need to compute the swm using 
the road distance instead of euclidean distance computed through 
'dnearneigh'. I do have a shapefile with poligons (counties) and another 
shapefile with lines (roads).


OK.  It's getting a *bit* clearer   You are interested in "road 
distances" between counties.  I'm still not entirely sure what this 
means.  Is it the *minimum* distance by road from one county to another? 
In which case, if two counties are contiguous (adjacent) and there is a 
road crossing the border between the two, is the distance between the 
counties equal to zero?  (This doesn't seem like it would be 
satisfactory )


If this is not the case, then what *is* the case?  Perhaps you want 
distances between the *centroids* of the counties.  What then do you 
mean by road distance when the centroids do not lie on a road?


You apparently need to deal with counties in which there are no roads at 
all.  To handle this you have to define what *you* mean by the distance 
by road from county A to county B when there are no roads at all in 
county B.  Perhaps infinity would be the appropriate distance, but *I* 
don't know; you have to make the call.


Previously you indicated that you needed to know (pairwise) road 
distances between specified points in a given set, and I showed you how 
to obtain those using pairdist(), from spatstat.  Now it seems that you 
want something rather different, and it's still not clear what.


You need to get *your* thoughts clear; make some definitions and 
specifications, and decide what you really want or need.


It seems that you are expecting R to magically do your thinking for you; 
it won't!


cheers,

Rolf

El sáb., 22 de jun. de 2019 a la(s) 20:00, Rolf Turner 
(r.tur...@auckland.ac.nz <mailto:r.tur...@auckland.ac.nz>) escribió:



On 23/06/19 2:38 PM, Rolando Valdez wrote:

 > I am sorry, I was not clear enough. My goal is to calculate a
spatial
 > weight matrix (nxn) across counties but, instead of euclidean
distance,
 > to use road distance.

I'm afraid I still don't understand.  To put it mildly. You presumably
have a clear idea of what you are trying to, but those of us who are
not
involved in your research have no such idea.  We (or at least I)
haven't
a clue as to what you are talking about.

What do you mean by "spatial weight"?  What are these weights used for?
What is n?  How are the counties involved?  Is n the number of
counties?
Are you interested in the road distance (minimum road distance?)
between
pairs of counties?

Please explain *clearly* and do not expect those who are trying to help
    you to be mind-readers!!!

cheers,

Rolf

 >
 > El sáb., 22 de jun. de 2019 a la(s) 19:28, Rolf Turner
 > (r.tur...@auckland.ac.nz <mailto:r.tur...@auckland.ac.nz>
<mailto:r.tur...@auckland.ac.nz <mailto:r.tur...@auckland.ac.nz>>)
escribió:
 >
 >
 >     On 23/06/19 1:17 PM, Rolando Valdez wrote:
 >
 >      > Thank you for your answer.
 >      >
 >      > I have a shapefile with, say, counties, and I got another
 >     shapefile with
 >      > the roads. ¿What if a county does not intersect any road?
 >
 >     I am sorry, but it is not at all clear to me just what the
problem is.
 >     How do the counties come into the picture?  You said you
wanted to get
 >     the road distance between points on the roads.  What have the
counties
 >     got to do with this?
     >
 >     Can you perhaps provide a reproducible example?
 >
 >     cheers,
 >
 >     Rolf
 >
 >      >
 >      > El jue., 20 de jun. de 2019 a la(s) 19:08, Rolf Turner
 >      > (r.tur...@auckland.ac.nz <mailto:r.tur...@auckland.ac.nz>
<mailto:r.tur...@auckland.ac.nz <mailto:r.tur...@auckland.ac.nz>>
 >     <mailto:r.tur...@auckland.ac.nz
<mailto:r.tur...@auckland.ac.nz> <mailto:r.tur...@auckland.ac.nz
<mailto:r.tur...@auckland.ac.nz>>>

Re: [R-sig-Geo] [FORGED] Create a Spatial Weight Matrix based on road distance

2019-06-22 Thread Rolf Turner



On 23/06/19 2:38 PM, Rolando Valdez wrote:

I am sorry, I was not clear enough. My goal is to calculate a spatial 
weight matrix (nxn) across counties but, instead of euclidean distance, 
to use road distance.


I'm afraid I still don't understand.  To put it mildly. You presumably 
have a clear idea of what you are trying to, but those of us who are not 
involved in your research have no such idea.  We (or at least I) haven't 
a clue as to what you are talking about.


What do you mean by "spatial weight"?  What are these weights used for? 
What is n?  How are the counties involved?  Is n the number of counties? 
Are you interested in the road distance (minimum road distance?) between 
pairs of counties?


Please explain *clearly* and do not expect those who are trying to help 
you to be mind-readers!!!


cheers,

Rolf



El sáb., 22 de jun. de 2019 a la(s) 19:28, Rolf Turner 
(r.tur...@auckland.ac.nz <mailto:r.tur...@auckland.ac.nz>) escribió:



On 23/06/19 1:17 PM, Rolando Valdez wrote:

 > Thank you for your answer.
 >
 > I have a shapefile with, say, counties, and I got another
shapefile with
 > the roads. ¿What if a county does not intersect any road?

I am sorry, but it is not at all clear to me just what the problem is.
How do the counties come into the picture?  You said you wanted to get
the road distance between points on the roads.  What have the counties
got to do with this?

Can you perhaps provide a reproducible example?

cheers,

Rolf

 >
 > El jue., 20 de jun. de 2019 a la(s) 19:08, Rolf Turner
 > (r.tur...@auckland.ac.nz <mailto:r.tur...@auckland.ac.nz>
<mailto:r.tur...@auckland.ac.nz <mailto:r.tur...@auckland.ac.nz>>)
escribió:
 >
 >
 >     On 21/06/19 12:26 PM, Rolando Valdez wrote:
 >
 >      > Dear community,
 >      >
 >      > Is there any way to create a spatial weight matrix based
on road
 >     distance?
 >      > I am trying to use the road distance between two points
instead of
 >      > euclidean distance.
 >      >
 >      > I've seen that there is a package named osrm. Can anyone give
 >     some advice?
 >
 >     I don't know anything about "osrm".  Calculating "road distances"
 >     can be
 >     done in the spatstat package reasonably easily, if you take
the trouble
 >     to represent your collection of roads as a "linnet" object.
 >
 >     Given that you have done so, suppose that your linnet object
is "L" and
 >     that you have vectors "x" and "y" specifying the points on L
(i.e. on
 >     your roads) between which you want to know the distances.
 >
 >     Do:
 >
 >           X    <- lpp(data.frame(x=x,y=y),L)
 >           dMat <- pairdist(X)
 >
 >     The object "dMat" is a (symmetric) square matrix; dMat[i,j]
is the
 >     distance between point i and point j.  (Of course the
diagonal entries
 >     are all 0.)
 >
 >     If your collection of roads is specified by means of a shapefile,
 >     vignette("shapefiles") will tell you how to turn this
collection into a
 >     "psp" ("planar segment pattern") object; the function (method)
 >     as.linnet.psp() can then be used to turn the "psp" object into a
 >     "linnet" object.
 >
 >     HTH




Re: [R-sig-Geo] [FORGED] Create a Spatial Weight Matrix based on road distance

2019-06-22 Thread Rolf Turner



On 23/06/19 1:17 PM, Rolando Valdez wrote:


Thank you for your answer.

I have a shapefile with, say, counties, and I got another shapefile with 
the roads. ¿What if a county does not intersect any road?


I am sorry, but it is not at all clear to me just what the problem is. 
How do the counties come into the picture?  You said you wanted to get 
the road distance between points on the roads.  What have the counties 
got to do with this?


Can you perhaps provide a reproducible example?

cheers,

Rolf



El jue., 20 de jun. de 2019 a la(s) 19:08, Rolf Turner 
(r.tur...@auckland.ac.nz <mailto:r.tur...@auckland.ac.nz>) escribió:



On 21/06/19 12:26 PM, Rolando Valdez wrote:

 > Dear community,
 >
 > Is there any way to create a spatial weight matrix based on road
distance?
 > I am trying to use the road distance between two points instead of
 > euclidean distance.
 >
 > I've seen that there is a package named osrm. Can anyone give
some advice?

I don't know anything about "osrm".  Calculating "road distances"
can be
done in the spatstat package reasonably easily, if you take the trouble
to represent your collection of roads as a "linnet" object.

Given that you have done so, suppose that your linnet object is "L" and
that you have vectors "x" and "y" specifying the points on L (i.e. on
your roads) between which you want to know the distances.

Do:

      X    <- lpp(data.frame(x=x,y=y),L)
      dMat <- pairdist(X)

The object "dMat" is a (symmetric) square matrix; dMat[i,j] is the
distance between point i and point j.  (Of course the diagonal entries
are all 0.)

If your collection of roads is specified by means of a shapefile,
vignette("shapefiles") will tell you how to turn this collection into a
"psp" ("planar segment pattern") object; the function (method)
as.linnet.psp() can then be used to turn the "psp" object into a
"linnet" object.

HTH




Re: [R-sig-Geo] [FORGED] Create a Spatial Weight Matrix based on road distance

2019-06-20 Thread Rolf Turner



On 21/06/19 12:26 PM, Rolando Valdez wrote:


Dear community,

Is there any way to create a spatial weight matrix based on road distance?
I am trying to use the road distance between two points instead of
euclidean distance.

I've seen that there is a package named osrm. Can anyone give some advice?


I don't know anything about "osrm".  Calculating "road distances" can be 
done in the spatstat package reasonably easily, if you take the trouble 
to represent your collection of roads as a "linnet" object.


Given that you have done so, suppose that your linnet object is "L" and 
that you have vectors "x" and "y" specifying the points on L (i.e. on 
your roads) between which you want to know the distances.


Do:

X<- lpp(data.frame(x=x,y=y),L)
dMat <- pairdist(X)

The object "dMat" is a (symmetric) square matrix; dMat[i,j] is the 
distance between point i and point j.  (Of course the diagonal entries 
are all 0.)


If your collection of roads is specified by means of a shapefile, 
vignette("shapefiles") will tell you how to turn this collection into a 
"psp" ("planar segment pattern") object; the function (method) 
as.linnet.psp() can then be used to turn the "psp" object into a 
"linnet" object.


HTH

cheers,

Rolf Turner

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] Plot a map in r (csv and shapefile)

2019-06-19 Thread Rolf Turner



On 19/06/19 10:41 PM, Lara Silva wrote:


Hello,

I am new in R and I need to plot the occurrence of a species.
I have the presences of species (CSV) and the shapefile with the limits of
study area.

Which code should I use?


You might consider having a look at the spatstat package which has tools 
explicitly designed for this sort of thing.


cheers,

Rolf Turner

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] Getting averaged variable importance from bootstrap with the cubist models

2019-04-07 Thread Rolf Turner

On 8/04/19 3:52 AM, Hounkpatin Ozias wrote:




rm(list = ls())




NOO  Don't do this to other people!!!  As (I think 
it was) Jenny Bryan said at the NZSA Conference in December 2017:  "If 
you do this I will come to your office and set fire to your computer!!!"


cheers,

Rolf Turner

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] Modifying the length of a matrix variable

2019-03-30 Thread Rolf Turner



On 31/03/19 12:56 PM, rain1...@aim.com wrote:


Model4 <- brick("MaxPrecCCCMACanESM2rcp45.nc", var="onedaymax")


That is how Model4 is derived.

When trying class(Model4), I receive:

[1] "RasterBrick" attr(,"package") [1] "raster"


Meanwhile, I will check on Google to see what I come up with in terms of 
your suggestion. :)
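(For what it's worth --- a sketch, assuming the raster package: a
RasterBrick is subset by *layer* with double square brackets, so keeping
the first 90 of the 95 layers would be:)

```r
library(raster)

Model4.chopped <- Model4[[1:90]]   # layers 1..90 of the brick
nlayers(Model4.chopped)            # should now report 90
```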


I know nothing about rasters, the brick() function, or the raster 
package, so include me out at this stage.


Others on the list may be able to help you.  Particularly if you can 
force yourself to ask a *focussed* question.


cheers,

Rolf

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] Modifying the length of a matrix variable

2019-03-30 Thread Rolf Turner



On 31/03/19 12:30 PM, rain1...@aim.com wrote:

Yes, I reproduced the example above and it works just fine (and is what 
I want!!), but I cannot see why it does not work with my data, as it is 
a 3-dimensional array (latitude, longitude and time).


This is what comes from print(Model4):

3 variables (excluding dimension variables):
    double onedaymax[lon,lat,time]   (Contiguous storage)  units: mm/day
    double fivedaymax[lon,lat,time]  (Contiguous storage)  units: mm/day
    short Year[time]                 (Contiguous storage)
3 dimensions:
    time  Size:95
    lat   Size:64   units: degree North
    lon   Size:128  units: degree East



I reviewed it over and over again, but I cannot see why this would not work?


Psigh!  Clearly Model4 is *not* an array!!!  It is an object of some 
"specialised" class (for which there is a specialised print() method).  I 
have no idea what that class might be, but *you* can tell.  What does


   class(Model4)

return?

Where did this "Model4" object come from?  What are you trying to *do*?

You might be able to get somewhere by searching (e.g. via Google) on
"subsetting objects of class melvin" where "melvin" is what is returned 
by "class(Model4)".


Doing

str(Model4)

could be enlightening (but given your stubborn refusal to acquire 
insight into the workings of R, I am not optimistic).


This is not magic or religion.  You need to *understand* what you are 
dealing with, and proceed rationally.  Don't just hammer and hope.


cheers,

Rolf

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] Modifying the length of a matrix variable

2019-03-30 Thread Rolf Turner



On 31/03/19 11:14 AM, rain1...@aim.com wrote:


Hi Rolf (and others),

I tried your suggestion, but when I used dim(Model4.chopped), it still 
shows 95 layers, as shown below:


8192    95

I also find that the total number of cells is rather low for that many 
layers. I started with 778240 cells over 95 layers.


Well then you're doing something wrong, or there is something that you 
haven't told us.


E.g.:

junk <- array(runif(64*128*95),dim=c(64,128,95))
junk.chopped <- junk[,,1:90]
dim(junk)
[1]  64 128  95
dim(junk.chopped)
[1]  64 128  90


Perhaps Model.4 has some structure other than that of an array. 
(Originally you said it was a matrix.)


You really need to get your terminology and ideas *clear* in order to 
have any hope of receiving useful advice.


I have no idea what you are on about in respect of "the number of 
cells".  My mind-reading machine is in the repair shop.  I strongly 
suspect that your thoughts are confused.


cheers,

Rolf

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] Modifying the length of a matrix variable

2019-03-30 Thread Rolf Turner



On 31/03/19 10:22 AM, rain1...@aim.com wrote:


Hi Rolf,

My apologies - I meant "layers" as opposed to "length". The goal is to 
reduce the number of layers to 90 (from 95).


dim (Model4) yields:

64   128   95


You can see the 95 there. That is what I would like to reduce to 90, or 
isolate layer 1 to layer 90.


Please keep the list in the set of recipients.  I am not your private 
consultant, and furthermore others on the list may be able to provide 
better advice than I.  I have CC-ed this message to the list.


To keep only "layers" 1 through 90 you could do:

Model4.chopped <- Model4[,,1:90]

As I said before, it really is time that you learned something about R 
(e.g. by studying a tutorial).


cheers,

Rolf

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] Modifying the length of a matrix variable

2019-03-30 Thread Rolf Turner



On 31/03/19 3:56 AM, rain1290--- via R-sig-Geo wrote:


Hello there, I am currently trying to modify a variable's length. It
is called "Model4" and is a matrix. It currently has the length of
95, as per "length(Model4)". However, I would like to create a new
Model4 (let's say "NewModel4"), in which it has a length of 90,
instead of 95. Is there a way to do this? Thanks, and any assistance
would be greatly appreciated! [[alternative HTML version deleted]]


This is a plain text mailing list.  Please *DO NOT* post in HTML.
(In general this scrambles your post and makes it incomprehensible.)

To get to your question:  What you ask makes little sense.  The "length" 
of a matrix is the total number of entries --- nrow() * 
ncol().  Changing the "length" of a matrix would either involve 
changing the number of rows or the number of columns (or both).
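
For instance, in a toy case:

```r
# length() of a matrix counts all of its entries, i.e. nrow() * ncol().
m <- matrix(1:6, nrow = 2)   # a 2 x 3 matrix
length(m)                    # 6
nrow(m) * ncol(m)            # also 6
```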


Why do you want to do this?  What are you trying to accomplish?
What does dim(Model4) produce?

Don't you think it's time you got serious and learned a bit about R?
(There are many excellent introductory documents available online.)

cheers,

Rolf Turner

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] Re: editing a correlogram

2019-03-16 Thread Rolf Turner



On 17/03/19 6:16 AM, Leonardo Matheus Servino wrote:


I tried the function par() and arguments inside the plot(), but some
parameters doesn't change.
For example, the argument pch=, which changes the symbols that represents
the points in the plot doesn't work.


If you want useful advice show the commands that you actually used, in 
the context of a *minimal reproducible example* (including the *data* 
involved, provided in such a way --- use dput()!!! --- that it can be 
accessed by your advisors.


cheers,

Rolf Turner


--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] Re: Aggregating points based on distance

2019-03-14 Thread Rolf Turner



On 14/03/19 7:33 AM, Barry Rowlingson wrote:




The problem with "modern" syntax is that it's subject to rapid change
and often slower than using base R, which has had years to stabilise
and optimise.




Fortune nomination.

cheers,

Rolf

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] Parallelization of LMZ.F3GWR.test in spgwr package

2019-01-12 Thread Rolf Turner




On 1/13/19 1:06 AM, Roger Bivand wrote:


... GWR is very demanding for computation. It should only ever be used
for exploring model mis-specification, never for inference or prediction.
So making a bad test on a bad method run faster should not be a priority.


Fortune nomination!

cheers,

Rolf

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] R Help

2018-12-20 Thread Rolf Turner



On 12/21/18 10:56 AM, Esu Esu via R-sig-Geo wrote:


Dear All,
I am new to R-programming. 


Learn to crawl before you start trying to walk or run.  Study some of 
the many R tutorials that are available.



I have a question, if anyone can help me with it: how can I create
functions in R to compute the area (A), centroid (Cx and Cy) and
perimeter (P) of a polygon list?  I want to write R functions to
compute these quantities for individual polygons, as well as an
overall function to take a polygon list and return a data frame with
four columns and the same number of rows as there are polygons in the
list.  Each quantity should correspond to one column.  It is an (sp)
class object.  Hope to find any suggestion or answer.


The spatstat package will enable you to do all this pretty easily.  See

?centroid.owin
?perimeter
?area.owin

You will need to convert your polygons to objects of class "owin".  The 
as.owin() function should handle this for you.
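
As a rough sketch of how the pieces fit together (the two windows
below are made-up stand-ins for your converted polygons):

```r
library(spatstat)

# Two hypothetical polygonal windows standing in for converted sp polygons.
w1 <- owin(poly = list(x = c(0, 1, 1, 0), y = c(0, 0, 1, 1)))  # unit square
w2 <- owin(poly = list(x = c(0, 2, 1), y = c(0, 0, 2)))        # triangle
polys <- list(w1, w2)

# Area, centroid, and perimeter for a single window.
polyStats <- function(w) {
    ctr <- centroid.owin(w)
    data.frame(A = area.owin(w), Cx = ctr$x, Cy = ctr$y, P = perimeter(w))
}

# One row per polygon, four columns, as requested.
do.call(rbind, lapply(polys, polyStats))
```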


cheers,

Rolf Turner


--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] intersections to individual polygons

2018-07-02 Thread Rolf Turner


Agustin:

I have attached a function that I think does what you want.
It returns either a tessellation (if you leave tess=TRUE) or
a list of owin objects otherwise.

I *think* that attachments with .R extensions make it through to the 
list.  The attachment should at least get through to Agustin (who is 
the person of central interest!).

It produces all non-empty intersections between sets of tiles
in the tessellation argument "ttt".  If singles=TRUE (the default)
these "intersections" include the tiles themselves.  Otherwise
at least two tiles are involved in each intersection.

E.g.

x1 <- allIntersections(te) # a tessellation
x2 <- allIntersections(te,tess=FALSE) # a list of windows
x3 <- allIntersections(te,singles=FALSE) # tiles themselves omitted.

Note that the code includes a work-around for an infelicity in
in intersect.owin().  This infelicity will very likely be fixed
in a future release of spatstat.

cheers,

Rolf

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

On 02/07/18 23:35, Agustin Lobo wrote:

Mimicking your vignette:

require(rgdal)
require(raster)
require(spatstat)
require(rgeos)
require(maptools)

dzfile <- "https://www.dropbox.com/s/xxujcwqy3ec21sa/TDM1_DEM__04_S63W062_epsg32720.zip?dl=0"
download.file(dzfile,"TDM1_DEM__04_S63W062_epsg32720.zip",method="wget")
unzip("TDM1_DEM__04_S63W062_epsg32720.zip")
v <- readOGR(dsn="TDM1_DEM__04_S63W062_epsg32720",
  layer="TDM1_DEM__04_S63W062_epsg32720", stringsAsFactors = FALSE)
plot(v)
p <- slot(v, "polygons")
p2 <- lapply(p, function(x) { SpatialPolygons(list(x)) })
w <- lapply(p2, as.owin) #maptools required

te <- tess(tiles=w)
class(te)
plot.tess(te,do.labels=TRUE)

But this is still the original polygons,
not the mosaic of polygon parts I'm looking for.
Would I need to go doing all possible intersections and then
adding the "A not B" and "B not A" parts for all possible pairs?

Or is there a function in spatstat to convert "te" into a tesselation
of adjacent polygons?

Thanks



On Thu, Jun 28, 2018 at 2:02 PM, Rolf Turner  wrote:


On 29/06/18 00:51, Agustin Lobo wrote:


Given an SpatialPolygons DF with i.e. 2 intersecting polygons, A and
,B,  is there a function
that would split the original polygons into "onlyA", "onlyB" and
"AintersectingB" polygons?



You can do this easily in spatstat by converting your polygons to "owin"
objects.

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo
allIntersections <- local({
    all.combs <- function(n, singles=TRUE) {
        v <- vector("list", n)
        inds <- if(singles) 1:n else 2:n
        for(i in inds) {
            v[[i]] <- as.list(as.data.frame(combn(n, i)))
        }
        do.call(c, v)
    }

    function(ttt, singles=TRUE, tess=TRUE) {
        tiles  <- ttt$tiles
        ntiles <- length(tiles)
        inds   <- all.combs(ntiles, singles=singles)
        rslt   <- vector("list", length(inds))
        names(rslt) <- rep("", length(rslt))
        K <- 0
        for(ind in inds) {
            K <- K + 1
            w <- try(do.call(intersect.owin,
                             c(tiles[ind], list(fatal=FALSE))), silent=TRUE)
            if(inherits(w, "try-error") || is.empty(w)) w <- NULL
            if(!is.null(w)) {
                lbl <- paste(ind, collapse=".")
                names(rslt)[K] <- lbl
                rslt[[K]] <- w
            }
        }
        rslt <- rslt[!sapply(rslt, is.null)]
        if(tess) rslt <- tess(tiles=rslt)
        rslt
    }
})
___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] intersections to individual polygons

2018-06-28 Thread Rolf Turner



On 29/06/18 00:51, Agustin Lobo wrote:


Given an SpatialPolygons DF with i.e. 2 intersecting polygons, A and
,B,  is there a function
that would split the original polygons into "onlyA", "onlyB" and
"AintersectingB" polygons?


You can do this easily in spatstat by converting your polygons to "owin" 
objects.


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] Help

2018-05-14 Thread Rolf Turner


On 14/05/18 20:17, Soufianou Abou via R-sig-Geo wrote:


Bonjour , j'aimerais utiliser  maxent pour modéliser la distribution
potentielle du niébé sur la base des données de présence seuls. En
effet, jai acquis un certains nombre de variables environnementales
et bioclimatiques concernant ma zone d'étude.  Mais pour choisir les
variables les plus contributives dans le modèle; j'aimerai faire une
analyse de correlation de celles-ci. Sur ce, pourriez vous
m'expliquer etape par etape les procedures à suivre sous R ?
J'aimerais dire par là le scripts pour:  -compiler et appeler
toutes les variables environnementales et les données d'occurence; -
executer le tester de correlation;-pour faire une analyse
discriminante?

Merci par avance.

La langue de cette liste est l'anglais.
S'il vous plaît exprimer votre question en anglais.

I'm afraid that my French is insufficient to follow your question 
properly, but I gather that you have presence-only data (for some 
phenomenon) and a number of environmental variables from which you hope 
to predict occurrences of this phenomenon.


You also express an interest in undertaking a correlation analysis of 
your predictors and performing "tests of correlation".


Given that I am understanding you correctly, I would advise against 
this.  The proper strategy (IMHO) is to *fit a model* using your 
predictors and then assess their predictive power in this model, in some way.


If the "presence only" data, that you have, can be considered to be 
point locations, and if the values of your predictors are available at 
all points of your study region, then you may be able to effect the 
required model fitting using the facilities of the spatstat package.


Anyway, please re-post your question en anglais, if you can.  You are 
much more likely to get a useful answer if you do.  Bonne chance.


Cordialement,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] Tutorial on reading hyperspectral data

2018-05-05 Thread Rolf Turner


On 06/05/18 05:31, Raja Natarajan wrote:


hyperspectral astrology images


Did you mean *astronomy* ??? !!!

I certainly hope so! :-)

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


[R-sig-Geo] Terrain elevation plots with points shown --- spatstat.

2018-04-25 Thread Rolf Turner

On 25/04/18 19:45, Nicholas Bradley wrote (in a mis-titled post):


Good morning

I hope you are well

I have a question regarding the persp and perspPoints functions

I have the x and y spatial coordinates and also the altitude of all
these coordinates. However, despite consulting several manuals, I
have as yet been unable to create a terrain elevation plot and also
to plot the points.
I have only managed to create a persp image showing density

I would therefore appreciate very much if anyone could be so kind as to help.


Your question is a bit vague and does not supply anything like a 
reproducible example, so it is hard to answer.


The cover of the book (Baddeley et al.) that Marcelino de la Cruz Rot 
has referred to, in the thread that you hijacked, displays a plot that 
might be something like what you want.  The code for producing this plot 
can be found in the online supplement to that book at:


http://book.spatstat.org/chapter-code/chapter09.html

For your convenience here are the relevant lines of code:

library(spatstat)
fit <- ppm(bei ~ polynom(grad, elev, 2), data=bei.extra)
lamhat  <- predict(fit)
col.lam <- topo.colors
col.pts <- 1
M <- persp(bei.extra$elev, colin=lamhat,
           colmap=col.lam, shade=0.4, theta=-55,
           phi=25, expand=6, box=FALSE,
           apron=TRUE, visible=TRUE, main="")
perspPoints(bei, Z=bei.extra$elev, M=M,
            pch=".", col=col.pts, cex=1.25)

HTH

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] problem with enveloped test in spatstat

2018-04-25 Thread Rolf Turner

On 25/04/18 19:45, Nicholas Bradley wrote:

Good morning

I hope you are well

I have a question regarding the persp and perspPoints functions

I have the x and y spatial coordinates and also the altitude of all these
coordinates. However, despite consulting several manuals, I have as yet
been unable to create a terrain elevation plot and also to plot the points.
I have only managed to create a persp image showing density

I would therefore appreciate very much if anyone could be so kind as to help.


Please don't hijack threads.  Your question has nothing to do with 
envelope tests.  You have a new question, so start a new thread with an 
appropriate subject line.


I shall shortly attempt to answer your question in such a new thread.

cheers,

Rolf Turner


--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] Estimate of the intensity of a point process, as a function of a marked point process as covariate.

2018-03-31 Thread Rolf Turner


On 01/04/18 09:36, ASANTOS via R-sig-Geo wrote:


Dear R Sig Geo Members,

     I've like to know if there are any function in any package for 
estimation density in a marked point process (e.g. geographic position 
and size of ants nests in square meters). My goal will be represents the 
density of ants nest estimated, but use nests sizes as covariate, this 
is possible?


The Smooth.ppp() function in the spatstat package might be what you are 
looking for.


Using nest sizes as a *covariate* does not make sense to me.  A 
covariate should be defined at all points of the observation window, not 
just at the points of the observed pattern.
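
For what it's worth, here is a minimal sketch of the smoothing idea,
using the built-in longleaf pattern (trees marked by diameter) as a
stand-in for your ants' nests:

```r
library(spatstat)

# Smooth() on a ppp with numeric marks gives a pixel image of the
# spatially varying (kernel-weighted) average mark value.
avgSize <- Smooth(longleaf, sigma = 10)
plot(avgSize, main = "Smoothed tree diameter")
```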


See

  Adrian Baddeley, Ege Rubak, Rolf Turner (2015). Spatial Point
  Patterns: Methodology and Applications with R. London: Chapman and
  Hall/CRC Press, 2015. URL

http://www.crcpress.com/Spatial-Point-Patterns-Methodology-and-Applications-with-R/Baddeley-Rubak-Turner/9781482210200/

particularly section 5.6.4, pages 147, 148.

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] Re: [R] How to label a polygon window (spatstat package)

2018-02-10 Thread Rolf Turner


On 10/02/18 18:43, Mohammad Tanvir Ahamed via R-sig-Geo wrote:


Thanks.
When there are multiple polygons, it is a problem.
I am looking for something more.


It is not at all clear to me what you are trying to accomplish.
If it were clearer, I could probably (???) give you an idea of how to 
approach your task.


Can you provide a minimal reproducible example of your situation, with a 
clear explanation of what you want to do next?  Some subscribers to 
r-sig-geo (myself *not* included) are very clever, but few if any are 
telepathic.


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] how to limit an image or a funxy to an irregular polygonal window, instead of the whole enclosing rectangle?

2017-12-15 Thread Rolf Turner

On 16/12/17 11:43, Christopher W. Ryan wrote:

Thanks Rolf. I'm going to have to reflect more on my code and my data,
to understand better what is going on.

Obviously this won't help you much, without having access to all my data
and preceeding code,


Too true!


but the error message that is tripping me up is:


rhohat(m12, pov.f)

Error:  the fitted intensity contains NA values


No idea what is causing that to happen, and it's impossible to work it 
out without having your data.  (Point pattern and the predictor image.)


I can't find that error message anywhere in the spatstat code.  Is it 
possible that you have some other rhohat() function (maybe from some 
other package) hanging around and getting in the way?  Using traceback() 
*might* provide some insight.




And yet,

table(is.na(fitted(m12)))
FALSE
   876

The predicted intensity, however, contains many NA values:

table(is.na(predict(m12)$v))
FALSE  TRUE
10379  6005


This is a *COMPLETE RED HERRING* !!!

Of course you will get NAs from this.  The predicted intensity is an 
*image* on the window of the point pattern to which the model is fitted.
All pixels within the bounding box of that window, but outside of the 
actual window, have pixel value NA.
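
A small sketch with made-up data shows the effect:

```r
library(spatstat)

# An irregular polygonal window, and a model fitted to a pattern in it.
W <- owin(poly = list(x = c(0, 2.6, 3, 1), y = c(1, 2, 3, 2)))
set.seed(1)
X <- runifpoint(50, win = W)
fit <- ppm(X ~ x + y)

# predict() returns an image over the bounding box of W; pixels whose
# centres fall outside the polygon are NA by construction.
Z <- predict(fit)
table(is.na(Z$v))
```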


I try to force predictions only within my window by specifying locations
(which I think requires a binary mask window) but get the same result:


table(is.na(predict(m12, locations = as.mask(sremsWindow))$v))

FALSE  TRUE
10379  6005


No, no, no!!!  The predictions are always "within your window".  That's 
*why* they are NA at pixels outside that window.




Does rhohat use fitted values (at quadrature points) or predicted values
(on a 128 x 128 pixel grid within the window)? Top of p. 415 in your
book Spatial Point Patterns: Methodology and Applications wtih R seems
to indicate the latter, while the error message from my rhohat command
above speaks of fitted values.


I don't understand where that error message is coming from, so I don't
get what it is on about, but essentially rhohat() looks at values of the 
predictive covariate and of the fitted intensity at all pixel centres 
within the window.  The 128 x 128 grid is the default here, but can be 
changed (in the call to rhohat()).
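
To illustrate the intended usage (with the built-in bei data rather
than your pattern and covariate):

```r
library(spatstat)

# rhohat() estimates intensity as a function of a spatial covariate;
# here, terrain gradient for the tropical rainforest pattern 'bei'.
rh <- rhohat(bei, bei.extra$grad)
plot(rh)
```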



And how is a rectangular 128 x 128 grid
fit in an irregularly-shaped polygonal window?  Maybe that's how NA
predicted values arise--pixels outside an intra-window rectangular grid
but still inside the window?


The rectangular grid is over the bounding box of the window.  (Which is
rectangular!!!) Only pixels whose centres are within the window have 
non-NA values.





And I can see now that no residuals from the model are missing:


table(is.na(residuals(m12)$val))

FALSE
   876

All the NA's in my predicted values *around* my window, but inside the
bounding rectangle, led me down the garden path.


Indeed.



The origin of most of my predictors, such as pov.f, are shapefiles from
the US Census Bureau, with a discrete value of poverty level for each
census tract. So a tesselation of my window, really.  Through much
wrangling (possibly poorly-done) I was able to turn each predictor into
a funxy--therefore they are essentially step functions, constant across
a census tract and with abrupt changes at census tract boundaries.  I
notice that rhohat calls for a continuous covariate. Could that be an
issue? 


I don't think so.  The rhohat() function works by smoothing, and 
smoothing a step function doesn't really make sense.  But I think that

rhohat() should still give you an answer, even though it's a crappy answer.


Although, I have one predictor that is a continuous distfun, and
I get the same error message from rhohat with that.


Don't think that I can come up with any useful suggestions as to how to 
proceed from here.  Sorry.


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] how to limit an image or a funxy to an irregular polygonal window, instead of the whole enclosing rectangle?

2017-12-15 Thread Rolf Turner


I am probably misunderstanding something (as usual) but I cannot fathom 
what you *expect* the values of the funxy object or the image to *be*, 
outside of the window.


See inline below.

On 16/12/17 08:37, Christopher W. Ryan wrote:

Using R 3.3.3 and spatstat 1.50-0 on Windows 7.  MWE below.


Thank you for providing a clear and simple example.



I have a SpatialPolygonsDataFrame of census tract poverty levels in 3
contiguous counties in the US, called sremsPoverty. I want to use this
as a predictor in a ppm model. The window for the point pattern is the
three counties--so an irregular polygon--called sremsWindow.

I understand ppm predictors need to be an image, a tesselation, a funxy,
a window, or a single number. So I proceed as follows:

### Poverty
p <- slot(sremsPoverty, "polygons")
v <- lapply(p, function(z) { SpatialPolygons(list(z)) })
sat <- tess(tiles = lapply(v, as.owin) )
pov.f <- as.function.tess(sat, values = sremsPoverty@data$propPoverty)

Thus pov.f is a spatial function (funxy) that I can use in a call to ppm():

m1 <- ppm(unique.cases.ppp ~ pov.f)

pov.f looks as expected when I plot it. But examing the structure of
as.im(pov.f) it appears it is a 128 x 128 pixel array, with the value of
the function at all pixels outside of the irregular polygonal window,
but inside the bounding rectangle, set to NA.


What else could/should they be set to?


I think this is the cause
of the NA values I am seeing among the residuals from m1,


I don't think this is the case.  Perhaps more detail is needed; perhaps 
Adrian or Ege will be able to provide more insight.



and those NA
residuals are causing me some trouble with model diagnostics such as
rhohat().


Again I, at least, would need more detail before being able to provide 
any constructive comment.



How do I constrain the funxy (or the image I can derive from it) to the
irregular polygonal window, so as to eliminate the NA values outside the
window but inside the bounding rectangle? Or can I constrain the
modeling activity of ppm() to the window?


The "modelling activity of ppm()" is *ALWAYS* constrained to the window 
of the pattern to which ppm() is applied.  This is fundamental to the 
the way that ppm() works, and to what a window *means*.




Thanks.

--Chris Ryan
Broome County Health Department
Binghamton University
SUNY Upstate Medical University
Binghamt, NY, US

MWE:

x <- c(0, 2.6, 3, 1, 0)
y <- c(1,2,3,2,1)
test.window <- owin(poly=list(x=x,y=y))
plot(test.window)  ## looks as expected
## make spatial function
test.f <- function(x,y){x+y}
test.funxy <- funxy(test.f, W = test.window)  ## I *thought* this would
## restrict the funxy to the window, but I don't think it does
plot(test.funxy)  ## looks as expected
## but the image from test.funxy is square, with NA values outside the
## polygonal window, which is not square
test.im <- as.im(test.funxy)
str(test.im)


Again, what could the values of the image, outside of test.window, 
possibly *be*, other than NA?



## my incorrect attempts to restrict the image to the window yield a
## numeric vector
str(test.im[test.window])


You need to do

 new.im <- test.im[test.window,drop=FALSE]

to get an image rather than a numeric vector.  However the "new.im" that 
you obtain will be identical to test.im:


> all.equal(new.im,test.im)
[1] TRUE


## or an error message
window(test.im) <- test.window


You need a capital "W" on Window(test.im) (to avoid an error message). 
But again this won't change anything.


Finally:

set.seed(42)
X <- runifpoint(20,win=test.window)
xxx <- ppm(X ~ test.im)
plot(residuals(xxx))
# No sign of any missing values.

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] Intensity Estimation Methods

2017-12-14 Thread Rolf Turner

On 15/12/17 01:01, Cenk İÇÖZ via R-sig-Geo wrote:

Hello list,

I have a spatial point pattern.  I am trying to estimate its
intensity both with a fixed bandwidth and with an adaptive
bandwidth.  How could I compare the goodness of these two fits?  I
mean, are there any criteria like MSE, AIC, or any others?  I want to
compare the difference between the estimated intensity and the
original pattern's intensity.


I think that the following fortune (fortunes::fortune(340)) might be 
relevant:



Bandwidth selection is an unresolved (and possibly unsolvable) problem in
smoothing, so you're perfectly justified in trying/choosing an arbitrary value
if it produces good pictures!
   -- Adrian Baddeley (answering a user's question about the choice of
  smoothing parameter when using the density.ppp() function from the
  spatstat package)
  private communication (March 2013)


I am cc-ing to the man himself to see if he has further comment.

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: [R-sig-Geo] [FORGED] Problem

2017-11-09 Thread Rolf Turner

On 10/11/17 15:21, Hari Sharma wrote:

Hi,
While running glmer in R, I ran into some problems with some
variables. I have 10 variables and run the model with a combination
of these variables.  The response variable is binomial.  I got the
following problem:

m743 <- glmer(Fecalpellet ~ Abgherbcover + Aspect + Avgshrubcover +
              fallenlogno + slope + stumpnumber + waterdistance +
              (1|Year), data=df, family=binomial)
Warning message:
In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model failed to converge with max|grad| = 0.0430099 (tol = 0.001, component 1)

Could you please suggest any idea to solve this problem?


This question is not appropriate for R-sig-Geo.  You should have sent it 
to R-sig-mixed-models.


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] split lines at vertices with sf

2017-10-29 Thread Rolf Turner


On 30/10/17 09:55, Sergio Ibarra wrote:


Dear list members,

How to split lines at vertices with sf?


Huh?

I know they say brevity is the soul of wit, but I think that this 
posting carries that idea a bit too far.


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] quadracount on multitype points

2017-09-13 Thread Rolf Turner

On 13/09/17 11:08, Guy Bayegnak wrote:

Thanks a lot for your response and suggestion Rolf.  Yes, by "quadrant" I mean 
the little sub-windows.  My problem is the following:

We have collected thousands of groundwater samples across a vast area, and analysed them. 
Based on the analysis we are able to assign a "type" to each water sample.  
When plotted, there seems to be a spatial trend in water type. But a given area may have 
more than one water type,  usually with a dominant type (most frequently occurring). What 
I am trying to do is identify the dominant type for each sub-region /sub-windows but show 
the count side by side, for example:

              x
  y           [0,0.801)    [0.801,1.6]
    [0.5,1]   Off = 36     Off =  6
              On  =  3     On  = 39

    [0,0.5)   Off =  4     Off = 36
              On  = 42     On  =  6



I don't understand the counts in the foregoing.  Have some digits been 
left off in places?  I.e. should this be:

>
>x
>   y [0,0.801)[0.801,1.6]
>[0.5,1]   Off = 36   Off = 36
>  On  = 35   On  = 39
>
> [0,0.5)  Off = 34   Off = 36
>  On  = 42   On  = 36   ???
>



I think I can achieve what I am looking for with your suggestion. Once I get 
the table list, I will copy the numbers side by side manually.


Yeucch!  Manually?  Saints preserve us!

Do you really mean "quadrant" or do you simply mean *quadrat*???

Sticking with quad*rant* (it doesn't really matter), how about something 
like:


rants <- tiles(quadrats(Window(amacrine),nx=2))
lapply(rants,function(w,pat){table(marks(pat[w]))},pat=amacrine)

which gives:


$`Tile row 1, col 1`

off  on 
 36  35 


$`Tile row 1, col 2`

off  on 
 36  39 


$`Tile row 2, col 1`

off  on 
 34  42 


$`Tile row 2, col 2`

off  on 
 36  36 


cheers,

Rolf

P. S.  But you are probably well-advised to forget all this quadrat 
counting stuff and use relrisk() as suggested by Ege Rubak.
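
For the record, a bare-bones relrisk() call on the built-in amacrine
pattern (a stand-in for your groundwater samples):

```r
library(spatstat)

# For a bivariate pattern, relrisk() returns a pixel image of the
# spatially varying estimated probability of one of the two types
# (see ?relrisk for details and the 'casecontrol' argument).
pr <- relrisk(amacrine)
plot(pr, main = "Estimated type probability")
```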


R.

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] quadracount on multitype points

2017-09-13 Thread Rolf Turner


On 13/09/17 02:11, Guy Bayegnak wrote:


Dear all,

I am working on a multitype point pattern, and I'd like to estimate
how many of each type of point occur in each quadrant. I know it
is possible to use quadratcount() on split marks as follows, using
spatstat: quadratcount(split(marks)). But the result produces as
many windows as there are marks. I am wondering if there is a way to
know how many occurrences of each type there are per quadrant, and to
plot them in a single grid.

Thanks.


You really should start by mentioning that you are dealing with the 
spatstat package.


It's not clear to me what you are after.  A minimal reproducible example 
would be helpful.  I presume that by "quadrant" you mean one of the four 
equal sub-windows formed by bisecting your (rectangular) window 
vertically and horizontally.


If my presumption is correct then perhaps

lapply(split(X),quadratcount,nx=2)

(where "X" is your point pattern) does what you want.  E.g.:


lapply(split(amacrine),quadratcount,nx=2)
$off
 x
y [0,0.801) [0.801,1.6]
  [0.5,1]36  36
  [0,0.5)34  36

$on
 x
y [0,0.801) [0.801,1.6]
  [0.5,1]35  39
  [0,0.5)42  36


Is this something like what you wish to achieve?

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] circular spatial polygon

2017-09-13 Thread Rolf Turner


Whoops. I cc-ed my message to r-help rather than to r-sig-geo.  (Duh!)

Consequently I'm resending, with r-sig-geo as a cc recipient.

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

On 13/09/17 19:37, Rolf Turner wrote:

On 13/09/17 13:24, Kátia Emidio wrote:

Dear Rolf,

Thanks for your help!

What I need is a spatial window with a shape equal to the figure 
attached.  I made this figure using ArcGIS, but it is important to me 
to make it in R.  After having this figure I will make some analyses 
using spatstat, among others.  The points within the figure are trees...


(1) I believe that this discussion should be kept on-list.  It is not my 
role to provide private consulting for you.  I  am therefore cc-ing the 
list in this email.


(2) It is still not completely clear what you really want; the figure 
that you attached appears to be a disc with 4 diameters superimposed. So 
you might be after a single (circular) owin object and a line segment 
pattern consisting of the 4 diameters.  Or you might be after *eight* 
owin objects, each being one of the eight disc segments into which the 
diameters divide the disc.


I shall assume the latter.  To start with define a function, say "wedge":

wedge <- function(theta1,theta2,radius,npoly=100,centre=c(0,0)){
 library(spatstat)
# Should do some checking on the values of theta1 and theta2 here,
# but I shan't bother.
 theta <- seq(theta1,theta2,length=npoly+1)
 x <- c(0,radius*cos(theta),0)
 y <- c(0,radius*sin(theta),0)
 W <- owin(poly=list(x=x,y=y))
 return(affine(W,vec=centre))
}

Then do something like:

wedgies <- vector("list",length=8)
cntr <- c(673593.21,673593.21)
for(i in 1:8) wedgies[[i]] <- wedge((i-1)*pi/4,i*pi/4,15,centre=cntr)
ttt <- tess(tiles=wedgies)
plot(ttt) # Looks OK to me.

And maybe also do:

W <- do.call(union.owin,wedgies)
plot(W)
for(i in 1:8) {
 plot(wedgies[[i]],add=TRUE,border="red")
 readline("Go? ")
}

Also looks OK to me.

Is *this* what you want?



Re: [R-sig-Geo] [FORGED] circular spatial polygon

2017-09-12 Thread Rolf Turner


On 13/09/17 08:48, Kátia Emidio wrote:


Dear all,
My question is how to create a circular spatial polygon, with 8 arcs of 45
degrees, and radius measuring 15m. Having in the centre point the UTM
coordinates, zone 20S.

x= *673593.21*
y= *673593.21*


"Circular polygon" is a contradiction in terms.  If a shape is a polygon 
then it is *not* a circle.  (Of course in real life we use polygons with 
large numbers of sides to *approximate* circles.  But 8 is not large!)


You can create an octagon with the required centre and radius using 
*spatstat* via:


oct <- disc(radius=15,centre=c(673593.21,673593.21),npoly=8)

Does that provide (at least a start on) what you want?

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext.


Re: [R-sig-Geo] [FORGED] Re: using spatialpolygonsdataframe in ppm (or, converting spatialpolygonsdataframe to pixel image or other object useful in ppm)

2017-09-02 Thread Rolf Turner

On 02/09/17 19:00, Michael Sumner wrote:





It's not clear to me if you need polygons in spatstat or a raster owin
version of them - admittedly spatstat does allow a combination of the two,


That's not actually true.  The spatstat package allows for observation 
windows (objects of class "owin") which are either polygonal (of type 
"polygonal") *or* "raster-like" (of type "mask"), consisting of a 
pixellation of the bounding box with the pixels having the values TRUE 
or FALSE.  There is also type "rectangle", which is conceptually a 
special case of "polygonal" but has (obviously?) practical advantages 
when it is applicable.


However an "owin" object *cannot* be a combination of polygonal and mask 
types.




cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] spatstat residuals.ppm with an intensity given as an image?

2017-06-14 Thread Rolf Turner


On 07/06/17 22:22, Seth Flaxman wrote:


I've got the intensity of an inhomogeneous Poisson process (fit using
some new methods I'm working on, so not created by spatstat) as an
image object, the observed point pattern as a ppp object, and I'd like
to call residuals.ppm to compute residuals. Is there a simple way to
create a fitted point process model (ppm) from an image and points so
I can pass this directly to residuals.ppm?


Sorry for the long delay in responding.

There are two ways to do this:

[1] Fit a model using the putative intensity as an offset:

 fit1 <- ppm(X ~ offset(log(lam)))

where 'lam' is the pixel image of intensity, and 'X' is the point pattern.

Then do:

   res <- residuals(fit1)

[2] Fit some other model (e.g. just a constant intensity model) and use 
the argument 'fittedvalues' of residuals.ppm to specify the fitted 
intensity values.


   fit2 <- ppm(X ~ 1, forcefit=TRUE)

   res  <- residuals(fit2, fittedvalues=lam[quad.ppm(fit2)])

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] Negative mse values

2017-03-27 Thread Rolf Turner


On 27/03/17 21:28, Cenk İÇÖZ via R-sig-Geo wrote:


Hi  dear friends.

I was trying to smooth a spatial point pattern with kernel function
but I am a bit confused .
I need some explanation about negative MSE values obtained ( used
forselecting optimum bandwidth) by using mse2d function of "splancs " package.
As the definition of mse function for one dimension I know it might
not take negative values.
However I have no idea for the two dimensional definition of mse for
aspatial point pattern. Could the mse values be negative in this case? I
found an example taking minimum negative value of mse fort he optimum
bandwidth which is the lowest negative value


The short answer is that the value returned by mse2d() is not actually 
the MSE but rather MSE minus a data dependent constant.  So this can 
"legitimately" be negative.


The off-setting constant "does no harm" since interest lies in 
determining the optimum bandwidth, so it is the relative sizes of the 
values produced that are of interest.


It has been suggested to me in the past (by Barry Rowlingson, one of the 
original authors of splancs) that the function bw.diggle() from the 
spatstat package may be more reliable than mse2d().  The former function 
uses a somewhat different calculation procedure, whence the results of 
the two functions are not exactly comparable.


Note that bw.diggle() is expressed in terms of "sigma" whereas mse2d()
is expressed in terms of "h" where sigma = h/2. So if mse2d() gives an
"optimal" value of 3, one would *roughly* expect bw.diggle() to give an 
optimal value of 1.5.
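
E.g., a minimal sketch using one of spatstat's built-in data sets (the 
redwood pattern is just an arbitrary example here):

```r
library(spatstat)
sig <- bw.diggle(redwood)  # "optimal" sigma by Diggle's criterion
2 * sig                    # roughly on the scale of the "h" used by mse2d()
```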


Note also that estimating an "optimal" bandwidth is a pretty inexact 
endeavour under the best of circumstances.  The smooth surface to be 
fitted is an ill-determined creature and the bandwidth that gives the 
best fit is even more ill-determined.


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276


Re: [R-sig-Geo] [FORGED] Re: Making an hexagonal grid using spsample

2017-03-04 Thread Rolf Turner


On 05/03/17 01:35, Marcelino de la Cruz Rot wrote:


My apologies for such a pair (or more) of embarrassing mistakes!

I should read a bit more these days...



I'm just happy to see that this sort of mistake happens to others as 
well as to my very good self! :-)


cheers,

Rolf


--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] Re: Making an hexagonal grid using spsample

2017-03-03 Thread Rolf Turner

On 04/03/17 08:38, Marcelino de la Cruz Rot wrote:

Hi  Manuel,

I do answer to the question "How can I make a spatial grid of 1 ha (or
other size) in R?"

You can use function hextess in spatstat

library(spatstat)

# some arbitrary area, with coordinates in hectometres
 W <- Window(chorley)

# As Rolf said, hexagons of 1ha should have side of 402.0673 metres, so,
in hectometres:

s <- 4.020673

plot(hextess(W, s))
plot(hexgrid(W, s), add=TRUE)


Marcelino,

Actually I said hexagons of area *42* ha should have side length equal 
to 402.0673 metres.


Moreover the Chorley data set has units of *kilometres* not hectometres,
so that should be s <- 0.4020673.  Or, to avoid just a touch of 
round-off error, s <- sqrt(2*0.42)/3^0.75.


Note that if you then do

xxx <- hextess(W,s,trim=FALSE)
unique(sapply(tiles(xxx),area.owin))

you get 0.42 --- i.e. 0.42 square kilometres, or 42 hectares.

cheers,

Rolf

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] spsample: defining the size of the cell in an hexagonal grid

2017-03-01 Thread Rolf Turner

On 01/03/17 17:33, Manuel Spínola wrote:

Dear list members,

How can I define the size in hectares of the cell when doingan hexagonal
grid with the function spsample.  What it means "cellsize"?  I want to
create cell sizes according to some specific number of hectares.  How can I
do it?


data(meuse.grid)
gridded(meuse.grid) = ~x+y
plot(meuse.grid)
HexPts <-spsample(meuse.grid, type="hexagonal", cellsize=1000)
HexPols <- HexPoints2SpatialPolygons(HexPts)
plot(HexPols[meuse.grid,], add=TRUE)



I have no knowledge of spsample() nor of its argument cellsize (doesn't 
the help for spsample() tell you what this argument means?) but it might 
be useful for you to realise (if you are not already aware of this) that 
the area of a regular hexagon is


A = 3*sqrt(3)*s^2/2

where s is the length of side of the hexagon.  So if you want hexagons 
of area 42 hectares you would take their sides to be of length 402.0673 
metres.
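
Inverting that formula, a quick sketch of the side length needed for a 
target area:

```r
# side length s of a regular hexagon of area A, from A = 3*sqrt(3)*s^2/2
hexSide <- function(A) sqrt(2*A/(3*sqrt(3)))
hexSide(420000)  # 42 ha expressed in square metres; approximately 402.0673
```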


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276


Re: [R-sig-Geo] [FORGED] Selecting spatial points within spatial line segments

2017-02-08 Thread Rolf Turner

On 09/02/17 02:56, Manuel Spínola wrote:

Dear list members,

I have a spatial line (a road) segmented in 500-meters segments.  I also
have spatial points (roadkills) and I want to select the points related to
each segment in the line.  The points do not touch (intersect) the line,
but are close to it. How can I do this in R?



The project2segment() function from the spatstat package will probably 
do what you want.
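
E.g., a minimal sketch with simulated segments and points (function and 
component names as in spatstat):

```r
library(spatstat)
set.seed(3)
# five random line segments and ten random points in the unit square
L <- psp(runif(5), runif(5), runif(5), runif(5), window=square(1))
X <- runifpoint(10)
v <- project2segment(X, L)
v$mapXY  # index of the nearest segment for each point
v$d      # distance from each point to that segment
```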


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276


Re: [R-sig-Geo] jitter within polygon

2017-01-29 Thread Rolf Turner


On 28/01/17 18:29, Paul Lantos wrote:


I am trying to figure out how to randomly jitter points within an
overlying polygon structure.

I can easily jitter the points themselves, but I would like this to
be constrained by the boundaries of polygons containing the points.

This isn't for visualization - I can do that easily enough in GIS. I
actually need to generate coordinates.


You can do this easily in spatstat by making each polygon into an object 
of class "owin" (see e.g. the "shapefiles" vignette) and then using the 
function rjitter() (with retry=TRUE).


Note that if you have a collection of polygons some of which are close
to being contiguous, it is possible (or at least conceivable) for a 
jittered point to "jump" to a location inside a different polygon. 
Hence if you were to represent your collection of polygons as a single
(disconnected) "owin" object you would run the risk of incurring this 
"jumping" phenomenon.  Hence it would be safer to bundle your collection 
of polygons into a list, and apply rjitter() to each list entry (using 
lapply()).
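
A minimal sketch of that "safe" approach, with two artificial contiguous 
windows standing in for your polygons:

```r
library(spatstat)
set.seed(7)
W1 <- owin(c(0,1), c(0,1))
W2 <- owin(c(1,2), c(0,1))   # a contiguous neighbour
pats <- list(runifpoint(10, win=W1), runifpoint(10, win=W2))
# jitter each pattern within its own window; retry=TRUE rejects proposals
# falling outside the window, so no point can "jump" into the other polygon
jittered <- lapply(pats, rjitter, radius=0.1, retry=TRUE)
```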


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] unsuscribe

2017-01-25 Thread Rolf Turner

On 26/01/17 00:01, Nicolas Fabre wrote:

This is not the way you unsubscribe.  See the list info (given at the 
bottom of every posting).


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] Re: Error when I try to interpolate NA for each cell from Rasterstack

2017-01-14 Thread Rolf Turner

On 15/01/17 05:11, Vijay Lulla wrote:

I think this is because interpNA returns a matrix of one column whereas
raster::setValues (used internally by calc) expects the values to be a
vector of elements.  Changing the last line in `fun_interp` from
`return(z)` to `return(c(z))` might do the trick.




Better to use return(as.vector(z)).  See fortune(185).
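
E.g.:

```r
z <- matrix(1:4, ncol=1)
c(z)           # 1 2 3 4, but see fortune(185)
as.vector(z)   # 1 2 3 4, with the dim attribute dropped explicitly
```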

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] Spatial points pattern generation

2016-12-13 Thread Rolf Turner


On 14/12/16 12:17, Maurizio Marchi wrote:


Hello everybody, working with the spatstat package I would need to generate
some artificial spatial points pattern with an associate mark. More in
detail I would like to create 3 or 4 examples with approximately 1000 trees
on a square of 100x100 metres (1 hectare) with

1) Completely random distribution
2) Clustered distribution
3) Regular distribution
4) Mixed distribution

all the points must have an associated mark (e.g. in case of trees a
diameter in centimeters) and the autocorrelation must be detectable (the
exercise is exactly aimed to study the presence of autocorrelation)


"Completely random" patterns can be generated using rpoispp().

Clustered patterns can be generated using (amongst other functions)

  * rThomas()
  * rMatClust()
  * rLGCP()

or more generally using rNeymanScott().

Regular patterns can be generated using rmh() with 'cif="strauss"'
or using rStrauss().

Not quite sure what you mean by a "Mixed distribution".  Perhaps you 
want to do something like generate patterns from 1), 2) and 3) and then 
superimpose them into a single pattern using superimpose().


Marks are assigned to points of a pattern by means of the syntax

marks(X) <- m

where m is a vector or factor of length equal to npoints(X), or a data 
frame with nrow(m) = npoints(X).


How you generate the marks is up to you.
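
E.g., one (of many) ways to get marks with detectable spatial 
autocorrelation is to make them depend smoothly on location, plus noise; 
a sketch (the particular numbers here are arbitrary):

```r
library(spatstat)
set.seed(1)
W <- square(100)            # a 100 x 100 metre plot (1 hectare)
X <- rpoispp(0.1, win=W)    # completely random pattern, ~1000 points
# diameters (cm) that vary smoothly with location, hence autocorrelated
marks(X) <- 20 + 10*sin(X$x/20) + rnorm(npoints(X), sd=2)
```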

Read the (incredibly well written :-) ) help for the functions referred 
to above, or better still read the (even more incredibly well
written :-) ) book to which you can find a pointer by pointing your 
browser at http://spatstat.github.io.


That's about all I can say in response to such a general question.  If 
you have further focussed and specific questions, please get back to me.


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] Calculating envelopes for a point pattern on a linear network (equivalent to rshift in 2D)

2016-10-19 Thread Rolf Turner


On 19/10/16 02:04, Ignacio Barbeito-Sanchez wrote:


Dear list members,

We have a bivariate point pattern (two tree species) in a linear
transect and would like to obtain a null model equivalent to the one
provided by rshift in 2D to test the independence of both populations
(in spatstat)

We are using spatstat, but this option does not seem available at the
moment, or we can't find it (the option available tests the complete
spatial randomness of one population (p) once the other population is
fixed (s) but does not keep the pattern of the second population (p)
unchanged-


I don't understand what you are saying here, not that it matters a great 
deal.



which is a problem because both populations are clustered in
our case)

An example of our code follows:

T1 is a point pattern on a linear transect as follows:
 > T1
Point pattern on linear network
232 points
Multitype, with possible types: p, s
Linear network with 2 vertices and 1 line
Enclosing window: rectangle = [-1, 101] x [-1, 1] units

We used multiple pair correlation functions (linearpfccross)

We computed an envelope to test the hypothesis of complete spatial
randomness and independance:
T1.env = envelope.lpp ( T1, fun = linearpcfcross , nsim = 30 , i = "s",
j = "p")

If anybody has experienced a similar problem or has some hints on how to
proceed we would be very grateful.


As of present there is no rshift() method for the lpp class in spatstat, 
and it may be a while before such a method is added.  However for the 
simple structure in your example it is not hard to write a little add 
hoc function to do the shifting.  I enclose a skeletal example below.


Note that the characteristics of your example are "hard wired" in the 
given code; it shouldn't to too difficult to make the code more general 
however.  The code does a "loop" type shift, analogous to setting 
edge="torus" in rshift.ppp; this may be inappropriate for clustered 
data.  Again it shouldn't be too difficult to change this behaviour.


Since you did not provide a reproducible example I have tried out my 
code on simplistically simulated data.


=
# Code:
X <- psp(0,1,100,1,window=owin(c(0,101),c(-50,50)))
X <- as.linnet(X)
set.seed(42)
X <- runiflpp(232,X)
marks(X) <- factor(sample(c("p","s"),232,TRUE))

foo <- function(r0,X){
   u <- runif(2,-r0/2,r0/2)
   xp <- X$data$x[marks(X)=="p"]+u[1]
   xp[xp < 1] <- xp[xp < 1] + 99
   xp[xp > 100] <- xp[xp > 100] - 99
   xs <- X$data$x[marks(X)=="s"]+u[2]
   xs[xs < 1] <- xs[xs < 1] + 99
   xs[xs > 100] <- xs[xs > 100] - 99
   X$data$x[marks(X)=="p"] <- xp
   X$data$x[marks(X)=="s"] <- xs
   X
}

E <- envelope(X,fun=linearpcfcross,i="s",j="p",
  simulate=expression(foo(15,X)))
plot(E)
=

HTH

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] uniformly sample points on a border of a polygon

2016-10-13 Thread Rolf Turner

On 14/10/16 07:03, Paolo Piras wrote:

HI folks,

I write for a (hopefully) relatively simple question:

I would need to uniformly sample 1000 or more points **along the border** of a 
polygon (not within the area enclosed) that is identified by ordered but not 
equally spaced points; which is the fastest way?

In a first moment I thought to sample between any pair of consecutive points 
but, given that starting points are not uniformly distributed, the final result 
would be very far from a uniform distribution.

here my polygon:


  mypol<-round(matrix(c(-13.8447497369687, -3.51439434200449, 6.09494902836977, 
6.83498916728338, 9.20403746769121, 15.3061452155498, 18.4050681631565, 
15.334153355932, 9.21809033073377, 6.90467983448734, 6.17942233200763, 
-3.4864867866601, -13.8299219386242, -17.5237987124776, -17.2262670680261, 
-17.5217563171495, -2.29667185082115, -7.72275721405543, -9.77084968112857, 
-8.81725304021858, -8.32894043391822, -4.76080777897439, -0.0600572363382094, 
4.62779963258511, 8.20771806467615, 8.70484104396818, 9.68531129857718, 
7.67574865642846, 2.46081860449754, 1.31152149442131, 0.0845735294613392, 
-1.11988475144136),ncol=2),digits=2)
  plot(mypol,asp=1,cex=0)
  text(mypol[,1],mypol[,2],c(1:nrow(mypol)))
Thanks in advance for any hints



This can be done reasonably easily using the spatstat package, for some 
value of the word "reasonably".  Here's how:


require(spatstat)
W <- owin(poly=mypol)
m <- cbind(mypol[-nrow(mypol),],mypol[-1,])
m <- rbind(m,c(mypol[nrow(mypol),],mypol[1,]))
m <- as.data.frame(m)
names(m) <- c("x0","y0","x1","y1")
L <- with(m,psp(x0,y0,x1,y1,window=boundingbox(W)))
set.seed(42)
#X <- runifpointOnLines(1000,L)
X <- runifpointOnLines(100,L)
plot(W,main="Piras's Polygon")
plot(X,add=TRUE)

Note that I have just generated 100 uniform points, rather than 1000, so 
that the resulting plot is a little less cluttered.


There may be a sexier way of accomplishing your desideratum; I have 
cc-ed this email to my co-authors Adrian and Ege who may come up with 
better ideas.


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] YNT: Complete spatial randomness testing

2016-10-05 Thread Rolf Turner

On 05/10/16 21:29, Cenk İÇÖZ wrote:

Dear Rolf,

I installed the fixed version of spatstat from github repository.
allstats and Jest worked perfectly.
Thanks for all the help. I will be waiting for the new version
release. Many thanks for the package and contributions to the point
pattern  analysis too, they are a great asset for people working with point
pattern data.



Glad to hear that it's working for you.  You will note (if you look at 
"latest.news") that you got an acknowledgement.  (With that, and 2 bob, 
you can make a phone call. :-)  Said he, showing his age.)

Thank you for your kind words about spatstat.

cheers,

Rolf

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276


Re: [R-sig-Geo] YNT: Complete spatial randomness testing

2016-10-04 Thread Rolf Turner

On 03/10/16 20:35, Cenk İÇÖZ wrote:

According to your suggestion, I excluded duplicated points. However I got 
the same error message.

udp1<-unique.ppp(dp1)
 xxx<-allstats(udp1)
Error: in Fest(X, r) the successive r values must be finely spaced: given spacing 
= 0.010196; required spacing <=  0.00586

I attached the data for you to investigate. Thanks a lot.



There was indeed a bug in the Jest() function.  It has now been fixed.
The fixed version will be available in the next release of spatstat.
You can also get a fixed (updated) version of spatstat (version 
1.46-1.059) from github.


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276


Re: [R-sig-Geo] YNT: Complete spatial randomness testing

2016-10-03 Thread Rolf Turner

On 03/10/16 20:35, Cenk İÇÖZ wrote:

According to your suggestion, I excluded duplicated points. However I
got the same error message.

udp1<-unique.ppp(dp1)
xxx<-allstats(udp1)
Error: in Fest(X, r) the successive r values must be finely spaced:
given spacing = 0.010196; required spacing <= 0.00586

I attached the data for you to investigate. Thanks a lot.



Thank you for providing the data.  There does indeed seem to be a 
problem.  We are looking into it.


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276


Re: [R-sig-Geo] Complete spatial randomness testing

2016-10-01 Thread Rolf Turner

On 02/10/16 02:02, Cenk İÇÖZ via R-sig-Geo wrote:

Hi friends,

I have a point pattern that consist of earthquake locations.  I want
to test the pattern for complete spatial randomness based on distance
functions. I was using spatstat package allstats function to plot 4
of them together in a graph  however I took this error message:


"  Error in plot(allstats(dp1)) : error in evaluating the argument
'x' in selecting a method for function 'plot': Error: in Fest(X, r)
the successive r values must be finely spaced: given spacing =
0.010196; required spacing <=  0.00586   "


Is it something related to plotting window properties? Although I got
an error about F function , I could plot it individually. In addition
I could plot  all G, F, K  and L functions together in a graph
manually. Also I could not find a function to estimate J function in
spatstat package. It is only included in allstat function.

Thanks in advance .


(a) What version of spatstat are you using?  It may be out of date.  The 
version of spatstat currently on CRAN is 1.46-1.


(b) When you say "based on distance functions" I presume that you mean 
"based on various distributions of interpoint distances".


(c) I presume that "dp1" is the point pattern (object of class "ppp") of 
earthquake locations.


(d) You have not provided a reproducible example of your problem. 
Consequently I checked things out with a simulated example:


set.seed(42)
X <- rpoispp(100)
plot(allstats(X))

This ran with no problem, and produced the expected plot.

(e) The function to estimate the J function is Jest().  It has been in 
the spatstat package for a very long time.
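
E.g.:

```r
library(spatstat)
set.seed(42)
X <- rpoispp(100)
plot(Jest(X))   # J(r) close to 1 is consistent with CSR
```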


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276


Re: [R-sig-Geo] [FORGED] Geostatistics on geometric network

2016-09-21 Thread Rolf Turner

On 22/09/16 01:06, Roelof Coster wrote:

Dear list members,

I'm working with electric measurements that were taken on pipelines. These
are spatio-temporal data whose spatial domain is not Euclidean, because the
pipelines form a geometrical network. Has any work been done before to
study this kind of data?


The spatstat package has facilities for dealing with *spatial* point 
processes on geometric networks.  I am not aware of any work for dealing 
with spatio-temporal processes on such networks.


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] dclf.test output.

2016-07-27 Thread Rolf Turner


(1) It would then appear to be the case that points of types B and D 
*do* tend to cluster together despite your expectations.


(2) What is the appearance of an envelope plot just for the Kcross 
function between B and D?


(3) If these ideas don't clear up the problem, perhaps you could make 
the data set available to me, off-list, and I could have a go at 
exploring the pattern and see if I can understand what's going on.


(4) It is always possible that there is something that I haven't 
properly comprehended in respect of these issues.  In particular I now 
feel a little bit nervous about the fact that as it stands your test is 
based on simulations of patterns that are CSRI (completely spatially 
random with independence of types).  It might be the case that this is 
inappropriate.  I'll have to think about this a bit more.


(5) I am a bit puzzled by the fact that you get "the same results" when 
you use alternative="greater".  Generally a one-sided test should yield
a smaller p-value than a two-sided test when the data are "pointing in 
the direction of the alternative hypothesis".


E.g.:

set.seed(42)
E <- envelope(ants,fun=Kcross,i="Cataglyphis",j="Messor",
  savepatterns=TRUE)
dclf.test(E) # Gives a p-value of 0.7
dclf.test(E,alternative="greater") # gives a p-value of 0.23

cheers,

Rolf

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

On 28/07/16 10:59, Guy Bayegnak wrote:

Thanks for your response Rolf,
You summarized it correctly.  However, B and D do not necessarily avoid each other. They 
could and do in fact occur next to each other at times just by coincidence, simply 
because both categories tend to occur all over the place, while I think A and C are 
influenced by D. I included the alternative="greater" but I still get the same 
results.
A sample of my data is provided below( I have more than 800 points).

  Longitude Latitude Type
1 -113.1923 51.02913    C
2 -113.2013 52.83306    A
3 -113.6834 51.06585    A
4 -113.0295 50.97140    C
5 -113.2366 50.96440    A
6 -113.5849 51.37568    A
7 -113.6877 51.09027    D
8 -113.5371 51.82780    D

 I used the following code and got the results provided earlier:


dclf.test(Data.ppp, Kcross, i = "A", j = "D", alternative = "greater", correction = "border")
dclf.test(Data.ppp, Kcross, i = "B", j = "D", alternative = "greater", correction = "border")
dclf.test(Data.ppp, Kcross, i = "C", j = "D", alternative = "greater", correction = "border")




Thanks,
GAB

-Original Message-
From: Rolf Turner [mailto:r.tur...@auckland.ac.nz]
Sent: Wednesday, July 27, 2016 3:48 PM
To: Guy Bayegnak
Cc: r-sig-geo@r-project.org
Subject: Re: [R-sig-Geo] dclf.test output.


I gather that your problem is that you expect to reject the null hypothesis of "no 
clustering" for A vs. D and for C vs. D, but *not* to reject it for B vs. D.

I *think* that your problem might be the fact that you are using a two-sided test, which gives, roughly 
speaking, a test of "no association" rather than a test of "no clustering".  It could be 
the case that points of types B and D tend to *avoid* each other, so you get "significant" 
association between B and D, although the B points do the opposite of clustering around D points.

It's hard to tell for sure without a *reproducible example* (!!!).  We don't 
have access to Data.ppp.

Try using alternative="greater" in your call to dclf.test() and see if the 
results are more in keeping with your expectations.

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

On 28/07/16 05:48, Guy Bayegnak wrote:


Hi all, I have some marked spatial points and I am trying to assess
the relative association between different types of points using the
Diggle-Cressie-Loosmore-Ford test of CSR.
My observations are of 4 categories (A,B,C,D) and I am trying to
assess 3 categories (A,B,C,) against one (D), and I get the output
provided below. Knowing the sampling area, I know category "D" and
category "B" tend to occur all across the sampling area.
What I am trying to prove is that category "A" and "C" tend to be
clustered around "D". But u values I am getting are all positive, and
the p-value are all 0.01. However, the dclf.test between A-D and C-D
returns a u value at least 3 times as large than that of B-D.
My question is: how do I interpret these values. Does it still show
clustering of A and C relative to D? if yes how do I interpret the
output of dclf.test between B and D?
Thanks, GAB



Re: [R-sig-Geo] dclf.test output.

2016-07-27 Thread Rolf Turner


I gather that your problem is that you expect to reject the null 
hypothesis of "no clustering" for A vs. D and for C vs. D, but *not* to 
reject it for B vs. D.


I *think* that your problem might be the fact that you are using a 
two-sided test, which gives, roughly speaking, a test of "no 
association" rather than a test of "no clustering".  It could be the 
case that points of types B and D tend to *avoid* each other, so you get 
"significant" association between B and D, although the B points do the 
opposite of clustering around D points.


It's hard to tell for sure without a *reproducible example* (!!!).  We 
don't have access to Data.ppp.


Try using alternative="greater" in your call to dclf.test() and see if 
the results are more in keeping with your expectations.


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

On 28/07/16 05:48, Guy Bayegnak wrote:


Hi all, I have some marked spatial points and I am trying to assess
the relative association between different types of points using the
Diggle-Cressie-Loosmore-Ford test of CSR.
My observations are of 4 categories (A,B,C,D) and I am trying to
assess 3 categories (A,B,C,) against one (D), and I get the output
provided below. Knowing the sampling area, I know category "D" and
category "B" tend to occur all across the sampling area.
What I am trying to prove is that category "A" and "C" tend to be
clustered around "D". But the u values I am getting are all positive, and
the p-values are all 0.01. However, the dclf.test between A-D and C-D
returns a u value at least 3 times as large as that of B-D.
My question is: how do I interpret these values? Does it still show
clustering of A and C relative to D? If yes, how do I interpret the
output of dclf.test between B and D?
Thanks, GAB



Diggle-Cressie-Loosmore-Ford test of CSR
Monte Carlo test based on 99 simulations
Summary function: Kcross["A", "D"](r)
Reference function: theoretical
Alternative: two.sided
Interval of distance values: [0, 1.05769125]
Test statistic: Integral of squared absolute deviation
Deviation = observed minus theoretical

data:  Data.ppp
u = 54.931, rank = 1, p-value = 0.01

Diggle-Cressie-Loosmore-Ford test of CSR
Monte Carlo test based on 99 simulations
Summary function: Kcross["B", "D"](r)
Reference function: theoretical
Alternative: two.sided
Interval of distance values: [0, 1.05769125]
Test statistic: Integral of squared absolute deviation
Deviation = observed minus theoretical

data:  Data.ppp
u = 19.315, rank = 1, p-value = 0.01

Diggle-Cressie-Loosmore-Ford test of CSR
Monte Carlo test based on 99 simulations
Summary function: Kcross["C", "D"](r)
Reference function: theoretical
Alternative: two.sided
Interval of distance values: [0, 1.05769125]
Test statistic: Integral of squared absolute deviation
Deviation = observed minus theoretical

data:  Data.ppp
u = 46.829, rank = 1, p-value = 0.01


___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] [FORGED] Re: Problems with rbind(list(), makeUniqueIDs=T)

2016-07-22 Thread Rolf Turner

On 23/07/16 15:00, Bacou, Melanie wrote:

Thanks! I wonder why the original syntax passing a list used to work (I
believe).

For a much longer (and unknown) list of SpatialPolygonsDataFrames could
an approach using do.call() work instead? I tried but:

 > m <- do.call(rbind, m, makeUniqueIDs=T)
Error in do.call(rbind, m, makeUniqueIDs = T) :
   unused argument (makeUniqueIDs = T)




The syntax has to be do.call(function,list.of.args.to.function).

In this case:

do.call(rbind,c(m,list(makeUniqueIDs=TRUE)))
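The general pattern, illustrated with a base-R toy example (the list m here is hypothetical — two plain vectors rather than SpatialPolygonsDataFrames — but the mechanics are identical):

```r
# do.call(f, args) calls f with the elements of args as its arguments,
# so any extra named argument must be appended to the list of objects
# being combined, not passed to do.call() itself.
m   <- list(1:2, 3:4)
res <- do.call(rbind, c(m, list(deparse.level = 0)))
res   # same as rbind(1:2, 3:4, deparse.level = 0)
```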

BTW --- *don't* use the abbreviation "T" for "TRUE".  The object T is 
over-writeable (whereas TRUE is not!) and this can lead on occasion to 
mysterious errors accompanied by incomprehensible error messages.
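A quick base-R illustration of why:

```r
# T is merely a variable pre-bound to TRUE, so it can be silently rebound:
isTRUE(T)    # TRUE
T <- 0       # e.g. a stray "T" used as a temperature variable
isTRUE(T)    # FALSE -- any code relying on T now misbehaves
rm(T)        # deleting the local copy restores the base binding
isTRUE(T)    # TRUE again
# TRUE, by contrast, is a reserved word; `TRUE <- 0` is a parse error.
```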


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] Calculate each polygon percentage inside a circles

2016-05-25 Thread Rolf Turner


On 26/05/16 00:23, ASANTOS wrote:


Dear Rolf Turner,

   It's much better a clean code with a minimum packages, thank
you very much for your answer. But "pct" object give me a total polygon
percentage around each point and I need too an identification (in
columns) of individual contribution of each polygon. In my simulation, I
find 50.38001% for the point 1, but this is a total percentage of
polygons and I'd like to know a percentage contribution for each polygon
(e.g. ID1 = 0.0 + ID2 = 10.0 + ID3 = 0.0 + ID4 = 40.38001 =
50.38001 total), is this possible?



Of course it's possible!  This is R. :-)

You just need to look at the component polygons as separate window 
objects and do the same thing to them that I did to the overall "W".

There is a small gotcha that has to be coped with when the intersection
of the disc with a polygon is empty (as often occurs).  (See below.)

There are a number of different ways to write the code for this.

One way is as follows:

==
library(spatstat)

sr1 <- owin(poly=cbind(c(180114, 180553, 181127, 181477, 181294,
 181007, 180409, 180162, 180114),
   c(332349, 332057, 332342, 333250, 333558,
 333676, 332618, 332413, 332349)))

sr2 <- owin(poly=cbind(rev(c(180042, 180545, 180553, 180314, 179955,
 179142, 179437, 179524, 179979, 180042)),
   rev(c(332373, 332026, 331426, 330889, 330683,
 331133, 331623, 332152, 332357, 332373))))

sr3 <- owin(poly=cbind(rev(c(179110, 179907, 180433, 180712, 180752,
 180329, 179875, 179668, 179572, 179269,
 178879, 178600, 178544, 179046, 179110)),
   rev(c(331086, 330620, 330494, 330265, 330075,
 330233, 330336, 330004, 329783, 329665,
 329720, 329933, 330478, 331062, 331086))))

sr4 <- owin(poly=cbind(c(180304, 180403,179632,179420,180304),
   c(332791, 333204, 333635, 333058, 332791)))

wins <- solist(sr1,sr2,sr3,sr4)

W <- union.owin(wins)

set.seed(42)
X <- rpoispp(28/area.owin(W),win=W)
N <- npoints(X)
plot(X,cols="blue")
AD <- area.owin(disc(radius=600))

pct <- matrix(nrow=N,ncol=4)
rownames(pct) <- paste("point",1:N,sep=".")
colnames(pct) <- paste("sr",1:4,sep=".")
for(i in 1:npoints(X)) {
Di <- disc(radius=600,centre=c(X$x[i],X$y[i]))
for(j in 1:4) {
Aij <- intersect.owin(Di,wins[[j]],fatal=FALSE)
pct[i,j] <- if(is.null(Aij)) 0 else 100*area.owin(Aij)/AD
}
}
==

HTH

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] Calculate each polygon percentage inside a circles

2016-05-24 Thread Rolf Turner


Dear Alexandre,

Your tasks can all be done very simply in spatstat.  The code follows.
Note that I had to reverse the point order for sr2 and sr3 because
spatstat requires that the vertices of (exterior) boundaries of 
polygonal windows be given in anticlockwise order.


I commented out the plotting of the discs centred at each point of the 
simulated pattern since I found the plot to be unenlightening and messy 
looking.


The resulting percentages that you are after are in "pct".

If you want more precision for the disc areas, set npoly equal to a 
larger number than the default 128 (e.g. 1024 or 2048) in the calls to disc().


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

#==
library(spatstat)

bdry <- list(cbind(c(180114, 180553, 181127, 181477, 181294,
 181007, 180409, 180162, 180114),
   c(332349, 332057, 332342, 333250, 333558,
 333676, 332618, 332413, 332349)),

 cbind(rev(c(180042, 180545, 180553, 180314, 179955,
 179142, 179437, 179524, 179979, 180042)),
   rev(c(332373, 332026, 331426, 330889, 330683,
 331133, 331623, 332152, 332357, 332373))),

 cbind(rev(c(179110, 179907, 180433, 180712, 180752,
 180329, 179875, 179668, 179572, 179269,
 178879, 178600, 178544, 179046, 179110)),
   rev(c(331086, 330620, 330494, 330265, 330075,
 330233, 330336, 330004, 329783, 329665,
 329720, 329933, 330478, 331062, 331086))),

 cbind(c(180304, 180403,179632,179420,180304),
   c(332791, 333204, 333635, 333058, 332791)))

W <- owin(poly=bdry)

set.seed(42)
X  <- rpoispp(28/area.owin(W),win=W)
plot(X,cols="blue")

#for(i in 1:npoints(X)) plot(disc(radius=600,centre=c(X$x[i],X$y[i])),
# add=TRUE,col=rgb(1,0.5,0,alpha=0.4),border=NA)

# To be fair, calculate the area of the discs using area.owin()
# rather than as pi*600^2.

AD <- area.owin(disc(radius=600))

pct <- numeric(npoints(X))
for(i in 1:npoints(X)) {
Di <- disc(radius=600,centre=c(X$x[i],X$y[i]))
pct[i] <- 100*area.owin(intersect.owin(Di,W))/AD
}
#==

On 25/05/16 13:17, ASANTOS via R-sig-Geo wrote:

Dear members,

I will try to calculate each polygon percentage inside a circles
given an arbitrary radius in a shapefile object with the code below and
my output needs to be (Two first columns with center os circle
coordinates and values of each polygon percentage):

"pts$x"   "pts$y" "ID1" "ID2" "ID3" "ID4"
180486.1  330358.8  16.3   0.2  NA   17.3
179884.4  331420.8  88.3   NA   96.3 NA
179799.6  333062.5  25.3   22.3 0.5  NA
...

  For this I try to do:

#Packages
require(shapefiles)
require(rgdal)
require(maptools)
require(spatstat)
require(berryFunctions)


#Create 4 polygons (I create polygons because is more simple that given
a shapefile in a post)

sr1=Polygons(list(Polygon(cbind(c(180114, 180553, 181127, 181477,
181294, 181007, 180409,
   180162, 180114), c(332349, 332057, 332342, 333250, 333558, 333676,
   332618, 332413, 332349)))),'1')
sr2=Polygons(list(Polygon(cbind(c(180042, 180545, 180553, 180314,
179955, 179142, 179437,
   179524, 179979, 180042), c(332373, 332026, 331426, 330889, 330683,
   331133, 331623, 332152, 332357, 332373)))),'2')
sr3=Polygons(list(Polygon(cbind(c(179110, 179907, 180433, 180712,
180752, 180329, 179875,
   179668, 179572, 179269, 178879, 178600, 178544, 179046, 179110),
   c(331086, 330620, 330494, 330265, 330075, 330233, 330336, 330004,
   329783, 329665, 329720, 329933, 330478, 331062, 331086)))),'3')
sr4=Polygons(list(Polygon(cbind(c(180304, 180403,179632,179420,180304),
   c(332791, 333204, 333635, 333058, 332791)))),'4')

#Spatial Polygons
sr=SpatialPolygons(list(sr1,sr2,sr3,sr4))
srdf=SpatialPolygonsDataFrame(sr,
data.frame(row.names=c('1','2','3','4'), PIDS=1:4))

#Create shapefile
writeOGR(srdf, getwd(), 'POLY', 'ESRI Shapefile')

#Read shapefile
contorno_line_X <- readShapeSpatial("POLY.shp")
plot(contorno_line_X)

#Create ~28 random points in my window
contorno_line_spat <- as(contorno_line_X,"owin")
pts <- rpoispp(lambda=0.05, win=contorno_line_spat)
points(pts, col="blue")

#Set the radius for the plots
radius <- 600  # radius in meters

#Draw the circles using berryFunctions package
for(i in 1:length(pts$x)) circle(pts$x[i],pts$y[i],r=radius,
col=rgb(1,0.5,0,alpha=0.4), border=NA)
#

Now I'd like to calculate de po

Re: [R-sig-Geo] R 3.3.0 - spatstat 1.45-2 - envelope -- Error in nrank%%1 : non-numeric argument to binary operator

2016-05-17 Thread Rolf Turner

On 17/05/16 23:44, Domenico Giusti wrote:

Dear all,

I get "Error in nrank%%1 : non-numeric argument to binary operator" running


data("amacrine")
E <- envelope(amacrine, Kcross, "on", "off", nsim=19, global=FALSE)


Envelope works fine running


E <- alltypes(amacrine, Kcross, nsim=19, envelope=TRUE, global=FALSE)


Thanks,


Try:

E <- envelope(amacrine, Kcross, funargs=list(i="on",j="off"),
  nsim=19,global=FALSE)

cheers,

Rolf Turner

P. S.  Note that you *do not need* to set "global=FALSE"; this is the 
default.


R. T.


--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] how to calculate centroid (or centre of gravity) of a population (count data)

2016-04-20 Thread Rolf Turner


Maybe I'm being naive, or missing the point, or something.  But I would
presume that your data are something like:

x_1, x_2, ..., x_n # x-coordinates of the locations --- say, stored as x
y_1, y_2, ..., y_n # y-coordinates of the locations --- say, stored as y
k_1, k_2, ..., k_n # counts at the given locations --- say, stored as k

This is interpreted as meaning that (x_i,y_i) is the centroid of the 
i-th region, in which count k_i was obtained.


If so, can you not just calculate:

xc <- sum(k*x)/sum(k)
yc <- sum(k*y)/sum(k)

???

What am I not understanding?
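For what it's worth, a quick numeric check of the weighted-centroid formula above, with made-up coordinates and counts at three sites:

```r
x <- c(0, 10, 20)   # x-coordinates of three sites
y <- c(0,  0, 10)   # y-coordinates
k <- c(1,  1,  8)   # counts: the third site dominates

xc <- sum(k * x) / sum(k)   # 17 -- pulled toward the heavily-counted site
yc <- sum(k * y) / sum(k)   # 8
c(xc, yc)
```

The unweighted centroid of the three sites would be (10, 3.33); weighting by counts moves it toward the site with 8 of the 10 individuals.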

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

On 20/04/16 18:45, Diego Pavon wrote:

Dear all

I am working with count data and I want to assess whether the centre of
gravity of the population (centroid or mean latitude?) has changed over
time, indicating some redistribution or shift ongoing. To simplify, let's
say that I have ca. 2000 sites censused in two consecutive years (same
sites censused both years - all sites) and the abundance (count) of the
species registered.

I first thought about doing a kernelUD (package adehabitatHR) but
apparently this only takes into account the location of the sites to
calculate the kernel and then the centroids. Thus, since I have the exact
same sites in both years, the centroids for year 1 and year 2 are the same.
In my case, what I would like to do is to calculate that centroid but
taking into account the counts, because a site that had 3 individuals in
both years can't have the same weight as a site that hosted 3000
individuals when calculating the centroids.

So, what I would like to have is the centroid (or centre of gravity) of the
counts, not of the sites surveyed (which is what adehabitatHR does, as far
as I understood).

Do you have any suggestions which package other than adehabitatXX to use
for this purpose? Or if this can be done with adehabitat?

Thank you very much for your help.




Re: [R-sig-Geo] [FORGED] different models

2016-04-07 Thread Rolf Turner

On 07/04/16 19:52, Virginia Morera Pujol wrote:

Hi Rolf,

Thank you for your very complete response. If I understand it correctly
then, I should just include the Cartesian coordinates in my covariates
list if I want to model the intensity specifically in relation to them
as well as the covariates, correct?


Well, in a word, yes.  Dunno what more I can say without inducing 
obfuscation instead of clarification (of what is actually a simple issue.)


The best way to get an understanding of what is involved, IMHO, is to do 
some experimentation.  Fit some models to some data, plot the fitted 
surfaces (either as image plots or perspective plots) and see what the 
results look like.




cheers,

Rolf

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] different models

2016-04-06 Thread Rolf Turner

On 06/04/16 22:00, Virginia Morera Pujol wrote:

Hi all,

This might be a very dumb question that shows I have very little idea of
what I am talking about, but I'll risk it:

What is the difference between fitting a model using these 3 different
syntaxes?

1/ fit1 <- ppm(ppp, ~covariate),
2/ fit2 <- ppm(ppp, ~x+y+Z, covariates=list(Z=covariate))
3/ fit3 <- ppm(ppp, ~x+y+covariate)

where ppp is my point pattern and "covariate" is a pixel image? I realise
the outputs of 2 and 3 are the same and different to that of 1, so I guess
the question really is

a/ Is there any difference, practical or in the actual computations of the
model, between using 2 and 3?
b/ What is the difference between (2&3) and 1?


(1) There is essentially no difference between fits 2 & 3.  The fit 2 
syntax is provided so that the user can have the relevant covariates 
bundled up in a list without any need to extract these covariates from 
that list.   With the fit 2 syntax you don't need to have all covariates 
present in your workspace.


E.g.: fit <- ppm(bei ~ elev + grad, data=bei.extra)

(2) The fit 2 syntax is essentially the same as that used by lm() and 
glm() and was designed in imitation thereof.


(3) The preferred structure of a call to ppm() is

fit2 <- ppm(ppp ~ x + y + Z, data=list(Z=covariate))

Note:  "data" rather than "covariates"; no comma between the name of the 
response variable ("ppp") and the formula.


This makes the syntax identical to that of lm() and glm().

The syntax that you used is a remnant of earlier versions of spatstat 
and remains acceptable for reasons of backward compatibility.


(4) The difference between model 1 and models 2 and 3 is that models 2 
and 3 involve the Cartesian coordinates "x" and "y".  Model 1 is such 
that the model intensity takes the form


   exp(beta_0 + beta_1 * covariate)

In models 2 and 3 the model intensity takes the (more complex) form

   exp(beta_0 + beta_1 * x + beta_2 * y + beta_3 * covariate)
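As a concrete sketch of the two intensity forms, using the bei data shipped with spatstat (standing in for the poster's pattern and covariate):

```r
library(spatstat)

# grad is a pixel-image covariate supplied with the bei data
fit1 <- ppm(bei ~ grad,         data = bei.extra)  # log-intensity: b0 + b1*grad
fit3 <- ppm(bei ~ x + y + grad, data = bei.extra)  # adds a Cartesian trend

coef(fit1)   # (Intercept), grad
coef(fit3)   # (Intercept), x, y, grad
```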

Note that "x" and "y" are *reserved* names.  You cannot use these names 
for any covariates *other than* the Cartesian coordinates.


(5) The name "covariate" is probably *not* a good name for a covariate.
As fortune(77) puts it "Would you call your dog 'dog'?"

(6) Likewise (and even more so) "ppp" is *not* a good name for a point 
pattern, since it clashes with the name of the creator function ppp().


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] Re: [FORGED] kernel density: units of bandwidth

2016-02-22 Thread Rolf Turner

On 23/02/16 10:34, Tadaishi Yatabe-Rodriguez wrote:


Thanks, Rolf. This makes it clearer. I've another question though: the
ppp object I get the Kernel density from is not projected, then what
would be the unit of projection of this un-projected object? The
projection of the object from which it was created from (e.g.
SpatialPoints object)?

Thanks again,


You should probably keep this discussion on-list; others who subscribe 
to the list are much more knowledgeable than I and may have useful 
contributions to make (and may correct my errors).  I have taken the 
liberty of including the list in my reply.


I think that your question is not well-posed.  The density.ppp() 
function treats the coordinates of points as Euclidean coordinates (with 
the same units on both the x and y axes).  If your point pattern is 
"un-projected" then I presume that x and y are in longitude and latitude 
in which case the coordinates are not Euclidean and the x and y axis 
units are different, since longitude and latitude are constructed in 
different ways.


If the region in question is "small" (and not too near either pole, I 
guess) then this probably doesn't make much difference, and the units of 
sigma would, roughly speaking, be "degrees".  Otherwise it seems to me 
there is no meaningful answer to your question.


It would probably be best to project your data before forming the ppp 
object.


Ege has been doing some work on point patterns on the sphere which might 
possibly be of relevance here.  He may feel like chiming in on this issue.


cheers,

Rolf

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] kernel density: units of bandwidth

2016-02-22 Thread Rolf Turner

On 20/02/16 13:48, Tadaishi Yatabe-Rodriguez wrote:

Hello community,

I did a lovely kernel density from a point pattern, but now I'm wondering
what was the actual bandwidth I used. The value I used was 0.2770598, but
what does this mean? I suppose it's an area, but what is the unit? Is there
a way to transform this into a meaningful area unit?


As the help for density.ppp tells you, sigma is the standard deviation 
of the isotropic Gaussian "smoothing kernel" (i.e. density function with
mean c(0,0) and standard deviation of both "X" and "Y" equal to sigma, 
and cov(X,Y) = 0).  Consequently the units of sigma are the units in 
which X and Y are measured.


Thus if, for example, your point coordinates are in metres, then the 
units of sigma are metres.  (So sigma is measured not in area but in 
length or distance.)
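For instance (a sketch with simulated data; the coordinates and units are made up, and spatstat's rescale() is used to show that the bandwidth is a length in the same units as the coordinates):

```r
library(spatstat)

set.seed(1)
X  <- runifpoint(100, win = owin(c(0, 1000), c(0, 1000)))  # coords in metres
D1 <- density(X, sigma = 50)       # bandwidth: 50 metres

Xkm <- rescale(X, 1000, "km")      # same pattern, coordinates now in km
D2  <- density(Xkm, sigma = 0.05)  # the same bandwidth, now 0.05 km
```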


HTH

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] removing internal boundaries from a spatstat window that was created from three adjacent counties

2016-02-15 Thread Rolf Turner


On 13/02/16 11:59, Christopher W. Ryan wrote:


In spatstat, I want to create a single window from three adjacent
counties in New York State, USA.  My original data is a shapefile,
"cty036" showing all the New York State county boundaries.  Here's what
I've done so far:

== begin code =
library(shapefiles)
library(spatstat)
library(maptools)
library(sp)
library(RColorBrewer)
library(rgdal)

nysArea <- readOGR("C:/DATA/GeographicData", "cty036")

nysAreaUTM <- spTransform(nysArea, CRS("+proj=utm +zone=18 +ellps=GRS80
+units=m +no_defs"))

sremsAreaUTM <- subset(nysAreaUTM, NAME=="Broome" | NAME=="Tioga" |
NAME=="Chenango")

sremsWindow <- as(sremsAreaUTM, "owin")
 end code =


This works pretty well, producing a 3-county owin object. But the
dividing lines between the counties are shown in the window, whenever I
plot it or plot subsequent point patterns that use the window.  In
essence, in plots it looks like 3 polygons instead of one big one. I'd
prefer not to have the inter-county boundaries be visible--I'd rather
have just one big polygon for the whole area. How can I remove them?  Or
should I create the window differently from the get-go?

Thanks.


I think it probable that your owin object has come out as a multiple 
polygon.  Look at length(sremsWindow$bdry) --- this is probably 3 (or more).


I would guess that the internal boundaries actually consist of *pairs*
of closely juxtaposed lines --- which look like single lines when plotted.

Have you read the spatstat vignette "shapefiles"?  That might give you 
some guidance as to how to proceed.


I would have thought the automatic repair process that spatstat now uses 
would fix up this problem.  What version of spatstat are you using?


You *might* be able to fix things up by doing (something like):

W1 <- owin(poly=sremsWindow$bdry[[1]])
W2 <- owin(poly=sremsWindow$bdry[[2]])
W3 <- owin(poly=sremsWindow$bdry[[3]])

W  <- union.owin(W1,W2,W3)

But if my guess about the internal boundaries actually being pairs of 
lines is correct, this may not work.


It's hard to say without having your sremsWindow object to experiment on.

Or perhaps you need to extract the three counties separately as owin
objects and then apply union.owin() to the three results.  (Again, if my 
guess is correct, this may not work.)


Adrian or Ege may be able to propose a more efficient/effective solution.

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] spsample: random points within a radius r from observed locations

2015-10-27 Thread Rolf Turner

On 27/10/15 21:37, Gabriele Cozzi wrote:

Dear list,

I have a set of relocation data (call it pnts).

What I want to do is to create, for each relocation of pnts, n alternative
relocations within a radius r.

My intuitive approach was to use of the spsample function in the sp
package.

buffer <- gBuffer(spgeom = pnts, width=r,  byid=T)
randompoints <- spsample(x=buffer, n=10, type="random", iter=10)

The problem here is that spsample creates 10 random points over all
Polygons in the buffer object and not for each Polygon within buffer.

Is there a function that returns random locations by passing a
SpatialPoints-class object to it and a radius r?

Any help is appreciated.


I don't know what is meant by "relocation" data, but I am pretty sure 
that you could easily accomplish your goal using the spatstat package. 
You would have to adjust your terminology and thought patterns somewhat, 
however.


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] [FORGED] Re: density from spatstat less than zero

2015-10-15 Thread Rolf Turner

On 16/10/15 11:47, Christopher W. Ryan wrote:

Thank you Rolf. The sub-zero density values are indeed extremely tiny in
absolute value. I will try to upgrade my R and spatstat (institutional
computer . . . ) and try again.


Just realised:  You can (of course!) work around the problem, without 
upgrading, via:


ddd <- eval.im(pmax(ddd,0))

Note that you need --- for reasons that are a wee bit too complicated to 
go into --- to call eval.im() explicitly in the foregoing.  I.e. you 
*cannot* just do "ddd <- pmax(ddd,0)".


That being said, it would still be a good idea to upgrade --- if you can 
persuade the IT weevils at your institution to cooperate.


cheers,

Rolf

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] density from spatstat less than zero

2015-10-15 Thread Rolf Turner


See inline below.

On 15/10/15 06:35, Christopher W. Ryan wrote:

Hello. I'm a longtime user of R and subscriber to R-help, but new to
R-sig-geo.  I'm running R 3.1.3 and spatstat 1.41-1 on Windows XP.


If you upgrade to the most recent version of spatstat (in order to do 
which you will need to upgrade to R 3.2.2) you will find that 
density.ppp() now has an argument "positive" which defaults to FALSE.

Setting this argument equal to TRUE should solve your problem.


I have a point pattern consisting of about 26,000 points, in an area as
follows:

Window: polygonal boundary
single connected closed polygon with 133 vertices
enclosing rectangle: [370698.1, 476022.1] x [4649777, 4732634] units
Window area = 652571 square units

It is clearly inhomogeneous, with most points clustered in the region's
central city, and very few of them in the vast rural areas.

The density() command from spatstat with all default values gives a
density that is overly-smoothed for my purposes. Using density()
command, with all default values except sigma=bw.diggle or sigma=bw.ppl,
yields densities that are, to the eye, a better match for the point
pattern, but some of the density values are less than zero, although
this is not apparent when I plot() the density.


The negative values that you get are very small in magnitude; they are 
really just numerical noise (unless something *really* funny --- and I 
cannot imagine what --- is going on).


Look at

ddd <- density(X, sigma=bw.diggle)
min(ddd)

to see just how tiny the negative values are.


The same happens if I
use the default sigma and set adjust to much of anything less than 1.

Any ideas why, and how to avoid the negative values?


See the start of this message.


My apologies for not including a minimal working example. Including the
entire point pattern would be cumbersome. Whittling it down to 100 or so
points still yields the same problem, but I think that changes the game
and would not be a true reflection of my situation.


A whittled down example would have been perfectly "valid", but in this 
case an example was not actually needed.


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R-sig-Geo] library fails to load in linux

2015-06-25 Thread Rolf Turner


On 25/06/15 18:49, Harold-Jeffrey Ship wrote:


Hi Rolf. Thank you for your quick response!

I had tried using install.packages, and just now tried the method you
suggested (R CMD INSTALL spatstat_1.42-1.tar.gz) with the same results.

By the way, we have the same problem when we try on RHEL 6.4 on a
different machine with R 3.2.0. I should mention that both of these are
VM's.


But was this on the same *physical* machine?  There *might* be some 
hardware (???) induced problem.  I'm clutching at straws here.



I can send you the entire stack trace if it will help.


Hmm.  Dunno if it will *help* --- I'm puzzled at the moment --- but I 
guess it can't hurt, so yes, please do send the stack trace.


Is there anyone out there on the R-sig-Geo list running Ubuntu 12.4 
"Precise"?  If so could you try downloading the spatstat tarball and 
installing from source, and let me know what happens?


Has anyone ever seen an error message about "stack smashing" (?!?!?!) 
before?  Anyone have any idea what this *means*?


cheers,

Rolf Turner


Rolf Turner  wrote on 06/25/2015 01:23:18 AM:

 > From: Rolf Turner 
 > To: Harold-Jeffrey Ship/Haifa/IBM@IBMIL
 > Cc: r-sig-geo@r-project.org, Adrian Baddeley

 > Date: 06/25/2015 01:23 AM
 > Subject: Re: [R-sig-Geo] library fails to load in linux
 >
 >
 > How did you effect the installation command?  Did you use
 >
 >  install.packages(.)
 >
 > from the R command line?  Or did you use a precompiled Ubuntu binary? Or
 > ?
 >
 > When I do
 >
 >  install.packages("spatstat", )
 >
 > from the R command line, under my antiquated Fedora system, it installs
 > with no problems.
 >
 > Have you tried downloading the tarball and installing from source, i.e.
 >
 >  R CMD INSTALL spatstat_1.42-1.tar.gz
 >
 > ???
 >
 > We need more detail.  If there really is a problem, we need to get to
 > the bottom of it.
 >
 > You might possibly get more help on the R-sig-Debian list; please keep
 > us informed of developments.
 >
 > cheers,
 >
 > Rolf Turner
 >
 > On 25/06/15 01:39, Harold-Jeffrey Ship wrote:
 > > I have Ubuntu 12.04 Precise. I just installed R and want to install
 > > spatstat. This is the R information:
 > >
 > > R version 3.2.1 (2015-06-18) -- "World-Famous Astronaut"
 > > Copyright (C) 2015 The R Foundation for Statistical Computing
 > > Platform: x86_64-pc-linux-gnu (64-bit)
 > >
 > > When I try to install spatstat things look fine at first until it
 > > tries to verify the installation by loading the library. It crashes,
 > > with the below error message (snipped).
 > > Any help would be greatly appreciated!
 > >
 > > installing to /home/harold/R/x86_64-pc-linux-gnu-library/3.2/spatstat/libs
 > > ** R
 > > ** data
 > > *** moving datasets to lazyload DB
 > > ** demo
 > > ** inst
 > > ** byte-compile and prepare package for lazy loading
 > > ** help
 > > *** installing help indices
 > > ** building package indices
 > > ** installing vignettes
 > > ** testing if installed package can be loaded
 > > *** stack smashing detected ***: /usr/lib/R/bin/exec/R terminated
 > > === Backtrace: =
 > > /lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x37)[0x7f3895976e57]
 > > /lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x0)[0x7f3895976e20]
 > > /usr/lib/R/lib/libR.so(+0xc9825)[0x7f3895f11825]
 > >[[alternative HTML version deleted]]




Re: [R-sig-Geo] library fails to load in linux

2015-06-24 Thread Rolf Turner


How did you effect the installation command?  Did you use

install.packages(.)

from the R command line?  Or did you use a precompiled Ubuntu binary? Or 
?


When I do

install.packages("spatstat", )

from the R command line, under my antiquated Fedora system, it installs 
with no problems.


Have you tried downloading the tarball and installing from source, i.e.

R CMD INSTALL spatstat_1.42-1.tar.gz

???

We need more detail.  If there really is a problem, we need to get to 
the bottom of it.


You might possibly get more help on the R-sig-Debian list; please keep 
us informed of developments.


cheers,

Rolf Turner

On 25/06/15 01:39, Harold-Jeffrey Ship wrote:

I have Ubuntu 12.04 Precise. I just installed R and want to install
spatstat. This is the R information:

R version 3.2.1 (2015-06-18) -- "World-Famous Astronaut"
Copyright (C) 2015 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)

When I try to install spatstat things look fine at first until it tries to
verify the installation by loading the library. It crashes, with the below
error message (snipped).
Any help would be greatly appreciated!

installing to /home/harold/R/x86_64-pc-linux-gnu-library/3.2/spatstat/libs
** R
** data
*** moving datasets to lazyload DB
** demo
** inst
** byte-compile and prepare package for lazy loading
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded
*** stack smashing detected ***: /usr/lib/R/bin/exec/R terminated
=== Backtrace: =
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x37)[0x7f3895976e57]
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x0)[0x7f3895976e20]
/usr/lib/R/lib/libR.so(+0xc9825)[0x7f3895f11825]
[[alternative HTML version deleted]]


___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] model fitting of randomly generated data in spatstat

2015-04-02 Thread Rolf Turner

On 03/04/15 01:12, robleaf wrote:

Rolf,

Really appreciate your responding to my query. We were interested in
simulating highly clumped distributions and the values that I used

rMatClust(kappa = 2, r = 2, mu = 2000))

Seemed to do an adequate job in creating them. I am appreciative of your
comment to the effect that the parameter values cannot be too strange. Looks
like I will need to spend more time with this. Really appreciate your
input. Regards, Robert



But those parameter values *won't* give you a "highly clumped" 
distribution.  You will generate on average *2* clumps, and since the 
radius of each "clump" is 2 and you are simulating in the unit square, 
each clump will pretty much cover the window.  So you are (partially) 
superimposing (on average) two *very* dense clouds of points on the unit 
square.  This will not look ***at all*** "clumped".


Have you *plotted* the results of any of your simulations?
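In case it helps, here is a short sketch that puts the two parameter regimes side by side (this assumes a reasonably current spatstat, in which the cluster-radius argument of rMatClust() is named scale rather than r):

```r
library(spatstat)

set.seed(1)
# The original poster's parameters: on average only 2 parents, cluster
# radius 2 in the unit square, 2000 offspring per parent.
X.bizarre <- rMatClust(kappa = 2, scale = 2, mu = 2000)

# Parameters that really do produce visible clumping: many parents,
# tight clusters, modest cluster sizes.
X.clumped <- rMatClust(kappa = 20, scale = 0.04, mu = 5)

# Plot side by side: the first looks like a very dense Poisson pattern,
# the second shows clear clusters.
par(mfrow = c(1, 2))
plot(X.bizarre, main = "kappa=2, scale=2, mu=2000")
plot(X.clumped, main = "kappa=20, scale=0.04, mu=5")
```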

cheers,

Rolf

--
Rolf Turner
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276
Home phone: +64-9-480-4619



Re: [R-sig-Geo] model fitting of randomly generated data in spatstat

2015-04-01 Thread Rolf Turner

On 02/04/15 03:09, Robert Leaf wrote:


I was generating some data for analysis and was curious to see if we could
fit a “MatClust” model using the function *spatstat*::kppm to some of our
observed data. As a first cut, and to see if we get values that conform to
our expectations, I fit models to simulated data and was curious about the
results. I am hoping that the group can help me understand the departures
from expectations.

Is it reasonable that the kppm function should return parameters values
that are similar to the those that generated the data?


Sure, given that those used to generate the data are not too bizarre.



We are not getting values that are anywhere close to what we would expect.


That appears to be because you are using *bizarre* parameter values to 
generate your data.  The algorithms used by kppm() can be expected to 
return far-out results unless the data to which kppm() is applied have 
at least *some* reasonable prospect of conforming to the model that is 
being fitted.



library(*spatstat*)


What are those asterisks doing in that call???  That cannot have been 
the call that you actually used.




(point.vals <- rMatClust(kappa = 2, r = 2, mu = 2000)) # generate random
points

if (point.vals$n > 0) { # some realizations of the model return .ppp
variables of with no data


I was initially bewildered by this --- the expected number of points is 
4000, so how, I asked, could you possibly get zero points?  Finally I saw
the light: with kappa = 2 you will get zero parent points, and hence an 
empty pattern, about 13.5% of the time.  I.e. kappa = 2 is just plain 
silly-small.
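The 13.5% figure can be checked directly in base R: with kappa = 2 in the unit square the number of parent points is Poisson with mean 2, so the probability of getting no parents at all (and hence an empty pattern) is:

```r
# P(no parent points) for a Poisson(2) parent count:
p.empty <- dpois(0, lambda = 2)
p.empty                       # 0.1353353 -- about 13.5%
all.equal(p.empty, exp(-2))   # TRUE; dpois(0, 2) is just exp(-2)
```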


Using "r = 2" (these days the syntax is ***scale = 2***) means that you 
are forming clusters in discs of radius 2 in the unit square!!!

(You are using the default window.) This makes no sense to me.

Setting mu = 2000 means you are generating an average of 2000 points in
each such disk.  I really don't think this is a realistic value for a 
Matérn cluster process.


Your simulated pattern (if it is not empty) will have the appearance of 
having arisen from a very high intensity Poisson process.  Fitting a 
Matérn cluster process to such a pattern results in ill-determined 
parameter values.


Try:

set.seed(42)
X <- rMatClust(kappa=20,scale=0.04,mu=5)
fit <- kppm(X ~ 1,"MatClust")
fit



Fitted cluster parameters:
  kappa   scale
22.37058543  0.04168089
Mean cluster size:  4.514857 points

The estimated parameters are reasonably commensurate with those used
to generate the pattern.



cheers,

Rolf Turner

P.S. If your chosen parameter values (kappa = 2, mu = 2000) were 
selected in imitation of parameter estimates obtained from fitting a 
Matérn cluster model to real data, then I would suggest that you should 
probably *not* fit such a model to those data.


In modelling it is important to try fitting *appropriate* models to data 
sets.  Otherwise the results you get may well be meaningless.


R. T.




Re: [R-sig-Geo] Simulation of inhomogeneous point process

2015-03-25 Thread Rolf Turner
... a p-value of 0.05472 --- so "not quite significant" by this test.


Finally we might do

E <- envelope(X,savefuns=TRUE)
dclf.test(E)

which gave me p-value of 0.15.  This test is not really appropriate for 
inhomogeneity.  It is essentially testing for the presence of 
interaction *assuming* homogeneity.  However the fact that the null
hypothesis is not rejected says that the data are *consistent* with the 
assumption of a homogeneous Poisson process.


Thus the evidence against a homogeneous Poisson process is meagre at best.

(5) I am not really sure what you are asking in your question at the end 
of the post.  I *think* that you just want to simulate inhomogeneous 
Poisson patterns that "imitate" the behaviour of your observed pattern. 
In order for that to make much sense you really should have some model 
in mind for the intensity of your observed pattern.

One way to proceed without such a model would be to apply smoothing 
methods to get a non-parametric estimate of the intensity of your 
observed pattern:


est.int <- density(X)
Y <- rpoispp(est.int)

This produces a simulated realisation of an inhomogeneous Poisson 
process with intensity equal to the non-parametric estimate of the 
intensity of X.


Another way would be to use the model fitted in terms of the Cartesian
coordinate "y" --- given that there was a little evidence that this
coordinate has an influence on the intensity of X.

Y <- rmh(fit1)

Whether either of these two approaches makes any sense at all depends on 
what you are trying to accomplish --- which is unclear.


cheers,

Rolf Turner

P. S.  Don't send posts like this to r-sig-geo-requ...@r-project.org; 
that address is for matters pertaining to the administration of the 
r-sig-geo list, e.g. subscribing or unsubscribing.


R. T.




Re: [R-sig-Geo] Problem using trend covariates in predict.ppm (spatstat)

2015-03-25 Thread Rolf Turner

On 26/03/15 05:32, Rawlins, Barry G. wrote:

Dear list




I have been successfully forming spatial point pattern models using
the function ppm and a series of covariates stored as im objects:


Example here in which I have a spatial point pattern object
"CI_pipe_40_spp" and a covariate "cov_CI_len":


mod2 <- ppm(CI_pipe40_spp, trend=~cov_CI_len,
    covariates=list(cov_CI_len)) # this works fine, giving me a ppm model

summary(mod2)

Point process model
Fitting method: maximum likelihood (Berman-Turner approximation)
Model was fitted using glm()
Algorithm converged
Call: ppm.ppp(Q = CI_pipe40_spp, trend = ~cov_CI_len,
    covariates = list(cov_CI_len))

Edge correction: "border" [border correction distance r = 0]
Edge correction: "border" [border correction distance r = 0 ]

I then want to use the predict function.  I next write, in the same
workspace:

preds=predict.ppm(mod2,type="trend", window=mask40,ngrid=c(402,402),
covariates=list(cov_CI_len))

But I get the following error:

Error in mpl.get.covariates(covariates, list(x = xpredict, y = ypredict), :
  Each entry in the list 'covariates' should be an image, a function, a
  window, a tessellation or a single number


But if I check the class of "cov_CI_len":

class(cov_CI_len)
[1] "im"

Which shows that this object is an image. Can someone suggest what is
wrong here?

The help says "If covariates is a list of images, then the names of
the entries should correspond to the names of covariates in the model
formula trend."

I think I have the code correct - can anyone help with this?


I think that the problem is that you do not *name* the covariate in the 
list that you supply to predict.  The error message is less informative 
than it might be, I guess.


I believe that

preds=predict.ppm(mod2,type="trend", window=mask40,ngrid=c(402,402),
  covariates=list(cov_CI_len=cov_CI_len))

should work.

BTW, what version of spatstat are you using?  The syntax of your calls 
is unnecessarily cumbersome, and many aspects of it could be simplified 
considerably, particularly with recent versions of spatstat:


mmm <- ppm(CI_pipe_40_spp ~ cov_CI_len)
prd <- predict(mmm,window=mask40,ngrid=402)

I.e. there is no longer any need to specify "covariates" (the newer 
usage is "data" rather than "covariates", although the older usage still 
works) if the covariate in question is in your global environment/workspace.


Also there is no need to type "predict.ppm"; the *generic* predict() 
automatically dispatches to the *method* predict.ppm().


Also there is no need to specify 'type="trend"' for a Poisson model.

Also ngrid=c(402,402) is equivalent to ngrid=402.

HTH

cheers,

Rolf Turner




Re: [R-sig-Geo] categorical values in im-objects (spatstat/ppm)

2015-03-09 Thread Rolf Turner



I cannot make head nor tail of your question.

I don't understand your use of eval.im() --- that function is
intended to do calculations on objects that are (already) of class "im".

E.g. if X is an image with real positive pixel values then

   Y <- eval.im(sqrt(X))

is an image whose pixel values are the square roots of the corresponding 
pixel values from X.


The help for im() in spatstat explains pretty clearly how to create a 
factor-valued image.
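For concreteness, here is a sketch of both points: eval.im() acting on an existing image, and the factor-image recipe from help("im") (create a factor vector, give it a dim attribute, then wrap it with im()):

```r
library(spatstat)

# eval.im() transforms an image that already exists:
X <- as.im(function(x, y) x^2 + y^2, W = square(1))
Y <- eval.im(sqrt(X))     # pixelwise square roots of X

# A factor-valued image.  R refuses to build a factor matrix directly,
# so build a factor vector, assign it a dim attribute, then call im():
f <- factor(rep(c("low", "mid", "high"), length.out = 12))
dim(f) <- c(3, 4)
Z <- im(f)
Z$type                    # "factor"
```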


I tried to have a look at the data to which you gave a link at the end
of your email.  The "owin.csv" file has 44 lines, all but 6 of which are
missing.  If the missing lines are eliminated the resulting window looks 
like a triangle, despite having 6 vertices.


Of the 224 points in "events.csv", 208 fall outside the window referred 
to above.


We can't really advise you about "zones" unless you provide "zones" 
which you seem not to have done.


I suggest that you back off, get a coherent story together, provide a 
complete and consistent set of data, and then ask again.


cheers,

Rolf Turner

On 10/03/15 11:02, Silvia Cordero-Sancho wrote:

Hello,

I would like to fit a point process model (ppm) with several covariates. One
of them is a grid with 15 categorical variables (zones).

To recognize the values in my grid as categorical, I followed the code in
the following link:

http://stackoverflow.com/questions/26162955/r-package-spatstat-how-to-use-point-process-model-covariate-as-factor-when-pixe?answertab=active#tab-top




zone1 <- eval.im(as.factor(zone))

levels(zone1) <- c("A1","A2","A3","A4","B1","B2","B3","B4",
                   "C1","C2","C3","C4","C5","C6","D")

unitname(zone1) <- c("meter","meters")

But I am not sure if it really worked. If I run the function
is.factor(zone1), the result is FALSE, but if I run the function
selecting any column or row (e.g. is.factor(zone1[1,]) or
is.factor(zone1[,200])) the results show as TRUE.

However, the function summary(zone1) indicates that it is a factor-valued
pixel image:

factor-valued pixel image
2641 x 680 pixel array (ny, nx)
enclosing rectangle: [992380, 1012780] x [732491, 811721] meters
dimensions of each pixel: 30 x 30 meters
Image is defined on a subset of the rectangular grid
Subset area = 1577529000 square meters
* Pixel values (inside window):

    A1     A2     A3     A4     B1     B2     B3     B4     C1     C2     C3
116928   5670  16614   6823  27917   7547    197   9354 132658 405515   1016
    C4     C5     C6      D
136784 576913 113978 194896

The distribution of the number of cells per zone is the same as in the
original file.

However, when I used the layer within the ppm function, not all the
categories are included in the analysis:

m1 <- ppm(ag4u, ~Z, covariates=list(Z=zone1), AreaInter(200))

coef(summary(m1))

Estimate
(Intercept) -16.4787854
ZA3   2.6334407
ZA4   1.4900159
ZB1   0.6177496
ZB2   0.3502884
ZB4   1.4179890
ZC1  -2.0643563
ZC2  -0.6806136
ZC4  -0.1897898
ZC5  -2.8285278
ZC6   1.5300109
ZD2.1210203
Interaction   2.4118998

The zones identified as A1, A2, B3, C3 are excluded from the analysis

Similarly, I get the same results when I used the following expression:

m2 <- ppm(ag4u, ~factor(Z), covariates=list(Z=zone), AreaInter(200))

And the following error when I tried to plot the residuals...

qqplot.ppm(m1, nsim=100, verbose=F)

Error in model.frame.default(Terms, newdata, na.action = na.action, xlev =
object$xlevels) :
   factor Z has new levels A2, C3

So, I think that the problem could be associated with the functions I am
employing to assign the factor values. If this is the problem, is there an
alternative way to define categorical values for im objects? Or could there
be some other reason for the exclusion of categories?

I will appreciate any advise.






Re: [R-sig-Geo] spatstat predict rhohat question

2015-02-23 Thread Rolf Turner



A follow-up to my previous post:  Adrian tells me that a revision of 
spatstat (version 1.41-0), which contains the fix to the 
predict.rhohat() problem, will *probably* be submitted to CRAN today.


cheers,

Rolf Turner




Re: [R-sig-Geo] spatstat predict rhohat question

2015-02-22 Thread Rolf Turner



There was indeed a bug in predict.rhohat().  It has now been fixed 
and the correct version of the function will be included in the next 
release of *spatstat* --- said release should happen "real soon now". :-)


In the meantime, anyone who needs a corrected version urgently should 
contact me off-list.


cheers,

Rolf Turner




Re: [R-sig-Geo] spatstat predict rhohat question

2015-02-21 Thread Rolf Turner



Thank you for the *excellently* reproducible example.  We are looking 
into it and will get back to you.


cheers,

Rolf Turner



On 22/02/15 17:06, js_wvu15 wrote:

Hello,

I'm using spatstat 1.40-0 in R version 3.1.2 and encountered this warning
for predict.rhohat when the rhohat object is a fitted point process model
and the window is irregular:

Warning message:
In Y * lambda :
   longer object length is not a multiple of shorter object length

I managed to duplicate my warning:

library(spatstat)
X <- rpoispp(function(x,y){exp(3+3*x)})
win1 <- owin(poly=list(list(x=c(0,1,1,0), y=c(0,0,1,1)),
list(x=c(0.6,0.4,0.4,0.6), y=c(0.2,0.2,0.4,0.4
X1 <- X[win1]
fit1 <- ppm(X1, ~x)
rho1 <- rhohat(fit1, "y")
plot(predict(rho1)) ###get the warning

I researched the warning but couldn't figure it out. Any help is greatly
appreciated!




Re: [R-sig-Geo] spatstat error: owin & "im" object

2015-02-12 Thread Rolf Turner


See at end.


On 13/02/15 11:09, Silvia Cordero-Sancho wrote:


Hello SIG-GEO community,

I followed the example for the Extract.im function to "subset" an object of
class "im" to the extent of an owin.

# make up an image
  X <- setcov(unit.square())
  plot(X)

  # a rectangular subset
  W <- owin(c(0,0.5), c(0.2,0.8))
  Y <- X[W]
  plot(Y)


However, when I applied this to my data, it did not work properly: the
output object is not an 'im'.

Here is the code:

# 1. Import tiff format file. It is a raster file, spatial resolution 30 x
30 m,  ny=2667, nx=700. Pixel value ranges: 1 to 11, each one represent a
category.

tn<-as(readGDAL("/rasters/tnn.tif"),"im")

class(tn)

[1] "im"

# 2. Subset to owin extent:


class(W)

[1] "owin"

# note: W was created from an irregular polygonal shapefile, and it is the
same file used to define the owin for the point pattern

tn2<-tn[W]
class(tn2)
[1] "integer"


The same problem occur with all the tiff-format imported grids, (e.g. DEM)


Has anyone experienced this problem before? Does anyone have a suggestion?
I will appreciate your input.


Nothing whatever to do with "tiff" images.  The crux of the problem is 
that in your toy example, W was a rectangle; in your "real" example, W 
was a polygon.


If X is an image and W is a window, then

   X[W]

is an image *only* when W is a rectangle.  If W is not a rectangle, then
you need to do

   X[W,drop=FALSE]

to get an image as the result.  (Otherwise you get a *vector* of the 
pixel values for pixels that fall inside W.)


This is all spelled out in the help file for "[.im" but the help is 
lengthy, the possibilities are manifold and the issue is complex, so you 
would have to read the help file *very* carefully to figure this out.
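A small self-contained illustration of the rectangle/polygon distinction, using the same toy image as in the quoted example:

```r
library(spatstat)

X <- setcov(unit.square())    # a real-valued image

# Rectangular window: subsetting yields another image.
Wrect <- owin(c(0, 0.5), c(0.2, 0.8))
class(X[Wrect])               # "im"

# Polygonal window: plain subsetting yields the vector of pixel
# values inside the window ...
Wpoly <- owin(poly = list(x = c(0.1, 0.5, 0.3), y = c(0.1, 0.1, 0.5)))
class(X[Wpoly])               # "numeric"

# ... unless you ask for an image explicitly:
class(X[Wpoly, drop = FALSE]) # "im"
```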


I hope that the situation is clear now.

cheers,

Rolf Turner





Re: [R-sig-Geo] Anova and confidence intervals for kppm models in spatstat

2015-01-13 Thread Rolf Turner

On 13/01/15 08:41, ASANTOS wrote:

Dear Members,

  I have two questions. First, is there an approach like
anova.ppm for comparing clustered point process models (kppm) in the
spatstat package (example below)? And second, the same question for
estimating confidence intervals for kppm model parameters, as the
confint() function does?

data(bei)
## Model with elevation
fit1 <- kppm(bei, ~elev, "Thomas", covariates = bei.extra)
fit1
##Null model
fit.null <- kppm(bei, ~1, "Thomas", covariates = bei.extra)
fit.null
#Comparing with ANOVA
anova.ppm(fit1,fit.null, test="Chi")
#but don't have anova.kppm?
##Confidence intervals
confint(fit1)


The short answer is "no".  A somewhat longer answer:  If the method of 
minimum contrast is used, there is no analogue of the log likelihood 
ratio available and consequently no form of "anova" based on the 
likelihood ratio test could possibly be done.  If either of the other 
two methods ("clik2" or "palm") were used then at least conceivably some 
form of anova *could* be done, but this possibility is not yet implemented.


Further theoretical development is needed. Adjustment of the analogue of 
the likelihood ratio is required in order for this statistic to have the 
required chi-squared null distribution, and the theory for this 
adjustment needs to be worked out.  This is on our "to-do list" but 
probably will not happen soon.


The necessary theory underlying the use of anova.ppm() for Gibbs models 
has only recently become available.  See the references given in the 
help for anova.ppm.  Previously anova could only be applied to *Poisson* 
models fitted to point patterns.


Until further development is carried out you will have to get by on 
Monte Carlo methods.


cheers,

Rolf Turner




Re: [R-sig-Geo] Dispatch point on a map

2014-11-10 Thread Rolf Turner

On 11/11/14 00:14, MBASSO Ilyass wrote:

Dear,

I have a question regarding using R to distribute a group of points on a
map so as to cover the whole map.
Each point has a buffer around it (drive time), representing the zone
covered by that point.
I would like to know if R can provide a solution to this problem and, if
yes, whether someone can help to answer this question.


It sounds like what you want is a hard core point process.  Check out 
the function rSSI() (random simple sequential inhibition) from the 
spatstat package; also look at the function rmh() from that package (use 
cif="hardcore" in the model specification).
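A minimal sketch of both suggestions (the parameter values here --- an inhibition distance of 0.05, beta = 200, a cap of 100 points --- are illustrative, not prescriptions):

```r
library(spatstat)

set.seed(1)
# Simple sequential inhibition: accept up to 100 points, each at least
# 0.05 units away from every previously accepted point.
X <- rSSI(r = 0.05, n = 100)

# The same "no two points closer than hc" constraint, via
# Metropolis-Hastings simulation of a hard core process:
mod <- rmhmodel(cif = "hardcore",
                par = list(beta = 200, hc = 0.05),
                w = square(1))
Y <- rmh(mod, verbose = FALSE)

# In both patterns every nearest-neighbour distance is at least 0.05.
min(nndist(X))
min(nndist(Y))
```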


cheers,

Rolf Turner



