Re: [R-sig-Geo] sp::disaggregate() does not seem to add the necessary "comment" attribute to identify holes if Polygons does not contain exactly 1 polygon with 0 or 1 holes

2017-05-01 Thread Anne C. Hanna
Roger,

Works for me now.  Thanks for your patience with this.

I'll definitely look at sf for future projects, but for now, I just want to
analyze my darn data!  :)

 - Anne


On 05/01/2017 05:23 AM, Roger Bivand wrote:
> On Sun, 30 Apr 2017, Anne C. Hanna wrote:
> 
>> Okay, I think I've tracked it down.  This is... absurdly complicated and I
>> don't know what the actual correct resolution is, but I'm pretty sure that at
>> least I know what's going wrong.
> 
> Good, and thanks for staying with this. I've pushed a fix to my github repo,
> and pull request #29 to edzer/sp.
> 
> Note that this also works out of the box:
> 
> library(sf)
> sf <- st_read(dsn = "2011 Voting District Boundary Shapefiles", layer =
> "MCDS", stringsAsFactors = FALSE)
> sfd <- st_cast(sf, "POLYGON")
> 
> However, using sf::st_touches() is a bit less intuitive for now than the spdep
> "nb" object generated by spdep::poly2nb(), which also provides Queen (touch at
> one or more boundary points) and Rook (touch at more than one boundary point),
> and sf::st_touches() is seven times slower than spdep::poly2nb() now.
> sf::st_touches() could be speeded up using an STR tree (spdep::poly2nb() can
> do this, but in any case only looks at candidate neighbours with intersecting
> bounding boxes). So:
> 
> sp_sfd <- as(sfd, "Spatial")
> library(spdep)
> sp_nb <- poly2nb(sp_sfd)
> 
> seems a reasonable workflow to find the adjacencies under Queen/Rook and
> snapping control.
> 
> Please try out the disaggregate fix and report back - and consider shifting to
> sf.
> 
> Roger
> 
>>
>> Basically it comes down to the fact that createSPComment() does not actually
>> check whether each individual polygon in the SpatialPolygons object has a
>> comment (much less a correct comment).  Instead, it looks at a top-level
>> "comment" attribute.  If that top-level comment attribute (i.e. sf@comment 
>> for
>> my SpatialPolygonsDataFrame object stored as "sf") is "TRUE", then
>> createSPComment() does nothing.
>>
>> And here's where the second layer of the problem comes in for my dataset ---
>> my original data, pre-disaggregation, has complete and correct comments.  Some
>> of those comments (for objects where it's a single polygon with 0 or 1 holes)
>> are being retained by the basic disaggregation algorithm, while others are
>> not.  Your patch attempts to fill in the holes by running createSPComment() on
>> the final results.  But... the thing you are running createSPComment() on is
>> SpatialPolygons(p), where p is the list of disaggregated polygons.  And the
>> SpatialPolygons() constructor looks at the list of the polygons you feed it,
>> and if *any* of those polygons have comments, it sets the top-level "comment"
>> attribute of its output to "TRUE", even if other polygons are missing
>> comments.  (And it also doesn't check the comments it gets for correctness.)
>> So the fact that some polygon comments are retained in p means that
>> SpatialPolygons() lies to createSPComment() about whether it has any missing
>> comments that need to be fixed.
>>
>> On the other hand, when I manually run createPolygonsComment() on the
>> individual Polygons objects in disaggregate()'s output, I don't even look at
>> the top-level "comment" attribute, so it works just fine.
>>
>> My little toy test polygons, on the other hand, were just not complex enough
>> to end up with partial comments in them, so they also didn't experience this
>> error.
>>
>> I am not familiar enough with the design philosophy behind sp to suggest at
>> what level this issue should be fixed, but I hope this is at least enough info
>> to help those who do know what they're doing get it corrected.
>>
>> - Anne
>>
>>
>> On 04/30/2017 12:18 PM, Anne C. Hanna wrote:
>>> Roger,
>>>
>>> Unfortunately I have a use case for you of createSPComment() not working.
>>>
>>> I tried your new version of disaggregate() on the original test script I sent
>>> you, and it does seem to have fixed all the cases in there.  However, when I
>>> tried it on my actual data, it had the same failure mode as before. The
>>> original dataset has what appear to be correct comments, but the
>>> disaggregate() output is full of NULLs in place of comments for the multi-part
>>> and multi-hole polygons.
>>>
>>> createSPComment() appears to be what's failing, rather than just some logic in
>>> your disaggregate() function --- when I try to run createSPComment() on the
>>> disaggregate() output, I still get the same set of NULLs.  I can fix it if I
>>> run createPolygonsComment() individually on every Polygons object in my
>>> SpatialPolygonsDataFrame.
>>>
>>> I'm not sure what is different between my dataset and the test polygons I was
>>> using earlier, as createSPComment() seems to handle the test shapes just fine
>>> (and converting my SpatialPolygonsDataFrame into just a set of SpatialPolygons
>>> also doesn't help).  But presumably it is better to have createSPComment()
>>> fixed than to have to work around it in disaggregate().

Re: [R-sig-Geo] Cross-validation for kriging in R (package geoR): how to include the trend while reestimate is TRUE?

2017-05-01 Thread Patrick Schratz
Hi Vera,

I started debugging a bit; the error is in line 192 of the `xvalid()` 
function, which uses a subfunction `cv.f`:

    res <- as.data.frame(t(apply(matrix(locations.xvalid),
      1, cv.f)))

which then calls `variog()` in lines 125-141.
Here, the error appears because `coords` (which is created in line 90) seems to 
be of a different length than `trend` (which is taken from your provided 
`variog.obj`):

          if (is.null(variog.obj))
            stop("xvalid: when argument reestimate = TRUE an object with the fitted variogram model must be provided in the argument variog.obj ")
          CVvar <- variog(coords = cv.coords, data = cv.data,
            uvec = variog.obj$uvec, trend = variog.obj$trend,
            lambda = variog.obj$lambda, option = variog.obj$output.type,
            estimator.type = variog.obj$estimator.type,
            nugget.tolerance = variog.obj$nugget.tolerance,
            max.dist = max(variog.obj$u), pairs.min = 2,
            bin.cloud = FALSE, direction = variog.obj$direction,
            tolerance = variog.obj$tolerance, unit.angle = "radians",
            messages = FALSE, ...)
          CVmod <- variofit(vario = CVvar, ini.cov.pars = model$cov.pars,
            cov.model = model$cov.model, fix.nugget = model$fix.nugget,
            nugget = model$nugget, fix.kappa = model$fix.kappa,
            kappa = model$kappa, max.dist = model$max.dist,
            minimisation.function = model$minimisation.function,
            weights = model$weights, messages = FALSE,
            ...)
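
For anyone wanting to reproduce this kind of inspection, a minimal sketch using 
base R's debugger (reusing `meuse_geo`, `meuse_vfit` and `meuse_vario` from 
Vera's example below):

library(geoR)
# debugonce() is base R: it breaks into xvalid() on the next call only,
# so one can single-step down to the cv.f/variog() call and compare the
# lengths of the coords and trend objects
debugonce(xvalid)
xvalid(geodata = meuse_geo, data = meuse_geo$data$lead, model = meuse_vfit,
       variog.obj = meuse_vario, reestimate = TRUE)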

Because we are debugging fairly deep here, and the issue might be solved 
quickly by contacting the package authors (they should be able to get it 
working quickly, since they provide the option 'reestimate = TRUE'), I would 
try that first before doing any more detailed inspection of the error.
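
In the meantime, a possible workaround is to code the leave-one-out loop by 
hand, so that the trend covariate is subset together with the coordinates and 
the response. A rough, untested sketch along those lines, reusing the objects 
from Vera's example below (the names `cv_geo`, `elev_cv` and `cv_pred` are 
mine):

library(geoR)
n <- nrow(meuse_geo$coords)
cv_pred <- numeric(n)
for (i in seq_len(n)) {
  # drop point i from the coordinates, the response AND the trend covariate
  cv_geo <- list(coords = meuse_geo$coords[-i, , drop = FALSE],
                 data   = meuse_geo$data$lead[-i])
  class(cv_geo) <- "geodata"
  elev_cv <- meuse_geo$data$elev[-i]
  # re-estimate the variogram on the reduced data set
  v  <- variog(cv_geo, trend = ~ elev_cv, messages = FALSE)
  vf <- variofit(v, nugget = 0.1, fix.nugget = TRUE, messages = FALSE)
  # krige back to the held-out location, with trend terms of matching length
  kc <- krige.conv(cv_geo,
                   locations = meuse_geo$coords[i, , drop = FALSE],
                   krige = krige.control(obj.model = vf,
                                         trend.d = ~ elev_cv,
                                         trend.l = ~ meuse_geo$data$elev[i]))
  cv_pred[i] <- kc$predict
}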

Cheers, Patrick

PhD Student at Department of Geography - GIScience group
Friedrich-Schiller-University Jena, Germany
Tel.: +49-3641-9-48973
Web: https://pat-s.github.io

On 1. May 2017, 16:31 +0200, wrote:
>
> http://stackoverflow.com/questions/43520716/cross-validation-for-kriging-in-r-how-to-include-the-trend-while-reestimating-t


[R-sig-Geo] Cross-validation for kriging in R (package geoR): how to include the trend while reestimate is TRUE?

2017-05-01 Thread v.m.vanzoest
Dear all,

I have a question related to the function xvalid (package geoR), which I asked 
on StackOverflow before but unfortunately did not get answered, probably 
because it is too specifically related to spatial statistics and this specific 
function 
(http://stackoverflow.com/questions/43520716/cross-validation-for-kriging-in-r-how-to-include-the-trend-while-reestimating-t).
I hope one of you is able to answer it.

I would like to compute a variogram, fit it, and then perform cross-validation. 
Function xvalid seems to work nicely for the cross-validation. It works 
when I set reestimate=TRUE (so it reestimates the variogram for every point 
removed from the dataset in cross-validation) and it also works when using a 
trend. However, it does not seem to work when combining the two. Here is a 
reproducible example using the Meuse example dataset:

library(geoR)
library(sp)
data(meuse) # import data
coordinates(meuse) = ~x+y # make SpatialPointsDataFrame
meuse@proj4string <- CRS("+init=epsg:28992") # add projection
meuse_geo <- as.geodata(meuse) # create object of class geodata for geoR compatibility
meuse_geo$data <- meuse@data # attach all data (incl. covariates) to meuse_geo
meuse_vario <- variog(geodata=meuse_geo, data=meuse_geo$data$lead,
  trend= ~meuse_geo$data$elev) # variogram
meuse_vfit <- variofit(meuse_vario, nugget=0.1, fix.nugget=T) # fit
# cross-validation works fine:
xvalid(geodata=meuse_geo, data=meuse_geo$data$lead, model=meuse_vfit,
  variog.obj = meuse_vario, reestimate=F)
# cross-validation does not work when reestimate = T:
xvalid(geodata=meuse_geo, data=meuse_geo$data$lead, model=meuse_vfit,
  variog.obj = meuse_vario, reestimate=T)

The error I get is:

Error in variog(coords = cv.coords, data = cv.data, uvec = variog.obj$uvec,  : 
coords and trend have incompatible sizes

It seems to remove the point from the dataset during cross-validation, but it 
doesn't seem to remove the point from the covariates/trend data. Any ideas on 
solving this / work-arounds? Thanks a lot in advance for thinking along.

Best, Vera

---

V.M. (Vera) van Zoest, MSc | PhD candidate |
University of Twente | Faculty ITC | Department Earth Observation Science (EOS) |
ITC Building, room 2-038 | T: +31 (0)53 - 489 4412 | v.m.vanzo...@utwente.nl |

Study Geoinformatics: www.itc.nl/geoinformatics




[R-sig-Geo] authorized model formulas for spatio-temporal models, please

2017-05-01 Thread Hodgess, Erin
Hello everyone.


Where would I find the authorized model formulas for spatio-temporal models, 
please?


Thanks,

Erin



Erin M. Hodgess
Associate Professor
Department of Mathematics and Statistics
University of Houston - Downtown
mailto: hodge...@uhd.edu



Re: [R-sig-Geo] sp::disaggregate() does not seem to add the necessary "comment" attribute to identify holes if Polygons does not contain exactly 1 polygon with 0 or 1 holes

2017-05-01 Thread Roger Bivand

On Sun, 30 Apr 2017, Anne C. Hanna wrote:


Okay, I think I've tracked it down.  This is... absurdly complicated and I
don't know what the actual correct resolution is, but I'm pretty sure that at
least I know what's going wrong.


Good, and thanks for staying with this. I've pushed a fix to my github 
repo, and pull request #29 to edzer/sp.


Note that this also works out of the box:

library(sf)
sf <- st_read(dsn = "2011 Voting District Boundary Shapefiles", layer = 
"MCDS", stringsAsFactors = FALSE)
sfd <- st_cast(sf, "POLYGON")

However, using sf::st_touches() is a bit less intuitive for now than the 
spdep "nb" object generated by spdep::poly2nb(), which also provides Queen 
(touch at one or more boundary points) and Rook (touch at more than one 
boundary point), and sf::st_touches() is seven times slower than 
spdep::poly2nb() now. sf::st_touches() could be speeded up using an STR 
tree (spdep::poly2nb() can do this, but in any case only looks at 
candidate neighbours with intersecting bounding boxes). So:


sp_sfd <- as(sfd, "Spatial")
library(spdep)
sp_nb <- poly2nb(sp_sfd)

seems a reasonable workflow to find the adjacencies under Queen/Rook and 
snapping control.
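
For completeness, a small sketch of those options (untested here; `queen` and 
`snap` are the relevant poly2nb() arguments, and st_touches() is called on the 
sf object from above):

library(sf)
library(spdep)
# sparse list: element i holds the indices of polygons touching polygon i
touch <- st_touches(sfd)
# Queen contiguity: neighbours share at least one boundary point (the default)
nb_queen <- poly2nb(sp_sfd, queen = TRUE)
# Rook contiguity: neighbours must share more than one boundary point
nb_rook <- poly2nb(sp_sfd, queen = FALSE)
# snap sets the tolerance within which boundary points are treated as identical
nb_snap <- poly2nb(sp_sfd, queen = TRUE, snap = 1e-7)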


Please try out the disaggregate fix and report back - and consider 
shifting to sf.


Roger



Basically it comes down to the fact that createSPComment() does not actually
check whether each individual polygon in the SpatialPolygons object has a
comment (much less a correct comment).  Instead, it looks at a top-level
"comment" attribute.  If that top-level comment attribute (i.e. sf@comment for
my SpatialPolygonsDataFrame object stored as "sf") is "TRUE", then
createSPComment() does nothing.
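
That behaviour can be checked directly; a minimal sketch inspecting both levels 
of comment (assuming a SpatialPolygons* object named `spdf`):

library(sp)
# top-level flag consulted by createSPComment(); "TRUE" short-circuits it
comment(spdf)
# per-Polygons hole comments -- the NULL entries are the missing ones
sapply(slot(spdf, "polygons"), comment)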

And here's where the second layer of the problem comes in for my dataset ---
my original data, pre-disaggregation, has complete and correct comments.  Some
of those comments (for objects where it's a single polygon with 0 or 1 holes)
are being retained by the basic disaggregation algorithm, while others are
not.  Your patch attempts to fill in the holes by running createSPComment() on
the final results.  But... the thing you are running createSPComment() on is
SpatialPolygons(p), where p is the list of disaggregated polygons.  And the
SpatialPolygons() constructor looks at the list of the polygons you feed it,
and if *any* of those polygons have comments, it sets the top-level "comment"
attribute of its output to "TRUE", even if other polygons are missing
comments.  (And it also doesn't check the comments it gets for correctness.)
So the fact that some polygon comments are retained in p means that
SpatialPolygons() lies to createSPComment() about whether it has any missing
comments that need to be fixed.
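
Given that description, one workaround sketch (untested, based only on the 
behaviour described above; `sp_disagg` is a hypothetical name for the 
disaggregate() output) is to clear the misleading top-level flag so that 
createSPComment() actually runs:

library(sp)
# drop the top-level "TRUE" flag, then rebuild all hole comments
comment(sp_disagg) <- NULL
sp_disagg <- createSPComment(sp_disagg)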

On the other hand, when I manually run createPolygonsComment() on the
individual Polygons objects in disaggregate()'s output, I don't even look at
the top-level "comment" attribute, so it works just fine.

My little toy test polygons, on the other hand, were just not complex enough
to end up with partial comments in them, so they also didn't experience this
error.

I am not familiar enough with the design philosophy behind sp to suggest at
what level this issue should be fixed, but I hope this is at least enough info
to help those who do know what they're doing get it corrected.

- Anne


On 04/30/2017 12:18 PM, Anne C. Hanna wrote:

Roger,

Unfortunately I have a use case for you of createSPComment() not working.

I tried your new version of disaggregate() on the original test script I sent
you, and it does seem to have fixed all the cases in there.  However, when I
tried it on my actual data, it had the same failure mode as before. The
original dataset has what appear to be correct comments, but the
disaggregate() output is full of NULLs in place of comments for the multi-part
and multi-hole polygons.

createSPComment() appears to be what's failing, rather than just some logic in
your disaggregate() function --- when I try to run createSPComment() on the
disaggregate() output, I still get the same set of NULLs.  I can fix it if I
run createPolygonsComment() individually on every Polygons object in my
SpatialPolygonsDataFrame.

I'm not sure what is different between my dataset and the test polygons I was
using earlier, as createSPComment() seems to handle the test shapes just fine
(and converting my SpatialPolygonsDataFrame into just a set of SpatialPolygons
also doesn't help).  But presumably it is better to have createSPComment()
fixed than to have to work around it in disaggregate().  So I guess I'll look
at the code for that and see if I can find what's wrong.

Just in case you want to see this in action, I've attached a little test
script.  The data set I am working with is available at:

http://aws.redistricting.state.pa.us/Redistricting/Resources/GISData/2011-Voting-District-Boundary-Shapefiles.zip

 - Anne


On 04/29/2017 10:06 AM, Roger Bivand wrote:

On Sat, 29 Apr 2017, Anne C. Hanna wrote:


Roger,

This looks great, and I will try