Great stuff to finally see it make it! Well done to all involved!
On Fri, Jan 13, 2017 at 8:24 AM Matteo Mattiuzzi
wrote:
> Dear list,
>
>
> We would like to announce the release of the MODIS package to CRAN. This
> was finally possible thanks to the constant efforts of
Check out SDMTools if FRAGSTATS-like patch/class metrics are what you're
looking for: https://cran.r-project.org/web/packages/SDMTools/index.html
Sincerely,
Forrest
On Tue, Dec 13, 2016 at 1:29 PM Manuel Spínola wrote:
> Dear list members,
>
> Is there any R package,
Congratulations,
Forrest Stevens
On Fri, Nov 4, 2016 at 11:04 AM Edzer Pebesma <edzer.pebe...@uni-muenster.de>
wrote:
> Package sf (for "simple features") is now on CRAN:
>
> https://cran.r-project.org/package=sf
>
> The package vignette is found here:
>
>
system.time(dist_rf <- distance(rf))
>
>
> I stopped the code with the functions "buffer" and "distance" after two
> hours.
>
>
> Thanks a lot for your time.
>
> Have a nice day.
>
> Nell
>
For the moment, I haven't found any solutions.
>
>
> Thanks a lot for your time.
>
> Have a nice day.
>
> Nell
> --
> *From:* Forrest Stevens <r-sig-...@forreststevens.com>
> *Sent:* Monday, 11 April 2016 09:09
> *To:* Nelly Redua
Five million cells isn't all that many; how slow is too slow? Buffering a
raster of about 10 million cells on my laptop takes on the order of 20
seconds or so for a binary raster. Is it possible that you're fighting
memory problems?
In the past when doing multiple ring buffers I've found it
y have routines that do the same thing faster for large
> raster data. In raster, "aggregate" has a very different meaning.
>
>
> On 11/03/16 11:07, Alexander Shenkin wrote:
> > Hello All,
> >
> > I've been working to be able to make elevation profiles from a DEM along
>
Hey Allie, long time, hope things are going well!
Coincidentally I was working on something for a student of mine to do this
so I'll throw my two bits in. Starting from two points which specify a line
perpendicular to your swath and separated by the distance of your swath
width, you can create a
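A minimal sketch of that transect idea, assuming the raster and sp packages; the DEM filename and endpoint coordinates here are hypothetical placeholders:

```r
library(raster)
library(sp)

dem <- raster("my_dem.tif")  # hypothetical DEM file

## Two endpoints defining the transect line (coordinates illustrative):
transect <- SpatialLines(list(
  Lines(list(Line(rbind(c(0, 0), c(1000, 1000)))), ID = "t1")),
  proj4string = crs(dem))

## Cell values along the line, kept in order with along = TRUE:
profile <- extract(dem, transect, along = TRUE)[[1]]
plot(profile, type = "l",
     xlab = "Cell index along transect", ylab = "Elevation")
```

For a swath rather than a single line, you could buffer the transect by half the swath width and summarize the extracted cells perpendicular to it.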
Hi Leila, to generate spatially autocorrelated data, which I think is what
you're trying to get at, I use the gstat package. You need to decide on
the geostatistical model that describes the nature of your correlation
structure, and then simulating a field is pretty easy:
library(gstat)
##
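A sketch of the unconditional simulation approach with gstat; the grid size and variogram parameters (exponential model, unit sill, range 15) are illustrative choices, not prescriptions:

```r
library(gstat)
library(sp)

## Prediction grid:
grd <- expand.grid(x = 1:100, y = 1:100)
coordinates(grd) <- ~ x + y
gridded(grd) <- TRUE

## "Dummy" gstat object: pick a variogram model describing your
## correlation structure (parameters here are illustrative):
g <- gstat(formula = z ~ 1, dummy = TRUE, beta = 0,
           model = vgm(psill = 1, model = "Exp", range = 15),
           nmax = 20)

## One unconditional simulation of a spatially autocorrelated field:
sim <- predict(g, newdata = grd, nsim = 1)
spplot(sim["sim1"])
```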
This is fantastic work; thank you both for troubleshooting it, and thank you
for the fix, Roger. Your tireless work is greatly appreciated! I just ran across this
problem a couple of weeks ago when exporting from R to GeoJSON, and will
happily test out the new version.
Sincerely,
Forrest
On Mon, Nov 2, 2015 at
Thanks for catching that! I've responded directly with a package file
attached, but the incomplete fix should be corrected in the next R-Forge
release.
Sincerely,
Forrest
On Thu, Jul 16, 2015 at 4:09 PM Roberto Horn robertomh...@gmail.com wrote:
Hello,
No, it is different:
Does it fail in exactly the same way? And no, in practice you shouldn't
see any large differences in reprojected/mosaicked output. The only
difference is less hassle with getting a working MRT in place. :)
Sincerely,
Forrest
On Thu, Jul 16, 2015 at 3:52 PM Roberto Horn robertomh...@gmail.com
Hi guys, thank you for bringing this to our attention. I believe I've
fixed the bug in the R-Forge version of the MODIS package. If either
of you could install this from R-Forge and test it in your
environments that would be ideal.
Notice that these changes won't be reflected in the automated
!
Amit
Forrest Stevens forr...@ufl.edu wrote on Tuesday, 31 March 2015 at 15:26:
Hi Amit, I can provide you with some help to get smoothing working to
leverage multiple cores. This is already baked into the
whittaker.raster() function of our MODIS package and it makes sense to
use
I'm guessing based on the data and situation you describe that some
variant of this would probably get you close:
## Sample data:
d <- data.frame(A = c(T, F, F, T), B = c(F, T, F, T))
## Count rows with at least one TRUE:
sum(apply(d, MARGIN = 1, sum) >= 1)
Hope that helps,
Forrest
--
Forrest R. Stevens
Ph.D.
Hi Amit, I can provide you with some help to get smoothing working to
leverage multiple cores. This is already baked into the
whittaker.raster() function of our MODIS package and it makes sense to
use it since the problem is of the embarrassingly parallel variety.
To leverage multiple cores you
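One pattern for this kind of embarrassingly parallel problem (this is a generic mclapply sketch, not the actual whittaker.raster() internals; the per-pixel smoother is a stand-in):

```r
library(parallel)

## Hypothetical per-pixel smoother applied to chunks of a values matrix,
## where each column is one pixel's time series:
smooth_chunk <- function(cols, vals) {
  apply(vals[, cols, drop = FALSE], MARGIN = 2,
        function(ts) stats::smooth.spline(ts)$y)
}

vals <- matrix(rnorm(46 * 1000), nrow = 46)  # 46 dates x 1000 pixels
chunks <- split(1:ncol(vals),
                cut(1:ncol(vals), detectCores(), labels = FALSE))

## Each chunk is independent, so fork one worker per core
## (mclapply; on Windows use parLapply with a cluster instead):
out <- mclapply(chunks, smooth_chunk, vals = vals,
                mc.cores = detectCores())
smoothed <- do.call(cbind, out)
```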
The easiest way I can think of would be to create an index raster
based on your raster of interest, convert it to polygons, project it
to an equal area projection with linear units amenable to you, then
extract the polygon areas and assign them back to your index raster
for weighting. Perhaps
Institute
University of Florida
www.clas.ufl.edu/users/forrest
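The index-raster area-weighting approach above might be sketched like this, assuming the raster, sp, and rgeos packages; Mollweide is just one example of an equal-area projection:

```r
library(raster)
library(sp)
library(rgeos)

r <- raster(nrows = 10, ncols = 10, crs = "+proj=longlat +datum=WGS84")

## Index raster: one unique ID per cell:
idx <- r
idx[] <- 1:ncell(idx)

## Polygonize, project to an equal-area CRS, and measure polygon areas:
p <- rasterToPolygons(idx)
p_ea <- spTransform(p, CRS("+proj=moll +datum=WGS84"))
a <- gArea(p_ea, byid = TRUE)

## Assign areas back via the index raster for use as weights:
w <- idx
w[p$layer] <- a
```

(For unprojected lon/lat rasters, raster::area() computes approximate cell areas directly and may save you the round trip through polygons.)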
On Fri, Jan 9, 2015 at 6:11 PM, Forrest Stevens forr...@ufl.edu wrote:
The easiest way I can think of would be to create an index raster
based on your raster of interest, convert it to polygons, project it
to an equal area projection
Not to dissuade any interesting new algorithm development, and I may
be missing something in the motivation for what you're trying to
accomplish, but have you looked at the Kendall package and tried using
it on your raster object? I've been pleasantly surprised at its
efficiency for at least
comments. So I will improve the documentation
for Kendall by terminating the program with an error message when n <= 3
(this case is of no interest to me) and a warning message when n < 12 that
the p-values may be inaccurate.
Thank you for everything!
Nuno
On 9 October 2014 00:26, Forrest Stevens
Hi Joseph, with datasets that large, in my experience you're looking at
spatial subsetting to get something manageable, and I've given up on zonal()
as it's just too slow for my needs. Rather, I would do the feature-to-raster
conversion on the polygons yourself, based on the raster layer, so cells
I've had no problems just using 7Zip to package the kml and images
together. I'm surprised WinZip doesn't work.
Forrest
--
Forrest R. Stevens
Ph.D. Candidate, QSE3 IGERT Fellow
Department of Geography
Land Use and Environmental Change Institute
University of Florida
I've been following this thread with interest, as it's a challenging problem.
I really like Thierry's solution and I think that he's right in that it
should converge across most cases I can think of. I was concerned it would
have problems with complex, multi-part polygons or polygons with holes,
Hi Julien, you've posed a really interesting problem, the bulk of which
I'd argue might not be best solved using R, but I'll be very interested to
hear other people's opinions on it. A large portion of your project is
going to rely on good image processing, so the raster and jpeg packages
will
I've used a similar technique with subs() in the past, but you'll still be
relying on the rather slow raster zonal() function to do population totals
and SD calculations for each country. As an alternative I've created much
more workable solutions by converting large rasters to data.table objects
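A sketch of that data.table alternative, assuming the raster and data.table packages; the tiny "zones" and "pop" rasters stand in for a country-ID raster aligned with a population raster:

```r
library(raster)
library(data.table)

zones <- raster(matrix(rep(1:2, each = 50), nrow = 10))  # country IDs
pop   <- raster(matrix(runif(100, 0, 1000), nrow = 10))  # population

## Flatten both rasters into one table, one row per cell:
dt <- data.table(zone = getValues(zones), pop = getValues(pop))

## Grouped aggregation is typically far faster than raster::zonal()
## on very large grids:
stats <- dt[!is.na(zone),
            .(total = sum(pop, na.rm = TRUE),
              sd    = sd(pop, na.rm = TRUE)),
            by = zone]
```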
Hi Yan, I would be surprised if rasterEngine() were worth the overhead for
such a simple process. Though, admittedly, Jonathan
Greenberg might have more information on the topic. To do such an
operation this is the approach I would take without using rasterEngine():
for (i in 1:5) {
Are these in projected coordinates with the same units? (They probably
should be if they aren't.) If so, then you can just use the
Pythagorean theorem: assuming two points, the distance in 3D space is:
dist3d <- sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)
Example:
x1=0
y1=0
z1=0
x2=1
y2=2
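Wrapping the formula in a small function makes the example self-contained (the point coordinates are just illustrative values):

```r
## Euclidean distance between two points in 3D space:
dist3d <- function(x1, y1, z1, x2, y2, z2) {
  sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2)
}

dist3d(0, 0, 0, 1, 2, 2)  # 3, since sqrt(1 + 4 + 4) = 3
```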
Hi Edzer, this is definitely very cool. This may make some recent
work I've been doing on spatial and social network analyses of
academic co-authorship easier (early code is here if you're
interested: http://refnet.r-forge.r-project.org/ ). Thanks for
providing this to the community!
Hi Tom, I've done something similar in the past to visualize the
distribution of the predictions attained for each observation across
the many trees within a random forest while looking at various aspects
of those ranges and correlating that with cross-validated prediction
errors. It's relatively
You can do this to get the mean and SD for each observation (note the
mean should match the value in predictions$aggregate):
y_data$rf_mean <- apply(predictions$individual, MARGIN=1, mean)
y_data$rf_sd <- apply(predictions$individual, MARGIN=1, sd)
y_data$rf_cv <- apply(predictions$individual,
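For context, a sketch of where predictions$individual comes from, assuming the randomForest package (the data frames and column names here are hypothetical):

```r
library(randomForest)

train <- data.frame(y = rnorm(100), x1 = rnorm(100), x2 = rnorm(100))
test  <- data.frame(x1 = rnorm(20), x2 = rnorm(20))

rf <- randomForest(y ~ x1 + x2, data = train, ntree = 500)

## predict.all = TRUE returns the per-tree predictions as a matrix
## (rows = observations, columns = trees) alongside the ensemble mean:
predictions <- predict(rf, newdata = test, predict.all = TRUE)
str(predictions$aggregate)   # ensemble mean, one value per observation
dim(predictions$individual)  # 20 x 500
```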
I am trying to run a for loop 1000 times to randomly sample spatial points
...
different values). However, when I look at the results from the kde function
the first two outputs are different and then each iteration after that just
produces the exact same values.
Hi Kathryn, I believe the
*100, 0), "% done"))
}
close(pb)
Forrest Stevens
--
Ph.D. Candidate, QSE3 IGERT Fellow
Department of Geography
Land Use and Environmental Change Institute
University of Florida
www.clas.ufl.edu/users/forrest
___
R-sig-Geo mailing list
R-sig-Geo@r
that analysis chain, with a bottleneck that can
hopefully be bypassed at some point soon (I haven't had time to dig
into the source but maybe one day!)
Sincerely,
Forrest Stevens
--
Ph.D. Candidate, QSE3 IGERT Fellow
Department of Geography
Land Use and Environmental Change Institute
University of Florida
Hi Kat, a little more information would help. Which package are you using,
what are the symptoms you're experiencing (including a self-contained code
snippet would be great) and which original post are you referring to?
I'm a contributor to the MODIS package (
what the raw, digital numbers are. But 10,000 is a pretty
standard scaling factor for shifting floating point numbers to integers,
and is used for a wide variety of remotely sensed data in order to decrease
the file size requirement for storing certain data.
Good luck,
Forrest Stevens
--
Ph.D
?
Sincerely,
Forrest Stevens
Ph.D. Candidate
Department of Geography
Land Use and Environmental Change Institute
University of Florida
http://www.clas.ufl.edu/users/forrest
On Wed, Jan 4, 2012 at 2:23 PM, Tara Bridwell tarabridw...@gmail.com wrote:
My overall analytical goal is to take the mean (and later
ASCII grid, on which you can pull subsets or index like you would a
normal vector (e.g. pretend your map object has band1 as its only data,
you can pull a subset of the first 10 grid values like this:
map$band1[1:10]
I hope that gets you closer to solving the rest of your question,
Forrest Stevens