I prefer GRASS and R :-)
In fact, I believe they have a set of sampling points at 4 times, not 4
repetitions over time for EACH pixel. That would be practically insane,
wouldn't it? Unless they estimated the cover of one kind of vegetation from
four different landcover maps. So if they want to estimate something as a
response variable influenced by the four cover maps, I suggest you use
GRASS: resample to a coarse, handleable raster resolution, run your mixed
effects model on the coarsened data, then go back to the fine resolution
and apply the fitted model to tiles of your original map. A rough sketch in
R follows.
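Something along these lines, though this is only a sketch: it assumes the
raster and lme4 packages, invented file names (veg_t1.tif etc.), and an
aggregation factor of 100 (0.4 m -> 40 m); adjust all of those to taste.

library(raster)   # aggregate(), getValues(), crop() for rasters
library(lme4)     # lmer()

## Invented file names -- replace with your own layers.
veg    <- stack(c("veg_t1.tif", "veg_t2.tif", "veg_t3.tif", "veg_t4.tif"))
slope  <- raster("slope.tif")
aspect <- raster("aspect.tif")

## 1. Coarsen: factor 100 turns 0.4 m cells into 40 m cells
veg_c    <- aggregate(veg,    fact = 100, fun = mean)
slope_c  <- aggregate(slope,  fact = 100, fun = mean)
aspect_c <- aggregate(aspect, fact = 100, fun = mean)

## 2. Long format: one row per pixel x time.
## getValues() on a stack gives a cell-by-layer matrix;
## as.vector() unrolls it column by column, i.e. layer by layer.
d <- data.frame(
  pixel  = factor(rep(seq_len(ncell(veg_c)), times = nlayers(veg_c))),
  time   = rep(seq_len(nlayers(veg_c)), each = ncell(veg_c)),
  veg    = as.vector(getValues(veg_c)),
  slope  = rep(getValues(slope_c),  times = nlayers(veg_c)),
  aspect = rep(getValues(aspect_c), times = nlayers(veg_c)))
d <- na.omit(d)

## 3. Mixed-effects model: random intercept and time slope per pixel
m <- lmer(veg ~ time + slope + aspect + (time | pixel), data = d)

## 4. Back at 0.4 m, predict tile by tile using fixed effects only
## (the fine pixels were never grouping levels in the fit)
tile <- crop(stack(slope, aspect), extent(slope, 1, 1000, 1, 1000))
newd <- setNames(as.data.frame(getValues(tile)), c("slope", "aspect"))
newd$time <- 4
pred <- predict(m, newdata = newd, re.form = NA)

If you keep the rasters in a GRASS location, the same coarsening and
tiling can of course be driven from GRASS, with R reading the data
through the usual GRASS-R interface.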

And what if your client should be worried about multiple scales (moving
windows with different radii) instead of a single 0.4 m scale? See the
sketch below.
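In R that could look like this (again just a sketch, with an invented
layer name; GRASS's r.neighbors does the same job on the GRASS side):

library(raster)

veg <- raster("veg_t1.tif")   # invented layer name

## Circular windows with radii of 2, 10 and 50 map units.
## focalWeight() builds weights that sum to 1, so focal() with
## fun = sum returns the windowed mean at each pixel.
for (r in c(2, 10, 50)) {
  w  <- focalWeight(veg, d = r, type = "circle")
  sm <- focal(veg, w = w, fun = sum, na.rm = TRUE)
  writeRaster(sm, filename = sprintf("veg_mean_r%02d.tif", r),
              overwrite = TRUE)
}

The resulting layers could then enter the model above as predictors at
several scales at once.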

Best,

milton

On Tue, Dec 1, 2009 at 2:15 PM, Steven J. Pierce <[email protected]> wrote:

> Hi folks,
>
> I've got a consulting client who has high-resolution (0.4 m) raster data
> from remote sensing that covers an area about 5 km x 5 km, which naturally
> yields a very large dataset (~156.25 million pixels) at each point in
> time.
> They have repeated measurements at 4 time points for this area on a
> continuous variable that essentially represents which kind of vegetation is
> most dominant (forage plants vs. weeds) within the pixel. They want to use
> things like land use type, precipitation, soil type, and the slope and
> aspect of the ground in each pixel to predict the changes over time in the
> outcome variable.
>
> My initial thought about how to analyze the data was to use a hierarchical
> linear (mixed effects) model with time points nested within pixels to model
> the typical longitudinal trajectory of the outcome and how the predictors
> affect that trajectory. My dilemma is that they want to use the entire
> dataset to do their models, which means the dataset is so large that most
> of
> the analysis tools I'm used to using are simply going to choke on it. In
> addition, using a random effect for each pixel might account for temporal
> autocorrelation, but I suspect there would still be substantial spatial
> autocorrelation not modeled with that approach.
>
> So, I thought I'd ask here to see what suggestions you have on software
> tools and/or statistical models that might be able to handle this. The
> client mentioned IDL & ENVI having good tools for handling large raster
> datasets, but I'm not familiar with them and what they can do in terms of
> estimating formal statistical models.
>
> Steven J. Pierce, M.S., Ph.D. Candidate
> Associate Director
> Center for Statistical Training & Consulting (CSTAT)
> Michigan State University
> 178 Giltner Hall
> East Lansing, MI 48824
>
> Office Phone: (517) 353-9288
> Office Fax: (517) 353-9307
> E-mail: [email protected]
> Web: http://www.cstat.msu.edu
>


_______________________________________________
R-sig-Geo mailing list
[email protected]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo
