I usually export my dataset as raw binary, import it with readBin(), process 
each line of the image, and write the results (a rough sketch of that loop 
follows the test data below). The code I currently use to smooth the 
vegetation time-series is below. I have also included four AVHRR NDVI 
time-series that I use to test new smoothing methods. There are some good 
filtering methods in R, but they all assume that noise occurs in the 
time-series in both directions. I work under the assumption that downward 
spikes are likely cloud contamination and that the upper envelope of the data 
represents accurate vegetation index values. The improvements I would like to 
see in this code are:

1- Some quick way of checking for very large upward spikes, just in case 
there are some erroneous data in that direction (a possible check is sketched 
after this list).
2- Some implementation that is faster in processing time than the while loop. 
I ran this code on a MODIS EVI time-series for an image of 1300 x 2995 pixels 
and it took about a day and a half (a vectorized sketch follows the function 
below).
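
For item 1, a minimal sketch of a quick check in the upward direction; it 
reuses the same circular neighbour filter as my.smooth below, and the 
up.threshold value is a placeholder that would need tuning to the data:

flag.spikes <- function(x, up.threshold = 100) {
        ## mean of the two temporal neighbours, wrapping around the year
        neighbours <- filter(x, filter = c(0.5, 0, 0.5), circular = TRUE)
        ## indices where a value jumps far above its neighbours' mean
        which(x - neighbours > up.threshold)
}

Any indices it flags could be set to the neighbour mean (or NA) before 
smoothing.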

Thanks in advance for any suggestions.

## Iteratively raise each value toward the mean of its two temporal
## neighbours (circular, so the series wraps around the year), keeping the
## point-wise maximum so that troughs are lifted but peaks are preserved.
## Stop once no point rises by more than threshold in a single pass.
my.smooth <- function(x, threshold = 50) {
        fn <- function(xx) {
                ## point-wise max of the series and its neighbour mean
                apply(cbind(xx, filter(xx, filter = c(0.5, 0, 0.5),
                                       circular = TRUE)), 1, max)
        }
        y <- fn(x)
        a <- max(apply(cbind(x, y), 1, diff))   # largest single-pass rise
        while (a > threshold) {
                x1 <- fn(y)
                a <- max(apply(cbind(y, x1), 1, diff))
                y <- x1
        }
        return(y)
}
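
For item 2, most of the cost is calling this function once per pixel, about 
3.9 million times for a 1300 x 2995 image. Below is a sketch of a 
matrix-at-once variant, assuming the data are arranged as a matrix X with one 
row per time step and one column per pixel. It is untested at full scale, and 
because it iterates every pixel until the worst-behaved pixel converges, the 
results can differ slightly from the per-pixel loop:

my.smooth.mat <- function(X, threshold = 50) {
        n <- nrow(X)
        fn <- function(M) {
                ## mean of the two temporal neighbours, wrapping around
                nb <- (M[c(n, 1:(n - 1)), , drop = FALSE] +
                       M[c(2:n, 1), , drop = FALSE]) / 2
                ## pmax() replaces the row-wise apply(cbind(...), 1, max)
                pmax(M, nb)
        }
        Y <- fn(X)
        while (max(Y - X) > threshold) {
                X <- Y
                Y <- fn(X)
        }
        Y
}

The while loop then runs a handful of times for the whole image instead of 
millions of times, once per pixel.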



    d1  d2  d3  d4
1  113 138 138 134
2  108 115 120 115
3  105 127 129 120
4  103 127 129 120
5  109 119 120 117
6  115 126 126 123
7  115 126 126 123
8  102 124 125 128
9  102 124 125 128
10 111 119 119 119
11 122 119 119 119
12 122 120 122 121
13 110 120 122 121
14 110 115 115 114
15 104 109 109 114
16 121 137 141 142
17 121 142 144 145
18 120 142 144 145
19 120 143 148 144
20 137 145 149 145
21 137 145 149 145
22 138 163 163 167
23 138 169 172 172
24 136 169 172 172
25 172 180 183 179
26 172 180 183 179
27 157 174 180 179
28 165 181 181 182
29 173 181 181 182
30 173 179 181 182
31 174 173 173 182
32 174 185 185 182
33 119 185 185 184
34 167 183 183 184
35 167 183 183 182
36 144 178 184 184
37 170 182 182 183
38 173 182 182 183
39 173 181 181 181
40 169 178 179 179
41 155 171 172 172
42 116 154 154 149
43 101 145 149 149
44 114 147 156 149
45 114 147 156 149
46 107 124 125 124
47 108 124 125 124
48 108 120 115 119
49 131 128 139 131
50 131 141 140 135
51 117 141 140 135
52 113 138 138 134
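
And the read-process-write loop mentioned at the top, roughly, using the 
my.smooth.mat sketch above. The file names, the 1300 x 2995 x 52 layout, the 
band-interleaved-by-line (BIL) order, and the 2-byte unsigned integers are 
placeholder assumptions, not the real dataset:

nsamples <- 2995   # pixels per image line (placeholder)
nbands   <- 52     # time steps per pixel (placeholder)
in.con   <- file("ndvi_stack.bil", "rb")    # hypothetical input file
out.con  <- file("ndvi_smooth.bil", "wb")   # hypothetical output file
for (line in 1:1300) {
        ## one image line, all bands, as a bands x pixels matrix (BIL order)
        v <- readBin(in.con, what = "integer", n = nsamples * nbands,
                     size = 2, signed = FALSE, endian = "little")
        X <- matrix(v, nrow = nbands, ncol = nsamples, byrow = TRUE)
        ## smooth every pixel in the line at once, then write it back out
        Y <- round(my.smooth.mat(X))
        writeBin(as.integer(t(Y)), out.con, size = 2, endian = "little")
}
close(in.con)
close(out.con)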


Jonathan B. Thayn, Ph.D.
Illinois State University
Department of Geography - Geology
200A Felmley Hall
Normal, Illinois 61790-4400

(309) 438-8112
jth...@ilstu.edu
my.ilstu.edu/~jthayn

On Jan 30, 2010, at 3:29 AM, Tomislav Hengl wrote:

> 
> Dear Jonathan,
> 
> Interesting topic. Please also send us your code example and, if possible, 
> part of your dataset (I assume it is a SpatialGridDataFrame?).
> 
> Operations on lists can be sped up by using lapply and by implementing your 
> own functions; consider also running Rprof to check which operation is using 
> the most time (read more in "The R Inferno" or e.g. here: 
> http://manuals.bioinformatics.ucr.edu/home/programming-in-r#Progr_noloops). 
> If the operation is complex enough, you simply cannot expect to have the 
> results within a few minutes (unless you try super- or grid computing; see 
> http://cran.r-project.org/web/views/HighPerformanceComputing.html).
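> 
> For example, a minimal Rprof session looks like this (my.fun and the input 
> d here are placeholders for whatever is being timed):
> 
> Rprof("profile.out")          # start writing profiling samples
> y <- apply(d, 2, my.fun)      # run the code to be profiled
> Rprof(NULL)                   # stop profiling
> summaryRprof("profile.out")   # report the time spent per function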
> 
> HTH,
> 
> T. Hengl
> http://home.medewerker.uva.nl/t.hengl/ 
> 
> 
>> -----Original Message-----
>> From: r-sig-geo-boun...@stat.math.ethz.ch [mailto:r-sig-geo-
>> boun...@stat.math.ethz.ch] On Behalf Of Jonathan Thayn
>> Sent: Friday, January 29, 2010 5:02 PM
>> To: r-sig-geo@stat.math.ethz.ch
>> Subject: [R-sig-Geo] NDVI time-series filter
>> 
>> I need to smooth a collection of annual NDVI time-series data for a fairly 
>> large image. Right now I am using an iterative method that compares the 
>> difference between each point and the mean of its two neighbors. This 
>> process repeats until a threshold is reached. It works well, but it takes 
>> a long time. When I process the image without smoothing the data it takes 
>> about 2 hours; with the smoothing it looks like it will take up to 7 days.
>> 
>> The method needs to elevate troughs in the data but retain the peaks, 
>> since I assume that any troughs are really cloud contamination and that 
>> the peaks are accurate NDVI values. Does anyone know of such a smoother 
>> that isn't so time consuming?