Hi Robert,

Thanks for the response. I will keep forcing a write to disk (i.e., setOptions(todisk = TRUE)) to prevent this from occurring.

Based on this post, and on another from earlier in the year, I looked a bit more into other methods for calculating available RAM, at least for Linux and Mac systems. I understand from some of the posts I have seen that this is a fairly tricky subject, but something like the following might report available memory (in MB) on a Linux system:

system("free -t -m | awk 'FNR == 5 {print $4}'", intern = TRUE)

On a Mac, I believe the "alloc" command might do the trick, but for some reason my system (10.5.8) doesn't have it, so I couldn't test that. Another way might be to parse the memory summary printed by top, e.g.:

as.numeric(sub("M", "", system("top -l 1 | awk 'FNR == 6 {print $10}'", intern = TRUE))) # e.g. "40M" -> 40

But I am not at all sure that that particular number is the right one to use as an indicator of available memory (it looks very conservative).
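For what it's worth, the two commands could be rolled into a single helper along these lines. This is only a rough sketch: the awk field positions (FNR == 5 / $4 for free, FNR == 6 / $10 for top) are tied to the output on my particular machines and would need checking elsewhere:

freeMemMB <- function() {
  # Rough estimate of free RAM in MB; NA on platforms not handled here
  os <- Sys.info()[["sysname"]]
  if (os == "Linux") {
    # 4th field of the "Total:" line printed by free -t -m
    as.numeric(system("free -t -m | awk 'FNR == 5 {print $4}'", intern = TRUE))
  } else if (os == "Darwin") {
    # 10th field of top's memory summary line, e.g. "40M"
    as.numeric(sub("M", "", system("top -l 1 | awk 'FNR == 6 {print $10}'",
                                   intern = TRUE)))
  } else {
    NA # e.g. Windows would need a different approach entirely
  }
}

I dispatch on Sys.info()[["sysname"]] rather than .Platform$OS.type because the latter only distinguishes "unix" from "windows" and so cannot tell Linux from Mac apart.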
Anyway, looking into the code for canProcessInMemory, I see that you already have similar solutions worked in but commented out, so I guess these approaches might not be desirable.
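Just to illustrate the kind of check I have in mind (a toy version only, not how canProcessInMemory actually works; freeMemMB() is just the sketch above):

canFitInMemory <- function(ncells, nlayers = 1, safety = 0.25) {
  # Toy check: would ncells * nlayers double-precision values (8 bytes
  # each) fit within a fraction of the free RAM reported by freeMemMB()?
  neededMB <- ncells * nlayers * 8 / 2^20
  availMB <- freeMemMB()
  !is.na(availMB) && neededMB < safety * availMB
}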
Cheers, and thanks again,

Lyndon

On Wed, Nov 23, 2011 at 7:38 PM, Robert J. Hijmans <r.hijm...@gmail.com> wrote:
> Lyndon,
> The reason might be that the large rasters trigger raster to process by
> chunk (writing to disk), whereas the smaller file is considered small
> enough for processing in RAM. That assessment may be wrong, however, in
> which case the OS may start swapping memory (using disk as RAM), which
> is slow. If this happens, you can lower the limit with
> setOptions(maxmemory = <some value>), or, to be on the very safe side,
> use todisk = TRUE, as you indicated. I wish I knew of a better way to
> find out how much RAM is available on the different operating systems.
> Robert
>
> On Mon, Nov 21, 2011 at 8:40 AM, Lyndon Estes <lyndon.es...@gmail.com> wrote:
>>
>> Hi Swen,
>>
>> Thanks for the suggestion. That brings to mind an earlier problem I had
>> with a different function substituting values on large rasters, which I
>> got around by writing the following lines into the code:
>>
>> if (raster:::.toDisk() != TRUE) {
>>   setOptions(todisk = TRUE)
>>   cat("We don't want memory problems--forcing write to disk.\n")
>> }
>>
>> So I will see if that works in this case. However, I am still curious
>> why the problem occurs with my particular grid files but not with the
>> dummy rasters I made, which are in fact somewhat larger. Also, the
>> combined size of the 9 grids is < 30 MB, which was well below my
>> available memory (about 500 MB free).
>>
>> In the meantime, I am continuing with stacks and checking whether a
>> forced write or lower memory limits will help.
>>
>> Thanks again,
>>
>> Lyndon
>>
>> On Mon, Nov 21, 2011 at 6:24 AM, Swen Meyer <s.me...@lmu.de> wrote:
>> > Dear Lyndon,
>> > I also had some trouble with stacking layers in the raster package.
>> > Try using a 32-bit version of R. It sounds weird, but in my case the
>> > stacking was much faster on a 32-bit R version. This is what Robert
>> > Hijmans once wrote me when I had some trouble with raster stacking:
>> >
>> > "Thanks, that is very good to know. raster checks whether the data
>> > can all be processed in memory and, if so, does that (up to a point),
>> > for better speed. There is a limit set by maxmemory. See
>> >
>> > showOptions()
>> >
>> > Perhaps setting maxmemory to a lower value would help, e.g.,
>> >
>> > setOptions(maxmemory = 1e+08)"
>> >
>> > Hope this will help you.
>> > Greetings,
>> > Swen

_______________________________________________
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo