Just wanted to update this thread in case anyone else comes looking, since some of these things were not immediately clear to me. I ended up doing:

    library(raster)
    library(ncdf4)

    fn <- list.files('serverpath', full.names = TRUE)
    fnstack <- stack(fn)
    # names() returns characters (with a leading 'X' for numeric names),
    # but ncdim_def() needs numeric dimension values, so convert:
    layerdates <- as.numeric(gsub('X', '', names(fnstack)))

    # Instead of writeRaster, use ncdf4 directly to get around the issue in
    # this thread:
    # http://r-sig-geo.2731867.n2.nabble.com/writeRaster-does-not-preserve-names-when-writing-to-NetCDF-td7586909.html
    dim1 <- ncdim_def('Long', 'degree', seq(-112.25, -104.125, 0.00416666667))
    dim2 <- ncdim_def('Lat', 'degree', seq(43.75, 33, -0.00416666667))
    # layerdates is a vector like 20120101, 20120109, ... etc. since that's
    # what my files were called
    dim3 <- ncdim_def('time', 'yrdoy', vals = layerdates, unlim = TRUE)
    var <- ncvar_def('swe', 'meters', dim = list(dim1, dim2, dim3),
                     missval = -99, longname = 'snow water equivalent',
                     compression = 9)
    # Important to note: dim1 is the x direction and should be ascending;
    # dim2 is the y direction and should be descending. This is because the
    # cell numbers of a Raster* object start at the top-left and count by row.

    outputfn <- 'localpath'
    newnc <- nc_create(outputfn, var)
    ncvar_put(newnc, var, vals = getValues(fnstack))
    # Add a global attribute defining the geographic information
    ncatt_put(newnc, 0, 'proj4string', '+proj=longlat +datum=WGS84')
    nc_close(newnc)
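One caveat, since the whole point of the thread was stacking files without loading them into memory: getValues(fnstack) above pulls the entire stack into RAM at once. A hedged, untested sketch of an alternative (reusing the same newnc, var, and fnstack objects from above) writes one layer at a time via the start/count arguments of ncvar_put:

```r
# Untested sketch: write one layer per iteration so only a single layer
# is ever held in memory.
for (i in seq_len(nlayers(fnstack))) {
  vals_i <- getValues(raster(fnstack, layer = i))  # pull just layer i
  ncvar_put(newnc, var, vals = vals_i,
            start = c(1, 1, i),    # offset along (Long, Lat, time)
            count = c(-1, -1, 1))  # full x/y slab, one time step
}
```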
Then when I open the file:

    ncnew <- nc_open(outputfn)
    # This gives the list of dates stored above in dim3; you can get the
    # spatial coordinates likewise from dim[[1]] and dim[[2]]
    # (or ncnew$dim$Lat$vals etc.)
    ncnew$dim[[3]]$vals
    # Use grep to find a date again
    lyr <- grep('20120109', ncnew$dim[[3]]$vals)
    # Get the raster I stored for that date
    ncvar_get(ncnew, 'swe', start = c(1, 1, lyr), count = c(-1, -1, 1))
    nc_close(ncnew)  # note: close the connection object, not the filename

Hope that helps someone!

Dominik Schneider
o 303.735.6296 | c 518.956.3978

On Fri, Feb 6, 2015 at 1:30 PM, dschneiderch [via R-sig-geo] <ml-node+s2731867n7587748...@n2.nabble.com> wrote:

> Ok - looks like it worked this time for 112 files from 2012. The netcdf
> is 2.25 GB while the compressed multiband geotiff is 510 MB. Does the
> netcdf have that much overhead? The 112 files at 10 MB each are only
> 1.12 GB individually.
> I like the tidiness of 1 file per year, so I'll have to play with how
> easily these can be accessed and the best way of annotating the layers.
> I was just reading that netcdf4 is based on hdf5 with a subset of
> features, so I might look to see if hdf5 can do what I want.
> Thanks
> ds
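PS: in case it's useful, a hedged, untested sketch of wrapping a slice read back from the file into a RasterLayer. ncdf4 returns the slab with longitude varying fastest, matching the raster cell order written above, so the matrix needs transposing; the extent here is built from the stored cell-center coordinates, so strictly speaking it is off by half a cell on each edge:

```r
# Untested sketch: rebuild a RasterLayer from one time slice,
# reusing the ncnew and lyr objects from above.
lon <- ncnew$dim$Long$vals
lat <- ncnew$dim$Lat$vals
m <- ncvar_get(ncnew, 'swe', start = c(1, 1, lyr), count = c(-1, -1, 1))
r <- raster(t(m),                       # transpose: ncdf4 gives [lon, lat]
            xmn = min(lon), xmx = max(lon),
            ymn = min(lat), ymx = max(lat),
            crs = '+proj=longlat +datum=WGS84')
```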
--
View this message in context: http://r-sig-geo.2731867.n2.nabble.com/stack-many-files-without-loading-into-memory-tp7587729p7587831.html
Sent from the R-sig-geo mailing list archive at Nabble.com.

_______________________________________________
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo