On 24. September 2013 at 15:36:41, Boaz Citrin ([email protected]) wrote:
>
> Yep, I know that...
> Had this at strict windows with fragmentation 70% so either this wasn't so
> fragmented (but did fill HD...) or the window of 4 hours wasn't big enough.
>
> On Tue, Sep 24, 2013 at 4:33 PM, Stanley Iriele wrote:
>
>> Random question... Did you know you can specify parameters for couchdb to
>> run automatic compaction?
>> On Sep 24, 2013 9:30 AM, "Boaz Citrin" wrote:
>>
>>> Hello,
>>>
>>> HD is full. Need to compact db and views, but no space for temporary
>>> compaction file.
>>> I have another drive whose size is enough for this temporary file but not
>>> for the whole db.
>>>
>>> Is it possible to tell couchdb where to store the temporary db file
>>> during compaction?
>>>
>>> Thanks!
Not really. There are good reasons for this, unfortunately: the original and
new db files need to live on the same filesystem so couch can swap over
really fast when compaction completes.

There are 2 choices:

With downtime: delete the view file completely (this is what compaction will
eventually do anyway), compact the db, then regenerate the views.

Without much** downtime: set up a 2nd couchdb instance that uses a different
port and keeps its data on the other partition. Set up one-shot replication
from couch1/db -> couch2/db, trigger view generation for each ddoc, then
swap the couch instances over and run the replication again to ensure
couch2/db is up to date with the very latest changes from couch1/db.

There are a few more advanced variations on this, such as cloning your
entire /db/ dir to the 2nd drive, using rsync to get it up to date, and then
swapping couch instances over very quickly, with a fancy reverse proxy
config in front to flip from old -> new.

If you have more info about your h/w, OS, couch size & access patterns,
maybe we can find a more tailored approach that still saves your bacon.

A+
Dave
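P.S. The with-downtime route might look roughly like this. The service name,
data dir, db name ("db") and the ddoc/view names are all assumptions for
your install; commands are echoed as a dry run, so drop the "echo" to
actually execute them:

```shell
# Assumed paths/names -- adjust for your setup.
DB_DIR=/var/lib/couchdb
DB=db

# 1. Stop couch and delete the view index files; compaction would rewrite
#    them from scratch anyway. On 1.x they live under .<dbname>_design/.
echo sudo service couchdb stop
echo rm "$DB_DIR/.${DB}_design"/*.view
echo sudo service couchdb start

# 2. Compact the database file itself (runs in the background).
echo curl -X POST "http://127.0.0.1:5984/$DB/_compact" \
     -H 'Content-Type: application/json'

# 3. Regenerate the views by querying one view per ddoc; the request blocks
#    until the index is rebuilt.
echo curl "http://127.0.0.1:5984/$DB/_design/myddoc/_view/myview?limit=0"
```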

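P.P.S. And a rough sketch of the no-downtime replication dance. Ports (5984
for couch1, 5985 for couch2), db name and ddoc/view names are assumptions;
again echoed as a dry run, so drop the "echo" to actually fire them:

```shell
# couch1 on :5984 (full drive), couch2 on :5985 (data_dir on the 2nd drive).
SRC=http://127.0.0.1:5984/db
TGT=http://127.0.0.1:5985/db

# 1. One-shot replication couch1/db -> couch2/db, creating the target db.
echo curl -X POST http://127.0.0.1:5985/_replicate \
     -H 'Content-Type: application/json' \
     -d "{\"source\":\"$SRC\",\"target\":\"$TGT\",\"create_target\":true}"

# 2. Trigger view generation on couch2 (repeat for each ddoc/view).
echo curl "$TGT/_design/myddoc/_view/myview?limit=0"

# 3. After swapping the instances over, replicate once more to pick up any
#    writes that landed on couch1 in the meantime.
echo curl -X POST http://127.0.0.1:5985/_replicate \
     -H 'Content-Type: application/json' \
     -d "{\"source\":\"$SRC\",\"target\":\"$TGT\"}"
```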