Indeed puzzling.
If you delete the database (DELETE /dbname) and the request succeeds (2xx
response), then all of the db's data is fully deleted. If you think you're
seeing data persisting after deletion, you have a problem (the delete is
failing, you're not really deleting the db, or something extr
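For example, a quick way to check from the command line (host, port and
credentials here are placeholders for your own setup):

  curl -i -X DELETE http://admin:password@localhost:5984/dbname
  # a 200/202 with {"ok":true} means the delete went through;
  # a 401, 404 or other error means it did not, and the data will remain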
Hi Willem,
Good question. CouchDB has a 100% copy-on-write storage engine, including for
all updates to B-tree nodes, etc., so any update to the database will
necessarily increase the file size before compaction. Looking at your info I
don’t see a heavy source of updates, so it is a little puzzling
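One way to watch this happen (assuming jq is installed, and that the host,
port and admin credentials below match your setup) is to compare the on-disk
file size with the size of the live data:

  curl -s http://admin:password@localhost:5984/dbname | jq '.sizes'
  # sizes.file is the bytes on disk and grows with every write;
  # sizes.active is the live data that compaction shrinks the file back towards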
Hi Adam,
I ran "POST compact" on the DB mentioned in my post and 'disk_size' went
from 729884227 (yes, it had grown that much in 1 hour!?) to 1275480.
Wow.
I disabled compaction because I thought it was useless in our case, since
the dbs and the docs are so small. I do wonder how it is possible
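For reference, the compaction request against the 2.x HTTP API looks like
this (host and admin credentials are placeholders):

  curl -X POST -H 'Content-Type: application/json' \
       http://admin:password@localhost:5984/dbname/_compact
  # returns {"ok":true} right away; compaction itself runs in the background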
Hi Willem,
Compaction would certainly reduce your storage space. You have such a small
number of documents in these databases that it would be a fast operation. Did
you try it and run into issues?
Changing cluster.q shouldn’t affect the overall storage consumption.
Adam
Hi,
Our CouchDB 2.3.1 standalone server (AWS, Ubuntu 18.04) is using a lot of
disk space, so much so that it regularly fills the disk and crashes.
The server contains approximately 100 databases, each with a reported
(Fauxton) size of less than 2.5 MB and fewer than 250 docs. Yesterday the
'shard
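A quick way to see which databases are actually taking the space (assuming
jq is installed; the host, port and admin credentials are placeholders):

  for db in $(curl -s http://admin:password@localhost:5984/_all_dbs | jq -r '.[]'); do
    printf '%s\t' "$db"
    curl -s "http://admin:password@localhost:5984/$db" | jq '.sizes.file'
  done
  # .sizes.file is the bytes on disk, which can be far larger than the
  # document data that Fauxton reports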
Glad this worked out. Quick tip, then: unless you run this on an 8-core (or
more) machine, you might want to look into reducing your q for this database.
q=2 or $num_cores is a good rule of thumb. You can use our couch-continuum tool
to migrate an existing db: https://npmjs.com/couch-continuum
C
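For anyone curious, couch-continuum automates a flow you can also sketch by
hand with plain HTTP calls, roughly like the following (database names, host
and credentials are placeholders):

  # create a replacement database with 2 shards instead of the 2.x default of 8
  curl -X PUT 'http://admin:password@localhost:5984/mydb_q2?q=2'
  # copy the data over with a one-shot replication
  curl -X POST -H 'Content-Type: application/json' \
       http://admin:password@localhost:5984/_replicate \
       -d '{"source": "http://admin:password@localhost:5984/mydb",
            "target": "http://admin:password@localhost:5984/mydb_q2"}'
  # when it finishes, point clients at the new database (or delete and
  # recreate the original with ?q=2 and replicate back)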
OK, I found it.
In the 8 shards subdirectories (from -1fff to e000-)
there were still 8 frwiki files (frwiki.1510609658.couch) of 5 GB each.
I deleted them with:
find . -name frwiki.1510609658.couch -delete
from the shards dir, and now they are gone.
Hopefully it won’t
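If anyone else runs into this, one way to spot such orphaned shard files
before deleting anything (the data path below is the default for the CouchDB
convenience packages and may differ on your install; credentials are
placeholders):

  # databases CouchDB still knows about
  curl -s http://admin:password@localhost:5984/_all_dbs
  # database names present on disk under the shard ranges
  find /opt/couchdb/data/shards -name '*.couch' -printf '%f\n' \
    | sed 's/\.[0-9]*\.couch$//' | sort -u
  # anything on disk that is not in _all_dbs is a leftover that only wastes space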