Fábio,

If you are using Bitcask (the default storage engine), bear in mind that
it is append-only: every update writes a new copy of the value, and the
old copies aren't reclaimed until a merge runs. By default, a data file
won't be compacted until it reaches the 2GB max file size. Since your
disk space is constrained, I suggest you tune max_file_size down
(128-256MB should be sufficient for your purposes) and adjust the merge
triggers and thresholds.
More info about those settings can be found on this page:
http://wiki.basho.com/Bitcask.html#Disk-Usage-and-Merging-Settings
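
For example, the bitcask section of your app.config might look something
like this (a sketch, assuming a 128MB max file size; the data_root path
and the specific trigger values are illustrative, so check the wiki page
above for the exact names and defaults in your Riak version):

    {bitcask, [
        {data_root, "/var/lib/riak/bitcask"},
        %% Rotate to a new data file at 128MB instead of the 2GB default
        {max_file_size, 134217728},
        %% Merge once 40% of the keys in a data file are stale
        {frag_merge_trigger, 40},
        %% Merge once a data file holds 128MB of stale data
        {dead_bytes_merge_trigger, 134217728}
    ]}

You'll need to restart the node for app.config changes to take effect.
Merges only operate on data files that have already been closed, which is
why lowering max_file_size is the important knob in your case.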

Hope that helps!

On Tue, Apr 17, 2012 at 9:20 AM, Fábio Sato <[email protected]> wrote:

> Hello all,
>
> I've started using Riak as a data store for images and documents and I'm
> having trouble understanding its disk usage behaviour.
>
> We developed a web system that unfortunately went into production
> without the appropriate hardware infrastructure, so we are currently
> running only one instance of Riak (I know this is dead wrong, but I'm
> waiting for more hardware).
>
> Currently we have around 100 keys and update their content every 5
> minutes. Each image is approximately 300kB, so I would expect disk usage
> of around 300MB-1GB, and that Riak would keep it constant. But every 2
> weeks we get a full filesystem (10GB) and have to restart Riak (or
> temporarily add another instance) to make it release space.
>
> To me it seems like it is keeping the previous values for each key, like
> a versioning system, but I can't confirm that based on what I've read in
> the documentation.
>
> Could someone give me a hint as to why the disk space grows even though
> I'm not adding any new keys?
>
> Any feedback would be appreciated.
>
> Thanks!
> --
> Fábio Sato
>


-- 
Sean Cribbs <[email protected]>
Software Engineer
Basho Technologies, Inc.
http://basho.com/
_______________________________________________
riak-users mailing list
[email protected]
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
