Yes, you can raise max_open_files as high as you have memory to cover.
The plan for Riak 2.0 is to make these settings automatic and dynamic. Then
the cache_size will automatically decline as max_open_files increases to cover
your dataset. Of course, there will be limits to prevent memory exhaustion.
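As a rough illustration of the memory trade-off described above, here is a sketch of estimating leveldb's cache footprint per node. The ~4 MB file-cache allocation per open file comes from the recommendation later in this thread; the vnode count of 8 is a hypothetical number for illustration, not from the thread.

```python
# Rough leveldb cache memory estimate per Riak node (illustrative sketch).
# Assumptions: ~4 MB of file-cache allocation per open file (per the thread's
# "50 * 4Mbytes" comment), and a hypothetical 8 vnodes on this node.
MB = 1024 * 1024

def leveldb_memory_per_node(max_open_files, cache_size_bytes, vnodes):
    """Approximate cache memory: (file cache + data block cache) per vnode."""
    per_vnode = max_open_files * 4 * MB + cache_size_bytes
    return per_vnode * vnodes

# Settings recommended in the thread: 50 open files, 10 MB block cache.
total = leveldb_memory_per_node(max_open_files=50,
                                cache_size_bytes=10485760,
                                vnodes=8)
print(total // MB)  # → 1680
```

With these numbers each vnode budgets about 210 MB, so raising `max_open_files` on a many-vnode node multiplies quickly; that is the memory the automatic tuning described above would have to keep in check.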
On Sun, Jul 28, 2013 at 10:08 PM, Matthew Von-Maszewski wrote:
Christian,
leveldb has two independent caches: file cache and data block cache. You have
raised the data block cache from its default 8M to 256M per your earlier note.
I would recommend the following:
{max_open_files, 50}, %% 50 * 4Mbytes allocation for file cache
{cache_size, 10485760} %% 10 Mbytes data block cache
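For context, these tuples go in the eleveldb section of Riak's app.config. A minimal sketch of that section, assuming the default data_root path (the surrounding entries are illustrative, only the two tuples above come from this thread):

```erlang
%% app.config excerpt -- eleveldb backend settings (illustrative sketch)
{eleveldb, [
    {data_root, "/var/lib/riak/leveldb"},  %% assumed default location
    {max_open_files, 50},                  %% 50 * 4Mbytes file cache
    {cache_size, 10485760}                 %% 10 Mbytes data block cache
]}
```

Note that both caches are allocated per vnode, so the node-wide cost scales with the number of vnodes it hosts.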
On Thu, Jul 25, 2013 at 2:16 PM, Christian Rosnes <
christian.ros...@gmail.com> wrote:
Hi,
During a test I just performed on a small Riak 1.4 cluster setup on Azure,
I started seeing the Riak errors messages listed below after about 10
minutes.
The simple test was performed using the latest JMeter running on two Azure
instances, each of which also runs haproxy and load-balances the http/r