First:  Step 2 is talking about how many vnodes exist on a single physical
server.  If your ring size is 256 and you have 8 servers, then your vnode
count for Step 2 is 256 / 8 = 32.
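
A rough sketch of that arithmetic in the Erlang shell (the 16G working memory
is carried over from Step 1 in the quoted mail below):

  %% Step 2 with the per-server vnode count.
  RingSize = 256,
  ServerCount = 8,
  VnodesPerServer = RingSize div ServerCount,    %% 32 vnodes per server
  WorkingMemory = 16 * 1024 * 1024 * 1024,       %% 16G from Step 1
  VnodeWorkingMemory = WorkingMemory div VnodesPerServer.
  %% => 536870912 bytes, i.e. 512 MB per vnode

With 32 vnodes per server instead of 256, each vnode gets 512 MB of working
memory rather than the 64 MB in the example.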

Second:  the 2048 is a constant forced by Google's leveldb implementation.  It
is the size, in bytes, of the portion of a file covered by a single bloom
filter.  This constant disappears with the upcoming 1.3 release.

Third:  yes, there is a "block_size" parameter, and it defaults to 4096.
Increase it only if you want to REDUCE the performance of the leveldb
instance.  4096 is a very happy value:  we have customers and tests with
130 KB data values, all using a 4096 block size.  block_size only governs the
minimum write unit, i.e. the aggregate size that small values must reach
before they are written out as one block.
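
For reference, these knobs live in the eleveldb section of app.config.  A
minimal sketch; the values shown are illustrative, not a recommendation, so
check the defaults shipped with your own install:

  %% app.config excerpt (sketch; values are illustrative)
  {eleveldb, [
      {data_root, "/var/lib/riak/leveldb"},
      {block_size, 4096},      %% leave at the default, per the advice above
      {cache_size, 8388608},   %% 8 MB
      {max_open_files, 30}
  ]}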

Use 104 MB for your average sst file size.  It is "good enough".
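
Putting Step 3 together with the numbers from this thread (Erlang shell
sketch; MaxOpenFiles is an assumed placeholder, so substitute your configured
max_open_files):

  %% Step 3 with this thread's numbers.
  MaxOpenFiles = 30,                       %% assumption; use your setting
  AvgSstSize = 104 * 1024 * 1024,          %% 104 MB, as suggested above
  KeySize = 16,                            %% bytes
  ValueSize = 14336,                       %% 14 KB
  OpenFileMemory = (MaxOpenFiles - 10) *
      (184 + (AvgSstSize / 2048) *
          (8 + ((KeySize + ValueSize) / 2048 + 1) * 0.6)).
  %% => 13640160.0 bytes, roughly 13 MB per vnode

Note that both /2048 divisors are the bloom-filter constant from the second
point above, not the 4096 block_size.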


I am not following the question stream for Step 4 and beyond.  Please state
it again.

Matthew




On Feb 3, 2013, at 3:44 PM, Simon Effenberg <seffenb...@team.mobile.de> wrote:

> Hi,
> 
> I'm not sure I understand all of this well enough to calculate the memory
> usage per file and the other values.
> 
> The web page gives me the steps, but I'm completely unsure whether I
> understand all the parameters.
> 
> "Step 1: Calculate Available Working Memory"
> 
> taking the example:
> 
> leveldb_working_memory = 32G * (1 - .50) = 16G
> 
> "Step 2: Calculate Working Memory per vnode"
> 
> vnode_working_memory = leveldb_working_memory / vnode_count
> 
> vnode_count = 256
> 
> => vnode_working_memory = 16G / 256 = 64MB/vnode
> 
> also easy
> 
> "Step 3: Estimate Memory Used by Open Files"
> 
> open_file_memory =
>   (max_open_files - 10) * (
>     184 + (average_sst_filesize / 2048) *
>     (8 + ((average_key_size + average_value_size) / 2048 + 1) * 0.6)
>   )
> 
> So how do I know average_sst_filesize (and what exactly is this value)?
> Is 2048 the correct divisor in both places, or is it 4096 in Riak 1.2?
> And how do I know max_open_files?
> 
> 
> average_key_size could be 16 bytes (I have to ask someone, but taking that
> for now) and average_value_size will be 14 KB.
> 
> so for now
> 
> open_file_memory =
>   (max_open_files - 10) * (
>     184 + (average_sst_filesize / 2048) *
>     (8 + ((16 + 14336) / 2048 + 1) * 0.6)
>   )
> 
> (Side questions: should I increase the block_size because of the large
> average value size?  And should I leave cache_size at the default value, as
> was recommended?)
> 
> "Step 4: Calculate Average Write Buffer"
> 
> Should I increase these values or not?  If only two write buffers are held
> in memory and I have, for example, 32 GB of RAM as in this scenario,
> shouldn't I increase them?
> 
> "Step 5: Calculate vnode Memory Used"
> 
> memory/vnode = average_write_buffer_size + cache_size + open_file_memory + 20 MB
> 
> So for now I'm missing almost all three values :(.
> 
> To give an idea:
> 
> - 3 buckets
> - roughly 343347732 keys overall (but only 2/3 of them average 14 KB)
> 
> 
> Thx for help!
> Simon
> 

