This seems like a question that would be answered in some of the docs, but I
can't find the details...

What kind of upper limits are there on large buckets? Millions of entries?
Billions? Is the limit effectively just the amount of disk space available?
And is there any performance degradation from using one massive bucket,
rather than cutting things down into more digestible chunks and splitting
them into different buckets (which admittedly pulls from a relational
database mindset)? A rough sketch of the two layouts I'm comparing is below.
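For concreteness, the two layouts would look roughly like this. This is just
a sketch assuming the basho Python client and a local node on default ports;
the bucket and key names are purely illustrative:

    import riak

    # Sketch only: assumes the Python riak client with default connection settings.
    client = riak.RiakClient()

    # Option A: one massive bucket, composite keys carry the structure.
    big = client.bucket('events')
    big.new('2011-06-14/user-42/click', data={'page': '/home'}).store()

    # Option B: the same data split across many smaller buckets (e.g. per day).
    daily = client.bucket('events-2011-06-14')
    daily.new('user-42/click', data={'page': '/home'}).store()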

Clearly there is a map/reduce implication in having to iterate over
millions of entries, but is there an appreciable difference in the
read/write speed Riak delivers when its buckets get quite large?
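To make the map/reduce part concrete, the kind of full-bucket job I have in
mind is something like the following (again only a sketch, assuming the
Python client's MapReduce interface and anonymous JavaScript phases):

    import riak

    client = riak.RiakClient()

    # Full-bucket MapReduce over 'events': Riak has to walk every key in the
    # bucket, which is the iteration cost I'm asking about.
    query = client.add('events')
    query.map('function(v) { return [1]; }')  # emit 1 per object
    query.reduce('function(values) { return [values.reduce(function(a, b) { return a + b; }, 0)]; }')
    result = query.run()
    print(result)  # e.g. a single-element list with the object count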