I am using Riak with LevelDB as the storage engine.
app.config:
{storage_backend, riak_kv_eleveldb_backend},
{eleveldb, [
    {data_root, "/var/lib/riak/leveldb"},
    {write_buffer_size, 4194304},  %% 4 MB, in bytes
    {max_open_files, 50},          %% maximum number of files open at once, per partition
    {block_size, 65536},           %% 64 KB blocks
    {cache_size, 33554432},        %% 32 MB cache size, per partition
    {verify_checksums, true}       %% make sure data is what we expected it to be
]},
I want to insert a million keys into the store (into a given bucket).
pseudo-code:
riakClient = RiakFactory.pbcClient();
myBucket = riakClient.createBucket("myBucket").nVal(1).execute();
for (int i = 1; i <= 1000000; ++i) {
    final String key = String.valueOf(i);
    myBucket.store(key, new String(payload)).returnBody(false).execute();
}
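For reference, a fuller, compilable version of that insert loop might look like the following, assuming the legacy riak-java-client 1.x PBC API (the payload bytes below are a stand-in, since the real value is not shown). Note that each store builder only sends anything to Riak when execute() is called on it.

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.bucket.Bucket;

public class BulkInsert {
    public static void main(String[] args) throws Exception {
        // Connect over protocol buffers (defaults to 127.0.0.1:8087).
        IRiakClient riakClient = RiakFactory.pbcClient();

        // Create (or fetch) the bucket with n_val = 1.
        Bucket myBucket = riakClient.createBucket("myBucket").nVal(1).execute();

        // Stand-in payload; the real value bytes are not shown in the question.
        byte[] payload = new byte[1024];

        for (int i = 1; i <= 1000000; ++i) {
            final String key = String.valueOf(i);
            // The store is only sent to Riak when execute() is called.
            myBucket.store(key, new String(payload)).returnBody(false).execute();
        }

        riakClient.shutdown();
    }
}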
After the insert loop above, when I do:
int count = 0;
for (String key : myBucket.keys()) {
    ++count;
}
return count;
this returns a total of roughly 14K keys, while I was expecting close to 1 million.
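For completeness, a self-contained sketch of that key-count check, under the same 1.x PBC client assumption (listing keys streams every key in the bucket, so it is a heavyweight operation on large buckets):

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.bucket.Bucket;

public class CountKeys {
    public static void main(String[] args) throws Exception {
        IRiakClient riakClient = RiakFactory.pbcClient();
        Bucket myBucket = riakClient.fetchBucket("myBucket").execute();

        // keys() streams every key in the bucket; count them as they arrive.
        int count = 0;
        for (String key : myBucket.keys()) {
            ++count;
        }
        System.out.println("key count: " + count);

        riakClient.shutdown();
    }
}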
I am using riak-java-client (pbc).
Which setting or missing client code can explain the discrepancy? Thanks.