Custom Object Mapper settings in Java Client

2016-02-22 Thread Cosmin Marginean
Hi, I presume that the Riak Java client uses Jackson for JSON-to-POJO conversion and vice versa. Is there a way to easily inject a custom object mapper there? Or at least to get a reference to it in order to add custom serializers? Thank you, Cosmin

Re: Custom Object Mapper settings in Java Client

2016-02-22 Thread Vitaly E
Hi Cosmin, Have a look at com.basho.riak.client.api.convert.ConverterFactory. It's a singleton; you can register a custom converter there (the default for classes other than String and RiakObject is com.basho.riak.client.api.convert.JSONConverter). It's also possible to pass a custom converter to
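
For illustration, here is a minimal sketch of the Jackson side of that approach; the Money type and MoneySerializer are hypothetical, and a Converter built around the resulting mapper would then be registered through ConverterFactory.getInstance() as described above.

    import com.fasterxml.jackson.core.JsonGenerator;
    import com.fasterxml.jackson.databind.JsonSerializer;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.fasterxml.jackson.databind.SerializerProvider;
    import com.fasterxml.jackson.databind.module.SimpleModule;
    import java.io.IOException;
    import java.math.BigDecimal;

    // Hypothetical domain type, used only for illustration.
    class Money {
        final BigDecimal amount;
        final String currency;
        Money(BigDecimal amount, String currency) { this.amount = amount; this.currency = currency; }
    }

    // Custom serializer: writes Money as a single string such as "12.50 EUR".
    class MoneySerializer extends JsonSerializer<Money> {
        @Override
        public void serialize(Money value, JsonGenerator gen, SerializerProvider serializers) throws IOException {
            gen.writeString(value.amount.toPlainString() + " " + value.currency);
        }
    }

    class CustomMapperSketch {
        // Builds an ObjectMapper with the custom serializer registered.
        static ObjectMapper buildMapper() {
            ObjectMapper mapper = new ObjectMapper();
            SimpleModule module = new SimpleModule();
            module.addSerializer(Money.class, new MoneySerializer());
            mapper.registerModule(module);
            return mapper;
        }
    }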

riak crash

2016-02-22 Thread Raviraj Vaishampayan
Hi, We have been using Riak to gather our test data and analyze the results after a test completes. Recently we have observed a Riak crash in the Riak console logs. This causes our tests to fail to record data to Riak and bail out :-( The crash logs are as follows: 2016-02-19 16:25:26.255 [error] <0.2160.

Re: Regarding the number of partitions in riak cluser

2016-02-22 Thread Chathuri Gunawardhana
I'm running from the master version of this. I'm running on Ubuntu precise and have 10 GB RAM, 2 vCPUs, and a 100 GB hard disk. Each instance is running a single Riak node. Altogether I have 10 instances (for 512 partitions) and 20 for 1024 partitions. The cluster works f

Increase number of partitions above 1024

2016-02-22 Thread Chathuri Gunawardhana
Hi, It is not possible to increase the number of partitions above 1024; this has been disabled via cuttlefish in riak.conf. When I try to increase ring_size via riak.conf, the error suggests that I should configure a partition count > 1024 via the advanced config file. But I couldn't find a way of how I c
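
For reference, ring sizes above 1024 are typically set in advanced.config (which bypasses the cuttlefish validation) rather than in riak.conf; a sketch, where 2048 is only an example value:

    %% advanced.config, in the same directory as riak.conf
    [
      {riak_core, [
        {ring_creation_size, 2048}
      ]}
    ].

The ring size only takes effect when the cluster is first created, so it has to be in place before the nodes are started and joined.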

Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Vanessa Williams
Hi Dmitri, this thread is old, but I read this part of your answer carefully: You can use the following strategies to prevent stale values, in increasing order of security/preference: 1) Use timestamps (and not pass in vector clocks/causal context). This is ok if you're not editing objects,

Re: Custom Object Mapper settings in Java Client

2016-02-22 Thread Cosmin Marginean
Thank you, Vitaly, will give that a go. On Mon, Feb 22, 2016 at 9:54 AM, Vitaly E <13vitam...@gmail.com> wrote: > Hi Cosmin, > > Have a look at com.basho.riak.client.api.convert.ConverterFactory. It's a > singleton, you can register a custom converter there (the default for > classes other than S

Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Alex Moore
Hi Vanessa, You might have a problem with your delete function (depending on its return value). What does the return value of the delete() function indicate? Right now, if an object existed and was deleted, the function will return true, and will only return false if the object didn't exist or o

Riak 2.1.3 hooks not invoked

2016-02-22 Thread Adam Kovari
Hello, I am trying to enable a post-commit hook for my bucket type, and I can see it is configured in the bucket type:
➜  idvt-riak git:(master) ✗ riak-admin bucket-type status change_log
change_log is active
active: true
allow_mult: true
basic_quorum: false
big_vclock: 50
chash_keyfun: {riak_core
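
For comparison, a post-commit hook is normally attached through the bucket type's properties; a sketch, where my_hooks:log_change is a hypothetical Erlang module and function that must be compiled and on the code path of every node:

    riak-admin bucket-type update change_log \
      '{"props":{"postcommit":[{"mod":"my_hooks","fun":"log_change"}]}}'
    riak-admin bucket-type status change_log    # the postcommit entry should now appear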

Re: Increase number of partitions above 1024

2016-02-22 Thread Alex Moore
Hi Chathuri, Larger ring sizes are not usually recommended: you can overload disk I/O if the ratio of vnodes to nodes is too high, and similarly you can underload other system resources if the vnode/node ratio is too low. How many nodes are you planning on running? Thanks, Alex On Mon, Feb 22, 201

Re: riak crash

2016-02-22 Thread Matthew Von-Maszewski
Raviraj, Please run 'riak-debug'. This is in the bin directory along with 'riak start' and 'riak-admin'. riak-debug will produce a file named something like /home/user/r...@10.0.0.15-riak-debug.tar.gz. You should email that file to me directly,

Re: Increase number of partitions above 1024

2016-02-22 Thread Chathuri Gunawardhana
For my experiment I will be using 100 nodes. Thank you! On Mon, Feb 22, 2016 at 4:40 PM, Alex Moore wrote: > Hi Chathuri, > > Larger ring sizes are not usually recommended, you can overload disk I/O > if the number of vnodes to nodes is too high. > Similarly you can underload other system resou
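
As a rough worked example of the vnode-to-node ratio Alex mentions (the numbers are only illustrative):

    1024 partitions / 100 nodes  ≈ 10 vnodes per node
    2048 partitions / 100 nodes  ≈ 20 vnodes per node

so each doubling of the ring size doubles the number of vnodes competing for resources on every node.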

Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Vanessa Williams
See inline: On Mon, Feb 22, 2016 at 10:31 AM, Alex Moore wrote: > Hi Vanessa, > > You might have a problem with your delete function (depending on it's > return value). > What does the return value of the delete() function indicate? Right now > if an object existed, and was deleted, the functio

Re: Increase number of partitions above 1024

2016-02-22 Thread Alex Moore
Ok, what does `riak-admin status | grep riak_kv_version` return? The config files are different for Riak 1.x and 2.x. Also, for your tests, are you using any "coverage query" features like MapReduce or 2i queries? Thanks, Alex On Mon, Feb 22, 2016 at 10:43 AM, Chathuri Gunawardhana <lanch.gu

Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Alex Moore
> That's the correct behaviour: it should return true iff a value was actually deleted. Ok, if that's the case you should do another FetchValue after the deletion (to update the response.hasValues() field), or use the async version of the delete function. I also noticed that we weren't passin
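
A minimal sketch of the async variant mentioned here, assuming the 2.x Java client; the host, bucket, and key are placeholders:

    import com.basho.riak.client.api.RiakClient;
    import com.basho.riak.client.api.commands.kv.DeleteValue;
    import com.basho.riak.client.core.RiakFuture;
    import com.basho.riak.client.core.query.Location;
    import com.basho.riak.client.core.query.Namespace;

    public class AsyncDeleteSketch {
        public static void main(String[] args) throws Exception {
            RiakClient client = RiakClient.newClient("127.0.0.1");       // placeholder address
            Location loc = new Location(new Namespace("bucket"), "key"); // placeholder location

            DeleteValue delete = new DeleteValue.Builder(loc).build();
            RiakFuture<Void, Location> future = client.executeAsync(delete);
            future.await();                        // wait for the delete to settle
            if (!future.isSuccess()) {
                future.cause().printStackTrace();  // inspect why the delete failed
            }
            client.shutdown();
        }
    }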

Re: Increase number of partitions above 1024

2016-02-22 Thread Chathuri Gunawardhana
I'm using the Riak master version from the Riak GitHub repo (riak_kv_version : <<"2.1.1-38-ga8bc9e0">>). I don't use coverage queries. When I try to set the partition count over 1024, it suggests doing it via the advanced config (in the cuttlefish schema for riak_core, there is a validation to see whether it is above

Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Vanessa Williams
Hi Alex, would a second fetch just indicate that the object is *still* deleted? Or that this delete operation succeeded? In other words, perhaps my contract really is: return true if there was already a value there. In which case, would the second fetch be superfluous? Thanks for your help.

Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Alex Moore
If the contract is "Return true iff the object existed", then the second fetch is superfluous, and so is the async example I posted. You can use the code you had as-is. Thanks, Alex On Mon, Feb 22, 2016 at 1:23 PM, Vanessa Williams <vanessa.willi...@thoughtwire.ca> wrote: > Hi Alex, would a secon
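
Put together, the contract discussed in this thread can be sketched roughly as follows (a sketch, assuming the 2.x Java client; deleteIfPresent is a hypothetical helper and the location is a placeholder):

    import com.basho.riak.client.api.RiakClient;
    import com.basho.riak.client.api.commands.kv.DeleteValue;
    import com.basho.riak.client.api.commands.kv.FetchValue;
    import com.basho.riak.client.core.query.Location;

    public class DeleteContractSketch {
        // Returns true iff an object existed at the location (and was then deleted).
        static boolean deleteIfPresent(RiakClient client, Location loc) throws Exception {
            FetchValue.Response fetched = client.execute(new FetchValue.Builder(loc).build());
            boolean existed = fetched.hasValues();   // the hasValues() check mentioned above
            if (existed) {
                client.execute(new DeleteValue.Builder(loc).build());
            }
            return existed;
        }
    }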

Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Vanessa Williams
Thanks very much for the advice. I'll give it a good test and then write something. Somewhere. Cheers. On Mon, Feb 22, 2016 at 3:42 PM, Alex Moore wrote: > If the contract is "Return true iff the object existed", then the second > fetch is superfluous + so is the async example I posted. You can

Configure multiple riak clients in a cluster

2016-02-22 Thread Chathuri Gunawardhana
Hi All, I'm using the distributed version of the Riak client (here). I could configure one Riak client for the cluster, but when I try to start 2, one of them crashes (the error suggests that there is a global name conflict). Can yo

Re: Configure multiple riak clients in a cluster

2016-02-22 Thread Christopher Meiklejohn
Your client is registering with the name in the config file, and that name can only be used once. You need to have each client use a different name. > {riakclient_mynode, ['riak_bench@172.31.0.117', longnames]}. Christopher Sent from my iPhone > On Feb 22, 2016, at 15:46, Chathuri Gunawardha

Re: Configure multiple riak clients in a cluster

2016-02-22 Thread Chathuri Gunawardhana
I didn't quite get that. Can you please provide an example? Thank you very much! On Tue, Feb 23, 2016 at 2:07 AM, Christopher Meiklejohn <christopher.meiklej...@gmail.com> wrote: > Your client is registering with the name in the config file, and that name > can only be used once. > > You nee
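
For illustration, the fix Christopher describes amounts to giving each bench client its own Erlang node name in its own config file; a sketch with hypothetical names and addresses:

    %% config for bench client 1
    {riakclient_mynode, ['riak_bench1@172.31.0.117', longnames]}.

    %% config for bench client 2 (a different name, run from its own host)
    {riakclient_mynode, ['riak_bench2@172.31.0.118', longnames]}.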

Re: Configure multiple riak clients in a cluster

2016-02-22 Thread Chathuri Gunawardhana
I think it should use the distributed basho bench configuration as described here. But even with that, I get the same throughput results as with a single bench client. I'm not sure what I'm doing wrong. Let's say I have 2 nodes A