Re: 2i, search or something else for most efficient range lookup

2013-08-22 Thread Y N
the differences between startval and endval may be 0 or 1 or as high as 50 million). From: Jon Meredith jmered...@basho.com; To: Y N yug...@yahoo.com; Cc: riak-users@lists.basho.com; Sent: Monday, August 19, 2013 5:34 AM; Subject: Re: 2i

Re: 2i, search or something else for most efficient range lookup

2013-08-22 Thread Y N
and won't quite suit this particular use case, which involves a somewhat range-type binary query. From: Kresten Krab Thorup k...@trifork.com; To: Y N yug...@yahoo.com; Cc: Jon Meredith jmered...@basho.com; riak-users@lists.basho.com; riak-users

2i, search or something else for most efficient range lookup

2013-08-16 Thread Y N
Hi, I have a question regarding the most efficient way to perform a search / lookup for something. This isn't a typical range lookup (where you want to find all objects given a specific range). In this case I want to find a specific object where a lookup value falls within that object's range.
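The question above has a standard shape: index each object by its start value, fetch the entry with the greatest start at or below the lookup value, then check the lookup against that object's end value. A minimal, self-contained Java sketch of the idea, assuming the ranges do not overlap (a TreeMap stands in for a Riak 2i start_int index; the class and field names here are hypothetical, not Riak client API):

```java
import java.util.Map;
import java.util.TreeMap;

// Conceptual sketch (not Riak client code): to answer "which object's
// range contains this value?", look up the greatest start <= value and
// verify the value falls before that object's end. With 2i the same
// idea maps to a range query on a start_int index.
public class RangeOwnerLookup {
    static final class Range {
        final long start, end;          // inclusive bounds
        Range(long start, long end) { this.start = start; this.end = end; }
    }

    private final TreeMap<Long, Range> byStart = new TreeMap<>();

    void put(Range r) { byStart.put(r.start, r); }

    // Returns the range containing 'value', or null if none does.
    Range find(long value) {
        Map.Entry<Long, Range> e = byStart.floorEntry(value);
        if (e == null) return null;
        Range r = e.getValue();
        return value <= r.end ? r : null;
    }

    public static void main(String[] args) {
        RangeOwnerLookup idx = new RangeOwnerLookup();
        idx.put(new Range(0, 99));
        idx.put(new Range(100, 100));        // a range may be a single value
        idx.put(new Range(200, 50_000_200)); // ...or span ~50 million
        System.out.println(idx.find(150));          // null (falls in a gap)
        System.out.println(idx.find(42).start);     // 0
        System.out.println(idx.find(1_000_000).start); // 200
    }
}
```

This costs one index query plus one bounds check per lookup, instead of scanning all objects.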

Mapreduce limits / scalability

2013-07-27 Thread Y N
Hi, I have recently started using Riak and had a question about mapreduce queries: is there a limit on the number of queries that can be run concurrently (per node / cluster, etc.)? Also, does Riak optimize concurrent requests for the same mapreduce query / queries by returning results

Re: Connection timeout issues

2013-07-15 Thread Y N
surprised I'm running into a timeout issue. From: Jeremiah Peschka jeremiah.pesc...@gmail.com; To: Y N yug...@yahoo.com; Cc: riak-users@lists.basho.com; Sent: Monday, July 15, 2013 6:33 AM; Subject: Re: Connection timeout issues Riak

Connection timeout issues

2013-07-14 Thread Y N
I don't know if this is a client (Java 1.1.1 client) or server issue. I recently upgraded to 1.4 and I am now seeing timeout issues when my app is trying to read data, either via mapreduce or by getting all keys for a bucket. Currently my server has no data (this is my test server and I wiped

Re: New Counters - client support

2013-07-11 Thread Y N
Unlimited MCITP: SQL Server 2008, MVP; Cloudera Certified Developer for Apache Hadoop. On Jul 10, 2013, at 8:00 PM, Y N yug...@yahoo.com wrote: Hi, The counters stuff looks awesome; can't wait to use it. Is this already supported via the currently available clients (specifically, the Java

Riak 1.4 - Changing backend through API

2013-07-10 Thread Y N
Hi, I just upgraded to 1.4 and have updated my client to the Java 1.1.1 client. According to the release notes, all bucket properties are now configurable through the PB API. I tried setting my backend through the Java client, however I get an exception: "Backend not supported for PB".

New Counters - client support

2013-07-10 Thread Y N
Hi, The counters stuff looks awesome; can't wait to use it. Is this already supported via the currently available clients (specifically, the Java 1.1.1 client)? Also, when can we expect some tutorial / documentation around using counters? I looked at the GitHub link, however, some use
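The counters introduced in Riak 1.4 are PN-counter CRDTs: each actor tracks its own increments and decrements, and replicas converge by taking the per-actor maximum on merge. A conceptual, self-contained Java sketch of that data structure (this is not the Riak client API; the class and method names are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual PN-counter sketch: per-actor increment and decrement
// tallies; value() is total increments minus total decrements.
public class PnCounter {
    private final Map<String, Long> incs = new HashMap<>();
    private final Map<String, Long> decs = new HashMap<>();

    void increment(String actor, long n) { incs.merge(actor, n, Long::sum); }
    void decrement(String actor, long n) { decs.merge(actor, n, Long::sum); }

    long value() {
        long up = incs.values().stream().mapToLong(Long::longValue).sum();
        long down = decs.values().stream().mapToLong(Long::longValue).sum();
        return up - down;
    }

    // Pairwise-max merge: commutative, associative, and idempotent,
    // so replicas converge regardless of delivery order.
    void merge(PnCounter other) {
        other.incs.forEach((a, n) -> incs.merge(a, n, Math::max));
        other.decs.forEach((a, n) -> decs.merge(a, n, Math::max));
    }
}
```

Because merge is deterministic, counter updates never produce siblings the application has to resolve by hand.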

Re: Java client - Bug with ifNotModified?

2013-06-04 Thread Y N
From: Y N yug...@yahoo.com; To: riak-users@lists.basho.com; Cc: Brian Roach ro...@basho.com; Sent: Saturday, May 25, 2013 7:21 PM; Subject: Java client - Bug with ifNotModified? I am using ifNotModified and am running into a weird situation. I am using the following API

Java client - Bug with ifNotModified?

2013-05-25 Thread Y N
I am using ifNotModified and am running into a weird situation. I am using the following API: return bucket.store(key, new MyObject()).withMutator(mutator).withConverter(converter).ifNotModified(true).returnBody(true).execute(); The problem I run into is that I get a not found exception when
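For context on what a conditional store is asking the server to do: the put carries the version from the last fetch, and the write is rejected if the stored version has moved on since. The "not found" case in the snippet above fits this model, since a brand-new key has no previously fetched version to match against. A self-contained sketch of the semantics (not the Riak implementation; a plain counter stands in for a real vector clock, and all names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Conceptual sketch of ifNotModified-style semantics: store succeeds
// only if the current version still equals the version the caller
// fetched earlier.
public class ConditionalStore {
    static final class Versioned {
        final String value; final long vclock;
        Versioned(String value, long vclock) { this.value = value; this.vclock = vclock; }
    }

    private final Map<String, Versioned> data = new HashMap<>();
    private final AtomicLong clock = new AtomicLong();

    Versioned fetch(String key) { return data.get(key); }

    // Returns the new version, or null when the condition fails
    // (the object changed since 'expected' was fetched).
    Versioned putIfNotModified(String key, String value, Versioned expected) {
        Versioned cur = data.get(key);
        boolean unmodified = (cur == null && expected == null)
                || (cur != null && expected != null && cur.vclock == expected.vclock);
        if (!unmodified) return null;
        Versioned next = new Versioned(value, clock.incrementAndGet());
        data.put(key, next);
        return next;
    }
}
```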

Java client and siblings question

2013-05-19 Thread Y N
Hi, I am currently using the latest Java client, and I have a question regarding updating data in a bucket where siblings are allowed (i.e. allowSiblings = true). I finally understand the whole read-resolve-mutate-write cycle, and also doing an update / store using previously fetched data
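The read-resolve-mutate-write cycle mentioned above can be sketched in plain Java, with the actual Riak fetch/store elided (the names below are illustrative, not the client's real ConflictResolver/Mutation interfaces). The resolver here merges string-set siblings by union, which is a safe choice because set union is commutative and idempotent:

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.UnaryOperator;

// Conceptual sketch of read-resolve-mutate-write: fetch may return
// several siblings; resolve collapses them into one canonical value;
// the mutation is applied to that value; the result is stored back
// along with the fetched vclock (fetch/store against Riak elided).
public class ResolveMutateWrite {
    // Resolve: collapse all siblings into one canonical value.
    static Set<String> resolve(List<Set<String>> siblings) {
        Set<String> merged = new TreeSet<>();
        siblings.forEach(merged::addAll);
        return merged;
    }

    // One cycle: resolve the fetched siblings, then apply the mutation;
    // the returned value is what would be written back.
    static Set<String> update(List<Set<String>> fetched,
                              UnaryOperator<Set<String>> mutation) {
        return mutation.apply(resolve(fetched));
    }
}
```

Applying the mutation only after resolution is the key point: mutating one sibling and storing it would silently drop the others' writes.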