Riak Java client 1.0.3

2012-01-26 Thread Brian Roach
Riak Users,

We are pleased to announce the release of the Riak Java client 1.0.3.
This is a bug fix release. Please see the CHANGELOG for the full list of
changes in Riak Java client 1.0.3.

The 1.0.3 Java client is available from Maven Central. Add the dependency
to your pom.xml:


<dependency>
  <groupId>com.basho.riak</groupId>
  <artifactId>riak-client</artifactId>
  <version>1.0.3</version>
  <type>pom</type>
</dependency>




You can also download the client .jar (and dependencies) from GitHub.


If you prefer your client in source form, the source code can be cloned from
our GitHub repo, and 1.0.3 can be fetched by checking out the riak-1.0.3 tag.

As always, thanks for being the best community anywhere!
-Basho
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: OutOfMemoryError in Java client

2012-02-14 Thread Brian Roach
John -

The problem is that when you create a new IRiakClient ... the previous one
doesn't shut down its threads when it goes out of scope, thus leading to
your problem.

We've already fixed this in master by adding a 'shutdown()' method to the
IRiakClient interface, but the latest release cut of the client (1.0.3)
does not include it.
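
In the meantime, the workaround is to create a single IRiakClient and reuse it
for all operations. A minimal sketch (the shutdown() call assumes the method
described above, which is not yet in 1.0.3):

IRiakClient client = RiakFactory.pbcClient();
try {
    // ... run every operation against this one client ...
} finally {
    client.shutdown(); // stops the connection pool's background threads
}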

Thanks,
Brian Roach

On Tue, Feb 14, 2012 at 2:53 PM, John DeTreville wrote:

> I have a simple single-threaded Java client for Riak that consistently
> runs out of memory creating threads.
>
> java.lang.OutOfMemoryError: unable to create new native thread
>at java.lang.Thread.start0(Native Method)
>at java.lang.Thread.start(Thread.java:658)
>at
> java.util.concurrent.ThreadPoolExecutor.addIfUnderCorePoolSize(ThreadPoolExecutor.java:703)
>at
> java.util.concurrent.ThreadPoolExecutor.prestartCoreThread(ThreadPoolExecutor.java:1381)
>at
> java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:222)
>at
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:443)
>at
> com.basho.riak.pbc.RiakConnectionPool.doStart(RiakConnectionPool.java:232)
>at
> com.basho.riak.pbc.RiakConnectionPool.access$100(RiakConnectionPool.java:41)
>at
> com.basho.riak.pbc.RiakConnectionPool$State$1.start(RiakConnectionPool.java:58)
>at
> com.basho.riak.pbc.RiakConnectionPool.start(RiakConnectionPool.java:227)
>at com.basho.riak.pbc.RiakClient.(RiakClient.java:90)
>at com.basho.riak.pbc.RiakClient.(RiakClient.java:81)
>at
> com.basho.riak.client.raw.pbc.PBClientAdapter.(PBClientAdapter.java:91)
>at com.basho.riak.client.RiakFactory.pbcClient(RiakFactory.java:107)
>
> The client is a JUnit test for some data structures I'm storing in Riak.
> When I run it, my Java client process starts about 2028 native threads
> before it collapses.
>
> This JUnit test creates a moderately large number of IRiakClient objects,
> but only one at a time. It does not close them, as there is no method for
> doing so.
>
> This happens with Riak 1.0.2 and with Riak 1.1.0RC2. As I've said, the
> client is single-threaded.
>
> Any ideas?
>
> Cheers,
> John
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client dw pr pw settings

2012-02-19 Thread Brian Roach
Daniel -

The client doesn't support modifying / creating those properties for
buckets themselves in Riak. This is a server-side limitation as Riak itself
doesn't support these operations via protocol buffers.

With Object reads / writes, however, there is no issue.

The DomainBucket object is a convenience object for when you want the same
settings for all reads / writes. You can specify them in the Builder:

DomainBucket<ShoppingCart> domainBucket =
    DomainBucket.builder(myBucket, ShoppingCart.class).r(1).pr(1).w(1).build();

All calls to fetch() and store() will use those settings.

In your case, if you want only some fetch / store operations to use
alternate settings you have a couple options.

1) Create a second DomainBucket object with those alternate settings. You
can then use that DomainBucket object only for operations you want those
settings to apply to.

2) Don't use the DomainBucket wrapper and instead use the regular Bucket
interface. You can set those properties in the StoreObject and
FetchObject.
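
For example, a minimal sketch of option 2 (assumes an existing IRiakClient
named client; bucket and key names are illustrative):

Bucket bucket = client.fetchBucket("carts").execute();

// per-operation settings on a fetch
ShoppingCart cart = bucket.fetch("cart-1", ShoppingCart.class)
                          .r(1).pr(1).execute();

// different settings on a store
bucket.store("cart-1", cart).w(3).execute();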

Thanks,
Brian Roach

On Sun, Feb 19, 2012 at 9:25 AM, ivenhov  wrote:

> Hi
>
> I just found that Java PB client does not support setting pr pw dw etc  for
> writes and reads.
> Is it because I'm using IRiakClient interface and domain buckets or is it
> not supported at all?
> I would like to be able to set those properties for certain operations and
> keep defaults for others, and I don't mind using another client interface if
> that functionality is supported.
>
> When can we expect it to be implemented for domain buckets?
>
> Thanks
> Daniel
>
> --
> View this message in context:
> http://riak-users.197444.n3.nabble.com/Java-client-dw-pr-pw-settings-tp3758310p3758310.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Java Client v1.0.4

2012-02-27 Thread Brian Roach
Greetings Riak Users!

We are pleased to announce the release of the Riak Java Client v1.0.4.  
Additionally we are now publishing the Javadoc online.

This release includes bug fixes as well as new functionality.  

New features include:
* shutdown() method in the IRiakClient interface and underlying RawClients that
cleanly shuts down client threads and allows an application to exit without
hanging.
* lazyLoadProperties() method in FetchBucket and WriteBucket that allows the
deferral of the request to Riak for bucket properties.
* Support for the Riak /stats operation.

Please see the CHANGELOG for a complete list of changes.

The 1.0.4 Java Client is available from Maven Central. Add the dependency to 
your pom.xml file:


<dependency>
  <groupId>com.basho.riak</groupId>
  <artifactId>riak-client</artifactId>
  <version>1.0.4</version>
  <type>pom</type>
</dependency>


You can also download the client jar and its dependencies from the downloads
section on GitHub.

Last but not least, if you prefer your client in source form, you can clone
from our GitHub repo. The 1.0.4 client can be checked out using the
riak-client-1.0.4 tag.
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Inconsistency while writing and reading in quorum level.

2012-02-29 Thread Brian Roach
Vijayakumar,

I'm not able to reproduce any odd behavior using the same calls with a 3-node
cluster and storing/retrieving a String (tested both HTTP and Protocol Buffers).

When you say "records are not reflected", what do you mean exactly?  

Is the value in your returned IRiakObject null (meaning the key was not found),
just an empty String / byte[], or is there something there that is not the
value you expected?

Also, what release version of Riak are you using?

Thank You,
Brian Roach



On Feb 29, 2012, at 11:02 AM, vijayakumar wrote:

> Records inserted into a bucket are not reflected when read in a different
> request while using the Java Riak client (even when both read and write are at
> quorum level).
> 
> Code fragment for Write:
>    Bucket bucket = riakClient.fetchBucket(table).execute();
> IRiakObject riakObject = 
> RiakObjectBuilder.newBuilder(table,key).withValue(new 
> JSONObject(valueMap).toString()).build();
> riakObject.setContentType("application/json");
> bucket.store(riakObject).dw(Quora.QUORUM).execute();
>
> 
> Code fragment for Read:
>    Bucket bucket = riakClient.fetchBucket(table).execute();
>IRiakObject fetched = bucket.fetch(key).r(Quora.QUORUM).execute();
> 
> Number of nodes in riak-server-cluster:3
> Riak Client Version : 1.4.0
> 
> All the default properties of the buckets are retained as-is.
> 
> Am I missing anything else? Kindly help me out.
> 
> 
> Regards,
> Vijayakumar.
> 
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Client wire compatibility between versions

2012-03-07 Thread Brian Roach
Jacques - 

Older client -> newer Riak shouldn't pose any issues. 

Newer client -> older Riak should also be fine with the caveat that any 
attempts to use features that are not present in the older version of Riak will 
of course fail. 

Thanks,
Brian Roach


On Mar 1, 2012, at 10:17 AM, Jacques wrote:

> What is the client compatibility between the various versions of servers?
> 
> For example, will the latest riak-java-client work against .14, 1.0 and 1.1?
> Will a PB client used during the .14 era work against the 1.0 and 1.1 servers?
> 
> Thanks,
> Jacques
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: connection fail over

2012-03-27 Thread Brian Roach
Zheng,

By using a cluster client (HTTPClusterClient or PBClusterClient) you would get 
not only fail-over on an operation failure but also round-robin balancing of 
operations across your nodes. 

When you create either a HTTPClusterClient or a PBClusterClient you provide a
list of Riak nodes. When performing an operation there's a Retrier object that
is used by the Bucket object. The default Retrier tries an operation 3 times
before reporting a failure. On each try, a different node is selected from the
list of Riak nodes (this same mechanism also gives you load balancing across
the nodes in your list in a round-robin fashion).

The only deficiency here is that a node is never failed completely out of the
list or deprioritized in any way. We do have plans to add a pluggable load
balancing feature at some point that would allow the end user to implement
whatever scheme they would like.

I've recently started work on a Riak Java client "Cookbook" and have an example
of how to create a cluster client here:
https://github.com/basho/riak-java-client/wiki/ClientFactory
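
A minimal sketch of creating a cluster client (the hosts and the connection
limit are illustrative):

PBClusterConfig clusterConfig = new PBClusterConfig(200); // max total connections
clusterConfig.addClient(new PBClientConfig.Builder()
        .withHost("riak1.example.com").withPort(8087).build());
clusterConfig.addClient(new PBClientConfig.Builder()
        .withHost("riak2.example.com").withPort(8087).build());
IRiakClient client = RiakFactory.newClient(clusterConfig);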

Thanks,
Brian Roach

On Mar 26, 2012, at 11:56 AM, Zheng Zhu wrote:

> Hello, I am new to Riak and want to know if Riak java client has the node 
> connection feature to automatically fail over to another node after timed out 
> for current node. If so, can you share links to doc, java doc or sample code?
> 
> Thanks
> Zheng
> The information contained in this e-mail is confidential and/or proprietary 
> to the 41st Parameter. The information transmitted herewith is intended only 
> for use by the individual or entity to which it is addressed. If you are not 
> the intended recipient, you should not copy, distribute, disclose or use the 
> information it contains, please e-mail the sender immediately and delete this 
> message from your system.
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


The Riak Java Client Cookbook

2012-03-27 Thread Brian Roach
Hello Riak Java Client Users!

We're currently in the process of writing a Cookbook for the Riak Java client. 
Its purpose is to make starting with Riak easier for new developers as well as 
provide coverage of advanced topics by providing concrete examples of using the 
Riak Java client. 

A few topics have already been posted, but there are a lot more to go. If you'd 
like to be involved, we'd love the input (and we won't be shy about lavishing 
gifts of T-shirts, stickers, and praise upon contributors!).

Please take a look @ https://github.com/basho/riak-java-client/wiki/Cookbook 

Interested in contributing? See the TODO: page located @ 
https://github.com/basho/riak-java-client/wiki/TODO

Thanks!
Brian Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Java Client 1.0.5

2012-03-29 Thread Brian Roach
Greetings Riak Users!

We are pleased to announce the release of the Riak Java Client v1.0.5

This is a bug fix release that addresses two issues:

* PBClusterClient not reconnecting when nodes come back online.
* HTTPClusterClient.addHosts() methods not working correctly. 

The 1.0.5 Java Client is available from Maven Central. Add the dependency to 
your pom.xml file:


<dependency>
  <groupId>com.basho.riak</groupId>
  <artifactId>riak-client</artifactId>
  <version>1.0.5</version>
  <type>pom</type>
</dependency>


You can also download the client jar file and its dependencies from:
https://github.com/basho/riak-java-client/downloads

Last but not least, if you prefer your client in source form, you can clone 
from our Github repo: https://github.com/basho/riak-java-client using the 
riak-client-1.0.5 tag.
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak java pb client does not let go of bad sockets

2012-03-29 Thread Brian Roach
Thanks (and sorry!) for reporting this, Will.

This was a bug in the underlying Protocol Buffers RiakClient where Socket 
objects were not being closed when IOExceptions were received from their 
streams. This is now fixed in the 1.0.5 release, available today.

Thanks again,
Brian Roach


On Mar 28, 2012, at 12:09 PM, Will Gage wrote:

> Hello,
> 
> 
> I have run into a production issue that I think stems from either a defect in 
> the com.basho.riak:riak-client:jar:1.0.4 library, or a misunderstanding in my 
> use of it.  I'm actively trying to fix the issue, but I thought I'd put a 
> feeler out to this list to see if others have encountered the issue, or 
> whether there's a clear problem in our use of the library.
> 
> Environment:
> -
> * Java web application running in Tomcat:
> ** JDK: jdk1.6.0_24-jce6
> ** Tomcat: apache-tomcat-7.0.23
> ** Basho Riak Client version: com.basho.riak:riak-client:jar:1.0.4
> * 6 node Riak cluster running Riak 1.0.1
> 
> Error sequence:
> ---
> The production issue has happened a few times, and it follows this sequence:
> 
> 1. We get a rash of SocketException: Connection Reset errors
> 
> java.net.SocketException: Connection reset
>at java.net.SocketInputStream.read(SocketInputStream.java:168)
>at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>at java.io.DataInputStream.readInt(DataInputStream.java:370)
>at com.basho.riak.pbc.RiakConnection.receive(RiakConnection.java:92)
>at com.basho.riak.pbc.RiakClient.processFetchReply(RiakClient.java:278)
>at com.basho.riak.pbc.RiakClient.fetch(RiakClient.java:252)
>at com.basho.riak.pbc.RiakClient.fetch(RiakClient.java:241)
>at 
> com.basho.riak.client.raw.pbc.PBClientAdapter.fetch(PBClientAdapter.java:156)
>at 
> com.basho.riak.client.raw.pbc.PBClientAdapter.fetch(PBClientAdapter.java:139)
>at com.basho.riak.client.raw.ClusterClient.fetch(ClusterClient.java:107)
> 
> 2. Followed 50 milliseconds later by a steady stream of SocketException: 
> Broken pipe messages, until we restart the Tomcat container.
> 
> java.net.SocketException: Broken pipe
>at java.net.SocketOutputStream.socketWrite0(Native Method)
>at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
>at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
>at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>at com.basho.riak.pbc.RiakConnection.send(RiakConnection.java:82)
>at com.basho.riak.pbc.RiakClient.fetch(RiakClient.java:251)
>at com.basho.riak.pbc.RiakClient.fetch(RiakClient.java:241)
>at 
> com.basho.riak.client.raw.pbc.PBClientAdapter.fetch(PBClientAdapter.java:156)
>at 
> com.basho.riak.client.raw.pbc.PBClientAdapter.fetch(PBClientAdapter.java:139)
>at com.basho.riak.client.raw.ClusterClient.fetch(ClusterClient.java:107)
> 
> 3. Within our 6-node Riak cluster, almost exactly 1 minute after the initial 
> connection reset errors, one node emits a crash log:
> 
> 2012-03-27 14:56:51 =ERROR REPORT
> ** Generic server <0.289.0> terminating 
> ** Last message in was {inet_async,#Port<0.3319>,41462,{ok,#Port<0.73913715>}}
> ** When Server state == {state,riak_kv_pb_listener,#Port<0.3319>,{state,8087}}
> ** Reason for termination == 
> ** 
> {timeout,{'gen_server2',call,[<0.1838.1574>,{set_socket,#Port<0.73913715>}]}}
> 2012-03-27 14:57:08 =CRASH REPORT
>  crasher:
>initial call: gen_nb_server:init/1
>pid: <0.289.0>
>registered_name: []
>exception exit: 
> {timeout,{'gen_server2',call,[<0.1838.1574>,{set_socket,#Port<0.73913715>}]}}
>  in function  gen_server:terminate/6
>  in call from proc_lib:init_p_do_apply/3
>ancestors: [riak_kv_sup,<0.194.0>]
>messages: [{#Ref<0.0.704.111074>,ok}]
>links: [<0.200.0>]
>dictionary: []
>trap_exit: false
>status: running
>heap_size: 377
>stack_size: 24
>reductions: 117962
>  neighbours:
> 2012-03-27 14:57:09 =SUPERVISOR REPORT
> Supervisor: {local,riak_kv_sup}
> Context:child_terminated
> Reason: 
> {timeout,{'gen_server2',call,[<0.1838.1574>,{set_socket,#Port<0.73913715>}]}}
> Offender:   
> [{pid,<0.289.0>},{name,riak_kv_pb_listener},{mfargs,{riak_kv_pb_listener,start_link,[]}},{restart_type,permanent},{shutdown,5000},{child_type,worker}]

Re: Erlang map/reduce functions using the Java client library

2012-04-17 Thread Brian Roach
Ahmed - 

You can't define erlang functions in the Java client and send them as part of
the MR job like you can with JavaScript, but if you were to write the functions
in erlang, put them on your Riak nodes (see below), and use them as named funcs
in your MR jobs, you'll find the performance is far better than JavaScript.

To make your erlang modules and functions available to MR they need to be 
available on all nodes in the cluster. You can add them by adding the path to 
the "add_paths" setting in the app.config. 

Thanks,
Brian Roach


On Apr 16, 2012, at 12:00 PM, Ahmed Bashir wrote:

> Does anyone specify their Map/Reduce functions in Erlang using the
> Java client library in lieu of JavaScript for better performance?
> 
> Any examples would be appreciated, thanks!
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Java Client Cookbook update

2012-04-25 Thread Brian Roach
Greetings Riak Java people!

We've recently completed a couple pages for the Riak Java Cookbook that I 
wanted to share:

Fetching Data from Riak:
https://github.com/basho/riak-java-client/wiki/Fetching-Data-from-Riak

Storing Data in Riak:
https://github.com/basho/riak-java-client/wiki/Storing-data-in-riak

By providing these we hope to make understanding and using the Java client as 
easy and painless as possible!

Thanks!
Brian Roach
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Riak Client / MapReduce Querying, Any Sample Code?

2012-05-03 Thread Brian Roach
Claude,

The example code in the README on GitHub does have some basic examples of using
MR in the Java client: https://github.com/basho/riak-java-client - scroll down
to about 3/4 of the way to the end.
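
To give a flavor, a minimal sketch of a JavaScript map phase run from the Java
client (the bucket name and the Person POJO are illustrative):

IRiakClient client = RiakFactory.pbcClient();
MapReduceResult result = client.mapReduce("people")
    .addMapPhase(new NamedJSFunction("Riak.mapValuesJson"), true)
    .execute();
Collection<Person> people = result.getResult(Person.class);
client.shutdown();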

I'm also currently working on a more comprehensive Java client "cookbook" and 
MR is probably the next item I'll be adding. 

Is there a particular thing you're trying to do that you need help with?

Thanks,
Brian Roach




On May 3, 2012, at 10:23 AM, clau...@br.ibm.com wrote:

> Dear colleagues, 
> 
> Is there any basic sample or documentation available that illustrate how to 
> construct execute the MapReduce syntax through the Riak Java Interface. 
> 
> Thanks in advance for your feedback.   
> 
> Regards, 
> Claude
> 
> Claude Falbriard 
> Certified IT Specialist L2 - Middleware
> AMS Hortolândia / SP - Brazil
> phone:+55 19 9837 0789
> cell: +55 13 8117 3316
> e-mail:clau...@br.ibm.com
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Link walking with a java client

2012-05-29 Thread Brian Roach
Deepak -

I'll take a look at it this week, but more than likely it's a bug.

Link walking is a REST-only operation as far as Riak's interfaces are
concerned. Link walking in the protocol buffers Java client is a hack that
issues two m/r jobs to the protocol buffers interface (the first constructs the
inputs to the second by walking the links, the second returns the data).

Thanks,
Brian Roach

On May 27, 2012, at 7:19 AM, Deepak Balasubramanyam wrote:

> This looks like a bug. The code to walk links via a HTTP client works 
> perfectly. The same code fails when the PB client is used. The POJO attached 
> in this email reproduces the problem.
> 
> I searched the email archives and existing issues and found no trace of this 
> problem. Please run the POJO by swapping the clients returned from the 
> getClient() method to reproduce the problem. I can create a bug report once 
> someone from the dev team confirms this really is a bug.
> 
> Riak client pom:
>   <dependency>
>   <groupId>com.basho.riak</groupId>
>   <artifactId>riak-client</artifactId>
>   <version>1.0.5</version>
>   </dependency>
> 
> Riak server version - 1.1.2. Built from source. 
> 4 nodes running on 1 machine. OS - Linux mint.
> 
> On Sun, May 27, 2012 at 10:05 AM, Deepak Balasubramanyam 
>  wrote:
> Hi,
> 
> I have a cluster that contains 2 buckets. A bucket named 'usersMine' contains 
> the key 'user2', which is linked to several keys (about 10) under a bucket 
> named userPreferences. The relationship exists under the name 'myPref'. A 
> user and a preference have String values.
> 
> I can successfully traverse the link over HTTP using the following URL - 
> 
> curl -v localhost:8091/riak/usersMine/user2/_,myPref,1
> 
>    
> > User-Agent: curl/7.21.6 (i686-pc-linux-gnu) libcurl/7.21.6 OpenSSL/1.0.0e 
> > zlib/1.2.3.4 libidn/1.22 librtmp/2.3
> > Host: localhost:8091
> > Accept: */*
> > 
> < HTTP/1.1 200 OK
> < Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
> < Expires: Sun, 27 May 2012 04:12:39 GMT
> < Date: Sun, 27 May 2012 04:02:39 GMT
> < Content-Type: multipart/mixed; boundary=IYGfKNqjGdco9ddfyjRP1Utzfi2
> < Content-Length: 3271
> < 
> 
> --IYGfKNqjGdco9ddfyjRP1Utzfi2
> Content-Type: multipart/mixed; boundary=3YVES0x2tFnUDOdTzfn1OGS6uMt
> 
> --3YVES0x2tFnUDOdTzfn1OGS6uMt
> X-Riak-Vclock: a85hYGBgzGDKBVIcMRuuc/nvy7mSwZTImMfKUKpodpIvCwA=
> Location: /riak/userPreferences/preference3004
> Content-Type: text/plain; charset=UTF-8
> Link: ; rel="up"
> Etag: 5GucnGSk4TjQc8BO1eNLyI
> Last-Modified: Sun, 27 May 2012 03:54:29 GMT
> 
> junk
> 
> <<  Truncated  >>
> junk
> --3YVES0x2tFnUDOdTzfn1OGS6uMt--
> 
> --IYGfKNqjGdco9ddfyjRP1Utzfi2--
>    
> 
> However when I use the java client to walk the link, I get a 
> ClassCastException.
> 
>   
> Java code:
>   
> private void getAllLinks()
> {
> String user="user2";
> IRiakClient riakClient = null;
> try
> {
> long past = System.currentTimeMillis();
> riakClient = RiakFactory.pbcClient("localhost",8081);
> Bucket userBucket = riakClient.fetchBucket("usersMine").execute();
> DefaultRiakObject user1 =(DefaultRiakObject) 
> userBucket.fetch(user).execute();
> List<RiakLink> links = user1.getLinks();
> System.out.println(links.size());
> WalkResult execute = 
> riakClient.walk(user1).addStep("userPreferences", "myPref",true).execute();
> Iterator<Collection<IRiakObject>> iterator = execute.iterator();
> while(iterator.hasNext())
> {
> Object next = iterator.next();
> System.out.println(next);
> }
> long now = System.currentTimeMillis();
> System.out.println("Retrieval in " + (now-past) + " ms");
> }
> catch (Exception e)
> {
> e.printStackTrace();
> }
> finally
> {
> if(riakClient != null)
> {
> riakClient.shutdown();
> }
> }
> }
> 
>    
> Stack:
>  --

Re: Java Client Riak Builders...

2012-05-29 Thread Brian Roach
Guido -

The real fix is to enhance the client to support a Collection; I'll add an
issue for this on GitHub.

What you would need to do right now is write your own Converter (which would 
really just be a modification of our JSONConverter if you're using JSON) that 
does this for you. 

If you look at the source for JSONConverter you'll see where the indexes are 
processed.  As it is, the index processing is handled by the 
RiakIndexConverter class which is where the limitation of requiring the 
annotated field to be a String is coming from (it's actually buried lower than 
that in the underlying annotation processing, but that's the starting point for 
the problem). The actual RiakIndexes class that encapsulates the data and 
exists in the IRiakObject doesn't have this problem. 

The catch is that you'll need to do all the reflection ugliness yourself, as 
that's the part that's broken (the annotation processing). 

Basically, in JSONConverter.fromDomain() you would need to replace
RiakIndexes indexes = riakIndexConverter.getIndexes(domainObject);
with your own annotation processing. The same would need to be done in 
JSONConverter.toDomain() at
riakIndexConverter.populateIndexes(…) 
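
As a rough shape, a minimal sketch of a stand-in Converter that delegates to
JSONConverter and does its own index handling (the ShoppingCart POJO, its
getEmails() accessor, and the index name are illustrative):

import com.basho.riak.client.IRiakObject;
import com.basho.riak.client.cap.VClock;
import com.basho.riak.client.convert.ConversionException;
import com.basho.riak.client.convert.Converter;
import com.basho.riak.client.convert.JSONConverter;

public class CartConverter implements Converter<ShoppingCart> {
    private final JSONConverter<ShoppingCart> delegate =
        new JSONConverter<ShoppingCart>(ShoppingCart.class, "carts");

    public IRiakObject fromDomain(ShoppingCart cart, VClock vclock)
            throws ConversionException {
        IRiakObject riakObject = delegate.fromDomain(cart, vclock);
        // hand-rolled index handling: one index entry per value, rather
        // than the single String the annotation processing allows today
        for (String email : cart.getEmails()) {
            riakObject.addIndex("email", email);
        }
        return riakObject;
    }

    public ShoppingCart toDomain(IRiakObject riakObject)
            throws ConversionException {
        return delegate.toDomain(riakObject);
    }
}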

Obviously this is not ideal and I'm considering it a bug; I'll put this toward
the top of the list of things I'm working on right now.

Thanks,
Brian Roach

 


On May 28, 2012, at 8:03 AM, Guido Medina wrote:

> Hi,
> 
>  I'm looking for a workaround for the @RiakIndex annotation to support
> multiple values per index name. Since the annotation is limited to one single
> value per annotated property (no collection support), I would like to know if
> there is a way of using the DomainBucketBuilder, mutation & conflict resolver
> and at the same time have access to a method signature like addIndex(String or
> int)...addIndex(String or int)...build(), same as you can do with
> RiakObjectBuilder, which lacks support for conflict resolution and mutation
> style.
> 
> Regards,
> 
> Guido.
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Client Riak Builders...

2012-05-29 Thread Brian Roach
Guido - 

Thanks, looking forward to it. 

Also as an FYI, on Friday I fixed the bug that was causing the requirement of 
the @JsonIgnore for Riak annotated fields without getters. 

- Brian Roach

On May 29, 2012, at 11:52 AM, Guido Medina wrote:

> I will send a "pull request". I fixed it: I enabled @RiakIndex for
> collection fields AND methods (String, Integer or Collection of any of
> those). It's working in our code, but I still need to test it more before
> making it final.
> 
> I will share the details tomorrow, I already created a fork from your master 
> branch.
> 
> Now you can have something like:
> 
> @RiakIndex
> @JsonIgnore
> Collection getNumbers()
> 
> Also this works as index and with no getter (as of 1.0.6-SNAPSHOT) will only 
> be that, an index:
> 
> @RiakIndex
> Collection numbers;
>
> That will act as an index and be ignored as a property, which is the intention
> of the index: to be a dynamically calculated value(s) and not a property which
> requires the caller to call a post-construct.
> 
> And of course, all subclasses of a collection apply.
> 
> Thanks for the answer,
> 
> Guido.
> 
> -Original Message- From: Brian Roach
> Sent: Tuesday, May 29, 2012 6:09 PM
> To: Guido Medina
> Cc: riak-users@lists.basho.com
> Subject: Re: Java Client Riak Builders...
> 
> Guido -
> 
> The real fix is to enhance the client to support a Collection; I'll add an
> issue for this on GitHub.
> 
> What you would need to do right now is write your own Converter (which would 
> really just be a modification of our JSONConverter if you're using JSON) that 
> does this for you.
> 
> If you look at the source for JSONConverter you'll see where the indexes are 
> processed.  As it is, the index processing is handled by the 
> RiakIndexConverter class which is where the limitation of requiring the 
> annotated field to be a String is coming from (it's actually buried lower 
> than that in the underlying annotation processing, but that's the starting 
> point for the problem). The actual RiakIndexes class that encapsulates the 
> data and exists in the IRiakObject doesn't have this problem.
> 
> The catch is that you'll need to do all the reflection ugliness yourself, as 
> that's the part that's broken (the annotation processing).
> 
> Basically, in JSONConverter.fromDomain() you would need to replace
> RiakIndexes indexes = riakIndexConverter.getIndexes(domainObject);
> with your own annotation processing. The same would need to be done in 
> JSONConverter.toDomain() at
> riakIndexConverter.populateIndexes(…)
> 
> Obviously this is not ideal and I'm considering it a bug; I'll put this
> toward the top of the list of things I'm working on right now.
> 
> Thanks,
> Brian Roach
> 
> 
> 
> 
> On May 28, 2012, at 8:03 AM, Guido Medina wrote:
> 
>> Hi,
>> 
>> I'm looking for a workaround for the @RiakIndex annotation to support
>> multiple values per index name. Since the annotation is limited to one single
>> value per annotated property (no collection support), I would like to know if
>> there is a way of using the DomainBucketBuilder, mutation & conflict
>> resolver and at the same time have access to a method signature like
>> addIndex(String or int)...addIndex(String or int)...build(), same as you can
>> do with RiakObjectBuilder, which lacks support for conflict resolution and
>> mutation style.
>> 
>> Regards,
>> 
>> Guido.
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Client Riak Builders...

2012-05-29 Thread Brian Roach

Actually, it was on purpose. As sort of a "step 1" to getting rid of it, the 
goal was just to get the code out of our repo and use maven to pull it in. As 
you note and as far as I could find, the latest development on github is not 
being published to maven central. 

Long term I want to eliminate it completely and use Jackson for everything. 

Thanks,
- Roach


On May 29, 2012, at 1:07 PM, Guido Medina wrote:

> Also, the coming riak client version removed the embedded json package and
> pulls in an old implementation from the main maven repo. I think what was
> meant to be used was this version:
> https://github.com/douglascrockford/JSON-java which has lots of performance
> improvements but no maven repo; the old:
> 
>   <dependency>
>   <groupId>org.json</groupId>
>   <artifactId>json</artifactId>
>   <version>20090211</version>
>   </dependency>
> 
> uses a lot of StringBuffer instead of the StringBuilders and StringWriters
> introduced later on.
> 
> I'm wondering about the benchmark of one vs the other.
> 
> Regards,
> 
> Guido.
> 
> -Original Message- From: Brian Roach
> Sent: Tuesday, May 29, 2012 7:05 PM
> To: Guido Medina
> Cc: riak-users@lists.basho.com
> Subject: Re: Java Client Riak Builders...
> 
> Guido -
> 
> Thanks, looking forward to it.
> 
> Also as an FYI, on Friday I fixed the bug that was causing the requirement of 
> the @JsonIgnore for Riak annotated fields without getters.
> 
> - Brian Roach
> 
> On May 29, 2012, at 11:52 AM, Guido Medina wrote:
> 
>> I will send a "pull request". I fixed it: I enabled @RiakIndex for
>> collection fields AND methods (String, Integer or Collection of any of
>> those). It's working in our code, but I still need to test it more before
>> making it final.
>> 
>> I will share the details tomorrow, I already created a fork from your master 
>> branch.
>> 
>> Now you can have something like:
>> 
>> @RiakIndex
>> @JsonIgnore
>> Collection getNumbers()
>> 
>> Also this works as index and with no getter (as of 1.0.6-SNAPSHOT) will only 
>> be that, an index:
>> 
>> @RiakIndex
>> Collection numbers;
>>
>> That will act as an index and be ignored as a property, which is the intention
>> of the index: to be a dynamically calculated value(s) and not a property which
>> requires the caller to call a post-construct.
>> 
>> And of course, all subclasses of a collection apply.
>> 
>> Thanks for the answer,
>> 
>> Guido.
>> 
>> -Original Message- From: Brian Roach
>> Sent: Tuesday, May 29, 2012 6:09 PM
>> To: Guido Medina
>> Cc: riak-users@lists.basho.com
>> Subject: Re: Java Client Riak Builders...
>> 
>> Guido -
>> 
>> The real fix is to enhance the client to support a Collection; I'll add an
>> issue for this on GitHub.
>> 
>> What you would need to do right now is write your own Converter (which would 
>> really just be a modification of our JSONConverter if you're using JSON) 
>> that does this for you.
>> 
>> If you look at the source for JSONConverter you'll see where the indexes are 
>> processed.  As it is, the index processing is handled by the 
>> RiakIndexConverter class which is where the limitation of requiring the 
>> annotated field to be a String is coming from (it's actually buried lower 
>> than that in the underlying annotation processing, but that's the starting 
>> point for the problem). The actual RiakIndexes class that encapsulates the 
>> data and exists in the IRiakObject doesn't have this problem.
>> 
>> The catch is that you'll need to do all the reflection ugliness yourself, as 
>> that's the part that's broken (the annotation processing).
>> 
>> Basically, in JSONConverter.fromDomain() you would need to replace
>> RiakIndexes indexes = riakIndexConverter.getIndexes(domainObject);
>> with your own annotation processing. The same would need to be done in 
>> JSONConverter.toDomain() at
>> riakIndexConverter.populateIndexes(…)
>> 
>> Obviously this is not ideal and I'm considering it a bug; I'll put this
>> toward the top of the list of things I'm working on right now.
>> 
>> Thanks,
>> Brian Roach
>> 
>> 
>> 
>> 
>> On May 28, 2012, at 8:03 AM, Guido Medina wrote:
>> 
>>> Hi,
>>> 
>>> I'm looking for a workaround for the @RiakIndex annotation to support
>>> multiple values per index name, since the annotation is limited to one
>>> single value per annotated property (no collection support), I would

Re: Are integer indexes in 2i 64-bit or 32-bit?

2012-06-08 Thread Brian Roach
The problem with long in Java is that it's a signed 64-bit int, which doesn't
fully solve the issue; there would be someone out there who would send in a
bug saying that when you hit 2^63 it rolled over.

The plan (if my memory is working, I have this written down on an ever growing 
todo list) is to switch to using BigInteger internally in RiakIndex with the 
interface / annotations allowing for any of the types to be passed in. 

- Roach

On Jun 8, 2012, at 4:18 AM, Guido Medina wrote:

> I would say just add long and Long for indexing (so that it supports
> Long, Integer and their respective natives); using BigNumber subclasses has
> a different semantic and would restrict developers when designing their
> Riak POJOs.
> 
> Guido.
> 
> On 08/06/12 06:37, Russell Brown wrote:
>> 
>> On 7 Jun 2012, at 22:55, Guido Medina wrote:
>> 
>>> All points to 32 bits, at least for the Java client side (indexes can be of
>>> type Integer, not Long, which is 64 bits). Look at RiakIndex.java; that
>>> will give you some answers.
>> 
>> That's a mistake on the part of the client developer at that time (me). They 
>> should probably be BigInteger, since integers can be arbitrarily large in 
>> erlang. I'm pretty sure Brian Roach (the new, smarter, Java developer) is 
>> addressing this https://github.com/basho/riak-java-client/issues/112
>> 
>> Russell
>> 
>>>  
>>> I don’t know the exact answer though.
>>>  
>>> Regards,
>>>  
>>> Guido.
>>>  
>>> From: Alexander Sicular
>>> Sent: Thursday, June 07, 2012 10:43 PM
>>> To: Berend Ozceri
>>> Cc: riak-users@lists.basho.com
>>> Subject: Re: Are integer indexes in 2i 64-bit or 32-bit?
>>>  
>>> I would say yes... probably, if you're on a 64-bit system. Unless you're
>>> shifting stuff through JavaScript, in which case I doubt it. 'Cause last I
>>> checked, js don't speak 64-bit int.
>>> 
>>> 
>>> @siculars on twitter
>>> http://siculars.posterous.com
>>>  
>>> Sent from my iRotaryPhone
>>> 
>>> On Jun 7, 2012, at 17:08, Berend Ozceri  wrote:
>>> 
>>>> I apologize for asking this question if it's an FAQ or is documented 
>>>> somewhere, but I don't see anything specific mentioned about the size of 
>>>> integer indexes in 2i:
>>>>  
>>>> http://wiki.basho.com/Secondary-Indexes.html
>>>>  
>>>> I certainly could dive into to source code to answer this question, but in 
>>>> case someone here knows, what's the size of an integer index in 2i? I'm 
>>>> hoping that the answer will be that it's 64 bits…
>>>>  
>>>> Thanks,
>>>>  
>>>> Berend
>>>>  
>>>> ___
>>>> riak-users mailing list
>>>> riak-users@lists.basho.com
>>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>> 
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak newbie

2012-06-21 Thread Brian Roach
Hi Anand,

The brew install actually gives you compiled code, and no makefile. In
addition, the startup scripts are slightly different than when installing from
source.

I recently answered the same question on StackOverflow which should point you 
in the right direction:
http://stackoverflow.com/questions/9906386/running-a-three-node-riak-cluster-using-a-homebrew-installation

Basically, you need to manually make copies of the /usr/local/Cellar/riak 
directory and edit the files accordingly, including the startup scripts 
themselves. 

Please let us know if you have additional questions!

Thanks,
Brian Roach


On Jun 21, 2012, at 7:21 AM, Anand Hegde wrote:

> I am trying to get the 4 node setup working on my os x lion machine.
> 
> sorry If you are getting this a second time but I think my first mail got 
> lost because I was a non member.
> 
> I installed riak using 
> 
> $ brew install riak
> 
> I am using the online documentation available - 
> http://wiki.basho.com/Building-a-Development-Environment.html
> 
> I am stuck on this step - "Use rebar to start up four nodes. "
> 
> After installing riak, do I have to install rebar as well? After installing 
> riak I directly tried the 
> 
> $ make devrel 
> 
> command and it gave the following error  
> 
> make: *** No rule to make target `devrel'.  Stop.
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client pool threading

2012-07-11 Thread Brian Roach
Steve - 

Call the IRiakClient.shutdown() method when you're done with it - it'll kill
the background threads.

Thanks,
Brian Roach

On Jul 11, 2012, at 2:12 PM, Steve Warren wrote:

> I've noticed that while using the java client library the jvm is not exiting 
> when main completes. Is there a way to set the connection threads to daemon 
> mode so they don't hang the jvm?
> 
> Regards
> Steve
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client pool threading

2012-07-11 Thread Brian Roach
Steve -

Honestly, don't know. I think I've had that thought at least once :) 

I've got a couple things that I'm working on at the moment that need to get 
done for the Riak 1.2 release, but when I get a chance I'll take a look at it 
and see if there's any harm. 

Thanks,
Brian Roach


On Jul 11, 2012, at 2:26 PM, Steve Warren wrote:

> Thanks for the reply! That's what I did (I expose Riak as spring beans and so
> destroy on context shutdown). But why not set the threads to daemon mode,
> since they're support threads, and avoid the issue altogether? Is it just too
> deep in another library or some other concern I don't understand?
> 
> On Wed, Jul 11, 2012 at 1:17 PM, Brian Roach  wrote:
> Steve -
> 
> Call the IRiakClient.shutdown() method when you're done with it - it'll kill
> the background threads.
> 
> Thanks,
> Brian Roach
> 
> On Jul 11, 2012, at 2:12 PM, Steve Warren wrote:
> 
> > I've noticed that while using the java client library the jvm is not 
> > exiting when main completes. Is there a way to set the connection threads 
> > to daemon mode so they don't hang the jvm?
> >
> > Regards
> > Steve
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client: byte arrays as keys?

2012-07-16 Thread Brian Roach
Kaspar -

Russell and I got together and discussed this a little more this morning. 

The reason it's not exposed at the IRiakClient and RawClient interfaces is 
because the underlying original HTTP client API doesn't support a byte array as 
a key; it only uses String. Because of this,  our IRiakObject default 
implementation only takes a String for a key, and our annotation processing 
only works with a String as well. It was all designed around being protocol 
neutral to sit on top of both of the old clients. 

That being said, what does work perfectly fine is creating a new String from 
the byte array. Java does the right thing and this works regardless of whether 
you are using the PB or HTTP client:

IRiakClient client = RiakFactory.pbcClient(); // or .httpClient()
Bucket b = client.fetchBucket("this").execute();

 byte[] foo = { 0, 5, 120, 1 };
 String key = new String(foo);

 b.store(key, "this is my value").execute();
 IRiakObject o = b.fetch(key).execute();
 System.out.println(o.getValueAsString());

 client.shutdown();

Thanks!
Brian Roach

On Jul 16, 2012, at 1:01 AM, Russell Brown wrote:

> Hi Kaspar,
> 
> Sorry for the slow reply.
> 
> On 16 Jul 2012, at 07:49, Kaspar Thommen wrote:
> 
>> Anyone please?
>> 
>> On Jun 26, 2012 8:57 PM, "Kaspar Thommen"  wrote:
>> Hi,
>> 
>> The high-level API in the Java client library (IRiakClient) does not allow 
>> one to use byte[] arrays as keys, only Strings, whereas the underlying PBC 
>> and http APIs (e.g. com.basho.riak.pbc.RiakClient) do, via the 
>> fetch(ByteString, ...) methods. Any reason for this?
> 
> Oversight or oversimplifying by me.
> 
> 
>> Is it planned to add byte array keys to IRiakClient as well at some point?
> 
> We should. Please will you raise an issue for it on the RJC github repo[1] to 
> ensure we get to it?
> 
> Cheers
> 
> Russell
> 
> [1] Java client issues - 
> https://github.com/basho/riak-java-client/issues?direction=desc&sort=created&state=open
> 
>> 
>> Thanks,
>> Kaspar
>> 
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client: byte arrays as keys?

2012-07-17 Thread Brian Roach
Kaspar -

Unfortunately it's worse than that. The reason it's doing that is because 
that's how the underlying, original Protobuf client is handling things - it 
uses ByteString.copyFromUtf8() on Strings that are passed to store(), fetch(), 
etc as the key.

Basically, above com.basho.riak.pbc.RiakClient there's simply not a way to use 
a non-UTF8 encoded String currently.
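
To illustrate the mismatch, a minimal sketch (the byte values are arbitrary):

import java.nio.charset.Charset;

byte[] key = { (byte) 0xE4, 0x01 };                        // not valid UTF-8
String s = new String(key, Charset.forName("ISO-8859-1")); // round-trips under Latin-1
byte[] storedAs = s.getBytes(Charset.forName("UTF-8"));    // what copyFromUtf8 effectively produces
System.out.println(key.length + " vs " + storedAs.length); // prints "2 vs 3"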

The problem with what you propose is that it requires modifying the higher 
level interfaces which will of course break code built on top of them (we have 
users who have built things on top of these interfaces). This is not something 
we want to do in a minor point release. 

That being said, there's a few things we'd like to change in the client but 
haven't for this same reason. Along with the Riak 1.2 release we'll be cutting 
the 1.0.6 release of the Java client. After that, I'd be willing to look into 
what it would take to sanely allow for arbitrary byte arrays to be used as keys 
for a 1.1 client release that would include other interface breaking changes.

Thanks,
Brian Roach


On Jul 17, 2012, at 4:32 AM, Kaspar Thommen wrote:

> Brian,
> 
> I just identified a potential issue. In my application code all keys are 
> byte[] arrays (or corresponding 64 bit integers), and in order to make sure I 
> can generate a valid Riak key String from that byte array I convert using 
> ISO-8859-1. However, when looking at PBClientAdapter.store() I see that it 
> calls ConversionUtil.convert(), which in turn calls 
> ConversionUtil.nullSafeToByteString() which uses UTF-8 encoding. Hence, my 
> original byte array will likely be different in Riak.
> 
> This could could lead to tricky problems like the application code logging a 
> byte array key that cannot be found in Riak. Do you think it would make sense 
> to allow the user to specify a custom String/byte[] converter?
> 
> Thanks,
> Kaspar
> Am 16.07.2012 15:51 schrieb "Brian Roach" :
> Kaspar -
> 
> Russell and I got together and discussed this a little more this morning.
> 
> The reason it's not exposed at the IRiakClient and RawClient interfaces is 
> because the underlying original HTTP client API doesn't support a byte array 
> as a key; it only uses String. Because of this,  our IRiakObject default 
> implementation only takes a String for a key, and our annotation processing 
> only works with a String as well. It was all designed around being protocol 
> neutral to sit on top of both of the old clients.
> 
> That being said, what does work perfectly fine is creating a new String from 
> the byte array. Java does the right thing and this works regardless of 
> whether you are using the PB or HTTP client:
> 
> IRiakClient client = RiakFactory.pbcClient(); // or .httpClient()
> Bucket b = client.fetchBucket("this").execute();
> 
>  byte[] foo = { 0, 5, 120, 1 };
>  String key = new String(foo);
> 
>  b.store(key, "this is my value").execute();
>  IRiakObject o = b.fetch(key).execute();
>  System.out.println(o.getValueAsString());
> 
>  client.shutdown();
> 
> Thanks!
> Brian Roach
> 
> On Jul 16, 2012, at 1:01 AM, Russell Brown wrote:
> 
> > Hi Kaspar,
> >
> > Sorry for the slow reply.
> >
> > On 16 Jul 2012, at 07:49, Kaspar Thommen wrote:
> >
> >> Anyone please?
> >>
> >> On Jun 26, 2012 8:57 PM, "Kaspar Thommen"  wrote:
> >> Hi,
> >>
> >> The high-level API in the Java client library (IRiakClient) does not allow 
> >> one to use byte[] arrays as keys, only Strings, whereas the underlying PBC 
> >> and http APIs (e.g. com.basho.riak.pbc.RiakClient) do, via the 
> >> fetch(ByteString, ...) methods. Any reason for this?
> >
> > Oversight or oversimplifying by me.
> >
> >
> >> Is it planned to add byte array keys to IRiakClient as well at some point?
> >
> > We should. Please will you raise an issue for it on the RJC github repo[1] 
> > to ensure we get to it?
> >
> > Cheers
> >
> > Russell
> >
> > [1] Java client issues - 
> > https://github.com/basho/riak-java-client/issues?direction=desc&sort=created&state=open
> >
> >>
> >> Thanks,
> >> Kaspar
> >>
> >>
> >> ___
> >> riak-users mailing list
> >> riak-users@lists.basho.com
> >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Does Java client support search with paging and sorting?

2012-07-20 Thread Brian Roach
Hello Lei,

The Java client currently doesn't use the Solr API for search; It uses 
Map/Reduce. Sadly there's no support for those solr parameters. The only way to 
do it currently would be by adding phases to sort then reduce the entire result 
set which isn't exactly optimal.

We currently do have plans on improving this situation on both sides. We're 
adding new features to the protocol buffers API that will support the 'rows' 
and 'start' parameters which I am hoping to get into the client "soon". There's 
a bit of work there because it involves ripping out the old MR search stuff and
retrofitting the new, which is going to require a bit of interface breaking
above the old underlying PB client as well. I may also look at adding the
solr-like interface to the HTTP side of the client after that.

Unfortunately I don't have a good timeline for you. Hopefully in the next month 
for the PB additions is about the best I can offer at the moment. 

As for help, we're always thrilled when other people make contributions :) If 
you were to add in the solr API support to the original underlying HTTP client 
it'd be awesome. 

Thanks,
Brian Roach

On Jul 19, 2012, at 2:41 PM, Lei Gu wrote:

> Hi,
> We are exploring using Riak as our persistent storage for our next project.
> Does Java client support search with paging and sorting, like the web solr 
> api?
> If yes, can you point one with class/method support it?
> 
> Here is an example with page and number of rows per page set,
> 
> curl "http://localhost:8098/solr/books/select?start=0&rows=1&q=prog*";
> 
> If not, is there a plan to add the support to the Java client? Can we help?
> 
> Thanks.
> 
> -- Lei
> 
> 
> 
> The information contained in this electronic mail transmission is intended 
> only for the use of the individual or entity named in this transmission. If 
> you are not the intended recipient of this transmission, you are hereby 
> notified that any disclosure, copying or distribution of the contents of this 
> transmission is strictly prohibited and that you should delete the contents 
> of this transmission from your system immediately. Any comments or statements 
> contained in this transmission do not necessarily reflect the views or 
> position of GSI Commerce, Inc. or its subsidiaries and/or affiliates.
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java PB API: r-value exception

2012-07-23 Thread Brian Roach
Hi Kaspar,

In your first example, you're trying to permanently change the bucket's r
parameter; createBucket() returns a WriteBucket which, when executed, writes
these values to Riak. We currently don't have support for this in the protobuf
API.

In your DomainBucket example you're setting the r value for the individual read
operations (which is supported via PB), rather than the bucket parameter
itself. It gets passed to the underlying StoreObject.
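
The distinction in code, as a minimal sketch (the bucket name is illustrative):

// per-operation r - works over both PB and HTTP
IRiakObject obj = bucket.fetch("someKey").r(2).execute();

// bucket-level default r - only settable over HTTP in the current client
Bucket b = RiakFactory.httpClient().createBucket("myBucket").r(2).execute();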

Thanks,
Brian Roach


On Jul 21, 2012, at 6:45 AM, Kaspar Thommen wrote:

> Hi,
> 
> I'm confused about the r-value when using the Java API. When I call:
> 
> Bucket bucket = 
> RiakFactory.pbcClient().createBucket("myBucket").r(2).execute();
> 
> I get the following exception:
> 
> com.basho.riak.client.bucket.UnsupportedPropertyException: r not supported 
> for PB
>   at 
> com.basho.riak.client.bucket.WriteBucket.httpOnly(WriteBucket.java:539)
>   at com.basho.riak.client.bucket.WriteBucket.r(WriteBucket.java:332)
> 
> However, when I run this:
> 
> DomainBucket<MyDomainObject> myDomainBucket =
> DomainBucket.builder(RiakFactory.pbcClient().createBucket("myBucket").execute(),
>  MyDomainObject.class).r(2).build();
> 
> and then later call any of the CRUD operations on that domain bucket I get no
> exception. Why would I get it on the low-level Bucket API but not on the
> high-level DomainBucket API?
> 
> Thanks,
> Kaspar
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Updating an index value using Java API

2012-07-25 Thread Brian Roach
Sorry, I missed this one.

As Ryan notes, you simply rewrite the object to Riak with the new indexes. 
There's no need to delete the object beforehand. 

With the Java client you can do this via a Mutation that is applied when you
call StoreObject.execute().
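
For example, a minimal sketch (assumes an existing IRiakClient named client;
the User POJO and its setEmail() method are illustrative, and changing the
@RiakIndex-annotated field is what rewrites the index):

Bucket bucket = client.fetchBucket("users").execute();
bucket.store("user-1", new User())
      .withMutator(new Mutation<User>() {
          public User apply(User original) {
              original.setEmail("new@example.com");
              return original;
          }
      })
      .execute();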

Thanks,
- Roach

On Jul 25, 2012, at 10:01 AM, Ryan Zezeski wrote:

> Kaspar,
> 
> I don't know the details of the Java API but re-writing the object should 
> suffice.  Riak will remove the old indexes for you and create the new ones.
> 
> -Z
> 
> On Wed, Jul 18, 2012 at 5:00 AM, Kaspar Thommen  
> wrote:
> Hi,
> 
> Say I have a 'users' bucket that stores user data (name, email) and I also 
> have secondary index to access users by e-mail, how do I update the index in 
> case a user's e-mail address changes?
> 
> The only approach I found so far was to completely delete the user object, 
> which implies a deletion of the index entry, and then recreate the update 
> object and add the new e-mail to the index. Is this the only way?
> 
> Thanks,
> Kaspar
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client key existence

2012-07-30 Thread Brian Roach
Hi Ukyo,

The Java client is … sort of complicated at the moment :) It's got a high level 
interface (IRiakClient) built on a "translation" level interface (RawClient) 
that then has the two separate, original clients under it (HTTP and Protocol 
buffers). 

At the highest level (IRiakClient) … there isn't support for it. I suspect the
reason for this is that Protocol Buffers doesn't support it, so we didn't want
people to get unexpected results.

At the RawClient level, however, it is supported and if you're using HTTP it 
will only do a HEAD and return the metadata (the headers) … unless there are 
siblings. If there are siblings (HEAD returns a 300 response) the client will 
then do a full fetch and return them. 

You can use the RawClient interface directly by instantiating and using a 
HTTPClientAdapter[1] rather than getting an IRiakClient instance from the 
RiakFactory. If you absolutely want this functionality that's the way to go. 
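
A minimal sketch of that approach (URL, bucket, and key are illustrative; the
head-only fetch via FetchMeta is an assumption based on the 1.0.x API):

RawClient raw = new HTTPClientAdapter("http://localhost:8098/riak");
FetchMeta meta = new FetchMeta.Builder().headOnly(true).build();
RiakResponse response = raw.fetch("users", "someKey", meta); // HEAD under the hood
boolean exists = response.numberOfValues() > 0;
raw.shutdown();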

Thanks,
Brian Roach

[1] 
http://basho.github.com/riak-java-client/1.0.5/com/basho/riak/client/raw/http/HTTPClientAdapter.html


On Jul 28, 2012, at 10:44 AM, Ukyo Virgden wrote:

> Hi, 
> 
> Just installed riak and trying to get adjusted. Having looked at the java 
> client, I could not tell if there' a way to check key exixtence. What I need 
> to do is to check if a key exists and put it if it doesn't.  In HTTP request 
> from curl, a HEAD request instead of a GET gives the result I need. 
> 
> How can I check the existence of a key from java client without making a full 
> fetch?
> 
> Thanks.
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client questions

2012-08-02 Thread Brian Roach
Hi Oved - replies inlined below.


On Aug 2, 2012, at 11:04 AM, Oved Machlev wrote:

> Hi,
>  
> I have been working with Riak in past 3 weeks, using the java client.
> I have encountered a few issues, and could not find solutions so far.
> This is the first time I write an email to this mailing list - would be great 
> if it reaches the right destination :)
>  
> 1.   Riak supports auto generated keys when storing an object - 
> http://wiki.basho.com/Basic-Riak-API-Operations.html:
> Store a new object and assign random key #
> 
> If your application would rather leave key-generation up to Riak, issue a 
> POST request to the bucket URL instead of a PUT to a bucket/key pair: POST 
> /riak/bucket If you don’t pass Riak a “key” name after the bucket, it will 
> know to create one for you.
> 
> Is it possible to do the same when using the java client? it seems that key 
> must be provided when storing an object.
> 

Right now the Java client doesn't support this. There is an issue open for this 
on GitHub, and I agree it needs to be added. 

After the 1.0.6 release (along with Riak 1.2) I hope to work on this and get it 
added. 

> 2.   I have a POJO which I store. It contains a set of objects which I do 
> not wish to persist, I mark it as transient:
>  
> transient private Set services;
> but still, this set is being persisted – when I fetch my POJO (curl and java) 
> I can see this:
>  
> 
> {"name":"oved","lastUpdate":"1343923201735","data":null,"services":[]}
>  
> Is there something else I need to do to tell Riak to ignore it?
>  

The default Converter used if you do not specify your own is the 
JSONConverter. This uses the Jackson JSON library. 

To exclude a field from serialization you would need to use the 
(org.codehaus.jackson.annotate) @JsonIgnore annotation. 
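
For example (a sketch; 'Service' stands in for your own type):

    import java.util.Set;
    import org.codehaus.jackson.annotate.JsonIgnore;

    public class UserPojo {
        private String name;
        private String lastUpdate;

        @JsonIgnore // Jackson skips this field; 'transient' alone has no effect here
        private Set<Service> services;
        // getters/setters ...
    }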

> 3.   This is more basic question – I keep on thinking that my stored 
> objects should implement the IRIAKObject, but I avoided doing that so far, 
> because in all the examples in the cookbook, it is never being done. So in 
> theoretical level – should the objects that are stored in riak database 
> implement this interface or not? Is there any value in doing that?
>  

Not particularly. That's what the Converter is doing behind the scenes; 
converting back and forth between your POJO and an IRiakObject.


> 4.   MapReduce – If I perform the following without the line in bold 
> (link) I get the collection of ServiceProviders without the Services that are 
> linked.
> When adding the LinkPhase, I am getting a JsonMappingException (see below). 
> Any idea what is causing that? What am I missing? Both objects (service and 
> serviceProvider) are stored as JsonObjects.
>  
> public Collection getAllServiceProvider() throws 
> Exception{
> 
> BucketMapReduce m = riakClient.mapReduce("SERVICE_PROVIDER");
> m.addMapPhase(new NamedJSFunction("Riak.mapValuesJson"), true);
> m.addLinkPhase("SERVICE", "_");
> MapReduceResult result = m.execute();   
> return result.getResult(ServiceProvider.class);
> }
> 
>  
> The exception:
> com.basho.riak.client.convert.ConversionException: 
> org.codehaus.jackson.map.JsonMappingException: Can not deserialize instance 
> of com.att.cso.omss.datastore.riak.entities.ServiceProvider out of 
> START_ARRAY token
>  at [Source: java.io.StringReader@31958905; line: 1, column: 2]
> at 
> com.basho.riak.client.raw.http.ConversionUtil$1.getResult(ConversionUtil.java:601)
> 
>  

Jackson is having some issue trying to map the JSON returned to your 
ServiceProvider class. Can you check the JSON stored in Riak and make sure it's 
what you think it is? If I could see the class and the JSON I might be able to 
offer more insight.


> Sorry for the long email, and thanks in advance…

No Worries!

Thanks,
Brian Roach


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: HTTP or PB cluster client issue

2012-09-24 Thread Brian Roach
Agreed.

Unfortunately there's a number of issues with the ClusterClient that
are not easily solved; what you've discovered is one of them. As Guido
notes, HAProxy is a far better solution at the moment.

I'm looking at what I can do to address these issues but it's going to
take some fairly large scale changes.

Thanks,
Brian Roach

On Fri, Sep 21, 2012 at 9:43 AM, Guido Medina  wrote:
> Hi,
>
>   It is the Java client which to be honest, doesn't handle well one node
> going down, so, for example, in my company we use HA proxy for that, here is
> a starting configuration: https://gist.github.com/1507077
>
>   Once we switched to HA proxy we just use a simple client without cluster
> config, so the Java client doesn't know anything about the load balancing
> going on. It works well, I can upgrade and restart servers without our Java
> application be complaining.
>
> Regards,
>
> Guido.
>
>
> On 21/09/12 16:36, Lei Gu wrote:
>>
>> Hi Brian,
>> We are moving past the performance testing and onto failover testing.
>> I have a five-node cluster and created a PB and HTTP clustered client
>> based on the cook book. I tested out and everything seems to be working.
>> Then I shutdown one node in the cluster and all my tests failed. It seems
>> like the PB and HTTP cluster client connects to all configured nodes
>> initially and will throw an exception if any node is down. Does this defeat
>> the purpose of clustering? In my test, I have 10 threads, each creates a PB
>> and HTTP cluster client and then goes on to execute a series of tests. From
>> the attached log file, you can see every test thread is failing at
>> connection the down node, even though there are four other nodes that can do
>> the work.
>>
>> Also, is the failover automatic or we have to catch the connection refused
>> exception and creating a new client or just re-execute the method?
>>
>> Is using a load balancer a better way to handle failover/load balancing
>> issue?
>> Appreciate you insights and help.
>> Best regards,
>> -- Lei
>>
>>
>> pbf clients =
>> r1-riak-lei.lm4.eng.e-dialog.com,r2-riak-lei.lm4.eng.e-dialog.com,r3-riak-lei.lm4.eng.e-dialog.com,r4-riak-lei.lm4.eng.e-dialog.com,r5-riak-lei.lm4.eng.e-dialog.com
>>
>> http clients =
>> r1-riak-lei.lm4.eng.e-dialog.com,r2-riak-lei.lm4.eng.e-dialog.com,r3-riak-lei.lm4.eng.e-dialog.com,r4-riak-lei.lm4.eng.e-dialog.com,r5-riak-lei.lm4.eng.e-dialog.com
>>
>> Looup patterns = folder_root_2_5_%s_%s
>>
>> SLF4J: Class path contains multiple SLF4J bindings.
>>
>> SLF4J: Found binding in
>> [jar:file:/Users/legu/.m2/repository/ch/qos/logback/logback-classic/0.9.30/logback-classic-0.9.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>
>> SLF4J: Found binding in
>> [jar:file:/Users/legu/opensource-projects/logback-0.9.30/logback-classic-0.9.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>
>> SLF4J: Found binding in
>> [jar:file:/Users/legu/opensource-projects/apache-tomcat-6.0.35/lib/logback-classic-0.9.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>>
>> Exception in thread "Thread-11" java.lang.RuntimeException:
>> com.basho.riak.client.http.response.RiakIORuntimeException:
>> org.apache.http.conn.HttpHostConnectException: Connection to
>> http://r4-riak-lei.lm4.eng.e-dialog.com:8098 refused
>>
>> at com.edialog.persistence.riak.RiakClient.walk(RiakClient.java:457)
>>
>> at
>> com.edialog.riak.persistence.RiakPerfTest$PerfClient.run(RiakPerfTest.java:86)
>>
>> at java.lang.Thread.run(Thread.java:680)
>>
>> Caused by: com.basho.riak.client.http.response.RiakIORuntimeException:
>> org.apache.http.conn.HttpHostConnectException: Connection to
>> http://r4-riak-lei.lm4.eng.e-dialog.com:8098 refused
>>
>> at
>> com.basho.riak.client.http.util.ClientHelper.executeMethod(ClientHelper.java:499)
>>
>> at
>> com.basho.riak.client.http.util.ClientHelper.executeMethod(ClientHelper.java:526)
>>
>> at
>> com.basho.riak.client.http.util.ClientHelper.walk(ClientHelper.java:289)
>>
>> at com.basho.riak.client.http.RiakClient.walk(RiakClient.java:334)
>>
>> at com.basho.riak.client.http.RiakClient.walk(RiakClient.java:347)
>>
>> at
>> com.basho.riak.client.raw.http.HTTPClientAdapter.linkWalk(HTTPClientAdapter.java:374)
>>
>> at
>> com.basho.riak.client.raw.ClusterClient.linkWalk(ClusterClient.java:225)
>>
>> at com.basho.riak.client.query.LinkWal

Re: Bug on riak-client 1.0.6 - Riak index property not serialized

2012-09-25 Thread Brian Roach
This is not a bug, it's a feature ;) The fact that the index values
were being serialized was actually not consistent with our other
annotated fields, so I made the decision to bring it in line and not
do so.

I actually highlighted the change in the CHANGELOG:

af12b6c - The default JSONConverter now supports multiple values via a
@RiakIndex annotated Set<> (Integer or String). Note this also brings
serialization/deserialization in line with our other annotated fields
in that these values will not be present in the resulting JSON. To
override this behavior the Jackson @JsonProperty annotation can be
supplied
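
In other words, something like this should restore the old behavior for a
given field (a sketch; the extra 'category' field is invented):

    import com.basho.riak.client.convert.RiakIndex;
    import com.basho.riak.client.convert.RiakKey;
    import org.codehaus.jackson.annotate.JsonProperty;

    public class MyType {
        @RiakKey private String myKey;

        // indexed, and (as of 1.0.6) no longer written into the JSON value
        @RiakIndex(name = "category") private String category;

        // indexed AND serialized into the JSON, as in 1.0.5
        @JsonProperty
        @RiakIndex(name = "blah") private String indexedProp;
    }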

Thanks,
Brian Roach

On Tue, Sep 25, 2012 at 1:01 AM, Deepak Balasubramanyam
 wrote:
> I switched to the java riak-client 1.0.6 to take it for a spin, and several
> test cases of mine failed. Upon further investigation I found that any
> member variable that contains the @RiakIndex annotation does not serialize
> into Riak anymore. You can reproduce the problem with the following type
>
> @JsonSerialize(include=JsonSerialize.Inclusion.NON_NULL)
> public class MyType
> {
> public static final String SOME_NAME_STR_REFERENCE = "blah";
> @RiakKey
> private String myKey;
> @RiakIndex(name=SOME_NAME_STR_REFERENCE)
> private String indexedProp;
> // Getters and setters go here
> }
>
> Make a call to bucket.store(typeRef).execute() followed by a GET to
> /riak/myBucket/myKey. The indexedProp element will be missing in the json
> for calls made on riak-client version 1.0.6 but will be available when the
> call is made from riak-client version 1.0.5.
>
> Thanks
> Deepak Bala
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak java client - HTTP client...

2012-10-02 Thread Brian Roach
The Travis CI build fails occasionally with a map/reduce timeout issue
that isn't (directly) due to the client but still needs addressing.
It's nothing to worry about.

On Tue, Oct 2, 2012 at 9:06 AM, Guido Medina  wrote:
> Hi,
>
>   Is there any particular issue with the Java Riak HttpClient when using a
> version other than 4.1.1? I have a pull request that doesn't pass the Travis
> CI build, even though I have verified it shouldn't have issues except for
> the that specific difference, if so, could it be tried with HttpClient
> 4.2.1?
>
>   I have tried the integration test on our 4 nodes development cluster but
> never gets as far as in the Travis CI system, so I also wonder if that Riak
> cluster has some extra requirement to run the integration test phase from
> maven.
>
> Best regards,
>
> Guido.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak search and basic write operations

2012-10-02 Thread Brian Roach
Greetings!

First and foremost, search and secondary indexes (2i) are not the same thing.

You need to enable 2i and use the ELevelDB backend as described here:
http://wiki.basho.com/Secondary-Indexes---Configuration-and-Examples.html

Secondly, the default Converter (JSONConverter) in the Java client
does support serializing/deserializing of secondary indexes. The page
you link to in the cookbook is demonstrating how to write your own
custom converter (for something other than JSON), and its first
example does not support them.

Enable 2i in Riak, annotate your POJO as you have, and use the default
Converter and you should be fine; nothing else is needed.
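
Once 2i is enabled, the round trip looks roughly like this (a sketch with
invented names/values; note the client appends the _bin / _int suffix to the
index name for you):

    import java.util.List;
    import com.basho.riak.client.bucket.Bucket;
    import com.basho.riak.client.convert.RiakIndex;
    import com.basho.riak.client.convert.RiakKey;
    import com.basho.riak.client.query.indexes.BinIndex;

    public class User {
        @RiakKey private String id;
        @RiakIndex(name = "user_email") private String email;
        // getters/setters ...
    }

    Bucket bucket = client.fetchBucket("users").execute();
    bucket.store(user).execute(); // @RiakKey supplies the key

    // look the user up by email
    List<String> keys = bucket.fetchIndex(BinIndex.named("user_email"))
                              .withValue("someone@example.com")
                              .execute();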

On Tue, Oct 2, 2012 at 4:03 PM, kamiseq  wrote:
> hej I am a bit confused about search functionality and basic
> read/write operations.
>
> Im using recent (1.0.6) java client and pb cluster riak client. just
> for testing Im only using two local machines joined into one cluster.
> storing and reading objects works fine  with key(let say I have users
> bucket and I generate id from timestamp).
>
> but
> 1. I cannot fetch objects using rest client in firefox after I store
> objects using java client (I ll check with curl) but I can query for
> keys in bucket (http://192.168.0.121:8098/riak/users/?keys=true&props=false).
> 2. I followed 
> https://github.com/basho/riak-java-client/wiki/Using-a-custom-Converter
> and I annotated one of the field with @RiakIndex(name = "user_email")
> and now I tried to look-up this object with email but I got
> Caused by: com.basho.riak.client.RiakException: java.io.IOException:
> {"phase":"index","error":"{indexes_not_supported,riak_kv_bitcask_backend}","input":"{cover,[{1370157784997721485815954530671515330927436759040,[1370157784997721485815954530671515330927436759040]}],{<<\"users\">>,{eq,<<\"user_email_bin\">>,<<\"js@gm.com1\">>}}}","type":"result","stack":"[]"}
> at com.basho.riak.client.query.MapReduce.execute(MapReduce.java:80)
> at 
> com.basho.riak.client.raw.pbc.PBClientAdapter.fetchIndex(PBClientAdapter.java:436)
>
> I ve google a bit and all I could find was this
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-September/005749.html
> post a year old now. which points out that search needs to be enabled
> in app.config file. I changed settings and restarted both machines.
> but I keep getting same exception every time.
> 3. in same 
> https://github.com/basho/riak-java-client/wiki/Using-a-custom-Converter
> tutorial I read that default converter is not marshalling indexes,
> links and metadata and it shows how to get and restore those
> information back but looking into code for default converter
> implementation I could find the same lines of code as in the article
>
> Map usermetaData =
> usermetaConverter.getUsermetaData(domainObject);
> RiakIndexes indexes = riakIndexConverter.getIndexes(domainObject);
> Collection links = 
> riakLinksConverter.getLinks(domainObject);
>
> return RiakObjectBuilder.newBuilder(bucket, key)
> .withValue(value)
> .withVClock(vclock)
> .withUsermeta(usermetaData)
> .withIndexes(indexes)
> .withLinks(links)
> .withContentType(Constants.CTYPE_OCTET_STREAM)
> .build();
> 4. can I specify data encoding it seems that it is not UTF-8
> 5. I also saw in app.config that vnode_vclocks is set to true, so
> should I still set clientId on riak client?? I read it will be skipped
> anyway now.
>
> thanks for any comments
>
> pozdrawiam
> Paweł Kamiński
>
> kami...@gmail.com
> pkaminski@gmail.com
> __
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak search and basic write operations

2012-10-03 Thread Brian Roach
The problem you are having has nothing to do with the Java client;
the '#' character is a reserved character in a URL. It needs to be
URL encoded or it's interpreted as a fragment/anchor indicator.

http://en.wikipedia.org/wiki/Percent-encoding

In a URL you just need to replace it with %23
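
From Java you can let the standard library do it (a sketch; the key is
invented, and note URLEncoder does form-encoding, so a space becomes '+'
rather than '%20'):

    import java.net.URLEncoder;

    String key = "Um|18498012#4";                     // hypothetical key containing '#'
    String encoded = URLEncoder.encode(key, "UTF-8"); // "Um%7C18498012%234"
    String url = "http://127.0.0.1:8098/riak/m/" + encoded;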


On Wed, Oct 3, 2012 at 4:24 PM, kamiseq  wrote:
> I found out why I couldn't read objects via rest client. my key
> contained # and it was probably mapped to different char by pbclient,
> when replaced # with _ everything works fine.
>
> how can I force charset using pbclient, what is the default?
>
> pozdrawiam
> Paweł Kamiński
>
> kami...@gmail.com
> pkaminski@gmail.com
> __
>
>
> On 3 October 2012 18:24, kamiseq  wrote:
>> ok,
>> I admit that somehow I didn't catch at first that example shows second
>> index usage - so thanks for pointing this out.
>>
>> what about questions 1,4,5??
>>
>> pozdrawiam
>> Paweł Kamiński
>>
>> kami...@gmail.com
>> pkaminski@gmail.com
>> __
>>
>>
>> On 3 October 2012 00:24, Brian Roach  wrote:
>>> Greetings!
>>>
>>> First and foremost, search and secondary indexes (2i) are not the same 
>>> thing.
>>>
>>> You need to enable 2i and use the ELevelDB backend as described here:
>>> http://wiki.basho.com/Secondary-Indexes---Configuration-and-Examples.html
>>>
>>> Secondly, the default Converter (JSONConverter) in the Java client
>>> does support serializing/deserializing of secondary indexes. The page
>>> you link to in the cookbook is demonstrating how to write your own
>>> custom converter (for something other than JSON), and its first
>>> example does not support them.
>>>
>>> Enable 2i in Riak, annotate your POJO as you have, and use the default
>>> Converter and you should be fine; nothing else is needed.
>>>
>>> On Tue, Oct 2, 2012 at 4:03 PM, kamiseq  wrote:
>>>> hej I am a bit confused about search functionality and basic
>>>> read/write operations.
>>>>
>>>> Im using recent (1.0.6) java client and pb cluster riak client. just
>>>> for testing Im only using two local machines joined into one cluster.
>>>> storing and reading objects works fine  with key(let say I have users
>>>> bucket and I generate id from timestamp).
>>>>
>>>> but
>>>> 1. I cannot fetch objects using rest client in firefox after I store
>>>> objects using java client (I ll check with curl) but I can query for
>>>> keys in bucket 
>>>> (http://192.168.0.121:8098/riak/users/?keys=true&props=false).
>>>> 2. I followed 
>>>> https://github.com/basho/riak-java-client/wiki/Using-a-custom-Converter
>>>> and I annotated one of the field with @RiakIndex(name = "user_email")
>>>> and now I tried to look-up this object with email but I got
>>>> Caused by: com.basho.riak.client.RiakException: java.io.IOException:
>>>> {"phase":"index","error":"{indexes_not_supported,riak_kv_bitcask_backend}","input":"{cover,[{1370157784997721485815954530671515330927436759040,[1370157784997721485815954530671515330927436759040]}],{<<\"users\">>,{eq,<<\"user_email_bin\">>,<<\"js@gm.com1\">>}}}","type":"result","stack":"[]"}
>>>> at com.basho.riak.client.query.MapReduce.execute(MapReduce.java:80)
>>>> at 
>>>> com.basho.riak.client.raw.pbc.PBClientAdapter.fetchIndex(PBClientAdapter.java:436)
>>>>
>>>> I ve google a bit and all I could find was this
>>>> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-September/005749.html
>>>> post a year old now. which points out that search needs to be enabled
>>>> in app.config file. I changed settings and restarted both machines.
>>>> but I keep getting same exception every time.
>>>> 3. in same 
>>>> https://github.com/basho/riak-java-client/wiki/Using-a-custom-Converter
>>>> tutorial I read that default converter is not marshalling indexes,
>>>> links and metadata and it shows how to get and restore those
>>>> information back but looking into code for default converter
>>>> implementation I could find the same lines of code as in the article
>>>>
>>>> Map userme

Re: Riak JAVA Client Performance

2012-10-13 Thread Brian Roach
Some points about how the Java client works:

You use a single instance of the client and share it across threads.

The client holds a connection pool. It grows as necessary. You can
specify the starting size of the pool and the max size (default is
unlimited). There is an idle reaper thread in that connection pool
that evicts connections that are idle for 1 second by default; this is
also something you can change in the config.
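
Those knobs live on the config builder, roughly like this (a sketch from
memory; the values are only examples, and the builder method names should be
verified against your client version):

    // com.basho.riak.client.raw.pbc.PBClientConfig,
    // com.basho.riak.client.IRiakClient, com.basho.riak.client.RiakFactory
    PBClientConfig conf = new PBClientConfig.Builder()
        .withHost("127.0.0.1")
        .withPort(8087)
        .withInitialPoolSize(8)             // connections created up front
        .withPoolSize(32)                   // max pool size; the default is unlimited
        .withIdleConnectionTTLMillis(1000)  // idle reaper eviction, 1s by default
        .build();
    IRiakClient client = RiakFactory.newClient(conf);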

As has been mentioned, the best way to deal with a Riak cluster is by
using HAProxy. There is a ClusterClient available in the Java client
that you can instantiate and use that will round-robin requests to
different nodes. That said, unfortunately there are currently a number
of issues that make HAProxy a superior solution. This is something we
plan to address as we further develop the Java client.

Thanks,
Brian Roach


On Thu, Oct 11, 2012 at 3:39 AM, Guido Medina  wrote:
> Hi Pavel,
>
>   I'm not an expert with the pool size, but depending on your average key
> size and nodes you could tune it to your needs, regarding the client, a
> single shared client instance will suffice, there is a retrier parameter
> which says how many times Riak will retry your operation before returning
> you an exception (3 by default), and there is a timeout on acquiring the
> connection, this is an example config:
>
> The pool size here is for 4 nodes cluster kind of guessing for Erlang 8
> threads per node to allow Riak nodes do other things too, remember they have
> to sync their data between the nodes:
>
> host = your balancer host
> port = your balancer port
>
> final PBClientConfig clientConfig=new
> PBClientConfig.Builder().withHost(host).withPort(port).withPoolSize(32).withConnectionTimeoutMillis(5000).build();
> final IRiakClient riakClient=RiakFactory.newClient(clientConfig);
>
> That we have it running with no issues, the pool size depends on your needs
> and data size, you could run with a pool size of 50 to a 100 if your keys
> are really small, you will have to try your own values.
>
> Regards,
>
> Guido.
>
>
> On 11/10/12 08:40, Pavel Kogan wrote:
>
> Thanks Guido, Pawel,
>
> I will try using HAProxy + holding N concurrent connections on the client
> side.
> I want clear for myself some point about concurrent connections:
> 1) What is reasonable limit of concurrent connections?
> 2) Concurrent connections = separate generated pbc clients or single shared
> pbc client?
> 3) Will connection timeout if no requests would be done for some period?
>
> Pavel
>
> On Wed, Oct 10, 2012 at 8:57 PM, Guido Medina 
> wrote:
>>
>> From that perspective, for now it is better to treat the client as you
>> would treat a JDBC DataSource pool, the tidy up comes when connecting the
>> client, either one node or many, the client will behave better if it has no
>> knowledge of whats going on at the cluster side, of course, that's as of
>> 1.0.6, so that might change.
>>
>> He could try to connect to one node with a pool from 8 to 16 concurrent
>> connections and start from there, then, when talking to a cluster, he needs
>> the balancer in the middle, main reason is because Riak expect you to
>> connect to all nodes (it will simply behave better), otherwise it will be
>> overloaded at one node and give you IOExceptions from time to time.
>>
>> Hope that helps,
>>
>> Guido.
>>
>>
>> On 10/10/12 19:24, kamiseq wrote:
>>>
>>> ok, you have 100% point here, on the other hand I think pavel looks
>>> for some guidance how to improve performance on client side, so he can
>>> be 100% sure he is not wasting time on something. this is maybe
>>> premature optimization but it maybe also good position to understand
>>> library and enter new world of riak
>>>
>>> pozdrawiam
>>> Paweł Kamiński
>>>
>>> kami...@gmail.com
>>> pkaminski@gmail.com
>>> __
>>>
>>>
>>> On 10 October 2012 17:30, Guido Medina  wrote:
>>>>
>>>> In fact, with more nodes, you might be surprised that it might be
>>>> faster... see my point? Riak is a lot of things, 1st you have to be
>>>> aware of
>>>> the hashing, hashmap, how a key gets copied into different nodes, how
>>>> one or
>>>> more nodes are responsible for a key, etc...so it is not that simple.
>>>>
>>>>
>>>> On 10/10/12 16:28, Guido Medina wrote:
>>>>
>>>> That's why I keep pushing to one answer, Riak is not meant to be in one
>>>> cluster, you are removing the external factors and CAP settings you will
>>

Cluster Client retries

2012-10-16 Thread Brian Roach
The Java ClusterClient uses a very simple round-robin and downed nodes
are not removed from the rotation. In addition, it uses a global index
that gets incremented on every operation/retry from the client
threads.

Especially with 3 nodes and any amount of load it is likely a single
thread will end up retrying the same downed node and then fail the
operation.

The best solution currently is to use HAProxy or another load
balancer. This is something we aim to improve as we further develop
the Java client.

Thanks!
Brian Roach

On Tue, Oct 16, 2012 at 2:31 AM, Philippe Guillebert
 wrote:
> Hi list,
>
> We have a cluster of three Riak 0.14.2 nodes in production and quite happy
> with it. I'm planning the upgrade to 1.2.0 and while testing it, I wondered
> about how a client should behave during a rolling upgrade (1 node is down
> for maintenance but the cluster is working).
>
> My expectations for a client is, if a given node is down the client will try
> on another node of the cluster to "hide" the maintenance to the upper layers
> of my application.
>
> I tried with Clojure client Welle (internally it uses a PBClusterClient) and
> it didn't work. As soon as I stop a Riak node, the client throws Connection
> Refused exceptions (instead of retrying elsewhere).
> Our Java client library (uses PBClusterClient) has the same problem.
>
> So I realized here that if I restart a node (for maintenance) on my live
> cluster, my app breaks ?!?
>
> I tried googling but there is a lot of contradictory opinions out there :
>
> On the wiki
> https://github.com/basho/riak-java-client/wiki/ClientFactory#wiki-example3
> it says I should use another class of client :
>
> IRiakClient myPbClient = RiakFactory.newClient(myPbClusterConfig);
>
> Will this client retry correctly ? Does this mean the Welle developers used
> the "wrong" client ?
>
>
> This message on the list states that PBClusterClient should work as I expect
> :
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-March/007949.html
> but this message states that ClusterClient is not working as expected :
> http://comments.gmane.org/gmane.comp.db.riak.user/8680
>
>
> Can you help me keep my sanity here ? Thank you !
>
>
>
> --
>
> Philippe
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak secondary indexing 'Unknown field type for field' error

2012-10-23 Thread Brian Roach
How, exactly, are you creating 'indexes' ?

A secondary index in Riak has to be an integer or a string.
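
For reference, the only shapes RiakIndexes will accept look like this (a
sketch using the field names from your JSON):

    // com.basho.riak.client.query.indexes.RiakIndexes
    RiakIndexes indexes = new RiakIndexes();
    indexes.add("login", "xxx"); // stored as login_bin
    indexes.add("quantity", 2);  // stored as quantity_int
    // a nested object like "sku1" cannot itself be an index value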

Thanks,
Brian Roach

On Tue, Oct 23, 2012 at 2:42 PM, Hrishikesh More
 wrote:
> Hi,
>
> Using following JSON I am trying to create secondary indexes in Riak.
>
> {
> "Id" :  "",
> "login"   : "xxx",
> "context" : "xxx",
> "creationDate" : "",
>  ...
>  ...
> "sku1" : {
>  quantity : 1,
>},
> "sku2" : {
>  quantity : 2,
>},
> }
>
>I prepare RiakIndexes by using above JSON and looping over it. When I try
> to store (the same json string) it in following way I get 'Unknown field
> type for field: 'sku1'  error.
>
>IRiakObject riakObj = RiakObjectBuilder.newBuilder(bucketName, id)
> .withIndexes(indexes)
> .withValue(json)
> .withContentType("application/json")
> .build();
>
>IRiakObject returnObject = bucket.store(riakObj);
>
> Error:
> com.basho.riak.client.http.response.RiakResponseRuntimeException:
> Unknown field type for field: 'sku1'.
> Unknown field type for field: 'sku2'.
>
> 1.  If I don't define nested JSON it works, however if I put 'skuid' using
> objectMapper.createObjectNode()  and add to parent object node (while
> preparing JSON for testing), it gives above error.
>  Do I have to write custom serializer here?
> 2.  Is there a way to ignore this error through config in Riak?
>
> thanx.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak secondary indexing 'Unknown field type for field' error

2012-10-23 Thread Brian Roach
Without knowing exactly what you're doing, it's hard to tell. When
you're using an IRiakObject with StoreObject.store() without
specifying a Converter, there's not any JSON
serialization/deserialization occurring. It's simply storing whatever
you pass to withValue() in Riak.

If you can provide a short compilable example that exhibits the
behavior you're describing, I'll be happy to look into it.

Thanks!
Brian Roach

On Tue, Oct 23, 2012 at 3:08 PM, Hrishikesh More
 wrote:
> Index preparation works but storing the JSON is not working. It complains
> about 'sku1' (which is a 'object' type).
>
>
> On Wed, Oct 24, 2012 at 2:33 AM, Hrishikesh More
>  wrote:
>>
>> I have config with list of attributes and their types (e.g. string,
>> integer, boolean).  Based on attribute I appropriately put it in RiakIndexes
>> object.  It works if I do it with simple (non nested) JSON.
>>
>> e.g.
>> if (attrType.equals("String")) {
>>   riakIndexes.add(fieldName, parser.getText()); ==>
>> parser is JsonParser
>> }
>>
>>
>>
>> On Wed, Oct 24, 2012 at 2:27 AM, Brian Roach  wrote:
>>>
>>> How, exactly, are you creating 'indexes' ?
>>>
>>> A secondary index in Riak has to be an integer or a string.
>>>
>>> Thanks,
>>> Brian Roach
>>>
>>> On Tue, Oct 23, 2012 at 2:42 PM, Hrishikesh More
>>>  wrote:
>>> > Hi,
>>> >
>>> > Using following JSON I am trying to create secondary indexes in
>>> > Riak.
>>> >
>>> > {
>>> > "Id" :  "",
>>> > "login"   : "xxx",
>>> > "context" : "xxx",
>>> > "creationDate" : "",
>>> >  ...
>>> >  ...
>>> > "sku1" : {
>>> >  quantity : 1,
>>> >},
>>> > "sku2" : {
>>> >  quantity : 2,
>>> >},
>>> > }
>>> >
>>> >I prepare RiakIndexes by using above JSON and looping over it. When
>>> > I try
>>> > to store (the same json string) it in following way I get 'Unknown
>>> > field
>>> > type for field: 'sku1'  error.
>>> >
>>> >IRiakObject riakObj = RiakObjectBuilder.newBuilder(bucketName, id)
>>> > .withIndexes(indexes)
>>> > .withValue(json)
>>> > .withContentType("application/json")
>>> > .build();
>>> >
>>> >IRiakObject returnObject = bucket.store(riakObj);
>>> >
>>> > Error:
>>> > com.basho.riak.client.http.response.RiakResponseRuntimeException:
>>> > Unknown field type for field: 'sku1'.
>>> > Unknown field type for field: 'sku2'.
>>> >
>>> > 1.  If I don't define nested JSON it works, however if I put 'skuid'
>>> > using
>>> > objectMapper.createObjectNode()  and add to parent object node (while
>>> > preparing JSON for testing), it gives above error.
>>> >  Do I have to write custom serializer here?
>>> > 2.  Is there a way to ignore this error through config in Riak?
>>> >
>>> > thanx.
>>> >
>>> > ___
>>> > riak-users mailing list
>>> > riak-users@lists.basho.com
>>> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>> >
>>
>>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: "Bad message code" error from Java ProtoBuf client?

2012-12-14 Thread Brian Roach
Tim -

That error is generated when the original, underlying protocol buffers
Riak client gets the wrong protobuf message from the socket - as in,
not the one it's expecting.

The codes are here:
http://docs.basho.com/riak/latest/references/apis/protocol-buffers/

The code numbers you list basically say that you got a "get" response
message when expecting a "list keys" response message.

I'm digging into this to see how that would happen (I have an idea)
but by any chance are you listing keys and then not iterating through
the entire set?

Thanks,
Brian Roach

On Fri, Dec 14, 2012 at 2:55 PM, Tim W  wrote:
> I'm seeing the following message emitted from a query to Riak:
>
> com.basho.riak.client.RiakRetryFailedException: java.io.IOException: bad
> message code. Expected: 10 actual: 18
> at
> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:79)
> at
> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:81)
> at
> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:81)
> at
> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:81)
>
> I'm also seeing this a bit further down in the log:
>
> Caused by: java.io.IOException: bad message code. Expected: 10 actual: 18
>  at com.basho.riak.pbc.RiakConnection.receive(RiakConnection.java:126)
>  at com.basho.riak.pbc.RiakClient.processFetchReply(RiakClient.java:278)
>  at com.basho.riak.pbc.RiakClient.fetch(RiakClient.java:252)
>  at com.basho.riak.pbc.RiakClient.fetch(RiakClient.java:241)
>  at
> com.basho.riak.client.raw.pbc.PBClientAdapter.fetch(PBClientAdapter.java:156)
>  at
> com.basho.riak.client.operations.FetchObject$1.call(FetchObject.java:102)
> at
> com.basho.riak.client.operations.FetchObject$1.call(FetchObject.java:100)
> at
> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:72)
>
>
> This crops up periodically, during different operations, and disappears just
> as mysteriously.
>
> Any hint as to what this really means?  (For example, is there somewhere I
> can look up these error codes?)
>
> For the record, I'm running Riak 1.1.4 with Java client 1.0.5.
>
> Thanks in advance!
> - Tim
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: "Bad message code" error from Java ProtoBuf client?

2012-12-14 Thread Brian Roach
Sorry - dyslexia acting up:

That should read that you received a "list keys" response when
expecting a "get" response.

- Roach

On Fri, Dec 14, 2012 at 3:40 PM, Brian Roach  wrote:
> Tim -
>
> That error is generated when the original, underlying protocol buffers
> Riak client gets the wrong protobuf message from the socket - as in,
> not the one it's expecting.
>
> The codes are here:
> http://docs.basho.com/riak/latest/references/apis/protocol-buffers/
>
> The code numbers you list basically say that you got a "get" response
> message when expecting a "list keys" response message.
>
> I'm digging into this to see how that would happen (I have an idea)
> but by any chance are you listing keys and then not iterating through
> the entire set?
>
> Thanks,
> Brian Roach
>
> On Fri, Dec 14, 2012 at 2:55 PM, Tim W  wrote:
>> I'm seeing the following message emitted from a query to Riak:
>>
>> com.basho.riak.client.RiakRetryFailedException: java.io.IOException: bad
>> message code. Expected: 10 actual: 18
>> at
>> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:79)
>> at
>> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:81)
>> at
>> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:81)
>> at
>> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:81)
>>
>> I'm also seeing this a bit further down in the log:
>>
>> Caused by: java.io.IOException: bad message code. Expected: 10 actual: 18
>>  at com.basho.riak.pbc.RiakConnection.receive(RiakConnection.java:126)
>>  at com.basho.riak.pbc.RiakClient.processFetchReply(RiakClient.java:278)
>>  at com.basho.riak.pbc.RiakClient.fetch(RiakClient.java:252)
>>  at com.basho.riak.pbc.RiakClient.fetch(RiakClient.java:241)
>>  at
>> com.basho.riak.client.raw.pbc.PBClientAdapter.fetch(PBClientAdapter.java:156)
>>  at
>> com.basho.riak.client.operations.FetchObject$1.call(FetchObject.java:102)
>> at
>> com.basho.riak.client.operations.FetchObject$1.call(FetchObject.java:100)
>> at
>> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:72)
>>
>>
>> This crops up periodically, during different operations, and disappears just
>> as mysteriously.
>>
>> Any hint as to what this really means?  (For example, is there somewhere I
>> can look up these error codes?)
>>
>> For the record, I'm running Riak 1.1.4 with Java client 1.0.5.
>>
>> Thanks in advance!
>> - Tim
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: "Bad message code" error from Java ProtoBuf client?

2012-12-14 Thread Brian Roach
Tim,

Right - I understood the call that was failing was a "get" (Sorry - I
sent a second email because when I answered originally I transposed
the messages).

The underlying reason for it is that somewhere, you're doing a list
keys operation but not iterating through them all. This isn't your
fault; it's a bug, since not iterating through them all shouldn't be a problem.

I was able to reproduce the issue and just pushed a branch / created a
pull request:

https://github.com/basho/riak-java-client/pull/186

It is intermittent because of the specific set of conditions that have
to be met for it to occur. For the time being if you find where you're
not iterating through all the returned keys and do so, the error
should stop occurring.  I've been working through testing some
branches and merging things, I hope to release this new 1.0.7 version
of the client next week.

Thanks,
Brian Roach

On Fri, Dec 14, 2012 at 8:08 PM, Tim W  wrote:
> Brian!
>
> In this particular case, the failing call is requesting a single key.  And
> it didn't just happen once -- rather, for quite a while, every single one of
> these calls (login) would fail in this manner, repeatedly.   Then, the next
> day... *poof*... working again.
>
> Rather odd...
>
> - Tim
>
>
> On 12/14/12 3:43 PM, Brian Roach wrote:
>>
>> Sorry - dyslexia acting up:
>>
>> That should read that you received a "list keys" response when
>> expecting a "get" response.
>>
>> - Roach
>>
>> On Fri, Dec 14, 2012 at 3:40 PM, Brian Roach  wrote:
>>>
>>> Tim -
>>>
>>> That error is generated when the original, underlying protocol buffers
>>> Riak client gets the wrong protobuf message from the socket - as in,
>>> not the one it's expecting.
>>>
>>> The codes are here:
>>> http://docs.basho.com/riak/latest/references/apis/protocol-buffers/
>>>
>>> The code numbers you list basically say that you got a "get" response
>>> message when expecting a "list keys" response message.
>>>
>>> I'm digging into this to see how that would happen (I have an idea)
>>> but by any chance are you listing keys and then not iterating through
>>> the entire set?
>>>
>>> Thanks,
>>> Brian Roach
>>>
>>> On Fri, Dec 14, 2012 at 2:55 PM, Tim W  wrote:
>>>>
>>>> I'm seeing the following message emitted from a query to Riak:
>>>>
>>>> com.basho.riak.client.RiakRetryFailedException: java.io.IOException: bad
>>>> message code. Expected: 10 actual: 18
>>>>  at
>>>> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:79)
>>>>  at
>>>> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:81)
>>>>  at
>>>> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:81)
>>>>  at
>>>> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:81)
>>>>
>>>> I'm also seeing this a bit further down in the log:
>>>>
>>>> Caused by: java.io.IOException: bad message code. Expected: 10 actual:
>>>> 18
>>>>   at com.basho.riak.pbc.RiakConnection.receive(RiakConnection.java:126)
>>>>   at
>>>> com.basho.riak.pbc.RiakClient.processFetchReply(RiakClient.java:278)
>>>>   at com.basho.riak.pbc.RiakClient.fetch(RiakClient.java:252)
>>>>   at com.basho.riak.pbc.RiakClient.fetch(RiakClient.java:241)
>>>>   at
>>>>
>>>> com.basho.riak.client.raw.pbc.PBClientAdapter.fetch(PBClientAdapter.java:156)
>>>>   at
>>>>
>>>> com.basho.riak.client.operations.FetchObject$1.call(FetchObject.java:102)
>>>>  at
>>>>
>>>> com.basho.riak.client.operations.FetchObject$1.call(FetchObject.java:100)
>>>>  at
>>>> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:72)
>>>>
>>>>
>>>> This crops up periodically, during different operations, and disappears
>>>> just
>>>> as mysteriously.
>>>>
>>>> Any hint as to what this really means?  (For example, is there somewhere
>>>> I
>>>> can look up these error codes?)
>>>>
>>>> For the record, I'm running Riak 1.1.4 with Java client 1.0.5.
>>>>
>>>> Thanks in advance!
>>>> - Tim
>>>>
>>>> ___
>>>> riak-users mailing list
>>>> riak-users@lists.basho.com
>>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: strange behavior upgrading from riak-java-client 1.0.5 -> 1.0.6

2012-12-28 Thread Brian Roach
Dietrich -

I haven't seen this in testing, nor has anyone reported it; could I
get some more info?

What operations are timing out like this? Is that the complete message
in the riak error.log? Which version of Riak are you running?

Do you see this simply from dropping the 1.0.6 client into your existing
application, or is this after changing your code to use the new
'withoutFetch()'?

Thanks,
Brian Roach

On Wed, Dec 26, 2012 at 7:28 PM, Dietrich Featherston  wrote:
> I had rolled out an upgrade to a JVM app that uses rjc 1.0.5. We had
> upgraded to 1.0.6 to take advantage of newly added abilities to do a
> put without preceding it with a fetch in order to reduce operational
> load on the cluster. However, after rolling out this change we
> frequently see large rises in latency across the cluster (up to the
> gen_fsm limit of 60s) and see the following in the riak logs
>
> [error] Unrecognized message {74392380,{error,timeout}}
>
> This is accompanied by repeated socket timeouts as seen by the 
> riak-java-client.
>
> Also worth mentioning, one of our nodes got into a state that the rjc
> was unable to establish a tcp connection on the protobuf port to riak
> on localhost. We were only able to fix this by restarting the riak
> process on that node and inducing a fair amount of handoff.
>
> Any thoughts?
>
> Thanks,
> D
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: strange behavior upgrading from riak-java-client 1.0.5 -> 1.0.6

2012-12-28 Thread Brian Roach
On Fri, Dec 28, 2012 at 11:37 AM, Dietrich Featherston
 wrote:
>
> All socket operations. It looks as though those that open a new socket are 
> especially
> impacted. We are running 1.2.1 with the leveldb backend. Same 9 node SSD 
> cluster info I
> have posted to the list before but don't have access to all of the details at 
> the moment.

Sorry, I mean what type of Riak operations? Store, fetch, MapReduce,
etc? What is actually timing out?

> I suspect that there are additional timeouts to be configured and the 
> previous default
> values have been lowered. I tried bumping the requestTimeout to no avail. 
> This wouldn't
> explain the strange latency spikes (via /stats) seen as we began rolling out 
> the new driver.

It really shouldn't change any of this. Even the withoutFetch()
feature ... it just doesn't do a fetch.

Since you're using that new feature, how are you using it? Is this
storing new objects, or are you providing a vclock from a previous
fetch?
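
(For anyone following along, the two shapes look roughly like this; a hedged
sketch with invented names, using the @RiakVClock annotation to carry a
vclock on the POJO:)

    import com.basho.riak.client.cap.VClock;
    import com.basho.riak.client.convert.RiakKey;
    import com.basho.riak.client.convert.RiakVClock;

    public class Record {
        @RiakKey private String key;
        @RiakVClock private VClock vclock; // populated by a fetch; null for new keys
        // ...
    }

    // store without the client doing its usual fetch-before-store
    bucket.store(record).withoutFetch().execute();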

Thanks,
- Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: strange behavior upgrading from riak-java-client 1.0.5 -> 1.0.6

2012-12-28 Thread Brian Roach
On Fri, Dec 28, 2012 at 12:34 PM, Dietrich Featherston
 wrote:
> Primarily stores but I did see one case of socket timeouts simply building a 
> new connection pool using the rjc.

This should simply be a result of attempting to bring up another
instance of the client when the node can't accept more connections;
when it tries to "warm up" the connection pool, those connections time
out.

> We are simply doing a put. It is not uncommon for keys to be overwritten but 
> we are not
> providing a vector clock. There is a dedicated master performing the write 
> for a given key
> upstream from riak and overwriting is always safe (assuming last one wins) 
> but we don't
> hold onto the vector clock.
>
> It seems possible/likely that we are inadvertently invoking some riak 
> consistency
> machinery by turning off the get prior to put using withoutFetch(). Would it 
> help to attempt
> to coordinate writes in another way?

Are you using a bucket with allow_mult enabled?

> Somewhat related: I've been curious about writing a smart riak client that 
> writes to a node
> based on the preflist for a key to avoid unnecessary internal handing off of 
> reads and
> writes when possible. Two things strike me though 1) would need to compute 
> this preflist
> outside of riak and 2) unsure how impactful this change would be without 
> better
> understanding where internal riak  bottlenecks present themselves. Perhaps 
> best left for
> another thread.

Yeah - a bit off topic but we've talked about this (not in great
detail as of yet) internally.

- Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: store performance

2013-01-03 Thread Brian Roach
On Wed, Jan 2, 2013 at 1:04 PM, catchme  wrote:
> Hello,
> I am trying to store 1543400 records using the memory backend.
> I have a basic cluster setup with 2 nodes..
> I am using the pbcClient
> Bucket b = client.createBucket("test_bucket1").nVal(1).execute();
> //store object
> StoreObject storeObject =
> b.store((String) key, buf);
> storeObject.dw(Quora.ONE).returnBody(false).execute();
>
> The store takes forever..
> I have riak installed on RHEL 6
>
> Any suggestions?

Multi-threading. A single client thread is going to max out somewhere
in the neighborhood of 100 operations per second best case (give or
take, depending on network, size of objects, etc.).

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: store performance

2013-01-03 Thread Brian Roach
On Thu, Jan 3, 2013 at 10:01 AM, Brian Roach  wrote:
> On Wed, Jan 2, 2013 at 1:04 PM, catchme  wrote:
>> Hello,
>> I am trying to store 1543400 records using the memory backend.
>> I have a basic cluster setup with 2 nodes..
>> I am using the pbcClient
>> Bucket b = client.createBucket("test_bucket1").nVal(1).execute();
>> //store object
>> StoreObject storeObject =
>> b.store((String) key, buf);
>> 
>> storeObject.dw(Quora.ONE).returnBody(false).execute();
>>
>> The store takes forever..
>> I have riak installed on RHEL 6
>>
>> Any suggestions?
>
> Multi-threading. A single client thread is going to max out somewhere
> in the neighborhood of 100 per second best case. (give or take,
> depending on network, size of objects, etc).

Sorry for the double reply - you also do not want to be calling
`createBucket()` every time. This fetches/sends the bucket information
from/to Riak. You want to create the `Bucket` object once then pass it
around. If that's not possible due to how you have written your code,
use `fetchBucket()` instead, along with
`lazyLoadBucketProperties()`, so as not to query the bucket
properties each time.
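
That is, something like this (a sketch):

    // fetch once and reuse; no round trip for bucket properties
    Bucket bucket = client.fetchBucket("test_bucket1")
                          .lazyLoadBucketProperties()
                          .execute();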

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: store performance

2013-01-07 Thread Brian Roach
The client is thread-safe. You can create an instance of it, pass it
to multiple threads, and perform multiple store operations in
parallel.
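
A minimal sketch of that pattern (hedged: host, port, bucket, thread count,
and the 'records' data source are all placeholders):

    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import com.basho.riak.client.IRiakClient;
    import com.basho.riak.client.RiakException;
    import com.basho.riak.client.RiakFactory;
    import com.basho.riak.client.bucket.Bucket;

    final IRiakClient client = RiakFactory.pbcClient("127.0.0.1", 8087);
    final Bucket bucket = client.fetchBucket("test_bucket1")
                                .lazyLoadBucketProperties()
                                .execute();

    ExecutorService pool = Executors.newFixedThreadPool(16);
    for (final Map.Entry<String, byte[]> e : records.entrySet()) {
        pool.submit(new Runnable() {
            public void run() {
                try {
                    bucket.store(e.getKey(), e.getValue()).returnBody(false).execute();
                } catch (RiakException ex) {
                    // log and/or retry as appropriate
                }
            }
        });
    }
    pool.shutdown();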

Thanks,
- Roach

On Mon, Jan 7, 2013 at 12:46 PM, catchme  wrote:
> Could you provide an example of using Multi-threading on the client?
>
>
>
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/store-performance-tp4026462p4026488.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak java client causing OOMs on high latency

2013-01-08 Thread Brian Roach
The code you cite is reading a size (32-bit int, network byte order)
and a message code (8-bit int) from the socket. It then creates a
byte[] of the size required for the amount of data that has been
requested and then sent back by Riak to the client in a response. (See
the docs here: 
http://docs.basho.com/riak/latest/references/apis/protocol-buffers/
 that show this format )
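
(The framing itself amounts to this; a sketch of the shape only, not the
client's actual code:)

    import java.io.DataInputStream;

    DataInputStream dis = new DataInputStream(socket.getInputStream());
    int len = dis.readInt();         // 32-bit length, network byte order (big-endian)
    byte msgCode = dis.readByte();   // 8-bit message code
    byte[] data = new byte[len - 1]; // the length includes the code byte
    dis.readFully(data);             // remainder is the protobuf payload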

That byte[] is then passed into the Google protocol buffers generated
code where the appropriate protocol buffers object(s) are deserialized
from those bytes and the information contained therein is extracted
from them into our own objects which are returned to the caller as a
response.

From the client's perspective, if that's how much data you're getting,
that's how much data you've requested, and how much Riak has sent you.

Thanks,
Brian Roach

On Mon, Jan 7, 2013 at 3:26 PM, Dietrich Featherston  wrote:
> We're seeing instances of a JVM app which talks to riak run out of
> memory when riak operations rise in latency or riak becomes otherwise
> unresponsive. A heap dump of the JVM at the time of the OOM show that
> 91% of the 1G (active) heap is consumed by large byte[] instances. In
> our case 3 of those byte[]s are in the 200MB range with size dropping
> off after that. The byte[] instances cannot be traced back to a
> specific variable as their references appear to be stack-allocated
> local method variables. But, based on the name of the thread, we can
> tell that the thread is doing a store operation against
> riak@localhost.
>
> Inspection of the data in one of these byte[]s shows what looks like
> an r_object response with headers and footer boilerplate around our
> object payload. This 200+MB byte[] is filled with 0s after the 338th
> element which is really confusing and indicates that far too much
> space is being allocated to read the protobuf payload. Here's a dump
> of one of these instances:
> https://gist.github.com/40ef9b2ff561e973a72c
>
> It's also worth mentioning that, according to /stats,
> get_fsm_objsize_100 is consistently under 1MB so there is no reason to
> think that our objects are actually this large.
>
> At this point I'm suspicious of the following code creating too large
> a byte[] from possibly too large a return from dis.readInt()
>
> https://github.com/basho/riak-java-client/blob/master/src/main/java/com/basho/riak/pbc/RiakConnection.java#L110
>
> Unsure if that indicates a problem in the driver or the server-side
> erlang protobuf server.
>
> Suspicious that requests pile up and many of these byte[]s are hanging
> out--enough to cause an OOM. It's possible that they are always very
> large, but are short-lived enough as to not cause a problem until
> latencies rise increasing their numbers briefly.
>
> Thoughts?
>
> Thanks,
> D
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java riak-client 1.0.7 build with dependencies

2013-01-11 Thread Brian Roach
Daniel -

We publish to maven central, and most people simply use maven to pull
down our jars as well as their dependencies.

As a convenience to people who aren't using maven I also used to
upload the compiled .jar as well as a "fat jar" that included all the
other dependencies to the github "downloads" section of the project
but unfortunately I just noticed / found out they discontinued that
feature so ... they're missing. Sorry about that!

I just uploaded the "fat jar" to S3 - you can grab it via

http://riak-java-client.s3.amazonaws.com/riak-client-1.0.7-jar-with-dependencies.jar


With all that said - you can also download just the .jar files from
maven central directly.

http://search.maven.org/#search%7Cga%7C1%7Criak

The client depends on the protocol buffers jar so you'll need both.
You'll then also need all the jars we depend on and any dependencies
they have.

Thanks,
Brian Roach

On Fri, Jan 11, 2013 at 7:47 AM, Daniel Iwan  wrote:
> Is there a repository/location where I could download 1.0.7 Java riak-client
> without building it myself?
>
> Thanks
> Daniel
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Strange Exception in java-client

2013-01-30 Thread Brian Roach
Ingo -

Riak is returning an object with no contents (which ends up being an
empty String passed to Jackson).

Unless you've somehow mangled the data yourself (which sounds unlikely
given the bit about the 404 from the command line; more on that in a
bit) what's happening is that you're encountering a tombstone; an
object that has been deleted via a delete operation but hasn't been
removed yet. This causes an "empty" object to be returned (the
tombstone) and causes Jackson to puke (HTTP will actually return this
as a 404 but if you look there's still a X-Riak-Vclock: header with a
vclock).

Probably the best description of how this works in Riak is a post by
Jon Meredith which can be found here:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-October/006048.html

Unfortunately this is something the Java client doesn't know what to
do with when using the default JSONConverter and your own POJOs. And
it's not as simple as "just return null" because of a case where a
tombstone could actually be a sibling and the client then needs to
resolve the conflict which is the next step in the process. It's
something I'm going to have to think about.

As you've discovered, when the tombstone isn't a sibling simply
retrying will often work because by then the delete has fully
completed and the tombstones have been removed from the Riak nodes.

Is there a reason you're rapidly doing a delete then a store (which
triggers that fetch)?

Thanks,
Brian Roach

On Wed, Jan 30, 2013 at 9:06 AM, Ingo Rockel
 wrote:
> Hi Dmitri,
>
> it doesn't happen in my code and it does happen while the riak-client tries
> to deserialize a fetched object from riak in a "fetch-before-store" (see the
> stack), I also get this error randomly while trying just to fetch an object
> from the database.
>
> And if I try to fetch the object from the cmdline I just get a 404. So I
> would expect the java-client just returns a null-result for this fetch and
> not to throw an exception.
>
> All my objects are stored using the riak-java-client and the
> json-serializer.
>
> Ahh, just tested: if I retry it sometimes works, although most of the time
> still fails (haven't tried with a sleep so far).
>
> Ingo
>
> On 30.01.2013 16:57, Dmitri Zagidulin wrote:
>>
>> Hi Ingo.
>>
>> It's difficult to diagnose the exact reason without looking at your code.
>> But that error is a JSON parser error. It gets thrown whenever the code
>> tries to parse an empty string as a json object.
>> The general-case solution is to validate your strings or input streams
>> that you're turning into JSON objects, or to catch an exception when
>> creating that object and deal with it accordingly.
>>
>> But again, it's hard to say why it's happening exactly, in your case --
>> try to determine where in your code that's happening and think of ways
>> some input or result is empty, and check for that.
>>
>> Dmitri
>>
>>
>>
>> On Wed, Jan 30, 2013 at 10:44 AM, Ingo Rockel
>> <ingo.roc...@bluelionmobile.com> wrote:
>>
>> Hi,
>>
>> I wrote a java tool to convert part of our data from a
>> mysql-database into riak. As this tool is running while our system
>> is still up, it needs to replay all modifications done in the mysql
>> database, during these modifications I sometimes get this exception
>> from the riak client:
>>
>> com.basho.riak.client.convert.ConversionException:
>> java.io.EOFException: No content to map to Object due to end of input
>> com.basho.riak.client.convert.ConversionException:
>> java.io.EOFException: No content to map to Object due to end of input
>>  at com.basho.riak.client.convert.JSONConverter.toDomain(JSONConverter.java:167)
>>  at com.basho.riak.client.operations.FetchObject.execute(FetchObject.java:110)
>>  at com.basho.riak.client.operations.StoreObject.execute(StoreObject.java:112)
>>  at com.bluelionmobile.qeep.messaging.db.impl.MessageKVImpl.storeUniqueMessageDto(MessageKVImpl.java:264)
>>  at com.bluelionmobile.qeep.messaging.db.impl.MessageKVImpl.createDataFromDTO(MessageKVImpl.java:138)
>>  at com.bluelionmobile.qeep.messaging.db.impl.MessageKVImpl.updateDataFromDTO(MessageKVImpl.java:205)
>>  at com.bluelionmobile.qeep.messaging.db.utils.Replay$ReplayRunner.run(Replay.

Re: Strange Exception in java-client

2013-01-31 Thread Brian Roach
Ingo -

Unfortunately, once you've got a sibling tombstone, things get a bit
tricky. It's not going away until you resolve it, which, when using
the JSONConverter in the Java client, you can't. Oddly enough, this is
the first time anyone has hit this.

I've got a couple ideas on how to address this properly but I need to
look at some things first.

In the meantime, what I'd suggest as a workaround is to copy and paste
the source for the JSONConverter into your own Converter that
you'll pass to the StoreObject and modify it to return null:

https://github.com/basho/riak-java-client/blob/master/src/main/java/com/basho/riak/client/convert/JSONConverter.java#L141

Have it check to see if riakObject.getValue() returns null and if it
does ... return null. You'll also need to modify your ConflictResolver
to check for null as it iterates through the list of your POJOs that
gets passed to it and act accordingly. If there's only a tombstone,
just return null ... which means you will also need to modify your
Mutation to handle a null being passed to it in the case of there
only being a tombstone.
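
To sketch the idea in code (a rough illustration rather than a drop-in
implementation - the wrapper class name is made up, and it assumes your
POJO otherwise round-trips fine through the stock JSONConverter):

    import com.basho.riak.client.IRiakObject;
    import com.basho.riak.client.cap.VClock;
    import com.basho.riak.client.convert.ConversionException;
    import com.basho.riak.client.convert.Converter;
    import com.basho.riak.client.convert.JSONConverter;

    public class TombstoneAwareConverter<T> implements Converter<T> {
        private final JSONConverter<T> delegate;

        public TombstoneAwareConverter(Class<T> clazz, String bucket) {
            delegate = new JSONConverter<T>(clazz, bucket);
        }

        public IRiakObject fromDomain(T domainObject, VClock vclock) throws ConversionException {
            return delegate.fromDomain(domainObject, vclock);
        }

        public T toDomain(IRiakObject riakObject) throws ConversionException {
            if (riakObject != null && riakObject.getValue() == null) {
                return null; // tombstone - nothing to deserialize
            }
            return delegate.toDomain(riakObject);
        }
    }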

In the end this may well be what I do but I think I have a slightly
more elegant solution that I want to look into.

I've got an errand I need to run this morning, but I'll get to work on
this as soon as I get back.

Thanks, and sorry for the trouble.
- Brian Roach


On Thu, Jan 31, 2013 at 3:56 AM, Ingo Rockel
 wrote:
> Hi Brian,
>
> thanks for the detailed explanation!
>
> I had a look at an object which constantly fails to load even if retrying:
>
> lftp :~> cat "http://172.22.3.14:8091/riak/m/Um|18498012|4|0|18298081"
>  Connecting to 172.22.3.14 (172.22.3.14) port 8091
> ---> GET /riak/m/Um|18498012|4|0|18298081 HTTP/1.1
> ---> Host: 172.22.3.14:8091
> ---> User-Agent: lftp/4.3.3
> ---> Accept: */*
> ---> Connection: keep-alive
> --->
> <--- HTTP/1.1 300 Multiple Choices
> <--- X-Riak-Vclock: a85hYGBgzGDKBVIcaZPWMQZyWttkMCWy5rEyXNhTd4ovCwA=
> <--- Vary: Accept, Accept-Encoding
> <--- Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
> <--- Last-Modified: Thu, 31 Jan 2013 10:00:48 GMT
> <--- ETag: "6PSreYIOL25KOpNyG0XPe7"
> <--- Date: Thu, 31 Jan 2013 10:42:41 GMT
> <--- Content-Type: text/plain
> <--- Content-Length: 56
> <---
> <--* Siblings:
> <--* 50Uz9nvQWwOUBE6USi2gki
> <--* 1JsgLs3CE3k2mWsaCEiPp4
> cat: Access failed: 300 Multiple Choices
> (/riak/m/Um|18498012|4|0|18298081)
> lftp :~> cat
> "http://172.22.3.14:8091/riak/m/Um|18498012|4|0|18298081?vtag=50Uz9nvQWwOUBE6USi2gki"
> ---> GET /riak/m/Um|18498012|4|0|18298081?vtag=50Uz9nvQWwOUBE6USi2gki
> HTTP/1.1
> ---> Host: 172.22.3.14:8091
> ---> User-Agent: lftp/4.3.3
> ---> Accept: */*
> ---> Connection: keep-alive
> --->
> <--- HTTP/1.1 200 OK
> <--- X-Riak-Vclock: a85hYGBgzGDKBVIcaZPWMQZyWttkMCWy5rEyXNhTd4ovCwA=
> <--- Vary: Accept-Encoding
> <--- Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
> <--- Link: ; rel="up"
> <--- Last-Modified: Thu, 31 Jan 2013 09:57:38 GMT
> <--- ETag: "50Uz9nvQWwOUBE6USi2gki"
> <--- Date: Thu, 31 Jan 2013 10:42:49 GMT
> <--- Content-Type: application/octet-stream
> <--- Content-Length: 0
> <---
> lftp :~> cat
> "http://172.22.3.14:8091/riak/m/Um|18498012|4|0|18298081?vtag=1JsgLs3CE3k2mWsaCEiPp4"
> ---> GET /riak/m/Um|18498012|4|0|18298081?vtag=1JsgLs3CE3k2mWsaCEiPp4
> HTTP/1.1
> ---> Host: 172.22.3.14:8091
> ---> User-Agent: lftp/4.3.3
> ---> Accept: */*
> ---> Connection: keep-alive
> --->
> <--- HTTP/1.1 200 OK
> <--- X-Riak-Vclock: a85hYGBgzGDKBVIcaZPWMQZyWttkMCWy5rEyXNhTd4ovCwA=
> <--- Vary: Accept-Encoding
> <--- Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
> <--- Link: ; rel="up"
> <--- Last-Modified: Thu, 31 Jan 2013 10:00:48 GMT
> <--- ETag: "1JsgLs3CE3k2mWsaCEiPp4"
> <--- Date: Thu, 31 Jan 2013 10:43:01 GMT
> <--- Content-Type: application/json; charset=UTF-8
> <--- Content-Length: 114
> <---
> {"sortKey":1359626448000106,"st":2,"t":4,"r":18498012,"s":18298081,"ct":1359626448000,"rv":21215685,"cv":1,"su":0}
>
> The object has two siblings: one the deleted, empty "tombstone", and one with
> the new data. And there's a gap of 2:30 min between the two siblings. There's
> no immediate write after the deletion. I logged the write operations and
> this gap is there as well. And the Java client constantly fails to load this
> object.
>

Re: Java client question...

2013-02-01 Thread Brian Roach
Scenario 1: If you know the object doesn't exist or you want to
overwrite it (because allow_mult is not enabled), you want to call
withoutFetch() - there's no reason to fetch something from Riak if
it's not there (not found) or you don't care what the value currently
is (and again, you are not creating siblings).

Scenario 2: If you want to ... but since you're not fetching in the
first place, the result should be exactly what is in your Mutation.
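
In code, the two cases would look roughly like this (the bucket, key
and POJO names are made up):

    // Scenario 1: blind write - no pre-fetch, no body back
    bucket.store("key", myPojo).withoutFetch().execute();

    // Scenario 2: still no pre-fetch, but get the stored value back
    MyPojo result = bucket.store("key", myPojo)
                          .withoutFetch()
                          .returnBody(true)
                          .execute();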

- Roach

On Thu, Jan 31, 2013 at 6:14 AM, Guido Medina  wrote:
> Hi,
>
> I have doubts about withoutFetch() and returnBody(boolean), I will put some
> scenarios:
>
> Store object (or overwrite) existing Riak object where I'm 100% I don't need
> to fetch from Riak (Last write wins and goes to memory cache)
> Apply a mutation to an object but this time return the mutated instance from
> the Riak operation without fetching it from Riak cluster (via apply()?) so
> that the mutated result gets updated in memory cache.
>
> I want to use accurately both methods but I'm a bit lost with their use
> case, so, is it safe to assume the following?
>
> Scenario 1: execute() without calling withoutFetch() and returnBody(false)
> because both by default are false?
> Scenario 2: execute() with returnBody(true) so I get the result of
> Mutation.apply()?
>
> All described scenarios have no siblings enabled and use default converter
> (Domain POJO annotated with Jackson)
>
> Thanks in advance for the response(s),
>
> Guido.
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Strange Exception in java-client

2013-02-02 Thread Brian Roach
Ingo -

As an FYI, once I started looking into this I found you had stumbled
into quite the can of worms. I just finished a pretty comprehensive
set of changes that will now allow proper handling of tombstones in
the Java client - https://github.com/basho/riak-java-client/pull/195

Sorry you had to be the one, but thanks for letting us know about this!
- Roach

On Thu, Jan 31, 2013 at 10:36 AM, Ingo Rockel
 wrote:
> Hi Brian,
>
> thanks for the suggestion but I already chose a different solution for now,
> if these messages get deleted I just delete the links to the message and
> mark the message as "abandoned" and available for reuse. So I don't run into
> the conflict if I need to store the message again.
>
> I just started the replay again and let it run for a while to see if this
> works for me.
>
> Thanks!
>
> Ingo
>
> Am 31.01.2013 16:54, schrieb Brian Roach:
>
>> Ingo -
>>
>> Unfortunately once you've got a sibling tombstone, things get a bit
>> tricky. It's not going away until you resolve them which when using
>> the JSONConverter in the Java client, you can't. Oddly enough, this is
>> the first time anyone has hit this.
>>
>> I've got a couple ideas on how to address this properly but I need to
>> look at some things first.
>>
>> In the meantime, what I'd suggest as a workaround is to copy and paste
>> the source for the JSONConverter into your own Converter that
>> you'll pass to the StoreObject and modify it to return null:
>>
>>
>> https://github.com/basho/riak-java-client/blob/master/src/main/java/com/basho/riak/client/convert/JSONConverter.java#L141
>>
>> Have it check to see if riakObject.getValue() returns null and if it
>> does ... return null. You'll also need to modify your ConflictResolver
>> to check for null as it iterates through the list of your POJOs that
>> gets passed to it and act accordingly. If there's only a tombstone,
>> just return null ... which means you will also need to modify your
>> Mutation to handle a null being passed to it in the case of there
>> only being a tombstone.
>>
>> In the end this may well be what I do but I think I have a slightly
>> more elegant solution that I want to look into.
>>
>> I've got an errand I need to run this morning, but I'll get to work on
>> this as soon as I get back.
>>
>> Thanks, and sorry for the trouble.
>> - Brian Roach
>>
>>
>> On Thu, Jan 31, 2013 at 3:56 AM, Ingo Rockel
>>  wrote:
>>>
>>> Hi Brian,
>>>
>>> thanks for the detailed explanation!
>>>
>>> I had a look at an object which constantly fails to load even if
>>> retrying:
>>>
>>> lftp :~> cat "http://172.22.3.14:8091/riak/m/Um|18498012|4|0|18298081"
>>>  Connecting to 172.22.3.14 (172.22.3.14) port 8091
>>> ---> GET /riak/m/Um|18498012|4|0|18298081 HTTP/1.1
>>> ---> Host: 172.22.3.14:8091
>>> ---> User-Agent: lftp/4.3.3
>>> ---> Accept: */*
>>> ---> Connection: keep-alive
>>> --->
>>> <--- HTTP/1.1 300 Multiple Choices
>>> <--- X-Riak-Vclock: a85hYGBgzGDKBVIcaZPWMQZyWttkMCWy5rEyXNhTd4ovCwA=
>>> <--- Vary: Accept, Accept-Encoding
>>> <--- Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
>>> <--- Last-Modified: Thu, 31 Jan 2013 10:00:48 GMT
>>> <--- ETag: "6PSreYIOL25KOpNyG0XPe7"
>>> <--- Date: Thu, 31 Jan 2013 10:42:41 GMT
>>> <--- Content-Type: text/plain
>>> <--- Content-Length: 56
>>> <---
>>> <--* Siblings:
>>> <--* 50Uz9nvQWwOUBE6USi2gki
>>> <--* 1JsgLs3CE3k2mWsaCEiPp4
>>> cat: Access failed: 300 Multiple Choices
>>> (/riak/m/Um|18498012|4|0|18298081)
>>> lftp :~> cat
>>>
>>> "http://172.22.3.14:8091/riak/m/Um|18498012|4|0|18298081?vtag=50Uz9nvQWwOUBE6USi2gki"
>>> ---> GET /riak/m/Um|18498012|4|0|18298081?vtag=50Uz9nvQWwOUBE6USi2gki
>>> HTTP/1.1
>>> ---> Host: 172.22.3.14:8091
>>> ---> User-Agent: lftp/4.3.3
>>> ---> Accept: */*
>>> ---> Connection: keep-alive
>>> --->
>>> <--- HTTP/1.1 200 OK
>>> <--- X-Riak-Vclock: a85hYGBgzGDKBVIcaZPWMQZyWttkMCWy5rEyXNhTd4ovCwA=
>>> <--- Vary: Accept-Encoding
>>> <--- Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
>>> <--- Link: ; rel="up"
>>> <--- Last-Modified: Thu, 31 Jan 2013 0

Re: Riak Java client 100% CPU

2013-02-14 Thread Brian Roach
Daniel -

Yes, sorry about that. This has been corrected in the current master
on github and version 1.1.0 of the client will be released today.
https://github.com/basho/riak-java-client/pull/212

Thanks!
Brian Roach

On Thu, Feb 14, 2013 at 9:31 AM, Daniel Iwan  wrote:
> I see 100% CPU very regularly on one of the Riak client (v1.0.7) threads.
> I think the place where it spins is connection reaper in RiakConnectionPool
>
> I looked at it briefly and it seems that when it finds the first connection
> using peek() but that connection has not expired, it can spin in a tight
> while loop. I guess the second peek() should be outside the if block?
>
> private synchronized void doStart() {
>     if (idleConnectionTTLNanos > 0) {
>         idleReaper.scheduleWithFixedDelay(new Runnable() {
>             public void run() {
>                 RiakConnection c = available.peek();
>                 while (c != null) {
>                     long connIdleStartNanos = c.getIdleStartTimeNanos();
>                     if (connIdleStartNanos + idleConnectionTTLNanos < System.nanoTime()) {
>                         if (c.getIdleStartTimeNanos() == connIdleStartNanos) {
>                             // still a small window, but better than locking
>                             // the whole pool
>                             boolean removed = available.remove(c);
>                             if (removed) {
>                                 c.close();
>                                 permits.release();
>                             }
>                         }
>                         c = available.peek();
>                     }
>                 }
>             }
>         }, idleConnectionTTLNanos, idleConnectionTTLNanos, TimeUnit.NANOSECONDS);
>     }
>
>     state = State.RUNNING;
> }
>
>
> Regards
> Daniel Iwan
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Java client v1.1.0

2013-02-14 Thread Brian Roach
Greetings!

Today we have released the latest version of the Java client for Riak, v1.1.0

This is available immediately from Maven Central by adding the
following to your project's pom.xml:


<dependency>
  <groupId>com.basho.riak</groupId>
  <artifactId>riak-client</artifactId>
  <version>1.1.0</version>
  <type>pom</type>
</dependency>


For those not using maven we provide a single .jar file that contains
the client and all its dependencies:

http://riak-java-client.s3.amazonaws.com/riak-client-1.1.0-jar-with-dependencies.jar


This release is both a bugfix and feature release.

Most notably you will find that it now supports secondary indexes
natively if you are using protocol buffers. In addition, the int_index
typing has been changed from int to long to eliminate the 2^31 limit.

Also on the protocol buffers front, you should see a performance
increase if you are working with siblings and using the IRiakClient
level interfaces; an old bug was found where an extra get operation
was being made unnecessarily when siblings were present.

A CPU utilization bug was also found to have been introduced in 1.0.7
in the protocol buffers client (connection pool). This has been
corrected.

The complete list of changes in 1.1.0 can be found in the CHANGELOG on
github. Current Javadocs have been published and are available via
http://basho.github.com/riak-java-client/1.1.0

Thanks!
- Brian Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client 100% CPU

2013-02-15 Thread Brian Roach
Daniel -

Fixed. I swear I thought I had edited the ACL.

Thanks,
- Roach

On Fri, Feb 15, 2013 at 6:08 AM, Daniel Iwan  wrote:
> Hi
>
> Thanks for that and also for building riak-client with all dependencies.
> But I'm afraid the S3 bucket is password protected or the link expired, since
> I'm getting AccessDenied on that 1.1.0 jar
>
> Daniel
>
>
>
> On 14 February 2013 17:22, Brian Roach  wrote:
>>
>> Daniel -
>>
>> Yes, sorry about that. This has been corrected in the current master
>> on github and version 1.1.0 of the client will be released today.
>> https://github.com/basho/riak-java-client/pull/212
>>
>> Thanks!
>> Brian Roach
>>
>> On Thu, Feb 14, 2013 at 9:31 AM, Daniel Iwan 
>> wrote:
>> > I see 100% CPU very regularly on one of the Riak client (v1.0.7)
>> > threads.
>> > I think the place where it spins is connection reaper in
>> > RiakConnectionPool
>> >
>> > I looked at it briefly and it seems that when it finds first connection
>> > using peek but that does not expired it can spin in tight while loop.
>> > I guess second peek() should be outside if block?
>> >
>> > private synchronized void doStart() {
>> > if (idleConnectionTTLNanos > 0) {
>> > idleReaper.scheduleWithFixedDelay(new Runnable() {
>> > public void run() {
>> > RiakConnection c = available.peek();
>> > while (c != null) {
>> > long connIdleStartNanos =
>> > c.getIdleStartTimeNanos();
>> > if (connIdleStartNanos + idleConnectionTTLNanos
>> > <
>> > System.nanoTime()) {
>> > if (c.getIdleStartTimeNanos() ==
>> > connIdleStartNanos) {
>> > // still a small window, but better than
>> > locking
>> > // the whole pool
>> > boolean removed = available.remove(c);
>> > if (removed) {
>> > c.close();
>> > permits.release();
>> > }
>> > }
>> > c = available.peek();
>> > }
>> > }
>> > }
>> > }, idleConnectionTTLNanos, idleConnectionTTLNanos,
>> > TimeUnit.NANOSECONDS);
>> > }
>> >
>> > state = State.RUNNING;
>> > }
>> >
>> >
>> > Regards
>> > Daniel Iwan
>> >
>> >
>> > ___
>> > riak-users mailing list
>> > riak-users@lists.basho.com
>> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> >
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: best practices to work with buckets in java-client

2013-02-15 Thread Brian Roach
You only ever need to use client.createBucket() if you want to change
the bucket properties from the defaults.

This is due to the way Riak works; a put or fetch operation that uses
a bucket that doesn't exist will create that bucket with the default
bucket properties.

To be clear, if you do:

Bucket b = client.fetchBucket("Some_bucket").execute();

That bucket did not have to exist beforehand. It will be created with
the default properties.

- Roach

On Fri, Feb 15, 2013 at 11:53 PM, Deepak Balasubramanyam
 wrote:
>> Can I create it once (on application's start) and store somewhere (like
>> static field for example)
>
>
> Yes you can. There is no need to create the bucket every time the
> application starts. The client.fetchBucket() call will get the bucket
> successfully on subsequent runs.
>
> Thanks
> -Deepak
>
> On Sat, Feb 16, 2013 at 5:00 AM, Guido Medina 
> wrote:
>>
>> I would say it is totally safe to treat them as singleton (static
>> reference or just singleton pattern), we have been doing that for a year
>> with no issues so far.
>>
>> Hope that helps,
>>
>> Guido.
>>
>>
>> On 15/02/13 22:07, Mikhail Tyamin wrote:
>>>
>>> Hello guys,
>>>
>>> what is the best way to work with Bucket object in java-client?
>>>
>>> Can I create it once (on application's start) and store somewhere (like
>>> static field for example)
>>> or I should create it ( riakClient.createBucket("bucketName") ) once and
>>> then fetch it  ( riakClient.fetchBucket("bucketName") )
>>> every time using riakClient when I need it?
>>>
>>> P.S. I am going to use the same bucket's properties (nVal, allowSiblings
>>> and etc.) during all life of application.
>>>
>>> Thank you.
>>>
>>> Mikhail.
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Beginners performance problem

2013-02-24 Thread Brian Roach
One issue that immediately stands out:

> >>>
> Bucket bucket = sharedClient.createBucket("accounts1").execute();
> bucket.store(a.getId().toString(), mapper.writeValueAsString(a)).execute();
> >>>

'createBucket()' is doing a fetch of the bucket properties and then storing
them back to the cluster when 'execute()' is called.

You want to fetch the bucket once and then pass around the reference to it,
or at the very least use:
fetchBucket("accounts1").lazyLoadBucketProperties().execute();
inside your threads.

The only time you ever want to use createBucket() is when you want to
modify the bucket properties.
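
A rough sketch of that pattern (names are illustrative):

    // fetch the bucket once, up front ...
    final Bucket accounts = sharedClient.fetchBucket("accounts1")
                                        .lazyLoadBucketProperties()
                                        .execute();
    // ... then share the 'accounts' reference across your threads and
    // just call accounts.store(...).execute() inside them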

- Roach
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Using withoutFetch with DomainBucket

2013-03-12 Thread Brian Roach
Daniel -

The .withoutFetch() method isn't available when using the DomainBucket.

As for the vector clock, when using .withoutFetch() the .execute()
method of StoreObject is going to extract the vector clock from the
POJO returned from your Mutation by looking for a VectorClock or
byte[] field that is annotated with @RiakVClock. It is then passed to
the Converter's .fromDomain() method as an argument.  If you are
storing an object you previously fetched from Riak, that vector clock
and annotation needs to be there.

The easiest way to implement that is:
1. Have a VectorClock or byte[] field in your POJO annotated with @RiakVClock

2. When you fetch, in the .toDomain() method of your Converter have
the line of code you noted.

3. When you store, the vector clock stored in that field will be
passed to the .fromDomain() method of your Converter. Make sure to
call the .withVClock(vclock) method of the RiakObjectBuilder or
explicitly set it in the IRiakObject being returned.
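
Putting those three steps together as a rough sketch (the POJO and its
fields are made up):

    import com.basho.riak.client.cap.VClock;
    import com.basho.riak.client.convert.RiakVClock;

    public class MyPojo {
        @RiakVClock VClock vclock; // set in toDomain(), read back by execute()
        // ... your data fields ...
    }

    // in your Converter:
    // toDomain():   VClockUtil.setVClock(domainObject, riakObject.getVClock());
    // fromDomain(): builder.withVClock(vclock); // the vclock argument passed in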

- Roach


On Fri, Mar 8, 2013 at 3:31 PM, Daniel Iwan  wrote:
> Somehow I cannot find a way to avoid pre-fetch during store operation (Java
> client).
> I know in StoreObject there is withoutFetch method for that purpose but I
> cannot find corresponding method/property in DomainBucket or
> DomainBucketBuilder
>
> Am I missing something?
>
> Also on related note when withoutFetch is used I guess I need to provide
> annotated RiakVClock field and use something like:
>
> VClockUtil.setVClock(domainObject, riakObject.getVClock());
>
> in my Converter. Is that right or is there better way to do it?
>
>
> I'm using Riak Java client 1.1.0
>
> Thanks
> Daniel
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Using withoutFetch with DomainBucket

2013-03-12 Thread Brian Roach
Daniel -

Nothing detects whether there is a vclock or not. If there isn't one
provided (the value is `null` in Java), then one isn't sent to Riak -
it is not a requirement for a store operation for it to be present. If
an object exists when such a store is performed and allow_mult=true
for the bucket, then a sibling is created.

The .withoutFetch() method was added to the StoreObject as a requested
feature. It is meant for when you are storing an object that was
previously fetched from Riak and want to avoid doing another fetch. If
that previous fetch returned nothing (the key was not found) then the
vector clock will be null.

When talking about deleted keys ... unless you change the default
`delete_mode` in Riak's app.config, you're not usually going to get a
tombstone - they are reaped after 3s. You'll only see one if you do a
fetch immediately following a delete, if you do store operations
without vclocks with allow_mult=true for the bucket (which is
basically "doing it wrong") immediately after a delete so that a
sibling gets created, or if you hit a very small window with multiple
writers under heavy load where the read/write cycle interleaves with a
delete and a tombstone sibling gets created.

With that being said, yes, unless you set 'returnDeletedVClock(true)`
they are silently discarded by the Java client and not passed to the
Converter. If that has been set, the default JSONConverter will return
a new instance of whatever POJO is being used (if possible - if
there's not a default constructor it will throw an exception) and then
set a @RiakTombstone annotated boolean field to `true` if one exists.
It detects this by calling the .isDeleted() method of the returned
IRiakObject.
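
For example, a POJO set up to see tombstones might look like this
(a sketch; the field names are made up):

    public class MyPojo {
        @RiakVClock VClock vclock;
        @RiakTombstone boolean tombstone; // set to true by the converter for a tombstone
        // ... data fields ...
    }

with the fetch side using returnDeletedVClock(true) as described above.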

- Roach

On Tue, Mar 12, 2013 at 9:43 AM, Daniel Iwan  wrote:
> Brian,
>
> Where I got lost was the fact that I was using a custom Converter and I did
> not do anything with the vclock passed into fromDomain().
> That went undetected because at the same time I wasn't using withoutFetch,
> which I believe is the only moment where a missing @RiakVClock annotation
> can be detected. Normally, when JSONConverter is used, a missing @RiakVClock
> would also be detected.
> Could you confirm?
>
> Few additional, related questions:
> - if I use a byte[] or VClock field and use withoutFetch(), what is the
> default value it should be set to (since it will be extracted via
> StoreObject)?
> - if I want to avoid overwriting deleted keys, I guess I need to set
> returnDeletedVClock as below,
>  DomainBucketBuilder builder = DomainBucket.builder(bucket, Custom.class)
>  builder.returnDeletedVClock(true);
>
> and then check isDeleted on siblings and use ConditionalStoreMutation to
> return false if one of the siblings has that flag set to true?
> I believe it needs to use the VClock of the deleted sibling as well?
>
> Thanks
> Daniel
>
>
>
>> The .withoutFetch() method isn't available when using the DomanBucket.
>>
>> As for the vector clock, when using .withoutFetch() the .execute()
>> method of StoreObject is going to extract the vector clock from the
>> POJO returned from your Mutation by looking for a VectorClock or
>> byte[] field that is annotated with @RiakVClock. It is then passed to
>> the Converter's .fromDomain() method as an argument.  If you are
>> storing an object you previously fetched from Riak, that vector clock
>> and annotation needs to be there.
>>
>> The easiest way to implement that is:
>> 1. Have a VectorClock or byte[] field in your POJO annotated with
>> @RiakVClock
>>
>> 2. When you fetch, in the .toDomain() method of your Converter have
>> the line of code you noted.
>>
>> 3. When you store, the vector clock stored in that field will be
>> passed to the .fromDomain() method of your Converter. Make sure to
>> call the .withVClock(vclock) method of the RiakObjectBuilder or
>> explicitly set it in the IRiakObject being returned.
>>
>> - Roach
>>
>>
>> On Fri, Mar 8, 2013 at 3:31 PM, Daniel Iwan  wrote:
>> > Somehow I cannot find a way to avoid pre-fetch during store operation
>> > (Java
>> > client).
>> > I know in StoreObject there is withoutFetch method for that purpose but
>> > I
>> > cannot find corresponding method/property in DomainBucket or
>> > DomainBucketBuilder
>> >
>> > Am I missing something?
>> >
>> > Also on related note when withoutFetch is used I guess I need to provide
>> > annotated RiakVClock field and use something like:
>> >
>> > VClockUtil.setVClock(domainObject, riakObject.getVClock());
>> >
>> > in my Converter. Is that right or is there better way to do it?
>> >
>> >
>> > I'm using Riak Java client 1.1.0
>> >
>> > Thanks
>> > Daniel
>> >
>> > ___
>> > riak-users mailing list
>> > riak-users@lists.basho.com
>> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> >
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Unable to delete using Java Client

2013-03-15 Thread Brian Roach
Hi Joachim,

The problem with your code is here:

myBucket.delete(guiPath);

The way the API flow works is just like doing a store and fetch; the
Bucket.delete() method returns a DeleteObject on which you then call
.execute() to actually perform the operation:

myBucket.delete(guiPath).execute();

Thanks,
- Roach

On Fri, Mar 15, 2013 at 11:45 AM, Joachim Haagen Skeie
 wrote:
> Hello,
>
> I am trying to delete items using the Java client, but for some reason, the
> data is still there when I try to get it out later.
>
> I have posted the relevant parts of the Java Class performing the deletion
> here: https://gist.github.com/joachimhs/5171629
>
> The following unit test fails on the last assertion:
>
> @Test
> public void testTreeMenu() throws InterruptedException {
> newEnv.getTreeMenuDao().persistTreeMenu(new
> BasicStatistics("EurekaJAgent:Memory:Heap:Used %", "Account Name", "Y"));
>
>
>
> Statistics statOne =
> newEnv.getTreeMenuDao().getTreeMenu("EurekaJAgent:Memory:Heap:Used %",
> "Account Name");
>
>
>
> Assert.assertNotNull(statOne);
> Assert.assertEquals("EurekaJAgent:Memory:Heap:Used %",
> statOne.getGuiPath());
> Assert.assertEquals("Account Name", statOne.getAccountName());
> Assert.assertEquals("Y", statOne.getNodeLive());
>
>
>
> newEnv.getTreeMenuDao().deleteTreeMenu("EurekaJAgent:Memory:Heap:Used
> %", "Account Name");
>
>
>
> Thread.sleep(550);
> Statistics deletedStatOne =
> newEnv.getTreeMenuDao().getTreeMenu("EurekaJAgent:Memory:Heap:Used %",
> "Account Name");
>
>
>
> Assert.assertNull(deletedStatOne);
> }
>
> Med Vennlig Hilsen | Very Best Regards,
>
> Joachim Haagen Skeie
> joac...@haagen-software.no
> http://haagen-software.no
> +47 4141 5805
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java driver createBucket/fetchBucket

2013-03-20 Thread Brian Roach
On Thu, Mar 14, 2013 at 11:25 PM, Kevin Burton  wrote:
> This sample calls ‘createBucket’. First what is the difference between
> ‘createBucket’ and ‘fetchBucket’? I have an existing bucket so I don’t want
> to create a new one and thereby remove the old one. So I felt that
> ‘fetchBucket’ would be the call I should make. The problem is that
> ‘fetchBucket’ returns a ‘FetchBucket’ object that doesn’t have the same
> methods as the ‘Bucket’ returned by createBucket. I would just like to query
> the bucket using a key. But that simple operation appears to be unavailable
> with the ‘FetchBucket’ object. Ideas?

For the most part all of this is simply semantics to have the Java API
model interacting with Riak in some logical fashion. Nothing ever
"removes" a bucket from Riak. The only difference between a
FetchBucket and a WriteBucket is whether or not the bucket properties
are written to Riak when you call execute().

Calling execute() on a FetchBucket queries Riak for the bucket
properties and returns a Bucket object. If you don't need to read the
properties you can call .lazyLoadBucketProperties() on the FetchBucket
prior to calling execute() and it will not query Riak at all (unless
you then call any of the property getters (e.g. getR() ) on the
returned Bucket).

Calling execute() on a WriteBucket (which is returned by
IRiakClient.createBucket() and IRiakClient.updateBucket() - they are
exactly the same thing) does a write to Riak of the properties
specified then a subsequent read of them back from Riak, returning a
Bucket object. Again, the read can be avoided or postponed using
.lazyLoadBucketProperties() prior to execute().
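
In code, the lazy behavior looks roughly like this (bucket name is made
up):

    Bucket b = client.fetchBucket("users").lazyLoadBucketProperties().execute(); // no query yet
    b.store("key", "value").execute(); // still no properties read from Riak
    Quorum r = b.getR();               // only now are the properties fetched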

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to deal with tombstone values when writing a Riak (http) client

2013-03-25 Thread Brian Roach
Hi Age,

I'm the author of the pull request for the Java client you cite.
There's still inconsistency in how these are returned via the HTTP
protocol. Partially that's my fault in that I didn't open an actual
issue when I discovered the problem I note in my comments. While
reviewing the issue to make sure I answered your question correctly, I
found another.

As of right now (Riak 1.3), here's what you will find:

1) If *only* a tombstone exists when you do a GET for a key, you will
receive a 404 but it will contain the X-Riak-Vclock header (with a
vclock). A "normal" 404 (when there's no object) will not have this
header.

2) If there is a set of siblings, and one of them is a tombstone:

2a) Retrieving all the siblings at once by including "Accept:
multipart/mixed" in the GET will return all the siblings, and the
tombstone will include the "X-Riak-Deleted: true" header

2b) Retrieving each sibling manually by adding ?vtag=XX to the GET
will (unfortunately) return a 200 OK for the tombstone but it will
have an empty body (Content-Length: 0).

I'm going to open issues for 1 and 2b just so we get things to be
consistent. That being said, I can't think of a reason you'd ever want
to do 2b so at least the impact there is minimized. For 1 you can
obviously still identify a tombstone the same way I'm doing it in the
Java client -> 404 + vclock = tombstone.

Thanks,
_ Roach

On Sun, Mar 24, 2013 at 2:09 PM, Age Mooij  wrote:
> Hi,
>
> I've been trying to find some comprehensive docs on what Riak http clients
> need to do to properly support dealing with tombstone values. I ran into
> tombstones while debugging some unit tests and I was very surprised that the
> Basho (http) API docs don't mention anything about having to deal with them.
>
> It's very hard to find any kind of complete description on when Riak will
> produce tombstone values in http responses and what the proper way of
> dealing with them is. This makes it very hard to write good unit tests and
> to implement the "correct" behaviour for my riak-scala-client.
>
> Can anyone point me towards a comprehensive description of the expected
> behaviour? Or even a description of what most client libraries end up doing?
>
> For now I just ignore siblings with the X-Riak-Deleted header (undocumented
> AFAIK) when resolving conflicts caused by a delete followed by a put (based
> on the same vclock). I'm not sure this header could (or should) occur in any
> other situation.
>
> Here's the online stuff I've found so far:
>
> - A pull request for the java client:
> https://github.com/basho/riak-java-client/pull/195
>
> - The most important commit message for the above pull request:
> https://github.com/basho/riak-java-client/commit/416a901ff1de8e4eb559db21ac5045078d278e86
>
> - Some interesting code introduced in that commit:
>
> // There is a bug in Riak where the x-riak-deleted header is not returned
> // with a tombstone on a 404 (x-riak-vclock exists). The following block can
> // be removed once that is fixed
> byte[] body = r.getBody();
> if (r.getStatusCode() == 404) {
>     headers.put(Constants.HDR_DELETED, "true");
>     body = new byte[0]; // otherwise this will be "not found"
> }
>
> That bug apparently still exists… do all clients implement this hack? Should
> they?
>
> - A message to this mailing list from October 2011:
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-October/006048.html
>
> Thanks,
> Age
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to deal with tombstone values when writing a Riak (http) client

2013-03-25 Thread Brian Roach
On Mon, Mar 25, 2013 at 8:05 AM, Age Mooij  wrote:
> What do you think would be the proper way for Riak itself to deal with case 
> 1? Should it return a 200 with an empty body and the X-Riak-Deleted header?

Unfortunately this would break the API in regard to backwards
compatibility with older versions of Riak. This is not to say we
wouldn't ever do that, but it does mean we'd (in general) like to
avoid it. I've opened an issue with the suggestion of simply adding
the 'X-Riak-Deleted' header for consistency.
https://github.com/basho/riak_kv/issues/518

> Could you give me an example of a use case for reproducing case 1 in a unit 
> test? Case 2a is easy but I've tried several ways to reliably produce case 1 
> and I'm not getting anywhere.

A tombstone will exist for 3s by default. With the Java client I can
reproduce it every time with:

IRiakClient client = RiakFactory.httpClient();
Bucket b = client.createBucket("sibling_test").allowSiblings(true).execute();
b.store("key1","value1").execute();
b.delete("key1").execute();
IRiakObject io = b.fetch("key1").returnDeletedVClock(true).execute();
System.out.println(io.isDeleted());
client.shutdown();
>
> Do you have an idea of how many people actually use the option to "return 
> deleted vclocks".

Honestly? No, other than "at least two or three" because of
interacting directly with them (one of which led to the PR you cited).

> My Scala client basically just treats tombstones like completely deleted 
> values, so for case 1 that would lead to "normal" 404 behavior and during 
> conflict resolution it will just ignore/skip tombstones. But if lots of 
> people are interested in dealing with tombstones directly I might have to 
> change that to something similar to what you did in the java client.

Yeah ... it's a hard road trying to guess how people are going to want
to use something. Personally? Since the feature exists I'd support it
if only for the sake of completeness.

> Are there any plans on adding a documentation section about tombstones and 
> deletes in Riak? I think that would definitely be helpful for other people 
> writing clients.

I'll raise that issue this week; there probably should be. In the mean
time probably the most comprehensive information on the subject can be
found here: 
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-October/006048.html

Thanks,
_ Roach

> On Mar 25, 2013, at 14:29, Brian Roach  wrote:
>
>> Hi Age,
>>
>> I'm the author of the pull request for the Java client you cite.
>> There's still inconsistency in how these are returned via the HTTP
>> protocol. Partially that's my fault in that I didn't open an actual
>> issue when I discovered the problem I note in my comments. While
>> reviewing the issue to make sure I answered your question correctly, I
>> found another.
>>
>> As of right now (Riak 1.3), here's what you will find:
>>
>> 1) If *only* a tombstone exists when you do a GET for a key, you will
>> receive a 404 but it will contain the X-Riak-Vclock header (with a
>> vclock). A "normal" 404 (when there's no object) will not have this
>> header.
>>
>> 2) If there is a set of siblings, and one of them is a tombstone:
>>
>> 2a) Retrieving all the siblings at once by including "Accept:
>> multipart/mixed" in the GET will return all the siblings, and the
>> tombstone will include the "X-Riak-Deleted: true" header
>>
>> 2b) Retrieving each sibling manually by adding ?vtag=XX to the GET
>> will (unfortunately) return a 200 OK for the tombstone but it will
>> have an empty body (Content-Length: 0).
>>
>> I'm going to open issues for 1 and 2b just so we get things to be
>> consistent. That being said, I can't think of a reason you'd ever want
>> to do 2b so at least the impact there is minimized. For 1 you can
>> obviously still identify a tombstone the same way I'm doing it in the
>> Java client -> 404 + vclock = tombstone.
>>
>> Thanks,
>> _ Roach
>>
>> On Sun, Mar 24, 2013 at 2:09 PM, Age Mooij  wrote:
>>> Hi,
>>>
>>> I've been trying to find some comprehensive docs on what Riak http clients
>>> need to do to properly support dealing with tombstone values. I ran into
>>> tombstones while debugging some unit tests and I was very surprised that the
>>> Basho (http) API docs don't mention anything about having to deal with them.
>>>
>>> It's very hard to find any kind of complete desc

Re: How to add a secondary index with the Java client

2013-04-08 Thread Brian Roach
Jeff,

If you're just passing in an instance of the core Java HashMap ... you can't.

The way the default JSONConverter works for metadata (such as indexes)
is via annotations.

The object being passed in needs to have a field annotated with
@RiakIndex("index_name"). That field can be a Long/Set or
String/Set (for _int and _bin indexes respectively).

These are not converted to JSON so they won't affect your serialized
data. You can have multiple fields for multiple indexes.

You don't have to append "_int" or "_bin" to the index name in the
annotation - it's done automatically based on the type.

Easiest thing to do would be to extend HashMap and simply add the
annotated field(s).

Thanks,
_Roach

On Mon, Apr 8, 2013 at 2:56 PM, Jeff Peck  wrote:
> Hello,
>
> I have been looking through the documentation for an example of how to add a 
> secondary index in Riak, using the Java client.
>
> I am currently storing my object (which is a HashMap) like this:
>
> bucket.store(key, docHashMap).execute();
>
> What would I need to do to add an index to that object before it gets stored?
>
> Thanks,
> Jeff
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to add a secondary index with the Java client

2013-04-08 Thread Brian Roach
Jeff -

Yup, that should work perfectly.

You will have a secondary index in Riak named "status_bin" and the
value you set in your String 'status' will be the index key for that
object.
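
For reference, querying that index back later would look something like
this (the index value here is made up):

    List<String> keys = bucket.fetchIndex(BinIndex.named("status"))
                              .withValue("complete")
                              .execute();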

Thanks,
_Roach

On Mon, Apr 8, 2013 at 4:02 PM, Jeff Peck  wrote:
> Brian,
>
> Thank you for explaining that and suggesting to extend HashMap. I did exactly 
> that. Here is what it looks like:
>
> class DocMap extends HashMap {
> /**
>  * Generated id
>  */
> private static final long serialVersionUID = 5807773481499313384L;
>
> @RiakIndex(name="status") private String status;
>
> public String getStatus() {
> return status;
> }
>
> public void setStatus(String status) {
> this.status = status;
> }
> }
>
> I am about to try it, but I first need to make a few more changes in the code 
> to adapt this new object. In the meantime, would you say that this looks 
> correct and that it would be able to effectively write a status field to a 
> secondary index if I were to use "setStatus"?
>
> Thanks,
> Jeff
>
>
> On Apr 8, 2013, at 5:48 PM, Brian Roach  wrote:
>
>> Jeff,
>>
>> If you're just passing in an instance of the core Java HashMap ... you can't.
>>
>> The way the default JSONConverter works for metadata (such as indexes)
>> is via annotations.
>>
>> The object being passed in needs to have a field annotated with
>> @RiakIndex("index_name"). That field can be a Long/Set or
>> String/Set (for _int and _bin indexes respectively).
>>
>> These are not converted to JSON so they won't affect your serialized
>> data. You can have multiple fields for multiple indexes.
>>
>> You don't have to append "_int" or "_bin" to the index name in the
>> annotation - it's done automatically based on the type.
>>
>> Easiest thing to do woud be to extend HashMap and simply add the
>> annotated field(s).
>>
>> Thanks,
>> _Roach
>>
>> On Mon, Apr 8, 2013 at 2:56 PM, Jeff Peck  wrote:
>>> Hello,
>>>
>>> I have been looking through the documentation for an example of how to add 
>>> a secondary index in Riak, using the Java client.
>>>
>>> I am currently storing my object (which is a HashMap) like this:
>>>
>>> bucket.store(key, docHashMap).execute();
>>>
>>> What would I need to do to add an index to that object before it gets 
>>> stored?
>>>
>>> Thanks,
>>> Jeff
>>>
>>>
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: On siblings

2013-05-15 Thread Brian Roach
Jeremy -

As noted in the other replies, yes, you need to use 'return_body' to
get the new vector clock in order to avoid creating a sibling on a
subsequent write of the same key.

That said, you can supply the param `return_head` in the proplist
along with `return_body`, which will eliminate having the value
returned to you while still getting the vclock you need.

- Roach

On Wed, May 15, 2013 at 8:23 AM, John Daily  wrote:
> Thanks for the kind words, Jeremiah.
>
> Jeremy, if you find anything that's wrong with that description of sibling
> behavior, please let me know. It's always possible I missed something
> important.
>
> -John
>
>
> On Wednesday, May 15, 2013, Jeremiah Peschka wrote:
>>
>> John Daily (@macintux) wrote a great blog post that covers sibling
>> behavior [1]
>>
>> In short, though, because you're supplying an older vector clock, and you
>> have allow_mult turned on, Riak makes the decision that since a vector clock
>> is present that conflicts with what's already on disk a sibling should be
>> created.
>>
>> As I understand it, the only way to write into Riak and not get siblings
>> is to set allow_mult to false - even leaving out vector clocks will lead to
>> siblings if allow_mult is true. Or so John Daily's chart claims.
>>
>> [1]: http://basho.com/riaks-config-behaviors-part-2/
>>
>> ---
>> Jeremiah Peschka - Founder, Brent Ozar Unlimited
>> MCITP: SQL Server 2008, MVP
>> Cloudera Certified Developer for Apache Hadoop
>>
>>
>> On Tue, May 14, 2013 at 10:48 PM, Jeremy Ong 
>> wrote:
>>>
>>> To clarify, I am using the erlang client. From the looks of it, the
>>> vector clock transition to the new value is opaque to the client so the only
>>> way to streamline this use case is to pass the `return_body` option (My use
>>> case is one read, many subsequent writes while updating in memory).
>>>
>>> In this case however, I already have the value in memory, so it seems
>>> inefficient to have to get the entire riakc_obj back when I really just need
>>> the metadata to construct the new object. Is this correct?
>>>
>>>
>>> On Tue, May 14, 2013 at 9:06 PM, Jeremy Ong 
>>> wrote:

 Suppose I have an object X.

 I make an update to X and store it as X1. I perform a put operation
 using X1.

 The same client then makes a modification to X1 and stores it as X2.
 Then, I perform a put operation using X2.

 This will create two siblings X1 and X2 if allow_mult is true. Is there
 any way I can avoid this? To me, the vector clock should have incremented
 once when transitioning from X to X1, then once more when transitioning 
 from
 X1 to X2. This way, I shouldn't need to issue a get before I have to 
 perform
 another write since my data is already in memory.

 I probably am misunderstanding something about vector clocks. Does
 anybody care to clarify this?

 Thanks,
 Jeremy
>>>
>>>
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client and siblings question

2013-05-20 Thread Brian Roach
Hello!

When you do your fetch (read) and resolve any conflicts, you're going
to get a vector clock along with each sibling. If you're using the
default JSONConverter it will be stored in your POJO's @RiakVClock
annotated field. That's the vector clock you're going to use when you
do your store (write) later - the modified object you're passing to
Bucket.store() should contain it.

The withoutFetch() option simply allows you to break this into two
separate actions. Without it, when you called StoreObject.execute()
that's exactly what would be happening.

Thanks!
- Roach

On Sat, Apr 27, 2013 at 5:35 PM, Y N  wrote:
> Hi,
>
> I am currently using the latest java client, and I have a question regarding
> updating data in a bucket where siblings are allowed (i.e. allowSiblings =
> true).
>
> I finally understand the whole read-resolve-mutate-write cycle, and also
> doing an update / store using previously fetched data (i.e. not in the same
> "transaction").
>
> This question is regarding the latter case (updating previously fetched
> data). My read uses a resolver. My data class has a @RiakVClock field
> defined.
>
> The problem is when I do the store(blah).withoutFetch(). It seems to be
> generating siblings. I just realized that's probably because my resolver
> (during the read) is creating a new object and then merging the siblings
> into the new object, however it's not setting the vclock field.
>
> My question is, during the read resolve stage, what should I use for the
> vclock? Should I just copy it from one of the other siblings, or is there
> some specific sort order I should use to pick a particular vclock for the new
> object?
>
> Thanks.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client and siblings question

2013-05-20 Thread Brian Roach
If you're going to use the withoutFetch() method it is required that
you use that @RiakVClock annotated field in your class - you need to
store the vclock from when you fetched in that field.

When you call StoreObject.execute() it is extracted from your object
and passed to the Converter.fromDomain() method. Since you're using
your own Converter, in that method you need to store the vclock in the
IRiakObject you're constructing and returning. The RiakObjectBuilder
has a withVClock method (and of course the DefaultRiakObject
constructor takes it as a parameter).
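
For example, the fromDomain() side might look roughly like this (the
bucket name, key accessor and the serialize() step are placeholders for
your own code):

    public IRiakObject fromDomain(MyPojo domainObject, VClock vclock) throws ConversionException {
        return RiakObjectBuilder.newBuilder("my_bucket", domainObject.getKey())
                                .withVClock(vclock) // don't drop this, or you'll create siblings
                                .withContentType("application/json")
                                .withValue(serialize(domainObject))
                                .build();
    }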

As for your second question ... yeah, MapReduce doesn't have that.
It's probably something worth thinking about for a future release (and
yeah, I'm pretty much in charge of the Java client right now - I'll
add it to my backlog).

As-is the best suggestion I would have is using the
MapReduceResult.getResultRaw() and then pass that String to your own
code for conversion.

Thanks!
- Roach

(BTW - I apologize for the late reply - your original email was caught
up in our listserv server for some reason and I only received it
today).


On Mon, May 20, 2013 at 10:32 AM, Y N  wrote:
> Hi Brian,
>
> Thanks for the response.
>
> I am not using the default JSONConverter, but have my own. The way I am
> currently resolving siblings is as follows:
>
> Create a new object
> Merge fields (using whatever logic)
> Return new object with merged fields
>
> In this case, what should I use for the vclock for the newly created object
> that was resolved? Do I randomly pick from one of the objects being
> resolved, or is there some order or precedence I should use?
>
> On a side note, I am not sure if you are responsible for the Riak Java
> client. If so, I don't see an option to allow me to use my own converter for
> objects obtained via a MapReduce query (through the Java client). Is this
> feature currently available, or is this something that will be added at some
> point?
>
> A .withConverter(blah) would be nice for mapreduce queries as well.
>
> Thanks!
>
>
> 
> From: Brian Roach 
> To: Y N 
> Cc: "riak-users@lists.basho.com" 
> Sent: Monday, May 20, 2013 7:42 AM
> Subject: Re: Java client and siblings question
>
> Hello!
>
> When you do your fetch (read) and resolve any conflicts, you're going
> to get a vector clock along with each sibling. If you're using the
> default JSONConverter it will be stored in your POJO's @RiakVClock
> annotated field. That's the vector clock you're going to use when you
> do your store (write) later - the modified object you're passing to
> Bucket.store() should contain it.
>
> The withoutFetch() option simply allows you to break this into two
> separate actions. Without it, when you called StoreObject.execute()
> that's exactly what would be happening.
>
> Thanks!
> - Roach
>
> On Sat, Apr 27, 2013 at 5:35 PM, Y N  wrote:
>> Hi,
>>
>> I am currently using the latest java client, and I have a question
>> regarding
>> updating data in a bucket where siblings are allowed (i.e. allowSiblings =
>> true).
>>
>> I finally understand the whole read-resolve-mutate-write cycle, and also
>> doing an update / store using previously fetched data (i.e. not in the
>> same
>> "transaction").
>>
>> This question is regarding the latter case (updating previously fetched
>> data). My read uses a resolver. My data class has a @RiakVClock field
>> defined.
>>
>> The problem is when I do the store(blah).withoutFetch(). It seems to be
>> generating siblings. I just realized that's probably because my resolver
>> (during the read) is creating a new object and then merging then siblings
>> into the new object, however it's not setting the vclock field.
>>
>> My question is, during the read resolve stage, what should I use for the
>> vlock? Should I just copy it from one of the other siblings, or is there
>> some specific sort order I should use to pick a particular vlock for the
>> new
>> object?
>>
>> Thanks.
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java PB client stucks on fetching reply

2013-05-21 Thread Brian Roach
Hello Konstantin,

The protocol buffers client uses a standard TCP socket and does a
blocking read. If it's never returning from there, then the Riak node
you're talking to is in some state where it's not replying nor closing
the connection. By default in Java a read won't ever time out; it will
stay blocked until either there's something to read, or the TCP
connection is closed.

From the client side, you can specify a read timeout via the
PBClientConfig using the .withRequestTimeoutMillis() option in the
builder. This will cause the operation to time out rather than wait
forever.
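
For example (a minimal sketch; the host and timeout values are made up):

    PBClientConfig conf = new PBClientConfig.Builder()
            .withHost("127.0.0.1")
            .withRequestTimeoutMillis(5000) // reads give up after 5s instead of blocking forever
            .build();
    IRiakClient client = RiakFactory.newClient(conf);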

- Roach



On Tue, May 21, 2013 at 3:46 PM, Konstantin Kalin
 wrote:
> Hello,
>
> I use Riak Java client (1.0.6) and riak-pb (1.2) versions. I see that a
> thread stucks on reading socket from time to time in production. Basically
> the thread is never released once it gets this state. Riak backend logs are
> empty at the same time. Could you please look at the following stack trace?
> I need an advise what can be wrong and how to investigate/solve the issue.
>
> Thank you,
> Konstantin.
>
> "http-8443-7" daemon prio=10 tid=0x7f886800a800 nid=0x1fda runnable
> [0x7f88d2794000]
>
>   java.lang.Thread.State: RUNNABLE
>
>at java.net.SocketInputStream.socketRead0(Native Method)
>
>at java.net.SocketInputStream.read(SocketInputStream.java:146)
>
>at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>
>at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>
>- locked <0x0007a668acb0> (a java.io.BufferedInputStream)
>
>at java.io.DataInputStream.readInt(DataInputStream.java:387)
>
>at com.basho.riak.pbc.RiakConnection.receive(RiakConnection.java:110)
>
>at
> com.basho.riak.pbc.RiakClient.processFetchReply(RiakClient.java:280)
>
>at com.basho.riak.pbc.RiakClient.fetch(RiakClient.java:254)
>
>at com.basho.riak.pbc.RiakClient.fetch(RiakClient.java:243)
>
>at
> com.basho.riak.client.raw.pbc.PBClientAdapter.fetch(PBClientAdapter.java:156)
>
>at
> com.basho.riak.client.raw.ClusterClient.fetch(ClusterClient.java:115)
>
>at
> com.basho.riak.client.operations.FetchObject$1.call(FetchObject.java:102)
>
>at
> com.basho.riak.client.operations.FetchObject$1.call(FetchObject.java:100)
>
>at
> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:72)
>
>at
> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:53)
>
>at
> com.basho.riak.client.operations.FetchObject.execute(FetchObject.java:106)
>
>at
> platform.sessionstore.riak.RiakSessionStore.executeCmd(RiakSessionStore.java:290)
>
>at
> platform.sessionstore.riak.RiakSessionStore.validate(RiakSessionStore.java:248)
>
>at
> platform.sessionstore.riak.RiakUserSessionsResolver.resolve(RiakUserSessionsResolver.java:74)
>
>at
> platform.sessionstore.riak.RiakUserSessionsResolver.resolve(RiakUserSessionsResolver.java:16)
>
>at
> com.basho.riak.client.operations.FetchObject.execute(FetchObject.java:113)
>
>at
> platform.sessionstore.riak.RiakSessionStore.executeCmd(RiakSessionStore.java:290)
>
>at
> platform.sessionstore.riak.RiakSessionStore.fetchUserSessions(RiakSessionStore.java:270)
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client - Bug with ifNotModified?

2013-06-10 Thread Brian Roach
With protocol buffers this is the case.

The ifNotModified() method expects you to supply a vector clock which
is then matched against the vector clock of an existing object in Riak
with that key. Since there is no object in Riak ... it returns
"notfound" - it can't find it.

Unfortunately that makes your situation somewhat difficult to handle
all in one go using the StoreObject.

What I would suggest is doing a fetch first, then doing the store with
the withoutFetch() option.

In the case where the fetch returned nothing, do your store of your
new object with the ifNoneMatch() option if it's possible another
writer created it between your fetch and store.

In the case where the fetch returned an object, use the
ifNotModified() since you have the vclock.
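
Roughly, the whole flow would look like this (a sketch; the key and
class names are made up, and it assumes your POJO carries a @RiakVClock
field):

    MyPojo existing = bucket.fetch("key", MyPojo.class).execute();
    if (existing == null) {
        // nothing there - the store only succeeds if no one beat us to it
        bucket.store("key", newPojo).withoutFetch().ifNoneMatch(true).execute();
    } else {
        // modify 'existing' (its @RiakVClock field holds the vclock), then:
        bucket.store("key", existing).withoutFetch().ifNotModified(true).execute();
    }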

- Roach

On Sat, May 25, 2013 at 8:21 PM, Y N  wrote:
> I am using ifNotModified and am running into a weird situation.
>
> I am using the following API:
>
> return bucket.store(key, new
> MyObject()).withMutator(mutator).withConverter(converter).ifNotModified(true).returnBody(true).execute();
>
> The problem I run into is that I get a not found exception when there is no
> existing object in Riak for the specified key. If I change ifNotModified to
> false, then it works as expected. I am allocating a new object in my mutator
> if there is no existing object from the fetch cycle. Note, this is with the
> default bucket settings.
>
> My expectation was that even with ifNotModified set to true, this should
> succeed if there is no existing object in Riak matching the key (hence,
> nothing has been modified and the store should succeed).
>
> Please clarify the behavior of the API.
>
> Thanks.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: New Counters - client support

2013-07-11 Thread Brian Roach
On Thu, Jul 11, 2013 at 1:48 AM, Y N  wrote:
>
> Thanks, that's helpful. Now I need to figure out if / how this is surfaced
> in the Java 1.1.1 client (I don't seem to see it anywhere, but maybe I'm
> missing something).

It's not. We're about 80% through the 1.4 features with the Java
client. I haven't tackled the counters yet.

Provided I can get peer reviews on everything I hope to have the 1.4.0
release of the Java client out next week.

The issue for that particular feature can be found here:
https://github.com/basho/riak-java-client/issues/239 and I'll update
it as soon as I push a PR for it.

Thanks,
Brian Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 1.4 - Changing backend through API

2013-07-11 Thread Brian Roach
Heya.

That feature is code complete and awaiting peer review - the PR for it
is here: https://github.com/basho/riak-java-client/pull/250

This will be part of the Java Client 1.4.0 release (with all the new
1.4 features) that I hope to have for next week.

Thanks,
Brian Roach

On Wed, Jul 10, 2013 at 2:06 PM, Y N  wrote:
> Hi,
>
> I just upgraded to 1.4 and have updated my client to the Java 1.1.1 client.
>
> According to the release notes, it says all bucket properties are now
> configurable through the PB API.
>
> I tried setting my backend through the Java client, however I get an
> Exception "Backend not supported for PB". Is this property not configurable
> through the API, or do I need a newer version of the Java client than 1.1.1?
>
> Thanks.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Best way to insert a collection of items at once

2013-07-22 Thread Brian Roach
Re your last Q:

>  I have read StoreObject does a read on every write, if true, can that
be disabled?

Yes. If you're not worried about creating siblings you can use the
withoutFetch() option in the StoreObject:

storeObject.withoutFetch().execute();

The StoreObject will not attempt to fetch the existing value (nor do
conflict resolution) when that's specified.

Also worth asking is how/when are you constructing your Bucket object?
By default the Java client does a fetch for bucket properties when you
call IRiakClient.fetchBucket(bucketName).execute()

This can be disabled by using:

Bucket b = client.fetchBucket(bucketName).lazyLoadBucketProperties().execute();
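
Putting the two together, a bulk insert ends up as something like this
(just a sketch; Item and items are stand-ins for your own types):

Bucket b = client.fetchBucket(bucketName).lazyLoadBucketProperties().execute();

for (Item item : items)
{
    // no pre-fetch, no conflict resolution - just the write
    b.store(item.key, item).withoutFetch().execute();
}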

Thanks!
- Roach

On Mon, Jul 22, 2013 at 12:41 PM, rsb  wrote:
> Thank you for your reply, I gave that a shot and it worked really well.
>
>
>
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/Best-way-to-insert-a-collection-of-items-at-once-tp4028487p4028500.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Best way to insert a collection of items at once

2013-07-23 Thread Brian Roach
On Tue, Jul 23, 2013 at 6:51 AM, rsb  wrote:
> Is there any underlying difference between performing a:
>
> storeObject.withoutFetch().execute();
> -or-
> myBucket.store(item.key, item).execute();
>
> In other words, will my second statement result in an implicit fetch as
> well?

Bucket.store() returns a StoreObject. You want to call its
withoutFetch() method:

myBucket.store(item.key, item).withoutFetch().execute();

Thanks,
- Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Java Client 1.1.2 and 1.4.0

2013-07-24 Thread Brian Roach
Greetings!

The Riak Java client versions 1.1.2 and 1.4.0 have been released and
are now available from maven central. For non-maven users a bundled
jar including all dependencies can be found for these versions at:

http://riak-java-client.s3.amazonaws.com/riak-client-1.4.0-jar-with-dependencies.jar
http://riak-java-client.s3.amazonaws.com/riak-client-1.1.2-jar-with-dependencies.jar

Javadoc is available via: http://basho.github.io/riak-java-client/

Why two versions you ask? Good question.

The recently released Riak 1.4.0 adds a number of new features and
brings parity between our HTTP and Protocol Buffers APIs. The Java
client 1.4.0 reflects this by allowing PB operations that previously
would throw exceptions (setting bucket properties, for example) and
supports those new features. If you're using Riak 1.4, you want to be
using the Java client 1.4.x to use the new features.

1.1.2 on the other hand is a minor maintenance / bug fix release for
use with Riak 1.3.x and below. It will also work with Riak 1.4 but
does not support the new Riak 1.4 features.

Probably the most notable feature in 1.4.0 is support for the new
counters in Riak. Check out the Javadoc here:
http://basho.github.io/riak-java-client/1.4.0/com/basho/riak/client/operations/CounterObject.html
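
Basic usage looks something like this (a sketch; the bucket and counter
names are made up):

Bucket b = client.fetchBucket("my_bucket").execute();

// increment by 1 and fetch the new value in one round trip
Long count = b.counter("page_views").increment(1).returnValue(true).execute();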

Thanks!
- Brian Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Java Client v2.0 and HTTP

2013-07-24 Thread Brian Roach
Code never sleeps. And it mostly comes at night. Mostly.

Now that v1.4.0 of the RJC is released we're back to working on v2.0.
It's a major overhaul of the client.

The one large change we're looking at is no longer using HTTP and
instead exclusively using Protocol Buffers to communicate with Riak.

I've posted an RFC here:
https://github.com/basho/riak-java-client/issues/268 and encourage
everyone to participate.

Thanks!
Brian Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java Client 1.1.2 and 1.4.0

2013-07-25 Thread Brian Roach
Fixed. Apparently I updated the ACL for 1.1.2 and not 1.4.0, sorry.

Thanks,
- Roach

On Wed, Jul 24, 2013 at 7:53 PM, YN  wrote:
> Hi Brian,
>
> Thanks for putting the release together so quickly. It does not appear as
> though the download is working (for the non-maven files). It looks like
> there's some permissions issue (I get an access denied error).
>
> Thanks.
> YN shared this with you.
> Riak Java Client 1.1.2 and 1.4.0
> Via Nabble - Riak Users
> Greetings!
>
> The Riak Java client versions 1.1.2 and 1.4.0 have been released and
> are now available from maven central. For non-maven users a bundled
> jar including all dependencies can be found for these versions at:
>
> http://riak-java-client.s3.amazonaws.com/riak-client-1.4.0-jar-with-dependencies.jar
> http://riak-java-client.s3.amazonaws.com/riak-client-1.1.2-jar-with-dependencies.jar
>
> Javadoc is available via: http://basho.github.io/riak-java-client/
>
> Why two versions you ask? Good question.
>
> The recently released Riak 1.4.0 adds a number of new features and
> brings parity between our HTTP and Protocol Buffers APIs. The Java
> client 1.4.0 reflects this by allowing PB operations that previously
> would throw exceptions (setting bucket properties, for example) and
> supports those new features. If you're using Riak 1.4, you want to be
> using the Java client 1.4.x to use the new features.
>
> 1.1.2 on the other hand is a minor maintenance / bug fix release for
> use with Riak 1.3.x and below. It will also work with Riak 1.4 but
> does not support the new Riak 1.4 features.
>
> Probably the most notable feature in 1.4.0 is support for the new
> counters in Riak. Check out the Javadoc here:
> http://basho.github.io/riak-java-client/1.4.0/com/basho/riak/client/operations/CounterObject.html
>
> Thanks!
> - Brian Roach
>
> ___
> riak-users mailing list
> [hidden email]
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> Sent from InoReader
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to connect a cluster with riak running non-standard PB port

2013-07-25 Thread Brian Roach
On Thu, Jul 25, 2013 at 7:47 AM, kiran kulkarni  wrote:
> Class PBClusterConfig has only method addHosts which allows to set hosts
> only. I am using a different port for PB connections. How do I set host and
> port both?

http://basho.github.io/riak-java-client/1.4.0/com/basho/riak/client/raw/pbc/PBClientConfig.html

PBClientConfig myPbClientConfig = new
PBClientConfig.Builder().withPort(10017).build(); // 10017 is a placeholder; use your PB port
myPbClusterConfig.addHosts(myPbClientConfig,
"192.168.1.10","192.168.1.11","192.168.1.12");


- Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to connect a cluster with riak running non-standard PB port

2013-07-25 Thread Brian Roach
http://basho.github.io/riak-java-client/1.4.0/com/basho/riak/client/raw/config/ClusterConfig.html#addClient(T)
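
In other words, build a PBClientConfig per node and add each one with
addClient() - a sketch using the four local ports you mention (the host
is assumed to be localhost):

PBClusterConfig clusterConfig = new PBClusterConfig(20);

for (int port : new int[] { 10017, 10027, 10037, 10047 })
{
    clusterConfig.addClient(new PBClientConfig.Builder()
                                .withHost("127.0.0.1")
                                .withPort(port)
                                .build());
}

IRiakClient client = RiakFactory.newClient(clusterConfig);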

On Thu, Jul 25, 2013 at 8:02 AM, kiran kulkarni  wrote:
> Thanks. This works if the port is the same for all hosts. My use case is a
> bit different:
> Actually for development, I run 4 Riak instances locally with different
> ports 10017, 10027, 10037, 10047 for PB.
>
>
>
> On Thu, Jul 25, 2013 at 7:24 PM, Brian Roach  wrote:
>>
>> On Thu, Jul 25, 2013 at 7:47 AM, kiran kulkarni 
>> wrote:
>> > Class PBClusterConfig has only method addHosts which allows to set hosts
>> > only. I am using a different port for PB connections. How do I set host
>> > and
>> > port both?
>>
>>
>> http://basho.github.io/riak-java-client/1.4.0/com/basho/riak/client/raw/pbc/PBClientConfig.html
>>
>> PBClientConfig myPbClientConfig = new
>> PBClientConfig.Builder().withPort(10017).build(); // 10017 is a placeholder; use your PB port
>> myPbClusterConfig.addHosts(myPbClientConfig,
>> "192.168.1.10","192.168.1.11","192.168.1.12");
>>
>>
>> - Roach
>
>
>
>
> --
> Kiran Kulkarni
> http://about.me/kiran_kulkarni

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client stats question

2013-07-25 Thread Brian Roach
Guido -

Right now, no.

We've been having some internal discussions around that topic and
whether it's really a "client library" operation or not.

How are you using stats? Is it for a monitoring app or ... ?

Thanks,
Brian Roach

On Thu, Jul 25, 2013 at 4:25 AM, Guido Medina  wrote:
> Hi,
>
> Is there a way to get the JSON stats via PBC? This is how we are doing it
> now; we would like to get rid of any HTTP calls. Currently, this is the only
> call being made over HTTP:
>
>   private void collectNodeInfo(final PBClientConfig clientConfig)
>   {
>     ...
>     RiakClusterStats stats=null;
>     try{
>       stats=new RiakClusterStats();
>       HttpClient client=new DefaultHttpClient();
>       HttpGet g=new HttpGet("http://" + clientConfig.getHost() + ":8098/stats");
>       HttpResponse response=client.execute(g);
>       JSONObject statsMap;
>       InputStream contentStream=null;
>       try{
>         contentStream=response.getEntity().getContent();
>         JSONTokener tok=new JSONTokener(contentStream);
>         statsMap=new JSONObject(tok);
>         stats.addNode(clientConfig.getHost(),statsMap);
>       } finally{
>         if(contentStream != null){
>           contentStream.close();
>         }
>       }
>     } catch(Exception e){
>       log.error("Huh? Exception when ",e);
>     }
>     lastClusterStats=stats;
>   }
>
>
> Kind regards,
>
> Guido.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Java Client v1.4.1

2013-07-26 Thread Brian Roach
Hot on the heels of 1.4.0 ...

After releasing 1.4.0 it was reported to us that if you tried to
switch to using protocol buffers in existing code and you were already
using protocol buffers 2.5.0 ... the client would crash.

Apparently Google has introduced breaking changes in Protocol Buffers
2.5.0 that make code generated with 2.4.1 incompatible.

To solve this problem we've decided to use the maven `shade` plugin to
repackage 2.4.1 and include it in the Riak Java Client jar. This is
the only difference between 1.4.0 and 1.4.1.

With the release of v2.0 we will be moving to Protocol Buffers 2.5.0

Thanks,
Brian Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: 2i timeouts in 1.4

2013-07-26 Thread Brian Roach
Sean -

The timeout isn't via a header, it's a query param -> &timeout= (in milliseconds)

You can also use stream=true to stream the results.

- Roach

Sent from my iPhone

On Jul 26, 2013, at 3:43 PM, Sean McKibben  wrote:

> We just upgraded to 1.4 and are having a big problem with some of our larger 
> 2i queries. We have a few key queries that take longer than 60 seconds
> (usually about 110 seconds) to execute, but after going to 1.4 we can't seem 
> to get around a 60 second timeout.
> 
> I've tried:
> curl -H "X-Riak-Timeout: 26" 
> "http://127.0.0.1:8098/buckets/mybucket/index/test_bin/myval?x-riak-timeout=26";
>  -i
> 
> But I always get
> HTTP/1.1 500 Internal Server Error
> Vary: Accept-Encoding
> Server: MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact)
> Date: Fri, 26 Jul 2013 21:41:28 GMT
> Content-Type: text/html
> Content-Length: 265
> Connection: close
> 
> 500 Internal Server Error: The server encountered an error while processing
> this request: {error,{error,timeout}} (mochiweb+webmachine web server)
> 
> Right at the 60 second mark. What can I set to give my secondary index 
> queries more time??
> 
> This is causing major problems for us :(
> 
> Sean
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: question about java client

2013-07-30 Thread Brian Roach
Paul,

I'm not quite sure I understand what you're asking.

If you do a fetch and have siblings each one is converted to your
domain object using the Converter and then passed as a Collection to
the ConflictResolver. Each sibling is going to include its links
and/or indexes as long as the Converter is injecting them into the
domain object and you can resolve them in the ConflictResolver.

The default JSONConverter, for example, injects them into your domain
object via annotations from the com.basho.riak.client.convert[1]
package.
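
For example, a resolver that merges sibling metadata might look like
this (a sketch only; MyPojo and the actual merge logic are stand-ins
for your own code):

class MergeResolver implements ConflictResolver<MyPojo>
{
    public MyPojo resolve(Collection<MyPojo> siblings)
    {
        if (siblings.isEmpty())
        {
            return null; // "not found"
        }

        Iterator<MyPojo> itr = siblings.iterator();
        MyPojo merged = itr.next();
        while (itr.hasNext())
        {
            MyPojo sibling = itr.next();
            // fold sibling's value, links and indexes into 'merged' here
        }
        return merged;
    }
}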

Thanks,
Brian Roach

[1] http://basho.github.io/riak-java-client/1.4.0/com/basho/riak/client/convert/package-summary.html


On Tue, Jul 30, 2013 at 11:41 AM, Paul Ingalls  wrote:
> Newbie with Riak, and looking at the java client.
>
> Specifically, I've been digging into the domain mapping apis.  Looking into
> the code, it appears to me that, if I'm using links a bunch or even
> secondary indexes, that I could lose some data during the conflict
> resolution phase.  I see where links and other relevant user data gets
> cached during the conversion phase from the fetch and then patched back in
> during the conversion phase for the store.  However, it doesn't look like
> you have the opportunity during the resolution phase to merge metadata.
> Should I focus on using the raw client, or am I missing something?
>
> Thanks!
>
> Paul
>
>
>
>
> Paul Ingalls
> Founder & CEO Fanzo
> p...@fanzo.me
> @paulingalls
> http://www.linkedin.com/in/paulingalls
>
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: question about java client

2013-07-30 Thread Brian Roach
Paul,

The annotated fields are not included in the Serialization using the
JSONConverter (at least, not in the current version of the client; I
think I did some fixes around that way back in like v1.0.7). If they
are, you've got something odd going on in your domain object.

Here's a (very basic) example:

public class App3
{

public static void main(String[] args) throws RiakException
{
IRiakClient client = RiakFactory.pbcClient();
Bucket b = client.fetchBucket("test_bucket").execute();

MyPojo mp = new MyPojo();
mp.key = "key0";
mp.value = "This is my value";

Set<RiakLink> links = new HashSet<RiakLink>();
for (int i = 1; i < 4; i++)
{
RiakLink link = new RiakLink("test_bucket", "key" + i, "myLinkTag");
links.add(link);
}
mp.links = links;
b.store(mp).execute();

mp = b.fetch("key0", MyPojo.class).execute();

System.out.println(mp.key);
System.out.println(mp.value);
for (RiakLink link : mp.links)
{
System.out.println(link.getKey());
}

client.shutdown();
}

}

class MyPojo
{
public @RiakKey String key;
public @RiakLinks Collection<RiakLink> links;
public String value;

}

---
That outputs:

key0
This is my value
key2
key3
key1

Checking it with curl shows it as it should be:

roach$ curl -v localhost:8098/buckets/test_bucket/keys/key0
* About to connect() to localhost port 8098 (#0)
*   Trying ::1... Connection refused
*   Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8098 (#0)
> GET /buckets/test_bucket/keys/key0 HTTP/1.1
> User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 
> OpenSSL/0.9.8x zlib/1.2.5
> Host: localhost:8098
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Riak-Vclock: a85hYGBgzGDKBVIcypz/fga+nNWUwZTInMfK4Lcq5zRfFgA=
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact)
< Link: </buckets/test_bucket/keys/key1>; riaktag="myLinkTag",
</buckets/test_bucket/keys/key2>; riaktag="myLinkTag",
</buckets/test_bucket/keys/key3>; riaktag="myLinkTag",
</buckets/test_bucket>; rel="up"
< Last-Modified: Tue, 30 Jul 2013 21:21:18 GMT
< ETag: "1ikpRECrH40O93LxiTmnKz"
< Date: Tue, 30 Jul 2013 21:21:57 GMT
< Content-Type: application/json; charset=UTF-8
< Content-Length: 28
<
* Connection #0 to host localhost left intact
* Closing connection #0
{"value":"This is my value"}



On Tue, Jul 30, 2013 at 2:38 PM, Paul Ingalls  wrote:
> Hey Brian,
>
> After a bit of messing around, I'm now dropping objects into the correct
> bucket using the links annotation.  However,  I am noticing that the json is
> including the metadata from the domain object, i.e. things tagged with
> @RiakKey, @RiakIndex or @RiakLinks.  I was under the impression this data
> would be left out.  I wouldn't care a whole lot, but when I'm getting data
> back in via a fetch, the JSONConverter is crashing saying it doesn't know
> how to convert the RiakLink object since there isn't an appropriate
> constructor for it.
>
> Do I need to specifically @JsonIgnore fields tagged with one of the Riak
> tags?
>
> Thanks!
>
> Paul
>
>
> Paul Ingalls
> Founder & CEO Fanzo
> p...@fanzo.me
> @paulingalls
> http://www.linkedin.com/in/paulingalls
>
>
>
> On Jul 30, 2013, at 11:20 AM, Paul Ingalls  wrote:
>
> Ok, thats perfect.  I totally missed the annotation for links…
>
> Will give that a shot…
>
> Thanks!
>
> Paul Ingalls
> Founder & CEO Fanzo
> p...@fanzo.me
> @paulingalls
> http://www.linkedin.com/in/paulingalls
>
>
>
> On Jul 30, 2013, at 11:16 AM, Brian Roach  wrote:
>
> Paul,
>
> I'm not quite sure I understand what you're asking.
>
> If you do a fetch and have siblings each one is converted to your
> domain object using the Converter and then passed as a Collection to
> the ConflictResolver. Each sibling is going to include its links
> and/or indexes as long as the Converter is injecting them into the
> domain object and you can resolve them in the ConflictResolver.
>
> The default JSONConverter, for example, injects them into your domain
> object via annotations from the com.basho.riak.client.convert[1]
> package.
>
> Thanks,
> Brian Roach
>
> [1] http://basho.github.io/riak-java-client/1.4.0/com/basho/riak/client/convert/package-summary.html
>
>
> On Tue, Jul 30, 2013 at 11:41 AM, Paul Ingalls  wrote:
>
> Newbie with Riak, and looking at the java client.
>
> Specifically, I've been digging into the domain mapping apis.  Looking into
> the code, it appears to me that, if I'm using links a bunch or even
> secondary indexes, that I could 

Re: question about java client

2013-07-30 Thread Brian Roach
Paul,

Ugh ... ok, makes sense now. Jackson is like that. We actually had
someone do a PR for secondary indexes to allow method annotation, but
... we hadn't done it for links. It's now on the todo list.

Thanks,
- Roach

On Tue, Jul 30, 2013 at 3:37 PM, Paul Ingalls  wrote:
> I figured out what I was doing.  I had a getter/setter for the fields in
> addition to the fields themselves, since they were private.  I had to
> JsonIgnore the getters/setters since I couldn't tag them with the riak
> annotations.
>
> Paul Ingalls
> Founder & CEO Fanzo
> p...@fanzo.me
> @paulingalls
> http://www.linkedin.com/in/paulingalls
>
>
>
> On Jul 30, 2013, at 2:34 PM, Brian Roach  wrote:
>
> Paul,
>
> The annotated fields are not included in the Serialization using the
> JSONConverter (at least, not in the current version of the client; I
> think I did some fixes around that way back in like v1.0.7). If they
> are, you've got something odd going on in your domain object.
>
> Here's a (very basic) example:
>
> public class App3
> {
>
>public static void main(String[] args) throws RiakException
>{
>IRiakClient client = RiakFactory.pbcClient();
>Bucket b = client.fetchBucket("test_bucket").execute();
>
>MyPojo mp = new MyPojo();
>mp.key = "key0";
>mp.value = "This is my value";
>
>Set<RiakLink> links = new HashSet<RiakLink>();
>for (int i = 1; i < 4; i++)
>{
>RiakLink link = new RiakLink("test_bucket", "key" + i,
> "myLinkTag");
>links.add(link);
>}
>mp.links = links;
>b.store(mp).execute();
>
>mp = b.fetch("key0", MyPojo.class).execute();
>
>System.out.println(mp.key);
>System.out.println(mp.value);
>for (RiakLink link : mp.links)
>{
>System.out.println(link.getKey());
>}
>
>client.shutdown();
>}
>
> }
>
> class MyPojo
> {
>public @RiakKey String key;
>public @RiakLinks Collection<RiakLink> links;
>public String value;
>
> }
>
> ---
> That outputs:
>
> key0
> This is my value
> key2
> key3
> key1
>
> Checking it with curl shows it as it should be:
>
> roach$ curl -v localhost:8098/buckets/test_bucket/keys/key0
> * About to connect() to localhost port 8098 (#0)
> *   Trying ::1... Connection refused
> *   Trying 127.0.0.1... connected
> * Connected to localhost (127.0.0.1) port 8098 (#0)
>
> GET /buckets/test_bucket/keys/key0 HTTP/1.1
> User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4
> OpenSSL/0.9.8x zlib/1.2.5
> Host: localhost:8098
> Accept: */*
>
> < HTTP/1.1 200 OK
> < X-Riak-Vclock: a85hYGBgzGDKBVIcypz/fga+nNWUwZTInMfK4Lcq5zRfFgA=
> < Vary: Accept-Encoding
> < Server: MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact)
> < Link: </buckets/test_bucket/keys/key1>; riaktag="myLinkTag",
> </buckets/test_bucket/keys/key2>; riaktag="myLinkTag",
> </buckets/test_bucket/keys/key3>; riaktag="myLinkTag",
> </buckets/test_bucket>; rel="up"
> < Last-Modified: Tue, 30 Jul 2013 21:21:18 GMT
> < ETag: "1ikpRECrH40O93LxiTmnKz"
> < Date: Tue, 30 Jul 2013 21:21:57 GMT
> < Content-Type: application/json; charset=UTF-8
> < Content-Length: 28
> <
> * Connection #0 to host localhost left intact
> * Closing connection #0
> {"value":"This is my value"}
>
>
>
> On Tue, Jul 30, 2013 at 2:38 PM, Paul Ingalls  wrote:
>
> Hey Brian,
>
> After a bit of messing around, I'm now dropping objects into the correct
> bucket using the links annotation.  However,  I am noticing that the json is
> including the metadata from the domain object, i.e. things tagged with
> @RiakKey, @RiakIndex or @RiakLinks.  I was under the impression this data
> would be left out.  I wouldn't care a whole lot, but when I'm getting data
> back in via a fetch, the JSONConverter is crashing saying it doesn't know
> how to convert the RiakLink object since there isn't an appropriate
> constructor for it.
>
> Do I need to specifically @JsonIgnore fields tagged with one of the Riak
> tags?
>
> Thanks!
>
> Paul
>
>
> Paul Ingalls
> Founder & CEO Fanzo
> p...@fanzo.me
> @paulingalls
> http://www.linkedin.com/in/paulingalls
>
>
>
> On Jul 30, 2013, at 11:20 AM, Paul Ingalls  wrote:
>
> Ok, thats perfect.  I totally missed the annotation for links…
>
> Will give that a shot…
>
> Thanks!
>
> Paul Ingalls
> Founder & CEO Fanzo
> p...@fanzo.me
> @paulingalls
> http://www.linkedin.com/in/paulingalls
>

Re: PB Java Client API 1.4 runtime exception

2013-08-07 Thread Brian Roach
On Wed, Aug 7, 2013 at 10:44 AM, rsb  wrote:
> I have tried updating my project to use the new PB 1.4, however during
> runtime I get the following exception:
> ...
> Any ideas what is causing the issue, and how can I resolve it? - Thanks.

Yes; don't do that.

The 1.4.0 version of the riak-pb jar is for the 1.4.x version of the
Java client. It won't work with any previous version, nor is it meant
to.

Thanks,
- Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client - conflict resolver on both fetch() and store()?

2013-08-11 Thread Brian Roach
Matt,

The original design of StoreObject (which is what Bucket.store()
returns) was that it would encapsulate the entire read/modify/write
cycle in a very Java-y / enterprise-y way. This is why it takes a
Resolver and a Mutator; it does a fetch, resolves conflicts, passes
the resolved object to the Mutator, then stores the result of the
mutation to Riak.

Several users put in requests to make the fetch/resolve portion of
that optional, as they had a workflow where that wasn't ideal and they
wanted to store a previously fetched value without fetching it again.
This is why the 'withoutFetch()' method was introduced along with the
@RiakVClock annotation.

When using withoutFetch() no fetch is performed, and no conflict
resolution occurs. Any ConflictResolver you pass in is simply not used
/ ignored ... except possibly if you're using returnBody()

Your code here:

bucket.store(record).returnBody(true).withoutFetch().withResolver(myConflictResolver);

is not doing a fetch or conflict resolution before storing your data.
It's just storing `record` in Riak. If that POJO has a vclock from a
previous fetch available via a @RiakVClock annotated field it will be
used. Otherwise, you're doing a store without a vclock.
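
For reference, the POJO side of that is just an annotated field (a
sketch; Record is a made-up class):

class Record
{
    @RiakKey String key;
    @RiakVClock VClock vclock; // populated on fetch, used by withoutFetch() stores
    String value;
}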

I suspect where your confusion is stemming from is that you've also
specified 'returnBody()' and you're creating a sibling in that store
operation. When that's the case the "body" is going to be multiple
objects (all the siblings) which require resolution as
StoreObject.execute() only returns a single object back to the caller.
The same Resolver used if you had done the pre-fetch is employed. If
you haven't passed in a Resolver then the DefaultResolver is used
which ... isn't really a "resolver" - it simply passes through an
object if there's only one, or throws an exception if there's multiple
(siblings) present.

Thanks,
- Roach




On Sun, Aug 11, 2013 at 5:41 AM, Guido Medina  wrote:
> Hi Matt,
>
> Like Sean said, you should have a mutator if you are dealing with conflict
> resolution in domain objects; a good side effect of using a mutator is that
> Riak Java client will fetch-modify-write so your conflict resolver will be
> called once(?), if you don't use mutators, you get the effect you are
> describing(?) or in other words, you have to treat the operations as
> non-atomic and do things twice.
>
> There are two interfaces for mutations: Mutation and
> ConditionalStoreMutation; the 2nd interface will write only if the object
> was actually mutated - you must return true or false to state whether it was
> mutated or not. This can be helpful if you are "mutating" an object and you
> discover the change you are requesting was already in place; then, to save
> I/O, sibling creation and everything else a write operation implies, you can
> decide not to write back.
>
> Mutation and conflict resolution are two separate concerns, but if you
> specify a mutator and a conflict resolver, conflict resolution will happen
> after the object is fetched and it is ready to be modified, which will
> emulate an atomic operation if you use a domain object.
>
> If you use a raw RiakObject, you must fetch, resolve the conflicts and on
> the write operation pass the VClock which is not a trivial nor easy to
> understand in code.
>
> HTH,
>
> Guido.
>
>
>
> On 11/08/13 03:32, Sean Cribbs wrote:
>
> I'm sure Roach will correct me if I'm off-base, but I believe the store
> operation does a fetch and resolve before writing. I think the ideal way to
> do that is to create a Mutation<T> (T being your POJO) as well, in which
> case it's less of a "store" and more of a "fetch-modify-write". The way to
> skip the fetch/modify is to use the withoutFetch() option on the operation
> builder.
>
>
> On Sat, Aug 10, 2013 at 6:50 PM, Matt Painter  wrote:
>>
>> Hi,
>>
>> I've just rolled up my sleeves and have started to make my application
>> more robust with conflict resolution.
>>
>> I am currently using a @RiakVClock in my POJO (I need to think more about
>> whether the read/modify/write approach is preferable or whether I'd have to
>> rearchitect things).
>>
>> I read in the Riak Handbook the recommendation that conflicts are best
>> resolved on read - not write - however the example App.java snippet on the
>> Storing data in Riak page in the Java client's doco uses a resolver on both
>> the store() and fetch() operations.
>>
>> Indeed, if I don't specify my conflict resolver in my store(), things blow
>> up (in my unit test, mind - I'm still getting my head around the whole area
>> so my test may be a bit contrived).
>>
>> However when I use it in both places, my conflicts are being resolved
>> twice. Is this anticipated?
>>
>> My store is:
>>
>>
>> bucket.store(record).returnBody(true).withoutFetch().withResolver(myConflictResolver);
>>
>> and my fetch is:
>>
>> bucket.fetch(id, Record.class).withResolver(myConflictResolver).execute();
>>
>> The order of operations in my test is:
>>
>> Store new record
>> Fetch the record as firstRecord

Re: Java client - conflict resolver on both fetch() and store()?

2013-08-11 Thread Brian Roach
I think this has somewhat hijacked the original poster's question, but ...

doNotFetch() was never meant to be used with a Mutation or
returnBody(). It was an option added due to the aforementioned
specific feature requests.

The reason it exists is for this workflow and this workflow alone:

1) Fetch something from Riak, resolving any conflicts.
2) Do something with that data and change (mutate) the value / metadata
3) Store the changes back to Riak without specifying a Mutation or
ConflictResolver and avoiding the fetch.

It is assumed that any siblings created are dealt with in a subsequent
fetch using this workflow.

The most basic example:

IRiakObject ro = bucket.fetch("foo").execute();
ro.setValue("Some new value");
bucket.store(ro).withoutFetch().execute();

The DefaultBucket.store() in that case sets up the StoreObject to use
the PassThroughConverter and the ClobberMutation that simply returns
the passed in IRiakObject.

The Javadoc for doNotFetch() is very specific about what happens with
the Mutation:

* 1) null will be passed to the {@link Mutation} object (if
*you are using the default {@link ClobberMutation} this is fine).

That said, using a Mutation other than ClobberMutation with
doNotFetch() really makes little sense. The option was added for
people who have already fetched and mutated their data.

- Roach

On Sun, Aug 11, 2013 at 1:54 PM, Guido Medina  wrote:
> I hate it too but my point still stands, if DO NOT FETCH, what's the target
> object the mutation should work with? Isn't it the passed object instead?
>
> Anyway, I sent a pull request which hopefully applies a better semantics:
> https://github.com/basho/riak-java-client/pull/271
>
> Thanks,
>
> Guido.
>
>
> On 11/08/13 20:45, YN wrote:
>
> Guido,
>
> In this case it appears that fetching is enabled i.e. if (!doNotFetch) i.e.
> if NOT doNotFetch... so basically doNotFetch = false (fetching is true /
> enabled).
>
> I hate the double negative cases since it's easy to get confused / miss the
> logic that was intended.
> YN shared this with you.
> Re: Java client - conflict resolver on both fetch() and store()?
> Via gmane.comp.db.riak.user
>
> Brian,
>
> In StoreObject's execute() method, this condition, is it a bug or intended?
>
>  ...
>  ...
>  if (!doNotFetch) {
>  resolved = fetchObject.execute();
>  vclock = fetchObject.getVClock();
>  }
>  ...
>  ...
>
> My reasoning is: if do not fetch then shouldn't the resolved object be
> the one passed? I'm doing some tests and if I do store a mutation
> returning the body without fetching, I get a new mutated object and not
> the one I passed + mutation. So I'm wondering if that was the original
> intention.
>
> Thanks,
>
> Guido.
>
> On 11/08/13 18:49, Guido Medina wrote:
>
> ___
> riak-users mailing list
> riak-users< at >lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
> Sent from InoReader
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Simple explanation for 2i using the Java Client

2013-08-19 Thread Brian Roach
If you were to actually try it, you'd find it throws an exception
telling you we don't support array types with the @RiakIndex
annotation.

You need:
@RiakIndex(name = "cars") private Set<String> cars;

and adjust things accordingly. At that point, if I understand your
question, yes - your index query would then return the keys for all
objects (including the one shown) that have "corsa" in their set of
cars.
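
The query side looks something like this (a sketch; the bucket name is
made up):

Bucket bucket = client.fetchBucket("people").execute();

// keys of all objects whose "cars" index contains "corsa"
List<String> keys = bucket.fetchIndex(BinIndex.named("cars"))
                          .withValue("corsa")
                          .execute();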

Thanks,
- Roach


On Mon, Aug 19, 2013 at 11:02 AM, rsb  wrote:
> I am having a bit of a hard time finding resources that explain 2i
> specifically with the Riak Java Client. Would anyone be kind enough to point
> me to a straightforward example that I will be able to expand from.
>
> Assuming the following object:
>
>
>
> How can I create a secondary index on the cars that '/Mr Riak/' owns. And of
> course, how could I query my bucket to retrieve all the people that own a
> '/Corsa/'.
>
>
>
>
> Thanks
>
>
>
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/Simple-explanation-for-2i-using-the-Java-Client-tp4028895.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Including @RiakKey and @RiakIndexes in the stored JSON

2013-08-22 Thread Brian Roach
No; they are explicitly excluded from serialization because they're metadata.

The 2i indexes are returned in the http headers, and the key you kinda
already have to know to make the query in the first place.

Thanks,
Roach

On Thu, Aug 22, 2013 at 9:32 AM, mex  wrote:
> If I declare @RiakKey and 2i indexes (@RiakIndex) for some fields in my
> "Item" class, then those fields will not be displayed when querying the
> record over the browser (ie. http://localhost/riak/myBucket/myKey1).
>
> I have tried adding the annotation @JSONInclude but it does not seem to change
> the behaviour. Is there a way around it?
>
> Thanks
>
>
>
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/Including-RiakKey-and-RiakIndexes-in-the-stored-JSON-tp4028933.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Cluster performance.

2013-09-12 Thread Brian Roach
Victor,

What I suspect is happening is that only one of the IP addresses you
are passing to addHosts() is actually accepting connections /
reachable.

The way the ClusterClient works, an operation is going to be retried
on failure (up to three times by default) and as long as the node that
actually is reachable / accepting connections gets tried, this is
going to be hidden from you.

You can test this by changing the retry behavior when you query. For example:

Bucket b = client.fetchBucket("bucket").execute();

for (int i = 0; i < 5; i++)
{
b.fetch("key").withRetrier(new DefaultRetrier(0)).execute();
}

Thanks,
- Roach




On Thu, Sep 12, 2013 at 2:40 PM, Victor  wrote:
> Hi,
>
> We are currently testing Riak as a potential replacement for a data
> warehouse. Programmers were pretty happy with single-node operations, but as
> we switched to testing a cluster, performance of the same applications
> dropped significantly with only these changes in code:
>
> PBClientConfig conf = new
> PBClientConfig.Builder().withHost("192.168.149.21").withPort(8087).build();
>
> IRiakClient client = RiakFactory.newClient(conf);
>
>
>
> to
>
>
>
> PBClusterConfig clusterConfig = new PBClusterConfig(20);
>
> PBClientConfig clientConfig = PBClientConfig.defaults();
>
> clusterConfig.addHosts(clientConfig, "192.168.*.*","192.168.*.*");
>
> IRiakClient client = RiakFactory.newClient(clusterConfig);
>
>
>
> At the same time, I noticed that if I use riak-admin status | grep node* -
> node_gets_total and node_puts_total rise only on one of the clustered
> machines.
>
> Is there any way to monitor data distribution, activity and resources of
> nodes in the cluster? I saw multiple applications, but usually they provide only
> bucket operations and status.
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client bug?

2013-09-25 Thread Brian Roach
Guido -

When you say "the client is configured to time out" do you mean you're
using PB and you set the SO_TIMEOUT on the socket via the
PBClientConfig's withRequestTimeoutMillis()?

- Roach

On Wed, Sep 25, 2013 at 5:54 AM, Guido Medina  wrote:
> Hi,
>
> Streaming 2i indexes is not timing out, even though the client is configured
> to time out; this coincidentally is causing the writes to fail (or is it the
> opposite?), is there anything elemental that could "lock" (I know the
> locking concept in Erlang is out of the equation so LevelDB?) something in
> Riak while trying to stream a 2i index?
>
> Basically, our cluster copier which runs every two minutes once it gets to
> this state it never exits (no timeout) and our app just starts writing slowly
> (over a minute to write a key)
>
> Not sure what's going on.
>
> Guido.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client bug?

2013-09-25 Thread Brian Roach
That option is for the connection timeout (i.e. when the connection
pool makes a new connection to Riak).

You can set the read timeout on the socket with the aforementioned
withRequestTimeoutMillis()

The default is the Java default, which is to say it'll block on the
socket read until either there's data to read or the remote side
closes the socket. That would at least get the client "unstuck".
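
For the record, the two settings side by side (a sketch; the values are
arbitrary):

PBClientConfig config = new PBClientConfig.Builder()
    .withConnectionTimeoutMillis(5000)  // time allowed to establish a connection
    .withRequestTimeoutMillis(10000)    // SO_TIMEOUT; bounds blocking socket reads
    .build();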

This, however, doesn't explain/solve the real issue you're describing
which is that Riak is hanging up and not sending data.

Someone else will need to chime in on that - are you seeing anything
in the Riak logs?

- Roach

On Wed, Sep 25, 2013 at 12:11 PM, Guido Medina  wrote:
> Like this: withConnectionTimeoutMillis(5000).build();
>
> Guido.
>
>
> On 25/09/13 18:08, Brian Roach wrote:
>>
>> Guido -
>>
>> When you say "the client is configured to time out" do you mean you're
>> using PB and you set the SO_TIMEOUT on the socket via the
>> PBClientConfig's withRequestTimeoutMillis()?
>>
>> - Roach
>>
>> On Wed, Sep 25, 2013 at 5:54 AM, Guido Medina 
>> wrote:
>>>
>>> Hi,
>>>
>>> Streaming 2i indexes is not timing out, even though the client is
>>> configured
>>> to time out; this coincidentally is causing the writes to fail (or is it the
>>> opposite?), is there anything elemental that could "lock" (I know the
>>> locking concept in Erlang is out of the equation so LevelDB?) something
>>> in
>>> Riak while trying to stream a 2i index?
>>>
>>> Basically, our cluster copier which runs every two minutes once it gets
>>> to
>>> this state it never exits (no timeout) and our app just starts writing
>>> slowly
>>> (over a minute to write a key)
>>>
>>> Not sure what's going on.
>>>
>>> Guido.
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client not returning deleted sibling

2013-10-03 Thread Brian Roach
Daniel -

Yeah, that is the case. When the ability to pass fetch/store/delete
meta was added to DomainBucket way back when it appears that was
missed.

I'll add it and forward-port to 1.4.x as well and cut new jars. Should
be avail by tomorrow morning at the latest.

Thanks!
- Roach

On Thu, Oct 3, 2013 at 9:38 AM, Daniel Iwan  wrote:
> Hi I'm using Riak 1.3.1 and Java client 1.1.2
>
> Using http and curl I see 4 siblings for an object one of which has
> X-Riak-Deleted: true
> but when I'm using Java client with DomainBucket my Converter's method
> toDomain is called only 3 times.
>
> I have set the property
>
> builder.returnDeletedVClock(true);
>
> on my DomainBuilder which I keep reusing for all queries and store
> operations (I guess that's good practice btw.?)
>
>
> I run that under debugger and it seems raw client sees 4 siblings but passes
> over only 3 due to bug (I think) in DomainBucket.fetch() method which should
> have
>
> if (fetchMeta.hasReturnDeletedVClock()) {
>
> so.returnDeletedVClock(fetchMeta.getReturnDeletedVClock());
>
> }
>
> at the end, as store() method has.
>
> Could you confirm or I'm I completely wrong?
>
>
> Regards
>
> Daniel
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client not returning deleted sibling

2013-10-03 Thread Brian Roach
On Thu, Oct 3, 2013 at 10:32 AM, Daniel Iwan  wrote:
> Thanks Brian for quick response.
>
> As a side question, what is the best way to delete such an object i.e. once
> I know one of the siblings has 'deleted' flag true because I fetched it?
> Should I just use DomainBucket.delete(key) without providing any vclock?
> Would it wipe it from Riak or create yet another sibling?

You should always use vclocks when possible, which in this case it is.
There are additional issues around doing the delete without a vclock
when there's concurrently a store operation occurring.

Ideally you should look at why you're getting that tombstone sibling.
If it's simply a case of high write concurrency and you're using
vclocks with your writes, then there's not much you can do except
resolve it later (without changing how you're using the DB)... but
usually these things are caused by writes without a vclock.
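
As a sketch (names are placeholders, and I haven't run this), fetching
first and handing the vclock to the delete looks like:

IRiakObject obj = bucket.fetch("key").returnDeletedVClock(true).execute();
if (obj != null)
{
    bucket.delete("key").vclock(obj.getVClock()).execute();
}

// or let the client do the fetch for you:
bucket.delete("key").fetchBeforeDelete(true).execute();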

Thanks,
- Roach




>
> Regards
> Daniel
>
>
> On 3 October 2013 17:20, Brian Roach  wrote:
>>
>> Daniel -
>>
>> Yeah, that is the case. When the ability to pass fetch/store/delete
>> meta was added to DomainBucket way back when it appears that was
>> missed.
>>
>> I'll add it and forward-port to 1.4.x as well and cut new jars. Should
>> be avail by tomorrow morning at the latest.
>>
>> Thanks!
>> - Roach
>>
>> On Thu, Oct 3, 2013 at 9:38 AM, Daniel Iwan  wrote:
>> > Hi I'm using Riak 1.3.1 and Java client 1.1.2
>> >
>> > Using http and curl I see 4 siblings for an object one of which has
>> > X-Riak-Deleted: true
>> > but when I'm using Java client with DomainBucket my Converter's method
>> > toDomain is called only 3 times.
>> >
>> > I have set the property
>> >
>> > builder.returnDeletedVClock(true);
>> >
>> > on my DomainBuilder which I keep reusing for all queries and store
>> > operations (I guess that's good practice btw.?)
>> >
>> >
>> > I run that under debugger and it seems raw client sees 4 siblings but
>> > passes
>> > over only 3 due to bug (I think) in DomainBucket.fetch() method which
>> > should
>> > have
>> >
>> > if (fetchMeta.hasReturnDeletedVClock()) {
>> >
>> > so.returnDeletedVClock(fetchMeta.getReturnDeletedVClock());
>> >
>> > }
>> >
>> > at the end, as store() method has.
>> >
>> > Could you confirm or I'm I completely wrong?
>> >
>> >
>> > Regards
>> >
>> > Daniel
>> >
>> >
>> > ___
>> > riak-users mailing list
>> > riak-users@lists.basho.com
>> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> >
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client not returning deleted sibling

2013-10-04 Thread Brian Roach
Daniel -

I'll get 1.1.3 out today; I'll post to the list. There's actually a
couple other small things I need to squeeze in since we're going to do
a release.

Re: Setting the vclock on the tombstone in JSONConverter, you're
right. It would only be an issue if you only had a tombstone, but it
should be there.

Thanks,
- Roach

On Fri, Oct 4, 2013 at 2:58 AM, Daniel Iwan  wrote:
> Thanks Brian for putting fix together so quickly.
>
> I think I found something else though.
> In JSONConverter I don't see vclock being set in toDomain() when converting
> deleted sibling?
> That vclock should be used for following delete if I understood it
> correctly?
>
> Also where can I download latest build? I tried
> http://riak-java-client.s3.amazonaws.com/riak-client-1.1.3-jar-with-dependencies.jar
> but access is denied
>
> Cheers
> Daniel
>
>
> On 3 October 2013 19:36, Brian Roach  wrote:
>>
>> On Thu, Oct 3, 2013 at 10:32 AM, Daniel Iwan 
>> wrote:
>> > Thanks Brian for quick response.
>> >
>> > As a side question, what is the best way to delete such an object i.e.
>> > once
>> > I know one of the siblings has 'deleted' flag true because I fetched it?
>> > Should I just use DomainBucket.delete(key) without providing any vclock?
>> > Would it wipe it from Riak or create yet another sibling?
>>
>> You should always use vclocks when possible, which in this case it is.
>> There are additional issues around doing the delete without a vclock
>> when there's concurrently a store operation occurring.
>>
>> Ideally you should look at why you're getting that tombstone sibling.
>> If it's simply a case of high write concurrency and you're using
>> vclocks with your writes, then there's not much you can do except
>> resolve it later (without changing how you're using the DB)... but
>> usually these things are caused by writes without a vclock.
>>
>> Thanks,
>> - Roach
>>
>>
>>
>>
>> >
>> > Regards
>> > Daniel
>> >
>> >
>> > On 3 October 2013 17:20, Brian Roach  wrote:
>> >>
>> >> Daniel -
>> >>
>> >> Yeah, that is the case. When the ability to pass fetch/store/delete
>> >> meta was added to DomainBucket way back when it appears that was
>> >> missed.
>> >>
>> >> I'll add it and forward-port to 1.4.x as well and cut new jars. Should
>> >> be avail by tomorrow morning at the latest.
>> >>
>> >> Thanks!
>> >> - Roach
>> >>
>> >> On Thu, Oct 3, 2013 at 9:38 AM, Daniel Iwan 
>> >> wrote:
>> >> > Hi I'm using Riak 1.3.1 and Java client 1.1.2
>> >> >
>> >> > Using http and curl I see 4 siblings for an object one of which has
>> >> > X-Riak-Deleted: true
>> >> > but when I'm using Java client with DomainBucket my Converter's
>> >> > method
>> >> > toDomain is called only 3 times.
>> >> >
>> >> > I have set the property
>> >> >
>> >> > builder.returnDeletedVClock(true);
>> >> >
>> >> > on my DomainBuilder which I keep reusing for all queries and store
>> >> > operations (I guess that's good practice btw.?)
>> >> >
>> >> >
>> >> > I run that under debugger and it seems raw client sees 4 siblings but
>> >> > passes
>> >> > over only 3 due to bug (I think) in DomainBucket.fetch() method which
>> >> > should
>> >> > have
>> >> >
>> >> > if (fetchMeta.hasReturnDeletedVClock()) {
>> >> >
>> >> >
>> >> > so.returnDeletedVClock(fetchMeta.getReturnDeletedVClock());
>> >> >
>> >> > }
>> >> >
>> >> > at the end, as store() method has.
>> >> >
>> >> > Could you confirm or I'm I completely wrong?
>> >> >
>> >> >
>> >> > Regards
>> >> >
>> >> > Daniel
>> >> >
>> >> >
>> >> > ___
>> >> > riak-users mailing list
>> >> > riak-users@lists.basho.com
>> >> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> >> >
>> >
>> >
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client not returning deleted sibling

2013-10-07 Thread Brian Roach
Daniel -

Unfortunately returning the body from a store operation may not
reflect all the replicas (and in the case of a concurrent write on two
different nodes "may not" really means  "probably doesn't").

If you do a subsequent fetch after sending both your writes you'll get
back a single vclock with siblings.

Thanks,
- Roach

On Mon, Oct 7, 2013 at 12:37 PM, Daniel Iwan  wrote:
> Hi Brian
>
> Thanks for update.
> I'm using 1.1.3 now and still have some issues sibling related
>
> Two clients are updating the same key.
> Updated is my custom meta field, which should be merged to contain values
> from both clients (set)
> I see both clients are doing a fetch, resolving siblings (only 1, i.e. no
> siblings), apply mutation (their own values for meta field). After that
> object is converted using fromDomain() in my converter using vclock provided
> Both nodes use vclock
> 6bce61606060cc60ca05521cf385ab3e05053d2dc8604a64cc6365589678fc345f1600
>
> So far so good.
> But the next step is toDomain (which is part of Store I think since I'm
> using withBody) and it looks like each node contains info only about
> its own changes.
> Client one sees vclock
> 6bce61606060cc60ca05521cf385ab3e05053d2dc8604a64ca6365589678fc345f1600
> Client 2 sees vclock
> 6bce61606060ca60ca05521cf385ab3e05053d2dc8604a64ca6365589678fc341f548a4d4adb032ac508945a0e92ca0200
>
> Both vclocks are different from the original vclock given during the store,
> which I assume means Riak accepted the write.
> Resolve is called on both machines but there is only one sibling.
>
> I guess the fact that I'm changing only the meta field should not matter here
> and I should see 2 siblings?
> allow_multi is of course true and lastWriteWins is false on that bucket
>
> Any help much appreciated
>
>
> Regards
> Daniel
>
>
>
>
>
>
>
>
>
>
>
>
> On 4 October 2013 21:41, Brian Roach  wrote:
>>
>> Hey -
>>
>> I'm releasing 1.1.3 and 1.4.2 but it'll take a while for them to get
>> staged at maven central so I can post an "official" release to the
>> mailing list.
>>
>> I've gone ahead and uploaded the jar-with-dependencies to the usual
>> place for you-
>>
>>
>> http://riak-java-client.s3.amazonaws.com/riak-client-1.1.3-jar-with-dependencies.jar
>>
>> It fixes up the DomainBucket stuff and the JSONConverter.
>>
>> Thanks,
>> - Roach
>>
>> On Fri, Oct 4, 2013 at 2:58 AM, Daniel Iwan  wrote:
>> > Thanks Brian for putting fix together so quickly.
>> >
>> > I think I found something else though.
>> > In JSONConverter I don't see vclock being set in toDomain() when
>> > converting
>> > deleted sibling?
>> > That vclock should be used for following delete if I understood it
>> > correctly?
>> >
>> > Also where can I download latest build? I tried
>> >
>> > http://riak-java-client.s3.amazonaws.com/riak-client-1.1.3-jar-with-dependencies.jar
>> > but access is denied
>> >
>> > Cheers
>> > Daniel
>> >
>> >
>> > On 3 October 2013 19:36, Brian Roach  wrote:
>> >>
>> >> On Thu, Oct 3, 2013 at 10:32 AM, Daniel Iwan 
>> >> wrote:
>> >> > Thanks Brian for quick response.
>> >> >
>> >> > As a side question, what is the best way to delete such an object
>> >> > i.e.
>> >> > once
>> >> > I know one of the siblings has 'deleted' flag true because I fetched
>> >> > it?
>> >> > Should I just use DomainBucket.delete(key) without providing any
>> >> > vclock?
>> >> > Would it wipe it from Riak or create yet another sibling?
>> >>
>> >> You should always use vclocks when possible, which in this case it is.
>> >> There are additional issues around doing the delete without a vclock
>> >> when there's concurrently a store operation occurring.
>> >>
>> >> Ideally you should look at why you're getting that tombstone sibling.
>> >> If it's simply a case of high write concurrency and you're using
>> >> vclocks with your writes, then there's not much you can do except
>> >> resolve it later (without changing how you're using the DB)... but
>> >> usually these things are caused by writes without a vclock.
>> >>
>> >> Thanks,
>> >> - Roach
>> >>
>> >>
>> >>
>> >>

Re: Java client blocked at shutdown?

2013-11-05 Thread Brian Roach
That particular chunk of the old underlying PB client is ugly and
needs to die in a fire, but it shouldn't be possible for it to cause
Tomcat to get stuck.

That Timer is set to be a daemon therefore it can't prevent the JVM
from exiting. On top of that the only time it has a task scheduled is
during a streaming operation.

I suspect you're just seeing that thread still there because another
thread is the problem, and since it's a daemon it's going to be around
until ... the JVM isn't.

- Roach

On Tue, Nov 5, 2013 at 7:05 AM, Guido Medina  wrote:
> You may be right; we are still investigating 2 other threads, but it was
> worth adding this one to the list just in case. Daemon threads by contract
> should go down with no issues when the JVM is going down.
>
> Guido.
>
>
> On 05/11/13 13:59, Konstantin Kalin wrote:
>
> Strange. If you call shutdown on the Riak client it shouldn't stop Tomcat's
> shutdown anymore. That is what I learned from the source code. I called the
> method in a Servlet listener and never had issues after that. Before, I had
> behavior similar to what you have.
>
> Thank you,
> Konstantin.
>
> On Nov 5, 2013 5:31 AM, "Guido Medina"  wrote:
>>
>> That's done already. I'm looking at the source now; not sure if the
>> following timer needs to be cancelled when the Riak client's shutdown method
>> is called:
>>
>>
>> public abstract class RiakStreamClient<T> implements Iterable<T> {
>>
>> static Timer TIMER = new Timer("riak-stream-timeout-thread", true);
>> ...
>> ...
>> }
>>
>> Guido.
>>
>> On 05/11/13 13:29, Konstantin Kalin wrote:
>>
>> You need to call shutdown method of Riak client when you are stopping your
>> application.
>>
>> Thank you,
>> Konstantin.
>>
>> On Nov 5, 2013, at 5:06, Guido Medina  wrote:
>>
>> Sorry, I meant "stopping Tomcat from shutting down properly"...I must have
>> been thinking of some FPS night game.
>>
>> On 05/11/13 13:04, Guido Medina wrote:
>>
>> Hi,
>>
>> We are tracing some threads at our webapp which are stopping Tomcat from
>> shooting down properly, one of them seems to be related with Riak Java
>> client, here is the repeating stack trace once all services have been
>> stopped properly:
>>
>> Thread Name: riak-stream-timeout-thread
>> State: in Object.wait()
>> Java Stack trace:
>> at java.lang.Object.wait(Native Method)
>> - waiting on [0x0004bb4001a0] (a java.util.TaskQueue)
>> at java.lang.Object.wait(Object.java:503)
>> at java.util.TimerThread.mainLoop(Timer.java:526)
>> - locked [0x0004bb4001a0] (a java.util.TaskQueue)
>> at java.util.TimerThread.run(Timer.java:505)
>>
>>
>> Thanks for the help,
>>
>> Guido.
>>
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
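
A minimal sketch of the servlet-listener approach Konstantin describes,
assuming the 1.x IRiakClient API with its shutdown() method; the class name
and wiring here are illustrative:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.RiakException;
import com.basho.riak.client.RiakFactory;

public class RiakLifecycleListener implements ServletContextListener {
    private IRiakClient client;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        try {
            client = RiakFactory.pbcClient(); // default host/port assumed
            sce.getServletContext().setAttribute("riakClient", client);
        } catch (RiakException e) {
            throw new IllegalStateException("could not connect to Riak", e);
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Releases the client's connection pool and worker threads so the
        // container can stop without lingering non-daemon threads.
        if (client != null) {
            client.shutdown();
        }
    }
}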


Re: Timeout problems, Riak Python Client with protocol buffers

2013-11-05 Thread Brian Roach
On Tue, Nov 5, 2013 at 1:20 PM, finkle mcgraw  wrote:

> There's a load balancer between the server running the Riak Python Client
> and the actual Riak nodes. Perhaps this socket error is related to some
> configuration of that load balancer?

That's what it looks like. Normally the PB connection to Riak is only
ever closed from the server side if a node were to go down; we don't
time out idle PB connections.

I'd have to dig into the python code but it looks like write failures
are ignored / don't trigger an exception. So, when the client goes to
read the expected response from the socket it discovers the socket is
closed. The error is saying it was expecting 4 bytes (the first 4
bytes of a response are a 32-bit int that represents the length of the
message) and it received 0 (the socket had been closed from the remote
side).

- Roach
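
For illustration, a minimal sketch in Java of the framing described above
(the Python client reads the same format); the method name and error wording
are illustrative:

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class PbFrameReader {
    // Reads one length-prefixed Riak PB frame from the stream.
    static byte[] readFrame(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        final int length;
        try {
            length = din.readInt(); // the 4-byte big-endian length prefix
        } catch (EOFException e) {
            // Zero bytes arrived where 4 were expected: the remote side
            // (here, the load balancer) closed the socket.
            throw new IOException("connection closed by remote side", e);
        }
        byte[] frame = new byte[length];
        din.readFully(frame); // message code byte + encoded protobuf message
        return frame;
    }
}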

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Keys that won't disappear from indexes

2013-11-05 Thread Brian Roach
Worth noting here: the current Java client is entirely UTF-8 centric
and explicitly converts those bytes to UTF-8 strings, so yes ...
that's probably an issue here if I'm understanding things correctly.

Almost everything is copied to/from the protocol buffer message to
Java Strings using the ByteString.copyFromUtf8() and
ByteString.toStringUtf8() methods.

This is actually something that is addressed in the new 2.0 Java
client Dave and I are working on.

Thanks,
- Roach
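
A minimal sketch of the problem using plain JDK APIs rather than the client's
ByteString calls (the key bytes are made up): round-tripping non-UTF-8 bytes
through a UTF-8 String is lossy, so distinct binary keys can collapse into
the same string:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Utf8RoundTripDemo {
    public static void main(String[] args) {
        // int_to_bin style keys are raw big-endian integer bytes, not UTF-8.
        byte[] original = {0x00, 0x00, (byte) 0x9C, 0x40};
        // What a UTF-8-centric client effectively does on the way in and out:
        String asString = new String(original, StandardCharsets.UTF_8);
        byte[] roundTripped = asString.getBytes(StandardCharsets.UTF_8);
        // The invalid byte 0x9C is replaced with U+FFFD, so the bytes differ.
        System.out.println(Arrays.equals(original, roundTripped)); // false
    }
}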

On Tue, Nov 5, 2013 at 5:40 PM, Toby Corkindale
 wrote:
> On 06/11/13 11:30, Evan Vigil-McClanahan wrote:
>>
>> You can replace int_to_bin with int_to_str to make it easier to debug
>> in the future, I suppose.  I am not sure how to get them to be fetched
>> as bytes without perhaps altering the client.
>>
>> You could just attach to the console and run whatever listing command
>> you're running there, which would give you the answer as unfiltered
>> erlang binaries, which are easy to understand.
>
>
> Ah, I'm really not familiar enough with Erlang and Riak to be doing that.
> Which API applies to console commands? I'll take a look. (Is it just the
> same as the Erlang client?)
>
>
>
>> Is this easily replicable on a new cluster?
>
>
> I think it should be -- the only difference from the default configuration is
> that LevelDB is configured as the default backend.
> Run basho_bench with the pbc-client test to generate the initial keys and
> you should be set.
>
>
> T
>
>> On Tue, Nov 5, 2013 at 4:17 PM, Toby Corkindale
>>  wrote:
>>>
>>> Hi Evan,
>>> These keys were originally created by basho-bench, using:
>>> {key_generator, {int_to_bin, {uniform_int, 10000}}}.
>>>
>>> Of the 10k keys, it seems half could be removed, but not the other half.
>>>
>>> Now I've tried storing objects with the same keys as the un-deleteable
>>> ones, waiting a minute, and then deleting them again... this doesn't seem
>>> to help!
>>>
>>> I don't know if it's significant, but I'm working with the Java client
>>> here (protocol buffers). I note that the bad keys are basically just
>>> bytes, not actual ASCII strings, and they do contain nulls.
>>>
>>> Actually, here's something I just noticed -- the keys I'm getting from
>>> the index are repeating! It's the same 39 keys, repeated 128 times.
>>>
>>> O.o
>>>
>>> Are there any known bugs in the PBC interface when it comes to binary
>>> keys? I know the HTTP interface just crashes out completely.
>>>
>>> I'm fetching the keys in a manner that returns strings; is there a way to
>>> fetch them as bytes? Maybe that would work better; I'm wondering if the
>>> client is attempting to convert the bytes into Unicode strings and
>>> dropping invalid characters?
>>>
>>>
>>> On 05/11/13 03:44, Evan Vigil-McClanahan wrote:


 Hi Toby.

 It's possible, since they're stored separately, that the objects were
 deleted but the indices were left in place because of some error (e.g.
 the operation failed for some reason between the object removal and
 the index removal).  One of the things on the feature list for the
 next release is AAE (active anti-entropy) of index values, which should take care of this
 case.  This is really rare, but not unknown.  It'd be interesting to
 know how you ended up with so many.

 In the mean time, the only way I can think of to get rid of them
 (other than deleting them from the console, which would require taking
 nodes down and a lot of manual effort), would be to write another
 value that would have the same index, then delete it, which should
 normally succeed.

 I'll ask around to see if there is anything that might work better.
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
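
A minimal sketch of the workaround Evan suggests (store a value at the stale
key, then delete it so the index entry is cleaned up along with the object),
using the 1.x Java client; bucket and key names are illustrative, and the
UTF-8 caveat above still applies to binary keys:

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.bucket.Bucket;

public class StaleIndexCleanup {
    public static void main(String[] args) throws Exception {
        IRiakClient client = RiakFactory.pbcClient();
        try {
            Bucket bucket = client.fetchBucket("test").execute();
            // Re-create the object so a fresh index entry is written ...
            bucket.store("stale-key", "placeholder").execute();
            // ... then delete it; object and index entry should go together.
            bucket.delete("stale-key").execute();
        } finally {
            client.shutdown();
        }
    }
}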

