Re: Deserialising nested JSON into a Java object through the Java client

2015-04-07 Thread Brian Roach
Santi,

There is nothing in the Riak Java client to do that, no.

The result of a search query over the Riak protocol buffers API
returns the fields you queried for as a series of key/value pairs,
with the value always being a string. This is unfortunate when it
comes to mapping those values back to a class (or just serializing
them to JSON) if there were any non-string values (numbers or null).

The Java client presents a returned document as a Map<String,
List<String>> for this reason, the list being required as there could
be multiple values for a single key.

That said, it shouldn't be too difficult to write a custom
deserializer for Jackson if you know what the results are going to
look like in advance. If you could show a dump of the Map being
returned by the Riak client we may be able to help with that. Another
option would be to use a Solr client and query Solr directly rather
than going through Riak.
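
If you go the Jackson route, something along these lines tends to work
(untested sketch - the Product class, its fields, and the flattening
rules are made up for illustration and will depend on your schema):

import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SearchResultMapper
{
    private static final ObjectMapper mapper = new ObjectMapper();

    // Hypothetical POJO matching the indexed fields.
    public static class Product
    {
        public String name;
        public Integer price;
    }

    public static Product fromSearchDoc(Map<String, List<String>> doc)
    {
        // Collapse the single-element lists the client hands back so
        // Jackson can bind the flattened document to the POJO.
        Map<String, Object> flat = new HashMap<String, Object>();
        for (Map.Entry<String, List<String>> e : doc.entrySet())
        {
            List<String> v = e.getValue();
            flat.put(e.getKey(), v.size() == 1 ? v.get(0) : v);
        }
        // convertValue coerces numeric strings ("42") into numeric
        // fields; nulls may still need a custom deserializer.
        return mapper.convertValue(flat, Product.class);
    }
}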

Thanks,
Brian Roach




On Tue, Apr 7, 2015 at 8:32 AM, Santi Kumar  wrote:
> Hi
> We have a composite POJO which we are indexing into Riak Search. If I look at
> the structure of the object in the Solr web console, it shows as nested JSON.
> When we query through the Java client it returns a Map, and we are having
> difficulties converting it into a Java object. Is there a way in the
> riak-java client to return a POJO instead of a Map, without us having to
> deal with Jackson mappers?
>
> Thanks
> Santi
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [Announcement] Official Riak Node.js client released.

2015-04-02 Thread Brian Roach
Jose,

https://github.com/basho/riak-nodejs-client/pull/35

We'll get that merged pronto.  Will be included in v1.0.2

Thanks!
- Brian Roach

On Thu, Apr 2, 2015 at 7:18 AM, Brian Roach  wrote:
> Jose,
>
> See: http://basho.github.io/riak-nodejs-client/classes/RiakObject.html
>
> As noted in the docs for StoreValue and FetchValue, you can use
> RiakObject instead of a plain string or JS object, and metadata is the
> specific reason.
>
> That said, it would appear I forgot about links as link walking is a
> deprecated feature in Riak 2.0. I apologize for the oversight and will
> add them.
>
> As for explicit examples beyond the normal API docs, our various pages
> on docs.basho.com are being updated to include node.js and will be
> done soon.
>
> Thanks!
> - Brian Roach
>
>
>
>
>
> On Thu, Apr 2, 2015 at 5:47 AM, Jose G. Quenum
>  wrote:
>> Dear Brian,
>> Thank you very much for sharing this link. I have been looking for a riak 
>> client for node.js that is compatible with riak 2.0.
>> However, having glanced at it I noticed that there was no method to access
>> the meta information. This would be useful for link manipulation, for example.
>> The meta information could also be useful when updating an object. As a
>> disclaimer here, I should mention that I have been using riak-js to access 
>> riak 1.4. Now I'd like to transition to riak 2.0 and I am looking for the 
>> right tools.
>>
>> Overall, where can one find more documentation and more concrete examples
>> about how to manipulate Riak with this client?
>> Thanks & regards,
>> Jose
>>
>> Sent from my iPad
>>
>>> On Apr 2, 2015, at 2:36 AM, Brian Roach  wrote:
>>>
>>> Greetings Riak Users!
>>>
>>> Today we released the official Node.js client for Riak.
>>>
>>> It's available via npm:
>>>
>>> https://www.npmjs.com/package/basho-riak-client
>>>
>>> The github repo can be found here:
>>>
>>> https://github.com/basho/riak-nodejs-client
>>>
>>> API docs are published here:
>>>
>>> http://basho.github.io/riak-nodejs-client/classes/Client.html
>>>
>>> Thanks!
>>> - Brian Roach
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [Announcement] Official Riak Node.js client released.

2015-04-02 Thread Brian Roach
Jose,

See: http://basho.github.io/riak-nodejs-client/classes/RiakObject.html

As noted in the docs for StoreValue and FetchValue, you can use
RiakObject instead of a plain string or JS object, and metadata is the
specific reason.

That said, it would appear I forgot about links as link walking is a
deprecated feature in Riak 2.0. I apologize for the oversight and will
add them.

As for explicit examples beyond the normal API docs, our various pages
on docs.basho.com are being updated to include node.js and will be
done soon.

Thanks!
- Brian Roach





On Thu, Apr 2, 2015 at 5:47 AM, Jose G. Quenum
 wrote:
> Dear Brian,
> Thank you very much for sharing this link. I have been looking for a riak 
> client for node.js that is compatible with riak 2.0.
> However, having glanced at it I noticed that there was no method to access
> the meta information. This would be useful for link manipulation, for example.
> The meta information could also be useful when updating an object. As a
> disclaimer here, I should mention that I have been using riak-js to access 
> riak 1.4. Now I'd like to transition to riak 2.0 and I am looking for the 
> right tools.
>
> Overall, where can one find more documentation and more concrete examples
> about how to manipulate Riak with this client?
> Thanks & regards,
> Jose
>
> Sent from my iPad
>
>> On Apr 2, 2015, at 2:36 AM, Brian Roach  wrote:
>>
>> Greetings Riak Users!
>>
>> Today we released the official Node.js client for Riak.
>>
>> It's available via npm:
>>
>> https://www.npmjs.com/package/basho-riak-client
>>
>> The github repo can be found here:
>>
>> https://github.com/basho/riak-nodejs-client
>>
>> API docs are published here:
>>
>> http://basho.github.io/riak-nodejs-client/classes/Client.html
>>
>> Thanks!
>> - Brian Roach
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


[Announcement] Official Riak Node.js client released.

2015-04-01 Thread Brian Roach
Greetings Riak Users!

Today we released the official Node.js client for Riak.

It's available via npm:

https://www.npmjs.com/package/basho-riak-client

The github repo can be found here:

https://github.com/basho/riak-nodejs-client

API docs are published here:

http://basho.github.io/riak-nodejs-client/classes/Client.html

Thanks!
- Brian Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Client update vs store

2015-02-27 Thread Brian Roach
This was due to the UpdateValueFuture not checking for the exception
when the get() methods were called. Just fixed this, it'll be in the
very-soon-to-be-cut 2.0.1 release.

https://github.com/basho/riak-java-client/pull/503

Sync calls are just a wrapper around the async call that calls get() -
you can currently work around this by calling UpdateValue async:

RiakFuture<UpdateValue.Response, Location> future =
    client.executeAsync(updateOp);

future.await();
if (future.isSuccess()) {
    ...
} else {
    ...
}

Thanks,
- Roach

On Tue, Feb 3, 2015 at 3:39 PM, Cosmin Marginean  wrote:
> I have an edge case where consistency is favoured over availability, so I’m
> using a "consistent": true bucket type for a very specific operation.
> I was testing my setup, so I ended up faking a failure by
> deliberately using an incorrect vClock.
>
> Using StoreValue, the (second) write fails as expected
>
>   FetchValue fetchOp = new FetchValue.Builder(location(id)).build();
>   VClock vClock = client.execute(fetchOp).getVectorClock();
>   //fiddle with vClock or allow the first write to finish before the next
> step
>   StoreValue storeOp = new StoreValue.Builder(value)
>   .withVectorClock(vClock)
>   .withLocation(location(id)).build();
>   StoreValue.Response response = client.execute(storeOp);
>
>
> Caused by: com.basho.riak.client.core.netty.RiakResponseException: failed
> at
> com.basho.riak.client.core.netty.RiakResponseHandler.channelRead(RiakResponseHandler.java:52)
> at
> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:340)
> at
> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:326)
>
>
> I managed to override the UpdateValue class to simulate a similar failure
> scenario (so I don’t have to do the fetch + store myself). I was expecting a
> similar result, however, after some analysis I realised that an exception is
> being swallowed somewhere.
> I believe the trouble might be around this area:
> https://github.com/basho/riak-java-client/blob/develop/src/main/java/com/basho/riak/client/api/commands/kv/UpdateValue.java#L581
>
> The exception is not allowed to bubble up to the client code. Additionally,
> another net effect of this seems to be that a null response is returned here
>
>   UpdateValue.Response res = client.execute(updateOp);
>
> So a call to res.wasUpdated() will produce an NPE!
>
> The way I see it, this code needs to either
> 1) return not-null res and res.wasUpdated() as false
> or
> 2) allow the exception to bubble up
>
> Please let me know your thoughts
>
> Thank you
> Cos
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: NoNodesAvailableException

2015-02-25 Thread Brian Roach
Ricardo,

That exception gets raised by the DefaultNodeManager when all RiakNode
instances are reporting that no connections are available from the
pool - e.g. a max number of connections have been defined and they're
all out of the pool and in use.

Not sure why that would be happening; if connections get closed by the
remote peer the socket is discarded and the permit is returned to the
pool's controlling semaphore.

Could you enable logging in the client (set to debug)? The RiakNode
will spit out a ton of info that should let us see what's going on
with the connections.

(The client uses SLF4J so you just need to configure a logger in the
dependencies)

Thanks,
- Roach


On Wed, Feb 25, 2015 at 2:54 PM, Ricardo Mayerhofer
 wrote:
> Hi all,
> We're deploying a new application using Riak to store user cart during
> purchase flow.
>
> The application runs fine, however after a few hours all Riak operation
> fails on the client side, even if the cluster is up and running ok.
>
> The full stack exception is pasted at the end of this e-mail
> (com.basho.riak.client.core.NoNodesAvailableException)
>
> If the application is restarted it gets back working.
>
> We're using Riak Client 2.0 along with Riak 1.4.10. We're using protocol
> buffer with a TPC Load Balancer in front of Riak Cluster.
>
> The load balancer has a Idle Period Time, so after that time it closes
> connection (60 seconds).
>
> It seem some sort of connection leak.
>
> Any help is appreciated. Thanks
>
> <25-02-2015 19:12:09> 
>   <[ERROR]
> [com.b2winc.cart.riak.ShoppingCartRiakRepository] [Error]
> ---Stack : com.basho.riak.client.core.NoNodesAvailableException :
> java.util.concurrent.ExecutionException at
> com.basho.riak.client.core.FutureOperation.get(FutureOperation.java:260)
>   at
> com.basho.riak.client.api.commands.CoreFutureAdapter.get(CoreFutureAdapter.java:52)
>   at com.basho.riak.client.api.RiakCommand.execute(RiakCommand.java:89)
>   at com.basho.riak.client.api.RiakClient.execute(RiakClient.java:293)
>   at
> com.b2winc.cart.riak.ShoppingCartRiakRepository.isDependencyWorking(ShoppingCartRiakRepository.java:123)
>   at
> com.b2winc.cart.health.HealthService.getDependencies(HealthService.java:21)
>   at
> com.b2winc.cart.controller.HealthController.health(HealthController.java:24)
>   at sun.reflect.GeneratedMethodAccessor474.invoke(Unknown Source)
>   at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at
> org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:215)
>   at
> org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
>   at
> org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
>   at
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:749)
>   at
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:689)
>   at
> org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83)
>   at
> org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:938)
>   at
> org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:870)
>   at
> org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961)
>   at
> org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:852)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:618)
>   at
> org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:725)
>   at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:291)
>   at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
>   at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
>   at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at com.ocpsoft.pretty.PrettyFilter.doFilter(PrettyFilter.java:145)
>   at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
>   at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at
> com.b2winc.checkout.web.ServerErrorFilter.doFilter(ServerErrorFilter.java:24)
>   at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
>   at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> 

Re: Riak Java Client and Links

2015-01-27 Thread Brian Roach
Cosmin,

To use links with a POJO, there's an annotation:

http://basho.github.io/riak-java-client/2.0.0/com/basho/riak/client/api/annotations/RiakLinks.html
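
A minimal sketch of what that looks like (class and field names are
made up for illustration):

import com.basho.riak.client.api.annotations.RiakKey;
import com.basho.riak.client.api.annotations.RiakLinks;
import com.basho.riak.client.core.query.links.RiakLink;

import java.util.ArrayList;
import java.util.Collection;

public class Person
{
    @RiakKey
    public String key;

    public String name;

    // The converter populates this collection on fetch and writes it
    // back out on store, so the links ride along with the POJO.
    @RiakLinks
    public Collection<RiakLink> links = new ArrayList<RiakLink>();
}

A link itself is just bucket/key/tag, e.g. new RiakLink("people",
"bob", "friend").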

- Roach

On Tue, Jan 27, 2015 at 4:29 AM, Cosmin Marginean  wrote:
> I am implementing a custom way to handle Riak Links using a Java Client.
> Looking at the samples available
> (https://github.com/basho/riak-java-client/wiki/Using-links which is
> outdated) it seems that it’s not entirely straightforward to use RiakLinks
> with POJOs and automatic conversion. More importantly, when one wants to use
> RiakLinks, they have to use RiakObject and manually serialise the object.
>
> I’d like to know if I’m missing something in the docs or if there are
> alternative practices for this usecase (POJO + “manually" handled links)
>
> Thank you
> Cos
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Link Walking and Security

2015-01-20 Thread Brian Roach
Cosmin -

Link walking is deprecated as of Riak 2.0 and will be removed in a
future version.

As to whether something similar will replace it, as of now I do not
believe we have anything on the roadmap, no.

Thanks,
- Roach



On Tue, Jan 20, 2015 at 10:24 AM, Cosmin Marginean  wrote:
> (Apologies if this is a recurring topic, but I haven’t read a clear
> statement yet in relation to this)
>
> Using Riak, I sometimes feel that link walking might be a corner stone for
> certain data modelling techniques. The Riak documentation though states
> clearly that this is not feasible while also enabling security (another
> cornerstone for certain business cases):
> http://docs.basho.com/riak/latest/ops/running/authz/
>
> I was wondering if there’s any plans to retire link walking and replaces it
> with something else (that *is* compatible with the security design) or if
> there are alternatives that could help fill this gap.
>
> Thanks in advance
> Cos
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Multiget java client performance

2015-01-20 Thread Brian Roach
Santi -

The core of the 2.0 Java client uses Netty. There's a fixed number of
worker threads that process sockets using non-blocking IO (w/
polling/select).

Due to the synchronous nature of the Riak API, each fetch operation
requires its own socket connection and we can't pipeline.

As noted in the Javadoc for MultiFetch
(http://basho.github.io/riak-java-client/2.0.0/com/basho/riak/client/api/commands/kv/MultiFetch.html)
there is a default of 10 simultaneous fetches in flight at once. This
is pretty conservative but ... that's what defaults are for :)

How does it scale? Depends on the hardware, network latency, etc. In
the end ... you've only got so many threads and you prob don't want
1000 sockets being created at once. Best suggestion is to adjust up
the number of simultaneous inflight and measure it.
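
For example (untested sketch - the bucket, keys, and the number 50 are
made up; measure before and after):

import com.basho.riak.client.api.RiakClient;
import com.basho.riak.client.api.commands.kv.FetchValue;
import com.basho.riak.client.api.commands.kv.MultiFetch;
import com.basho.riak.client.core.RiakFuture;
import com.basho.riak.client.core.query.Location;
import com.basho.riak.client.core.query.Namespace;

public class MultiFetchTuning
{
    public static void main(String[] args) throws Exception
    {
        RiakClient client = RiakClient.newClient("127.0.0.1");
        Namespace ns = new Namespace("audit");

        MultiFetch.Builder builder =
            new MultiFetch.Builder().withMaxInFlight(50); // default is 10
        for (int i = 0; i < 1000; i++)
        {
            builder.addLocation(new Location(ns, "key-" + i));
        }

        MultiFetch.Response resp = client.execute(builder.build());
        for (RiakFuture<FetchValue.Response, Location> f : resp.getResponses())
        {
            FetchValue.Response r = f.get(); // blocks until that fetch is done
        }
        client.shutdown();
    }
}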

Thanks,
- Roach



On Mon, Jan 19, 2015 at 6:43 PM, Santi Kumar  wrote:
> Brian,
> I'm using Riak Client 2.0.0 and Riak 2.0.2.
>
>
>
> On Tue, Jan 20, 2015 at 1:54 AM, Brian Roach  wrote:
>>
>> Santi -
>>
>> Which version of the Java client?
>>
>> Thanks,
>> - Roach
>>
>> On Mon, Jan 19, 2015 at 7:36 AM, Santi Kumar  wrote:
>> > Hi
>> >
>> > We are using java client for accessing Riak KV/Search. For some use
>> > cases,
>> > we go to search, get the keys and access the data from Riak. There might
>> > be
>> > a case where we might get 1000's of keys. So want to understand what is
>> > the
>> > impact of that on multiget and how does it scale.
>> >
>> >
>> > We were using RDBMS and Elastic search earlier. Now we moved all the
>> > data to
>> > Riak KV and Search. We used to query all the audit entries from ES as we
>> > used to store the complete data there. Applicaitons flows used to get
>> > the
>> > data from RDBMS. Now as we replaced everything with Riak KV and search,
>> > We
>> > need to go to Riak Search for auditing reports. If we store all the
>> > objects
>> > in Riak Search, how does it impact ?
>> >
>> > Thanks
>> > Santi
>> >
>> > ___
>> > riak-users mailing list
>> > riak-users@lists.basho.com
>> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> >
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Multiget java client performance

2015-01-19 Thread Brian Roach
Santi -

Which version of the Java client?

Thanks,
- Roach

On Mon, Jan 19, 2015 at 7:36 AM, Santi Kumar  wrote:
> Hi
>
> We are using java client for accessing Riak KV/Search. For some use cases,
> we go to search, get the keys and access the data from Riak. There might be
> a case where we might get 1000's of keys. So want to understand what is the
> impact of that on multiget and how does it scale.
>
>
> We were using RDBMS and Elastic search earlier. Now we moved all the data to
> Riak KV and Search. We used to query all the audit entries from ES as we
> used to store the complete data there. Applicaitons flows used to get the
> data from RDBMS. Now as we replaced everything with Riak KV and search, We
> need to go to Riak Search for auditing reports. If we store all the objects
> in Riak Search, how does it impact ?
>
> Thanks
> Santi
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Client and clobber update

2015-01-14 Thread Brian Roach
Cosmin -

It would appear that the crazy generics involved are indeed broken
when it comes to UpdateValue's "clobberUpdate" - it's not pulling
out the type correctly. I'll have to try and figure out why that is.
Type erasure will make you weep ;)

Two ways to deal with it at the moment:

A) Pass in a TypeReference explicitly:

TypeReference<SomeEntity> tRef = new TypeReference<SomeEntity>(){};
...
.withUpdate(UpdateValue.Update.clobberUpdate(entity), tRef)
...

B) Create your own class that subclasses UpdateValue.Update and does
the same thing as a "clobber update":

public static class UpdateEntity extends UpdateValue.Update<SomeEntity>
{
private final SomeEntity entity;

public UpdateEntity(SomeEntity e)
{
this.entity = e;
}

@Override
public SomeEntity apply(SomeEntity original)
{
return entity;
}

}

...
.withUpdate(new UpdateEntity(entity))
...



I've tested both of these solutions and they both work.

Thanks and sorry for the problem,
- Roach

On Wed, Jan 14, 2015 at 1:39 PM, Cosmin Marginean  wrote:
> I’m doing a fairly “by the book” clobber update (store and fetch below work
> fine) on an entity using the Java client. I’m seeing an error that happens
> at type-inference time within the Riak Java client. I’m pasting below the
> exact test that I’m using to generate this, as well as the stacktrace.
> Please let me know if I’m missing something or if it’s a known bug.
>
> Thank you
> Cosmin
>
> @Test
> public void testRiakUpdate() throws Exception {
> RiakNode node = new
> RiakNode.Builder().withRemoteAddress("192.168.168.2").withRemotePort(8087).build();
> RiakCluster cluster = new RiakCluster.Builder(node).build();
> cluster.start();
> RiakClient client = new RiakClient(cluster);
>
> SomeEntity entity = new SomeEntity();
> entity.setName("John Doe");
> entity.setDescription("Some Description");
> Location location = new Location(new Namespace("bucket"), "entity-key");
>
> // Store
> StoreValue storeOp = new
> StoreValue.Builder(entity).withLocation(location).build();
> client.execute(storeOp);
>
> // Fetch
> FetchValue fetchOp = new FetchValue.Builder(location).build();
> entity = client.execute(fetchOp).getValue(SomeEntity.class);
>
> // Update
> entity.setName("New name");
> UpdateValue updateOp = new UpdateValue.Builder(location)
> .withFetchOption(FetchValue.Option.DELETED_VCLOCK, true)
> .withUpdate(UpdateValue.Update.clobberUpdate(entity))
> .build();
> client.execute(updateOp).getValue(SomeEntity.class);
> }
>
> private static class SomeEntity {
> private String name;
> private String description;
>
> public String getName() {
> return name;
> }
>
> public void setName(String name) {
> this.name = name;
> }
>
> public String getDescription() {
> return description;
> }
>
> public void setDescription(String description) {
> this.description = description;
> }
> }
>
>
>
>
> java.lang.ClassCastException:
> sun.reflect.generics.reflectiveObjects.TypeVariableImpl cannot be cast to
> java.lang.Class
>   at
> com.basho.riak.client.api.commands.kv.UpdateValue$1.handle(UpdateValue.java:149)
> ~[riak-client-2.0.0.jar:na]
>   at
> com.basho.riak.client.api.commands.ListenableFuture.notifyListeners(ListenableFuture.java:78)
> ~[riak-client-2.0.0.jar:na]
>   at
> com.basho.riak.client.api.commands.CoreFutureAdapter.handle(CoreFutureAdapter.java:120)
> ~[riak-client-2.0.0.jar:na]
>   at
> com.basho.riak.client.core.FutureOperation.fireListeners(FutureOperation.java:131)
> ~[riak-client-2.0.0.jar:na]
>   at
> com.basho.riak.client.core.FutureOperation.setResponse(FutureOperation.java:170)
> ~[riak-client-2.0.0.jar:na]
>   at com.basho.riak.client.core.RiakNode.onSuccess(RiakNode.java:823)
> ~[riak-client-2.0.0.jar:na]
>   at
> com.basho.riak.client.core.netty.RiakResponseHandler.channelRead(RiakResponseHandler.java:58)
> ~[riak-client-2.0.0.jar:na]
>   at
> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:340)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
>   at
> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:326)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
>   at
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:155)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
>   at
> io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:108)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
>   at
> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:340)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
>   at
> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:326)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
>   at
> io.netty.channel.Defaul

Re: Riak API for Java

2014-11-12 Thread Brian Roach
On Wed, Nov 12, 2014 at 8:20 AM, Ebbinge  wrote:
> run:
> java.io.IOException: Error receiving outputs: normal
>
> That's the error I am receiving :/ Could you guide me to make it work with
> my "Producto" class? I am a total beginner to the Riak API for Java.

The error message you're receiving is coming from Riak itself, not the
client. You're discarding the stack trace in your code by printing
`Ex.getMessage()` instead of just `Ex`.
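
i.e. something like:

catch (Exception ex)
{
    // print the whole stack trace, not just the top-level message
    ex.printStackTrace();
}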

Your code, as it is written now, is fine. You've got some problem on
the Riak side that you'll need to diagnose by looking in the log files.

Here's a gist showing that your code isn't the problem:

https://gist.github.com/broach/d0d79df429f5ff725c8a

Thanks,
Roach



>
> Thanks in advance,
> Edwin.
>
>
>
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/Riak-API-for-Java-tp4032055p4032059.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: MapReduce Java RIAK API

2014-11-11 Thread Brian Roach
Jackson (the JSON library we use and the thing throwing that error)
requires your class to have a no-arg constructor so it can instantiate
it via reflection.

See: 
http://stackoverflow.com/questions/7625783/jsonmappingexception-no-suitable-constructor-found-for-type-simple-type-class
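
In your case that just means making sure the class has a default
constructor (sketch, using the fields from your post):

public class Producto
{
    public String ID;
    public String Nombre;
    public String Descripcion;
    public String Vendedor;
    public String Url;
    public String Precio;

    // Required by Jackson; it's implicit only when no other
    // constructor is declared.
    public Producto() {}
}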

Thanks,
- Roach

On Tue, Nov 11, 2014 at 11:43 AM, Ebbinge  wrote:
> Hello, I am trying to get the MapReduce to run using my cluster. I have a
> 3-node cluster[192.168.0.41,192.168.0.42,192.168.0.43] up and running.
>
> I am trying with this example:
>
> try{
> IRiakClient client = RiakFactory.pbcClient("192.168.0.41",
> 8087);
> Bucket myBucket = client.fetchBucket("Productos").execute();//I
> ALREADY HAVE DATA IN A BUCKET CALLED 'PRODUCTOS'.
> BucketMapReduce m = client.mapReduce("Productos");
> m.addMapPhase(new NamedJSFunction("Riak.mapValuesJson"), true);
> MapReduceResult result = m.execute();
> System.out.println(result.getResultRaw());
> Collection<Producto> tmp = result.getResult(Producto.class);
> for (Producto p : tmp) {
> System.out.println(p.Nombre);
> }
> client.shutdown();
> }
>
> catch(Exception Ex){
> System.out.println(Ex.getMessage());
> }
>
>
> I am getting the following error message:
> com.fasterxml.jackson.databind.JsonMappingException: No suitable constructor
> found for type [simple type, class Clases.Producto]: can not instantiate
> from JSON object (need to add/enable type information?)
>  at [Source: [B@9e493eb; line: 1, column: 2]
>
> I have a "Producto" class which has the following:
>
> public String ID;
> public String Nombre;
> public String Descripcion;
> public String Vendedor;
> public String Url;
> public String Precio;
>
> The MAIN objective I want to achieve by using MapReduce is to get the top ten
> words used to describe the products in the cluster, i.e. MapReducing
> the String Descripcion.
>
>
>
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/MapReduce-Java-RIAK-API-tp4032050.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak mapreduce w/ bucket types

2014-10-30 Thread Brian Roach
Hello Cezary,

You need to supply the bucket type as part of the inputs:

{ "inputs":["sets","987"], ... }

Thanks,
- Roach

On Thu, Oct 30, 2014 at 11:51 AM, Cezary Kosko  wrote:
> Hi,
> I was trying to run a mapreduce job on a bucket of sets (the default 'sets'
> setup from Riak docs). However, running
> https://gist.github.com/cezary-ytrre/d707ee2f13911c274d69
> on a node returns [] while a simple 'curl
> localhost:8098/types/sets/buckets/987/datatypes/general'
> returns a perfectly valid record.
> Is there a way to tell Riak to query a specific bucket type? I've not found
> any in the docs so far.
>
> Kind regards,
> Cezary
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


[ANN] It's been a long road, but the new Java Client for Riak 2.0 is here.

2014-09-25 Thread Brian Roach
As of now, the new Java client for Riak v2.0 is available via Maven Central.

To use in your project:

<dependencies>
  <dependency>
    <groupId>com.basho.riak</groupId>
    <artifactId>riak-client</artifactId>
    <version>2.0.0</version>
  </dependency>
  ...
</dependencies>

Javadocs are published to: http://basho.github.io/riak-java-client/2.0.0/

To get started see:
http://basho.github.io/riak-java-client/2.0.0/com/basho/riak/client/api/RiakClient.html
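
And for the impatient, a minimal synchronous round-trip (sketch,
assuming a local node on the default protocol buffers port):

import com.basho.riak.client.api.RiakClient;
import com.basho.riak.client.api.commands.kv.FetchValue;
import com.basho.riak.client.api.commands.kv.StoreValue;
import com.basho.riak.client.core.query.Location;
import com.basho.riak.client.core.query.Namespace;

public class QuickStart
{
    public static void main(String[] args) throws Exception
    {
        RiakClient client = RiakClient.newClient("127.0.0.1");

        Location loc = new Location(new Namespace("test"), "key-1");
        client.execute(new StoreValue.Builder("hello riak")
                           .withLocation(loc).build());

        FetchValue.Response resp =
            client.execute(new FetchValue.Builder(loc).build());
        System.out.println(resp.getValue(String.class));

        client.shutdown();
    }
}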

First Q: Why did you write a whole new client with a completely different API?!?

A: It was time.

The new client design addresses a number of issues customers and OSS
users have posed over the last two years.

First and foremost ... It's built from the ground up to be
asynchronous. We're now using Netty in the core of the client. You can
now use our client both synchronously or asynchronously.

In addition, it's now designed to talk to a Riak cluster, with built
in (pluggable) load balancing, node management, etc. These are
features users have been asking for, which were near impossible to
insert into the old client.

Beyond that, our long term maintenance costs are reduced, so we can
now provide what we feel is a first rate client going forward.

Thank you,
- Brian Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to retrieve generated id in java client

2014-09-25 Thread Brian Roach
Hi Ricardo,

When using your own class you'll need to add a String field to it and
annotate it with @RiakKey. When the response comes back from Riak the
client will inject the generated key into that field.
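
A sketch of what that looks like with the API you're using (the field
name is made up):

import com.basho.riak.client.convert.RiakKey;

public class MyClass
{
    // Riak's generated key is injected here when you use
    // store(...).withoutFetch().returnBody(true)
    @RiakKey
    public String key;

    public String value;
}

// then, as in your example:
// MyClass stored = bucket.store(myobject).withoutFetch().returnBody(true).execute();
// String generatedId = stored.key;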

Thanks,
- Roach

On Thu, Sep 25, 2014 at 2:39 AM, ricardo.ekm  wrote:
> Hi,
> When saving a object with null key using riak's java client is it possible
> to retrieve the generated id?
>
> I've found how to achieve this saving a string
> (https://github.com/basho/riak-java-client/commit/7084e30de2c16e3e3d39c06969ae3fc7311b4748):
> +Bucket b = client.fetchBucket(bucketName).execute();
> +IRiakObject o = b.store(null,
> "value").withoutFetch().returnBody(true).execute();
> +
> +String k = o.getKey();
> +assertNotNull(k);
>
> However couldn't figure out how to do this a custom object
> + MyClass execute =
> bucket.store(myobject).withoutFetch().returnBody(true).execute();
> + ??
>
> Any help is appreciated. Thanks!
>
>
>
>
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/How-to-retrieve-generated-id-in-java-client-tp4031828.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 2.0.0 RC1

2014-07-21 Thread Brian Roach
On Mon, Jul 21, 2014 at 4:01 PM, Jared Morrow  wrote:
> There is a Java driver,
> http://docs.basho.com/riak/latest/dev/using/libraries/  The 2.0 support for
> that will land very soon, so keep an eye out on this list for the updated
> Java client.

As of about 5 minutes ago the new Riak Java 2.0 RC1 client is cut.

The master branch in the Java client repo reflects this version:

https://github.com/basho/riak-java-client/tree/master

I've released it to maven central, but these days it takes about 3 - 4
hours for it to be synced over to the public repository. Once it shows
up in maven central, the new artifact info is:

<dependency>
  <groupId>com.basho.riak</groupId>
  <artifactId>riak-client</artifactId>
  <version>2.0.0.RC1</version>
</dependency>

I realize the Javadoc is sparse (and missing in some places). After a
much needed break I'll be working on that for the final release.

Thanks!
- Brian Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Library - best practice

2014-05-27 Thread Brian Roach
Henning,

Yes, the Java client is designed to be used across multiple threads.
Instantiate it once and share it.

As you noted, there's options in the configuration for tuning the
internal connection pool. The client maintains a pool of connections
to the Riak node so that multi-threaded applications aren't creating a
new connection every time an operation is performed.
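
Something like this is the usual pattern (sketch; the pool size is
illustrative and should be sized for your thread count):

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.raw.pbc.PBClientConfig;

public class RiakHolder
{
    // One shared client for the whole JVM; all threads use it.
    private static final IRiakClient CLIENT = create();

    public static IRiakClient get() { return CLIENT; }

    private static IRiakClient create()
    {
        try
        {
            PBClientConfig conf = new PBClientConfig.Builder()
                .withHost("127.0.0.1")
                .withPort(8087)
                .withPoolSize(20)
                .build();
            return RiakFactory.newClient(conf);
        }
        catch (Exception e)
        {
            throw new RuntimeException(e);
        }
    }
}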

Thanks,
- Roach

On Mon, May 26, 2014 at 4:44 AM, Henning Verbeek  wrote:
> I'd like to get some advice how best to handle clients in a
> multi-threaded Java application. Riak is 1.4.8, java client library is
> 1.4.4, using the protocol buffers clients.
>
> I'm referring specifically to
> http://basho.github.io/riak-java-client/1.4.4/com/basho/riak/client/RiakFactory.html#newClient(com.basho.riak.client.raw.config.Configuration).
> Should I use one single instance of IRiakClient per JVM and share it
> between all threads that access Riak? Or should each thread obtain its
> own, new client instance when needed and terminate it again
> afterwards?
>
> If the PBConfiguration specifies a poolSize, is that being handled in
> the Factory?
>
> Thanks, Henning
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Error starting riak server

2014-05-09 Thread Brian Roach
Looking at your logs, they all show:

Error loading "erlang_js_drv": "Driver compiled with incorrect version
of erl_driver.h"

In your post you say you installed:

"Erlang - First otp_src_R14B02 and then otp_src_R15B01"

More than likely you've compiled Riak with the old version of Erlang.

As Hector mentioned, you may want to use an RPM rather than building
from source. You also want to remove R14B02 or at the very least make
sure it's not in your path.

- Roach





On Fri, May 9, 2014 at 10:39 AM, Hector Castro  wrote:
> Hi Rachana,
>
> Is there any reason why you didn't elect to install via RPM? [0] That
> path may be easier to get started with than compiling Erlang and Riak
> from source, as we bundle Erlang into the RPM.
>
> --
> Hector
>
> [0] http://docs.basho.com/riak/latest/ops/building/installing/rhel-centos/
>
> On Fri, May 9, 2014 at 5:47 AM, Rachana Shroff  wrote:
>> Hi,
>>
>> I am exploring Riak 1.4.8 and facing issue starting riak server.
>> Additional details are-
>> OS - Linux
>> Red Hat Enterprise Linux Server release 6.1 (Santiago)
>> 2.6.32-131.0.15.el6.x86_64
>>
>> Erlang - First otp_src_R14B02 and then otp_src_R15B01
>>
>> I followed all installation steps from
>> http://docs.basho.com/riak/1.3.0/tutorials/installation
>>
>> steps used for Erlang-
>>
>> wget http://erlang.org/download/otp_src_R15B01.tar.gz
>> tar zxvf otp_src_R15B01.tar.gz
>> cd otp_src_R15B01
>> ./configure && make && sudo make install
>>
>>
>> steps used for Riak-
>>
>> tar zxvf riak-1.4.8.tar.gz
>> cd riak-1.4.8
>> make rel
>>
>> After installation I am trying to start the riak server but failing to do so.
>>
>> /../riak-1.4.8/rel/riak/bin>riak start
>>
>> Error Output - Node 'riak@127.0.0.1' not responding to pings.
>>
>>
>> I tried all the below workarounds suggested on the foroums but no luck.
>>
>> 1-
>> ulimit -n
>> 10
>> 2- change the ip to machine ip in vm.args and app.config.
>> riak stop # stop the node
>> riak-admin down riak@127.0.0.1 # take it down
>> sudo rm -rf /var/lib/riak/ring/* # delete the riak ring
>> sudo sed -i "s/127.0.0.1/`hostname -i`/g" /etc/riak/vm.args # Change the
>> name in config
>> riak-admin cluster force-replace riak@127.0.0.1 riak@"`hostname -i`" #
>> replace the name
>>
>> riak start # start the node
>>
>> 3- restart server
>>
>>
>> I have attached all log files.
>>
>>
>> Would appreciate your quick support on this.
>>
>>
>> Thanks & Regards,
>> Rachana
>>
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re:

2014-05-08 Thread Brian Roach
Hi Daniel,

I suspect this is the bug fixed in PR#352 (
https://github.com/basho/riak-java-client/pull/352  )

Try upgrading to the 1.1.4 client release and see if the problem persists.

Thanks,
- Roach

On Thu, May 8, 2014 at 8:02 AM, Daniel Iwan  wrote:
> Hi
>
> I got following exception with riak Java client 1.1.3, Riak cluster 1.3.1
> I don't see any error messages in Riak's console log. Any idea what may be
> causing this?
>
> Caused by: com.basho.riak.client.RiakRetryFailedException:
> java.io.IOException: bad message code. Expected: 14 actual: 1
> at com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:79)
> at com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:81)
> at com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:81)
> at com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:81)
> at com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:53)
> at
> com.basho.riak.client.operations.DeleteObject.execute(DeleteObject.java:111)
> at com.basho.riak.client.bucket.DomainBucket.delete(DomainBucket.java:484)
> at com.basho.riak.client.bucket.DomainBucket.delete(DomainBucket.java:418)
> at server.riak.k.b(SourceFile:104)
> ... 19 more
> Caused by: java.io.IOException: bad message code. Expected: 14 actual: 1
> at com.basho.riak.pbc.RiakConnection.receive_code(RiakConnection.java:153)
> at com.basho.riak.pbc.RiakClient.delete(RiakClient.java:622)
> at com.basho.riak.pbc.RiakClient.delete(RiakClient.java:609)
> at
> com.basho.riak.client.raw.pbc.PBClientAdapter.delete(PBClientAdapter.java:222)
> at
> com.basho.riak.client.operations.DeleteObject$2.call(DeleteObject.java:106)
> at
> com.basho.riak.client.operations.DeleteObject$2.call(DeleteObject.java:104)
> at com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:72)
> ... 27 more
>
> Regards
> Daniel
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Searchable list archive?

2014-05-07 Thread Brian Roach
Howdy.

Couple ways to go about it, really.

There's a web archive at lists.basho.com[1] which is indexed by
google, so simply prefixing a google search with
"site:lists.basho.com" should work pretty well.

There's also Nabble[2], which takes the list and presents it as a
forum sort of thing.

- Roach

[1] http://lists.basho.com/pipermail/riak-users_lists.basho.com/
[2] http://riak-users.197444.n3.nabble.com/

On Wed, May 7, 2014 at 1:41 AM, Finkle Mcgraw  wrote:
> Hi!
>
> Sorry for a potential noob question: but is it possible to search the
> archives of this mailing list?
>
> BR Finkle
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: oddness when using java client within storm

2014-04-14 Thread Brian Roach
Sean -

Sadly I've not gotten to tuning anything yet in the new client ... the
terrors of pre-release :)

One thing is that by default the connection pool only keeps one
connection around (for each RiakNode in the RiakCluster) and will time
out any others after one second.

You might try bumping that up to 10 per node with the
withMinConnections() option in the RiakNode builder so that client
isn't creating new connections each time you fire off 100 requests.

Thinking about it this may be a culprit as the TCP connect is handled
synchronously when it's needed; basically, you're not getting a future
back from RiakCluster.execute() until a connection is returned from
the pool, and if a new connection needs to be made, there's that
overhead there.

I'm using all default settings in Netty in terms of threads, etc, so
it may be something there as well ... but as I said, I haven't gotten
to trying to tune for performance yet.

Thanks,
- Roach

On Mon, Apr 14, 2014 at 10:10 AM, Sean Allen
 wrote:
> Protocol Buffer.
>
>
> On Mon, Apr 14, 2014 at 11:53 AM, Russell Brown 
> wrote:
>>
>> HTTP or PB? Pretty sure the HTTP client defaults to a pool of 50
>> connections.
>>
>> On 14 Apr 2014, at 16:50, Sean Allen  wrote:
>>
>> We fire off 100 requests for the items in the batch and wait on the
>> futures to complete.
>>
>>
>> On Mon, Apr 14, 2014 at 11:40 AM, Alexander Sicular 
>> wrote:
>>>
>>> I'm not sure what "looking up entries... in batches of 100 from Riak"
>>> devolves into in the java client but riak doesn't have a native multiget. It
>>> either does 100 get ops or a [search>]mapreduce. That might inform some of
>>> your performance issues.
>>>
>>> -Alexander
>>>
>>> @siculars
>>> http://siculars.posthaven.com
>>>
>>> Sent from my iRotaryPhone
>>>
>>> > On Apr 14, 2014, at 8:26, Sean Allen 
>>> > wrote:
>>> >
>>> > I'm seeing something very odd trying to scale out part of code I'm
>>> > working on.
>>> >
>>> > It runs inside of Storm and looks up entries from a 10-node riak
>>> > cluster.
>>> > I've hit a wall that we can't get past. We are looking up entries (json
>>> > representation of a job)
>>> > in batches of 100 from Riak, each batch gets handled by a bolt in
>>> > Storm, adding more
>>> > bolts (an instance of the bolt class with a dedicated thread) results
>>> > in no increase
>>> > in performance. I instrumted the code and saw that waiting for all riak
>>> > futures to finish
>>> > increases as more bolts are added. Thinking that perhaps there was
>>> > contention around the
>>> > RiakCluster object that we were sharing per jvm, I tried giving each
>>> > bolt instance its own
>>> > cluster object and there wasn't any change.
>>> >
>>> > Note that neither changing the thread pool size given to withExecutor
>>> > nor the withExecutionAttempts value
>>> > has any impact.
>>> >
>>> > We're working off of the develop branch for the java client. We've been
>>> > using d3cc30d but I also tried with cef7570 and had the same issue.
>>> >
>>> > A simplied version of the scala code running this:
>>> >
>>> >   // called once upon bolt initialization.
>>> >   def prepare(config: JMap[_, _],
>>> >   context: TopologyContext,
>>> >   collector: OutputCollector): Unit = {
>>> > ...
>>> >
>>> > val nodes = RiakNode.Builder.buildNodes(new RiakNode.Builder, (1 to
>>> > 10).map(n => s"riak-beavis-$n").toList.asJava)
>>> > riak = new RiakCluster.Builder(nodes)
>>> >   // varying this has made no difference
>>> >   .withExecutionAttempts(1)
>>> >  // nor has varying this
>>> >   .withExecutor(new ScheduledThreadPoolExecutor(200))
>>> >   .build()
>>> > riak.start
>>> >
>>> > ...
>>> >   }
>>> >
>>> >   private def get(jobLocationId: String):
>>> > RiakFuture[FetchOperation.Response] = {
>>> > val location = new
>>> > Location("jobseeker-job-view").setBucketType("no-siblings").setKey(jobLocationId)
>>> > val fop = new
>>> > FetchOperation.Builder(location).withTimeout(75).withR(1).build
>>> >
>>> > riak.execute(fop)
>>> >   }
>>> >
>>> >   def execute(tuple: Tuple): Unit = {
>>> > val indexType = tuple.getStringByField("index_type")
>>> > val indexName = tuple.getStringByField("index_name")
>>> > val batch =
>>> > tuple.getValueByField("batch").asInstanceOf[Set[Payload]]
>>> >
>>> > var lookups: Set[(Payload, RiakFuture[FetchOperation.Response])] =
>>> > Set.empty
>>> >
>>> > // this always returns in a standard time based on batch size
>>> > time("dispatch-calls") {
>>> >   lookups = batch.filter(_.key.isDefined).map {
>>> > payload => {(payload, get(payload.key.get))}
>>> >   }
>>> > }
>>> >
>>> > val futures = lookups.map(_._2)
>>> >
>>> > // this is what takes longer and longer when more bolts are added.
>>> > // it doesnt matter what the sleep time is.
>>> > time("waiting-on-futures") {
>>> >   while (futures.count(!_.isDone) > 0) {
>>> > Thread.sleep(25L)
>>> >   }
>>> > }
>>> >
>

Re: Riak Search API: Returning matching documents

2014-03-26 Thread Brian Roach
Just want to add: driver (client) dev is listening!

Adding this to our clients is a fairly easy thing, and I'll add it to our todo list.

- Roach

On Wed, Mar 26, 2014 at 4:03 PM, Alexander Sicular  wrote:
> Agree with a lot of your points, Elias. But I've found that as a solo
> developer pushing product in my organization, and I would venture to say
> there are others like mine, Riak's ops proposition trumps some of these
> developer issues. Not having to hire ops personnel to babysit a Riak app is
> a big win for organizations that barely have money to hire a dev.
>
> If you are a developer that pushes product you can deal with round trip
> issues, multi fetch issues, etc. Aka. Riak's lack of developer sugar. You
> mentioned it earlier, but a search > MR is exactly how I've done multi fetch
> in Riak 1.x and, it seems, will continue to do in Riak 2.x. Of course,
> solutions are specific to your application. A search > user land multi fetch
> wrapper function is trivial to implement. Actually, I don't know why Basho
> doesn't ship just such a wrapper in erlang that would take an array of
> bucket/key pairs and push out an array of responses. But either way, it's
> not really a show stopper.
>
> Ya sugar is nice but, as you know, eventually you crash.
>
> -Alexander Sicular
>
> @siculars
>
> On Mar 26, 2014, at 2:10 PM, Elias Levy  wrote:
>
> On Wed, Mar 26, 2014 at 10:36 AM, Eric Redmond  wrote:
>>
>> That is correct. Storing values in solr make the values redundant. That's
>> because solr and Riak are different systems with different storage
>> strategies. Since we only return the solr results, the only way we can
>> access the kv values would be to make a separate call, which is not much
>> different from your client making that same call.
>>
>> As for separating Riak Search from kv entirely, this is a possibility
>> we've looked into, but it won't be ready for 2.0. I'm sorry to say that, for
>> the time being, the only option for your request is to store values in both
>> places.
>
>
> Thanks for the response Eric.  I understand the current limitations.  My
> question was forward looking.
>
> Riak is an amazing piece of technology that provides great availability. Ops
> loves Riak. Alas, in my opinion, its weakness has always been one of ease of
> use for developers. When it was just the KV store, the complexities of
> eventual consistency were placed squarely on the developer's shoulders and
> queryability was very limited.
>
> 2i helped somewhat, and the new CRDT data types improve things tremendously,
> as does Yokozuna.  But there are still gaps.  No bulk loading.  No bulk
> fetching.
>
> Riak has always felt like a collection of components, rather than an
> integrated system. KV is unaware of Bitcask expirations. Search doesn't
> returned matched documents.
>
> MongoDB's cluster and storage layers may be a disgrace, but the one thing
> they got right is the expressive API.  Its one reason why developers love
> Mongo, at the same time is hated by Ops.
>
> I'd love to see this ease of use within Riak, so I can actually get our
> developers to use it more.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: link walking in the Riak 2.0 Ruby client?

2014-03-25 Thread Brian Roach
Hi Paul,

Link walking is being deprecated as Christian notes.

We removed it from the official 2.0 clients for that reason, and also
that the new clients only use protocol buffers which on the Riak PB
API side never had that functionality directly.

It is still possible to perform the operation, you just have to do it
through map-reduce (which is what our clients have always done when
using PB)

Thanks,
- Roach

On Tue, Mar 25, 2014 at 8:56 AM, Paul Walk  wrote:
> I'm gradually getting up to speed with the technical release of Riak 2.0, and 
> am appreciating CRDT and the seamless integration into search.
>
> I had just turned my attention to links, and started to wonder how I should 
> implement links when the objects I will mostly be dealing with are CRDT 'map' 
> objects, when I discovered that the link-walking functions appear to have 
> been removed from the latest Ruby client!!
>
> Is link-walking deprecated and intentionally removed from Riak version 2.0? 
> Or is it just missing from the Ruby client?
>
> Paul
> ---
> Paul Walk
> http://www.paulwalk.net
> ---
>
>
>
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: 60 second timeout in Riak

2014-03-13 Thread Brian Roach
Replies are inlined below:

On Thu, Mar 13, 2014 at 9:40 AM, Matthew MacClary
 wrote:
> Interesting, so the Java client is just honoring the server's request that
> it try back in 60 seconds?

No, again, the timeout is on the Riak side. There is no timing in the
client, it's just doing a blocking read on a socket (with no timeout).
Riak sends an error message to the client when a timeout occurs and
the default behavior of the client is to retry all failed operations.

> Do you happen to know if there is a configuration
> variable to tune that default timeout value?

The operations in the client have an optional timeout to send with the
request that will override the default setting on the Riak side for
that operation. See:
http://basho.github.io/riak-java-client/1.4.4/com/basho/riak/client/operations/FetchObject.html#timeout(int)
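
For example, with the API you're using (sketch; assumes you already
have a Bucket instance):

IRiakObject obj = bucket.fetch("some-key")
                        .timeout(5000) // ms; overrides the 60s default
                        .execute();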

> Our automated testing since yesterday has shown that changing pb_backlog did
> resolve this 60 second timeout issue for our system.

If this is the case then it seems what you're talking about isn't
related to operation timeouts at all but rather TCP connection
timeouts.

I do find it odd that the change you made would resolve any sort of
TCP timeout issue. AFAIK the pb_backlog is simply how many TCP
connections can be queued waiting to be accepted (the same as you
would pass directly to the listen() system call).

If you performed an operation with the client and it needed to create
a new TCP connection to Riak, you would expect the client connection
to be refused instantly if the listen queue was full. While the client
is going to retry making that connection, it certainly wouldn't take
60 seconds to do so - I'd expect all retries (3, by default) to fail
instantly and the client to throw an exception back to you saying it
couldn't connect.

Connections that were *in* the listen queue I could see causing a TCP
connection timeout if Riak never accepted them before the client
side's TCP stack gave up, but in my head your change would increase
the frequency of that happening rather than reducing it.

- Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: 60 second timeout in Riak

2014-03-12 Thread Brian Roach
Just as a clarification, there is no default timeout in the Java client.
Riak 1.4 introduced server-side timeouts on operations and the default is
60 seconds (60k milliseconds). By default, the client does retry after
receiving the timeout error message which is the behavior you're seeing.

- Roach
On Mar 12, 2014 7:42 AM, "Matthew MacClary" <
maccl...@lifetime.oregonstate.edu> wrote:

> Thanks for the suggestion Christian. Right now I am running an experiment
> with pb_backlog set to 64 up from the default of 5. The way our application
> uses the client there would usually be 4 connections initiated at the same
> time, but there could be as many as 12 connections initiated at the same
> time right now. I wonder if that is causing the client to time out and try
> back in 60 seconds!
>
> I will report my results.
>
> Best regards,
>
> -Matt
>
>
> On Tue, Mar 11, 2014 at 11:38 PM, Christian Dahlqvist wrote:
>
>> Hi Matthew,
>>
>> I believe 60 seconds is the default timeout in the client, so it is
>> possible the `busy_dist_port` issues have caused a timeout and that the
>> automatic retry then has succeeded.
>>
>> A small +zdbbl value will cause the internal buffers to fill up and
>> result in `busy_dist_port` messages, which will cause performance problems.
>> I would recommend setting +zdbbl to 16384 (16MB) or 32768 (32MB) and verify
>> that you stop seeing busy_dist_post messages in the logs. If problems
>> persist it may be required to set it even higher.
>>
>> It is also important to note that `busy_dist_port` messages can be caused
>> by individual large objects even if +zdbbl is set to a reasonably large
>> value as outlined above. In Riak 1.4.8, logging of large objects has been
>> introduced, which will allow you to identify large objects that could cause
>> problems by going through the logs. You can also track large objects by
>> trending the `node_get_fsm_objsize_100` statistic.
>>
>> Best regards,
>>
>> Christian
>>
>>
>>
>> On Wed, Mar 12, 2014 at 5:43 AM, Matthew MacClary <
>> maccl...@lifetime.oregonstate.edu> wrote:
>>
>>> Hi everyone, we are running Riak 1.4.1 on RHEL 6.2 using bitcask. We are
>>> using protobufs with the Java client, and our binary objects are typically
>>> a few hundred KB in size. I have noticed a persistent anomaly with riak
>>> reads and writes. It seems like often, maybe 0.5% of the time, writing to
>>> Riak takes 60 seconds longer than it should. Here is a prime example I just
>>> trimmed from a log file (see the last time entry below).
>>>
>>> This is not bitcask merge related because it happens before any of the
>>> bitcask slabs are large enough to merge. I am seeing lots of busy_dist_port
>>> messages in the Riak logs. One unique setting is that we have a small zdbbl
>>> setting of 128K because this seemed to prevent congestive collapse of
>>> throughput at high sustained loads. I believe that this 60 second timeout
>>> persisted across the various zdbbl settings we tried. Also we see this
>>> occasional 60 second delay on both VMs and real server hardware.
>>>
>>> Does anyone know where this 60 second delay comes from?
>>>
>>> Thanks!
>>>
>>> -Matt
>>>
>>>
>>> 2014-03-11 20:40:00,747 INFO  [Thread-61] sally.ReportHandler - Riak
>>> load time for 88: 0.087 seconds
>>> 2014-03-11 20:40:01,137 INFO  [Thread-62] sally.ReportHandler - Riak
>>> load time for 70: 0.185 seconds
>>> 2014-03-11 20:40:01,958 INFO  [Thread-63] sally.ReportHandler - Riak
>>> load time for 97: 0.054 seconds
>>> 2014-03-11 20:40:02,566 INFO  [Thread-64] sally.ReportHandler - Riak
>>> load time for 90: 0.043 seconds
>>> 2014-03-11 20:40:02,830 INFO  [Thread-65] sally.ReportHandler - Riak
>>> load time for 85: 0.051 seconds
>>> 2014-03-11 20:40:04,162 INFO  [Thread-66] sally.ReportHandler - Riak
>>> load time for 101: 0.075 seconds
>>> 2014-03-11 20:40:04,503 INFO  [Thread-67] sally.ReportHandler - Riak
>>> load time for 103: 0.048 seconds
>>> 2014-03-11 20:40:05,745 INFO  [Thread-68] sally.ReportHandler - Riak
>>> load time for 98: 0.031 seconds
>>> 2014-03-11 20:40:06,041 INFO  [Thread-69] sally.ReportHandler - Riak
>>> load time for 102: 0.063 seconds
>>> 2014-03-11 20:40:06,444 INFO  [Thread-70] sally.ReportHandler - Riak
>>> load time for 92: 0.022 seconds
>>> 2014-03-11 20:40:06,903 INFO  [Thread-71] sally.ReportHandler - Riak
>>> load time for 99: 0.039 seconds
>>> 2014-03-11 20:40:09,847 INFO  [Thread-72] sally.ReportHandler - Riak
>>> load time for 106: 0.019 seconds
>>> 2014-03-11 20:40:10,107 INFO  [Thread-73] sally.ReportHandler - Riak
>>> load time for 108: 0.043 seconds
>>> 2014-03-11 20:40:47,820 INFO  [Thread-52] sally.ReportHandler - Riak
>>> load time for 62: 1 minutes, 0.190 seconds
>>>
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>>
>>
>
> ___
> riak-users mailing list
> riak-users@list

Re: Java RiakClient and thread-safety

2014-02-18 Thread Brian Roach
Hi John,

Yes the Java client is meant to be used exactly as you describe;
create one instance of IRiakClient and share it across threads. The
client itself maintains a connection pool internally so that multiple
operations can occur concurrently in different threads.

Thanks,
- Roach

On Tue, Feb 18, 2014 at 2:02 PM, John Pederzolli
 wrote:
> Hi -
>
> I am currently using the PBC implementation of the IRiakClient. Going
> through the code, it appears that it is safe to instantiate statically and
> share among threads.
>
> I just wanted to verify if this is the case, or it would be preferable to
> instantiate a new client each call.
>
> Thanks!
>
> - John
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: pagination over 2i indexes in java

2014-02-12 Thread Brian Roach
Joe -

The current Java client codebase is built in layers. The top-level
user API uses an instance of the mid-level RawClient interface, which
passes operations to the appropriate protocol-specific layer at the bottom.

We are not able to reproduce what you are describing using the 1.4.4
version of the client. Doing a 2i query (with a range or a single
value) and specifying a max number of results returns the results
requested and a continuation.

Here's a gist using the RawClient:

https://gist.github.com/broach/99880915e8c71c53e9bb

If you have a test case that demonstrates the issue you're having and
can post it as a gist in github (or a pastie, etc) we'll be happy to
look at it.

Thanks!
- Roach

On Tue, Feb 11, 2014 at 10:32 PM, joe dude  wrote:
> Yeah, i looked at your code and it looks very similar to what i'm doing. The
> difference is i'm just talking to the client directly:
>
> StreamingOperation result = rawClient.fetchIndex(query);
>
> And rawClient is a "PBClusterClient". Might that have something to do with
> it?
>
>
> On Tuesday, February 11, 2014 5:00 PM, Dave Rusek  wrote:
> Joe,
>
> Sorry, I wanted to get that email out before I stepped out. The specific
> test I added is here:
>
> https://github.com/basho/riak-java-client/blob/1.4.x-develop/src/test/java/com/basho/riak/client/itest/ITestBucket.java#L280
>
> Does this represent your use case? If not, do you happen to have a failing
> test I could take a look at?
>
> Thanks!
>
> --
> Dave Rusek
> Software Engineer
> Basho Technologies
> @davidjrusek
>
> On February 11, 2014 at 5:33:57 PM, Dave Rusek (dru...@basho.com) wrote:
>
> Joe,
>
> I added an integration test to the 1.4.x-develop and the 1.4.4 branch of the
> client and tried them against the latest 1.4 branch of Riak but was not able
> to reproduce your issue.
>
> https://github.com/basho/riak-java-client/tree/1.4.x-develop
>
> --
> Dave Rusek
> Software Engineer
> Basho Technologies
> @davidjrusek
>
> On February 11, 2014 at 5:21:11 PM, Brian Roach (ro...@basho.com) wrote:
>
> -- Forwarded message --
> From: joe dude 
> Date: Tue, Feb 11, 2014 at 2:15 PM
> Subject: Re: pagination over 2i indexes in java
> To: Brian Roach 
> Cc: "riak-users@lists.basho.com" 
>
>
> I'm using 1.4.4, and creating a PBClusterClient.
>
> Thanks.
>
>
> On Tuesday, February 11, 2014 11:03 AM, Brian Roach  wrote:
> Hi Joe -
>
> What version of the Riak Java client are you using, and which protocol
> (PB or HTTP)?
>
> Will take a look at it.
>
> Thanks!
> - Roach
>
> On Tue, Feb 11, 2014 at 11:50 AM, joe dude  wrote:
>> Hi, trying to figure out how to use the java client to do 2i index queries
>> with pagination.
>>
>> I can issue the query with max_results and get back exactly the number
>> that
>> i asked for, but the continuation value in the returned StreamingOperation
>> object is null.
>>
>> In contrast, i get pagination to work just fine using the examples from:
>> http://docs.basho.com/riak/latest/dev/using/2i/
>>
>> I'm using:
>>
>> public StreamingOperation fetchIndex(IndexSpec indexSpec)
>> throws IOException
>>
>>
>> It's as if i'm missing something when building IndexSpec, but not sure
>> what.
>>
>> Thx.
>>
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: pagination over 2i indexes in java

2014-02-11 Thread Brian Roach
Hi Joe -

What version of the Riak Java client are you using, and which protocol
(PB or HTTP)?

Will take a look at it.

Thanks!
- Roach

On Tue, Feb 11, 2014 at 11:50 AM, joe dude  wrote:
> Hi, trying to figure out how to use the java client to do 2i index queries
> with pagination.
>
> I can issue the query with max_results and get back exactly the number that
> i asked for, but the continuation value in the returned StreamingOperation
> object is null.
>
> In contrast, i get pagination to work just fine using the examples from:
> http://docs.basho.com/riak/latest/dev/using/2i/
>
> I'm using:
>
> public StreamingOperation fetchIndex(IndexSpec indexSpec)
>   throws IOException
>
>
> It's as if i'm missing something when building IndexSpec, but not sure what.
>
> Thx.
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client, querying using domain bucket and 2i

2014-02-11 Thread Brian Roach
Hi Daniel,

Honestly, there's no reason other than ... no one ever added it to the class.

When we cut the next 1.x releases, I'll add it.

Thanks,
- Roach

On Sat, Feb 8, 2014 at 2:43 PM, Daniel Iwan  wrote:
> Hi all
> Is there a reason there's no 2i querying methods in DomainBucket?
> That requires to keep both Bucket and DomainBucket references which makes it
> a bit awkward when passing those around.
>
> Thanks
> Daniel
>
>
>
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/Java-client-querying-using-domain-bucket-and-2i-tp4030476.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak pool connection

2014-02-08 Thread Brian Roach
Hi Matt.

Ignoring that you really shouldn't be running an app server on a
machine that's also a database server, I'm confused as to why you
would use any load balancing features of a client if that's the way
you're doing things.

Presumably you have something load balancing to your app servers, so
... just connect to the local riak node. If that machines crashes, it
seems unlikely your app server would continue running. If there's some
edge case where somehow the riak node stops responding but the app
server is still running, your load balancer should stop sending
requests to that app server.

All that said, the existing `ClusterClient` class in the 1.x Java
client is very basic/limited. All it does is round-robin requests
through the list of supplied nodes. As Sean Cribbs points out, the new
Riak 2.0 Java client will allow you to define your own node selection
code.

Thanks,
- Roach

On Sat, Feb 8, 2014 at 3:34 PM, Sean Cribbs  wrote:
> Hi Matt,
>
> I'm not positive of the implementation details, but I know for certain that
> the "new" (unreleased) Java client allows you to provide a load-balancing
> strategy yourself. This documentation should be a good start:
> http://basho.github.io/riak-java-client/2.0.0-SNAPSHOT/com/basho/riak/client/core/DefaultNodeManager.html
>
> I'm sure Brian Roach and Dave Rusek, who maintain the Java client, would
> also be happy to discuss it more with you.
>
>
> On Sat, Feb 8, 2014 at 11:23 PM, Matthew MacClary
>  wrote:
>>
>> I have the same use case as Massimiliano. We are using the java client and
>> our app runs on the same servers as the Riak cluster. We have found that
>> connecting to the Riak instance running on local host provides the best
>> performance. It would be nice if the cluster client could be told to prefer
>> one node, and fall back to other nodes if needed kind of like secondary DNS
>> servers.
>>
>> -Matt
>>
>>> Message: 5
>>> Date: Sat, 8 Feb 2014 15:07:36 +0100
>>> From: Massimiliano Ciancio 
>>> To: Sean Cribbs 
>>> Cc: riak-users 
>>> Subject: Re: Riak pool connection
>>> Message-ID:
>>>
>>> 
>>> Content-Type: text/plain; charset=ISO-8859-1
>>>
>>>
>>> Hi Sean,
>>> thanks for your answer!
>>>
>>> > I believe it may still be possible for the node to be selected if no
>>> > other
>>> > connections are available in the pool, because the logic used to
>>> > establish a
>>> > new connection might not use the filter.
>>>
>>> My problem is not to avoid that a node will be selected again after a
>>> fail (well, if this can be avoided, for sure it's better...) but to set
>>> an order in the nodes: I want to connect first to the node on which my
>>> app is running and only in case of fail to the other nodes. The reason
>>> is to avoid network traffic: every instance of my app have to connect
>>> to the node on the same machine where it resides.
>>> How can I suggest a "preferred node" to RiakClient?
>>> My first idea was to use a connection with only the preferred node
>>> and, using a try/except, use an "emergency connection" with the list
>>> of all nodes to be used only in case of fail. But it's not so
>>> elegant
>>> Massimiliano
>>
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>
>
>
> --
> Sean Cribbs 
> Software Engineer
> Basho Technologies, Inc.
> http://basho.com/
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak read Time out - Bitcask

2014-02-07 Thread Brian Roach
On Fri, Feb 7, 2014 at 8:52 AM, Ramesh-Ecare  wrote:

>   int riakTimeout = 15;
>   ...
>   xBucket.store(requestId,
> xResponse).returnBody(false).withoutFetch().timeout(riakTimeout).execute();

In both your fetch and store you've set the timeout to 15 *milliseconds*.

In addition, you're not using vector clocks which means if that bucket
has allow_mult=true and you're re-using keys, you're going to be
creating siblings on every write.
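
A minimal sketch of both fixes: a timeout in actual milliseconds, and
dropping withoutFetch() so the client fetches first and stores with the
returned vector clock:

int riakTimeout = 15000; // 15 seconds, not 15ms
xBucket.store(requestId, xResponse)
   .returnBody(false)
   .timeout(riakTimeout)
   .execute();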

- Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Load balancer

2014-02-02 Thread Brian Roach
Konstantin,

Doing a HTTP Ping request[1] to Riak is one approach. You could also
do a HTTP Fetch[2] for a specific bucket/key pair.

Another thing worth noting is that the all-new v2.0 of the Java client
we'll be releasing for Riak 2.0 is much, much better in terms of load
balancing and node management. It's built from the ground up to work
with a cluster vs. the old client which just sort of had it tacked on.

Operationally, for example, you'll be able to just push a properties
file out that your application can monitor and reconfigure the running
client.
[1] http://docs.basho.com/riak/latest/dev/references/http/ping/
[2] http://docs.basho.com/riak/latest/dev/references/http/fetch-object/
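
If you'd rather probe from the Java client itself there's also a ping;
a minimal sketch (IRiakClient.ping() throws a RiakException on failure):

try {
   client.ping();
} catch (RiakException e) {
   // node didn't respond; mark it unhealthy
}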

Thanks,
- Roach

On Sun, Feb 2, 2014 at 10:23 AM, Konstantin Kalin
 wrote:
> Sorry jumping into the discussion. What is a best way to monitor Riak node
> health? Most loadbalancer uses HTTP request to check if a node is alive.
>
> Currently we use Riak Java client to load balance requests to Riak. The
> issue is if a node gets removed or added all Java servers need to update
> configuration to reflect the changes. It's kinda annoying for operation
> team. So we started thinking about putting LB between Riak cluster and Java
> clients.
>
> Thank you,
> Konstantin.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Failed: This node is already a member of a cluster

2014-01-30 Thread Brian Roach
From your prompt you're in node1's install directory and node1 *is*
already a member of a cluster, as shown in member-status.

You need to run the riak-admin command from the directory for the node
that is joining;

http://docs.basho.com/riak/latest/quickstart/#Create-the-Cluster

On Thu, Jan 30, 2014 at 2:18 PM, Naveen Tamanam  wrote:
> Hi All,
>
> I am getting the following error even though the node is not a member of a
> cluster.
> The cluster is not set up yet; this was the first time I ran the nodes and I
> was trying to join them into a cluster.
>
> [root@node1 riak]# bin/riak-admin cluster join node3@172.31.6.244
> Failed: This node is already a member of a cluster
>
> But when I check the membership I am only getting the following
>
> [root@node1 riak]# bin/riak-admin member-status
> ================================= Membership ==================================
> Status     Ring    Pending    Node
> -------------------------------------------------------------------------------
> joining 0.0%  --  'node1@172.31.6.248'
> valid 100.0%  --  'node5@172.31.15.58'
> ---
> Valid:1 / Leaving:0 / Exiting:0 / Joining:1 / Down:0
>
>
> I have 5 nodes, and their name in vm.arg are
> node1@ip
> node2@ip
> node3@ip
> node4@ip
> node5@ip
>
> like it  is being displayed in the member status.
>
> I was only able to join one node; from then onwards I am getting
> Failed: This node is already a member of a cluster.
>
> I also tried clearing the ring data on each node. Anything wrong?
>
>
>
>
>
> --
> Thanks & Regards,
> Naveen Tamanam
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java Protobuf objects

2014-01-21 Thread Brian Roach
On Tue, Jan 21, 2014 at 10:58 AM, Jon Brisbin  wrote:
> Is the Protobuf interaction still enforced sequentially? e.g. responses only
> come in the order in which the corresponding request was sent and a long
> request will hold up the results of subsequent, but smaller requests?

There has never been a guarantee of this; we don't support pipelining
of requests. Requests are sequential in that you should make a request
then wait for the reply before issuing another request on the same
connection.

> I guess the answer for parallelism is to simply use a pool of connections.

Correct. This is what all the clients have always done.

> How many per client are considered acceptable now that R16 has better
> non-blocking IO support?

The number of connections has always been largely irrelevant; make
1000 of them if you'd like ... just don't try to send 1000 operations
at once if the node can't handle it. The workload on the node is the
(far) greater limiting factor.
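
For reference, with the official 1.x Java client the pool is simply
part of the client config; a minimal sketch (pool sizes are placeholder
values):

PBClientConfig conf = new PBClientConfig.Builder()
   .withInitialPoolSize(10)
   .withPoolSize(100)
   .build();
IRiakClient client = RiakFactory.newClient(conf);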

- Roach

>
>
> Thanks!
>
> Jon Brisbin
> http://about.me/jonbrisbin | Twitter: @JonBrisbin
>
> On Tuesday, January 21, 2014 at 11:51 AM, Brian Roach wrote:
>
> Jon,
>
> Yes, we released riak_pb 2.0.0.11 to maven central for the new Riak
> Java Client v2.0 core preview (the all-new official async Riak Java
> client based on Netty[1]) we announced a couple weeks ago - it
> supports all the new Riak 2.0 functionality.
>
> 
<dependency>
    <groupId>com.basho.riak.protobuf</groupId>
    <artifactId>riak-pb</artifactId>
    <version>2.0.0.11</version>
</dependency>
> 
>
> [1] - https://github.com/basho/riak-java-client/tree/master
>
> On Tue, Jan 21, 2014 at 7:22 AM, Jon Brisbin  wrote:
>
> Have the Riak Java Protobuf artifacts been updated to take advantage of Riak
> 2.0 features yet?
>
> I'd like to work some more on getting the Riaktor (Riak async Java client
> based on Reactor [1]) up to Riak 2.0 functionality. Currently I'm using the
> Protobuf artifact for the actual objects I need to do the interactions. I'm
> handling all the network IO through Reactor/Netty and using the simple
> codecs in Reactor.
>
> If not, can I just generate the Java objects I need from the original
> protobuf definitions?
>
> [1] - https://github.com/reactor/reactor
>
>
> Thanks!
>
> Jon Brisbin | Reactor Project Lead
> http://about.me/jbrisbin | @j_brisbin
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java Protobuf objects

2014-01-21 Thread Brian Roach
Jon,

Yes, we released riak_pb 2.0.0.11 to maven central for the new Riak
Java Client v2.0 core preview (the all-new official async Riak Java
client based on Netty[1]) we announced a couple weeks ago - it
supports all the new Riak 2.0 functionality.


<dependency>
    <groupId>com.basho.riak.protobuf</groupId>
    <artifactId>riak-pb</artifactId>
    <version>2.0.0.11</version>
</dependency>


[1] - https://github.com/basho/riak-java-client/tree/master

On Tue, Jan 21, 2014 at 7:22 AM, Jon Brisbin  wrote:
> Have the Riak Java Protobuf artifacts been updated to take advantage of Riak
> 2.0 features yet?
>
> I'd like to work some more on getting the Riaktor (Riak async Java client
> based on Reactor [1]) up to Riak 2.0 functionality. Currently I'm using the
> Protobuf artifact for the actual objects I need to do the interactions. I'm
> handling all the network IO through Reactor/Netty and using the simple
> codecs in Reactor.
>
> If not, can I just generate the Java objects I need from the original
> protobuf definitions?
>
> [1] - https://github.com/reactor/reactor
>
>
> Thanks!
>
> Jon Brisbin | Reactor Project Lead
> http://about.me/jbrisbin | @j_brisbin
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Java Client v2.0 core preview

2014-01-03 Thread Brian Roach
Greetings Riak Users!

With Riak v2.0 in pre-release there has been some interest in a
client that can exercise the new features (specifically, CRDTs).

We're not quite done with the user-level API, but the core of the new
Java client is at a point where it can be used to do so.

The master branch in the github repo will now build the core preview
of the new client.

https://github.com/basho/riak-java-client/tree/master

The README has more info and an example of how to get it up and running.

I've also generated and posted the Javadocs online
http://basho.github.io/riak-java-client/2.0.0-SNAPSHOT/

Thanks!
- Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: May allow_mult cause DoS?

2013-12-17 Thread Brian Roach
On Tue, Dec 17, 2013 at 10:28 AM, Viable Nisei  wrote:
> Here you can check our code sample (in java) reproducing this behavior:
> https://bitbucket.org/vsnisei/riak-allow_mult_wtf
> ...
> Anyway, it looks like some DoS/DDoS attack approach utilizing this
> behavior may be proposed. We would only need to know that some
> service/application/website is using Riak with allow_mult buckets, then
> provoke concurrent writes into them...

You're writing to the same key over and over without a vector clock.
This is exactly the behavior you should expect. Every write is
creating a new sibling and that will cause what you're seeing.

Because you chose to use a low-level, internal interface of the Java
client instead of IRiakClient, nothing is done automatically for you.

There's two typical patterns when you have concurrent writers:

1) Everything is done in a fetch/modify/write cycle. You fetch the
existing object (and its vector clock), resolve any siblings, apply
modifications, then write it back with the vector clock. This is the
default behavior of the IRiakClient. Note that you *still* may need to
resolve siblings on a plain fetch depending on the amount of
concurrent writes you have occurring; you haven't eliminated the
sibling window, just narrowed it.

2) All writes are done without vclocks. When doing a fetch, you
resolve the siblings then write back the resolved version with the
vclock. This is generally not a good approach if you expect to do lots
of writes to the same key as your sample code does because of the same
problem; lots of siblings.
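
A minimal sketch of pattern 1 using the high-level API (the resolver is
an application-specific placeholder):

MyPojo result = bucket.store("key", updatedValue)
   .withResolver(mySiblingResolver) // ConflictResolver<MyPojo>, merges siblings
   .returnBody(true)
   .execute();

The store here does the fetch (obtaining the vclock), runs your
resolver, applies the new value, and writes back with that vclock.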

I've written a fairly extensive article on using the Java client and
storing data in Riak - you may want to check it out:
https://github.com/basho/riak-java-client/wiki/Storing-data-in-riak

Thanks,
- Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: accessing CRDTs in riak 2.0

2013-12-17 Thread Brian Roach
Hi James,

Do you mean via the Erlang client, or one of the other client libs, or ... ?

Thanks,
- Roach

On Tue, Dec 17, 2013 at 12:42 PM, James Moore  wrote:
> Hey all,
>
> I'm working on testing out some of the CRDT features but haven't been able
> to sort through the incantations to store/query any CRDT other than a
> counter.  any tips?
>
> thanks,
>
> --James
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Changes in Riak Protobuf for 2.0?

2013-12-15 Thread Brian Roach
Hi David,

The riak_pb repo is structured with current development being done in
the 'develop' branch and then releases being cut from there to master
(with tags). We've switched to four-digit versioning and you'll see
several 2.x.x.x tags.

Off the top of my head, the major changes are messages for Yokozuna
administration, authentication, CRDTs and the bucket type field added
to existing messages.

Thanks,
- Roach

On Sun, Dec 15, 2013 at 10:51 AM, David James  wrote:
> Hello,
>
> I watched the Riak Community Hangout #4:
> https://plus.google.com/u/1/events/ctv6lgms873seh3vv0e59nj311o
>
> Thanks for that conversation! From it, I hear that the new (fresh) Riak 2.0
> Java client may be ready soon. That's great!
>
> I want to start experimenting a little and dig in.
>
> What will change, if anything, in the Riak Protobuf definitions in
> https://github.com/basho/riak_pb for for Riak 2.0? I didn't see a 2.0 branch
> when I looked.
>
> I'm going to dig in a little and see what happens.
>
> Thanks,
> -David
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: build Riak 2 Java client (on windows)

2013-12-11 Thread Brian Roach
Hi Shimon,

As noted in the README, the new version of the Java client (v2.0) is a
work in progress and not yet usable; while the core is mostly complete
there's currently no user API. Work on that is progressing, and we
hope to have a release candidate available at the end of the month.

The current 1.4.2 version of the client will work with Riak 2.0
preview, but unfortunately does not support the new features in Riak
2.0

Thanks,
- Roach

On Wed, Dec 11, 2013 at 11:54 PM, Shimon Benattar  wrote:
> Hi Riak users,
>
> I want to start checking out Riak 2 and for that I need the new Java client
>
> I downloaded it from Git but it will not build.
> It seems to be missing a few dependencies (One of them was protobuf which I
> actually downloaded and built but it did not sync)
>
> Is there anywhere I can download the Jar or get detailed instructions of how
> to build the project?
>
> Thanks,
>
> Shimon
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Links and uniqueness

2013-11-21 Thread Brian Roach
Matt -

This has never been a restriction in Riak itself AFAIK. I fixed the
same issue in the Java client over a year ago - it was using a hashmap
for links so duplicates were discarded;
https://github.com/basho/riak-java-client/pull/165

- Roach

On Thu, Nov 21, 2013 at 7:00 PM, Matt Black  wrote:
> Apologies for the bump!
>
> Basho guys, can I get a confirmation on the uniqueness of links between two
> objects please? (Before I go an modify the code in my app to suit)
>
> Thanks
> Matt
>
>
>
> On 19 November 2013 14:31, Matt Black  wrote:
>>
>> Hello list,
>>
>> Once upon a time, a link from one object to another was unique - you
>> couldn't add two links from object A onto object B. I know this as I had to
>> code around it in our app.
>>
>> At some stage that limitation has been removed - in either the Python
>> bindings or Riak itself.
>>
>> Can anyone else confirm this? Basho peeps, are non-unique links the
>> intended behaviour?
>>
>> Thanks
>> Matt Black
>>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: FetchObject timeout method doesn't work

2013-11-20 Thread Brian Roach
Harish,

ConnectionTimeout and RequestTimeout are actual socket operations on
the client side. Specifically, the first is passed to the actual
connect() call and the second sets the SO_TIMEOUT (SO_RCVTIMEO) socket
option so that the (blocking) read will time out.

The latter was included because in the past (pre 1.4) Riak did not
have operation timeouts; the only way to abort an operation was to
time out the socket read on the client side and dump the connection
(ugly, and expensive).

I double-checked the client code in the debugger and the client is
indeed passing the timeout all the way down to the protocol buffer and
it's being sent over the wire.

What is happening is that your fetch request is being serviced in Riak
within 1ms. As a point of reference, in order to trigger a Riak-side
timeout when fetching on my machine with that 1ms timeout I had to
store a 5MB object.

Thanks,
- Roach

On Wed, Nov 20, 2013 at 4:20 AM, Harish Sharma  wrote:
> FetchObject has a timeout method -
>
> /**
>  * Set an operation timeout in milliseconds to be sent to Riak
>  *
>  * As of 1.4 Riak allows a timeout to be sent for get, put, and delete
>  * operations. The client will receive a timeout error if the operation
>  * is not completed within the specified time
>  * @param timeout the timeout in milliseconds
>  * @return this
>  */
> public FetchObject<T> timeout(int timeout) {
> builder.timeout(timeout);
> return this;
> }
>
> But it doesn't work; I am using it as follows:
>
> FetchObject<IRiakObject> fo = xBucket.fetch("0.10717155098162867");
>
> IRiakObject iRiakObject = fo.timeout(1).execute();
>
> Other timeouts like ConnectionTimeout and RequestTimeout work perfectly, but
> the timeout on the fetch object with execute has no effect. In my case it
> takes 31 milliseconds to fetch and I have a timeout of 1 ms but I see no
> error. What am I missing here?
>
> FYI, I am making the client as per the following:
>
> private static IRiakClient setupRiakClusterClient() {
>    IRiakClient client = null;
>    try {
>       String riakClusteHost2 = "10.200.2.58";
>       int riakClusterPort = 8087;
>       int riakIntialPoolSize = 10;
>       int riakMaxPoolSize = 200;
>       int riakConnectionTimeoutMillis = 10;
>       int riakRequestTimeoutMillis = 2;
>
>       PBClusterConfig clusterConfig = new PBClusterConfig(riakMaxPoolSize);
>       PBClientConfig pbconfig = new PBClientConfig.Builder()
>             .withPort(riakClusterPort)
>             .withInitialPoolSize(riakIntialPoolSize)
>             .withPoolSize(riakMaxPoolSize)
>             .withConnectionTimeoutMillis(riakConnectionTimeoutMillis)
>             .withRequestTimeoutMillis(riakRequestTimeoutMillis)
>             .withIdleConnectionTTLMillis(1)
>             .build();
>
>       clusterConfig.addHosts(pbconfig, riakClusteHost2);
>       client = RiakFactory.newClient(clusterConfig);
>    } catch (RiakException e) {
>       e.printStackTrace();
>       System.exit(1);
>    }
>    return client;
> }
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Client with POJOs

2013-11-11 Thread Brian Roach
In your mapping function you simply add a qualifier to detect tombstones:

if (values[i].metadata['X-Riak-Deleted'] == 'true')

- Roach

On Mon, Nov 11, 2013 at 1:59 PM, Michael Guymon
 wrote:
> Ahh, yes, now that makes sense. I see with @RiakUsermeta or @RiakTombstone
> it is possible to filter the results of the MapReduce for tombstones. Is it
> possible to add a phase to reduce the tombstones instead of manually
> filtering the final result?
>
> thanks,
> Michael
>
>
> On 11/11/2013 03:16 PM, Brian Roach wrote:
>>
>> Michael -
>>
>> You have something stored in that bucket that isn't the JSON you're
>> expecting when you run your second map/reduce. As I mentioned, there's
>> nothing special about how the Java client works; it just serializes
>> the POJO instance using Jackson.
>>
>> My suggestion would be using curl / your browser (or the Java client)
>> and seeing what that is; listing the keys and checking the contents.
>>
>> I notice you're using the ".withoutFetch()" option when storing;
>> that's guaranteed to create a sibling if you have allow_multi=true set
>> in the bucket. If that's the case then that behavior is expected; both
>> versions are stored in Riak.
>>
>> Also worth noting is that if you've recently deleted something
>> (explicitly via a delete operation) it's very likely to get a
>> tombstone passed to map/reduce.  If you're doing explicit deletes from
>> Riak you need to check the object metadata for the
>> <<"X-Riak-Deleted">> header being true, and then ignore that object in
>> your map function.
>>
>> - Roach
>>
>>
>> On Mon, Nov 11, 2013 at 12:46 PM, Michael Guymon
>>  wrote:
>>>
>>> Hi Roach,
>>>
>>> Thanks for taking a moment to give me a hand with this. Let me try and be
>>> a
>>> bit more clear on what I am trying to figure out. My first steps are a
>>> Class
>>> Account:
>>>
>>> public class Account implements Serializable {
>>>  private String email;
>>> }
>>>
>>> Storing the account via
>>>
>>> myBucket.store("key", account).withoutFetch().execute();
>>>
>>> then retrieving it with a map reduce using JS, along the lines of:
>>>
>>> var accounts = [];
>>> for( i=0; i<values.length; i++) {
>>>   if ( values[i].email == 't...@test.net' ) {
>>>  accounts.push(values[i]);
>>>   }
>>> }
>>> return accounts
>>> return accounts
>>>
>>> works as expected.
>>>
>>> Now I updated the Class Account to have the name property:
>>>
>>> public class Account implements Serializable {
>>>  private String name;
>>>  private String email;
>>> }
>>>
>>> and storing data to the same bucket for the same key, and attempting
>>> to Map Reduce for "name" I get a
>>>
>>> {"phase":1,"error":"[{<<\"lineno\">>,1},{<<\"message\">>,<<\"TypeError:
>>> values[i].name is
>>>
>>> undefined\">>},{<<\"source\">>,<<\"unknown\">>}]","input":null,"type":null,"stack":null}.
>>>
>>> If I change the bucket to a new one, the Map Reduce runs successfully
>>> without the above error.
>>>
>>> This is Riak 1.4.2 running on Ubuntu 13.04
>>>
>>> thanks,
>>> Michael
>>>
>>>
>>> On 11/11/2013 02:32 PM, Brian Roach wrote:
>>>
>>> Hi Michael,
>>>
>>> I'm somewhat confused by your question; map/reduce doesn't really have
>>> anything to do with your Java POJO/class.
>>>
>>> When using the Riak Java client and storing a POJO, the default
>>> converter (JSONConverter)  uses the Jackson JSON library and converts
>>> the instance of your POJO into a JSON string and stores it in Riak.
>>>
>>> If you change that POJO class and store more things, the resulting
>>> JSON is obviously going to be different (in your case having an
>>> additional field named "minty").
>>>
>>> When doing Map/Reduce, whatever JavaScript or Erlang functions you
>>> provide are executing in Riak and being given the data stored in Riak
>>> (the JSON you stored); they have no connection to Java.
>>>
>&

Re: Java Client with POJOs

2013-11-11 Thread Brian Roach
Michael -

You have something stored in that bucket that isn't the JSON you're
expecting when you run your second map/reduce. As I mentioned, there's
nothing special about how the Java client works; it just serializes
the POJO instance using Jackson.

My suggestion would be using curl / your browser (or the Java client)
and seeing what that is; listing the keys and checking the contents.

I notice you're using the ".withoutFetch()" option when storing;
that's guaranteed to create a sibling if you have allow_mult=true set
in the bucket. If that's the case then that behavior is expected; both
versions are stored in Riak.

Also worth noting is that if you've recently deleted something
(explicitly via a delete operation) it's very likely to get a
tombstone passed to map/reduce.  If you're doing explicit deletes from
Riak you need to check the object metadata for the
<<"X-Riak-Deleted">> header being true, and then ignore that object in
your map function.

- Roach


On Mon, Nov 11, 2013 at 12:46 PM, Michael Guymon
 wrote:
> Hi Roach,
>
> Thanks for taking a moment to give me a hand with this. Let me try and be a
> bit more clear on what I am trying to figure out. My first steps are a Class
> Account:
>
> public class Account implements Serializable {
> private String email;
> }
>
> Storing the account via
>
> myBucket.store("key", account).withoutFetch().execute();
>
> then retrieving it with a map reduce using JS, along the lines of:
>
> var accounts = [];
> for( i=0; i<values.length; i++) {
>   if ( values[i].email == 't...@test.net' ) {
> accounts.push(values[i]);
>   }
> }
> return accounts
>
> works as expected.
>
> Now I updated the Class Account to have the name property:
>
> public class Account implements Serializable {
> private String name;
> private String email;
> }
>
> and storing data to the same bucket for the same key, and attempting to
> Map Reduce for "name" I get a
>
> {"phase":1,"error":"[{<<\"lineno\">>,1},{<<\"message\">>,<<\"TypeError:
> values[i].name is
> undefined\">>},{<<\"source\">>,<<\"unknown\">>}]","input":null,"type":null,"stack":null}.
>
> If I change the bucket to a new one, the Map Reduce runs successfully
> without the above error.
>
> This is Riak 1.4.2 running on Ubuntu 13.04
>
> thanks,
> Michael
>
>
> On 11/11/2013 02:32 PM, Brian Roach wrote:
>
> Hi Michael,
>
> I'm somewhat confused by your question; map/reduce doesn't really have
> anything to do with your Java POJO/class.
>
> When using the Riak Java client and storing a POJO, the default
> converter (JSONConverter)  uses the Jackson JSON library and converts
> the instance of your POJO into a JSON string and stores it in Riak.
>
> If you change that POJO class and store more things, the resulting
> JSON is obviously going to be different (in your case having an
> additional field named "minty").
>
> When doing Map/Reduce, whatever JavaScript or Erlang functions you
> provide are executing in Riak and being given the data stored in Riak
> (the JSON you stored); they have no connection to Java.
>
> Can you expand on  "Now the map reduce fails for that the new
> property" with what exactly the problem is? It sounds like you have a
> problem with your JavaScript or Erlang function(s).
>
> Thanks!
> - Roach
>
>
> On Mon, Nov 11, 2013 at 12:07 PM, Michael Guymon
>  wrote:
>
> Hello,
>
> I have a (hopefully dumb) question about working with the Java client and
POJOs. I just started tinkering with Riak and have created a simple
> Account POJO and happily crammed it into a bucket "test1" and mapped reduced
> it (hooray). The problem starts when I updated the Class for Account, adding
> a new String property "minty".  Now the map reduce fails for that the new
> property in the bucket "test1". Seems like the POJO is always being
> serialized  to the format of the older Account class. If I create a new
> bucket, "test2", and cram and reduce anew, everything works again.
>
> I have been grepping around the docs, but have not been able to zero in on
> my issue. Am I doing something bone headed? Is it possible to update a
> bucket to support a modified POJO class?
>
> thanks,
> Michael
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Client with POJOs

2013-11-11 Thread Brian Roach
Hi Michael,

I'm somewhat confused by your question; map/reduce doesn't really have
anything to do with your Java POJO/class.

When using the Riak Java client and storing a POJO, the default
converter (JSONConverter)  uses the Jackson JSON library and converts
the instance of your POJO into a JSON string and stores it in Riak.
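
In essence the default conversion is just this (a sketch; "account" is
an instance of your POJO):

ObjectMapper mapper = new ObjectMapper();
String json = mapper.writeValueAsString(account); // e.g. {"email":"..."}
// that JSON string becomes the object's value in Riak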

If you change that POJO class and store more things, the resulting
JSON is obviously going to be different (in your case having an
additional field named "minty").

When doing Map/Reduce, whatever JavaScript or Erlang functions you
provide are executing in Riak and being given the data stored in Riak
(the JSON you stored); they have no connection to Java.

Can you expand on  "Now the map reduce fails for that the new
property" with what exactly the problem is? It sounds like you have a
problem with your JavaScript or Erlang function(s).

Thanks!
- Roach


On Mon, Nov 11, 2013 at 12:07 PM, Michael Guymon
 wrote:
> Hello,
>
> I have a (hopefully dumb) question about working with the Java client and
> POJOs. I just started tinkering with Riak and have created a simple
> Account POJO and happily crammed it into a bucket "test1" and mapped reduced
> it (hooray). The problem starts when I updated the Class for Account, adding
> a new String property "minty".  Now the map reduce fails for that the new
> property in the bucket "test1". Seems like the POJO is always being
> serialized  to the format of the older Account class. If I create a new
> bucket, "test2", and cram and reduce anew, everything works again.
>
> I have been grepping around the docs, but have not been able to zero in on
> my issue. Am I doing something bone headed? Is it possible to update a
> bucket to support a modified POJO class?
>
> thanks,
> Michael
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Forcing Siblings to Occur

2013-11-08 Thread Brian Roach
On Fri, Nov 8, 2013 at 11:38 AM, Russell Brown  wrote:

> If you’re using a well behaved client like the Riak-Java-Client, or any other 
> that gets a vclock before doing a put, use whatever option stops that.

for (int i = 0; i < numReplicasWanted; i++) {
    bucket.store("key", "value").withoutFetch().execute();
}

:)

- Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: IRiakClient and fetchMeta

2013-11-07 Thread Brian Roach
Hi John -

You're looking for headOnly() in the FetchObject. Simplest example:

bucket.fetch("key").headOnly().execute();

Thanks,
-Roach

On Thu, Nov 7, 2013 at 5:05 PM, JohnP  wrote:
> Hi -
>
> I am using the java client library for Riak and trying to determine if there
> is an equivalent way to query only for metadata with IRiakClient in the same
> manner one can with the older RiakClient?
>
> RiakClient has the following method: fetchMeta(String bucket, String key)
>
> Thanks in advance!
>
> - John
>
>
>
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/IRiakClient-and-fetchMeta-tp4029723.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Keys that won't disappear from indexes

2013-11-05 Thread Brian Roach
Worth noting here; the current Java client is entirely UTF-8 centric
and is explicitly converting those bytes to UTF-8 strings, so yes ...
that's probably an issue here if I'm understanding things correctly.

Almost everything is copied to/from the protocol buffer message to
Java Strings using the ByteString.copyFromUtf8() and
ByteString.toStringUtf8() methods.
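
That round trip is lossy for arbitrary binary keys; a small sketch of
the problem (the two bytes are just one example of an invalid UTF-8
sequence):

byte[] raw = new byte[] { (byte) 0xC3, (byte) 0x28 }; // not valid UTF-8
String s = ByteString.copyFrom(raw).toStringUtf8();   // invalid bytes become U+FFFD
byte[] back = ByteString.copyFromUtf8(s).toByteArray();
// back != raw, so the key you sent isn't the key that gets used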

This is actually something that is addressed in the new 2.0 Java
client Dave and I are working on.

Thanks,
- Roach

On Tue, Nov 5, 2013 at 5:40 PM, Toby Corkindale
 wrote:
> On 06/11/13 11:30, Evan Vigil-McClanahan wrote:
>>
>> You can replace int_to_bin with int_to_str to make it easier to debug
>> in the future, I suppose.  I am not sure how to get them to be fetched
>> as bytes, without maybe altering the client.
>>
>> You could just attach to the console and run whatever listing command
>> you're running there, which would give you the answer as unfiltered
>> erlang binaries, which are easy to understand.
>
>
> Ah, I'm really not familiar enough with Erlang and Riak to be doing that.
> Which API applies to console commands? I'll take a look. (Is it just the
> same as the Erlang client?)
>
>
>
>> Is this easily replicable on a new cluster?
>
>
> I think it should be -- the only difference over default configuration is
> that LevelDB is configured as the default backend.
> Run basho_bench with the pbc-client test to generate the initial keys and
> you should be set.
>
>
> T
>
>> On Tue, Nov 5, 2013 at 4:17 PM, Toby Corkindale
>>  wrote:
>>>
>>> Hi Evan,
>>> These keys were originally created by basho-bench, using:
>>> {key_generator, {int_to_bin, {uniform_int, 1}}}.
>>>
>>> Of the 10k keys, it seems half could be removed, but not the other half.
>>>
>>> Now I've tried storing keys with the same key as the un-deleteable ones,
>>> waiting a minute, and then deleting them again.. this isn't seeming to
>>> help!
>>>
>>> I don't know if it's significant, but I'm working with the Java client
>>> here
>>> (protocol buffers). I note that the bad keys are basically just bytes,
>>> not
>>> actual ascii strings, and they do contain nulls.
>>>
>>> Actually, here's something I just noticed -- the keys I'm getting from
>>> the
>>> index are repeating! It's the same 39 keys, repeated 128 times.
>>>
>>> O.o
>>>
>>> Are there any known bugs in the PBC interface when it comes to binary
>>> keys?
>>> I know the HTTP interface just crashes out completely.
>>>
>>> I'm fetching the keys in a manner that returns strings; is there a way to
>>> fetch them as bytes? Maybe that would work better; I'm wondering if the
>>> client is attempting to convert the bytes into unicode strings and
>>> dropping
>>> invalid characters?
>>>
>>>
>>> On 05/11/13 03:44, Evan Vigil-McClanahan wrote:


 Hi Toby.

 It's possible, since they're stored separately, that the objects were
 deleted but the indices were left in place because of some error (e.g.
 the operation failed for some reason between the object removal and
 the index removal).  One of the things on the feature list for the
 next release is AAE of index values, which should take care of this
 case.  This is really rare, but not unknown.  It'd be interesting to
 know how you ended up with so many.

 In the mean time, the only way I can think of to get rid of them
 (other than deleting them from the console, which would require taking
 nodes down and a lot of manual effort), would be to write another
 value that would have the same index, then delete it, which should
 normally succeed.

 I'll ask around to see if there is anything that might work better.
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Timeout problems, Riak Python Client with protocol buffers

2013-11-05 Thread Brian Roach
On Tue, Nov 5, 2013 at 1:20 PM, finkle mcgraw  wrote:

> There's a load balancer between the server running the Riak Python Client
> and the actual Riak nodes. Perhaps this socket error is related to some
> configuration of that load balancer?

That's what it looks like. Normally the PB connection to Riak is only
ever closed from the server side if a node were to go down; we don't
time out idle PB connections.

I'd have to dig into the python code but it looks like write failures
are ignored / don't trigger an exception. So, when the client goes to
read the expected response from the socket it discovers the socket is
closed. The error is saying it was expecting 4 bytes (The first 4
bytes of a response are a 32bit int that represents the length of the
message) and it received 0 (the socket had been closed from the remote
side).
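
The framing itself is trivial; a sketch of reading one response in Java
(assuming an open java.net.Socket to Riak):

DataInputStream in = new DataInputStream(socket.getInputStream());
int len = in.readInt();           // 32-bit big-endian: code + payload length
byte msgCode = in.readByte();     // the message code
byte[] payload = new byte[len - 1];
in.readFully(payload);            // the protocol buffers message body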

- Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client blocked at shutdown?

2013-11-05 Thread Brian Roach
That particular chunk of the old underlying PB client is ugly and
needs to die in a fire, but it shouldn't be possible for it to cause
Tomcat to get stuck.

That Timer is set to be a daemon therefore it can't prevent the JVM
from exiting. On top of that the only time it has a task scheduled is
during a streaming operation.

I suspect you're just seeing that thread still there because another
thread is the problem, and since it's a daemon it's going to be around
until ... the JVM isn't.

- Roach

On Tue, Nov 5, 2013 at 7:05 AM, Guido Medina  wrote:
> You may be right, we are still investigating other 2 threads, was worth to
> add this one to the list just in case, daemon threads by contract should go
> down with no issues when JDK is going down.
>
> Guido.
>
>
> On 05/11/13 13:59, Konstantin Kalin wrote:
>
> Strange. If you call shutdown of Riak client it shouldn't stop Tomcat
> shutdown anymore. This is that I learn from source code. I called the method
> in Servlet listener and never had issues after that. Before I had similar
> behavior like you have.
>
> Thank you,
> Konstantin.
>
> On Nov 5, 2013 5:31 AM, "Guido Medina"  wrote:
>>
>> That's done already, I'm looking at the source now, not sure of the
>> following timer needs to be cancelled when Riak client shutdown method is
>> called:
>>
>>
>> public abstract class RiakStreamClient<T> implements Iterable<T> {
>>
>> static Timer TIMER = new Timer("riak-stream-timeout-thread", true);
>> ...
>> ...
>> }
>>
>> Guido.
>>
>> On 05/11/13 13:29, Konstantin Kalin wrote:
>>
>> You need to call shutdown method of Riak client when you are stopping your
>> application.
>>
>> Thank you,
>> Konstantin.
>>
>> On Nov 5, 2013, at 5:06, Guido Medina  wrote:
>>
>> Sorry, I meant "stopping Tomcat from shutting down properly"...I must have
>> been thinking of some FPS night game.
>>
>> On 05/11/13 13:04, Guido Medina wrote:
>>
>> Hi,
>>
>> We are tracing some threads at our webapp which are stopping Tomcat from
>> shooting down properly, one of them seems to be related with Riak Java
>> client, here is the repeating stack trace once all services have been
>> stopped properly:
>>
>> Thread Name: riak-stream-timeout-thread
>> State: in Object.wait()
>> Java Stack trace:
>> at java.lang.Object.wait(Native Method)
>> - waiting on [0x0004bb4001a0] (a java.util.TaskQueue)
>> at java.lang.Object.wait(Object.java:503)
>> at java.util.TimerThread.mainLoop(Timer.java:526)
>> - locked [0x0004bb4001a0] (a java.util.TaskQueue)
>> at java.util.TimerThread.run(Timer.java:505)
>>
>>
>> Thanks for the help,
>>
>> Guido.
>>
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client not returning deleted sibling

2013-10-07 Thread Brian Roach
Daniel -

Unfortunately returning the body from a store operation may not
reflect all the replicas (and in the case of a concurrent write on two
different nodes "may not" really means  "probably doesn't").

If you do a subsequent fetch after sending both your writes you'll get
back a single vclock with siblings.
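
In other words, to see the merged result do the fetch through your
DomainBucket after both stores complete; a minimal sketch:

MyDomainObject merged = domainBucket.fetch("key");
// your ConflictResolver runs here and is handed both siblings, along
// with the single vclock to use for subsequent stores or deletes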

Thanks,
- Roach

On Mon, Oct 7, 2013 at 12:37 PM, Daniel Iwan  wrote:
> Hi Brian
>
> Thanks for update.
> I'm using 1.1.3 now and still have some issues sibling related
>
> Two clients are updating the same key.
> Updated is my custom meta field, which should be merged to contain values
> from both clients (set)
> I see both client are doing fetch, resolving sibling (only 1 i.e. no
> siblings), apply mutation (their own values for meta field). After that
> object is converted using fromDomain() in my converter using vclock provided
> Both nodes use vclock
> 6bce61606060cc60ca05521cf385ab3e05053d2dc8604a64cc6365589678fc345f1600
>
> So far so god.
> But the next step is toDomain (which is pare of Store I think since I'm
> using withBody) and looks like each node contains info only about
> it own changes.
> Client one sees vclock
> 6bce61606060cc60ca05521cf385ab3e05053d2dc8604a64ca6365589678fc345f1600
> Client 2 sees vclock
> 6bce61606060ca60ca05521cf385ab3e05053d2dc8604a64ca6365589678fc341f548a4d4adb032ac508945a0e92ca0200
>
> Both of vclocks are different than original vclock given during store, which
> I assume means RIak accepted write.
> Resolve is called on both machines but there is only one sibling.
>
> I guess the fact that I'm changing only meta field should not matter here
> and I should see 2 siblings?
> allow_multi is of course true and lastWriteWins is false on that bucket
>
> Any help much appreciated
>
>
> Regards
> Daniel
>
>
>
>
>
>
>
>
>
>
>
>
> On 4 October 2013 21:41, Brian Roach  wrote:
>>
>> Hey -
>>
>> I'm releasing 1.1.3 and 1.4.2 but it'll take a while for them to get
>> staged at maven central so I can post an "official" release to the
>> mailing list.
>>
>> I've gone ahead and uploaded the jar-with-dependencies to the usual
>> place for you-
>>
>>
>> http://riak-java-client.s3.amazonaws.com/riak-client-1.1.3-jar-with-dependencies.jar
>>
>> It fixes up the DomainBucket stuff and the JSONConverter.
>>
>> Thanks,
>> - Roach
>>
>> On Fri, Oct 4, 2013 at 2:58 AM, Daniel Iwan  wrote:
>> > Thanks Brian for putting fix together so quickly.
>> >
>> > I think I found something else though.
>> > In JSONConverter I don't see vclock being set in toDomain() when
>> > converting
>> > deleted sibling?
>> > That vclock should be used for following delete if I understood it
>> > correctly?
>> >
>> > Also where can I download latest build? I tried
>> >
>> > http://riak-java-client.s3.amazonaws.com/riak-client-1.1.3-jar-with-dependencies.jar
>> > but access is denied
>> >
>> > Cheers
>> > Daniel
>> >
>> >
>> > On 3 October 2013 19:36, Brian Roach  wrote:
>> >>
>> >> On Thu, Oct 3, 2013 at 10:32 AM, Daniel Iwan 
>> >> wrote:
>> >> > Thanks Brian for quick response.
>> >> >
>> >> > As a side question, what is the best way to delete such an object
>> >> > i.e.
>> >> > once
>> >> > I know one of the siblings has 'deleted' flag true because I fetched
>> >> > it?
>> >> > Should I just use DomainBucket.delete(key) without providing any
>> >> > vclock?
>> >> > Would it wipe it from Riak or create yet another sibling?
>> >>
>> >> You should always use vclocks when possible, which in the case it is.
>> >> There are additional issues around doing the delete without a vclock
>> >> and if there's concurrently a store operation occurring.
>> >>
>> >> Ideally you should look at why you're getting that tombstone sibling.
>> >> If it's simply a case of high write concurrency and you're using
>> >> vclocks with your writes, then there's not much you can do except
>> >> resolve it later (without changing how you're using the DB)... but
>> >> usually these things are caused by writes without a vclock.
>> >>
>> >> Thanks,
>> >> - Roach
>> >>
>> >>
>> >>
>> >>
>> >> &g

Re: Riak Java client not returning deleted sibling

2013-10-04 Thread Brian Roach
Daniel -

I'll get 1.1.3 out today; I'll post to the list. There's actually a
couple other small things I need to squeeze in since we're going to do
a release.

Re: Setting the vclock on the tombstone in JSONConverter, you're
right. It would only be an issue if you only had a tombstone, but it
should be there.

Thanks,
- Roach

On Fri, Oct 4, 2013 at 2:58 AM, Daniel Iwan  wrote:
> Thanks Brian for putting fix together so quickly.
>
> I think I found something else though.
> In JSONConverter I don't see vclock being set in toDomain() when converting
> deleted sibling?
> That vclock should be used for following delete if I understood it
> correctly?
>
> Also where can I download latest build? I tried
> http://riak-java-client.s3.amazonaws.com/riak-client-1.1.3-jar-with-dependencies.jar
> but access is denied
>
> Cheers
> Daniel
>
>
> On 3 October 2013 19:36, Brian Roach  wrote:
>>
>> On Thu, Oct 3, 2013 at 10:32 AM, Daniel Iwan 
>> wrote:
>> > Thanks Brian for quick response.
>> >
>> > As a side question, what is the best way to delete such an object i.e.
>> > once
>> > I know one of the siblings has 'deleted' flag true because I fetched it?
>> > Should I just use DomainBucket.delete(key) without providing any vclock?
>> > Would it wipe it from Riak or create yet another sibling?
>>
>> You should always use vclocks when possible, which in the case it is.
>> There are additional issues around doing the delete without a vclock
>> and if there's concurrently a store operation occurring.
>>
>> Ideally you should look at why you're getting that tombstone sibling.
>> If it's simply a case of high write concurrency and you're using
>> vclocks with your writes, then there's not much you can do except
>> resolve it later (without changing how you're using the DB)... but
>> usually these things are caused by writes without a vclock.
>>
>> Thanks,
>> - Roach
>>
>>
>>
>>
>> >
>> > Regards
>> > Daniel
>> >
>> >
>> > On 3 October 2013 17:20, Brian Roach  wrote:
>> >>
>> >> Daniel -
>> >>
>> >> Yeah, that is the case. When the ability to pass fetch/store/delete
>> >> meta was added to DomainBucket way back when it appears that was
>> >> missed.
>> >>
>> >> I'll add it and forward-port to 1.4.x as well and cut new jars. Should
>> >> be avail by tomorrow morning at the latest.
>> >>
>> >> Thanks!
>> >> - Roach
>> >>
>> >> On Thu, Oct 3, 2013 at 9:38 AM, Daniel Iwan 
>> >> wrote:
>> >> > Hi I'm using Riak 1.3.1 and Java client 1.1.2
>> >> >
>> >> > Using http and curl I see 4 siblings for an object one of which has
>> >> > X-Riak-Deleted: true
>> >> > but when I'm using Java client with DomainBucket my Converter's
>> >> > method
>> >> > toDomain is called only 3 times.
>> >> >
>> >> > I have set the property
>> >> >
>> >> > builder.returnDeletedVClock(true);
>> >> >
>> >> > on my DomainBuilder which I keep reusing for all queries and store
>> >> > operations (I guess that's good practice btw.?)
>> >> >
>> >> >
>> >> > I run that under debugger and it seems raw client sees 4 siblings but
>> >> > passes
>> >> > over only 3 due to bug (I think) in DomainBucket.fetch() method which
>> >> > should
>> >> > have
>> >> >
>> >> > if (fetchMeta.hasReturnDeletedVClock()) {
>> >> >
>> >> >
>> >> > so.returnDeletedVClock(fetchMeta.getReturnDeletedVClock());
>> >> >
>> >> > }
>> >> >
>> >> > at the end, as store() method has.
>> >> >
>> >> > Could you confirm or I'm I completely wrong?
>> >> >
>> >> >
>> >> > Regards
>> >> >
>> >> > Daniel
>> >> >
>> >> >
>> >> > ___
>> >> > riak-users mailing list
>> >> > riak-users@lists.basho.com
>> >> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> >> >
>> >
>> >
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client not returning deleted sibling

2013-10-03 Thread Brian Roach
On Thu, Oct 3, 2013 at 10:32 AM, Daniel Iwan  wrote:
> Thanks Brian for quick response.
>
> As a side question, what is the best way to delete such an object i.e. once
> I know one of the siblings has 'deleted' flag true because I fetched it?
> Should I just use DomainBucket.delete(key) without providing any vclock?
> Would it wipe it from Riak or create yet another sibling?

You should always use vclocks when possible, which in this case it is.
There are additional issues around doing the delete without a vclock,
especially if a store operation is occurring concurrently.

Ideally you should look at why you're getting that tombstone sibling.
If it's simply a case of high write concurrency and you're using
vclocks with your writes, then there's not much you can do except
resolve it later (without changing how you're using the DB)... but
usually these things are caused by writes without a vclock.

Thanks,
- Roach




>
> Regards
> Daniel
>
>
> On 3 October 2013 17:20, Brian Roach  wrote:
>>
>> Daniel -
>>
>> Yeah, that is the case. When the ability to pass fetch/store/delete
>> meta was added to DomainBucket way back when it appears that was
>> missed.
>>
>> I'll add it and forward-port to 1.4.x as well and cut new jars. Should
>> be avail by tomorrow morning at the latest.
>>
>> Thanks!
>> - Roach
>>
>> On Thu, Oct 3, 2013 at 9:38 AM, Daniel Iwan  wrote:
>> > Hi I'm using Riak 1.3.1 and Java client 1.1.2
>> >
>> > Using http and curl I see 4 siblings for an object one of which has
>> > X-Riak-Deleted: true
>> > but when I'm using Java client with DomainBucket my Converter's method
>> > toDomain is called only 3 times.
>> >
>> > I have set the property
>> >
>> > builder.returnDeletedVClock(true);
>> >
>> > on my DomainBuilder which I keep reusing for all queries and store
>> > operations (I guess that's good practice btw.?)
>> >
>> >
>> > I run that under debugger and it seems raw client sees 4 siblings but
>> > passes
>> > over only 3 due to bug (I think) in DomainBucket.fetch() method which
>> > should
>> > have
>> >
>> > if (fetchMeta.hasReturnDeletedVClock()) {
>> >
>> > so.returnDeletedVClock(fetchMeta.getReturnDeletedVClock());
>> >
>> > }
>> >
>> > at the end, as store() method has.
>> >
>> > Could you confirm or I'm I completely wrong?
>> >
>> >
>> > Regards
>> >
>> > Daniel
>> >
>> >
>> > ___
>> > riak-users mailing list
>> > riak-users@lists.basho.com
>> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> >
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client not returning deleted sibling

2013-10-03 Thread Brian Roach
Daniel -

Yeah, that is the case. When the ability to pass fetch/store/delete
meta was added to DomainBucket way back when it appears that was
missed.

I'll add it and forward-port to 1.4.x as well and cut new jars. Should
be avail by tomorrow morning at the latest.

Thanks!
- Roach

On Thu, Oct 3, 2013 at 9:38 AM, Daniel Iwan  wrote:
> Hi I'm using Riak 1.3.1 and Java client 1.1.2
>
> Using http and curl I see 4 siblings for an object one of which has
> X-Riak-Deleted: true
> but when I'm using Java client with DomainBucket my Converter's method
> toDomain is called only 3 times.
>
> I have set the property
>
> builder.returnDeletedVClock(true);
>
> on my DomainBuilder which I keep reusing for all queries and store
> operations (I guess that's good practice btw.?)
>
>
> I run that under debugger and it seems raw client sees 4 siblings but passes
> over only 3 due to bug (I think) in DomainBucket.fetch() method which should
> have
>
> if (fetchMeta.hasReturnDeletedVClock()) {
>
> so.returnDeletedVClock(fetchMeta.getReturnDeletedVClock());
>
> }
>
> at the end, as store() method has.
>
> Could you confirm or I'm I completely wrong?
>
>
> Regards
>
> Daniel
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client bug?

2013-09-25 Thread Brian Roach
That option is for the connection timeout (i.e. when the connection
pool makes a new connection to Riak).

You can set the read timeout on the socket with the aforementioned
withRequestTimeoutMillis()

The default is the Java default, which is to say it'll block on the
socket read until either there's data to read or the remote side
closes the socket. That would at least get the client "unstuck".
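
For example (a minimal sketch; the timeout values are illustrative):

PBClientConfig conf = new PBClientConfig.Builder()
    .withConnectionTimeoutMillis(1000) // TCP connect only
    .withRequestTimeoutMillis(5000)    // SO_TIMEOUT; a blocked read gives up after 5s
    .build();
IRiakClient client = RiakFactory.newClient(conf);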

This, however, doesn't explain/solve the real issue you're describing
which is that Riak is hanging up and not sending data.

Someone else will need to chime in on that - are you seeing anything
in the Riak logs?

- Roach

On Wed, Sep 25, 2013 at 12:11 PM, Guido Medina  wrote:
> Like this: withConnectionTimeoutMillis(5000).build();
>
> Guido.
>
>
> On 25/09/13 18:08, Brian Roach wrote:
>>
>> Guido -
>>
>> When you say "the client is configured to time out" do you mean you're
>> using PB and you set the SO_TIMEOUT on the socket via the
>> PBClientConfig's withRequestTimeoutMillis()?
>>
>> - Roach
>>
>> On Wed, Sep 25, 2013 at 5:54 AM, Guido Medina 
>> wrote:
>>>
>>> Hi,
>>>
>>> Streaming 2i indexes is not timing out, even though the client is
>>> configured
>>> to timeout, this coincidentally is causing the writes to fail (or the
>>> opposite?), is there anything elemental that could "lock" (I know the
>>> locking concept in Erlang is out of the equation so LevelDB?) something
>>> in
>>> Riak while trying to stream a 2i index?
>>>
>>> Basically, once our cluster copier (which runs every two minutes) gets
>>> into this state it never exits (no timeout) and our app just starts
>>> writing slowly (over a minute to write a key)
>>>
>>> Not sure what's going on.
>>>
>>> Guido.
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client bug?

2013-09-25 Thread Brian Roach
Guido -

When you say "the client is configured to time out" do you mean you're
using PB and you set the SO_TIMEOUT on the socket via the
PBClientConfig's withRequestTimeoutMillis()?

- Roach

On Wed, Sep 25, 2013 at 5:54 AM, Guido Medina  wrote:
> Hi,
>
> Streaming 2i indexes is not timing out, even though the client is configured
> to timeout, this coincidentally is causing the writes to fail (or the
> opposite?), is there anything elemental that could "lock" (I know the
> locking concept in Erlang is out of the equation so LevelDB?) something in
> Riak while trying to stream a 2i index?
>
> Basically, once our cluster copier (which runs every two minutes) gets into
> this state it never exits (no timeout) and our app just starts writing
> slowly (over a minute to write a key)
>
> Not sure what's going on.
>
> Guido.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Cluster performance.

2013-09-12 Thread Brian Roach
Victor,

What I suspect is happening is that only one of the IP addresses you
are passing to addHosts() is actually accepting connections /
reachable.

The way the ClusterClient works, an operation is going to be retried
on failure (up to three times by default) and as long as the node that
actually is reachable / accepting connections gets tried, this is
going to be hidden from you.

You can test this by changing the retry behavior when you query. For example:

Bucket b = client.fetchBucket("bucket").execute();

for (int i = 0; i < 5; i++)
{
b.fetch("key").withRetrier(new DefaultRetrier(0)).execute();
}

Thanks,
- Roach




On Thu, Sep 12, 2013 at 2:40 PM, Victor  wrote:
> Hi,
>
> We are currently testing Riak as a potential replacement for a data warehouse.
> Programmers were pretty happy with single-node operations, but as we switched
> to testing of a cluster, performance of same applications dropped
> significantly with only changes in code:
>
> Configuration conf = new
> PBClientConfig.Builder().withHost("192.168.149.21").withPort(8087).build();
>
> IRiakClient client = RiakFactory.newClient(conf);
>
>
>
> to
>
>
>
> PBClusterConfig clusterConfig = new PBClusterConfig(20);
>
> PBClientConfig clientConfig = PBClientConfig.defaults();
>
> clusterConfig.addHosts(clientConfig, "192.168.*.*","192.168.*.*");
>
> IRiakClient client = RiakFactory.newClient(clusterConfig);
>
>
>
> At the same time, I noticed that if I use riak-admin status | grep node*,
> node_gets_total and node_puts_total rise only on one of the clustered
> machines.
>
> Is there any way to monitor data distribution, activity and resources of
> nodes in cluster? I saw multiple applications, but usually they provide only
> bucket operations and status.
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Including @RiakKey and @RiakIndexes in the stored JSON

2013-08-22 Thread Brian Roach
No; they are explicitly excluded from serialization because they're metadata.

The 2i indexes are returned in the http headers, and the key you kinda
already have to know to make the query in the first place.
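
To illustrate (a minimal sketch):

class Item
{
    @RiakKey public String key;                            // metadata; not serialized
    @RiakIndex(name = "category") public String category;  // sent as a 2i entry, not JSON
    public String name;                                    // serialized as normal
}

// stored JSON value: {"name":"..."}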

Thanks,
Roach

On Thu, Aug 22, 2013 at 9:32 AM, mex  wrote:
> If I declare @RiakKey and 2i indexes (@RiakIndex) for some fields in my
> "Item" class, then those fields will not be displayed when querying the
> record over the browser (ie. http://localhost/riak/myBucket/myKey1).
>
> I have tried adding the annotation @JSONInclude but does not seem to change
> the behaviour. Is there a way around it?
>
> Thanks
>
>
>
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/Including-RiakKey-and-RiakIndexes-in-the-stored-JSON-tp4028933.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Simple explanation for 2i using the Java Client

2013-08-19 Thread Brian Roach
If you were to actually try it, you'd find it throws an exception
telling you we don't support array types with the @RiakIndex
annotation.

You need:
@RiakIndex(name = "cars") private Set<String> cars;

and adjust things accordingly. At that point, if I understand your
question, yes - your index query would then return the keys for all
objects (including the one shown) that have "corsa" in their set of
cars.
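
For completeness, the index query itself would look something like this
(a sketch; the bucket name "people" is just an example):

Bucket bucket = client.fetchBucket("people").execute();
List<String> keys = bucket.fetchIndex(BinIndex.named("cars"))
                          .withValue("corsa")
                          .execute();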

Thanks,
- Roach


On Mon, Aug 19, 2013 at 11:02 AM, rsb  wrote:
> I am having a bit of a hard time finding resources that explain 2i
> specifically with the Riak Java Client. Would anyone be kind enough to point
> me to a straightforward example that I will be able to expand from.
>
> Assuming the following object:
>
>
>
> How can I create a secondary index on the cars that '/Mr Riak/' owns. And of
> course, how could I query my bucket to retrieve all the people that own a
> '/Corsa/'.
>
>
>
>
> Thanks
>
>
>
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/Simple-explanation-for-2i-using-the-Java-Client-tp4028895.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client - conflict resolver on both fetch() and store()?

2013-08-11 Thread Brian Roach
I think this has somewhat hijacked the original poster's question, but ...

doNotFetch() was never meant to be used with a Mutation or
returnBody(). It was an option added due to the aforementioned
specific feature requests.

The reason it exists is for this workflow and this workflow alone:

1) Fetch something from Riak, resolving any conflicts.
2) Do something with that data and change (mutate) the value / metadata
3) Store the changes back to Riak without specifying a Mutation or
ConflictResolver and avoiding the fetch.

It is assumed that any siblings created are dealt with in a subsequent
fetch using this workflow.

The most basic example:

IRiakObject ro = bucket.fetch("foo").execute();
ro.setValue("Some new value");
bucket.store(ro).withoutFetch().execute();

The DefaultBucket.store() in that case sets up the StoreObject to use
the PassThroughConverter and the ClobberMutation that simply returns
the passed in IRiakObject.

The Javadoc for doNotFetch() is very specific about what happens with
the Mutation:

* 1) null will be passed to the {@link Mutation} object (if
* you are using the default {@link ClobberMutation} this is fine).

That said, using a Mutation other than ClobberMutation with
doNotFetch() really makes little sense. The option was added for
people who have already fetched and mutated their data.

- Roach

On Sun, Aug 11, 2013 at 1:54 PM, Guido Medina  wrote:
> I hate it too but my point still stands, if DO NOT FETCH, what's the target
> object the mutation should work with? Isn't it the passed object instead?
>
> Anyway, I sent a pull request which hopefully applies a better semantics:
> https://github.com/basho/riak-java-client/pull/271
>
> Thanks,
>
> Guido.
>
>
> On 11/08/13 20:45, YN wrote:
>
> Guido,
>
> In this case it appears that fetching is enabled i.e. if (!doNotFetch) i.e.
> if NOT doNotFetch... so basically doNotFetch = false (fetching is true /
> enabled).
>
> I hate the double negative cases since it's easy to get confused / miss the
> logic that was intended.
> YN shared this with you.
> Re: Java client - conflict resolver on both fetch() and store()?
> Via gmane.comp.db.riak.user
>
> Brian,
>
> In StoreObject's execute() method, this condition, is it a bug or intended?
>
>  ...
>  ...
>  if (!doNotFetch) {
>  resolved = fetchObject.execute();
>  vclock = fetchObject.getVClock();
>  }
>  ...
>  ...
>
> My reasoning is: if do not fetch then shouldn't the resolved object be
> the one passed? I'm doing some tests and if I do store a mutation
> returning the body without fetching, I get a new mutated object and not
> the one I passed + mutation. So I'm wondering if that was the original
> intention.
>
> Thanks,
>
> Guido.
>
> On 11/08/13 18:49, Guido Medina wrote:
>
> ___
> riak-users mailing list
> riak-users< at >lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
> Sent from InoReader
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client - conflict resolver on both fetch() and store()?

2013-08-11 Thread Brian Roach
Matt,

The original design of StoreObject (which is what Bucket.store()
returns) was that it would encapsulate the entire read/modify/write
cycle in a very Java-y / enterprise-y way. This is why it takes a
Resolver and a Mutator; it does a fetch, resolves conflicts, passes
the resolved object to the Mutator, then stores the result of the
mutation to Riak.

Several users put in requests to make the fetch/resolve portion of
that optional as they had a workflow where that wasn't ideal and
didn't wanted to store a previously fetched value without fetching it
again. This is why the 'withoutFetch()' method was introduced along
with the @RiakVClock annotation.

When using withoutFetch() no fetch is performed, and no conflict
resolution occurs. Any ConflictResolver you pass in is simply not used
/ ignored ... except possibly if you're using returnBody()

Your code here:

bucket.store(record).returnBody(true).withoutFetch().withResolver(myConflictResolver);

is not doing a fetch or conflict resolution before storing your data.
It's just storing `record` in Riak. If that POJO has a vclock from a
previous fetch available via a @RiakVClock annotated field it will be
used. Otherwise, you're doing a store without a vclock.
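
(For reference, a minimal sketch of such a POJO; the field names are
arbitrary:)

public class Record
{
    @RiakKey public String id;
    @RiakVClock public VClock vclock; // populated on fetch, sent with the store
    public String value;
}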

I suspect where your confusion is stemming from is that you've also
specified 'returnBody()' and you're creating a sibling in that store
operation. When that's the case the "body" is going to be multiple
objects (all the siblings) which require resolution as
StoreObject.execute() only returns a single object back to the caller.
The same Resolver used if you had done the pre-fetch is employed. If
you haven't passed in a Resolver then the DefaultResolver is used
which ... isn't really a "resolver" - it simply passes through an
object if there's only one, or throws an exception if there are multiple
(siblings) present.

Thanks,
- Roach




On Sun, Aug 11, 2013 at 5:41 AM, Guido Medina  wrote:
> Hi Matt,
>
> Like Sean said, you should have a mutator if you are dealing with conflict
> resolution in domain objects; a good side effect of using a mutator is that
> the Riak Java client will fetch-modify-write, so your conflict resolver will
> be called once(?); if you don't use mutators, you get the effect you are
> describing(?), or in other words, you have to treat the operations as
> non-atomic and do things twice.
>
> There are two interfaces for mutations: Mutation and
> ConditionalStoreMutation, the 2nd interface will write only if the object
> was actually mutated, you must return true or false to state if it was
> mutated or not, which can be helpful if you are "mutating" an object and you
> discover the change you are requesting to make was already in place, then to
> save I/O, siblings creation and all implied on a write operation you decide
> not to write back.
>
> Mutation and conflict resolution are two separate concerns, but if you
> specify a mutator and a conflict resolver, conflict resolution will happen
> after the object is fetched and it is ready to be modified, which will
> emulate an atomic operation if you use a domain object.
>
> If you use a raw RiakObject, you must fetch, resolve the conflicts and on
> the write operation pass the VClock which is not a trivial nor easy to
> understand in code.
>
> HTH,
>
> Guido.
>
>
>
> On 11/08/13 03:32, Sean Cribbs wrote:
>
> I'm sure Roach will correct me if I'm off-base, but I believe the store
> operation does a fetch and resolve before writing. I think the ideal way to
> do that is to create a Mutation (T being your POJO) as well, in which
> case it's less of a "store" and more of a "fetch-modify-write". The way to
> skip the fetch/modify is to use the withoutFetch() option on the operation
> builder.
>
>
> On Sat, Aug 10, 2013 at 6:50 PM, Matt Painter  wrote:
>>
>> Hi,
>>
>> I've just rolled up my sleeves and have started to make my application
>> more robust with conflict resolution.
>>
>> I am currently using a @RiakVClock in my POJO (I need to think more about
>> whether the read/modify/write approach is preferable or whether I'd have to
>> rearchitect things).
>>
>> I read in the Riak Handbook the recommendation that conflicts are best
>> resolved on read -  not write - however the example App.java snipping on the
>> Storing data in Riak page in the Java client's doco uses a resolver on both
>> the store() and fetch() operations.
>>
>> Indeed, if I don't specify my conflict resolver in my store(), things blow
>> up (in my unit test, mind - I'm still getting my head around the whole area
>> so my test may be a bit contrived).
>>
>> However when I use it in both places, my conflicts are being resolved
>> twice. Is this anticipated?
>>
>> My store is:
>>
>>
>> bucket.store(record).returnBody(true).withoutFetch().withResolver(myConflictResolver);
>>
>> and my fetch is:
>>
>> bucket.fetch(id, Record.class).withResolver(myConflictResolver).execute();
>>
>> The order of operations in my test is:
>>
>> Store new record
>> Fetch the record as firstRecord

Re: PB Java Client API 1.4 runtime exception

2013-08-07 Thread Brian Roach
On Wed, Aug 7, 2013 at 10:44 AM, rsb  wrote:
> I have tried updating my project to use the new PB 1.4, however during
> runtime I get the following exception:
> ...
> Any ideas what is causing the issue, and how can I resolve it? - Thanks.

Yes; don't do that.

The 1.4.0 version of the riak-pb jar is for the 1.4.x version of the
Java client. It won't nor isn't meant to work with any previous
version.

Thanks,
- Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: question about java client

2013-07-30 Thread Brian Roach
Paul,

Ugh ... ok, makes sense now. Jackson is like that. We actually had
someone do a PR for secondary indexes to allow method annotation, but
... we hadn't done it for links. It's now on the todo list.

Thanks,
- Roach

On Tue, Jul 30, 2013 at 3:37 PM, Paul Ingalls  wrote:
> I figured out what I was doing.  I had a getter/setter for the fields in
> addition to the fields themselves, since they were private.  I had to
> JsonIgnore the getters/setters since I couldn't tag them with the riak
> annotations.
>
> Paul Ingalls
> Founder & CEO Fanzo
> p...@fanzo.me
> @paulingalls
> http://www.linkedin.com/in/paulingalls
>
>
>
> On Jul 30, 2013, at 2:34 PM, Brian Roach  wrote:
>
> Paul,
>
> The annotated fields are not included in the Serialization using the
> JSONConverter (at least, not in the current version of the client; I
> think I did some fixes around that way back in like v1.0.7). If they
> are, you've got something odd going on in your domain object.
>
> Here's a (very basic) example:
>
> public class App3
> {
>
>public static void main(String[] args) throws RiakException
>{
>IRiakClient client = RiakFactory.pbcClient();
>Bucket b = client.fetchBucket("test_bucket").execute();
>
>MyPojo mp = new MyPojo();
>mp.key = "key0";
>mp.value = "This is my value";
>
>Set<RiakLink> links = new HashSet<RiakLink>();
>for (int i = 1; i < 4; i++)
>{
>RiakLink link = new RiakLink("test_bucket", "key" + i,
> "myLinkTag");
>links.add(link);
>}
>mp.links = links;
>b.store(mp).execute();
>
>mp = b.fetch("key0", MyPojo.class).execute();
>
>System.out.println(mp.key);
>System.out.println(mp.value);
>for (RiakLink link : mp.links)
>{
>System.out.println(link.getKey());
>}
>
>client.shutdown();
>}
>
> }
>
> class MyPojo
> {
>public @RiakKey String key;
>public @RiakLinks Collection<RiakLink> links;
>public String value;
>
> }
>
> ---
> That outputs:
>
> key0
> This is my value
> key2
> key3
> key1
>
> Checking it with curl shows it as it should be:
>
> roach$ curl -v localhost:8098/buckets/test_bucket/keys/key0
> * About to connect() to localhost port 8098 (#0)
> *   Trying ::1... Connection refused
> *   Trying 127.0.0.1... connected
> * Connected to localhost (127.0.0.1) port 8098 (#0)
>
> GET /buckets/test_bucket/keys/key0 HTTP/1.1
> User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4
> OpenSSL/0.9.8x zlib/1.2.5
> Host: localhost:8098
> Accept: */*
>
> < HTTP/1.1 200 OK
> < X-Riak-Vclock: a85hYGBgzGDKBVIcypz/fga+nNWUwZTInMfK4Lcq5zRfFgA=
> < Vary: Accept-Encoding
> < Server: MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact)
< Link: </buckets/test_bucket/keys/key1>; riaktag="myLinkTag",
</buckets/test_bucket/keys/key2>; riaktag="myLinkTag",
</buckets/test_bucket/keys/key3>; riaktag="myLinkTag",
</buckets/test_bucket>; rel="up"
> < Last-Modified: Tue, 30 Jul 2013 21:21:18 GMT
> < ETag: "1ikpRECrH40O93LxiTmnKz"
> < Date: Tue, 30 Jul 2013 21:21:57 GMT
> < Content-Type: application/json; charset=UTF-8
> < Content-Length: 28
> <
> * Connection #0 to host localhost left intact
> * Closing connection #0
> {"value":"This is my value"}
>
>
>
> On Tue, Jul 30, 2013 at 2:38 PM, Paul Ingalls  wrote:
>
> Hey Brian,
>
> After a bit of messing around, I'm now dropping objects into the correct
> bucket using the links annotation.  However,  I am noticing that the json is
> including the metadata from the domain object, i.e. things tagged with
> @RiakKey, @RiakIndex or @RiakLinks.  I was under the impression this data
> would be left out.  I wouldn't care a whole lot, but when I'm getting data
> back in via a fetch, the JSONConverter is crashing saying it doesn't know
> how to convert the RiakLink object since there isn't an appropriate
> constructor for it.
>
> Do I need to specifically @JsonIgnore fields tagged with one of the Riak
> tags?
>
> Thanks!
>
> Paul
>
>
> Paul Ingalls
> Founder & CEO Fanzo
> p...@fanzo.me
> @paulingalls
> http://www.linkedin.com/in/paulingalls
>
>
>
> On Jul 30, 2013, at 11:20 AM, Paul Ingalls  wrote:
>
> Ok, thats perfect.  I totally missed the annotation for links…
>
> Will give that a shot…
>
> Thanks!
>
> Paul Ingalls
> Founder & CEO Fanzo
> p...@fanzo.me
> @paulingalls
> http://www.linkedin.com/in/paulingalls
>

Re: question about java client

2013-07-30 Thread Brian Roach
Paul,

The annotated fields are not included in the Serialization using the
JSONConverter (at least, not in the current version of the client; I
think I did some fixes around that way back in like v1.0.7). If they
are, you've got something odd going on in your domain object.

Here's a (very basic) example:

public class App3
{
    public static void main(String[] args) throws RiakException
    {
        IRiakClient client = RiakFactory.pbcClient();
        Bucket b = client.fetchBucket("test_bucket").execute();

        MyPojo mp = new MyPojo();
        mp.key = "key0";
        mp.value = "This is my value";

        Set<RiakLink> links = new HashSet<RiakLink>();
        for (int i = 1; i < 4; i++)
        {
            RiakLink link = new RiakLink("test_bucket", "key" + i, "myLinkTag");
            links.add(link);
        }
        mp.links = links;
        b.store(mp).execute();

        mp = b.fetch("key0", MyPojo.class).execute();

        System.out.println(mp.key);
        System.out.println(mp.value);
        for (RiakLink link : mp.links)
        {
            System.out.println(link.getKey());
        }

        client.shutdown();
    }
}

class MyPojo
{
    public @RiakKey String key;
    public @RiakLinks Collection<RiakLink> links;
    public String value;
}

---
That outputs:

key0
This is my value
key2
key3
key1

Checking it with curl shows it as it should be:

roach$ curl -v localhost:8098/buckets/test_bucket/keys/key0
* About to connect() to localhost port 8098 (#0)
*   Trying ::1... Connection refused
*   Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8098 (#0)
> GET /buckets/test_bucket/keys/key0 HTTP/1.1
> User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 
> OpenSSL/0.9.8x zlib/1.2.5
> Host: localhost:8098
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Riak-Vclock: a85hYGBgzGDKBVIcypz/fga+nNWUwZTInMfK4Lcq5zRfFgA=
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact)
< Link: </buckets/test_bucket/keys/key1>; riaktag="myLinkTag",
</buckets/test_bucket/keys/key2>; riaktag="myLinkTag",
</buckets/test_bucket/keys/key3>; riaktag="myLinkTag",
</buckets/test_bucket>; rel="up"
< Last-Modified: Tue, 30 Jul 2013 21:21:18 GMT
< ETag: "1ikpRECrH40O93LxiTmnKz"
< Date: Tue, 30 Jul 2013 21:21:57 GMT
< Content-Type: application/json; charset=UTF-8
< Content-Length: 28
<
* Connection #0 to host localhost left intact
* Closing connection #0
{"value":"This is my value"}



On Tue, Jul 30, 2013 at 2:38 PM, Paul Ingalls  wrote:
> Hey Brian,
>
> After a bit of messing around, I'm now dropping objects into the correct
> bucket using the links annotation.  However,  I am noticing that the json is
> including the metadata from the domain object, i.e. things tagged with
> @RiakKey, @RiakIndex or @RiakLinks.  I was under the impression this data
> would be left out.  I wouldn't care a whole lot, but when I'm getting data
> back in via a fetch, the JSONConverter is crashing saying it doesn't know
> how to convert the RiakLink object since there isn't an appropriate
> constructor for it.
>
> Do I need to specifically @JsonIgnore fields tagged with one of the Riak
> tags?
>
> Thanks!
>
> Paul
>
>
> Paul Ingalls
> Founder & CEO Fanzo
> p...@fanzo.me
> @paulingalls
> http://www.linkedin.com/in/paulingalls
>
>
>
> On Jul 30, 2013, at 11:20 AM, Paul Ingalls  wrote:
>
> Ok, thats perfect.  I totally missed the annotation for links…
>
> Will give that a shot…
>
> Thanks!
>
> Paul Ingalls
> Founder & CEO Fanzo
> p...@fanzo.me
> @paulingalls
> http://www.linkedin.com/in/paulingalls
>
>
>
> On Jul 30, 2013, at 11:16 AM, Brian Roach  wrote:
>
> Paul,
>
> I'm not quite sure I understand what you're asking.
>
> If you do a fetch and have siblings each one is converted to your
> domain object using the Converter and then passed as a Collection to
> the ConflictResolver. Each sibling is going to include its links
> and/or indexes as long as the Converter is injecting them into the
> domain object and you can resolve them in the ConflictResolver.
>
> The default JSONConverter, for example, injects them into your domain
> object via annotations from the com.basho.riak.client.convert[1]
> package.
>
> Thanks,
> Brian Roach
>
> http://basho.github.io/riak-java-client/1.4.0/com/basho/riak/client/convert/package-summary.html
>
>
> On Tue, Jul 30, 2013 at 11:41 AM, Paul Ingalls  wrote:
>
> Newbie with Riak, and looking at the java client.
>
> Specifically, I've been digging into the domain mapping apis.  Looking into
> the code, it appears to me that, if I'm using links a bunch or even
> secondary indexes, that I could 

Re: question about java client

2013-07-30 Thread Brian Roach
Paul,

I'm not quite sure I understand what you're asking.

If you do a fetch and have siblings each one is converted to your
domain object using the Converter and then passed as a Collection to
the ConflictResolver. Each sibling is going to include its links
and/or indexes as long as the Converter is injecting them into the
domain object and you can resolve them in the ConflictResolver.

The default JSONConverter, for example, injects them into your domain
object via annotations from the com.basho.riak.client.convert[1]
package.

Thanks,
Brian Roach

http://basho.github.io/riak-java-client/1.4.0/com/basho/riak/client/convert/package-summary.html


On Tue, Jul 30, 2013 at 11:41 AM, Paul Ingalls  wrote:
> Newbie with Riak, and looking at the java client.
>
> Specifically, I've been digging into the domain mapping apis.  Looking into
> the code, it appears to me that, if I'm using links a bunch or even
> secondary indexes, that I could lose some data during the conflict
> resolution phase.  I see where links and other relevant user data gets
> cached during the conversion phase from the fetch and then patched back in
> during the conversion phase for the store.  However, it doesn't look like
> you have the opportunity during the resolution phase to merge metadata.
> Should I focus on using the raw client, or am I missing something?
>
> Thanks!
>
> Paul
>
>
>
>
> Paul Ingalls
> Founder & CEO Fanzo
> p...@fanzo.me
> @paulingalls
> http://www.linkedin.com/in/paulingalls
>
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: 2i timeouts in 1.4

2013-07-26 Thread Brian Roach
Sean -

The timeout isn't via a header, it's a query param -> &timeout=

You can also use stream=true to stream the results.
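
For example, to allow three minutes (the value is in milliseconds):

curl "http://127.0.0.1:8098/buckets/mybucket/index/test_bin/myval?timeout=180000"

and the two can be combined, e.g. ?stream=true&timeout=180000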

- Roach

Sent from my iPhone

On Jul 26, 2013, at 3:43 PM, Sean McKibben  wrote:

> We just upgraded to 1.4 and are having a big problem with some of our larger 
> 2i queries. We have a few key queries that take longer than 60 seconds 
> (usually about 110 seconds) to execute, but after going to 1.4 we can't seem 
> to get around a 60 second timeout.
> 
> I've tried:
> curl -H "X-Riak-Timeout: 26" 
> "http://127.0.0.1:8098/buckets/mybucket/index/test_bin/myval?x-riak-timeout=26";
>  -i
> 
> But I always get
> HTTP/1.1 500 Internal Server Error
> Vary: Accept-Encoding
> Server: MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact)
> Date: Fri, 26 Jul 2013 21:41:28 GMT
> Content-Type: text/html
> Content-Length: 265
> Connection: close
> 
> 500 Internal Server Error - The server encountered an error while
> processing this request: {error,{error,timeout}}
> (mochiweb+webmachine web server)
> 
> Right at the 60 second mark. What can I set to give my secondary index 
> queries more time??
> 
> This is causing major problems for us :(
> 
> Sean
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Java Client v1.4.1

2013-07-26 Thread Brian Roach
Hot on the heels of 1.4.0 ...

After releasing 1.4.0 it was reported to us that if you tried to
switch to using protocol buffers in existing code and you were already
using protocol buffers 2.5.0 ... the client would crash.

Apparently Google has introduced breaking changes in Protocol Buffers
2.5.0 that make code generated with 2.4.1 incompatible.

To solve this problem we've decided to use the maven `shade` plugin to
repackage 2.4.1 and include it in the Riak Java Client jar. This is
the only difference between 1.4.0 and 1.4.1.

With the release of v2.0 we will be moving to Protocol Buffers 2.5.0

Thanks,
Brian Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client stats question

2013-07-25 Thread Brian Roach
Guido -

Right now, no.

We've been having some internal discussions around that topic and
whether it's really a "client library" operation or not.

How are you using stats? Is it for a monitoring app or ... ?

Thanks,
Brian Roach

On Thu, Jul 25, 2013 at 4:25 AM, Guido Medina  wrote:
> Hi,
>
> Is there a way to get the JSON stats via PBC? This is how we are doing it
> now, we would like to get rid of any HTTP call, currently, this is the only
> call being made to HTTP:
>
>   private void collectNodeInfo(final PBClientConfig clientConfig)
>   {
> ...
> RiakClusterStats stats=null;
> try{
>   stats=new RiakClusterStats();
>   HttpClient client=new DefaultHttpClient();
>   HttpGet g=new HttpGet("http://" + clientConfig.getHost() +
> ":8098/stats");
>   HttpResponse response=client.execute(g);
>   JSONObject statsMap;
>   InputStream contentStream=null;
>   try{
> contentStream=response.getEntity().getContent();
> JSONTokener tok=new JSONTokener(contentStream);
> statsMap=new JSONObject(tok);
> stats.addNode(clientConfig.getHost(),statsMap);
>   } finally{
> if(contentStream != null){
>   contentStream.close();
> }
>   }
> } catch(Exception e){
>   log.error("Huh? Exception when ",e);
> }
> lastClusterStats=stats;
>   }
>
>
> Kind regards,
>
> Guido.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to connect a cluster with riak running non-standard PB port

2013-07-25 Thread Brian Roach
http://basho.github.io/riak-java-client/1.4.0/com/basho/riak/client/raw/config/ClusterConfig.html#addClient(T)
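
For your local setup that would be something like this (a sketch using
the four ports you describe):

PBClusterConfig clusterConfig = new PBClusterConfig(20);
for (int port : new int[] { 10017, 10027, 10037, 10047 })
{
    clusterConfig.addClient(new PBClientConfig.Builder()
        .withHost("127.0.0.1")
        .withPort(port)
        .build());
}
IRiakClient client = RiakFactory.newClient(clusterConfig);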

On Thu, Jul 25, 2013 at 8:02 AM, kiran kulkarni  wrote:
> Thanks, This work if port is same for all hosts. My usecase is a bit
> different:
> Actually for development, I run 4 Riak instances locally with different
> ports 10017, 10027, 10037, 10047 for PB.
>
>
>
> On Thu, Jul 25, 2013 at 7:24 PM, Brian Roach  wrote:
>>
>> On Thu, Jul 25, 2013 at 7:47 AM, kiran kulkarni 
>> wrote:
>> > Class PBClusterConfig has only method addHosts which allows to set hosts
>> > only. I am using a different port for PB connections. How do I set host
>> > and
>> > port both?
>>
>>
>> http://basho.github.io/riak-java-client/1.4.0/com/basho/riak/client/raw/pbc/PBClientConfig.html
>>
>> PBClientConfig myPbClientConfig = new
>> PBClientConfig.Builder().withPort(10017).build();
>> myPbClusterConfig.addHosts(myPbClientConfig,
>> "192.168.1.10","192.168.1.11","192.168.1.12");
>>
>>
>> - Roach
>
>
>
>
> --
> Kiran Kulkarni
> http://about.me/kiran_kulkarni

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to connect a cluster with riak running non-standard PB port

2013-07-25 Thread Brian Roach
On Thu, Jul 25, 2013 at 7:47 AM, kiran kulkarni  wrote:
> Class PBClusterConfig has only method addHosts which allows to set hosts
> only. I am using a different port for PB connections. How do I set host and
> port both?

http://basho.github.io/riak-java-client/1.4.0/com/basho/riak/client/raw/pbc/PBClientConfig.html

PBClientConfig myPbClientConfig = new
PBClientConfig.Builder().withPort(10017).build();
myPbClusterConfig.addHosts(myPbClientConfig,
"192.168.1.10","192.168.1.11","192.168.1.12");


- Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java Client 1.1.2 and 1.4.0

2013-07-25 Thread Brian Roach
Fixed. Apparently I updated the ACL for 1.1.2 and not 1.4.0, sorry.

Thanks,
- Roach

On Wed, Jul 24, 2013 at 7:53 PM, YN  wrote:
> Hi Brian,
>
> Thanks for putting the release together so quickly. It does not appear as
> though the download is working (for the non maven files). It looks like
> there's some permissions issue (get an access denied).
>
> Thanks.
> YN shared this with you.
> Riak Java Client 1.1.2 and 1.4.0
> Via Nabble - Riak Users
> Greetings!
>
> The Riak Java client versions 1.1.2 and 1.4.0 have been released and
> are now available from maven central. For non-maven users a bundled
> jar including all dependencies can be found for these versions at:
>
> http://riak-java-client.s3.amazonaws.com/riak-client-1.4.0-jar-with-dependencies.jar
> http://riak-java-client.s3.amazonaws.com/riak-client-1.1.2-jar-with-dependencies.jar
>
> Javadoc is available via: http://basho.github.io/riak-java-client/
>
> Why two versions you ask? Good question.
>
> The recently released Riak 1.4.0 adds a number of new features and
> brings parity between our HTTP and Protocol Buffers APIs. The Java
> client 1.4.0 reflects this by allowing PB operations that previously
> would throw exceptions (setting bucket properties, for example) and
> supports those new features. If you're using Riak 1.4, you want to be
> using the Java client 1.4.x to use the new features.
>
> 1.1.2 on the other hand is a minor maintenance / bug fix release for
> use with Riak 1.3.x and below. It will also work with Riak 1.4 but
> does not support the new Riak 1.4 features.
>
> Probably the most notable feature in 1.4.0 is support for the new
> counters in Riak. Check out the Javadoc here:
> http://basho.github.io/riak-java-client/1.4.0/com/basho/riak/client/operations/CounterObject.html
>
> Thanks!
> - Brian Roach
>
> ___
> riak-users mailing list
> [hidden email]
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> Sent from InoReader
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Java Client v2.0 and HTTP

2013-07-24 Thread Brian Roach
Code never sleeps. And it mostly comes at night. Mostly.

Now that v1.4.0 of the RJC is released we're back to working on v2.0.
It's a major overhaul of the client.

The one large change we're looking at is no longer using HTTP and
instead exclusively using Protocol Buffers to communicate with Riak.

I've posted an RFC here:
https://github.com/basho/riak-java-client/issues/268 and encourage
everyone to participate.

Thanks!
Brian Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Java Client 1.1.2 and 1.4.0

2013-07-24 Thread Brian Roach
Greetings!

The Riak Java client versions 1.1.2 and 1.4.0 have been released and
are now available from maven central. For non-maven users a bundled
jar including all dependencies can be found for these versions at:

http://riak-java-client.s3.amazonaws.com/riak-client-1.4.0-jar-with-dependencies.jar
http://riak-java-client.s3.amazonaws.com/riak-client-1.1.2-jar-with-dependencies.jar

Javadoc is available via: http://basho.github.io/riak-java-client/

Why two versions you ask? Good question.

The recently released Riak 1.4.0 adds a number of new features and
brings parity between our HTTP and Protocol Buffers APIs. The Java
client 1.4.0 reflects this by allowing PB operations that previously
would throw exceptions (setting bucket properties, for example) and
supports those new features. If you're using Riak 1.4, you want to be
using the Java client 1.4.x to use the new features.

1.1.2 on the other hand is a minor maintenance / bug fix release for
use with Riak 1.3.x and below. It will also work with Riak 1.4 but
does not support the new Riak 1.4 features.

Probably the most notable feature in 1.4.0 is support for the new
counters in Riak. Check out the Javadoc here:
http://basho.github.io/riak-java-client/1.4.0/com/basho/riak/client/operations/CounterObject.html
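
A quick taste (a minimal sketch; see the Javadoc above for the full set
of options):

Bucket bucket = client.fetchBucket("my_bucket").execute();
Long count = bucket.counter("hits")
                   .increment(1)
                   .returnValue(true)
                   .execute();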

Thanks!
- Brian Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Best way to insert a collection of items at once

2013-07-23 Thread Brian Roach
On Tue, Jul 23, 2013 at 6:51 AM, rsb  wrote:
> Is there any underlaying difference between performing an;
>
> storeObject.withoutFetch().execute();
> -or-
> myBucket.store(item.key, item).execute();
>
> In other words, will my second statement result in an implicit fetch as
> well?

Bucket.store() returns a StoreObject. You want to call its
withoutFetch() method:

myBucket.store(item.key, item).withoutFetch().execute();

Thanks,
- Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Best way to insert a collection of items at once

2013-07-22 Thread Brian Roach
Re your last Q:

>  I have read StoreObject does a read on every write, if true, can that
be disabled?

Yes. If you're not worried about creating siblings you can use the
withoutFetch() option in the StoreObject:

storeObject.withoutFetch().execute();

The StoreObject will not attempt to fetch the existing value (nor do
conflict resolution) when that's specified.

Also worth asking is how/when are you constructing your Bucket object?
By default the Java client does a fetch for bucket properties when you
call IRiakClient.fetchBucket(bucketName).execute()

This can be disabled by using:

Bucket b = client.fetchBucket(bucketName).lazyLoadBucketProperties().execute();

Thanks!
- Roach

On Mon, Jul 22, 2013 at 12:41 PM, rsb  wrote:
> Thank you for your reply, I gave that a shot and worked really well.
>
>
>
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/Best-way-to-insert-a-collection-of-items-at-once-tp4028487p4028500.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 1.4 - Changing backend through API

2013-07-11 Thread Brian Roach
Heya.

That feature is code complete and awaiting peer review - the PR for it
is here: https://github.com/basho/riak-java-client/pull/250

This will be part of the Java Client 1.4.0 release (with all the new
1.4 features) that I hope to have for next week.

Thanks,
Brian Roach

On Wed, Jul 10, 2013 at 2:06 PM, Y N  wrote:
> Hi,
>
> I just upgraded to 1.4 and have updated my client to the Java 1.1.1 client.
>
> According to the release notes, it says all bucket properties are now
> configurable through the PB API.
>
> I tried setting my backend through the Java client, however I get an
> Exception "Backend not supported for PB". Is this property not configurable
> through the API, or do I need a newer version of the Java client than 1.1.1?
>
> Thanks.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: New Counters - client support

2013-07-11 Thread Brian Roach
On Thu, Jul 11, 2013 at 1:48 AM, Y N  wrote:
>
> Thanks, that's helpful. Now I need to figure out if / how this is surfaced
> in the Java 1.1.1 client (I don't seem to see it anywhere, but maybe I'm
> missing something).

It's not. We're about 80% through the 1.4 features with the Java
client. I haven't tackled the counters yet.

Provided I can get peer reviews on everything I hope to have the 1.4.0
release of the Java client out next week.

The issue for that particular feature can be found here:
https://github.com/basho/riak-java-client/issues/239 and I'll update
it as soon as I push a PR for it.

Thanks,
Brian Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client - Bug with ifNotModified?

2013-06-10 Thread Brian Roach
With protocol buffers this is the case.

The ifNotModified() method expects you to supply a vector clock which
is then matched against the vector clock of an existing object in Riak
with that key. Since there is no object in Riak ... it returns
"notfound" - it can't find it.

Unfortunately that makes your situation somewhat difficult to handle
all in one go using the StoreObject.

What I would suggest is doing a fetch first, then do the store with the
withoutFetch() option.

In the case where the fetch returned nothing, do your store of your
new object with the ifNoneMatch() option if it's possible another
writer created it between your fetch and store.

In the case where the fetch returned an object, use the
ifNotModified() since you have the vclock.
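
Roughly (a sketch; MyObject is your class and mutate() stands in for
your own modification logic):

MyObject existing = bucket.fetch(key, MyObject.class).execute();
if (existing == null)
{
    // nothing stored yet; ifNoneMatch guards against a concurrent writer
    bucket.store(key, new MyObject()).withoutFetch().ifNoneMatch(true).execute();
}
else
{
    MyObject updated = mutate(existing); // vclock rides along in the @RiakVClock field
    bucket.store(key, updated).withoutFetch().ifNotModified(true).execute();
}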

- Roach

On Sat, May 25, 2013 at 8:21 PM, Y N  wrote:
> I am using ifNotModified and am running into a weird situation.
>
> I am using the following API:
>
> return bucket.store(key, new
> MyObject()).withMutator(mutator).withConverter(converter).ifNotModified(true).returnBody(true).execute();
>
> The problem I run into is that I get a not found exception when there is no
> existing object in Riak for the specified key. If I change ifNotModified to
> false, then it works as expected. I am allocating a new object in my mutator
> if there is no existing object from the fetch cycle. Note, this is with the
> default bucket settings.
>
> My expectation was that even with ifNotModified set to true, this should
> succeed if there is no existing object in Riak matching the key (hence,
> nothing has been modified and the store should succeed).
>
> Please clarify the behavior of the API.
>
> Thanks.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java PB client stucks on fetching reply

2013-05-21 Thread Brian Roach
Hello Konstantin,

The protocol buffers client uses a standard TCP socket and does a
blocking read. If it's never returning from there, then the Riak node
you're talking to is in some state where it's not replying nor closing
the connection. By default in Java a read won't ever time out; it will
stay blocked until either there's something to read, or the TCP
connection is closed.

>From the client side, you can specify a read time out via the
PBClientConfig using the .withRequestTimeoutMillis() option in the
builder. This will cause the operation to time out rather than wait
forever.

- Roach



On Tue, May 21, 2013 at 3:46 PM, Konstantin Kalin
 wrote:
> Hello,
>
> I use Riak Java client (1.0.6) and riak-pb (1.2) versions. I see that a
> thread gets stuck reading a socket from time to time in production. Basically
> the thread is never released once it gets into this state. Riak backend logs are
> empty at the same time. Could you please look at the following stack trace?
> I need an advise what can be wrong and how to investigate/solve the issue.
>
> Thank you,
> Konstantin.
>
> "http-8443-7" daemon prio=10 tid=0x7f886800a800 nid=0x1fda runnable
> [0x7f88d2794000]
>
>   java.lang.Thread.State: RUNNABLE
>
>at java.net.SocketInputStream.socketRead0(Native Method)
>
>at java.net.SocketInputStream.read(SocketInputStream.java:146)
>
>at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>
>at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>
>- locked <0x0007a668acb0> (a java.io.BufferedInputStream)
>
>at java.io.DataInputStream.readInt(DataInputStream.java:387)
>
>at com.basho.riak.pbc.RiakConnection.receive(RiakConnection.java:110)
>
>at
> com.basho.riak.pbc.RiakClient.processFetchReply(RiakClient.java:280)
>
>at com.basho.riak.pbc.RiakClient.fetch(RiakClient.java:254)
>
>at com.basho.riak.pbc.RiakClient.fetch(RiakClient.java:243)
>
>at
> com.basho.riak.client.raw.pbc.PBClientAdapter.fetch(PBClientAdapter.java:156)
>
>at
> com.basho.riak.client.raw.ClusterClient.fetch(ClusterClient.java:115)
>
>at
> com.basho.riak.client.operations.FetchObject$1.call(FetchObject.java:102)
>
>at
> com.basho.riak.client.operations.FetchObject$1.call(FetchObject.java:100)
>
>at
> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:72)
>
>at
> com.basho.riak.client.cap.DefaultRetrier.attempt(DefaultRetrier.java:53)
>
>at
> com.basho.riak.client.operations.FetchObject.execute(FetchObject.java:106)
>
>at
> platform.sessionstore.riak.RiakSessionStore.executeCmd(RiakSessionStore.java:290)
>
>at
> platform.sessionstore.riak.RiakSessionStore.validate(RiakSessionStore.java:248)
>
>at
> platform.sessionstore.riak.RiakUserSessionsResolver.resolve(RiakUserSessionsResolver.java:74)
>
>at
> platform.sessionstore.riak.RiakUserSessionsResolver.resolve(RiakUserSessionsResolver.java:16)
>
>at
> com.basho.riak.client.operations.FetchObject.execute(FetchObject.java:113)
>
>at
> platform.sessionstore.riak.RiakSessionStore.executeCmd(RiakSessionStore.java:290)
>
>at
> platform.sessionstore.riak.RiakSessionStore.fetchUserSessions(RiakSessionStore.java:270)
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client and siblings question

2013-05-20 Thread Brian Roach
If you're going to use the withoutFetch() method it is required that
you use that @RiakVClock annotated field in your class - you need to
store the vclock from when you fetched in that field.

When you call StoreObject.execute() it is extracted from your object
and passed to the Converter.fromDomain() method. Since you're using
your own Converter, in that method you need to store the vclock in the
IRiakObject you're constructing and returning. The RiakObjectBuilder
has a withVClock method (and of course the DefaultRiakObject
constructor takes it as a parameter).
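
In other words, something along these lines inside your Converter (a
sketch; the bucket name and serialize() are placeholders for your own
code):

public IRiakObject fromDomain(MyObject domainObject, VClock vclock) throws ConversionException
{
    return RiakObjectBuilder.newBuilder("my_bucket", domainObject.getKey())
        .withVClock(vclock) // carry the vclock through so the store doesn't sibling
        .withContentType("application/json")
        .withValue(serialize(domainObject))
        .build();
}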

As for your second question ... yeah, MapReduce doesn't have that.
It's probably something worth thinking about for a future release (and
yeah, I'm pretty much in charge of the Java client right now - I'll
add it to my backlog).

As-is the best suggestion I would have is using the
MapReduceResult.getResultRaw() and then pass that String to your own
code for conversion.

Thanks!
- Roach

(BTW - I apologize for the late reply - your original email was caught
up in our listserv server for some reason and I only received it
today).


On Mon, May 20, 2013 at 10:32 AM, Y N  wrote:
> Hi Brian,
>
> Thanks for the response.
>
> I am not using the default JSONConverter, but have my own. The way I am
> currently resolving siblings is as follows:
>
> Create a new object
> Merge fields (using whatever logic)
> Return new object with merged fields
>
> In this case, what should I use for the vclock for the newly created object
> that was resolved? Do I randomly pick from one of the objects being
> resolved, or is there some order or precedence I should use?
>
> On a side note, I am not sure if you are responsible for the Riak Java
> client. If so, I don't see an option to allow me to use my own converter for
> objects obtained via a MapReduce query (through the Java client). Is this
> feature currently available, or is this something that will be added at some
> point?
>
> A .withConverter(blah) would be nice for mapreduce queries as well.
>
> Thanks!
>
>
> 
> From: Brian Roach 
> To: Y N 
> Cc: "riak-users@lists.basho.com" 
> Sent: Monday, May 20, 2013 7:42 AM
> Subject: Re: Java client and siblings question
>
> Hello!
>
> When you do your fetch (read) and resolve any conflicts, you're going
> to get a vector clock along with each sibling. If you're using the
> default JSONConverter it will be stored in your POJO's @RiakVClock
> annotated field. That's the vector clock you're going to use when you
> do your store (write) later - the modified object you're passing to
> Bucket.store() should contain it.
>
> The withoutFetch() option simply allows you to break this into two
> separate actions. Without it, when you called StoreObject.execute()
> that's exactly what would be happening.
>
> Thanks!
> - Roach
>
> On Sat, Apr 27, 2013 at 5:35 PM, Y N  wrote:
>> Hi,
>>
>> I am currently using the latest java client, and I have a question
>> regarding
>> updating data in a bucket where siblings are allowed (i.e. allowSiblings =
>> true).
>>
>> I finally understand the whole read-resolve-mutate-write cycle, and also
>> doing an update / store using previously fetched data (i.e. not in the
>> same
>> "transaction").
>>
>> This question is regarding the latter case (updating previously fetched
>> data). My read uses a resolver. My data class has a @RiakVClock field
>> defined.
>>
>> The problem is when I do the store(blah).withoutFetch(). It seems to be
>> generating siblings. I just realized that's probably because my resolver
>> (during the read) is creating a new object and then merging the siblings
>> into the new object, however it's not setting the vclock field.
>>
>> My question is, during the read resolve stage, what should I use for the
>> vclock? Should I just copy it from one of the other siblings, or is there
>> some specific sort order I should use to pick a particular vclock for the
>> new
>> object?
>>
>> Thanks.
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client and siblings question

2013-05-20 Thread Brian Roach
Hello!

When you do your fetch (read) and resolve any conflicts, you're going
to get a vector clock along with each sibling. If you're using the
default JSONConverter it will be stored in your POJO's @RiakVClock
annotated field. That's the vector clock you're going to use when you
do your store (write) later - the modified object you're passing to
Bucket.store() should contain it.

The withoutFetch() option simply allows you to break this into two
separate actions. Without it, when you called StoreObject.execute()
that's exactly what would be happening.
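
In code, the whole cycle looks roughly like this (a sketch; MyPojo and
myResolver stand in for your own class and ConflictResolver):

public class MyPojo {
    @RiakKey public String key;
    @RiakVClock public VClock vclock; // or byte[]
    public String value;
}

MyPojo resolved = bucket.fetch("key1", MyPojo.class)
                        .withResolver(myResolver) // siblings in, one winner out
                        .execute();
resolved.value = "new value";
// later - no re-fetch needed; the vclock travels in the @RiakVClock field
bucket.store(resolved).withoutFetch().execute();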

Thanks!
- Roach

On Sat, Apr 27, 2013 at 5:35 PM, Y N  wrote:
> Hi,
>
> I am currently using the latest java client, and I have a question regarding
> updating data in a bucket where siblings are allowed (i.e. allowSiblings =
> true).
>
> I finally understand the whole read-resolve-mutate-write cycle, and also
> doing an update / store using previously fetched data (i.e. not in the same
> "transaction").
>
> This question is regarding the latter case (updating previously fetched
> data). My read uses a resolver. My data class has a @RiakVClock field
> defined.
>
> The problem is when I do the store(blah).withoutFetch(). It seems to be
> generating siblings. I just realized that's probably because my resolver
> (during the read) is creating a new object and then merging the siblings
> into the new object, however it's not setting the vclock field.
>
> My question is, during the read resolve stage, what should I use for the
> vclock? Should I just copy it from one of the other siblings, or is there
> some specific sort order I should use to pick a particular vclock for the new
> object?
>
> Thanks.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: On siblings

2013-05-15 Thread Brian Roach
Jeremy -

As noted in the other replies, yes, you need to use 'return_body' to
get the new vector clock in order to avoid creating a sibling on a
subsequent write of the same key.

That said, you can supply the param `return_head` in the proplist
along with `return_body`, which will get you the vclock you need
without having the value returned to you.

- Roach

On Wed, May 15, 2013 at 8:23 AM, John Daily  wrote:
> Thanks for the kind words, Jeremiah.
>
> Jeremy, if you find anything that's wrong with that description of sibling
> behavior, please let me know. It's always possible I missed something
> important.
>
> -John
>
>
> On Wednesday, May 15, 2013, Jeremiah Peschka wrote:
>>
>> John Daily (@macintux) wrote a great blog post that covers sibling
>> behavior [1]
>>
>> In short, though, because you're supplying an older vector clock, and you
>> have allow_mult turned on, Riak makes the decision that since a vector clock
>> is present that conflicts with what's already on disk a sibling should be
>> created.
>>
>> As I understand it, the only way to write into Riak and not get siblings
>> is to set allow_mult to false - even leaving out vector clocks will lead to
>> siblings if allow_mult is true. Or so John Daily's chart claims.
>>
>> [1]: http://basho.com/riaks-config-behaviors-part-2/
>>
>> ---
>> Jeremiah Peschka - Founder, Brent Ozar Unlimited
>> MCITP: SQL Server 2008, MVP
>> Cloudera Certified Developer for Apache Hadoop
>>
>>
>> On Tue, May 14, 2013 at 10:48 PM, Jeremy Ong 
>> wrote:
>>>
>>> To clarify, I am using the erlang client. From the looks of it, the
>>> vector clock transition to the new value is opaque to the client so the only
>>> way to streamline this use case is to pass the `return_body` option (My use
>>> case is one read, many subsequent writes while updating in memory).
>>>
>>> In this case however, I already have the value in memory, so it seems
>>> inefficient to have to get the entire riakc_obj back when I really just need
>>> the metadata to construct the new object. Is this correct?
>>>
>>>
>>> On Tue, May 14, 2013 at 9:06 PM, Jeremy Ong 
>>> wrote:

>>>> Suppose I have an object X.
>>>>
>>>> I make an update to X and store it as X1. I perform a put operation
>>>> using X1.
>>>>
>>>> The same client then makes a modification to X1 and stores it as X2.
>>>> Then, I perform a put operation using X2.
>>>>
>>>> This will create two siblings X1 and X2 if allow_mult is true. Is there
>>>> any way I can avoid this? To me, the vector clock should have incremented
>>>> once when transitioning from X to X1, then once more when transitioning
>>>> from X1 to X2. This way, I shouldn't need to issue a get before I have to
>>>> perform another write since my data is already in memory.
>>>>
>>>> I probably am misunderstanding something about vector clocks. Does
>>>> anybody care to clarify this?
>>>>
>>>> Thanks,
>>>> Jeremy
>>>
>>>
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to add a secondary index with the Java client

2013-04-08 Thread Brian Roach
Jeff -

Yup, that should work perfectly.

You will have a secondary index in Riak named "status_bin" and the
value you set in your String 'status' will be the index key for that
object.
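
Once objects are stored you can then query that index; something like
(a sketch):

// all keys whose status index is "open"
List<String> keys = bucket.fetchIndex(BinIndex.named("status"))
                          .withValue("open")
                          .execute();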

Thanks,
_Roach

On Mon, Apr 8, 2013 at 4:02 PM, Jeff Peck  wrote:
> Brian,
>
> Thank you for explaining that and suggesting to extend HashMap. I did exactly 
> that. Here is what it looks like:
>
> class DocMap extends HashMap<String, Object> {
> /**
>  * Generated id
>  */
> private static final long serialVersionUID = 5807773481499313384L;
>
> @RiakIndex(name="status") private String status;
>
> public String getStatus() {
> return status;
> }
>
> public void setStatus(String status) {
> this.status = status;
> }
> }
>
> I am about to try it, but I first need to make a few more changes in the code 
> to adapt this new object. In the meantime, would you say that this looks 
> correct and that it would be able to effectively write a status field to a 
> secondary index if I were to use "setStatus"?
>
> Thanks,
> Jeff
>
>
> On Apr 8, 2013, at 5:48 PM, Brian Roach  wrote:
>
>> Jeff,
>>
>> If you're just passing in an instance of the core Java HashMap ... you can't.
>>
>> The way the default JSONConverter works for metadata (such as indexes)
>> is via annotations.
>>
>> The object being passed in needs to have a field annotated with
>> @RiakIndex("index_name"). That field can be a Long/Set or
>> String/Set (for _int and _bin indexes respectively).
>>
>> These are not converted to JSON so they won't affect your serialized
>> data. You can have multiple fields for multiple indexes.
>>
>> You don't have to append "_int" or "_bin" to the index name in the
>> annotation - it's done automatically based on the type.
>>
>> Easiest thing to do would be to extend HashMap and simply add the
>> annotated field(s).
>>
>> Thanks,
>> _Roach
>>
>> On Mon, Apr 8, 2013 at 2:56 PM, Jeff Peck  wrote:
>>> Hello,
>>>
>>> I have been looking through the documentation for an example of how to add 
>>> a secondary index in Riak, using the Java client.
>>>
>>> I am currently storing my object (which is a HashMap) like this:
>>>
>>> bucket.store(key, docHashMap).execute();
>>>
>>> What would I need to do to add an index to that object before it gets 
>>> stored?
>>>
>>> Thanks,
>>> Jeff
>>>
>>>
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to add a secondary index with the Java client

2013-04-08 Thread Brian Roach
Jeff,

If you're just passing in an instance of the core Java HashMap ... you can't.

The way the default JSONConverter works for metadata (such as indexes)
is via annotations.

The object being passed in needs to have a field annotated with
@RiakIndex("index_name"). That field can be a Long/Set or
String/Set (for _int and _bin indexes respectively).

These are not converted to JSON so they won't affect your serialized
data. You can have multiple fields for multiple indexes.

You don't have to append "_int" or "_bin" to the index name in the
annotation - it's done automatically based on the type.

Easiest thing to do would be to extend HashMap and simply add the
annotated field(s).

Thanks,
_Roach

On Mon, Apr 8, 2013 at 2:56 PM, Jeff Peck  wrote:
> Hello,
>
> I have been looking through the documentation for an example of how to add a 
> secondary index in Riak, using the Java client.
>
> I am currently storing my object (which is a HashMap) like this:
>
> bucket.store(key, docHashMap).execute();
>
> What would I need to do to add an index to that object before it gets stored?
>
> Thanks,
> Jeff
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to deal with tombstone values when writing a Riak (http) client

2013-03-25 Thread Brian Roach
On Mon, Mar 25, 2013 at 8:05 AM, Age Mooij  wrote:
> What do you think would be the proper way for Riak itself to deal with case 
> 1? Should it return a 200 with an empty body and the X-Riak-Deleted header?

Unfortunately this would break the API in regard to backwards
compatibility with older versions of Riak. This is not to say we
wouldn't ever do that, but it does mean we'd (in general) like to
avoid it. I've opened an issue with the suggestion of simply adding
the 'X-Riak-Deleted' header for consistency.
https://github.com/basho/riak_kv/issues/518

> Could you give me an example of a use case for reproducing case 1 in a unit 
> test? Case 2a is easy but I've tried several ways to reliably produce case 1 
> and I'm not getting anywhere.

A tombstone will exist for 3s by default. With the Java client I can
reproduce it every time with:

IRiakClient client = RiakFactory.httpClient();
Bucket b = client.createBucket("sibling_test").allowSiblings(true).execute();
b.store("key1", "value1").execute();
b.delete("key1").execute();
// fetching inside the (default 3s) reap window, asking for tombstones back
IRiakObject io = b.fetch("key1").returnDeletedVClock(true).execute();
System.out.println(io.isDeleted()); // prints true
client.shutdown();
>
> Do you have an idea of how many people actually use the option to "return 
> deleted vclocks".

Honestly? No, other than "at least two or three" because of
interacting directly with them (one of which led to the PR you cited).

> My Scala client basically just treats tombstones like completely deleted 
> values, so for case 1 that would lead to "normal" 404 behavior and during 
> conflict resolution it will just ignore/skip tombstones. But if lots of 
> people are interested in dealing with tombstones directly I might have to 
> change that to something similar to what you did in the java client.

Yeah ... it's a hard road trying to guess how people are going to want
to use something. Personally? Since the feature exists I'd support it
if only for the sake of completeness.

> Are there any plans on adding a documentation section about tombstones and 
> deletes in Riak? I think that would definitely be helpful for other people 
> writing clients.

I'll raise that issue this week; there probably should be. In the mean
time probably the most comprehensive information on the subject can be
found here: 
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-October/006048.html

Thanks,
_ Roach

> On Mar 25, 2013, at 14:29, Brian Roach  wrote:
>
>> Hi Age,
>>
>> I'm the author of the pull request for the Java client you cite.
>> There's still inconsistency in how these are returned via the HTTP
>> protocol. Partially that's my fault in that I didn't open an actual
>> issue when I discovered the problem I note in my comments. While
>> reviewing the issue to make sure I answered your question correctly, I
>> found another.
>>
>> As of right now (Riak 1.3), here's what you will find:
>>
>> 1) If *only* a tombstone exists when you do a GET for a key, you will
>> receive a 404 but it will contain the X-Riak-Vclock header (with a
>> vclock). A "normal" 404 (when there's no object) will not have this
>> header.
>>
>> 2) If there is a set of siblings, and one of them is a tombstone:
>>
>> 2a) Retrieving all the siblings at once by including "Accept:
>> multipart/mixed" in the GET will return all the siblings, and the
>> tombstone will include the "X-Riak-Deleted: true" header
>>
>> 2b) Retrieving each sibling manually by adding ?vtag=XX to the GET
>> will (unfortunately) return a 200 OK for the tombstone but it will
>> have an empty body (Content-Length: 0).
>>
>> I'm going to open issues for 1 and 2b just so we get things to be
>> consistent. That being said, I can't think of a reason you'd ever want
>> to do 2b so at least the impact there is minimized. For 1 you can
>> obviously still identify a tombstone the same way I'm doing it in the
>> Java client -> 404 + vclock = tombstone.
>>
>> Thanks,
>> _ Roach
>>
>> On Sun, Mar 24, 2013 at 2:09 PM, Age Mooij  wrote:
>>> Hi,
>>>
>>> I've been trying to find some comprehensive docs on what Riak http clients
>>> need to do to properly support dealing with tombstone values. I ran into
>>> tombstones while debugging some unit tests and I was very surprised that the
>>> Basho (http) API docs don't mention anything about having to deal with them.
>>>
>>> It's very hard to find any kind of complete desc

Re: How to deal with tombstone values when writing a Riak (http) client

2013-03-25 Thread Brian Roach
Hi Age,

I'm the author of the pull request for the Java client you cite.
There's still inconsistency in how these are returned via the HTTP
protocol. Partially that's my fault in that I didn't open an actual
issue when I discovered the problem I note in my comments. While
reviewing the issue to make sure I answered your question correctly, I
found another.

As of right now (Riak 1.3), here's what you will find:

1) If *only* a tombstone exists when you do a GET for a key, you will
receive a 404 but it will contain the X-Riak-Vclock header (with a
vclock). A "normal" 404 (when there's no object) will not have this
header.

2) If there is a set of siblings, and one of them is a tombstone:

2a) Retrieving all the siblings at once by including "Accept:
multipart/mixed" in the GET will return all the siblings, and the
tombstone will include the "X-Riak-Deleted: true" header

2b) Retrieving each sibling manually by adding ?vtag=XX to the GET
will (unfortunately) return a 200 OK for the tombstone but it will
have an empty body (Content-Length: 0).

I'm going to open issues for 1 and 2b just so we get things to be
consistent. That being said, I can't think of a reason you'd ever want
to do 2b so at least the impact there is minimized. For 1 you can
obviously still identify a tombstone the same way I'm doing it in the
Java client -> 404 + vclock = tombstone.

Thanks,
_ Roach

On Sun, Mar 24, 2013 at 2:09 PM, Age Mooij  wrote:
> Hi,
>
> I've been trying to find some comprehensive docs on what Riak http clients
> need to do to properly support dealing with tombstone values. I ran into
> tombstones while debugging some unit tests and I was very surprised that the
> Basho (http) API docs don't mention anything about having to deal with them.
>
> It's very hard to find any kind of complete description on when Riak will
> produce tombstone values in http responses and what the proper way of
> dealing with them is. This makes it very hard to write good unit tests and
> to implement the "correct" behaviour for my riak-scala-client.
>
> Can anyone point me towards a comprehensive description of the expected
> behaviour? Or even a description of what most client libraries end up doing?
>
> For now I just ignore siblings with the X-Riak-Deleted header (undocumented
> AFAIK) when resolving conflicts caused by a delete followed by a put (based
> on the same vclock). I'm not sure this header could (or should) occur in any
> other situation.
>
> Here's the online stuff I've found so far:
>
> - A pull request for the java client:
> https://github.com/basho/riak-java-client/pull/195
>
> - The most important commit message for the above pull request:
> https://github.com/basho/riak-java-client/commit/416a901ff1de8e4eb559db21ac5045078d278e86
>
> - Some interesting code introduced in that commit:
>
> // There is a bug in Riak where the x-riak-deleted header is not returned
> // with a tombstone on a 404 (x-riak-vclock exists). The following block can
> // be removed once that is fixed
> byte[] body = r.getBody();
> if (r.getStatusCode() == 404) {
> headers.put(Constants.HDR_DELETED, "true");
> body = new byte[0]; // otherwise this will be "not found"
> }
>
> That bug apparently still exists… do all clients implement this hack? Should
> they?
>
> - A message to this mailing list from October 2011:
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-October/006048.html
>
> Thanks,
> Age
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java driver createBucket/fetchBucket

2013-03-20 Thread Brian Roach
On Thu, Mar 14, 2013 at 11:25 PM, Kevin Burton  wrote:
> This sample calls ‘createBucket’. First what is the difference between
> ‘createBucket’ and ‘fetchBucket’? I have an existing bucket so I don’t want
> to create a new one and thereby remove the old one. So I felt that
> ‘fetchBucket’ would be the call I should make. The problem is that
> ‘fetchBucket’ returns a ‘FetchBucket’ object that doesn’t have the same
> methods as the ‘Bucket’ returned by createBucket. I would just like to query
> the bucket using a key. But that simple operation appears to be unavailable
> with the ‘FetchBucket’ object. Ideas?

For the most part all of this is simply semantics to have the Java API
model interacting with Riak in some logical fashion. Nothing ever
"removes" a bucket from Riak. The only difference between a
FetchBucket and a WriteBucket is whether or not the bucket properties
are written to Riak when you call execute().

Calling execute() on a FetchBucket queries Riak for the bucket
properties and returns a Bucket object. If you don't need to read the
properties you can call .lazyLoadBucketProperties() on the FetchBucket
prior to calling execute() and it will not query Riak at all (unless
you then call any of the property getters (e.g. getR() ) on the
returned Bucket).

Calling execute() on a WriteBucket (which is returned by
IRiakClient.createBucket() and IRiakClient.updateBucket() - they are
exactly the same thing) does a write to Riak of the properties
specified then a subsequent read of them back from Riak, returning a
Bucket object. Again, the read can be avoided or postponed using
.lazyLoadBucketProperties() prior to execute().
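
Concretely (a sketch):

// read the properties from Riak:
Bucket b = client.fetchBucket("my_bucket").execute();

// skip the property read unless a property getter is actually called:
Bucket lazy = client.fetchBucket("my_bucket").lazyLoadBucketProperties().execute();

// write properties - the only reason to use createBucket()/updateBucket():
Bucket multi = client.createBucket("my_bucket").allowSiblings(true).execute();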

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Unable to delete using Java Client

2013-03-15 Thread Brian Roach
Hi Joachim,

The problem with your code is here:

myBucket.delete(guiPath);

The way the API flow works is just like doing a store and fetch; the
Bucket.delete() method returns a DeleteObject on which you then call
.execute() to actually perform the operation:

myBucket.delete(guiPath).execute();

Thanks,
- Roach

On Fri, Mar 15, 2013 at 11:45 AM, Joachim Haagen Skeie
 wrote:
> Hello,
>
> I am trying to delete items using the Java client, but for some reason, the
> data is still there when I try to get it out later.
>
> I have posted the relevant parts of the Java Class performing the deletion
> here: https://gist.github.com/joachimhs/5171629
>
> The following unit test fails on the last assertion:
>
> @Test
> public void testTreeMenu() throws InterruptedException {
> newEnv.getTreeMenuDao().persistTreeMenu(new
> BasicStatistics("EurekaJAgent:Memory:Heap:Used %", "Account Name", "Y"));
>
>
>
> Statistics statOne =
> newEnv.getTreeMenuDao().getTreeMenu("EurekaJAgent:Memory:Heap:Used %",
> "Account Name");
>
>
>
> Assert.assertNotNull(statOne);
> Assert.assertEquals("EurekaJAgent:Memory:Heap:Used %",
> statOne.getGuiPath());
> Assert.assertEquals("Account Name", statOne.getAccountName());
> Assert.assertEquals("Y", statOne.getNodeLive());
>
>
>
> newEnv.getTreeMenuDao().deleteTreeMenu("EurekaJAgent:Memory:Heap:Used
> %", "Account Name");
>
>
>
> Thread.sleep(550);
> Statistics deletedStatOne =
> newEnv.getTreeMenuDao().getTreeMenu("EurekaJAgent:Memory:Heap:Used %",
> "Account Name");
>
>
>
> Assert.assertNull(deletedStatOne);
> }
>
> Med Vennlig Hilsen | Very Best Regards,
>
> Joachim Haagen Skeie
> joac...@haagen-software.no
> http://haagen-software.no
> +47 4141 5805
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Using withoutFetch with DomainBucket

2013-03-12 Thread Brian Roach
Daniel -

Nothing detects whether there is a vclock or not. If there isn't one
provided (the value is `null` in Java), then one isn't sent to Riak -
it is not a requirement for a store operation for it to be present. If
an object exists when such a store is performed and allow_multi=true
for the bucket, then a sibling is created.

The .withoutFetch() method was added to the StoreObject as a requested
feature. It is meant for when you are storing an object that was
previously fetched from Riak and want to avoid doing another fetch. If
that previous fetch returned nothing (the key was not found) then the
vector clock will be null.

When talking about deleted keys ... unless you change the default
`delete_mode` in Riak's app.config, you're not usually going to see a
tombstone - they are reaped after 3s. The exceptions: you do a fetch
immediately following a delete; you do store operations without
vclocks with allow_mult=true for the bucket (which is basically
"doing it wrong") immediately after a delete, and a sibling gets
created; or you hit a very small window with multiple writers under
heavy load where the read/write cycle interleaves with a delete and a
tombstone sibling gets created.

With that being said, yes, unless you set 'returnDeletedVClock(true)`
they are silently discarded by the Java client and not passed to the
Converter. If that has been set, the default JSONConverter will return
a new instance of whatever POJO is being used (if possible - if
there's not a default constructor it will throw an exception) and then
set a @RiakTombstone annotated boolean field to `true` if one exists.
It detects this by calling the .isDeleted() method of the returned
IRiakObject.
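
In code that looks something like this (a sketch; MyPojo and myResolver
stand in for your own class and ConflictResolver):

public class MyPojo {
    @RiakVClock public VClock vclock;
    @RiakTombstone public boolean tombstone; // true if this sibling is a tombstone
    // ... your data fields
}

MyPojo p = bucket.fetch("key1", MyPojo.class)
                 .returnDeletedVClock(true) // don't silently discard tombstones
                 .withResolver(myResolver)  // the resolver should check the flag
                 .execute();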

- Roach

On Tue, Mar 12, 2013 at 9:43 AM, Daniel Iwan  wrote:
> Brian,
>
> Where I got lost was the fact that I was using a custom Converter and did
> not do anything with the vclock passed into fromDomain().
> That was undetected because at the same time I wasn't using withoutFetch,
> which I believe is the only moment where missing @RiakVClock annotation
> can be detected. Normally when JSONConverter is used missing @RiakVClock
> would also be detected.
> Could you confirm?
>
> Few additional, related questions:
> - if I use byte[] or VClock field and use withoutFetch() what is default
> value it should be set to (since it will be extracted via StoreObject)?
> - if I want to avoid overwriting deleted keys, I guess I need to set
> returnDeletedVClock as below,
>  DomainBucketBuilder builder = DomainBucket.builder(bucket,
> Custom.class)
>  builder.returnDeletedVClock(true);
>
> and then check isDeleted on siblings and use ConditionalStoreMutation to
> return false if one of the siblings has that flag set to true?
> I believe it needs to use the VClock of the deleted sibling as well?
>
> Thanks
> Daniel
>
>
>
>> The .withoutFetch() method isn't available when using the DomainBucket.
>>
>> As for the vector clock, when using .withoutFetch() the .execute()
>> method of StoreObject is going to extract the vector clock from the
>> POJO returned from your Mutation by looking for a VectorClock or
>> byte[] field that is annotated with @RiakVClock. It is then passed to
>> the Converter's .fromDomain() method as an argument.  If you are
>> storing an object you previously fetched from Riak, that vector clock
>> and annotation needs to be there.
>>
>> The easiest way to implement that is:
>> 1. Have a VectorClock or byte[] field in your POJO annotated with
>> @RiakVClock
>>
>> 2. When you fetch, in the .toDomain() method of your Converter have
>> the line of code you noted.
>>
>> 3. When you store, the vector clock stored in that field will be
>> passed to the .fromDomain() method of your Converter. Make sure to
>> call the .withVClock(vclock) method of the RiakObjectBuilder or
>> explicitly set it in the IRiakObject being returned.
>>
>> - Roach
>>
>>
>> On Fri, Mar 8, 2013 at 3:31 PM, Daniel Iwan  wrote:
>> > Somehow I cannot find a way to avoid pre-fetch during store operation
>> > (Java
>> > client).
>> > I know in StoreObject there is withoutFetch method for that purpose but
>> > I
>> > cannot find corresponding method/property in DomainBucket or
>> > DomainBucketBuilder
>> >
>> > Am I missing something?
>> >
>> > Also on related note when withoutFetch is used I guess I need to provide
>> > annotated RiakVClock field and use something like:
>> >
>> > VClockUtil.setVClock(domainObject, riakObject.getVClock());
>> >
>> > in my Converter. Is that right or is there better way to do it?
>> >
>> >
>> > I'm using Riak Java client 1.1.0
>> >
>> > Thanks
>> > Daniel
>> >
>> > ___
>> > riak-users mailing list
>> > riak-users@lists.basho.com
>> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> >
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Using withoutFetch with DomainBucket

2013-03-12 Thread Brian Roach
Daniel -

The .withoutFetch() method isn't available when using the DomainBucket.

As for the vector clock, when using .withoutFetch() the .execute()
method of StoreObject is going to extract the vector clock from the
POJO returned from your Mutation by looking for a VectorClock or
byte[] field that is annotated with @RiakVClock. It is then passed to
the Converter's .fromDomain() method as an argument.  If you are
storing an object you previously fetched from Riak, that vector clock
and annotation needs to be there.

The easiest way to implement that is:
1. Have a VectorClock or byte[] field in your POJO annotated with @RiakVClock

2. When you fetch, in the .toDomain() method of your Converter have
the line of code you noted.

3. When you store, the vector clock stored in that field will be
passed to the .fromDomain() method of your Converter. Make sure to
call the .withVClock(vclock) method of the RiakObjectBuilder or
explicitly set it in the IRiakObject being returned.
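
Putting those three steps together, a bare-bones Converter looks
something like this (a sketch; MyPojo and the serialize()/deserialize()
helpers stand in for your own code):

public class MyConverter implements Converter<MyPojo> {

    public IRiakObject fromDomain(MyPojo obj, VClock vclock) throws ConversionException {
        return RiakObjectBuilder.newBuilder("my_bucket", obj.getKey())
                                .withContentType("application/json")
                                .withValue(serialize(obj))
                                .withVClock(vclock) // step 3
                                .build();
    }

    public MyPojo toDomain(IRiakObject riakObject) throws ConversionException {
        MyPojo obj = deserialize(riakObject.getValueAsString());
        VClockUtil.setVClock(obj, riakObject.getVClock()); // step 2
        return obj;
    }
}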

- Roach


On Fri, Mar 8, 2013 at 3:31 PM, Daniel Iwan  wrote:
> Somehow I cannot find a way to avoid pre-fetch during store operation (Java
> client).
> I know in StoreObject there is withoutFetch method for that purpose but I
> cannot find corresponding method/property in DomainBucket or
> DomainBucketBuilder
>
> Am I missing something?
>
> Also on related note when withoutFetch is used I guess I need to provide
> annotated RiakVClock field and use something like:
>
> VClockUtil.setVClock(domainObject, riakObject.getVClock());
>
> in my Converter. Is that right or is there better way to do it?
>
>
> I'm using Riak Java client 1.1.0
>
> Thanks
> Daniel
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Beginners performance problem

2013-02-24 Thread Brian Roach
One issue that immediately stands out:

> >>>
> Bucket bucket = sharedClient.createBucket("accounts1").execute();
> bucket.store(a.getId().toString(), mapper.writeValueAsString(a)).execute();
> >>>

'createBucket()' is doing a fetch of the bucket properties and then storing
them back to the cluster when 'execute()' is called.

You want to fetch the bucket once then pass around the reference to it, or
at the very least use:
fetchBucket("accounts1").lazyLoadBucketProperties().execute();
inside your threads.

The only time you ever want to use createBucket() is when you want to
modify the bucket properties.
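
In other words (a sketch, reusing the names from your snippet):

// once, at startup:
final Bucket accounts = sharedClient.fetchBucket("accounts1")
                                    .lazyLoadBucketProperties()
                                    .execute();

// then in each worker thread:
accounts.store(a.getId().toString(), mapper.writeValueAsString(a)).execute();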

- Roach
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: best practices to work with buckets in java-client

2013-02-15 Thread Brian Roach
You only ever need to use client.createBucket() if you want to change
the bucket properties from the defaults.

This is due to the way Riak works; a put or fetch operation that uses
a bucket that doesn't exist will create that bucket with the default
bucket properties.

To be clear, if you do:

Bucket b = client.fetchBucket("Some_bucket").execute();

That bucket did not have to exist beforehand. It will be created with
the default properties.

- Roach

On Fri, Feb 15, 2013 at 11:53 PM, Deepak Balasubramanyam
 wrote:
>> Can I create it once (on application's start) and store somewhere (like
>> static field for example)
>
>
> Yes you can. There is no need to create the bucket every time the
> application starts. The client.fetchBucket() call will get the bucket
> successfully on subsequent runs.
>
> Thanks
> -Deepak
>
> On Sat, Feb 16, 2013 at 5:00 AM, Guido Medina 
> wrote:
>>
>> I would say it is totally safe to treat them as singleton (static
>> reference or just singleton pattern), we have been doing that for a year
>> with no issues so far.
>>
>> Hope that helps,
>>
>> Guido.
>>
>>
>> On 15/02/13 22:07, Mikhail Tyamin wrote:
>>>
>>> Hello guys,
>>>
>>> what is the best way to work with Bucket object in java-client?
>>>
>>> Can I create it once (on application's start) and store somewhere (like
>>> static field for example)
>>> or I should create it ( riakClient.createBucket("bucketName") ) once and
>>> then fetch it  ( riakClient.fetchBucket("bucketName") )
>>> every time using riakClient when I need it?
>>>
>>> P.S. I am going to use the same bucket's properties (nVal, allowSiblings
>>> and etc.) during all life of application.
>>>
>>> Thank you.
>>>
>>> Mikhail.
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client 100% CPU

2013-02-15 Thread Brian Roach
Daniel -

Fixed. I swear I thought I had edited the ACL.

Thanks,
- Roach

On Fri, Feb 15, 2013 at 6:08 AM, Daniel Iwan  wrote:
> Hi
>
> Thanks for that and also for building riak-client with all dependencies.
> But I'm afraid the S3 bucket is password protected or the link expired,
> since I'm getting AccessDenied on that 1.1.0 jar
>
> Daniel
>
>
>
> On 14 February 2013 17:22, Brian Roach  wrote:
>>
>> Daniel -
>>
>> Yes, sorry about that. This has been corrected in the current master
>> on github and version 1.1.0 of the client will be released today.
>> https://github.com/basho/riak-java-client/pull/212
>>
>> Thanks!
>> Brian Roach
>>
>> On Thu, Feb 14, 2013 at 9:31 AM, Daniel Iwan 
>> wrote:
>> > I see 100% CPU very regularly on one of the Riak client (v1.0.7)
>> > threads.
>> > I think the place where it spins is connection reaper in
>> > RiakConnectionPool
>> >
>> > I looked at it briefly, and it seems that when the first connection it
>> > finds via peek() has not expired, it can spin in a tight while loop.
>> > I guess the second peek() should be outside the if block?
>> >
>> > private synchronized void doStart() {
>> >     if (idleConnectionTTLNanos > 0) {
>> >         idleReaper.scheduleWithFixedDelay(new Runnable() {
>> >             public void run() {
>> >                 RiakConnection c = available.peek();
>> >                 while (c != null) {
>> >                     long connIdleStartNanos = c.getIdleStartTimeNanos();
>> >                     if (connIdleStartNanos + idleConnectionTTLNanos < System.nanoTime()) {
>> >                         if (c.getIdleStartTimeNanos() == connIdleStartNanos) {
>> >                             // still a small window, but better than locking
>> >                             // the whole pool
>> >                             boolean removed = available.remove(c);
>> >                             if (removed) {
>> >                                 c.close();
>> >                                 permits.release();
>> >                             }
>> >                         }
>> >                         c = available.peek();
>> >                     }
>> >                 }
>> >             }
>> >         }, idleConnectionTTLNanos, idleConnectionTTLNanos, TimeUnit.NANOSECONDS);
>> >     }
>> >
>> >     state = State.RUNNING;
>> > }
>> >
>> >
>> > Regards
>> > Daniel Iwan
>> >
>> >
>> > ___
>> > riak-users mailing list
>> > riak-users@lists.basho.com
>> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> >
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Java client v1.1.0

2013-02-14 Thread Brian Roach
Greetings!

Today we have released the latest version of the Java client for Riak, v1.1.0

This is available immediately from Maven Central by adding the
following to your project's pom.xml:


<dependency>
    <groupId>com.basho.riak</groupId>
    <artifactId>riak-client</artifactId>
    <version>1.1.0</version>
    <type>pom</type>
</dependency>


For those not using maven we provide a single .jar file that contains
the client and all its dependencies:

http://riak-java-client.s3.amazonaws.com/riak-client-1.1.0-jar-with-dependencies.jar


This release is both a bugfix and feature release.

Most notably you will find that it now supports secondary indexes
natively if you are using protocol buffers. In addition, the int_index
typing has been changed from int to long to eliminate the 2^31 limit.

Also on the protocol buffers front, you should see a performance
increase if you are working with siblings and using the IRiakClient
level interfaces; an old bug was found where an extra get operation
was being made unnecessarily when siblings were present.

A CPU utilization bug was also found to have been introduced in 1.0.7
in the protocol buffers client (connection pool). This has been
corrected.

The complete list of changes in 1.1.0 can be found in the CHANGELOG on
github. Current Javadocs have been published and are available via
http://basho.github.com/riak-java-client/1.1.0

Thanks!
- Brian Roach

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client 100% CPU

2013-02-14 Thread Brian Roach
Daniel -

Yes, sorry about that. This has been corrected in the current master
on github and version 1.1.0 of the client will be released today.
https://github.com/basho/riak-java-client/pull/212
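
For anyone curious, the gist of the fix is to stop the loop from spinning
on an unexpired head-of-queue; roughly (a sketch - see the PR for the
actual change):

RiakConnection c = available.peek();
while (c != null) {
    long connIdleStartNanos = c.getIdleStartTimeNanos();
    if (connIdleStartNanos + idleConnectionTTLNanos < System.nanoTime()) {
        if (c.getIdleStartTimeNanos() == connIdleStartNanos) {
            if (available.remove(c)) {
                c.close();
                permits.release();
            }
        }
        c = available.peek(); // move on to the next-oldest connection
    } else {
        break; // the head hasn't expired yet, so nothing behind it has either
    }
}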

Thanks!
Brian Roach

On Thu, Feb 14, 2013 at 9:31 AM, Daniel Iwan  wrote:
> I see 100% CPU very regularly on one of the Riak client (v1.0.7) threads.
> I think the place where it spins is the connection reaper in RiakConnectionPool.
>
> I looked at it briefly, and it seems that when the first connection it
> finds via peek() has not expired, it can spin in a tight while loop.
> I guess the second peek() should be outside the if block?
>
> private synchronized void doStart() {
>     if (idleConnectionTTLNanos > 0) {
>         idleReaper.scheduleWithFixedDelay(new Runnable() {
>             public void run() {
>                 RiakConnection c = available.peek();
>                 while (c != null) {
>                     long connIdleStartNanos = c.getIdleStartTimeNanos();
>                     if (connIdleStartNanos + idleConnectionTTLNanos < System.nanoTime()) {
>                         if (c.getIdleStartTimeNanos() == connIdleStartNanos) {
>                             // still a small window, but better than locking
>                             // the whole pool
>                             boolean removed = available.remove(c);
>                             if (removed) {
>                                 c.close();
>                                 permits.release();
>                             }
>                         }
>                         c = available.peek();
>                     }
>                 }
>             }
>         }, idleConnectionTTLNanos, idleConnectionTTLNanos, TimeUnit.NANOSECONDS);
>     }
>
>     state = State.RUNNING;
> }
>
>
> Regards
> Daniel Iwan
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Strange Exception in java-client

2013-02-02 Thread Brian Roach
Ingo -

As an FYI, once I started looking into this I found you had stumbled
into quite the can of worms. I just finished a pretty comprehensive
set of changes that will now allow proper handling of tombstones in
the Java client - https://github.com/basho/riak-java-client/pull/195

Sorry you had to be the one, but thanks for letting us know about this!
- Roach

On Thu, Jan 31, 2013 at 10:36 AM, Ingo Rockel
 wrote:
> Hi Brian,
>
> thanks for the suggestion, but I already chose a different solution for now:
> if these messages get deleted I just delete the links to the message and
> mark the message as "abandoned" and available for reuse. So I don't run into
> the conflict if I need to store the message again.
>
> I just started the replay again and let it run for a while to see if this
> works for me.
>
> Thanks!
>
> Ingo
>
> On 31.01.2013 16:54, Brian Roach wrote:
>
>> Ingo -
>>
>> Unfortunately once you've got a sibling tombstone, things get a bit
>> tricky. It's not going away until you resolve them, which, when using
>> the JSONConverter in the Java client, you can't. Oddly enough, this is
>> the first time anyone has hit this.
>>
>> I've got a couple ideas on how to address this properly but I need to
>> look at some things first.
>>
>> In the meantime, what I'd suggest as a workaround is to copy and paste
>> the source for the JSONConverter into your own Converter that
>> you'll pass to the StoreObject and modify it to return null:
>>
>>
>> https://github.com/basho/riak-java-client/blob/master/src/main/java/com/basho/riak/client/convert/JSONConverter.java#L141
>>
>> Have it check to see if riakObject.getValue() returns null and if it
>> does ... return null. You'll also need to modify your ConflictResolver
>> to check for null as it iterates through the list of your POJOs that
>> gets passed to it and act accordingly. If there's only a tombstone,
>> just return null ... which means you will also need to modify your
>> Mutation to handle a null being passed to it in the case of there
>> only being a tombstone.
>>
>> In the end this may well be what I do but I think I have a slightly
>> more elegant solution that I want to look into.
>>
>> I've got an errand I need to run this morning, but I'll get to work on
>> this as soon as I get back.
>>
>> Thanks, and sorry for the trouble.
>> - Brian Roach
>>
>>
>> On Thu, Jan 31, 2013 at 3:56 AM, Ingo Rockel
>>  wrote:
>>>
>>> Hi Brian,
>>>
>>> thanks for the detailed explaination!
>>>
>>> I had a look at an object which constantly fails to load even if
>>> retrying:
>>>
>>> lftp :~> cat "http://172.22.3.14:8091/riak/m/Um|18498012|4|0|18298081"
 Connecting to 172.22.3.14 (172.22.3.14) port 8091
>>> ---> GET /riak/m/Um|18498012|4|0|18298081 HTTP/1.1
>>> ---> Host: 172.22.3.14:8091
>>> ---> User-Agent: lftp/4.3.3
>>> ---> Accept: */*
>>> ---> Connection: keep-alive
>>> --->
>>> <--- HTTP/1.1 300 Multiple Choices
>>> <--- X-Riak-Vclock: a85hYGBgzGDKBVIcaZPWMQZyWttkMCWy5rEyXNhTd4ovCwA=
>>> <--- Vary: Accept, Accept-Encoding
>>> <--- Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
>>> <--- Last-Modified: Thu, 31 Jan 2013 10:00:48 GMT
>>> <--- ETag: "6PSreYIOL25KOpNyG0XPe7"
>>> <--- Date: Thu, 31 Jan 2013 10:42:41 GMT
>>> <--- Content-Type: text/plain
>>> <--- Content-Length: 56
>>> <---
>>> <--* Siblings:
>>> <--* 50Uz9nvQWwOUBE6USi2gki
>>> <--* 1JsgLs3CE3k2mWsaCEiPp4
>>> cat: Access failed: 300 Multiple Choices
>>> (/riak/m/Um|18498012|4|0|18298081)
>>> lftp :~> cat
>>>
>>> "http://172.22.3.14:8091/riak/m/Um|18498012|4|0|18298081?vtag=50Uz9nvQWwOUBE6USi2gki"
>>> ---> GET /riak/m/Um|18498012|4|0|18298081?vtag=50Uz9nvQWwOUBE6USi2gki
>>> HTTP/1.1
>>> ---> Host: 172.22.3.14:8091
>>> ---> User-Agent: lftp/4.3.3
>>> ---> Accept: */*
>>> ---> Connection: keep-alive
>>> --->
>>> <--- HTTP/1.1 200 OK
>>> <--- X-Riak-Vclock: a85hYGBgzGDKBVIcaZPWMQZyWttkMCWy5rEyXNhTd4ovCwA=
>>> <--- Vary: Accept-Encoding
>>> <--- Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
>>> <--- Link: ; rel="up"
>>> <--- Last-Modified: Thu, 31 Jan 2013 0

Re: Java client question...

2013-02-01 Thread Brian Roach
Scenario 1: If you know the object doesn't exist or you want to
overwrite it (because allow_mult is not enabled), you want to call
withoutFetch() - there's no reason to fetch something from Riak if
it's not there (not found) or you don't care what the value is
currently (and again, are not creating siblings)

Scenario 2: If you want to ... but since you're not fetching in the
first place the result should be exactly what is in your Mutation.
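
Concretely (a sketch; myPojo and myMutation stand in for your own object
and Mutation):

// Scenario 1 - blind write, no pre-fetch:
bucket.store(myPojo).withoutFetch().execute();

// Scenario 2 - apply the Mutation and get the stored result back:
MyPojo mutated = bucket.store(myPojo)
                       .withoutFetch()
                       .withMutator(myMutation)
                       .returnBody(true)
                       .execute();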

- Roach

On Thu, Jan 31, 2013 at 6:14 AM, Guido Medina  wrote:
> Hi,
>
> I have doubts about withoutFetch() and returnBody(boolean), I will put some
> scenarios:
>
> 1. Store (or overwrite) an existing Riak object where I'm 100% sure I don't
> need to fetch from Riak (last write wins and goes to memory cache)
> 2. Apply a mutation to an object but this time return the mutated instance
> from the Riak operation without fetching it from the Riak cluster (via
> apply()?) so that the mutated result gets updated in memory cache.
>
> I want to use accurately both methods but I'm a bit lost with their use
> case, so, is it safe to assume the following?
>
> Scenario 1: execute() without calling withoutFetch() and returnBody(false)
> because both by default are false?
> Scenario 2: execute() with returnBody(true) so I get the result of
> Mutation.apply()?
>
> All described scenarios have no siblings enabled and use default converter
> (Domain POJO annotated with Jackson)
>
> Thanks in advance for the response(s),
>
> Guido.
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Strange Exception in java-client

2013-01-31 Thread Brian Roach
Ingo -

Unfortunately once you've got a sibling tombstone, things get a bit
tricky. It's not going away until you resolve them, which, when using
the JSONConverter in the Java client, you can't. Oddly enough, this is
the first time anyone has hit this.

I've got a couple ideas on how to address this properly but I need to
look at some things first.

In the meantime, what I'd suggest as a workaround is to copy and paste
the source for the JSONConverter into your own Converter that
you'll pass to the StoreObject and modify it to return null:

https://github.com/basho/riak-java-client/blob/master/src/main/java/com/basho/riak/client/convert/JSONConverter.java#L141

Have it check to see if riakObject.getValue() returns null and if it
does ... return null. You'll also need to modify your ConflictResolver
to check for null as it iterates through the list of your POJOs that
gets passed to it and act accordingly. If there's only a tombstone,
just return null ... which means you will also need to modify your
Mutation to handle a null being passed to it in the case of there
only being a tombstone.
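
i.e., at the top of the copied toDomain() (a sketch; MyPojo stands in
for your own class):

public MyPojo toDomain(IRiakObject riakObject) throws ConversionException {
    // tombstone - there's no content to hand to Jackson
    if (riakObject.getValue() == null || riakObject.getValue().length == 0) {
        return null;
    }
    // ... the rest of the copied JSONConverter logic
}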

In the end this may well be what I do but I think I have a slightly
more elegant solution that I want to look into.

I've got an errand I need to run this morning, but I'll get to work on
this as soon as I get back.

Thanks, and sorry for the trouble.
- Brian Roach


On Thu, Jan 31, 2013 at 3:56 AM, Ingo Rockel
 wrote:
> Hi Brian,
>
> thanks for the detailed explaination!
>
> I had a look at an object which constantly fails to load even if retrying:
>
> lftp :~> cat "http://172.22.3.14:8091/riak/m/Um|18498012|4|0|18298081"
>  Connecting to 172.22.3.14 (172.22.3.14) port 8091
> ---> GET /riak/m/Um|18498012|4|0|18298081 HTTP/1.1
> ---> Host: 172.22.3.14:8091
> ---> User-Agent: lftp/4.3.3
> ---> Accept: */*
> ---> Connection: keep-alive
> --->
> <--- HTTP/1.1 300 Multiple Choices
> <--- X-Riak-Vclock: a85hYGBgzGDKBVIcaZPWMQZyWttkMCWy5rEyXNhTd4ovCwA=
> <--- Vary: Accept, Accept-Encoding
> <--- Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
> <--- Last-Modified: Thu, 31 Jan 2013 10:00:48 GMT
> <--- ETag: "6PSreYIOL25KOpNyG0XPe7"
> <--- Date: Thu, 31 Jan 2013 10:42:41 GMT
> <--- Content-Type: text/plain
> <--- Content-Length: 56
> <---
> <--* Siblings:
> <--* 50Uz9nvQWwOUBE6USi2gki
> <--* 1JsgLs3CE3k2mWsaCEiPp4
> cat: Access failed: 300 Multiple Choices
> (/riak/m/Um|18498012|4|0|18298081)
> lftp :~> cat
> "http://172.22.3.14:8091/riak/m/Um|18498012|4|0|18298081?vtag=50Uz9nvQWwOUBE6USi2gki"
> ---> GET /riak/m/Um|18498012|4|0|18298081?vtag=50Uz9nvQWwOUBE6USi2gki
> HTTP/1.1
> ---> Host: 172.22.3.14:8091
> ---> User-Agent: lftp/4.3.3
> ---> Accept: */*
> ---> Connection: keep-alive
> --->
> <--- HTTP/1.1 200 OK
> <--- X-Riak-Vclock: a85hYGBgzGDKBVIcaZPWMQZyWttkMCWy5rEyXNhTd4ovCwA=
> <--- Vary: Accept-Encoding
> <--- Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
> <--- Link: ; rel="up"
> <--- Last-Modified: Thu, 31 Jan 2013 09:57:38 GMT
> <--- ETag: "50Uz9nvQWwOUBE6USi2gki"
> <--- Date: Thu, 31 Jan 2013 10:42:49 GMT
> <--- Content-Type: application/octet-stream
> <--- Content-Length: 0
> <---
> lftp :~> cat
> "http://172.22.3.14:8091/riak/m/Um|18498012|4|0|18298081?vtag=1JsgLs3CE3k2mWsaCEiPp4"
> ---> GET /riak/m/Um|18498012|4|0|18298081?vtag=1JsgLs3CE3k2mWsaCEiPp4
> HTTP/1.1
> ---> Host: 172.22.3.14:8091
> ---> User-Agent: lftp/4.3.3
> ---> Accept: */*
> ---> Connection: keep-alive
> --->
> <--- HTTP/1.1 200 OK
> <--- X-Riak-Vclock: a85hYGBgzGDKBVIcaZPWMQZyWttkMCWy5rEyXNhTd4ovCwA=
> <--- Vary: Accept-Encoding
> <--- Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
> <--- Link: ; rel="up"
> <--- Last-Modified: Thu, 31 Jan 2013 10:00:48 GMT
> <--- ETag: "1JsgLs3CE3k2mWsaCEiPp4"
> <--- Date: Thu, 31 Jan 2013 10:43:01 GMT
> <--- Content-Type: application/json; charset=UTF-8
> <--- Content-Length: 114
> <---
> {"sortKey":1359626448000106,"st":2,"t":4,"r":18498012,"s":18298081,"ct":1359626448000,"rv":21215685,"cv":1,"su":0}
>
> the object has two siblings, one the deleted empty "tombstone" and one with
> the new data. And there's a gap auf 2:30min between both siblings. There's
> no immediate write after the deletion. I logged the write operations and
> this gap is there also. And the java client constantly fails to load this
> object.
>

Re: Strange Exception in java-client

2013-01-30 Thread Brian Roach
Ingo -

Riak is returning an object with no contents (which ends up being an
empty String passed to Jackson).

Unless you've somehow mangled the data yourself (which sounds unlikely
given the bit about the 404 from the command line; more on that in a
bit) what's happening is that you're encountering a tombstone; an
object that has been deleted via a delete operation but hasn't been
removed yet. This causes an "empty" object to be returned (the
tombstone) and causes Jackson to puke (HTTP will actually return this
as a 404 but if you look there's still a X-Riak-Vclock: header with a
vclock).

Probably the best description of how this works in Riak is a post by
Jon Meredith which can be found here:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-October/006048.html

Unfortunately this is something the Java client doesn't know what to
do with when using the default JSONConverter and your own POJOs. And
it's not as simple as "just return null" because of a case where a
tombstone could actually be a sibling and the client then needs to
resolve the conflict which is the next step in the process. It's
something I'm going to have to think about.

As you've discovered, when the tombstone isn't a sibling simply
retrying will often work because by then the delete has fully
completed and the tombstones have been removed from the Riak nodes.

Is there a reason you're rapidly doing a delete then a store (which
triggers that fetch)?

Thanks,
Brian Roach

On Wed, Jan 30, 2013 at 9:06 AM, Ingo Rockel
 wrote:
> Hi Dmitri,
>
> it doesn't happen in my code; it happens while the riak-client tries
> to deserialize a fetched object from riak in a "fetch-before-store" (see the
> stack). I also get this error randomly while trying just to fetch an object
> from the database.
>
> And if I try to fetch the object from the cmdline I just get a 404. So I
> would expect the java-client just returns a null-result for this fetch and
> not to throw an exception.
>
> All my objects are stored using the riak-java-client and the
> json-serializer.
>
> Ahh, just tested: if I retry it sometimes works, although most of the time
> still fails (haven't tried with a sleep so far).
>
> Ingo
>
> On 30.01.2013 16:57, Dmitri Zagidulin wrote:
>>
>> Hi Ingo.
>>
>> It's difficult to diagnose the exact reason without looking at your code.
>> But that error is a JSON parser error. It gets thrown whenever the code
>> tries to parse an empty string as a json object.
>> The general-case solution is to validate your strings or input streams
>> that you're turning into JSON objects, or to catch an exception when
>> creating that object and deal with it accordingly.
>>
>> But again, it's hard to say why it's happening exactly, in your case --
>> try to determine where in your code that's happening and think of ways
>> some input or result is empty, and check for that.
>>
>> Dmitri
>>
>>
>>
>> On Wed, Jan 30, 2013 at 10:44 AM, Ingo Rockel
>> mailto:ingo.roc...@bluelionmobile.com>>
>>
>> wrote:
>>
>> Hi,
>>
>> I wrote a java tool to convert part of our data from a
>> mysql-database into riak. As this tool is running while our system
>> is still up, it needs to replay all modifications done in the mysql
>> database, during these modifications I sometimes get this exception
>> from the riak client:
>>
>> com.basho.riak.client.convert.ConversionException:
>> java.io.EOFException: No content to map to Object due to end of input
>> com.basho.riak.client.convert.ConversionException:
>> java.io.EOFException: No content to map to Object due to end of input
>>  at com.basho.riak.client.convert.JSONConverter.toDomain(JSONConverter.java:167)
>>  at com.basho.riak.client.operations.FetchObject.execute(FetchObject.java:110)
>>  at com.basho.riak.client.operations.StoreObject.execute(StoreObject.java:112)
>>  at com.bluelionmobile.qeep.messaging.db.impl.MessageKVImpl.storeUniqueMessageDto(MessageKVImpl.java:264)
>>  at com.bluelionmobile.qeep.messaging.db.impl.MessageKVImpl.createDataFromDTO(MessageKVImpl.java:138)
>>  at com.bluelionmobile.qeep.messaging.db.impl.MessageKVImpl.updateDataFromDTO(MessageKVImpl.java:205)
>>  at com.bluelionmobile.qeep.messaging.db.utils.Replay$ReplayRunner.run(Replay.
