Hi,
While testing my infrastructure under heavy load, I realized that all
nodes are hit for every single Solr search; that must be related to the YZ
coverage plan.
Is there any way to tune my Solr coverage by choosing whether I want to
distribute the query across as many nodes as possible, or to limit it to as few
nodes as possible?
I'm very curious about your cursorMark implementation; I'm in dire need of
that feature.
From my experience, I wasn't even able to trigger such a query with my Riak
version, as it was not yet supported by the Solr bundled with it. But I
might have missed something there.
I'm using 2.1.2.
Guillaume
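For context, cursorMark is Solr's deep-paging mechanism (Solr 4.7+): each response carries a nextCursorMark that you feed back until it stops changing. A minimal sketch of the request parameters, assuming a Yokozuna index whose uniqueKey is _yz_id (whether the Solr bundled with Riak 2.1.x exposes this depends on its Solr version):

```python
def cursor_params(query: str, rows: int, cursor: str = "*") -> dict:
    """Build Solr query params for one cursorMark page.

    cursorMark requires a sort clause that ends with the uniqueKey
    field (_yz_id in Yokozuna) as a tie-breaker; "*" asks for the
    first page.
    """
    return {
        "q": query,
        "rows": rows,
        "sort": "score desc, _yz_id asc",
        "cursorMark": cursor,
    }

# Against a node's internal Solr endpoint the loop would look roughly
# like this (hypothetical host/index, default Solr port 8093 -- untested):
#
#   resp = requests.get(
#       "http://riak-1:8093/internal_solr/myindex/select",
#       params=cursor_params("*:*", 100)).json()
#   cursor = resp["nextCursorMark"]  # resubmit until it stops changing
```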
On
Well, I guess that's on me. Solr was unable to create my index due to an
extension I'm using, so riak-4 doesn't know of a working index for the
bucket type.
Guillaume
On 20/09/2016 14:48, Guillaume Boddaert wrote:
Hi,
I'm currently adding two new nodes to my production cluster. Yet, they seem to
answer my requests while joining. I get strange answers with bad prop
types for my bucket (index_name in my case).
admin@riak-1:~$ sudo riak-admin member-status
= Membership
operations (including indexing new entries into Solr), but it
will be excluded from any cover set when a query plan is generated. I
can't guarantee that this would take less than 5 days, however.
-Fred
On Aug 29, 2016, at 3:56 AM, Guillaume Boddaert
<guilla...@lighthouse-analytics
Hi,
I recently needed to alter my Riak Search schema for a bucket type that
contains ~30 million rows. As a result, my index was wiped, since we are
waiting for a Riak Search 2.2 feature that will sync Riak storage with the
Solr index on such an occasion.
I adapted a script suggested by
Krotkine wrote:
Hi Guillaume,
If I understand correctly you need to change all the values of your JSON data.
How many keys are we talking about, how big are the data, and in how many
buckets are the keys?
Also, is your cluster in production yet?
On 7 June 2016 at 18:43, Guillaume Boddaert <gui
Hi,
I'd like to patch my current Riak collection to rename a field inside a
JSON schema. How can I achieve that from the command line on the Riak server
itself? Is there some kind of map/reduce mechanism that allows any JSON
record to be updated and then saved back to the Riak cluster?
Guillaume
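One straightforward (if heavyweight) approach is a read-modify-write pass from a client rather than map/reduce. A minimal sketch: the rename itself is a pure dict operation, and the commented part shows how it might be driven with the official Python client (bucket and field names are placeholders, and a full key listing is expensive on a production cluster):

```python
def rename_field(record: dict, old: str, new: str) -> dict:
    """Return a copy of the JSON record with field `old` renamed to `new`.

    Records that lack `old` are returned unchanged (as a copy).
    """
    out = dict(record)
    if old in out:
        out[new] = out.pop(old)
    return out

# Applying it over a whole bucket might look roughly like this
# (hypothetical names, requires a live cluster -- untested):
#
#   import riak
#   client = riak.RiakClient(protocol='pbc', host='riak-1')
#   bucket = client.bucket('tweets')
#   for keys in bucket.stream_keys():   # full key listing: expensive!
#       for key in keys:
#           obj = bucket.get(key)
#           obj.data = rename_field(obj.data, 'old_name', 'new_name')
#           obj.store()
```

Note that listing keys walks the entire keyspace, so this is best run off-peak.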
" in riak.conf
* Be sure to carefully read and apply all tunings:
http://docs.basho.com/riak/kv/2.1.4/using/performance/
* You may wish to increase the memory dedicated to leveldb:
http://docs.basho.com/riak/kv/2.1.4/configuring/backend/#leveldb
--
Luke Bakken
Engineer
lbak...@basho.com
On
Hi there,
I'm currently testing a custom component in my Riak Search system. As I
need a suggestion mechanism on top of the Solr index, I implemented the
Suggester component (https://wiki.apache.org/solr/Suggester).
It seems to work correctly, yet I have some questions regarding the usage
of custom
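For reference, wiring a suggester typically takes two solrconfig.xml entries, a search component and a request handler. A rough sketch (suggester name and field are placeholders, and this shows the newer SuggestComponent rather than the spellcheck-based variant the linked wiki page describes):

```xml
<!-- Hypothetical names; adapt the field to your schema. -->
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <str name="suggestAnalyzerFieldType">string</str>
  </lst>
</searchComponent>

<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.dictionary">mySuggester</str>
    <str name="suggest.count">10</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```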
--
Luke Bakken
Engineer
lbak...@basho.com
On Tue, May 3, 2016 at 7:33 AM, Guillaume Boddaert
<guilla...@lighthouse-analytics.co> wrote:
Hi,
So
-read those stats. You'll notice
that those "put" stats are only for consistent or write_once
operations, so they don't apply to you.
Your read stats show objects well within Riak's recommended object size:
node_get_fsm_objsize_100 : 10916
node_get_fsm_objsize_95 : 7393
node_get_fsm_objsize_99 : 8
.com
On Mon, May 2, 2016 at 8:26 AM, Guillaume Boddaert
<guilla...@lighthouse-analytics.co> wrote:
My clients work through an HAProxy box configured for round-robin.
I've switched from PBC to HTTP to provide you with this:
May 2 15:24:12 intrabalancer haproxy[29677]: my_daemon_box:53456
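A round-robin setup like the one described might look roughly like this in haproxy.cfg (node names are placeholders; Riak's protobuf interface defaults to port 8087, so an HTTP frontend would use 8098 instead):

```
# Hypothetical backend spreading PB traffic across three Riak nodes.
listen riak_pb
    bind *:8087
    mode tcp
    balance roundrobin
    option tcp-check
    server riak-1 riak-1:8087 check
    server riak-2 riak-2:8087 check
    server riak-3 riak-3:8087 check
```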
ize_median : 0
G.
On 02/05/2016 17:21, Luke Bakken wrote:
Which Riak client are you using? Do you have it configured to connect
to all nodes in your cluster or just one?
--
Luke Bakken
Engineer
lbak...@basho.com
On Mon, May 2, 2016 at 7:40 AM, Guillaume Boddaert
<guilla...@lighthouse-anal
Luke Bakken
Engineer
lbak...@basho.com
On Mon, May 2, 2016 at 4:45 AM, Guillaume Boddaert
<guilla...@lighthouse-analytics.co> wrote:
Hi,
I'm trying to set up a production environment with Riak as the backend.
Unfortunately, I have very slow write times that bottleneck my whole system.
Here is a sample from one of my nodes (riak-admin status | grep -e
'^node_put_fsm_time'):
node_put_fsm_time_100 : 3305516
node_put_fsm_time_95 :
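For readers of those stats: Riak reports the node_put_fsm_time_* percentiles in microseconds, so the 100th-percentile figure above is a multi-second tail. A quick conversion (value taken from the output above):

```python
def usec_to_ms(usec: int) -> float:
    """Convert a Riak FSM-time stat (microseconds) to milliseconds."""
    return usec / 1000.0

p100 = usec_to_ms(3305516)
print(f"node_put_fsm_time_100: {p100:.1f} ms")  # ~3305.5 ms, i.e. ~3.3 s at the tail
```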