Re: Yokozuna's inconsistent query problem

2017-02-23 Thread Witeman Zheng
Hello Fred,

riak-admin aae-status:

== Exchanges ==
Index    Last (ago)    All (ago)
---
0

Re: Riak: reliable object deletion

2017-02-23 Thread al so
Present both in Solr and Riak. It's a 1-way replication in MDC. MDC is not the cause unless there is a bug there as well.

On Thu, Feb 23, 2017 at 6:02 AM, Fred Dushin wrote:
> Running a Solr query has no impact on writes -- Riak search queries are direct pass

Re: Riak: reliable object deletion

2017-02-23 Thread al so
Present both in Solr and Riak.

On Thu, Feb 23, 2017 at 6:02 AM, Fred Dushin wrote:
> Running a Solr query has no impact on writes -- Riak search queries are direct pass throughs to Solr query and don't touch any of the salient Riak systems (batching writes to Solr, YZ

Re: Yokozuna's inconsistent query problem

2017-02-23 Thread Fred Dushin
Hello Witeman, What you are seeing with your two queries is the result of two different coverage plans, querying different parts of the cluster. Riak Search translates coverage plans to Solr sharded queries, and will periodically change the coverage plan, so as to more evenly distribute
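One way to confirm that different coverage plans are returning different results is to query each node's Solr core directly and compare with a covered query through Riak. A hypothetical sketch (the index name `my_index`, the key, and the node list are placeholders; ports 8093 and 8098 are the defaults for the internal Solr endpoint and the Riak HTTP API):

```shell
# Ask each node's local Solr core for the document (distrib=false keeps
# the query from fanning out), then compare against a covered query.
for node in node1 node2 node3; do
  echo "== $node =="
  curl -s "http://$node:8093/internal_solr/my_index/select?q=_yz_rk:some_key&distrib=false&wt=json"
done

# Covered query through Riak Search (uses whatever coverage plan is current):
curl -s "http://node1:8098/search/query/my_index?q=_yz_rk:some_key&wt=json"
```

If some cores are missing the document, AAE should eventually repair the divergence; `riak-admin search aae-status` shows exchange activity.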

Yokozuna's inconsistent query problem

2017-02-23 Thread Witeman Zheng
Hi, I am running a 10-node RiakKV 2.2.0 cluster with Riak Search (Yokozuna) turned on. There are about 3 million records in one bucket with an index; every record is about 1 KB in size. When a Yokozuna query for one specific id is triggered, it sometimes returns the record and sometimes returns NOT FOUND, it

Update metadata of entries in bucket

2017-02-23 Thread Grigory Fateyev
Hello! I'm trying to write a riak_pipe command that updates metadata; the code: https://gist.github.com/greggy/7d7fa3102d89673019410c6e244650cd I'm getting every entry in update_metadata/1, then creating new metadata and updating it in Item. My question is: how do I update the r_object in a bucket? Thank

Riak TS and downsampling?

2017-02-23 Thread Jordan Ganoff
Hi, Is it possible to downsample a table as part of a query? Specifically, to group and aggregate a given table's records at a less granular level than the rows are stored at, using an aggregation technique. Most time series databases offer a way to either precompute (reindex) at
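For reference, Riak TS does support simple SQL aggregates, and grouping was added in later releases, so basic query-time downsampling is possible even without precomputation. A hypothetical sketch against the TS HTTP query endpoint (the `sensors` table, its columns, and the time range are all assumptions, not from the thread):

```shell
# Hypothetical downsampling query posted to Riak TS's HTTP query API.
# Aggregates run over the rows matched by the WHERE clause.
curl -s -XPOST http://localhost:8098/ts/v1/query --data "
  SELECT COUNT(*), AVG(temperature)
  FROM sensors
  WHERE region = 'eu'
    AND time >= 1487808000000 AND time < 1487894400000
"
```

Whether GROUP BY is available depends on the TS version in use; without it, each bucket of the downsampled series needs its own query over the corresponding time range.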

Re: Riak: reliable object deletion

2017-02-23 Thread Fred Dushin
Running a Solr query has no impact on writes -- Riak search queries are direct pass throughs to Solr query and don't touch any of the salient Riak systems (batching writes to Solr, YZ AAE, etc). I believe the timing of the reappearance is a coincidence. Is it possible the object reappeared

Re: Node is not running!

2017-02-23 Thread Magnus Kessler
On 23 February 2017 at 13:38, Jurgen Ramaliu wrote:
> Hello Magnus,
> Attached is console.log.

Hi Jurgen,

The log contains these lines:

2017-02-23 14:36:17.949 [error] <0.707.0>@riak_kv_vnode:init:512 Failed to start riak_kv_eleveldb_backend backend for index
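When `riak_kv_eleveldb_backend` fails to start for a partition, the usual suspects are disk space, data-directory ownership, or stale LevelDB LOCK files left by an unclean shutdown. A hypothetical checklist (paths assume a default package install under /var/lib/riak; adjust to your platform_data_dir):

```shell
df -h /var/lib/riak                         # free space on the data volume
ls -ld /var/lib/riak/leveldb                # directory must be owned by the riak user
sudo ls /var/lib/riak/leveldb/*/LOCK        # LOCK files from an unclean shutdown
sudo tail -n 50 /var/log/riak/console.log   # full error, including the partition index
```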

Re: Node is not running!

2017-02-23 Thread Magnus Kessler
On 23 February 2017 at 07:19, Jurgen Ramaliu wrote:
> Hi Paul and Magnus,
> I have resolved it by using these commands:
> - riak stop
> - changing nodename in riak.conf from nodename = riak@127.0.0.1 to riak@192.168.1.10
> - riak-admin reip

Riak: reliable object deletion

2017-02-23 Thread al so
> Here is a brief env:
> Riak v2.0.8 + Solr / 5-node cluster / MDC
>
> Problem:
> A deleted object suddenly resurrected after a few days. A Solr search query ("*:*") was executed around the time of reappearance.
>
> Bucket property for this reappeared object:
> {
>   "props": {
>     "name":
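Resurrected deletes in Riak are often tied to tombstone reaping racing with replication or AAE rather than to Solr. Two things worth inspecting are the bucket's properties and the node's `delete_mode` setting. A hypothetical sketch (bucket name is a placeholder):

```shell
# Inspect the bucket's current properties over the HTTP API:
curl -s http://localhost:8098/buckets/mybucket/props

# delete_mode defaults to reaping tombstones 3s after the delete.
# Keeping tombstones avoids resurrection races at the cost of unbounded
# tombstone growth; set it in /etc/riak/advanced.config:
#   [{riak_kv, [{delete_mode, keep}]}].
```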

Re: Node is not running!

2017-02-23 Thread Jurgen Ramaliu
Hi Paul and Magnus, I have resolved it using these commands:
- riak stop
- changing nodename in riak.conf from nodename = riak@127.0.0.1 to riak@192.168.1.10
- riak-admin reip riak@127.0.0.1 riak@192.168.1.10
But I have another problem: riak starts with this IP but shuts down
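For reference, the rename steps above can be sketched as one sequence (IPs are the ones from the thread; `riak-admin reip` must run while the node is stopped, and config paths assume a default package install):

```shell
riak stop

# Point the node name at the new address in riak.conf:
sed -i 's/^nodename = riak@127.0.0.1/nodename = riak@192.168.1.10/' /etc/riak/riak.conf

# Rewrite the ring file to the new node name (node must be stopped):
riak-admin reip riak@127.0.0.1 riak@192.168.1.10

riak start
riak-admin member-status   # confirm the node appears under its new name
```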

Re: Handoffs are too slow after netsplit

2017-02-23 Thread Douglas Rohrer
Andrey: It's waiting for 60 seconds, literally. See https://github.com/basho/riak_core/search?utf8=%E2%9C%93&q=vnode_inactivity_timeout - handoff is not initiated until a vnode has been inactive for the specified inactivity period. For demonstration purposes, if you want to reduce this time,
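A minimal sketch of reducing that inactivity period, assuming a TEST cluster and a default config path (the 10000 ms value is an arbitrary example; the default is 60000):

```shell
# Shorten how long a vnode must be idle before handoff is considered,
# via a riak_core setting in advanced.config. If the file already has a
# term list, merge this entry into it instead of appending a second list.
cat >> /etc/riak/advanced.config <<'EOF'
[
  {riak_core, [
    {vnode_inactivity_timeout, 10000}   %% milliseconds, default 60000
  ]}
].
EOF
riak restart
```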

Handoffs are too slow after netsplit

2017-02-23 Thread Andrey Ershov
Hi, guys! I'd like to follow up on handoff behaviour after a netsplit. The problem is that right after the network partition is healed, the "riak-admin transfers" command says that there are X partitions waiting transfer from one node to another, and Y partitions waiting transfer in the opposite
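While waiting for handoff to drain, two commands are useful: one to watch progress and one to raise per-node transfer concurrency (the default limit is 2). A hypothetical sketch (the node name is a placeholder):

```shell
riak-admin transfers                      # pending and active partition transfers
riak-admin transfer-limit                 # current handoff concurrency per node
riak-admin transfer-limit riak@node1 4    # raise the limit on one node
```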