Hello Fred,
riak-admin aae-status:
== Exchanges ==
Index                                       Last (ago)    All (ago)
-------------------------------------------------------------------
0
What is the best way to root-cause this?
Is there any metadata or other record we can query to trace the lifecycle of
this deleted object, given that the key and bucket are known?
On Thu, Feb 23, 2017 at 9:53 PM, al so wrote:
> Present both in Solr and Riak. It's a 1-way replication in MDC. MDC is not
> the cause unless there is a bug there as well.
>
>
> On Thu, Feb 23, 2017 at 6:02 AM, Fred Dushin wrote:
>
>> Running a Solr query has no impact on writes -- Riak search queries are
>> direct pass throughs to Solr query and don't touch any of the salient Riak
>> systems (batching writes to Solr, YZ AAE, etc).
Hello Witeman,
What you are seeing with your two queries is the result of two different
coverage plans querying different parts of the cluster. Riak Search
translates coverage plans into Solr sharded queries, and will periodically
change the coverage plan so as to more evenly distribute queries.
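The coverage-plan behaviour described above can be sketched in a few lines of Python. This is an illustration, not Riak's actual planner: the partition count, n_val, replica placement, and rotation rule below are all simplified assumptions, chosen only to show why successive queries can consult different replicas and, if one Solr index is missing a document, return different answers.

```python
# Toy model of Riak coverage-plan rotation (illustrative only; not Riak code).
# Assumptions: 8 partitions, n_val=3, replicas of partition p live on vnodes
# p, p+1, p+2 (mod ring size), and the plan rotates which replica answers.

RING_SIZE = 8
N_VAL = 3

def coverage_plan(rotation):
    """For each partition, pick one replica vnode; 'rotation' shifts the pick."""
    r = rotation % N_VAL
    return {p: (p + r) % RING_SIZE for p in range(RING_SIZE)}

def query(doc_partition, rotation, broken_vnode=5):
    """Return what a search for a doc on doc_partition would see if the
    Solr index on broken_vnode happens to be missing the document."""
    chosen = coverage_plan(rotation)[doc_partition]
    return "found" if chosen != broken_vnode else "NOT FOUND"

# A doc on partition 4 is replicated to vnodes 4, 5, 6; if vnode 5's index
# is missing it, the result flips as the plan rotates.
results = [query(4, rotation) for rotation in range(3)]
print(results)  # ['found', 'NOT FOUND', 'found']
```

Under this toy model the same query alternates between hits and misses purely as a function of which plan is active, which matches the intermittent NOT FOUND symptom reported elsewhere in this thread.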
Hi,
I am running a 10-node RiakKV 2.2.0 cluster with Riak Search (Yokozuna)
enabled. One bucket with an index holds about 3 million records, each about
1 KB in size.
When a Yokozuna query is triggered for one specific id, it sometimes returns
the record and sometimes returns NOT FOUND.
Hello!
I'm trying to write a riak_pipe command that updates object metadata; the code:
https://gist.github.com/greggy/7d7fa3102d89673019410c6e244650cd
I'm fetching every entry in update_metadata/1, then creating new metadata and
updating it in Item.
My question is: how do I update the r_object in its bucket?
Thanks
Hi,
Is it possible to downsample a table as part of a query? Specifically, to
group and aggregate a given table's records at a less granular level than
the rows are stored at, using an aggregation technique. Most time series
databases offer a way to either precompute (reindex) at different
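For what it's worth, the grouping-and-aggregating step being asked about can be sketched in plain Python (a toy stand-in, not a Riak TS feature): truncate each timestamp to a coarser bucket, then aggregate the values in each bucket. The data, bucket width, and aggregate below are made-up examples.

```python
from collections import defaultdict

# Rows stored at fine granularity: (timestamp_seconds, value). Made-up data.
rows = [(0, 1.0), (10, 2.0), (50, 3.0), (60, 4.0), (70, 6.0)]

def downsample(rows, bucket_seconds, agg=lambda vs: sum(vs) / len(vs)):
    """Group rows into bucket_seconds-wide windows, then aggregate each window.
    Equivalent in spirit to a GROUP BY on a truncated timestamp."""
    buckets = defaultdict(list)
    for ts, value in rows:
        buckets[ts - ts % bucket_seconds].append(value)
    return {start: agg(values) for start, values in sorted(buckets.items())}

print(downsample(rows, 60))  # {0: 2.0, 60: 5.0} -- per-minute averages
```

Swapping the `agg` callable gives other rollups (max, sum, count) over the same coarser buckets.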
Running a Solr query has no impact on writes -- Riak search queries are direct
pass throughs to Solr query and don't touch any of the salient Riak systems
(batching writes to Solr, YZ AAE, etc). I believe the timing of the
reappearance is a coincidence.
Is it possible the object reappeared via
On 23 February 2017 at 13:38, Jurgen Ramaliu wrote:
> Hello Magnus,
>
> Attached is console.log.
>
>
Hi Jurgen,
The log contains these lines:
2017-02-23 14:36:17.949 [error] <0.707.0>@riak_kv_vnode:init:512 Failed to
start riak_kv_eleveldb_backend backend for index
91343852333181432387730302044
On 23 February 2017 at 07:19, Jurgen Ramaliu wrote:
> Hi Paul and Magnus,
> I resolved it with these commands:
>
> - riak stop
> - changing the nodename in riak.conf from nodename = riak@127.0.0.1
>   to riak@192.168.1.10
> - riak-admin reip riak@127.0.0.1 riak@192.168.1.10
>
> Here is brief env:
> Riak v 2.0.8 + Solr/ 5 node cluster/ MDC
>
> Problem:
> A deleted object suddenly reappeared after a few days. A Solr search query
> ("*:*") was executed around the time of the reappearance.
>
> Bucket properties for this reappeared object:
> {
> "props": {
> "name": "UsaH
Hi Paul and Magnus,
I resolved it with these commands:
- riak stop
- changing the nodename in riak.conf from nodename = riak@127.0.0.1
  to riak@192.168.1.10
- riak-admin reip riak@127.0.0.1 riak@192.168.1.10
But I have another problem: Riak starts with this IP but shuts down a
Andrey:
It's waiting for 60 seconds, literally...
See
https://github.com/basho/riak_core/search?utf8=%E2%9C%93&q=vnode_inactivity_timeout
Handoff is not initiated until a vnode has been inactive for the specified
inactivity period.
For demonstration purposes, if you want to reduce this time, y
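For a test cluster, one way to lower that window is via advanced.config (a sketch assuming a Riak 2.x file layout; riak_core reads vnode_inactivity_timeout in milliseconds, and shortening it is for demonstration only, not production):

```erlang
%% advanced.config fragment (assumption: Riak 2.x file layout).
%% Lowers the riak_core vnode inactivity window from its 60 s default to
%% 10 s so handoff kicks in sooner -- for demonstration/testing only.
[
  {riak_core, [
    {vnode_inactivity_timeout, 10000}   %% milliseconds
  ]}
].
```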
Hi, guys!
I'd like to follow up on handoff behaviour after a netsplit. The problem is
that right after the network partition is healed, the "riak-admin transfers"
command says that there are X partitions waiting to transfer from one node to
another, and Y partitions waiting to transfer in the opposite direction.