Re: Stopping solrq supervisor in log; What is this?

2017-06-22 Thread Fred Dushin
This is likely a relatively benign message, one that might be better logged at 
debug level than at info level.  It would have been introduced in 2.0.8 or 
2.2.2 or later.

Essentially, there are solrq supervisors for each partition/index pair (for each 
partition on a node), and when these supervisors start or stop (e.g., when an 
index is deleted, or when the ring is rebalanced), you should get these log 
entries.  There may be less common scenarios that cause a solrq supervisor to 
start/stop (other than, of course, start/stop of the Riak node), but they 
shouldn't be occurring regularly.

-Fred

> On Jun 22, 2017, at 3:12 PM, Robert Latko  wrote:
> 
> Anybody seen this in their logs?
> 
> yz_solrq_sup:stop_queue_pair:130 Stopping solrq supervisor for index 
> <<"search1">>
> 
> Searching using the index still works; however, I am getting log notices for 
> most of my indexes.
> 
> If you know, please tell me...  Thank you!
> 
> Sincerely,
> 
> Robert
> 
> 
> 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Solr search response time spikes

2017-06-22 Thread Fred Dushin
It's pretty strange that you are seeing no search latency measurements on node 
5.  Are you sure your round-robining is working?  Are you favoring node 1?

In general, I don't think which node you hit for a query should make a 
difference, but I'd have to stare at the code some to be sure.  In essence, all 
the node that services the query does is convert the query into a sharded Solr 
query based on a coverage plan, which changes every minute or so, and then runs 
the sharded query on the local Solr node.  The Solr node then distributes the 
query to the rest of the nodes in the cluster, but that's all Solr comms -- 
Riak is out of the picture, by then.

Now, if you have a lot of sharded queries accumulating on one node, that might 
make a difference to Solr.  I am not a Solr expert, and I don't even play one 
on TV.  But maybe the fact that you are not hitting node 5 is relevant for that 
reason?

Can you do more analysis on your client, to make sure you are not favoring node 
1?
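One way to do that analysis, sketched here with illustrative names rather than any real Riak client API (the selector below is an assumption about how a round robin might be wrapped): count how often each node is actually picked, and look for a node stuck at zero, like node 5 in the stats.

```java
// Hypothetical sketch, not the real Riak Java client: wrap the round-robin
// node selector so it tallies how often each node is chosen. A node whose
// count stays at zero is never being hit by the client at all.
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

public class NodeBalanceCheck {
    private final AtomicInteger[] hits;
    private final AtomicLong next = new AtomicLong();

    public NodeBalanceCheck(int nodeCount) {
        hits = new AtomicInteger[nodeCount];
        for (int i = 0; i < nodeCount; i++) hits[i] = new AtomicInteger();
    }

    /** Pick the next node round-robin and record the hit. */
    public int pickNode() {
        int node = (int) (next.getAndIncrement() % hits.length);
        hits[node].incrementAndGet();
        return node;
    }

    /** Hits recorded per node so far. */
    public int[] counts() {
        int[] out = new int[hits.length];
        for (int i = 0; i < hits.length; i++) out[i] = hits[i].get();
        return out;
    }

    public static void main(String[] args) {
        NodeBalanceCheck check = new NodeBalanceCheck(5);
        for (int i = 0; i < 1000; i++) check.pickNode();
        // A healthy round robin over 5 nodes gives each node 200 hits;
        // a node stuck at 0 means requests are not reaching it.
        for (int c : check.counts()) System.out.println(c);
    }
}
```

In a real client you would record the node actually used per request rather than the selector's own counter, since connection pooling or failover can skew the effective distribution away from the nominal round robin.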

-Fred

> On Jun 22, 2017, at 10:20 AM, sean mcevoy  wrote:
> 
> Hi List,
> 
> We have a standard riak cluster with 5 nodes, and at the minute the traffic 
> levels are fairly low. Each of our application nodes has 25 client 
> connections, 5 to each riak node, which are selected in a round robin.
> 
> Our application level requests involve multiple riak requests so our traffic 
> tends to make requests in small bursts. Everything works fine for KV gets, 
> puts & deletes but we're seeing timeouts & weird response time spikes on solr 
> search operations.
> 
> In the past 36 hours (the only period I have riak stats for) I see one 
> response time of 38.8 seconds, 3 hours earlier a response time of 20.8 
> seconds, and the third biggest spike is an acceptable 3.5 seconds.
> 
> See below all search_query stats for the minute of the 38 sec sample. In the 
> application request we made 5 riak search requests to the same index in 
> parallel, which happens for each request of this type and normally doesn't 
> have an issue. But in this case all 5 timed out, and one timed out again on 
> retry with the other 4 succeeding.
> 
> Anyone ever seen anything like this before? Is there any known deadlock in 
> solr that I might hit if I make the same request on another connection before 
> the first has completed? This is what we do when our riak client times out 
> after 2 seconds and immediately retries.
> 
> Any advice or pointers welcomed.
> Thanks,
> //Sean.
> 
> 
> Riak node 1
> search_query_throughput_one: 14
> search_query_throughput_count: 259
> search_query_latency_min: 2776
> search_query_latency_median: 69411
> search_query_latency_mean: 4900973
> search_query_latency_max: 38887902
> search_query_latency_999: 38887902
> search_query_latency_99: 38887902
> search_query_latency_95: 2046215
> search_query_fail_one: 0
> search_query_fail_count: 0
> 
> Riak node 2
> search_query_throughput_one: 22
> search_query_throughput_count: 564
> search_query_latency_min: 4006
> search_query_latency_median: 8800
> search_query_latency_mean: 11834
> search_query_latency_max: 25509
> search_query_latency_999: 25509
> search_query_latency_99: 25509
> search_query_latency_95: 24035
> search_query_fail_one: 0
> search_query_fail_count: 0
> 
> Riak node 3
> search_query_throughput_one: 6
> search_query_throughput_count: 298
> search_query_latency_min: 3200
> search_query_latency_median: 15391
> search_query_latency_mean: 18062
> search_query_latency_max: 31759
> search_query_latency_999: 31759
> search_query_latency_99: 31759
> search_query_latency_95: 31759
> search_query_fail_one: 0
> search_query_fail_count: 0
> 
> Riak node 4
> search_query_throughput_one: 8
> search_query_throughput_count: 334
> search_query_latency_min: 2404
> search_query_latency_median: 7230
> search_query_latency_mean: 10211
> search_query_latency_max: 22502
> search_query_latency_999: 22502
> search_query_latency_99: 22502
> search_query_latency_95: 22502
> search_query_fail_one: 0
> search_query_fail_count: 0
> 
> Riak node 5
> search_query_throughput_one: 0
> search_query_throughput_count: 0
> search_query_latency_min: 0
> search_query_latency_median: 0
> search_query_latency_mean: 0
> search_query_latency_max: 0
> search_query_latency_999: 0
> search_query_latency_99: 0
> search_query_latency_95: 0
> search_query_fail_one: 0
> search_query_fail_count: 0
> 




Re: [IE] Re: Continuous Crash Report

2017-06-22 Thread Johan Sommerfeld
I haven't used bucket types that much, so I have no real idea of why it
thinks that a bucket type is deleted.

If you read the error log, the name seems to be
{name,{<<"commercial">>,<<"commercial_systest">>}}. As I said, I'm not that
familiar with bucket types, but having a tuple as the name feels odd. What
bucket types do you have, and how did they get created?

Regarding the cluster: how many nodes are there, what happened to the
node that went away, and was it gone for long before being removed? I'm
poking at this since, as far as I can tell, there is no way of removing a
bucket type, so it could be something internal that has gone wrong. How
did you deploy the new node? Was it installed using a script? Could it be
that you created the bucket type once more before joining the new node?
Did you force-remove and then add a new one, or did you do a force-replace?

I'm in way over my head with bucket types but maybe someone else can
jump in once they get online.

/J


On 22 June 2017 at 12:31, Mark Richard Thomas  wrote:
> Hello
>
>
>
> I force-deleted a node but made no modification to the bucket properties.
>
>
>
> From: jo...@s2hc.com [mailto:jo...@s2hc.com] On Behalf Of Johan Sommerfeld
> Sent: 22 June 2017 09:30
> To: Mark Richard Thomas
> Cc: riak-users@lists.basho.com
> Subject: [IE] Re: Continuous Crash Report
>
>
>
> Hi,
>
>
>
> I'm not sure, but looking at the exception and the code, you get a
> function_clause error because the first argument is '$deleted' while the
> function guard expects lists as arguments:
>
>
>
> https://github.com/basho/riak_core/blob/master/src/riak_core_bucket_props.erl#L129
>
>
>
> Why it is '$deleted' I don't know. Have you done anything to bucket
> properties, or is it just from replacing a node?
>
>
>
> Regards
>
> Johan Sommerfeld
>
>
>
> On 22 June 2017 at 10:11, Mark Richard Thomas wrote:
>
> Hello
>
>
>
> My crash.log is continually filling up with the following message after
> replacing a node:
>
>
>
> 2017-06-22 08:06:05 =CRASH REPORT
>
>   crasher:
>
> initial call: riak_kv_index_hashtree:init/1
>
> pid: <0.16663.2>
>
> registered_name: []
>
> exception exit:
> {{function_clause,[{riak_core_bucket_props,resolve,['$deleted',[{active,true},{allow_mult,false},{basic_quorum,true},{big_vclock,50},{chash_keyfun,{riak_core_util,chash_std_keyfun}},{claimant,'r...@udcu1lc9a009.app.c9.test.com'},{dvv_enabled,false},{dw,quorum},{last_write_wins,true},{linkfun,{modfun,riak_kv_wm_link_walker,mapreduce_linkfun}},{n_val,3},{name,{<<"commercial">>,<<"commercial_systest">>}},{notfound_ok,false},{old_vclock,86400},{postcommit,[]},{pr,0},{precommit,[]},{pw,0},{r,quorum},{rw,quorum},{search_index,<<"commercial_systest">>},{small_vclock,50},{w,quorum},{young_vclock,20}]],[{file,"src/riak_core_bucket_props.erl"},{line,129}]},{riak_core_metadata_object,'-resolve/2-fun-1-',3,[{file,"src/riak_core_metadata_object.erl"},{line,116}]},{lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{dvvset,reconcile,2,[{file,"src/dvvset.erl"},{line,243}]},{riak_core_metadata_object,resolve,2,[{file,"src/riak_core_metadata_object.erl"},{line,118}]},{riak_core_metadata,maybe_resolve,5,[{file,"src/riak_core_metadata.erl"},{line,355}]},{riak_core_metadata,itr_key_values,1,[{file,"src/riak_core_metadata.erl"},{line,226}]},{riak_core_metadata,fold_it,3,[{file,"src/riak_core_metadata.erl"},{line,130}]}]},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,328}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
>
> ancestors: [<0.1742.0>,riak_core_vnode_sup,riak_core_sup,<0.240.0>]
>
> messages: []
>
> links: []
>
> dictionary: []
>
> trap_exit: false
>
> status: running
>
> heap_size: 2586
>
> stack_size: 27
>
> reductions: 3015
>
>
>
> And in error.log:
>
>
>
> 2017-06-22 08:07:38.284 [error] <0.3955.4> CRASH REPORT Process <0.3955.4>
> with 0 neighbours exited with reason: no function clause matching
> riak_core_bucket_props:resolve('$deleted',
> [{active,true},{allow_mult,false},{basic_quorum,true},{big_vclock,50},{chash_keyfun,{riak_core_util,...}},...])
> line 129 in gen_server:init_it/6 line 328
>
>
>
>
>
> 'r...@udcu1lc9a009.app.c9.test.com' is a dead node.
>
>
>
> Mark Thomas
> Technical Lead, UK IT
> Equifax Inc.
>
>
>
> O +44 (0)7908 798 270
>
> mark.tho...@equifax.com
>
>
>
>
>
> Equifax Limited is registered in England with Registered No. 2425920.
> Registered Office: Capital House, 25 Chapel Street, London NW1 5DS. Equifax
> Limited is authorised and regulated by the Financial Conduct Authority.
>
> Equifax Touchstone Limited is registered in Scotland with Registered No.
> SC113401. Registered Office: Exchange Tower,19 Canning Street, Edinburgh,
> EH3 8EH.
>
> Equifax Commercial Services Limited is registered in the Republic of Ireland
> with Registered No. 215393. Registered Office: IDA Business & Technology
> Park, Rosslare Road, Drinagh, Wexford.
>
>
>
> This message contains information from Equifax which may be confidential and
> privileged. If you are not an intended recipient, please refrain from any
> disclosure, copying, distribution or use of this information and note that
> such actions are prohibited. If you have received this transmission in error,
> please notify by e-mail postmas...@equifax.com.

Stopping solrq supervisor in log; What is this?

2017-06-22 Thread Robert Latko

Anybody seen this in their logs?

yz_solrq_sup:stop_queue_pair:130 Stopping solrq supervisor for index 
<<"search1">>


Searching using the index still works; however, I am getting log notices 
for most of my indexes.


If you know, please tell me...  Thank you!

Sincerely,

Robert





Solr search response time spikes

2017-06-22 Thread sean mcevoy
Hi List,

We have a standard riak cluster with 5 nodes, and at the minute the traffic
levels are fairly low. Each of our application nodes has 25 client
connections, 5 to each riak node, which are selected in a round robin.

Our application level requests involve multiple riak requests so our
traffic tends to make requests in small bursts. Everything works fine for
KV gets, puts & deletes but we're seeing timeouts & weird response time
spikes on solr search operations.

In the past 36 hours (the only period I have riak stats for) I see one
response time of 38.8 seconds, 3 hours earlier a response time of 20.8
seconds, and the third biggest spike is an acceptable 3.5 seconds.

See below all search_query stats for the minute of the 38 sec sample. In
the application request we made 5 riak search requests to the same index in
parallel, which happens for each request of this type and normally doesn't
have an issue. But in this case all 5 timed out, and one timed out again on
retry with the other 4 succeeding.

Anyone ever seen anything like this before? Is there any known deadlock in
solr that I might hit if I make the same request on another connection
before the first has completed? This is what we do when our riak client
times out after 2 seconds and immediately retries.

Any advice or pointers welcomed.
Thanks,
//Sean.


Riak node 1
search_query_throughput_one: 14
search_query_throughput_count: 259
search_query_latency_min: 2776
search_query_latency_median: 69411
search_query_latency_mean: 4900973
search_query_latency_max: 38887902
search_query_latency_999: 38887902
search_query_latency_99: 38887902
search_query_latency_95: 2046215
search_query_fail_one: 0
search_query_fail_count: 0

Riak node 2
search_query_throughput_one: 22
search_query_throughput_count: 564
search_query_latency_min: 4006
search_query_latency_median: 8800
search_query_latency_mean: 11834
search_query_latency_max: 25509
search_query_latency_999: 25509
search_query_latency_99: 25509
search_query_latency_95: 24035
search_query_fail_one: 0
search_query_fail_count: 0

Riak node 3
search_query_throughput_one: 6
search_query_throughput_count: 298
search_query_latency_min: 3200
search_query_latency_median: 15391
search_query_latency_mean: 18062
search_query_latency_max: 31759
search_query_latency_999: 31759
search_query_latency_99: 31759
search_query_latency_95: 31759
search_query_fail_one: 0
search_query_fail_count: 0

Riak node 4
search_query_throughput_one: 8
search_query_throughput_count: 334
search_query_latency_min: 2404
search_query_latency_median: 7230
search_query_latency_mean: 10211
search_query_latency_max: 22502
search_query_latency_999: 22502
search_query_latency_99: 22502
search_query_latency_95: 22502
search_query_fail_one: 0
search_query_fail_count: 0

Riak node 5
search_query_throughput_one: 0
search_query_throughput_count: 0
search_query_latency_min: 0
search_query_latency_median: 0
search_query_latency_mean: 0
search_query_latency_max: 0
search_query_latency_999: 0
search_query_latency_99: 0
search_query_latency_95: 0
search_query_fail_one: 0
search_query_fail_count: 0
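A note on units, inferred from the numbers themselves rather than from any documentation: the latency fields appear to be in microseconds, since node 1's max of 38887902 matches the observed ~38.8 second spike once divided by one million. A minimal sketch of that conversion:

```java
// Sketch: convert the search_query_latency_* values to seconds, assuming
// (based on the magnitudes above, not on any spec) they are in microseconds.
public class LatencyUnits {

    public static double microsToSeconds(long micros) {
        return micros / 1_000_000.0;
    }

    public static void main(String[] args) {
        // Node 1's max: 38887902 us is ~38.89 s, matching the reported spike.
        System.out.println(microsToSeconds(38_887_902L));
        // Node 2's max: 25509 us is ~25.5 ms, i.e. perfectly healthy.
        System.out.println(microsToSeconds(25_509L));
    }
}
```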


RE: [IE] Re: Continuous Crash Report

2017-06-22 Thread Mark Richard Thomas
Hello

I force-deleted a node but made no modification to the bucket properties.

From: jo...@s2hc.com [mailto:jo...@s2hc.com] On Behalf Of Johan Sommerfeld
Sent: 22 June 2017 09:30
To: Mark Richard Thomas
Cc: riak-users@lists.basho.com
Subject: [IE] Re: Continuous Crash Report

Hi,

I'm not sure, but looking at the exception and the code, you get a 
function_clause error because the first argument is '$deleted' while the 
function guard expects lists as arguments:

https://github.com/basho/riak_core/blob/master/src/riak_core_bucket_props.erl#L129

Why it is '$deleted' I don't know. Have you done anything to bucket 
properties, or is it just from replacing a node?

Regards
Johan Sommerfeld

On 22 June 2017 at 10:11, Mark Richard Thomas wrote:
Hello

My crash.log is continually filling up with the following message after 
replacing a node:

2017-06-22 08:06:05 =CRASH REPORT
  crasher:
initial call: riak_kv_index_hashtree:init/1
pid: <0.16663.2>
registered_name: []
exception exit: 
{{function_clause,[{riak_core_bucket_props,resolve,['$deleted',[{active,true},{allow_mult,false},{basic_quorum,true},{big_vclock,50},{chash_keyfun,{riak_core_util,chash_std_keyfun}},{claimant,'r...@udcu1lc9a009.app.c9.test.com'},{dvv_enabled,false},{dw,quorum},{last_write_wins,true},{linkfun,{modfun,riak_kv_wm_link_walker,mapreduce_linkfun}},{n_val,3},{name,{<<"commercial">>,<<"commercial_systest">>}},{notfound_ok,false},{old_vclock,86400},{postcommit,[]},{pr,0},{precommit,[]},{pw,0},{r,quorum},{rw,quorum},{search_index,<<"commercial_systest">>},{small_vclock,50},{w,quorum},{young_vclock,20}]],[{file,"src/riak_core_bucket_props.erl"},{line,129}]},{riak_core_metadata_object,'-resolve/2-fun-1-',3,[{file,"src/riak_core_metadata_object.erl"},{line,116}]},{lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{dvvset,reconcile,2,[{file,"src/dvvset.erl"},{line,243}]},{riak_core_metadata_object,resolve,2,[{file,"src/riak_core_metadata_object.erl"},{line,118}]},{riak_core_metadata,maybe_resolve,5,[{file,"src/riak_core_metadata.erl"},{line,355}]},{riak_core_metadata,itr_key_values,1,[{file,"src/riak_core_metadata.erl"},{line,226}]},{riak_core_metadata,fold_it,3,[{file,"src/riak_core_metadata.erl"},{line,130}]}]},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,328}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
ancestors: [<0.1742.0>,riak_core_vnode_sup,riak_core_sup,<0.240.0>]
messages: []
links: []
dictionary: []
trap_exit: false
status: running
heap_size: 2586
stack_size: 27
reductions: 3015

And in error.log:

2017-06-22 08:07:38.284 [error] <0.3955.4> CRASH REPORT Process <0.3955.4> with 
0 neighbours exited with reason: no function clause matching 
riak_core_bucket_props:resolve('$deleted', 
[{active,true},{allow_mult,false},{basic_quorum,true},{big_vclock,50},{chash_keyfun,{riak_core_util,...}},...])
 line 129 in gen_server:init_it/6 line 328


'r...@udcu1lc9a009.app.c9.test.com' is a dead node.

Mark Thomas
Technical Lead, UK IT
Equifax Inc.

O +44 (0)7908 798 270
mark.tho...@equifax.com






--
Johan Sommerfeld
tel: +46 (0) 70 769 15 73
S2HC Sweden AB
Litsbyvägen 56
187 46 Täby

Re: Riak Java client question

2017-06-22 Thread Guido Medina
Also, a property was kept in the JSON only if it was explicitly annotated 
with @JsonProperty or was not annotated with any Riak annotation.


This was accomplished with the following code from the old client:


static {
    RIAK_ANNOTATIONS.add(RiakKey.class);
    RIAK_ANNOTATIONS.add(RiakIndex.class);
    RIAK_ANNOTATIONS.add(RiakVClock.class);
    RIAK_ANNOTATIONS.add(RiakTombstone.class);
    RIAK_ANNOTATIONS.add(RiakUsermeta.class);
    RIAK_ANNOTATIONS.add(RiakLinks.class);
}

private boolean keepProperty(BeanPropertyWriter beanPropertyWriter) {
    // An explicit @JsonProperty always wins.
    if (beanPropertyWriter.getAnnotation(JsonProperty.class) != null) {
        return true;
    }
    // Otherwise, any Riak annotation hides the property from the JSON.
    for (Class<? extends Annotation> annotation : RIAK_ANNOTATIONS) {
        if (beanPropertyWriter.getAnnotation(annotation) != null) {
            return false;
        }
    }
    return true;
}
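The same rule can be illustrated in a self-contained way with placeholder annotations rather than the real Riak/Jackson types (the Hypothetical* names below are invented stand-ins, and a single Riak-style annotation stands in for the whole RIAK_ANNOTATIONS set):

```java
// Self-contained sketch of the filtering rule above: a getter is serialized
// unless it carries a Riak-style annotation without an explicit JSON one.
// HypotheticalRiakIndex / HypotheticalJsonProperty are placeholders, not the
// real Riak or Jackson annotation types.
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class PropertyFilterSketch {

    @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
    public @interface HypotheticalRiakIndex {}

    @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
    public @interface HypotheticalJsonProperty {}

    /** Mirror of keepProperty: explicit JSON annotation wins, Riak annotation hides. */
    public static boolean keep(Method getter) {
        if (getter.isAnnotationPresent(HypotheticalJsonProperty.class)) return true;
        return !getter.isAnnotationPresent(HypotheticalRiakIndex.class);
    }

    public static class Pojo {
        @HypotheticalRiakIndex
        public String getSomeIndex() { return "idx"; }

        public String getBody() { return "body"; }

        @HypotheticalRiakIndex @HypotheticalJsonProperty
        public String getBoth() { return "both"; }
    }

    public static void main(String[] args) throws Exception {
        Class<Pojo> c = Pojo.class;
        System.out.println(keep(c.getMethod("getSomeIndex"))); // false: index only
        System.out.println(keep(c.getMethod("getBody")));      // true: plain property
        System.out.println(keep(c.getMethod("getBoth")));      // true: explicit JSON wins
    }
}
```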


So my other question is whether this still holds true for the current Riak 
Java client, 2.1.1.


On 22/06/17 09:49, Guido Medina wrote:

Hi,

I see now there is support for 2i, which we needed in order to migrate 
to 2.x. There was another issue with the old client that forced us to 
modify it; that issue was related to the following. Let me give an 
example:


public class POJO {

    @RiakKey
    public String getKey() {
        // generate our own key
    }

    @RiakIndex("some-index")
    @JsonIgnore
    public String getSomeIndex() {
        // generate some index
    }
}

The example above should add "some-index" as an index but ignore it as a 
JSON property, and as you can see the Riak key is a pseudo (computed) 
property with no setter. At some point Riak client 1.4.x required that the 
key be a real property, which IMHO is a very poor design: it requires the 
POJO to carry properties just to hold internal Riak client values between 
calls.


I'm wondering what the current POJO support is, as we have been stuck 
for years now on that old Riak client 1.4.x with our modifications; we 
are afraid that one day it will simply no longer be supported by a 
newer Riak server.


Regards,

Guido.







Riak Java client question

2017-06-22 Thread Guido Medina

Hi,

I see now there is support for 2i, which we needed in order to migrate to 
2.x. There was another issue with the old client that forced us to modify 
it; that issue was related to the following. Let me give an example:


public class POJO {

    @RiakKey
    public String getKey() {
        // generate our own key
    }

    @RiakIndex("some-index")
    @JsonIgnore
    public String getSomeIndex() {
        // generate some index
    }
}

The example above should add "some-index" as an index but ignore it as a 
JSON property, and as you can see the Riak key is a pseudo (computed) 
property with no setter. At some point Riak client 1.4.x required that the 
key be a real property, which IMHO is a very poor design: it requires the 
POJO to carry properties just to hold internal Riak client values between 
calls.


I'm wondering what the current POJO support is, as we have been stuck for 
years now on that old Riak client 1.4.x with our modifications; we are 
afraid that one day it will simply no longer be supported by a newer 
Riak server.


Regards,

Guido.



Re: Continuous Crash Report

2017-06-22 Thread Johan Sommerfeld
Hi,

I'm not sure, but looking at the exception and the code, you get a
function_clause error because the first argument is '$deleted' while the
function guard expects lists as arguments:

https://github.com/basho/riak_core/blob/master/src/riak_core_bucket_props.erl#L129

Why it is '$deleted' I don't know. Have you done anything to bucket
properties, or is it just from replacing a node?
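Purely as an illustration (this is not Riak's code, and the names below are invented), the shape of that failure can be mimicked in a few lines: a resolver written only for list arguments has no clause for a deletion sentinel, so passing one blows up exactly where the merge happens.

```java
// Illustrative sketch, not Riak code: a resolver that handles only
// list/list inputs fails on a '$deleted'-style sentinel, mirroring
// the function_clause in riak_core_bucket_props:resolve/2.
import java.util.List;

public class ResolveSketch {
    static final Object DELETED = "$deleted"; // stand-in for Erlang's '$deleted' atom

    /** Resolve two property values; only list/list is handled, like the Erlang guard. */
    public static Object resolve(Object a, Object b) {
        if (a instanceof List && b instanceof List) {
            // real merge logic would go here; the sketch just picks one side
            return a;
        }
        // the Java analogue of Erlang's function_clause error
        throw new IllegalArgumentException("no clause matching resolve(" + a + ", ...)");
    }

    public static void main(String[] args) {
        System.out.println(resolve(List.of("n_val"), List.of("n_val"))); // fine
        try {
            resolve(DELETED, List.of("n_val")); // crashes, like the log above
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```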

Regards
Johan Sommerfeld

On 22 June 2017 at 10:11, Mark Richard Thomas wrote:

> Hello
>
>
>
> My crash.log is continually filling up with the following message after
> replacing a node:
>
>
>
> 2017-06-22 08:06:05 =CRASH REPORT
>
>   crasher:
>
> initial call: riak_kv_index_hashtree:init/1
>
> pid: <0.16663.2>
>
> registered_name: []
>
> exception exit: {{function_clause,[{riak_core_bucket_props,resolve,['$deleted',[{active,true},{allow_mult,false},{basic_quorum,true},{big_vclock,50},{chash_keyfun,{riak_core_util,chash_std_keyfun}},{claimant,'r...@udcu1lc9a009.app.c9.test.com'},{dvv_enabled,false},{dw,quorum},{last_write_wins,true},{linkfun,{modfun,riak_kv_wm_link_walker,mapreduce_linkfun}},{n_val,3},{name,{<<"commercial">>,<<"commercial_systest">>}},{notfound_ok,false},{old_vclock,86400},{postcommit,[]},{pr,0},{precommit,[]},{pw,0},{r,quorum},{rw,quorum},{search_index,<<"commercial_systest">>},{small_vclock,50},{w,quorum},{young_vclock,20}]],[{file,"src/riak_core_bucket_props.erl"},{line,129}]},{riak_core_metadata_object,'-resolve/2-fun-1-',3,[{file,"src/riak_core_metadata_object.erl"},{line,116}]},{lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{dvvset,reconcile,2,[{file,"src/dvvset.erl"},{line,243}]},{riak_core_metadata_object,resolve,2,[{file,"src/riak_core_metadata_object.erl"},{line,118}]},{riak_core_metadata,maybe_resolve,5,[{file,"src/riak_core_metadata.erl"},{line,355}]},{riak_core_metadata,itr_key_values,1,[{file,"src/riak_core_metadata.erl"},{line,226}]},{riak_core_metadata,fold_it,3,[{file,"src/riak_core_metadata.erl"},{line,130}]}]},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,328}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
>
> ancestors: [<0.1742.0>,riak_core_vnode_sup,riak_core_sup,<0.240.0>]
>
> messages: []
>
> links: []
>
> dictionary: []
>
> trap_exit: false
>
> status: running
>
> heap_size: 2586
>
> stack_size: 27
>
> reductions: 3015
>
>
>
> And in error.log:
>
>
>
> 2017-06-22 08:07:38.284 [error] <0.3955.4> CRASH REPORT Process <0.3955.4>
> with 0 neighbours exited with reason: no function clause matching
> riak_core_bucket_props:resolve('$deleted', [{active,true},{allow_mult,
> false},{basic_quorum,true},{big_vclock,50},{chash_keyfun,{riak_core_util,...}},...])
> line 129 in gen_server:init_it/6 line 328
>
>
>
>
>
> 'r...@udcu1lc9a009.app.c9.test.com' is a dead node.
>
>
>
>
> Mark Thomas
> Technical Lead, UK IT
> Equifax Inc.
>
>
>
> O +44 (0)7908 798 270
>
> mark.tho...@equifax.com
>
> 
>
>
>
>
>
>
>
>


-- 
Johan Sommerfeld
tel: +46 (0) 70 769 15 73
S2HC Sweden AB
Litsbyvägen 56
187 46 Täby


Continuous Crash Report

2017-06-22 Thread Mark Richard Thomas
Hello

My crash.log is continually filling up with the following message after 
replacing a node:

2017-06-22 08:06:05 =CRASH REPORT
  crasher:
initial call: riak_kv_index_hashtree:init/1
pid: <0.16663.2>
registered_name: []
exception exit: 
{{function_clause,[{riak_core_bucket_props,resolve,['$deleted',[{active,true},{allow_mult,false},{basic_quorum,true},{big_vclock,50},{chash_keyfun,{riak_core_util,chash_std_keyfun}},{claimant,'r...@udcu1lc9a009.app.c9.test.com'},{dvv_enabled,false},{dw,quorum},{last_write_wins,true},{linkfun,{modfun,riak_kv_wm_link_walker,mapreduce_linkfun}},{n_val,3},{name,{<<"commercial">>,<<"commercial_systest">>}},{notfound_ok,false},{old_vclock,86400},{postcommit,[]},{pr,0},{precommit,[]},{pw,0},{r,quorum},{rw,quorum},{search_index,<<"commercial_systest">>},{small_vclock,50},{w,quorum},{young_vclock,20}]],[{file,"src/riak_core_bucket_props.erl"},{line,129}]},{riak_core_metadata_object,'-resolve/2-fun-1-',3,[{file,"src/riak_core_metadata_object.erl"},{line,116}]},{lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{dvvset,reconcile,2,[{file,"src/dvvset.erl"},{line,243}]},{riak_core_metadata_object,resolve,2,[{file,"src/riak_core_metadata_object.erl"},{line,118}]},{riak_core_metadata,maybe_resolve,5,[{file,"src/riak_core_metadata.erl"},{line,355}]},{riak_core_metadata,itr_key_values,1,[{file,"src/riak_core_metadata.erl"},{line,226}]},{riak_core_metadata,fold_it,3,[{file,"src/riak_core_metadata.erl"},{line,130}]}]},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,328}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
ancestors: [<0.1742.0>,riak_core_vnode_sup,riak_core_sup,<0.240.0>]
messages: []
links: []
dictionary: []
trap_exit: false
status: running
heap_size: 2586
stack_size: 27
reductions: 3015

And in error.log:

2017-06-22 08:07:38.284 [error] <0.3955.4> CRASH REPORT Process <0.3955.4> with 
0 neighbours exited with reason: no function clause matching 
riak_core_bucket_props:resolve('$deleted', 
[{active,true},{allow_mult,false},{basic_quorum,true},{big_vclock,50},{chash_keyfun,{riak_core_util,...}},...])
 line 129 in gen_server:init_it/6 line 328


'r...@udcu1lc9a009.app.c9.test.com' is a dead node.

Mark Thomas
Technical Lead, UK IT
Equifax Inc.

O +44 (0)7908 798 270
mark.tho...@equifax.com

