Hello!

We are experiencing error messages from the client that we don’t fully 
understand. They look like the following:

<Riak::ProtobuffsErrorResponse: Expected success from Riak but received 1013. 
no response from backend>

Checking the Riak error and crash logs, I’m seeing “overload” errors, which I 
assume are causing the “no response from backend” client errors:

{error,
 badarg,
 [{erlang,iolist_to_binary,[overload],[]},
  {riak_kv_ts_svc,make_rpberrresp,2,[{file,"src/riak_kv_ts_svc.erl"},{line,483}]},
  {riak_kv_ts_svc,sub_tsqueryreq,4,[{file,"src/riak_kv_ts_svc.erl"},{line,445}]},
  {riak_kv_pb_ts,process,2,[{file,"src/riak_kv_pb_ts.erl"},{line,71}]},
  {riak_api_pb_server,process_message,4,[{file,"src/riak_api_pb_server.erl"},{line,388}]},
  {riak_api_pb_server,connected,2,[{file,"src/riak_api_pb_server.erl"},{line,226}]},
  {riak_api_pb_server,decode_buffer,2,[{file,...},...]},...]}

I’m curious whether these overload errors are caused by clients requesting more 
concurrent TS queries than our current timeseries_max_concurrent_queries 
setting allows, or whether timeseries_max_concurrent_queries is set too high 
and we are causing Riak to crash.

Do you have any recommendations on what timeseries_max_concurrent_queries 
should be set to relative to hardware specs? I assume it should be limited 
based on disk I/O bandwidth.

Also, does anyone have any recommendations on query pooling so we can guarantee 
that multiple clients won’t generate more queries than the cluster can handle? 
I like HAProxy for HTTP connection pooling, but it doesn’t seem well suited to 
limiting the global number of queries coming from multiple PBC clients.
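For context, here’s a minimal sketch of the kind of client-side throttling I have in mind, using a token bucket built on Ruby’s SizedQueue as a semaphore. The class name and the query call inside the block are made up for illustration; the real call would be whatever the Riak Ruby client exposes, and this only caps one process, not the cluster-wide total:

```ruby
# Hypothetical client-side throttle: cap the number of in-flight TS
# queries from this process so we stay under the cluster's
# timeseries_max_concurrent_queries budget.
class QueryThrottle
  def initialize(max_concurrent)
    # A SizedQueue pre-filled with tokens acts as a counting semaphore:
    # pop blocks when no tokens remain, push returns a token.
    @slots = SizedQueue.new(max_concurrent)
    max_concurrent.times { @slots << :token }
  end

  # Blocks until a slot is free, runs the query, then releases the slot
  # even if the query raises.
  def with_slot
    token = @slots.pop
    begin
      yield
    ensure
      @slots << token
    end
  end
end

# Usage (query call is illustrative):
#   throttle = QueryThrottle.new(4)
#   throttle.with_slot { client.query("SELECT * FROM ts_table WHERE ...") }
```

This guarantees at most N queries in flight per process; coordinating a global limit across many client hosts would still need something external (which is what I was hoping HAProxy could do).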

Thank you!

Chris
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com