This might be a relatively benign message, which might be better at debug scope
than info scope. This would have been introduced in 2.0.8 or 2.2.2 or later.
Essentially, there are solrq supervisors for each partition/index pair (for each
partition on a node), and when these supervisors start or
Anybody seen this in their logs?
yz_solrq_sup:stop_queue_pair:130 Stopping solrq supervisor for index
<<"search1">>
Searching using the index still works; however, I am getting these log notices
for most of my indexes.
If you know, please tell me... Thank you!
Sincerely,
Robert
Joe's first law [of protocol design]
"Always include protocol identifier and version number in the first message"
I think I heard him say that when Tom presented the BERT-RPC protocol at EUC 2010
:-)
Maybe we should apply that rule to Riak's PB protocol going forward?
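Joe's rule can be sketched with a tiny framing example (written in Python purely for illustration; the magic bytes, version number, and function names here are all hypothetical, not Riak's or BERT-RPC's actual wire format): the very first message leads with a protocol identifier and a version, so a peer can reject a mismatch before parsing anything else.

```python
import struct

# Hypothetical protocol identifier and version for this sketch.
MAGIC = b"RPB1"
VERSION = 1

def encode_hello(payload: bytes) -> bytes:
    """Frame: magic (4 bytes) | version (1 byte) | length (4 bytes, big-endian) | payload."""
    return MAGIC + struct.pack(">BI", VERSION, len(payload)) + payload

def decode_hello(frame: bytes) -> bytes:
    """Reject unknown protocols or versions before touching the payload."""
    if frame[:4] != MAGIC:
        raise ValueError("unknown protocol identifier")
    version, length = struct.unpack(">BI", frame[4:9])
    if version != VERSION:
        raise ValueError(f"unsupported protocol version {version}")
    return frame[9:9 + length]
```

The point is that a receiver on the wrong version fails loudly and immediately, rather than misinterpreting bytes deep inside a handler (as appears to have happened in the handoff errors discussed below in this thread).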
Kresten
On Jan 22, 2012,
So stupid it hurts, and though it's embarrassing I should close the loop...
Env change on upgrade to 1.0.2 moved ports.
Tested 1.0.3 on the 1.0.2 bad config...then tested 1.0.0 against that
same bad config.
Intermittent successes were due to the config not being 100% incorrect.
PB client doesn't wo
I get this on a riakc_pb_socket:ping() ...
20:24:04.361 [error] Handoff receiver for partition undefined exited
abnormally after processing 0 objects:
{noproc,{gen_fsm,sync_send_all_state_event,[undefined,{handoff_data,<<>>},6]}}
-mox
On Sat, Jan 21, 2012 at 8:43 AM, Mike Oxford wrote:
> 3
After tracing the source code, I think it might also be caused by network
breakdown, possibly ^_^ I am not sure.
Please modify these two lines of riak_core_handoff_receiver.erl (Riak 1.0.3):
From:
process_message(_, _MsgData, State=#state{sock=Socket,
tcp
Oh, sorry for my fast response ^_^
This issue is likely due to a miscommunication between
riak_core_handoff_sender.erl and riak_core_handoff_receiver.erl.
Zheng Zhibin
On Jan 22, 2012, at 11:23 AM, Zheng Zhibin wrote:
> I think this is basically due to different definitions of Protocol Buffers
> between Riak a
I think this is basically due to different definitions of Protocol Buffers between
Riak and the Riak Erlang client.
Update them to the same version, then run make clean && make; that should help.
Zheng Zhibin
On Jan 22, 2012, at 11:18 AM, Zheng Zhibin wrote:
> saw the code in "deps/riak_core/src/riak_core_handoff_re
saw the code in "deps/riak_core/src/riak_core_handoff_receiver.erl"
process_message(?PT_MSG_INIT, MsgData, State=#state{vnode_mod=VNodeMod}) ->
    <<Partition:160/integer>> = MsgData,
    lager:info("Receiving handoff data for partition ~p:~p", [VNodeMod,
        Partition]),
    {ok, VNode} = riak_core_vnode_master:get_vnode
It returns a MsgCode of 255; I think this clue could be used to find the issue,
whether it lies in riakc, in Riak, or simply in corrupted TCP data being received.
Best regards,
Zheng Zhibin
On Jan 22, 2012, at 6:10 AM, Mike Oxford wrote:
> Completely separate 1.0.3 (note the version diff) installation, 3 new
> nod
Completely separate 1.0.3 (note the version diff) installation, 3 new
nodes, bitcask backend (instead of leveldb), absolutely no data.
Rebuilt making sure I had latest erlang_protobuffs tag (0.6.0).
Fire up three nodes, build some connections, and repeat using binary-only.
3>
riakc_pb_socket:lis
Not sure if it's the cause of the error, but I notice sometimes you're using a
string for the bucket name when it should always be passed as a binary.
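Kelly's point is that the Erlang PB client expects bucket and key names as binaries (<<"bucket">>), never as character lists ("bucket"). The same idea can be mirrored with a small coercion helper, sketched here in Python purely for illustration (the helper name is hypothetical and not part of any Riak client API):

```python
def as_binary(name):
    """Coerce a bucket or key name to bytes.

    Mirrors the Erlang PB client's requirement that names are binaries
    (<<"bucket">>) rather than character lists ("bucket"). Hypothetical
    helper for illustration only.
    """
    if isinstance(name, bytes):
        return name
    if isinstance(name, str):
        return name.encode("utf-8")
    raise TypeError(f"name must be str or bytes, got {type(name).__name__}")
```

Normalizing at the call boundary like this turns a silent type mismatch into either a clean conversion or an immediate, descriptive error.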
Kelly
Sent from my iPhone
On Jan 21, 2012, at 9:43 AM, Mike Oxford wrote:
> 3 node cluster of 1.0.2, level_db backend, pb interface. Build up
3 node cluster of 1.0.2, level_db backend, pb interface. Build up a
store of 9 connections (3 to each node) and pull one out randomly.
--snip
62>
riakc_pb_socket:list_keys(gen_server:call(forum_store:get_random_pid(alliance),
    get_connector_pid), <<"alliance_overview">>).
{ok,[]}
63>
riakc_pb_sock