There may be better ways, but what I would do is: stop Riak, blow away your ${platform_data_dir}/anti_entropy directory (and yz_anti_entropy, if you have search enabled), and restart. This forces a rebuild of your hash trees. They rebuild automatically anyway; this just makes it happen sooner.
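A minimal sketch of those steps, assuming a package-based Riak KV install where platform_data_dir is /var/lib/riak (adjust the path to whatever platform_data_dir is set to in your riak.conf):

```shell
# Stop the node before touching its data directories.
riak stop

# Remove the active anti-entropy hash trees; Riak rebuilds them
# from the underlying key/value data after restart.
rm -rf /var/lib/riak/anti_entropy

# Only needed if Riak Search (Yokozuna) is enabled:
rm -rf /var/lib/riak/yz_anti_entropy

riak start
```

Repeat on each affected node; the trees are per-node state, so removing them does not touch your actual data.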
So it would recover .. eventually.

-Fred

> On Jun 1, 2017, at 7:13 AM, Guido Medina <gmed...@temetra.com> wrote:
>
> Correction: I replied to an old e-mail instead of creating a new one and
> forgot to change the subject.
>
> Hi all,
>
> My impression of Riak has always been that it would recover from anything,
> but yesterday the worst happened: there was a power outage, so all the
> servers in a Riak cluster went down. Once they were back up we have been
> seeing these constant errors, so my questions are:
>
> How can I recover from this?
> Isn't Riak supposed to restore corrupted hashes from other nodes?
>
>> 2017-06-01 12:07:43.795 [info]
>> <0.659.0>@riak_kv_vnode:maybe_create_hashtrees:234
>> riak_kv/1164634117248063262943561351070788031288321245184: unable to start
>> index_hashtree: {error,{{badmatch,{error,{db_open,"Corruption: truncated
>> record at end of
>> file"}}},[{hashtree,new_segment_store,2,[{file,"src/hashtree.erl"},{line,725}]},{hashtree,new,2,[{file,"src/hashtree.erl"},{line,246}]},{riak_kv_index_hashtree,do_new_tree,3,[{file,"src/riak_kv_index_hashtree.erl"},{line,712}]},{lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{riak_kv_index_hashtree,init_trees,3,[{file,"src/riak_kv_index_hashtree.erl"},{line,565}]},{riak_kv_index_hashtree,init,1,[{file,"src/riak_kv_index_hashtree.erl"},{line,308}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}}
>> 2017-06-01 12:07:45.457 [error] <0.27505.112> CRASH REPORT Process
>> <0.27505.112> with 0 neighbours exited with reason: no match of right hand
>> value {error,{db_open,"Corruption: truncated record at end of file"}} in
>> hashtree:new_segment_store/2 line 725 in gen_server:init_it/6 line 328
>
> This is not our production cluster, but our development is now stuck because
> all the data we had there would take days to restore from another
> cluster. I hope this is a solvable problem.
>
> Regards,
>
> Guido.
> _______________________________________________
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com