We solved this problem because it turned out we had only a few hundred rows
with unicode keys, so we simply extracted them, upgraded to 0.7, and wrote
them back. However, this means that the data now contains a few hundred
weird duplicate rows with identical keys.
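
For reference, roughly what that extract-and-rewrite step looks like. This is
only a sketch, not our actual script: it assumes a Python client like pycassa
(its ConnectionPool / ColumnFamily API), and the keyspace name, column family
name, and the non-ASCII key check are just illustrative:

    # Rough sketch, not our actual script. Assumes pycassa and
    # hypothetical 'MyKeyspace' / 'MyCF' names.
    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
    cf = pycassa.ColumnFamily(pool, 'MyCF')

    # Before the upgrade: pull out the rows whose keys are not plain ASCII.
    extracted = {}
    for key, columns in cf.get_range():
        if any(ord(c) > 127 for c in key):
            extracted[key] = columns

    # ... upgrade the cluster to 0.7 ...

    # After the upgrade: write the extracted rows back under UTF-8 keys.
    for key, columns in extracted.items():
        cf.insert(key.encode('utf-8'), columns)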

Is this going to be a problem in the future? Is there a chance that the good
duplicate is cleaned out in favour of the bad duplicate so that we suddenly
lose those rows again?


/Henrik Schröder
