Seems good. I'll discuss it with the data owners and we'll choose the best method.
Best regards,
Aleksander
On 4 Feb 2014 19:40, "Robert Coli" <rc...@eventbrite.com> wrote:

> On Tue, Feb 4, 2014 at 12:21 AM, olek.stas...@gmail.com <
> olek.stas...@gmail.com> wrote:
>
> >> I don't know what the real cause of my problem is; we are still guessing.
> >> All the operations I have done on the cluster are described in this timeline:
> >> 1.1.7 -> 1.2.10 -> upgradesstables -> 2.0.2 -> normal operations -> 2.0.3
> >> -> normal operations -> now
> >> Normal operations means reads/writes/repairs.
> >> Could you please briefly describe how to recover the data? I have a
> >> problem with the scenario described at this link:
>>
>> http://thelastpickle.com/blog/2011/12/15/Anatomy-of-a-Cassandra-Partition.html,
>> I can't apply this solution to my case.
>>
>
> I think your only option is the following:
>
> 1) determine which SSTables contain rows that have doomstones (tombstones
> from the far future)
> 2) determine whether these tombstones mask a live or dead version of the
> row, by looking at other row fragments
> 3) dump/filter/re-write all your data via some method, probably
> sstable2json/json2sstable (a rough sketch follows after this message)
> 4) load the corrected sstables by starting a node with the sstables in the
> data directory
>
> I understand you have a lot of data, but I am pretty sure there is no way
> for you to fix it within Cassandra. Perhaps ask for advice on the JIRA
> ticket mentioned upthread if this answer is not sufficient?
>
> =Rob
>
>
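Below is a minimal sketch of the dump/filter/re-write idea in step 3, written in Python. Treat everything in it as an assumption to verify against a real dump: it presumes the JSON layout sstable2json produced around the 1.2/2.0 series (a top-level array of rows, row-level deletion info under metadata.deletionInfo with markedForDeleteAt in microseconds, and column tombstones carrying a trailing "d" flag), and the script name scrub_doomstones.py and its cutoff logic are made up here for illustration, not something from the thread.

#!/usr/bin/env python
# scrub_doomstones.py -- hypothetical filter for an sstable2json dump.
# Drops tombstones whose deletion timestamp lies in the future ("doomstones"),
# leaving live data and legitimately deleted data untouched.
# Check the field names below against your own sstable2json output first;
# the layout differs between Cassandra versions.
import json
import sys
import time

# Cutoff: any deletion stamped after "now" (in microseconds) is bogus.
NOW_MICROS = int(time.time() * 1000000)

def is_doomstone(ts_micros):
    # A deletion timestamp in the future cannot be legitimate.
    return ts_micros > NOW_MICROS

def scrub_row(row):
    # Drop a row-level deletion stamped in the future (assumed location:
    # row["metadata"]["deletionInfo"]["markedForDeleteAt"], microseconds).
    info = row.get("metadata", {}).get("deletionInfo")
    if info and is_doomstone(info.get("markedForDeleteAt", 0)):
        del row["metadata"]["deletionInfo"]
    # Drop column-level tombstones stamped in the future; keep everything else.
    kept = []
    for col in row.get("columns", []):
        is_deleted = len(col) > 3 and col[3] == "d"
        if is_deleted and is_doomstone(col[2]):
            continue
        kept.append(col)
    row["columns"] = kept
    return row

def main(src, dst):
    with open(src) as fin:
        rows = json.load(fin)
    with open(dst, "w") as fout:
        json.dump([scrub_row(r) for r in rows], fout)

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])

Driven from the shell, the flow would be roughly: run sstable2json on each suspect Data.db file, pass the dump through the script, then rebuild with json2sstable (which, if memory serves, takes the keyspace and column family via -K and -c in the 2.0 tools; check json2sstable --help) and place the result in the data directory of a stopped node, as in step 4.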
