[ https://issues.apache.org/jira/browse/CASSANDRA-924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stu Hood updated CASSANDRA-924:
-------------------------------

    Attachment: 924-0.5.patch
                924.patch

Patches updated to remove the stubbed-out 'range read-repair' option.

Jianing: can you give your test one more try?

> lost data replica not restored after server restart
> ---------------------------------------------------
>
>                 Key: CASSANDRA-924
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-924
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 0.5, 0.6, 0.6.1, 0.7
>         Environment: CentOS 5.2
>            Reporter: Jianing hu
>             Fix For: 0.5, 0.6, 0.6.1, 0.7
>
>         Attachments: 924-0.5.patch, 924-0.5.patch, 924.patch, 924.patch, 
> cs1.log, cs2.log, cs3.log, error.log, error.log, error.log, storage-conf.xml
>
>
> With ReplicationFactor=2, if a server is brought down and its data directory is 
> wiped out, it does not restore its data replica after a restart and nodeprobe 
> repair.
> Steps to reproduce:
> 1) Bring up a cluster with three servers cs1, cs2, and cs3, with their initial 
> tokens set to 'foo3', 'foo6', and 'foo9', respectively. ReplicationFactor is 
> set to 2 on all three.
> 2) Insert 9 columns with keys from 'foo1' to 'foo9', and flush (a rough client 
> sketch follows after these steps). Now I have foo1,2,3,7,8,9 on cs1, 
> foo1,2,3,4,5,6 on cs2, and foo4,5,6,7,8,9 on cs3. So far so good.
> 3) Bring down cs3 and wipe out its data directory
> 4) Bring up cs3
> 5) Run 'nodeprobe repair Keyspace1' on cs3, then flush.
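> (For illustration, the insert in step 2 can be done with a small Thrift client, 
> roughly like the sketch below. The 0.6 Java bindings, the column family 
> "Standard1", the column name "c1", the values, the class name, and the write 
> consistency level are all assumptions here, and string tokens like 'foo3' 
> presume an order-preserving partitioner.)
>
>   import org.apache.cassandra.thrift.Cassandra;
>   import org.apache.cassandra.thrift.ColumnPath;
>   import org.apache.cassandra.thrift.ConsistencyLevel;
>   import org.apache.thrift.protocol.TBinaryProtocol;
>   import org.apache.thrift.transport.TSocket;
>   import org.apache.thrift.transport.TTransport;
>
>   public class InsertFoo {
>     public static void main(String[] args) throws Exception {
>       // Connect to any live node over Thrift (default RPC port 9160 assumed).
>       TTransport tr = new TSocket("cs1", 9160);
>       Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(tr));
>       tr.open();
>
>       // Step 2: one column per key, foo1 through foo9.
>       ColumnPath path = new ColumnPath("Standard1");
>       path.setColumn("c1".getBytes("UTF-8"));
>       for (int i = 1; i <= 9; i++) {
>         client.insert("Keyspace1", "foo" + i, path,
>                       ("value" + i).getBytes("UTF-8"),
>                       System.currentTimeMillis(), ConsistencyLevel.ONE);
>       }
>       tr.close();
>     }
>   }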
> At this point I expect to see cs3 getting its data back, but there is nothing 
> in its data directory. I also tried reading all columns with 
> ConsistencyLevel::ALL to see if that would trigger a read repair, but cs3's 
> data directory is still empty.
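> (The ConsistencyLevel::ALL read would look roughly like the loop below, 
> continuing the main method in the sketch above and adding an import of 
> org.apache.cassandra.thrift.Column. Reading every key at ALL makes all 
> replicas answer, and a successful read at ALL is what should trigger read 
> repair on the empty cs3 replica.)
>
>   // Read each key back at ConsistencyLevel.ALL; a successful read should
>   // read-repair the copy that cs3 lost (illustrative sketch only).
>   for (int i = 1; i <= 9; i++) {
>     Column col = client.get("Keyspace1", "foo" + i, path,
>                             ConsistencyLevel.ALL).getColumn();
>     System.out.println("foo" + i + " = " + new String(col.getValue(), "UTF-8"));
>   }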

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
