[ https://issues.apache.org/jira/browse/CASSANDRA-1040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-1040:
--------------------------------------

    Fix Version/s: 0.6.2

> read failure during flush
> -------------------------
>
>                 Key: CASSANDRA-1040
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1040
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Jonathan Ellis
>            Assignee: Jonathan Ellis
>            Priority: Critical
>             Fix For: 0.6.2
>
>
> Joost Ouwerkerk writes:
>       
> On a single-node Cassandra cluster with basic config (-Xmx1G):
> loop {
>   * insert 5,000 records into a single columnfamily, with UUID keys and
> random string values (between 1 and 1000 chars) in 5 different columns
> spanning two different supercolumns
>   * delete all the data by iterating over the rows with
> get_range_slices(ONE) and calling remove(QUORUM) on each row key
> returned, with a path containing only the columnfamily
>   * count the number of non-tombstone rows by iterating over the rows
> with get_range_slices(ONE) and testing for column data; break if the
> count is not zero
> }
> While this is running, call "bin/nodetool -h localhost -p 8081 flush
> KeySpace" in the background every minute or so. When the data reaches
> some critical size, the loop breaks: the final count comes back non-zero
> even though every row was deleted.
