[ 
https://issues.apache.org/jira/browse/CASSANDRA-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13791974#comment-13791974
 ] 

Tomas Salfischberger edited comment on CASSANDRA-6092 at 10/10/13 8:56 PM:
---------------------------------------------------------------------------

I have accidentally reproduced this on a test cluster (N=3, RF=3) running 
1.2.10. The CF I was testing with started out as LCS with an sstable_size of 
5 MB (the original default in 1.0), which created 30,000+ files. I switched it 
to SizeTieredCompactionStrategy, which compacted everything into a single 
sstable.

My initial goal was to use sstablesplit on this, but then I realized I could 
simply switch back to LCS and see what it would do. This exhibited exactly the 
behavior described in this issue: it created a set of pending tasks but never 
tried to execute them.

Then I ran "nodetool compact" on the CF, which started the compaction process 
(visible in both the logs and nodetool compactionstats). After a few hours the 
compact command returned, all pending tasks were cleared, and the sstables were 
nicely split into the configured sizes. Checking the JSON metadata for LCS 
shows the files all ended up in L1.

So to summarize: 1. I've reproduced the issue, and 2. running "nodetool 
compact" appears to be a good workaround.
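The workaround above can be sketched as a small shell script. The keyspace/CF 
names are the ones from the original report (substitute your own); the DRYRUN 
guard is only so the sketch prints the commands instead of running them.

```shell
# Sketch of the "nodetool compact" workaround described above.
# DRYRUN defaults to "echo" so the commands are printed, not executed;
# set DRYRUN= (empty) to actually invoke nodetool on a real cluster.
DRYRUN=${DRYRUN:-echo}
KS=ProductGenomeDev   # keyspace from the original report
CF=Node               # column family from the original report

# Manually kick off the stalled compactions after switching back to LCS:
$DRYRUN nodetool compact "$KS" "$CF"
# Watch progress until the pending-task count drains:
$DRYRUN nodetool compactionstats
```

Once the manual major compaction returns, the pending tasks should clear and 
the sstables should end up split at the configured sstable_size_in_mb.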



> Leveled Compaction after ALTER TABLE creates pending but does not actually begin
> --------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-6092
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6092
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Cassandra 1.2.10
> Oracle Java 1.7.0_u40
> RHEL6.4
>            Reporter: Karl Mueller
>            Assignee: Daniel Meyer
>
> Running Cassandra 1.2.10.  N=5, RF=3
> On this Column Family (ProductGenomeDev/Node), it's been major compacted into 
> a single, large sstable.
> There's no activity on the table at the time of the ALTER command. I changed 
> it to Leveled Compaction with the command below.
> cqlsh:ProductGenomeDev> alter table "Node" with compaction = { 'class' : 
> 'LeveledCompactionStrategy', 'sstable_size_in_mb' : 160 };
> Log entries confirm the change happened.
> [...]column_metadata={},compactionStrategyClass=class 
> org.apache.cassandra.db.compaction.LeveledCompactionStrategy,compactionStrategyOptions={sstable_size_in_mb=160}
>  [...]
> nodetool compactionstats shows pending compactions, but there's no activity:
> pending tasks: 750
> 12 hours later, nothing has happened; the same number of tasks are pending. 
> The expectation would be that compactions proceed immediately, converting 
> everything to Leveled Compaction as soon as the ALTER TABLE command runs.
> I try a simple write into the CF, and then flush the nodes. This kicks off 
> compaction on 3 nodes. (RF=3)
> cqlsh:ProductGenomeDev> insert into "Node" (key, column1, value) values 
> ('test123', 'test123', 'test123');
> cqlsh:ProductGenomeDev> select * from "Node" where key = 'test123';
>  key     | column1 | value
> ---------+---------+---------
>  test123 | test123 | test123
> cqlsh:ProductGenomeDev> delete from "Node" where key = 'test123';
> After a flush on every node, now I see:
> [cassandra@dev-cass00 ~]$ cas exec nt compactionstats
> *** dev-cass00 (0) ***
> pending tasks: 750
> Active compaction remaining time :        n/a
> *** dev-cass04 (0) ***
> pending tasks: 752
>           compaction type          keyspace   column family       completed           total      unit  progress
>                Compaction  ProductGenomeDev            Node      3413333881    643290447928     bytes     0.53%
> Active compaction remaining time :        n/a
> *** dev-cass01 (0) ***
> pending tasks: 750
> Active compaction remaining time :        n/a
> *** dev-cass02 (0) ***
> pending tasks: 751
>           compaction type          keyspace   column family       completed           total      unit  progress
>                Compaction  ProductGenomeDev            Node      3374975141    642764512481     bytes     0.53%
> Active compaction remaining time :        n/a
> *** dev-cass03 (0) ***
> pending tasks: 751
>           compaction type          keyspace   column family       completed           total      unit  progress
>                Compaction  ProductGenomeDev            Node      3591320948    643017643573     bytes     0.56%
> Active compaction remaining time :        n/a
> After inserting and deleting more columns, enough that every node had new 
> data, and flushing, compactions are now proceeding on all nodes.
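The write-then-flush trigger described in the quoted report can be sketched as 
follows. This assumes a cqlsh new enough to support -e; the DRYRUN guard makes 
the sketch print the commands instead of executing them.

```shell
# Sketch of the write-then-flush trigger from the report above.
# DRYRUN defaults to "echo" so nothing is actually executed; set DRYRUN=
# (empty) to run against a real cluster.
DRYRUN=${DRYRUN:-echo}
KS=ProductGenomeDev
CF=Node

# Write and immediately delete a dummy row so every replica sees new data:
$DRYRUN cqlsh -k "$KS" -e "insert into \"$CF\" (key, column1, value) values ('test123', 'test123', 'test123');"
$DRYRUN cqlsh -k "$KS" -e "delete from \"$CF\" where key = 'test123';"
# Flush memtables on every node; in the report this is what kicked off
# the stalled compactions:
$DRYRUN nodetool flush "$KS" "$CF"
```

Note that "nodetool compact" (as in the comment above) achieves the same end 
without touching the data, so it is likely the cleaner workaround.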



--
This message was sent by Atlassian JIRA
(v6.1#6144)
