[ https://issues.apache.org/jira/browse/CASSANDRA-1596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12919944#action_12919944 ]

Jonathan Ellis commented on CASSANDRA-1596:
-------------------------------------------

+1

> failed compactions should not leave tmp files behind
> ----------------------------------------------------
>
>                 Key: CASSANDRA-1596
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1596
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Peter Schuller
>            Assignee: Gary Dusbabek
>            Priority: Minor
>             Fix For: 0.7.1
>
>         Attachments: v2-0001-abandon-temp-sstables-on-compaction-error.txt
>
>
> While running tests on a somewhat old (a few months) trunk, I hit a
> compaction failure caused by the 2 TB file size limit on ext3fs. The tmp
> file was left behind, and further compactions proceeded anyway.
> This is detrimental in particular because it increases disk space
> requirements: partially written but abandoned compacted sstables accumulate
> on disk.
> The stack trace with the code path is included below.
> I can imagine that for debugging purposes there are cases where you would
> not want a compaction to remove the temp file immediately. On the other
> hand, compaction failures are presumably caused by the input most of the
> time rather than the output, so having to patch Cassandra to avoid the
> removal does not seem like a critical burden. If this is a concern, a JMX
> tunable could be provided to turn removal off. Thoughts? (A sketch of the
> cleanup-on-error idea follows after the stack trace.)
> java.util.concurrent.ExecutionException: java.io.IOException: File too large
>         at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>         at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:87)
>         at org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:636)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:888)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:619)
> Caused by: java.io.IOException: File too large
>         at java.io.RandomAccessFile.writeBytes(Native Method)
>         at java.io.RandomAccessFile.write(RandomAccessFile.java:466)
>         at org.apache.cassandra.io.util.BufferedRandomAccessFile.flushBuffer(BufferedRandomAccessFile.java:194)
>         at org.apache.cassandra.io.util.BufferedRandomAccessFile.seek(BufferedRandomAccessFile.java:240)
>         at org.apache.cassandra.io.util.BufferedRandomAccessFile.writeAtMost(BufferedRandomAccessFile.java:391)
>         at org.apache.cassandra.io.util.BufferedRandomAccessFile.write(BufferedRandomAccessFile.java:367)
>         at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:117)
>         at org.apache.cassandra.db.CompactionManager.doCompaction(CompactionManager.java:352)
>         at org.apache.cassandra.db.CompactionManager$2.call(CompactionManager.java:150)
>         at org.apache.cassandra.db.CompactionManager$2.call(CompactionManager.java:131)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         ... 2 more
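The attached patch ("abandon temp sstables on compaction error") addresses this
by discarding the temp sstable when a compaction fails. Purely as an
illustration of that cleanup-on-error idea, and not the patch itself, here is a
minimal Java sketch; the names TmpCompactionSketch, writeCompactedSSTable and
promote are hypothetical and are not Cassandra APIs.

    import java.io.File;
    import java.io.IOException;

    // Hypothetical sketch: if the compaction write step fails, delete the
    // partially written temp sstable so it stops consuming disk space.
    public class TmpCompactionSketch
    {
        public static File compact(File tmpFile) throws IOException
        {
            boolean completed = false;
            try
            {
                writeCompactedSSTable(tmpFile); // may throw, e.g. "File too large"
                completed = true;
                return promote(tmpFile);        // rename tmp file to its final sstable name
            }
            finally
            {
                // on any failure, remove the abandoned temp file
                if (!completed && tmpFile.exists() && !tmpFile.delete())
                    System.err.println("could not delete " + tmpFile + " after failed compaction");
            }
        }

        private static void writeCompactedSSTable(File tmpFile) throws IOException
        {
            // placeholder for the actual merge/write loop
        }

        private static File promote(File tmpFile)
        {
            // placeholder for renaming the temp file to its final name
            return tmpFile;
        }
    }

A JMX tunable to disable the deletion, as suggested above, would amount to
guarding the delete() call behind a flag exposed through an MBean.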

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
