[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-15 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627911#comment-14627911
 ] 

Andreas Schnitzerling commented on CASSANDRA-9686:
--

I filed a new ticket, "Handle corrupted files during startup" (CASSANDRA-9812).

 FSReadError and LEAK DETECTED after upgrading
 ---------------------------------------------

 Key: CASSANDRA-9686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9686
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Stefania
 Fix For: 2.1.9, 2.2.0, 3.0 beta 1

 Attachments: cassandra.bat, cassandra.yaml, 
 compactions_in_progress.zip, sstable_activity.zip, system.log


 After upgrading one of 15 nodes from 2.1.7 to 2.2.0-rc1 I get FSReadError and 
 LEAK DETECTED on start. Deleting the listed files, the failure goes away.
 {code:title=system.log}
 ERROR [SSTableBatchOpen:1] 2015-06-29 14:38:34,554 
 DebuggableThreadPoolExecutor.java:242 - Error in ThreadPoolExecutor
 org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file 
 with 0 chunks encountered: java.io.DataInputStream@1c42271
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:178)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:117)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:86)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:142)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:178)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:681)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:644)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:443)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:350)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:480)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 ~[na:1.7.0_55]
   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 Caused by: java.io.IOException: Compressed file with 0 chunks encountered: 
 java.io.DataInputStream@1c42271
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:174)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   ... 15 common frames omitted
 ERROR [Reference-Reaper:1] 2015-06-29 14:38:34,734 Ref.java:189 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@3e547f) to class 
 org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@1926439:D:\Programme\Cassandra\data\data\system\compactions_in_progress\system-compactions_in_progress-ka-6866
  was not released before the reference was garbage collected
 {code}
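
For context, the failing check is in {{CompressionMetadata.readChunkOffsets}}: the chunk count is read from the sstable's CompressionInfo.db component, and a corrupted or truncated component can yield a count of zero. A minimal, self-contained sketch of that guard (illustrative only, not the actual Cassandra source; the class and variable names are made up for the example):

{code:title=ChunkOffsetCheck.java (illustrative sketch)}
import java.io.ByteArrayInputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.IOException;

// Illustrative sketch only: mirrors the guard that produces the error above.
public class ChunkOffsetCheck
{
    static long[] readChunkOffsets(DataInput input) throws IOException
    {
        int chunkCount = input.readInt();
        if (chunkCount <= 0)
            throw new IOException("Compressed file with 0 chunks encountered: " + input);

        long[] offsets = new long[chunkCount];
        for (int i = 0; i < chunkCount; i++)
            offsets[i] = input.readLong();   // offset of each compressed chunk
        return offsets;
    }

    public static void main(String[] args) throws IOException
    {
        // Four zero bytes decode as a chunk count of 0 and reproduce the IOException.
        DataInput broken = new DataInputStream(new ByteArrayInputStream(new byte[4]));
        readChunkOffsets(broken);
    }
}
{code}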



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-14 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14626108#comment-14626108
 ] 

Andreas Schnitzerling commented on CASSANDRA-9686:
--

I just want the same behavior as in prior versions of C*, i.e. not stopping on 
corrupted files, since I have replication and regular repair of the important 
keyspaces. So I have to use IGNORE to get that, because I'm not at work every 
day while C* is in use every day.



[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-13 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14624561#comment-14624561
 ] 

Andreas Schnitzerling commented on CASSANDRA-9686:
--

After applying all patches, the server now stops on corrupted files with 
several exceptions. I preferred the LEAK ERROR, since the server kept on 
running. Can't we move those files to another directory instead, given that 
the hard drive itself is not corrupted? Is it necessary to stop the node under 
the _stop_ policy for these errors?
{code:title=system.log}
WARN  [main] 2015-07-13 13:43:49,838 CLibrary.java:72 - JNA link failure, one 
or more native method will be unavailable.
WARN  [main] 2015-07-13 13:43:49,838 StartupChecks.java:148 - 32bit JVM 
detected.  It is recommended to run Cassandra on a 64bit JVM for better 
performance.
WARN  [main] 2015-07-13 13:43:50,048 SigarLibrary.java:167 - Cassandra server 
running in degraded mode. Is swap disabled? : false,  Address space adequate? : 
true,  nofile limit adequate? : true, nproc limit adequate? : true 
ERROR [SSTableBatchOpen:1] 2015-07-13 13:44:07,959 SSTableReader.java:503 - 
Cannot read sstable 
D:\Programme\Cassandra\data\data\logdata\onlinedata\la-115-big=[Data.db, 
Index.db, Filter.db, CompressionInfo.db, Statistics.db, TOC.txt, 
Digest.adler32]; file system error, skipping table
org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file with 
0 chunks encountered: java.io.DataInputStream@18340c2
at 
org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:178)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
at 
org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:117)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:86)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:142)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:178)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:695)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:656)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:450)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:356)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:493)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
[na:1.7.0_55]
at java.util.concurrent.FutureTask.run(Unknown Source) [na:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
[na:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
[na:1.7.0_55]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
Caused by: java.io.IOException: Compressed file with 0 chunks encountered: 
java.io.DataInputStream@18340c2
at 
org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:174)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
... 15 common frames omitted
ERROR [SSTableBatchOpen:1] 2015-07-13 13:44:07,959 FileUtils.java:464 - Exiting 
forcefully due to file system exception on startup, disk failure policy stop
org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file with 
0 chunks encountered: java.io.DataInputStream@18340c2
at 
org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:178)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
at 
org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:117)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:86)
 ~[apache-cassandra-2.2.0-rc2-SNAPSHOT.jar:2.2.0-rc2-SNAPSHOT]
...
{code}

[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-13 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14624431#comment-14624431
 ] 

Stefania commented on CASSANDRA-9686:
-

I don't think so. If I understood correctly, the problem with CASSANDRA-9774 is 
that the test relied incorrectly on a LEAK DETECTED error that got fixed - but 
it cannot have been fixed by this patch since we haven't committed yet. In any 
case the solution for CASSANDRA-9774 is to fix the test since it appears the 
LEAK error is gone already.



[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-13 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14624650#comment-14624650
 ] 

Stefania commented on CASSANDRA-9686:
-

That's the expected behavior, see the discussion above regarding carrying on 
with corrupt system tables. It's a bug that _stop_ allows the transports and 
gossip to be started after we have encountered corrupt sstable files. If this 
happened after startup, _stop_ would leave the process running but with no 
gossip or transports, so you could not do anything with it other than analyze 
its status via JMX. During startup, however, gossip and the transports aren't 
running yet, so there is nothing to stop, and they are simply started later. 
We decided the easiest way to fix this was to stop the process if we encounter 
corrupt sstables at startup.

You can use the _ignore_ policy to carry on, but you really should fix corrupt 
sstable files, or the system may become unstable depending on which sstable 
files are corrupt.

If standalone scrub (sstablescrub) recovers the files (it couldn't with 
yours), then it would move them as well. Perhaps you could open a ticket to 
enhance sstablescrub so that it always moves files, even those it cannot 
recover, perhaps behind a command line argument. So my suggestion is: try 
standalone scrub and open a ticket requesting that the files be moved. The 
idea would be: Cassandra fails fast (it won't start), you run standalone 
scrub, and you restart. Better that than ignoring corrupt sstable files, IMO.
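
For reference, the policy being discussed here is the {{disk_failure_policy}} 
setting in cassandra.yaml (the file is attached to this ticket), and 
standalone scrub runs offline, with the node down, as 
{{sstablescrub <keyspace> <table>}}. A paraphrased excerpt of the setting and 
its documented values (assuming the stock 2.2 defaults):

{code:title=cassandra.yaml (paraphrased excerpt)}
# Policy for data disk failures:
#   die:           shut down gossip and client transports and kill the JVM,
#                  so the node can be replaced
#   stop_paranoid: shut down gossip and client transports even for
#                  single-sstable errors
#   stop:          shut down gossip and client transports, leaving the node
#                  effectively dead but still inspectable via JMX
#   best_effort:   stop using the failed disk and respond to requests based
#                  on the remaining available sstables
#   ignore:        ignore fatal errors and let requests fail (pre-1.2 behaviour)
disk_failure_policy: stop
{code}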



[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-13 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14624687#comment-14624687
 ] 

Andreas Schnitzerling commented on CASSANDRA-9686:
--

It would be nice if I could choose that behavior with a parameter rather than 
ignoring everything in general. If I'm not at my company for a few days or 
over the weekend, I cannot inspect the files. I have to rely on the cluster, 
even across hard resets of the nodes.



[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-11 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14623252#comment-14623252
 ] 

Jeff Jirsa commented on CASSANDRA-9686:
---

Is this related to CASSANDRA-9774?




[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14621869#comment-14621869
 ] 

Marcus Eriksson commented on CASSANDRA-9686:


this looks good to me, +1

We should push this to 2.1 as well, I think. Could you backport (and squash etc)?



[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-10 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14621937#comment-14621937
 ] 

Stefania commented on CASSANDRA-9686:
-

It's [done|https://github.com/stef1927/cassandra/commits/9686-2.1].

I will post the CI results once they are available.



[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-10 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14623160#comment-14623160
 ] 

Stefania commented on CASSANDRA-9686:
-

CI results for 2.1:

http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-9686-2.1-testall/lastCompletedBuild/testReport/
http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-9686-2.1-dtest/lastCompletedBuild/testReport/

There seem to be a few flaky dtests, so I launched a second dtest build.



[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-09 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14621574#comment-14621574
 ] 

Stefania commented on CASSANDRA-9686:
-

[~krummas], the last part is ready for review. 

CI is still pending:

http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-9686-2.2-testall/lastCompletedBuild/testReport/
http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-9686-2.2-dtest/lastCompletedBuild/testReport/



[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-09 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619960#comment-14619960
 ] 

Stefania commented on CASSANDRA-9686:
-

Sounds good, thanks.



[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-08 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618207#comment-14618207
 ] 

Stefania commented on CASSANDRA-9686:
-

This is ready for review.

I added handling of {{CorruptSSTableException}} and {{FSError}} when loading 
sstables. When the disk failure policy is {{die}}, the process will exit as 
expected. For {{CorruptSSTableException}}, the transports will be stopped only 
with {{stop_paranoid}}, not just {{stop}}, as per documentation in the yaml 
file. 
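
Roughly, the dispatch described above looks like the following sketch 
(illustrative and self-contained, not the actual patch; the enum mirrors the 
disk_failure_policy values and the exception and helper names are stand-ins 
for Cassandra internals):

{code:title=Illustrative policy dispatch (not the patch)}
// Sketch of how a corrupt sstable or file system error hit while loading
// sstables maps to the configured disk_failure_policy, per the description above.
public class SSTableOpenSketch
{
    enum DiskFailurePolicy { DIE, STOP_PARANOID, STOP, BEST_EFFORT, IGNORE }

    // Stand-ins for the real exception types named in the comment above.
    static class CorruptSSTableException extends RuntimeException {}
    static class FSError extends Error {}

    static void openSSTable(Runnable open, DiskFailurePolicy policy)
    {
        try
        {
            open.run();                                  // e.g. the sstable open call
        }
        catch (CorruptSSTableException e)
        {
            // Single corrupt sstable: only die / stop_paranoid take the node down.
            if (policy == DiskFailurePolicy.DIE)
                System.exit(1);
            else if (policy == DiskFailurePolicy.STOP_PARANOID)
                stopTransportsAndGossip();
            // stop / best_effort / ignore: skip this sstable and carry on
        }
        catch (FSError e)
        {
            // General file system error: plain stop already isolates the node.
            if (policy == DiskFailurePolicy.DIE)
                System.exit(1);
            else if (policy == DiskFailurePolicy.STOP || policy == DiskFailurePolicy.STOP_PARANOID)
                stopTransportsAndGossip();
        }
    }

    static void stopTransportsAndGossip()
    {
        // Stand-in: in Cassandra this would stop gossip and the client transports.
    }
}
{code}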

One potential issue is that tables are loaded during the initial health checks, 
before the transports are started. So stopping the transports has no effect. 
Later on they are started as if nothing happened. So effectively only the 
{{die}} policy is honored in this scenario. Not sure what to do about this as 
it could be legit to want to restart the transports later on via JMX.

CI results:

http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-9686-2.2-testall/lastCompletedBuild/testReport/
http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-9686-2.2-dtest/lastCompletedBuild/testReport/

One flaky dtest; I've scheduled a new build (#4).


[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-08 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618399#comment-14618399
 ] 

Andreas Schnitzerling commented on CASSANDRA-9686:
--

I applied the patch. The only thing bothering me is that every time I restart 
C*, I also get long stack traces starting with: {code}ERROR 
[SSTableBatchOpen:1] (stacktrace){code}. The one-line error {code}ERROR 
[Reference-Reaper:1] LEAK DETECTED + Filename{code} should be enough, so the 
log isn't blown up while the corrupted file has not yet been handled by the 
user. Moving the corrupted files somewhere else would be a good idea too, in 
my opinion.


[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618214#comment-14618214
 ] 

Marcus Eriksson commented on CASSANDRA-9686:


sure, will have a look soon


[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618634#comment-14618634
 ] 

Marcus Eriksson commented on CASSANDRA-9686:


bq. One potential issue is that tables are loaded during the initial health 
checks, before the transports are started.
[~Stefania] could we perhaps set a special startup-exception boolean somewhere
(StorageProxy) so that we do not start transports + gossip on startup if we
have configured 'stop' and find corrupted sstables? This would still allow them
to be started via JMX, but we would not automatically start the node with the
corrupt sstables.
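
A minimal, self-contained sketch of that flag idea; the class and method names
below are invented for illustration and are not the actual Cassandra or
StorageProxy API.

{code}
// Hypothetical illustration of the proposed startup flag; names are invented.
public final class StartupCorruptionFlag
{
    private static volatile boolean sawCorruptSSTable = false;

    // would be called from the sstable loading path when an FSReadError is caught
    public static void reportCorruptSSTable()
    {
        sawCorruptSSTable = true;
    }

    // consulted before starting gossip and the client transports; with a 'stop'
    // policy the JVM stays up for JMX, but the node does not join the ring
    public static boolean mayStartTransports(String diskFailurePolicy)
    {
        boolean stopPolicy = "stop".equals(diskFailurePolicy)
                          || "stop_paranoid".equals(diskFailurePolicy);
        return !(sawCorruptSSTable && stopPolicy);
    }
}
{code}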


[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618610#comment-14618610
 ] 

Marcus Eriksson commented on CASSANDRA-9686:


[~Andie78] I think exceptions when starting with corrupt sstables are something
you have to live with. But do you still see the LEAK DETECTED errors?


[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-08 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619836#comment-14619836
 ] 

Stefania commented on CASSANDRA-9686:
-

bq. could we perhaps set a special startup exception boolean somewhere 
(StorageProxy) to not start transports + gossip on startup if we have 
configured 'stop' and find corrupted sstables?

We could, but Gossip is slightly problematic because of the local application
states, and it would need a bit of refactoring. I am a bit hesitant about
hiding a Gossip change behind a ticket about corrupt sstables, so I would
prefer to open another ticket stating clearly what we are doing. An alternative
approach is to force a 'die' policy during the startup checks or, better but
slightly more work, make SystemKeyspace.checkHealth() throw a
ConfigurationException if it notices that some sstables were skipped. Do you
think it is reasonable to refuse to start up if the system keyspace has corrupt
sstables, even if the disk_failure_policy says to carry on?

[~Andie78] There shouldn't be any more LEAK DETECTED errors; at least I don't
see any when starting up with corrupt files. Please let us know if you still
see any.
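
A self-contained sketch of the checkHealth() alternative; the exception type,
the method signature and the way skipped sstables are counted are assumptions
for illustration, not the real SystemKeyspace code.

{code}
// Hypothetical illustration only: fail the startup health check when corrupt
// system-keyspace sstables were skipped, regardless of disk_failure_policy.
final class SystemKeyspaceHealthCheck
{
    static class StartupException extends Exception
    {
        StartupException(String message) { super(message); }
    }

    static void checkHealth(int skippedCorruptSSTables) throws StartupException
    {
        // ... the existing health checks on the system keyspace would run here ...
        if (skippedCorruptSSTables > 0)
            throw new StartupException(skippedCorruptSSTables +
                " corrupt sstable(s) were skipped in the system keyspace; refusing to start");
    }
}
{code}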


[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619908#comment-14619908
 ] 

Marcus Eriksson commented on CASSANDRA-9686:


[~Stefania] Agreed, let's not refactor gossip startup here :) How about we add
a handleStartupFSError in FileUtils that just makes sure we die with a good
error message if die/stop/stop_paranoid is configured? The reason for including
stop/stop_paranoid is that operators might want to inspect the node with
JMX/nodetool etc., but there is not much to inspect before the node has started
up properly, so dying should be acceptable.
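
A rough, self-contained sketch of what such a handleStartupFSError could look
like; the enum, the System.exit call and the way the policy is passed in are
simplified stand-ins, not the real FileUtils/DatabaseDescriptor wiring.

{code}
// Hypothetical illustration only; real wiring through FileUtils is not shown.
final class StartupFSErrorHandler
{
    enum DiskFailurePolicy { best_effort, ignore, stop, stop_paranoid, die }

    static void handleStartupFSError(Throwable t, DiskFailurePolicy policy)
    {
        switch (policy)
        {
            case stop_paranoid:
            case stop:
            case die:
                // before the node has started properly there is nothing useful
                // to inspect, so exit with a clear message instead of limping along
                System.err.println("Exiting forcefully due to file system error during startup " +
                                   "(disk_failure_policy=" + policy + "): " + t);
                System.exit(1);
                break;
            default:
                // best_effort / ignore: skip the corrupt sstable and keep starting up
                break;
        }
    }
}
{code}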


[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-07 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14616396#comment-14616396
 ] 

Stefania commented on CASSANDRA-9686:
-

Thanks - I'll make sure to apply the correct disk failure policy to this 
exception and then submit a complete patch.


[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14614676#comment-14614676
 ] 

Marcus Eriksson commented on CASSANDRA-9686:


The patch looks good for suppressing the LEAK errors, but we should also honor
the disk_failure_policy here - if 'die' or 'stop' etc. is configured, we should
not allow Cassandra to start with a corrupt sstable.

We could probably inspect the exception with FileUtils#handleFSError() in
SSTR#openAll().


[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14613039#comment-14613039
 ] 

Stefania commented on CASSANDRA-9686:
-

No one volunteered on IRC, so let's wait for [~krummas] to be back next week
regarding the handling of corrupt sstables.



[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611557#comment-14611557
 ] 

Stefania commented on CASSANDRA-9686:
-

Using Andreas's compactions_in_progress sstable files I can reproduce the
exception in *2.1.7*, regardless of heap size, on 64-bit Linux:

{code}
ERROR 05:51:50 Exception in thread Thread[SSTableBatchOpen:1,5,main]
org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file with 0 chunks encountered: java.io.DataInputStream@4854d57
    at org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:205) ~[main/:na]
    at org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:127) ~[main/:na]
    at org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85) ~[main/:na]
    at org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79) ~[main/:na]
    at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72) ~[main/:na]
    at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168) ~[main/:na]
    at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:721) ~[main/:na]
    at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:676) ~[main/:na]
    at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:482) ~[main/:na]
    at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:381) ~[main/:na]
    at org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:519) ~[main/:na]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_45]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_45]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_45]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: java.io.IOException: Compressed file with 0 chunks encountered: java.io.DataInputStream@4854d57
    at org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:183) ~[main/:na]
    ... 15 common frames omitted
{code}

Aside from the LEAK errors, which are only in 2.2 and for which we have a 
patch, it's very much the same issue as CASSANDRA-8192. The following files 
contain only zeros:

xxd -p system-compactions_in_progress-ka-6866-CompressionInfo.db
00

xxd -p system-compactions_in_progress-ka-6866-Digest.sha1

xxd -p system-compactions_in_progress-ka-6866-TOC.txt
00

The other files contain some data. I have no idea how they ended up in this
state. [~Andie78] Do you see any assertion failures or other exceptions in the
log files from before the upgrade? Do you run any offline operations on the
files at all? And how do you normally stop the process?
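
For reference, a small self-contained helper mirroring the xxd check above;
this is purely illustrative and not part of Cassandra. It reports whether a
file is non-empty and contains only zero bytes.

{code}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

final class ZeroFileCheck
{
    // returns true if the file has content and every byte of it is zero
    static boolean isAllZeros(Path file) throws IOException
    {
        if (Files.size(file) == 0)
            return false;

        try (InputStream in = Files.newInputStream(file))
        {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1)
                for (int i = 0; i < read; i++)
                    if (buffer[i] != 0)
                        return false;
        }
        return true;
    }
}
{code}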



[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611734#comment-14611734
 ] 

Stefania commented on CASSANDRA-9686:
-

Yes, disk corruption due to a power cut would explain it. I don't think we
should delete corrupt sstables, though; we could maybe move them somewhere
else, where they wouldn't be loaded automatically. The scrub tool could then
copy the fixed version back into the right folder, but that is roughly the
opposite of what it does at the moment (save a backup and then fix the
original).

I need someone a bit more experienced to comment on this; I'll ask on the IRC
channel.
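
A minimal sketch of the "move them somewhere else" idea, assuming all
components of a corrupt sstable share a common name prefix up to the last '-';
the quarantine directory name and the prefix handling are assumptions for
illustration, not existing Cassandra behaviour.

{code}
// Hypothetical illustration: move every component of a corrupt sstable into a
// sibling "corrupt" directory that would not be scanned on startup.
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

final class SSTableQuarantine
{
    static void quarantine(Path anyComponent) throws IOException
    {
        // e.g. system-compactions_in_progress-ka-6866-Data.db
        //  ->  prefix system-compactions_in_progress-ka-6866
        String name = anyComponent.getFileName().toString();
        String prefix = name.substring(0, name.lastIndexOf('-'));

        Path quarantineDir = anyComponent.getParent().resolve("corrupt");
        Files.createDirectories(quarantineDir);

        try (DirectoryStream<Path> components =
                 Files.newDirectoryStream(anyComponent.getParent(), prefix + "-*"))
        {
            for (Path component : components)
                Files.move(component, quarantineDir.resolve(component.getFileName()),
                           StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
{code}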


[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-02 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611681#comment-14611681
 ] 

Andreas Schnitzerling commented on CASSANDRA-9686:
--

I checked the 2.1.7 instance, which I copied before upgrading. It shows the
same issue, and I have an idea why: our computers are in 3 different
laboratories for electronics engineers, and C* runs there as a background job.
We have regular emergency tests once a month in the morning: the power is
switched off and the computers are not shut down cleanly. I cannot change that
process, and in the laboratories it can always happen during electronics tests
that a fuse is tripped. That can happen anywhere, since not everybody uses a
power backup. In my opinion, invalid files like these should be deleted - maybe
with a warning in the log - especially when, as here, we don't lose any data
because they only hold temporary process information.
{code:title=system.log v2.1.7}
ERROR [SSTableBatchOpen:1] 2015-06-24 14:12:02,033 CassandraDaemon.java:223 - Exception in thread Thread[SSTableBatchOpen:1,5,main]
org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file with 0 chunks encountered: java.io.DataInputStream@1a32dcf
    at org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:205) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:127) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:721) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:676) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:482) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:381) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:519) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[na:1.7.0_55]
    at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.7.0_55]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.7.0_55]
    at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
Caused by: java.io.IOException: Compressed file with 0 chunks encountered: java.io.DataInputStream@1a32dcf
    at org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:183) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    ... 15 common frames omitted
ERROR [SSTableBatchOpen:1] 2015-06-24 14:12:02,123 CassandraDaemon.java:223 - Exception in thread Thread[SSTableBatchOpen:1,5,main]
org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file with 0 chunks encountered: java.io.DataInputStream@11ca50a
    at org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:205) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:127) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168) ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
    at

[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-01 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610579#comment-14610579
 ] 

Joshua McKenzie commented on CASSANDRA-9686:


bq. There is also this warning that looks concerning:
That's a standard message on Windows. Most, if not all, of the members in
CLibrary aren't going to be found in the Windows libc implementation,
regardless of 32- vs. 64-bit.

This ticket reminds me of CASSANDRA-8192 and looks like it might be related. I
modified CompressionMetadata to throw that exception rather than fail an
assertion, to give us a clearer indication that we were dealing with a corrupt
file. [~Andie78]: Have you run a scrub on those sstables after the upgrade? As
your previous corrupt file was all 0's, if this is the same situation I don't
know how much help scrub will be, since it should just drop the corrupt data,
but it might be worth a shot.

I'm surprised that the files read fine on 2.1.7 and then fail on 2.2.0; the
implication to me is that something corrupted those files during the upgrade
process. Given that we're on a 32-bit JVM with 1GB dedicated to the heap, there
are at least a couple of big elephants in the room that might be causing
unexpected behavior here.


[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-06-30 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609502#comment-14609502
 ] 

Stefania commented on CASSANDRA-9686:
-

I've prepared a 2.2 patch for the leak errors, available 
[here|https://github.com/stef1927/cassandra/commits/9686-2.2].

For verification purposes, the code I used to simulate the main exception is 
this:

{code}
diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java 
b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
index 23a9f3e..b92971d 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
@@ -170,7 +170,7 @@ public class CompressionMetadata
         try
         {
             chunkCount = input.readInt();
-            if (chunkCount <= 0)
+            if (chunkCount <= 0 || indexFilePath.contains("compactions_in_progress"))
                 throw new IOException("Compressed file with 0 chunks encountered: " + input);
         }
         catch (IOException e)
{code}

 FSReadError and LEAK DETECTED after upgrading
 -

 Key: CASSANDRA-9686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9686
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Stefania
 Fix For: 2.2.x

 Attachments: system.log


 After upgrading one of 15 nodes from 2.1.7 to 2.2.0-rc1 I get FSReadError and 
 LEAK DETECTED on start. Deleting the listed files, the failure goes away.
 {code:title=system.log}
 ERROR [SSTableBatchOpen:1] 2015-06-29 14:38:34,554 
 DebuggableThreadPoolExecutor.java:242 - Error in ThreadPoolExecutor
 org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file 
 with 0 chunks encountered: java.io.DataInputStream@1c42271
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:178)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:117)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:86)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:142)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:178)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:681)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:644)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:443)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:350)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:480)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 ~[na:1.7.0_55]
   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 Caused by: java.io.IOException: Compressed file with 0 chunks encountered: 
 java.io.DataInputStream@1c42271
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:174)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   ... 15 common frames omitted
 ERROR [Reference-Reaper:1] 2015-06-29 14:38:34,734 Ref.java:189 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@3e547f) to class 
 org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@1926439:D:\Programme\Cassandra\data\data\system\compactions_in_progress\system-compactions_in_progress-ka-6866
  was not released before the reference was garbage collected
 {code}





[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-06-30 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609433#comment-14609433
 ] 

Stefania commented on CASSANDRA-9686:
-

The LEAK errors are a consequence of the exception. We should fix it, but we 
don't need to block rc2, since the garbage collector will release the resources 
when it collects the reader. In {{SSTableReader.open()}} we should add a 
{{try}} block to make sure we close the reader in case of an exception, and 
increase the reference count by one just before returning it (but before 
closing it).
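
For what it's worth, the shape I have in mind is roughly the following. This is 
only a minimal, self-contained sketch of the pattern; {{Resource}}, {{load()}} 
and the counter are made-up names for the illustration, not the actual 
{{SSTableReader}} API:

{code}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of "close on failure, take the reference only on success".
// Resource stands in for the reader; none of these names are real Cassandra classes.
final class Resource implements AutoCloseable
{
    final AtomicInteger refCount = new AtomicInteger(0);

    void load() throws IOException
    {
        // pretend this is the step that can fail, e.g. reading the compression metadata
        throw new IOException("Compressed file with 0 chunks encountered");
    }

    @Override
    public void close()
    {
        // release buffers, file handles, etc. so nothing is left behind on failure
    }

    static Resource open() throws IOException
    {
        Resource r = new Resource();
        try
        {
            r.load();                      // may throw
            r.refCount.incrementAndGet();  // increase the reference count only just before returning
            return r;
        }
        catch (IOException | RuntimeException e)
        {
            r.close();                     // otherwise the reference reaper reports a leak when r is collected
            throw e;
        }
    }
}
{code}

With something like that in place, a failed open should no longer leave 
anything behind for the reference reaper to report.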

I tried to reproduce the CompressionMetadata exception by interrupting a stress 
write on 2.1.7 and then launching 2.2.0-rc1, but I couldn't. The exception 
simply says that the file contains no compression chunks, which I assume it 
should. Without the files it's difficult to say what the problem is.
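
To make the failing condition concrete: the check boils down to reading one int 
and rejecting a non-positive value. Below is a stand-alone sketch of just that 
check; it deliberately ignores the real layout of the compression metadata file:

{code}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

// Stand-alone illustration of the "0 chunks" condition; not the real metadata format,
// just a readChunkOffsets-style check applied to a stream whose chunk count is zero.
public class ZeroChunksDemo
{
    public static void main(String[] args) throws IOException
    {
        byte[] zeroChunkCount = ByteBuffer.allocate(4).putInt(0).array();
        try (DataInputStream input = new DataInputStream(new ByteArrayInputStream(zeroChunkCount)))
        {
            int chunkCount = input.readInt();
            if (chunkCount <= 0)
                throw new IOException("Compressed file with 0 chunks encountered: " + input);
        }
    }
}
{code}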

[~Andie78] it would help if you could provide us with some of the system files 
causing the problem (one system cf such as compactions_in_progress should do), 
along with cassandra.yaml (before and after the upgrade if different) and any 
steps you took during the upgrade. If you cannot provide the data files, a log 
file at DEBUG level might help a bit more.

There is also this warning that looks concerning:

{code}
WARN  [main] 2015-06-29 16:58:12,169 CLibrary.java:72 - JNA link failure, one 
or more native method will be unavailable.
{code}

At DEBUG level it should print the cause of the failure. Perhaps it's due to a 
32-bit incompatibility that we introduced recently, or perhaps it has always 
been there; I'm not sure. 
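
If it helps to narrow that down, a small probe like the one below, run on the 
affected node with the same {{jna.jar}} on the classpath, would show whether 
JNA links at all and what the underlying cause is. It's a hypothetical 
stand-alone check, not what {{CLibrary}} actually does:

{code}
// Hypothetical probe for the JNA link failure: it only tries to load com.sun.jna.Native
// reflectively and prints the cause chain, similar to what a DEBUG-level log should show.
public class JnaProbe
{
    public static void main(String[] args)
    {
        try
        {
            Class<?> jnaNative = Class.forName("com.sun.jna.Native");
            System.out.println("JNA linked OK: " + jnaNative.getName());
        }
        catch (Throwable t)
        {
            System.out.println("JNA link failure: " + t);
            for (Throwable cause = t.getCause(); cause != null; cause = cause.getCause())
                System.out.println("  caused by: " + cause);
        }
    }
}
{code}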

 FSReadError and LEAK DETECTED after upgrading
 -

 Key: CASSANDRA-9686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9686
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Stefania
 Fix For: 2.2.x

 Attachments: system.log


 After upgrading one of 15 nodes from 2.1.7 to 2.2.0-rc1 I get FSReadError and 
 LEAK DETECTED on start. Deleting the listed files, the failure goes away.

[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-06-30 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608560#comment-14608560
 ] 

Philip Thompson commented on CASSANDRA-9686:


[~Stefania], I know your plate is full, but can you look at this and see if it 
looks bad enough to block rc2? Marcus is out for the week.

 FSReadError and LEAK DETECTED after upgrading
 -

 Key: CASSANDRA-9686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9686
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Marcus Eriksson
 Fix For: 2.2.x

 Attachments: system.log


 After upgrading one of 15 nodes from 2.1.7 to 2.2.0-rc1 I get FSReadError and 
 LEAK DETECTED on start. Deleting the listed files, the failure goes away.


