[ https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994056#comment-13994056 ]

Tyler Hobbs commented on CASSANDRA-6525:
----------------------------------------

Considering that drop/recreate seems to be necessary to reproduce the issue and 
that using a disk_access_mode of "standard" with no compression seems to fix 
it, I believe the problem is that old FileCacheService entries are being 
reused with new SSTables.  The FileCacheService is only used for 
PoolingSegmentedFiles, which are used if compression or the mmap disk access 
mode is enabled.  Since FileCacheService uses (String) file paths as keys, new 
SSTables with the same filename can look up old entries.

The only question is why the old FileCacheService entries are not being 
invalidated; this basically means that SSTableReader.close() is not being 
called in some cases.
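
As a rough sketch of the failure mode (hypothetical names, not the actual 
FileCacheService code; it assumes the recreated SSTable lands at the same 
path):

{noformat}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: a pooled-reader cache keyed on the (String) file path alone.
// After a DROP/CREATE, the new SSTable can reuse the same path, so a
// lookup returns a reader whose bounds describe the deleted file.
public class StalePathCacheSketch
{
    // Stand-in for a pooled reader; it remembers the length of the file
    // it was opened against.
    static final class PooledReader
    {
        final long fileLength;
        PooledReader(long fileLength) { this.fileLength = fileLength; }
    }

    // Stand-in for the cache: path -> pooled reader.
    static final Map<String, PooledReader> cache = new ConcurrentHashMap<>();

    public static void main(String[] args)
    {
        String path = "/var/lib/cassandra/data/ks/cf/ks-cf-jb-1-Data.db";

        // Reader for the old SSTable (1 MB) is pooled under its path.
        cache.put(path, new PooledReader(1_048_576));

        // Table is dropped and recreated; the new SSTable reuses the
        // same generation and hence the same path, but is only 4 KB.
        long actualLength = 4_096;

        // If SSTableReader.close() never ran, nothing invalidated the
        // entry, so reads against the new table get the stale reader.
        PooledReader reader = cache.get(path);
        if (reader.fileLength > actualLength)
        {
            // Seeking to offsets valid only in the old file runs past the
            // real EOF -- RandomAccessFile.readFully then throws the
            // EOFException seen in the reported trace.
            System.out.println("stale entry: cached length " + reader.fileLength
                               + ", actual length " + actualLength);
        }
    }
}
{noformat}

Keying the cache on something that changes across recreations (e.g. the 
descriptor/generation identity rather than the raw path), or reliably 
invalidating entries whenever an SSTableReader is closed, would both avoid 
the stale lookup.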

> Cannot select data when using "WHERE"
> --------------------------------------
>
>                 Key: CASSANDRA-6525
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Linux RHEL5
> RAM: 1GB
> Cassandra 2.0.3
> CQL spec 3.1.1
> Thrift protocol 19.38.0
>            Reporter: Silence Chow
>            Assignee: Tyler Hobbs
>             Fix For: 2.0.8
>
>         Attachments: 6981_test.py
>
>
> I am developing a system on a single machine using VMware Player with 1GB 
> RAM and a 1GB HDD. When I select all data, I don't have any problems. But 
> when I use "WHERE" on a table with just under 10 records, I get this error 
> in the system log:
> {noformat}
> ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) Exception in thread Thread[ReadStage:41,5,main]
> java.io.IOError: java.io.EOFException
>         at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
>         at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>         at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
>         at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>         at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
>         at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
>         at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
>         at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
>         at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
>         at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
>         at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
>         at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
>         at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
>         at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
>         at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
>         at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
>         at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:332)
>         at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
>         at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1401)
>         at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>         at java.lang.Thread.run(Unknown Source)
> Caused by: java.io.EOFException
>         at java.io.RandomAccessFile.readFully(Unknown Source)
>         at java.io.RandomAccessFile.readFully(Unknown Source)
>         at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
>         at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
>         at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
>         at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:74)
>         at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
>         ... 27 more
> {noformat}
> E.g.
> {{SELECT * FROM table;}}
> This works fine.
> {{SELECT * FROM table WHERE field = 'N';}}
> ({{field}} is the partition key.)
> This fails in cqlsh with "Request did not complete within rpc_timeout."


