[ https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966868#comment-13966868 ]

Martin Bligh commented on CASSANDRA-6525:
-----------------------------------------

(copied from 6981)
I thought it was interesting how far apart these two numbers were:

"java.io.IOError: java.io.IOException: mmap segment underflow; remaining is 
20402577 but 1879048192 requested"

And that the requested number is vaguely close to 2^31 - did something produce 
a negative number and wrap a signed 32-bit value here?
To be fair, it's not that close to 2^31, but it's still way off from what was 
expected.
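
For what it's worth, here is a quick standalone check (plain Java, not 
Cassandra code; the byte values are purely a hypothetical illustration). The 
requested value is exactly 0x70000000, so it could just as easily be a few 
stray bytes misread as a big-endian int length; a genuine signed 32-bit wrap 
would show up as a negative number unless it had already been widened to long:

import java.nio.ByteBuffer;

public class SegmentMathCheck {
    public static void main(String[] args) {
        // The "requested" value from the error message above.
        int requested = 1879048192;
        System.out.println(Integer.toHexString(requested));  // 70000000

        // Hypothetical: stray bytes 0x70 0x00 0x00 0x00 interpreted as a
        // big-endian int length produce exactly that value.
        int asLength = ByteBuffer.wrap(new byte[]{0x70, 0x00, 0x00, 0x00}).getInt();
        System.out.println(asLength);                         // 1879048192

        // A genuine signed 32-bit wrap overflows to a negative value.
        int wrapped = Integer.MAX_VALUE + 1;
        System.out.println(wrapped);                          // -2147483648
    }
}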

> Cannot select data when using "WHERE"
> -------------------------------------
>
>                 Key: CASSANDRA-6525
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Linux RHEL5
> RAM: 1GB
> Cassandra 2.0.3
> CQL spec 3.1.1
> Thrift protocol 19.38.0
>            Reporter: Silence Chow
>            Assignee: Michael Shuler
>
> I am developing a system on my single machine in VMware Player with 1GB of 
> RAM and a 1GB HDD. When I select all of the data, I don't have any problems. 
> But when I use "WHERE", even though there are fewer than 10 records, I get 
> this error in the system log:
> ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) Exception in thread Thread[ReadStage:41,5,main]
> java.io.IOError: java.io.EOFException
>         at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
>         at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>         at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
>         at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>         at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
>         at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
>         at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
>         at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
>         at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
>         at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
>         at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
>         at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
>         at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
>         at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
>         at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
>         at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
>         at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:332)
>         at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
>         at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1401)
>         at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>         at java.lang.Thread.run(Unknown Source)
> Caused by: java.io.EOFException
>         at java.io.RandomAccessFile.readFully(Unknown Source)
>         at java.io.RandomAccessFile.readFully(Unknown Source)
>         at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
>         at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
>         at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
>         at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:74)
>         at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
>         ... 27 more
> E.g.
> SELECT * FROM table;
> This works fine.
> SELECT * FROM table WHERE field = 'N';
> (field is the partition key.)
> This fails with "Request did not complete within rpc_timeout." in cqlsh.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
