[ https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966429#comment-13966429 ]
Shyam K Gopal commented on CASSANDRA-6525:
------------------------------------------

I am still getting this error in DSE 2.0.5 and 2.0.6. Tried on several machines (Mac and Ubuntu).

Steps:

1 -> Created the table:

CREATE TABLE DSQ (
    exchange text,
    sc_code int,
    load_date timeuuid, /* tried timestamp also but same behaviour */
    PRIMARY KEY (exchange, sc_code, load_date)
)

2 -> Wrote the SSTable:

writer.newRow(compositeColumn.builder()
    .add(bytes(entry.stock_exchange))
    .add(bytes(entry.sc_code))
    .add(bytes(new com.eaio.uuid.UUID().toString()))
    .build());

3 -> Ran sstableloader:

Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of stock/DSQ/stock-DSQ-ib-1-Data.db to [/127.0.0.1]
progress: [/127.0.0.1 1/1 (100%)] [total: 100% - 2147483647MB/s (avg: 2MB/s)]

4 -> No errors in the server log.

5 -> Logged into cqlsh and ran: select * from DSQ;

6 -> Errors in the server log:

Exception in thread Thread[ReadStage:51,5,main]
java.io.IOError: java.io.EOFException
	at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
	at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
	at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
	at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
	at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
	at org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:82)
	at org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:59)
	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
	at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
	at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
	at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
	at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
	at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
	at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
	at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
	at org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
	at org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
	at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
	at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
	at org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1607)
	at org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1603)
	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
	at org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1754)
	at org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1718)
	at org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:137)
	at org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1418)
	at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.EOFException
	at java.io.RandomAccessFile.readUnsignedShort(RandomAccessFile.java:713)
	at org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
	at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
	at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:74)
	at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
	... 37 more

7 -> Client shows "Request did not complete within rpc_timeout."

> Cannot select data which using "WHERE"
> --------------------------------------
>
>                 Key: CASSANDRA-6525
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Linux RHEL5
>                      RAM: 1GB
>                      Cassandra 2.0.3
>                      CQL spec 3.1.1
>                      Thrift protocol 19.38.0
>            Reporter: Silence Chow
>            Assignee: Michael Shuler
>
> I am developing a system on a single machine using VMware Player, with 1GB RAM and a 1GB HDD. When I select all the data I don't have any problems, but when I use "WHERE", even though the table has fewer than 10 records, it fails.
> I get this error in the system log:
>
> ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187)
> Exception in thread Thread[ReadStage:41,5,main]
> java.io.IOError: java.io.EOFException
> 	at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
> 	at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
> 	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> 	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> 	at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
> 	at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
> 	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> 	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> 	at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
> 	at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
> 	at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
> 	at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
> 	at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
> 	at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
> 	at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
> 	at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
> 	at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
> 	at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
> 	at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
> 	at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
> 	at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
> 	at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:332)
> 	at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
> 	at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1401)
> 	at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> 	at java.lang.Thread.run(Unknown Source)
> Caused by: java.io.EOFException
> 	at java.io.RandomAccessFile.readFully(Unknown Source)
> 	at java.io.RandomAccessFile.readFully(Unknown Source)
> 	at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
> 	at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
> 	at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
> 	at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:74)
> 	at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
> 	... 27 more
>
> E.g.
> SELECT * FROM table;
> It's fine.
> SELECT * FROM table WHERE field = 'N';
> (field is the partition key.)
> cqlsh says "Request did not complete within rpc_timeout."

-- This message was sent by Atlassian JIRA (v6.2#6252)
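[Editorial note on the failure mode, for readers of this dump.] In both traces the innermost frames are ByteBufferUtil.readWithShortLength called from OnDiskAtom$Serializer.deserializeFromSSTable: each on-disk cell name is framed as a 2-byte unsigned length followed by that many bytes, and the EOFException fires when the file ends before the announced bytes arrive, i.e. the SSTable's cell layout is not what the reader expects. One plausible trigger here (an inference, not confirmed in this thread) is the writer.newRow(...) call in the comment, which packs all three PRIMARY KEY parts into the row key, whereas in CQL3 storage only the partition key (exchange) belongs there, with sc_code and load_date encoded into the composite cell names. The sketch below is a minimal, JDK-only illustration of that length-prefixed framing and its failure mode; it is not Cassandra's actual code, and the class and method names are illustrative.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ShortLengthFraming {

    // Frame a value the way the trace's readWithShortLength expects it:
    // a 2-byte unsigned length followed by the payload bytes.
    static void writeWithShortLength(DataOutputStream out, byte[] value) throws IOException {
        out.writeShort(value.length); // 2-byte unsigned length prefix
        out.write(value);             // payload
    }

    // Read a framed value back; readFully throws EOFException if the
    // stream ends before the announced number of bytes is available.
    static byte[] readWithShortLength(DataInputStream in) throws IOException {
        int length = in.readUnsignedShort();
        byte[] value = new byte[length];
        in.readFully(value);
        return value;
    }

    public static void main(String[] args) throws IOException {
        // Round-trip a well-formed value: works.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeWithShortLength(new DataOutputStream(buf), "NSE".getBytes(StandardCharsets.UTF_8));
        byte[] ok = readWithShortLength(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(new String(ok, StandardCharsets.UTF_8)); // prints "NSE"

        // Mis-framed data: the length prefix claims 100 bytes, only 3 follow.
        byte[] bad = {0, 100, 'N', 'S', 'E'};
        try {
            readWithShortLength(new DataInputStream(new ByteArrayInputStream(bad)));
        } catch (EOFException e) {
            // Same failure mode as the traces above: the reader trusted
            // a length field that the remaining data could not satisfy.
            System.out.println("EOFException"); // prints "EOFException"
        }
    }
}
```

The point of the sketch: readWithShortLength cannot distinguish "wrongly encoded cells" from "truncated file"; any byte stream whose length prefixes do not line up with real cell boundaries eventually reads past the end, which is why a load that streams cleanly can still produce reads that die with EOFException.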