[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-05-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14004445#comment-14004445
 ] 

Sylvain Lebresne commented on CASSANDRA-6525:
-

bq. Truncates don't reset the SSTable generation counter

Fair enough (though it would still feel cleaner to invalidate the key cache 
entries, even if it doesn't result in a bug). But anyway, +1 on the patch.

> Cannot select data which using "WHERE"
> --
>
> Key: CASSANDRA-6525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux RHEL5
> RAM: 1GB
> Cassandra 2.0.3
> CQL spec 3.1.1
> Thrift protocol 19.38.0
>Reporter: Silence Chow
>Assignee: Tyler Hobbs
> Fix For: 2.0.8
>
> Attachments: 6525-2.0.txt, 6981_test.py
>
>
> I am developing a system on a single machine using VMware Player with 1GB 
> RAM and a 1GB HDD. When I select all data, I don't have any problems. But when 
> I use "WHERE" (and the table has fewer than 10 records), I get this error in 
> the system log:
> {noformat}
> ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) 
> Exception in thread Thread[ReadStage:41,5,main]
> java.io.IOError: java.io.EOFException
> at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
> at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
> at 
> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
> at 
> org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
> at 
> org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
> at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
> at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
> at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
> at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:332)
> at 
> org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1401)
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> Caused by: java.io.EOFException
> at java.io.RandomAccessFile.readFully(Unknown Source)
> at java.io.RandomAccessFile.readFully(Unknown Source)
> at 
> org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:74)
> at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
> ... 27 more
> {noformat}
> E.g.
> {{SELECT * FROM table;}}
> It's fine.
> {{SELECT * FROM table WHERE field = 'N';}}
> field is the partition key.
> cqlsh says "Request did not complete within rpc_timeout."





[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-05-20 Thread Vladimir Kuptsov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003958#comment-14003958
 ] 

Vladimir Kuptsov commented on CASSANDRA-6525:
-

Yes, it looks like that. We deleted the data from 
/var/lib/cassandra/saved_caches/* and, after restarting the nodes, we no longer 
see the mentioned exceptions.



[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-05-20 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003948#comment-14003948
 ] 

Tyler Hobbs commented on CASSANDRA-6525:


[~vkuptcov] that seems consistent with what I found.  I suggest invalidating 
your key cache in the problematic DC.  You can use {{nodetool 
invalidatekeycache}} to do this.
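
For reference, the same invalidation can also be triggered over JMX; a minimal 
sketch, assuming the default JMX port (7199) and that the cache service is 
registered under the {{org.apache.cassandra.db:type=Caches}} MBean name:
{noformat}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class InvalidateKeyCache
{
    public static void main(String[] args) throws Exception
    {
        // Connect to the node's JMX agent (7199 is Cassandra's default JMX port).
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Assumed MBean name for the cache service; this is the operation that
            // `nodetool invalidatekeycache` is expected to end up invoking.
            ObjectName caches = new ObjectName("org.apache.cassandra.db:type=Caches");
            mbs.invoke(caches, "invalidateKeyCache", new Object[0], new String[0]);
        }
        finally
        {
            connector.close();
        }
    }
}
{noformat}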



[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-05-20 Thread Vladimir Kuptsov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003915#comment-14003915
 ] 

Vladimir Kuptsov commented on CASSANDRA-6525:
-

We have a cluster with 5 nodes in one DC and a cluster with two nodes in the 
other, with no replication between these datacenters. Both DCs run C* 2.0.5.

Today we found a bug with similar messages but a different result. We dropped 
and recreated one table in the DC with 5 nodes and just truncated the same 
table in the other DC.
After ~10 hours we noticed the following messages appearing in the first DC's 
logs:
{noformat}
ERROR [ReadStage:231469] 2014-05-20 21:05:20,349 CassandraDaemon.java (line 
192) Exception in thread Thread[ReadStage:231469,5,main]
java.io.IOError: java.io.EOFException
at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
:
{noformat}
On the node where these messages started, we also found several messages like:
{noformat}
 INFO [GossipTasks:1] 2014-05-20 21:20:31,864 Gossiper.java (line 863) 
InetAddress /10.33.20.91 is now DOWN
 INFO [RequestResponseStage:10] 2014-05-20 21:20:32,186 Gossiper.java (line 
849) InetAddress /10.33.20.91 is now UP
 INFO [GossipTasks:1] 2014-05-20 21:26:51,965 Gossiper.java (line 863) 
InetAddress /10.33.20.91 is now DOWN
{noformat}
and finally the node stopped.

We see this effect only in the DC where we dropped and recreated the table. In 
the DC where we truncated, everything is OK.



[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-05-20 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003874#comment-14003874
 ] 

Tyler Hobbs commented on CASSANDRA-6525:


bq. wouldn't it make sense to also invalidate on truncate in 
CFS.truncateBlocking, just to be on the safe side?

Truncates don't reset the SSTable generation counter 
({{CFS.fileIndexGenerator}}), so new tables will have different generation 
numbers (and hence different key cache keys).
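
Put differently, the generation number effectively becomes part of the key 
cache key. A rough sketch of that idea (hypothetical class, not Cassandra's 
actual KeyCacheKey):
{noformat}
import java.util.Arrays;
import java.util.Objects;

// Sketch only: the cached position is looked up by partition key *and* the sstable's
// generation, so an entry cached for generation 3 can never be returned against the
// new generation-9 file written after a truncate, because truncate leaves the
// generation counter running.
final class SketchKeyCacheKey
{
    final String keyspace;
    final String columnFamily;
    final int generation;
    final byte[] partitionKey;

    SketchKeyCacheKey(String keyspace, String columnFamily, int generation, byte[] partitionKey)
    {
        this.keyspace = keyspace;
        this.columnFamily = columnFamily;
        this.generation = generation;
        this.partitionKey = partitionKey;
    }

    @Override
    public boolean equals(Object o)
    {
        if (!(o instanceof SketchKeyCacheKey))
            return false;
        SketchKeyCacheKey that = (SketchKeyCacheKey) o;
        return generation == that.generation
            && keyspace.equals(that.keyspace)
            && columnFamily.equals(that.columnFamily)
            && Arrays.equals(partitionKey, that.partitionKey);
    }

    @Override
    public int hashCode()
    {
        return Objects.hash(keyspace, columnFamily, generation) * 31 + Arrays.hashCode(partitionKey);
    }
}
{noformat}
Drop/recreate is the problematic case precisely because the recreated column 
family starts a fresh generation counter, so the same names (and hence the same 
cache keys) can recur.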


[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-05-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14001471#comment-14001471
 ] 

Sylvain Lebresne commented on CASSANDRA-6525:
-

Patch lgtm, but wouldn't it make sense to also invalidate on truncate in 
CFS.truncateBlocking, just to be on the safe side?



[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-05-16 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13998551#comment-13998551
 ] 

Sylvain Lebresne commented on CASSANDRA-6525:
-

bq. For 2.0, perhaps we should do something similar to CASSANDRA-6351 and go 
through the key cache to invalidate all entries for the CF when it's dropped.

That makes sense to me.



[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-05-16 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994056#comment-13994056
 ] 

Tyler Hobbs commented on CASSANDRA-6525:


Considering that drop/recreate seems to be necessary to reproduce the issue and 
that using a disk_access_mode of "standard" with no compression seems to fix 
the issue, I believe the problem is that old FileCacheService entries are being 
reused with new SSTables.  The FileCacheService is only used for 
PoolingSegmentedFiles, which are used when compression or the mmap disk access 
mode is enabled.  Since FileCacheService uses (String) file paths as keys, new 
SSTables with the same filename can look up old entries.

The only question is why the old FileCacheService entries are not being 
invalidated; this basically means that SSTableReader.close() is not being 
called in some cases.
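
A minimal sketch of that hypothesised failure mode, using hypothetical names 
rather than the real FileCacheService API: a pool keyed only by file path keeps 
handing out readers opened against the old, deleted file once a recreated table 
reuses the same path.
{noformat}
import java.io.RandomAccessFile;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class PathKeyedReaderCache
{
    private final Map<String, RandomAccessFile> readers = new ConcurrentHashMap<>();

    RandomAccessFile get(String path) throws Exception
    {
        RandomAccessFile cached = readers.get(path);
        if (cached != null)
            return cached;   // may belong to a previous sstable that used this same path
        RandomAccessFile fresh = new RandomAccessFile(path, "r");
        readers.put(path, fresh);
        return fresh;
    }

    // Correct teardown: whoever closes the sstable (SSTableReader.close() in the real
    // code) must also evict its entry; otherwise the stale reader hits EOFException
    // when the new, shorter file is read at the old cached offsets.
    void invalidate(String path) throws Exception
    {
        RandomAccessFile old = readers.remove(path);
        if (old != null)
            old.close();
    }
}
{noformat}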


[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-05-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13993980#comment-13993980
 ] 

Tyler Hobbs commented on CASSANDRA-6525:


This seems to require near-OOM conditions to occur.  So far I've only been able 
to reproduce this in a low-memory environment (~1GB), and it either occurs just 
before an OOM or when the JVM is on the brink of exhausting its heap space.


[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-05-14 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997968#comment-13997968
 ] 

Tyler Hobbs commented on CASSANDRA-6525:


The problem is that key cache entries stick around after the keyspace is 
dropped.  After it's recreated and read, there are key cache hits that return 
old positions.  I'm not sure why it only seems to be a problem for the 
secondary index tables; my guess is that the key-cache preheating that happens 
after compaction is replacing the old entries in the key cache for the data 
tables.

CASSANDRA-5202 is the correct permanent solution for this, but that's for 2.1.  
For 2.0, perhaps we should do something similar to CASSANDRA-6351 and go 
through the key cache to invalidate all entries for the CF when it's dropped.
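
A rough sketch of that 2.0-style approach, with hypothetical types rather than 
the actual patch: walk the key cache and drop every entry that belongs to the 
column family being removed, so a recreated table can never hit stale positions.
{noformat}
import java.util.Iterator;
import java.util.Map;

// Sketch only: the real key cache is keyed by sstable descriptor plus partition key;
// here we just assume the key can tell us which column family it belongs to.
final class KeyCacheInvalidator
{
    interface CacheKey
    {
        String keyspace();
        String columnFamily();
    }

    static <K extends CacheKey, V> void invalidateFor(Map<K, V> keyCache, String keyspace, String columnFamily)
    {
        Iterator<K> it = keyCache.keySet().iterator();
        while (it.hasNext())
        {
            CacheKey key = it.next();
            // Drop every cached (key -> position) entry for the removed CF so that a
            // table recreated with the same name starts with a clean cache.
            if (key.keyspace().equals(keyspace) && key.columnFamily().equals(columnFamily))
                it.remove();
        }
    }
}
{noformat}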


[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-05-14 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994022#comment-13994022
 ] 

Tyler Hobbs commented on CASSANDRA-6525:


Interestingly, this doesn't seem to be reproducible when the keyspace isn't 
dropped and recreated.  (Just modify the repro script to remove the "DROP 
KEYSPACE" and use "IF NOT EXISTS" on the create statements.)



[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-05-14 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13996985#comment-13996985
 ] 

Tyler Hobbs commented on CASSANDRA-6525:


My initial guess about FileCacheService entries not being invalidated was 
wrong; they're all being invalidated correctly.  Furthermore, this isn't 
specific to compressed sstables (it reproduces with and without compression) or 
to a particular disk_access_mode (both standard and mmap have errors, although 
the specific errors are different).


[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-04-25 Thread Shyam K Gopal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980877#comment-13980877
 ] 

Shyam K Gopal commented on CASSANDRA-6525:
--

I tried a couple of things this morning and would like to give an update.

I changed the table definition to use COMPACT STORAGE with 
LeveledCompactionStrategy and loaded the data.

Server log: 
INFO 06:51:06,415 [Stream #80a0b380-cc67-11e3-a1a2-fb95dccb4714] Received 
streaming plan for Bulk Load
 INFO 06:51:06,416 [Stream #80a0b380-cc67-11e3-a1a2-fb95dccb4714] Prepare 
completed. Receiving 1 files(199308 bytes), sending 0 files(0 bytes)
 INFO 06:51:06,466 Enqueuing flush of 
Memtable-compactions_in_progress@2119789616(131/1310 serialized/live bytes, 7 
ops)
 WARN 06:51:06,466 setting live ratio to maximum of 64.0 instead of Infinity
 INFO 06:51:06,466 CFS(Keyspace='system', 
ColumnFamily='compactions_in_progress') liveRatio is 64.0 (just-counted was 
64.0).  calculation took 0ms for 0 cells
 INFO 06:51:06,467 Writing Memtable-compactions_in_progress@2119789616(131/1310 
serialized/live bytes, 7 ops)
 INFO 06:51:06,467 [Stream #80a0b380-cc67-11e3-a1a2-fb95dccb4714] Session with 
/192.168.1.73 is complete
 INFO 06:51:06,468 [Stream #80a0b380-cc67-11e3-a1a2-fb95dccb4714] All sessions 
completed
 INFO 06:51:06,479 Completed flushing 
***/apache-cassandra-2.0.7/data/data/system/compactions_in_progress/system-compactions_in_progress-jb-6-Data.db
 (158 bytes) for commitlog position ReplayPosition(segmentId=1398421195982, 
position=197721)
 INFO 06:51:06,483 Compacting 
[SSTableReader(path='***/apache-cassandra-2.0.7/data/data/stock/dailystockquote/stock-dailystockquote-jb-6-Data.db'),
 
SSTableReader(path='***/apache-cassandra-2.0.7/data/data/stock/dailystockquote/stock-dailystockquote-jb-5-Data.db')]
 INFO 06:51:06,485 Enqueuing flush of 
Memtable-compactions_in_progress@729498316(0/0 serialized/live bytes, 1 ops)
 INFO 06:51:06,491 Writing Memtable-compactions_in_progress@729498316(0/0 
serialized/live bytes, 1 ops)
 INFO 06:51:06,500 Completed flushing 
***/apache-cassandra-2.0.7/data/data/system/compactions_in_progress/system-compactions_in_progress-jb-7-Data.db
 (42 bytes) for commitlog position ReplayPosition(segmentId=1398421195982, 
position=197800)

Behavioral change: the table can now be queried with no errors in the server log, 
but no data was loaded.
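
For context, here is a minimal CQL3 sketch of a table definition along those lines, 
reusing the exchange/sc_code/load_date columns of the DSQ table described elsewhere 
in this thread. The actual schema used for this test is not shown in the comment, so 
treat the details below as assumptions rather than the exact DDL.

{code}
-- Hypothetical reconstruction, not the schema actually used in this test:
-- a compact-storage table compacted with LeveledCompactionStrategy.
CREATE TABLE stock.dsq (
    exchange text,
    sc_code int,
    load_date timeuuid,
    PRIMARY KEY (exchange, sc_code, load_date)
) WITH COMPACT STORAGE
  AND compaction = {'class': 'LeveledCompactionStrategy'};
{code}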



[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-04-24 Thread Shyam K Gopal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980524#comment-13980524
 ] 

Shyam K Gopal commented on CASSANDRA-6525:
--

FYI: the same issue also exists in version 2.0.7.






[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-04-11 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13967148#comment-13967148
 ] 

Ryan McGuire commented on CASSANDRA-6525:
-

This repros on git:cassandra-2.0 HEAD as well:

{code}
ERROR [ReadStage:82] 2014-04-11 17:49:50,903 CassandraDaemon.java (line 216) 
Exception in thread Thread[ReadStage:82,5,main]
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException: 
EOF after 35761 bytes out of 48857
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.(SimpleSliceReader.java:82)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.(SSTableSliceIterator.java:42)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
at 
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1540)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1369)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:164)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:103)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1735)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:50)
at 
org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:556)
at 
org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1723)
at 
org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135)
at 
org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1374)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1916)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.EOFException: EOF after 35761 bytes out of 48857
at 
org.apache.cassandra.io.util.FileUtils.skipBytesFully(FileUtils.java:394)
at 
org.apache.cassandra.utils.ByteBufferUtil.skipShortLength(ByteBufferUtil.java:382)
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.(SimpleSliceReader.java:70)
... 22 more
{code}


[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-04-11 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13967134#comment-13967134
 ] 

Ryan McGuire commented on CASSANDRA-6525:
-

Running this a few more times, I was able to get this on 2.0.5:

{code}
ERROR [ReadStage:90] 2014-04-11 17:37:57,768 CassandraDaemon.java (line 192) 
Exception in thread Thread[ReadStage:90,5,main]
java.lang.RuntimeException: 
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException: 
EOF after 46084 bytes out of 48857
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1935)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
java.io.EOFException: EOF after 46084 bytes out of 48857
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.(SimpleSliceReader.java:82)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.(SSTableSliceIterator.java:42)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
at 
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1560)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1379)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:166)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:105)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1754)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:53)
at 
org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:537)
at 
org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1742)
at 
org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135)
at 
org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1418)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
... 3 more
Caused by: java.io.EOFException: EOF after 46084 bytes out of 48857
at 
org.apache.cassandra.io.util.FileUtils.skipBytesFully(FileUtils.java:392)
at 
org.apache.cassandra.utils.ByteBufferUtil.skipShortLength(ByteBufferUtil.java:382)
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.(SimpleSliceReader.java:70)
... 22 more
{code}


[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-04-11 Thread Martin Bligh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966868#comment-13966868
 ] 

Martin Bligh commented on CASSANDRA-6525:
-

(copied from 6981)
I thought it was interesting how far apart these two numbers are:

"java.io.IOError: java.io.IOException: mmap segment underflow; remaining is 
20402577 but 1879048192 requested"

The requested number is also vaguely close to 2^31 - did something produce a 
negative number and wrap a 32-bit signed value here?
To be fair, it's not that close to 2^31, but it is still way off from what was expected.
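
As a side note on the arithmetic (not part of the original comment): 1879048192 is 
exactly 0x70000000, which is the sort of value you get from interpreting four 
unrelated bytes at the wrong file offset as a big-endian length, whereas a genuine 
signed 32-bit wrap-around would instead show up as a negative number. A minimal 
Java sketch of both cases:

{code}
import java.nio.ByteBuffer;

public class LengthMisreadDemo {
    public static void main(String[] args) {
        // Arbitrary bytes (leading 0x70) read as a big-endian int "length":
        int requested = ByteBuffer.wrap(new byte[] { 0x70, 0x00, 0x00, 0x00 }).getInt();
        System.out.println(requested);                       // 1879048192
        System.out.println(Integer.toHexString(requested));  // 70000000

        // A signed 32-bit overflow, by contrast, goes negative:
        System.out.println((int) (2147483647L + 2L));        // -2147483647
    }
}
{code}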


[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-04-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966676#comment-13966676
 ] 

Tyler Hobbs commented on CASSANDRA-6525:


It's worth noting that in CASSANDRA-6981, setting {{disk_access_mode: 
standard}} seemed to fix the problem.
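
For anyone trying that workaround, it is a cassandra.yaml setting; a minimal excerpt 
(the option may be absent from the shipped config and can simply be added):

{code}
# conf/cassandra.yaml (excerpt): read sstables with standard buffered I/O instead of mmap
disk_access_mode: standard
{code}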






[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-04-11 Thread Shyam K Gopal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966429#comment-13966429
 ] 

Shyam K Gopal commented on CASSANDRA-6525:
--

I am still getting this error in DSE 2.0.5 and 2.0.6. Tried on various machines 
(Mac and Ubuntu).

Steps:
1. CREATE TABLE DSQ (
exchange text,
sc_code int,
load_date timeuuid, /* tried timestamp also but same behaviour */
PRIMARY KEY (exchange, sc_code, load_date)
)
2. Wrote the SSTable with:
writer.newRow(compositeColumn.builder().add(bytes(entry.stock_exchange)).add(bytes(entry.sc_code)).add(bytes(new com.eaio.uuid.UUID().toString())).build());
3. Ran sstableloader:
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of stock/DSQ/stock-DSQ-ib-1-Data.db to [/127.0.0.1]
progress: [/127.0.0.1 1/1 (100%)] [total: 100% - 2147483647MB/s (avg: 2MB/s)
4. No errors in the server log.
5. Logged into cqlsh and ran SELECT * FROM DSQ;
6. Errors in the server log:
Exception in thread Thread[ReadStage:51,5,main]
java.io.IOError: java.io.EOFException
at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
at 
org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:82)
at 
org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:59)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
at 
org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
at 
org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.(MergeIterator.java:87)
at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
at 
org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
at 
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
at 
org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
at 
org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1607)
at 
org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1603)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1754)
at 
org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1718)
at 
org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:137)
at 
org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1418)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.EOFException
at java.io.RandomAccessFile.readUnsignedShort(RandomAccessFile.java:713)
at 
org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:3

[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-03-14 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13935623#comment-13935623
 ] 

Michael Shuler commented on CASSANDRA-6525:
---

I tested using the cassandra-2.0 git branch on my laptop (16G), which ran fine. I 
also tried on a 4G box and a 2G box - both ran fine looping through the script 
below while running cassandra-stress in another shell. I'm about 10 or so loops 
through, while looping stress reads and writes on a 1G virtualbox vm; it's slow, 
but I've had no errors so far. I'll let it keep running a while to see if I can 
get a timeout or error of some sort.

{code}
#!/bin/sh

# create some data:
for i in $(seq 1 50); do echo "N,text blah blah$i,text blah blah$i,text blah blah$i" >> c6525_1-50.csv ; done
for i in $(seq 51 500); do echo "N,text blah blah$i,text blah blah$i,text blah blah$i" >> c6525_51-500.csv ; done
for i in $(seq 501 5000); do echo "N,text blah blah$i,text blah blah$i,text blah blah$i" >> c6525_501-5000.csv ; done
for i in $(seq 5001 50000); do echo "N,text blah blah$i,text blah blah$i,text blah blah$i" >> c6525_5001-50000.csv ; done

# create our cql to drop/create/import
cat << 'EOF' > c6525_run.cql
DROP KEYSPACE c6525;

CREATE KEYSPACE c6525 WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'};

CREATE TABLE c6525.test (hidden text, field2 text, field3 text, field4 text, PRIMARY KEY (hidden, field2, field3));

COPY c6525.test (hidden, field2, field3, field4) FROM 'c6525_1-50.csv';
SELECT * from c6525.test WHERE hidden = 'N';

COPY c6525.test (hidden, field2, field3, field4) FROM 'c6525_51-500.csv';
SELECT * from c6525.test WHERE hidden = 'N';

COPY c6525.test (hidden, field2, field3, field4) FROM 'c6525_501-5000.csv';
SELECT * from c6525.test WHERE hidden = 'N';

COPY c6525.test (hidden, field2, field3, field4) FROM 'c6525_5001-50000.csv';
SELECT * from c6525.test WHERE hidden = 'N' LIMIT 51000;
EOF

echo; echo "*** Hit CTL-C to stop looping..***"; echo
sleep 3

# loop it
while true; do echo "SOURCE 'c6525_run.cql';" | cqlsh ; sleep 1 ; done
{code}


[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-01-21 Thread Vaya Sinola (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877955#comment-13877955
 ] 

Vaya Sinola commented on CASSANDRA-6525:


I got the exact same error message and problem, but for me it was a small table 
of about 20 rows and I did not run a stress test.
Running a compaction fixed the problem for me. I'm not sure what the cause was, 
but I had recently dropped and recreated the table.
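
For reference, a major compaction of a single table can be triggered manually with 
nodetool; the keyspace and table names below are placeholders, not ones taken from 
this issue:

{code}
# trigger a major compaction of one table
nodetool compact my_keyspace my_table
{code}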






[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2013-12-29 Thread Silence Chow (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858627#comment-13858627
 ] 

Silence Chow commented on CASSANDRA-6525:
-

I think I know how to reproduce my situation now.
I created a new table identical to that one. At the beginning I could run a query 
like SELECT * FROM test WHERE hidden = 'N' without problems. After that I ran a 
stress test which exhausted all of the physical RAM and swap, and then the 
problem happened again.







[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2013-12-28 Thread Silence Chow (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13858003#comment-13858003
 ] 

Silence Chow commented on CASSANDRA-6525:
-

My table has only 4 fields. For example:
CREATE TABLE test (
  hidden text,
  field2 text,
  field3 text,
  field4 text,
  PRIMARY KEY (hidden, field2 , field3)
);

The query using CQL3: SELECT * FROM test WHERE hidden = 'N';






[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2013-12-26 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13856930#comment-13856930
 ] 

Jonathan Ellis commented on CASSANDRA-6525:
---

Can you describe how to reproduce starting with a fresh Cassandra install?




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)