[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980877#comment-13980877
 ] 

Shyam K Gopal commented on CASSANDRA-6525:
------------------------------------------

I tried a couple of things this morning and would like to give an update.

I changed the table definition to use COMPACT STORAGE with 
LeveledCompactionStrategy and loaded the data. 
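
For reference, a minimal sketch of that kind of definition (the keyspace and 
table names are taken from the log below, but the columns are illustrative 
assumptions, not the actual schema):

CREATE TABLE stock.dailystockquote (
    symbol      text,        -- illustrative partition key
    quote_date  timestamp,   -- illustrative clustering column
    close_price decimal,
    PRIMARY KEY (symbol, quote_date)
) WITH COMPACT STORAGE
  AND compaction = { 'class' : 'LeveledCompactionStrategy' };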

Server log: 
INFO 06:51:06,415 [Stream #80a0b380-cc67-11e3-a1a2-fb95dccb4714] Received 
streaming plan for Bulk Load
 INFO 06:51:06,416 [Stream #80a0b380-cc67-11e3-a1a2-fb95dccb4714] Prepare 
completed. Receiving 1 files(199308 bytes), sending 0 files(0 bytes)
 INFO 06:51:06,466 Enqueuing flush of 
Memtable-compactions_in_progress@2119789616(131/1310 serialized/live bytes, 7 
ops)
 WARN 06:51:06,466 setting live ratio to maximum of 64.0 instead of Infinity
 INFO 06:51:06,466 CFS(Keyspace='system', 
ColumnFamily='compactions_in_progress') liveRatio is 64.0 (just-counted was 
64.0).  calculation took 0ms for 0 cells
 INFO 06:51:06,467 Writing Memtable-compactions_in_progress@2119789616(131/1310 
serialized/live bytes, 7 ops)
 INFO 06:51:06,467 [Stream #80a0b380-cc67-11e3-a1a2-fb95dccb4714] Session with 
/192.168.1.73 is complete
 INFO 06:51:06,468 [Stream #80a0b380-cc67-11e3-a1a2-fb95dccb4714] All sessions 
completed
 INFO 06:51:06,479 Completed flushing 
***/apache-cassandra-2.0.7/data/data/system/compactions_in_progress/system-compactions_in_progress-jb-6-Data.db
 (158 bytes) for commitlog position ReplayPosition(segmentId=1398421195982, 
position=197721)
 INFO 06:51:06,483 Compacting 
[SSTableReader(path='***/apache-cassandra-2.0.7/data/data/stock/dailystockquote/stock-dailystockquote-jb-6-Data.db'),
 
SSTableReader(path='***/apache-cassandra-2.0.7/data/data/stock/dailystockquote/stock-dailystockquote-jb-5-Data.db')]
 INFO 06:51:06,485 Enqueuing flush of 
Memtable-compactions_in_progress@729498316(0/0 serialized/live bytes, 1 ops)
 INFO 06:51:06,491 Writing Memtable-compactions_in_progress@729498316(0/0 
serialized/live bytes, 1 ops)
 INFO 06:51:06,500 Completed flushing 
***/apache-cassandra-2.0.7/data/data/system/compactions_in_progress/system-compactions_in_progress-jb-7-Data.db
 (42 bytes) for commitlog position ReplayPosition(segmentId=1398421195982, 
position=197800)

Behavioral changes:
I can now query the table with no errors in the server log, but no data was loaded. 
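
For example, queries along these lines (again with an illustrative partition 
key column) complete without error but return no rows:

SELECT * FROM stock.dailystockquote;
SELECT * FROM stock.dailystockquote WHERE symbol = 'IBM';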


> Cannot select data which using "WHERE"
> --------------------------------------
>
>                 Key: CASSANDRA-6525
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Linux RHEL5
> RAM: 1GB
> Cassandra 2.0.3
> CQL spec 3.1.1
> Thrift protocol 19.38.0
>            Reporter: Silence Chow
>            Assignee: Tyler Hobbs
>             Fix For: 2.0.8
>
>         Attachments: 6981_test.py
>
>
> I am developing a system on a single machine using VMware Player with 1GB 
> RAM and a 1GB HDD. When I select all the data, I don't have any problems. But 
> when I use "WHERE" on a table with just under 10 records, I get this error in 
> the system log:
> ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) 
> Exception in thread Thread[ReadStage:41,5,main]
> java.io.IOError: java.io.EOFException
>         at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
>         at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
>         at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>         at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>         at 
> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
>         at 
> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
>         at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>         at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>         at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
>         at 
> org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
>         at 
> org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
>         at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
>         at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
>         at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
>         at 
> org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
>         at 
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
>         at 
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
>         at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
>         at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
>         at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
>         at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
>         at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:332)
>         at 
> org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
>         at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1401)
>         at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>         at java.lang.Thread.run(Unknown Source)
> Caused by: java.io.EOFException
>         at java.io.RandomAccessFile.readFully(Unknown Source)
>         at java.io.RandomAccessFile.readFully(Unknown Source)
>         at 
> org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
>         at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
>         at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
>         at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:74)
>         at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
>         ... 27 more
> E.g.
> SELECT * FROM table;
> This is fine.
> SELECT * FROM table WHERE field = 'N';
> (field is the partition key.)
> It says "Request did not complete within rpc_timeout." in cqlsh.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
