[ https://issues.apache.org/jira/browse/CASSANDRA-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115132#comment-15115132 ]
Branimir Lambov commented on CASSANDRA-8180:
--------------------------------------------

bq. iter.partitionLevelDeletion() is currently called for all sstables by queryMemtableAndDiskInternal()

That is, your code is currently completely disabled by the [{{if (... iterator != null) return null;}}|https://github.com/stef1927/cassandra/commit/d5cfc6fd56d50eda5d9c510591bae1d66e17ec59#diff-78de604e500e1cc63c6d53a2ac6d6d65R52] check in {{lowerBound()}}? Is there a reason to want the code committed then?

bq. We can replace BTreeRow.emptyRow(ret) with new RangeTombstoneBoundMarker(RangeTombstone.Bound.inclusiveOpen(filter.isReversed(), ret.getRawValues()), DeletionTime.LIVE) if there is still a valid reason and ideally a failing test would be useful.

It is not as easy to fully break it as I was expecting, but in the presence of tombstones you can still break a basic invariant of the iterators -- the inequality (strict ordering) of the returned elements. A test that does

{code}
createTable("CREATE TABLE %s (a int, b int, c text, primary key (a, b))");
execute("INSERT INTO %s (a, b, c) VALUES(1, 1, '1')");
execute("INSERT INTO %s (a, b, c) VALUES(1, 3, '3')");
execute("DELETE FROM %s where a=1 and b >= 2 and b <= 3");
execute("INSERT INTO %s (a, b, c) VALUES(1, 2, '2')");
flush();
execute("DELETE FROM %s where a=1 and b >= 2 and b <= 3");
flush();
execute("SELECT * FROM %s WHERE a = ?", 1);
{code}

will end up with an iterator that lists two tombstone markers with equal clustering. Unfortunately that's filtered out before being returned, so it's not trivial to write a test that checks this. On the other hand, an assertion in {{UnfilteredIteratorWithLowerBound}} that saves the returned {{lowerBound()}} and checks that the first {{next()}} is greater than or equal to it may be something we want to have anyway.
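To make the suggested assertion concrete, here is a minimal, self-contained sketch of the invariant. {{LowerBoundCheckingIterator}} and its comparator wiring are illustrative stand-ins rather than the actual {{UnfilteredIteratorWithLowerBound}} code, which would compare the saved bound and the clustering of the first unfiltered using the table's clustering comparator:

{code}
import java.util.Comparator;
import java.util.Iterator;

// Sketch only: remember the lower bound announced before iteration and assert
// that the first element actually returned does not sort below it.
final class LowerBoundCheckingIterator<T> implements Iterator<T>
{
    private final Iterator<T> delegate;
    private final Comparator<? super T> comparator;
    private T lowerBound; // the value previously handed out as the lower bound, if any

    LowerBoundCheckingIterator(Iterator<T> delegate, Comparator<? super T> comparator, T lowerBound)
    {
        this.delegate = delegate;
        this.comparator = comparator;
        this.lowerBound = lowerBound;
    }

    public boolean hasNext()
    {
        return delegate.hasNext();
    }

    public T next()
    {
        T next = delegate.next();
        if (lowerBound != null)
        {
            // Only the first returned element needs to be checked against the bound.
            assert comparator.compare(lowerBound, next) <= 0
                : "lowerBound " + lowerBound + " sorts after first returned element " + next;
            lowerBound = null;
        }
        return next;
    }
}
{code}

With assertions enabled ({{-ea}}), this would immediately flag a lower bound that overshoots the first element the merge actually produces, which is otherwise masked by the later filtering.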
> Optimize disk seek using min/max column name meta data when the LIMIT clause is used
> -------------------------------------------------------------------------------------
>
>                  Key: CASSANDRA-8180
>                  URL: https://issues.apache.org/jira/browse/CASSANDRA-8180
>              Project: Cassandra
>           Issue Type: Improvement
>           Components: Local Write-Read Paths
>          Environment: Cassandra 2.0.10
>             Reporter: DOAN DuyHai
>             Assignee: Stefania
>             Priority: Minor
>              Fix For: 3.x
>
>          Attachments: 8180_001.yaml, 8180_002.yaml
>
>
> I was working on an example of sensor data table (timeseries) and face a use case where C* does not optimize read on disk.
> {code}
> cqlsh:test> CREATE TABLE test(id int, col int, val text, PRIMARY KEY(id,col)) WITH CLUSTERING ORDER BY (col DESC);
> cqlsh:test> INSERT INTO test(id, col , val ) VALUES ( 1, 10, '10');
> ...
> >nodetool flush test test
> ...
> cqlsh:test> INSERT INTO test(id, col , val ) VALUES ( 1, 20, '20');
> ...
> >nodetool flush test test
> ...
> cqlsh:test> INSERT INTO test(id, col , val ) VALUES ( 1, 30, '30');
> ...
> >nodetool flush test test
> {code}
> After that, I activate request tracing:
> {code}
> cqlsh:test> SELECT * FROM test WHERE id=1 LIMIT 1;
>  activity                                                                   | timestamp    | source    | source_elapsed
> ----------------------------------------------------------------------------+--------------+-----------+----------------
>  execute_cql3_query                                                         | 23:48:46,498 | 127.0.0.1 |              0
>  Parsing SELECT * FROM test WHERE id=1 LIMIT 1;                             | 23:48:46,498 | 127.0.0.1 |             74
>  Preparing statement                                                        | 23:48:46,499 | 127.0.0.1 |            253
>  Executing single-partition query on test                                   | 23:48:46,499 | 127.0.0.1 |            930
>  Acquiring sstable references                                               | 23:48:46,499 | 127.0.0.1 |            943
>  Merging memtable tombstones                                                | 23:48:46,499 | 127.0.0.1 |           1032
>  Key cache hit for sstable 3                                                | 23:48:46,500 | 127.0.0.1 |           1160
>  Seeking to partition beginning in data file                                | 23:48:46,500 | 127.0.0.1 |           1173
>  Key cache hit for sstable 2                                                | 23:48:46,500 | 127.0.0.1 |           1889
>  Seeking to partition beginning in data file                                | 23:48:46,500 | 127.0.0.1 |           1901
>  Key cache hit for sstable 1                                                | 23:48:46,501 | 127.0.0.1 |           2373
>  Seeking to partition beginning in data file                                | 23:48:46,501 | 127.0.0.1 |           2384
>  Skipped 0/3 non-slice-intersecting sstables, included 0 due to tombstones  | 23:48:46,501 | 127.0.0.1 |           2768
>  Merging data from memtables and 3 sstables                                 | 23:48:46,501 | 127.0.0.1 |           2784
>  Read 2 live and 0 tombstoned cells                                         | 23:48:46,501 | 127.0.0.1 |           2976
>  Request complete                                                           | 23:48:46,501 | 127.0.0.1 |           3551
> {code}
> We can clearly see that C* hits 3 SSTables on disk instead of just one, although it has the min/max column meta data to decide which SSTable contains the most recent data.
> Funny enough, if we add a clause on the clustering column to the select, this time C* optimizes the read path:
> {code}
> cqlsh:test> SELECT * FROM test WHERE id=1 AND col > 25 LIMIT 1;
>  activity                                                                   | timestamp    | source    | source_elapsed
> ----------------------------------------------------------------------------+--------------+-----------+----------------
>  execute_cql3_query                                                         | 23:52:31,888 | 127.0.0.1 |              0
>  Parsing SELECT * FROM test WHERE id=1 AND col > 25 LIMIT 1;                | 23:52:31,888 | 127.0.0.1 |             60
>  Preparing statement                                                        | 23:52:31,888 | 127.0.0.1 |            277
>  Executing single-partition query on test                                   | 23:52:31,889 | 127.0.0.1 |            961
>  Acquiring sstable references                                               | 23:52:31,889 | 127.0.0.1 |            971
>  Merging memtable tombstones                                                | 23:52:31,889 | 127.0.0.1 |           1020
>  Key cache hit for sstable 3                                                | 23:52:31,889 | 127.0.0.1 |           1108
>  Seeking to partition beginning in data file                                | 23:52:31,889 | 127.0.0.1 |           1117
>  Skipped 2/3 non-slice-intersecting sstables, included 0 due to tombstones  | 23:52:31,889 | 127.0.0.1 |           1611
>  Merging data from memtables and 1 sstables                                 | 23:52:31,890 | 127.0.0.1 |           1624
>  Read 1 live and 0 tombstoned cells                                         | 23:52:31,890 | 127.0.0.1 |           1700
>  Request complete                                                           | 23:52:31,890 | 127.0.0.1 |           2140
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)