Hugh Gibson wrote:
>> What happens if you create the index on sCommunityID only? Does 
>> it still do the full table scan?
> 
> A bit difficult to drop the (sCommunityID, sTransactionID) index, as it's 
> the primary key. 

I'm intrigued. How do you get SQLite to use a multi-column index as its
primary key (i.e. a B-tree over both columns)? Please elaborate.
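
For what it's worth, I'd have guessed a table-level PRIMARY KEY
constraint, which SQLite backs with an automatic unique index. (The
column types below are my assumption, not taken from your schema:)

```sql
-- Guessed shape of the table: a two-column primary key declared
-- as a table constraint, implemented by SQLite as a unique index.
CREATE TABLE TransactionList (
  sCommunityID   TEXT,
  sTransactionID TEXT,
  PRIMARY KEY (sCommunityID, sTransactionID)
);
```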

Have you investigated the following to see how the optimizer deals with it?

SELECT sCommunityID, max(sTransactionID)
  FROM TransactionList
 GROUP BY sCommunityID
HAVING sCommunityID IN ('a030600001bFi','a030600004KDy', 'a030600008QhK');
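
You can ask the optimizer directly which plan it picks by prefixing the
query with EXPLAIN QUERY PLAN (the output format varies by SQLite
version):

```sql
-- Shows whether SQLite chooses the index or a full table scan:
EXPLAIN QUERY PLAN
SELECT sCommunityID, max(sTransactionID)
  FROM TransactionList
 GROUP BY sCommunityID
HAVING sCommunityID IN ('a030600001bFi','a030600004KDy','a030600008QhK');
```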

I think I'd normally err on the side of keeping a filter table handy
(temporary if you wish):

-- Initialise filter:
CREATE TABLE filter (sCommunityID TEXT);
CREATE INDEX filter_idx ON filter(sCommunityID);

-- Configure filter:
INSERT INTO filter VALUES ('a030600001bFi');
INSERT INTO filter VALUES ('a030600004KDy');
INSERT INTO filter VALUES ('a030600008QhK');

-- Get your results:
SELECT sCommunityID, max(sTransactionID)
  FROM filter NATURAL INNER JOIN TransactionList
 GROUP BY sCommunityID;

-- Clear filter ready for next time:
DELETE FROM filter;
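
If you'd rather not leave the filter table lying around in the database
file, the same idea works as a TEMP table scoped to your connection
(just a sketch; the DELETE step stays the same):

```sql
-- Per-connection variant: both table and index live in the temp
-- database and vanish when the connection closes.
CREATE TEMP TABLE filter (sCommunityID TEXT);
CREATE INDEX filter_idx ON filter(sCommunityID);
```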

I feel fairly confident that this method would make good use of your
existing index, and you can use a nice sqlite3_bind_text() for those
inserts :)
If you really, really wanted, you could munge the filter table into a
sequence of UNIONs, but it would be nowhere near as elegant. The filter
table easily extends to any number of filtered sCommunityIDs, while
anything crammed into a single query is going to start getting "icky".
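
For comparison, the UNION version I have in mind would look like the
following. It works, but it grows by one SELECT per ID:

```sql
-- One aggregate SELECT per community; UNION ALL is fine here
-- since the IDs are distinct anyway.
SELECT sCommunityID, max(sTransactionID)
  FROM TransactionList WHERE sCommunityID = 'a030600001bFi'
UNION ALL
SELECT sCommunityID, max(sTransactionID)
  FROM TransactionList WHERE sCommunityID = 'a030600004KDy'
UNION ALL
SELECT sCommunityID, max(sTransactionID)
  FROM TransactionList WHERE sCommunityID = 'a030600008QhK';
```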

Probably too late, but I've also found that indexing, grouping and
joining are far faster on INTEGER columns. If you could store
sCommunityID as an INTEGER...
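
If changing the column is feasible, a small mapping table is one way to
do it without losing the text IDs your application sees. (Table and
column names here are made up for illustration:)

```sql
-- Hypothetical mapping: keep the text IDs in one small table and
-- reference them by INTEGER everywhere else.
CREATE TABLE community (
  nCommunityID INTEGER PRIMARY KEY,  -- rowid alias: cheap to index/join
  sCommunityID TEXT UNIQUE NOT NULL
);
```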

Please correct me if any of the above is junk, I'm still learning :)

Regards,
Robin
-- 
Robin Breathe, Computer Services, Oxford Brookes University, Oxford, UK
[EMAIL PROTECTED]       Tel: +44 1865 483685  Fax: +44 1865 483073
