> I had a look at this and tried it without using the extra auto-increment
> field, using the table ROWID instead to check for consecutiveness. It
> would work great and a lot faster indeed if it wasn't for the multiple
> values on the same date. The problem with that is that if you group by
> date it may skip a ROWID, so it won't pick up that triple. The answer is
> just to clean that table up and clear these extra values on the same
> day. This will always need to be done in any case, so it might as well
> be done as a one-off rather than in every SELECT. Thanks for that idea,
> it looks to me the fastest.
>
> RBS
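To make the discussion concrete, here is a minimal sketch of the ROWID approach RBS describes, in Python with the standard `sqlite3` module. The schema of TABLE1 is not shown in the thread, so the column names (Id, Date, Value) and the sample rows are assumptions for illustration. Note how the duplicate date (two rows with Date = 12) makes rowid-consecutive windows stop lining up with date-consecutive ones, which is exactly the problem described:

```python
import sqlite3

# Hypothetical schema; the thread does not show TABLE1's definition.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TABLE1 (Id INTEGER, Date INTEGER, Value REAL)")
con.executemany(
    "INSERT INTO TABLE1 (Id, Date, Value) VALUES (?, ?, ?)",
    [(1, 10, 5.0), (1, 11, 6.0), (1, 12, 7.0),   # a date-consecutive triple
     (1, 12, 7.5),                               # duplicate date
     (1, 14, 8.0)])

# Self-join on rowid: rows inserted in date order get consecutive rowids,
# so three consecutive rowids stand in for three consecutive dates -- but
# only as long as each date appears exactly once.
triples = con.execute("""
    SELECT a.Id, a.Date, a.Value, b.Value, c.Value
    FROM TABLE1 a
    JOIN TABLE1 b ON b.rowid = a.rowid + 1 AND b.Id = a.Id
    JOIN TABLE1 c ON c.rowid = a.rowid + 2 AND c.Id = a.Id
""").fetchall()
print(triples)
```

With the duplicate Date = 12 row present, the query returns three rowid-consecutive windows, but only the first of them is actually date-consecutive, which is why the cleanup step is needed before this query can be trusted.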
With a complexity of O(N log N), this algorithm is second best after a
pure sequential scan that compares successive triples, which runs in
O(N). In practice, given the locality of the accesses for the join (the
three rows to join are most often in the same page) and the small size of
the primary index, the performance should be close to that of a
sequential scan.
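For comparison, the O(N) sequential scan mentioned above can be sketched as a single ordered pass that keeps a sliding window of three rows. The schema and column names are again assumptions, and the window test assumes dates have already been deduplicated per Id:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TABLE1 (Id INTEGER, Date INTEGER, Value REAL)")
con.executemany("INSERT INTO TABLE1 VALUES (?, ?, ?)",
                [(1, 10, 5.0), (1, 11, 6.0), (1, 12, 7.0), (1, 14, 8.0)])

triples = []
window = []  # the last three rows seen
for row in con.execute("SELECT Id, Date, Value FROM TABLE1 ORDER BY Id, Date"):
    window.append(row)
    if len(window) > 3:
        window.pop(0)
    # With distinct sorted dates, a spread of exactly 2 between the first
    # and third date means the three dates are consecutive.
    if (len(window) == 3
            and window[0][0] == window[2][0]        # same Id
            and window[2][1] - window[0][1] == 2):  # three consecutive dates
        triples.append(tuple(window))
print(triples)
```

Each row is read once and the comparison is constant time, so the whole pass is O(N); the O(N log N) cost of the join approach comes from the index lookups it performs per row.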
When you write "if it wasn't for the multiple values on the same date",
do you mean that the values of (Id, Date) are not unique among the rows
of TABLE1, so that more than one row can be found for the same values of
(Id, Date)? In that case, how do you compute the unique Value for that
(Id, Date) pair: the average, the last one, or the largest one?

JLH
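Whatever aggregate is chosen, the "one-off cleanup" could look like the following sketch: rebuild the table grouped by (Id, Date), keeping one Value per pair. MAX is used here purely as an illustration; it is exactly the choice JLH is asking about, and AVG or "last one" would be equally valid answers:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TABLE1 (Id INTEGER, Date INTEGER, Value REAL)")
con.executemany("INSERT INTO TABLE1 VALUES (?, ?, ?)",
                [(1, 12, 7.0), (1, 12, 7.5), (1, 13, 8.0)])

# One-off deduplication: collapse duplicate (Id, Date) rows into one,
# here keeping the largest Value (an assumed policy, not the thread's).
con.executescript("""
    CREATE TABLE clean AS
        SELECT Id, Date, MAX(Value) AS Value
        FROM TABLE1
        GROUP BY Id, Date;
    DROP TABLE TABLE1;
    ALTER TABLE clean RENAME TO TABLE1;
""")
rows = con.execute("SELECT Id, Date, Value FROM TABLE1 ORDER BY Date").fetchall()
print(rows)
```

After this runs once, (Id, Date) is unique and the rowid-based triple query no longer skips triples, which matches RBS's point that the cleanup is better done once than in every SELECT.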
_______________________________________________
sqlite-users mailing list
[email protected]
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users