It appears that Python's __getslice__(self, a, b) special method has some 
surprising behavior. From 
http://docs.python.org/reference/datamodel.html#object.__getslice__:

"If negative indexes are used in the slice, the length of the sequence is 
added to that index."

So when you evaluate rows[-20:], Python evaluates len(rows) (which is 13 in 
the above example), and calls:
  rows.__getslice__(-20+13, 2147483647)

rather than:
  rows.__getslice__(-20, None)
or rows.__getslice__(-20, 2147483647)

as one might initially suspect. The Rows object just passes the received 
indices on to self.records, so you're getting a new Rows back containing 
records[-7:] - clearly not what you want!!
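
Here's a minimal Python 2 sketch (the WithGetslice class and its length of 13 
are made up for illustration) showing the index adjustment in action:

    class WithGetslice(object):
        def __len__(self):
            return 13   # pretend there are 13 records, as in Gene's example

        def __getslice__(self, a, b):
            # Python 2 adds len(self) to the negative start before calling this
            print('__getslice__(%r, %r)' % (a, b))

    WithGetslice()[-20:]
    # prints: __getslice__(-7, 2147483647)
    # (the stop value is sys.maxint, so it varies by platform)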

SOLUTION:

It turns out that __getslice__ has been deprecated since Python 2.0, and that 
__getitem__ should be used instead. If __getslice__ is missing and a slice 
is requested, __getitem__ receives a "slice" object. A proposed fix:

- remove __getslice__ from Rows
- replace __getitem__ with:

    def __getitem__(self, i):
        if isinstance(i, slice):
            # With __getslice__ gone, a slice request arrives here as a slice
            # object, which is applied directly to the underlying records list.
            return Rows(self.db, self.records[i], self.colnames)
        else:
            row = self.records[i]
            keys = row.keys()
            if self.compact and len(keys) == 1 and keys[0] != '_extra':
                return row[keys[0]]
            return row
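
For comparison, a quick sketch (again with a made-up class) confirms that when 
__getslice__ is absent, __getitem__ receives the slice object untouched, 
negative start and all:

    class WithoutGetslice(object):
        def __len__(self):
            return 13

        def __getitem__(self, i):
            print('__getitem__(%r)' % (i,))

    WithoutGetslice()[-20:]
    # prints: __getitem__(slice(-20, None, None))
    # and a list indexed with slice(-20, None, None) behaves just like list[-20:]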

I've done a bit of testing with this. It fixes the problem noted by Gene, 
and does not appear to break anything obvious.

Cheers,
Kevin
