> Having read :  https://www.sqlite.org/isolation.html
> Specifically the line "And the application can UPDATE the current row
> or any prior row, though doing so might cause that row to reappear in a
> subsequent sqlite3_step()."

> Is it possible to create an endless loop?

Eventually you will have no more rows to update, the underlying structures 
become stable, and the select loop will run out of rows.  You may or may not 
have visited all rows, however, and you may visit some rows two or more times 
(once before the update and one or more times after).

If you change the outer query to "select auth_id, expiration from AuthTable 
order by +auth_id, +expiration" then you will PROBABLY never have a problem, 
since the results will LIKELY come from a sorter and not from the underlying 
table, and therefore mutation of the underlying tables and indexes will not 
interfere with the result of the outer select, even if those mutations affect 
AuthTable or the indexes on it.  Some SQL variants use a FOR UPDATE clause on 
a SELECT to tell the query planner that you intend to dally about with the 
underlying datastore without having proper isolation in place.  The SQLite way 
of doing this is to request a row sorter not dependent on indexes by using the 
+columnname syntax in an ORDER BY on the select.

Your outer query should probably be "select auth_id, expiration from AuthTable 
where expiration <= ? order by +auth_id, +expiration", binding current_time as 
the parameter, since there is no point in retrieving rows that you will not be 
updating, is there?
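
For illustration, here is a minimal single-connection sketch of that query 
shape using Python's standard sqlite3 module (the actual code later in this 
message uses a different wrapper-style API); it assumes expiration is stored 
as ISO-8601 text and, for brevity, only refreshes the expiration column:

import sqlite3
from datetime import datetime, timedelta

db = sqlite3.connect('database.db')
now = datetime.now().isoformat()
one_year_out = (datetime.now() + timedelta(days=365)).isoformat()

# The unary + on the ORDER BY columns asks for a sorter instead of a walk of
# the table or an index, so rows updated below do not feed back into the
# result of the outer select.
for auth_id, expiration in db.execute(
        'select auth_id, expiration from AuthTable '
        'where expiration <= ? order by +auth_id, +expiration;', (now,)):
    db.execute('update AuthTable set expiration = ? where auth_id = ?;',
               (one_year_out, auth_id))
db.commit()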

The correct solution is, of course, to use separate connections so that you 
have isolation between the select and the updates.  

You SHOULD be executing the outer select on one connection and the updates on 
another connection.  This will work in journal mode delete unless the number 
of changed pages is too large to fit in SQLite's cache, in which case you may 
get an error from the update statement when it needs to spill the cache.  You 
will then need to kibosh the whole thing and do the updates in smaller chunks 
by putting a LIMIT on the outer select and looping the whole thing until there 
are no more rows to process (or increase cache_size to be sufficient).
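
A hedged sketch of that chunked approach, again using the standard sqlite3 
module, with an arbitrary chunk size of 500 and only the expiration column 
being refreshed:

import sqlite3
from datetime import datetime, timedelta

# isolation_level=None leaves transaction control to the explicit statements below
db1 = sqlite3.connect('database.db', isolation_level=None)   # reader
db2 = sqlite3.connect('database.db', isolation_level=None)   # writer

now = datetime.now().isoformat()
one_year_out = (datetime.now() + timedelta(days=365)).isoformat()

while True:
    # fetchall() finishes the read before the writes start, so the read lock
    # is released before the small write transaction begins
    rows = db1.execute('select auth_id from AuthTable '
                       'where expiration <= ? limit 500;', (now,)).fetchall()
    if not rows:
        break                           # nothing left to update
    db2.execute('begin immediate;')     # one small write transaction per chunk
    for (auth_id,) in rows:
        db2.execute('update AuthTable set expiration = ? where auth_id = ?;',
                    (one_year_out, auth_id))
    db2.execute('commit;')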

You can avoid that particular problem by putting the database in 
journal_mode=WAL, in which case you can even process each update in its own 
transaction if you wish (get rid of the db2.beginimmediate() and db2.commit() 
below, though then you will have to handle the possibility of errors from the 
UPDATE).

db1 = Connection('database.db')                  # reader connection
db1.executescript('pragma journal_mode=WAL;')
db2 = Connection('database.db')                  # writer connection
current_time = datetime.now()
current_time_plus_one_year = current_time.add(years=1)
sess_id = ... some constant ...
db2.beginimmediate()                             # one write transaction for all the updates
for row in db1.execute('select auth_id, expiration from authtable where expiration <= ?;',
                       (current_time,)):
    db2.execute('update authtable set sesCookie = ?, expiration = ? where auth_id = ?;',
                (generate_ses_id(sess_id), current_time_plus_one_year, row.auth_id))
db2.commit()
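
For comparison, here is a hedged sketch of the WAL variant described above, 
with each UPDATE in its own transaction and errors handled per row, written 
against the standard sqlite3 module rather than the wrapper-style API used in 
the example (generate_ses_id() below is a stand-in for whatever actually 
produces the cookie):

import sqlite3
import secrets
from datetime import datetime, timedelta

def generate_ses_id(seed):
    # stand-in for the real session-id generator
    return secrets.token_hex(16)

# isolation_level=None: each UPDATE below commits on its own
db1 = sqlite3.connect('database.db', isolation_level=None)   # reader
db1.execute('pragma journal_mode=WAL;')
db2 = sqlite3.connect('database.db', isolation_level=None)   # writer

sess_id = '...'                       # some constant, as in the example above
now = datetime.now().isoformat()
one_year_out = (datetime.now() + timedelta(days=365)).isoformat()

for auth_id, expiration in db1.execute(
        'select auth_id, expiration from authtable where expiration <= ?;', (now,)):
    try:
        db2.execute('update authtable set sesCookie = ?, expiration = ? '
                    'where auth_id = ?;',
                    (generate_ses_id(sess_id), one_year_out, auth_id))
    except sqlite3.OperationalError as err:
        # e.g. SQLITE_BUSY from another writer; retry, skip, or abort as appropriate
        print('update of auth_id', auth_id, 'failed:', err)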

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.


