I'm working on a Python web application project (Datasette) where I'd like
to accept untrusted SQL queries from users and execute them against a
SQLite database, without risk of malicious SQL being used to perform a
denial of service attack against my application.

I'm opening the database file in SQLite's read only mode (?mode=ro in the
URI passed to sqlite3.connect()), so I'm not worried about users being
able to modify or corrupt the database file itself.

My concern is more queries that might use an enormous quantity of RAM or
starve the CPU.

One example: if I have a database with 10 million rows, I worry that the
following query may break my application:

select group_concat(large_column) from large_table;

I'm using the Python standard library sqlite3 module, but I'm willing to
ship a custom compiled build of SQLite (based on
https://github.com/coleifer/pysqlite3 ) if necessary.

My absolute ideal solution would be a way of saying to SQLite "if a query
takes longer than X seconds or consumes more than Y amount of memory,
cancel the query and return an error".

I've actually implemented a time limit for queries by hooking into the
progress callback and having it cancel the query if more than a second
has passed since it started - but I'm not sure how to handle the memory
side of the problem.

Or maybe I'm overthinking this?  Any tips on directions I could be
investigating here would be very much appreciated!

Thanks,

Simon
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
