Re: [sqlite] sqlite crash when realloc fails
On Tue, Sep 9, 2014 at 10:39 AM, Dominique Devienne wrote:
> On Tue, Sep 9, 2014 at 1:50 PM, Sebastien David wrote:
> > I have a database with 10 million rows in one table (file size is ~5GB).
> > When I execute a query that should return half of this table, sqlite fails
> > because at some point there is no memory (this is normal) [...]
>
> Why do you think it's normal?

Yes, he shouldn't be using sqlite3_get_table(), as that interface is marked
as deprecated. Sebastien should be using sqlite3_prepare() and
sqlite3_step().

On the other hand, we should fix sqlite3_get_table() so that it works
without integer overflow problems, which are unacceptable even in a
"deprecated" interface. The problem will be doing that in a way that is
testable on embedded systems.

> Since your query has no SORTING or GROUPING, there's no reason for SQLite
> to use an inordinate amount of memory beyond its page cache, unless your
> callback_cache_filter_check(record_id) function call does some
> memory-expensive operation.
>
> That your application code runs out of memory if it stores those 5M rows,
> sure, but I don't see why SQLite itself would use a lot of memory in that
> case. What am I missing? --DD

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] sqlite crash when realloc fails
On Tue, Sep 9, 2014 at 1:50 PM, Sebastien David wrote:
> I have a database with 10 million rows in one table (file size is ~5GB).
> When I execute a query that should return half of this table, sqlite fails
> because at some point there is no memory (this is normal) [...]

Why do you think it's normal?

Since your query has no SORTING or GROUPING, there's no reason for SQLite
to use an inordinate amount of memory beyond its page cache, unless your
callback_cache_filter_check(record_id) function call does some
memory-expensive operation.

That your application code runs out of memory if it stores those 5M rows,
sure, but I don't see why SQLite itself would use a lot of memory in that
case. What am I missing? --DD
[sqlite] sqlite crash when realloc fails
Hi,

I have a database with 10 million rows in one table (file size is ~5GB).
When I execute a query that should return half of this table, sqlite fails
because at some point there is no memory (this is normal), but instead of
returning an error, it crashes.

sqlite version: 3.7.5

    #0  0x0065b70e in sqlite3_get_table (db=0x7fffe0023d08,
        zSql=0x7fffc001a6e8 "SELECT * FROM sync_snapdb_table WHERE tobe_deleted = 0 AND scn_not_visited < 30 AND 1 = callback_cache_filter_check(record_id);",
        pazResult=0x7fffdabfa110, pnRow=0x7fffdabfa16c,
        pnColumn=0x7fffdabfa168, pzErrMsg=0x7fffdabfa118)
        at /mytest/imports/3rdparty/sqlite/sqlite3.c:92643

After some investigation, I found that sqlite3_get_table_cb() is called
with nAlloc equal to ~164,856,293; a realloc is requested, and nAlloc
becomes ~325,000,000. So sqlite3_realloc() is asked for
~2,630,000,000 bytes, which wraps to a negative value because the byte
count is an int. In this case, sqlite3_realloc() releases the pointer.
Afterward, at line 92643 in sqlite3_get_table(), this released pointer is
used.

To fix it, I quickly added a check in sqlite3_get_table_cb() before the
realloc:

    if( p->nData + need > p->nAlloc ){
      char **azNew;
      if( p->nAlloc > (int)(0x3fff/sizeof(char*) - need*sizeof(char*)) ){
        goto malloc_failed;
      }
      p->nAlloc = p->nAlloc*2 + need;
      azNew = sqlite3_realloc( p->azResult, sizeof(char*)*p->nAlloc );
      ...

Do you think this fix is right? Is it necessary to apply the same fix in
other places?

Thank you
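[For reference, one common shape for this kind of guard, shown as a standalone sketch rather than a patch to SQLite. grow_array() is a hypothetical helper, not SQLite code: it applies the same "nAlloc = nAlloc*2 + need" growth policy as sqlite3_get_table_cb(), but computes the new count in 64 bits and refuses any size that would wrap the int element count or the size_t byte count handed to realloc().]

```c
#include <limits.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical overflow-safe growth helper. On refusal it returns
 * NULL and leaves both the array and *pnAlloc untouched, so the
 * caller can take its malloc_failed path instead of ever touching
 * a freed pointer. */
static void *grow_array(void *p, int need, int *pnAlloc)
{
    /* Compute the doubled element count in 64 bits so it cannot wrap. */
    int64_t nNew = (int64_t)(*pnAlloc) * 2 + need;
    if (nNew > INT_MAX || (uint64_t)nNew > SIZE_MAX / sizeof(char *)) {
        return NULL;  /* would overflow int count or size_t byte count */
    }
    void *pNew = realloc(p, (size_t)nNew * sizeof(char *));
    if (pNew) *pnAlloc = (int)nNew;
    return pNew;
}
```

The key point is that both the multiplication by 2 and the later multiplication by sizeof(char*) are checked before any narrowing conversion, so realloc() can never be handed a wrapped size.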