> I’ve just discovered the thread in the original app decreases the
> available memory by around 4 GB. Are they really that expensive?

After others pointed out that threads aren't that expensive, I concluded there
must be a bug in my code. On checking, though, I couldn't find anything wrong,
yet the program still seemed to make 4 GB disappear as it ran. It gets even
stranger, and I've written the console app below to illustrate. It prints the
amount of available RAM every 20 million steps. All other foreground apps were
closed while it ran, although background tasks were still running. It was run
as a 64-bit build, although the same thing happens in 32-bit.

#include <vcl.h>
#include <windows.h>
#pragma hdrstop
#pragma argsused
#include <tchar.h>
#include <stdio.h>
#include <conio.h>
#include <iostream>
#include "sqlite3.h"

// Available physical memory on the machine, in MB
uint64_t FreeMBs()
{
        MEMORYSTATUSEX status;
        status.dwLength = sizeof(status);
        GlobalMemoryStatusEx(&status);
        return status.ullAvailPhys / (1024 * 1024);
}

int _tmain(int argc, _TCHAR* argv[])
{
        const int Million=1000000;
        const int Gap=20*Million;
        sqlite3 *DB;
        sqlite3_open("c:/SQLiteData/MyTemp.db",&DB);
        sqlite3_stmt *asc,*desc;
        sqlite3_prepare_v2(DB,"select RowID from big order by RowID",-1,&asc,NULL);
        sqlite3_prepare_v2(DB,"select RowID from big order by RowID desc",-1,&desc,NULL);

        std::cout << "Ascending" << std::endl;
        for (int i=0; sqlite3_step(asc)==SQLITE_ROW; i++)
                if (i%Gap==0) std::cout << FreeMBs() << std::endl;
        std::cout << FreeMBs() << std::endl;

        std::cout << std::endl << "Descending" << std::endl;
        for (int i=0; sqlite3_step(desc)==SQLITE_ROW; i++)
                if (i%Gap==0) std::cout << FreeMBs() << std::endl;
        std::cout << FreeMBs() << std::endl;

        sqlite3_finalize(asc);
        sqlite3_finalize(desc);
        sqlite3_close(DB);
        getch();
        return 0;
}

The big table can be emulated with this SQL, which builds a 10,000-row table 
wee from a recursive CTE and then cross joins it with itself to give big 100 
million records:

sqlite3_exec(DB, "create table wee as "
"with cte(a, b, c) as "
"(values (1, 'XXXXXXXXXXXXXXXXXXXX', 'XXXXXXXXXXXXXXXXXXXX') "
"union all "
"select a+1, b, c from cte where a<10000) "
"select * from cte;"
"create table big as select * from wee t1, wee t2;", 0, 0, 0);

OUTPUT

Ascending
13227
11355
9465
7582
5683
3801

Descending
3801
5868
7773
9683
11586
13473

On each run the exact numbers differ, but the overall pattern is the same: RAM 
seems to disappear while stepping through the ascending query and comes back 
while stepping through the descending one. I can't imagine it's down to SQLite, 
but I can't see anything wrong with my code either. Can anyone put me out of my 
misery?
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users