On Fri, 13 Mar 2009, Pierre Chatelier might have said:
> Hello,
>
> I am using SQLite to store and retrieve raw data blocks of roughly
> 300 KB each. Each block has an int identifier, so inserts and
> selects are easy. This is a very basic use case: I do not use
> complex queries, only "INSERT/SELECT WHERE index=...".
>
> Now I am thinking about performance: writing a sequence of a few
> hundred 300 KB blocks as fast as possible.
> Obviously, I use the bind_blob(), blob_read() and blob_write()
> functions. I have already tuned the journal, synchronous, page_size
> and cache PRAGMAs, so it is already fairly efficient (see the
> sketch below).
> I do not DELETE any content and the whole database is dropped after
> use: VACUUM is not important.
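>
> For reference, the insert path is roughly the following (a
> simplified sketch: the table name "blocks" and the helper name are
> illustrative, and error handling is trimmed):
>
>     /* Assumed schema:
>        CREATE TABLE blocks(id INTEGER PRIMARY KEY, data BLOB); */
>     #include <sqlite3.h>
>
>     static int store_block(sqlite3 *db, int id,
>                            const void *buf, int len)
>     {
>         sqlite3_stmt *stmt = NULL;
>         int rc = sqlite3_prepare_v2(db,
>             "INSERT INTO blocks(id, data) VALUES(?, ?)",
>             -1, &stmt, NULL);
>         if (rc != SQLITE_OK) return rc;
>         sqlite3_bind_int(stmt, 1, id);
>         /* SQLITE_STATIC: buffer stays valid while the statement runs */
>         sqlite3_bind_blob(stmt, 2, buf, len, SQLITE_STATIC);
>         rc = sqlite3_step(stmt);        /* SQLITE_DONE on success */
>         sqlite3_finalize(stmt);
>         return (rc == SQLITE_DONE) ? SQLITE_OK : rc;
>     }
>
> Wrapping the few hundred calls in one transaction avoids one journal
> sync per block:
>
>     sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
>     /* ... store_block(db, id, buf, len) for each block ... */
>     sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);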
>
> There are other ways to optimize, but I wonder whether they are
> worth it, or whether the gain would be only marginal for what I am
> doing:
> 1) Recompiling SQLite? Which compile options would help in this case?
> 2) Using another memory allocator? I am not sure that writing big
> data blocks triggers many calls to malloc().
> 3) Using compression? zlib could help, but since my data does not
> compress very well (say an average of 20% space saved per block), I
> am not sure that the compression time would be repaid by the reduced
> write time (see the sketch after this list).
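>
> For option 3, what I have in mind is roughly this untested sketch
> using zlib's compress2(); the helper name is made up:
>
>     #include <stdlib.h>
>     #include <zlib.h>
>
>     /* Deflate one block into a malloc'd buffer; returns NULL on
>        failure. Z_BEST_SPEED keeps the CPU cost low, which matters
>        since the data only compresses by ~20% anyway. */
>     static unsigned char *deflate_block(const unsigned char *src,
>                                         unsigned long srcLen,
>                                         unsigned long *dstLenOut)
>     {
>         uLongf dstLen = compressBound(srcLen); /* worst-case size */
>         unsigned char *dst = malloc(dstLen);
>         if (dst != NULL &&
>             compress2(dst, &dstLen, src, srcLen,
>                       Z_BEST_SPEED) != Z_OK) {
>             free(dst);
>             dst = NULL;
>         }
>         *dstLenOut = dstLen;
>         return dst;
>     }
>
> Compression only wins if the time saved writing ~20% fewer bytes
> exceeds the compress2() time per block, so I would time both paths.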
>
> Of course, I am only asking for advice based on your experience;
> there is certainly no exact answer, and it will always depend on my
> data.
>
> Regards,
>
> Pierre Chatelier
Why not use the int converted to hex (sprintf("%08x", id)) as a
file name and just use the file system?
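Something along these lines, for instance (untested sketch;
write_block_file is just an illustrative name, and error handling is
minimal):

    #include <stdio.h>

    /* Write one ~300 KB block to its own file, named by the hex id. */
    static int write_block_file(int id, const void *buf, size_t len)
    {
        char name[16];
        FILE *f;
        sprintf(name, "%08x", id);      /* e.g. "0000002a" */
        f = fopen(name, "wb");
        if (f == NULL) return -1;
        if (fwrite(buf, 1, len, f) != len) { fclose(f); return -1; }
        return fclose(f);
    }

The OS page cache then does the buffering for you, and reading a
block back is a plain fopen()/fread() on the same name.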
Mike
_______________________________________________
sqlite-users mailing list
[email protected]
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users