In a message dated Monday 21 July 2008 04:59:10, Kang Kai wrote:
> Thanks for sharing your experience, would you please introduce how you deal
> with the concurrency?

I'm using AOLserver + tclsqlite. Small, fast read/write transactions, in-memory 
and file-based database replicas, the "db timeout" function, and splitting the 
database (by region or by month, for example) into a group of databases are 
enough for most cases.
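For the split-by-group case, here is a minimal sketch (file, schema, and table 
names are hypothetical) that attaches one database file per month to a single 
connection:

===========
foreach month {2008_06 2008_07} {
    # one file per month, attached as m2008_06, m2008_07, ...
    db eval "ATTACH DATABASE 'data_$month.db' AS m$month"
}
db eval {select count(*) from m2008_07.orders}
===========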

SELECTs are extremely fast by default. An unoptimized SELECT on one table, or 
even on many tables, in SQLite is often faster than a carefully optimized query 
in PostgreSQL. It's fantastic!

I'm using "PRAGMA page_size=4096" on ext3 filesystem with 4k block size.

For example:

            # in-memory main database
            sqlite3 db :memory:
            # busy timeout from the AOLserver config
            db timeout [ns_config "ns/server/dataset" timeout]
            db eval {PRAGMA page_size=4096}
            db eval {PRAGMA default_cache_size=10000}

            # attach the application database
            set dbmainfile [dbmainfile]
            db eval {ATTACH DATABASE $dbmainfile AS merch}

            # attach the persistent user-storage database
            set dbuserfile [dbuserfile]
            db eval {ATTACH DATABASE $dbuserfile AS user}

There is the main in-memory database "main", the application database "merch", 
and the persistent user-storage database "user". One-time replicas are created 
in memory, and long-lived report tables etc. are created in the user database. 
The corresponding approach in PostgreSQL is to create per-user schemas and 
tables/views in those schemas (PostgreSQL creates a separate file for each 
table). Now I create a dataset and generate a web page from it:

===========
# generate filters
...
# create replica by filters
db eval {create table data_replica as select ... from data where ...}
# html drop-down lists 
db eval {select distinct a from data_replica} {...}
db eval {select distinct b from data_replica} {...}
...
# html table
db eval {select a,b,c from data_replica} {...}
===========

So the read lock on the application database is held only briefly.

Also, since read locks don't block the database for long, it stays available 
for writes almost all the time. A single INSERT is very fast by default; for a 
big dataset it's better to prepare the inserts/updates in a database replica 
and then sync them to the application database with one query:

# prepare dataset
db eval {create table replica as ...}
db eval {insert into replica values (...)}
db eval {update replica ...}
# sync with application database (insert/update)
db eval {insert into merch.data select ... from replica}

User session storage is in memory, with dump/restore procedures to sync it on 
server stop/start.
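A minimal sketch of such procedures, assuming a hypothetical "sessions" table 
with id/data columns persisted into the attached "user" database:

===========
# restore sessions into memory on server start
proc restore_sessions {} {
    db eval {create table if not exists user.sessions (id text primary key, data text)}
    db eval {create table main.sessions (id text primary key, data text)}
    db eval {insert into main.sessions select * from user.sessions}
}

# dump in-memory sessions to disk on server stop
proc dump_sessions {} {
    db eval {delete from user.sessions}
    db eval {insert into user.sessions select * from main.sessions}
}
===========

These can be called from the server init script and from a shutdown hook 
(AOLserver's ns_atshutdown).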

Compress/uncompress functions are very useful for fast disk reads/writes. See 
http://www.mail-archive.com/sqlite-users%40sqlite.org/msg17018.html
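For illustration, such functions can also be registered at the Tcl level; a 
minimal sketch, assuming Tcl 8.6's zlib command and a hypothetical "docs" 
table:

===========
# register SQL functions backed by Tcl's zlib
db function compress   {zlib compress}
db function uncompress {zlib decompress}

# store compressed, read back transparently
db eval {insert into user.docs (body) values (compress($html))}
db eval {select uncompress(body) as html from user.docs} {...}
===========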