This is a continuation of the "SQLite vs. Oracle (parallelized)"
thread with a request to learn how others are using SQLite with
very large data sets. The context of this post is processing
large data sets from a single-process perspective, i.e. the
question is being asked from a batch data processing standpoint
rather than a multi-user one.
1. From browsing the archives, it seems that one technique is to
split or partition large data sets into separate SQLite databases
that can be loaded and indexed independently of one another
(possibly via separate processes on the same box or on separate
boxes). It appears that some people have written their own
front-ends to manage how records are inserted into and read back
from a collection of SQLite databases (a rough sketch of such a
front-end follows below).
2. Another technique appears to be running SQLite on boxes with
lots of memory and then configuring SQLite to make optimal use of
the available memory (see the PRAGMA sketch below).
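
For concreteness, here is a rough sketch (in Python, using the
standard sqlite3 module) of what I imagine such a partitioning
front-end might look like. The partition count, file names, table
schema, and hashing on an integer key are just placeholders for
illustration, not something taken from the earlier thread:

    import sqlite3

    PARTITIONS = 4  # assumed number of partition databases

    def open_partitions():
        """Open (creating if needed) one connection per partition file."""
        conns = []
        for i in range(PARTITIONS):
            conn = sqlite3.connect("data_part%d.db" % i)
            # Each partition gets its own table and index, so each
            # file can be loaded and indexed independently.
            conn.execute(
                "CREATE TABLE IF NOT EXISTS events (id INTEGER, payload TEXT)")
            conn.execute(
                "CREATE INDEX IF NOT EXISTS idx_events_id ON events(id)")
            conns.append(conn)
        return conns

    def insert(conns, record_id, payload):
        """Hash the record id to pick a partition, then insert there."""
        conn = conns[record_id % PARTITIONS]
        conn.execute("INSERT INTO events (id, payload) VALUES (?, ?)",
                     (record_id, payload))

    def lookup(conns, record_id):
        """Read back from whichever partition the id hashes to."""
        conn = conns[record_id % PARTITIONS]
        return conn.execute("SELECT payload FROM events WHERE id = ?",
                            (record_id,)).fetchone()

    conns = open_partitions()
    insert(conns, 12345, "example row")
    print(lookup(conns, 12345))
    for c in conns:
        c.commit()
        c.close()

(SQLite can also ATTACH additional database files to a single
connection when a query needs to span partitions, although only a
limited number of files can be attached at once.)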
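
And a minimal sketch of the second technique, again in Python. The
PRAGMAs themselves are real SQLite settings, but the specific
values below are placeholders to be tuned to the machine rather
than recommendations:

    import sqlite3

    conn = sqlite3.connect("bigdata.db")

    # Enlarge the page cache; a negative value is a size in KiB
    # (roughly 2 GB here), a positive value is a number of pages.
    conn.execute("PRAGMA cache_size = -2000000")

    # Keep temporary tables and indices in memory instead of on disk.
    conn.execute("PRAGMA temp_store = MEMORY")

    # For a one-shot bulk load, relaxing durability speeds things up,
    # at the cost of losing the database if the machine crashes
    # mid-load.
    conn.execute("PRAGMA synchronous = OFF")
    conn.execute("PRAGMA journal_mode = OFF")

    # ... bulk inserts and index builds here ...

    conn.commit()
    conn.close()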
Are there other techniques that one should consider, and are
there techniques one should avoid?
Thank you,
Malcolm
