That was very interesting and informative. I'm a "normal" IT type person
(as "normal" as most IT people, I guess <grin/>). I know very little about
the application that you talked about, but it did help me to understand why
SQLite might be used instead of a "more normal" SQL database server like
PostgreSQL/Oracle/DB2. It gave me things to think about, which I always
enjoy.


On Wed, Oct 16, 2013 at 10:02 AM, <a.furi...@lqt.it> wrote:

> On Wed, 16 Oct 2013 07:32:08 -0500, John McKown wrote:
>
>> I can't say for sure, but would somebody want to store 20
>> million rows in a SQLite data base? Why? I really am curious
>>
>>
> Hi John,
>
> you might find my own first-hand experiences in the
> Geographic/GeoSpatial field interesting in some way.
>
> in this very specific environment (certainly not the most
> common one, I agree) it's not at all exceptional to deploy
> SQLite/SpatiaLite DB-files as big as 20/40 GB (and even more),
> containing several tens or hundreds of millions of rows spread
> across many hundreds of different tables (aka layers in GIS
> jargon) that are closely related to one another, not only in
> the "classic" relational way based on Primary and Foreign Keys,
> but also in the "awkward spatial way" based on geometric and
> topological relationships.
>
> there are several good technical reasons to use
> SQLite/SpatiaLite for processing, validating and
> distributing huge geographic datasets:
>
> - SQLite/SpatiaLite offers the same overall level of standard
>   Spatial SQL processing capabilities as the much more
>   sophisticated (and complex) PostgreSQL/PostGIS; but it's far
>   simpler to install and configure, and it's usually faster
>   under many common working conditions (mainly thanks to its
>   really outstanding, first-class R*Tree implementation; a small
>   sketch of an R*Tree search follows this list).
>
> - not to mention SQLite's "forbidden weapon": if your HW has an
>   adequate amount of RAM you can load a whole DB into memory at
>   once, and under such a configuration you can easily reach
>   dramatically impressive, supersonic speeds (a sketch of this
>   trick follows this list too).
>
> - you can directly copy / send a whole DB (even a really huge
>   one) from one location to another in a single shot and in the
>   most painless way, because after all it is simply an ordinary
>   file.
>
> - if something goes completely wrong during any complex Spatial
>   data processing operation (it happens ... nobody's perfect),
>   you simply have to remove a single file and then patiently
>   restart from scratch after correcting your buggy Spatial SQL
>   scripts.
>   Under the same conditions, fully recovering a client/server
>   Spatial DBMS left in an inconsistent state can be a far less
>   pleasant and easy affair.
>
> - you can freely ATTACH and DETACH many DB-files depending on
>   your specific, highly variable requirements, thus achieving an
>   outstanding and unconstrained flexibility (a sketch after this
>   list shows the idea).
>   Quite often this is a really useful feature, e.g. when you
>   have to collect, integrate and merge many different datasets
>   with very loose standardization, because they were produced
>   over a long period of time by many different parties for
>   different initial purposes.
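>
> to give a rough idea of the R*Tree mentioned in the first point:
> SpatiaLite's spatial index sits on top of SQLite's own R*Tree
> module. the little Python sketch below (the table name and the
> coordinates are just invented for the example, and it assumes an
> SQLite build with the R*Tree module enabled, which is the common
> case) shows the kind of bounding-box search it makes so fast:
>
>     import sqlite3
>
>     con = sqlite3.connect(":memory:")   # any DB-file would do as well
>
>     # plain SQLite R*Tree virtual table:
>     # id = primary key of the feature, followed by its bounding box
>     # as (minx, maxx) and (miny, maxy) pairs
>     con.executescript("""
>     CREATE VIRTUAL TABLE idx_roads USING rtree(id, minx, maxx, miny, maxy);
>     INSERT INTO idx_roads VALUES (1, 10.0, 11.0, 45.0, 46.0);
>     INSERT INTO idx_roads VALUES (2, 30.0, 31.0, 50.0, 51.0);
>     """)
>
>     # fetch every feature whose bounding box overlaps the search window
>     rows = con.execute("""
>         SELECT id FROM idx_roads
>         WHERE maxx >= 10.5 AND minx <= 10.9
>           AND maxy >= 45.2 AND miny <= 45.8
>     """).fetchall()
>     print(rows)   # -> [(1,)]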
>
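> just to make the "forbidden weapon" from the second point
> concrete, here is a minimal Python sketch (the DB-file name is
> purely hypothetical; it uses the standard SQLite backup API,
> exposed by Python 3.7+ as Connection.backup):
>
>     import sqlite3
>
>     # copy the whole on-disk DB into a fresh in-memory DB
>     disk = sqlite3.connect("huge_gis.sqlite")
>     mem = sqlite3.connect(":memory:")
>     disk.backup(mem)      # page-by-page copy via the backup API
>     disk.close()
>
>     # from here on every query runs entirely from RAM
>     for (n,) in mem.execute("SELECT count(*) FROM sqlite_master"):
>         print(n, "objects now living in memory")
>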
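> and the ATTACH / DETACH trick from the last point looks more or
> less like this (again just a sketch: the DB-file and table names
> are purely hypothetical):
>
>     import sqlite3
>
>     con = sqlite3.connect("main_archive.sqlite")
>
>     # bolt a second DB-file onto the same connection ...
>     con.execute("ATTACH DATABASE 'survey_2013.sqlite' AS survey")
>
>     # ... and query / copy across the two as if they were one DB
>     con.execute("""
>         INSERT INTO merged_sites (site_id, name)
>         SELECT site_id, name FROM survey.sites
>     """)
>     con.commit()
>
>     # detach once the merge is done
>     con.execute("DETACH DATABASE survey")
>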
> all this considered, my answer is: "oh yes, it makes perfect
> sense to store 20 million rows in a SQLite DB; and it works
> absolutely well" :-D
>
> by Sandro
>



-- 
This is clearly another case of too many mad scientists, and not enough
hunchbacks.

Maranatha! <><
John McKown
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
