I always have to explain to people that there's no magic sauce that "real
databases" add versus SQLite.

SQLite uses the same techniques all databases use, and thanks to the
absence of a network layer, you avoid a lot of latency, so it can actually
be faster.

(I do believe SQLite optimizes for a smaller disk footprint, so that may
be a tradeoff where other databases might instead gain speed.)
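
For what it's worth, the single-table pattern Simon suggests further down
the thread (one big table with a "dataset" column instead of thousands of
per-dataset tables) looks roughly like this in Python's built-in sqlite3
module. The table and column names here are my own invention, not from the
original mails:

```python
import sqlite3

# One table tagged by a "dataset" column, instead of one table per dataset.
# (Illustrative schema; names are hypothetical.)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (dataset TEXT, ts INTEGER, value REAL)")

# An index on the dataset column keeps per-dataset queries fast even when
# the table holds millions of rows.
con.execute("CREATE INDEX idx_readings_dataset ON readings (dataset)")

# What used to be INSERT INTO <tablename> becomes an insert with a tag.
con.executemany(
    "INSERT INTO readings (dataset, ts, value) VALUES (?, ?, ?)",
    [("sensor_a", 1, 0.5), ("sensor_a", 2, 0.7), ("sensor_b", 1, 1.2)],
)

# What used to be SELECT ... FROM <tablename> becomes a filtered query.
rows = con.execute(
    "SELECT ts, value FROM readings WHERE dataset = ? ORDER BY ts",
    ("sensor_a",),
).fetchall()
print(rows)  # [(1, 0.5), (2, 0.7)]
```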

Wout.

On Tue, Jan 29, 2019, 2:07 PM Rob Willett <rob.sql...@robertwillett.com>
wrote:

> Millions of rows is not even large, never mind huge or very huge :)
>
> We have tables with hundreds of millions of rows, and we got to billions
> of rows in a single table before we changed the logic. From memory we had
> 67GB for a single database, and I reckon 60GB of that was one table. Not
> many issues at all with inserting or searching. One of our data mining
> queries searched the entire table and still only took 90 secs, though
> every part of the query used indexes.
>
> We only changed what we store because managing a circa 60GB database was
> too much, and we worked out we only needed 1% of it. We were using a
> virtual private server and had issues with disk IO when we copied the
> database around using Unix cp. This wasn't a SQLite problem at all;
> I have no doubt that SQLite was more than capable of handling
> even more data.
>
> Rob
>
> On 29 Jan 2019, at 11:00, mzz...@libero.it wrote:
>
> > Dear all,
> >
> > what happens if I put all data in a single table and this table becomes
> > very large (for example millions of rows)?
> >
> > Will I have the same performance problems?
> >
> > Thanks.
> >
> >
> > Regards.
> >
> >>
> >>     On 28 January 2019 at 17:28, Simon Slavin <slav...@bigfraud.org>
> >> wrote:
> >>
> >>     On 28 Jan 2019, at 4:17pm, mzz...@libero.it wrote:
> >>
> >>>         when the number of tables becomes huge (about 15000/20000
> >>> tables) the first database read query after opening the database is
> >>> very slow (about 4 sec.), while subsequent reads are faster.
> >>>
> >>>         How can I speed up?
> >>>
> >>     Put all the data in the same table.
> >>
> >>     At the moment, you pick a new table name each time you write
> >> another set of data to the database. Instead of that, create just one
> >> big table, and add an extra column, called "dataset", to the columns
> >> which already exist. In that column you put the string you previously
> >> used as the table name.
> >>
> >>     SQL is not designed to have a variable number of tables in a
> >> database. All the optimization is done assuming that you will have a
> >> small number of tables, and will rarely create or drop tables.
> >>
> >>     Simon.
> >>
> >>     _______________________________________________
> >>     sqlite-users mailing list
> >>     sqlite-users@mailinglists.sqlite.org
> >> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >>
>
