Re: [sqlite] schema_version and Vacuum or Backup API

2016-09-07 Thread David Empson


> On 8/09/2016, at 3:36 PM, David Empson  wrote:
> 
> 
>> On 8/09/2016, at 3:23 PM, Rowan Worth  wrote:
>> 
>> We recently made sqlite's logging more visible in our application and we're
>> seeing a lot more "schema has changed" warnings than I'd expect, since our
>> app pretty much doesn't touch the schema except during DB creation. I know
>> the warnings are harmless, just curious where they are coming from :)
> 
> I found the same thing, with an obvious cause: every ATTACH DATABASE or 
> DETACH DATABASE triggers a schema change which requires recompiling every 
> prepared statement. SQLite does this automatically, 

[Oops, bumped the send button mid-sentence.]

SQLite does this automatically, but it outputs a message to the error log 
(which is not returned via the API as an error).

I ended up suppressing that specific error in my log handler as our application 
is regularly doing a lot of attach/detach operations and has a lot of prepared 
statements.

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] schema_version and Vacuum or Backup API

2016-09-07 Thread David Empson

> On 8/09/2016, at 3:23 PM, Rowan Worth  wrote:
> 
> We recently made sqlite's logging more visible in our application and we're
> seeing a lot more "schema has changed" warnings than I'd expect, since our
> app pretty much doesn't touch the schema except during DB creation. I know
> the warnings are harmless, just curious where they are coming from :)

I found the same thing, with an obvious cause: every ATTACH DATABASE or DETACH 
DATABASE triggers a schema change which requires recompiling every prepared 
statement. SQLite does this automatically, 
-- 
David Empson
demp...@emptech.co.nz
Snail mail: P.O. Box 27-103, Wellington 6141, New Zealand



Re: [sqlite] schema_version and Vacuum or Backup API

2016-09-07 Thread Rowan Worth
On 6 September 2016 at 20:04, Clemens Ladisch  wrote:

> Stephen Chrzanowski wrote:
> > [...] I'm relying on the results from "pragma
> > schema_version".  I've noticed that this bumps up every time I run a
> vacuum
> > or use the backup API against the database.  Is this supposed to happen?
>
> The documentation says that the schema version
> | is incremented by SQLite whenever the database schema is modified (by
> | creating or dropping a table or index). The schema version is used by
> | SQLite each time a query is executed to ensure that the internal cache
> | of the schema used when compiling the SQL query matches the schema of
> | the database against which the compiled query is actually executed.
>
> Inside a compiled statement, tables and indexes are identified by their
> root page number.  When running VACUUM, this is likely to change.
>
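Clemens' root-page explanation is easy to observe. A minimal sketch with Python's stdlib sqlite3 (a hypothetical throwaway database file, since VACUUM needs a real file and no open transaction; table and index names are made up):

```python
import os
import sqlite3
import tempfile

# VACUUM rebuilds the database file, so table/index root pages can move and
# the schema cookie (PRAGMA schema_version) is bumped even though no DDL ran.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path, isolation_level=None)  # autocommit: VACUUM can't run in a txn
con.execute("CREATE TABLE t (a INTEGER)")
con.execute("CREATE INDEX t_a ON t(a)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])

before = con.execute("PRAGMA schema_version").fetchone()[0]
con.execute("VACUUM")
after = con.execute("PRAGMA schema_version").fetchone()[0]
print(before, after)  # the version increases although the schema text is unchanged
```

Any statements prepared before the VACUUM will notice the cookie change on their next step and be transparently reprepared.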

Interesting! Are there other non-schema-modifying operations that bump the
version? What about ANALYZE? Obviously the first time it is run on a DB it
*does* modify the schema, but what about subsequent runs?

We recently made sqlite's logging more visible in our application and we're
seeing a lot more "schema has changed" warnings than I'd expect, since our
app pretty much doesn't touch the schema except during DB creation. I know
the warnings are harmless, just curious where they are coming from :)

-Rowan


Re: [sqlite] Clarification on "No Isolation On Same Db Connection"

2016-09-07 Thread Igor Tandetnik

On 9/7/2016 6:11 PM, Stephan Mueller wrote:

> I understand that a way to ensure "SELECT is unperturbed" semantics is to use
> separate connections for SELECT and updates.

If you go down that route, make sure you are using WAL journaling mode;
it won't work otherwise.

> This is undesirable since I'd have to (IIUC) do all my updates (possibly
> millions) in a single transaction.

I don't see how this follows.

> I'd prefer to commit after each update

You can't commit on a single connection either, while there's an
unfinalized SELECT statement traversal going on. So you aren't gaining
anything by trying to interleave SELECT and updates on the same connection.

> That is, if I ever receive a record that ought to have arrived earlier because
> of ORDER BY, it must be a since-SELECT-began update, and should be ignored.

When data is modified under a SELECT's feet, phantom rows are just one
problem; it's also possible for the statement to skip rows it would have
otherwise returned, and to return rows containing stale data. Basically,
undefined behavior is undefined.
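The separate-connections-plus-WAL approach is easy to see in action. A minimal sketch with Python's stdlib sqlite3 (a hypothetical throwaway database file; table and column names are made up): with WAL, a SELECT that is mid-traversal on one connection keeps reading its own snapshot while a second connection commits one write per row.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal.db")
w = sqlite3.connect(path, isolation_level=None)  # writer, autocommit per statement
w.execute("PRAGMA journal_mode=WAL")
w.execute("CREATE TABLE t (k INTEGER PRIMARY KEY, v INTEGER)")
w.executemany("INSERT INTO t VALUES (?, ?)", [(i, 0) for i in range(100)])

r = sqlite3.connect(path, isolation_level=None)  # reader
seen = []
# The read transaction lasts for the duration of the statement traversal,
# so the reader keeps the snapshot taken at its first step.
for k, v in r.execute("SELECT k, v FROM t ORDER BY k"):
    seen.append(v)
    w.execute("UPDATE t SET v = 1 WHERE k = ?", (k,))  # each UPDATE autocommits
print(seen.count(0))  # 100: the in-progress SELECT never saw the writes
```

The writer committed 100 tiny transactions while the SELECT ran, and the reader was unperturbed; no single giant update transaction is required.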

--
Igor Tandetnik



Re: [sqlite] Clarification on "No Isolation On Same Db Connection"

2016-09-07 Thread Keith Medcalf

There is only cursor-stability-level isolation between queries on the same 
connection.  The query is executed as a descending series of loops.  There is a 
"current record" pointer, based on the location of the "current record" in the 
query, for each table.  If you modify an entry, that modification is made 
immediately.  Then the next "step" in the descending loops is made.  Your 
modification may, however, have changed the btrees and suchlike "out from 
under" your query.  When your query continues (does the next step) it uses 
the "location pointers" of where it was and follows them "as it is now".

This makes it appear that the behaviour is undefined, but that is only an 
appearance.  The behaviour is entirely rational, predictable, and deterministic.

The "proper" way to do this is to engage WAL and use a separate connection for 
each (one for updating, one for reading).  When WAL is in effect, read 
operations WITHIN A TRANSACTION have Repeatable Read isolation.  That means 
that all READ operations within the transaction are repeatable and will return 
the exact same results until the transaction is ended.

That is, if you do:

Connection 1:

create table x ( x );
insert into x values (1);

BEGIN;
select x from x;
  Connection 2
  BEGIN;
  UPDATE x SET x=2;
select x from x;
  COMMIT;
select x from x;  
  SELECT x from x;
select x from x;

COMMIT;

select x from x;

All the selects in connection 1 within the transaction will return 1, while the 
selects in connection 2 will return 2.  The last select in connection 1 (taking 
place in a new automagic transaction) will return 2.

In other words, the data READ from a connection between BEGIN ... COMMIT is 
REPEATABLE -- hence the name Repeatable Read.
The "normal" isolation level is Cursor Stability (which means that the pointers 
to the "current rows" remain "current" in the face of updates, but the 
results of the query may fluctuate as a result of concomitant updates).

While the transaction is open on connection 1 (and in a transaction), you can 
do as many transactions as you like on connection 2.  Those effects will not be 
seen until the changes are committed on connection 2 *AND* the Repeatable Read 
isolation level is ended by committing connection 1 (thus allowing you to see 
changes made to the database).
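The timeline above translates almost line-for-line into Python's stdlib sqlite3 (a sketch, assuming a throwaway file database, WAL mode, and isolation_level=None so that BEGIN/COMMIT are issued explicitly rather than by the driver):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "rr.db")
c1 = sqlite3.connect(path, isolation_level=None)
c1.execute("PRAGMA journal_mode=WAL")
c1.execute("CREATE TABLE x (x)")
c1.execute("INSERT INTO x VALUES (1)")
c2 = sqlite3.connect(path, isolation_level=None)

c1.execute("BEGIN")
first = c1.execute("SELECT x FROM x").fetchone()[0]   # snapshot taken at first read
c2.execute("UPDATE x SET x = 2")                      # autocommits on connection 2
inside = c1.execute("SELECT x FROM x").fetchone()[0]  # repeatable read: still the old value
other = c2.execute("SELECT x FROM x").fetchone()[0]   # connection 2 sees its own write
c1.execute("COMMIT")
after = c1.execute("SELECT x FROM x").fetchone()[0]   # new (automatic) transaction
print(first, inside, other, after)
```

Every read on connection 1 inside the transaction returns 1, connection 2 returns 2, and only after connection 1 commits does it see the new value.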

> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Stephan Mueller
> Sent: Wednesday, 7 September, 2016 16:12
> To: sqlite-users@mailinglists.sqlite.org
> Subject: [sqlite] Clarification on "No Isolation On Same Db Connection"
> 
> Hello all;
> 
> I have a scenario that is well covered by the text at
> https://www.sqlite.org/isolation.html in the section "No Isolation Between
> Operations On The Same Database Connection" (relevant paragraphs at end
> for convenience) but am hoping for a bit more clarity.  Here's an overview
> of my scenario.  I'm trying to find the right level of detail; if I've
> failed on the side of too little, I'm happy to provide more.
> 
> I have a non-trivial SELECT simulating an outer join
> (http://stackoverflow.com/questions/1923259/full-outer-join-with-sqlite)
> with the overall structure
> 
> SELECT columns FROM t1 LEFT JOIN t2 ON t1.column = t2.column
> WHERE conditions
> UNION ALL
> SELECT columns FROM t2 LEFT JOIN t1 ON t2.column = t1.column
> WHERE t1.column IS NULL
> AND more_conditions
> ORDER BY column
> 
> The application fetches a row at a time, doing an UPDATE or INSERT or
> DELETE into t1 (henceforth referred to just  as 'updates') for each row
> fetched.  I _want_ the semantics to be that the SELECT is unperturbed by
> my updates.  (Details: there are actually three tables.  The joined-on
> column is UNIQUE (indeed, the PRIMARY KEY) in each table.  The query
> conditions are such that column is UNIQUE in the result set when not doing
> any updates while iterating.)
> 
> I understand that a way to ensure "SELECT is unperturbed" semantics is to
> use separate connections for SELECT and updates.   This is undesirable
> since I'd have to (IIUC) do all my updates (possibly millions) in a single
> transaction.  I'd prefer to commit after each update (to avoid redoing any
> work in case of unnatural program abort, for example).  The compromise for
> performance is to commit every N (4096, currently) updates, but
> potentially millions seems right out.
> 
> Another way is to fetch all the results into a temporary table tt, then
> iterate over tt while modifying t1.   This is easy to code, and SQLite can
> fill tt with a single INSERT INTO tt SELECT .  With potentially
millions of rows, however, I'd rather have an approach that doesn't spoil the
> interactive user experience by adding seconds spent on creating the
> temporary table, if 

Re: [sqlite] Problem with rename table

2016-09-07 Thread Bob McFarlane
Please reply if you sent this. Thanks.





-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On
Behalf Of Richard Hipp
Sent: Wednesday, September 7, 2016 7:49 PM
To: SQLite mailing list 
Subject: Re: [sqlite] Problem with rename table

On 9/6/16, Radovan Antloga  wrote:
> Hi Richard !
>
> I can't find a solution for how to fix my database after I have renamed
> table DOKUMENTI to DOKUMENTI2.
> Table DOKUMENTI had trigger dokumenti_trigger1, and after renaming the
> table I can't execute any SQL. I forgot to drop the trigger first. So now I
> always get the error:
> malformed database schema (dokumenti_trigger1) - no such table
> main.dokumenti.

Rename the table back to its old name?

Worst case:  You can drop all the triggers like this:

   PRAGMA writable_schema=ON;
   DELETE FROM sqlite_master WHERE type='trigger';

Then close and reopen your database, and you have no more triggers.
The same will work for views.  But if you try the above with tables or
indexes, you'll end up with a database that fails "PRAGMA integrity_check" -
though the corruption can be fixed with a VACUUM.
--
D. Richard Hipp
d...@sqlite.org





Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Bob McFarlane
Please reply if you sent this. Thanks.





-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On
Behalf Of Chris Locke
Sent: Wednesday, September 7, 2016 12:12 PM
To: SQLite mailing list 
Subject: Re: [sqlite] Query time execution difference between my application
and SQLiteBrowser

>First of all, I'll check all the pragmas and stuff, plus the version of 
>SQliteBrowser (former DB Browser for SQlite indeed) I'm using, etc.

Just to confirm (as it seems to have been overlooked): SQLite Browser (actually
now called DB Browser for SQLite, rather than formerly...) is a 3rd-party
tool.
I can post your query to their issues board though ... it's still under heavy
development, and the version number irregularities are curious...
https://github.com/sqlitebrowser/sqlitebrowser/issues


Thanks,
Chris


On Wed, Sep 7, 2016 at 4:04 PM, Simon Slavin  wrote:

>
> On 7 Sep 2016, at 3:48pm, Laura BERGOENS 
> wrote:
>
> > This query takes 100 seconds approx.
>
> Once your tables have some convincing data in (does not need to be
> final data, just something useful to see how the values are
> distributed), run "ANALYZE", just once.  It might speed up later
> SELECTs.  It might not.  But a situation where you have many 'AND'
> clauses looking at different columns is exactly what ANALYZE is most
helpful for.
>
> And yes, a 50 kilorow table is not big by SQLite standards.  I have
> tables with a thousand times that that yield answers to SELECT in 5ms.
>
> Simon.
>





Re: [sqlite] Clarification on "No Isolation On Same Db Connection"

2016-09-07 Thread Bob McFarlane
Please reply if you sent this. Thanks.





-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On
Behalf Of Stephan Mueller
Sent: Wednesday, September 7, 2016 6:12 PM
To: sqlite-users@mailinglists.sqlite.org
Subject: [sqlite] Clarification on "No Isolation On Same Db Connection"

Hello all;

I have a scenario that is well covered by the text at
https://www.sqlite.org/isolation.html in the section "No Isolation Between
Operations On The Same Database Connection" (relevant paragraphs at end for
convenience) but am hoping for a bit more clarity.  Here's an overview of my
scenario.  I'm trying to find the right level of detail; if I've failed on
the side of too little, I'm happy to provide more.

I have a non-trivial SELECT simulating an outer join
(http://stackoverflow.com/questions/1923259/full-outer-join-with-sqlite)
with the overall structure

SELECT columns FROM t1 LEFT JOIN t2 ON t1.column = t2.column
WHERE conditions
UNION ALL
SELECT columns FROM t2 LEFT JOIN t1 ON t2.column = t1.column
WHERE t1.column IS NULL
AND more_conditions
ORDER BY column

The application fetches a row at a time, doing an UPDATE or INSERT or DELETE
into t1 (henceforth referred to just  as 'updates') for each row fetched.  I
_want_ the semantics to be that the SELECT is unperturbed by my updates.
(Details: there are actually three tables.  The joined-on column is UNIQUE
(indeed, the PRIMARY KEY) in each table.  The query conditions are such that
column is UNIQUE in the result set when not doing any updates while
iterating.)

I understand that a way to ensure "SELECT is unperturbed" semantics is to
use separate connections for SELECT and updates.   This is undesirable since
I'd have to (IIUC) do all my updates (possibly millions) in a single
transaction.  I'd prefer to commit after each update (to avoid redoing any
work in case of unnatural program abort, for example).  The compromise for
performance is to commit every N (4096, currently) updates, but potentially
millions seems right out.

Another way is to fetch all the results into a temporary table tt, then
iterate over tt while modifying t1.   This is easy to code, and SQLite can
fill tt with a single INSERT INTO tt SELECT .  With potentially
millions of rows, however, I'd rather have an approach that doesn't spoil the
interactive user experience by adding seconds spent on creating the
temporary table, if that can be avoided.

I'm thus led to the following notion (apologies for my pseudo-code, which is
neither SQL, nor C, nor the Perl I'm actually coding in):

prepare SELECT from above
execute()
$last = "";
while (fetchnext) {
    if $fetched.column <= $last         (1)
        skip to next iteration of loop  (2)
    $last = $fetched.column             (3)
    update-or-insert-or-delete
}

That is, if I ever receive a record that ought to have arrived earlier
because of ORDER BY, it must be a since-SELECT-began update, and should be
ignored.

I'm trying to not think in terms of potential implementation, and stick with
"what must ORDER BY guarantee?"   I'd imagine if it ever returns a recent
update, it would do so at the proper point in sort order.  If it's decided
that an "alphabetically-prior-to-where-we-have-been-returning-records recent
update" record must be returned, it would be returned ASAP -- as if the
ORDER BY guarantee was that "all remaining records are sorted, and the
lowest of those returned next".

Is my thinking sound?  -- are the numbered lines in my pseudo-code necessary
and/or sufficient to achieve the goal of ignoring updates in this case, in a
righteously precise use of SQL?

Thanks in advance for your thoughts!

Here are selections from the relevant SQLite documentation paragraphs to
provide context, with comments/queries interspersed.

  No Isolation Between Operations On The Same Database Connection

  ... What if a SELECT statement is started and the
sqlite3_step() interface steps
through roughly half of its output, then some
UPDATE statements are run by the
application that modify the table that the SELECT statement is reading, then
more calls to sqlite3_step() are
made to finish out the SELECT statement? Will the later steps of the SELECT
statement see the changes made by the UPDATE or not? The answer is that this
behavior is undefined.  In particular, whether or not the SELECT statement
sees the concurrent changes depends on which release of SQLite is running,
the schema of the database file, whether or not
ANALYZE has been run, and the
details of the query. In some cases, it might depend on the content of the
database file, too. There is no good way to know 

Re: [sqlite] "Responsive" website revamp at www.sqlite.org

2016-09-07 Thread Bob McFarlane
Please reply if you sent this. Thanks.





-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On
Behalf Of Chris Locke
Sent: Tuesday, September 6, 2016 8:54 AM
To: SQLite mailing list 
Subject: Re: [sqlite] "Responsive" website revamp at www.sqlite.org

Makes perfect sense.  Thanks.


Chris

On Tue, Sep 6, 2016 at 12:26 PM, Richard Hipp  wrote:

> On 9/6/16, Chris Locke  wrote:
> > When reducing the size of the browser (on Chrome desktop at least)
> > the
> '***
> > DRAFT ***' tag disappears.  Rather than disappears, maybe this
> > should be reduced in size and always be visible?
> > Edit: Noticed its right at the bottom of the page, regardless of
> > size,
> but
> > not at the top.  Is this by design?
>
> By design.  There simply is not space.
>
> Note that when not in draft mode, the "*** DRAFT ***" text becomes the
> tag-line:  "Small. Fast. Reliable.  Choose any three!"  There is no
> way to fit that on a 320-pixel wide screen, together with the logo,
> and make it readable.  It is not essential information so it is
> elided.
>
> > Additionally, on the menu, the 'Download' option is not visible at all.
> >
>
> By design.  There is nothing on the Download page that is useful to a
> mobile device.  So links to that page are omitted on mobile, to save
> precious pixels.
>
> --
> D. Richard Hipp
> d...@sqlite.org
>





Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Bob McFarlane
Please reply if you sent this. Thanks.





-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On
Behalf Of Laura BERGOENS
Sent: Wednesday, September 7, 2016 11:15 AM
To: SQLite mailing list 
Subject: Re: [sqlite] Query time execution difference between my application
and SQLiteBrowser

OK, I didn't know about ANALYZE's existence, sorry for that.

I just ran ANALYZE and then the query again in sqlite3.exe, and this time it
took about half a second. Should I close and reopen the DB, because maybe this
result was influenced by some cache system or something?

Plus, I ran PRAGMA compile_options for the 3 platforms I mentioned before
and tried to regroup the results in a LibreOffice Calc document; I don't
know if that can help or if it's relevant.

2016-09-07 17:04 GMT+02:00 Simon Slavin :

>
> On 7 Sep 2016, at 3:48pm, Laura BERGOENS 
> wrote:
>
> > This query takes 100 seconds approx.
>
> Once your tables have some convincing data in (does not need to be
> final data, just something useful to see how the values are
> distributed), run "ANALYZE", just once.  It might speed up later
> SELECTs.  It might not.  But a situation where you have many 'AND'
> clauses looking at different columns is exactly what ANALYZE is most
helpful for.
>
> And yes, a 50 kilorow table is not big by SQLite standards.  I have
> tables with a thousand times that that yield answers to SELECT in 5ms.
>
> Simon.
>



--
Laura BERGOENS
Senior computer science technician and student at IMERIR, Perpignan

*Institut Méditerranéen d'Étude et de Recherche en Informatique*





Re: [sqlite] Problem with rename table

2016-09-07 Thread Bob McFarlane
Please reply if you sent this. Thanks.





-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On
Behalf Of Radovan Antloga
Sent: Tuesday, September 6, 2016 9:37 AM
To: SQLite mailing list 
Subject: [sqlite] Problem with rename table

Hi Richard !

I can't find a solution for how to fix my database after I have renamed table
DOKUMENTI to DOKUMENTI2.
Table DOKUMENTI had trigger dokumenti_trigger1, and after renaming the table I
can't execute any SQL. I forgot to drop the trigger first. So now I always get
the error:
malformed database schema (dokumenti_trigger1) - no such table
main.dokumenti.

TIA
Radovan Antloga





Re: [sqlite] Problem with rename table

2016-09-07 Thread Richard Hipp
On 9/6/16, Radovan Antloga  wrote:
> Hi Richard !
>
> I can't find a solution for how to fix my database after
> I have renamed table DOKUMENTI to DOKUMENTI2.
> Table DOKUMENTI had trigger dokumenti_trigger1,
> and after renaming the table I can't execute any SQL. I forgot
> to drop the trigger first. So now I always get the error:
> malformed database schema (dokumenti_trigger1) -
> no such table main.dokumenti.

Rename the table back to its old name?

Worst case:  You can drop all the triggers like this:

   PRAGMA writable_schema=ON;
   DELETE FROM sqlite_master WHERE type='trigger';

Then close and reopen your database, and you have no more triggers.
The same will work for views.  But if you try the above with tables or
indexes, you'll end up with a database that fails "PRAGMA
integrity_check" - though the corruption can be fixed with a VACUUM.
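The worst-case recipe can be sketched with Python's stdlib sqlite3 (hypothetical table and trigger names; assumes a build where writable_schema=ON permits direct edits to sqlite_master, which is the default):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "fix.db")
con = sqlite3.connect(path, isolation_level=None)
con.execute("CREATE TABLE dokumenti (id INTEGER)")
con.execute("CREATE TRIGGER dokumenti_trigger1 AFTER INSERT ON dokumenti "
            "BEGIN SELECT 1; END")

# Drop every trigger by editing the schema table directly.
con.execute("PRAGMA writable_schema=ON")
con.execute("DELETE FROM sqlite_master WHERE type='trigger'")
con.execute("PRAGMA writable_schema=OFF")
con.close()

# Close and reopen so the schema is re-read from the file.
con = sqlite3.connect(path)
remaining = con.execute(
    "SELECT name FROM sqlite_master WHERE type='trigger'").fetchall()
print(remaining)  # []
```

Deleting only trigger (or view) rows leaves the database consistent; as noted above, doing the same to table or index rows would fail "PRAGMA integrity_check" until a VACUUM.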
-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Clarification on "No Isolation On Same Db Connection"

2016-09-07 Thread Stephan Mueller
Hello all;

I have a scenario that is well covered by the text at 
https://www.sqlite.org/isolation.html in the section "No Isolation Between 
Operations On The Same Database Connection" (relevant paragraphs at end for 
convenience) but am hoping for a bit more clarity.  Here's an overview of my 
scenario.  I'm trying to find the right level of detail; if I've failed on the 
side of too little, I'm happy to provide more.

I have a non-trivial SELECT simulating an outer join 
(http://stackoverflow.com/questions/1923259/full-outer-join-with-sqlite) with 
the overall structure

SELECT columns FROM t1 LEFT JOIN t2 ON t1.column = t2.column
WHERE conditions
UNION ALL
SELECT columns FROM t2 LEFT JOIN t1 ON t2.column = t1.column
WHERE t1.column IS NULL
AND more_conditions
ORDER BY column

The application fetches a row at a time, doing an UPDATE or INSERT or DELETE 
into t1 (henceforth referred to just  as 'updates') for each row fetched.  I 
_want_ the semantics to be that the SELECT is unperturbed by my updates.  
(Details: there are actually three tables.  The joined-on column is UNIQUE 
(indeed, the PRIMARY KEY) in each table.  The query conditions are such that 
column is UNIQUE in the result set when not doing any updates while iterating.)

I understand that a way to ensure "SELECT is unperturbed" semantics is to use 
separate connections for SELECT and updates.   This is undesirable since I'd 
have to (IIUC) do all my updates (possibly millions) in a single transaction.  
I'd prefer to commit after each update (to avoid redoing any work in case of 
unnatural program abort, for example).  The compromise for performance is to 
commit every N (4096, currently) updates, but potentially millions seems right 
out.

Another way is to fetch all the results into a temporary table tt, then iterate 
over tt while modifying t1.   This is easy to code, and SQLite can fill tt with 
a single INSERT INTO tt SELECT .  With potentially millions of rows, 
however, I'd rather have an approach that doesn't spoil the interactive user 
experience by adding seconds spent on creating the temporary table, if that can 
be avoided.

I'm thus led to the following notion (apologies for my pseudo-code, which is 
neither SQL, nor C, nor the Perl I'm actually coding in):

prepare SELECT from above
execute()
$last = "";
while (fetchnext) {
    if $fetched.column <= $last         (1)
        skip to next iteration of loop  (2)
    $last = $fetched.column             (3)
    update-or-insert-or-delete
}

That is, if I ever receive a record that ought to have arrived earlier because 
of ORDER BY, it must be a since-SELECT-began update, and should be ignored.
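The same guard can be rendered as a sketch in Python's stdlib sqlite3 rather than Perl (a hypothetical single table stands in for the real three-table join): whatever rows the undefined traversal happens to yield, lines (1)-(3) deterministically guarantee that the processed keys are strictly increasing.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (k TEXT PRIMARY KEY, v INTEGER)")
con.executemany("INSERT INTO t1 VALUES (?, ?)",
                [("a", 1), ("b", 2), ("c", 3)])

processed = []
last = ""                    # sentinel that sorts below every real key
for k, v in con.execute("SELECT k, v FROM t1 ORDER BY k"):
    if k <= last:            # (1)/(2): at or before the high-water mark ->
        continue             #          must be a since-SELECT-began row; skip it
    last = k                 # (3): advance the high-water mark
    processed.append(k)
    # the per-row update that makes the guard necessary in the first place
    con.execute("UPDATE t1 SET v = v + 10 WHERE k = ?", (k,))
con.commit()
```

Note the guard only filters records that arrive "too early" in sort order; it cannot protect against the cursor skipping rows or returning stale column values, which the quoted documentation also allows.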

I'm trying to not think in terms of potential implementation, and stick with 
"what must ORDER BY guarantee?"   I'd imagine if it ever returns a recent 
update, it would do so at the proper point in sort order.  If it's decided that 
an "alphabetically-prior-to-where-we-have-been-returning-records recent update" 
record must be returned, it would be returned ASAP -- as if the ORDER BY 
guarantee was that "all remaining records are sorted, and the lowest of those 
returned next".

Is my thinking sound?  -- are the numbered lines in my pseudo-code necessary 
and/or sufficient to achieve the goal of ignoring updates in this case, in a 
righteously precise use of SQL?

Thanks in advance for your thoughts!

Here are selections from the relevant SQLite documentation paragraphs to 
provide context, with comments/queries interspersed.

  No Isolation Between Operations On The Same Database Connection

  ... What if a SELECT statement is started and the 
sqlite3_step() interface steps through 
roughly half of its output, then some 
UPDATE statements are run by the 
application that modify the table that the SELECT statement is reading, then 
more calls to sqlite3_step() are made 
to finish out the SELECT statement? Will the later steps of the SELECT 
statement see the changes made by the UPDATE or not? The answer is that this 
behavior is undefined.  In particular, whether or not the SELECT statement sees 
the concurrent changes depends on which release of SQLite is running, the 
schema of the database file, whether or not 
ANALYZE has been run, and the details 
of the query. In some cases, it might depend on the content of the database 
file, too. There is no good way to know whether or not a SELECT statement will 
see changes that were made to the database by the same database connection 
after the SELECT statement was started. And hence, developers should diligently 
avoid writing applications that make assumptions about what will occur in that 

Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Chris Locke
>First of all, I'll check all the pragmas and stuff, plus the version of
> SQliteBrowser (former DB Browser for SQlite indeed) I'm using, etc.

Just to confirm (as it seems to have been overlooked): SQLite Browser
(actually now called DB Browser for SQLite, rather than formerly...) is a
3rd-party tool.
I can post your query to their issues board though ... it's still under heavy
development, and the version number irregularities are curious...
https://github.com/sqlitebrowser/sqlitebrowser/issues


Thanks,
Chris


On Wed, Sep 7, 2016 at 4:04 PM, Simon Slavin  wrote:

>
> On 7 Sep 2016, at 3:48pm, Laura BERGOENS 
> wrote:
>
> > This query takes 100 seconds approx.
>
> Once your tables have some convincing data in (does not need to be final
> data, just something useful to see how the values are distributed), run
> "ANALYZE", just once.  It might speed up later SELECTs.  It might not.  But
> a situation where you have many 'AND' clauses looking at different columns
> is exactly what ANALYZE is most helpful for.
>
> And yes, a 50 kilorow table is not big by SQLite standards.  I have tables
> with a thousand times that that yield answers to SELECT in 5ms.
>
> Simon.
>
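Simon's suggestion is cheap to try: ANALYZE fills the sqlite_stat1 table with per-index statistics that the query planner then consults when ordering AND clauses. A minimal sketch with Python's stdlib sqlite3 (made-up table and index names):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
con.execute("CREATE INDEX t_a ON t(a)")
# Skewed data: column a takes only 10 distinct values across 1000 rows.
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(i % 10, i) for i in range(1000)])

con.execute("ANALYZE")
stats = con.execute(
    "SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(stats)  # e.g. [('t', 't_a', '1000 100')]: row count, then rows per key
```

The stat string tells the planner that the index covers 1000 rows with roughly 100 rows per distinct key, which is exactly the distribution information that lets it pick the most selective index first.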


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Laura BERGOENS
OK, I didn't know about ANALYZE's existence, sorry for that.

I just ran ANALYZE and then the query again in sqlite3.exe, and this time it
took about half a second. Should I close and reopen the DB, because maybe
this result was influenced by some cache system or something?

Plus, I ran PRAGMA compile_options for the 3 platforms I mentioned before
and tried to regroup the results in a LibreOffice Calc document; I don't
know if that can help or if it's relevant.

2016-09-07 17:04 GMT+02:00 Simon Slavin :

>
> On 7 Sep 2016, at 3:48pm, Laura BERGOENS 
> wrote:
>
> > This query takes 100 seconds approx.
>
> Once your tables have some convincing data in (does not need to be final
> data, just something useful to see how the values are distributed), run
> "ANALYZE", just once.  It might speed up later SELECTs.  It might not.  But
> a situation where you have many 'AND' clauses looking at different columns
> is exactly what ANALYZE is most helpful for.
>
> And yes, a 50 kilorow table is not big by SQLite standards.  I have tables
> with a thousand times that that yield answers to SELECT in 5ms.
>
> Simon.
>



-- 
Laura BERGOENS
Senior IT technician and student at IMERIR in Perpignan

*Institut Méditerranéen d'Étude et de Recherche en Informatique*


[sqlite] Problem with rename table

2016-09-07 Thread Radovan Antloga

Hi Richard !

I can't find a solution for how to fix my database after
renaming table DOKUMENTI to DOKUMENTI2.
Table DOKUMENTI had trigger dokumenti_trigger1,
and after renaming the table I can't execute any SQL. I forgot
to drop the trigger first. So now I always get the error:
malformed database schema (dokumenti_trigger1) -
no such table main.dokumenti.

TIA
Radovan Antloga


Re: [sqlite] "Responsive" website revamp at www.sqlite.org

2016-09-07 Thread Chris Locke
Makes perfect sense.  Thanks.


Chris

On Tue, Sep 6, 2016 at 12:26 PM, Richard Hipp  wrote:

> On 9/6/16, Chris Locke  wrote:
> > When reducing the size of the browser (on Chrome desktop at least) the
> '***
> > DRAFT ***' tag disappears.  Rather than disappearing, maybe this should be
> > reduced in size and always be visible?
> > Edit: Noticed it's right at the bottom of the page, regardless of size,
> but
> > not at the top.  Is this by design?
>
> By design.  There simply is not space.
>
> Note that when not in draft mode, the "*** DRAFT ***" text becomes the
> tag-line:  "Small. Fast. Reliable.  Choose any three!"  There is no
> way to fit that on a 320-pixel wide screen, together with the logo,
> and make it readable.  It is not essential information so it is
> elided.
>
> > Additionally, on the menu, the 'Download' option is not visible at all.
> >
>
> By design.  There is nothing on the Download page that is useful to a
> mobile device.  So links to that page are omitted on mobile, to save
> precious pixels.
>
> --
> D. Richard Hipp
> d...@sqlite.org
>


Re: [sqlite] Best way to "split" a large DB into two DBs

2016-09-07 Thread Eduardo Morras
On Wed, 7 Sep 2016 18:08:50 +0200
Dominique Devienne  wrote:

> Imagine there's a 25GB SQLite DB file in version v1 of an application.
> 
> Because of the way this DB is structured, with large to very large
> blobs, this leads to problems for the app. That DB has 35 tables, but
> 5 of those represent 95% of the DB size (because of those blobs).
> 
> So for v2 of the app, there's a new design where the 30 "lightweight"
> tables move into a new DB file (with additional new tables), and the 5
> "blob heavy" tables remain in the existing DB file (but the 30 "moved"
> tables should be dropped from it).
> 
> Initial design was to copy the DB file (app is "shutdown", so no
> connection to that DB file).
> Drop the 5 big tables. Vacuum it (the new DB). Add the new tables.
> But the 25GB copy was deemed too long in that case (several minutes).
> (after all, copying 95% of 25GB to discard all those GBs is
> inefficient)
> 
> So alternate design was to create the schema empty in the new DB file,
> attach the old one, and insert the data from the 30 tables into the
> new empty (with identical schema/structure) tables. But that's also
> very slow apparently.
> 
> Even though it's not my app, I know there are indexes and triggers on
> those tables (but the triggers are mostly on deletes, so don't really
> apply here), and I suggested adding those after the selects, but I
> doubt it's going to make a dramatic difference time-wise.
> 
> Conceptually, the ideal situation would be to just copy the DB header,
> sqlite_master page(s) (dropping the 5 big tables' definitions and
> related indexes/triggers from sqlite_master), and only the 5% of
> pages related to the other tables (and related objects).  (chaining
> of pages and root pages in sqlite_master have to be updated of
> course). It's almost like a form of VACUUM, except it's not in-place
> and works on a subset of the tables. IO-wise, that has the potential
> to be 20x faster I imagine.
> 
> But of course there's no such "out-of-place" "partial" VACUUM...
> 
> Then I thought maybe .backup, which also works at the page level (I
> believe), perhaps can subset what tables to backup. But no luck there
> either. backup works for the whole DB, not a subset.
> 
> Am I missing other ways to achieve this "split" efficiently?
> Any chance the backup API could gain a table-specific (and related
> objects) alternate form?

Not answering your question, but some comments/tricks about how to boost
SQLite performance in your scenario.

First, compile the latest SQLite version with these options enabled:

SQLITE_DIRECT_OVERFLOW_READ
SQLITE_DEFAULT_AUTOVACUUM=2

They are described at https://www.sqlite.org/compile.html#direct_overflow_read
and https://www.sqlite.org/compile.html#omit_autovacuum; don't omit
autovacuum, but set it to 2 (incremental).

For an explanation of why setting autovacuum to incremental works, see
http://marc.info/?l=sqlite-users&m=136265346522617&w=4

If your blobs are large (bigger than 500 KB), set the page size to 64 KB.

Make the blob column the last one in the table.

Compile with the dbstat (SQLITE_ENABLE_DBSTAT_VTAB) option if you want
statistics about your blob db. You can get information about fragmentation,
etc.; see
https://www.sqlite.org/dbstat.html#section_3

If your use case is write-heavy, don't use WAL; modifications (CUD) are not
atomic across attached DBs in WAL mode. If your use case is read-heavy, use
WAL.
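As a quick sketch of the page-size and auto-vacuum advice above (not from the
original mail; the file and table names here are invented for illustration):

```python
import os
import sqlite3
import tempfile

# Sketch of the pragma side of the advice above (64 KB pages, incremental
# auto-vacuum). The file and table names are hypothetical.
path = os.path.join(tempfile.mkdtemp(), "blobs.db")
db = sqlite3.connect(path)

# Both pragmas only take effect if set before the first table is created.
db.execute("PRAGMA page_size = 65536")
db.execute("PRAGMA auto_vacuum = 2")  # 2 = incremental
db.execute("CREATE TABLE blobs (id INTEGER PRIMARY KEY, meta TEXT, data BLOB)")
db.commit()

page_size = db.execute("PRAGMA page_size").fetchone()[0]
auto_vacuum = db.execute("PRAGMA auto_vacuum").fetchone()[0]
print(page_size, auto_vacuum)  # 65536 2
```

With auto_vacuum=2, freed pages are only reclaimed when you later run
"PRAGMA incremental_vacuum", so deletes stay cheap.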

I use (and develop) a DIY filesystem over SQLite and have a similar scenario
with hundreds of GB of blobs across multiple databases. Those compile options
and tricks, plus splitting metadata from data, made a huge improvement.

> Any advice would be appreciated. Thanks, --DD

HTH

---   ---
Eduardo Morras 


Re: [sqlite] "Responsive" website revamp at www.sqlite.org

2016-09-07 Thread Warren Young
On Sep 7, 2016, at 11:02 AM, Richard Hipp  wrote:
> 
> On 9/7/16, dmp  wrote:
> 
>> The draft site looks and works fine, is fast for loads, doesn't
>> seem to require scripting, GOOD!
> 
> Sorry to disappoint, but some pages (ex:
> https://www.sqlite.org/draft/c3ref/funclist.html) use JS to split long
> lists into multiple columns.  You can see this in action by dragging
> your browser window very wide or very narrow and pressing Reload on
> the example given in the previous sentence.  If JS is disabled, the
> list will not display at all.
> 
> I don't know of a way to do that using only CSS.  If you know of a
> way, please enlighten me.

There is, but it requires using a feature of CSS3 that isn’t universally 
deployed yet, particularly among the benighted IE users:

  https://responsivedesign.is/develop/css/css3-multiple-columns
  http://caniuse.com/#feat=multicolumn

If you don’t want to use that, then may I suggest that you attach the layout 
reflow to the window.onresize event, so you don’t have to reload the page for 
the layout to reflow?



Re: [sqlite] Best way to "split" a large DB into two DBs

2016-09-07 Thread Dan Kennedy

On 09/08/2016 12:57 AM, Clay Gerrard wrote:

On Wed, Sep 7, 2016 at 10:08 AM, Dan Kennedy  wrote:


[...] then attach it and your main db to the same handle so that you can
use an "INSERT INTO ... SELECT ..." statement


Can you elaborate or provide any reference materials for this "attach it
and your main db to the same handle" procedure?  Very interesting!?


Use the ATTACH SQL command:

  http://sqlite.org/draft/lang_attach.html

If your second db is "x.db" you can do something like:

  ATTACH 'x.db' AS aux;

and then:

  SELECT * FROM aux.sqlite_master;

and so on.

Dan.
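As a sketch of the same pattern from code (paths and table names here are made
up, not from Dan's message): attach both files to one handle, then copy with a
cross-database INSERT ... SELECT. Note that ATTACH cannot run inside an open
transaction, hence the commit first.

```python
import os
import sqlite3
import tempfile

# Two database files on one handle, then a cross-database INSERT ... SELECT.
d = tempfile.mkdtemp()
main_path = os.path.join(d, "main.db")
aux_path = os.path.join(d, "x.db")

# Create the second database with an empty table of the same shape.
aux = sqlite3.connect(aux_path)
aux.execute("CREATE TABLE t (x INTEGER)")
aux.commit()
aux.close()

db = sqlite3.connect(main_path)
db.execute("CREATE TABLE t (x INTEGER)")
db.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
db.commit()  # ATTACH is not allowed inside a transaction

db.execute("ATTACH DATABASE ? AS aux", (aux_path,))
db.execute("INSERT INTO aux.t SELECT * FROM main.t")
db.commit()

copied = db.execute("SELECT count(*) FROM aux.t").fetchone()[0]
print(copied)  # 3
```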





Re: [sqlite] Best way to "split" a large DB into two DBs

2016-09-07 Thread Simon Slavin

On 7 Sep 2016, at 6:57pm, Clay Gerrard  wrote:

> Can you elaborate or provide any reference materials for this "attach it
> and your main db to the same handle" procedure?



Simon.


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread David Raymond
Just a reminder that the automatic_index pragma does not create permanent 
indexes, it's for places where SQLite feels that "hey, if I take the time to 
create this temporary index then it'll save me more than that time during the 
rest of this one query." Those indexes get wiped at the end of the query. So in 
a bad case then you're creating a temporary index every time you run the query. 
There are some good notes on that at
http://www.sqlite.org/optoverview.html#autoindex
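A small sketch of how to see this from code (table names invented, not from
the message): EXPLAIN QUERY PLAN reports when the planner decides to build
one of these transient indexes for a join on an unindexed column.

```python
import sqlite3

# Making SQLite's one-off automatic index visible with EXPLAIN QUERY PLAN.
# Tables a/b are hypothetical; b.a_id has no index, so with default row
# estimates the planner builds a transient index for the join.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE a (id INTEGER, v TEXT)")
db.execute("CREATE TABLE b (a_id INTEGER, w TEXT)")

plan = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM a JOIN b ON b.a_id = a.id"
).fetchall()
detail = " | ".join(row[-1] for row in plan)
print(detail)
```

Adding a real index on b.a_id makes the "AUTOMATIC" entry disappear from the
plan, which is the cheap way to confirm the fix.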


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Laura BERGOENS
Sent: Wednesday, September 07, 2016 1:09 PM
To: SQLite mailing list
Subject: Re: [sqlite] Query time execution difference between my application 
and SQLiteBrowser

quote: "Now, you need to create primary/foreign keys and indexes on the
tables you are creating.
Then the execution time will improve even more"

I will probably try that tomorrow and I'll tell you how that goes, but I'm
not sure if that will be very effective since I activated the PRAGMA to
create indexes automatically. I'm guessing it has created some (hopefully).


Re: [sqlite] Best way to "split" a large DB into two DBs

2016-09-07 Thread Clay Gerrard
On Wed, Sep 7, 2016 at 10:08 AM, Dan Kennedy  wrote:

>
> [...] then attach it and your main db to the same handle so that you can
> use an "INSERT INTO ... SELECT ..." statement


Can you elaborate or provide any reference materials for this "attach it
and your main db to the same handle" procedure?  Very interesting!?

-Clay


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Robert Weiss
I haven't gone through the previous emails to see whether anyone has proposed 
this, but it seems possible to me that the GUI tool (oriented as it is to 
interaction) is not waiting for the query to finish before displaying the first 
results. 

On Wednesday, September 7, 2016 7:21 AM, Laura BERGOENS 
 wrote:
 

Hi Mr. Slavin,

As for why the query takes so long, I do know the answer!
Long story short, my application does a lot of calculation and things, I
don't want to get into the details here, but queries are built and
auto-generated piece by piece. Therefore, sometimes the queries aren't
optimized at all.
I solved that issue myself by touching up the queries a bit before
executing them, and everything is fine now.
I was concerned mainly because I figured that maybe some queries took a bit
longer to execute than they should have (let's say 200 ms instead of 100 ms),
and I'm running a lot of queries in the app (approx. 1000 per second).

I know now that there is nothing to worry about regarding my settings or
pragmas choices, since I get the same execution time with the sqlite3 tool

Thanks again for taking some time to answer

2016-09-07 16:14 GMT+02:00 Simon Slavin :

>
> On 7 Sep 2016, at 3:10pm, Laura BERGOENS 
> wrote:
>
> > I guess I don't have further question regarding my issue, since I have
> the
> > same time execution as with the sqlite3.exe tool, so there are no
> problems
> > with my pragmas or whatsoever.
>
> Nevertheless, 100 seconds is a very long time for a query, and we can
> probably improve on it if you're willing to include your "SELECT" command
> and your schema.  It might be something as simple as creating one
> additional index.
>
> You might also try executing "ANALYZE" in sqlite3.exe and see whether that
> speeds things up.
>
> Simon.
>



-- 
Laura BERGOENS
Senior IT technician and student at IMERIR in Perpignan

*Institut Méditerranéen d'Étude et de Recherche en Informatique*


   


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Simon Slavin

On 7 Sep 2016, at 4:36pm, Laura BERGOENS  wrote:

> I've put ANALYZE in my code before running this SELECT query, I have no
> change, and now on sqlite3.exe it runs kind of fast every time, even if I
> close and reopen the DB, without the ANALYZE statement.

Good.  It should now run quickly in your own program too.  This is the
improvement in speed we expected to see.

The results of ANALYZE are stored in the database file.  You only have to do it 
once.  SQLite creates some hidden tables in the database file and saves the 
results there.  Later commands with a "WHERE" or "ORDER BY" clause cause SQLite 
to take a look at those tables to find the best way to do the search or sort.

So having done ANALYZE once, perhaps using sqlite3.exe, you do not need to 
include it in your own program.  It will find the results that were saved in
the database file.

The only reason to do ANALYZE again is if you change your database schema 
(make/delete columns, TABLEs or INDEXes), or if the nature of your data changes 
(if you expand from 2 countries to 10 countries, or if you once had 50 managers 
and now have just 5 managers). If the 'chunkiness' of the values in a column
does not change much, then there is no need to run ANALYZE again just because 
you added or deleted many rows.  At most, some people include it in an 
end-of-year procedure.
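As a quick sketch of the point above (table and index names invented, not
from Simon's message): ANALYZE writes its results into the sqlite_stat1
table inside the database file, which is why running it once is enough.

```python
import sqlite3

# ANALYZE stores its results in the database itself (sqlite_stat1),
# so it persists until the schema or the shape of the data changes.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (a INTEGER)")
db.execute("CREATE INDEX t_a ON t(a)")
db.executemany("INSERT INTO t VALUES (?)", [(i % 10,) for i in range(100)])
db.execute("ANALYZE")

stats = db.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(stats)  # e.g. [('t', 't_a', '100 10')] - 100 rows, ~10 per value
```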

Simon.


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Laura BERGOENS
quote: "Is there a reason why you can't create a copy of the database and
work with a copy?"

I never said I couldn't, and if I misled you, I'm sorry. Of course I can
play with the DB, since I'm in a dev environment, and I'm open to any
alterations to the tables.

I concede I'm not giving away a lot of information, but when I do, I feel
like there is not much to say or to add.

quote: "Now, you need to create primary/foreign keys and indexes on the
tables you are creating.
Then the execution time will improve even more"

I will probably try that tomorrow and I'll tell you how that goes, but I'm
not sure if that will be very effective since I activated the PRAGMA to
create indexes automatically. I'm guessing it has created some (hopefully).

Anyway, thank you all for having the patience to read and reply.

2016-09-07 18:52 GMT+02:00 Igor Korot :

> Hi, Laura,
>
> On Wed, Sep 7, 2016 at 12:25 PM, Laura BERGOENS
>  wrote:
> > Hi Dominique,
> >
> > I guess it's not, but I'm not quite sure on what I can share, I'm a
> newbie
> > in a professional environment. I would be more comfortable if I ask my
> boss
> > first. In the meantime, all I can say is that there are no primary keys
> nor
> > foreign keys in the table, and no indexes at all. I activated the
> > automatic_index PRAGMA.
> >
> > All tables look the same more or less :
> >
> > CREATE TABLE TABLEA (ID_A INTEGER,NUM_A INTEGER,NUM_GROUPE_A
> INTEGER,TYPE_A
> > VARCHAR(255),LIBELLE_A VARCHAR(255),NOM_A VARCHAR(255));
>
> If you don't have keys or indexes in any of the tables, then the
> query time will definitely be slower.
>
> Now let me ask you this:
> Is there a reason why you can't create a copy of the database and work
> with a copy?
> Just open the Terminal and do:
>
> cp real_database.db my_database.db
>
> Then you can experiment with the database itself without screwing up
> the real data.
> Besides, that's how an actual dev environment should be - you have access
> to the development database, on which you do the experiments, and then the
> DB Admin will populate the schema based on your analysis for production.
>
> Now, you need to create primary/foreign keys and indexes on the
> tables you are creating.
> Then the execution time will improve even more.
>
> Try it and see.
>
> Then when you are ready ask the DB Admin to make those changes for
> the production database.
>
> Thank you.
>
> BTW, if it's so scary, you can rename the fields as field1, field2,
> field3, etc.
> That way the design of the database and the application will stay a
> company secret.
>
> Thank you.
>
> >
> > 2016-09-07 18:13 GMT+02:00 Dominique Devienne :
> >
> >> On Wed, Sep 7, 2016 at 6:00 PM, Laura BERGOENS <
> laura.bergo...@imerir.com>
> >> wrote:
> >>
> >> > Then I create real tables in the in-memory clone that contain the
> content
> >> > of the views : INSERT INTO tableA  SELECT * from viewA
> >> >
> >>
> >> What matters is how you create the tables,
> >> and in particular what primary key and indexes you use on them.
> >>
> >> If you don't have indexes (or a PK) on the columns you're joining on,
> >> it can't be as fast as it could be (and SQLite might end-up creating
> >> those "indexes" on-the-fly, and recreate them on the next query,
> etc...).
> >>
> >> So as advised, share your schema (see Richard's post), not your data,
> >> and share your queries and explain query plan for your queries.
> >>
> >> Your schema is not confidential, is it? The data, sure. But the
> schema???
> >> --DD
> >>
> >
> >
> >
> > --
> > Laura BERGOENS
> > Senior IT technician and student at IMERIR in Perpignan
> >
> > *Institut Méditerranéen d'Étude et de Recherche en Informatique*
>



-- 
Laura BERGOENS
Senior IT technician and student at IMERIR in Perpignan

*Institut Méditerranéen d'Étude et de Recherche en Informatique*


Re: [sqlite] Best way to "split" a large DB into two DBs

2016-09-07 Thread Dan Kennedy

On 09/07/2016 11:08 PM, Dominique Devienne wrote:

Imagine there's a 25GB SQLite DB file in version v1 of an application.

Because of the way this DB is structured, with large to very large blobs, this
leads to problems for the app. That DB has 35 tables, but 5 of those
represent 95% of the DB size (because of those blobs).

So for v2 of the app, there's a new design where the 30 "lightweight"
tables move into a new DB file (with additional new tables), and the 5
"blob heavy" tables remain in the existing DB file (but the 30 "moved"
tables should be dropped from it).

Initial design was to copy the DB file (app is "shutdown", so no connection
to that DB file).
Drop the 5 big tables. Vacuum it (the new DB). Add the new tables.
But the 25GB copy was deemed too long in that case (several minutes).
(after all, copying 95% of 25GB to discard all those GBs is inefficient)

So alternate design was to create the schema empty in the new DB file,
attach the old one, and insert the data from the 30 tables into the new
empty (with identical schema/structure) tables. But that's also very slow
apparently.


SQLite contains a special optimization for statements of the form:

  INSERT INTO tbl SELECT * FROM tbl2;

If both tables have the same set of columns, the same PK, the same 
indexes and identical UNIQUE constraints and no triggers, and if foreign 
keys are disabled, SQLite can copy records directly from one b-tree 
structure to another without unpacking the records. This can be much 
faster. And it writes the b-tree structures in order too, which leads to 
a more efficient use of the cache.


So if you create your new db with the tables and indexes but no 
triggers, then attach it and your main db to the same handle so that you 
can use an "INSERT INTO ... SELECT ..." statement of the form above, 
things might run a bit faster.


Dan.
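As a sketch of the setup Dan describes (file and table names old.db/new.db/t
are made up for illustration): identical table and index definitions in the
new file, no triggers, foreign keys off, then a plain INSERT INTO ... SELECT.

```python
import os
import sqlite3
import tempfile

# Identical schemas on both sides so SQLite can use its fast b-tree copy.
d = tempfile.mkdtemp()
ddl = """
CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT);
CREATE INDEX t_v ON t(v);
"""

old = sqlite3.connect(os.path.join(d, "old.db"))
old.executescript(ddl)
old.executemany("INSERT INTO t(v) VALUES (?)",
                [("row%d" % i,) for i in range(1000)])
old.commit()
old.close()

new = sqlite3.connect(os.path.join(d, "new.db"))
new.executescript(ddl)
new.execute("PRAGMA foreign_keys = OFF")  # one condition for the fast path
new.execute("ATTACH DATABASE ? AS old", (os.path.join(d, "old.db"),))
new.execute("INSERT INTO main.t SELECT * FROM old.t")
new.commit()

n = new.execute("SELECT count(*) FROM main.t").fetchone()[0]
print(n)  # 1000
```

The statement still works if the schemas differ; it just falls back to the
slower row-by-row path, so the copy is correct either way.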






Even though it's not my app, I know there are indexes and triggers on those
tables (but the triggers are mostly on deletes, so don't really apply
here), and I suggested adding those after the selects, but I doubt it's
going to make a dramatic difference time-wise.

Conceptually, the ideal situation would be to just copy the DB header,
sqlite_master page(s) (dropping the 5 big tables' definitions and related
indexes/triggers from sqlite_master), and only the 5% of pages related to
the other tables (and related objects).  (chaining of pages and root pages
in sqlite_master have to be updated of course). It's almost like a form of
VACUUM, except it's not in-place and works on a subset of the tables.
IO-wise, that has the potential to be 20x faster I imagine.

But of course there's no such "out-of-place" "partial" VACUUM...

Then I thought maybe .backup, which also works at the page level (I
believe), perhaps can subset what tables to backup. But no luck there
either. backup works for the whole DB, not a subset.

Am I missing other ways to achieve this "split" efficiently?
Any chance the backup API could gain a table-specific (and related objects)
alternate form?

Any advice would be appreciated. Thanks, --DD


Re: [sqlite] "Responsive" website revamp at www.sqlite.org

2016-09-07 Thread Richard Hipp
On 9/7/16, dmp  wrote:

> The draft site looks and works fine, is fast for loads, doesn't
> seem to require scripting, GOOD!
>

Sorry to disappoint, but some pages (ex:
https://www.sqlite.org/draft/c3ref/funclist.html) use JS to split long
lists into multiple columns.  You can see this in action by dragging
your browser window very wide or very narrow and pressing Reload on
the example given in the previous sentence.  If JS is disabled, the
list will not display at all.

I don't know of a way to do that using only CSS.  If you know of a
way, please enlighten me.
-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Igor Korot
Hi, Laura,

On Wed, Sep 7, 2016 at 12:25 PM, Laura BERGOENS
 wrote:
> Hi Dominique,
>
> I guess it's not, but I'm not quite sure on what I can share, I'm a newbie
> in a professional environment. I would be more comfortable if I ask my boss
> first. In the meantime, all I can say is that there are no primary keys nor
> foreign keys in the table, and no indexes at all. I activated the
> automatic_index PRAGMA.
>
> All tables look the same more or less :
>
> CREATE TABLE TABLEA (ID_A INTEGER,NUM_A INTEGER,NUM_GROUPE_A INTEGER,TYPE_A
> VARCHAR(255),LIBELLE_A VARCHAR(255),NOM_A VARCHAR(255));

If you don't have keys or indexes in any of the tables, then the
query time will definitely be slower.

Now let me ask you this:
Is there a reason why you can't create a copy of the database and work
with a copy?
Just open the Terminal and do:

cp real_database.db my_database.db

Then you can experiment with the database itself without screwing up
the real data.
Besides, that's how an actual dev environment should be - you have access
to the development database, on which you do the experiments, and then the
DB Admin will populate the schema based on your analysis for production.

Now, you need to create primary/foreign keys and indexes on the
tables you are creating.
Then the execution time will improve even more.

Try it and see.

Then when you are ready ask the DB Admin to make those changes for
the production database.

Thank you.

BTW, if it's so scary, you can rename the fields as field1, field2,
field3, etc.
That way the design of the database and the application will stay a
company secret.

Thank you.

>
> 2016-09-07 18:13 GMT+02:00 Dominique Devienne :
>
>> On Wed, Sep 7, 2016 at 6:00 PM, Laura BERGOENS 
>> wrote:
>>
>> > Then I create real tables in the in-memory clone that contain the content
>> > of the views : INSERT INTO tableA  SELECT * from viewA
>> >
>>
>> What matters is how you create the tables,
>> and in particular what primary key and indexes you use on them.
>>
>> If you don't have indexes (or a PK) on the columns you're joining on,
>> it can't be as fast as it could be (and SQLite might end-up creating
>> those "indexes" on-the-fly, and recreate them on the next query, etc...).
>>
>> So as advised, share your schema (see Richard's post), not your data,
>> and share your queries and explain query plan for your queries.
>>
>> Your schema is not confidential, is it? The data, sure. But the schema???
>> --DD
>>
>
>
>
> --
> Laura BERGOENS
> Senior IT technician and student at IMERIR in Perpignan
>
> *Institut Méditerranéen d'Étude et de Recherche en Informatique*


Re: [sqlite] "Responsive" website revamp at www.sqlite.org

2016-09-07 Thread dmp
I'm not interested in mobile browsing, just that the site is
still acceptable after changes for desktop. I use unconventional
browsers, including one of my own design.

The draft site looks and works fine, is fast for loads, doesn't
seem to require scripting, GOOD!

danap.



Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Laura BERGOENS
Hi Dominique,

I guess it's not, but I'm not quite sure on what I can share, I'm a newbie
in a professional environment. I would be more comfortable if I ask my boss
first. In the meantime, all I can say is that there are no primary keys nor
foreign keys in the table, and no indexes at all. I activated the
automatic_index PRAGMA.

All tables look the same more or less :

CREATE TABLE TABLEA (ID_A INTEGER,NUM_A INTEGER,NUM_GROUPE_A INTEGER,TYPE_A
VARCHAR(255),LIBELLE_A VARCHAR(255),NOM_A VARCHAR(255));
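As a rough sketch (not from the original mail) of what the replies suggest:
the same style of table, but with an explicit INTEGER PRIMARY KEY and indexes
on the columns used for joins and filters. The index names are invented.

```python
import sqlite3

# Same table shape, now with a PK and explicit indexes; EXPLAIN QUERY PLAN
# confirms a lookup on NUM_A uses the index instead of a full scan.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE TABLEA (
    ID_A         INTEGER PRIMARY KEY,
    NUM_A        INTEGER,
    NUM_GROUPE_A INTEGER,
    TYPE_A       VARCHAR(255),
    LIBELLE_A    VARCHAR(255),
    NOM_A        VARCHAR(255)
);
CREATE INDEX idx_tablea_num ON TABLEA(NUM_A);
CREATE INDEX idx_tablea_groupe ON TABLEA(NUM_GROUPE_A);
""")

plan = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM TABLEA WHERE NUM_A = 1"
).fetchall()
print(plan[0][-1])
```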

2016-09-07 18:13 GMT+02:00 Dominique Devienne :

> On Wed, Sep 7, 2016 at 6:00 PM, Laura BERGOENS 
> wrote:
>
> > Then I create real tables in the in-memory clone that contain the content
> > of the views : INSERT INTO tableA  SELECT * from viewA
> >
>
> What matters is how you create the tables,
> and in particular what primary key and indexes you use on them.
>
> If you don't have indexes (or a PK) on the columns you're joining on,
> it can't be as fast as it could be (and SQLite might end-up creating
> those "indexes" on-the-fly, and recreate them on the next query, etc...).
>
> So as advised, share your schema (see Richard's post), not your data,
> and share your queries and explain query plan for your queries.
>
> Your schema is not confidential, is it? The data, sure. But the schema???
> --DD
>



-- 
Laura BERGOENS
Senior IT technician and student at IMERIR in Perpignan

*Institut Méditerranéen d'Étude et de Recherche en Informatique*


Re: [sqlite] Best way to "split" a large DB into two DBs

2016-09-07 Thread Stephen Chrzanowski
What are the business logic reasons you want to split the database?  Is
your application going to be able to handle dealing with data that'd exist
in multiple places?

When I know I'm going to be putting large blobs within a database, those
blob tables have two fields, one ID, and one for the actual content.
Perhaps even a field to indicate what kind of blob it is, but I'm also the
type that'd just make a new table for the new type of blob information.

The ID is a 1:1 PK:FK relationship between a field in the "master" table
and this blob data, so the master table would have a direct reference to
the blob record.  This way, whenever I need to do queries, I pull from the
master table, and IF I need to get content about the blob (Say image, MP3,
or whatever the case) I'll open a new query to the blob table, get my data,
then shut it down. I'd also set a unique constraint on the blob ID field,
which automatically creates an index against it for fast lookups.
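A rough sketch of that 1:1 master/blob layout (all table and column names
here are invented for illustration, not from the message):

```python
import sqlite3

# Master rows carry the metadata; blob content lives in its own table,
# linked 1:1 through a unique foreign-key column.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE blob_store (
    id      INTEGER PRIMARY KEY,
    kind    TEXT,            -- e.g. 'image', 'mp3'
    content BLOB
);
CREATE TABLE master (
    id      INTEGER PRIMARY KEY,
    name    TEXT,
    blob_id INTEGER REFERENCES blob_store(id)
);
-- UNIQUE enforces the 1:1 link and doubles as a fast-lookup index.
CREATE UNIQUE INDEX master_blob ON master(blob_id);
""")

db.execute("INSERT INTO blob_store VALUES (1, 'image', x'89504E47')")
db.execute("INSERT INTO master VALUES (1, 'logo', 1)")
kind = db.execute(
    "SELECT b.kind FROM master AS m JOIN blob_store AS b ON b.id = m.blob_id"
    " WHERE m.id = 1"
).fetchone()[0]
print(kind)  # image
```

Queries that never touch `content` stay cheap, because the big payloads are
only read when the join to blob_store is actually performed.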


On Wed, Sep 7, 2016 at 12:08 PM, Dominique Devienne 
wrote:

> Imagine there's a 25GB SQLite DB file in version v1 of an application.
>
> Because of the way this DB is structured, with large to very large blobs, this
> leads to problems for the app. That DB has 35 tables, but 5 of those
> represent 95% of the DB size (because of those blobs).
>
> So for v2 of the app, there's a new design where the 30 "lightweight"
> tables move into a new DB file (with additional new tables), and the 5
> "blob heavy" tables remain in the existing DB file (but the 30 "moved"
> tables should be dropped from it).
>
> Initial design was to copy the DB file (app is "shutdown", so no connection
> to that DB file).
> Drop the 5 big tables. Vacuum it (the new DB). Add the new tables.
> But the 25GB copy was deemed too long in that case (several minutes).
> (after all, copying 95% of 25GB to discard all those GBs is inefficient)
>
> So alternate design was to create the schema empty in the new DB file,
> attach the old one, and insert the data from the 30 tables into the new
> empty (with identical schema/structure) tables. But that's also very slow
> apparently.
>
> Even though it's not my app, I know there are indexes and triggers on those
> tables (but the triggers are mostly on deletes, so don't really apply
> here), and I suggested adding those after the selects, but I doubt it's
> going to make a dramatic difference time-wise.
>
> Conceptually, the ideal situation would be to just copy the DB header,
> sqlite_master page(s) (dropping the 5 big tables' definitions and related
> indexes/triggers from sqlite_master), and only the 5% of pages related to
> the other tables (and related objects).  (chaining of pages and root pages
> in sqlite_master have to be updated of course). It's almost like a form of
> VACUUM, except it's not in-place and works on a subset of the tables.
> IO-wise, that has the potential to be 20x faster I imagine.
>
> But of course there's no such "out-of-place" "partial" VACUUM...
>
> Then I thought maybe .backup, which also works at the page level (I
> believe), perhaps can subset what tables to backup. But no luck there
> either. backup works for the whole DB, not a subset.
>
> Am I missing other ways to achieve this "split" efficiently?
> Any chance the backup API could gain a table-specific (and related objects)
> alternate form?
>
> Any advice would be appreciated. Thanks, --DD
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] Best way to "split" a large DB into two DBs

2016-09-07 Thread Dominique Devienne
Imagine there's a 25GB SQLite DB file in version v1 of an application.

Because the way this DB is structured, with large to very large blobs, this
leads to problems for the app. That DB has 35 tables, but 5 of those
represent 95% of the DB size (because of those blobs).

So for v2 of the app, there's a new design where the 30 "lightweight"
tables move into a new DB file (with additional new tables), and the 5
"blob heavy" tables remain in the existing DB file (but the 30 "moved"
tables should be dropped from it).

Initial design was to copy the DB file (app is "shutdown", so no connection
to that DB file).
Drop the 5 big tables. Vacuum it (the new DB). Add the new tables.
But the 25GB copy was deemed too long in that case (several minutes).
(after all, copying 95% of 25GB to discard all those GBs is inefficient)

So alternate design was to create the schema empty in the new DB file,
attach the old one, and insert the data from the 30 tables into the new
empty (with identical schema/structure) tables. But that's also very slow
apparently.

Even though it's not my app, I know there are indexes and triggers on those
tables (but the triggers are mostly on deletes, so don't really apply
here), and I suggested adding those after the selects, but I doubt it's
going to make a dramatic difference time-wise.

Conceptually, the ideal situation would be to just copy the DB header,
sqlite_master page(s) (dropping the 5 big tables definitions and related
indexes/triggers from sqlite_master), and only the 5% of pages related to
the other tables (and related objects).  (chaining of pages and root pages
in sqlite_master have to be updated of course). It's almost like a form of
VACUUM, except it's not in-place and works on a subset of the tables.
IO-wise, that has the potential to be 20x faster I imagine.

But of course there's no such "out-of-place" "partial" VACUUM...

Then I thought maybe .backup, which also works at the page level (I
believe), perhaps can subset what tables to backup. But no luck there
either. backup works for the whole DB, not a subset.

Am I missing other ways to achieve this "split" efficiently?
Any chance the backup API could gain a table-specific (and related objects)
alternate form?

Any advice would be appreciated. Thanks, --DD
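The "alternate design" above (create the new schema empty, ATTACH the old file, then INSERT ... SELECT the lightweight tables across) can be sketched as follows. File names, table names, and the index are hypothetical; as advised elsewhere in the thread, secondary indexes are created only after the bulk copy, and the copy runs in a single transaction:

```python
import os
import sqlite3
import tempfile

# Two scratch files standing in for the 25GB "old" DB and the new one.
tmp = tempfile.mkdtemp()
old_path = os.path.join(tmp, "old.db")
new_path = os.path.join(tmp, "new.db")

# Populate a hypothetical "lightweight" table in the old DB.
old = sqlite3.connect(old_path)
old.execute("CREATE TABLE light (id INTEGER PRIMARY KEY, val TEXT)")
old.executemany("INSERT INTO light(val) VALUES (?)", [("x",)] * 1000)
old.commit()
old.close()

# Create the same structure, empty and without secondary indexes, in the new DB.
new = sqlite3.connect(new_path)
new.execute("CREATE TABLE light (id INTEGER PRIMARY KEY, val TEXT)")
new.execute("ATTACH DATABASE ? AS old", (old_path,))

with new:  # one transaction around the whole bulk copy
    new.execute("INSERT INTO light SELECT * FROM old.light")

# Add secondary indexes only after the data is in place.
new.execute("CREATE INDEX light_val ON light(val)")
count = new.execute("SELECT count(*) FROM light").fetchone()[0]
print(count)  # 1000
```

One transaction avoids a commit per statement, and building indexes after the INSERTs means each index is constructed once over sorted data instead of being updated row by row.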


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Dominique Devienne
On Wed, Sep 7, 2016 at 6:00 PM, Laura BERGOENS 
wrote:

> Then I create real tables in the in-memory clone that contain the content
> of the views: INSERT INTO tableA SELECT * FROM viewA
>

What matters is how you create the tables,
and in particular what primary key and indexes you use on them.

If you don't have indexes (or a PK) on the columns you're joining on,
it can't be as fast as it could be (and SQLite might end up creating
those "indexes" on the fly, and recreating them on the next query, etc.).

So as advised, share your schema (see Richard's post), not your data,
and share your queries and the EXPLAIN QUERY PLAN output for them.

Your schema is not confidential, is it? The data, sure. But the schema???
--DD


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Laura BERGOENS
Hi Igor,

I'm working on providing that data; meanwhile I want to emphasize how
those tables are created, as maybe there is something here:

The idea is that I run the application using a given database. I want this
database's data to remain untouched, so I'm making a clone of it in
memory.
Then I create views, let's call them viewA, viewB, viewC and viewD with the
statement CREATE VIEW etc...

Then I create real tables in the in-memory clone that contain the content
of the views :
INSERT INTO tableA SELECT * FROM viewA

This way I can work on real tables, without altering my original database.

I'm trying to give as much information as I can. Maybe the fact that I am
using views, and creating real tables from those views, points to a
performance issue I'm not aware of, or something like that.

2016-09-07 17:39 GMT+02:00 Igor Korot :

> Hi, Laura,
>
> On Wed, Sep 7, 2016 at 10:48 AM, Laura BERGOENS
>  wrote:
> > Note : I had some real queries that use to take 100 seconds to execute,
> and
> > I optimized them myself.
> > It looks like this :
> >
> > I have 4 tables:  tableA, tableLink, tableC and tableD
> > tableA, Link and C have no more than 10k rows in it, and tableD around
> 50k
> > (which is not big at all right?)
> > The query goes like this :
> > SELECT DISTINCT A1.idA, A1.column1, A1.column2
> > FROM   tableA A1,
> >        tableA A2,
> >        tableLink link,
> >        tableC C1,
> >        tableD D1,
> >        tableD D2
> > WHERE  C1.idA = A1.idA
> > AND    C1.idD = D1.idD
> > AND    A1.idD = D1.idD
> > AND    A1.column2 = 'VALUE'
> > AND    A2.idA = link.id_item_1
> > AND    A1.idA = link.id_item_2
> > AND    D2.idD = A2.idD
> > AND    A2.idA = 100
> >
> > This query takes 100 seconds approx.
> > I don't know if that can help you in any way, but the tables have been
> > created with a query like INSERT INTO SELECT * FROM A_View, so they have
> > been created from a view.
>
> After populating the data, do you create any indexes?
> Also, I presume that all 4 tables are created like this, not just tableA,
> right?
> BTW, if you can show the schema (no data necessary) for those 4 tables, we
> can
> see if there is a way to improve.
>
> Thank you.
>
>
> > As we can see, only columns from tableA A1 are selected, so most of the
> > joins here can be replaced with something of the form :
> > AND EXISTS (SELECT 1 FROM  "test join")
> >
> > I managed to drastically reduce time execution on this query with the
> > EXISTS trick, and now it has a normal time execution (below 300 ms for
> > sure, can't tell you how much exactly)
> >
> >
> > I've checked the EXPLAIN QUERY PLAN of the original query, and I understood
> > that I was scanning tables in nested loops, so that this can take some
> time
> > (in fact the product of the sizes of all the tables in the FROM clause
> > right?)
> >
> > Here is what i can give you for now, now i'll do some tests that you've
> > recommended earlier
> >
> > 2016-09-07 16:33 GMT+02:00 R Smith :
> >
> >>
> >> On 2016/09/07 4:20 PM, Laura BERGOENS wrote:
> >>
> >>> Hi Mr. Slavin,
> >>>
> >>> As for why the query takes so long, I do know the answer !
> >>> Long story short, my application does a lot of calculation and things,
> I
> >>> don't want to get into the details here, but queries are built and
> >>> auto-generated piece by piece. Therefore, sometimes the queries aren't
> >>> optimized at all.
> >>> I solved that issue myself by touching up the queries a bit before
> >>> executing them, and everything is fine now.
> >>> I was concerned mainly because I figured that maybe some queries were a
> >>> bit
> >>> longer to execute than they should have (let's say 200 ms instead of 100
> >>> ms),
> >>> and I'm running a lot of queries in the app (approx 1000 per second).
> >>>
> >>
> >> Magic goalposts...
> >>
> >> "A query" taking 100s is a VERY VERY different problem to 1000 queries
> >> taking 100ms each. And to get technical, you shouldn't really need to
> >> optimize the queries (apart from avoiding the obvious silliness), you
> only
> >> need to know how to ask for the data correctly. Optimization is the job
> of
> >> the query planner in the DB engine - it should get the best fastest
> query
> >> results possible as long as it has all the information (which is what
> >> ANALYZE will do as others mentioned already) and as long as you provide
> the
> >> best Index for the job (which is something we might have some
> suggestions
> >> on if we know the schema and typical query is).
> >>
> >>
> >> I know now that there is nothing to worry about regarding my settings or
> >>> pragmas choices, since I get the same execution time with the sqlite3
> tool
> >>>
> >>
> >> Maybe nothing to worry about, but that is no reason to leave it be -
>> whatever the case is, if some previous version of SQLite can run it in
>> 1/10th the time, there MUST be opportunity for improvement.

Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Igor Korot
Hi, Laura,

On Wed, Sep 7, 2016 at 10:48 AM, Laura BERGOENS
 wrote:
> Note : I had some real queries that used to take 100 seconds to execute, and
> I optimized them myself.
> It looks like this :
>
> I have 4 tables:  tableA, tableLink, tableC and tableD
> tableA, Link and C have no more than 10k rows in it, and tableD around 50k
> (which is not big at all right?)
> The query goes like this :
> SELECT DISTINCT A1.idA, A1.column1, A1.column2
> FROM   tableA A1,
>        tableA A2,
>        tableLink link,
>        tableC C1,
>        tableD D1,
>        tableD D2
> WHERE  C1.idA = A1.idA
> AND    C1.idD = D1.idD
> AND    A1.idD = D1.idD
> AND    A1.column2 = 'VALUE'
> AND    A2.idA = link.id_item_1
> AND    A1.idA = link.id_item_2
> AND    D2.idD = A2.idD
> AND    A2.idA = 100
>
> This query takes 100 seconds approx.
> I don't know if that can help you in any way, but the tables have been
> created with a query like INSERT INTO SELECT * FROM A_View, so they have
> been created from a view.

After populating the data, do you create any indexes?
Also, I presume that all 4 tables are created like this, not just tableA, right?
BTW, if you can show the schema (no data necessary) for those 4 tables, we can
see if there is a way to improve.

Thank you.


> As we can see, only columns from tableA A1 are selected, so most of the
> joins here can be replaced with something of the form :
> AND EXISTS (SELECT 1 FROM  "test join")
>
> I managed to drastically reduce time execution on this query with the
> EXISTS trick, and now it has a normal time execution (below 300 ms for
> sure, can't tell you how much exactly)
>
>
> I've checked the EXPLAIN QUERY PLAN of the original query, and I understood
> that I was scanning tables in nested loops, so that this can take some time
> (in fact the product of the sizes of all the tables in the FROM clause
> right?)
>
> Here is what I can give you for now; I'll now do some tests that you've
> recommended earlier
>
> 2016-09-07 16:33 GMT+02:00 R Smith :
>
>>
>> On 2016/09/07 4:20 PM, Laura BERGOENS wrote:
>>
>>> Hi Mr. Slavin,
>>>
>>> As for why the query takes so long, I do know the answer !
>>> Long story short, my application does a lot of calculation and things, I
>>> don't want to get into the details here, but queries are built and
>>> auto-generated piece by piece. Therefore, sometimes the queries aren't
>>> optimized at all.
>>> I solved that issue myself by touching up the queries a bit before
>>> executing them, and everything is fine now.
>>> I was concerned mainly because I figured that maybe some queries were a
>>> bit
>>> longer to execute than they should have (let's say 200 ms instead of 100
>>> ms),
>>> and I'm running a lot of queries in the app (approx 1000 per second).
>>>
>>
>> Magic goalposts...
>>
>> "A query" taking 100s is a VERY VERY different problem to 1000 queries
>> taking 100ms each. And to get technical, you shouldn't really need to
>> optimize the queries (apart from avoiding the obvious silliness), you only
>> need to know how to ask for the data correctly. Optimization is the job of
>> the query planner in the DB engine - it should get the best fastest query
>> results possible as long as it has all the information (which is what
>> ANALYZE will do as others mentioned already) and as long as you provide the
>> best Index for the job (which is something we might have some suggestions
>> on if we know the schema and typical query is).
>>
>>
>> I know now that there is nothing to worry about regarding my settings or
>>> pragmas choices, since I get the same execution time with the sqlite3 tool
>>>
>>
>> Maybe nothing to worry about, but that is no reason to leave it be -
>> whatever the case is, if some previous version of SQLite can run it in
>> 1/10th the time, there MUST be opportunity for improvement.
>>
>>
>
>
>
> --
> Laura BERGOENS
> Technicienne supérieure en Informatique et étudiante à l'IMERIR de Perpignan
>
> *Institut Méditerranéen d'Étude et de Recherche en Informatique*


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Laura BERGOENS
I've put ANALYZE in my code before running this SELECT query, and I see no
change; but now in sqlite3.exe it runs fast every time, even if I
close and reopen the DB without an ANALYZE statement.

2016-09-07 17:15 GMT+02:00 Laura BERGOENS :

> OK, I didn't know about ANALYZE's existence, sorry for that.
>
> I just ran ANALYZE and then the query again in sqlite3.exe, and this time it
> took about half a second. Should I close and reopen the DB, because maybe
> this result was influenced by some cache or something?
>
> Plus, I ran PRAGMA compile_options for the 3 platforms I mentioned before
> and tried to gather the results in a LibreOffice Calc document; I don't
> know if that helps or is relevant
>
> 2016-09-07 17:04 GMT+02:00 Simon Slavin :
>
>>
>> On 7 Sep 2016, at 3:48pm, Laura BERGOENS 
>> wrote:
>>
>> > This query takes 100 seconds approx.
>>
>> Once your tables have some convincing data in (does not need to be final
>> data, just something useful to see how the values are distributed), run
>> "ANALYZE", just once.  It might speed up later SELECTs.  It might not.  But
>> a situation where you have many 'AND' clauses looking at different columns
>> is exactly what ANALYZE is most helpful for.
>>
>> And yes, a 50 kilorow table is not big by SQLite standards.  I have
>> tables with a thousand times that that yield answers to SELECT in 5ms.
>>
>> Simon.
>
>
>
> --
> Laura BERGOENS
> Technicienne supérieure en Informatique et étudiante à l'IMERIR de
> Perpignan
>
> *Institut Méditerranéen d'Étude et de Recherche en Informatique*
>



-- 
Laura BERGOENS
Technicienne supérieure en Informatique et étudiante à l'IMERIR de Perpignan

*Institut Méditerranéen d'Étude et de Recherche en Informatique*


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Simon Slavin

On 7 Sep 2016, at 3:48pm, Laura BERGOENS  wrote:

> This query takes 100 seconds approx.

Once your tables have some convincing data in (does not need to be final data, 
just something useful to see how the values are distributed), run "ANALYZE", 
just once.  It might speed up later SELECTs.  It might not.  But a situation 
where you have many 'AND' clauses looking at different columns is exactly what 
ANALYZE is most helpful for.

And yes, a 50 kilorow table is not big by SQLite standards.  I have tables with 
a thousand times that that yield answers to SELECT in 5ms.

Simon.
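Simon's advice can be illustrated with a small sketch (Python stdlib sqlite3; the table and index names are made up): ANALYZE gathers statistics about the distribution of indexed values and stores them in the ordinary sqlite_stat1 table, which the query planner consults when choosing a plan.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
con.execute("CREATE INDEX t_a ON t(a)")
# 1000 rows, with 10 distinct values in the indexed column.
con.executemany("INSERT INTO t VALUES (?, ?)", [(i % 10, i) for i in range(1000)])

# ANALYZE samples the indexed data and writes the statistics into
# sqlite_stat1; the planner reads them on subsequent queries.
con.execute("ANALYZE")
stats = con.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(stats)  # e.g. [('t', 't_a', '1000 100')] -> 1000 rows, ~100 per distinct a
```

Because sqlite_stat1 is stored in the database file itself, the statistics survive closing and reopening the database, which is consistent with the later observation in this thread that the query stayed fast without re-running ANALYZE.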


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Laura BERGOENS
Note : I had some real queries that used to take 100 seconds to execute, and
I optimized them myself.
It looks like this :

I have 4 tables:  tableA, tableLink, tableC and tableD
tableA, Link and C have no more than 10k rows in it, and tableD around 50k
(which is not big at all right?)
The query goes like this :
SELECT DISTINCT A1.idA, A1.column1, A1.column2
FROM   tableA A1,
       tableA A2,
       tableLink link,
       tableC C1,
       tableD D1,
       tableD D2
WHERE  C1.idA = A1.idA
AND    C1.idD = D1.idD
AND    A1.idD = D1.idD
AND    A1.column2 = 'VALUE'
AND    A2.idA = link.id_item_1
AND    A1.idA = link.id_item_2
AND    D2.idD = A2.idD
AND    A2.idA = 100

This query takes 100 seconds approx.
I don't know if that can help you in any way, but the tables have been
created with a query like INSERT INTO SELECT * FROM A_View, so they have
been created from a view.
As we can see, only columns from tableA A1 are selected, so most of the
joins here can be replaced with something of the form :
AND EXISTS (SELECT 1 FROM  "test join")

I managed to drastically reduce time execution on this query with the
EXISTS trick, and now it has a normal time execution (below 300 ms for
sure, can't tell you how much exactly)
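A hedged sketch of the EXISTS rewrite described above, reusing the table and column names from the quoted query; the schema details and sample data are invented for illustration. Since only A1's columns are selected, the other joins serve purely as existence checks:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tableA   (idA INTEGER PRIMARY KEY, idD INTEGER, column1 TEXT, column2 TEXT);
    CREATE TABLE tableLink(id_item_1 INTEGER, id_item_2 INTEGER);
    CREATE TABLE tableC   (idA INTEGER, idD INTEGER);
    CREATE TABLE tableD   (idD INTEGER PRIMARY KEY);

    INSERT INTO tableD VALUES (7);
    INSERT INTO tableA VALUES (100, 7, 'n1', 'OTHER'), (200, 7, 'n2', 'VALUE');
    INSERT INTO tableC VALUES (200, 7);
    INSERT INTO tableLink VALUES (100, 200);
""")

# The A2/link/C1/D1/D2 joins contribute no selected columns, so each
# group becomes an EXISTS subquery instead of a cross-product term.
rows = con.execute("""
    SELECT DISTINCT A1.idA, A1.column1, A1.column2
    FROM tableA A1
    WHERE A1.column2 = 'VALUE'
      AND EXISTS (SELECT 1 FROM tableC C1, tableD D1
                  WHERE C1.idA = A1.idA AND C1.idD = D1.idD AND A1.idD = D1.idD)
      AND EXISTS (SELECT 1 FROM tableA A2, tableLink link, tableD D2
                  WHERE A2.idA = link.id_item_1 AND A1.idA = link.id_item_2
                    AND D2.idD = A2.idD AND A2.idA = 100)
""").fetchall()
print(rows)  # [(200, 'n2', 'VALUE')]
```

The EXISTS form stops each check at the first matching row, and the DISTINCT no longer has to deduplicate a large intermediate result.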


I've checked the EXPLAIN QUERY PLAN of the original query, and I understood
that I was scanning tables in nested loops, so that this can take some time
(in fact the product of the sizes of all the tables in the FROM clause
right?)
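The nested-loop behaviour described above is visible directly in EXPLAIN QUERY PLAN output; a small sketch (Python stdlib sqlite3, made-up tables) showing how an index turns a full inner-loop scan into an index search:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (x INTEGER)")
con.execute("CREATE TABLE t2 (x INTEGER)")

# Without any index, the join plan typically shows two nested SCAN steps,
# i.e. work proportional to rows(t1) * rows(t2).
for row in con.execute("EXPLAIN QUERY PLAN SELECT * FROM t1, t2 WHERE t1.x = t2.x"):
    print(row[-1])

# After indexing the join column, the inner loop becomes an index SEARCH.
con.execute("CREATE INDEX t2_x ON t2(x)")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t1, t2 WHERE t1.x = t2.x"
).fetchall()
for row in plan:
    print(row[-1])  # typically: SCAN over t1, then SEARCH t2 USING INDEX t2_x
```

The exact wording of the plan rows varies between SQLite versions, but the SCAN-vs-SEARCH distinction is the thing to look for.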

Here is what I can give you for now; I'll now do some tests that you've
recommended earlier

2016-09-07 16:33 GMT+02:00 R Smith :

>
> On 2016/09/07 4:20 PM, Laura BERGOENS wrote:
>
>> Hi Mr. Slavin,
>>
>> As for why the query takes so long, I do know the answer !
>> Long story short, my application does a lot of calculation and things, I
>> don't want to get into the details here, but queries are built and
>> auto-generated piece by piece. Therefore, sometimes the queries aren't
>> optimized at all.
>> I solved that issue myself by touching up the queries a bit before
>> executing them, and everything is fine now.
>> I was concerned mainly because I figured that maybe some queries were a
>> bit
>> longer to execute than they should have (let's say 200 ms instead of 100
>> ms),
>> and I'm running a lot of queries in the app (approx 1000 per second).
>>
>
> Magic goalposts...
>
> "A query" taking 100s is a VERY VERY different problem to 1000 queries
> taking 100ms each. And to get technical, you shouldn't really need to
> optimize the queries (apart from avoiding the obvious silliness), you only
> need to know how to ask for the data correctly. Optimization is the job of
> the query planner in the DB engine - it should get the best fastest query
> results possible as long as it has all the information (which is what
> ANALYZE will do as others mentioned already) and as long as you provide the
> best Index for the job (which is something we might have some suggestions
> on if we know the schema and typical query is).
>
>
> I know now that there is nothing to worry about regarding my settings or
>> pragmas choices, since I get the same execution time with the sqlite3 tool
>>
>
> Maybe nothing to worry about, but that is no reason to leave it be -
> whatever the case is, if some previous version of SQLite can run it in
> 1/10th the time, there MUST be opportunity for improvement.
>
>



-- 
Laura BERGOENS
Technicienne supérieure en Informatique et étudiante à l'IMERIR de Perpignan

*Institut Méditerranéen d'Étude et de Recherche en Informatique*


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Laura BERGOENS
Hi Mr. Smith,

Quote: "(though, SQLite successive iterations usually become faster, not
slower, apart from compile-time options that may be different)."

That's what concerns me as well, but since I'm a recent user of SQLite-type
databases and sqlite3, I prefer to blame myself for those differences.

Since I'm working on confidential/sensitive data, I need to take some time
to process your reply and see what I can show you guys and what I can't.
First of all, I'll check all the pragmas and stuff, plus the version of
SQLiteBrowser (now DB Browser for SQLite, indeed) I'm using, etc.

I'll try to get back to you ASAP

2016-09-07 16:24 GMT+02:00 R Smith :

>
>
> On 2016/09/07 2:53 PM, Laura BERGOENS wrote:
>
>> Hello everyone,
>>
>> I tried to get some answers using the mailing list archives, but you guys
>> have been communicating so much on this, it's hard to run through
>> everything that was written ;)
>>
>> Basically, I'm using sqlite3 in my C application. I believe the tool
>> SQLiteBrowser is using it as well.
>>
>
> Yes, though the versions might be hugely different. Further to this (and
> regarding the rest of your post), SQLitebrowser sounds like the name it
> used to have before it became "DB Browser for SQLite" quite some time ago,
> perhaps you'd want to update to the latest to be sure the differences are
> real (though, SQLite successive iterations usually become faster, not
> slower, apart from compile-time options that may be different).
>
> To find out the exact difference, you can SELECT the following:
> sqlite_version()
> sqlite_source_id()
>
> and this query will tell which options were compiled-in:
> PRAGMA compile_options;
>
> If the new version is truly slower, then the most likely culprit is a
> regression for an optimization (or a regression introduced by an
> optimization, compile-time option or new functionality) that causes the
> current version to perform less quick than a previous version. If this is
> the case, we would definitely like to know about it. Could you supply the
> Schema and query (at a minimum) and perhaps upload the DB somewhere (if the
> data is not too sensitive).
>
> Further to this, a 100-second query on a DB the size you describe is
> ludicrous, even for terribly formulated queries - if this is in fact a
> normal situation (i.e. not an obvious regression), might we have a go at
> your query to see if it could possibly be optimized?
> Thanks,
> Ryan
>
>



-- 
Laura BERGOENS
Technicienne supérieure en Informatique et étudiante à l'IMERIR de Perpignan

*Institut Méditerranéen d'Étude et de Recherche en Informatique*


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread R Smith


On 2016/09/07 4:20 PM, Laura BERGOENS wrote:

Hi Mr. Slavin,

As for why the query takes so long, I do know the answer !
Long story short, my application does a lot of calculation and things, I
don't want to get into the details here, but queries are built and
auto-generated piece by piece. Therefore, sometimes the queries aren't
optimized at all.
I solved that issue myself by touching up the queries a bit before
executing them, and everything is fine now.
I was concerned mainly because I figured that maybe some queries were a bit
longer to execute than they should have (let's say 200 ms instead of 100 ms),
and I'm running a lot of queries in the app (approx 1000 per second).


Magic goalposts...

"A query" taking 100s is a VERY VERY different problem to 1000 queries 
taking 100ms each. And to get technical, you shouldn't really need to 
optimize the queries (apart from avoiding the obvious silliness), you 
only need to know how to ask for the data correctly. Optimization is the 
job of the query planner in the DB engine - it should get the best 
fastest query results possible as long as it has all the information 
(which is what ANALYZE will do as others mentioned already) and as long 
as you provide the best Index for the job (which is something we might 
have some suggestions on if we know the schema and typical query is).




I know now that there is nothing to worry about regarding my settings or
pragmas choices, since I get the same execution time with the sqlite3 tool


Maybe nothing to worry about, but that is no reason to leave it be - 
whatever the case is, if some previous version of SQLite can run it in 
1/10th the time, there MUST be opportunity for improvement.




Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread R Smith



On 2016/09/07 2:53 PM, Laura BERGOENS wrote:

Hello everyone,

I tried to get some answers using the mailing list archives, but you guys
have been communicating so much on this, it's hard to run through
everything that was written ;)

Basically, I'm using sqlite3 in my C application. I believe the tool
SQLiteBrowser is using it as well.


Yes, though the versions might be hugely different. Further to this (and 
regarding the rest of your post), SQLitebrowser sounds like the name it 
used to have before it became "DB Browser for SQLite" quite some time 
ago, perhaps you'd want to update to the latest to be sure the 
differences are real (though, SQLite successive iterations usually 
become faster, not slower, apart from compile-time options that may be 
different).


To find out the exact difference, you can SELECT the following:
sqlite_version()
sqlite_source_id()

and this query will tell which options were compiled-in:
PRAGMA compile_options;
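These introspection queries can be run from any host language; a minimal sketch using Python's stdlib sqlite3 module (which reports whatever SQLite build it is linked against):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Which SQLite build is this process actually running?
version, source_id = con.execute(
    "SELECT sqlite_version(), sqlite_source_id()"
).fetchone()
print(version)    # e.g. '3.40.1'
print(source_id)  # date/time plus the check-in hash of the exact build

# Compile-time options often differ between builds and can explain
# performance differences between tools using "the same" SQLite.
options = [row[0] for row in con.execute("PRAGMA compile_options")]
print(options)
```

Comparing these three values across two tools is usually the quickest way to tell whether they really are running the same SQLite.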

If the new version is truly slower, then the most likely culprit is a
regression for an optimization (or a regression introduced by an 
optimization, compile-time option or new functionality) that causes the 
current version to perform less quickly than a previous version. If this
is the case, we would definitely like to know about it. Could you supply 
the Schema and query (at a minimum) and perhaps upload the DB somewhere 
(if the data is not too sensitive).


Further to this, a 100-second query on a DB the size you describe is 
ludicrous, even for terribly formulated queries - if this is in fact a 
normal situation (i.e. not an obvious regression), might we have a go at 
your query to see if it could possibly be optimized?

Thanks,
Ryan



Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Laura BERGOENS
Hi Mr. Slavin,

As for why the query takes so long, I do know the answer !
Long story short, my application does a lot of calculation and things, I
don't want to get into the details here, but queries are built and
auto-generated piece by piece. Therefore, sometimes the queries aren't
optimized at all.
I solved that issue myself by touching up the queries a bit before
executing them, and everything is fine now.
I was concerned mainly because I figured that maybe some queries were a bit
longer to execute than they should have (let's say 200 ms instead of 100 ms),
and I'm running a lot of queries in the app (approx 1000 per second).

I know now that there is nothing to worry about regarding my settings or
pragmas choices, since I get the same execution time with the sqlite3 tool

Thanks again for taking some time to answer

2016-09-07 16:14 GMT+02:00 Simon Slavin :

>
> On 7 Sep 2016, at 3:10pm, Laura BERGOENS 
> wrote:
>
> > I guess I don't have further question regarding my issue, since I have
> the
> > same time execution as with the sqlite3.exe tool, so there are no
> problems
> > with my pragmas or whatsoever.
>
> Nevertheless, 100 seconds is a very long time for a query, and we can
> probably improve on it if you're willing to include your "SELECT" command
> and your schema.  It might be something as simple as creating one
> additional index.
>
> You might also try executing "ANALYZE" in sqlite3.exe and see whether that
> speeds things up.
>
> Simon.



-- 
Laura BERGOENS
Technicienne supérieure en Informatique et étudiante à l'IMERIR de Perpignan

*Institut Méditerranéen d'Étude et de Recherche en Informatique*


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Richard Hipp
On 9/7/16, Simon Slavin  wrote:
>
> On 7 Sep 2016, at 3:10pm, Laura BERGOENS  wrote:
>
>> I guess I don't have further question regarding my issue, since I have the
>> same time execution as with the sqlite3.exe tool, so there are no problems
>> with my pragmas or whatsoever.
>
> Nevertheless, 100 seconds is a very long time for a query, and we can
> probably improve on it if you're willing to include your "SELECT" command
> and your schema.  It might be something as simple as creating one additional
> index.
>
> You might also try executing "ANALYZE" in sqlite3.exe and see whether that
> speeds things up.
>

+1

First run ANALYZE and then retry your query.

Then run ".fullschema --indent" from the command-line shell and send us
the output together with the complete text of the SELECT statement
that is running slowly.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Simon Slavin

On 7 Sep 2016, at 3:10pm, Laura BERGOENS  wrote:

> I guess I don't have further question regarding my issue, since I have the
> same time execution as with the sqlite3.exe tool, so there are no problems
> with my pragmas or whatsoever.

Nevertheless, 100 seconds is a very long time for a query, and we can probably 
improve on it if you're willing to include your "SELECT" command and your 
schema.  It might be something as simple as creating one additional index.

You might also try executing "ANALYZE" in sqlite3.exe and see whether that 
speeds things up.

Simon.


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Laura BERGOENS
Hi Mr. Hipp,
First of all thanks for replying.
As you recommend, I downloaded the sqlite3 executable (I'm on Debian).
Just out of curiosity, I ran both the "SELECT sqlite_source_id()" query
and a long query I have on 3 systems (SQLiteBrowser, sqlite3.exe, and my
app); here is what I get, and it's... interesting :) :

sqlite3.exe :
 -  Version : 2016-08-11 18:53:32
a12d8059770df4bca59e321c266410344242bf7b
 -  Execution time : Approx 100 seconds

my app :
 -  Version : 2015-07-29 20:00:57
cf538e2783e468bbc25e7cb2a9ee64d3e0e80b2f
 -  Execution time : Approx 100 seconds

SQLiteBrowser :
 -  Version : 2014-10-29 13:59:56
3b7b72c4685aa5cf5e675c2c47ebec10d9704221
 -  Execution time : Approx 5 seconds

Needless to say, I get the same set of results in all three tests
(it's a SELECT-type query).
I guess I don't have any further questions regarding my issue, since I get
the same execution time as with the sqlite3.exe tool, so there are no
problems with my pragmas or anything.
That being said, I wonder why SQLiteBrowser is doing so much better than
the others. But as you said, that's not the place for this question ;)

Thanks for replying again.


2016-09-07 15:19 GMT+02:00 Richard Hipp :

> On 9/7/16, Laura BERGOENS  wrote:
> > Hello everyone,
> >
> > I tried to get some answers using the mailing list archives, but you guys
> > have been communicating so much on this, it's hard to run through
> > everything that was written ;)
> >
> > Basically, I'm using sqlite3 in my C application. I believe the tool
> > SQLiteBrowser is using it as well.
> >
> > I noticed that the difference of query time execution between my app and
> > the browser, with the same queries of course, can be widely different. In
> > fact, it feels like it's exponential.
>
> (1) SQLiteBrowser is a 3rd-party tool that is not supported by the
> SQLite developers nor this website.  You are welcome to use
> SQLiteBrowser if it meets your needs.  You won't hurt anyone's
> feelings.  But neither will we support it here.  You'll need to
> contact the developers of SQLiteBrowser to get support for that tool.
>
> To get the best response from this mailing list, it is important to
> show a difference in the query performance of SQLite in your
> application versus the "sqlite3.exe" command-line tool found on the
> http://sqlite.org/download.html page.  Have you tried running your
> queries using sqlite3.exe?  How is the performance there?
>
> (2) Performance differences like this can sometimes arise because the
> slower application is using an older version of SQLite.  What does
> "SELECT sqlite_source_id()" return in the two applications you are
> comparing?
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Laura BERGOENS
Technicienne supérieure en Informatique et étudiante à l'IMERIR de Perpignan

*Institut Méditerranéen d'Étude et de Recherche en Informatique*


Re: [sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Richard Hipp
On 9/7/16, Laura BERGOENS  wrote:
> Hello everyone,
>
> I tried to get some answers using the mailing list archives, but you guys
> have been communicating so much on this, it's hard to run through
> everything that was written ;)
>
> Basically, I'm using sqlite3 in my C application. I believe the tool
> SQLiteBrowser is using it as well.
>
> I noticed that the difference of query time execution between my app and
> the browser, with the same queries of course, can be widely different. In
> fact, it feels like it's exponential.

(1) SQLiteBrowser is a 3rd-party tool that is not supported by the
SQLite developers nor this website.  You are welcome to use
SQLiteBrowser if it meets your needs.  You won't hurt anyone's
feelings.  But neither will we support it here.  You'll need to
contact the developers of SQLiteBrowser to get support for that tool.

To get the best response from this mailing list, it is important to
show a difference in the query performance of SQLite in your
application versus the "sqlite3.exe" command-line tool found on the
http://sqlite.org/download.html page.  Have you tried running your
queries using sqlite3.exe?  How is the performance there?

(2) Performance differences like this can sometimes arise because the
slower application is using an older version of SQLite.  What does
"SELECT sqlite_source_id()" return in the two applications you are
comparing?
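That version check can be scripted from any SQLite binding. A minimal sketch with Python's built-in sqlite3 module (illustrative; the original applications compared in this thread are C programs):

```python
import sqlite3

# Print the exact library build an application is linked against.
# Comparing this string between two applications reveals version mismatches.
con = sqlite3.connect(":memory:")
(source_id,) = con.execute("SELECT sqlite_source_id()").fetchone()
(version,)   = con.execute("SELECT sqlite_version()").fetchone()
print(version, source_id)
```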

-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Query time execution difference between my application and SQLiteBrowser

2016-09-07 Thread Laura BERGOENS
Hello everyone,

I tried to get some answers using the mailing list archives, but you guys
have been communicating so much on this, it's hard to run through
everything that was written ;)

Basically, I'm using sqlite3 in my C application. I believe the tool
SQLiteBrowser is using it as well.

I noticed that the difference of query time execution between my app and
the browser, with the same queries of course, can be widely different. In
fact, it feels like it's exponential.

The difference is really tiny for fast queries, but I have some queries
that take 10 seconds to execute in SQLiteBrowser, and 100 seconds in my
application.

Context: the databases I'm working on aren't large at all (between 10 MB
and 30 MB, no more than 50 MB for sure). The tables I'm working on don't
exceed 100k rows, which is kind of small compared to some of the things I
read on the mailing list.
In order to measure the execution time in my app, I've surrounded the
exec() function of sqlite3 with a clock() mechanism.
Finally, I'm working on an in-memory database, with these pragmas:
page_size = 4096, cache_size = 1, thread safe, automatic_index = on,
synchronous = off, journal_mode = MEMORY, locking_mode = exclusive,
temp_store = memory.
The reason for those pragmas is that I'm essentially running SELECT
queries and very few INSERTs or UPDATEs; I'm mainly reading from the
database.
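For reference, the configuration described above can be sketched as follows, using Python's built-in sqlite3 module for illustration (the original application is in C, and cache_size is kept as stated in the message):

```python
import sqlite3

# A sketch of the read-mostly, in-memory configuration described above.
con = sqlite3.connect(":memory:")
for pragma in (
    "PRAGMA page_size = 4096",        # must be set before tables are created
    "PRAGMA automatic_index = ON",
    "PRAGMA synchronous = OFF",
    "PRAGMA journal_mode = MEMORY",
    "PRAGMA locking_mode = EXCLUSIVE",
    "PRAGMA temp_store = MEMORY",
):
    con.execute(pragma)

# Verify one of the settings took effect (in-memory DBs always journal in
# memory, so this reads back "memory").
(mode,) = con.execute("PRAGMA journal_mode").fetchone()
print(mode)  # memory
```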

I've tried to check the compile options of SQLiteBrowser to begin with, but
I was unable to find them.

Anyway, I was wondering why there is such a difference. I feel like the
pragmas are OK, and I've tested with the same database and the same
queries.

Any ideas? Thanks for reading. If you guys need to know more, I'm here.

-- 
Laura BERGOENS
Technicienne supérieure en Informatique et étudiante à l'IMERIR de Perpignan

*Institut Méditerranéen d'Étude et de Recherche en Informatique*


Re: [sqlite] "Responsive" website revamp at www.sqlite.org

2016-09-07 Thread Dominique Devienne
On Mon, Sep 5, 2016 at 10:55 PM, Richard Hipp  wrote:

> Most of the world views the internet on their phone now, I am told,
> and websites are supposed to be "responsive", meaning that they
> reformat themselves to be attractive and useful for the majority who
> view them through a 320x480 pixel soda-straw.  In an effort to conform
> to this trend, I have made some changes to the *draft* SQLite website
> (http://sqlite.org/draft).  Your feedback on these changes is
> appreciated.  Please be sure to try out the new design both on a
> narrow-screen phone and on a traditional desktop browser.  The goal is
> to provide a more mobile-friendly website without reducing the
> information content available to desktop users.


FWIW, I tried it on my 240x320 phone running Android 2.3 [1],
and a retina iPad. Many little things don't work too well on the phone,
like using a 2-column layout with API names on the left (single line)
and a multi-line description on the right: lots of whitespace below the
former, and truncated text on the right for the latter. Also, the initial
page load and a pinch-to-zoom-out lead to different text sizes, which is
weird. It's impossible to go back to the initial page size, which also has
a hidden RHS margin one can H-scroll (before zooming out), and which
contains text from later in the page. Also, the hidden TOC's closed
icon/char is a square (missing char IMHO) while the open one is the
down-triangle as expected.

On the Retina iPad, the original site's fine, and actually looked better
IMHO. On the draft one, the 3-column API list in cintro becomes 2 columns,
some paragraphs no longer align on the same vertical, little things like
that. But overall both versions are functional; it's just my impression
that the new one doesn't bring anything, and on the contrary adds
tiny/harmless "style regressions" (but it's really just a taste and
attention-to-detail thing). FWIW, I often read the SQLite site on that
iPad.

That 5-year-old phone is smaller than your minimum size, but despite the
weirdness I describe above, one can still access the content, and to be
honest I never tried the current site on it :). Tried it only because you
asked.
--DD

[1] http://www.gsmarena.com/samsung_galaxy_mini_s5570-3725.php