[sqlite] Enabling FTS on a view

2019-09-01 Thread Amjith Ramanujam
Hi,

I've created a view that is a union of two tables. How can I enable FTS for
this view?

Since views don't have a "rowid" I'm unable to populate the FTS virtual
table with the contents of the view.

Is there a workaround for this?

Thank you!
Amjith
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLITE_BUSY, database is locked in "PRAGMA journal_mode"

2019-09-01 Thread Rowan Worth
On Fri, 30 Aug 2019 at 04:18, test user wrote:

> B. Is there any method for determining lock transitions for connections?
> - Is there an API?
> - Would it be possible to use dtrace to instrument SQLite to detect
> lock transitions?
> - Where should I be looking?
>

On unix, SQLite uses fcntl() with cmd=F_SETLK on specific byte offsets to
acquire locks -- I'm not familiar with dtrace, but I've used strace + sed
to watch SQLite lock activity before. eg:

#!/bin/sh

PID=$1

replace() {
 echo "s#F_SETLK, {type=F_$1, whence=SEEK_SET, start=$2, len=$3}#$4#"
}

strace -Ttt -ff -e trace=fcntl -p $PID 2>&1 |
sed \
-e "$(replace RDLCK 1073741824 1 acquireR{PENDING})" \
-e "$(replace RDLCK 1073741825 1 acquireR{RESERVED})" \
-e "$(replace RDLCK 1073741826 510 acquire{SHARED})" \
-e "$(replace WRLCK 1073741824 1 acquireW{PENDING})" \
-e "$(replace WRLCK 1073741825 1 acquireW{RESERVED})" \
-e "$(replace WRLCK 1073741826 510 acquire{EXCLUSIVE})" \
-e "$(replace UNLCK 1073741824 2 release{PENDING+RESERVED})" \
-e "$(replace UNLCK 1073741824 1 release{PENDING})" \
-e "$(replace UNLCK 1073741825 1 release{RESERVED})" \
-e "$(replace UNLCK 1073741826 510 release{SHARED/EXCLUSIVE})" \
-e "$(replace UNLCK 0 0 release{ALL})"
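For reference, the byte offsets matched by the sed rules above are SQLite's
standard locking-region constants (defined in the SQLite source); a quick
sanity check, assuming the stock PENDING_BYTE of 0x40000000:

```python
# SQLite's standard locking byte range (from the SQLite source, os.h):
# all locks are taken on bytes around the 1 GiB mark, in a page SQLite
# deliberately never stores data in.
PENDING_BYTE  = 0x40000000          # 1073741824
RESERVED_BYTE = PENDING_BYTE + 1    # 1073741825
SHARED_FIRST  = PENDING_BYTE + 2    # 1073741826
SHARED_SIZE   = 510                 # SHARED locks cover this many bytes

# These are exactly the offsets the sed rules above translate.
assert PENDING_BYTE  == 1073741824
assert RESERVED_BYTE == 1073741825
assert SHARED_FIRST  == 1073741826
```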

-Rowan


Re: [sqlite] select for power-meter accumulated total readings

2019-09-01 Thread Keith Medcalf
Of course, what we are emulating here is called a "Process Historian", common 
examples being PHD and PI.  So, if you make a few minor adjustments, you can 
make this run just about as fast as a "designed for purpose" Process Historian. 
The changes are that you need to store the data in an "economical format": I 
have chosen to store the timestamp as a floating-point julianday number.  You 
also need to calculate and store the slope to the previous engineering value 
each time you store a value.  You must also insert the data in chronological 
order, and you may not update a value once it has been inserted.

This is about 500 times (or more) faster than using the table you created.  The 
following works the same as the previous example but uses triggers to enforce 
the constraints and calculate the slope for newly stored engineering values.  
You only need to change the '+1 day' to whatever interval you want to use.

This will load the data into the pwr table from your existing table.

drop table pwr;
create table pwr
(
timestamp float primary key,
reading float not null,
ratetoprior float
) without rowid;

create trigger pwr_ins_stamp before insert on pwr
begin
select raise(ABORT, 'Data insertion must be in chronological order')
 where new.timestamp < (select max(timestamp) from pwr);
end;

create trigger pwr_ins_slope after insert on pwr
begin
    update pwr
       set ratetoprior = (new.reading - (select reading
                                           from pwr
                                          where timestamp < new.timestamp
                                          order by timestamp desc
                                          limit 1))
                         /
                         (new.timestamp - (select timestamp
                                             from pwr
                                            where timestamp < new.timestamp
                                            order by timestamp desc
                                            limit 1))
     where timestamp = new.timestamp;
end;

create trigger pwr_upd_error after update of timestamp, reading on pwr
begin
select raise(ABORT, 'Data update prohibited');
end;

insert into pwr (timestamp, reading)
  select julianday(timestamp),
 total_kwh
from power
order by julianday(timestamp);
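The schema and triggers above can be exercised end-to-end; here is a small
Python sketch (in-memory database, made-up julianday values rather than your
power table) to confirm the slope calculation and the chronological-order
constraint:

```python
import sqlite3

# Build the pwr table and the two insert triggers from the script above.
db = sqlite3.connect(":memory:")
db.executescript("""
create table pwr
(
    timestamp   float primary key,
    reading     float not null,
    ratetoprior float
) without rowid;

create trigger pwr_ins_stamp before insert on pwr
begin
    select raise(ABORT, 'Data insertion must be in chronological order')
     where new.timestamp < (select max(timestamp) from pwr);
end;

create trigger pwr_ins_slope after insert on pwr
begin
    update pwr
       set ratetoprior = (new.reading -
                          (select reading from pwr
                            where timestamp < new.timestamp
                            order by timestamp desc limit 1))
                         /
                         (new.timestamp -
                          (select timestamp from pwr
                            where timestamp < new.timestamp
                            order by timestamp desc limit 1))
     where timestamp = new.timestamp;
end;
""")

db.execute("insert into pwr (timestamp, reading) values (2458700.0, 100.0)")
db.execute("insert into pwr (timestamp, reading) values (2458701.0, 124.0)")

# Slope to the prior sample: (124 - 100) / (2458701 - 2458700) = 24 kWh/day.
assert db.execute("select ratetoprior from pwr where timestamp = 2458701.0"
                  ).fetchone()[0] == 24.0

# Out-of-order inserts are rejected by the before-insert trigger.
try:
    db.execute("insert into pwr (timestamp, reading) values (2458700.5, 1.0)")
    assert False, "expected chronological-order ABORT"
except sqlite3.IntegrityError:
    pass
```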

select datetime(timestamp), kwh
  from (
        with periods (timestamp) as
        (
            select julianday(date(min(timestamp), '-1 day') || ' 23:59:59.999')
              from pwr
             union all
            select julianday(datetime(timestamp, '+1 day'))
              from periods
             where timestamp < (select max(timestamp) from pwr)
        ),
        readings (timestamp, reading) as
        (
            select timestamp,
                   (select reading - (b.timestamp - p.timestamp) * ratetoprior
                      from pwr as b
                     where b.timestamp >= p.timestamp
                     limit 1) as reading
              from periods as p
             where timestamp between (select min(timestamp) from pwr)
                                 and (select max(timestamp) from pwr)
        )
        select timestamp,
               reading - lag(reading) over () as kwh
          from readings
       )
 where kwh is not null;
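The readings CTE interpolates backwards from the first stored sample at or
after each period boundary.  A plain-Python sketch of that step, with
hypothetical values:

```python
# Interpolate the meter reading at boundary time t from the next stored
# sample (t_next, next_reading) and its slope-to-prior, as the readings
# CTE does with "reading - (b.timestamp - p.timestamp) * ratetoprior".
def reading_at(t, t_next, next_reading, ratetoprior):
    return next_reading - (t_next - t) * ratetoprior

# Hypothetical samples one julian day apart, 24 kWh consumed between them:
t1, r1 = 2458700.0, 100.0
t2, r2 = 2458701.0, 124.0
slope = (r2 - r1) / (t2 - t1)   # this is what ratetoprior stores

# Halfway through the day the estimated reading is halfway between samples.
assert reading_at(2458700.5, t2, r2, slope) == 112.0
```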

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.





Re: [sqlite] select for power-meter accumulated total readings

2019-09-01 Thread Keith Medcalf

This will get you the consumption projection for each day in the table (the 
timestamp in s represents the ENDING period you are interested in, and you can 
modify it to whatever interval you want; the final query gets the result).  It 
works by computing the slope from each timestamp to the next, building the 
timestamps that you want data for, computing what the reading would be at each 
of those times, and finally taking the difference from the previous timestamp.  
It could probably be optimized somewhat, but it works, with the caveat that it 
assumes the timestamp is in UT1 or a fixed offset from UT1.  Since the 
intervals are defined by s, you could make this a UT1-equivalent table of 
whatever localtime intervals you need.

And of course you should create an index on power(timestamp, total_kwh), 
unless you want it to be really very slow.

with a as (
select timestamp as curr_timestamp,
   total_kwh as curr_kwh,
   lead(timestamp) over (order by timestamp) as next_timestamp,
   lead(total_kwh) over (order by timestamp) as next_kwh
  from power
  order by timestamp
  ),
 b as (
select curr_timestamp,
   curr_kwh,
   (next_kwh - curr_kwh) / (julianday(next_timestamp) - julianday(curr_timestamp)) as rate
  from a
  order by curr_timestamp
  ),
 s (timestamp) as
  (
select date(min(timestamp)) || ' 23:59:59' as timestamp
  from power
  union all
select datetime(timestamp, '+1 day') as timestamp
  from s
 where julianday(s.timestamp) < (select max(julianday(timestamp)) from power)
  ),
 t (timestamp, total_kwh) as
  (
select s.timestamp,
   (select b.curr_kwh + ((julianday(s.timestamp) - julianday(b.curr_timestamp)) * b.rate)
  from b
 where julianday(b.curr_timestamp) <= julianday(s.timestamp)
  order by julianday(b.curr_timestamp) desc) as total_kwh
  from s
  order by s.timestamp
  ),
u (timestamp, kwh) as
  (
select timestamp,
   total_kwh - lag(total_kwh) over (order by timestamp) as kwh
  from t
  order by timestamp
  )
  select date(timestamp),
 kwh
from u
   where kwh is not null
order by 1;


eg, for hourly it would be:

with a as (
select timestamp as curr_timestamp,
   total_kwh as curr_kwh,
   lead(timestamp) over (order by timestamp) as next_timestamp,
   lead(total_kwh) over (order by timestamp) as next_kwh
  from power
  order by timestamp
  ),
 b as (
select curr_timestamp,
   curr_kwh,
   (next_kwh - curr_kwh) / (julianday(next_timestamp) - julianday(curr_timestamp)) as rate
  from a
  order by curr_timestamp
  ),
 s (timestamp) as
  (
select date(min(timestamp)) || ' 00:59:59' as timestamp
  from power
  union all
select datetime(timestamp, '+1 hour') as timestamp
  from s
 where julianday(s.timestamp) < (select max(julianday(timestamp)) from power)
  ),
 t (timestamp, total_kwh) as
  (
select s.timestamp,
   (select b.curr_kwh + ((julianday(s.timestamp) - julianday(b.curr_timestamp)) * b.rate)
  from b
 where julianday(b.curr_timestamp) <= julianday(s.timestamp)
  order by julianday(b.curr_timestamp) desc) as total_kwh
  from s
  order by s.timestamp
  ),
u (timestamp, kwh) as
  (
select timestamp,
   total_kwh - lag(total_kwh) over (order by timestamp) as kwh
  from t
  order by timestamp
  )
  select substr(timestamp,1,13) || ':00:00' as timestamp,
 kwh
from u
   where kwh is not null
order by 1;






Re: [sqlite] select for power-meter accumulated total readings

2019-09-01 Thread Petr Jakeš
So far I have ended up with the following:

WITH miniPow as (
    select date(TIMESTAMP, '+1 day') as d, max(TOTAL_KWH) mini
      from power
     group by date(timestamp)
),
maxiPow as (
    select date(TIMESTAMP) as d, max(TOTAL_KWH) maxi
      from power
     group by date(timestamp)
)
select maxiPow.d, ROUND(maxi - mini, 1)
  from miniPow
  join maxiPow on miniPow.d = maxiPow.d

The only remaining problem is how to calculate average consumption for a time
gap (days) when no consumption data were recorded.
Is this possible somehow?

I am thinking about monitoring it with an external script (Python) and
inserting average virtual data into the database.
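A minimal sketch of that idea (plain Python, hypothetical helper name): spread
the metered difference evenly across the missing days, like the gap between
ID 6 and ID 7 in the example table:

```python
from datetime import date, timedelta

# Spread consumption over a recording gap evenly across the missing days
# (a sketch of the "average virtual data" idea; values are from the
# example table, the helper name is made up).
def fill_gap(day_a, kwh_a, day_b, kwh_b):
    """Yield (day, estimated kWh) for each whole day after day_a up to day_b."""
    days = (day_b - day_a).days
    per_day = (kwh_b - kwh_a) / days
    for i in range(1, days + 1):
        yield day_a + timedelta(days=i), per_day

# Gap like the one between ID 6 (2019-08-02) and ID 7 (2019-08-08):
rows = list(fill_gap(date(2019, 8, 2), 612.1, date(2019, 8, 8), 802.7))
assert len(rows) == 6
assert abs(sum(k for _, k in rows) - (802.7 - 612.1)) < 1e-9
```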

On Thu, Aug 8, 2019 at 9:36 AM Petr Jakeš  wrote:

> I am storing electricity consumption data to the sqlite.
>
> The simple table to store kWh consumption looks like following example
> (accumulated total readings in each row - exactly as you see on your
> electricity meter):
>
> | ID | timestamp           | kWh   |
> |----|---------------------|-------|
> | 1  | 2019-07-31 14:24:25 | 270.8 |
> | 2  | 2019-07-31 14:31:02 | 272.1 |
> | 3  | 2019-08-01 06:56:45 | 382.5 |
> | 4  | 2019-08-01 16:29:01 | 382.5 |
> | 5  | 2019-08-01 20:32:53 | 582.5 |
> | 6  | 2019-08-02 16:18:14 | 612.1 |
> | 7  | 2019-08-08 07:13:04 | 802.7 |
> | .. | ...                 | ...   |
>
>
>- The data interval is not predictable (is random).
>- There can be a day with no records at all (if data transmission
>failure for example).
>- There can be many records with the identical (equal) power
>consumption (no energy consumption) for one or more days.
>
> My question is how to write an SQL select to get the energy consumption for
> a required interval, summarized by days, weeks or months ...
>
> The real challenge is how to get an average for each of the days when
> records were not taken (in the example table, the days between ID 6 and
> ID 7) - each day as a row.
>
> It looks like a simple question but I have been pulling out my hair for two
> days to find a solution.
>
>
>
>


Re: [sqlite] Endless loop possible with simultaneous SELECT and UPDATE?

2019-09-01 Thread Keith Medcalf

On Sunday, 1 September, 2019 11:12, Alexander Vega  wrote:

>Thank you Keith for your answer. It has led me to more questions.

>"though you may or may not have visited all rows"
>From the documentation I did not get the impression that you would
>ever not visit ALL ROWS at least once. Is there a technical reason 
>for this? I would assume a full table scan is walking the un-ordered 
>leaf pages of the B*tree?

How do you know that you are doing a table scan?  This certainly cannot be 
assumed.  Perhaps the AuthTable has 57 columns with a total length of several 
hundred bytes per row but there also happens to be an index on a subset of the 
columns that includes the two columns that you have asked for.  Perhaps you are 
"table scanning" that covering index instead (because it is cheaper than 
reading the actual table)?  There are ways to insist on a table scan (select 
... from table NOT INDEXED ...) for example.  However, you left it up to the 
database engine to choose the most cost effective way to answer your select 
(which is how SQL works ... it is a declarative language ... you declare what 
you want and the database figures out the best way to go about giving you what 
you asked for).
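Whether the engine scans the table or a covering index is visible in EXPLAIN
QUERY PLAN; a small sketch using Python's sqlite3 (the wide-AuthTable shape
here is hypothetical):

```python
import sqlite3

# Whether a "table scan" reads the table itself or a smaller covering
# index is the planner's choice; EXPLAIN QUERY PLAN shows which was made.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE AuthTable (auth_id, expiration, other1, other2)")
db.execute("CREATE INDEX idx_exp ON AuthTable (expiration, auth_id)")

# The index covers both requested columns, so the planner scans it instead
# of the (wider) table.
plan = db.execute(
    "EXPLAIN QUERY PLAN SELECT auth_id, expiration FROM AuthTable").fetchall()
covering = " ".join(row[3] for row in plan)   # column 3 is the detail text
assert "COVERING INDEX" in covering

# NOT INDEXED insists on a real table scan.
plan = db.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT auth_id, expiration FROM AuthTable NOT INDEXED").fetchall()
forced = " ".join(row[3] for row in plan)
assert "COVERING INDEX" not in forced
```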

As a result of updating the first such row thus received the index has now 
changed such that the row you are operating on became the last row in the index 
being scanned.  Therefore there is no "next" row.  You will have visited only 
one row, even though there might have been millions of rows in the table.

>"Your outer query should probably be "select auth_id, expiration from
>AuthTable where expiration <= ? order by +auth_id, +expiration" and
>binding current_time as the parameter since there is no point in 
>retrieving rows that you will not be updating is there?  "

>You are correct that does make sense. I guess I was trying avoid any
>ambiguities of a WHERE clause on the SELECT because I do not
>understand its behavior in this circumstance.

If you cannot understand the behaviour with a WHERE clause, then what would 
make you think that one without a WHERE clause would be any more transparent, 
especially given that all Relational Databases are designed to provide you the 
results you asked for as efficiently as possible?  Perhaps in a few days you 
will discover that you need to create another index for some other purpose, and 
that causes SQLite3 to obtain what you said you wanted in an entirely different 
manner.  When you make any change to the database, do you re-evaluate the 
implementation details of every previously written SQL statement to see if it 
is still compatible with the details you depended on?  What about if you update 
the version of SQLite3?  You should not be dependent on the peculiarities of 
the implementation, since they might change at any time.

>You mentioned two database connections to the same database. Is this
>going to work if I am using Threadsafe mode = 0? 

Yes.  Threadsafe mode only affects programs having multiple threads making 
calls into the sqlite3 library.  These are independent variables (that is you 
can have X threads and Y connections, and X is independent of Y); just because
you have 47 connections does not mean that you have more than 1 thread, nor 
does having 47 threads mean that you have more than 1 connection.  Threads are 
commenced with _beginthread (or equivalent for the OS) calls and connections 
are commenced with sqlite3_open* calls.  The _beginthread operations result in 
the creation of a thread and the sqlite3_open* calls create a database 
connection -- they are not related to each other in any way.  Also consider 
that it is entirely possible for a program to have hundreds of threads yet 
still only be single-threaded as far as sqlite3 is concerned if only one of 
those threads makes use of the sqlite3 library, and that one thread may use 
hundreds of database connections either serially or in parallel or in some 
combination thereof.

>Would the second connection be done through an attach?

No.  The attach statement attaches a database to a connection.  You have to 
have opened the connection first.  Connections are created with the 
sqlite3_open* functions which return a pointer to a database connection.

>Does this conversation change if I wrap the whole select and updates
>in one transaction? e.g. BEGIN...END

No, because isolation is only BETWEEN connections, not WITHIN connections.  And 
the transaction state is per connection.






Re: [sqlite] Non-keyword quoted identifiers parsed as string literals

2019-09-01 Thread William Chargin
Thank you both for your quick and helpful replies! The `quirks.html`
page certainly clears things up. Glad to see that there are new options
to disable this; I reached out to the maintainers of the language
bindings that I use to see if we can get that enabled [1].

[1]: https://github.com/JoshuaWise/better-sqlite3/issues/301

Best,
WC


Re: [sqlite] Endless loop possible with simultaneous SELECT and UPDATE?

2019-09-01 Thread Alexander Vega
Thank you Keith for your answer. It has led me to more questions.

"though you may or may not have visited all rows"
From the documentation I did not get the impression that you would ever not
visit ALL ROWS at least once. Is there a technical reason for this? I would
assume a full table scan is walking the un-ordered leaf pages of the B*tree?

"Your outer query should probably be "select auth_id, expiration from
AuthTable where expiration <= ? order by +auth_id, +expiration" and binding
current_time as the parameter since there is no point in retrieving rows
that you will not be updating is there?  "
You are correct, that does make sense. I guess I was trying to avoid any
ambiguities of a WHERE clause on the SELECT because I do not understand its
behavior in this circumstance.

You mentioned two database connections to the same database. Is this going
to work if I am using Threadsafe mode = 0? Would the second connection be
done through an attach?

Does this conversation change if I wrap the whole select and updates in one
transaction? e.g. BEGIN...END

Thanks



On Sun, Sep 1, 2019 at 1:32 AM Keith Medcalf  wrote:

>
> > Having read :  https://www.sqlite.org/isolation.html
> > Specifically the line "And the application can UPDATE the current row
> > or any prior row, though doing so might cause that row to reappear in a
> > subsequent sqlite3_step()."
>
> > Is it possible to create an endless loop
>
> Eventually you will have no more rows to update and therefore the
> underlying structures become stable and the select loop will eventually run
> out of rows, though you may or may not have visited all rows, and may visit
> some rows two or more times (once before update and more than once after).
>
> If you change the outer query to "select auth_id, expiration from
> AuthTable order by +auth_id, +expiration" then you will PROBABLY never have
> a problem since the results will LIKELY be from a sorter and not from the
> underlying table, and therefore mutation of the underlying tables and
> indexes will not interfere with the result of the outer select, even if
> those mutations affect the AuthTable or the indexes on it.  Some SQL
> variants use a FOR UPDATE clause on a SELECT to tell the query planner that
> you intend to dally-about with the underlying datastore without having the
> proper isolation in place.  The SQLite way of doing this is by requesting a
> row sorter not dependent on indexes by using the +columnname syntax in an
> order by on the select.
>
> Your outer query should probably be "select auth_id, expiration from
> AuthTable where expiration <= ? order by +auth_id, +expiration" and binding
> current_time as the parameter since there is no point in retrieving rows
> that you will not be updating is there?
>
> The correct solution is, of course, to use separate connections so that
> you have isolation between the select and the updates.
>
> You SHOULD be executing the outer select on one connection and the updates
> on another connection.  This will work for journal mode delete unless the
> number of changed pages is too large to fit in sqlite's cache, in which
> case you may get an error from the update statement when it needs to spill
> the cache, and you will need to kaibosh the whole thing and do the updates
> in smaller chunks by putting a limit on the outer select and looping the
> whole thing until there are no more rows to process.  (or increase the
> cache_size to be sufficient).
>
> You can avoid that particular problem by having the database in
> journal_mode=WAL in which case you can even process each update in its own
> transaction if you wish (get rid of the db2.beginimmediate() and
> db2.commit(), though then you will have to handle the eventuality of
> getting errors on the UPDATE).
>
> db1 = Connection('database.db')
> db1.executescript('pragma journal_mode=WAL;')
> db2 = Connection('database.db')
> current_time = datetime.now()
> current_time_plus_one_year = current_time.add(years=1)
> sess_id = ... some constant ...
> db2.beginimmediate()
> for row in db1.execute('select auth_id, expiration from authtable where
> expiration <= ?;',
>(current_time,)):
> db2.execute('update authtable set sesCookie = ?, expiration = ? where
> auth_id = ?;',
> (generate_ses_id(sess_id), current_time_plus_one_year,
> row.auth_id,))
> db2.commit()
>
> --
> The fact that there's a Highway to Hell but only a Stairway to Heaven says
> a lot about anticipated traffic volume.
>
>
>


Re: [sqlite] Non-keyword quoted identifiers parsed as string literals

2019-09-01 Thread Keith Medcalf
On Sunday, 1 September, 2019 00:26, William Chargin  wrote:

>I tracked down a perplexing issue to the following behavior:

>sqlite> CREATE TABLE tab (col);
>sqlite> SELECT nope FROM tab;  -- fails; good
>Error: no such column: nope
>sqlite> SELECT "nope" FROM tab;  -- works?
>sqlite> INSERT INTO tab (col) VALUES (77);
>sqlite> SELECT col FROM tab WHERE nope IS NOT NULL;  -- fails; good
>Error: no such column: nope
>sqlite> SELECT col FROM tab WHERE "nope" IS NOT NULL;  -- works?
>77

>It seems that "nope" is being interpreted as a string literal here,
>while quoted names of valid columns are not:

>sqlite> SELECT "nope", "col" FROM tab;
>nope|77

>I see that this is discussed briefly in the documentation, though the
>exception as written only applies to quoted keywords, which "nope" is
>not: 

>But it seems especially surprising that the parse tree should depend
>on the actual identifier values and table schemata, making the grammar
>not context-free.

>Is this working as intended? Are there plans to make SQLite reject
>such examples as malformed queries instead of implicitly coercing?

Yes, this is working as intended.  Double-quoted strings refer to column names 
if the semantics permit a column name to appear at that location and the column 
name exists.  Otherwise the string is treated as a single-quoted string constant.
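This is easy to reproduce from Python's bundled SQLite, assuming a build that
keeps the legacy double-quote default (i.e. not compiled with SQLITE_DQS=0):

```python
import sqlite3

# With the legacy behaviour, "nope" silently degrades from an identifier
# to the string literal 'nope' because no such column exists.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tab (col)")
db.execute("INSERT INTO tab (col) VALUES (77)")

# "col" exists, so it is an identifier; "nope" does not, so it is a string.
row = db.execute('SELECT "nope", "col" FROM tab').fetchone()
assert row == ("nope", 77)

# The misquoted WHERE clause is always true ('nope' is a non-null string):
rows = db.execute('SELECT col FROM tab WHERE "nope" IS NOT NULL').fetchall()
assert rows == [(77,)]
```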

>My `sqlite3 --version`:
>3.11.0 2016-02-15 17:29:24 3d862f207e3adc00f78066799ac5a8c282430a5f

Version 3.29 added options to DBCONFIG to require that quotes be interpreted 
according to the standard (double quotes are identifiers ONLY and single quotes 
are strings ONLY), plus compile-time defines so that you can permanently make 
this the new default in your custom version of SQLite3.

https://www.sqlite.org/draft/releaselog/3_29_0.html

The default is to keep the old behaviour, as the confusion surrounding the use 
of double and single quotes is quite pervasive, and forcing the correct 
behaviour would cause applications which currently work to stop working if they 
use quotes incorrectly (which is extremely common).


sqlite> .dbconfig
   enable_fkey on
enable_trigger on
   enable_view on
fts3_tokenizer off
load_extension on
  no_ckpt_on_close off
   enable_qpsg off
   trigger_eqp off
reset_database off
 defensive off
   writable_schema off
legacy_alter_table off
   dqs_dml off
   dqs_ddl off
sqlite> CREATE TABLE tab (col);
sqlite> SELECT nope FROM tab;
Error: no such column: nope
sqlite> SELECT "nope" FROM tab;
Error: no such column: nope
sqlite> INSERT INTO tab (col) VALUES (77);
sqlite> SELECT col FROM tab WHERE nope IS NOT NULL;
Error: no such column: nope
sqlite> SELECT col FROM tab WHERE "nope" IS NOT NULL;
Error: no such column: nope






Re: [sqlite] Non-keyword quoted identifiers parsed as string literals

2019-09-01 Thread Ben Kurtovic
> Is this working as intended? Are there plans to make SQLite reject such
> examples as malformed queries instead of implicitly coercing?

This problematic behavior, including discussion on how to disable it, is 
documented here: https://www.sqlite.org/quirks.html#dblquote 


Ben


[sqlite] Non-keyword quoted identifiers parsed as string literals

2019-09-01 Thread William Chargin
I tracked down a perplexing issue to the following behavior:

sqlite> CREATE TABLE tab (col);
sqlite> SELECT nope FROM tab;  -- fails; good
Error: no such column: nope
sqlite> SELECT "nope" FROM tab;  -- works?
sqlite> INSERT INTO tab (col) VALUES (77);
sqlite> SELECT col FROM tab WHERE nope IS NOT NULL;  -- fails; good
Error: no such column: nope
sqlite> SELECT col FROM tab WHERE "nope" IS NOT NULL;  -- works?
77

It seems that "nope" is being interpreted as a string literal here,
while quoted names of valid columns are not:

sqlite> SELECT "nope", "col" FROM tab;
nope|77

I see that this is discussed briefly in the documentation, though the
exception as written only applies to quoted keywords, which "nope" is
not: 

But it seems especially surprising that the parse tree should depend on
the actual identifier values and table schemata, making the grammar not
context-free.

Is this working as intended? Are there plans to make SQLite reject such
examples as malformed queries instead of implicitly coercing?

My `sqlite3 --version`:

3.11.0 2016-02-15 17:29:24 3d862f207e3adc00f78066799ac5a8c282430a5f