[sqlite] [feature request] Quirk-fixing compile-time options

2020-02-26 Thread Joshua Wise
Hi, I was very happy to see the addition of the SQLITE_DQS compile-time option, 
which fixes one of the legacy bugs/quirks of SQLite3.

I was wondering if additional compile-time options could be added to fix two 
other legacy quirks:
- Primary Keys containing NULLs 

- Aggregate queries containing non-aggregate result columns 


It would be great if SQLite3 users could opt out of these behaviors, as it would 
help prevent many difficult-to-find bugs.

Any feedback on this suggestion by a SQLite3 dev would be much appreciated!

Warmly,
Joshua Wise
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request: more robust handling of invalid UTF-16 data

2020-02-16 Thread Maks Verver
*Richard:* the issue with the JSON extension seems unrelated to the issue
that I reported originally, which concerns the SQLite C API
(specifically, the sqlite3_bind_text16() family of UTF-16 functions). My
issue is still not fixed.

I've expanded my original sample code to make it easier to run the test and
reproduce the problems:
https://gist.github.com/maksverver/c3d5da8a0a9f2ec1c2a225209f290e13

Let me know if you need more help understanding/reproducing the problem.

*Dennis:* thanks for raising the issue again, and for digging through the
Unicode standard to confirm the most reasonable behavior.

I think your proposed patch is not quite correct; I spot two problems. One:

  if( c>=0xDC00 && c<=0xE000 && TERM ) {

Here, you should drop the `&& TERM`, otherwise you'll fail to replace an
unpaired trailing (low) surrogate when it occurs at the end of the string.
Also, the test should be c<0xE000, because 0xE000 is a valid character by
itself. Two:

} else if( c>=0xD800 && TERM ){

Here, you should also check c<0xDC00 (or c<0xE000), otherwise you'll
misinterpret valid characters with code points 0xE000 and above as part of
a surrogate pair.

I believe my original patch handles these and all other cases correctly.

  - Maks.


Re: [sqlite] Feature request: more robust handling of invalid UTF-16 data

2020-01-14 Thread Richard Hipp
On 1/14/20, Richard Hipp  wrote:
> I'm having trouble reproducing this.

I went back to version 3.30.1 and I was able to reproduce it.  So I
bisected and found the following:

https://sqlite.org/src/timeline?c=51027f08c0478f1b

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Feature request: more robust handling of invalid UTF-16 data

2020-01-14 Thread Richard Hipp
On 1/13/20, Dennis Snell  wrote:
> We have a JSON document like this which we store in a table.
>
> {"content": "\ud83c\udd70\ud83c(null)\udd71","tags":[]}
>
>
> The JSON is well-formed but the sequence of UTF-16 code points is invalid.
>
> When sqlite reads this data two types of further corruption

I'm having trouble reproducing this.  The following test script (one
of many) illustrates:

CREATE TABLE t1(j TEXT);
INSERT INTO t1(j) VALUES
('{"content": "\ud83c\udd70\ud83c(null)\udd71","tags":[]}');
SELECT length(json_extract(j,'$.content')) FROM t1;
WITH RECURSIVE c(x) AS (VALUES(1) UNION ALL SELECT x+1 FROM c WHERE x<9)
SELECT x, printf('%x',unicode(substr(json_extract(j,'$.content'),x)))
  FROM t1, c;

The column T1.J is loaded up with your original JSON with the invalid
code points. Then I run json_extract() to pull out the invalid string,
but SQLite says that the length is 9, which I think is the correct
answer.  The second SELECT with the CTE in it loops over each
character and prints out the HEX value for that character.  Here is
what I see:

1|1f170
2|fffd
3|28
4|6e
5|75
6|6c
7|6c
8|29
9|fffd

So the initial surrogate pair was rendered correctly as 0x1f170.  The
\ud83c without a following low surrogate was converted into 0xfffd
(which is the right thing to do, is it not?).  Then the 6 ASCII
characters follow.  Finally, the last isolated low surrogate is
(correctly?) converted into 0xfffd.

What behavior were you expecting?

Is there something that I can be doing differently to make it misbehave?

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Feature request: more robust handling of invalid UTF-16 data

2020-01-14 Thread Detlef Golze
I want to second that. This leads to a situation where data is accepted by the 
database but there is no way to read it back, or more precisely, I get the 
wrong (i.e. different) data back. I didn't check the suggested patch, but I 
don't believe it will work in all cases. I'd rather prefer rejecting such 
strings or implicitly converting them to a BLOB, which at least provides a way 
to get the data back.

Thanks,
Detlef.

-Original Message-
From: sqlite-users  On Behalf Of 
Dennis Snell
Sent: Monday, 13 January 2020 21:57
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] Feature request: more robust handling of invalid UTF-16 
data


Re: [sqlite] Feature request: more robust handling of invalid UTF-16 data

2020-01-13 Thread Dennis Snell
I’d like to raise this issue again and give my support for what Maks Verver 
recommended in 
https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg110107.html


Independently I came to this bug while working on an issue in Simplenote’s 
Android app where our data was being corrupted when saved to sqlite inside the 
Android SDK. We received some invalid UTF-16 sequences and instead of rejecting 
them or decoding it properly sqlite is further mangling them and introducing 
more corruption.


Example:
We have a JSON document like this which we store in a table.


    {"content": "\ud83c\udd70\ud83c(null)\udd71","tags":[]}


The JSON is well-formed but the sequence of UTF-16 code units is invalid. We 
have fixed our side of the equation, which prevents creating this content, but 
we still receive the invalid sequence from time to time from older client 
libraries.


When sqlite reads this data two types of further corruption occur: reading 
beyond a code unit subsequence; and conflating high and low surrogates.


Reading beyond a code unit subsequence:


When the `TERM` check was introduced[1] and updated[2] it appears to have been 
designed for the case where a string ends mid-surrogate, but it does not 
attempt to address unpaired surrogates in the middle of an input text. In our 
case the `READ_UTF16BE` macro accepts the second `\ud83c` code unit and then 
consumes the following `\u0028`, which is the separate and well-formed "(". In 
turn this produces the more corrupted value `\ud83c\udc28`, code point U+1F028, 
plus "null)" without the leading "(".


Conflating high and low surrogates:


The `READ_UTF16__` macros both attempt to start processing a surrogate pair 
whenever the input code unit satisfies `0xD800 <= c <= 0xE000`. Because of 
this they will pick up on unpaired low surrogates, consume the next character, 
and then create a more corrupted Unicode string.


In our case, once we reach the `\udd71` the macro consumes the following quote, 
which in the JSON document closes the string, and puts them together as 
`\udd71\u0022` producing the invalid code point U+6C422. Moreover, because it 
consumed the string-ending quote it also corrupted the entire JSON document, as 
the new output resembles the following:


    {"content": "\ud83c\udd70\ud83c\udc28null)\ud971\udc22,"tags":[]}


That is, we write this invalid Unicode sequence but valid JSON document into 
sqlite and read back an invalid Unicode sequence _and_ invalid JSON (see the 
missing quote before "tags").


Supporting Unicode spec:


The Unicode specification[3] sections 3.2 and 3.9 speak to this situation and 
provide a specific comparable example:

    When a process interprets a code unit sequence which purports to be in a
    Unicode character encoding form, it shall treat ill-formed code unit
    sequences as an error condition and shall not interpret such sequences
    as characters.

    Furthermore, such a process must not treat any adjacent well-formed code
    unit sequences as being part of those ill-formed code unit sequences.

    For example, with the input UTF-8 code unit sequence , such a UTF-8
    conversion process must not return  or , because either of those outputs
    would be the result of interpreting a well-formed subsequence as being
    part of the ill-formed subsequence. The expected return value for such a
    process would instead be .


Supporting Maks' suggestion to use the replacement character on error, section 
23.8[4] provides this guidance:

    It [U+FFFD] can be substituted for any "unknown" character in another
    encoding that cannot be mapped in terms of known Unicode characters. It
    can also be used as one means of indicating a conversion error, when
    encountering an ill-formed sequence in a conversion between Unicode
    encoding forms.


Patching:


The `READ_UTF16__` macros thus should not only do what Maks proposed, which is 
to verify that the code unit following a surrogate half is also a surrogate 
half, but also verify that we don't start interpreting a surrogate sequence 
when encountering an unpaired low surrogate. I propose this change instead:


    #define READ_UTF16LE(zIn, TERM, c){                               \
        c = (*zIn++);                                                 \
        c += ((*zIn++)<<8);                                           \
        if( c>=0xDC00 && c<=0xE000 && TERM ){                         \
            c = 0xFFFD;                                               \
        } else if( c>=0xD800 && TERM ){                               \
            int c2 = (zIn[0] | (zIn[1]<<8));                          \
            if( c2>=0xDC00 && c2<0xE000 ){                            \
                zIn += 2;                                             \
                c = (c2&0x03FF) + ((c&0x003F)<<10)                    \
                  + (((c&0x03C0)+0x0040)<<10);                        \
            } else {                                                  \
                c = 0xFFFD;                                           \
            }                                                         \
        }                                                             \
    }



This will solve both the problem of reading past an ill-formed surrogate 
sequence and that of interpreting an ill-formed surrogate sequence. I love 
sqlite, and I really appreciate how the code is laid out, which made it so 
easy to find this macro in the source and identify the problem.


Dennis Snell
Automattic, Inc.


[1]: https://sqlite.org/src/info/19064d7cea


Re: [sqlite] Feature request: (VALUES (1), (2), (3)) AS t(n)

2019-11-08 Thread Ainar Garipov
(Hopefully this works.)

> How about something like:
>
> with t(a, b) as (values (1, 1), (2, 2)) select a, b from t;

Yeah, CTEs are an obvious alternative.  I mostly request this AS t(n)
feature because I have had some otherwise-portable PostgreSQL queries
that I needed to tweak for SQLite.  The thing with PostgreSQL is that
until very recently CTEs would generally perform worse than sub-queries,
so a lot of people avoided them.  My SQLite version used CTEs with no
issues, but I still thought I might as well request the feature, to
improve compatibility.


Re: [sqlite] Feature request: (VALUES (1), (2), (3)) AS t(n)

2019-11-07 Thread Keith Medcalf
How about something like:

with t(a, b) as (values (1, 1), (2, 2)) select a, b from t;

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Ainar Garipov
>Sent: Thursday, 7 November, 2019 13:44
>To: sqlite-users@mailinglists.sqlite.org
>Subject: [sqlite] Feature request: (VALUES (1), (2), (3)) AS t(n)
>
>Good day!  It's my first time writing to a mail list for a very long
>time, so I may not get this right the first time.
>
>I would like to request the ability to write:
>
>  SELECT *
>FROM (VALUES (1, 1), (2, 2)) AS t(a, b)
>  ;
>
>Instead of:
>
>  SELECT t.column1 AS a, t.column2 AS b
>FROM (VALUES (1, 1), (2, 2)) AS t
>  ;
>
>That is, support column names in AS aliases.  Currently I can do that in
>PostgreSQL but not in SQLite.  The latter gives me this error:
>
>  Error: near "(": syntax error


[sqlite] Feature request: (VALUES (1), (2), (3)) AS t(n)

2019-11-07 Thread Ainar Garipov
Good day!  It's my first time writing to a mail list for a very long
time, so I may not get this right the first time.

I would like to request the ability to write:

  SELECT *
FROM (VALUES (1, 1), (2, 2)) AS t(a, b)
  ;

Instead of:

  SELECT t.column1 AS a, t.column2 AS b
FROM (VALUES (1, 1), (2, 2)) AS t
  ;

That is, support column names in AS aliases.  Currently I can do that in
PostgreSQL but not in SQLite.  The latter gives me this error:

  Error: near "(": syntax error


Re: [sqlite] Feature request: import MySQL dumps in CLI

2019-08-07 Thread Simon Slavin
On 7 Aug 2019, at 9:16pm, Thomas Kurz  wrote:

> Well, that's why I asked for an *import* support. It's widely spread practice 
> to offer at least import capabilities from other software.

This is what the .import function in SQLite's shell tool is for.  It reads a 
well-documented text format.  All you need to do is have a tool, supplied by 
the MySQL/MariaDB development team, which writes that format.

I do not think that you're going to see the SQLite development team write any 
code which depends on a library for another database system.  The commitment to 
upkeep as the library changes would be too expensive.  Bear in mind that 
anything added to SQLite has to be supported for the next 31 years because of 
existing commitments to existing consortium members.


Re: [sqlite] Feature request: import MySQL dumps in CLI

2019-08-07 Thread Graham Holden
Wednesday, August 07, 2019, 9:16:17 PM, Thomas Kurz  
wrote:

>> I highly doubt the SQLite team will undertake this task. They
>> Surely have the skill to do so, but their priority is the one
>> software product you desire to use, undoubtedly due to its
>> high utility.  I doubt that utility would exist if they were
>> to wander off tacking the conversion challenge for the other
>> popular database systems.

Another reason they might not want to is that by making it an
"official feature" then -- at least to some degree -- they would
need to track changes in MySQL/MariaDB that might break things.
 
Graham Holden





Re: [sqlite] Feature request: import MySQL dumps in CLI

2019-08-07 Thread Thomas Kurz
> I highly doubt the SQLite team will undertake this task. They
> Surely have the skill to do so, but their priority is the one
> software product you desire to use, undoubtedly due to its
> high utility.  I doubt that utility would exist if they were
> to wander off tacking the conversion challenge for the other
> popular database systems.
 
Well, that's why I asked for *import* support. It's common practice to offer 
at least import capabilities for other software's formats. The other way round 
would be up to MySQL/MariaDB.



[sqlite] Feature request: import MySQL dumps in CLI

2019-08-07 Thread Larry Brasfield
Mr. Kurz writes:
> I suppose I am not the only one having to convert between
> MySQL/MariaDB and SQLite databases every now and then. I
> know there are converters for MySQL dumps but none of any
> I have ever tried did work nearly reliable.

If you dislike the available converters and do not wish to
bother writing or adapting one to your tastes, you could use
any of several ETL tools, such as Talend or Pentaho Kettle.
These tools are good at extracting data and loading it into
a(nother) database. You might have to handle getting your
schema translated, but that will mainly involve minor editing
of the converter outputs you have already obtained.

Mr. Kurz wrote further:
> So my suggestion would be to add an import feature to the
> CLI that allows to directly import MySQL/MariaDB dumps into
> an SQLite database keeping as many information as possible.
> As SQLite already has a complete SQL parser I expect much
> better results than with existing converters.

I highly doubt the SQLite team will undertake this task. They
surely have the skill to do so, but their priority is the one
software product you desire to use, undoubtedly due to its
high utility.  I doubt that utility would exist if they were
to wander off tackling the conversion challenge for the other
popular database systems.

Best regards,
- 
Larry Brasfield



Re: [sqlite] Feature request: import MySQL dumps in CLI

2019-08-07 Thread Stephen Chrzanowski
The BIGGEST problem I had with importing data from MySQL to SQLite is the
table definitions.

If you do two dumps, one specifically for the table definitions and the other
for the actual data to be imported, you could get a script to handle the table
definition file to make it conform to what SQLite can use, while the raw data
is sitting there ready to be imported.  Most of the work will be against the
table definitions, as MySQL dumps information about which storage engine is to
be used, while SQLite has no such concept and breaks.

The work to be done would be a multi-pass "string replacement": remove the
text that is superfluous to SQLite but that MySQL requires, like the engine
used by MySQL; remove anything that's MySQL-specific in regards to table
definitions, wiping it or swapping it for a generic TEXT or NUMERIC
definition; etc.  If you run into problems with the import, it'd be easy
enough to modify the script and rerun the job.

I learned in my venture that the `/*! NNNNN ... */` comments tell the
importing MySQL engine that NNNNN is a version number that must be met for the
enclosed command to be executed.  So (off the cuff) if the export was done on
MySQL 1.2, and the importer is MySQL 1.1, any line reading
`/*!10200 {some command} */` would not execute on the 1.1 version.  Some of
these statements still must be executed for SQLite to behave as expected, so
you can't just blindly remove all such lines.


On Wed, Aug 7, 2019 at 12:13 PM Thomas Kurz  wrote:

> Dear SQLite team,
>
> I suppose I am not the only one having to convert between MySQL/MariaDB
> and SQLite databases every now and then. I know there are converters for
> MySQL dumps but none of any I have ever tried did work nearly reliable.
>
> So my suggestion would be to add an import feature to the CLI that allows
> to directly import MySQL/MariaDB dumps into an SQLite database keeping as
> many information as possible. As SQLite already has a complete SQL parser I
> expect much better results than with existing converters.
>
> Kind regards,
> Thomas
>


Re: [sqlite] Feature request: import MySQL dumps in CLI

2019-08-07 Thread Thomas Kurz
er not null references geodb_locations
,  coord_type   integer not null check (coord_type=20010)
,  lat  double precision
,  lon  double precision
,  coord_subtypeinteger
,  valid_since  date
,  date_type_since  integer
,  valid_until  date not null
create table geodb_textdata (
  loc_id   integer not null references geodb_locations
,  text_typeinteger not null
,  text_val varchar(255) not null,  /* 
varchar(2000)? */
,  text_locale  varchar(5),  /* ISO 639-1 */
,  is_native_lang   smallint(1)
,  is_default_name  smallint(1)
,  valid_since  date
,  date_type_since  integer
,  valid_until  date not null
,  date_type_until  integer not null
,check (
,  (
,(
,  (text_type = 50010 or text_type = 50014 or
,   text_type = 50012 or text_type = 50070 or
,   text_type = 50071 or text_type = 50080 or
,   text_type = 50080 or text_type = 50090
,  ) and
,  text_locale like '__%' and
,  is_native_lang is not null and
,  is_default_name is not null
,) or
,(
,  (text_type = 50011 or text_type = 50013 or
,   text_type = 50030 or text_type = 50050 or
,   text_type = 50060
,  ) and
,  text_locale is null and
,  is_native_lang is null and
,  is_default_name is null
,)
,  ) and
,(
,  (valid_since is null and date_type_since is null) or
,  (valid_since is not null and date_type_since is not null)
,)
create table geodb_intdata (
  loc_id   integer not null references geodb_locations
,  int_type integer not null
,  int_val  bigint not null
,  valid_since  date
,  date_type_since  integer
,  valid_until  date not null
create table geodb_floatdata (
  loc_id   integer not null references geodb_locations
,  float_type   integer not null
,  float_valdouble precision not null,/* double / float??? */
,  valid_since  date
,  date_type_since  integer
,  valid_until  date not null
create table geodb_changelog (
  id   integer not null primary key
,  datumdate not null
,  beschreibung text not null
,  autorvarchar(50) not null
END TRANSACTION; 


- Original Message - 
From: Simon Slavin 
To: SQLite mailing list 
Sent: Wednesday, August 7, 2019, 18:25:45
Subject: [sqlite] Feature request: import MySQL dumps in CLI

On 7 Aug 2019, at 5:13pm, Thomas Kurz  wrote:

> So my suggestion would be to add an import feature to the CLI that allows to 
> directly import MySQL/MariaDB dumps into an SQLite database keeping as many 
> information as possible. As SQLite already has a complete SQL parser I expect 
> much better results than with existing converters.

MySQL has a tool which dumps the database as SQL commands.  SQLite has a tool 
which reads SQL commands and makes a database from them.

However, there are occasional compatibility problems with using the two 
together because of differing rules on text quoting, entity names, etc..  If 
you're running into one of these give us some details, and what OS you're 
using, and we'll see if we can figure out a script which works around them.


Re: [sqlite] Feature request: import MySQL dumps in CLI

2019-08-07 Thread Simon Slavin
On 7 Aug 2019, at 5:13pm, Thomas Kurz  wrote:

> So my suggestion would be to add an import feature to the CLI that allows to 
> directly import MySQL/MariaDB dumps into an SQLite database keeping as many 
> information as possible. As SQLite already has a complete SQL parser I expect 
> much better results than with existing converters.

MySQL has a tool which dumps the database as SQL commands.  SQLite has a tool 
which reads SQL commands and makes a database from them.

However, there are occasional compatibility problems with using the two 
together because of differing rules on text quoting, entity names, etc..  If 
you're running into one of these give us some details, and what OS you're 
using, and we'll see if we can figure out a script which works around them.


[sqlite] Feature request: import MySQL dumps in CLI

2019-08-07 Thread Thomas Kurz
Dear SQLite team,

I suppose I am not the only one having to convert between MySQL/MariaDB and 
SQLite databases every now and then. I know there are converters for MySQL 
dumps, but none that I have ever tried worked reliably.

So my suggestion would be to add an import feature to the CLI that allows 
directly importing MySQL/MariaDB dumps into an SQLite database, keeping as 
much information as possible. As SQLite already has a complete SQL parser, I 
expect much better results than with existing converters.

Kind regards,
Thomas



Re: [sqlite] feature request -- enhance strftime() implementing %V, %g and %G for week of year according to ISO 8601

2019-05-13 Thread Nißl Reinhard
Hello Luuk,

in the man page you mentioned there is a similar sentence which is also wrong 
and needs to be corrected:

"When three or fewer days of the first calendar week of the new year fall 
within that year, then the ISO 8601 week-based system counts those days as part 
of week 53 of the preceding year."

A unit test should verify that the ISO week for 2012-01-01 is 52 (not 53) and 
the ISO year is 2011.

Nice to play with: https://www.timeanddate.com/date/weeknumber.html

Bye.

-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Luuk
Sent: Thursday, May 9, 2019 7:57 PM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] feature request -- enhance strftime() implementing %V, %g 
and %G for week of year according to ISO 8601


On 9-5-2019 18:20, Nißl Reinhard wrote:
> Hi,
>
> it would be nice, if sqlite3's strftime() would support the following 
> formatting codes:
> %gThe last 2 digits of the ISO 8601 week-based year as a decimal number 
> (00 - 99)
> %GThe ISO 8601 week-based year as a decimal number
> %VISO 8601 week number as a decimal number (00 - 53)
minimum value for %V is 01.
>
> The ISO 8601 week and week-based year produced by %V, %g, and %G, uses a week 
> that begins on Monday, where week 1 is the week that contains January 4th, 
> which is the first week that includes at least four days of the year. If the 
> first Monday of the year is the 2nd, 3rd, or 4th, the preceding days are part 
> of the last week of the preceding year. For those days, %V is replaced by the 
> last week of the preceding year, and both %g and %G are replaced by the 
> digits of the preceding year.
>
> The above text has originally been taken from this documentation and 
> corrected regarding "%V is replaced by 53":
> https://docs.microsoft.com/de-de/cpp/c-runtime-library/reference/strftime-wcsftime-strftime-l-wcsftime-l?view=vs-2019

better docs:

http://man7.org/linux/man-pages/man3/strftime.3.html

;-)

>
> Thanks.
>
> Bye.
> --
> Reinhard Nißl
> reinhard.ni...@fee.de
>


Re: [sqlite] Feature request, sqlite3_stmt_action

2019-05-10 Thread siscia
Working with RediSQL another use case comes to mind for some implementation
of the interface we were discussing.

How to detect SELECT statements that return empty.

SQLite simply returns SQLITE_DONE in both cases, which makes it impossible to
know whether it was an empty SELECT or something else.

A possible solution that I tried was to check whether the executed query was
read_only; unfortunately, statements like `CREATE TABLE IF NOT EXISTS
foo(a,b);` are also reported read_only when the table does exist.
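For what it's worth, the same ambiguity is visible from high-level wrappers; a rough illustration using Python's sqlite3 module (Python stands in for the C API here, where both statements step straight to SQLITE_DONE):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo(a, b)")

# An empty SELECT and a no-op "CREATE TABLE IF NOT EXISTS" both complete
# without producing any row; at the C level each just returns SQLITE_DONE.
empty_select = con.execute("SELECT * FROM foo").fetchone()
noop_ddl = con.execute("CREATE TABLE IF NOT EXISTS foo(a, b)").fetchone()
print(empty_select, noop_ddl)  # None None
```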



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] feature request -- enhance strftime() implementing %V, %g and %G for week of year according to ISO 8601

2019-05-09 Thread Luuk


On 9-5-2019 18:20, Nißl Reinhard wrote:

Hi,

It would be nice if sqlite3's strftime() supported the following
formatting codes:
%g  The last 2 digits of the ISO 8601 week-based year as a decimal number 
(00 - 99)
%G  The ISO 8601 week-based year as a decimal number
%V  ISO 8601 week number as a decimal number (00 - 53)

minimum value for %V is 01.


The ISO 8601 week and week-based year produced by %V, %g, and %G use a week 
that begins on Monday, where week 1 is the week that contains January 4th, 
which is the first week that includes at least four days of the year. If the 
first Monday of the year is the 2nd, 3rd, or 4th, the preceding days are part 
of the last week of the preceding year. For those days, %V is replaced by the 
last week of the preceding year, and both %g and %G are replaced by the digits 
of the preceding year.

The above text has originally been taken from this documentation and corrected regarding 
"%V is replaced by 53":
https://docs.microsoft.com/de-de/cpp/c-runtime-library/reference/strftime-wcsftime-strftime-l-wcsftime-l?view=vs-2019


better docs:

http://man7.org/linux/man-pages/man3/strftime.3.html

;-)



Thanks.

Bye.
--
Reinhard Nißl
reinhard.ni...@fee.de



[sqlite] feature request -- enhance strftime() implementing %V, %g and %G for week of year according to ISO 8601

2019-05-09 Thread Nißl Reinhard
Hi,

It would be nice if sqlite3's strftime() supported the following
formatting codes:
%g  The last 2 digits of the ISO 8601 week-based year as a decimal number 
(00 - 99)
%G  The ISO 8601 week-based year as a decimal number
%V  ISO 8601 week number as a decimal number (00 - 53)

The ISO 8601 week and week-based year produced by %V, %g, and %G use a week 
that begins on Monday, where week 1 is the week that contains January 4th, 
which is the first week that includes at least four days of the year. If the 
first Monday of the year is the 2nd, 3rd, or 4th, the preceding days are part 
of the last week of the preceding year. For those days, %V is replaced by the 
last week of the preceding year, and both %g and %G are replaced by the digits 
of the preceding year.

The above text has originally been taken from this documentation and corrected 
regarding "%V is replaced by 53":
https://docs.microsoft.com/de-de/cpp/c-runtime-library/reference/strftime-wcsftime-strftime-l-wcsftime-l?view=vs-2019
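For anyone wanting to cross-check the requested semantics, Python's datetime.isocalendar() implements the same ISO 8601 rule; a small illustration (not SQLite code, just a reference for what %G/%V/%g should produce):

```python
from datetime import date

# ISO 8601: weeks start on Monday, and week 1 is the week containing Jan 4.
# isocalendar() returns (iso_year, iso_week, iso_weekday), i.e. the values
# behind %G and %V (%g is just iso_year modulo 100).
print(tuple(date(2021, 1, 1).isocalendar()))  # (2020, 53, 5): still 2020-W53
print(tuple(date(2021, 1, 4).isocalendar()))  # (2021, 1, 1): Monday of week 1
```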

Thanks.

Bye.
--
Reinhard Nißl
reinhard.ni...@fee.de



Re: [sqlite] Feature request: FILTER on aggregate functions

2019-05-06 Thread Igor Tandetnik

On 5/6/2019 5:19 PM, Shawn Wagner wrote:

I just found out that postgres (And possibly others?) supports FILTER on
aggregate functions in general, not just when they're used as a window
function.

Trivial example:

 SELECT count(*), count(*) FILTER (WHERE amount > 100) FROM blah

which is a lot cleaner than

 SELECT count(*), sum(CASE WHEN amount > 100 THEN 1 ELSE 0 END) FROM blah


sum(amount > 100)   is sufficient.
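Since SQLite comparisons evaluate to 0 or 1, the spellings agree; a quick check via Python's sqlite3 module (toy data, for illustration — the FILTER form itself needs a recent SQLite and is omitted here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE blah(amount INTEGER)")
con.executemany("INSERT INTO blah VALUES (?)", [(50,), (150,), (200,)])

# amount > 100 yields 0 or 1, so summing it counts the matching rows.
row = con.execute("""
    SELECT count(*),
           sum(CASE WHEN amount > 100 THEN 1 ELSE 0 END),
           sum(amount > 100)
    FROM blah""").fetchone()
print(row)  # (3, 2, 2)
```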
--
Igor Tandetnik




[sqlite] Feature request: FILTER on aggregate functions

2019-05-06 Thread Shawn Wagner
I just found out that postgres (And possibly others?) supports FILTER on
aggregate functions in general, not just when they're used as a window
function.

Trivial example:

SELECT count(*), count(*) FILTER (WHERE amount > 100) FROM blah

which is a lot cleaner than

SELECT count(*), sum(CASE WHEN amount > 100 THEN 1 ELSE 0 END) FROM blah

Would be nice to have someday...


[sqlite] Feature request: ALTER TABLE ... ADD CONSTRAINT...

2019-04-04 Thread Marko Vodanović
Hi!

I don't know if this has already been discussed; most probably it has. But are 
there any plans to implement the possibility of adding constraints to existing 
tables? I'm not necessarily asking for full-blown functionality. Currently, to 
do that you have to drop the table and recreate it with the constraint. 
Possibly you could rename it, create the new table with the constraint, 
transfer the data manually from old to new, and then drop the old table? I'd 
be fine if this functionality behaved like that; I'd be fine even if it just 
dropped the table and recreated it, losing everything stored inside (if it's 
noted in the docs with big shiny red letters that that's how it works).
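The rename/copy/drop sequence described above can be scripted today; a minimal sketch with Python's sqlite3 module (table and constraint are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users(id INTEGER PRIMARY KEY, email TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [(1, "a@example.com"), (2, "b@example.com")])

# Emulate: ALTER TABLE users ADD CONSTRAINT ... UNIQUE(email)
with con:  # keep the whole migration in one transaction
    con.execute("ALTER TABLE users RENAME TO users_old")
    con.execute("CREATE TABLE users(id INTEGER PRIMARY KEY, email TEXT UNIQUE)")
    con.execute("INSERT INTO users SELECT id, email FROM users_old")
    con.execute("DROP TABLE users_old")

print(con.execute("SELECT count(*) FROM users").fetchone())  # (2,)
```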
Some background: I'm a developer and we use the concept of migration files to 
make changes to the DB structure. If a feature needs changes in the DB, we 
write a new migration file which migrates the structure from the old version 
to the new one. We support different DB engines to feed the service, and the 
migration logic is abstracted by a library which detects which DBMS is running 
and constructs the appropriate SQL for it. We use SQLite just for running our 
tests quickly while developing; proper instances mostly run Postgres or 
something similar. When we start our tests, the DB is empty, and migrations 
are then run one by one to reach the DB structure the application needs. This 
is why, if you were to implement adding constraints in a way that completely 
wipes the table, I personally wouldn't care (if that's easier for you).

What's causing us problems right now is that in our migration files we have to 
write something approximately like:

    if (dbType == SQLite) {
        "DROP TABLE ...; CREATE TABLE ... CONSTRAINT ...;"
    } else {
        "ALTER TABLE ... ADD CONSTRAINT ...;"
    }

If we could use the same way of altering the DB for SQLite as for other 
databases, it would help us so much. Migrations are also used to update 
existing systems with new releases whose data mustn't be wiped, so going for 
the SQLite approach with non-SQLite databases isn't an option. We're 
occasionally weighing whether SQLite is worth the hassle, or whether we should 
just drop it and use something like Postgres even for running our local tests, 
even though that would be slower (in a Dockerized environment) and more of a 
hassle to set up. I'm sure we're not the only devs out there with these kinds 
of problems.

So... Is this something we can look forward to in the near / not-so-near 
future? Or did you already decide against it?

Best regards,
Marko


Re: [sqlite] Feature request, sqlite3_stmt_action

2019-03-28 Thread Simon Slavin
On 28 Mar 2019, at 10:25am, Dominique Devienne  wrote:

> Some info about the statement from EXPLAIN QUERY PLAN, that DRH would agree 
> to, and accept to "publicly document" and thus support would be nice,

You want something like

EXPLAIN EFFECTS OF 

and it should answer with zero or more lines.  Each line contains a single 
string column.  The strings are things like 'change data', 'change schema', 
'change pragma', 'return nothing', 'return table', 'return one row'.

The 'change' results do not mean that anything actually changes; they mean that 
the command is the kind of command intended to make that change. For example, 
an UPDATE command that changes no rows (or perhaps even refers to a table which 
doesn't exist) still returns 'change data'. The 'return' results are similar: 
'return table' means the command can return any number of rows, not how many 
rows it actually will return.

If 'change pragma' appears, then perhaps another line could say which one, 
like 'change pragma journal_mode'.

This would be useful for people writing a SQLite tool, or those with a setup 
which might involve an injection vulnerability.  Whether it's actually worth 
building into SQLite I have no idea.

Simon.


Re: [sqlite] Feature request: dynamically referenced bind parameters

2019-03-28 Thread Joshua Thomas Wise
Oooo this is really neat. Thanks!


> On Mar 27, 2019, at 5:12 PM, Richard Hipp  wrote:
> 
> See https://www.sqlite.org/carray.html
> 
> -- 
> D. Richard Hipp
> d...@sqlite.org


Re: [sqlite] Feature request, sqlite3_stmt_action

2019-03-28 Thread Dominique Devienne
On Thu, Mar 28, 2019 at 10:59 AM R Smith  wrote:

> Maybe even, if possible, This query updates these tables: x1, x2, x3...
> etc. (some of which might hide behind an FK relation or Trigger)  but I
> know this is pushing my luck.  :)
>

What I ended-up doing is introspecting the VDBE program of the statement.
It's not exactly easy, and can be brittle too, since the output is not
"publicly documented"
so subject to change w/o notice, but I consider this approach less brittle
than parsing the SQL.


> Plus, I really do not mind if this explain takes some time, it will be
> faster and more future-proof than any self-parsing one can do.


Right. Some info about the statement from EXPLAIN QUERY PLAN, that DRH would
agree to, and accept to "publicly document" and thus support would be nice,
good idea.
Not an AST of course, but it would already go a long way for those of us that
need/wish for that.

I'll put it on my Xmas list :). --DD


Re: [sqlite] Feature request, sqlite3_stmt_action

2019-03-28 Thread R Smith

On 2019/03/28 9:07 AM, Olivier Mascia wrote:

On 27 March 2019 at 18:04, siscia wrote:

I would like to propose a function (named `sqlite3_stmt_action` for the sake
of discussion) that allow to understand if a specific statement is either a
SELECT, UPDATE, DELETE or INSERT.

There is probably a much more complex need that I did not grasp reading this 
request. What stops you from parsing the beginning of the statement text to 
decide if it is a select, update, delete or insert?



Having done this already, allow me to offer some recounting of the 
difficulties:


First there are typically two things a programmer is interested in 
(well, if you maintain an SQLite management utility or the like):
-  First: Will the Query produce data output back that I need to show to 
the user?, or will it silently execute?
-  If it does produce output, is this confirming the state (such as when 
calling a pragma command), or is this output that I need to show the 
user, or perhaps log?


-  Then: Will the query change the database?
-  Put another way, will it work on a read-only file?
-  or, will it alter the table content that is currently displayed? Do I 
need to re-run the display query?

-  or will it change the schema?
-  Do I need to re-parse the schema to show the user the DB layout after 
executing?


Some of these SQLite does cater for, but many not, and there are some 
work-aroundy ways of accomplishing it.


For instance, you might reparse the schema after ANY non-select query. 
But then - how do I know if it is anything other than a SELECT query?


The obvious answer is not to see if it's any of INSERT, UPDATE, CREATE, 
etc... but to simply see if it is indeed a SELECT query. Right?


But then, what about CREATE TABLE t AS SELECT a, b, c FROM ...?

Or if it is a CTE, consider these two:

WITH X(z) AS (SELECT 1) SELECT z FROM X;

vs.

WITH X(z) AS (SELECT 1) INSERT INTO t(z) SELECT z FROM X;

These are already difficult to self-parse, and they are extremely simple 
examples.


I would even be happy to have something like extending the EXPLAIN QUERY 
PLAN sql interface to include something like:

EXPLAIN QUERY RESULT  ;

which outputs a simple row of values that can tell me:

- This query produces results - YES/NO (even if those results may be 
empty, is it the /intent/ of the query to produce results?),

- It updates the data - YES/NO,
- It updates the schema - YES/NO
- It is a pragma or setting adjustment - YES/NO

Maybe even, if possible, This query updates these tables: x1, x2, x3... 
etc. (some of which might hide behind an FK relation or Trigger)  but I 
know this is pushing my luck.  :)


Plus, I really do not mind if this explain takes some time, it will be 
faster and more future-proof than any self-parsing one can do.
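As a partial data point: wrappers that sit on the C API can already answer the first question ("does this statement intend to produce rows?") from sqlite3_column_count() on the prepared statement, which Python's sqlite3 exposes as cursor.description; a sketch covering the CTE examples above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(z)")

def produces_rows(sql):
    # description is None iff the prepared statement has no result
    # columns (i.e. sqlite3_column_count() == 0 in the C API).
    return con.execute(sql).description is not None

print(produces_rows("WITH X(z) AS (SELECT 1) SELECT z FROM X"))                   # True
print(produces_rows("WITH X(z) AS (SELECT 1) INSERT INTO t(z) SELECT z FROM X"))  # False
print(produces_rows("CREATE TABLE IF NOT EXISTS t2 AS SELECT 1 AS a"))            # False
```

This only answers the "produces results" question; the C API's sqlite3_stmt_readonly() covers the "changes the database" side, but the stdlib wrapper doesn't expose it.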



Cheers,
Ryan




Re: [sqlite] Feature request, sqlite3_stmt_action

2019-03-28 Thread Dominique Devienne
On Thu 28 Mar 2019 at 08:07, Olivier Mascia  wrote:

>
> > On 27 March 2019 at 18:04, siscia wrote:
> >
> > I would like to propose a function (named `sqlite3_stmt_action` for the
> sake
> > of discussion) that allow to understand if a specific statement is
> either a
> > SELECT, UPDATE, DELETE or INSERT.
>
> There is probably a much more complex need that I did not grasp reading
> this request. What stops you from parsing the beginning of the statement
> text to decide if it is a select, update, delete or insert?


Because it’s never as simple as it looks... CTEs, anyone? It can be
approximated, sure, but it will typically be brittle.

I’ve long wished for an AST for SQLite statements, but in fact the grammar
actions directly build the internal data structures; it’s not two-phase.


Re: [sqlite] Feature request, sqlite3_stmt_action

2019-03-28 Thread Olivier Mascia

> On 27 March 2019 at 18:04, siscia wrote:
> 
> I would like to propose a function (named `sqlite3_stmt_action` for the sake
> of discussion) that allow to understand if a specific statement is either a
> SELECT, UPDATE, DELETE or INSERT.

There is probably a much more complex need that I did not grasp reading this 
request. What stops you from parsing the beginning of the statement text to 
decide if it is a select, update, delete or insert?

-- 
Best regards, Meilleures salutations, Met vriendelijke groeten,  
Olivier Mascia (from mobile device)





Re: [sqlite] Feature request: dynamically referenced bind parameters

2019-03-27 Thread Richard Hipp
See https://www.sqlite.org/carray.html

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Feature request: dynamically referenced bind parameters

2019-03-27 Thread Joshua Wise
Yes, but the problem is that I need to also retrieve the articles themselves. 
If I were to embed the articles query inside the staff query (as you’ve shown), 
the database would have to execute the article query twice.


> On Mar 27, 2019, at 4:42 PM, Keith Medcalf  wrote:
> 
> 
> You mean something like this:
> 
> SELECT staff.* FROM staff, contributions
> WHERE contributions.staff = staff.email
> AND contributions.article IN (SELECT id FROM articles
> WHERE publish_date <= CURRENT_TIMESTAMP
> ORDER BY publish_date DESC LIMIT ?);
> 
> ---
> The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
> lot about anticipated traffic volume.
> 
>> -Original Message-
>> From: sqlite-users [mailto:sqlite-users-
>> boun...@mailinglists.sqlite.org] On Behalf Of Joshua Thomas Wise
>> Sent: Wednesday, 27 March, 2019 14:22
>> To: SQLite mailing list
>> Subject: [sqlite] Feature request: dynamically referenced bind
>> parameters
>> 
>> I’ve commonly encountered cases where I have a many-to-many
>> relationship, and I would like to retrieve those relationships in a
>> single query.
>> 
>> For example:
>> 
>> CREATE TABLE staff (
>> email TEXT PRIMARY KEY,
>> name TEXT
>> );
>> CREATE TABLE articles (
>> id INTEGER PRIMARY KEY,
>> title TEXT,
>> body TEXT,
>> publish_date TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
>> );
>> CREATE TABLE contributions(
>> article INTEGER REFERENCES articles(id),
>> staff TEXT REFERENCES staff(email),
>> PRIMARY KEY(article, staff),
>> );
>> 
>> First, I select the N most recently published articles:
>> 
>> SELECT * FROM articles
>> WHERE publish_date <= CURRENT_TIMESTAMP
>> ORDER BY publish_date DESC LIMIT ?;
>> 
>> Then, I’ll build a query like this to retrieve the staff that are
>> responsible for writing those articles:
>> 
>> SELECT staff.* FROM staff, contributions
>> WHERE contributions.staff = staff.email
>> AND contributions.article IN (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?); --
>> assuming N was 12 in the first query
>> 
>> However, I need to prepare a new statement every time I do this,
>> depending on the value of N in the first query.
>> 
>> SQLite3 already allows us to create a large number of bind parameters
>> without explicitly declaring each one, via ?999 syntax. Now, if we
>> had the ability to reference those bind parameters dynamically, the
>> second query above could be something like this:
>> 
>>  WITH ids(id, n) AS (
>>  SELECT param(1), 1
>>  UNION ALL
>>  SELECT param(n + 1), n + 1 FROM ids WHERE n <
>> param_count())
>>  SELECT DISTINCT ids.id, staff.* FROM staff, contributions, ids
>>  WHERE contributions.staff = staff.email
>>  AND contributions.article = ids.id;
>> 
>> 
>> 
>> 


Re: [sqlite] Feature request: dynamically referenced bind parameters

2019-03-27 Thread Keith Medcalf

You mean something like this:

SELECT staff.* FROM staff, contributions
WHERE contributions.staff = staff.email
AND contributions.article IN (SELECT id FROM articles
WHERE publish_date <= CURRENT_TIMESTAMP
ORDER BY publish_date DESC LIMIT ?);
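A quick check that the combined statement works with a single scalar bind for N, using Python's sqlite3 module and a toy dataset (illustrative only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE staff(email TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE articles(id INTEGER PRIMARY KEY, title TEXT,
                          publish_date TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP);
    CREATE TABLE contributions(article INTEGER REFERENCES articles(id),
                               staff TEXT REFERENCES staff(email),
                               PRIMARY KEY(article, staff));
    INSERT INTO staff VALUES ('a@x', 'Alice'), ('b@x', 'Bob');
    INSERT INTO articles(id, title, publish_date) VALUES
        (1, 'old', '2019-01-01'), (2, 'new', '2019-03-01');
    INSERT INTO contributions VALUES (1, 'a@x'), (2, 'b@x');
""")

# One prepared statement, whatever N is -- N binds as a plain scalar.
rows = con.execute("""
    SELECT staff.* FROM staff, contributions
    WHERE contributions.staff = staff.email
    AND contributions.article IN (SELECT id FROM articles
                                  WHERE publish_date <= CURRENT_TIMESTAMP
                                  ORDER BY publish_date DESC LIMIT ?)""",
    (1,)).fetchall()
print(rows)  # [('b@x', 'Bob')]: only the contributor to the newest article
```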

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Joshua Thomas Wise
>Sent: Wednesday, 27 March, 2019 14:22
>To: SQLite mailing list
>Subject: [sqlite] Feature request: dynamically referenced bind
>parameters
>
>I’ve commonly encountered cases where I have a many-to-many
>relationship, and I would like to retrieve those relationships in a
>single query.
>
>For example:
>
>CREATE TABLE staff (
>  email TEXT PRIMARY KEY,
>  name TEXT
>);
>CREATE TABLE articles (
>  id INTEGER PRIMARY KEY,
>  title TEXT,
>  body TEXT,
>  publish_date TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
>);
>CREATE TABLE contributions(
>  article INTEGER REFERENCES articles(id),
>  staff TEXT REFERENCES staff(email),
>  PRIMARY KEY(article, staff),
>);
>
>First, I select the N most recently published articles:
>
>SELECT * FROM articles
>WHERE publish_date <= CURRENT_TIMESTAMP
>ORDER BY publish_date DESC LIMIT ?;
>
>Then, I’ll build a query like this to retrieve the staff that are
>responsible for writing those articles:
>
>SELECT staff.* FROM staff, contributions
>WHERE contributions.staff = staff.email
>AND contributions.article IN (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?); --
>assuming N was 12 in the first query
>
>However, I need to prepare a new statement every time I do this,
>depending on the value of N in the first query.
>
>SQLite3 already allows us to create a large number of bind parameters
>without explicitly declaring each one, via ?999 syntax. Now, if we
>had the ability to reference those bind parameters dynamically, the
>second query above could be something like this:
>
>   WITH ids(id, n) AS (
>   SELECT param(1), 1
>   UNION ALL
>   SELECT param(n + 1), n + 1 FROM ids WHERE n <
>param_count())
>   SELECT DISTINCT ids.id, staff.* FROM staff, contributions, ids
>   WHERE contributions.staff = staff.email
>   AND contributions.article = ids.id;
>
>
>
>


[sqlite] Feature request: dynamically referenced bind parameters

2019-03-27 Thread Joshua Thomas Wise
I’ve commonly encountered cases where I have a many-to-many relationship, and I 
would like to retrieve those relationships in a single query.

For example:

CREATE TABLE staff (
  email TEXT PRIMARY KEY,
  name TEXT
);
CREATE TABLE articles (
  id INTEGER PRIMARY KEY,
  title TEXT,
  body TEXT,
  publish_date TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE contributions(
  article INTEGER REFERENCES articles(id),
  staff TEXT REFERENCES staff(email),
  PRIMARY KEY(article, staff)
);

First, I select the N most recently published articles:

SELECT * FROM articles
WHERE publish_date <= CURRENT_TIMESTAMP
ORDER BY publish_date DESC LIMIT ?;

Then, I’ll build a query like this to retrieve the staff that are responsible 
for writing those articles:

SELECT staff.* FROM staff, contributions
WHERE contributions.staff = staff.email
AND contributions.article IN (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?); -- assuming 
N was 12 in the first query

However, I need to prepare a new statement every time I do this, depending on 
the value of N in the first query.
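For reference, the re-preparation pattern being described looks roughly like this in a high-level wrapper (Python's sqlite3 module; the SQL text, and hence the prepared statement, changes whenever N does):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contributions(article INTEGER, staff TEXT)")
con.executemany("INSERT INTO contributions VALUES (?, ?)",
                [(1, "a@x"), (2, "b@x"), (3, "a@x")])

ids = [1, 3]  # pretend these came from the first (articles) query
placeholders = ", ".join("?" for _ in ids)   # "?, ?" -- new SQL text per N
sql = f"SELECT DISTINCT staff FROM contributions WHERE article IN ({placeholders})"
staff = sorted(r[0] for r in con.execute(sql, ids))
print(staff)  # ['a@x']
```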

SQLite3 already allows us to create a large number of bind parameters without 
explicitly declaring each one, via ?999 syntax. Now, if we had the ability to 
reference those bind parameters dynamically, the second query above could be 
something like this:

WITH ids(id, n) AS (
SELECT param(1), 1
UNION ALL
SELECT param(n + 1), n + 1 FROM ids WHERE n < param_count())
SELECT DISTINCT ids.id, staff.* FROM staff, contributions, ids
WHERE contributions.staff = staff.email
AND contributions.article = ids.id;






Re: [sqlite] Feature request, sqlite3_stmt_action

2019-03-27 Thread David Raymond
I'm not familiar with the C API, but the question I'll ask is this: How should 
this work with triggers? Running a statement as simple as "delete from foo;" 
could result in any number of different updates, deletes or inserts from any 
number of different tables, so how should that be reported?


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of siscia
Sent: Wednesday, March 27, 2019 1:05 PM
To: sqlite-users@mailinglists.sqlite.org
Subject: [sqlite] Feature request, sqlite3_stmt_action

Hi List,

I would like to propose a feature and I believe here is the best place.
Please keep in mind that I am ready to work on a patch for this feature if
needed, but I would like to discuss it here first.

I would like to propose a function (named `sqlite3_stmt_action` for the sake
of discussion) that allows callers to determine whether a specific statement
is a SELECT, UPDATE, DELETE or INSERT.

Similar functionality is provided by the authorizer, which has a rather
non-ergonomic interface relying on callbacks. Indeed, using the authorizer
for this is challenging, especially in multi-threaded environments with
several databases.
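For context, the callback-based authorizer being compared against looks like this from Python's sqlite3 wrapper (a sketch; the underlying C API is sqlite3_set_authorizer, which behaves the same way):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(a)")

seen = []
def authorizer(action, arg1, arg2, dbname, trigger):
    seen.append(action)           # e.g. sqlite3.SQLITE_INSERT, SQLITE_READ...
    return sqlite3.SQLITE_OK      # allow everything

con.set_authorizer(authorizer)
con.execute("INSERT INTO t VALUES (1)")  # authorizer fires during prepare
print(sqlite3.SQLITE_INSERT in seen)     # True
```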

The prototype that I am envisioning for the function would be something
like:

int sqlite3_stmt_action(sqlite3_stmt* stmt)

where the function will return the action code
(https://www.sqlite.org/c3ref/c_alter_table.html) of the statement passed as
input.

We could go even a little further and return something similar to the
authorizer input itself:

   int sqlite3_stmt_action(sqlite3_stmt* stmt, const char**, const char**,
const char**, const char**) 

where the extra `const char**` arguments will point to NUL-terminated strings
-- just like the authorizer's -- indicating table and index names where it
makes sense.

This new interface would even make it possible to deprecate the authorizer
itself, since it can be implemented on top of `sqlite3_stmt_action` while
being more ergonomic, especially in multi-threaded, multi-database
environments. 

This same feature has already been requested on the list itself:

o)
http://sqlite.1065341.n5.nabble.com/Determine-type-of-prepared-statement-via-C-Interface-td82075.html

o)
http://sqlite.1065341.n5.nabble.com/Determine-query-type-td83553.html#a83554

o)
http://sqlite.1065341.n5.nabble.com/Distinguish-type-of-statements-td106281.html#a106282
(myself)

even if most of those use cases have been solved using the stmt_readonly
interface.

Thanks for your attention!

Cheers,
Simone



--
Sent from: http://sqlite.1065341.n5.nabble.com/


[sqlite] Feature request, sqlite3_stmt_action

2019-03-27 Thread siscia
Hi List,

I would like to propose a feature and I believe here is the best place.
Please keep in mind that I am ready to work on a patch for this feature if
needed, but I would like to discuss it here first.

I would like to propose a function (named `sqlite3_stmt_action` for the sake
of discussion) that allows callers to determine whether a specific statement
is a SELECT, UPDATE, DELETE or INSERT.

Similar functionality is provided by the authorizer, which has a rather
non-ergonomic interface relying on callbacks. Indeed, using the authorizer
for this is challenging, especially in multi-threaded environments with
several databases.

The prototype that I am envisioning for the function would be something
like:

int sqlite3_stmt_action(sqlite3_stmt* stmt)

where the function will return the action code
(https://www.sqlite.org/c3ref/c_alter_table.html) of the statement passed as
input.

We could go even a little further and return something similar to the
authorizer input itself:

   int sqlite3_stmt_action(sqlite3_stmt* stmt, const char**, const char**,
const char**, const char**) 

where the extra `const char**` arguments will point to NUL-terminated strings
-- just like the authorizer's -- indicating table and index names where it
makes sense.

This new interface would even make it possible to deprecate the authorizer
itself, since it can be implemented on top of `sqlite3_stmt_action` while
being more ergonomic, especially in multi-threaded, multi-database
environments. 

This same feature has already been requested on the list itself:

o)
http://sqlite.1065341.n5.nabble.com/Determine-type-of-prepared-statement-via-C-Interface-td82075.html

o)
http://sqlite.1065341.n5.nabble.com/Determine-query-type-td83553.html#a83554

o)
http://sqlite.1065341.n5.nabble.com/Distinguish-type-of-statements-td106281.html#a106282
(myself)

even if most of those use cases have been solved using the stmt_readonly
interface.

Thanks for your attention!

Cheers,
Simone



--
Sent from: http://sqlite.1065341.n5.nabble.com/


[sqlite] Feature request: ARM64 support in Universal Windows Platform

2018-11-19 Thread Martin Suchan
Hi,
Microsoft released Visual Studio 2017 version 15.9 a few days ago, which
brings full support for building Universal Windows Platform apps targeting
the new ARM64 platform.
https://blogs.windows.com/buildingapps/2018/11/15/official-support-for-windows-10-on-arm-development

SQLite library is already available for Universal Windows Platform apps
targeting x86, x64 and ARM, but not yet for ARM64.

Since I don't have access to the issue tracker, this is a feature request
to add support for the ARM64 platform as well, if possible.
Thanks.

Best Regards
Martin Suchan


[sqlite] Feature request: degenerates in Lemon's report

2018-07-09 Thread Cezary H. Noweta

Hello,

Could you allow outputting of auto-reduced states when they produce
conflicts? I have noticed that such errors are caused by small, hard-to-find
remnants in a grammar file, and there is no other way to detect them. A few
lines in lemon.c's ReportOutput() fix the problem:


==
--- "../../sqlite-src-324/tool/lemon.c"	2018-06-04 21:51:19.0 +0200
+++ lemon.c	2018-07-09 19:58:39.852430500 +0200
@@ -3294,9 +3740,19 @@

   fp = file_open(lemp,".out","wb");
   if( fp==0 ) return;
-  for(i=0; i<lemp->nxstate; i++){
+  for(i=0; i<lemp->nstate; i++){
 stp = lemp->sorted[i];
-fprintf(fp,"State %d:\n",stp->statenum);
+if( i>=lemp->nxstate ){
+  for(ap=stp->ap; ap!=0; ap=ap->next){
+if( ap->type==SSCONFLICT ) break;
+if( ap->type==SRCONFLICT ) break;
+if( ap->type==RRCONFLICT ) break;
+  }
+  if( ap==0 ) continue;
+  fprintf(fp,"Degenerated state %d:\n",stp->statenum);
+} else {
+  fprintf(fp,"State %d:\n",stp->statenum);
+}
 if( lemp->basisflag ) cfp=stp->bp;
 else  cfp=stp->cfp;
 while( cfp ){
==

-- best regards

Cezary H. Noweta


[sqlite] Feature request: A function to read the value of db->u1.isInterrupted

2018-07-06 Thread sqlite
Feature request: A function to read the value of db->u1.isInterrupted
The purpose is that extensions implementing additional SQL functions and/or
virtual tables whose loops are not VDBE programs can still detect that an
interrupt has been requested.
For example, if the extension uses libcurl then the progress callback can use 
this to know when to stop due to interruption. For example it might use:

int progress_callback(void *clientp,   curl_off_t dltotal,   curl_off_t dlnow,  
 curl_off_t ultotal,  curl_off_t ulnow) {
  return sqlite3_interrupted(clientp);
}

Implementing the sqlite3_interrupted() function (or whatever you want to call
it) should be very easy. However, it must be added to the extension-loading
mechanism, so anything I implement on my own would be incompatible.


[sqlite] Feature request: more robust handling of invalid UTF-16 data

2018-05-21 Thread Maks Verver
*Background: *UTF-16 is an encoding which allows most characters to be
encoded in a single 16-bit code unit. Characters outside the basic
multilingual plane (i.e. code points between 0x1 and 0x10), require
two code units: a high surrogate between 0xD800 and 0xDBFF, followed by a
low surrogate between 0xDC00 and 0xDFFF. Strings that contain unpaired
surrogates are invalid UTF-16.

*Problem:* sqlite silently accepts invalid UTF-16 strings as arguments to
functions like sqlite3_bind_text16(), but corrupts them when converting
them to UTF-8 and back. A common use case where this happens implicitly is
when using a database with UTF-8 as the native text encoding from a
programming language that uses UTF-16 to represent strings in memory.

Specifically, what happens depends on where the unpaired surrogate
character occurs:

   - *In the middle of the string*: the surrogate is consumed together with
   the following character, so that "fooXbar" may be transformed into "fooYar"
   (note missing 'b'), where Y is some seemingly-random character outside the
   BMP. When read back, it's not obvious that a corruption has occurred.
   - *At the end of the string*: the surrogate is consumed and encoded in
   UTF-8, which is technically invalid (UTF-8 is not supposed to encode
   surrogate characters). How this reads back depends on whether the value is
   accessed through sqlite3_column_text() or sqlite3_column_text16(): the
   latter uses READ_UTF8() internally, which detects the invalid encoding and
   substitutes the replacement character 0xFFFD. So the storage is in a
   logically inconsistent state at this point.

I've created a small proof-of-concept to reproduce some of these issues,
here:
https://gist.github.com/maksverver/2b225637186d64878d3e635ef0a4fd18


The problem is caused by the implementation of READ_UTF16 in utf.c
, which blindly assumes that the
input string is valid UTF-16, and doesn't check that surrogates pair up as
required. Although arguably the problems originated with the caller
that passed bad string data to sqlite, it would be better if sqlite
detected and corrected invalid UTF-16 strings during conversion by
converting unpaired surrogates to the Unicode replacement character 0xFFFD.
This would be consistent with the behavior of READ_UTF8, and would make
sqlite more robust.

As a concrete suggestion, the macro READ_UTF16LE, which looks like this:

#define READ_UTF16LE(zIn, TERM, c){
  c = (*zIn++);
  c += ((*zIn++)<<8);
  if( c>=0xD800 && c<0xE000 && TERM ){
    int c2 = (*zIn++);
    c2 += ((*zIn++)<<8);
    c = (c2&0x03FF) + ((c&0x003F)<<10) + (((c&0x03C0)+0x0040)<<10);
  }
}

Could be changed to something like this:

#define READ_UTF16LE(zIn, TERM, c){
  c = (*zIn++);
  c += ((*zIn++)<<8);
  if( c>=0xD800 ){
    int c2 = zIn[0] + (zIn[1]<<8);
    if( c2>=0xDC00 && c2<0xE000 ){
      zIn += 2;
      c = (c2&0x03FF) + ((c&0x003F)<<10) + (((c&0x03C0)+0x0040)<<10);
    } else {
      c = 0xFFFD;
    }
  }
}

(And similarly for READ_UTF16BE.)

Kind regards,
Maks Verver.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] feature request: MIN() and MAX() of set of row values

2018-04-13 Thread Mark Brand



On 13/04/18 14:12, Simon Slavin wrote:

On 13 Apr 2018, at 8:40am, Mark Brand  wrote:


It also occurs to me that COUNT() should work (but doesn't) over sets of row 
values:

 sqlite> select count((1,2));
 Error: row value misused

I would expect it to return the number of non-NULL row values in the set.

What should this do, and why ?

CREATE TABLE MyTable (a INTEGER, b INTEGER, c INTEGER);
INSERT INTO MyTable VALUES (1, 1, 1);
INSERT INTO MyTable VALUES (2, 2, 2);
...
INSERT INTO MyTable VALUES (10, 10, 10);

SELECT COUNT(a, b, c) FROM MyTable;


Hi Simon,

I would expect COUNT() to count row values just as it counts normal 
values. In your example, it should return 10 because there are 10 rows 
in the MyTable.


When counting normal values, COUNT() excludes NULLs. If there is a such 
as thing as a NULL row value, COUNT() should exclude it too.


Sqlite doesn't seem to distinguish between a row value made up of only 
NULLs and a NULL row value, at least in this context:


    sqlite> select (NULL, NULL) IS (SELECT 1, 2 WHERE 0);  -- The right 
side would probably satisfy anybody's idea of what "NULL row value" means.

    1

    sqlite> select (NULL, NULL) IS (SELECT NULL, NULL WHERE 1); -- The 
right side is a row containing all NULLs.

    1

Therefore, probably COUNT() should exclude row values made up of only 
NULLs on the grounds that these qualify as NULL row values.


You wrote

    COUNT(a, b, c)

but I would have expected

    COUNT((a, b, c))

to make it clear that COUNT() has one argument which is a row value. For 
aggregate MIN() and MAX(), the "extra" parenthesis would have the 
additional motivation of distinguishing the aggregate functions from the 
non-aggregate MIN() and MAX() which have 2 arguments.



Mark
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] feature request: MIN() and MAX() of set of row values

2018-04-13 Thread Simon Slavin
On 13 Apr 2018, at 8:40am, Mark Brand  wrote:

> It also occurs to me that COUNT() should work (but doesn't) over sets of row 
> values:
> 
> sqlite> select count((1,2));
> Error: row value misused
> 
> I would expect it to return the number of non-NULL row values in the set.

What should this do, and why ?

CREATE TABLE MyTable (a INTEGER, b INTEGER, c INTEGER);
INSERT INTO MyTable VALUES (1, 1, 1);
INSERT INTO MyTable VALUES (2, 2, 2);
...
INSERT INTO MyTable VALUES (10, 10, 10);

SELECT COUNT(a, b, c) FROM MyTable;

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] feature request: MIN() and MAX() of set of row values

2018-04-13 Thread Mark Brand


On 13/04/18 09:32, Mark Brand wrote:

On 30/03/18 18:55, Igor Tandetnik wrote:



Row values support less-than comparison, so it kind of makes sense to 
expect MIN to work on them, too.


That's what I was thinking too. One would expect aggregate MIN() and 
MAX() to work over row values.


While we're on the subject of row values, the error on the first query 
below seems unexpected:


   sqlite> select (2, 3) = (select (2, 3));
    Error: row value misused

    sqlite> select (2, 3) = (2, 3);
    1

    sqlite> select (2, 3) = (values(2, 3));
    1



It also occurs to me that COUNT() should work (but doesn't) over sets of 
row values:


    sqlite> select count((1,2));
    Error: row value misused

I would expect it to return the number of non-NULL row values in the set.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] feature request: MIN() and MAX() of set of row values

2018-04-13 Thread Mark Brand

On 30/03/18 18:55, Igor Tandetnik wrote:



Row values support less-than comparison, so it kind of makes sense to 
expect MIN to work on them, too.


That's what I was thinking too. One would expect aggregate MIN() and 
MAX() to work over row values.


While we're on the subject of row values, the error on the first query 
below seems unexpected:


   sqlite> select (2, 3) = (select (2, 3));
    Error: row value misused

    sqlite> select (2, 3) = (2, 3);
    1

    sqlite> select (2, 3) = (values(2, 3));
    1


Mark

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-22 Thread Eduardo
On Sun, 21 Jan 2018 05:54:13 +
Simon Slavin wrote:

> Feature request for the Shell Tool: ".mode json".
> 

Others has pointed to libraries to export to json, so I point to the one I use:
libucl https://github.com/vstakhov/libucl

Using the generation functions [1] you can convert from C structs and types to
ucl and export to any supported format: json, compact json, yaml and nginx-like
config files.

Licence BSD 2-clause "Simplified" License

> Simon.

[1] 
https://github.com/vstakhov/libucl/blob/master/doc/api.md#generation-functions-1

-- 
Eduardo 
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-22 Thread Dominique Devienne
On Mon, Jan 22, 2018 at 12:50 AM, Stadin, Benjamin <
benjamin.sta...@heidelberg-mobil.com> wrote:

> wrote a tool to convert an arbitrary SQLite result set to properly typed
> json key/value pairs, using the SQLite type affinity of the objects.
>
...

> while ((rc = sqlite3_step(readStmt)) == SQLITE_ROW) {
> ...

for (int colIdx=0; colIdx<count; colIdx++) {

sqlite3_value *val = sqlite3_column_value(readStmt, colIdx);
> switch (int type = sqlite3_value_type(val)) {
> ...

}
>

Since it's based on on sqlite3_value_type(), that's not "type affinity"
exactly,
just the actual "storage" type of the value. AFAIK there's no way to know
the
affinity [1] of a column of a table using an SQLite API. Perhaps there's a
pragma?

There's sqlite3_value_numeric_type() which allows you to emulate SQLite's
numeric affinity, but that's not the same. FWIW. --DD

[1] https://www.sqlite.org/datatype3.html#type_affinity
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread petern
>Just the data returned by the SELECT command, expressed as an array of
objects, one object per row.

That's what shell_callback() does inside shell.c.  It outputs one row at a
time in the current mode selected by the cases of a big switch()
statement.  Not sure I follow how your code would be subject to other
intellectual property claims.  You would merely be copying and pasting code
from SQLite itself.  For one, below is the row output code for MODE_html in
shell.c.  Your proposed MODE_json is probably a similar pattern with
different decorations.   The other modes are also there to be studied and
copied.

   case MODE_Html: {
      if( p->cnt++==0 && p->showHeader ){
        raw_printf(p->out,"<TR>");
        for(i=0; i<nArg; i++){
          raw_printf(p->out,"<TH>");
          output_html_string(p->out, azCol[i]);
          raw_printf(p->out,"</TH>\n");
        }
        raw_printf(p->out,"</TR>\n");
      }
      if( azArg==0 ) break;
      raw_printf(p->out,"<TR>");
      for(i=0; i<nArg; i++){
        raw_printf(p->out,"<TD>");
        output_html_string(p->out, azArg[i] ? azArg[i] : p->nullValue);
        raw_printf(p->out,"</TD>\n");
      }
      raw_printf(p->out,"</TR>\n");
      break;
    }

Just copy, edit, compile and test until it produces sensible output you can
demonstrate in an email.   Incorporating relevant features from MySQL JSON
output mode documentation that others will miss is also a good idea
considering the entire json1.c API model was also cloned from MySQL.

The only thing not to like about it would be if your design decisions make
something that isn't generally useful.   Not a waste of time if other JSON
users are interested enough in your idea to provide feedback on the
features they need.  Most of the development work here is figuring out
'what' and 'why' rather than 'how'.  The 'how' is trivial.

Peter

On Sun, Jan 21, 2018 at 3:04 PM, Simon Slavin  wrote:

>
>
> On 21 Jan 2018, at 11:01pm, Simon Slavin  wrote:
>
> > Just the data that is stored in the table, expressed as a JSON object,
> not an array.
>
> Sorry, what I meant was
>
> Just the data returned by the SELECT command, expressed as an array of
> objects, one object per row.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread J Decker
On Sun, Jan 21, 2018 at 3:50 PM, Stadin, Benjamin <
benjamin.sta...@heidelberg-mobil.com> wrote:

> Hi Simon,
>
> I recently wrote a tool to convert an arbitrary SQLite result set to
> properly typed json key/value pairs, using the SQLite type affinity of the
> objects. Though the code is in C++. But it gives an idea how simple this is
> when with a JSON library (I'm using RapidJson). Rapidjson can also be used
> in buffered mode, so it's also easily possible to write out large tables of
> any size.
>
> The code for the SQLite to Json exporter is below. For an importer using
> Rapidjson, I can imagine this can be done fairly simple and efficiently
> (SAX parsing mode) with a custom "filter" handler. For example, it should
> be possible to keep track of when an object begins and ends, collect all
> values as variant values with json type info, and write out each complete
> object with proper types and using a prepared statement.
>
> Regards
> Ben
>
>
>
Or in javascript...

var sack = require( 'sack.vfs' );

function sqliteQueryToJson( dbPath, sql, jsonFile ) {
    var db = sack.Sqlite( dbPath );
    sack.Volume().file( jsonFile ).write( JSON.stringify( db.do( sql ) ) );
}

sack.JSON and sack.JSON6 have streaming JSON parsers... can read a stream
of records with a callback for each value or object discovered along the
stream.




> On 21.01.18 at 06:55, "sqlite-users on behalf of Simon Slavin" <
> sqlite-users-boun...@mailinglists.sqlite.org on behalf of
> slav...@bigfraud.org> wrote:
>
> Feature request for the Shell Tool: ".mode json".
>
> Output should be as a JSON array of objects, with one object for each
> row of the table.  Output should start with the "[" character and end with
> "]".  Rows should be separated with ",\n".  Quotes in strings should be
> escaped for JSON, with a leading backslash.  NULL should be supported as
> the four lower-case characters "null", ignoring ".nullvalue".
>
> The above setting should also affect the ".import filename [table]"
> command as described in section 8 of  .
> Columns should be created as necessary.  Signed zeros should be imported as
> zero.
>
> The above facilities should be implemented whether or not
> DSQLITE_ENABLE_JSON1 was enabled when compiling the shell tool.  They are
> for export and import, not for handling JSON within SQLite.
>
> Implementation questions:
>
> I am not sure what the program should do if asked to import a value
> which is an array or object.  Perhaps, for compatibility with the JSON1
> extension, those should be imported as a string.
>
> I am not sure whether the program should respect the settings for
> ".separator" for JSON mode, either for output or .import.
>
> I am not sure how BLOBs should be handled, either for output or
> .import.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread Stadin, Benjamin
Hi Simon,

I recently wrote a tool to convert an arbitrary SQLite result set to properly 
typed json key/value pairs, using the SQLite type affinity of the objects. 
Though the code is in C++. But it gives an idea how simple this is when with a 
JSON library (I'm using RapidJson). Rapidjson can also be used in buffered 
mode, so it's also easily possible to write out large tables of any size. 

The code for the SQLite to Json exporter is below. For an importer using 
Rapidjson, I can imagine this can be done fairly simple and efficiently (SAX 
parsing mode) with a custom "filter" handler. For example, it should be 
possible to keep track of when an object begins and ends, collect all values as 
variant values with json type info, and write out each complete object with 
proper types and using a prepared statement. 

Regards
Ben

 SQLite to Json converter class  

#include "sqlite_to_json.hpp"
#include "rapidjson/filereadstream.h"
#include "rapidjson/stringbuffer.h"
#include "rapidjson/writer.h"
#include "rapidjson/document.h"
#include <iostream>
#include <string>
#include <vector>
#include <cassert>

extern "C" {
#include "sqlite3.h"
}

using namespace hdm::sqlitejson;
using namespace rapidjson;

bool SqliteToJson::sqliteQueryToJson(std::string dbPath, std::string sql, 
std::string jsonFile) {
// open the sqlite db at dbPath

if (!dbPath.length()) {
std::cout << "No db path provided\n";
return false;
}

sqlite3 *db = NULL;
int error = sqlite3_open_v2(dbPath.c_str(), &db, SQLITE_OPEN_READONLY, 
NULL);

if (error != SQLITE_OK) {
std::cout << "Failed to open db at path" << dbPath << "\n";
return false;
}

// prepare the sql statement
sqlite3_stmt* readStmt = NULL;
error = sqlite3_prepare_v2(db, sql.c_str(), -1, &readStmt, NULL);
if (error != SQLITE_OK) {
const char *errMsg = sqlite3_errmsg(db);
std::cout << "SQL error: " << errMsg << "\n";
return false;
}

// get the column names
std::vector<std::string> columnNames;
int count = sqlite3_column_count(readStmt);
for (int i=0; i<count; i++) {
columnNames.push_back(sqlite3_column_name(readStmt, i));
}

GenericStringBuffer<UTF8<> > jsonBuffer;
Writer<GenericStringBuffer<UTF8<> > > jsonWriter(jsonBuffer);

jsonWriter.StartArray();
int numRecs = 0;
int rc;
while ((rc = sqlite3_step(readStmt)) == SQLITE_ROW) {
numRecs++;
jsonWriter.StartObject();
for (int colIdx=0; colIdx<count; colIdx++) {
jsonWriter.Key(columnNames[colIdx].c_str());
sqlite3_value *val = sqlite3_column_value(readStmt, colIdx);
switch (int type = sqlite3_value_type(val)) {
case SQLITE_INTEGER: {
jsonWriter.Int64(sqlite3_value_int64(val));
break;
}
case SQLITE_FLOAT: {
jsonWriter.Double(sqlite3_value_double(val));
break;
}
case SQLITE_TEXT: {
jsonWriter.String((const char *)sqlite3_value_text(val));
break;
}
case SQLITE_BLOB: {
jsonWriter.String("<BLOB>");
break;
}
case SQLITE_NULL: {
jsonWriter.Null();
break;
}
default: {
assert(false);
}
}

}

jsonWriter.EndObject();
}
jsonWriter.EndArray();

// write to json
FILE* file = fopen(jsonFile.c_str(), "wt");
if (!file) {
std::cout << "Failed to write to file " << jsonFile.c_str() << 
std::endl;
return false;
}

std::cout << "Wrote " << numRecs << " records to file" << jsonFile.c_str() 
<< std::endl;

fputs(jsonBuffer.GetString(), file);
fclose(file);

return true;
}

On 21.01.18 at 06:55, "sqlite-users on behalf of Simon Slavin" wrote:

Feature request for the Shell Tool: ".mode json".

Output should be as a JSON array of objects, with one object for each row 
of the table.  Output should start with the "[" character and end with "]".  
Rows should be separated with ",\n".  Quotes in strings should be escaped for 
JSON, with a leading backslash.  NULL should be supported as the four 
lower-case characters "null", ignoring ".nullvalue".

The above setting should also affect the ".import filename [table]" command 
as described in section 8 of  .  Columns should be 
created as necessary.  Signed zeros should be imported as zero.

The above facilities should be implemented whether or not 
DSQLITE_ENABLE_JSON1 was enabled when compiling the shell tool.  They are for 
export and import, not for handling JSON within SQLite.

Implementation questions:

I am not sure what the program should do if asked to import a value which 
is an array or object.  Perhaps, for compatibility with the JSON1 extension, 
those should be imported as a string.

I am not sure whether the program should respect the settings for 
".separator" for JSON mode, either for output or .import.

I am not sure how BLOBs should be handled, either for output or .import.

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread Jungle Boogie
On Sun 21 Jan 2018  4:21 PM, Simon Slavin wrote:
> 
> 
> On 21 Jan 2018, at 3:05pm, Brian Curley  wrote:
> 
> > pipe it
> > through jq instead.
> 
> I did not know jq existed.  Thanks.  Just gave the documentation a quick 
> glance.
> 

You might like to see some code examples:
https://www.rosettacode.org/wiki/Category:Jq

> jq is not installed on my platform (macOS) whereas sqlite3 is.

Right, but fortunately jq is a single binary file.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread Simon Slavin


On 21 Jan 2018, at 11:01pm, Simon Slavin  wrote:

> Just the data that is stored in the table, expressed as a JSON object, not an 
> array.

Sorry, what I meant was

Just the data returned by the SELECT command, expressed as an array of objects, 
one object per row.

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread Simon Slavin
On 21 Jan 2018, at 9:22pm, petern  wrote:

> Simon.   You want something like MySQL but using SQLite's shallower column
> type awareness?  Reference:
> 
> https://dev.mysql.com/doc/refman/5.7/en/mysql-shell-json-output.html

Just the data that is stored in the table, expressed as a JSON object, not an 
array.

> Would you include a header variable when headers are turned on? Column
> types too?

Since the data should be expressed as objects, the SQL column headers get 
expressed as JSON property keys.  Thus the names of the headers are preserved 
without any special measures.

> Have you considered writing a reference implementation for a new mode_json
> case of shell_callback to work out a practical design?

It would be a waste of time.  Either the development team likes my idea or it 
doesn’t.  If it does like my idea it needs code, and it won’t want to use my 
code since it won’t be certain that I didn’t copy it from somewhere.

I know that the idea works in some cases because I wrote my own code to do it.  
But my own code wasn’t in C, and I didn’t worry about possibilities like BLOBs.

> The other possibility would be to upgrade json1.c to allow
> update/insert/delete and readout/writeout of current table in JSON format.
> Given the trouble with integrating SQLite shell into every situation, such
> a readout mode for json1.c could have wider utility.

I considered asking for it to appear in JSON1.  I rejected that option because 
JSON1 is not included by default, and because if you already have access to 
JSON functions, you can pretty-much do what I asked for myself.  I see my 
request as being more like another CSV input / output facility in the shell, 
suitable for scripting, rather than detailed flexible JSON capabilities in the 
API, suitable for programmers.

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread petern
Simon.   You want something like MySQL but using SQLite's shallower column
type awareness?  Reference:

https://dev.mysql.com/doc/refman/5.7/en/mysql-shell-json-output.html

Would you include a header variable when headers are turned on? Column
types too?
There are a number of design choices to work out, but the new code and test
cases would be entirely isolated to shell.c

The output mode is just formatting and indentation changes and the input
mode can crib from json1.c input routine.
Have you considered writing a reference implementation for a new mode_json
case of shell_callback to work out a practical design?

The other possibility would be to upgrade json1.c to allow
update/insert/delete and readout/writeout of current table in JSON format.
Given the trouble with integrating SQLite shell into every situation, such
a readout mode for json1.c could have wider utility.  Others have run into
the same portability problem with the read only csv.c extension.

Peter





On Sun, Jan 21, 2018 at 11:54 AM, Simon Slavin  wrote:

>
>
> On 21 Jan 2018, at 6:56pm, Brian Curley  wrote:
>
> > In short, yes...you can get jq to convert both ways.
> >
> > It's not exactly as simple as just piping it through jq though, just to
> > reiterate my earlier self-correction.
>
> Hi, Brian.  Thanks for your detailed example which I read.  I can see that
> the tasks can be done by jq.  But I think the SQLite shell tool, which
> knows which key/column names to use, will do them faster and with far less
> work from the user.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread Simon Slavin


On 21 Jan 2018, at 6:56pm, Brian Curley  wrote:

> In short, yes...you can get jq to convert both ways.
> 
> It's not exactly as simple as just piping it through jq though, just to
> reiterate my earlier self-correction.

Hi, Brian.  Thanks for your detailed example which I read.  I can see that the 
tasks can be done by jq.  But I think the SQLite shell tool, which knows which 
key/column names to use, will do them faster and with far less work from the 
user.

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread Brian Curley
hi, Simon.

In short, yes...you can get jq to convert both ways.

It's not exactly as simple as just piping it through jq though, just to
reiterate my earlier self-correction. JSON is intended to allow rich data
definition, such that there's no quick fix that would suit all parties; in
my own case, I had to drill down to one object that had at least four
alternate formats in the same file. It requires some factory-style handling
to abstract and manage that deduction anyone might need individually. Same
concern would apply for any embedded .mode handling to produce JSON through
SQLite; nothing would ever be lightweight AND comprehensive for all uses.

I'd created a local function that does what I need when forcing JSON down
to @csv format through jq. It does what I need (ie, crushing "plump" JSON
output down to a table-ready format, including BLOB fields, at a specific
path level), but it'd need some tweaking to handle depth and even separator
preferences per individual use case.

Assuming some abstraction planning is done, the same can be done in either
direction.

I'd posted a link to the jq Developer's own cookbook, but the following is
a link that actually uses a similar approach where you parse the header
rows to produce a template-based JSON output:

   http://infiniteundo.com/post/99336704013/convert-csv-to-json-with-jq

Sample function for using jq to make @csv to get JSON into SQLite:


   ###
   #  function jq_csv ...
   # -  This function does a quick fix of .json files for import use...
   #
function jq_csv () {

#set -x

   #  A little help here...
   if [ $# -eq 0 ] \
 || [ "${1}" == "-h" ]; then
 #  Usage: script_name [-h|-v] filename
 #
  cat <<-ENDOFHELP

jq_csv  requires 1-2 args:

 jq_csv  [${1:-" -h | -v "}] [${2:-"filename"}]

  Examples of use:

 jq_csv  -h ...   [this help listing]
 jq_csv  -v filename  [turns on xtrace]
 jq_csv filename  [cleans csv for loading]

ENDOFHELP

  return 0

   fi

   if [ "${1}" == "-v" ]; then
   #  Turn on xtrace...we'll kill it on our way out
  set -x
   fi

   #  There's a path length 'feature' in jq, which presents as a
segmentation error.
  #  Switch to target file's directory, run from there...
  #  ...and switch back. We could use cd -, but let's be explicit...
   typeset my_path=${PWD}
   #  Pure laziness...not going w optargs just yet.
   my_file="${2:-${1}}"
#   not_template="Error: file is not a csv-delimited document"  2>&1

   if [ -e ${my_file} ] ;then
  #  Invalid first argument...
 #  STFU
  typeset jq_path=$(dirname ${my_file} )
  #  Guard against undefined options...
  if [ -z $(echo "${1}" |egrep "\-v|\-h" ) ] \
 && [ "$(echo ${1} |cut -b 1 )" == "-" ]; then
printf "\nError:  %s \n" "Invalid arg1... "
return
  #  File exists, but...it's either zero byte or not an appropriate
file...
 #  GTFO
  elif [ -z "$(head ${my_file} |egrep '\,|\"\,|\,\"' )" ] ;then
printf "\nError:  %s \n" "${not_template}"
return
  fi
  #  Hey, look...a real file!
 #  A little gymnastics to establish our column headers...
#  Have jq generate the keys into a CSV for us,
#  and then take only the 2nd field onward. (Since {metadata}
isn't useful...)
 typeset my_hdrs=$(cd ${jq_path}  \
  && jq -rc   \
  '.d.results[] |keys_unsorted |@csv' \
  $(basename ${my_file} ) \
  |awk -F',' '!uniq[substr($0,1,length($0))]++;'
2>&1 \
  |cut -d, -f2-   \
  && cd ${my_path}\
   )
 #  Do NOT mess with the output here, as it produces a literal for
the upcoming call to jq.
 #
#  Reformulate the headers into jq syntax to be sent back for
retrieval...
 typeset to_flds=$(echo "${my_hdrs}"  \
  |sed 's/^"/[ ./g;s/,"/, ./g;s/"//g;s/$/ |tostring
]/g; ' \
   )
 cd ${jq_path}
 #  First the headers...then the fields.
#  But we'll pass them through additional parsing to make them
|-delimited.
 (echo ${my_hdrs}\
&& jq -rc ".d.results[] |${to_flds} |@csv "  \
$(basename ${my_file} ) 2>&1 )   \
|sed 's/","/"|"/g;'  #\
 cd ${my_path}

   elif [ ! -e ${my_file} ]; then
  printf "\nError:  %s \n"  "No input file found..."
   fi

   #  Turn off debug/verbose mode...IF it was us that did it...
   if [ "${1}" == "-v" ] \
  && [ ! -z "$(set |grep xtrace )" ]; then
 set +x
   fi

}

Regards.

Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread Luuk
On 21-01-18 17:15, Brian Curley wrote:
> Well, I did oversimplify to just say 'pipe it through', but it's really
> more like a sed usage.
>
> You wouldn't see much difference if you'd pipe your delimited output
> through sed or awk either, unless you threw in some directives, or a
> script. It would require some planning on the part of the user, but there's
> a cookbook on the jq site that covers this.
>
>
> https://github.com/stedolan/jq/wiki/Cookbook#convert-a-csv-file-with-headers-to-json
>
>
> There's other takes on this same recipe out there, on StackExchange, etc.
>
> As with any such localized solution, once you get it working, you can use
> it seamlessly as a function or an aliased call.
>
> Regards.
>
> Brian P Curley
>
>
>
>
> On Jan 21, 2018 10:15 AM, "Luuk"  wrote:
>
> On 21-01-18 16:05, Brian Curley wrote:
>> Is there even a need to embed it into sqlite itself? Since you're on the
>> shell, and in keeping with the whole 'do one thing well' mandate: pipe it
>> through jq instead.
>>
>> Beautiful creature that jq...
>>
>> Regards.
>>
>> Brian P Curley
>>
>>
> luuk@opensuse:~/tmp> echo 'select * from test;' | sqlite test.db
> 1
> 2
> 3
> luuk@opensuse:~/tmp> echo 'select * from test;' | sqlite test.db | jq
> 1
> 2
> 3
>
> Can you give an example please?
> ___
>

Thanks, will look at it, when i'm doing someting with JSON, and CSV
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread Simon Slavin


On 21 Jan 2018, at 3:05pm, Brian Curley  wrote:

> pipe it
> through jq instead.

I did not know jq existed.  Thanks.  Just gave the documentation a quick glance.

jq is not installed on my platform (macOS) whereas sqlite3 is.

Does jq do conversion both ways ?

Can jq deduce the column names (SQL) / keys (JSON) from the output of SELECT 
without extra work from the programmer/scripter ?

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread Brian Curley
Well, I did oversimplify to just say 'pipe it through', but it's really
more like a sed usage.

You wouldn't see much difference if you'd pipe your delimited output
through sed or awk either, unless you threw in some directives, or a
script. It would require some planning on the part of the user, but there's
a cookbook on the jq site that covers this.


https://github.com/stedolan/jq/wiki/Cookbook#convert-a-csv-file-with-headers-to-json


There's other takes on this same recipe out there, on StackExchange, etc.

As with any such localized solution, once you get it working, you can use
it seamlessly as a function or an aliased call.

Regards.

Brian P Curley




On Jan 21, 2018 10:15 AM, "Luuk"  wrote:

On 21-01-18 16:05, Brian Curley wrote:
> Is there even a need to embed it into sqlite itself? Since you're on the
> shell, and in keeping with the whole 'do one thing well' mandate: pipe it
> through jq instead.
>
> Beautiful creature that jq...
>
> Regards.
>
> Brian P Curley
>
>
luuk@opensuse:~/tmp> echo 'select * from test;' | sqlite test.db
1
2
3
luuk@opensuse:~/tmp> echo 'select * from test;' | sqlite test.db | jq
1
2
3

Can you give an example please?
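The CSV-with-headers to JSON transformation that the linked jq cookbook recipe performs can be sketched without jq as well; the following is a minimal Python illustration (the sample data is made up), where each row after the header becomes one JSON object:

```python
import csv
import io
import json

def csv_to_json(text):
    # Read CSV with a header row; each subsequent row becomes an
    # object keyed by the header names, mirroring the jq recipe.
    rows = list(csv.DictReader(io.StringIO(text)))
    return json.dumps(rows)

sample = "a,b\r\n1,hello\r\n2,world\r\n"
payload = csv_to_json(sample)
print(payload)  # [{"a": "1", "b": "hello"}, {"a": "2", "b": "world"}]
```

Note that all values come out as strings; deciding which columns are numeric is exactly the extra planning the recipe asks of the user.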


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread Luuk
On 21-01-18 16:05, Brian Curley wrote:
> Is there even a need to embed it into sqlite itself? Since you're on the
> shell, and in keeping with the whole 'do one thing well' mandate: pipe it
> through jq instead.
>
> Beautiful creature that jq...
>
> Regards.
>
> Brian P Curley
>
>
luuk@opensuse:~/tmp> echo 'select * from test;' | sqlite test.db
1
2
3
luuk@opensuse:~/tmp> echo 'select * from test;' | sqlite test.db | jq
1
2
3

Can you give an example please?


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread Brian Curley
Is there even a need to embed it into sqlite itself? Since you're on the
shell, and in keeping with the whole 'do one thing well' mandate: pipe it
through jq instead.

Beautiful creature that jq...

Regards.

Brian P Curley



On Jan 21, 2018 9:54 AM, "J Decker"  wrote:

> On Sat, Jan 20, 2018 at 9:54 PM, Simon Slavin 
> wrote:
>
> > Feature request for the Shell Tool: ".mode json".
> >
> > Output should be as a JSON array of objects, with one object for each row
> > of the table.  Output should start with the "[" character and end with
> > "]".  Rows should be separated with ",\n".  Quotes in strings should be
> > escaped for JSON, with a leading backslash.  NULL should be supported as
> > the four lower-case characters "null", ignoring ".nullvalue".
> >
> Numbers should be unquoted.
>
> NaN, Infinity are not definable through JSON.
>
>
> > The above setting should also affect the ".import filename [table]"
> > command as described in section 8 of  .
> > Columns should be created as necessary.  Signed zeros should be imported
> as
> > zero.
> >
> > The above facilities should be implemented whether or not
> > SQLITE_ENABLE_JSON1 was enabled when compiling the shell tool.  They are
> > for export and import, not for handling JSON within SQLite.
> >
> > Implementation questions:
> >
> > I am not sure what the program should do if asked to import a value which
> > is an array or object.  Perhaps, for compatibility with the JSON1
> > extension, those should be imported as a string.
> >
> > I am not sure whether the program should respect the settings for
> > ".separator" for JSON mode, either for output or .import.
> >
> > I am not sure how BLOBs should be handled, either for output or .import.
> >
> They could be arrays of bytes.  A reviver could convert them to a
> Uint8Array on the JavaScript side.  They can't just be character strings;
> many byte combinations are invalid Unicode code points.
> [0,1,2,255]
>
>
> >
> > Simon.


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-21 Thread J Decker
On Sat, Jan 20, 2018 at 9:54 PM, Simon Slavin  wrote:

> Feature request for the Shell Tool: ".mode json".
>
> Output should be as a JSON array of objects, with one object for each row
> of the table.  Output should start with the "[" character and end with
> "]".  Rows should be separated with ",\n".  Quotes in strings should be
> escaped for JSON, with a leading backslash.  NULL should be supported as
> the four lower-case characters "null", ignoring ".nullvalue".
>
Numbers should be unquoted.

NaN and Infinity are not representable in JSON.


> The above setting should also affect the ".import filename [table]"
> command as described in section 8 of  .
> Columns should be created as necessary.  Signed zeros should be imported as
> zero.
>
> The above facilities should be implemented whether or not
> SQLITE_ENABLE_JSON1 was enabled when compiling the shell tool.  They are
> for export and import, not for handling JSON within SQLite.
>
> Implementation questions:
>
> I am not sure what the program should do if asked to import a value which
> is an array or object.  Perhaps, for compatibility with the JSON1
> extension, those should be imported as a string.
>
> I am not sure whether the program should respect the settings for
> ".separator" for JSON mode, either for output or .import.
>
> I am not sure how BLOBs should be handled, either for output or .import.
>
They could be arrays of bytes.  A reviver could convert them to a
Uint8Array on the JavaScript side.  They can't just be character strings;
many byte combinations are invalid Unicode code points.
[0,1,2,255]


>
> Simon.


[sqlite] Feature request for the Shell Tool: .mode json

2018-01-20 Thread Simon Slavin
Feature request for the Shell Tool: ".mode json".

Output should be as a JSON array of objects, with one object for each row of 
the table.  Output should start with the "[" character and end with "]".  Rows 
should be separated with ",\n".  Quotes in strings should be escaped for JSON, 
with a leading backslash.  NULL should be supported as the four lower-case 
characters "null", ignoring ".nullvalue".

The above setting should also affect the ".import filename [table]" command as 
described in section 8 of  .  Columns should be 
created as necessary.  Signed zeros should be imported as zero.

The above facilities should be implemented whether or not SQLITE_ENABLE_JSON1 
was enabled when compiling the shell tool.  They are for export and import, not 
for handling JSON within SQLite.

Implementation questions:

I am not sure what the program should do if asked to import a value which is an 
array or object.  Perhaps, for compatibility with the JSON1 extension, those 
should be imported as a string.

I am not sure whether the program should respect the settings for ".separator" 
for JSON mode, either for output or .import.

I am not sure how BLOBs should be handled, either for output or .import.

Simon.
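The proposed output format can be sketched with Python's sqlite3 module: column names become keys, NULL becomes null, and json.dumps handles the quote escaping. BLOBs are rendered as arrays of byte values here, which is one suggestion from elsewhere in this thread, not part of the original proposal:

```python
import json
import sqlite3

def mode_json(conn, sql):
    # Emit a SELECT result as a JSON array of objects, one per row.
    cur = conn.execute(sql)
    cols = [d[0] for d in cur.description]
    rows = []
    for row in cur:
        # None serializes as null; bytes become a list of ints.
        rows.append({c: (list(v) if isinstance(v, bytes) else v)
                     for c, v in zip(cols, row)})
    return json.dumps(rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a INTEGER, b TEXT, c BLOB)")
conn.execute("""INSERT INTO t VALUES (1, 'say "hi"', x'00ff'),
                                     (2, NULL, NULL)""")
payload = mode_json(conn, "SELECT * FROM t")
print(payload)
```

This sidesteps the open questions (BLOBs, signed zeros, .separator); it only shows the basic array-of-objects shape being requested.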


Re: [sqlite] Feature request: merge joins (preferably through a hint)

2017-11-07 Thread Simon Slavin
On 7 Nov 2017, at 7:59am, Davor Josipovic  wrote:

> What sqlite does now is for each "a" it searches through the index for "x".

If an ideal index already exists, accessing the correct records will be fast.  
If one does not exist, how would you expect a merge join to be any faster ?

There are specific cases where a merge join is faster than using JOIN … ORDER 
BY.  For that to happen, both source tables must already have indexes ideally 
suited to the merge join, and the rows which you’re going to want returned must 
be a very large proportion of both source tables, probably the whole tables.  
Also, SQLite has to be aware of those facts, it cannot simply assume them.

Except for the above cases, existing formats will be just as fast, and can be 
far faster, especially in cases where the rows wanted do not represent most of 
the rows of the existing tables.

Merge joins also represent a problem where you have to compare the two 
available rows.  There’s no good way to know what the programmer means by this, 
especially in cases involving many columns and collation methods.  Assumptions 
have to be made and whatever the development team picks is sure to annoy some 
users.

Simon.


Re: [sqlite] Feature request: merge joins (preferably through a hint)

2017-11-07 Thread Davor Josipovic
>  You are thinking that perhaps queries such as the following might
> be faster using a merge:
>
> SELECT * FROM tab1 JOIN tab2 ON tab1.a=tab2.x;
>
> I disagree.


I don't see any reason to disagree. A merge join will definitely be faster if
the data is already sorted; see the reference:
https://en.wikipedia.org/wiki/Sort-merge_join. It is a linear-time operation.


What sqlite does now is, for each "a", search through the index for "x".
Each such lookup takes logarithmic time, so with many records in tab1 the
cost stacks up to quasilinear time overall. I run into this constantly when
wrangling data in sqlite, with tab1 and tab2 in the millions of rows.
sqlite's nested loops are very fast, but these joins _could_ be made much
faster with merge joins in such situations. I just wish I had this hint...
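The linear-time merge step being described can be sketched over two inputs that are already sorted on unique keys (the condition Richard's reply also assumes); the tables here are illustrative:

```python
def merge_join(left, right):
    # Merge two lists of (key, payload) pairs, each sorted by key with
    # unique keys. Each element is visited at most once, so the join
    # costs O(len(left) + len(right)) instead of one O(log n) index
    # probe per outer row.
    i = j = 0
    out = []
    while i < len(left) and j < len(right):
        lk, rk = left[i][0], right[j][0]
        if lk == rk:
            out.append((lk, left[i][1], right[j][1]))
            i += 1
            j += 1
        elif lk < rk:
            i += 1  # advance whichever side has the smaller key
        else:
            j += 1
    return out

tab1 = [(1, "a"), (3, "b"), (5, "c")]  # sorted on tab1.a
tab2 = [(3, "x"), (4, "y"), (5, "z")]  # sorted on tab2.x
print(merge_join(tab1, tab2))  # [(3, 'b', 'x'), (5, 'c', 'z')]
```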


Re: [sqlite] Feature request: merge joins (preferably through a hint)

2017-11-05 Thread Simon Slavin


On 5 Nov 2017, at 11:04am, Richard Hipp  wrote:

> SQLite does do a merge in some cases, though not for what you would
> traditionally call a join.  For example, SQLite will do a merge to
> combine the two halves of this query:
> 
>SELECT a,b,c FROM tab1 UNION SELECT x,y,z FROM tab2 ORDER BY 1,2,3;

In case it’s not clear from the above, SQL processes the UNION before the ORDER 
BY clause.   If "merge join" didn’t already have a definition, we could use the 
term for that.

Simon.


Re: [sqlite] Feature request: merge joins (preferably through a hint)

2017-11-05 Thread Richard Hipp
On 11/5/17, Davor Josipovic  wrote:
> Merge joins could be an incredible optimization in some cases for large
> queries and would make sqlite much faster in such cases.

SQLite does do a merge in some cases, though not for what you would
traditionally call a join.  For example, SQLite will do a merge to
combine the two halves of this query:

SELECT a,b,c FROM tab1 UNION SELECT x,y,z FROM tab2 ORDER BY 1,2,3;

You are thinking that perhaps queries such as the following might be
faster using a merge:

SELECT * FROM tab1 JOIN tab2 ON tab1.a=tab2.x;

I disagree.  In order to do this as a merge, we'd need indexes on both
tab1.a and tab2.x.  (In order for the merge to be practical, and not
require an arbitrary amount of auxiliary storage, both indexes would
need to be UNIQUE.)  But if you already have either one of those two
indexes, then the nested-loop join will already be blazing fast.  It
is difficult to see how switching to a merge join would make it go any
faster.

-- 
D. Richard Hipp
d...@sqlite.org
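The merge of the two halves of that UNION can be illustrated with Python's heapq.merge: each half arrives sorted, a single linear pass interleaves them, and UNION's duplicate elimination is then just skipping equal neighbors. The sample rows are made up:

```python
import heapq

# Two already-sorted result sets, standing in for the two SELECT halves.
tab1 = [(1, "b", "c"), (3, "y", "z")]  # SELECT a,b,c FROM tab1, sorted
tab2 = [(2, "m", "n"), (3, "y", "z")]  # SELECT x,y,z FROM tab2, sorted

# One linear pass over both inputs, then drop adjacent duplicates
# (the UNION semantics).
merged = list(heapq.merge(tab1, tab2))
union = [row for i, row in enumerate(merged)
         if i == 0 or row != merged[i - 1]]
print(union)  # [(1, 'b', 'c'), (2, 'm', 'n'), (3, 'y', 'z')]
```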


[sqlite] Feature request: merge joins (preferably through a hint)

2017-11-05 Thread Davor Josipovic
Are there any plans to implement merge joins in sqlite? As far as I am aware, 
only nested loops are currently supported.

Merge joins could be an incredible optimization in some cases for large queries 
and would make sqlite much faster in such cases.

Personally, I would like to have this option rather as a sql HINT, than as an 
optimizer option, since the optimizer now, small and efficient as it is, does a 
great job. The merge join HINT could be used to greatly optimize specific 
queries - i.e. queries where poking the index many many times on an already 
ordered set is inefficient.


Re: [sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-04 Thread Nico Williams
On Fri, Aug 04, 2017 at 10:55:57AM -0500, Bob Friesenhahn wrote:
> Sqlite does not really have a way to know if a module in the current
> directory (the directory which just happened to be current when the request
> was made) should be trusted.  To be secure, sqlite should insist that the
> load request be something other than a bare module name because then the
> responsibility is put on the user of sqlite.

You can always load an absolute path.  That said, using relative paths
and depending on the caller's run-path is not bad per-se -- just
something to be aware of.


Re: [sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-04 Thread Nico Williams
On Fri, Aug 04, 2017 at 08:20:10AM -0700, J Decker wrote:
> On Fri, Aug 4, 2017 at 8:11 AM, Nico Williams  wrote:
> > No, see, the ".so"/".dll" suffix is used in all cases, and it varies by
> > platform, so it's best if SQLite3 adds it so you can keep your code more
> > portable.  While the "lib" prefix is only ever needed if you want a
> > link-edit to find the thing as -l<name> -- which you almost never ever
> > want when <name> is a loadable module.
> 
> So does the lib prefix.  You cannot package a <name>.so in an Android .apk,
> but you can include lib<name>.so.

Really?!  Oy.  But still, that would just mean that when building for an
Android platform SQLite3 *must* add the "lib" prefix, not that it should
try it on every Unix-like platform.

> At which point (because I apparently missed that too): if one is using
> CMake, you get a lib<name>.so without setting additional options.

So?

Nico
-- 


Re: [sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-04 Thread Bob Friesenhahn

On Fri, 4 Aug 2017, Peter Da Silva wrote:


On 8/4/17, 8:29 AM, "sqlite-users on behalf of Bob Friesenhahn" 
 
wrote:

Lazy programmers who request such things are of the same ilk which use 
programming practices resulting in SQL injection attacks.  Sqlite should not 
promote such practices.


Then require a fully qualified path and extension, and don’t have a search path 
for DLLs at all.

Otherwise you’re just haggling over where to draw the line.


The operating system (insert operating system used here) has an 
operating-system specific algorithm it uses when it searches for 
shared libraries which were specified using only the file name. 
Whether 'lib' at the front of the file name is significant to its 
searching behavior depends on the operating system used.


I have not checked what sqlite actually does, but for security, it 
should be doing its own 'stat' to find the existing module, and then 
open it via an explicit path in order to defeat any operating-system 
specific behavior.


If sqlite were to simply issue load requests via dlopen() (or 
equivalent) with various permutations, then it would become subject to 
the varying behavior of different systems.  For example, a program 
which uses sqlite as part of a directory indexer which is indexing a 
directory which contains uploads from untrusted users could be 
compromised.


Sqlite does not really have a way to know if a module in the current 
directory (the directory which just happened to be current when the 
request was made) should be trusted.  To be secure, sqlite should 
insist that the load request be something other than a bare module 
name because then the responsibility is put on the user of sqlite.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
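The approach Bob describes, doing your own 'stat' and handing the loader an explicit absolute path so no OS-specific search (including the current directory) ever runs, can be sketched as follows. The directory list is purely illustrative; nothing here is SQLite API:

```python
import os

# Illustrative list; a real embedder would pick its own trusted
# locations. This is an assumption, not anything SQLite defines.
TRUSTED_DIRS = ["/usr/lib/sqlite3", "/usr/local/lib/sqlite3"]

def resolve_module(name):
    # Stat the module ourselves in known-good directories and return
    # an absolute path, so the eventual dlopen()/LoadLibrary() call
    # never falls back to OS search rules that may include the
    # current directory.
    if os.path.isabs(name):
        return name
    for d in TRUSTED_DIRS:
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate):
            return os.path.abspath(candidate)
    raise FileNotFoundError(f"module {name!r} not found in trusted dirs")
```

A caller would then pass the resolved absolute path, never the bare module name, to the actual extension-loading call.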


Re: [sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-04 Thread J Decker
On Fri, Aug 4, 2017 at 8:11 AM, Nico Williams  wrote:

> On Fri, Aug 04, 2017 at 10:17:33AM +0200, Dominique Devienne wrote:
> > On Fri, Aug 4, 2017 at 2:46 AM, Nico Williams 
> wrote:
> > > You're mistaken.
> > >
> > > lib<name>.so is NOT "the default naming scheme on many *nix platforms".
> > >
> > > lib<name>.so is the naming scheme when you want the link-editor (e.g.,
> > > ld(1)) to find a library using -l<name>.
> > >
> > > But loadable modules are almost never meant to be used that way.
> > > They're usually meant to be used only through dlopen() and friends.
> >
> > While you're technically correct, Matt's request seems completely
> > reasonable to me.
>
> Not if it's a result of being mistaken.  Now that OP knows about object
> naming, he can reconsider and restate his request.
>
> > If SQLite wasn't doing *any* changes to the filename, not adding the
> > extension for example, you may have had a point, but since it does,
> > trying with the lib prefix on Linux, which is undeniably a common
>
> No, see, the ".so"/".dll" suffix is used in all cases, and it varies by
> platform, so it's best if SQLite3 adds it so you can keep your code more
> portable.  While the "lib" prefix is only ever needed if you want a
> link-edit to find the thing as -l<name> -- which you almost never ever
> want when <name> is a loadable module.
>

So does the lib prefix.  You cannot package a <name>.so in an Android .apk,
but you can include lib<name>.so.



>
> > naming convention on Linux, seems like a worthwhile addition. [...]
>
> You didn't understand.
>
At which point (because I apparently missed that too): if one is using
CMake, you get a lib<name>.so without setting additional options.


>
> > I really don't see what's controversial with Matt's request :)
>
> a) it's borne of a mistake
> b) it's not necessary
> c) it's more code and more docs
> d) it's more likely to lead to accidents
>
>
a) It's borne of "I want to load 'myExtension'", and sqlite already does
substitutions.
b) Maybe.
c) So?  It's not as if, given a whole path like /usr/lib/myextension, it
would then try lib/usr/lib/myextension.so.
d) And the current behavior is more likely to cause scripting configuration
issues: there are no .if or .goto commands to handle error conditions, so
you have to maintain at least 2 scripts instead of just 1, leading to more
accidents when updating one and not the other.


> > It's not like load-extension is a performance-critical operation, that
> > trying an extra load is that expensive.
>
> This is true, but also irrelevant :)
>
> > And the security consideration that an "attacker" could make it load
> > his own library instead, but using a different name tried before the
> > actual one is moot IMHO, since extension loading is by nature unsafe.
> >
> > In short, I support Matt's request and I hope DRH considers it seriously.
> > FWIW :). --DD
>
> What problems does this solve?  None.
>
> Wrong, it adds the ability to code a single script to execute.


> Nico
> --


Re: [sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-04 Thread J Decker
On Fri, Aug 4, 2017 at 6:29 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> On Fri, 4 Aug 2017, Dominique Devienne wrote:
>
>>
>> I really don't see what's controversial with Matt's request :)
>>
>> It's not like load-extension is a performance-critical operation, that
>> trying an extra load is that expensive.
>> And the security consideration that an "attacker" could make it load his
>> own library instead, but using a
>> different name tried before the actual one is moot IMHO, since extension
>> loading is by nature unsafe.
>>
>> In short, I support Matt's request and I hope DRH considers it seriously.
>> FWIW :). --DD
>>
>
> It is true that sqlite normally only needs to load an extension once per
> invocation.  However, loading an extension incurs a cost in that
> several/many 'stat' operations on the filesystem are necessary in order to
> find the module unless the full path to it was given (use 'strace',
> 'truss', or 'dtruss' to see this in action).  The security implications can
> be severe on some popular operating systems.
>
> As I mentioned before, two very popular desktop OSs (Microsoft Windows and
> Apple's OS X) have a defined pattern in that they will search the current
> directory for a module by default.


And then it searches the whole path anyway.  Certainly not a cost savings;
I don't see the point.


> Hopefully it should sink in that if one requests loading the extension
> while the process current directory is currently in a potentially 'hostile'
> directory that this may lead to the compromise of the account of the user
> ID executing sqlite because arbitrary binary code can be injected.
>

And how does this help avoid that?  With export LD_LIBRARY_PATH=. you get
the same behavior as Windows...


>
> Lazy programmers who request such things are of the same ilk which use
> programming practices resulting in SQL injection attacks.  Sqlite should
> not promote such practices.
>
Probably the extension is not in a well-known place, so a list of possible
places is still going to be checked anyway.


> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-04 Thread Nico Williams
On Fri, Aug 04, 2017 at 10:17:33AM +0200, Dominique Devienne wrote:
> On Fri, Aug 4, 2017 at 2:46 AM, Nico Williams  wrote:
> > You're mistaken.
> >
> > lib<name>.so is NOT "the default naming scheme on many *nix platforms".
> >
> > lib<name>.so is the naming scheme when you want the link-editor (e.g.,
> > ld(1)) to find a library using -l<name>.
> >
> > But loadable modules are almost never meant to be used that way.
> > They're usually meant to be used only through dlopen() and friends.
> 
> While you're technically correct, Matt's request seems completely
> reasonable to me.

Not if it's a result of being mistaken.  Now that OP knows about object
naming, he can reconsider and restate his request.

> If SQLite wasn't doing *any* changes to the filename, not adding the
> extension for example, you may have had a point, but since it does,
> trying with the lib prefix on Linux, which is undeniably a common

No, see, the ".so"/".dll" suffix is used in all cases, and it varies by
platform, so it's best if SQLite3 adds it so you can keep your code more
portable.  While the "lib" prefix is only ever needed if you want a
link-edit to find the thing as -l<name> -- which you almost never ever
want when <name> is a loadable module.

> naming convention on Linux, seems like a worthwhile addition. [...]

You didn't understand.

> I really don't see what's controversial with Matt's request :)

a) it's borne of a mistake
b) it's not necessary
c) it's more code and more docs
d) it's more likely to lead to accidents

> It's not like load-extension is a performance-critical operation, that
> trying an extra load is that expensive.

This is true, but also irrelevant :)

> And the security consideration that an "attacker" could make it load
> his own library instead, but using a different name tried before the
> actual one is moot IMHO, since extension loading is by nature unsafe.
> 
> In short, I support Matt's request and I hope DRH considers it seriously.
> FWIW :). --DD

What problems does this solve?  None.

Nico
-- 


Re: [sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-04 Thread Nico Williams
On Fri, Aug 04, 2017 at 06:05:53AM +, Hick Gunter wrote:
> >Von: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] Im 
> >Auftrag von Nico Williams
> >But loadable modules are almost never meant to be used that way.
> >They're usually meant to be used only through dlopen() and friends.
> 
> Which other method apart from dlopen() would you recommend for
> dynamically loaded extensions?

There are only two methods for loading an object dynamically: because you
demanded it at link-edit time, or because you used dlopen() or similar.

There are no others, full stop.

(You could write your own run-time loader, but you'd still be
implementing a dlopen().)

> We are using virtual tables to interface with diverse data stores and
> make them queryable with SQL. The general interactive shell needs
> access to a certain subset of functions, and some speciality tools
> have dedicated loadable extensions that should not be accessible
> outside of that scope.

So?

Nico
-- 


Re: [sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-04 Thread Peter Da Silva
On 8/4/17, 8:29 AM, "sqlite-users on behalf of Bob Friesenhahn" 
 wrote:
> Lazy programmers who request such things are of the same ilk which use 
> programming practices resulting in SQL injection attacks.  Sqlite should not 
> promote such practices.

Then require a fully qualified path and extension, and don’t have a search path 
for DLLs at all.
 
Otherwise you’re just haggling over where to draw the line.



Re: [sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-04 Thread Bob Friesenhahn

On Fri, 4 Aug 2017, Dominique Devienne wrote:


I really don't see what's controversial with Matt's request :)

It's not like load-extension is a performance-critical operation, that
trying an extra load is that expensive.
And the security consideration that an "attacker" could make it load his
own library instead, but using a
different name tried before the actual one is moot IMHO, since extension
loading is by nature unsafe.

In short, I support Matt's request and I hope DRH considers it seriously.
FWIW :). --DD


It is true that sqlite normally only needs to load an extension once 
per invocation.  However, loading an extension incurs a cost in that 
several/many 'stat' operations on the filesystem are necessary in 
order to find the module unless the full path to it was given (use 
'strace', 'truss', or 'dtruss' to see this in action).  The security 
implications can be severe on some popular operating systems.


As I mentioned before, two very popular desktop OSs (Microsoft Windows 
and Apple's OS X) have a defined pattern in that they will search the 
current directory for a module by default.  Hopefully it should sink 
in that if one requests loading the extension while the process 
current directory is currently in a potentially 'hostile' directory 
that this may lead to the compromise of the account of the user ID 
executing sqlite because arbitrary binary code can be injected.


Lazy programmers who request such things are of the same ilk which use 
programming practices resulting in SQL injection attacks.  Sqlite 
should not promote such practices.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-04 Thread Dominique Devienne
On Fri, Aug 4, 2017 at 2:46 AM, Nico Williams  wrote:

> On Tue, Aug 01, 2017 at 10:56:47AM -0700, Matt Chambers wrote:
> > load_extension() has the very sensible behavior of:
> > > So for example, if "samplelib" cannot be loaded, then names like
> > > "samplelib.so" or "samplelib.dylib" or "samplelib.dll" might be tried
> > > also.
> >
> > I would like to see that extended to include "libsamplelib.so" since
> that is
> > the default naming scheme on many *nix platforms. This simple change
> would
> > allow me to use the same base library name for my extension on both
> Windows
> > and Linux. Otherwise I have to modify my build system to override its
> > default behavior of adding the lib prefix on Linux.
>
> You're mistaken.
>
> lib<name>.so is NOT "the default naming scheme on many *nix platforms".
>
> lib<name>.so is the naming scheme when you want the link-editor (e.g.,
> ld(1)) to find a library using -l<name>.
>
> But loadable modules are almost never meant to be used that way.
> They're usually meant to be used only through dlopen() and friends.


While you're technically correct, Matt's request seems completely reasonable
to me. If SQLite weren't making *any* changes to the filename (not adding
the extension, for example) you might have had a point, but since it does,
trying the lib prefix on Linux, which is undeniably a common naming
convention there, seems like a worthwhile addition. It is true, after all,
that most build systems will by default use that "ld-based" naming
convention, and that there's no distinction between a shared lib meant to
be loaded explicitly via dlopen/LoadLibrary and one that's implicitly
linked with an executable. In fact, you can have the same shared lib SQLite
"extension" used both ways: explicit loading by sqlite3.exe, and implicit
loading by custom-app.exe.

I really don't see what's controversial with Matt's request :)

It's not like load-extension is a performance-critical operation, that
trying an extra load is that expensive.
And the security consideration that an "attacker" could make it load his
own library instead, but using a
different name tried before the actual one is moot IMHO, since extension
loading is by nature unsafe.

In short, I support Matt's request and I hope DRH considers it seriously.
FWIW :). --DD
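What Matt is asking for amounts to extending the candidate names load_extension() already tries with a "lib"-prefixed variant. A hedged sketch of that candidate list follows; the platform keys and ordering are assumptions about the proposal, not current SQLite behavior:

```python
def extension_candidates(name, platform="linux"):
    # Names to try, in order: the bare name, the name with the
    # platform's shared-library suffix (what SQLite's docs already
    # describe), and, per the request, a "lib"-prefixed variant on
    # Unix-like systems.
    suffix = {"linux": ".so", "darwin": ".dylib", "win32": ".dll"}[platform]
    candidates = [name, name + suffix]
    if platform != "win32":
        candidates.append("lib" + name + suffix)
    return candidates

print(extension_candidates("samplelib"))
# ['samplelib', 'samplelib.so', 'libsamplelib.so']
```

This would let the same base name ("samplelib") work on both Windows and Linux, even when the build system applies the lib prefix by default.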


Re: [sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-03 Thread Hick Gunter
>Von: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] Im 
>Auftrag von Nico Williams
>
>On Tue, Aug 01, 2017 at 10:56:47AM -0700, Matt Chambers wrote:
>> load_extension() has the very sensible behavior of:
>> > So for example, if "samplelib" cannot be loaded, then names like
>> > "samplelib.so" or "samplelib.dylib" or "samplelib.dll" might be
>> > tried also.
>>
>> I would like to see that extended to include "libsamplelib.so" since
>> that is the default naming scheme on many *nix platforms. This simple
>> change would allow me to use the same base library name for my
>> extension on both Windows and Linux. Otherwise I have to modify my
>> build system to override its default behavior of adding the lib prefix on 
>> Linux.
>
>You're mistaken.
>
>lib<name>.so is NOT "the default naming scheme on many *nix platforms".
>
>lib<name>.so is the naming scheme when you want the link-editor (e.g.,
>ld(1)) to find a library using -l.
>
>But loadable modules are almost never meant to be used that way.
>They're usually meant to be used only through dlopen() and friends.
>

Which other method apart from dlopen() would you recommend for dynamically 
loaded extensions?

We are using virtual tables to interface with diverse data stores and make them 
queryable with SQL. The general interactive shell needs access to a certain 
subset of functions, and some speciality tools have dedicated loadable 
extensions that should not be accessible outside of that scope.


___
 Gunter Hick
Software Engineer
Scientific Games International GmbH
FN 157284 a, HG Wien
Klitschgasse 2-4, A-1130 Vienna, Austria
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

This communication (including any attachments) is intended for the use of the 
intended recipient(s) only and may contain information that is confidential, 
privileged or legally protected. Any unauthorized use or dissemination of this 
communication is strictly prohibited. If you have received this communication 
in error, please immediately notify the sender by return e-mail message and 
delete all copies of the original communication. Thank you for your cooperation.




Re: [sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-03 Thread Nico Williams
On Tue, Aug 01, 2017 at 10:56:47AM -0700, Matt Chambers wrote:
> load_extension() has the very sensible behavior of:
> > So for example, if "samplelib" cannot be loaded, then names like
> > "samplelib.so" or "samplelib.dylib" or "samplelib.dll" might be tried
> > also.
> 
> I would like to see that extended to include "libsamplelib.so" since that is
> the default naming scheme on many *nix platforms. This simple change would
> allow me to use the same base library name for my extension on both Windows
> and Linux. Otherwise I have to modify my build system to override its
> default behavior of adding the lib prefix on Linux.

You're mistaken.

lib<name>.so is NOT "the default naming scheme on many *nix platforms".

lib<name>.so is the naming scheme when you want the link-editor (e.g.,
ld(1)) to find a library using -l.

But loadable modules are almost never meant to be used that way.
They're usually meant to be used only through dlopen() and friends.

Nico
-- 


Re: [sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-03 Thread J Decker
On Thu, Aug 3, 2017 at 10:42 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> On Tue, 1 Aug 2017, Matt Chambers wrote:
>
> load_extension() has the very sensible behavior of:
>>
>>> So for example, if "samplelib" cannot be loaded, then names like
>>> "samplelib.so" or "samplelib.dylib" or "samplelib.dll" might be tried
>>> also.
>>>
>>
>> I would like to see that extended to include "libsamplelib.so" since that
>> is
>> the default naming scheme on many *nix platforms. This simple change would
>> allow me to use the same base library name for my extension on both
>> Windows
>> and Linux. Otherwise I have to modify my build system to override its
>> default behavior of adding the lib prefix on Linux.
>>
>
> These conveniences tend to lessen the security of sqlite since this is
> arbitrary executable code capable of doing anything the user is able to do
> (e.g. delete all files or add a virus).  If the user is willing to be
> precise, then there is less risk of a compromised module/library from being
> introduced.
>
>
then it shouldn't try any alternatives.  OR it should always try
'lib<name>.so', which would also be acceptable.



> It should be obvious that calling sqlite3_load_extension() without an
> absolute path, or other safeguards, exposes the program to accidentally
> loading a file from whatever happens to be the current directory (perhaps a
> writeable directory that an attacker was able to write into).
>
> Apple's OS X and Microsoft Windows always try to load from the current
> directory.
>
> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>


Re: [sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-03 Thread Bob Friesenhahn

On Tue, 1 Aug 2017, Matt Chambers wrote:


load_extension() has the very sensible behavior of:

So for example, if "samplelib" cannot be loaded, then names like
"samplelib.so" or "samplelib.dylib" or "samplelib.dll" might be tried
also.


I would like to see that extended to include "libsamplelib.so" since that is
the default naming scheme on many *nix platforms. This simple change would
allow me to use the same base library name for my extension on both Windows
and Linux. Otherwise I have to modify my build system to override its
default behavior of adding the lib prefix on Linux.


These conveniences tend to lessen the security of sqlite since this is 
arbitrary executable code capable of doing anything the user is able 
to do (e.g. delete all files or add a virus).  If the user is willing 
to be precise, then there is less risk of a compromised module/library 
from being introduced.


It should be obvious that calling sqlite3_load_extension() without an 
absolute path, or other safeguards, exposes the program to 
accidentally loading a file from whatever happens to be the current 
directory (perhaps a writeable directory that an attacker was able to 
write into).


Apple's OS X and Microsoft Windows always try to load from the current 
directory.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


[sqlite] Feature request: check for 'lib' prefix for load_extension()

2017-08-02 Thread Matt Chambers
load_extension() has the very sensible behavior of:
> So for example, if "samplelib" cannot be loaded, then names like
> "samplelib.so" or "samplelib.dylib" or "samplelib.dll" might be tried
> also.

I would like to see that extended to include "libsamplelib.so" since that is
the default naming scheme on many *nix platforms. This simple change would
allow me to use the same base library name for my extension on both Windows
and Linux. Otherwise I have to modify my build system to override its
default behavior of adding the lib prefix on Linux.



--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Feature-request-check-for-lib-prefix-for-load-extension-tp96658.html
Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] Feature request: support for FROM_UNIXTIME

2017-06-13 Thread Igor Tandetnik

On 6/13/2017 11:21 AM, René Cannaò wrote:

I would like to have support for FROM_UNIXTIME() function, as available in
MySQL:
https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_from-unixtime

Some background about this feature request.

ProxySQL (https://github.com/sysown/proxysql/) is a proxy for MySQL , and
it uses SQLite (currently bundled) to store configuration and export
metrics using a MySQL interface built on top of SQLite itself.
A feature request for ProxySQL was to have FROM_UNIXTIME() available (
https://github.com/sysown/proxysql/issues/758), and this was made available
in:
* for SQLite 3.15 :
https://github.com/sysown/proxysql/commit/b49966a8509f2c85e2507534ed6f1843a654ac81
* for SQLite 3.19 :
https://github.com/sysown/proxysql/blob/312a04c73dafc6f5c23bb308c1a70b5f3728899e/deps/sqlite3/from_unixtime.patch


Any reason the proxy could not install FROM_UNIXTIME as a custom function ( 
https://sqlite.org/c3ref/create_function.html )? Why does it need to be hacked 
directly into SQLite library?
--
Igor Tandetnik
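To illustrate Igor's suggestion, the same thing sketched with Python's bundled sqlite3 module, whose create_function() wraps the C-level sqlite3_create_function(). The MySQL-style output format and the choice of UTC are assumptions here (MySQL renders in the session time zone):

```python
import sqlite3
from datetime import datetime, timezone

# Mimic MySQL's default FROM_UNIXTIME output format; UTC is an
# assumption (MySQL uses the session time zone).
def from_unixtime(ts):
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

conn = sqlite3.connect(":memory:")
conn.create_function("FROM_UNIXTIME", 1, from_unixtime)

print(conn.execute("SELECT FROM_UNIXTIME(86400)").fetchone()[0])
# 1970-01-02 00:00:00
```

The built-in equivalent is `SELECT datetime(86400, 'unixepoch')`, which yields the same string.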



[sqlite] Feature request: support for FROM_UNIXTIME

2017-06-13 Thread René Cannaò
I would like to have support for FROM_UNIXTIME() function, as available in
MySQL:
https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_from-unixtime

Some background about this feature request.

I perfectly understand that SQLite is not MySQL, that FROM_UNIXTIME is a
function currently only available in MySQL , and that the equivalent in
SQLite is to use the "unixepoch" modifier in date and time function.

ProxySQL (https://github.com/sysown/proxysql/) is a proxy for MySQL , and
it uses SQLite (currently bundled) to store configuration and export
metrics using a MySQL interface built on top of SQLite itself.
A feature request for ProxySQL was to have FROM_UNIXTIME() available (
https://github.com/sysown/proxysql/issues/758), and this was made available
in:
* for SQLite 3.15 :
https://github.com/sysown/proxysql/commit/b49966a8509f2c85e2507534ed6f1843a654ac81
* for SQLite 3.19 :
https://github.com/sysown/proxysql/blob/312a04c73dafc6f5c23bb308c1a70b5f3728899e/deps/sqlite3/from_unixtime.patch


It is being discussed in https://bugzilla.redhat.com/show_bug.cgi?id=1457929
about de-bundling SQLite from ProxySQL , and having this feature upstream
(SQLite) should make it possible to de-bundle the two.

Would you accept (or improve) the function FROM_UNIXTIME as implemented in
https://github.com/sysown/proxysql/blob/312a04c73dafc6f5c23bb308c1a70b5f3728899e/deps/sqlite3/from_unixtime.patch
?

Thanks,
René


[sqlite] Feature request: please have the sqlite3_set_authorizer callback given the full list of tables and columns used by a statement

2017-04-07 Thread Gwendal Roué
Hello,

I'm the author of GRDB.swift [1], a Swift wrapper around SQLite which aims, 
among other things, at notifying of transactions that may have an impact on
the results of a SELECT statement.

For example, `SELECT a, b FROM table1` is impacted by `DELETE FROM table1`, but 
not by `UPDATE table1 SET c = 1` or `INSERT INTO table2 (...)`.

To achieve this feature, GRDB uses a sqlite3_set_authorizer callback [2]. For 
example, the callback is given SQLITE_READ which tells that `SELECT a, b FROM 
table1` uses the columns a and b from table1. The authorizer callback can also 
be given SQLITE_INSERT, which tells that `INSERT INTO table2 (...)` performs 
an insertion in table2.

Those pieces of information can be compared together, so that one can deduce 
that `INSERT INTO table2 (...)` has no impact on `SELECT a, b FROM table1`, but 
`DELETE FROM table1` has.

Now, enter `COUNT(*)`. The sqlite3_set_authorizer callback is told nearly 
nothing about the `SELECT COUNT(*) FROM table1`. Especially not that table1 is 
used. It is only told that the COUNT function is called through SQLITE_FUNCTION. 
That's all. That is more than nothing, because one can deduce from a call to 
the COUNT function that *any* statement can have an impact on `SELECT COUNT(*) 
FROM table1`. For example, `INSERT INTO table2 (...)` will be assumed to have 
an impact on `SELECT COUNT(*) FROM table1`.

Unfortunately, this is less than ideal. I understand the situation: `SELECT 
COUNT(*) FROM table1` does not access values from the table1 table, and thus 
does not need any authorization. But I suggest that sqlite3_set_authorizer is 
so close to giving a full picture of the columns and tables read by a 
statement that it's a pity that a simple COUNT(*) is able to ruin the picture.

So here is my feature request: please have the sqlite3_set_authorizer callback 
given the full list of tables and columns used by a statement.

Cheers,
Gwendal Roué
[1] http://github.com/groue/GRDB.swift
[2] https://sqlite.org/c3ref/set_authorizer.html
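Gwendal's description can be reproduced with Python's bundled sqlite3 module, which exposes the same authorizer hook; a sketch of the column-level SQLITE_READ reporting he relies on:

```python
import sqlite3

reads = []

# Authorizer callback: record each (table, column) pair a statement reads.
def authorizer(action, arg1, arg2, db_name, trigger_or_view):
    if action == sqlite3.SQLITE_READ:
        reads.append((arg1, arg2))  # arg1 = table name, arg2 = column name
    return sqlite3.SQLITE_OK

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (a, b, c)")
conn.set_authorizer(authorizer)

conn.execute("SELECT a, b FROM table1").fetchall()
print(reads)  # includes ('table1', 'a') and ('table1', 'b')
```

For a plain `SELECT COUNT(*) FROM table1`, no comparable per-column SQLITE_READ events identify table1, which is exactly the gap described above.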



Re: [sqlite] Feature request

2017-01-16 Thread Richard Hipp
On 1/16/17, Simon Slavin  wrote:
>
>
> I think it would be easy to add but I’m not part of the dev group and don’t
> really know if this is the case.
>

The implementation is here:
https://www.sqlite.org/src/artifact/dc3f1391d9297f8c?ln=983-1133

Who can send me a patch?

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Feature request

2017-01-16 Thread Simon Slavin

On 16 Jan 2017, at 1:10pm, Stephen Chrzanowski  wrote:

> From the above link:
> 
> %V is replaced by the week number of the year (Monday as the first day of
> the week) as a decimal number [01,53]. If the week containing 1 January has
> four or more days in the new year, then it is considered week 1. Otherwise,
> it is the last week of the previous year, and the next week is week 1. %W
> is replaced by the week number of the year (Monday as the first day of the
> week) as a decimal number [00,53]. All days in a new year preceding the
> first Monday are considered to be in week 0.

That almost fits the definition of an ISO week number, not the one for a USA 
week number.  So it is what the OP was asking for.

> So with my side question, the OPs concerns about a 54 week year, and
> pending a serious concern about allowing things to strftime, I'm not sure
> why %V couldn't be added?

I think it would be easy to add but I’m not part of the dev group and don’t 
really know if this is the case.

Simon.


Re: [sqlite] Feature request

2017-01-16 Thread Hick Gunter
http://man7.org/linux/man-pages/man3/strftime.3.html

"ISO 8601 week dates
   %G, %g, and %V yield values calculated from the week-based year
   defined by the ISO 8601 standard.  In this system, weeks start on a
   Monday, and are numbered from 01, for the first week, up to 52 or 53,
   for the last week.  Week 1 is the first week where four or more days
   fall within the new year (or, synonymously, week 01 is: the first
   week of the year that contains a Thursday; or, the week that has 4
   January in it).  When three or fewer days of the first calendar week
   of the new year fall within that year, then the ISO 8601 week-based
   system counts those days as part of week 53 of the preceding year.
   For example, 1 January 2010 is a Friday, meaning that just three days
   of that calendar week fall in 2010.  Thus, the ISO 8601 week-based
   system considers these days to be part of week 53 (%V) of the year
   2009 (%G); week 01 of ISO 8601 year 2010 starts on Monday, 4 January
   2010."

%g as "shortest representation of a float" is from (s)printf format specifiers.


-Ursprüngliche Nachricht-
Von: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] Im 
Auftrag von Simon Slavin
Gesendet: Montag, 16. Jänner 2017 13:30
An: SQLite mailing list 
Betreff: Re: [sqlite] Feature request


On 16 Jan 2017, at 12:17pm, Hick Gunter  wrote:

> Please be aware that %V implies %G/%g (four and two digit ISO Year number), 
> which differs from %Y/%y on the "spillover days" that belong to the 
> first/last week of the "other" year.

Can you tell me where your %G and %g definitions are coming from ?  I thought 
%g was for printing the shortest representation of a number.

Simon.






Re: [sqlite] Feature request

2017-01-16 Thread Stephen Chrzanowski
Straight up, I've never had to concern myself with the week number of a
year.  I'm aware of it, but, with it a moving number year to year, I've
never relied on it, or even had the requirement/desire to output it as a
result, except maybe for 'fun'.

The SQLite.org page references that strftime goes against the standard C
library and contains a link to
http://pubs.opengroup.org/onlinepubs/007908799/xsh/strftime.html and
mentions only a subset of substitutions are available (Side question: Why
only a subset?)

I don't see %G or %g as a modifier.  The last time I wrote anything in C
was in in my college days, so I'm not entirely familiar with strftime.
Could %G/%g be a sprintf thing, and not a strftime thing?

From the above link:

%V is replaced by the week number of the year (Monday as the first day of
the week) as a decimal number [01,53]. If the week containing 1 January has
four or more days in the new year, then it is considered week 1. Otherwise,
it is the last week of the previous year, and the next week is week 1. %W
is replaced by the week number of the year (Monday as the first day of the
week) as a decimal number [00,53]. All days in a new year preceding the
first Monday are considered to be in week 0.

So with my side question, the OPs concerns about a 54 week year, and
pending a serious concern about allowing things to strftime, I'm not sure
why %V couldn't be added?

On Mon, Jan 16, 2017 at 7:29 AM, Simon Slavin  wrote:

>
> On 16 Jan 2017, at 12:17pm, Hick Gunter  wrote:
>
> > Please be aware that %V implies %G/%g (four and two digit ISO Year
> number), which differs from %Y/%y on the "spillover days" that belong to
> the first/last week of the "other" year.
>
> Can you tell me where your %G and %g definitions are coming from ?  I
> thought %g was for printing the shortest representation of a number.
>
> Simon.


Re: [sqlite] Feature request

2017-01-16 Thread Simon Slavin

On 16 Jan 2017, at 12:17pm, Hick Gunter  wrote:

> Please be aware that %V implies %G/%g (four and two digit ISO Year number), 
> which differs from %Y/%y on the "spillover days" that belong to the 
> first/last week of the "other" year.

Can you tell me where your %G and %g definitions are coming from ?  I thought 
%g was for printing the shortest representation of a number.

Simon.


Re: [sqlite] Feature request

2017-01-16 Thread Hick Gunter
Please be aware that %V implies %G/%g (four and two digit ISO Year number), 
which differs from %Y/%y on the "spillover days" that belong to the first/last 
week of the "other" year.

-Ursprüngliche Nachricht-
Von: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] Im 
Auftrag von Simon Slavin
Gesendet: Montag, 16. Jänner 2017 11:44
An: SQLite mailing list 
Betreff: Re: [sqlite] Feature request


On 16 Jan 2017, at 7:53am, Jean-Christophe Deschamps  wrote:

> Would it be possible to add the '%V' format (ISO week number in [01..53]) in 
> some future release?

For those playing along at home, the EU week starts on a Monday, with week 1 
being the one which contains the first Thursday of the year.  The USA week 
starts on a Sunday, with week 1 being the one which contains the first Saturday 
of the year.

I cannot speak for the development team, but I wanted to check your definition. 
 Are you asking for the ISO 8601 week number as specified on this page:

<https://en.wikipedia.org/wiki/ISO_week_date>

Do you want them always to be two digits long, as specified in the standard ?
Do you want them preceded by the mandatory 'W' or would you expect to have the 
programmer add that themself ?

Simon.






Re: [sqlite] Feature request

2017-01-16 Thread Simon Slavin

On 16 Jan 2017, at 7:53am, Jean-Christophe Deschamps  wrote:

> Would it be possible to add the '%V' format (ISO week number in [01..53]) in 
> some future release?

For those playing along at home, the EU week starts on a Monday, with week 1 
being the one which contains the first Thursday of the year.  The USA week 
starts on a Sunday, with week 1 being the one which contains the first Saturday 
of the year.

I cannot speak for the development team, but I wanted to check your definition. 
 Are you asking for the ISO 8601 week number as specified on this page:

<https://en.wikipedia.org/wiki/ISO_week_date>

Do you want them always to be two digits long, as specified in the standard ?
Do you want them preceded by the mandatory 'W' or would you expect to have the 
programmer add that themself ?

Simon.


[sqlite] Feature request

2017-01-15 Thread Jean-Christophe Deschamps

Dear list,

I often have to use SQLite strftime() to compute a week number, but the 
only format offered, '%W', causes problems.
Not only does that North American week number have a varying range [00..52] 
or [01..53], but some years (e.g. 2012, 2040) yield a result in 
[00..53], making those years 54 weeks long.


This is a major inconvenience for a number of applications, since 
deriving the ISO week number from an ISO date as part of a [sub]query 
or condition is a real pain and slows down things significantly.


Would it be possible to add the '%V' format (ISO week number in 
[01..53]) in some future release?



--
Jean-Christophe Deschamps
2891 route de Pouillon
40180 Heugas
France
06 15 10 19 29  
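For reference, the workaround being described: derive the ISO week by locating the Thursday of the date's Monday-based week with the existing date modifiers, then turn that Thursday's day-of-year into a 1-based index. A sketch via Python's bundled SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Thursday of d's Mon-Sun week: step back 3 days, then forward to
# Thursday ('weekday 4'); its day-of-year (%j) gives the ISO week.
ISO_WEEK = "SELECT (strftime('%j', date(:d, '-3 days', 'weekday 4')) - 1) / 7 + 1"

def iso_week(d):
    return conn.execute(ISO_WEEK, {"d": d}).fetchone()[0]

print(iso_week("2010-01-01"))  # 53 (last ISO week of 2009)
print(iso_week("2010-01-04"))  # 1  (first ISO week of 2010)
```

Embedding this in every query is exactly the kind of pain a built-in '%V' would remove.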




[sqlite] Feature request: ANALYZE REMOVE

2016-01-31 Thread Simon Slavin
I would like a version of the ANALYZE command which drops all the tables that 
ANALYZE creates, then updates the query planner so that it knows no stats are 
available.  This command is intended to help with

A) automated testing and time-trials
B) cleanup for transporting datasets from one setup to another
C) cleanup for when changing compilation options

It should delete not just the tables that ANALYZE in its current mode would 
create, but also those which might have been created by other compilation 
options such as SQLITE_ENABLE_STAT3.  If no schema or table name is supplied, 
dropping all tables with names starting with 'sqlite_stat' is fine.

The current syntax for ANALYZE is

ANALYZE [schema-name.table-or-index-name]

I see no need to implement ANALYZE REMOVE for individual tables or indices, so 
I propose that this new command be

ANALYZE [schema-name] REMOVE

This is my best idea but it does present ambiguity if you have a schema called 
REMOVE, and I'm happy to accept alternative ideas.  Or you may prefer 'DROP' to 
'REMOVE' or have some completely different idea about syntax.

PS: For those interested I currently use the following sequence:

DROP TABLE IF EXISTS sqlite_stat1;
DROP TABLE IF EXISTS sqlite_stat2;
DROP TABLE IF EXISTS sqlite_stat3;
DROP TABLE IF EXISTS sqlite_stat4;
ANALYZE sqlite_master;

Simon.
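A sketch checking that this sequence runs as intended against Python's bundled SQLite (the sqlite_stat tables are ordinary, droppable tables, which is what the sequence relies on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (x);
    CREATE INDEX t_x ON t (x);
    INSERT INTO t VALUES (1), (2), (3);
    ANALYZE;
""")

def stat_tables():
    return [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE name LIKE 'sqlite_stat%'")]

before_drop = stat_tables()  # ANALYZE created at least sqlite_stat1

# Simon's cleanup sequence: drop the stats, then hint the planner to reload.
conn.executescript("""
    DROP TABLE IF EXISTS sqlite_stat1;
    DROP TABLE IF EXISTS sqlite_stat2;
    DROP TABLE IF EXISTS sqlite_stat3;
    DROP TABLE IF EXISTS sqlite_stat4;
""")
after_drop = stat_tables()

conn.execute("ANALYZE sqlite_master")  # reload the (now absent) statistics
print(before_drop, after_drop)
```

The proposed ANALYZE [schema-name] REMOVE would collapse this whole dance into one statement.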


[sqlite] Feature Request: Support for MMAP beyond 2GB

2015-10-31 Thread Eduardo Morras
On Fri, 30 Oct 2015 12:01:15 -0700
Roger Binns  wrote:

> https://sqlite.org/mmap.html
> 
> SQLite supports memory mapping databases, but only does so for the
> first 2GB of the file.  My databases are around 4 to 20GB, and
> completely fit within RAM on my 64 bit systems.  The 2GB mmap limit
> means that only a portion of the file benefits from the improved
> performance, and much effort is wasted copying the data around bits of
> memory.

This limit is set at compile time with SQLITE_MAX_MMAP_SIZE compile option. I 
don't see at pager.c why it can't be bigger than 2GB, it's a 64bit integer. Try 
to compile with -DSQLITE_MAX_MMAP_SIZE=21474836480 and use a copy or backup, 
not the original database.

> Roger

---   ---
Eduardo Morras 
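The compile-time limit only caps what a run-time PRAGMA mmap_size request can obtain, so after rebuilding it is worth checking what a given build actually grants. A sketch (the granted value is build-dependent, possibly 0):

```python
import os
import sqlite3
import tempfile

# mmap needs a real file-backed database, not :memory:.
path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x)")

requested = 1 << 20  # ask for a 1 MiB mapping
conn.execute(f"PRAGMA mmap_size={requested}")
# The granted value is capped at SQLITE_MAX_MMAP_SIZE, and may be 0
# if the library was built with memory-mapped I/O disabled.
granted = conn.execute("PRAGMA mmap_size").fetchone()[0]
print(granted)
```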


[sqlite] Feature Request: Support for MMAP beyond 2GB

2015-10-30 Thread Roger Binns
https://sqlite.org/mmap.html

SQLite supports memory mapping databases, but only does so for the
first 2GB of the file.  My databases are around 4 to 20GB, and
completely fit within RAM on my 64 bit systems.  The 2GB mmap limit
means that only a portion of the file benefits from the improved
performance, and much effort is wasted copying the data around bits of
memory.

Roger


[sqlite] Feature request for sqlite3_initialize().

2015-09-10 Thread Scott Hess
On Sat, Sep 5, 2015 at 6:42 PM, Darin Adler  wrote:

> Michael is planning a workaround in WebKit that will call
> sqlite3_initialize manually exactly once before WebKit uses sqlite, using
> std::once to deal with the thread safety issue.
>

This reminds me ... I was recently working on a patch which used
sqlite3_config(), which needs to run before sqlite3_initialize().  Since we
were already calling sqlite3_initialize(), I put it in the obvious place.
Then I found a case where someone was making a sqlite3_*() call before
calling into the normal code path.  That worked fine for lazy
initialization, but caused my sqlite3_config() to start failing depending
on what happened in which order.

It would be modestly interesting to have a macro SQLITE_REQUIRE_INIT (or
similar) which caused SQLite to throw errors if an API is being called
before sqlite3_initialize().  My pencil implementation would probably
piggyback on SQLITE_OMIT_AUTOINIT, changing those callsites from
sqlite3_initialize() to sqlite3Initialize(), then having that
implementation look something like:

#ifndef SQLITE_OMIT_AUTOINIT
int sqlite3Initialize() {
#ifdef SQLITE_REQUIRE_INIT
  return sqlite3GlobalConfig.isInit ? SQLITE_OK : SQLITE_MISUSE;
#else
  return sqlite3_initialize();
#endif
}
#endif

Mostly I'd want this in debug and testing builds, so that I could have
confidence in setting SQLITE_OMIT_AUTOINIT  for release builds.

Thanks,
scott


Re: [sqlite] Feature Request - RowCount

2014-12-15 Thread Paul
> 
> On 14 Dec 2014, at 11:08am, Jean-Christophe Deschamps  
> wrote:
> 
> > Without using slow triggers or changing the v3 file format there is still 
> > another possibility which could be implemented relatively easily. All it 
> > would need is a new pragma (or internal function) like "pragma 
> > row_count=0/1" and some code.
> > 
> > On invokation, the engine would create a hidden "system" table like 
> > sqlite_rowcount --similar to sqlite_sequence-- which would initially hold 
> > row counts for every table in the DB.
> 
> Two obvious places:
> 
> A) In that sqlite_sequence table you mentioned, as an additional column. 
> Always up-to-date.
> 
> B) In the tables prepared by SQLite ANALYZE. If you want the rowcount 
> updated, do another ANALYZE.

It's just my two cents, but if you take the time to run ANALYZE and you 
don't care whether the value is synced with the real row count 
between two ANALYZEs, why not make a table for your convenience 
that will do the same thing you want ANALYZE to do. I.e., you make 
a script that will select the names of all tables in the database:

  SELECT name FROM sqlite_master WHERE type = 'table' AND name NOT LIKE 
'sqlite_%';

And then will perform 

  INSERT INTO my_row_count_cache(table_name, row_count)
  SELECT 'XXX' AS name, (SELECT COUNT(*) FROM XXX) AS count;

for each resulting table.

All this can actually be implemented as an SQLite add-on, via virtual table.
So for example, instead of doing

  SELECT COUNT(*) FROM XXX;

you could do

  SELECT count FROM vtb_row_count_cache WHERE table = 'XXX';


Just an idea...

Regards,
Paul
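Paul's outline, made concrete as a Python sketch (the `my_row_count_cache` layout follows his example; the sample tables are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE clients (name);
    CREATE TABLE orders (id);
    INSERT INTO clients VALUES ('a'), ('b'), ('c');
    CREATE TABLE my_row_count_cache (table_name TEXT PRIMARY KEY, row_count INTEGER);
""")

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
    " AND name NOT LIKE 'sqlite_%' AND name <> 'my_row_count_cache'")]

for t in tables:
    # Table names cannot be bound as parameters, hence the quoted interpolation.
    n = conn.execute(f'SELECT COUNT(*) FROM "{t}"').fetchone()[0]
    conn.execute("INSERT OR REPLACE INTO my_row_count_cache VALUES (?, ?)", (t, n))

print(dict(conn.execute("SELECT * FROM my_row_count_cache")))
```

As Paul notes, the same idea could be packaged as a virtual table so the cache refreshes itself on read.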
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature Request - RowCount

2014-12-14 Thread RSmith


On 2014/12/13 21:46, James K. Lowden wrote:

So the number of tools with feature X is no measure of the value of X. (Notable example: the tool should keep every query and 
result in a time-sequenced transcript log, so that prior results can be re-examined and prior queries modified. Most tools 
disassociate query from output and invite the user to modify the query in-place, destroying the prior.) 


This is hardly a function of the DB admin tools, though; it's a research or application function (depending on whether the 
inquiry is theoretical or practical). That said, many of the tools I know do allow saving queries as scripts with the possibility of 
examining output at every step, but not all of them, so this might be somewhat valid.  More importantly, I was not touting the idea 
that because all the tools use feature X it is therefore valid... I was rather conceding that the request for feature X is 
valuable (mostly) only to those tools, which probably renders it less urgent - quite in agreement with your point.


My first question, then, is whether or not the rowcount is so interesting that it must be known before a table can be operated on. 
I suggest the answer is No. The relative & approximate sizes of the tables is known to the admin in most cases and, when it is 
not, the information is readily discovered on a case-by-case basis. Would a proxy figure do? Is it enough to know the number of 
pages or bytes allocated to a table? I don't know if such is available, but if it is perhaps that would serve your purpose. 


Yes, this would actually do, but it is not available, as you rightly pondered. On the point of necessity, I have to disagree: it is 
nearly always the first thing I want to know. When someone here is troubled by a query's running time, the first question is: how many 
rows are in which joined tables? To state the problem a bit simplistically - it is hard to fathom the meaning of O(log N) without a clear 
understanding of what both O and N might be.  And that's just from a DB admin perspective; in companies where the business analysis 
data matter, lots of queries are usually stored as tables for further analysis, and the first thing asked is: how many?  Other times 
that figure serves probably only as bemusement to big data fans.  It's usually (I'd say in the vast majority of cases) an easy and fast 
step to ascertain, though (as this discussion pointed out) - hence me resting the case - but I do stand by the point that the need 
isn't invalid.




Re: [sqlite] Feature Request - RowCount

2014-12-14 Thread Klaas V
Jim Callahan wrote:
> #26 The unique columns have non-null values (the answer says a lot more,
> but that is the essence of what I am relying on).

Right, but the question was how to count rows as quickly as possible regarding 
any or all columns, e.g. count(ProspectName) from Clients; 
One can imagine that for some prospects you don't know the (real) name yet, just 
her or his emailAddress.
> #1 If you have an integer primary key (which by definition in SQLITE3
> autoincrements) one might be able to get an approximate row count faster
> using the:
> sqlite3_last_insert_rowid() function.

That works if you're happy with an approximate count, but as soon as rows are deleted the 
last inserted row# may and will drift from the accurate value - far enough off at some 
point in time that it becomes unacceptable for most managers.

> Jim Callahan
> Orlando, FL

Kind regards | Cordiali saluti | Vriendelijke groeten | Freundliche Grüsse,
Klaas `Z4us` V  - OrcID -0001-7190-2544

