[sqlite] Table constraints

2013-10-16 Thread Joseph L. Casale
Hi,
I have a table as follows:

CREATE TABLE t ( 
id  INTEGER NOT NULL,
a   VARCHAR NOT NULL COLLATE 'nocase',
b   VARCHAR COLLATE 'nocase',
c   VARCHAR CHECK (c IN ('foo', 'bar') OR c IS NULL) COLLATE 'nocase',
PRIMARY KEY (id)
);

How does one elegantly construct an index or constraint such that, for any
value of column a, rows may exist with column c equal to 'foo' and 'bar',
unless that value of column a appears with a NULL in column c, in which
case no other rows may exist for that value of column a?

id  a   b   c
--  --- --- ---
1   ab   foo
2   ab   bar
(no more rows with col a having a value of 'a'.)

id  a   b   c
--  --- --- ---
1   ab   NULL
2   ab   bar <- not allowed.
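[Editorial note, not from the thread: one way to enforce this combines a unique index on (a, c) with a BEFORE INSERT trigger that rejects mixing a NULL c with any other row for the same a. A minimal sketch, checked through Python's built-in sqlite3 module; note the CHECK is rewritten, because `c IN ('foo', 'bar', NULL)` evaluates to NULL rather than false for other values, so it never actually rejects anything:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (
    id INTEGER PRIMARY KEY,
    a  VARCHAR NOT NULL COLLATE NOCASE,
    b  VARCHAR COLLATE NOCASE,
    c  VARCHAR COLLATE NOCASE CHECK (c IN ('foo', 'bar') OR c IS NULL)
);

-- At most one row per (a, c) pair.  NULLs are distinct in SQLite
-- unique indexes, so the trigger below handles the NULL case.
CREATE UNIQUE INDEX t_a_c ON t(a, c);

-- Reject a NULL-c row when any row for this a exists, and reject any
-- new row for this a when a NULL-c row already exists.
CREATE TRIGGER t_excl BEFORE INSERT ON t
WHEN EXISTS (SELECT 1 FROM t
             WHERE a = NEW.a AND (c IS NULL OR NEW.c IS NULL))
BEGIN
    SELECT RAISE(ABORT, 'conflicting rows for this value of a');
END;
""")

conn.execute("INSERT INTO t(a, c) VALUES ('ab', 'foo')")
conn.execute("INSERT INTO t(a, c) VALUES ('ab', 'bar')")      # allowed
try:
    conn.execute("INSERT INTO t(a, c) VALUES ('ab', NULL)")   # rejected
except sqlite3.IntegrityError as e:
    print(e)
```

[The index and trigger names are invented for the sketch; an UPDATE trigger with the same WHEN clause would be needed for full coverage.]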

Thanks,
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] FW: sqlite-users Digest, Vol 70, Issue 16

2013-10-16 Thread Paul Bainter
Joe,

Thank You, Thank You, that is exactly what I needed.  I couldn't think of
how to debug this, so with your suggestion, I put a try block in my code and
wrote the stack trace out to a text file and that gave me exactly what I
needed.  I was missing a dll, but it ended up having nothing to do with
SQLite.  You are awesome.

Sincerely,
Paul Bainter

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of
sqlite-users-requ...@sqlite.org
Sent: Wednesday, October 16, 2013 10:00 AM
To: sqlite-users@sqlite.org
Subject: sqlite-users Digest, Vol 70, Issue 16


Today's Topics:

   1. System.Data.SQLite Deployment Problem (Paul Bainter)
   2. Re: System.Data.SQLite Deployment Problem (Joe Mistachkin)
   3. Re: Trigger SQL and database schema (Darren Duncan)
   4. Re: Trigger SQL and database schema (Petite Abeille)
   5. Re: Trigger SQL and database schema (Darren Duncan)
   6. Re: FTS4 + spellfix1 with multiple languages (Dan Kennedy)
   7. JDBC Driver Issue (mitzanu)
   8. ALTER COLUMN in sqlite (mitzanu)
   9. Feature Request: Binding Arrays (Dominique Devienne)
  10. Re: Feature Request: Binding Arrays (Clemens Ladisch)
  11. Re: Feature Request: Binding Arrays (Dominique Devienne)
  12. Re: Feature Request: Binding Arrays (Richard Hipp)
  13. Re: FTS4 + spellfix1 with multiple languages (Raf Geens)
  14. Re: Feature Request: Binding Arrays (Paul van Helden)
  15. Trying to figure out how to circumvent
  sqlite3_win32_mbcs_to_utf8 (Mike Clagett)
  16. Re: Feature Request: Binding Arrays (Richard Hipp)
  17. Re: Trying to figure out how to circumvent
  sqlite3_win32_mbcs_to_utf8 (Richard Hipp)
  18. Re: ALTER COLUMN in sqlite (John McKown)
  19. Re: Feature Request: Binding Arrays (Paul van Helden)
  20. Re: Feature Request: Binding Arrays (techi eth)
  21. Analyze optimizing views? (Daniel Polski)
  22. Re: Analyze optimizing views? (Richard Hipp)
  23. Re: ALTER COLUMN in sqlite (a.furi...@lqt.it)


--

Message: 1
Date: Tue, 15 Oct 2013 20:56:10 -0600
From: "Paul Bainter" 
To: 
Subject: [sqlite] System.Data.SQLite Deployment Problem
Message-ID: <004901ceca1b$45edc060$d1c94120$@gmail.com>
Content-Type: text/plain;   charset="us-ascii"

I'm having a terrible time trying to deploy my SQLite application. I
downloaded from System.Data.SQLite.org the file:
sqlite-netFx45-setup-bundle-x86-2012-1.0.88.0.exe and installed that on my
development machine (Windows 7 Ultimate x64).  I actually have 2 development
machines, a laptop and a desktop both are Windows7 x64 and both have Visual
Studio 2012.  I used this particular download file because I enjoy working
with EntityFramework and this saved me a lot of valuable time in my
development.

 

I'm using Visual Studio 2012 and I got my application running and everything
is great.  I configured all the libraries and my main application to build
to x86, so that it matched the sqlite installation file.  I then copied all
the files from the bin\debug directory and placed them on a target machine
and made sure the sqlite database file was accessible in the correct
location per the exe.config file. Both System.Data.SQLite.dll and
System.Data.SQLite.Linq.dll were included. Also, per the installation notes,
I placed the following code in the Configuration file:

[XML configuration omitted: the markup was stripped by the mailing-list archive.]

When I run the program on the target machine, the main window comes up fine
because it doesn't access the database, but once I bring up a window that
does access the database, I get a message stating that the application has
stopped working. There is no error message specific to the problem, just
another window from the OS stating that it will try to discover what the
problem is, but of course it can't.

 

I've used several machines as the target machine such as Windows 7 Ultimate
x64, Windows 7 Ultimate x86, a virtual Windows 7 Ultimate x64, etc.  I even
tried to install sqlite-netFx45-setup-bundle-x86-2012-1.0.88.0.exe on some
of the target machines to see if that would help, and it didn't.  Each of
the target machines has the .NET 4.5 update installed, so that is also not
the problem.

 

I'm completely stumped on this issue.  I really want to be able to use
EntityFramework and love the SQLite database, but this has me pulling my
hair out (not that I have much anyway).

Re: [sqlite] select all fields of a column in one table that are not in a column in another table

2013-10-16 Thread Igor Tandetnik

On 10/16/2013 4:49 PM, dean gwilliam wrote:

if I have two tables
1 aliases (std_name, raw_name)
2 items (name..)
what would the query look like to select all "name" fields in "items"
that match neither "std_name" nor "raw_name" in "aliases"
and where the resulting list of "name"s contains no duplicates?


select distinct name from items
where not exists (select 1 from aliases where std_name = name or 
raw_name = name);


--
Igor Tandetnik



[sqlite] select all fields of a column in one table that are not in a column in another table

2013-10-16 Thread dean gwilliam

I'm just wondering...

if I have two tables
1 aliases (std_name, raw_name)
2 items (name..)
what would the query look like to select all "name" fields in "items"
that match neither "std_name" nor "raw_name" in "aliases"
and where the resulting list of "name"s contains no duplicates?
I ask because I want to add these "name"s (in items) to the "raw_name" 
column in "aliases"


I hope I've explained myself clearly; any help much appreciated.
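[Editorial note, not from the thread: the NOT EXISTS query can feed an INSERT directly to append the unmatched names to aliases.raw_name. A sketch with invented sample data, run through Python's built-in sqlite3 module:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE aliases (std_name TEXT, raw_name TEXT);
CREATE TABLE items (name TEXT);

-- Hypothetical sample data.
INSERT INTO aliases VALUES ('alpha', 'a'), ('beta', 'b');
INSERT INTO items VALUES ('a'), ('beta'), ('gamma'), ('gamma');

-- Copy each unmatched name (de-duplicated) into aliases.raw_name.
-- Inside the subquery, "name" resolves to the outer items.name,
-- since aliases has no column of that name.
INSERT INTO aliases (raw_name)
SELECT DISTINCT name FROM items
WHERE NOT EXISTS (SELECT 1 FROM aliases
                  WHERE std_name = name OR raw_name = name);
""")

print(conn.execute("SELECT raw_name FROM aliases ORDER BY rowid").fetchall())
# [('a',), ('b',), ('gamma',)]
```

['a' matches an existing raw_name and 'beta' an existing std_name, so only 'gamma' is appended, once.]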


Re: [sqlite] ALTER COLUMN in sqlite

2013-10-16 Thread John McKown
Thank you for the information. I guess I assumed the Lite was also for "not
too big". The uses I've seen on my Linux system all seem to be that. Oh,
and I'm reading your book, off and on, as I get time. I have the Kindle
edition. But I keep getting sidetracked by silly people at work who keep
pestering me to do something useful. 


On Wed, Oct 16, 2013 at 12:32 PM, Jay A. Kreibich  wrote:

> On Wed, Oct 16, 2013 at 07:32:08AM -0500, John McKown scratched on the
> wall:
> > I'm not truly against such a thing. But we need to remember the main use
> > for SQLite is to be small, fast, and "embedded". At least as best as I
> can
> > tell, it is not meant to compete with MariaDB (nee MySQL) or PostgreSQL.
> It
> > doesn't appear to be _committed_ to being 100% SQL compliant (as in
> exactly
> > matching the ANSI/ISO requirements).
>
> > I can't say for sure, but would somebody want to store 20 million rows
> > in a SQLite data base? Why?
>
>   Overall I agree with your point... SQLite is not trying to be-- and
>   shouldn't be-- everything to everyone.  It is not trying to compete
>   with or replace large client/server systems.  In fact, I think
> >   SQLite's success is largely driven by the fact it complements such
>   products so well.  Developers have learned the power and usefulness
>   of RDBMS systems, and want to use that power, experience, and
>   feature set in environments where a full client/server system would
>   be impractical.
>
>   That said, I don't think it is fair to compare feature set to
>   database size.  The complexity of the database system has little to
>   do with the size of the dataset it is expected to handle.  The "Lite"
>   in SQLite generally refers to the design, feature set, and "weight"
>   of the overall database engine, not the size of the data it is
>   expected to handle.  People choose SQLite for where they need to run
>   it, not what they put into it (except in the extreme degree).
>
>   So while I agree with your feeling that this kind of feature may be
>   getting into a realm that is outside of SQLite's core use-case and
> market,
>   that has more to do with how SQLite is used, the class of application
>   that uses it, and the life-cycle of data stored in SQLite.  None of
>   these things have to do with the *amount* of data stored in a database.
>   The SQLite team takes some pride (and rightfully so) in how well SQLite
>   scales to extremely large data sets, even with extremely tight memory
>   constraints.  When things get really big, generally the biggest
>   performance limitation is the underlying filesystem, not the SQLite
>   engine.
>
>
>   And for what it's worth, yes-- I have put 20 million rows in an SQLite
>   database.  Considerably more, actually.  Some six years ago I imported
>   all the English text portions of Wikipedia into an SQLite database so
>   that I could access all the data on an embedded device that was designed
>   to operate without constant network access.  Back then, with some very
>   carefully tuned string compression, you could just squeeze the whole
>   thing onto a 4GB SD card.  It was actually that project (and my
>   frustrations with the older SQLite API docs) that led me to write the
>   "Using SQLite" book.
>
>   Since then I've done some SQLite-related consulting work and firefighting,
>   and seen SQLite deployed in situations that it really shouldn't
>   have been...  like the center of extremely busy and popular websites.
>   While it was the wrong choice for the situation, I have to give
>   SQLite a lot of credit for struggling through the situation.  Performance
>   wasn't great, but the simple fact it worked at all was very impressive.
>
> > I really am curious. Perhaps I'm "out of touch" (won't be the first
> time).
> > My use for SQLite is for storing smallish amount of data which is
> > dedicated to a single application. My "large" data base needs are
> > relegated to PostgreSQL data bases.
>
>   One must remember that different people have radically different
>   definitions of "big" and "small".  As data gets bigger and simple
>   off-the-shelf desktops and server systems get more powerful, you
>   also have the situation summed up by one of the data scientists at
>   Strata last year: "'Big' is getting smaller and 'small' is getting
>   bigger."  Our core application runs databases in the ~2 TB range
>   (PostgreSQL), which we consider on the smallish side compared to other
>   companies in the same industry.  We often export data into SQLite
>   files for personal work or analysis, and at any time I've got a dozen
>   or so SQLite files on my laptop in the 30 to 80 GB range.  You can
>   do an amazing amount of data work, on some pretty big data sets,
>   with nothing more than SQLite and a semi-decent laptop (especially
>   if it has an SSD!).
>
> > IMO, the "proper" way to do this is just what you outlined. It is a "one
> > shot" and should not take long to run in most cases. Unless 

Re: [sqlite] ALTER COLUMN in sqlite

2013-10-16 Thread Jay A. Kreibich
On Wed, Oct 16, 2013 at 07:32:08AM -0500, John McKown scratched on the wall:
> I'm not truly against such a thing. But we need to remember the main use
> for SQLite is to be small, fast, and "embedded". At least as best as I can
> tell, it is not meant to compete with MariaDB (nee MySQL) or PostgreSQL. It
> doesn't appear to be _committed_ to being 100% SQL compliant (as in exactly
> matching the ANSI/ISO requirements).

> I can't say for sure, but would somebody want to store 20 million rows
> in a SQLite data base? Why? 

  Overall I agree with your point... SQLite is not trying to be-- and
  shouldn't be-- everything to everyone.  It is not trying to compete
  with or replace large client/server systems.  In fact, I think
  SQLite's success is largely driven by the fact it complements such
  products so well.  Developers have learned the power and usefulness
  of RDBMS systems, and want to use that power, experience, and
  feature set in environments where a full client/server system would
  be impractical.

  That said, I don't think it is fair to compare feature set to
  database size.  The complexity of the database system has little to
  do with the size of the dataset it is expected to handle.  The "Lite"
  in SQLite generally refers to the design, feature set, and "weight"
  of the overall database engine, not the size of the data it is
  expected to handle.  People choose SQLite for where they need to run
  it, not what they put into it (except in the extreme degree).
  
  So while I agree with your feeling that this kind of feature may be
  getting into a realm that is outside of SQLite's core use-case and market,
  that has more to do with how SQLite is used, the class of application
  that uses it, and the life-cycle of data stored in SQLite.  None of
  these things have to do with the *amount* of data stored in a database.
  The SQLite team takes some pride (and rightfully so) in how well SQLite
  scales to extremely large data sets, even with extremely tight memory
  constraints.  When things get really big, generally the biggest
  performance limitation is the underlying filesystem, not the SQLite
  engine.

  
  And for what it's worth, yes-- I have put 20 million rows in an SQLite
  database.  Considerably more, actually.  Some six years ago I imported
  all the English text portions of Wikipedia into an SQLite database so
  that I could access all the data on an embedded device that was designed
  to operate without constant network access.  Back then, with some very
  carefully tuned string compression, you could just squeeze the whole
  thing onto a 4GB SD card.  It was actually that project (and my
  frustrations with the older SQLite API docs) that led me to write the
  "Using SQLite" book.
  
  Since then I've done some SQLite-related consulting work and firefighting,
  and seen SQLite deployed in situations that it really shouldn't
  have been...  like the center of extremely busy and popular websites.
  While it was the wrong choice for the situation, I have to give
  SQLite a lot of credit for struggling through the situation.  Performance
  wasn't great, but the simple fact it worked at all was very impressive.

> I really am curious. Perhaps I'm "out of touch" (won't be the first time).
> My use for SQLite is for storing smallish amount of data which is
> dedicated to a single application. My "large" data base needs are
> relegated to PostgreSQL data bases.

  One must remember that different people have radically different
  definitions of "big" and "small".  As data gets bigger and simple
  off-the-shelf desktops and server systems get more powerful, you
  also have the situation summed up by one of the data scientists at
  Strata last year: "'Big' is getting smaller and 'small' is getting
  bigger."  Our core application runs databases in the ~2 TB range
  (PostgreSQL), which we consider on the smallish side compared to other 
  companies in the same industry.  We often export data into SQLite
  files for personal work or analysis, and at any time I've got a dozen
  or so SQLite files on my laptop in the 30 to 80 GB range.  You can
  do an amazing amount of data work, on some pretty big data sets,
  with nothing more than SQLite and a semi-decent laptop (especially
  if it has an SSD!).

> IMO, the "proper" way to do this is just what you outlined. It is a "one
> shot" and should not take long to run in most cases. Unless those million
> row SQLite data bases are more prevalent than that I had ever thought

  Returning to the question at hand, yes, that might be the most proper
  way, but there are still concerns. There are a lot of interdependencies
  between database objects... tables, views, indexes, foreign keys, and
  triggers all need to find each other.  ALTER TABLE...RENAME catches
  some of these, but not all of them.  So going through this process
  isn't going to carry over indexes (they'll need to be recreated) but
  should, in the end, resolve other triggers 

Re: [sqlite] Feature Request: Binding Arrays

2013-10-16 Thread Dominique Devienne
On Wed, Oct 16, 2013 at 1:28 PM, Richard Hipp  wrote:

> Please see http://www.sqlite.org/src/artifact/2ece66438?ln=13-75


Thanks. Being familiar with vtables, I had imagined as much, even though I
stopped short of doing it in practice.

This takes care of hiding the DML statements from auth/trace hooks, by
inserting/deleting behind the scenes in native code.
It does not, however, hide from SQL the DDL for creating or dropping the
vtable; those statements will still be seen by hooks.

I realize I'm pushing my luck here Dr Hipp, but thinking about this more,
what I think SQLite is missing is some kind of unprotected
sqlite3_table_value, a subtype of sqlite3_value, with APIs to define the
columns, and fill in the values of a "table value".

Once you have such a beast, you can bind such "table value" for the << in
%1 >> case I was describing above, since there's already
sqlite3_bind_value().

But you can also now create "table functions", i.e. custom SQLite functions
that do not return scalars but anonymous temporary "tables", returning
these sqlite3_table_values via sqlite3_result_value().

vtables can already do pretty much the same thing, except that
* vtables cannot be used "inline" to a given statement (i.e. created on the
fly), and
* vtables cannot be used in an anonymous manner (the vtable must have a
name)
* vtables cannot dynamically process "document cells" that belong to other
tables (real or virtual) in a statement. (you can explicitly insert stuff
into them like FTS does, but it's more a custom index than a custom table).

You can easily create a virtual table that parses a comma-separated list,
and return one row per string between commas, but you must name the
vtable, and pass it the comma-separated list explicitly, "hardcoding" its
rows. But if you make it a table function, you can select from that
function, passing arbitrary strings to parse, each time returning a new
unnamed result-table (i.e. a table), and you're not limited to literals,
you can also "join" to another table to process specific strings (in a
given column) of that other table and have an implicit union-all of those
anonymous per-string-value result-sets.

In pseudo-code, this would look something like this:

sqlite> select * from parse_csv('a, b, a');
a
b
a
sqlite> create table s (name text, csv text);
sqlite> insert into s values ('dec', '1, 2, 3'), ('hex', '1, A'), ('alpha',
'a, B, TT');
sqlite> select s.name, p.* from s, parse_csv(s.csv) p;
dec|1
dec|2
dec|3
hex|1
hex|A
alpha|a
alpha|B
alpha|TT
sqlite> select s.name, count(parse_csv(s.csv)) from s;
dec|3
hex|2
alpha|3

With such table functions, you can imagine all sorts of interesting
scenarios, like getting info out of XML or JSON documents stored in table
cells (UnQL anyone?), or getting a list of doubles as rows from a blob cell
value (since SQLite lacks array support, any user-defined-type is basically
a blob or a string that aggregate denormalized info).

OK, it's probably fever-induced wandering. I'll stop there ;) --DD

Oracle XMLTable http://stackoverflow.com/questions/12690868

http://www.oracle-base.com/articles/misc/pipelined-table-functions.php

PS: BTW, this taught me the << in table >> alternative to << in (list) >> I
was not aware of. Thanks for that.
sqlite> create table t (name text, type text);
sqlite> insert into t values ('foo', 'en'), ('bar', 'en');
sqlite> insert into t values ('toto', 'fr'), ('titi', 'fr');
sqlite> insert into t values ('furtch', 'gr');
sqlite> create table type_sel (type text);
sqlite> insert into type_sel values ('gr'), ('fr');
sqlite> select * from t where type in type_sel;
toto|fr
titi|fr
furtch|gr
sqlite> delete from type_sel;
sqlite> insert into type_sel values ('en');
sqlite> select * from t where type in type_sel;
foo|en
bar|en


Re: [sqlite] ALTER COLUMN in sqlite

2013-10-16 Thread John McKown
That was very interesting and informative. I'm a "normal" IT type person
(as "normal" as most IT people, I guess). I know very little about
the application that you talked about, but it did help me to understand why
SQLite might be used instead of a "more normal" SQL server data base like
PostgreSQL/Oracle/DB2. It gave me things to think about. Which I always
enjoy.


On Wed, Oct 16, 2013 at 10:02 AM,  wrote:

> On Wed, 16 Oct 2013 07:32:08 -0500, John McKown wrote:
>
>> I can't say for sure, but would somebody want to store 20
>> million rows in a SQLite data base? Why? I really am curious
>>
>>
> Hi John,
>
> you might find my own first-hand experiences in the
> Geographic/GeoSpatial field interesting in some way.
>
> in this very specific environment (certainly not the most
> common one, I agree) it's not at all exceptional to deploy
> SQLite/SpatiaLite DB-files as big as 20/40 GB (and even more),
> containing tens or hundreds of millions of rows disseminated
> across many hundreds of different tables (aka layers in GIS jargon),
> strictly related to one another not only in the "classic"
> relational way based on Primary and Foreign Keys, but even in
> the "awkward spatial way" based on geometric and topological
> relationships.
>
> there are several good technical reasons suggesting to use
> exactly SQLite/SpatiaLite for processing, validating and
> distributing huge geographic datasets:
>
> - SQLite/SpatiaLite offers exactly the same overall level of
>   standard Spatial SQL processing capabilities supported by
>   the much more sophisticated (and complex) PostgreSQL/PostGIS;
>   but it's far simpler to install and configure, and
>   it's usually faster under many common working conditions
>   (mainly thanks to its really outstanding first class R*Tree
>   implementation).
>
> - not to mention the SQLite's "forbidden weapon"; if your HW
>   supports an adequate amount of RAM you can directly load a
>   whole DB in memory at once; and under such a configuration
>   you can easily reach dramatically impressive supersonic speeds.
>
> - you can directly copy / send a whole DB (even a really huge one)
>   from one location to another in a single shot and in the most
>   painless way, because after all it simply is an ordinary file.
>
> - if something goes completely wrong during any complex Spatial
>   data processing operation (it happens ... nobody's perfect),
>   you simply have to remove a single file and then patiently
>   restart yet again from scratch after correcting your buggy
>   Spatial SQL scripts.
>   under the same conditions fully recovering some client/server
>   Spatial DBMS left in an inconsistent state could eventually be
>   a not so pleasant and easy affair.
>
> - you can freely ATTACH and DETACH together many DB-files
>   depending on your very variable specific requirements, thus
>   achieving an outstanding and unconstrained flexibility.
>   Quite often this is a really useful feature, e.g. when you
>   have to collect, integrate and merge together many different
>   datasets presenting a very loose standardization because they
>   were produced during a long period of time by many different
>   subjects for different initial purposes.
>
> all this considered, my answer is: "oh yes, it makes perfectly
> sense storing 20 million rows in a SQLite DB; and it works
> absolutely well" :-D
>
> by Sandro
>



-- 
This is clearly another case of too many mad scientists, and not enough
hunchbacks.

Maranatha! <><
John McKown


Re: [sqlite] ALTER COLUMN in sqlite

2013-10-16 Thread a . furieri

On Wed, 16 Oct 2013 07:32:08 -0500, John McKown wrote:

I can't say for sure, but would somebody want to store 20
million rows in a SQLite data base? Why? I really am curious



Hi John,

you might find my own first-hand experiences in the
Geographic/GeoSpatial field interesting in some way.

in this very specific environment (certainly not the most
common one, I agree) it's not at all exceptional to deploy
SQLite/SpatiaLite DB-files as big as 20/40 GB (and even more),
containing tens or hundreds of millions of rows disseminated
across many hundreds of different tables (aka layers in GIS jargon),
strictly related to one another not only in the "classic"
relational way based on Primary and Foreign Keys, but even in
the "awkward spatial way" based on geometric and topological
relationships.

there are several good technical reasons suggesting to use
exactly SQLite/SpatiaLite for processing, validating and
distributing huge geographic datasets:

- SQLite/SpatiaLite offers exactly the same overall level of
  standard Spatial SQL processing capabilities supported by
  the much more sophisticated (and complex) PostgreSQL/PostGIS;
  but it's far simpler to install and configure, and
  it's usually faster under many common working conditions
  (mainly thanks to its really outstanding first class R*Tree
  implementation).

- not to mention the SQLite's "forbidden weapon"; if your HW
  supports an adequate amount of RAM you can directly load a
  whole DB in memory at once; and under such a configuration
  you can easily reach dramatically impressive supersonic speeds.

- you can directly copy / send a whole DB (even a really huge one)
  from one location to another in a single shot and in the most
  painless way, because after all it simply is an ordinary file.

- if something goes completely wrong during any complex Spatial
  data processing operation (it happens ... nobody's perfect),
  you simply have to remove a single file and then patiently
  restart yet again from scratch after correcting your buggy
  Spatial SQL scripts.
  under the same conditions fully recovering some client/server
  Spatial DBMS left in an inconsistent state could eventually be
  a not so pleasant and easy affair.

- you can freely ATTACH and DETACH together many DB-files
  depending on your very variable specific requirements, thus
  achieving an outstanding and unconstrained flexibility.
  Quite often this is a really useful feature, e.g. when you
  have to collect, integrate and merge together many different
  datasets presenting a very loose standardization because they
  were produced during a long period of time by many different
  subjects for different initial purposes.

all this considered, my answer is: "oh yes, it makes perfectly
sense storing 20 million rows in a SQLite DB; and it works
absolutely well" :-D

by Sandro
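[Editorial note, not from the thread: the "load a whole DB in memory at once" trick maps onto SQLite's online backup API. A small illustration (file name and schema invented), using Python's sqlite3 wrapper, which exposes the backup API as Connection.backup():]

```python
import os
import sqlite3
import tempfile

# A tiny on-disk database standing in for a large spatial one.
path = os.path.join(tempfile.mkdtemp(), "layers.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE pts (x REAL, y REAL)")
disk.executemany("INSERT INTO pts VALUES (?, ?)", [(1.0, 2.0), (3.0, 4.0)])
disk.commit()

# Copy the entire database into RAM in one shot, then query it there.
mem = sqlite3.connect(":memory:")
disk.backup(mem)
print(mem.execute("SELECT COUNT(*) FROM pts").fetchone()[0])  # 2
```

[Queries against `mem` then run entirely from RAM; writes there are lost unless backed up to disk again.]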


Re: [sqlite] Analyze optimizing views?

2013-10-16 Thread Richard Hipp
On Wed, Oct 16, 2013 at 10:29 AM, Daniel Polski wrote:

> Hello,
> Does the ANALYZE command gather statistics and optimize for views I've
> created or only "real tables"?
>

Only real tables.

Views are just macros that are applied to queries when the queries are run.
If you have:

 CREATE TABLE t1(a,b,c);
 CREATE VIEW v1 AS SELECT a+b, c+a FROM t1;

Then you do:

 SELECT * FROM v1;

That's exactly the same as doing:

 SELECT * FROM (SELECT a+b, c+a FROM t1);



-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Analyze optimizing views?

2013-10-16 Thread Daniel Polski

Hello,
Does the ANALYZE command gather statistics and optimize for views I've 
created or only "real tables"?


Best regards,
Daniel


Re: [sqlite] Feature Request: Binding Arrays

2013-10-16 Thread techi eth
It is a really useful feature.

I have a use case where I need to log data at continuous intervals and
store it in a database. If an array type were supported by sqlite, then in
a single row I could store an array of timestamps and an array of values.
Is this specific to the int type, or can other data types be supported?


On Wed, Oct 16, 2013 at 6:15 PM, Paul van Helden wrote:

> > Since version 3.6.21, circa 2009-12-07.  Note however that this
> capability
> > is not built in.  It is an extension that you need to compile and link
> > separately.
> >
> > OK... Herewith my vote to make it standard then, like
> SQLITE_ENABLE_COLUMN_METADATA was enabled for the precompiled binary at
> some point.
>
> I prefer to stick to the precompiled binaries. Besides, it would make the
> sqlite3_intarray functions more visible in the documentation, etc. I'm sure
> I'm not the only one that didn't know about this very useful functionality.
>
> It's about time the binary got slightly bigger ;-)


Re: [sqlite] Feature Request: Binding Arrays

2013-10-16 Thread Paul van Helden
> Since version 3.6.21, circa 2009-12-07.  Note however that this capability
> is not built in.  It is an extension that you need to compile and link
> separately.
>
> OK... Herewith my vote to make it standard then, like
SQLITE_ENABLE_COLUMN_METADATA was enabled for the precompiled binary at
some point.

I prefer to stick to the precompiled binaries. Besides, it would make the
sqlite3_intarray functions more visible in the documentation, etc. I'm sure
I'm not the only one that didn't know about this very useful functionality.

It's about time the binary got slightly bigger ;-)


Re: [sqlite] ALTER COLUMN in sqlite

2013-10-16 Thread John McKown
I'm not truly against such a thing. But we need to remember the main use
for SQLite is to be small, fast, and "embedded". At least as best as I can
tell, it is not meant to compete with MariaDB (nee MySQL) or PostgreSQL. It
doesn't appear to be _committed_ to being 100% SQL compliant (as in exactly
matching the ANSI/ISO requirements). I can't say for sure, but would
somebody want to store 20 million rows in a SQLite data base? Why? I really
am curious. Perhaps I'm "out of touch" (won't be the first time). My use
for SQLite is for storing smallish amount of data which is dedicated to a
single application. My "large" data base needs are relegated to PostgreSQL
data bases.

IMO, the "proper" way to do this is just what you outlined. It is a "one
shot" and should not take long to run in most cases. Unless those
million-row SQLite data bases are more prevalent than I had ever thought.

On Wed, Oct 16, 2013 at 2:48 AM, mitzanu  wrote:

> There's no ALTER COLUMN in sqlite.
>
> I believe the option is to:
>
> •Rename the table to a temporary name
> •Create a new table without the NOT NULL constraint
> •Copy the content of the old table to the new one
> •Remove the old table
>
> Can you guys implement ALTER COLUMN in sqlite? it would be a great feature.
>
>
>
> --
> View this message in context:
> http://sqlite.1065341.n5.nabble.com/ALTER-COLUMN-in-sqlite-tp71706.html
> Sent from the SQLite mailing list archive at Nabble.com.



-- 
This is clearly another case of too many mad scientists, and not enough
hunchbacks.

Maranatha! <><
John McKown


Re: [sqlite] Trying to figure out how to circumvent sqlite3_win32_mbcs_to_utf8

2013-10-16 Thread Richard Hipp
On Wed, Oct 16, 2013 at 7:51 AM, Mike Clagett wrote:

> Hi -
>
> We have a C++ (VisualC++) app that is reading from and writing to a sqlite
> database.   Profiling reveals that it is spending 883.437 of its 2160.988
> seconds in the sqlite3_win32_mbcs_to_utf8 function.


Wow.  That routine should only be called when (1) reporting a low-level I/O
error and (2) creating a temporary filename.  And both of those should be
rare occurrences.

1.  What does your profiler say is the most frequent caller to
sqlite3_win32_mbcs_to_utf8()?

2.  Have you enabled error logging?  (http://www.sqlite.org/errlog.html)

3.  Have you tried running with PRAGMA temp_store=MEMORY to see if that
helps?

4.  Please tell us which MathWorks products use SQLite, so that we can add
them to http://www.sqlite.org/famous.html
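(As an aside, whether the temp_store change from question 3 took effect can be confirmed per connection; a quick sketch in Python's sqlite3, where 2 means MEMORY:)

```python
import sqlite3

# Check the temp_store setting on a connection.
# Values: 0 = default, 1 = file, 2 = memory.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA temp_store = MEMORY")
mode = con.execute("PRAGMA temp_store").fetchone()[0]
print(mode)  # 2
con.close()
```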




>  We are using std::basic_string as our string type and I can
> only assume that these are being seen by sqlite as mbcs strings.


No.  The problem is that SQLite uses UTF8 for filenames and Windows uses
MBCS for filenames and so we have to convert between the two when making
Windows system calls such as opening new files.


> I would like to know a better way of doing this that will eliminate all
> these unnecessary conversions.   I believe it may end up being a
> combination of picking the correct string type (although using anything but
> the type we are using may be difficult if it contravenes a product-wide
> standard) and setting the defaults properly in sqlite.   I have attempted
> the latter by issuing am_db.executeStatement("PRAGMA encoding =
> \"UTF-16\"", error); just after I create a database.  I clearly am not
> doing this effectively as it seems to have no effect on the use of the
> function in question.
>
> Any guidance from older hands would be greatly appreciated.
>
> Thanks.
>
> Michael Clagett
> Principal Software Engineer
> Mathworks, Inc.
> mike.clag...@mathworks.com
> (508)-647-4307
>



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Feature Request: Binding Arrays

2013-10-16 Thread Richard Hipp
On Wed, Oct 16, 2013 at 7:40 AM, Paul van Helden wrote:

> Fantastic! I've been wanting this for a long time.
>
> Since which version do we have sqlite3_intarray_x?
>
>
Since version 3.6.21, circa 2009-12-07.  Note however that this capability
is not built in.  It is an extension that you need to compile and link
separately.

-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Trying to figure out how to circumvent sqlite3_win32_mbcs_to_utf8

2013-10-16 Thread Mike Clagett
Hi -

We have a C++ (VisualC++) app that is reading from and writing to a sqlite 
database.   Profiling reveals that it is spending 883.437 of its 2160.988 
seconds in the sqlite3_win32_mbcs_to_utf8 function.  We are using 
std::basic_string as our string type and I can only assume that 
these are being seen by sqlite as mbcs strings.   I would like to know a better 
way of doing this that will eliminate all these unnecessary conversions.   I 
believe it may end up being a combination of picking the correct string type 
(although using anything but the type we are using may be difficult if it 
contravenes a product-wide standard) and setting the defaults properly in 
sqlite.   I have attempted the latter by issuing a
m_db.executeStatement("PRAGMA encoding = \"UTF-16\"", error); just after I 
create a database.  I clearly am not doing this effectively as it seems to have 
no effect on the use of the function in question.
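(For what it's worth, PRAGMA encoding only takes effect while the database is still empty; once any content has been written the encoding is fixed and the pragma is silently ignored, which would explain it appearing to do nothing. It also only governs how text is stored inside the database file, not filename handling. A minimal sketch in Python's sqlite3:)

```python
import sqlite3

# On a brand-new, still-empty database the encoding can be chosen.
fresh = sqlite3.connect(":memory:")
fresh.execute('PRAGMA encoding = "UTF-16"')
print(fresh.execute("PRAGMA encoding").fetchone()[0])  # UTF-16le (machine byte order)

# Once any content exists the encoding is fixed; the pragma is a no-op.
used = sqlite3.connect(":memory:")
used.execute("CREATE TABLE x (y)")
used.execute('PRAGMA encoding = "UTF-16"')
print(used.execute("PRAGMA encoding").fetchone()[0])  # UTF-8
```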

Any guidance from older hands would be greatly appreciated.

Thanks.

Michael Clagett
Principal Software Engineer
Mathworks, Inc.
mike.clag...@mathworks.com
(508)-647-4307



Re: [sqlite] Feature Request: Binding Arrays

2013-10-16 Thread Paul van Helden
Fantastic! I've been wanting this for a long time.

Since which version do we have sqlite3_intarray_x?


On Wed, Oct 16, 2013 at 1:28 PM, Richard Hipp  wrote:

> Please see http://www.sqlite.org/src/artifact/2ece66438?ln=13-75
>
>
> --
> D. Richard Hipp
> d...@sqlite.org


Re: [sqlite] FTS4 + spellfix1 with multiple languages

2013-10-16 Thread Raf Geens


On 10/16/13 08:48, Dan Kennedy wrote:

On 10/15/2013 08:13 PM, Raf Geens wrote:

Hi,

  I have a FTS4 table that contains entries in multiple languages 
(using the languageid option). I also have a spellfix1 table that I 
use to search with misspelled words on the FTS4 table. In the 
spellfix1 documentation a fts4aux table is used to fill a spellfix1 
table based on a FTS4 one. This works in a single-language scenario. 
However, I've found that the fts4aux table is empty if the languageid 
option is used on the FTS4 table.


  My workaround for this has been to create temporary copies of the 
FTS4 table, one for each language, with the languageid column 
dropped. I can then use fts4aux and fill the spellfix1 table language 
by language. This feels like a big hack though. Have I missed a 
better way to do this?


The upcoming 3.8.1 release adds a hidden languageid column
to the fts4aux table:

  http://sqlite.org/draft/fts3.html#f4alid

So by adding "languageid=N" to the WHERE clause used
to query the fts4aux table you can read the vocabulary
belonging to languages with non-zero language-ids.

Will this work for you?

Dan.


Yes, that sounds ideal, thanks!



Re: [sqlite] Feature Request: Binding Arrays

2013-10-16 Thread Richard Hipp
Please see http://www.sqlite.org/src/artifact/2ece66438?ln=13-75


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Feature Request: Binding Arrays

2013-10-16 Thread Dominique Devienne
On Wed, Oct 16, 2013 at 12:03 PM, Clemens Ladisch wrote:

> Dominique Devienne wrote:
> > select * from some_table where some_column in (...)
> >
> > 2) In other places we synthesize the query text by splicing list.join(",
> ")
> > in the in (list) where clause.
> >
> > Both solutions are unsatisfactory, because ... 2) constantly reparse and
> > prepare queries, which can get super long if the array to "bind" is big.
> >
> > Any chance SQLite would add true array binding?
>
> The compiled statement depends on the number of elements, so SQLite
> would have to reprepare anyway:
>

But isn't that a consequence of the fact that an in (list) where clause is
necessarily bounded and known at parse time?

The same way I can manually transform the in (list) into a join to a temp
table, so can the query optimizer.

It already uses hidden intermediary result-sets for query processing, and
the array would basically be one such internal (anonymous) hidden
"result-set".

The difference with the manual transform-into-join code I'm forced to do
now is that SQLite wouldn't have to name the table and column to create the
temp table, fill it, use it, drop it, etc... These activities trigger
authorizer hooks, trace hooks, change the (temp) schema, etc... (i.e. a
bunch of DDL and DML statements) while SQLite itself, would it support
array binding, would generate none of that monitored statement activity.

Notice that I'm inquiring about array-binding for in (list) only, not for
putting into table cells, not selecting them, not joining on them, etc...
I'd love to be able to do that, but that's a different can of worms
entirely.

Thanks, --DD


Re: [sqlite] Feature Request: Binding Arrays

2013-10-16 Thread Clemens Ladisch
Dominique Devienne wrote:
> select * from some_table where some_column in (...)
>
> 2) In other places we synthesize the query text by splicing list.join(", ")
> in the in (list) where clause.
>
> Both solutions are unsatisfactory, because ... 2) constantly reparse and
> prepare queries, which can get super long if the array to "bind" is big.
>
> Any chance SQLite would add true array binding?

The compiled statement depends on the number of elements, so SQLite
would have to reprepare anyway:

> .explain on
> explain select 1 in (111,222,333);
addr  opcode       p1    p2    p3    p4  p5  comment
----  -----------  ----  ----  ----  --  --  -------
...
8     Integer      111   3     0         00
9     MakeRecord   3     1     4     b   00
10    IdxInsert    1     4     0         00
11    Integer      222   3     0         00
12    MakeRecord   3     1     4     b   00
13    IdxInsert    1     4     0         00
14    Integer      333   3     0         00
15    MakeRecord   3     1     4     b   00
16    IdxInsert    1     4     0         00
...


Regards,
Clemens


[sqlite] Feature Request: Binding Arrays

2013-10-16 Thread Dominique Devienne
We have an SQLite virtual-table heavy application with a lot of the GUI
driven by SQL queries, and often times we have queries of the form

select * from some_table where some_column in (...)

where ... is coming from prior selections in the GUI, or filtering, etc...

1) In some places, we create temporary tables and join with those, instead
of using the in (list) where clause.
2) In other places we synthesize the query text by splicing list.join(", ")
in the in (list) where clause. (whether you splice the text of the values,
or a series of %i and do proper binding makes little difference IMHO, in
both cases you need to reparse).

Both solutions are unsatisfactory, because with 1) you have to create dummy
transient tables, for which you need to invent table names, insert, join
with, and then delete/cleanup, and 2) constantly reparse and prepare
queries, which can get super long if the array to "bind" is big.
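For concreteness, the two workarounds look roughly like this in Python's sqlite3 (table, column, and data are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (name TEXT, type TEXT, PRIMARY KEY (name, type))")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [("a", "x"), ("b", "y"), ("c", "z")])
types = ["x", "z"]  # the "array" we would like to bind

# Workaround 1: dummy transient table to join against, then cleanup.
con.execute("CREATE TEMP TABLE sel (type TEXT PRIMARY KEY)")
con.executemany("INSERT INTO sel VALUES (?)", [(t,) for t in types])
rows = con.execute("SELECT t.* FROM t JOIN sel USING (type)").fetchall()
con.execute("DROP TABLE sel")

# Workaround 2: splice one '?' per element; the statement text (and thus
# the prepared statement) changes with every distinct list length.
sql = "SELECT * FROM t WHERE type IN (%s)" % ",".join("?" * len(types))
rows2 = con.execute(sql, types).fetchall()
```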

Any chance SQLite would add true array binding?

For example, given

create table t (name text, type text, primary key (name, type));
select * from t where type in (%1);

and binding would look something like this:

sqlite3_bind_array_begin(stmt, 1 [, types.size()]); // size param?
for (const auto& type: types) {
  sqlite3_bind_text(stmt, 1, type.c_str(), type.size(), SQLITE_TRANSIENT);
}
sqlite3_bind_array_end(stmt, 1);

Whether the API allows only homogeneous elements in the array (element type
specified in the sqlite3_bind_array_begin) or it's the usual SQLite duck
typing matters little to me.

Obviously I would welcome such a change. I have no clue how difficult to
implement that is of course, but if somehow it could be added, and doesn't
make SQLite that much bigger, then such an addition would be very much
welcome.

If I somehow missed a better work-around to this lack of array-binding, I'm
also interested of course, but obviously I'd prefer real array binding.

Thanks for any insight on this, --DD


[sqlite] ALTER COLUMN in sqlite

2013-10-16 Thread mitzanu
There's no ALTER COLUMN in sqlite.

I believe the option is to:

•Rename the table to a temporary name
•Create a new table without the NOT NULL constraint
•Copy the content of the old table to the new one
•Remove the old table

Can you guys implement ALTER COLUMN in sqlite? it would be a great feature.
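A minimal sketch of those four steps in Python's sqlite3, for a hypothetical table t where the NOT NULL constraint on column b is to be dropped:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b TEXT NOT NULL)")
con.execute("INSERT INTO t VALUES (1, 'x')")

with con:  # do the whole rebuild in one transaction
    con.execute("ALTER TABLE t RENAME TO t_old")         # 1. rename away
    con.execute("CREATE TABLE t (a INTEGER, b TEXT)")    # 2. recreate without NOT NULL
    con.execute("INSERT INTO t SELECT a, b FROM t_old")  # 3. copy the content
    con.execute("DROP TABLE t_old")                      # 4. remove the old table

con.execute("INSERT INTO t VALUES (2, NULL)")  # NULL in b is now accepted
```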



--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/ALTER-COLUMN-in-sqlite-tp71706.html
Sent from the SQLite mailing list archive at Nabble.com.


[sqlite] JDBC Driver Issue

2013-10-16 Thread mitzanu
Sqlite JDBC : 

I have found a bug in the JDBC driver. The method
DatabaseMetaData.getColumns(String catalog, String schemaPattern,
String tableNamePattern, String columnNamePattern) throws SQLException
does not correctly return the NULLABLE-or-not value in
resultSet.getInt(11). Please make sure that the primary key columns also
have the correct value here.
I consider a column mandatory (not null) if:
DatabaseMetaData.columnNoNulls == rs.getInt(11)
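For what it's worth, the underlying information is available from PRAGMA table_info, whose notnull flag reflects only a declared NOT NULL constraint while the pk flag is reported separately, which may be where a driver goes wrong; a sketch in Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE t (
    id INTEGER PRIMARY KEY,
    a  TEXT NOT NULL,
    b  TEXT
)""")

# Row layout: cid, name, type, notnull, dflt_value, pk.
# Note: id is reported with notnull=0 despite being the rowid primary key,
# because no explicit NOT NULL was declared on it.
for cid, name, ctype, notnull, dflt, pk in con.execute("PRAGMA table_info(t)"):
    print(name, "NOT NULL" if notnull else "nullable", "(pk)" if pk else "")
```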




One more request for Sqlite : 
Would be great if you would support adding foreign keys after the table has
been created.
This is highly needed by database design tools. All users create their
foreign keys after the table has been created.
This would be a great advantage for those who intend to do real database
design.
So please add commands for ALTER TABLE... ADD CONSTRAINT ... FOREIGN KEY (
col1, col2 ) REFERENCES ... ( col1, col2 ).




--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/JDBC-Driver-Issue-tp71705.html
Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] FTS4 + spellfix1 with multiple languages

2013-10-16 Thread Dan Kennedy

On 10/15/2013 08:13 PM, Raf Geens wrote:

Hi,

I have a FTS4 table that contains entries in multiple languages (using
the languageid option). I also have a spellfix1 table that I use to
search with misspelled words on the FTS4 table. In the spellfix1
documentation a fts4aux table is used to fill a spellfix1 table based on
a FTS4 one. This works in a single-language scenario. However, I've
found that the fts4aux table is empty if the languageid option is used
on the FTS4 table.

My workaround for this has been to create temporary copies of the FTS4
table, one for each language, with the languageid column dropped. I can
then use fts4aux and fill the spellfix1 table language by language. This
feels like a big hack though. Have I missed a better way to do this?


The upcoming 3.8.1 release adds a hidden languageid column
to the fts4aux table:

  http://sqlite.org/draft/fts3.html#f4alid

So by adding "languageid=N" to the WHERE clause used
to query the fts4aux table you can read the vocabulary
belonging to languages with non-zero language-ids.
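(A minimal illustration of that languageid-filtered fts4aux query from Python's sqlite3, assuming an FTS4 build of 3.8.1 or later; table and column names are invented:)

```python
import sqlite3

con = sqlite3.connect(":memory:")
# An FTS4 table whose hidden language column is named "lid".
con.execute("CREATE VIRTUAL TABLE docs USING fts4(content, languageid=lid)")
con.execute("INSERT INTO docs (content, lid) VALUES ('hello world', 0)")
con.execute("INSERT INTO docs (content, lid) VALUES ('bonjour monde', 1)")
con.execute("CREATE VIRTUAL TABLE vocab USING fts4aux(docs)")

# The hidden languageid column selects which language's vocabulary to read.
terms = sorted({t for (t,) in con.execute(
    "SELECT term FROM vocab WHERE languageid = 1")})
print(terms)  # ['bonjour', 'monde']
```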

Will this work for you?

Dan.




Re: [sqlite] Trigger SQL and database schema

2013-10-16 Thread Darren Duncan

On 2013.10.15 10:34 PM, Petite Abeille wrote:

On Oct 16, 2013, at 7:20 AM, Darren Duncan  wrote:


On 2013.10.14 11:58 PM, Sqlite Dog wrote:

seems like SQLite is not checking trigger SQL for invalid column names
until execution?


What you describe sounds like the behavior of every SQL DBMS with triggers
whose behavior I know.


Hmmm… FWIW… Oracle, for one, will invalidate triggers, views, packages, etc if 
their underlying tables change.

There is even a very handy ALL_DEPENDENCIES views to track all the explicit 
interdependencies between objects:

http://docs.oracle.com/cd/B28359_01/server.111/b28320/statviews_1066.htm#i1576452


But the key thing here, and my point, is that even Oracle wouldn't block the 
underlying tables change due to the invalidation of other schema objects that 
would result.


Oracle would allow the invalid trigger/view/package definitions to exist, rather 
than requiring the user to temporarily delete those first or update their 
definitions simultaneously with the underlying tables thereby enforcing 
compatibility.


This is what I'm talking about, that invalid trigger/etc definitions are allowed 
to exist, by every SQL DBMS whose behavior I know about, and SQLite matching 
that behavior would best be maintained.


Not checking trigger/etc validity until execution makes it possible to
separately change the tables and the other objects depending on them, or, for
that matter, to alter the underlying tables again to bring them back into
compatibility with other objects' expectations of them, at which point the
triggers/etc would become valid again without having ever changed.


-- Darren Duncan
