Re: [sqlite] Maintaining a Last updated timestamp column using triggers

2020-03-12 Thread Jean-Luc Hainaut

On 12/03/2020 08:47, David Blake wrote:

I'm looking for an easy way to maintain a last-updated column for each
record in several tables and am considering whether using a trigger is viable.

I thought that defining a trigger like this on each table would work

CREATE TRIGGER my_update_trigger BEFORE UPDATE ON My_table
   FOR EACH ROW WHEN NEW.lastupdated < OLD.lastupdated
   BEGIN
   UPDATE song SET lastupdated = DATETIME('now') WHERE id = NEW.id;
   END;


As far as I know, updating the current row in a "before" trigger is a
"no-op" in SQLite, since that row does not exist yet (could someone
confirm?). However, this works fine in an "after" trigger.


In addition, the role of the "when" clause is unclear.  Is it necessary?
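For what it's worth, an "after" variant could look like this (a sketch reusing
the names of the question; recursive triggers are off by default, and the
"when" guard here only stamps rows whose lastupdated was not changed by the
statement itself):

CREATE TRIGGER my_update_trigger AFTER UPDATE ON My_table
   FOR EACH ROW WHEN NEW.lastupdated IS OLD.lastupdated
   BEGIN
   UPDATE My_table SET lastupdated = DATETIME('now') WHERE id = NEW.id;
   END;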

JLH




Re: [sqlite] How to enforce a specific order of group_concat?

2020-03-02 Thread Jean-Luc Hainaut

On 1/03/2020 22:57, mailing lists wrote:

Assume I create the following table:

CREATE TABLE Test (ID INTEGER PRIMARY KEY, Value TEXT);
INSERT INTO Test (Value) VALUES('Alpha');
INSERT INTO Test (Value) VALUES('Beta');
INSERT INTO Test (Value) VALUES('Beta');
INSERT INTO Test (Value) VALUES('Alpha');

According to the documentation of group_concat the order is undefined, indeed:

SELECT group_concat(DISTINCT Value) FROM Test ORDER BY Value ASC;
SELECT group_concat(DISTINCT Value) FROM Test ORDER BY Value DESC;

Both queries result in Alpha,Beta.

Changing the queries to

WITH Result(x) AS (SELECT DISTINCT Value FROM Test ORDER BY Value ASC) SELECT 
group_concat(x) FROM Result;
WITH Result(x) AS (SELECT DISTINCT Value FROM Test ORDER BY Value DESC) SELECT 
group_concat(x) FROM Result;

leads to the results Alpha,Beta, respectively Beta,Alpha.

Is this a coincidence or is this guaranteed to work?

Are there any other solutions / possibilities?


I also sometimes need a deterministic version of group_concat(). For 
such cases, I have written the short UDF below (in Python, but I guess 
the algorithm can easily be translated into other languages):


JLH

class group_concat2:
    # Rewriting of "group_concat" of SQLite to simulate that of MySQL.
    # Implements "distinct", "order by", "descending" and "separator".
    # Interprets "null" values "intuitively".
    #
    # Format: group_concat2(value,distinct,sortkey,direction,separator)
    #   value:     char or numeric SQL expression; if numeric, converted into char;
    #              the next value to concatenate;
    #              discarded if None (Python translation of SQL null).
    #   distinct:  numeric or char SQL expression; if char, converted into integer;
    #              uniqueness indicator;
    #              if 1, duplicates ignored; if 0, duplicates allowed.
    #   sortkey:   char or numeric SQL expression (no conversion);
    #              the order key value for the current "value" instance;
    #              if None or u'', the current "value" instance is used instead.
    #   direction: numeric or char SQL expression; if char, converted into integer;
    #              ordering direction (1 = asc; 2 = desc).
    #   sep:       char or numeric SQL expression; if numeric, converted into char;
    #              value separator;
    #              if None, default u','.
    # Example:
    #   select City,group_concat2(lower(CustID),1,Account,'2','; ') as Customers
    #   from   CUSTOMER group by City;

    def __init__(self):
        # Initialize
        self.number  = 0        # number of values added
        self.valList = []       # list of values to concatenate
        self.orderby = []       # list of values of the order key
        self.distinct = 0       # whether "valList" values must be unique (0 = no; 1 = yes)
        self.direction = 1      # ordering direction (1 = asc; 2 = desc)
        self.sep = u','         # separator

    def step(self, value, distinct, sortkey, direction, sep):
        # Adding a new value to concatenate.
        # Each call of this method may specify different values of the
        # (distinct,sortkey,direction,sep) parameters.
        # However, only those specified by the call for the first "value" instance
        # will be considered, the others being ignored.

        import numbers

        self.number += 1

        # Initialize user values of "distinct", "direction" and "sep"
        if self.number == 1:
            if distinct in [1, u'1']:
                self.distinct = 1
            if direction in [1, 2, u'1', u'2']:
                self.direction = int(direction)
            if sep is not None:
                if isinstance(sep, numbers.Number):
                    self.sep = unicode(sep)
                else:
                    self.sep = sep

        if sortkey is None:
            sortkey = value if value is not None else 1
        elif sortkey == u'':
            sortkey = value if value is not None else 1

        if value is not None:
            if isinstance(value, numbers.Number):
                value = unicode(value)
            if self.distinct:
                if value not in self.valList:
                    self.valList.append(value)
                    self.orderby.append(sortkey)
            else:
                self.valList.append(value)
                self.orderby.append(sortkey)
        else:
            # value discarded
            pass

    def finalize(self):
        if self.direction == 1:
            self.valList = [y for x, y in
                            sorted(zip(self.orderby, self.valList), reverse=False)]
        else:
            self.valList = [y for x, y in
                            sorted(zip(self.orderby, self.valList), reverse=True)]
        return self.sep.join(self.valList)










Re: [sqlite] Trigger name missing

2020-02-26 Thread Jean-Luc Hainaut

On 26/02/2020 12:18, Richard Hipp wrote:

On 2/26/20, Jean-Luc Hainaut  wrote:

Hi all,

It seems that SQLite (version 3.31.1) accepts a trigger declaration in
which the name is missing. When fired, this trigger doesn't crash but
exhibits a strange behaviour. In particular, while the expression
"new.<col>" in an "insert" trigger returns the correct value, the
equivalent expression "select <col> from T where Id = new.Id" always
returns null (column "Id" is the PK of table "T"). Similarly, "update T
set <col> = <expression> where Id = new.Id" (silently) fails.

What is the text of your trigger?


This trigger belongs to a small experimental application I'm writing to
study the extent to which application code (initially in Java, Python,
etc.) can be integrated into SQL, notably through triggers. In short,
can one convert a standard 3-tier business application into just a GUI +
an active database, without the standard application program between them?
The following trigger controls the registration of a customer order 
[insert into CUSTORDER(OrdID, CustID,ItemID,Qty,...) values (...)] when 
the available quantity (Qavail) of the requested item is sufficient.

If the name 'CORD_INS1' is missing, this trigger (among others):
  - updates the ITEM table. [successfully]
  - completes the customer order (Price and State in CUSTORDER). [fails]
  - creates an invoice (in CUSTINVOICE) and prints it in a text file. [successfully]


After reading all your explanations and comments, my interpretation is
as follows:
1. The SQLite syntax tells me that the "before/after/instead of" keyword
can be missing, in which case (I guess) "before" is assumed.
2. So, my "name-less" trigger is valid and must be read:
   create trigger "after" before insert on CUSTORDER ...
3. In a "before" trigger, the current row cannot be updated, since it
doesn't exist yet (though several RDBMSs have a specific syntax for that).
4. This explains why SQLite legitimately ignores the second update.
Am I right?
If I am, this behaviour is "not a bug but a feature". It could be useful
to state these facts explicitly in the documentation.


Thanks to all

Jean-Luc Hainaut

create table CUSTOMER (CustID,Name,Address,City,Account,...);
create table ITEM (ItemID,Description,Price,QonHand,Qord,Qavail,Qmargin,...);
create table CUSTORDER (OrdID,DateOrd,CustID,ItemID,Price,Qty,State,...);
create table CUSTINVOICE (InvID,DateInv,OrdID,CustID,ItemID,Price,Qty,Amount,State,...);

create table SUPPLIER (SuppID,Name,City,...);
create table OFFER (SuppID,ItemID,Price,Delay,...);
create table SUPPORDER (OrdID,DateOrd,ItemID,SuppID,Price,Qty,State,...);

create trigger CORD_INS1
after insert on CUSTORDER
for each row
when new.Qty <= (select Qavail from ITEM where ItemID = new.ItemID)
and  not exists (select * from CUSTINVOICE where OrdID = new.OrdID)
begin
   --
   -- Subtract Qty from Qavail:
   update ITEM
   set   Qavail = Qavail - new.Qty
   where ItemID = new.ItemID;
   --
   --...
   -- Set CUSTORDER.State to 'invoiced' or 'pending'
   update CUSTORDER
   set   Price = (select Price from ITEM where ItemID = new.ItemID),
         State = case when new.Qty <= (select QonHand from ITEM where ItemID = new.ItemID)
                      then 'invoiced'
                      else 'pending'
                 end
   where OrdID = new.OrdID;
   --
   -- Create an invoice and print it:
   insert into CUSTINVOICE(...);
   --
end;



[sqlite] Trigger name missing

2020-02-26 Thread Jean-Luc Hainaut

Hi all,

It seems that SQLite (version 3.31.1) accepts a trigger declaration in
which the name is missing. When fired, this trigger doesn't crash but
exhibits a strange behaviour. In particular, while the expression
"new.<col>" in an "insert" trigger returns the correct value, the
equivalent expression "select <col> from T where Id = new.Id" always
returns null (column "Id" is the PK of table "T"). Similarly, "update T
set <col> = <expression> where Id = new.Id" (silently) fails.


Not critical but annoying if you are, like me, absent-minded when typing
code!


Regards

Jean-Luc Hainaut



Re: [sqlite] How to group this?

2020-02-11 Thread Jean-Luc Hainaut

On 11/02/2020 01:35, Simon Slavin wrote:

I don't think that creating an index on a view actually works, does it?
You're right.  What was I thinking?  Maybe I've used another implementation of
SQL that it does work on.  Thanks for picking me up on it.


You are right, SQL Server allows you to create an index on a view.

J-L Hainaut




Re: [sqlite] case-insensitivity of keywords are hardly documented

2020-02-08 Thread Jean-Luc Hainaut

On 7/02/2020 20:08, Aapo Rantalainen wrote:

Hi, I'm just doing some 'software archeology' and I found that:
on Nov 21 01:02:00 2004
FossilOrigin-Name: ac72a1d5518f7b505ae2a1bd3be3d71db461ae7e
git: f8565825622a1ed48bdaa835968a1137b2ffa593

This sentence have been dropped out of documentation:
"Keyword matching in SQLite is case-insensitive."

https://github.com/sqlite/sqlite/commit/f8565825622a1ed48bdaa835968a1137b2ffa593#diff-b43337792fa9656f4e2ae1351e18bee6L1556

Can anybody say what happened at that time? This sentence seems not to
have been there ever since. Just by using SQLite one can tell that keywords
are case-insensitive, but why is it not documented explicitly anymore?


Keyword case-insensitivity is a requirement of the SQL standard(s), not 
an SQLite feature. One can suppose that removing this from SQLite info 
was just a way to remove noise from SQLite documentation.


J-L Hainaut



Re: [sqlite] single table data collapse with constraints

2020-02-01 Thread Jean-Luc Hainaut

Hi Robert,

I'm a bit late: I missed your post in the recent tsunami (of posts)!

Your question relates to one of the basic primitives of temporal 
databases, "coalescing". Basic but far from simple if you want to 
express it in pure SQL. By contrast, the answer is much simpler if 
you code it as a loop in any host language.


You could consult this tutorial about the concepts of temporal databases:
https://staff.info.unamur.be/dbm/Documents/Tutorials/SQLfast/SQLfast-Case09-Temporal-DB(1).pdf 
<https://staff.info.unamur.be/dbm/Documents/Tutorials/SQLfast/SQLfast-Case09-Temporal-DB%281%29.pdf>


... and this one about temporal data manipulation, including coalescing:
https://staff.info.unamur.be/dbm/Documents/Tutorials/SQLfast/SQLfast-Case10-Temporal-DB(2).pdf 
<https://staff.info.unamur.be/dbm/Documents/Tutorials/SQLfast/SQLfast-Case10-Temporal-DB%282%29.pdf>
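That said, if one prefers to stay in pure SQL, window functions (SQLite 3.25
and later) keep coalescing reasonably compact. A sketch, using the column
names of the post below and a hypothetical table name T (intervals are
normalized with min/max, duplicates removed with distinct, and touching or
overlapping intervals merged):

with N as (
  select distinct CLS1, CLS2, min("START","END") as S, max("START","END") as E
  from   T
),
F as (
  select CLS1, CLS2, S, E,
         case when S <= max(E) over (partition by CLS1, CLS2 order by S, E
                                     rows between unbounded preceding and 1 preceding)
              then 0 else 1 end as new_island
  from   N
),
G as (
  select CLS1, CLS2, S, E,
         sum(new_island) over (partition by CLS1, CLS2 order by S, E) as island
  from   F
)
select   CLS1, CLS2, min(S) as "START", max(E) as "END"
from     G
group by CLS1, CLS2, island;

-- each "island" groups rows whose normalized intervals touch or overlap;
-- min/max then collapse the island into a single record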


Regards

Jean-Luc Hainaut


On 29/01/2020 16:02, Golding Robert wrote:

Hi,

I am emailing to ask if someone could advise me how to create a query or 
queries which will collapse some data based on a limited number of constraints.
I am currently attempting to complete this task using DB Browser for SQLite. I
have tried to write a WITH RECURSIVE statement as I think this is the
requirement but am struggling with both the abstraction and syntax.


I have data contained with a single table of structure:
CLS1 field (text)
CLS2 field (integer)
START field (integer)
END field (integer)

I need to collapse the data based on the matching of values in fields CLS1 and 
CLS2; the final constraint is that if END and START values are continuous of 
the another record then they can be collapsed. Therefore records should only be 
collapsed if gaps do not exist. The results then need to written to a new 
table, leaving the original data as is.

Input data: assumptions

   1.  Data may or may not be ordered
   2.  Duplicates may or may not exist
   3.  Start and end values could be the same
   4.  Start values are normally lower than the end value, however the high 
value could be in the start field
   5.  Assume that there is no overlap in terms of start and end values (namely 
if they can be joined then one will stop where the next starts)

Input data:  example
CLS1,CLS2,START,END
ABC1,100,0,1
ABC1,100,1,1 (start and end values could be the same, in the first instance 
assume that they may be dissolved if possible, if they cannot the record need 
to be retained)
ABC1,100,1,3
ABC1,100,1,3 (duplicates may or may not be present, if present then they can be 
dissolved into a single instance)
ABC1,100,3,4
ABC1,100,4,3
ABC1,100,5,6
ABC1,100,6,20
ABC1,100,6,20(duplicates may or may not be present, if present then they can be 
dissolved into a single instance)
ABC1,500,4,19
ABC1,500,19,4 (start and end values could be inverted where Start is high and 
End id low (this is not the norm but it is legitimate) in this case start and 
end values may be inverted, and in this case dissolved into a single instance)
ABC2,300,4,4 (start and end values could be the same, in the first instance 
assume that they may be dissolved if possible, if they cannot the record need 
to be retained)



Output data: collapsed/merged expected output
CLS1,CLS2,START,END
ABC1,100,0,4
ABC1,100,5,20
ABC1,500,4,19
ABC2,300,4,4


I would be extremely grateful if anybody could help me with this issue.


Regards,

Rob









Re: [sqlite] Capturing the changes in columns in a table

2020-01-14 Thread Jean-Luc Hainaut


Another version that uses neither CTEs nor window functions:

select t1.a as Proj, t2.idate as "On", 'b' as Var, t1.b as oldVal, t2.b as newVal
from   t t1, t t2
where  t1.a = t2.a
and    t2.idate = date(t1.idate,'+1 day')
and    t1.b <> t2.b
   union all
select t1.a as Proj, t2.idate as "On", 'c' as Var, t1.c as oldVal, t2.c as newVal
from   t t1, t t2
where  t1.a = t2.a
and    t2.idate = date(t1.idate,'+1 day')
and    t1.c <> t2.c
   union all
select t1.a as Proj, t2.idate as "On", 'd' as Var, t1.d as oldVal, t2.d as newVal
from   t t1, t t2
where  t1.a = t2.a
and    t2.idate = date(t1.idate,'+1 day')
and    t1.d <> t2.d
   union all
select t1.a as Proj, t2.idate as "On", 'e' as Var, t1.e as oldVal, t2.e as newVal
from   t t1, t t2
where  t1.a = t2.a
and    t2.idate = date(t1.idate,'+1 day')
and    t1.e <> t2.e
order by Proj, "On";

Valid if there is one state for each project on each date.

Jean-Luc Hainaut




[sqlite] Death of Hector Garcia-Molina

2019-12-07 Thread Jean-Luc Hainaut


Sad news: death of Hector Garcia-Molina, one of the pioneers in the 
field of distributed databases.


https://news.stanford.edu/2019/12/06/hector-garcia-molina-influential-computer-scientist-database-expert-dies-65/.

JLH


Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Jean-Luc Hainaut


Four observations that may (or may not) explain the problem:
- dataset.id is declared "unique": useless, since the primary key is unique
by definition, but it may create an extra index (not checked). A PK is not
"more unique" because an additional "unique" constraint is declared.
- there is no "on delete <...>" clause on trace.datasetid, so the default
"no action" applies. Is that what you want?
- there is no index on this foreign key, hence a potential full scan to
identify the child rows in "trace" (or their absence); see the sketch below.
- index trace_idx_01 is declared twice. Most often once is sufficient, even
for large tables.
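For the second and third points, something along these lines may help (a
sketch; the index name is hypothetical, and the cascade clause belongs in
the table definition):

create index trace_idx_02 on trace (datasetid);
-- and, if the child rows must disappear with their dataset:
--   datasetid INTEGER NOT NULL REFERENCES dataset (id) ON DELETE CASCADE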


Jean-Luc Hainaut


Yes, please apologize, I indeed forgot to attach the table definitions:

CREATE TABLE dataset (
 id INTEGER  PRIMARY KEY AUTOINCREMENT
 UNIQUE
 NOT NULL,
 name   STRING   DEFAULT NULL
 COLLATE NOCASE,
 is_latest  BOOLEAN  NOT NULL
 DEFAULT 1,
 created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE trace (
 id INTEGER  PRIMARY KEY AUTOINCREMENT
 UNIQUE
 NOT NULL,
 name   STRING   DEFAULT NULL
 COLLATE NOCASE,
 datasetid  INTEGER  REFERENCES dataset (id)
 NOT NULL,
 quantityid INTEGER  REFERENCES quantity (id)
 NOT NULL,
 stored DATETIME DEFAULT NULL,
 created_at DATETIME NOT NULL
 DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX trace_idx_01 ON trace (
 quantityid
);

CREATE INDEX trace_idx_01 ON trace (
 quantityid
);

CREATE TABLE item (
 idINTEGER PRIMARY KEY AUTOINCREMENT
   UNIQUE
   NOT NULL,
 traceid   INTEGER REFERENCES trace (id)
   NOT NULL,
 freq  BIGINT  NOT NULL,
 value REALNOT NULL,
 noiseflag BOOLEAN DEFAULT NULL
);

CREATE INDEX item_idx_01 ON item (
 traceid
);

CREATE TABLE metadata (
 idINTEGER PRIMARY KEY AUTOINCREMENT
   UNIQUE
   NOT NULL,
 parameter STRING  NOT NULL
   COLLATE NOCASE,
 value STRING  NOT NULL
   COLLATE NOCASE,
 datasetid INTEGER DEFAULT NULL
   REFERENCES dataset (id),
 traceid   INTEGER DEFAULT NULL
   REFERENCES trace (id),
 itemidINTEGER DEFAULT NULL
   REFERENCES item (id)
);

CREATE INDEX metadata_idx_01 ON metadata (
 parameter,
 value,
 datasetid,
 traceid,
 itemid
);

CREATE INDEX metadata_idx_02 ON metadata (
 datasetid,
 traceid
);

CREATE INDEX metadata_idx_03 ON metadata (
 traceid
);

CREATE INDEX metadata_idx_04 ON metadata (
 datasetid,
 itemid
);

CREATE INDEX metadata_idx_05 ON metadata (
 traceid,
 itemid
);

CREATE INDEX metadata_idx_06 ON metadata (
 itemid
);

CREATE INDEX metadata_idx_07 ON metadata (
 datasetid,
 parameter
);

CREATE INDEX metadata_idx_08 ON metadata (
 traceid,
 parameter
);

CREATE INDEX metadata_idx_09 ON metadata (
 parameter,
 traceid
);

CREATE INDEX metadata_idx_10 ON metadata (
 parameter,
 datasetid,
 traceid,
 itemid
);

CREATE TABLE quantity (
 id INTEGER PRIMARY KEY AUTOINCREMENT
UNIQUE
NOT NULL,
 name   STRING  NOT NULL,
 unit   STRING  NOT NULL,
 sumrule[SMALLINT UNSIGNED] DEFAULT NULL,
 created_at DATETIMEDEFAULT CURRENT_TIMESTAMP,
 UNIQUE (
 name,
 unit,
 sumrule
 )
 ON CONFLICT IGNORE
);





- Original Message -
From: Dominique Devienne 
To: SQLite mailing list 
Sent: Thursday, October 31, 2019, 11:06:07
Subject: [sqlite] DELETE extremely slow

On Thu, Oct 31, 2019 at 10:57 AM Thomas Kurz  wrote:


I'm using a database with 5 hierarchically structured tables using foreign
keys. The largest table contains about 230'000 entries. My problem is that
deleting in this database is extremely slow:
pragma foreign_keys=on;
pragma journal_mode=wal;
.timer on
delete from dataset;
--> Run Time: real 197993.218 user 53015.593750 sys 54056.546875
I experimentally imported the same data into a MariaDB database and tried
the same operation there (without paying attention to creating any indexes,
etc.). It takes only a few seconds there.
Is there something I can check or do to improve deletion speed?


You're not describing the schema enough IMHO.
Is dataset the "top-most" table, containing the "parent" rows all other
tables references (directly or indirectly),
with all FKs having ON DELETE CASCADE?

If that's the case, without some kind of optimization in 

Re: [sqlite] Is it possible to transpose a table using SQL?

2019-03-31 Thread Jean-Luc Hainaut
If by "a dynamically changing table " you mean that any data change (not 
schema change) in t1 will propagate to v1, and if the set of Product 
values doesn't change, then you can try this:


create table t1(Product,Belgium,France,USA);
insert into t1 values
  ('OilFilter',1,2,3),('SparkPlug',4,5,6),('Coolent',7,8,9);

select * from t1;

create view v1(Region,OilFilter,SparkPlug,Coolent)
as select 'Belgium' as Region,
          sum(case Product when 'OilFilter' then Belgium else null end) as OilFilter,
          sum(case Product when 'SparkPlug' then Belgium else null end) as SparkPlug,
          sum(case Product when 'Coolent'   then Belgium else null end) as Coolent
   from t1
  union
   select 'France' as Region,
          sum(case Product when 'OilFilter' then France else null end) as OilFilter,
          sum(case Product when 'SparkPlug' then France else null end) as SparkPlug,
          sum(case Product when 'Coolent'   then France else null end) as Coolent
   from t1
  union
   select 'USA' as Region,
          sum(case Product when 'OilFilter' then USA else null end) as OilFilter,
          sum(case Product when 'SparkPlug' then USA else null end) as SparkPlug,
          sum(case Product when 'Coolent'   then USA else null end) as Coolent
   from t1;

select * from v1;

t1:
+-----------+---------+--------+-----+
| Product   | Belgium | France | USA |
+-----------+---------+--------+-----+
| OilFilter | 1       | 2      | 3   |
| SparkPlug | 4       | 5      | 6   |
| Coolent   | 7       | 8      | 9   |
+-----------+---------+--------+-----+
v1:
+---------+-----------+-----------+---------+
| Region  | OilFilter | SparkPlug | Coolent |
+---------+-----------+-----------+---------+
| Belgium | 1         | 4         | 7       |
| France  | 2         | 5         | 8       |
| USA     | 3         | 6         | 9       |
+---------+-----------+-----------+---------+

J-L Hainaut


On 31/03/2019 22:07, Shane Dev wrote:

Hello,

Is it possible to create a view which switches rows and columns of a
dynamically changing table?

For example, imagine we have table t1 where both columns and rows could
change after the view has been created

sqlite> select * from t1;
Product/Region|Belgium|France|USA
Oil_filter|1|2|3
Spark_plug|4|5|6
Coolent|7|8|9

Could view v1 be created such that

sqlite> select * from v1;
Product/Region|Oil_filter|Spark_plug|Coolent
Belgium|1|4|7
France|2|5|8
USA|3|6|9


Re: [sqlite] library interfering with input function when running in python console

2019-03-22 Thread Jean-Luc Hainaut

On 20/03/2019 22:48, Anthony-William Thibault wrote:

Hello there!

Consider the following program

Import sqlite3
x = Input("Enter your name”)
print(“Hello, ” + x)

When you run the code directly with python (double click the .py file or choose 
open with python) it won’t work


Not surprising, this is not a Python program, for several reasons. Try 
this one instead:


import sqlite3
x = raw_input("Enter your name")
print("Hello, " + x)

J-L Hainaut



Re: [sqlite] Recursive CTE on tree with doubly linked items

2019-03-11 Thread Jean-Luc Hainaut


Your implementation of trees is that of network databases at the 
pointer-based physical level but definitely not relational. Try this:


create table TREE(
  ID      integer not null primary key,
  Parent  integer references TREE on delete ... on update cascade);  -- Notice the absence of "not null"

create index XTREE on TREE(Parent);  -- Only useful for large sets of nodes

That's all.

From this, CTE and non-CTE queries are easy, elegant and fast. For
instance, extracting the siblings of a node is the direct translation of
their intuitive definition, "nodes with the same parent":

select * from TREE where Parent = 2;
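And, for instance, all the descendants of node 2 with a recursive CTE (a sketch):

with recursive SUBTREE(ID) as (
  select ID from TREE where Parent = 2
  union all
  select T.ID from TREE T join SUBTREE S on T.Parent = S.ID
)
select * from SUBTREE;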

Regards

J-L Hainaut

On 11/03/2019 09:08, heribert wrote:
I've a tree with doubly linked items. I want to get all siblings of a 
tree node (e.g. ID=2 or harder to implement ID=3).
I tried to solve this problem with CTE of SQLite by myself - but I can 
not find the solution. I looked for any exemplary solution - but do 
not find some.


DROP TABLE IF EXISTS "Tree";

CREATE TABLE "Tree" (
  "ID" INTEGER,
  "PrevIDX" INTEGER DEFAULT NULL,
  "NextIDX" INTEGER DEFAULT NULL,
  "ParentIDX" INTEGER DEFAULT NULL,
  PRIMARY KEY ("ID"),
  FOREIGN KEY ("PrevIDX") REFERENCES "Tree" ("ID"),
  FOREIGN KEY ("NextIDX") REFERENCES "Tree" ("ID"),
  FOREIGN KEY ("ParentIDX") REFERENCES "Tree" ("ID") ON DELETE CASCADE
);

INSERT INTO "Tree" VALUES (1, NULL, NULL, NULL);
INSERT INTO "Tree" VALUES (2, NULL, 3, 1);
INSERT INTO "Tree" VALUES (3, 2, 4, 1);
INSERT INTO "Tree" VALUES (4, 3, NULL, 1);



Re: [sqlite] Displaying hierarchical structure

2019-01-31 Thread Jean-Luc Hainaut

On 31/01/2019 17:59, Bart Smissaert wrote:

Thanks, will try that.


order by PATH

So, where is this path coming from?


Simple, from a discrepancy between the script I have tested and the 
contents of this mail!

Here is the complete (tested) script:

create table CLOSURE(PARENT_ID integer, ID integer, PATH text, DIST integer);

create trigger CLOSURE_INS after insert on CLOSURE
for each row
begin
  insert into CLOSURE
  select new.PARENT_ID, ID, new.PATH||'/'||cast(ID as char), new.DIST+1
  from   FOLDERS
  where  PARENT_ID = new.ID;
end;

insert into CLOSURE select ID,ID,'1',0 from FOLDERS where ID = 1;

select *, substr('                ',1,2*DIST)||cast(ID as char) as Display
from   CLOSURE order by PATH;


+-----------+----+-------+------+---------+
| PARENT_ID | ID | PATH  | DIST | Display |
+-----------+----+-------+------+---------+
| 1         | 1  | 1     | 0    | 1       |
| 1         | 2  | 1/2   | 1    |   2     |
| 1         | 5  | 1/2/5 | 2    |     5   |
| 1         | 6  | 1/2/6 | 2    |     6   |
| 1         | 3  | 1/3   | 1    |   3     |
| 1         | 4  | 1/4   | 1    |   4     |
+-----------+----+-------+------+---------+

JL



Re: [sqlite] Displaying hierarchical structure

2019-01-31 Thread Jean-Luc Hainaut
Recursive CTEs are the most obvious technique for solving this kind of 
problem.

However, a less known technique can do the job: recursive triggers.
Here is how the closure of FOLDERS can be computed. It will be stored in 
table CLOSURE:


create table CLOSURE(PARENT_ID integer, ID integer, DIST integer);
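For reference, FOLDERS is assumed to be declared along these lines (a sketch
based on the columns shown in the question below; only ID and PARENT_ID
matter here):

create table FOLDERS(ID integer primary key, PARENT_ID integer, Folder text);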

A trigger adds the children rows of each row that has been inserted into 
this table:


create trigger CLOSURE_INS after insert on CLOSURE
for each row
begin
  insert into CLOSURE
  select new.PARENT_ID,ID,new.DIST+1 from FOLDERS
  where  PARENT_ID = new.ID;
end;

To compute the closure, we just insert the root node:

insert into CLOSURE select ID,ID,0 from FOLDERS where ID = 1;

or all the nodes:

insert into CLOSURE select ID,ID,0 from FOLDERS;

To get the strict closure, we discard the initial rows (DIST = 0). And 
to display the node hierarchy:


select *,substr('  ',1,2*DIST)||cast(ID as char) as Display
from CLOSURE order by PATH;

+-----------+----+------+---------+
| PARENT_ID | ID | DIST | Display |
+-----------+----+------+---------+
| 1         | 1  | 0    | 1       |
| 1         | 2  | 1    |   2     |
| 1         | 5  | 2    |     5   |
| 1         | 6  | 2    |     6   |
| 1         | 3  | 1    |   3     |
| 1         | 4  | 1    |   4     |
+-----------+----+------+---------+

The path of each folder is computed in the same way.

J-L Hainaut


Working on an Android app and part of that is storing SQL in a virtual
folder system in SQLite. For this I want to use a so-called closure table
as explained nicely here:

http://technobytz.com/closure_table_store_hierarchical_data.html

I have a table holding the folder details:

ID  PARENT_ID  Folder
---------------------
1   0          Folder1
2   1          Folder2
3   1          Folder3
4   1          Folder4
5   2          Folder5
6   2          Folder6


And then the closure table:

PARENT_ID  CHILD_ID  DEPTH
--------------------------
1          1         0
2          2         0
3          3         0
4          4         0
5          5         0
6          6         0
1          2         1
1          3         1
1          4         1
2          5         1
1          5         2
2          6         1
1          6         2

What should the SQL be to display the folders like this:

Folder    PARENT_ID
Folder1   0
Folder2   1
 Folder5   2
 Folder6   2
Folder3   1
Folder4   1


RBS


Re: [sqlite] Concatenating text literals with NULL yields NULL

2019-01-04 Thread Jean-Luc Hainaut

On 04/01/2019 10:48, Dominique Devienne wrote:

I was just surprised by this behavior, see below.
Googling it, seems like SQL Server has a setting the change the behavior in
that case.
Is this standard SQL behavior, as implemented in SQLite?
Not complaining, just asking whether I can depend on it, or not.



It's standard SQL and plain logic: if you add something unknown to some 
known stuff, the result is ... unknown, that is, "null". The same holds for
the other SQL scalar expressions (123.4 + null --> null).
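If a non-null result is wanted, the unknown operand can be coalesced away, e.g.:

select 'abc' || null;                 --> null
select 'abc' || coalesce(null, '');   --> 'abc'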


JLH




Re: [sqlite] Persistent snapshots and rollbacks

2018-10-07 Thread Jean-Luc Hainaut


This suggestion refers to temporal DB.  To those interested by this 
approach, this tutorial could help (implementation coded in SQLite):


https://staff.info.unamur.be/dbm/Documents/Tutorials/SQLfast/SQLfast-Case09-Temporal-DB(1).pdf 



J-L Hainaut


On 05/10/2018 20:27, Richard Damon wrote:

On 10/5/18 2:19 PM, James K. Lowden wrote:

On Fri, 5 Oct 2018 17:39:57 +0200
Daniel Kraft  wrote:


I need the ability to make multiple changes / commits to my SQLite
database but keep snapshots of previous states and potentially roll
back to those states later on.  All of that needs to be persistent,
i.e. survive closing the database and restarting the process.  After
some time, I can get rid of old snapshots (my process determines by
itself when and which snapshots can get discarded, it is not based on
some fixed TTL or something like that).

"The totality of data in a data bank may be viewed
as a collection of time-varying relations."
-- E.F. Codd in
"A Relational Model of Data for Large Shared Data Banks"

You're not the first.  Data change over time.  SQL doesn't support data
versions as a language feature, but you can implement it yourself
in your database design.

Add a "version" column to your table.  Create views that (using a
self-join) show only the latest version. Periodically purge old
versions.  Roll back by deleting new versions.

One design that I have used is to add two timestamps to every record
(with sufficient precision for your versioning, it could be a version
number too), one is the starting time for the record, and the second for
the ending time (NULL if to 'now'). To update a record, you get the
current time stamp (or next version number), alter the existing record
to have that as its end and create a new record with it as the start
time and NULL for the end. To get 'current' data, you condition selects
with ISNULL(endtime), to get a historical record you select such that
start is less than or equal to the time, and the end is greater than the
time or NULL.

You can purge old records based on the end time being old enough, or
roll back completely by deleting records whose start is greater than the
given time and setting end times greater than that time back to NULL.
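A minimal sketch of that design (table and column names are mine, not part of
the original suggestion):

create table ITEM_HIST(
  ID         integer not null,
  VALUE      text,
  START_TIME text not null default (datetime('now')),
  END_TIME   text            -- null = current version
);
-- current state:    select * from ITEM_HIST where END_TIME is null;
-- state at time :T  select * from ITEM_HIST
--                   where START_TIME <= :T and (END_TIME > :T or END_TIME is null);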





Re: [sqlite] [EXTERNAL] Last_row_id

2018-09-20 Thread Jean-Luc Hainaut

On 17/09/2018 14:05, Hick Gunter wrote:

A trigger program does not return any result rows.


True. But a "select" query in the body of a trigger can be used to 
evaluate a user-defined function (in the "where" clause, for instance) in 
which any action allowed by your host language can be executed, 
including writing to a text file or opening a DB connection.
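For instance (a sketch; "export_row" is a hypothetical application-defined
function registered by the host program):

create trigger PO_AI after insert on po
begin
   -- the SELECT's result rows are discarded, but the UDF is evaluated
   -- with the new row's data
   select export_row(new.rowid);
end;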



-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On
Behalf Of Yadwindersingh
Sent: Sunday, 16 September 2018 03:28
To: sqlite-users@mailinglists.sqlite.org
Subject: [EXTERNAL] [sqlite] Last_row_id

Hi all;
I am using vs15.x

Create trigger return_pono after insert on po
Begin
  Select last_insert_rowid();
End

Trigger works quite fine in sqlite but fails to return any value to vb.net 
statement

Dim lrow as int64

Lrow = some_cmd.executescalar()


Please help
Thank you




Re: [sqlite] Why trigger (UPDATE) does not work?

2018-08-26 Thread Jean-Luc Hainaut


Please check the syntax of your CASE ... END expressions. As you have written 
them, they just return boolean values.
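For instance, the first UPDATE probably ought to read something like this (a
sketch based on the names in the dump below; the column being assigned stays
outside the CASE):

UPDATE SummedYearsMonthsDays
SET RemainingSummedDays =
    CASE WHEN (SELECT total(RemainingDays) FROM MyLengthOfService) < 30
         THEN (SELECT total(RemainingDays) FROM MyLengthOfService)
         ELSE (SELECT total(RemainingDays) % 30 FROM MyLengthOfService)
    END
WHERE id = 1;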


J-L Hainaut


On 26/08/2018 14:16, Csányi Pál wrote:

On Sun, Aug 26, 2018 at 07:45:33AM -0400, Brian Curley wrote:

You don't list your trigger definition if there's anything that might need
troubleshooting, but I will say that I've recently stopped using
SQLiteStudio for its somewhat erratic behavior.

I attached to my previous mail the whole database dump in which is
that trigger which does not work as I expected.

But here is then the database itself:
CREATE TABLE MyLengthOfService (id INT PRIMARY KEY, WorkPlaceName TEXT, 
StartDate DATE, EndDate DATE, WithWorkingTime INT, Comment TEXT, Years INT, 
RemainingMonths INT, RemainingDays INT);
INSERT INTO MyLengthOfService (id, WorkPlaceName, StartDate, EndDate, 
WithWorkingTime, Comment, Years, RemainingMonths, RemainingDays) VALUES (1, 
'Name of the 1. work place', '1983-07-11', '1984-08-31', 1, 'workman', 1, 1, 
21);
INSERT INTO MyLengthOfService (id, WorkPlaceName, StartDate, EndDate, 
WithWorkingTime, Comment, Years, RemainingMonths, RemainingDays) VALUES (2, 
'Name of the 2. work place', '1984-11-01', '1986-01-15', 1, 'workman', 1, 2, 
15);
INSERT INTO MyLengthOfService (id, WorkPlaceName, StartDate, EndDate, 
WithWorkingTime, Comment, Years, RemainingMonths, RemainingDays) VALUES (3, 
'Name of the 3. work place', '1986-01-16', '1999-07-16', 1, 'workman', 13, 6, 
1);
INSERT INTO MyLengthOfService (id, WorkPlaceName, StartDate, EndDate, 
WithWorkingTime, Comment, Years, RemainingMonths, RemainingDays) VALUES (4, 
'Name of the 4. work place', '2000-02-01', '2000-08-31', 1, 'teacher', 0, 7, 0);
INSERT INTO MyLengthOfService (id, WorkPlaceName, StartDate, EndDate, 
WithWorkingTime, Comment, Years, RemainingMonths, RemainingDays) VALUES (5, 
'Name of the 4. work place', '2000-09-01', '2001-01-31', 0.5,'teacher', 0, 5, 
0);
INSERT INTO MyLengthOfService (id, WorkPlaceName, StartDate, EndDate, 
WithWorkingTime, Comment, Years, RemainingMonths, RemainingDays) VALUES (6, 
'Name of the 4. work place', '2001-02-01', '2018-08-26', 1, 'teacher', 17, 6, 
26);

CREATE TABLE SummedYearsMonthsDays (
 idINT PRIMARY KEY,
 SummedYears   INT,
 RemainingSummedMonths INT,
 RemainingSummedDays   INT
);
INSERT INTO SummedYearsMonthsDays (id, SummedYears, RemainingSummedMonths, 
RemainingSummedDays) VALUES (1, 12, 0, 0);

CREATE TRIGGER AllYearsMonthsDays AFTER UPDATE OF Years, RemainingMonths, RemainingDays ON MyLengthOfService
BEGIN
  UPDATE SummedYearsMonthsDays
  SET RemainingSummedDays =
      CASE WHEN (SELECT total(RemainingDays) FROM MyLengthOfService) < 30
           THEN RemainingSummedDays = (SELECT total(RemainingDays) FROM MyLengthOfService)
           ELSE RemainingSummedDays = (SELECT total(RemainingDays) % 30 FROM MyLengthOfService)
      END
  WHERE id = 1;
  UPDATE SummedYearsMonthsDays
  SET RemainingSummedMonths =
      CASE WHEN (SELECT total(RemainingDays) FROM MyLengthOfService) < 30
           THEN RemainingSummedMonths = (SELECT total(RemainingMonths) FROM MyLengthOfService)
           ELSE RemainingSummedMonths = (SELECT CAST(total(RemainingMonths) + total(RemainingMonths) / 30 AS INTEGER) FROM MyLengthOfService)
      END
  WHERE id = 1;
END;



CREATE TRIGGER YearsRemainingMonthsDays AFTER UPDATE OF EndDate ON MyLengthOfService BEGIN UPDATE MyLengthOfService SET 
Years = (  with recursive dates (startDateR, endDateR) as (SELECT date(StartDate,'-1 day'), date(EndDate) FROM 
MyLengthOfService WHERE EndDate = NEW.EndDate), yearsTable (startDateR, years, months, days, resultDate, endDateR) as ( 
select min(startDateR, endDateR), 0, 0, 0, min(startDateR, endDateR), max(startDateR, endDateR) from dates union all select 
startDateR, years + 1, months, days, date(startDateR, '+' || cast(years + 1 as text) || ' years'), endDateR from yearsTable 
where resultDate < endDateR ), monthsTable (startDateR, years, months, days, resultDate, endDateR) as ( select * from ( 
select *from yearsTable where resultDate <= endDateR order by years desc, months desc, days desc limit 1) unionall select 
startDateR, years, months + 1, days, date(startDateR, '+' || cast(years as text) || ' years', '+' || cast(months + 1 as 
text) || ' months'), endDateR from monthsTable where resultDate < endDateR ), daysTable (startDateR, years, months, days, 
resultDate, endDateR) as ( select * from( select * from monthsTable where resultDate <= endDateR order by years desc, 
months desc, days desc limit 1) union all select startDateR, years, months, days + 1, date(startDateR, '+' || cast(years as 
text) || ' years', '+' || cast(months as text) || ' months', '+' || cast(days + 1 as text) || ' days'), endDateR from 
daysTable where resultDate < endDateR ) select  years from daysTable where resultDate = endDateR ), RemainingMonths = (  
with recursive dates (startDateR, endDateR) as (SELECT date(StartDate,'-1 day'), date(EndDate) FROM MyLengthOfService WHERE 
EndDate = NEW.EndDate), yearsTable 

Re: [sqlite] Common index for multiple databases

2018-08-02 Thread Jean-Luc Hainaut

On 02/08/2018 20:50, Keith Medcalf wrote:

In no DBMS known can you index data sourced from multiple tables in the same index -- 
this applies to "Relational" databases and all other database models (such as 
pure hierarchical, network, network extended, etc.)  In all DBMS systems the contents of 
the index must be sourced from a single object.


To the best of my (limited) knowledge, Oracle's CLUSTER is the only 
technique allowing an index to reference rows from several tables. The 
rows (from source tables) that share the same value of a column are 
collected into a page, if needed complemented by an overflow chain of 
pages. Whether the OP is ready to migrate to Oracle is another story!


J-L Hainaut


Re: [sqlite] Strange concatenation result

2018-02-27 Thread Jean-Luc Hainaut


Let me suggest an interpretation that seems to comply with the current 
implementation of "substr".


1. String X is stored in a (fictitious) infinite array, the cells of 
which are indexed -*, ..., -2, -1, 0, 1, 2, ..., +*.
2. String X is stored from cell 1 upward.
3. String 'abcd' is stored in cells [1,4]. Cells [-*,0] and [5,+*] are 
empty.
4. Parameters Y and Z specify a slice of the array.
5. Parameter Y, as described in the documentation, denotes any cell of 
the array, even if it doesn't contain a character of X.
6. Parameter Z, as described in the documentation, denotes any slice of 
the array, which may (but need not) include characters of X.
7. Function "substr" returns the contents of the non-empty cells of this 
slice.


Some examples:

select substr('abcd',1,2);   --> slice [1,2]
select substr('abcd',0,2);   --> slice [0,2]
select substr('abcd',0,-2);  --> slice [-2,-1]
select substr('abcd',5,-3);  --> slice [2,4]
select substr('abcd',5,2);   --> slice [5,6]
select substr('abcd',-3,3);  --> slice [2,4]
select substr('abcd',-4,3);  --> slice [1,3]
select substr('abcd',-5,3);  --> slice [0,2]
select substr('abcd',-6,3);  --> slice [-1,1]
select substr('abcd',-7,3);  --> slice [-2,0]
select substr('abcd',2,0);   --> empty slice
select substr('abcd',-5,0);  --> empty slice

+---------------------+--------+
| expression          | result |
+---------------------+--------+
| substr('abcd',1,2)  | ab     |
| substr('abcd',0,2)  | a      |
| substr('abcd',0,-2) |        |
| substr('abcd',5,-3) | bcd    |
| substr('abcd',5,2)  |        |
| substr('abcd',-3,3) | bcd    |
| substr('abcd',-4,3) | abc    |
| substr('abcd',-5,3) | ab     |
| substr('abcd',-6,3) | a      |
| substr('abcd',-7,3) |        |
| substr('abcd',2,0)  |        |
| substr('abcd',-5,0) |        |
+---------------------+--------+



Re: [sqlite] Strange concatenation result

2018-02-26 Thread Jean-Luc Hainaut

On 26/02/2018 12:19, Cezary H. Noweta wrote:

Hello,

On 2018-02-26 11:38, Hick Gunter wrote:
The substr(x,y,z) function is defined only for nonzero values of y. 
SQlite can return whatever it feels like if you insist on providing 
invalid input. With "being nice to the user" and "making a best 
effort to return sensible data even for nonsense input" as design 
goals, mapping substr(x,0,z) to substr(x,1,z-1) seems quite a benign 
solution.


... and as such, that design could be documented.


I have been using SQLite for several years but I didn't know this 
feature. I quite agree with your answers.


Thanks to both of you.

Just a personal comment: if we consider that the query, with Y=0, has no 
legitimate answer (an empty string IS a legitimate answer), returning 
"null" could also be a "user-friendly" answer, perhaps better at 
alerting the user to the use of an invalid parameter.


J-L Hainaut



Re: [sqlite] Strange concatenation result

2018-02-26 Thread Jean-Luc Hainaut


About the "substr(X,Y,Z)" function, I observe a strange behaviour when Y 
= 0.


If I execute this script:

select 'abcd',substr('abcd',0,1),substr('abcd',1,1),substr('abcd',2,1);
select 'abcd',substr('abcd',0,2),substr('abcd',1,2),substr('abcd',2,2);
select 'abcd',substr('abcd',0,9),substr('abcd',1,9),substr('abcd',2,9);
select 'abcd',substr('abcd',0),substr('abcd',1),substr('abcd',2);

It prints:

+--------+--------------------+--------------------+--------------------+
| 'abcd' | substr('abcd',0,1) | substr('abcd',1,1) | substr('abcd',2,1) |
+--------+--------------------+--------------------+--------------------+
| abcd   |                    | a                  | b                  |
+--------+--------------------+--------------------+--------------------+
+--------+--------------------+--------------------+--------------------+
| 'abcd' | substr('abcd',0,2) | substr('abcd',1,2) | substr('abcd',2,2) |
+--------+--------------------+--------------------+--------------------+
| abcd   | a                  | ab                 | bc                 |
+--------+--------------------+--------------------+--------------------+
+--------+--------------------+--------------------+--------------------+
| 'abcd' | substr('abcd',0,9) | substr('abcd',1,9) | substr('abcd',2,9) |
+--------+--------------------+--------------------+--------------------+
| abcd   | abcd               | abcd               | bcd                |
+--------+--------------------+--------------------+--------------------+
+--------+------------------+------------------+------------------+
| 'abcd' | substr('abcd',0) | substr('abcd',1) | substr('abcd',2) |
+--------+------------------+------------------+------------------+
| abcd   | abcd             | abcd             | bcd              |
+--------+------------------+------------------+------------------+

It seems that Y=0 denotes a fictitious empty position before the first 
one (Y=1). Is this the intended behaviour?

The documentation (https://www.sqlite.org/lang_corefunc.html#substr) 
says nothing about this specific pattern.


J-L Hainaut




Re: [sqlite] c program which performs the same function as the SQLite shell command ".import"

2018-01-18 Thread Jean-Luc Hainaut


True, "some" parts of "some" games can be implemented with DB 
technology, particularly matrix- and graph-based ones. Not only for fast 
storage and retrieval of game data, but, more interestingly, for 
implementing complex computation algorithms through SQL queries, that 
may prove faster than their expression in standard languages.


In a series of case studies I have developed to show that many problems 
can be elegantly and efficiently solved by a carefully designed DB 
schema + SQL queries, I have included three applications close to the 
game domain: text-based adventure games, Conway's cellular automata (aka 
"Game of life") and shortest path finding.


The text of these studies (all implemented in SQLite) are available on 
https://projects.info.unamur.be/~dbm/mediawiki/index.php/LIBD:Outils#SQLfast 
.


- text-based adventure games: download document "Case study: The book of 
which you are the hero"
- Conway's cellular automata: download document "Case study: Conway's 
Game of Life"
- shortest path problem:  download document "Case study: Path finders, 
rovers and Ariadne's thread".


Just my three cents!

J-L Hainaut


  Any practical realtime video game using SQLite is probably
doing so only to save and restore the game board between games.

and perhaps calculating the initial "maze" or other non time sensitive data
processing


Even a cursory look into production
quality video game development will tell you that a database is the wrong
technology to base a video game engine on.

Fully agree, I would use another library for that part.


Re: [sqlite] Emulate right-join

2017-12-06 Thread Jean-Luc Hainaut


Actually, the left outer join is sufficient to execute all the outer 
join operators:


- right outer join: just swap the "from" arguments

- full outer joins: union of left and right outer joins

Examples (classical "supplier-part-supply" example):

create table S(SN,NAME);
create table P(PN,COLOR);
create table SP(SN,PN,Q);
insert into S values ('S1','SMITH'),('S2','JONES'),('S3','BLAKE');
insert into P values ('P1','red'),('P2','green'),('P3','blue');
insert into SP values ('S1','P1',30),('S2','P1',30),('S2','P2',40),('S3','P4',10);


- Left outer join S --> SP --> P:
  --

select S.SN,S.NAME,SP.PN,P.COLOR,SP.Q
from S left join SP using (SN)
left join P  using (PN);

+----+-------+----+-------+----+
| SN | NAME  | PN | COLOR | Q  |
+----+-------+----+-------+----+
| S1 | SMITH | P1 | red   | 30 |
| S2 | JONES | P1 | red   | 30 |
| S2 | JONES | P2 | green | 40 |
| S3 | BLAKE | P4 | --    | 10 |
+----+-------+----+-------+----+
- Right outer join P --> SP --> S:
  --

select S.SN,S.NAME,P.PN,P.COLOR,SP.Q
from P left join SP using (PN)
   left join S  using (SN);

+----+-------+----+-------+----+
| SN | NAME  | PN | COLOR | Q  |
+----+-------+----+-------+----+
| S1 | SMITH | P1 | red   | 30 |
| S2 | JONES | P1 | red   | 30 |
| S2 | JONES | P2 | green | 40 |
| -- | --    | P3 | blue  | -- |
+----+-------+----+-------+----+

- Full outer join P <--> SP <--> S:
  ---

select S.SN,S.NAME,SP.PN,P.COLOR,SP.Q
from S left join SP using (SN)
   left join P  using (PN)
  union
select S.SN,S.NAME,P.PN,P.COLOR,SP.Q
from P left join SP using (PN)
   left join S  using (SN);

- Full outer join with a "union all" (may be faster but may include 
duplicates):

  -

select S.SN,S.NAME,SP.PN,P.COLOR,SP.Q
from S left join SP using (SN)
   left join P  using (PN)
  union all
select S.SN,S.NAME,P.PN,P.COLOR,SP.Q
from P left join SP using (PN)
   left join S  using (SN)
where Q is null;

+----+-------+----+-------+----+
| SN | NAME  | PN | COLOR | Q  |
+----+-------+----+-------+----+
| -- | --    | P3 | blue  | -- |
| S1 | SMITH | P1 | red   | 30 |
| S2 | JONES | P1 | red   | 30 |
| S2 | JONES | P2 | green | 40 |
| S3 | BLAKE | P4 | --    | 10 |
+----+-------+----+-------+----+



Re: [sqlite] Ordering a GROUP BY, or other concatenating tricks?

2017-10-04 Thread Jean-Luc Hainaut

On 04/10/2017 02:16, Simon Slavin wrote:

The differences between SQLite's and (among others) MySQL's versions of 
"group_concat" are a recurrent topic.
Since I often need to specify "distinct", "order by", "order direction" 
and "separator", I have written a simple UDF class that simulates the 
full MySQL version.
It is written in Python 2.7 through the standard SQLite3 interface but 
it should be easy to translate it into C:


https://www.dropbox.com/s/ilpx8duppbus8u3/group_concat2.py?dl=0

Hoping it will help!

Jean-Luc Hainaut


On 3 Oct 2017, at 11:13pm, Doug Nebeker  wrote:


How can I select a document and get the complete sorted text back in a single 
row (so I can do a JOIN on a different table with additional information)?

There is a way which will probably work but the documentation adds a careful 
note that it will not always work.

<https://sqlite.org/lang_aggfunc.html#groupconcat>

So you would want something like

SELECT group_concat(LineText, '\n') FROM
(SELECT LineText FROM DocLines
WHERE DocID = 10
ORDER BY LineIndex)

The problem is that the order of concatenation is arbitrary, in other words you 
shouldn’t rely on this working.

If you don’t want to use group_concat(), do it in your programming language.  
Use

SELECT LineText FROM DocLines
WHERE DocID = 10
ORDER BY LineIndex

and concatenate the retrieved values in your programming language.

Simon.


Re: [sqlite] group_concat() reverses order given where clause?

2017-08-16 Thread Jean-Luc Hainaut

On 15/08/2017 17:12, Bob Friesenhahn wrote:

I am surprised by this behavior of group_concat():


The implementation of SQLite "group_concat" (a very powerful but often 
overlooked function) provides some but not all the features found in 
other DBMS (MySQL "group_concat" and PostgreSQL "string_agg" for example):

- the values to concatenate (SQL string expression),
- the separator, default (comma) or user-defined (SQL string expression),
- a uniqueness contraint on the values of each group,
- the order of the values.

As far as I understand the specs, SQLite provides the first three, but 
with a frustrating constraint: you must choose between the uniqueness 
constraint and the user-defined separator, but you cannot have both.
The "order by" is badly needed, so programmers tend to use the 
workaround suggested in this thread: sorting the values in a "from" 
subquery. This is intuitive and works fine in the current version but, 
as said in the documentation, this order is not guaranteed to propagate 
to the concatenated list.


The uniqueness constraint can be enforced in a "from" subquery and the 
user-defined separator can be merged with the values to concatenate, 
followed by some cleaning. As I saw in various forums, it seems possible 
to force the ordering with a CTE (I have not checked). However this 
makes the final expression horribly complicated.
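For the record, the separator workaround mentioned above can look like this
(a sketch; T(Val) is a hypothetical table, and the values are assumed to
contain neither ',' nor ';'):

select rtrim(replace(group_concat(distinct Val || ';'), ';,', '; '), ';')
from   T;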


I personally have implemented (in Python) a UDF aggregate function that 
simulates the full group_concat version.  But it would be nice to 
include a full-fledged function (whatever the syntax) in a future SQLite 
version.

Why not in the Christmas version, for example?

Best regards

Jean-Luc Hainaut



Re: [sqlite] Trigger firing order

2017-07-22 Thread Jean-Luc Hainaut

On 21/07/2017 19:00, Simon Slavin wrote:


I’m minded to leave things as they are, with the order undefined.  If you 
really want to trigger a number of different operations in a specific order, 
put those operations all in one trigger, one after another.


Yes, this is now possible thanks to Peter's reminder that inserts can be 
conditional as well.



Alternatively, instead of having
...
have

Operation A
Trigger A1 on operation A performs operation B
Trigger B1 on operation B performs operation C
Trigger A3 on operation A performs operation D

That way you’ll know that you’ll get either ABCD or ABDC, but either way C will 
be executed after B.


Right, but not applicable in my special case: B is a global timestamp 
server that delivers unique timestamp Id's that are further used by 
several operations (e.g., closing the current state then creating a new 
current state). So trigger B1 does not know which operation will be 
performed next. Only triggers of type A1 know.



I have a question.  Would you expect to see depth-first or width-first 
recursion, and why ?


I suppose you refer to the order in which cascading triggers must be fired, 
as in the architecture mentioned above. Am I right?
Quite difficult question, to which I have no general answer (probably, 
there is none). In MY case (automatic management of transaction time 
temporal DB), I would say that a depth-first execution would be the most 
natural.
 If action A triggers actions B1 and B2, and action B1 triggers actions 
C11 and C12, I think that the final state of the data should be easier 
to understand if it results from sequence A.B1.C11.C12.B2, even if B2 
modifies the result of A.B1.C11.C12. I can imagine that the result of 
A.B1.B2.C11.C12 could lead, in some cases, to unexpected final states.
It seems that depth-first order is applied by Oracle (I don't know for 
others), though the protocol is fairly complicated by their "mutating 
table" concept, which often leads to convoluted programming.
The documentation is unclear but it seems to suggest that the 
"follows/precedes" clauses apply to select the next sibling trigger 
after the child triggers have completed. But I may be wrong!


Thanks for the suggestions and discussion

JL



Re: [sqlite] Trigger firing order

2017-07-22 Thread Jean-Luc Hainaut

On 21/07/2017 18:13, petern wrote:

a programming pattern that is missing in SQLite trigger bodies: "if (new.C1
<> old.C1) then <statement>".  It can be simulated for updates ("update
... where ... and new.C1 <> old.C1") but not for inserts ("insert" has no
"where" clause).

Maybe so.  But, INSERT can accept data rows from a SELECT statement which
both does have a WHERE clause and is aware of the trigger body variables.


Quite right! I didn't think of this.  Now, all individual triggers have been 
gathered into a single one.
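For the record, the pattern looks like this (a sketch with hypothetical names
T and LOG):

create trigger T_AU after update on T
for each row
begin
   insert into LOG(K, oldC1, newC1)
   select new.K, old.C1, new.C1
   where  new.C1 <> old.C1;   -- the WHERE clause plays the role of "if new.C1 <> old.C1"
end;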


Thanks

JL




[sqlite] Trigger firing order

2017-07-21 Thread Jean-Luc Hainaut

Hello all,

My question concerns the order (deterministic or not) in which triggers 
of the same kind (same table, same event, same position) fire.


My application is a temporal database in which each table stores the 
history of one attribute of a set of entities. A view collects the last 
value of each table to build the current state of these entities.


Let us simplify the problem as follows:
1. The DB comprises base tables T1(K,C1,..), T2(K,C2,..), T3(K,C3,..). 
Each table comprises primary key K + column Ci + some system data.
2. SQL view T(K,C1,C2,C3) collects, for each value of K, the last value 
from each base table.
3. Users update data through view T with such queries as "update T set 
C1='f', C3='g' where K=12"
4. "instead of" triggers translate this update into operations on those 
base tables that are affected by the update, here T1 and T3.

5. More precisely, the translation of this update proceeds in two steps:
5.1 a value "v" is computed and stored in a reference table 
(typically the current_date). This operation must be performed first.
5.2 for each base table affected, an "update" then an " insert" are 
performed, using value "v". These operations can be performed in any order.


Each operation is controlled by a trigger "instead of update on T".  The 
trigger of the first step is unconditional (no "when" clause). Each 
operation of step 2 is controlled by a trigger with a filter like "when 
new.C1 <> old.C1".
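
To make this architecture concrete, a minimal sketch (the STAMP table and the 
column names are illustrative, not the actual schema):

   -- Step 1: unconditional trigger, delivers a new timestamp id "v"
   create trigger T_upd_stamp instead of update on T
   begin
     insert into STAMP default values;
   end;

   -- Step 2: one conditional trigger per column, reusing the latest timestamp id
   create trigger T_upd_C1 instead of update on T
   when new.C1 <> old.C1
   begin
     update T1 set EndStamp = (select max(StampId) from STAMP)
     where  K = old.K and EndStamp is null;
     insert into T1(K, C1, StartStamp)
     values (new.K, new.C1, (select max(StampId) from STAMP));
   end;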


The problem is that the operation of step 1 MUST be performed BEFORE the 
operations of step 2. So, technically, the trigger of step 1 must fire 
before those of step 2.
In most DBMS (notably Oracle, DB2, PostgreSQL, SQL Server, InterBase), 
firing order of similar triggers can be specified, either explicitly or 
according to naming or creation time rules (no problem in MySQL: only 
one trigger of a kind is allowed!).
SQLite allows multiple triggers of the same kind, which is a very good 
thing, but its documentation tells nothing on the firing order issue, 
which seems to be interpreted as: "firing order is arbitrary".  This 
considerably limits its usefulness.


The usual responses to trigger ordering problems in forums are of two kinds:
- "Your schema probably is flawed. Fix it."
- "Gather all your triggers into a single one."

No, my schema is not flawed and gathering my triggers into a single one 
is impossible since it would require a programming pattern that is 
missing in SQLite trigger body: "if (new.C1 <> old.C1) then <statement>".  It can be simulated for updates ("update ... where ... and 
new.C1 <> old.C1") but not for inserts ("insert" has no "where" clause).


It seems that, in current SQLite implementations, trigger firing order 
is not quite arbitrary. In several tests I (and others) performed, 
triggers appear to always fire in "inverse creation time" order.


Hence my modest proposal: wouldn't it be a nice idea to make this 
unofficial order a feature of SQLite (just like DB2)? This would make 
the multiple triggers of a kind much more useful than they currently are.


Thanks for your attention

Have a nice day

Jean-Luc Hainaut



Re: [sqlite] Hierarchical Queries with Looping in SQLite DB

2017-07-06 Thread Jean-Luc Hainaut

On 06/07/2017 08:08, Ashif Ahamed wrote:



When there is some data bringing looping scenario  :

INSERT INTO FOLDER VALUES(2, 'Loop Data', 5);

After inserting this loop data , when i trigger the above recursive query
in  SQLite it keeps on running without bringing any results.

Note: In oracle database , this kind of scenario is handled by connect by
nocycle prior or cycle column set is_cycle to '1' default '0'

How to handle this scenario in SQLite?

Can anyone look on into this..


You could use one of the three techniques suggested in this document.

https://www.dropbox.com/s/5tsh7lod3oqv3h5/Tuto19-Recursive-programming.pdf?dl=0
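
If the document is not at hand, one generic way to stop cycles (a sketch only, 
assuming columns FOLDER(id, name, parent); it may differ from the document's 
three techniques) is to carry the visited path in the recursive CTE:

   with recursive walk(id, path) as (
      select id, '/' || id || '/'
      from   FOLDER
      where  parent is null
     union all
      select F.id, W.path || F.id || '/'
      from   FOLDER F join walk W on F.parent = W.id
      where  instr(W.path, '/' || F.id || '/') = 0   -- node already on the path: stop this branch
   )
   select * from walk;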

J-L Hainaut



Re: [sqlite] Providing incrementing column to query

2017-06-25 Thread Jean-Luc Hainaut


Some suggestions, the validity of which depends on the context and 
whether some ordering must be preserved.
Let T(C1 primary key,other_data) be the table with which we want to 
associate a sequence number (as pseudo column Seq).


Technique 1: Computing Seq through a recursive CTE.

create table T0 as
with SEQUENCE(Seq,C1,other_data) as
     (select 1, C1, other_data
      from   (select C1, other_data
              from   T order by C1 limit 1)
     union
      select S.Seq+1, T.C1, T.other_data
      from   T, SEQUENCE S
      where  T.C1 = (select min(C1)
                     from   T
                     where  C1 > S.C1)
     )
select * from SEQUENCE;

Technique 2: Extracting rowid from source table (unordered)

create temp table T1(Seq integer,C1,other_data);
insert into T1
   select rowid as Seq,* from T order by C1;

Technique 3: Extracting rowid from a temp table (ordered)

create temp table T2 as
   select 0 as Seq,* from T order by C1;
update T2 set Seq = rowid;

Technique 4: Adding auto incremented column in a temp table

create temp table T3(Seq integer primary key autoincrement,C1,other_data);
insert into T3(C1,other_data) select * from T order by C1;

Technique 5: From declarative definition - suggestion of (C. Ladish)

create temp table T4 as
select (select count(*)
        from   T T2
        where  T2.C1 <= T1.C1) as Seq,
       C1,
       other_data
from   T T1 order by C1;

For small tables (10-100), the execution times are similar (around 1 
ms.), but for medium-size to large tables, some techniques are better.

For a source table T of 32,000 rows of 30 bytes, there are clear winners:
  - technique 1: 185 ms.
- technique 2: 24 ms.
- technique 3: 58 ms.
- technique 4: 17 ms.
- technique 5: 42,000 ms.

All the scores are linear wrt table size except the declarative one, which 
is quadratic.


Regards

Jean-Luc Hainaut







Re: [sqlite] Drop view “automatically” when its associated table is dropped?

2017-03-22 Thread Jean-Luc Hainaut

On 22/03/2017 11:33, 邱朗 wrote:

Hi,


Is there any way to drop view “automatically” when its associated table is 
dropped?
It seems no way to do. Then if I drop a table, is there any (easy) way to find 
views created based on it and I can drop view manually ?


A quick and dirty procedure:

sqlite> create table T(A,B,C);
sqlite> create view TA as select A,B from T;
sqlite> create view TB as select B,C from T;
sqlite> create view TAB as select A,B,C from TA natural join TB;
sqlite> select * from TAB;
sqlite> drop table T;
sqlite> select * from TAB;
Error: no such table: main.T

Also works for "drop view". The resulting error is quite easy to handle in any 
host language, like other exceptions.
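
To list candidate views before dropping a table, a rough check against 
sqlite_master can also help (a sketch; the match is purely textual, so it may 
over-match):

   select name
   from   sqlite_master
   where  type = 'view'
   and    sql like '%T%';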


J-L Hainaut
https://projects.info.unamur.be/~dbm/mediawiki/index.php/Accueil 




Re: [sqlite] About the performance of recursive WITH

2017-02-17 Thread Jean-Luc Hainaut
recursive.sql
TimeThis :Start Time :  Thu Feb 16 07:22:24 2017
TimeThis :  End Time :  Thu Feb 16 07:22:24 2017
TimeThis :  Elapsed Time :  00:00:00.093



timethis "sqlite64 graph.db < GRAPH-performance-iterative.sql"

TimeThis :  Command Line :  sqlite64 graph.db < GRAPH-performance-iterative.sql
TimeThis :Start Time :  Thu Feb 16 07:22:28 2017

SQLite 3.18.0 2017-02-15 22:36:15 58797e9bafa95709e0f706a15f42f93b409e2db5
.eqp on
.timer on

update GRAPH
set Level = null;
--EQP-- 0,0,0,SCAN TABLE GRAPH
Run Time: real 0.031 user 0.015625 sys 0.00

begin;
Run Time: real 0.000 user 0.00 sys 0.00

update GRAPH
set Level = 0
  where Parent is null;
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING INDEX Covering1 (Parent=?)
Run Time: real 0.000 user 0.00 sys 0.00

update GRAPH
set Level = 1
  where Parent in (select Child
 from GRAPH
where Level = 0);
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING INDEX Covering1 (Parent=?)
--EQP-- 0,0,0,EXECUTE LIST SUBQUERY 0
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING COVERING INDEX Covering4 (Level=?)
Run Time: real 0.000 user 0.00 sys 0.00

update GRAPH
set Level = 2
  where Parent in (select Child
 from GRAPH
where Level = 1);
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING INDEX Covering1 (Parent=?)
--EQP-- 0,0,0,EXECUTE LIST SUBQUERY 0
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING COVERING INDEX Covering4 (Level=?)
Run Time: real 0.000 user 0.00 sys 0.00

update GRAPH
set Level = 3
  where Parent in (select Child
 from GRAPH
where Level = 2);
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING INDEX Covering1 (Parent=?)
--EQP-- 0,0,0,EXECUTE LIST SUBQUERY 0
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING COVERING INDEX Covering4 (Level=?)
Run Time: real 0.000 user 0.00 sys 0.00

update GRAPH
set Level = 4
  where Parent in (select Child
 from GRAPH
where Level = 3);
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING INDEX Covering1 (Parent=?)
--EQP-- 0,0,0,EXECUTE LIST SUBQUERY 0
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING COVERING INDEX Covering4 (Level=?)
Run Time: real 0.016 user 0.015625 sys 0.00

update GRAPH
set Level = 5
  where Parent in (select Child
 from GRAPH
where Level = 4);
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING INDEX Covering1 (Parent=?)
--EQP-- 0,0,0,EXECUTE LIST SUBQUERY 0
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING COVERING INDEX Covering4 (Level=?)
Run Time: real 0.000 user 0.00 sys 0.00

update GRAPH
set Level = 6
  where Parent in (select Child
 from GRAPH
where Level = 5);
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING INDEX Covering1 (Parent=?)
--EQP-- 0,0,0,EXECUTE LIST SUBQUERY 0
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING COVERING INDEX Covering4 (Level=?)
Run Time: real 0.000 user 0.00 sys 0.00

commit;
Run Time: real 0.015 user 0.00 sys 0.00

select Level,
  count(*) as Number
 from GRAPH
group by Level;
--EQP-- 0,0,0,SCAN TABLE GRAPH USING COVERING INDEX Covering4
0|1
1|47
2|215
3|638
4|1010
5|729
6|50
Run Time: real 0.016 user 0.00 sys 0.00


TimeThis :  Command Line :  sqlite64 graph.db < GRAPH-performance-iterative.sql
TimeThis :Start Time :  Thu Feb 16 07:22:28 2017
TimeThis :      End Time :  Thu Feb 16 07:22:28 2017
TimeThis :  Elapsed Time :  00:00:00.124


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
On Behalf Of Jean-Luc Hainaut
Sent: Thursday, 16 February, 2017 05:57
To: SQLite mailing list
Subject: [sqlite] About the performance of recursive WITH

Hi,

This post concerns a strange (imho) behaviour of recursive WITH update
query as far as execution time is concerned.

I have created (in an "In memory" DB) a table that stores a set of nodes
arranged in a tree structure (actually the skeleton of the contents of a
Windows folder):

 create table GRAPH(Parent int, Child int, Level int);

Column "Level" indicates the depth of the node in the tree. The root
node has Level 0, its children Level 1, and so on.

2690 nodes have been inserted, with "Level" initialized to null:

 insert into GRAPH(Parent,Child) values (null,1); -- root node (no
parent)
 insert into GRAPH(Parent,Child) values (1,9);-- child of the
root node
 insert into GRAPH(Parent,Child) values (9,10);   -- grand-child of
the root node
 insert into GRAPH(Parent,Child) values (9,11);
 insert into GRAPH(Parent,Child) values (9,12);
 insert into GRAPH(Parent,Child) values (9,13);
 etc.

Considering the size of the table, no indexes have been created.
The distribution of nodes among the levels is fairly "normal":

 +---++
 | Level | Number |
 +---++
 | 0 | 1  |
 | 1 | 47 |
 | 2 | 215|
 

Re: [sqlite] About the performance of recursive WITH

2017-02-17 Thread Jean-Luc Hainaut
0


TimeThis :  Command Line :  sqlite64 graph.db < GRAPH-performance-recursive.sql
TimeThis :Start Time :  Thu Feb 16 07:22:24 2017
TimeThis :  End Time :  Thu Feb 16 07:22:24 2017
TimeThis :  Elapsed Time :  00:00:00.093



timethis "sqlite64 graph.db < GRAPH-performance-iterative.sql"

TimeThis :  Command Line :  sqlite64 graph.db < GRAPH-performance-iterative.sql
TimeThis :Start Time :  Thu Feb 16 07:22:28 2017

SQLite 3.18.0 2017-02-15 22:36:15 58797e9bafa95709e0f706a15f42f93b409e2db5
.eqp on
.timer on

update GRAPH
set Level = null;
--EQP-- 0,0,0,SCAN TABLE GRAPH
Run Time: real 0.031 user 0.015625 sys 0.00

begin;
Run Time: real 0.000 user 0.00 sys 0.00

update GRAPH
set Level = 0
  where Parent is null;
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING INDEX Covering1 (Parent=?)
Run Time: real 0.000 user 0.00 sys 0.00

update GRAPH
set Level = 1
  where Parent in (select Child
 from GRAPH
where Level = 0);
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING INDEX Covering1 (Parent=?)
--EQP-- 0,0,0,EXECUTE LIST SUBQUERY 0
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING COVERING INDEX Covering4 (Level=?)
Run Time: real 0.000 user 0.00 sys 0.00

update GRAPH
set Level = 2
  where Parent in (select Child
 from GRAPH
where Level = 1);
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING INDEX Covering1 (Parent=?)
--EQP-- 0,0,0,EXECUTE LIST SUBQUERY 0
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING COVERING INDEX Covering4 (Level=?)
Run Time: real 0.000 user 0.00 sys 0.00

update GRAPH
set Level = 3
  where Parent in (select Child
 from GRAPH
where Level = 2);
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING INDEX Covering1 (Parent=?)
--EQP-- 0,0,0,EXECUTE LIST SUBQUERY 0
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING COVERING INDEX Covering4 (Level=?)
Run Time: real 0.000 user 0.00 sys 0.00

update GRAPH
set Level = 4
  where Parent in (select Child
 from GRAPH
where Level = 3);
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING INDEX Covering1 (Parent=?)
--EQP-- 0,0,0,EXECUTE LIST SUBQUERY 0
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING COVERING INDEX Covering4 (Level=?)
Run Time: real 0.016 user 0.015625 sys 0.00

update GRAPH
set Level = 5
  where Parent in (select Child
 from GRAPH
where Level = 4);
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING INDEX Covering1 (Parent=?)
--EQP-- 0,0,0,EXECUTE LIST SUBQUERY 0
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING COVERING INDEX Covering4 (Level=?)
Run Time: real 0.000 user 0.00 sys 0.00

update GRAPH
set Level = 6
  where Parent in (select Child
 from GRAPH
where Level = 5);
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING INDEX Covering1 (Parent=?)
--EQP-- 0,0,0,EXECUTE LIST SUBQUERY 0
--EQP-- 0,0,0,SEARCH TABLE GRAPH USING COVERING INDEX Covering4 (Level=?)
Run Time: real 0.000 user 0.00 sys 0.00

commit;
Run Time: real 0.015 user 0.00 sys 0.00

select Level,
  count(*) as Number
 from GRAPH
group by Level;
--EQP-- 0,0,0,SCAN TABLE GRAPH USING COVERING INDEX Covering4
0|1
1|47
2|215
3|638
4|1010
5|729
6|50
Run Time: real 0.016 user 0.00 sys 0.00


TimeThis :  Command Line :  sqlite64 graph.db < GRAPH-performance-iterative.sql
TimeThis :Start Time :  Thu Feb 16 07:22:28 2017
TimeThis :  End Time :  Thu Feb 16 07:22:28 2017
TimeThis :  Elapsed Time :  00:00:00.124


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
On Behalf Of Jean-Luc Hainaut
Sent: Thursday, 16 February, 2017 05:57
To: SQLite mailing list
Subject: [sqlite] About the performance of recursive WITH

Hi,

This post concerns a strange (imho) behaviour of recursive WITH update
query as far as execution time is concerned.

I have created (in an "In memory" DB) a table that stores a set of nodes
arranged in a tree structure (actually the skeleton of the contents of a
Windows folder):

 create table GRAPH(Parent int, Child int, Level int);

Column "Level" indicates the depth of the node in the tree. The root
node has Level 0, its children Level 1, and so on.

2690 nodes have been inserted, with "Level" initialized to null:

 insert into GRAPH(Parent,Child) values (null,1); -- root node (no
parent)
 insert into GRAPH(Parent,Child) values (1,9);-- child of the
root node
 insert into GRAPH(Parent,Child) values (9,10);   -- grand-child of
the root node
 insert into GRAPH(Parent,Child) values (9,11);
 insert into GRAPH(Parent,Child) values (9,12);
 insert into GRAPH(Parent,Child) values (9,13);
 etc.

Considering the size of the table, no indexes have been created.
The distribution of nodes among the levels is fairly "normal":

 +---++
 | Level | Number |
 +---++
 |

[sqlite] About the performance of recursive WITH

2017-02-16 Thread Jean-Luc Hainaut

Hi,

This post concerns a strange (imho) behaviour of recursive WITH update 
query as far as execution time is concerned.


I have created (in an "In memory" DB) a table that stores a set of nodes 
arranged in a tree structure (actually the skeleton of the contents of a 
Windows folder):


   create table GRAPH(Parent int, Child int, Level int);

Column "Level" indicates the depth of the node in the tree. The root 
node has Level 0, its children Level 1, and so on.


2690 nodes have been inserted, with "Level" initialized to null:

   insert into GRAPH(Parent,Child) values (null,1); -- root node (no 
parent)
   insert into GRAPH(Parent,Child) values (1,9);-- child of the 
root node
   insert into GRAPH(Parent,Child) values (9,10);   -- grand-child of 
the root node

   insert into GRAPH(Parent,Child) values (9,11);
   insert into GRAPH(Parent,Child) values (9,12);
   insert into GRAPH(Parent,Child) values (9,13);
   etc.

Considering the size of the table, no indexes have been created.
The distribution of nodes among the levels is fairly "normal":

   +-------+--------+
   | Level | Number |
   +-------+--------+
   | 0     | 1      |
   | 1     | 47     |
   | 2     | 215    |
   | 3     | 638    |
   | 4     | 1010   |
   | 5     | 729    |
   | 6     | 50     |
   +-------+--------+

Now, I would like to compute the "Level" value of all nodes from their 
position in the tree. This task immediately suggests a recursive WITH 
update:


   with recursive
   HIERARCHY(FromID,ToID,Level) as
  (
  select Parent, Child, '0'
  from   GRAPH where Parent is null
 union all
  select H.ToID, G.Child, H.Level + 1
  from   HIERARCHY H, GRAPH G
  where  H.ToID = G.Parent
  )
   update GRAPH
   set Level = (select Level from HIERARCHY where ToID = GRAPH.Child);

When this query is executed by SQLite 3.16.2, the timer reports an 
execution time of 5.522 s.  Adding an index on "Parent" and one on 
"Child" just makes things worse (6.381 s.)


I find this figures quite high, so that I try an iterative technique, 
which is likely to be close to the execution strategy of the WITH 
statement (https://www.sqlite.org/lang_with.html). I translate it for 
the CLI shell as follows:


   update GRAPH set Level = 0 where Parent is null;

   update GRAPH set Level = 1
   where  Parent in (select Child from GRAPH where Level = 0);

   update GRAPH set Level = 2
   where  Parent in (select Child from GRAPH where Level = 1);

   update GRAPH set Level = 3
   where  Parent in (select Child from GRAPH where Level = 2);

   update GRAPH set Level = 4
   where  Parent in (select Child from GRAPH where Level = 3);

   update GRAPH set Level = 5
   where  Parent in (select Child from GRAPH where Level = 4);

   update GRAPH set Level = 6
   where  Parent in (select Child from GRAPH where Level = 5);

For this script, I get an execution time of 0.015 s., i.e., nearly 370 
times less!


Is there something wrong in my queries? Or is there an optimization 
trick for WITH queries by which one could approach the performance of 
the iterative version?


The scripts are available here: 
https://www.dropbox.com/s/23t4ycftlk0doy1/GRAPH-performance.zip?dl=0


Thanks for any advice

Jean-Luc Hainaut



Re: [sqlite] Group contiguous rows (islands)

2017-02-15 Thread Jean-Luc Hainaut

On 15/02/2017 18:34, E.Pasma wrote:


Hello,  the query below is simpler. May be slower. But looks pretty 
relational. Thanks, E Pasma.


create table T(date integer,test char(12));
insert into T
values (1,'clim'),(3,'clim'),(7,'amb'),(10,'amb'),(12,'xxx'),
(13,'clim'),(15,'clim'),(20,'clim'),(22,'amb'),(25,'amb');

select min(date) as fromdate, max(date) as enddate, test
from (-- get closest preceding different key
      select t.*, max(t2.date) as key2
      from   t
      left join t t2
        on   t2.date < t.date and t2.test <> t.test
      group by t.date
     )
group by key2;


Quite nice solution indeed!
For those who may feel uncomfortable with outer joins, the from clause 
could be written as a subquery:


from (select date, test, (select max(date)
                          from   t t2
                          where  t2.date < t.date
                          and    t2.test <> t.test) as key2
      from   t)


Thanks

J-L



Re: [sqlite] Group contiguous rows (islands)

2017-02-15 Thread Jean-Luc Hainaut


You could try this, inspired by classic algorithms of temporal databases:

create table T(date integer,test char(12));
insert into T
values (1,'clim'),(3,'clim'),(7,'amb'),(10,'amb'),(12,'xxx'),
(13,'clim'),(15,'clim'),(20,'clim'),(22,'amb'),(25,'amb');

create table TT(seq integer not null primary key autoincrement,date 
integer,test char(12));

insert into TT(date,test) select * from T order by date;

select T1.date, T3.date, T1.test
from   TT T1, TT T3
-- More efficient than "where T1.date <= T3.date"
where  T1.seq <= T3.seq
and    T1.test = T3.test
and    not exists(select * from TT where seq = T1.seq-1 and test = T1.test)
and    not exists(select * from TT where seq = T3.seq+1 and test = T3.test)
and    not exists(select *
                  from   TT T2
                  -- More efficient than "where T2.date between T1.date and T3.date"
                  where  T2.seq between T1.seq and T3.seq
                  and    T2.test <> T1.test);

Result:

+------+------+------+
| date | date | test |
+------+------+------+
| 1    | 3    | clim |
| 7    | 10   | amb  |
| 12   | 12   | xxx  |
| 13   | 20   | clim |
| 22   | 25   | amb  |
+------+------+------+

Working table TT is recommended to create an ordered sequence of rows in 
which "next" and "previous" rows are more easily described than in the 
source table. Avoid "order by" on views. It works in SQLite but it 
should not!


The idea is to identify maximal sequences of identical "test" values as 
follows:

- T1 denotes the first row of a sequence
- T3 the last row
- T2 any "disturbing" row lying between T1 and T3 but with a different 
value of "test"
- first "not exists" condition states that T1 must be the very first of 
the sequence: it must not be immediately preceded by a row with same 
value of "test"

- same for second "not exists" condition: T3 must be the last
- the third "not exists" condition states that there is no "disturbing" 
row between T1 and T3.


Valid if maximal sequences do not overlap. This query also detects 
single row sequences (e.g., 'xxx').

An index on TT.test may be useful to support T1*T3 join.

For large tables, an iterative procedure will be faster, though less 
elegant!


Regards

Jean-Luc Hainaut



Re: [sqlite] Bug using aggregate functions

2017-02-06 Thread Jean-Luc Hainaut


This is the way SQL (not only SQLite) interprets these queries.
Basically you ask for information about an empty set:
- count(*) = 0, as expected
- min(A) is undefined, which SQL translates into the 'null' value; since 
'null' is not 'nothing', you get a one-line result containing 'null'!
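
This is standard SQL behaviour. A quick way to see the difference, and to get 
an empty result when that is what is wanted:

   -- A bare aggregate over an empty set still yields exactly one row (holding null):
   select min(A) from TEST where B is null and A > 3;
   -- With a GROUP BY, an empty set yields no group, hence no row at all:
   select min(A) from TEST where B is null and A > 3 group by B;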


Regards

J-L Hainaut



Hi,

I discovered a bug using sqlite 3.15.2. It is simple
to reproduce. Try this SQL-s:

CREATE TABLE TEST(
A integer primary key,
B integer);

-- insert some test data
insert into TEST (A,B) values (1, 1);
insert into TEST (A,B) values (2, null);

-- check count(*)
select count(*)
from TEST
where B is null
  and A > 3;

-- bug sql using agg function (also with other functions: max, avg)
select min(A)
from TEST
where B is null
  and A > 3;

if you replace min(A) with * you get empty result set as expected
but with min or max or avg you get one record

Best Regards
Radovan Antloga




Re: [sqlite] 2 consecutive rises in value

2016-10-21 Thread Jean-Luc Hainaut
> I had a look at this and tried it without using the extra 
auto-increment field, using the table ROWID
> instead to check for consecutiveness. It would work great and a lot 
faster indeed if it wasn't for
> the multiple values on the same date. Problem with that is that if 
you group by date it may
> skip a ROWID, so it won't pick up that triple. The answer is just to 
clean that table up and
> clear these extra values on the same day. This will always need to be 
done in any case, so
> it might as well be done as a one off rather than in every select 
SQL. Thanks for that idea,

> it looks to me the fastest. RBS

With a complexity of O(N.logN) this algorithm is the second best to a 
pure sequential scan that compares the successive triples, which is in 
O(N). Practically, considering the locality of accesses for the join 
(the 3 rows to join most often are in the same page) and the small size 
of the primary index, the performance should be close to that of a 
sequential scan.


When you write "if it wasn't for the multiple values on the same date", 
do you mean that the values of (Id,Date) are not unique among the rows 
of TABLE1, so that one can find more than one row for the same values of 
(Id,Date)? In this case, how do you compute the unique Value of this 
couple (Id,Date)? Average, last one, largest one?
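
If, for instance, "largest one" were the right answer, the cleanup could be as 
simple as this sketch:

   -- Hypothetical cleanup keeping the largest Value per (Id, Date):
   create temp table TABLE1_CLEAN as
      select Id, Date, max(Value) as Value
      from   TABLE1
      group by Id, Date;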


JLH



Re: [sqlite] 2 consecutive rises in value

2016-10-20 Thread Jean-Luc Hainaut


What about this one?

create table TABLE1(Seq integer primary key autoincrement,
                    Id integer, Date date, Value integer);
insert into TABLE1(Id,Date,Value) values(2,'2004-06-23', 42), (...), ...;

select distinct T1.Id                                    -- only one per Id
from   TABLE1 T1, TABLE1 T2, TABLE1 T3                   -- very fast join on indexed rowid
where  T2.Seq = T1.Seq+1 and T3.Seq = T1.Seq+2           -- consecutive triples
and    T1.Id = T2.Id and T3.Id = T1.Id                   -- same Id
and    T1.Value < T2.Value and T2.Value < T3.Value;      -- ascending Values

Works if:
- the rows are inserted in ascending values of (Id,Date) (if they don't, 
just create and load a temp table with the sorted rows of TABLE1)

- and if the rows are unique on (Id,Date).

If rowid column Seq cannot be added to source TABLE1, load data in temp 
table TABLE2 with columns (Seq,Id,Date,Value).
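
A sketch of that loading step:

   create temp table TABLE2(Seq integer primary key autoincrement,
                            Id integer, Date date, Value integer);
   insert into TABLE2(Id, Date, Value)
      select Id, Date, Value from TABLE1 order by Id, Date;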


Probably not as fast as a purely procedural algorithm and less elegant 
than previous proposals,
but probably faster than previous proposals and more elegant than a 
purely procedural one!


J-L Hainaut



Re: [sqlite] Cyclic detection in recursive queries

2016-07-12 Thread Jean-Luc Hainaut

On 12/07/2016 13:59, New, Cecil (GE Aviation, US) wrote:

The best I have been able to come with is documented at:
http://stackoverflow.com/questions/32640043/cannot-detect-cyclic-data-in-an-sqlite-database/32719216#32719216

But a) it is ugly, b) performance impact of all the length(), replace() 
functions, c) if values end in similar strings, it probably won't work.

After some thought, I think the minimum that would solve this problem is to 
enhance the instr() function to either take a starting position to begin the 
search or to take an occurrence number to search for. Oracle's version of 
instr() does both of these (see 
https://docs.oracle.com/cd/B28359_01/olap.111/b28126/dml_functions_1103.htm)

Postgresql has a specific way of detecting loops, which would be even more 
robust. It is documented here:
https://www.postgresql.org/docs/9.1/static/queries-with.html
Three suggestions to solve this problem are described in Section 24.4 of 
the following document:


https://staff.info.unamur.be/dbm/Documents/Tutorials/SQLfast/SQLfast-Tuto24-Recursive-programming.pdf

Recursive triggers can be used as well (see Section 24.7).

J-L Hainaut



[sqlite] View workarounds

2016-05-23 Thread Jean-Luc Hainaut

As long as you don't try to modify data, a view just behaves like a base table. 
So, like in base tables, you can't extract, filter, sort, group by, etc. based 
on non-existing columns.

SQLite views are read-only, but modifying data through a view can be done with 
"instead of" triggers.

J-L Hainaut

>Hi,
>
>I have created some views in my database by joining multiple tables to pull
>out specific columns from these tables without having to remember the exact
>SQL and joins (easy repeatability). But it looks like I have misunderstood
>how views work and have run into some limitations when using these views. I
>was wondering if any of you have any workarounds for these limitations.
>
>1. I can't filter the view on any column that is not explicitly part of the
>SELECT clause of the view. These are columns that are part of the tables
>included in the view, but they are not in the SELECT statement, so I am not
>able say: SELECT * from myView where [column that is not part of the
>select] = 'myValue'. I am able to copy the SQL of the view and add that
>WHERE condition to its end, and it filters perfectly fine, but I can't use
>the view directly, I have to use the SQL of the view
>
>2. Similar, probably related: I can't order the view by any column that is
>not part of the SELECT clause of the view. Again, this is a column in a
>table included in the view, but the view itself does not include it in the
>SELECT, and so I can't sort by it.
>
>Is there something similar to a view in SQLite that I should be using
>instead to get around these? I don't want to keep using the query because
>it is long and complicated and I am afraid I will introduce errors into it
>when I try to modify it to add sorting and filtering. And I don't want to
>include these columns in my view because my view already includes some
>calculations based on these columns (for example, a cost field is output as
>a string with a leading $ sign, so I don't want to include the raw
>numerical column in the select, but I want to be able to filter and sort by
>that raw numerical value).
>
>I have a lot of experience with SQL, and have worked with MS Access
>extensively, so I am used to saving queries in the database and using them
>as needed.  MS Access does not have views, and saved queries are MS Access'
>alternative to views.  But they behave more like queries than SQLite
>views:  they give me access to all the columns in the tables involved, not
>just those in the SELECT clause.  Maybe I am just spoilt!
>
>Thank you in advance for your thoughts on this.
>
>Balaji Ramanathan



[sqlite] Computed column or create index on a view ?

2016-05-15 Thread Jean-Luc Hainaut

- No index on a view in SQLite (so far).

- A computed column can be maintained through appropriate triggers (here, "on 
insert" and "on update"); a sketch follows below. Efficient if you read data more than you modify them.

- Perhaps trying this: 
create table readings(...);
create index trg_cx on readings(timestamp,(20+(a+b)/(c+c)));
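
For the trigger-based option, a minimal sketch (it assumes a stored "pressure" 
column is acceptable):

   alter table readings add column pressure real;
   create trigger readings_ai after insert on readings
   begin
      update readings
      set    pressure = 20+(new.a+new.b)/(new.c+new.c)
      where  timestamp = new.timestamp;
   end;
   create trigger readings_au after update of a, b, c on readings
   begin
      update readings
      set    pressure = 20+(new.a+new.b)/(new.c+new.c)
      where  timestamp = new.timestamp;
   end;
   create index r_tp on readings(timestamp, pressure desc);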

J-L Hainaut


>I know that SQLite does not currently implement these things but I'm curious 
>if anyone else wants them and how hard they would be to implement.
>
>I have what you might consider to be a computed column.  You might imagine
>
>CREATE TABLE readings
>(timestamp TEXT PRIMARY KEY,
>a REAL, b REAL, c REAL)
>
>and I constantly need to evaluate
>
>pressure = 20+(a+b)/(c+c)
>
>What I really want from SQLite is to support computed columns.  I don't really 
>care which syntax is used but perhaps
>
>CREATE TABLE readings
>(timestamp TEXT PRIMARY KEY,
>a REAL, b REAL, c REAL,
>(20+(a+b)/(c+c)) AS pressure)
>
>... or perhaps ...
>
>CREATE TABLE readings
>(timestamp TEXT PRIMARY KEY,
>a REAL, b REAL, c REAL,
>pressure = (20+(a+b)/(c+c)))
>
>One can then, of course, do
>
>CREATE INDEX r_tp ON readings (timestamp,pressure DESC)
>
>That's my ideal.  Second choice would be to be able to create an index on a 
>VIEW:
>
>CREATE TABLE readings
>(timestamp TEXT PRIMARY KEY,
>a REAL, b REAL, c REAL);
>CREATE VIEW r_t_p (timestamp,pressure)
>AS SELECT timestamp,(20+(a+b)/(c+c)) FROM readings;
>CREATE INDEX r_tp ON VIEW r_t_p (timestamp, pressure DESC)
>
>At the moment I have to simulate the above abilities by creating both the VIEW 
>and an index with a calculated column independently, and even after that I 
>have to do two fetches to get the row of data I need.
>
>Surely I'm not alone in thinking that since SQLite now implements expressions 
>in indexes computed columns are a natural addition to SQLite at this time ?
>
>Simon.



[sqlite] "Distinct" and "Order by" in "group_concat"

2016-05-13 Thread Jean-Luc Hainaut
Hi,

I have two questions/observations/suggestions related to SQLite function 
"group_concat". 
This function is extremely useful since it replaces in an elegant and concise 
way quite tricky procedures. However, it misses two features of which I would 
like to discuss the work-around.

Let's consider table P(PiD,Age,City), giving, for a set of persons, their age 
and city. PiD is the PK.

create table P(PiD integer not null primary key,Age integer,City text);
insert into P values (1,35,'B'),(2,32,'B'),(3,35,'A'),(4,35,'B'),(5,32,'A');

+-----+-----+------+
| PiD | Age | City |
+-----+-----+------+
| 1   | 35  | B    |
| 2   | 32  | B    |
| 3   | 35  | A    |
| 4   | 35  | B    |
| 5   | 32  | A    |
+-----+-----+------+


1. The case of the DISTINCT clause in "group_concat" function

I want to extract a table that assigns to each age the list of cities in which 
some persons of this age live:

select Age,group_concat(City) as Cities
from P
group by Age;

+-----+--------+
| Age | Cities |
+-----+--------+
| 32  | B,A    |
| 35  | B,A,B  |
+-----+--------+

Duplicates can be removed through quantifier "distinct":

select Age,group_concat(distinct City) as Cities
from P
group by Age;

+-----+--------+
| Age | Cities |
+-----+--------+
| 32  | B,A    |
| 35  | B,A    |
+-----+--------+

Now, I would like to change the default separator (",") for ";". No problem 
"with duplicates":

select Age,group_concat(City,';') as Cities
from P
group by Age;

+-----+--------+
| Age | Cities |
+-----+--------+
| 32  | B;A    |
| 35  | B;A;B  |
+-----+--------+

But this does not work when I ask to reduce duplicates:

select Age,group_concat(distinct City,';') as Cities
from P
group by Age;

We get the error message: "DISTINCT aggregates must have exactly one argument". 
Clearly, the analyzer doesn't consider the second argument as the separator but 
as an additional element to concatenate, which is invalid (the documentation 
says "In any aggregate function that takes a single argument, that argument can 
be preceded by the keyword DISTINCT").

This can be solved, but at the cost of increased complexity (and perhaps 
execution time), by removing duplicates in a subquery "from" clause ...:

select Age,group_concat(City,';') as Cities 
from (select distinct Age,City
  from P)
group by Age;

+-----+--------+
| Age | Cities |
+-----+--------+
| 32  | B;A    |
| 35  | B;A    |
+-----+--------+

... or in a subquery in the select-list:

select Age,(select group_concat(City,';') 
from P
where Age = AP.Age
group by Age) as Cities
from P AP 
group by Age;

.. or even by replacing default ',' with ';' in the resulting concatenated 
list, provided no comma appears in the concatenated elements:

select Age,replace(group_concat(distinct City),',',';') as Cities
from P group by Age;

A more drastic solution would be to code a new, user-defined, aggregate 
function that simulates a "group_concat(distinct ...,separator)" function.


My question/suggestion
--
Is there a better way to reduce duplicates with the current version of SQLite? 
Could we imagine, in a future version of SQLite, an extension of "group_concat" 
with both "distinct" and user-defined separator? If we can, I will support it!


2. The case of the ORDER BY clause in "group_concat" function

My second concern is about the way to sort the items of the "group_concat" 
result, as is possible in some other DBMS. E.g., how to get such sorted lists 
in SQLite:

+-----+--------+
| Age | Cities |
+-----+--------+
| 32  | A,B    |
| 35  | A,B,B  |
+-----+--------+
or
+-----+--------+
| Age | Cities |
+-----+--------+
| 32  | A,B    |
| 35  | A,B    |
+-----+--------+

The usual technique (generally found on the web) to sort the lists is through a 
subquery "from" clause in which table P (or a projection of it) is pre-sorted 
on the elements to aggregate:

select Age,group_concat(City) as Cities 
from (select Age,City 
  from P
  order by City)
-- or (more coercive for the optimizer): order by Age,City)
group by Age;

This works fine in all the tests I have caried out. 
However, this technique relies on the fact that the order is preserved and is 
exploited when executing the outer query. 
Though this hypothesis seems quite natural, theoretically, it depends on 
multiple factors: size of the source data, indexes, other clauses of the source 
query that may require different row ordering, current strategies of the 
optimizer (that may evolve in next versions). In short, unless the hypothesis 
formulated above can be taken for certain and definitive in the long term, this 
technique may be felt unstable and unsecure.

My question
-
Can we be made sure that this form will always, in all cases, produce the 
desired element ordering in the concatenated list?

Thanks for your comments.

Jean-Luc Hainaut



[sqlite] How to get 1 row with no null columns

2016-05-11 Thread Jean-Luc Hainaut
At 12:26 11/05/2016, you wrote:
>All,
>
>Is there a simple way to find a row in a table where none of columns contain a 
>null value? For example:
>
>SELECT * FROM AnyTable WHERE (all columns IS NOT NULL) LIMIT 1;

select * from AnyTable where col1||col2||...||coln is not null limit 1;

Fine for SQLite but could fail in other RDBMS, in which numeric columns must be 
converted into characters: ...||cast(colj as char)|| ...

I see no way to write a generic, table-independent, query unless it is 
generated from the metadata of the table.
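
One way, where the table-valued pragma_table_info() is available, is to let 
SQLite generate the per-table query from its own metadata (a sketch):

   select 'select * from AnyTable where '
          || group_concat(name || ' is not null', ' and ')
          || ' limit 1;'
   from   pragma_table_info('AnyTable');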

J-L Hainaut




[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Jean-Luc Hainaut
At 14:43 06/05/2016, Simon Slavin wrote:

>On 6 May 2016, at 1:32pm, Stephan Buchert  wrote:
>
>> The largest database file has now grown to about 180 GB. I need to have
>> copies of the files at at least two different places. The databases are
>> updated regularly as new data from the satellites become available.
>> 
>> Having the copies of the file synced becomes increasingly tedious
>> as their sizes increase. Ideal would be some kind of
>> incremental backup/sync facility.
>
>Believe it or not, the fastest way to synchronise the databases is not to 
>synchronise the databases.  Instead you keep a log of the instructions used to 
>modify the database.  You might, for example, modify the library that you use 
>for INSERT, DELETE and UPDATE commands to execute those commands and also save 
>the command to another 'commandLog' table.  Or perhaps just append those 
>commands to a plain text file.
>
>Then instead of sending any data to the other sites you send this list of 
>commands to the other sites and have them execute them.
>
>Once you start implementing this you'll see that it's more complicated than I 
>have described but the text of your post suggests that you're a good enough 
>programmer to do it properly.

This does not need to be so complicated: the problem can be solved by three 
triggers (per table) that record in a log table the components of the three 
data modification commands. At definite points in time, the contents of this 
table are used to generate the SQL data modification commands for the other DB.
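
A minimal sketch of those three triggers for one hypothetical table T(Id, Val) 
(real replay logic of course needs more care):

   create table CHANGELOG(Ts text default current_timestamp,
                          Op text, Tbl text, Pk, Payload);
   create trigger T_log_i after insert on T begin
      insert into CHANGELOG(Op,Tbl,Pk,Payload) values ('I','T',new.Id,new.Val);
   end;
   create trigger T_log_u after update on T begin
      insert into CHANGELOG(Op,Tbl,Pk,Payload) values ('U','T',new.Id,new.Val);
   end;
   create trigger T_log_d after delete on T begin
      insert into CHANGELOG(Op,Tbl,Pk,Payload) values ('D','T',old.Id,null);
   end;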

J-L Hainaut




[sqlite] Is this a regression?

2016-04-24 Thread Jean-Luc Hainaut
Hello,

When executing a set of queries written some years ago (let's call it a 
"regression test"!), I found that one of them now fails with a strange message.
It executes correctly until version 3.8.5 (perhaps later) but fails from 
version 3.10 (perhaps earlier).

I have simplified the problem as follow:

- table P(PID) represents products,
- table D(PID,QTY) represents order details (PID identifies a product
  and QTY specifies the quantity ordered of this product). 

In SQL:

create table P(PID integer not null primary key);
create table D(PID integer not null references P,
   QTY integer not null);
insert into P values (1),(2),(3),(4);
insert into D values (1,5),(1,10),(3,6),(3,2),(4,12);

The following query computes, for each product, the sum of quantities ordered. 
It also includes quantity 0 for products not referenced by D:

select PID,TOTALQ
from (select PID, sum(QTY) as TOTALQ
  from   D
  group by PID
union
  select PID, 0 as TOTALQ
  from   P
  where  PID not in (select PID from D)
  )
order by PID;

As expected, it provides, through the SQLite3 shell:

1|15
2|0
3|8
4|12

The problem arises when we add a "where" clause involving computed column 
TOTALQ:

select PID,TOTALQ
from (select PID, sum(QTY) as TOTALQ
  from   D
  group by PID
union
  select PID, 0 as TOTALQ
  from   P
  where  PID not in (select PID from D)
  )
where TOTALQ < 10
order by PID;

With SQLite 3.10 and 3.12.2 the query fails with the message:

   "Error: misuse of agregate: sum()"

while with SQLite 3.8.5, it provides the correct answer:

2|0
3|8

Rewriting the "from" clause as a "with" query or creating a view (with and 
without the problematic "where" clause) then querying show the same behaviour.
It also appears that removing the second argument of the union "solves" the 
problem.

Has anybody observed this problem?

Thanks for future help

Jean-Luc Hainaut


Prof. Jean-Luc Hainaut
Faculté d'Informatique
University of Namur
Rue Grandgagnage, 21
B-5000 - Namur (Belgium)   
E-mail : jlhainaut at info.fundp.ac.be, jean-luc.hainaut at unamur.be
http://www.info.fundp.ac.be/libd


[sqlite] Autocommit in "with" query: bug or feature?

2014-07-06 Thread Jean-Luc Hainaut
Hi,

Context: Python 2.7.6, Windows XP, SQLite v3.8.5.

The following test program suggests that "with" queries automatically execute a 
commit, as if they were DDL statements. I hope this is a bug, otherwise, this 
side effect considerably reduces the interest of this query.

Best regards

Jean-Luc Hainaut


# -*- coding: UTF8 -*-
import sqlite3

def displayDBcontents():
query = "select * from PERSON"
c.execute(query)
print
for row in c: print '%-4s %-10s' % (row[0],row[1])

# We create and fill the STAFF database
conn = sqlite3.connect('STAFF.db')
c = conn.cursor()
c.execute("drop table if exists PERSON")
c.execute("create table PERSON (PID char(4),Name char(10))")
c.execute("insert into PERSON values ('p1','Smith'),('p2','Dermiez')")
conn.commit()

# We check the contents of table PERSON
displayDBcontents()

# We insert Jones and we check the contents of PERSON
c.execute("insert into PERSON values('p3','Jones')")
displayDBcontents()

# We execute a simple "with" query
c.execute("with CTE(A) as (values (1),(2)) select A from CTE")
print
for row in c: print row[0]

#  We cancel the last insertion (Jones should disappear)
#  and we check the contents of PERSON
conn.rollback()
displayDBcontents()

# Surprise: Jones still is in the DB

c.close()
conn.close()

Prof. Jean-Luc Hainaut
Faculté d'Informatique
University of Namur
Rue Grandgagnage, 21
B-5000 - Namur (Belgium)   
Phone (direct) : +32 (81) 72 49 96
Phone (secret.): +32 (81) 72 49 64
Fax: +32 (81) 72 49 67
E-mail : jlhain...@info.fundp.ac.be
http://www.info.fundp.ac.be/libd



[sqlite] Bug in "with" query

2014-06-21 Thread Jean-Luc Hainaut
Hi,

Context: Python 2.7.6, Windows XP, SQLite v3.8.5

The test program:

import sqlite3
conn = sqlite3.connect('ORDERS.db')
c = conn.cursor()

query = "with CTE1(ID,Name,City)"\
+ " as (select CustID,Name from CUSTOMER where City = 'London')"\
+ " select * from CTE1"

c.execute(query)

print c.description


Description of the bug:
- if the result set is not empty, c.description returns the list of column 
names, as expected.
- if the result set is empty, c.description returns None.

Not critical but annoying!

Anyway, warm congratulation for the implementation of "with"!

Jean-Luc Hainaut


Prof. Jean-Luc Hainaut
Faculté d'Informatique
University of Namur
Rue Grandgagnage, 21
B-5000 - Namur (Belgium)   
Phone (direct) : +32 (81) 72 49 96
Phone (secret.): +32 (81) 72 49 64
Fax: +32 (81) 72 49 67
E-mail : jlhain...@info.fundp.ac.be
http://www.info.fundp.ac.be/libd
