On 8-12-2014 14:58, Gwendal Roué wrote:

On 8 Dec 2014 at 14:48, RSmith <rsm...@rsweb.co.za> wrote:


On 2014/12/08 11:55, Gwendal Roué wrote:
Hi,

Unique indexes make some valid update queries fail.

Please find below the SQL queries that lead to the unexpected error:

-- The `books` and `pages` tables implement a book with several pages.
-- Page ordering is implemented via the `position` column in the pages table.
-- A unique index makes sure two pages do not share the same position.
CREATE TABLE books (
         id INT PRIMARY KEY
);
CREATE TABLE pages (
         book_id INT NOT NULL REFERENCES books(id) ON DELETE CASCADE ON UPDATE CASCADE,
         position INT
);
CREATE UNIQUE INDEX pagination ON pages(book_id, position);

-- Let's populate the tables with a single book and three pages:
INSERT INTO books VALUES (0);
INSERT INTO pages VALUES (0,0);
INSERT INTO pages VALUES (0,1);
INSERT INTO pages VALUES (0,2);

-- We want to insert a page between the pages at positions 0 and 1. So we have
-- to increment the positions of all pages after page 1.
-- Unfortunately, this query yields the error: "columns book_id, position are not unique".

UPDATE pages SET position = position + 1 WHERE book_id = 0 AND position >= 1;
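For anyone who wants to reproduce this without a sqlite3 shell, here is a minimal sketch using Python's built-in sqlite3 module (my own harness, not from the original post), running the exact schema and queries above:

```python
import sqlite3

# In-memory database with the schema and data from the post above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE books (id INT PRIMARY KEY);
    CREATE TABLE pages (
        book_id INT NOT NULL REFERENCES books(id)
            ON DELETE CASCADE ON UPDATE CASCADE,
        position INT
    );
    CREATE UNIQUE INDEX pagination ON pages(book_id, position);
    INSERT INTO books VALUES (0);
    INSERT INTO pages VALUES (0, 0);
    INSERT INTO pages VALUES (0, 1);
    INSERT INTO pages VALUES (0, 2);
""")

# The shift fails: as soon as row (0, 1) is rewritten to (0, 2), it
# collides with the not-yet-updated row (0, 2).
try:
    conn.execute(
        "UPDATE pages SET position = position + 1 "
        "WHERE book_id = 0 AND position >= 1"
    )
except sqlite3.IntegrityError as e:
    print("failed:", e)
```

The failed statement is rolled back by itself (default ON CONFLICT ABORT behavior), so the table is left untouched.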

NOT a bug... The moment you SET position to position + 1 for the first row the
query visits, SQLite tries to make that entry look like (0,2), and at that point
in time an entry (0,2) of course already exists.

Some engines allow you to defer constraint checking until the end of the
transaction (and you can do this for foreign-key references, though you are
cascading, which is fine). In SQLite the unique check is immediate and fails on
the duplicate attempted for the first row. The fact that the other record will
eventually be changed so that it no longer conflicts is irrelevant to the engine
under non-deferred checking.

Now that we have established it isn't a bug,

I'm new to this mailing list, and I won't try to push my opinion, which is:
yes, this is a bug, and it could be fixed without introducing any regression
(since fixing it would only cause currently-failing code to start running, and
that has never been a compatibility issue).

Thank you all for your support and explanations. The root cause has been found:
it lies in SQLite's constraint-checking algorithm. I have been able to find a
workaround that is good enough for me.

Now the subject deserves a rest until, maybe, someday, an SQLite maintainer who
is not attached to the constraint-checking algorithm fixes it.
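Gwendal's actual workaround is not shown in the thread; one common approach that works with immediate checking is to move the affected rows through a value range that cannot collide (here, negative positions) in two steps. A sketch, again with Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pages (book_id INT NOT NULL, position INT);
    CREATE UNIQUE INDEX pagination ON pages(book_id, position);
    INSERT INTO pages VALUES (0, 0), (0, 1), (0, 2);
""")

# Step 1: park the shifted rows at distinct negative positions.
# -(position + 1) keeps them unique among themselves and cannot
# collide with the untouched non-negative rows.
conn.execute(
    "UPDATE pages SET position = -(position + 1) "
    "WHERE book_id = 0 AND position >= 1"
)
# Step 2: flip the sign back; the target slots 2 and 3 are now free,
# so no row ever collides with another, whatever the visit order.
conn.execute(
    "UPDATE pages SET position = -position "
    "WHERE book_id = 0 AND position < 0"
)
# The gap at position 1 is open for the new page.
conn.execute("INSERT INTO pages VALUES (0, 1)")
conn.commit()

print(sorted(r[0] for r in conn.execute("SELECT position FROM pages")))
# -> [0, 1, 2, 3]
```

This costs a second UPDATE but never violates the unique index at any intermediate step.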

Have a nice day,
Gwendal Roué


It's not a bug; the manual documents that SQLite behaves this way

(https://www.sqlite.org/lang_update.html)
Optional LIMIT and ORDER BY Clauses

If SQLite is built with the SQLITE_ENABLE_UPDATE_DELETE_LIMIT compile-time option then the syntax of the UPDATE statement is extended with optional ORDER BY and LIMIT clauses as follows:
.....
The ORDER BY clause on an UPDATE statement is used only to determine which rows fall within the LIMIT. The order in which rows are modified is arbitrary and is *not* influenced by the ORDER BY clause.



_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
