Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-18 Thread Dan Kennedy

On 12/18/2014 02:41 PM, Paul wrote:

I want to confirm that the issue is fixed for me.
Thanks again, Dan!


Please ignore this update; the patch fixes this problem as well.


I want to add even more input for this issue.
I understand why there is an implicit savepoint when I remove a row from the 'parent' table.
But why is this also true for the 'child' table when I perform 'INSERT OR REPLACE'?
Removing the FK reference stops the journal growth. I don't understand...




At the end of the day my head was spinning so much that, having some ten different test cases,
I erroneously confirmed that 'INSERT OR REPLACE' was fixed as well, by running the wrong test.
But sadly it isn't. Here, I'll drop my test program again, for clarity.


I don't think there is an easy fix for this one. The statement journal 
is required, as SQLite may need to reinstate rows deleted by the REPLACE 
processing if the FK constraint fails.


To fix this properly, it probably needs to use a more sophisticated data 
structure than the statement journal. Which would complicate things 
some. But at the moment it seems like SAVEPOINT and very large 
transactions don't work well together.


Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-18 Thread Paul
Hi, Dan.

 I don't think there is an easy fix for this one. The statement journal 
 is required, as SQLite may need to reinstate rows deleted by the REPLACE 
 processing if the FK constraint fails.
 
 To fix this properly, it probably needs to use a more sophisticated data 
 structure than the statement journal. Which would complicate things 
 some. But at the moment it seems like SAVEPOINT and very large 
 transactions don't work well together.
 

I understand. I guess I'll have to stick to UPDATE-then-INSERT.
Thank you for taking the time.
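The UPDATE-then-INSERT fallback mentioned above can be sketched as follows (a minimal Python illustration; the parent/child schema and column names are hypothetical, not from this thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE parent(id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE child(id INTEGER PRIMARY KEY, "
             "pid INTEGER REFERENCES parent(id), val TEXT)")
conn.execute("INSERT INTO parent VALUES (1)")
conn.execute("INSERT INTO child VALUES (10, 1, 'old')")

def upsert(conn, cid, pid, val):
    # UPDATE first; only INSERT if no row was changed. Unlike INSERT OR
    # REPLACE, this never deletes rows, so it sidesteps the
    # REPLACE-triggered deletes discussed in this thread.
    cur = conn.execute("UPDATE child SET pid=?, val=? WHERE id=?",
                       (pid, val, cid))
    if cur.rowcount == 0:
        conn.execute("INSERT INTO child(id, pid, val) VALUES (?,?,?)",
                     (cid, pid, val))

upsert(conn, 10, 1, "new")    # existing row: plain UPDATE
upsert(conn, 11, 1, "fresh")  # new row: falls through to INSERT
rows = conn.execute("SELECT id, val FROM child ORDER BY id").fetchall()
# rows == [(10, 'new'), (11, 'fresh')]
```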

Just out of curiosity, I want to ask one more question.
How can the FK constraint fail if I am removing (replacing) a row from the 'child' table?



Best regards,
Paul


Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-18 Thread Dan Kennedy

On 12/18/2014 04:16 PM, Paul wrote:

I understand. I guess I'll have to stick to UPDATE-then-INSERT.
Thank you for taking the time.

Just out of curiosity, I want to ask one more question.
How can the FK constraint fail if I am removing (replacing) a row from the 'child' table?


The FK constraint can fail because a new row is being inserted into the 
child table. The reason statement rollback may be required is that 
any replaced rows will be removed before SQLite has a chance to figure 
out whether the INSERT actually does violate the FK constraint.
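A small illustration of that ordering (Python sketch with a hypothetical two-table schema; foreign key enforcement must be switched on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE parent(id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE child(id INTEGER PRIMARY KEY, "
             "pid INTEGER REFERENCES parent(id))")
conn.execute("INSERT INTO parent VALUES (1)")
conn.execute("INSERT INTO child VALUES (10, 1)")
conn.commit()

# REPLACE first deletes the conflicting child row 10, then attempts the
# insert. pid=999 has no parent, so the FK check fails and the statement
# is rolled back: the deleted row must be reinstated, which is what the
# statement journal is for.
try:
    conn.execute("INSERT OR REPLACE INTO child VALUES (10, 999)")
except sqlite3.IntegrityError:
    pass

row = conn.execute("SELECT pid FROM child WHERE id = 10").fetchone()
# row == (1,): the replaced row came back after the failed statement
```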


Dan.










Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-18 Thread RSmith


On 2014/12/18 12:03, Dan Kennedy wrote:

On 12/18/2014 04:16 PM, Paul wrote:

I understand. I guess I'll have to stick to UPDATE-then-INSERT.
Thank you for taking the time.

Just out of curiosity, I want to ask one more question.
How can the FK constraint fail if I am removing (replacing) a row from the 'child' table?


The FK constraint can fail because a new row is being inserted into the child table. The reason statement rollback may be required 
is because any replaced rows will be removed before SQLite has a chance to figure out whether the INSERT actually does violate the FK 
constraint.




Also - I think it depends on more factors: that may not be the only child table linking to that item in the parent, nor is the 
child table excluded from also being a parent to another table. Constraints may fail all over, and knowing all of this beforehand would 
require a much longer query prep, I would imagine; but the journal does the job.




Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-18 Thread Paul
  Just out of curiosity, I want to ask one more question.
  How can the FK constraint fail if I am removing (replacing) a row from the 'child' table?
 
 The FK constraint can fail because a new row is being inserted into the 
 child table. The reason statement rollback may be required is that 
 any replaced rows will be removed before SQLite has a chance to figure 
 out whether the INSERT actually does violate the FK constraint.
 

I see, thank you for the explanation, Dan.


Best regards,
Paul


[sqlite] Row filtering prior to aggregate function execution

2014-12-18 Thread Roland Martin
I need to filter the result set before aggregate functions are
performed. For example, a query with a GROUP BY clause produces a result
set of 5 rows with count() and sum(). For each of the 5 rows I need the
value of a single column to serve as input to a security check. If the
security check passes, the row is part of the final result set. If the
security check fails, the row is discarded and is not part of the final
result set. The final result set in the example could be 0-5 rows, and I
would like the result values of count() and sum() to be accurate.
Unfortunately I cannot query the security system to find out all valid
values and add these values to the WHERE clause. I have to give a value to
the security system and it returns pass or fail.

  

Based on the above, is there a callback or other mechanism I can use to
participate in the result set generation?

 

Thanks for the help - Roland Martin



Re: [sqlite] Row filtering prior to aggregate function execution

2014-12-18 Thread Clemens Ladisch
Roland Martin wrote:
 I have a need to filter the result set before aggregate functions are
 performed.

Use the WHERE clause.

 Unfortunately I cannot query the security system  to find out all valid
 values and add these values to the where clause. I have to give a value to
 the security system and it returns pass or fail.

Add a call to a user-defined function to the WHERE clause.
http://www.sqlite.org/c3ref/create_function.html
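For instance (a Python sketch; `security()` and the table contents here are hypothetical stand-ins for the external check, registered with the Python equivalent of sqlite3_create_function()):

```python
import sqlite3

ALLOWED = {"alice", "bob"}  # stand-in for the external security system

def security(owner):
    # Hypothetical pass/fail check: 1 = pass, 0 = fail.
    return 1 if owner in ALLOWED else 0

conn = sqlite3.connect(":memory:")
conn.create_function("security", 1, security)
conn.execute("CREATE TABLE t(owner TEXT, amount INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("alice", 10), ("bob", 20), ("mallory", 99)])

# Rows are filtered by security() BEFORE the aggregates run, so
# count() and sum() only see the rows that passed the check.
row = conn.execute(
    "SELECT count(*), sum(amount) FROM t WHERE security(owner)").fetchone()
# row == (2, 30)
```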


Regards,
Clemens


[sqlite] SQLITE wal file size keeps growing

2014-12-18 Thread Kushagradhi Bhowmik
I am writing continuously into a db file which has PRAGMA journal_mode=WAL
and PRAGMA journal_size_limit=0. My C++ program has two threads: one
reader (queries at 15-second intervals) and one writer (inserts at 5-second
intervals).

Every 3 min I pause insertion to run sqlite3_wal_checkpoint_v2()
from the writer thread with the mode parameter
SQLITE_CHECKPOINT_RESTART. To ensure that no active read operations are
going on at this point, I set a flag that checkpointing is about to take
place and wait for the reader to complete (the connection stays open) before
running the checkpoint. After the checkpoint completes I signal the reader
that it is okay to resume querying.

sqlite3_wal_checkpoint_v2() returns SQLITE_OK, with pnLog and pnCkpt
equal (around 4000), indicating the complete WAL file has been synced with the
main db file. So the next write should start from the beginning, according to the
documentation. However, this does not seem to be happening, as
subsequent writes cause the WAL file to grow indefinitely, eventually up to
some GBs.

I did some searching and found that readers can cause checkpoint
failure due to open transactions. However, the only reader I'm using
ends its transaction before the checkpoint starts. What else could be
causing the WAL file to keep growing?


Re: [sqlite] SQLITE wal file size keeps growing

2014-12-18 Thread GB
Readers do not need long-lasting transactions (if any at all), so I'd 
rather suspect your writer to be the culprit. Does it use long-lasting 
transactions? If so, make it commit the transaction before checkpointing.


regards
gerd


Re: [sqlite] Row filtering prior to aggregate function execution

2014-12-18 Thread Hick Gunter
SELECT ...,sum(...),count() FROM ... WHERE security(...) ...

With a user-defined function security().

-----Original Message-----
From: Roland Martin [mailto:rolandsmar...@gmail.com]
Sent: Thursday, 18 December 2014 17:09
To: sqlite-users@sqlite.org
Subject: [sqlite] Row filtering prior to aggregate function execution



___
 Gunter Hick
Software Engineer
Scientific Games International GmbH
FN 157284 a, HG Wien
Klitschgasse 2-4, A-1130 Vienna, Austria
Tel: +43 1 80100 0
E-Mail: h...@scigames.at





Re: [sqlite] SQLITE wal file size keeps growing

2014-12-18 Thread Dan Kennedy

On 12/19/2014 11:22 AM, Kushagradhi Bhowmik wrote:

I am writing continuously into a db file which has PRAGMA journal_mode=WAL
and PRAGMA journal_size_limit=0. My C++ program has two threads: one
reader (queries at 15-second intervals) and one writer (inserts at 5-second
intervals).

Every 3 min I pause insertion to run sqlite3_wal_checkpoint_v2()
from the writer thread with the mode parameter
SQLITE_CHECKPOINT_RESTART. To ensure that no active read operations are
going on at this point, I set a flag that checkpointing is about to take
place and wait for the reader to complete (the connection stays open) before
running the checkpoint. After the checkpoint completes I signal the reader
that it is okay to resume querying.


It shouldn't hurt, but you should not have to manage the readers that 
way. SQLITE_CHECKPOINT_RESTART should wait on readers as required to 
ensure that the next writer can write into the start of the wal file 
instead of appending. If SQLITE_CHECKPOINT_RESTART returns SQLITE_OK, 
the next writer should be able to restart the wal file.


If you register an sqlite3_wal_hook() callback it will be invoked to 
report the size of the wal file after each write transaction. Logging 
this information along with the checkpoint attempts and return codes 
might help to shed light on the problem.
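sqlite3_wal_hook() is C-API only. As a rough approximation of the same logging idea in Python (which does not expose the hook), the WAL file's size on disk can be sampled after each commit, alongside the checkpoint return values; the path and schema below are made up for the sketch:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal_log_demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t(x)")

sizes = []
for i in range(5):
    conn.execute("INSERT INTO t VALUES (?)", (i,))  # one commit per insert
    # Crude stand-in for the wal-hook's frame count: WAL size after commit.
    sizes.append(os.path.getsize(path + "-wal"))

busy, log, ckpt = conn.execute("PRAGMA wal_checkpoint(RESTART)").fetchone()
# sizes should grow with each commit; log == ckpt after a clean RESTART.
```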


Dan.








