[sqlite] Why operations with in-memory DB are much faster than with on-disk DB?

2018-10-18 Thread Yuri
I noticed that my DB import process is much slower when run on the DB on 
disk, vs. in memory. It reads files and runs a massive amount of 
inserts/updates.



Why is this? Is there any way to speed it up on disk, without using an in-memory DB?
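
(For reference, the usual knobs for speeding up an on-disk bulk import; a sketch, and whether they close the gap here is exactly the question:)

PRAGMA journal_mode = WAL;    -- cheaper commits than the default rollback journal
PRAGMA synchronous = NORMAL;  -- fewer fsyncs than the default FULL
BEGIN TRANSACTION;
-- ... the massive inserts/updates ...
COMMIT;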



Yuri




Re: [sqlite] Violated failed foreign key constraint delays the rest of transaction ; Some foreign key violations don't trigger the error at all

2018-08-02 Thread Yuri

On 8/2/18 3:46 PM, Keith Medcalf wrote:

You are required to have a UNIQUE index on the PARENT KEYS in a foreign key 
relationship.

Depending on whether the PARENT:CHILD is 1:1 or 1:N you need either a UNIQUE 
(1:1) or regular index on the child key.



Why is an index on the PARENT KEY in a foreign key relationship required for inserts? A missing index should slow down deletion of the target record of the FK, but insertions shouldn't need to check whether such a parent key is already present. Inserting on the referencing side of the FK checks whether the target exists: if it exists, the insertion succeeds; if not, it fails. It doesn't need to check whether another parent key already exists.
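
For reference, a minimal sketch of the indexing Keith describes (table and column names are illustrative):

CREATE TABLE parent(pk INTEGER PRIMARY KEY);           -- the PRIMARY KEY supplies the required UNIQUE index
CREATE TABLE child(ck INTEGER REFERENCES parent(pk));
CREATE INDEX child_ck ON child(ck);                    -- regular index for 1:N; make it UNIQUE for 1:1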



Yuri




Re: [sqlite] Violated failed foreign key constraint delays the rest of transaction ; Some foreign key violations don't trigger the error at all

2018-08-02 Thread Yuri

On 8/2/18 3:17 PM, Keith Medcalf wrote:

.lint fkey-indexes

and it will tell you what indexes you forgot to create that cause the issue you 
are seeing.
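
For reference, the check runs from the shell like this (the database name is illustrative):

$ sqlite3 my.db ".lint fkey-indexes"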



But this problem isn't about a missing index.


Yuri




Re: [sqlite] Violated failed foreign key constraint delays the rest of transaction ; Some foreign key violations don't trigger the error at all

2018-08-02 Thread Yuri

On 8/2/18 7:02 AM, David Raymond wrote:

So for your test script there, try inserting a record with the violation ID a 
little bit later and see if it suddenly speeds up again. In my Python version 
of your script it does indeed speed back up again once the outstanding 
violation is fixed.



The main problem is that this bug makes it difficult to handle bugs in our code. A bug causing an FK violation automatically wastes a lot of time before being detected, because the slowdown is in the range of 1000X. FK violations aren't out of the ordinary or abnormal; they are a valid outcome of some queries and should be handled reasonably.



Yuri




[sqlite] Violated failed foreign key constraint delays the rest of transaction ; Some foreign key violations don't trigger the error at all

2018-08-01 Thread Yuri
The attached testcase injects a foreign key violation into a long transaction. This makes the remainder of the transaction much slower, even though the foreign key is deferred and should only be checked at the end of the transaction.



While working on this testcase, I found that sometimes the foreign key violation doesn't trigger the error at all. Please change VIOLATION to 0 and observe that there is now no failure, though there should be one.



sqlite3-3.24.0 on FreeBSD 11.2


Yuri



---testcase---

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sqlite3.h>

void doSql(sqlite3 *db, const char *sql) {
  char *err_msg = 0;
  int rc = sqlite3_exec(db, sql, 0, 0, &err_msg);
  if (rc != SQLITE_OK) {
    fprintf(stderr, "SQL error: %s\n", err_msg);
    sqlite3_free(err_msg);
    sqlite3_close(db);
    exit(1);
  }
}

#define NROWS 10
#define VIOLATION 100

int main(void) {
  sqlite3 *db;

  char s[512];

  int rc = sqlite3_open(":memory:", &db);
  if (rc != SQLITE_OK) {
    fprintf(stderr, "Cannot open database: %s\n", sqlite3_errmsg(db));
    sqlite3_close(db);
    return 1;
  }

  doSql(db, "PRAGMA foreign_keys = ON;");

  printf("creating B ...\n");
  doSql(db, "create table b (id int PRIMARY KEY, name text)");

  printf("populating B ...\n");
  for (int i = 0; i < NROWS; i++) {
    sprintf(s, "insert into b values(%d, 'The name field for %d')", i, i);
    doSql(db, s);
  }

  printf("creating A ...\n");
  doSql(db, "create table a (id int PRIMARY KEY, name text, aid int not 
null, FOREIGN KEY(aid) REFERENCES a(id))");


  printf("populating A ...\n");
  doSql(db, "BEGIN TRANSACTION;");
  doSql(db, "PRAGMA defer_foreign_keys=ON;");
  for (int i = 0; i < NROWS; i++) {
    if (i % 1000 == 0)
      printf("...row#%d... (time=%ld)\n", i, time(0));
    sprintf(s, "insert into a values(%d, 'The name field for %d', %d)", i, i, i);
    doSql(db, s);
    // introduce the FK violation
    if (i == NROWS/4) {
      sprintf(s, "insert into a values(%d, 'The name field for %d', %d)", NROWS+i, i, NROWS+i+VIOLATION);
      doSql(db, s);
    }
  }
  doSql(db, "COMMIT TRANSACTION;");

  sqlite3_close(db);

  return 0;
}



Re: [sqlite] Why some options (ex. SQLITE_ENABLE_EXPLAIN_COMMENTS, SQLITE_ENABLE_DBPAGE_VTAB) are permanently enabled in Makefile.am, but are documented as user-defined?

2018-03-13 Thread Yuri

On 03/13/18 02:05, Clemens Ladisch wrote:

These options are used to compile the command-line shell (where you want
to have as many (debugging) features as possible), and to assemble the
amalgamation.



Having two sets of build options is quite confusing.


Yuri



[sqlite] Why some options (ex. SQLITE_ENABLE_EXPLAIN_COMMENTS, SQLITE_ENABLE_DBPAGE_VTAB) are permanently enabled in Makefile.am, but are documented as user-defined?

2018-03-13 Thread Yuri

Makefile.am has:
AM_CFLAGS = @THREADSAFE_FLAGS@ @DYNAMIC_EXTENSION_FLAGS@ @FTS5_FLAGS@ 
@JSON1_FLAGS@ @ZLIB_FLAGS@ @SESSION_FLAGS@ -DSQLITE_ENABLE_FTS3 
-DSQLITE_ENABLE_RTREE
sqlite3_CFLAGS = $(AM_CFLAGS) -DSQLITE_ENABLE_EXPLAIN_COMMENTS 
-DSQLITE_ENABLE_DBPAGE_VTAB -DSQLITE_ENABLE_STMTVTAB -DSQLITE_ENABLE_DBSTAT_VTAB

But all of these options (with -D) are documented here 
http://www.sqlite.org/compile.html as user-settable.
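
For anyone building from source: these are ordinary compile-time options, so (assuming a standard autoconf build) they can also be set explicitly, e.g.:

$ CFLAGS="-DSQLITE_ENABLE_EXPLAIN_COMMENTS -DSQLITE_ENABLE_DBPAGE_VTAB" ./configure && make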


Yuri



Re: [sqlite] SQLITE_ERROR ("SQL error or missing database") should be split into two codes: "SQL error" and "missing database"

2017-07-07 Thread Yuri

On 07/07/2017 12:04, Simon Slavin wrote:

I note that the "extended result code" feature is relevant:

<https://sqlite.org/c3ref/c_ioerr_access.html>

You’re proposing two error codes like

SQLITE_FILE_NODATABASE
SQLITE_FILE_NOTADATABASE

for a missing database file, and for one which has the wrong text in the magic 
header area.

I’d suggest some of my own:

SQLITE_API_BADARGUMENT
SQLITE_API_WRONGORDER
SQLITE_SQL_BADSYNTAX
SQLITE_SQL_WRONGORDER

Does anyone want to contribute others ?



There are some other problems in the error definitions. For example, what does SQLITE_FULL mean? How can a database be full? Is it really a disk-full condition?


> #define SQLITE_FULL        13   /* Insertion failed because database is full */


Also, what does

#define SQLITE_IOERR_SHORT_READ    (SQLITE_IOERR | (2<<8))

really mean? How is it different from the case when the database is corrupt/truncated? There is SQLITE_CORRUPT for that. A short read means EOF, and EOF in an unexpected place constitutes a corrupt database file.
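
For what it's worth, the extended result codes have to be opted into; a fragment, assuming db is an open handle:

sqlite3_extended_result_codes(db, 1);  /* deliver e.g. SQLITE_IOERR_SHORT_READ instead of plain SQLITE_IOERR */
int rc = sqlite3_exec(db, sql, 0, 0, 0);
if ((rc & 0xff) == SQLITE_IOERR) {
  /* the primary code is in the low byte; the high bytes carry the refinement */
}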


Yuri



[sqlite] SQLITE_ERROR ("SQL error or missing database") should be split into two codes: "SQL error" and "missing database"

2017-07-07 Thread Yuri

Currently SQLITE_ERROR stands for two very different errors:

> #define SQLITE_ERROR        1   /* SQL error or missing database */

It would make sense to have separate codes for them to avoid confusion, because these two errors really have nothing in common.
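
One way to tell the two cases apart today is at open time; a fragment, assuming path names the database file (omitting SQLITE_OPEN_CREATE makes a missing file an error instead of silently creating it):

sqlite3 *db;
int rc = sqlite3_open_v2(path, &db, SQLITE_OPEN_READWRITE, 0);
if (rc == SQLITE_CANTOPEN) {
  /* the database file is missing or unopenable, as opposed to an SQL error later on */
}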



Yuri




Re: [sqlite] No public bug tracker?

2017-03-20 Thread Yuri

On 03/20/2017 17:11, Simon Slavin wrote:

You can report bugs by posting here.  The development team will see them.

It sometimes takes a little time to get a reaction while they discuss things 
between themselves to figure out what to do.


Does this mean that this issue will be fixed:

> A few days ago I reported the problem with the configure script and thread-safety options. The configure option --enable-threadsafe only supports two values for SQLITE_THREADSAFE, 0 and 1, while the documentation says that the allowed values are 0, 1, 2. There was no answer, while this appears to be a valid problem.



?


Yuri



Re: [sqlite] No public bug tracker?

2017-03-20 Thread Yuri

On 03/20/2017 17:20, Joshua J. Kugler wrote:

If you go here:

https://www.sqlite.org/src/login

You can see you can log in (if you have an account), or:

"Visitors may enter anonymous as the user-ID with the 8-character hexadecimal
password shown below."



sqlite.org/src/login shows that I am logged in, but the bug site says I am not logged in. I am not sure whether this is intentional; I assumed it was.

Yuri



[sqlite] configure --enable-shared doesn't build shared libraries

2017-03-20 Thread Yuri

The configure file says:

>   --enable-shared[=PKGS]  build shared libraries [default=yes]

However, the shared library isn't built by default. This command:

$ ./configure --enable-shared && gmake

doesn't build it either.


Yuri




[sqlite] No public bug tracker?

2017-03-20 Thread Yuri
Your bug tracker http://www.sqlite.org/src/rptview?rn=1 doesn't accept public submissions.
A few weeks ago I reported (in this ML) the bug with a blob handle being invalidated by writes into unrelated tables.
The fossil timeline doesn't show this bug as fixed, and the bug site above doesn't show it either.


Is there anything to assure that this bug isn't forgotten?

A few days ago I reported the problem with the configure script and thread-safety options: --enable-threadsafe only supports two values for SQLITE_THREADSAFE, 0 and 1, while the documentation says that the allowed values are 0, 1, 2. There was no answer, while this appears to be a valid problem.


Isn't it better to have a separate bug report for every issue?

Yuri



[sqlite] How to set SQLITE_THREADSAFE=2 through the configure arguments?

2017-03-10 Thread Yuri
The documentation suggests that -DSQLITE_THREADSAFE has 3 valid values: 0, 1, 2. https://www.sqlite.org/threadsafe.html

But the configure script appears to only be able to set the values 0 and 1 with --enable-threadsafe. How do I set -DSQLITE_THREADSAFE=2? I think the configure script is missing this ability.
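
The only workaround I'm aware of is to bypass --enable-threadsafe and pass the define directly (standard autoconf behavior; this assumes nothing else in the build overrides it):

$ ./configure CFLAGS="-DSQLITE_THREADSAFE=2"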



Yuri



Re: [sqlite] Can sqlite3_blob_write be done through SQL?

2017-03-08 Thread Yuri

On 03/05/2017 10:19, Clemens Ladisch wrote:

or change the row IDs so that they are unique in the entire database.



Thanks for your suggestion. I followed it and made the rowids unique in table b, the one with a blob field.

But I've run into another problem: the commit of a subsequent unrelated transaction fails with error=5 (SQLITE_BUSY) if the previously opened sqlite3_blob object hasn't been closed.



This fails:

sqlite3_blob_open   // table b with rowids unique in db
sqlite3_blob_write  // table b
BEGIN
UPDATE              // table a
UPDATE              // table a
COMMIT              // FAILS SQLITE_BUSY!

This succeeds:

sqlite3_blob_open   // table b
sqlite3_blob_write  // table b
sqlite3_blob_close  // table b
BEGIN
UPDATE              // table a
UPDATE              // table a
COMMIT              // SUCCEEDS


Unique rowids were supposed to make blobs independent of updates in other tables, but it appears that there is still some dependency.

I am not sure whether this is the same bug or another one.


Yuri




Re: [sqlite] [BUG REPORT] Open blob is invalidated by the update to unrelated fields in the same row

2017-03-06 Thread Yuri

On 03/06/2017 01:00, Dominique Devienne wrote:

This is clearly documented in https://www.sqlite.org/c3ref/blob_open.html
though, so I'm afraid this is "by design". --DD



Even though this is documented, parts of this limitation don't appear reasonable. Updating an integer field in the same row shouldn't affect the blob field. Rows can be very large and shouldn't move when individual fields are updated.



Yuri



Re: [sqlite] Can sqlite3_blob_write be done through SQL?

2017-03-05 Thread Yuri

On 03/05/2017 09:49, Simon Slavin wrote:

On 5 Mar 2017, at 4:28pm, Yuri <y...@rawbw.com> wrote:


So, to be clear, blob writes can only be used when no other statements are 
executed in the middle? Because the latest testcase only changes the other 
table, not the table where the blob is.

I’m glad to see that someone else has identified this as a bug.  Whether you’ll 
be able to do what you want when the bug is fixed I can’t tell.  You’re going 
to have to wait for the developer team to fix the bug, then test again.



Thanks for your help!

Yuri


Re: [sqlite] Can sqlite3_blob_write be done through SQL?

2017-03-05 Thread Yuri

On 03/05/2017 08:22, Simon Slavin wrote:

Yes, as we already established it’s because you changed the values of other 
columns in the row.  I think you’re using SQLite’s blob calls in an unexpected 
way.  They’re not meant to be used as an ongoing editor.  Adding the kind of 
support you’ve asked for would involve slowing down the routines for all the 
existing users.  Not really worth it.


So, to be clear, blob writes can only be used when no other statements are executed in the middle? The latest testcase only changes the other table, not the table where the blob is.



I think you’re better off reading the existing BLOB value into memory using 
SELECT, editing it using standard memory-manipulation routines, then writing it 
back with an UPDATE when you’re ready.



This is very slow.


Yuri



Re: [sqlite] Can sqlite3_blob_write be done through SQL?

2017-03-05 Thread Yuri

On 03/05/2017 07:20, Simon Slavin wrote:

If you want to delay the ending of the transaction you have to put the BEGIN in 
before it, and then you must put the COMMIT in after it or it won’t be 
committed.


The second blob write fails even after adding BEGIN before 
sqlite3_blob_open and COMMIT after sqlite3_blob_close.



Yuri



Re: [sqlite] Can sqlite3_blob_write be done through SQL?

2017-03-05 Thread Yuri

On 03/05/2017 03:48, Keith Medcalf wrote:

Perhaps because you are committing the transaction?


There is no COMMIT statement there.


Yuri



Re: [sqlite] Can sqlite3_blob_write be done through SQL?

2017-03-05 Thread Yuri

On 03/04/2017 23:08, Clemens Ladisch wrote:

I think you can work around these problems by moving the blob value into
a separate table.



I thought so too. However, even updates to other tables invalidate the blob handle. Please see the example below.

The variable breakMe breaks the second sqlite3_blob_write call, even though the added update statement is for the other table.



Yuri


#include <stdio.h>
#include <stdlib.h>
#include "sqlite3.h"

int breakMe = 1;

char *exec_errmsg;

void execSql(sqlite3 *db, const char *sql) {
  int rc;
  if (SQLITE_OK != (rc = sqlite3_exec(db, sql, NULL, NULL, &exec_errmsg))) {
    fprintf(stderr, "Error while executing sql=%s, error=%d\n", sql, rc);
    exit(1);
  }
}

int main(int argc, char **argv) {
  sqlite3 *db = NULL;
  int rc;

  rc = sqlite3_open(argv[1], &db);
  if (SQLITE_OK != rc) {
    fprintf(stderr, "Can't open database %s (%i): %s\n", argv[1], rc, sqlite3_errmsg(db));
    exit(1);
  }

  execSql(db, "CREATE TABLE fragmented_data ( friend_id INTEGER NOT 
NULL, frags_id  INTEGER NOT NULL, message   BLOB, PRIMARY KEY ( 
friend_id, frags_id));");
  execSql(db, "CREATE TABLE fragmented_meta ( friend_id INTEGER NOT 
NULL, frags_idINTEGER NOT NULL, timestamp_first INTEGER NOT 
NULL, timestamp_last  INTEGER NOT NULL, frags_done  INTEGER NOT 
NULL, frags_num   INTEGER NOT NULL, message BLOB, PRIMARY 
KEY ( friend_id, frags_id));");


  execSql(db, "INSERT INTO fragmented_data (friend_id, frags_id, 
message) SELECT 0, 1488703606591, zeroblob(2070) WHERE NOT EXISTS 
(SELECT 1 FROM fragmented_meta WHERE friend_id=0 AND 
frags_id=1488703606591);");
  execSql(db, "INSERT INTO fragmented_meta (friend_id, frags_id, 
timestamp_first, timestamp_last, frags_done, frags_num) SELECT 0, 
1488703606591, 1488703647556, 1488703647556, 0, 2 WHERE NOT EXISTS 
(SELECT 1 FROM fragmented_meta WHERE friend_id=0 AND 
frags_id=1488703606591);");


  sqlite3_blob *blob1;
  rc = sqlite3_blob_open(db, "main", "fragmented_data", "message", 1, 1, &blob1);
  if (SQLITE_OK != rc) {
    fprintf(stderr, "Couldn't get blob handle (%i): %s\n", rc, sqlite3_errmsg(db));
    exit(1);
  }
  if (SQLITE_OK != (rc = sqlite3_blob_write(blob1, "1", 1, 0x10))) {
    fprintf(stderr, "Error writing to blob handle #1. rc=%d\n", rc);
    exit(1);
  }

  if (breakMe)
    execSql(db, "UPDATE fragmented_meta SET timestamp_last=max(timestamp_last,1488703647556), frags_done = frags_done+1 WHERE friend_id=0 AND frags_id=1488703606591;");

  if (SQLITE_OK != (rc = sqlite3_blob_write(blob1, "2", 1, 0x250))) {
    fprintf(stderr, "Error writing to blob handle #2. rc=%d\n", rc);
    exit(1);
  }

  sqlite3_blob_close(blob1);
  printf("Successfully wrote to blob\n");
  sqlite3_close(db);
  return 0;
}



Re: [sqlite] Can sqlite3_blob_write be done through SQL?

2017-03-04 Thread Yuri

On 03/04/2017 01:22, Clemens Ladisch wrote:

Do you have a concrete example for such a statement?



Yes. I need to re-assemble a large data packet that arrives in portions. I would like to use a blob field for this. I don't want to store the parts in separate records, because this is essentially one file.

So far SQLite is failing at this, because the blob handle returned by sqlite3_blob_open is invalidated by updates to the other fields, and sqlite3_blob_open/sqlite3_blob_close are too slow to call on every fragment.


I would like to maintain the record like this:

table file {
  id         integer,
  num_done   integer,
  num_total  integer,
  data       blob
}

File metadata and data are all in one record. The blob handle gets invalidated when the metadata is updated, and it's too expensive to open/close the blob every time.

Due to these two problems SQLite can't support this, IMO reasonable, use case.



Yuri



Re: [sqlite] Not a Bug -- works as documented: [BUG REPORT] Open blob is invalidated by the update to unrelated fields in the same row

2017-03-04 Thread Yuri

On 03/04/2017 00:08, Keith Medcalf wrote:

If the row that a BLOB handle points to is modified by an UPDATE, DELETE, or by ON 
CONFLICT side-effects then the BLOB handle is marked as "expired". This is true 
if any column of the row is changed, even a column other than the one the BLOB handle is 
open on. Calls to sqlite3_blob_read() and sqlite3_blob_write() for an expired BLOB handle 
fail with a return code of SQLITE_ABORT. Changes written into a BLOB prior to the BLOB 
expiring are not rolled back by the expiration of the BLOB. Such changes will eventually 
commit if the transaction continues to completion.


What is the reason to expire the blob handle when unrelated fields are modified? I just don't see how this is reasonable behavior. The blob lives as a separate object, and a pointer to it should be able to stay valid unless its own field is modified.

Another bug is that reopening the blob every time is incredibly slow. Why is opening/closing the blob so much slower than writing? It also seems to get slower with the size of the data in the blob.


Yuri


[sqlite] [BUG REPORT] Open blob is invalidated by the update to unrelated fields in the same row

2017-03-03 Thread Yuri
A write operation using an open sqlite3_blob object fails after some other field in the same row is updated.


The testcase below illustrates the problem.


Yuri



---testcase---

#include <stdio.h>
#include <stdlib.h>
#include "sqlite3.h"

int main(int argc, char **argv) {
    if(argc < 3) {
        fprintf(stderr, "Usage: %s <database> <file-to-insert>\n", argv[0]);
        exit(1);
    }

// file to insert
FILE *f = fopen(argv[2], "rb");
if(NULL == f) {
fprintf(stderr, "Couldn't open file %s\n", argv[2]);
exit(1);
}
// Calculate size of file
fseek(f, 0, SEEK_END);
long filesize = ftell(f);
fseek(f, 0, SEEK_SET);

// Table name we're going to use
char tablename[] = "testblob";
char columnname[] = "blobby";

// Actual database handle
sqlite3 *db = NULL;

// Database commands
    char create_sql[1024];
    snprintf(create_sql, sizeof(create_sql), "CREATE TABLE IF NOT EXISTS %s ("
             "id INTEGER PRIMARY KEY, fld INTEGER, %s BLOB)", tablename, columnname);

    // Going to insert a zeroblob of the size of the file
    char insert_sql[1024];
    snprintf(insert_sql, sizeof(insert_sql), "INSERT INTO %s (%s) VALUES (?)", tablename, columnname);


// SQLite return value
int rc;
// Open the database
    rc = sqlite3_open(argv[1], &db);
    if(SQLITE_OK != rc) {
        fprintf(stderr, "Can't open database %s (%i): %s\n", argv[1], rc, sqlite3_errmsg(db));
        exit(1);
    }

    char *exec_errmsg;
    rc = sqlite3_exec(db, create_sql, NULL, NULL, &exec_errmsg);
    if(SQLITE_OK != rc) {
        fprintf(stderr, "Can't create table (%i): %s\n", rc, sqlite3_errmsg(db));
        sqlite3_close(db);
        exit(1);
    }
    sqlite3_stmt *insert_stmt;
    rc = sqlite3_prepare_v2(db, insert_sql, -1, &insert_stmt, NULL);
    if(SQLITE_OK != rc) {
        fprintf(stderr, "Can't prepare insert statement %s (%i): %s\n", insert_sql, rc, sqlite3_errmsg(db));
        sqlite3_close(db);
        exit(1);
    }
    // Bind a block of zeros the size of the file we're going to insert later
    sqlite3_bind_zeroblob(insert_stmt, 1, filesize);
    if(SQLITE_DONE != (rc = sqlite3_step(insert_stmt))) {
        fprintf(stderr, "Insert statement didn't work (%i): %s\n", rc, sqlite3_errmsg(db));
        exit(1);
    }

    sqlite3_int64 rowid = sqlite3_last_insert_rowid(db);
    printf("Created a row, id %i, with a blank blob size %i\n", (int)rowid, (int)filesize);


    sqlite3_blob *blob;
    rc = sqlite3_blob_open(db, "main", tablename, columnname, rowid, 1, &blob);
    if(SQLITE_OK != rc) {
        fprintf(stderr, "Couldn't get blob handle (%i): %s\n", rc, sqlite3_errmsg(db));
        exit(1);
    }

const int BLOCKSIZE = 1024;
int len;
void *block = malloc(BLOCKSIZE);
int offset = 0;
    while(0 < (len = fread(block, 1, BLOCKSIZE, f))) {
        if(SQLITE_OK != (rc = sqlite3_blob_write(blob, block, len, offset))) {
            fprintf(stderr, "Error writing to blob handle. Offset %i, len %i, rc=%d\n", offset, len, rc);
            exit(1);
        }
        offset+=len;
        printf("... wrote block @off=%d\n", offset);

        // update that breaks the following sqlite3_blob_write
        if(SQLITE_OK != (rc = sqlite3_exec(db, "UPDATE testblob SET fld=1", NULL, NULL, &exec_errmsg))) {
            fprintf(stderr, "Error while updating the integer field, error=%d\n", rc);
            exit(1);
        }
    }

sqlite3_blob_close(blob);

printf("Successfully wrote to blob\n");

free(block);

fclose(f);
sqlite3_finalize(insert_stmt);
sqlite3_close(db);
return 0;
}



Re: [sqlite] Can sqlite3_blob_write be done through SQL?

2017-03-03 Thread Yuri

On 03/01/2017 23:41, Clemens Ladisch wrote:


It would certainly be possible to add your own user-defined SQL function
to call sqlite3_blob_write().



I think this should be added to SQLite itself. Writing a portion of a blob is very much like updating a field. There is currently no way to do this efficiently in SQL. Such a command could be chained with other SQL statements and would make the client code much simpler.
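
A rough sketch of such a user-defined function, along the lines Clemens suggested (error handling trimmed; the name writeblob and its argument order are illustrative, not an existing API):

#include <sqlite3.h>

/* writeblob(table, column, rowid, offset, data): overwrite part of an existing blob in place */
static void writeblob(sqlite3_context *ctx, int argc, sqlite3_value **argv) {
  (void)argc;  /* registered with exactly 5 arguments below */
  sqlite3 *db = sqlite3_context_db_handle(ctx);
  const char *table   = (const char*)sqlite3_value_text(argv[0]);
  const char *column  = (const char*)sqlite3_value_text(argv[1]);
  sqlite3_int64 rowid = sqlite3_value_int64(argv[2]);
  int offset          = sqlite3_value_int(argv[3]);
  const void *data    = sqlite3_value_blob(argv[4]);
  int len             = sqlite3_value_bytes(argv[4]);

  sqlite3_blob *blob;
  int rc = sqlite3_blob_open(db, "main", table, column, rowid, 1 /*writable*/, &blob);
  if (rc == SQLITE_OK) {
    rc = sqlite3_blob_write(blob, data, len, offset);
    sqlite3_blob_close(blob);
  }
  if (rc != SQLITE_OK) sqlite3_result_error_code(ctx, rc);
  else sqlite3_result_int(ctx, len);
}

/* registered once per connection:
   sqlite3_create_function(db, "writeblob", 5, SQLITE_UTF8, 0, writeblob, 0, 0); */

Note that this sketch opens and closes a blob handle per call, so it demonstrates the SQL-level interface rather than avoiding the open/close cost.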



Yuri



[sqlite] Can sqlite3_blob_write be done through SQL?

2017-03-01 Thread Yuri

I have to call sqlite3_blob_write to partially modify blobs.


Wouldn't it be better if there were a function, for example writeblob(tbl.blob_fld, offset, portion_data)? SQLite could do the same thing sqlite3_blob_write does when the writeblob result is assigned to the same row/column that is supplied as the argument, and rewrite the whole blob otherwise.



Also, do sqlite3_blob_write calls participate in transactions?


Thanks,

Yuri




Re: [sqlite] Data loss during the disk full condition

2016-10-29 Thread Yuri

On 10/29/2016 15:15, Simon Slavin wrote:

I believe SQLite handled the failure correctly.  It returned SQLITE_FULL and it 
did not corrupt its database.  SQLITE_FULL is documented as an error code, not 
a warning, in section 1 of

<https://www.sqlite.org/rescode.html>


My program prints errors but doesn't terminate the process. For some reason Qt's db.lastError() doesn't print that text, but the SQLite library prints it to stderr:

> /usr/home/yuri/.config/in-mem-fs-to-test-disk-full: write failed, filesystem is full
> QSqlSqliteSettingsFormat: database error occurred during ' "update (key=key-X16)" ': QSqlError(-1, "", "")
> /usr/home/yuri/.config/in-mem-fs-to-test-disk-full: write failed, filesystem is full
> QSqlSqliteSettingsFormat: database error occurred during ' "update (key=key-X16)" ': QSqlError(-1, "", "")


So I didn't ignore them.




 From what you describe, your application didn't handle this error correctly, 
it ignored it.  At minimum the programmer needs to consider what to do with the 
data it was trying to write, given that it couldn't write it to the database.  
Even if that data isn't important, SQLite documentation has recommendations:

<https://www.sqlite.org/lang_transaction.html>

"It is recommended that applications respond to the errors listed above by 
explicitly issuing a ROLLBACK command. If the transaction has already been rolled back 
automatically by the error response, then the ROLLBACK command will fail with an error, 
but no harm is caused by this."

And of course, if the ROLLBACK command itself fails, the program needs to 
handle that.  Since it means that recovery from the error condition failed.



It opens the DB, applies updates without an explicit transaction, some of them fail, messages are printed, and the DB is closed. The DB should be left in a valid state. Is this not what is supposed to happen?



Yuri


Re: [sqlite] Data loss during the disk full condition

2016-10-29 Thread Yuri

On 10/29/2016 14:32, Simon Slavin wrote:

On 29 Oct 2016, at 10:21pm, Yuri <y...@rawbw.com> wrote:


I think you need to have such testcase: On the table with a lot of key/value 
pairs you run a set of updates and inserts. Random file operations should fail 
with some low probability with various error codes, like disk full.

You would have to simulate this on a wide range of platforms.  Because what each platform 
does under "disk full" conditions is different.


On a busy system one write operation might fail with disk-full while the rest are fine, because other processes write and delete data continuously. This is what I suspect: SQLite failed to handle one or a few such failures. The POSIX file API is the most popular and most OSes use it, but Windows is probably different.


Yuri



Re: [sqlite] Data loss during the disk full condition

2016-10-29 Thread Yuri

On 10/29/2016 13:06, Richard Hipp wrote:

>SQLite should preserve data in such case, it should fail operations that
>can't be performed, but the old data should absolutely stay intact.

That is what is supposed to happen - everything should be preserved.
This is tested extensively and there are no known problems in SQLite
that can cause the behavior you observe.  But perhaps you have come
across something nobody has yet seen or thought of.  (Or perhaps the
problem is with some other component other than SQLite - we don't know
yet.)

Do you have a way to reproduce the problem?  Do you have any
additional information that might help us to reproduce it here?



I spent all morning trying to reproduce the problem on an in-memory disk with a disk-full condition, and couldn't reproduce it. I didn't see any 'delete' operations issued by my program, as expected. But no row disappearance either.



I think you need a testcase like this: on a table with a lot of key/value pairs, run a set of updates and inserts while random file operations fail with some low probability and with various error codes, like disk-full. The test should check that every updated value holds either the old or the new value and nothing else, that every inserted value is either fully there or not there, and that values untouched by the queries stay intact. Nothing should disappear.


The tricky part might be simulating the file failures. But the sqlite library is essentially a converter between the SQLite API executing SQL statements and the file access API provided by the OS. You can't guarantee the behavior of a file API you don't control, so random failures in it should be handled properly.



Only if such a test passes every time, after a lot of tries with random file failures, can this be an assurance that there are no such bugs in SQLite.



Software often relies on certain behavior of the outside API, and when those calls once in a while behave differently, bugs like this one can surface.



What does "PRAGMA integrity_check" say about your database?


It returns "ok", but this file has been opened and written into again.

Yuri



Re: [sqlite] Data loss during the disk full condition

2016-10-29 Thread Yuri

On 10/29/2016 10:44, Simon Slavin wrote:

To minimise problems like the one you reported it needs to quit the program 
(preferably with a non-zero exit code) when it gets any unexpected result.


This particular application, an SQLite QSettings backend, can't quit on the disk-full condition, because the condition can be, and often is, transient. Quitting would also potentially mean losing data that could have been saved later.



I'm not certain that this would have completely prevented the problem you 
reported, but repeated attempts to make changes to a database with no free 
space aren't going to lead to anything good.

Another factor is whether you had free space when you tried to open the 
database after the faults.  Each time you use the SQLite API to open a database 
SQLite checks for to see if it was closed correctly last time.  If not, it 
tries to recover an uncorrupted database.  [1] But if the disk is full this 
will fail.  Though it shouldn't lead to further corruption.

[1] I don't know whether it checks to see whether there is space free first.



My program only issues deletes to mirror the in-app map deletions, so I am very confident it didn't simply delete all records.

That leaves SQLite/OS. The system with the full disk shut down; this sent a signal to the app, which caused some failed attempts to update rows in the SQLite table; then the system shut down. The question is what zeroed the file.

It is possible that the disk-full condition interjects between some open/read/write operations, so that some of them succeed and some subsequent ones fail, leaving something in an unexpectedly wrong state.



Yuri



Re: [sqlite] Data loss during the disk full condition

2016-10-29 Thread Yuri

On 10/29/2016 06:08, Simon Slavin wrote:

On 29 Oct 2016, at 8:05am, Yuri<y...@rawbw.com>  wrote:


>The application is synchronizing the in-memory key-value table with the disk 
one using insert/update/delete statements using that key. The in-memory table was 
full at the moment of reboot,

You're going to have to explain that.  There can be no such thing as an 
in-memory table when you reboot.  Booting involved having nothing in memory.


By "in-memory table" I mean the app-level STL std::map<string,string> 
that the app saves as an SQLite table. That map had data at the time of 
the app shutdown. The code in question has been working fine for over a 
year. 
https://github.com/yurivict/qt-additions/blob/master/QSqlSqliteSettingsFormat 
It is the SQLite Qt QSettings backend. I wrote it to specifically 
safeguard against this kind of situation. (Not in Qt yet.)



>[snip]
>
>SQLite should preserve data in such case, it should fail operations that can't 
be performed, but the old data should absolutely stay intact.

It should do.  But bad programming can prevent things from happening properly.

For instance, does the program which makes changes to the database check the 
result values returned by each operation ?  And does it correctly crash if the 
return value indicates something went wrong with the operation ?  Many 
quickly-written simple programs just proceed with the next operation.



It does check error codes and it prints warnings, but errors don't fail the overall app. It will keep updating/inserting again and again later.

I have now created a small in-memory disk with a disk-full condition for testing. The same program keeps printing 'update/insert failed' but it never issues 'delete' statements. I also can't reproduce the table erasure. I am still at a loss as to what went wrong and what erased the table in the file.



Yuri



[sqlite] Data loss during the disk full condition

2016-10-29 Thread Yuri
My disk was full when the system rebooted, and an important SQLite table got blanked at that time. The table is a simple key/value table with a primary key. The application synchronizes an in-memory key-value table with the on-disk one using insert/update/delete statements keyed on that primary key. The in-memory table was full at the moment of reboot, and there is no way it could have run any delete operations at all.

After the reboot the table still existed but was blank. The file was about the same size, but had large zeroed areas and no rows in this table. The other table still had rows.



How could this have happened, that data got lost like that? I was under the impression that SQLite is safe during such special conditions?

SQLite should preserve data in such a case: it should fail the operations that can't be performed, but the old data should absolutely stay intact.



sqlite-3.14.1 on FreeBSD.


Yuri



[sqlite] [BUG REPORT] Transaction that has the key violation is very slow

2016-09-20 Thread Yuri

I import bulk data into an SQLite DB, running 50k records per transaction.

When some bug or data inconsistency occurs and causes a key violation, the violation is reported only at the end of the transaction (this is okay and has been discussed before).

But I also notice that the transaction (a batch of 50k records) containing the failed key is much slower than one with no key violation.

I think this is a bug. Something slows the transaction down while a key violation is pending. It should either report the violation immediately (which it doesn't do) or keep going at the same speed. As it is now, it just slows down the process of finding the problem for no apparent reason.



Yuri




[sqlite] Why the parallel read of DB is faster with separate connections per thread?

2016-06-17 Thread Yuri
I have a fairly large DB that I need only to read (not write) as fast as possible. I open the DB with the flags SQLITE_OPEN_READONLY|SQLITE_OPEN_PRIVATECACHE and then run select queries in 8 threads.


When each thread opens its own connection, the DB is read in 8 wallclock seconds using 23 user seconds.

When the DB connection is shared by the threads, though, the process reads the same data in 17 wallclock seconds using 20 user seconds, much slower overall, killing the parallelism benefit.



Reusing the read-only connection for some reason makes the threads wait for each other too much. What makes the single connection slow the process down?
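
For reference, the per-thread-connection variant looks like this (a fragment; SQLITE_OPEN_NOMUTEX is my addition, reasonable only because each handle stays on one thread):

/* executed in each worker thread */
sqlite3 *db;
sqlite3_open_v2(dbPath, &db,
                SQLITE_OPEN_READONLY | SQLITE_OPEN_PRIVATECACHE | SQLITE_OPEN_NOMUTEX,
                0);
/* ... run this thread's SELECTs on the private handle, then sqlite3_close(db) ... */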



In an attempt to speed it up as much as possible, I tried first copying into a :memory: db, so that the threads would read only from memory, but this requires the shared connection and is slower.



sqlite3-3.12.2


Yuri



[sqlite] SQLite scans the largest table in the query where a very small table is available

2016-01-20 Thread Yuri
On 01/20/2016 09:21, Hick Gunter wrote:
> Have you tried running ANALYZE on a representative dataset? This will
> determine the "shape" of your tables and allow the query planner to make 
> better guesses of the costs associated with each join.
>
> If, after ANALYZE, you still feel the need to improve over the query 
> planners' ordering, you can still use CROSS JOIN to force a certain order of 
> tables.
ANALYZE helped, thanks!
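
For the record, the two remedies look like this, using the schema from the original post below (a sketch):

ANALYZE;   -- collect table/index statistics for the query planner

-- or force the join order explicitly:
SELECT h.h_id, w.t
FROM mt CROSS JOIN m CROSS JOIN h CROSS JOIN e CROSS JOIN w
WHERE mt.m_id = m.m_id AND m.h_id = h.h_id AND h.e_id = e.e_id AND e.w_id = w.w_id
GROUP BY e.w_id;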

Yuri


[sqlite] SQLite scans the largest table in the query where a very small table is available

2016-01-20 Thread Yuri
I have a select query over a linked chain of tables, with all needed indices present.
All but one of the tables (m, h, e, w) have row counts of ~300,000-700,000; the mt table has only ~150 rows.

Obviously, based on the row counts, the scan should begin from table mt.
However, SQLite actually scans beginning from m, and the query, returning only ~150 rows, runs for 2+ seconds.

Does the SQLite optimizer look at the row counts? (I think it does, because the plans are different when no rows are present.)
How do I correct the problem?

---schema.sql---
CREATE TABLE w
  (w_id  INTEGER PRIMARY KEY AUTOINCREMENT,
   t TEXT NOT NULL UNIQUE);
CREATE TABLE e
  (e_id  INTEGER PRIMARY KEY AUTOINCREMENT,
   w_id  INTEGER NOT NULL);
CREATE TABLE h
  (h_id  INTEGER PRIMARY KEY AUTOINCREMENT,
   e_id  INTEGER NOT NULL,
   FOREIGN KEY(e_id) REFERENCES e(e_id));
CREATE TABLE m
  (m_id INTEGER PRIMARY KEY AUTOINCREMENT,
   h_id  INTEGER NOT NULL,
   FOREIGN KEY(h_id) REFERENCES h(h_id));
CREATE TABLE mt
  (m_id  INTEGER PRIMARY KEY,
   FOREIGN KEY(m_id) REFERENCES m(m_id));

---query.sql---
select
   h.h_id,
   w.t
from
   w w,
   e e,
   h h,
   m m,
   mt mt
where
   e.w_id = w.w_id
   and
   h.e_id = e.e_id
   and
   m.h_id = h.h_id
   and
   mt.m_id = m.m_id
group by
   e.w_id;

The plan goes like this:
0|0|3|SCAN TABLE m AS m USING COVERING INDEX m_h_index
0|1|2|SEARCH TABLE h AS h USING INTEGER PRIMARY KEY (rowid=?)


Yuri



[sqlite] 'order by' doesn't work with 'group_concat()'

2016-01-04 Thread Yuri
Please consider this example:

---begin---
#!/bin/sh

DB=sq.sqlite
rm -f $DB

sql() {
   echo "$1" | sqlite3 $DB
}

sql "create table a(id integer not null, primary key(id));"
sql "create table b(oid integer not null, chr char null);"
sql "insert into a values(1);"
sql "insert into a values(2);"
sql "insert into b values(1,'y');"
sql "insert into b values(1,'x');"
sql "insert into b values(2,'x');"
sql "insert into b values(2,'y');"

sql "select
a.id,
(select group_concat(chr)
 from b
 where oid = a.id
 group by oid
 order by chr
)
  from
a;"
---end---

It returns this dataset:
1|y,x
2|x,y

The 'order by' clause doesn't work; if it did, the result would have been:
1|x,y
2|x,y
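
A common workaround is to sort in an inner subquery; note the documentation says the concatenation order of group_concat() is arbitrary, so even this is not guaranteed:

select
  a.id,
  (select group_concat(chr)
   from (select chr from b where oid = a.id order by chr))
from a;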

sqlite3-3.9.2

Yuri


[sqlite] How hard is it to add the constraint name to the 'FOREIGN KEY constraint failed' message?

2015-11-30 Thread Yuri
On 11/23/2015 07:55, James K. Lowden wrote:
> As a matter of fact, violation of UNIQUE & PK constraints is enforced
> rather too strictly in SQLite.  Any UPDATE statement that modifie more
> than one row of a UNIQUE column can fail because two rows may
> *transiently*  have the same value.  SQLite will fail the statement
> even though the completed transactation leaves the constraint
> unviolated.

It seems like it performs the checking at the level of an individual field write, not at the SQL statement level. I suggested earlier having "CONSTRAINT LEVEL [WRITE|STATEMENT|TRANSACTION];". Constraints currently seem to work at the level of field writes and transactions, not at the SQL statement level.

Yuri


[sqlite] How hard is it to add the constraint name to the 'FOREIGN KEY constraint failed' message?

2015-11-18 Thread Yuri
On 11/18/2015 15:33, Scott Hess wrote:
> What do you plan to use the error message for?  There is generally no
> interface contract with error messages, so you can't generally depend on
> them being readable by code.  They are helpful to the developer, but
> usually they just save you a few minutes figuring it out yourself.  I'm not
> hating on your few minutes, but saving a few milliseconds or bytes at
> runtime on a few million devices is worth a periodic few minutes of my
> time.  You can use SQL to generate pretty complicated schema and queries,
> so in the limit this kind of problem can be pretty deep to resolve!

I have a process that imports data with a quite complex structure. Every once in a while a constraint violation occurs (usually due to data irregularities). Even this simple matter can be complicated when you are busy with hundreds of other things and this DB schema isn't at the top of your mind. My concern is mostly development time and minimizing the number of things to remember. So if the message can be made more informative, I always vote for that.

> Maybe it would make sense to have some sort of
> SQLITE_WITH_EXTENDED_FK_ERRORS type define so that you can get a better
> edit/compile/run loop going in development, without being expensive in
> production.

Yes, such an option could very nicely become a package option (on BSD).

Yuri


[sqlite] How hard is it to add the constraint name to the 'FOREIGN KEY constraint failed' message?

2015-11-18 Thread Yuri
On 11/18/2015 09:55, R Smith wrote:
> There is no "first" constraint that can fail. There is a procession of 
> constraints either within a statement or within a transaction (both 
> can have many constraints) and as they are required, they are counted 
> up, and as they become resolved they are counted down. At the end the 
> result is simply either that "some constraint failed" or "All resolved 
> eventually".

Why not have two variants of the error message: one for an immediate failure, with the foreign key name, and one like now, for the complicated case of deferred constraints?

Yuri


[sqlite] How hard is it to add the constraint name to the 'FOREIGN KEY constraint failed' message?

2015-11-18 Thread Yuri
On 11/18/2015 01:17, Darren Duncan wrote:
>
> Deferred constraints are definitely a benefit.
>
> They allow you to express constraints otherwise not possible, for 
> example that a record may exist in table X if and only if a 
> counterpart exists in table Y, such as balancing records in a 
> double-entry accounting system.  Granted all you really need for this 
> is the ability to change multiple tables as a single atomic operation, 
> but failing that ability, deferred constraints are the way SQL 
> provides to do it. 

I agree they can be beneficial, but not in all cases; it depends on what you do. It would have been great if this were an option, e.g. "CONSTRAINT LEVEL [STATEMENT|TRANSACTION];".

Yuri


[sqlite] How hard is it to add the constraint name to the 'FOREIGN KEY constraint failed' message?

2015-11-18 Thread Yuri
On 11/18/2015 00:45, Dominique Devienne wrote:
> True. But that's in the case where FK constraints validation is deferred to
> the transaction end.

Why does SQLite defer constraint violation errors? Is there a benefit? This only complicates things; I would rather see the statement fail immediately.
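
For context, SQLite checks foreign keys immediately by default; deferral is opt-in, per constraint or per transaction (a sketch, names illustrative):

CREATE TABLE child(
  pid INTEGER REFERENCES parent(id) DEFERRABLE INITIALLY DEFERRED  -- checked at COMMIT
);
-- or, for the current transaction only:
PRAGMA defer_foreign_keys = ON;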

Yuri


[sqlite] How hard is it to add the constraint name to the 'FOREIGN KEY constraint failed' message?

2015-11-17 Thread Yuri

> It is a substantial change (basically a complete rewrite of the entire
> foreign key constraint mechanism) which would negatively impact both
> space and performance.

I think the argument is fallacious.

Don't keep the bag; keep only one integer ID of the first failed constraint. That's all users mostly care about.

// --- pseudo-code at the point of failure ---
if (!...check if constraint failed...) {
   // fast branch: the normal case when the constraint is satisfied; not impacted by the change at all
} else { // constraint failed
   // slow branch: failure is normally unexpected, so this is executed very rarely
   cntFailed++;
   if (!savedFailedID) savedFailedID = currentFailedID; // the only added line
}

You only need to modify the slow branch. This has practically zero performance impact; in any case it is exactly zero for the non-failure path.

Yuri


[sqlite] How hard is it to add the constraint name to the 'FOREIGN KEY constraint failed' message?

2015-11-17 Thread Yuri
This message always leaves the user wondering: "Which constraint?"

How hard is it to add this information to the message? Is it a matter of memorizing the ID of the failed constraint and then printing its name in the message?

Yuri


[sqlite] Easiest way to pass SQL query parameters in command line?

2015-11-05 Thread Yuri
On 11/05/2015 14:00, Clemens Ladisch wrote:
> Isn't the query itself passed through the command line?  Show some example.


In my case the script is like this:

#!/bin/sh
(cat $1 && echo ";") | sqlite3 my-db.sqlite

Command to run it:

./my-sql-run sqls/my-query.sql

I need to pass some parameter, like "select * from table where kind=%%MYPARAM%%".

Currently one choice is to add my own shell code to modify the query on the fly and substitute %%MYPARAM%% with an argument passed to the shell script. But I think it would be easier if the sqlite3 command itself could bind parameters: the SQL could contain "kind=?", and sqlite3 could have, for example, a --bind option to bind the supplied values. Especially considering that sqlite already supports prepared statements and binding.
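
A sketch of the on-the-fly substitution workaround mentioned above (the parameter arrives as the second command-line argument; quoting and escaping are ignored here):

#!/bin/sh
# my-sql-run: substitute %%MYPARAM%% before piping the query to sqlite3
sed "s/%%MYPARAM%%/$2/g" "$1" | sqlite3 my-db.sqlite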

Yuri



[sqlite] Easiest way to pass SQL query parameters in command line?

2015-11-05 Thread Yuri
I am looking for a way to have a parametrized query where the parameter is passed through the command line, much like a prepared statement with '?' parameters.

Is this possible? I can't find anything like this in the sqlite3 man page.
Another way would be if sqlite had a function to access environment variables, but such a function doesn't seem to exist either.

Yuri


[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-28 Thread Yuri
Thanks to everybody who made suggestions.

There was a legitimate constraint violation caused by a bug in the importing program. The program was erroneously inserting a zero integer into a field that is both the leading part of the primary key (possibly causing its non-uniqueness) and a foreign key (definitely causing its violation). This triggered the slowdown behavior. I can't understand exactly why, because the primary key should have failed immediately and the foreign key was deferred. But that's what happened.


[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-27 Thread Yuri
On 08/27/2015 19:45, Richard Hipp wrote:
> How many indexes on your data?
>
> Can you DROP your indexes for the insert, then CREATE INDEX them again
> after all the content is in place?

Only the indexes needed during the insert are in the DB: just 2 indexes, plus several "unique" constraints, also added for use by the inserts. Without the indexes/unique constraints it slows down, and the plans show table scans.

Yuri


[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-27 Thread Yuri
On 08/27/2015 19:29, Simon Slavin wrote:
> Your computer has a certain amount of free memory.  Once your database is 
> bigger than that size the computer has to keep moving parts of the database 
> into storage so it has room for the new data.
>
> I will guess that if you got more RAM in your computer you would have faster 
> operations for longer.


Good point.

My computer has 24GB of RAM, of which 4GB is free. At the time of the problem the size of the process is 325MB, and the size of the complete DB is ~250MB. So this isn't it.

What's worth mentioning, though, is that at the time of the event in question the size of the process increases by 5MB.


[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-27 Thread Yuri
I am building a fairly large DB, with major tables having 800k..5M rows and several relationships between the tables.

At ~30% into the insertion process it slows down rapidly. Records #171k..172k are still fast, while records #172k..173k are already ~10 times slower (all records are more or less similar).

During the insertion process, selects are also performed to obtain keys to insert into other tables.

I followed every advise I could find:
* Database is written into memory
* Whole insertion happens in one transaction, in one giant BEGIN TRANSACTION; ... END TRANSACTION; block (condensed in SQL below)
* Foreign keys are deferred: PRAGMA defer_foreign_keys=ON;
* Journal is disabled: PRAGMA journal_mode = OFF;
* Synchronous mode is disabled: PRAGMA synchronous = OFF;
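
Condensed into SQL, the setup amounts to this sketch (journal_mode can't be changed inside a transaction, and defer_foreign_keys resets at COMMIT, hence the ordering):

PRAGMA journal_mode = OFF;
PRAGMA synchronous = OFF;
BEGIN TRANSACTION;
PRAGMA defer_foreign_keys = ON;
-- ... all the prepared inserts/selects ...
END TRANSACTION;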

Plan for each "select" statement shows that it uses an index or primary 
key. Every insert statement is a simple insert "insert into xxx(x,x,x) 
values(?,?,?)" Selects are also all simple one-table selects. All 
statements used in prepared form.

How can I understand why the slowdown occurs? Especially, why the 
slowdown in so "sharp"?

Something drastic happens, like some strategy is recomputed, some index 
is rebuilt, etc...

Thank you,
Yuri


[sqlite] System.Data.SQLite.SQLiteConnection throwing NotSupportedException on WinCE 6.0 (CF 3.5)

2012-10-10 Thread Yuri Korolyov
Hello,

When using System.Data.SQLite under WinCE 6.0 + Compact Framework 3.5, it is not possible to get some of the SQLiteConnection properties, namely LastInsertRowId, MemoryUsed, and MemoryHighwater, because NotSupportedException is thrown. All three properties fail when calling native functions that return "sqlite3_int64" ("long" in .NET code), which cannot be marshaled by value using P/Invoke; so, I suppose, that is the reason for the exception.

Can we expect a fix for this issue in future releases of System.Data.SQLite?

Thanks in advance! 

Best regards,
Yuriy Korolyov

P.S. Thanks for all the good work guys. SQLite is amazing!


Re: [sqlite] Bug? LIMIT in compound statement with UNION ALL seems to affect only the first SELECT statement if it has ORDER BY

2010-10-07 Thread Yuri G
Yep, it turned out to be a bug (http://www.sqlite.org/src/info/38cb5df375).

Thanks for the info, Pavel! I didn't know that.

2010/10/6 Pavel Ivanov <paiva...@gmail.com>

> I can't say anything about your particular issue with the LIMIT
> clause, maybe that's a bug. But
>
> > Another solution is to use UNION instead of UNION ALL. But I can't use
> that,
> > because UNION does not respect ORDER BY in sub-statements (not sure if
> it's
> > a correct behavior).
>
> Do you know that SELECT ... FROM (SELECT ... ORDER BY ...) doesn't
> have to respect your ORDER BY clause? UNION ALL doesn't have to
> respect your ORDER BY clause either. So the fact that your query
> behaves exactly that you want it to is a random coincidence and you
> shouldn't rely on it. It's best for you to execute first select,
> retrieve all rows and then execute the second select to retrieve the
> remaining rows you need.
>
>
> Pavel
>
> On Wed, Oct 6, 2010 at 12:22 PM, Yuri G <groovy...@gmail.com> wrote:
> > Hi, everyone,
> >
> > This looks like a bug to me:
> >
> > --sql:
> >
> > CREATE TABLE t(a INTEGER);
> >
> > INSERT INTO "t" VALUES(1);
> > INSERT INTO "t" VALUES(2);
> > INSERT INTO "t" VALUES(3);
> > INSERT INTO "t" VALUES(4);
> >
> > SELECT * FROM (
> >  SELECT
> >a
> >  FROM t
> >  WHERE a<=2
> >  ORDER BY a)
> >
> > UNION ALL
> >
> > SELECT * FROM (
> >  SELECT
> >a
> >  FROM t
> >  WHERE a>2)
> >
> > LIMIT 1;
> >
> > --result:
> > 1
> > 3
> > 4
> >
> > --expected:
> > 1
> >
> > If I remove ORDER BY from the first SELECT, it gives the expected number
> of
> > rows. It looks like LIMIT limits only of the results of querying the
> first
> > SELECT statement because of ORDER BY.
> > Another solution is to use UNION instead of UNION ALL. But I can't use
> that,
> > because UNION does not respect ORDER BY in sub-statements (not sure if
> it's
> > a correct behavior).
> >
> > What I'm trying to do is get all names which match the search string. I
> need
> > to show all names starting with search string and then show all other
> > results which contain search string sorting results in each "group".
> > Something like this:
> >
> > SELECT * FROM
> > (
> > SELECT
> >  name
> > FROM names
> > WHERE name LIKE 'a%'
> > ORDER BY name
> > )
> >
> > UNION ALL
> >
> > SELECT * FROM
> > (
> > SELECT
> >  name
> > FROM all_patients
> > WHERE name LIKE '%a%' AND
> >  name NOT LIKE 'a%'
> > ORDER BY name
> > )
> > LIMIT 100
> >
> > In this case LIMIT does not work as expected.


Re: [sqlite] Bug? LIMIT in compound statement with UNION ALL seems to affect only the first SELECT statement if it has ORDER BY

2010-10-07 Thread Yuri G
Thanks, Igor. It works like a charm now.

2010/10/6 Igor Tandetnik <itandet...@mvps.org>

> Yuri G <groovy...@gmail.com> wrote:
> > This looks like a bug to me:
> >
> > --sql:
> >
> > CREATE TABLE t(a INTEGER);
> >
> > INSERT INTO "t" VALUES(1);
> > INSERT INTO "t" VALUES(2);
> > INSERT INTO "t" VALUES(3);
> > INSERT INTO "t" VALUES(4);
> >
> > SELECT * FROM (
> >  SELECT
> >a
> >  FROM t
> >  WHERE a<=2
> >  ORDER BY a)
> >
> > UNION ALL
> >
> > SELECT * FROM (
> >  SELECT
> >a
> >  FROM t
> >  WHERE a>2)
> >
> > LIMIT 1;
> >
> > --result:
> > 1
> > 3
> > 4
> >
> > --expected:
> > 1
>
> Looks like a bug to me, too.
>
> > What I'm trying to do is get all names which match the search string. I
> need
> > to show all names starting with search string and then show all other
> > results which contain search string sorting results in each "group".
>
> As a workaround, try something like this:
>
> SELECT name FROM names
> WHERE name LIKE '%a%'
> ORDER BY name NOT LIKE 'a%', name;
>
> --
> Igor Tandetnik
>
>


[sqlite] Bug? LIMIT in compound statement with UNION ALL seems to affect only the first SELECT statement if it has ORDER BY

2010-10-06 Thread Yuri G
Hi, everyone,

This looks like a bug to me:

--sql:

CREATE TABLE t(a INTEGER);

INSERT INTO "t" VALUES(1);
INSERT INTO "t" VALUES(2);
INSERT INTO "t" VALUES(3);
INSERT INTO "t" VALUES(4);

SELECT * FROM (
  SELECT
a
  FROM t
  WHERE a<=2
  ORDER BY a)

UNION ALL

SELECT * FROM (
  SELECT
a
  FROM t
  WHERE a>2)

LIMIT 1;

--result:
1
3
4

--expected:
1

If I remove the ORDER BY from the first SELECT, it gives the expected number of rows. It looks like the LIMIT applies only to the results of the first SELECT statement because of the ORDER BY.
Another option is to use UNION instead of UNION ALL, but I can't use that, because UNION does not respect ORDER BY in sub-statements (not sure whether that's correct behavior).

What I'm trying to do is get all names which match a search string. I need to show all names starting with the search string, and then all other results which contain the search string, sorting the results within each "group". Something like this:

SELECT * FROM
(
SELECT
 name
FROM names
WHERE name LIKE 'a%'
ORDER BY name
)

UNION ALL

SELECT * FROM
(
SELECT
  name
FROM all_patients
WHERE name LIKE '%a%' AND
  name NOT LIKE 'a%'
ORDER BY name
)
LIMIT 100

In this case LIMIT does not work as expected.