Re: [sqlite] Can sqlite3_blob_write be done through SQL?

2017-03-01 Thread Clemens Ladisch
Yuri wrote:
> I have to call sqlite3_blob_write to partially modify blobs.
>
> Wouldn't it have been better if there was a function, for example,
> writeblob(tbl.blob_fld, offset, portion_data)? SQLite could do the
> same that sqlite3_blob_write is doing when writeblob result is
> assigned to the same row/column that is supplied as the argument, and
> rewrite the whole blob otherwise?

The sqlite3_blob* interface is just an optimization, and it looks as if
it was included for some specific application.

It would certainly be possible to add your own user-defined SQL function
to call sqlite3_blob_write().
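
For illustration, here is a minimal sketch of such a user-defined function using Python's stdlib sqlite3 module (the `writeblob` name and signature are hypothetical, taken from Yuri's proposal; note that, unlike sqlite3_blob_write(), an UPDATE through a scalar function rewrites the whole value):

```python
import sqlite3

def writeblob(blob, offset, portion):
    # Overwrite `portion` bytes at `offset`; the whole blob is rebuilt.
    blob = bytes(blob)
    return blob[:offset] + portion + blob[offset + len(portion):]

conn = sqlite3.connect(":memory:")
conn.create_function("writeblob", 3, writeblob)
conn.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, b BLOB)")
conn.execute("INSERT INTO t VALUES (1, zeroblob(8))")
conn.execute("UPDATE t SET b = writeblob(b, 2, ?) WHERE id = 1",
             (b"\xff\xff",))
new = conn.execute("SELECT hex(b) FROM t WHERE id = 1").fetchone()[0]
print(new)  # 0000FFFF00000000
```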

> Also, do sqlite3_blob_write calls participate in transactions?

Of course they do.


Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database is locked

2017-03-01 Thread Hick Gunter
Not directly, but you can query the locks on the file as per documentation:

/* Fragment: v_fd is the database file descriptor; g_mode, v_mode and
** v_file come from the surrounding code. PENDING_BYTE, RESERVED_BYTE,
** SHARED_FIRST and SHARED_SIZE are SQLite's lock-byte definitions. */
struct flock v_pending;   /* = { F_WRLCK, SEEK_SET, PENDING_BYTE ,           1, 0}; */
struct flock v_reserved;  /* = { F_WRLCK, SEEK_SET, RESERVED_BYTE,           1, 0}; */
struct flock v_shared;    /* = { F_WRLCK, SEEK_SET, SHARED_FIRST , SHARED_SIZE, 0}; */
int v_fd  = -1;
int v_ret = -1;

memset( &v_pending, 0, sizeof( v_pending));
v_pending.l_type   = F_WRLCK;
v_pending.l_whence = SEEK_SET;
v_pending.l_start  = PENDING_BYTE;
v_pending.l_len    = 1;
v_pending.l_pid    = 0;

memset( &v_reserved, 0, sizeof( v_reserved));
v_reserved.l_type   = F_WRLCK;
v_reserved.l_whence = SEEK_SET;
v_reserved.l_start  = RESERVED_BYTE;
v_reserved.l_len    = 1;
v_reserved.l_pid    = 0;

memset( &v_shared, 0, sizeof( v_shared));
v_shared.l_type   = F_WRLCK;
v_shared.l_whence = SEEK_SET;
v_shared.l_start  = SHARED_FIRST;
v_shared.l_len    = SHARED_SIZE;
v_shared.l_pid    = 0;

/* check for a PENDING lock */
if (fcntl(v_fd, F_GETLK, &v_pending) == -1) {
    /* handle error */
}
/* check for a RESERVED lock */
if (fcntl(v_fd, F_GETLK, &v_reserved) == -1) {
    /* handle error */
}
/* check for a SHARED/EXCLUSIVE lock */
if (fcntl(v_fd, F_GETLK, &v_shared) == -1) {
    /* handle error */
}

if (v_pending.l_type == F_RDLCK)
    printf("%s File:%s, Process %d PENDING (SHARED)\n", g_mode[v_mode],
           v_file, (v_ret = v_pending.l_pid));

if (v_shared.l_type == F_RDLCK)
    printf("%s File:%s, Process %d SHARED\n", g_mode[v_mode],
           v_file, (v_ret = v_shared.l_pid));

switch (v_reserved.l_type) {
case F_WRLCK:
case F_RDLCK:
    printf("%s File:%s, Process %d RESERVED\n", g_mode[v_mode],
           v_file, (v_ret = v_reserved.l_pid));
    break;
default:
    break;
}

if (v_pending.l_type == F_WRLCK)
    printf("%s File:%s, Process %d PENDING (EXCLUSIVE)\n", g_mode[v_mode],
           v_file, (v_ret = v_pending.l_pid));

if (v_shared.l_type == F_WRLCK)
    printf("%s File:%s, Process %d EXCLUSIVE\n", g_mode[v_mode],
           v_file, (v_ret = v_shared.l_pid));

if (v_ret == -1)
    printf("%s File:%s, \n", g_mode[v_mode], v_file);

-Ursprüngliche Nachricht-
Von: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] Im 
Auftrag von Bob Friesenhahn
Gesendet: Mittwoch, 01. März 2017 21:45
An: SQLite mailing list 
Betreff: Re: [sqlite] Database is locked

On Wed, 1 Mar 2017, Stephen Chrzanowski wrote:

> Where is the database being stored?  What OS is the software running
> under?  If Windows, I'd suggest looking up SysInternals and
> downloading the package to see what has its grubby hands on the
> database.  If Linux, as root, run an LSOF (Lower case) and grep for
> the database or process accessing the database.

This is an embedded Linux system.  Due to available resources, lsof is not 
available, but we can learn about open file descriptors from the /proc 
filesystem (via 'ls -l /proc/[pid]/fd').
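
In the same spirit, a small helper that walks /proc to find which processes hold a given file open (Linux-only sketch; lsof does this more thoroughly):

```python
import os

def holders(path):
    """Best-effort list of PIDs with an open fd on `path` (Linux /proc)."""
    target = os.path.realpath(path)
    pids = []
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        fd_dir = os.path.join("/proc", pid, "fd")
        try:
            fds = os.listdir(fd_dir)
        except OSError:          # no permission, or process already exited
            continue
        for fd in fds:
            try:
                if os.path.realpath(os.path.join(fd_dir, fd)) == target:
                    pids.append(int(pid))
                    break
            except OSError:
                continue
    return pids
```

Run as root to see other users' processes, e.g. `holders("/var/lib/app/app.db")`.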

Many of our processes have a database connection open all the time.  I am 
assuming that the problem is a hung (not committed or rolled back) transaction.  
I am hoping that sqlite provides a way to know what process is currently 
performing an update transaction.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


___
 Gunter Hick
Software Engineer
Scientific Games International GmbH
FN 157284 a, HG Wien
Klitschgasse 2-4, A-1130 Vienna, Austria
Tel: +43 1 80100 0
E-Mail: h...@scigames.at





Re: [sqlite] Non-unique columns in unique index

2017-03-01 Thread Keith Medcalf

On Wednesday, 1 March, 2017 14:12, Deon Brewis  wrote:

> "But look how much space and processing time it would take up"
 
> Can you clarify what you mean by "space" ?
 
> The processing time argument I understand.

I doubt there is any "space" requirement at all.  In fact, since you are not 
carrying an extra index, the space requirement will be significantly reduced.  

Your only real concern would be that on INSERT and UPDATE operations the 
prepare would take a little longer since it would have to retrieve and 
generate the trigger code for the statement.  You pay the price during 
statement prepare, but I doubt that the actual statement execution time will be 
much affected (the execution would have to do a "lookup" in the unique index 
anyway to determine whether the operation meets the unique requirement), as you 
are merely substituting one operation for another like operation.  Of course, 
the internal unique check is done entirely inline, whereas the trigger method 
is done by running additional VDBE code -- though of course the overall time to 
run the trigger may approximate the time taken to maintain the extra index 
-- in which case you would save the space used by the extra index and have no 
(or only a minor) execution time penalty on insert/update operations.

> I think this is one of those things that if the database engine doesn't
> internally support it, it can't really be emulated.
> 
> Sure would be nice to have INCLUDE columns support (here and in other
> places).
> 
> - Deon
> 
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Simon Slavin
> Sent: Wednesday, March 1, 2017 12:57 PM
> To: SQLite mailing list 
> Subject: Re: [sqlite] Non-unique columns in unique index
> 
> 
> On 1 Mar 2017, at 8:21pm, David Raymond  wrote:
> 
> > The trigger version you asked about would look something like the below
> I believe. More risky than having the two indexes, but should work.
> (Famous last words)
> 
> I have no doubt it would work.  But look how much space and processing
> time it would take up.  Far simpler and clearer just to create the two
> indexes.
> 
> Simon.





Re: [sqlite] foreign key cardinality

2017-03-01 Thread Keith Medcalf
On Wednesday, 1 March, 2017 12:21, James K. Lowden  
wrote:

> Probably I'm overlooking something, and probably this is not the right
> forum.  SQLite adheres to the SQL standard; that's established.  Why
> does the standard say what it does?  I'm only saying it's not clear to
> me that there's theoretical justification.

I believe that Codd originally referred to this as the "Domain" requirement.  
That is, that a "Parent" specified a domain, and that each "child" (member) 
must be a part of that domain.  Hence the requirement for the "Domain" (Parent) 
entries to be unique whilst the Child (Member of domain) entries are not, yet 
have a referent (foreign key) to the relation specifying the domain.
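
SQLite enforces exactly this: the parent key of a foreign key must be a unique (or primary-key) column, and a reference to a non-unique parent column fails at runtime with a "foreign key mismatch" error. A small illustration using Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE parent(x INT)")   # x is NOT unique: no valid "domain"
conn.execute("CREATE TABLE child(y INT REFERENCES parent(x))")
try:
    conn.execute("INSERT INTO child VALUES (1)")
    msg = None
except sqlite3.OperationalError as e:
    msg = str(e)   # foreign key mismatch - "child" referencing "parent"
print(msg)
```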







Re: [sqlite] Non-unique columns in unique index

2017-03-01 Thread David Raymond
The extra space would just be the few bytes to store the text, which is nothing 
compared to the "extra" index size. And since it's "infrequent update" then it 
wouldn't be too bad for the extra time. The normal way would have an extra 
index to update, the trigger way would be less efficient. On an in-memory test 
of 10,000,000 updates, the trigger way took 14% more time, and only 2% more 
when I did it on disk. The "extra" unique index added 36% to the page count.

I will say that the "debugging nightmare" comment is 100% absolutely true 
though, and is why I'd personally bite the bullet and go with the unique 
keyword. I'm just pointing out how it would be done.



--just testing output below this line--



SQLite version 3.17.0 2017-02-13 16:02:40
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.

sqlite> create table foo
   ...> (
   ...>   Id integer primary key,
   ...>   UniqueCol int unique,
   ...>   ExtraCol int
   ...> );
Run Time: real 0.000 user 0.00 sys 0.00

sqlite> create index fooIndex on foo (UniqueCol, ExtraCol);
Run Time: real 0.000 user 0.00 sys 0.00

sqlite> explain query plan insert into foo values (1, 1, 1);
Run Time: real 0.000 user 0.00 sys 0.00

sqlite> explain insert into foo values (1, 1, 1);
addr  opcode p1p2p3p4 p5  comment
  -        -  --  -
0 Init   0 28000  Start at 28
1 OpenWrite  0 2 0 3  00  root=2 iDb=0; foo
2 OpenWrite  1 4 0 k(3,,,)00  root=4 iDb=0; fooIndex
3 OpenWrite  2 3 0 k(2,,) 00  root=3 iDb=0; 
sqlite_autoindex_foo_1
4 Integer1 1 000  r[1]=1
5 NotNull1 7 000  if r[1]!=NULL goto 7
6 NewRowid   0 1 000  r[1]=rowid
7 MustBeInt  1 0 000
8 SoftNull   2 0 000  r[2]=NULL
9 Integer1 3 000  r[3]=1
10Integer1 4 000  r[4]=1
11NotExists  0 13100  intkey=r[1]
12Halt   1555  2 0 foo.Id 02
13Affinity   2 3 0 DDD00  affinity(r[2..4])
14SCopy  3 6 000  r[6]=r[3]; UniqueCol
15SCopy  4 7 000  r[7]=r[4]; ExtraCol
16IntCopy1 8 000  r[8]=r[1]; rowid
17MakeRecord 6 3 500  r[5]=mkrec(r[6..8]); 
for fooIndex
18SCopy  3 10000  r[10]=r[3]; UniqueCol
19IntCopy1 11000  r[11]=r[1]; rowid
20MakeRecord 102 900  
r[9]=mkrec(r[10..11]); for sqlite_autoindex_foo_1
21NoConflict 2 23101  00  key=r[10]
22Halt   2067  2 0 foo.UniqueCol  02
23IdxInsert  1 5 6 3  10  key=r[5]
24IdxInsert  2 9 102  10  key=r[9]
25MakeRecord 2 3 12   00  r[12]=mkrec(r[2..4])
26Insert 0 121 foo31  intkey=r[1] data=r[12]
27Halt   0 0 000
28Transaction0 1 2 0  01  usesStmtJournal=0
29Goto   0 1 000
Run Time: real 0.031 user 0.00 sys 0.00



sqlite> create table bar
   ...> (
   ...>   Id integer primary key,
   ...>   UniqueCol int,
   ...>   ExtraCol int
   ...> );
Run Time: real 0.000 user 0.00 sys 0.00

sqlite> create index barIndex on bar (UniqueCol, ExtraCol);
Run Time: real 0.000 user 0.00 sys 0.00

sqlite> create trigger bar_trg_UniqueCol_insert
   ...> before insert on bar
   ...> for each row
   ...> when exists (select 1 from bar where UniqueCol = new.UniqueCol)
   ...> begin select raise(abort, 'UNIQUE constraint failed: bar.UniqueCol'); 
end;
Run Time: real 0.000 user 0.00 sys 0.00

sqlite> create trigger bar_trg_UniqueCol_update
   ...> before update of UniqueCol on bar
   ...> for each row
   ...> when exists (select 1 from bar where UniqueCol = new.UniqueCol)
   ...> begin select raise(abort, 'UNIQUE constraint failed: bar.UniqueCol'); 
end;
Run Time: real 0.000 user 0.00 sys 0.00

sqlite> explain insert into bar values (1, 1, 1);
addr  opcode p1p2p3p4 p5  comment
  -        -  --  -
0 Init   0 33000  Start at 33
1 OpenWrite  0 5 0 3  00  root=5 iDb=0; bar
2 OpenWrite  1 6 

Re: [sqlite] Non-unique columns in unique index

2017-03-01 Thread Deon Brewis
Right, you need to use the birthday attack algorithm for determining collision 
risk.

I wouldn't mind hashing - but there is an additional complication - the query I 
want the covered field in the index for is this:

SELECT ExtraCol FROM Foo WHERE UniqueCol > range_start AND UniqueCol < range_end

So I would need a hashing algorithm that's usably small and doesn't collide, 
yet preserves the property that if a < b then hash(a) < hash(b).
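
Assuming lexicographic comparison on fixed-length byte keys, the only size-reducing map with that property is essentially truncation: keeping a prefix preserves (non-strict) order, but the pigeonhole principle guarantees collisions, so a collision-free order-preserving hash into fewer bytes cannot exist. A quick check of the order-preservation half:

```python
import os

# 1000 random 20-byte keys, sorted as SQLite would sort BLOBs (memcmp order)
keys = sorted(os.urandom(20) for _ in range(1000))
prefixes = [k[:8] for k in keys]   # "hash" = truncate to 8 bytes

# Truncation keeps the keys in non-decreasing order...
order_kept = all(a <= b for a, b in zip(prefixes, prefixes[1:]))
print(order_kept)  # True
# ...but distinct keys can now share a prefix, i.e. collide.
```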

- Deon

-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Paul Sanderson
Sent: Wednesday, March 1, 2017 2:19 PM
To: p...@sandersonforensics.com
Cc: SQLite mailing list 
Subject: Re: [sqlite] Non-unique columns in unique index

Hmm - a bit of google foo and refreshing my memory on the chances of collisions 
shows my 1:9,223,372,036,854,775,808 is way off.

That is the chance of any two hashes colliding - you'll have lots of hashes.

The basic idea might be sound though - even if you stick with a full MD5 it 
should save a chunk of storage.

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 1 March 2017 at 22:13, Paul Sanderson 
wrote:

> As a bit of an off the wall suggestion you could try an MD5 (or even a 
> partial MD5 - half of the bytes)
>
> CREATE table hashes (
> hash integer primary key; // just the first 64 bits of the hash of 
> uniquecol and extracol
> )
>
> as an integer primary key the hash would be an alias of the rowid and 
> so storage would be 8 bytes plus admin
>
> the chance of a random collision based on a 64 bit hash would be (I 
> think)
> 1:9,223,372,036,854,775,808
>
> MD5 is broken but would work OK for this
>
> use a trigger to abort the insert into blah if the insert into hashes 
> fails.
>
>
> Paul
> www.sandersonforensics.com
> skype: r3scue193
> twitter: @sandersonforens
> Tel +44 (0)1326 572786
> http://sandersonforensics.com/forum/content.php?195-SQLite-
> Forensic-Toolkit -Forensic Toolkit for SQLite email from a work 
> address for a fully functional demo licence
>
> On 1 March 2017 at 21:54, Simon Slavin  wrote:
>
>>
>> On 1 Mar 2017, at 9:41pm, Deon Brewis  wrote:
>>
>> > Yeah ok, but that is paltry compared with the gb's of diskspace 
>> > that
>> the actual second index takes up. But thanks for clarifying.
>>
>> Ah.  If it’s really GBs of disk space then I can see why you’d look 
>> for alternative solutions.
>>
>> But I have a 43 GB database file which could be 20 GB without an 
>> extra index.  I could have written that extra check in code, and 
>> reduced the file size, but I decided not to.  Because once I had 
>> developed procedures to handle a 20 GB file, I might as well be dealing 
>> with a 43 GB file anyway.
>>
>> Simon.
>>
>
>


Re: [sqlite] Non-unique columns in unique index

2017-03-01 Thread Paul Sanderson
Hmm - a bit of google foo and refreshing my memory on the chances of collisions 
shows my 1:9,223,372,036,854,775,808 is way off.

That is the chance of any two hashes colliding - you'll have lots of hashes.

The basic idea might be sound though - even if you stick with a full MD5 it
should save a chunk of storage.
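
The standard birthday approximation makes the correction concrete: for n uniform b-bit hashes, P(any collision) ≈ 1 − exp(−n(n−1)/2^(b+1)). A small sketch (the 10,000,000 figure is illustrative, borrowed from David Raymond's test size elsewhere in the thread):

```python
import math

def collision_prob(n, bits):
    """Birthday-bound approximation for n uniform `bits`-bit hashes."""
    # -expm1(-x) == 1 - exp(-x), but accurate for very small x
    return -math.expm1(-n * (n - 1) / (2.0 * 2.0 ** bits))

pair = collision_prob(2, 64)                  # one specific pair: ~1 in 2^64
ten_million = collision_prob(10_000_000, 64)  # still small, but ~5e13 pairs
print(pair, ten_million)                      # ~5.4e-20, ~2.7e-06
```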

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 1 March 2017 at 22:13, Paul Sanderson 
wrote:

> As a bit of an off the wall suggestion you could try an MD5 (or even a
> partial MD5 - half of the bytes)
>
> CREATE table hashes (
> hash integer primary key; // just the first 64 bits of the hash of
> uniquecol and extracol
> )
>
> as an integer primary key the hash would be an alias of the rowid and so
> storage would be 8 bytes plus admin
>
> the chance of a random collision based on a 64 bit hash would be (I think)
> 1:9,223,372,036,854,775,808
>
> MD5 is broken but would work OK for this
>
> use a trigger to abort the insert into blah if the insert into hashes
> fails.
>
>
> Paul
> www.sandersonforensics.com
> skype: r3scue193
> twitter: @sandersonforens
> Tel +44 (0)1326 572786
> http://sandersonforensics.com/forum/content.php?195-SQLite-
> Forensic-Toolkit -Forensic Toolkit for SQLite
> email from a work address for a fully functional demo licence
>
> On 1 March 2017 at 21:54, Simon Slavin  wrote:
>
>>
>> On 1 Mar 2017, at 9:41pm, Deon Brewis  wrote:
>>
>> > Yeah ok, but that is paltry compared with the gb's of diskspace that
>> the actual second index takes up. But thanks for clarifying.
>>
>> Ah.  If it’s really GBs of disk space then I can see why you’d look for
>> alternative solutions.
>>
>> But I have a 43 GB database file which could be 20 GB without an extra
>> index.  I could have written that extra check in code, and reduced the file
>> size, but I decided not to.  Because once I had developed procedures to
>> handle a 20 GB file, I might as well be dealing with a 43 GB file anyway.
>>
>> Simon.
>>
>
>


Re: [sqlite] Non-unique columns in unique index

2017-03-01 Thread Paul Sanderson
As a bit of an off the wall suggestion you could try an MD5 (or even a
partial MD5 - half of the bytes)

CREATE TABLE hashes (
    hash INTEGER PRIMARY KEY -- just the first 64 bits of the hash of
                             -- uniquecol and extracol
);

as an integer primary key the hash would be an alias of the rowid and so
storage would be 8 bytes plus admin

the chance of a random collision based on a 64 bit hash would be (I think)
1:9,223,372,036,854,775,808

MD5 is broken but would work OK for this

use a trigger to abort the insert into blah if the insert into hashes fails.


Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 1 March 2017 at 21:54, Simon Slavin  wrote:

>
> On 1 Mar 2017, at 9:41pm, Deon Brewis  wrote:
>
> > Yeah ok, but that is paltry compared with the gb's of diskspace that the
> actual second index takes up. But thanks for clarifying.
>
> Ah.  If it’s really GBs of disk space then I can see why you’d look for
> alternative solutions.
>
> But I have a 43 GB database file which could be 20 GB without an extra
> index.  I could have written that extra check in code, and reduced the file
> size, but I decided not to.  Because once I had developed procedures to
> handle a 20 GB file, I might as well be dealing with a 43 GB file anyway.
>
> Simon.
>


[sqlite] Can sqlite3_blob_write be done through SQL?

2017-03-01 Thread Yuri

I have to call sqlite3_blob_write to partially modify blobs.


Wouldn't it have been better if there was a function, for example, 
writeblob(tbl.blob_fld, offset, portion_data)? SQLite could do the same 
that sqlite3_blob_write is doing when writeblob result is assigned to 
the same row/column that is supplied as the argument, and rewrite the 
whole blob otherwise?



Also, do sqlite3_blob_write calls participate in transactions?


Thanks,

Yuri




Re: [sqlite] Non-unique columns in unique index

2017-03-01 Thread Simon Slavin

On 1 Mar 2017, at 9:41pm, Deon Brewis  wrote:

> Yeah ok, but that is paltry compared with the gb's of diskspace that the 
> actual second index takes up. But thanks for clarifying.

Ah.  If it’s really GBs of disk space then I can see why you’d look for 
alternative solutions.

But I have a 43 GB database file which could be 20 GB without an extra index.  
I could have written that extra check in code, and reduced the file size, but I 
decided not to.  Because once I had developed procedures to handle a 20 GB 
file, I might as well be dealing with a 43 GB file anyway.

Simon.


Re: [sqlite] Database is locked

2017-03-01 Thread Simon Slavin

On 1 Mar 2017, at 9:17pm, Bob Friesenhahn  wrote:

> I am not sure of all of the conditions which might result in "Database is 
> locked".  For example, if the results of a query are being iterated through 
> and the reader stops iterating part way through the result set, might this 
> cause "Database is locked" for writers?

If you do a sqlite3_step() for a statement then you absolutely must do a 
sqlite3_reset() or sqlite3_finalize() or both for that statement.  Without it 
you haven’t finished processing the statement.

Not doing this properly may not necessarily result in a lock which prevents 
writers (though it might), but it can cause problems later.  For instance, 
you can’t close a connection which still has active statements, and that means 
your database may still be open when your program quits.
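
A minimal reproduction of that failure mode, sketched with Python's sqlite3 module for brevity (a half-iterated cursor is the analogue of a stepped-but-never-reset sqlite3_stmt; assumes the default rollback-journal mode, not WAL):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
reader = sqlite3.connect(path)
reader.execute("CREATE TABLE t(x)")
reader.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])
reader.commit()

cur = reader.execute("SELECT x FROM t")
cur.fetchone()     # stop part-way: the statement stays active, read lock held

writer = sqlite3.connect(path, timeout=0.1)
try:
    writer.execute("UPDATE t SET x = x + 1")
    writer.commit()
    locked = False
except sqlite3.OperationalError:   # "database is locked"
    locked = True
print(locked)

cur.fetchall()     # finishing (or resetting) the statement releases the lock
```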

> Some of our C software is very good at doing this, and some C code is not 
> quite as good.  Our Python code should be robust at reporting problems since 
> we are using APSW and Python exceptions.

If possible, fix your C code.  But you knew that.  And there can be reasons why 
that’s not possible (legacy code you’re not allowed to change, libraries you 
can’t recompile).

> We do capture output from sqlite's tracing facility, including error reports.

Unfortunately, the tracing facility is triggered by your program doing an API 
call.  If you don’t do the API call, you don’t get an error report saying what 
you did wrong.

Simon.


Re: [sqlite] Non-unique columns in unique index

2017-03-01 Thread Deon Brewis
Yeah ok, but that is paltry compared with the gb's of diskspace that the actual 
second index takes up. But thanks for clarifying.

-Deon

-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Simon Slavin
Sent: Wednesday, March 1, 2017 1:38 PM
To: SQLite mailing list 
Subject: Re: [sqlite] Non-unique columns in unique index


On 1 Mar 2017, at 9:11pm, Deon Brewis  wrote:

> "But look how much space and processing time it would take up"
> 
> Can you clarify what you mean by "space" ?

Your triggers have to be stored.  Every time they’re activated (each time you 
insert a row) they have to be run.  That requires SQLite to grab enough memory 
and/or disk space to run the triggers and to construct any temporary indexes 
they require.

Also, it’s a nightmare to debug in comparison with just creating one extra 
index.

Simon.


Re: [sqlite] Non-unique columns in unique index

2017-03-01 Thread Simon Slavin

On 1 Mar 2017, at 9:11pm, Deon Brewis  wrote:

> "But look how much space and processing time it would take up"
> 
> Can you clarify what you mean by "space" ?

Your triggers have to be stored.  Every time they’re activated (each time you 
insert a row) they have to be run.  That requires SQLite to grab enough memory 
and/or disk space to run the triggers and to construct any temporary indexes 
they require.

Also, it’s a nightmare to debug in comparison with just creating one extra 
index.

Simon.


Re: [sqlite] Database is locked

2017-03-01 Thread Bob Friesenhahn

On Wed, 1 Mar 2017, Simon Slavin wrote:

>> On 1 Mar 2017, at 7:27pm, Bob Friesenhahn  wrote:
>>
>> What is the recommended approach to diagnosing "Database is locked" errors
>> and determining the guilty party?
>
> Are you checking the result codes returned by all SQLite commands you give
> to make sure they are SQLITE_OK ?

Some of our C software is very good at doing this, and some C code is 
not quite as good.  Our Python code should be robust at reporting 
problems since we are using APSW and Python exceptions.

We do capture output from sqlite's tracing facility, including error 
reports.

I am not sure of all of the conditions which might result in "Database 
is locked".  For example, if the results of a query are being iterated 
through and the reader stops iterating part way through the result 
set, might this cause "Database is locked" for writers?


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [sqlite] Non-unique columns in unique index

2017-03-01 Thread Deon Brewis
"But look how much space and processing time it would take up"

Can you clarify what you mean by "space" ?

The processing time argument I understand. 


I think this is one of those things that if the database engine doesn't 
internally support it, it can't really be emulated. 

Sure would be nice to have INCLUDE columns support (here and in other places).

- Deon

-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Simon Slavin
Sent: Wednesday, March 1, 2017 12:57 PM
To: SQLite mailing list 
Subject: Re: [sqlite] Non-unique columns in unique index


On 1 Mar 2017, at 8:21pm, David Raymond  wrote:

> The trigger version you asked about would look something like the below I 
> believe. More risky than having the two indexes, but should work. (Famous 
> last words)

I have no doubt it would work.  But look how much space and processing time it 
would take up.  Far simpler and clearer just to create the two indexes.

Simon.


Re: [sqlite] Database is locked

2017-03-01 Thread Stephen Chrzanowski
Where is the database being stored?  What OS is the software running
under?  If Windows, I'd suggest looking up SysInternals and downloading the
package to see what has its grubby hands on the database.  If Linux, as
root, run lsof (lower case) and grep for the database or process
accessing the database.

On Wed, Mar 1, 2017 at 2:27 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> We are sometimes encountering permanent "Database is locked" errors from
> sqlite 3.10.2.  After the database gets in this state it is not possible to
> write to it or read from it.  It is not possible to use sqlite's built in
> backup mechanism since it produces a backup file of zero bytes.
>
> What is the recommended approach to diagnosing "Database is locked" errors
> and determining the guilty party?
>
> Is there a way to clear the condition which causes "Database is locked"
> errors?
>
> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [sqlite] foreign key cardinality

2017-03-01 Thread James K. Lowden
On Tue, 28 Feb 2017 18:01:35 -0800
J Decker  wrote:

> Then how would you properly find the children?  Or find any
> information about their parent, the child would have to in turn be
> deleted.
> 
> foreign keys are as good as pointers.  

That's an interesting way to put it.  One of Codd's first papers
specifically rejected pointers as a database mechanism, and the
relational model relies entirely on values.  That there are no pointers
is by design.  

I accept that SQL foreign keys require a unique referent.  I have to;
it's a fact.  

I'm not convinced that's good or necessary.  I can believe it's
convenient, in the same way that "bag theory" is convenient to avoid
ensuring uniqueness in SQL.  

--jkl


Re: [sqlite] Non-unique columns in unique index

2017-03-01 Thread Deon Brewis
Thanks, thought about it. But I have a ROWID requirement as well due to foreign 
keys referring back to this table via the ROWID.

I could in theory put just the RowId in an additional index, but then the 
RowId index would just carry the entire UniqueCol again - back to the same 
problem, just reversed.

To throw the specific design out there:

This is a frequent read, infrequent update table:

CREATE TABLE Blah(
    Id Integer Primary Key,
    UniqueCol blob,       -- 20 bytes fixed
    ExtraCol blob,        -- 12 bytes fixed
    UNIQUE(UniqueCol)
);  -- ~36 bytes/row

CREATE INDEX sqlite_autoindex_Resource_1 on Blah (  -- implicit index
    UniqueCol             -- 20 bytes
                          -- Id -- implicit
);  -- ~24 bytes/row

CREATE INDEX blahIndex on Blah (  -- actual needed index for workload
    UniqueCol,            -- 20 bytes
    ExtraCol              -- 12 bytes
                          -- Id -- implicit
);  -- ~36 bytes/row

So this 3rd index is exactly as big as the original table. I would love to get 
rid of at least the 2 vs. 3 redundancy here. 

- Deon

-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Clemens Ladisch
Sent: Wednesday, March 1, 2017 9:58 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] Non-unique columns in unique index

Deon Brewis wrote:
> Is there way to add non-unique columns in a unique index?
>
> I would like to use the same index to enforce unique constraints, as well as 
> giving a covered result for other queries.
>
> CREATE UNIQUE INDEX indx  ON Foo(UniqueCol) INCLUDE (ExtraCol)

CREATE TABLE Foo (
  UniqueCol PRIMARY KEY,
  ExtraCol,
  [...]
) WITHOUT ROWID;
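
Sketched with Python's sqlite3 module: in a WITHOUT ROWID table the table's own b-tree is keyed on UniqueCol, so it both enforces uniqueness and covers the range query with no separate index (table and column names follow the thread's example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Foo(
        UniqueCol BLOB PRIMARY KEY,  -- unique, and the table's clustering key
        ExtraCol  BLOB
    ) WITHOUT ROWID
""")
conn.execute("INSERT INTO Foo VALUES (x'01', x'aa')")
try:
    conn.execute("INSERT INTO Foo VALUES (x'01', x'bb')")  # duplicate key
    dup_error = None
except sqlite3.IntegrityError as e:
    dup_error = str(e)   # UNIQUE constraint failed

# Range query is served from the table b-tree itself - no extra index needed.
rows = conn.execute(
    "SELECT ExtraCol FROM Foo "
    "WHERE UniqueCol > x'00' AND UniqueCol < x'02'"
).fetchall()
print(dup_error, rows)
```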


Regards,
Clemens


Re: [sqlite] Non-unique columns in unique index

2017-03-01 Thread Simon Slavin

On 1 Mar 2017, at 5:00pm, Deon Brewis  wrote:

> Is there a way to add non-unique columns in a unique index?

I don’t know of anything that operates in that way.

> If not, is there a way to efficiently implement a UNIQUE constraint in a 
> different way? (Trigger maybe?)

Not efficiently.  You could make a non-UNIQUE index which has the columns you 
want UNIQUE at the beginning.  Then you could include a check for uniqueness in 
your own code.  This would probably tempt SQLite to use that index when 
checking uniqueness.  But this strikes me as less efficient than just letting 
SQLite do its own thing.

But it would be far more efficient just to create two indexes, one for the 
UNIQUE and one for coverage.  Or just create the UNIQUE index and let SQLite 
get the other columns itself.

Simon.


Re: [sqlite] Performance difference in running SQLite embed or in a TCL script

2017-03-01 Thread Dan Kennedy

On 03/01/2017 09:53 PM, Simone Mosciatti wrote:

Hi all,


tl;dr:

What are the differences between running SQLite inside TCL and running 
it embedded in a shared object module? Why am I seeing such a big 
difference in performance?


https://github.com/RedBeardLab/rediSQL/issues/13#issuecomment-283309641


I finally got some time to work again on my redis module rediSQL, 
which basically embeds SQLite inside redis.


I already asked this mailing list for help about performance, and it 
was shown to me that the poor performance of the module is not 
because of SQLite and that I should look at something else.


I have a couple of benchmarks where I insert triples of integers into 
a single table; using straight TCL my machine reaches a stable 240k 
inserts per second.


Using the redis module and doing the same kind of operation I get much 
worse performance that degrades with time. The inserts per second start 
at roughly 24k and then decrease to 1k.


What I did to investigate the different behaviour was to use perf on 
both the TCL script and the redis server running the module.


Since SQLite's performance is orders of magnitude better than the 
redis module's, I was expecting that something redis-related was 
throttling the inserts per second. I was wrong.


The perf of the TCL script shows that most of the time is spent in 
libpthread, libc, libtcl and only after that in libsqlite, in the 
symbol sqlite3_randomness.


The perf of the redis module, instead, shows that most of the time is 
spent in sqlite3PcacheTruncate (52%) and then in 
sqlite3PcacheClearWritable (30%). I must be doing something totally 
wrong, but what?



I forget exactly when it was introduced, but there was a bug causing 
those routines to use excessive CPU on systems with really big page 
caches. Fixed for 3.14.1. Are you using something earlier than 3.14.1? 
If so, it's worth trying a newer version.


  http://sqlite.org/releaselog/3_14_1.html
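If in doubt, the linked library version is easy to check at runtime. A sketch in Python (the C API equivalent is sqlite3_libversion()):

```python
import sqlite3

# Version of the SQLite library this process is actually linked against
# (not the version of the Python wrapper itself).
print(sqlite3.sqlite_version)

# The pcache fix Dan mentions shipped in 3.14.1:
if sqlite3.sqlite_version_info < (3, 14, 1):
    print("affected: consider upgrading SQLite")
else:
    print("3.14.1 or later: pcache fix is included")
```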

Dan.




You can find more details on the issues I opened here: 
https://github.com/RedBeardLab/rediSQL/issues/13#issuecomment-283309641


To sum up, I am asking what the differences are in running SQLite from 
TCL versus running it embedded in a shared object.


Thanks for your time.

Simone






[sqlite] Performance difference in running SQLite embed or in a TCL script

2017-03-01 Thread Simone Mosciatti

Hi all,


tl;dr:

What are the differences between running SQLite inside TCL and running it 
embedded in a shared object module? Why am I seeing such a big difference in 
performance?


https://github.com/RedBeardLab/rediSQL/issues/13#issuecomment-283309641


I finally got some time to work again on my redis module rediSQL, which 
basically embeds SQLite inside redis.


I already asked this mailing list for help about performance, and it 
was shown to me that the poor performance of the module is not 
because of SQLite and that I should look at something else.


I have a couple of benchmarks where I insert triples of integers into a 
single table; using straight TCL my machine reaches a stable 240k inserts 
per second.


Using the redis module and doing the same kind of operation I get much 
worse performance that degrades with time. The inserts per second start at 
roughly 24k and then decrease to 1k.


What I did to investigate the different behaviour was to use perf on 
both the TCL script and the redis server running the module.


Since SQLite's performance is orders of magnitude better than the 
redis module's, I was expecting that something redis-related was 
throttling the inserts per second. I was wrong.


The perf of the TCL script shows that most of the time is spent in 
libpthread, libc, libtcl and only after that in libsqlite, in the 
symbol sqlite3_randomness.


The perf of the redis module, instead, shows that most of the time is 
spent in sqlite3PcacheTruncate (52%) and then in 
sqlite3PcacheClearWritable (30%). I must be doing something totally 
wrong, but what?


You can find more details on the issues I opened here: 
https://github.com/RedBeardLab/rediSQL/issues/13#issuecomment-283309641


To sum up, I am asking what the differences are in running SQLite from TCL 
versus running it embedded in a shared object.


Thanks for your time.

Simone



Re: [sqlite] Strange - query returned duplicates, issue fixed after executing 'VACUUM'

2017-03-01 Thread Richard Hipp
On 3/1/17, Simon Slavin  wrote:
>
> If you have a case where VACUUM does not fix a faulty index, that would be
> very interesting.

Not necessarily.  VACUUM does not recreate the indexes, it just copies
them, row by row.  So if the index is self-consistent but it does not
match its table (it has extra rows and/or is missing rows) then VACUUM
won't fix it.

REINDEX does rebuild the indexes from scratch.  If the problem is just
indexes that are missing entries or have extra entries, REINDEX will
fix it.
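So a reasonable recovery procedure is: run PRAGMA integrity_check, and if it reports missing or extra index entries, run REINDEX and check again. A sketch with Python's sqlite3 module (table and index names are invented, and an in-memory database stands in for the suspect file):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # in practice, open the suspect database file
con.execute("CREATE TABLE t (a TEXT)")
con.execute("CREATE INDEX idx_a ON t(a)")
con.executemany("INSERT INTO t VALUES (?)", [("x",), ("y",)])

(result,) = con.execute("PRAGMA integrity_check").fetchone()
if result != "ok":
    # Rebuild every index from scratch out of the table data, then re-check.
    con.execute("REINDEX")
    (result,) = con.execute("PRAGMA integrity_check").fetchone()
print(result)  # ok
```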
-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Strange - query returned duplicates, issue fixed after executing 'VACUUM'

2017-03-01 Thread Simon Slavin

On 1 Mar 2017, at 12:28pm, Edwin Yip  wrote:

> B - The "VACUUM" only fixes the issue for one table, but the very same issue
> remains for another table.

If you have a case where VACUUM does not fix a faulty index, that would be very 
interesting.  However, we would need a copy of the unfixed version of the 
database.

Crashes in your program should not be able to corrupt your database.  I’m not 
saying that it’s completely impossible, just that it’s very unexpected.  Are 
you doing any of the things mentioned here?



Simon.


Re: [sqlite] Strange - query returned duplicates, issue fixed after executing 'VACUUM'

2017-03-01 Thread Edwin Yip
Hello Domingo,

Thanks for your reply.

I'm not sure, but I guess it's crashes of the program (that uses the DB) that
might have caused this issue.

So far the recreation of the indexes has fixed the issue for two tables. I'll
report back here if I have further findings.


On Wed, Mar 1, 2017 at 8:41 PM, Domingo Alvarez Duarte 
wrote:

> Hello Edwin!
>
> One thing that bugs me is how the indexes could become corrupt?
>
> - Sqlite bug ?
>
> - Hard drive fault ?
>
> Cheers !
>
>
>
> On 01/03/17 09:28, Edwin Yip wrote:
>
>> Update 1 after another hours of checking:
>>
>> A - It seems that the issue is related to indexes - "pragma
>> integrity_check"
>> would result in errors like:
>>
>> row 23465 missing from index IndexRetailItemLastModTime
>> row 24187 missing from index IndexRetailItemLastModTime
>>
>> B - The "VACUUM" only fixes the issue for one table, but the very same issue
>> remains for another table.
>>
>> C - Re-creating the indexes seems to have fixed the problem, so far...
>>
>>
>> On Wed, Mar 1, 2017 at 5:59 PM, Edwin Yip 
>> wrote:
>>
>> Hello,
>>>
>>> I need some insights for an issue I spent hours investigating - was it DB
>>> file corruption or something else? Details below:
>>>
>>> The table is called ProductType, "select count(*) from ProductType"
>>> returns 47 rows, which is correct.
>>>
>>> There is a column called "LastModTime" and the  COLLATE is ISO8601.
>>> "select count(*) from ProductType where (LastModTime >
>>> "1899/12/30T09:23:21")" would return 60 rows, which is **wrong**.
>>>
>>> After hours of trying, I executed "VACUUM", and now everything is back to normal.
>>>
>>> This is weird. Is it just occasional DB file corruption, or anything
>>> worth mentioning, like any possible mistakes I might have made to the DB?
>>>
>>> Table schema:
>>> CREATE TABLE ProductType (
>>>  ID  INTEGER PRIMARY KEY AUTOINCREMENT,
>>>  MachineId   TEXT COLLATE SYSTEMNOCASE,
>>>  _ModificationTime   INTEGER,
>>>  _ServerId   INTEGER,
>>>  _UserId INTEGER,
>>>  ParentIdINTEGER,
>>>  TypeNr  TEXT COLLATE SYSTEMNOCASE,
>>>  TypeNameTEXT COLLATE SYSTEMNOCASE,
>>>  LastModTime TEXT COLLATE ISO8601,
>>>  PendingTransferTargetServer TEXT COLLATE SYSTEMNOCASE
>>> );
>>>
>>>
>>> Thanks.
>>>
>>>
>>> --
>>> Best Regards,
>>> Edwin Yip
>>>
>>>
>>
>>



-- 
Best Regards,
Edwin Yip


Re: [sqlite] Strange - query returned duplicates, issue fixed after executing 'VACUUM'

2017-03-01 Thread Domingo Alvarez Duarte

Hello Edwin!

One thing that bugs me is how the indexes could become corrupt?

- Sqlite bug ?

- Hard drive fault ?

Cheers !


On 01/03/17 09:28, Edwin Yip wrote:

Update 1 after another hours of checking:

A - It seems that the issue is related to indexes - "pragma integrity_check"
would result in errors like:

row 23465 missing from index IndexRetailItemLastModTime
row 24187 missing from index IndexRetailItemLastModTime

B - The "VACUUM" only fixes the issue for one table, but the very same issue
remains for another table.

C - Re-creating the indexes seems to have fixed the problem, so far...


On Wed, Mar 1, 2017 at 5:59 PM, Edwin Yip 
wrote:


Hello,

I need some insights for an issue I spent hours investigating - was it DB
file corruption or something else? Details below:

The table is called ProductType, "select count(*) from ProductType"
returns 47 rows, which is correct.

There is a column called "LastModTime" and the  COLLATE is ISO8601.
"select count(*) from ProductType where (LastModTime >
"1899/12/30T09:23:21")" would return 60 rows, which is **wrong**.

After hours of trying, I executed "VACUUM", and now everything is back to normal.

This is weird. Is it just occasional DB file corruption, or anything
worth mentioning, like any possible mistakes I might have made to the DB?

Table schema:
CREATE TABLE ProductType (
 ID  INTEGER PRIMARY KEY AUTOINCREMENT,
 MachineId   TEXT COLLATE SYSTEMNOCASE,
 _ModificationTime   INTEGER,
 _ServerId   INTEGER,
 _UserId INTEGER,
 ParentIdINTEGER,
 TypeNr  TEXT COLLATE SYSTEMNOCASE,
 TypeNameTEXT COLLATE SYSTEMNOCASE,
 LastModTime TEXT COLLATE ISO8601,
 PendingTransferTargetServer TEXT COLLATE SYSTEMNOCASE
);


Thanks.


--
Best Regards,
Edwin Yip








Re: [sqlite] Strange - query returned duplicates, issue fixed after executing 'VACUUM'

2017-03-01 Thread Edwin Yip
Update 1 after another hours of checking:

A - It seems that the issue is related to indexes - "pragma integrity_check"
would result in errors like:

row 23465 missing from index IndexRetailItemLastModTime
row 24187 missing from index IndexRetailItemLastModTime

B - The "VACUUM" only fixes the issue for one table, but the very same issue
remains for another table.

C - Re-creating the indexes seems to have fixed the problem, so far...


On Wed, Mar 1, 2017 at 5:59 PM, Edwin Yip 
wrote:

> Hello,
>
> I need some insights for an issue I spent hours investigating - was it DB
> file corruption or something else? Details below:
>
> The table is called ProductType, "select count(*) from ProductType"
> returns 47 rows, which is correct.
>
> There is a column called "LastModTime" and the  COLLATE is ISO8601.
> "select count(*) from ProductType where (LastModTime >
> "1899/12/30T09:23:21")" would return 60 rows, which is **wrong**.
>
> After hours of trying, I executed "VACUUM", and now everything is back to normal.
>
> This is weird. Is it just occasional DB file corruption, or anything
> worth mentioning, like any possible mistakes I might have made to the DB?
>
> Table schema:
> CREATE TABLE ProductType (
> ID  INTEGER PRIMARY KEY AUTOINCREMENT,
> MachineId   TEXT COLLATE SYSTEMNOCASE,
> _ModificationTime   INTEGER,
> _ServerId   INTEGER,
> _UserId INTEGER,
> ParentIdINTEGER,
> TypeNr  TEXT COLLATE SYSTEMNOCASE,
> TypeNameTEXT COLLATE SYSTEMNOCASE,
> LastModTime TEXT COLLATE ISO8601,
> PendingTransferTargetServer TEXT COLLATE SYSTEMNOCASE
> );
>
>
> Thanks.
>
>
> --
> Best Regards,
> Edwin Yip
>



-- 
Best Regards,
Edwin Yip


[sqlite] Strange - query returned duplicates, issue fixed after executing 'VACUUM'

2017-03-01 Thread Edwin Yip
Hello,

I need some insights for an issue I spent hours investigating - was it DB
file corruption or something else? Details below:

The table is called ProductType, "select count(*) from ProductType" returns
47 rows, which is correct.

There is a column called "LastModTime" and the  COLLATE is ISO8601.
"select count(*) from ProductType where (LastModTime >
"1899/12/30T09:23:21")" would return 60 rows, which is **wrong**.

After hours of trying, I executed "VACUUM", and now everything is back to normal.

This is weird. Is it just occasional DB file corruption, or anything
worth mentioning, like any possible mistakes I might have made to the DB?

Table schema:
CREATE TABLE ProductType (
ID  INTEGER PRIMARY KEY AUTOINCREMENT,
MachineId   TEXT COLLATE SYSTEMNOCASE,
_ModificationTime   INTEGER,
_ServerId   INTEGER,
_UserId INTEGER,
ParentIdINTEGER,
TypeNr  TEXT COLLATE SYSTEMNOCASE,
TypeNameTEXT COLLATE SYSTEMNOCASE,
LastModTime TEXT COLLATE ISO8601,
PendingTransferTargetServer TEXT COLLATE SYSTEMNOCASE
);
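SYSTEMNOCASE and ISO8601 are not built-in SQLite collations, so presumably the application registers them on every connection. A connection (or an external tool) that opens the database without registering them will fail with "no such collation sequence" on queries that need them, and if the comparator's behaviour ever differs between builds, indexes built under the old ordering will no longer match the table. A minimal sketch of registering an application-defined collation with Python's sqlite3 module (the comparison function here is an invented stand-in, a plain string comparison):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Application-defined collation: SQLite stores only the *name* in the
# schema; every connection must supply the comparator itself.
con.create_collation("ISO8601", lambda a, b: (a > b) - (a < b))

con.execute("CREATE TABLE t (LastModTime TEXT COLLATE ISO8601)")
con.executemany("INSERT INTO t VALUES (?)",
                [("1899/12/30T09:23:21",), ("2017/03/01T10:00:00",)])
(n,) = con.execute(
    "SELECT count(*) FROM t WHERE LastModTime > '1899/12/30T09:23:21'"
).fetchone()
print(n)  # 1
```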


Thanks.


-- 
Best Regards,
Edwin Yip


[sqlite] heap-use-after-free in sqlite3Strlen30

2017-03-01 Thread junjie wang
Hi sqlite3 maintainers,

I found a use-after-free bug in sqlite3.
Here is the stack trace:

==12241== ERROR: AddressSanitizer: heap-use-after-free on address
0xb5a00590 at pc 0x8777099 bp 0xbfd44df8 sp 0xbfd44dec
READ of size 1 at 0xb5a00590 thread T0
#0 0x8777098 in sqlite3Strlen30 /home/b/asan/sqlite/sqlite3.c:27620
#1 0x8777098 in sqlite3ParseUri /home/b/asan/sqlite/sqlite3.c:141789
#2 0x877b78b in openDatabase /home/b/asan/sqlite/sqlite3.c:142173
#3 0x80805ea in open_db.part.12 /home/b/asan/sqlite/src/shell.c:2426
#4 0x80964a0 in open_db /home/b/asan/sqlite/src/shell.c:5460
#5 0x80964a0 in runOneSqlLine /home/b/asan/sqlite/src/shell.c:5442
#6 0x8097c7f in process_input /home/b/asan/sqlite/src/shell.c:5546
#7 0x8072f49 in main /home/b/asan/sqlite/src/shell.c:6153
#8 0xb602ea82 (/lib/i386-linux-gnu/libc.so.6+0x19a82)
#9 0x80761b0 in _start (/home/b/asan/sqlite/sqlite3+0x80761b0)
0xb5a00590 is located 0 bytes inside of 2-byte region
[0xb5a00590,0xb5a00592)
freed by thread T0 here:
#0 0xb61fb774 (/usr/lib/i386-linux-gnu/libasan.so.0+0x16774)
#1 0x810eb4e in sqlite3_free /home/b/asan/sqlite/sqlite3.c:24562
previously allocated by thread T0 here:
#0 0xb61fb854 (/usr/lib/i386-linux-gnu/libasan.so.0+0x16854)
#1 0x82be0ff in sqlite3MemMalloc /home/b/asan/sqlite/sqlite3.c:20673
SUMMARY: AddressSanitizer: heap-use-after-free
/home/b/asan/sqlite/sqlite3.c:141785 sqlite3ParseUri
Shadow bytes around the buggy address:
  0x36b40060: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x36b40070: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x36b40080: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x36b40090: fa fa 00 00 fa fa 00 00 fa fa 00 00 fa fa fd fd
  0x36b400a0: fa fa fd fa fa fa fd fd fa fa fd fd fa fa fd fd
=>0x36b400b0: fa fa[fd]fa fa fa fd fd fa fa 00 fa fa fa 00 fa
  0x36b400c0: fa fa 00 00 fa fa 00 fa fa fa 00 00 fa fa 00 fa
  0x36b400d0: fa fa 00 00 fa fa 00 fa fa fa 00 00 fa fa 00 04
  0x36b400e0: fa fa 00 fa fa fa 00 00 fa fa 00 05 fa fa 00 04
  0x36b400f0: fa fa 00 fa fa fa 00 00 fa fa 00 00 fa fa 00 fa
  0x36b40100: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:   00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone: fa
  Heap righ redzone: fb
  Freed Heap region: fd
  Stack left redzone:f1
  Stack mid redzone: f2
  Stack right redzone:   f3
  Stack partial redzone: f4
  Stack after return:f5
  Stack use after scope: f8
  Global redzone:f9
  Global init order: f6
  Poisoned by user:  f7
  ASan internal: fe

And test case attached.

Thanks and Regards,
Junjie



Re: [sqlite] UNION

2017-03-01 Thread R Smith


On 2017/03/01 3:40 AM, do...@mail.com wrote:

# SELECT * FROM processors UNION SELECT * FROM storage;
Error: SELECTs to the left and right do not have the same number of
result columns.

All tables that I created in my database have differing column names,
values, and amounts of columns with the noted exception of the one
column which is common (board). I've no idea what to do now.

Why is this an error?


I think that perhaps you are new to SQL and other replies assumed you 
simply wanted what you wrote. I could be wrong, but just in case, here 
are my suggestions:


Perhaps what you would rather like to do is JOIN these tables and not 
UNION them?


Do you wish to match /every/ processor with /every/ storage row?
In this case, the statement should read: (Warning: this could produce 
excessively long listings)

SELECT * FROM processors, storage;

Do you wish to match only processors and storage that fit on the same 
boards?

In this case the statement might read something like:
SELECT *
FROM processors
  JOIN storage USING (board)
;

Do you wish to list /all/ processors and add the storage options for the 
same board /if/ there are any?

In this case the statement might read something like:
SELECT *
  FROM processors
  LEFT JOIN storage ON storage.board = processors.board
;

As you can see, lots of different things can be achieved. A quick course 
in SQL via perhaps W3Schools will teach all these in a few hours.
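The three variants above can be compared side by side; a sketch using Python's sqlite3 module with invented sample data (two boards of processors, storage for only one of them):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE processors (board TEXT, cpu  TEXT);
    CREATE TABLE storage    (board TEXT, disk TEXT);
    INSERT INTO processors VALUES ('b1', 'cpuA'), ('b2', 'cpuB');
    INSERT INTO storage    VALUES ('b1', 'ssd');
""")

# Cartesian product: every processor paired with every storage row.
cross = con.execute("SELECT count(*) FROM processors, storage").fetchone()[0]

# Inner join: only pairs that share the same board.
inner = con.execute(
    "SELECT count(*) FROM processors JOIN storage USING (board)").fetchone()[0]

# Left join: all processors; storage columns are NULL where there is no match.
left = con.execute(
    "SELECT count(*) FROM processors "
    "LEFT JOIN storage ON storage.board = processors.board").fetchone()[0]

print(cross, inner, left)  # 2 1 2
```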


Cheers,
Ryan
