[sqlite] sqlite3 and tcl86

2016-03-03 Thread tombert.group
Dear SQLite Team,

I am experiencing the following bug in the tcl package:

First, the configure script does not correctly recognize the Tcl location. 
Anyhow, I got it working using the amalgamation file and the gcc command below:

gcc -v -DSQLITE_OS_WIN -DSQLITE_ENABLE_FTS3 -DSQLITE_ENABLE_FTS4 
-DSQLITE_ENABLE_FTS5 -DSQLITE_ENABLE_JSON1 -DSQLITE_ENABLE_RTREE 
-I$INSTALLDIR/include -L$INSTALLDIR/lib/ -shared tclsqlite3.c -o tclsqlite3.dll 
-ltcl86


This works fine in 64-bit builds: sqlite3 loads fine and tclsh exits normally 
after [exit].
In 32-bit builds sqlite3 loads fine, but on [exit] I get a crash with 
"This application has requested the Runtime to terminate it in an unusual 
way."

Currently I am building on Win7 with MinGW32 and MinGW64. sqlite3 is the 
latest, cloned from http://www.sqlite.org/cgi/src
Tcl is also the latest release you can get your hands on: 8.6.5

I was wondering if anyone is running a similar configuration, if the configure 
script for the tcl package is still maintained, and if there is interest in 
cleaning this up?

Regards


[sqlite] Porting Sqlite to Embedded Platform

2016-03-03 Thread Dan Kennedy
On 03/03/2016 10:20 PM, Nitin Bhosale wrote:
> Hi
>
> We were attempting to use test_onefile.c based approach 
> (http://www.sqlite.org/src/doc/trunk/src/test_onefile.c). This is for 
> embedded where filesystem is not available.
> This code works and we can create, insert, select on the database created 
> using this test_onefile.c.
>
> However we want to save the db to flash and saw that we should be 
> implementing below functions but didn't find any relevant information about 
> it.
>
> **   mediaRead();- Read a 512 byte block from the file.
> **   mediaWrite();   - Write a 512 byte block to the file.
> **   mediaSync();- Tell the media hardware to sync.
>
> I have a few questions about it. If you answer them it would be really 
> valuable information for us.
> 1. What are the signatures for these functions, and what do the arguments 
> mean?
> 2. After writing the above functions to behave as expected, how do we 
> integrate them with the rest of the code?
> 3. As per our understanding we just have to write the above 3 functions and 
> we are done. Is this understanding correct?

That is what it says. But it looks like the code is out of sync with the 
comments. I think you'll need to update fsRead(), fsWrite() and fsSync() 
to work with the media instead of a file-system file. And fsOpen() to 
initialize it.

Dan.
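For readers wondering what shape those media routines might take, here is a hedged sketch of the 512-byte block interface that test_onefile.c's comments describe, with a RAM buffer standing in for the flash hardware. The class and method names are illustrative only, not SQLite's actual API; in a real port these would be C functions wired into fsRead()/fsWrite()/fsSync() as Dan suggests.

```python
BLOCK_SIZE = 512

class RamMedia:
    """Stand-in 'media': a flat byte array addressed in 512-byte blocks."""

    def __init__(self, nblocks):
        self.buf = bytearray(BLOCK_SIZE * nblocks)
        self.synced = False

    def media_read(self, iblock):
        # Read one 512-byte block (mediaRead in test_onefile.c's comments).
        off = iblock * BLOCK_SIZE
        return bytes(self.buf[off:off + BLOCK_SIZE])

    def media_write(self, iblock, data):
        # Write one 512-byte block (mediaWrite).
        assert len(data) == BLOCK_SIZE
        off = iblock * BLOCK_SIZE
        self.buf[off:off + BLOCK_SIZE] = data
        self.synced = False

    def media_sync(self):
        # Tell the hardware to flush (mediaSync); a no-op for RAM.
        self.synced = True

media = RamMedia(4)
media.media_write(1, b"\xab" * BLOCK_SIZE)
media.media_sync()
```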




[sqlite] Partial index query planner questions

2016-03-03 Thread David Raymond
I've got a question or two on the query planner's use of partial indexes. It 
seems that the query planner isn't using a partial index unless the fields in 
the WHERE clause of the query are present as fields in the index, even if the 
WHERE of the query exactly matches the WHERE of the index.

Example simplified schema:
create table problemTable
(
pk text primary key,
A text not null,
n int not null
--other fields here
);
create index typicalFullIndex on problemTable (A);
create index unusedPartial on problemTable (A) where n > 1;

In this case the WHERE clause on the index isn't on the field being stored in 
the index. But according to the partial indexes page:
"The columns referenced in the WHERE clause of a partial index can be any of 
the columns in the table, not just columns that happen to be indexed. However, 
it is very common for the WHERE clause expression of a partial index to be a 
simple expression on the column being indexed."

When I run queries which I think should use this index, it won't use it.

sqlite> explain query plan select count(*) from problemTable where n > 1;
0|0|0|SCAN TABLE problemTable

Even trying to force it with an INDEXED BY statement doesn't work.

sqlite> explain query plan select count(*) from problemTable indexed by 
unusedPartial where n > 1;
Error: no query solution

Analyzing the database doesn't help either, resulting in the same thing, 
although the sqlite_stat numbers clearly show the index is there, and only 
includes the correct number of records.

If I make another partial index, this time including the field from the where 
clause in the index then everything starts working.

sqlite> create index usedPartial on problemTable (n, A) where n > 1;
sqlite> explain query plan select count(*) from problemTable where n > 1;
0|0|0|SEARCH TABLE problemTable USING COVERING INDEX usedPartial (n>?)
sqlite> explain query plan select A from problemTable where n > 1;
0|0|0|SEARCH TABLE problemTable USING COVERING INDEX usedPartial (n>?)
sqlite> explain query plan select * from problemTable where n > 1;
0|0|0|SEARCH TABLE problemTable USING INDEX usedPartial (n>?)

If I create that with n and A reversed then there's a slight difference when it 
comes to querying for all fields...

sqlite> drop index usedPartial;
sqlite> create index usedPartial2 on problemTable (A, n) where n > 1;
sqlite> explain query plan select count(*) from problemTable where n > 1;
0|0|0|SCAN TABLE problemTable USING COVERING INDEX usedPartial2
sqlite> explain query plan select A from problemTable where n > 1;
0|0|0|SCAN TABLE problemTable USING COVERING INDEX usedPartial2
sqlite> explain query plan select * from problemTable where n > 1;
0|0|0|SCAN TABLE problemTable
sqlite> explain query plan select * from problemTable indexed by usedPartial2 
where n > 1;
Error: no query solution


In my specific case I don't really care what fields are in the index as I'm 
gonna need all of them, I just care that a record has an entry in the index at 
all. (Order of magnitude I'm looking at 70,000,000 records with 1,000 that 
fulfill the WHERE clause) "I would have thought" (famous last words) that the 
WHERE clauses exactly matching between index and query would be enough there. 
After all, creating the index doesn't seem to care that the WHERE fields aren't 
in the index, and it makes the index just fine. So the query shouldn't need the 
value of n to know whether or not the WHERE clause evaluates to True, a record 
simply being in the index at all is exactly that case.


So, after all that:
Is this just bad reading comprehension and bad expectations on my part?
Although it will create and analyze unusedPartial just fine, will the query 
planner even be able to make use of it?
Is it a full on requirement that fields in the WHERE clause be included as 
fields in the index, and not just something that's "very common"?



I'm relatively new at this, so my apologies if this is a ton of text for what 
turns out to be a simple answer, or if it's been covered in detail before.

Much obliged,
Dave R


(Tested with precompiled command line interface for Windows, version 3.11.0)
sqlite> select sqlite_version();
3.11.0
sqlite> select sqlite_source_id();
2016-02-15 17:29:24 3d862f207e3adc00f78066799ac5a8c282430a5f
And as I was about to hit send I noticed 3.11.1 is out, and a quick test looks 
like it does the same things.


P.S.
One other weird thing I noticed while testing:
INDEXED BY doesn't seem to prevent queries from using other indexes in some 
cases.

sqlite> explain query plan select count(*) from problemTable indexed by 
unusedPartial;
selectid|order|from|detail
0|0|0|SCAN TABLE problemTable USING COVERING INDEX 
sqlite_autoindex_problemTable_1

Shouldn't the INDEXED BY phrase there cause that to fail as it can't use the 
specified (partial) index?


[sqlite] How to use sqlite3_blob_open when the blob column is empty?

2016-03-03 Thread zhangkai
Hello,
I am using sqlite3 version 3.11.0 with the C API. I create a table:
"create table device(ieee integer primary key,
  name text, 
  status integer,
  zclversion integer, 
  applicationversion integer, 
  stackversion integer,
  hwversion integer, 
  manufacturername text,
  modelidentifier text,
  datecode text,
  endpoint blob);"
Then I insert a record with only ieee:
insert into device(ieee) values(12345);
After that I have a row with ieee=12345.
Then I want to fill in the last blob column, so I write this code:

sqlite3_blob *blob = NULL;
int ret = sqlite3_blob_open(db,
                            "main",
                            "device",
                            "endpoint",
                            12345,
                            1,
                            &blob);
const char *result = sqlite3_errmsg(db);
fprintf(stdout, "-- %s \n", result);

I get an error when I open the blob. Why? The errmsg is "cannot open value of 
type null". I debugged sqlite3; the stack is:


#0  blobSeekToRow (p=0x812df08, iRow=12345, pzErr=0xb314) at 
../sqlite/sqlite3.c:81136
#1  0x080a1cb0 in sqlite3_blob_open (db=0x811d3f8, zDb=0x80f7a90 "main", 
zTable=0x80f7851 "device", zColumn=0x80f7a87 "endpoint", iRow=12345, flags=1, 
ppBlob=0xb38c)
    at ../sqlite/sqlite3.c:81386

  rc = sqlite3_step(p->pStmt);
  if( rc==SQLITE_ROW ){
    VdbeCursor *pC = v->apCsr[0];
    u32 type = pC->aType[p->iCol];
    if( type<12 ){
      zErr = sqlite3MPrintf(p->db, "cannot open value of type %s",
          type==0?"null": type==7?"real": "integer"
      );
      rc = SQLITE_ERROR;
      sqlite3_finalize(p->pStmt);
      p->pStmt = 0;
    }else{
      p->iOffset = pC->aType[p->iCol + pC->nField];
      p->nByte = sqlite3VdbeSerialTypeLen(type);
      p->pCsr = pC->uc.pCursor;
      sqlite3BtreeIncrblobCursor(p->pCsr);
    }
  }

2016-03-03


zhangkai 


[sqlite] Are BLOB stored inline with the rest of the record fields?

2016-03-03 Thread Drago, William @ CSG - NARDA-MITEQ
I haven't seen anyone else mention this yet. Another consideration when 
working with large blobs is to make those columns the last ones in your 
schema. To quote Dr. Hipp:

"make the BLOB columns the last column in your tables.  Or even store the BLOBs 
in a separate table which only has two columns: an integer primary key and the 
blob itself, and then access the BLOB content using a join if you need to. If 
you put various small integer fields after the BLOB, then SQLite has to scan 
through the entire BLOB content (following the linked list of disk pages) to 
get to the integer fields at the end, and that definitely can slow you down."

Found here:

http://sqlite.1065341.n5.nabble.com/Effect-of-blobs-on-performance-td19559.html#a19560

Your blobs are small so this probably doesn't apply to your application, but 
something to keep in mind for future projects.
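As a sketch of the separate-table pattern Dr. Hipp describes (the table and column names are made up for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
create table doc(id integer primary key, title text, created text);
-- BLOBs live in their own two-column table, joined on demand:
create table doc_body(doc_id integer primary key references doc(id), body blob);
""")
db.execute("insert into doc(id, title, created) values(1, 'readme', '2016-03-03')")
db.execute("insert into doc_body(doc_id, body) values(1, ?)", (b"\x00" * 1000,))

# Metadata queries never touch the blob pages...
title = db.execute("select title from doc where id=1").fetchone()[0]

# ...and the blob is fetched via a join only when actually needed.
row = db.execute(
    "select d.title, b.body from doc d join doc_body b on b.doc_id = d.id "
    "where d.id=1").fetchone()
```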

--
Bill Drago
Staff Engineer
L3 Narda-MITEQ
435 Moreland Road
Hauppauge, NY 11788
631-272-5947 / William.Drago at L-3COM.com


> -Original Message-
> From: sqlite-users-bounces at mailinglists.sqlite.org [mailto:sqlite-users-
> bounces at mailinglists.sqlite.org] On Behalf Of Eric Grange
> Sent: Thursday, March 03, 2016 3:16 AM
> To: SQLite mailing list
> Subject: Re: [sqlite] Are BLOB stored inline with the rest of the record 
> fields?
>
> > All BLOBs are stored inline.  The only complication comes when the total
> > row length is longer than a page.
>
> Ok thanks!
>
> That is besides my original question but what is the complication in that 
> case?
> Is the blob partially stored in the row, or is some indirection to other pages
> used, with only some pointer stored in the row?
>
> > Surely you mean big-endian?
>
> Yes, my mistake :)
>
> > Storing Blobs has a few conversions added if you try to use the SQL to
> store it, but it's quite efficient when using the API.
>
> By "use SQL", you mean as a literal blob embedded in the SQL string?
>
> My main use case would be through the API, the actual numbers are fixed
> precision, and so scattered over the range they cannot be displayed to end
> users without using exponents, so some formatting will have to happen.
> Besides size, using a blob rather than base32/base64 would simplify the
> encoding/decoding, and for debugging, blobs usually display in an
> hexadecimal form here, so a big-endian blob would be directly "readable".
>
>
>
> On Wed, Mar 2, 2016 at 11:07 PM, Doug Currie 
> wrote:
>
> > On Wed, Mar 2, 2016 at 4:42 PM, R Smith  wrote:
> > >
> > >
> > > Personally, unless your space is constrained, I would simply save
> > > the numbers as strings, perhaps Hex or BCD with leading chars and
> > > convert as needed. This would sort correctly without tricks and not
> > > do much worse
> > for
> > > space. (Base64 would be even better space-wise but won't sort correct).
> > >
> >
> > There is an encoding defined in RFC 4648 "Base-N Encodings" that does
> > preserve sort order; it is called Base 32 Encoding with Extended Hex
> > Alphabet. I would think the Base64 alphabet could be rearranged to
> > have the same property.
> >
> > e
> > ___
> > sqlite-users mailing list
> > sqlite-users at mailinglists.sqlite.org
> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >




[sqlite] attach db limit

2016-03-03 Thread Richard Hipp
On 3/3/16, John McKown  wrote:
>  Why 125 would be a good question.

The maximum value of a signed 8-bit integer is 127.  Take away two
for the "main" and "temp" databases.

-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] Porting Sqlite to Embedded Platform

2016-03-03 Thread Nitin Bhosale
Hi 

We were attempting to use test_onefile.c based approach 
(http://www.sqlite.org/src/doc/trunk/src/test_onefile.c). This is for embedded 
where filesystem is not available.
This code works and we can create, insert, select on the database created using 
this test_onefile.c. 

However we want to save the db to flash and saw that we should be implementing 
below functions but didn't find any relevant information about it.

**   mediaRead();- Read a 512 byte block from the file.
**   mediaWrite();   - Write a 512 byte block to the file.
**   mediaSync();- Tell the media hardware to sync.

I have a few questions about it. If you answer them it would be really 
valuable information for us.
1. What are the signatures for these functions, and what do the arguments 
mean?
2. After writing the above functions to behave as expected, how do we 
integrate them with the rest of the code?
3. As per our understanding we just have to write the above 3 functions and 
we are done. Is this understanding correct?

Thanks,
Nitin




[sqlite] attach db limit

2016-03-03 Thread John McKown
In the source to 3.12.0, the source in src/main.c has:

#if SQLITE_MAX_ATTACHED<0 || SQLITE_MAX_ATTACHED>125
# error SQLITE_MAX_ATTACHED must be between 0 and 125
#endif

The limits.html page on the Web site also says 125, as do the change notes
for 3.8.6. Why 125 would be a good question. The value relates to an "array"
of bits. If SQLITE_MAX_ATTACHED <= 30 the array is kept in a single int (2
bits are reserved for the dedicated "main" and "temp" databases). If > 30
then it is an array of ints. Now, 125 is 127-2, and 127 is the largest
signed byte value. But having looked at the C code, I don't see why this
couldn't range up to 8*(2^31-1) == 8*2Gi == 16Gi databases. But I could
(easily) be missing something.

Anyway, it appears that the comment you found was never updated.

On Thu, Mar 3, 2016 at 2:40 PM, Tim Uy  wrote:

> Hi, I see that there is a default limit of 10 attached databases. And from
> the docs, 125 is the limit. However, in the file sqliteLimit.h it says
>
> /*
> ** The maximum number of attached databases.  This must be between 0
> ** and 62.  The upper bound on 62 is because a 64-bit integer bitmap
> ** is used internally to track attached databases.
> */
> #ifndef SQLITE_MAX_ATTACHED
> # define SQLITE_MAX_ATTACHED 10
> #endif
>
>
> So is the limit 62 or is the limit 125?
>



-- 
A fail-safe circuit will destroy others. -- Klipstein

Maranatha! <><
John McKown


[sqlite] Can I implement a scrolling window using LIMIT and OFFSET ?

2016-03-03 Thread James K. Lowden
On Thu, 3 Mar 2016 10:43:26 +0800 (CST)
邱朗  wrote:

> > Can anyone describe a situation for which this style of LIMIT &
> > OFFSET is advisable from the application's point of view?  (The
> > DBMS costs are obvious enough.)  
> 
> For me this is the easiest way to implement a scrolling cursor.
> Otherwise I will have to record for each table the condition I use
> to  scroll it. So if there is an easier way to implement the
> scrolling cursor please let me know.

You say, "record ... the condition ... to  scroll [the table]".  I'm
sure I don't know what "condition" you mean.  

You appear to be doing something like:

offset = 0
do 
sqlite3_prepare (limit, offset)
do
sqlite3_step
until SQLITE_DONE
offset += limit
... application stuff ...
until last row processed

The obvious, better alternative is

sqlite3_prepare
do 
do
sqlite3_step
until LIMIT
... application stuff ...
until last row processed

because in the second example you're using SQLite's cursor to "record
the condition of the table".  There's nothing for you to keep between
"pages"; you just start from where you stopped.  

How exactly is the first way "easiest"?  

--jkl
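The WHERE-and-LIMIT alternative discussed in this thread can be sketched concretely. This is a hedged illustration assuming an integer key to resume from (keyset pagination): the only state kept between pages is the last key seen, not an offset.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table t(id integer primary key, val text)")
db.executemany("insert into t(id, val) values(?, ?)",
               [(i, "row%d" % i) for i in range(1, 101)])

PAGE = 10

def next_page(after_id):
    # Resume from the last key seen; no OFFSET, so later pages
    # cost the same as earlier ones.
    return db.execute(
        "select id, val from t where id > ? order by id limit ?",
        (after_id, PAGE)).fetchall()

seen = []
last = 0
while True:
    page = next_page(last)
    if not page:
        break
    seen.extend(page)
    last = page[-1][0]   # the only "condition" recorded between pages
```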



[sqlite] attach db limit

2016-03-03 Thread Tim Uy
(Thanks Richard for the explanation of "main" and "temp".)

On Thu, Mar 3, 2016 at 2:02 PM, Tim Uy  wrote:

> So comment in sqliteLimit just needs an update?
>
> On Thu, Mar 3, 2016 at 1:20 PM, Richard Hipp  wrote:
>
>> On 3/3/16, John McKown  wrote:
>> >  Why 125 would be a good question.
>>
>> The maximum value of a signed 8-bit integer is 127.  Take away two
>> for the "main" and "temp" databases.
>>
>> --
>> D. Richard Hipp
>> drh at sqlite.org
>
>


[sqlite] attach db limit

2016-03-03 Thread Tim Uy
So comment in sqliteLimit just needs an update?

On Thu, Mar 3, 2016 at 1:20 PM, Richard Hipp  wrote:

> On 3/3/16, John McKown  wrote:
> >  Why 125 would be a good question.
>
> The maximum value of a signed 8-bit integer is 127.  Take away two
> for the "main" and "temp" databases.
>
> --
> D. Richard Hipp
> drh at sqlite.org
>


[sqlite] Partial index query planner questions

2016-03-03 Thread Richard Hipp
On 3/3/16, David Raymond  wrote:
>
> create table problemTable
> (
> pk text primary key,
> A text not null,
> n int not null
> --other fields here
> );
> create index typicalFullIndex on problemTable (A);
> create index unusedPartial on problemTable (A) where n > 1;
>
> sqlite> explain query plan select count(*) from problemTable where n > 1;
> 0|0|0|SCAN TABLE problemTable
>

Workaround:  SELECT count(*) FROM problemTable WHERE n>1 AND a>'';

The a>'' term will always be true (since column A is of type TEXT) so
the extra term in the WHERE clause does not change the outcome.  But
it does trick the query planner into trying to use an index on A,
which gives the result you desire.
-- 
D. Richard Hipp
drh at sqlite.org
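A quick way to check the workaround (the exact plan text varies by SQLite version, so treat the output as version-dependent):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
create table problemTable(pk text primary key, A text not null, n int not null);
create index unusedPartial on problemTable (A) where n > 1;
""")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the detail text.
    return " ".join(r[-1] for r in db.execute("explain query plan " + sql))

bare = plan("select count(*) from problemTable where n > 1")
# Adding the always-true A > '' term gives the planner an indexed column
# to search on, so the partial index becomes usable.
tricked = plan("select count(*) from problemTable where n > 1 and A > ''")
```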


[sqlite] attach db limit

2016-03-03 Thread Tim Uy
Hi, I see that there is a default limit of 10 attached databases. And from
the docs, 125 is the limit. However, in the file sqliteLimit.h it says

/*
** The maximum number of attached databases.  This must be between 0
** and 62.  The upper bound on 62 is because a 64-bit integer bitmap
** is used internally to track attached databases.
*/
#ifndef SQLITE_MAX_ATTACHED
# define SQLITE_MAX_ATTACHED 10
#endif


So is the limit 62 or is the limit 125?


[sqlite] Can I implement a scrolling window using LIMIT and OFFSET ?

2016-03-03 Thread 邱朗
So according to your answers and others, this limitation is always there even 
though the document said "obsolete"? Just want to double confirm.


Thanks!


Qiulang


At 2016-03-03 11:06:08, "Darren Duncan"  wrote:
>On 2016-03-02 6:48 PM, 邱朗 wrote:
>>> A better way that is very similar is to use WHERE and LIMIT instead.
> >
>> I know that but as I just replied to this thread, if I do that I will then 
>> have to record the columns I use for each table I want to scroll. So 
>> from the implementation point of view, using LIMIT & OFFSET is easier.
>>
>> Qiulang
>
>You have to record the columns anyway in order to know what you're sorting 
>your 
>results by, this is just reuse.  Or if not, then maybe you should bite the 
>bullet and record that extra info.
>
>If you don't want to do that in order to simplify things, then you live with 
>the 
>limitations and sluggishness of LIMIT and OFFSET, as those limitations are 
>systemic to LIMIT and OFFSET.
>
>-- Darren Duncan
>
>> At 2016-03-03 10:42:37, "Darren Duncan"  wrote:
>>> On 2016-03-02 5:02 AM, 邱朗 wrote:
 Here... said "Do not try to implement a scrolling window using LIMIT and 
 OFFSET. Doing so will become sluggish as the user scrolls down toward the 
 bottom of the list." But the page also said "This information is obsolete."
 ... talks about LIMIT & OFFSET, without mentioning that is a bad idea. So 
 my question is can I do that or not (will it become sluggish if I do 
 that)?
>>>
>>> Using LIMIT and OFFSET is generally a bad idea, not just on performance but 
>>> also
>>> logically.
>>>
>>> A better way that is very similar is to use WHERE and LIMIT instead.
>>>
>>> Assuming that you are going through pages consecutively, you know what rows
>>> you've already seen, particularly in the prior page.
>>>
>>> Whatever columns you are sorting your result by, take the last row just 
>>> seen and
>>> the query for the next page is found by saying WHERE > or < etc the field 
>>> values
>>> for that last row.
>>>
>>> So you're sure to get the rows just after the ones you just saw, and later 
>>> pages
>>> shouldn't be any slower than earlier ones.
>>>
>>> This approach is also resilient to arbitrary changes to the database between
>>> page views so you don't either repeat rows or skip rows due to offset 
>>> mismatch.
>>>
>>> -- Darren Duncan
>


[sqlite] Can I implement a scrolling window using LIMIT and OFFSET ?

2016-03-03 Thread 邱朗
>A better way that is very similar is to use WHERE and LIMIT instead.
I know that but as I just replied to this thread, if I do that I will then 
have to record the columns I use for each table I want to scroll. So from the 
implementation point of view, using LIMIT & OFFSET is easier.

Qiulang





At 2016-03-03 10:42:37, "Darren Duncan"  wrote:
>On 2016-03-02 5:02 AM, 邱朗 wrote:
>> Here... said "Do not try to implement a scrolling window using LIMIT and 
>> OFFSET. Doing so will become sluggish as the user scrolls down toward the 
>> bottom of the list." But the page also said "This information is obsolete."
>> ... talks about LIMIT & OFFSET, without mentioning that is a bad idea. So my 
>> question is can I do that or not (will it become sluggish if I do that)?
>
>Using LIMIT and OFFSET is generally a bad idea, not just on performance but 
>also 
>logically.
>
>A better way that is very similar is to use WHERE and LIMIT instead.
>
>Assuming that you are going through pages consecutively, you know what rows 
>you've already seen, particularly in the prior page.
>
>Whatever columns you are sorting your result by, take the last row just seen 
>and 
>the query for the next page is found by saying WHERE > or < etc the field 
>values 
>for that last row.
>
>So you're sure to get the rows just after the ones you just saw, and later 
>pages 
>shouldn't be any slower than earlier ones.
>
>This approach is also resilient to arbitrary changes to the database between 
>page views so you don't either repeat rows or skip rows due to offset mismatch.
>
>-- Darren Duncan
>
>


[sqlite] Can I implement a scrolling window using LIMIT and OFFSET ?

2016-03-03 Thread 邱朗
Clemens Ladisch  wrote:

>> https://www.sqlite.org/lang_select.html talks about LIMIT & OFFSET,
>> without mentioning that is a bad idea.
> Neither does it mention that it is a good idea.
>> can I do that or not (will it become sluggish if I do that)?
> When you use large OFFSET values, the database must compute all these
> rows before throwing them away.

Can anyone describe a situation for which this style of LIMIT & OFFSET
is advisable from the application's point of view?  (The DBMS costs are
obvious enough.)  

For me this is the easiest way to implement a scrolling cursor. Otherwise I 
will have to record for each table the condition I use to scroll it.
So if there is an easier way to implement the scrolling cursor please let me 
know.


Thanks!


Qiulang






At 2016-03-02 21:02:27, "邱朗"  wrote:

Here (http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor) said "Do not try to 
implement a scrolling window using LIMIT and OFFSET. Doing so will become 
sluggish as the user scrolls down toward the bottom of the list." But the page 
also said "This information is obsolete."
https://www.sqlite.org/lang_select.html talks about LIMIT & OFFSET, without 
mentioning that is a bad idea. So my question is can I do that or not (will it 
become sluggish if I do that) ?

Thanks

Qiulang








[sqlite] Can I implement a scrolling window using LIMIT and OFFSET ?

2016-03-03 Thread R Smith


On 2016/03/03 5:10 AM, 邱朗 wrote:
> So according to your answers and others, this limitation is always there even 
> though the document said "obsolete"? Just want to double confirm.
>
>
> Thanks!

To add to what Clemens already mentioned - this problem is not an SQLite 
problem; it is true for ALL SQL databases. It is bad to use LIMIT and 
OFFSET for several reasons. Performance, obviously, but also DB changes 
may alter the position at which a row appears within an offset, which can 
make the user skip entire rows when scrolling to the next page, or the 
"Back" button might produce a different view than he/she had before.

If your DB is small and does not change often, then the above is all 
negligible and you can use it. But for any performant or large or 
frequently updated DB, it is a bad idea - in ALL SQL engines.

Speaking of all engines - some allow a scrolling cursor and some allow 
windowing functions that might be pressed into service mitigating the 
problem. It remains however faster to use WHERE and LIMIT as Clemens 
suggested in all cases I know of.


Cheers,
Ryan



[sqlite] Are BLOB stored inline with the rest of the record fields?

2016-03-03 Thread Eric Grange
> All BLOBs are stored inline.  The only complication comes when the total
row length is longer than a page.

Ok thanks!

That is besides my original question but what is the complication in that
case?
Is the blob partially stored in the row, or is some indirection to other
pages used, with only some pointer stored in the row?

> Surely you mean big-endian?

Yes, my mistake :)

> Storing Blobs has a few conversions added if you try to use the SQL to
store it, but it's quite efficient when using the API.

By "use SQL", you mean as a literal blob embedded in the SQL string?

My main use case would be through the API, the actual numbers are fixed
precision, and so scattered over the range they cannot be displayed to end
users without using exponents, so some formatting will have to happen.
Besides size, using a blob rather than base32/base64 would simplify the
encoding/decoding, and for debugging, blobs usually display in an
hexadecimal form here, so a big-endian blob would be directly "readable".



On Wed, Mar 2, 2016 at 11:07 PM, Doug Currie  wrote:

> On Wed, Mar 2, 2016 at 4:42 PM, R Smith  wrote:
> >
> >
> > Personally, unless your space is constrained, I would simply save the
> > numbers as strings, perhaps Hex or BCD with leading chars and convert as
> > needed. This would sort correctly without tricks and not do much worse
> for
> > space. (Base64 would be even better space-wise but won't sort correct).
> >
>
> There is an encoding defined in RFC 4648 "Base-N Encodings" that does
> preserve sort order; it is called Base 32 Encoding with Extended Hex
> Alphabet. I would think the Base64 alphabet could be rearranged to have the
> same property.
>
> e
>


[sqlite] Are BLOB stored inline with the rest of the record fields?

2016-03-03 Thread Paul Sanderson
What is the likely max length of a row in your table? if your row
length is less than page length-35 then it will fit in to one page.

The amount of wasted page depends on what fraction of a page size your
typical record is.

If you have a record that is 600 bytes long in a DB with a 1024-byte
page size, then you would "waste" about 1024 - 600 = 424 bytes per record.

If your page size is 4096 then, for the same record size, you fit 6
records into the page, so you would waste about (4096 - 600*6) / 6 =
496 / 6, i.e. about 82 bytes per record.

Pick your page size appropriately although there is usually more to
think about with page size than just wasted space.
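The back-of-envelope estimate above can be generalized (a sketch of that arithmetic only; it deliberately ignores SQLite's per-page and per-cell header overhead):

```python
# Approximate wasted bytes per record for a given page size and record
# size, following the arithmetic above (header overheads ignored).
def waste_per_record(page_size: int, record_size: int) -> float:
    per_page = page_size // record_size          # records that fit in one page
    unused = page_size - per_page * record_size  # leftover bytes in the page
    return unused / per_page

assert round(waste_per_record(1024, 600)) == 424  # 1 record/page
assert round(waste_per_record(4096, 600)) == 83   # 6 records/page, 496/6
```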
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 3 March 2016 at 08:58, Simon Slavin  wrote:
>
>> On 3 Mar 2016, at 8:16am, Eric Grange  wrote:
>>
>>> All BLOBs are stored inline.  The only complication comes when the total
>>> row length is longer than a page.
>>
>> Ok thanks!
>>
>> That is besides my original question but what is the complication in that
>> case?
>> Is the blob partially stored in the row, or is some indirection to other
>> pages used, with only some pointer stored in the row?
>
> SQLite database files are split up into pages.  Every page in a database is 
> the same number of bytes long.  Every page belongs to the header, or to free 
> space, or to a specific TABLE or INDEX.
>
> SQLite stores all the values for a row together.  Changing the value of any 
> column for a row requires rewriting the entire row, and then the row entry 
> for every index on that table which was changed. [1]
>
> Ideally a row of data would fit in a page.  This simplifies the job of 
> reading a row from disk or changing the data in a row.  SQLite does this 
> where possible.
>
> But it's possible for a row to be longer than a page.  This happens 
> especially where the row contains a long string or a long BLOB.  In that case 
> SQLite has to fetch more than one page from disk just to access the values 
> for that row.  And it's possible for a BLOB to be so long that it requires 
> one or more pages entirely to itself.  So the word 'inline' is a little 
> tricky here because there is more than one 'line' involved.
>
> Simon.
>
> [1] Handwave a few exceptions.


[sqlite] Are BLOB stored inline with the rest of the record fields?

2016-03-03 Thread Simon Slavin

> On 3 Mar 2016, at 8:16am, Eric Grange  wrote:
> 
>> All BLOBs are stored inline.  The only complication comes when the total
>> row length is longer than a page.
> 
> Ok thanks!
> 
> That is besides my original question but what is the complication in that
> case?
> Is the blob partially stored in the row, or is some indirection to other
> pages used, with only some pointer stored in the row?

SQLite database files are split up into pages.  Every page in a database is the 
same number of bytes long.  Every page belongs to the header, or to free space, 
or to a specific TABLE or INDEX.

SQLite stores all the values for a row together.  Changing the value of any 
column for a row requires rewriting the entire row, and then the row entry for 
every index on that table which was changed. [1]

Ideally a row of data would fit in a page.  This simplifies the job of reading 
a row from disk or changing the data in a row.  SQLite does this where possible.

But it's possible for a row to be longer than a page.  This happens especially 
where the row contains a long string or a long BLOB.  In that case SQLite has 
to fetch more than one page from disk just to access the values for that row.  
And it's possible for a BLOB to be so long that it requires one or more pages 
entirely to itself.  So the word 'inline' is a little tricky here because there 
is more than one 'line' involved.

Simon.

[1] Handwave a few exceptions.
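The point about long BLOBs spilling onto extra pages can be observed from Python (a sketch using the standard sqlite3 module; exact page counts depend on the SQLite version, so only the growth is checked):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA page_size = 4096")
db.execute("CREATE TABLE t(b BLOB)")

before = db.execute("PRAGMA page_count").fetchone()[0]
# A BLOB much longer than one 4096-byte page forces overflow pages:
db.execute("INSERT INTO t VALUES (?)", (b"x" * 100_000,))
after = db.execute("PRAGMA page_count").fetchone()[0]

# ~100 KB of payload needs on the order of 25 extra 4096-byte pages.
assert after - before >= 20
```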


[sqlite] SQLITE Porting on FreeRTOS with STM32

2016-03-03 Thread Clemens Ladisch
Nitin Bhosale wrote:
> I am trying to port sqlite3 VFS implementation on freertos(FATFS on
> stm32 ARM CortexM4). I have created an amalgamation with sqlite3
> version 3.12 base code.
>
> When I run the below code it runs to create a database and a query
> "create a table cars". But when query "insert" is run then fails with
> following error:- error no: 26(SQLITE_NOTADB)
> error: "file is encrypted or is not a database"
>
> I have given following options during compile:
>   SQLITE_THREADSAFE=0
>   SQLITE_OS_OTHER=1
>   SQLITE_OMIT_WAL=1

You did not set SQLITE_TEMP_STORE as specified in the demovfs
documentation. Anyway, you have to trace the actual file system calls
to find out where the wrong data comes from.


Regards,
Clemens


[sqlite] SQLITE Porting on FreeRTOS with STM32

2016-03-03 Thread Nitin Bhosale
Hi All

I am trying to port the sqlite3 VFS implementation to FreeRTOS (FATFS on STM32
ARM Cortex-M4). I have created an amalgamation from the sqlite3 version 3.12
base code.

When I run the code below, it creates the database and executes the "create
table Cars" query. But when the "insert" query is run, it fails with the
following error:
error no: 26 (SQLITE_NOTADB)
error: "file is encrypted or is not a database"

I have given following options during compile:
SQLITE_THREADSAFE=0
SQLITE_OS_OTHER=1
SQLITE_OMIT_WAL=1

Any pointer would really help.

Have a sample code as below:

sqlite3 *db = NULL;
char *zErrMsg = 0;
int rc;
const char *sql;


rc = sqlite3_vfs_register(sqlite3_demovfs(), 1);   /* Register the VFS */
if( rc != SQLITE_OK )
{
    abort();
}

rc = sqlite3_open_v2("testsql.db", &db, SQLITE_OPEN_READWRITE |
                     SQLITE_OPEN_CREATE, "demo"); /* Open (or create) the database */
if( rc )
{
    abort();
}

sql = "PRAGMA journal_mode=OFF";                  /* Create SQL statement */
rc = sqlite3_exec(db, sql, NULL, NULL, &zErrMsg); /* Execute SQL statement */
if( rc != SQLITE_OK )
{
    sqlite3_free(zErrMsg);
    abort();
}

sql = "CREATE TABLE Cars(Id INT, Name TEXT, Price INT);";
rc = sqlite3_exec(db, sql, NULL, NULL, &zErrMsg); /* Execute SQL statement */
if( rc != SQLITE_OK )
{
    sqlite3_free(zErrMsg);
    abort();
}

sql = "INSERT INTO Cars VALUES(1, 'Audi', 52642);"
      "INSERT INTO Cars VALUES(2, 'Skoda', 9000);";
rc = sqlite3_exec(db, sql, NULL, NULL, &zErrMsg); /* Execute SQL statement */
if( rc != SQLITE_OK )
{
    sqlite3_free(zErrMsg);
    abort();
}

sql = "SELECT * FROM Cars";                       /* Create SQL statement */
const char *data = "Callback function called";
rc = sqlite3_exec(db, sql, callback, (void *)data, &zErrMsg); /* Execute
SQL statement */
if( rc != SQLITE_OK )
{
    sqlite3_free(zErrMsg);
    abort();
}
sqlite3_close(db);




[sqlite] Can I implement a scrolling window using LIMIT and OFFSET ?

2016-03-03 Thread Darren Duncan
On 2016-03-02 11:18 PM, R Smith wrote:
> On 2016/03/03 5:10 AM, ?? wrote:
>> So according to your answers and others, this limitation is always there
>> even the document said "obsolete"? Just want to double confirm.
>
> To add to what Clemens already mentioned - this problem is not an SQLite
> problem, this problem is true and exists for ALL SQL databases. It is bad
> to use LIMIT and OFFSET for several reasons. Performance obviously, but
> also DB changes may alter the position within an offset at which a line
> appears, which can make the user skip entire lines when scrolling to the
> next page, or the "Back" button might produce a different view than
> he/she had before.
>
> If your DB is small and does not change often, then the above is all
> negligible and you can use it. But for any performant or large or
> frequently updated DB, it is a bad idea - in ALL SQL engines.
>
> Speaking of all engines - some allow a scrolling cursor and some allow
> windowing functions that might be pressed into service mitigating the
> problem. It remains however faster to use WHERE and LIMIT as Clemens
> suggested in all cases I know of.
>
>
> Cheers,
> Ryan

Ryan, I think I was the one who actually previously said most of this stuff you
are talking about, rather than Clemens, including about using WHERE and LIMIT
and about consequences of OFFSET skipping or repeating lines due to other
users' updates. -- Darren Duncan


[sqlite] DB-Journal

2016-03-03 Thread Simon Slavin

On 2 Mar 2016, at 1:48pm, Itxaso Perez  wrote:

> - If this (power loss) occurs using only SQLite (without NHibernate),
> is the opening of the app not a problem?
> - How can I 'execute' the DB-Journal file to try to 'correct' the
> database before executing NHibernate?

[This is a simplified explanation.]

Having a journal file on disk should not crash your app.  If you are using just 
SQLite you should not delete the journal file.

When you use SQLite to open the database it looks for the journal file.  If it 
finds the journal file it knows your program crashed.  It looks at the contents 
of the database file and the contents of the journal file and uses the two 
together to 'rescue' the most up-to-date data possible.  Then it lets your app 
continue as normal.  You do not have to do anything special to make this 
happen.  SQLite does it automatically.

If your app is crashing because of this journal file, the crash does not come 
from SQLite.  It may come from NHibernate or from your app's code.  It might be 
good to look for advice from an NHibernate forum.  But if they cannot help ...

It may be possible to recover your database by using a SQLite tool.  First, 
take backup copies of the database and journal files!  Then, from this page



download the 'Precompiled Binaries' for your platform.  Use the sqlite3 command 
line tool to open your database.

OS prompt> sqlite3 mydatabase.ext
SQLite version ...
> .quit

This should cause SQLite to try to recover the database.  Maybe it will work. 
Maybe it won't work.

Simon.