[sqlite] bug: explain Rewind Le

2017-10-31 Thread Egor Shalashnikov
create table t(n number, v varchar2(10));
insert into t values (1, 'one');
explain select * from t where 0 < n;

addr  opcode         p1    p2    p3    p4             p5  comment
----  -------------  ----  ----  ----  -------------  --  -------------
0     Init           0     11    0                    00  Start at 11
1     OpenRead       0     2     0     2              00  root=2 iDb=0; t
2     Rewind         0     9     0                    00
3     Column         0     0     1                    00  r[1]=t.n
4     Le             2     8     1     (BINARY)       53  if r[2]<=r[1] goto 8
5     Copy           1     3     0                    00  r[3]=r[1]
6     Column         0     1     4                    00  r[4]=t.v
7     ResultRow      3     2     0                    00  output=r[3..4]
8     Next           0     3     0                    01
9     Close          0     0     0                    00
10    Halt           0     0     0                    00
11    Transaction    0     0     2     1              01  usesStmtJournal=0
12    TableLock      0     2     0     t              00  iDb=0 root=2 write=0
13    Integer        0     2     0                    00  r[2]=0
14    Goto           0     1     0                    00

Line 4 seems wrong to me: for a row to be skipped (goto 8), t.n (r[1]) should
be <= 0 (r[2]). But the comment reads "if r[2]<=r[1] goto 8", which is the
other way around.

Egor.
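A minimal repro sketch, assuming Python's standard sqlite3 module and an
SQLite library built with SQLITE_ENABLE_EXPLAIN_COMMENTS (otherwise the
comment column comes back empty), for checking the comment on address 4
against your own build:

import sqlite3

# Re-run the statements from the report above and print the EXPLAIN rows.
conn = sqlite3.connect(":memory:")
conn.execute("create table t(n number, v varchar2(10))")
conn.execute("insert into t values (1, 'one')")

cur = conn.execute("explain select * from t where 0 < n")
cols = [d[0] for d in cur.description]  # addr, opcode, p1..p5, comment
print("\t".join(cols))
for row in cur:
    # Address 4 is the Le opcode whose comment is questioned above.
    print("\t".join("" if v is None else str(v) for v in row))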


Re: [sqlite] ASP.NET MVC 5 Connection

2017-10-31 Thread Joseph L. Casale
-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On
Behalf Of Rahmat Ali
Sent: Tuesday, October 31, 2017 2:17 PM
To: SQLite mailing list 
Subject: Re: [sqlite] ASP.NET MVC 5 Connection
 
> Yes, I am using MVC 5 in my project. I will move to Core in the future, but
> at this time I am using MVC 5. Is there any example you found
> elsewhere?

Are you using entity framework?

Searching google for "asp.net mvc 5 sqlite" shows a few older hits, such
as https://dotnetthoughts.net/how-to-use-sqlite-in-asp-net-5/


Re: [sqlite] ASP.NET MVC 5 Connection

2017-10-31 Thread Rahmat Ali
Yes, I am using MVC 5 in my project. I will move to Core in the future, but at
this time I am using MVC 5. Is there any example you found
elsewhere?


On Wed, Nov 1, 2017 at 12:48 AM, Joseph L. Casale  wrote:

> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On
> Behalf Of Rahmat Ali
> Sent: Tuesday, October 31, 2017 10:43 AM
> To: sqlite-users@mailinglists.sqlite.org
> Subject: [sqlite] ASP.NET MVC 5 Connection
>
> > I am trying to attach SQLite to my MVC 5 app, but I am not able to do so.
> > Also, I have not found any solution online yet. Can anyone help me in
> > this regard? Thanks in advance.
>
> Are you really using version 5? That would mean you are using a two-year-old
> beta. I assume (or hope, for that matter) you are using asp.net core at
> either v1 or v2.
>
> Regardless, I found examples for the old beta packages online but if you
> are using a release version, the MS docs are sufficient. See
> https://docs.microsoft.com/en-us/aspnet/core/tutorials/
> first-mvc-app-xplat/working-with-sql


Re: [sqlite] ASP.NET MVC 5 Connection

2017-10-31 Thread Joseph L. Casale
-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On
Behalf Of Rahmat Ali
Sent: Tuesday, October 31, 2017 10:43 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: [sqlite] ASP.NET MVC 5 Connection

> I am trying to attach SQLite to my MVC 5 app, but I am not able to do so.
> Also, I have not found any solution online yet. Can anyone help me in
> this regard? Thanks in advance.

Are you really using version 5? That would mean you are using a two-year-old
beta. I assume (or hope, for that matter) you are using asp.net core at either
v1 or v2.

Regardless, I found examples for the old beta packages online but if you
are using a release version, the MS docs are sufficient. See 
https://docs.microsoft.com/en-us/aspnet/core/tutorials/first-mvc-app-xplat/working-with-sql


Re: [sqlite] ASP.NET MVC 5 Connection

2017-10-31 Thread Rahmat Ali
I have searched a lot but have not been able to find a solution. I have viewed
that page several times, but nothing there is helping me out.



On Tue, Oct 31, 2017 at 11:59 PM, GB  wrote:

> You may want to have a look at http://system.data.sqlite.org/index.html/doc/trunk/www/index.wiki
>
>
>
> Rahmat Ali wrote on 31.10.2017 at 17:43:
>
>> I am trying to attach SQLite to my MVC 5 app, but I am not able to do so.
>> Also, I have not found any solution online yet. Can anyone help me in
>> this regard? Thanks in advance.


Re: [sqlite] ASP.NET MVC 5 Connection

2017-10-31 Thread GB
You may want to have a look at 
http://system.data.sqlite.org/index.html/doc/trunk/www/index.wiki



Rahmat Ali wrote on 31.10.2017 at 17:43:

I am trying to attach SQLite to my MVC 5 app, but I am not able to do so.
Also, I have not found any solution online yet. Can anyone help me in
this regard? Thanks in advance.




Re: [sqlite] LSM database file growth?

2017-10-31 Thread Charles Leifer
Thanks so much, this explains things neatly. I was aware of the tree
compaction and the use of delete tombstones, but wasn't sure how it all
played out in terms of space reclamation. In other words, if the total size
of my keyspace is fixed, then the database won't grow without bounds, even
if keys within that space are frequently updated, deleted, or re-inserted.

On Tue, Oct 31, 2017 at 11:16 AM, Dan Kennedy  wrote:

> On 10/31/2017 10:50 PM, Charles Leifer wrote:
>
>> Is the LSM database append-only, as in the file size will always
>> grow/never
>> shrink (even if there are deletions/overwrites)?
>>
>
> An LSM database is basically a series of tree structures on disk. When you
> write to an LSM database you add the new key to an in-memory tree. Once
> that tree is full it is flushed out to disk. To query the database for a
> key, you query each of these trees from newest to oldest until you find a
> match or run out of trees. To prevent the number of trees on disk from
> growing indefinitely, two or more old trees are periodically merged
> together to create a single, larger tree structure.
>
> A DELETE operation is handled like an INSERT, except a special
> "delete-marker" key is added to the in-memory tree structure.
>
> When two old trees are merged together, if a delete-marker is merged with
> a real insert-key, both are discarded, so that the new tree contains
> neither the delete-marker nor the insert key.
>
> So if you delete a bunch of data space is reclaimed eventually, but it can
> take a while to happen. Optimizing the database involves merging all trees
> on disk together, so this has the effect of reclaiming all unused space:
>
> http://sqlite.org/src4/doc/trunk/www/lsmusr.wiki#database_file_optimization
>
> Dan.
>
>
>
>
>


[sqlite] ASP.NET MVC 5 Connection

2017-10-31 Thread Rahmat Ali
I am trying to attach SQLite to my MVC 5 app, but I am not able to do so.
Also, I have not found any solution online yet. Can anyone help me in
this regard? Thanks in advance.


Re: [sqlite] LSM database file growth?

2017-10-31 Thread Dan Kennedy

On 10/31/2017 10:50 PM, Charles Leifer wrote:

Is the LSM database append-only, as in the file size will always grow/never
shrink (even if there are deletions/overwrites)?


An LSM database is basically a series of tree structures on disk. When 
you write to an LSM database you add the new key to an in-memory tree. 
Once that tree is full it is flushed out to disk. To query the database 
for a key, you query each of these trees from newest to oldest until you 
find a match or run out of trees. To prevent the number of trees on disk 
from growing indefinitely, two or more old trees are periodically merged 
together to create a single, larger tree structure.


A DELETE operation is handled like an INSERT, except a special 
"delete-marker" key is added to the in-memory tree structure.


When two old trees are merged together, if a delete-marker is merged
with a real insert-key, both are discarded, so that the new tree
contains neither the delete-marker nor the insert key.


So if you delete a bunch of data space is reclaimed eventually, but it 
can take a while to happen. Optimizing the database involves merging all 
trees on disk together, so this has the effect of reclaiming all unused 
space:


http://sqlite.org/src4/doc/trunk/www/lsmusr.wiki#database_file_optimization

Dan.
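To make the merge and delete-marker behaviour concrete, a small illustrative
sketch in plain Python (not lsm1's actual code): two sorted runs are merged,
the newer run takes precedence, and a delete-marker cancels the insert key it
shadows.

DELETE = object()   # sentinel used as the delete-marker "value"

def merge_runs(newer, older):
    """Merge two sorted (key, value) runs; `newer` wins on duplicate keys.

    Sketch only.  It assumes this is the final, optimize-style merge of all
    runs, so delete-markers can be dropped from the output; a real LSM keeps
    a marker as long as an older run might still contain the key.
    """
    merged = []
    i = j = 0
    while i < len(newer) and j < len(older):
        if newer[i][0] < older[j][0]:
            entry = newer[i]; i += 1
        elif newer[i][0] > older[j][0]:
            entry = older[j]; j += 1
        else:                        # same key: newer wins, older is discarded
            entry = newer[i]; i += 1; j += 1
        if entry[1] is not DELETE:   # marker and shadowed key both disappear
            merged.append(entry)
    for entry in newer[i:] + older[j:]:
        if entry[1] is not DELETE:
            merged.append(entry)
    return merged

# 'b' was inserted in the old run and later deleted in the new one.
old_run = [('a', 1), ('b', 2), ('c', 3)]
new_run = [('b', DELETE), ('d', 4)]
print(merge_runs(new_run, old_run))   # [('a', 1), ('c', 3), ('d', 4)]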







[sqlite] LSM database file growth?

2017-10-31 Thread Charles Leifer
Is the LSM database append-only, as in the file size will always grow/never
shrink (even if there are deletions/overwrites)?

Thanks!


Re: [sqlite] Segfault when query again in-memory db

2017-10-31 Thread Keith Medcalf

>I tested the in-memory with multi-thread (but inserts/deletes are in
>a lock)

What does "with multi-thread" mean (to you)?

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Wang, Peter (Xu)
>Sent: Tuesday, 31 October, 2017 02:21
>To: sqlite-users@mailinglists.sqlite.org
>Cc: wangxu198...@gmail.com
>Subject: [sqlite] Segfault when query again in-memory db
>
>Hi
>I am trying to provide an asynchronous queue based on sqlite3 in Python.
>Currently, the file-based tests already pass with my queue.
>When switching from a file to an in-memory db, I keep hitting a "segfault"
>issue when running the same test suite.
>
>Can anyone help me out of this situation?
>
>
>I tested the in-memory with multi-thread (but inserts/deletes are in
>a lock)
>Here is the trace of the core file:
>
>(gdb) bt
>#0  sqlite3_value_type (pVal=0x0) at sqlite3.c:72512
>#1  0x7fda2a34fd86 in sqlite3_column_type (pStmt=0x7fda08004178,
>i=) at sqlite3.c:73318
>#2  0x7fda2a607987 in _pysqlite_fetch_one_row
>(self=self@entry=0x7fda2a204b20) at /build/python2.7-
>1tJBSB/python2.7-2.7.12/Modules/_sqlite/cursor.c:352
>#3  0x7fda2a6089c1 in _pysqlite_query_execute
>(self=0x7fda2a204b20, multiple=, args=)
>at /build/python2.7-1tJBSB/python2.7-
>2.7.12/Modules/_sqlite/cursor.c:711
>#4  0x004b0cb3 in PyObject_Call () at
>../Objects/abstract.c:2546
>#5  0x004ce5d0 in PyEval_CallObjectWithKeywords () at
>../Python/ceval.c:4219
>#6  0x7fda2a609106 in pysqlite_connection_execute
>(self=, args=('SELECT _id, data FROM queue_default
>ORDER BY _id ASC LIMIT 1', ()))
>at /build/python2.7-1tJBSB/python2.7-
>2.7.12/Modules/_sqlite/connection.c:1262
>#7  0x004cb945 in call_function (oparg=,
>pp_stack=0x7fda19ff9fc0) at ../Python/ceval.c:4350
>#8  PyEval_EvalFrameEx () at ../Python/ceval.c:2987
>#9  0x004c2765 in PyEval_EvalCodeEx () at
>../Python/ceval.c:3582
>#10 0x004ca8d1 in fast_function (nk=0, na=,
>n=, pp_stack=0x7fda19ffa1d0, func=0x7fda2a83c050>)
>at ../Python/ceval.c:4445
>#11 call_function (oparg=, pp_stack=0x7fda19ffa1d0) at
>../Python/ceval.c:4370
>#12 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
>#13 0x004c9d8f in fast_function (nk=,
>na=, n=, pp_stack=0x7fda19ffa320,
>func=) at
>../Python/ceval.c:4435
>#14 call_function (oparg=, pp_stack=0x7fda19ffa320) at
>../Python/ceval.c:4370
>#15 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
>#16 0x004c2765 in PyEval_EvalCodeEx () at
>../Python/ceval.c:3582
>#17 0x004ca099 in fast_function (nk=1, na=,
>n=, pp_stack=0x7fda19ffa530, func=0x7fda2a841758>)
>at ../Python/ceval.c:4445
>#18 call_function (oparg=, pp_stack=0x7fda19ffa530) at
>../Python/ceval.c:4370
>#19 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
>#20 0x004c2765 in PyEval_EvalCodeEx () at
>../Python/ceval.c:3582
>#21 0x004de8b8 in function_call.lto_priv () at
>../Objects/funcobject.c:523
>#22 0x004b0cb3 in PyObject_Call () at
>../Objects/abstract.c:2546
>#23 0x004c6ad1 in ext_do_call (nk=,
>na=, flags=, pp_stack=0x7fda19ffa7e8,
>func=) at
>../Python/ceval.c:4664
>#24 PyEval_EvalFrameEx () at ../Python/ceval.c:3026
>#25 0x004c9d8f in fast_function (nk=,
>na=, n=, pp_stack=0x7fda19ffa930,
>func=) at
>../Python/ceval.c:4435
>#26 call_function (oparg=, pp_stack=0x7fda19ffa930) at
>../Python/ceval.c:4370
>#27 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
>#28 0x004c9d8f in fast_function (nk=,
>na=, n=, pp_stack=0x7fda19ffaa80,
>---Type  to continue, or q  to quit---
>func=) at
>../Python/ceval.c:4435
>#29 call_function (oparg=, pp_stack=0x7fda19ffaa80) at
>../Python/ceval.c:4370
>#30 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
>#31 0x004c2765 in PyEval_EvalCodeEx () at
>../Python/ceval.c:3582
>#32 0x004de6fe in function_call.lto_priv () at
>../Objects/funcobject.c:523
>#33 0x004b0cb3 in PyObject_Call () at
>../Objects/abstract.c:2546
>#34 0x004f492e in instancemethod_call.lto_priv () at
>../Objects/classobject.c:2602
>#35 0x004b0cb3 in PyObject_Call () at
>../Objects/abstract.c:2546
>#36 0x004ce5d0 in PyEval_CallObjectWithKeywords () at
>../Python/ceval.c:4219
>#37 0x00597e32 in t_bootstrap () at
>../Modules/threadmodule.c:620
>#38 0x7fda2d77f6ba in start_thread (arg=0x7fda19ffb700) at
>pthread_create.c:333
>#39 0x7fda2d4b53dd in clone () at
>../sysdeps/unix/sysv/linux/x86_64/clone.S:109
>
>Here is the python trace:
>
>(gdb) py-bt
>Traceback (most recent call first):
>  File "/home/user/Documents/persist-queue/persistqueue/sqlbase.py",
>line 140, in _select
>def _select(self, *args):
>  File "/home/user/Documents/persist-queue/persistqueue/sqlqueue.py",
>line 49, in _pop
>row = self._select()
>  File 

Re: [sqlite] Segfault when query again in-memory db

2017-10-31 Thread Richard Hipp
On 10/31/17, Wang, Peter (Xu)  wrote:
> Hi
> I am trying to provide an asynchronous queue based on sqlite3 in Python.
> Currently, the file-based tests already pass with my queue.
> When switching from a file to an in-memory db, I keep hitting a "segfault" issue
> when running the same test suite.
>
> Can anyone help me out of this situation?

Can you get us a reproducible test case?

Can you tell us what version of SQLite you are using?

Can you recompile the SQLite shared library using -DSQLITE_DEBUG and
-DSQLITE_ENABLE_API_ARMOR and see how that changes the outcome?

>
>
> I tested the in-memory with multi-thread (but inserts/deletes are in a lock)
> Here is the trace of the core file:
>
> (gdb) bt
> #0  sqlite3_value_type (pVal=0x0) at sqlite3.c:72512
> #1  0x7fda2a34fd86 in sqlite3_column_type (pStmt=0x7fda08004178,
> i=) at sqlite3.c:73318
> #2  0x7fda2a607987 in _pysqlite_fetch_one_row
> (self=self@entry=0x7fda2a204b20) at
> /build/python2.7-1tJBSB/python2.7-2.7.12/Modules/_sqlite/cursor.c:352
> #3  0x7fda2a6089c1 in _pysqlite_query_execute (self=0x7fda2a204b20,
> multiple=, args=)
> at /build/python2.7-1tJBSB/python2.7-2.7.12/Modules/_sqlite/cursor.c:711
> #4  0x004b0cb3 in PyObject_Call () at ../Objects/abstract.c:2546
> #5  0x004ce5d0 in PyEval_CallObjectWithKeywords () at
> ../Python/ceval.c:4219
> #6  0x7fda2a609106 in pysqlite_connection_execute (self=,
> args=('SELECT _id, data FROM queue_default ORDER BY _id ASC LIMIT 1', ()))
> at
> /build/python2.7-1tJBSB/python2.7-2.7.12/Modules/_sqlite/connection.c:1262
> #7  0x004cb945 in call_function (oparg=,
> pp_stack=0x7fda19ff9fc0) at ../Python/ceval.c:4350
> #8  PyEval_EvalFrameEx () at ../Python/ceval.c:2987
> #9  0x004c2765 in PyEval_EvalCodeEx () at ../Python/ceval.c:3582
> #10 0x004ca8d1 in fast_function (nk=0, na=,
> n=, pp_stack=0x7fda19ffa1d0, func= 0x7fda2a83c050>)
> at ../Python/ceval.c:4445
> #11 call_function (oparg=, pp_stack=0x7fda19ffa1d0) at
> ../Python/ceval.c:4370
> #12 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
> #13 0x004c9d8f in fast_function (nk=, na= out>, n=, pp_stack=0x7fda19ffa320,
> func=) at ../Python/ceval.c:4435
> #14 call_function (oparg=, pp_stack=0x7fda19ffa320) at
> ../Python/ceval.c:4370
> #15 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
> #16 0x004c2765 in PyEval_EvalCodeEx () at ../Python/ceval.c:3582
> #17 0x004ca099 in fast_function (nk=1, na=,
> n=, pp_stack=0x7fda19ffa530, func= 0x7fda2a841758>)
> at ../Python/ceval.c:4445
> #18 call_function (oparg=, pp_stack=0x7fda19ffa530) at
> ../Python/ceval.c:4370
> #19 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
> #20 0x004c2765 in PyEval_EvalCodeEx () at ../Python/ceval.c:3582
> #21 0x004de8b8 in function_call.lto_priv () at
> ../Objects/funcobject.c:523
> #22 0x004b0cb3 in PyObject_Call () at ../Objects/abstract.c:2546
> #23 0x004c6ad1 in ext_do_call (nk=, na= out>, flags=, pp_stack=0x7fda19ffa7e8,
> func=) at ../Python/ceval.c:4664
> #24 PyEval_EvalFrameEx () at ../Python/ceval.c:3026
> #25 0x004c9d8f in fast_function (nk=, na= out>, n=, pp_stack=0x7fda19ffa930,
> func=) at ../Python/ceval.c:4435
> #26 call_function (oparg=, pp_stack=0x7fda19ffa930) at
> ../Python/ceval.c:4370
> #27 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
> #28 0x004c9d8f in fast_function (nk=, na= out>, n=, pp_stack=0x7fda19ffaa80,
> ---Type  to continue, or q  to quit---
> func=) at ../Python/ceval.c:4435
> #29 call_function (oparg=, pp_stack=0x7fda19ffaa80) at
> ../Python/ceval.c:4370
> #30 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
> #31 0x004c2765 in PyEval_EvalCodeEx () at ../Python/ceval.c:3582
> #32 0x004de6fe in function_call.lto_priv () at
> ../Objects/funcobject.c:523
> #33 0x004b0cb3 in PyObject_Call () at ../Objects/abstract.c:2546
> #34 0x004f492e in instancemethod_call.lto_priv () at
> ../Objects/classobject.c:2602
> #35 0x004b0cb3 in PyObject_Call () at ../Objects/abstract.c:2546
> #36 0x004ce5d0 in PyEval_CallObjectWithKeywords () at
> ../Python/ceval.c:4219
> #37 0x00597e32 in t_bootstrap () at ../Modules/threadmodule.c:620
> #38 0x7fda2d77f6ba in start_thread (arg=0x7fda19ffb700) at
> pthread_create.c:333
> #39 0x7fda2d4b53dd in clone () at
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
>
> Here is the python trace:
>
> (gdb) py-bt
> Traceback (most recent call first):
>   File "/home/user/Documents/persist-queue/persistqueue/sqlbase.py", line
> 140, in _select
> def _select(self, *args):
>   File "/home/user/Documents/persist-queue/persistqueue/sqlqueue.py", line
> 49, in _pop
> row = self._select()
>   File "/home/wangp11/Documents/persist-queue/persistqueue/sqlqueue.py",
> line 67, in get
> pickled = self._pop()
>   File "/home/user/Documents/persist-queue/tests/test_sqlqueue.py", line
> 105, in 

[sqlite] Segfault when query again in-memory db

2017-10-31 Thread Wang, Peter (Xu)
Hi
I am trying to provide an asynchronous queue based on sqlite3 in Python.
Currently, the file-based tests already pass with my queue.
When switching from a file to an in-memory db, I keep hitting a "segfault" issue
when running the same test suite.

Can anyone help me out of this situation?


I tested the in-memory with multi-thread (but inserts/deletes are in a lock)
Here is the trace of the core file:

(gdb) bt
#0  sqlite3_value_type (pVal=0x0) at sqlite3.c:72512
#1  0x7fda2a34fd86 in sqlite3_column_type (pStmt=0x7fda08004178, 
i=) at sqlite3.c:73318
#2  0x7fda2a607987 in _pysqlite_fetch_one_row 
(self=self@entry=0x7fda2a204b20) at 
/build/python2.7-1tJBSB/python2.7-2.7.12/Modules/_sqlite/cursor.c:352
#3  0x7fda2a6089c1 in _pysqlite_query_execute (self=0x7fda2a204b20, 
multiple=, args=)
at /build/python2.7-1tJBSB/python2.7-2.7.12/Modules/_sqlite/cursor.c:711
#4  0x004b0cb3 in PyObject_Call () at ../Objects/abstract.c:2546
#5  0x004ce5d0 in PyEval_CallObjectWithKeywords () at 
../Python/ceval.c:4219
#6  0x7fda2a609106 in pysqlite_connection_execute (self=, 
args=('SELECT _id, data FROM queue_default ORDER BY _id ASC LIMIT 1', ()))
at 
/build/python2.7-1tJBSB/python2.7-2.7.12/Modules/_sqlite/connection.c:1262
#7  0x004cb945 in call_function (oparg=, 
pp_stack=0x7fda19ff9fc0) at ../Python/ceval.c:4350
#8  PyEval_EvalFrameEx () at ../Python/ceval.c:2987
#9  0x004c2765 in PyEval_EvalCodeEx () at ../Python/ceval.c:3582
#10 0x004ca8d1 in fast_function (nk=0, na=, n=, pp_stack=0x7fda19ffa1d0, func=)
at ../Python/ceval.c:4445
#11 call_function (oparg=, pp_stack=0x7fda19ffa1d0) at 
../Python/ceval.c:4370
#12 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
#13 0x004c9d8f in fast_function (nk=, na=, n=, pp_stack=0x7fda19ffa320,
func=) at ../Python/ceval.c:4435
#14 call_function (oparg=, pp_stack=0x7fda19ffa320) at 
../Python/ceval.c:4370
#15 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
#16 0x004c2765 in PyEval_EvalCodeEx () at ../Python/ceval.c:3582
#17 0x004ca099 in fast_function (nk=1, na=, n=, pp_stack=0x7fda19ffa530, func=)
at ../Python/ceval.c:4445
#18 call_function (oparg=, pp_stack=0x7fda19ffa530) at 
../Python/ceval.c:4370
#19 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
#20 0x004c2765 in PyEval_EvalCodeEx () at ../Python/ceval.c:3582
#21 0x004de8b8 in function_call.lto_priv () at 
../Objects/funcobject.c:523
#22 0x004b0cb3 in PyObject_Call () at ../Objects/abstract.c:2546
#23 0x004c6ad1 in ext_do_call (nk=, na=, 
flags=, pp_stack=0x7fda19ffa7e8,
func=) at ../Python/ceval.c:4664
#24 PyEval_EvalFrameEx () at ../Python/ceval.c:3026
#25 0x004c9d8f in fast_function (nk=, na=, n=, pp_stack=0x7fda19ffa930,
func=) at ../Python/ceval.c:4435
#26 call_function (oparg=, pp_stack=0x7fda19ffa930) at 
../Python/ceval.c:4370
#27 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
#28 0x004c9d8f in fast_function (nk=, na=, n=, pp_stack=0x7fda19ffaa80,
---Type  to continue, or q  to quit---
func=) at ../Python/ceval.c:4435
#29 call_function (oparg=, pp_stack=0x7fda19ffaa80) at 
../Python/ceval.c:4370
#30 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
#31 0x004c2765 in PyEval_EvalCodeEx () at ../Python/ceval.c:3582
#32 0x004de6fe in function_call.lto_priv () at 
../Objects/funcobject.c:523
#33 0x004b0cb3 in PyObject_Call () at ../Objects/abstract.c:2546
#34 0x004f492e in instancemethod_call.lto_priv () at 
../Objects/classobject.c:2602
#35 0x004b0cb3 in PyObject_Call () at ../Objects/abstract.c:2546
#36 0x004ce5d0 in PyEval_CallObjectWithKeywords () at 
../Python/ceval.c:4219
#37 0x00597e32 in t_bootstrap () at ../Modules/threadmodule.c:620
#38 0x7fda2d77f6ba in start_thread (arg=0x7fda19ffb700) at 
pthread_create.c:333
#39 0x7fda2d4b53dd in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:109

Here is the python trace:

(gdb) py-bt
Traceback (most recent call first):
  File "/home/user/Documents/persist-queue/persistqueue/sqlbase.py", line 140, 
in _select
def _select(self, *args):
  File "/home/user/Documents/persist-queue/persistqueue/sqlqueue.py", line 49, 
in _pop
row = self._select()
  File "/home/wangp11/Documents/persist-queue/persistqueue/sqlqueue.py", line 
67, in get
pickled = self._pop()
  File "/home/user/Documents/persist-queue/tests/test_sqlqueue.py", line 105, 
in consumer
x = m_queue.get(block=True)
  File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
  File "/usr/lib/python2.7/threading.py", line 774, in __bootstrap
self.__bootstrap_inner()
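For reference, a minimal self-contained sketch of the general shape of such a
test case (hypothetical code, not the persist-queue implementation from the
traceback: one shared in-memory connection, every statement serialized by a
lock):

import sqlite3
import threading

# Hypothetical minimal test: shared in-memory connection, access serialized.
conn = sqlite3.connect(":memory:", check_same_thread=False, isolation_level=None)
conn.execute("CREATE TABLE queue_default (_id INTEGER PRIMARY KEY, data BLOB)")
lock = threading.Lock()

def producer(n):
    for _ in range(n):
        with lock:                       # every statement runs under the lock
            conn.execute("INSERT INTO queue_default (data) VALUES (?)", (b"x" * 16,))

def consumer(n):
    for _ in range(n):
        with lock:
            row = conn.execute(
                "SELECT _id, data FROM queue_default ORDER BY _id ASC LIMIT 1"
            ).fetchone()
            if row is not None:
                conn.execute("DELETE FROM queue_default WHERE _id = ?", (row[0],))

threads = [threading.Thread(target=producer, args=(1000,)),
           threading.Thread(target=consumer, args=(1000,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("done")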


Re: [sqlite] Properly bulk-inserting into FTS5 index with external content table

2017-10-31 Thread Simon Slavin


On 31 Oct 2017, at 12:54pm, Eugene Mirotin  wrote:

> This field is actually boolean, but also nullable. From other languages
> (like Python and JS) I actually assumed the NULL check is faster than value
> comparison.

In SQLite, NULL, 0 and 1 are all special cases and take the same amount of 
filespace/memory to store.

> Will improve later (for now it's definitely not a bottleneck).

Agreed.  If that bit of your program is already in place, and you understand 
the problems of doing logic with NULL values, then it’s not a priority.

Simon.


Re: [sqlite] Properly bulk-inserting into FTS5 index with external content table

2017-10-31 Thread Eugene Mirotin
Thanks for the tip.
This field is actually boolean, but also nullable. From other languages
(like Python and JS) I actually assumed the NULL check is faster than value
comparison.
Will improve later (for now it's definitely not a bottleneck).

On Tue, Oct 31, 2017 at 3:50 PM Simon Slavin  wrote:

>
>
> On 31 Oct 2017, at 10:21am, Eugene Mirotin  wrote:
>
> > Hmm, I think I've found the solution:
> >
> > INSERT INTO search (rowid, question, answer, altAnswers, comments,
> authors,
> > sources) SELECT id, question, answer, altAnswers, comments, authors,
> > sources FROM questions WHERE obsolete IS NULL;
>
> That looks like it should work, and the "INSERT … SELECT" form is very
> fast.
>
> I do have a suggestion for something else: don’t use intentionally NULL
> values.  NULL means "unknown" or "no value" in SQL whereas what you mean is
> more like TRUE and FALSE.
>
> SQLite doesn’t have a BOOLEAN type.  Instead the fastest, most compact way
> to store booleans is to use 0 and 1.  And if you store those values you can
> do things like
>
> … WHERE obsolete — equivalent to WHERE obsolete = 1
> … WHERE NOT obsolete — equivalent to WHERE obsolete = 0
>
> Unfortunately TRUE and FALSE are not reserved words, so you do have to do
> things like
>
> CREATE TABLE questions (…, obsolete INTEGER DEFAULT 0, …).
>
> Using specific values 0 and 1 means that if you have a fault in your
> software and end up with NULL values in your fields you know something
> definitely went wrong.
>
> Simon.


Re: [sqlite] Properly bulk-inserting into FTS5 index with external content table

2017-10-31 Thread Simon Slavin


On 31 Oct 2017, at 10:21am, Eugene Mirotin  wrote:

> Hmm, I think I've found the solution:
> 
> INSERT INTO search (rowid, question, answer, altAnswers, comments, authors,
> sources) SELECT id, question, answer, altAnswers, comments, authors,
> sources FROM questions WHERE obsolete IS NULL;

That looks like it should work, and the "INSERT … SELECT" form is very fast.

I do have a suggestion for something else: don’t use intentionally NULL values. 
 NULL means "unknown" or "no value" in SQL whereas what you mean is more like 
TRUE and FALSE.

SQLite doesn’t have a BOOLEAN type.  Instead the fastest, most compact way to 
store booleans is to use 0 and 1.  And if you store those values you can do 
things like

… WHERE obsolete — equivalent to WHERE obsolete = 1
… WHERE NOT obsolete — equivalent to WHERE obsolete = 0

Unfortunately TRUE and FALSE are not reserved words, so you do have to do 
things like

CREATE TABLE questions (…, obsolete INTEGER DEFAULT 0, …).

Using specific values 0 and 1 means that if you have a fault in your software 
and end up with NULL values in your fields you know something definitely went 
wrong.

Simon.
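A quick illustration of the 0/1 convention, using a made-up table (sketch
only, assuming Python's sqlite3 module):

import sqlite3

# Hypothetical schema; obsolete stored as 0/1 rather than NULL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE questions (id INTEGER PRIMARY KEY, question TEXT,"
             " obsolete INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO questions (question, obsolete) VALUES (?, ?)",
                 [("kept", 0), ("dropped", 1)])

print(conn.execute("SELECT question FROM questions WHERE NOT obsolete").fetchall())
# [('kept',)]
print(conn.execute("SELECT question FROM questions WHERE obsolete").fetchall())
# [('dropped',)]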


Re: [sqlite] Properly bulk-inserting into FTS5 index with external content table

2017-10-31 Thread Eugene Mirotin
Hmm, I think I've found the solution:

INSERT INTO search (rowid, question, answer, altAnswers, comments, authors,
sources) SELECT id, question, answer, altAnswers, comments, authors,
sources FROM questions WHERE obsolete IS NULL;
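For completeness, a self-contained sketch of the same pattern end to end,
assuming an SQLite build with FTS5 enabled (the snowball tokenizer and most of
the real columns are left out, and the schema here is made up for
illustration):

import sqlite3

# External-content FTS5 table keyed on the source table's id column.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE questions (
        id INTEGER PRIMARY KEY,
        question TEXT, answer TEXT,
        obsolete INTEGER                -- NULL or 1 in the original schema
    );
    INSERT INTO questions (question, answer, obsolete) VALUES
        ('capital of France?', 'Paris', NULL),
        ('old question',       'n/a',   1);

    CREATE VIRTUAL TABLE search USING fts5(
        question, answer,
        content=questions, content_rowid=id
    );
    -- Name the target columns so rowid receives the source id explicitly.
    INSERT INTO search (rowid, question, answer)
        SELECT id, question, answer FROM questions WHERE obsolete IS NULL;
""")

# JOIN back to the content table via rowid to get the metadata.
rows = conn.execute("""
    SELECT questions.id, questions.answer
    FROM search JOIN questions ON questions.id = search.rowid
    WHERE search MATCH 'france'
""").fetchall()
print(rows)   # [(1, 'Paris')]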


On Tue, Oct 31, 2017 at 1:02 PM Eugene Mirotin  wrote:

> Hi,
>
> I have a table questions with lots of columns. Out of them 6 columns
> represent text content and the rest are technical (FKs, update timestamp,
> etc)
>
> I need to build the FTS5 search index for this table to only index the
> content columns (that 6).
> The DB is only built once on the server and later used as read-only in the
> client app (which downloads it from the server)
> It works fine unless I try to use the external content table feature
> (which I need to reduce the table size and also to be able to make JOIN
> queries to get the meta info from the original table for each matching row).
>
> The original table (called questions) has the PK column called id. This is
> the column I want to use as rowid for the search index and for joining the
> tables.
>
> Here's how I'm trying to create and populate the search table:
>
> DROP TABLE IF EXISTS search;
> CREATE VIRTUAL TABLE search USING fts5(question, answer, altAnswers,
> comments, authors, sources, tokenize = "snowball russian english
> unicode61", content=questions, content_rowid=id);
> INSERT INTO search SELECT id, question, answer, altAnswers, comments,
> authors, sources FROM questions WHERE obsolete IS NULL;
>
> At this point INSERT fails because I'm trying to insert 7 columns into the
> table which has 6.
>
> I also tried creating id as UNINDEXED column in the search table and that
> passed but the search results were silly, matching completely unrelated
> tokens.
>
> So what is the proper way to insert all rows from questions into the
> search table?
>
>
> Thanks in advance
>
>


[sqlite] Properly bulk-inserting into FTS5 index with external content table

2017-10-31 Thread Eugene Mirotin
Hi,

I have a table questions with lots of columns. Out of them 6 columns
represent text content and the rest are technical (FKs, update timestamp,
etc)

I need to build the FTS5 search index for this table to only index the
content columns (that 6).
The DB is only built once on the server and later used as read-only in the
client app (which downloads it from the server)
It works fine unless I try to use the external content table feature (which
I need to reduce the table size and also to be able to make JOIN queries to
get the meta info from the original table for each matching row).

The original table (called questions) has the PK column called id. This is
the column I want to use as rowid for the search index and for joining the
tables.

Here's how I'm trying to create and populate the search table:

DROP TABLE IF EXISTS search;
CREATE VIRTUAL TABLE search USING fts5(question, answer, altAnswers,
comments, authors, sources, tokenize = "snowball russian english
unicode61", content=questions, content_rowid=id);
INSERT INTO search SELECT id, question, answer, altAnswers, comments,
authors, sources FROM questions WHERE obsolete IS NULL;

At this point INSERT fails because I'm trying to insert 7 columns into the
table which has 6.

I also tried creating id as UNINDEXED column in the search table and that
passed but the search results were silly, matching completely unrelated
tokens.

So what is the proper way to insert all rows from questions into the search
table?


Thanks in advance