Re: [sqlite] Thinking about using SQLite on a windows file share with multiple client connections to it (is this a good idea?)

2011-07-13 Thread J Decker
On Wed, Jul 13, 2011 at 3:28 PM, Shawn Bice  wrote:
>
> Hello, I'm thinking about using SQLite on a windows file share having 
> multiple clients reading and writing to the database. I think I understand 
> the gist of how this works via some really good docs on the SQLite site but 
> from what I can tell, it sounds like this isn't a canonical scenario for 
> SQLite (which is fine, I just am hoping to get some perspective). The FAQ 
> mentions that file locking of network files is very buggy and not dependable 
> (in particular on Windows file shares). Additionally I read the document on 
> locking and concurrency (v3), which in effect states that the best defense is 
> not to use SQLite for files on a network file system. I believe we will have 
> moderate multi-client traffic and I was hoping to gain some additional 
> perspective from folks who have far more experience using SQLite in this 
> config. My impression is that it's not really a canonical scenario and I'm 
> trying to get a handle on what I should expect, or even whether I'm looking at 
> the right database technology (assuming I am going to have moderate concurrency 
> to deal with). Many thanks in advance for your thoughts. Thanks,
> Shawn
>

It depends; there are lots of ways to implement that.  My godawful tangled
mess of abstraction has a module that reduces ODBC and SQLite access to
basically two commands: http://sack.sourceforge.net/sack__sql.html
I probably wouldn't access a sqlite database over Windows file sharing. It
wouldn't be horribly painful to make a proxy service that your clients send
requests to and that reads the sqlite database, but then that's not really
developing against sqlite... Using ODBC isn't horribly painful either
(granted, when you really only need something like Connect, Command, and
Query, ODBC is a little complex), and it offers all sorts of data binding,
so you can put the data directly into structures of ints and floats when
you get the query results back, instead of having just text.

MySQL over ODBC is pretty good and easy to configure, but it requires a
server to be set up somewhere, so it also depends on the sort of network
you have.  I've seen sqlite ODBC drivers, and the sqlite C# library works
well and makes things look like derivations of common database classes.



http://code.google.com/p/c-system-abstraction-component-gui/

http://code.google.com/p/c-system-abstraction-component-gui/source/browse/src/SQLlib/sqlstub.c
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Thinking about using SQLite on a windows file share with multiple client connections to it (is this a good idea?)

2011-07-13 Thread Simon Slavin

On 13 Jul 2011, at 11:28pm, Shawn Bice wrote:

> I believe we will have moderate multi-client traffic and I was hoping to gain 
> some additional perspective from folks who have far more experience using 
> SQLite in this config.

If your intention from the ground up is to do lots of concurrent multi-user 
access, then SQLite is probably not for you.  As well as cases where a Windows 
computer just doesn't do what it says it's doing, you have the fact that SQLite 
locks the entire database instead of individual rows or tables.
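
That whole-database locking is easy to observe from any SQLite binding. Here is a minimal sketch using Python's standard-library sqlite3 module (illustrative only): a second writer is refused while the first one holds the write lock.

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None -> autocommit, so explicit BEGIN/COMMIT work;
# timeout=0 -> fail immediately instead of waiting on a busy lock.
w1 = sqlite3.connect(path, timeout=0, isolation_level=None)
w1.execute("CREATE TABLE t(x)")

w1.execute("BEGIN IMMEDIATE")           # take the write lock
w1.execute("INSERT INTO t VALUES (1)")

w2 = sqlite3.connect(path, timeout=0, isolation_level=None)
try:
    w2.execute("BEGIN IMMEDIATE")       # second writer: whole DB is locked
    locked = False
except sqlite3.OperationalError:
    locked = True

w1.execute("COMMIT")                    # first writer releases the lock
w2.execute("BEGIN IMMEDIATE")           # now the second writer succeeds
w2.execute("ROLLBACK")
print(locked)  # True
```

On a local disk this lock handoff is fast and reliable; the point of the thread is that over a network file share the locking itself becomes untrustworthy.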

I suggest you look elsewhere.  Sorry and all that.

Simon.


Re: [sqlite] Compiling sqlite

2011-07-13 Thread Roger Binns

On 07/13/2011 10:56 AM, Jan Hudec wrote:
> In my experience, the .o files are significantly larger than the resulting
> binary.

The size command will tell you how big the code (aka text), data and zero
initialized data (aka bss) are for object or shared library files.  Anything
beyond those is the gunk you mentioned (symbols, linker information,
comments from various tools, debug info etc).

Here is the amalgamation compiled for Linux 32 bits as both an object file
and as a shared library.  No SQLite -D flags given and -O2 used, but not -g.

$ ls -lh sqlite3.so sqlite3.o
  506K  sqlite3.o
  521K  sqlite3.so
$ size sqlite3.so sqlite3.o
   text    data   bss     dec     hex filename
 481211    6608  1164  488983   77617 sqlite3.so
 433729    3224  1132  438085   6af45 sqlite3.o
$ strip sqlite3.{so,o}
$ size sqlite3.so sqlite3.o
   text    data   bss     dec     hex filename
 481211    6608  1164  488983   77617 sqlite3.so
 433729    3224  1132  438085   6af45 sqlite3.o
$ ls -lh sqlite3.so sqlite3.o
  428K  sqlite3.o
  482K  sqlite3.so

If you are trying to monitor size then use the size command!

Roger


Re: [sqlite] Integrate page-level locking into main SQLite like Berkeley DB's SQLite?

2011-07-13 Thread Roger Binns

On 07/13/2011 03:06 PM, Samuel Neff wrote:
> The Berkeley DB-backed version of SQLite from Oracle supports page-level
> locking.  Apparently this is where the real performance gains come from, not
> from swapping out the back-end.

That only helps with concurrency and makes things more complicated since you
have to worry about page locks everywhere rather than a single whole file lock.

SQLite's philosophy is to make each transaction execute as quickly as
possible, rather than handle concurrent transactions with increased
complexity (note the "Lite" in the name :-)
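
A small illustration of that philosophy, sketched with Python's built-in sqlite3 module (used here only for brevity): grouping work into one explicit transaction pays the commit cost once, instead of once per statement as in autocommit mode.

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode,
# so BEGIN/COMMIT below are explicit and under our control.
db = sqlite3.connect(":memory:", isolation_level=None)
db.execute("CREATE TABLE t(x)")

db.execute("BEGIN")                 # one transaction for all 1000 rows
for i in range(1000):
    db.execute("INSERT INTO t VALUES (?)", (i,))
db.execute("COMMIT")                # a single commit, a single sync

count = db.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1000
```

On a disk-backed database the difference is dramatic, because each COMMIT normally forces the data to stable storage.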

See the very last bullet point of this page:

  http://www.sqlite.org/whentouse.html

Roger


[sqlite] Thinking about using SQLite on a windows file share with multiple client connections to it (is this a good idea?)

2011-07-13 Thread Shawn Bice

Hello, I'm thinking about using SQLite on a windows file share having multiple 
clients reading and writing to the database. I think I understand the gist of 
how this works via some really good docs on the SQLite site but from what I can 
tell, it sounds like this isn't a canonical scenario for SQLite (which is fine, 
I just am hoping to get some perspective). The FAQ mentions that file locking 
of network files is very buggy and not dependable (in particular on Windows 
file shares). Additionally I read the document on locking and concurrency (v3), 
which in effect states that the best defense is not to use SQLite for files on a 
network file system. I believe we will have moderate multi-client traffic and I 
was hoping to gain some additional perspective from folks who have far more 
experience using SQLite in this config. My impression is that it's not really a 
canonical scenario and I'm trying to get a handle on what I should expect, or 
even whether I'm looking at the right database technology (assuming I am going 
to have moderate concurrency to deal with). Many thanks in advance for your 
thoughts. Thanks,
Shawn

  


[sqlite] Integrate page-level locking into main SQLite like Berkeley DB's SQLite?

2011-07-13 Thread Samuel Neff
The Berkeley DB-backed version of SQLite from Oracle supports page-level
locking.  Apparently this is where the real performance gains come from, not
from swapping out the back-end.

http://www.oracle.com/technetwork/database/berkeleydb/downloads/index.html

Are there any plans to add page-level locking to the main SQLite
distribution?  Berkeley DB includes full source, including the modified SQLite
source, but has license restrictions that would not allow integrating that
code into the public-domain SQLite.

Thanks,

Sam


Re: [sqlite] Compiling sqlite

2011-07-13 Thread Jan Hudec
On Tue, Jul 12, 2011 at 17:37:57 +0300, Baruch Burstein wrote:
> The website (http://www.sqlite.org/about.html) says that sqlite can be
> compiled to under 300K. I am using mingw32, and in no configuration did I
> manage to get sqlite3.o below 400K. What compiler/compiler options are used
> to reach <300K?

In my experience, the .o files are significantly larger than the resulting
binary. Depending on various circumstances, the object (and static library)
can be from 20% to over 10 times larger than the resulting binary. This is
because it has to contain a lot of information the linker needs to properly
put everything together, but which will not appear in the final binary.

Debugging information is usually responsible for the larger differences, so
if you compile the .o without -g the reduction won't be as big, but it's
quite expected that a 400 kB .o will not add more than 300 kB to the binary
it's linked into.

Also, don't forget to run the final binary through strip to really get rid
of all unnecessary symbol information. That will make crash dumps from the
binary impossible to interpret, but you can use objcopy to save the
information strip would remove into a separate file, which gdb can then use
to interpret core dumps.

-- 
 Jan 'Bulb' Hudec 


Re: [sqlite] Attach file/disk DB to memory DB

2011-07-13 Thread Pavel Ivanov
> If I was to attach a database located in a file on disk to a database located
> in memory, would the file DB be loaded in memory too, or would its usage
> remain disk-based?

Its usage would be the same as if you open a new connection to that
database file, i.e. disk-based (barring the page caching in memory).


Pavel


On Wed, Jul 13, 2011 at 11:18 AM, Alexandre G wrote:
>
> If I was to attach a database located in a file on disk to a database located
> in memory, would the file DB be loaded in memory too, or would its usage
> remain disk-based?
>
> I need to know because we want to keep memory usage to a minimum. Note that
> the disk is read-only.
>
> Thanks
>


Re: [sqlite] Attach file/disk DB to memory DB

2011-07-13 Thread Dan Kennedy
On 07/13/2011 10:18 PM, Alexandre G wrote:
>
> If I was to attach a database located in a file on disk to a database located
> in memory, would the file DB be loaded in memory too, or would its usage
> remain disk-based?

It remains on disk. Whether or not the main database is in-memory does
not affect the way attached databases are handled.
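
A quick check of this behaviour from Python's standard sqlite3 module (file and table names here are just examples): the attached database stays a file on disk, yet is fully queryable from the in-memory connection.

```python
import sqlite3, tempfile, os

# Create a small database file on disk.
path = os.path.join(tempfile.mkdtemp(), "ondisk.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE items(x)")
disk.execute("INSERT INTO items VALUES (42)")
disk.commit()
disk.close()

# Open an in-memory main database and ATTACH the file database to it.
mem = sqlite3.connect(":memory:")
mem.execute("ATTACH DATABASE ? AS filedb", (path,))
value = mem.execute("SELECT x FROM filedb.items").fetchone()[0]
print(value)  # 42
```

Reads against filedb go through the pager to the file as usual (cached pages aside), so memory usage stays bounded by the cache size, not by the size of the attached database.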




[sqlite] Attach file/disk DB to memory DB

2011-07-13 Thread Alexandre G

If I was to attach a database located in a file on disk to a database located
in memory, would the file DB be loaded in memory too, or would its usage
remain disk-based?

I need to know because we want to keep memory usage to a minimum. Note that
the disk is read-only.

Thanks

-- 
View this message in context: 
http://old.nabble.com/Attach-file-disk-DB-to-memory-DB-tp32054031p32054031.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Why does analyze make this go faster?

2011-07-13 Thread Bart Smissaert
The table A3Test7D4_E is deleted and re-written every time.
The table ENTRY_ATTRIBUTES is a large table (millions of records) and it
needs an analyze when it is first made. It doesn't look like caching is the
reason for this behaviour here.
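
For background on what ANALYZE changes: it writes per-index row statistics into sqlite_stat1, which the planner reads for its cost estimates, even when the EXPLAIN QUERY PLAN output happens to look the same. A small sketch with Python's sqlite3 module (illustrative schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t(a, b)")
db.execute("CREATE INDEX idx_t_a ON t(a)")
# 1000 rows, 10 distinct values of "a" (100 rows each).
db.executemany("INSERT INTO t VALUES (?, ?)",
               [(i % 10, i) for i in range(1000)])

db.execute("ANALYZE")  # populates sqlite_stat1

stats = db.execute(
    "SELECT idx, stat FROM sqlite_stat1 WHERE tbl = 't'").fetchall()
print(stats)  # stat is "<total rows> <avg rows per distinct key>"
```

The stat string here records 1000 total rows and roughly 100 rows per distinct key, which is exactly the kind of estimate the planner uses to decide how selective an index is.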

RBS

On Wed, Jul 13, 2011 at 1:44 PM, Stephen C  wrote:
> What happens when you let the query run the first time, delete the
> records, then re-run the query?
>
> If the same length of time is spent (50 seconds) then I'm at a loss.
> If the shorter length of time happens, I'd say blame caching.
>
> On 07/13/2011 06:00 AM, Bart Smissaert wrote:
>> Have the following query:
>>
>> INSERT INTO A3Test7D4_J (PATIENT_ID, ENTRY_ID, READ_CODE, TERM_TEXT,
>> ADDED_DATE, START_DATE, NUMERIC_VALUE, TYPE_SPECIFIC_INFO) SELECT
>> E.PATIENT_ID, E.ENTRY_ID, E.READ_CODE, E.TERM_TEXT, E.ADDED_DATE,
>> E.START_DATE, EA.NUMERIC_VALUE, EA.TYPE_SPECIFIC_INFO FROM A3Test7D4_E
>> E INNER JOIN ENTRY_ATTRIBUTES EA ON (EA.ENTRY_ID = +E.ENTRY_ID) WHERE
>> EA.ATTRIBUTE_TYPE = 101 ORDER BY E.PATIENT_ID ASC, E.ENTRY_ID ASC,
>> EA.NUMERIC_VALUE ASC
>>
>> And as it was quite slow (50 secs) I did an analyze on the table
>> ENTRY_ATTRIBUTES and after that the query was much faster, less than
>> one second. Now, what I don't understand about this is that the query
>> plan is still the same:
>>
>> 0--0--0--SCAN TABLE A3Test7D4_E AS E (~18491 rows)<-->
>> 0--1--1--SEARCH TABLE ENTRY_ATTRIBUTES AS EA USING COVERING INDEX
>> IDX_ENTRY_ATTRIBUTES_ATTRIBUTE_TYPE_ENTRY_ID_TYPE_SPECIFIC_INFO_NUMERIC_VALUE
>> (ATTRIBUTE_TYPE=? AND ENTRY_ID=?) (~2 rows)<-->  0--0--0--USE TEMP
>> B-TREE FOR ORDER BY
>>
>> And also the data in sqlite_stat1 is still the same:
>>
>> IDX_ENTRY_ATTRIBUTES_NUMERIC_VALUE    1389416 176
>> IDX_ENTRY_ATTRIBUTES_TYPE_SPECIFIC_INFO       1389416 9
>> IDX_ENTRY_ATTRIBUTES_ENTRY_ID 1389416 2
>> IDX_ENTRY_ATTRIBUTES_ATTRIBUTE_TYPE   1389416 198488
>> IDX_ENTRY_ATTRIBUTES_ATTRIBUTE_TYPE_ENTRY_ID_TYPE_SPECIFIC_INFO_NUMERIC_VALUE
>>  1389416
>> 198488 2 2 1
>>
>> The data in the table is the same as well.
>> The other table involved A3Test7D4_E is exactly the same as well.
>> So why does this query run so much faster after running the analyze?
>>
>>
>> RBS


Re: [sqlite] Can a sys admin control temporary file location without changing the source code?

2011-07-13 Thread Tom Browder
On Mon, Jul 11, 2011 at 01:18, H. Phil Duby  wrote:
> On Sat, Jul 9, 2011 at 23:22, Stephan Beal  wrote:
>
>> On Sun, Jul 10, 2011 at 4:25 AM, Tom Browder 
>> wrote:
>>
>> > sqlite db files and making sure that directory is writable by my web
>> > server.  I make sure that the directory is not used for anything else
>> > in order to help secure my web site.
>> >
>>
>> Another tip for such uses:
>>
>> .htaccess:
>>
>> <Files "mydb.sqlite">  (or however your db is named)
>>        Order allow,deny
>>        Deny from all
>> </Files>
>>
>> so that people who know the db is there can't fetch it over http.
>>
>
> You should be able to put the folder for the database file(s) outside of the
> path accessible by URL, i.e. 'above' or 'beside' the home / root
> folder.  That way there is no way for someone to access it directly through
> a browser.  Only code running on the server can access it.

And that is exactly what I did.

Best regards,

-Tom

P.S.  So far sqlite is performing admirably on my site--so much easier
for certain uses.


Re: [sqlite] Memory leak?

2011-07-13 Thread Anze Rozman
A correction:
* I also use PRAGMA cache_size=100, but the problem is the same.

On Wed, Jul 13, 2011 at 3:12 PM, Anze Rozman  wrote:

> Hi!
>
> I have an application which inserts into the database most of the time. The
> problem is that free memory keeps decreasing. The application is in Java and
> I use Werner's sqlite wrapper - http://www.ch-werner.de/javasqlite/ (because
> of J2ME). I use the method exec(String sql, Callback cb) from Werner's
> wrapper. I also use page_cache=100, and I use transactions with BEGIN and
> COMMIT, but the problem remains. After a few hours the application eats
> almost all free memory. I found out that the problem is caused by the INSERT
> statements: if I run the application without writing to the database, memory
> usage does not increase. The OS is Linux, and after 2 days the system
> reboots because there is no more free memory for other applications. I have
> read many forums and websites but I haven't found any solution to this. I
> also close and reopen the database after 1000 transactions, but there is no
> improvement. Can you help me?
>
> BR, Rose.
>



-- 
lp,
Anže


[sqlite] Memory leak?

2011-07-13 Thread Anze Rozman
Hi!

I have an application which inserts into the database most of the time. The
problem is that free memory keeps decreasing. The application is in Java and
I use Werner's sqlite wrapper - http://www.ch-werner.de/javasqlite/ (because
of J2ME). I use the method exec(String sql, Callback cb) from Werner's
wrapper. I also use page_cache=100, and I use transactions with BEGIN and
COMMIT, but the problem remains. After a few hours the application eats
almost all free memory. I found out that the problem is caused by the INSERT
statements: if I run the application without writing to the database, memory
usage does not increase. The OS is Linux, and after 2 days the system
reboots because there is no more free memory for other applications. I have
read many forums and websites but I haven't found any solution to this. I
also close and reopen the database after 1000 transactions, but there is no
improvement. Can you help me?

BR, Rose.


[sqlite] ANN: AnySQL Maestro 11.7 released

2011-07-13 Thread SQL Maestro Group
Hi!

SQL Maestro Group announces the release of AnySQL Maestro 11.7, a powerful 
tool for managing any database engine accessible via ODBC driver or OLE DB 
provider (SQLite, PostgreSQL, MySQL, SQL Server, Oracle, Access, etc). The 
new version is immediately available at
http://www.sqlmaestro.com/products/anysql/maestro/

AnySQL Maestro comes in both Freeware and Professional editions. The feature 
matrix can be found at
http://www.sqlmaestro.com/products/anysql/maestro/free_vs_pro/

Please note that before the end of the month you can purchase the Professional 
edition of AnySQL Maestro, as well as all our other products and bundles, at 
a significant discount. Details are available at
http://www.sqlmaestro.com/purchase/

Top 7 new features:
===

1. Data pagination for the corresponding tab of object editors.
2. Code snippets for SQL Editor and SQL Script Editor.
3. Data import from any database accessible via ODBC driver / OLE DB 
provider.
4. Quick index creation.
5. The "Execute at cursor" command for SQL Editor.
6. Highlighting foreign key columns for Database Designer.
7. Default directories for data export/import/extract/etc.

In addition to this, several bugs have been fixed and some other minor 
improvements and corrections have been made. The full press release (with 
explanatory screenshots) is available at:
http://www.sqlmaestro.com/news/company/anysql_maestro_11_7_released/

Background information:

SQL Maestro Group offers complete database admin and management tools for 
MySQL, Oracle, MS SQL Server, DB2, PostgreSQL, SQL Anywhere, SQLite, 
Firebird and MaxDB providing the highest performance, scalability and 
reliability to meet the requirements of today's database applications.

Sincerely yours,
The SQL Maestro Group Team
http://www.sqlmaestro.com 



Re: [sqlite] Why does analyze make this go faster?

2011-07-13 Thread Stephen C
What happens when you let the query run the first time, delete the 
records, then re-run the query?

If the same length of time is spent (50 seconds) then I'm at a loss.
If the shorter length of time happens, I'd say blame caching.

On 07/13/2011 06:00 AM, Bart Smissaert wrote:
> Have the following query:
>
> INSERT INTO A3Test7D4_J (PATIENT_ID, ENTRY_ID, READ_CODE, TERM_TEXT,
> ADDED_DATE, START_DATE, NUMERIC_VALUE, TYPE_SPECIFIC_INFO) SELECT
> E.PATIENT_ID, E.ENTRY_ID, E.READ_CODE, E.TERM_TEXT, E.ADDED_DATE,
> E.START_DATE, EA.NUMERIC_VALUE, EA.TYPE_SPECIFIC_INFO FROM A3Test7D4_E
> E INNER JOIN ENTRY_ATTRIBUTES EA ON (EA.ENTRY_ID = +E.ENTRY_ID) WHERE
> EA.ATTRIBUTE_TYPE = 101 ORDER BY E.PATIENT_ID ASC, E.ENTRY_ID ASC,
> EA.NUMERIC_VALUE ASC
>
> And as it was quite slow (50 secs) I did an analyze on the table
> ENTRY_ATTRIBUTES and after that the query was much faster, less than
> one second. Now, what I don't understand about this is that the query
> plan is still the same:
>
> 0--0--0--SCAN TABLE A3Test7D4_E AS E (~18491 rows)<-->
> 0--1--1--SEARCH TABLE ENTRY_ATTRIBUTES AS EA USING COVERING INDEX
> IDX_ENTRY_ATTRIBUTES_ATTRIBUTE_TYPE_ENTRY_ID_TYPE_SPECIFIC_INFO_NUMERIC_VALUE
> (ATTRIBUTE_TYPE=? AND ENTRY_ID=?) (~2 rows)<-->  0--0--0--USE TEMP
> B-TREE FOR ORDER BY
>
> And also the data in sqlite_stat1 is still the same:
>
> IDX_ENTRY_ATTRIBUTES_NUMERIC_VALUE1389416 176
> IDX_ENTRY_ATTRIBUTES_TYPE_SPECIFIC_INFO   1389416 9
> IDX_ENTRY_ATTRIBUTES_ENTRY_ID 1389416 2
> IDX_ENTRY_ATTRIBUTES_ATTRIBUTE_TYPE   1389416 198488
> IDX_ENTRY_ATTRIBUTES_ATTRIBUTE_TYPE_ENTRY_ID_TYPE_SPECIFIC_INFO_NUMERIC_VALUE 
> 1389416
> 198488 2 2 1
>
> The data in the table is the same as well.
> The other table involved A3Test7D4_E is exactly the same as well.
> So why does this query run so much faster after running the analyze?
>
>
> RBS



Re: [sqlite] FTS3: synonyms dictionary and tokens length

2011-07-13 Thread Dan Kennedy
On 07/13/2011 05:05 PM, Alexey Pechnikov wrote:
> With a synonyms dictionary, the resulting token can be longer than the
> original token.
> Is that a problem for the current implementation of FTS?

I don't think so. If it is, it's a bug.

Dan.


[sqlite] FTS3: synonyms dictionary and tokens length

2011-07-13 Thread Alexey Pechnikov
With a synonyms dictionary, the resulting token can be longer than the
original token.
Is that a problem for the current implementation of FTS?

-- 
Best regards, Alexey Pechnikov.
http://pechnikov.tel/


[sqlite] Database handle access from tokenizer module (stopwords table support)

2011-07-13 Thread Alexey Pechnikov
I want to add stopword table support to the ICU tokenizer, but there is
no access to the database handle there (in icuOpen).
Any ideas?


-- 
Best regards, Alexey Pechnikov.
http://pechnikov.tel/


[sqlite] Why does analyze make this go faster?

2011-07-13 Thread Bart Smissaert
Have the following query:

INSERT INTO A3Test7D4_J (PATIENT_ID, ENTRY_ID, READ_CODE, TERM_TEXT,
ADDED_DATE, START_DATE, NUMERIC_VALUE, TYPE_SPECIFIC_INFO) SELECT
E.PATIENT_ID, E.ENTRY_ID, E.READ_CODE, E.TERM_TEXT, E.ADDED_DATE,
E.START_DATE, EA.NUMERIC_VALUE, EA.TYPE_SPECIFIC_INFO FROM A3Test7D4_E
E INNER JOIN ENTRY_ATTRIBUTES EA ON (EA.ENTRY_ID = +E.ENTRY_ID) WHERE
EA.ATTRIBUTE_TYPE = 101 ORDER BY E.PATIENT_ID ASC, E.ENTRY_ID ASC,
EA.NUMERIC_VALUE ASC

And as it was quite slow (50 secs) I did an analyze on the table
ENTRY_ATTRIBUTES and after that the query was much faster, less than
one second. Now, what I don't understand about this is that the query
plan is still the same:

0--0--0--SCAN TABLE A3Test7D4_E AS E (~18491 rows) <-->
0--1--1--SEARCH TABLE ENTRY_ATTRIBUTES AS EA USING COVERING INDEX
IDX_ENTRY_ATTRIBUTES_ATTRIBUTE_TYPE_ENTRY_ID_TYPE_SPECIFIC_INFO_NUMERIC_VALUE
(ATTRIBUTE_TYPE=? AND ENTRY_ID=?) (~2 rows) <--> 0--0--0--USE TEMP
B-TREE FOR ORDER BY

And also the data in sqlite_stat1 is still the same:

IDX_ENTRY_ATTRIBUTES_NUMERIC_VALUE  1389416 176
IDX_ENTRY_ATTRIBUTES_TYPE_SPECIFIC_INFO 1389416 9
IDX_ENTRY_ATTRIBUTES_ENTRY_ID   1389416 2
IDX_ENTRY_ATTRIBUTES_ATTRIBUTE_TYPE 1389416 198488
IDX_ENTRY_ATTRIBUTES_ATTRIBUTE_TYPE_ENTRY_ID_TYPE_SPECIFIC_INFO_NUMERIC_VALUE   
1389416
198488 2 2 1

The data in the table is the same as well.
The other table involved A3Test7D4_E is exactly the same as well.
So why does this query run so much faster after running the analyze?


RBS


Re: [sqlite] Data type of the blob returned by matchinfo()

2011-07-13 Thread Dan Kennedy
On 07/13/2011 04:57 AM, Abhinav Upadhyay wrote:
> Hi,
>
> Quoting the ranking function given in the appendix of the FTS3
> documentation page (http://www.sqlite.org/fts3.html#appendix_a)
>
> static void rankfunc(sqlite3_context *pCtx, int nVal, sqlite3_value **apVal){
>int *aMatchinfo;/* Return value of matchinfo() */
> ...
> ...
> aMatchinfo = (unsigned int *)sqlite3_value_blob(apVal[0]);
> ...
> ...
>
> aMatchinfo is declared as int * and the value obtained from
> sqlite3_value_blob() is being cast to unsigned int *. This is causing
> a compiler warning, so I am wondering what the data type of the
> matchinfo blob is (int * or unsigned int *)? Common sense says it
> should be unsigned int *, but I just wanted to confirm.

The docs say "...the blob consists of zero or more 32-bit unsigned
integers in machine byte-order...". So I guess all of the "int"
declarations in that function should probably be "unsigned int".
Or, just changing the cast to an (int *) will also work.

   http://www.sqlite.org/fts3.html#matchinfo
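
For reference, the same blob can be decoded outside C as well. A Python sketch (assumes the underlying SQLite build has FTS3/FTS4 enabled, which most distribution builds do): reading the blob as native-endian 32-bit unsigned integers, matching the docs' description and the C-side "unsigned int" interpretation.

```python
import sqlite3, struct

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE docs USING fts4(body)")
db.execute("INSERT INTO docs VALUES ('the quick brown fox')")

blob = db.execute(
    "SELECT matchinfo(docs) FROM docs WHERE body MATCH 'quick'"
).fetchone()[0]

# Default matchinfo format is "pcx": the first two unsigned 32-bit
# integers are the phrase count and the column count.
nphrase, ncol = struct.unpack_from("@2I", blob)
print(nphrase, ncol)  # 1 1
```

Using struct's native "I" format here mirrors the C code's (unsigned int *) access to the same machine-byte-order words.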