Re: [sqlite] Sqlite reading all column data on selects.

2009-09-24 Thread Martin Pfeifle
Hi Richard,

assume I have a table mytable (id, blob1,blob2,blob3,blob4) 
where each blob extends over several pages.

Then I do the following SQL command:
select blob4 from mytable where id = 4711

Do I understand you correctly that in the case autovacuum=true, 
the pages covered by blob1 to blob3 are not read from disk,
whereas in the case autovacuum=false they are also read from disk into main 
memory?

Martin 

 




From: D. Richard Hipp 
To: General Discussion of SQLite Database 
Sent: Thursday, 24 September 2009, 14:58:16
Subject: Re: [sqlite] Sqlite reading all column data on selects.


On Sep 24, 2009, at 8:42 AM, Adam Panayis wrote:

> Hi, are there plans to stop sqlite3 from reading in all column data on
> specific select queries? i.e I would like sqlite to ONLY read column
> data for columns which are specified in the select and where clauses.

Short answer:  Already does that.

Long answer:  SQLite pulls content from the disk one page at a time.  
If all the data is on the same page (the common case) then all the  
data will always be read from disk.  There is no getting around that.  
If the row is large and does not fit on a single page, then SQLite  
strives to avoid reading any pages that are not actually needed.  That  
is already implemented.  Because of the file format, sometimes earlier  
pages must be read in order to find the location of later pages.  In  
other cases (when you have autovacuum set) SQLite is able to guess the  
location of later pages without having to read earlier pages.  SQLite  
never reads pages that follow what is needed.

Once the necessary pages are in memory, SQLite only looks at the  
specific parts of a row that are requested.  Unrequested columns are  
never extracted or decoded from the raw row data.
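As a minimal illustration (the table and column names are the ones from the 
example above; this is a sketch, not code from the original message), such a 
single-column read could look like this in C; only blob4 is ever decoded:

#include <stdio.h>
#include <sqlite3.h>

/* Sketch: fetch only blob4 for one row.  SQLite decodes just the requested
** column from the row image and, for large rows, strives not to read
** overflow pages it does not need. */
static int read_blob4(sqlite3 *db, sqlite3_int64 id){
  sqlite3_stmt *stmt = 0;
  int rc = sqlite3_prepare_v2(db,
      "SELECT blob4 FROM mytable WHERE id = ?1", -1, &stmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  sqlite3_bind_int64(stmt, 1, id);
  if( sqlite3_step(stmt)==SQLITE_ROW ){
    const void *data = sqlite3_column_blob(stmt, 0);   /* only blob4 */
    int nByte = sqlite3_column_bytes(stmt, 0);
    printf("blob4: %d bytes at %p\n", nByte, data);
  }
  return sqlite3_finalize(stmt);
}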

D. Richard Hipp
d...@hwaci.com





[sqlite] querying r-trees

2009-08-17 Thread Martin Pfeifle
Dear all,

I would like to discuss a new feature in the SQLite R-tree which is not very 
difficult to implement but would improve
query performance a lot for use cases where the MBR (minimum bounding 
rectangle) of the query object leads to
a too large candidate set.

First of all, the data structure in the database can stay as it is, and so can 
the R*-tree splitting algorithm.

The only thing I would like to see is the possibility to register a 
function with the R-tree module 
which returns true or false and which receives as parameters the R*-tree 
rectangle currently being evaluated and a void pointer 
representing the query object. This would make it possible to evaluate more 
complex query geometries during R-tree traversal.

The disadvantage of this approach is that an intersection test between a more 
complex query object and an R-tree rectangle is more time consuming
(higher CPU cost per test), but on the other hand the resulting candidate 
set would be much smaller (lower I/O cost and fewer CPU intersection tests).

The current approach could also be modelled this way, if the testing simply 
compares two rectangles and returns true if they intersect.

So the question is whether the intersection test between the query object and 
an R-tree bounding rectangle 
could be passed to the R-tree module as a user-defined function and then be 
applied by the R-tree module during tree traversal.
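For illustration, the registration could look roughly like this (all of these 
names are hypothetical; they sketch the proposed interface, not an existing 
SQLite API):

#include <sqlite3.h>

/* Hypothetical sketch of the proposed interface -- none of these names exist
** in SQLite; they only illustrate the idea described above. */

/* Return non-zero if the query object intersects the given R-tree rectangle. */
typedef int (*rtree_test_fn)(void *pQueryObject,
                             double xmin, double ymin,
                             double xmax, double ymax);

/* Hypothetical registration call: during traversal the R-tree module would
** invoke xTest for every node or leaf rectangle it visits and descend only
** where the test returns true. */
int rtree_register_test(sqlite3 *db, const char *zName,
                        rtree_test_fn xTest, void *pQueryObject);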

Any thoughts on this?

Best Martin


  


[sqlite] FTS3 *_content tables (do I really need them?)

2009-07-15 Thread Martin Pfeifle
Dear all,
 
A few weeks ago, I asked the question below but did not get any response to it.
A few minutes ago, I found a remark
(cf. http://osdir.com/ml/freedesktop.tracker/2008-07/msg00085.html)
that it might be possible to avoid the "redundant" storage of document 
information in the
fts *_content tables.
 
Does anyone have an implementation available for fts3 which does not store data 
in the fts3 *_content tables but only uses the *_segdir and *_segments tables?
 
 
Best Martin


Dear all,

we plan to use FTS in embedded devices for address search. One requirement is 
to save storage.

Assume I create a table FTS_addresses (Field1, Field2, ..., Fieldn), where 
Field1 is an identifier for my addresses. If Field1 were now used as the 
document id, and if every FTS query returned only the Field1 value, we would 
not need to store the content table at all, which would save space.

Can we get such a functionality from fts3, or do we have to implement this on 
our own?

Best Martin


  


Re: [sqlite] Near misses

2009-06-26 Thread Martin Pfeifle
Hi, 

I guess the speed could significantly be improved,
if you leave out _car and _ar.
The inverted index which is basically (term, 
blob_containing_all_document_ids_of_this_term),
cannot skip any of the alphabetically ordered terms if the first character is 
variable.
At least that's my understanding.

Thank you for your idea, because I am also thinking of putting some fuzzy 
search on top of FTS.

Best Martin




From: Alberto Simões 
To: General Discussion of SQLite Database 
Sent: Friday, 26 June 2009, 13:25:57
Subject: [sqlite] Near misses

Hello.

I am trying to find words in a dictionary stored in sqlite, and trying
a near miss approach.
For that I tried an algorithm to create patterns corresponding to
Levenshtein distance of 1 (edit distance of 1).
That means one addition, one removal, or one substitution.

For that, my script receives a word (say, 'car') and generates all
possible additions, removals, and substitutions:

Additions: _car c_ar ca_r car_
Substitutions: _ar c_r ca_
Removals: ar cr ca

Then, the script constructs an SQL query:

SELECT DISTINCT(word) FROM dict WHERE word = "ar" OR word = "ca" OR
word LIKE "_car" OR word LIKE "c_r" OR word = "cr" OR word LIKE "_ar"
OR word LIKE "ca_r" OR word LIKE "c_ar" OR word LIKE "ca_" OR word
LIKE "car_";

And this SQL query works... but not as quickly as I need (especially
because the speed is proportional to the word size).

Any hint on how to speed up this thing?

Thank you
Alberto

--
Alberto Simões


[sqlite] FTS

2009-06-19 Thread Martin Pfeifle
Dear all,
 
we plan to use FTS in embedded devices for address search.
One requirement is to save storage.
 
Assume I create a table FTS_addresses (Field1, Field2, ..., Fieldn),
where Field1 is an identifier for my addresses.
If Field1 were now used as the document id, and if every FTS query returned only 
the Field1 value,
we would not need to store the content table at all, which would save space.
 
Can we get such a functionality from fts3, or do we have to implement this on 
our own?
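For reference, the query side of that idea already works with stock FTS3 if the 
address identifier is used as the docid; only the storage saving is missing, 
since FTS3 still fills its *_content table. A minimal sketch (names follow the 
example above):

#include <sqlite3.h>

/* One-time setup (Field1 becomes the docid; the other fields are the indexed
** text), e.g.:
**   CREATE VIRTUAL TABLE fts_addresses USING fts3(field2, field3);
**   INSERT INTO fts_addresses(docid, field2, field3) VALUES(4711, ..., ...);
*/
static int match_addresses(sqlite3 *db, const char *zMatch){
  sqlite3_stmt *stmt = 0;
  int rc = sqlite3_prepare_v2(db,
      "SELECT docid FROM fts_addresses WHERE fts_addresses MATCH ?1",
      -1, &stmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  sqlite3_bind_text(stmt, 1, zMatch, -1, SQLITE_TRANSIENT);
  while( sqlite3_step(stmt)==SQLITE_ROW ){
    sqlite3_int64 field1 = sqlite3_column_int64(stmt, 0);  /* the address id */
    (void)field1;                      /* use the identifier here */
  }
  return sqlite3_finalize(stmt);
}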

Best Martin


  


[sqlite] FTS3

2009-06-02 Thread Martin Pfeifle
Some further question regarding FTS3.

Am I correct that a doclist of a certain term is never split onto two blocks 
(BLOBs)?
Can we somehow limit the size of such BLOBs?
I did some tests where I inserted millions of addresses into FTS3, all of which 
contained a certain term.
I ended up with some BLOBs bigger than 1 MByte.
Can I somehow avoid this?
Best Martin


 




From: Martin Pfeifle <martinpfei...@yahoo.de>
To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
Sent: Friday, 29 May 2009, 08:59:45
Subject: Re: [sqlite] FTS3

One further question:

In fts3.c, there is a comment which describes the file format depending on the 
compiler settings:
* Result formats differ with the setting of DL_DEFAULTS.  Examples:
**
** DL_DOCIDS: [1] [3] [7]
** DL_POSITIONS: [1 0[0 4] 1[17]] [3 1[5]]
** DL_POSITIONS_OFFSETS: [1 0[0,0,3 4,23,26] 1[17,102,105]] [3 1[5,20,23]]
 
I also found one functional limitation if we use only DL_DOCIDS in order to 
reduce the overall size:
 
/*
** By default, only positions and not offsets are stored in the doclists.
** To change this so that offsets are stored too, compile with
**
**  -DDL_DEFAULT=DL_POSITIONS_OFFSETS
**
** If DL_DEFAULT is set to DL_DOCIDS, your table can only be inserted
** into (no deletes or updates).
*/

Are there any other functional drawbacks if we go for DL_DOCIDS only, e.g. 
searching for "term1 term2" in a document?

Best Martin




From: D. Richard Hipp <d...@hwaci.com>
To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
Sent: Tuesday, 26 May 2009, 12:27:59
Subject: Re: [sqlite] FTS3


On May 26, 2009, at 5:03 AM, Martin Pfeifle wrote:

> Dear all,
> we need full and fuzzy text search for addresses.
> Currently we are looking into Lucene and SQLite's FTS extension.
> For us it is crucial to understand the file structures and the  
> concepts behind the libraries.
> Is there a self-contained, comprehensive document for FTS3 (besides  
> the comments in fts3.c) ?

There is no information on FTS3 apart from the code comments and the  
README files in the source tree.

The file formats for FTS3 and lucene are completely different at the  
byte level.  But if you dig deeper, you will find that they both use  
the same underlying concepts and ideas and really are two different  
implementations of the same algorithm.  During development, we were  
constantly testing the performance and index size of FTS3 against  
CLucene using the Enron email corpus.  Our goal was for FTS3 to run  
significantly faster than CLucene and to generate an index that was no  
larger in size.  That goal was easily met at the time, though we have  
not tested FTS3 against CLucene lately to see if anything has changed.

One of the issues with CLucene that FTS3 sought to address was that  
when inserting new elements into the index, the insertion time was  
unpredictable.  Usually the insertions would be very fast.  But lucene  
will occasionally take a very long time for a single insertion in  
order to merge multiple smaller indices into larger indices.  This was  
seen as undesirable.  FTS3 strives to give much better worst-case  
insertion times by doing index merges incrementally and spreading the  
cost of index merges across many inserts.

D. Richard Hipp
d...@hwaci.com





Re: [sqlite] FTS3

2009-05-29 Thread Martin Pfeifle
One further question:

In fts3.c, there is a comment which describes the file format depending on the 
compiler settings:
* Result formats differ with the setting of DL_DEFAULTS.  Examples:
**
** DL_DOCIDS: [1] [3] [7]
** DL_POSITIONS: [1 0[0 4] 1[17]] [3 1[5]]
** DL_POSITIONS_OFFSETS: [1 0[0,0,3 4,23,26] 1[17,102,105]] [3 1[5,20,23]]
 
I also found one functional limitation if we use only DL_DOCIDS in order to 
reduce the overall size:
 
/*
** By default, only positions and not offsets are stored in the doclists.
** To change this so that offsets are stored too, compile with
**
**  -DDL_DEFAULT=DL_POSITIONS_OFFSETS
**
** If DL_DEFAULT is set to DL_DOCIDS, your table can only be inserted
** into (no deletes or updates).
*/

Are there any other functional drawbacks if we go for DL_DOCIDS only, e.g. 
searching for "term1 term2" in a document?

Best Martin




From: D. Richard Hipp <d...@hwaci.com>
To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
Sent: Tuesday, 26 May 2009, 12:27:59
Subject: Re: [sqlite] FTS3


On May 26, 2009, at 5:03 AM, Martin Pfeifle wrote:

> Dear all,
> we need full and fuzzy text search for addresses.
> Currently we are looking into Lucene and SQLite's FTS extension.
> For us it is crucial to understand the file structures and the  
> concepts behind the libraries.
> Is there a self-contained, comprehensive document for FTS3 (besides  
> the comments in fts3.c) ?

There is no information on FTS3 apart from the code comments and the  
README files in the source tree.

The file formats for FTS3 and lucene are completely different at the  
byte level.  But if you dig deeper, you will find that they both use  
the same underlying concepts and ideas and really are two different  
implementations of the same algorithm.  During development, we were  
constantly testing the performance and index size of FTS3 against  
CLucene using the Enron email corpus.  Our goal was for FTS3 to run  
significantly faster than CLucene and to generate an index that was no  
larger in size.  That goal was easily met at the time, though we have  
not tested FTS3 against CLucene lately to see if anything has changed.

One of the issues with CLucene that FTS3 sought to address was that  
when inserting new elements into the index, the insertion time was  
unpredictable.  Usually the insertions would be very fast.  But lucene  
will occasionally take a very long time for a single insertion in  
order to merge multiple smaller indices into larger indices.  This was  
seen as undesirable.  FTS3 strives to give much better worst-case  
insertion times by doing index merges incrementally and spreading the  
cost of index merges across many inserts.

D. Richard Hipp
d...@hwaci.com





Re: [sqlite] FTS3

2009-05-26 Thread Martin Pfeifle
thank you, this helped a lot and confirmed what I expected.
Best Martin





From: D. Richard Hipp <d...@hwaci.com>
To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
Sent: Tuesday, 26 May 2009, 12:27:59
Subject: Re: [sqlite] FTS3


On May 26, 2009, at 5:03 AM, Martin Pfeifle wrote:

> Dear all,
> we need full and fuzzy text search for addresses.
> Currently we are looking into Lucene and SQLite's FTS extension.
> For us it is crucial to understand the file structures and the  
> concepts behind the libraries.
> Is there a self-contained, comprehensive document for FTS3 (besides  
> the comments in fts3.c) ?

There is no information on FTS3 apart from the code comments and the  
README files in the source tree.

The file formats for FTS3 and lucene are completely different at the  
byte level.  But if you dig deeper, you will find that they both use  
the same underlying concepts and ideas and really are two different  
implementations of the same algorithm.  During development, we were  
constantly testing the performance and index size of FTS3 against  
CLucene using the Enron email corpus.  Our goal was for FTS3 to run  
significantly faster than CLucene and to generate an index that was no  
larger in size.  That goal was easily met at the time, though we have  
not tested FTS3 against CLucene lately to see if anything has changed.

One of the issues with CLucene that FTS3 sought to address was that  
when inserting new elements into the index, the insertion time was  
unpredictable.  Usually the insertions would be very fast.  But lucene  
will occasionally take a very long time for a single insertion in  
order to merge multiple smaller indices into larger indices.  This was  
seen as undesirable.  FTS3 strives to give much better worst-case  
insertion times by doing index merges incrementally and spreading the  
cost of index merges across many inserts.

D. Richard Hipp
d...@hwaci.com





[sqlite] FTS3

2009-05-26 Thread Martin Pfeifle
Dear all,
we need full and fuzzy text search for addresses.
Currently we are looking into Lucene and SQLite's FTS extension.
For us it is crucial to understand the file structures and the concepts behind 
the libraries.
Is there a self-contained, comprehensive document for FTS3 (besides the 
comments in fts3.c) ?
Best Martin


  


[sqlite] Accessing tables containing multiple Blobs

2009-05-13 Thread Martin Pfeifle
Assume you have a table mytable(id, blob1, blob2, blob3).
You have a page_size of 1k and each blob is on average 20 KBytes, i.e. a row is 
about 60 KBytes in size.
My question now is where the projection of an SQL statement like "select 
blob2 from mytable where id=777" takes place.
Are all 60 KBytes read from disk and filtered in the sqlite library, or are only 
20 KBytes read from disk?
Thus, from an I/O perspective, is there a difference between the queries
"select * from mytable where id=777" and "select blob2 from mytable where 
id=777"?
I appreciate your comments.

Best Martin


  


[sqlite] retrieval of blobs stored in sqlite

2009-04-21 Thread Martin Pfeifle
Hi,
I have a question regarding the retrieval of BLOBs.
Assume you have a table mytable(id, blob) and the page size is 1k.
Assume we now carry out an SQL query like "select blob from mytable where id=4711" 
and the blob size is 100k.
Am I then correct that the pager asks 100 times for a page of size 1k (going 
through the linked list of overflow pages),
and that in whatever virtual file system is used, we do 100 seek operations to 
the currently requested pages?
Is this correct, or am I mistaken here?
Best Martin



  


[sqlite] fragmentation, overflow pages

2009-04-16 Thread Martin Pfeifle
Hi,
we store proprietary organized data in blobs within an sqlite database.
Assume you have a table mydata(id, attr1, attr2, blob). 
The page size of the database is 1k. 
If you now store blobs in your table which are larger than 1k, 
sqlite uses overflow pages and the content of the blobs is distributed 
over several pages, right? Are the pages on which the blobs are stored
consecutive, or can they be distributed arbitrarily within the database file?
Can two records share the same overflow page, or is an overflow page dedicated 
to one record?
Is there a formal definition of the fragmentation value which is reported by 
sqlite3_analyzer?
Best Martin


  


Re: [sqlite] Binary Format

2009-04-01 Thread Martin Pfeifle
thank you so much, that's really helpful.
Best Martin





From: D. Richard Hipp <d...@hwaci.com>
To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
Sent: Wednesday, 1 April 2009, 19:11:00
Subject: Re: [sqlite] Binary Format


On Apr 1, 2009, at 4:24 AM, Martin Pfeifle wrote:

> Hi,
> we do use SQLite in a standardisation initiative and have to state
> which binary file-format of sqlite is used.
> Up to now, I was of the opinion that all sqlite versions 3.x use the  
> same binary sqlite file
> format but only differ in the library functionality.
> Can somebody confirm that the binary disk format does not change in  
> 3.x or can it change and
> we have to say, we use the binary format of sqlite 3.5.4 for  
> instance, or is it enough to say
> that we use 3.x as binary file format?
> Best Martin


Martin, http://www.sqlite.org/fileformat.html is still a work in  
progress, but it was written with you and your standardization effort  
in mind.

D. Richard Hipp
d...@hwaci.com





Re: [sqlite] Binary Format

2009-04-01 Thread Martin Pfeifle
thank you.
Best Martin


From: Jay A. Kreibich <j...@kreibi.ch>
To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
Sent: Wednesday, 1 April 2009, 15:52:08
Subject: Re: [sqlite] Binary Format

On Wed, Apr 01, 2009 at 08:24:29AM +0000, Martin Pfeifle scratched on the wall:
> Hi,
> we do use SQLite in a standardisation initiative and have to state 
> which binary file-format of sqlite is used.
> Up to now, I was of the opinion that all sqlite versions 3.x use the same 
> binary sqlite file
> format but only differ in the library functionality. 
> Can somebody confirm that the binary disk format does not change in 3.x or 
> can it change and
> we have to say, we use the binary format of sqlite 3.5.4 for instance, or is 
> it enough to say
> that we use 3.x as binary file format?

  There are two SQLite3 file formats: v1 and v4.

  Version 4 supports descending indexes and a more compact Boolean
  storage format.  v4 support was originally added in version 3.3.0 and
  was used as the default file format for a very short time.  As of
  3.3.7, things went back to using the v1 format by default.  As far as
  I know, that's still true all the way up to the current 3.6.12 release.

  All SQLite3 versions can read/write v1.  Everything after 3.3.0 can
  read/write v1 and v4.  Versions prior to 3.3 cannot read/write v4.

  The command "PRAGMA legacy_file_format = [ON|OFF]" can be used to force
  the use of the v1 file format.  Its default value is defined by the
  SQLITE_DEFAULT_FILE_FORMAT compile time option, which currently
  defaults to ON/v1.  That is, by default current builds of SQLite3 use
  the v1 file format (although they still support both versions).  The
  SQLite development team has indicated a desire to change this default,
  although nothing has been said about an expected timeline.


  Generally the larger issue with library versions is the SQL used within
  the database.  If you use a specific language feature in a table,
  index, or view definition, then you're obviously limiting the file
  to a specific version of the SQLite library, even if the file format
  is technically version compatible with earlier versions.

    -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
a protractor."  "I'll go home and see if I can scrounge up a ruler
and a piece of string."  --from Anathem by Neal Stephenson


[sqlite] Binary Format

2009-04-01 Thread Martin Pfeifle
Hi,
we do use SQLite in a standardisation initiative and have to state 
which binary file-format of sqlite is used.
Up to now, I was of the opinion that all sqlite versions 3.x use the same 
binary sqlite file
format but only differ in the library functionality. 
Can somebody confirm that the binary disk format does not change in 3.x or can 
it change and
we have to say, we use the binary format of sqlite 3.5.4 for instance, or is it 
enough to say
that we use 3.x as binary file format?
Best Martin


  


Re: [sqlite] virtual file system

2009-03-26 Thread Martin Pfeifle
thank you. 
This helped me a lot.
Best Martin





From: Igor Tandetnik <itandet...@mvps.org>
To: sqlite-users@sqlite.org
Sent: Thursday, 26 March 2009, 20:55:18
Subject: Re: [sqlite] virtual file system

Martin Pfeifle <martinpfei...@yahoo.de> wrote:
> Maybe I am totally mistaken.
> Assume I always use the latest sqlite_shell.exe program. I do not
> want to change anything in this program,
> then I cannot connect to my virtual file system which is written in C
> and stored in "myExtension.dll".

Why? Can't you do "select load_extension" on that DLL, and the DLL would 
then call sqlite3_vfs_register in its entry point? Am I missing 
something obvious?
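A minimal sketch of such an extension entry point (my_vfs is assumed to be a 
fully initialized sqlite3_vfs implemented elsewhere in the DLL; only the 
registration mechanics are shown):

#include <sqlite3ext.h>
SQLITE_EXTENSION_INIT1

extern sqlite3_vfs my_vfs;          /* the custom VFS implemented in this DLL */

/* Called by SQLite when the DLL is loaded via load_extension(). */
int sqlite3_extension_init(sqlite3 *db, char **pzErrMsg,
                           const sqlite3_api_routines *pApi){
  SQLITE_EXTENSION_INIT2(pApi);
  (void)db; (void)pzErrMsg;
  /* Second argument 0: register without making it the default VFS; pass 1 to
  ** make it the default for subsequently opened connections. */
  return sqlite3_vfs_register(&my_vfs, 0);
}

After select load_extension("myExtension.dll"), the VFS can then be selected by 
name, e.g. via sqlite3_open_v2().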

> In contrast loadable extensions can be used.

I don't understand the distinction you make between loadable extension 
and loadable VFS. A DLL loaded into SQLite-using process can call 
sqlite3_create_function, or sqlite3_create_collation, or 
sqlite3_vfs_register. How is the latter fundamentally different from the 
former?

> For me loadable extensions and virtual file systems are somehow
> similar

No. Custom functions and virtual file systems are similar. Loadable 
extension is a vehicle for packaging either, or both.

Igor Tandetnik 





Re: [sqlite] virtual file system

2009-03-26 Thread Martin Pfeifle
Maybe I am totally mistaken.
Assume I always use the latest sqlite_shell.exe program and do not want to 
change anything in this program; then I cannot connect to my virtual file 
system, which is written in C and stored in "myExtension.dll".
In contrast, loadable extensions can be used.
For me, loadable extensions and virtual file systems are somehow similar: both 
communicate with sqlite via a specified interface (although on different 
levels). But the usage of loadable extensions seems more convenient, and I 
wonder why.
Best Martin





From: Igor Tandetnik <itandet...@mvps.org>
To: sqlite-users@sqlite.org
Sent: Thursday, 26 March 2009, 20:30:33
Subject: Re: [sqlite] virtual file system

Martin Pfeifle wrote:
> Thanks.
> I appreciate very much the way I can add my own loadable extension
> myExtension.dll by simply calling on SQL level
> select load_extension("myExtension.dll") without changing anything in C.

What do you mean, without changing anything in C? What is myExtension.dll 
written in? Can't it call sqlite3_vfs_register during its initialization?

Igor Tandetnik


  


Re: [sqlite] virtual file system

2009-03-26 Thread Martin Pfeifle
Thanks.
I appreciate very much the way I can add my own loadable extension 
myExtension.dll by simply calling, on SQL level,
select load_extension("myExtension.dll") without changing anything in C.
I wish that I could do this for a VFS too:
select load_vfs("myVFS.dll")
without changing anything in C.

Best Martin




From: Roger Binns <rog...@rogerbinns.com>
To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
Sent: Thursday, 26 March 2009, 18:36:50
Subject: Re: [sqlite] virtual file system


Martin Pfeifle wrote:
> I have a question regarding virtual file systems.
> I assume I can load my own virtual file system by calling
> the  c-function sqlite3_vfs_register(...).

That function passes the vfs name and a series of callbacks that
implement the vfs to SQLite.  To actually use a vfs, you need to pass
the appropriate name to sqlite3_open_v2().

> Am I right that I cannot load a virtual file system by a pragma command

You couldn't register one that way since it needs C-level callbacks.
You can set the default vfs when calling sqlite3_vfs_register(...).

> or a core function similar to load_extension?

sqlite3_vfs_register is a core function in exactly the same way as
sqlite3_load_extension

What exactly is it you are trying to achieve?

Roger


[sqlite] virtual file system

2009-03-26 Thread Martin Pfeifle
Dear all, 

I have a question regarding virtual file systems.
I assume I can load my own virtual file system by calling
the  c-function sqlite3_vfs_register(...).

Am I right that I cannot load a virtual file system by a pragma command or
a core function similar to load_extension?

I would appreciate very much if this were possible.

Best Martin



  


Re: [sqlite] Fuzzy Matching

2008-07-03 Thread Martin Pfeifle
Hi, I think there is nothing available except FTS.
Doing a full table scan and computing for each string the (Levenshtein) distance
to the query object is too time consuming. So what I would like to see is the
implementation of a generic metric index which takes a metric distance function
as one parameter. Based on such a distance function you could then do
similarity search on any objects, e.g. images, strings, etc.
One possible index would be the M-tree (which you can also organize relationally,
as was done with the R*-tree). The idea is that you have a hierarchical index
and each node is represented by a database object o and a covering radius r
reflecting the maximal distance of all objects in that subtree to the object o.
If you do a range query now, you compute the distance of your query object to
the object o. If this distance minus the covering radius r is bigger than your
query range, you can prune that subtree.
You can either implement such a similarity module as its own extension, similar
to FTS or the spatial module, or integrate it into FTS and use it only for strings.
Personally, I need the second solution because I'd like to do full and fuzzy
text search.
Are there any plans to implement something like this? If yes, I would like to
take part in such a development.
Best Martin
 
 


- Original Message -
From: Alberto Simões <[EMAIL PROTECTED]>
To: General Discussion of SQLite Database 
Sent: Thursday, 3 July 2008, 21:52:05
Subject: [sqlite] Fuzzy Matching

Hello

Although I am quite certain that the answer is that SQLite does not
provide any mechanism to help me on this, it doesn't hurt to ask. Who
knows, maybe somebody has a suggestion.

Basically, I am using SQLite for a dictionary, and I want to let the
user do fuzzy searches. OK, some simple Levenshtein distance of one or
two would do the trick, probably.

I imagine that SQLite (given the lite) does not provide any kind of
near-miss search.
But perhaps somebody here has done something similar in some language?

Cheers
Alberto
-- 
Alberto Simões


[sqlite] How can I change big-endian to little-endian in the database file

2007-04-03 Thread Martin Pfeifle
Dear all,
in an upcoming project, it is required to store all integer values as little 
endian
instead of big endian (don't ask why).
Nevertheless, I would like to use SQLite in that project.
What do we have to change in the sqlite library 
if we store the integers as little endian?
I came across some functions in B-tree.c and pager.c.

In btree.c:
/* Read or write two- and four-byte big-endian integer values. */
get2byte, get4byte, put2byte, put4byte

In pager.c:
/* All values are stored on disk as big-endian. */
read32bits, write32bits
 
Is that enough?
Another question: what would we have to change if we also stored the 
UTF characters as little endian?
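For the integer part, a little-endian variant of the four-byte helpers might 
look like this (sketch only, not the actual SQLite source; a file written this 
way would of course no longer be readable by stock SQLite builds):

#include <stdint.h>

static uint32_t get4byte_le(const unsigned char *p){
  return  (uint32_t)p[0]        | ((uint32_t)p[1] << 8)
       | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
static void put4byte_le(unsigned char *p, uint32_t v){
  p[0] = (unsigned char)(v);
  p[1] = (unsigned char)(v >> 8);
  p[2] = (unsigned char)(v >> 16);
  p[3] = (unsigned char)(v >> 24);
}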
I appreciate your help.
Martin







AW: [sqlite] Soft search in database

2007-03-06 Thread Martin Pfeifle
Unfortunately, the FTS module of sqlite does not support "fuzzy text search = 
Google search".
What you first need is a similarity measure between strings, e.g. the 
edit distance.
Based on such a similarity measure, you could build up an appropriate index 
structure,
e.g. a Relational M-tree (cf. 
deposit.ddb.de/cgi-bin/dokserv?idn=972667849_var=d1_ext=pdf=972667849.pdf
 Chapter 10.3).
Such a module should not only support range queries, e.g. give me all strings 
which have a distance smaller than eps to my query string, but also ranked 
nearest-neighbor queries.
 
We also urgently need such a module and are thinking about implementing it on 
our own. I would appreciate it if efforts could be synchronized.
Best Martin


- Original Message -
From: Michael Schlenker <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Tuesday, 6 March 2007, 09:46:52
Subject: Re: [sqlite] Soft search in database


Henrik Ræder wrote:
>   Hi
> 
>   (First post - hope it's an appropriate place)
> 
>   I've been implementing a database of a few MB of text (indexing
> magazines) in SQLite, and so far have found it to work really well.
> 
>   Now my boss, who has a wonderfully creative mind, asks me to implement a
> full-text search function which is not the usual simplistic 'found' /
> 'not found', but more Google-style where a graded list of results is 
> returned.
> 
>   For example, in a search for "MP3 Player", results with the phrases next
> to each other would get a high rating, as would records with a high
> occurrence of the keywords.
> 
>   This falls outside the usual scope of SQL, but would still seem a
> relatively common problem to tackle.
> 
>   Any ideas (pointers) how to tackle this?
You have come to the right place.

Take a closer look at:
http://www.sqlite.org/cvstrac/wiki?p=FullTextIndex

Michael

-- 
Michael Schlenker
Software Engineer

CONTACT Software GmbH   Tel.:   +49 (421) 20153-80
Wiener Straße 1-3   Fax:+49 (421) 20153-41
28359 Bremen
http://www.contact.de/  E-Mail: [EMAIL PROTECTED]

Registered office: Bremen | Managing Director: Karl Heinz Zachries
Registered in the commercial register of the Bremen local court under HRB 13215


AW: [sqlite] PK and rowid

2006-10-11 Thread Martin Pfeifle
uniqId is the same as rowid, 
so you will get the same execution plans for ...where rowid=x and  ... where 
uniqId=x.


- Original Message -
From: chetana bhargav <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Wednesday, 11 October 2006, 09:19:28
Subject: [sqlite] PK and rowid


Hi,

If I declare my column as "uniqId integer primary key", and I now say 
something like

  select * from tbl1 where uniqId=x;

will uniqId be the same as rowid, making my table lookups faster as I am 
using the row id only? If not, what's the way to assign my uniqId to the row id 
so that my lookups would be faster?

...
Chetana.



[sqlite] UTF-question

2006-10-04 Thread Martin Pfeifle
Hi,
is it possible to use UTF-16 access functions, e.g. sqlite3_bind_text16 or 
sqlite3_column_text16,
together with a UTF-8 database?

Do I get a byte-identical UTF-8 database file whether I write data to the 
database using sqlite3_bind_text16 or sqlite3_bind_text?
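For illustration (the table t(name) is assumed to exist; SQLite converts bound 
text to the database encoding, so in a UTF-8 database both calls store the same 
UTF-8 bytes):

#include <sqlite3.h>

static int insert_both(sqlite3 *db){
  /* "Straße" once as UTF-8 bytes and once as native-order UTF-16. */
  static const char utf8[] = "Stra\xc3\x9f" "e";
  static const unsigned short utf16[] = { 'S','t','r','a', 0x00DF, 'e', 0 };

  sqlite3_stmt *stmt = 0;
  int rc = sqlite3_prepare_v2(db,
      "INSERT INTO t(name) VALUES(?1)", -1, &stmt, 0);
  if( rc!=SQLITE_OK ) return rc;

  sqlite3_bind_text(stmt, 1, utf8, -1, SQLITE_STATIC);
  sqlite3_step(stmt);
  sqlite3_reset(stmt);

  sqlite3_bind_text16(stmt, 1, utf16, -1, SQLITE_STATIC);  /* same value */
  sqlite3_step(stmt);

  return sqlite3_finalize(stmt);
}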

Best Martin




AW: AW: [sqlite] Performance question

2006-09-26 Thread Martin Pfeifle
Hi Michael,
could you please (re)post the exact create index statements + primary key you 
used?
For speeding up your query, you need an index on x only, not on (id, x).
Best Martin

- Original Message -
From: Michael Wohlwend <[EMAIL PROTECTED]>
To: "sqlite-users@sqlite.org" 
Sent: Tuesday, 26 September 2006, 09:34:00
Subject: AW: [sqlite] Performance question


-Original Message-
From: Dennis Cote [mailto:[EMAIL PROTECTED] 
Sent: Friday, 22 September 2006 17:07
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Performance question


Michael Wohlwend wrote:

> But If I do "select data from pictures where (x between high_x and 
> low_x) and (y between high_y and low_y) then this takes ca. 8 seconds 
> (!) on wince.
>
>   

>>If you are really writing your between clauses as above with the high 
>>limit first, then they are not doing what you think. The low limit 
>>should always be given first.

Ah, that was a typo; of course the query was "between (low and high)". I
changed this to
"x > low and x <= high ..." and I got the same result: a single query
(without bounds check) takes 5 ms, the query with the bounds check takes ca.
7 seconds (there are indices on x and y).
I changed the query to (select ... where id in (v1,v2,...)); this was quite
fast again, even if the list of values got over 200 elements, but that's not
the way I wanted to do it. Maybe sqlite on ARM CPUs in the current
implementation isn't optimized enough. But I have no idea where this huge
slowdown comes from.

Cheers
Michael







AW: [sqlite] journal-off assert

2006-08-24 Thread Martin Pfeifle
Hi,
I got the same error when I ported SQLite to an operating system using a 
proprietary file system.
The reason was that our file system did not support sparse files, i.e.
the fstat command returned the wrong file size.
 
Maybe you should, independently of SQLite, try to 
* create a file,
* write data to it,
* do an fseek past the end of the file,
* write more data to the file,
and then determine the size with fstat (a minimal sketch follows below).
In my case that was not working.
It was a bug of the underlying file system and not of SQLite...
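A minimal sketch of that check (POSIX calls; replace them with the platform's 
equivalents on an embedded OS):

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void){
  struct stat st;
  int fd = open("sparse_test.bin", O_CREAT | O_RDWR | O_TRUNC, 0644);
  if( fd < 0 ) return 1;

  write(fd, "head", 4);                 /* write some data            */
  lseek(fd, 100000, SEEK_SET);          /* seek well past end of file */
  write(fd, "tail", 4);                 /* write again                */

  fstat(fd, &st);
  /* A correct file system reports 100004 here; a broken one (like the one
  ** described above) reports a smaller size. */
  printf("fstat size: %lld\n", (long long)st.st_size);
  close(fd);
  return 0;
}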
best Martin


 
- Original Message -
From: weiyang wang <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Thursday, 24 August 2006, 09:52:37
Subject: [sqlite] journal-off assert


Hi,

I am trying to integrate sqlite into a mobile platform, and a porting layer,
like 'win_os.c', has been created for that purpose.

The integrated sqlite works well on our mobile platform with the default
settings, but an error comes up when I try to tune the performance with 'pragma
cache_size = 1'.

What I have done is:

 /* create a new database */
sqlite3_open("contacts.db", &contactsdb);

 /* tune the cache_size */
 sqlite3_exec(contactsdb, "pragma default_cache_size = 1", callback, 0, 0);

 /* create a table */
sqlite3_exec(contactsdb, "create table IF NOT EXISTS contacts (name
text, phone int)", callback, 0, 0);

/* insert values into the table, 250 entries in total */
   for (i = 0; i < 250; i++)
  {
   sqlite3_exec(contactsdb, "insert xxx,xxx", callback, 0, 0);
  }


The symptom is:

After the size of the database increased to 3K, the program cannot insert
entries into the table anymore. The error returned is 'The database disk
image is malformed'.

I have checked the porting layer I have made; the read, write, and offset
operations seem correct.

Then I recompiled the sqlite code with SQLITE_DEBUG and added
'sqlite3_exec(contactsdb, "PRAGMA vdbe_trace=ON", callback, 0, 0);',
and an assertion stalled the execution: "assert( pPager->journalOff==jSz );" in
pager.c

The values at the time of the failed assertion are:
pPager->journalOff = 1544
jSz = 1543
file size of contacts.db-journal is 1543 (checked from Windows)

I guess the problems are related to file offset, reading, and writing, or
related to bugs in SQLite; I am not quite sure.

So, is there anyone who can help me out?


best regards,

yang

AW: AW: [sqlite] New JDBC driver for SQLite

2006-07-31 Thread Martin Pfeifle
Hi David,
could you please briefly outline the differences between your JDBC driver and 
the one developed by Christian Werner?
Best Martin


- Original Message -
From: David Crawshaw <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org; Martin Pfeifle <[EMAIL PROTECTED]>
Sent: Sunday, 30 July 2006, 23:37:17
Subject: Re: AW: [sqlite] New JDBC driver for SQLite


Martin Pfeifle <[EMAIL PROTECTED]> wrote:
> Does the jdbc driver also provide the direct reading and writing of BLOBs?

Yes, through PreparedStatement.setBytes() and ResultSet.getBytes(). I
haven't added support yet for the java.sql.Blob type, because I am
fundamentally opposed to the idea of SQL calls generating heap garbage
and invoking the GC.

One thing I am trying to solve is how to get the size of a BLOB
without using RS.getBLOB().

d

AW: [sqlite] New JDBC driver for SQLite

2006-07-30 Thread Martin Pfeifle
Does the JDBC driver also provide direct reading and writing of BLOBs?
Best Martin


- Original Message -
From: David Crawshaw <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Sunday, 30 July 2006, 14:56:18
Subject: [sqlite] New JDBC driver for SQLite


Hello all,

I have written a new JDBC driver for SQLite.

http://java.zentus.com/sqlitejdbc.html

It is designed to be easy to read and as thin a layer as possible on
top of the C functions. The new _prepare()/_step() API is surprisingly
similar to the JDBC API, making it efficient code. SQLite is compiled
straight into the JNI binary, which keeps things simple for Java
programmers. There is only one dll/so/jnilib that needs to be added to
a project.

I've built binaries for Linux i386, Mac (universal) and Windows. Any
testing would be greatly appreciated; I don't have much hardware.

My few speed tests so far suggest it is fast. Code like:

connection.setAutoCommit(false);
for (int i = 0; i < 1000; i++)
    statement.executeUpdate("INSERT INTO...");
connection.commit();

runs in under 30ms on Linux. This means the JNI bridge is not causing
any serious overhead.

There are still a couple of features to add. I plan automatic Java
date/time handling and the code is not thread safe. These are not
difficult to do, I am just distracted by another optimisation right
now.

The code is under the BSD license. I hope someone finds a use for it.

d

[sqlite] accessing sqlite files directly via http

2006-06-15 Thread Martin Pfeifle
Hello,
We would like to access (read-only) a remote sqlite database via http.
 
On a remote computer, there is an http server and an sqlite database file
(we are not allowed to install any software on that http server).
On a client computer, an application program wants to access the sqlite file 
via http. 
As http supports reading ranges from files and also supports persistent 
connections,
it is in principle possible to write an “os_http” unit (similar to os_unix, 
os_win, ...) which 
allows accessing sqlite files directly via http, i.e. we can simulate fseek, 
fread and fopen via http.
My question now is whether somebody already has some experience with such an 
“http port”.
 
Best Martin
 

[sqlite] full text search

2006-06-02 Thread Martin Pfeifle
Dear all,
I have just seen that there are some thoughts about incorporating full 
text search into SQLite.
http://www.sqlite.org/cvstrac/wiki/wiki?p=FullTextIndex
What is the current status on that project?

AW: [sqlite] Re: ACID for attached databases

2006-05-22 Thread Martin Pfeifle
thank you,
that is exactly what I searched for.
Best Martin


- Original Message -
From: Igor Tandetnik <[EMAIL PROTECTED]>
To: SQLite <sqlite-users@sqlite.org>
Sent: Monday, 22 May 2006, 15:03:03
Subject: [sqlite] Re: ACID for attached databases


Martin Pfeifle <[EMAIL PROTECTED]> wrote:
> We do not have memory databases. But I do not understand how
> atomicity is achieved.
> I understand that the ACID principle can be realized by means of the
> fsync command. But how does this work for more than one database
> file? Can anyone explain that to me, please.

See http://sqlite.org/lockingv3.html, particularly section 4 dealing 
with journal files.

Igor Tandetnik

[sqlite] ACID for attached databases

2006-05-22 Thread Martin Pfeifle
Dear all,
 
We plan to update several sqlite database files within one transaction.
I was just reading the following on the SQLite homepage:
"Transactions involving multiple attached databases are atomic, assuming that 
the main database is not ":memory:".
 
We do not have memory databases. But I do not understand how atomicity is 
achieved.
I understand that the ACID principle can be realized by means of the fsync 
command. But how does this work for more than one database file?
Can anyone explain that to me, please. 
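For reference, the usage pattern in question is sketched below (file and table 
names are examples only, and the log tables are assumed to exist); SQLite makes 
the commit atomic across the files by writing a master journal that the 
individual rollback journals reference:

#include <sqlite3.h>

/* One transaction spanning the main database and an attached database:
** either both inserts become durable or neither does. */
static int transfer(sqlite3 *db){            /* db was opened on main.db */
  return sqlite3_exec(db,
      "ATTACH DATABASE 'second.db' AS second;"
      "BEGIN;"
      "INSERT INTO main.log(msg)   VALUES('written to main.db');"
      "INSERT INTO second.log(msg) VALUES('written to second.db');"
      "COMMIT;",
      0, 0, 0);
}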
 
Best Martin

AW: AW: [sqlite] Re: spatial sqlite anyone ?

2006-05-18 Thread Martin Pfeifle
Hi,
the baisc idea of the Relational R-tree is to map the hierarchical relationship 
between the R-tree nodes to a (father, son) relation where a b-tree is on 
father. You can the traverse the relation preferable using recursive SQL, if 
not available, you have to do it on your own.
The Idea of the Relational Quadtree is to select the Quad numbers according to 
a space filling curve,e.g. z-values.
Then again you can use ordinary SQL.
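As a rough illustration (all table and column names here are invented), such a 
relational R-tree can be held in ordinary tables:

/* Illustrative sketch only.  The R-tree hierarchy is stored as a
** (father, son) relation plus the child's MBR, and SQLite's ordinary
** B-tree index on father drives the traversal. */
static const char *zRelationalRtreeSchema =
  "CREATE TABLE rtree_node("
  "  father INTEGER,"               /* parent node id, NULL for the root       */
  "  son    INTEGER,"               /* child node id, or data object id        */
  "  xmin REAL, ymin REAL,"
  "  xmax REAL, ymax REAL"          /* minimum bounding rectangle of the child */
  ");"
  "CREATE INDEX rtree_node_father ON rtree_node(father);";

/* Traversal, starting at the root node id: fetch the children whose MBR
** intersects the query window and recurse (or iterate with a work list):
**
**   SELECT son FROM rtree_node
**    WHERE father = ?1
**      AND xmax >= ?2 AND xmin <= ?3      -- query window x range
**      AND ymax >= ?4 AND ymin <= ?5;     -- query window y range
*/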

- Original Message -
From: Noel Frankinet <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Thursday, 18 May 2006, 09:50:56
Subject: Re: AW: [sqlite] Re: spatial sqlite anyone ?


Martin Pfeifle wrote:

>Hi,
>I think the simplest solution would be to put a spatial index on top of the 
>B-tree, that's what e.g. Oracle does in their Spatial Cartridge.
>Basically you store the index data in relations and index these relations by 
>B-trees.
>In this case, you do not have to change the core code of SQLite. You can 
>already do this now.
>Nevertheless, it would be nice, if SQlite supports somehow the "extensible 
>indexing interface".
>So, you ask a spatial query and in the background the system uses the index 
>data stored in relations.
>We have done a lot of research on that topic. To get acquainted to the basic 
>idea, you might have a look at
>"The Paradigm of Relational Indexing: A Survey".
> 
>In my opinion, a direct integration of a spatial index is of course 
>preferable, but who is going to do that?
> 
>Best Martin
>
>  
>
Not me.

Thank you for the article.
If I get the key from the quadtree/rtree index, what is the way to get 
the data while staying "on top" of sqlite?
I can see the following scheme: select data according to the normal SQL 
rules (all the non-spatial rules), then filter it spatially (that's what I 
do now, simple bbox filtering, no index).
I would prefer the other way around: use the spatial index, get a subset (a 
key set?), then apply the SQL alphanumerical filter.
Do you know a way to do that in sqlite?

Best regards

>- Original Message -
>From: George Ionescu <[EMAIL PROTECTED]>
>To: sqlite-users@sqlite.org
>Sent: Thursday, 18 May 2006, 08:28:39
>Subject: [sqlite] Re: spatial sqlite anyone ?
>
>
>Hello dear Noel,
>hello all sqlite users,
>
>a spatial extension for sqlite would be nice, although I think that
>replacing the indexing scheme (e.g. replace the current b-tree with a
>quad-tree or another spatial index) is a lot of work.
>
>Just some questions / thoughts:
>
>1. How would you handle indexing? Would you replace completely the btree
>with a spatial index (the hardest thing to do)? And if so, btree indexing
>would still be available or not?
>
>2. Go for GEOS's STRTree. It's the fastest spatial index I could find (I did
>some research awhile ago on open-source solutions regarding spatial
>indexes).
>
>3. You might want to take a look at SpatialIndex
>(http://u-foria.org/marioh/spatialindex/). The library is pretty cleverly
>designed and it supports RSTAR-tree, MVR-tree and TPR-tree. The problem I
>found with this library was that it was slower than GEOS at the time I last
>checked. One of the features I like is that it allows persisting the index.
>This might be an intermediate solution: you could store the index as a blob
>in the database. It would not be very efficient (as efficient as replacing
>indexing) but it would be a place to start.
>
>4. Another nice library which might do the job is CGAL (http://www.cgal.org/).
>I didn't benchmark it, mostly because GEOS suited my needs just fine, but
>who knows, it might be better than the ones I mentioned.
>
>George.
>
>
>  
>


-- 
Noël Frankinet
Gistek Software SA
http://www.gistek.net


AW: [sqlite] Re: spatial sqlite anyone ?

2006-05-18 Thread Martin Pfeifle
Hi,
I think the simplest solution would be to put a spatial index on top of the 
B-tree, that's what e.g. Oracle does in their Spatial Cartridge.
Basically you store the index data in relations and index these relations by 
B-trees.
In this case, you do not have to change the core code of SQLite. You can 
already do this now.
Nevertheless, it would be nice if SQLite somehow supported an "extensible 
indexing interface".
So, you ask a spatial query, and in the background the system uses the index 
data stored in relations.
We have done a lot of research on that topic. To get acquainted with the basic 
idea, you might have a look at
"The Paradigm of Relational Indexing: A Survey".
 
In my opinion, a direct integration of a spatial index is of course preferable, 
but who is going to do that?
 
Best Martin


- Original Message -
From: George Ionescu <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Thursday, 18 May 2006, 08:28:39
Subject: [sqlite] Re: spatial sqlite anyone ?


Hello dear Noel,
hello all sqlite users,

a spatial extension for sqlite would be nice, although I think that
replacing the indexing scheme (e.g. replace the current b-tree with a
quad-tree or another spatial index) is a lot of work.

Just some questions / thoughts:

1. How would you handle indexing? Would you replace completely the btree
with a spatial index (the hardest thing to do)? And if so, btree indexing
would still be available or not?

2. Go for GEOS's STRTree. It's the fastest spatial index I could find (I did
some research awhile ago on open-source solutions regarding spatial
indexes).

3. You might want to take a look at SpatialIndex
(http://u-foria.org/marioh/spatialindex/). The library is pretty cleverly
designed and it supports RSTAR-tree, MVR-tree and TPR-tree. The problem I
found with this library was that it was slower than GEOS at the time I last
checked. One of the features I like is that it allows persisting the index.
This might be an intermediate solution: you could store the index as a blob
in the database. It would not be very efficient (as efficient as replacing
indexing) but it would be a place to start.

4. Another nice library which might do the job is CGAL (http://www.cgal.org/).
I didn't benchmark it, mostly because GEOS suited my needs just fine, but
who knows, it might be better than the ones I mentioned.

George.


AW: [sqlite] spatial sqlite anyone ?

2006-05-17 Thread Martin Pfeifle
I am very interested.
We have been working on a spatial sqlite for almost one year.
We plan to include sqlite in an embedded spatial application.
You mention that there is open-source code for points 3 and 4.
Can you give me a hint where to find it?
I will contact you at the end of the week providing more information about our 
"spatial sqlite"
and the requirements of our application.
Best Martin


- Original Message -
From: Noel Frankinet <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Wednesday, 17 May 2006, 11:42:07
Subject: [sqlite] spatial sqlite anyone ?


Hello all,

I would like to set up a project to "spatialise" sqlite:
1 - to be able to create geometry columns that would store 
points, multipoints, lines, multilines, polygons and multi-polygons (OGIS 
"simple features")
2 - to be able to load and exchange data from WKT (well-known text 
format) and binary (shape files for instance)
3 - to create a spatial index on geometry data (either a quadtree or a 
more advanced R-tree).
4 - to do spatial operations (join, disjoin, near, ...) on those data.

I'm ready to contribute code for 1 and 2; there are open-source C or C++ 
libraries for 3 and 4.

Ideally source should be on sqlite.org.

What do you think ? Any interest ?

-- 
Noël Frankinet
Gistek Software SA
http://www.gistek.net


AW: [sqlite] primary key and physical data organization

2006-04-05 Thread Martin Pfeifle
Thanks.
If we look at a B+-tree, all records are stored at the leaf level and form at 
least a logical list.
But what happens if we insert new records which do not fit on the 
corresponding leaf page any more? Assume this page has to be split. Where is 
the newly created physical page stored? Does the logical list of blocks also 
lead to a list of consecutive physical blocks?
Is the physical ordering of the records in the file independent of the 
insertion order?
Our goal is that all records are physically clustered according to their ROWID.
In order to achieve this goal, does it make sense to reorganize a table with, 
for instance, a command like
"insert into reorganized_table
select * from original_table order by rowid"?

- Original Message -
From: Dennis Cote <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Wednesday, 5 April 2006, 23:20:07
Subject: Re: [sqlite] primary key and physical data organization


Martin Pfeifle wrote:

>Hi,
>Assume I have a table containing an integer primary key. As far as I know, 
>this value is identical to the ROWID, right?
>How does SQLite organize the data within the file?
>Does it try to organize the data on the pages according to the primary key 
>(=ROWID) or according to the insertion order of the records?
>Can anyone explain that to me?
>Best Martin
>
>  
>
Martin,

An integer primary key is the rowid which is the key for the b-tree used 
to store the table. The way the b-tree is constructed is explained in 
the comment at the beginning of the source file btree.c which you can 
view here 
http://www.sqlite.org/cvstrac/fileview?f=sqlite/src/btree.c=1.324. 
Basically it is a tree of blocks with similar rowid values and pointers 
to other blocks with larger and smaller rowid values.

HTH
Dennis Cote


AW: [sqlite] primary key and physical data organization

2006-04-05 Thread Martin Pfeifle
That's great thank you very much.

- Original Message -
From: [EMAIL PROTECTED]
To: sqlite-users@sqlite.org; Martin Pfeifle <[EMAIL PROTECTED]>
Sent: Wednesday, 5 April 2006, 23:09:25
Subject: Re: [sqlite] primary key and physical data organization


Martin Pfeifle <[EMAIL PROTECTED]> wrote:
> Hi,
> Assume I have a table containing an integer primary key. As far as I know, 
> this value is identical to the ROWID, right?

Correct

> How does SQLite organize the data within the file?
> Does it try to organize the data on the pages according to the primary key 
> (=ROWID) or according to the insertion order of the records?
> Can anyone explain that to me?

Entries are ordered by ROWID.
--
D. Richard Hipp   <[EMAIL PROTECTED]>


[sqlite] extensible indexing

2006-02-13 Thread Martin Pfeifle
Dear all,
is it somehow possible to add one's own index structure,
e.g. M-tree, R-tree, quadtree, ... to SQLite?
Has anyone ever done that?
Best Martin









[sqlite] In memory table -> BLOB

2005-11-18 Thread Martin Pfeifle
Dear all,

How can I create an in-memory table?
Is it possible to store an in-memory table in a BLOB
and then store it permanently in an SQLite database?
Obviously, later on, I would like to use the content
of the BLOB again as in-memory table.
Is this possible or not?
Best Martin








[sqlite] vxworks

2005-10-12 Thread Martin Pfeifle
Hi,
I am in deep trouble. I would like to use sqlite on
vxworks. There are no fsync or fcntl calls available
which are used in os_unix.c. 
Can anybody help me? PLEASE!









[sqlite] User-defined Collating Sequences

2005-09-29 Thread Martin Pfeifle
Hi,
does anybody know whether a code example for 
a user-defined collating sequence in C exists, and
where I can find such an example?
Or, even better, has somebody already implemented such a
user-defined collating sequence taking German umlauts
into account?
Best Martin
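A minimal sketch of such a collation (assumes UTF-8 text, folds the umlauts and 
ß onto their base letters, and is not a full DIN 5007 implementation; the 
collation name is just an example):

#include <ctype.h>
#include <sqlite3.h>

/* Fold ä/Ä, ö/Ö, ü/Ü, ß onto a, o, u, s; lower-case everything else. */
static int fold_de(const unsigned char *z, const unsigned char *zEnd, int *pLen){
  if( z[0]==0xC3 && z+1<zEnd ){         /* two-byte UTF-8 sequence */
    *pLen = 2;
    switch( z[1] ){
      case 0xA4: case 0x84: return 'a';   /* ä Ä */
      case 0xB6: case 0x96: return 'o';   /* ö Ö */
      case 0xBC: case 0x9C: return 'u';   /* ü Ü */
      case 0x9F:            return 's';   /* ß   */
      default:              return z[1];
    }
  }
  *pLen = 1;
  return tolower(z[0]);
}

static int german_ci(void *pArg, int n1, const void *p1, int n2, const void *p2){
  const unsigned char *a = p1, *aEnd = a + n1;
  const unsigned char *b = p2, *bEnd = b + n2;
  (void)pArg;
  while( a<aEnd && b<bEnd ){
    int la, lb;
    int ca = fold_de(a, aEnd, &la);
    int cb = fold_de(b, bEnd, &lb);
    if( ca!=cb ) return ca - cb;
    a += la;  b += lb;
  }
  return (a<aEnd) - (b<bEnd);
}

/* Registration, e.g. right after sqlite3_open():
**   sqlite3_create_collation(db, "GERMAN_CI", SQLITE_UTF8, 0, german_ci);
** and then:  CREATE TABLE t(name TEXT COLLATE GERMAN_CI);                  */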





[sqlite] compression

2005-09-24 Thread Martin Pfeifle
Hello,
does anybody know whether it is possible to compress
sqlite data on the page level? If I compress the
sqlite database file with zlib, I get very high
compression rates due to the character of the stored
data.
I think this problem is related to the problem of
using encrypted databases. Perhaps it is possible just
to replace the encryption function call with a zlib
compression call.
Integrating such a call simply into the read and write
functions in the file os_win.c does not work.
Can anybody help me, or give me a hint?
Ciao Martin





[sqlite] index organized table +compression

2005-09-18 Thread Martin Pfeifle
Hello,
I just started to consider the use of SQLite for a
rather big project. For this project it would be
beneficial if the database supported index-organized
tables. Furthermore, it would be helpful if we could
compress the data by integrating compression
techniques into the os.c files.
Does anybody know whether it is possible to compress
data by altering/extending the os.c files? Does
anybody know whether it is possible to create index-
organized tables within SQLite?
Thanks Martin



