[sqlite] unable to write the string "1.0" into a database table

2009-06-26 Thread chandan

Hi,
   I have attached the C program along with this mail. The program does the 
following:

   1. Open a database file (Indicated by the argv[1] argument).
   2. Create the table "some_tbl" and insert a row into the table.
   3. Update the second column of the new row to value "1.0".
   4. Close the database.
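
(The attached program itself is not reproduced in the archive; a rough
standalone sketch of those four steps, with an assumed two-column layout
for some_tbl, is below.)

#include <stdio.h>
#include "sqlite3.h"

/* Sketch only: "some_tbl" is from the mail; the column names
   a and b are assumptions, since the attached program is not
   shown here. */
static int run(sqlite3 *db, const char *sql)
{
    char *err = 0;
    int rc = sqlite3_exec(db, sql, 0, 0, &err);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "error: %s\n", err);  /* a failed step here   */
        sqlite3_free(err);                    /* would explain the    */
    }                                         /* missing column value */
    return rc;
}

int main(int argc, char *argv[])
{
    sqlite3 *db;
    if (argc < 2 || sqlite3_open(argv[1], &db) != SQLITE_OK)
        return 1;
    run(db, "CREATE TABLE some_tbl(a, b);");
    run(db, "INSERT INTO some_tbl(a) VALUES(1);");
    run(db, "UPDATE some_tbl SET b = '1.0' WHERE a = 1;");
    sqlite3_close(db);
    return 0;
}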

After executing the program, open the database file using the sqlite 
command line program:


$ sqlite3 file.db
sqlite> select * from some_tbl;
1|

As the above output shows, the second column does not seem to get 
updated with the value "1.0".

Am I doing anything wrong in the program?

Regards,
chandan



Re: [sqlite] Near misses

2009-06-26 Thread Igor Tandetnik
Alberto Simoes wrote:
> On Fri, Jun 26, 2009 at 3:00 PM, Igor Tandetnik
> wrote:
>> Alberto Simoes wrote:
>>> SELECT DISTINCT(word) FROM dict WHERE word = "ar" OR word = "ca" OR
>>> word LIKE "_car" OR word LIKE "c_r" OR word = "cr" OR word LIKE
>>> "_ar" OR word LIKE "ca_r" OR word LIKE "c_ar" OR word LIKE "ca_" OR
>>> word LIKE "car_";
>>
>> I'd try writing a custom function that figures out whether two words
>> are "close enough" (most of the time, you should be able to declare a
>> negative by looking at just two first characters), then do
>>
>> select word from dict where closeEnough(word, 'car');
>
> Hmm, I need to check how to do that. But that would mean calling the
> function on all words in the database (110K atm).

Well, your current statement evaluates a complicated condition against 
every word in the database. I don't quite see how you can avoid checking 
every word - you can only try and make the check itself faster.
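
(One possible shape for such a function, sketched byte-wise for ASCII; the
name and registration follow Igor's suggestion, everything else is
illustrative:)

#include <string.h>
#include "sqlite3.h"

/* 1 if the Levenshtein distance between a and b is <= 1.
   Starts with the cheap negatives (length check), then scans
   for at most one mismatch. */
static int within1(const char *a, const char *b)
{
    size_t la = strlen(a), lb = strlen(b);
    if (la < lb) { const char *tc = a; a = b; b = tc;
                   size_t ts = la; la = lb; lb = ts; }
    if (la - lb > 1)
        return 0;                      /* cheap negative */
    if (la == lb) {                    /* at most one substitution */
        int diff = 0;
        size_t i;
        for (i = 0; i < la; i++)
            if (a[i] != b[i] && ++diff > 1) return 0;
        return 1;
    }
    {                                  /* la == lb + 1: one removal */
        size_t i = 0, j = 0;
        int skipped = 0;
        while (j < lb) {
            if (a[i] == b[j]) { i++; j++; }
            else if (!skipped) { skipped = 1; i++; }
            else return 0;
        }
        return 1;
    }
}

static void closeEnoughFunc(sqlite3_context *ctx, int argc,
                            sqlite3_value **argv)
{
    const unsigned char *w = sqlite3_value_text(argv[0]);
    const unsigned char *t = sqlite3_value_text(argv[1]);
    (void)argc;
    sqlite3_result_int(ctx, w && t
        && within1((const char *)w, (const char *)t));
}

/* register once per connection, then:
     SELECT word FROM dict WHERE closeEnough(word, 'car');  */
int register_close_enough(sqlite3 *db)
{
    return sqlite3_create_function(db, "closeEnough", 2, SQLITE_UTF8,
                                   0, closeEnoughFunc, 0, 0);
}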

Igor Tandetnik 





Re: [sqlite] (no subject)

2009-06-26 Thread Miroslav Zagorac
> Hello.
>  
> I'm using SQLite with a VB wrapper (dhSQLite) for VB6.
>  
> The following SQL string works fine for putting together a recordset where
> the DATE field contains only the date of the last day of each month.
>  
>  
>  SQLString = "SELECT date(Date,'start of month','+1 month','-1 day') as Date, " & _
>              "First(Open,ID) as Open, Max(High) as High, " & _
>              "Min(Low) as Low, Last(Close,ID) as Close " & _
>              "FROM [" & sTable & "] GROUP By Year, Month"
> 
>  
> Each record represents the Open, High, Low, Close price for a complete
> month, and is represented by the last day of the month.
>  
> So we have...
>  
> 2009-01-31
> 2009-02-28
> 2009-03-31
> 2009-04-30
> ...
>  
> It is currently in the yyyy-mm-dd format.
>  
> The line "date(Date,'start of month','+1 month','-1 day') as Date" is what
> formats the month date to be the last day of the month.
>  
> MY PROBLEM:
>  
> I would like the date format to be either (mm/dd/yyyy) or (dd/mm/yyyy)
> depending on the user's system date format.
>  
> In the US, we use mm/dd/yyyy.
>  
> I'm completely lost on how to do this within the SELECT statement above.
>  
> Can someone suggest?
>  
> Thanks.
>  
> Webbiz

select strftime ('%m/%d/%Y', Date,'start of month','+1 month','-1 day') from ...
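
For the dd/mm/yyyy form, swap the specifiers: strftime('%d/%m/%Y', ...).
Choosing between the two based on the user's system date format has to
happen on the VB side, since SQLite itself has no notion of the system
locale.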

-- 
Zaga

You have worked and not worked.  Not working is the hardest work of all.


[sqlite] (no subject)

2009-06-26 Thread Rick Ratchford
Hello.
 
I'm using SQLite with a VB wrapper (dhSQLite) for VB6.
 
The following SQL string works fine for putting together a recordset where
the DATE field contains only the date of the last day of each month.
 
 
 SQLString = "SELECT date(Date,'start of month','+1 month','-1 day') as Date, " & _
             "First(Open,ID) as Open, Max(High) as High, " & _
             "Min(Low) as Low, Last(Close,ID) as Close " & _
             "FROM [" & sTable & "] GROUP By Year, Month"

 
Each record represents the Open, High, Low, Close price for a complete
month, and is represented by the last day of the month.
 
So we have...
 
2009-01-31
2009-02-28
2009-03-31
2009-04-30
...
 
It is currently in the yyyy-mm-dd format.
 
The line "date(Date,'start of month','+1 month','-1 day') as Date" is what
formats the month date to be the last day of the month.
 
MY PROBLEM:
 
I would like the date format to be either (mm/dd/yyyy) or (dd/mm/yyyy)
depending on the user's system date format.
 
In the US, we use mm/dd/yyyy.
 
I'm completely lost on how to do this within the SELECT statement above.
 
Can someone suggest?
 
Thanks.
 
Webbiz
 
 


Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

2009-06-26 Thread Jay A. Kreibich
On Fri, Jun 26, 2009 at 05:07:16PM -0400, Greg Morehead scratched on the wall:
> 
> If I close then reopen the database all my memory is recovered.  
> 
> Is this by design???  I was intending on keeping a connection open most of
> the time.

  Are you sure you're not looking at the page cache?

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

2009-06-26 Thread Kees Nuyt
On Fri, 26 Jun 2009 17:07:16 -0400, "Greg Morehead"
 wrote:

>
>If I close then reopen the database all my memory is recovered.  
>
>Is this by design???  

Yes, what you see is probably the page cache.


>I was intending on keeping a connection open most of the time.

That's a good idea, for at least two reasons:

- opening a connection has to parse the schema, 
  and though that's fast code, it should be avoided.

- the contents of the page cache aren't wasted; 
  they may be re-used by subsequent statements.
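
One way to tell page-cache growth from a genuine leak is to compare the
process heap against what SQLite itself reports (a minimal standalone
sketch; the file name is made up):

#include <stdio.h>
#include "sqlite3.h"

/* If heap growth tracks sqlite3_memory_used(), you are watching
   SQLite's caches, not a leak. */
int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("TFQ-test.db", &db) != SQLITE_OK)
        return 1;
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x);"
                     "INSERT INTO t VALUES(1);", 0, 0, 0);
    printf("sqlite heap: %lld bytes (highwater %lld)\n",
           (long long)sqlite3_memory_used(),
           (long long)sqlite3_memory_highwater(0));
    sqlite3_close(db);
    printf("after close: %lld bytes\n",
           (long long)sqlite3_memory_used());
    return 0;
}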

-- 
  (  Kees Nuyt
  )
c[_]


Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

2009-06-26 Thread Greg Morehead

If I close then reopen the database all my memory is recovered.  

Is this by design???  I was intending on keeping a connection open most of the time.


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org]on Behalf Of Greg Morehead
Sent: Friday, June 26, 2009 4:41 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1


FYI.  I replaced the sqlite3_exec call with sqlite3_prepare_v2, sqlite3_step, 
sqlite3_finalize.

Same results.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org]on Behalf Of Greg Morehead
Sent: Friday, June 26, 2009 4:21 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1


Based on the documentation in the link you sent I should only need to call free 
if there was an error message.

But, since there is no harm in calling sqlite3_free on a null pointer I moved 
it out of the if statement.  It had no impact, still leaking like a faucet.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org]on Behalf Of Ribeiro, Glauber
Sent: Friday, June 26, 2009 4:17 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1


I took only a quick look, but it seems to me that sqlite3_free is only
being called if there is an error.
See http://sqlite.org/c3ref/exec.html

g

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Greg Morehead
Sent: Friday, June 26, 2009 2:49 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

The extremely simple app below leaks.  What am I doing wrong??  Please
help.


#include <stdlib.h>
#include <unistd.h>
#include "stdio.h"
#include "sqlite3.h"

#define TFQ_SQL_DB_NAME "/powerblock/datalog/TFQ-test.db"
#define _TABLE_NAME "sqltest"

int main(int argc, char *argv[])
{
    char sql[1024];
    char *errmsg;
    sqlite3 *db_;
    int flags = SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_READWRITE |
                SQLITE_OPEN_CREATE;

    if (sqlite3_open_v2(TFQ_SQL_DB_NAME, &db_, flags, NULL) != SQLITE_OK)
    {
        return EXIT_FAILURE;
    }

    sprintf(sql,
            "CREATE TABLE %s_info (col_idx, col_name, data_type, "
            "meta_type, description, grouping); ",
            _TABLE_NAME);

    int retval = sqlite3_exec(db_, sql, 0, 0, &errmsg);
    int idx = 1;
    while (retval == SQLITE_OK)
    {
        sprintf(sql,
                "INSERT INTO %s_info VALUES(%d, \"rowid\", \"INTEGER\", "
                "\"%s\", \"Unique Row ID\", \"_SQL\");",
                _TABLE_NAME, idx++, "INTEGER");

        retval = sqlite3_exec(db_, sql, 0, 0, &errmsg);

        if (retval != SQLITE_OK) {
            /* errmsg is only set on failure; free it here */
            printf("Received an sqlite failure. : Failed msg: %s", errmsg);
            sqlite3_free(errmsg);
        }
        else
            printf("Successfully inserted row idx %d\r\n", idx);

        usleep(5);    /* 5 microseconds between inserts */
    }
    return EXIT_SUCCESS;
}



Re: [sqlite] Near misses

2009-06-26 Thread Jean-Christophe Deschamps
At 13:25 26/06/2009, you wrote:
´¯¯¯
>I am trying to find words in a dictionary stored in sqlite, and trying
>a near miss approach.
>For that I tried an algorithm to create patterns corresponding to
>Levenshtein distance of 1 (edit distance of 1).
>That means, one adition, one remotion or one substitution.
>
>Any hint on how to speed up this thing?
`---

Hi,

I'm currently finishing a C extension offering, among other functions, 
a "TYPOS" scalar operator which is meant to perform just that, and a 
bit more.

Internally, it applies a Unicode fold() function, a Unicode lower() 
function and then computes the Damerau-Levenshtein distance between the 
strings.  It returns the number of insertions, omissions, changes and 
transpositions (of adjacent letters only).

If the reference string is 'abcdef', it will return 1 (one typo) for
'abdef'     missing c
'abcudef'   u inserted
'abzdef'    c changed into z
'abdcef'    c & d exchanged

It will also accept a trailing '%' in string2 acting as in LIKE.

You can use it this way:

   select * from t where typos(col, 'levencht%') <= 2;

or this way

   select typos(str1, str2)

The code currently makes use of a couple of Win32 functions, which 
should have Un*x equivalents.  It runs at a really decent speed even though I 
didn't fight for optimization.  It will obviously outperform any SQL 
solution by a large factor.

I can't promise a very clean version tomorrow but just mail if you're 
interested in the C source. You could tailor it to your precise needs 
easily.

Hope it can help.
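
For reference, a generic sketch of the Damerau-Levenshtein ("optimal string
alignment") recurrence the TYPOS operator is described as computing - not
the extension's code, just the standard dynamic program, byte-wise ASCII
for brevity:

#include <stdio.h>
#include <string.h>

#define MAXW 64   /* illustrative cap; inputs must be shorter */

static int min3(int a, int b, int c)
{
    int m = a < b ? a : b;
    return m < c ? m : c;
}

static int dl_distance(const char *s, const char *t)
{
    int d[MAXW + 1][MAXW + 1];
    int n = (int)strlen(s), m = (int)strlen(t), i, j;

    for (i = 0; i <= n; i++) d[i][0] = i;      /* omissions  */
    for (j = 0; j <= m; j++) d[0][j] = j;      /* insertions */

    for (i = 1; i <= n; i++) {
        for (j = 1; j <= m; j++) {
            int cost = (s[i-1] == t[j-1]) ? 0 : 1;
            d[i][j] = min3(d[i-1][j] + 1,          /* omission   */
                           d[i][j-1] + 1,          /* insertion  */
                           d[i-1][j-1] + cost);    /* change     */
            if (i > 1 && j > 1 && s[i-1] == t[j-2] && s[i-2] == t[j-1]
                && d[i-2][j-2] + 1 < d[i][j])
                d[i][j] = d[i-2][j-2] + 1;         /* transposition */
        }
    }
    return d[n][m];
}

int main(void)
{
    /* each of the four examples above is one typo away: */
    printf("%d %d %d %d\n",
           dl_distance("abcdef", "abdef"),
           dl_distance("abcdef", "abcudef"),
           dl_distance("abcdef", "abzdef"),
           dl_distance("abcdef", "abdcef"));   /* prints 1 1 1 1 */
    return 0;
}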




Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

2009-06-26 Thread Greg Morehead
FYI.  I replaced the sqlite3_exec call with sqlite3_prepare_v2, sqlite3_step, 
sqlite3_finalize.

Same results.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org]on Behalf Of Greg Morehead
Sent: Friday, June 26, 2009 4:21 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1


Based on the documentation in the link you sent I should only need to call free 
if there was an error message.

But, since there is no harm in calling sqlite3_free on a null pointer I moved 
it out of the if statement.  It had no impact, still leaking like a faucet.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org]on Behalf Of Ribeiro, Glauber
Sent: Friday, June 26, 2009 4:17 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1


I took only a quick look, but it seems to me that sqlite3_free is only
being called if there is an error.
See http://sqlite.org/c3ref/exec.html

g

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Greg Morehead
Sent: Friday, June 26, 2009 2:49 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

The extremely simple app below leaks.  What am I doing wrong??  Please
help.


#include <stdlib.h>
#include <unistd.h>
#include "stdio.h"
#include "sqlite3.h"

#define TFQ_SQL_DB_NAME "/powerblock/datalog/TFQ-test.db"
#define _TABLE_NAME "sqltest"

int main(int argc, char *argv[])
{
    char sql[1024];
    char *errmsg;
    sqlite3 *db_;
    int flags = SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_READWRITE |
                SQLITE_OPEN_CREATE;

    if (sqlite3_open_v2(TFQ_SQL_DB_NAME, &db_, flags, NULL) != SQLITE_OK)
    {
        return EXIT_FAILURE;
    }

    sprintf(sql,
            "CREATE TABLE %s_info (col_idx, col_name, data_type, "
            "meta_type, description, grouping); ",
            _TABLE_NAME);

    int retval = sqlite3_exec(db_, sql, 0, 0, &errmsg);
    int idx = 1;
    while (retval == SQLITE_OK)
    {
        sprintf(sql,
                "INSERT INTO %s_info VALUES(%d, \"rowid\", \"INTEGER\", "
                "\"%s\", \"Unique Row ID\", \"_SQL\");",
                _TABLE_NAME, idx++, "INTEGER");

        retval = sqlite3_exec(db_, sql, 0, 0, &errmsg);

        if (retval != SQLITE_OK) {
            /* errmsg is only set on failure; free it here */
            printf("Received an sqlite failure. : Failed msg: %s", errmsg);
            sqlite3_free(errmsg);
        }
        else
            printf("Successfully inserted row idx %d\r\n", idx);

        usleep(5);    /* 5 microseconds between inserts */
    }
    return EXIT_SUCCESS;
}



Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

2009-06-26 Thread Greg Morehead
Based on the documentation in the link you sent I should only need to call free 
if there was an error message.

But, since there is no harm in calling sqlite3_free on a null pointer I moved 
it out of the if statement.  It had no impact, still leaking like a faucet.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org]on Behalf Of Ribeiro, Glauber
Sent: Friday, June 26, 2009 4:17 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1


I took only a quick look, but it seems to me that sqlite3_free is only
being called if there is an error.
See http://sqlite.org/c3ref/exec.html

g

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Greg Morehead
Sent: Friday, June 26, 2009 2:49 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

The extremely simple app below leaks.  What am I doing wrong??  Please
help.


#include <stdlib.h>
#include <unistd.h>
#include "stdio.h"
#include "sqlite3.h"

#define TFQ_SQL_DB_NAME "/powerblock/datalog/TFQ-test.db"
#define _TABLE_NAME "sqltest"

int main(int argc, char *argv[])
{
    char sql[1024];
    char *errmsg;
    sqlite3 *db_;
    int flags = SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_READWRITE |
                SQLITE_OPEN_CREATE;

    if (sqlite3_open_v2(TFQ_SQL_DB_NAME, &db_, flags, NULL) != SQLITE_OK)
    {
        return EXIT_FAILURE;
    }

    sprintf(sql,
            "CREATE TABLE %s_info (col_idx, col_name, data_type, "
            "meta_type, description, grouping); ",
            _TABLE_NAME);

    int retval = sqlite3_exec(db_, sql, 0, 0, &errmsg);
    int idx = 1;
    while (retval == SQLITE_OK)
    {
        sprintf(sql,
                "INSERT INTO %s_info VALUES(%d, \"rowid\", \"INTEGER\", "
                "\"%s\", \"Unique Row ID\", \"_SQL\");",
                _TABLE_NAME, idx++, "INTEGER");

        retval = sqlite3_exec(db_, sql, 0, 0, &errmsg);

        if (retval != SQLITE_OK) {
            /* errmsg is only set on failure; free it here */
            printf("Received an sqlite failure. : Failed msg: %s", errmsg);
            sqlite3_free(errmsg);
        }
        else
            printf("Successfully inserted row idx %d\r\n", idx);

        usleep(5);    /* 5 microseconds between inserts */
    }
    return EXIT_SUCCESS;
}



Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

2009-06-26 Thread Greg Morehead
I'm watching the heap from the eclipse IDE which connects to the remote 
debugging qconn service on the target.

I've included a screen shot from the last run of the code I included.  This was 
over a 1-minute period where only 2782 records were written!

The problem is much worse on the much larger real db's I am planning on using.


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org]on Behalf Of D. Richard Hipp
Sent: Friday, June 26, 2009 3:58 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1



On Jun 26, 2009, at 3:49 PM, Greg Morehead wrote:

> The extremely simple app below leaks.  What am I doing wrong??   
> Please help.

How do you know it is leaking memory?  How are you measuring?

>
>
> #include <stdlib.h>
> #include <unistd.h>
> #include "stdio.h"
> #include "sqlite3.h"
>
> #define TFQ_SQL_DB_NAME "/powerblock/datalog/TFQ-test.db"
> #define _TABLE_NAME "sqltest"
>
> int main(int argc, char *argv[])
> {
>     char sql[1024];
>     char *errmsg;
>     sqlite3 *db_;
>     int flags = SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_READWRITE |
>                 SQLITE_OPEN_CREATE;
>
>     if (sqlite3_open_v2(TFQ_SQL_DB_NAME, &db_, flags, NULL) != SQLITE_OK)
>     {
>         return EXIT_FAILURE;
>     }
>
>     sprintf(sql,
>             "CREATE TABLE %s_info (col_idx, col_name, data_type, "
>             "meta_type, description, grouping); ",
>             _TABLE_NAME);
>
>     int retval = sqlite3_exec(db_, sql, 0, 0, &errmsg);
>     int idx = 1;
>     while (retval == SQLITE_OK)
>     {
>         sprintf(sql,
>                 "INSERT INTO %s_info VALUES(%d, \"rowid\", \"INTEGER\", "
>                 "\"%s\", \"Unique Row ID\", \"_SQL\");",
>                 _TABLE_NAME, idx++, "INTEGER");
>
>         retval = sqlite3_exec(db_, sql, 0, 0, &errmsg);
>
>         if (retval != SQLITE_OK) {
>             /* errmsg is only set on failure; free it here */
>             printf("Received an sqlite failure. : Failed msg: %s", errmsg);
>             sqlite3_free(errmsg);
>         }
>         else
>             printf("Successfully inserted row idx %d\r\n", idx);
>
>         usleep(5);    /* 5 microseconds between inserts */
>     }
>     return EXIT_SUCCESS;
> }
>

D. Richard Hipp
d...@hwaci.com





Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

2009-06-26 Thread Ribeiro, Glauber
I took only a quick look, but it seems to me that sqlite3_free is only
being called if there is an error.
See http://sqlite.org/c3ref/exec.html

g

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Greg Morehead
Sent: Friday, June 26, 2009 2:49 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

The extremely simple app below leaks.  What am I doing wrong??  Please
help.


#include <stdlib.h>
#include <unistd.h>
#include "stdio.h"
#include "sqlite3.h"

#define TFQ_SQL_DB_NAME "/powerblock/datalog/TFQ-test.db"
#define _TABLE_NAME "sqltest"

int main(int argc, char *argv[])
{
    char sql[1024];
    char *errmsg;
    sqlite3 *db_;
    int flags = SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_READWRITE |
                SQLITE_OPEN_CREATE;

    if (sqlite3_open_v2(TFQ_SQL_DB_NAME, &db_, flags, NULL) != SQLITE_OK)
    {
        return EXIT_FAILURE;
    }

    sprintf(sql,
            "CREATE TABLE %s_info (col_idx, col_name, data_type, "
            "meta_type, description, grouping); ",
            _TABLE_NAME);

    int retval = sqlite3_exec(db_, sql, 0, 0, &errmsg);
    int idx = 1;
    while (retval == SQLITE_OK)
    {
        sprintf(sql,
                "INSERT INTO %s_info VALUES(%d, \"rowid\", \"INTEGER\", "
                "\"%s\", \"Unique Row ID\", \"_SQL\");",
                _TABLE_NAME, idx++, "INTEGER");

        retval = sqlite3_exec(db_, sql, 0, 0, &errmsg);

        if (retval != SQLITE_OK) {
            /* errmsg is only set on failure; free it here */
            printf("Received an sqlite failure. : Failed msg: %s", errmsg);
            sqlite3_free(errmsg);
        }
        else
            printf("Successfully inserted row idx %d\r\n", idx);

        usleep(5);    /* 5 microseconds between inserts */
    }
    return EXIT_SUCCESS;
}



Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

2009-06-26 Thread D. Richard Hipp

On Jun 26, 2009, at 3:49 PM, Greg Morehead wrote:

> The extremely simple app below leaks.  What am I doing wrong??   
> Please help.

How do you know it is leaking memory?  How are you measuring?

>
>
> #include <stdlib.h>
> #include <unistd.h>
> #include "stdio.h"
> #include "sqlite3.h"
>
> #define TFQ_SQL_DB_NAME "/powerblock/datalog/TFQ-test.db"
> #define _TABLE_NAME "sqltest"
>
> int main(int argc, char *argv[])
> {
>     char sql[1024];
>     char *errmsg;
>     sqlite3 *db_;
>     int flags = SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_READWRITE |
>                 SQLITE_OPEN_CREATE;
>
>     if (sqlite3_open_v2(TFQ_SQL_DB_NAME, &db_, flags, NULL) != SQLITE_OK)
>     {
>         return EXIT_FAILURE;
>     }
>
>     sprintf(sql,
>             "CREATE TABLE %s_info (col_idx, col_name, data_type, "
>             "meta_type, description, grouping); ",
>             _TABLE_NAME);
>
>     int retval = sqlite3_exec(db_, sql, 0, 0, &errmsg);
>     int idx = 1;
>     while (retval == SQLITE_OK)
>     {
>         sprintf(sql,
>                 "INSERT INTO %s_info VALUES(%d, \"rowid\", \"INTEGER\", "
>                 "\"%s\", \"Unique Row ID\", \"_SQL\");",
>                 _TABLE_NAME, idx++, "INTEGER");
>
>         retval = sqlite3_exec(db_, sql, 0, 0, &errmsg);
>
>         if (retval != SQLITE_OK) {
>             /* errmsg is only set on failure; free it here */
>             printf("Received an sqlite failure. : Failed msg: %s", errmsg);
>             sqlite3_free(errmsg);
>         }
>         else
>             printf("Successfully inserted row idx %d\r\n", idx);
>
>         usleep(5);    /* 5 microseconds between inserts */
>     }
>     return EXIT_SUCCESS;
> }
>

D. Richard Hipp
d...@hwaci.com





[sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

2009-06-26 Thread Greg Morehead
The extremely simple app below leaks.  What am I doing wrong??  Please help.


#include <stdlib.h>
#include <unistd.h>
#include "stdio.h"
#include "sqlite3.h"

#define TFQ_SQL_DB_NAME "/powerblock/datalog/TFQ-test.db"
#define _TABLE_NAME "sqltest"

int main(int argc, char *argv[])
{
    char sql[1024];
    char *errmsg;
    sqlite3 *db_;
    int flags = SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_READWRITE |
                SQLITE_OPEN_CREATE;

    if (sqlite3_open_v2(TFQ_SQL_DB_NAME, &db_, flags, NULL) != SQLITE_OK)
    {
        return EXIT_FAILURE;
    }

    sprintf(sql,
            "CREATE TABLE %s_info (col_idx, col_name, data_type, "
            "meta_type, description, grouping); ",
            _TABLE_NAME);

    int retval = sqlite3_exec(db_, sql, 0, 0, &errmsg);
    int idx = 1;
    while (retval == SQLITE_OK)
    {
        sprintf(sql,
                "INSERT INTO %s_info VALUES(%d, \"rowid\", \"INTEGER\", "
                "\"%s\", \"Unique Row ID\", \"_SQL\");",
                _TABLE_NAME, idx++, "INTEGER");

        retval = sqlite3_exec(db_, sql, 0, 0, &errmsg);

        if (retval != SQLITE_OK) {
            /* errmsg is only set on failure; free it here */
            printf("Received an sqlite failure. : Failed msg: %s", errmsg);
            sqlite3_free(errmsg);
        }
        else
            printf("Successfully inserted row idx %d\r\n", idx);

        usleep(5);    /* 5 microseconds between inserts */
    }
    return EXIT_SUCCESS;
}



Re: [sqlite] very large SQLite tables

2009-06-26 Thread Kosenko Max

Well, I understand the idea in general and how it works. But as you have
described in the second part of your letter, this won't help. Even if you
create 100 tables, that will save you just 1 step out of 5-7 IO steps, but
won't make the cache hit ratio significantly higher. And I'm pretty sure that
even with a most-recently-used cache in SQLite instead of a most-often-used
one, the underlying OS will cache that step (the root page) for you. But
having 100+ tables will add a lot of overhead, and as a result you won't have
any benefit in theory, and I'm sure not in practice.

BTW, the default page size is now not less than the cluster size, and that's
most of the time > 4K.

Thanks. Max.


Jay A. Kreibich-2 wrote:
> 
>   Assuming we have a huge number of data points and that our operations
>   are on random rows, it would be possible to quickly develop the
>   situation you describe: the cache hit-ratio crashes, and each and
>   every B-tree traversal requires us to pull a page off disk.  This
>   makes a deep tree traversal very expensive, as each moving across
>   each level of the tree requires I/O.
> 
>   Now consider the hash system.  We setup 100 tables in a database and
>   use a hash of the key to figure out which table to access.  From
>   there, it is more or less the same thing.
> 
>   What we're doing is cutting the top of the B-tree off and reducing it
>   to 100 (or whatever) sub-trees.  The whole "super-tree" that we've cut
>   off is replaced by the hash algorithm, allowing us to jump right to
>   the correct sub-tree and start our tree traversal there.  This skips
>   traversal of the layers that make up the "super-tree" structure,
>   saving on the I/O.
> 
>   At least in theory.
> 
>   The problem is two fold.  This is dependent on the cache-replacement
>   algorithm used by SQLite (of which I know nothing about), but in
>   theory the nodes that make up the "super-tree" are exactly the nodes
>   you would expect to remain in the cache.  They're used by every
>   lookup on the dataset, after all.  Even if the replacement algorithm
>   is random and they're written over, they're going to be pulled back
>   in soon enough.
> 
>   Second, if I understand the BTree node structure used in SQLite, a
>   single node can hold a fairly hefty number of children.  This is not a
>   binary tree, that's for sure. This means you're not cutting off all
>   that many layers, even with 100 or more tables, which means you're
>   not actually going to see a ton of savings.
> 
>   Overall, I agree that the OP will likely see a noticeable improvement 
>   if they boost the cache size.  Bumping it up 10x or even 100x on a
>   modern workstation is not that big of a deal (100x ~= 300MB if you
>   start with the default 1K page and 2000 page cache).  We see the same
>   thing when you build a big index (which is accessing semi-random
>   data and doing a lot of tree node shuffling).  The best way to
>   increase index build performance is to boost the cache size so that
>   as much of the Btree as possible is always in RAM.  I suspect this is
>   much the same case.
> 

-- 
View this message in context: 
http://www.nabble.com/very-large-SQLite-tables-tp24201098p24224634.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] How to find the version of the database.

2009-06-26 Thread Martin Engelschalk
Hi,

a database file does not have a version. You can access it with 
different versions of the library.
AFAIK there is no way to determine what version of the library created 
it or which version wrote to it last.
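
The version of the library a given program is running with can be queried,
though; a minimal sketch:

#include <stdio.h>
#include "sqlite3.h"

int main(void)
{
    /* reports the linked library, e.g. "3.6.15" - not the version
       that produced any particular .db file */
    printf("SQLite %s\n", sqlite3_libversion());
    return 0;
}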

Martin

Kalyani Phadke wrote:
> Is there any way to find the version of an SQLite3 database? E.g., I have a
> test.DB file. I want to know which SQLite3 version it's using, e.g. 3.5.4
> or 3.6.15?
>  
> Thanks,
> -K


Re: [sqlite] very large SQLite tables

2009-06-26 Thread Jay A. Kreibich
On Fri, Jun 26, 2009 at 10:06:48AM -0700, Kosenko Max scratched on the wall:
> 
> 
> Doug Fajardo wrote:
> > No, I admit I haven't tried this under SQLITE.
> > 
> > Whether this approach will help for the specific application will depend
> > on data usage patterns, which we haven't delved into for this application.
> > Call me simple: since the main issue is degraded performance with larger
> > groupings of data, it seemed to make sense that breaking the data into
> > smaller groupings would help. 
> > 
> > Of course it's very possible that the size of the database in question may
> > mean that the number of hash buckets needed to reap significant benefits
> > makes this approach counter-productive. That's why it is only a suggestion
> > :-)
> 
> I think your assumptions are wrong initially. I just can't imagine a
> scenario where your proposal will give any benefit, except with a bad
> B-Tree implementation, which is not the case with SQLite.
> 
> As I posted in my answer to the thread-starter, performance degrades
> because the cache hit ratio drops as the amount of data grows. Your
> proposal may work if the keys being fed are not random and hit only a few
> tables, which gives a higher cache hit ratio. But if that's the case, the
> same will occur with a B-Tree. And if that's the case, why not feed the
> data sorted? In that case, logically and as has been proven, SQLite will
> insert the data without any performance degradation.
> 
> Could you describe a situation in which your proposal would help, and why?


  Assuming we have a huge number of data points and that our operations
  are on random rows, it would be possible to quickly develop the
  situation you describe: the cache hit-ratio crashes, and each and
  every B-tree traversal requires us to pull a page off disk.  This
  makes a deep tree traversal very expensive, as each moving across
  each level of the tree requires I/O.

  Now consider the hash system.  We setup 100 tables in a database and
  use a hash of the key to figure out which table to access.  From
  there, it is more or less the same thing.

  What we're doing is cutting the top of the B-tree off and reducing it
  to 100 (or whatever) sub-trees.  The whole "super-tree" that we've cut
  off is replaced by the hash algorithm, allowing us to jump right to
  the correct sub-tree and start our tree traversal there.  This skips
  traversal of the layers that make up the "super-tree" structure,
  saving on the I/O.

  At least in theory.

  The problem is two fold.  This is dependent on the cache-replacement
  algorithm used by SQLite (of which I know nothing about), but in
  theory the nodes that make up the "super-tree" are exactly the nodes
  you would expect to remain in the cache.  They're used by every
  lookup on the dataset, after all.  Even if the replacement algorithm
  is random and they're written over, they're going to be pulled back
  in soon enough.

  Second, if I understand the BTree node structure used in SQLite, a
  single node can hold a fairly hefty number of children.  This is not a
  binary tree, that's for sure. This means you're not cutting off all
  that many layers, even with 100 or more tables, which means you're
  not actually going to see a ton of savings.



  Overall, I agree that the OP will likely see a noticeable improvement 
  if they boost the cache size.  Bumping it up 10x or even 100x on a
  modern workstation is not that big of a deal (100x ~= 300MB if you
  start with the default 1K page and 2000 page cache).  We see the same
  thing when you build a big index (which is accessing semi-random
  data and doing a lot of tree node shuffling).  The best way to
  increase index build performance is to boost the cache size so that
  as much of the Btree as possible is always in RAM.  I suspect this is
  much the same case.
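
  A sketch of the cache bump, for concreteness (numbers are illustrative;
  cache_size is in pages, so 200000 1K pages is roughly 200MB of page data,
  more with per-page overhead):

#include <stdio.h>
#include "sqlite3.h"

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("big.db", &db) != SQLITE_OK)   /* name made up */
        return 1;
    /* 100x the default 2000-page cache, per connection */
    sqlite3_exec(db, "PRAGMA cache_size = 200000;", 0, 0, 0);
    /* ... bulk inserts / index build here ... */
    sqlite3_close(db);
    return 0;
}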

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


[sqlite] How to find the version of the database.

2009-06-26 Thread Kalyani Phadke
Is there any way to find the version of an SQLite3 database? E.g., I have a
test.DB file. I want to know which SQLite3 version it's using, e.g. 3.5.4
or 3.6.15?
 
Thanks,
-K


Re: [sqlite] very large SQLite tables

2009-06-26 Thread Kosenko Max


Doug Fajardo wrote:
> No, I admit I haven't tried this under SQLITE.
> 
> Whether this approach will help for the specific application will depend
> on data usage patterns, which we haven't delved into for this application.
> Call me simple: since the main issue is degraded performance with larger
> groupings of data, it seemed to make sense that breaking the data into
> smaller groupings would help. 
> 
> Of course it's very possible that the size of the database in question may
> mean that the number of hash buckets needed to reap significant benefits
> makes this approach counter-productive. That's why it is only a suggestion
> :-)

I think your assumptions are wrong initially. I just can't imagine a scenario
where your proposal will give any benefit, except with a bad B-Tree
implementation, which is not the case with SQLite.

As I posted in my answer to the thread-starter, performance degrades because
the cache hit ratio drops as the amount of data grows. Your proposal may work
if the keys being fed are not random and hit only a few tables, which gives a
higher cache hit ratio. But if that's the case, the same will occur with a
B-Tree. And if that's the case, why not feed the data sorted? In that case,
logically and as has been proven, SQLite will insert the data without any
performance degradation.

Could you describe a situation in which your proposal would help, and why?

-- 
View this message in context: 
http://www.nabble.com/very-large-SQLite-tables-tp24201098p24223839.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] very large SQLite tables

2009-06-26 Thread Douglas E. Fajardo
No, I admit I haven't tried this under SQLITE.

Whether this approach will help for the specific application will depend on 
data usage patterns, which we haven't delved into for this application. Call me 
simple: since the main issue is degraded performance with larger groupings of 
data, it seemed to make sense that breaking the data into smaller groupings 
would help. 

Of course it's very possible that the size of the database in question may mean 
that the number of hash buckets needed to reap significant benefits makes this 
approach counter-productive. That's why it is only a suggestion :-)

*** Doug Fajardo

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Kosenko Max
Sent: Friday, June 26, 2009 4:06 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] very large SQLite tables


Have you ever tested such a proposal?
I believe it doesn't work.


Doug Fajardo wrote:
> 
> One approach might be to split the big, monolithic table into some number
> of hash buckets, where each 'bucket' is a separate table. When doing a
> search, the program calculates the hash and reads only the bucket
> that is needed.
> 
> This approach also has the potential for allowing multiple databases,
> where tables would be spread across the different databases. The databases
> could be spread across multiple drives to improve performance.
> 
-- 
View this message in context: 
http://www.nabble.com/very-large-SQLite-tables-tp24201098p24218386.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Near misses

2009-06-26 Thread Alberto Simões
On Fri, Jun 26, 2009 at 3:43 PM, Simon
Slavin wrote:
>
> On 26 Jun 2009, at 12:25pm, Alberto Simões wrote:
>
>> one addition, one remotion or one substitution
>
> I am always amazed at how well people use English.  For your word
> 'remotion' you probably mean 'removal' or 'omission'.  You have joined
> the two possibilities together!

Probably I just tried a word similar to the Portuguese word: remoção ;)

> You could write a program to prepare another table in the same
> database with your near-misses in.  In other words, to take each word
> in the dictionary (like 'car') and put entries in this other table for
> each near miss you wish to accept:

Yep. That is one of my current options.
Was just wondering (and thus my mail) about any optimization I could
do in my query.

Thanks ;)
Alberto
-- 
Alberto Simões


Re: [sqlite] Near misses

2009-06-26 Thread Alberto Simões
Hello

On Fri, Jun 26, 2009 at 3:00 PM, Igor Tandetnik wrote:
> Alberto Simoes wrote:
>> For that, my script receives a word (say, 'car') and generated all
>> possible additions and remotions, and substitutions:
>>
>> Additions: _car c_ar ca_r car_
>> Substitutions: _ar c_r ca_
>> remotions: ar cr ca
>>
>> Then, the script constructs an SQL query:
>>
>> SELECT DISTINCT(word) FROM dict WHERE word = "ar" OR word = "ca" OR
>> word LIKE "_car" OR word LIKE "c_r" OR word = "cr" OR word LIKE "_ar"
>> OR word LIKE "ca_r" OR word LIKE "c_ar" OR word LIKE "ca_" OR word
>> LIKE "car_";
>>
>> And this SQL query works... but not as quickly as I need (especially
>> because the speed is proportional to the word size).
>
> I'd try writing a custom function that figures out whether two words are
> "close enough" (most of the time, you should be able to declare a
> negative by looking at just two first characters), then do
>
> select word from dict where closeEnough(word, 'car');

Hmm, I need to check how to do that. But that would mean calling the
function on all words in the database (110K atm).

> I also don't see why you need DISTINCT. Do you have duplicate words in
> dict?

Yes, I have. Forgot to explain ;)

Thanks



-- 
Alberto Simões


Re: [sqlite] Near misses

2009-06-26 Thread Simon Slavin

On 26 Jun 2009, at 12:25pm, Alberto Simões wrote:

> one addition, one remotion or one substitution

I am always amazed at how well people use English.  For your word  
'remotion' you probably mean 'removal' or 'omission'.  You have joined  
the two possibilities together!

> Then, the script constructs an SQL query:
>
> SELECT DISTINCT(word) FROM dict WHERE word = "ar" OR word = "ca" OR
> word LIKE "_car" OR word LIKE "c_r" OR word = "cr" OR word LIKE "_ar"
> OR word LIKE "ca_r" OR word LIKE "c_ar" OR word LIKE "ca_" OR word
> LIKE "car_";
>
> And this SQL query works... but not as quickly as I need (especially
> because the speed is proportional to the word size).

You could write a program to prepare another table in the same  
database with your near-misses in.  In other words, to take each word  
in the dictionary (like 'car') and put entries in this other table for  
each near miss you wish to accept:

nearMiss    realWord

car         car
ca          car
cr          car
ar          car
ca_         car
c_r         car
_ar         car
_car        car
c_ar        car
ca_r        car
car_        car
cat         cat
ca          cat
ct          cat
at          cat
ca_         cat
c_t         cat
_at         cat
_cat        cat
c_at        cat
ca_t        cat
cat_        cat

Then, in your search phase you just consult the near-miss table

SELECT realWord FROM nearMisses WHERE [whatever] LIKE  
nearMisses.nearMiss;

and find all the applicable entries: a single lookup against one index  
should be extremely fast.  Look up the word 'ca' and you get the both  
'car' and 'cat' realWords.  You could even include a JOIN to find the  
entries in your dict table too.

It should be easy to write software which goes through every  
permutation of missing letter, extra letter, etc..  It will lead to  
one very big table, but it will give you instant lookup.  You can  
shrink the table by using the LIKE operator both ways around, at the  
penalty of doubling the time taken.  The choice of whether to bother  
consulting the nearMiss table if the user typed a real word to start  
with is up to you.

Simon.


Re: [sqlite] ROWID of 0

2009-06-26 Thread D . Richard Hipp

On Jun 26, 2009, at 12:22 PM, D. Richard Hipp wrote:

>
> On Jun 26, 2009, at 12:13 PM, Shaun Seckman (Firaxis) wrote:
>
>> I have a column ("ID") in a table that is the primary key integer  
>> so it
>> should be an alias for ROWID.   Is it safe to have  a ROWID of 0?
>
>
> A ROWID can have any value between -9223372036854775808 and  
> 9223372036854775807, inclusive of both ends and of zero.  And, yes,  
> the boundary values are tested in the test suite.
>


I should add, though, that negative ROWIDs each consume 9 bytes of  
disk space, whereas small positive ROWIDs only use 1 or 2 bytes.  So  
non-negative ROWIDs are preferred.
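
A quick round-trip check, as a sketch:

#include <stdio.h>
#include "sqlite3.h"

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *st;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t(ID INTEGER PRIMARY KEY, v);"
                     "INSERT INTO t VALUES(0, 'zero');", 0, 0, 0);
    sqlite3_prepare_v2(db, "SELECT ID, v FROM t", -1, &st, 0);
    while (sqlite3_step(st) == SQLITE_ROW)
        printf("%d -> %s\n", sqlite3_column_int(st, 0),
               (const char *)sqlite3_column_text(st, 1));
    sqlite3_finalize(st);
    sqlite3_close(db);
    return 0;    /* prints "0 -> zero" */
}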


D. Richard Hipp
d...@hwaci.com





Re: [sqlite] ROWID of 0

2009-06-26 Thread Dan

On Jun 26, 2009, at 11:13 PM, Shaun Seckman (Firaxis) wrote:

> I have a column ("ID") in a table that is the primary key integer so  
> it
> should be an alias for ROWID.   Is it safe to have  a ROWID of 0?

Yes.

>
>
>
>
> -Shaun
>
>
>
>
>



Re: [sqlite] ROWID of 0

2009-06-26 Thread D. Richard Hipp

On Jun 26, 2009, at 12:13 PM, Shaun Seckman (Firaxis) wrote:

> I have a column ("ID") in a table that is the primary key integer so  
> it
> should be an alias for ROWID.   Is it safe to have  a ROWID of 0?


A ROWID can have any value between -9223372036854775808 and  
9223372036854775807, inclusive of both ends and of zero.  And, yes,  
the boundary values are tested in the test suite.


D. Richard Hipp
d...@hwaci.com





[sqlite] ROWID of 0

2009-06-26 Thread Shaun Seckman (Firaxis)
I have a column ("ID") in a table that is the primary key integer so it
should be an alias for ROWID.   Is it safe to have  a ROWID of 0?

 

-Shaun

 

 



Re: [sqlite] very large SQLite tables

2009-06-26 Thread Kosenko Max


John Stanton-3 wrote:
> Why would it not work?  It is just adding an extra top level to the 
> index.  A tried and true method.

It will work, but it won't give a performance benefit. And from my
understanding, it will even slow things down.
You can place parts of the index in different DBs and on different HDDs,
thinking it will boost performance.

But the problem is that we aren't talking about multiple selects at the same
time... We are talking about updating an index in SQLite, which is
single-threaded, and even under load that wouldn't give you any advantage.

Moreover, you're emulating part of a B-Tree with that approach and making it
slower and more space-consuming. So should it work as a solution? No.

If you have an idea why it should work, tell me.
-- 
View this message in context: 
http://www.nabble.com/very-large-SQLite-tables-tp24201098p2433.html
Sent from the SQLite mailing list archive at Nabble.com.



[sqlite] How to update field with data in field in many rows?

2009-06-26 Thread Jan Martin
Hi all,

my database has a table ExifPhoto with the fields GPSGeometry and Nearby.

For each row I need to Update field Nearby with all coordinates (in the
table) within a Distance of 1 degree of the coordinates in field
GPSGeometry.

The sqllite database has the spatial extension from SpatiaLite enabled.

So far I have this sql query, which puts all POINTs from all GPSGeometry
fields into field Nearby for each row.

Ideas on how to limit the POINTs to coordinates within a distance of 1
degree, please?

Thanks,
Jan


See the DISTANCE sql for how to do a spatial distance query.
I guess one needs to use PointFromWKB instead of GeomFromText because there is
a POINT already in the field GPSGeometry. But how to use a db field instead of
a hard-coded point?

Puts all Points from field GPSGeometry into field Nearby:

UPDATE ExifPhoto SET Nearby = (
SELECT
GUnion(exif1.GPSGeometry) AS GUnion1
FROM
ExifPhoto AS exif1
INNER JOIN
ExifPhoto AS exif2
ON
exif1.GPSGeometry = exif2.GPSGeometry
);

Working example of Distance search:

SELECT GUnion(GpsGeometry)
FROM ExifPhoto
WHERE Distance(GpsGeometry, GeomFromText('POINT(11.0 43.0)', 4326)) < 1;

SpatiaLite-GUI -> BLOB explorer -> Geometry Explorer:
SRID: 4326

Geometry type: MULTIPOINT

#6 POINTs:
  1) 11.8791  43.4660
  2) 11.8792  43.4673
  3) 11.8802  43.4682
  4) 11.8815  43.4645
  5) 11.8815  43.4684
  6) 11.8816  43.4684

So 6 coordinates are within 1 degree of the coordinates 11.0 43.0.
There are 9 rows altogether in the database for testing.

Spatialite: http://www.gaia-gis.it/spatialite/
exif example: http://www.gaia-gis.it/spatialite/spatialite-exif-2.3.0.html


Re: [sqlite] Near misses

2009-06-26 Thread Igor Tandetnik
Alberto Simoes wrote:
> For that, my script receives a word (say, 'car') and generated all
> possible additions and remotions, and substitutions:
>
> Additions: _car c_ar ca_r car_
> Substitutions: _ar c_r ca_
> remotions: ar cr ca
>
> Then, the script constructs an SQL query:
>
> SELECT DISTINCT(word) FROM dict WHERE word = "ar" OR word = "ca" OR
> word LIKE "_car" OR word LIKE "c_r" OR word = "cr" OR word LIKE "_ar"
> OR word LIKE "ca_r" OR word LIKE "c_ar" OR word LIKE "ca_" OR word
> LIKE "car_";
>
>> And this SQL query works... but not as quickly as I need (especially
>> because the speed is proportional to the word size).

I'd try writing a custom function that figures out whether two words are 
"close enough" (most of the time, you should be able to declare a 
negative by looking at just two first characters), then do

select word from dict where closeEnough(word, 'car');

I also don't see why you need DISTINCT. Do you have duplicate words in 
dict?

Igor Tandetnik 





Re: [sqlite] SQLite error: cannot commit transaction - SQL statements in progress

2009-06-26 Thread Igor Tandetnik
Manasi Save wrote:
> I am working on an application. Nowhere in my code am I explicitly
> setting AutoCommit = False after any statement.

You set AutoCommit = False by executing BEGIN statement. You set it back 
to True by executing COMMIT, END or ROLLBACK.

> But I am getting the error "SQLite error: cannot commit transaction - SQL
> statements in progress".

You have to sqlite3_reset or sqlite3_finalize all outstanding statements 
before you can commit a transaction.
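
A minimal sketch of that failure mode and its fix, as observed with
3.6-era libraries (newer versions are more lenient about pending
read-only statements):

#include <stdio.h>
#include "sqlite3.h"

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t(x); INSERT INTO t VALUES(1);"
                     "INSERT INTO t VALUES(2);", 0, 0, 0);

    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    sqlite3_prepare_v2(db, "SELECT x FROM t", -1, &stmt, 0);
    sqlite3_step(stmt);                 /* statement now "in progress" */

    if (sqlite3_exec(db, "COMMIT;", 0, 0, 0) != SQLITE_OK)
        printf("commit failed: %s\n", sqlite3_errmsg(db));

    sqlite3_reset(stmt);                /* or sqlite3_finalize() */
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);   /* succeeds now */
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}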

Igor Tandetnik 





[sqlite] SQLite error: cannot commit transaction - SQL statements in progress

2009-06-26 Thread Manasi Save
Hi All,

I am working on an application. Nowhere in my code am I explicitly setting
AutoCommit = False after any statement.

But I am getting the error "SQLite error: cannot commit transaction - SQL
statements in progress".

Can anyone provide input on the scenarios in which we get this
error?

Thanks in advance.
-- 
Regards,
Manasi Save






Re: [sqlite] Cross-Compile and Installation of Sqlite

2009-06-26 Thread Ben Atkinson

On Thu, June 25, 2009 at 4:46 PM, Matthew L. Creech wrote:

> cd /path/to/sqlite-3.6.15
> ./configure --prefix=/my/target/rootfs --host=arm-unknown-linux-gnueabi
> make install
> 


Thanks for the help, Matthew.  That was exactly what I needed.

With the configure --prefix option, the subsequent "make install" put the
installation bin, include, and lib directories in a directory on my Ubuntu host
where I could zip them up and transfer them to /usr/local on my embedded
target.

Regards,

Ben


> On Thu, Jun 25, 2009 at 4:23 PM, Ben Atkinson wrote:
> >
> > I have an embedded Linux ARM target and wish to run sqlite on it. I
> > successfully cross-compiled sqlite-3.6.15 on my Ubuntu x86 host, and now
> > I'm ready to install sqlite3, its libraries, and headers on my target
> > system.
> >
> > I originally tried compiling sqlite on my embedded target system.
> > Because it has only a flash file system, and there is no swap area, gcc
> > fails because it runs out of memory.
> >
> > I tried zipping up the cross-compiled sqlite-3.6.15 directory from my
> > x86 host into a tar.gz file, downloading it to my target, unzipping it,
> > then running "make install". Because the config files and the Makefile
> > have all of the arm-unknown-linux-gnueabi cross-compiler references to
> > gcc, this doesn't match the actual configuration on my embedded target,
> > and the make fails.
> >
> > Before I start hacking into the sqlite config and Makefiles on my
> > embedded target, has someone already been through this and perhaps has
> > a "howto"? Is there already a recipe in the Makefile for this?
> >
> 
> SQLite isn't much different than any other autotools-based package in
> this regard.  If you're wanting to do a firmware build, and include
> the SQLite pieces in their correct location in your target root
> filesystem, you can just do something like:
> 
> cd /path/to/sqlite-3.6.15
> ./configure --prefix=/my/target/rootfs --host=arm-unknown-linux-gnueabi
> make install
> 
> If all you want is to run the sqlite3 executable, though, you can just
> take the cross-compiled binary + shared-lib and throw them onto the
> target.  Note that you'll need to set LD_LIBRARY_PATH when running
> "sqlite3" to prevent it from complaining about missing libraries,
> unless /lib is writable on your target.
> 
> -- 
> Matthew L. Creech
> 
> 


  


Re: [sqlite] very large SQLite tables

2009-06-26 Thread John Stanton
Why would it not work?  It is just adding an extra top level to the 
index.  A tried and true method.

Kosenko Max wrote:
> Have you ever tested such a proposal?
> I believe it doesn't work.
>
>
> Doug Fajardo wrote:
>   
>> One approach might be to split the big, monolithic table into some number
>> of hash buckets, where each 'bucket' is a separate table. When doing a
>> search, the program calculates the hash and reads only the bucket
>> that is needed.
>>
>> This approach also has the potential for allowing multiple databases,
>> where tables would be spread across the different databases. The databases
>> could be spread across multiple drives to improve performance.
>>
>> 



Re: [sqlite] very large SQLite tables

2009-06-26 Thread Kosenko Max

I forgot to talk about the hash...

My personal choice would be the MurmurHash2 64-bit function:
http://murmurhash.googlepages.com/
http://en.wikipedia.org/wiki/MurmurHash2 - lots of implementations here

It's fast (even in managed implementations), has good characteristics, and is
free. Don't use CRC64...

P.S. You still have a chance of ~1/10,000,000,000 that two strings in a
1-billion-entry dictionary will have the same hash. So you should probably
keep a very small table, cached in memory, that holds collision resolutions -
a string key that was remapped to another string key without a collision.
That's simple to do and removes the chance of collision while keeping the
additional checks very fast (due to the small size of the collision-check
table - I believe you will never see anything in that table at all).
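
As a compact stand-in for illustration (FNV-1a here, since MurmurHash2 is
too long to quote; the principle of hashing the string key down to an
int64 is the same):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* 64-bit FNV-1a; a stand-in for the recommended MurmurHash2-64 */
static uint64_t hash64(const void *data, size_t len)
{
    const unsigned char *p = (const unsigned char *)data;
    uint64_t h = 14695981039346656037ULL;   /* offset basis */
    size_t i;
    for (i = 0; i < len; i++) {
        h ^= p[i];
        h *= 1099511628211ULL;              /* FNV prime */
    }
    return h;
}

int main(void)
{
    const char *key = "an 88-byte string key";   /* made-up example */
    printf("%llu\n", (unsigned long long)hash64(key, strlen(key)));
    return 0;
}
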
-- 
View this message in context: 
http://www.nabble.com/very-large-SQLite-tables-tp24201098p24219678.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Near misses

2009-06-26 Thread Martin Pfeifle
Hi, 

I guess the speed could be significantly improved
if you leave out _car and _ar.
The inverted index, which is basically (term,
blob_containing_all_document_ids_of_this_term),
cannot skip any of the alphabetically ordered terms if the first character is
variable.
At least that's my understanding.

Thank you for your idea, because I am also thinking of putting some fuzzy 
search on top of FTS.

Best Martin




From: Alberto Simões 
To: General Discussion of SQLite Database 
Sent: Friday, June 26, 2009, 13:25:57
Subject: [sqlite] Near misses

Hello.

I am trying to find words in a dictionary stored in sqlite, and trying
a near miss approach.
For that I tried an algorithm to create patterns corresponding to
Levenshtein distance of 1 (edit distance of 1).
That means, one addition, one remotion or one substitution.

For that, my script receives a word (say, 'car') and generated all
possible additions and remotions, and substitutions:

Additions: _car c_ar ca_r car_
Substitutions: _ar c_r ca_
remotions: ar cr ca

Then, the script constructs an SQL query:

SELECT DISTINCT(word) FROM dict WHERE word = "ar" OR word = "ca" OR
word LIKE "_car" OR word LIKE "c_r" OR word = "cr" OR word LIKE "_ar"
OR word LIKE "ca_r" OR word LIKE "c_ar" OR word LIKE "ca_" OR word
LIKE "car_";

And this SQL query works... but not as quickly as I need (especially
because the speed is proportional to the word size).

Any hint on how to speed up this thing?

Thank you
Alberto

--
Alberto Simões



  


[sqlite] Near misses

2009-06-26 Thread Alberto Simões
Hello.

I am trying to find words in a dictionary stored in sqlite, and trying
a near miss approach.
For that I tried an algorithm to create patterns corresponding to
Levenshtein distance of 1 (edit distance of 1).
That means, one addition, one remotion or one substitution.

For that, my script receives a word (say, 'car') and generated all
possible additions and remotions, and substitutions:

Additions: _car c_ar ca_r car_
Substitutions: _ar c_r ca_
remotions: ar cr ca
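
(For concreteness, one byte-wise way to generate these patterns is sketched
below; buffer size and word-length limits are illustrative.)

#include <stdio.h>
#include <string.h>

/* Emit the distance-1 patterns for one word: additions,
   substitutions, remotions (removals). */
static void patterns(const char *w)
{
    size_t n = strlen(w), i;
    char buf[64];                        /* assumes short words */
    for (i = 0; i <= n; i++) {           /* additions: _ in each gap */
        memcpy(buf, w, i);
        buf[i] = '_';
        strcpy(buf + i + 1, w + i);
        printf("%s\n", buf);
    }
    for (i = 0; i < n; i++) {            /* substitutions: _ over each char */
        strcpy(buf, w);
        buf[i] = '_';
        printf("%s\n", buf);
    }
    for (i = 0; i < n; i++) {            /* remotions: drop each char */
        memcpy(buf, w, i);
        strcpy(buf + i, w + i + 1);
        printf("%s\n", buf);
    }
}

int main(void)
{
    patterns("car");   /* prints the ten patterns listed above */
    return 0;
}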

Then, the script constructs an SQL query:

SELECT DISTINCT(word) FROM dict WHERE word = "ar" OR word = "ca" OR
word LIKE "_car" OR word LIKE "c_r" OR word = "cr" OR word LIKE "_ar"
OR word LIKE "ca_r" OR word LIKE "c_ar" OR word LIKE "ca_" OR word
LIKE "car_";

And this SQL query works... but not as quickly as I need (especially
because the speed is proportional to the word size).

Any hint on how to speed up this thing?

Thank you
Alberto

--
Alberto Simões


Re: [sqlite] very large SQLite tables

2009-06-26 Thread Kosenko Max


Matthew O'Keefe wrote:
> We wanted to post to the mailing list to see if there are any obvious,
> first-order things we can try to improve performance for such a large
> table.

The problem with slow inserts, generally speaking, lies in cache misses.
Imagine that each new insert into the index is random. After some time, that
means the cache hit ratio becomes close to zero, so if you're on an HDD you
have to spend 10ms on each B-Tree page read. Even if you don't have any
flushing at all, that gives you ~15 search/insert operations per second
(assuming a tree depth of ~6 levels). The more data you have, the closer you
will be to that limit.

There are several inefficiencies in the SQLite cache due to its simplicity,
and having several tables won't help. Having several files will lead to
spreading the cache between connections. A more efficient cache system,
better defragmentation, or another approach to laying out data can help, but
not radically. That will just move the wall, raising the bottom line from,
e.g., 15 ops to 50 ops.

So what you should do instead:
1. Make the SQLite cache as large as possible.
2. Compact your data as much as possible - one solution would be to convert
the string key to an int64 hash value; that will radically compact the data.
3. Improve sqlite caching and send a patch to D. Richard Hipp :)

If the cache size is larger than the DB, almost every insert will
be very fast.
But with your demand of 1B records, you will have something like 10-20GB of
sqlite db.
I don't know whether sqlite allows such large buffers defined in 64 bits. I
also don't know whether you have that amount of RAM.

SOLUTION PROPOSAL

The first proposal is to feed the data sorted by key. That will always be fast.

Another approach would be to make delayed inserts. If you have peaks of
inserts, you can create a similar empty table and place rows there.
Queries should use UNION, and after some time, when you have e.g. 1
items in there, you can insert them all at once in sorted-by-key order. After
the insert you should empty the table again. That will be MUCH faster.


SAMPLE:

PRAGMA cache_size = 

CREATE TABLE chunks (nameHash integer primary key, pid integer not null);
CREATE TABLE chunksUpdates (nameHash integer primary key, pid integer not
null);

1. Insert with
INSERT INTO chunksUpdates VALUES(CustomHashFunction("some 88 bytes long key
for whatever reason I need it"), 34234234)

2. Select with
SELECT * FROM chunks WHERE nameHash = CustomHashFunction("some 88 bytes long
key for whatever reason I need it") UNION SELECT * FROM chunksUpdates WHERE
nameHash = CustomHashFunction("some 88 bytes long key for whatever reason I
need it") LIMIT 1

3. From time to time when size of chunksUpdates becomes something like 1
do following

BEGIN EXCLUSIVE
INSERT INTO chunks SELECT * FROM chunksUpdates ORDER BY nameHash
DELETE FROM chunksUpdates
END

Updates and deletes are a different story with the same principles...

Making a custom hash function is really important for you, and it really
should be 64-bit based; otherwise you will get duplicates on collections of
1B items.
-- 
View this message in context: 
http://www.nabble.com/very-large-SQLite-tables-tp24201098p24218566.html
Sent from the SQLite mailing list archive at Nabble.com.



[sqlite] Sqlite error in Multi-core systems (Sun T2000)

2009-06-26 Thread Nogueira, Jose (External)
Hello,

 

I would like to inform you about a problem encountered upon writing in a
sqlite database on a multi-core machine (Sun T2000).

Indeed, on T2000 systems, during endurance tests that induce a lot of
regular writes over several hours to a sqlite database (sqlite version 3.3
but also 3.6), the following error is returned by sqlite: "SQL logic error
or missing database".

This error first happened cyclically (about every 2 or 3 hours,
depending on the endurance test) with -DSQLITE_THREADSAFE=1 as the
thread-related sqlite compilation option.

Now, with same -DSQLITE_THREADSAFE=1 and new DSQLITE_ALLOW_XTHREAD_CONNECT=1
compilation options, the error happens only 1 time during the test.

 

A new test with the -DSQLITE_THREAD_OVERRIDE_LOCK=1 option instead of -1 is
planned.

 

Note that this error doesn't occur with a mono-core system (Sun Netra
240), and that the program that writes to the database isn't multithreaded.

 

If you have explanations and/or correction ideas about such an error
that happens on T2000 machines (multi-core) and not on Netra 240 machines
(mono-core), feel free to share them with me.

 

Thank you,

 

José Nogueira

 



Re: [sqlite] very large SQLite tables

2009-06-26 Thread Kosenko Max

Have you ever tested such a proposal?
I believe it doesn't work.


Doug Fajardo wrote:
> 
> One approach might be to split the big, monolithic table into some number
> of hash buckets, where each 'bucket' is a separate table. When doing a
> search, the program calculates the hash and reads only the bucket
> that is needed.
> 
> This approach also has the potential for allowing multiple databases,
> where tables would be spread across the different databases. The databases
> could be spread across multiple drives to improve performance.
> 
-- 
View this message in context: 
http://www.nabble.com/very-large-SQLite-tables-tp24201098p24218386.html
Sent from the SQLite mailing list archive at Nabble.com.
