Re: [sqlite] Rollback over pending queries

2009-10-22 Thread Edzard Pasma
--- edz...@volcanomail.com wrote:

> I wonder if an automatic rollback, as described in
> http://www.sqlite.org/c3ref/busy_handler.html, is more powerful than a 
> rollback programmed in SQL. Particularly if it is able to rollback 
> pending queries from other cursors in the same connection. The 
> programmed rollback fails here with BusyError: cannot rollback 
> transaction - SQL statements in progress. 
> 
> I tried to find out experimentally but failed to reproduce any automatic 
> rollback. It seems another solution is in place. I may not be well 
> informed; is anything written about this already?
> 
> From the timeline it appears there is ongoing development on the 
> subject. Is there any chance that a rollback in SQL will no longer fail 
> with pending queries?

Hello again,

The mechanism of an automatic rollback, as described in 
http://www.sqlite.org/c3ref/busy_handler.html, appears to have disappeared somewhere 
between version 3.5.9 and 3.6.5. The following steps should show the different 
behaviour.

- Create a table containing about 1 MB of data
(I hope anybody can imagine this for himself)

- Let one process keep a shared lock

$ sqlite3 test.db "select * from t" | more
aa
...
(leave this command unfinished)

- Start a large transaction, first in the old version

$ sqlite-amalgamation-3.5.9/sqlite3 test.db
SQLite version 3.5.9
Enter ".help" for instructions
sqlite> begin;
sqlite> insert into t select * from t;
sqlite> insert into t select * from t;
SQL error: disk I/O error
sqlite> commit;
SQL error: cannot commit - no transaction is active

- Same for the current version

$ sqlite3 test.db 
SQLite version 3.6.19
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> begin;
sqlite> insert into t select * from t;
sqlite> insert into t select * from t;
sqlite> commit;
SQL error: database is locked

The current behaviour looks better: the application may still commit once the 
database is unlocked. However, it also has a disadvantage, which shows up in the 
process status. At this point it reads:

$ ps -lp 14440
  UID   PID  PPID CPU PRI NI    VSZ    RSS COMMAND
  501 14440  4281   0  31  0  29936   7088 sqlite3

If the insert step is repeated once again, this becomes:

$ ps -lp 14440
  UID   PID  PPID CPU PRI NI    VSZ    RSS COMMAND
  501 14440  4281   0  31  0  38176  14056 sqlite3

The memory usage (RSS) increases by about 7000 KB, which is in line with the 
amount of data added (4 MB). This is bound to become a problem when dealing 
with gigabytes.

I could not find anything written about the change, except a quote from Dan 
Kennedy in the mailing list: 
http://www.mail-archive.com/sqlite-users@sqlite.org/msg42059.html. But then the 
busy_handler description must be out of date.

This drifted a bit from the subject; I would still like the more powerful 
rollback in SQL.

Thanks, Edzard
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Data migration tool certification

2009-10-22 Thread hfdabler

Hello, 

I have been using the Talend software for a few months now and am very
happy. I have seen the page on the Talend certification and the exam on the
website ( http://www.talend.com/partners/index.php ). 

I wonder what it takes to take the exam; do you need to know Talend well?
Also, is the exam open to everyone, and is it free or paid? 

Maybe someone would have an idea? Thank you.
-- 
View this message in context: 
http://www.nabble.com/Data-migration-tool-certification-tp26010496p26010496.html
Sent from the SQLite mailing list archive at Nabble.com.



[sqlite] Spam filtering. Was: Data migration tool certification

2009-10-22 Thread D. Richard Hipp

On Oct 22, 2009, at 10:16 AM, hfdabler wrote:
>
> I have been using the Talend software for a few months now ...


Needless to say, "hfdabler" has been permanently banned from this  
list.  Not that that matters much, since "hfdabler" is likely a  
throwaway email address and he had already unsubscribed himself by the  
time I was able to investigate.

Mailman allows me to filter messages by header content, but not by the  
content of the body of a message.  Hence, for some time now we have  
been rejecting submissions that contain the word "Talend" in the  
Subject line.  But the version of Mailman we are running does not  
allow me to filter messages that contain "Talend" in the body of the  
message.  Does anyone know if there is a newer version of Mailman  
available that has improved spam filtering capabilities?  Is there a  
different mailing list manager we should consider switching to?

I have changed settings so that moderator approval is required in  
order to join the mailing list.  Perhaps that will be sufficient to  
head off future attacks such as the above.  But it will also mean that  
new people will have to wait until I get around to approving them  
before they can send emails to this list.

Are there other suggestions on how to limit or curtail the kind of  
mailing list abuse practiced by Talend?


D. Richard Hipp
d...@hwaci.com





Re: [sqlite] manipulating arguments (in C)

2009-10-22 Thread Rob Sciuk

Perhaps this might lead you in the right direction Jean-Christophe ...

#include <stdio.h>
#include <stdarg.h>
#include <stdlib.h>

/* Str_t, Int_t and Byt_t are presumably Rob's own typedefs, and q_calloc
 * his calloc wrapper; plain char*, int, char and calloc behave the same. */
Str_t  s_format( Str_t fmt, ... ){
   va_list  ap ;
   Int_t    nx ;
   Byt_t    buf[1] ;
   Str_t    qptr ;

   /* First pass: measure the formatted length without writing anything. */
   va_start( ap, fmt );
   nx = vsnprintf( buf, 0, fmt, ap ) ;
   va_end( ap ) ;

   qptr = (Str_t) q_calloc( 1, nx+1 ) ;

   /* Second pass: format for real into the freshly sized buffer. */
   va_start( ap, fmt ) ;
   nx = vsnprintf( qptr, nx+1, fmt, ap ) ;
   va_end( ap ) ;

   return qptr ;
}

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-=-=
Robert S. Sciuk http://www.controlq.com 259 Simcoe St. S.
Control-Q Research  tel: 905.576.8028   Oshawa, Ont.
r...@controlq.com   fax: 905.576.8386   Canada, L1H 4H3


[sqlite] Detecting real updates

2009-10-22 Thread Marian Aldenhoevel
Hi,

My application is downloading data from the internet, parses and 
transforms it and (currently) stores it in a sqlite3-database. The data 
parses out into a small variable number of records with just two fields 
each.

Whenever that data changes, either the number of records or the actual 
values in existing records, my application needs to forward the complete 
set to an embedded system over a serial link.

So the intermediate storage is mainly used to have a status-quo to 
compare against.

Is there a way to have sqlite3 detect the actual changes after I did a 
number of INSERT OR UPDATE statements? Using a trigger maybe?

If so I could keep the intermediate storage nicely organized and still 
not incur a lot of read-and-then-update overhead to detect the changes.

The alternative is to just prepare it in the format that would be sent 
over the link, record that in a blob and check on changes on that single 
item. Losing the ability to easily query the data to check it manually.

Ciao, MM


Re: [sqlite] Problem about write data into the DB

2009-10-22 Thread 邓超
The problem has been solved; it was actually a permission configuration issue
with writing data to the table. Apache didn't have permission to write data
into the database, so I modified the rights, and everything is OK now.
Thanks!

2009/10/21 邓超 

> Hi sqlite-users,
>   I deployed a django app on my laptop, the whole environment is like this:
> the OS is UBUNTU904, the web server is Apache, and the database is sqlite3.
> The deployment was a success, but when I try to write some data into the
> database, I get an HTTP 500 error. When I check the error log, it shows
> "*OperationalError: unable to open database file*". What does this error
> mean? Are there some operation permissions that need to be configured?
>
> --
> Deng Chao
>



-- 
Deng Chao


Re: [sqlite] manipulating arguments (in C)

2009-10-22 Thread Jean-Christophe Deschamps
Hi Rob,


>Perhaps this might lead you in the right direction Jean-Christophe ...
>
>#include 

Thank you for your answer.
The va_* construct isn't portable, AFAIK.  Various compilers (headers, 
libraries) may (and did) implement it in different and potentially 
incompatible ways.

Since in my case I have homebrew extensions, stock sqlite3.dll and 
third-party software, all compiled with distinct compilers, I've 
avoided using va_*.

In fact, this is a bit pedantic, since what I've done (packing 
arguments in a structure which is pushed on the stack) is strictly 
equivalent to the "packed array" va_* variation and is simply what the 
stack would look like if individual parameters were pushed naturally in 
a standard multi-parameter function call.  The drawback is that the 
maximum number of parameters must be fixed, but I guess this is hard 
to circumvent (in C).

Some day when/if I get spare time, I'll reimplement the printf 
extension, remove what's useless here (eg %c) and make it a bit more 
robust (force ll prefix for integers, a.s.o.) but this is very low 
priority for now.





[sqlite] .import on a .csv file

2009-10-22 Thread Scott Baker
I'm trying to .import a CSV file and I can't quite figure out the syntax.

I created a table, and then did:

.separator ,
.import /tmp/foo.csv mytable

This sort of works, unless my data has a , in it, something like "last, 
first", because then it splits at that , and the number of columns 
doesn't match my table.

Then I tried setting the separator to

.separator \",\"

Which works correctly (i.e. it splits the data properly). However, now my 
first and last columns have " on the beginning/end of them. Is there a way 
to import a well-formed CSV?

My CSV data looks like this:

"38665","101977","Deadly Sparrows Inc.","1435 S. Doolis 
Ln","Donkville","OR","90210","Doolis, Jason","5032349422","Active"

Help!

-- 
Scott Baker - Canby Telcom
System Administrator - RHCE - 503.266.8253


Re: [sqlite] .import on a .csv file

2009-10-22 Thread Adam DeVita
Do you have any new lines, returns, or tabs in any of the real data?  Can
you prove it?

Is this a one-off thing, or are you going to do this routinely?

There has been a lot of discussion on this list about importing csv data and
the hardships of writing a good csv importer.

If this is a one-off, some possible tricks:

1) If there are no newlines or tabs inside the data, perhaps you can run
away from your embedded delimiter by changing the delimiter to a tab?
2) Have you considered using Access, Excel, or OpenOffice to see if you can
get a clean import into there?  This may allow you to save into another
format or...
2b) Use the spreadsheet to create the sql you want to import.

The merits of various solutions can be found by searching the archive.

Go to
http://www.mail-archive.com/sqlite-users@sqlite.org/info.html

and search on "Tedious CSV import question". That was a good discussion.



On Thu, Oct 22, 2009 at 12:16 PM, Scott Baker  wrote:

> I'm trying to .import a CSV file and I can't quite figure out the syntax.
>
> I created a table, and then did:
>
> .separator ,
> .import /tmp/foo.csv mytable
>
> This works sort of, unless my data has , in it. Something like "last,
> first". Because it tries to split at that , and then the number of rows
> doesn't match my table.
>
> Then I tried setting the separator to
>
> .separator \",\"
>
> Which works correctly (i.e. it splits the data properly). However, now my
> first and last columns have " on the beginning/end of them. Is there a way
> to import a well formed CSV.
>
> My CSV data looks like this:
>
> "38665","101977","Deadly Sparrows Inc.","1435 S. Doolis
> Ln","Donkville","OR","90210","Doolis, Jason","5032349422","Active"
>
> Help!
>
> --
> Scott Baker - Canby Telcom
> System Administrator - RHCE - 503.266.8253
>



-- 
VerifEye Technologies Inc.
905-948-0015x245
7100 Warden Ave, Unit 3
Markham ON, L3R 8B5
Canada


Re: [sqlite] .import on a .csv file

2009-10-22 Thread Simon Davies
2009/10/22 Scott Baker:
> I'm trying to .import a CSV file and I can't quite figure out the syntax.
>
> I created a table, and then did:
>
> .separator ,
> .import /tmp/foo.csv mytable
>
> This works sort of, unless my data has , in it. Something like "last,
> first". Because it tries to split at that , and then the number of rows
> doesn't match my table.
>
> Then I tried setting the separator to
>
> .separator \",\"
>
> Which works correctly (i.e. it splits the data properly). However, now my
> first and last columns have " on the beginning/end of them. Is there a way
> to import a well formed CSV.
>
> My CSV data looks like this:
>
> "38665","101977","Deadly Sparrows Inc.","1435 S. Doolis
> Ln","Donkville","OR","90210","Doolis, Jason","5032349422","Active"
>
> Help!

There was a discussion on csv import a little while ago:
http://www.nabble.com/Tedious-CSV-import-question-to25601326.html#a25601326

In essence, you need to do some work if your field separator character
appears inside your data fields.

One solution is to replace your existing separators (,) with a
character that does not exist in your data, specify that character as
the separator to sqlite, and you should be good to go with .import.

>
> --
> Scott Baker - Canby Telcom
> System Administrator - RHCE - 503.266.8253

Rgds,
Simon


Re: [sqlite] .import on a .csv file

2009-10-22 Thread Rich Shepard
On Thu, 22 Oct 2009, Simon Davies wrote:

> One solution is to replace your existing separators  (,) with a character
> that does not exist in your data, specify that character as the separator
> to sqlite, and you should be good to go .import.

   I found the solution was to change all field separators from "," to "|",
which is the default SQLite field separator. This way, after more cleaning
and tuning, I could remove all double quotation marks; each column remained
delimited while embedded commas were preserved.

   I used emacs for this but any text editor will work.

Rich

-- 
Richard B. Shepard, Ph.D.       |  Integrity - Credibility - Innovation
Applied Ecosystem Services, Inc.
 Voice: 503-667-4517  Fax: 503-667-8863


Re: [sqlite] SQLite DB Structure

2009-10-22 Thread CityDev

Firstly, I'm not an SQLite expert but, now you know that, I would guess SQLite
marks deleted records, waiting until some clean-up process removes them and
re-writes the active data. I would therefore look for some bit that is set.
Try setting up a new database and adding records, taking a copy, then deleting
the records, and then comparing the two images. That should point you to where
SQLite is tagging records. Then go back to your target database and 'unset'
the deletion flags, now that you know where SQLite puts them. You should then
be able to see the deleted records again. 
-- 
View this message in context: 
http://www.nabble.com/SQLite-DB-Structure-tp25990484p26013692.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Spam filtering. Was: Data migration tool certification

2009-10-22 Thread Keith Roberts
On Thu, 22 Oct 2009, D. Richard Hipp wrote:

> To: General Discussion of SQLite Database 
> From: D. Richard Hipp 
> Subject: [sqlite] Spam filtering.   Was: Data migration tool certification
> 
>
> On Oct 22, 2009, at 10:16 AM, hfdabler wrote:
>>
>> I have been using the Talend software for a few months now ...
>
>
> Needless to say, "hfdabler" has been permanently banned from this
> list.  Not that that matters any since "hfdabler" is likely a throw-
> away email address and he had already unsubscribed himself by the time
> I was able to investigate.
>
> Mailman allows me to filter messages by header content, but not by the
> content of the body of a message.  Hence, for some time now we have
> been rejecting submissions that contain the word "Talend" in the
> Subject line.  But the version of Mailman we are running does not
> allow me to filter messages that contain "Talend" in the body of the
> message.  Does anyone know if there is a newer version of Mailman
> available that has improved spam filtering capabilities?  Is there a
> different mailing list manager we should consider switching to?
>
> I have changed settings so that moderator approval is required in
> order to join the mailing list.  Perhaps that will be sufficient to
> head off future attacks such as the above.  But it will also mean that
> new people will have to wait until I get around to approving them
> before they can send emails to this list.
>
> Are there other suggestions on how to limit or curtail the kind of
> mailing list abuse practiced by Talend?
>
>
> D. Richard Hipp
> d...@hwaci.com

Hi Richard. I don't use mailman, but you should be able to 
integrate tmda to work with mailman.

TMDA allows you to do email whitelist and blacklist 
filtering with regex pattern matching on all sorts of 
things, including strings in the body of an email.

Googling for mailman tmda appears to return some useful 
results :)

http://www.ietf.org/mail-archive/web/wgchairs/current/msg05172.html

Kind Regards,

Keith Roberts

-
Websites:
http://www.php-debuggers.net
http://www.karsites.net
http://www.raised-from-the-dead.org.uk

All email addresses are challenge-response protected with
TMDA [http://tmda.net]
-


Re: [sqlite] Spam filtering. Was: Data migration tool certification

2009-10-22 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

D. Richard Hipp wrote:
> Is there a
> different mailing list manager we should consider switching to?

What I have found most effective is having the ability to moderate the first
posts from new users.  That way you don't care who joins but do get to
filter them out on what they say.  You also catch those who have joined the
wrong mailing list.

If I was in your shoes I would switch to using Google Groups as they do
provide the moderate first posts option as well as all the other stuff you
would expect from a decent mailing list manager.  They are also better at
spammers and spam filtering since they have so much of it to work from.

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkrgle8ACgkQmOOfHg372QRZogCZAU/Tq0QR2tHkIJTSdpxohzal
qxgAnA4r3KxxNItBalzQhcHPJg99QsNg
=7It7
-END PGP SIGNATURE-


Re: [sqlite] Detecting real updates

2009-10-22 Thread Simon Slavin

On 22 Oct 2009, at 3:51pm, Marian Aldenhoevel wrote:

> Is there a way to have sqlite3 detect the actual changes after I did a
> number of INSERT OR UPDATE statements? Using a trigger maybe?
>
> If so I could keep the intermediate storage nicely organized and still
> not incur a lot of read-and-then-update overhead to detect the  
> changes.
>
> The alternative is to just prepare it in the format that would be sent
> over the link, record that in a blob and check on changes on that  
> single
> item. Losing the ability to easily query the data to check it  
> manually.

If you have a central unchanged copy then you just need to keep a  
journal of all your UPDATE INSERT and DELETE commands.  When you want  
to bring the other copy up-to-date, just download your journal to the  
embedded device, and play back your journal against its own copy of  
the database.

Of course, if the amount of data is small enough, and the device never  
makes changes to it, don't even bother to do that.  Just download a  
fresh copy of the database each time you want to update.

Simon.


Re: [sqlite] commit time

2009-10-22 Thread Tom Broadbent
thanks for the discussion.  i'll keep my eyes open for lock contention.  i'm 
going to start w/ the simple approach first and see how it goes.

thanks
tom

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of John Crenshaw
Sent: Wednesday, October 21, 2009 11:00 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time

Dangerous and disturbing this puzzle is. Only a bug could have locked
those connections.

If I discover anything useful I'll report it separately (no need to
hijack this topic for that.)

John

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
Sent: Thursday, October 22, 2009 1:27 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time


On Oct 22, 2009, at 11:37 AM, John Crenshaw wrote:

> An open cursor will block. I've watched it. It was a major problem,
> and
> I spent many many hours stepping through SQLite before I finally
> figured
> it out. Once I carefully closed out cursors, the problem went away.
> (In
> my case I had a long running write process trying to commit a
> transaction so it could yield to another connection in a separate
> thread
> that wanted to write. If cursors were open on a table, the other
> connection would refuse to grab a write lock on that table, even
> though
> the transaction was committed and there were no open writers.)
>
> I don't remember where for sure (may have been in
> sqlite3BtreeBeginTrans) SQLite specifically checks for open cursors.
>
> The write lock doesn't stop you from reading, but an open cursor DOES
> stop you from writing. You have to check for SQLITE_LOCKED, no way
> around it.

I don't understand the situation described in the first paragraph. But
the statement above is at least not universally true. Tcl test cases
"shared-[01].3.1.1" and "shared-[01].3.1.2" (see around line 229 of
the test/shared.test file in the source distribution) are examples of
one connection writing to a table while a second connection is scanning
through the same table using an open cursor. In this case it is the
"second connection" that is operating in read_uncommitted mode.

Dan.




>
> John
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
> Sent: Thursday, October 22, 2009 12:06 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] commit time
>
>
> On Oct 22, 2009, at 5:21 AM, Tom Broadbent wrote:
>
>> if thread 1 opens a read cursor in read uncommitted mode it can
>> block a write lock?  i thought the read happens w/o a lock?
>
> If using read-uncommitted mode, a reader thread will not block a
> writer thread that is using the same shared-cache. Except, it does
> block a writer from modifying the database schema.
>
> Dan.
>
>
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org
>> ] On Behalf Of John Crenshaw
>> Sent: Wednesday, October 21, 2009 12:03 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] commit time
>>
>> Good, a single write thread saves you all the hassle involved with
>> yielding. Unfortunately, even without multiple writers blocking is
>> still
>> possible. If thread 1 opens a cursor, and thread 2 tries to write
>> before
>> that cursor has been closed, it will return SQLITE_LOCKED. Since any
>> read query will return a cursor, there is always a possibility for
>> blocking, and you need to handle SQLITE_LOCKED.
>>
>> John
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
>> Sent: Wednesday, October 21, 2009 2:09 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] commit time
>>
>> very good.  i don't anticipate multiple writers so this should be
>> pretty
>> simple.
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Crenshaw
>> Sent: Wednesday, October 21, 2009 9:15 AM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] commit time
>>
>> Yes, you have to call sqlite3_enable_shared_cache before opening any
>> database connections, then execute "PRAGMA read_uncommitted = true;"
>> on
>> each connection. Blocking can still happen in some situations, but
>> you
>> can handle it as I described in my original reply.
>>
>> John
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
>> Sent: Wednesday, October 21, 2009 12:05 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] commit time
>>
>>
>> it sounds like this is the feature you recommend using:
>>
>> "A database connection in read-uncommitted mode _does not attempt to
>> obtain read-locks before

Re: [sqlite] Inner Join Performance Issue

2009-10-22 Thread Ralf
I just ran EXPLAIN, but how can I tell if the indexes are used?

I just read that with an operator "like '%a%'" SQLite won't use an index.
Is this the case?
Thanks 
Ralf

> -Ursprüngliche Nachricht-
> Von: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] Im Auftrag von John Crenshaw
> Gesendet: Donnerstag, 22. Oktober 2009 05:53
> An: General Discussion of SQLite Database
> Betreff: Re: [sqlite] Inner Join Performance Issue
> 
> Try to EXPLAIN the query and verify that the index is actually used.
> There are a lot of reasons why this query would probably NOT be using
> the index.
> 
> John
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Ralf
> Sent: Wednesday, October 21, 2009 5:50 PM
> To: 'General Discussion of SQLite Database'
> Subject: [sqlite] Inner Join Performance Issue
> 
> Hello Forum,
> I have a select that joins 15 Tables the where clause consist of 8 like
> relations (all fields are indexed), this is to implement a sort of
> "search
> engine".
> The performance is awful. It takes around 10sec.
> Is this how it should be or is there anything I can do?
> 
> If you need more infos pls. let me know
> 
> Thx
> Ralf
> 


Re: [sqlite] Inner Join Performance Issue

2009-10-22 Thread Simon Slavin

On 22 Oct 2009, at 9:55pm, Ralf wrote:

> I just read, that with an operator "like '%a%'" SQLite won't use an  
> Index.
> Is this the case?

The '%' at the beginning of that operand indicates that the first  
character of the string can be anything, so the index's sort order  
cannot narrow down the search.  An index is of no use if you don't  
know how the value you're searching for begins.

Simon.


[sqlite] Order of operations when using TRIGGERs

2009-10-22 Thread Simon Slavin
I'm looking for assurance that things work the way I think they do,  
and will continue to do so in future versions.

I'm using a TRIGGER that is triggered AFTER INSERT.  The INSERT  
commands do not know or set the value for the ROWID column.  Can I  
reliably fetch the value for this column from 'NEW.' ?  Or can I rely  
only on values which are explicitly set in the INSERT command ?

Simon.


Re: [sqlite] Order of operations when using TRIGGERs

2009-10-22 Thread Rich Shepard
On Thu, 22 Oct 2009, Simon Slavin wrote:

> I'm using a TRIGGER that is triggered AFTER INSERT.  The INSERT commands
> do not know or set the value for the ROWID column.  Can I reliably fetch
> the value for this column from 'NEW.' ?  Or can I rely only on values
> which are explicitly set in the INSERT command ?

Simon,

   I cannot directly answer your question because I have always avoided using
the rowid for _anything_. There's no guarantee that a given row will always
have the same position in the set, so all the SQL experts I've read advise
against using it. Can you assign a unique identifier to each row as the
primary key? If so, use that in the trigger.

Rich


Re: [sqlite] Order of operations when using TRIGGERs

2009-10-22 Thread Igor Tandetnik
Rich Shepard wrote:
> On Thu, 22 Oct 2009, Simon Slavin wrote:
> 
>> I'm using a TRIGGER that is triggered AFTER INSERT.  The INSERT
>> commands do not know or set the value for the ROWID column.  Can I
>> reliably fetch the value for this column from 'NEW.' ?  Or can I
>> rely only on values which are explicitly set in the INSERT command ?
> 
>   I cannot directly answer your question because I have always
> avoided using the rowid for _anything_. There's no guarantee that a
> given row will always have the same position in the set, so all the
> SQL experts I've read advise against using it. Can you assign a
> unique identifier to each row as the primary key? If so, use that in
> the trigger. 

In SQLite, ROWID _is_ the unique identifier for each record. Any field declared 
INTEGER PRIMARY KEY is simply an alias for ROWID. "Position in the set", 
whatever it is, has nothing to do with it.

To Simon: yes, in my experience, AFTER INSERT trigger can happily access 
new.ROWID and/or new.YourId (where YourId is your own column declared INTEGER 
PRIMARY KEY and thus serving as an alias for ROWID). However, not being a 
SQLite developer, I can't give any kind of official guarantee. Hopefully 
someone important will chime in.

Igor Tandetnik



Re: [sqlite] Why FTS3 has the limitations it does

2009-10-22 Thread Scott Hess
On Mon, Oct 19, 2009 at 11:25 AM, John Crenshaw wrote:
> On the other hand, we could add a tree inside each segment to index the
> doclist. The term would be looked up as normal at a cost of O(log nT).
> After that though, if the docid is known, it could be looked up at an
> additional cost of only O(log nD). The total cost O(log nT) + O(log nD)
> is only marginally worse than O(log nTD) (and only because nTD is the
> count of a natural join, rather than a true product of the counts.)

The current code deals with variably-sized doclists by aggregating
small ones into a leaf block, while large ones get their own leaf
block, which can require overflow pages (at the SQLite level).  One of
my ideas for future development was to augment the index btree to
store keys as term+docid.  Large doclists could then be spread across
multiple leaves and more efficiently accessed.  There would be an
extra byte per key in the interior nodes, plus there would be
generally more info above the leaf nodes, but I think it would be
pretty reasonable.

I once had an off-list discussion with someone about more generally
modeling the index as a sorted set of (term, docid) tuples,
which can then be delta-encoded into leaf nodes and built up
more-or-less as things currently work.  That would allow leaf nodes to
be uniformly-sized.  It was a big change, though, in an area I wasn't
enthusiastic about revisiting at the time.
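For the curious, the doclist encoding under discussion is essentially delta-plus-varint compression of sorted docids. A hedged sketch in Python, illustrating the general technique rather than FTS3's actual on-disk format:

```python
def varint(n):
    # Little-endian base-128 varint, similar in spirit to SQLite's
    # (illustrative only, not the exact FTS3 wire format).
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def encode_doclist(docids):
    """Delta-encode a sorted docid list: store each gap as a varint."""
    prev = 0
    out = bytearray()
    for d in docids:
        out += varint(d - prev)
        prev = d
    return bytes(out)

def decode_doclist(data):
    """Reverse the encoding: accumulate varint gaps back into docids."""
    docids, cur, shift, acc = [], 0, 0, 0
    for b in data:
        acc |= (b & 0x7F) << shift
        shift += 7
        if not (b & 0x80):
            cur += acc
            docids.append(cur)
            acc, shift = 0, 0
    return docids

docs = [3, 17, 18, 1000, 1000000]
blob = encode_doclist(docs)
print(len(blob))  # 8 bytes for five docids, thanks to small gaps
```

Because gaps between sorted docids are usually small, the delta stream compresses far better than fixed-width ids would.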

> This still doesn't offer direct access to the doc lists by docid (you
> still need terms) but that problem should be easy to solve once the term +
> docid case is handled separately, because only the docid needs to be
> indexed at that point.

I think that this approach may not offer as much performance increase
as one might like.  Theoretically, you may read 1/10 as many pages for
a particular query, but when you consider seeks and streaming, the
gains may not be measurable when compared to a more stupid
implementation (read the existing doclist and trim it based on
inputs).  Just really depends on things like fragmentation and the
like.

Which leads me to the next point.  There's no gain to doing any of
this if you cannot phrase the question in terms of existing SQLite
structures.  I'm not entirely certain that the current code has any
facility for running MATCH against a set of rowids.  I think you get
one at a time.  That's a big hit for any implementation, but if things
come in sorted order it may be something that can be masked by caching
some info in the cursor.

You could try writing a MATCH function where the doclists needed are
faulted in as needed, and each row trims the inputs to just that docid
before executing the query.  If things come in sorted order you can
just open doclist-readers and scan once, so that part would drop from
N^2 to N.  I think doing this would be a reasonable starter project,
because it could be done with no storage-format changes, but it would
generate a familiarity with lots of bits of the code.  And I think
there's half a chance it would perform reasonably well for many cases.

-scott


Re: [sqlite] Order of operations when using TRIGGERs

2009-10-22 Thread D. Richard Hipp

On Oct 22, 2009, at 7:11 PM, Igor Tandetnik wrote:
>
> In SQLite, ROWID _is_ the unique identifier for each record. Any  
> field declared INTEGER PRIMARY KEY is simply an alias for ROWID.  
> "Position in the set", whatever it is, has nothing to do with it.

A clarification:

The ROWID is always unique, but it is not guaranteed to be static.   
ROWIDs can and often do change when you VACUUM.  Except, if you  
declare an INTEGER PRIMARY KEY, then the IPK will be the same as the  
rowid and the IPK will not change because of VACUUM.  Moral:  Always  
use INTEGER PRIMARY KEY, never a raw ROWID, for anything that you want  
to persist for longer than the current transaction.
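The distinction can be demonstrated from code. A sketch using Python's sqlite3 module (table names invented); note that renumbering of bare rowids after VACUUM is permitted rather than guaranteed, so only the INTEGER PRIMARY KEY stability is asserted:

```python
import sqlite3

# After deleting rows and running VACUUM, bare rowids in a table with
# no INTEGER PRIMARY KEY may be renumbered, while declared INTEGER
# PRIMARY KEY values survive.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # VACUUM cannot run inside a transaction
conn.executescript("""
    CREATE TABLE bare(val TEXT);                       -- implicit rowid only
    CREATE TABLE keyed(id INTEGER PRIMARY KEY, val TEXT);
    INSERT INTO bare(val) VALUES ('a'), ('b'), ('c');
    INSERT INTO keyed(id, val) VALUES (1,'a'), (2,'b'), (3,'c');
    DELETE FROM bare WHERE rowid = 1;
    DELETE FROM keyed WHERE id = 1;
""")
conn.execute("VACUUM")
# Bare rowids may have been renumbered; print rather than assert:
print(conn.execute("SELECT rowid, val FROM bare").fetchall())
# Declared INTEGER PRIMARY KEY values are guaranteed stable:
ids = [r[0] for r in conn.execute("SELECT id FROM keyed ORDER BY id")]
print(ids)  # [2, 3]
```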

D. Richard Hipp
d...@hwaci.com





Re: [sqlite] Inner Join Performance Issue

2009-10-22 Thread John Crenshaw
An example from my own data:

explain query plan select * from categories where cat_name = ?

order | from | detail
------+------+-----------------------------------------------------------
0     | 0    | TABLE categories WITH INDEX sqlite_autoindex_categories_1

You can tell it uses the index because it says so. If you omit the "query plan" 
part you will get a longer, more detailed result. If it has Idx* opcodes, you 
know it is using an index to look up a record.

I also get the same result (WITH INDEX) for:
explain query plan  select * from categories where cat_name LIKE "foo%"
explain query plan  select * from categories where cat_name LIKE "foo_"
explain query plan  select * from categories where cat_name LIKE "foo_%"
explain query plan  select * from categories where cat_name LIKE "foo_bar_%"

Oddly enough, the following does NOT use the index:
explain query plan  select * from categories where cat_name LIKE "foo"

I'm sure this is a bug with the optimizer, because this query is index capable.

Also, be aware that the following can never use the index:
explain query plan  select * from categories where cat_name LIKE ?

The reason it can't use the index with a bound parameter is that the statement 
is compiled in advance, and has to select a plan that will work for any input. 
Since the bound parameter might not be index capable, it has to compile a query 
that uses a slower plan.
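A sketch of checking the plan programmatically with Python's sqlite3 module (table and index names invented). Two caveats not covered above: with the default case-insensitive LIKE a plain index is not used at all, so the sketch turns on PRAGMA case_sensitive_like; and newer SQLite versions can apply the LIKE optimization to a bound parameter when the statement is first bound, so the bound-parameter behaviour described here varies by version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE categories(cat_name TEXT);
    CREATE INDEX idx_cat_name ON categories(cat_name);
""")
# The LIKE prefix optimization needs case-sensitive LIKE
# (or a COLLATE NOCASE index) to use a plain index:
conn.execute("PRAGMA case_sensitive_like=ON")

def plan(sql):
    # EXPLAIN QUERY PLAN's last column is the human-readable detail text
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)

p1 = plan("SELECT * FROM categories WHERE cat_name = 'foo'")
p2 = plan("SELECT * FROM categories WHERE cat_name LIKE 'foo%'")
print(p1)  # mentions the index, e.g. "... USING COVERING INDEX idx_cat_name ..."
print(p2)  # likewise: the 'foo%' prefix becomes an index range scan
```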

If the optimizer problem for strings without wildcards is a performance 
concern, you can watch for this and build your query differently depending on 
whether the string has wildcards or not.

BEWARE: To take advantage of the index with user-supplied strings you will 
have to build your query as a string rather than a prepared statement. MAKE 
SURE YOU SANITIZE YOUR INPUTS!

John

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Ralf
Sent: Thursday, October 22, 2009 4:55 PM
To: 'General Discussion of SQLite Database'
Subject: Re: [sqlite] Inner Join Performance Issue

I just ran EXPLAIN, how can I tell if the Indexes are used?

I just read, that with an operator "like '%a%'" SQLite won't use an Index.
Is this the case?

Thanks 
Ralf

> -Ursprüngliche Nachricht-
> Von: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] Im Auftrag von John Crenshaw
> Gesendet: Donnerstag, 22. Oktober 2009 05:53
> An: General Discussion of SQLite Database
> Betreff: Re: [sqlite] Inner Join Performance Issue
> 
> Try to EXPLAIN the query and verify that the index is actually used.
> There are a lot of reasons why this query would probably NOT be using
> the index.
> 
> John
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Ralf
> Sent: Wednesday, October 21, 2009 5:50 PM
> To: 'General Discussion of SQLite Database'
> Subject: [sqlite] Inner Join Performance Issue
> 
> Hello Forum,
> [>> ]
> I have a select that joins 15 tables; the where clause consists of 8 LIKE
> relations (all fields are indexed). This is to implement a sort of
> "search engine".
> The performance is awful. It takes around 10 sec.
> Is this how it should be or is there anything I can do?
> 
> If you need more infos pls. let me know
> 
> Thx
> Ralf
> 



Re: [sqlite] Order of operations when using TRIGGERs

2009-10-22 Thread Simon Slavin

On 23 Oct 2009, at 1:23am, D. Richard Hipp wrote:

> On Oct 22, 2009, at 7:11 PM, Igor Tandetnik wrote:
>>
>> In SQLite, ROWID _is_ the unique identifier for each record. Any
>> field declared INTEGER PRIMARY KEY is simply an alias for ROWID.
>> "Position in the set", whatever it is, has nothing to do with it.
>
> A clarification:
>
> The ROWID is always unique, but it is not guaranteed to be static.
> ROWIDs can and often do change when you VACUUM.  Except, if you
> declare an INTEGER PRIMARY KEY, then the IPK will be the same as the
> rowid and the IPK will not change because of VACUUM.  Moral:  Always
> use INTEGER PRIMARY KEY, never a raw ROWID, for anything that you want
> to persist for longer than the current transaction.

That helps.  What you're saying is that if I ever intend to use ROWID  
I should explicitly declare a column as INTEGER PRIMARY KEY.  I can  
live with that.

So now my original query comes down to this: if a column's value is  
generated by SQLite rather than being supplied by my INSERT command,  
i.e. if the column is an INTEGER PRIMARY KEY, or has a DEFAULT  
value, can I rely on being able to access that value in my TRIGGER as  
NEW.columnName ?

I note that the documentation about TRIGGER does warn against certain  
tricks while referring to OLD.columnName and also against other tricks  
using BEFORE TRIGGERs so I won't be doing any of that.

Simon.


Re: [sqlite] Order of operations when using TRIGGERs

2009-10-22 Thread D. Richard Hipp

On Oct 22, 2009, at 9:11 PM, Simon Slavin wrote:
>
> So now my original query comes down to this: if a column's value is
> generated by SQLite rather than being supplied by my INSERT command,
> i.e. if the column is an INTEGER PRIMARY KEY, or has a DEFAULT
> value, can I rely on being able to access that value in my TRIGGER as
> NEW.columnName ?

AFTER triggers only.  With BEFORE triggers, the NEW.columnName value  
might not have been generated yet.

D. Richard Hipp
d...@hwaci.com





Re: [sqlite] Order of operations when using TRIGGERs

2009-10-22 Thread Simon Slavin

On 23 Oct 2009, at 3:00am, D. Richard Hipp wrote:

> On Oct 22, 2009, at 9:11 PM, Simon Slavin wrote:
>>
>
>> So now my original query comes down to this: if a column's value is
>> generated by SQLite rather than being supplied by my INSERT command,
>> i.e. if the column is an INTEGER PRIMARY KEY, or has a DEFAULT
>> value, can I rely on being able to access that value in my TRIGGER as
>> NEW.columnName ?
>
> AFTER triggers only.  With BEFORE triggers, the NEW.columnName value
> might not have been generated yet.

Many thanks.

Simon.


Re: [sqlite] Order of operations when using TRIGGERs

2009-10-22 Thread Dan Kennedy

On Oct 23, 2009, at 5:05 AM, Simon Slavin wrote:

> I'm looking for assurance that things work the way I think they do,
> and will continue to do so in future versions.
>
> I'm using a TRIGGER that is triggered AFTER INSERT.  The INSERT
> commands do not know or set the value for the ROWID column.  Can I
> reliably fetch the value for this column from 'NEW.' ?

Yes. An automatically generated rowid is available in an AFTER
INSERT trigger.

Dan.
