Re: [sqlite] SQLite version 3.7.5

2011-02-15 Thread Sylvain Pointeau
Any news on System.Data.SQLite.org?

That was such great news! When are the releases planned?

Best regards,
Sylvain
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] bug in older version of sqlite

2011-02-15 Thread Dustin J. Mitchell
I ran into some odd behavior from older versions of SQLite, and I
wonder if anyone can add some details and help me find a good
workaround.

The details are at http://trac.buildbot.net/ticket/1810.  With
apologies for boiling this down to Python:

import os
import sqlite3

print "sqlite.py:", sqlite3.version
print "sqlite itself:", sqlite3.sqlite_version

if os.path.exists("/tmp/my.db"):
    os.unlink("/tmp/my.db")

def combined():
    conn1 = sqlite3.connect("/tmp/my.db")
    curs1 = conn1.cursor()
    print "1: pragma" ## B
    curs1.execute("PRAGMA table_info('foo')")

    conn2 = sqlite3.connect("/tmp/my.db")
    curs2 = conn2.cursor()
    print "2: create"
    curs2.execute("CREATE TABLE foo ( a integer )")
    conn2.commit()
    conn2.close() ## C

    print "1: select"
    # curs1.execute("SELECT * from sqlite_master") ## A
    curs1.execute("SELECT * from foo")

combined()

This script fails for sqlite-3.6.12 and earlier, and works for
sqlite-3.6.17 and higher.  If I add a query of sqlite_master (A),
things work.  Remove the PRAGMA (B) and things work.  Adding or
removing the close() (C) makes no difference.

This is a part of Buildbot (http://buildbot.net), and older versions
of SQLite are still present on lots of common systems, so I can't
substantially change the required version of SQLite without alienating
a lot of users.

My workaround is to run the above at startup, and if I get an
OperationalError, run 'SELECT * from sqlite_master' before each real
query.  Obviously, that's not great for performance, but I don't
understand the underlying problem well enough to invent something
better.

I'd love to have a pointer to the bug that was solved here (I had a
look through the tracker, but didn't see anything related) as well as
the version in which it was solved.  I'd also love to know what's
going on - per-connection caching of schema metadata?  Finally, any
suggestions for a better workaround would be great.

TIA!
Dustin


[sqlite] UPDATE WITH ORDER BY LIMIT ERROR

2011-02-15 Thread venkat easwar
Hi Buddies,

The SQLite documentation says that UPDATE with LIMIT and ORDER BY clauses is 
supported, but I found it does not actually work. Sample DB schema:

create table check_update( a int, b char);
insert into check_update values (1,'venkat',22);
insert into check_update values (2,'venkat',23);

Now an update like this:

update check_update set b='venkat n' where b='venkat' order by a limit 1;

should update only the first row, not the second, per the documentation:
http://www.sqlite.org/syntaxdiagrams.html#update-stmt-limited

The scenario above is just for reproduction; my actual use case is different 
and needs this feature. :(

Am I missing something in the update?

VENKAT



  


Re: [sqlite] UPDATE WITH ORDER BY LIMIT ERROR

2011-02-15 Thread venkat easwar
Forgot to mention what error I am getting.

near "order": syntax error 
near "limit": syntax error - if I remove the ORDER BY clause

 VENKAT







Re: [sqlite] UPDATE WITH ORDER BY LIMIT ERROR

2011-02-15 Thread venkat easwar
I apologize for the multiple mails. In the CREATE statement I missed one 
column. The actual create statement is:


create table check_update( a int, b char, c int);
insert into check_update values (1,'venkat',22);
insert into check_update values (2,'venkat',23);

 
Failing query:

update check_update set b='venkat n' where b='venkat' order by a limit 1;

support link: http://www.sqlite.org/syntaxdiagrams.html#update-stmt-limited
VENKAT







Re: [sqlite] UPDATE WITH ORDER BY LIMIT ERROR

2011-02-15 Thread Dan Kennedy
On 02/15/2011 06:04 PM, venkat easwar wrote:
> Forgot to mention what error I am getting.
>
> near "order": syntax error
> near "limit": syntax error - if i remove the order by clause

See under the "Optional LIMIT and ORDER BY Clauses" heading
on this page:

   http://www.sqlite.org/lang_update.html

You need to build SQLite with SQLITE_ENABLE_UPDATE_DELETE_LIMIT
defined.

   http://www.sqlite.org/compile.html#enable_update_delete_limit
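For reference, enabling the option when building from the amalgamation might look something like this (the exact file names and linker flags are assumptions about a typical Linux build, not taken from this thread):

```shell
# Build the sqlite3 shell with UPDATE/DELETE ... ORDER BY ... LIMIT enabled.
gcc -DSQLITE_ENABLE_UPDATE_DELETE_LIMIT=1 \
    sqlite3.c shell.c -o sqlite3 -lpthread -ldl
```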



Re: [sqlite] UPDATE WITH ORDER BY LIMIT ERROR

2011-02-15 Thread venkat easwar
Oh... thanks, Kennedy. By the way, are there any run-time options to enable this feature?

 VENKAT







Re: [sqlite] bug in older version of sqlite

2011-02-15 Thread Igor Tandetnik
Dustin J. Mitchell  wrote:
> def combined():
>     conn1 = sqlite3.connect("/tmp/my.db")
>     curs1 = conn1.cursor()
>     print "1: pragma" ## B
>     curs1.execute("PRAGMA table_info('foo')")
>
>     conn2 = sqlite3.connect("/tmp/my.db")
>     curs2 = conn2.cursor()
>     print "2: create"
>     curs2.execute("CREATE TABLE foo ( a integer )")
>     conn2.commit()
>     conn2.close() ## C
>
>     print "1: select"
>     # curs1.execute("SELECT * from sqlite_master") ## A
>     curs1.execute("SELECT * from foo")
>
> combined()
> 
> This script fails for sqlite-3.6.12 and earlier, and works for
> sqlite-3.6.17 and higher.  If I add a query of sqlite_master (A),
> things work.  Remove the PRAGMA (B) and things work.  Adding or
> removing the close() (C) makes no difference.

You should close the cursor after executing PRAGMA table_info('foo'). You 
cannot modify database schema while there are still outstanding statements (on 
the same or other connections).
-- 
Igor Tandetnik
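A minimal sketch of Igor's suggestion, using the Python sqlite3 module (the file path handling here is illustrative, not from the original script):

```python
import os
import sqlite3
import tempfile

# Throwaway database file for the demonstration.
db_dir = tempfile.mkdtemp()
db_path = os.path.join(db_dir, "my.db")

conn1 = sqlite3.connect(db_path)
curs1 = conn1.cursor()
curs1.execute("PRAGMA table_info('foo')")
curs1.fetchall()   # drain the result set
curs1.close()      # finalize the statement before any schema change elsewhere

# With no statement left open on conn1, another connection can
# safely modify the schema.
conn2 = sqlite3.connect(db_path)
conn2.execute("CREATE TABLE foo (a integer)")
conn2.commit()
conn2.close()

# A fresh cursor on conn1 now sees the new (empty) table.
rows = conn1.execute("SELECT * FROM foo").fetchall()
conn1.close()
```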



[sqlite] Scrolling through results of select

2011-02-15 Thread BareFeetWare
Hi all,

I'm looking for the best way to scroll through data returned by an arbitrary 
select statement. That is, I want something that will work for any SQL select 
statement string I throw at it at runtime.

I've looked through:

http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor

and a few posts on this list.

1. It seems that in order to extract, say, the first n rows from a select, I do 
this:

begin
;
create temp view if not exists "Scrolling View" as 
;
select * from "Scrolling View"
order by keyColumn
limit n
;
commit
;

2. And then, to get the next n rows:

maxVisibleKeyValue = last value of keyColumn in visible set (returned above).

begin
;
create temp view if not exists "Scrolling View" as 
;
select * from "Scrolling View"
where keyColumn > :maxVisibleKey
order by keyColumn
limit n
;
commit
;

For the next n rows, I repeat step 2.

For reverse scrolling, I'll run something like:

minVisibleKeyValue = first value of keyColumn in visible set.

begin
;
create temp view if not exists "Scrolling View" as 
;
select * from "Scrolling View"
where keyColumn < :minVisibleKeyValue
order by keyColumn desc
limit n
;
commit
;

Questions:

1. Is there any significant overhead on SQLite from my selecting from a view 
representing the original arbitrary select? That is, will SQLite still use any 
indexes etc correctly? Or do I need to dissect/parse the original select 
statement, changing the where statement etc?

2. If the arbitrary select statement already contains an "order by" clause, 
then I obviously need to use the order by column(s) specified there as the 
keyColumn for the scrolling, rather than override the order by. Is there any 
way to get the sort order of an arbitrary select statement?

3. This method requires that keyColumn is defined as unique (or primary key), 
otherwise it can skip rows of data. Is there any way to allow for a non-unique 
keyColumn?

4. If the arbitrary select statement does not specify an order by, how can I 
least affect the output (ie not impose a sort order) but still facilitate 
scrolling? For selecting from a table, the best I can think of is to use rowid 
(or its alias), which seems to be the typical result order when no order is 
specified. But when selecting from a view (which may contain joins), by which 
column(s) can I explicitly sort (for the sake of scrolling) that will best 
mimic the usual SQL output order (which I know is "undefined")?

5. I understand that "Rule Number 1" is to "not leave queries open". So what's 
the best way to minimize the overhead of repeatedly running the same query but 
with a different where clause and limit (and order if reversing). I'm thinking 
I would be best to actually keep the query (ie prepared statement) open while 
live scrolling (eg flicking through rows on an iPhone/iPad), not using a limit 
clause at all, but instead just keep getting more rows as needed to fill the 
scrolling, until the user stops scrolling, then finalize, close etc. When they 
begin scrolling again, fire up a new prepare (with a new maxVisibleKeyValue) .
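The keyset approach described in question 5 can be sketched like this (hypothetical table and column names; it assumes keyColumn is unique, per question 3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (keyColumn INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (val) VALUES (?)",
                 [("row%d" % i,) for i in range(1, 8)])

def next_page(conn, last_key, n):
    # Keyset pagination: seek past the last visible key rather than
    # using OFFSET, so each page is a cheap indexed range scan.
    return conn.execute(
        "SELECT keyColumn, val FROM t WHERE keyColumn > ? "
        "ORDER BY keyColumn LIMIT ?", (last_key, n)).fetchall()

page1 = next_page(conn, 0, 3)             # keys 1, 2, 3
page2 = next_page(conn, page1[-1][0], 3)  # keys 4, 5, 6
```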

Any help appreciated.

Thanks,
Tom
BareFeetWare

 --
Comparison of SQLite GUI tools:
http://www.barefeetware.com/sqlite/compare/?ml



Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-15 Thread Black, Michael (IS)
I'll give you another failure point that most people never see or think of.

I used to manage numerous Linux systems with RAID-5.  One time I had a drive 
fail, the spare kicked in, and then during the rebuild a 2nd drive 
failed...hosing the RAID (i.e. two failed disks).

The problem was...normal disk access/backup only scanned the in-use blocks.  
The RAID resync scanned the entire disk which had never been done.

After that I put in a utility that did a nightly "dd if=/dev/md0 >/dev/null" 
job to force a scan of the entire disk set.

This is one reason why they invented RAID6.

There's just so many ways to fail...sigh...


Michael D. Black
Senior Scientist
NG Information Systems
Advanced Analytics Directorate




From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Simon Slavin [slav...@bigfraud.org]
Sent: Monday, February 14, 2011 5:04 PM
To: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

On 14 Feb 2011, at 8:50pm, Black, Michael (IS) wrote:

> And if you want to talk about data reliability...BACK UP YOUR DATA.

And keep the backups off-site.  And once a year, try to actually restore one of 
them.  I've earned /lots/ of money from companies with backup systems that 
claimed to work fine but were either faulty or being used to back up to media 
with low-level corruption.

If there was any money in it I'd write a book, but there isn't.  Companies 
prefer to do it wrong and then hire a consultant after it fails.

On 14 Feb 2011, at 9:13pm, Jim Wilcoxson wrote:

> I think what would be useful is for drives to have 3 settings:
>
> a) the drive can do whatever it wants to optimize performance
> b) the drive can reorder writes, but not across a sync (ACI, but no D)
> c) the drive has to respect all syncs (ACID)
>
> If the drive mgfr wants to make a) the default, that's fine, but there
> should be an easy way to request the other 2 from an application.
> Users are not usually sophisticated enough to know when it's okay to
> cache writes or not okay.  For my use of SQLite, b) would probably be
> fine, but a) apparently is not since it corrupts databases.

This is a job for someone who wants to contact lots of manufacturers and ask if 
they provide models of their drives which conform.  Some do, because they want 
to sell their drives for use in servers.  Often these drives are physically 
identical to their mass-market drives, but they have fewer bad blocks, and have 
jumpers in a different place or different firmware.

> Michael mentioned doing backups and forgetting about all of this, but
> many people are using USB drives to store their backups.  So there is
> a bit of a Catch-22 here.  Almost all modern-day filesystems are going
> to depend on some kind of journalling to prevent corrupted file
> systems, and as far I as know, journalling filesystems depend on syncs
> to maintain FS consistency.

If the data is backed up when SQLite has the files closed, things are fine.  
The really hard part of this comes when you're working with systems that must 
be live at all times.  Which is why you'll never see a bank use SQLite to 
maintain its live transaction system.  The big DBMSs have systems inside 
themselves that allow the backing-up of an active ever-changing database.

Simon.


Re: [sqlite] UPDATE WITH ORDER BY LIMIT ERROR

2011-02-15 Thread Kees Nuyt
On Tue, 15 Feb 2011 03:30:23 -0800 (PST), venkat easwar
 wrote:

>Oh... Thanks Kennedy. Between any options on run time to enable the feature?

No, it's a compile-time option; compiling without
SQLITE_ENABLE_UPDATE_DELETE_LIMIT makes SQLite a bit lighter.

Without that feature, if you want to update one row, use its unique
key in a WHERE clause. In your schema, it would make sense to create
the table like this:

create table check_update( 
a INTEGER PRIMARY KEY,
b TEXT,
c INTEGER
);

The INSERTs will stay the same, and the update statement becomes
something like:

UPDATE check_update
SET b='venkat n'
 WHERE a = (
SELECT min(a) 
  FROM check_update
 WHERE b='venkat'
);

Hope this helps.
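A quick check of that subquery workaround, sketched with Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE check_update (a INTEGER PRIMARY KEY, b TEXT, c INTEGER);
    INSERT INTO check_update VALUES (1, 'venkat', 22);
    INSERT INTO check_update VALUES (2, 'venkat', 23);

    -- Update only the matching row with the smallest 'a'.
    UPDATE check_update
       SET b = 'venkat n'
     WHERE a = (SELECT min(a) FROM check_update WHERE b = 'venkat');
""")
rows = conn.execute("SELECT a, b FROM check_update ORDER BY a").fetchall()
# Only row 1 was changed; row 2 is untouched.
```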

-- 
  (  Kees Nuyt
  )
c[_]


[sqlite] [documentation] very minor docs bug

2011-02-15 Thread Eric Smith
http://sqlite.org/features.html

"Database For Gadgets. SQLite is popular choice for the ..."

should have an article, like 

"Database For Gadgets. SQLite is a popular choice for the ..."

Eric

--
Eric A. Smith

Windows is *NOT* a virus - viruses are small and efficient.


[sqlite] FTS slowdown with matchinfo

2011-02-15 Thread Iker Arizmendi
Hello all,

I'm running into what seems like an abnormally large
performance drop on some FTS queries that use
matchinfo, compared to those that don't.

I created an FTS table using the following:

CREATE VIRTUAL TABLE test
USING FTS4(rowid, content);

and then filled it with ~2 million docs of ~10 tokens
each which resulted in a file of around 275 MB. After
running "optimize" on the table I issued a query with
3 terms like so:

SELECT length(content) FROM test
WHERE MATCH "w1 OR w2 OR w3"

which returned ~164,000 rows in 1.1 seconds. However,
if I throw in a call to matchinfo:

SELECT length(matchinfo(test, 'x')) FROM test
WHERE MATCH "w1 OR w2 OR w3"

the query takes 7.5 minutes. It seems FTS is getting
stuck calculating the 2nd and 3rd part of the "x"
matchinfo data ("hits all rows" and "docs with hits")
but it's not clear why this should take so long.
Any ideas on what might be causing the slowdown?

Thanks,
Iker

-- 
Iker Arizmendi
AT&T Labs - Research
Speech and Image Processing Lab
e: i...@research.att.com
w: http://research.att.com
p: 973-360-8516
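For anyone trying to reproduce this at small scale, a toy matchinfo sketch (assumes an SQLite build with FTS4 enabled; the data here is made up, not Iker's corpus):

```python
import sqlite3
import struct

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE test USING fts4(content)")
docs = ["w1 x y", "w2 x", "w1 w3 z", "plain text"]
conn.executemany("INSERT INTO test (content) VALUES (?)",
                 [(d,) for d in docs])

rows = conn.execute(
    "SELECT matchinfo(test, 'x') FROM test "
    "WHERE test MATCH 'w1 OR w2 OR w3'").fetchall()

# The 'x' flag yields 3 unsigned 32-bit ints per phrase/column pair:
# hits in this row, hits in all rows, docs with hits.  Here:
# 3 phrases * 1 column * 3 ints = 9 ints per matching row.
first = struct.unpack("9I", rows[0][0])
```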



[sqlite] Sqlite .NET 4.0 provider available?

2011-02-15 Thread Matthew Adams
Hi all,

I'm a bit confused with the current level of support for Sqlite on
.NET.  I've seen both http://sqlite.phxsoftware.com and
http://system.data.sqlite.org/index.html/doc/trunk/www/index.wiki,
explaining that the codebase is under transition.

If I'm using Sqlite 3.7.5, can I use the last release of the .NET
provider from http://sqlite.phxsoftware.com, which is
http://sourceforge.net/projects/sqlite-dotnet2/files/SQLite%20for%20ADO.NET%202.0/1.0.66.0
?

When is official support for .NET 4.0 expected?

Thanks,
Matthew

-- 
mailto:matt...@matthewadams.me
skype:matthewadams12
yahoo:matthewadams
aol:matthewadams12
google-talk:matthewadam...@gmail.com
msn:matt...@matthewadams.me
http://matthewadams.me
http://www.linkedin.com/in/matthewadams


[sqlite] Composite primary key with autoincrement

2011-02-15 Thread Maciej Lotkowski
Hi

Is it possible to have composite primary key where one column is
autoincrement? From what I found in docs it looks like it is possible
to have either simple primary key on one autoincrement column or
composite PK on few columns without autoincrement. Am I right? If no,
what is the syntax to create a table with primary key "id" and
"another_id" where "id" is autoincrement?

-- 
Maciej Lotkowski


Re: [sqlite] Speed up count(distinct col)

2011-02-15 Thread Yuzem


BareFeetWare-2 wrote:
> 
> Does IMDB allow use of their data this way? After my brief reading of
> their site, I thought they charge a $15k minimum per year for data.
> 
IMDb Pro offers additional information and services; all the information I
am grabbing is already available to the public, and I am not publishing it.
There are a lot of programs that do this.


BareFeetWare-2 wrote:
> 
>> The script is written in bash and I can give you the code but I think it
>> would be very complicated to adapt it to your schema.
> 
> Complicated? Why? In what format is the grabbed data (ie what tables,
> columns, rows)? It usually just takes an insert select to move data from
> one table's schema to another.
> 
Well, you would have to replace the insert commands. Let's say I have a
table movies with columns id, title, year, plot.
In the script I set the columns as variables:
id=tt12345
title="movie title"
year=2011
plot="this is the plot"

And then I call the command insert:
insert movies

The command insert knows the columns for each table; it goes column by
column, and if a column is declared as a variable it is inserted. So you would
have to replace all those commands with something like:
echo "INSERT INTO movies" etc...

Let me know if you are interested in the script; it is included in the
application at https://launchpad.net/figuritas/+download, but I can post
and explain the relevant functions if you want.


BareFeetWare-2 wrote:
> 
> Well, yes, but it depends of the definition of "how many directors there
> are". The above counts how many directors there are in total, counting the
> same person for each movie they direct.
> 
> What you want, I think, however, is how many people there are who are
> directors (counting each person just once, even if they direct multiple
> movies), given by this:
> 
> select count(*) from "Capacity People Statistics"
> where Capacity_ID = (select ID from Capacity where Name = 'director')
> ;
> 
OK then, yes, I wanted to count directors counting each person just once; I
think that code will not count much faster.


BareFeetWare-2 wrote:
> 
> Hopefully now that I've given you the query you actually needed, it now
> makes sense ;-)
> 
Yes it does, but then I don't need the Count column since I want to speed up
counting distinct directors but counting each person just once.


BareFeetWare-2 wrote:
> 
> No, I don't think you need more tables to achieve the above.
> 
Yes, you can do it with the above, but I want to make it very fast; with those
two additional tables I think it will be much faster.



Re: [sqlite] Sqlite .NET 4.0 provider available?

2011-02-15 Thread Simon Slavin

On 15 Feb 2011, at 8:31pm, Matthew Adams wrote:

> I'm a bit confused with the current level of support for Sqlite on
> .NET.  I've seen both http://sqlite.phxsoftware.com and
> http://system.data.sqlite.org/index.html/doc/trunk/www/index.wiki,
> explaining that the codebase is under transition.

'Under transition' is a good thing.  It means that the project is still 
actively being worked on rather than having been abandoned.  You have a stable 
version you can use, and the project team are working on a newer one.

> If I'm using Sqlite 3.7.5, can I use the last release of the .NET
> provider from http://sqlite.phxsoftware.com, which is
> http://sourceforge.net/projects/sqlite-dotnet2/files/SQLite%20for%20ADO.NET%202.0/1.0.66.0
> ?

The release notes for that version say which version of SQLite it supports.  If 
you use that version of SQLite for ADO.NET 2.0 then that's the version of 
SQLite you will be using.

> When is official support for .NET 4.0 expected?

'Official' from the SQLite team?  Never.  SQLite's official support is for two 
sets of source code written in C.  No other forms of support are provided, not 
for any specific platform or variation on C (e.g. C++, Objective-C, C#).  A 
great deal of work goes into making sure that the C source code compiles 
acceptably with a /lot/ of different C compilers on a lot of different 
platforms, so that other teams can easily build it into their own projects or 
prepare libraries from it.

Simon.


Re: [sqlite] Composite primary key with autoincrement

2011-02-15 Thread Drake Wilson
Quoth Maciej Lotkowski , on 2011-02-15 23:28:01 
+0100:
> Is it possible to have composite primary key where one column is
> autoincrement? From what I found in docs it looks like it is possible
> to have either simple primary key on one autoincrement column or
> composite PK on few columns without autoincrement. Am I right? If no,
> what is the syntax to create a table with primary key "id" and
> "another_id" where "id" is autoincrement?

When would it make sense to do this?  What value would another_id take
if you try to insert with both of them null?  More generally, could
you show some example inserts with what behavior you expect?  I
suspect what you're looking for is best done some other way.

   ---> Drake Wilson


Re: [sqlite] Composite primary key with autoincrement

2011-02-15 Thread Simon Slavin

On 15 Feb 2011, at 11:28pm, Drake Wilson wrote:

> Quoth Maciej Lotkowski , on 2011-02-15 23:28:01 
> +0100:
>> Is it possible to have composite primary key where one column is
>> autoincrement? From what I found in docs it looks like it is possible
>> to have either simple primary key on one autoincrement column or
>> composite PK on few columns without autoincrement. Am I right? If no,
>> what is the syntax to create a table with primary key "id" and
>> "another_id" where "id" is autoincrement?
> 
> When would it make sense to do this?  What value would another_id take
> if you try to insert with both of them null?  More generally, could
> you show some example inserts with what behavior you expect?  I
> suspect what you're looking for is best done some other way.

Or to put it another way, if a column is autoincrement and you don't intend to 
mess with the values manually, it's going to be unique anyway, so it makes a 
good simple short primary key.  You may have good reasons to make another index 
on the combination of columns, though.

Simon.
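Simon's suggestion might look like this in practice (a sketch; the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- A single AUTOINCREMENT primary key; the composite combination
    -- is covered by a separate unique index rather than a composite PK.
    CREATE TABLE item (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        another_id INTEGER NOT NULL
    );
    CREATE UNIQUE INDEX item_composite ON item (id, another_id);
""")
conn.execute("INSERT INTO item (another_id) VALUES (7)")
conn.execute("INSERT INTO item (another_id) VALUES (7)")
ids = [r[0] for r in conn.execute("SELECT id FROM item ORDER BY id")]
# ids autoincrement even though another_id repeats.
```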


Re: [sqlite] Speed up count(distinct col)

2011-02-15 Thread BareFeetWare
> IMDb Pro offers additional information and services, all the information I
> am grabbing is already available to the public and I am not publishing this
> information, there are a lot of programs that do this.

OK, good to know, thanks.

>>> The script is written in bash and I can give you the code but I think it
>>> would be very complicated to adapt it to your schema.
>> 
>> Complicated? Why? In what format is the grabbed data (ie what tables,
>> columns, rows)? It usually just takes an insert select to move data from
>> one table's schema to another.
>> 
> Well, you would have to replace the insert commands, lets say that I have
> table movies with columns id,title,year,plot
> In the script I have to set the columns as variables:
> id=tt12345
> title="movie title"
> year=2011
> plot="this is the plot"
> 
> And then I call the command insert:
> insert movies
> 
> The command insert knows the columns for each table, it goes column by
> column, if the column is declared as a variable it is inserted so you would
> have to replace all those commands with something like:
> echo "INSERT INTO movies" etc...

Oh, I see, so the "complication" is that you would have to change the columns 
used in your bash script. That shouldn't be a big problem, but I'll leave the 
bash script to you.

If for some reason rewriting the insert command in the bash script is 
insurmountable, you can just create a view in SQL to match the expectations of 
the bash script. That view can funnel each insert to the underlying SQL schema 
table columns using an "instead of insert" trigger. Let me know if you need 
more info on this.
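A sketch of that view-plus-trigger funnel (hypothetical simplified schema; the real tables would differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Underlying normalized table the bash script doesn't know about.
    CREATE TABLE movie (id TEXT PRIMARY KEY, title TEXT, year INTEGER);

    -- View matching the columns the old insert command expects.
    CREATE VIEW movies AS SELECT id, title, year FROM movie;

    -- Funnel inserts on the view into the real table.
    CREATE TRIGGER movies_insert INSTEAD OF INSERT ON movies
    BEGIN
        INSERT INTO movie (id, title, year)
        VALUES (new.id, new.title, new.year);
    END;
""")
conn.execute("INSERT INTO movies VALUES ('tt12345', 'movie title', 2011)")
row = conn.execute("SELECT title, year FROM movie").fetchone()
# The insert on the view landed in the underlying table.
```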

> Let me know if you are interested in the script, the script is included in 
> https://launchpad.net/figuritas/+download the application , but I can post
> and explain the relevant functions if you want.

No, that's fine, I just couldn't understand why changing the insert statement 
in the bash script is a problem.

>> What you want, I think, however, is how many people there are who are
>> directors (counting each person just once, even if they direct multiple
>> movies), given by this:
>> 
>> select count(*) from "Capacity People Statistics"
>> where Capacity_ID = (select ID from Capacity where Name = 'director')
>> ;
>> 
> Ok then, yes I wanted to count directors counting each person just once, I 
> think that that code will not count much faster.

I expect it will be *much* faster than count(distinct), almost instant.

>> Hopefully now that I've given you the query you actually needed, it now
>> makes sense ;-)
>> 
> Yes it does, but then I don't need the Count column since I want to speed up 
> counting distinct directors but counting each person just once.

Well, you will actually need the Count column if you plan on facilitating 
deletion of directors, since you will need to track when the count of a 
person's director roles drops to zero (so they're no longer counted). In that 
case, you will also need a slightly modified select:

select count(*) from "Capacity People Statistics"
where Capacity_ID = (select ID from Capacity where Name = 'director')
and Count > 0
;

or you can change the delete trigger to remove statistics that drop to a zero 
count:

begin immediate
;
drop trigger if exists "Movie People delete"
;
create trigger "Movie People delete"
after delete on "Movie People"
begin
    insert or replace into "Capacity People Statistics"
        (Capacity_ID, People_ID, Count)
    select
        old.Capacity_ID
    ,   old.People_ID
    ,   (   select coalesce(Count, 0) - 1
            from "Capacity People Statistics"
            where Capacity_ID = old.Capacity_ID
              and People_ID = old.People_ID
        )
    ;
    delete from "Capacity People Statistics"
    where Count = 0
    ;
end
;
commit
;
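A self-contained illustration of the delete-trigger bookkeeping, using a simplified trigger body and a cut-down two-table schema standing in for the full one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE "Movie People" (Capacity_ID INTEGER, People_ID INTEGER);
    CREATE TABLE "Capacity People Statistics" (
        Capacity_ID INTEGER, People_ID INTEGER, Count INTEGER,
        PRIMARY KEY (Capacity_ID, People_ID)
    );

    -- Simplified version of the same bookkeeping: decrement the
    -- cached count, then drop rows whose count reaches zero.
    CREATE TRIGGER "Movie People delete"
    AFTER DELETE ON "Movie People"
    BEGIN
        UPDATE "Capacity People Statistics"
           SET Count = Count - 1
         WHERE Capacity_ID = old.Capacity_ID
           AND People_ID = old.People_ID;
        DELETE FROM "Capacity People Statistics" WHERE Count = 0;
    END;

    -- One director (capacity 1, person 10) with two credited movies.
    INSERT INTO "Movie People" VALUES (1, 10);
    INSERT INTO "Movie People" VALUES (1, 10);
    INSERT INTO "Capacity People Statistics" VALUES (1, 10, 2);
""")
conn.execute('DELETE FROM "Movie People" WHERE rowid = 1')
count = conn.execute(
    'SELECT Count FROM "Capacity People Statistics"').fetchone()[0]
# One credit remains, so the person is still counted.
```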

>> No, I don't think you need more tables to achieve the above.
>> 
> Yes you can do it with the above but I want to make it very fast, with those
> two additional tables I think it will be much faster.

Great, I'm glad we finally got there :-)

Tom
BareFeetWare

--
Comparison of SQLite GUI tools:
http://www.barefeetware.com/sqlite/compare/?ml



Re: [sqlite] BUG: test script exclusive2.test needs do_not_use_codec

2011-02-15 Thread Shane Harrelson
Thanks.   I think I got all the changes.

On Mon, Feb 14, 2011 at 7:47 PM, Noah Hart  wrote:
>
> exclusive2.test reads directly from the database using binary read
>
> It needs the following changes:
>
> -source $testdir/tester.tcl
> +source $testdir/tester.tcl
> +
> +# Do not use a codec for tests in this file, as the database file is
> +# manipulated directly using tcl scripts (using binary read)
> +#
> +do_not_use_codec
>
>
> ~Noah Hart
>


Re: [sqlite] FTS slowdown with matchinfo

2011-02-15 Thread Dan Kennedy
On 02/16/2011 12:50 AM, Iker Arizmendi wrote:
> Hello all,
>
> I'm running into what seems like an abnormally large
> performance drop on on some FTS queries that use
> matchinfo when compared to those that don't.
>
> I created an FTS table using the following:
>
>  CREATE VIRTUAL TABLE test
>  USING FTS4(rowid, content);
>
> and then filled it with ~2 million docs of ~10 tokens
> each which resulted in a file of around 275 MB. After
> running "optimize" on the table I issued a query with
> 3 terms like so:
>
>  SELECT length(content) FROM test
>  WHERE MATCH "w1 OR w2 OR w3"
>
> which returned ~164,000 rows in 1.1 seconds. However,
> if I throw in a call to matchinfo:
>
>  SELECT length(matchinfo(test, 'x')) FROM test
>  WHERE MATCH "w1 OR w2 OR w3"
>
> the query takes 7.5 minutes. It seems FTS is getting
> stuck calculating the 2nd and 3rd part of the "x"
> matchinfo data ("hits all rows" and "docs with hits")
> but it's not clear why this should take so long.
> Any ideas on what might be causing the slowdown?

Can you make the database available for download? And
supply the exact query you are using too? I'd like to
know why this is. Thanks.

Dan.



[sqlite] will FTS speed up [LIKE '%foo%'] searches? or: how does iTunes do it?

2011-02-15 Thread David M. Cotter
iTunes has "update search results as you type" speed even when you have a 
hundred thousand songs and you're searching on a partial string across all 
metadata columns.

how on earth do they do that?

i'm under the impression it uses CoreData, which in turn uses SQLite under the 
hood.

How can I make my partial string searches through multiple columns go that 
fast?  FTS seems to imply it can make things blazing fast.  Is that really 
true?