Re: [sqlite] GROUP BY driving me crazy

2010-11-10 Thread James
Thanks Jim,

The position is what I wanted to use to control the display order of
the images.  So yes, which should be first, second, third.  Items can
have more than one image.  If you remove name='White' from your query,
you'll see what I mean.

I didn't want to rely on something like position=1, but instead a
general "priority" where higher numbers are displayed first (or
ascending, where lower numbers come first, but then NULL becomes a
problem because it sorts before any number).
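
One way to sketch that in SQLite, using the images table shown later in
the thread (:item_id is a placeholder parameter), is to sort on an IS NULL
test first so rows without a position fall to the end rather than the front:

SELECT filename
FROM images
WHERE item_id = :item_id
ORDER BY position IS NULL, position ASC;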

Your example definitely sparked some new simpler thoughts though.
Like whether I could make it work where I just specify an image as
"featured".

James

On Wed, Nov 10, 2010 at 4:03 PM, Jim Morris <jmor...@bearriver.com> wrote:
> If you would explain why/how the position value is significant that
> might help.
>
> I fixed your pseudo SQL to run in SQLite Manager, and I don't understand
> from the sample data what you're trying to do.  There is only one image
> per item.  Do you have multiple images per item and only want to return
> the first?
>
> A query like the following seems to yield reasonable results:
> SELECT products.name, items.id,images.filename, images.position
> FROM products
> INNER JOIN items ON items.product_id = products.id
> LEFT JOIN images ON images.item_id = items.id
> WHERE items.name='White'
> ORDER BY products.name ASC, images.position ASC
> ;
>
> The cleaned up code is:
>
> CREATE TABLE products (id, category_id, name, description);
> CREATE TABLE items (id, product_id, part_number, name, price, buyable);
> CREATE TABLE images (id, item_id, filename, position);
>
> INSERT INTO products (id, category_id, name ) VALUES (1, 1, 'SQLite
> T-Shirt');
> INSERT INTO products (id, category_id, name ) VALUES (2, 1, 'SQLite Long
> Sleeved Shirt');
> INSERT INTO items (id, product_id, name) VALUES ('SQLT-WHT', 1, 'White');
> INSERT INTO items (id, product_id, name) VALUES ('SQLT-BLK', 1, 'Black');
> INSERT INTO items (id, product_id, name) VALUES ('SQLL-WHT', 2, 'White');
> INSERT INTO items (id, product_id, name) VALUES ('SQLL-BLK', 2, 'Black');
> INSERT INTO items (id, product_id, name) VALUES ('SQLL-BLU', 2, 'Blue');
> INSERT INTO images (item_id, filename, position) VALUES
> ('SQLT-WHT','sqlt-white.jpg', 2);
> INSERT INTO images (item_id, filename, position) VALUES
> ('SQLT-BLK','sqlt-black.jpg', 1);
> INSERT INTO images (item_id, filename, position) VALUES
> ('SQLL-WHT','sqll-white.jpg', 2);
> INSERT INTO images (item_id, filename, position) VALUES
> ('SQLL-BLK','sqll-black.jpg', 1);
>
>
> On 11/10/2010 2:47 PM, James wrote:
>> This will only display products which have items with images.  I think
>> I'm going to sit back and see if there's a simpler way to achieve what
>> I'm trying to do.  Maybe I'm going about this the wrong way, or I need
>> to make some compromises.
>>
>> Thanks
>>
>> On Wed, Nov 10, 2010 at 3:01 PM, Igor Tandetnik<itandet...@mvps.org>  wrote:
>>> select name, filename from products p, images im
>>> where im.item_id = (
>>>   select im2.item_id from items left join images im2 on (items.id = 
>>> im2.item_id)
>>>   where items.product_id = p.id and items.buyable
>>>   order by position desc limit 1)
>>> order by name;
>>>
>>> --
>>> Igor Tandetnik
>>>
>>>


Re: [sqlite] GROUP BY driving me crazy

2010-11-10 Thread James
This will only display products which have items with images.  I think
I'm going to sit back and see if there's a simpler way to achieve what
I'm trying to do.  Maybe I'm going about this the wrong way, or I need
to make some compromises.

Thanks

On Wed, Nov 10, 2010 at 3:01 PM, Igor Tandetnik  wrote:
> select name, filename from products p, images im
> where im.item_id = (
>  select im2.item_id from items left join images im2 on (items.id = 
> im2.item_id)
>  where items.product_id = p.id and items.buyable
>  order by position desc limit 1)
> order by name;
>
> --
> Igor Tandetnik
>
>


Re: [sqlite] GROUP BY driving me crazy

2010-11-10 Thread James
Sorry, I probably should've been clearer.  I do have the data in place
to filter by color.  The problem is I can't get the image.filenames
returned in the desired order.  Even if I wanted to use group_concat,
I still would want them in a particular order (based on
images.position).

On Wed, Nov 10, 2010 at 2:30 PM, Jim Morris  wrote:
> There is no logical way to show your intended result.  You need some sort
> of data that can be used as a filter.
>
> If you want to filter by color why not add color to the item and use a
> WHERE clause?  Or maybe style?
>


Re: [sqlite] GROUP BY driving me crazy

2010-11-10 Thread James
Hi Jim and Igor,

Here's basically what the schema looks like:

CREATE TABLE products (id, category_id, name, description);
CREATE TABLE items (id, product_id, part_number, name, price, buyable);
CREATE TABLE images (id, item_id, filename, position);

I'm grouping in this case because I only want the unique "products".
By joining "items", and "images" I get more than one unique product,
since a product has many items.  So, I'm mis-using GROUP BY?

INSERT INTO products (id, category_id, name) VALUES (1, 1, 'SQLite T-Shirt');
INSERT INTO products (id, category_id, name) VALUES (2, 1, 'SQLite Long Sleeved Shirt');

INSERT INTO items (id, product_id, name) VALUES ('SQLT-WHT', 1, 'White');
INSERT INTO items (id, product_id, name) VALUES ('SQLT-BLK', 1, 'Black');
INSERT INTO items (id, product_id, name) VALUES ('SQLL-WHT', 2, 'White');
INSERT INTO items (id, product_id, name) VALUES ('SQLL-BLK', 2, 'Black');
INSERT INTO items (id, product_id, name) VALUES ('SQLL-BLU', 2, 'Blue');

INSERT INTO images (item_id, filename, position) VALUES ('SQLT-WHT',
'sqlt-white.jpg', 2);
INSERT INTO images (item_id, filename, position) VALUES ('SQLT-BLK',
'sqlt-black.jpg', 1);
INSERT INTO images (item_id, filename, position) VALUES ('SQLL-WHT',
'sqll-white.jpg', 2);
INSERT INTO images (item_id, filename, position) VALUES ('SQLL-BLK',
'sqll-black.jpg', 1);

Query without GROUP BY:
--
SELECT products.name, images.filename
FROM products
INNER JOIN items ON items.product_id = products.id
LEFT JOIN images ON images.item_id = items.id
ORDER BY products.name, images.position DESC

Result:
--
SQLite T-Shirt | sqlt-white.jpg
SQLite T-Shirt | sqlt-black.jpg
SQLite Long Sleeved Shirt | sqll-white.jpg
SQLite Long Sleeved Shirt | sqll-black.jpg
SQLite Long Sleeved Shirt | (Null)

Query with GROUP BY:
---
SELECT products.name, images.filename
FROM products
INNER JOIN items ON items.product_id = products.id
LEFT JOIN images ON images.item_id = items.id
GROUP BY products.id
ORDER BY products.name, images.position DESC

Result:
--
SQLite T-Shirt | sqlt-black.jpg
SQLite Long Sleeved Shirt | (Null)

Desired Result:
--
SQLite T-Shirt | sqlt-white.jpg
SQLite Long Sleeved Shirt | sqll-white.jpg

The reason I'm doing this is that it allows me to show only the photos for
the items that match the filter.  So if you searched for "SQLT-WHT",
it would show the "sqlt-white.jpg" image.  I'm not really working with
t-shirts, but rather vehicle-specific products.  I just wish I was
working with t-shirts, as I'd probably do this differently  ;)

Is this best left for post-processing outside the database?

James
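
A sketch of one query shape that should produce the desired result above,
using a correlated subquery to pick each product's highest-position image
(a filter such as items.name = 'White' could be added to the inner WHERE):

SELECT p.name,
       (SELECT im.filename
          FROM items it
          LEFT JOIN images im ON im.item_id = it.id
         WHERE it.product_id = p.id
         ORDER BY im.position DESC
         LIMIT 1) AS filename
FROM products p
ORDER BY p.name;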

On Wed, Nov 10, 2010 at 12:39 PM, Igor Tandetnik <itandet...@mvps.org> wrote:
> James <ja...@icionline.ca> wrote:
>> Is this a fairly simple problem and solution?  What would you search
>> for to find solutions to this?
>
> That rather depends on what the problem is, which as far as I can tell you've 
> never explained. Show the schema of your tables, and a small sample of the 
> data. Describe the expected result, show what you would expect the query to 
> return given the sample data.
>
> Igor Tandetnik
>


[sqlite] GROUP BY driving me crazy

2010-11-10 Thread James
I've been fighting with this for a couple days now.  I've been
searching like mad, and thought I found solutions, but nothing seems
to work.  I think I may have reached the limit of my understanding  :)

This is just a simplified example of what I'm going after:

SELECT products.id, products.name, images.src
FROM products
INNER JOIN items ON items.product_id = products.id
LEFT JOIN images ON images.item_id = items.id
WHERE items.buyable = 1
GROUP BY products.id
ORDER BY products.name, images.position DESC

I've also tried things like:

SELECT products.id, products.name, images.src
FROM products
INNER JOIN items ON items.product_id = products.id
LEFT JOIN (SELECT * FROM images ORDER BY position DESC) images ON
images.item_id = items.id
WHERE items.buyable = 1
GROUP BY products.id
ORDER BY products.name

and...

SELECT products.id, products.name, (SELECT images.src FROM images
WHERE images.item_id = items.id ORDER BY images.position DESC LIMIT
0,1)
FROM products
INNER JOIN items ON items.product_id = products.id
WHERE items.buyable = 1
GROUP BY products.id
ORDER BY products.name

Without the GROUP BY, the ordering is correct.  I found this example
for ordering the values fed to GROUP_CONCAT, but I don't understand how
I could translate it to my problem:

SELECT ID, GROUP_CONCAT(Val)
FROM (
   SELECT ID, Val
   FROM YourTable
   ORDER BY ID, Val
   );

Is this a fairly simple problem and solution?  What would you search
for to find solutions to this?  I'm having a heck of a time.
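
One possibility, offered as a sketch rather than a definitive answer:
current SQLite documents a special case for bare columns in a min()/max()
aggregate (they come from the same row that supplies the min or max), so
adding MAX(images.position) pins down which image each group keeps:

SELECT products.id, products.name, images.src, MAX(images.position)
FROM products
INNER JOIN items ON items.product_id = products.id
LEFT JOIN images ON images.item_id = items.id
WHERE items.buyable = 1
GROUP BY products.id
ORDER BY products.name;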


Re: [sqlite] VFS Layer for to split database into several files?

2010-07-19 Thread James Croall
Hi Michael,

I agree -- while in theory it shouldn't be difficult, the testing and
verification would be painful! And it's unclear to me from glancing through
the code how locking should be handled -- probably just on the first data
file.

I saw some historic posts on here where people were discussing writing just
such a layer, thought there might be something reusable :-)

I will work on bringing the customer into the 20th century instead.

Thanks!

- James

> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Black, Michael (IS)
> Sent: Monday, July 19, 2010 12:42 PM
> To: j...@kreibi.ch; General Discussion of SQLite Database; General
> Discussion of SQLite Database
> Subject: Re: [sqlite] VFS Layer for to split databaseinto several
> files?
> 
> Oops -- my bad...I guess I never ran into that problem in the "old"
> days.  I know I used to write 2Gig+ files on 32-bit Linux before it was
> popular.
> 
> So...perhaps the next logical question is...can this guy's user put
> NFSV3 or such on?  Would be a lot easier than writing your own split-
> VFS which I think is asking for trouble.
> 
> Michael D. Black
> Senior Scientist
> Advanced Analytics Directorate
> Northrop Grumman Information Systems
> 
> 
> 
> 
> From: sqlite-users-boun...@sqlite.org on behalf of Jay A. Kreibich
> Sent: Mon 7/19/2010 2:29 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] EXTERNAL:Re: VFS Layer for to split databaseinto
> several files?
> 
> 
> 
> On Mon, Jul 19, 2010 at 01:48:56PM -0500, Black, Michael (IS) scratched
> on the wall:
> > NFSV2 is something that limits filesize.
> 
>   From the phrasing of the rest of your email, I assume you meant for
>   this to be "NFSv2 *isn't* something...".
> 
>   And technically that is true.  NFS won't limit the filesize.
> 
>   However, NFSv2 has a very well known limitation, in that network
>   clients could only access the first 2GB worth of a file.  That
>   effectively limits the usable filesize to 2GB.
> 
> http://en.wikipedia.org/wiki/Network_File_System_%28protocol%29#NFSv2
> 
>   There are still many systems (especially embedded ones) out there
>   that only support NFSv2.  Like TFTP, it is a dead simple protocol
>   that requires almost no state, a very simple networking stack,
>   and is very easy to cram into a very limited code space
> 
>-j
> 
> >
> > Generally what limits filesize is the operating system and associated
> > compilation flags like Pavel mentioned.
> >
> > What makes you or your "user" think their system is limited to 2GB?
> > What OS are they using and what OS are you using?  If it's anything
> > more recent than about 10 years ago it should support >2GB as long as
> > you have the disk space.
> 
> --
> Jay A. Kreibich < J A Y  @  K R E I B I.C H >
> 
> "Intelligence is like underwear: it is important that you have it,
>  but showing it to the wrong people has the tendency to make them
>  feel uncomfortable." -- Angela Johnson


Re: [sqlite] EXTERNAL:Re: VFS Layer for to split database into several files?

2010-07-19 Thread James Croall
Hi All,

This is a very modern Linux64 system and the only limiting factor I can find
here is the NFSv2 filesystem. Our process gets killed with a SIGXFSZ when we
exceed 2GB. The user is not keen on changing their environment to support my
application.

Looking through os_unix.c it looks like it could be reasonably easy to turn
one db file into, say, a series of four each with a maximum size of 2GB, and
decide which fd to use based on the offset requested. Plus enough logic to
keep pages in the same partition.

Thanks,

- James

> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Jay A. Kreibich
> Sent: Monday, July 19, 2010 12:30 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] EXTERNAL:Re: VFS Layer for to split database into
> several files?
> 
> On Mon, Jul 19, 2010 at 01:48:56PM -0500, Black, Michael (IS) scratched
> on the wall:
> > NFSV2 is something that limits filesize.
> 
>   From the phrasing of the rest of your email, I assume you meant for
>   this to be "NFSv2 *isn't* something...".
> 
>   And technically that is true.  NFS won't limit the filesize.
> 
>   However, NFSv2 has a very well known limitation, in that network
>   clients could only access the first 2GB worth of a file.  That
>   effectively limits the usable filesize to 2GB.
> 
> http://en.wikipedia.org/wiki/Network_File_System_%28protocol%29#NFSv2
> 
>   There are still many systems (especially embedded ones) out there
>   that only support NFSv2.  Like TFTP, it is a dead simple protocol
>   that requires almost no state, a very simple networking stack,
>   and is very easy to cram into a very limited code space
> 
>-j
> 
> >
> > Generally what limits filesize is the operating system and associated
> > compilation flags like Pavel mentioned.
> >
> > What makes you or your "user" think their system is limited to 2GB?
> > What OS are they using and what OS are you using?  If it's anything
> > more recent than about 10 years ago it should support >2GB as long as
> > you have the disk space.
> 
> --
> Jay A. Kreibich < J A Y  @  K R E I B I.C H >
> 
> "Intelligence is like underwear: it is important that you have it,
>  but showing it to the wrong people has the tendency to make them
>  feel uncomfortable." -- Angela Johnson


Re: [sqlite] VFS Layer for to split database into several files?

2010-07-19 Thread James Croall
Hi Michael,

I'm trying to support a user running on an NFSv2 file server, which is
limited to a maximum file size of 2GB.

Thanks,

- James

> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Black, Michael (IS)
> Sent: Monday, July 19, 2010 10:51 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] VFS Layer for to split database into several
> files?
> 
> Also...would you care to elucidate what environment you're in that
> limits you to 2GB?
> 
> I've heard numerous people before who think they're limited when, in
> actuality, they're not.
> 
> Michael D. Black
> Senior Scientist
> Advanced Analytics Directorate
> Northrop Grumman Information Systems
> 
> 
> ____
> 
> From: sqlite-users-boun...@sqlite.org on behalf of James Croall
> Sent: Mon 7/19/2010 12:26 PM
> To: sqlite-users@sqlite.org
> Subject: EXTERNAL:[sqlite] VFS Layer for to split database into several
> files?
> 
> 
> 
> Hello All,
> 
> 
> 
> I'm stuck with an environment with a 2GB file size limit.
> 
> 
> 
> I have seen several posts over the last few years about working with
> these limits, and several proposals discussed. Some involve creating a
> VFS layer that splits the database file into chunks, and some discuss
> using a DISKIO feature that I'm frankly not familiar with.
> 
> 
> 
> Has anyone come up with a solution that they would be willing to share?
> I am in a time crunch here and anything that would save me some time
> reaching a solution would be *much* appreciated.
> 
> 
> 
> Thanks!
> 
> 
> 
> - James
> 


[sqlite] VFS Layer for to split database into several files?

2010-07-19 Thread James Croall
Hello All,

 

I'm stuck with an environment with a 2GB file size limit.

 

I have seen several posts over the last few years about working with these
limits, and several proposals discussed. Some involve creating a VFS layer
that splits the database file into chunks, and some discuss using a DISKIO
feature that I'm frankly not familiar with.

 

Has anyone come up with a solution that they would be willing to share? I am
in a time crunch here and anything that would save me some time reaching a
solution would be *much* appreciated.

 

Thanks!

 

- James



[sqlite] difficulty installing sqlite3 into rubygems

2010-03-23 Thread James Richard Sheldon
I'm trying to install sqlite3 into Ruby Gems 1.9, and I get the following error:

$ sudo gem1.9 install sqlite3
Building native extensions.  This could take a while...
ERROR:  Error installing sqlite3:
ERROR: Failed to build gem native extension.

/usr/bin/ruby1.9 extconf.rb
extconf.rb:2:in `require': no such file to load -- mkmf (LoadError)
from extconf.rb:2:in `<main>'

Any thoughts?

@james

-- 

@ james sheldon
@ http://www.jamessheldon.com
@ voyager...@gmail.com



Re: [sqlite] Bug in porter stemmer

2010-02-24 Thread James Berry
drh,

Thanks for the response: it's nice to know that the report was actually seen.

It would be hubris indeed to claim to fix an implementation bug in Porter's 
code. The code in sqlite didn't match any of Porter's code I could find, so I 
assumed it came from elsewhere: but maybe I missed something. In any event, the 
authorship wasn't clear to me from the sources. The real point that I had 
missed was that, as Shane Harrelson points out, step 1c changed between the 
original porter stemmer and the porter2 stemmer; the step I quote below, and 
which I "fixed", is in the porter2 algorithm, which in this case introduces an 
improvement from porter. So in essence I guess my patch moves porter a bit 
closer to porter2.

I understand the complication that changes to the stemmer would cause an 
incompatibility. It might be interesting to implement the porter2 algorithm for 
fts4; I'm not sure how the two compare in terms of performance. 

Thanks again,

James


On Feb 24, 2010, at 7:05 AM, D. Richard Hipp wrote:

> We got the Porter stemmer code directly from Martin Porter.
> 
> I'm sorry it does not work like you want it to.  Unfortunately, we  
> cannot change it now without introducing a serious incompatibility  
> with the millions and millions of applications already in the field  
> that are using the existing implementation.
> 
> FTS3 has a pluggable stemmer module.  You can write your own stemmer  
> that works "correctly" if you like, and link it in for use in your  
> applications.  We will also investigate making your recommended  
> changes for FTS4.  However, in order to maintain backwards  
> compatibility of FTS3, we cannot change the stemmer algorithm, even to  
> fix a "bug".
> 
> On Feb 24, 2010, at 9:59 AM, James Berry wrote:
> 
>> Can somebody please clarify the bug reporting process for sqlite? My  
>> understanding is that it's not possible to file bug reports  
>> directly, and that the advice is to write to the user list first.
>> I've done that (below) but have no response so far and am concerned  
>> that this means the bug report will just be forgotten by others, as
>> well as by me.
>> 
>> How does this bug move from a message on a list to a ticket (and  
>> ultimately a patch, we hope) in the system?
>> 
>> James
>> 
>> On Feb 22, 2010, at 2:51 PM, James Berry wrote:
>> 
>>> I'm writing to report a bug in the porter-stemmer algorithm  
>>> supplied as part of the FTS3 implementation.
>>> 
>>> The stemmer has an inverted logic error that prevents it from  
>>> properly stemming words of the following form:
>>> 
>>> dry -> dri
>>> cry -> cri
>>> 
>>> This means, for instance, that the following words don't stem the  
>>> same:
>>> 
>>> dried -> dri   -doesn't match-   dry
>>> cried -> cri   -doesn't match-   cry
>>> 
>>> The bug seems to have been introduced as a simple logic error by  
>>> whoever wrote the stemmer code. The original description of step 1c  
>>> is here: http://snowball.tartarus.org/algorithms/english/stemmer.html
>>> 
>>> Step 1c:
>>> replace suffix y or Y by i if preceded by a non-vowel which is  
>>> not the first letter of the word (so cry -> cri, by -> by, say ->  
>>> say)
>>> 
>>> But the code in sqlite reads like this:
>>> 
>>> /* Step 1c */
>>> if( z[0]=='y' && hasVowel(z+1) ){
>>>  z[0] = 'i';
>>> }
>>> 
>>> In other words, sqlite turns the y into an i only if it is preceded  
>>> by a vowel (say -> sai), while the algorithm intends this to be  
>>> done if it is _not_ preceded by a vowel.
>>> 
>>> But there are two other problems in that same line of code:
>>> 
>>> (1) hasVowel checks whether a vowel exists anywhere in the string,  
>>> not just in the next character, which is incorrect, and goes  
>>> against the step 1c directions above. (amplify would not be  
>>> properly stemmed to amplifi, for instance)
>>> 
>>> (2) The check for the first letter is not performed (for words  
>>> like "by", etc)
>>> 
>>> I've fixed both of those errors in the patch below:
>>> 
>>> /* Step 1c */
>>> -  if( z[0]=='y' && hasVowel(z+1) ){
>>> + if( z[0]=='y' && isConsonant(z+1) && z[2] ){
>>>   z[0] = 'i';
>>> }
>>> 
> 
> D. Richard Hipp
> d...@hwaci.com
> 
> 
> 


Re: [sqlite] Bug in porter stemmer

2010-02-24 Thread James Berry
Can somebody please clarify the bug reporting process for sqlite? My 
understanding is that it's not possible to file bug reports directly, and that 
the advice is to write to the user list first. I've done that (below) but have 
no response so far and am concerned that this means the bug report will just be 
forgotten by others, as well as by me.

How does this bug move from a message on a list to a ticket (and ultimately a 
patch, we hope) in the system?

James

On Feb 22, 2010, at 2:51 PM, James Berry wrote:

> I'm writing to report a bug in the porter-stemmer algorithm supplied as part 
> of the FTS3 implementation.
> 
> The stemmer has an inverted logic error that prevents it from properly 
> stemming words of the following form:
> 
>   dry -> dri
>   cry -> cri
> 
> This means, for instance, that the following words don't stem the same:
> 
>   dried -> dri   -doesn't match-   dry
>   cried -> cri   -doesn't match-   cry
> 
> The bug seems to have been introduced as a simple logic error by whoever 
> wrote the stemmer code. The original description of step 1c is here: 
> http://snowball.tartarus.org/algorithms/english/stemmer.html
> 
>   Step 1c:
>   replace suffix y or Y by i if preceded by a non-vowel which is 
> not the first letter of the word (so cry -> cri, by -> by, say -> say)
>   
> But the code in sqlite reads like this:
> 
>  /* Step 1c */
>  if( z[0]=='y' && hasVowel(z+1) ){
>z[0] = 'i';
>  }
> 
> In other words, sqlite turns the y into an i only if it is preceded by a 
> vowel (say -> sai), while the algorithm intends this to be done if it is 
> _not_ preceded by a vowel.
> 
> But there are two other problems in that same line of code:
> 
>   (1) hasVowel checks whether a vowel exists anywhere in the string, not 
> just in the next character, which is incorrect, and goes against the step 1c 
> directions above. (amplify would not be properly stemmed to amplifi, for 
> instance)
> 
>   (2) The check for the first letter is not performed (for words like 
> "by", etc)
> 
> I've fixed both of those errors in the patch below:
> 
>   /* Step 1c */
> -  if( z[0]=='y' && hasVowel(z+1) ){
> + if( z[0]=='y' && isConsonant(z+1) && z[2] ){
> z[0] = 'i';
>   }
> 


[sqlite] Bug in porter stemmer

2010-02-22 Thread James Berry
I'm writing to report a bug in the porter-stemmer algorithm supplied as part of 
the FTS3 implementation.

The stemmer has an inverted logic error that prevents it from properly stemming 
words of the following form:

dry -> dri
cry -> cri

This means, for instance, that the following words don't stem the same:

dried -> dri   -doesn't match-   dry
cried -> cri   -doesn't match-   cry

The bug seems to have been introduced as a simple logic error by whoever wrote 
the stemmer code. The original description of step 1c is here: 
http://snowball.tartarus.org/algorithms/english/stemmer.html

Step 1c:
replace suffix y or Y by i if preceded by a non-vowel which is 
not the first letter of the word (so cry -> cri, by -> by, say -> say)

But the code in sqlite reads like this:

  /* Step 1c */
  if( z[0]=='y' && hasVowel(z+1) ){
z[0] = 'i';
  }

In other words, sqlite turns the y into an i only if it is preceded by a vowel 
(say -> sai), while the algorithm intends this to be done if it is _not_ 
preceded by a vowel.

But there are two other problems in that same line of code:

(1) hasVowel checks whether a vowel exists anywhere in the string, not 
just in the next character, which is incorrect, and goes against the step 1c 
directions above. (amplify would not be properly stemmed to amplifi, for 
instance)

(2) The check for the first letter is not performed (for words like 
"by", etc)

I've fixed both of those errors in the patch below:

   /* Step 1c */
-  if( z[0]=='y' && hasVowel(z+1) ){
+ if( z[0]=='y' && isConsonant(z+1) && z[2] ){
 z[0] = 'i';
   }



[sqlite] Installation problem on OS-X 10.4

2010-02-21 Thread James Campbell
After a successful run of "configure" with "--disable-shared" added,
the "make" command resulted in this:

shell.c:42:32: readline/readline.h: No such file or directory
shell.c:43:31: readline/history.h: No such file or directory
shell.c: In function `one_input_line':
shell.c:1241: warning: assignment makes pointer from integer without a cast
make: *** [shell.o] Error 1

I know next to nothing about installations like this.  What should I do?



James C.




[sqlite] negative numbers

2010-01-31 Thread james pruett
I am writing an open source program.

I am having trouble getting any results using this query; it returns no rows:

select * from signs where lon>-121 and lon<-119;

onCreate(SQLiteDatabase db) {
  db.execSQL("CREATE TABLE " + TABLE_NAME + " (" + _ID
+ " INTEGER PRIMARY KEY AUTOINCREMENT, " + TIME
+ " VARCHAR(45), " + TAGG
+ " VARCHAR(45), " + LAT
+ " FLOAT  , " + LON
+ " VARCHAR(45), " + COG
+ " VARCHAR(45), " + MPH
+ " VARCHAR(45), " + KPH
+  " TEXT NOT NULL);");
   }

Here is the database using RazorSQL v504
select * from signs;
_id  time           tag   lat   lon     cog  mph   kph
1    1264890306692  test  20.0  -120.0  0.0  45.0  0.0
2    1264890311484  test  20.0  -120.0  0.0  45.0  0.0
3    1264890345263  test  20.0  -120.0  0.0  45.0  0.0
4    1264890346700  test  20.0  -120.0  0.0  45.0  0.0

---
I have tried lon as VARCHAR(45) and also as a FLOAT, and neither seems
to handle negative numbers correctly.
Any help appreciated!

Jim
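
A likely explanation is column affinity: lon is declared VARCHAR(45), which
gives it TEXT affinity, so the comparison against -121 is done as text
rather than as a number.  Two possible fixes, sketched with the column
names from the listing above:

-- Give lon (and lat) REAL affinity so numeric comparisons work:
CREATE TABLE signs (
  _id  INTEGER PRIMARY KEY AUTOINCREMENT,
  time VARCHAR(45),
  tag  VARCHAR(45),
  lat  REAL,
  lon  REAL,
  cog  VARCHAR(45),
  mph  VARCHAR(45),
  kph  TEXT NOT NULL
);

-- Or force a numeric comparison against the existing text column:
SELECT * FROM signs
WHERE CAST(lon AS REAL) > -121 AND CAST(lon AS REAL) < -119;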


[sqlite] contribution: fts3 porter stemmer enhancements to handle common european accents

2010-01-27 Thread James Berry
I'd like to contribute for potential inclusion, or to help out others in the 
community, a small set of enhancements I've made to the porter tokenizer. This 
implementation shares most of its code with the current porter tokenizer, as 
the changes are really just in the tokenizer prior to the stemming operation. 
This small patch implements an additional tokenizer, which I am calling 
"porterPlus", for lack of further inspiration.

The code is based on several observations made while attempting to use the 
current porter tokenizer on a common english/utf-8 dataset:

 - There are a limited number of accented characters common in english text.

 - If the accents simply weren't there, the words would be stemmed 
appropriately, but the porter stemmer gives up on a word when it sees any utf-8 
characters, leading to perceived failures in the search queries.

 - The porter stemmer, by its very nature, is not intended to work for 
non-english text, so we can write off the major part of the the utf-8 character 
set, while concentrating on major improvements to those characters involved in 
common european languages, particularly those that have been adopted into 
english usage.

 - Additionally, there are a number of punctuation characters commonly rendered 
in utf-8 that are missed by the regular porter tokenizer  (hyphen and 
typographic quotes are good examples).

This small patch does the following:

- Defines a new tokenizer "porterPlus" which shares most of its code 
with the regular porter tokenizer

- Identifies a small subset of utf-8 characters for special handling. 
In the case of common accented varieties of regular ascii characters, the 
accents are dropped, leaving the unaccented character only. For instance, sauté 
is converted to saute. The resultant word is passed as usual into the porter 
stemmer.

- Also identifies a small subset of utf-8 characters to treat as 
delimiters, as they would otherwise be treated as part of another token, 
leading to search failures. (hyphen, typographic quotes, etc).

In our use so far, these small changes have meant that we now normalize away 
all of the important utf-8 characters in our input text, which gives us 100% 
searchability of significant input tokens.

The patch (to the 3.6.22 amalgamation) is attached.

James



[sqlite] Surprisingly slow performance

2009-09-03 Thread James Turner
I'm having a performance problem with sqlite, where a particular  
select operation (involving a join) seems to be taking much longer  
than I'd expect (by an order of magnitude). I've been through several  
iterations of searching for information on the web, adjusting the  
table structures, and modifying the relevant query, but to no avail.

Here's the basic sequence of events:
- I populate a collection of tables from a large file (inside a  
transaction which is then committed)
- I then need to add further rows to the tables (from another large  
file), but for each insert, first locate an existing row and store  
its rowid in the new row.

It's locating the existing row that is causing a huge problem -  
altering the runtime of the second phase from 2 seconds to 70 or 80  
seconds. (If I skip the select, and insert a dummy value, everything  
is blazingly fast)

Here's the slow select:

SELECT id, heading, lon, lat from  positioned, ways on ways.id =  
positioned.rowid WHERE ident=? AND type=4 AND airport=?;

Relevant table definitions:

create table positioned (type INT, ident char(8), name char(32), lon  
float, lat float, elev float, bucket int);
create table ways (id INTEGER primary key, heading float, length  
float, width float, surface int, airport int);

Other relevant things:

- this is sqlite 3.4.0, as shipped with Leopard.

 - I'm using rowids to implement support for inheritance -  
'positioned' is the base class, and 'ways' is a derived class. All the  
tables corresponding to my derived classes have an integer primary key  
which is defined to match the rowid of the base row in 'positioned'.  
Hence I'm very frequently doing joins on positioned.rowid =  
sometable.id. Perhaps there's a more efficient or standard idiom to  
express this?

 - There are indexes defined on ways (airport) and positioned (type)
and (ident)  - and some other columns. I have a slight worry that  
querying an indexed table which is being continually INSERT-ed into  
might be a potential problem, but I experimented with disabling  
certain indexes with no change in performance (well, sometimes it got  
worse)

'ident' is not unique, but matching on it should reduce the potential  
result set down to fifty or sixty rows at most (before any join needs  
to take place). A given combination of (ident, airport) *should* be  
globally unique - there should be exactly one result row unless the  
input data is malformed (which is unlikely, but possible). I've  
deliberately made the ident term the first one, after reading in  
various places that this helps the SQLite query planner.

- both positioned and ways have tens of thousands of rows; the entire  
DB will easily fit in memory. (And, at the time I'm doing these  
operations, the tables have just been populated, so I'd expect all the  
DB pages to be in memory ... but perhaps I'm wrong about that)

- in the slow phase, I'm inside a single transaction; the sequence of  
operations is the problematic select, followed by an insert; repeated  
several thousand times.

Hopefully that's all clear - if not, I can supply an example database  
file, or an EXPLAIN output of the select operation.

Regards,
James
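
For the lookup described above, one thing worth trying (a sketch, not a
measured fix) is a composite index covering both equality terms on the
positioned table, so the candidate rows and their rowids for the join to
ways can be found without scanning:

CREATE INDEX positioned_ident_type ON positioned(ident, type);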



Re: [sqlite] plans for completion of INSERT OR REPLACE INTO?

2009-07-06 Thread James Gregurich


On Jul 6, 2009, at 3:14 PM, Nicolas Williams wrote:

> On Mon, Jul 06, 2009 at 02:49:07PM -0700, James Gregurich wrote:
>> 1) Why on earth would you want to scroll all the way to the bottom of
>> a long email to get the response simply for the sake of "We read
>> English top to bottom."
>
> Any quoted context must be read before the reply or else is not
> necessary and should not have been quoted.  Thus top posting (as
> the practice is known) is obnoxious because it causes the reader to
> read the reply, read the quoted text, and then re-read the reply to
> understand it in its proper context, and if the quoted text was not
> necessary in order to understand your reply (as in this particular
> case), then it just wastes bandwidth and storage.
>
>> 2) This is going to be a challenge for me because I'm not writing a
>> fixed DB with a known schema.  I'm writing a higher-level data store
>> wrapper using SQLITE as the backing. The schema is configured at
>> runtime.
>
> I agree.  ISTM that SQLite3 ought to either call DELETE triggers
> for all rows deleted by INSERT OR REPLACE, or UPDATE triggers for the
> row that had the same PRIMARY KEY and DELETE triggers for any other
> deleted rows.
>
> (To me INSERT OR REPLACE can equally be seen as deleting all conflicting
> rows, then inserting a replacement row, or else as updating the row that
> conflicts with the new row's PRIMARY KEY and deleting rows that conflict
> on other unique indexes.  If the INSERT OR REPLACE has many input rows
> then this gets murkier in that which rows are updated or deleted may
> become non-deterministic, I think, so deleting all affected rows seems
> like the better solution.)
>
>> 3)  I found the following comment on the mysql dev site (relevant
>> since "replace into" is inspired by mysql). They apparently fire the
>> triggers as expected. It seems to me that your comments overcomplicate
>> the requirements. If you are going to delete a row, call the delete
>> triggers... that is all that is needed as best I can tell. However, I
>> do admit that I am not well educated on SQL db engines. I'm figuring
>> all this out as I go along. Feel free to tell me how absurdly wrong I
>> am.
>
> INSERT OR REPLACE is not a feature of the standard, so anything  
> goes...
>
> Nico
> -- 



How's this...

you have a pretty low threshold for "obnoxious." Frankly, I lack the  
desire and energy needed to keep up with the list of rules people make  
up. read the email or don't. it doesn't matter to me. I don't make  
rules designed to get other people to adhere to my personal  
preferences... and it annoys me when I get lectured by others who
expect me to follow some unwritten code of conduct just to suit their  
personal tastes. The adjective I'd use to counter "obnoxious" is  
"uptight."







Re: [sqlite] referential integrity and INSERT OR REPLACE

2009-07-06 Thread James Gregurich

based on the test I just ran, it reports the first one encountered only.

On Jul 6, 2009, at 2:53 PM, Nicolas Williams wrote:

> On Sat, Jul 04, 2009 at 10:24:50AM +0200, Kees Nuyt wrote:
>> On Fri, 03 Jul 2009 14:38:43 -0700, James Gregurich
>> <bayouben...@mac.com> wrote:
>>
>>>
>>> nuts. that makes INSERT OR REPLACE worthless if you have tables
>>> dependent on one another.
>>>
>>>
>>> Is there any way to manually get a list of records for which there
>>> would be a conflict if a given record was inserted?
>>
>> BEGIN;
>> INSERT the row, and observe any UNIQUE constraint failures.
>>
>> If there isn't one, the INSERT succeeded and you're done.
>
> Does SQLite3 report all those failuers in one attempt?  Or must one  
> loop
> to make this a robust conflict detection algorithm?


Re: [sqlite] plans for completion of INSERT OR REPLACE INTO?

2009-07-06 Thread James Gregurich

1) Why on earth would you want to scroll all the way to the bottom of  
a long email to get the response simply for the sake of "We read  
English top to bottom."

2) This is going to be a challenge for me because I'm not writing a  
fixed DB with a known schema.  I'm writing a higher-level data store  
wrapper using SQLITE as the backing. The schema is configured at  
runtime.

3)  I found the following comment on the mysql dev site (relevant  
since "replace into" is inspired by mysql). They apparently fire the  
triggers as expected. It seems to me that your comments overcomplicate  
the requirements. If you are going to delete a row, call the delete  
triggers... that is all that is needed as best I can tell. However, I
do admit that I am not well educated on SQL db engines. I'm figuring  
all this out as I go along. Feel free to tell me how absurdly wrong I  
am.


http://dev.mysql.com/doc/refman/5.0/en/replace.html

> MySQL uses the following algorithm for REPLACE (and LOAD DATA ...  
> REPLACE):
>
> Try to insert the new row into the table
>
> While the insertion fails because a duplicate-key error occurs for a  
> primary key or unique index:
>
> Delete from the table the conflicting row that has the duplicate key  
> value
>
> Try again to insert the new row into the table
>


and in the comments...


> If you are using REPLACE INTO... triggers are fired in this order  
> (if delete of duplcate key is used):
> - before insert
> - before delete
> - after delete
> - after insert
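
A small self-contained illustration (hypothetical table, not from this
thread) of why the trigger question is messy in SQLite: a single INSERT OR
REPLACE can displace more than one existing row when the new row collides
with two different UNIQUE constraints:

CREATE TABLE t (a INTEGER PRIMARY KEY, b UNIQUE);
INSERT INTO t VALUES (1, 'x');
INSERT INTO t VALUES (2, 'y');
-- Conflicts with (1,'x') on column a and with (2,'y') on column b,
-- so both old rows are removed before the new row goes in.
INSERT OR REPLACE INTO t VALUES (1, 'y');
SELECT * FROM t;   -- only the row (1, 'y') remains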





On Jul 6, 2009, at 1:15 PM, Simon Slavin wrote:

> Please quote previous text above your response to it.  We read English
> top to bottom.
>
> On 6 Jul 2009, at 8:22pm, James Gregurich wrote:
>
>> On Jul 6, 2009, at 3:53 AM, Simon Slavin wrote:
>>
>>> It should not call DELETE triggers since it never deletes.  It  
>>> should
>>> call either INSERT triggers or UPDATE triggers depending on which  
>>> one
>>> it decides to do.
>>>
>>> In the meantime, you can do the same thing yourself: instead of
>>> calling INSERT OR REPLACE, test to see which one would happen then
>>> call either INSERT or UPDATE.  The triggers on both of those
>>> combinations will work correctly.
>
>> so you are suggesting that I put an INSERT in a C loop checking for a
>> constraint violation failure.
>
> You have pointed out an error I made.  I was thinking that REPLACE
> meant that only one existing row could be replaced.  This is wrong:
> the new row can replace any number of existing rows.  Thank you for
> spotting my error.  The documentation points it out, not quite as
> clearly, in the REPLACE section of
>
> http://www.sqlite.org/lang_conflict.html
>
> So any proper trigger structure would have to call a combination of
> all three types of triggers: INSERT, DELETE and UPDATE.  I can't think
> of a good way to manage this properly.  And that may be why INSERT OR
> REPLACE itself doesn't use triggers correctly: it's too complicated to
> work out which of the existing rows is being REPLACED.  And you can't
> call DELETE triggers instead, because they're intended to stop
> something being deleted, and it might be okay to DELETE this thing if
> you're immediately going to INSERT another row that satisfies the
> requirement.
>
> I can only suggest that you handle the constraints testing yourself,
> in your code.  You will know, from the design of your database,
> whether your new row should be considered a replacement for an
> existing one.  You're going to have to replace your INSERT OR REPLACE
> with your own check for other constraints, and then deciding in your
> own code what commands to execute.  I don't think it will be possible
> to write code to do this properly for an arbitrary database.
>
>>>  Or do the INSERT, allow it to fail if
>>> it will, then do the UPDATE.
>
> This previous suggestion of mine won't work at all, given that there's
> no way to tell which of the existing records you think you're  
> updating.
>
> I conclude that INSERT OR REPLACE isn't compatible with triggers, or
> that triggers have to support FOR EACH TRANSACTION as well as ROW and
> STATEMENT, or that INSERT OR REPLACE has to treat a primary key
> conflict differently to some other kind of conflict.  There are
> drawbacks to all three of these.  I can't think of a way to do it that
> suits SQLite's small/neat/obvious design criteria.
>
> Simon.


Re: [sqlite] plans for completion of INSERT OR REPLACE INTO?

2009-07-06 Thread James Gregurich

so you are suggesting that I put an INSERT in a C loop checking for a  
constraint violation failure. If I get one, I use errmsg to get the
"column <name> is not unique" message and extract <name>. Then, I issue a
DELETE to clear out rows that match the value of <name>. Is that correct?





On Jul 6, 2009, at 3:53 AM, Simon Slavin wrote:

> (Sorry, hit 'Send' before I meant to.)
>
> On 6 Jul 2009, at 6:34am, James Gregurich wrote:
>
>> a question for the sqlite developers.
>>
>> The inability of "INSERT OR REPLACE" to maintain referential  
>> integrity
>> leaves me with no mechanism to implement a feature in my project that
>> I was intending to provide.  Are there any plans to add in the
>> functionality for "INSERT OR REPLACE" to call delete triggers so that
>> referential integrity can be maintained?
>
> It should not call DELETE triggers since it never deletes.  It should
> call either INSERT triggers or UPDATE triggers depending on which one
> it decides to do.
>
> In the meantime, you can do the same thing yourself: instead of
> calling INSERT OR REPLACE, test to see which one would happen then
> call either INSERT or UPDATE.  Or do the INSERT, allow it to fail if
> it will, then do the UPDATE.  The triggers on both of those
> combinations will work correctly.
>
> Simon.


[sqlite] plans for completion of INSERT OR REPLACE INTO?

2009-07-05 Thread James Gregurich
a question for the sqlite developers.

The inability of "INSERT OR REPLACE" to maintain referential integrity  
leaves me with no mechanism to implement a feature in my project that  
I was intending to provide.  Are there any plans to add in the  
functionality for "INSERT OR REPLACE" to call delete triggers so that  
referential integrity can be maintained?

thanks,
James


[sqlite] How to refer to a multiple-column primary key (PK) as 1 column (field) name

2009-07-04 Thread James Scott
I have the following:

CREATE TABLE [Sections] (
  [Department] varchar NOT NULL COLLATE NOCASE,
  [Course] varchar NOT NULL COLLATE NOCASE,
  [Section] varchar NOT NULL COLLATE NOCASE,
  [Class_Time] timestamp,
  [I_Id] varchar COLLATE NOCASE,
  [Room] varchar COLLATE NOCASE,
  CONSTRAINT [sqlite_autoindex_Sections_1] PRIMARY KEY ([Department],
[Course], [Section]));

CREATE INDEX [PK_Sections] ON [Sections] ([Department], [Course],
[Section]);

In the programming language, I need to refer to the primary key as 1 field.
Does Sqlite allow a 'calculated field', such as concatenation of the 3
columns in the PK?  Or can I create a View to combine the 3?
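
A view is probably the simplest route; a sketch, using ':' as an arbitrary
separator (any character that cannot appear in the key parts would do):

CREATE VIEW Sections_Keyed AS
SELECT Department || ':' || Course || ':' || Section AS Section_Key,
       Department, Course, Section, Class_Time, I_Id, Room
FROM Sections;

-- The composite key can then be addressed as a single column, e.g.
-- SELECT * FROM Sections_Keyed WHERE Section_Key = 'MATH:101:A';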

Any help is greatly appreciated.

Thanks, Bradley


Re: [sqlite] referential integrity and INSERT OR REPLACE

2009-07-03 Thread James Gregurich

nuts. that makes INSERT OR REPLACE worthless if you have tables  
dependent on one another.


Is there any way to manually get a list of records for which there  
would be a conflict if a given record was inserted?
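
One manual approach, sketched with placeholder names: probe each UNIQUE
constraint of the target table before inserting; any rows the query
returns are the ones an INSERT OR REPLACE would displace:

SELECT rowid, *
FROM target_table
WHERE pk_col = :new_pk
   OR other_unique_col = :new_unique_value;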


> On Fri, 03 Jul 2009 11:29:14 -0700, James Gregurich
> <bayouben...@mac.com> wrote:
>
> >
> >based on my reading of the docs for INSERT OR REPLACE, it will delete
> >rows for ANY constraint violation, not just one involving the primary
> >key. Is that reading wrong?
>
> You are right, for UNIQUE constraint violations.
>
> Indeed it breaks referential integrity, because the new row
> can contain a new value for the primary key, without
> cascading deletes or updates concerning the old value.
>
> My conclusion: syntax shortcuts like INSERT OR REPLACE are
> evil ;)
>
> >-James
> -- 
>   (  Kees Nuyt
>   )
> c[_]



Re: [sqlite] referential integrity and INSERT OR REPLACE

2009-07-03 Thread James Gregurich

I read on another posting in the archives that it does not. However, I  
haven't tried it myself.

-James

> Simon Slavin
> Fri, 03 Jul 2009 09:44:22 -0700
>
> On 3 Jul 2009, at 3:28am, James Gregurich wrote:
>
> > How do I maintain referential integrity on an INSERT OR REPLACE given
> > it does not call the delete trigger on the offending rows?
>
> If SQLite decides that it's going to do a REPLACE rather than an
> INSERT, does it call the triggers for UPDATE ?  I think that would be
> a good way for it to work.
>
> Simon.


[sqlite] referential integrity and INSERT OR REPLACE

2009-07-03 Thread James Gregurich

question:

How do I maintain referential integrity on an INSERT OR REPLACE given
it does not call the delete trigger on the offending rows?

thanks,
james
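
Until then, one workaround is to spell the replace out explicitly so the
DELETE triggers (and whatever cleanup they perform) actually run; a sketch
with placeholder table and column names:

BEGIN;
-- Fires the DELETE triggers for any existing row with this key.
DELETE FROM parent WHERE id = :new_id;
INSERT INTO parent (id, name) VALUES (:new_id, :new_name);
COMMIT;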


Re: [sqlite] getting offending constraint

2009-07-02 Thread James Gregurich

I dropped the constraint and added the trigger.


Strange: it works as expected in the sqlite3 exe.  In C code, I get
"constraint failed" from sqlite3_errmsg. If I drop the trigger shown
below, the C code has no constraint violation, as would be expected,
which means the trigger is causing the constraint violation.



sqlite> .dump
BEGIN TRANSACTION;
CREATE TABLE test1(i INTEGER, a INTEGER, b INTEGER, c INTEGER, d  
INTEGER);
INSERT INTO "test1" VALUES(0,1,2,3,4);
INSERT INTO "test1" VALUES(1,10,20,30,40);
INSERT INTO "test1" VALUES(2,100,200,300,400);
INSERT INTO "test1" VALUES(3,1000,2000,3000,4000);
INSERT INTO "test1" VALUES(4,1,2,3,4);
INSERT INTO "test1" VALUES(5,10,20,30,40);
CREATE TABLE test1a(i INTEGER, a INTEGER, b INTEGER);
CREATE TABLE test1b(i INTEGER, c INTEGER, d INTEGER);
CREATE TRIGGER trig BEFORE INSERT ON test1b
BEGIN
SELECT CASE
WHEN (1)
THEN RAISE(ABORT, 'no parent element')
END;
END;
COMMIT;
sqlite> INSERT INTO "test1b" VALUES(1,10,20);
SQL error: no parent element
sqlite>










On Jul 1, 2009, at 6:40 PM, Simon Slavin wrote:

>
> On 2 Jul 2009, at 1:57am, James Gregurich wrote:
>
>> I tried that, but I still got back "constraint failed" rather than my
>> RAISE message. Since you say it should work, I probably did something
>> wrong. I'll look at it again.
>
> If you left the constraint definition in in your table definition then
> you're getting an error from that rather than from the trigger.
>
> Simon.


Re: [sqlite] getting offending constraint

2009-07-01 Thread James Gregurich

thanks.

I tried that, but I still got back "constraint failed" rather than my  
RAISE message. Since you say it should work, I probably did something  
wrong. I'll look at it again.

On Jul 1, 2009, at 3:59 PM, Simon Slavin wrote:

>
> On 1 Jul 2009, at 8:19pm, James Gregurich wrote:
>
>> Would there be a way to identify the offending constraint if
>> "SQLITE_CONSTRAINT" is returned?
>>
>>
>> sqlite3_errmsg is just telling me "constraint failed"...which is of
>> limited usefulness.
>
> Instead of the constraint, you could define a trigger, and use the
> 'RAISE' form to supply your own error message.  Here's an example:
>
> CREATE TRIGGER authors_books_insert BEFORE INSERT ON books
>   FOR EACH ROW BEGIN
>   SELECT RAISE(ROLLBACK, 'Attempt to add a book with an author
> number which is not valid.')
>   WHERE (SELECT id FROM authors WHERE id = new.author) IS NULL;
>   END
>
> You get back exactly the error message you put in.
>
> Simon.


Re: [sqlite] getting offending constraint

2009-07-01 Thread James Gregurich

ah. I have no knowledge of how mailing list programs work. no "poor  
etiquette" was intended.



On Jul 1, 2009, at 1:41 PM, P Kishor wrote:

> On Wed, Jul 1, 2009 at 3:39 PM, James Gregurich<bayouben...@mac.com>  
> wrote:
>>
>> How would I have "hijacked" a thread?  I changed the subject and
>> removed the original text.
>>
> ...
>
> that is exactly how a thread is hijacked... changing the subject is
> not enough. Every message has a unique id that is used by the mail
> programs to keep track of threading.


Re: [sqlite] getting offending constraint

2009-07-01 Thread James Gregurich

How would I have "hijacked" a thread?  I changed the subject and  
removed the original text.





On Jul 1, 2009, at 12:32 PM, Roger Binns wrote:

> James Gregurich wrote:
>>
>> howdy!
>
> You hijacked someone else's thread by hitting reply, rather than
> starting a new one.  That is very poor netiquette.
>
>> Would there be a way to identify the offending constraint if
>> "SQLITE_CONSTRAINT" is returned?
>>
>>  sqlite3_errmsg is just telling me "constraint failed"...which is of
>> limited usefulness.
>
> http://www.sqlite.org/cvstrac/tktview?tn=1648
>
> Roger


[sqlite] getting offending constraint

2009-07-01 Thread James Gregurich


howdy!

Would there be a way to identify the offending constraint if  
"SQLITE_CONSTRAINT" is returned?


  sqlite3_errmsg is just telling me "constraint failed"...which is of  
limited usefulness.

-James



Re: [sqlite] [noob] merge statement equivalent?

2009-06-19 Thread James Gregurich
thanks!


On Jun 18, 2009, at 6:01 PM, Dennis Cote wrote:

> James Gregurich wrote:
>> on that update statement, is the SQL optimizer smart enough to not
>> rerun that select statement for each column in the update's set
>> clause? Is it going to run a single select statement to get <value1>,
>> <value2>, etc., or is it going to run one for each column in the
>> update statement?
>>
>>
> James,
>
> No, I don't believe the optimizer is that smart. SQLite will execute
> multiple queries.
>
> If you are concerned that the matches table is large you could add an
> index on the row1 column of the matches table to speed up the row2
> lookups. The lookups in table2 using the rowid should be very fast, and
> once the page with the required record has been read into the cache the
> subsequent value lookup queries should execute very quickly as well.
>
> HTH
> Dennis Cote
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [noob] merge statement equivalent?

2009-06-17 Thread James Gregurich

Oops. Sorry for the errant message, folks. I had the wrong email selected
when I hit the button and didn't pay attention to what I was doing.

On Jun 17, 2009, at 3:19 PM, James Gregurich wrote:

>
> So what are you going to do? We need to get your plans pinned down.
>
> On Jun 17, 2009, at 11:46 AM, James Gregurich wrote:
>
>>
>> Dennis,
>>
>> question on an old post of yours below...
>>
>>
>> on that update statement, is the SQL optimizer smart enough to not
>> rerun that select statement for each column in the update's set
>> clause? Is it going to run a single select statement to get <value1>,
>> <value2>, etc., or is it going to run one for each column in the
>> update statement?
>>
>> -James
>>
>>> Petite Abeille wrote:
>>>>
>>>> How does one emulate a DML MERGE statement in SQLite [1]?
>>>>
>>>> INSERT OR REPLACE sounds promising but the REPLACE documentation
>>> under
>>>> the ON CONFLICT clause seems to imply that in the case of a
>>> constraint
>>>> violation the existing row will be deleted entirely and then
>>> replaced
>>>> by a brand new row instead of being merely updated [2].
>>>>
>>>> Apologies if this is a FAQ, but my google-fu is eluding me on this
>>> one.
>>>>
>>>> Thanks in advance.
>>>>
>>>
>>> I haven't tested this so take it with a grain of salt, but I think
>>> this
>>> should do the same thing as the merge statement.
>>>
>>> Given two tables, table1 and table2.
>>>
>>> merge into table1 using table2 on <join condition>
>>>    when matched then update
>>>    set <col1> = <value1>,
>>>        <col2> = <value2> ...
>>>    when not matched then insert <col1>, <col2> ...
>>>    values (<value1>, <value2> ...)
>>>
>>> Should be the same as the following series of SQL statements.
>>>
>>> create temp table matches as
>>>    select t1.rowid as row1, t2.rowid as row2
>>>    from table1 t1
>>>    join table2 t2
>>>    where <join condition>;
>>>
>>> insert into table1 (<col1>, <col2> ...)
>>>    select <col1>, <col2> ... from table2
>>>    where rowid not in (select row2 from matches);
>>>
>>> update table1
>>>    set <col1> = (select <col1> from table2
>>>        where table2.rowid =
>>>            (select row2 from matches
>>>             where row1 = table1.rowid)),
>>>        <col2> = (select <col2> from table2
>>>        where table2.rowid =
>>>            (select row2 from matches
>>>             where row1 = table1.rowid))
>>>    ...
>>>    where rowid in (select row1 from matches);
>>>
>>> drop table matches;
>>>
>>>
>>> HTH
>>> Dennis Cote
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [noob] merge statement equivalent?

2009-06-17 Thread James Gregurich

So what are you going to do? We need to get your plans pinned down.

On Jun 17, 2009, at 11:46 AM, James Gregurich wrote:

>
> Dennis,
>
> question on an old post of yours below...
>
>
> on that update statement, is the SQL optimizer smart enough to not
> rerun that select statement for each column in the update's set
> clause? Is it going to run a single select statement to get <value1>,
> <value2>, etc., or is it going to run one for each column in the
> update statement?
>
> -James
>
>> Petite Abeille wrote:
>>>
>>> How does one emulate a DML MERGE statement in SQLite [1]?
>>>
>>> INSERT OR REPLACE sounds promising but the REPLACE documentation
>> under
>>> the ON CONFLICT clause seems to imply that in the case of a
>> constraint
>>> violation the existing row will be deleted entirely and then
>> replaced
>>> by a brand new row instead of being merely updated [2].
>>>
>>> Apologies if this is a FAQ, but my google-fu is eluding me on this
>> one.
>>>
>>> Thanks in advance.
>>>
>>
>> I haven't tested this so take it with a grain of salt, but I think
>> this
>> should do the same thing as the merge statement.
>>
>> Given two tables, table1 and table2.
>>
>> merge into table1 using table2 on <join condition>
>>    when matched then update
>>    set <col1> = <value1>,
>>        <col2> = <value2> ...
>>    when not matched then insert <col1>, <col2> ...
>>    values (<value1>, <value2> ...)
>>
>> Should be the same as the following series of SQL statements.
>>
>> create temp table matches as
>>    select t1.rowid as row1, t2.rowid as row2
>>    from table1 t1
>>    join table2 t2
>>    where <join condition>;
>>
>> insert into table1 (<col1>, <col2> ...)
>>    select <col1>, <col2> ... from table2
>>    where rowid not in (select row2 from matches);
>>
>> update table1
>>    set <col1> = (select <col1> from table2
>>        where table2.rowid =
>>            (select row2 from matches
>>             where row1 = table1.rowid)),
>>        <col2> = (select <col2> from table2
>>        where table2.rowid =
>>            (select row2 from matches
>>             where row1 = table1.rowid))
>>    ...
>>    where rowid in (select row1 from matches);
>>
>> drop table matches;
>>
>>
>> HTH
>> Dennis Cote
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [noob] merge statement equivalent?

2009-06-17 Thread James Gregurich

Dennis,

question on an old post of yours below...


on that update statement, is the SQL optimizer smart enough to not
rerun that select statement for each column in the update's set
clause? Is it going to run a single select statement to get <value1>,
<value2>, etc., or is it going to run one for each column in the
update statement?

-James

> Petite Abeille wrote:
> >
> > How does one emulate a DML MERGE statement in SQLite [1]?
> >
> > INSERT OR REPLACE sounds promising but the REPLACE documentation  
> under
> > the ON CONFLICT clause seems to imply that in the case of a  
> constraint
> > violation the existing row will be deleted entirely and then  
> replaced
> > by a brand new row instead of being merely updated [2].
> >
> > Apologies if this is a FAQ, but my google-fu is eluding me on this  
> one.
> >
> > Thanks in advance.
> >
>
> I haven't tested this so take it with a grain of salt, but I think  
> this
> should do the same thing as the merge statement.
>
> Given two tables, table1 and table2.
>
> merge into table1 using table2 on <join condition>
>    when matched then update
>    set <col1> = <value1>,
>        <col2> = <value2> ...
>    when not matched then insert <col1>, <col2> ...
>    values (<value1>, <value2> ...)
>
> Should be the same as the following series of SQL statements.
>
> create temp table matches as
>    select t1.rowid as row1, t2.rowid as row2
>    from table1 t1
>    join table2 t2
>    where <join condition>;
>
> insert into table1 (<col1>, <col2> ...)
>    select <col1>, <col2> ... from table2
>    where rowid not in (select row2 from matches);
>
> update table1
>    set <col1> = (select <col1> from table2
>        where table2.rowid =
>            (select row2 from matches
>             where row1 = table1.rowid)),
>        <col2> = (select <col2> from table2
>        where table2.rowid =
>            (select row2 from matches
>             where row1 = table1.rowid))
>    ...
>    where rowid in (select row1 from matches);
>
> drop table matches;
>
>
> HTH
> Dennis Cote
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] append table

2009-05-16 Thread James Gregurich
howdy!

Questions:


Suppose tables t1 & t2 exist in two separate db files, d1 & d2
respectively. t1 & t2 have identical schemas, but different data.

I want to append t2 to t1.


I suppose the way to do that is to open a connection to d1, use the
ATTACH command to reference d2, and issue an "INSERT INTO t1 SELECT
* FROM d2.t2;" query.
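
Something like this, to be concrete (file and table names just follow the
example above):

   -- on a connection already open on d1
   ATTACH DATABASE 'd2' AS d2;
   BEGIN;                              -- transaction optional; just speeds up a big copy
   INSERT INTO t1 SELECT * FROM d2.t2;
   COMMIT;
   DETACH DATABASE d2;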


Is that correct?  Is there a better way?

Is this operation inefficient, or are there any pitfalls to watch out for?


-James





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-05-01 Thread James Gregurich

I describe reality.

Someone has to be the arbiter of "better." Generally, that arbiter is  
the guy handing out the research grants.

On May 1, 2009, at 5:33 AM, John Stanton wrote:

>
> Science is the Scientific Method - observation, hypothesis and
> skepticism.  The antithesis of politics.  There are no facts in  
> science,
> only observations and any hypothesis is only valid until a better one
> replaces it.
>
> You describe bad, politicized science.
>
> James Gregurich wrote:
>> With all due respect,  science itself is a set of
>> "positions" (opinions) which are endorsed by small group of people as
>> official doctrine after appropriate study. Saying "A 'position' is
>> politics, not science" is not a particularly meaningful statement.   
>> If
>> you want to argue that point, feel free to send me a private email.
>>
>> My threaded application works pretty darn well. I can process
>> thousands of print industry files on an 8-core system keeping the
>> cores busy without lagging the GUI for other applications. Just
>> because many people create ill conceived programs doesn't mean
>> threaded programs are inherently doomed to be ill-conceived. The
>> development tools and techniques for building concurrent systems are
>> advancing and making concurrency quite feasible.
>>
>> James Gregurich
>> Engineering Manager
>> Markzware
>>
>> On Apr 30, 2009, at 5:01 AM, John Stanton wrote:
>>
>>> A "position" is politics, not science.  Warnings about the use of
>>> threads are based on science, and advise you to avoid them if  
>>> possible
>>> for your own protection.
>>>
>>> I see ill conceived programs using threads which go to complex
>>> synchronization to achieve the equivalent of single stream execution
>>> but
>>> with much greater overhead.  A KISS situation.
>>>
>>> James Gregurich wrote:
>>>> thanks for the info. That should work for me.
>>>>
>>>> Given the industry is going multicore and 16-core macintoshes for
>>>> your
>>>> grand-mother are  just a few years away, I recommend you rethink  
>>>> your
>>>> position on the use of threading. Apple is heavily pushing
>>>> parallelism
>>>> on its developers.  NSOperation is a major part of that effort.  
>>>> As I
>>>> understand it, MS is developing their copy of NSOperation for  
>>>> VS2010.
>>>> The development landscape is only going to get more threaded as  
>>>> time
>>>> goes on.
>>>>
>>>> -James
>>>>
>>>>
>>>>
>>>>> On Apr 29, 2009, at 10:03 PM, James Gregurich wrote:
>>>>>
>>>>>
>>>>>> howdy!
>>>>>>
>>>>>> question:
>>>>>>
>>>>>> for an in-memory db with the threading mode set to serialized, is
>>>>>>
>>>>> the
>>>>>
>>>>>> internal mutex held for an entire transaction so that one thread
>>>>>>
>>>>> won't
>>>>>
>>>>>> access the db while another one is in the middle of a transaction
>>>>>>
>>>>> with
>>>>>
>>>>>> multiple insert statements?
>>>>>>
>>>>> No.  But the mutex is recursive.  So you can get a copy of it  
>>>>> using
>>>>> sqlite3_db_mutex() then lock it yourself using
>>>>> sqlite3_mutex_enter()/
>>>>> leave().
>>>>>
>>>>> Also remember:  You should not be using threads.  Threads will  
>>>>> bring
>>>>> only grief and woe.  On your own head be it.
>>>>>
>>>>>
>>>>>
>>>>> D. Richard Hipp
>>>>> drh at hwaci.com
>>>>>
>>>>>
>>>> ___
>>>> sqlite-users mailing list
>>>> sqlite-users@sqlite.org
>>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-04-30 Thread James Gregurich

I can't agree that such a thing would be a good approach in a
commercial desktop application environment. I'd never deploy something
like that to millions of graphic designers. I want everything in a
nice, tidy black box that the average Joe is incredibly unlikely to
screw up. I have no idea what Google Chrome does and can't comment on
it. I don't use the app.

Beyond that, I don't see how that approach solves the problem you  
point out. You still have concurrency going on with shared data  
structures. You still have to implement serialization on the shared  
data structures. The only thing you gain from that design over a  
threaded design is an extra degree of resiliency in that a crashed  
task won't bring down the app. On the downside, you have the extra  
hassle and complication of IPC.

The way I guard against a single task bringing the app down is that I  
religiously keep code exception safe, check for NULLs, and use  
shared_ptr's...I expect the same from my staff. Another way to guard  
the app is to minimize the use  of mutexes. Instead of blocking  
threads, you keep your tasks very small and focused, and you set up  
execution dependencies between tasks. Task B can be made not to run  
until task A is completed. Finally, the primary shared data structure  
is a SQLite in-memory store which is wrapped in C++ code that handles  
the dirty details of serializing transactions on the DB.


Handling the limited 32 bit VM space was indeed a challenge. I had to  
come up with a scheme to throttle the task queue once memory  
consumption reached a certain level.


As for the quality of staff members, that is always a challenge. All I  
can do about that is recruit and retain people who are talented and  
can write solid code.

-James


On Apr 30, 2009, at 4:37 PM, Roger Binns wrote:

> James Gregurich wrote:
>> So, you suggest I should build a commercial desktop application (for
>> processing print-industry files and presenting them in a UI)  in such
>> a way that it spawns multiple processes and communicates with them  
>> via
>> the filesystem or IPC APIs?
>
> You obviously know more about your application, APIs, libraries etc  
> but
> it does sound like it would actually be a very good approach.  And of
> course you can also spawn processes on other machines too should the
> need arise.  The description sounds not too different than what Google
> Chrome does.
>
>> Why would I want to go to that level of complexity in an
>> uncontrollable environment (i.e. a consumer desktop computer) when I
>> can just use NSOperation, boost::thread, and boost::mutex to build a
>> single-process solution that shares data in a normal way between  
>> tasks?
>
> Because while you are a perfect programming machine, not everyone else
> who will touch the code in the future is.  As an example if one mutex
> call is left out or the wrong acquired by programming accident, how  
> long
> would it take to know about it and fix it?
>
> If you have to run in a 32 bit address space then that also limits how
> much you can do in one process.  Do you even know how large the  
> maximum
> stack size is per thread and will other coders never exceed that?
> [Don't answer here - its your application, architecture and team :]
>
> Roger
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-04-30 Thread James Gregurich

So, you suggest I should build a commercial desktop application (for  
processing print-industry files and presenting them in a UI)  in such  
a way that it spawns multiple processes and communicates with them via  
the filesystem or IPC APIs?

Why would I want to go to that level of complexity in an  
uncontrollable environment (i.e. a consumer desktop computer) when I  
can just use NSOperation, boost::thread, and boost::mutex to build a  
single-process solution that shares data in a normal way between tasks?

James Gregurich
Engineering Manager
Markzware


On Apr 29, 2009, at 11:23 PM, Roger Binns wrote:

> James Gregurich wrote:
>> Given the industry is going multicore and 16-core macintoshes for  
>> your
>> grand-mother are  just a few years away, I recommend you rethink your
>> position on the use of threading.
>
> Threading is the worst solution to many cpu/core and large memory.
> Having the same memory addressed (which is what threading does) across
> multiple concurrently executing cpus/cores causes cache thrashing,
> memory contention, frequent use of memory barriers for synchronization
> (which also slows things down) and as memory becomes attached to
> individual cpus leads to access being over hypertransport/csi.
>
> Far better is using multiple processes which don't have those  
> issues.  A
> good example application to study is Google Chrome which uses multiple
> processes prolifically - each tab is a separate process, as well as  
> the
> various plugins, languages etc.  Each process can be appropriately
> locked down using the principle of least privilege.  If you use  
> threads
> then typically they all have permission to do anything the process  
> could do.
>
> (Also the multi-process approach is way easier to test, record/replay
> and is more deterministic)
>
> Erlang is also worth studying.  It only has single threaded processes
> (although the processes are lighter weight than operating system  
> processes).
>
>> NSOperation is a major part of that effort.
>
> If the "single encapsulated task" doesn't go around concurrently
> touching bits of memory then it could be shunted to a separate process
> anyway.
>
> The danger from threads is not the threads themselves, but the
> concurrency.  It is virtually impossible to prove that a threaded
> process does the concurrency correctly under all circumstances.  If  
> you
> eliminate the concurrency then you can use multiple processes, and can
> usually even make it scale over multiple machines!
>
> Roger
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-04-30 Thread James Gregurich

With all due respect, science itself is a set of
"positions" (opinions) which are endorsed by a small group of people as
official doctrine after appropriate study. Saying "A 'position' is
politics, not science" is not a particularly meaningful statement.  If
you want to argue that point, feel free to send me a private email.

My threaded application works pretty darn well. I can process
thousands of print-industry files on an 8-core system, keeping the
cores busy without lagging the GUI for other applications. Just
because many people create ill-conceived programs doesn't mean
threaded programs are inherently doomed to be ill-conceived. The
development tools and techniques for building concurrent systems are
advancing and making concurrency quite feasible.

James Gregurich
Engineering Manager
Markzware

On Apr 30, 2009, at 5:01 AM, John Stanton wrote:

> A "position" is politics, not science.  Warnings about the use of
> threads are based on science, and advise you to avoid them if possible
> for your own protection.
>
> I see ill conceived programs using threads which go to complex
> synchronization to achieve the equivalent of single stream execution  
> but
> with much greater overhead.  A KISS situation.
>
> James Gregurich wrote:
>> thanks for the info. That should work for me.
>>
>> Given the industry is going multicore and 16-core macintoshes for  
>> your
>> grand-mother are  just a few years away, I recommend you rethink your
>> position on the use of threading. Apple is heavily pushing  
>> parallelism
>> on its developers.  NSOperation is a major part of that effort. As I
>> understand it, MS is developing their copy of NSOperation for VS2010.
>> The development landscape is only going to get more threaded as time
>> goes on.
>>
>> -James
>>
>>
>>
>>> On Apr 29, 2009, at 10:03 PM, James Gregurich wrote:
>>>
>>>
>>>> howdy!
>>>>
>>>> question:
>>>>
>>>> for an in-memory db with the threading mode set to serialized, is
>>>>
>>> the
>>>
>>>> internal mutex held for an entire transaction so that one thread
>>>>
>>> won't
>>>
>>>> access the db while another one is in the middle of a transaction
>>>>
>>> with
>>>
>>>> multiple insert statements?
>>>>
>>> No.  But the mutex is recursive.  So you can get a copy of it using
>>> sqlite3_db_mutex() then lock it yourself using  
>>> sqlite3_mutex_enter()/
>>> leave().
>>>
>>> Also remember:  You should not be using threads.  Threads will bring
>>> only grief and woe.  On your own head be it.
>>>
>>>
>>>
>>> D. Richard Hipp
>>> drh at hwaci.com
>>>
>>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-04-29 Thread James Gregurich

Thanks for the info. That should work for me.

Given that the industry is going multicore and 16-core Macintoshes for your
grandmother are just a few years away, I recommend you rethink your
position on the use of threading. Apple is heavily pushing parallelism
on its developers.  NSOperation is a major part of that effort. As I
understand it, MS is developing its copy of NSOperation for VS2010.
The development landscape is only going to get more threaded as time
goes on.

-James


> On Apr 29, 2009, at 10:03 PM, James Gregurich wrote:
>
> > howdy!
> >
> > question:
> >
> > for an in-memory db with the threading mode set to serialized, is  
> the
> > internal mutex held for an entire transaction so that one thread  
> won't
> > access the db while another one is in the middle of a transaction  
> with
> > multiple insert statements?
>
>
> No.  But the mutex is recursive.  So you can get a copy of it using
> sqlite3_db_mutex() then lock it yourself using sqlite3_mutex_enter()/
> leave().
>
> Also remember:  You should not be using threads.  Threads will bring
> only grief and woe.  On your own head be it.
>
>
>
> D. Richard Hipp
> drh at hwaci.com
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] mutex and begin/end transaction

2009-04-29 Thread James Gregurich
howdy!

question:

for an in-memory db with the threading mode set to serialized, is the  
internal mutex held for an entire transaction so that one thread won't  
access the db while another one is in the middle of a transaction with  
multiple insert statements?


thanks for any info.

James Gregurich
Engineering Manager
Markzware

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug or working as designed?

2008-10-29 Thread James Sheridan
James Sheridan wrote:
 > CREATE TABLE [Query] (
 >  [id] INTEGER  NOT NULL PRIMARY KEY AUTOINCREMENT,
 >  [creatorID] INTEGER  NOT NULL,
 >  [ownerID] INTEGER  NOT NULL,
 >  [type] VARCHAR  NOT NULL
 > );
 > CREATE TABLE [UserQuery] (
 >  [userID] INTEGER  NOT NULL,
 >  [queryID] INTEGER  NOT NULL
 > );
 >
 > SELECT   Q.*
 > FROM Query Q,
 >  UserQuery UQ
 > WHEREUQ.userID = '1' OR
 >  Q.type = 'a';

That's what trying to sanitize your data/queries gets you :)

The query should have been:

SELECT  Q.*
FROMQuery Q,
UserQuery UQ
WHERE   Q.type = 'a' OR
(Q.id = UQ.queryID AND
 UQ.userID = '1');

Basically, I want all queries of type "a" and all the ones that have a record in
UserQuery with userID = 1 (queries shared with the user).
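
(For comparison, the same thing spelled with an explicit LEFT JOIN, which
also behaves when UserQuery is empty; the DISTINCT guards against duplicates
when a query is shared with more than one user:

SELECT DISTINCT Q.*
FROM    Query Q
LEFT JOIN UserQuery UQ ON UQ.queryID = Q.id
WHERE   Q.type = 'a' OR
        UQ.userID = '1';
)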

Related addendum:

From my reading, it appears that MySQL treats "," as a CROSS JOIN and implements it
effectively as an INNER JOIN.
a) Is this correct?
b) Is SQLite acting the same, or treating it as a true CROSS JOIN?

I suspect I'm writing things that I've used with MySQL for a while that are just
not acting the same on SQLite. That's fine, but I do want to know if that is so :)

Thanks yet again.

--
James Sheridan
Tenable Network Security
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Bug or working as designed?

2008-10-29 Thread James Sheridan
CREATE TABLE [Query] (
[id] INTEGER  NOT NULL PRIMARY KEY AUTOINCREMENT,
[creatorID] INTEGER  NOT NULL,
[ownerID] INTEGER  NOT NULL,
[type] VARCHAR  NOT NULL
);
CREATE TABLE [UserQuery] (
[userID] INTEGER  NOT NULL,
[queryID] INTEGER  NOT NULL
);

SELECT  Q.*
FROMQuery Q,
UserQuery UQ
WHERE   UQ.userID = '1' OR
Q.type = 'a';

Query has data and records with type = 'a'.
If UserQuery has NO rows the select returns nothing.
If UserQuery has ANY data, even non-matching data, it returns the expected rows.

Yes, changing to a LEFT JOIN gets around this, but the original question still 
stands :)

Thanks.

--
James Sheridan
Tenable Network Security
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Adding index to table makes SELECT much slower. Why?

2008-10-09 Thread James Pringle
Hi all-

   Thanks for your many suggestions.  I have tried a number of them
and found the following:


   1. If I create the database with depth ordered from least to greatest,
   and then create an index, the searches run an order of magnitude faster than
   without an index.  I.e. the search for matching entries in the database
   concludes in about 9 seconds, instead of the roughly 90 seconds it took before.
   Thanks to "developir" for this suggestion!  (See the sketch after this list.)
   2. Searching via "rowid" makes no difference.
   3. Running VACUUM makes no difference, since this is a write-once, read-many
   database.
   4. The ulimit on my machine is unlimited, so this was not important for me.
   5. Cache size seems to make no difference, and the SQLite documentation
   says it should only be important for DELETEs and UPDATEs.
   6. I like the idea of using RTREE; however, I don't want each of the
   students who use this database to have to recompile the version of SQLite
   that comes with their version of Python.
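
For the record, the idea in point 1 amounts to rebuilding the table sorted on
depth before indexing it; a rough sketch of the idea (names made up here, and
not necessarily exactly how I rebuilt mine):

   CREATE TABLE hydro_sorted AS SELECT * FROM hydro ORDER BY depth;
   CREATE INDEX hydro_sorted_indx ON hydro_sorted (depth);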

Several of you, in particular Ken, suggested that I run SQL queries that
would provide some output that would be helpful in figuring out what was
going on.  The database is now more than fast enough for me, so I would
not do this on my own.  However, if others on the mailing list would like to
see this output in order to make SQLite better (or just for their own
curiosity), please feel free to let me know, and I can make the runs.

I have a small subset of the data (450Mb) which exhibits the same
behaviour.  The data is public (it is from the National Ocean Database), and
so if anyone wants to see it I would be happy to put it on my web server.

Cheers,
and thanks to everyone who helped me!
Jamie Pringle

On Wed, Oct 8, 2008 at 6:50 PM, James Pringle <[EMAIL PROTECTED]> wrote:
> Hi-
>
>  I am new to sqlite, and am having a puzzling problem.  I have read
> that adding an INDEX to a sqlite table can never make a SELECT
> statement slower.  However, in my case, it seems to be making the
> select statement an order of magnitude slower, which is not what I
> wanted!  What could be going on?
>
>  I am calling SQLite from python 2.5 from fink useing the sqlite3
> module on OS X 10.5 with 2Gb of memory.  My data base contains 32
> million lines of ocean hydrographic data, stored on disk, with the
> table created with the following statement:
>
>CREATE TABLE hydro (lat REAL, lon REAL, year INTEGER,
>month INTEGER, day INTEGER, time REAL, cast_id INTEGER,
>depth REAL, T REAL, S REAL, water_depth REAL)
>
> When I perform a SELECT that returns about 0.6% of the data in the
> database (about 200,000 records)
>
>SELECT lat, lon, depth FROM hydro WHERE depth >= 49.01 AND depth <= 50.99
>
> It selects and returns the records in 82 seconds.  I wished to improve
> on this time, so I created an index with the following command:
>
>CREATE INDEX hydro_indx ON hydro (depth)
>
> I then tried the SELECT command again, and read the results into
> memory, and it took 717 seconds!?!
>
> The "depth" field contains many distinct numeric values, with no
> specific value matching more then 0.5% of the data base.  When I DROP
> the index with the command
>
>DROP INDEX hydro_indx
>
> The SELECT time returns to about 80 seconds, confirming that it is the
> index which is slowing things down.  What is going on?  I have
> repeated and confirmed these timings.
>
> I have listened for disk chatter and monitored the system, and it does
> not seem to be thrashing swap, or otherwise becoming unresponsive.
>
> I have two questions:
>
>1) Why is the index making things slower?
>2) How can I make my SELECT statement faster?  The primary
> selection will be done
>   on the "depth" and "water_depth" keys.
>
> I thank you for your time.
>
> Cheers,
> Jamie Pringle
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Adding index to table makes SELECT much slower. Why?

2008-10-08 Thread James Pringle
Hi-

  I am new to sqlite, and am having a puzzling problem.  I have read
that adding an INDEX to a sqlite table can never make a SELECT
statement slower.  However, in my case, it seems to be making the
select statement an order of magnitude slower, which is not what I
wanted!  What could be going on?

  I am calling SQLite from Python 2.5 from Fink using the sqlite3
module on OS X 10.5 with 2 GB of memory.  My database contains 32
million lines of ocean hydrographic data, stored on disk, with the
table created with the following statement:

CREATE TABLE hydro (lat REAL, lon REAL, year INTEGER,
month INTEGER, day INTEGER, time REAL, cast_id INTEGER,
depth REAL, T REAL, S REAL, water_depth REAL)

When I perform a SELECT that returns about 0.6% of the data in the
database (about 200,000 records)

SELECT lat, lon, depth FROM hydro WHERE depth>= 49.01 AND depth <= 50.99

It selects and returns the records in 82 seconds.  I wished to improve
on this time, so I created an index with the following command:

CREATE INDEX hydro_indx ON hydro (depth)

I then tried the SELECT command again, and read the results into
memory, and it took 717 seconds!?!

The "depth" field contains many distinct numeric values, with no
specific value matching more then 0.5% of the data base.  When I DROP
the index with the command

DROP INDEX hydro_indx

The SELECT time returns to about 80 seconds, confirming that it is the
index which is slowing things down.  What is going on?  I have
repeated and confirmed these timings.

I have listened for disk chatter and monitored the system, and it does
not seem to be thrashing swap, or otherwise becoming unresponsive.

I have two questions:

 1) Why is the index making things slower?
 2) How can I make my SELECT statement faster?  The primary selection
    will be done on the "depth" and "water_depth" keys.

I thank you for your time.

Cheers,
Jamie Pringle
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Data Analysis help

2008-10-06 Thread James Mills
Hi folks,

I'm seeking some assistance in analyzing
a rather flat database (imported from Excel / CSV).
I recognize the need for a more structured
database, but can't get my head around
a "good"/"best fit" set of schemas to
represent the data required.

If someone is willing to donate their time
in helping me here, I would greatly
appreciate it.

Please download and see http://shortcircuit.net.au/~prologic/vv2.sql.gz

$ gunzip vv2.sql
$ sqlite3 vv.db < vv2.sql

There are 3 tables:

data - The (flat) database of data.
srates - Expected export (to be CSV) of Supplier Rates
crates - Expected export (to be CSV) of Customer Rates

The srates and crates tables are just there
for analysis purposes. These are actually
supposed to be exported CSV files called:
 * SupplierRates-YYYY-MM-DD.csv
 * CustomerRates-YYYY-MM-DD.csv
as at the time of the export.

I recognize that I may need to store tables
for:
 * rates
 * suppliers
 * customers
 * dialcodes
 * destinations

But I am not even sure if the above
listed tables are suitable, nor can I get my
head around the necessary schemas
required.
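
To make it concrete, the rough shape
I keep sketching is something like the
following (column names are pure guesses
on my part, so treat it as illustrative only):

 CREATE TABLE suppliers (id INTEGER PRIMARY KEY, name TEXT);
 CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
 CREATE TABLE destinations (id INTEGER PRIMARY KEY, name TEXT);
 CREATE TABLE dialcodes (id INTEGER PRIMARY KEY,
                         destination_id INTEGER, dialcode TEXT);
 CREATE TABLE srates (supplier_id INTEGER, destination_id INTEGER,
                      rate REAL, effective_date TEXT);
 CREATE TABLE crates (customer_id INTEGER, destination_id INTEGER,
                      rate REAL, effective_date TEXT);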

I lack good "data analysis skills" :/

Thanks in advance to anyone that may
be willing to help.

Thanks and

cheers
James

-- 
--
-- "Problems are solved by method"
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] NOT LIKE statement

2008-06-25 Thread James
The only wild cards affecting operation of the LIKE operator are '%' and
'_'.
So SQLite doesn't support '[ ]' and '^' in LIKE patterns. Right?
Thank you.
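
(Just to double-check my understanding, something like this in the shell
should show it; the bracket expression is only ever matched literally:

SELECT 'abc1' LIKE 'abc[1-9]';      -- expect 0
SELECT 'abc[1-9]' LIKE 'abc[1-9]';  -- expect 1
)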

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Simon Davies
Sent: Wednesday, June 25, 2008 5:35 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] NOT LIKE statement

You seemed to be expecting that
name NOT LIKE 'InternetGatewayDevice.%.[1-9]' AND name NOT LIKE
'InternetGatewayDevice.%.[^1-9]
would exclude some of the rows you are getting.
This is incorrect. The only wild cards affecting operation of the LIKE
operator are '%' and '_'.

Regards,
Simon


2008/6/25 James <[EMAIL PROTECTED]>:
> Hi, Simon:
>Thanks for help me solve this problem.
>I have study the link you give me. But I still don't understand why
> my original SQL statement can't work. Could you explain in detail?
>    Thank you.
>
> James
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Simon Davies
> Sent: Wednesday, June 25, 2008 4:40 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] NOT LIKE statement
>
> I can get your expected results from the data you have given, as shown
> below:
>
> sqlite> CREATE TABLE tst( name text );
> sqlite>
> sqlite> insert into tst values( 'InternetGatewayDevice.DeviceInfo.' );
> sqlite> insert into tst values( 'RubbishForTesting.DeviceInfo.1' );
> sqlite> insert into tst values(
> 'InternetGatewayDevice.DeviceInfo.AdditionHardwareVersion' );
> sqlite> insert into tst values(
> 'InternetGatewayDevice.DeviceInfo.SerialNumber' );
> sqlite> insert into tst values( 'RubbishForTesting.DeviceInfo.2' );
> sqlite> insert into tst values( 'InternetGatewayDevice.ManagementServer.'
);
> sqlite> insert into tst values(
'InternetGatewayDevice.ManagementServer.URL'
> );
> sqlite> insert into tst values( 'RubbishForTesting.DeviceInfo.3' );
> sqlite> insert into tst values(
> 'InternetGatewayDevice.ManagementServer.Username' );
> sqlite> insert into tst values(
> 'InternetGatewayDevice.ManagementServer.DownloadProgressURL' );
> sqlite> insert into tst values( 'RubbishForTesting.DeviceInfo.4' );
> sqlite>
> sqlite>
> sqlite> select name from tst where name like 'InternetGatewayDevice.%';
> InternetGatewayDevice.DeviceInfo.
> InternetGatewayDevice.DeviceInfo.AdditionHardwareVersion
> InternetGatewayDevice.DeviceInfo.SerialNumber
> InternetGatewayDevice.ManagementServer.
> InternetGatewayDevice.ManagementServer.URL
> InternetGatewayDevice.ManagementServer.Username
> InternetGatewayDevice.ManagementServer.DownloadProgressURL
> sqlite>
> sqlite>
> sqlite> select name from tst where name like 'InternetGatewayDevice.%'
> and name not like 'InternetGatewayDevice.%._%';
> InternetGatewayDevice.DeviceInfo.
> InternetGatewayDevice.ManagementServer.
> sqlite>
>
> Rgds,
> Simon
>
> 2008/6/25 James <[EMAIL PROTECTED]>:
>> Hi,
>> I will read that.
>> But I want to know that is it possible to get the expected result?
>> Thank you.
>>
>>
>> -Original Message-
>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] On Behalf Of Simon Davies
>> Sent: Wednesday, June 25, 2008 3:57 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] NOT LIKE statement
>>
>> Hi James,
>>
>> I think the problem lies with your expectations.
>>
>> Read the section on the LIKE operator in
>> http://www.sqlite.org/lang_expr.html
>>
>> Rgds,
>> Simon
>>
>> 2008/6/25 James <[EMAIL PROTECTED]>:
>>> Hi,
>>>
>>> I execute the SQL statement [SELECT Name FROM tr069;] and get the result
>>>
>>> InternetGatewayDevice.DeviceInfo.
>>>
>>> InternetGatewayDevice.DeviceInfo.AdditionHardwareVersion
>>>
>>> InternetGatewayDevice.DeviceInfo.AdditionalSoftwareVersion
>>>
>>> InternetGatewayDevice.DeviceInfo.Description
>>>
>>> InternetGatewayDevice.DeviceInfo.DeviceLog
>>>
>>> InternetGatewayDevice.DeviceInfo.DeviceStatus
>>>
>>> InternetGatewayDevice.DeviceInfo.EnabledOptions
>>>
>>> InternetGatewayDevice.DeviceInfo.FirstUseDate
>>>
>>> InternetGatewayDevice.DeviceInfo.HardwareVersion
>>>
>>> InternetGatewayDevice.DeviceInfo.SoftwareVersion
>>>
>>> InternetGatewayDevice.DeviceInfo.Manufacturer
>>>
>>> InternetGatewayDevice.DeviceInfo.ManufacturerOUI
>>>
>>> InternetGatewayDe

Re: [sqlite] NOT LIKE statement

2008-06-25 Thread James
Hi, Simon:
Thanks for helping me solve this problem.
I have studied the link you gave me, but I still don't understand why
my original SQL statement doesn't work. Could you explain in detail?
Thank you.

James
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Simon Davies
Sent: Wednesday, June 25, 2008 4:40 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] NOT LIKE statement

I can get your expected results from the data you have given, as shown
below:

sqlite> CREATE TABLE tst( name text );
sqlite>
sqlite> insert into tst values( 'InternetGatewayDevice.DeviceInfo.' );
sqlite> insert into tst values( 'RubbishForTesting.DeviceInfo.1' );
sqlite> insert into tst values(
'InternetGatewayDevice.DeviceInfo.AdditionHardwareVersion' );
sqlite> insert into tst values(
'InternetGatewayDevice.DeviceInfo.SerialNumber' );
sqlite> insert into tst values( 'RubbishForTesting.DeviceInfo.2' );
sqlite> insert into tst values( 'InternetGatewayDevice.ManagementServer.' );
sqlite> insert into tst values( 'InternetGatewayDevice.ManagementServer.URL'
);
sqlite> insert into tst values( 'RubbishForTesting.DeviceInfo.3' );
sqlite> insert into tst values(
'InternetGatewayDevice.ManagementServer.Username' );
sqlite> insert into tst values(
'InternetGatewayDevice.ManagementServer.DownloadProgressURL' );
sqlite> insert into tst values( 'RubbishForTesting.DeviceInfo.4' );
sqlite>
sqlite>
sqlite> select name from tst where name like 'InternetGatewayDevice.%';
InternetGatewayDevice.DeviceInfo.
InternetGatewayDevice.DeviceInfo.AdditionHardwareVersion
InternetGatewayDevice.DeviceInfo.SerialNumber
InternetGatewayDevice.ManagementServer.
InternetGatewayDevice.ManagementServer.URL
InternetGatewayDevice.ManagementServer.Username
InternetGatewayDevice.ManagementServer.DownloadProgressURL
sqlite>
sqlite>
sqlite> select name from tst where name like 'InternetGatewayDevice.%'
and name not like 'InternetGatewayDevice.%._%';
InternetGatewayDevice.DeviceInfo.
InternetGatewayDevice.ManagementServer.
sqlite>

Rgds,
Simon

2008/6/25 James <[EMAIL PROTECTED]>:
> Hi,
> I will read that.
> But I want to know that is it possible to get the expected result?
> Thank you.
>
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Simon Davies
> Sent: Wednesday, June 25, 2008 3:57 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] NOT LIKE statement
>
> Hi James,
>
> I think the problem lies with your expectations.
>
> Read the section on the LIKE operator in
> http://www.sqlite.org/lang_expr.html
>
> Rgds,
> Simon
>
> 2008/6/25 James <[EMAIL PROTECTED]>:
>> Hi,
>>
>> I execute the SQL statement [SELECT Name FROM tr069;] and get the result
>>
>> InternetGatewayDevice.DeviceInfo.
>>
>> InternetGatewayDevice.DeviceInfo.AdditionHardwareVersion
>>
>> InternetGatewayDevice.DeviceInfo.AdditionalSoftwareVersion
>>
>> InternetGatewayDevice.DeviceInfo.Description
>>
>> InternetGatewayDevice.DeviceInfo.DeviceLog
>>
>> InternetGatewayDevice.DeviceInfo.DeviceStatus
>>
>> InternetGatewayDevice.DeviceInfo.EnabledOptions
>>
>> InternetGatewayDevice.DeviceInfo.FirstUseDate
>>
>> InternetGatewayDevice.DeviceInfo.HardwareVersion
>>
>> InternetGatewayDevice.DeviceInfo.SoftwareVersion
>>
>> InternetGatewayDevice.DeviceInfo.Manufacturer
>>
>> InternetGatewayDevice.DeviceInfo.ManufacturerOUI
>>
>> InternetGatewayDevice.DeviceInfo.ModelName
>>
>> InternetGatewayDevice.DeviceInfo.ProductClass
>>
>> InternetGatewayDevice.DeviceInfo.ProvisioningCode
>>
>> InternetGatewayDevice.DeviceInfo.SerialNumber
>>
>> InternetGatewayDevice.DeviceInfo.UpTime
>>
>> InternetGatewayDevice.ManagementServer.
>>
>> InternetGatewayDevice.ManagementServer.URL
>>
>> InternetGatewayDevice.ManagementServer.Username
>>
>> InternetGatewayDevice.ManagementServer.Password
>>
>> InternetGatewayDevice.ManagementServer.PeriodicInformEnable
>>
>> InternetGatewayDevice.ManagementServer.PeriodicInformInterval
>>
>> InternetGatewayDevice.ManagementServer.PeriodicInformTime
>>
>> InternetGatewayDevice.ManagementServer.ParameterKey
>>
>> InternetGatewayDevice.ManagementServer.ConnectionRequestURL
>>
>> InternetGatewayDevice.ManagementServer.ConnectionRequestUsername
>>
>> InternetGatewayDevice.ManagementServer.ConnectionRequestPassword
>>
>> InternetGatewayDevice.ManagementServer.UpgradesManaged
>>
>> InternetGatewayDevice.ManagementServer.KickURL
>>
>> InternetGate

Re: [sqlite] NOT LIKE statement

2008-06-25 Thread James
Hi,
I will read that.
But I want to know: is it possible to get the expected result?
Thank you.


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Simon Davies
Sent: Wednesday, June 25, 2008 3:57 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] NOT LIKE statement

Hi James,

I think the problem lies with your expectations.

Read the section on the LIKE operator in
http://www.sqlite.org/lang_expr.html

Rgds,
Simon

2008/6/25 James <[EMAIL PROTECTED]>:
> Hi,
>
> I execute the SQL statement [SELECT Name FROM tr069;] and get the result
>
> InternetGatewayDevice.DeviceInfo.
>
> InternetGatewayDevice.DeviceInfo.AdditionHardwareVersion
>
> InternetGatewayDevice.DeviceInfo.AdditionalSoftwareVersion
>
> InternetGatewayDevice.DeviceInfo.Description
>
> InternetGatewayDevice.DeviceInfo.DeviceLog
>
> InternetGatewayDevice.DeviceInfo.DeviceStatus
>
> InternetGatewayDevice.DeviceInfo.EnabledOptions
>
> InternetGatewayDevice.DeviceInfo.FirstUseDate
>
> InternetGatewayDevice.DeviceInfo.HardwareVersion
>
> InternetGatewayDevice.DeviceInfo.SoftwareVersion
>
> InternetGatewayDevice.DeviceInfo.Manufacturer
>
> InternetGatewayDevice.DeviceInfo.ManufacturerOUI
>
> InternetGatewayDevice.DeviceInfo.ModelName
>
> InternetGatewayDevice.DeviceInfo.ProductClass
>
> InternetGatewayDevice.DeviceInfo.ProvisioningCode
>
> InternetGatewayDevice.DeviceInfo.SerialNumber
>
> InternetGatewayDevice.DeviceInfo.UpTime
>
> InternetGatewayDevice.ManagementServer.
>
> InternetGatewayDevice.ManagementServer.URL
>
> InternetGatewayDevice.ManagementServer.Username
>
> InternetGatewayDevice.ManagementServer.Password
>
> InternetGatewayDevice.ManagementServer.PeriodicInformEnable
>
> InternetGatewayDevice.ManagementServer.PeriodicInformInterval
>
> InternetGatewayDevice.ManagementServer.PeriodicInformTime
>
> InternetGatewayDevice.ManagementServer.ParameterKey
>
> InternetGatewayDevice.ManagementServer.ConnectionRequestURL
>
> InternetGatewayDevice.ManagementServer.ConnectionRequestUsername
>
> InternetGatewayDevice.ManagementServer.ConnectionRequestPassword
>
> InternetGatewayDevice.ManagementServer.UpgradesManaged
>
> InternetGatewayDevice.ManagementServer.KickURL
>
> InternetGatewayDevice.ManagementServer.DownloadProgressURL
>
>
>
> And I execute the SQL statement [SELECT name FROM tr069 WHERE name LIKE
> 'InternetGatewayDevice.%' AND name NOT LIKE
'InternetGatewayDevice.%.[1-9]'
> AND name NOT LIKE 'InternetGatewayDevice.%.[^1-9] ';] and expect to get
the
> result [InternetGatewayDevice.DeviceInfo.] and
> [InternetGatewayDevice.ManagementServer.].
>
> But I still get the above result. I don't know where the problem is.
>
> Could someone tell me ?
>
> Thank you.
>
>
>
> James Liang
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] NOT LIKE statement

2008-06-25 Thread James
Hi,

I execute the SQL statement [SELECT Name FROM tr069;] and get the result

InternetGatewayDevice.DeviceInfo.

InternetGatewayDevice.DeviceInfo.AdditionHardwareVersion

InternetGatewayDevice.DeviceInfo.AdditionalSoftwareVersion

InternetGatewayDevice.DeviceInfo.Description

InternetGatewayDevice.DeviceInfo.DeviceLog

InternetGatewayDevice.DeviceInfo.DeviceStatus

InternetGatewayDevice.DeviceInfo.EnabledOptions

InternetGatewayDevice.DeviceInfo.FirstUseDate

InternetGatewayDevice.DeviceInfo.HardwareVersion

InternetGatewayDevice.DeviceInfo.SoftwareVersion

InternetGatewayDevice.DeviceInfo.Manufacturer

InternetGatewayDevice.DeviceInfo.ManufacturerOUI

InternetGatewayDevice.DeviceInfo.ModelName

InternetGatewayDevice.DeviceInfo.ProductClass

InternetGatewayDevice.DeviceInfo.ProvisioningCode

InternetGatewayDevice.DeviceInfo.SerialNumber

InternetGatewayDevice.DeviceInfo.UpTime

InternetGatewayDevice.ManagementServer.

InternetGatewayDevice.ManagementServer.URL

InternetGatewayDevice.ManagementServer.Username

InternetGatewayDevice.ManagementServer.Password

InternetGatewayDevice.ManagementServer.PeriodicInformEnable

InternetGatewayDevice.ManagementServer.PeriodicInformInterval

InternetGatewayDevice.ManagementServer.PeriodicInformTime

InternetGatewayDevice.ManagementServer.ParameterKey

InternetGatewayDevice.ManagementServer.ConnectionRequestURL

InternetGatewayDevice.ManagementServer.ConnectionRequestUsername

InternetGatewayDevice.ManagementServer.ConnectionRequestPassword

InternetGatewayDevice.ManagementServer.UpgradesManaged

InternetGatewayDevice.ManagementServer.KickURL

InternetGatewayDevice.ManagementServer.DownloadProgressURL

 

And I execute the SQL statement [SELECT name FROM tr069 WHERE name LIKE
'InternetGatewayDevice.%' AND name NOT LIKE 'InternetGatewayDevice.%.[1-9]'
AND name NOT LIKE 'InternetGatewayDevice.%.[^1-9] ';] and expect to get the
result [InternetGatewayDevice.DeviceInfo.] and
[InternetGatewayDevice.ManagementServer.].

But I still get the above result. I don't know where the problem is. 

Could someone tell me?

Thank you.

 

James Liang

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Corrupted sqlite_sequence table

2008-06-03 Thread James P

I would remove the leading/trailing quotes external to the import of the file, 
using something like sed or gawk.

I couldn't work out how to do this purely using sqlite, however.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-21 Thread James Gregurich

Interesting. Thanks for the tip.

I'll give it some consideration.

-James


On Apr 21, 2008, at 1:07 :50PM, Scott Hess wrote:

> If you create a file on disk and set PRAGMA synchronous = OFF, you
> should get pretty close to the performance of a shared in-memory
> database on most modern desktop operating systems - maybe close enough
> that you won't care to do anything beyond that.  If you further look
> at the recent discussion/patch to disable journaling entirely, you
> should get even closer.  Going this route means you won't have to
> worry so much about the case where someone accidentally pumps 4 gig of
> data into your database and sucks up all RAM.
>
> Keep in mind that if you do these things, then it is quite trivial to
> generate corrupt database files if your app or OS crashes.  So you
> need to arrange to delete database files on app start-up to reset your
> state (an in-memory database wouldn't have that problem!).  On a
> Unix-based system, you may be able to open the database then delete
> the underlying path, but that may not work for however you are sharing
> things.
>
> -scott

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Select TOP n

2008-04-20 Thread James Dodd
Hi,
 
(Newbie to sqlite, some experience with SQL Server 2000). I tried to do a
"SELECT TOP 10 * FROM tab1" and sqlite3 complained. Then I looked at the SQL
syntax page and indeed TOP doesn't seem to be there. Is there a reason for
this and, better still, is there a way to get around it? Or is it there and
I've missed it?
 
TIA,
 
James

 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-20 Thread James Gregurich

Yes. However, CoreData queues up modified managed objects in a managed
object context and then commits them all in one shot, making sure the
serialization is done on the back side.

So, it does basically what someone here recommended earlier. I just  
don't have to write the mechanism myself.


Actually, CoreData is what I intended to use at first. However, I have  
explored the possibility of directly using SQLite instead to keep my  
document readers and their data management cross-platform.

On Apr 20, 2008, at 8:31 AM, Dennis Cote wrote:

> James Gregurich wrote:
>> I think I will go with CoreData on MacOSX and figure out something
>> else to do on Windows later.
>>
>>
>>
> You do know that CoreData uses SQLite for its persistant storage.
>
> Dennis Cote
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-20 Thread James Gregurich

For those who may be interested:

I ran a test with SQLite version: 3.5.8


I tried the scheme described earlier, with each thread sharing a
connection but writing into its own attached in-memory db on that
connection.  It didn't work: all but the first writer thread failed with
a SQLITE_ERROR.
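
(Concretely, the per-thread setup was along these lines; a sketch from
memory, with made-up names:

   ATTACH DATABASE ':memory:' AS writer2;
   -- thread 1 then inserts into tables in main, thread 2 into writer2.*
)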

oh well.

I think I will go with CoreData on MacOSX and figure out something  
else to do on Windows later.


My thanks to all who attempted to provide a solution.


-James
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-19 Thread James Gregurich

OK, I think I see what you are suggesting. You want to provide a
mechanism to create memory files that SQLite thinks are disk
files; that way you could create multiple connections to them.

An interesting idea. However, that may be more effort than I can
justify at this point. I'll think about it.


Yes, I am fluent in STL.


On Apr 19, 2008, at 1:19 PM, Virgilio Alexandre Fornazin wrote:

> Imagine the following scenario (I assume you know the C++ stdlib):
>
> A map of strings (filenames) to in-memory file handlers (the objects that
> will handle the shared memory or heap files).
>
> These file handlers will exist until the process exits and do not receive
> a deletefile() vfs call.
>
> File handlers can synchronize RW-locks using internal mutexes/critical
> sections/semaphores/spin locks, etc.
>
> When you create a new file in the vfs, a new handler is created, assigned to
> that filename, and registered in this map.
>
>
>
>
>
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of James Gregurich
> Sent: sábado, 19 de abril de 2008 17:02
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] multiple writers for in-memory datastore
>
>
> I don't immediately see how that would solve the problem.
>
> The limitation of interest here (based on my perhaps limited
> understanding) is that locking has file-level granularity. I don't
> immediately see how a VFS implementation would allow for changing the
> locking granularity of the overall system.
>
> -James
>
> On Apr 19, 2008, at 12:03 PM, Virgilio Fornazin wrote:
>
>> what about creating a VFS for such task ? Can be accomplished in
>> many ways,
>> using heap memory, shared memory... not so easy to do, but not much
>> complicated too... locking can be provided by multiple-readers
>> single-writers locks strategies, etc...
>>
>> On Sat, Apr 19, 2008 at 2:29 PM, James Gregurich  
>> <[EMAIL PROTECTED]>
>> wrote:
>>
>>>
>>> oh good! That isn't the version that ships with Leopard, but I can
>>> live with deploying my own version as part of my app.
>>>
>>> Will I get the writer parallelism I'm after as long as each thread
>>> writes exclusively into its own attached db?
>>>
>>>
>>> in other words... two bulk insert operations going on simultaneously
>>> on the same connection but each insert operation going into a
>>> different attached in-memory db.
>>>
>>>
>>> On Apr 19, 2008, at 9:20 AM, Dan wrote:
>>>
>>>>
>>>> On Apr 19, 2008, at 6:06 AM, James Gregurich wrote:
>>>>
>>>>>
>>>>> I'll ask this question. The answer is probably "no," but I'll ask
>>>>> it
>>>>> for the sake of completeness.
>>>>>
>>>>>
>>>>> Suppose I created an in-memory db. I use the attach command to
>>>>> associate an additional in-memory db. Suppose I assign the main
>>>>> db to
>>>>> thread 1 and the associated db to thread 2. Can I share the
>>>>> connection
>>>>> across the 2 threads if each thread works exclusively in its own
>>>>> db?
>>>>>
>>>>> I am aware that the connection is generally not threadsafe, but
>>>>> will
>>>>> it work if the two threads don't operate on the same db at the  
>>>>> same
>>>>> time?
>>>>
>>>> As of 3.5, sqlite connections are threadsafe by default. With
>>>> earlier versions, this trick will not work.
>>>>
>>>> Dan.
>>>>
>>>>
>>>> ___
>>>> sqlite-users mailing list
>>>> sqlite-users@sqlite.org
>>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-19 Thread James Gregurich

I don't immediately see how that would solve the problem.

The limitation of interest here (based on my perhaps limited  
understanding) is that locking has file-level granularity. I don't  
immediately see how a VFS implementation would allow for changing the
locking granularity of the overall system.

-James

On Apr 19, 2008, at 12:03 PM, Virgilio Fornazin wrote:

> what about creating a VFS for such task ? Can be accomplished in  
> many ways,
> using heap memory, shared memory... not so easy to do, but not much
> complicated too... locking can be provided by multiple-readers
> single-writers locks strategies, etc...
>
> On Sat, Apr 19, 2008 at 2:29 PM, James Gregurich <[EMAIL PROTECTED]>
> wrote:
>
>>
>> oh good! That isn't the version that ships with Leopard, but I can
>> live with deploying my own version as part of my app.
>>
>> Will I get the writer parallelism I'm after as long as each thread
>> writes exclusively into its own attached db?
>>
>>
>> in other words... two bulk insert operations going on simultaneously
>> on the same connection but each insert operation going into a
>> different attached in-memory db.
>>
>>
>> On Apr 19, 2008, at 9:20 AM, Dan wrote:
>>
>>>
>>> On Apr 19, 2008, at 6:06 AM, James Gregurich wrote:
>>>
>>>>
>>>> I'll ask this question. The answer is probably "no," but I'll ask  
>>>> it
>>>> for the sake of completeness.
>>>>
>>>>
>>>> Suppose I created an in-memory db. I use the attach command to
>>>> associate an additional in-memory db. Suppose I assign the main  
>>>> db to
>>>> thread 1 and the associated db to thread 2. Can I share the
>>>> connection
>>>> across the 2 threads if each thread works exclusively in its own  
>>>> db?
>>>>
>>>> I am aware that the connection is generally not threadsafe, but  
>>>> will
>>>> it work if the two threads don't operate on the same db at the same
>>>> time?
>>>
>>> As of 3.5, sqlite connections are threadsafe by default. With
>>> earlier versions, this trick will not work.
>>>
>>> Dan.
>>>
>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-19 Thread James Gregurich

oh good! That isn't the version that ships with Leopard, but I can  
live with deploying my own version as part of my app.

Will I get the writer parallelism I'm after as long as each thread
writes exclusively into its own attached db?


in other words... two bulk insert operations going on simultaneously
on the same connection but each insert operation going into a  
different attached in-memory db.


On Apr 19, 2008, at 9:20 AM, Dan wrote:

>
> On Apr 19, 2008, at 6:06 AM, James Gregurich wrote:
>
>>
>> I'll ask this question. The answer is probably "no," but I'll ask it
>> for the sake of completeness.
>>
>>
>> Suppose I created an in-memory db. I use the attach command to
>> associate an additional in-memory db. Suppose I assign the main db to
>> thread 1 and the associated db to thread 2. Can I share the  
>> connection
>> across the 2 threads if each thread works exclusively in its own db?
>>
>> I am aware that the connection is generally not threadsafe, but will
>> it work if the two threads don't operate on the same db at the same
>> time?
>
> As of 3.5, sqlite connections are threadsafe by default. With
> earlier versions, this trick will not work.
>
> Dan.
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-18 Thread James Gregurich

I'll ask this question. The answer is probably "no," but I'll ask it  
for the sake of completeness.


Suppose I created an in-memory db. I use the attach command to  
associate an additional in-memory db. Suppose I assign the main db to  
thread 1 and the associated db to thread 2. Can I share the connection  
across the 2 threads if each thread works exclusively in its own db?

I am aware that the connection is generally not threadsafe, but will  
it work if the two threads don't operate on the same db at the same  
time?


thanks,
James


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-18 Thread James Gregurich

On Apr 18, 2008, at 2:33:32 PM, Dennis Cote wrote:
>
> To share an attached database the threads must be able to name it, and
> this is only possible with a file database.

you could change the open() function to be able to assign a name to an  
in-memory db and then keep a mapping of all the names internally. You  
could also provide an API call that takes an existing connection to an  
in-memory store and attaches its db to another pre-existing db on  
another connection. Seems like the underlying foundation is already  
there to do it. But, I admit, I have no knowledge of the  
implementation details of SQLite.



>
> Perhaps you can replace the proprietary file format with a permanent
> SQLite database file (and then again maybe not).

We don't control those formats. They are controlled by certain large,
well-known software companies. We just reverse-engineer their formats.


> You could implement a server thread that accesses a single memory
> database which accepts commands from, and passes the results back to,
> your other threads as John suggested. You will have to provide some  
> form
> of resource management for the shared resource, whether it is a shared
> memory database, file, or something else.

Unless I misunderstand the way the SQLite API works, that isn't really
practical.

My task is to read a chunk of data, parse it, and insert a record into a
table (a number of records in a loop, of course). To do that, I have to
prepare a statement and then bind data values to the statement in a loop.

Once I begin the transaction and prepare the statement, the entire db
is locked up for the duration of the bulk insert. If that is true,
then I'll lose all opportunity for parallelism.

If I have to write my own temporary storage containers to hold data  
while it waits to be committed by a datastore thread, then I might as  
well just write my own containers and be done with the task rather  
than going to the expense of using a SQL data store.

One reason to use SQLite is that it would take care of the  
synchronization of multiple writers and readers for me. If I have to  
write all that myself, then why bother with SQLite?


One of my options is to use CoreData on the Macintosh. That will do
what I want, as it caches record inserts and does one big commit... and
it handles the synchronization. However, what do I do with the
lovable Windows platform?

Oh well. I'll figure it all out somehow.





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-18 Thread James Gregurich

On Apr 18, 2008, at 1:25:36 PM, Dennis Cote wrote:

> James Gregurich wrote:
>>
>> suppose I create a temporary db file on disk. Each task ( a thread)
>> opens a connection to the temp file and attaches an in-memory db to
>> it.
>
> You will have to open the memory database and attach the db file since
> SQLite can't attach to a memory database.

is this information wrong?  
http://www.blitzbasic.com/Community/posts.php?topic=60981

>
>
> Why have you proposed to use a temporary database file? Can this  
> data be
> destroyed between executions?

yes. the nature of the application is such that the data is loaded  
from a proprietary file format, processed, and presented to the user.  
There is no need to store the data back on disk.

I'm interested in a flexible, convenient, in-memory datastore. I  
thought it was going to work but was stopped dead in my tracks when I  
realized I couldn't open multiple connections on an in-memory db.


>
> The way I have suggested, the readers only open the database file.  
> They
> can read as long as no update is in progress. The updates will be
> batched into the independent memory database. During an update the
> database file will be locked, so readers will have to wait.

great. that will get me concurrency at the expense of unnecessary disk  
I/O. I suppose I will have to decide if using sqlite is worth the price.

I may just fall back to defining a table as a std::vector<> of  
std::map<> entries and skip the whole idea of using an embedded db  
engine.

I'll have to think about all of this.


thanks,
James
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-18 Thread James Gregurich


I'm working on a commercial, boxed, desktop product. I can't be  
creating new mounted disks on a customer's system every time he uses  
my application.



How about this...


suppose I create a temporary db file on disk. Each task ( a thread)  
opens a connection to the temp file and attaches an in-memory db to  
it. The task then writes to tables in the attached in-memory db. When  
the task is done, the tables in the in-memory db are merged into the  
disk file and the attached in-memory db is closed. reader connections  
would only read from the disk file.

Will such a design give me full concurrency on my writer tasks until  
they are ready to flush their results to the disk file? As I  
understand it, the attached db won't be locked by reading done on the  
disk file.
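
(For concreteness, a rough SQL sketch of the per-writer flow described
above. The table names are illustrative only, and ATTACH ':memory:' is
assumed here to create a fresh in-memory database on that connection.)

  -- on a connection already opened on the shared temp file:
  ATTACH DATABASE ':memory:' AS work;
  CREATE TABLE work.results (id INTEGER, payload TEXT);

  -- bulk inserts go into the attached in-memory db, not the disk file
  INSERT INTO work.results VALUES (1, 'parsed chunk');

  -- when the task is done, merge into the disk file in one short transaction
  BEGIN;
  INSERT INTO main.results SELECT * FROM work.results;
  COMMIT;
  DETACH DATABASE work;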


thanks,
James



On Apr 18, 2008, at 10:33:39 AM, Dennis Cote wrote:

> James Gregurich wrote:
>> If the sqlite statement had a temporary storage area so that I could
>> load up a bunch of rows and then commit them in one shot so that the
>> lock on the db was not held very long by a single transaction, that
>> would probably work.
>>
>
> Using a RAM disk you could insert rows into one database as they are
> generated. This would be your batch.
>
> Then periodically attach that database to the main database and copy
> all the new rows to the main DB table in a single transaction.
>
> attach "batch.db" as batch;
> begin;
> insert into main.tbl select * from batch.tbl;
> delete from batch.tbl;
> commit;
> detach batch;
>
> This will only lock the main database for a short period while it is
> updated.
>
>> However, my reading of the documentation leads me to believe that
>> using the bind functions in a loop with an insert statement will lock
>> the entire in-memory DB until the bulk insert is donewhich  
>> means I
>> would get no benefit from concurrency.
>>
>> Is this correct?
>
> Readers are blocked by a writer until the write transaction is  
> committed.
>
>>
>> BTW: does the question I posed on modifying the library to add a
>> feature to attach an in-memory data store to another one via the C
>> API belong on the sqlite-dev list?
>>
>
> That list doesn't get much traffic. Your question was fine here.
>
> It would be fairly involved to change the handling of in memory
> databases. They don't have names to use with the attach command, and
> they don't do any locking since they can only be accessed from a  
> single
> connection currently. The locking in SQLite is done with POSIX file
> locks which can't be used for in memory databases since they aren't
> files. You're welcome to try of course, but it seems like a lot of  
> work
> for little return when there are other ways to do what you want.
>
> HTH
> Dennis Cote
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] multiple writers for in-memory datastore

2008-04-18 Thread James Gregurich

If the sqlite statement had a temporary storage area so that I could  
load up a bunch of rows and then commit them in one shot so that the  
lock on the db was not held very long by a single transaction, that  
would probably work.

However, my reading of the documentation leads me to believe that  
using the bind functions in a loop with an insert statement will lock  
the entire in-memory DB until the bulk insert is done... which means I
would get no benefit from concurrency.

Is this correct?

BTW: does the question I posed on modifying the library to add a  
feature to attach an in-memory data store to another one via the C
API belong on the sqlite-dev list?

thanks,
James

On Apr 18, 2008, at 9:43:22 AM, John Stanton wrote:

> Just use a thread as a DB handler.  Queue transactions to it using  
> some
> IPC mechanism like a message queue or named pipe.  Another way would  
> be
> to synchronize access to the DB handle using a mutex.
>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] multiple writers for in-memory datastore

2008-04-18 Thread James Gregurich
hi!


I need to set up multiple writers to an in-memory datastore. I just  
discovered that you can't have more than one connection to an in- 
memory store.

I can give each task its own independent datastore if there is a way I  
can merge the contents of each store into a central store. Is there a  
way to attach an existing  in-memory store to another in-memory store?

If not, how hard would it be to modify the sqlite source to allow such  
an attachment to be made given the two connection pointers to two  
independent stores?


thanks,
James Gregurich
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3_get_table only get 16 rows

2008-03-11 Thread James
Sorry, I have already solved it!

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of James
Sent: Tuesday, March 11, 2008 4:34 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] sqlite3_get_table only get 16 rows


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] sqlite3_get_table only get 16 rows

2008-03-11 Thread James
Hi,

  I use command-line to query the table:

# sqlite3 listtable

 sqlite> select Name,Value from TblDeviceInfo;

 AdditionalHardwareVersion

AdditionalSoftwareVersion

Description

DeviceLog

DeviceStatus

EnabledOptions

FirstUseDate

HardwareVersion01B

Manufacturer   III

ManufacturerOUI001A2A

ModelName

ProductClass   Speedstrea

ProvisioningCode   000.000.00

SerialNumber   A1

SoftwareVersion1.09.000

UpTime



  I write a program:

 
===

#include <stdio.h>
#include <stdlib.h>
#include <sqlite3.h>

int main(int argc, char **argv){
  sqlite3 *db;
  char *zErrMsg = 0;
  int rc;
  char **result;
  int nrow, ncol, i;

  rc = sqlite3_open("listtable", &db);
  if( rc ){
    fprintf(stderr, "Can't open database: %s\n", sqlite3_errmsg(db));
    sqlite3_close(db);
    exit(1);
  }

  /* result comes back as a flat array of (nrow+1)*ncol strings,
     with row 0 holding the column names */
  rc = sqlite3_get_table(db, "select Name,Value from TblDeviceInfo",
                         &result, &nrow, &ncol, &zErrMsg);
  if( rc!=SQLITE_OK ){
    fprintf(stderr, "SQL error: %s\n", zErrMsg);
    sqlite3_free(zErrMsg);
    sqlite3_free_table(result);
    return -1;
  }

  fprintf(stderr, "Row = %d, Col = %d\n", nrow, ncol);
  for (i = 0 ; i < nrow ; ++i)
    fprintf(stderr,"%s\n", result[i]);

  sqlite3_free_table(result);
  sqlite3_close(db);

  return 0;
}

===

The output is

Row = 16, Col = 2

Name

Value

AdditionalHardwareVersion



AdditionalSoftwareVersion

 

Description

 

DeviceLog

 

DeviceStatus

 

EnabledOptions

 

FirstUseDate

 

I don't know why I only get 16 rows of data. I expect to get 34 rows
of data.

What should I do?

Thank you.

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Odd problem with select on large table

2008-03-07 Thread James Kimble

On the command line:

sqlite3 test.db 'select name from PerfTest1 where name = "key5000"'

does work. I know this because if I query for "key500" I get back that 
row.  It's not blanks either because if I
do:

 where name like "key1%"

I only get rows prior to "key199". Very weird. I did try the other 
alternatives (single quotes and piping into
sqlite3 test.db) but got the same result.

I'll try changing the varchar to TEXT. See if that makes a difference
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Performance

2008-03-06 Thread James Kimble
>
>
>That sounds like good advice. I'll do that.
>> 
>> Working with flash in this way is going to be a challenge. With limited 
>> number of writes in a lifetime (this device needs to last approx 20
>> years...) I will have to make some major design decisions around how
>> I handle the writes.
>  
>

>> How important is the persisent data? Is it kept for audit, statistical 
>> analysis, what? Basically, can you afford to lose it, or at least a subset of
>> it? If so , then I'd say maintain the data in an in-memory database, and 
>> write
>> out the data to disk (using safe synchronous writes) at whatever intervals
>> you desire.

>> I say use safe synchronous writes, as recovery may be an issue if you don't
>> write safely. Not what you need on an embedded system where user interaction
>> may be required.

>> Christian


Most of my data is not persistent. I am thinking of keeping everything RAM 
based with a 
write out only done opportunistically. Values need to be retained for logging 
purposes but 
configuration settings are very stable so they can be saved only when changed.

I have another question and I think I know the answer but just to be sure...  I 
created
a 40 column table with 10,000 rows as a test database for a reader and a writer 
process
to bang on (performance proof). 

The table is as so:

sqlite3 test.db 'create table PerfTest1 (name varchar(20),  value1 int,
value2 int, value3 int, value4 int, value5 int, value6 int, value7 int,
value8 int, value9 int, value10 int, value11 int, value12 int, value13 int,
value14 int, value15 int, value16 int, value17 int, value18 int, value19 int,
value20 int, value21 int, value22 int, value23 int, value24 int, value25 int,
value26 int, value27 int, value28 int, value29 int, value30 int, value31 int,
value32 int, value33 int, value34 int, value35 int, value36 int, value37 int,
value38 int, value39 int)'


The data is repetitive junk. Just: "key1", 1, 2, 3, ...  "key2", 1, 2, 3, ...

What's driving me mad is that when I do a select from the command line like so:

sqlite3 test.db 'select name from PerfTest1 where name like "key1%"'

The rows returned stop at "key199". No select will return a row past that 
point. However if I 
do a simple:

sqlite3 test.db 'select name from PerfTest1' 

and just let it go, it prints all 10,000 rows!! Is this due to the type of query
preparation done
from the command line interface? Maybe it limits the size of something? That
doesn't make a lot of
sense either though because if I query the specific row I want it returns 
nothing.

sqlite3 test.db 'select name from PerfTest1 where name = "key1000"' 

Returns 0 rows. Any idea what's going on there?  It also took about 15 minutes
for the .import
command to insert the 10,000 rows into the table from a text file. That was a
little scary.
(Sorry for the length. Probably should have started another thread)

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Performance

2008-03-05 Thread James Kimble

> One thing I can highly recommend on embedded systems, especially flash
> based ones, is turn pragma synchronous to off. Having sqlite write every
> record modification to the flash, is a stunningly expensive process,
> even when it's all encapsulated in a large transaction. Let linux handle
> the synchronisation and write caching for you. A lot less robust, but,
> the benefits more than outweighed the cost. If you need guaranteed write
> to disk, then perform the synch yourself.

> Cost vs benefit and all that guff.


That sounds like good advice. I'll do that.

Working with flash in this way is going to be a challenge. With limited 
number of writes in
a lifetime (this device needs to last approx 20 years...) I will have to 
make some major design
decisions around how I handle the writes.
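
(To make the advice above concrete, a minimal sketch of the two knobs
being discussed, synchronous off plus batching inserts into a single
transaction, using a hypothetical table name:)

  PRAGMA synchronous = OFF;   -- let the OS schedule the actual writes

  BEGIN;
  INSERT INTO readings VALUES (1, 42);
  INSERT INTO readings VALUES (2, 43);
  -- ... queue up the whole batch ...
  COMMIT;                     -- one write burst per batch instead of per row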
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Performance......

2008-03-05 Thread James Kimble
>
>
>I'm in the process of architecting the software for an embedded Linux system
>> that functions as a remote and local user interface to a control system.
>> There
>> will be a lot of analog (fast) data arriving via SPI bus and I'm thinking of
>> using SQLite to store this data in a well organized and easy to access
>> manner.
>> 
>> My main concern is performance. Has anyone had any similar application
>> experience they could comment on? I want to be able to insert data arriving
>> on the SPI bus and then query the data to update a GUI at a very high rate
>> (less than 250Ms). This is not real time so 250Ms is desirable but does not
>> have to be guaranteed.
>> 
>> Any thoughts or experience would be appreciated...
>  
>


>> We'd need more details for definitive answers, such as whether you're
>> using disk or FLASH based storage, your data and schema format.

>> Some things to consider:
>> - Batch inserts. Given your 250ms update requirement, you could perhaps
>>   batch data 4 times a second. That'll give you a very high insert rate.
>> - If using disk based storage, using ext3 with "data=journal" mount option
>>  The journal can be written and sync'ed very quickly.
>> - Experiment with indexing. Indexes will slow insertions, but improve
>>  querying.
>> - If the above is still too slow, and you're happy risking the database in
>>   the event of a system crash, then you can turn off synchronous updates.


Thanks for your reply.

I'm still in the planning stage so I don't have too many details. I'm working
on a test database right now that should answer most of my questions. I can play
with that as far as optimization goes if it's not fast enough. Batched inserts are a
good idea though; they give the system a little more recovery time between inserts.

I will be on a flash file system though and that's totally new to me with a DB.
I've worked with big DB's on big hardware but never anything like this. I'm 
fairly
new to embedded work. SQLite just seems to be a good fit for this application.
Lots of data that needs to be stored and queried in various different ways. The last
time my company designed an application like this they invented their own DB and
it was (is) a disaster. I've already got it compiled and installed on my
platform, and I've been working with some embedded SQL (C) today. It's pretty easy
to use considering how little it is. Never seen anything quite like it. Very impressive.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Performance

2008-03-05 Thread James Kimble


>I'm in the process of architecting the software for an embedded Linux system
>> that functions as a remote and local user interface to a control system.
>> There
>> will be a lot of analog (fast) data arriving via SPI bus and I'm thinking of
>> using SQLite to store this data in a well organized and easy to access
>> manner.
>> 
>> My main concern is performance. Has anyone had any similar application
>> experience they could comment on? I want to be able to insert data arriving
>> on the SPI bus and then query the data to update a GUI at a very high rate
>> (less than 250Ms). This is not real time so 250Ms is desirable but does not
>> have to be guaranteed.
>> 
>> Any thoughts or experience would be appreciated...
>> 
>  
>

>> You arn't clear about your update rate, or the amount of data you need 
>> to extract for each GUI update. You also haven't said where you will be 
>> storing the database, in RAM, on a hard disk, or in flash.

>> If the data is only updated every 250 ms, i.e. at a 4 Hz rate, it is not 
>> really very fast, and SQLite should have no problem keeping up even on 
>> an embedded processor.

>> If the GUI has to display only the last value every 250 ms, then you 
>> should have no trouble. If you are trying to redraw a scrolling graph of 
>> the last 400 samples every 250 ms you may have issues to be concerned about.

>> Dennis Cote


Thanks for your reply.

The one thing that scares me is that this will be residing on a flash 
based file system
and not a disk. I'm hoping that's not too much of a performance hit. 
Right now I've
got it running out of RAM and that may be fine 99% of the time but I 
will need to
write it back to flash at some set interval (minutes) to prevent loss of 
data in the case
of a power failure. Not sure how to mix modes like that yet.

The biggest thing on the GUI will be some power level meters. Very slow 
moving
and mostly fixed data. Possibly a spectrum diagram but again, slow 
moving. This
is a control system for commercial TV/Radio transmitter systems.

I didn't give many details because I haven't implemented anything yet. I 
am still
considering my options. I think SQLite will work; I've just got to test 
it. I'm in the
process of setting up a test database and tables that I can work on in a 
way similar to
what I will need. That's probably the best way to be sure in any case. 
If it doesn't
perform fast enough I will work on optimizations. I've dealt with some 
big DB's in the
past (Oracle, Informix, Ingres) but never one quite like this and never 
on an embedded
system. Very new to me.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


RE: [sqlite] Using LIKE to check the first digits?

2008-01-31 Thread James Dennett
> -Original Message-
> From: P Kishor [mailto:[EMAIL PROTECTED]
> Sent: Thursday, January 31, 2008 3:35 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Using LIKE to check the first digits?
> 
> On 1/31/08, Gilles <[EMAIL PROTECTED]> wrote:
> > Hello
> >
> > I'm no SQL guru, and need to look up phone numbers in a SQLite
database
> > that start with certain digits. Some customers use
> > http://en.wikipedia.org/wiki/Direct_Inward_Dialing, which means that
the
> > first part is common to all the numbers assigned to this customer,
so I
> > don't need to enter every single one of them, and just assign the
> prefix,
> > eg. "123" matches "1230001", "1230002", etc.
> >
> > Should I use the LIKE command for this? Does someone have an example
to
> do
> > this?
> >
> 
> WHERE col LIKE '123%'
> 
> or WHERE substr(col,1, 3) = '123'

One note:

The optimizer has a decent chance of using an index for LIKE '123%' but
I'd be surprised (and impressed) if it looks inside function calls such
as substr for opportunities to use indexes.

-- James
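
(A small illustration of the point above. The table and index names are
hypothetical, and whether the LIKE form really uses the index depends on
SQLite's LIKE-optimization rules:)

  CREATE INDEX idx_phone ON calls(phone);

  SELECT * FROM calls WHERE phone LIKE '123%';            -- index may be usable
  SELECT * FROM calls WHERE substr(phone, 1, 3) = '123';  -- cannot use idx_phone for the lookup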


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] Solaris bus error

2008-01-31 Thread James Dennett
> -Original Message-
> From: Ken [mailto:[EMAIL PROTECTED]
> Sent: Thursday, January 31, 2008 2:55 PM
> To: sqlite
> Subject: [sqlite] Solaris bus error
> 
> 
> After addressing the -lrt sched yield on solaris.
> 
> make test resulted in:
> 
> async3-1.0... Ok
> async3-1.1...make: *** [test] Bus Error (core dumped)
> 
> Any ideas?

Getting a stack trace out of that core file with a debugger would seem
to be the next step, and seeing the full text output if there is any
more.

-- James


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] Solaris make test compilation error

2008-01-31 Thread James Dennett
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Thursday, January 31, 2008 2:23 PM
> To: sqlite-users@sqlite.org; [EMAIL PROTECTED]
> Subject: Re: [sqlite] Solaris make test compilation error
> 
> Ken <[EMAIL PROTECTED]> wrote:
> > Solaris 5.8 (solars 8)
> > make test
> >
> > Undefined   first referenced
> >  symbol in file
> > sched_yield /var/tmp//cckDMcyL.o
> > ld: fatal: Symbol referencing errors. No output written to
> .libs/testfixture
> > collect2: ld returned 1 exit status
> > make: *** [testfixture] Error 1
> >
> 
> So how does a thread yield its timeslice on solaris?

sched_yield is there, it just needs -lrt as it is considered part of the
"Realtime Library Functions" according to its manpage.

-- James

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] Odd issue when SELECT querying

2008-01-30 Thread James Dennett
David Hautbois [mailto:[EMAIL PROTECTED] wrote:
> 
> I found the solution :
> I replaced this line :
> configvalue = (char *) sqlite3_column_text(stmt, 0);
> by
> configvalue = g_strdup((gchar *) sqlite3_column_text(stmt, 0));
> 
> and the configvalue type : gchar
> 
> Now it works !!
> 
> A newbie error...

Now you just need to watch out for memory leaks.

-- James


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] Odd issue when SELECT querying

2008-01-30 Thread James Dennett
David Hautbois wrote:

> Hi
> I have an odd issue.
> 
> 
> My function :
> 
> char * get_config_value (sqlite3 * db, char * config_name) {

[...]
 
> configvalue = (char *) sqlite3_column_text(stmt, 0);

[...]

> return configvalue;
> }
> Why the variable content changes ??
> 
> Why the variable configvalue has not the same content ??

The variable has the same content (a pointer), but the pointer is
invalid by the time your function returns it.  You need to copy the
*string*, not just a pointer to it.

This is essentially the same issue that Igor described when he wrote:

> Strings passed to the callback are valid only within the callback. As 
> soon as the callback returns, the memory may be deallocated or reused 
> for other purposes. If the callback wants to keep some strings around 
> beyond a single call, it should allocate its own memory and copy the 
> value over.

It's vitally important when using C libraries that you read the
documentation and avoid making any assumptions about the lifetimes of
objects referenced by pointers.

C++ wrappers can return std::string objects and avoid this issue (though
even in C++ it's important to consider validity/lifetime issues for both
pointers and iterators).

-- James


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] How to specify regular expression in a query?

2008-01-28 Thread James Dennett
> -Original Message-
> From: Nicolas Williams [mailto:[EMAIL PROTECTED]
> Sent: Monday, January 28, 2008 10:35 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] How to specify regular expression in a query?
> 
> On Mon, Jan 28, 2008 at 06:22:08PM +0100, Ralf Junker wrote:
> > I believe that this API would also ease implementations of Unicode
> > LIKE and GLOB.
> 
> That's what I was thinking of.  The Unicode extensions work by
> redefining the like, glob and regexp functions, and by adding
> collations.  But surely the existing user-defined functions interface
> does not allow for this sort of optimization. 

Right, which is why this conversation is about extending that interface
:)

> Or did I miss something?

No, I think you're in "violent agreement".

-- James


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] order by issue?

2008-01-23 Thread James Dennett
> -Original Message-
> From: Ken [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, January 23, 2008 4:10 PM
> To: sqlite
> Subject: [sqlite] order by issue?
> 
> I'm not sure if this a bug or if working as intended:
> 
> the order by b,a seems to cause B to use asc.. Instead of the desc as
> written in the order by statement.
> 
>  create table tst (a integer, b integer );
> sqlite> insert into tst values(1,1);
> sqlite> insert into tst values(1,2);
> sqlite> insert into tst values(1,3);
> sqlite> insert into tst values(2,3);
> sqlite> insert into tst values(2,2);
> sqlite> insert into tst values(2,1);
> sqlite> select * from tst order by b, a desc;
> a|b
> 2|1
> 1|1
> 2|2
> 1|2
> 2|3
> 1|3

Could be that I'm too tired again, but that looks to me like you asked
for order primarily by b (with the default, ascending, order) and
secondarily by a (with inverted/descending order).

select * from tst order by b desc, a desc;

might be what you were wanting?

-- James


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] Query problem

2008-01-23 Thread James Dennett
> -Original Message-
> From: Dennis Cote [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, January 23, 2008 3:08 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Query problem
> 
> James Dennett wrote:
> >
> > Square brackets don't "escape" things that way: [[] is a character
> > class containing only the character '['.  [][], however, is a character
> > class containing two characters.  The special rule is that the first
> > character after the opening '[' is part of the class even if it's a
> > ']' or a '-'.
> >
> >
> James,
> 
> I don't think it is that simple.
> 
> What happens if, as in the OP, the character set is simply []? Is this
> an empty character set, or is a set containing a ] but missing the
> terminal ]?

That's not a valid specification of a character class.

> According to your special rule above, it would be the latter, in which
> case it should generate some kind of error message reporting the
> unterminated character set.

That would be appropriate; the alternative (which bash appears to use)
is to take it as two literal characters.

> If you say it is supposed to be greedy and include all characters it
> can until the terminal ] before examining the set of characters it
> contains, then this would be an empty character set.

No, because there *is* no terminal ].  (The first character after the
opening '[' is *never* the end of the character class: that's exactly
the special rule.)

> What does an empty set match?

Nothing, but you'd have to specify exclusion of every character.

> Normally, a set matches any of the contained characters, but an empty
> set can't match any character, so any pattern containing the empty set
> would always fail.  So, is an empty set a special case, that matches
> the literal characters [] instead?

That's not an empty character class; it's not a character class
at all.

The POSIX/Single Unix Spec documentation for fnmatch might be a good
source, but I agree with the idea that SQLite should just document what
it does rather than assuming that there's a universal standard for
globbing.

-- James


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] Query problem

2008-01-23 Thread James Dennett
> -Original Message-
> From: Dennis Cote [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, January 23, 2008 2:22 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Query problem
> 
> [EMAIL PROTECTED] wrote:
> >
> > You
> > can escape characters using [..]. To match a * anywhere in a string,
> > for example:
> >
> > x GLOB '*[*]*'
> >
> > The [..] pattern must contain at least one internal character. So
> > to match a "]" you can use the pattern
> >
> > x GLOB '*[]]*'
> >
> So to match the OP's original string he would need to use
> '*1[[][]]1.txt' as his pattern?
> 
> With each of the square brackets to be matched escaped by a pair of
> enclosing square brackets.

Square brackets don't "escape" things that way: [[] is a character class
containing only the character '['.  [][], however, is a character class
containing two characters.  The special rule is that the first character
after the opening '[' is part of the class even if it's a ']' or a '-'.

-- James
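
(A few concrete cases of those rules, assuming the filename in question
contained the literal sequence 1[]1.txt, as the pattern quoted earlier
suggests:)

  SELECT 'has*star' GLOB '*[*]*';            -- 1: [*] matches a literal '*'
  SELECT '1[]1.txt' GLOB '*[]]*';            -- 1: []] matches a literal ']'
  SELECT '1[]1.txt' GLOB '*1[[][]]1.txt';    -- 1: [[] and []] match '[' and ']'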


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] Query problem

2008-01-23 Thread James Dennett
DRH wrote:
> 
> Experiments using bash indicate that either ^ or ! is accepted
> as the negation of a character set.  Hence,
> 
> ls -d [^tu]*
> ls -d [!tu]*
> 
> both return the same thing - a list of all files and directories
> in the current directory whose names do not begin with "t" or "u".
> 
> SQLite only supports ^, not !.  I wonder if this is something I
> should change?  It would not be much trouble to get GLOB to support
> both, must like the globber in bash.
> 
> Anybody have an old Bourne shell around?  An authentic C-shell?
> What do they do?

C shell on Solaris 9 gives an error on
  echo [!c]*
as it considers the !c to be an event specification.  Tcsh the same.

Ksh treats
  echo [^c]*
the same as
  echo c*
but does "the right thing" with
  echo [!c]*

bash treats the two the same (as all names starting with a character
other than lower-case 'c').

-- James


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] How to repaire Sqlite3 database

2008-01-22 Thread James Mao
Hi,


> My sqlite3 database is damaged; is there any way to repair it?
> The database file is about 300M bytes.
>
> I tried sqlite3 my.db .dump > db.dump
> But it only dumps the SQL to create the database, no data at all.
>
> I guess the reason it is damaged is that the disk was full.
>
>
> Thanks in advance!
> James
>



RE: [sqlite] Re: Access from Multiple Processes

2008-01-21 Thread James Dennett
(Top-posting and overquoting fixed.)

On Monday, January 21, 2008 1:57 PM, Mark Riehl wrote:
>
> On Jan 21, 2008 4:48 PM, Igor Tandetnik <[EMAIL PROTECTED]> wrote:
> > Mark Riehl <[EMAIL PROTECTED]>
wrote:
> > > For some reason I haven't been able to track down, SQLite is
creating
> > > a journal file after the C++ process connects, however, the
journal
> > > file doesn't go away.
> >
> > Are you, by any chance, opening a transaction and keeping it open?
> >
> > Igor Tandetnik
>
> I'm just executing SQL insert statements.  I'm not using the BEGIN
> TRANSACTION; ... END TRANSACTION; wrappers around the inserts.
> 
> I thought that there was an implied BEGIN ...COMMIT around every
> INSERT statement?

There is.  And if your code has no active transactions, SQLite allows
other processes to access the database.  So something odd is happening,
but I don't think you've shared enough information for this list to
guess what.

-- James


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] sqlite3 performace

2008-01-17 Thread James Dennett
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of
> Philip Nick
> Sent: Thursday, January 17, 2008 1:48 PM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] sqlite3 performace
> 
> Greetings,
> 
> Currently I am using sqlite3 in a multi-process/multi-threaded server
> setting.
> I use a Mutex to ensure only one process/thread can access the
database at
> one time.
> 
> The current flow of events:
> Get Mutex
> Open Database connection
> Run Query
> Close Database connection
> Release Mutex
> 
> This seems to work well except I have noticed some performance issue
when
> the database grows beyond a MB.

Why not move the Open/Close outside of the mutex, hold a connection
open, and re-use it for all queries?  Otherwise you're making SQLite
reload the schema definition every time you perform a query, as I
understand it.

-- James


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Sqlite File Access Restriction

2008-01-11 Thread Noel James
On Jan 9, 2008 1:12 PM, Dean Brotzel <[EMAIL PROTECTED]>
wrote:

>
> 1) Sqlite database file access restriction: Is there a built-in or
> preferred method to block other processes from writing or even accessing
> a database file when its opened first by another process. I can get this
> to work by having the first process to open the file issue a BEGIN
> Exclusion or BEGIN Restricted but there is always a chance, that right
> after I commit and go to issue another BEGIN, a 2^nd progress can grab
> and hold the file. I would like for the 2^nd process to see if another
> process has control of the file and warn/adapt based on that. Definitely
> NO multiple writers.
>


That is absolutely the only way to get that kind of 'useful' lock on Windows.
End of story!


> 2) From the documentation I see time and time again "Stay away from
> NFS". Is this the NFS designed by Sun or all Network file systems in
> general? I would like (1) to be satisfied for files served from the
> likes of SAMBA.
>


Not sure but don't use any kind of file sharing just to be safe.


Hope that helps


RE: [sqlite] Syntax for Multi-table Join

2008-01-10 Thread James Dennett

A tired James wrote:
>
> select foo as bah from baz, not select foo from baz as bar, I think.
> 
> -- James

Evidently I don't think too well this afternoon; please disregard
this...

-- James

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] Syntax for Multi-table Join

2008-01-10 Thread James Dennett
> -Original Message-
> From: Rich Shepard [mailto:[EMAIL PROTECTED]
> Sent: Thursday, January 10, 2008 4:10 PM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Syntax for Multi-table Join
> 
> I cannot find the specific syntax on the web site's SELECT page nor in
> Owens' book. It should not be as difficult to craft the statement as I'm
> finding it to be.
> 
>Here's what I want (with the sqlite3 error following):
> 
>   SELECT * from Fuzzyset as f
> INNER JOIN (SELECT num_ts from Variable as v
>   WHERE f.parent=v.name AND f.comp=v.comp_name and
> f.subcomp=v.subcomp_name)
> 
> SQL error: near "as": syntax error
> 
>A clue stick is appreciated.

select foo as bah from baz, not select foo from baz as bar, I think.

-- James
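
(For what it's worth, a hedged guess at the intended query written as a
plain inner join, reusing the table and column names from the original
post:)

  SELECT f.*, v.num_ts
    FROM Fuzzyset AS f
    INNER JOIN Variable AS v
      ON f.parent = v.name
     AND f.comp = v.comp_name
     AND f.subcomp = v.subcomp_name;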

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] "Can't we all just get along?" [Was: RE: [sqlite] "always-trim" - feature suggestion]]

2008-01-09 Thread James Dennett
> -Original Message-
> From: Zbigniew Baniewski [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, January 09, 2008 10:57 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] "always-trim" - feature suggestion
> 
> On Wed, Jan 09, 2008 at 07:15:52PM +0100, Kees Nuyt wrote:
> 
> > It's a culture thing. In Eastern Europe this is the normal way
> > of reasoning, and isn't considered rude (my girlfriend is
> > Latvian from Russian parents, so I have some experience with
> > this kind of culture shock).
> 
> I'm afraid, I don't understand. Am I expected here to agree with
everyone,
> because I'll be seen as "rude" otherwise? 

No, not at all: in fact, differences of opinion are common on this list,
and generally don't cause problems because most list participants have
similar notions of culturally "polite" ways to resolve them.

> That's too bad - but it's not my
> way to change my mind under pressure.

The best technical results don't come from such pressure, I'd certainly
agree.

> Everyone here can have his/her own opinion - and so can I. And my
opinion
> can differ from the opinion of the others' (and vice versa).
> 
> Ending the thread (I hope), I want to repeat: I wrote *yesterday*:
"OK, no
> problem". Everyone can check out this lists archive. The devs -
especially
> "the Highest One" - answered "no", and it's enough for me. But my
opinion
> about the proposed feature stays. So what? It's mine. Perhaps really -
> after
> ending my current work - I'll write that patch on my own. Fred, Ken
and
> all
> the others' (even that mentioned "poor ***") can have different
opinions,
> of
> their own. I didn't deny it - as (almost) all the others are denying
my
> right to have my own opinion.
> 
> I can't understand all that people tryin' to start a flamewar here
*after*
> I ended the discussion *yesterday* already. Are they bored, and
looking
> for
> some doubtful "fun"? But why exactly here?

No, they're not trying to start a flamewar -- they're reacting to what
they honestly perceive as _you_ flaming.  But your flaming was, in turn,
your response to a post from Aristotle Pagaltzis which I'm sure you
honestly considered flaming, though by the standards of most list
participants Aristotle was doing exactly what you defend: politely
expressing his disagreement with your opinion, and explaining why he
disagreed.

I don't think there is any bad intention behind any of the posts here.
It's best (for both sides) not to assume bad faith, and to understand
that when we communicate in e-mail from around the world, sometimes
we'll do it in different ways.

To take an example (and I apologize for it being from your message, but
that's a convenient place): when you write "I ended the discussion
*yesterday* already", it's easy for me to take that as being rude
because it implies that you have the power to unilaterally terminate a
discussion on this list.  Now, I think that you really meant that you
stated yesterday that *you* did not need any further discussion, and I
don't really believe that you intended to tell others that they are not
permitted to continue the discussion if they wish to do so.  However, if
I were of a mind to look for "rudeness", I could find it even where none
was intended.

Our goals here are the same -- we want SQLite to continue to be a fine
database within its niche, and to improve.  It's natural that there are
disagreements on what constitutes "improvement", and even that there
will be tensions as the forces behind those disagreements are resolved.
Let's not waste time debating perceived insults on the list?

Regards,

James


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] Encryption?

2007-12-19 Thread James Steward
On Wed, 2007-12-19 at 16:10 -0800, James Dennett wrote:
> > -Original Message-
> > From: Jason Tudor [mailto:[EMAIL PROTECTED]
> > Sent: Wednesday, December 19, 2007 4:06 PM
> > To: sqlite-users@sqlite.org
> > Subject: [sqlite] Encryption?
> > 
> > Is there any encryption functionality built into SQLite? 
> 
> Not in the public domain version.

From the ministry of silly thoughts, maybe dm-crypt for Linux only ;-)
http://www.saout.de/misc/dm-crypt/

Can be used to encrypt/decrypt whole file systems on the fly!

JS.


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] Encryption?

2007-12-19 Thread James Dennett
> -Original Message-
> From: Jason Tudor [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, December 19, 2007 4:06 PM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Encryption?
> 
> Is there any encryption functionality built into SQLite? 

Not in the public domain version.

> Also, can I use extensions other than .db for SQLite database files?

Certainly; SQLite doesn't care about the filename conventions.

-- James


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Improving performance of SQLite. Anyone heard of DeviceSQL?

2007-12-17 Thread James Steward
On Mon, 2007-12-17 at 19:22 -0500, John Elrick wrote:
> John Elrick wrote:
> 
> SNIP
> > When you can hire a forklift operator to program (well, that is),
> 
> 
> To avoid a misunderstanding...I mean - right off the forklift.  I'm sure 
> anyone with the proper motivation can learn to program, but it took me 
> 25 years to realize how little I really knew.

I didn't write the original about forklift operators.  It was Fred
Williams, AFAIK.

I am in agreement with you, and I have nothing against forklift
operators either.

IMHO, this has gone completely off topic, and I shall hence forth cease
to contribute to this, and related threads.

Regards,
James.


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] Improving performance of SQLite. Anyone heard of DeviceSQL?

2007-12-17 Thread James Steward
On Mon, 2007-12-17 at 15:30 -0600, Fred Williams wrote:
> A hundred or so Visual Basic programmers are cheaper to replace and
> "maintain" than one good Delphi/C++ programmer. ;-)  That is the reason
> management likes "Visual ."  Been there, learned that.  Hire the
> staff from the largest pool, not the most effective.  Besides it's damn
> hard to be a prima donna, when your replacement is ready to jump off
> that forklift and learn a cushy job.

Ouch.  Lucky Visual  is not a cross platform language.  Show the
Visual  programmers a bit of Tcl/Tk and watch them wilt!

It looks like Xilinx wrote their entire ISE GUI in Tcl/Tk, and the
backend apps are cross compiled.  I can run it on Linux as well as
NoDose, and the GUI is identical.  Try that in Visual poop.

You might have cheap programmers today, but tomorrow they will be less
useful.  Look out for Tux!

At every planning meeting I push open source, and cross platform
solutions, because I know today the majority is still under Bill's
spell, but the magic in Vista is fading...


Ciao.
James.


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] DeviceSQL

2007-12-13 Thread James Steward

steveweick wrote:

Do you need to read the code to verify reliability as your next few
sentences seems to imply? For that to be true, the reader would have to be
able to spot bugs through inspection.  While that is certainly one way to
spot bugs, I seriously doubt that any shop would rely on code inspection,
when millions of dollars of potential recall costs are on the line.
  
I think many would agree that code inspections do find (serious) bugs
that may not show up in testing.  I'm sure your company conducts code 
inspection meetings as a part of all code development.  We (the company 
I work for) certainly do.  I know I've seen change logs that read 
something like "Fixed possible buffer overflow in foo..." for open 
source projects here and there as well.



In fact the SQLite marketing does not rely on code inspection as its
argument for why the code is reliable. Check it out. 
  
That would be bad if they did, I agree.  But all the testing in the 
world won't uncover all the bugs either, in a complex piece of code.  
See http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf .
"The Ptolemy II system itself began to be widely used, and every use of 
the system exercised this
code. No problems were observed until the code deadlocked on April 26, 
2004, four years later."  And that was after code inspections, 
regression tests and belt and braces programming techniques!

All of that said, I do admire the elegance of the SQLite code.  It makes
entertaining reading.  Unfortunately elegance does not translate into
performance or reliability.
  
Not necessarily, but it often does, and can make for better 
maintainability too.  I've not trawled through to SQLite code myself, so 
couldn't comment.  But it does have quite a few big name users, and an 
active and helpful user forum, which gives me good vibes at least.


Cheerio,
James.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] DeviceSQL

2007-12-13 Thread James Steward

steveweick wrote:

Richard has it right this time.  Today DeviceSQL uses no SQLite code. One of
the things we might consider is bolting the SQLite parser/front end to our
table engine, in theory to get the both worlds.  Just an idea at the moment.
  
Such an interesting discussion to be following.  I must say though, it 
seems DeviceSQL has opened the door to speculation due to 
unsubstantiated claims in advertising, as far as I see it.  IMHO, so 
long as there is no independent, unbiased, side by side test results 
presented somewhere by some reliable source, there will always be some 
room for "ifs" and "buts" by both sides.


Maybe DeviceSQL should go open source, so the public can judge for them 
selves the qualities of the two products.  There would still be money to 
be made from paid support.  Who knows, both parties could benefit, and 
customers too.  At least there'd be a clearer view of the pros and cons. 

There is something to be said for a product being open source, that is 
the code is scrutinized by the world.  Closed shop code can possibly 
still be very good, but without seeing it, how would we know?  Reminds 
me of a story about a cat: dead or alive, we won't know until we open 
the box it's in, and prior to that, is it only half dead?


One only has to look at the MSDN code examples to see the ugliness of 
closed source  code development...(sorry Bill)


JS.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Seg fault after calling sqlite3_exec while running under cygwin

2007-12-10 Thread James Steward
On Mon, 2007-12-10 at 20:37 -0700, John Williams wrote:
> Since I'm not really sure where the problem is I've attached a zip of my c
> files.  AptAssist.c is my main file and contains the full program.
> Temp.cis a simple pull out of the problem function.

I didn't get an attachment. Did anyone?

JS.


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] version 3.3.12 -- malformed schema after using "limit 0" ??

2007-11-29 Thread James Dennett
[EMAIL PROTECTED] wrote:

> 
> I don't know if the following might be a problem with the new release,
> or just something odd on my system.
> 
> 
> In sqlite 3.3.12, I try to create an empty table copying the schema of
> an existing table by using a command such as the following:
> CREATE TABLE  newtable AS SELECT * FROM oldtable LIMIT 0;
> I seem to create an integrity_check failure (and the "newtable" is not
> created, btw).
> 
> If I then close the database and re-open it, the "malformed database
> schema" error is returned for most any operation, and no table within
> the database can be dumped.
> 
> 
> I can produce the error with the pre-compiled windows binary
sqlite3.exe
> (I'm running winXP SP2).
> 
> When I tried an old sqlite 3.4.2 that I had lying around, it *did*
> create "newtable" and there was *no* integrity_check failure.   Same
> with version 2.8.13 (sqlite.exe).
> 
> Is this an easily-repeatible issue, or do I have some corrupted files
> somewhere?

With your recipe, I can reproduce it on 3.3.10 on Solaris 9/SPARC.
Clearly an upgrade is in order where practical.

-- James


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] SQLite Manager Firefox extension

2007-11-27 Thread James
On Tue, 2007-11-27 at 11:45 +0300, Igor Sereda wrote:
> It does look good, but it's not quite usable with large databases though.
> For example, I couldn't wait till Browse and Search page showed a 2 million
> rows table -- it seemed to load everything into memory, eating up resources
> and causing Firefox to come up with "stop script" dialogs. 
> 
> Otherwise, a nice UI.

I agree with both comments.  Big tables are slow to load, but otherwise,
great.

Regards,
James.


-
To unsubscribe, send email to [EMAIL PROTECTED]
-


