[sqlite] Test failures on GPFS

2020-01-15 Thread T J
On Sunday, January 12, 2020, Roman Fleysher wrote:

>
> I use SQLite over GPFS, but in DELETE (which I think is the default)
> mode, not WAL mode. No issues with locking, except performance when
> accessing concurrently from multiple nodes. As others pointed out, this has
> to do with the overhead due to lock requests: GPFS must coordinate with
> many nodes. My observation is that when concurrent access comes from only a
> few nodes, performance is OK even though the total number of nodes is
> always the same. Thus, GPFS seems to coordinate in some smart way only
> between the nodes actively involved.
>
> One reason I do not use MySQL, with its more efficient network access, is
> that a sysadmin must set it up. With SQLite, I am independent. In addition,
> with MySQL there are authentication issues to be dealt with. I rely on GPFS
> file access permissions (access control lists, ACLs) to regulate access to
> the database.
>
> I heard about BedrockDB, which internally uses SQLite and provides
> network access with replication. I have not tried it and do not know what
> is involved.
>
>
>
MySQL and similar would indeed be nice to use, but in addition to the
administrative cost, there are also developer costs to get things set up so
that every developer can work in their own db without affecting the
production db, as well as complexity costs in getting data into those
dbs. Contrast this with just copying the sqlite file(s) as needed (though
integrity concerns still exist).

So I'm mostly weighing options. The data is very much many-reads,
few-writes. I am also considering just using an external locking service and
simple flat files, but this has the obvious downsides of fewer (if any) data
types, no joins, no transactions, etc.

I may give this a try and see if the perf hit is tolerable.
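
Since WAL depends on a shared-memory file that only works when all
connections are on the same host, a rollback-journal mode is generally the
safer choice on a network filesystem such as GPFS. A minimal hedged sketch of
pinning that down explicitly:

PRAGMA journal_mode = DELETE;   -- the default rollback-journal mode
PRAGMA busy_timeout = 5000;     -- wait up to 5 s on competing locks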


[sqlite] Test failures on GPFS

2020-01-11 Thread T J
Hi,

I was interested in using sqlite over GPFS.  I've seen a few useful threads
on this:

   - Network file system that support sqlite3 well

   
https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg117085.html

   - disable file locking mechanism over the network

   
https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg116846.html

From these, I can see that there are some performance issues, even if I were
willing (which I am not) to make all access (read+write) sequential. [I
don't expect to need many, if any, concurrent writers, but I will typically
have concurrent readers.]

To get a better sense of things, I downloaded 3.31.0 and ran the test suite
on GPFS.  Overall, it looks pretty good, but there were some WAL failures.
Could someone comment on the precise implications of those test failures?
I'm interested to know which usage patterns are likely to cause problems
and which are likely safe.  Also, which other tests can I run
(https://www.sqlite.org/testing.html)? Perhaps more tests around concurrent
reads/writes?
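
The e_walauto tests exercise WAL auto-checkpointing, which relies on the
shared-memory (-shm) file; that is exactly the part of WAL that is not
expected to work across a network filesystem. Independent of the test suite,
a quick hedged sanity check on the target filesystem would be:

PRAGMA journal_mode = WAL;    -- returns the journal mode actually in effect;
                              -- anything other than "wal" means WAL failed
PRAGMA wal_checkpoint(FULL);  -- exercise the checkpoint path once
PRAGMA journal_mode = DELETE; -- switch back to a rollback journal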

!Failures on these tests: e_walauto-1.1.2 e_walauto-1.1.3
e_walauto-1.1.5 e_walauto-1.1.7 e_walauto-1.1.12.3 e_walauto-1.1.12.5
e_walauto-1.2.2 e_walauto-1.2.3 e_walauto-1.2.5 e_walauto-1.2.7
e_walauto-1.2.12.3 e_walauto-1.2.12.5 zipfile-2.4a.2.1
zipfile-2.4a.2.2


Thanks in advance.  The `make test` output log snippet is below.
---

e_walauto-1.1.0... Ok

e_walauto-1.1.1... Ok

e_walauto-1.1.2...

! e_walauto-1.1.2 expected: [1]

! e_walauto-1.1.2 got:  [0]

e_walauto-1.1.3...

! e_walauto-1.1.3 expected: [1]

! e_walauto-1.1.3 got:  [0]

e_walauto-1.1.4... Ok

e_walauto-1.1.5...

! e_walauto-1.1.5 expected: [1]

! e_walauto-1.1.5 got:  [0]

e_walauto-1.1.6... Ok

e_walauto-1.1.7...

! e_walauto-1.1.7 expected: [1]

! e_walauto-1.1.7 got:  [0]

e_walauto-1.1.7... Ok

e_walauto-1.1.8... Ok

e_walauto-1.1.9... Ok

e_walauto-1.1.10.1... Ok

e_walauto-1.1.10.2... Ok

e_walauto-1.1.11.1... Ok

e_walauto-1.1.11.2... Ok

e_walauto-1.1.11.3... Ok

e_walauto-1.1.12.1... Ok

e_walauto-1.1.12.2... Ok

e_walauto-1.1.12.3...

! e_walauto-1.1.12.3 expected: [2]

! e_walauto-1.1.12.3 got:  [0]

e_walauto-1.1.12.4... Ok

e_walauto-1.1.12.5...

! e_walauto-1.1.12.5 expected: [1559]

! e_walauto-1.1.12.5 got:  [0]

e_walauto-1.2.0... Ok

e_walauto-1.2.1... Ok

e_walauto-1.2.2...

! e_walauto-1.2.2 expected: [1]

! e_walauto-1.2.2 got:  [0]

e_walauto-1.2.3...

! e_walauto-1.2.3 expected: [1]

! e_walauto-1.2.3 got:  [0]

e_walauto-1.2.4... Ok

e_walauto-1.2.5...

! e_walauto-1.2.5 expected: [1]

! e_walauto-1.2.5 got:  [0]

e_walauto-1.2.6... Ok

e_walauto-1.2.7...

! e_walauto-1.2.7 expected: [1]

! e_walauto-1.2.7 got:  [0]

e_walauto-1.2.7... Ok

e_walauto-1.2.8... Ok

e_walauto-1.2.9... Ok

e_walauto-1.2.10.1... Ok

e_walauto-1.2.10.2... Ok

e_walauto-1.2.11.1... Ok

e_walauto-1.2.11.2... Ok

e_walauto-1.2.11.3... Ok

e_walauto-1.2.12.1... Ok

e_walauto-1.2.12.2... Ok

e_walauto-1.2.12.3...

! e_walauto-1.2.12.3 expected: [2]

! e_walauto-1.2.12.3 got:  [0]

e_walauto-1.2.12.4... Ok

e_walauto-1.2.12.5...

! e_walauto-1.2.12.5 expected: [1559]

! e_walauto-1.2.12.5 got:  [0]

e_walauto.test-closeallfiles... Ok

e_walauto.test-sharedcachesetting... Ok

Time: e_walauto.test 92703 ms

...

zipfile2.test-closeallfiles... Ok

zipfile2.test-sharedcachesetting... Ok

Time: zipfile2.test 14 ms

Memory used:          now        24  max   9283664  max-size  16908288

Allocation count:     now         1  max   1311131

Page-cache used:      now         0  max        13  max-size     65800

Page-cache overflow:  now         0  max  20640016

SQLite 2020-01-10 01:05:49
0a500da6aa659a8e73206e6d22ddbf2da5e4f1d1d551eeb66433163a3e13109d

14 errors out of 249964 tests on localhost Linux 64-bit little-endian

!Failures on these tests: e_walauto-1.1.2 e_walauto-1.1.3
e_walauto-1.1.5 e_walauto-1.1.7 e_walauto-1.1.12.3 e_walauto-1.1.12.5
e_walauto-1.2.2 e_walauto-1.2.3 e_walauto-1.2.5 e_walauto-1.2.7
e_walauto-1.2.12.3 e_walauto-1.2.12.5 zipfile-2.4a.2.1
zipfile-2.4a.2.2

All memory allocations freed - no leaks

Memory used:          now         0  max   9283664  max-size  16908288

Allocation count:     now         0  max   1311131

Page-cache used:      now         0  max        13  max-size     65800

Page-cache overflow:  now         0  max  20640016

Maximum memory usage: 9283664 bytes

Current memory usage: 0 bytes

Number of malloc()  : -1 calls


Re: [sqlite] Table was deleted on macOS

2019-10-15 Thread t...@qvgps.com
-- Original Message --
From: "Simon Slavin" <slav...@bigfraud.org>
To: "SQLite mailing list" <sqlite-users@mailinglists.sqlite.org>
Sent: 15.10.2019 23:24:17
Subject: Re: [sqlite] Table was deleted on macOS

On 15 Oct 2019, at 10:11pm, t...@qvgps.com wrote:

"no such table: mytable"

The client sent us the database and the table "mytable" was really gone. The 
database also seems to be fine, no errors.

Do you mean by that that you ran integrity_check() ?
Yes, integrity_check says it is ok.



Is everything else there, or did another table, or rows from another table, 
disappear ?

everything else is there, just this one table is gone.


Did the client's computer crash or otherwise fail at any point ?
He didn't report that. I believe it's quite unlikely with a 2015 MacBook Pro.



Is there a chance that the client tried to mess with the table using a database 
editor tool ?
no



Does the client have backups ? Can you look at them and isolate the daterange 
during which the problem occurred ?
no



In our app there is no DROP TABLE command; what else can cause a table to be deleted?

ALTER TABLE RENAME
There is no code in the app that alters or drops any table.

Tom




[sqlite] Table was deleted on macOS

2019-10-15 Thread t...@qvgps.com
Hi Group,

here is a very strange and rare fault which one of our users experienced on
macOS.

He was working in our app for a while, creating a lot of new data.
So over several hours there were many INSERT INTO mytable statements,
without any errors.

Then he ran a query:
SELECT COUNT(*) FROM mytable WHERE (deleted=0 OR deleted IS NULL) AND IdCat=2
and an exception occurred:
"no such table: mytable"

The client sent us the database, and the table "mytable" was really gone.
The database also seems to be fine, no errors.
How can this happen?
There is no DROP TABLE command in our app; what else can cause a table to be deleted?
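
A minimal diagnostic sketch for a report like this ("mytable" is the table
name from the thread):

PRAGMA integrity_check;          -- verify the file structure; should say "ok"
SELECT name, type                -- is anything like the table still in the schema?
  FROM sqlite_master
 WHERE name LIKE '%mytable%';
SELECT sql FROM sqlite_master;   -- inspect whatever schema remains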

We have a few hundred copies of our app out there, and this is the first time
this error has happened.
It's on macOS, and the SQLite version is 3024000 (i.e. 3.24.0).


Thanks
Tom



/
**   Flemming Software Development CC
**   Thomas Flemming
**   PO Box 81244
**   Windhoek, Namibia
**   http://www.quovadis-gps.com
**   mail  t...@qvgps.com<mailto:t...@qvgps.com>
**   +264 (0)81 3329923Nam mobile
**   +49  (0)175 7313081   D mobile
**   +49  (0)6182 8492599  D office
***/



[sqlite] WITHOUT ROWID

2019-09-08 Thread t...@qvgps.com
Hi sqlite-group,

I accidentally created tables WITHOUT ROWID.
These tables are filled with some 100 MB of data and have already been
delivered to customers.

Now I'm looking for a way to fix this bug.
Is it possible to change (ALTER TABLE?) these tables to still include a rowid?

One option would be to create a new table with a rowid, copy all records from
the old one, delete the old one, and rename the new table; see the sketch
below.
But maybe there is a simpler, quicker solution?
Like just adding a column?
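
For reference, a hedged sketch of the rebuild option described above; the
real column list must match the existing table, and the names here are
placeholders:

BEGIN;
CREATE TABLE mytable_new(
  id   INTEGER PRIMARY KEY,   -- becomes an alias for the rowid
  data TEXT
);
INSERT INTO mytable_new SELECT * FROM mytable;
DROP TABLE mytable;
ALTER TABLE mytable_new RENAME TO mytable;
COMMIT;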

Thanks,
Tom


/
**   Flemming Software Development CC
**   Thomas Flemming
**   PO Box 81244
**   Windhoek, Namibia
**   http://www.quovadis-gps.com
**   mail  t...@qvgps.com
**   +264 (0)81 3329923Nam mobile
**   +49  (0)175 7313081   D mobile
**   +49  (0)6182 8492599  D office
***/



Re: [sqlite] Strategies to reduce page-loads?

2018-08-31 Thread t...@qvgps.com
Right.

I was trying it out now, building some OSM databases with the primary key
generated with this Morton encoding from lat/lon, and the performance is even
worse.
Debugging with the sqlite tool shows that the page counts for specific
queries are almost double what they were before.

It seems that, from the SQLite side, the only options are to have the page
size as big as possible and the line data in the blob field as compressed as
possible.



-- Original Message --
From: "Simon Slavin" 
To: "SQLite mailing list" 
Sent: 31.08.2018 19:07:36
Subject: Re: [sqlite] Strategies to reduce page-loads?

>On 31 Aug 2018, at 2:46pm, J Decker  wrote:
>
>>There was a voxel engine that was claiming they were going to move to a
>>morton encoding; and I was working with a different engine, so I built a
>>simulator to test average lookup distances; it was far more efficient to
>>keep sectors of voxels (32x32x32) in flat indexing, which made the maximum
>>distance 1025
>
>I can confirm that SatNav units do not keep their maps in Morton code 
>order.  It's not a no-brainer go-to solution for mapping.  However, the 
>analysis done to figure out a good storage order is rather complicated 
>and off-topic for this list.
>
>Simon.


Re: [sqlite] Strategies to reduce page-loads?

2018-08-31 Thread t...@qvgps.com
Ok, then WITHOUT ROWID will most probably fit best in our use case.
Then I can fill the PRIMARY KEY with the z-order value and store the osm-id
just in another column; a sketch follows below.

But do I still need to fill the table in the correct order according to the
z-order?
I mean, we are talking about 1 million rows or so.
At which point during an insert are the pages actually written?
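
A hedged sketch of that layout, with the Z-order key assumed to be computed
outside SQLite and the other columns taken from the thread's schema:

CREATE TABLE Lines(
  zkey        INTEGER PRIMARY KEY,  -- Z-order (Morton) code; clusters nearby lines
  osm_id      INTEGER,              -- the OpenStreetMap id, kept as a plain column
  Label       VARCHAR(50),
  Coordinates BLOB,
  Flags       INT,
  StyleId     INT
) WITHOUT ROWID;
CREATE INDEX Lines_osm ON Lines(osm_id);  -- fast lookup by osm-id when needed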


-- Original Message --
From: "Richard Hipp" 
To: "SQLite mailing list" 
Sent: 31.08.2018 15:10:15
Subject: Re: [sqlite] Strategies to reduce page-loads?

>On 8/31/18, t...@qvgps.com  wrote:
>>
>>So is it just the value of the primary key controlling in which page 
>>the
>>row is stored?
>
>The page on which content is stored is determined (approximately) by
>the value of the ROWID, which is the same as the INTEGER PRIMARY KEY
>as long as you declare the primary key to be of type "INTEGER".  If
>you declare the PRIMARY KEY to be something other than "INTEGER" (for
>example: "INT" or "LONG") then the ROWID and the primary key are
>different and the primary key has no influence over where the content
>is stored.
>
>Or if you create a WITHOUT ROWID table, then the PRIMARY KEY is the
>value that determines (approximately) on which page the content is
>stored.
>--
>D. Richard Hipp
>d...@sqlite.org


Re: [sqlite] Strategies to reduce page-loads?

2018-08-31 Thread t...@qvgps.com
>
>(3) Consider deliberating choosing INTEGER PRIMARY KEY values based on
>a "Morton code" or "Z-Order curve" of the coordinates.
>(https://en.wikipedia.org/wiki/Z-order_curve)  That will cause
>features that are close together geographically to tend to be close
>together within the file.

My primary key is actually the Id of the specific object in the 
OpenStreetMap-database, and we also need this osm-id in the app.
I was trying just to order all rows by the "Z-Order curve"-value first 
before inserting them.

for each line in lines, ordered by z-value:
    insert line into lines
    insert line into lines_rtree

But this doesn't seem to work.
At least, I don't see any improvements in page usage.

So is it just the value of the primary key that controls in which page the
row is stored?







Re: [sqlite] Strategies to reduce page-loads?

2018-08-30 Thread t...@qvgps.com
It would be interesting to "measure" the effect of these ideas during
the process of optimizing.
I can profile and measure the execution times, but it would also be
interesting to know how many pages are involved in a specific query.

Is there maybe a way to get the count of pages currently used?
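
Two pragmas give at least a database-wide view of page usage (a hedged
pointer; they do not count the pages touched by one query):

PRAGMA page_count;      -- total number of pages in the database file
PRAGMA freelist_count;  -- pages currently unused (on the free list)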



-- Original Message --
From: "Richard Hipp" 
To: "SQLite mailing list" 
Sent: 30.08.2018 13:48:30
Subject: Re: [sqlite] Strategies to reduce page-loads?

>On 8/30/18, t...@qvgps.com  wrote:
>>
>>Structure is  simple:
>>CREATE TABLE Lines(Id LONG PRIMARY KEY, Label VARCHAR(50), Coordinates
>>BLOB, Flags INT, StyleId INT);
>>And an rtree-index:
>>CREATE VIRTUAL TABLE Lines_bb USING rtree (Id LONG PRIMARY KEY, X0
>>FLOAT, X1 FLOAT, Y0 FLOAT, Y1 FLOAT, Z0 FLOAT, Z1 FLOAT);
>
>Three points that might help, either separately or in combination:
>
>(1) Use exactly "INTEGER PRIMARY KEY".  "LONG PRIMARY KEY" and "INT
>PRIMARY KEY" are not the same thing and do not work as well.
>
>(2) In the very latest versions of SQLite, 3.24.0 and the beta for
>3.25.0, you can put the "Lines" information directly in the RTree:
>
>   CREATE VIRTUAL TABLE Lines_bb USING rtree(id, x0, x1, y0, y1, z0,
>z1, +Label, +Coordinates, +Flags, +StyleId);
>
>The extra columns in r-tree are prefaced by a "+" character so that
>the r-tree module knows that they are auxiliary columns and not extra
>coordinates.
>
>(3) Consider deliberately choosing INTEGER PRIMARY KEY values based on
>a "Morton code" or "Z-Order curve" of the coordinates.
>(https://en.wikipedia.org/wiki/Z-order_curve)  That will cause
>features that are close together geographically to tend to be close
>together within the file.  There are two extension functions in the
>https://www.sqlite.org/src/artifact/b0ff58fa643afa1d file of the
>SQLite source tree that might help you with this.  Or you can do the
>same using your own functions.
>--
>D. Richard Hipp
>d...@sqlite.org


Re: [sqlite] Strategies to reduce page-loads?

2018-08-30 Thread t...@qvgps.com
Thanks guys for the quick and competent answers!

At first sight, this "Z-Order curve" looks very promising; I will give
it a try.

The LONG PRIMARY KEY is because I need a 64-bit integer (the osm-id).
I only learned now that an SQLite INTEGER is also 64 bits long.
Will change to INTEGER PRIMARY KEY now.

Tom


-- Original Message --
From: "Richard Hipp" 
To: "SQLite mailing list" 
Sent: 30.08.2018 13:48:30
Subject: Re: [sqlite] Strategies to reduce page-loads?

>On 8/30/18, t...@qvgps.com  wrote:
>>
>>Structure is  simple:
>>CREATE TABLE Lines(Id LONG PRIMARY KEY, Label VARCHAR(50), Coordinates
>>BLOB, Flags INT, StyleId INT);
>>And an rtree-index:
>>CREATE VIRTUAL TABLE Lines_bb USING rtree (Id LONG PRIMARY KEY, X0
>>FLOAT, X1 FLOAT, Y0 FLOAT, Y1 FLOAT, Z0 FLOAT, Z1 FLOAT);
>
>Three points that might help, either separately or in combination:
>
>(1) Use exactly "INTEGER PRIMARY KEY".  "LONG PRIMARY KEY" and "INT
>PRIMARY KEY" are not the same thing and do not work as well.
>
>(2) In the very latest versions of SQLite, 3.24.0 and the beta for
>3.25.0, you can put the "Lines" information directly in the RTree:
>
>   CREATE VIRTUAL TABLE Lines_bb USING rtree(id, x0, x1, y0, y1, z0,
>z1, +Label, +Coordinates, +Flags, +StyleId);
>
>The extra columns in r-tree are prefaced by a "+" character so that
>the r-tree module knows that they are auxiliary columns and not extra
>coordinates.
>
>(3) Consider deliberately choosing INTEGER PRIMARY KEY values based on
>a "Morton code" or "Z-Order curve" of the coordinates.
>(https://en.wikipedia.org/wiki/Z-order_curve)  That will cause
>features that are close together geographically to tend to be close
>together within the file.  There are two extension functions in the
>https://www.sqlite.org/src/artifact/b0ff58fa643afa1d file of the
>SQLite source tree that might help you with this.  Or you can do the
>same using your own functions.
>--
>D. Richard Hipp
>d...@sqlite.org


[sqlite] Strategies to reduce page-loads?

2018-08-30 Thread t...@qvgps.com
Hi Guys,

we are using SQLite for storing OpenStreetMap ways (lines).
Once filled, it is read-only.
The lines are then queried for specific areas to draw a map.

The structure is simple:
CREATE TABLE Lines(Id LONG PRIMARY KEY, Label VARCHAR(50), Coordinates 
BLOB, Flags INT, StyleId INT);
And an rtree-index:
CREATE VIRTUAL TABLE Lines_bb USING rtree (Id LONG PRIMARY KEY, X0 
FLOAT, X1 FLOAT, Y0 FLOAT, Y1 FLOAT, Z0 FLOAT, Z1 FLOAT);

Queries always fetch all lines of a specific geographical area, which is
very fast because of the rtree:
SELECT Lines.* FROM Lines_bb , Lines WHERE y0 < ? AND y1 > ? AND x0 < ? 
AND x1 > ? AND ? BETWEEN z0 AND z1 AND Lines_bb.Id = Lines.Id ;

Then the rows are fetched, and here the problems start:
var stmt = SQLite3.Prepare2(handle, select);
while (SQLite3.Step(stmt) == SQLite3.Result.Row)
{
    // fetch row
}

The larger the db, the slower the fetching!

We compared a small db (50 MB) and a big db (500 MB), both containing the
same small area:
Reading the same area of, for example, 1000 lines from the small db is 2x
faster than from the large db.
After doing some profiling, it turned out that the extra time was spent
in SQLite3.Step.

My assumption is that in the big db these 1000 lines are just spread
over a much higher count of pages.
So more page-loads, resulting in more time.

We changed page_size to the maximum value of 64k and it became much
better, but I would still like to improve it.
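
For the record, a hedged sketch of applying the page-size change; on an
existing database the new size only takes effect after a rebuild:

PRAGMA page_size = 65536;  -- the maximum page size (64 KiB)
VACUUM;                    -- rewrites the file so the new size takes effect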

Thanks,
Tom



/
** Flemming Software Development CC
** Thomas Flemming
** PO Box 81244
** Windhoek, Namibia
** http://www.quovadis-gps.com
** mail t...@qvgps.com
** +264 (0)81 3329923 Nam mobile
** +49 (0)175 7313081 D mobile
** +49 (0)6182 8492599 D office
***/


[sqlite] When is db size an issue?

2017-09-27 Thread Jason T. Slack-Moehrle
Hello All,

Off and on for the last few years I have been writing an e-mail client to
scratch a personal itch. I store the mail in SQLite and attachments on the
file system. However, I recently brought in all of my mail from the last 15
years from mbox format. Now, my database size is over 10 GB. I'm not seeing
any real performance issues, and my queries are executing nice and fast
during searches.

However, does anyone have any thoughts about the size? Should I be
concerned? Is there a theoretical limit I should keep in the back of my
mind?

Jason


[sqlite] Possible thread-safety bug in lemon parser with ParseTrace()

2017-09-07 Thread Fletcher T. Penney

1) I hope I am reporting this to the right place.  If not, my apologies.

2) I have been using lemon parsers for a year or more, but am by no
means an expert on the lemon source itself.  I did not see this issue
referenced elsewhere; my apologies if I missed it.



I *think* there is a thread-safety issue in the ParseTrace() function:

void ParseTrace(FILE *TraceFILE, char *zTracePrompt){
  yyTraceFILE = TraceFILE;        /* file-scope global: trace output stream */
  yyTracePrompt = zTracePrompt;   /* file-scope global: prompt prefix */
  if( yyTraceFILE==0 ) yyTracePrompt = 0;
  else if( yyTracePrompt==0 ) yyTraceFILE = 0;
}

It appears that `yyTraceFILE` and `yyTracePrompt` are global variables 
that can conceivably be written to simultaneously on two separate 
threads.  I suspect the negative effects of this would be low 
(overwriting of one prompt with another, and they would likely be 
identical strings anyway).



It is detected by Xcode's Thread Sanitizer, and I wanted to report it in 
case there was a more untoward effect that I was missing.  If nothing 
needs to be done about it, that's fine too.



Thanks!

Fletcher




--
Fletcher T. Penney
fletc...@fletcherpenney.net


Re: [sqlite] confused getting started

2017-03-05 Thread NTrewartha T-Online

I too am new to DBs and SQLite.
I have downloaded the binaries for Win 10, and there is a dll and a def file.
I clicked on the def file hoping this would integrate the dll into VS
2015 Community, but all that happened was that the def file got listed,
and I cannot seem to find an import-def tool.

I will have to google that, but *I would be grateful for any pointers*.

There is a GUI tool - which I think is better for learners - which ought
to be mentioned:

http://sqlitebrowser.org/ The "DB Browser for SQLite".

Are there any examples of C, C++, C# or Python usage for SQLite?

I would like SQLite on my Raspberry Pi 3 after I have gained experience
under Windows 10.


Judging by the replies to questions, the very new beginners are left a
bit out in the cold.

Perhaps the documentation ought to cover the needs of pure beginners.

Regards to you all,

NT


On 05.03.2017 10:54, a...@zator.com wrote:

Besides the need to include the RDBMS engine inside your application and
manage it from code, you have a standalone application (sqlite3.exe) that
lets you manage your database from the CLI (command line interpreter) of
your system and play with the majority of the options that SQLite offers.

HTH.

--
Adolfo


 Original Message 
From: John Albertini 
To:  sqlite-users@mailinglists.sqlite.org
Date:  Sat, 4 Mar 2017 19:10:26 -0500
Subject:  [sqlite] confused getting started

I'm not a computer novice, but also not a nerd/geek.

Been using PCs since the mid 1980s and have used dBase III+ and Approach
previously.

I can't seem to grasp what I need to download / install to use SQLite?

Can someone guide me through the process?  Looking to use it with
RootsMagic.

Thank you.
John



--

Nigel Trewartha
Sonnenweg 3
33397 Rietberg
Germany
Tel: 05244/3631 Fax: 05244/9063266
ntrewar...@t-online.de



[sqlite] Best way to temporarily store data before processing

2015-04-14 Thread Joseph T.

Mr. Moules, why not skip the raw tables entirely? Or, failing that, a
separate in-memory db would probably be the best and quickest option.

Sent from my Samsung Epic 4G Touch

Jonathan Moules  wrote:

Hi List,
I'm wondering if anyone can offer me a "best practice" way of doing this.

I'm doing some log analysis using Python/SQLite. Python parses a log file and 
splits the raw data from each line in the log into one of about 40 tables in an 
SQLite database (I'll call them Raw Tables).

Once a log file has been processed like this, I run some SQL which takes the 
data from the Raw Tables and aggregates/processes it into about 10 different 
"Prepared Tables" which are read with a bunch of Views. The 
aggregation/processing doesn't take long, and the SQL for it is simple.

I'd like to update the Prepared Tables after each log file is read because 
there are thousands of files and I don't want to have to rely on having GB of 
disk space sitting around for temporary Raw Tables.

Once the Prepared Tables have been created, there's no real need to keep the 
data in the Raw Tables.

The Prepared Tables don't have to be in the same database as the Raw Tables. 
I'm happy to use ATTACH.

So my question:
What's the best way to do this with the minimum overhead?

Options that have come to mind (probably missed a lot; a sketch combining a
couple of them follows the list):
   - Some sort of empty template database for the Raw Tables which is
     copied/cloned/overwritten for each file processed.
   - And/Or use "DELETE FROM Raw_Tables" to truncate it after each
     file (there are no indexes).
   - And/Or place it into :memory:.
   - And/Or just CREATE the Raw Tables for each file?
   - And/Or do it within the Prepared Tables database and use "DELETE
     FROM Raw_Tables". (That file you wouldn't want in :memory: of course.)
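
A hedged sketch combining the :memory: and DELETE options (table and column
names are made up for illustration, not from the original post):

ATTACH DATABASE ':memory:' AS raw;
CREATE TABLE raw.raw_lines(ts TEXT, msg TEXT);   -- one of the ~40 Raw Tables
CREATE TABLE IF NOT EXISTS prepared_counts(day TEXT, n INT);

-- ...the parser INSERTs one log file's rows into raw.raw_lines here...

INSERT INTO prepared_counts(day, n)              -- aggregate into a Prepared Table
  SELECT substr(ts, 1, 10), count(*) FROM raw.raw_lines GROUP BY 1;

DELETE FROM raw.raw_lines;                       -- truncate before the next file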


Thoughts welcome, thanks for your time,
Jonathan







[sqlite] How to insert a pointer data to sqlite?

2015-02-25 Thread Joseph T.

Unless I'm wrong, what you want to do is use two tables: one to store the
node values and another that references them for whatever object is using
them. Say, points for a pair of triangles: a, b, c, d, e. If table triangle
is a table pointing at the table point (id, object, point), you could have
two triangles sharing points, and then when a shared point is changed the
triangles would change too, if reloaded; a sketch follows below.
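
A hedged sketch of that two-table idea (names are illustrative only):

CREATE TABLE point(id INTEGER PRIMARY KEY, x REAL, y REAL);
CREATE TABLE triangle(
  id INTEGER PRIMARY KEY,
  a  INT REFERENCES point(id),   -- shared, reusable vertices
  b  INT REFERENCES point(id),
  c  INT REFERENCES point(id)
);
-- Changing a shared point changes every triangle that references it:
UPDATE point SET x = 1.5 WHERE id = 2;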




Sent from my Samsung Epic 4G Touch

YAN HONG YE  wrote:

I have this data:
Id  pid  name         mark
1   0    f1           sample
2   1    f2           sample
3   1    f3           sample
4   2    f4           sample
5   2    *id(2).name  *id(2).mark

These mean that id(2) and id(5) share the same node; if one of the nodes is
changed, the other should update automatically.
How can I realize this function?
Thank you!



Re: [sqlite] sqlite bugreport : unique index causes valid updates to fail

2014-12-09 Thread J T
I just thought of what may be a simpler solution.

I'm assuming that there is a certain limit to the length of the books 
(positions can be safely assumed to never exceed say, 100,000)

So what can be done is

update page set position=position + 10 where position>='3';
insert into page(book,position) values('1','3');
update page set position=position - 9 where position>10;

This will work around the unique constraint and seems simpler than dropping
it every time you want to insert a page.

-Original Message-
From: Gwendal Roué 
To: General Discussion of SQLite Database 
Sent: Mon, Dec 8, 2014 12:07 pm
Subject: Re: [sqlite] sqlite bugreport : unique index causes valid updates to 
fail



> On 8 Dec 2014 at 17:21, Simon Slavin  wrote:
> 
>> Why not an opt-in way to ask for deferred constraint checking. The key here 
is only to allow perfectly legit requests to run. With all the due respect to 
sqlite implementors and the wonderful design of sqlite.
> 
> SQL-99 includes a syntax for deferred checking.  We don't need to invent our 
own syntax with a PRAGMA. However, it is done when the constraint is defined 
rather than being something one can turn on or off.  So you would need to think 
out whether you wanted row- or transaction-based checking when you define each 
constraint in the first place.

Hi Simon,

This topic is fascinating. Googling for SQL-99 deferred checking, I stumbled 
upon this page which shows how deferred index maintenance affects Oracle query 
plan, and performance : 
https://alexanderanokhin.wordpress.com/deferred-index-maintenance/.

I now understand that the strategy for checking index constraints is tied to 
their maintenance.

The `UPDATE pages SET position = position + 1 WHERE book_id = 0 AND
position >= 1` query we are talking about indeed has to perform both. Such
an innocuous-looking request, and it sends us right into the very guts of
relational constraints :-)

Gwendal



Re: [sqlite] sqlite bugreport : unique index causes valid updates to fail

2014-12-09 Thread J T
I just thought of what may be a simpler solution.

I'm assuming that there is a certain limit to the length of the books 
(positions can be safely assumed to never exceed say, 100,000)

So what can be done is

update page set position=position + 10 where position>='3';
insert into page(book,position) values('1','3');
update page set position=position - 9 where position>10;

This will work around the unique constraint and seems simpler than dropping
it every time you want to insert a page.


-Original Message-
From: James K. Lowden 
To: sqlite-users 
Sent: Tue, Dec 9, 2014 10:38 am
Subject: Re: [sqlite] sqlite bugreport : unique index causes valid updates to 
fail


On Mon, 08 Dec 2014 22:01:15 +0700
Dan Kennedy  wrote:

> On 12/08/2014 09:55 PM, Nico Williams wrote:
> > Ideally there would be something like DEFERRED foreign key checking
> > for uniqueness constraints...
> 
> You could hack SQLite to enforce unique constraints the same way
> as FKs. When adding an entry to a UNIQUE index b-tree, you check for
> a duplicate. If one exists, increment a counter. Do the opposite when
> removing entries - decrement the counter if there are two or more
> duplicates of the entry you are removing. If your counter is greater
> than zero at commit time, a UNIQUE constraint has failed.

It's not *deferred* constraint checking.  It's constraint checking.
Best to honor the transaction first.  

Rather than adding to the syntax, perhaps a pragma could cause updates
to happen in a transaction: 

1.  Create a temporary table to hold the after-image of the updated
rows.
2.  Begin transaction.
3.  Delete the rows from the target table.
4.  Insert the updated rows from the temporary table.
5.  Commit.
6.  Drop temporary table.

Of course there are more efficient answers available deeper in the
update logic, affecting only the particular columns at the time the
constraint is enforced.  I guess they all involve deleting the
prior set from the index and inserting the new one.

--jkl


Re: [sqlite] sqlite bugreport : unique index causes valid updates to fail

2014-12-08 Thread J T

 

Cancel that, apparently that only updates the last record...

-Original Message-
From: John McKown 
To: General Discussion of SQLite Database 
Sent: Mon, Dec 8, 2014 9:18 am
Subject: Re: [sqlite] sqlite bugreport : unique index causes valid updates to 
fail


On Mon, Dec 8, 2014 at 8:15 AM, Marc L. Allen 
wrote:

> I am like you, Gwendal, in that I don't like that behavior in SQLite;
> however, not liking it doesn't make it a bug.
>

On another of my forums, this is called a BAD - Broken, As Designed. As
opposed to the normal WAD - Working As Designed.

-- 
The temperature of the aqueous content of an unremittingly ogled
culinary vessel will not achieve 100 degrees on the Celsius scale.

Maranatha! <><
John McKown


Re: [sqlite] sqlite bugreport : unique index causes valid updates to fail

2014-12-08 Thread J T
Might have another work around.

update page set position=position + 1 where designation=(select designation 
from page where book='1' order by position desc)

and then insert your page.

Please see if that'll work. I tested it, but your results may differ.



 

 

 

-Original Message-
From: RSmith 
To: General Discussion of SQLite Database 
Sent: Mon, Dec 8, 2014 9:15 am
Subject: Re: [sqlite] sqlite bugreport : unique index causes valid updates to 
fail



On 2014/12/08 15:58, Gwendal Roué wrote:
> I'm new to this mailing list, and I won't try to push my opinion, which
> is: yes this is a bug, and this bug could be fixed without introducing any
> regression (since fixing it would cause failing code to suddenly run, and
> this has never been a compatibility issue). Thank you all for your support
> and explanations. The root cause has been found, and lies in the
> constraint-checking algorithm of sqlite. I have been able to find a work
> around that is good enough for me. Now the subject deserves a rest, until,
> maybe, someday, one sqlite maintainer who is not attached to the
> constraint-checking algorithm fixes it. Have a nice day, Gwendal Roué

Your new-ness is irrelevant; if you have a worthy argument it deserves being
heard. To that end, let me just clarify that nobody was saying the idea of
deferring the constraint checking is invalid or ludicrous (at least I had no
such intention), and you make a valid point, especially since most other DB
engines do work as you suggest - and this will be fixed in SQLite4 I
believe, where backward-compatibility is not an issue.

The reason I (and others) will say it isn't a bug is because it isn't
working differently than is intended, or more specifically, than is
documented. It works exactly as described - whether you or I agree with that
paradigm or not is up for discussion, but that does not make it a "bug" as
long as it works as described.

I hope the work-around you found works great!





Re: [sqlite] sqlite bugreport : unique index causes valid updates to fail

2014-12-08 Thread J T
Sorry, I wasn't focused on what I was looking at. Though, you said you
already tried the ORDER BY without success, which would have been my next
suggestion or a clarification of my first: you should be able to update the
rows from the end down to the page that would come after your insertion
(update pages set position=position + 1 where book=0 order by position desc)
and then insert the new page at the desired position. But if that's not
working, I have to agree with your opinion of it being a bug.

 

 

-Original Message-
From: Gwendal Roué 
To: General Discussion of SQLite Database 
Sent: Mon, Dec 8, 2014 8:40 am
Subject: Re: [sqlite] sqlite bugreport : unique index causes valid updates to 
fail


J T,

I did provide a sequence of queries that reliably reproduce the issue (see
below, from the first CREATE to the last UPDATE). There is no trigger
involved, as far as I know. Forgive me, but I don't see how I could use your
advice.

My work-around has been to destroy the unique index and then re-create it
after the update; a sketch follows below. This solution is good enough, as
my table is not that big, and the "pure" code path remains intact, with only
two inserted statements that are easily described and commented.
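
For reference, a hedged sketch of that work-around, using the schema from
the queries below:

DROP INDEX pagination;
UPDATE pages SET position = position + 1 WHERE book_id = 0 AND position >= 1;
CREATE UNIQUE INDEX pagination ON pages(book_id, position);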

Gwendal Roué

> On 8 Dec 2014 at 14:24, J T  wrote:
> 
> Try having your cascade occur before the row is created, updated or deleted.
> 
> http://www.sqlite.org/lang_createtrigger.html
> 
> 
> 
> 
> 
> 
> 
> -Original Message-
> From: Richard Hipp 
> To: General Discussion of SQLite Database 
> Sent: Mon, Dec 8, 2014 8:14 am
> Subject: Re: [sqlite] sqlite bugreport : unique index causes valid updates to 
fail
> 
> 
> On Mon, Dec 8, 2014 at 4:55 AM, Gwendal Roué  wrote:
> 
>> Hi,
>> 
>> Unique indexes make some valid update queries fail.
>> 
>> Please find below the SQL queries that lead to the unexpected error:
>> 
>> -- The `books` and `pages` tables implement a book with several pages.
>> -- Page ordering is implemented via the `position` column in the pages
>> table.
>> -- A unique index makes sure two pages do not share the same position.
>> CREATE TABLE books (
>>id INT PRIMARY KEY
>>)
>> CREATE TABLE pages (
>>book_id INT NOT NULL REFERENCES books(id) ON DELETE CASCADE ON
>> UPDATE CASCADE,
>>position INT
>> )
>> CREATE UNIQUE INDEX pagination ON pages(book_id, position)
>> 
>> -- Let's populate the tables with a single book and three pages:
>> INSERT INTO books VALUES (0);
>> INSERT INTO pages VALUES (0,0);
>> INSERT INTO pages VALUES (0,1);
>> INSERT INTO pages VALUES (0,2);
>> 
>> -- We want to insert a page between the pages at positions 0 and 1. So we
>> have
>> -- to increment the positions of all pages after page 1.
>> -- Unfortunately, this query yields an error: "columns book_id, position
>> are not unique"/
>> 
>> UPDATE pages SET position = position + 1 WHERE book_id = 0 AND position >=
>> 1;
>> 
>> The query should run without any error, since it does not break the unique
>> index.
>> 
> 
> Uniqueness is checked for each row change, not just at the end of the
> transaction.  Hence, uniqueness might fail, depending on the order in which
> the individual rows are updated.
> 
> 
>> 
>> Thank you for considering this issue.
>> 
>> Cheers,
>> Gwendal Roué
>> 
>> 
> 
> 
> 
> -- 
> D. Richard Hipp
> d...@sqlite.org
> 
> 
> 


 


Re: [sqlite] sqlite bugreport : unique index causes valid updates to fail

2014-12-08 Thread J T

 That should have been "trigger occur before...", pardon.

 

 

-Original Message-
From: Richard Hipp 
To: General Discussion of SQLite Database 
Sent: Mon, Dec 8, 2014 8:14 am
Subject: Re: [sqlite] sqlite bugreport : unique index causes valid updates to 
fail


On Mon, Dec 8, 2014 at 4:55 AM, Gwendal Roué  wrote:

> Hi,
>
> Unique indexes make some valid update queries fail.
>
> Please find below the SQL queries that lead to the unexpected error:
>
> -- The `books` and `pages` tables implement a book with several pages.
> -- Page ordering is implemented via the `position` column in the pages
> table.
> -- A unique index makes sure two pages do not share the same position.
> CREATE TABLE books (
> id INT PRIMARY KEY
> )
> CREATE TABLE pages (
> book_id INT NOT NULL REFERENCES books(id) ON DELETE CASCADE ON
> UPDATE CASCADE,
> position INT
> )
> CREATE UNIQUE INDEX pagination ON pages(book_id, position)
>
> -- Let's populate the tables with a single book and three pages:
> INSERT INTO books VALUES (0);
> INSERT INTO pages VALUES (0,0);
> INSERT INTO pages VALUES (0,1);
> INSERT INTO pages VALUES (0,2);
>
> -- We want to insert a page between the pages at positions 0 and 1. So we
> have
> -- to increment the positions of all pages after page 1.
> -- Unfortunately, this query yields an error: "columns book_id, position
> are not unique"/
>
> UPDATE pages SET position = position + 1 WHERE book_id = 0 AND position >=
> 1;
>
> The query should run without any error, since it does not break the unique
> index.
>

Uniqueness is checked for each row change, not just at the end of the
transaction.  Hence, uniqueness might fail, depending on the order in which
the individual rows are updated.


>
> Thank you for considering this issue.
>
> Cheers,
> Gwendal Roué
>



-- 
D. Richard Hipp
d...@sqlite.org


 


Re: [sqlite] sqlite bugreport : unique index causes valid updates to fail

2014-12-08 Thread J T
Try having your cascade occur before the row is created, updated or deleted.

http://www.sqlite.org/lang_createtrigger.html

 

 

 

-Original Message-
From: Richard Hipp 
To: General Discussion of SQLite Database 
Sent: Mon, Dec 8, 2014 8:14 am
Subject: Re: [sqlite] sqlite bugreport : unique index causes valid updates to 
fail


On Mon, Dec 8, 2014 at 4:55 AM, Gwendal Roué  wrote:

> Hi,
>
> Unique indexes make some valid update queries fail.
>
> Please find below the SQL queries that lead to the unexpected error:
>
> -- The `books` and `pages` tables implement a book with several pages.
> -- Page ordering is implemented via the `position` column in the pages
> table.
> -- A unique index makes sure two pages do not share the same position.
> CREATE TABLE books (
> id INT PRIMARY KEY
> )
> CREATE TABLE pages (
> book_id INT NOT NULL REFERENCES books(id) ON DELETE CASCADE ON
> UPDATE CASCADE,
> position INT
> )
> CREATE UNIQUE INDEX pagination ON pages(book_id, position)
>
> -- Let's populate the tables with a single book and three pages:
> INSERT INTO books VALUES (0);
> INSERT INTO pages VALUES (0,0);
> INSERT INTO pages VALUES (0,1);
> INSERT INTO pages VALUES (0,2);
>
> -- We want to insert a page between the pages at positions 0 and 1. So we
> have
> -- to increment the positions of all pages after page 1.
> -- Unfortunately, this query yields an error: "columns book_id, position
> are not unique"/
>
> UPDATE pages SET position = position + 1 WHERE book_id = 0 AND position >=
> 1;
>
> The query should run without any error, since it does not break the unique
> index.
>

Uniqueness is checked for each row change, not just at the end of the
transaction.  Hence, uniqueness might fail, depending on the order in which
the individual rows are updated.


>
> Thank you for considering this issue.
>
> Cheers,
> Gwendal Roué
>



-- 
D. Richard Hipp
d...@sqlite.org


 


Re: [sqlite] How to Verify (know that they are correct) Queries in SQLite GUI Manager Firefox add-on?

2014-12-08 Thread J T
The questions you have to ask are:

Are the rows I'm returning identifiable by a unique id -- typically the row
id, but also unique identifiers like ISBN for books, Employee ID for
employees, etc.? If you find duplicates of what should be a unique id in a
table, then it's probably a sign the data is bad (two books with the same
ISBN, two employees with the same Employee ID). Of course, the other
possibility is that the database wasn't normalized, or that the standard
operations (Create, Read, Update, Delete) weren't used in a logical fashion
(for instance, leaving out a way to update or delete employees, thus making
it impossible to rename an employee who has changed their name).

When I search for this author, are the books returned normalized against the
authors table? That is, is there an identifier shared between the tables
that allows one table to be searched in relation to the other? (The
relational part of databases.)

The other thing is finding external sources to verify against, or performing 
tests as mentioned by other members of this list.

Create a test author.
Create test books by the test author.
Do you get only the books you entered for that author?
If not, why not?
If so, can you repeat the results?

Another thing to look at is your queries; a verification sketch follows the
forms below.

Select [fields] from [table] where [condition]
Insert into [table] ([columns]) values ([value for each column])
Delete from [table] where [condition]
Update [table] set [field]=[value], [field2]=[value2] ... where [condition]
Select [fields] from [table] where [condition] limit [rows to skip],[rows to 
return]
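
A hedged sketch of the test-data verification described above (table and
column names are made up for illustration):

CREATE TABLE authors(id INTEGER PRIMARY KEY, firstname TEXT, lastname TEXT);
CREATE TABLE books(id INTEGER PRIMARY KEY, title TEXT,
                   author_id INT REFERENCES authors(id));
INSERT INTO authors(firstname, lastname) VALUES ('Fred', 'Smith');
INSERT INTO books(title, author_id) VALUES ('Test Book 1', 1), ('Test Book 2', 1);

-- Verification: exactly the two rows just entered should come back.
SELECT count(*)
  FROM books b JOIN authors a ON b.author_id = a.id
 WHERE a.firstname = 'Fred' AND a.lastname = 'Smith';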


If you have doubts about the accuracy of the tool you're using, there are
free SQLite managers out there.

And then there's always the last option, which is reading the file manually.
This last requires a bit more understanding of the database engine itself,
as you need to be able to identify what type is supposed to be where by the
bytes of the file, and you would probably have to write a program to do this
programmatically instead of trying to do it manually.



 

 

 

-Original Message-
From: Dwight Harvey 
To: sqlite-users 
Sent: Sun, Dec 7, 2014 9:24 pm
Subject: [sqlite] How to Verify (know that they are correct) Queries in SQLite 
GUI Manager Firefox add-on?


I am a student with no Tech or IT background what so ever.

I am new to Databases and IT in general.

I am taking an accelerated class in database basics and within the last
three weeks I just learned what databases were.

I know very little and Databases are complex and intimidating.

I figured out how to run queries, but I don't know if the results are
correct/accurate - that is, whether the 'RUN' results match what I requested.

How do you 'VERIFY' your query results?


My instructor wants me to explain how do I KNOW that the records are
accurate. Here is an example of what is expected in the assignment...

 *VERIFICATION:  *What is verification?  Each time you retrieve data, you
should ask yourself, "How do I know I selected the correct data?"   For
example, if you were asked to pull all records written by an author named
Fred Smith, your query might be based on last name equal to Smith.
However, you might get records for someone with the first name of Fred,
Mary or Ginger.   What would you do to ensure you are pulling only Fred?
The person who has requested the data will always want assurance from you
that you are 100% positive you pulled the correct records.  Look at the
records returned and always ask yourself: did I pull the correct records?
How would I verify it?

"Capture each query, number of records returned and *explain your
validation of the query.*" Example:

/** First query 1. List all employees **/
SELECT dbo.Firstname, dbo.Lastname
FROM dbo.employees
--records returned: 24
*--Validation:  I did a quick listing of top 200 records and 4 were
returned*.


Re: [sqlite] Search for text in all tables

2014-12-04 Thread J T
Dominique,

Why not get a column count (datareader.fieldcount (C#) or 
cursor.getColumnCount() (Java/Android))?

From there you should be able to simply do a

try {
    str = getString(columnIndex);
    checkValue(str);
}
catch (Exception e) {
    // wasn't a string, or the check (and replace) failed
    // you may want to catch different exceptions
}





 

 

 

-Original Message-
From: Dominique Devienne 
To: General Discussion of SQLite Database 
Sent: Thu, Dec 4, 2014 6:57 am
Subject: Re: [sqlite] Search for text in all tables


On Thu, Dec 4, 2014 at 11:45 AM, Baruch Burstein 
wrote:

> On Thu, Dec 4, 2014 at 12:18 PM, Paul Sanderson <
> > If you are a coder then it is a relatively straight forward process
> > Loop through each table
> >Loop through each column
>
> This is the part I am having trouble with. I can loop through tables using
> sqlite3_master, but how do I loop through columns? Parse the schema?


http://www.sqlite.org/pragma.html#pragma_table_info  --DD


Re: [sqlite] General R*Tree query

2013-12-18 Thread Brian T. Carcich
On Wed, Dec 18, 2013 at 1:53 AM, Roman Fleysher <
roman.fleys...@einstein.yu.edu> wrote:

> Perhaps this is a weird way for me to get deeper knowledge of R trees, and
> because I vaguely remember that Tycho refers to a specific epoch in which
> coordinates are defined, but would it be possible to search an R tree using
> a cone, i.e. stars within a cone of a certain degree around a given star?
> This would require a trigonometric calculation before a comparison can be
> made, but can be done in a single comparison.
>
> Or, since RA and DEC coordinates are not area preserving (nor distance) --
> i.e. angle between stars at DEC =0 is bigger than angle between stars at
> DEC=80 when they are the same delta RA apart -- then maybe instead of
> defining rectangular FOV in RA and DEC one should be defining rectangular
> FOV in DEC, sin(RA)? Then one would not need two searches.
>
> The goal is to find neighbors to a given star defined roughly by some
> metric? Since there's nothing magical in RA, DEC coordinates, the metric
> could use some other coordinates?



Every [RA,DEC] pair resolves to a unit vector in Cartesian coordinate space
i.e. an [X,Y,Z] triplet on the surface of a unit sphere; that would be a
continuous metric without the RA=0=360 issue.  I don't see why the R*Tree
could not be set up with X, Y, and Z, plus magnitude limits; the set of
nodes is hollow in a 3D sense so the first-level non-leaf nodes would have
a lot of empty space, but I don't think that matters; I've been thinking
about doing it this way for some time.  For my app I already store XYZs in
the outer, non-R*Tree table because all final comparisons have to be in
Cartesian space anyway.

But in general the search region is so small that the cosine(DEC)
dependence of distance per degree of RA is effectively constant for any one
search, and an [RA,DEC,Mag] tree should be "good enough" because it pares
down the search space from 2.5M stars in Tycho-2 to a few hundred very
quickly, with the caveat that there is a special case near RA=0=360.

In any event the R*Tree is not going to do the final geometric comparison,
rather it reduces the number of stars that need that comparison, and that's
the goal.

Btw, if you want to see something cool, look at chapter 4 of Dustin Lang's
thesis (www.astro.princeton.edu/~dstn/lang-thesis.pdf); the whole paper is
cool, but I really enjoy that chapter.

-b


Re: [sqlite] General R*Tree query

2013-12-17 Thread Brian T. Carcich
On Tue, Dec 17, 2013 at 3:57 PM, Roman Fleysher <
roman.fleys...@einstein.yu.edu> wrote:

>
> Since coordinate system is spherical, how do you tell that RA=23:59 and
> RA=00:01 are next to each other using usual comparisons?


I don't; usual comparisons won't work so I do two comparisons:

I am usually looking for stars within a convex field of view (FOV),
typically a frustum with a rectangular footprint, so I determine if and
where RA=0=360 crosses that footprint, and break the FOV into two pieces,
from 0<=RA<=loRA and hiRA<=RA<=360, so loRA becomes hira in one search and
hiRA becomes lora in the other.

There are only three cases:  zero, one or two crossings.  Zero crossings
means I can do everything in one SELECT; one crossing means either one of
the poles is in the FOV and I search RA=0 to 360; DEC=someDEC to +/-90, or
the FOV touches RA=0(=360) from one side or the other, which reduces to the
zero case; two crossings means the poles are not in the FOV and I can do
two searches as mentioned above, from 0 up to someLowRA and from 360 down
to someHighRA.
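
A hedged sketch of that two-SELECT split against the index r-tree, using the
table and column names from the SELECT shown in the next message below (the
numeric bounds are invented for illustration):

SELECT offset FROM tyc2indexrtree        -- high side: someHighRA..360
 WHERE hira > 350.0 AND lora < 360.0
   AND hidec > -5.0 AND lodec < 5.0
UNION ALL
SELECT offset FROM tyc2indexrtree        -- low side: 0..someLowRA
 WHERE hira > 0.0 AND lora < 10.0
   AND hidec > -5.0 AND lodec < 5.0;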

There are some edge cases but that is basically it.

I actually handle "two or more crossings" cases the same as two cases, even
though I don't think "more" can happen with a convex FOV footprint.  For
any edge (segment of the great circle between two vertices) of the FOV that
crosses RA=0, which is easily determined since I have the vertices in XYZ
coordinates, I insert a vertex in the edge at the crossing, and then
recurse with subsets of vertices split across RA=0.


Re: [sqlite] General R*Tree query

2013-12-17 Thread Brian T. Carcich
On Tue, Dec 17, 2013 at 3:51 PM, Dan Kennedy  wrote:

> On 12/18/2013 12:49 AM, Brian T. Carcich wrote:
>
>> [...]
>
> Points are fine. [...]
> Is it working now? How many more stars do you have data for?


Excellent, thanks for the info!

I forgot to mention that we do perform searches using magnitude.

Yes, it is working now. I do the normal SQLite3 R*Tree INNER JOIN to get
from the r-tree table (tyc2indexrtree), restricted to regions overlapping
the user-supplied RA,DEC limits (hira = high RA limit; lodec = low DEC
limit; etc.), to the index table (tyc2index), and then do another INNER JOIN
on the index table's start and end offsets against the offsets in the main
catalog table (tyc2catalog_uvs), so it all happens in one call.  The beauty
is that all the work is done up front when I load the data from the star
catalog, and then the SELECT does the rest.  Also, the approach should work
for any catalog that has RA, DEC and magnitude, which almost all catalogs do.

I think the SELECT is in the GitHub repo ... yeah, here it is:


SELECT tyc2catalog_uvs.offset, tyc2catalog_uvs.x, tyc2catalog_uvs.y,
       tyc2catalog_uvs.z, tyc2catalog_uvs.mag
  FROM tyc2indexrtree
 INNER JOIN tyc2index
    ON tyc2indexrtree.offset=tyc2index.offset
 INNER JOIN tyc2catalog_uvs
    ON tyc2index.catalogstart<=tyc2catalog_uvs.offset
   AND tyc2index.catalogend>tyc2catalog_uvs.offset
   AND tyc2catalog_uvs.mag<?
   AND tyc2indexrtree.hira>?
   AND tyc2indexrtree.lora<?
   AND tyc2indexrtree.hidec>?
   AND tyc2indexrtree.lodec<?
 ORDER BY tyc2catalog_uvs.mag asc;


[sqlite] General R*Tree query

2013-12-17 Thread Brian T. Carcich
I'm working on an SQLite solution to get at star catalogs; they are usually
searched via Right Ascension (RA), Declination (DEC), and magnitude (mag).
 RA,DEC is a spherical coordinate system to specify a star position on-sky;
magnitude is related to star brightness.

What I have so far is here:

   https://github.com/drbitboy/Tycho2_SQLite_RTree


I started with the Tycho-2 star catalog.  It comprises 2.5 million stars in
a flat ASCII, fixed-width catalog file (actually two files but ignore that
for now), and an index file (map) of ~10k small RA-DEC regions, with an
average of ~250 stars in each region.  The regions do not overlap, and all
the stars in any one region are in contiguous lines in the catalog file.

The index file does not implement any grouping or sorting by magnitude.
 Each index region refers to

A) a contiguous region on-sky defined by a min-max RA pair and a
min-max DEC pair.

B)  a contiguous range of the lines (stars) in the flat file that are
within that region.

So the data in the index file are a reasonable starting point for an R*Tree
in SQLite3.  I put the index file data into the virtual table using the RA
and DEC limits for each region as the two min-max pairs of columns in the
table, and the index table, referenced by the primary key of the virtual
table, contains the starting and ending+1 indices (offsets actually) of the
stars in the flat catalog file for each region.

So I use the R*Tree module to get a fast lookup into the index table,
returning index regions that overlap an input RA and DEC min-max pair, then
step through the catalog lines for each of those regions.
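
For concreteness, a minimal sketch of that layout (the names match the tyc2
tables shown in the replies above; the exact column types are assumptions):

  CREATE VIRTUAL TABLE tyc2indexrtree USING rtree(
    offset,          -- rowid, shared with tyc2index
    lora, hira,      -- min/max RA of the region
    lodec, hidec     -- min/max DEC of the region
  );

  CREATE TABLE tyc2index(
    offset INTEGER PRIMARY KEY,  -- same key as the rtree row
    catalogstart INTEGER,        -- offset of the region's first star
    catalogend INTEGER           -- offset one past the region's last star
  );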

Here's my question:  is there any advantage to skipping the index step and
putting the star catalog data into the virtual table itself?  One advantage
is that I could include the magnitude in the rtree table.

The reason I ask is that the rtree table uses min-max pairs, but each star is a
point so the min and max are equal for each star.  Would that break any
implicit R*Tree rules or negate any efficiencies?

Thanks,

Brian Carcich


Re: [sqlite] SQLite Provenance

2012-07-06 Thread Gavin T Watt

I totally agree with you about the server location being of little
interest, really, and that having access to the source is much more
important. However, my customer wanted me to check this anyway, and you've
answered the mail.

BTW, do you know if the SQLite team runs any kind of static code analysis
on the source?


Gavin Watt, CISSP
Sr. Prin. Sys. Engr.
Information Assurance
Network Centric Systems (NCS)

714-446-3104 (office)
714-234-8869 (BB)
714-446-4136 (fax)
gw...@raytheon.com

120 S. St. College Blvd..
Mail Station: FU/679/H118
Brea, Ca.  92821, USA



From: Roger Binns 
To: General Discussion of SQLite Database 
Date: 07/06/2012 03:05 PM
Subject: Re: [sqlite] SQLite Provenance
Sent by: sqlite-users-boun...@sqlite.org




On 06/07/12 14:58, Gavin T Watt wrote:
> ... the provenance of SQLite for security reasons.

Where the server is is of little interest.  It would be good if the team
actually signed the release in some way; then at least you would know it
was what they released.

What is of more importance is that you have access to the code (which
everyone does) and where the code came from.  Especially note the first
and third paragraphs here:

  http://www.sqlite.org/copyright.html

Roger





Re: [sqlite] SQLite Provenance

2012-07-06 Thread Gavin T Watt

Thanks for the information.

Not a jingoistic quest, but we're using SQLite in a system for which there
is an interest in the provenance of SQLite for security reasons. With the
server in Dallas, we're golden!


Gavin Watt, CISSP
Sr. Prin. Sys. Engr.
Information Assurance
Network Centric Systems (NCS)

714-446-3104 (office)
714-234-8869 (BB)
714-446-4136 (fax)
gw...@raytheon.com

120 S. St. College Blvd..
Mail Station: FU/679/H118
Brea, Ca.  92821, USA



From: Roger Binns 
To: General Discussion of SQLite Database 
Date: 07/06/2012 11:01 AM
Subject: Re: [sqlite] SQLite Provenance
Sent by: sqlite-users-boun...@sqlite.org




On 06/07/12 08:29, Gavin T Watt wrote:
> Can anyone tell me where (what country) the SQLite server(s) are
> located?

Go to http://centralops.net and enter the site of interest into domain
dossier.

You will see that the sqlite.org domain name is openly registered (not
hidden by a privacy/anonymity registrar) and that the IP address it is
hosted on belongs to Linode who obtained it as a block from ThePlanet
internet services.

If you tick traceroute at the top then you can deduce from the traceroute
(at the bottom) that particular IP address is in Dallas.

If this is some sort of jingoistic quest then all companies involved are
American.

Roger





[sqlite] SQLite Provenance

2012-07-06 Thread Gavin T Watt


Can anyone tell me where (what country) the SQLite server(s) are located?


Gavin Watt, CISSP
Sr. Prin. Sys. Engr.
Information Assurance
Network Centric Systems (NCS)

714-446-3104 (office)
714-234-8869 (BB)
714-446-4136 (fax)
gw...@raytheon.com

120 S. St. College Blvd..
Mail Station: FU/679/H118
Brea, Ca.  92821, USA


Re: [sqlite] Multiple connections to in-memory database

2012-07-05 Thread T Ü
Thank you so much Pavel. I will try with the new version.




 From: Pavel Ivanov 
To: T Ü  
Cc: General Discussion of SQLite Database  
Sent: Thursday, July 5, 2012 4:03 PM
Subject: Re: [sqlite] Multiple connections to in-memory database
 
So this feature shouldn't work for you. From my first message:

> But this possibility was
> introduced in SQLite 3.7.13. So your asp.net provider should be
> compiled with the latest version of SQLite, otherwise it won't work.


Pavel


On Thu, Jul 5, 2012 at 8:56 AM, T Ü  wrote:
> It returns 3.6.23.1
>
> 
> From: Pavel Ivanov 
> To: T Ü 
> Cc: General Discussion of SQLite Database 
> Sent: Thursday, July 5, 2012 3:40 PM
>
> Subject: Re: [sqlite] Multiple connections to in-memory database
>
> On Thu, Jul 5, 2012 at 8:37 AM, T Ü  wrote:
>> By trying I found out that SQLiteConnection("Data
>> Source=:memory:;cache=shared"); worked.
>> In a single aspx.page at cs code, first I open an in-memory database
>> connection
>>        SQLiteConnection conn = new SQLiteConnection ( "Data
>> Source=:memory:;cache=shared" );
>>        conn.Open();
>> than create table and insert some data
>> then without closing that connection open another connection in the
>> sameway,
>> but when I try to select the rows of the table that I created in the
>> previous table, I get no such table error.
>> What am I doing wrong???
>
> Please execute "SELECT sqlite_version()" in your cs code and tell us
> the result of it.
>
> Pavel
>
>


Re: [sqlite] Multiple connections to in-memory database

2012-07-05 Thread T Ü
It returns 3.6.23.1




 From: Pavel Ivanov 
To: T Ü  
Cc: General Discussion of SQLite Database  
Sent: Thursday, July 5, 2012 3:40 PM
Subject: Re: [sqlite] Multiple connections to in-memory database
 
On Thu, Jul 5, 2012 at 8:37 AM, T Ü  wrote:
> By trying I found out that SQLiteConnection("Data
> Source=:memory:;cache=shared"); worked.
> In a single aspx.page at cs code, first I open an in-memory database
> connection
>         SQLiteConnection conn = new SQLiteConnection ( "Data
> Source=:memory:;cache=shared" );
>         conn.Open();
> than create table and insert some data
> then without closing that connection open another connection in the sameway,
> but when I try to select the rows of the table that I created in the
> previous table, I get no such table error.
> What am I doing wrong???

Please execute "SELECT sqlite_version()" in your cs code and tell us
the result of it.

Pavel


Re: [sqlite] Multiple connections to in-memory database

2012-07-05 Thread T Ü
By trying I found out that SQLiteConnection("Data
Source=:memory:;cache=shared"); worked.
In a single aspx page's cs code, first I open an in-memory database connection

        SQLiteConnection conn = new SQLiteConnection ( "Data
Source=:memory:;cache=shared" );
        conn.Open();

then create a table and insert some data;
then, without closing that connection, open another connection in the same way,
but when I try to select the rows of the table that I created over the
previous connection, I get a "no such table" error.
What am I doing wrong?

 

 From: Pavel Ivanov 
To: T Ü ; General Discussion of SQLite Database 
 
Sent: Thursday, July 5, 2012 3:21 PM
Subject: Re: [sqlite] Multiple connections to in-memory database
 
On Thu, Jul 5, 2012 at 7:46 AM, T Ü  wrote:
> I have an asp.net application.
> I open a sqlite in-memory connection with SQLiteConnection conn = new 
> SQLiteConnection ( "Data Source=:memory:" ); command.
> I read that by using cache=shared parameter, I can make that in-memory 
> database reachable from other connections.
>
> 1.What is the way of applying cache=shared parameter in this type of 
> connection? SQLiteConnection conn = new SQLiteConnection ( "Data 
> Source=:memory:?cache=shared" ); is not working?

I think you should write SQLiteConnection("Data
Source=file::memory:?cache=shared"). But this possibility was
introduced in SQLite 3.7.13. So your asp.net provider should be
compiled with the latest version of SQLite, otherwise it won't work.

> 2.What is the way of creating a new connection for accessing the previously 
> opened in-memory database?

You should create new connection the same way as previously opened
one, i.e. SQLiteConnection("Data Source=file::memory:?cache=shared").


Pavel


[sqlite] Multiple connections to in-memory database

2012-07-05 Thread T Ü
I have an asp.net application.
I open an SQLite in-memory connection with the command SQLiteConnection conn
= new SQLiteConnection ( "Data Source=:memory:" );.
I read that by using the cache=shared parameter, I can make that in-memory
database reachable from other connections.

1. What is the way of applying the cache=shared parameter in this type of
connection? SQLiteConnection conn = new SQLiteConnection ( "Data
Source=:memory:?cache=shared" ); is not working.

2. What is the way of creating a new connection for accessing the previously
opened in-memory database?

Thanks


[sqlite] How Execute Joint Queries In Sqlite?

2010-12-07 Thread Arunkumar T
I wrote a FULL OUTER JOIN query in SQLite but got a "not supported"
message. What should I do to make this work?

Can you help me?

Regards

Arunkumar
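
A common workaround, sketched here with hypothetical tables a(id, x) and
b(id, y): emulate FULL OUTER JOIN with a LEFT JOIN plus the rows of the
reversed LEFT JOIN that found no match:

  -- all rows of a, matched against b where possible ...
  SELECT a.id, a.x, b.y
    FROM a LEFT JOIN b ON a.id = b.id
  UNION ALL
  -- ... plus the rows of b that have no partner in a
  SELECT b.id, NULL AS x, b.y
    FROM b LEFT JOIN a ON a.id = b.id
   WHERE a.id IS NULL;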


Re: [sqlite] ODBC Driver

2010-11-06 Thread christian.wer...@t-online.de
> I do not seem to be able to pass a multiple-insert statement through
> the ODBC driver using SQLExecDirect(...).  I have tried with BEGIN and
> COMMIT TRANSACTION.  Does anyone have any ideas?

The SQLite ODBC driver allows only one SQL statement per SQLExecDirect()
and SQLPrepare(). When using transactions the right sequence of
calls is SQLSetConnectAttr(...SQL_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF),
SQLExecDirect() and finally SQLEndTran() to commit or rollback.
There's no need to send BEGIN/COMMIT statements through SQLExecDirect()

Hope that helps,
Christian





Re: [sqlite] Deadlock with two local instances

2009-10-27 Thread Chris T
Thanks!  I ran the example code and it seems like every UPDATE fails
with errors like the following:

SqlStep Timeout on handle: 8 (rc = 6)
SqlStep tries on handle 8: 200
BeginTrans Timeout/Error on handle:  8, Errorcode = 6
Write Thread: DB is busy! tries = 142 handle = 8

Looking at the database contents it looks like none of the updates
were successful at all (though I didn't look extremely carefully).

Are these errors normal?

On Tue, Oct 27, 2009 at 12:55 AM, Marcus Grimm  wrote:
>> Another odd thing is that when I call sqlite3_reset on the prepared
>> statement, it also returns SQLITE_BUSY.  Should I only reset the
>> statement when it has been executed successfully?
>
> one possible approach when getting SQLITE_BUSY is to
> retry the sqlite3_step call until it finally gets thru.
>
> note that sqlite3_reset just returns the same error
> code as the previous sqlite3_step call.
>
> attachments don't work on the list, you will need
> find another way to provide your example code.
>
> you may also take a look at http://www.sqlite.org/cvstrac/wiki?p=SampleCode
> for the busy handling.
>
> hth
>
> Marcus Grimm
>
>>
>> On Mon, Oct 26, 2009 at 2:40 PM, Chris T  wrote:
>>> I'm new to sqlite (and sql in general, actually) and came across
>>> something puzzling.
>>>
>>> I wrote a test program statically linked with the amalgamated sqlite
>>> code.  When I run a single instance, everything is fine.  When I start
>>> a second instance in the same directory they both deadlock.  Every
>>> call to sqlite3_step returns SQLITE_BUSY.
>>>
>>> The source code to my test program is attached.  It was written in
>>> Visual Studio, so feel free to remove the reference to windows.h and
>>> change the calls to Sleep( ) if you don't use Windows.
>>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] Deadlock with two local instances

2009-10-26 Thread Chris T
Another odd thing is that when I call sqlite3_reset on the prepared
statement, it also returns SQLITE_BUSY.  Should I only reset the
statement when it has been executed successfully?

On Mon, Oct 26, 2009 at 2:40 PM, Chris T  wrote:
> I'm new to sqlite (and sql in general, actually) and came across
> something puzzling.
>
> I wrote a test program statically linked with the amalgamated sqlite
> code.  When I run a single instance, everything is fine.  When I start
> a second instance in the same directory they both deadlock.  Every
> call to sqlite3_step returns SQLITE_BUSY.
>
> The source code to my test program is attached.  It was written in
> Visual Studio, so feel free to remove the reference to windows.h and
> change the calls to Sleep( ) if you don't use Windows.
>


[sqlite] Deadlock with two local instances

2009-10-26 Thread Chris T
I'm new to sqlite (and sql in general, actually) and came across
something puzzling.

I wrote a test program statically linked with the amalgamated sqlite
code.  When I run a single instance, everything is fine.  When I start
a second instance in the same directory they both deadlock.  Every
call to sqlite3_step returns SQLITE_BUSY.

The source code to my test program is attached.  It was written in
Visual Studio, so feel free to remove the reference to windows.h and
change the calls to Sleep( ) if you don't use Windows.


Re: [sqlite] DateTime comparison with custom format

2009-08-21 Thread t-master



Igor Tandetnik wrote:
> 
> t-master  wrote:
>> I have string in a table representing a DateTime.
>> The format is 21.08.2009 00:25:00
> 
> I recommend you change the format. Yours is custom-designed to make your 
> life miserable.
> 
>> And I would like to compare it to "now"
> 
> select case when
>     substr(T, 7, 4)||'-'||substr(T, 4, 2)||'-'||substr(T,1,2)||substr(T, 
> 11) > datetime('now')
> then 'future' else 'past' end
> from (select '21.08.2009 00:25:00' as T);
> 
> Igor Tandetnik 
> 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 

Hi

the problem is that this db is created by another program and I don't have
access to change the format (btw, it's the standard datetime format in
Germany, not custom-designed :-P)




[sqlite] DateTime comparison with custom format

2009-08-21 Thread t-master

Hi
I have a string in a table representing a DateTime.
The format is 21.08.2009 00:25:00,
and I would like to compare it to "now".
How can I do this?



Re: [sqlite] SQLite Import Tool

2008-11-13 Thread Keith T
has anyone expressed an interest in helping out? it does not seem to be all
that difficult, notwithstanding your mention of time-out issues.

please let me know a bit more detail about the db you are exporting, and we
can see if it is a project we can do quickly for you.
thanks-
Keith T
the SLK Groups


On Wednesday 12 November 2008 11:30:48 Baskaran Selvaraj wrote:
> Thanks.  SQLite ODBC driver takes hours for data transfer and most of the
> time it is not successful.
>
> > Date: Wed, 12 Nov 2008 12:56:19 -0200
> > From: [EMAIL PROTECTED]
> > To: sqlite-users@sqlite.org
> > Subject: Re: [sqlite] SQLite Import Tool
> >
> > Hi
> >
> > You can use SQLite ODBC Driver and SQL Management Studio Data
> > Transfer Wizard to perform this action.
> >
> > On Wed, Nov 12, 2008 at 12:45 PM, Baskaran Selvaraj
> > <[EMAIL PROTECTED]> wrote:
> > >
> > > Hi All,
> > > This is Baskaran and I am looking for a vendor tool to automate the
> > > import process.
> > > We have an application which is written to use SQLite database.
> > > Right now, I import the data into SQLite database from SQL Server
> > > 2005 manually. Looking for vendor software which can be used to
> > > automate the process of importing the data from SQL Server 2005 to
> > > SQLite database on a daily basis.
> > > Thanks
> > > Baskaran Selvaraj, DBA




Re: [sqlite] Value between changes

2008-03-14 Thread Bjørn T . Nøstdahl
> Can you give an example of the output?  For example, do you want to
> know the difference between A0001 and A0002 or between to records with
> the same code?

20080314100030 A0001 (9min 30sec to next status change)
20080314101000 A0002 (7min to next status change)
20080314101700 A (3min to next status change)
20080314102000 A0002 (1min to next status change)
20080314102100 A

Expected output: (The total time within that status)
A 300 (3min)
A0001 930 (9min 30sec)
A0002 800 (7min + 1min)

One problem is sadly that the date/time is stored as text, and this will 
make the calculations even more difficult. 
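
A sketch of one way to do it despite the text storage (the table name and
columns log(ts, code) are assumptions; the result is in plain seconds rather
than the MMSS-style figures above): rewrite the packed timestamp into ISO
form so strftime() can parse it, pair each row with the next one, and sum
per status code:

  SELECT code,
         SUM(strftime('%s', next_iso) - strftime('%s', iso)) AS seconds
    FROM (SELECT code,
                 substr(ts,1,4)||'-'||substr(ts,5,2)||'-'||substr(ts,7,2)||' '||
                 substr(ts,9,2)||':'||substr(ts,11,2)||':'||substr(ts,13,2) AS iso,
                 (SELECT substr(n.ts,1,4)||'-'||substr(n.ts,5,2)||'-'||substr(n.ts,7,2)||' '||
                         substr(n.ts,9,2)||':'||substr(n.ts,11,2)||':'||substr(n.ts,13,2)
                    FROM log n WHERE n.ts > log.ts
                   ORDER BY n.ts LIMIT 1) AS next_iso
            FROM log)
   WHERE next_iso IS NOT NULL
   GROUP BY code;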



[sqlite] flexible tables

2008-03-05 Thread r t
Dear SQLite list,

Is there a way to automatically create tables based on
imported text files of unknown field count and unknown
data types?  While I suspect this can be managed with
a shell script interacting with SQLite, perhaps there
is a more direct method?

Zotlud


  




Re: [sqlite] Sqlite insertion performance

2007-09-15 Thread Kefah T. Issa

Dear Kees and Richard,

Much appreciated.


I tried the ordered-urls insert; the results were better, but it still
takes progressively longer as the number of records increases.

A fundamental question to be asked here :

Shouldn't the time complexity (Big-O) of the insert operation be constant?

I even did a third test where the integer primary key is not auto-increment;
the same problem is observed.

Time complexity is clearly constant when there are no indexes at all (except
for the implicit auto-increment integer primary key).

But otherwise, the time taken increases steadily (if not exponentially) with
the number of existing records.

Acknowledging my ignorance on the subject, I really don't see a reason why
this is happening, except its being a potential bug or a performance
improvement opportunity.

Regards,
- Kefah.



On Saturday 15 September 2007 12:51:02 Kees Nuyt wrote:
> Hi Kefah,
>
> On Sat, 15 Sep 2007 04:43:46 +0300, you wrote:
> >Thanks Kees,
> >
> >In fact using integer instead of string gives very similar results.
> >
> >Dropping the unique index constraint and then creating it again when
> > needed sounds interesting, as insertion without the unique index is
> > satisfactory and constact over time.
>
> Thanks for the feedback.
>
> >I tried this, but got a trivial question :
> >When creating the unique index, sqlite gives me :
> >SQL error: indexed columns are not unique
> >
> >What should be done here?
>
> Apparently the data in the text column is not unique.
> That is the disadvantage of building the index after the
> insertions: the database can't exercise the contraints on your
> data, so you would have to do that yourself, for example by a
> sort --unique step. My second suggestion for speeding things was
> sorting the input data, so now you have two reasons for a sort.
>
> Such a data cleaning step will take considerable time, so time
> gained in inserting may be lost again in preprocessing.
> It might be better to use the database constraints, and live
> with the slow insertions. Your benchmarks will tell you what's
> best.
>
> >Thanks again,
> >- Kefah.
>
> Good luck.
>
> >On Saturday 15 September 2007 00:25:03 Kees Nuyt wrote:
> >> On Fri, 14 Sep 2007 23:20:53 +0300, you wrote:
> >> >Dear All,
> >> >
> >> >I have been struggling with the performance of insertion in sqlite.
> >> >
> >> >Here we have a very simple case :
> >> >
> >> >A table with an integer autoincrement primary key and a text field that
> >> > is unique.
> >> >
> >> >CREATE TABLE my (id PRIMARY KEY, url);
> >> >
> >> >CREATE UNIQUE INDEX myurl ON my(url);
> >> >
> >> >
> >> >My application requires inserting up to 10 million records in batches
> >> > of 20 thousand records.
> >> >
> >> >I use :
> >> >A. PRAGMA SYNCHRONOUS = OFF
> >> >B. Prepared Statement
> >> >C. setAutoCommit(false), then to true at the end of the batch.
> >> >
> >> >Using the above, the insertion starts off at a good speed, then
> >> > drastically slows down as more records are inserted.
> >> >
> >> >It goes like this :
> >> >
> >> >The first four insertions (first 4*20K -> 60K records)
> >> >0 took 786
> >> >1 took 944
> >> >2 took 1001
> >> >3 took 1303
> >> >
> >> >After the first 1 Million records
> >> >50 took 2560
> >> >51 took 1921
> >> >55 took 1900
> >> >53 took 3990
> >> >
> >> >After the 2nd million records
> >> >2 took 22393
> >> >3 took 16789
> >> >4 took 29419
> >> >5 took 13993
> >> >
> >> >By the end of the 3rd million records the insertion time goes up to
> >> > 30 seconds per 20K records.
> >> >
> >> >My app is running from a java code and I am using SqliteJDBC 0.37 (the
> >> > latest) on Sun JDK 1.6.0_02 on CentOS 5.0 4GB Ram, two duo core cpu
> >> > server with raid10 sata-II harddisk.
> >> >
> >> >
> >> >I know I might be stretching sqlite far beyond its limits, I just want
> >> > to verify that there is nothing more that can be done to make a case
> >> > for sqlite in this scenario. I am not sure whats the best next thing
> >> > to do either.
> >> >
> >> >Your feedback and input will be highly appreciated,
> >> >
> >> >- Kefah.
> >>
> >> Most probably the UNIQUE INDEX on the TEXT column is the
> >> culprit.
> >>
> >> My first try would be to create and fill the table first, and
> >> create the UNIQUE INDEX on the TEXT column afterwards.
> >>
> >> The second suggestion would be to INSERT the rows in sorted
> >> order, the sort key being the TEXT column.



-- 
Kefah T. Issa
Manager

>/. freesoft technologies llc
freesoft technologies, LLC.
Cell : +962 777 80 90 50
Office : +962 6 55 23 967
Fax : +962 6 55 61 967
Jabber IM (XMPP) : [EMAIL PROTECTED] 




Re: [sqlite] Sqlite insertion performance

2007-09-14 Thread Kefah T. Issa
Thanks Kees,

In fact using integer instead of string gives very similar results.

Dropping the unique index constraint and then creating it again when needed
sounds interesting, as insertion without the unique index is satisfactory and
constant over time.

I tried this, but got a trivial question : 
When creating the unique index, sqlite gives me : 
SQL error: indexed columns are not unique

What should be done here?

Thanks again,
- Kefah.
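
For what it's worth, a sketch of that error's usual cause and fix in SQL
(the staging table holding the raw urls is an assumption): the data itself
contains duplicate urls, so de-duplicate - and sort, per Kees' other
suggestion - while loading, and only then build the index:

  CREATE TABLE my (id INTEGER PRIMARY KEY, url);
  INSERT INTO my (url)
       SELECT DISTINCT url FROM staging ORDER BY url;
  CREATE UNIQUE INDEX myurl ON my(url);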


On Saturday 15 September 2007 00:25:03 Kees Nuyt wrote:
> On Fri, 14 Sep 2007 23:20:53 +0300, you wrote:
> >Dear All,
> >
> >I have been struggling with the performance of insertion in sqlite.
> >
> >Here we have a very simple case :
> >
> >A table with an integer autoincrement primary key and a text field that is
> >unique.
> >
> >CREATE TABLE my (id PRIMARY KEY, url);
> >
> >CREATE UNIQUE INDEX myurl ON my(url);
> >
> >
> >My application requires inserting up to 10 million records in batches of
> > 20 thousand records.
> >
> >I use :
> >A. PRAGMA SYNCHRONOUS = OFF
> >B. Prepared Statement
> >C. setAutoCommit(false), then to true at the end of the batch.
> >
> >Using the above, the insertion starts off at a good speed, then
> > drastically slows down as more records are inserted.
> >
> >It goes like this :
> >
> >The first four insertions (first 4*20K -> 60K records)
> >0 took 786
> >1 took 944
> >2 took 1001
> >3 took 1303
> >
> >After the first 1 Million records
> >50 took 2560
> >51 took 1921
> >55 took 1900
> >53 took 3990
> >
> >After the 2nd million records
> >2 took 22393
> >3 took 16789
> >4 took 29419
> >5 took 13993
> >
> >By the end of the 3rd million records the insertion time goes up to 30
> >seconds per 20K records.
> >
> >My app is running from a java code and I am using SqliteJDBC 0.37 (the
> > latest) on Sun JDK 1.6.0_02 on CentOS 5.0 4GB Ram, two duo core cpu
> > server with raid10 sata-II harddisk.
> >
> >
> >I know I might be stretching sqlite far beyond its limits, I just want to
> >verify that there is nothing more that can be done to make a case for
> > sqlite in this scenario. I am not sure whats the best next thing to do
> > either.
> >
> >Your feedback and input will be highly appreciated,
> >
> >- Kefah.
>
> Most probably the UNIQUE INDEX on the TEXT column is the
> culprit.
>
> My first try would be to create and fill the table first, and
> create the UNIQUE INDEX on the TEXT column afterwards.
>
> The second suggestion would be to INSERT the rows in sorted
> order, the sort key being the TEXT column.





[sqlite] Sqlite insertion performance

2007-09-14 Thread Kefah T. Issa

Dear All,

I have been struggling with the performance of insertion in sqlite.

Here we have a very simple case : 

A table with an integer autoincrement primary key and a text field that is 
unique.

CREATE TABLE my (id PRIMARY KEY, url);

CREATE UNIQUE INDEX myurl ON my(url);


My application requires inserting up to 10 million records in batches of 20 
thousand records.

I use : 
A. PRAGMA SYNCHRONOUS = OFF
B. Prepared Statement
C. setAutoCommit(false), then to true at the end of the batch.

Using the above, the insertion starts off at a good speed, then drastically 
slows down as more records are inserted.

It goes like this : 

The first four insertions (first 4*20K -> 60K records)
0 took 786
1 took 944
2 took 1001
3 took 1303

After the first 1 Million records
50 took 2560
51 took 1921
55 took 1900
53 took 3990

After the 2nd million records
2 took 22393
3 took 16789
4 took 29419
5 took 13993

By the end of the 3rd million records the insertion time goes up to 30
seconds per 20K records.

My app is running from a java code and I am using SqliteJDBC 0.37 (the latest) 
on Sun JDK 1.6.0_02 on CentOS 5.0 4GB Ram, two duo core cpu server with 
raid10 sata-II harddisk.


I know I might be stretching sqlite far beyond its limits; I just want to
verify that there is nothing more that can be done to make a case for sqlite
in this scenario. I am not sure what's the best next thing to do either.

Your feedback and input will be highly appreciated,

- Kefah.





P.S. here is a snippet of the java code :
Class.forName("org.sqlite.JDBC");
Connection connection =
DriverManager.getConnection("jdbc:sqlite:./my.db");

long total_time = 0;
int iterations = 50;
int records_per_iteration = 20000;

for(int i = 0; i < iterations; i++) { ...

Re: [sqlite] Re: Oddball query required

2007-06-30 Thread Lucas (a.k.a T-Bird or bsdfan3)

Thank you very much Igor.  This will do exactly what I need.

Igor Tandetnik wrote:


Lucas (a.k.a T-Bird or bsdfan3)
<[EMAIL PROTECTED]> wrote:


I am using SQLite in an application within a game where I have a table
of player nicknames and cumulative player scores (among other things).
How do I figure out at what (numeric) one of these records would be at
if the table were sorted, say, by the cumulative scores column?



select count(*) from tableName
where score > (select score from tableName where playerId = :player);

Igor Tandetnik











[sqlite] Oddball query required

2007-06-30 Thread Lucas (a.k.a T-Bird or bsdfan3)
I am using SQLite in an application within a game where I have a table
of player nicknames and cumulative player scores (among other things).
How do I figure out at what (numeric) position one of these records would be
if the table were sorted, say, by the cumulative scores column?  Or
should I just store that position in the database itself?
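
A sketch of the counting approach Igor Tandetnik posted in his reply above
(tableName, playerId and score are his placeholders; the rank is made
one-based here):

  SELECT 1 + (SELECT count(*)
                FROM tableName
               WHERE score > (SELECT score FROM tableName
                               WHERE playerId = :player)) AS rank;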







Re: [sqlite] Why do you use SQLite? Comments for an article needed

2007-06-09 Thread Lucas (a.k.a T-Bird or bsdfan3)
I personally like it for embedded (in other applications) jobs because
of its small code footprint compared to an RDBMS.


Tim Anderson wrote:


Many thanks to those who have commented (more are welcome of course;
though I won't be able to use all of them).

I'll post a link to the piece when it appears.

Thanks again

Tim




 






[sqlite] [Fwd: rummage bin of C API questions]

2007-01-25 Thread Lucas (a.k.a T-Bird or bsdfan3)





Please bear with me here, as this list may seem rather long.


1) Which of the C API routines allocate memory that my code must free?
2) Can COMMIT TRANSACTION and ROLLBACK TRANSACTION fail?
3) Which of the C API routines cannot fail?
4) C API routines with pointer return values return NULL if they cannot 
allocate sufficient memory, right?



--T-Bird/bsdfan3/Lucas






Re: [sqlite] semi corrupt db (w/stack-traces)

2006-12-05 Thread ara . t . howard

On Tue, 5 Dec 2006, Kees Nuyt wrote:


what would be the best method to dump this db into another one to recover?


sqlite olddb .dump | sqlite newdb


this worked.  thanks a million.


I have no idea what could cause the segfault, except perhaps a version
difference between the sqlite command program and the sqlite library.


i've confirmed that the libraries have not been updated:

  mussel:~ > ldd /dmsp/reference/lib/ruby/site_ruby/1.8/i686-linux/_sqlite.so
  libsqlite.so.0 => /dmsp/reference//lib/libsqlite.so.0 (0x00c3c000)
  libdl.so.2 => /lib/libdl.so.2 (0x008bb000)
  libcrypt.so.1 => /lib/libcrypt.so.1 (0x00df8000)
  libm.so.6 => /lib/tls/libm.so.6 (0x00eff000)
  libc.so.6 => /lib/tls/libc.so.6 (0x00a4)
  /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x00bea000)

  mussel:~ > ls -ltarHh /dmsp/reference//lib/libsqlite.so.0 /lib/libdl.so.2 
/lib/libcrypt.so.1 /lib/tls/libm.so.6 /lib/tls/libc.so.6 
/dmsp/reference/lib/ruby/site_ruby/1.8/i686-linux/_sqlite.so
  -rwxr-xrwx1 nrt  nrt  634K Oct 29  2004 
/dmsp/reference//lib/libsqlite.so.0
  -rwxr-xr-x1 nrt  nrt   25K May 16  2006 
/dmsp/reference/lib/ruby/site_ruby/1.8/i686-linux/_sqlite.so
  -rwxr-xr-x1 root root 182K Jun 16 05:32 /lib/tls/libm.so.6
  -rwxr-xr-x1 root root 1.4M Jun 16 05:32 /lib/tls/libc.so.6
  -rwxr-xr-x1 root root  13K Jun 16 05:32 /lib/libdl.so.2
  -rwxr-xr-x1 root root  22K Jun 16 05:32 /lib/libcrypt.so.1

i've included the ruby stuff because that's how i'm using sqlite mainly.  note,
however, that i can segfault either a ruby program __or__ the sqlite binary, so
i'm reasonably confident that the problem is un-related to ruby.
nevertheless, here's a stack trace from inside a seg faulting ruby program:

  #0  0x0019aeff in raise () from /lib/tls/libc.so.6
  #1  0x0019c705 in abort () from /lib/tls/libc.so.6
  #2  0x080be1b2 in rb_bug (fmt=0x80d74ad "Segmentation fault") at error.c:214
  #3  0x080a20e0 in sigsegv (sig=11) at signal.c:447
  #4  
  #5  0x00453ab4 in checkList (pCheck=0xbffeffc0, isFreeList=1, iPage=33916, N=209, 
zContext=0x483086 "Main freelist: ") at src/btree.c:3238
  #6  0x0045433f in fileBtreeIntegrityCheck (pBt=0x8cc1738, aRoot=0x8cc4a08, 
nRoot=4) at src/btree.c:3447
  #7  0x00478ed1 in sqliteVdbeExec (p=0x8cc55c0) at src/vdbe.c:3738
  #8  0x00472622 in sqlite_step (pVm=0x8cc55c0, pN=0xbfff05c8, 
pazValue=0xbfff05cc, pazColName=0xbfff05d0) at src/vdbe.c:128
  #9  0x00461811 in sqlite_exec (db=0x8cc15d8, zSql=0x88f6260 "PRAGMA 
integrity_check;", xCallback=0xcf5428 , 
pArg=0xbfff0620, pzErrMsg=0xbfff061c)
  at src/main.c:654
  #10 0x00cf5cef in static_database_exec (self=3075567028, sql=147608080, 
callback=1, parm=1) at sqlite.c:526
  #11 0x08065fdb in call_cfunc (func=0xcf5c20 , 
recv=3075567028, len=147648012, argc=3, argv=0xbffeffc0) at eval.c:5550
  #12 0x0805b6b3 in rb_call0 (klass=3074768412, recv=3075567028, id=8945, 
oid=1, argc=3, argv=0xbfff0870, body=0xb7453764, flags=0) at eval.c:5692


and here is one from running sqlite db 'pragma integrity_check'

  #0  0x00c04b80 in checkList (pCheck=0xbfffc810, isFreeList=1, iPage=33602, N=209, 
zContext=0xc34186 "Main freelist: ") at src/btree.c:3238
  #1  0x00c0540b in fileBtreeIntegrityCheck (pBt=0x949b168, aRoot=0x94af5d8, 
nRoot=4) at src/btree.c:3447
  #2  0x00c29ffd in sqliteVdbeExec (p=0x94af020) at src/vdbe.c:3738
  #3  0x00c23762 in sqlite_step (pVm=0x94af020, pN=0xbfffce18, 
pazValue=0xbfffce1c, pazColName=0xbfffce20) at src/vdbe.c:128
  #4  0x00c12919 in sqlite_exec (db=0x949b008, zSql=0x94af3f0 "PRAGMA 
integrity_check;", xCallback=0x80493c0 , pArg=0xbfffceb0, 
pzErrMsg=0xbfffce68) at src/main.c:654
  #5  0x0804b18c in process_input (p=0xbfffceb0, in=0x0) at src/shell.c:1061
  #6  0x0804b86f in main (argc=2, argv=0xbfffe464) at src/shell.c:1342


while knowing little about the sqlite source, this looks like a bug in sqlite
no?  anyone have suggestions on where to go from here?

-a
--
if you want others to be happy, practice compassion.
if you want to be happy, practice compassion.  -- the dalai lama




Re: [sqlite] semi corrupt db

2006-12-05 Thread ara . t . howard

On Tue, 5 Dec 2006, Kees Nuyt wrote:


On Tue, 5 Dec 2006 08:06:41 -0700 (MST), you wrote:


what would be the best method to dump this db into another one to recover?


sqlite olddb .dump | sqlite newdb


thanks.


I have no idea what could cause the segfault, except perhaps a
version difference between the sqlite command program and the
sqlite library.


there have been no software updates, but i'm looking into.  i just needed to
recover the system before debugging: the db is part of a 24x7 satellite
processing system!

hopefull more info later...

cheers.

-a
--
if you want others to be happy, practice compassion.
if you want to be happy, practice compassion.  -- the dalai lama




[sqlite] semi corrupt db

2006-12-05 Thread ara . t . howard


not sure what to make of this:


jib:~/shared > sqlite q/db " select * from jobs limit 1 "
2076|21|finished|2006-12-04 23:52:11.821620|2006-12-04 
23:52:22.796343|2006-12-05 
00:06:13.294926|830.498583|mussel.ngdc.noaa.gov|clam.ngdc.noaa.gov|stdin/2076|stdout/2076|stderr/2076|25150|0|ut_night|true|gen_directional_ut_grids
 /dmsp/nrt/data/incoming/night_files/F15200612050348.nt.d.OIS 
/dmsp/nrt/data/incoming/mosaic/ut_night/grids/

jib:~/shared > sqlite q/db
SQLite version 2.8.15
Enter ".help" for instructions
sqlite> PRAGMA integrity_check;
[1]+  Killed  sqlite q/db
Segmentation fault (core dumped)


other ops core dump too.

advice?  what would be the best method to dump this db into another one to 
recover?

-a
--
if you want others to be happy, practice compassion.
if you want to be happy, practice compassion.  -- the dalai lama




Re: [sqlite] SQLite and McAfee Anti-Virus

2006-10-31 Thread ara . t . howard

On Tue, 31 Oct 2006 [EMAIL PROTECTED] wrote:


Dennis Cote <[EMAIL PROTECTED]> wrote:

Roger Binns wrote:


The usual approach is to write a web page about it and then publish a
story on Slashdot.


I have to second this idea. It worked well for Poul-Henning Kamp.



OK.  Thanks everybody for your support.  But let's try to
de-escalate this just a bit.  I've checked in changes so
that the default temp-file name for SQLite no longer has
an "sqlite_" prefix.  And so if I can just get in touch
with responsible developers at McAfee and get them to
recompile, or better yet, recompile with

  -DTEMP_FILE_PREFIX="mcafee_"

then I think the problem will be solved.  And I have received
private email from a former McAfee employee who is hopeful of
being able to put me in touch with the right person.

The phone-call frequency is not currently that bad.  I've only
talked to two irate Mcafee customers so far today.  I can deal
with that.  I just want to make sure the phone-call rate doesn't
grow.  And so if I can get new builds of Mcafee products to
use the latest version of SQLite, or to use the TEMP_FILE_PREFIX
compile-time option shown above, then the problem will eventually
dissipate.

No need to call in a preemtive Slashdot bombardment just yet.
Let's give diplomacy a chance...


despite my initial sarcasm i do feel this is by far the best approach - just
obfuscate the issue with a little (configurable) renaming.  only people in
the know will figure out 'sqlite' is 'etilqs' backwards and, as you point out,
a compile option to override should leave no one any room to complain.

a simple, elegant, non-confrontational solution to be sure.

-a
--
my religion is very simple.  my religion is kindness. -- the dalai lama




Re: [sqlite] SQLite and McAfee Anti-Virus

2006-10-31 Thread ara . t . howard

On Tue, 31 Oct 2006 [EMAIL PROTECTED] wrote:


I need advice from the community.  The problem
is seen here:

 http://www.sqlite.org/cvstrac/tktview?tn=2049
 http://www.sqlite.org/cvstrac/tktview?tn=1989
 http://www.sqlite.org/cvstrac/tktview?tn=1841
 http://www.sqlite.org/cvstrac/wiki?p=McafeeProblem

It appears that McAfee Anti-Virus uses SQLite internally,
and it leaves lots of files in C:/TEMP that contain
SQLite in their names.  This annoys many windows users.
They get on Google and search around for "sqlite" and
find me.  Then they send me private email or call me at
my office or on my cellphone to complain.  Many refuse
to believe that I have nothing to do with the problem
and I am accused of spreading a virus or malware.

My efforts to contact Mcafee about this problem have
been unfruitful.

Does anybody have an suggestions on how I might deal
with this?  Does anybody know how I can get in touch
with an engineer at Mcafee so that we can at least
change the names of the files in future releases?


i think you should hack the source to detect a macafee install and, if
detected, to name all temp files starting with macafee.

alternatively, setup call forwarding to their number.

then they'll call you ;-)

-a
--
my religion is very simple.  my religion is kindness. -- the dalai lama




Re: [sqlite] strange issue with broken links/hung processes in transaction

2006-10-31 Thread ara . t . howard

On Tue, 31 Oct 2006 [EMAIL PROTECTED] wrote:


I suspect that the /var/tmp file is unrelated to your mmap/munmap problem.

When SQLite needs a temporary file on Unix, it creates the file, opens it,
then calls unlink().  Unix removes the inode for the file from its directory
immediately so that the file cannot be opened again by another process.  But
it does not delete the file until the file descriptor is closed.  This is
how you make temporary files in Unix that automatically disappear when they
are closed.


right - that makes sense then...


SQLite never calls mmap() or munmap() - at least not directly.  Those
routines might be called in your implementation of malloc() or as part of
open()/read()/write().  But if so, that is an implementation specific thing.


hmmm.  most likely malloc - also, this kernel uses mmap to read in binaries and
the specific amount of memory being asked for is quite large.

in any case, i'll work on the assumption that sqlite is not at fault.

thanks very much for the fast reply - and for a fantastic product.

btw. this is what my software does with it

  http://www.linuxjournal.com/article/7922

cheers.

-a
--
my religion is very simple.  my religion is kindness. -- the dalai lama




[sqlite] strange issue with broken links/hung processes in transaction

2006-10-31 Thread ara . t . howard


i have some code which uses sqlite heavily.  every so often a command using
it will hang indefinitely; running an strace on the code shows it to be stuck
in a loop doing


  munmap addr1
  mmap #=> addr1
  munmap addr2
  mmap #=> addr1
  munmap addr1
  ...
  ...
  repeat forever...

checking out the /proc filesystem for the process shows this oddity

  [EMAIL PROTECTED] 20793]$ ls fd
  0  1  2  3  4

  [EMAIL PROTECTED] 20793]$ file fd/*
  fd/0: symbolic link to /dev/pts/3
  fd/1: symbolic link to /dev/pts/3
  fd/2: symbolic link to /dev/pts/3
  fd/3: symbolic link to 
/tmp/_dmsp_reference_bin_rq_20793_609295232_789746/db
  fd/4: broken symbolic link to /var/tmp/sqlite_dZtkItUXB3ppxor (deleted)

and, indeed, the /var/tmp file does not exist.  i assume this file is the
backing store sqlite uses for implementing transactions: it's certainly not
created by my code.

now, this is happening on a compute node with __zero__ other userland process
running so i'm quite sure no other process is removing the file.  also,
disabling tmpwatch has no effect on the problem, which is both transient and
infrequent.

so - has anyone every seen this before?  any thoughts on how to proceed?

details:

  jib:~ > cat /etc/redhat-release
  Red Hat Enterprise Linux WS release 3 (Taroon Update 8)

  jib:~ > uname -srm
  Linux 2.4.21-47.0.1.ELsmp i686

  jib:~ > sqlite -version
  2.8.15


kind regards.

-a
--
my religion is very simple.  my religion is kindness. -- the dalai lama




RE: [sqlite] [ANN] rq-2.3.3

2006-06-09 Thread ara . t . howard

On Fri, 9 Jun 2006, Pat Wibbeler wrote:


I'm curious - how does this avoid the buggy NFS fcntl() problem
mentioned under FAQ 7?
http://www.sqlite.org/faq.html

Does it use an external locking mechanism?



sort of.  basically rq sets up a directory like this

  q/db
  q/lock
  q/...

here the 'lock' file is a file that is locked using fcntl __before__ any
transaction is started on the sqlite db.  so it applies a write lock for write
transactions and a read lock for read transactions.

if, during the transaction, an sqlite locking error is thrown something has
gone drastically wrong and a 'recovery' is attempted under an nfs-safe
lockfile class.  this entails doing some inode stuff, sqlite vacuum - etc.
this should never happen and i've seen it only once in two years of 24x7 rq
use on our 30 node cluster.

rq makes a few optimizations - for instance read transactions are attempted
'blind' - rq simply tries without a lock and, if a locking error is thrown,
only then acquires a read lock.  even with a busy queue this means most reads
do not bother to lock the external file - though sqlite obviously performs
its own locking.

basically the whole thing is to guard against two simultaneous writers - they
are in effect always serialized.  in fact, i do a lot of work over NFS with
sqlite using this approach - so long as you can prevent multiple writers it's
safe - at least using linux NFS - i've had bad luck with solaris.

the approach has been much more robust than i anticipated : we've been through
disk failures and power outages and never has the db become corrupt.  no users
have reported a db corruption either.

sqlite is a very nice piece of coding!


cheers.

-a
--
suffering increases your inner strength.  also, the wishing for suffering
makes the suffering disappear.
- h.h. the 14th dali lama


[sqlite] Main freelist: 1 pages missing from overflow list

2006-04-26 Thread ara . t . howard


i'm getting this message from PRAGMA integrity_check;

  "*** in database main ***\nMain freelist: 1 pages missing from overflow list"

funny thing is, the database seems to function correctly - which is to say 5
or 6 processes are currently using it with no errors.

what should i make of this then?  is there a way to recover it?

cheers.

-a
--
be kind whenever possible... it is always possible.
- h.h. the 14th dali lama


Re: [sqlite] Sqlite powered Web Sites

2006-01-31 Thread ara . t . howard

On Tue, 31 Jan 2006, Clint Bailey wrote:

Can anyone point me to web sites that are powered by Sqlite? I'm curious as 
to how they function as compared to a MySQL, or brand M$  powered site.


check out the ruby on rails list - there are a few rails sites out there using
sqlite.

-a

--
happiness is not something ready-made.  it comes from your own actions.
- h.h. the 14th dali lama


Re: [sqlite] atomic db replacement

2005-03-14 Thread Ara T Howard


- Original Message -
From: Lothar Märkle <[EMAIL PROTECTED]>
Date: Sunday, March 13, 2005 11:39 pm
Subject: Re: [sqlite] atomic db replacement

> > does anyone have a strategy for doing massive updates to a db and
> > atomicly replacing it in a multi-process situation?
>
> Assuming you have a cgi-like application with many processes that just
> looks up a row, displays and then exits, you can simply use the rename
> call to atomically replace the db file (see man 2 rename for border
> cases) with another and without locking.
> 
> lothar

but this will lose any update currently being written by any of the
active cgi processes - even if they are in the middle of a transaction?

-a



[sqlite] [ANN] rq-1.0.0

2004-11-10 Thread Ara . T . Howard
URLS
  http://raa.ruby-lang.org/project/rq/
  http://www.codeforpeople.com/lib/ruby/rq/
  (http://rubyforge.org/projects/rqueue/ - under construction)
NAME
  rq v1.0.0
SYNOPSIS
  rq (queue | export RQ_Q=q) mode [mode_args]* [options]*
DESCRIPTION
  ruby queue (rq) is a tool used to create instant linux clusters by managing
  sqlite databases as nfs mounted priority work queues.  multiple instances of
  rq running from multiples hosts can work from these queues to
  distribute processing load to n nodes - bringing many dozens of otherwise
  powerful cpus to their knees with a single blow.  clearly this software 
should
  be kept out of the hands of free radicals, seti enthusiasts, and j. safran.

  the central concept of rq is that n nodes work in isolation to pull
  jobs from a central nfs mounted priority work queue in a synchronized
  fashion.  the nodes have absolutely no knowledge of each other and all
  communication is done via the queue meaning that, so long as the queue is
  available via nfs and a single node is running jobs from it, the system will
  continue to process jobs.  there is no centralized process whatsoever - all
  nodes work to take jobs from the queue and run them as fast as possible. 
this
  creates a system which load balances automatically and is robust in face of
  node failures.

  the first argument to any rq command is the name of the queue.  this
  name may be omitted if, and only if, the environment variable RQ_Q has been
  set to contain the absolute path of target queue.
  rq operates in one of the modes create, submit, list, status,
  delete, update, query, execute, configure, snapshot, lock, backup, help, or
  feed.  depending on the mode of operation and the options used the meaning of
  'mode_args' may change.
MODES
  the following mode abbreviations exist
c  => create
s  => submit
l  => list
    ls => list
t  => status
d  => delete
rm => delete
u  => update
q  => query
e  => execute
C  => configure
S  => snapshot
L  => lock
b  => backup
h  => help
f  => feed
  create, c :
create a queue.  the queue must be located on an nfs mounted file system
visible from all nodes intended to run jobs from it.
examples :
  0) to create a queue
  ~ > rq /path/to/nfs/mounted/q create
or simply
  ~ > rq /path/to/nfs/mounted/q c
  submit, s :
submit jobs to a queue to be processed by a feeding node.  any 'mode_args'
are taken as the command to run.  note that 'mode_args' are subject to 
shell
expansion - if you don't understand what this means do not use this feature
and pass jobs on stdin.

when running in submit mode a file may be specified as a list of commands
to
run using the '--infile, -i' option.  this file is taken to be a newline
separated list of commands to submit, blank lines and comments (#) are
allowed.  if submitting a large number of jobs the input file method is
MUCH, more efficient.  if no commands are specified on the command line rq
automatically reads them from STDIN.  yaml formatted files are also allowed
as input (http://www.yaml.org/) - note that the output of nearly all rq
commands is valid yaml and may, therefore, be piped as input into the 
submit
command.

when submitting the '--priority, -p' option can be used here to determine
the priority of jobs.  priorities may be any whole number - zero is the
default.  note that submission of a high priority job will NOT supplant
currently running low priority jobs, but higher priority jobs WILL always
migrate above lower priority jobs in the queue in order that they be run as
soon as possible.  constant submission of high priority jobs may create a
starvation situation whereby low priority jobs are never allowed to run.
avoiding this situation is the responsibility of the user.  the only
guarantee rq makes regarding job execution is that jobs are
executed in an 'oldest highest priority' order and that running jobs are
never supplanted.
examples :
  0) submit the job ls to run on some feeding host
~ > rq q s ls
  1) submit the job ls to run on some feeding host, at priority 9
~ > rq -p9 q s ls
  2) submit 42000 jobs (quietly) from a command file.
~ > wc -l cmdfile
42000
~ > rq q s -q < cmdfile
  3) submit 42 priority 9 jobs from a command file.
~ > wc -l cmdfile
42
~ > rq -p9 q s < cmdfile
  4) submit 42 priority 9 jobs from a command file, marking them as
 'important' using the '--tag, -t' option.
~ > wc -l cmdfile
42
~ > rq -p9 -timportant q s < cmdfile
  5) re-submit all the 'important' jobs (see 'query' sec

[sqlite] [ANN] rq-0.1.7

2004-11-01 Thread Ara . T . Howard
URLS
  http://raa.ruby-lang.org/project/rq/
  http://www.codeforpeople.com/lib/ruby/rq/
NAME
  rq v0.1.7
SYNOPSIS
  rq (queue | export RQ_Q=q) mode [mode_args]* [options]*
DESCRIPTION
  ruby queue (rq) is a tool used to create instant linux clusters by managing
  sqlite databases as nfs mounted priority work queues.  multiple instances of
  rq running from multiple hosts can work from these queues to
  distribute processing load to n nodes - bringing many dozens of otherwise
  powerful cpus to their knees with a single blow.  clearly this software should
  be kept out of the hands of free radicals, seti enthusiasts, and j. safran.

  the central concept of rq is that n nodes work in isolation to pull
  jobs from a central nfs mounted priority work queue in a synchronized
  fashion.  the nodes have absolutely no knowledge of each other and all
  communication is done via the queue, meaning that, so long as the queue is
  available via nfs and a single node is running jobs from it, the system will
  continue to process jobs.  there is no centralized process whatsoever - all
  nodes work to take jobs from the queue and run them as fast as possible.  this
  creates a system which load balances automatically and is robust in the face of
  node failures.
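
  for a concrete picture, a minimal sketch of one such pull step against the
  SQLite 2.x C API is shown below.  the 'jobs' table, its columns, and
  claim_job are hypothetical stand-ins - rq's actual schema and locking
  strategy are not reproduced here - and SQLITE_BUSY retries are omitted.

    /* hypothetical sketch: one node claims a job from the shared queue */
    #include <stdio.h>
    #include "sqlite.h"

    int claim_job(sqlite *db)
    {
      char **tbl = 0, *err = 0, sql[160];
      int nrow = 0, ncol = 0, claimed = 0;

      sqlite_exec(db, "BEGIN;", 0, 0, &err);
      sqlite_get_table(db,
          "SELECT rowid FROM jobs WHERE state = 'pending' "
          "ORDER BY priority DESC, submitted ASC LIMIT 1;",
          &tbl, &nrow, &ncol, &err);
      if( nrow>0 ){
        /* tbl[0] is the column name, tbl[1] the value.  re-checking the
        ** state means the job is claimed only if no other node beat us */
        sprintf(sql, "UPDATE jobs SET state = 'running' "
                     "WHERE rowid = %s AND state = 'pending';", tbl[1]);
        sqlite_exec(db, sql, 0, 0, &err);
        claimed = sqlite_changes(db);
      }
      sqlite_free_table(tbl);
      sqlite_exec(db, "COMMIT;", 0, 0, &err);
      return claimed;   /* 1 if this node now owns the job */
    }

  the ORDER BY clause is what yields the 'oldest highest priority' ordering
  described under the submit mode below.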

  the first argument to any rq command is the name of the queue.  this
  name may be omitted if, and only if, the environment variable RQ_Q has been
  set to contain the absolute path of the target queue.
  rq operates in one of the modes create, submit, list, status,
  delete, update, query, execute, configure, snapshot, lock, backup, help, or
  feed.  depending on the mode of operation and the options used, the meaning of
  'mode_args' may change.
MODES
  the following mode abbreviations exist
    c  => create
    s  => submit
    l  => list
    ls => list
    t  => status
    d  => delete
    rm => delete
    u  => update
    q  => query
    e  => execute
    C  => configure
    S  => snapshot
    L  => lock
    b  => backup
    h  => help
    f  => feed
  create, c :
    create a queue.  the queue must be located on an nfs mounted file system
    visible from all nodes intended to run jobs from it.
    examples :
      0) to create a queue
        ~ > rq /path/to/nfs/mounted/q create
      or simply
        ~ > rq /path/to/nfs/mounted/q c
  submit, s :
    submit jobs to a queue to be processed by a feeding node.  any 'mode_args'
    are taken as the command to run.  note that 'mode_args' are subject to shell
    expansion - if you don't understand what this means do not use this feature
    and pass jobs on stdin.

    when running in submit mode a file may be specified as a list of commands to
    run using the '--infile, -i' option.  this file is taken to be a newline
    separated list of commands to submit; blank lines and comments (#) are
    allowed.  if submitting a large number of jobs the input file method is
    MUCH more efficient.  if no commands are specified on the command line rq
    automatically reads them from STDIN.  yaml formatted files are also allowed
    as input (http://www.yaml.org/) - note that the output of nearly all rq
    commands is valid yaml and may, therefore, be piped as input into the submit
    command.

    when submitting, the '--priority, -p' option can be used to determine
    the priority of jobs.  priorities may be any whole number - zero is the
    default.  note that submission of a high priority job will NOT supplant
    currently running low priority jobs, but higher priority jobs WILL always
    migrate above lower priority jobs in the queue in order that they be run as
    soon as possible.  constant submission of high priority jobs may create a
    starvation situation whereby low priority jobs are never allowed to run.
    avoiding this situation is the responsibility of the user.  the only
    guarantee rq makes regarding job execution is that jobs are
    executed in an 'oldest highest priority' order and that running jobs are
    never supplanted.
    examples :
      0) submit the job ls to run on some feeding host
        ~ > rq q s ls
      1) submit the job ls to run on some feeding host, at priority 9
        ~ > rq -p9 q s ls
      2) submit 42000 jobs (quietly) from a command file.
        ~ > wc -l cmdfile
        42000
        ~ > rq q s -q < cmdfile
      3) submit 42 priority 9 jobs from a command file.
        ~ > wc -l cmdfile
        42
        ~ > rq -p9 q s < cmdfile
      4) submit 42 priority 9 jobs from a command file, marking them as
         'important' using the '--tag, -t' option.
        ~ > wc -l cmdfile
        42
        ~ > rq -p9 -timportant q s < cmdfile
      5) re-submit all the 'important' jobs (see 'query' section below)
        ~ > rq q query tag=important | rq

[sqlite] Upgraded from 2.8.6 to 2.8.13

2004-05-26 Thread v t
Hi,
 
I upgraded sqlite version from 2.8.6 to 2.8.13.
And I am running into a problem where, after inserting and deleting some (eg 30)
entries, I get the error "database is full" followed by "database disk image is
malformed".  I use sqlite_exec to run all queries.
 
The database has a table with a string "key" and a few other string columns.  Any idea
which part of the sqlite code I should be concentrating on?
 
Thanks
vt
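
One way to narrow such a failure down - a sketch, not something from the
original thread, assuming the SQLite 2.x C API - is to run the built-in
consistency checker after the failing batch and print what it reports:

  /* sketch: dump the result of PRAGMA integrity_check after a failure */
  #include <stdio.h>
  #include "sqlite.h"

  static int print_row(void *arg, int ncol, char **vals, char **names)
  {
    if( ncol>0 && vals[0] ) printf("%s\n", vals[0]);   /* "ok" if healthy */
    return 0;
  }

  void check_db(sqlite *db)
  {
    char *err = 0;
    if( sqlite_exec(db, "PRAGMA integrity_check;", print_row, 0, &err)!=SQLITE_OK ){
      fprintf(stderr, "integrity_check failed: %s\n", err);
      sqlite_freemem(err);
    }
  }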



Re: [sqlite] sqlite on vxworks or vrtx?

2004-03-29 Thread v t
Hi,
 
I ported sqlite to VxWorks. It was a straightforward port. File locking is not 
possible in VxWorks, so I do not use it: I define the macro __DJGPP__, which takes 
care of the fcntl calls used for file locking.
 
Did not include the files shell.c and tclsqlite.c in my project.
 
I used the following macros -DOS_UNIX -DSQLITE_OMIT_AUTHORIZATION -DSQLITE_OMIT_VACUUM 
-D__DJGPP__  -DSQLITE_TRACE_DEBUG

 
Will have to modify TIMER_START and TIMER_END macros in os.c for VxWorks.
 
Replaced malloc and free with memory pools. realloc requires special attention when 
replacing it with memory pools. Will need to spend some time tuning memory pool sizes. 
Modified util.c for memory pools.
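
As a rough illustration only - not the actual port - a fixed-block pool behind
malloc-style entry points might look like the sketch below; the block count and
size are arbitrary and would need the tuning mentioned above.

  /* hypothetical fixed-block pool standing in for malloc/free */
  #include <stddef.h>
  #include <string.h>

  #define POOL_BLOCKS 1024
  #define BLOCK_SIZE  256

  static unsigned char pool[POOL_BLOCKS][BLOCK_SIZE];
  static unsigned char in_use[POOL_BLOCKS];

  void *pool_malloc(size_t n)
  {
    int i;
    if( n==0 || n>BLOCK_SIZE ) return 0;       /* too large for this pool */
    for(i=0; i<POOL_BLOCKS; i++){
      if( !in_use[i] ){ in_use[i] = 1; return pool[i]; }
    }
    return 0;                                  /* pool exhausted */
  }

  void pool_free(void *p)
  {
    if( p ){
      int i = (int)(((unsigned char*)p - &pool[0][0]) / BLOCK_SIZE);
      if( i>=0 && i<POOL_BLOCKS ) in_use[i] = 0;
    }
  }

  /* realloc is the tricky case noted above: copy into a fresh block */
  void *pool_realloc(void *p, size_t n)
  {
    void *q = pool_malloc(n);
    if( q && p ){
      memcpy(q, p, n);    /* n <= BLOCK_SIZE is guaranteed by pool_malloc */
      pool_free(p);
    }
    return q;
  }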
 
Most of the time I use the sqlite_exec and sqlite_get_table API calls. I use temporary 
tables too. 
 
Very happy so far.

vt

Brett Granger <[EMAIL PROTECTED]> wrote:

Hi,

I'm new to this list, and have been given a task to investigate various 
alternatives for databases on embedded systems. I like what I have seen 
of sqlite so far!

On the sqlite.org wiki under "when to use sqlite", there is the following:

"# Embedded devices and applications

Because an SQLite database requires little or no administration, SQLite 
is a good choice for devices or services that must work unattended and 
without human support. SQLite is a good fit for use in cellphones, PDAs, 
set-top boxes, and/or appliances. It also works well as an embedded 
database in downloadable consumer applications."

Based on size, speed, and performance the above claim appears to be 
true. However, a lot of set-top boxes out there are not running Linux 
or WinCE, but are instead running vxworks or even vrtx. Has anyone 
ported/compiled sqlite to run on either of these OSes? Does anyone have 
a sense as to how hard it might or might not be if it doesn't already 
exist? Maybe it's not too hard if they're both POSIX compliant?

I know that Berkeley DB from Sleepycat has a VxWorks port, but I would 
really prefer to pursue sqlite.

Any thoughts/hints welcomed!
--Brett Granger



Re: [sqlite] tables in in-memory database

2004-02-11 Thread v t
Thanks Richard. It works now. Originally I tried using SQLITE_MASTER and it did not 
work, so after searching this forum I found some info on SQLITE_TEMP_MASTER and tried 
that.
 
I figured out why SQLITE_MASTER did not work originally: I had opened a 
new handle to the in-memory database and then ran the query using that handle to get 
the table list, while I already had the in-memory database open with a different handle. 
The query returns correct results only with the handle I got the first time I opened the 
in-memory database.
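
That behavior is consistent with each open of ":memory:" creating its own
separate, empty database.  A minimal sketch, assuming SQLite 2.8's :memory:
support:

  /* sketch: two ":memory:" opens give two independent databases */
  #include <stdio.h>
  #include "sqlite.h"

  int main(void)
  {
    char *err = 0, **tbl = 0;
    int nrow = 0, ncol = 0;
    sqlite *a = sqlite_open(":memory:", 0, &err);
    sqlite *b = sqlite_open(":memory:", 0, &err);

    sqlite_exec(a, "CREATE TABLE t(x);", 0, 0, &err);

    /* handle b sees an empty SQLITE_MASTER: it is a different database */
    sqlite_get_table(b, "SELECT name FROM SQLITE_MASTER WHERE type='table';",
                     &tbl, &nrow, &ncol, &err);
    printf("tables visible through b: %d\n", nrow);   /* prints 0 */

    sqlite_free_table(tbl);
    sqlite_close(a);
    sqlite_close(b);
    return 0;
  }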
 
Can I open and maintain multiple in-memory databases?
 
Thanks
vt

"D. Richard Hipp" <[EMAIL PROTECTED]> wrote:
v t wrote:
> Hi
>
> When I execute the following query on an in-memory database, I get no results.
> The query is "SELECT * FROM SQLITE_TEMP_MASTER WHERE type='table'". I
                              ^^^^^^^^^^^^^^^^^^
Should be "SQLITE_MASTER" not "SQLITE_TEMP_MASTER"

> initialize and create tables before I run the above query. I am also able to
> insert and update records in the tables in the in-memory database.
>
> My objective is to get a list of all tables in in-memory database. I am able
> to extract the tables list from SQLITE_MASTER though.
>
> Any ideas what could be going wrong?
>
> Thanks Vt



-- 
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565



[sqlite] tables in in-memory database

2004-02-09 Thread v t
Hi
 
When I execute the following query on an in-memory database, I get no results. The 
query is "SELECT * FROM SQLITE_TEMP_MASTER WHERE type='table'". I initialize and 
create tables before I run the above query. I am also able to insert and update 
records in the tables in the in-memory database.
 
My objective is to get a list of all tables in the in-memory database. I am able to 
extract the tables list from SQLITE_MASTER though.
 
Any ideas what could be going wrong?
 
Thanks
Vt



Re: [sqlite] Journalling

2003-10-27 Thread v t
Hello,
 
I am aware that:
 
1) Reading from the database doesn't involve opening a journal file.
2) Writing (INSERT, UPDATE etc) to the database opens a journal file on the disk.
3) PRAGMA default_synchronous = OFF;  (0) turns syncing to disk ON or OFF.
 
I am writing to a database file every 200ms on a compact flash.
 
My understanding of PRAGMA default_synchronous = OFF;  (0) was that it controls 
whether the database contents are flushed on each write or only at the end of a 
database transaction (similar to the difference between buffered I/O, e.g. fwrite 
plus fflush, and unbuffered I/O like write).
 
And my understanding of journalling is that it is used to roll back any changes to the 
database in case of failures during a transaction. I am no expert on journalling so I 
could be wrong.
 
So I was just tailoring SQLite to my needs and not trying to throw away the use of a 
very useful feature.  (I may decide to use it when I need it again.)
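
For reference, a minimal sketch of issuing that pragma through the C API; the
database path is invented:

  /* sketch: disable the sync calls for writes to a config database */
  #include "sqlite.h"

  void open_config_db(void)
  {
    char *err = 0;
    sqlite *db = sqlite_open("/flash/config.db", 0, &err);
    if( db ){
      /* default_* pragmas persist in the database file itself */
      sqlite_exec(db, "PRAGMA default_synchronous = OFF;", 0, 0, &err);
      /* ... the periodic writes go here, no longer waiting on fsync ... */
      sqlite_close(db);
    }
  }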
 
vt

[EMAIL PROTECTED] wrote:
Hello,

v t 
27/10/2003 05:16 PM
To: "Mrs. Brisby" 
cc: sqlite 
Subject: Re: [sqlite] Journalling


> I am trying to use sqlite in a context where I will be using it to store 
> some configuration about a system. I want to try to minimize the disk 
> access. Since journalling uses a file on the disk, I wanted to turn it 
> off. I am not worried about rolling back the database to a known state in 
> case of some failure.

You're not worried about your database becoming corrupted and all your 
data being destroyed? It doesn't sound like you like your data very 
much... ;)

This is a question that pops up on the list every so often, and there have 
been some good reasons for it. Well. One comes to mind, and that's the use 
of flash memory in embedded devices. When you don't want to write to your 
media too many times you might find that it's better to turn off 
journalling and risk the consequences... perhaps make regular backups... 
rather than write to the media too often.

The problem is that most people don't know what they're talking about when 
they ask how to turn journalling off. They don't understand when the 
journal gets written in the first place and they don't understand which 
operations they're performing that aren't affected by journalling. They 
haven't read the list archives, and they patently haven't read the manual, 
because it's listed under the pragma section of 
http://www.sqlite.org/lang.html.

This is why when you ask the question on this list you get the response
"Well I know you've asked how to turn off journalling, but what do you 
actually want to achieve by this and what alternatives have you 
considered?"

You haven't yet given an explanation that makes sense to me, so in the 
spirit of RTFM I'll leave you to find the exact manual reference yourself. 
I think it's worth you understanding, though, that journalling doesn't 
occur when you're only querying the database. It only happens when you 
modify the database. Using transactions while modifying the database is 
not only a good idea for data integrity, it also makes the overhead 
associated with synching the file to disk almost disappear so there's 
usually no need at all to turn off journalling. Given all of this, if you 
still can't find the exact spot in the manuals to turn this off yourself 
perhaps you could offer a more complete discussion about the nature of 
your database and your access to it. You'd be well advised to discuss the 
alternatives you have considered so that the gentle list members will feel 
more compelled to answer your question directly.

Benjamin
-- 
Premature optimisation is the root of all evil
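
To make the transaction point above concrete, a minimal sketch against the
SQLite 2.x C API; the config table and the loop are invented for illustration:

  /* sketch: batching writes in one transaction -> one journal/sync cycle */
  #include <stdio.h>
  #include "sqlite.h"

  int save_values(sqlite *db)
  {
    char *err = 0, sql[64];
    int i, rc;

    rc = sqlite_exec(db, "BEGIN;", 0, 0, &err);
    for(i=0; rc==SQLITE_OK && i<100; i++){
      sprintf(sql, "INSERT INTO config VALUES(%d);", i);
      rc = sqlite_exec(db, sql, 0, 0, &err);
    }
    if( rc==SQLITE_OK ){
      rc = sqlite_exec(db, "COMMIT;", 0, 0, &err);   /* one sync, not 100 */
    }else{
      fprintf(stderr, "write failed: %s\n", err);
      sqlite_freemem(err);
      sqlite_exec(db, "ROLLBACK;", 0, 0, 0);
    }
    return rc;
  }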



Re: [sqlite] Journalling

2003-10-27 Thread v t
I am trying to use sqlite in a context where I will be using it to store some 
configuration about a system. I want to try to minimize the disk access. Since 
journalling uses a file on the disk, I wanted to turn it off. I am not worried about 
rolling back the database to a known state in case of some failure.
 
vt
 
"Mrs. Brisby" <[EMAIL PROTECTED]> wrote:
On Thu, 2003-10-23 at 19:46, v t wrote:
> Hi,
> 
> How do I turn journalling OFF?

Why do you want to? What exactly are you trying to accomplish?



Re: [sqlite] Listserver Digest Attachments

2003-10-24 Thread v t
Matt,
 
As you suggested, the link is very good to follow.
http://news.gmane.org/gmane.comp.db.sqlite.general
 
vt

Matt Sergeant <[EMAIL PROTECTED]> wrote:
On 24 Oct 2003, at 10:02, Steve O'Hara wrote:

> I'm a Windows (followed by Unix then VMS) guy and I use Outlook to 
> view my
> email.

Can I recommend you try viewing via Gmane instead then? Point outlook 
at the Usenet News server news.gmane.org and you can find the SQLite 
list in there.

You might find other lists easier to track this way too.



[sqlite] Journalling

2003-10-23 Thread v t

Hi,

How do I turn journalling OFF?

Inside the sqlite_open function, I passed TRUE to the third argument of sqliteBtreeFactory, 
which stands for omitJournal. That did not work: I got an assertion failure in 
sqlitepager_commit. The following is the surrounding code:

  TRACE1("COMMIT\n");
  if( pPager->dirtyFile==0 ){
/* Exit early (without doing the time-consuming sqliteOsSync() calls)
** if there have been no changes to the database file. */
assert( pPager->needSync==0 );
rc = pager_unwritelock(pPager);
pPager->dbSize = -1;
return rc;
  }
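  /* presumably the assertion that fires with omitJournal set: no journal
  ** file is ever opened, so any commit that actually changed the database
  ** fails this check */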
  assert( pPager->journalOpen );

Thanks

vt




Re: [sqlite] too few operands on stack at 13(Callback).

2003-10-20 Thread v t
Hi Richard,
 
I tried to open my database file "test.11" with MEMORY_DEBUG=2 enabled, and it works 
fine now.
 
I was trying to use memory pools for sqliteMalloc, so I modified the malloc & free 
calls to my internal memory pool calls. In addition, I saw that some files were not 
calling sqliteMalloc but were calling malloc directly. I changed all of those malloc 
calls to sqliteMalloc.
 
When I use my allocation library, I do not see any allocation failures, but 
sqlite_open fails to follow the sequence of instructions you said it should follow 
each time.
 
I will try to debug and see what goes wrong when I use a memory pool allocation 
mechanism.
 
Thanks for your reply,
vt


"D. Richard Hipp" <[EMAIL PROTECTED]> wrote:
v t wrote:
> Anybody please help.
> 
> I get this error during sqlite_open for an existing database.
> 
> db=sqlite_open("/ata0/test.11",0,&f)
> 
> f points to string "too few operands on stack at 13"
> 
> Following is trace:
> 
> OPEN 15 /ata0/test.11
> OPEN-EX 16 /temp/saRUxt
> READ 15 1 2
> READ 15 2 0
> 
> 0 ColumnName 4 0 1
> 1 Integer 1 0
> Stack: i:1
> 2 OpenRead 0 2 sqlite_temp_master
> 3 Rewind 0 7
> 7 Close 0 0
> 8 Integer 0 0
> Stack: i:0
> 9 OpenRead 1 2 sqlite_master
> 10 VerifyCookie 0 75
> 11 Rewind 1 15
> 12 Integer 0 0 0
> Stack: si:0
> 13 Callback 5 0
> CLOSE 15
> CLOSE 16
> 

If possible, please e-mail me your "/ata0/test.11" file.


-- 
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565



[sqlite] too few operands on stack at 13(Callback).

2003-10-17 Thread v t
Anybody please help.
 
I get this error during sqlite_open for an existing database.
 
 db=sqlite_open("/ata0/test.11",0,&f)
 
f points to string "too few operands on stack at 13"
 
Following is trace:

OPEN    15  /ata0/test.11
OPEN-EX 16  /temp/saRUxt
READ    15  1  2
READ    15  2  0

   0 ColumnName      4   0  1
   1 Integer         1   0
Stack: i:1
   2 OpenRead        0   2  sqlite_temp_master
   3 Rewind          0   7
   7 Close           0   0
   8 Integer         0   0
Stack: i:0
   9 OpenRead        1   2  sqlite_master
  10 VerifyCookie    0  75
  11 Rewind          1  15
  12 Integer         0   0  0
Stack: si:0
  13 Callback        5   0
CLOSE   15
CLOSE   16

Thanks
Vt
 

