Re: [sqlite] quickly insert multiple rows into multiple table

2010-07-12 Thread Eric Smith
W.-H.  Gu wrote: 

> For (1), after I wrapped inserts into a transaction, I face an issue: 
> the cpu usage is too high.  I think the reason is that I use a prepared 
> statement to insert one row at a time; it then calls sqlite3_step 
> thousands of times every second.  

If I'm reading this right, you're saying you're re-preparing the 
statement on every row.  Don't do that.  Prepare it once, and only 
re-bind the variables for each insert.  Then finalize the statement 
after your loop.  

I gather that statement preparation is a non-trivial percentage of 
the compute time taken to insert a single row.
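For illustration only (the OP is using the C API), here is the shape of
"prepare once, bind many" in Python's sqlite3 module, whose
executemany() reuses a single compiled statement across all the bound
rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a INTEGER, b TEXT)")

rows = [(i, "row%d" % i) for i in range(10000)]

# One transaction around the whole batch; the INSERT statement is
# compiled once and only the bound values change from row to row.
with conn:
    conn.executemany("INSERT INTO t(a, b) VALUES(?, ?)", rows)

count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)  # 10000
```

In C the equivalent is one sqlite3_prepare_v2(), a loop of
sqlite3_bind_*() / sqlite3_step() / sqlite3_reset(), then one
sqlite3_finalize() after the loop.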

> Every second I insert 9184 rows, 

Your optimal insertion rates will vary strongly depending on your 
hardware, your operating system, the efficiency of data generation 
between calls to the sqlite api, amount of data you're inserting per 
row, amount of RAM sqlite is allowed to use, any indices, triggers, 
check constraints, unique constraints, primary keys, and foreign keys 
you have defined, compile time flags, and dozens of other variables.

I'm no SQL or SQLite expert myself -- the gurus might add many items 
to that list off the tops of their heads.

We have no way of knowing what number you should be shooting for.  

> which leads to cpu usage ~30%.  
> 
> Is there any good way to resolve this issue?  

Obviously there are many variables here.  Why do you think you want low 
CPU usage?  

Generally, I think you *want* to be CPU-bound for the duration of your 
insert batch.  Anything else implies you're doing something besides 
useful computation: waiting for disk I/O (due to a cache spill or one 
of many other possible reasons), sitting scheduled behind some other 
process, and so on.

At commit time, SQLite writes the new rows to disk, by default 
blocking until the OS reports that the write is complete (though this 
is configurable).  During this time you'd expect cpu usage to go down 
while disk usage goes up.  
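That configurability is PRAGMA synchronous.  A quick illustrative check
through Python's sqlite3 wrapper (values and database are throwaway):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# synchronous=OFF tells SQLite not to wait for the OS to confirm that
# writes reached stable storage; commits return sooner at the cost of
# durability guarantees if the machine loses power.
conn.execute("PRAGMA synchronous=OFF")
mode = conn.execute("PRAGMA synchronous").fetchone()[0]
print(mode)  # 0 == OFF, 1 == NORMAL, 2 == FULL
```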

Another point is that you need to make sure the OS is really telling you
what you think it's telling you.  E.g. a common mistake for a Linux user
is to think that the load average is the average CPU usage over some 
period of time.  It isn't.

Eric 

-- 
Eric A. Smith

One of my most productive days was throwing away 1000 lines of code.
-- Ken Thompson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Noobie question about importing data into table

2010-07-10 Thread Eric Smith
Calimeron wrote: 

> >What do you mean, "save"?  
> 
> So that when you're done, you have 3 tables "Chinese," "English," "Merged" 
> or the original 2, but with data from the one into the other.  
> 
> I don't know the terms or the procedure, but I'd like to have a new table 
> that has the Chinese and English mixed, which I then can continue to 
> manipulate.  

The answer to that depends strongly on your application.  It could be
that you should just do whatever manipulations you need in your main
language.

It could be that your use case will be simplified using what's called 
a "view".  This will essentially be a wrapper around your join query.  
You can read from the view but you can't write to it.  The view is 
always consistent with the underlying "real" tables.  

But the view, as far as I understand, doesn't provide a performance 
benefit over the query that defines it.  It only exists for simplified 
understanding of your problem space and your code.  

So if you think that you will run your JOIN query a lot, and you think 
that it will be a performance bottleneck, then a view is probably not 
the right way to go.

Here's what a view looks like (using my previous example code):

[hudson:~] $ sqlite3 ./foo.db 
SQLite version 3.6.20
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .dump
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE chChar(id INTEGER PRIMARY KEY, char TEXT);
INSERT INTO "chChar" VALUES(1,'char1');
INSERT INTO "chChar" VALUES(2,'char2');
INSERT INTO "chChar" VALUES(3,'char3');
INSERT INTO "chChar" VALUES(4,'char4');
INSERT INTO "chChar" VALUES(5,'char5');
CREATE TABLE chCharDefn(id INTEGER PRIMARY KEY, chChar REFERENCES
chChar, defn TEXT);
INSERT INTO "chCharDefn" VALUES(1,1,'def1');
INSERT INTO "chCharDefn" VALUES(2,2,'def2');
INSERT INTO "chCharDefn" VALUES(3,2,'def3');
INSERT INTO "chCharDefn" VALUES(4,3,'def4');
INSERT INTO "chCharDefn" VALUES(5,4,'def5');
INSERT INTO "chCharDefn" VALUES(6,4,'def6');
INSERT INTO "chCharDefn" VALUES(7,5,'def7');
COMMIT;
sqlite> create view ChineseCharsAndDefns as select chChar.id, chChar.char, 
chCharDefn.id, chCharDefn.defn from chCharDefn join chChar ON 
(chCharDefn.chChar = chChar.id);
sqlite> select * from ChineseCharsAndDefns;
1|char1|1|def1
2|char2|2|def2
2|char2|3|def3
3|char3|4|def4
4|char4|5|def5
4|char4|6|def6
5|char5|7|def7
sqlite>

As you can see, I just prefixed my join query with "create view
ChineseCharsAndDefns as".  Now watch what happens when I change one of
the underlying tables:

sqlite> begin;
sqlite> update chCharDefn set defn='def7CHANGED' where id=7;
sqlite> select * from ChineseCharsAndDefns;
1|char1|1|def1
2|char2|2|def2
2|char2|3|def3
3|char3|4|def4
4|char4|5|def5
4|char4|6|def6
5|char5|7|def7CHANGED
sqlite> rollback;
sqlite> 

Note again that, as far as I know, views in SQLite only exist as 
'syntactic sugar' -- they don't provide any functional difference, 
just a textual difference in your code.  Others can confirm or refute
that.

It could be that you want to create a real table that can be read from
and written to.  I would then ask why you think you need to do that.

Eric

-- 
Eric A. Smith

Sendmail may be safely run set-user-id to root.
-- Eric Allman, "Sendmail Installation Guide"


Re: [sqlite] Noobie question about importing data into table

2010-07-10 Thread Eric Smith
Calimeron wrote: 

> I've understood that the program (I'm 
> using http://www.sqlite.org/sqlite-3_6_23_1.zip) will first compile the 
> entire list 30.000 x 50.000 = 1.5 BILLION lines, after which it filters 
> it.  

One of SQLite's primary goals in life is to fit into a tiny memory 
footprint.  I don't know for sure, but strongly doubt that it will 
create the whole cross product of the two tables before filtering.
I guess it's more likely that either the video is wrong or you're
misunderstanding it.

> --After you've selected the lines you want, how do you save the 
> table (instead of only selecting it)?

What do you mean, "save"?

-- 
Eric A. Smith

gleemites, n.:
Petrified deposits of toothpaste found in sinks.
-- "Sniglets", Rich Hall & Friends


Re: [sqlite] open db cx to fork(2)d children

2010-07-10 Thread Eric Smith
Roger Binns wrote: 

> I'll bet you are actually getting exit(3) which means anything 
> registered with atexit will be run.  (SQLite does not register with 
> atexit.)  

I knew what Nico meant :) just repeated him for expositional clarity 
(deeming the distinction to be unimportant for my particular question,
and figuring that most people mean exit(3) when they say exit(2)).

> In my wrapper I provide functionality that can check SQLite objects are 
> not being used across processes.  
> 
> I'd recommend doing something like this 

My fork call is from Tcl -- you've convinced me to add a line in the 
child unsetting the Tcl variable containing the name of the parent's 
database connection.  :) 

Your wrapper is nice -- have you considered folding something like it 
into the core (disabled by default, enabled by a compile-time flag) and 
submitting it to drh for official adoption?  

Thanks much to both of you!

Eric 

-- 
Eric A. Smith

More people would come here if it weren't so crowded.
-- Yogi Berra


Re: [sqlite] Noobie question about importing data into table

2010-07-10 Thread Eric Smith
Calimeron wrote:

> Chinese Char. No.   Chinese Char.   English Def. No.   English Def.
> 1                   Char1           1                  Def1
> 2                   Char2           2                  Def2
> 2                   Char2           3                  Def3
> 3                   Char3           4                  Def4
> 4                   Char4           5                  Def5
> 4                   Char4           6                  Def6
> 5                   Char5           7                  Def7
> 
> Can this be done? 

What you want is called a "join".

[hudson:~] $ sqlite3 
SQLite version 3.6.20
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table chChar(id INTEGER PRIMARY KEY, char TEXT);
sqlite> create table englCharDefn(id INTEGER PRIMARY KEY, chChar REFERENCES 
chChar, defn TEXT);
sqlite> INSERT INTO chChar VALUES(null, 'char1');
sqlite> INSERT INTO chChar VALUES(null, 'char2');
sqlite> INSERT INTO chChar VALUES(null, 'char3');
sqlite> INSERT INTO chChar VALUES(null, 'char4');
sqlite> INSERT INTO chChar VALUES(null, 'char5');
sqlite> INSERT INTO englCharDefn VALUES(null, 1, 'def1');
sqlite> INSERT INTO englCharDefn VALUES(null, 2, 'def2');
sqlite> INSERT INTO englCharDefn VALUES(null, 2, 'def3');
sqlite> INSERT INTO englCharDefn VALUES(null, 3, 'def4');
sqlite> INSERT INTO englCharDefn VALUES(null, 4, 'def5');
sqlite> INSERT INTO englCharDefn VALUES(null, 4, 'def6');
sqlite> INSERT INTO englCharDefn VALUES(null, 5, 'def7');
sqlite> select chChar.id, chChar.char, englCharDefn.id, englCharDefn.defn from 
englCharDefn join chChar ON (englCharDefn.chChar = chChar.id);
1|char1|1|def1
2|char2|2|def2
2|char2|3|def3
3|char3|4|def4
4|char4|5|def5
4|char4|6|def6
5|char5|7|def7

--
Eric A. Smith

I still maintain the point that designing a monolithic kernel in 
1991 is a fundamental error.  Be thankful you are not my student.  
You would not get a high grade for such a design.
-- Andrew Tanenbaum, to Linus Torvalds


Re: [sqlite] open db cx to fork(2)d children

2010-07-09 Thread Eric Smith

> I strongly recommend that you always make the child side of fork(2)
> either exit(2) or exec(2) immediately.  

Sorry Nico, I never saw this response -- I appreciate it!

What do you mean, "immediately"?  As I said, my child comes to life,
does some work without touching (its copy of) existing SQLite structures,
and then calls exit(2).  The lifetime of the child is small wrt the
lifetime of the parent.

Let's assume for the moment that I don't care about safety wrt
non-sqlite libraries (except of course any libraries on which 
sqlite depends).

> With respect to SQLite3, there are two sets of fork-safety issues: file
> descriptor offsets (use USE_PREAD to avoid this issue), 

I take you to mean that the child and parent's fds point to the same
underlying file description, and if the child changes the file
description then it will have a side effect in the parent.

But I have assumed that the child does not make any sqlite api calls
against existing sqlite structures.  I believe this assumption allows 
me to conclude that sqlite will not touch any existing fd, and hence
will not bear such an impact on the parent (even if the child makes
sqlite api calls against structures the child creates on its own).  
Am I right? 

> and POSIX file byte range locks.

I'm not using POSIX locks, so I'm good to go there.  But even if I were,
I believe my above reasoning applies equally well here, since I believe
your reason for being concerned about it is similar.  The fds that were 
duplicated across the fork refer to the same underlying file
description, so we are technically in a "dangerous" state: the child
*could*, at its whim, release the parent's lock (for example).  But if 
it guarantees not to do so (by guaranteeing to make no sqlite calls
against existing structures), then no harm will result.
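A minimal sketch of that pattern (POSIX only; the scratch path is
hypothetical): the child inherits the parent's open connection but never
touches it, and leaves via os._exit() so no cleanup code runs against
the shared file descriptors.

```python
import os
import sqlite3
import tempfile

# Hypothetical scratch location for the demo database.
dbpath = os.path.join(tempfile.mkdtemp(), "forkdemo.db")

conn = sqlite3.connect(dbpath)
conn.execute("CREATE TABLE t(a)")
conn.commit()

pid = os.fork()
if pid == 0:
    # Child: does unrelated work, never touches `conn`, and exits via
    # os._exit() so no atexit/buffer-flush code runs on inherited fds.
    os._exit(0)

os.waitpid(pid, 0)

# Parent: its connection is still usable after the child has exited.
conn.execute("INSERT INTO t VALUES(1)")
conn.commit()
n = conn.execute("SELECT count(*) FROM t").fetchone()[0]
print(n)  # 1
```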

Thanks,
Eric

--
Eric A. Smith

Impartial, adj.:
Unable to perceive any promise of personal advantage from
espousing either side of a controversy or adopting either of two
conflicting opinions.
-- Ambrose Bierce, "The Devil's Dictionary"


Re: [sqlite] sqlite query with c++ variable

2010-07-09 Thread Eric Smith
smengl90 wrote:

> Hi, I want to compose a query that will use a c++ variable. For example I
> have:
> 
> int i= 5;
> char * query = "SELECT * from userInfo WHERE count<'i'".
> 
> The syntax does not work. How do I do that?

I think your primary problem is lack of experience in C++.  I 
strongly recommend that you read up on the language itself before
attempting to write SQLite applications.

Are you required to use C++?  You may find it (much, much) easier to 
use Tcl at first.

If I may ask: are you a student, or doing this for fun in your spare 
time, or is this a part of your job?

Eric

--
Eric A. Smith

Windows is *NOT* a virus - viruses are small and efficient.


Re: [sqlite] Null character problem

2010-07-09 Thread Eric Smith
Simon Slavin wrote: 

> (according to your earlier post) 

I'm not OP.  I'm Eric.  OP was someone else.  In this context, I don't 
care about blobs or about the right way of doing anything.

> Read the documentation for memset().  

I know quite well how memset works.  I know character!=byte.  These 
matters are irrelevant to my question.  

My question came purely from mild curiosity.  I was wondering about 
the behavior of the sqlite call sqlite3_bind_text when it is passed a 
range of BYTES that includes nulls.

-- 
Eric A. Smith

It's up.  It sorta works.  That's a start.
-- BJ Premore


Re: [sqlite] Null character problem

2010-07-09 Thread Eric Smith
Simon Slavin wrote: 

> BLOBs can handle any sequences of bytes without problems, 
> including nulls, ETX, and sequences which be illegal if they were used to 
> express Unicode characters.  You can put anything you like in a BLOB.  

I assume, due to the manifest typing semantics of the library, that 
the declared type of the column will make no difference when I bind a 
weird datum to a variable (as long as it's not an 'INTEGER PRIMARY KEY' 
column).  

Will sqlite3_bind_text work properly if the string contains (internal) 
nulls?  What if I did something like: 

char zText[100];
memset(zText, 0, sizeof(zText)); 
sqlite3_bind_text(stmt, idx, zText, sizeof(zText), SQLITE_TRANSIENT); 

According to a strict reading of the doc, sqlite will blindly copy
sizeof(zText) characters (starting from zText[0]) into the column.  
That is, this will store 100 null bytes into the column.  Is that 
right?
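For what it's worth, a quick empirical check through Python's sqlite3
wrapper (which, like the C snippet above, passes an explicit byte length
to the bind APIs) suggests embedded zero bytes do survive a round trip.
Shown here with a BLOB, the unambiguous case; note this exercises
sqlite3_bind_blob rather than sqlite3_bind_text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x)")

payload = b"\x00" * 100  # 100 NUL bytes, like the memset'd buffer above
conn.execute("INSERT INTO t VALUES(?)", (payload,))

(stored,) = conn.execute("SELECT x FROM t").fetchone()
print(len(stored), stored == payload)  # 100 True
```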

Eric 

-- 
Eric A. Smith

When you come to a fork in the road, take it.
-- Yogi Berra


Re: [sqlite] Null character problem

2010-07-09 Thread Eric Smith
Kavita Raghunathan wrote: 

> sprintf(SqlStr, "INSERT INTO %s (AttrName, AttrEnum, AttrType, 
> AttrValue, ReadWrite, Entity_id) VALUES('%s', %d, %d, '%s', %d, 
> %d);", tbl_name, db[i]->attr_name, db[i]->attr_num, db[i]->attr_type, 
> db[i]->attr_value, db[i]->attr_src, entity_id); 

Don't do that.  What if attr_name contains a ' character (or, as you
say, some other weird character)?

Instead, prepare a statement with sqlite variables, and bind values to
those variables using the sqlite3_bind* family of interfaces:
http://sqlite.org/c3ref/bind_blob.html

I have no specific knowledge on whether sqlite handles null characters
within the variables' values--but if I were a betting man, I'd bet that
it handles them quite cleanly.
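The difference is easy to demonstrate.  A sketch in Python's sqlite3
module (any bind-parameter API behaves the same way); the table and
value are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attrs(AttrName TEXT, AttrValue TEXT)")

# A value with an embedded quote would break (or hijack) a
# sprintf-built statement; as a bound parameter it is stored verbatim.
name = "O'Brien's attr"
conn.execute("INSERT INTO attrs VALUES(?, ?)", (name, "v"))

row = conn.execute("SELECT AttrName FROM attrs").fetchone()
print(row[0])  # O'Brien's attr
```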

-- 
Eric A. Smith

I think there's a world market for about five computers.
-- attr. Thomas J. Watson (Chairman of the Board, IBM), 1943


Re: [sqlite] work-arounds for (possibly buggy) nfs?

2010-07-08 Thread Eric Smith
Richard Hipp wrote: 

> sqlite3 db ./foo -vfs unix-dotfile 
> 
> That uses an alternative VFS that uses dot-file locking instead of 
> posix advisory locks.  The dot-file locks are somewhat slower and have less 
> concurrency, so the use of "PRAGMA locking_mode=EXCLUSIVE" might also be a 
> good idea in such a configuration.  

Looking at your code, I see that unix-dotfile locks are in reality 
always exclusive.  My guess is that the existence of a separate file 
indicates that a process owns the (unique) lock, and non-existence 
indicates that no one owns the lock.  

I see you defer to open(2) with O_EXCL to get atomicity here.  

The docs leave something unclear: 

If I ATTACH a database with pragma locking_mode=EXCLUSIVE under the 
unix-dotfile VFS, get a lock on the attached db, execute a write 
transaction on it, and then DETACH the database, then I'm assuming the 
lock on the attached db is released.  Is that right?  

Eric 

-- 
Eric A. Smith

I still maintain the point that designing a monolithic kernel in 
1991 is a fundamental error.  Be thankful you are not my student.  
You would not get a high grade for such a design.
-- Andrew Tanenbaum, to Linus Torvalds


Re: [sqlite] Reg: In Memory Database Using SQLite

2010-07-08 Thread Eric Smith
Subhadeep Ghosh wrote: 

> I am aware of the functionality which you mentioned.  But the reason why 
> I ended up coding the feature was because of a very specific requirement.  
> My requirement was such that some of my client applications were running 
> on disk-less systems where I was not using any kind of network storage.  
> But my applications used SQLite.  So I coded the below mentioned feature 
> such that the database was being stored on the server and when the 
> disk-less clients wanted to use them, they fetched the database from the 
> server over the network and directly create a database out of the fetched 
> data (basically serializing and de-serializing the database).  
> 
> I am guessing that SQLite does not support such functionality.  The 
> closest thing would have been to fetch the database over the network, 
> store it in a file (which in this case is not possible) and then open the 
> database.  
> 
> I maybe wrong, but if such a feature already exists then I would be more 
> than happy to adopt it in my code.  

I see.  

Firstly, you may be better off using a client/server db, since that 
seems to map more naturally into your use-case.  But let's forget about 
that for the moment.  

Had I written your application, I first would have considered dumping 
the original database to raw SQL text and reading it into the in-mem 
db on the other end.  That would be pretty trivial to do, and if it 
suits your needs perf-wise, you're golden.  This also gives the extra 
advantage of having a textual communication format between the master 
and the slave, which can be a useful debugging tool.  

I would then have considered using a RAM-backed filesystem on the 
slave.  You can copy the sqlite db as-is to the slave and open it 
using normal sqlite api calls.  SQLite and your app think the db is 
disk-backed because the OS is faking the existence of a disk.  Whether 
this option works well depends on how easy it is to get a ram-backed fs 
up and running on your slave.  (In Linux this is very easy; not sure 
about your deployment OS.)  
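The "dump to SQL text, replay on the other end" idea can be sketched
with Python's sqlite3 module for illustration (iterdump() produces the
same kind of SQL text as the shell's .dump command); the network
transfer between master and slave is elided:

```python
import sqlite3

# "Server" side: a populated database (file-backed in real life).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(a)")
src.executemany("INSERT INTO t VALUES(?)", [(1,), (2,), (3,)])
src.commit()

# Serialize to SQL text -- this is what would travel over the network.
sql_text = "\n".join(src.iterdump())

# "Diskless client" side: replay the text into a fresh in-memory db.
dst = sqlite3.connect(":memory:")
dst.executescript(sql_text)

total = dst.execute("SELECT count(*) FROM t").fetchone()[0]
print(total)  # 3
```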

Eric 

-- 
Eric A. Smith

The people can always be brought to the bidding of the leaders. That 
is easy. All you have to do is tell them they are being attacked and 
denounce the pacifists for lack of patriotism and exposing the 
country to danger. It works the same way in any country. 
-- Herman Goering, at the Nuremberg trials


Re: [sqlite] work-arounds for (possibly buggy) nfs?

2010-07-08 Thread Eric Smith
Richard Hipp wrote:

> sqlite3 db ./foo -vfs unix-dotfile

Works like a charm!

> That uses an alternative VFS that uses dot-file locking instead of posix
> advisory locks.  The dot-file locks are somewhat slower and have less
> concurrency, so the use of "PRAGMA locking_mode=EXCLUSIVE" might also be a
> good idea in such a configuration.

My use-case requires no concurrency whatsoever.  I'll circle back if I
notice any perf changes.

Many thanks!

Eric

--
Eric A. Smith

fenderberg, n.:
The large glacial deposits that form on the insides
of car fenders during snowstorms.
-- "Sniglets", Rich Hall & Friends


[sqlite] work-arounds for (possibly buggy) nfs?

2010-07-08 Thread Eric Smith
I'm forced into a situation where I have to use an nfs server that I 
think is buggy.  

I can read/write files normally using fopen() on the exported 
filesystem, but can't do anything useful with sqlite 3.6.23.1:

-bash-2.05b$ tclsh
% package require sqlite
3.6.23
% sqlite3 db ./foo
% db eval {pragma synchronous=off}
disk I/O error
% puts $::errorInfo
disk I/O error
while executing
"db eval {pragma synchronous=off}"
% db eval {create table t(a)}
disk I/O error
% ^C
-bash-2.05b$ lh
total 0
-rw-r--r--  1 esmith  1005 0B Jul  8 10:19 foo
-bash-2.05b$ echo foobar > foo
-bash-2.05b$ lh
total 2
-rw-r--r--  1 esmith  1005 7B Jul  8 10:23 foo
-bash-2.05b$ tclsh
% package require sqlite
3.6.23
% 
% file delete -force foo
% sqlite3 db ./foo
% db eval {pragma locking_mode=exclusive}
exclusive
% db eval {pragma synchronous=off}
disk I/O error

Everything works fine when I use a database on the local disk:

% db close
% sqlite3 db /tmp/foo
% db eval {pragma synchronous=off}
% db eval {create table t(a)}
% db eval {insert into t values('blah')}
% db eval {select count(*) from t}
1
% 

-bash-2.05b$ uname -a
FreeBSD  5.40. FreeBSD 5.40. #0: Fri Sep 19 03:14:59 EDT 2008  i386

Can anyone offer any hints?

Thanks!
Eric

-- 
Eric A. Smith

Furbling, v.:
Having to wander through a maze of ropes at an airport or 
bank even when you are the only person in line.
-- Rich Hall, "Sniglets"


[sqlite] docs bug: tclsqlite.html

2010-07-08 Thread Eric Smith
tclsqlite.html lists an "unlock_notify" method with no other
documentation.  Trying to use it gives me this:

-bash-2.05b$ tcl
% package require sqlite
3.6.23
% sqlite3 db /tmp/foo
% db unlock_notify
unlock_notify not available in this build
% 

--
Eric A. Smith

The concept is interesting and well-formed, but in order to earn 
better than a 'C,' the idea must be feasible.
-- A Yale University management professor in response to Fred Smith's paper
   proposing reliable overnight delivery service.
   (Smith went on to found Federal Express Corp.)


Re: [sqlite] Reg: In Memory Database Using SQLite

2010-07-08 Thread Eric Smith
Subhadeep Ghosh wrote: 

> I finally managed to create a wrapper around the SQLite core to support 
> the creation of in-memory databases.  The wrapper comprises of three 
> functions - one to serialize the database, one to de-serialize a database 
> and the third one to do the cleanup job.  
> 
> The function which serializes a database, copies the entire contents of 
> the database to a block of memory.  The function which de-serializes takes 
> a block of memory as input and creates a database object out of it.  
> 
> I am including the source code for the interested souls and I've not put 
> comments as I hate doing it.  
> 
> I hope you guys find the code useful or at-least interesting.  

I hope you didn't spend too much time on that.  SQLite supports 
in-memory databases natively: if you open the file called ":memory:" you
get an in-memory database that dies when the connection is closed:

http://sqlite.org/c3ref/open.html
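The same thing through Python's sqlite3 wrapper, for illustration:

```python
import sqlite3

# Nothing touches disk; the database vanishes when `conn` is closed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a)")
conn.execute("INSERT INTO t VALUES(42)")
val = conn.execute("SELECT a FROM t").fetchone()[0]
print(val)  # 42
conn.close()
```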

-- 
Eric A. Smith

More people would come here if it weren't so crowded.
-- Yogi Berra


Re: [sqlite] SqLite 3.7.0 amalgamation build warnings

2010-07-07 Thread Eric Smith
Nick Shaw wrote: 

> If it's safe to ignore 

Note that I was careful not to say that the warnings are safe to 
ignore.  :) I only said that the SQLite devs may ignore them without 
further evidence of problems.  

I'm not an expert on the SQLite code, so wouldn't make any specific 
claims about it.  

Eric 

-- 
Eric A. Smith

Peace, n.:
In international affairs, a period of cheating between two
periods of fighting.
-- Ambrose Bierce, "The Devil's Dictionary"


Re: [sqlite] SqLite 3.7.0 amalgamation build warnings

2010-07-07 Thread Eric Smith
Nick Shaw wrote: 

> Realised I also fixed this warning before posting too: 

You may not be getting a lot of responses on this because the SQLite 
devs have a philosophy that, for this project, chasing down compiler 
warnings is generally a waste of time.  

See http://sqlite.org/testing.html#staticanalysis for details on that.  

If you want action regarding the warnings you see, I recommend arguing 
how the relevant code will cause a real-world behavioral problem.  And 
the best argument is to write a bit of code exercising the problem.  

-- 
Eric A. Smith

Where a calculator on the ENIAC is equipped with 18,000 vacuum 
tubes and weighs 30 tons, computers in the future may have only 
1,000 vacuum tubes and perhaps weigh 1 1/2 tons.
-- Popular Mechanics, March 1949


Re: [sqlite] ARM Cross compilation problem

2010-07-01 Thread Eric Smith
Matheus Ledesma wrote: 

> With "-ldl" after "-static" argument I receive the following error: 
> 
> balanca_simula_peso_io_paralela.c:632: warning: 'mseg1' may be used 
> uninitialized in this function 
> 
> /home/generic/CodeSourcery/Sourcery_G++_Lite/bin/../arm-none-linux-gnueabi/libc/lib/libsqlite3.a(sqlite3.o):
>  
> In function `unixDlOpen': 
> /home/generic/OpenController/garten/teste/sqlite3.c:26401: warning: 
> Using 'dlopen' in statically linked applications requires at runtime the 
> shared libraries from the glibc version used for linking 

Not an error, a warning :) If you check, your app binary was probably 
created in your build directory.  

Here's my understanding: 

I can't speak to whether you really need to, but you are passing 
-static.  According to the gcc manpage, this means "this prevents 
linking with ... shared libraries".  

This is a lie.  gcc still does a dynamic link to libgcc, which is 
assumed to be so widely available that it's a waste to statically 
link it.  But gcc is nice and warns you that your application is not 
completely statically linked.

So you can try running your application on the target platform.  If the 
proper shared libs exist there, then you're good to go.  

If not, you need to statically link libgcc as well at compile time.  Do 
that by passing -static-libgcc to your compiler.  

Disclaimer: I'm not a guru in the area, so ymmv.  

Good luck,

Eric 

-- 
Eric A. Smith

The concept is interesting and well-formed, but in order to earn 
better than a 'C,' the idea must be feasible.
-- A Yale University management professor in response to Fred Smith's paper
   proposing reliable overnight delivery service.
   (Smith went on to found Federal Express Corp.)


Re: [sqlite] ARM Cross compilation problem

2010-07-01 Thread Eric Smith
Matheus Ledesma wrote:

> "arm-none-linux-gnueabi-gcc -o balanca 
> balanca_simula_peso_io_paralela.c  -Wall -W -O2 
> -Wl,-R/home/generic/CodeSourcery/Sourcery_G++_Lite/lib -lsqlite3 
> -lpthread -static"

Try adding '-ldl' to your args.

Eric

--
Eric A. Smith

Where a calculator on the ENIAC is equipped with 18,000 vacuum 
tubes and weighs 30 tons, computers in the future may have only 
1,000 vacuum tubes and perhaps weigh 1 1/2 tons.
-- Popular Mechanics, March 1949


[sqlite] minor website bug (was Re: SQLite 3.7.0 coming soon....)

2010-06-30 Thread Eric Smith
> Please provide feedback - positive, negative, or indifferent - to this  
> mailing list.

Minor: the link in about.html from "fopen()" to 
http://man.he.net/man3/fopen is broken.

Eric

--
Eric A. Smith

Mandelbug, n.:
A bug whose underlying causes are so complex and obscure as to
make its behavior appear chaotic or even non-deterministic.


[sqlite] open db cx to fork(2)d children

2010-06-28 Thread Eric Smith
From the docs:

> Under Unix, you should not carry an open SQLite database across a
> fork() system call into the child process. Problems will result if you
> do.

What if I fork a process that promises not to use the handle, and
furthermore the child process certainly dies before the parent does?

Will this still cause problems?

What if I add an additional assumption that the parent process is the
only process that ever accesses the database?

--
Eric A. Smith

Slurm, n.:
The slime that accumulates on the underside of a soap bar when
it sits in the dish too long.
-- Rich Hall, "Sniglets"


Re: [sqlite] How accept sqlite3 commands from stdin

2010-06-24 Thread Eric Smith
Peng Yu wrote:

> On Thu, Jun 24, 2010 at 5:05 AM, Simon Slavin  wrote:
> >
> > On 24 Jun 2010, at 4:50am, Peng Yu wrote:
> >
> >> Is there a way to use Shebang for sqlite3 script?
> >>
> >> http://en.wikipedia.org/wiki/Shebang_%28Unix%29
> >
> > SQLite comes with a command-line tool.  You can feed it with individual 
> > commands on a Unix command-line or tell it to read commands from a file.
> 
> I was asking whether it is possible to use Shebang with sqlite script.
> If it is possible, would you please show me how to modify the
> following script to do so?
> 
> $ ./main.sql
> Error: unknown command or invalid arguments:  "/main.sql". Enter
> ".help" for help
> $ cat main.sql
> #!/usr/bin/sqlite3 main.db
> 
> create table tbl1(one varchar(10), two smallint);
> .quit

You want to pretend a .sql file is an executable and send commands to
sqlite3?  Try something like this:

[hudson:~] $ cat foo.sql
#!/bin/sh

sqlite3 main.db <<'EOF'
create table tbl1(one varchar(10), two smallint);
EOF

Re: [sqlite] 64 bit libsqlite for AIX and Solaris

2010-06-24 Thread Eric Smith
Black, Michael (IS) wrote: 

> Though I'm not sure if there's any advantage/disadvantage to 64-bit 
> binaries for sqlite3, is there?  

64-bit SQLite can cache more than 4 GB of your db.

Eric

-- 
Eric A. Smith

Finagle's First Law: If an experiment works, something has gone wrong.


Re: [sqlite] linear behavior on INSERT (was Re: unexpected large journal file)

2010-06-22 Thread Eric Smith
Scott Hess wrote: 

> You should reduce your demonstration case to something you'd be 
> willing to post the code for.  Probably using synthetic data 
> (preferably generated data).  There's something missing in the thread 
> right now, and it's unlikely to be exposed by random shots in the 
> dark.  

As I was putting together the answer to this request, I decided to 
double-check the result by #ifdef'ing out the sqlite calls.

Turns out I *was* being stupid.

An old app-level error check ran after a hunk of data was inserted.
The check was supported by the PK definition, which I had just removed.
So sqlite was doing a table scan every batch.  Measurements were 
better with user-level indices because one of the indices was usable 
in the error check.  

*sigh* kill me.  Sorry for wasting your time there. :/

So the summary of this thread for those who follow is: 

1. Primary keys cause implicit indices to be defined.
2. If you insert data out of order (according to the index) then you
   have to read/write more pages per insertion.  This, among other
   things, means the journal file can grow with the starting db size,
   not just with the size of the transaction.
3. Consider reducing churn against OS-level caches (or the disk) 
   by increasing sqlite's cache_size.
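Point 1 can be checked directly against sqlite_master.  A minimal sketch
in Python's sqlite3 module (for illustration only -- the thread itself
uses the Tcl and C APIs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A composite (or any non-INTEGER) PRIMARY KEY is enforced by an
# implicit index; a table without one gets no index at all.
conn.execute("CREATE TABLE with_pk(a INTEGER, b INTEGER, PRIMARY KEY(a, b))")
conn.execute("CREATE TABLE no_pk(a INTEGER, b INTEGER)")

auto_indices = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index'")]
# Only with_pk gets an index, named sqlite_autoindex_<table>_1.
```

(A lone INTEGER PRIMARY KEY is the exception: it aliases the rowid and
needs no separate index.)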

Thanks again, everyone, for your help!

Eric 

-- 
Eric A. Smith

You made the fatal mistake of assuming that your friends are genuine.


Re: [sqlite] linear behavior on INSERT (was Re: unexpected large journal file)

2010-06-22 Thread Eric Smith
Scott Hess wrote: 

> You should reduce your demonstration case to something you'd be 
> willing to post the code for.  Probably using synthetic data 
> (preferably generated data).  There's something missing in the thread 
> right now, and it's unlikely to be exposed by random shots in the 
> dark.  

I'll start doing that and reply here with an obfuscated schema.  In 
the meantime, where can I find version 3.6.18 (which someone 
claimed definitely does constant-time insertions)?  

-- 
Eric A. Smith

You will never amount to much. 
-- Munich Schoolmaster, to Albert Einstein, age 10


Re: [sqlite] linear behavior on INSERT (was Re: unexpected large journal file)

2010-06-22 Thread Eric Smith
Jay A.  Kreibich wrote: 

> What OS/filesystem are you using?  
> 
> SQL inserts should be near-constant, assuming the table does not 
> have an INTEGER PRIMARY KEY with explicit values.  The table's root 
> B-Tree needs to re-balance every now and then, but if the inserts are 
> in-order (which they will be with an automatic ROWID) this should be 
> rare and cheap -- and should get more rare as the number of rows 
> increases.  
> 
> Many *filesystems* do not provide linear access times, however, 
> especially with larger files.  

Interesting.  But behavior is better (logarithmic) with indices 
defined.  

Right now I'm in 64-bit linux 2.6.18, rhel 5.4.  The fs is ext3.  Not 
sure if this particular box has a raid5 array like the other box did.  
But again, I think it's a moot point: even when I'm completely in the 
page cache behavior is linear, and it improves with indices.  This 
suggests a software algo issue.  

(Just got your corrections; I knew what you meant. :-)

Eric

-- 
Eric A. Smith

This non-pronunciation of initial _h_ is especially common among 
French and British people, who can't pronounce English very well.
-- Tim Pulju


Re: [sqlite] linear behavior on INSERT (was Re: unexpected large journal file)

2010-06-22 Thread Eric Smith
Richard Hipp wrote: 

> When there are no indices, SQLite can put the rows into the database in 
> any order it wants, and it chooses to put them in ascending order.  Thus, 
> each insert becomes a constant-time append.  (Approximately - the truth is 
> a little more complicated, but by waving our arms vigorously, we can still 
> claim constant time per row.)  But if there are indices, each row must be 
> inserted in index order, which involves a b-tree search to find the right 
> spot and possibly a rebalancing operation -- O(log N) per row.  

Again, my observation is that you are *not* doing constant-time inserts 
when there are no indices.  

What do you mean, "ascending order"?  

The only constraint on the relevant table is a foreign-key ref to a tiny 
table.  But the asymptotic behavior is the same with pragma foreign_keys 
off or on.  

I double-checked sqlite_master and there are no indices (not even 
auto-indices) on the table.  

Inserts are *faster* at high row counts when there *are* indices.  

I am using 3.6.23.1.  I haven't tested earlier versions (waiting on a 
reply in another thread to find out where to get them).  

How vigorously are you waving?  Can you describe the real algorithm, or 
at least a second-order approximation?  

Eric 

-- 
Eric A. Smith

What the hell is it good for?
-- Robert Lloyd (engineer of the Advanced Computing Systems
   Division of IBM), to colleagues who insisted that the
   microprocessor was the wave of the future, c. 1968


[sqlite] where to get historical versions?

2010-06-22 Thread Eric Smith
In another thread in this forum, someone says they noticed a behavior 
in sqlite version 3.6.18 different (better) than what I've observed in 
3.6.23.1.  

Where can I find version 3.6.18 (or, more generally, any old version) 
for testing?  

Eric 

-- 
Eric A. Smith

It's up.  It sorta works.  That's a start.
-- BJ Premore


Re: [sqlite] linear behavior on INSERT (was Re: unexpected large journal file)

2010-06-22 Thread Eric Smith
Jim Wilcoxson wrote: 

> Insert times should be constant for the 2nd case: no primary key, no 
> indexes; ie, it doesn't matter how many records are already in the 
> database.  I confirmed this with SQLite 3.6.18.  

Definitely not constant.  Looks linear to me -- you saw the plot, you
can decide for yourself.

I'm in SQLite 3.6.23.1.  How do I score an old version to test it?

> Did you see my earlier note about combining your two integers into the 
> primary key?  This will also give you constant insert times, if you 
> insert items in the order: 

Hey sorry, I didn't see that.  Cute idea, but my accessors are in Tcl, 
I don't want to do bit twiddling or query mangling on the read side from 
Tcl, and I don't want to re-write it in C.  Plus a host of other reasons
that would bore the SQLite community.  I'm actually rather happy without
any primary key definition right now.

Thanks also for the tip on insertion order.  Does that also hold for 
multi-column indices (and not single-column indices transformed from two 
integers)?  I assume it's because we get more cache hits and fewer tree 
rebalances when we insert in key-order?

Before I make any more adjustments, I want to understand why I'm linear
with no indices!

I'm pretty sure I'm not doing anything stupid, like setting evil 
compile-time options or whatever.  But then again most stupid people 
don't think their results come from being stupid.  

Eric 

-- 
Eric A. Smith

Aeropalmics (ayr o palm' iks), n.: 
The study of wind resistance conducted by holding a cupped 
hand out the car window.


[sqlite] linear behavior on INSERT (was Re: unexpected large journal file)

2010-06-22 Thread Eric Smith
I have confirmed that INSERT times are roughly logarithmic in 
the number of existing records after creating my specific user 
indices.

But INSERT times appeared to be *linear* in the number of existing 
records before I had created any user indices (and with no primary 
keys or unique indices defined).

Can anyone explain this?

Eric

>   time (minutes) to insert 2m records
> 10 ++--+--+---+--+---+-++
>+   +  +   +  A   +   +  +
>  9 ++..++
>|  AAA A |
>  8 ++..AAA..A..++
>| A      |
>  7 ++.A.A..AAA.++
>|    |
>  6 ++..AA...A.A++
>|AAA |
>  5 ++A...AA++
>  4 ++.AA.AA..A.A...++
>|   A AAA  AA|
>  3 ++..++
>|   AAA A  A |
>  2 ++.AA...++
>| AA |
>  1 ++...AAA++
>AAA +  +   +  +   +  +
>  0 ++--+--+---+--+---+-++
>0  100200 300400 500600
>millions of existing records

--
Eric A. Smith

A nickel ain't worth a dime anymore.
-- Yogi Berra


Re: [sqlite] unexpected large journal file

2010-06-20 Thread Eric Smith
Black, Michael (IS) wrote: 

> Hmmma 6.5X speed diff between RAM and disk?  Sounds pretty good to 
> me.  Not sure why you expect better.  

I don't expect better.  As I said, I'm "not really sure what I should 
expect here, or how to find out what I should expect".  

> 10,000/sec is crappy?  And you think this because  

Because I think anything besides instantaneous results is crappy.  :) 

But I'm willing for now to live with the max I should reasonably expect 
from modern hardware.  I'd like to know e.g.  how much raid5 is hurting, 
so I can know whether to recommend that the user move to a single disk.  

Jay, btw, I saw your email on stripe sizes -- thanks so much for the 
suggestion -- I'll look into it.  But see below for a recent update to 
app behavior.  

> #1 What OS are you on?  There are numerous disk speed testing programs 
> depending on your OS.  

64-bit RHEL5.4, Linux 2.6.18 for the moment.  I could probably convince 
the user to switch to FreeBSD6 if there are big wins there.  My app 
already works in both OSs.  

> #2 Are you multi-threading?  A seperate reader process could help.  

Just one process, one thread.  

I'm sure you're right -- adding a reader process would certainly help 
run time by a few percentage points, but at the (unjustified at the 
moment) expense of increased maintenance cost & dev time.  This is an 
extremely cheap project so far and I'm trying to keep it that way.  

> #3 How many records total?  

Probably a couple billion, where each record is about 300 bytes across 
about 15 columns.  

> #4 Final size of database?  

After indices I'm guessing we'll be at ~500Gb.  When the db has 0.5b 
records and all indices are defined, we're at around 130Gb.  

> #5 How fast can you read your input file?  

I can generate inputs to SQLite at a rate of at least 65k records/sec.  
I haven't measured the input generation separately from the sqlite 
calls.  

> #6 What happens if you just insert the same records the same # of times 

Haven't tested it.  

> #7 What does your CPU usage show?  

After dropping the indices (per Jay et al's suggestion) I think SQLite 
is actually spending more than half its time on the CPU under large 
record sets.  Obviously I need to measure that more carefully under the 
new DB schema.  

My initial guess that I was inserting in O(1) was wrong -- time to 
insert 2m records went up by about a minute per 100m existing records.  

And this part is interesting: I finished the initial seed and created 
my user indices.  Then I added some more records, and found that insert 
times went down from 9 minutes to 2 minutes per 2 million records.  The 
plot is *striking*.  (I'd like to show it to the forum -- is it possible 
to send emails with attachments here?  It's a 60kb jpg file.)  I'm back 
up to inserting 17k records/second and am almost entirely CPU-bound.  I 
think I'm back in RAM!  

What is going on here?  Is SQLite using index data to do inserts more 
quickly?  Do you think that's causing it to need to read fewer pages on 
a particular insert?  

I'm very interested to see how this would look if I defined the user 
indices before the initial seed.  

> I assume you're multicore (as most are now I think).  

Yes, I have multiple CPUs, but I'm in one process in one thread so I'm 
only using one CPU at a time.  

Now that I'm seeing this CPU-bound behavior after adding indices, I'm 
reconsidering the whole multi-process thing.  Still, at least 75% of CPU 
usage is in SQLite.  More testing needed.  

> Depending on what you're doing with this data are you sure you need a 
> database solution?  

No.  But let's assume for now it's the best thing available to solve 
my problem under tight time constraints -- because in any case they're 
interesting questions, right?  :) 

> I don't recall you really explaining your ultimate goal...  

I tried to state the question as generally as possible while capturing 
the relevant specifics of my problem, so that gurus' answers will be 
useful to more people (including future Eric who is writing another 
application).  

I'll try starting off with those user indices and see how we do.

Thanks again!

Eric 

-- 
Eric A. Smith

I have always wished that my computer would be as easy to use as 
my telephone. My wish has come true. I no longer know how to use 
my telephone.
-- Bjarne Stroustrup


Re: [sqlite] unexpected large journal file

2010-06-20 Thread Eric Smith
Eric Smith wrote: 

> I'd like to show it to the forum -- is it possible to send emails with 
> attachments here?  It's a 60kb jpg file.  

God bless the gnuplot developers, who provided an ascii output option:

  time (minutes) to insert 2m records
10 ++--+--+---+--+---+-++
   +   +  +   +  A   +   +  +
 9 ++..++
   |  AAA A |
 8 ++..AAA..A..++
   | A      |
 7 ++.A.A..AAA.++
   |    |
 6 ++..AA...A.A++
   |AAA |
 5 ++A...AA++
 4 ++.AA.AA..A.A...++
   |   A AAA  AA|
 3 ++..++
   |   AAA A  A |
 2 ++.AA...++
   | AA |
 1 ++...AAA++
   AAA +  +   +  +   +  +
 0 ++--+--+---+--+---+-++
   0  100200 300400 500600
   millions of existing records


You can see how we ran out of RAM at 100m records.  From 100m to 200m
there was a cron job running that was causing the disk cache to get
thrown out; I killed it at around 200m records.  

You can see the linear progression quite clearly.

At ~480m records I halted the process, built user indices, and restarted.
And voila, we're back down in happy-land.

Eric

-- 
Eric A. Smith

The people can always be brought to the bidding of the leaders. That 
is easy. All you have to do is tell them they are being attacked and 
denounce the pacifists for lack of patriotism and exposing the 
country to danger. It works the same way in any country. 
-- Herman Goering, at the Nuremberg trials


Re: [sqlite] unexpected large journal file

2010-06-19 Thread Eric Smith
Jay A.  Kreibich wrote: 

> Try getting rid of the PK definition and see how much that buys you.  
> It might be worth it, or it might not.  

and Simon Slavin wrote: 

> We know you are doing a huge amount of writing to this database.  Are 
> you also reading it frequently ?  If not, then it might be worth making an 
> index on that primary key only when you're about to need it.  

I think the use case will usually be (only) writes followed by (only) 
reads.  There may be incremental writes later, but they will hopefully 
be small compared to the initial seed.  I intend to create a set of 
maybe 7 indices to serve the particular queries I think the user intends 
to run.  Let's all hope we can update the db with those indices at a 
higher rate than the user can generate data.  :p 

I tried removing the PK definition as you both suggested, and the 
journal stays fixed at less than 20Kb, even against a db size of (at 
the moment) 37Gb.  My insert batch run times are improved by a factor of 
~2.5 and seem to be O(1).  

So, bingo.  :) 

Insert rates are still a rather crappy 10k records/second (when we were 
in RAM we were doing ~65k recs/sec, which I think was bound by my app's 
speed).  I think I'm at the stupid raid5's limit -- not really sure what 
I should expect there, or how to find out what I should expect.  

Anyway, I'll build the indices after the seed.  I'll go out on a limb 
and assume that'll be a lot faster than it would've been under my 
initial approach.  

You guys were incredibly helpful -- thanks very much!  

Eric 

-- 
Eric A. Smith

Carperpetuation (kar' pur pet u a shun), n.:
The act, when vacuuming, of running over a string at least a
dozen times, reaching over and picking it up, examining it, then
putting it back down to give the vacuum one more chance.
-- Rich Hall, "Sniglets"


Re: [sqlite] unexpected large journal file

2010-06-18 Thread Eric Smith
Jay A.  Kreibich wrote: 

> Yes.  Hence the "and this is the important part" comment.  Most of 
> the time when people are building billion-row files, they're building 
> a new DB by importing a static source of data.  If things go wrong, 
> you just throw out the database and try again.  

That's kinda like doing it all in a single big transaction, which I 
wanted to avoid.  :) 

But my take-away from this is conversation is generally "you're SOL": 
we are backed by a tree, so we have to update existing nodes to point 
to the new ones, so we have to touch existing pages on INSERT.  Is that 
about right?  

It's not in any way a result of my schema?  My primary key is a pair of
integers A,B.  The first column in this particular use case is in the 
range A = [0, 2million) and the second is in the range B = [0, infinity).
We insert records A=0->2million, B=0, then
insert records A=0->2million, B=1, etc.

Could this have an impact on how many pages need to be touched on
INSERT?

> It would also help to bump the cache up...  

That works great until the db size blows through the total RAM on the 
system, at which point we're of course disk-bound again.  At the moment
I'm only inserting about 4k rows/second. :/

Eric

-- 
Eric A. Smith

I have always wished that my computer would be as easy to use as 
my telephone. My wish has come true. I no longer know how to use 
my telephone.
-- Bjarne Stroustrup


Re: [sqlite] unexpected large journal file

2010-06-18 Thread Eric Smith
Simon Slavin wrote: 

> How do you pass the handle from your C code to your Tcl code ?  

I don't.  I pass it from Tcl to C.  

The handle in Tcl is a command name registered with the interpreter.  
SQLite attaches a structure to the registration that contains the 
underlying handle as used by the C API.  

I'm writing a Tcl extension in C and I don't want to touch a bunch of 
Tcl code.  The Tcl caller gives me its command-name handle to the DB.  
I ask the Tcl core for the registration data for that command name, and 
use it to get the underlying SQLite handle for my C calls.  

(Checking whether I've got a real Tcl db handle is not 100% fool proof, 
but takes care of basic caller stupidity and is good enough for my 
app.)  

With error checks removed:

Tcl_CmdInfo cmdInfo;
const char* zDbHandle = Tcl_GetStringFromObj(objv[2], 0);
Tcl_GetCommandInfo(pInterp, zDbHandle, &cmdInfo);
sqlite3 *pDb = *((sqlite3**)cmdInfo.objClientData);
/* Carry on, using pDb in C API calls.
*/

I was considering asking about this a while ago on this group, but opted 
not to for fear of being publicly flogged by drh. :)

Actually, I was hoping the SQLite devs would promise not to change the
way they do Tcl command registration in future releases, so this kind 
of thing will continue to work.

> You are sharing the same connection to the database between the two 
> languages, right ?  

Yep.  

> You're not doing your INSERTs from one connection and your COMMITs from 
> another ?  

Right, the one connection is shared.  

Eric 

-- 
Eric A. Smith

Substitute "damn" every time you're inclined to write "very"; your
editor will delete it and the writing will be just as it should be.
-- Mark Twain


Re: [sqlite] unexpected large journal file

2010-06-18 Thread Eric Smith
Jay A. Kreibich wrote:

> > I'd really love to avoid writing a big journal file.  And I'd love to 
> > avoid doing a billion-row insert in one transaction.
> 
>   So turn journaling off.

... which implies possible corruption on app failure, right?

I want progress to be saved every once in a while so I don't have to
re-build the db from scratch when my app comes back to life.

--
Eric A. Smith

Don Daniels: How stable are tense systems in languages?
Tim Pulju:   I'd say about a 42 on a scale from 1 to 212.
-- Winter 2005


Re: [sqlite] unexpected large journal file

2010-06-18 Thread Eric Smith

Terribly sorry to self-reply, but I have a correction: 

> I'm using the Tcl API, which probably doesn't matter for this question.  

Don't know what I was thinking when I typed that.  I'm sharing a
connection in a single thread, mixing C API calls and Tcl API calls.  
The C API calls drive the INSERTs; Tcl API calls drive BEGIN/COMMIT.  

That all works fine, so don't worry about it.

Thanks again!  

Eric 

-- 
Eric A. Smith

Absurdity, n.:
A statement or belief manifestly inconsistent with one's 
own opinion.
-- Ambrose Bierce, "The Devil's Dictionary"


[sqlite] unexpected large journal file

2010-06-18 Thread Eric Smith
I have no user-defined indices in my db, and want to do a largish number 
of inserts (a few billion).  I COMMIT every 10 million INSERTs or so -- 
so if my app dies (or I want to kill it) I don't have to start over.  

Row sizes are small, a couple hundred bytes across 15ish columns.  
The primary key on the table is a pair of integers.

After a few BEGIN/INSERT/COMMIT cycles the journal file grows 
mid-transaction to a pretty big size, e.g.  around 1Gb against a 14Gb db 
file, meaning (right?) that sqlite wrote to ~1Gb of the existing 
pages during that round of INSERTs.  

This means the time spent doing a batch of INSERTs goes up as the number 
of existing rows, which is a big frowny-face.* 

I'd really love to avoid writing a big journal file.  And I'd love to 
avoid doing a billion-row insert in one transaction.

So, if I'm only doing INSERTs and they're always on brand new rows and 
there are no indices, why does SQLite need to update existing pages in 
the DB?  (Is it updating an auto-index based on the primary key?)  Is 
there a way to avoid it?  

Thanks!

Eric 

PS I'm using 3.6.23.1 with defaults, except PRAGMA synchronous=OFF and 
foreign_keys=1 (my only foreign key refs are to tiny tables).  I'm using 
the Tcl API, which probably doesn't matter for this question.  

% db eval {pragma compile_options} 
ENABLE_FTS3 ENABLE_LOCKING_STYLE=0 TEMP_STORE=1 THREADSAFE=1 

* I found out today my users are using a raid5 array on the deployment 
box, so it's an even bigger frowny face than it would have been by 
default.  

-- 
Eric A. Smith

We don't like their sound, and guitar music is on the way out.
-- Decca Recording Co. rejecting the Beatles, 1962.


Re: [sqlite] create index before or after many inserts?

2010-06-15 Thread Eric Smith
Simon Slavin wrote: 

> The standard assumption about SQLite is that it's faster to do your 
> INSERTs first, then create the indices.  How much of a difference this 
> makes depends on a lot of things.  

On what things does it depend?

-- 
Eric A. Smith

Sendmail may be safely run set-user-id to root.
-- Eric Allman, "Sendmail Installation Guide"


[sqlite] create index before or after many inserts?

2010-06-15 Thread Eric Smith
Let's say my app has (only) inserts followed by (only) reads.  

The reads are best served by some indices.  So I can create the indices 
before the INSERTs, or after them.  

In general, should I expect a run time perf difference between these two 
options?  

Eric 

-- 
Eric A. Smith

Louis Pasteur's theory of germs is ridiculous fiction.
-- Pierre Pachet, Professor of Physiology at Toulouse, 1872


[sqlite] last_insert_rowid & INTEGER PRIMARY KEY columns

2010-06-07 Thread Eric Smith
The tcl interface spec says: 

> The "last_insert_rowid" method returns an integer which is the ROWID of 
> the most recently inserted database row.  

So if I have 

db eval {CREATE TABLE t(the_key INTEGER PRIMARY KEY, data TEXT)}

and I

db eval {INSERT INTO t VALUES(NULL, 'foo')}

then does 

db last_insert_rowid

reliably return the_key of the most recently inserted database row, 
so that the returned value may safely (across vacuums etc) be used 
as a foreign reference to t's the_key column?

Is this the best way of inserting a guaranteed non-conflicting row &
getting the new row's key?
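The same question can be checked mechanically.  This sketch uses
Python's sqlite3 module (illustration only, not the Tcl API above);
because the_key is declared INTEGER PRIMARY KEY, it aliases the rowid,
so last_insert_rowid reports exactly the new the_key:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t(the_key INTEGER PRIMARY KEY, data TEXT)")

cur = db.execute("INSERT INTO t VALUES(NULL, 'foo')")
new_key = cur.lastrowid   # the rowid (i.e. the_key) of the inserted row

# The SQL function reports the same value on the same connection.
via_sql = db.execute("SELECT last_insert_rowid()").fetchone()[0]
stored = db.execute("SELECT the_key FROM t WHERE data = 'foo'").fetchone()[0]
```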

-- 
Eric A. Smith

Money is the root of all wealth.


Re: [sqlite] SQLite file Validation

2010-05-19 Thread Eric Smith
> Devs were told to make sure they are validating their sql statements, 
> but as I have seen in the code, few of them have 

> Question: Do you think that instead of getting them go back throughout 
> their code, it is feasible to create a function that just eliminates the ; 
> and replaces it with a ,?  And if so, any suggested code?  

"I have this programmer who keeps writing stupid code despite explicit 
guidance.  I want an algorithm that converts his bad inputs into good 
inputs."  

Make sure to let us know if you come up with a general solution.  

Eric 

-- 
Eric A. Smith

You are standing in an open field west of a white house, with a 
boarded front door.
There is a small mailbox here.


Re: [sqlite] SQLite Database in Shared Memory

2010-05-11 Thread Eric Smith
Manuj Bhatia wrote: 

> I do not have a requirement of persistence in my current design, but I 
> expect that we might extend this shared-queue solution to more areas of 
> the server and will require some sort of persistence then.  
> That is one of the main reasons I do not want to use IPC queues (there are 
> other reasons like fixed message sizes, minimal support for queue/message 
> level metadata).  

OP might consider creating a database file on a tmpfs filesystem.  
The OS tricks SQLite (and everything in user space) into thinking the 
file is a normal file with all the usual properties thereof -- but 
it's backed by RAM and not any persistent medium.  You'll get the perf 
benefits you wanted, along with the relatively easy ability to make the 
DB persistent later.  

Fedora 12 has one of these mounted at /dev/shm by default, though I 
presume any modern Linux will support this.  
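A sketch of the idea (Python's sqlite3 module for illustration; the
/dev/shm path is a Linux assumption, with a fallback so the code runs
anywhere):

```python
import os
import sqlite3
import tempfile

# Assumption: /dev/shm is a tmpfs mount (typical on modern Linux).
# Fall back to an ordinary temp dir elsewhere so the sketch still runs.
base = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
path = os.path.join(base, "example_ram.db")
if os.path.exists(path):
    os.remove(path)

conn = sqlite3.connect(path)   # ordinary file API, RAM-backed storage
conn.execute("CREATE TABLE q(msg TEXT)")
conn.execute("INSERT INTO q VALUES ('hello')")
conn.commit()
conn.close()

# Unlike a ':memory:' database, a second connection sees the same data,
# and making it persistent later is just a matter of changing the path.
conn2 = sqlite3.connect(path)
result = conn2.execute("SELECT msg FROM q").fetchone()[0]
conn2.close()
os.remove(path)
```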

Caveat (1).  I ran 'make test' on SQLite 3.6.23.1 on my box (Linux ___ 
2.6.32.11-99.fc12.i686 #1 SMP Mon Apr 5 16:32:08 EDT 2010 i686 athlon 
i386 GNU/Linux) from within a tmpfs filesystem and 23 tests failed: 

shared-1.1.1 shared-1.2.1 shared-1.4.1.1 shared-1.4.1.2 shared-1.4.1.3 
shared-2.1.1 shared-2.2.1 shared-2.4.1.1 shared-2.4.1.2 shared-2.4.1.3 
stmt-1.2 stmt-1.3 stmt-1.4 stmt-1.5 stmt-1.6 stmt-1.7 stmt-2.1 stmt-2.2 
stmt-2.3 stmt-2.5 tempdb-2.2 tempdb-2.3 tkt2565-1.X 

I wanted to investigate to see why but haven't had the time -- it has 
to do with the global Tcl variable sqlite_open_file_count.  Running the 
fixture on just those test files yields passes (every time), but running 
the whole 'veryquick' suite yields failures (every time).  I see there's 
machinery to try to clear all state between test runs -- obviously this 
is not successful in my test.  

The testfixture is dynamically linked against these libraries:
linux-gate.so.1 =>  (0x00511000)
libtcl8.5.so => /usr/lib/libtcl8.5.so (0x005cb000)
libdl.so.2 => /lib/libdl.so.2 (0x00d1f000)
libm.so.6 => /lib/libm.so.6 (0x00d42000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00d26000)
libc.so.6 => /lib/libc.so.6 (0x00ba9000)
/lib/ld-linux.so.2 (0x00b87000)

Caveat (2).  I don't claim this is the best solution for the OP -- just 
a possibility.  

Eric 

-- 
Eric A. Smith

I think there's a world market for about five computers.
-- attr. Thomas J. Watson (Chairman of the Board, IBM), 1943


Re: [sqlite] Passing Value from one query to another

2010-02-06 Thread Eric Bohlman
BareFeet wrote:
> In general, I think it's much better (performance and logic) to do
> all you can in SQL, without passing values out of SQL results, into
> your non-SQL code, then re-injecting back into another SQL query etc.

With SQLite, that's not really going to make a difference. Since it's an 
embedded library rather than an out-of-process server, 
"passing/injecting" doesn't require marshalling or network overhead, so 
there's no performance hit. In terms of logic, since SQLite doesn't have 
stored procedures the idea of centralizing database interactions into 
the DB itself rather than distributing the logic between applications 
isn't applicable here.


[sqlite] FTS3 performance rowid vs docid

2009-12-18 Thread Eric Grange
Hello All,

Using 3.6.21 and FTS3, I've noticed than when doing search queries,
using docid as in

select fts.docid from fts where fts.content match "la"

the search is much slower than when using rowid as in

select fts.rowid from fts where fts.content match "la"

Isn't docid supposed to be an alias of rowid in the case of FTS3 tables?
Is there anything wrong that can happen when using rowid instead of docid?
Or is it merely a missing optimization?

The speed difference depends on the number of returned results, but it
can be several orders of magnitude when the index isn't in cache, and
the memory usage is much higher too. I guess in one case the value is
obtained directly from the index, while in the other one it's read from
the fts_content table, is this correct?
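For what it's worth, the two forms do return identical values -- docid
is documented as an alias for rowid in FTS3/FTS4 tables.  A small sketch
(Python's sqlite3 module, assuming the FTS module is compiled in, which
it usually is):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Assumption: this build of SQLite includes the FTS3/FTS4 extension.
conn.execute("CREATE VIRTUAL TABLE fts USING fts4(content)")
conn.executemany("INSERT INTO fts(content) VALUES (?)",
                 [("la mer",), ("le ciel",), ("la nuit",)])

docids = [r[0] for r in conn.execute(
    "SELECT docid FROM fts WHERE content MATCH 'la'")]
rowids = [r[0] for r in conn.execute(
    "SELECT rowid FROM fts WHERE content MATCH 'la'")]
# Both queries identify the same matching rows (1 and 3).
```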

Thanks!
Eric



[sqlite] unable to backup using command line

2009-07-08 Thread Eric Peterson



C:\InWork\rsm\weekly_status>sqlite3 accounts.db .database
seq  name  file
---  ----  ---------------------------------------
0    main  C:\InWork\rsm\weekly_status\accounts.db



C:\InWork\rsm\weekly_status>sqlite3 accounts.db .help




But I can not do the .backup command from the command-line.


C:\InWork\rsm\weekly_status>sqlite3 accounts.db .backup main a.db
unknown command or invalid arguments:  "backup". Enter ".help" for help



Any ideas why?
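(Most likely the shell binary predates the .backup dot-command, which
wraps the online-backup API added in SQLite 3.6.11 -- an older
sqlite3.exe simply doesn't know the command.  If upgrading the shell
isn't an option, the same API can be driven from other bindings.  A
sketch in Python's sqlite3 module (3.7+), with in-memory databases
standing in for accounts.db and a.db:)

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE accounts(id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO accounts(name) VALUES ('checking')")
src.commit()

dst = sqlite3.connect(":memory:")   # in a real run this would be 'a.db'
src.backup(dst)                     # same online-backup API .backup uses

rows = dst.execute("SELECT name FROM accounts").fetchall()
```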

Eric




Re: [sqlite] Building sqlite

2009-06-29 Thread Eric Minbiole
> I'm currently on Windows and I've set up a python script to download the
> sqlite3 amalgamation. However, the ZIP file contains no build system for
> sqlite. I had to create a custom CMake script to build sqlite3 into a
> library. I do not wish to compile the C file with my source, it needs to be
> a static library.
> 
> Do you guys have a package that contains a build system for sqlite3 for
> Windows?

Can you use a dynamic library instead of static?  If so, there is a 
precompiled Windows dll for download on the SQLite site.  Otherwise, you 
will probably need to build manually, as you have done.

(Another option might be to use one of the myriad of Dll -> Static lib 
converters available, though this seems like more work than it's worth.)

~Eric



Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

2009-06-29 Thread Eric Minbiole
> So it seems by best bet is to close and open the connection once every 5 
> minutes or so?

While this might "work", I don't think you should resort to that.  As 
Kees noted earlier, there will be performance drawbacks if you 
close/re-open the database.

As others have indicated, the heap growth is likely due to SQLite 
intentionally caching frequently used disk pages in order to improve 
performance.  Rather than closing/re-opening, I suggest that you set the 
page cache size to a limit that is reasonable for your application:

http://www.sqlite.org/pragma.html#pragma_cache_size

The default is around 2MB.  You can decrease to as little as 10KB.  Note 
that there are some other buffers used by SQLite (Scratch, Lookaside, 
etc).  However, these are typically small compared to the page cache. 
Additional info here:

http://www.sqlite.org/malloc.html

~Eric


Re: [sqlite] Near misses

2009-06-28 Thread Eric Bohlman
Simon Slavin wrote:
> On 26 Jun 2009, at 12:25pm, Alberto Simões wrote:
> 
>> one adition, one remotion or one substitution
> 
> I am always amazed at how well people use English.  For your word  
> 'remotion' you probably mean 'removal' or 'omission'.  You have joined  
> the two possibilities together !

Although Alberto has explained the etymology of the term, in general the 
condensation of two or more words into one is called a "portmanteau." My 
favorite portmanteau arose when about 30 years ago a co-worker reported 
that software problems on an embedded device were caused by two routines 
"interfecting with each other." Interacting, interfering, affecting, 
infecting and probably more, all packed with a remarkable economy of 
expression.


Re: [sqlite] DBD::SQLite reporting corruption sqlite3 CL program does not

2009-06-28 Thread Eric Bohlman
Craig Talbert wrote:
> From Perl, when I attempt to make a database connection using SQLite,
> I get the following error:
> 
> [Tue Jun 23 17:10:22 2009] projectory.cgi:
> DBI->connect(dbname=projectory.sqlite3) failed: database disk image is
> malformed at ./projectory.cgi line 1577
> 
> At line 1577 it is executing this code
> 
> $dbh = DBI->connect("dbi:SQLite:dbname=projectory.sqlite3","","") or
> die "$DBI::errstr\n";
> 
> When I use the sqlite3 tool to do an integrity check, I get the following:
> 
> rintintin> sqlite3 projectory.sqlite3
> SQLite version 3.6.15
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> PRAGMA integrity_check;
> ok
> sqlite> .exit

Pretty obvious, but: since your dbname is unqualified, are you sure your 
working directory in your code is the same as your current directory 
when using the command line? Also, what version of SQLite is linked into 
your DBD::SQLite3? (if you're using DBD::SQLite, make sure it's not so 
old as to be using SQLite v2; I forget when the naming change was made).


Re: [sqlite] CURRENT_TIMESTAMP precision

2009-05-15 Thread Eric Minbiole
Please ignore my previous post.  Doug's suggestion is much better.

~Eric

Eric Minbiole wrote:
>> I would like CURRENT_TIMESTAMP to be more accurate than just one second, 
>> any suggestions on how I might do that once?  My solution is all a C/C++ 
>> interface, so all features are open to me.
> 
> One option would be to create and register a custom SQL function that 
> returned the current time, including fractional seconds.  If you format 
> your result as "YYYY-MM-DD HH:MM:SS.SSS", then all standard SQLite date 
> functions should work as-is (up to millisecond resolution).
> 
> Of course, how you actually get the time from the O/S will depend on 
> your particular O/S.  For windows, GetSystemTimeAsFileTime() may be of 
> use.  (Resolution around 1~16ms depending on O/S version.)  I'm sure 
> others can help with APIs for other Operating Systems.
> 
> Some links:
> 
> http://www.sqlite.org/cvstrac/wiki?p=DateAndTimeFunctions
> http://www.sqlite.org/c3ref/create_function.html
> http://msdn.microsoft.com/en-us/library/ms724397(VS.85).aspx
> 
> ~Eric



Re: [sqlite] CURRENT_TIMESTAMP precision

2009-05-15 Thread Eric Minbiole
> I would like CURRENT_TIMESTAMP to be more accurate than just one second, 
> any suggestions on how I might do that once?  My solution is all a C/C++ 
> interface, so all features are open to me.

One option would be to create and register a custom SQL function that 
returned the current time, including fractional seconds.  If you format 
your result as "YYYY-MM-DD HH:MM:SS.SSS", then all standard SQLite date 
functions should work as-is (up to millisecond resolution).
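For illustration, here is a minimal sketch of that approach using Python's 
built-in sqlite3 bindings (in C you would use sqlite3_create_function() 
instead; the function name current_ts_ms is made up for this example):

```python
import datetime
import sqlite3

def current_ts_ms():
    # Format the current UTC time as "YYYY-MM-DD HH:MM:SS.SSS" so the
    # standard SQLite date functions can parse the fractional seconds.
    t = datetime.datetime.now(datetime.timezone.utc)
    return t.strftime("%Y-%m-%d %H:%M:%S") + ".%03d" % (t.microsecond // 1000)

con = sqlite3.connect(":memory:")
con.create_function("current_ts_ms", 0, current_ts_ms)

# The millisecond part survives SQLite's own date functions:
frac = con.execute("SELECT strftime('%f', current_ts_ms())").fetchone()[0]
print(frac)  # seconds with millisecond resolution, e.g. "23.456"
```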

Of course, how you actually get the time from the O/S will depend on 
your particular O/S.  For windows, GetSystemTimeAsFileTime() may be of 
use.  (Resolution around 1~16ms depending on O/S version.)  I'm sure 
others can help with APIs for other Operating Systems.

Some links:

http://www.sqlite.org/cvstrac/wiki?p=DateAndTimeFunctions
http://www.sqlite.org/c3ref/create_function.html
http://msdn.microsoft.com/en-us/library/ms724397(VS.85).aspx

~Eric


Re: [sqlite] [newbie] How to upgrade sqlite3 in Ubuntu?

2009-05-05 Thread Eric Bohlman
Derrell Lipman wrote:
> The amalgamation probably installed into some directory not in your path.
> You should look at where it installed (re-run ../configure and look at its
> output, which should tell you where it will install to. For Ubuntu, you
> almost certainly want it to install into /usr/local with the executable
> going into /usr/local/bin. If it chose some path other than /usr/local, you
> probably want to remove it from wherever it installed to.
> 
> Next, remove the Ubuntu-provided version of sqlite3 since you won't need it
> any longer:
> 
>   sudo apt-get remove sqlite3

The amalgamation builds by default into /usr/local, but the apt package 
goes into /usr. Since some other packages have sqlite3 as a dependency, 
it's best not to remove the original install, but rather to overwrite 
it; otherwise installing new packages may end up "restoring" the old 
version. Simply configure with --prefix=/usr.

Note that the main problem isn't the search path for executables; it's 
the search path for dynamic libraries.



[sqlite] Max Value In Group By Scenario

2009-05-04 Thread Eric Pankoke
I realize this is more of a general SQL question, but I figured while I
was digging for the answer one of you would probably already know it.  I
have a table where the three import fields for my situation are
MfgProductID, ProductID and Weight.  For any given MfgProductID there
can be multiple rows, and for any given MfgProductID / ProductID
combination there can be multiple rows.  The following query: 

 

SELECT MfgProductID, ProductID, SUM(Weight) As MaxWeight FROM
dominantmaterials GROUP BY MfgProductID, ProductID

 

Gets me most of the way to what I need.  What I'm really after, however,
is the ProductID for each MfgProductID that has the greatest combined
weight from all MfgProductID / ProductID rows.  I hope this makes sense,
but as an illustration:

 

Mfg  |  Product ID  |  Weight
1    |  1           |  10
1    |  1           |  10
1    |  2           |  15

 

When I'm done I want only 1 row for MfgProductID 1, where ProductID is 1
(since the combined total of rows with ProductID 1 is 20 for weight,
which is greater than the single row of ProductID 2 at weight 15).
Sorry for a rambling explanation for what I'm sure is a simple solution.
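For reference, here is a runnable version of the sample above, plus one 
candidate query I've been toying with. It leans on SQLite returning the row 
that supplied MAX() for the other (bare) columns in a GROUP BY, which I'm 
not sure is guaranteed behavior, so treat it as a sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dominantmaterials "
            "(MfgProductID INT, ProductID INT, Weight INT)")
con.executemany("INSERT INTO dominantmaterials VALUES (?, ?, ?)",
                [(1, 1, 10), (1, 1, 10), (1, 2, 15)])

# First sum the weights per (Mfg, Product) pair, then keep the heaviest
# ProductID per MfgProductID via the bare-column-with-MAX() trick.
rows = con.execute("""
    SELECT MfgProductID, ProductID, MAX(TotalWeight)
      FROM (SELECT MfgProductID, ProductID, SUM(Weight) AS TotalWeight
              FROM dominantmaterials
             GROUP BY MfgProductID, ProductID)
     GROUP BY MfgProductID
""").fetchall()
print(rows)  # [(1, 1, 20)]
```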

 

Eric Pankoke

Mobile Games Reviewer

http://www.rustysabre.com/

http://www.technobrains.com/

 



Re: [sqlite] record order in select

2009-04-24 Thread Eric Minbiole
> If "order by" isn't used in a select statement, are the result records 
> ordered by rowid?

If you omit an ORDER BY clause, the order of the resulting data is 
undefined.  It might happen to be ordered by rowid in some cases, but 
this is not guaranteed.  (Might change in a future version, etc.)

If you need to ensure a particular order, use an ORDER BY clause.  The 
good news is that if you order by rowid / PRIMARY KEY, you likely won't 
see much (if any) performance drop, but you're guaranteed to get the 
order you want.
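A quick sketch with the Python bindings (table and data invented for 
illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (val TEXT)")
con.executemany("INSERT INTO t (val) VALUES (?)", [("a",), ("b",), ("c",)])

# Without ORDER BY the order is undefined; ORDER BY rowid pins it down
# and is cheap, since rowid is the key of the table's B-tree.
rows = con.execute("SELECT rowid, val FROM t ORDER BY rowid").fetchall()
print(rows)  # [(1, 'a'), (2, 'b'), (3, 'c')]
```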

The following "reverse_unordered_selects" pragma may be of interest: 
http://sqlite.org/pragma.html#pragma_reverse_unordered_selects

~Eric


Re: [sqlite] Slim down join results (all fields returned)

2009-04-17 Thread Eric Minbiole
> Currently, I return any needed data like this.
> 
>   select * from pubs,notes,publishers where pub_title like '%salem%' 
>   and pubs.note_id=notes.note_id
>   and pubs.publisher_id=publishers.publisher_id
> 
> And it works except for all fields in the matching tables being returned.
> 
> Is there any way using joins (been playing but can't get them working) so
> that a statement like 
> 
> pubs.note_id=notes.note_id
> 
> Would return only the contents of the notes.note_note field and not all the
> fields in the notes table?

Is this what you are looking for:

SELECT pubs.* from pubs, notes, publishers WHERE ...
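Or name the extra columns you do want alongside it. A sketch using the 
table and column names from your message (the schema is guessed, so adjust 
to taste):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE pubs  (pub_id INTEGER, pub_title TEXT, note_id INTEGER);
    CREATE TABLE notes (note_id INTEGER, note_note TEXT);
    INSERT INTO pubs  VALUES (1, 'Salem history', 10);
    INSERT INTO notes VALUES (10, 'first edition');
""")

# Everything from pubs, but only note_note from notes:
rows = con.execute("""
    SELECT pubs.*, notes.note_note
      FROM pubs, notes
     WHERE pub_title LIKE '%salem%'
       AND pubs.note_id = notes.note_id
""").fetchall()
print(rows)  # [(1, 'Salem history', 10, 'first edition')]
```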


Re: [sqlite] Attach the data return an error message

2009-04-08 Thread Eric Minbiole
> Hi all,
> I had the application using sqlite and executing the following sql statement:
> executeStmt: Error in executing the statment database TCPADDB is already in 
> use. Error St = 1 , stmt =   ATTACH DATABASE 
> \'/opt/phoenix/monitor/TCPFlowCurDayDB\'  as TCPADDB; insert into 
> tcpFlowTable select (strftime(\'%s\',date(startTime * 60,\'unixepoch\')))/60 
> , appId, remoteId, sum(ptFlowCountAgv) ,sum(proxyFlowCountAgv ), 
> sum(ptFlowCountDiff) , sum(proxyRequestCountDiff) , sum(proxyFlowCountDiff) , 
> sum(failedToProxyCountDiff ) from TCPADDB.tcpFlowTable  group by appId, 
> remoteId ; DETACH DATABASE TCPADDB ; 
>  
> The error message return back is the database(TCPADDB) is alreay in use but I 
> have checked the codes and didn't see any connection is opened for this 
> database so what is the problem here. Please give some hints where to look in 
> the codes to find this problem. I didn't see any connection is currently 
> opened for this database at the time the application executing above sql 
> statement. Any help is greatly appreciated.
> Thanks,
> JP

I don't think the issue is that you have opened a separate connection to 
this database (via sqlite3_open_v2()).  Instead, the message indicates 
that you have already ATTACH-ed the TCPADDB database into the existing 
connection.

A likely candidate is the SQL query you attached above (or one like it). 
  Your query has 3 statements in one: ATTACH, INSERT, and DETACH.  If 
the INSERT portion fails for any reason, the query may abort, and the 
DETACH won't run.  This causes problems for the next query, since you 
are attached when you don't expect to be.

I would suggest that you move the ATTACH / DETACH statements into 
separate queries so that you can ensure they are called at the 
appropriate times.

~Eric


Re: [sqlite] UTF-16 API a second class citizen?

2009-04-07 Thread Eric Minbiole
> Dear Group:
> 
> When my application launches I want to open the associated database, and if 
> that fails because the file does not exist then I would create a new 
> database. 
> 
> sqlite3_open_v2() is ideal for this purpose because you can leave out 
> SQLITE_OPEN_CREATE flag, and specify SQLITE_OPEN_READWRITE.
> 
> Unfortunately, this is all academic because I am using sqlite3_open16()! 
> Where is the UTF-16 version that accepts the flags as a parameter? How can I 
> achieve the same functionality? Let me add that I am not too keen on 
> modifying sqlite.c so thats not an option (too much hassle when new versions 
> come out).
> 
> How did this oversight happen? And what is the workaround? How can I tell, 
> after a call to sqlite3_open16() if the database was created? The first thing 
> I do when it opens is SELECT * FROM VERSION to see if I have to upgrade the 
> database to a new version of my application data. I guess that call could 
> fail and that would be my clue to create all the tables.
> 
> But what if the SELECT fails for a different reason? How do I distinguish it? 
> How do I make this robust?
> 
> Thanks!

I would assume the intent is that you convert your UTF-16 filename to 
UTF-8, and then call sqlite3_open_v2().  I don't know what platform you 
are running on, but you may have some conversion APIs available to you. 
  If not, unicode.org provides some nice sample code that performs the 
conversion:

http://unicode.org/faq/utf_bom.html#UTF8

~Eric


Re: [sqlite] Is it using an index?

2009-04-06 Thread Eric Minbiole
> If I have a query:
> 
> SELECT foo FROM bar WHERE id = 14;
> 
> How can I see if that query is optimized to use an index, and which index
> it's using. I thought if you did an EXPLAIN it would show that, but I'm not
> seeing it? Maybe it's not really using an index?

Use the command "EXPLAIN QUERY PLAN {your sql command}".  This will give 
you a high level overview of all the tables being accessed, and which 
indices (if any) will be used for each.

For a simple query like your example, SQLite will almost certainly use 
an index on "id" if one is available.  You can use above to verify this.
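For example, from the Python bindings (index name invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bar (id INTEGER, foo TEXT)")
con.execute("CREATE INDEX bar_id_idx ON bar (id)")

# The last column of each plan row describes the access strategy; the
# index name appears there when SQLite decides to use it.
plan = con.execute("EXPLAIN QUERY PLAN "
                   "SELECT foo FROM bar WHERE id = 14").fetchall()
for row in plan:
    print(row[-1])  # e.g. "SEARCH bar USING INDEX bar_id_idx (id=?)"
```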

~Eric


Re: [sqlite] "-journal" file?

2009-04-02 Thread Eric Minbiole
> Whenever my phBB3 install is failing on the PPC Mac Mini, it appears  
> that SQLite is producing an empty database file (size 0) plus a file  
> with the same name plus "-journal" appended.  What kind of error is  
> this?  Does this mean SQLite somehow crashed out while attempting to  
> write to the DB?

The journal file is created when a database transaction begins.  The 
file is used to ensure that the transaction is "atomic"-- ie, that it 
completes fully, or not at all.

In your case, the leftover journal file likely indicates that the phBB3 
install either crashed mid-update, or it neglected to "commit" the 
pending transaction.  The next process that opens the database will 
detect the incomplete transaction, and roll the database back to its 
previous (in this case empty) state.

Some helpful info:

http://www.sqlite.org/tempfiles.html
http://www.sqlite.org/atomiccommit.html

~Eric


Re: [sqlite] select the first 2 rows

2009-04-01 Thread Eric Minbiole
> Hi all,
> I have a big table and I want only select the first 2 rows.
> I have tried this :
> select top 2 from table;
> but it doesn't work! Any help please.
> JP

Use a LIMIT clause instead of TOP:

SELECT * FROM table LIMIT 2;

http://www.sqlite.org/lang_select.html
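One caveat: without an ORDER BY, "the first 2 rows" isn't well defined, so 
pair LIMIT with an explicit ordering. A sketch (note that "table" itself is 
an SQL keyword, so a real name is used here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE big_tab (n INT)")
con.executemany("INSERT INTO big_tab VALUES (?)", [(i,) for i in range(100)])

# LIMIT trims the result to 2 rows; ORDER BY makes "first" deterministic.
rows = con.execute("SELECT n FROM big_tab ORDER BY n LIMIT 2").fetchall()
print(rows)  # [(0,), (1,)]
```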


Re: [sqlite] Source code position out of sync debugging in VS 2008

2009-03-28 Thread Eric Minbiole
> I've already tried rebuilding everything, checked the settings, etc... but 
> nothing seems to help. I get the feeling this is a problem with the file 
> being so large and containing so many symbols. Has anyone else experienced 
> this problem?

Indeed, the Visual Studio debugger can only handle 2^16 lines in a 
single source file.  (The compiler has a more reasonable limit of 2^24 
lines.)  Some additional information in this thread:

http://social.microsoft.com/Forums/en-US/vsdebug/thread/7d991493-06f7-45f6-8f34-165b988e266c

~Eric


[sqlite] tool to browse a sqlite database

2009-03-07 Thread Eric S. Johansson
I'm looking for some gui tools for looking at and changing my sqlite database,
the only ones I know of are the firefox extension and sqlitemanager.  the
firefox extension fails with:

Error in opening file messages.sqlite - perhaps this is not an sqlite db file
Exception Name: NS_ERROR_FAILURE
Exception Message: Component returned failure code: 0x80004005
(NS_ERROR_FAILURE) [mozIStorageService.openUnsharedDatabase]

sqlitemanager has its own problems.  the database seems sane, sqlite3 likes it
and the app runs.

any other suggestions on what gui management tools are available?

thanks


[sqlite] Case Insensitive Sorting With FTS Columns

2009-02-21 Thread Eric Black
Hi,

I'm trying to add full text search (FTS3) to a small project I'm working on 
with SQLite v3.6.11. When I search for matches, I'd like the results to be 
alphabetical (case insensitive), but the FTS table is doing case-sensitive 
sorting. For all other tables, I defined the columns as TEXT COLLATE NOCASE, 
but I read that the data type, collation, and other column options aren't used 
for FTS virtual tables. For example:

CREATE VIRTUAL TABLE ftsa using fts3(col1 text COLLATE NOCASE);
insert into ftsa (col1) values ('z');
insert into ftsa (col1) values ('Z');
insert into ftsa (col1) values ('a');
insert into ftsa (col1) values ('A');
select * from ftsa order by col1;

Returns:
A
Z
a
z

I found I could get it to work with:

select * from ftsa order by Lower(col1);

Returns:
a
A
z
Z

Is that the best way to do it? It seems inefficient to have to change the 
capitalization on every item every time I query them.

I thought about keeping the text in a normal table, and just putting a copy in 
the FTS virtual table (maybe also removing a, an, the, of, etc). Is that what 
people normally do? ...maybe with triggers on the normal table to keep the 
virtual table updated?

I'm using this for a code snippet database. The table needs to record ID, 
Title, SourceCode, FileName, FileBLOB, LastUpdateDate, and I'd like to be able 
to search on Title, SourceCode, and FileName. If it matters, I also need to 
join to other tables based on the ID field--which seems to be challenging when 
using MATCHES, but I found a comment showing how to do it by prefixing the join 
on FTS field condition with a + sign.

Thanks,

Eric




Re: [sqlite] need partial index

2009-02-11 Thread Eric Minbiole
> let's say i have a table MY_TABLE(INT_ID int not null, STRING_ID 
> varchar(255), DATA blob, primary key(INT_ID)). table has millions of 
> rows. i need to map from STRING_ID to INT_ID, but only for some rows. at 
> the time of insertion i know whether the new row needs to be in that map 
> or not. in some environments, most rows are mapped and in others, most 
> rows are not mapped. assume single insert per transaction. i would like 
> to avoid insertion-time performance impact of updating an extra index 
> when i don't need it. i've considered adding a second table with just 
> INT_ID and STRING_ID columns and inserting into that table only when i 
> need that mapping. however, when most rows are mapped, performance of 
> that solution seems worse than just an index on STRING_ID in MY_TABLE 
> table. i have also considered having two tables, one with an index on 
> STRING_ID and one without and inserting into one table or the other 
> table as appropriate. but, that would as much as double the cost of all 
> my INT_ID-based SELECTs and DELETEs because i would need to execute them 
> on two tables.
> 
> ideally, it would be nice if there was a way to index just the rows that 
> i need. is there any SQL/SQLite trick that i am missing?

As far as I know, SQLite does not support Partial Indices.  Your idea of 
potentially using a separate mapping table for just those items that 
need to be indexed seems a reasonable approximation of a partial index. 
  However, as you indicate, there may be substantial performance and/or 
space impacts with this approach-- perhaps worse than indexing everything.

I would take a step back and ask why you are worried about indexing 
"unneeded" rows:  You seem to indicate that "insertion-time performance" 
is your primary concern.  However, you also state that you plan to do a 
single insert per transaction.  If this is the case, my guess is that 
the time required for the disk flush at the end of each transaction* 
will be an order of magnitude longer than the index update.

In other words, if you are okay with the (slowish) performance of single 
insert per transaction, then I doubt you would notice the additional 
time to update an index.  Of course, you should test this hypothesis.
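For completeness, the separate-mapping-table idea can be sketched like this 
(names taken from the question; measuring it against a plain index on 
STRING_ID is exactly the test to run):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE MY_TABLE (INT_ID INTEGER PRIMARY KEY,
                           STRING_ID VARCHAR(255), DATA BLOB);
    -- Indexed side table, populated only for rows that need the
    -- STRING_ID -> INT_ID mapping.
    CREATE TABLE MAPPED (STRING_ID VARCHAR(255), INT_ID INTEGER);
    CREATE INDEX MAPPED_SID ON MAPPED (STRING_ID);
""")

def insert_row(con, int_id, string_id, data, needs_mapping):
    con.execute("INSERT INTO MY_TABLE VALUES (?, ?, ?)",
                (int_id, string_id, data))
    if needs_mapping:  # pay the index-update cost only when required
        con.execute("INSERT INTO MAPPED VALUES (?, ?)", (string_id, int_id))

insert_row(con, 1, "alpha", b"", True)
insert_row(con, 2, "beta", b"", False)
print(con.execute("SELECT INT_ID FROM MAPPED "
                  "WHERE STRING_ID = 'alpha'").fetchall())  # [(1,)]
```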

~Eric

* Assuming you haven't turned off synchronous write mode.


Re: [sqlite] Saving and loading SQLite pages from a buffer

2009-02-04 Thread Eric Minbiole
> That depends.  If the change was made using the same database  
> connection that was passed into sqlite3_backup_init(), then only those  
> pages that changed are recopied.  However, if an independent database  
> connection made the change, then the backup process has no way of  
> knowing exactly which pages changed, so it has to start over again  
> from the beginning.

That makes sense, thank you.  (I wasn't sure if the individual pages had 
a "Change Counter", similar to the one in the File Header, that could be 
used.)

~Eric


Re: [sqlite] Saving and loading SQLite pages from a buffer

2009-02-04 Thread Eric Minbiole
  > http://www.sqlite.org/draft/c3ref/backup_finish.html

This is excellent!  I've been looking for a clean way to perform live 
backups.  This (draft) API looks perfect.

I have one clarification question about source database changes made 
during the backup.  The documentation states:

"If the source database is modified [...] then the backup will be 
transparently restarted by the next call to sqlite3_backup_step()"

Does this mean that the backup is restarted *to the beginning*, or is it 
able to re-copy only those pages which have just been modified?

Thanks,
  Eric


Re: [sqlite] segmentation violation in fulltest on Mac OS X

2009-01-19 Thread Eric Minbiole
> Of course, I wasted 4 hours tracking the problem down.  This is  
> yet another episode that demonstrates how threads are a pernicious  
> evil that should be studiously avoided in any program that you  
> actually want to work.  Threads cause hard-to-trace bugs.  Threads  
> result in non-deterministic behavior.  Threads make programs run  
> slower.  Just say "No" to threads...

Let me start by saying that I have a great respect for SQLite and its 
developers.  I'm extremely pleased with the code itself as well as with 
the great support community. :)

However, I'm a bit surprised by the "threads are evil" mantra. 
Certainly, threads can cause "hard-to-trace bugs" when used improperly. 
  However the same can be said for many other language constructs, such 
as pointers, dynamic allocation, goto statements, etc.  Any tool can get 
you into trouble if abused.

No matter how you slice it, concurrent programing can be tricky.  While 
  multi-thread and multi-process approaches each have pros and cons, the 
dangers are the same: The programmer must take cautions to ensure that 
any shared resource is accessed safely.  When used properly, either 
approach can work reliably.

I have no doubt that there are many cases where the multi-process 
approach has clear benefits.  Indeed, if one prefers the multi-process 
approach, then by all means use it.  However, a multi-threaded approach 
can have benefits as well.  Advocating a "one size fits all" approach 
for everyone, without knowing the details of a particular application, 
just seems an oversimplification to me.

Sorry for my rant :)

~Eric


Re: [sqlite] Compressed dump SQLite3 database

2009-01-19 Thread Eric Minbiole
> We need to produce copies of our databases for archive.
> It is a requirement that the size of those copies being as small as
> possible, without having to perform an external compression.
> vacuum doesn't seem to perform a compression (it works on fragmented
> data), is there any other way to do that ?

If you can't use an external compression program (which would almost 
certainly help reduce the size of your archived database), then there 
are a couple of options I can think of:

1. When you create the copy of your database, you could drop all of the 
indices from the copy, then vacuum.  Depending on your schema, this has 
the potential to remove some redundant information.  (At the expense of 
query speed, of course.)  You could always re-create the indices, if 
needed, when reading the archive.

2. If that doesn't help enough, run the sqlite3_analyzer (from 
http://sqlite.org/download.html) to see which table(s) are using the 
most disk space.  Focus on these tables to see if you can save space: 
Can you better normalize the schema to reduce repeated values?  Can some 
(non-vital) data be omitted from the archive?  etc.

If the above two options don't help enough, than I would reconsider the 
external compression tool.  zlib, for example, is a relatively 
lightweight, open source compression library that may do well on your 
database.

Hope this helps,
  Eric


Re: [sqlite] Adding data with periods

2008-12-15 Thread Eric Bohlman
Mohd Radzi Ibrahim wrote:
> It seems to work either way.
> 
> I'm just wondering is there any hidden reason that single quote is 
> preferred? Portability?
> Or does double-quote have some kind of special meaning that we should use it 
> for that special purpose?

If what's enclosed in the double quotes winds up being a column name or 
other identifier, SQLite will treat it as an identifier rather than a 
literal. This can cause all sorts of unpleasant surprises, particularly 
of the "this was working fine until I made this cosmetic change..." sort.
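A small demonstration of the trap with the Python bindings (table and data 
invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (name TEXT)")
con.execute("INSERT INTO t VALUES ('x')")

# Single quotes: 'name' is a string literal; no row has that value.
print(con.execute("SELECT count(*) FROM t "
                  "WHERE name = 'name'").fetchone()[0])  # 0

# Double quotes: "name" resolves to the column itself, so the test
# becomes name = name and matches every row.
print(con.execute('SELECT count(*) FROM t '
                  'WHERE name = "name"').fetchone()[0])  # 1
```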


[sqlite] multiple tables within a database

2008-12-01 Thread Eric S. Johansson
what's the general rule for deciding when to put multiple tables within a single
sqlite db file?  I think the answer is something like you put tables together in
one database file if they refer to different aspects of the same data element
and you put them in separate database files if there's no connection except the
databases are used in the same application.  For example, in my case, I have one
table which contains the raw original data, a  thoroughly cooked form of the
original data, and a series of data elements which are used for searching and
display.  The related table contains information derived from postprocessing and
will also be used for searching and graphing.  The second table's information
could be regenerated anytime at a cost of running through every record in the
database and recalculating it.  As a result of this association, I figure it's
appropriate to place both tables in the same database file.

The third table tracks data from another part of the process and has no
connection to the first two tables except that it is used as part of the
postprocessing calculations that feed the second table described above.  I
figure the third table should go in its own database.

For what it's worth, record counts could easily cross 100,000 for each one of
the tables.  Hope it's not time for mysql :-)

Thanks for any feedback.


Re: [sqlite] Journal files

2008-12-01 Thread Eric Minbiole
> I am looking for a way to completely turn off the creation
> of journal files. Any help is much appreciated.

http://www.sqlite.org/pragma.html


Re: [sqlite] how to copy a big table in limited memory

2008-11-26 Thread Eric Minbiole
> I use sqlite3 on resource-limited embedded devices, and it provides
> only 250K of memory for sqlite.
> Now, I have a table named test_tab, whose size is 300K bytes.
> If I want to copy this big table to another table, this operation will
> fail because of limited memory.

The first thing I would try to do is to reduce the SQLite page cache. 
By default, it is set for 2000 pages.  At about 1.5K RAM each, that is a 
peak of 3MB.  When you copy the big table, it may exceed your 250KB 
allocation limit by trying to cache the whole table into RAM.

In your case, you might reduce the page cache to 100 pages or so.  (Of 
course, this may have a performance impact.)  This can be done via the 
pragma cache_size directive.  In addition, you will probably want to 
ensure that your temp_store (for temporary tables, etc) is set to FILE. 
  See http://www.sqlite.org/pragma.html for more info.
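Both settings in one place, as a sketch (the values are illustrative; tune 
them to your device):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA cache_size = 100")   # ~150KB peak instead of ~3MB
con.execute("PRAGMA temp_store = FILE")  # spill temp tables to disk

print(con.execute("PRAGMA cache_size").fetchone())  # (100,)
print(con.execute("PRAGMA temp_store").fetchone())  # (1,) -- 1 means FILE
```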

Finally, take a look at the lookaside memory allocator.  By default, it 
creates a working buffer of 50KB per connection.  Reducing this buffer 
may make more room for page cache (above).  You'll have to find a good 
balance that works for your system.  See http://www.sqlite.org/malloc.html .

Hope this helps,
  Eric


Re: [sqlite] sqlite 3.6.5 slow on windows

2008-11-25 Thread Eric Minbiole
> Here is a typical outputs from the program running in debug mode:

Debug builds can be substantially slower.  I would expect at least some 
performance improvement with a release build (optimizations on).


> Notice my constraint is these SQL statements must run one by one, not within 
> a transaction.

Alas, this will dramatically reduce write performance.  Since each 
insert statement will now require an explicit disk flush, it will be 
very slow.  Is there *really* no way to group multiple writes together? 
  As you've likely read elsewhere, doing so can provide orders of 
magnitude performance improvements-- well worth the effort to reorganize 
your architecture a bit.
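A minimal sketch of the grouping idea, using Python's built-in sqlite3 
module with a made-up table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")

# One explicit transaction around the whole batch: a single journal write
# and a single disk flush at COMMIT, instead of one flush per INSERT.
con.execute("BEGIN")
for i in range(1000):
    con.execute("INSERT INTO t (x) VALUES (?)", (i,))
con.commit()

count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
# count == 1000
```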


> 1) are these expected performance on windows?
> 2) if not, what can be done to improve it?

If you really must run the statements individually, you have a few (less 
than perfect) alternatives:

- Turn off synchronous writes ("pragma synchronous = OFF").  Though this 
will dramatically improve your insert performance, you will be running 
without any safety net: If the application crashes or exits during a 
write, your database may become (permanently) corrupted.

- The compiler optimizations (above) will likely help somewhat, though 
not terribly much.

- A faster (higher RPM) hard drive will help somewhat.


~Eric



Re: [sqlite] Escaped values throws error

2008-11-16 Thread Eric Minbiole
> This is my first foree into PDO-SQLite and I saw samples using binding,
> I am using prepare() though without binding.
> 
> So you think that had I used binding my escape issue would not be an issue ?

Correct.  Though I've not used PDO-SQLite (I use the C interface), I 
would think that using bound parameters would eliminate your string 
escaping issues.  I use bound parameters almost exclusively:  You never 
have to worry about escaping or sanitizing your strings-- just let the 
Db engine do the work for you.

A related classic: http://xkcd.com/327/
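A sketch of the bound-parameter approach (Python's built-in sqlite3 module 
here; PDO's placeholders behave the same way), with a hypothetical table 
and value:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT)")

# The value travels as data, never as SQL text, so quotes and injection
# attempts need no escaping at all.
tricky = "Robert'); DROP TABLE users;--"
con.execute("INSERT INTO users (name) VALUES (?)", (tricky,))

stored = con.execute("SELECT name FROM users").fetchone()[0]
# stored == tricky, byte for byte, and the table still exists
```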

~Eric


Re: [sqlite] Escaped values throws error

2008-11-16 Thread Eric Minbiole
> Now I remember why I wasn't using single quotes in SQLite statements -
> 
> because escaped single quoted values like
> 
> 'Billy\'s bike'
> 
> cause it to fail. Again I think its related to PHP-PDO SQLite 
> implementation.

Have you considered using prepared statements and bound parameters? 
That way, you never have to worry about character escaping, or SQL 
injection problems.
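For reference, standard SQL-- and SQLite-- escapes a single quote inside a 
string literal by doubling it, not with a backslash; bound parameters avoid 
the question entirely.  A quick check:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# 'Billy''s bike' is the standard-SQL spelling; 'Billy\'s bike' is not.
literal = con.execute("SELECT 'Billy''s bike'").fetchone()[0]
# literal == "Billy's bike"

# The bound-parameter version needs no escaping at all.
bound = con.execute("SELECT ?", ("Billy's bike",)).fetchone()[0]
# bound == "Billy's bike"
```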

~Eric


Re: [sqlite] sqlite3_bind_text problem

2008-11-14 Thread Eric Minbiole
> I'm compiling sqlite using Visual C++ 6 on windows atm and maybe there is a 
> problem that only shows up in VC6 and not in gcc. So with that in mind I'm 
> building the Mac port of my software (which is built using gcc) and try to 
> reproduce the problem there. I guess if everything is perfect there then it's 
> VC6's fault. If I get the same result then it's something else.

I have used VC6 for many years, and found it to be a very stable 
development platform.  Code bugs are far more common than compiler bugs. 
  Even if the problem does "go away" under a new environment, I would 
take a very close look at your code.  As Igor suggests, you may wish to 
post a short code sample that highlights the problem.

~Eric


Re: [sqlite] sqlite3_bind_text problem

2008-11-14 Thread Eric Minbiole
> Then a bind the values to that statement and write it to the DB. That all 
> works except for the TEXT field "Name". I'm calling sqlite3_bind_text like 
> this:
> 
>   char *Name = "Something";   
>   sqlite3_bind_text(Stmt, 3, (const char*)Name, -1, SQLITE_STATIC);
> 
> And it return SQLITE_OK, however the value in the DB is NULL, or an empty 
> string. Certainly not the "Something" I pass in. All the integer fields are 
> written successfully so I know it's mostly working. The string is valid 
> utf-8, null terminated C-string.

It's possible that the char* variable is going out of scope before 
SQLite3 has a chance to commit the string data.  Try passing 
SQLITE_TRANSIENT to sqlite3_bind_text(), rather than STATIC.  This will 
cause SQLite to create a private copy of the string, ensuring that it 
isn't destroyed prematurely.

~Eric


Re: [sqlite] Floating point numbers

2008-10-06 Thread Eric Minbiole
> Does anyone knows why floating point numbers are truncated when they are
> written or read from the database?!

SQLite stores real numbers as 8 byte IEEE floats, which can hold 
approximately 16 significant digits.  See: 
http://www.sqlite.org/datatype3.html

You could get slightly more significant digits by using scaled 64 bit 
integers-- approximately 18-19 decimal digits.  (This comes at the 
expense of additional code complexity and loss of range.)

If you need more than that, you'll need to store numbers as strings (or 
blobs), and do your own arbitrary precision math as needed.
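A rough illustration of those limits (Python floats are the same 8-byte 
IEEE doubles SQLite uses for REAL; the values are made up):

```python
# A 20-digit integer exceeds the ~16 significant digits that an 8-byte
# IEEE double can hold, so a round trip through REAL silently rounds it.
big = 12345678901234567890
round_tripped = int(float(big))
# round_tripped != big  -> digits were lost

# A scaled 64-bit integer (e.g. currency in hundredths) keeps all 18-19
# digits exactly, as long as it fits the signed 64-bit range.
cents = 1234567890123456789
fits = -2**63 <= cents < 2**63
# fits is True
```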


Re: [sqlite] macro redefinition

2008-10-01 Thread Eric Minbiole
> I tried the first option and i am getting the following error :
> -- Build started: Project: Source Tagging System, Configuration: Debug
> Win32 --
> 
> Compiling...
> sqlite3.c
> c:\Documents and Settings\Administrator\My Documents\Visual Studio
> Projects\Source Tagging System\sqlite-amalgamation\sqlite3.c(22) : fatal
> error C1853: 'Debug/Source Tagging System.pch' precompiled header file is
> from a previous version of the compiler, or the precompiled header is C++
> and you are using it from C (or vice versa)

It appears that you cannot share the same precompiled header between C 
and C++ files.  In that case, it's probably best to disable precompiled 
headers for the SQLite source file(s).  Do a quick search on your 
original compiler error code, "C1010"-- this should give plenty of info 
on how to disable this feature.


Re: [sqlite] macro redefinition

2008-09-30 Thread Eric Minbiole
> Hi,
> I am getting the following error log when i try to build sqlite source :
>  ...
> c:\Documents and Settings\Administrator\My Documents\Visual Studio
> Projects\Source Tagging System\sqlite\where.c(2902): fatal error C1010:
> unexpected end of file while looking for precompiled header directive

By default, Visual Studio projects use "precompiled headers", which can 
greatly speed up compilation times.  To use this feature, you must 
include the special file, "stdafx.h", at the top of each source file.

To correct this error, you can either #include "stdafx.h", or disable 
precompiled headers for sqlite project file(s).  The latter setting can 
be found in the "precompiled headers" section of the project properties. 
  Do a search on "C1010" for more info.

~Eric


Re: [sqlite] SQLite 3.6.1 memory leak?

2008-08-30 Thread Eric Minbiole
Ulric Auger wrote:
> Hi,
> 
> Since I updated to SQLite 3.6.1 I have a memory leak when my application
> exits.
> 
> If I compile using SQLite 3.5.8 I don't have the memory leak.

Be sure to call sqlite3_shutdown() just before the application exits-- 
this should free any outstanding resources held by SQLite.  (As a test, 
I commented out this shutdown call, and was able to reproduce the same 
leak warning message you reported.)


Re: [sqlite] GCC -ffast-math safe version of sqlite3IsNaN

2008-08-27 Thread Eric Minbiole
Brown, Daniel wrote:
> Thanks for the clarification Roger, I guess it looks like I will need to
> modify the compiler settings locally then.

If you can modify the amalgamation source code, I would try updating 
sqlite3IsNan() to use the standard C isnan() macro.  DRH commented in 
one of the tickets that isnan() is not used by default since it is not 
available on all platforms.  (In addition, the custom IsNan removes a 
dependency on the standard math library.)

Assuming that isnan() is available to you (and assuming that it works 
with -ffast-math), you may be able to trade a tricky build-script change 
for a quick source code change.
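The background, sketched in Python (where IEEE semantics always hold): 
sqlite3IsNaN relies on NaN being the only value that compares unequal to 
itself, which is exactly the assumption -ffast-math lets the compiler 
discard.

```python
import math

nan = float("nan")

# IEEE 754: NaN is the only value unequal to itself.  A compiler running
# under -ffast-math may assume x == x always holds and fold this test to
# False, which is why the custom check breaks there while a real isnan()
# implementation does not.
self_unequal = nan != nan
# self_unequal is True
is_nan = math.isnan(nan)
# is_nan is True
```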

~Eric


Re: [sqlite] SQLite DB backups on Windows

2008-08-01 Thread Eric Minbiole
> What we're thinking of doing is pushing the PENDING_BYTE from the first 
> byte past the 1GB boundary to somewhere deep in the 64-bit range (such 
> as perhaps the 1TB boundary).  We would have to update many lock and 
> unlock calls in os_win.c to do so, mainly adding a high-order 32-bit 
> number to the lock offset.  Is anyone aware of any issues with doing so, 
> either with SQLite or Windows?  Would we have to change anything else in 
> SQLite other than in os_win.c?  We don't use anything older than Windows 
> 2000, so older systems shouldn't be a concern for us.

Might it be simpler to extract the underlying file handle from the 
sqlite3 structure?  Once you have this handle, you could manually 
perform a file copy, reading & writing a block at a time.  Since it's 
the same handle that was granted the lock, you should have no access 
restrictions.  Though I haven't tried it, I would hope that the 
block-by-block copy would have similar performance to the O/S CopyFile call.

Granted, there may not be any "supported" way to extract the file handle 
from the sqlite3 struct.  However, it has to be at least as easy as 
updating the code to move the PENDING_BYTE location, as you propose 
above.  Plus, it avoids the compatibility issues mentioned by DRH.
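The copy loop itself is simple; a hedged sketch with in-memory stand-ins 
for the locked file handle and the backup destination (the real version 
would read through the Win32 handle extracted from the sqlite3 struct):

```python
import io

def copy_by_blocks(src, dst, block_size=64 * 1024):
    """Copy src to dst one fixed-size block at a time."""
    while True:
        block = src.read(block_size)
        if not block:
            break
        dst.write(block)

# Stand-ins for the locked database file and the backup destination.
src = io.BytesIO(b"\x42" * 200_000)
dst = io.BytesIO()
copy_by_blocks(src, dst)
# dst now holds an identical 200,000-byte copy
```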

~Eric


Re: [sqlite] Datetime issue with the time part

2008-07-14 Thread Eric Minbiole
> Once again, all of these problems doesn't happen before the creation of 
> the Direct3D device. Does anyone ever used SQLite successfully in a 
> full-screen 3D game ?

From what you describe, it seems that the compiler is performing 
single-precision, rather than double-precision, math.  After a quick 
Google search, I found a few posts indicating that Direct3D silently 
switches the FPU from double to single precision math, presumably in 
order to improve performance.

While it seems nearly unconscionable that a graphics library would mess 
with the FPU, the good news is that it appears you can override this 
default behavior when creating a 3D device.  See "FpuPreserve" flag:

http://msdn.microsoft.com/en-us/library/bb153282(VS.85).aspx
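The size of the effect is easy to estimate.  Near mid-2008 Julian day 
numbers, a single-precision float can only resolve quarter days, which 
matches the reported 00:00:00 / 18:00:00 values exactly (the day number 
below is made up for illustration):

```python
import struct

# SQLite tracks time as a Julian day number in double precision.  Near
# JD 2454661, a float32 has a spacing (ulp) of 2**-2 = 0.25 day, so a
# single-precision FPU quantizes every time of day to a 6-hour step.
jd = 2454661.27
jd_as_float32 = struct.unpack("f", struct.pack("f", jd))[0]
# jd_as_float32 == 2454661.25  -> the fractional day snapped to 6 hours
```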

Hope this helps,
  Eric


Re: [sqlite] Datetime issue with the time part

2008-07-14 Thread Eric Minbiole
Sebastien Robillard wrote:
> Hi everyone,
>   I have an issue with datetimes that doesn't return the "time" part 
> correctly (always 00:00:00 or 18:00:00) when I use SQLite in my C++ 
> code. Whenever I use datetime('now'), or current_timestamp, the time is 
> not correct. However, it works correctly when using the sqlite3 program 
> (sqlite-3_5_9.zip from download page). Right now I use the SQLite dll 
> (sqlitedll-3_5_9.zip) in my code, but I also tried with the source code 
> amalgamation with the same results 

Your code looked correct, so I tried to reproduce the problem:  I 
complied the sample code you provided using Visual Studio 2005 and the 
v3.5.9 amalgamation.  Running under XP, I got the expected result:

DATETIME('NOW') = 2008-07-14 18:29:49

I assume that you are running under some flavor of Windows, since you 
are using sqlitedll.  Perhaps start by adding some traces to SQLite 
function "winCurrentTime()"?

Good luck,
  Eric


Re: [sqlite] printf-8.2 test failure on Solaris on SPARC 64-bit

2008-06-20 Thread Eric Minbiole
> printf-8.2...
> Expected: [2147483647 2147483648 4294967295]
>  Got: [2147483647 18446744071562067968 18446744073709551615]
> 
> The code looks like:
> 
> 
> ...
> do_test printf-8.2 {
>   sqlite3_mprintf_int {%lu %lu %lu} 0x7fffffff 0x80000000 0xffffffff
> } {2147483647 2147483648 4294967295}
> ...
> 
> where sqlite3_mprintf_int() is a Tcl function written in C that passes
> signed ints to a printf-like function with a format string that uses
> %lu.  I think here we have sign extension going on.  To me it seems
> clear that there's a bug in sqlite3_mprintf_int() -- why use %lu?

I agree that you are on the right track-- the format doesn't portably 
match the values.  However, I think the %lu part is correct-- "long" is 
the only C type guaranteed to be at least 32 bits.  Instead, I think the 
issue is that the hex constants are not explicitly specified as longs, 
so the compiler is treating them as normal int's, causing the mismatch.

Rather than a sign extension problem, I believe the compiler is reading 
8 bytes of parameter data from the stack for each %lu, versus the 4 
bytes supplied.  As confirmation of this, note that 18446744071562067968 
= 0xffffffff80000000 hex-- the 2nd and 3rd parameters combined.

I think it's a simple matter of adding the 'L' suffix to the constants. 
  I.e., 0x7fffffffL, 0x80000000L, etc.  This should work portably across 
32/64 bit platforms.
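The arithmetic behind both garbage values can be checked directly (pure 
integer math, no C involved):

```python
# Second reported value: the 3rd and 2nd 32-bit parameters read together
# as one 64-bit quantity.
combined = (0xFFFFFFFF << 32) | 0x80000000
# combined == 18446744071562067968

# Third reported value: 64 one-bits, the largest unsigned 64-bit integer.
all_ones = 0xFFFFFFFFFFFFFFFF
# all_ones == 18446744073709551615
```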


Re: [sqlite] Segmentation Fault when using mod_python / mod_wsgi

2008-06-18 Thread Eric Holmberg
This is now officially off-topic since the problem is in the Python
SQLite wrappers and Apache instead of the SQLite C code.  Regardless, I
would like to post the cause and solution so that anybody who ends up
here while doing a web search will know the solution.

> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread -1208731424 (LWP 20956)] PyDict_GetItem 
> (op=0x0, key=0xb7c25e4c) at Objects/dictobject.c:571
> 571 if (!PyDict_Check(op))
> (gdb) bt full
> #0  PyDict_GetItem (op=0x0, key=0xb7c25e4c) at 
> Objects/dictobject.c:571
> hash = 5854612
> ep = (dictentry *) 0x595594
> tstate = (PyThreadState *) 0x0
> #1  0x0058da34 in ?? ()
> No symbol table info available.
> #2  0x in ?? ()
> No symbol table info available.
> 
> Since SQLite works fine at the command line and fine with the 
> Python interpreter, but fails when invoked with mod_python 
> and mod_wsgi through Apache, then I'm assuming that there is 
> either an odd threading issue or something is not being setup 
> correctly in the Apache/mod_* chain.

Background
--

I installed SQLite, Python, mod_python, and mod_wsgi from source and
started getting the segmentation faults.  Recompiling Apache from the
http.spec source solved the segmentation fault.  Now I was able to get a
more sane SystemError response when connecting to the SQLite database
(either file-based or memory based).

System Information
--
RHEL AS 4
Apache Version: 2.0.52
SQLite Version: 3.5.9
SQLite Threadsafety: 1
Python Version: 2.5.2
PySQLite Version: 2.3.2
Mod_python Version: 3.3.1
Mod_wsgi Version: 2.0

Executing this code
---
con = sqlite3.connect(':memory:')

Results in the following error
--
SystemError: NULL result without error in PyObject_Call

Cause & Solution

It turns out that this was a symbol clash between the SQLite library
built into Python 2.5 (PySQLite version 2.3.2) and Apache's
mod_mem_cache module.  There are several possible solutions:

 1. Disable the mod_cache and associated mod_*_cache modules in Apache
 2. Install a newer version of PySQLite > 2.3.2 and then do the
following changes

# Change this
import sqlite3

# to this
from pysqlite2 import dbapi2 as sqlite3

 3. Remove the existing sqlite3 module in Python 2.5 and replace it with
a module that does a from pysqlite2.dbapi2 import *

See the following trouble ticket for more info:
 * http://oss.itsystementwicklung.de/trac/pysqlite/ticket/146


I hope this helps somebody out there.

Regards,

Eric Holmberg
Applications Engineer, Arrow Electronics
Denver, Colorado




Re: [sqlite] Segmentation Fault when using mod_python / mod_wsgi

2008-06-17 Thread Eric Holmberg
> Eric Holmberg wrote:
> > PyDict_GetItem (op=0x0, key=0xb7ce82cc) at Objects/dictobject.c:571
> > 571 if (!PyDict_Check(op))
> 
> You would need to supply more of the backtrace since the 
> calling routine is supplying a null pointer instead of a 
> dictionary.  Nothing points to SQLite being involved so far ...

The full backtrace doesn't provide much info at this point.  I am
attempting to recompile Apache with debugging symbols, but have not been
successful as of yet.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1208731424 (LWP 20956)]
PyDict_GetItem (op=0x0, key=0xb7c25e4c) at Objects/dictobject.c:571
571 if (!PyDict_Check(op))
(gdb) bt full
#0  PyDict_GetItem (op=0x0, key=0xb7c25e4c) at Objects/dictobject.c:571
hash = 5854612
ep = (dictentry *) 0x595594
tstate = (PyThreadState *) 0x0
#1  0x0058da34 in ?? ()
No symbol table info available.
#2  0x in ?? ()
No symbol table info available.

Since SQLite works fine at the command line and fine with the Python
interpreter, but fails when invoked with mod_python and mod_wsgi through
Apache, then I'm assuming that there is either an odd threading issue or
something is not being setup correctly in the Apache/mod_* chain.

I was just hoping that if this was a thread related issue, that somebody
may have seen this before and have a quick solution.


Build environment / configuration

export CFLAGS='-g '
export LDFLAGS='-g '

SQLite 3.5.9:
./configure --disable-tcl --enable-threadsafe
--enable-cross-thread-connections --enable-debug

Python 2.5.2:
./configure --with-pydebug --enable-shared

Mod_python:
./configure


Thanks,

Eric


Re: [sqlite] Source code for 3.4.2

2008-06-16 Thread Eric Holmberg
Okay, I found an old post on mail-archive.com
(http://www.mail-archive.com/sqlite-users@sqlite.org/msg02844.html)
which stated that I should just use the date.

So for version 3.4.2, I used 2007-08-13 18:00 UTC, which is marked as
Version 3.4.2 according to the timeline of the wiki (see
http://www.sqlite.org/cvstrac/timeline?d=30&e=2007-Aug-14&c=2&px=sqlite&s=0&dm=1&x=1).
Let me know if I'm off base here.

Actual command used:

> cvs update -D '2007-08-13 18:00:00 UTC' 

Thanks,

Eric Holmberg
Applications Engineer, Arrow Electronics

____

From: Eric Holmberg 
Sent: Monday, June 16, 2008 2:48 PM
To: 'sqlite-users@sqlite.org'
Subject: Source code for 3.4.2


Sorry ahead of time for a silly question, but how do I get
version 3.4.2 of the source code?  I didn't see any tags in CVS (but I'm
not a CVS user, so that may be user error on my part).
 
Any help with the exact command would be appreciated.
 
Thanks,
 
Eric Holmberg
Applications Engineer, Arrow Electronics
 



[sqlite] Segmentation Fault when using mod_python / mod_wsgi

2008-06-16 Thread Eric Holmberg
I'm trying to solve a RedHat Enterprise Linux 4 issue where I have
built, from source, Python 2.5.2, mod_python v3.2.8, and SQLite 3.5.9.
SQLite works fine in the Python 2.5.2 interpreter, but when running from
mod_python (I also tried mod_wsgi), I get a segmentation fault in the
Python file Objects/dictobject.c:571 when running a query.

I would like to try SQLite version 3.4.2 to get to a known setup just to
verify the problem isn't being caused elsewhere.

Any ideas are welcome.
 


Python test code

 
"""\
This program verifies that mod_python is able to use the SQLite module
integrated into
Python.
"""
import sqlite3
 
from mod_python import apache
 
def handler(req):
    req.content_type = 'text/plain'
    req.send_http_header()

    # query SQLite3
    req.write('Opening database connection ')
    con = sqlite3.connect('/rh241/u01/TestWsgiPylons/test.db')

    req.write('[OK]\r\nQuerying database ')

    # THIS CAUSES A SEGMENTATION FAULT (see GDB snippet below)
    cur = con.execute('select * from test')

    req.write('[OK]\r\nFetching result set')
    results = cur.fetchall()
    req.write('[OK]\r\n')
    req.write('Query results = ' + str(results) + '\n')

    return apache.OK

--
GDB Output
--
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1209055008 (LWP 25808)]
PyDict_GetItem (op=0x0, key=0xb7ce82cc) at Objects/dictobject.c:571
571 if (!PyDict_Check(op))
(gdb)



Thanks,
 
Eric Holmberg
Applications Engineer, Arrow Electronics
Engineering Solutions Center
Denver, Colorado
ESC:  877-ESC-8950, [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]> 
Direct:  303-824-4537, [EMAIL PROTECTED]
 


[sqlite] Source code for 3.4.2

2008-06-16 Thread Eric Holmberg
Sorry ahead of time for a silly question, but how do I get version 3.4.2
of the source code?  I didn't see any tags in CVS (but I'm not a CVS
user, so that may be user error on my part).
 
Any help with the exact command would be appreciated.
 
Thanks,
 
Eric Holmberg
Applications Engineer, Arrow Electronics
Engineering Solutions Center
Denver, Colorado
ESC:  877-ESC-8950, [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]> 
Direct:  303-824-4537, [EMAIL PROTECTED]
 
 


Re: [sqlite] prepare peformances

2008-06-09 Thread Eric Minbiole
> I would expect that sqlite3_prepare would be faster in such a case, and 
> maybe Toms is pointing out a circumstance where recreating the query 
> seems to be faster.  Or am I misreading the post?

One possible explanation (stab in the dark):

If many of the bound parameters are text (or blob) based, and were bound 
with sqlite3_bind_text(..., SQLITE_TRANSIENT), then SQLite would need to 
allocate & deallocate memory to hold each bound parameter.  (15 params * 
384 queries = 5760 allocations.)

OTOH, the previous sqlite3_exec approach may have simply sprintf'd the 
entire SQL statement(s) into a single pre-allocated buffer, possibly 
avoiding slow-ish dynamic allocations.  If the mem allocation time were 
 > than statement compilation time, the "prepare" approach would appear 
slower.

As I said, just a possibility...

~Eric


Re: [sqlite] CANTOPEN error on Windows systems running TortoiseSVN

2008-06-04 Thread Eric Minbiole
> - TSVNCache.exe monitors files and for whatever reason it opens them
> and performs some queries on them.
> - SQLite creates a journal file.
> - TSVNCache notices the new file and opens it.
> - SQLite deletes the journal file and it now enters the "Delete
> Pending" state since TSVNCache still has handle for it.
> - SQLite tries to recreate the journal for next transaction (3 times),
> but it always fails with the ERROR_DELETE_PENDING error.
>   => CANTOPEN error is returned.
> - TSVNCache finally releases the file handle and the file gets deleted.


It looks like you can configure TortoiseSVN to include / exclude 
specific paths during its searches.  Though I've not tried it, I would 
think you could simply exclude any paths that contain SQLite databases. 
  This thread had some good info:

http://www.nabble.com/Disable-TSVNCache.exe-to-minimize-disk-IO--to2385446.html

~Eric


Re: [sqlite] How to speed up my queries?

2008-06-03 Thread Eric Minbiole
> -- Even if you only go down to 1'-by-1' granularity, you've divided the
> world into 64,800 blocks.  Assuming that your 840K cities are all over the
> globe, and that about 70% of Earth is covered by water, that means that only
> about 20,000 blocks would actually have cities in them.  But with 840K
> cities, that means you're only considering about 42 cities for a single
> block.

Nice!  Though I haven't been part of this thread, I wanted to comment 
that this is a very elegant & efficient solution.  Kind of like a 2-D 
hash table.  I can see other areas where a similar approach would be 
helpful.  Thanks for a good tip-- I'm sure it will come in handy at some 
point.
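A toy sketch of the block-bucketing idea (hypothetical city data; floor() 
of the coordinates serves as the 2-D hash key):

```python
from collections import defaultdict
from math import floor

# Hypothetical city list: (name, latitude, longitude)
cities = [("Anchorage", 61.2, -149.9), ("Vancouver", 49.3, -123.1),
          ("Seattle", 47.6, -122.3), ("Portland", 45.5, -122.7)]

# Bucket every city by its 1-degree-by-1-degree block -- a 2-D hash table.
grid = defaultdict(list)
for name, lat, lon in cities:
    grid[(floor(lat), floor(lon))].append(name)

# A proximity query now inspects only the handful of nearby blocks
# instead of scanning the whole city list.
nearby = grid[(47, -123)]
# nearby == ["Seattle"]
```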

~Eric



Re: [sqlite] Bind arguments for insert and not null columns with default values

2008-05-20 Thread Eric Minbiole
> I have a table like this
> 
> CREATE TABLE foo (bar TEXT NOT NULL DEFAULT 'default_value');
> 
> and I'd like to create a reusable statement to do inserts into foo, like this:
> 
> INSERT INTO foo (bar) VALUES (?);
> 
> Sometimes I have values for bar and sometimes I don't and want the
> default. Is there any way to indicate to the statement that I want the
> bound parameter to be "nothing" therefore giving me the default value?
> If I bind that column to NULL I get a constraint error.

Could you simply create two separate prepared statements (one which sets 
bar and one which doesn't), then use the appropriate one, depending on 
whether you know the value of bar?  Perhaps not as elegant as reusing 
one statement for everything, but it should work easily enough.
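A sketch of the two-statement approach using Python's built-in sqlite3 
module; SQLite's INSERT ... DEFAULT VALUES form serves as the statement 
that leaves bar unset:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (bar TEXT NOT NULL DEFAULT 'default_value')")

# Statement 1: caller supplies a value for bar.
con.execute("INSERT INTO foo (bar) VALUES (?)", ("explicit",))
# Statement 2: no value supplied; the column default applies.
con.execute("INSERT INTO foo DEFAULT VALUES")

rows = [r[0] for r in con.execute("SELECT bar FROM foo ORDER BY rowid")]
# rows == ["explicit", "default_value"]
```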



Re: [sqlite] 3.5.8 alot slower than 3.5.7

2008-04-25 Thread Eric Minbiole
> This works great but when I upgraded from 3.5.7 to 3.5.8 the speed of 
> this select went from 0.2s to around 1 minute. And 3.5.8 is stealing 
> ALOT more memory.

D. Richard Hipp had a very helpful work-around for this issue, by simply 
rearranging the terms of your join's ON clause.  Take a look at this 
thread for details:

http://www.mail-archive.com/sqlite-users%40sqlite.org/msg33267.html

~Eric


Re: [sqlite] Populating and scrolling the Listbox using query

2008-04-21 Thread Eric Pankoke
That seems like it would cut down on some of your query times.  When you
say it's running slow, how slow are you talking about?

Eric Pankoke
Founder
Point Of Light Software
http://www.polsoftware.com/
 
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Neville Franks
Sent: Tuesday, April 22, 2008 1:08 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Populating and scrolling the Listbox using query

Tuesday, April 22, 2008, 2:52:37 PM, you wrote:

Why not just keep the rowid's in an array and query the actual row
contents as needed. This is typical with virtual list or tree
controls. That is you keep a reference to the data, not the data
itself.


F> Thanks for the quick reply Epankoke.
F> We tried as you mentioned. But we are restricted with the memory size and
F> the storage of the needed data occupies some MB's of space in the memory so
F> we cannot go for it. Is there any other workaround to perform the scrolling
F> condition.

F> Kindly help in this regard.

F> Regards,
F> Farzana.


F> epankoke wrote:
>> 
>> Is it possible to store all of the needed data in memory?  If so, why not
>> read the required information into an array and just update an index
>> variable to keep track of where you are at in the array when the user
>> clicks the up and down buttons?  That should be quite fast.
>> 
>> --
>> Eric Pankoke
>> Founder / Lead Developer
>> Point Of Light Software
>> http://www.polsoftware.com/
>> 
>>  -- Original message --
>> From: Farzana <[EMAIL PROTECTED]>
>>> 
>>> Thanks for your reply Igor.
>>> 
>>> We tried populating the listbox as mentioned in the URL. We are
>>> successful in populating the listbox but when we go for scrolling the
>>> data, it takes more time to move forward and backward since it has to
>>> execute the query everytime. We are using a PocketPc so it is much
>>> slower. Is there any other way to do this or can someone provide us a
>>> sample code for the same.
>>> We are using a Table say Employees where we have to display their Job
>>> Description in ascending order in a user defined listbox with scroll
>>> up and scroll down buttons. Can anyone provide us a suggestion.
>>> Thanks in Advance.
>>> 
>>> Regards,
>>> Farzana
>>> 
>>> 
>>> 
>>> Igor Tandetnik wrote:
>>> > 
>>> > "Farzana" <[EMAIL PROTECTED]>
>>> > wrote in message news:[EMAIL PROTECTED]
>>> >> We are working in eVC++ environment with SQLite database. We need
>>> >> to populate the listbox with the values obtained by executing the
>>> >> query. We were able to get the values of the query by using the
>>> >> API's sqlite3_prepare and sqlite3_step.
>>> >> But we were able to populate and move the listbox in the downward
>>> >> direction only
>>> > 
>>> > http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor
>>> > 
>>> > Igor Tandetnik 
>>> > 
>>> > 
>>> > 
>>> > 
>>> > 
>>> 
>>> -- 
>>> View this message in context: 
>>>
http://www.nabble.com/Populating-and-scrolling-the-Listbox-using-query-t
p1667617
>>> 8p16806114.html
>>> Sent from the SQLite mailing list archive at Nabble.com.
>>> 
>> 
>> 
>> 



-- 
Best regards,
  Neville Franks, http://www.surfulater.com http://blog.surfulater.com
 





Re: [sqlite] Question about using sqlite database located on shared folder

2008-04-19 Thread Eric Minbiole
> server machine actually per request from client. Now if, for some reason, 
> connection with the machine that started writing the table goes AWOL. That 
> client finished his job or is just dead but server still thinks 
> connection is needed and keeps file locked hence preventing any other 
> client to do reading/writing.

Though I haven't tested it, I would hope that the O/S would detect that 
the client disconnected, and would release any file lock(s) held.

Seems like it would be easy enough to test: On a couple of machines, 
start a couple of SQLite command shells to the shared Db.  On the first, 
start a BEGIN IMMEDIATE;, then yank the ethernet plug.  See if the OS 
eventually releases the lock, such that the second client can get access.

I'm sure others have more experience with this.  But it seems like it 
would be easy enough to test.

