Re: [sqlite] Memory Usage

2006-10-30 Thread Ben Clewett

Nuno,

Thanks for the excellent description of my error.  I have learned a little 
more about the Linux virtual memory model.  Very glad to hear SQLite is as 
perfect as ever :)


My problem, which is definitely my problem, is that 90 x 16MB of 
reserved memory is still a loss of 1.4GB, especially as I use hardly any 
of it.


If you can drop me a hint about how to sort this out, that would be 
extremely useful.  Otherwise I'll close this thread and look elsewhere...


Kind regards,

Ben Clewett.


Nuno Lucas wrote:

On 10/30/06, Ben Clewett [EMAIL PROTECTED] wrote:

Hi Nuno and others,

I am very glad to hear the consensus is that there is nothing wrong with
  libsqlite3.so.0.8.6.

However the fact is that the 'open' still acquires 16MB of memory.

Immediately Before:

   VmSize:   8572 kB
   VmLck:   0 kB
   VmRSS:2252 kB
   VmData:484 kB
   VmStk:  88 kB
   VmExe:  20 kB
   VmLib:6772 kB
   VmPTE:  20 kB

Immediately After:

sqlite3_open(sDatabaseFile, hSqlite) (= SQLITE_OK)

   VmSize:  24960 kB
   VmLck:   0 kB
   VmRSS:2368 kB
   VmData:  16872 kB
   VmStk:  88 kB
   VmExe:  20 kB
   VmLib:6772 kB
   VmPTE:  24 kB


I guess that info is from /proc/<pid>/status (the nomenclature somewhat
differs for other programs).

The program actually only allocated 2368-2252=116 KB, but reserved
16MB of virtual addresses (which is different from actual allocated
memory, as you can check by running free or another tool). That is
normal if it's the first file you open, as the kernel and libc reserve
a bunch of addresses beforehand for internal buffers (to speed up your
I/O).

RSS (the Resident Set Size) is the important one here (unless your
program had parts of it swapped out, which would make it less useful
for what we want).
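For reference, a minimal sketch of where those figures come from: it simply re-reads /proc/self/status and prints the same fields quoted above (Linux-specific, error handling kept short):

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];

    if (f == NULL) {
        perror("fopen /proc/self/status");
        return 1;
    }
    while (fgets(line, sizeof line, f) != NULL) {
        /* Print only the fields quoted in the message above. */
        if (strncmp(line, "VmSize:", 7) == 0 ||
            strncmp(line, "VmRSS:", 6) == 0 ||
            strncmp(line, "VmData:", 7) == 0) {
            fputs(line, stdout);   /* e.g. "VmRSS:      2368 kB" */
        }
    }
    fclose(f);
    return 0;
}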


Regards,
~Nuno Lucas







--
Ben Clewett
+44(0)1923 46
Project Manager
Road Tech Computer Systems Ltd
http://www.roadrunner.uk.com




Re: [sqlite] Memory Usage

2006-10-30 Thread Ben Clewett

Hi Nuno,

The SQLite mailing list is one where I have consistently found absolutely excellent 
knowledge; thanks again for your information.  I don't know whether this 
should go off-list now, but I don't have your email address.


I'll have to research memory allocation further.  But I'm glad to know 
that the 16MB 'VmData' is not reserved per-process.  If you know a good 
URL on Linux virtual memory and allocation, I would be extremely interested.


My code is C++; it's a manifold for many remote TCP mobile devices, like 
GPS tracking equipment.  It has therefore evolved to link to half the 
libraries on the system.


If I am to correctly account for data used by a collection of processes, 
would this be the sum of VmRSS?


I am suffering with a 4GB-memory 64-bit Xeon Linux box, which keeps crashing 
with 'No available memory'.  I'm finding it quite hard to break down the 
memory into which processes are paged in and using what's available. 
SQLite seemed to be the smoking gun, so although I'm glad it was not the 
case, it means I must start again on my search...


Thanks for the advice,

Ben




Nuno Lucas wrote:

On 10/30/06, Ben Clewett [EMAIL PROTECTED] wrote:

Nuno,

Thanks for the excellent description of my error.  I have learned a little
more about the Linux virtual memory model.  Very glad to hear SQLite is as
perfect as ever :)

My problem, which is definitely my problem, is that 90 x 16MB of
reserved memory is still a loss of 1.4GB, especially as I use hardly any
of it.


Each process has its own virtual address space, so 16MB of reserved
virtual addresses (except when they are kernel addresses) for one
process doesn't do anything to the amount of virtual addresses free
for other processes. And as each process usually has 2/3GB of virtual
addresses for its own use, 16MB is pretty low (it depends on the
system, but 2 GB is the most common minimum, on 32 bits).

I still find it strange that your program uses so many virtual addresses,
but you didn't specify (or I don't recall) what language you are using
and what libraries you are linking to. You may want to investigate
this further.


Regards,
~Nuno Lucas







--
Ben Clewett
+44(0)1923 46
Project Manager
Road Tech Computer Systems Ltd
http://www.roadrunner.uk.com




[sqlite] Memory Usage

2006-10-27 Thread Ben Clewett

Dear Sqlite,

I very much enjoy using SQLite; it is extremely useful.

I have a memory usage query.

I am linking to libsqlite3.so.0.8.6.  After calling sqlite3_open(...) I 
find my program's data memory jumps by 16392 kB.


This seems like a lot.  The database I am opening is only 26K in size.

I have about 90 instances of a similar process, each opening a database.  This obviously 
consumes a very large amount of memory: 1.4GB with 90 processes.


May I ask if this is what would be expected, and whether there is 
anything I can do to lower this loading?
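If the concern is SQLite's real (heap) memory per connection, rather than the reserved address space discussed in the replies above, the page cache can be trimmed with PRAGMA cache_size.  A minimal sketch (the default has historically been around 2000 pages; the value 500 below is only an example, not a recommendation, and this does not necessarily change VmSize):

#include <stdio.h>
#include <sqlite3.h>

/* Sketch: shrink the per-connection page cache right after opening.
 * "PRAGMA cache_size" is a standard SQLite pragma; fewer cached pages
 * means less heap used by this connection. */
int open_small_cache(const char *path, sqlite3 **pDb)
{
    int rc = sqlite3_open(path, pDb);
    if (rc != SQLITE_OK) return rc;

    rc = sqlite3_exec(*pDb, "PRAGMA cache_size = 500;", NULL, NULL, NULL);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "pragma failed: %s\n", sqlite3_errmsg(*pDb));
    }
    return rc;
}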


Thanks for your help,

Ben.






Re: [sqlite] Extra functions - New Project?

2006-05-26 Thread Ben Clewett

There is one from MySQL I use a lot:

MD5()

Which comes from this collection:

http://dev.mysql.com/doc/refman/5.0/en/encryption-functions.html

Very useful for implementing Transparent Databases
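For anyone wanting the same thing inside SQLite itself, a scalar function can be registered per connection with sqlite3_create_function().  A minimal sketch of an md5() function, assuming OpenSSL's MD5() does the hashing (the function name and hex formatting here are illustrative, not part of SQLite):

#include <stdio.h>
#include <sqlite3.h>
#include <openssl/md5.h>

/* md5(X): return the hex MD5 digest of the text value X, or NULL for NULL. */
static void md5Func(sqlite3_context *ctx, int argc, sqlite3_value **argv)
{
    const unsigned char *in;
    unsigned char digest[MD5_DIGEST_LENGTH];
    char hex[2 * MD5_DIGEST_LENGTH + 1];
    int i;

    (void)argc;   /* nArg=1 below guarantees exactly one argument */
    if (sqlite3_value_type(argv[0]) == SQLITE_NULL) {
        sqlite3_result_null(ctx);
        return;
    }
    in = sqlite3_value_text(argv[0]);
    MD5(in, sqlite3_value_bytes(argv[0]), digest);
    for (i = 0; i < MD5_DIGEST_LENGTH; i++) {
        sprintf(hex + 2 * i, "%02x", digest[i]);
    }
    sqlite3_result_text(ctx, hex, -1, SQLITE_TRANSIENT);
}

/* Register it on an open connection; "SELECT md5(col) FROM t" then works. */
int register_md5(sqlite3 *db)
{
    return sqlite3_create_function(db, "md5", 1, SQLITE_UTF8, NULL,
                                   md5Func, NULL, NULL);
}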



Mikey C wrote:

These are the functions that I tend to have implemented:

Numeric Functions:

Sqrt
Floor
Ceiling
Sign
Pi - constant function 3.141..
ACos
ASin
ATan
Atn2
Cos
Cot
Degrees
Exp
Log
Log10
Power
Radians
Sin
Square
Tan

String Functions:

Charindex
Patindex
Left
Right
LTrim
RTrim
Trim
Replicate
Reverse
Replace
Difference - numeric diff in Soundex values using the built-in soundex function.

Aggregate Functions:

StdDev
Variance
Median - Possibly a more flexible function Percentile where 0.5 is the
Median
Mode - Most frequently occurring value
--
View this message in context: 
http://www.nabble.com/Extra+functions+-+New+Project--t1674436.html#a4563121
Sent from the SQLite forum at Nabble.com.






[sqlite] SQLite under mono

2006-05-25 Thread Ben Clewett

Dear SQLite,

I have need to access SQLite using Mono on Linux.

Probably using C# .NET code either natively accessing an sqlite3 
database, or calling libsqlite3.so.0.


I can't see any native .NET drivers for sqlite3, which is where I would 
like to go.


Do any members know if such a lib exists?  If not, has anybody got any 
experience of binding Mono to the native Linux sqlite3 lib?


Thanks,

Ben







Re: [sqlite] SQLite under mono

2006-05-25 Thread Ben Clewett

Thanks for your help, this is exactly what I was trying to find.

Regards, Ben

Mikey C wrote:

http://www.mono-project.com/SQL_Lite
--
View this message in context: 
http://www.nabble.com/SQLite+under+mono-t1680769.html#a4558894
Sent from the SQLite forum at Nabble.com.





Re: [sqlite] SQlite for Linux on the ARM processor.

2005-12-23 Thread Ben Clewett

Paul,

This work could be enormously useful to me.  The configuration looks like 
what I need.  If you could supply the patch, that would be good.


I am looking forward to seeing how SQLite works on my pocket Linux box.

Ben




Paul Bohme wrote:

Ben Clewett wrote:


Dear SQLite

I wish to compile SQLite for the ARM processor running Linux (ARM720T).

The compilation will have to use the cross-compiler arm-linux-gcc 
version 2.95.2 running on Intel Linux, since the device itself has 
no compiler.


Does any person have an image for ARM?

If not, would any person have any advice on how this can be done?

E.g., in the configure, can arm-linux-gcc be specified as the compiler?

Thanks for any advice,

Ben Clewett.




The official way is to modify the Makefile (tweak the documented options 
to suit).


Given that there are autotools, this seems a bit 'off'.  Unfortunately 
the autotools scripts are slightly broken for doing cross-compile 
builds, but aren't hard to make work.  I had to tweak the configure 
script to allow disabling of the readline library (don't have/want it 
for my ARM target) - I can send along a patch against 3.2.7 for this if 
you'd like.  Then, my configure command line was:


config_TARGET_CC=arm-linux-gcc config_BUILD_CC=gcc ./configure 
--disable-tcl --disable-readline --enable-threadsafe 
--enable-releasemode --host=arm-linux --prefix=/usr/local/arm


You will want to modify the --prefix= option to someplace that you want 
the ARM version installed so your other build scripts can pick it up.


 -P

P.S. to DRH: If you'd like I can properly package up any changes 
required to make SQLite more 'cross-compile friendly' without having 
parallel build mechanisms and pass them along.  Doesn't look like any 
more depth needed than some simple autotools scripting tweaks so no 
damage to vital organs..






[sqlite] SQlite for Linux on the ARM processor.

2005-12-22 Thread Ben Clewett

Dear SQLite

I wish to compile SQLite for the ARM processor running Linux (ARM720T).

The compilation will have to use the cross-compiler arm-linux-gcc 
version 2.95.2 running on Intel Linux, since the device itself has no 
compiler.


Does any person have an image for ARM?

If not, would any person have any advice on how this can be done?

E.g., in the configure, can arm-linux-gcc be specified as the compiler?

Thanks for any advice,

Ben Clewett.




Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Ben Clewett
An old COBOL system we had did this.  It never allocated less than 64 
blocks of disk space.  It did work.


A lot of modern file systems (e.g. ext2 and ext3) do this anyway by 
reserving space after your file for later use.  So if you are using a 
file system with plenty of free space, file expansion will (mostly) be 
a contiguous extension of existing data.


Apart from file fragmentation, there is also table space fragmentation. 
 A sequential read through an index on a table may not be a sequential 
read along a disk cylinder, resulting in low performance.  I 
don't know whether VACUUM helps or hinders this effect.


From experience I know that dumping an entire DB as SQL, then 
destroying the database, then parsing it back in, can result in significant 
read performance gains where the database is not cached by the OS file cache 
system.  I would *guess* that where the database is cached, none of this 
will make much difference. :)


Just my two pence worth...


Cory Nelson wrote:

I think his issue is that the database is changing size too often.  He
wants it to automatically expand in larger chunks so there is less
fragmentation on the disk.

Good idea, assuming it's settable via pragma.

On 9/13/05, Jay Sprenkle [EMAIL PROTECTED] wrote:


On 9/13/05, GreatNews [EMAIL PROTECTED] wrote:


Hi D. Richard Hipp,

I'm developing a desktop RSS reader using your excellent sqlite engine.
One issue my users found is that the sqlite database can get heavily
fragmented over time. I'm wondering if it's a viable suggestion that
sqlite pre-allocates disk space when creating a database, and grows the
db file by bigger chunks (e.g. grow by 20% or so in size each time)?




Why not do a vacuum every 10th time (or something similar) you exit the
program?

---
The Castles of Dereth Calendar: a tour of the art and architecture of
Asheron's Call
http://www.lulu.com/content/77264









--
Ben Clewett
+44(0)1923 46
Project Manager
Road Tech Computer Systems Ltd
http://www.roadrunner.uk.com


Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Ben Clewett
A small warning about running VACUUM too often: any prepared 
statements will fail if they are prepared before the VACUUM and used 
afterwards.  I had a daemon which ran a VACUUM autonomously, which 
occasionally coincided with a user request and broke it :)
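A minimal sketch of the usual workaround, as I understand the original sqlite3_prepare() interface: when a VACUUM (or any schema change) invalidates a prepared statement, the failure surfaces as SQLITE_SCHEMA when the statement is finalized, and the statement can simply be prepared again (the helper name and the single-retry policy are illustrative):

#include <sqlite3.h>

int step_with_reprepare(sqlite3 *db, const char *zSql)
{
    int attempts;

    for (attempts = 0; attempts < 2; attempts++) {
        sqlite3_stmt *pStmt = NULL;
        int rc = sqlite3_prepare(db, zSql, -1, &pStmt, NULL);
        if (rc != SQLITE_OK) return rc;

        while ((rc = sqlite3_step(pStmt)) == SQLITE_ROW) {
            /* ... read columns here ... */
        }
        rc = sqlite3_finalize(pStmt);
        if (rc != SQLITE_SCHEMA) return rc;   /* done, or a genuine error */
        /* SQLITE_SCHEMA: the statement was prepared before the VACUUM;
         * go round again and prepare the same SQL afresh. */
    }
    return SQLITE_SCHEMA;
}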


Jay Sprenkle wrote:

On 9/13/05, Ray Mosley [EMAIL PROTECTED] wrote:


Is it overkill to VACUUM every time the last user terminates?




It depends.
If your program is very actively rearranging database records every time it 
runs, probably not.





--
Ben Clewett
+44(0)1923 46
Project Manager
Road Tech Computer Systems Ltd
http://www.roadrunner.uk.com


Re: [sqlite] Running in memory only

2005-08-31 Thread Ben Clewett

Benjamin,

This works well on Linux using the /dev/shm file system.  This should be 
available by default and works indistinguishably from a normal file 
system.  SQLite writes about 200 times faster than to a normal file system.


Remember that, like all RAM disks, this is wiped without warning on 
shutdown, so consider it only for duplicate data.  I also find having 
an 'rc' script to re-create the tables on startup is a great help.
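A minimal sketch of that arrangement, with the /dev/shm path and the table definition invented purely for illustration:

#include <sqlite3.h>

/* Sketch of the 'rc script' idea: open a database on the RAM-backed
 * /dev/shm filesystem and re-create the schema at startup, since the
 * file disappears on reboot. */
int open_shm_db(sqlite3 **pDb)
{
    int rc = sqlite3_open("/dev/shm/tracker.db", pDb);   /* example path */
    if (rc != SQLITE_OK) return rc;

    rc = sqlite3_exec(*pDb,
        "CREATE TABLE positions (unit INTEGER, ts INTEGER, lat REAL, lon REAL);",
        NULL, NULL, NULL);
    /* If the server has not rebooted since the last run, the table already
     * exists and the CREATE fails; the caller can treat that as harmless. */
    return rc;
}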


David Pitcher wrote:

Salut Benjamin,

Simplest way to make that happen is to use a ram disk, google for 
'RamDisk' and the platform you are using. Then you can use the existing 
sqlite binary/library but all I/O occurs in memory.


When you want to persist, you just copy it to a hard disk...

D.





Re: [sqlite] sqlite3 importing mysql-dumps: SQL error: database disk image is malformed

2005-07-26 Thread Ben Clewett

When executing 'mysqldump', use the '--compatible=ansi' flag.

Ben

Martin Jenkins wrote:



CREATE TABLE `blacklist` (
 `blcfid` int(11) NOT NULL default '0',
 `blndid` int(11) NOT NULL default '0',
 PRIMARY KEY  (`blcfid`,`blndid`)
) TYPE=MyISAM;
 

If you run the sqlite3 command line utility and .read your script you'll 
see why it doesn't work - you need to change the back-ticks (`) to 
single quotes (') and delete TYPE=MyISAM. I didn't see any disk image 
malformed messages, so can't help there.


Martin





Re: [sqlite] Multi-threading.

2005-07-25 Thread Ben Clewett

Dear Mrs Brisby,

Thanks for your passionate replies to my original posting.  You have 
much information here.  It's obvious I don't know everything about 
threading.


I like what you say about computer science being a Science.  This is 
exactly my point.  A science is a collection of theories which hold true 
within a domain until otherwise disproven.  For instance Isaac Newton's 
law of gravity and Einstein's law.  Both are true within a fixed domain. 
 Both are used today.  Neither is truly correct.  There will be 
another, more complex theory in time.


This is the same with Threading.  There may be places where this is 
useful.  There may be places where it should not be used.  Both are 
theories within computer science which hold true in a limited domain. 
So when I said this was my opinion, I should have stated this is my 
theory within a fixed domain.  However when you stated that I was wrong, 
I don't think this is compatible with your statement that Computer Science 
is a Science.  Nothing in any science is either right or wrong.  That's 
a Religion.


I don't want to spend ages on the argument, I can see there are 
passionate views.  I only want to explore the idea that threading is a 
viable strategy for a modern high level language and can produce stable 
applications with low development time.


For instance, a call to a blocking TCP wait.  It's perfectly possible for 
one thread to cycle round handling GUI events, then checking on the port 
for new packets.


But an alternate method is to partition the code space into two 
autonomous threads using their own variable set.  Two simple sets of 
code which are not coupled, and remain highly cohesive and encapsulated. 
 Communicating through a thread-safe object.  Such code is fast to 
write, easy to read, robust and does the job.  I can complete this in C# 
in ten minutes, and it will not break.  With large amounts of time, 
better methods may be available.  But this theory is viable within its 
domain.


 I wasn't about to consider Windows a modern GUI system :)

Are you saying it's not modern, or not a GUI?  It may not be perfect and 
it is very complex.  But it's not about to go away, and it's installed 
on modern computers.  More importantly, my views on threads remain.  If 
you give the GUI its own thread, you have implemented a simple solution 
to ensure that the correct cursor and mouse events are honoured.  Then a 
worker thread is free to do whatever it likes without being coupled to 
the GUI.  Simple robust coding using thread-safe objects.


I am also interested in people's views that threading is slower.  Since 
all processors are likely to be hyper-threaded, multi-core, or both, I 
can see no place where this will be true in the future.


 Java uses references, not pointers.

Is there any important difference?  My point was not about language.  It 
was a question to Dr Hipp about what he feels is missing from the Java 
pointer (reference) model.  Perhaps I should have explained better.


Yes, I understand that C can make use of goto, and that goto is fast. 
 There are also very, very bad places to use goto.  Exceptions, breaks 
and continue statements are linear code, easy to follow and more robust 
to code changes.  Goto is a legacy of assembler programming.  I don't 
think it's time to teach new coders about goto.  Which is my theory 
within a fixed domain.


Regards,

Ben.


Mrs. Brisby wrote:

On Wed, 2005-07-20 at 17:26 +0100, Ben Clewett wrote:


Dr Hipp,

I am just playing devils advocate here because I have completed much
Java programming in a multi-threaded application. :)

I understand the problems of multi-threading.  I am reminded that it
took nearly 20 years of development to get multi-processor support in a
modern OS stable.  Much success for this can be attributed to Semaphore
Flags.  With CPU hardware support to ensure that the semaphore itself
cannot be controlled by more than one process.



ITC in 1970 supported multiple threads trivially.



Multi-thread applications suffer the same problems.  Without semaphore
flags or 20 years of development.  A novice programmer can happily
create a second thread and quickly create terrible applications.







However the need for multi-threads is compelling.  Especially in a GUI
environment.  For instance a Mail reader.  Where one thread is needed to
ensure the GUI is drawn correctly and respond to GUI events.  Another to
download and dispatch mail.  (My Thunderbird has 10 threads.  This may
be a bit of overkill :)



No. Threads are not a need. They allow you to use blocking system calls
in parallel without extra page table loads.

History has demonstrated that programmers building multithreaded
applications tend to produce buggier code, and code that touches more
pages than a non-threaded version. As a result, the non-threaded version
is easier to write, safer, and runs faster.




As another user also mentioned, a Windows system works better with few
processes with many threads.

Re: [sqlite] Multi-threading.

2005-07-25 Thread Ben Clewett

Steve O'Hara wrote:

I think the point about multi-threaded apps is not that there might be a few
instances where they are useful, but how easy is it to debug and support
them and is it worth the price for a little more concurrency?


Good point.  The ones I have used are terrible.  I hope things will 
improve in time.  This is hopefully not a reason to give up.



In my experience, most debugging IDEs can't handle multiple threads and
actually turn the application into a pseudo-threaded version prior to
running it.  I wouldn't mind betting that's what's going on in Java (it's
worth pointing out that Java didn't use to even support native threads and
I bet most Java programmers have no idea whether they are actually using
Native or pseudo threads)

Because of this, you get the frustrating situation where an application
works fine in debug but fails sporadically in production.  Another good real
world example of this is the IIS/ASP server side debugging.  IIS allocates a
new thread per request which are autonomous but do some semaphoring to share
the Session object (I think).  There's a handy switch you can set on your
IIS virtual directory that turns on server-side debugging - great! ...
only, not so great!! Suddenly IIS starts allocating the same thread to every
request i.e. it queues your requests and the very problem you were trying to
solve goes away!


:)


The moral of the story is as Richard says, if there isn't a burning need for
multiple threads then don't use them.


I have noticed something.  There are two lines of thought here.  The two 
ideas may not be too different.  (Here is where I am sure I will be 
flamed, but I am sure I'll learn something from it :)


- The single-thread multi-task option.
- The multiple-thread single thread-task.

The first option involves building a context scheduler into our own 
programs.  So GUI events, TCP listen, and everything else can occur 
without blocking.  A virtual thread environment.  As you say, some Java 
and IIS can sometimes be this.


The second option involves letting the OS do the context changes. 
Letting the programmer just code each thread in isolation.  (Programming 
in a thread-safe way.)


So in essence, the two become the same.  A switch could even be added to 
use one's own context switching, or use the OS.  The OS could be virtual or 
physical.


Externally it's not important: any option should yield the same function.

Therefore, programming one's own context scheduler is a bit like not 
using SQL and accessing the data file directly.  It may be faster, you 
may have more control.  But you may just be giving yourself a lot of 
work where SQL does it all for you.  This is why we all like SQLite so much.


I do not know if this is the case today.  My assumption is that thread 
safe programming is easy, by using good tools and good methodology.  I 
truly believe this, and I have not experienced some of the thread 
problems other people have reported.


Considering the future of processor design, should we be getting more 
excited about threads and how we can make them work for us?




By the way, Mrs Brisby is being facetious when he says that he doesn't
regard Windows as a modern GUI - there's no such thing!  They've all got
their origins in concepts put forward over 20 years ago.


:)

Ben.




Steve



Re: [sqlite] Multi-threading.

2005-07-20 Thread Ben Clewett

Dr Hipp,

I am just playing devils advocate here because I have completed much
Java programming in a multi-threaded application. :)

I understand the problems of multi-threading.  I am reminded that it
took nearly 20 years of development to get multi-processor support in a
modern OS stable.  Much success for this can be attributed to Semaphore
Flags.  With CPU hardware support to ensure that the semaphore itself
cannot be controlled by more than one process.

Multi-thread applications suffer the same problems.  Without semaphore
flags or 20 years of development.  A novice programmer can happily
create a second thread and quickly create terrible applications.

However the need for multi-threads is compelling.  Especially in a GUI
environment.  For instance a Mail reader.  Where one thread is needed to
ensure the GUI is drawn correctly and respond to GUI events.  Another to
download and dispatch mail.  (My Thunderbird has 10 threads.  This may
be a bit of overkill :)

As another user also mentioned, a Windows system works better with few
processes with many threads.

I believe the issue is not whether to use threads, but to use them
correctly.  Which is not a hard thing to do with a little support.

This is where Java (and .NET) work well.  If you use them correctly.
They provide thread-safe objects.  Which have been designed to use
semaphore flags internally.  If the programmer uses these thread-safe
objects correctly, they will not encounter thread issues.  For instance,
all communication between threads should be exclusively through these
thread-safe objects.

Further, Java and .NET provide synchronisation methods.  The defining of a
method to be synchronised automatically creates the locks to ensure
thread-safe access.

I am also interested in your comments on Pointers and GoTo.  I note that
Java is 100% pointers.  Apart from basic types, all object access is by
pointer.

Using Exceptions correctly, I have never felt the need for a GoTo.
Exceptions do the same as GoTo, except, maybe, in a slightly more
developed and useful way.

These are just my opinions :)

Regards,

Ben Clewett.

D. Richard Hipp wrote:

On Fri, 2005-07-15 at 16:41 +0530, Roushan Ali wrote:


Hello all,
 Can we use single sqlite_open handle(global) across threads(
if all database operations are serialized by using semaphore) ? Please
help.
  



Opening a database connection in one thread and using it in another
will work on some operating systems but not on others.  You are 
advised not to do it.  See http://www.sqlite.org/cvstrac/tktview?tn=1272

and http://www.sqlite.org/cvstrac/chngview?cn=2521.

Actually, this seems like a good opportunity to repeat my
oft-ignored advice to not use more than one thread in a single
address space.  If you need multiple threads, create multiple
processes.  This has nothing to do with SQLite - it is just
good programming advice.  I have worked on countless multi-
threaded programs over the years, and I have yet to see a 
single one that didn't contain subtle, hard to reproduce, 
and very hard to troubleshoot bugs related to threading issues.


I am constantly amazed at the prevailing idea (exemplified
by Java) that software should be strongly typed and should
not use goto statement or pointers - all in the name of 
reducing bugs - but that it is OK to use multiple threads

within the same address space.  Strong typing helps prevent
only bugs that are trivially easy to locate and fix.  The
use of goto statements and pointers likewise results in
deterministic problems that are easy to test for and
relatively easy to track down and correct.  But threading
bugs tend to manifest themselves as timing-dependent 
glitches and lock-ups that are hardware and platform 
dependent, that never happen the same way twice, and that

only appear for customers after deployment and never in a
testing environment.




Re: [sqlite] Libray routine called out of sequence

2005-07-18 Thread Ben Clewett
I have had this message in the past when I executed an SQL query 
before my previous query had completed.


Ben

Roushan Ali wrote:

Hi all,
   I got an error message 'library routine called out of sequence'
during sqlite_step.  Can anybody tell me what this error message
means?


Regards,
Roushan





Re: [sqlite] Error 21, library routine called out of sequence

2005-07-11 Thread Ben Clewett

Gé,

Thanks for the advice, I will alter my program.

Do you know which compilers need this treatment?  (I am using GCC 3.3.3).

Regards,

Ben

Gé Weijers wrote:

On an almost pedantic note:

declare the variable as:

  static volatile sig_atomic_t bKilled = 0;

volatile keeps the compiler from caching the value in a register,  and 
not noticing its change,
and sig_atomic_t is an integer type guaranteed to be written in one  
instruction.


Some processors can only write certain integer sizes atomically,  chars 
may need read and write cycles and are not atomic.


alternatively you can mask the signal before you read 'bKilled'.

Gé
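A minimal sketch of installing such a handler around the volatile sig_atomic_t flag (SIGTERM and SIGINT are chosen here only as examples):

#include <signal.h>
#include <string.h>

/* As advised above: the flag is a volatile sig_atomic_t and the handler
 * does nothing except set it; the main loop polls bKilled between queries. */
static volatile sig_atomic_t bKilled = 0;

static void sigHandler(int signum)
{
    (void)signum;
    bKilled = 1;
}

static int installHandler(void)
{
    struct sigaction sa;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = sigHandler;
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGTERM, &sa, NULL) != 0) return -1;
    if (sigaction(SIGINT, &sa, NULL) != 0) return -1;
    return 0;
}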

On Jul 8, 2005, at 7:50 AM, Ben Clewett wrote:


Derrell,

Thanks for the idea and the excellent coding example.

This works perfectly, thanks!

Regards,

Ben.


[EMAIL PROTECTED] wrote:


Ben Clewett [EMAIL PROTECTED] writes:


Dear SQLite,

I am running a sequence of inserts:

BEGIN
INSERT INTO table ...
INSERT INTO table ...
INSERT INTO table ...
INSERT INTO table ...
INSERT INTO table ...

I am catching the 'kill -1' signal (aka CTRL-C) and executing a  
final query:


COMMIT

When I execute the 'COMMIT' I get:

library routine called out of sequence

Every other query command after this returns the same.

My guess is the interrupt is kicking in during SQLite completion  of 
the previous query.  Therefore SQLite is half way through  something 
when this occurs.


Can any person suggest a possible solution as I am out of  options.  
For instance, some command to tidy up SQLite so that  the next 
statement does not fail.  Otherwise I lose all my inserted data :)


Instead of issuing the COMMIT from the signal handler, set a  global 
flag in
the signal handler which you check in your main code.  If the flag  
has been

set, then COMMIT and exit.
You can do something akin to this (untested code):
--

static int  bKilled = 0;
static void
sigHandler(int signum)
{
if (signum == SIGTERM)
{
bKilled = 1;
}
}
static void
doInserts()
{
char ** ppQueries;
char *  queries[] =
{
"INSERT INTO table ...",
"INSERT INTO table ...",
"INSERT INTO table ...",
NULL
};
/* Start a transaction */
issueQuery("BEGIN;");
/* For each query... */
for (ppQueries = queries; *ppQueries != NULL; ppQueries++)
{
/* Issue the query */
issueQuery(*ppQueries);
/* If we've been signaled, exit loop */
if (bKilled)
{
break;
}
}
/*
 * Commit the transaction.
 *
 * Note that signal could have occurred *before* the BEGIN.   
You'll need

 * to handle that case as well (or ignore the error from COMMIT)
 */
issueQuery("COMMIT;");
}








--
Gé Weijers
e-mail: [EMAIL PROTECTED]







[sqlite] Possible enhancement to SQL logic

2005-07-11 Thread Ben Clewett

Dear SQLite,

I have found a possible area where the SQL logic of SQLite is not as 
good as other DBMS.  Like MySQL or PostgreSQL.  It's a simple thing, so 
I am showing it as a candidate for work.  (If I could submit a patch I 
would :)


If you create a table with two indexes:

CREATE TABLE a (
f0 INTEGER PRIMARY KEY,
f1 INTEGER,
f2 INTEGER
);
CREATE INDEX a_f1 ON a (f1);
CREATE INDEX a_f2 ON a (f2);

Now execute a query using both the indexes:

SELECT * FROM a WHERE f1 = 1 OR f2 = 1;

This query will not use the indexes.  The query will increment through 
every row in the table.  This is obviously very slow.


As I mentioned, MySQL and PostgreSQL will use the indexes here, and 
therefore return the result considerably faster.


I can use a UNION to get the result I am after (SELECT * FROM a WHERE f1 = 1 
UNION SELECT * FROM a WHERE f2 = 1), so it's not a show stopper.


I hope this is of interest to somebody.

Regards,

Ben Clewett.






[sqlite] Problem with sqlite3_prepare

2005-07-11 Thread Ben Clewett
I have a problem with SQLite locking up when trying to allocate memory.  I 
don't know whether this is my problem or something else, but the 
problem occurs inside SQLite.


A backtrace shows:

/usr/local/lib/libsqlite3.so.0(sqlite3ParserAlloc+0x26) [0x4013d1c6]
/usr/local/lib/libsqlite3.so.0(sqlite3RunParser+0xa3) [0x40146033]
/usr/local/lib/libsqlite3.so.0(sqlite3_prepare+0x159) [0x40134fd9]

Which locks in this function, in the file parse.c:

void *sqlite3ParserAlloc(void *(*mallocProc)(size_t)){
  yyParser *pParser;
  pParser = (yyParser*)(*mallocProc)( (size_t)sizeof(yyParser) );  // *
  if( pParser ){
    pParser->yyidx = -1;
  }
  return pParser;
}

* = the locking line of code, during the 'malloc' stage, which is, I think, 
passed into this function.


I have tried malloc in my own code, which will happily malloc up to 
gigabytes in size.


This occurs after several calls to sqlite3_prepare, say 22 calls.  In 
each case I call 'sqlite3_finalize'.  During a lock-up, the code waits 
indefinitely and has to be killed.  It is called by Apache.


I am sure this is something I have done.  But I do not know what would 
affect the ability of SQLite to malloc memory.


If there are any users who have experienced this and may know what I can 
do, I would be really glad to hear.


Kind regards,

Ben Clewett.

Apologies if this is wholly irrelevant.











Re: [sqlite] Possible enhancement to SQL logic

2005-07-11 Thread Ben Clewett

D. Richard Hipp wrote:
MySQL and PostgreSQL will use the indexes here, and 
therefore return the result considerably faster.





Really?  I would be very interested to know what
query plan MySQL and PostgreSQL use in this example.


It looks like I didn't look before leaping.  MySQL does do a sequential 
search through all rows in the table with the same example.


But PostgreSQL does use indexes for the same example.

Regards, Ben.




Re: [sqlite] Possible enhancement to SQL logic

2005-07-11 Thread Ben Clewett

[EMAIL PROTECTED] wrote:

Ben Clewett [EMAIL PROTECTED] writes:



D. Richard Hipp wrote:


MySQL and PostgreSQL will use the indexes here, and therefore return the
result considerably faster.



Really?  I would be very interested to know what
query plan MySQL and PostgreSQL use in this example.


It looks like I didn't look before leaping.  MySQL does do a sequential 
search through all rows in the table with the same example.


But PostgreSQL does use indexes for the same example.



You can get the PostgreSQL query plan by issuing this query:

EXPLAIN SELECT * FROM a WHERE f1 = 1 OR f2 = 1;

The output from that command is what Dr. Hipp requested.

Derrell


Ok,

From MySQL with 310606 rows:

+----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | a     | ALL  | a_f1,a_f2     | NULL | NULL    | NULL | 310606 | Using where |
+----+-------------+-------+------+---------------+------+---------+------+--------+-------------+

From PSql with 2534 rows:

   QUERY PLAN
--------------------------------------------------------------------------
 Index Scan using a_f1, a_f2 on a  (cost=0.00..39.86 rows=1003 width=12)
   Index Cond: ((f1 = 1) OR (f2 = 1))


Regards,




[sqlite] Error Messages

2005-07-11 Thread Ben Clewett

Sorry to bombard with so much email.

A small request for later version of SQLite.

Would it be possible to change this error message:

logic error or missing database

To:

logic error, missing database, database read-only, or database 
directory read-only.


I hope this is not unreasonable, it took me a long time to work out why 
I was getting this message.
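Until the message is split up, one way to tell the cases apart is to check the file and its directory before opening; a minimal sketch, with the path handling simplified:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <libgen.h>

/* Sketch: distinguish "missing database" from "read-only database or
 * directory" before calling sqlite3_open, since the error message above
 * lumps them together.  dirname() may modify its argument, hence the copy. */
static void explain_db_path(const char *path)
{
    char copy[1024];

    if (access(path, F_OK) != 0) {
        fprintf(stderr, "%s: database file does not exist\n", path);
        return;
    }
    if (access(path, W_OK) != 0) {
        fprintf(stderr, "%s: database file is not writable\n", path);
    }
    strncpy(copy, path, sizeof copy - 1);
    copy[sizeof copy - 1] = '\0';
    if (access(dirname(copy), W_OK) != 0) {
        fprintf(stderr, "%s: directory is not writable "
                        "(SQLite needs it for the journal file)\n", path);
    }
}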


Kind regards,

Ben Clewett.





[sqlite] Error 21, library routine called out of sequence

2005-07-08 Thread Ben Clewett

Dear SQLite,

I am running a sequence of inserts:

BEGIN
INSERT INTO table ...
INSERT INTO table ...
INSERT INTO table ...
INSERT INTO table ...
INSERT INTO table ...

I am catching the 'kill -1' signal (aka CTRL-C) and executing a final query:

COMMIT

When I execute the 'COMMIT' I get:

library routine called out of sequence

Every other query command after this returns the same.

My guess is the interrupt is kicking in during SQLite's completion of the 
previous query.  Therefore SQLite is halfway through something when 
this occurs.


Can any person suggest a possible solution as I am out of options.  For 
instance, some command to tidy up SQLite so that the next statement does 
not fail.  Otherwise I lose all my inserted data :)


Regards,

Ben Clewett.
Version 3.1.6







Re: [sqlite] Error 21, library routine called out of sequence

2005-07-08 Thread Ben Clewett

Derrell,

Thanks for the idea and the excellent coding example.

This works perfectly, thanks!

Regards,

Ben.


[EMAIL PROTECTED] wrote:

Ben Clewett [EMAIL PROTECTED] writes:



Dear SQLite,

I am running a sequence of inserts:

BEGIN
INSERT INTO table ...
INSERT INTO table ...
INSERT INTO table ...
INSERT INTO table ...
INSERT INTO table ...

I am catching the 'kill -1' signal (aka CTRL-C) and executing a final query:

COMMIT

When I execute the 'COMMIT' I get:

library routine called out of sequence

Every other query command after this returns the same.

My guess is the interrupt is kicking in during SQLite completion of the 
previous query.  Therefore SQLite is half way through something when 
this occurs.


Can any person suggest a possible solution as I am out of options.  For 
instance, some command to tidy up SQLite so that the next statement does 
not fail.  Otherwise I lose all my inserted data :)



Instead of issuing the COMMIT from the signal handler, set a global flag in
the signal handler which you check in your main code.  If the flag has been
set, then COMMIT and exit.

You can do something akin to this (untested code):

--

static int  bKilled = 0;

static void
sigHandler(int signum)
{
if (signum == SIGTERM)
{
bKilled = 1;
}
}

static void
doInserts()
{
char ** ppQueries;
char *  queries[] =
{
"INSERT INTO table ...",
"INSERT INTO table ...",
"INSERT INTO table ...",
NULL
};

/* Start a transaction */
issueQuery("BEGIN;");

/* For each query... */
for (ppQueries = queries; *ppQueries != NULL; ppQueries++)
{
/* Issue the query */
issueQuery(*ppQueries);

/* If we've been signaled, exit loop */
if (bKilled)
{
break;
}
}

/*
 * Commit the transaction.
 *
 * Note that signal could have occurred *before* the BEGIN.  You'll need
 * to handle that case as well (or ignore the error from COMMIT)
 */
issueQuery("COMMIT;");
}





Re: [sqlite] using sqlite as a temporary database to process lots of data

2005-06-28 Thread Ben Clewett
If you are using Linux, you can use the /dev/shm.  This is a memory 
resident file system.  I use this and find it about 200 times faster 
than writing to disk.  The only problem is that this is erased when the 
server reboots.  I hope this is of some use :)


Ben
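For completeness, SQLite also has built-in in-memory databases via the special filename ":memory:".  A minimal sketch showing both that and the /dev/shm route (the paths are examples):

#include <sqlite3.h>

/* Two ways to keep the working set in RAM:
 *  - ":memory:" gives a private, purely in-memory database;
 *  - a file under /dev/shm is an ordinary database on a RAM-backed
 *    filesystem, so it can be shared between processes (until reboot). */
int open_in_ram(sqlite3 **pMem, sqlite3 **pShm)
{
    int rc = sqlite3_open(":memory:", pMem);
    if (rc != SQLITE_OK) return rc;
    return sqlite3_open("/dev/shm/work.db", pShm);   /* example path */
}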

Lorenzo Jorquera wrote:

Hi,
 My company is starting a project in which we will have to process
large amounts of data and save the results to a db2 database.
 Because this data process needs to be very fast, the idea is to load
the data in memory and apply the transformations in C++. I think that
SQLite in-memory databases could be a good alternative for this, but
when I proposed this solution to my boss, he asked if someone has
already used sqlite for this purpose. I found that this use of sqlite
is cited in the wiki, but I can't find any concrete examples. If any
of you can share some success stories with me, I will be very
grateful.
 Thanks in advance,
 Lorenzo.
PD: please, forgive my english, it is a little rusty...





Re: [sqlite] Re: - Re: [sqlite] Training opportunity: The Inner Workings Of SQLite

2005-06-20 Thread Ben Clewett
May I ask if this tutorial will be held in the UK?  If not I'll second 
the requests for DVD or written transcripts.


Kind regards,

Ben Clewett.

D. Richard Hipp wrote:

On Mon, 2005-06-20 at 10:34 +1000, [EMAIL PROTECTED] wrote:


I too would be happy to pay for DVDs if they were to be made available.
Tyranny of distance prevents me from attending (Sydney, Australia based).




The same talk has been proposed for AUUG in Sydney
in October.  It has not yet been accepted, though.




Re: [sqlite] Re: - Re: [sqlite] Training opportunity: The Inner Workings Of SQLite

2005-06-20 Thread Ben Clewett

Dear Richard,

I have forwarded your email to a friend at my university (the Open 
University) who often arranges conferences and tutorials.  I am sure, due 
to the popularity of and respect UK programmers have for SQLite, something 
may be possible.


Regards, Ben Clewett.


D. Richard Hipp wrote:

On Mon, 2005-06-20 at 08:23 +0100, Ben Clewett wrote:

May I ask if this tutorial will be held in the UK?  If not I'll second 
the requests for DVD or written transcripts.





The tutorial has not been proposed for any european
conferences.  Though, if you can suggest one and make
a case that there is sufficient interest, I'm sure
something could be worked out.





Re: [sqlite] client/server

2005-06-08 Thread Ben Clewett

Just another suggestion for the problem, which I use myself.

I use a single SQLite database for each client, hosting the database 
collection on a bastion host as close to the client as I can get it. 
Therefore no client/server connection is used.


I then use a daemon which reads all the client SQLite databases in turn, 
and writes them to a central DBMS, as well as writing information back 
out to the client SQLite databases.  (My central database in my case is 
MySQL because it's client/server and has a better locking model.)


I also use the Linux /dev/shm directory to store these bastion 
databases, which reads/writes about 200 times faster than an uncached 
disk, although it is erased when the server reboots.


This gives dramatic performance and low load on the central database :)

Ben


Andrea Giammarchi wrote:

Eugene Wee wrote:


Not at all (if I correctly understand what you're trying to say).
For example, there exists a SQLite extension in PHP (which comes 
bundled by default in PHP5, but currently does not support SQLite3).



The PECL PDO extensions allow PHP to use SQLite version 3.x too :-)
http://it2.php.net/manual/it/ref.pdo.php#pdo.drivers

andr3a





[sqlite] SQLite Wrapper

2005-06-02 Thread Ben Clewett

All,

D. Richard Hipp was kind enough to allow me to upload my wrapper for 
SQLite.  This helps with locking problems by copying the result table to memory 
and releasing the locks on the DB, as well as allowing a very simple API and a 
few useful tools.


I've uploaded a new version which contains an SQLite shell.  This outputs 
in MySQL format.  Sorry to all of you who believe MySQL is not worth 
copying, probably very rightly, but I like its clear output :)


Sample:


lw_shell .schema registry
++-+--+
|name|type |sql   |
++-+--+
|registry|table|CREATE TABLE registry (   |
|| |  key VARCHAR(200) COLLATE NOCASE PRIMARY KEY,|
|| |  value VARCHAR(200) COLLATE NOCASE   |
|| | )|
++-+--+
  (0.000 seconds)


Here: http://www.sqlite.org/contrib

Maybe somebody may find this useful :)

Regards,

Ben.

(Sorry but shell part only works in UNIX.)




Re: [sqlite] database schema has changed

2005-06-02 Thread Ben Clewett

Tito Ciuro wrote:

Hi Ben,

What do you mean? Try to reopen the database after VACUUM. That  should 
do it...


The posting seemed to indicate a user was experiencing the 'database 
schema has changed' error during the first INSERT after a VACUUM.


The suggested solution was to close/open the db after the VACUUM.

I've tried creating tables with millions of rows, deleting various 
amounts of data, running VACUUM and then INSERT.  I cannot replicate this error; 
it always works.  Possibly this was related to an older version of 
SQLite, or there is some special case of VACUUM which I am not encountering.


The only time I get this message is when another thread executes a query 
during a VACUUM.  I use the 'prepare statement' method, and my guess is 
that the prepare and execute on one thread, straddle the VACUUM from 
another, and hence fail.


But maybe I am missing something... :)

Ben





-- Tito

On 02/06/2005, at 9:36, Ben Clewett wrote:


Ben Clewett wrote:

So, as I understand the posting, after each VACUUM it is advisable 
to shut down and restart SQLite, or run a single INSERT to clear the 
error...




After a lot of testing, I can find no instance where SQLite needs 
restarting, or anything else, after a VACUUM.  Although maybe some 
locking may be needed during the VACUUM in a multi-thread scenario.


Ben




I'll do some testing :)
Ben
Tito Ciuro wrote:


Hi Ben,

Check this post and see if it helps: http://www.mail-archive.com/  
sqlite-users@sqlite.org/msg04887.html


Regards,

-- Tito

On 01/06/2005, at 12:16, Ben Clewett wrote:



Some extra information:

I also note I run a VACUUM every 50 minutes.  Could this result 
in the 'database schema has changed' message I sometimes see?
Possibly one thread accessing the database during the VACUUM?


Kind regards,

Ben Clewett.

Ben Clewett wrote:



Dear SQLite,
I am experiencing a problem.  Every so often, say one in  100,000  
'SELECT' queries, I get this error:

database schema has changed
This is false, the schema has not changed.
Can any person offer a reason why this might be, and possibly  a  
solution to this problem?

I am using SQLite 3.1.6 and storing my database on /dev/shm.
Kind regards,
Ben Clewett.








__

Tito Ciuro
http://www.webbo.com







__

Tito Ciuro
http://www.webbo.com





[sqlite] database schema has changed

2005-06-01 Thread Ben Clewett

Dear SQLite,

I am experiencing a problem.  Every so often, say one in 100,000 
'SELECT' queries, I get this error:


database schema has changed

This is false, the schema has not changed.

Can any person offer a reason why this might be, and possibly a solution 
to this problem?


I am using SQLite 3.1.6 and storing my database on /dev/shm.

Kind regards,

Ben Clewett.





Re: [sqlite] can i use sqlite3_column_***() function on multiple rows?

2005-04-27 Thread Ben Clewett
You may want to try my 'Lite Wrap' wrapper API:
http://www.sqlite.org/contrib
This takes a single call:
lw_query(sqlite3 *handle, const char *query);
This safely returns a table in memory, without locking SQLite, to which 
you have random access, as well as some nice utils for displaying table 
data.

I have had no feedback as to whether anybody likes this wrapper.  But I 
use it exclusively and find it works well for me :)  If any person uses 
this and can suggest enhancements, please let me know.

Regards, Ben.
Brian Swetland wrote:
[jack wu [EMAIL PROTECTED]]
documentation says if 

int sqlite3_step(sqlite3_stmt*);
returns SQLITE_ROW, (meaning a single row) then i can
use int sqlite3_column_int(sqlite3_stmt*, int iCol);
and other functions to extract the returned values out
of the row. 

can i use the same set of API on a result set which
contains multiple rows? if yes, which API do i call to
move to the next row before calling sqlite3_column_*
again?  

Just keep calling sqlite3_step() -- it will return SQLITE_ROW
for each row of results (which you can use the sqlite_column_*()
calls to extract data from), SQLITE_DONE if there are no more 
rows, or an error if something goes wrong.

Brian
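A minimal sketch of that loop, with the table, column and types invented for illustration:

#include <stdio.h>
#include <sqlite3.h>

/* Sketch of the pattern described above: one prepare, then sqlite3_step()
 * once per row until it stops returning SQLITE_ROW. */
int print_ids(sqlite3 *db)
{
    sqlite3_stmt *pStmt = NULL;
    int rc = sqlite3_prepare(db, "SELECT id FROM t;", -1, &pStmt, NULL);
    if (rc != SQLITE_OK) return rc;

    while ((rc = sqlite3_step(pStmt)) == SQLITE_ROW) {
        printf("id = %d\n", sqlite3_column_int(pStmt, 0));   /* column 0 */
    }
    /* rc is SQLITE_DONE on success, or an error code otherwise. */
    sqlite3_finalize(pStmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}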



[sqlite] sqlite wrapper to prevent locking

2005-04-21 Thread Ben Clewett
Dear users,
I have attached another version of my wrapper for SQLite.
This returns a table in memory and releases all locks, therefore 
providing a safer, simpler and very useful API.

This version uses less memory, gets meta-data and fixes a bug or two.
This version also provides some streaming formats, dumping the content of 
a result to a file pointer.  Two formats for debugging, two for streaming:

Debug Format 1:
+++
|a   |b   |
+++
|1   |A   |
|2   |B   |
|NULL|C   |
|4   |NULL|
|NULL|NULL|
+++
Debug Format 2:
<TABLE BORDER=1 CELLPADDING=2 CELLSPACING=0>
  <TR>
    <TD ALIGN=CENTER><B><FONT FACE=Courier>a</FONT></B></TD>
    <TD ALIGN=CENTER><B><FONT FACE=Courier>b</FONT></B></TD>
  </TR>
  <TR>
    <TD><FONT FACE=Courier>1</FONT></TD>
    <TD><FONT FACE=Courier>A</FONT></TD>
  </TR>
  <TR>
    <TD><FONT FACE=Courier>2</FONT></TD>
    <TD><FONT FACE=Courier>B</FONT></TD>
  </TR>
  <TR>
    <TD><FONT FACE=Courier>NULL</FONT></TD>
    <TD><FONT FACE=Courier>C</FONT></TD>
  </TR>
  <TR>
    <TD><FONT FACE=Courier>4</FONT></TD>
    <TD><FONT FACE=Courier>NULL</FONT></TD>
  </TR>
  <TR>
    <TD><FONT FACE=Courier>NULL</FONT></TD>
    <TD><FONT FACE=Courier>NULL</FONT></TD>
  </TR>
</TABLE>
Stream Format 1:
a,b
1,A
2,B
,C
4,
,
Stream Format 2:
<TABLE>
  <ROW a="1" b="A" />
  <ROW a="2" b="B" />
  <ROW a="" b="C" />
  <ROW a="4" b="" />
  <ROW a="" b="" />
</TABLE>
I do not unfortunately have any place to host this work.  There may be 
somebody on this list who knows of a place where this can be published 
for any members who may wish to make use of it.

Kind regards,
Ben Clewett.


sqlite_wrapper.tar.gz
Description: GNU Zip compressed data


Re: [sqlite] Locking Methods

2005-04-20 Thread Ben Clewett
This is exactly my problem.  My version is 3.1.6.  The error is 
SQLITE_LOCKED.

Ben
Jay Sprenkle wrote:
I had the same trouble he did. Here's what I did that doesn't work:
  select * from a into result;
   foreach row in result
  ' this fails:
  update b set col = a.value;
   next
But based on what I read here it's supposed to do this.
On 4/19/05, Gerry Blanchette [EMAIL PROTECTED] wrote:
Could you please elaborate your scenario?
I tried a test myself but am afraid I may not have interpreted your test
case properly.
I have 2 tables, fred and bob, each with 1 rows. I select a column
from fred and bind the value obtained from sqlite3_column_int to an
update statement that operates on bob. I loop over fred via
sqlite3_step, where each iteration successfully updates the row in bob.
Both tables exist in the same DB, accessed via the same sqlite3 *.
Have I misinterpreted your scenario somehow, as this works for me?
Thank you for helping clear this up for me.
-- Gerry Blanchette
-Original Message-
From: Ben Clewett [mailto:[EMAIL PROTECTED]
Sent: Monday, April 18, 2005 4:50 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Locking Methods
snip
I am experiencing problems with the locking.  Because SQLite uses
database locking this forces two major problems:
- I can't read through a record set and use the data to execute updates.
 For instance, some parsing exercise which cannot be completed using a
single SQL command.  I have to store all the data locally, get to the
end of the query, then execute the update statements.
Ben Clewett.
/snip
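One way around the restriction described in the snip is to finish (and finalize) the SELECT before issuing the updates, so its lock is gone by the time the writes start.  A minimal sketch, with table and column names borrowed loosely from the pseudocode quoted above:

#include <stdlib.h>
#include <sqlite3.h>

/* Sketch: read the keys of interest into memory, finalize the SELECT so
 * its lock is released, then run the UPDATEs afterwards. */
int update_after_select(sqlite3 *db)
{
    sqlite3_stmt *pStmt = NULL;
    int *ids = NULL, n = 0, cap = 0, i, rc;

    rc = sqlite3_prepare(db, "SELECT id FROM a WHERE flag = 1;", -1, &pStmt, NULL);
    if (rc != SQLITE_OK) return rc;

    /* First pass: collect the values into memory. */
    while (sqlite3_step(pStmt) == SQLITE_ROW) {
        if (n == cap) {
            int *tmp = realloc(ids, (cap = cap ? cap * 2 : 64) * sizeof *ids);
            if (tmp == NULL) { free(ids); sqlite3_finalize(pStmt); return SQLITE_NOMEM; }
            ids = tmp;
        }
        ids[n++] = sqlite3_column_int(pStmt, 0);
    }
    sqlite3_finalize(pStmt);          /* the read lock is released here */

    /* Second pass: run the updates, now that nothing is being read. */
    for (i = 0; i < n; i++) {
        char *zSql = sqlite3_mprintf("UPDATE b SET col = %d;", ids[i]);
        rc = sqlite3_exec(db, zSql, NULL, NULL, NULL);
        sqlite3_free(zSql);
        if (rc != SQLITE_OK) break;
    }
    free(ids);
    return rc;
}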




[sqlite] Wrapper function to obtain a result query table in memory, leaving no lock.

2005-04-20 Thread Ben Clewett
Dear SQLite users,
I have put together a small wrapper function to illustrate a method of 
completing a query which will not leave any lock on the database.

This takes the response of a query, e.g. "SELECT * FROM table", and 
returns a memory-resident array containing the result.  You may keep the 
array as long as you want, generate as many as you want, and at no time 
leave the database locked.

This also gives a far simpler API call to execute a query, taking just 
the handle and the SQL.

Sample use:
sw_result *result;
result = sw_query(sqlite_handle, "SELECT * FROM table");
printf("Data %s, %s, %s\n", sw_cell(0, 0), sw_cell(0, 1), sw_cell(0, 2));
sw_free_result(result);
There is no warranty; the code is provided for illustration reasons only. 
Any user wishing to make use of this does so at their own risk.

Kind regards,
Ben Clewett.


sqlite_wrapper.tar.gz
Description: GNU Zip compressed data


Re: [sqlite] Wrapper function to obtain a result query table in memory, leaving no lock.

2005-04-20 Thread Ben Clewett
Frank,
You spotted the deliberate error!  :)
The call is indeed:
const unsigned char *sw_cell(sw_result *result, int column, int row);
So the example should read:
printf("Data %s, %s, %s\n",
sw_cell(result, 0, 0),
sw_cell(result, 0, 1),
sw_cell(result, 0, 2));
Ben
F.W.A. van Leeuwen wrote:
I assume something is missing in your example.
There seems to be no connection between the 'result' variable and the 
sw_cell() calls.


result = sw_query(sqlite_handle, "SELECT * FROM table");
printf("Data %s, %s, %s\n", sw_cell(0, 0), sw_cell(0, 1), sw_cell(0, 2));
best regards
Frank



Re: [sqlite] Locking Methods

2005-04-19 Thread Ben Clewett
There are two things I am working on.  Firstly, just to loop around the 
execution:

do {
    rc = execute(sql);
} while (rc == SQLITE_BUSY || rc == SQLITE_LOCKED);
This does the job, but is not very nice.  Also impossible to tell apart 
a genuine lock through bad programming and a lock through heavy load.

Secondly, an alternate simple API to complete a SELECT query.  My 
version will load the table into memory, then complete the query.  That 
way no locking is maintained, and I can use the data for as long as I 
want.  (This is the way MySQL and PostgreSQL work.)  I can also stream 
this table down a TCP/IP connection.  Although this will be only a table 
of text, and require freeing after use.  (Has any person done this 
already to save me the work?)  If my version works, I'll try and offer a 
patch.

But I would still like to offer encouragement to the great developers of 
SQLite to get row locking working.  For me it will transform SQLite into a 
very professional system.

Kind regards,
Ben Clewett.

Will Leshner wrote:
On 4/18/05, Ben Clewett [EMAIL PROTECTED] wrote:

I see in the manual there are plans to use Row Locking.  This would solve
my problems.  Allowing me to load the database from multiple processes
to a far greater amount without fear of SQL_BUSY.  As well as
simplifying my programs.

I am currently experimenting with using triggers to do row locking and
it seems to work quite well, though I don't know if it will help solve
your particular problem.



Re: [sqlite] Callback when table contents have changed

2005-04-19 Thread Ben Clewett
You could poll the database as Micah suggested.  However you may get 
into the locking problems that I have.  When two processes attempt a 
read and write at the same time, as statistically will happen using this 
method, you have a LOCK failure.

A method I suggest is using a log file.  When you update your database, 
add a line to a log file.  Your second process simply reads the log 
file, checking whether the EOF has moved every second or two.  A more 
complex version would be to use something like RSS.  Broadcast changes 
from one system, or from your own SQLite wrapper.  Read the RSS from 
other processes.

Hope this is of some use :)
Ben
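A minimal sketch of the log-file idea, with the writer appending a line per change and the reader polling the file size (the path and interval are arbitrary):

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

/* Writer side: append one line per change to a well-known log file. */
void note_change(const char *table)
{
    FILE *f = fopen("/tmp/db-changes.log", "a");   /* example path */
    if (f != NULL) {
        fprintf(f, "changed %s\n", table);
        fclose(f);
    }
}

/* Reader side: poll the log file's size; growth means "something changed". */
void watch_changes(void)
{
    off_t last = 0;
    struct stat st;

    for (;;) {
        if (stat("/tmp/db-changes.log", &st) == 0 && st.st_size > last) {
            last = st.st_size;
            /* ... re-read the interesting tables here ... */
        }
        sleep(1);   /* "every second or two", as suggested above */
    }
}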
F.W.A. van Leeuwen wrote:
I asked this two weeks ago but have had no reply yet...
I've got one application that writes to the database, and one that reads 
from it.  When a table in the database has changed, the reading 
application needs to know that.  Of course I can send a signal from the 
writer to the reader app, but if the database had a means for that, 
I think it would be more elegant (not directly coupling the writer and the 
reader).  So it would be nice if a C application could subscribe to table 
changed events.  I don't think it is currently possible with SQLite, is 
it?

Frank.



Re: [sqlite] Locking Methods

2005-04-19 Thread Ben Clewett
This is true - selecting * from a table of size exceeding the memory of 
the machine would not work in any way.  Any large table would be slow.

A solution used by MySQL is to have two cursors.
The standard cursor returns a memory resident table containing the 
result of the query.  Ideal for small queries.  No locking problems, 
easy to program with, good for concurrency.  You can move forward and 
back through the result set, and know the dimensions of the result set. 
 This is still very fast.

Then they provide a second cursor which works like SQLite.  Returning a 
row at a time.  This is faster and better for large tables.

However, neither cursor locks more than a row at a time.  Then only 
locking for as long as it takes to generate a copy of the row.  So 
concurrency still works very well.

I would hate to see SQLite become MySQL, there is one too many of this 
already :)  But they may have some good ideas.

Regards, Ben.
Ken  Deb Allen wrote:
The largest drawback of this approach is scalability -- in my case I 
often deal with databases with 20-30 closely related tables, many of 
which can have 1,000,000 or more records in them, and containing 
multiple relations. Copying this into memory before each query would be 
very time consuming. Having two copies of the database (one in memory 
and one on disk) and ensuring that changes are made to both as part of a 
transaction would be more effective, but this would require large 
amounts of memory!

-Ken
On 19-Apr-05, at 3:36 AM, Ben Clewett wrote:
There are two things I am working on.  Firstly, just to loop around 
the execution:

do {
    rc = execute(sql);
} while (rc == SQLITE_BUSY || rc == SQLITE_LOCKED);
This does the job, but is not very nice.  Also impossible to tell apart 
a genuine lock through bad programming and a lock through heavy 
load.

Secondly, an alternative simple API to complete a SELECT query.  My 
version will load the table into memory, then complete the query.  
That way no lock is held, and I can use the data for as long as I 
want.  (This is the way MySQL and PostgreSQL work.)  I can also stream 
this table down a TCP/IP connection, although it will only be a table 
of text and will require freeing after use.  (Has anyone done this 
already to save me the work?)  If my version works, I'll try and offer 
a patch.

But I would still like to offer encouragement to the great developers 
of SQLite to get row locking working.  For me this would transform 
SQLite into a very professional system.

Kind regards,
Ben Clewett.

Will Leshner wrote:
On 4/18/05, Ben Clewett [EMAIL PROTECTED] wrote:
I see in the manual there are plans to use row locking.  This would 
solve my problems, allowing me to load the database from multiple 
processes to a far greater degree without fear of SQLITE_BUSY, as well 
as simplifying my programs.
I am currently experimenting with using triggers to do row locking and
it seems to work quite well, though I don't know if it will help solve
your particular problem.





Re: [sqlite] Locking Methods

2005-04-19 Thread Ben Clewett
Kervin,
I would be happy to help all I can on such a venture.  But I am not sure 
what that may be :)

Regards,
Ben.
Kervin L. Pierre wrote:
I think that's an excellent idea, and I'd like
to help however possible if work starts on a
patch.
My wishlist:
1. Finer-grained locking (row/table)
2. Memory-resident result sets
3. Reduction or elimination of SQLITE_SCHEMA
I think memory-resident result sets would be an
excellent feature to start with; at a glance it
seems simple enough (famous last words, right?
:) ).
-
Kervin



[sqlite] Locking Methods

2005-04-18 Thread Ben Clewett
Dear SQLite,
I have been using SQLite for a few weeks now, and enjoy using the product.
I am experiencing problems with locking.  Because SQLite uses 
database-level locking, this causes two major problems:

- I can't read through a record set and use the data to execute 
updates.  For instance, some parsing exercise which cannot be completed 
using a single SQL command.  I have to store all the data locally, get 
to the end of the query, and then execute any update statements.

- I sometimes get SQLITE_BUSY under heavy load on the database.  This 
error is hard to handle, forcing a check on every execution or a 
restart of the program.  This is the bigger problem, as it is 
unpredictable and limits the load I can apply to the DB.
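
A simple mitigation for the second problem, assuming the standard 
SQLite 3 C API, is sqlite3_busy_timeout(), which makes the library 
retry internally for up to the given number of milliseconds before 
returning SQLITE_BUSY.  A minimal sketch; the file name and the 
two-second timeout are only examples:

#include <sqlite3.h>

/* Open the database and tell SQLite to keep retrying a locked
   database for up to two seconds before giving up with SQLITE_BUSY. */
static sqlite3 *open_with_timeout(const char *filename)
{
    sqlite3 *db = 0;

    if (sqlite3_open(filename, &db) != SQLITE_OK) {
        sqlite3_close(db);
        return 0;
    }
    sqlite3_busy_timeout(db, 2000);   /* milliseconds */
    return db;
}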

I see in the manual there are plans to use row locking.  This would 
solve my problems, allowing me to load the database from multiple 
processes to a far greater degree without fear of SQLITE_BUSY, as well 
as simplifying my programs.

I would be very interested in knowing how this is progressing, as this 
would allow me to plan my future work.

Thanks for the excellent product,
Ben Clewett.


Re: [sqlite] multi threading

2005-04-18 Thread Ben Clewett
Jay Sprenkle wrote:
My advice to *all* programmers is to never use more than one thread
in the same address space.  I have never in 20 years worked on a
multiple threaded program that actually got all of the threading
issues right.  There are always subtle bugs that cause errors that are 
very difficult to reproduce and fix.  Multithreading is the
fastest road to buggy code that I know of.  Avoid it.  If you
absolutely, positively must have multiple threads of control, put
each thread in its own address space (make it a process.)

I second the motion!
I agree that threads have to be treated with caution.  Locking must be 
used everywhere.  Expect some difficult bugs.  Protect your buffers etc...

But I think there is a valid place for them...  For instance, in 
event-driven programming.  Programming a single thread to handle events 
without getting stuck on one event can arguably be as hard and buggy as 
giving each event its own thread.  See Win95 :)  This can be helped by 
using objects which have been designed to be thread-safe, for instance 
Java's excellent thread-safe hash table.  Threads also find a use where 
TCP/IP blocking may be encountered.

Also where a UI is used at the same time as some background processing. 
E.g. downloading email on a slow link whilst writing and sending another 
email.  With a single thread, this would be extremely hard to program 
well.

I also note the performance gains which can be obtained from modern 
hyperthreaded CPUs.

But these are my opinions, and I have been programming a lot less than 
20 years, as well as being new to this list, so please ignore my 
ramblings!

Ben


Re: [sqlite] multi threading

2005-04-18 Thread Ben Clewett
Jay Sprenkle wrote:
I also note the performance gains which can be obtained from modern
hyperthreaded CPUs.

Does Linux/Windows make each process a thread on these beasties?
If so, wouldn't making each task a process end up being the same thing?
Each process is a collection of one or more threads.  Since each thread 
has access to the same memory and the same code, a hyperthreaded CPU can 
run more than one thread literally at the same time, as long as they 
are in the same process.

So if you split your processing between a collection of threads, your 
application can run faster.  Since this is done at the CPU level, it 
works on Linux, Windows and any other OS.  (Except SCO, which has no 
threading support.)

This of course adds to the many forms of bug available to badly 
programmed threaded applications...

Ben



[sqlite] Newbe question on collations

2005-03-22 Thread Ben Clewett
Dear SQLite,
Thanks for providing the application of the decade.  SQLite is simply 
excellent.  Thanks for all the work.

I am using SQLite as a bastion database between an application and 
MySQL.  This works very well, except that the collations differ: MySQL 
is case-insensitive.

I can't seem to find any information on the web site.  Would anyone 
have some information on how to change the default collation?  This 
would be extremely useful.
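
In case it helps, SQLite 3 has a built-in case-insensitive collation 
called NOCASE (ASCII only), which can be attached to a column or used 
per comparison; a custom collation can also be registered with 
sqlite3_create_collation().  A minimal sketch with made-up table and 
column names:

#include <sqlite3.h>

/* Attach the built-in NOCASE collation to a column so that 'Ben'
   and 'BEN' compare equal (ASCII case folding only). */
static int demo_nocase(sqlite3 *db)
{
    int rc = sqlite3_exec(db,
        "CREATE TABLE people (name TEXT COLLATE NOCASE);"
        "INSERT INTO people VALUES ('Ben');",
        0, 0, 0);
    if (rc != SQLITE_OK) return rc;

    /* Matches despite the different case because of COLLATE NOCASE. */
    return sqlite3_exec(db,
        "SELECT * FROM people WHERE name = 'BEN';", 0, 0, 0);
}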

Kind regards,
Ben Clewett.