Re: [sqlite] corrupt database recovery

2009-05-28 Thread Gene Allen
We weren't logging the encryption failure.

>> The feof() does return true until you attempt to read PAST the end of a
>> file.
The "does" in the above line should be "doesn't".

>> while code
Yeah, this code was written as a 'do...while' instead of a 'while...do'.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Machin
Sent: Thursday, May 28, 2009 7:19 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

On 29/05/2009 9:34 AM, Gene Allen wrote:
> Yeah.  
> 
> Since my code works in blocks (read/compress/encrypt/write, loop), almost
> all the real data was being written to the compressed file; however, any
> finalization and flushing of the stream wasn't occurring (since the
> encrypt was failing)

and the encrypt failure wasn't logged?

 > so the last bit of any SQLite database wouldn't be written.

If so, pragma integrity_check should report that some of the pages 
actually written contain pointers to pages that are past the end of the 
file, shouldn't it?

[snip]

>> Well...a more structured test exposed the problem and it was this:
>>
>> The feof() does return true until you attempt to read PAST the end of a
>> file.

If feof doesn't continue to return true, it is broken.

>> So the code worked great until the file's length was a multiple of the
>> buffer size (in my case 262,144 bytes).  As you can imagine that doesn't
>> happen too often in the real world.
>>
>> Since I assumed that a feof would return true when there wasn't any more
>> data in the file, I would start another pass at reading a chunk of data
>> (which wouldn't find anything) and run thru the compression/encryption
>> code.
>> The compression code handled it correctly, but the encryption
>> required a DWORD boundary (blowfish) and since 0 is on such a boundary
>> but at the wrong end...it would fail.  

Silently? Unlogged?

In any case, I would have thought using feof() was not needed ...
long time since I've written C in earnest, but isn't something like this 
the standard idiom:

#include <stdio.h>

#define CHUNKSIZE 262144  /* renamed: BUFSIZ is already defined by <stdio.h> */
char buff[CHUNKSIZE];
size_t nbytes;
FILE *f;
f = fopen("filename", "rb");
while ((nbytes = fread(buff, 1, CHUNKSIZE, f)) > 0) {
    do_something(buff, nbytes);
}

??

HTH,

John
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-28 Thread John Machin
On 29/05/2009 10:18 AM, John Machin wrote:
> On 29/05/2009 9:34 AM, Gene Allen wrote:
>> Yeah.  
>>
>> Since my code works in blocks (read/compress/encrypt/write, loop), almost
>> all the real data was being written to the compressed file; however, any
>> finalization and flushing of the stream wasn't occurring (since the encrypt
>> was failing)
> 
> and the encrypt failure wasn't logged?
> 
>  > so the last bit of any SQLite database wouldn't be written.
> 
> If so, pragma integrity_check should report that some of the pages 
> actually written contain pointers to pages that are past the end of the 
> file, shouldn't it?

Your output from the integrity_check shows complaints about invalid page
numbers in the range 462 to 773. At the default page size of 1024,
those page numbers span (773-462+1)*1024 = 319488 bytes, so you are
missing more than a 262144-byte chunk [unless your page size is 512!].

This doesn't seem to gel with the combination of "almost all the real 
data was being written" and the hypothesis that the database was 
corrupted merely by truncation.

What are the page size, the expected size of the database, and the 
actual (truncated) size of the database?

What evidence do you have that the feof problem actually happened in 
this case?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-28 Thread Jay A. Kreibich
On Thu, May 28, 2009 at 03:12:55PM -0700, Allen Fowler scratched on the wall:
> 
> > just for anybody who is interested:
> 
> > 
> > I translated Jim's function into Windows code and added
> > a page of 1024 bytes that will be written, instead of a single byte.
> > On my Win-XP system I got 55 TPS, much faster than sqlite
> > seems to write a page but that might be related to the
> > additional overhead sqlite needs to do.
> > 
> > This brings me to a nice-to-have feature request:
> > How about adding a similar test function to the sqlite API?
> > This might use the vfs to write pages and give some feedback
> > on the performance of the system sqlite runs on.
> > It might also detect problems with the commit function...
> > Just an idea...
> > 
> 
> Interesting idea.

  It would make a lot more sense to make this an external utility
  or an extension of the sqlite3 shell.  Adding it to the core library
  is a definite case of code bloat.

  Actually, a whole suite of performance related tests might be
  interesting.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-28 Thread John Machin
On 29/05/2009 9:34 AM, Gene Allen wrote:
> Yeah.  
> 
> Since my code works in blocks (read/compress/encrypt/write, loop), almost
> all the real data was being written to the compressed file; however, any
> finalization and flushing of the stream wasn't occurring (since the encrypt
> was failing)

and the encrypt failure wasn't logged?

 > so the last bit of any SQLite database wouldn't be written.

If so, pragma integrity_check should report that some of the pages 
actually written contain pointers to pages that are past the end of the 
file, shouldn't it?

[snip]

>> Well...a more structured test exposed the problem and it was this:
>>
>> The feof() does return true until you attempt to read PAST the end of a
>> file.

If feof doesn't continue to return true, it is broken.

>> So the code worked great until the file's length was a multiple of the
>> buffer size (in my case 262,144 bytes).  As you can imagine that doesn't
>> happen too often in the real world.
>>
>> Since I assumed that a feof would return true when there wasn't any more
>> data in the file, I would start another pass at reading a chunk of data
>> (which wouldn't find anything) and run thru the compression/encryption
>> code.
>> The compression code handled it correctly, but the encryption
>> required a DWORD boundary (blowfish) and since 0 is on such a boundary
>> but at the wrong end...it would fail.  

Silently? Unlogged?

In any case, I would have thought using feof() was not needed ...
long time since I've written C in earnest, but isn't something like this 
the standard idiom:

#include <stdio.h>

#define CHUNKSIZE 262144  /* renamed: BUFSIZ is already defined by <stdio.h> */
char buff[CHUNKSIZE];
size_t nbytes;
FILE *f;
f = fopen("filename", "rb");
while ((nbytes = fread(buff, 1, CHUNKSIZE, f)) > 0) {
    do_something(buff, nbytes);
}

??

HTH,

John
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-28 Thread Allen Fowler



> 
> Since my code works in blocks (read/compress/encrypt/write, loop), almost
> all the real data was being written to the compressed file; however, any
> finalization and flushing of the stream wasn't occurring (since the encrypt
> was failing) so the last bit of any SQLite database wouldn't be written.  
> 
> I'm guessing that also contributes to how infrequent I saw a problem.
> 

What size blocks were you using? 

So, what _is_ stored in the final "block size" length of the DB file?  It must 
be something not-so-critical as you are able to dump the DB, right?

Perhaps adding a full-file verify pass to your compress/encrypt code would be a 
good idea.
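
A minimal sketch of the comparison half of such a pass (assuming you first
restore the backup to a temp file with your existing decrypt/decompress
path; files_identical is a made-up helper, not anything from SQLite):

#include <stdio.h>
#include <string.h>

/* Returns 1 if the two files are byte-for-byte identical, else 0.
   Note it trusts the fread() return count, not feof(). */
static int files_identical(const char *path_a, const char *path_b)
{
    FILE *a = fopen(path_a, "rb");
    FILE *b = fopen(path_b, "rb");
    unsigned char abuf[4096], bbuf[4096];
    size_t na, nb;
    int same = (a != NULL && b != NULL);

    while (same) {
        na = fread(abuf, 1, sizeof abuf, a);
        nb = fread(bbuf, 1, sizeof bbuf, b);
        if (na != nb || memcmp(abuf, bbuf, na) != 0) same = 0;
        else if (na == 0) break;      /* both files ended together */
    }
    if (a) fclose(a);
    if (b) fclose(b);
    return same;
}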


  

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] add column creating null columns even with default?

2009-05-28 Thread John Machin
On 29/05/2009 2:53 AM, Simon Slavin wrote:
> On 28 May 2009, at 9:00am, Damien Elmes wrote:
> 
>> alter table cardModels add column allowEmptyAnswer boolean not null  
>> default 1
> 
>> sqlite> update cardModels set allowEmptyAnswer = 0;
> 
> You're obviously used to other implementations of SQL.  'boolean'  
> isn't a legit type name:
> 
> http://www.sqlite.org/datatype3.html

AFAICT that page says nothing about what is a "legit type name". This 
one does: http://www.sqlite.org/syntaxdiagrams.html#type-name

Here are some examples of legitimate type-names:

dos-prompt>sqlite3
SQLite version 3.6.14
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table foo (c1 jabberwocky, c2 very big inteher whoops 
typo, c3 "3.14159", c4 very variable character (-123456, +666.987), c5 
boolean);
sqlite> pragma table_info(foo);
0|c1|jabberwocky|0||0
1|c2|very big inteher whoops typo|0||0
2|c3|3.14159|0||0
3|c4|very variable character (-123456, +666.987)|0||0
4|c5|boolean|0||0
sqlite>

"boolean" as a type-name will cause the column to have NUMERIC affinity 
according to the rules on the page you quoted, and the OP seems to be 
careful to constrain values to 0, 1, and NULL, so this all looks 
rather sensible to me.

> 
> This may or may not be the cause of the problem you report, but fix it  
> first.

How would you propose to fix it?

Cheers,

John
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-28 Thread Gene Allen
Yeah.  

Since my code works in blocks (read/compress/encrypt/write, loop), almost
all the real data was being written to the compressed file; however, any
finalization and flushing of the stream wasn't occurring (since the encrypt
was failing) so the last bit of any SQLite database wouldn't be written.  

I'm guessing that also contributes to how infrequently I saw a problem.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Allen Fowler
Sent: Thursday, May 28, 2009 5:17 PM
To: General Discussion of SQLite Database; kennethinbox-sql...@yahoo.com
Subject: Re: [sqlite] corrupt database recovery





> I think I found my defect: my old stress tests were based on doing
> compression/encryption/decryption/decompression passes on files of random
> sizes; so I would do about 10 million passes or so and say...that's
> pretty good.
> 
> Well...a more structured test exposed the problem and it was this:
> 
> The feof() does return true until you attempt to read PAST the end of a
> file. So the code worked great until the file's length was a multiple of
> the buffer size (in my case 262,144 bytes).  As you can imagine that
> doesn't happen too often in the real world.
> 
> Since I assumed that a feof would return true when there wasn't any more
> data in the file, I would start another pass at reading a chunk of data
> (which wouldn't find anything) and run thru the compression/encryption
> code.
> The compression code handled it correctly, but the encryption
> required a DWORD boundary (blowfish) and since 0 is on such a boundary
> but at the wrong end...it would fail.  
> 



But what was causing the DB to be corrupt, while a dump worked fine?

What structure was getting damaged?



  

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] TOCTOU pager bug in 3.6.14.2

2009-05-28 Thread Brian Dantes
See Ticket 3883



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-28 Thread Allen Fowler




> I think I found my defect: my old stress tests were based on doing
> compression/encryption/decryption/decompression passes on files of random
> sizes; so I would do about 10 million passes or so and say...that's pretty
> good.
> 
> Well...a more structured test exposed the problem and it was this:
> 
> The feof() does return true until you attempt to read PAST the end of a
> file. So the code worked great until the file's length was a multiple of the
> buffer size (in my case 262,144 bytes).  As you can imagine that doesn't
> happen too often in the real world.
> 
> Since I assumed that a feof would return true when there wasn't any more
> data in the file, I would start another pass at reading a chunk of data
> (which wouldn't find anything) and run thru the compression/encryption code.
> The compression code handled it correctly, but the encryption
> required a DWORD boundary (blowfish) and since 0 is on such a boundary
> but at the wrong end...it would fail.  
> 



But what was causing the DB to be corrupt, while a dump worked fine?

What structure was getting damaged?



  

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-28 Thread Allen Fowler

> just for anybody who is interested:

> 
> I translated Jim's function into Windows code and added
> a page of 1024 bytes that will be written, instead of a single byte.
> On my Win-XP system I got 55 TPS, much faster than sqlite
> seems to write a page but that might be related to the
> additional overhead sqlite needs to do.
> 
> This brings me to a nice-to-have feature request:
> How about adding a similar test function to the sqlite API?
> This might use the vfs to write pages and give some feedback
> on the performance of the system sqlite runs on.
> It might also detect problems with the commit function...
> Just an idea...
> 

Interesting idea.




> Anyway, here is my win32 version of Jim's test function:
> 
> ---
> #include <stdio.h>
> #include <stdlib.h>
> #include <io.h>
> #include <fcntl.h>
> #include <time.h>
> 
> #define TESTWRITES 1000
> 
> int TestDisk ()
> {
> int fd;
> int n;
> int loops;
> unsigned char Page[1024];
> time_t elap;
> time_t start;
> 
> if ((fd=_open("C:\\testfile.db", O_RDWR+O_CREAT, 0777)) == -1)
> {
>   fprintf(stderr, "Error opening file");
>   exit(1);
> }
> 
> start = time(NULL);
> for(loops=0; loops<TESTWRITES; loops++)
> {
>   if (_lseek(fd, 0, SEEK_SET) == -1) {
> fprintf(stderr,"Error seeking file");
> exit(1);
>   }
> 
>   n = _write(fd, Page, 1024);
>   if (n != 1024) {
> fprintf(stderr,"Error writing file");
> exit(1);
>   }
>   _commit(fd);
> }
> elap = time(NULL)-start;
> fprintf(stderr,"Time: %d seconds; TPS=%f\n", (int)elap, 
> TESTWRITES*1.0/((int)elap));
> 
> return(0);
> }
> -
> 
> Jim Wilcoxson wrote:
> > I'm running on Linux with ext3 and just wrote a Python test program to
> > insert rows into a table with a single column, no indexing, and doing
> > a commit after each insert.  When I first ran it, I got around 440
> > inserts per second, which is clearly impossible.  This is a 7200rpm
> > drive, so even if I could write a row on every revolution, the maximum
> > insert rate would be 120 per second.  I tried adding "pragma
> > synchronous=normal", and it sped up to 520 TPS.  With synchronous=full
> > it slowed back to 420-460, so this must be the Python default.  Adding
> > synchronous=off increased the rate to over 6000 TPS -- basically
> > writing to memory instead of disk.
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



  

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQlite performance on multi process env

2009-05-28 Thread Griggs, Donald

Subject: Re: [sqlite] SQlite performance on multi process env

Hello, Zhrahman,

Regarding:   "... kindly suggest how to properly have the database
shared in memory among n number of processes. So they can execute select
operations (read only, no update on the
database) efficiently."


If the large number of processes are unavoidable, you might want to
consider:
   1) Writing a server layer that would be the only process to open the
DB file and service the requests for the other processes.
   2) Modifying such a layer already written by others.
   3) Using a database designed for lots of simultaneous clients, such
as Postgres or MySql.  (I believe the succinct advice has been something
like, "Think of sqlite as a replacement for fwrite, not as a replacement
for Oracle.")
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQlite performance on multi process env

2009-05-28 Thread Jay A. Kreibich
On Thu, May 28, 2009 at 10:53:34AM -0700, zhrahman scratched on the wall:
> 
> Few other info
> I am running it on Linux. So to make the long story short, kindly suggest
> how to properly have the database shared in memory among n number of
> processes.

  You can't.  ":memory:" databases cannot be shared.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is this test failure serious: *malloc*.transient.42 failures (expected [1 1], got [0 {}])?

2009-05-28 Thread D. Richard Hipp

On May 28, 2009, at 4:12 PM, Nicolas Williams wrote:
>
>> To debug, set a breakpoint on sqlite3Fault().  That routine is called
>> whenever an OOM fault is simulated.  Run to the point of the OOM  
>> fault
>> that is causing the problem.  Figure out which malloc() is failing  
>> and
>> why the problem is not being reported back up to the top level.
>
> How would I run a single test?

You can't.  You have to run an entire test script.  Skip over the  
breakpoints prior to the point of interest.

 gdb testfixture
 br sqlite3Fault
 r ../sqlite/test/altermalloc.test

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is this test failure serious: *malloc*.transient.42 failures (expected [1 1], got [0 {}])?

2009-05-28 Thread Nicolas Williams
On Thu, May 28, 2009 at 04:00:07PM -0400, D. Richard Hipp wrote:
> These are tests that simulate a malloc() failure (aka "out-of-memory"  
> or OOM fault) to verify that nothing crashes or leaks following an OOM  
> fault and that an SQLITE_NOMEM error is reported back out at the top  
> level.  The problem here seems to be that the SQLITE_NOMEM is not  
> making it back up to the top.  This is not a serious error because (1)  
> nothing crashed and (2) you probably will never hit a real OOM fault  
> on a virtual-memory platform like Solaris anyhow.

Thanks.  Solaris can definitely return ENOMEM.  The Solaris VM subsystem
does not oversubscribe memory+swap (there's no OOM killer in Solaris).

> To debug, set a breakpoint on sqlite3Fault().  That routine is called  
> whenever an OOM fault is simulated.  Run to the point of the OOM fault  
> that is causing the problem.  Figure out which malloc() is failing and  
> why the problem is not being reported back up to the top level.

How would I run a single test?

> I really did run a complete regression with no errors on Linux x86,  
> x86_64, and mac x86 prior to releasing 3.6.14.2.  I'm very curious to  
> know what is different about Solaris that is causing the extra  
> undetected OOM fault.

I'll let you know.  What I'll do (once I know how to run a single test
:) is use DTrace like this:

# Run the test, catch a call to sqlite3Fault(), print a stack trace,
# stop the process, and exit.
dtrace -w -n 'pid$target::sqlite3Fault:entry{ustack(); stop(); exit();}' 


Then I'll use DTrace again to trace all function call entry/return
events as the program unwinds, something like this:

# Set the stopped program to run and trace the time on every function
# call and all return values.
dtrace -F -n 'BEGIN {system("prun $1");}' \
-n 'pid$1:::entry{trace(timestamp);}' \
-n 'pid$1:::return{trace(arg1);}' 

> > Also, I got this more serious looking test failure on x86, 32-bit:
> >
> > perm-memsubsys1.fts3expr-3.18...
> > Expected: [AND one wo]
> > Got: [AND one woe]

Did you mean to tell me about this one?

Nico
-- 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-28 Thread Gene Allen
I have not! but I just might...thanks for the tip.

I think I found my defect: my old stress tests were based on doing
compression/encryption/decryption/decompression passes on files of random
sizes; so I would do about 10 million passes or so and say...that's pretty
good.

Well...a more structured test exposed the problem and it was this:

The feof() does return true until you attempt to read PAST the end of a
file. So the code worked great until the file's length was a multiple of the
buffer size (in my case 262,144 bytes).  As you can imagine that doesn't
happen too often in the real world.

Since I assumed that a feof would return true when there wasn't any more
data in the file, I would start another pass at reading a chunk of data
(which wouldn't find anything) and run thru the compression/encryption code.
The compression code handled it correctly, but the encryption
required a DWORD boundary (blowfish) and since 0 is on such a boundary
but at the wrong end...it would fail.  

I fixed the code to test for 0 bytes being read, instead of solely relying
on feof() and all seems well. :)
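
In outline, the bug and the fix look like this (an illustrative sketch, not
my actual code; compress_and_encrypt() is a stand-in for the real
compression/encryption path):

#include <stdio.h>

#define CHUNK 262144

static void process_file(FILE *f)
{
    static unsigned char buf[CHUNK];
    size_t n;

    /* Buggy shape: feof() only becomes true AFTER a read attempt hits
       end-of-file.  When the file length is an exact multiple of CHUNK,
       the last full read leaves feof() false, so the loop runs once more
       with n == 0 and hands an empty chunk to the encryption. */
    do {
        n = fread(buf, 1, CHUNK, f);
        /* compress_and_encrypt(buf, n);   fails when n == 0 */
    } while (!feof(f));

    /* Fixed shape: stop as soon as a read returns no data. */
    rewind(f);
    while ((n = fread(buf, 1, CHUNK, f)) > 0) {
        /* compress_and_encrypt(buf, n); */
    }
}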

I would like to say thank you for all your help and advice.

Gene

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Ken
Sent: Thursday, May 28, 2009 11:32 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery


Gene,

I'm sure others have suggested, but have you tried running your code through
valgrind? 
Can you remove the custom VFS ?


--- On Wed, 5/27/09, Gene Allen  wrote:

> From: Gene Allen 
> Subject: Re: [sqlite] corrupt database recovery
> To: mgr...@medcom-online.de, "'General Discussion of SQLite Database'"

> Date: Wednesday, May 27, 2009, 4:51 PM
> I've reviewed the code over and over
> again, but have yet to find it.  But it
> is a good bit of very complicated code (blowfish and
> compression code).
> 
> That's why I was asking about the format of the
> database.  Since I'm able to
> successfully do a .dump but the integrity_check whines, I'm
> thinking that
> maybe if I focus on what could cause that, it might
> give me a clue.  
> 
> For example, if the tree is stored at the end of file maybe
> I'm falling out
> the loop without writing the final bytes.  But if the
> tree is not localized
> to one area of the file, I'll have to rethink my approach.
> 
> 
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of Marcus Grimm
> Sent: Wednesday, May 27, 2009 2:44 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
> 
> so, if you think it is a coding error on your side it will
> be hard work to find the source. I can only imagine side
> effects
> on a used API like sqlite3 with the classics:
> 
> - stack frame overload in a function that calls sqlite
> functions
> - using a local stack variable as a return pointer and
> reusing
>   this externally.
> - memory overwriting on malloc'ed pointers or reusing of
> already
>   free'd memory pages.
> 
> one and two might be found using a code review and
> identifying
> local variables or particular arrays that are written:
> So wherever you call anything from sqlite check the stack
> declaration and review the usage of these variables...
> of course just don't do number two... ;)
> 
> memory errors might be detected using special debug
> versions of the
> malloc/free library, by code review, or manually by adding
> some
> test variables wherever you malloc or free a pointer.
> 
> I'm sure you already did some of these... anyway, good
> luck
> 
> are you using threads? would be another beautiful issue to
> trace...;)
> 
> Marcus
> 
> 
> 
> > It is a server class machine running Windows 2003 with
> 4 GB.  No, it's a
> > local drive with 20GB free on it.
> >
> > I'm sure that it's a coding error on my part. 
> SQLite is very stable, in
> > my
> > opinion.  I'm just trying to get a rough idea on
> where I'm screwing up the
> > database.
> >
> >
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of John Elrick
> > Sent: Wednesday, May 27, 2009 12:58 PM
> > To: General Discussion of SQLite Database
> > Subject: Re: [sqlite] corrupt database recovery
> >
> > What platform?  Any chance they are using a
> network drive?
> >
> >
> > John
> >
> > Gene wrote:
> >> My code is outside the database layer.  So I
> do all my database work,
> >> then
> >> compress and encrypt it.  No errors are
> returned anywhere.  I'm guessing
> >> that it's going to be an uninitialized variable or
> byte alignment
> >> problems
> >> somewhere.
> >>
> >> This code is running on hundreds of machines
> without a problem and I've
> >> never reproduced it but every now and again I get
> a support ticket
> >> showing
> > a
> >> 

Re: [sqlite] Is this test failure serious: *malloc*.transient.42 failures (expected [1 1], got [0 {}])?

2009-05-28 Thread D. Richard Hipp

On May 28, 2009, at 3:32 PM, Nicolas Williams wrote:
> I'm getting these failures in gmake fulltest output for SQLite3  
> 3.6.14.2
> on Solaris x86 (32- and 64-bit):
>
> altermalloc-1.transient.42...
> Expected: [1 1]
> Got: [0 {}]
>
> Are these failures a problem?  How would I start to debug this?

These are tests that simulate a malloc() failure (aka "out-of-memory"  
or OOM fault) to verify that nothing crashes or leaks following an OOM  
fault and that an SQLITE_NOMEM error is reported back out at the top  
level.  The problem here seems to be that the SQLITE_NOMEM is not  
making it back up to the top.  This is not a serious error because (1)  
nothing crashed and (2) you probably will never hit a real OOM fault  
on a virtual-memory platform like Solaris anyhow.

To debug, set a breakpoint on sqlite3Fault().  That routine is called  
whenever an OOM fault is simulated.  Run to the point of the OOM fault  
that is causing the problem.  Figure out which malloc() is failing and  
why the problem is not being reported back up to the top level.

I really did run a complete regression with no errors on Linux x86,  
x86_64, and mac x86 prior to releasing 3.6.14.2.  I'm very curious to  
know what is different about Solaris that is causing the extra  
undetected OOM fault.



>
>
> Also, I got this more serious looking test failure on x86, 32-bit:
>
> perm-memsubsys1.fts3expr-3.18...
> Expected: [AND one wo]
> Got: [AND one woe]
>
> Thanks,
>
> Nico
> -- 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Default page size based on filesystem

2009-05-28 Thread Nicolas Williams
Looking at the changelog I see that in 3.6.12 SQLite3 gained the ability
to set the default page size for new databases according to the
filesystem's preference:

# The default page size on windows is automatically adjusted to match
  the capabilities of the underlying filesystem. 

I had a patch to do the same on Unix, but I refrained from using it
because many tests would fail.  The failures were clearly due to the
tests assuming a specific default page size, so where a test expected a
file to have N pages at some point, well, that test would fail when run
on, say, a ZFS filesystem because the number of pages in the db depends
on the page size.

Have the tests been fixed to take into account the dynamically
determined default page size?  Or should I assume that if the tests pass
with default page size = 1KB then making the default page size dynamic
will be OK?

Nico
-- 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Is this test failure serious: *malloc*.transient.42 failures (expected [1 1], got [0 {}])?

2009-05-28 Thread Nicolas Williams
I'm getting these failures in gmake fulltest output for SQLite3 3.6.14.2
on Solaris x86 (32- and 64-bit):

altermalloc-1.transient.42...
Expected: [1 1]
 Got: [0 {}]
...
attachmalloc-1.transient.42...
Expected: [1 1]
 Got: [0 {}]
...
malloc6-1.transient.42...
Expected: [1 1]
 Got: [0 {}]
...
mallocG-1.transient.42...
Expected: [1 1]
 Got: [0 {}]
...
shared_err-5.transient.44...
Expected: [1 1]
 Got: [0 {}]
...
shared_err-5.transient.104...
Expected: [1 1]
 Got: [0 {}]
...
perm-memsubsys1.altermalloc-1.transient.41...
Expected: [1 1]
 Got: [0 {}]
...
perm-memsubsys1.attachmalloc-1.transient.41...
Expected: [1 1]
 Got: [0 {}]
...
perm-memsubsys1.malloc6-1.transient.41...
Expected: [1 1]
 Got: [0 {}]
...
perm-memsubsys1.mallocG-1.transient.41...
Expected: [1 1]
 Got: [0 {}]
...
perm-memsubsys2.altermalloc-1.transient.42...
Expected: [1 1]
 Got: [0 {}]
...
perm-memsubsys2.attachmalloc-1.transient.42...
Expected: [1 1]
 Got: [0 {}]
...

Are these failures a problem?  How would I start to debug this?

Also, I got this more serious looking test failure on x86, 32-bit:

perm-memsubsys1.fts3expr-3.18...
Expected: [AND one wo]
 Got: [AND one woe]

Thanks,

Nico
-- 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQlite performance on multi process env

2009-05-28 Thread John Elrick
zhrahman wrote:
> Few other info
> I am running it on Linux. So to make the long story short, kindly suggest
> how to properly have the database shared in memory among n number of
> processes. So they can execute select operations (read only, no update on the
> database) efficiently.
>   

Multiprocessing is a magic trick.  The sleight of hand makes the system 
appear to be faster running multiple tasks by relying upon different 
processes or threads needing access to different interrupts in a given 
period of time.

We can go back to the ancient days of punched cards and line printing to 
make a simple example.  The card reader takes time to process data.  
While a given process is waiting on the card reader, another process can 
chunk through some computations.  Meanwhile, those computations can be 
interrupted to let another process send data to a printer buffer which 
soon fills up and that process has to wait.  Back to computations.  Now 
the card reader buffer is full and can be processed so we switch over 
there.  You get the idea.  From the outside it looks like the system as 
a whole runs faster, but it's an illusion.  We're just making use of 
dead time.

If you throw nearly identical processes into the mix, such as you are 
doing, you defeat the magic.  Suddenly, you're back to no better than 
serially processing the data, with the penalty of task switching added 
in.  Given a quad core processor, you would likely be further ahead with 
three forced processes dividing the work among three cores -- leaving 
the fourth core for operating system processing.  You gain nothing by 
throwing more processes at the problem.


John Elrick
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQlite performance on multi process env

2009-05-28 Thread zhrahman

Few other info
I am running it on Linux. So to make the long story short, kindly suggest
how to properly have the database shared in memory among n number of
processes. So they can execute select operations (read only, no update on the
database) efficiently.


-- 
View this message in context: 
http://www.nabble.com/SQlite-performance-on-multi-process-env-tp23752056p23766728.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] add column creating null columns even with default?

2009-05-28 Thread Simon Slavin

On 28 May 2009, at 9:00am, Damien Elmes wrote:

> alter table cardModels add column allowEmptyAnswer boolean not null  
> default 1

> sqlite> update cardModels set allowEmptyAnswer = 0;

You're obviously used to other implementations of SQL.  'boolean'  
isn't a legit type name:

http://www.sqlite.org/datatype3.html

This may or may not be the cause of the problem you report, but fix it  
first.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQlite performance on multi process env

2009-05-28 Thread zhrahman

 So yes the hard disk is shared. I tried to even load the database in memory.
It is still horribly slow. I want to understand this. If I load the database
in memory how can I make the memory sharable among 100 processes. I am
running in a quad core environment. So my goal here is to load the database
in memory, each process will open a read only connection to it and execute the
select operations. For doing 1 select operation a single process was
taking 1 second and when 100 processes were running each took 10+ seconds. What
are the ways to share the database among all the processes in memory. I hope
some shared memory approach is used here. But what I see is unless I
recreate all the tables individually for each process the memory sharing
doesn't happen. Please help me out here 

-- 
View this message in context: 
http://www.nabble.com/SQlite-performance-on-multi-process-env-tp23752056p23765638.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-28 Thread Ken

Gene,

I'm sure others have suggested, but have you tried running your code through 
valgrind? 
Can you remove the custom VFS ?


--- On Wed, 5/27/09, Gene Allen  wrote:

> From: Gene Allen 
> Subject: Re: [sqlite] corrupt database recovery
> To: mgr...@medcom-online.de, "'General Discussion of SQLite Database'" 
> 
> Date: Wednesday, May 27, 2009, 4:51 PM
> I've reviewed the code over and over
> again, but have yet to find it.  But it
> is a good bit of very complicated code (blowfish and
> compression code).
> 
> That's why I was asking about the format of the
> database.  Since I'm able to
> successfully do a .dump but the integrity_check whines, I'm
> thinking that
> maybe if I focus on what could cause that, it might
> give me a clue.  
> 
> For example, if the tree is stored at the end of file maybe
> I'm falling out
> the loop without writing the final bytes.  But if the
> tree is not localized
> to one area of the file, I'll have to rethink my approach.
> 
> 
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of Marcus Grimm
> Sent: Wednesday, May 27, 2009 2:44 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
> 
> so, if you think it is a coding error on your side it will
> be hard work to find the source. I can only imagine side
> effects
> on a used API like sqlite3 with the classics:
> 
> - stack frame overload in a function that calls sqlite
> functions
> - using a local stack variable as a return pointer and
> reusing
>   this externally.
> - memory overwriting on malloc'ed pointers or reusing of
> already
>   free'd memory pages.
> 
> one and two might be found using a code review and
> identifying
> local variables or particular arrays that are written:
> So wherever you call anything from sqlite check the stack
> declaration and review the usage of these variables...
> of course just don't do number two... ;)
> 
> memory errors might be detected using special debug
> versions of the
> malloc/free library, by code review, or manually by adding
> some
> test variables wherever you malloc or free a pointer.
> 
> I'm sure you already did some of these... anyway, good
> luck
> 
> are you using threads? would be another beautiful issue to
> trace...;)
> 
> Marcus
> 
> 
> 
> > It is a server class machine running Windows 2003 with
> 4 GB.  No, it's a
> > local drive with 20GB free on it.
> >
> > I'm sure that it's a coding error on my part. 
> SQLite is very stable, in
> > my
> > opinion.  I'm just trying to get a rough idea on
> where I'm screwing up the
> > database.
> >
> >
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of John Elrick
> > Sent: Wednesday, May 27, 2009 12:58 PM
> > To: General Discussion of SQLite Database
> > Subject: Re: [sqlite] corrupt database recovery
> >
> > What platform?  Any chance they are using a
> network drive?
> >
> >
> > John
> >
> > Gene wrote:
> >> My code is outside the database layer.  So I
> do all my database work,
> >> then
> >> compress and encrypt it.  No errors are
> returned anywhere.  I'm guessing
> >> that it's going to be an uninitialized variable or
> byte alignment
> >> problems
> >> somewhere.
> >>
> >> This code is running on hundreds of machines
> without a problem and I've
> >> never reproduced it but every now and again I get
> a support ticket
> >> showing
> > a
> >> corrupt database.  So I'm trying to figure
> out WHERE to look.
> >>
> >> -Original Message-
> >> From: sqlite-users-boun...@sqlite.org
> >> [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of John Elrick
> >> Sent: Wednesday, May 27, 2009 10:59 AM
> >> To: General Discussion of SQLite Database
> >> Subject: Re: [sqlite] corrupt database recovery
> >>
> >> Gene Allen wrote:
> >>
> >>> Ok...it's happened again and I've decided that
> I need to track this
> >>> down
> >>> once and for all!
> >>>
> >>> Here is what I'm seeing: I get errors when I
> do a integrity_check (see
> >>> below), but I can .dump it to a text file and
> then .read it into
> >>> another
> >>> database ok.
> >>>
> >>> It seems to me that I'm screwing up an index
> or something.  Are indexes
> >>> stored at the end of the database file? 
> All I can think of is that my
> >>> compression/encryption routines are messing
> something up and I'm trying
> > to
> >>> figure out 'where' to look.
> >>>
> >>> I guess the real question is, what would I
> have to do to make an
> >>> integrity_check fail, but still let a dump
> work correctly?
> >>>
> >>> Many thanks for any advice on tracking down
> this ugliness.
> >>>
> >>>
> >>
> >> SNIP
> >>
> >> Personally, I'd refactor the code to allow me to
> verify the operation of
> >> the compression/encryption routines independently
> of the database
> >> operation.  How are you injecting the
> 

Re: [sqlite] GROUPY BY alias backward incompatibility

2009-05-28 Thread Virgilio Alexandre Fornazin
Curious... even though it does not make sense, it also passes on Oracle 11g
(SQL Server actively refused to run it)

SQL Server 2008:

SELECT (SELECT COUNT(*) FROM T t_inner GROUP BY t_outer.c) FROM T t_outer
-- Msg 164, Level 15, State 1, Line 1
-- Each GROUP BY expression must contain at least one column that is not an
outer reference.

SELECT (SELECT COUNT(*) FROM T t_inner GROUP BY t_inner.c) FROM T t_outer
---
(0 row(s) affected)


Oracle 11g:

SQL> create table t (c int);
Table created

SQL> select (select count(*) from t t_inner group by t_inner.c) from t
t_outer;
No rows selected

SQL> select (select count(*) from t t_inner group by t_outer.c) from t
t_outer;
No rows selected


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Ralf Junker
Sent: quinta-feira, 28 de maio de 2009 11:30
To: General Discussion of SQLite Database
Subject: Re: [sqlite] GROUPY BY alias backward incompatibility

At 15:37 28.05.2009, D. Richard Hipp wrote:

>Have you tried these two queries on other SQL database engines besides  
>SQLite?  What do PostgreSQL and MySQL make of them?

MySQL (5.0.21) reports no errors on either of the two queries:

  select
(select count(*) from t t_inner
 group by t_outer.c) -- t_outer !!!
  from t t_outer;

  select
(select count(*) from t t_inner
 group by t_inner.c) -- t_inner !!!
  from t t_outer;

I do not have access to PostgreSQL right now.

The SQLite help [1] says: "The expressions in the GROUP BY clause do not
have to be expressions that appear in the result." Reading this, I'd expect
that both queries should run - even if the 1st one does not make much sense.
Opinions?

Ralf

[1] http://www.sqlite.org/lang_select.html  

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] GROUPY BY alias backward incompatibility

2009-05-28 Thread Ralf Junker
At 15:37 28.05.2009, D. Richard Hipp wrote:

>Have you tried these two queries on other SQL database engines besides  
>SQLite?  What do PostgreSQL and MySQL make of them?

MySQL (5.0.21) reports no errors on either of the two queries:

  select
(select count(*) from t t_inner
 group by t_outer.c) -- t_outer !!!
  from t t_outer;

  select
(select count(*) from t t_inner
 group by t_inner.c) -- t_inner !!!
  from t t_outer;

I do not have access to PostgreSQL right now.

The SQLite help [1] says: "The expressions in the GROUP BY clause do not have 
to be expressions that appear in the result." Reading this, I'd expect that 
both queries should run - even if the 1st one does not make much sense. 
Opinions?

Ralf

[1] http://www.sqlite.org/lang_select.html  

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite3 question

2009-05-28 Thread John Machin
On 28/05/2009 10:53 PM, Igor Tandetnik wrote:
> "Oza, Hiral_Dineshbhai"
>  wrote in
> message
> news:24ea477c0c5854409ba742169a5d71c406bd4...@mailhyd2.hyd.deshaw.com
>> Can you please let me know meaning of 'Cell' in Btrees used in
>> sqlite3.
> 
> Can you point to the text where you saw sqlite3, B-trees and the word 
> "Cell" mentioned together?

Possibly here: http://www.sqlite.org/fileformat.html


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] GROUPY BY alias backward incompatibility

2009-05-28 Thread D. Richard Hipp

On May 28, 2009, at 7:01 AM, Ralf Junker wrote:
>
>  drop table if exists t;
>  create table t (c);
>  select
>(select count() from t  as t_inner
> group by t_outer.c)
>  from t as t_outer;
>
> This behaviour changed in version 3.5.4. From then on, SQLite issues  
> an "SQL error near line 4: no such column: t_outer.c". This also  
> shows in the most recent version (3.6.14).

I missed the space in between "t" and "t_inner" in your first post,  
which is why I did not understand the query.  I edited above to insert  
the AS keyword for readability.

But the query still makes no sense to me.  How can you GROUP BY  
something that is not part of the query?  The t_outer.c value is a  
constant within the scope of the inner query, so the GROUP BY is  
meaningless.

If this worked in 3.5.3 and earlier, I guess that is a bug in 3.5.3  
and earlier that was fixed in 3.5.4.
>
> Questions:
>
> * Does the ORDER BY change also apply to GROUP BY?

GROUP BY and ORDER BY are mostly the same thing, just applied at  
different times during processing.  So most of the code for the two is  
in common.  Changes to one tend to affect the other.  But I am still  
not seeing a "change" here - unless you call better detection of an  
error condition a change.
>
>
> * Are there any test cases for the new behaviour?
>  I failed to find any in 3.6.14 test suite.

There are test cases for everything.  But they can often take a good  
bit of effort to locate.
>
>
> * Post 3.5.3, the query works fine if I GROUP BY the inner table:
>
>  select
>(select count() from t as t_inner
> group by t_inner.c)
>  from t as t_outer;
>
>  Is this the intended behaviour and should users update
>  their SQL accordingly?

This query makes sense because now the GROUP BY is against a column in  
the table that is being queried.  And so one would expect this to work.

Have you tried these two queries on other SQL database engines besides  
SQLite?  What do PostgreSQL and MySQL make of them?

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] GROUPY BY alias backward incompatibility

2009-05-28 Thread Ralf Junker
At 13:27 28.05.2009, D. Richard Hipp wrote:

>On May 28, 2009, at 7:01 AM, Ralf Junker wrote:
>
>> Hello!
>>
>> Up to version 3.5.3, SQLite would happily execute this SQL without  
>> complaints:
>>
>>  drop table if exists t;
>>
>>  create table t (c);
>>
>>  select
>>(select count() from t t_inner
>> group by t_outer.c)
>>  from t t_outer;
>>
>
>This query does not make any sense.  What are you trying to do?

The above query is just a shortened version to demonstrate the backward 
incompatibility. The original is far longer and calculates counts and average 
counts:

DROP TABLE IF EXISTS RemSymptoms;
DROP TABLE IF EXISTS SymCategory;

CREATE TABLE RemSymptoms (SymId);
CREATE TABLE SymCategory (CatID, SymId);

SELECT SymCategory.CatID, round(1000*COUNT(*) /
  (SELECT COUNT(*) 
 FROM SymCategory TempSymCategory 
 WHERE TempSymCategory.CatId=SymCategory.CatID 
 GROUP BY SymCategory.CatID))
 AS CatAvgRemCount, 
  (SELECT COUNT(*) 
 FROM SymCategory TempSymCategory 
 WHERE TempSymCategory.CatId=SymCategory.CatID 
 GROUP BY SymCategory.CatID) 
 AS CatSymCount 
  FROM RemSymptoms 
INNER JOIN SymCategory ON RemSymptoms.SymId=SymCategory.SymId
  GROUP BY SymCategory.CatID;

A user found that this query no longer works after upgrading to SQLite 3.5.4 or 
later.

Ralf 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite3 question

2009-05-28 Thread Igor Tandetnik
"Oza, Hiral_Dineshbhai"
 wrote in
message
news:24ea477c0c5854409ba742169a5d71c406bd4...@mailhyd2.hyd.deshaw.com
> Can you please let me know meaning of 'Cell' in Btrees used in
> sqlite3.

Can you point to the text where you saw sqlite3, B-trees and the word 
"Cell" mentioned together?

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite3 question

2009-05-28 Thread D. Richard Hipp

On May 28, 2009, at 8:43 AM, Oza, Hiral_Dineshbhai wrote:
>
> Can you please let me know meaning of 'Cell' in Btrees used in  
> sqlite3.

See line 43 of the btreeInt.h source file for the definition.  You  
will want to read the previous 42 lines of that same file for context.

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQL question - not sure if it's possible at all

2009-05-28 Thread Dennis Volodomanov
Igor Tandetnik wrote:
> Well, there's no way to tell how records with the same Date are going to 
> end up ordered in the original query. If you impose some deterministic 
> order there, e.g. with
>
> ORDER BY Date, Id
>
> then you can do something like this:
>
> select count(*)
> from Data d, (select Date, ID from Data where ID=?) myRow
> where PlotOnGraph=1 and d.Date <= myRow.Date and
>   (d.Date < myRow.Date or d.ID < myRow.ID);
>
>   

Ok, I'll investigate if I can do anything having that as a base - thank 
you. Unfortunately the results are sorted by the application (so, the 
grid control sorts them by date, but it doesn't take the ID into 
account, so it's impossible to predict what the order is, as compared to 
the DB's sorted order). A bit of a mess...

   Dennis

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLite3 question

2009-05-28 Thread Oza, Hiral_Dineshbhai
Hi,

I am new to this list, so I am not sure whether this is the right forum
to get this information...

Can you please let me know meaning of 'Cell' in Btrees used in sqlite3.

Thank you.
-Hiral
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQL question - not sure if it's possible at all

2009-05-28 Thread Igor Tandetnik
"Dennis Volodomanov" 
wrote in message news:4a1e80dd.9040...@psunrise.com
> Igor Tandetnik wrote:
>>> and let's say, "Data" has an "ID" field (primary index).
>>>
>>> Is it possible to find out the *position* in the returned result set
>>> of a Data with a given ID?
>>>
>>
>> select count(*) from Data
>> where PlotOnGraph=1
>> and Date < (select Date from Data where ID=?);
>>
>>
> This works well, until it hits a non-unique date (which happens often
> in this application). Is there a way to make it work with non-unique
> dates as well?

Well, there's no way to tell how records with the same Date are going to 
end up ordered in the original query. If you impose some deterministic 
order there, e.g. with

ORDER BY Date, Id

then you can do something like this:

select count(*)
from Data d, (select Date, ID from Data where ID=?) myRow
where PlotOnGraph=1 and d.Date <= myRow.Date and
  (d.Date < myRow.Date or d.ID < myRow.ID);

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQL question - not sure if it's possible at all

2009-05-28 Thread Dennis Volodomanov
Igor Tandetnik wrote:
>> and let's say, "Data" has an "ID" field (primary index).
>>
>> Is it possible to find out the *position* in the returned result set
>> of a Data with a given ID?
>> 
>
> select count(*) from Data
> where PlotOnGraph=1
> and Date < (select Date from Data where ID=?);
>
>   
This works well, until it hits a non-unique date (which happens often in 
this application). Is there a way to make it work with non-unique dates 
as well?

Thank you!

   Dennis

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQL question - not sure if it's possible at all

2009-05-28 Thread Dennis Volodomanov
Hello all,

I'm not sure if this can be done in SQL, but if it can, I'd appreciate 
your help.

Let's say I've got this query:

SELECT * FROM (SELECT * FROM Data WHERE PlotOnGraph=1 ORDER BY Date ASC) 
LIMIT ?2 OFFSET ?1

and let's say, "Data" has an "ID" field (primary index).

Is it possible to find out the *position* in the returned result set of 
a Data with a given ID? So, let's say the returned result was (just IDs):

1   2   3   4   5   6   9   7   11   14   12   13   10

Can I find out that ID=9 is number 7 (or 6 if zero-based, doesn't 
matter) using only SQL?

Thanks in advance!

   Dennis

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] GROUPY BY alias backward incompatibility

2009-05-28 Thread D. Richard Hipp

On May 28, 2009, at 7:01 AM, Ralf Junker wrote:

> Hello!
>
> Up to version 3.5.3, SQLite would happily execute this SQL without  
> complaints:
>
>  drop table if exists t;
>
>  create table t (c);
>
>  select
>(select count() from t t_inner
> group by t_outer.c)
>  from t t_outer;
>

This query does not make any sense.  What are you trying to do?


D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] GROUPY BY alias backward incompatibility

2009-05-28 Thread Ralf Junker
Hello!

Up to version 3.5.3, SQLite would happily execute this SQL without complaints:

  drop table if exists t;

  create table t (c);

  select 
(select count() from t t_inner
 group by t_outer.c)
  from t t_outer;

This behaviour changed in version 3.5.4. From then on, SQLite issues an "SQL 
error near line 4: no such column: t_outer.c". This also shows in the most 
recent version (3.6.14).

I searched both the change log and the timeline for 3.5.4 but could not find 
any explicit mention of GROUP BY.

I did note, however, that http://www.sqlite.org/releaselog/3_5_4.html talks 
about bringing "the processing of ORDER BY into compliance with the SQL 
standard".

Questions:

* Does the ORDER BY change also apply to GROUP BY?

* Are there any test cases for the new behaviour?
  I failed to find any in 3.6.14 test suite.

* Post 3.5.3, the query works fine if I GROUP BY the inner table:

  select 
(select count() from t t_inner
 group by t_inner.c)
  from t t_outer;

  Is this the intended behaviour and should users update 
  their SQL accordingly?

Thanks for any answers,

Ralf

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Corrupted database with Atomic write sector ?

2009-05-28 Thread SuperCommit

Hi,

We have performance issues with SQLite running on our file system (remote FS
using Atomic Write Sector).

In fact we are limited in the number of writes per second allowed by our
file system; let’s say we have 10 writes/s.

After a series of benchmarks the log file (journal) seems to be the
bottleneck: SQLite writes to the log file many times for a simple
transaction (insert), and this consumes most of the writes/s allowed.

One solution consists of using the journal only in memory (PRAGMA journal_mode
= MEMORY). 

We can continue to use the rollback system in nominal operation, but on
application crash or power failure the SQLite recovery can’t be used, so we
plan to implement our own recovery system based on SQL queries.

And the question is: on application crash or power failure with an Atomic
write sector FS, does anybody know if the SQLite database file will
be corrupted and will need some specific SQLite recovery?

Kind Regards.

-- 
View this message in context: 
http://www.nabble.com/Corrupted-database-with-Atomic-write-sector---tp23757941p23757941.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-28 Thread Marcus Grimm
just for anybody who is interested:

I translated Jim's function into Windows code and added
a page of 1024 bytes that will be written, instead of a single byte.
On my Win-XP system I got 55 TPS, much faster than sqlite
seems to write a page but that might be related to the
additional overhead sqlite needs to do.

This brings me to a nice-to-have feature request:
How about adding a similar test function to the sqlite API?
This might use the vfs to write pages and give some feedback
on the performance of the system sqlite runs on.
It might also detect problems with the commit function...
Just an idea...
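
For illustration, a rough sketch of such a probe written against the
public VFS interface (hypothetical code, untested; error handling mostly
omitted). It times xWrite+xSync round trips through the default VFS,
which is roughly what each COMMIT costs at the OS level:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <sqlite3.h>

#define PROBE_WRITES 1000

int ProbeCommitRate(void)
{
    sqlite3_vfs  *vfs = sqlite3_vfs_find(0);    /* the default VFS */
    sqlite3_file *fd  = malloc(vfs->szOsFile);  /* VFS-specific file object */
    unsigned char page[1024];
    /* A NULL filename plus DELETEONCLOSE asks the VFS to invent a
       scratch temp file for us. */
    int flags = SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
                SQLITE_OPEN_DELETEONCLOSE | SQLITE_OPEN_TEMP_DB;
    int outFlags, i;
    time_t start, elap;

    memset(page, 0, sizeof page);
    if (fd == 0 || vfs->xOpen(vfs, 0, fd, flags, &outFlags) != SQLITE_OK) {
        fprintf(stderr, "xOpen failed\n");
        return 1;
    }
    start = time(NULL);
    for (i = 0; i < PROBE_WRITES; i++) {
        fd->pMethods->xWrite(fd, page, sizeof page, 0);
        fd->pMethods->xSync(fd, SQLITE_SYNC_NORMAL);
    }
    elap = time(NULL) - start;
    fd->pMethods->xClose(fd);
    free(fd);
    fprintf(stderr, "%d synced page writes in %d s (%.1f TPS)\n",
            PROBE_WRITES, (int)elap,
            PROBE_WRITES * 1.0 / (elap ? (int)elap : 1));
    return 0;
}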

Anyway, here is my win32 version of Jim's test function:

---
#include <stdio.h>
#include <stdlib.h>
#include <io.h>
#include <fcntl.h>
#include <time.h>

#define TESTWRITES  1000

int TestDisk ()
{
int fd;
int n;
int loops;
unsigned char   Page[1024];
time_t elap;
time_t start;

if ((fd=_open("C:\\testfile.db", O_RDWR+O_CREAT, 0777)) == -1)
{
  fprintf(stderr, "Error opening file");
  exit(1);
}

start = time(NULL);
for(loops=0; loops<TESTWRITES; loops++)
{
  if (_lseek(fd, 0, SEEK_SET) == -1) {
fprintf(stderr,"Error seeking file");
exit(1);
  }

  n = _write(fd, Page, 1024);
  if (n != 1024) {
fprintf(stderr,"Error writing file");
exit(1);
  }
  _commit(fd);
}
elap = time(NULL)-start;
fprintf(stderr,"Time: %d seconds; TPS=%f\n", (int)elap,
TESTWRITES*1.0/((int)elap));

return(0);
}
-

Jim Wilcoxson wrote:
> I'm running on Linux with ext3 and just wrote a Python test program to
> insert rows into a table with a single column, no indexing, and doing
> a commit after each insert.  When I first ran it, I got around 440
> inserts per second, which is clearly impossible.  This is a 7200rpm
> drive, so even if I could write a row on every revolution, the maximum
> insert rate would be 120 per second.  I tried adding "pragma
> synchronous=normal", and it sped up to 520 TPS.  With synchronous=full
> it slowed back to 420-460, so this must be the Python default.  Adding
> synchronous=off increased the rate to over 6000 TPS -- basically
> writing to memory instead of disk.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Foreign Keys and Indexes

2009-05-28 Thread s . breitholz
> s.breith...@staubli.com wrote:
> > So when I have a query over several tables normally the primary key
> > is set as index when the table is joined.
> 
> Only if you actually join on primary key column(s). Doing so is common, 
> but not mandatory.
> 
> > If I need to add a where
> > clause
> > to a field of the joined table no single index on that field can be
> > used.
> 
> SQLite may choose to join in a "reverse" order. Consider:
> 
> select * from A join B on A.b_id = B.id
> where B.x = 42;
> 
> One way to satisfy this query would be to scan all rows in A, for each 
> such row look up matching rows in B using the primary key (assuming B.id 
> is in fact its primary key), and look for those where B.x=42 (an index 
> on B(id, x) could be useful here). Alternatively, one could scan all 
> rows in B looking for those with B.x=42 (possibly using an index on 
> B(x) ), and for each such row look up a matching row in A (possibly 
> using an index on A(b_id) ). The SQLite engine decides which plan is better, 
> depending on which indexes exist.

Thank you for the perfect explanation Igor!
Perhaps some information like that could be added to the documentation.
Especially that SQLite decides which plan is the best on its own.

> 
> Igor Tandetnik 





Best regards / Mit freundlichen Grüssen

Stefan Breitholz

---
Staeubli GmbH - Theodor-Schmidt-Str. 19
DE - 95448 Bayreuth
Phone: +49(0)921/883-126 Fax: +49(0)921/883-58126
mailto:s.breith...@staubli.com
http://www.staubli.com

Registered under HRB 175 - Local Court Bayreuth - Managing Director: Karl 
Kirschner
---


This e-mail and any attachment (the 'message') are confidential and privileged 
and intended solely for the person or the entity to which it is adressed. If 
you have received it in error, please advise the sender by return e-mail and 
delete it immediately. Any use not in accordance with its purpose, any 
dissemination or reproduction, either whole or partial, by entities other than 
the intended recipient is strictly prohibited.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] add column creating null columns even with default?

2009-05-28 Thread Damien Elmes
Hi,

My application issues an 'add column' statement to an sqlite database, like:

alter table cardModels add column allowEmptyAnswer boolean not null default 1
alter table cardModels add column typeAnswer text not null default

Most of the time this works, but sometimes users send me a database
which is in an odd state:

mobile% sqlite ~/FilipsDeck.anki
SQLite version 3.6.12
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> update cardModels set allowEmptyAnswer = 0;
SQL error: cardModels.typeAnswer may not be NULL
sqlite> update cardModels set typeAnswer = "";
SQL error: cardModels.allowEmptyAnswer may not be NULL
sqlite> update cardModels set typeAnswer = "", allowEmptyAnswer = 0;
sqlite>

So the problem is fixable, but I'm wondering how it happened in the
first place. Older versions of my app used older versions of sqlite -
is this a known issue that has been fixed in a more recent release?

Cheers,

Damien
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQlite performance on multi process env

2009-05-28 Thread Sylvain Pointeau
the hard disk is shared so it is a critical resource, and 100
processes don't seem realistic on a single processor, dual core or not.

so I can understand your results; I even find them not too bad ...

Cheers,
Sylvain

On Thu, May 28, 2009 at 12:38 AM, zhrahman  wrote:

>
> I am doing a simple test on sqlite where in step one, I just write 10,000
> records on a simple database having three fields. In step 2, after the
> write
> is done,  I want 100 processes reading from the database and doing simple
> SQLs, no update on the database just simple select read operations. I am
> getting horrible performance when I increase the number of processes. Now
> each process only opens the database connection in read only mode. Still I
> am guessing some lock is placed on the database and that's why even the read
> operation is slowing down drastically with the increase of the number of
> processes. Any idea what I need to do to boost performance? For 100
> processes doing reads it takes like 10 seconds to do 1 SQL by each
> process. If I reduce the number of processes from 1000 to 1 it only
> finishes
> in a second
> --
> View this message in context:
> http://www.nabble.com/SQlite-performance-on-multi-process-env-tp23752056p23752056.html
> Sent from the SQLite mailing list archive at Nabble.com.
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users